
Should you Mine for Litecoin in the Amazon?

By now everyone knows what bitcoin is. Almost as many people know you can mine bitcoin with your computer.

Like any good mining rush, some early adopters made it big and everyone else has been chasing the ghosts of a fortune.

Infinite Scale

If you could mine a crypto currency in the cloud for less than the cost of renting those servers, you could scale up nearly instantly and make a similarly sized pile of money.

Using the cloud for crypto-currency mining has long been the dream of miners, but it never really works. Forget that; let's throw caution to the wind and see for ourselves. A few factors led me to believe it was worth another experiment:

  1. Litecoin is based on scrypt, which means that, unlike Bitcoin, purpose-built mining hardware had not yet hit the market and is harder to make
  2. Google, Amazon, and Microsoft are in a race to the bottom in cloud pricing
  3. The most competitive tier is the lower-powered virtual machines, which Amazon is even willing to give away for free
  4. The low-powered tiers have processing power but cannot leverage GPU advantages, meaning Litecoin is a good fit for them.


Once the CUDA machine was up and running with CudaMiner, it was time to benchmark it:
Then it was time to let it go:

It wasn’t always super reliable:



When discussing crypto-currency mining, we nearly always measure effectiveness against the current price of the coin. Prices fluctuate, though, so if you believed prices would rise in the future, you might be willing to invest more in mining now. I believe that is why mining crypto-currencies has remained popular.

Looking at how slow these miners were, though, you would need to expect orders-of-magnitude increases in price for it to be worth it.

Step-by-step AWS CUDA Litecoin Mining

If you're so inclined, here's a snap of the shell history it took to get it running:

Does it make sense?


If we're getting 260 khash/s and AWS charges $0.65 per hour for a GPU instance, plugging those numbers into a current-value calculator gives roughly $0.001 per hour of revenue.
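To make that concrete, here's a back-of-the-envelope sketch. The revenue rate is just backed out of the ~$0.001/hour figure above, not a real market quote; actual returns depend on difficulty and coin price.

```javascript
// Rough cloud-mining profitability check. The dollars-per-khash/s-per-hour
// rate is backed out of the ~$0.001/hour estimate; it is not a market figure.
function miningProfitPerHour(khashPerSec, dollarsPerKhashSecHour, costPerHour) {
  var revenue = khashPerSec * dollarsPerKhashSecHour;
  return revenue - costPerHour;
}

// 260 khash/s on a $0.65/hour GPU instance:
var profit = miningProfitPerHour(260, 0.001 / 260, 0.65);
// Revenue would need to grow roughly 0.65 / 0.001 = 650x just to break even.
```

Even before difficulty growth, the instance loses about 65 cents for every tenth of a cent it earns.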


I also tried the free tier, as I had originally thought it could be a good avenue. It was more than an order of magnitude slower, meaning you would be pulling in less than $0.0001 per hour (and no cloud compute is even close to that cheap).

Exponential Growth

If you look at a graph of the difficulty of litecoin:

You'll immediately notice the insane growth in difficulty. Starting in May, Litecoin turned into an arms race that directly mimics what happened with Bitcoin.
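As a sketch of what that growth does to a fixed-hashrate miner: if difficulty compounds by a constant factor each week, revenue from the same hashrate decays exponentially. The 15%-per-week rate below is a made-up illustrative figure, not taken from the actual chart.

```javascript
// Illustrative only: revenue from a fixed hashrate when network difficulty
// grows by a constant factor each week. The 15%/week rate is hypothetical.
function revenueAfterWeeks(initialRevenuePerHour, weeklyGrowthRate, weeks) {
  return initialRevenuePerHour / Math.pow(1 + weeklyGrowthRate, weeks);
}

// At 15% weekly growth, $0.001/hour halves in about five weeks.
var later = revenueAfterWeeks(0.001, 0.15, 5);
```

So even a marginally profitable setup goes underwater quickly unless prices rise to match.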

TL;DR: It might have made sense in early spring this year to mine Litecoin with a CPU or desktop GPU, but this is no longer true. Barring huge decreases in difficulty and increases in price, this is unlikely to reverse.

The state of Proxmark

What is the Proxmark3?

The Proxmark3 is fascinating: it can read and write a wide range of RFID cards, both low-frequency (typically things like door access) and high-frequency (more advanced cards: transit, credit cards, etc.). It can also be frustrating, largely due to:

  1. Inconsistent support across OSes
  2. Unclear documentation about identifying RFID cards you find


Mac OS X Mavericks + Proxmark3 == sadpanda

TL;DR: don't try to work with the Proxmark3 in OS X; it has just enough support to keep you trying, without enough to actually help you succeed.

There are no official binaries, and no unofficial binaries, for OS X, so we must go to the source.

To compile, make sure you have everything listed in COMPILING.txt; some of these are more obvious than others.
For example, libreadline:

brew install readline

This, however, will not link, because a conflicting library is installed with Xcode. To hijack that, use

brew link readline --force

The ARM compiler is easy to install (according to this thread). That thread also advises you to use

brew install libusb libusb-compat --universal

for libusb.

Once you're pretty confident you have most of these dependencies installed, you can attempt to make the project. If it works, good for you. If not, you'll spend a bunch of time googling. You will almost certainly have to modify CXXFLAGS and QTLDLIBS in client/Makefile. There are a few recommendations as to what these should be; it's unclear which is best.

Hopefully, at this point, you can compile the application. It is run from the client directory via

./proxmark3 /dev/ttysXXX.usbXXXX

Unfortunately, if you’re like me, you won’t have any such devices.

To further compound matters, there is a decent chance that your Proxmark3 is running old firmware and needs to be updated. There was a pretty big shift in the firmware: it used to use libusb, but it now registers itself as a COM port. I think these changes were performance related, but the result is that older firmware can't use newer software and vice versa. Flashing from old to new requires a bit of a hybrid approach combined with witch magic.

It's all about the kexts

There is another issue looming that you might not realize: OS X is going to hijack your USB connection and not let you use it. The proxmark3 app will complain about not being able to claim the device.
Allegedly, you can use what is called a "codeless kext" to force OS X to ignore a device; that is what make install_kext hopes to achieve. It didn't work for me. I poked around for a bit: Apple has a concept of a "VendorSpecificDriver" that is meant to stop OS X from claiming the device before you can. kextutil will become your friend for debugging as you attempt to combine this "VendorSpecificDriver" code with the kext created by the Makefile, in a sweet Frankenstein attempt. As far as I can tell, this approach no longer works.

In researching this kext, you may see a bunch of claims that codeless kexts require signing. I don't think this is true: in both this story and my own experimentation, using the /System/Library/Extensions/ folder, you are able to load an unsigned kext. However, I was unable to actually get the kext working with my Proxmark, so maybe I was missing something else. I also tried a documented alternative approach of unloading the kext that Apple was using: kextunload -b (you'll notice CDC, the "old" style of communication for the Proxmark). No luck.

Stop paddling upstream

It was at this point that I decided to reassess my approach. It turns out that in this thread there are pre-compiled Windows binaries for the Proxmark3. Awesome.

There are even instructions included about updating from the old firmware, which involve lots of holding the Proxmark button down while interacting with the device from the computer. I had to flash the new COM-style bootloader, and also the "bootrom", "fullimage", and "OS", before the device would be properly recognized, even with the new driver.

Installing the drivers was again a bit of a pain, but not too bad.

What does this signal mean?

Once the hardware is set up and you're in action, you're likely to encounter another problem: how do I read this card and make any sense of it? Unfortunately, the answer doesn't seem to be simple. Some LF cards are labeled with things like "HID" or "Indala", which tells you right away what to use. There also seem to be a lot of cards that qualify as "em4x", particularly "em410". There's a good chance you're trying to read one of these three; if not, your best bet is to turn to the forums or look for any sort of labeling that can help you.

Here are a few of the commands I seemed to use most:

hw tune
hw version
lf read
data samples 5000
lf hid fskdemod


Be ready for a bit of flakiness: you might have to restart your computer occasionally, or re-plug the Proxmark (frequently). Once it's reading, though, it does so pretty consistently.


Cloning cards is a whole new beast. Many cards are not re-writable, or if they are, you can't use the standard cloning provided by the Proxmark software. This is a shame, because you will see references to "t55x7" cards; unfortunately, it doesn't seem possible to easily know which card type you have in your hand.


Generally, the Proxmark3 concept is great. I know how difficult it is to foster a good community that can work across the range of software and hardware necessary for a good Proxmark experience, so I applaud the effort. I hope the tools continue to improve.

I’d really like to see a bit more consistency around OS/driver support, and documentation to aid in identifying RFID cards. Hopefully, I can find time to figure some of this out and put in pull requests to the Proxmark3 repository and help the community.

Do you have experience with the Proxmark3? Does it match mine?

The Everyman Watch of 1938


Quick Background

This is different than things I usually post about. There have been some exciting developments on my other projects and I hope to post on those soon.

While cleaning out some old family items, I recently came across a few pocket watches. I was immediately drawn in by this piece: a Westclox Pocket Ben watch. I was intrigued in no small part because it is a dollar watch, a category of watch targeted at the average person. A slight personal fascination with mechanical watches helped too.

Mechanical Era

We often talk about the democratizing potential of new technologies. I don’t have much personal context on this beyond the information age of smartphones and the Internet. However, were I a betting man I’d wager this effort to democratize technological advances is far from new. I think mechanical watches are both a historical example of this and the pinnacle of their era.

Tear Down

The watch was not working when I received it; the second hand was broken off, and it seems it had not run for some time. It's not a particularly valuable watch in any condition. I timeboxed several hours yesterday to attempt to take it apart and understand how it works. Here's what I learned:

  • This watch was stamped “38” on the movement, signifying it was made in 1938.
  • There is no magic. The internals of the watch expose its secrets, gears and springs mostly.
  • Watches are assembled by hand, therefore they can be understood with eyes and manipulated with hands.
  • Be discerning: trust your hands; don't be forceful, but the pieces sometimes need caressing.
  • Westclox took a number of shortcuts to save money, particularly leaving the spring exposed and sometimes replacing gears with distributed pins that act like gear teeth.
  • Despite the relatively low cost and a few shortcuts, the work is honest: glue was avoided, oil was minimal, and tolerances were respected.
  • The balancer is attached to the back of the movement, which means the watch basically has to be assembled from the back forward. It would have been nice to learn this earlier than one step away from having it back together.


There is a small online community interested in watches like these (surprise, an online community). One particularly helpful source was two videos showing a very honest disassembly and reassembly of these watches; the videos capture the creator's thoughts and frustrations. He clearly has much more experience with watches than I do, and better-equipped tools.
The disassembly video:
The reassembly video:

I would be curious to know more about these watches from a historical perspective. I think it would also be particularly interesting to do an interview with people who worked on devices like this. Semi-specialized hand labor was a trademark of 20th century industry and is quickly fading. I imagine the story of watches like this will fade unless recorded soon.


I liked this page showing some of the various Westclox Pocket Ben watches. The watch I have is pretty clearly closest to the advertised 1933 watch, which is consistent with the stamped 1938 date.

Next up

I unfortunately didn’t finish reassembling the watch. I was one step away from having it back together when I realized that the balancer must be integrated much sooner. This resulted in me assembling in the opposite direction and unable to finish in the time I had allotted. I hope to someday have a chance to revisit this.

Restaurant Week DC!

I found out today that it is Restaurant Week in DC. I quickly found the official website, and equally quickly found out that it was hard to figure out the things I wanted to know, like where these restaurants are and what Yelp thinks of them. The only logical conclusion was to build a website that does exactly that. I ended up using my site to find a place to eat for dinner, and it was delicious 🙂

Currently, I've overrun my Yelp requests for the day; I'm going to work on getting the Yelp functionality reintegrated ASAP.

Check out the site at, what do you think?

Basic screenshot of the DC Restaurant Week 2012 application

Technical Details

Gathering the information

When it came to gathering information about all of the restaurants from the official site, at first I was a little unsure. Then I realized I could just use jQuery to gather all of the elements and create a JSON object that I could use directly in this page.

Below is the bulk of my parsing code; I was then able to JSON.stringify an array of these restaurant objects to easily copy the data.

  $('.formfont_black b').each(function (){
    var parentColumn = $(this).closest('td');
    var restaurant = {};
    restaurant.name = $(this).html();
    restaurant.url = $(this).closest('a').attr('href');
    var lineSplit = parentColumn.html().split('<br>');
    restaurant.address = lineSplit[1];
    restaurant.phone = lineSplit[2]; // second field name reconstructed; the original was garbled
  });

What database?

I thought about setting up a quick Rails application for the backend of this, but then realized that there was really limited value in a database, since this information is all static and there's not much of it. Therefore, I've dumped most of the content directly into the JavaScript files. If this were a more serious application, this could easily be adjusted.
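The "database" then amounts to an array literal in a script file. A minimal sketch of the idea, with a hypothetical sample entry and field names assumed for illustration:

```javascript
// Static "database": the scraped restaurants live as a literal in a JS file.
// The sample entry and field names here are hypothetical.
var restaurants = [
  { name: "Example Bistro", address: "1234 Main St NW, Washington, DC", url: "#" }
];

// Simple linear lookup; fine for a couple hundred static entries.
function findRestaurant(name) {
  for (var i = 0; i < restaurants.length; i++) {
    if (restaurants[i].name === name) { return restaurants[i]; }
  }
  return null;
}
```

At this scale a linear scan is plenty fast, and there is no server to deploy or maintain.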

Third Party Integration

Google – I quickly realized that the Google Maps geocoding API was going to severely throttle my ability to look up restaurants. It limits you to, I believe, ~11 queries per second. Therefore, in order to map all 250 restaurants, I geocoded them once and saved the results into a JSON map, the same way as the restaurant information.

I've pasted my hacked-together method to get all of the geocoded information; I was then able to JSON.stringify() the result.

      function geocode_address(map, geocoder, restaurant){
        geocoder.geocode({address: restaurant.address}, function(results, status){
          if (status == google.maps.GeocoderStatus.OK) {
            var marker = new google.maps.Marker({map: map, position: results[0].geometry.location});
            google.maps.event.addListener(marker, 'click', function(){
              yelpRequest(restaurant, marker);
            });
            goodAddresses[restaurant.address] = results[0].geometry.location;
          } else {
            // Likely over the rate limit: retry this address after a short delay
            setTimeout(function(){ geocode_address(map, geocoder, restaurant); }, 100);
          }
        });
      }
Yelp – this integration initially went pretty smoothly, other than the fact that the Yelp API does not like to let users authenticate directly, which is a serious problem when I'm running the application without a backend. Now I've run into the problem of hitting the ridiculously small daily query limit (100). I'm working on getting this upgraded.
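One way to stretch a 100-query daily cap is to memoize responses per restaurant on the client, so repeated marker clicks don't burn quota. This is a sketch; yelpFetch is a hypothetical stand-in for the real rate-limited API call:

```javascript
// Memoize Yelp lookups so repeated clicks on the same restaurant marker
// reuse the cached response instead of consuming more of the daily quota.
// yelpFetch is a hypothetical stand-in for the real API call.
var yelpCache = {};

function cachedYelpRequest(restaurant, yelpFetch) {
  if (yelpCache.hasOwnProperty(restaurant.name)) {
    return yelpCache[restaurant.name];
  }
  var result = yelpFetch(restaurant);
  yelpCache[restaurant.name] = result;
  return result;
}
```

A cache like this only helps within a session, of course; it doesn't solve the underlying quota problem, just stops wasting queries on duplicates.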