@madpilot makes

JoikuSpot premium edition released

A couple of days ago, Joiku released the premium edition of their WiFi access point software for Symbian phones: JoikuSpot. I’ve blogged about JoikuSpot before, and now for 15€ you can get the premium edition, which NATs mail, Skype, SSH, HTTPS and a number of other protocols. They also seem to have fixed the issue that stopped the EeePC from using DHCP. This is really awesome – I can finally leave that USB cable at home. Let’s see your beloved iPhone do that!

A recipe for server migration

As any website grows, we often find ourselves having to move servers to deal with capacity, get better prices, or any number of other reasons. This can be a right pain, but with a little planning and a few tricks, you can make the process somewhat smoother. In this quick howto, I’ll cover how to move a typical web setup: web server, DNS, mail and database. Of course, there is more than one way to skin a cat, but this method works quite well if you have a couple of days up your sleeve, and it has the advantage of only costing you a few minutes of downtime. In this example, we will pretend to move http://www.mycoolwebapp.com from the IP address 192.168.0.1 to the new IP address of 10.0.0.1 (a fake web host and fake, local IP addresses – substitute real values).

Step 1. Move your DNS

DNS can be the most painful step, as it can take up to 72 hours for a change to propagate from one primary DNS server to another. In a nutshell, DNS acts like a big phone book which tells your browser that http://www.mycoolwebapp.com belongs to the IP address 192.168.0.1. When you enter http://www.mycoolwebapp.com into your browser, it asks the operating system for the IP address. If the operating system has it cached, it returns it. If the cached value looks old (ie the TTL has expired), or it doesn’t know about the domain at all, it asks a parent DNS server (or root server) where it can find the updated records, and then fetches them from the relevant DNS server. The TTL is usually set pretty high – in the range of days – as name-to-IP address mappings generally don’t change much. But if you do change them, it can take a couple of days before all the servers around the world are updated, meaning some users won’t find your site during that time!
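If you have the dig tool handy, you can actually watch this whole chain of lookups happen, from the root servers all the way down to the authoritative answer:

# Follow the delegation chain from the root servers down
dig +trace www.mycoolwebapp.com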

Most users don’t have control of the Time-to-live (TTL) value on the root servers, so you need to ensure that your old primary DNS server and your new primary DNS server mirror each other. That way, regardless of whether a user gets old information or new information, they will still be directed to the same place.

Most users WILL have control of the TTL on their own servers, so we can set that to something small, which will make the switchover much easier. Make a note of the TTL on your old DNS server – you will need to wait that long after changing things before you can switch over to your new server. Set up your new DNS server to point to the old server (192.168.0.1) – ensure that all of the values are exactly the same. Now, on both servers, set the TTL to something very small, say 10 minutes (which is 600 seconds).
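You can check that both servers are answering with the same records (and confirm the TTL has dropped) by querying each one directly – the second column of each answer line is the TTL in seconds:

# Query each DNS server directly and compare the answers
dig @192.168.0.1 www.mycoolwebapp.com A
dig @10.0.0.1 www.mycoolwebapp.com A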

As it can take a couple of days for the root DNS changes to propagate, you may as well change the root DNS details now. This is usually done via the domain registrar’s control panel. Simply change the primary and secondary DNS server IPs from the old DNS servers to the new ones (192.168.0.1 to 10.0.0.1 in this example). Make sure you double check all the values, because if you make a mistake it can take days to rectify!

Step 2. Set up the new web server

Now that the DNS is set up and in the process of re-delegating, you can set up the website on the new server for final acceptance testing. This is usually the easiest part of the process. What I do here is deploy the current software to the new server, take a snapshot of the database (phpMyAdmin helps here if you are on MySQL, but each database system has a mechanism for backing it up) and copy over any uploaded files. Now, most shared hosts use virtual hosts, which means the server picks which pages to serve based on the domain name, not the IP address, and if your software relies on the domain name for functionality, it can be really hard to test everything without having the name-to-IP address mapping in place (remember, at this point http://www.mycoolwebapp.com is still pointing to 192.168.0.1).

Hosts file to the rescue! You might not know this, but all the major operating systems have a “hosts” file, which is checked before a DNS lookup is made – if the operating system finds the requested domain name in that file, it won’t even bother querying the DNS server. By associating the new server’s IP address with the domain name in this file, we can see exactly what is going on on the new server (just don’t forget to delete the entry when you are done!). Windows users can find the file at c:\windows\system32\drivers\etc\hosts, and Linux (and I’m pretty sure OS X) users can find it at /etc/hosts. There are usually examples in the file, so it’s best to follow those, but I know this entry works on Windows and Linux:

10.0.0.1    www.mycoolwebapp.com

You may need to restart your browser for the change to take effect. Now if you go to http://www.mycoolwebapp.com you will be taken to the new server, allowing you to set it up and check everything is working properly without affecting the currently live version.

Step 3. Set up mail

Email can be pretty painful to set up, and is one of those things that will get you in a lot of trouble if you stuff it up. First of all, you need to know all of the email addresses associated with the domain. If your hosting provider uses a web admin control panel like Plesk, this is usually pretty easy. Mirror all of the accounts on the new server, and make sure all of the quotas are equal to or greater than the current values. If you need to set up new passwords for the accounts, note them down, as you will need to notify each user of their new password. Don’t forget to check for things like aliases and forwards.
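While you are at it, it’s worth confirming which machine the rest of the world currently thinks handles mail for the domain, so there are no surprises on switchover day:

# List the MX records the world currently sees for the domain
dig +short mycoolwebapp.com MX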

Step 4. Check and double check

Make sure you note down any other bits and pieces that may have been set up, like cronjobs and other services, and make sure that things like contact forms actually send mail. There is a trick here for young players with email forms – the resulting email may go to the new server, not the old one, so if you test a contact form and no email appears, check the new server.

Step 5. The big move

The way I usually play this is to replace the website on the old server with a “We are down for maintenance” page. This has the advantage of ensuring no one posts any new information whilst the site is in flux, and also gives you a visual confirmation that things worked. If your server can use .htaccess files, this is pretty easy using a rewrite directive. If the framework you use has friendly URLs, there will probably already be a .htaccess file that tells the server to rewrite URLs to a single file – this is the simplest solution: just create a maintenance.html file and change the rewrite to look like this:

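# Serve real files (CSS, images, JS) as normal, send everything else to the maintenance page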
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ maintenance.html [QSA,L]

This allows you to still access CSS, images and JS files, so you can style the maintenance page, but any request for a file that doesn’t actually exist will get piped through to the maintenance.html file.

If .htaccess isn’t an option, or you have a lot of static files, you might have to do the old fashioned “move the current site root and replace it with a new one” trick (there are rewrite rules you could use, but this might just be easier). Create a new directory containing the maintenance file (you would probably have to call it index.html in this instance) and any associated images, CSS and JS files. Then move the current site root (for example public_html) to, say, public_html.old, and move the new directory into its place (ie public_html). When you view the site on the old server (you might need to remove the entry in the hosts file to see it) you should now see the maintenance message.
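In shell terms, the swap looks something like this (the directory names are just examples – use whatever matches your host):

# Build the maintenance-only docroot
mkdir maintenance_site
cp maintenance.html maintenance_site/index.html
cp -r css images js maintenance_site/

# Swap it in - the old site stays on disk in case you need to roll back
mv public_html public_html.old
mv maintenance_site public_html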

Next, re-sync the database and any newly uploaded files, and give the new server a final test (by putting the hosts file entry back in).
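For a MySQL-backed site, that final sync might look something like this – the database name, user and deploy account are placeholders:

# Snapshot the (now frozen) database on the old server and copy it over
mysqldump -u dbuser -p mycoolwebapp > snapshot.sql
scp snapshot.sql deploy@10.0.0.1:~/

# Copy across any files uploaded since the initial deploy
rsync -avz public_html/uploads/ deploy@10.0.0.1:/var/www/public_html/uploads/

# Then, on the new server, load the snapshot
mysql -u dbuser -p mycoolwebapp < snapshot.sql

Once you are happy everything is working, we can flick the DNS over.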

Step 6. Flicking the DNS

On both the old and new DNS servers, change the IP addresses of all the relevant entries from the old IP address to the new one. Within 10 minutes, everyone should be seeing the new server’s version of the site!
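You can watch the cutover happen from your own machine – once the last of that 10 minute TTL expires, the new address should come back:

# Repeat this until the new address shows up
dig +short www.mycoolwebapp.com

» 10.0.0.1

This also means that all of the email will be heading to the new server too, so…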

Step 7. Re-configure mail clients and sync up folders

If you configured new passwords for the mail accounts, each user will need to enter their new password into their mail client. I generally walk them through creating a NEW mail account in their client – the reason for this will be revealed in a second.

If they have been using POP in the past, then they will probably already have local copies of their email, so your job here is done. However, if they are using IMAP (more likely these days) you will need to sync up the two accounts. There are tools out there that can do it, but you will need to know the passwords on both servers, and they can be a bit hit and miss.
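For the record, imapsync is one such tool. A rough sketch, assuming you know both sets of credentials (the user name and passwords are obviously placeholders):

# Copy all folders and messages from the old IMAP server to the new one
imapsync --host1 192.168.0.1 --user1 bob --password1 oldsecret \
         --host2 10.0.0.1 --user2 bob --password2 newsecret

Thankfully, you can also use the user’s mail client to do the sync.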

Simply create a new account in their mail client that points at the new server, and change the old account to point at the IP address of the old server. If done correctly, the user should now be able to access their old email and folders as well as any new email on the new server. Most mail clients will actually let you drag folders across servers – do that and the email will be synced up! (It might take a while if they have a lot of archived mail.) After that is done, you can delete the old account and set the new account as the default.

Step 8. Shut down the old server

The final task is to shut down the old server. I usually leave it running for a week to be on the safe side, mainly to recover any email that a user forgot to move across (it ALWAYS happens).

Step 9. Go and get a drink

You deserve it – it’s probably been a big week!

I hope that this made sense, or at least acts as a resource when you try to explain to a client why it can take a week to move servers! Of course, these instructions are pretty general, so your mileage may vary. The golden rule: don’t rush, and double check everything!

Proposal: An open inter-conversation microblogging protocol

Spurred on by Gary’s discussion of the number of micro-blogging sites around, the “Is it distributed?” question made me wonder if we are going about this all wrong. Cameron Adams was right when he said there is only one social network, so why are we flicking between a large number of them? Why aren’t we running our own?

Beyond a number of small superficial differences, they all do the same thing – you add friends, post what you’re doing (usually in an arbitrary 140 characters or less) and read what others are doing. There really is no reason why this can’t be truly distributed, ie I can run my own micro-blogging site, and all my friends can run their own micro-blogging sites – all that is needed is some glue (a communication protocol) to bring it all together. The great thing is that we already have the systems to make this happen – get your buzz-word bingo cards out, people…

RESTful XML

The first part of this system is a RESTful API that allows friends to post information into your timeline, and you to post into theirs. Every time you post to your microblog, it iterates through your list of friends and forwards the message on to them. The same thing happens if you delete a post – it notifies all your friends to remove the post from their local databases. To ensure that random people can’t spam our feeds, we can use OAuth to give “friends” permission to send us information.
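Nothing here is specced yet, but the push could be as simple as an authenticated POST to each friend in turn – the /timeline endpoint, token and XML payload below are purely illustrative:

# Push a new post to one friend's microblog (repeat for each friend)
# Endpoint, token and payload are invented for the example
curl -X POST http://friend.example.com/timeline \
     -H "Authorization: OAuth oauth_token=..." \
     -d '<post><id>42</id><body>Having coffee in Mt Lawley</body></post>'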

Your own timeline

The reason your microblog needs to be notified of other people’s posts is so you can cache those posts on your own microblog, which gives you a Twitter-style public timeline. The advantage of this is that there is basically no database load to display YOUR feed – the only information in your database is the posts that you want to read!

Adding friends

So how can you add friends and allow others to follow you? This is actually pretty easy using OAuth – by adding your microblog to your friend’s authorised list, they know that you need to be notified of any add or delete command. This has the nice side effect that we can manage not only who we follow but who follows us – if you want to stop someone from following you, you just de-authorise them. So what happens when you add a new friend to your timeline? A simple GET request could pull down all of their existing posts, effectively syncing up the two databases – all future posts will be pushed as they happen (and vice-versa), so there is no expensive polling.
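Again, purely illustrative – the endpoint and parameter are invented – but the back-fill could be a single request:

# Pull down a new friend's existing posts in one hit; pushes take over from here
curl "http://newfriend.example.com/timeline.xml?since=2008-01-01"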

Other people’s timelines

If someone has a public timeline, this is a no-brainer. Each person’s microblog would just be publicly available, and others can simply read it. But what about private timelines? Enter OpenID. If each of your friends provides an OpenID URL, they would be able to log in to your microblog to read your private feed – no password required, but it’s still totally private.

Discovery services

Many Twitter users scour the public timeline waiting for people to post things that they are interested in. This is actually quite easy to implement on a distributed system – have a read-only super node that everyone posts to. Voila, instant public timeline. This also means that you can easily create “channels” – instead of only having one public timeline, you can have many, based on different topics.

Unlimited extensions

One of the value-adds for Pownce is the ability to share attachments and events. In reality, all it does is provide a link to a file on a remote service. If you wanted to add this function to your microblogging site, you could quite easily – as long as you post the link to others. This means you have complete control over what your microblog does, as long as it still talks the protocol.

Advantages

  1. The obvious one is that you aren’t at the mercy of servers doing a Twitter (ie being up and down like a yo-yo). If a friend’s server goes down you miss out on their posts, but no one else’s.
  2. You have control over your data – you don’t have to worry about a service disappearing overnight and you not being able to get at it. It’s all on your server.
  3. Distributed data – if your server dies and your hard drive explodes, your data can be rebuilt from the copies stored in your friends’ databases.

Disadvantages

  1. If someone’s site is down they may miss some updates, so you would need a method for re-syncing all friends’ posts from a certain date – no biggie.
  2. It does make completely removing your account difficult, as you can’t really ensure your friends are going to respond to delete commands correctly.

So what about people who don’t have their own server to run this on? This is the kicker – you can still have hosted versions of the system. This already works for blogs (I host my own, but some of my friends use systems like Blogger.com) and OpenID, which makes it much more accessible.

If there is some interest in this, I’m sure we can start drafting some specifications. I’d be interested in your thoughts.

88 Miles in the Business Review Weekly’s top 100 Australian Web 2.0 Applications

Business Review Weekly released its list of the top 100 Australian web 2.0 applications today, and 88 Miles came in at number 58!

There are some pretty cool apps on the list, including big players like Atlassian and of course our good friends Saasu.com, Scouta and Norg Media.

Rounding out the Perth list we have The Broth, Minit, Buzka and Gooruze.

Congratulations to everyone who made the list!

An online version of the list can be seen here.

For the safety of the swimmers…

For the past twelve months, I’ve been a house mate at the Silicon Beach House – the collaborative, shared office space in the Perth CBD. Having seen a number of arrivals and departures over the past year, it will soon see its final set of departures, as it closes and gets turned into a resort (OK, some other company is taking over the lease, but I started this metaphor, and I’m going to finish it, god damn it).

Unfortunately, due to a number of factors, it wasn’t viable to keep it open, so whilst the idea of having a shared working space isn’t dead, it’s been cryogenically frozen until another suitable venue can be found. So if you are in town, don’t try to drop in any more – you’ll freak out the new occupants. For those of you who want to find me, I’m back working from the home office, so if you are in Mt Lawley, give me a call and we can go for coffee :)

A stark realisation

There comes a moment in every career when you realise that there is a whole world outside of what you do. Sure, you don’t need three PhDs to figure out that the world of macrame is significantly different to Ruby on Rails hacking, but when was the last time you thought about process in, say, banking software development? Or had a look at what is going on in the world of operating system code? To an outsider, these are related industries – they both involve sitting in front of a grey box for hours on end, banging gibberish onto keys with embossed letters that are out of order – but to us, they are worlds apart.

A couple of days ago, something reminded me that I – and, I assume, a great number of other web developers – have forgotten our brethren who are more at home with linked lists than unordered lists, and as a result, we are significantly reducing our ability to find the right tool for the right job.

This revelation was born of the recent Twitter downtime. As always happens, hundreds of well-meaning developers tried to offer up solutions to fix Twitter’s yo-yo-like uptime graph – there was the usual Rails bashing, some bewilderment over MySQL replication, and a general consensus that it shouldn’t be this hard. But one article really stood out: Scaling a Microblogging Service. Go and have a read if you want to know why scaling Twitter is REALLY HARD (maybe impossible).

For those of you who don’t mind a spoiler, the crux is that Twitter is designed on a platform more suited to content management, when really it is a messaging system.

Now, go to any web developer on the planet and ask them to build you a Twitter clone, and I bet each design would be pretty similar. You would have a table for users, a table to hold friend references, and a table for messages, all linked via some sort of foreign key relationship. The reason is that for 90% of what we work on day to day, this makes the most sense. Generally, you have few authors, many readers, and those readers all get basically the same information.

Twitter is completely different. Everyone is a producer and everyone is a consumer. Every second there are hundreds, if not THOUSANDS, of writes from people leaving their 140 characters’ worth of thoughts. To make matters worse, every user gets a different view, so every page load would absolutely thrash the Twitter database servers.

Twitter is a MESSAGING SYSTEM – not a CMS. It’s closer to a mailing list than a blog. So why didn’t the developers make this observation straight away? Well, as web developers, this is foreign territory. If Nokia decided they were going to build Twitter, I bet things would look very different. Twitter and SMS are pretty closely related, and for someone dealing with short messages all day, that may have been the natural path to take.

You might counter this argument by saying that we are web developers – of course we think in terms of the web. We do what we know, just like everyone else. Correct. But maybe we should stop and think a little before we dive in and start coding up prototypes using our favourite RAD tools. Maybe we should look for metaphors in other areas before taking our hammer to unsuspecting screws, bolts and watermelons.

Maybe we should take some lessons from the user interface guys? They use metaphors all the time when they design. Without somebody realising that people like putting paper into manila folders, we would still be kicking around text terminals. Lucky for us, our metaphors don’t need to be that abstract. We only need to look outside our little bubbles at what developers from other industries are doing, and we too may see an obvious solution that we would have otherwise missed.

A multi-touch pad of my very own

I simply had to have one. As I previously posted, the ever-so-clever guy at http://ssandler.wordpress.com/MTmini/ posted instructions for making your own multi-touch (think Microsoft Surface or iPod Touch) pad. So what can any self-respecting geek do, other than build one? Here is what I did.

Being the impatient kind of guy that I am, I didn’t want to have to buy anything to complete the build (I had to wait until after work, and the shops were closed), so I did a little substitution with stuff you might find around the house. The ingredients:

  • 1 x old red wine glass box (although any decent sized box would do)
  • 1 x A4 picture frame
  • 1 x 6-year-old D-Link DCB-C300 webcam
  • 30cm of baking paper
  • 4cm of sticky tape
  • 1 blob of Bluetac
  • 1 x software bundle from http://ssandler.wordpress.com/MTmini/

I had to modify the webcam slightly, as its original configuration was too long to fit in the box very well (you want as big a gap between the pad and the camera as possible, which gives you maximum usable space), so I had to bend some pins to get it to lie flat. Next, I bluetac’d it to the bottom of the box. Then I took the cardboard backing off the picture frame, removed the glass, covered one side of the glass in the baking paper, fastened it with the sticky tape, and put the paper-clad glass back in the frame. Placing the frame on top of said box completes the build.

After installing the software and following the instructions (including the calibration instructions) it was done! Yep. That’s right. Done. 5 minutes work, really (not including bending the pins and digging out the drivers for the webcam). See the photos and short (badly shot) video as proof of this wondrous feat. This is SERIOUSLY COOL.

The webcam sans its inners

The picture frame

And the proof that it works:

Multi-touch screen for under $50

This is why I love the intarwebs. I have found a project for this weekend. (Go to http://ssandler.wordpress.com/MTmini/ to get the skinny.)

Working with branches in Subversion

I know that Git is the flavour of the month in version control circles at the moment, but I still use Subversion (SVN) for my day-to-day version control needs. And since it is still very popular, I think this quick tutorial still has its place. Today, I was asked by a client to show them how to branch an SVN repository so they could start making some major changes to their application without running the risk of breaking the release version.

The scenario works something like this: you have finally launched your application and everything is purring along nicely. You decide to start working on the next iteration, which has some major changes that WILL break things initially. You are half way through the changes when you get a call from an irate customer who can’t complete their transaction because of an obscure edge case bug that you missed. The dilemma is that your source base is in a state of flux, and you can’t release it, because in its current state, it doesn’t work. Wouldn’t it be great if you could have a maintenance version of your application to make the fix on? Enter branching.

Firstly, a bit of terminology. I’ve used the word “branch” a couple of times. The analogy is simple – you have a core line of code (the “trunk”) and you can have concurrent lines of code that “branch” off the trunk. One of those branches may well be our “stable” branch, meaning that the only changes that can occur on it are bug fixes – ie NO new features. As with any system, there is more than one way to fell a tree, but this is the system that I generally use.

So how do you do it? When I create a new SVN project, I like to add a trunk directory and a branches directory – the stable branch will eventually live inside branches (let’s pretend my SVN server is at svn.myserver.com):

# Create a new project - tedious stuff like locking down access omitted for clarity

svnadmin create new_project

# Now head over to your working directory, and check out the initial version

svn co svn://svn.myserver.com/new_project

» Checked out revision 0.

cd new_project

mkdir branches

mkdir trunk

# Now to add the new directories to the repository

svn add branches trunk

» A         branches

» A         trunk

svn commit -m "Adding initial directories"

» Adding         branches

» Adding         trunk

»

» Committed revision 1.

Now that we have our working copy set up and committed back to the server, you can start work on the trunk. Cut to the day before go-live. You are pretty happy with how the trunk is looking, and you would like to branch the code into stable. For this we use the copy command:

svn copy svn://svn.myserver.com/new_project/trunk svn://svn.myserver.com/new_project/branches/stable -m "Branching stable"

» Committed revision 10.

cd branches

svn update

» A    stable

» Updated to revision 10.

And we are done! Now you can continue on your merry way and change code in trunk without affecting your stable release. Once you are happy with the next version and you’re ready to cut a new stable release from your now feature-rich trunk, you can “merge” your changes from trunk into stable:

# Run this from inside your stable working copy, then commit the result
svn merge svn://svn.myserver.com/new_project/branches/stable svn://svn.myserver.com/new_project/trunk

» A    new_file_that_is_in_trunk.txt

svn commit -m "Merging trunk changes into stable"

And of course it works the other way – say you find that obscure edge case bug in stable and fix it there; you will want to merge the change back into trunk so you don’t introduce any “regression bugs” (bugs that you fix, then inadvertently re-introduce by changing code). All you need to do is flip the order of the two URLs (although if trunk has moved on a long way since you branched, you may be better off merging just the fix’s revision with svn merge -c REV):

# This time run it from inside your trunk working copy, then commit
svn merge svn://svn.myserver.com/new_project/trunk svn://svn.myserver.com/new_project/branches/stable

» M    fixed_file_from_stable.txt

svn commit -m "Merging edge case fix from stable"

There you go! Easy as pie – let’s just hope you don’t get too many conflicts that you need to resolve manually. It’s always a great idea to test your code again after a merge, just to be sure everything works as expected.
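If a merge does throw a conflict, SVN will flag the file for you. A rough sketch of the clean-up (the file name is just an example):

svn status

» C    unlucky_file.txt

# Edit the file to fix the conflict markers, then tell SVN you are done
svn resolved unlucky_file.txt

svn commit -m "Resolved merge conflict"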

BarCamp Perth 2.0 – We came, we saw, we caught bird flu

Put 80-odd geeks in a room and magic happens, which is exactly what happened on Saturday at BarCamp Perth 2.0. It was a fantastic turn out – we even got a couple of east coasters (thanks @marclehmann and @liako) joining the frivolities. And although I was running around like a blowfly with its head cut off, I still managed to catch a couple of great presentations, which could lead to some seriously cool ideas – which is all you can ask of a BarCamp.

But the biggest announcement of the day was WA’s very own web conference – Edge of the Web. After three awesome Web Awards over the last three years, it was a natural progression for us to push the envelope a little. Keep November 6 and 7 free – it’s going to be three types of awesome. We have international and national speakers lined up, and we are fairly good at throwing parties on this side of the Nullarbor. Oh, and we are running a poll where you can put your 3c worth in by picking our logo.

Having said that, I do have one gripe about our fair city. After the PTUB that followed BarCamp at the Royal, we moved on to @richardgiles‘ place for a cuppa tea and a scone. We realised we were out of brandy, so we went out to find a friendly establishment to purchase a night-cap or two. It was 10pm AND EVERYTHING WAS CLOSED. I seriously caught a cab out to South Perth to go through a drive-through. WTF. Anyway, enough bitching – it was a top day and night, and I’m not going to let our draconian liquor licensing laws spoil that.

Anyhoo, I’m off to nurse this cold that I and half of the Perth twitterati seem to have contracted.
