
Ideas 6: The Edge of the Web Edition

Perth has had its day in the sun, having held the last five Ideas events, but now it is Brisvegas’ turn. The theme: Edge of the Web, because we have two speakers giving you exclusive previews of their Edge of the Web presentations: Ash Donaldson presenting Designing to persuade: Shaping the User Experience, and yours truly blabbing on about Stuff They Never Taught You at Website School.

We will be at the Plough Inn, Southbank on October 21st 2009. So if you are in Brisbane, and have been meaning to get to an AWIA event, now is your chance!

Members are $45, non-members are $55. Bargain.

See you all then!

…and we still don’t have daylight saving

Yep, Software Engineering is dead

You know that feeling you get when something you’ve been taught to believe in gets discredited, and because your belief was tenuous at best, the walls come tumbling down around you, and then a huge weight is lifted off your shoulders?

Pascal just posted this on the 220 mailing list. Amen. It’s something that I’m pretty sure I’ve been articulating for a long time. Whenever someone has asked me why software is hard, I always use this analogy:

If you ask a Civil Engineer to build you a bridge, it is easy to spec out. You know how far the bridge has to span, what sort of foundations you need, and as a result you can make a recommendation about what sort of bridge you need. The Engineer can build you a little model – you can look at the model and say “Yes! That is a bridge. That will do nicely”. They can mathematically model the bridge to make sure it doesn’t fall down. They build the bridge, and if it allows things to cross from one bank to the other, you have a success.

Unless you are building “Hello World”, a Software Engineer’s life isn’t so simple. You have different platforms, users, stakeholders, contexts – it gets exponentially harder with every feature that gets added. I once did a unit at Uni called Formal Methods, which tried to mathematically model software. It was stupid. The code we modelled was, like, nine lines long, and required a 32-page proof (I didn’t even get close). Stupid.

Of course, academics have been trying to shoehorn software into engineering forever. In first year, they taught us UML, which I guess is similar to architectural drawings or flow diagrams or something. I’m sure UML works really well with the waterfall model of software design, which has strong ties to old-school, proper engineering. I couldn’t imagine having to go and update hundreds of UML documents every time a minor change was required. We were also taught in first year that the waterfall model is pants in the real world, which by association makes UML nothing more than a nice thought experiment. (I’m still bemused by the number of software firms that put it as a requirement for graduate Software Engineers – basically because coming up with job descriptions for inexperienced programmers is really hard.)

Sure, you can argue that testing is an engineering technique that we (should) use, but it is the exception to the rule. I guess the conclusion we need to come to is that software isn’t an engineering problem – it’s a people problem. (Some may say it’s a creative problem – that’s also true, but buy me a beer and I’ll explain that traditional engineering is too, so the argument doesn’t further my point.) This in itself is a problem, as (gross generalisation ahead) boffins who like coding tend not to deal with real people very well.

Further discussion on our internal list suggested that creating software products is the way to go. I think I want to agree with this – there are many examples of off-the-shelf products that are extremely popular: Microsoft Office, Adobe Photoshop etc. In these situations, the customer works within the workflow of the software, and that seems to work. So do we as developers need to convince our clients that the feature they want may not be needed? Do our clients actually know what they need? Of course, this view is not without its flaws either – users will generally be working against the software, rather than with it. Is working with a sub-optimal solution better than battling with requirements and budget overruns?

I can’t help but think that there is something we are missing. It would seem there is a disconnect between what our clients want and what we can provide. If you look at the classic project triangle, your client wants to minimise price and time, and maximise good (I hope my English teacher isn’t reading this), whereas we want to maximise all three. So the crucial “pick two” part flies out the window. Either we start sacrificing the good, re-negotiate the price, or try to stretch out the project to restore the balance – none of which makes for happy clients.

Well, how about adding fat to the quote? In theory, this is fine – if a client sees value in an “inflated” (but more likely a realistic) price, then everyone is happy, right? Well, not really – software development is much like homework assignments: you start out with plenty of time and the best intentions, and then end up pulling an all-nighter to get it finished – and you still only get a C at best. I suspect this is because it’s impossible to lock down the requirements of an abstract problem. This isn’t only because of the difficulty in describing what we don’t understand, but because we don’t even know what half of the problems are going to be.

And this is our quandary – how can we estimate unknowns? Not just “we haven’t seen this before, but it looks like X” unknowns, but “What the hell? How is that even possible?!” unknowns. Other areas of engineering encounter these problems occasionally – we get them all the time. So, the solution (he says, as if there is one) is to minimise the risks and/or consequences of these unknowns. Jobs that deal with people do this all the time. If you work in marketing, you can postulate all you like – you can’t be sure how a campaign will work until it does. Marketing is reactive.

When you make a change, you can’t be sure what will happen. Sure, you can put an ad in the Yellow Pages year after year because it has brought in, on average, Y leads per year – but there is no guarantee this year will be the same. It seems that the people-based sciences are happy with this, but quantitative-loving geeks don’t like it. Hell, binary is black and white, not grey.

So, perhaps the key is to treat software as a living, breathing thing. Agile programming and iterative development can help, but they are means to an end – they don’t work without communication and understanding between people. We need to break down the barriers between provider and client – the question is: is that even possible?

The first SchwaCMS goes live!

After the last announcement of MadPilot’s new CMS, I’m proud to follow it up with the announcement of the first site to use Schwa as its backend: Greenvale Mining NL.

Greenvale was designed by the ever-so-talented Adrianne from bird.STUDIOS, and was sliced by the latest addition to the twotwenty family: Niaal Holder from Speak.

We have a number of sites being launched over the next couple of weeks, so watch this space!

Dear clients…

Please watch:

Introducing meftos.com

I’ve been pretty busy lately, and haven’t had any time for some good old-fashioned hacking. I’ve also been copping some flak for letting my ruby-fu lapse (it seems a lot can happen in three months. Actually, a lot happens in three minutes), so I decided to clear a couple of hours last weekend to have a play with GitHub, Haml and Sass, and just to generally get friendly with Ruby again.

I recently read a couple of articles about the doom and gloom around URL shorteners, and how if a couple of the big ones collapsed, the entire intergoop would fall on its face. Whilst that is a bit of an exaggeration, there is some food for thought in that statement. I was also reading about the collapse of Ma.gnolia (I know – old news. Sue me). Many an innocent bystander lost months or years of bookmarking just because one site went down. Whilst I’m a fan of the cloud, I’m also a bit of a control freak, so this was a little scary.

I’ve been using del.icio.us for a while, but only for the bookmarking facilities, not the social part. And even though Yahoo probably won’t go broke any time soon, I was wondering what would happen if they decided to close down the big bookmarker in the sky. So meftos.com was born.

meftos.com is a personal bookmarker and URL shortener built in Rails. It only has one user (you), and you host it yourself. From a URL-shortening point of view, there is no single point of failure – sure, if a number of individuals remove their servers, you will have some broken links, but that is far less impact than one mega site bombing.

If you have a server that can run Rails, you too can install your own copy – feel free to skin it and change its name. All of the source code is on GitHub, and I’m told you can nearly use it out of the box on Heroku. Play around, feel free to kick the tyres. There is still some stuff to do – namely search, better user management (there are no simple user management gems in Ruby any more – I’ll probably have to write my own) and some other bits and pieces, but it seems to work OK.

More importantly, I got a little glimpse again of why coding makes me happy. That should keep me warm on those cold winter nights…

Ideas 5: The accessibility edition

The Australian Web Industry Association, together with the Web Industry Professionals Association, presents Ideas 5. This year’s Ideas is focussing on Understanding WCAG 2.0 & Preparing Websites with Improved Accessibility. If you are a web developer and you aren’t thinking about accessibility, then you REALLY need to get your butt down to the Melbourne Hotel in Perth on 22 April 2009. Tickets are only $40 for AWIA members ($55 if you aren’t. In unrelated news, AWIA memberships are pretty cheap).

The two talks will be given by Roger Hudson and Andrew, both experts in their fields, so seriously, it’s a great opportunity to hear from people who know what they are talking about.

Go to the website and get more info. Go on. Seriously.

SchwaCMS goes live

For the past couple of months, I’ve been locked in my room, hacking away at a (probably not-so) top-secret project – which I have just pulled the big switch on. So ladies and gentlemen, let me introduce you to SchwaCMS. It’s a hosted CMS product that has taken ideas from many years of toiling away on other people’s CMSs.

There is a complete feature list on the website, but some of the geek highlights:

  • Upload progress bar
  • Proper use of HTTP status codes (including 410 – Gone)
  • Full UTF-8 support (check it out — the ə isn’t an HTML entity, it’s a real schwa)
  • Weight based keyword extraction from content
  • Full-text search, including a Did you mean? function
  • Integrated spell checking
  • Just-in-time scaling and caching of images
  • HTML is automatically cleaned on save
  • The ability to export the menu as an ordered list for inclusion in third-party apps, such as blogs or forums
  • A demo that is tied to the browser session – as soon as the user logs out, their demo site gets blown away

I’m pretty excited about this release. It’s built on a custom PHP framework, and is hosted on my very own hosting box – it almost feels like MadPilot is a real web company now! (It’s only taken 8 years :P)

Go check it out: http://www.schwacms.com

Using SSH to run remote commands from PHP. A cheat guide.

I’m working on a soon-to-be-released project that needed to run commands on a Linux server. Whilst it would be possible to use something like PHP’s exec function to run them, this would mean that the user Apache was running as would have to have permission to run the commands, which is less than cool. I could have messed around with sudo, but even that would open up some gaping holes, as all the other websites hosted on the same box could theoretically run the same commands.

As it turns out, there is a PECL project that allows you to log in to a server remotely using SSH, which would actually kill a number of birds with one stone:

  1. I can sandbox the commands that get run, by setting a special user that only has access to commands that are needed (using sudo)
  2. The web app would be able to talk to multiple servers, which wouldn’t have been possible with exec alone

The flow is simple: log in to the server – I’m using a username/password pair at the moment, but only because I haven’t been able to get public key exchange working on the server yet (interestingly, it works if I call the code from the command line) – run the command, then check the output and response. There was a slight issue here: ssh2_exec returns a pointer to a stream, which needs to be read. If there is no response (some programs complete without returning anything), then the process would block indefinitely. Also, if the program fails, it might not output anything to stdout, instead outputting text to stderr, AND you miss out on checking the return status code (which quite often gives you some interesting information about the status of the program).
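To make that a bit more concrete, here is a rough sketch of the flow using the PECL ssh2 functions – the host, credentials and command below are just placeholders:

<?php
// Minimal sketch of the flow described above, using the PECL ssh2 extension.
// The host, username, password and command are placeholders.
$connection = ssh2_connect('server.example.com', 22);

if (!$connection || !ssh2_auth_password($connection, 'deploy', 'secret')) {
    die('Could not connect or authenticate');
}

// Run the command, and grab its stdout and stderr streams
$stdout = ssh2_exec($connection, 'ls -l /var/www');
$stderr = ssh2_fetch_stream($stdout, SSH2_STREAM_STDERR);

// Block until the command finishes, then read whatever it produced.
// If the command produces no output at all, these reads can hang -
// which is exactly the problem the wrapper script below works around.
stream_set_blocking($stdout, true);
stream_set_blocking($stderr, true);

$output = stream_get_contents($stdout);
$errors = stream_get_contents($stderr);

fclose($stdout);
fclose($stderr);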

To get around this, I wrote a really simple bash script that runs the command on your behalf and wraps the stdout, stderr, pwd and result in an XML envelope, ready for parsing. Because you will always get the envelope returned (unless the process daemonises), you won’t get the blocking problem.

#!/bin/sh
tmp_stderr=`mktemp`
output=`$* 2>$tmp_stderr`
result=$?
error=`cat $tmp_stderr`
rm $tmp_stderr

echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
echo "<xmlsh>"

if [ -n "$output" ]
then
    echo "  <stdout>"
    echo "    <![CDATA["
    echo "$output"
    echo "    ]]>"
    echo "  </stdout>"
fi

if [ -n "$error" ]
then
    echo "  <stderr>"
    echo "    <![CDATA["
    echo "$error"
    echo "    ]]>"
    echo "  </stderr>"
fi

echo "  <meta>"
echo "    <pwd>$PWD</pwd>"
echo "    <return>$result</return>"
echo "  </meta>"
echo "</xmlsh>"

In a nutshell, when you call the script, it runs the program supplied as an argument, redirecting stderr to a temporary file and capturing stdout in a variable. It wraps these, along with the current working directory and return value, in XML and prints it out. Pretty simple, but it works.
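On the PHP side of the fence, the envelope is easy to pull apart with SimpleXML. Something like this rough sketch would do – the wrapper path and command are made-up examples, and $connection is the SSH connection from earlier:

<?php
// Rough sketch: run a command through the wrapper script and parse the envelope.
// The wrapper path and the command it runs are placeholders.
$stream = ssh2_exec($connection, '/usr/local/bin/xmlsh.sh ls -l /var/www');
stream_set_blocking($stream, true);
$envelope = stream_get_contents($stream);
fclose($stream);

// LIBXML_NOCDATA merges the CDATA sections into plain text nodes
$xml = simplexml_load_string($envelope, 'SimpleXMLElement', LIBXML_NOCDATA);

$stdout = isset($xml->stdout) ? trim((string) $xml->stdout) : '';
$stderr = isset($xml->stderr) ? trim((string) $xml->stderr) : '';
$status = (int) $xml->meta->{'return'};

if ($status !== 0) {
    // A non-zero exit code means the command failed - stderr should say why
    error_log("Remote command failed ($status): $stderr");
}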

Debugging JavaScript in Internet Explorer

As anyone who has ever received the dreaded Object doesn’t support this property or method error in IE can attest, debugging using everyone’s favourite browser is a right royal pain in the heiny. Using Firebug in Firefox really has spoilt us as frontend developers (hey, what am I talking about? These ARE BASIC TOOLS that every other development platform has had since Ada Lovelace was in small britches, but I digress), so what is a cross-platform developer to do?

Little-known fact outside of the .NET world: Visual Studio has a complete JavaScript debugger built in, which allows you to set breakpoints, add watches, mess around with variable values and, more importantly, gives you better error messages and actually highlights the line where things went wrong.

  1. Download and install Visual Studio Express Edition (which is free as in beer).
  2. Create a new Web Site – call it whatever you like, it’s just a placeholder
  3. Hit “Run” (or F5) on the blank project – a local server should start, and IE7 should load a blank page
  4. Change the URL to the page you want to debug. Once the page is loaded, the debugger is good to go – in fact, if your page has any errors, Visual Studio will get focus and politely tell you as much
  5. To add a breakpoint, flick back to Visual Studio Express, open the JavaScript file you wish to add the breakpoint to, and then refresh the browser – once execution hits the breakpoint, you will be able to step through the code.

Whilst it is still lacking a DOM browser (Firebug Lite might be able to help out with that), this takes some of the pain out of debugging JavaScript in IE, from the point of view that it is now actually possible.
