@madpilot makes

Using SSH to run remote commands from PHP. A cheat guide.

I’m working on a soon-to-be-released project that needs to run commands on a Linux server. Whilst it would be possible to use something like PHP’s exec() function, that would mean the user Apache runs as would need permission to run the commands, which is less than cool. I could have messed around with sudo, but even that would open up some gaping holes, as every other website hosted on the same box could theoretically run the same commands.

As it turns out, there is a PECL project (the ssh2 extension) that allows you to log in to a server remotely using SSH, which would actually kill a number of birds with one stone:

  1. I can sandbox the commands that get run, by setting a special user that only has access to commands that are needed (using sudo)
  2. The web app would be able to talk to multiple servers, which wouldn’t have been possible with exec alone

The flow is simple: log in to the server (I’m using a username/password pair at the moment, but only because I haven’t been able to get public key authentication working on the server yet – interestingly, it works if I call the code from the command line), run the command, then check the output and response. There was a slight issue here: ssh2_exec returns a pointer to a stream, which needs to be read. If there is no response (some programs complete without returning anything), the process would block indefinitely. Also, if the program fails, it might not output anything to stdout, instead writing to stderr, AND you miss out on checking the return status code (which quite often gives you some interesting information about the status of the program).
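The flow above can be sketched with the ssh2 extension’s functions – this is a minimal sketch, and the host, credentials and command are placeholders, not values from my project:

```php
<?php
// Sketch of the login → exec → read flow using the PECL ssh2 extension.
// Host, user, password and command below are hypothetical.
$connection = ssh2_connect('server.example.com', 22);
if (!$connection) {
    die("Connection failed\n");
}

if (!ssh2_auth_password($connection, 'deployuser', 'secret')) {
    die("Authentication failed\n");
}

$stream = ssh2_exec($connection, 'uptime');

// ssh2_exec returns a stream; switch it to blocking mode so that
// stream_get_contents waits for the command's output to arrive.
stream_set_blocking($stream, true);
$output = stream_get_contents($stream);

// stderr comes through a separate substream, which you have to
// fetch explicitly or you never see it.
$errorStream = ssh2_fetch_stream($stream, SSH2_STREAM_STDERR);
stream_set_blocking($errorStream, true);
$error = stream_get_contents($errorStream);

fclose($errorStream);
fclose($stream);

echo $output;
```

Note that even with both streams read, there is no way here to get the command’s exit status – which is part of what the wrapper script below solves.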

To get around this, I wrote this really simple shell script, which runs the command on your behalf and wraps the stdout, stderr, pwd and result in an XML envelope ready for parsing. Because you will always get the envelope back (unless the process daemonises), you won’t hit the blocking problem.

#!/bin/sh
# Capture stdout in a variable; redirect stderr to a temporary file
tmp_stderr=`mktemp`
output=`"$@" 2>"$tmp_stderr"`
result=$?
error=`cat "$tmp_stderr"`
rm "$tmp_stderr"

echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
echo "<xmlsh>"

if [ -n "$output" ]
then
    echo "  <stdout>"
    echo "    <![CDATA["
    echo "$output"
    echo "   ]]>"
    echo "  </stdout>"
fi

if [ -n "$error" ]
then
    echo "  <stderr>"
    echo "    <![CDATA["
    echo "$error"
    echo "   ]]>"
    echo "  </stderr>"
fi

echo "  <meta>"
echo "    <pwd>$PWD</pwd>"
echo "    <return>$result</return>"
echo "  </meta>"
echo "</xmlsh>"

In a nutshell, when you call the script, it runs the program supplied as its arguments, redirecting stderr out to a temporary file and capturing stdout in a variable. It then wraps those, along with the current working directory and return value, in XML and prints the lot out. Pretty simple, but it works.
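The capture pattern the script relies on can be seen in isolation – a minimal sketch, using a deliberately failing command as the example:

```shell
#!/bin/sh
# Same pattern as the wrapper: stdout into a variable, stderr into a
# temporary file, and the exit status saved immediately after the command.
tmp_stderr=`mktemp`
output=`ls /no/such/path 2>"$tmp_stderr"`
result=$?
error=`cat "$tmp_stderr"`
rm "$tmp_stderr"

# The failing ls leaves stdout empty, writes its complaint to stderr,
# and returns a non-zero status.
echo "return=$result"
if [ -n "$error" ]; then
    echo "stderr captured"
fi
if [ -z "$output" ]; then
    echo "stdout empty"
fi
```

The important detail is grabbing `$?` straight after the command – running anything else first (even `cat`) would overwrite it.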

1 comment

  1. Hello - neat idea. Wanted to point out that you are missing a closing quote on the following line:



    echo " $PWD
