Blogish - Tom O'Connor Blogish. It's a bit like a blog. The 'Change One Thing' Rule <p>Whenever we have planned (and sometimes unplanned) downtime at work, I'm usually asked the question "<em>While we've got the entire system down to do X, shall we do Y also?</em>"</p> <p>Typically X is planned, and we're doing major maintenance - there's one coming up when there's grid circuit maintenance, where we're hoping it'll be fine on UPS and the emergency generator - with an at-risk period.</p> <p>Occasionally, X is unplanned, like the time the air conditioning failed, and everything shut down to save itself.</p> <p>I always decline the option to do Y at the same time, because it violates the "<strong>Change One Thing</strong>" rule.</p> <p>If I'm declaring a system outage to, say, upgrade the firmware on the core switch stack, I don't want to also take that opportunity to rewire a cabinet, or simultaneously upgrade VMware Hypervisors. &nbsp;I'll declare another outage for each of those individually. &nbsp;</p> <p>The problem with breaking the <strong>Change One Thing</strong> rule is that if you change two things and something doesn't work quite right afterwards, you can't be 100% certain which change to roll back, and it'll typically take N times longer to fix (where N is the number of things you changed).</p> <p>So I'm a bit of a stickler for this. &nbsp;I don't really enjoy giving up a weekend to work on a system when nobody else is using it, but I'd rather do one task at a time, get it right, and be able to make the most of the rest of the weekend, as opposed to having to pick through the permutations of the things that've changed, trying to restore service before 9AM on a Monday morning.</p> <p>Fortunately, I've got my team quite well trained not to get distracted from the One Thing we're doing when Outage Time comes. &nbsp;I can almost guarantee, though, that someone will ask "<em>Can we do 'this other thing' at the same time?</em>". 
&nbsp;</p> <p>&nbsp;</p> <p>To which my answer will always be: <strong>No</strong>.</p> <p>&nbsp;</p> <p>That can wait for another day, and we shall do that, and only that, then.</p> Part 1: Getting Started with Ansible <p><strong>An introduction to Ansible Configuration Management</strong></p> <p>&nbsp;</p> <p><strong>A brief history of Configuration Management</strong></p> <p><strong>===========================================</strong></p> <p>&nbsp;</p> <p>* CFEngine - Released 1993. Written in C.</p> <p>* Puppet - Released 2005. Written in Ruby. Domain Specific Language (DSL). SSL nightmare.</p> <p>* Chef - Released 2009. Written in Ruby; also a DSL, but closer to pure Ruby.</p> <p>* Juju - Released 2010. Python. Very Ubuntu.</p> <p>* Salt - Released 2011. Python. Never got it working right.</p> <p>* Ansible - Released 2012. Python. &nbsp;Awesome.&nbsp;</p> <p>&nbsp;</p> <p><strong>Why Ansible?</strong></p> <p><strong>============</strong></p> <p>It&rsquo;s agentless. &nbsp;Unlike Puppet, Chef, Salt, etc., Ansible operates only over SSH (or optionally ZeroMQ), so there&rsquo;s none of that crap PKI that you have to deal with using Puppet.</p> <p>It&rsquo;s Python. I like Python. &nbsp;I&rsquo;ve been using it far longer than any other language.&nbsp;</p> <p>It&rsquo;s self-documenting: &nbsp;simple YAML files describe the playbooks and roles.</p> <p>It&rsquo;s feature-rich. 
&nbsp;Some call this "batteries included", but there are over 150 modules provided out of the box, and new ones are pretty easy to write.</p> <p>&nbsp;</p> <p><strong>Installing Ansible</strong></p> <p><strong>==================</strong></p> <p>&nbsp;</p> <p>You can get it from the Python Package Index (PyPI):</p> <pre>pip install ansible</pre> <p>&nbsp;</p> <p>You can get it from your OS package repository:</p> <pre>sudo apt-get install ansible</pre> <p>&nbsp;</p> <p>You can download the source from Github and run it yourself:</p> <pre>git clone</pre> <p>&nbsp;</p> <p>My preferred way of installing it is inside a virtualenv, then using pip to install it.&nbsp;</p> <p>&nbsp;</p> <p><strong>Ansible Modes</strong></p> <p><strong>=============</strong></p> <p><strong>Playbook Mode</strong></p> <p>&nbsp;- This executes a series of commands in order, according to a playbook.</p> <p>&nbsp;</p> <p><strong>Non-playbook mode</strong></p> <p>&nbsp;- This executes a single ansible module command on a target host.&nbsp;</p> <p>&nbsp;</p> <p>I'll primarily be focussing on Playbook Mode, and hopefully giving an insight into what a playbook consists of, and how to use Ansible to deploy an application.</p> <p><strong>Parallax</strong></p> <p><strong>========</strong></p> <p>I've put together a collection of Ansible bits I've used in the past to give a quick-start of what a Playbook might look like for an example service. &nbsp;</p> <p>I'll be referring back to this in the rest of this article, so you'll probably want to grab a copy from Github to play with:</p> <pre>git clone</pre> <p>&nbsp;</p> <p><strong>First Steps</strong></p> <p><strong>===========</strong></p> <p>&nbsp;</p> <p>1. Install Ansible (see above)</p> <p>2. Clone Parallax</p> <p>&nbsp;</p> <p>From a first look at the source tree of Parallax, you should see a config file, and a directory called "playbooks".</p> <p>The config file (ansible.cfg) contains the ansible global configuration. 
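</p> <p>As a rough sketch, a minimal ansible.cfg for a layout like this might look as follows. &nbsp;The directive names here are from the Ansible 1.x documentation (<strong>hostfile</strong> was later renamed), so treat this as illustrative and check it against the version you've installed:</p> <pre>[defaults]<br /># use the inventory file that ships alongside this config<br />hostfile = hosts<br /># connect as this user unless a play overrides it<br />remote_user = user</pre> <p>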
&nbsp;Lots more information about it, and its directives, can be found in the Ansible documentation.</p> <p><strong><br /></strong></p> <p><strong>Playbooks</strong></p> <p><strong>---------</strong></p> <p>Playbooks are the bread and butter of Ansible. &nbsp;They represent collections of 'plays': configuration policies which get applied to defined groups of hosts.</p> <p>In Parallax, there's a "playbooks" directory, containing an example playbook to give you an idea of what an Ansible Playbook looks like.</p> <p>&nbsp;</p> <p><strong>Anatomy of a Playbook</strong></p> <p><strong>=====================</strong></p> <p>If you take a look inside the Parallax example playbook, you'll notice there's the following file structure:</p> <pre>.<br />├── example_servers.yml<br />├── group_vars<br />│ &nbsp; ├── all<br />│ &nbsp; └── example_servers<br />├── host_vars<br />│ &nbsp; └── example-repository<br />├── hosts<br />├── repository_server.yml<br />├── roles<br />│ &nbsp; ├── __template__<br />│ &nbsp; ├── common<br />│ &nbsp; ├── gridfs<br />│ &nbsp; ├── memcached<br />│ &nbsp; ├── mongodb<br />│ &nbsp; ├── nginx<br />│ &nbsp; ├── nodejs<br />│ &nbsp; ├── redis<br />│ &nbsp; ├── repository<br />│ &nbsp; ├── service_example<br />│ &nbsp; └── zeromq<br />└── site.yml</pre> <p>&nbsp;</p> <p>Looking at that tree, there are some YAML files, and some directories. &nbsp;</p> <p>There's also a file called "hosts". &nbsp;This is the Ansible Inventory file, and it stores the hosts and their mappings to host groups.</p> <p>The hosts file looks like this:</p> <pre>[example_servers]<br /> set_hostname=vm-ex01<br /># example of setting a host inventory by IP address.<br /># also demonstrates how to set per-host variables.<br /><br />[repository_servers]<br />example-repository<br />#example of setting a host by hostname. 
&nbsp;Requires local lookup in /etc/hosts<br /># or DNS.<br />[webservers]<br />web01<br />[dbservers]<br />db01</pre> <p>&nbsp;</p> <p>It's a standard INI-like file format: hostgroups are defined in [square brackets], one host per line. &nbsp;Per-host variables can follow the hostname or IP address. &nbsp;If you declare a host in the inventory by hostname, it must be resolvable either in your /etc/hosts file, or by a DNS lookup.</p> <p>The playbook definitions are in the .yml files. &nbsp;There are three in the Parallax example: two separate YAML files, and one that's a kind of catch-all, in 'site.yml'.</p> <p><strong>site.yml </strong>is the default name for a playbook, and you'll likely see it crop up when you look at other ansible examples.</p> <p>You'll also see lots of files called 'main.yml'. &nbsp;This is the default filename for a file containing Ansible Tasks, or Handlers. &nbsp;More on that later.</p> <p>So, site.yml consists of three named blocks. &nbsp;If you look closely, you'll see that the blocks all have a name, they all have a hosts: line, and they all have roles.</p> <p>The<strong> hosts:</strong> line sets which host group (from the Inventory file 'hosts') to apply the following roles to.</p> <p>The<strong> roles:</strong> line, and subsequent role entries, define the roles to apply to that hostgroup. &nbsp;The roles currently defined in Parallax can be seen in the above tree structure.</p> <p>You can either put multiple named blocks in one site.yml file, or split them up, in the manner of 'example_servers.yml' and 'repository_server.yml'.</p> <p>Other stuff in<strong> 'site.yml':</strong></p> <p><strong>'user:'</strong> - This sets the name of the user to connect to the target as. &nbsp;Sometimes shown as remote_user in newer ansible configurations. &nbsp;</p> <p><strong>'sudo:' </strong>- This tells Ansible whether it should run sudo on the target when it connects. 
&nbsp;You'll probably want to set this as "sudo: yes" most often, unless you plan to connect as root. &nbsp;In which case, this (ಠ.ಠ) is for you.</p> <p>&nbsp;</p> <p><strong><br /></strong></p> <p><strong>Roles</strong></p> <p><strong>=====</strong></p> <p>A role should encapsulate all the things that have to happen to make a thing work. &nbsp;If that sounds vague, it's because it is.&nbsp;</p> <p>The Parallax example has a role called common, which installs and configures the things that I've found are useful as prerequisites for other things. &nbsp;You should go through and decide which bits you want to put into your 'common' role, if you decide to have one.</p> <p>Roles can have dependencies, which will require that another role be applied first. &nbsp;This is good for things like handling prerequisites before you deploy code.</p> <p><strong><br /></strong></p> <p><strong>Inside A Role</strong></p> <p><strong>-------------</strong></p> <p>Let's take a look at one of the pre-defined roles in Parallax:&nbsp;</p> <pre>├── redis<br />│ &nbsp; ├── files<br />│ &nbsp; ├── handlers<br />│ &nbsp; ├── meta<br />│ &nbsp; ├── tasks<br />│ &nbsp; └── templates</pre> <p>&nbsp;</p> <p>This, unsurprisingly, is a quick role I threw together that'll install Redis from an Ubuntu PPA, and start the service.</p> <p>In general, a role consists of the following subdirectories: "files", "handlers", "meta", "tasks" and "templates".</p> <p><strong>files/</strong> contains files that will be copied to the target with the copy: module.</p> <p><strong>handlers/</strong> contains YAML files which contain 'handlers': little bits of config that can be triggered with the notify: action inside a task. &nbsp;Usually just handlers/main.yml - see the Ansible documentation for more information on what handlers are for.</p> <p><strong>meta/</strong> contains YAML files containing role dependencies. 
&nbsp;Usually just meta/main.yml.</p> <p><strong>tasks/</strong> contains YAML files containing a list of named steps which Ansible will execute in order on a target. &nbsp;Usually tasks/main.yml.</p> <p><strong>templates/</strong> contains Jinja2 template files, which can be used in a task with the template: module to interpolate variables in the template, then copy the template to a location on the target. &nbsp;Files in this directory end in .j2 by convention.</p> <p><strong><br /></strong></p> <p><strong>Example Role: Redis</strong></p> <p><strong>-------------------</strong></p> <p>&nbsp;</p> <p><strong>Path:</strong> parallax/playbooks/example/roles/redis</p> <p><strong>Structure:&nbsp;</strong></p> <pre>.<br />├── files<br />├── handlers<br />├── meta<br />├── tasks<br />│ &nbsp; └── main.yml<br />└── templates</pre> <p>&nbsp;</p> <p>All there is in this one is a task file, unsurprisingly called 'main.yml' - told you that name would crop up again. <br />- Actually, there's also a .empty file under files, handlers, meta, and templates. &nbsp;This is just so that if you commit the role to git, the empty directories won't vanish. &nbsp;</p> <p>&nbsp;</p> <p>Let's have a look at the redis role's tasks:</p> <pre><br />$ cat tasks/main.yml<br />---<br />&nbsp;- name: Add the Redis PPA<br />&nbsp; &nbsp;apt_repository: repo='ppa:rwky/redis' update_cache=yes<br />&nbsp;- name: Install Redis from PPA<br />&nbsp; &nbsp;apt: pkg=redis-server state=installed<br />&nbsp;- name: Start Redis<br />&nbsp; &nbsp;service: name=redis state=started</pre> <p>&nbsp;</p> <p>Each named block has an action below it. &nbsp;Each action refers to an Ansible Module. There's an index of all available modules, and their documentation, in the Ansible Module Index.</p> <p>&nbsp;</p> <p>Basically explained:</p> <p><strong>apt_repository:</strong> module configures a new apt repository for the system. &nbsp;It can take a ppa name, or a URL for a repository. 
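</p> <p>For instance, the URL form looks like this - a hypothetical repository line, purely to illustrate the shape alongside the PPA form used above:</p> <pre>&nbsp;- name: Add a repository by URL (hypothetical example)<br />&nbsp; &nbsp;apt_repository: repo='deb http://archive.example.com/ubuntu precise main' update_cache=yes</pre> <p>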
&nbsp;update_cache tells ansible to run apt-get update after it's added the new repository.</p> <p><strong>apt:</strong> module tells Ansible to run apt-get install $pkg using whatever value has been defined for pkg.&nbsp;</p> <p><strong>service:</strong> tells Ansible to execute "sudo service $name start" on the target.</p> <p>&nbsp;</p> <p>I recommend you have a trawl through the roles as configured in Parallax, and see if you can make sense of how they work. &nbsp;If you open the Ansible Module Index, you'll be able to use that as a quick reference guide for the modules in the roles.</p> <p>&nbsp;</p> <p>One of the most useful features of Ansible, in my opinion, is the "with_items:" action that some modules support. &nbsp;If you want to install multiple packages with apt at the same time, the easiest way to do it is like this:&nbsp;</p> <p>(example from roles/common/tasks/main.yml)</p> <p>&nbsp;</p> <pre>&nbsp;- name: install default packages<br />&nbsp; &nbsp; apt: pkg={{ item }} state=installed<br />&nbsp; &nbsp; with_items:<br />&nbsp; &nbsp; &nbsp; - aptitude<br />&nbsp; &nbsp; &nbsp; - vim<br />&nbsp; &nbsp; &nbsp; - supervisor<br />&nbsp; &nbsp; &nbsp; - python-dev<br />&nbsp; &nbsp; &nbsp; - htop<br />&nbsp; &nbsp; &nbsp; - screen</pre> <p>&nbsp;</p> <p><strong>Running Ansible</strong></p> <p><strong>===============</strong></p> <p>&nbsp;</p> <p>Once you've got your Host Inventory defined, and at least one play for Ansible to execute, it'll be able to do stuff for you.</p> <p>&nbsp;</p> <p>I've just spun up a new Ubuntu 13.10 Virtual Machine. 
&nbsp;It has the IP Address</p> <p>&nbsp;</p> <p>I'm going to create a new hostgroup called [demoboxes] and put that in:</p> <pre>[demoboxes]<br /> access_user=user</pre> <p>&nbsp;</p> <p>The variable access_user is required <strong>*somewhere*</strong> by the common role, to create the ssh authorised keys stuff under that user's home directory.&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>and in site.yml:</p> <pre><br />- name: Install all the packages and stuff required for a demobox<br />&nbsp; hosts: demoboxes<br />&nbsp; user: user<br />&nbsp; sudo: yes<br />&nbsp; roles:<br />&nbsp; &nbsp; - redis<br />&nbsp; &nbsp; - nginx<br />&nbsp; &nbsp; - nodejs<br />&nbsp; &nbsp; - zeromq<br /></pre> <p>I've included a few other roles from Parallax for the halibut.&nbsp;</p> <p>I'm going to run ansible-playbook -i hosts site.yml and see what happens.&nbsp;</p> <p>For the first run, we'll need to tell ansible the SSH and sudo passwords, because one of the things the common role does is configure passwordless sudo, and deploy an SSH key.&nbsp;</p> <p>In order to use Ansible with SSH passwords (pretty much required for the first run on normal machines, unless you deploy keys with something far lower-level, like Kickstart), you'll need the sshpass program.&nbsp;</p> <p>On Ubuntu, you can install that as follows:</p> <pre>sudo apt-get install sshpass</pre> <p>When you use Parallax as a starting point, one thing you'll want to do is edit</p> <pre> roles/common/files/authorized_keys</pre> <p>and put your keys in it.&nbsp;</p> <p>&nbsp;</p> <p>So, for a first run, it's:</p> <pre> ansible-playbook -i hosts -k -K site.yml</pre> <p>&nbsp;</p> <p>You'll get the following prompts for the ssh password, and the sudo password:</p> <pre>SSH password:<br />sudo password [defaults to SSH password]:</pre> <p>&nbsp;</p> <p>Enter whatever password you gave Ubuntu at install time.&nbsp;</p> <p>&nbsp;</p> <p>Once the following tasks have completed, you can remove -k -K from the ansible 
command line</p> <pre><br />TASK: [common | deploy access ssh-key to user's authorized keys file] *********<br />changed: []<br />TASK: [common | Deploy Sudoers file] ******************************************<br />changed: []</pre> <p>&nbsp;</p> <p>Because at that point, you'll be able to use your ssh key, and passwordless sudo.</p> <p>&nbsp;</p> <p>At the end of the run, you'll get a Play Recap, as follows:</p> <pre>PLAY RECAP ********************************************************************<br /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; : ok=19 &nbsp; changed=8 &nbsp; &nbsp;unreachable=0 &nbsp; &nbsp;failed=0</pre> <p>You should now be able to open (or whatever your server's IP address is) in a browser.</p> <p>&nbsp;</p> <pre>Toms-iMac-2:example tomoconnor$ curl -i<br />HTTP/1.1 200 OK<br />Server: nginx/1.4.1 (Ubuntu)<br />Date: Sun, 26 Jan 2014 17:48:47 GMT<br />Content-Type: text/html<br />Content-Length: 612<br />Last-Modified: Mon, 06 May 2013 10:26:49 GMT<br />Connection: keep-alive<br />ETag: "51878569-264"<br />Accept-Ranges: bytes</pre> <p>&nbsp;</p> <p>Hurrah.&nbsp;</p> <p><strong>Next up:</strong> <a href="/blogish/part-2-deploying-applications-ansible/#.UuZ4yTdFBB0">Part 2: Deploying Applications with Ansible.</a></p> <p><strong>Finally: </strong><a href="/blogish/part-3-ansible-and-amazon-web-services/#.UuhLIWTFJUM">Part 3: Ansible and Amazon Web Services.</a></p> Part 2: Deploying Applications with Ansible <p>You should by now have worked your way through <a href="/blogish/getting-started-ansible/#.UuY1yTdFBB1">Part 1: Getting Started with Ansible</a>. &nbsp;If you haven't, go and do that now.&nbsp;</p> <p>In this article, I'll be demonstrating a very simple application deployment workflow, deploying an insanely simple node.js application from a github repository, and configuring it to start with supervisord, and be reverse-proxied with Nginx.</p> <p>As with last time, we'll be using Parallax as the starting point for this. 
&nbsp;I've actually gone through and put the config in there already (if you don't feel like doing it yourself ;)</p> <p>&nbsp;</p> <pre>- name: Install all the packages and stuff required for a demobox<br />&nbsp; hosts: demoboxes<br />&nbsp; user: user<br />&nbsp; sudo: yes<br />&nbsp; roles:<br />&nbsp; &nbsp; - redis<br />&nbsp; &nbsp; - nginx<br />&nbsp; &nbsp; - nodejs<br />&nbsp; &nbsp; - zeromq<br /># &nbsp; &nbsp;- deploy_thingy</pre> <p>&nbsp;</p> <p>In the <strong><em>9c818d0b8f</em></strong> version, you'll be able to see that I've created a new role, inventively called "deploy_thingy". &nbsp;</p> <p>&nbsp;</p> <p><strong>**Updated**</strong></p> <p>It's been recommended that I base my __template__ role on the output of&nbsp;</p> <pre>ansible-galaxy init $rolename</pre> <p>So I've recreated the __template__ role to be based on an ansible-galaxy role template.</p> <p>There aren't that many changes, but it does include a new directory, '<strong>defaults/</strong>', for default role variables, alongside the Galaxy metadata you'd need if you wish to push the role back to the public Galaxy role index.</p> <p>&nbsp;</p> <p>In an attempt to make creating new roles easier, I put a <strong>__template__</strong> role into the file tree when I first created Parallax, so that all you do to create a new role is execute:</p> <pre>cp -R __template__ new_role_name</pre> <p>in the<strong> roles/ </strong>directory.</p> <pre>.<br />├── files<br />│ &nbsp; ├── .empty<br />│ &nbsp; ├── thingy.nginx.conf<br />│ &nbsp; └── thingy.super.conf<br />├── handlers<br />│ &nbsp; ├── .empty<br />│ &nbsp; └── main.yml<br />├── meta<br />│ &nbsp; ├── .empty<br />│ &nbsp; └── main.yml<br />├── tasks<br />│ &nbsp; └── main.yml<br />└── templates<br />&nbsp; &nbsp; └── .empty</pre> <p>&nbsp;</p> <p>In this role, we define some dependencies in <strong>meta/main.yml</strong>, there are two files in the <strong>files</strong>/ directory, and there's a set of tasks defined in <strong>tasks/main.yml</strong>. 
&nbsp;There's also some handlers defined in <strong>handlers/main.yml</strong>.</p> <p>&nbsp;</p> <p>Let's have a quick glance at the <strong>meta/main.yml</strong> file.&nbsp;</p> <pre>---<br />dependencies:<br />&nbsp; - { role: nodejs }<br />&nbsp; - { role: nginx }</pre> <p>&nbsp;</p> <p>This basically sets the requirement that this role, <em>deploy_thingy</em>, depends on services installed by the nginx and nodejs roles.</p> <p>Although those roles are explicitly stated to be installed in site.yml, this gives us a level of belt-and-braces configuration, in case the deploy_thingy role were ever included without the other two roles being explicitly stated, or were configured to run before its dependencies.</p> <p><strong>tasks/main.yml</strong> is simple.&nbsp;</p> <pre>---<br />&nbsp;- name: Create directory under /srv for thingy<br />&nbsp; &nbsp;file: path=/srv/thingy state=directory mode=755<br />&nbsp;- name: Git checkout from github<br />&nbsp; &nbsp;git: repo=<br />&nbsp; &nbsp; &nbsp; &nbsp; dest=/srv/thingy<br />&nbsp;- name: Drop Config for supervisord into the conf.d directory<br />&nbsp; &nbsp;copy: src=thingy.super.conf dest=/etc/supervisor/conf.d/thingy.conf<br />&nbsp; &nbsp;notify: reread supervisord<br />&nbsp;- name: Drop Reverse Proxy Config for Nginx<br />&nbsp; &nbsp;copy: src=thingy.nginx.conf dest=/etc/nginx/sites-enabled/thingy.conf<br />&nbsp; &nbsp;notify: restart nginx</pre> <p>&nbsp;</p> <p>We'll create somewhere for it to live, check the code out of my git repository <a href="#footnote1">[1]</a>, then drop two config files in place: one to configure supervisor(d), and one to configure Nginx.</p> <p>Because the tasks that configure supervisor(d) and nginx change the configuration of those services, there are notify: handlers to reload the configuration, or restart the service.</p> <p>&nbsp;</p> <p>Let's have a quick peek at those handlers now:</p> <pre>---<br />&nbsp; - name: reread supervisord<br 
/>&nbsp; &nbsp; shell: /usr/bin/supervisorctl reread &amp;&amp; /usr/bin/supervisorctl update<br />&nbsp; - name: restart nginx<br />&nbsp; &nbsp; service: name=nginx state=restarted</pre> <p>&nbsp;</p> <p>When the supervisor config changes (and we add something to /etc/supervisor/conf.d), we need to tell supervisord to re-read its configuration files, at which point it will see the new services, and then run supervisorctl update, which will set the state of the newly added items from 'available' to 'started'.</p> <p>When we change the nginx configuration, we'll hit nginx with a restart. &nbsp;It's possible to do softer actions, like reload, here, but I've chosen a service restart for simplicity.</p> <p>&nbsp;</p> <p>I've also changed the basic Ansible config, and the configuration of <strong>roles/common/files/insecure_sudoers</strong>, so that it will still ask you for a sudo password, in light of some minor criticism.</p> <p>I've found that if you're developing Ansible playbooks on an isolated system, then there's no great harm in disabling SSH Host Key Checking (in <strong>ansible.cfg</strong>), and similarly no great problem in disabling sudo authentication, so it's effectively like NOPASSWD use. &nbsp;</p> <p>However, <a href="">Micheil</a> made a very good point that in live environments it's a bit dodgy, to say the least. &nbsp;So I've commented those lines out of the playbook in Parallax, so that it should give users a reasonable level of basic security. &nbsp;At the end of the day, it's up to you how you use Parallax, and if you find that disabling security works for you, then fine. 
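</p> <p>For reference, the development-only switch looks like this. &nbsp;host_key_checking is a real ansible.cfg directive; the comment marker reflects how it's shipped commented-out, but double-check the directive name against your Ansible version's docs:</p> <pre>[defaults]<br /># only sensible on an isolated development network:<br /># host_key_checking = False</pre> <p>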
&nbsp;It's not like you haven't been warned.&nbsp;</p> <p><strong>But I digress.</strong></p> <p>The next thing to do is to edit site.yml, and ensure that the new role we've created gets mapped to a hostgroup in the play configuration.</p> <p>In the latest version of Parallax this is already done for you, but as long as the role name in the list matches the directory in roles/, it should be ready to go.</p> <p>Now if we run:</p> <pre>ansible-playbook -k -K -i playbooks/example/hosts playbooks/example/site.yml</pre> <p>&nbsp;</p> <p>It should go through the playbook, installing stuff, then finally do the git clone from github, deploy the configuration files, and trigger a reread of supervisord and a restart of nginx.</p> <p>If I now test that it's working, with:&nbsp;</p> <pre>curl -i<br />HTTP/1.1 200 OK<br />Server: nginx/1.4.1 (Ubuntu)<br />Date: Mon, 27 Jan 2014 14:51:29 GMT<br />Content-Type: text/html; charset=utf-8<br />Content-Length: 170<br />Connection: keep-alive<br />X-Powered-By: Express<br />ETag: "1827834703"</pre> <p>&nbsp;</p> <p>That <strong>X-Powered-By: Express</strong> line shows that Nginx is indeed working, and that the node.js application is running too.&nbsp;</p> <p>You can get more information about the stuff that supervisord is controlling by running:&nbsp;</p> <pre>sudo supervisorctl status</pre> <p>on the target host.</p> <pre>$ sudo supervisorctl status<br />thingy &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; RUNNING &nbsp; &nbsp;pid 19756, uptime 0:00:06</pre> <p>If the Nginx side is configured, but the node.js application isn't running, you'd get an HTTP 502 error, as follows:&nbsp;</p> <pre>curl -i<br />HTTP/1.1 502 Bad Gateway<br />Server: nginx/1.4.1 (Ubuntu)<br />Date: Mon, 27 Jan 2014 14:59:34 GMT<br />Content-Type: text/html<br />Content-Length: 181<br />Connection: keep-alive</pre> <p>So, that's it. 
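</p> <p>For completeness, the Nginx side of this (thingy.nginx.conf) is just a small reverse-proxy stanza, along these lines. &nbsp;This is a hypothetical sketch, not the actual file from Parallax - in particular the backend port (3000) is an assumption, so match it to whatever the node app really listens on:</p> <pre>server {<br />&nbsp; &nbsp; listen 80;<br />&nbsp; &nbsp; server_name _;<br /><br />&nbsp; &nbsp; location / {<br />&nbsp; &nbsp; &nbsp; &nbsp; # assumed backend port for the node.js app<br />&nbsp; &nbsp; &nbsp; &nbsp; proxy_pass http://localhost:3000;<br />&nbsp; &nbsp; &nbsp; &nbsp; proxy_set_header Host $host;<br />&nbsp; &nbsp; }<br />}</pre> <p>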
&nbsp;</p> <p>A <strong>very</strong> simple guide to deploying a very simple application with Ansible. &nbsp;Of course, it should be obvious that you can deploy *anything* from a git repository; it really boils down to the configuration of supervisord. &nbsp;For that matter, it doesn't have to be supervisord.</p> <p>I consider configuring supervisord for process control to be outside of the scope of this article, but I might touch on it in more detail in future.&nbsp;</p> <p>Next up, <a href="/blogish/part-3-ansible-and-amazon-web-services/#.UuhLIWTFJUM">Part 3: Ansible and Amazon Web Services.</a></p> <p><a name="footnote1"></a>1: It's really simple, and I'm not very node-savvy, so I'm sorry if it sucks.</p> Part 3: Ansible and Amazon Web Services <p>By this point, you should have read <a href="/blogish/getting-started-ansible/#.UuhKWWTFJUM">Part 1: Getting Started with Ansible</a>, and <a href="/blogish/part-2-deploying-applications-ansible/#.UuhKcGTFJUM">Part 2: Deploying Applications with Ansible</a>. &nbsp;</p> <p>If you haven't, go and do it <strong>now</strong>.</p> <p>You should also be familiar with some of the basic concepts surrounding AWS deployment, how AWS works, and so on.&nbsp;</p> <p>So, you'll have some idea how Ansible uses playbooks to control deployment to target hosts, and some idea of its capability for deploying code from version control systems (in Part 2, we used the Ansible git: module).</p> <p>In this section, we'll be looking at how you can put it all together, using Ansible to provision an EC2 instance, do the basic OS config, and deploy an application.</p> <p>In previous parts, we've only needed to have the Ansible python module installed on the execution host (y'know, the one on which you run ansible-playbook and so on). 
&nbsp;When we're dealing with Amazon Web Services (or Eucalyptus), we need to install one extra module, called '<strong>boto</strong>', which is the AWS library for python.</p> <p>You can do this either from native OS packages, with&nbsp;</p> <pre>sudo apt-get install python-boto </pre> <p>(on Ubuntu)</p> <p>&nbsp;</p> <pre>sudo yum install python-boto </pre> <p>(on RHEL, CentOS, Fedora et al.)</p> <p>or from pip:</p> <pre>pip install boto</pre> <p><strong><em>Interesting side note.. </em></strong>I had to do this globally, as even inside a virtualenv, ansible-playbook reported the following error:</p> <pre>&nbsp; failed: [localhost] =&gt; {"failed": true}<br />&nbsp; msg: boto required for this module<br />&nbsp; FATAL: all hosts have already failed -- aborting</pre> <p>&nbsp;</p> <p>I think we'll create a separate playbook for this, for reasons which should become apparent as we progress.</p> <p>From your parallax directory, create a branch, and a new subdirectory under playbooks/.</p> <p>I'm calling it part3_ec2, but you'll probably want to give your playbook and branch a more logical name.</p> <p>I'm going to go ahead and create a totally different hosts inventory file, this time only including four lines:</p> <pre><br />[local]<br />localhost<br />[launched]</pre> <p>&nbsp;</p> <p>The reason for this is that a lot of the configuration - provisioning EC2 hosts and so on - actually happens from the local machine that you're running ansible-playbook on.</p> <p>The site.yml in this playbook will have a different format. &nbsp;For this first attempt, I'm not sure I can see any real value in breaking the provisioning up into separate roles. &nbsp;I might change that in future, if we decide to configure Elastic LoadBalancers and so on.</p> <p>&nbsp;</p> <p><strong>AWS and IAM</strong></p> <p><strong>---------------</strong></p> <p>Amazon Web Services now provide a federated account management system called IAM (Identity and Access Management). 
Traditionally, with AWS, you could only create two pairs of Access/Secret keys.</p> <p>With IAM, you can create groups of users, with role-based access control capabilities, which give you far more granular control over what happens with new access/secret key pairs.</p> <p>In this article, I'm going to create an IAM group, called "deployment".</p> <p>From your AWS console, visit the <strong>IAM</strong> page, and click the "Create a new group of users" button.</p> <p>We'll call the group "Deployment".</p> <p>We need to assign roles to the IAM group. &nbsp;Power User Access seems reasonable for this: it provides full access to AWS services and resources, but does not allow user group modifications. &nbsp;The console will also show you a JSON representation of this permissions configuration.</p> <p>We'll create some users to add to this Deployment group. &nbsp;Let's call one "ansible".</p> <p>&nbsp;</p> <p>We should now get an option to download the user credentials for the user we just created.&nbsp;</p> <p>&nbsp;ansible</p> <pre>Access Key ID:<br />AKHAHAHAFATCHANCELOLLHA<br />Secret Access Key:<br />rmmDoYouReallyThingImGoingTo+5ShareThatzW</pre> <p>If you click the "Download Credentials" button, it'll save a CSV file containing the Username, and the Access/Secret Key.&nbsp;</p> <p><strong>--</strong>&nbsp;</p> <p>Back to the main theme of this evening's symposium:</p> <p>To avoid storing the AWS access and secret keys in the playbook, it's recommended that they be set as Environment Variables, namely:&nbsp;</p> <p><strong>AWS_ACCESS_KEY</strong></p> <p>and</p> <p><strong>AWS_SECRET_KEY</strong></p> <p>Second to that, we'll need a keypair name for the new instance(s) we're creating. &nbsp;I assume you're already familiar with the process of creating SSH keypairs on EC2.</p> <p>I'm calling my keypair "<strong>ansible_ec2</strong>". 
&nbsp;Seems logical enough.</p> <p>I've moved this new keypair, "<strong>ansible_ec2.pem</strong>", into <strong>~/.ssh/</strong> and set its permissions to <strong>600</strong> (otherwise ssh throws a wobbly).</p> <p>We'll also need to pre-create a security group for these servers to sit in. &nbsp;As you'll see in my site.yml, I've called this "<strong>sg_thingy</strong>". &nbsp;I'm going to create this as a security group allowing TCP ports 22, 80 and 443, and all ICMP traffic, through the firewall.</p> <p>If you haven't specified an existing keypair, or an existing security group, ansible will fail and return an error.</p> <p>I'm going to create a new <strong>site.yml</strong> file too, containing the following:</p> <pre>---<br /># Based heavily on the Ansible documentation on EC2:<br />#<br />&nbsp; - name: Provision an EC2 node<br />&nbsp; &nbsp; hosts: local<br />&nbsp; &nbsp; connection: local<br />&nbsp; &nbsp; gather_facts: False<br />&nbsp; &nbsp; tags: provisioning<br />&nbsp; &nbsp; vars:<br />&nbsp; &nbsp; &nbsp; instance_type: t1.micro<br />&nbsp; &nbsp; &nbsp; security_group: sg_thingy<br />&nbsp; &nbsp; &nbsp; image: ami-a73264ce<br />&nbsp; &nbsp; &nbsp; region: us-east-1<br />&nbsp; &nbsp; &nbsp; keypair: ansible_ec2<br />&nbsp; &nbsp; tasks:<br />&nbsp; &nbsp; &nbsp; - name: Launch new Instance<br />&nbsp; &nbsp; &nbsp; &nbsp; local_action: ec2 instance_tags="Name=AnsibleTest" group={{ security_group }} instance_type={{ instance_type }} image={{ image }} wait=true region={{ region }} keypair={{ keypair }}<br />&nbsp; &nbsp; &nbsp; &nbsp; register: ec2<br />&nbsp; &nbsp; &nbsp; - name: Add instance to local host group<br />&nbsp; &nbsp; &nbsp; &nbsp; local_action: lineinfile dest=hosts regexp="{{ item.public_dns_name }}" insertafter="[launched]" line="{{ item.public_dns_name }} ansible_ssh_private_key_file=~/.ssh/{{ keypair }}.pem"<br />&nbsp; &nbsp; &nbsp; &nbsp; with_items: ec2.instances<br />&nbsp; &nbsp; &nbsp; &nbsp; #"<br />&nbsp; &nbsp; &nbsp; - 
name: Wait for SSH to come up<br />&nbsp; &nbsp; &nbsp; &nbsp; local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started<br />&nbsp; &nbsp; &nbsp; &nbsp; with_items: ec2.instances<br />&nbsp; - name: With the newly provisioned EC2 node configure that thing<br />&nbsp; &nbsp; hosts: launched # This uses the hosts that we put into the in-memory hosts repository with the add_host module.<br />&nbsp; &nbsp; sudo: yes # On EC2 nodes, this is automatically passwordless.&nbsp;<br />&nbsp; &nbsp; remote_user: ubuntu # This is the username for all ubuntu images, rather than root, or something weird.<br />&nbsp; &nbsp; gather_facts: True &nbsp;#We need to re-enable this, as we turned it off earlier.<br />&nbsp; &nbsp; roles:<br />&nbsp; &nbsp; &nbsp; - common<br />&nbsp; &nbsp; &nbsp; - redis<br />&nbsp; &nbsp; &nbsp; - nginx<br />&nbsp; &nbsp; &nbsp; - zeromq<br />&nbsp; &nbsp; &nbsp; - deploy_thingy<br />&nbsp; &nbsp; &nbsp; # These are the same roles as we configured in the 'Parallax/example' playbook, except they've been linked into this one.</pre> <p>&nbsp;</p> <p>I've gone ahead and predefined a hostgroup in our hosts inventory file called '[launched]', because I'm going to insert the details of the launched instances into that with a local_action.</p> <p>If it works, you should get something like this appearing in the hosts file after it's launched the instance:</p> <pre>[launched]<br /> ansible_ssh_private_key_file=ansible_ec2.pem</pre> <p>I've added a tag to the play that builds an EC2 instance, so that you can run ansible-playbook a further time with the command-line argument --skip-tags provisioning so that you can do the post-provisioning config steps, without having to rebuild the whole VM from the ground up.</p> <p>I've added some stuff to the common role, too, to allow us to detect (and skip bits) when it's running on an EC2 host.</p> <pre>&nbsp; - name: Gather EC2 Facts<br />&nbsp; &nbsp; action: ec2_facts<br />&nbsp; 
&nbsp; ignore_errors: True<br /></pre> <p>And a little further on, we use this when: selector to disable some functionality that isn't relevant on EC2 hosts.</p> <pre>&nbsp; &nbsp; when: ansible_ec2_profile != "default-paravirtual"</pre> <p>&nbsp;</p> <p><strong>Running Ansible to Provision</strong></p> <p><strong>============================</strong></p> <p>&nbsp;</p> <p>I'm running ansible-playbook as follows:</p> <pre>AWS_ACCESS_KEY=AKHAHAHAFATCHANCELOLLHA AWS_SECRET_KEY="rmmDoYouReallyThingImGoingTo+5ShareThatzW" ansible-playbook -i hosts site.yml</pre> <p>Because I've pre-configured the important information in site.yml, Ansible can now go off, using the EC2 API and create us a new EC2 virtual machine.</p> <pre><br />PLAY [Provision an EC2 node] **************************************************<br />TASK: [Launch new Instance] ***************************************************<br />changed: [localhost]<br />TASK: [Add instance to local host group] **************************************<br />ok: [localhost] =&gt; (item={u'ramdisk': None, u'kernel': u'aki-88aa75e1', u'root_device_name': u'/dev/sda1', u'placement': u'us-east-1a', u'private_dns_name': u'ip-10-73-193-26.ec2.internal', u'ami_launch_index': u'0', u'image_id': u'ami-a73264ce', u'dns_name': u'', u'launch_time': u'2014-01-28T22:33:50.000Z', u'id': u'i-414ec06f', u'public_ip': u'', u'instance_type': u't1.micro', u'state': u'running', u'private_ip': u'', u'key_name': u'ansible_ec2', u'public_dns_name': u'', u'root_device_type': u'ebs', u'state_code': 16, u'hypervisor': u'xen', u'virtualization_type': u'paravirtual', u'architecture': u'x86_64'})<br />TASK: [Wait for SSH to come up] ***********************************************<br />ok: [localhost] =&gt; (item={u'ramdisk': None, u'kernel': u'aki-88aa75e1', u'root_device_name': u'/dev/sda1', u'placement': u'us-east-1a', u'private_dns_name': u'ip-10-73-193-26.ec2.internal', u'ami_launch_index': u'0', u'image_id': u'ami-a73264ce', u'dns_name': u'', 
u'launch_time': u'2014-01-28T22:33:50.000Z', u'id': u'i-414ec06f', u'public_ip': u'', u'instance_type': u't1.micro', u'state': u'running', u'private_ip': u'', u'key_name': u'ansible_ec2', u'public_dns_name': u'', u'root_device_type': u'ebs', u'state_code': 16, u'hypervisor': u'xen', u'virtualization_type': u'paravirtual', u'architecture': u'x86_64'})</pre> <p>&nbsp;</p> <p><strong>Cool.</strong></p> <p><strong>Now what?&nbsp;</strong></p> <p>Well, we'll want to configure this new instance *somehow*. &nbsp;As we're already using Ansible, that seems like a pretty good way to do it.&nbsp;</p> <p>To avoid duplicating code, I've symlinked the roles from the example playbook into the part3 playbook, so that I should theoretically be able to include them from here.&nbsp;</p> <p>Come to think of it, you should be able to merge the branches (you'll probably have to do this semi-manually), because it should be possible to have the two different play types coexisting, due to the idempotent nature of Ansible.</p> <p>I've decided not to merge my playbooks into one directory, because for the time being, I want to keep site.yml separate between the EC2 side and the non-EC2 side.</p> <p>As I mentioned earlier, I added a tag to the instance provisioning play in the site.yml file for this playbook. &nbsp;This means that now I've built an instance (and it's been added to the hosts inventory (go check!)), I can run the configuration plays, and skip the provisioning stuff, as follows:</p> <pre>ansible-playbook -i hosts --skip-tags provisioning site.yml</pre> <p>This will now go off and do stuff. 
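</p> <p>As a hedged sketch of the kind of guard I mean (the package name here is purely illustrative), a task can be limited to non-EC2 hosts using the EC2 facts gathered earlier:</p>

```yaml
# Illustrative only: skip this task on EC2-provisioned nodes.
# ansible_ec2_profile comes from the ec2_facts task; on non-EC2 hosts
# that fact is absent, which is why ignore_errors was set there.
- name: Install a package that only makes sense off-EC2
  apt: pkg=some-local-only-package state=present
  when: ansible_ec2_profile is not defined or ansible_ec2_profile != "default-paravirtual"
```

<p>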
&nbsp;I had to go through and add some conditionals to tell some tasks not to run on EC2-provisioned nodes, and some other stuff to prevent it looking for packages that are only available in Ubuntu Saucy.</p> <p>I'm not going to paste the full output, because we should now be fairly familiar with the whole ansible deployment/configuration thing.</p> <p>I will, however, show you this:&nbsp;</p> <pre>PLAY RECAP ********************************************************************<br /> : ok=30 &nbsp; changed=11 &nbsp; unreachable=0 &nbsp; &nbsp;failed=0</pre> <p>It's probably worth noting that because I chose to append the newly added host to the physical host inventory file, subsequent plays in the same run won't see it, so it's best to kick off a fresh ansible-playbook run, but this time <strong>skip</strong> the <em>provisioning</em> tag.&nbsp;</p> <p>Proof it works:</p> <p><img src="" alt="" /></p> <p>For what it's worth, I'm going to destroy that instance in a moment, so you'll have to do it yourself. Bwahahaha.</p> <p>My EC2 deployment playbook / branch etc can be found here:<a href="">&nbsp;</a></p> <p>Part 4, now available: <a href="/blogish/part-4-ansible-tower/#.U82KKoBdVUE">Ansible with Ansible Tower</a></p> Part 4: Ansible Tower <p>You may remember that in January, I wrote a trilogy of blogposts surrounding the use of Ansible, as a handy guide to help y&rsquo;all get started. &nbsp;I&rsquo;ve decided to revisit this now, and write another part, about Ansible Tower.</p> <p>In the 6-odd months since I wrote <a href="/blogish/getting-started-ansible/#.U82JuoBdVUE">Parts 1</a>, <a href="/blogish/part-2-deploying-applications-ansible/#.U82JzIBdVUE">2</a> and <a href="/blogish/part-3-ansible-and-amazon-web-services/#.U82J3IBdVUE">3</a> of my Getting Started with Ansible guide, it&rsquo;s had over 10,000 unique visitors. 
&nbsp;I&rsquo;ve built the ansible-based provisioning and deployment pipelines for two more companies, both based off my Parallax starting point I&rsquo;ve been working on since January. &nbsp;That alone has been gathering Stars and Forks on Github.</p> <p>And so, to part four: <strong>Ansible Tower.</strong></p> <p><a href="">Ansible Tower</a> is the Web-based User Interface for <a href="">Ansible</a>, developed by the company behind the Ansible project.</p> <p>It provides an easy-to-use dashboard, and role-based access control, so that it&rsquo;s easier to allow individual teams access to use Ansible for their deployments, without having to rely on dedicated build engineers / DevOps teams to do it for them.</p> <p>There&rsquo;s also a REST API built into Tower, which aids automation tasks (we&rsquo;ll come back to this in Part 5).</p> <p>In this tutorial, I&rsquo;m going to configure a server running Ansible Tower, and connect it to an Active Directory system. &nbsp;You can use any LDAP directory, but Active Directory is probably the most &nbsp;commonly found in Enterprise deployments.</p> <h2><strong>Prerequisites:</strong></h2> <p>Ansible Tower server (I&rsquo;m using a VMware environment, so both my servers are VMs)</p> <p><em><span style="white-space: pre;"> </span>1 Core, 1GB RAM Ubuntu 12.04 LTS Server, 64-bit</em></p> <p>Active Directory Server (I&rsquo;m using Windows Server 2012 R2)</p> <p><em><span style="white-space: pre;"> </span>2 Cores, 4GB RAM</em></p> <p>Officially, Tower supports CentOS 6, RedHat Enterprise Linux 6, Ubuntu Server 12.04 LTS, and Ubuntu Server 14.04 LTS.</p> <p>Installing Tower requires Internet connectivity, because it downloads from their repo servers.</p> <p>I have managed to perform an offline installation, but you have to set up some kind of system to mirror their repositories, and change some settings in the Ansible Installer file. 
&nbsp;</p> <p>I <strong>*highly*</strong> recommend you dedicate a server (VM or otherwise) to Ansible Tower, because the installer will rewrite pg_hba.conf and supervisord.conf to suit its needs. &nbsp;Everything is easier if you give it its own environment to run in. &nbsp;</p> <p>You <strong>*might*</strong> be able to do it in Docker, although I haven&rsquo;t tried, and I&rsquo;m willing to bet you&rsquo;re asking for trouble.</p> <p>I&rsquo;m going to assume you already know about installing Windows Server 2012 and building a domain controller. (If there's significant call for it, I might write a separate blog post about this...)</p> <p>&nbsp;</p> <h2><strong>Installation Steps:</strong></h2> <p>&nbsp;SSH into the Tower Server, and upload the ansible-tower-setup-latest.gz file to your home directory.</p> <p>Extract it.</p> <p>Download and open <a href=""> </a>in a browser tab for perusal and reference.</p> <p>Install dependencies:</p> <pre>sudo apt-get install python-dev python-yaml python-paramiko python-jinja2 python-pip sshpass<br />sudo pip install ansible</pre> <pre>cd ansible-tower-setup-$VERSION </pre> <p>(where $VERSION is the version of Ansible Tower it untarred. 
&nbsp;Mine&rsquo;s 1.4.11.)</p> <p>It should come as no surprise that the Ansible Tower installer is actually an Ansible Playbook (hosts includes, and it&rsquo;s all defined in group_vars/all and site.yml) - Neat, huh?&nbsp;</p> <p>Edit <strong>group_vars/all </strong>to set some sane defaults - basically changing passwords away from what they ship with.</p> <pre>pg_password: AWsecret<br />admin_password: password<br />rabbitmq_password: "AWXbunnies"</pre> <p><strong>**Important** - </strong>You really need to change these default values, otherwise it&rsquo;d be theoretically possible that you could expose your secrets to the world!</p> <p>The documentation says if you&rsquo;re going to do LDAP integration, you should configure that now.&nbsp;</p> <p>I'm actually going to do LDAP integration at a later stage.</p> <pre>&nbsp;sudo ./</pre> <p>With any luck, you should get the following message.&nbsp;</p> <pre>The setup process completed successfully.</pre> <p>&nbsp;</p> <p>With Ansible Tower now installed, you can open a web browser, and go to http://</p> <p>You&rsquo;ll probably get presented with an unsigned certificate error, but we can change that later.</p> <h3>Sidenote on SSL. &nbsp;</h3> <p>It&rsquo;s all done via Apache2, so the file you&rsquo;d want to edit is:</p> <pre>/etc/apache2/conf.d/awx-httpd-443.conf</pre> <p>&nbsp;</p> <p>and edit:</p> <pre>&nbsp; SSLCertificateFile /etc/awx/awx.cert<br />&nbsp; SSLCertificateKeyFile /etc/awx/awx.key</pre> <p>&nbsp;</p> <p>You can now log into Tower, with the username: admin, and whatever password you specified in <strong>group_vars/all</strong> at setup time.</p> <p>In terms of actually getting started with Ansible Tower, I highly recommend you work your way through the PDF User guide I linked earlier on. 
&nbsp;There&rsquo;s a good example of a quickstart, and it&rsquo;s really easy to import your standalone playbooks.</p> <p>When you import a playbook, either manually or with some kind of source control mechanism, it&rsquo;s important to remember that in the playbook YAML file, you set<strong> hosts: all</strong>, because the host definition will now be controlled by Tower, so if you forget to do that, you&rsquo;ll probably find nothing happens when you run a job.</p> <p>Now for the interesting part&hellip;(and let&rsquo;s face it, it&rsquo;s the bit you&rsquo;ve all been waiting for)</p> <h2>Integrating Ansible Tower with LDAP / Active Directory</h2> <p>Firstly, make sure that you can a) ping the AD server and b) make an LDAP connection to it.</p> <p>ping is easy: just ping it by hostname (if you&rsquo;ve set up DNS or a hosts file).</p> <p>LDAP is pretty straightforward too: just telnet into it on port 389. &nbsp;If you get Connection Refused, you&rsquo;ll want to check Windows Firewall settings.</p> <p>On the Tower server, open up:</p> <pre> /etc/awx/</pre> <p>After line 80 (or thereabouts) there&rsquo;s a section on LDAP settings.</p> <p>Settings you&rsquo;ll want to change (and some sane examples):</p> <pre>AUTH_LDAP_SERVER_URI = ''</pre> <p>Set this to the LDAP connection string for your server:</p> <pre>AUTH_LDAP_SERVER_URI = 'ldap://'</pre> <p>On the AD Server, open Users and Computers, and create a user in Managed Service Accounts called something like &ldquo;Ansible Tower&rdquo; and assign it a suitably obscure password. &nbsp;Mark it as &ldquo;Password never expires&rdquo;.</p> <p>We&rsquo;ll use this user to form the <em>Bind DN </em>for LDAP authentication.</p> <p>I&rsquo;ve also created another account in AD-&gt;Users, as &ldquo;Bobby Tables&rdquo; - with the sAMAccountName of bobby.tables, and a simple password. 
&nbsp;We&rsquo;ll use this to test that the integration is working later on.</p> <p>We&rsquo;ll need the full DN path for the config file, so open Powershell, and run</p> <pre>dsquery user</pre> <p>In the list that's returned, look for the LDAP DN of your newly created user:</p> <pre>"CN=Ansible Tower,CN=Managed Service Accounts,DC=wibblesplat,DC=com"</pre> <p>Back in <strong>/etc/awx/</strong>, set:</p> <pre>AUTH_LDAP_BIND_DN = 'CN=Ansible Tower,CN=Managed Service Accounts,DC=wibblesplat,DC=com'</pre> <pre># Password used to bind as the above user account.<br />AUTH_LDAP_BIND_PASSWORD = 'P4ssW0Rd%!'<br />AUTH_LDAP_USER_SEARCH = LDAPSearch(<br />&nbsp; &nbsp; 'CN=Users,DC=wibblesplat,DC=com', &nbsp; # Base DN<br />&nbsp; &nbsp; ldap.SCOPE_SUBTREE, &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; # SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE<br />&nbsp; &nbsp; '(sAMAccountName=%(user)s)', &nbsp; &nbsp;# Query<br />)</pre> <p>You&rsquo;ll want to edit the <strong>AUTH_LDAP_USER_SEARCH </strong>&nbsp;attribute to set your site&rsquo;s Base DN correctly. &nbsp;If you store your Users in an OU, you can specify that here.</p> <pre>AUTH_LDAP_GROUP_SEARCH = LDAPSearch(<br />&nbsp; &nbsp; 'CN=Users,DC=wibblesplat,DC=com', &nbsp; &nbsp;# Base DN<br />&nbsp; &nbsp; ldap.SCOPE_SUBTREE, &nbsp; &nbsp; # SCOPE_BASE, SCOPE_ONELEVEL, SCOPE_SUBTREE<br />&nbsp; &nbsp; '(objectClass=group)', &nbsp;# Query<br />)</pre> <p>Again, you&rsquo;ll want to specify your site&rsquo;s Base DN for Groups here, and again, if you store your groups in an OU, you can specify that.</p> <p>This is an interesting setting:</p> <pre># Group DN required to login. If specified, user must be a member of this<br /># group to login via LDAP. &nbsp;If not set, everyone in LDAP that matches the<br /># user search defined above will be able to login via AWX. 
&nbsp;Only one<br /># require group is supported.<br />#AUTH_LDAP_REQUIRE_GROUP = 'CN=ansible-tower-users,CN=Users,DC=wibblesplat,DC=com'<br /># Group DN denied from login. If specified, user will not be allowed to login<br /># if a member of this group. &nbsp;Only one deny group is supported.<br />#AUTH_LDAP_DENY_GROUP = 'CN=ansible-tower-denied,CN=Users,DC=wibblesplat,DC=com'</pre> <p>Basically, you can choose a group, and if the user&rsquo;s not in that group, they ain&rsquo;t getting in.&nbsp;</p> <p>Both of these are specified as Group DNs:</p> <p>It&rsquo;s easy to discover Group DNs with</p> <pre>dsquery group</pre> <p>from Powershell on your AD server.</p> <p>Another clever setting. &nbsp;It&rsquo;s possible to give users the Tower &ldquo;is_superuser&rdquo; flag, based on AD/LDAP group membership:</p> <pre>AUTH_LDAP_USER_FLAGS_BY_GROUP = {<br />&nbsp; &nbsp; 'is_superuser': 'CN=Domain Admins,CN=Users,DC=wibblesplat,DC=com',<br />}</pre> <p>Finally, the last setting allows you to map Tower Organisations (Organizations) to AD/LDAP groups:</p> <pre>AUTH_LDAP_ORGANIZATION_MAP = {<br />&nbsp; &nbsp; 'Test Org': {<br />&nbsp; &nbsp; &nbsp; &nbsp; 'admins': 'CN=Domain Admins,CN=Users,DC=wibblesplat,DC=com',<br />&nbsp; &nbsp; &nbsp; &nbsp; 'users': ['CN=ansible-tower-users,CN=Users,DC=wibblesplat,DC=com'],<br />&nbsp; &nbsp; &nbsp; &nbsp; 'remove_users' : False,<br />&nbsp; &nbsp; &nbsp; &nbsp; 'remove_admins' : False,<br />&nbsp; &nbsp; },<br />&nbsp; &nbsp; #'Test Org 2': {<br />&nbsp; &nbsp; # &nbsp; &nbsp;'admins': ['CN=Administrators,CN=Builtin,DC=example,DC=com'],<br />&nbsp; &nbsp; # &nbsp; &nbsp;'users': True,<br />&nbsp; &nbsp; # &nbsp; &nbsp;'remove_users' : False,<br />&nbsp; &nbsp; # &nbsp; &nbsp;'remove_admins' : False,<br />&nbsp; &nbsp; #},<br />}</pre> <p>Committing the changes is as easy as restarting Apache, and the AWX Services.</p> <p>Restart the AWX Services first, with</p> <pre>supervisorctl restart all</pre> <p>Now restart Apache, with:</p> 
<pre>service apache2 restart</pre> <p>I created two groups in</p> <pre>CN=Users,DC=wibblesplat,DC=com</pre> <p>Called &ldquo;ansible-tower-denied&rdquo; and &ldquo;ansible-tower-users&rdquo;.</p> <p>I created two users, <strong>&ldquo;Bobby Tables (bobby.tables)&rdquo;</strong> - in <em>ansible-tower-users</em>, and <strong>&ldquo;Evil Emily (evil.emily)&rdquo;</strong> - in <em>ansible-tower-denied</em>. &nbsp;</p> <p>When I restarted Ansible&rsquo;s services, and tried to log in with <strong>bobby.tables</strong>, I got in. &nbsp;</p> <p><img src="" alt="" /></p> <p>When I view Organizations, I can see Test Org (according to the mapping), and Bobby Tables in that organisation.</p> <p><img src="" alt="" /></p> <p>When I try to log in as <strong>evil.emily,</strong> I get &ldquo;Unable to login with provided credentials.&rdquo; - which is what we expect, as this user is in the deny access group.</p> <p><img src="" alt="" /></p> <p>&nbsp;</p> <h2>Using Ansible Tower</h2> <p>As far as how to use Tower is concerned, I don't really want to re-hash what Ansible have already said in their <a href="">User Manual PDF.&nbsp;</a></p> <p>I will, however, walk through the steps to getting Parallax imported, and deployed on a test server.</p> <p>For this purpose, I've built a Test VM in my development environment, running Ubuntu 14.04. &nbsp;I'm going to configure Tower to manage this VM, download Parallax playbooks from Github, and create a job template to run them against the test server.</p> <p>In this example, I'm logged in as the 'admin' superuser account, although with the correct permissions configured within Tower, using Active Directory, or manual permission assignment, it's possible to do this on an individual and a team level.</p> <h3>A few quick definitions:&nbsp;</h3> <p><strong>Organizations</strong> :- This is the top-level unit of hierarchical organisation in Tower. 
&nbsp;An Organization contains <strong>Users</strong>, <strong>Teams</strong>, <strong>Projects</strong> and <strong>Inventories</strong>. &nbsp;Multiple Organizations can be used to create multi-tenancy on a Tower server.</p> <p><strong>Users</strong> : - These are the logins to Tower. &nbsp;They're either manually created, or mapped in from LDAP. &nbsp;Users have Properties (name, email, username, etc.), <strong>Credentials</strong> (used to connect to services and servers), <strong>Permissions</strong> (to give them Role-based access control to <strong>Inventories</strong> and Deployments), <strong>Organizations</strong> (organizations they're members of), and <strong>Teams</strong> (useful for subdividing Organizations into groups of users, projects, credentials and permissions).</p> <p>&nbsp;</p> <p><strong>Teams</strong> : - A team is a sub-division of an organisation. &nbsp;Imagine you have a Networks team, who have their own servers. &nbsp;You might also have a Development team, who need their development environment. &nbsp;Creating Teams means that Networks manage theirs, and Development manage their own, without knowledge of each other's configurations.&nbsp;</p> <p>&nbsp;</p> <p><strong>Permissions</strong> : - These tie users and teams to inventories and jobs. 
&nbsp;You have Inventory permissions, and Deployment permissions.&nbsp;</p> <p><strong>Inventory</strong> permissions give users and teams the ability to modify inventories, groups and hosts.</p> <p><strong>Deployment</strong> permissions give users and teams the ability to launch jobs that make changes ("Run Jobs"), or launch jobs that check state ("Check Jobs").</p> <p>&nbsp;</p> <p><strong>Credentials</strong> : - These are the passwords and access keys that Tower needs to be able to ssh (or use other protocols) to connect to the nodes it's managing.</p> <p>&nbsp;</p> <p>There are a few types of Credentials that Tower can manage and utilise:</p> <p><strong>SSH Password</strong> - plain old password-based SSH login.</p> <p><strong>SSH Private Key</strong> - used for key-based SSH Authentication.</p> <p><strong>SSH Private Key w/ Passphrase </strong>- Used to protect the private key with a passphrase. &nbsp;The passphrase may be optionally stored in the database. &nbsp;If it's not, Tower will ask you for the password when it needs to use the Credential.</p> <p><strong>Sudo Password </strong>- Required if sudo has to run, and needs a password to auth.</p> <p><strong>AWS Credentials </strong>- Stores AWS Access Key and Secret Key securely in the Tower Database.</p> <p><strong>Rackspace credentials</strong> - Stores Rackspace username and Secret Key.&nbsp;</p> <p><strong>SCM Credentials</strong> - Stores credentials used for accessing source code repositories for the storage and retrieval of Projects.</p> <p><strong>Projects </strong>: - These are where your playbooks live. &nbsp;You can either add them manually, by cloning into&nbsp;</p> <pre>/var/lib/awx/projects</pre> <p>or by using Git, SVN, or Mercurial and having Tower do the clone automatically before each job run (or on a schedule).</p> <p><strong>Inventories</strong> : - These effectively replace the grouping within the Playbook directory hosts file. 
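</p> <p>For reference, this is the sort of static hosts-file grouping that a Tower Inventory replaces (group and host names here are hypothetical):</p>

```ini
# Hypothetical playbook-directory hosts file -- in Tower, this grouping
# moves into an Inventory, with Groups and Hosts defined in the UI instead
[webservers]
web1.example.com
web2.example.com ansible_ssh_user=ubuntu

[dbservers]
db1.example.com
```

<p>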
&nbsp;You can define groups of hosts, and then configure individual hosts within these groups. &nbsp;It's possible to assign host-specific variables, or Inventory-specific variables from this.</p> <p>&nbsp;</p> <p><strong>Groups</strong> : - These live in Inventories, and allow you to collect groups of similar hosts, to which you can apply a playbook.</p> <p>&nbsp;</p> <p><strong>Hosts</strong> : - These live in <strong>Groups</strong>, and define the IP address / Hostname of the node, plus some host variables.</p> <p>&nbsp;</p> <p><strong>Job Templates</strong> : - This is basically a definition of an Ansible job, that ties together the <strong>Inventory</strong> (and its hosts/groups), a <strong>Project</strong> (and its <strong>Playbooks</strong>), <strong>Credentials</strong>, and some extra variables. &nbsp;You can also specify tags here (like --tags on the ansible-playbook command line).</p> <p>Job Templates can also accept HTTP Callbacks, which is a way that a newly provisioned host can contact the Tower server, and ask to be provisioned. &nbsp;We'll come back to this concept in Part 5.</p> <p><strong>Jobs</strong> : - These are what happens when a Job Template gets instantiated, and runs a playbook against a set of hosts from the relevant Inventory.</p> <h2>Running Parallax with Tower</h2> <p>The first thing we need to do (unless you've already done this / or had one automatically created by LDAP mapping), is to create an <strong>Organization</strong>. 
- Again, it's best to refer to the extant Ansible Tower documentation linked above for the best way to do this.&nbsp;</p> <p>I've actually mapped my Test Org in via the LDAP interface, so the next step is to create a Team.</p> <p>I've called my Team "<em>DevOps</em>".</p> <p>I'm going to assign them a Credential now.&nbsp;</p> <p>Navigate to Teams / DevOps</p> <p><img src="" alt="" /></p> <p>&nbsp;</p> <p>Under "Credentials", click the [+]</p> <p>Select type "Machine"</p> <p>&nbsp;- On a server somewhere, run ssh-keygen, and generate an RSA key. &nbsp;Copy the private key to the clipboard, and paste it into the SSH Private Key box. &nbsp;</p> <p><img src="" alt="" /></p> <p>&nbsp;Scroll down, and click Save.</p> <p>From the tabbed menu at the top, click Projects and then click the [+]</p> <p>Give the Project a meaningful name and description. &nbsp;Enter the SCM Type as Git.</p> <p>Under SCM URL, give the public Github address for Parallax, and under SCM Branch set "tower".</p> <p>Set SCM Update Options to "Update on Launch" - this will do a git update before running a job, so you'll always get the latest version from Git.</p> <p><img src="" alt="" /></p> <p>&nbsp;</p> <p>Click Save.</p> <p>&nbsp;</p> <p>This will trigger a job, which will clone the latest version from Git, and save it into the Projects directory. 
&nbsp;If this fails, you might need to run:&nbsp;</p> <pre>chown -R awx /var/lib/awx/projects</pre> <p>&nbsp;</p> <p>Next, create an <strong>Inventory</strong>.</p> <p>Pretty straightforward - name, description, organisation.</p> <p><img src="" alt="" /></p> <p>Select that Inventory, and create a Group - It's possible to import Groups from EC2, by selecting the Source tab when you create a new group.</p> <p>Select that group you just created, and create a host under it, with the IP Address / hostname of your test server.</p> <p><img src="" alt="" /></p> <p>At this point, you can assign per-host variables.</p> <p><strong>Nearly there!</strong></p> <p>Click "<strong>Job Templates</strong>", and create a new job template. &nbsp;As I said before, these really tie it all together.</p> <p>Give it a name, then select your <strong>Inventory</strong>, <strong>Project</strong>, <strong>Playbook</strong> and <strong>Credential</strong>.</p> <p><img src="" alt="" /></p> <p>Click Save.</p> <p>&nbsp;</p> <p>To launch it, click the Rocketship from the Job Templates Listing.</p> <p><img src="" alt="" /></p> <p>&nbsp;</p> <p>You'll get redirected to the Jobs page, showing your latest job in Queued. &nbsp;</p> <p><img src="" alt="" /></p> <p>Unless you have a very busy Tower server, it won't stay Queued for long. &nbsp;Click the refresh button on the Queued section to reload, and you should see it's moved to Active.</p> <p><img src="" alt="" /></p> <p>You can click on the job for an update on its status, or just patiently wait for it to complete.</p> <p>When the job's done, you'll either have a red dot, or a green dot indicating the status of the job.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>That's it. 
&nbsp;You've installed Ansible Tower, integrated it with Active Directory, and created your first deployment job of Parallax with Tower.</p> <p>&nbsp;</p> <h2>Other Resources:&nbsp;</h2> <p><a href=";utm_content=6793188&amp;utm_medium=social&amp;utm_source=twitter&amp;v=1amlwPP_X4k">Ansible Tower Demo video (12 minutes long)</a></p> <p><a href="">Other videos from Ansible on Youtube</a></p> <p>Coming Soon: Part 5. Automation with Ansible.</p> Part 5: Ansible Galaxy <p>It's been a while since I wrote Parts <a href="/blogish/getting-started-ansible/#.VFIzaXV_tB0">1</a>,<a href="/blogish/part-2-deploying-applications-ansible/#.VFIzd3V_tB0">2</a>,<a href="/blogish/part-3-ansible-and-amazon-web-services/#.VFIzh3V_tB0">3</a>,<a href="/blogish/part-4-ansible-tower/#.VFIzknV_tB0">4</a> on my Ansible Tutorial series, but I've recently changed my approach somewhat when using Ansible, and certainly when I build on Parallax. &nbsp;</p> <p>I've started using more and more from Ansible Galaxy. &nbsp;For those of you who don't know, Galaxy is a community "app store"-like thing for sharing reusable Ansible Roles. &nbsp;</p> <p><a href=""></a></p> <p>Let's pretend we want to deploy a staging server for a Python/Django application, using Postgres as the backend database all on a single server running Ubuntu 14.04 Trusty.</p> <p>I've recently done something similar, so I know roughly what roles I need to include. &nbsp;YMMV. &nbsp;</p> <p>Starting with the basic stuff. &nbsp;Let's find a role to install/configure Postgres. &nbsp;</p> <p><a href=""></a></p> <p>Click the "database" category. 
&nbsp;</p> <p>I tend to like to sort by Average Score, in Reverse order, so you get the highly rated ones at the top.</p> <p>The top-rated Postgres role is from ANXS <a href=""></a></p> <p>There's a bunch of useful links on that page, one to the role's github source, and the issue tracker.</p> <p>Below, there's a list of the supported platforms (taken from the role's metadata yml file).</p> <p>Just check that your target OS is listed there, and everything will probably work fine.</p> <p>It's also worth checking that your installed Ansible version is at least as new as the role's Minimum Ansible Version.</p> <p>Starting with a base-point of Parallax (because it still has some really useful bits and bobs in it - like 'common')..&nbsp;</p> <p>cd <a href="">./playbooks/part5_galaxy</a> (or whatever you've called your playbook set).</p> <p>If you want to directly install the role into the roles/ directory, you'll need to append the -p flag, and the path (relative or absolute) to your project's roles directory. &nbsp;Otherwise they tend to get installed in a global location (which is a bit of a pain if you're not root).</p> <p>So when you run:</p> <pre>ansible-galaxy install -p roles ANXS.postgresql</pre> <p>&nbsp;</p> <pre>&nbsp;downloading role 'postgresql', owned by ANXS<br />&nbsp;no version specified, installing v1.0.3<br />&nbsp;- downloading role from<br />&nbsp;- extracting ANXS.postgresql to roles/ANXS.postgresql<br />ANXS.postgresql was installed successfully</pre> <p>You should have output that resembles that, or something vaguely similar.</p> <p>The next thing to do is to integrate that role into our playbook. 
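</p> <p>As a rough sketch of what that integration can look like (the role variable names below are illustrative - check the role's defaults/main.yml for the real ones):</p>

```yaml
# Hypothetical play pulling in the Galaxy-installed role, with role
# variables set inline -- adjust names and values to match the role's defaults
- name: Configure staging server
  hosts: staging
  sudo: yes
  roles:
    - common
    - role: ANXS.postgresql
      postgresql_version: 9.3
      postgresql_databases:
        - name: myapp_staging
```

<p>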
&nbsp;</p> <p>In <a href="">tutorial.yml</a>, you can see that there's a vars: section in the play definition, as well as some variables included when a role is listed.</p> <p>This also introduces a slightly different way of specifying a role within the playbook, where you can follow each role up with the required options.</p> <p>There's an option within <strong>ANXS.postgresql</strong> to use monit to ensure that postgresql server is always running. &nbsp;If you want to enable this, you will also need to install the <strong>ANXS.monit</strong> role.</p> <p>In a way not entirely different to pip freeze, and the requirements file, you can run</p> <pre>ansible-galaxy list -p roles/ &gt;&gt; galaxy-roles.txt </pre> <p>and then be able to reimport the whole bunch of useful roles with a single command:</p> <pre>ansible-galaxy install -r galaxy-roles.txt -p roles</pre> <p>I've determined from past experience that the following Galaxy roles tend to play nicely together, and will proceed to install them in the tutorial playbook so you get some idea of how a full deployment workflow might look for a simple application.</p> <p>These are the roles I've used..&nbsp;</p> <pre>&nbsp;ANXS.apt, v1.0.2<br />&nbsp;, v1.0.1<br />&nbsp;ANXS.fail2ban, v1.0.1<br />&nbsp;ANXS.hostname, v1.0.4<br />&nbsp;ANXS.monit, v1.0.1<br />&nbsp;ANXS.nginx, v1.0.2<br />&nbsp;ANXS.perl, v1.0.2<br />&nbsp;ANXS.postgresql, v1.0.3<br />&nbsp;ANXS.python, v1.0.1<br />&nbsp;brisho.sSMTP, master<br />&nbsp;EDITD.supervisor_task, v0.8<br />&nbsp;EDITD.virtualenv, v0.0.2<br />&nbsp;f500.project_deploy, v1.0.0<br />&nbsp;joshualund.ufw, master</pre> <p>ANXS provide a great many roles which all play nicely. &nbsp;Some of those are included as they are dependencies of other roles. 
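</p> <p>Role dependencies like these are declared in each role's meta/main.yml; a hedged, illustrative fragment (the role name and condition shown are made up for the example):</p>

```yaml
# roles/SOME.role/meta/main.yml -- illustrative excerpt showing how a
# Galaxy role can pull in another role as a dependency
dependencies:
  - { role: ANXS.monit, when: monit_protection is defined }
```

<p>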
&nbsp;I tend to use sSMTP to forward local mail to Sendgrid, because I hate running email servers.&nbsp;</p> <p>f500.project_deploy is a capistrano-like deployment role for Ansible which supports the creation of symlinks to the current deployed version (which subsequently allows rollbacks).</p> <p>I don't want to go into the process of explaining how to modify this to deploy a Django application; I'm going to assume you've got enough information to figure that out for yourself. &nbsp;</p> <p>I've also added the ufw role, which configures Ubuntu's ufw package, a neat interface to IPTables. &nbsp;</p> <p>Basically, it should be quite easy to see how it is possible to build a playbook without having to write quite so much in the way of new ansible tasks/modules.</p> <p>&nbsp;</p> <h2>Other Useful Commands:</h2> <pre>ansible-galaxy init [role name]</pre> <p>This will create a role in a format ready for submission to the Galaxy community.</p> <pre>ansible-galaxy list</pre> <p>Show currently installed roles.</p> <pre>ansible-galaxy remove [role name] </pre> <p>Removes a currently installed role.</p> <h2>Endnote</h2> <p>When you look at the list of available roles, it's quite staggering what you could possibly integrate, without having to do too much coding yourself.</p> <p>It's fantastic. &nbsp;At the time I wrote this article, there were 7880 users, and 1392 roles in total. &nbsp;It's also growing rapidly day on day.</p> <p>There's plenty more information on the <a href="">Galaxy intro page</a>, which covers how to share your own roles.</p> VPN Technologies: A primer <p><strong>VPN Technologies: A Primer:</strong></p> <p>&nbsp;</p> <p><strong>What does VPN stand for?</strong></p> <p>Virtual Private Network. &nbsp;Moving on...&nbsp;</p> <p>&nbsp;</p> <p><strong>What is a VPN?</strong></p> <p>A VPN is a mechanism to extend a private network (like your LAN [Local Area Network]) across a public network (like the Internet).
&nbsp;The upshot of this is, that you can connect two separate computers, each on their own LAN, across a VPN so that they appear to be on the same network; which, in a sense, they are.&nbsp;</p> <p>Typically, a VPN will include some form of encryption, so that the traffic traversing the public network isn't identifiable as being part of either of the interconnected private networks.</p> <p>&nbsp;</p> <p><strong>What kinds of VPN are there?</strong></p> <p>Well, that's a big question. &nbsp;Basically, there's two main types. &nbsp;</p> <p>The ones that connect PCs to LANs (Like your laptop to a Work / Corporate network). (<strong>Remote-Access</strong>)</p> <p>The ones that connect two LANs together (Like between a business and their supplier). (<strong>Site-to-Site</strong> (or <strong>LAN-to-LAN</strong>))</p> <p>&nbsp;</p> <p>There's more to it, though.</p> <p>We can classify a VPN by the type of encryption and protocol used when the traffic traverses the public network, and also by the <a href="">OSI network layer</a> they present at.&nbsp;</p> <p>&nbsp;</p> <p><strong>Main types:</strong></p> <p><strong>Internet Protocol Security (IPsec)</strong> - My personal favourite.&nbsp;</p> <p><strong>Transport Layer Security / Secure Sockets Layer (TLS/SSL) </strong>Commonly referred to as SSL VPNs. 
&nbsp;</p> <p><strong>Datagram Transport Layer Security (DTLS) </strong>- used in Cisco AnyConnect and OpenConnect.</p> <p><strong>Microsoft Point-to-Point Encryption (MPPE)</strong> - provides encryption over PPTP connections.</p> <p><strong>Microsoft Secure Socket Tunneling Protocol (SSTP)</strong> - tunnels PPP or L2TP traffic over an SSL connection.</p> <p><strong>Secure Shell (SSH) </strong>- OpenSSH provides a VPN mechanism to forward network connections over an SSH connection.</p> <p>It occurs to me that there's two types of site-to-site VPN also, <strong>Policy Based</strong>, and <strong>Routed</strong>.</p> <p>&nbsp;</p> <p><strong>Policy Based</strong> VPNs are clever. &nbsp;They have access lists, made up of rules to match traffic on the LAN that should be sent over the VPN. &nbsp;This might be something like "Match all VoIP traffic, and send it over the VPN to x.x.x.x", or "Match anything that looks like HTTP or HTTPS, and send it to y.y.y.y".</p> <p>This is a bit like an implementation of <a href="">Split Tunneling</a>.&nbsp;</p> <p><strong>Routed VPNs</strong> aren't anywhere near as clever, and basically replace the default route for your LAN with a path across the VPN, so that all traffic not destined for the LAN goes out across the VPN: "If destination is not on the LAN, send it to the VPN address", and so on.</p> <p>&nbsp;</p> <p>There's a couple of protocols which *could* be described as VPNs but which, unlike protocols such as IPsec, offer no encryption.&nbsp;</p> <p>Firstly, <strong>Point-to-Point Tunneling Protocol (PPTP)</strong> - First proposed as an RFC in 1999, now widely considered cryptographically broken and, hence, insecure. &nbsp;PPTP leverages a Generic Routing Encapsulation (GRE) tunnel, and a non-standard packet format.
&nbsp;The GRE tunnel between the two networks carries Point-to-Point Protocol (PPP) traffic, which can theoretically encapsulate IPX as well as IP traffic - although, I'm willing to bet nobody's actually using IPX over PPTP..</p> <p>PPTP is, however, well supported, with native clients on iOS, OSX, Windows and Android. &nbsp; On the other hand, it's about as secure as a net curtain is against accidental nakedness exposure. &nbsp;I wouldn't recommend it under any circumstances.</p> <p>&nbsp;</p> <p><strong>Microsoft Point-to-Point Encryption (MPPE) </strong>is probably worth a brief mention here. &nbsp;It's a mechanism for encrypting traffic over a PPTP connection. &nbsp;It uses RSA RC4 (Danger Will Robinson), and 40, 56 and 128 bit keys.&nbsp;</p> <p>I don't think I've found a use for it so far, so.. moving on..</p> <p>&nbsp;</p> <p>There's also <strong>Layer 2 Tunneling Protocol (L2TP):</strong></p> <p>L2TP alone does *not* provide authentication. It's merely an advanced tunnel (somewhat based on PPTP), which allows more support for transit over non IP networks (such as Frame Relay and ATM).</p> <p>Internet Protocol Security (IPsec) is often used on top of L2TP to provide encryption, confidentiality and integrity. 
This is commonly known as L2TP/IPsec.</p> <p>L2TP is common as "carrier-grade" tunneling when a reseller of a broadband (typically ADSL) service is using somebody else's network.</p> <p>Without the encryption (and so on) provided by IPsec, L2TP isn't *really* a VPN technology in its own right, more a protocol used to enable VPNs to work.</p> <p><strong>SSL VPNs</strong></p> <p>Traditional SSL/TLS VPNs (excluding, for the moment, OpenVPN), utilise TLS/SSL encryption to provide either a portal (common for remote access to corporate web-based IT resources - like Webmail...), or an SSL-encrypted tunnel, providing end-to-end security encrypted by the TLS/SSL protocol suite.</p> <p><em>SSL Portals</em> are commonly used for remote access to corporate IT resources, such as intranets (technically, this makes them Extranets), webmail, or similar applications. &nbsp;The user accesses the portal via their web browser, and typically provides some form of 2-factor authentication, such as an RSA SecurID token's password in addition to their standard authentication information.</p> <p><em>SSL Tunnels</em> allow an initial connection via a web browser to open further, secured connections to remote resources via the use of Java Applets, or ActiveX controls. &nbsp;This is commonly used to provide secure remote access to remote desktop sessions.&nbsp;</p> <p>Typically, SSL VPNs will be solely browser based, making them tricky to implement for distinctly non-browser protocols.</p> <p>Personally, I don't like SSL VPNs, as they're no substitute for a proper, encrypted tunnel protected by IPsec, and they often use questionable key lengths for the SSL connection itself, which often makes me wonder whether it's worth encrypting at all.</p> <p><strong>OpenVPN</strong></p> <p><a href="">OpenVPN</a> is cool. &nbsp;It's open-source, and supported on pretty much every platform I can think of..
There's even an iOS client somewhere.</p> <p>OpenVPN uses OpenSSL to provide encryption of the tunnel, and control for the tunnel. &nbsp;In terms of authentication, there's a choice of pre-shared keys (a password known by both the client, and the VPN endpoint), an SSL x.509 certificate, and username and password. &nbsp;It's also possible to combine them to require both a valid certificate and a username and password, if desired.</p> <p>As OpenVPN runs on top of Linux, it's easy to deploy, and very configurable. &nbsp;I've actually implemented it in a number of environments in the past, mostly for remote-access scenarios, although it is possible to use it in a site-to-site capacity.</p> <p>Clients are available for almost every operating system you can think of (however, I can't think of a way to connect it directly to a Cisco router).</p> <p>&nbsp;</p> <p>On to my favourite VPN Technology:</p> <p><strong>Internet Protocol Security (IPSec)</strong></p> <p>Can be a right pain in the arse to configure, however, once it's up, it's typically rock solid.&nbsp;</p> <p>Provides both transparent Site-to-Site tunnels, as well as remote-access connections.</p> <p>Most of my professional experience of VPNs has been dealing with IPSec, and the majority of that has been working on Cisco platforms. &nbsp;</p> <p>The key thing with IPSec is that both ends must have the same configuration parameters, otherwise nothing works. &nbsp;In some ways, this makes everything awkward, but on the other hand, it makes for better security, as the VPN endpoint (or server, if you'd rather), won't accept any old shit, it has to be an exact match for what it's expecting.&nbsp;</p> <p>Most major firewall vendors have an implementation of IPSec.
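</p>

<p>To give a flavour of that "both ends must match" point, here's a hypothetical Openswan/strongSwan-style ipsec.conf fragment for a site-to-site tunnel (all names, subnets and addresses are invented); the far end needs the mirror image of this configuration:</p>

```
# /etc/ipsec.conf - illustrative sketch only
conn office-to-dc
    left=
    leftsubnet=
    right=
    rightsubnet=
    authby=secret
    ike=aes256-sha1-modp1536
    esp=aes256-sha1
    auto=start
```

<p>Get a single proposal, key or subnet wrong on either side, and nothing negotiates.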
&nbsp;Cisco and Juniper are the two I understand the best, but there's also implementations for Linux and similar, in OpenSwan and StrongSwan.</p> <p>&nbsp;</p> <p><strong>Dishonourable Mentions:&nbsp;</strong></p> <p><a href="">Hamachi/LogMeIn VPN</a>:</p> <p>Proprietary VPN technology. &nbsp;Currently squatting on (rightfully allocated by the MOD). Requires a "mediation server" provided by Hamachi - Fuck knows what this does.</p> <p>I wouldn't trust it, frankly.&nbsp;</p> <p><strong>Footnotes:</strong></p> <p>I'm not going to comment on whether various protocols have been backdoored by various agencies. &nbsp;From my point of view, there's still a lot of FUD floating around post Snowden, and I really don't want to get involved.</p> <p>That said, there are some things I will say. &nbsp;3DES doesn't provide enough security really for me to want to use it. &nbsp;Ditto RC4. &nbsp;</p> <p>There's probably stuff I've missed off this list, but there's a comments section below.&nbsp;</p> Finally, a place for IT "shopping" questions. <p>I've been a ServerFault participant for quite a long time now. &nbsp;Four years, and five months, to be precise.</p> <p>I've gained over 20,000 reputation points, and recently been elected to a community moderator. &nbsp;It's still one of my favourite go-to places when I need an answer for something.</p> <p>However, one thing that's not allowed anywhere (seemingly) on the StackExchange network is the so-called<a href=""> Shopping Question</a>. &nbsp;</p> <p>While I agree, for the most part that traditional shopping questions go out of date remarkably quickly, I don't believe that it's a good enough reason to ban them outright. 
&nbsp;</p> <p>I was talking about this in ServerFault Chat recently, and I thought it was about time to have a go at a spin-off site, <strong>just </strong>for shopping questions.</p> <p>Here it is: <a href="">TechShoppingAndInfo</a></p> <p>It's a little rough around the edges; it's basically just OSQA with a couple of minor modifications, hosted by me.&nbsp;</p> <p>If you do manage to generate an error, or something's not as it should be, then let us know! Email: <a href=""></a></p> <p>Or tweet at us: <a href="">@techshoppinginf</a></p> <p>I'd love to see some recommendations or comments.</p> <p>Oh, and get populating that site!</p> Things that concern me: Unified Threat Management (UTM) <p>We live in a dangerous world. &nbsp;</p> <p>It should come as no surprise to anyone who is a Citizen of the Internet that interacting with others on the 'net is a somewhat dangerous business.</p> <p>Riskier still is operating a server, or entire network, with a direct connection to the internet.</p> <p>The number of denial of service and code execution exploits has risen dramatically in the last decade, unsurprisingly. &nbsp;The number of black-hat hacking attempts (to use "hacking" from the vernacular of the media - rather than its truer, nobler meaning) has also risen.&nbsp;</p> <p>The thing is, whilst this kind of attack used to be perpetrated by lone, troubled individuals, access to exploits and malware is now far simpler, and easier than ever before.</p> <p>The upshot of this is that every internet-connected system is at risk. &nbsp;There's a requirement for strong firewalls and access mechanisms, powerful packet filters and intrusion detection systems.</p> <p>Looking solely at the edge here, it's easy to see why the measures need to be in place to protect systems.
&nbsp;You have a house, but you keep the door shut, right?</p> <p>I've noticed recently that there's a somewhat worrying trend in network security, towards 'all-in-one' devices, comprising a Router, Firewall, VPN Endpoint, Intrusion Detection/Prevention Services (IDS/IPS), Anti-virus, Anti-malware, Anti-spam, Web Proxy, Data Loss Prevention (DLP), VoIP Gateway (often referred to as ALG, or sometimes Session Border Controller (SBC)).</p> <p>These devices are generally referred to as Unified Threat Management (UTM), or sometimes Next-Generation Firewalls (NGFW).</p> <p>In a traditional (non-UTM) network, there'd be perhaps, 3 or 4 (or more!) individual boxes, providing a different feature set from the above list.</p> <p>&nbsp;</p> <p>For example:&nbsp;</p> <p><img src=";h=360" alt="" width="480" height="360" /></p> <p>&nbsp;</p> <p>Simply put, there are a number of physical devices, each performing its own specific role. &nbsp;</p> <p>The IDS sits between the Edge Router and the inside of the Firewall, comparing traffic, so that if traffic that's detected on the outside, is visible on the inside, it shows that the firewall might not be as effective as it could be.</p> <p>The users' HTTP traffic can be filtered through the Web Proxy, if desired. &nbsp;VPN sessions can be terminated inside the firewall on the VPN Endpoint.</p> <p>There's a lot of different devices here, and there's a reasonable management overhead, which is why the managers aren't smiling.</p> <p>This is kinda network design 101.</p> <p>In a UTM deployment scenario, all of those services (plus a few more, for good measure) are deployed (and probably enabled out-of-the-box), on a single device.</p> <p>&nbsp;</p> <p>For example:</p> <p><img src=";h=360" alt="" /></p> <p>The UTM device in the example, has a Router, Firewall, VPN, Proxy, Anti-virus, Anti-spam, Anti-malware, Data Loss Prevention, Wireless LAN controller, IDS/IPS, and an ethernet switch all bundled up together. 
&nbsp;(I'm not referring to a specific product here, but I've seen all of those features in some form on a variety of UTM devices.)</p> <p>There's also a Centralised Management System, with single sign-on, to make for 'easier' management.</p> <p>&nbsp;</p> <p>This makes the managers happy. &nbsp;They've got a "Single Pane of Glass" management interface, all their eggs in one basket, and so on.</p> <p>There are many things about UTM that make me generally uneasy. &nbsp;</p> <p><strong>1. All your eggs are in one basket.</strong></p> <p>I'm not saying here that you couldn't have multiple UTM devices in a High Availability (HA) pair, of course you could, and probably should.. However, what I am saying is that they'll both have to be identical, and probably from the same vendor - I've not found a UTM device that exhibits full interoperability with another one of a different vendor, in an HA arrangement.</p> <p>This also introduces your UTM appliance as a Single Point of Failure. &nbsp;If the device is overwhelmed, which is entirely possible (see point 4), and stops responding to traffic, then your network is going down. &nbsp;Hard.</p> <p>&nbsp;</p> <p><strong>2. Single Sign On.&nbsp;</strong></p> <p>Generally regarded as a Good Thing&trade;. &nbsp;The staggering downside is that if an attacker gains access to one password, they have access to the entire box, and I'm pretty sure they'll start turning things off. &nbsp;Probably starting with the IDS/IPS, and/or the logging/audit trail.</p> <p>I'd much rather have a number of secure passwords to a range of services, than one password [to rule them all and in the darkness bind them].</p> <p>&nbsp;</p> <p><strong>3. Defence in Depth.&nbsp;</strong></p> <p>The primary tenet of running any kind of secure system. &nbsp;Basically, have lots of different layers of security, each with tight access controls between the layers.</p> <p>UTM effectively destroys this, on two fronts.
&nbsp;Firstly, everything is on one device, implying that if the device is compromised, so are all the services.</p> <p>Secondly, it's not usually obvious, or visible, what happens to the traffic as it passes between services. &nbsp;Is it all on one big fat pipe, or is there some level of separation?</p> <p>Conversely, in a network with lots of devices, it's possible to choose different devices from different vendors, and as a result, you'll have some greater level of assurance that you'll be able to block an incoming attack, because exploits for one platform won't be effective on one from a different vendor (hopefully!).</p> <p>&nbsp;</p> <p><strong>4. Performance</strong></p> <p>Looking at the non-UTM network map, there are 5 different services, each on its own device. &nbsp;Let's pretend that we've got a 1Gbit internet connection, and a 1Gbit LAN.</p> <p>Each of those devices should have the processing power to handle 1Gbit of traffic, without breaking a sweat (futureproofing, etc..)</p> <p>Now, putting that onto a single UTM box, we've got the requirement to handle 5Gbit of traffic internally (although it probably needs to be higher still). Secondly, the CPU and memory requirements are higher. &nbsp;It'll need *at least* 5 cores, because you *really* don't want CPU time contention when there's packets to be handled - otherwise, everything will slow down.&nbsp;</p> <p>Imagine you've got a UTM device running Linux, on a 2 core system. &nbsp;One core will forever be doing the general OS stuff, leaving you with one free one. &nbsp;Not great for performance.</p> <p>These services do need CPU and memory resources just like anything else would.</p> <p>&nbsp;</p> <p>Now consider all the services on the UTM device, and recall that they'll probably all be enabled on the box, by default. &nbsp;Every packet entering the device will be handled by each service. &nbsp;As you can imagine, this places a lot of load on the CPU and internal interconnects.
&nbsp;Each service will have to allocate RAM for buffers, and so on. &nbsp;</p> <p>As you enable multiple services, performance will take a hit.</p> <p>&nbsp;</p> <p>Now, I'm not saying that UTM is *always* a bad thing. &nbsp;All I'm saying is that I'd have to have a bloody good reason to deploy a UTM device in a network. &nbsp;</p> <p>Probably the best reason I can think of comes from analysing the potential risks and impacts of an attack, and subsequent breach. &nbsp;</p> <p>Budget also plays an important part in this, because UTM devices do tend to be less expensive than a full array of differing devices, from different vendors, each performing its own task.</p> <p>&nbsp;</p> <p>I'd rather have a higher management overhead, and greater security, defence in depth and so on, than a single sign-on, single password, single device.&nbsp;</p> <p>Some of the management overhead can be mitigated by the inclusion of a log management platform, or a Security Information and Event Management (SIEM) device, which will aggregate logs, events, alarms, and so on from a number of different devices and services, presenting the output in a single management view. &nbsp;This is significantly different from having all of the management onboard the UTM device, as it still grants you a level of isolation between management traffic and dirty, internet-facing traffic.</p> <p>&nbsp;</p> <p><strong>TL;DR</strong></p> <p>Unified Threat Management is not a silver bullet. &nbsp;It might fit for some deployment scenarios.
&nbsp;It's probably suitable for small office / home office usage.</p> <p>If you do deploy a UTM appliance, bear in mind the following:</p> <p><strong>*</strong> You potentially sacrifice defence in depth security for management simplicity.</p> <p><strong>*</strong> If it's performance you want, either turn off some of the features (and potentially sacrifice security even further), or have dedicated security appliances.</p> <p><strong>* </strong>UTM concerns me a bit, and it should probably raise concerns with you too.</p> <p>&nbsp;</p> FakeRAID and Virtualisation <p>I've been tinkering with Virtualisation quite a bit recently.</p> <p>For a new project, without an allocated budget, I was asked to provide some simple Virt. capability, to hold them over until they get budget approval, and can buy their own hardware.</p> <p>I managed to rescue a Dell R510 server from the scrap heap, only to discover that it contains a Dell S300 "FakeRAID" card, that's not supported by Linux (so KVM, Xen et al are out). &nbsp;It's also not supported by VMWare ESXi, so that's out. &nbsp;The only OS that *is* supported is Windows, and Windows Server.</p> <p>Dell's driver set for the card suggests that Server 2008R2 is the latest supported server. &nbsp;It actually turns out that not only is the driver compatible with the installer of Server 2012, but given the way the local storage array is presented to the Server OS, it's also compatible with Hyper-V.</p> <p>So from spending a few weeks digging into VMWare vCenter and so on, to now having one host running Hyper-V, it's been quite an interesting journey.
&nbsp;Not only do I now have a working server (utilising a FakeRAID card - Something I'm not entirely happy with, but it's definitely better than nothing), but also the ability (much to my surprise), to run Linux VM guests on it.</p> <p>Installing Windows Server 2012 on this R510 is surprisingly easy, but does need the Dell S300 drivers to be on a USB stick ready for the installer to ask for them. &nbsp;When you get to the bit where you partition the disk, you need to provide a path to the driver .inf file, and then after that, it just appears as one big-ass hard disk, ready to be partitioned.</p> <p>I allocated a 70GB chunk for the OS installation, and left the rest to be allocated later on when I installed Hyper-V.</p> <p>I'd heard before that Hyper-V supports Linux quite happily, but never actually had a chance to experiment with it.&nbsp;</p> <p>Suffice to say, so far, I'm very impressed. &nbsp;Not so impressed I'd ditch VMWare for the grand scheme of things, regarding implementing Virt. around here, but impressed enough to hand this server over to the team for them to use.</p> <p>&nbsp;</p> <p>In the next few weeks, I'll be trying to get my hands on the vCloud (Ops, Automation, Chargeback, etc.. ) toolsuite, for some further experimentation. &nbsp;Looking forward to it enormously.</p> When should I use eval()? <h1>NEVER.</h1> <p>&nbsp;</p> <p>That's got that off my chest.</p> <p>&nbsp;</p> <pre>eval()&nbsp;</pre> <p>is possibly the most dangerous thing ever. &nbsp;It's basically a way to execute arbitrary code from a string or variable. 
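</p>

<p>To make that concrete, here's a short Python sketch contrasting eval() with the safer ast.literal_eval() mentioned further down (the input strings are invented examples):</p>

```python
# Sketch: why eval() on untrusted input is dangerous, and the safer alternative.
import ast

untrusted = "[1, 2, 3]"

# eval() will happily execute ANY expression it's given -
# imagine this string was "__import__('os').system('rm -rf /')" instead.
risky = eval(untrusted)

# ast.literal_eval() only accepts Python literals (strings, numbers, lists,
# dicts, tuples, booleans, None) - it refuses to execute code.
safe = ast.literal_eval(untrusted)
assert risky == safe == [1, 2, 3]

# Anything that isn't a plain literal is rejected outright:
try:
    ast.literal_eval("__import__('os')")
except ValueError:
    print("rejected")  # literal_eval raises rather than running the call
```

<p>Now, on with why you should never use it.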
&nbsp;</p> <p>Here's a few reasons why it's dangerous.</p> <p>It leaves you open to injection attacks.&nbsp;</p> <p>In Javascript, eval() forces the engine to drop into Interpreter mode, which slows down your application, and it will remain slow, as there's no opportunity for optimisation-level caching to take place.</p> <p>It's a bugger to debug, because there's no line numbers.</p> <p>In Javascript (client-side), eval() is dangerous because it exposes you to cross site scripting attacks. &nbsp;</p> <p>In server-side code, eval() is downright lethal, because it exposes the entire server to anything that the user wants to run.&nbsp;</p> <p>Python has a "safer" eval, called literal_eval in the ast module, which allows for parsing of user-provided data, without having to write a parser to sanitise it yourself. &nbsp;I'd still avoid it like the plague, given a choice.</p> <p>&nbsp;</p> <p>This is all fairly fresh in my mind, because I discovered a snippet of code somewhere (not disclosing where, as I'm doing the responsible thing and doing the disclosure properly), that was along the lines of&nbsp;</p> <pre>var jsonData = eval ("(" + string + ")");</pre> <p>Apparently JSON.parse() isn't good enough for them.&nbsp;</p> <p>Horrifying.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> How To: Find a Rogue DHCP Server on your network <p><strong>Symptoms:</strong> Some clients are unable to connect to the internet. Some clients report a different IP address, subnet mask and default gateway, compared to others.</p> <p><strong>Caveats: </strong>Without a managed switch fabric, this is considerably more difficult.</p> <p><strong>Diagnosis:</strong></p> <p><strong>1.</strong> Allow a device to get an IP address from the rogue server. 
&nbsp;You might need to disable the main DHCP server to allow this to happen - as DHCP is a broadcast protocol, it's really a case of the early bird getting the worm.</p> <p><strong>1b: </strong><a href="">Kyle Gordon</a> pointed out that this initially assumes that the DHCP server is the same as the default gateway. &nbsp;He suggests: &nbsp;<em>Usually /var/lib/dhcpcd/*.info or /var/lib/NetworkManager/* will contain the dhcp-server-identifier info on Linux, and something in the Windows Event Logs will show similar :-)</em></p> <p><strong>2. </strong>&nbsp;Once you've got an IP from the rogue, look at the ethernet adaptor's status, and get the IP of the default gateway. &nbsp;For this example, we'll call it</p> <p><strong>3.</strong> Ping the default gateway for a few seconds. &nbsp;We need to do this to populate the ARP table.</p> <p><strong>4.</strong> In a Powershell/Cmd/Terminal window, run the command to view the ARP table. &nbsp;On Windows, this is <strong>`arp -a`</strong>.</p> <p>&nbsp;</p> <p>What you're looking for is the mapping between the IP address and the Physical (MAC) address.</p> <p>&nbsp;</p> <pre>IP address            Physical Address<br />                      20:c9:d0:17:20:a4</pre> <p><strong>5.</strong> Go to a MAC vendor lookup site and paste in the found Physical/MAC address of the rogue. &nbsp;This will tell you who made the device.</p> <p>This works because all MAC address prefixes (OUIs) are registered with the IEEE, so you can look up a MAC address and it'll tell you who made the thing (roughly).</p> <p><strong>6.</strong> From the client with the rogue-assigned IP address, set up a long-running ping to the default gateway. &nbsp;We'll need this to confirm that it's been killed when we start unplugging/disabling ports.</p> <p><strong>7.</strong> Next, you want to open the management pages of all of the switches on the fabric of your network.
&nbsp;Every switch has a MAC Address Table where it keeps track of physical switchports, and the learned MAC addresses it's seen on those ports.</p> <p><strong>8.</strong> Looking at the list of address tables (I find it's helpful to copy/paste them into a text editor, then do a search on the MAC of the rogue.) see if you can track down a port that has *only* that MAC assigned to it. &nbsp; If there's a single port on a managed device, you can disable/shutdown the port.</p> <p><strong>9.</strong> Failing that, if you find that the MAC is in the table, but on a port with other devices too, say, port 1 has 5 other things, and the rogue is one of them, then that indicates that there's another distribution switch on port 1, and the rogue is connected to that.</p> <p><strong>10.</strong> Hopefully, you might have some clue as to what is on each port, distribution switch wise, especially if you have managed under-desk distribution switches, although this is generally unlikely.</p> <p>&nbsp;</p> <p><strong>11.</strong> &nbsp;Start hunting. &nbsp;You know that it's on the network, and can ping it (so you can tell when it's been disconnected). &nbsp;You know something about the device, the manufacturer. &nbsp;</p> <p><strong>12.</strong> &nbsp;As you unplug devices, check whether the ping stops.&nbsp;</p> <p><strong>13.</strong> When the ping stops, you've found the rogue. &nbsp;</p> <p><strong>14. </strong>&nbsp;Congratulate yourself by having a coffee, beer, or a non-stimulating beverage.</p> <p>I actually worked through this process with one of my <a title="Astound Wireless" href="">Astound Wireless</a> customers, last night, over a VPN. 
&nbsp;</p> <p>It's really relatively straightforward, but is made considerably easier with a managed switch fabric.&nbsp;</p> <p>In this case, the rogue turned out to be an Apple Airport Extreme, which do tend to cause havoc if misconfigured, or misconnected, as their default is to broadcast DHCP on the 3 LAN ports, which aren't obviously LAN ports, as they have the mysterious <strong>&lt;-&gt; </strong>symbol.&nbsp;</p> <p>I suspect whoever plugged it into the network should've connected the link to the building's switch fabric to the WAN port of the Airport Extreme, rather than the LAN. &nbsp;Or at the very least, disabled the DHCP server on the Airport.</p> <p>&nbsp;</p> <p>An ideal solution for preventing this kind of mishap is <a title="DHCP Snooping" href="">DHCP snooping</a>, but that *does* require a fully managed switch fabric, and a not-insignificant amount of management overhead.</p> FreeSWITCH on a Raspberry Pi. <p>I've had a <a href="">Raspberry Pi </a>for ages now.. I got one free courtesy of <a href="">Paypal at their Charity Hack in late 2012</a>, and our team (see photo, I'm there!) went on to use it to create the (World's First?) <a href="">Raspberry Pi based Wifi Hotspot</a>.&nbsp;</p> <p>I've wanted to do something potentially useful, definitely interesting, and probably rewarding with it for a while. &nbsp;</p> <p>I've also recently acquired an Arduino with Ethernet Shield, so that's also been on my mind for another hack platform. &nbsp;</p> <p><em>That aside for now, I've recently moved back from London, and into the basement flat at my parents' place. &nbsp;</em></p> <p><em>We had that flat built originally for my late grandfather, but after his death, it lay empty for a while, and as I'm now living here again, it's pretty much perfect for me.&nbsp;</em></p> <p><em>The only minor problem being that, until recently, it was effectively isolated from the rest of the house..
There was a dual pair of BT phone cable, originally to power a simple 9v intercom system, which whilst providing electrical connectivity for an intercom, wouldn't be suitable for FastEthernet, let alone Gigabit.</em></p> <p><em>So the other weekend, we attached a length of CAT5e cable, and used the existing phone cable to re-thread the new CAT5 cable... So now I've got ethernet down here.</em></p> <p><em>I've got an old CiscoLinksys SPA942 phone that I've had for at least 5 years, now.. And whilst that's all very well for dialing out, it's a little inconvenient for my folks, at least until I get a Malvern number on my SIP account, otherwise it's a national rate to dial my 0203 number..</em></p> <p>So I thought to myself, "Well, it's a 4-line phone..". &nbsp;Originally, I was going to get a 2nd hand retro Trimphone or something similar, rip out the guts, and use it to house the Raspberry Pi, plug in a USB soundcard, and USB Wifi Dongle, and then use that as a SIP handset, dialing out to my AQL VoIP service.</p> <p>But then I had a better idea.</p> <p>Why not compile <a href="">FreeSWITCH</a> onto the Raspberry Pi, then use my VoIP Phone down here to register to it, and use a VoIP Softphone on my parents' other devices. &nbsp;</p> <p>As it turns out, it only takes about 6 hours to compile FreeSWITCH (I did prune out <a href="">modules.conf</a>, disabling IVR, mod_flite and some other stuff I thought I wouldn't want/need.).</p> <p>I once compiled a kernel and Gnome 3.0 on an old 300MHz Via embedded PC, and that took 5 days.. 
I'm quite pleased by the Raspberry Pi's compile speed.</p> <p>I suppose if I'd been in a rush, I could've cross-compiled it.&nbsp;</p> <p>I'm no stranger to VoIP, but in the past, when I've been configuring PBXes, I've always used Asterisk, and whilst there *is* an Asterisk binary package in Raspbian, I don't *actually* like Asterisk, and their Dialplan config format makes my eyes bleed.&nbsp;</p> <p>My good friend Richard (<a href="">@pobk</a>) is a big fan of FreeSWITCH, so I figured it's about time I saw what all the fuss is about. &nbsp;If ever there's a good test for an application, it's running on a massively restricted system, in CPU power, disk space and memory alike..&nbsp;</p> <p>My only minor complaint is the number of libraries required to build/run FreeSWITCH..&nbsp;</p> <p>Here's what I ran to build:</p> <pre>sudo apt-get install build-essential<br />sudo apt-get install git-core build-essential autoconf automake libtool libncurses5 libncurses5-dev make libjpeg-dev pkg-config unixodbc unixodbc-dev zlib1g-dev<br />sudo apt-get install libcurl4-openssl-dev libexpat1-dev libssl-dev screen<br />screen -S compile<br />#inside a Screen<br />cd /usr/local/src<br />git clone git://<br />cd freeswitch<br />./<br />&lt;edit modules.conf&gt;<br />./configure<br />make &amp;&amp; make install &amp;&amp; make all install cd-sounds-install cd-moh-install</pre> <p><strong>Total installed size is 550MB.</strong> &nbsp;I'm sure I could get that down, as 500MB of that is the sounds/ directory.</p> <p>I think if I were having a 2nd attempt, I'd only bother with the 8kHz sample rate audio. (replacing cd-sounds-install with sounds-install)</p> <p>I also ran through this<a href=""></a></p> <p>Then configured an init.d script based on the one on that page..
Don't forget to change the FS_GROUP variable to "daemon"</p> <p>&nbsp;</p> <p>A brief (very) test with sipp, and after about a minute, it's handled 700 connections, and the load average is about 190 ;).</p> <p>It's still accepting calls, and the voice lag is about 10 seconds.</p> <p>As this'll only ever handle one or two calls concurrently, I think that's a pretty good result.</p> <p>I configured 5 devices, all using the default, preconfigured extensions, 1000-1004, and then set up a ring group.</p> <p>I'm using <a href="">Telephone</a> &nbsp;on my Mac, <a href=""></a> on my Android(s), and Arkphone on my mum's iPad.</p> <p>&nbsp;</p> <p>Seems to work pretty well.</p> Lightning Post: Dumping MS DNS to Bind <p>This is the first in a series of Lightning Posts, short snippets that I don't really have the time to write up into a full post, but they're interesting nonetheless.</p> <p>&nbsp;</p> <p>Lightning Post 1: How to export DNS data from Microsoft DNS to a zone file.</p> <p>"Why'd you wanna do that?", I hear you cry.</p> <p>Well, it's entirely possible to use BIND (or PowerDNS, for that matter) as a DNS server instead of the integrated MS DNS service that's bundled with Windows Server.</p> <p>When you create an Active Directory, a process creates some service records, like _ldap._tcp.ForestDnsZones.yourdomain.tld and so on.</p> <p>Well, these aren't impossible to create by hand, but it's nice to have a dump for these things at least initially.&nbsp;</p> <p>So:&nbsp;</p> <p>Log in as Administrator, and load up a PowerShell console:</p> <p>&nbsp;</p> <pre>dnscmd YourDomainController.tld /ZoneExport YourDomain.fqdn.tld YourDomain.fqdn.tld.txt </pre> <p>&nbsp;</p> <p>Then you can look in %windir%\system32\dns\ and find the txt files containing your zone data.</p> <p>&nbsp;</p> <p>Done.</p> One size does not fit all <p>&nbsp;</p> <p>The tech interview process is broken.</p> <p><strong>Fundamentally.</strong></p> <p>About a month ago,<a 
href="/blogish/on-interviews/#.UUzk4lvfxZ8"> I wrote about how I've had some terrible interview experiences</a> over the last 6-odd weeks or so.&nbsp;</p> <p>I also just<a href=""> read this</a>, and agree with everything said there. &nbsp;I think there's more to say.</p> <p>I'm disheartened to find that these aren't the exceptions, they're the rule.&nbsp;</p> <p>The thing is, companies seem to have one type of interview, <strong>The Developer Challenge</strong>.</p> <p>That works fine, as a whole, if you're looking for <strong>Developers</strong>. &nbsp;</p> <p>But I'm not a developer. &nbsp;Not really, anyway. &nbsp;I can write code in a variety of different languages, I can pick up most sane languages without too much trouble. &nbsp;I still struggle with functional things like OCaml, F#, Haskell (and for different reasons, Erlang).</p> <p>But Python, Ruby, Java, C#, C++ and that ilk, I'm generally fine with.</p> <p>I know lots and lots about how Python data structures are neat, and why dictionaries are awesome, and how you can play with a string as if it were a list.</p> <p>I know why polynomial-time algorithms are bad.</p> <p>I know things like how you find DNA sequence alignments (I used to be a bioinformatician).</p> <p>I was explaining to a friend of mine (an engineer at Google) that if I were asked to make a search engine as part of an interview question, then I'd do the following:</p> <p><span style="white-space: pre;"> </span><strong>1:</strong> wget --mirror</p> <p><span style="white-space: pre;"> </span><strong>2:</strong> load that data into Elasticsearch</p> <p><span style="white-space: pre;"> </span><strong>3:</strong> build a noddy web application with Flask and PyES to search the Elasticsearch index</p> <p><strong><span style="white-space: pre;"> </span>Done.</strong></p> <p>&nbsp;</p> <p>Because for me, a very pragmatic individual, I'd rather be building on other people's working code, functioning libraries, assembling building blocks to make one 
large system out of the component parts. &nbsp;</p> <p>If I need systems-glue, I'll typically break out Python, or in extreme cases, Java.</p> <p>If I want to search a list of words in better than O(n) time, I'll use a Trie. &nbsp;This I know from having read books, and Wikipedia, and StackOverflow and so on. &nbsp;</p> <p>What I won't waste my time doing is trying to implement my own Trie library, because I know of at least 2 perfectly good, generic(ish) libraries, in a variety of languages, for doing what I want to do.</p> <p>Doing things this way means that someone else (someone far better at data structures than me) has tested it, found edge cases, written documentation, fixed bugs and maintained the damn thing.</p> <p><strong>Interesting side note:</strong> If I&rsquo;ve had to resort to using a library from PyPI (or GitHub) for a Trie, it&rsquo;s fairly indicative, to me, that Python needs one in its standard library. &nbsp;That&rsquo;ll piss off the interviewers for sure. &nbsp;Next thing you know, we&rsquo;ll have to do everything in x86 Assembly.</p> <p>I don't see my approach as anything other than realistic. &nbsp;It's how I'd solve the problem for work. &nbsp;I'm not averse to reading through the library (and looking for insanity), or fixing bugs and sending them upstream, but in a real-life situation, I'd rather not spend my life whittling a log into a wheel. &nbsp;I'd rather use an existing template for a wheel.&nbsp;</p> <p>It occurs to me that there aren't that many problems that people come across which aren't a copy, or a subset, of an existing, solved problem. &nbsp;If you wanted to clone a sheep, you wouldn't work everything out from first principles, you'd read the research from the team who cloned Dolly, and work from there.</p> <p>And another thing.. We need to stop with the whole <em>"Here's a challenge, come back in X amount of time with a solution."</em> &nbsp;That approach is LAME. &nbsp;</p> <p>I like to discuss things with people. 
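</p><p>(For illustration, the Trie mentioned above is only a few lines of dict-walking. This is a toy sketch, purely to show the shape of the thing; a proper library has tested the edge cases this hasn't:)</p>

```python
# Toy Trie: nested dicts, with a sentinel key marking end-of-word.
END = "$"

def trie_insert(root, word):
    # Walk/create one nested dict per character, then mark the terminus.
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node[END] = True

def trie_contains(root, word):
    # Follow the characters down; the word is present only if we land
    # on a node that was explicitly marked as an end-of-word.
    node = root
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return END in node

words = {}
for w in ("cat", "cats", "dog"):
    trie_insert(words, w)
```

<p>Membership tests then cost O(length of the word) rather than O(number of words), which is the whole point of reaching for a Trie in the first place.</p><p>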
&nbsp;I don't like waiting for a day to get an answer, by the time my question has gone through the recruiter/HR/tech-lead. &nbsp;<br />Just do the thing in a Skype session with me, or TeamViewer, or something.&nbsp;</p> <p>Whilst I was discussing a particular problem with my Google friend, he basically suggested that even without a decent answer to the challenge, explaining my thought processes is as good as a win from the hiring point of view. &nbsp;You just don't get that kind of interaction with a <em>"go code this and come back"</em> challenge. &nbsp;</p> <p>Sure, it's more time-consuming for the interviewer, but surely they're already at the point where they want to spend time with you. &nbsp;I mean, having passed the phone screen, it's time for something a little more in-depth. &nbsp;I get the whole, <em>"we don't know you, come to us with this challenge"</em> spiel that Facebook used to do (remember the challenges page?).</p> <p>Actually, why not just check out my GitHub/Bitbucket repositories? &nbsp;There's loads of stuff I've written there. &nbsp;</p> <p>In a similar vein, I'm brought back to interview questions (and scenarios) where I've been told that searching the web is out of the question. &nbsp;</p> <p>I'm always left wondering whether searching the web for an answer to a problem is also disallowed for the other developers (who do work there). &nbsp;If it is, I'm 99% sure I don't want to work there anyway.</p> <p>It's effectively saying: <em>"We don't want you to learn from other people's mistakes. &nbsp;You have to make them yourself."</em> &nbsp;That's not a realistic expectation, IMO.</p> <p>Some of the best interviews I've ever had have taken place in a pub, in a coffee shop, in a cafe, over lunch. &nbsp;</p> <p>During one of the best technical challenges I've done, I said to the interviewer: <em>"It's been a while since I did this, I'm gonna see what the ServerFault community says"</em>. 
&nbsp;In this case, this was an acceptable answer, and worked perfectly.</p> <h2>TL;DR [1]: Stop asking interview candidates to reinvent the wheel. Ask them questions about why they've chosen the tools they have.</h2> <p>Moving on:</p> <p>Interview challenges should be more closely tailored to the actual job title. &nbsp;This "one true developer challenge" thing works fine for hiring developers. &nbsp;God help the front-end UX engineer who gets sent the same challenge as a software engineer. &nbsp;</p> <p><strong>The challenge set should probably be related to a real-life problem that fits into their job remit. &nbsp;</strong></p> <p>For <strong>developers</strong>, fine, ask them to write algorithms.</p> <p>For <strong>QA engineers</strong>, ask them about Selenium, or Cucumber.</p> <p>For <strong>Sysadmins/DevOps/SRE/Systems Engineers/Systems Architects</strong>, ask questions like the Elasticsearch one, or <em>"How would you aggregate logfiles from a cluster of N servers?"</em>, or <em>"How would you allow a cluster of N servers to share an assets directory?"</em>, or <em>"How would you check database replication lag?", </em>or <em>"Why are 2 switches at the core better than one?"</em>.</p> <p>For <strong>Designers/UX/JS/CSS engineers</strong>, ask them something like: <em>"Design an interface for an ATM to make it more user-friendly. &nbsp;Bonus question: Make it more user-friendly to disabled users."</em> or <em>"How would you make this site look awesome on a mobile device?"</em></p> <p>&nbsp;</p> <p>Hopefully you can see the difference between the different classes of question, and the different classes of job. &nbsp;<br />It's no good asking a greengrocer which is the leanest cut of beef, just like it's not terribly conducive to ask a sysadmin to solve a problem that might require the use of a Red-Black Tree. 
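</p><p>(To give a flavour of the sysadmin class of question: the replication lag one has a perfectly discussable, concrete answer. On MySQL you'd read Seconds_Behind_Master out of SHOW SLAVE STATUS; here's a toy sketch of pulling that out of the command-line output. The sample text below is made up for illustration, not captured from a real server:)</p>

```python
# Pull Seconds_Behind_Master out of `mysql -e "SHOW SLAVE STATUS\G"` output.
import re

def replication_lag(status_text):
    m = re.search(r"Seconds_Behind_Master:\s*(\d+|NULL)", status_text)
    if not m or m.group(1) == "NULL":
        return None  # no replication, or the SQL thread has stopped
    return int(m.group(1))

# Illustrative sample of the \G-style output, made up for this sketch.
sample = """
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 3
"""
```

<p>A NULL there means replication isn't running at all, which is a rather bigger problem than lag, so it's worth treating it separately from zero.</p><p>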
&nbsp;</p> <p>It's possible they'll know the answer, but I sure as hell wouldn't fail them for not knowing.&nbsp;</p> <h2>TL;DR [2]: Tailor interview questions to match what you expect of the candidate In Real Life.</h2> <p>Interviews are not baseball caps. &nbsp;</p> <p><strong>One size does not fit all.</strong></p> <p>&nbsp;</p> <p>&nbsp;</p> Why do corners get cut? <p> <p>How many times have you found something at work that&rsquo;s not *quite* how it should be? &nbsp;Perhaps you&rsquo;ve got a server with &ldquo;Green&rdquo; drives in? &nbsp;Or a cheap unmanaged switch somewhere. &nbsp;Or something with a self-signed SSL certificate. &nbsp;Or a linux box instead of a router. &nbsp;Or a desk fan propped up behind a server, because otherwise it overheats. &nbsp;Or something with a big label above that states in large, unfriendly letters <em><strong><span style="text-decoration: underline;">&ldquo;Do not unplug. &nbsp;Ever&rdquo;.</span></strong></em>&nbsp;</p> <p>I&rsquo;ve been to many different companies over the last few years, and I&rsquo;ve seen some absolutely terrifying things lurking in dark corners of server rooms and infrastructure. &nbsp;I&rsquo;ve seen AD servers booting off desktop-grade NAS devices, or backups being stored on USB disks that didn&rsquo;t pass S.M.A.R.T tests 3 years ago, and unsurprisingly, still don&rsquo;t.&nbsp;</p> <p>Everyone who works with these types of system knows that it&rsquo;s not perfect, and it&rsquo;s the best they can do under the circumstances. &nbsp;And this, unsurprisingly, leads to stress of one form or another, either wondering what&rsquo;ll happen if/when it breaks, or the stress of what you do when it actually breaks.</p> <p>It all basically stems from working within a company with unrealistic expectations of what&rsquo;s feasible within the limited budget provided by senior management. &nbsp;In an ideal world, we&rsquo;d all have datacentres with A/B feeds, and big-ass UPSes, and fabulous glycol-chillers. 
&nbsp;But we don&rsquo;t. &nbsp;Hardly anyone does. &nbsp;I think Google probably do, but well, they&rsquo;re *fucking Google*.&nbsp;</p> <p>Because it&rsquo;s so much easier to reuse an old disk here, or say<em> &ldquo;we&rsquo;ll change it when we have next year&rsquo;s budget&rdquo;</em> - Here&rsquo;s a dirty little secret. &nbsp;It&rsquo;ll never happen. &nbsp;Ever. &nbsp;Next year&rsquo;s budget will be zapped by next year&rsquo;s problems, and you know the old adage, if it ain&rsquo;t broke, don&rsquo;t fix it.&nbsp;</p> <p>It&rsquo;s really a case of <em>"well, actually it&rsquo;s not broken, but it&rsquo;s not perfect, but we still can&rsquo;t afford to fix it and make it perfect, so y&rsquo;know, we won&rsquo;t."</em></p> <p>This kind of thing filters down into software architecture too. &nbsp;A surprising amount. Whenever you find a database table being used as a message queue, or an eval() call that&rsquo;s only ever used by an internal service (<strong>!</strong>), it&rsquo;ll have been put in because of deadlines, and the struggle to meet them means that something has to be bodged, somehow.&nbsp;</p> <p>It&rsquo;s also quite apparent in Agile teams when the project has been over-specced, or under-budgeted for time (or other resources), and something falls through the cracks. &nbsp;In my experience, the first to go is code reviewing. &nbsp;Actually, testing is usually the first to go, but those two are pretty closely connected.</p> <p>It&rsquo;s similar to the BDD vs TDD mindset, where unless testing is an integral part of software development, i.e. performed either before, or in parallel with, application coding, then it&rsquo;ll probably never happen. &nbsp;Documentation too, to some extent.</p> <p>Interestingly enough, I don&rsquo;t think the developers or engineers are to blame for these problems. &nbsp;They&rsquo;re usually more than willing to do whatever is necessary to maintain the service/software/thing. 
&nbsp;The responsibility ought to lie with <strong>whoever holds the purse strings</strong>. &nbsp;</p> <p>It&rsquo;s all very well saying &ldquo;Our developers get the best possible software, and the best possible tools&rdquo;, but if the servers are old, running on tired disks, in an overheating room, fanned by a desktop fan, then I&rsquo;m afraid that you&rsquo;re not actually providing the best possible tools. &nbsp;</p> <p>It&rsquo;s really just an illusion.</p> </p> On Interviews <p>&nbsp;</p> <p>I&rsquo;ve had enough interviews over the last few years to realise that there are a few different styles of interviewing out there, and they all suck.&nbsp;</p> <p>&nbsp;</p> <p>There&rsquo;s the &ldquo;impossible question&rdquo; style - Like <a href="">The Barometer Question </a>.</p> <p>There&rsquo;s the shocking &ldquo;group interview&rdquo; - Like the one that made Peter Gradwell infamous.&nbsp;</p> <p>There&rsquo;s the Phone Interview - where I usually end up going off on tangents, and talking for 50-90 minutes.</p> <p>There&rsquo;s the technical challenge interview, which varies between awesome and terrible, depending on how it&rsquo;s been implemented.</p> <p>&nbsp;</p> <p>It&rsquo;s the technical challenge I&rsquo;m going to pick apart now.&nbsp;</p> <p>The technical challenge for your recruitment drive should be based on real-life challenges you&rsquo;ve experienced in the past. &nbsp;It should not be based on</p> <p><strong>a)</strong> Things you&rsquo;re not sure are possible. &nbsp;</p> <p><strong>b)</strong> Things you don&rsquo;t know the answer to.</p> <p><strong>c)</strong> Things which aren&rsquo;t relevant in the day-to-day business of the company.</p> <p>&nbsp;</p> <p><strong>It should not be a &lsquo;closed-book&rsquo; test. </strong>&nbsp;I&rsquo;ve seen some in the past that mark you down points if your mouse pointer leaves the browser-tab&rsquo;s focus for a moment. 
&nbsp;Apparently &ldquo;because you&rsquo;re not supposed to use&hellip;&rdquo; &nbsp;Well Excuse Me. &nbsp;I use ServerFault, on average, 20-40 times a day. &nbsp;As Albert Einstein said, "I never commit to memory anything that can easily be looked up in a book."&nbsp;</p> <p>Just swap &ldquo;in a book&rdquo; for &ldquo;by searching Google&rdquo; and you&rsquo;re pretty nearly there. &nbsp;I consult manpages, and questions, and articles I&rsquo;ve written, and articles others have written, and GitHub, and ServerFault, and StackOverflow, and LWN, and so many more. &nbsp;I&rsquo;ve never believed in closed-book tests, even from school-days. &nbsp;If you&rsquo;re going to deprive me of my books and information sources *now*, what kind of working environment am I to expect if I come and work for you?&nbsp;</p> <p>&nbsp;</p> <p><strong>A reasonable simulation of reality must be allowed</strong>. &nbsp;If I want a test server rebooted, you should allow me this. &nbsp;If I want to install software on the test server, you should allow me this too. &nbsp;Straight back to my earlier point of &ldquo;well, what restrictions are you going to place on me in future&rdquo;? &nbsp;</p> <p>&nbsp;</p> <p><strong>You must allow mistakes.</strong> &nbsp;Everyone makes mistakes. &nbsp;The only bad thing about making mistakes is making the same mistake twice. &nbsp;</p> <p>Making different mistakes is forgivable, and it&rsquo;s the way humans work. &nbsp;I challenge you to find a person who&rsquo;s never made a mistake. &nbsp;Good Luck With That.</p> <p>I bet even you, oh high and mighty Interviewer, have cocked something up in the past. &nbsp;</p> <p>Are interviewers looking for the perfect candidate? One who&rsquo;s never made a mistake? Well, tough shit, because that person doesn&rsquo;t exist, either as a job seeker, or as anyone else on the planet.</p> <p>More generally, here are some things which would make interviewing/job hunting more friendly. 
&nbsp;</p> <p><strong>Off-the-record chat time with existing employees. &nbsp;</strong>I&rsquo;d like to know, no-holds barred what working for XYZ Corp is actually like. &nbsp;I don&rsquo;t want a manager, or recruiting manager present for these, I want to actually see what your current employees think of your company.</p> <p><strong>Clear indications of the company&rsquo;s future. </strong>&nbsp;I&rsquo;d like to know in detail, what you&rsquo;ve been doing for the last 5 years, how your year to date is looking, what the plan is for the future. &nbsp;You&rsquo;re investing in me as much as I&rsquo;m investing my time for you, so I&rsquo;d like to know what the future holds. &nbsp;You can bet your ass I&rsquo;m going to dig you up on Companies House, so I&rsquo;d like to think that your side of the story matches theirs.</p> <p><strong>No Bullshit Job Descriptions. &nbsp;</strong>There&rsquo;s a lot of these about. &nbsp;Usually easy to spot because they&rsquo;ve got buzzwords and keywords and jargon crammed in left right and centre. &nbsp;This gem was recently found by someone on Twitter:&nbsp;</p> <pre>&ldquo;C++, Java, Scala, FPGA, .Net, F#, Haskell, Open Source, Unix, Linux, Hardware, Software, Computer Science, PhD, Msc, Masters, C++, .net C#, Java Developer Programmer, Quantitative Developer, Quantitative Programmer, Technologist, Developer, London, C++, Telecommunications, Gaming, Research, Micro-Chip, Electronics, Java, Scala, .net, F#, Python&rdquo;</pre> <p>I can&rsquo;t tell if that&rsquo;s a job skills list, or a SEO consultant&rsquo;s wet dream. &nbsp;That is *so* stuffed with keywords that it&rsquo;s very difficult to tell what the buggery is going on. &nbsp;It&rsquo;s also going to match all sorts of job searches. 
&nbsp;All they need to do is add in the term &ldquo;NoSQL&rdquo; and they&rsquo;ve got a home run.</p> <p><strong>The upshot of all this is: &nbsp;</strong>If you've got a hiring process that makes me feel like I'll be poorly treated if I came to work for you, then I'm probably not going to come and work for you. &nbsp;So sort your interview style out, and hopefully it'll closely match a comfortable company ethic. &nbsp;</p> <p>&nbsp;</p> How I broke AWS OpsWorks <h1>Part 2</h1> <p><a name="part2">&sect;</a></p> <p>I had planned to have another go from scratch as soon as the AWS team cleared the broken instance out. Once I noticed they'd wiped the instances from my OpsWorks account, I still wasn't entirely sure what'd caused the first one to break in such a catastrophic fashion, but I just built another one from defaults. This is how that unfolded.</p> <p>New Instance time!</p> <p><a href=""><img title="Hosted by" src="" alt="" /></a></p> <p>Well, that bit worked, we should have a go at starting it, really.</p> <p><a href=""><img title="Hosted by" src="" alt="" /></a></p> <p>The setup process took about 10 minutes, before failing, albeit slightly less catastrophically than last time.</p> <p><a href=""><img title="Hosted by" src="" alt="" /></a></p> <p>Interestingly, this time, although it had failed, it was at least running. I had a quick look for the log files generated, but got this message:</p> <p><a href=""><img title="Hosted by" src="" alt="" /></a></p> <p>However! After I ssh'd in, I had a look in the usual places for something log-worthy.</p> <pre> /var/log/cloud-init.log /var/log/aws/opsworks/ installer.log opsworks-agent.log user-data.log </pre> <p>None of which contained any errors. 
I had a look through /var/log/secure to see what it was doing, and found the location of the chef files/JSON/logfiles, and found this:</p> <pre>/var/lib/aws/opsworks/chef/2013-02-19-16-00-48-01.log </pre> <p>&nbsp;</p> <p>Which contained the following interesting nuggets</p> <pre>[Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: ---- Begin output of grep '/^StrictHostKeyChecking no$/' /home/deploy/.ssh/config ---- [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: STDOUT: [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: STDERR: [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: ---- End output of grep '/^StrictHostKeyChecking no$/' /home/deploy/.ssh/config ---- [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: Ran grep '/^StrictHostKeyChecking no$/' /home/deploy/.ssh/config returned 1 [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: Executing echo 'StrictHostKeyChecking no' &gt; /home/deploy/.ssh/config [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: ---- Begin output of echo 'StrictHostKeyChecking no' &gt; /home/deploy/.ssh/config ---- [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: STDOUT: [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: STDERR: [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: ---- End output of echo 'StrictHostKeyChecking no' &gt; /home/deploy/.ssh/config ---- [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: Ran echo 'StrictHostKeyChecking no' &gt; /home/deploy/.ssh/config returned 0 [Tue, 19 Feb 2013 16:09:08 +0000] INFO: Ran execute[echo 'StrictHostKeyChecking no' &gt; /home/deploy/.ssh/config] successfully [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: Processing template[/home/deploy/.ssh/id_dsa] on cupcake.localdomain [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: Skipping template[/home/deploy/.ssh/id_dsa] due to not_if [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: Processing directory[/srv/www/shortbread_beastie/shared/cached-copy] on cupcake.localdomain [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: Processing ruby_block[change HOME to /home/deploy for source checkout] on cupcake.localdomain [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: Processing 
deploy[/srv/www/shortbread_beastie] on cupcake.localdomain [Tue, 19 Feb 2013 16:09:08 +0000] INFO: deploying branch: HEAD [Tue, 19 Feb 2013 16:09:08 +0000] INFO: ensuring proper ownership [Tue, 19 Feb 2013 16:09:08 +0000] INFO: updating the cached checkout Tue, 19 Feb 2013 16:09:08 +0000] INFO: Cloning repo to /srv/www/shortbread_beastie/shar ed/cached-copy [Tue, 19 Feb 2013 16:09:08 +0000] DEBUG: Executing git clone --depth 5 /srv/www/shortbre ad_beastie/shared/cached-copy [Tue, 19 Feb 2013 16:09:09 +0000] DEBUG: ---- Begin output of git clone --depth 5 /srv/w ww/shortbread_beastie/shared/cached-copy ---- [Tue, 19 Feb 2013 16:09:09 +0000] DEBUG: STDOUT: Cloning into /srv/www/shortbread_beastie/shared/cached-copy... [Tue, 19 Feb 2013 16:09:09 +0000] DEBUG: STDERR: Warning: Permanently added ',' (RSA) to the list of known hosts. Permission denied (publickey). fatal: The remote end hung up unexpectedly [Tue, 19 Feb 2013 16:09:09 +0000] DEBUG: ---- End output of git clone --depth 5 /srv/www/shortbread_beastie/shared/cached-copy ---- [Tue, 19 Feb 2013 16:09:09 +0000] DEBUG: Ran git clone --depth 5 /srv/www/shortbread_beastie/shared/cached-copy returned 128 [Tue, 19 Feb 2013 16:09:09 +0000] ERROR: deploy[/srv/www/shortbread_beastie] (/opt/aws/opsworks/releases/20130218135253_103/cookbooks/deploy/definitions/opsworks_deploy.rb:60:in `from_file') had an error: git clone --depth 5 /srv/www/shortbread_beastie/shared/cached-copy returned 128, expected 0 ---- Begin output of git clone --depth 5 /srv/www/shortbread_beastie/shared/cached-copy ---- STDOUT: Cloning into /srv/www/shortbread_beastie/shared/cached-copy...STDERR: Warning: Permanently added ',' (RSA) to the list of known hosts. Permission denied (publickey). fatal: The remote end hung up unexpectedly </pre> <p>Could it be? Has the github deploy ssh key problem hit the almighty AWS too? Looks like it. 
A good question, however, is why on earth the precursor,</p> <pre>execute[echo 'StrictHostKeyChecking no' &gt; /home/deploy/.ssh/config] </pre> <p>which completed successfully, didn't prevent this from happening?</p> <p>(My best guess, looking at the log again: host-key acceptance wasn't actually the problem, since the clone failed with "Permission denied (publickey)", an authentication failure that no StrictHostKeyChecking setting can fix.)</p> <p>&nbsp;</p> <p>So, I changed the app git repository from a git@github SSH-style one to a git:// one, which doesn't use SSH at all. In 10-15 minutes, it should have re-run setup, and all that, and we should have either a working instance, or another failure.</p> <p><a href=""><img title="Hosted by" src="" alt="" /></a></p> <p>Bugger me. It worked.</p> <p>Here's a screengrab of it running in Chrome live on the EC2 cloud. I actually killed it off a few minutes ago, as it's costing me money.</p> <p><a href=""><img title="Hosted by" src="" alt="" /></a></p> <p>Looks so far like the big bugs concern the SSH Host Key acceptance thing that I found, and the fact that if you disturb the running EC2 instances from the EC2 control panel (which I accidentally did), then the OpsWorks side loses touch with the EC2 side, and the whole thing goes into that boot-stop-terminate-boot loop.</p> Step by Step AWS EC2 tutorial <p>&nbsp;</p> <p>This has been roughly adapted from <a href="">this ServerFault question</a> in case it gets removed/deleted/closed. &nbsp;</p> <p>The question was about how to configure a Flash game server on Linux, but on EC2. &nbsp;I had a good look around, but didn't find any true step-by-step EC2 tutorials for proper beginners. &nbsp;So I made one. &nbsp;This one is fairly specific to SmartFox Server towards the end, but the first few bits about creating an instance, and adding stuff to the security group should be generic enough to be useful.</p> <h2>The Question:</h2> <pre>I have made a ActionScript 3.0 Flash game and implemented multiplayer functionality using SmartFoxServer. Now I want to put this game on my website which is hosted on 000webhost.<br />My game works absolutely fine on localhost. 
But I need to put my smartfox instance somewhere where it is publicly available. This is where I need you peoples help.<br />There is an article explaining what needs to be done - <a href=""><br /></a>I do not understand, do I have to put my game and my smartfox instance on a remote server, vps, dedicated server or what?</pre> <p>&nbsp;</p> <h2>The Answer:&nbsp;</h2> <p>Right. &nbsp;You'll need to get a VPS, or at least an Amazon EC2 cloud instance to run this on. I'm 99.99% certain that you can't use the free package at 000webhost to do this. &nbsp;They're a pure webhost, and you need somewhere you can configure and install Java, and the SmartFox server.</p> <p>So.. Go to <a href=""></a> and sign up for a free account.</p> <p>You'll need to provide them with a credit/debit card number, but they won't charge you as long as you keep within the free tier resource limits.</p> <p>Once you've got an account, go <a href="">here</a> and start an EC2 instance.&nbsp;</p> <p>This all assumes you know a bit about Linux, but if you create your first instance using Ubuntu Linux 12.04 64-bit server, it'll make everything a bit easier!</p> <p>&nbsp;</p> <p>When you click to create an instance you get this chooser:</p> <p><img src="" alt="Select a method to configure your instance" width="943" height="570" /></p> <p>&nbsp;</p> <p>Select "Classic Wizard" and this AMI to boot.</p> <p><img src="" alt="Use this AMI (instance template)" width="843" height="80" /></p> <p>Select the defaults for this instance..&nbsp;</p> <p><img src="" alt="Accept these defaults" width="875" height="588" /></p> <p>And the defaults on the next page too.</p> <p><img src="" alt="more defaults" width="867" height="587" /></p> <p>Select the default storage options.&nbsp;<img src="" alt="Storage options" width="864" height="581" /></p> <p>&nbsp;</p> <p>And then name it.&nbsp;<img src="" alt="Name that sucker!" width="856" height="296" /></p> <p>&nbsp;</p> <p>You now need to create an SSH key, and name that too. 
&nbsp;When you click "Download Keypair" your browser will save the private key. &nbsp;Keep this safe, because if you lose it, you've effectively lost the master key to your new server.</p> <p><img src="" alt="Get the key!" width="849" height="465" /></p> <p>&nbsp;</p> <p>Now we need to create a security group. &nbsp;This is the firewall of Amazon EC2.</p> <p><img src="" alt="Create a Security Group" width="848" height="545" /></p> <p>&nbsp;</p> <p>Add inbound rules for SSH, HTTP and HTTPS. &nbsp;This'll be enough for now.&nbsp;</p> <p><img src="" alt="Inbound rules" width="867" height="550" /></p> <p>Review the selections you've made.</p> <p><img src="" alt="Review" width="852" height="531" /></p> <p>Hurrah! It should now be booting..</p> <p><img src="" alt="Booting" width="840" height="431" /></p> <p>&nbsp;</p> <p>Time to get into it. &nbsp;This is the control panel.&nbsp;</p> <p><img src="" alt="CP view" width="1077" height="212" /></p> <p>Select your new server instance, and right click it and you get this menu.</p> <p>&nbsp;</p> <p><img src="" alt="Connect!" width="249" height="519" /></p> <p>&nbsp;</p> <p>Then click <strong>Connect</strong>.</p> <pre>&nbsp; &nbsp; To access your instance:<br />&nbsp; &nbsp; Open an SSH client.<br />&nbsp; &nbsp; Locate your private key file (SmartFox.pem). The wizard automatically detects the key you used to launch the instance.<br />&nbsp; &nbsp; Your key file must not be publicly viewable for SSH to work. Use this command if needed:&nbsp;<br />&nbsp; &nbsp; chmod 400 SmartFox.pem<br />&nbsp; &nbsp; Connect to your instance using its Public DNS. 
[].<br />&nbsp; &nbsp; Example<br />&nbsp; &nbsp; Enter the following command line:<br />&nbsp; &nbsp; ssh -i SmartFox.pem</pre> <p>Which is nearly right, except as it's an Ubuntu instance, you want to&nbsp;</p> <pre>&nbsp; &nbsp; ssh -i SmartFox.pem</pre> <p>So, let's do that.</p> <p>&nbsp;</p> <p>&nbsp;</p> <pre>&nbsp; &nbsp; ubuntu@ip-10-243-117-245:~$&nbsp;</pre> <p>And we're in.</p> <p>Magic!</p> <p>Gonna <a href="">need the SmartFox installer next</a>..&nbsp;</p> <p>&nbsp;</p> <p>Download with wget, then tar xzvf and extract it.&nbsp;</p> <p> <pre>cd ~<br />wget;<br />tar xzvf SFS2X_unix_2_0_1_64.tar.gz&nbsp;<br />ls -lah<br />total 98544<br />drwxr-xr-x &nbsp; 4 tom &nbsp;staff &nbsp; 136B 19 Feb 22:51 .<br />drwxr-xr-x &nbsp;79 tom &nbsp;staff &nbsp; 2.6K 19 Feb 22:41 ..<br />-rw-r--r-- &nbsp; 1 tom &nbsp;staff &nbsp; &nbsp;48M 21 May &nbsp;2012 SFS2X_unix_2_0_1_64.tar.gz<br />drwxr-xr-x &nbsp; 9 tom &nbsp;staff &nbsp; 306B 13 Feb &nbsp;2012 SmartFoxServer2X<br />⚡ SmartFoxServer2X ls -lah<br />total 160<br />drwxr-xr-x &nbsp; 9 tom &nbsp;staff &nbsp; 306B 13 Feb &nbsp;2012 .<br />drwxr-xr-x &nbsp; 4 tom &nbsp;staff &nbsp; 136B 19 Feb 22:51 ..<br />drwxr-xr-x &nbsp;15 tom &nbsp;staff &nbsp; 510B 13 Feb &nbsp;2012 .install4j<br />drwxr-xr-x &nbsp; 6 tom &nbsp;staff &nbsp; 204B 13 Feb &nbsp;2012 Client<br />-rwxr-xr-x &nbsp; 1 tom &nbsp;staff &nbsp; &nbsp;71K 13 Feb &nbsp;2012 LicenseAgreement.pdf<br />-rwxr-xr-x &nbsp; 1 tom &nbsp;staff &nbsp; 5.7K 13 Feb &nbsp;2012 RELEASE-NOTES.html<br />drwxr-xr-x &nbsp;13 tom &nbsp;staff &nbsp; 442B 13 Feb &nbsp;2012 SFS2X<br />drwxr-xr-x &nbsp; 8 tom &nbsp;staff &nbsp; 272B 13 Feb &nbsp;2012 jre<br />drwxr-xr-x &nbsp; 9 tom &nbsp;staff &nbsp; 306B 13 Feb &nbsp;2012 third-party-licenses</pre> <p>So, you can go ahead and start the damn thing now.</p> <pre>ubuntu@ip-10-243-117-245:~/SmartFoxServer2X/SFS2X$ ./sfs2x-service start</pre> <p>or with a full path, start it by running</p> 
<pre>/home/ubuntu/SmartFoxServer2X/SFS2X/sfs2x-service start</pre> <p>and stop it with:</p> <pre>/home/ubuntu/SmartFoxServer2X/SFS2X/sfs2x-service stop</pre> <pre>You can perform the following commands on that sfs2x-service: {start|stop|status|restart|force-reload}</pre> </p> <p>Interestingly enough, it looks like SmartFox needs port 8080 opening up on the AWS Security Group firewall.</p> <p>&nbsp;</p> <pre><br />&nbsp; &nbsp; ubuntu@ip-10-243-117-245:~/SmartFoxServer2X/SFS2X$ sudo netstat -anp |grep java<br />&nbsp; &nbsp; tcp6 &nbsp; &nbsp; &nbsp; 0 &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;:::* &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;LISTEN &nbsp; &nbsp; &nbsp;9142/java &nbsp; &nbsp; &nbsp;&nbsp;<br />&nbsp; &nbsp; tcp6 &nbsp; &nbsp; &nbsp; 0 &nbsp; &nbsp; &nbsp;0 :::8080 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; :::* &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;LISTEN &nbsp; &nbsp; &nbsp;9142/java &nbsp; &nbsp; &nbsp;&nbsp;<br />&nbsp; &nbsp; udp6 &nbsp; &nbsp; &nbsp; 0 &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;:::* &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;9142/java &nbsp; &nbsp; &nbsp;&nbsp;</pre> <p>&nbsp;</p> <p>Luckily, that's really easy.</p> <p>&nbsp;</p> <p>On the sidebar of the control panel, there's a Security Groups link.</p> <p><img src="" alt="Security Groups" width="196" height="433" /></p> <p>&nbsp;</p> <p>Edit it, add a custom TCP rule and allow inbound traffic on port 8080.</p> <p><img src="" alt="Adding a custom rule" width="1063" height="478" /></p> <p>Add the rule, and <strong>apply the changes</strong>.&nbsp;</p> <p>You should now be able to reach your SmartFox game server on the DNS name given to you by Amazon EC2 in
the control panel. &nbsp;It's the same bit you SSH'd to earlier.</p> <p>That's all folks!</p> <p>&nbsp;</p> Building and Scaling PDFTribute <p>It&rsquo;s probably easiest to see how <a href=""></a> started from the storify shown below.</p> <p>My good friend, <a href="">Patrick Socha</a>, so moved by the outpouring of data associated with the twitter hashtag <a href=";src=hash">#pdftribute</a>, set up a quick and dirty twitter archive, extracting tweets containing links.</p> <p>I had a look at this, realised it looked awful on mobile (I was on a bus at the time!), and requested the source on Git. &nbsp;</p> <p>I built a really simple CSS design using <a href="">Divshot</a> (because I love bootstrap, love their CDN hosting, and I suck at designing anything by hand).</p> <p>Patrick merged those changes in, and redeployed to his site hosted at <a href=""></a> (which now redirects to the main site).</p> <p>We started talking about hosting somewhere a bit more capable for when we post to <a href="">Hacker News</a>, so I threw together a simple hosted instance on <a href="">Appfog</a>, deployed his code there, whilst he was buying the domain. &nbsp;Then it was a <a href="">simple case of mapping the URL to appfog</a>, and praying the DNS was quick to propagate (it was, being a new domain).</p> <p>So at this point, we had the backend scraper, running on his VPS, writing data to a MongoDB instance at <a href=""></a>. &nbsp;So there&rsquo;s no massive need to worry about scaling Mongo, as we&rsquo;re outsourcing that service to a company who do mongo scaling as their <em>modus operandi</em>. <strong>Isn&rsquo;t &ldquo;the cloud&rdquo; awesome?&nbsp;</strong></p> <p>The frontend reads from mongohq, and builds the page. &nbsp;Appfog apps already have requests Varnished, so we didn&rsquo;t need to find somewhere to put another server, running Varnish. Which is another great thing.
&nbsp;One less place things can go wrong.</p> <p>The IP addresses that Appfog give you (to create A records for) point to an <a href="">Elastic Load Balancer</a>, and then when you scale up on the Appfog Control Panel, it adds instances behind the ELB, and deploys your code to them.</p> <p>Before we posted to <a href="">reddit</a>, and <a href="">news.yc</a>, I scaled up the number of instances to 4, and created a <a href="">pingdom</a> account, so that I&rsquo;d get instant SMS monitoring if the site goes down.</p> <p>With the site now bootstrapped in CSS, *and* responsive, so it still looks awesome on mobile, and deployed to a 4-instance appfog cluster, we finally felt ready to get it out there.</p> <p>In the first iteration on this setup, there was only really a &ldquo;tweet this page&rdquo; link, which tagged both <a href="">@patricksocha</a> and <a href="">myself</a> in the tweet text, so we had some idea of how many shares there&rsquo;d been via twitter.</p> <p>I set up tracking code on <a href="">GoSquared</a>, so we&rsquo;d have some idea of how loaded our servers actually are, and how many people are actually on the site at any one time. -- This was actually crucial in deciding whether we scale up to more than 4 appfog instances.. In the end, when we had the highest traffic loading, we only had 5.&nbsp;</p> <p>In subsequent page versions, I added a &ldquo;Like on Facebook&rdquo; button, and accompanying <a href="">Facebook Page</a>, and then a Google +1 button (and why not?), and &nbsp;re-released the site to appfog. &nbsp;</p> <p>It was immediately apparent that not only was the site trending on twitter, but also on facebook, and it wasn&rsquo;t too long before we&rsquo;d made the front page of Hacker News. &nbsp;- This is a first for me. &nbsp;</p> <p><strong>By 11AM on Monday</strong>, we&rsquo;d also been linked to by <a href="">the BBC</a>, <a href="">Huffington Post </a>&nbsp;and <a href="">TechCrunch</a>. 
&nbsp;Over the course of the day, other news outlets picked up the story, and linked to us, so the waves of visitors followed the sun across the globe.&nbsp;</p> <p><strong>At 2PM on Monday</strong>, we had 641 concurrent visitors to the site, the highest so far, and thousands of shares on twitter, retweets, shares and likes on facebook. &nbsp;</p> <p><img src="" alt="Shares from Gosquared Analytics" width="361" height="334" /></p> <p><strong>By 13:30 on Tuesday</strong>, the site analytics told me that in total, since the site was launched, we&rsquo;d served 131,852 page views, and there were still &gt;150 visitors on the site, each spending on average &gt;30 minutes browsing. &nbsp;</p> <p>Since then, we&rsquo;ve been working on integrating our archive with others who&rsquo;ve come forward (and some that I found on reddit, and Hacker News), to build an open-access repository of papers, with searching, indexing and analysis. &nbsp;</p> <p>At the <a href="">Tomorrow&rsquo;s Web</a> meetup on Saturday 2nd of February,<a title="Video from the Talk" href=";list=PL7VEaBT4tcW3r79xxv5oNCJrNTe3XLGjS&amp;index=5"> Patrick and I spoke</a> about the challenges we've faced, and the process of building #PDFTribute. &nbsp;</p> <h2>The take-home lessons are these:&nbsp;</h2> <p><strong>* Your idea doesn&rsquo;t have to be awesome</strong></p> <p>We went a very long way on a very simple site, even when we were serving 10,000+ hits a day, the site was still pretty rough around the edges. &nbsp;The deep-levels of interaction and clever design can come later, but having a very simple MVP (for want of a better word) early on is a valid enough starting point to give you a place to build out from in the future.</p> <p><strong>* A quick response time is crucial.</strong> &nbsp;</p> <p>Especially for virality and time-critical stuff.. 
&nbsp;If we&rsquo;d waited a day before writing, or deploying the code, then we&rsquo;d probably have missed all the fun.&nbsp;</p> <p><strong>* Store *all* the things. </strong>&nbsp;</p> <p><img style="vertical-align: middle;" src="" alt="Store *all* The Things" width="400" height="300" /></p> <p>One of the decisions made early on by Patrick was to store the majority of the important tweet data in the database. &nbsp;We could hack together a deeper UI later on, but if we didn&rsquo;t have the tweet data, that would be tricky (not impossible, but certainly less straightforward). &nbsp;It&rsquo;s much easier to store tweet data as one document per tweet than to try and break it down into a relational database (hence MongoDB as a natural choice). &nbsp;Once you&rsquo;ve got the data, what you do with it can come later.</p> <p><strong>* Cut the Crap.</strong></p> <p>It&rsquo;s pretty clear from the design of PDFTribute that we went for a &ldquo;no BS&rdquo; appearance and implementation. &nbsp;The important information is easy to access, and there&rsquo;s no ads, no popups, and no &ldquo;crap&rdquo;. &nbsp;It&rsquo;d be very easy to fill the site out with visual &ldquo;flair&rdquo;, but that would invariably work against us. &nbsp;</p> <p><strong>* Don&rsquo;t underestimate the power of social media.</strong> &nbsp;</p> <p>This sounds obvious, living in the world of Twitter and Facebook and so on, but the majority of early adopter visitors we got, before the news outlets picked up the story, were from Facebook Likes and Twitter, well, Tweets. When I first integrated the Facebook widget into the <em>&ldquo;Sharing is Caring&rdquo;</em> box on the PDFTribute page, I had to double check, because it was <strong>*already*</strong> showing 2.5k likes. &nbsp;Turns out that we were already very well shared and liked on Facebook. &nbsp;</p> <p><strong>* Outsourcing for fun and profit.
&nbsp;</strong></p> <p><strong>&nbsp;</strong>We scaled to 150k visitors in one day by having a very stable platform, powered by a number of outsourced services. &nbsp;Hosted MongoDB, at MongoHQ, CSS and JS on the Divshot CDN, 4 backend webserver instances, already configured behind a load balancer, courtesy of Appfog&rsquo;s Free Package. &nbsp;Analytics via GoSquared. Monitoring and alerting was provided by Pingdom. &nbsp;The only &ldquo;files&rdquo; hosted on Appfog are PHP, and there&rsquo;s only 2 of those.</p> <p>The advantage of this from my point of view is that I don&rsquo;t have to remember how to make MongoDB scale, I don&rsquo;t need to throw together 3 replica sets on different AWS Availability Zones. &nbsp;I don&rsquo;t need to write a single line of Puppet or Chef, because it&rsquo;s all being provided to us, as a service, for free, ostensibly. &nbsp;</p> <p><strong>The total cost of PDFTribute was &pound;5.99 for the domain name. &nbsp;</strong></p> <p>That&rsquo;s the kind of lean agility that is needed to make sure that this kind of 3-day hack is successful.&nbsp;</p> <p>One of the brilliant things about the Appfog deployment process is that if you break all the things in the process, you still have 5 minutes of Varnish time, so that you have a chance to fix the site so that your visitors don&rsquo;t see that you broke things, and you don&rsquo;t lose traffic.</p> <p>Finally, and perhaps most crucially:</p> <p><strong>* If you&rsquo;re going viral, make sure you&rsquo;ve the resources to do so. </strong></p> <p><a href="/blogish/cost-forward-thinking/">I&rsquo;ve written about this in the past</a>, how some sites aren&rsquo;t built for web-scale, and when they go viral, or have a traffic surge (due to advertising, or similar), they can&rsquo;t cope with the sheer volume of visitors hitting the site at the time. 
&nbsp;<br />Personally, I think it&rsquo;s a little embarrassing, and always makes me doubt the &ldquo;reliability&rdquo; of the site that&rsquo;s been brought down by the traffic resulting from a post on HackerNews, or Slashdot, or well, wherever. &nbsp;</p> <p>This was pretty much the driver for using Appfog in our case. &nbsp;I wanted an easy-to-scale platform, with minimal effort. &nbsp;As I mentioned in earlier points, there&rsquo;s a lot to be gained for quick-and-easy scale from using SaaS/PaaS/IaaS providers to achieve this. &nbsp;This gives you more time to focus on the code, rather than spending all evening figuring out how to set up 3 MongoDB servers, and a handful of EC2 instances, running NginX, PHP, FastCGI (or something), then putting the assets on S3/Cloudfront. &nbsp;It&rsquo;s a pretty obvious choice for agility and smoke-testing to avoid being reliant on your own services as much as possible.</p> <p>Going forward, <a href="">PDFTribute</a> is stable and still working. &nbsp;It&rsquo;s still hosted on Appfog, MongoHQ and Divshot&rsquo;s CDN for the bootstrap bits. &nbsp;</p> <p>Stay tuned for more information on, and keep checking the site, there&rsquo;s lots more we&rsquo;re working on, and it&rsquo;s sure as hell not over yet!</p> <p>&nbsp;</p> <p>The storify timeline, and the recorded video from Tomorrow's Web, are shown below.</p> 2012: Retrospective <p>&nbsp;</p> <p><strong>2012: A year in review.</strong></p> <p>So.. &nbsp;2012.. What can I say.. &nbsp;Quite a lot, actually.</p> <p>When I last wrote an annual retrospective, it was 2011 going into 2012, and I'd just started at Baseblack, one of many Soho-based VFX studios. &nbsp;I had a good 12-odd months there, before the credit crunch hit the entire London VFX industry, and I was made redundant in October.
&nbsp;Over my time there I wrote a *lot* of puppet manifests, built a render farm based on Dell Blade servers, undertook a Hitachi HNAS administrator course, learned how to use Maya, Realflow, Nuke, Shake, PFTrack, Silhouette, and a whole bunch of other VFX packages too numerous to mention. &nbsp;"Oh, your 2D render won't complete? Did you use Paint nodes? Oh.. Well .. Good Luck With That." &nbsp;It was a good year. &nbsp;I got my head around 2D and 3D LUTs, and figured out monitor and projector calibrations. &nbsp;Before it all went pear-shaped, we'd built a really awesome pipeline, with the help of Paul Nendick, Andrew Bunday, and Michael Nguyen. &nbsp;We had a truly fantastic team. &nbsp;And then it all went wrong.&nbsp;</p> <p>After being made redundant from Baseblack, I started a new job, which frankly, I don't enjoy, so the less said about that, the better. &nbsp;Anyone who follows my tweets closely will know what I mean, and why it bugs me so, but I really don't want to go into it here.</p> <p><strong>Moving on.</strong></p> <p>I spent a few weeks of 2012 in Spain, pretty much only in Barcelona, with Giles.. &nbsp;Went to my first Cal&ccedil;otada in Tarragona. &nbsp;That was a lot of fun. &nbsp;Definitely an experience to be repeated. &nbsp;I visited some Roman ruins there which were truly stunning, and made for some <a href="">great photographs</a>.&nbsp;</p> <p>In July, I started my own company, <a href="">Astound Wireless</a>. &nbsp;Basically a name/company to cover the work I've done in the year regarding wireless/wired network consultancy and contracting. &nbsp;It all had a fairly odd start. &nbsp;I went to the London Realtime hackathon in April, fully expecting just to build something cool. &nbsp;I turned up, and <a href="">@leydon</a>, the organiser, explained how they'd been having problems with the wireless, and I stepped up to fix the problem.
&nbsp;The next day of the hackathon, I turned up at the venue with some of my personal collection of routers (basically a Cisco 2625XM, a switch, a whole bunch of access points, and mysteriously, an Axis webcam). Over the rest of the morning of that day, I built an entirely new network from scratch for the hackathon attendees to use. &nbsp;Two weeks later, <a href="">General Assembly</a> &nbsp;ran the UnlockLondon Hackathon, and once again called on my services to build their network, which was slightly larger, spanning 2 floors, and presented a new challenge. &nbsp;I brought in yet more gear, advised the purchase of some more access points, and set up their network for the hackathon, and things went well. &nbsp;</p> <p>Hackathon/unconference/event wireless is *tricky*. &nbsp;Everyone expects bulletproof, permanent connectivity, which isn't too tricky if the incoming feed is able to support the amount of traffic. &nbsp;If it isn't, then some form of traffic shaping/caching has to take place. &nbsp;By the time that I was called in to organise the 3rd hackathon, I'd already started Astound Wireless. &nbsp;In July, the weekend the Olympics were due to start in London, I was running the network for the MMXII hackathon, sponsored by New Bamboo. &nbsp;I'd originally specced the network for the venue at Central Working in Bloomsbury, for 70-80 attendees. &nbsp;I brought some new kit along, introducing the 5GHz band into the mix, which basically gives immensely better performance (less contention) for any device supporting the 5GHz spectrum.&nbsp;</p> <p>In the end, only 20-30 people showed up, so the wireless seemed massively overspecced, and I didn't in fact end up deploying the caching server(s). &nbsp;Such is life. &nbsp;They shall have to wait for a further deployment in the year to come.
&nbsp;</p> <p>Around about June, I found out about <a href="">Silicon Drinkabout</a>, a weekly (on a friday evening) social club for startup/small business owners in the London tech scene. &nbsp;In the past year I've met some *really* interesting people through that, and by and large, been to most of the weekly meetups. &nbsp;I've discovered that the Salvation Jane is a fabulous caf&eacute;, but kinda lacking in Wifi coverage (Might try and fix that!), and also that there's a whole raft of pubs in east london, around the Silicon Roundabout that are also lacking in wireless coverage. &nbsp;(Definitely working on those! ;) )</p> <p>I met a guy called Isaac at one of the Drinkabout sessions, and he mentioned that hi<a href="">s product i</a>s pretty reliant on decent wireless connectivity, so I started working with him, finding new and interesting ways to leverage existing wireless networks for our own purposes, as well as using my knowledge of infrastructure and systems for his startup's growth. &nbsp;There'll be much more of this in the year to come.</p> <p>Off the back of the Silicon Drinkabout thing, I <a href="">joined the Digital Sizzle team </a>for the Movember effort. &nbsp;I grew a mostly passable moustache (as in, it made me look vaguely manly, but really failed to grow in the middle bit, so looked kinda silly). &nbsp;The entire team raised a staggering &pound;15,150 in the name of bum and ball cancers (Prostate and Testicular). &nbsp;It has to be said that I only really managed to raise &pound;6.00, but every little helps, right?</p> <p>Last year, when I wrote my 2011 retrospective, I mentioned that I broke up with my partner, and was finding the London Dating Scene particularly hard going. &nbsp;It should come as no surprise to you that 2012 has been equally FAIL as far as dating is concerned, and I'm starting to wonder if the bottom of the barrel really has been scraped. 
&nbsp;The one experience that stands out in my mind is having been on one date, and after that, having received more text messages in a week than I've sent in an entire month (about 90). &nbsp;That didn't go any further, unsurprisingly. &nbsp;I do like my personal space, especially as a guy who's trying to get a business off the ground, I need my own time and space, and woe to anyone who doesn't understand that. &nbsp;</p> <p>So 2012 was a mixed year. &nbsp;As was 2011, actually. &nbsp;Funny that.. Every year, you get a year older, but really, nothing much changes. &nbsp;One of the odd things I noticed a few years ago is that Christmas in itself, as a holiday, has really lost its edge. &nbsp;I don't ask for anything for christmas presents anymore, and also I'm not very good at buying shit for other people. &nbsp;I've never really had a grasp of what they'd like. &nbsp;All I can estimate is what I think they'd like. &nbsp;Which is often wrong. &nbsp;So it's either cash, or Amazon vouchers from here on out.</p> <p>The primary reason I don't ever ask for anything anymore is that I'm in a better position to know what I want / be able to get it than anyone else is. &nbsp;It also seems remarkably unfair asking my retired parents for anything with a value over &pound;100. &nbsp;Kinda. &nbsp;</p> <p>Christmas is now a Hallmark holiday (pointless, and designed to cater for those with families who actually give a fuck). &nbsp;I've just spent an incredibly happy/peaceful holiday period with my family, but in the same vein, don't need to know it's "christmas", as I could equally spend an equally pleasant two weeks in the middle of summer.</p> <p>Again, I digress, so back to the topic at hand..&nbsp;</p> <p>2012's been an interesting year, on reflection.
&nbsp;Lots of good things, some bad things, so I'm going to cop out, and repeat exactly what I said for the blogpost this time last year.</p> <p><strong><em>"To strive, to seek, to find, and not to yield"</em></strong></p> <p>Except this year, it feels more significant. &nbsp;I'm exploring more on the business / contract / freelance side of things, and I'm sure as hell not going to yield.</p> <p>&nbsp;</p> Dennis Nedry and the Human Single Point of Failure <p> <p><strong><em>"John, I can't get Jurassic Park back on line without Dennis Nedry."</em></strong></p> <p>Words you never want to hear uttered. &nbsp;Unless you work for <em>InGen</em>, it's highly unlikely. &nbsp;</p> <p>Although there is the remaining problem of the Human Single Point of Failure (HSPOF). &nbsp;After you've spent the last year or two eliminating the single points of failure from your computational infrastructure, you realise that you're the only one who knows which cronjobs run when, and on which servers. &nbsp;You're the only one who knows how to kickstart the <a href="/blogish/postgres-replication-91/#.UNoSKImLKBU">postgresql streaming replication</a>, and pg_basebackup isn't documented in the wiki. &nbsp;</p> <p>You don't want to travel on the same bus, train, or plane as your colleagues, for the fear that if you both died, or were significantly incapacitated in a vehicular (or otherwise) accident, then there'd be a significant lack of knowledge within the rest of the team to carry out the business duties.</p> <p>Ah. &nbsp;The human single point of failure. &nbsp;Is there a way to get around this problem? &nbsp;Yep, probably. &nbsp;But only if you spend a near equal amount of time in documenting your system as you did building it. &nbsp;</p> <p>Having been in the situation of starting work on a system that's entirely (or nearly entirely) undocumented, then you wonder what'll happen to the systems when the lead architect leaves. 
&nbsp;The guy who's got the secrets to the systems locked in his head. &nbsp;How do you even start the process of that knowledge transfer?&nbsp;</p> <p>I still maintain that a better personnel structure of Jurassic Park would have led to a better (yet far less exciting) conclusion to the film. &nbsp;If Ray Arnold and Dennis Nedry had worked as part of an agile team, with a complete company wiki documenting the systems and infrastructure, then the outcome would've been far less gory. &nbsp;</p> <p>As far as the documentation of new (and existing) systems is concerned, the only real way forward is documenting as you go.</p> <p>Documenting an entire system in one full swathe, at the end of the project, is a staggering undertaking, even for the most committed systems engineers.&nbsp;</p> <p>Experience, and some common sense, tell me that the best way to ensure knowledge transfer between team members (to eliminate the HSPOF), is to have another member of staff shadow the person with the knowledge. &nbsp;See what they do day-to-day, when they edit the code, ask what they're doing, why they're doing it, whether something else would work as well or better. &nbsp;Learn from them, from their experience, and hopefully gain some insight that will allow a better system to be built. &nbsp;</p> <p>And another thing. &nbsp;The idea of designing a park with no out-of-danger access paths, that is to say, if you have to turn off the fences to get to the dock, then you've seriously fucked up the design of your park. &nbsp;There should be a secure path between all locations that is "out of band", for want of a better word. &nbsp;And what's the deal with the circuit breakers being on the other end of the compound? &nbsp;That's pretty dumb. &nbsp;Especially if you've gotta go through the park, *outside*, where the dinosaurs are, in order to get to the other end of the compound. &nbsp;I mean..
Isn't there a better route, &nbsp;a protected route?&nbsp;</p> <p>Tsk.</p> <p>Oh, and Happy Holidays to all my readers, across the globe, from Russia to London, and South America to Australia.</p> </p> Transferrable Skill in Higher Education <p> <p>So.. it transpires that I have a friend who studied Physics at Imperial College, and as a part of that, was taught how to use C++ <em>&ldquo;As a tool to help with computational physics&rdquo;.</em>&nbsp; - His words, not mine.</p> <p>As a result, he has no explicit knowledge of some of the finer points of C++ programming, no idea how a binary search algorithm works, why you&rsquo;d use a Deque and when you&rsquo;d use a Vector.&nbsp;</p> <p>This is because he was taught C++ by the Physics department, rather than by Computer Science or Engineering.</p> <p>I learnt C/C++ and Java at University too, but fortunately, the actual teaching was handled by the Department of Engineering. &nbsp;Which meant they took a far more holistic view, rather than just teaching the applications of any given language to the subject at hand. &nbsp;</p> <p>I think the application-specific mechanism of teaching is wrong. &nbsp;Mostly wrong when you try and get a job in a non-directly related field, and find (in his case, during an interview) that you&rsquo;ve no idea what a Bubblesort is, or why a Quicksort is preferable. Similarly, no concept of Big O notation, or computational efficiency.&nbsp;</p> <p>I don&rsquo;t know how you could expect to be a versatile programmer without knowledge of how to make your programs efficient, and not zap all the memory from a running system. &nbsp;</p> <p>One of the long-known and much commented on facts is that so few institutions teach &ldquo;modern&rdquo; software engineering, or for that matter, skills to survive in a digital age. 
&nbsp;I more mean things like how Distributed Version Control works, and why it&rsquo;s better than &lsquo;helloworld.c.bak&rsquo;, rather than &ldquo;How to use Facebook for fun and profit&rdquo;.</p> <p>I don&rsquo;t think it&rsquo;s great to use myself as an example here, because I picked up a large proportion of my code skill before university, and I was already using Source Control whilst writing code at school. &nbsp;It was Visual SourceSafe, but still, better than nothing.</p> <p>I&rsquo;d like to see a course alongside &ldquo;Programming 101&rdquo; called something like &ldquo;<strong>Programming Methodologies 101&rdquo;</strong> where the topics covered would be:</p> <p> <ul> <li>Source Control.</li> <li>TDD.</li> <li>Agile.&nbsp;</li> <li>XP.</li> <li>Waterfall.</li> <li>How to write documentation.</li> </ul> </p> <p><strong>Programming Methodologies 102:</strong></p> <p> <ul> <li>REST APIs&nbsp;</li> <li>Continuous Integration.</li> <li>Configuration Management.</li> </ul> </p> <p>This is the kind of thing that <a href="">General Assembly</a> do actually teach, and there&rsquo;s also <a href="">CodeSchool</a> with their Try Git free course, and also their Try R course. &nbsp;</p> <p>&nbsp;</p> <p>I think it&rsquo;s needlessly arrogant of university departments to handle the teaching of a language within their own department, such as Physics teaching &ldquo;C++ For Physicists&rdquo; rather than handing the students over to the Engineering/CS departments so that they can get a good, holistic, overall course in C++, which will make them better programmers in the long run, no matter what job they eventually end up in. &nbsp;It&rsquo;s all about transferrable skill. &nbsp;</p> <p>When you start university, the job you wind up doing won&rsquo;t exist yet. &nbsp;Which is why transferrable skill is the only way forward. 
&nbsp;Because otherwise you&rsquo;re just training somebody for an anachronistic career path.</p> </p> Jenkins as a Job Dispatch Engine <p>I get easily tired of doing the same thing over and over again, and will, wherever possible, script or automate it to make life easier for myself. &nbsp;This could be in the form of a lightweight webapp/REST api for stuff, or in this case, I used Jenkins.</p> <p>So on one server, we sometimes need to reload apache. &nbsp;As we don't like developers randomly executing shells on live servers, it's better to just allow access to a few specific commands, in this case, a wrapper script on the target server's /usr/local/bin that just wraps "/etc/init.d/httpd restart" or "/etc/init.d/httpd reload".&nbsp;</p> <p>In "/etc/sudoers" there's a Cmnd_Alias</p> <pre>Cmnd_Alias RESTARTER = /usr/local/bin/, /usr/local/bin/</pre> <pre>restarter ALL=(ALL) NOPASSWD: RESTARTER</pre> <p>And the restarter user can access this without specifying a password.</p> <p>The restarter user has a .ssh/authorized_keys file containing the jenkins user's ssh public key.</p> <p>On the jenkins job, there's a Parameterized Build flag, called "ARE_YOU_SURE" which prevents the accidental restart (as No is the default option).</p> <p>The sole build step is:</p> <p>&nbsp;</p> <pre>if [ "$ARE_YOU_SURE" = "Yes" ]; then<br />echo "Restarting..."<br />ssh -tt restarter@server-to-restart.fqdn.tld sudo /usr/local/bin/<br />else<br />echo "Aw, shucks"<br />fi</pre> <p>&nbsp;</p> <p>&nbsp;</p> <p>If you build and click "No" in the parameter, it will echo "Aw, shucks" and exit. &nbsp;If you click yes, it will SSH to the remote server as the restarter user, and then execute the script.</p> <p>If you don't specify ssh -tt, then you get pestered because the terminal it's trying to run sudo in isn't a TTY.&nbsp;</p> <p>Ta Da! 
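</p> <p>The dispatch logic inside such a wrapper boils down to a whitelist. &nbsp;Here's a sketch of what one of those /usr/local/bin scripts might look like; the actual script names weren't preserved in this post, so the names and messages below are hypothetical, and the echo stands in for the real "/etc/init.d/httpd" call. &nbsp;The important property is that sudo only ever exposes a fixed set of actions, never a general-purpose shell.</p>

```shell
#!/bin/sh
# Hypothetical sketch of a restart wrapper for /usr/local/bin.
# Only whitelisted actions are passed through; anything else is refused.
restart_wrapper() {
    case "$1" in
        reload|restart)
            # The real script would run: /etc/init.d/httpd "$1"
            echo "would run: /etc/init.d/httpd $1"
            ;;
        *)
            echo "usage: restart_wrapper {reload|restart}" >&2
            return 1
            ;;
    esac
}

restart_wrapper reload
```

<p>One script per action (as the Cmnd_Alias above implies) is tighter still, because then the sudoers entry doesn't have to trust any argument handling at all.&nbsp;</p> <p>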
Jenkins as a job dispatch engine.</p> <p>&nbsp;</p> Interesting Thing of the Day: Network Motifs <p>&nbsp;</p> <p><strong>Interesting thing of the day:</strong></p> <p><em>Milo, Ron, et al. "Network motifs: simple building blocks of complex networks." Science 298.5594 (2002): 824.</em></p> <p><a href="">&nbsp;Fulltext available from Google Scholar: -</a></p> <p>&nbsp;</p> <p>It occurs to me that in scalable systems engineering (the sort of thing I do for a living), you only tend to see Bi-fan networks and Bi-parallel ones. &nbsp;</p> <p>Bi-fan is roughly equivalent to a cross-connected core switch, whereas&nbsp;Bi-parallel is a good representation of a Virtual IP with a load balancer.</p> <p>There are some Fully Connected Triads, often in the form of multi-master database replication clusters and in fully-meshed networks, which could theoretically scale up to N-ads, where N is probably no more than 10 or 15. &nbsp;With many connected mesh members, the complexity grows quadratically: the number of connections is given by (n^2 - n)/2. &nbsp;</p> <p>A well-designed job dispatch system / queue behaves like a combination of the bi-parallel motif with the feed-forward loop: the message queue and job consumer/worker are represented by the feed-forward loop, and the system's resilience/redundancy is built in with the bi-parallel motif. Of course, sometimes more than n+1 redundancy is needed, and there are some tri-parallel motifs. N+M redundancy is sometimes seen too, in systems where extreme levels of redundancy are required. &nbsp;</p> <p>I shall have to ponder over some other Network Motifs seen in this field. &nbsp;There's definitely more than those I've mentioned, but the interesting thing is that along with software engineering, these design patterns are also quite well rooted in the design of a scalable network.
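That quadratic growth is easy to see with a quick sketch (Python, purely illustrative):

```python
# Connections needed for a fully-meshed cluster of n members: (n^2 - n)/2
def mesh_links(n: int) -> int:
    return (n * n - n) // 2

for n in (3, 10, 15):
    print(f"{n} members -> {mesh_links(n)} connections")
```

A 3-node multi-master cluster needs only 3 links; by 15 members you're already wiring 105, which is roughly why fully-connected N-ads stop being practical somewhere around that size.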
&nbsp;</p> <p>It seems evident that most evolved systems have eliminated the single points of failure, at least in the Motifs demonstrated in the article (with the exception of the Three-chain food web, which as an isolated unit is still dependent on the availability (or hunger) of node Y). &nbsp;</p> <p>&nbsp;</p> Thanksgiving 2012 <p>&nbsp;</p> <p>I think this seems like an appropriate time to say a few words in favour of the great United States of America. &nbsp;There&rsquo;s some things they just do excellently. &nbsp;</p> <p>Customer service springs to mind as one of the best I&rsquo;ve ever encountered. &nbsp;I&rsquo;m not exactly sure why this is, but people do seem to be far more willing to be kind, courteous and helpful.</p> <p>There&rsquo;s other things too: the weather is generally better (and if not better, then certainly more predictable). &nbsp;There&rsquo;s also the stunningly beautiful scenery. &nbsp;I challenge anyone to gaze deeply at the Grand Canyon, or the <a href="">Yellowstone Paint Pots</a>&nbsp;and not be struck with a sense of awe and grandeur. &nbsp;</p> <p>I&rsquo;d say that arguably, one of the finest American National Holidays (and gosh, aren&rsquo;t there a lot?) is Thanksgiving. &nbsp;It&rsquo;s a celebration of everything you should be thankful for (and is excellent if you choose to forget about the origins of the holiday, which, depending on who you listen to, are a bit worrying).&nbsp;</p> <p>However, most Americans and their families and friends celebrate Thanksgiving with a feast. &nbsp;This is one day of the year where you gorge yourself for a reason, as opposed to just gorging for no reason other than it&rsquo;s there.</p> <p>Thanksgiving in Britain is rarely celebrated, except by the expatriate community and its friends. &nbsp;As a result, unless you&rsquo;re doing the celebratory meal at your own home, you&rsquo;re bound for disappointment.
&nbsp;</p> <p>This is the very same disappointment I have suffered tonight at the Harvey Nichols 5th Floor &ldquo;Seasons&rdquo; restaurant, which chose to put on a (quite expensive, all things considered) Thanksgiving 3 course meal.&nbsp;</p> <p>The general premise of a restaurant like this is to provide great food, at a reasonably expensive price, to the cognoscenti of gastronomy. &nbsp;Herein lies the problem. &nbsp;For me, I&rsquo;d be expecting some immense portions of still great food, but lots more of it than I&rsquo;m going to find at this restaurant.</p> <p>Odd restaurant. &nbsp;Top floor of Harvey Nichols on Brompton Road. &nbsp;Maybe 20-30 tables in a space of about 8 metres by 15 metres (at a guess). &nbsp;Covered by a curved glass roof, which did little to improve the atmosphere; all it really did was amplify the cackle of the woman 4 tables across by an order of magnitude. &nbsp;</p> <p>Hard floors, hard walls and glass. &nbsp;Very pretty. &nbsp;Terrible for acoustic damping.&nbsp;</p> <p><strong>First course.. </strong>Out of 14 of us, 12 (or so) chose the Sweetcorn, Bacon and Chicken soup. &nbsp;On a plate the size of a truck wheel, the well was filled with about 15mm of soup. &nbsp;All told, this was about 4 tablespoons, and a mouthful of bread for mopping.&nbsp;</p> <p>Paired wine was a fabulous Riesling, but again, there wasn&rsquo;t enough of that either. Sadly.&nbsp;</p> <p><strong>Second Course..</strong> &nbsp;Turkey with Orange and Pistachio stuffing, Mash and Gravy. &nbsp;</p> <p>I don&rsquo;t eat Pistachio nuts as they tend to make me unwell. &nbsp;I told the waiter about this. &nbsp;I expected to get a bit of extra mash or turkey to compensate. &nbsp;Did I buggery.&nbsp;</p> <p>This was someone else&rsquo;s, but just imagine it without the stuffing roll.</p> <p><a href=""></a></p> <p>&nbsp;</p> <p>This is what I&rsquo;ve come to expect a Thanksgiving dinner plate to look like...&nbsp;</p> <p><a href=""></a></p> <p>Seriously.
&nbsp;Where&rsquo;s my fucking food? Where&rsquo;s my goddamn cranberry sauce? &nbsp;Where&rsquo;s the mountain of mashed potato, and great lakes of gravy? Oh wait. &nbsp;It&rsquo;s not here. &nbsp;Sorry. &nbsp;I forgot where I was there.</p> <p><strong>Third Course. &nbsp;</strong>Dessert. Options are Pumpkin Pie, or &ldquo;Doughnut Funfair&rdquo; - &nbsp;or something equally insanely named and utterly sickening. &nbsp;I make it very clear to the waiter that I don&rsquo;t eat chocolate, and he says that&rsquo;s fine. &nbsp;</p> <p>My plate arrives.. about 10 minutes before everyone else&rsquo;s. &nbsp;It contains 2 scoops of something white and unidentifiable, and something brown, and a wafer of something brown and sticky. &nbsp;On further questioning, it transpires that<em> &ldquo;eees Vanilla, and Chocolate&rdquo;</em>. &nbsp;Right. &nbsp;What part of<em> &ldquo;I can&rsquo;t eat chocolate, coffee or tea&rdquo;</em> did you misunderstand?</p> <p>The plate is withdrawn, and replaced several minutes later with 2 scoops of something white, and one scoop of something greenish. &nbsp;Apparently this time it&rsquo;s Vanilla and Pistachio. &nbsp;</p> <p>Really?&nbsp;</p> <p>Really?</p> <p>Did you completely fail to grasp the <em>&ldquo;I can&rsquo;t eat pistachio&rdquo; </em>from the earlier course, and still decide to insult me with something else, in this different course, which I still cannot eat?&nbsp;</p> <p>It&rsquo;s at this time that I attempt to bring reason to the table with a suggestion: instead of bringing me things to look at, how about I ask what there is, and whether I can just have a glass of wine or brandy instead? Apparently there&rsquo;s Mango Sorbet. &nbsp;Well, it&rsquo;s not perfect. &nbsp;But it&rsquo;ll do.</p> <p>Who the FUCK puts chocolate in a Pumpkin Pie anyway? Here are some recipes for Pumpkin Pie.
&nbsp;None of them contain Chocolate, Cocoa, Coffee or anything equally inedible.</p> <p><a href=""></a></p> <p><a href=""></a></p> <p><a href=""></a></p> <p><a href=""></a></p> <p><a href=""></a></p> <p>Interesting that. &nbsp;Literally none of them contain chocolate. &nbsp;Wonder why? That&rsquo;s because chocolate either wasn&rsquo;t available in the 1600s, or they just don&rsquo;t see the need to pollute a natural flavour with something so unpleasant as cocoa.&nbsp;</p> <p>So, three glasses of wine, all of which were gorgeous, although a bit short in supply. &nbsp;I tend to think of a decent dinner containing 2-3 large glasses of wine per person. &nbsp;Perhaps I&rsquo;ve been dining abroad too much, but anyway. &nbsp;</p> <p>Five tablespoons of soup, at best.&nbsp;</p> <p>2 slices of turkey, a small dollop of mash (which may or may not have been chestnut mash), 6 green beans, a sole Brussels sprout (cut into 5 sections), and maybe 3 tablespoons of gravy. &nbsp;</p> <p>3 scoops of Mango Sorbet (finally (!))</p> <p>&pound;45.00 a head. &nbsp;</p> <p>Plus 12.5% service charge, which frankly, wasn&rsquo;t warranted. &nbsp;The Dessert Debacle alone wiped the possibility of any service charge off the bill as far as I&rsquo;m concerned.</p> <p>&nbsp;</p> <p>Food: 6/10.&nbsp;</p> <p>Service: 2/10</p> <p>Ambience: 2/10</p> <p>&nbsp;</p> <p>Yeah. No.</p> <p>&nbsp;</p> GWAN: Snakeoil Beware <p> <p>I&rsquo;ve heard quite a bit about the &ldquo;G-WAN Application Server&rdquo; over the past few weeks. &nbsp;Initially it was a Serverfault question that left me thinking &ldquo;WTF&rdquo; (<a href=""></a>)</p> <p>I took a look at their website and thought: &ldquo;Those are pretty insane claims&rdquo;. &nbsp;They&rsquo;re also the kind of crap you tend to see where the intended audience is somebody who has absolutely no clue about scalability, or production-readiness.
Y&rsquo;know, Managers.</p> <p>&nbsp;- Quite well summarised by this comment: &nbsp; <em>GWAN isn't designed to be a robust webserver, it's designed to perform exceptionally well in contrived and outlandish benchmarks, so PHBs will demand the IT team use it and buy support...</em> &ndash; <strong><a href="">Chris S</a>&diams; 19 hours ago</strong></p> <p>Interestingly enough, the only person who answered that question was a fellow called <a href="">Gil</a>&nbsp;who apparently works for G-WAN.. &nbsp;I don&rsquo;t normally take much offense to product owners on Serverfault et al, but the vast majority of his answers do seem to be a bit spammy.&nbsp;</p> <p>A lot of the pages on the website refer back to benchmarking the server. &nbsp;I&rsquo;m really not interested in that, not here, anyway. &nbsp;You&rsquo;ll see why in a little while.</p> <p>Moving on, somewhat, I decided that I should at least *download* the server, and have a poke about. &nbsp;</p> <p>So, I downloaded the tar.bz2 file containing the server (Bzip2? I suppose they&rsquo;d be interested in making it *appear* as small as possible.)</p> <p>This is what I unbzipped.&nbsp;</p> <p><a href=""></a></p> <p>&nbsp;</p> <p>I&rsquo;m a little terrified, &nbsp;to be quite honest.</p> <p>One of the things I *love* about Apache is being able to download the source code and &nbsp;take a good old nose about in it. &nbsp;I rarely *need* to, but it&rsquo;s nice to have the option. &nbsp;The biggest problem I have with closed-source applications is that this isn&rsquo;t possible. 
&nbsp;We just have to trust that they&rsquo;ve not left any big-ass buffer exploits in there, or that the code doesn&rsquo;t also contain an SSH daemon, or there&rsquo;s not a backdoor that&rsquo;s gonna send all my keystrokes to $unfriendly_nation.&nbsp;</p> <p>That doesn&rsquo;t appear to be possible, because of this:</p> <pre>gwan: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, stripped</pre> <p>So, back to that directory tree..&nbsp;</p> <p>There&rsquo;s a mysterious &ldquo;<strong></strong>&rdquo; directory, containing *more* stuff. This appears to be some kind of virtualhost configuration.&nbsp;</p> <p>Interesting way to do it, but it&rsquo;s a bit esoteric.&nbsp;</p> <p>Inside that directory, there&rsquo;s the even more bizarre &ldquo;<strong>#</strong>&rdquo; directory, which sounds entirely redundant, because didn&rsquo;t the next level up already define the IP address?&nbsp;</p> <p>-- Apparently the 2nd level directory is for defining virtual hosts on the listener. &nbsp;Well, that *almost* makes sense, but why not just use a config file like Apache, Nginx, Lighty, or well, any other goddamn server, Ever.&nbsp;</p> <p>Oh well, it&rsquo;s weird.&nbsp;</p> <p>Inside that, there&rsquo;s ./<strong>csp</strong>, ./<strong>www</strong>, ./<strong>handlers</strong> and ./<strong>logs</strong>.</p> <p>- From their README.txt:</p> <pre>/csp ........ G-WAN C/C++ and Objective-C/C++ examples<br />/www ........ G-WAN web server's 'root' directory<br />/handlers ... G-WAN web server's handler sample<br />/logs ....... G-WAN web server's access/error log files</pre> <p>OK. &nbsp;So what&rsquo;s the difference between a csp and a handler?&nbsp;</p> <p>Well, inside csp, there&rsquo;s a file called hello.c, which appears to be a *very* simple hello world example. &nbsp;Frankly, I&rsquo;m a bit bored of Hello World examples, so let&rsquo;s see if I can make my own directory structure, and make a FizzBuzz page?
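For what it's worth, the FizzBuzz logic itself is the trivial bit. Sketched here in Python rather than as a G-WAN C servlet; the interesting question is purely how much plumbing the server demands to serve it.

```python
def fizzbuzz(n: int) -> str:
    """Classic FizzBuzz: multiples of 3 give Fizz, of 5 give Buzz, of both give FizzBuzz."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The page body would just be this, joined up:
print(" ".join(fizzbuzz(i) for i in range(1, 16)))
# 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
```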
&nbsp;That way I&rsquo;ll have some idea how ass this idea is, and whether I&rsquo;d ever *ever* see a use for it.</p> <p>"handlers" is an odd one. &nbsp;I get that .csp contains files like hello.c which seem to be the closest thing to an MVC&rsquo;s Controllers. Maybe.&nbsp;</p> <p>Handlers is worrying, actually. &nbsp;The file main_hello.c__ appears to contain some kind of HTTP handler, assigning C functions to the processes required to build a HTTP request. Why do I care about building a HTTP request? &nbsp;Do I care? Do I *need* to write a special handler just to pass off to fizzbuzz.c?</p> <p>Of course, there&rsquo;s bugger all documentation to tell me what to do. &nbsp;There is, however, commercial support, which starts at 149 Generic Currency Units (their page shows Swiss Francs, but I can&rsquo;t seem to change it).</p> <p>So a &ldquo;Hobbyist&rdquo; pays 149 GCU. Seems steep; most other hobbyist programs are free.&nbsp;</p> <p>Consultants pay 1499 GCU.&nbsp;</p> <p>Enterprises pay 10x that amount.&nbsp;</p> <p>If you want 24x7 support, you&rsquo;re looking to pay 149,999.00 GCU.&nbsp;</p> <p>God help you if you also want to white label it, as they tack on an extra 199,999.00 GCU for that. Blimey. &nbsp;</p> <p>For an extra 7,999 GCU you can enter into a code escrow agreement, meaning that you get an encrypted version of their source code, and if they go bust, they *hopefully* give you the key.</p> <p>I think it&rsquo;d be a far better use of the money to pay for the Code Escrow, and then use the 250k you&rsquo;ve saved on Amazon GPU instances to bruteforce the key. &nbsp;But there we go. &nbsp;You&rsquo;d probably find that they&rsquo;ve used Ultra super duper mega encryption that was *also* written in house. &nbsp;&nbsp;</p> <p>Back on topic.
&nbsp;Well, almost.</p> <p>I just found this rather bold statement hidden in their FAQ (</p> <p><strong><em>&ldquo;G-WAN never had a security breach since day one in June 2009 (other servers can't sustain the same claim).&rdquo;</em></strong></p> <p>I suspect there&rsquo;s one reason for them to make that statement. Nobody&rsquo;s using it.</p> <p>Alternatively, it could be that they do exist, but nobody&rsquo;s found them because we can&rsquo;t look at the source code and find them.</p> <p><strong><em>G-WAN was written to port Desktop apps to the Web because there was no Web application server able to do that job.</em></strong></p> <p>Eugh.&nbsp;</p> <p>There&rsquo;s two ways to make Desktop applications more widely available. &nbsp;</p> <p>1) Rewrite the fucking thing, don&rsquo;t port it.</p> <p>2) Use something like Citrix XenApp to publish it. &nbsp;</p> <p>I don&rsquo;t approve of the concept of writing a server to &ldquo;port&rdquo; a desktop application. &nbsp;There&rsquo;s something deeply odd about that concept.&nbsp;</p> <p><strong><em>Despite its small footprint, G-WAN is an all-in-one solution because communicating with other servers (FastCGI, SCGI, etc.) takes time (enlarging latency), and wastes CPU and RAM resources. Remember that our goal here is to use the ultimate low-latency and resource-saving solution. This is why G-WAN is a:</em></strong></p> <p><strong><em>Web server</em></strong></p> <p><strong><em>App. server</em></strong></p> <p><strong><em>Cache server</em></strong></p> <p><strong><em>Key-Value store server</em></strong></p> <p><strong><em>Reverse-proxy and elastic load-balancer server</em></strong></p> <p>What ever happened to the <a href="">UNIX philosophies </a>&nbsp;of doing one thing, and doing it well, and then having a bunch of loosely coupled servers performing different tasks. Evidently not considered here. 
&nbsp;</p> <p>I&rsquo;d rather have a separate web server, a separate application server, connect the two over proxied HTTP, then connect to the cache with a plain text protocol, and the key/value server over http or a similar mechanism.&nbsp;</p> <p>No, I don&rsquo;t like the idea of having all in one box. &nbsp;It&rsquo;s asking for mischief.</p> <p>Oh, and their examples are weird. &nbsp; I found this snippet earlier on, and it made me go &ldquo;Huh?&rdquo;</p> <pre>// but we don't want to display "%20" for each space character<br />&nbsp; &nbsp;{<br />&nbsp; &nbsp; &nbsp; char *s = szName, *d = s;<br />&nbsp; &nbsp; &nbsp; while(*s)<br />&nbsp; &nbsp; &nbsp; {<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;if(s[0] == '%' &amp;&amp; s[1] == '2' &amp;&amp; s[2] == '0') // escaped space?<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;{<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; s += 3; &nbsp; &nbsp; // pass escaped space<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; *d++ = ' '; // translate it into the real thing<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; continue; &nbsp; // loop<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;}<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;*d++ = *s++; // copy other characters<br />&nbsp; &nbsp; &nbsp; }<br />&nbsp; &nbsp; &nbsp; *d = 0; // close zero-terminated string<br />&nbsp; &nbsp;}</pre> <p>I dunno about you, but I&rsquo;d probably just call urldecode() and be done with it.</p> <p>Oh, and this.</p> <p><a href=""></a></p> <p>Templating. Heard of it? MVC? Heard of it? Evidently not.</p> <p>Right. &nbsp;Back to trying to make a Fizzbuzz application?</p> <p>I had a brief attempt, and when I restarted the server I got this:&nbsp;</p> <pre>vagrant@precise64:~/gwan_linux64-bit$ ./gwan<br />Floating point exception</pre> <p>No logs, nothing on stderr, nothing. &nbsp;Smooth, guys. 
&nbsp;I half expected a HTTP 402 - Payment Required ;)</p> <p>&nbsp;</p> <p>-- The 2nd time I tried, I got this equally cryptic error message</p> <pre>Signal &nbsp; &nbsp; &nbsp; &nbsp;: 11:Address not mapped to object<br />Signal src &nbsp; &nbsp;: 1:SEGV_MAPERR<br />errno &nbsp; &nbsp; &nbsp; &nbsp; : 0<br />Thread &nbsp; &nbsp; &nbsp; &nbsp;: 0<br />Code &nbsp; Pointer: 000000445659 (module:./gwan, function:??, line:0)<br />Access Address: 000000000001<br />Registers &nbsp; &nbsp; : EAX=000004113418 CS=00000033 EIP=000000445659 EFLGS=000000010246<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; EBX=000004113418 SS=08201c24 ESP=000008801a00 EBP=000000000000<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; ECX=0000004457e0 DS=08201c24 ESI=000000000001 FS=00000033<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; EDX=000000000000 ES=08201c24 EDI=000004113418 CS=00000033<br />Module &nbsp; &nbsp; &nbsp; &nbsp; :Function &nbsp; &nbsp; &nbsp; &nbsp;:Line # PgrmCntr(EIP) &nbsp;RetAddress &nbsp;FramePtr(EBP)<br />~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~</pre> <p>This bit is epic. &ldquo;function ?? line: 0&rdquo; &nbsp;- You&rsquo;d think it&rsquo;d know where it threw a wobbly.&nbsp;</p> <p>I&rsquo;ll remind you that when I fuck up in ruby or python, the interpreter tells me where it failed. I like that, it makes debugging a huge application far easier.&nbsp;</p> <p>I really can&rsquo;t be arsed to make a contrived example work on a server that&rsquo;s just weird.&nbsp;</p> <p>I&rsquo;m gonna go back to writing in *real* languages, designed with web application development in mind.&nbsp;</p> <p>I just can&rsquo;t take application development seriously if there&rsquo;s no kind of built-in MVC. 
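Incidentally, that hand-rolled "%20" loop from the example earlier is a single standard-library call in most sane environments. Python's version, for comparison, decodes every percent-escape rather than special-casing spaces:

```python
from urllib.parse import unquote

# One call replaces the whole character-shuffling loop from the G-WAN sample,
# and handles all percent-escapes, not just "%20".
print(unquote("hello%20world%21"))  # hello world!
```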
&nbsp;</p> <p>My conclusions are these:&nbsp;</p> <p>G-WAN is the Gentoo of the Application Server field.</p> <p>Designed for ricers who are more concerned with made-up benchmarks and the belief that they&rsquo;ve tuned *every* little feature to the max.</p> <p>It&rsquo;s also weird in all the ways I mentioned above. &nbsp;</p> <p>Oh, and don&rsquo;t even consider using it in a production environment. &nbsp;If you&rsquo;ve *really* got to the point that you can&rsquo;t tune Apache, or Nginx, or Lighty, then there are other solutions, but G-WAN isn&rsquo;t one of them.</p> <p>There&rsquo;s Varnish, and there&rsquo;s HAProxy, and there&rsquo;s Pound and there&rsquo;s other ways to accelerate your application, but I can&rsquo;t see any benefit from rewriting into C and using a somewhat shonky API with a bizarre lack of debug output.&nbsp;</p> <p>I mean, by all means, if you&rsquo;ve got the cash to drop on the support, do so, but I&rsquo;d rather spend that cash on hiring a skilled engineer to make your application scale, and your servers hum happily.</p> <p>&nbsp;</p> <p>Oh, and I suspect that if you ran G-WAN on Gentoo you&rsquo;d find one of three things.</p> <p>1) You&rsquo;d create a supermassive black hole.</p> <p>2) Everything would slow down (relativistically speaking).</p> <p>3) Nobody would give a fuck.&nbsp;</p> <p>Probably the last.</p> </p> Answered: Network Design <p>So my general gist from all of this is that a chassis switch is somewhat more expensive, but gives far more options for growth and expansion. &nbsp;All this was based on indicative list pricing of Extreme Networks gear (because I understand how that fits together). &nbsp;I suspect that Brocade, Cisco and Juniper, and Force10, and all those other switch hardware vendors would be pretty similarly priced.</p> <p>Those prices also don't include the cost of the cabling, the power, the racks, the cooling, or many other things.
&nbsp;It also doesn't include my time in setting it all up.</p> <p>I do enjoy shopping questions, and I'm always in two minds about answering them, regardless of the prices being out of date or inaccurate, because at the end of the day this is just a ballpark estimation figure, and you'd never ever pay the full list price.</p> <p>So even if I'm not allowed to answer shopping questions on Serverfault, I'm inclined to make more of these types of posts.</p> Commvault License Keys <p> <p>If you're unfortunate enough to be using Commvault as your Backup Solution, and I mean that in the nicest possible way, at some point you'll be tasked with the challenge of redeeming your full license keys.</p> <p>This alone is by no means an easy feat. &nbsp;Baseblack's Commvault instance was provided by Hitachi Data Systems, and when you want to create a support ticket to register your licenses, you have to jump through *many* hoops, including the Global Portal, the HDS portal, the Commvault portal, and so on.</p> <p>Luckily, through the process of sending some really nasty nastygram emails, I've managed to track down a single link that waives the need to log into *any* support portals. &nbsp;Finally. Why wasn't this link just included with the software? Why make you jump through their hoops and nonsense? Well, I don't know either.</p> <p>Here's the link to redeem your Commvault License Keys.</p> <p><a href=""></a></p> <p>That wasn't so hard, was it?</p> </p> My Battle with Commvault <p>This is a bit long-winded, and wordy. &nbsp;If you've come here looking for tips on improving Commvault backup performance and/or throughput, then you should click <a href="#CommvaultTuning">here</a> for the good stuff.&nbsp;</p> <p>It's been a long time since I blogged.
&nbsp;It's been a really long time since I blogged about anything we've been doing at $Dayjob.&nbsp;</p> <p>I've spent the better part of the year working on sorting out the backup solution here.&nbsp;</p> <p>Initially, we had a storage server, with 30-odd TB of SATA storage, using some bit of LSI technology with a battery-backed write cache.. Pretty good for scheduled rsnapshot backups. However, in May, we decided to sort out off-site backups, and build up some kind of disaster recovery strategy.&nbsp;</p> <p>Our storage reseller sent us a bunch of quotes for a number of hardware and supported software solutions. &nbsp;We're kinda limited by budget, and as a result also limited to the technologies we could use for backup.</p> <p>We have a Hitachi/Bluearc NAS filer, which comprises 2 tiers: one high-speed SAS pool, and one lower-speed, but huge SATA pool. &nbsp;The storage is all connected across the NAS backplane with 4Gbit FibreChannel, and the 2 NAS heads are cross-connected and interconnected to our core switch with 4 (2 per head) 10Gbit Fibre Ethernet links.&nbsp;</p> <p>Given the cost of media, and the ease of transporting offsite, a tape backup system was chosen. &nbsp;It's far cheaper in terms of offsite/offline storage to have tape media that sits in a box, rather than boxes of spinning rust that have to be maintained in a cool room, with power and maintenance costs included.</p> <p>The first solution would involve directly connecting the tape drives to the filer. &nbsp;Ideally with FibreChannel, but as we've already used all the FC ports on the filer for storage, we'd have to invest in a pair of FC switches.
&nbsp;This is not a small outlay, and makes that solution prohibitively expensive.</p> <p>Luckily, an alternative exists, where we have a backup server, use NFS to mount the exported filesystems, connect that to the core with 10Gbit Ethernet, and then connect the tape drives to that server.&nbsp;</p> <p>We ordered a Spectra Logic T50e autochanger, Commvault Backup software and a server to run it all on, from our storage vendor. &nbsp;This is where the problems started.</p> <p>Predictably, there was at least one problem with the hardware. &nbsp;Our 10Gbit core is entirely fibre, specifically MMF, but that's beside the point here. &nbsp;The new backup server that had been ordered turned up with an X540-T2 NIC, which is 10Gbit over Copper.&nbsp;</p> <p><img src="" alt="" /></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>What we needed was one with SFPs, <img src="" alt="" width="300" height="300" />&nbsp;like this, the X520-SR2. &nbsp;So we had to order one of those, and have that delivered, and postpone the whole project until that card arrived. &nbsp;Three.. Days .. Later, the NIC arrived. &nbsp;Without any optics. &nbsp;Apparently when ordering from Dell, these are two separate line items. &nbsp;This is not the case when ordering from Intel or anyone else.</p> <p>So, a week later, we got the whole NIC and made it work. &nbsp;About 2 weeks later, the reseller/distributor of the commvault software was on-site to install the software onto our server. &nbsp;Problem.. We'd been told that the entire software suite works on Linux. &nbsp;The actual scenario is that the backup controller (Commcell GUI) only works on Windows (despite just being a Java application). &nbsp;Not only does it only work on Windows, but it only works on Windows Server 2008 (probably works on 2003, but who the hell wants to install that in a new system). &nbsp;So we had to get a windows server license from somewhere. 
&nbsp;Luckily, one of the things I'd downloaded was the Windows 2008 Evaluation version ISO, and shoved it in the central ISO store. &nbsp;So on the day that the Commvault installer guy turned up, we had assumed that the plan would be something like:&nbsp;</p> <p><strong>1)</strong> Install the software onto a Linux box.</p> <p><strong>2)</strong> Start backing stuff up.</p> <p><strong>3)</strong> Pub?</p> <p>Instead, we had to try and install Windows 2008 R2 onto a virtual machine (which was slower than cold toffee). &nbsp;In the end, I just stole my desk mate's PC, and installed Windows Server on there, then moved it from under our desk, to the store room. &nbsp;At some point, we're gonna have to P2V it and free up a workstation (or install it somewhere more permanent).</p> <p>So.. a good proportion of the day was taken up with fucking about with Windows Servers, before actually getting to any of the configuration stuff.</p> <p>I think it's good that for the most part, the process of installing the Unix/Linux Commvault agent is pretty straightforward, as once you've got the server-side set up, the client installation goes off, talks to the server, and pretty much installs itself.</p> <p>More installations should happen like this, incidentally.</p> <p>Anyway.. We eventually got the Linux client installed, and the "Media Agent" - This is the bit that actually talks to the NFS, and also talks to the FibreChannel-attached tape devices, and manages the autochanger. &nbsp;We had to define the directories to be backed up in the Windows Server controller, then configure what goes where, and so on. &nbsp;(That's definitely a blog for another day). &nbsp;We kicked off a backup of *everything* - which at the time was about 20TB, and buggered off home. &nbsp;</p> <p>We came up with figures at some point for the amount of data we'd be backing up when working at full capacity..
This works out at about 30TB in total, and we wanted to be able to do it over a weekend, so in under 48 hours. So we'd need a backup system that could perform at at least 30TB/48Hrs, which works out at 173MB/s sustained throughput. -- This is important, and we'll come back to this figure a number of times.</p> <p>On the first best-effort backup we tried, we were getting a total combined throughput of 290GB/hr (Commvault chooses GB/hour as its unit of choice, a pretty weird one, to be honest..) 290GB/hr is 80.5MB/s.. At this speed, a full backup will take 30TB/(80.5MB/s), which is about 4.3 days. &nbsp;So, we'd have overshot our 48 hour backup window by more than double.&nbsp;</p> <p>This is about the time that we decided that we could have two problems, and broke into a very long and deep warren of rabbit holes surrounding two objectives: benchmarking the tape performance (including Commvault), and benchmarking the Bluearc NAS.</p> <p>It's actually a lot more complex than all of that, because there's no single point of this system that isn't somehow connected to a bunch of other systems.
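Those window and throughput figures are worth a quick sanity check; a back-of-the-envelope sketch (Python, decimal units assumed):

```python
# Back-of-the-envelope check on the backup window arithmetic (decimal units).
TB, GB, MB = 1e12, 1e9, 1e6

required = 30 * TB / (48 * 3600)        # 30TB in a 48-hour weekend window
print(f"required   : {required / MB:.1f} MB/s")   # ~173.6 MB/s sustained

observed = 290 * GB / 3600              # Commvault's reported 290GB/hr
print(f"observed   : {observed / MB:.1f} MB/s")   # ~80.6 MB/s

full_backup_days = 30 * TB / observed / 86400
print(f"full backup: {full_backup_days:.1f} days")  # ~4.3 days, over double the window
```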
&nbsp;We'd need to look at the Bluearc, the 10Gbit core, the brand new backup server, the FibreChannel cards, the tape drives, the tape library, and all the software in between.</p> <p>Luckily, benchmarking NFS is fairly straightforward, and I devised a thing to run a bunch of IOZone tests overnight, so that the next day, I'd have between 1 and 1000 datapoints of IOZone performance to have a look at.</p> <p>We had to figure out the optimum parameters for reading NFS files at speed, given a range of possible block and chunk sizes, as well as a number of options regarding being single-threaded, or multi-threaded.</p> <p>This is the script I threw together to generate benchmarking runs using IOZone.</p> <pre>THREADS="1 2 4 8 16 24"<br />RS="8k 256k 512k 1M 4M 8M 16M"<br />FS="1M 2M 8M 16M 32M 256M"<br />echo "cd /mnt/shows/benchmarktest/`hostname`/" &gt; `hostname`<br />for f in $FS;<br /><span style="white-space: pre;"> </span>do for r in $RS;&nbsp;<br /><span style="white-space: pre;"> </span>do for t in $THREADS;&nbsp;<br /><span style="white-space: pre;"> </span>do echo "iozone -R -b autobench-${t}T-${r}-${f}-`date +%s`.asc -l${t} -u${t} -i0 -i1 -F $(seq -w -s ' ' 1 ${t}) -r ${r} -s ${f}";&nbsp;<br /><span style="white-space: pre;"> </span>done;&nbsp;<br /><span style="white-space: pre;"> </span>done;&nbsp;<br />done &gt;&gt; `hostname`</pre> <p>"/mnt/shows" is a location that's on our fast SAS storage, so at least by this benchmark, we'd be looking at close to the maximum throughput we could ever hope to get out of our disk arrays. As the majority of our data is also stored on that faster pool, it's likely to be similarly representative of the real data.</p> <p>Anyway.&nbsp;</p> <p>After the first benchmark run, we're seeing a roughly direct correlation between number of threads and throughput, up to a point where there are more threads than cores, and the throughput slows down.
&nbsp;This is to be expected, because when you have more running threads than CPU cores, you get more and more context switches, and so on.&nbsp;</p> <p>The other conclusion that's easy to draw is that re-read performance is crazy-high. &nbsp;Basically, the Bluearc has a certain quantity of high-speed cache for data that's being read and re-read, so the first time you get it, you pull it from the disk. Subsequent reads come out of memory, and so are blindingly fast, but only for small file sizes. &nbsp;Big files can't be stored in the onboard cache, which is where the BlueArc's option for an SSD tier becomes phenomenally cool. It also comes at an insane price, but there we go.</p> <p>The overall best performance was from a 4k block/record size, from a 1MB file, using 20 reader threads (on a box with 32 cores, 20 threads is pretty much the sweetest spot), and this gave us 1.4TB/hour read performance.&nbsp;</p> <p>We reported these results to BlueArc, who believed that we may have hit a bug in the hardware optimised NFS server that they provide. &nbsp;We had two choices: a) upgrade to the latest patch release, and/or b) turn off hardware optimisation, and see what happened.</p> <p>We ran another overnight batch of IOZone tests with the hardware optimisation turned OFF, and found that for some read sizes/thread counts, the performance was greatly improved, primarily small chunks from a big file, with only 2 reader threads, whilst performance for operations with multiple readers was absolutely destroyed.
&nbsp;</p> <p>The knock-on effect of this is that if we were only servicing 2-4 clients, or 2-4 reader threads, we'd be fine to stick with software NFS processing, but for multi-user performance (like having a farm of 100 rendernodes reading and writing to the disks), we'd be screwed if we left hardware NFS mode off.</p> <p>So we turned that back on, and set about tuning NFS read block sizes to closely match the best performance we'd seen.</p> <p>We got some Bluearc engineers in at some point in the last 100 days (it really does get hard to keep track of when actual events happened), and they upgraded our RAID arrays and also the NAS heads to the latest version, which seemed to improve NFS performance too.&nbsp;</p> <p>That really is the very quick review of tuning NFS performance. &nbsp;There's probably a lot more to say, if I can remember it!</p> <p>The interesting thing to note about 10Gbit Ethernet performance is that in the scenario we're using it, we probably have 1-4 threads at a time utilising the connection. &nbsp;This makes it incredibly difficult to saturate the link, because the majority of applications written in the last 10 years have been designed with 1Gbit Ethernet in mind, so they either don't scale terribly well, or are hard-coded to use a small number of threads. &nbsp;This includes the Linux NFS client, funnily enough. The Solaris NFS client, by contrast, is built on SunRPC, and can actually multiplex requests across connections if you raise the client connection limit.&nbsp;</p> <p>If you're booted into Solaris/OpenSolaris/OpenIndiana, you can edit this setting with the following little gem.</p> <pre>$ mdb -kw<br />clnt_max_conns/W 8<br />q</pre> <p>"mdb" is the Solaris modular debugger (used here rather like a live sysctl on Linux), and "clnt_max_conns" is the maximum number of connections the NFS client will open to a server.
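</p>

<p>For the curious, Linux of this vintage does have a lever in a similar vein. It's not the same knob (Solaris is raising the connection count, whereas this caps in-flight RPC requests per connection), and the path and default vary by kernel version, so treat this as illustrative:</p>

```shell
# SunRPC slot table: the number of concurrent in-flight RPC requests the
# Linux NFS client will issue per connection (commonly defaulting to 16
# on kernels of this era).  Raising it lets a single mount push harder.
cat /proc/sys/sunrpc/tcp_slot_table_entries
# echo 64 > /proc/sys/sunrpc/tcp_slot_table_entries   # as root; remount to take effect
```

<p>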
&nbsp;We found that if we booted into OpenIndiana, and mounted the NFS mounts, and set the clnt_max_conns quite high, we'd get blistering performance out of the Bluearc, until OpenIndiana locked up and fell over. &nbsp;We didn't really put much effort into figuring this one out, it was more just a proof of concept thing to see if it worked. &nbsp;I suppose I'd quite like to go back to tinkering with OpenIndiana on 10Gbit Ethernet, but there really isn't time.</p> <p>Sadly.</p> <p>I did a bunch of stuff to the NIC driver options on Linux when trying to get decent performance out of these 10G NICs, and came, somewhat surprisingly, to the conclusion that if you leave the driver module defaults as they are when you install the thing, you get far better performance than you do when you start dicking about with things like TCP Window Scaling, TCP Checksumming and Selective ACK. &nbsp;The less you fiddle with, the better your performance is, and if for whatever reason, you get slightly better performance through tinkering, it'll be in the 0.5-1% range, rather than a 10-20% increase in speed.</p> <p>This is something else that I'd quite like some more time to benchmark and test things with, but again, we neither have the time to revisit this in any great depth or detail, nor the available hardware for tinkering. &nbsp;10Gbit NICs are still pretty expensive. I suppose when they come down in price, I'll probably buy some of my own and have a good old fiddle with them, and the settings, and write that up.. One day. 
&nbsp;;) I suppose if someone wanted to buy me the kit, I'd have to evaluate it and write up some kind of benchmark about what kind of performance you get by altering all the different settings. But again, as with everything, it'll only ever be a benchmark, as your true performance and throughput tend to vary greatly depending on what it is you're doing with the hardware.</p> <p>With the NFS benchmarking out of the way (for now), it was time to have a good old poke at the tape drives. &nbsp;</p> <p>Luckily, it's pretty straightforward to benchmark a tape drive. There are three options for doing it.&nbsp;</p> <p>1) Commvault "Validate Drive" - You load a tape, it writes stuff to it, reads the stuff back and gives you a speed. At least this mechanism just generates data from /dev/zero or wherever, and writes it directly to the tape, so there are no disks involved.</p> <p>2) IBM TapeTool - These are IBM HH-5 Ultrium drives, so there's an IBM Tape Test utility; I suspect this does something very similar to the Validate Drive tool.</p> <p>3) GNU tar. Despite being clever FC-attached tape drives, they're still just magnetic character devices, so we can write directly to them with mt and tar. &nbsp;</p> <p>Results:</p> <p>1) Commvault reported ~120MB/s write, and ~135MB/s read speeds for both drives.
&nbsp;</p> <p>2) The IBM Tape Tool utility reported ~130MB/s write speeds for both drives.</p> <p>3) When it came to using GNU tar for benchmarking, it was easiest to make it a two-step process.</p> <p>The `pv` tool is incredibly useful for this kind of thing, as you can visualise the amount of data flowing down a FIFO pipe.&nbsp;</p> <p>With tar, just run something similar to the following.</p> <pre>tar cvf - /path/to/thing/to/backup | pv &gt; /dev/st0</pre> <p>So tar will read stuff from the path, and write it to the file -, which is STDOUT, which is piped into pv to see the throughput, and then written out to /dev/st0 (testing this by writing to /dev/null is also acceptable for testing NFS performance only; just grab the data with tar, and dump it in the bitbucket).</p> <p>If we can get the performance with tar anywhere near the benchmarked write speeds from TapeTool or Commvault, we'll have proved two things: 1) that the tape drives can handle data at this rate, and 2) that the NFS servers can push it at this rate.</p> <p>From the SAS tier, we get this kind of performance.</p> <pre>2GB 0:00:44 [ 253MB/s] [ &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &lt;=&gt; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; ]</pre> <p>From the SATA tier, we get:&nbsp;</p> <pre>2.43GB 0:01:10 [ 116MB/s] [ &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;&lt;=&gt; &nbsp; &nbsp; ]</pre> <p>Which is to be expected.
&nbsp;We always expected the SATA disks to be slower, by about this margin.</p> <p>So, with the SAS tier able to push a single stream at 253MB/s, that's 126MB/s per drive, which is pretty close to (slightly over) the speed of the Commvault validation, and a little under the IBM TapeTool benchmark.</p> <p>So we've effectively proved, using real data, that the SAS disks can push at the right speed to keep the tapes happy at full speed. &nbsp;We've also proved that the tape drives can write at this speed.</p> <p>So why the bloody hell are we still seeing only 300-400GB/hour performance through Commvault?</p> <p><a name="CommvaultTuning"></a>Breaking it apart, there are a *lot* of different parameters in CommVault that are tunable for performance.&nbsp;</p> <p>I'm gonna come back to this in a more in-depth way at some point, but it breaks down to:</p> <p>In Subclient Properties:</p> <p><strong>1)</strong> Number of Data Readers: set this to 2x number of tape drives (even though CommVault support tell you not to!)</p> <p><strong>2)</strong> Allow multiple data readers within a drive or mountpoint - Tick this.</p> <p><strong>3)</strong> Storage Device -&gt; Data Transfer Option -&gt; Resource Tuning -&gt; Network Agents: Set this to 2 or 4 (see which works better for you).</p> <p>&nbsp;</p> <p>In Storage Policy Properties (specifically, the properties of your storage policies):</p> <p><strong>1)</strong> Device Streams (set this to 2x your tape drives too)</p> <p><strong>2)</strong> Enable Stream Randomization: Tick this.</p> <p><strong>3)</strong> Select an Incremental Storage Policy (and make the same changes to that one as this one).</p> <p>&nbsp;</p> <p>In the Copy Properties (of all given Storage Policies):</p> <p><strong>1)</strong> Media tab -&gt; Enable Multiplexing</p> <p><strong>2)</strong> Set Multiplexing Factor to 2x the number of tape drives too.</p> <p><strong>3)</strong> Enable "Use Device Streams rather than multiplexing if possible".</p> 
<p>&nbsp;</p> <p>In the Global Commvault Control panel:</p> <p>In Media Management:</p> <p><strong>1) </strong>Set Chunk Size for Linux FileSystem to 16384 MB.</p> <p>&nbsp;</p> <p>Media Agent Properties:</p> <p><strong>1)</strong> "Maximum number of parallel data transfer operations"</p> <p>Set "Restrict to" to 200 (the highest it'll go). &nbsp;-- Why there's any kind of restriction here is a mystery to me.</p> <p>In the Control tab, Data Transfer box:</p> <p><strong>2) </strong>Enable "Optimize for concurrent LAN backups".</p> <p>With the above settings, we're now getting performance between 700 and 900GB/hour depending on what we're backing up, which, luckily, for now, is within our backup window. &nbsp;I fear that if we plan to back up more within the same time period, we'll need more tape slots, and more tape drives.</p> <p>And that's it, I'm afraid. &nbsp;There are no more details I can give you about tuning the hell out of Commvault. I hope that if some poor bastard out there is trying to get decent backup speeds out of CommVault, this will be of some use.&nbsp;</p> <p>&nbsp;</p> Dear London [Personal] <p> <p>I've been trying to keep my blog more technical in nature these days, but as this was originally posted on Facebook (among much controversy), and also, as per suggestions, posted to Reddit (<a href="">/r/disability</a> and <a href="">/r/london</a>), I decided I should have it here for posterity's sake (and the fact that here, I control the comments and the content).</p> <p>&nbsp;</p> <p>--</p> <p>For a number of years, I&rsquo;ve had knee problems. I&rsquo;m not entirely sure what started them, or rather, what exacerbated them. I remember from a young age having a knee that&rsquo;d give way now and again. Now I&rsquo;m 26, I have recurring problems with knee pain and general knee weakness. There&rsquo;s not a lot I can do about it; there are painkillers and anti-inflammatories, but they&rsquo;re really not solving the root problem.
I&rsquo;m not sure what would.&nbsp;</p> <p>I&rsquo;ve been thinking about this for some time, but these last few days, with the Olympics, have made my commutes throughout London particularly difficult.&nbsp;</p> <p>Along with my good friend Sam Hunt, I wrote this to illustrate my feelings on London transport (although it probably equally transfers to other cities).&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>Dear London,</p> <p>Some of us are disabled. Sometimes it's not obvious, or visible.&nbsp;</p> <p>That does not mean it's OK to disbelieve us or ask us personal questions.&nbsp;</p> <p>Here&rsquo;s a small sample of the questions I&rsquo;m talking about:</p> <p>* &ldquo;What&rsquo;s wrong with you?&rdquo;</p> <p>* &ldquo;You can&rsquo;t be disabled, you&rsquo;re so young.&rdquo;</p> <p>* &ldquo;You should give me your seat, I&rsquo;m pregnant, I need it more than you.&rdquo;</p> <p>* &ldquo;But you made it *onto* the train whilst standing up, why do you need to sit down now?&rdquo;</p> <p>Asking these questions does two things.</p> <p>* 1) It embarrasses us.&nbsp;</p> <p>* 2) It makes you look stupid/insensitive/cruel.&nbsp;</p> <p>**Don&rsquo;t do it.**</p> <p>Some disabled people are willing to talk about their disability with complete strangers, but most are not; would you reveal to a stranger your penis length? Or your most intimate secrets? I&rsquo;m not saying it&rsquo;s the same, but it&rsquo;s an embarrassing thing to ask of a perfect stranger.</p> <p>Sometimes we have good days. Sometimes we have bad days. We don&rsquo;t get to choose which day is good and which day is bad. Just because you&rsquo;ve had a bad day at work does not give you the right to harass us because you want a seat on a tube train (or bus, DLR, Overground or any other form of public transport). Your want does not outweigh our need.</p> <p>I don&rsquo;t always carry my walking stick. This does not mean that I&rsquo;m not always disabled though.
Even on good days (usually towards the end of them) I am in some degree of pain. Some people&rsquo;s disability is obvious (someone in a wheelchair, or with a pair of crutches); other people&rsquo;s is not obvious (someone with ME, or a pacemaker).</p> <p>I didn&rsquo;t make a choice to be disabled. I didn&rsquo;t one day think &ldquo;Hey, I know what I want. I want pain in my knee, and I want it to be difficult to walk when I&rsquo;m tired, or cold, and I really want this&rdquo;. I don&rsquo;t know a single disabled person who&rsquo;d wish any disability on anybody else. We play the cards we&rsquo;re dealt, because there&rsquo;s not much other choice.&nbsp;</p> <p>I have to get into work, every day. I live in West London and work in Soho. I wish I had an entirely step-free commute, I really do. But I don&rsquo;t. And that&rsquo;s not likely to change for a long while. My chosen route (for getting to work on time) takes 40-50 minutes. If I took an entirely step-free route, I&rsquo;d be looking at a 1h30+ commute. Each way. Much of Central London&rsquo;s transport network is off-limits to those unable to do stairs.</p> <p>So whilst I have to make a daily commute that&rsquo;s no fun, you (Londoners and London in general) can make life easier for me (and everyone else who&rsquo;s disabled and forced to use Public Transport).</p> <p>Here&rsquo;s how:</p> <p>* If we look tired, pained or generally unwell: Offer your seat.</p> <p>* If someone asks for a seat, give it without question.</p> <p>* Don&rsquo;t ask questions (as mentioned above).</p> <p>* Get out of the way &nbsp;-- I mean this in the nicest possible way, but when I&rsquo;m navigating stairs, either up or down, I personally grip the handrail with my right hand (strongest side), regardless of the flow of people.
Again, I wish I had a choice here, but I do not.</p> <p>Here&rsquo;s something else that upsets me:&nbsp;</p> <p>When a particularly good looking individual makes eye contact with you, looks you up and down, then instead of maintaining eye contact, looks disgusted, repulsed, or otherwise at unease by the simple fact that you&rsquo;re carrying a walking stick, or a crutch, or you're in a wheelchair.&nbsp;</p> <p>This has happened on a number of occasions. Please Londoners, don&rsquo;t judge based on outside appearance. It&rsquo;s really just as bad as if I were black, or Asian, or of any other ethnic origin; I cannot change my disability any more than they can change their race.&nbsp;</p> <p>It&rsquo;s just rude, and upsetting, and unnecessary.&nbsp;</p> <p>If tomorrow, you had an accident and became disabled for the rest of your life, I&rsquo;m sure you would wish people to show consideration and compassion, even just a little bit.&nbsp;</p> <p>We ask for that today.</p> </p> Retrospective: Unlock London Hackathon <p>Lessons learnt from the Unlock London Hackathon.</p> <p>I had an email on May 16th, &nbsp;asking for some assistance in setting up the wifi network for another hackathon. &nbsp;After my impromptu assistance at LondonRealtime went down so well (and "saved the network"), I was apparently a natural choice for the next one. &nbsp;</p> <p>At least I knew (sorta) the network layout at White Bear Yard.
&nbsp;</p> <p>The main difference between this one and the last was a bit more warning on the side of "We're gonna need wifi", but there was still a clause that it's gotta be "flawless" - Their words, not mine.</p> <p>Given the budget for the network was about &pound;400, I recommended a stack of cheap Access Points (which are the ones I used at LondonRealtime, and they're actually insanely good, considering they're less than 20 quid each), a couple of 24 port switches, and a smallish ethernet router.</p> <p>In this scenario, the router doesn't actually do a lot other than NAT, and DHCP.</p> <p>Last time around, we used my Cisco 2621XM; this time, they bought a Cisco RVS4000. &nbsp;Actually a solid bit of kit. &nbsp;I'm pretty impressed with that part of the network, aside from the fact that its DHCP server doesn't seem to be able to handle anything larger than a /24 network address for the DHCP pool. &nbsp;Anyway.</p> <p>The switches do almost nothing, so unmanaged switches are fine. &nbsp;There's no need for anything too heavy here, because there's no need for a VLAN, and nothing intelligent going on at all.</p> <p>All the Access Points are configured identically. &nbsp;Static IP address, WPA2-PSK, 802.11bgn, and the same SSID. &nbsp;</p> <p>The advantage of this is that you get some form of client-side roaming: a bunch of different APs, with different BSSIDs but the same SSID, and most OSes are intelligent enough to figure out how to move you around across channels and APs. &nbsp;</p> <p>We had 10 Access Points, 5 on each floor, about 10-15 metres apart.</p> <p>By and large, it worked pretty well. &nbsp;The problem came a lot later on, when you get 150+ people, each with more than one device. &nbsp;Interestingly, we never exhausted the DHCP pool, not on the first day, and the peak throughput was touching 55Mbit. &nbsp;That's quite impressive alone.
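</p>

<p>For reference, the per-AP recipe amounts to something like the following. We configured ours through their web UIs, but expressed as hostapd configs (purely illustrative; the SSID, passphrase and interface name here are made up), each AP is identical apart from the channel:</p>

```shell
# Sketch: generate identical hostapd configs per AP, varying only the
# channel (staggered 1/6/11 so neighbouring APs don't stamp on each other).
# The same SSID + same WPA2-PSK everywhere is what gives clients cheap
# client-side roaming.  All names/values below are illustrative.
for i in 1 2 3; do
  channel=$(( (i - 1) * 5 + 1 ))   # 1, 6, 11
  cat > "hostapd-ap${i}.conf" <<EOF
interface=wlan0
ssid=unlock-london
hw_mode=g
channel=${channel}
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=changeme123
rsn_pairwise=CCMP
EOF
done
```

<p>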
&nbsp;</p> <p>At about 2 PM, the organiser of the event called me, as apparently some people were experiencing pretty heavy packetloss connecting to the network. &nbsp;I did some investigation, and discovered two interesting things.</p> <p><strong>1. </strong>&nbsp;The wifi itself was still rock solid, and I could ping any device from anywhere else on the network. &nbsp;That's one potential problem ruled out.</p> <p>Connectivity between the two floors was also fine. &nbsp;Each floor has its own switch, and the link between them was solid.</p> <p><strong>2. </strong>The packetloss was occurring between the 3rd hop and the rest of the world.</p> <p>Basically, it looked like this.</p> <pre> - 5 - 10 ms (Our router)<br /> - 10 - 20 ms (Their first edge switch)<br /> - 6501ms (Their core switch)<br /> - 8ms (Their CPE)</pre> <p>&nbsp;</p> <p>So there's a pretty enormous jump and a lot of packetloss associated with the connection between those two middle hops. &nbsp;It turns out that those two are layer 3 switches for the building.</p> <p><strong>Here's my theory:&nbsp;</strong></p> <p>On an average day, there's 100-200 people in the building, spread across 3-4 floors, across 3-4 switches (depending on who you talk to). &nbsp;That amount of traffic is fine for the upstream provider, who supply a 1Gbit pipe into the building.</p> <p>What we did, however, is take 150 people, and 150+ devices, and shove it all down one port on an HP ProCurve switch, instead of spreading it out across a bunch of ports.</p> <p>At some point, the switch reached its port buffer capacity, and started dropping traffic.
&nbsp;I can't blame it, really.&nbsp;</p> <p>The reason for only using one port is that it would have probably been a lot more work to configure a split network with two (or more) routers, and still have a sensible amount of management.</p> <p>Interesting so far.</p> <p>At about 3PM on Saturday afternoon, I turned on the firewall on the edge router, and started blocking P2P and bittorrent traffic (as best it was supported, anyhow). &nbsp;This had the almost instant effect of cutting the outbound traffic from ~25Mbit to about 5Mbit. &nbsp;</p> <p>We're providing a free wifi service for the hackathon. &nbsp;We're <strong>not</strong> providing free wireless so you can download movies. &nbsp;</p> <p>One of the annoying things about the Cisco RVS4000 is that there's no intrinsic way to see who's using what data, i.e. there's no support for Netflow, or similar. &nbsp;There's also no sensible builtin traffic graph, which is more annoying. &nbsp;There is however SNMP data, which I only started to collect *after* disabling BitTorrent and P2P, sadly. &nbsp;I need to collect the RRD files from my impromptu "laptopserver"..&nbsp;</p> <p><strong>Here's an interesting side-note</strong>: &nbsp;I took my Netbook along on Saturday and Sunday to connect up as a SNMP data receiver. &nbsp;Basically just running Ubuntu 12.04 server (with icewm, for firefox), and munin. &nbsp;Nothing fancy. &nbsp;I would've installed Logstash, but I realised I only have a GB of RAM on that laptop, so it's less than ideal.</p> <p>I'd had the forethought to shove my 60GB OCZ SSD in there on Friday night, so that I knew it'd work on Saturday. &nbsp;First problem. &nbsp;Laptops make crap servers. &nbsp;Even if you disable the ACPI features in the bios, the whole thing still tends to go to sleep. &nbsp;In the end, I used xdotool's mousemove feature to move the mousepointer about a bit so that the OS didn't see it as having gone to sleep.
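</p>

<p>For anyone wanting to replicate the laptop-as-server bodge, the keep-awake loop was essentially this (the exact jiggle interval is from memory, so treat it as a sketch):</p>

```shell
# Nudge the pointer one pixel back and forth so the OS never decides the
# machine is idle and suspends it.  Needs a running X session with
# xdotool installed; '--' lets mousemove_relative accept a negative offset.
while true; do
    xdotool mousemove_relative -- 1 0
    xdotool mousemove_relative -- -1 0
    sleep 50
done
```

<p>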
&nbsp;</p> <p>From my point of view, given I was actually *working* at this event, rather than volunteering, the whole thing had a totally different feeling. &nbsp;I felt pretty good on Friday night after setting everything up. &nbsp;We tested the speed and throughput, and it was pretty solid. &nbsp;You could roam between access points and across floors without any problem.&nbsp;</p> <p>Come Saturday afternoon, I was feeling pretty stressed. &nbsp;The wifi was solid, but the network problems were effectively out of my control. &nbsp;There weren't any sensible steps we could take to increase the speed (and decrease the packet loss) while maintaining a level of segregation between the guests and the White Bear Yard network. &nbsp;This was always a pretty high priority, as the potential for cyber-espionage and general badness is quite high when you let 100+ perfect strangers onto your premises and onto your network.</p> <p>&nbsp;</p> <p><strong>Conclusions:</strong></p> <p>48 hours and &pound;400 isn't really enough time and budget to provide a bulletproof wireless solution. &nbsp;That's not to say I regret having a crack at it. &nbsp;I think you'd be hard-pushed to find a wireless solutions vendor who'd even consider that project given that timescale and that budget.</p> <p>The Cisco RVS4000 router is nice, but doesn't provide enough management tools to make it a truly great platform. &nbsp;In contrast, I think I slightly prefer the Cisco 2621XM router; in spite of it being 10 years older, it feels like a more robust platform by an order of magnitude.</p> <p>Taking the traffic for 150 people and shoving it down one Fast Ethernet port is going to cause problems no matter how you look at it.</p> I've got your opinions! 
<p><img id="plugin_obj_183" title="Picture - First few answers" src="/media/cms/images/plugins/image.png" alt="Picture - First few answers" /></p> <p><img id="plugin_obj_184" title="Picture - More answers" src="/media/cms/images/plugins/image.png" alt="Picture - More answers" /></p> <p><img id="plugin_obj_185" title="Picture - Looks very split on this question." src="/media/cms/images/plugins/image.png" alt="Picture - Looks very split on this question." /></p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>So it does look like the majority of you aren't too fussed about me running advertising on this site. &nbsp;Rest assured, I'm actually not going to bother, not right now, anyhow.</p> <p>&nbsp;</p> <p>So it turned out that the email I was sent was from a slightly dubious marketing agency called "More Digital", and if you google for them, there's a whole raft of unsatisfied customers. &nbsp;So screw that. &nbsp;If I'm gonna run advertising, it's gonna be via Google.</p> <p>Lots of you want to see more articles on Devops and programming. &nbsp;Fair enough. &nbsp;I shall try and use this insight to write better articles in future.</p> <p>I'd like to hear individually from the person who doesn't like the design. &nbsp;That said, I don't much like the design, and it could be a bit more ajaxxy, but I don't want to sacrifice UX for shiny toys.</p> <p>6 of you don't want me to switch to Wordpress. 4 of you do, so it looks like the nays have that.</p> <p>&nbsp;</p> <p>Thanks to everyone who answered! It's genuinely good to get some of this kind of feedback now and again.</p> I Want Your Opinions <p>Earlier today, I got approached by an advertiser who wants me to run a small ad on this site. &nbsp;This got me thinking about whether it'd fit the theme of the site, and what the readers would think. 
&nbsp;</p> <p>So I'm asking you.</p> <p><img id="plugin_obj_179" title="Snippet - iFrameSurveyOpinion" src="/media/cms/images/plugins/snippet.png" alt="Snippet - iFrameSurveyOpinion" /></p> <p>&nbsp;</p> <p>Wow, that embed was uglier than I expected. x.x</p> Technology Empires <p>&nbsp;</p> <p>As time goes on, and I find myself dealing with more and more large companies, it's pretty obvious that there's a recurring pattern. &nbsp;</p> <p><strong>Large companies are awful to work with.</strong></p> <p>Dell are pretty bad. &nbsp;<br />On one recent occasion, we had the need at work to call Dell Support for a mysterious problem with our shiny new blade servers. &nbsp; <br />The biggest problem when dealing with Dell in particular is their scripted support technicians on 1st line. &nbsp;Most other companies seem to hire people with the capability of abstract thought. &nbsp;<br />Dell hires chimpanzees to read from scripts. &nbsp;Woe befalls you if you attempt at any point to deviate from the script, and heaven forbid you ask a question that they don't have the answer to. &nbsp;If that happens, there's a 1 in 3 chance of any of the following outcomes.</p> <p><strong>1) </strong>"Let me transfer you to another member of the team" - You get put on hold. &nbsp;Sometimes indefinitely, sometimes the call just drops out. &nbsp;Generally, you have to go through the same question-answering process all over again.</p> <p><strong>2)</strong> They make something up. &nbsp;This is the worst trait by far. &nbsp;I'd rather be told "Sorry, I don't know" than have them <em><strong>a) </strong></em>blame our choice of Operating System [Yes, it's not windows, or RHEL, but our problem is with a Layer 2 switch..], or <strong><em>b) </em></strong>lie blatantly about the problem.</p> <p><strong>3)</strong> They just hang up on you.
&nbsp;I suspect this is a fight-or-flight response, but I've been hung up on many a time when asking for support.</p> <p>So just pretend for a moment that you've made it past the first line. &nbsp;You've got hold of someone who's prepared to acknowledge there's a problem. &nbsp;Well done. &nbsp;That's a big step out of the way. &nbsp;You've already spent *3* hours on the phone, being bounced between 1st and 2nd line; you've dialled and redialled after the phone-line has mysteriously gone dead. &nbsp;Now you've found a smart guy who knows what he's talking about, so you take his name. &nbsp;Just a formality really, as they never seem to be addressable by this means. &nbsp;You can also forget about asking for his Direct Dial number, because they don't exist either.</p> <p>You can also bet heavily that if you call back at any point in the future, they're not working that day/hour/week, they've moved departments/countries/employer, or they're just AWOL.</p> <p>So you call your account manager. &nbsp;You ask them why the support is *so* inept, when you're paying the cost of a sports car for hardware. &nbsp;When you know that the problems you're having could likely be resolved with the assistance of an on-site engineer, or a Skype/WebEx chat and screen-share with someone suitably wise and with the time and patience to sit down, do some diagnostics and actually solve the problem. &nbsp;</p> <p>Your account manager will no doubt explain that on-site visits aren't covered under your support agreement, despite you swearing blind that you ticked that box when you set up the damn contract, and that nobody is currently available for your one-on-one session over the internets.&nbsp;</p> <p>So you make an appointment for them to call you, and dial in to a screen-sharing session. &nbsp;A session which might, unsurprisingly, never happen. You've scheduled 3pm on a Thursday afternoon, and that time rolls around, and it's half past 3, no call. &nbsp;It's now 4pm, no call.
&nbsp;You call your account manager, only to discover that due to Flux Repolarisation issues, or some such garbage, your conference call has now been cancelled, and you'll have to re-book another slot.&nbsp;</p> <p><strong><em>*sigh*</em></strong></p> <p>That's just one technology company. &nbsp;One market leader. &nbsp;You'd think that they've become the market leader for a reason. &nbsp;They have, but that reason sure as hell ain't customer service expertise.</p> <p>&nbsp;</p> <p>Let's now look at a different field entirely. &nbsp;</p> <p>Imagine that you're running the technology and engineering team for a company, and you want to grow the size of your physical infrastructure. &nbsp;For argument's sake, imagine you want to grow your server room from one rack to 5 racks. &nbsp;You're gonna need more power to do that sensibly, and safely. &nbsp;To get more power, you're gonna need a new feed from the building's intake room to your floor. &nbsp;You're gonna need a new meter, and for that, you're gonna have to call your current supplier. &nbsp;</p> <p><strong>EDF.</strong></p> <p>So you call the number on your bill. &nbsp;You talk to the Business Accounts team. &nbsp;Or someone from it, anyway. &nbsp;They tell you to talk to Metering Services, and give you a number. &nbsp;You call the number. &nbsp;It's engaged, so you sit, on hold, in a queue of an inconceivable length, listening to Zero 7 from now until eternity. &nbsp;Someone might answer your call, but more likely, you'll get cut off.</p> <p>So you call back and repeat the process. &nbsp;Eventually you get through to someone, who tells you that they can't help you without you having a project number. &nbsp;To get a project number, you have to call a different team, Project Acquisitions, or some irritatingly redundantly named team of paper-pushing drivel-monkeys. &nbsp;To get a Project number, you later discover, you need an MPAN number. &nbsp;The MPAN number is, generally, like a serial number for your meter.
&nbsp;If you're having a new meter, you might not have an MPAN number. &nbsp;To get a new meter, you seemingly need one, but to get one, you need a Meter. &nbsp;So round and round you go.</p> <p>Don't forget that every time you call in, you'll get a different person, again. &nbsp;You'll go through the same questions every time, probably in a slightly different order. &nbsp;You'll be passed between departments like a hot potato, and have your call dropped more frequently than a wet fish.</p> <p>If by sheer luck and persistence you manage to request a new project/MPAN/Metering ID/other electricity jargon number, then you'll probably be told that you'll have to call back at a later date, in order for the number to be in the system, and available for use. &nbsp;This time period can be anything from 24 hours to 30 working days.</p> <p>Don't forget that your infrastructure build-out project is being delayed by this insanity, and every day that you're waiting for them, you're not making money by having your servers working their little ball grid arrays off.</p> <p>Jumping forwards in time, you've got hold of the required numbers, and you've booked an appointment with one of their engineers to discuss the plan for installing a new 3-phase metered circuit to your premises. &nbsp;Their engineer gives you a list of requirements and specifications that you will pass to your electrical contractors so that they can do the hard work: wrangling the armoured 3-phase cable, the fuses, the breakers, the consumer units and testing. &nbsp;That's all covered by your electricians. &nbsp;</p> <p>So you've got the spec sheet. &nbsp;You've got the electricians in, and they've run cable between the intake room and your premises. &nbsp;They've done all the work to the latest standards, everything is *perfect*. &nbsp;It matches what the electricity board have said to the letter of the spec sheet. &nbsp;It's not only perfect, but it's also beautiful. &nbsp;Their cabling quality is second to none.
&nbsp;There's not even a hidden rat's nest of insanity in a dark corner.</p> <p>It's time to play the phone game again. &nbsp;This time, you need an electricity board engineer to come out and install a meter. &nbsp;This requires turning the power off to the floor, because of the temporary supply you've been using in the interim. &nbsp;So you've arranged some time when the work can be carried out. &nbsp;As I'm sure you're aware, in a busy, profitable company, finding this kind of time for outages is both tricky and expensive.</p> <p>It's 6:30AM on the day of the meter installation. &nbsp;The electricians are here to oversee the installation, and you're just waiting on the man from EDF. &nbsp;You're clutching onto your letter from the Metering Administration Team, clearly stating that you've got the appointment reserved. &nbsp;You made this way in advance, and followed the instructions of the phone-jockey to the letter for this process. &nbsp;You're totally sick of the sound of Zero 7, and struggle not to wheeze and rend your hair when you hear that damn song on the radio.</p> <p>EDF man turns up, and states in no uncertain terms, that the wiring he can see isn't up to "code". &nbsp;What he actually means is that they'll only certify work that's been done to a standard produced in the 1980s...</p> <p><strong>Digression</strong>: New wiring legislation allows for more leniency in the layout of the wiring. &nbsp;Specifically, the 17th edition allows the armoured sheath of high-current cable to be used as the earth loop connection. &nbsp;If the cable's certified to the 17th edition, it's all kosher. &nbsp;<br />The EDF wiring manual states that they have to have a separate earth feed *as well*. &nbsp;<br />-- This alone makes little sense, because it's easier to accidentally damage a separate earth cable than a bundle of armoured cable strands, but there we go.
&nbsp;Since when has legislation made the blindest bit of sense?</p> <p>So that's the first strike against the master plan. &nbsp; Your electrician friends tell you that EDF (and other electricity boards) are famous for doing this, moving the goal posts, and generally being obtuse and obstructive. &nbsp;It's almost like they don't want your money. &nbsp;In a sense, it's pretty clear that they don't.</p> <p>The second strike comes when Mr EDF tells you that he's only got you down for one appointment, and that 2 are needed for a meter installation. &nbsp;The reasoning behind this isn't clear, or isn't made clear at least. &nbsp;This is the first time you've heard such a thing, and the people on the Metering Administration Team sure as hell didn't tell you this when you made the appointment for the visit. &nbsp;It's pretty clear at this point that all the problems are down to an epic number of failures in inter-team communication, at a number of levels, throughout a massive company.</p> <p>Mr EDF apparently doesn't have any free time for the rest of the day. &nbsp;Apparently he finishes work at 16:30. &nbsp;He's not prepared to come back after that, because he doesn't get overtime, and he's "<em>not coming back in his own bloody time</em>". &nbsp;Smooth. &nbsp;It's like he doesn't want to be a nice, friendly man, and help out some people. &nbsp;</p> <p>I'll bear that in mind if you ever need a favour from me.</p> <p>It could be 2-3 weeks before you can get another engineer in, with a double appointment, to get the meter installed. &nbsp;Another 2-3 weeks of wasted time. &nbsp;Another expensive and difficult scheduled downtime. &nbsp;Another tongue-lashing from your managing director, who expected the whole project to be finished months ago.</p> <p>&nbsp;</p> <p>That's enough war stories for now. &nbsp;I hope you can see the general theme here.
&nbsp;</p> <p>Here's a few other companies I can't stand dealing with, for their awful customer service, or their absolutely astounding failure to communicate efficiently:</p> <p>&nbsp;</p> <ul> <li><strong>British Telecom.</strong></li> <li><strong>IBM.</strong></li> <li><strong>Oracle.</strong></li> <li><strong>Insight.&nbsp;</strong></li> <li><strong>Misco/Wstore (Used to be good... Turned into box-shifters, and went rapidly downhill)</strong></li> <li><strong>T-Mobile.&nbsp;</strong></li> <li><strong>Thames Water.</strong></li> </ul> <p>&nbsp;</p> <p>&nbsp;</p> <p>But there is a revelation in all of this. &nbsp;There still exist some small companies who do things well. &nbsp;They're not market leaders in their fields, but they do have the agility to provide excellent customer service. &nbsp;My home phone line is provided by <a href="">Gradwell</a>; they just get lines wholesale through BT. &nbsp;Whilst it's fractionally more expensive, it does have the massive benefit that I don't have to waste my time dealing with peons at BT. &nbsp;I don't spend hours on hold, in a queue, listening to irritating 8-bit renditions of Bach. &nbsp;I call my account manager, or ask them on twitter, and the problems are resolved quickly and easily.</p> <p>My home electricity and gas are provided by a smaller company than EDF, called <a href="">Ecotricity</a>. &nbsp;I like them immensely. &nbsp;Even as a home customer, I have a single point of contact for enquiries about billing and so on. &nbsp;This is a massive step forward from the traditional sales team, contact team, and so on. &nbsp;I have their direct-dial telephone number, their email address and, should I need it, their manager's email and phone numbers.
&nbsp;I've never needed it.</p> <p>I'd rather pay a small percentage more for decent customer service, and the actual feeling that I'm being treated like a person, with feelings; and not just as Yet Another Customer.&nbsp;</p> <p>I forever fear the days when these small, light, efficient companies become too big, or get bought out and borged by the powers of the larger enterprises. &nbsp;Too frequently when this happens, the quality of service goes down the pan. &nbsp;Middle managers from up high insert themselves like a playing card in the spokes of a bicycle, interfering and making lots of noise in the process.&nbsp;</p> <p>&nbsp;</p> <p>There's a handy parallel to draw between the problems of communication within a large company, and the fall of the Roman Empire.</p> <p>One of the often-mentioned possible causes for the fall of the Roman Empire was the lack of communication involved with maintaining a large entity. &nbsp;More accurately, the problem isn't with a lack of communication, but the latency involved with the type of message passing.&nbsp;</p> <p>In Peter Heather's book&nbsp;"<a href=";pg=PA107&amp;lpg=PA107&amp;dq=fall+of+the+roman+empire+due+to+communications&amp;source=bl&amp;ots=JNBfY4sXlr&amp;sig=YIulN34Kn-eoCk7flrkOx_hkYzs&amp;hl=en&amp;sa=X&amp;ei=xhGlT43kBoX80QWbr_WXBA&amp;ved=0CGUQ6AEwAg#v=onepage&amp;q=fall%20of%20the%20roman%20empire%20due%20to%20communications&amp;f=false">The Fall Of The Roman Empire: A New History Of Rome And The Barbarians</a>", he stated that even at a daily rate of 50km, it could easily take 3 months to travel the 4000km from the edge of the empire to Rome. &nbsp;</p> <p style="padding-left: 30px;"><em>"Furthermore, measuring [the size of the empire] in the real currency of how long it took human beings to cover the distances involved, you could say it was five times larger than it appears on the map.
&nbsp;To put it another way, running the Roman Empire with the communications available then was akin to running, in the modern day, an entity somewhere between five and ten times the size of the European Union."</em></p> <p>When dealing with the large companies mentioned above, it seems clear to me that there's a definite problem with communications between teams and departments. &nbsp;<br /><br />Frequently, there's also a problem within teams and departments, and as a result, the quality of service provided to clients, customers, and often those working for the company declines rapidly as the company grows.</p> <p>&nbsp;</p> Counting the Cost of Cloud Backup <p><strong>All information here was correct at the time of initial publication. &nbsp;Any differences between statements here and the actual status quo are likely to be either the fault of the vendors, or your strange little minds.</strong></p> <p>&nbsp;</p> <p>With Google's latest release of their "cloud storage service", "Google Drive", I'm once again brought to review and contrast the differences between a number of online storage providers.
&nbsp;There's an absolutely epic list, far too many to pick apart individually here, but I'll try to cover a few that I've used personally, and some that I haven't, and the various pros and cons of each.</p> <p>Here's a brief list:&nbsp;</p> <p>&nbsp;</p> <ul> <li>GoogleDrive</li> <li>Dropbox</li> <li>Apple iCloud</li> <li></li> <li>Spideroak</li> <li>Windows Live Skydrive</li> <li>SugarSync</li> </ul> <p>&nbsp;</p> <p><strong>Things I'll be looking at:</strong></p> <p><em><strong>Cross-platform support:</strong></em></p> <p><span style="white-space: pre;"> </span>Is file-synchronisation supported across Linux, Windows, Mac, iOS, Android, Blackberry, Windows Phone and anywhere-access from a web browser?</p> <p><strong><em>Cost:</em></strong></p> <p><span style="white-space: pre;"> </span>There's no point having the best possible service if it costs the earth.&nbsp;</p> <p><span style="white-space: pre;"> </span>Also, if it's free, what's the downside? Where's the catch?</p> <p><strong><em>Maximum Storage:</em></strong></p> <p><span style="white-space: pre;"> </span>What's the maximum amount of storage you can allocate (and what will that cost!?)</p> <p><em><strong>Security:</strong></em></p> <p><span style="white-space: pre;"> </span>This is my biggest complaint about a number of cloud storage providers. &nbsp;Where is that data stored? How is that data stored? Can other people potentially access it? Does the host have access to it?</p> <p><strong><em>Terms of Service:</em></strong></p> <p><span style="white-space: pre;"> </span>Does the data remain your property?
Do you waive rights to it?</p> <p><span style="white-space: pre;"> </span></p> <p>Let's look first at <strong>Cross-platform Support</strong>.</p> <p><img src="" alt="" width="771" height="141" /></p> <p>I've included WL Skydrive and SugarSync in this list in an effort to cover more potential vendors, but haven't personally used either of them.</p> <p>As you might predict, Apple's iCloud is poorly supported on anything other than an Apple device, and similarly, Microsoft's Skydrive is poorly supported on anything other than a mainstream platform.</p> <p>I suspect that the reason so few (only WL Skydrive) support WP7 is that the platform is still relatively new. &nbsp;I'm sure that full vendor support for WP7 will be along in time. &nbsp;Spideroak are certainly working on a Blackberry client for their service. <a href="">[0]</a></p> <p>If you're a die-hard Blackberry user, or your company is very Blackberry-centric, then your options are basically Dropbox or SugarSync.</p> <p>All of the above services provide a web interface so that you'll always be able to get at your files from a modern web browser. &nbsp;If you're still using IE6, it's time to move on.</p> <p><strong><em>Cost:</em></strong></p> <p>This is the dealbreaker for many home users, and quite a number of business users too! &nbsp;All of the above vendors give you a certain amount of free storage when you sign up. &nbsp;Dropbox (and some others) allow you to increase the amount of storage space you have by getting your friends and family to sign up too. &nbsp;</p> <p>The amount you get for free varies quite widely, from 2GB (Dropbox &amp; Spideroak) to 7GB (WL Skydrive).
[Why it's 7, and not 8 is anyone's best guess...]</p> <p>For simplicity of comparison, all cost figures are in US Dollars per month.</p> <p><img src="" alt="" /></p> <p>SugarSync's actual values are for 30GB, 60GB and 100GB.</p> <p>Spideroak has an interesting pricing model, where the first 2GB are free, as per usual, but paid storage comes in 100GB chunks, at $10/month for each incremental 100GB chunk.</p> <p><strong><em>Maximum Storage Capacities:</em></strong></p> <p>For the majority of vendors, the actual maximum storage capacity is unclear. &nbsp;GoogleDrive is the only one I could find with a maximum storage capacity stated to be higher than the largest pricing bracket.</p> <p>Google's maximum is 16TB, which will cost you a handsome sum of $799.99 a month. &nbsp; At this scale of storage, you really should be thinking about how to do it more cost-effectively. &nbsp;Amazon S3 or a colocated disk array would probably be a sensible alternative at 16TB.</p> <p>'s Personal plan caps out at 50GB ($19.99/mo), and their Business plan starts at $15/user/mo, with a maximum cap of 1000GB.&nbsp;</p> <p> also offer an enterprise level of storage, where there's no cap on the amount of storage you can use; it's just listed as "Unlimited". &nbsp;Naturally, there are the limitations of economies of scale: if you asked them to host a petabyte of data, you're gonna pay them through the nose for it.</p> <p>For Dropbox, on the other hand, the non-business maximum seems to be 100GB, and for business users, the pricing plans start at $795 for 5 users and 1TB of storage.</p> <p>With regard to Apple's iCloud, it's unclear from their website what happens after you've used 100GB of their storage: whether it's a hard limit, or you can buy more 100GB chunks. &nbsp;I'd love to know the answer, but I'll bet it's expensive after 100GB.
<a href="">[5]</a></p> <p>SugarSync offer 3 plans higher than their widely publicised 100GB plan (just after the fold on their pricing page <a href="">[2]</a>): 250GB for $24.99/mo, 500GB for $39.99/mo, and 1TB for $79.99/mo.</p> <p>SugarSync for Business's maximum is 2TB, priced at $2099.33/year ($209.93/mo).</p> <p>&nbsp;</p> <p><strong><em>Security:</em></strong></p> <p>Let's see how the vendors shape up in terms of security. &nbsp;</p> <p>Spideroak and SugarSync are the two which I'd say have the strongest security of the lot. &nbsp;Dropbox is famously insecure; here's one reason why: once a file is shared, you've effectively lost control over it, as anyone it's shared with can then invite more people to view it. &nbsp;Dropbox also have the power to view your files, as they own the encryption keys. &nbsp;We should also not forget the major security incident in 2011, when Dropbox accounts were effectively open to the public with no authentication whatsoever. &nbsp;Whilst this hole was fixed quite quickly (4-5 hours, ISTR), it still leaves a lingering feeling of malaise regarding the service.</p> <p>Dropbox's terms of service state<a href="">[1]</a> that "employees are prohibited from viewing the content of your files"; by "prohibited" it's plausible to read that as "we've asked them not to, but there's nothing technically stopping them from doing it if they wanted". &nbsp;They also say that they'll decrypt your files if subpoena'd by law enforcement officials.</p> <p>I was unable to find information regarding the process of encryption and security for iCloud, as well as GoogleDrive and SkyDrive, but I suspect that as they own the keys, they can be subpoena'd for them, and will probably give them up without too much fuss.</p> <p>SugarSync will only access your files with your permission, and then they have to use a remote-access tool to allow you to grant access to them. &nbsp;(chatlog)</p> <p>Spideroak is the interesting one here.
When you sign up, you set the key. &nbsp;If you lose the key, you lose the copy of the data. Spideroak staff can't view your data, as they don't have your key. :D</p> <p>In order to correctly and legally store data from the EU, on servers outside the EU, in a manner protecting Personally Identifiable Information (PII) <a href="">[3]</a>, the storage vendor must be EU Safe Harbour compliant <a href="">[4]</a>. &nbsp;This basically means that they meet the standards laid down by the US Department of Commerce. &nbsp;</p> <p>Finding a cloud storage provider that is a) functionally good, and b) Safe Harbour compliant is not an easy task. &nbsp;I had to do this for a previous employer, and the only one we could find at the time was</p> <p>SugarSync are also Safe Harbour approved, as is Amazon S3, which a number of storage vendors use for backend storage, but many of the vendors themselves are not compliant.</p> <p><img src="" alt="" width="523" height="145" /></p> <p>Where I've listed Backend Storage as "Owned Hardware", that's basically them having their own network for the entire platform, as opposed to using another vendor's storage platform, like the connection between Dropbox and Amazon S3.</p> <p>Not many vendors are Safe Harbour approved, and some don't state either way whether they are or they aren't. &nbsp;As with all these kinds of things, it's better to assume the worst case. &nbsp;In this case, if you can't tell whether they're Safe Harbour compliant, assume they're not.</p> <p>Similarly, if you're not sure whether the vendor can decrypt your files, assume they can unless it's explicitly stated otherwise.</p> <p>Role-based Access Control, a frequently requested enterprise feature, is only (to the best of my knowledge) available on the enterprise/business account features of SugarSync, Dropbox (Bluebox) and (Enterprise, not Business).
&nbsp;As a feature, I don't even think it's technically applicable to iCloud, as that's not really a cloud storage service in the sense that Dropbox et al are, but a set of APIs tightly integrated into a number of devices.</p> <p><strong><em>Terms of Service:</em></strong></p> <p>Importantly, we come to the terms of service for these cloud storage platforms. &nbsp;Basically, <strong>Google</strong> launched <strong>GoogleDrive</strong> about 2-3 days ago, and are already under fire for this text in their Terms of Service:</p> <p>&nbsp; <em>&nbsp;"When you upload or otherwise submit content to our Services, you give Google (and those we work with) a worldwide licence to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes that we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content."</em></p> <p>The upshot of this is that once you upload a file to their servers, it's not yours anymore. &nbsp;Which kind of proves another point: the files can't be securely encrypted if they can examine them in order to reuse them, or whatever it is they want to do.</p> <p>Even Dropbox don't make claims on, or grabby paws at, your data. &nbsp;They might expose it to everyone else, but they won't steal your intellectual property first! &nbsp;<strong>Dropbox</strong> has this to say:&nbsp;</p> <p><em>&nbsp; &nbsp;"You retain full ownership to your stuff. We don't claim any ownership to any of it. These Terms do not grant us any rights to your stuff or intellectual property except for the limited rights that are needed to run the Services, as explained below."</em></p> <p><strong>Microsoft say:&nbsp;</strong></p> <p>&nbsp; &nbsp; <em>"Except for material that we license to you, we don't claim ownership of the content you provide on the service. Your content remains your content.
We also don't control, verify, or endorse the content that you and others make available on the service."</em></p> <p>That's pretty interesting, especially with regard to Google, who are trying their best at all times (apparently) not to be 'evil'.</p> <p>There's a very fine line to be drawn between enhancing the service, and Intellectual Property scavenging.&nbsp;</p> <p>I've reviewed a few salient points which should be considered when choosing a cloud backup vendor. &nbsp;I think it's safe to say that the non-business version of Dropbox has no place in the business environment. &nbsp;Especially when you're dealing with confidential or sensitive files, ones which could be potentially very damaging for the business if they were to be leaked.&nbsp;</p> <p>There is of course one very important alternative which shouldn't be overlooked. &nbsp;A number of open-source projects exist to replicate the services provided by Dropbox and so on. &nbsp;A quick na&iuml;ve googling finds Owncloud<a href="">[6]</a>, Sparkleshare<a href="">[7]</a>, and Syncany<a href="">[8]</a>. &nbsp;</p> <p>Whilst all of these have the nasty side-effect of you having to manage your own storage, they do helpfully allow you to maintain full control of the whereabouts of all of your data. &nbsp;This is ideal for those paranoid few of you, or companies where a traditional cloud provider is unacceptable due to client restrictions. &nbsp;</p> <p>There are also a few cloud providers who are based in *just* the EU, or the UK, so American law doesn't apply. &nbsp;This is again good for British-based companies where you don't want, or can't allow, data to leave the country.</p> <p>&nbsp;</p> <p>If I were choosing a cloud backup provider tomorrow, I'd be looking very seriously at Enterprise and SugarSync for Business.
&nbsp;I think Dropbox for Enterprise is worth a look too, but beware of the underlying taste of Dropbox which may linger like a fart in a lift.</p> <p>&nbsp;</p> <p><strong>EDIT:</strong></p> <p>I've&nbsp;<a href="!/jackschofield/status/196404135027949568">been informed</a> on Twitter by a regular journotroll that my article is inaccurate. &nbsp;</p> <p>Apparently Microsoft are now offering 25GB for "free" to <strong>*existing*</strong> SkyDrive customers, although I can't find a reference to this on <a href="">their website</a> directly.</p> <p>&nbsp;<br />I don't really care that much, as they're still failing to support the wide array of platforms that are supported by SugarSync, for example. &nbsp;</p> <p>There also appears to be a <a href="">bunch of third-party applications</a> to allow access to SkyDrive from Android platforms. &nbsp;But as they're third-party, I won't be trusting them with the security of my data.&nbsp;</p> <p>Just sayin'</p> Hell Hath No Fury Like a Man Discriminated <p><em>This article was originally published on one of my Tumblr Blogs. &nbsp;I was experimenting with the idea of separating my blogs, but it only served to dilute the overall traffic.</em></p> <p>&nbsp;</p> <p><a href="">Luluvise</a> have been brought to my attention a couple of times recently. &nbsp;They're the social network who hate men.&nbsp;</p> <p>Or at least they sure as hell don't want them in their pool.</p> <p>Luluvise are, as far as I can tell, a British startup, based at a technology incubator/hub in East London. &nbsp;This makes them subject to UK and EU legislation and law on discrimination. &nbsp;</p> <p>Traditionally, women were favoured by car insurance companies as a "safe bet". They're allegedly statistically less likely to do damage with a car, and so their insurance was cheaper.</p> <p>A recent ruling by the European Court of Justice basically made gender-based pricing unlawful.
&nbsp;In 2004, the European Council passed <a href="">Directive 2004/113/EC</a>&nbsp;which covers discrimination, by gender, in the supply of goods and services, and makes it illegal.</p> <p><strong>2004/113/EC</strong> states that&nbsp;</p> <p><em>Such legislation should prohibit discrimination based on&nbsp;sex in the access to and supply of goods and services.&nbsp;Goods should be taken to be those within the meaning&nbsp;of the provisions of the Treaty establishing the European&nbsp;Community relating to the free movement of goods.&nbsp;Services should be taken to be those within the&nbsp;meaning of Article 50 of that Treaty.</em></p> <p>Discrimination is primarily defined as either "<strong>direct</strong>" or "<strong>indirect</strong>".</p> <p><strong>Direct Discrimination:</strong></p> <p><em>where one person is treated less&nbsp;favourably, on grounds of sex, than another is, has been&nbsp;or would be treated in a comparable situation.</em></p> <p><strong>Indirect Discrimination:</strong></p> <p><em>where an apparently neutral&nbsp;provision, criterion or practice would put persons of one&nbsp;sex at a particular disadvantage compared with persons of&nbsp;the other sex, unless that provision, criterion or practice is&nbsp;objectively justified by a legitimate aim and the means of&nbsp;achieving that aim are appropriate and necessary</em></p> <p><strong>Legitimate aim</strong>&nbsp;is an exception that basically covers single-sex sports clubs and sporting events, and shelters for the protection of vulnerable persons.</p> <p>I think you'd have to be particularly eccentric to describe a social network as a protection shelter.</p> <p>There's also an exception covering financial and actuarial risk based on gender. &nbsp;This has now been re-reviewed by the European Council, and that loophole has been plugged.</p> <p>Article 50 of the <a href="">Treaty establishing the European Community</a>&nbsp;[LONG!]
states that:</p> <p><em>Services shall be considered to be &lsquo;services&rsquo; within the meaning of this Treaty where they are&nbsp;normally provided for remuneration, in so far as they are not governed by the provisions relating&nbsp;to freedom of movement for goods, capital and persons.</em></p> <p><em>&lsquo;Services&rsquo; shall in particular include:</em><br /><em>(a) activities of an industrial character;</em><br /><em>(b) activities of a commercial character;</em><br /><em>(c) activities of craftsmen;</em><br /><em>(d) activities of the professions.</em><br /><em>Without prejudice to the provisions of the Chapter relating to the right of establishment, the&nbsp;person providing a service may, in order to do so, temporarily pursue his activity in the State&nbsp;where the service is provided, under the same conditions as are imposed by that State on its own&nbsp;nationals.</em></p> <p>So we can infer that a commercial activity, such as running a website, or a social network, is a) classified as a Service by the EC, and b) a discrimination-free zone.</p> <p>So there's definitely something afoot here.</p> <p>The scope of the<strong> 2004 Directive</strong> covers:&nbsp;</p> <p><em>all persons who provide goods and services, which are available to the public&nbsp;irrespective of the person concerned as regards both the public&nbsp;and private sectors, including public bodies, and which are&nbsp;offered outside the area of private and family life</em></p> <p><em><br /></em></p> <p>I think you'd be hard pushed to describe a social network as a service of family life.&nbsp;</p> <p><em>This Directive shall not apply to the content of media and&nbsp;advertising nor to education.</em></p> <p>Whilst we refer to Social networks as Social Media, I still think it would be a particularly hard sell to refer to that as "content of media", which is likely to refer more to TV and film production content.</p> <p>So the <strong>Principles</strong> in the <strong>2004
Directive</strong> state that:</p> <p><em>there shall be no direct discrimination based on sex,&nbsp;including less favourable treatment of women for reasons&nbsp;of pregnancy and maternity</em></p> <p>And</p> <p><em>there shall be no indirect discrimination based on sex.</em></p> <p>Boiled down, the <strong>TL;DR</strong> version is this:</p> <p>Discrimination can be either direct or indirect. &nbsp;Direct is stating, for example, "we don't sell to women, or we don't sell to men". &nbsp;An example of Indirect Discrimination might be having a rule banning people with breasts from your supermarket.</p> <p>Unless you're running a Women's Football League, or Shelter for whipped wives, or making a film about women, you can't discriminate on the grounds of gender.</p> <p>Running a social network that's exclusively for women... Does. Not. Count.</p> <p>If I visit their site and hit "Sign up with Facebook", I get a snappy little error message telling me I can't. Because I'm male.</p> <p><img src="" alt="Luluvise hates me" width="650" height="534" /></p> <p>If that ain't gender-based discrimination, then I don't know what is.</p> <p><a href="">@jack</a> found the <a href="!/jack/status/196074416365510656">same thing earlier</a> today.&nbsp;</p> <p><img src="" alt="Luluvise hates @jack too." width="320" height="480" /></p> <p>While he's not in the UK or EU, Luluvise are, and their registered office being in London makes them subject to the same rules and regulations as everybody else.</p> <p>Personally, this doesn't bother me; I don't want to know what my female friends (all 4 of them) are saying about me behind my back. &nbsp;Probably nothing, because being a massive gay, I've never dated any of my female friends.</p> <p>I bet that if I announced a male-only social network, I'd have ultrafeminists breathing fire down my back quicker than you can say "Germaine Greer".</p> <p>Footnote: Luluvise (or lulu vise, to make it translate) is Croatian for "pipe more".
&nbsp;I only went looking for this because I was convinced, when I learnt German, that 'lulu' was slang for 'twat'.</p> <p>Somewhat interestingly, <a href="">@tomscott</a> also has bones to pick with Luluvise, but more surrounding <a href="">their privacy policy and the Data Protection Act</a>.</p> <p>It also looks like <a href=";utm_medium=feed&amp;utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29">TechCrunch have already puzzled over the legality of Luluvise</a>, if from a somewhat different standpoint.</p> Smokeping on Nginx <p>Smokeping is one of my favourite diagnostic tools for tracking down sporadic network issues.</p> <p>You install it, configure it with a list of hosts, and it pings them regularly, and keeps track of the round-trip times, latency, packetloss, and so on.</p> <p>The web frontend is a Perl CGI script, and as a result, it's a bit of a bugger to make it work on Nginx.</p> <p>I wasn't gonna install Apache just for this one thing...</p> <p>Firstly, the server I'm doing this on is ancient, so I installed Smokeping from source. &nbsp;If you're running a more modern OS, and one where apt-get doesn't return 404 for the package files, I suggest you use vendor-provided packages (or community-provided PPAs).</p> <p>Let's get to it.</p> <p>I downloaded Smokeping 2.6.8 from <a href="">here</a>. &nbsp;These&nbsp;<a href="">installation instructions</a>&nbsp;are great. &nbsp;I already have rrdtool installed as it's a dependency of Munin (another firm favourite of mine) too.</p> <p>Fping I downloaded from here, and shamefully built from source too.</p> <p>The recommended webserver is Apache, but as I'm using Nginx already, and prefer it over Apache for performance and scalability, I decided it couldn't be that hard to do it without Apache.</p> <p>I had to install a bunch of prerequisite Perl modules.
&nbsp;Fortunately, once you've extracted the smokeping distribution archive, there's a script "setup/" that does all the hard work for you.</p> <p>So here's basically all I did:&nbsp;</p> <pre>mkdir smokeping_install<br />cd smokeping_install<br />wget<br />wget<br />tar xzvf smokeping-2.6.8.tar.gz<br />tar xzvf fping-2.4b2_to4-ipv6.tar.gz<br />cd fping-2.4b2_to4-ipv6<br />./configure<br />make<br />sudo make install<br />cd ~/smokeping_install/smokeping-2.6.8<br />./configure<br />make<br />sudo make install</pre> <p>Which puts the fping binary in /usr/local/sbin/fping</p> <p>and smokeping itself in&nbsp;</p> <pre>/opt/smokeping-2.6.8</pre> <p>Things I had to do by hand:</p> <pre>mkdir /opt/smokeping-2.6.8/cache<br />chmod a+w /opt/smokeping-2.6.8/cache</pre> <p>and of course, the Nginx config.</p> <p>I wanted to tack smokeping onto my Munin vhost, so I just added a couple of sections to the bottom of that vhost configuration:</p> <pre>&nbsp; &nbsp; &nbsp; &nbsp; location /smokeping {<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; include proxy.conf;<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; proxy_pass;<br />&nbsp; &nbsp; &nbsp; &nbsp; }<br />&nbsp; &nbsp; &nbsp; &nbsp; location /smokeping/ {<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; alias /opt/smokeping-2.6.8/htdocs;<br />&nbsp; &nbsp; &nbsp; &nbsp; }</pre> <p>Nginx can't serve CGI scripts by itself, so it requires a CGI server bound to localhost in order to make those accessible. &nbsp;I'm using Thttpd, as <a href="">suggested</a> <a href="">here</a>.</p> <p>I <a href="">downloaded thttpd from here</a>.</p> <p>It's insanely easy to build, same old combo of ./configure &amp;&amp; make &amp;&amp; make install.</p> <p>The Nginx wiki article about Thttpd CGI serving <a href="">suggests a patch</a> to thttpd for adding the X-Forwarded-For header.&nbsp;</p> <p>Patching the file is *easy*. 
&nbsp;Just save the patch file, and drop into the thttpd-2.25 source directory, and run</p> <pre>patch &lt; thttpd.patch</pre> <p>Then make and install as per usual.</p> <p>Here's my thttpd.conf file (in /etc/thttpd.conf)</p> <pre>host=<br />port=10000<br />user=www-data<br />logfile=/var/log/thttpd.log<br />pidfile=/var/run/<br />dir=/opt/smokeping-2.6.8/htdocs/<br />cgipat=**.cgi|**.pl</pre> <p>Once smokeping is running, it will generate rrd files that can be examined by the CGI scripts to produce html output.&nbsp;</p> <p>Ta da!</p> <p><img src="" alt="" /></p> <div></div> <p>&nbsp;</p> Retrospective: London Realtime <p>&nbsp;</p> <p><strong>London Realtime - Live Router Stats</strong></p> <p>Last weekend, I attended the <a href="">London Realtime</a> hackathon.&nbsp;</p> <p>It was a weekend long event, sponsored by a few API providers, namely <a href="">GoSquared</a>, <a href="">Twilio</a> , <a href="">Pusher</a>, <a href="">GeckoBoard</a>, <a href="">RabbitMQ</a>&nbsp;and <a href="">Amazon Web Services</a>.</p> <p>There were over 150 attendees, and by the end of the weekend, 27 hacks had been put together.</p> <p>I rocked up at White Bear Yard after work on Friday night, and discovered rapidly that there was a problem with the Wifi coverage at the event. &nbsp;<a href="!/LEYDON">Chris Leydon</a>&nbsp;grabbed me for my skills as a Sysadmin to see if I could figure out a solution. &nbsp;</p> <p>Basically, they'd bought a Draytek 2920w router, which was simply rebooting every 5-10 minutes. &nbsp;I grabbed a spare Macbook, and installed the <a href="">Logstash</a> agent. &nbsp;Next step was to point the Syslog target of the Draytek at the Logstash server, and grab some logfiles whilst it was shifting traffic, with the hope of seeing a pattern before a reboot.</p> <p>Attempting to log traffic to a USB stick was also futile, as the router didn't always flush() before rebooting, so there was no guarantee that that session would contain any logs. 
&nbsp;Hence the need to log to something a little more reliable. &nbsp;Logstash was it. &nbsp;</p> <p>I have a lot of time for Logstash, and will write on that topic soon, as I've just implemented a large centralised syslogging platform at $work.</p> <p>But I digress.</p> <p>We attempted a firmware upgrade, but discovered that the Draytek was running the latest "stable" release. Stable my arse.</p> <p>I decided that I'd head home fairly early, and come back with some better grade routing hardware on Saturday morning.</p> <p>At home, I've got a Cisco 2621XM ISR, and a handful of <a href=";qid=1335051665&amp;sr=8-1">TP-Link Wireless APs</a>, which despite their low cost, are surprisingly capable, and work really rather well.</p> <p><img src="" alt="" width="260" height="260" /></p> <p>So on Saturday morning, after a brief trip to $work to collect an unused 10/100 24-port switch made by a hitherto unheard-of company called "Planet", I turned up at LDNRealtime for another go at overhauling the network.</p> <p>We basically had internet access provided by the office space that we were using, on an IP allocation which we couldn't really use for the number of clients we wanted to support, so I set the Cisco up to present a DHCP server on its own subnet, which gave us enough IP addresses for ~250 clients.</p> <p>Excellent.</p> <p>We shoved the router and switch into the comms cabinet on site, and started patching in the odd floor ports for connecting the Wireless APs to.</p> <p>Within a couple of hours, we had good, stable multi-AP wireless coverage of the upstairs floor, at least. &nbsp;Which was where most of the geeks were hanging out, so that was all good. 
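</p> <p>For the record, the Cisco side of that is just a stock IOS DHCP pool; something like the following sketch (the actual subnet, names and lease times weren't recorded, so these values are stand-ins):</p>

```
! hypothetical reconstruction - the real addressing wasn't kept
ip dhcp excluded-address
ip dhcp pool LDNREALTIME
 network
 default-router
 dns-server
 lease 0 8
```

<p>A /24 like this gives 254 usable addresses, which squares with the ~250 clients mentioned above.</p> <p>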
&nbsp;</p> <p>I decided that for my hack, I'd try to use Geckoboard, a customisable web dashboard, to display live stats from the router via SNMP.</p> <p>The only minor problem with this plan was that I couldn't get a public IP on the router's Fa0/0 interface, only the one from within the RFC1918 network of the office.</p> <p>My basic plan was this:</p> <p>&nbsp;</p> <ol> <li>Poll SNMP with a Python daemon running on a VM on my MacBook.</li> <li>Process the data, and generate a JSON object compatible with the Geckoboard widget API.</li> <li>Figure out how the JSON can be accessed from the public internet.</li> </ol> <p>&nbsp;</p> <p>&nbsp;</p> <p><strong>Part 1</strong> was trivial. &nbsp;I toyed with the net-snmp libraries for Python, and then decided that the quickest way to do it was to shell out to snmpget with the subprocess module, and shove it through a regex to grab the value of a Counter32 type.&nbsp;</p> <p>That bit worked fine, and was reliable enough without having to fiddle with low-level libraries. &nbsp;Sidenote: there appears to be no decent high-level interface to SNMP for Python.</p> <p><strong>Part 2</strong> was also trivial, and basically involved calculating a delta for the amount of data sent and received on Fa0/0, which is the LAN interface on the Cisco.</p> <p>As I've mentioned, I wasn't able to get a public IP, so direct pull/push to the VM was out. But as Amazon Web Services were a sponsor, it seemed only fitting to fire up a Micro instance, and serve the files from there.</p> <p>I could have set up some intricate service between the two, using SSH forwarding and RabbitMQ in order to deliver the data between the collector and the presentation server, but I instead opted for a far lower-level solution. 
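</p> <p>Pieced together, the collector boiled down to something like this sketch (from memory; the OID, hosts, paths and the exact Geckoboard payload shape here are stand-ins, so treat it as illustrative rather than verbatim):</p>

```python
import json
import re
import subprocess
import tempfile
import time

# ifOutOctets for the LAN interface -- an illustrative OID, not necessarily the one I used
OID_IF_OUT = ""

def parse_counter32(snmp_output):
    """Pull the number out of a line like 'IF-MIB::ifOutOctets.1 = Counter32: 1234'."""
    match = re.search(r"Counter32:\s*(\d+)", snmp_output)
    if match is None:
        raise ValueError("no Counter32 in output: %r" % snmp_output)
    return int(match.group(1))

def poll(host, community, oid=OID_IF_OUT):
    """Shell out to snmpget rather than fight the low-level Python SNMP bindings."""
    out = subprocess.check_output(["snmpget", "-v2c", "-c", community, host, oid])
    return parse_counter32(out.decode())

def geckoboard_payload(delta_bytes):
    """Approximate number-widget payload; check Geckoboard's docs for the real schema."""
    return {"item": [{"text": "LAN bytes out per interval", "value": delta_bytes}]}

def run(host, community, remote, interval=30):
    """Poll, diff, write JSON, then scp it to the box that serves it."""
    previous = poll(host, community)
    while True:
        time.sleep(interval)
        current = poll(host, community)
        delta = current - previous  # Counter32 wrap-around ignored for brevity
        previous = current
        path = tempfile.gettempdir() + "/router.json"
        with open(path, "w") as f:
            json.dump(geckoboard_payload(delta), f)
        subprocess.call(["scp", "-q", path, remote])  # key-based auth, no passphrase
```

<p>run() would then be pointed at the router's LAN address, with the scp target being the user and path set up on the EC2 box for the purpose.</p> <p>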
&nbsp;Less moving parts, so to speak.</p> <p>I created a user on the EC2 node, and a user on the VM, created SSH keys, and copied the .pub across, then had the python daemon write out the JSON object to ./tmp, then shell out again with subprocess to scp, and transfer the file across to the EC2 presentation server.</p> <p>From there, it was easy to just install Apache, throw in an&nbsp;</p> <pre>AddType application/json .json</pre> <p>line to the Default config, and serve the files from there.</p> <p>It wasn't realtime per se, as you can't easily poll SNMP data every second (without overloading the router), so it had an effective granularity of about 30s. &nbsp;Which as it turns out is fine, as 30s is Geckoboard's finest refresh granularity too.</p> <p>At 3AM on Sunday morning, I added Line Charts to the dashboard, showing current and historical usage (over the last 3 hours or so). &nbsp;To do this, I added Redis to the stack of things powering the app, and basically, every time I grabbed new data, I'd do a LPUSH to a key storing a bunch of time-series values for the data usage delta, then pull the last 100 or so, and use those to build the JSON object.</p> <p>This is an example of the output (I've set it to serve static files now, so it's Sunday Evening's data forever). <a href=""></a></p> <p>Geckoboard is a lovely slick dashboard, and I really enjoyed using it in my hack. &nbsp;I have only 2 minor issues with it, both of which I discussed with the Geckoboard team over the weekend. &nbsp;Firstly, the numeric value widget defaults to "Financial" data presentation, ie 1,000,000,000 becomes 1B, not 1G (ish), which is irritating for data presentation use. 
&nbsp;It's also not obvious whether it's using <a href="">long-scale or short-scale</a> Billions.</p> <p>The other bug comes when using the Simple Line Chart widget, which is that you can basically only use a maximum of ~250 datapoints in your line, as it's effectively a wrapped call to Google Charts, and after that length, you hit restrictions on URL length.</p> <p>This bug is effectively solved by using the HighCharts widget. &nbsp;</p> <p>Another bit of antiquated network hardware from my personal collection is my Axis 205 IP Camera, which I also brought along (mostly for fun), and then proceeded to set up as a streaming webcam for Saturday and Sunday.&nbsp;</p> <p>The Axis 205, although initially a bugger to set up, as it doesn't have a particularly intuitive setup procedure involving static ARP and pings and so on, is a pretty robust camera, and probably one of the smallest IP cameras available.</p> <p>It provides a web interface to a MotionJPEG stream, which is excellent if you're viewing on the LAN, but a bit of a bitch to proxy.</p> <p>You can't use a straight-off HTTP reverse proxy, like Varnish or just Apache, as it doesn't work like that. &nbsp;It's a bit more like having to proxy a websocket.</p> <p>My friend <a href="">Sam</a> mentioned a <a href="">node.js</a> powered <a href="">proxy for MJPEG streams</a>, so I set about trying to figure out how I could use that.</p> <p>Whilst my previous hack had been effectively stateless communication between the Local VM and the EC2 instance, this would require a bit more ingenuity to get the traffic across the wire in one piece.</p> <p>I experimented with a simple netcat pipeline, basically one netcat to listen to the MJPEG stream, and then pipe it into another on the EC2 instance, but this doesn't work, because once you've got the stream, you can't very easily present it to a bunch of people.</p> <p>VLC apparently can't transcode MJPEG. 
&nbsp;Sadly.</p> <p>So this Node Proxy was pretty much the only sensible solution.</p> <p><a href="">This document</a>&nbsp; was most useful in the creation of a Point-to-Point VPN between my VM and EC2, all over SSH. &nbsp;The advantage of this was being able to present a Layer 3 interface from the Amazon EC2 instance to the VM, without any special port forwarding, or connectivity, so it could open as many ports as it wants without any special configuration.</p> <p>Many people are familiar with the use of ssh to forward ports, but few are aware that it can actually be used to create a point-to-point tunnel. &nbsp;Basically, you get a pair of tun0 devices, one on each end of your tunnel, assign IP addresses to them, and away you go.</p> <p>I didn't even need to set up routing, as all I needed was for one side to be able to connect to the other.</p> <p>I ran a local proxy on the VM, which listened to the IP Camera's MJPEG stream from, and presented it as a new stream on the tunnel interface (</p> <p>On the EC2 instance, I ran another instance of the Node Proxy, to listen to, and re-broadcast the MJPEG stream on the public interface of the EC2 node. &nbsp;I reserved an Elastic IP for this, just to make it a little bit easier, and provide something we could point the A record ( at more easily.</p> <p>Could've done it with a CNAME, but Elastic IPs are more stable.</p> <p>I also considered using the Amazon VPC and connecting an IPSEC tunnel to the Cisco directly, but this would've taken me all weekend to set up, as I didn't seem to have VPC enabled, and getting it enabled was taking some time. &nbsp;This was quicker, but potentially dirtier, and did get the job done.</p> <p>So by the end of Saturday, we had the Live Router Stats working, and then by midnight on Saturday, we had a streaming webcam feed.</p> <p>The webcam feed had an interesting side-effect to the router stats, as more people connected, they each got about 1.0Mbit of video streamed to them every second. 
By the time it came to the prize giving on Sunday afternoon, we were pushing about 55.0Mbit out of the router. &nbsp;The highest I saw it get to was 72Mbit out, 25Mbit in. &nbsp;Which is pretty damn impressive for a 10 year old Cisco.</p> <p>Here, you can see <a href="">my presentation</a> on Sunday. &nbsp;Here's a <a href="">Github repository</a> containing all the stuff above, in some kind of format.</p> <p>I also found some time to help out <a href="!/lawrencejob">Lawrence Job</a> with his interesting "hardware hack", <a href="">GoCubed</a>. He'd brought along an Arduino, and a strip of 32 RGB LEDs, and was interfacing the GoSquared API (which gives you a current visitors count), with the LED Strip, for a live visitors count.</p> <p>Together, we wrote a bit of VB.Net to drive the Serial port interface to the Arduino (not having an Ethernet shield), to grab the data from the GoSquared API, and present it to the Arduino in a sensible format.</p> <p>You can see more of that <a href="">here</a>.</p> <p>So that was London Realtime. &nbsp;I had a lot of fun, it was my first-ever hackathon, and I found a great niche to work in. &nbsp;</p> <p>Many thanks again to all of the API sponsors, and the people of White Bear Yard for putting up with us all. &nbsp;Thanks to <a href="!/leydon">Chris Leydon</a>, <a href="!/jamesjgill">James Gill</a>, <a href="">Geoff Wagstaff</a>, <a href="!/floopily">James Taylor</a>, <a href="!/SaulGCullen">Saul Cullen</a>&nbsp;and the rest of the <a href="!/gosquared">GoSquared</a> <a href="">team</a> for making this an insanely good weekend.</p> <p>&nbsp;</p> <p>Here's some videos from <a href="">Friday</a></p> <p><a href="">Saturday</a></p> <p>and <a href="">Sunday</a></p> <p>by the insanely good cameraman <a href="!/ollynewport">Olly</a> <a href="">Newport</a>.&nbsp;</p> <p>&nbsp;</p> So many things have Jumped The Shark <p>I've come to realise more than ever recently that a number of things have <a href="">Jumped The Shark</a>. 
&nbsp;</p> <p>&nbsp;</p> <ul> <li><strong>Facebook<br /></strong>No more a social network, than a targeted advertising and gaming platform. &nbsp;Not even well targeted ads. &nbsp;Or well targeted games. &nbsp;Not a day goes by when I don't have to block a person for spam, or block a game for trying to take over my wall.<br /><br /></li> <li><strong>Google+<br /></strong>Well, I'm afraid it just never really was, was it?<br /><br /></li> <li><strong>eBay<br /></strong>Full of trolls and shysters. If you're buying Buy It Now, you might as well use Amazon. &nbsp;If you're bidding on an Auction, forget it. &nbsp;You'll get sniped by a snipebot. &nbsp;The common man stands no ground. &nbsp;It's like eBay has become the High Frequency Algorithmic Trading world of the stock market.<br /><br /></li> <li><strong>PayPal</strong><br />You don't have to look far afield anywhere to find a story of someone who's been had by PayPal. &nbsp;They change their terms and conditions with the wind. &nbsp;There's conflicting precedents set regarding whether you can use PayPal for charities, for personal collections, and so on. &nbsp;Used to be good, now... not.<br /><br /></li> <li><strong>Apple</strong><br />One of the big two on this list. &nbsp;Apple used to be the bitch of the Artist. &nbsp;The hardware was excellent, the software just worked. &nbsp;Worked in a roundabout way, often, but did, just work. &nbsp;<br />I have spent long days trying to persuade Apple to do things it used to do, but the Powers That Be decided that they would no-longer support a small feature that hardly anyone uses. &nbsp;That feature, like Frame-packed HDMI 1.4a has such a niche market they decided to abandon it, but thereby destroying the hopes and productivity of anyone from that niche. &nbsp;<br /><br />Apple used to be the Professional's Friend. &nbsp;Now they're just pandering to the wants and needs of the cunts with the iPads and the iPhones and the iPods. 
&nbsp;<br /><br /></li> <li><strong>Ubuntu</strong><br />Ah, Ubuntu. &nbsp;The only linux distribution I've ever really liked. &nbsp;Decent up-to-date packages, sensible <a href="">FHS </a>layout (ish). In the days of 10.04, Ubuntu was pretty near perfect.&nbsp;<br />But then, <a href="">Mark</a>, You introduced Unity. You broke the desktop. &nbsp;You destroyed my productivity, and you buggered my favourite OS.<br />It's just like Apple. &nbsp;You made a decision to play up to the Desktop Users who want everything shiny, and nice.. Fuck the Enterprise, Fuck the people who just want it to be stable, and familiar. &nbsp;Fuck the people who want the User Experience to be like they've seen it for the last 5+ years. &nbsp;Nooo, it's gotta be shiny, and borderless, and that fucking top-level menu. &nbsp;What the hell?<br /><br />Sorry, but Ubuntu jumped the shark too.<br /><br /></li> <li><strong>The internet for developers</strong><br />This is a bit of a tenuous title, because I'm not really sure what it's called.<br />That thing where you've got an idea, but you lack the agility and funds to do anything about it. &nbsp;<br />What invariably happens is that you shelve the idea for a bit, go off, find people to help you, and when you come back, someone else has beaten you to it. &nbsp;When you're up against the likes of Zynga, Facebook, Twitter, and the other Giants Of The Internet, it's a bugger to keep up.<br />There's also no sensible way to protect your idea. &nbsp;Software patents in the EU are impossibly worthless. &nbsp;Not that they're much better anywhere else. &nbsp;For that matter, even if you did patent something, all it would take would be a different implementation, and you'd still be screwed.<br /><br />And another thing... <a href="">Digital Sharecropping</a> is becoming rife again. Primarily, people are building entire companies to feed off the data that is provided by Twitter and Facebook and Every Other Social Site. 
&nbsp;There's a constant struggle between the data providers (who can, and will change their APIs at a moment's notice), and the data consumers (who are at risk of simply being borged by their providers!). &nbsp;Worse still, there's a very real possibility that you'll spend many hundreds of man-hours, and possibly thousands of dollars (or pounds, or euro) working on "The Next Big Thing", only to find that one of the Giants has been doing the same, and they have a bigger marketing budget than you. &nbsp;So they effectively pull the rug out from under you.<br /><br />Just look at Twitter and for a prime example. &nbsp;Then count the number of short URL providers that they de-rugged.<br /><br />Been there. &nbsp;Done that.</li> </ul> <p>I think the conclusion to draw here, especially in regard to the last point, is that I'm seeing less and less reason to maintain a presence in the application developer community. &nbsp;</p> <p>Working with computers had always been fun, but I'm getting the impression that if I'm not fighting the hardware (which I am), then I'm fighting the software. 
&nbsp;And when it's neither of those, then I might as well not bother for the fear of being de-rugged by a larger organisation.</p> <p><strong>So, what's next?</strong></p> Appearance Can Be Deceptive <p> <pre> - - [13/Mar/2012:00:41:09 +0000] "POST /3293b4da67abffca2460244619d9a8bca3f4c401.php HTTP/1.1" 200 1096 "-" "Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))"<br /> - - [13/Mar/2012:00:42:57 +0000] "POST /3293b4da67abffca2460244619d9a8bca3f4c401.php HTTP/1.1" 200 359 "-" "Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))"<br /> - - [13/Mar/2012:00:42:57 +0000] "POST /3293b4da67abffca2460244619d9a8bca3f4c401.php HTTP/1.1" 200 450 "-" "Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))"<br /> - - [13/Mar/2012:01:14:12 +0000] "POST /3293b4da67abffca2460244619d9a8bca3f4c401.php HTTP/1.1" 200 354 "-" "Mozilla/5.0 (Windows; U; MSIE 9.0; WIndows NT 9.0; en-US))"</pre> <p>Does this look suspicious to you?</p> <p>Sure as hell looks dubious to me. &nbsp;Looks like something doing something that it oughtn't. &nbsp;</p> <p>My first instinct on seeing this is to look at the <em>&nbsp;3293b4da67abffca2460244619d9a8bca3f4c401.php</em>&nbsp;file.</p> <p>Let's have a quick look inside that file.</p> <p>This looks dodgy:</p> <pre>&lt;?php<br />define('PAS_RES', 'atwentycharacterhash');<br />define('PAS_REQ', 'another20characters!');<br />define('RSA_LEN', '256');<br />define('RSA_PUB', '65537');<br />define('RSA_MOD', '1223197119012299291923139142398545904360596535434245223132131312321381297978456412347');</pre> <p>That's definitely dubious. &nbsp;It doesn't get any better.</p> <pre>$version=2;$requestId='0';$jsonRPCVer='2.0';</pre> <p>Yuk. JSON RPC. &nbsp;All on one line. &nbsp;Yuk.</p> <pre>function senzorErrorHandler($errno, $errstr, $errfile, $errline)</pre> <p>This looks really dodgy, and it's where the story really unfolds.</p> <p>Google for "senzorErrorHandler". 
&nbsp;Top hit is <a href=""></a> from StackOverflow, with the particularly scary title "My wordpress has been hacked, but what did the hacker do and how can I prevent it/ fix damage done". &nbsp;</p> <p>I'm frequently advising people over on Serverfault that if your server's been hacked, you can pretty much forget about doing anything other than&nbsp;<a href="">Nuking It From Orbit</a>&nbsp;and rebuilding from the last known good backup. &nbsp;</p> <p>Anyway. &nbsp;Let's have a look at that IP address that it's coming from. &nbsp;Who owns that?</p> <pre>inetnum: &nbsp; &nbsp; &nbsp; &nbsp; -<br />netname: &nbsp; &nbsp; &nbsp; &nbsp; BSB-Service-1<br />descr: &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; BSB-Service GmbH<br />route: &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<br />descr: &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;PlusServer AG</pre> <p>Well that's a bit nondescript. &nbsp;That could be any server or botnet. &nbsp;I don't like the way this is going. &nbsp;Let's try opening an HTTP connection to that site. &nbsp;Just drop it into your web browser. &nbsp;Oh Look... Nothing there. &nbsp;That's a bit dubious.</p> <p>You'd be in the majority if you said "Screw that, IPTables it out, and delete that weird hex-named file in the webroot." &nbsp;</p> <p>You might even go as far as saying "Nuke it from orbit, and restore it from backups".&nbsp;</p> <p>If you did that, however, you'd destroy that site's profile with WebsiteDefender.</p> <p>WebsiteDefender have an agent that sits in your site's root directory. &nbsp;Instead of having a sensible name, like "website-defender-agent.php", they go with a 40-character hex string that looks dodgy.</p> <p>Hidden away in the WebsiteDefender website, there's this page about their scanning IP addresses:<a href=""></a></p> <p>Apparently you should allow access to to allow their traffic to hit your server. 
&nbsp;</p> <pre>;; ANSWER SECTION:<br /> 3600 IN<span style="white-space: pre;"> </span>A<span style="white-space: pre;"> </span><br /> 3600 IN<span style="white-space: pre;"> </span>A<span style="white-space: pre;"> </span><br /> 3600 IN<span style="white-space: pre;"> </span>A<span style="white-space: pre;"> </span><br /> 3600 IN<span style="white-space: pre;"> </span>A<span style="white-space: pre;"> </span></pre> <p>&nbsp;</p> <p>And there's that mysterious 85.25 IP address. &nbsp;I came very close to destroying that agent file, then did some closer digging.</p> <p>If you read the <a href="">non-accepted answer</a>&nbsp;on that StackOverflow question, then you'll see that someone else has drawn the same conclusion as me. Actually, this is how I figured it out, but I think it bears repeating.</p> <p>So there's a few lessons to be drawn from here.&nbsp;</p> <p>I've no particular objection to website agents, and having files on your website for them.&nbsp;</p> <p>Here's how I'd improve the WebsiteDefender one.</p> <p><strong>1)</strong> Rename the agent file from something hexadecimal and weird, to "website-defender-agent.php"</p> <p><strong>2)</strong> Set up better whois information for your IP addresses.</p> <p><strong>3)</strong> Have your scanning IPs redirect inbound HTTP requests to your FAQ. (This alone would have helped instantly).</p> <p><strong>4)</strong> Don't hit the server so damn frequently.</p> <p><strong>5)</strong> Hit the server from different IP addresses now and again.</p> <p><strong>6)</strong> Oh, and put a block of comments at the top of your Agent code saying What it is, Where it's from, What it does, and Why it's there.</p> <p>That would make life a lot easier for any sysadmin who finds the same file and panics a little.</p> </p> Hacking initrd.gz on Ubuntu Netboot Installer <p>&nbsp;</p> <p>This morning, I did something unquestionably naughty, and totally got away with it.</p> <p>A little background. 
&nbsp;We just had some *brand* new workstations delivered. They turned up yesterday afternoon. &nbsp;They're high-performance 3D workstations with an Intel DX79TO mainboard. &nbsp;This mainboard has the Intel 82579 Gigabit Ethernet controller. &nbsp;I wouldn't normally pay so much attention to the controllers and so on that are actually on a board, but in future, I will.</p> <p>This controller is not supported by the e1000e driver that comes in the PXE installer on the netboot CD for Ubuntu 10.04. &nbsp;</p> <p>We plugged them in, fired them up, and watched the PXE installer fail as it couldn't find a supported kernel module for that hardware. &nbsp;Bugger.</p> <p>Our primary plan was to buy some cheap 1GE NICs, install with those, update the driver, then carry on.</p> <p>My personal plan was to update the initramfs of the PXE installer, give it a newer e1000e kernel module, and *pray* that it works.</p> <p>Things you need:</p> <p><strong>1) <a href="">e1000e</a> <a href=";DwnldID=15817&amp;ProdId=3299&amp;lang=eng&amp;OSVersion=Linux*&amp;DownloadType=Drivers">source from Intel</a>.</strong></p> <p><strong>2) The initrd.gz and linux files from the PXE installer.</strong></p> <p><strong>3) The linux-headers for the kernel of the PXE boot installer.</strong></p> <p>So go ahead and grab those initrd.gz and 'linux' files, then run `file` on 'linux' and grab the kernel version. Now you can download&nbsp;the headers for that version.</p> <p>If you have a look inside the <em>Makefile</em> for the e1000e drivers, there's this block</p> <pre>ifeq (,$(BUILD_KERNEL))<br />BUILD_KERNEL=$(shell uname -r)<br />endif</pre> <p>which I read, and thought "Ha! I can provide BUILD_KERNEL as an environment variable and build against that instead."</p> <pre>BUILD_KERNEL=2.6.32-21-generic make</pre> <p>If you run make install, then it'll insert it into your own kernel. 
&nbsp;If you just run make, you can look in the src directory, and find the .ko file we need.</p> <p>Then comes the harder part. &nbsp;</p> <p>Make some directories like&nbsp;</p> <pre>kernelhacking/{initrd,e1000e}</pre> <p>copy the <em>initrd.gz</em> into initrd/</p> <p>and run</p> <pre>zcat initrd.gz | (while true; do cpio -i -d -H newc --no-absolute-filenames || exit; done)</pre> <p>I lifted this little snippet from <a href="">some guy's blog on the subject</a>.</p> <p>then <em>mv</em> the <em>initrd.gz</em> up a level.</p> <p>Now you've got the rootfs that the installer uses.</p> <p>&nbsp;</p> <pre>tom.oconnor@charcoal-black:~/kernelhacking/initrdhacking$ ls -lah<br />total 544K<br />drwxrwxr-x 15 tom.oconnor dialout 2.0K 2012-03-01 11:35 .<br />drwxrwxr-x &nbsp;5 tom.oconnor dialout 2.0K 2012-03-01 13:40 ..<br />drwxr-xr-x &nbsp;2 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;8.0K 2012-03-01 11:35 bin<br />drwxr-xr-x &nbsp;2 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;2.0K 2012-03-01 11:35 dev<br />drwxr-xr-x 13 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;4.0K 2012-03-01 11:50 etc<br />-rwxr-xr-x &nbsp;1 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp; 376 2012-03-01 11:35 init<br />drwxr-xr-x &nbsp;2 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;2.0K 2012-03-01 11:35 initrd<br />drwxr-xr-x 12 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;4.0K 2012-03-01 11:35 lib<br />lrwxrwxrwx &nbsp;1 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp; &nbsp; 4 2012-03-01 11:35 lib64 -&gt; /lib<br />drwxr-xr-x &nbsp;2 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;2.0K 2012-03-01 11:35 media<br />drwxr-xr-x &nbsp;2 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;2.0K 2012-03-01 11:35 mnt<br />drwxr-xr-x &nbsp;2 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;2.0K 2012-03-01 11:35 proc<br />drwxr-xr-x &nbsp;2 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;4.0K 2012-03-01 11:35 sbin<br />drwxr-xr-x &nbsp;2 root &nbsp; &nbsp; &nbsp; 
&nbsp;root &nbsp; &nbsp;2.0K 2012-03-01 11:35 sys<br />drwxr-xr-x &nbsp;2 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;2.0K 2012-03-01 11:35 tmp<br />drwxr-xr-x &nbsp;6 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;2.0K 2012-03-01 11:35 usr<br />drwxr-xr-x &nbsp;8 root &nbsp; &nbsp; &nbsp; &nbsp;root &nbsp; &nbsp;2.0K 2012-03-01 11:35 var<br />tom.oconnor@charcoal-black:~/kernelhacking/initrdhacking$&nbsp;</pre> <p>&nbsp;</p> <p>The driver we want to replace is the e1000e.ko, deep within</p> <pre> lib/modules//kernel/drivers/net/e1000e.</pre> <p>I'm going to switch to root now, to save on sudo keystrokes:</p> <pre>root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/lib/modules/2.6.32-21-generic/kernel/drivers/net/e1000e# ls<br />e1000e.ko<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/lib/modules/2.6.32-21-generic/kernel/drivers/net/e1000e# mv e1000e.ko /home/tom.oconnor/kernelhacking/old-e1000e.ko<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/lib/modules/2.6.32-21-generic/kernel/drivers/net/e1000e# cp /home/tom.oconnor/kernelhacking/src/e1000e-1.9.5/src/e1000e.ko .</pre> <p>I also found a pci.ids update file in the e1000 package. &nbsp;Let's track that down, or at least figure out where it gets installed to. 
&nbsp;</p> <pre>tom.oconnor@charcoal-black:~$ locate pci.ids<br />/usr/share/misc/pci.ids</pre> <pre>root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking# cd usr/share/misc/<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# ls<br />pci.ids.gz<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# zless pci.ids.gz&nbsp;<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# gunzip pci.ids.gz&nbsp;<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# ls<br />pci.ids<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# vim pci.ids&nbsp;<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# ls -lah<br />total 448K<br />drwxr-xr-x &nbsp;2 root root 2.0K 2012-03-01 11:42 .<br />drwxr-xr-x 10 root root 2.0K 2012-03-01 11:35 ..<br />-rw-r--r-- &nbsp;1 root root 358K 2012-03-01 11:35 pci.ids<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# pwd<br />/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# ls<br />pci.ids<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# rm pci.ids&nbsp;<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# wget<br />--2012-03-01 10:49:50-- &nbsp;<br />Resolving<br />Connecting to||:80... connected.<br />HTTP request sent, awaiting response... 
200 OK<br />Length: 190157 (186K) [application/x-gzip]<br />Saving to: `pci.ids.gz'<br />100%[=======================================&gt;] 190,157 &nbsp; &nbsp; &nbsp;242K/s &nbsp; in 0.8s &nbsp; &nbsp;<br />2012-03-01 10:49:51 (242 KB/s) - `pci.ids.gz' saved [190157/190157]<br />root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking/usr/share/misc# ls -lah<br />total 256K<br />drwxr-xr-x &nbsp;2 root root 2.0K 2012-03-01 11:49 .<br />drwxr-xr-x 10 root root 2.0K 2012-03-01 11:35 ..<br />-rw-rw-r-- &nbsp;1 root root 186K 2012-02-27 02:15 pci.ids.gz</pre> <p>I couldn't be arsed patching the existing pci.ids, so I thought it would be just as good to replace it with the latest one from <a href=""></a></p> <p>Turns out, it is!</p> <p>The next thing to do is to rebuild the initramfs back into a cpio.gz file.</p> <p>I grabbed most of my initrd fu from <a href="">here</a>.&nbsp;It's a little out of date, perhaps, but seemed to work for me.</p> <p>Let's say we've got this:</p> <pre>root@charcoal-black:/home/tom.oconnor/kernelhacking/initrdhacking# ls<br />bin &nbsp;dev &nbsp;etc &nbsp;init &nbsp;initrd &nbsp;lib &nbsp;lib64 &nbsp;media &nbsp;mnt &nbsp;proc &nbsp;sbin &nbsp;sys &nbsp;tmp &nbsp;usr &nbsp;var</pre> <p>First things first:</p> <pre>$ touch initrdhacking/etc/mdev.conf</pre> <p>That's actually the only bit I had to do, and I only did it for compatibility reasons. &nbsp;The rest is all there, as we took it from a working initrd.gz file.</p> <p>The compression step was the bit I didn't quite know about.</p> <pre>cd initrdhacking<br />find . | cpio -H newc -o &gt; ../initrd.cpio<br />cd ..<br />gzip initrd.cpio</pre> <p>Then we'll copy the new initrd.gz in place of the old one on the tftp server.
&nbsp;You should always make a backup of the old one, in case this all goes horribly wrong, etc.</p> <pre>cd /tftpboot/ubuntu-1004-installer/amd64/<br />mv initrd.gz old.initrd.gz<br />cp /home/tom.oconnor/kernelhacking/initramfs.cpio.gz initrd.gz</pre> <p>Done.</p> <p>I went over to the new workstations, and kicked them into a PXE netboot/preseed session.</p> <p>:D. &nbsp;They're installing. &nbsp;Best feeling ever.</p> <p>I stuck my hands in an initramfs and fiddled with its private parts.</p> <p>Epic Win.</p> <p>See <a href="/blogish/building-dkms-package-latest-intel-e1000e-driver/">Part Two</a> for the continuing saga of the e1000e kernel module.</p> <p>&nbsp;</p> Building a DKMS package of the latest Intel e1000e driver <p>This is a continuation of my earlier blogpost about <a href="/blogish/hacking-initrdgz-ubuntu-netboot-installer/">hacking initrd.gz</a>.</p> <p> <p>After the system is installed with the modified installer kernel and has finished building, the installed kernel doesn't have the e1000e driver. &nbsp;This is because the netboot installer pulls in a kernel and its wherewithal from apt. &nbsp;It also gets a new initrd.</p> <p>As a result, I've decided to build an e1000e-dkms module, and we'll specify the preseed installer to install that, along with linux-headers-generic, linux-headers-2.6.32 and build-essential.</p> <p>Building DKMS modules is fiddly, and requires some work and testing to get right.</p> <p>&nbsp;</p> <p>There are a few ways to build DKMS modules, according to the<a href=""> Ubuntu Wiki</a> on the topic.
I've found another way to do it, albeit a slightly sneaky, roundabout way, but it is a lot quicker than rolling one from scratch.</p> <p><strong>Here's what I did.</strong></p> <p>Knowing that we use the Wacom kernel drivers elsewhere in the organisation, and they're installed by DKMS from source too, I started off by grabbing the <a href="">wacom-dkms package</a> from the <a href="">PPA</a>.&nbsp;</p> <p>Deb files are really quite a simple archive format that you can extract with</p> <pre>ar xv $PACKAGE_NAME</pre> <pre>tom.oconnor@charcoal-black:~$ mkdir dkmsroll &nbsp;<br />tom.oconnor@charcoal-black:~$ cd dkmsroll/<br />tom.oconnor@charcoal-black:~/dkmsroll$ ls<br />tom.oconnor@charcoal-black:~/dkmsroll$ wget 14:57:43-- &nbsp;<br />Resolving<br />Connecting to||:80... connected.<br />HTTP request sent, awaiting response... 200 OK<br />Length: 624404 (610K) [application/x-debian-package]<br />Saving to: `wacom-dkms_0.8.8-0ubuntu4_all.deb'<br />100%[===================================================================================================================================================================================================&gt;] 624,404 &nbsp; &nbsp; &nbsp;428K/s &nbsp; in 1.4s &nbsp; &nbsp;<br />2012-03-01 14:57:44 (428 KB/s) - `wacom-dkms_0.8.8-0ubuntu4_all.deb' saved [624404/624404]<br />tom.oconnor@charcoal-black:~/dkmsroll$ ar xv wacom-dkms_0.8.8-0ubuntu4_all.deb&nbsp;<br />x - debian-binary<br />x - control.tar.gz<br />x - data.tar.gz<br /></pre> <p><em>'debian-binary'</em> is a file that states the version of the dpkg file. &nbsp;You can ignore this safely for this exercise.</p> <p><em>'control.tar.gz'</em> is the archive that contains the metadata for the package.
&nbsp;Let's extract that.&nbsp;</p> <pre>tom.oconnor@charcoal-black:~/dkmsroll$ tar zxvf control.tar.gz&nbsp;<br />./<br />./prerm<br />./postinst<br />./md5sums<br />./control</pre> <p>&nbsp;</p> <p>We're going to re-roll the package later on, slightly differently, but we'll need to keep the prerm and postinst files. &nbsp;These are plain old shellscript files with special names that will get run by dpkg in the installation phase.</p> <p>&nbsp;</p> <p>There are four you really tend to find: <strong>prerm</strong>, <strong>preinst</strong>, <strong>postrm</strong> and <strong>postinst</strong>.</p> <p><em>'control'</em> contains the important metadata, like <em>Depends</em>: and <em>Description</em>: and <em>Package</em>:, which is how apt indexes packages, and dpkg knows what to install in what order, and so on.</p> <p>The control file above looks like this:</p> <pre>Package: wacom-dkms<br />Source: wacom-source<br />Version: 0.8.8-0ubuntu4<br />Architecture: all<br />Maintainer: Taylor LeMasurier-Wren &lt;;<br />Installed-Size: 2964<br />Depends: dkms (&gt;= 1.95)<br />Section: misc<br />Priority: optional<br />Description: Wacom kernel driver in DKMS format</pre> <p>There's one thing left, and that's<strong> data.tar.gz</strong>. &nbsp;This is where the package data is actually stored, and when you uncompress it, you get a bunch of directories that, when extracted by dpkg, will be reproduced in /</p> <pre>tom.oconnor@charcoal-black:~/dkmsroll$ tar xzvf data.tar.gz&nbsp;<br />./<br />./usr/<br />./usr/src/<br />./usr/src/wacom-0.8.8/<br />./usr/src/wacom-0.8.8/dkms.conf<br />./usr/src/wacom-0.8.8/GPL<br />./usr/src/wacom-0.8.8/config.guess<br />./usr/src/wacom-0.8.8/AUTHORS<br />./usr/src/wacom-0.8.8/config.guess.cdbs-orig</pre> <p>...
And so on.</p> <p>What we've now got is the following:</p> <pre>tom.oconnor@charcoal-black:~/dkmsroll$ ls&nbsp;<br />control &nbsp;control.tar.gz &nbsp;data.tar.gz &nbsp;debian-binary &nbsp;md5sums &nbsp;postinst &nbsp;prerm &nbsp;usr/ &nbsp;wacom-dkms_0.8.8-0ubuntu4_all.deb<br /></pre> <p>Let's have a look at that usr/ directory (which when installed, becomes /usr/):</p> <p>We only really care about the directory structure here..&nbsp;</p> <pre>tom.oconnor@charcoal-black:~/dkmsroll$ tree -d usr/<br />usr/<br />|-- share<br />| &nbsp; |-- doc<br />| &nbsp; | &nbsp; `-- wacom-dkms<br />| &nbsp; `-- man<br />| &nbsp; &nbsp; &nbsp; `-- man8<br />`-- src<br />&nbsp; &nbsp; `-- wacom-0.8.8<br />&nbsp; &nbsp; &nbsp; &nbsp; `-- src<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- 2.6.16<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- 2.6.18<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- 2.6.24<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- 2.6.30<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- include<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- util<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- wacomxi<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; `-- xdrv<br />16 directories</pre> <p>So what we will need to do, to create an e1000e DKMS package, is create a tree structure that looks a bit like this.</p> <p>&nbsp;</p> <p>So let's create a directory to contain all this package building nonsense. 
&nbsp;</p> <pre>tom.oconnor@charcoal-black:~/dkmsroll$ mkdir e1000e-dkms<br />tom.oconnor@charcoal-black:~/dkmsroll$ cd e1000e-dkms<br />tom.oconnor@charcoal-black:~/dkmsroll/e1000e-dkms$&nbsp;</pre> <p>Now we're here, let's create 3 directories, 'DOWNLOAD', 'info', and 'src'.</p> <p>'DOWNLOAD' is where we'll leave the source package file that we downloaded from Intel, and any other associated downloaded stuff.</p> <pre>tom.oconnor@charcoal-black:~/dkmsroll/e1000e-dkms/DOWNLOAD$ ls<br />e1000e-1.9.5 &nbsp;e1000e-1.9.5.tar.gz</pre> <p>You can go ahead and extract the downloaded tarball. &nbsp;We need to pick and choose bits out of it, and generally have a good ol' reorganise.</p> <p>'info' is a directory of my own creation. &nbsp;It's where I tend to stash the metadata files, some of which will be built into the 'control' file, and some are (post|pre)(inst|rm) scripts..&nbsp;</p> <p>Best thing to do, copy the postinst/prerm files from the extracted wacom DKMS control files into info/, and edit them in your favourite text editor.</p> <p>You're looking for anything in the file that says "wacom", basically.&nbsp;</p> <p>In the postinst, in this case, it looks like this.</p> <pre>NAME=wacom<br />PACKAGE_NAME=$NAME-dkms<br />CVERSION=`dpkg-query -W -f='${Version}' $PACKAGE_NAME | awk -F "-" '{print $1}' | cut -d\: -f2`<br />ARCH=`dpkg --print-architecture`</pre> <p>All you need to do is change wacom to e1000e wherever it's mentioned, and update the version variable in the prerm file. &nbsp;Just go through the files and be sensible about where stuff needs changing.</p> <p>Next thing we need to do is set up where the package will be installed to.</p> <p>Earlier on, you made a 'src' directory. &nbsp;As I said before, this will effectively be dropped into / by dpkg, so usr/ becomes /usr/ and so on. &nbsp;Within reason, you can drop anything into the right path here, and dpkg will deploy it. 
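Going back a step, the wacom-to-e1000e rename of the copied maintainer scripts can be done mechanically with sed rather than by hand. A sketch (the fabricated info/postinst below is a stand-in for the real one pulled out of the wacom package):

```shell
# Sketch of the wacom -> e1000e rename; file contents here are illustrative.
mkdir -p info
printf 'NAME=wacom\nPACKAGE_NAME=$NAME-dkms\n' > info/postinst
# One pass with sed does the mechanical part of the renaming:
sed -i 's/wacom/e1000e/g' info/postinst
cat info/postinst
# NAME=e1000e
# PACKAGE_NAME=$NAME-dkms
```

You'd still want to eyeball the results afterwards (and bump the version variable in the prerm by hand), since not every occurrence is guaranteed to be a straight substitution.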
&nbsp;Dropping stuff into /sys, /proc, or /dev might land you in hot water.</p> <p>I've set up the directory structure below usr/ to be very similar to the wacom one, except without usr/share/* (for simplicity).</p> <pre>tom.oconnor@charcoal-black:~/dkmsroll/e1000e-dkms/src$ tree -a<br />.<br />`-- usr<br />&nbsp; &nbsp; `-- src<br />&nbsp; &nbsp; &nbsp; &nbsp; `-- e1000e-1.9.5<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- dkms.conf<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; `-- src<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- 80003es2lan.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- 80003es2lan.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- 82571.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- 82571.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- defines.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- e1000.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- ethtool.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- hw.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- ich8lan.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- ich8lan.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- kcompat.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- kcompat_ethtool.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- kcompat.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- mac.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- mac.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- Makefile<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- manage.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- manage.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- Module.supported<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- netdev.c<br 
/>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- nvm.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- nvm.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- param.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- phy.c<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |-- phy.h<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; `-- regs.h<br />4 directories, 27 files</pre> <p>The innermost src/ directory (actually /usr/src/e1000e-1.9.5/src) contains the contents of the src/ directory from the e1000e-1.9.5.tar.gz file we downloaded from Intel.</p> <p>Above that, we've got a dkms.conf file, which is basically a pre-make file that controls the DKMS builder.</p> <p>We need to write that dkms.conf file, and specify how the DKMS builder should actually run make.</p> <p>Here's the contents of the dkms.conf file I used to match the source tree above.</p> <pre>MAKE="cd src/ &amp;&amp; BUILD_KERNEL=${kernelver} make"<br />CLEAN="cd src/ &amp;&amp; make clean"<br />BUILT_MODULE_NAME="e1000e"<br />BUILT_MODULE_LOCATION="src/"<br />DEST_MODULE_LOCATION="/kernel/../updates/"<br />PACKAGE_NAME="e1000e"<br />PACKAGE_VERSION="1.9.5"<br />REMAKE_INITRD="yes"<br />AUTOINSTALL="yes"</pre> <p>$kernelver is a variable provided for use inside dkms.conf files. It's basically the contents of `uname -r`.</p> <p>We can pass BUILD_KERNEL as an environment variable to make to allow us to specify the location of the linux headers for the kernel we're using, or to specify an alternate kernel to build against.</p> <pre>DEST_MODULE_LOCATION="/kernel/../updates/"</pre> <p>is where DKMS will shove the created .ko file, and BUILT_MODULE_NAME is the name of the kernel module, without the .ko extension. &nbsp;Simple really (at least in this case).</p> <p>We're ready to build the deb file now.
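(As an aside, if you want to convince yourself of the $kernelver/BUILD_KERNEL plumbing, you can expand the MAKE line by hand for the running kernel. Setting kernelver manually here is purely for illustration; DKMS supplies it itself at build time:)

```shell
# Emulate what DKMS substitutes into the MAKE= line for the running kernel.
# (kernelver is set by hand here only for illustration.)
kernelver=$(uname -r)
echo "cd src/ && BUILD_KERNEL=${kernelver} make"
```

That echoed string is, more or less, exactly the command DKMS runs in /usr/src/e1000e-1.9.5/ when it builds the module.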
&nbsp;We've got the metadata in order, we've got the contents of the package in place, where DKMS is expecting it to be, and we've got the dkms.conf file written.</p> <p>So change back to the root of the package build environment (contains the DOWNLOAD and info, and src directories).</p> <p>You'll need <a href="">jordansissel</a>'s <a href="">fpm</a> package builder for this next bit.</p> <pre>fpm -n "e1000e-dkms" \<br />&nbsp; &nbsp; -v "1.9.5" \<br />&nbsp; &nbsp; -a "all" \<br />&nbsp; &nbsp; -s dir \<br />&nbsp; &nbsp; -C src \<br />&nbsp; &nbsp; -t deb \<br />&nbsp; &nbsp; --pre-uninstall info/e1000e-dkms.prerm \<br />&nbsp; &nbsp; --post-install info/e1000e-dkms.postinst \<br />&nbsp; &nbsp; --url "" \<br />&nbsp; &nbsp; --description "DKMS Intel e1000e driver" \<br />&nbsp; &nbsp; --iteration "custombuild-r1" \<br />&nbsp; &nbsp; --depends "dkms (&gt;= 1.95)" \<br />&nbsp; &nbsp; --replaces "e1000e-dkms (&lt;&lt; 1.9.5)"</pre> <pre>Created /home/tom.oconnor/dkmsroll/e1000e-dkms/e1000e-dkms_1.9.5-custombuild-r1_all.deb</pre> <p>Ta-da! &nbsp;You've now got a dkms package, which if you like, you can inspect the contents of, just as we did before.</p> <p>Here's the contents of the control file, for example:</p> <pre>Package: e1000e-dkms<br />Version: 1.9.5-custombuild-r1<br />License: unknown<br />Vendor: none<br />Architecture: all<br />Maintainer: &lt;tom.oconnor@charcoal-black&gt;<br />Depends: dkms (&gt;= 1.95)<br />Replaces: e1000e-dkms (&lt;&lt; 1.9.5)<br />Standards-Version: 3.9.1<br />Section: default<br />Priority: extra<br />Homepage:<br />Description: DKMS Intel e1000e driver</pre> <p>Clever, huh? And not a single bit of dh_make in sight. &nbsp;I <strong>love</strong> fpm for this exact reason.</p> <p>We'll just test that installation process. 
&nbsp;Mine looks slightly different here because I've previously installed a few of these packages, but yours should look something like this:</p> <pre>tom.oconnor@charcoal-black:~/dkmsroll/e1000e-dkms$ sudo dpkg -i e1000e-dkms_1.9.5-custombuild-r1_all.deb<br />[sudo] password for tom.oconnor:&nbsp;<br />(Reading database ... 385051 files and directories currently installed.)<br />Preparing to replace e1000e-dkms 1.9.5-baseblack-r6 (using e1000e-dkms_1.9.5-custombuild-r1_all.deb) ...<br />-------- Uninstall Beginning --------<br />Module: &nbsp;e1000e<br />Version: 1.9.5<br />Kernel: &nbsp;2.6.32-38-generic (x86_64)<br />-------------------------------------<br />Status: Before uninstall, this module version was ACTIVE on this kernel.<br />e1000e.ko:<br />&nbsp;- Uninstallation<br />&nbsp; &nbsp;- Deleting from: /lib/modules/2.6.32-38-generic/updates/dkms/<br />&nbsp;- Original module<br />&nbsp; &nbsp;- No original module was found for this module on this kernel.<br />&nbsp; &nbsp;- Use the dkms install command to reinstall any previous module version.<br />depmod....<br />Updating initrd<br />Making new initrd as /boot/initrd.img-2.6.32-38-generic<br />(If next boot fails, revert to the .bak initrd image)<br />update-initramfs....<br />DKMS: uninstall Completed.<br />------------------------------<br />Deleting module version: 1.9.5<br />completely from the DKMS tree.<br />------------------------------<br />Done.<br />Unpacking replacement e1000e-dkms ...<br />Setting up e1000e-dkms (1.9.5-custombuild-r1) ...<br />Loading new e1000e-1.9.5 DKMS files...<br />First Installation: checking all kernels...<br />Building only for 2.6.32-38-generic<br />Building for architecture x86_64<br />Building initial module for 2.6.32-38-generic<br />Done.<br />e1000e.ko:<br />Running module version sanity check.<br />&nbsp;- Original module<br />&nbsp;- Installation<br />&nbsp; &nbsp;- Installing to /lib/modules/2.6.32-38-generic/updates/dkms/<br />depmod....<br />Updating
initrd<br />Making new initrd as /boot/initrd.img-2.6.32-38-generic<br />(If next boot fails, revert to the .bak initrd image)<br />update-initramfs....<br />DKMS: install Completed.<br />Processing triggers for initramfs-tools ...<br />update-initramfs: Generating /boot/initrd.img-2.6.32-38-generic</pre> <p>So let's recap. &nbsp;We took an existing dkms package from a PPA, took it apart, and figured out how it was put together. &nbsp;We made our own directory structure to look like that, modified the postinst and prerm files to fit our DKMS module instead, then dropped the source directory into the source tree, and built the new debian package with fpm.</p> <p>In order to make this installable at system build time, I simply published the deb to our internal apt repository, then in our preseed file, we've got a pkgsel/include line that now looks like this.</p> <pre>d-i pkgsel/include string puppet puppet-common facter ssh zsh curl dkms linux-headers-generic build-essential linux-headers-2.6.32-38 e1000e-dkms</pre> <p>2.6.32-38 is the default lucid kernel that has been installed initially, so we can specify that explicitly here. &nbsp;</p> <p>When the preseeder runs, it installs dkms, linux-headers and build-essential, then grabs the e1000e-dkms package and installs that too.</p> <p>The DKMS builder triggers a rebuild of the initramfs that lives in /boot, so that next time we boot, the kernel loads the new e1000e.ko module, and the system can then access the network.</p> <p><strong>Job's a good 'un.</strong></p> </p> Installing Holla - Ruby HTTP chat system <p>&nbsp;</p> <p>Today's a bit of a quiet day, so I'm going to have a go at scratching a bit of a personal itch.</p> <p><a href="">Campfire</a> is a really rather sexy HTTP-based chat system provided by 37signals.</p> <p>I've found a clone-type application written by <a href="">@maccman</a> on <a href="">Github</a>, called <a href="">Holla</a>. &nbsp;</p> <p>I'm gonna have a go at installing it on Ubuntu 10.04.
&nbsp;It requires Ruby 1.9.2, which isn't installed by default, so that's the first hurdle.&nbsp;</p> <p>I'm gonna need to build it from source, probably.&nbsp;</p> <p>It looks like there's no backport for Lucid. &nbsp;So, yeah.. Source it is. &nbsp;:(. &nbsp;I might have a go at wrapping it up with <a href="">FPM</a> later on.&nbsp;</p> <p><a href="">Here's a tutorial</a> for 1.9.2 on 10.04. &nbsp;Note: I only did it as root, because that's the default login for this VM Image I was using. &nbsp;Normally I'd do it as me, but meh. Complications.</p> <pre>root@holla:~# apt-get install zlib1g zlib1g-dev build-essential libcurl4-openssl-dev<br />...</pre> <p>stuff happens....</p> <p>We're gonna grab the 1.9.3 stable recommended snapshot from <a href=""></a> (specifically, <a href="">this snapshot</a>), and hope for the best. &nbsp;I'm assuming that 1.9.3 will work, if 1.9.2 was recommended. This could be my undoing. &nbsp;We shall see.</p> <p>I'm aware that I could have used RVM, but as this is a server, I'd rather have native packages, or at least natively built sources. &nbsp;</p> <pre>root@holla:~# mkdir sources<br />root@holla:~# cd sources/<br />root@holla:~/sources# wget</pre> <p>...</p> <pre>root@holla:~/sources# tar xzvf ruby-1.9.3-p125.tar.gz&nbsp;</pre> <p>...</p> <pre>root@holla:~/sources# cd ruby-1.9.3-p125<br />root@holla:~/sources/ruby-1.9.3-p125# ./configure</pre> <p>.... Lots of Stuff ...</p> <pre>root@holla:~/sources/ruby-1.9.3-p125# make</pre> <p>.. This bit took ages...</p> <pre>root@holla:~/sources/ruby-1.9.3-p125# make test</pre> <p>... This bit took ages too...</p> <pre>root@holla:~/sources/ruby-1.9.3-p125# make install<br />root@holla:~/sources/ruby-1.9.3-p125# ruby -v<br />ruby 1.9.3p125 (2012-02-16 revision 34643) [x86_64-linux]</pre> <p>Woot.</p> <p>Right..
Next bit.</p> <p><strong>Prerequisites</strong></p> <p><strong>Ruby 1.9.2 [*] - Done</strong></p> <p><strong>Bundler [ ]</strong></p> <p><strong>Redis [ ]</strong></p> <pre>root@holla:~/sources# gem install bundler&nbsp;<br />/usr/local/lib/ruby/1.9.1/yaml.rb:56:in `':<br />It seems your ruby installation is missing psych (for YAML output).<br />To eliminate this warning, please install libyaml and reinstall your ruby.<br />^CERROR: &nbsp;Interrupted<br />root@holla:~/sources# gem install psych<br />/usr/local/lib/ruby/1.9.1/yaml.rb:56:in `':<br />It seems your ruby installation is missing psych (for YAML output).<br />To eliminate this warning, please install libyaml and reinstall your ruby.<br />Fetching: psych-1.2.2.gem (100%)<br />Building native extensions. &nbsp;This could take a while...<br />ERROR: &nbsp;Error installing psych:<br /><span style="white-space: pre;"> </span>ERROR: Failed to build gem native extension.<br />&nbsp; &nbsp; &nbsp; &nbsp; /usr/local/bin/ruby extconf.rb<br />extconf.rb:7: Use RbConfig instead of obsolete and deprecated Config.<br />checking for yaml.h... no<br />yaml.h is missing. Try 'port install libyaml +universal' or 'yum install libyaml-devel'<br />...<br />root@holla:~/sources# apt-get install libyaml-dev<br />root@holla:~/sources# gem install psych<br />...<br />Successfully installed psych-1.2.2<br />root@holla:~/sources# gem install bundler<br />Fetching: bundler-1.0.22.gem (100%)<br />Successfully installed bundler-1.0.22<br />1 gem installed</pre> <p><strong><br /></strong></p> <p><strong>Bundler [*]</strong></p> <p><strong>Redis [ ]</strong></p> <p>We'll install Redis<a href=""> like this</a>, perhaps.&nbsp;</p> <p>I found a decent Github gist, and forked it to modify it slightly so that it works.</p> <p>I only had to add `useradd` redis to make it work..&nbsp;</p> <p>It even uses Upstart! 
&lt;3 The gist contains a decent redis-server.conf file for upstart.</p> <pre>root@holla:/etc/init# vim redis-server.conf<br />root@holla:/etc/init# start redis-server<br />redis-server start/running, process 8388<br />root@holla:/etc/init# status redis-server<br />redis-server start/running, process 8388</pre> <p>Woot.</p> <p><strong>Redis [*]</strong></p> <p>Right. &nbsp;The blogpost also says it requires Juggernaut, a node.js application server, so let's go ahead and figure out node.js for Ubuntu 10.04 while we're here.</p> <p>Here's<a href=""> someone else's blogpost</a>&nbsp;on the subject.&nbsp;</p> <p>I'm quietly horrified that a search like "Installing on Ubuntu 10.04" doesn't automatically return someone's PPA, or public apt repo. - It looks like it's available for Oneiric and Precise from launchpad PPAs, but not for Lucid.&nbsp;</p> <p>So, again.. Source it is? :(</p> <p>&nbsp;</p> <pre>root@holla:/etc/init# apt-get install g++ curl libssl-dev apache2-utils<br />root@holla:/etc/init# apt-get install git-core</pre> <pre>root@holla:~/sources# git clone git://</pre> <p>Fuck that. &nbsp;It's huge.</p> <pre>root@holla:~/sources# wget</pre> <p>./configure, make, make install.. and so on.</p> <p>&nbsp;</p> <p>Apparently these are required.</p> <pre>gem install juggernaut<br />npm install -g juggernaut</pre> <p>&nbsp;</p> <pre>root@holla:~/sources# cd ..<br />root@holla:~# mkdir app<br />root@holla:~# cd app<br />root@holla:~/app# git clone<br />root@holla:~/app# cd holla/</pre> <p>&nbsp;</p> <p>Now dependencies.. libxml2 and libxslt and libsqlite3-dev</p> <pre>root@holla:~/app/holla# apt-get install libxml2 libxml2-dev libxslt1-dev libxslt1.1 libsqlite3-dev</pre> <pre>root@holla:~/app/holla# bundle install</pre> <p>&nbsp;</p> <p>...
Stuff happens...</p> <p>This bit took ages for me.</p> <p>It was trying to install some Bundle/debug shite, so I killed that off, and edited the Gemfile to remove the Debug lines.</p> <pre>root@holla:~/app/holla#&nbsp;start redis-server<br />root@holla:~/app/holla#&nbsp;rake db:migrate<br />root@holla:~/app/holla#&nbsp;rails server thin<br />root@holla:~/app/holla#&nbsp;gem install rails<br />root@holla:~/app/holla#&nbsp;&nbsp;rails server thin</pre> <p>&nbsp;</p> <p>Right. &nbsp;Now there's an instance of Holla running on port 3000 on &nbsp;Excellent.</p> <p>If you head over to your holla server (mine's just http://holla:3000/ thanks to DNS being ace)</p> <p>and View Source, you'll see that there's something expected on port 8080 of "localhost". &nbsp;This is because there's a bit of lame config that's not actually a) documented, or b) shown for production usage.</p> <p>I put the Holla git clone-down in /srv/holla.</p> <p>Head over there, and you need to find a line that contains "localhost", as that's where the config lives, for that bit of insanity.</p> <pre>grep localhost -R .</pre> <pre>./config/initializers/juggernaut.rb:ActionView::Helpers::AssetTagHelper.register_javascript_expansion :juggernaut =&gt; ["http://localhost:8080/application.js"]</pre> <p>Ah ha!</p> <p>Just need to change "localhost" to the hostname of your Holla server, save it and restart the rails app.</p> <p><strong>Good stuff.</strong></p> <p>At this point, it worked for me, except Juggernaut (the push server) wasn't running..&nbsp;</p> <p>In /etc/init (upstart's config directory), I made a couple of files "holla-app.conf" and "holla-push.conf"</p> <p><strong>holla-push.conf&nbsp;</strong></p> <pre>description "holla push server"<br />start on runlevel [2345]<br />stop on shutdown<br />exec /usr/local/bin/juggernaut --port 8080&nbsp;<br />respawn</pre> <p>and</p> <p><strong>holla-app.conf&nbsp;</strong></p> <pre>description "holla app server"<br />start on runlevel [2345]<br />stop on
shutdown<br />chdir /srv/holla<br />exec /usr/local/bin/rails server thin<br />respawn</pre> <p>&nbsp;</p> <pre>root@holla:~# start holla-app<br />root@holla:~# start holla-push</pre> <p>And the server's working on, with push message support.</p> <p>Excellent.</p> <p>I tried after this to run it with Passenger and mod_rails. &nbsp;It wasn't particularly successful, so I ditched it, and instead modified the line&nbsp;</p> <pre>exec /usr/local/bin/rails server thin</pre> <p>in <em>/etc/init/holla-app.conf</em></p> <p>to&nbsp;</p> <pre>exec /usr/local/bin/rails server thin -p 80</pre> <p>So that it runs on port 80. &nbsp;Not secure. &nbsp;Not recommended. etc.. But it does run.&nbsp;</p> <p>&nbsp;</p> That DMG ate my System Preferences <p>&nbsp;</p> <p>Well, that's certainly another strange problem.</p> <p>We have a tendency to build our own DMG images for certain bits of software we roll out here. &nbsp;Sometimes we'll incorporate our own patches, other times it's just to make the application structure more FHS compliant, and stop it "<em>shitting all over the filesystem</em>", as we so charmingly term it.</p> <p>In the past, we've used InstallEase to build a DMG and PKG installer for OSX. &nbsp;It's been pretty good up until last week, when one of our engineers built a PKG that had a nasty side effect of destroying the System Preferences once installed. &nbsp;We initially pegged this as "another weird thing about Lion", and rebuilt the image, and so on. &nbsp;</p> <p>We tested the installation again on a different computer, all the time using Munki for package deployment. 
&nbsp;</p> <p>These are the important test findings:&nbsp;</p> <p><strong>1) </strong>If you install the package interactively, it's fine.</p> <p><strong>2) </strong>If you install the older version, it's fine.</p> <p><strong>3) </strong>If you install the new version with munki, it breaks everything.</p> <p><strong>4) </strong>If you install anything else with munki, it's also fine.</p> <p>That points to a clear difference between installing interactively (with a person doing it) and an automated deployment with Munki.</p> <p>I actually had a look at the package today, and gave it a <strong><em>good hard poking</em></strong>. &nbsp;I found 2 really rather worrying things.</p> <p><strong>1) </strong>Inside the Resources directory, there's a Universal Binary called DeleteObjectHelper (Not off to a good start here..), and a file called DeleteObjectList.plist</p> <p>DeleteObjectList contains:</p> <pre>&lt;?xml version="1.0" encoding="UTF-8"?&gt;<br />&lt;!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" ""&gt;<br />&lt;plist version="1.0"&gt;<br />&lt;dict&gt;<br /><span style="white-space: pre;"> </span>&lt;key&gt;objectList&lt;/key&gt;<br /><span style="white-space: pre;"> </span>&lt;array&gt;<br /><span style="white-space: pre;"> </span>&lt;dict&gt;<br /><span style="white-space: pre;"> </span>&lt;key&gt;filePath&lt;/key&gt;<br /><span style="white-space: pre;"> </span>&lt;string&gt;Library/Preferences&lt;/string&gt;<br /><span style="white-space: pre;"> </span>&lt;/dict&gt;<br /><span style="white-space: pre;"> </span>&lt;dict&gt;<br /><span style="white-space: pre;"> </span>&lt;key&gt;filePath&lt;/key&gt;<br /><span style="white-space: pre;"> </span>&lt;string&gt;Library/Receipts&lt;/string&gt;<br /><span style="white-space: pre;"> </span>&lt;/dict&gt;<br /><span style="white-space: pre;"> </span>&lt;/array&gt;<br />&lt;/dict&gt;<br />&lt;/plist&gt;</pre> <p><strong>2) </strong>Inside the postflight (think postinst for Debian) file was the following:</p>
<pre>#!/bin/bash<br />"$1/Contents/Resources/DeleteObjectHelper" "$1/Contents/Resources/DeleteObjectList.plist" "$HOME" "$3"</pre> <p>So here's what happens if you install it by hand. &nbsp;Postflight runs after installation, and replaces $HOME with /Users/YourName. &nbsp;DeleteObjectHelper goes off and deletes the files in ~/Library. &nbsp;I assume this is to delete older versions, or something similar.</p> <p>I suspect that if you run it non-interactively, with munki, then it might run as root. &nbsp;It might also not have the environment that you'd expect as an interactive user. That means that if $HOME was '/', or nonexistent, it might default to '/'. &nbsp;</p> <p><strong>Very Bad Indeed.</strong></p> <p>The pointer to all of this was something in system.log complaining about Preferences files being missing, when trying to authenticate to our wireless. &nbsp;I've worked with *nix systems long enough to pretty much start looking in /var/log/(syslog|messages|system.log) as a matter of principle.</p> <p>So, for whatever reason, the process that builds the package is incorporating a thing for cleaning up after itself. &nbsp;Not a bad thing, but it does have a fantastic bug in it that seems to make it delete the System Preferences if it can't find $HOME. &nbsp;As it's running as root (or someone similarly powerful), it just goes ahead and deletes everything it can find.</p> <p>The questions that remain on my mind are these:</p> <p><strong>1) </strong>Why did it put that in there anyway, as none of our other hand-rolled packages have it?</p> <p><strong>2) </strong>Can we build PKG and DMG files with some kind of GNU toolchain on our Jenkins environment to stop this nonsense from happening again? (We know exactly how the build process works, etc..)</p> <p><strong>3) </strong>What on *earth* went through those developers' minds when they wrote that Evil Little Binary?</p> <p>&nbsp;</p> Puppet, Apt and our very own Thundering Herd <p>&nbsp;</p> <p>Puppet really is great.
&nbsp;Don't ever get me wrong there. &nbsp;It's saved me masses of time and money over the last few years, and allowed me to do my job quickly and efficiently. &nbsp;</p> <p>That said, it really does have issues with scalability. &nbsp;After about 20-30 clients using WEBrick, everything kinda falls over a bit.</p> <p>We had this problem at Baseblack. &nbsp;We've now got ~60 workstations and rendernodes all using Puppet for configuration management and software deployment. &nbsp;It's great. &nbsp;It vastly simplifies the process of rolling out updates and upgrades to new build machines. &nbsp;</p> <p>The problem we had was most clearly shown by the frequency with which the PSON error comes up.</p> <pre>"err: Could not retrieve catalog from remote server: Could not intern from pson: Could not convert from pson:"</pre> <p>And so on. &nbsp;</p> <p>This was always a transient error, and would go away if you ran puppet two or three (or more) times, after which it'd work fine. &nbsp;This isn't really a valid long-term solution. &nbsp;It's alright now and again, but it brings up a problem of reliability, and "how do you know when it last ran, if it can't be guaranteed to run every time?".</p> <p>I wrote a dirty wrapper script that would re-run it if it failed, and so on. &nbsp;This kinda worked a bit better, but sometimes, stuff would just not run anyway. &nbsp;</p> <p>So today, I'd had enough of this problem, and decided to do the Apache2/mod_passenger thing. &nbsp;Back in the days of Puppet 0.24/0.25x this was a bit more of a pain in the arse than it seems to be now. &nbsp;Back then, the server of choice was Mongrel; now it's Passenger/mod_rack.</p> <p>Just follow this guide:&nbsp;<a href=""></a></p> <p>It's actually a pretty good explanation of the steps, and I don't see any need to replicate the information here. &nbsp;I made a couple of slight modifications. 
&nbsp;</p> <p><ol> <li>The file is horribly outdated in the given form, and I used this one instead:</li> <li>I changed the apache2 defaults for mpm_worker so that it could handle a shitload more requests than the default.&nbsp;</li> </ol></p> <p>Incidentally, this <a href="">thing is cool</a>. &nbsp;Some guy's written an insanely simple "calculator" spreadsheet for OpenOffice and Excel that allows you to calculate decent settings for MaxClients.</p> <p>&nbsp;</p> <pre>&lt;IfModule mpm_worker_module&gt;<br />&nbsp; &nbsp; ServerLimit &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;150<br />&nbsp; &nbsp; StartServers &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;5<br />&nbsp; &nbsp; MinSpareThreads &nbsp; &nbsp; &nbsp; &nbsp; 5<br />&nbsp; &nbsp; MaxSpareThreads &nbsp; &nbsp; &nbsp; &nbsp;10<br />&nbsp; &nbsp; ThreadLimit &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 5<br />&nbsp; &nbsp; ThreadsPerChild &nbsp; &nbsp; &nbsp; &nbsp; 5<br />&nbsp; &nbsp; MaxClients &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 750<br />&nbsp; &nbsp; MaxRequestsPerChild &nbsp; &nbsp; 0<br />&lt;/IfModule&gt;</pre> <p>I wanted to be able to handle our own thundering herd of workstations and rendernodes, so that meant that the default ServerLimit had to go, and that it had to be able to handle *many* more threads than the default.</p> <p>I also moved the puppetmasterd "application" from /usr/share/puppet to /srv/puppet because in my mind (and the FHS), it makes more sense.</p> <p>There's a bit of a caveat in the process of moving that directory, in that wherever it is, it must be chowned puppet:puppet. &nbsp;After that, it's all fine.</p> <p>The problems really started for us after that. &nbsp;It worked fine with one workstation testing it, but throw 2+ at passenger, and apache tended to kill off the ruby children.&nbsp;</p> <p>The big hint was in /var/log/apache2/error.log:&nbsp;</p> <pre>[Tue Feb 21 15:50:12 2012] [alert] (11)Resource temporarily unavailable: apr_thread_create: unable to create worker thread<br /><br />[Tue Feb 21 15:50:15 2012] [error] (12)Cannot allocate memory: fork: Unable to fork new process</pre> <p>So, our puppetmaster runs on Proxmox as a VM. 
&nbsp;Proxmox is an OpenVZ virtualisation host, and as a result, it has hard limits based around the content of /proc/user_beancounters. &nbsp;This is the thing that's basically stopping Apache from spawning threads to handle the requests.</p> <p>There's a&nbsp;<a href="">page here</a> about how to remove OpenVZ's limits:&nbsp;</p> <pre>clear; cat /proc/user_beancounters<br />vzctl set 101 --tcpsndbuf 999999999:999999999 --save<br />vzctl set 101 --tcprcvbuf 999999999:999999999 --save<br />vzctl set 101 --numtcpsock 999999999:999999999 --save<br />vzctl set 101 --numflock 999999999:999999999 --save<br />vzctl set 101 --othersockbuf 999999999:999999999 --save<br />vzctl set 101 --numothersock 999999999:999999999 --save<br />vzctl set 101 --numfile 999999999:999999999 --save<br />vzctl restart 101</pre> <p>Where 101 is the ID of your VZ container. &nbsp;</p> <p>I also had to bump the allocated memory up from 512M to 2GB, and push the swap (and why not) up to 1GB. &nbsp;</p> <p>After a quick restart of the Puppet container, and restarting apache one last time, I successfully ran 56 puppetd agents. &nbsp;One on each of the nodes, all at once, without a single error.</p> <p>Sounds like success to me.</p> <p>--</p> <p>The next problem we've got that's currently holding back speedy software deployments, and apt-updates, is our apt-cacher-ng server. &nbsp;That too is a Proxmox VM, and I initially thought that the same problems might be true, with having a connection limit on OpenVZ, which would be preventing stuff from getting a connection. 
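</p> <p>A quick way to sanity-check that theory is the failcnt column in /proc/user_beancounters, which counts how often OpenVZ has actually refused a resource. Here's a rough sketch that pulls out the offending resources; the column handling is based on my reading of the beancounters layout, so treat it as illustrative rather than gospel:</p>

```shell
# Print the name of each beancounter resource whose failcnt (the last
# column) is non-zero, i.e. each limit OpenVZ has actually enforced.
# On a real OpenVZ host you'd call: beancounter_failures /proc/user_beancounters
beancounter_failures() {
    awk 'NF >= 6 && $NF ~ /^[0-9]+$/ && $NF > 0 { print $(NF-5) }' "$1"
}
```

<p>If that prints things like numtcpsock or numothersock on the apt container, the OpenVZ limits really are the culprit; if every failcnt is zero, the bottleneck is in apt-cacher-ng itself.</p> <p>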
&nbsp;</p> <p>If I run apt-get update on 50+ nodes simultaneously, the probability that some of them will error out, with something about the connection to http://apt failing, is pretty close to P(1).</p> <pre>W: Failed to fetch &nbsp;Unable to connect to apt:3142:<br />W: Failed to fetch &nbsp;Unable to connect to apt:3142:<br />E: Some index files failed to download, they have been ignored, or old ones used instead.</pre> <p>The 3142 bit is because we've got apt-cacher-ng running on port 3142, and a line in /etc/apt/apt.conf.d/ containing</p> <pre>Acquire::http { Proxy "http://apt:3142"; };</pre> <p>This is the most fool-proof way to make sure that *everything* gets cached: PPAs, and the whole kitchen sink. &nbsp;This is what we want to do, because having a full debmirror is a) wasteful of disk space, something we're always at a premium on, working in VFX, and b) expensive, especially after the Thailand floods.</p> <p>So, we're using apt-cacher-ng, and I don't personally see that changing anytime soon.&nbsp;</p> <p>Right. &nbsp;So I bumped up the limits in OpenVZ's configuration, and there's still a problem that means that the apt requests aren't getting handled.</p> <p>I suspect that, because Apt's protocol is just HTTP, it might be possible to use something like Varnish or HAProxy and a bunch of apt-cacher-ng backends. &nbsp;</p> <p>It doesn't appear that apt-cacher-ng can run with multiple threads for handling lots more requests/second.&nbsp;</p> <pre>root@apt:/# netstat -anp|grep 3142|wc -l<br />2964</pre> <p>Yah.. 
That could be a problem.</p> <p>That said, I've just tested hammering it with ab as follows:</p> <pre>tom.oconnor@charcoal-black:~$ ab -n25000 -c550 -X apt:3142 &nbsp;<br />This is ApacheBench, Version 2.3 &lt;$Revision: 655654 $&gt;<br />Copyright 1996 Adam Twiss, Zeus Technology Ltd,<br />Licensed to The Apache Software Foundation,<br />Benchmarking [through apt:3142] (be patient)<br />Completed 2500 requests<br />...<br />Finished 25000 requests<br /><br />Server Software: &nbsp; &nbsp; &nbsp; &nbsp;Debian<br />Server Hostname: &nbsp; &nbsp; &nbsp; &nbsp;<br />Server Port: &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;80<br />Document Path: &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;/mozillateam/firefox-stable/ubuntu/dists/lucid/main/binary-amd64/Packages.gz<br />Document Length: &nbsp; &nbsp; &nbsp; &nbsp;420 bytes<br />Concurrency Level: &nbsp; &nbsp; &nbsp;550<br />Time taken for tests: &nbsp; 3.127 seconds<br />Complete requests: &nbsp; &nbsp; &nbsp;25000<br />Failed requests: &nbsp; &nbsp; &nbsp; &nbsp;0<br />Write errors: &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 0<br />Non-2xx responses: &nbsp; &nbsp; &nbsp;25018<br />Total transferred: &nbsp; &nbsp; &nbsp;20990102 bytes<br />HTML transferred: &nbsp; &nbsp; &nbsp; 10507560 bytes<br />Requests per second: &nbsp; &nbsp;7994.07 [#/sec] (mean)<br />Time per request: &nbsp; &nbsp; &nbsp; 68.801 [ms] (mean)<br />Time per request: &nbsp; &nbsp; &nbsp; 0.125 [ms] (mean, across all concurrent requests)<br />Transfer rate: &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;6554.54 [Kbytes/sec] received<br />Connection Times (ms)<br />&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; min &nbsp;mean[+/-sd] median &nbsp; max<br />Connect: &nbsp; &nbsp; &nbsp; &nbsp;0 &nbsp; 16 208.9 &nbsp; &nbsp; &nbsp;1 &nbsp; &nbsp;3002<br />Processing: &nbsp; &nbsp; 1 &nbsp; 19 &nbsp;97.9 &nbsp; &nbsp; &nbsp;9 &nbsp; &nbsp;1478<br />Waiting: &nbsp; &nbsp; &nbsp; &nbsp;1 &nbsp; 19 &nbsp;97.9 &nbsp; &nbsp; &nbsp;9 &nbsp; &nbsp;1477<br />Total: &nbsp; &nbsp; &nbsp; &nbsp; 
&nbsp;4 &nbsp; 35 230.9 &nbsp; &nbsp; 10 &nbsp; &nbsp;3023<br />Percentage of the requests served within a certain time (ms)<br />&nbsp; 50% &nbsp; &nbsp; 10<br />&nbsp; 66% &nbsp; &nbsp; 12<br />&nbsp; 75% &nbsp; &nbsp; 13<br />&nbsp; 80% &nbsp; &nbsp; 13<br />&nbsp; 90% &nbsp; &nbsp; 16<br />&nbsp; 95% &nbsp; &nbsp; 20<br />&nbsp; 98% &nbsp; &nbsp;128<br />&nbsp; 99% &nbsp; 1056<br />&nbsp;100% &nbsp; 3023 (longest request)</pre> <p>&nbsp;</p> <p>25k requests, at 550 concurrency, and I still can't make it return errors.</p> <p>So it looks like the problem isn't with serving files from the cache, it's downloading new stuff at the same time, and serving simultaneously. &nbsp;</p> <p>So it's blocking. &nbsp;Well that's an epic stack o' fail.</p> <p>Here's some <a href="">more evidence</a> to back up my findings. &nbsp;They're using Squid.&nbsp;</p> <p>They're also using Fabric for orchestration. &nbsp;Intriguing.</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>More here on this one later on.. When I've actually figured it out.</p> Not Storing Files In A Database <p> <p>Originally a comment here <a href=""></a></p> <p>In the above article, Bhumi gives a method for storing files *in* the database, using MySQL and PHP.</p> <p>My personal distaste for PHP aside, I don't think I could ever find a reason to store files *in* the database, rather than *on* the filesystem.</p> <p>I'm also primarily talking about RDBMS type databases, not NoSQL, which tend to have a mechanism for storing files a little bit more sanely than "old-fashioned" databases.</p> <p>Let's take a look at this idea in a bit more depth.</p> <p>If you read the original article, there's the very basic bare bones of a web application. &nbsp;There's a form. &nbsp;There's a MySQL table definition. &nbsp;There's a bit of PHP for handling uploaded files.</p> <p>Personally, I write pretty much exclusively in Python. 
&nbsp;This isn't going to be a technical article with code samples, however; more just a look at comparing the methodologies of storing in a database to storing in a filesystem.</p> <p><strong>Why would you choose a BLOB database type over a Filesystem? &nbsp;</strong></p> <p>I think there's gotta be about one use-case for storing files in a database. &nbsp;Hang on. &nbsp;I take that back. &nbsp;There aren't any.</p> <p>Filesystems are optimised for File Access. &nbsp;Databases are optimised for row/column based tabular data access. &nbsp;The two should not be confused.</p> <p>Here's how I store uploaded files: &nbsp;</p> <p><strong>1)</strong> Upload the file to .../uploaded_files/...</p> <p><strong>2)</strong>&nbsp;Rename it to something sensible. &nbsp;</p> <p><strong>3)</strong> In the database, store the original filename, and the path to the uploaded file. &nbsp;</p> <p><strong>3a)</strong> Possibly also store some neat metadata, call magic() on it, store the size, the MD5Sum, and so on. &nbsp;</p> <p>It greatly depends on the application, but it's conceivable that some of the file metadata might need to be retrieved during the file's lifespan within the application. &nbsp;It's a lot quicker to stat the file once, and store the result, than it would be to stat it many times whenever the information is requested.</p> <p>This approach differs from database-based file storage in a few drastic ways.</p> <p><strong>1)</strong> The database contains data of a predictable size. &nbsp;I mean, it's easier to calculate and predict the size of the Table based on the known bit-widths for each column. &nbsp;Once you start introducing BLOBs into your database, all bets for size are off.</p> <p><strong>2)</strong> Database backups are smaller. &nbsp;Not having arbitrary binary data in your database means that gzipping a SQL dump is likely to be more effective than it would be if you already had gzipped (for example) binary data stored as a BLOB. 
&nbsp;<br />When you attempt to compress already compressed data, the output is frequently larger.</p> <p><strong>3)</strong> You can scale more easily by having a shared/clustered filesystem, such as Gluster.</p> <p><strong>4)</strong> If it's a website, loading files can happen outside of your webapp; simply have a media subdomain and handle files with a lightweight webserver such as lighttpd or Nginx.</p> <p>This means that your web application isn't a bottleneck for loading every damn file that's requested into memory, stripping the slashes or base64_decoding it, before streaming it back to the user.&nbsp;</p> <p>This is actually one of the most important points. &nbsp;If the data is stored in a proprietary format in the database, you can't use regular filesystem utilities to access it. &nbsp;You need to do all that in your application.&nbsp;</p> <p><strong>Why reinvent so many wheels when the OS-provided Filesystem tools are all so outstanding?</strong></p> <p>How would you stat the file inside a database? Write it out to /tmp/tmpXXXXX and then stat that, before deleting the temporary file?</p> <p>Sounds a bit slow, to me.</p> <p>What if the file is Huge? How long could that take? What if the uploaded file is bigger than your system RAM? Surely it'd make sense to be able to handle multiple large files...</p> <p>What if your application breaks? Could it silently corrupt files on their way in / out? What if Something Bad Happens, and your database is partially corrupted? &nbsp;Would all the files potentially be corrupted? What about recovery on a non-application-serving system? &nbsp;Could be tricky, potentially.. &nbsp;Certainly more tricky than just rsyncing files around.</p> <p>See what I mean? &nbsp;</p> <p>You *can* store files in a database. 
&nbsp;</p> <p>Doesn't mean you should.</p> </p> Building updated packages for sun-java6 6u30 <p>Firstly, welcome back.</p> <p>It's now 2012, and there's lots more to write about.</p> <p>&nbsp;</p> <p>Recently, Oracle withdrew the ability for Linux distributions to repackage Java and distribute their own packages. &nbsp;This has been widely regarded as a bad idea. &nbsp;I tend to agree.</p> <p>So, let's re-roll an old sun-java6 deb file, with new content containing the latest 6u30 Java release.</p> <p>You will need:&nbsp;</p> <p>&nbsp;</p> <ol> <li>A set of build packages (I've got a set for lucid, so if this goes away, I'll find some way to host them.) from</li> <li>The latest Java packages:<a href="">&nbsp;</a> and&nbsp;<a href=""></a></li> <li>dch. &nbsp;Just install the devscripts package to get this.&nbsp;</li> <li>Some idea of how packaging on Debian/Ubuntu works.</li> </ol> <p>&nbsp;</p> <p><strong>Let's get started.</strong></p> <pre>mkdir package-build<br />cd package-build<br />wget<br />wget<br />wget<br />wget<br />wget<br />tom.oconnor@charcoal-black:~/package-build$ ls -1<br />jdk-6u30-linux-x64.bin<br />jdk-6u30-linux-i586.bin<br />sun-java6_6.26-2lucid1.debian.tar.gz<br />sun-java6_6.26-2lucid1.dsc<br />sun-java6_6.26.orig.tar.gz<br />tom.oconnor@charcoal-black:~/package-build$ dpkg-source -x *.dsc<br />gpgv: Signature made Tue 13 Dec 2011 22:31:53 GMT using RSA key ID CC559573<br />gpgv: Can't check signature: public key not found<br />dpkg-source: warning: failed to verify signature on ./sun-java6_6.26-2lucid1.dsc<br />dpkg-source: info: extracting sun-java6 in sun-java6-6.26<br />dpkg-source: info: unpacking sun-java6_6.26.orig.tar.gz<br />dpkg-source: info: unpacking sun-java6_6.26-2lucid1.debian.tar.gz<br />tom.oconnor@charcoal-black:~/package-build$ cd sun-java6-6.26/<br />tom.oconnor@charcoal-black:~/package-build/sun-java6-6.26$ ls<br />debian &nbsp;jdk-6u26-dlj-linux-amd64.bin &nbsp;jdk-6u26-dlj-linux-i586.bin<br
/>tom.oconnor@charcoal-black:~/package-build/sun-java6-6.26$ rm *.bin<br />tom.oconnor@charcoal-black:~/package-build/sun-java6-6.26$ ../jdk-6u30-linux-i586.bin jdk-6u30-dlj-linux-i586.bin<br />tom.oconnor@charcoal-black:~/package-build/sun-java6-6.26$ ../jdk-6u30-linux-x64.bin jdk-6u30-dlj-linux-amd64.bin<br />tom.oconnor@charcoal-black:~/package-build/sun-java6-6.26$ vim debian/rules</pre> <p>Head down to the block <em># check if the sources are the "same"</em></p> <p>Then find the block following it, and comment it out, so you get this:</p> <pre># &nbsp; &nbsp; &nbsp; : # check if the sources are the "same"<br /># &nbsp; &nbsp; &nbsp; set -e; set -- $(all_archs); a1=$$1; shift; \<br /># &nbsp; &nbsp; &nbsp; unzip -q -d tmp-$$a1/src $$a1-jdk/; \<br /># &nbsp; &nbsp; &nbsp; for a2; do \<br /># &nbsp; &nbsp; &nbsp; &nbsp; unzip -q -d tmp-$$a2/src $$a2-jdk/; \<br /># &nbsp; &nbsp; &nbsp; &nbsp; echo "Comparing sources: tmp-$$a1/src tmp-$$a2/src ..."; \<br /># &nbsp; &nbsp; &nbsp; &nbsp; echo " &nbsp; &nbsp;diff -ur $(diff_ignore)"; \<br /># &nbsp; &nbsp; &nbsp; &nbsp; diff -ur $(diff_ignore) tmp-$$a1/src tmp-$$a2/src; \<br /># &nbsp; &nbsp; &nbsp; done</pre> <p>Save that file, and then run:</p> <pre>dch -v 6.30</pre> <p>This will create a changelog entry for version<em> 6.30</em>, and open <strong>$EDITOR</strong> to edit the changelog entry.&nbsp;</p> <p>Enter a stub entry..&nbsp;</p> <p>I put something like&nbsp;</p> <pre>* Updating internal contents to 6u30</pre> <p>.. 
There's some output, but you can ignore this.</p> <pre>dch warning: New package version is Debian native whilst previous version was not<br />dch warning: your current directory has been renamed to:<br />../sun-java6-6.30<br />dch warning: no orig tarball found for the new version.</pre> <p>&nbsp;</p> <pre>tom.oconnor@charcoal-black:~/package-build/sun-java6-6.26$ cd ..<br />tom.oconnor@charcoal-black:~/package-build$ cd sun-java6-6.30/<br />tom.oconnor@charcoal-black:~/package-build/sun-java6-6.30$ dpkg-buildpackage -b -uc</pre> <p>... LOTS OF STUFF ...</p> <pre>tom.oconnor@charcoal-black:~/package-build/sun-java6-6.30$ cd ..<br />tom.oconnor@charcoal-black:~/package-build$ ls<br />ia32-sun-java6-bin_6.30_amd64.deb &nbsp; &nbsp; sun-java6_6.26-2lucid1.dsc &nbsp;sun-java6-6.30 &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;sun-java6-bin_6.30_amd64.deb &nbsp; sun-java6-fonts_6.30_all.deb &nbsp; sun-java6-jdk_6.30_amd64.deb &nbsp;sun-java6-plugin_6.30_amd64.deb<br />sun-java6_6.26-2lucid1.debian.tar.gz &nbsp;sun-java6_6.26.orig.tar.gz &nbsp;sun-java6_6.30_amd64.changes &nbsp;sun-java6-demo_6.30_amd64.deb &nbsp;sun-java6-javadb_6.30_all.deb &nbsp;sun-java6-jre_6.30_all.deb &nbsp; &nbsp;sun-java6-source_6.30_all.deb</pre> <p>&nbsp;</p> <p>Woo. Debs.</p> <p>What you want to do with them now is up to you. &nbsp;Next blogpost, I'm going to go over creating a package repository with reprepro.</p> <p>&nbsp;</p> <p>Thanks &nbsp;to <a href="">@mibus</a> for his similar <a href="">article</a>, which this is based partially on.&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> <p>&nbsp;</p> 2011: Personal Retrospective <p> <p><strong>This post has been abridged / redacted to preserve the identity of individuals mentioned. &nbsp;If you know me well enough, ask and I might show you the unredacted version. &nbsp;Or you might be able to figure it out yourself.&nbsp;</strong></p> <p>&nbsp;</p> <p>My, what a year it's been. 
&nbsp;Mostly good, but pricked throughout with sadness on occasion.</p> <p>Such a lot has happened: I've changed jobs twice. &nbsp;I left a good Infrastructure Engineer job in July for a promise of a better job at another company. &nbsp;I did 3 months of hard graft on a new infrastructure project there, only to find they'd chosen to make me redundant, and terminate my contract at 3 months. &nbsp;Bit of a bugger that, and the way it was phrased to me made it feel like it was a personal attack. &nbsp;The line I was given by the CTO was "You're not a good fit for the team", but that was actually bollocks, because everyone in the team confirmed my feelings: that I was actually a good fit for the team. &nbsp;A little stubborn, but that's a good quality in an engineer, I feel.</p> <p>That was on the 14th of October..&nbsp;</p> <p>Turns out that my "<em>redundancy</em>" / <em>"contract termination"</em> / "<em>whatever</em>" came at a time when they also got rid of 2 product managers and a senior developer. &nbsp;Such is life. &nbsp;</p> <p>So, I spent about 3 weeks job hunting, with various interviews ranging from "yes, that sounds really cool" to "<em>eugh. Massive company, sounds too boring for words</em>". &nbsp;I picked up about a week's worth of contract work related to my former infrastructure job, and that kept me entertained for long enough.&nbsp;</p> <p>Soon enough, a good offer at a decent company turned up. &nbsp;I heard from my former colleague at the 3 month place that a cool VFX company were looking for a DevOp engineer of my type of background. &nbsp;</p> <p>I applied, and was interviewed twice. &nbsp;It turns out that I'd previously been recommended to them by a guy who'd interviewed me in the past, so it felt like it really was all meant to be. &nbsp;</p> <p>I started there in the first week of November.. It feels like it was much longer ago, for some reason. Probably because I've been so impossibly busy. 
&nbsp;Still, that beats the original infrastructure job where I was impossibly bored.</p> <p>There's lots of new infrastructure stuff to do there, and we're doing lots of stuff with cool tech. &nbsp;I'm not entirely sure what I can talk about, so I'll just say that it's all insanely cool.</p> <p>This is good, because I was starting to feel quite bored of working for the same kinds of Web2.0 companies. &nbsp;Scalability, yeah, very buzzwordy for 2011, and important nonetheless, but once you've solved the problem, it feels very self-similar when you come to do it again for a different client or a different employer. &nbsp;</p> <p>There will always be the same problems, namely <strong>Time</strong>, <strong>Money</strong> and <strong>Knowledge</strong>. &nbsp;Scaling a system takes time, the hardware and servers cost money, and the developers need to know how to make their code performant.</p> <p>I don't think I'd go so far to say that I'm bored of Web2.0, but I'm certainly bored of the cheapskate mindset. &nbsp;I've said numerous times, both online and IRL, that if you want to play with the big boys, it's gonna cost you. &nbsp;There's some things you just can't do for free. &nbsp;Building decent software is definitely one of them.</p> <p>So. I'm working for a &nbsp;VFX / post-production company. &nbsp;They did all sorts of insanely crazy VFX for some recent films. &nbsp;It's truly cool. &nbsp;There's plenty of *new* challenges, and very little of the same old web stuff. &nbsp;This is good. &nbsp;Sometimes all you need to keep you interested is a sea change. &nbsp;It certainly worked for me.&nbsp;</p> <p>Until the 24th of October, my partner and I had been in a relationship (and also sharing a flat in Chiswick). &nbsp;It had become pretty apparent that we both wanted our own space, and the relationship in its current format had become pretty much impossible. 
&nbsp;I think it was all brought home to me when I wanted to start dating again, and to some extent, it felt like he already had. &nbsp;I got home from a trip a few hours earlier than planned, and there was some random guy on the sofa. &nbsp;I jumped in the shower, and by the time I got out, both of them had vanished, without saying a word. <em>&nbsp;Definitely an uncomfortable moment.&nbsp;</em></p> <p>I spent a very long time trying to rationalise the feelings that I'd once had, and whatever was left of them. &nbsp;I came to the eventual conclusion that what I really wanted was a companion. &nbsp;Someone I could chat to now and again, but not have any requirement to, y'know, bugger them.</p> <p>When it came down to it, this was pretty much what the relationship had become after a couple of years. &nbsp;We'd done <em>Open Relationship Rules </em>from the outset, initially because he was living in Spain, which made everything a bit tricky, but even when we were living together, in the same flat, sharing the same bed, we'd kept those rules because they worked better for us. &nbsp;</p> <p>I think that should probably have been a bit of a better warning sign. &nbsp;I suspect in future, I'll be more aware.</p> <p>So we broke up, and went our separate ways. &nbsp;I moved out, and found a new flat.. A somewhat bigger flat, and a new flatmate. &nbsp;An incredibly nice guy, and we do get on well. &nbsp;And I'm glad that I'm not living alone. &nbsp;I think I'd have ended up feeling pretty damn lonely. &nbsp;I mean, there's nights when Alex isn't home, and it's quiet, and I think that if I were living like that all the time, I'd end up pretty bored and probably self-destructive. &nbsp;So yeah, it's good to have a new place, and a new flatmate. &nbsp;</p> <p>I gather my ex is sticking in Chiswick in the old flat. 
&nbsp;He took over my share of the rent, but I suspect he'll fulfil his long-held dream of living around the corner from his office.&nbsp;</p> <p>I tried dating a guy back in January, whilst I was still technically in a relationship, albeit an open one, with my ex. That turned out a bit weird in the end, as I was falling head-over-heels in love with him, and he wasn't feeling the same way. At least, not to the same extent. &nbsp;That was weird. &nbsp;I don't think I'd been in that position for a very long time, and as such, I'd forgotten just how hard it was to deal with. &nbsp;One of the things that makes us human, I suspect, is the ability to reflect on past experiences and imagine how alternative paths might have turned out.<br />In some regards, this is a good ability. &nbsp;Truth be told, it's a bitch, and hindsight is a killer when it comes to that kind of thing.&nbsp;</p> <p>Still, he seems happier now with his new boyfriend. &nbsp;I suppose at the end of the day, that's all you can really ask for: the best for your friends.</p> <p>After my long-term ex and I broke up, I tried dating again, again. &nbsp;Possibly too soon, as I still found that I was craving solace in being alone. &nbsp;Not the best thing if you're trying to make a new relationship (albeit, only dating) work. &nbsp;I don't know what that experience has taught me. &nbsp;Possibly that I'm a busy bugger, and I'm happiest when I'm busy, and that potentially higher-maintenance individuals are unlikely to be a good fit for me. &nbsp;Perhaps it's nothing of the sort, and I was just trying too hard, too soon.</p> <p>&nbsp;</p> <p>So far, it sounds like it's been a pretty depressing and gloomy year, perhaps.&nbsp;</p> <p>But I've done some insanely cool stuff. &nbsp;I went up the BT Tower, and took photos. Photography has played a pretty big part of my life in 2011, too. &nbsp;I got back into film photography with a Nikon F301 and a Nikon F5. 
&nbsp;Perhaps next year I'll buy a video camera, and branch out in that regard too.</p> <p>I went to a couple of awesome Winter parties, in a stunning tuxedo (that I now own). I've been blogging and writing technical articles a lot more in 2011 than ever before. &nbsp;I've got a Most Valued Blogger award and republishing arrangement from an online journal, and I'm looking to write some guest articles for some technical magazines in 2012. &nbsp;I'd also like to think about writing a book in the new year.&nbsp;</p> <p>I'm going to summarise 2011 as a number of small setbacks, but a persistent push forward against adversity, between job problems and the woes of the end of a long term relationship. &nbsp;It's not all doom and gloom, and I know it could have been a lot worse of a year. &nbsp;But from my reasonably comfortable life, some changes that might seem small to others are quite large and affect me in different ways.&nbsp;</p> <p>I find myself reminded once again of the phrase from Ulysses.</p> <p><em>"To strive, to seek, to find, and not to yield"</em></p> <p>I first saw this phrase gracing the doorway of the Mechanical Engineering department at Birmingham University, and for me, it's always struck a chord.</p> <p>So that's about it. &nbsp;The personal reflection and retrospective on 2011. &nbsp;Here's to a better year in 2012.</p> </p> Twitter and their REST blunder <p> <p>Hah. &nbsp;So it's New Year's Eve. &nbsp; Twitter is down, and has been for about 3-4 hours. &nbsp;That's because the NYE celebrations have already started. &nbsp;Somewhere on the other side of the world.</p> <p>I include Twitter on my personal website. &nbsp;The one you're reading. &nbsp;There's a template tag that displays my five latest tweets.&nbsp;</p> <p>About 2-3 hours ago, I got some error reports about XML parse errors on that template tag. &nbsp;I use and pull in the XML feed for parsing. 
&nbsp;No problems there, it's always worked.</p> <p>It relies on testing that the status from Twitter was sensibly formed. &nbsp;In order for that to happen, the request has to return HTTP 200 OK. &nbsp;That's cool. &nbsp;That's easy, and it's one of the principal tenets of RESTful APIs. &nbsp;</p> <p>Until tonight, it seems.&nbsp;</p> <pre>curl -I ';count=5'<br />HTTP/1.1 200 OK<br />Content-Type: text/html; charset=UTF-8<br />Content-Length: 4968<br />Set-Cookie: k=; path=/; expires=Sat, 07-Jan-2012 16:25:06 UTC;; httponly<br />Date: Sat, 31 Dec 2011 16:25:06 UTC<br />Server: tfe<br />Twitter is currently down for maintenance.<br />We expect to be back soon. For more information, check out Twitter Status &raquo;<br />Thanks for your patience!</pre> <p>So what we find is that Twitter are returning HTTP 200 OK, for a status which is blatantly not what I asked for.</p> <p><strong>This is BAD.</strong> &nbsp;This is bad for two reasons, mostly.</p> <p><ol> <li>Web developers and engineers rely on HTTP status codes to actively represent the output of the service, and also to provide a machine-readable representation of the webpage returned.</li> <li>One of the principal tenets of RESTful APIs is that the status returned should accurately represent the state of the service.&nbsp;</li> </ol></p> <p>There's nothing to stop you returning an HTML response with an HTTP 5xx status code. &nbsp;It's better to do that, in fact. &nbsp;The user sees something pretty, and the machine-readable representation says "Er, this service is fucked."</p> <p>But if you return 200, and encapsulate a non-standard error message inside an arbitrary block of HTML, it makes machine parsing incredibly difficult.</p> <p>So I'm turning off the latest_tweets template tag for a bit. 
&nbsp;At least until Twitter read this, apologise, and start returning decent HTTP status codes.</p> <p>Come on guys, you're one of the flagship startups of Web 2.0; if we can't look up to you to set a good example, what hope does everyone else have?&nbsp;</p> </p> A manifest for Agile DevOps <p>&nbsp;</p> <p>I&rsquo;ve decided. &nbsp;We need to start doing <a href="">points poker</a> here at Baseblack if we&rsquo;re going to carry on this Agile DevOps thing.&nbsp;</p> <p>I&rsquo;ve got to admit, the first time I came across the Agile methodology was quite late in my career. &nbsp;In the past, prioritisation of &ldquo;operations&rdquo; projects was largely first come, first served, or by order of priority (frequently business need, and seldom operational requirement).</p> <p>For software development teams, Agile is a pretty good, native fit. &nbsp;The concepts embodied by stories and sprints fit a development team very cleanly. &nbsp;When it comes to systems administration and engineering, or what I&rsquo;ve come to refer to as DevOps, Agile can be a bit more awkward initially.&nbsp;</p> <p>Operations teams across the globe will tell you that their tasks are intrinsically more &ldquo;sprawly&rdquo;, and that interconnections between tasks are frequently more complex. &nbsp;</p> <p>The truth of the matter is that frequently there is no simple and sensible way to break up a task into entirely unconnected subtasks. &nbsp;That's something which can bugger up Agile, if you&rsquo;re too hard and fast with the requirements and rules by which you play the game.</p> <p>Pretty early on in this new job, I started looking at the previous DevOp Engineer&rsquo;s puppet manifests. They were *mostly* ok, but with some absolute crazy meatballs thrown in for good measure. 
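</p> <p>When you inherit a manifest tree like that, it helps to get a quick inventory before deciding what to keep. Here's a rough, grep-based sketch (a hypothetical helper, not a real tool) that lists every class defined in a tree of .pp files, and where each one lives:</p>

```shell
# List each puppet class definition in a manifest tree, with file and
# line number, to get a feel for what you've been handed.
# (Plain GNU grep/sed; doesn't need puppet itself installed.)
audit_classes() {
    grep -rn --include='*.pp' '^[[:space:]]*class[[:space:]]' "$1" |
        sed 's/[[:space:]]*{.*//'
}
```

<p>It won't catch everything (defines, templates, and inline hackery still need eyeballing), but it's a fast first pass at separating the sane modules from the meatballs.</p> <p>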
&nbsp;</p> <p>It&rsquo;s actually a common fault of Sysadmins to want to throw out the previous team&rsquo;s work and start afresh, but in this case it really was easier to start fresh than repair the foibles and cockups of the old code. &nbsp;&nbsp;</p> <p>I&rsquo;d already spent 3-4 days reading and trying to interpret the state of the system, and it was blatantly apparent that there were too many bits of &ldquo;wouldn&rsquo;t it be cool if we hacked this in to make it do X&rdquo;, and not enough actual hard and fast config to make things work. &nbsp;</p> <p>I&rsquo;ll put that one down to my predecessor not being very puppet-savvy.</p> <p>One of the big reasons this sprint overran was that the discovery process (first 3-4 days) mostly involved exploring the state of the systems, and what we wanted to accomplish. &nbsp;In the old manifests, there were huge chunks of code that installed numerous applications, which would be easier to manage and integrate if modularised. &nbsp;</p> <p>A good proportion of time in the implementation phase went into creating lots of individual modules for various applications and packages.</p> <p>As I was saying earlier about interconnected tasks, this wasn&rsquo;t just a Fix Puppet sprint.</p> <p>The background to fixing puppet was to enable the faster building of new machines from unboxing to users logging in. &nbsp;</p> <p>There were some massively weird problems with the internal DNS, using Bind9, and the old DHCP server was prone to some peculiar lease issues, and it was running on a physical VM host, when it probably ought to have been a VM guest. &nbsp;Fixing DNS would best be done whilst fixing DHCP. &nbsp;Fixing DNS meant installing PowerDNS, which in turn meant installing Postgresql.
Setting up DNS Slaves means installing PowerDNS on multiple servers and configuring Postgres replication.</p> <p>There&rsquo;s no way that I&rsquo;m building out multiple copies of anything without Puppet, so there&rsquo;s the first bit of a recursive loop.</p> <p>The way to untangle this is to realise that puppet<a href=""> doesn&rsquo;t need a puppetmaster</a> to run manifests. &nbsp;All you need to do is write the puppet configs and then use the puppet agent itself to run the manifests from files. &nbsp;You can then use that to bootstrap a puppetmaster, or a DNS server, or just get a sense of how it will all fit together when you do the final server buildout.</p> <p>I&rsquo;m going to leave this here. &nbsp;I think the general conclusions to draw are the following.&nbsp;</p> <p>1) Agile is great. &nbsp;It doesn&rsquo;t fit all teams, but it&rsquo;s worth trying. &nbsp;If it doesn&rsquo;t fit, no worries. &nbsp;If it does, cool.</p> <p>2) Planning is the biggest stage of any project, or at least, it should be.&nbsp;</p> <p>3) Infrastructure projects shouldn&rsquo;t be forced into the traditional Agile Sprint, because they tend to become a lot more sprawly on investigation of the actual problem than they look at first glance.</p> <p>I&rsquo;m about to post the articles on <a href="/blogish/postgres-replication-91">Postgres replication</a>, and the<a href="/blogish/low-level-infrastructure-puppet-dns-and-dhcp/"> technical portion</a> of this article.</p> <p>&nbsp;</p> Low-level Infrastructure: Puppet, DNS and DHCP <p>&nbsp;</p> <p>Right. &nbsp;Let&rsquo;s have a look at the massive technical implications of the Fix Puppet idea.&nbsp;</p> <p>As I mentioned in my <a href="/blogish/manifest-agile-devops/">earlier blogpost</a>, in order to fix puppet in a sensible way, we&rsquo;ll have to review all, and overhaul some, of the underlying infrastructure that allows it all to run.</p> <p>The interlinks and dependencies between all the parts are a little tricky to visualise.
&nbsp;So, here&rsquo;s a picture.</p> <p><img id="plugin_obj_142" title="Picture - massive directed graph of dependencies" src="/media/cms/images/plugins/image.png" alt="Picture - massive directed graph of dependencies" /></p> <p>Anything in red needs attention, and the stuff in green *just works*. &nbsp;Things in blue are install stages, and these are what we&rsquo;re working on making perfect.</p> <p>Right, so we&rsquo;ve basically got a directed graph, representing the steps and stages that have to happen to a new machine before users can log in.&nbsp;</p> <p><strong>The steps taken to build a machine roughly look like this:</strong></p> <p>&nbsp;</p> <ol> <li>Unbox.</li> <li>Plug in.</li> <li>Configure Netboot.</li> <li>Hand MAC Address to DHCP server and assign a hostname.</li> <li>Client PXE boots.</li> <li>Client downloads a preseed file.</li> <li>Client installs itself.</li> <li>Client reboots.</li> <li>Puppet runs on first boot.</li> <li>Puppet completes.</li> <li>Client reboots again.</li> <li>Users log in.</li> </ol> <p>&nbsp;</p> <p>That&rsquo;s about it, really. The first 4 steps are a hell of a lot easier with the support and co-operation of the supplier. &nbsp;It&rsquo;s nice to have systems preconfigured to PXE boot as the BIOS default, and even cooler if they can send the MAC addresses as labels on each physical machine.</p> <p>If we&rsquo;re going to build out a new infrastructure, we&rsquo;re going to need to review and reinstall the servers that provide this infrastructure, before we can build any workstations.</p> <p>I&rsquo;m a massive massive fan of puppet, and believe that it should be used for the configuration of all servers and workstations.
&nbsp;As such, I didn&rsquo;t want to rebuild anything without using puppet, so the first step had to be getting puppet working again.</p> <p><strong>So, without further ado, let&rsquo;s take a look at the Puppet portion of this, well, one of them.</strong></p> <p>My predecessor saw fit to define all nodes with puppet-dashboard, which is itself a fine piece of software, but I think it&rsquo;s more for reporting than specification. &nbsp;</p> <p>Initially, at least, I rebuilt the puppet manifest from a <a href="">known-good configuration</a>. &nbsp;Namely the base configs I wrote for a blogpost about a year ago; base configs that I&rsquo;m going to update soon.</p> <p>I&rsquo;m a bit of an old-fashioned puppet user. &nbsp;I like my nodes defined in nodes.pp, not some External Node Classifier service. &nbsp;<br />Reason being, I like to be able to look in one place and find exactly what I want. &nbsp;It&rsquo;s not a massive ballache to clone down the puppet git repo, make a change and push it back up.</p> <p>In fact, it&rsquo;s better than having a web interface for your node classifications, because git provides you with an intrinsic log of what was changed, and it&rsquo;s easy to revert to an old version, because everything&rsquo;s stored in source control. &nbsp;</p> <p>You can also test what you&rsquo;re about to do, because again, it&rsquo;s just a source control repo. &nbsp;I&rsquo;m a fan of having <a href="">Jenkins</a> run a few sanity checks on your puppet repo, but that&rsquo;s a digression for another blogpost.</p> <p>I&rsquo;m not going to go into great depth about how to install DHCP and DNS, and how to make it work with puppet, at least, not here.
&nbsp;</p> <p>What I will say, though, is that <a href="">Puppet Module Tool</a>&nbsp;is the most fantastically easy way to generate boilerplate modules for puppet.</p> <p>All you need to do is run</p> <pre>puppet-module generate tomoconnor-dhcp </pre> <p>and you get a full puppet module folder called tomoconnor-dhcp which contains all the structure according to the best practice guidelines.</p> <p>&nbsp;</p> <p>Excellent.</p> <p>As part of the review process, it became quite apparent that <strong><em>Bind9</em></strong> has no sensible admin/management interface, or at least, there wasn&rsquo;t one installed, and frankly, anything that has such horrific config files should be shot.</p> <p>Having had good experience and results using <a href="">PowerDNS</a> in the past, we decided that this would be a valid upgrade from BIND.<br />PowerDNS relies on an SQL backend for storing the record data. &nbsp;</p> <p>You can use either <a href="">MySQL</a> or <a href="">PostgreSQL</a>, or possibly some others. &nbsp;Since MySQL can be a bitch, and is, for all serious purposes, a toy database, Postgres seems like a better choice. &nbsp;9.1 is stable, and there are deb packages available for it. &nbsp;9.1 also does <a href="/blogish/postgres-replication-91/">hot-standby replication</a>, which is a miracle, because Postgres replication used to be a massive pain in the testicles.</p> <p>There were, initially, some mysterious problems with the TFTPd server being generally crappy, mostly regarding timeouts, which was because the storage of the TFTP data was on a painfully slow disk. &nbsp;Moving it from there to the NFS mount dramatically increased performance and stopped TFTP going crazy.</p> <p>In the TFTPd config, there's a block for configuring the boot options of the preseed install.
&nbsp;This is how PXE hands over the details of the preseed server, and the classes of preseed file to run (basically, which modules)</p> <p> <pre>label lucid_ws<br />&nbsp; &nbsp; &nbsp; &nbsp; menu label ^2) Auto Install Ubuntu Lucid WorkStation<br />&nbsp; &nbsp; &nbsp; &nbsp; text help<br />&nbsp; &nbsp; &nbsp; &nbsp; Start hands off install of a workstation.<br />&nbsp; &nbsp; &nbsp; &nbsp; endtext<br />&nbsp; &nbsp; &nbsp; &nbsp; menu default<br />&nbsp; &nbsp; &nbsp; &nbsp; kernel ubuntu-1004-installer/amd64/linux<br />&nbsp; &nbsp; &nbsp; &nbsp; append tasks=standard pkgsel/language-pack-patterns= pkgsel/install-language-support=false vga=normal initrd=ubuntu-1004-installer/amd64/initrd.gz -- quiet auto debian-installer/country=GB debian-installer/language=en debian-installer/keymap=us debian-installer/locale=en_GB.UTF8 netcfg/choose_interface=eth0 netcfg/get_hostname=ubuntu netcfg/ url=http://autoserver/d-i/lucid/preseed.cfg classes=wibblesplat;workstation DEBCONF_DEBUG=1</pre> </p> <p>Initially, the Preseed files contained all sorts of crazy hacky shit in the d-i late-command setting. &nbsp;</p> <p><br /><strong><em>late-command</em></strong> is cool. &nbsp;It&rsquo;s basically the last thing to run before the first reboot when you build a new debian/ubuntu system. &nbsp;You can tell it to do all sorts of stuff in there. &nbsp;You probably shouldn&rsquo;t, though. 
&nbsp;Especially when what you&rsquo;re doing in there is better done elsewhere.</p> <p>The previous Preseed file contained a whole bunch of &ldquo;inject these source files into <em>/etc/apt/sources.list</em>&rdquo;, which is utter bullshit, because you can do exactly the same thing with d-i local repositories, only far, far cleaner.</p> <p>That&rsquo;s not to say that my refactored preseed files don&rsquo;t use late-command at all.</p> <p>I&rsquo;ve chosen to insert some lines into <em>/etc/rc.local</em>&nbsp;on the freshly built system that ensure a puppet run at first boot.&nbsp;</p> <p>On the preseed server, there&rsquo;s a file called &ldquo;<strong><em></em></strong>&rdquo; which gets dropped into /usr/local/bin by way of a wget command in late-command.&nbsp;</p> <p>The next thing that happens in late-command is a line to remove &ldquo;exit 0&rdquo; from /etc/rc.local and replace it with a line that calls &ldquo;<strong><em>/usr/local/bin/</em></strong>&rdquo;</p> <p>When firstboot runs, it runs puppet, checks for sanity, and then removes itself from /etc/rc.local.</p> <p>The code to actually do that looks like this:</p> <pre>d-i preseed/late_command string &nbsp;\<br />wget -q -O /target/root/ http://autoserver/d-i/bin/ &amp;&amp; \<br />chmod +x /target/root/ &amp;&amp; \<br />sed -i 's_exit 0_sh /root/firstboot.sh_' /target/etc/rc.local</pre> <p>This relies on having something on http://autoserver that is basically just Apache hosting some files for the preseeder to retrieve during installation.</p> <p>&nbsp;Cool, huh?&nbsp;</p> <p>That ensures that the first thing that happens once the new machine has been built and rebooted, is a puppet run.</p> <p>Some stuff we do here relies on our hand-rolled deb packages, which are stored in our own, internal APT repo.
&nbsp;We&rsquo;ve also got an APT cache, created and maintained by <a href="">apt-cacher-ng</a>, which at least means that when you&rsquo;re rebuilding systems frequently, all the packages you would otherwise download from <strong></strong> come straight over the LAN. &nbsp;</p> <p>The major problem initially with this was the speed, or lack thereof. &nbsp;It certainly wasn&rsquo;t performing anywhere near speeds you&rsquo;d expect from a 1GE LAN, and the reason was, again, slow disks. Moving the apt-cache files to the NFS high-speed storage again helped performance. &nbsp;If we struggle in future, I&rsquo;m going to look at an SSD cache for this, but I think that the performance of the SAS/SATA disks on massively parallel storage provided by our NFS servers will be adequate for the foreseeable future.</p> <p>Next up, the Puppetmaster. &nbsp;Again, I was pretty keen on building this from scratch, but using puppet itself to configure its own master. &nbsp;Sounds pretty counter-intuitive, right? But the puppet client can bootstrap the master quite easily by using files as its source. &nbsp;</p> <p>The first step is to clone down the latest puppet manifests from git, so you either need to git export elsewhere, or install git-core.
&nbsp;Your choice.</p> <p>Once you&rsquo;ve got those, all you need to do is install puppet-client, and run:</p> <pre> puppet apply /path/to/your/manifests/site.pp</pre> <p>If you&rsquo;ve written the manifests right, and you&rsquo;ve got your master defined as a node, you should find that puppet will install puppetmaster, and so on, and then you get a ready and working puppetmaster that just configured itself.</p> <p>I used the puppet-module tool to generate modules for the following services/items: &ldquo;applications&rdquo; - which actually contains a bunch of custom/proprietary application install rules; a declassified example is a googlechrome.pp file that installs Chrome from a PPA.</p> <p>Other modules: dhcp, kernel, ldap, network, nfs, nscd, ntp, nvidia, postgres, powerdns and ssmtp.</p> <p>As is the trend with puppet, and modern DevOps, the vast majority of the code in the entire manifest repository has been gleaned and researched from other puppet modules on github. Acknowledgement is in place where it&rsquo;s due, and the working copies we&rsquo;re using are frequently forked on github from the original.</p> <p>It&rsquo;s great, this, actually. &nbsp;If you search on PuppetForge the array of modules available is staggering. &nbsp;It makes bootstrapping a new manifest set remarkably quick and easy.</p> <p>The NFS module contains a bunch of requirements for mounting NFS shares, and the definitions for an NFS share to be mounted. &nbsp;All pretty simple stuff, but modularised for ease of use.</p> <p>I&rsquo;m particularly proud of the postgres module, which has a master class and a slave class, which install and configure the required files and packages to enable streaming hot-standby replication on Postgres 9.1.</p> <p>I will release the declassified fork of this soon.</p> <p>I&rsquo;m going to wrap this post up here. &nbsp;It&rsquo;s a massively long one, and there&rsquo;s still lots more left to write.
&nbsp;</p> <p>&nbsp;</p> Postgres Replication on 9.1 <p> <p>Our new PowerDNS cluster (of 2 nodes, so far) is backed by PostgreSQL. &nbsp;</p> <p>In the past, I&rsquo;ve found that Postgres performs far better as a PowerDNS backend than MySQL, and certainly better than the BIND, LDAP or SQLite backends.</p> <p>Until version 9.x, Postgres replication was a pretty sorry state of affairs. &nbsp;There were a few options for replication.&nbsp;</p> <p>Slony was commonly used, if not very good. &nbsp;You&rsquo;d tend to get a horrific SPoF around the single master. &nbsp;In total, there were 9 or 10 different third party solutions for Postgres replication and clustering. &nbsp;They all had their pros and cons, and some were great, and some were downright awful. &nbsp;</p> <p>In 2008, the Postgres core team started to bring replication and clustering into the fold with the rest of the features of Postgres, and now, in 9.x, the options of hot and warm standby are both available, and stable.</p> <p>There&rsquo;s a comprehensive writeup of the history of Postgres replication here: <a href=",_Clustering,_and_Connection_Pooling">,_Clustering,_and_Connection_Pooling</a></p> <p>One of the things I adore about the hot-standby replication mode is that the basic configuration (/etc/postgresql/9.1/main/postgresql.conf) is identical between master and standby.</p> <p>This makes puppeting insanely easier than it would&rsquo;ve been if the master and standby had to have largely different configuration files.</p> <p>I changed about 5 config settings in the main config file.</p> <pre>listen_addresses = '*'<br />wal_level = hot_standby&nbsp;<br />max_wal_senders = 5<br />wal_keep_segments = 32<br />log_destination = 'syslog'</pre> <p>^^ I only changed the log_destination to make centralised logging easier in future.</p> <p>There&rsquo;s a limited change to pg_hba.conf to allow host-based authentication of the standby to the master.
&nbsp;</p> <p>Add a line like:</p> <pre>host &nbsp; &nbsp;all &nbsp; &nbsp; all &nbsp; &nbsp; $SLAVE_IP/32 &nbsp; &nbsp; &nbsp;trust</pre> <p>I actually did this as a Puppet template file.</p> <p>On the standby server, you drop a file called &ldquo;recovery.conf&rdquo; into /var/lib/postgresql/9.1/main</p> <p>Yes, the *DATA* directory. &nbsp;Yes, that makes no sense. Yes, it should by rights be /etc/postgres.... but it isn&rsquo;t.</p> <p>In that file, you have 2 lines.</p> <pre>standby_mode = 'on'<br />primary_conninfo = 'host=&lt;%= psql_master -%&gt; user=&lt;%=replication_user -%&gt; password=&lt;%=replication_password -%&gt; '</pre> <p>That&rsquo;s copypasta&rsquo;d from a puppet template. &nbsp;</p> <p>The interpolated lines are more like:</p> <pre>standby_mode = 'on'<br />primary_conninfo = 'host= user=replicant password=wibblewibblewibble '</pre> <p>Then all you&rsquo;ve gotta do is instantiate the standby with pg_basebackup, and then restart the master, and the standby, and they should come up, connect to each other, and start streaming replication updates.</p> <p>It&rsquo;s pretty magical.</p> <p>pg_basebackup lives in /usr/lib/postgresql/9.1/bin/pg_basebackup</p> <p>and should be run (as the postgres user):</p> <pre>/usr/lib/postgresql/9.1/bin/pg_basebackup -D /var/lib/postgresql/9.1/main/ -x -h $Master_Hostname -U postgres</pre> <p>You should start the standby first, so that the master doesn&rsquo;t have a chance to get out of sync.</p> <p>The standby will start accepting read-only connections as soon as it&rsquo;s up to date with the master.</p> </p> Baking Certificates into OSX Lion for 802.1X <p>&nbsp;</p> <p>This is tricky. &nbsp;No question about that.
&nbsp;</p> <p>In order to configure OSX Lion to use 802.1X authentication over WiFi, to log in, and also connect (without prompting for credentials), we need to generate a .mobileconfig parameter file (plist).</p> <p>These files are a bugger to craft by hand, so what we'll do is use Apple's iPhone Configuration Utility to build one which can be used for deployment to an iPhone, or an OSX Lion laptop/desktop.</p> <p>Apple have a bunch of stuff about Enterprise Deployment <a href="">here</a>.</p> <p>The file you want, however, is the<a href=""> iPhone Configuration Utility 3.4 for Mac OS X</a>.</p> <p>You'll need to run this on an Apple device: a MacBook Air, MacBook Pro, iMac, etc. As far as I know, you can't do this from an iPad.</p> <p>1. Download and install from the DMG.&nbsp;</p> <p>Run the Configuration Utility.</p> <p><img id="plugin_obj_119" title="Picture - Main screen of the iPhone Configuration " src="/media/cms/images/plugins/image.png" alt="Picture - Main screen of the iPhone Configuration " /></p> <p>Click "Configuration Profiles" in the selector on the LHS.</p> <p>Select "New", and you should get a blank new profile.&nbsp;</p> <p>Enter some details.</p> <p><img id="plugin_obj_120" title="Picture - Enter some Details" src="/media/cms/images/plugins/image.png" alt="Picture - Enter some Details" /></p> <p>"Identifier" is a reversed format of your profile, in a kinda Java package-style notation, i.e., <em></em> becomes <em>com.wibblesplat.wifi</em>; Simples!</p> <p>As part of the Profile, you can configure all sorts of settings that will be installed on the target device.
&nbsp;Scroll down through General, Passcode, down to "Credentials".&nbsp;</p> <p><img id="plugin_obj_121" title="Picture - Selecting Credentials" src="/media/cms/images/plugins/image.png" alt="Picture - Selecting Credentials" /></p> <p>When you hit "Configure", you can choose a certificate file.&nbsp;</p> <p><img id="plugin_obj_122" title="Picture - Choose your Certificate File" src="/media/cms/images/plugins/image.png" alt="Picture - Choose your Certificate File" /></p> <p>At this point, we're going to pause here, and quickly recap how to create self-signed SSL certificates.</p> <p><strong>Open Terminal, and create a new directory that we can shove all the SSL related gubbins into.</strong></p> <pre>cloud-white:~ tom.oconnor$ mkdir wibblesplat</pre> <pre>cloud-white:~ tom.oconnor$ cd wibblesplat/</pre> <p><strong>Next, we need to generate a private key.</strong></p> <pre>cloud-white:wibblesplat tom.oconnor$ openssl genrsa -des3 -out wibblesplat.key 1024<br />Generating RSA private key, 1024 bit long modulus<br />...++++++<br />...........................++++++<br />e is 65537 (0x10001)<br />Enter pass phrase for wibblesplat.key:<br />Verifying - Enter pass phrase for wibblesplat.key:</pre> <p><strong>You should enter a passphrase here, but we can strip it off later.</strong></p> <p><strong>Now we've got the key, we'll use that to generate a Certificate Signing Request (CSR)</strong></p> <pre>cloud-white:wibblesplat tom.oconnor$ openssl req -new -key wibblesplat.key -out wibblesplat.csr<br />Enter pass phrase for wibblesplat.key:<br />You are about to be asked to enter information that will be incorporated<br />into your certificate request.<br />What you are about to enter is what is called a Distinguished Name or a DN.<br />There are quite a few fields but you can leave some blank<br />For some fields there will be a default value,<br />If you enter '.', the field will be left blank.<br />-----<br />Country Name (2 letter code) [AU]:GB<br />State or Province Name 
(full name) [Some-State]:England<br />Locality Name (eg, city) []:London<br />Organization Name (eg, company) [Internet Widgits Pty Ltd]:Wibblesplat Ltd<br />Organizational Unit Name (eg, section) []:R&amp;D Department<br />Common Name (eg, YOUR name) []:*<br />Email Address []:<br />Please enter the following 'extra' attributes<br />to be sent with your certificate request<br />A challenge password []:<br />An optional company name []:</pre> <p>Of course, you need to fill in the CSR with your<strong> *own*</strong> information, but that goes without saying, doesn't it? Do you sign your cheques with "<em>Signature</em>" in a cursive hand?</p> <p>Next, we'll strip the passphrase from the key, because it makes it a bugger if you use this certificate for Apache, or whatever: it will always block and wait for the passphrase if you've not stripped it.</p> <pre>cloud-white:wibblesplat tom.oconnor$ openssl rsa -in wibblesplat.key -out wibblesplat.unprotected.key<br />Enter pass phrase for wibblesplat.key:<br />writing RSA key</pre> <p><strong>Now we've got the key and the CSR, we can generate an SSL Certificate</strong>. You can specify anything from 1 day to 7304 days (20 years) for the validity.
&nbsp;For CA Roots, it's probably best not to use 1 day ;).</p> <pre>cloud-white:wibblesplat tom.oconnor$ openssl x509 -req -days 900 -in wibblesplat.csr -out wibblesplat.crt -signkey wibblesplat.unprotected.key&nbsp;<br />Signature ok<br />subject=/C=GB/ST=England/L=London/O=Wibblesplat Ltd/OU=R&amp;D Department/CN=*<br />Getting Private key</pre> <p><strong>Now we've got the Certificate (.crt), the Key (.key), the unpassphrased key (.unprotected.key), and the Certificate Signing Request (.csr)</strong></p> <pre>cloud-white:wibblesplat tom.oconnor$ ls<br />wibblesplat.crt &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; wibblesplat.key<br />wibblesplat.csr &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; wibblesplat.unprotected.key</pre> <p><strong>Let's jump back to the main theme of this evening's symposium.&nbsp;</strong></p> <p>Now we've got a generated certificate, we can continue with profile generation.</p> <p>We were here.&nbsp;</p> <p><img id="plugin_obj_123" title="Picture - We were here" src="/media/cms/images/plugins/image.png" alt="Picture - We were here" /></p> <p>Navigate to wherever you left those SSL certificate files, and select the .crt</p> <p><img id="plugin_obj_124" title="Picture - Found a certificate" src="/media/cms/images/plugins/image.png" alt="Picture - Found a certificate" /></p> <p>When you click "Open", the right hand side of the credentials pane will display the signed certificate.&nbsp;</p> <p><img id="plugin_obj_125" title="Picture - Opened the Certificate" src="/media/cms/images/plugins/image.png" alt="Picture - Opened the Certificate" /></p> <p>Excellent. 
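</p>
<p>A quick note on the -days argument used above: it's just date arithmetic, so if you want the certificate to die on a particular date, you can work the number out rather than guess. A small sketch (the dates are arbitrary examples):</p>

```python
from datetime import date

def days_until(expiry, today):
    """The number to pass to `openssl x509 -days` so the certificate
    expires on (roughly) the given date."""
    return (expiry - today).days

# "20 years" from New Year's Day 2012 crosses five leap days:
print(days_until(date(2032, 1, 1), date(2012, 1, 1)))  # 7305
```

<p>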
&nbsp;</p> <p>Now we can configure the Wifi settings to use that certificate.</p> <p>Scroll back up through the Profile settings, up to "Wi-Fi".</p> <p><img id="plugin_obj_126" title="Picture - Configure Wifi" src="/media/cms/images/plugins/image.png" alt="Picture - Configure Wifi" /></p> <p>Hit "Configure", and the right-hand pane changes to another profile builder screen.</p> <p>Enter the SSID of your Wi-Fi, and select Security Type "WPA / WPA2 Enterprise"&nbsp;</p> <p><img id="plugin_obj_127" title="Picture - Configure Wifi SSID" src="/media/cms/images/plugins/image.png" alt="Picture - Configure Wifi SSID" /></p> <p>Scroll down the right-hand side to "Enterprise Settings" and click some boxes.&nbsp;</p> <p><img id="plugin_obj_128" title="Picture - Configure Enterprise Wifi" src="/media/cms/images/plugins/image.png" alt="Picture - Configure Enterprise Wifi" /></p> <p>Click the "Trust" tab, and select the Certificate that we added to the Stored Credentials.</p> <p><img id="plugin_obj_130" title="Picture - Trust me, I'm a server" src="/media/cms/images/plugins/image.png" alt="Picture - Trust me, I'm a server" /></p> <p>Under "Trusted Server Certificate Names", hit the [+] button, and add whatever matches the CN of your certificate. &nbsp;In this case, it's "*".</p> <p>Nearly Done!</p> <p>Along the top button bar, hit "Export", and you get the Export Dialog:&nbsp;</p> <p><img id="plugin_obj_131" title="Picture - Export Me!" src="/media/cms/images/plugins/image.png" alt="Picture - Export Me!" /></p> <p>For "Security" ensure "None" is selected, then hit "Export..."</p> <p>Save the file with the .mobileconfig extension.</p> <p><img id="plugin_obj_132" title="Picture - Export File as mobileconfig" src="/media/cms/images/plugins/image.png" alt="Picture - Export File as mobileconfig" /></p> <p>Right.
&nbsp;That's the OSX bit done.</p> <p>The next thing I did, was to jump back over to my Ubuntu desktop, and fire up Meld.</p> <p>In case you've never used it, Meld is a great, interactive, diff tool. &nbsp;It supports 2 and 3 way diffs, and you can shuffle bits of code between the two panes of it easily.</p> <p>We're going to open someone else's mobileconfig file, and sanity check our own.</p> <p>Ronald Ip over at <a href=""></a> has published his <a href="">configuration profile</a> for accessing the wireless at Singapore Management University.</p> <p>It's an interesting read, and the link to the .mobileconfig file is at the bottom of his blogpost, also <a href="">here</a>.</p> <p>Open up Meld (you might need to <em>apt-get install meld</em>).</p> <p>Create a New Diff, and select the file you downloaded from iphoting as the Original, and your generated mobileconfig file as the "Mine".</p> <p><img id="plugin_obj_133" title="Picture - Choose files to meld" src="/media/cms/images/plugins/image.png" alt="Picture - Choose files to meld" /></p> <p>Now all you need to do is Sanity Check them. &nbsp;Make sure that, side by side, the files look *similar*. 
&nbsp;Of course, they can't be identical, but you want some idea that the keys and values are in the same order (this is <strong>*important*</strong>), and that yours has got most of the same information as the master.</p> <p><strong><em>Important:</em></strong></p> <p>If you wish to use 802.1X to authenticate to Radius for logins, then you'll need to configure a "Login Window" profile.&nbsp;</p> <p>This means you need to add a "macloginwindow" user account to your LDAP (or whatever your Radius server looks up against), and then configure the username and password for that in this file.</p> <p>To do that, edit the .mobileconfig file in a decent text editor, and add the lines</p> <pre>&lt;key&gt;UserName&lt;/key&gt;<br />&lt;string&gt;macloginwindow&lt;/string&gt;<br />&lt;key&gt;UserPassword&lt;/key&gt;<br />&lt;string&gt;000INSERTSecurePasswordInHERE000&lt;/string&gt;</pre> <p>Those lines need to go *just* after the block:</p> <pre>&lt;key&gt;TTLSInnerAuthentication&lt;/key&gt;<br />&lt;string&gt;MSCHAPv2&lt;/string&gt;</pre> <p>Then after&nbsp;</p> <pre>&lt;key&gt;SSID_STR&lt;/key&gt;<br />&lt;string&gt;wibblesplat-wifi&lt;/string&gt;</pre> <p>Insert&nbsp;</p> <pre>&lt;key&gt;SetupModes&lt;/key&gt;<br />&lt;array&gt;<br /><span style="white-space: pre;"> </span>&lt;string&gt;System&lt;/string&gt;<br /><span style="white-space: pre;"> </span>&lt;string&gt;Loginwindow&lt;/string&gt;<br />&lt;/array&gt;</pre> <p>This defines that you're doing both System settings and Login Window settings.</p> <p>If you don't want to do Login Window stuff (but frankly, why wouldn't you?), then you can safely remove the LoginWindow key.</p> <p>Somewhere near the bottom of the file, there's a Key marked "<em>PayloadType</em>", with Value "<em>Configuration</em>".</p> <p>One line above that, insert the two lines:</p> <pre>&lt;key&gt;PayloadScope&lt;/key&gt;<br />&lt;string&gt;System&lt;/string&gt;</pre> <p>&nbsp;</p> <p>That should be it for manual changes.
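</p>
<p>Since a .mobileconfig is just an XML plist, those hand edits can also be scripted. A sketch using Python's plistlib; the payload layout is assumed to match what the Configuration Utility exports, and the account name and password are the placeholders from above:</p>

```python
import plistlib

def harden_mobileconfig(raw):
    """Add the Login Window bits to an exported .mobileconfig
    (bytes in, bytes out). Assumes a Wi-Fi payload under PayloadContent."""
    profile = plistlib.loads(raw)
    # Root level: scope the profile to System settings
    profile["PayloadScope"] = "System"
    for payload in profile.get("PayloadContent", []):
        if payload.get("PayloadType") == "":
            payload["SetupModes"] = ["System", "Loginwindow"]
            eap = payload.setdefault("EAPClientConfiguration", {})
            eap["UserName"] = "macloginwindow"
            eap["UserPassword"] = "000INSERTSecurePasswordInHERE000"
    return plistlib.dumps(profile)
```

<p>Diffing the scripted output against a hand-edited copy in Meld makes a decent sanity check.</p>
<p>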
&nbsp;As soon as I figure out how to do those from the OSX iPhone configurator, I'll update this. &nbsp;I suspect that because LoginWindow isn't actually an iPhone option, but more pertains to OSX on non-mobile devices, it's not actually covered as a Thing in the Configurator.</p> <p>Once you're pretty happy, you can get on with the next step. &nbsp;On Lion, you can load the files with the omnipotent "open" command from Terminal. We used HTTP to distribute the files, but you could equally just scp them across to your Lion clients.</p> <p>You need to do the profile load as an Admin User, so in Terminal, do something like:</p> <pre>su - adminuser<br />open wibblesplat.mobileconfig</pre> <p>A box might appear asking if you want to apply the settings; say yes. &nbsp;</p> <p>You've come this far, it'd be foolish to say no.</p> <p>Then all you've gotta do is reboot. &nbsp;</p> <p>Technically, you might not need to, but at least rebooting should clear any saved session state, and you'll get a more representative idea of what ought to happen.</p> <p>Done. &nbsp;Congratulations. &nbsp;You've just baked in configuration details for WPA2 Enterprise and 802.1X.</p> Renewing a SSL Certificate on OSX Server <p>&nbsp;</p> <p>This article relies on having a soon-to-expire SSL certificate on an Apple OSX Server. &nbsp;Ours are running Snow Leopard, and I&rsquo;m yet to try the whole thing on Lion.</p> <p>I&rsquo;ve got to admit, I went through a bit of a rigmarole to do this.</p> <p>To generate a new certificate, you need a key, and a CSR.</p> <p>To get the key, you need to export a PKCS12 file from KeychainAccess as ROOT. &nbsp;Yes, Root. Yes, OSX = Toy operating system. No, another admin user won&rsquo;t cut it.
Yes, it&rsquo;s a pain in the arse.&nbsp;</p> <p>For an imaginary organisation, "wibblesplat", the process goes like this:</p> <ol> <li>Open Terminal.</li> <li>Run: <pre>sudo /Applications/Utilities/Keychain\ Access.app/Contents/MacOS/Keychain\ Access</pre></li> <li>Unlock the System keychain.</li> <li>Locate the certificate (Category -&gt; Certificates), Control + click =&gt; Export...&nbsp;</li> <li>Export to<strong> /tmp/wibblesplat.p12</strong></li> <li>Feed it a password for the p12 archive. &nbsp;Do NOT forget this. You&rsquo;ll need it.</li> <li>Go back to the Server Admin panel, grab the expiring certificate, and hit the Gearwheel, then select Generate Certificate Signing Request.</li> <li>Save that to a file. (<strong>/tmp/wibblesplat.csr</strong>)</li> </ol> <p>Next, we need to split the PKCS12 archive, to get the old private key out.</p> <pre>tom.oconnor@cloud-white:~$ cd /tmp<br />tom.oconnor@cloud-white:/tmp$ openssl pkcs12 -in wibblesplat.p12 -nocerts -out wibblesplat.key</pre> <p>&gt; Enter Import Password: *****<br />&gt; MAC verified OK<br />&gt; Enter PEM pass phrase: *****<br />&gt; Verifying - Enter PEM pass phrase: *****</p> <p>Strip the passphrase from the key (otherwise you'll have to enter it every time you restart services).</p> <pre>tom.oconnor@cloud-white:/tmp$ openssl rsa -in wibblesplat.key -out wibblesplat.unprotected.key</pre> <p>&gt; Enter pass phrase for wibblesplat.key: *****<br />&gt; writing RSA key</p> <p>Export the old certificate from the p12. &nbsp;You might as well.</p> <pre>tom.oconnor@cloud-white:/tmp$ openssl pkcs12 -in wibblesplat.p12 -clcerts -nokeys -out wibblesplat.old.crt</pre> <p>Generate the new (self-signed) certificate from the CSR from earlier, and the freshly exported key.</p> <pre>tom.oconnor@cloud-white:/tmp$ openssl x509 -req -days 7300 -in wibblesplat.csr -signkey wibblesplat.unprotected.key -out wibblesplat.new.crt</pre> <p>&gt; Signature ok<br />&gt; subject=/<br />&gt; Getting Private key</p> <p>Now, you go back to Server Admin.
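</p>
<p>Before that, a quick sanity check worth doing: confirm the new certificate and the unprotected key actually belong together, by comparing their RSA moduli. A sketch (filenames as in the examples; the new certificate is whatever you passed to <code>-out</code>):</p>

```shell
# If the moduli differ, the cert and key don't match, and Server Admin
# will reject the pairing. Filenames follow the examples in this post.
cert_modulus() { openssl x509 -noout -modulus -in "$1" 2>/dev/null; }
key_modulus()  { openssl rsa  -noout -modulus -in "$1" 2>/dev/null; }

check_pair() {
    if [ "$(cert_modulus "$1")" = "$(key_modulus "$2")" ]; then
        echo "cert and key match"
    else
        echo "MISMATCH: wrong key for this cert"
    fi
}
```

<p>e.g. <code>check_pair wibblesplat.new.crt wibblesplat.unprotected.key</code> should report a match before you bother re-importing anything.</p> <p>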
&nbsp;Re-select the expired certificate, and hit Gearwheel -&gt; Replace with new signed certificate.</p> <p>Find the file "wibblesplat.new.crt" in Finder, and drag it into the Server Admin "Replace" screen.</p> <p>You don&rsquo;t need to replace the key, because in the steps above we reused the old key.</p> <p>Head back over to Keychain Access, find the newly updated certificate, and you should see that the new expiry time is somewhere around 20 years from now (7300 days, about the longest you can set a certificate's Valid To date).</p> <p>Then double-click the new certificate, and under the Trust dropdown/treeview thingy, set "<strong><em>When using this certificate</em></strong>" to "<strong><em>Always Trust</em></strong>".</p> <p>Congrats. &nbsp;You've just replaced an expired certificate with one that won't expire for 20 years (well, near enough.)</p> Analysis and Comment: Why Point of Sale is a POS <p>&nbsp;</p> <p>A little background, perhaps:<br />Last night I attended a Winter Party at a bar called The Sterling, on the ground floor of the Gherkin.&nbsp;<br />I have absolutely no problem with the organisation of the Party itself, but more complaints about the Venue.&nbsp;</p> <p>I drink in various bars quite a lot. &nbsp;I've worked in a few bars. &nbsp;I observe how bar staff operate, and how their tills work. &nbsp;There are a few longstanding, massive problems with pretty much every till/billing system I've ever seen in a bar.</p> <p>Some are downright terrible, and who knows what the designers/integrators were thinking. &nbsp;These typically have a<strong> full 103-button QWERTY keyboard</strong>, and you've gotta type shit in, to get stuff to come up on the tab. &nbsp;It's slow, poorly designed for purpose, and not ideal to use a keyboard in a wet/food preparation environment, so the keyboard gets filthy and broken, so the staff hammer on the keys harder and harder... Yeah.
You can see where this is going.</p> <p>There's the <strong>generic Touchscreen POS system</strong>, that's been adapted for bar use. Generally a better lot, but the biggest problem is that traditional POS relies on having barcodes on everything, and a scanner on the till. &nbsp;That doesn't work very well in a bar, where things aren't quite as fixed-format as in a traditional shop.</p> <p>One of my favourite haunts in Brighton gave all the staff a very<strong> small barcode reader</strong>, and everything was barcoded.</p> <p>You ask for a double G&amp;T, the barman swipes the Gordons shot twice, and a bottle of tonic, and you're done. &nbsp;It worked <em>perfectly</em>. &nbsp;This is the right kind of idea. &nbsp;Sadly, it's about the only place I've ever seen it.</p> <p>While we're vaguely on the topic of Brighton, there was a particular favourite cocktail bar down there which had the most remarkably unfriendly till system. <a href="">@PoBK</a> and I persuaded the barman to let us have a look at the interface after it took him nearly 10 minutes to enter the contents of a custom cocktail. &nbsp;The conclusion we came to was that some idiot designer had melded the keyboard entry system with the touchscreen system, but failed to recognise the actual <em>modus operandi</em> of a bar. &nbsp;Every time you wanted a new ingredient, you had to re-search the database, then enter the quantity, in millilitres, on the keyboard.</p> <p><strong>So here's an idea. &nbsp;Cocktail bars need a till system that's designed for their use.</strong> Frequently used ingredients in frequently used sizes have more accessible buttons than other, less frequently used things.
&nbsp;Ergo, Gordons Gin gets a big button; anchovy paste gets a smaller one, at the end of a list.</p> <p>Also enter a bunch of cocktails (matching the menu) into the database, so you can ring up a mojito with one click, rather than, say, entering 2 shots of rum, mint, lime, and so on.</p> <p><strong>I'm going to change theme here, momentarily.</strong></p> <p>One of the biggest complaints I've heard about (mostly) last night's venue is the concept of tab inflation.</p> <p>A number of people have reported on Twitter and IRC that their tab receipt contained drink items which were not their own.&nbsp;</p> <p>I can come up with a few reasons for this. &nbsp;All of them are avoidable, and yet were not avoided by design/implementation.</p> <p>Having observed the staff using the bar tab feature, this is the typical pattern:</p> <ol> <li>Customer orders drinks.&nbsp;</li> <li>Customer presents tab card (a business-card-sized piece of paper with a number on it).</li> <li>Barstaff hit the "Add to Tab" button, which displays a screen of small (1 square cm) touchscreen buttons with numbers 1 to 500, or so.&nbsp;</li> <li>Barstaff select the button matching the tab card, and a receipt is printed.</li> <li>Customer gets drunk.</li> <li><strong>GOTO: 1</strong></li> </ol> <p><strong>There's a number of problems with this. &nbsp;</strong></p> <p>Let's start with Authorisation and Authentication, or "<em>How does the bar know you are who you say you are</em>". &nbsp;Well, under the above system, they don't.<br />I could grab/steal/replicate/forge anyone's card, and rack up a massive amount of money on their tab, because there's no checking mechanism in the system to be sure that the tab belongs to me.
&nbsp;</p> <p>In the past, I've seen systems (often in hotels) where you sign the paper against your room number, the bar staff keep the paper, and when you check out, you can see the list of receipts.&nbsp;</p> <p>My local pub asks for the name on the payment card that is securing the tab.</p> <p>There's a pub in Hammersmith that uses a slightly more upmarket solution, where the tab card is actually a key to a lockbox that holds your debit/credit card. A bit better, but that also leaves much to be desired in the whole card security theatre.</p> <p>Authentication is a bit of a bugger. Many of the ideas that work well aren't ideally suited to the fast-paced realm of bar service. &nbsp;Many of the ideas that are currently used are massively insecure.</p> <p>From what I've heard, the above problem isn't the actual vector for tab inflation, or at least, if it is, it's of considerably smaller incidence and volume.</p> <p>The biggest problem is threefold. First, the bar staff have to manually read the number from the card; second, they have to match it against a list of small numbers from 1 to 500 (ish); third, and most importantly from a <strong>User Experience (UX)</strong> point of view, the buttons are small, closely spaced, and there doesn't appear to be a molly guard (something to stop you from incrementing the wrong tab; in short, "Are you sure you mean tab 123?").</p> <p>On more than one occasion last night, I heard the admission "<strong><em>I might've put that on the wrong tab, I'm not so good with numbers.</em></strong>" Fair enough. &nbsp;I'm not brilliant with numbers myself, but I can design that point of failure out of your system.</p> <p>Here's the simplest solution. &nbsp;It's so simple you're gonna kick yourself.
&nbsp;Print some computer-readable representation of the tab card number on the card itself.&nbsp;</p> <p>A few options, in ascending order of price, might be: Barcode, QR Code, Mag-stripe, Punched Card, RFID contactless technology, <a href="#Image1">SmartButton</a>, Fingerprint recognition (this'd be cool).</p> <p>Yep, that's it. &nbsp;It's slightly more expensive, but you'd soon find that you'd be losing less revenue through unclaimed drinks. &nbsp;You'd get more money from repeat business, because you've not alienated customers by telling them "you lied, and you did actually buy this drink" when they blatantly hadn't.</p> <p>Add a simple short authorisation code that the customer sets when they start the tab, and you've got primitive Authorisation as well as Authentication.</p> <p><a name="Image1"></a>[1]&nbsp;<img id="plugin_obj_113" title="Picture - Smart Button" src="/media/cms/images/plugins/image.png" alt="Picture - Smart Button" /></p> eBuyer's Black Monday <p>&nbsp;</p> <p>It's only a few days after Black Friday, and eBuyer have experienced their very own Black Monday. &nbsp;</p> <p>I'll set the scene. &nbsp;This morning, I had an email mailshot advertising a &pound;1 sale of clearance range items on eBuyer. &nbsp;Sounded like an ideal plan, especially if there were any hard disks up for grabs.&nbsp;</p> <p>I never actually got that far though. &nbsp;I followed the "sneak preview" instructions, and duly "liked" eBuyer on Facebook. &nbsp;10:30 rolled by, and oh look. &nbsp;Error 500 from eBuyer. &nbsp;Refresh. &nbsp;Try again. &nbsp;Same error. &nbsp;<strong>Connection terminated.</strong>&nbsp; Session reset. &nbsp;<strong>Page returned no data.&nbsp;</strong></p> <p>Let's have a look at the Facebook page for eBuyer. &nbsp;Seems to be some problem here... Reports coming in from all over the web that the site's down. <strong><em>&nbsp;*gasp*</em></strong> Surely... No? Oh my god...
They didn't anticipate the extra load on their servers that this clearance sale would cause? &nbsp;</p> <p>What a surprise. &nbsp;Another company suffering <em>seriously</em> bad PR. There's enough bile and vitriol about the entire fiasco split between their <a href="">Facebook page</a> and <a href="!/search/ebuyer">#ebuyer on Twitter</a>.</p> <p><strong>So what actually happened? &nbsp;</strong></p> <p>Well, eBuyer effectively started a DDoS against themselves at about 10:30 this morning. &nbsp;It's fairly safe to assume that there are two main problems. &nbsp;</p> <p><strong>1) </strong>Their site is almost entirely dynamic content that has to be generated on every page view, especially as the clearance prices are only visible if you're logged in. &nbsp;So there are cookies involved, so the content can't be cached. &nbsp;This means that every page has to be "built" from scratch by the web server(s), which have to make requests to the backend databases for prices and stock levels. &nbsp;Imagine this happening for every user... then every user's click, then every user's click in multiple tabs. &nbsp;No wonder the site's fucked.</p> <p><strong>2) </strong>More importantly, the majority of their connectivity has been saturated by people trying to access the site. &nbsp;There are commenters on Facebook and Twitter both saying that if you chain-refresh the site, basically keeping your finger held down on F5, you'll get to the site quicker. &nbsp;Probably not, guys. &nbsp;</p> <p>The problem is that you've got the audience of the email, the people from Twitter advertising, anyone who's seen the retweet, and anyone from Facebook all piling down to go and have a look at the eBuyer deal. &nbsp;</p> <p>Then there's the others: the rubberneckers. &nbsp;Those are the types who slow you down on the highway by leaning out of the window of their car watching the ambulances wheel off the bodies. &nbsp;They're down there too.
&nbsp;Looking for the charred remains of eBuyer and the scorch mark of their PR agency. &nbsp;</p> <p><strong>So how did Amazon survive their Black Friday sales?&nbsp;</strong></p> <p>Well, that's fairly straightforward. &nbsp;Amazon are <em>vast</em>, and incredibly intelligent when it comes to business analysis. &nbsp;Amazon have been running sales for a lot longer, on a massive scale, but they've learnt from their mistakes. &nbsp;I can remember the Amazon sites falling over at peak time over Christmas. &nbsp;It's happened. &nbsp;It used to happen quite often, but they scaled up, and they scaled out. &nbsp;That's the only way they got to be one of the top eCommerce sites active on the internet. &nbsp;</p> <p>I'm not saying that eBuyer need the same level of architecture as Amazon have, but the key point is the Business Intelligence that's missing. &nbsp;This is how the conversation should've gone:</p> <p><strong>Date: </strong>01/11/11<br /><strong>Place: </strong>Business Intelligence Dept, eBuyer HQ.</p> <p><strong>Alice: </strong><em>"Hey Bob... On our mailing list, we've got a reader coverage of, hey, let's say 750,000 people, right?"</em></p> <p><strong>Bob: </strong><em>"Sure Alice. Why?"</em></p> <p><strong>Alice: </strong><em>"When we launch the &pound;1 deals on the 28th of November, I bet they're all gonna visit at 10:20, so they can sit there and refresh until we launch the deals."</em></p> <p><strong>Bob:</strong> <em>"Oh Alice, you're so wise. &nbsp;We need to tell the Operations team so that they can get some extra server power in for that day."</em></p> <p><strong>Alice: </strong><em>"Correctamundo, Bob. &nbsp;Imagine if we launched these deals and our site went down. &nbsp;Boy, would our faces be red!"</em></p> <p>I can speak from experience when I say that I've worked in places where they fail to plan for scalability during sale season, or before a product launch.
&nbsp;It's impossibly stressful to be given less than a week's notice of this kind of event, and to have to be in the position of ensuring the site's stability and continued uptime. &nbsp;In some cases, it is <em>actually impossible</em>, especially without a massive amount of planning: scaling up the number of servers, increasing the bandwidth to the routing core, sorting out load balancing (particularly eCommerce-aware load balancing)... It's tricky.</p> <p>This is where "cloud" services and infrastructure come into their own. &nbsp;Numerous cloud IaaS providers are offering scalable bandwidth, pay-as-you-go scalable servers, and the option to add as many as you like to their scalable cloud load balancers. &nbsp;eBuyer would probably only have had to pay for a couple of weeks' usage: a few days for testing, then 2 days either side of today to iron out bugs. &nbsp;I'm sure that it could have been handled more gracefully.&nbsp;</p> <p>Somewhere inside eBuyer HQ, someone forgot to tell the Operations team. &nbsp;Perhaps one day, advertising and marketing teams will come down from their high horses, and realise that without full support from the entire company, an advertising campaign such as this is actually seriously detrimental to the company. &nbsp;</p> <p><strong>Live situation update: It's 11:44 on the 28th</strong>, and eBuyer's site is <strong>still down</strong>. &nbsp;@ebuyer on Twitter are attempting to make up for their technical faux pas by promising better servers for future sales. &nbsp;Yes, well done. That's called "locking the door after the horse has bolted".</p> <p>I last wrote about a&nbsp;<a href="/blogish/cost-forward-thinking/">similar problem in 2009</a>, when Derren Brown advertised his website on his TV show, and it was down for 2-3 days. &nbsp;Since then, I'm still seeing the same problems: companies failing to anticipate server load caused by advertising. &nbsp;Nothing's changed. &nbsp;It's still embarrassing.
&nbsp;It's still unacceptable, and in this case, for eBuyer, it's going to prove very costly. &nbsp;Not only have they lost the potential of more sales from their &pound;1 sale, but they've also lost the regular traffic buying stuff for their day-to-day needs.</p> <p>Black Monday for eBuyer; a stunningly good day for their competitors.</p> So, You Wanna Be a Sysadmin? <p>&nbsp;</p> <p>So you wanna be a good sysadmin? I don't blame you. It's fun, and it's lucrative. Especially if you do it right.</p> <p>The differences between a good admin and a bad one are many and varied.</p> <p>Most importantly, it boils down to a level of devotion to the company, and an expansive knowledge. &nbsp;Not that it has to be all-encompassing. One of the important things to know is the limit of your knowledge. &nbsp;Know your comfort zones, and know your limits.</p> <p>There is nothing wrong with not having an instant answer, but there is something wrong with not being able to find an answer. Powers of inference and deduction are key to your success, and you should exercise them frequently.</p> <p>I'd much rather work with someone who knows what they do not know, than someone who will blag it.&nbsp;</p> <p>You've also got to recognise your weaknesses, and work hard to ensure that they don't impact your work.</p> <p>Among sysadmins, the biggest weakness I see is ego.</p> <p>Ego, especially in an enterprise or commercial environment, is a killer. Egotistical sysadmins will act as if the system is their property, often failing to document important infrastructure, or trying to make sure that "the company can never fire them", as they've engineered themselves into a critical position.</p> <p>This is a massive, yet all too common anti-pattern among admins.&nbsp;</p> <p>As I mentioned earlier, a good admin needs to be devoted to the success of the company.
With or without them.</p> <p>One of the biggest problems with sysadmins with an ego the size of a planet is the inability to accept personal responsibility. &nbsp;This is a real double-edged sword. &nbsp;There are some things which, at some point in time, will be, or will have been, your fault. &nbsp;It&rsquo;s up to you to do two things. &nbsp;One, you&rsquo;ve gotta take the rap for it: admit you were wrong, and apologise. &nbsp;Two, and more importantly, it&rsquo;s down to you to debrief the rest of the company about what happened, why it happened, and what you and your team will do to ensure that it doesn&rsquo;t happen again. &nbsp;You&rsquo;re allowed to make some mistakes, everyone does, but it&rsquo;s a fool who doesn&rsquo;t learn from past mistakes.</p> <p>You also need to consider what happens if you're unfortunate enough to get fired or made redundant.</p> <p>Bad times, but you'll only make them worse for yourself if you're petty enough to have installed logic bombs or backdoors. &nbsp;</p> <p>Let me tell you this. There are some very skilled sysadmins out there. Some better at computer forensics than me. Many with better toolkits, detection algorithms, and hardware to recover deleted files.</p> <p>We will catch you. You will get found out. You will get into trouble, and you will never work in IT again.</p> <p>You could spend 10 years building systems and making a perfect CV, but if you let your ego get the better of you, you might as well have not bothered.</p> <p>I work hard to ensure that the changes I make are made for a reason, and that they're well documented. The thing I fear most in any system is a Single Point Of Failure. Engineers have worked for years to eliminate SPOFs from all sorts of systems. &nbsp;The important SPOF that must not be forgotten is the human factor.</p> <p>If you were hit by a bus tomorrow, what would happen? &nbsp;Are there passwords that only you know?
&nbsp;Scripts that are only on your Mac?&nbsp;</p> <p>Citing &lsquo;security&rsquo; as a reason not to disclose information is stupid and childish. &nbsp;Sure, don&rsquo;t put passwords on a public wiki, but do put them in a password management utility, and do share access to that with your team. &nbsp;It&rsquo;s like that old adage, &ldquo;There&rsquo;s no I in TEAM&rdquo;: without a solid team, your skills are of no use to the company. &nbsp;</p> <p>Empire building is a common trait among sysadmins. Often in a new job, an admin will seek out and destroy existing systems because they're not perfect in their eyes. Only in their eyes, mind you.</p> <p>An existing system could have been running non-stop for 20 years on AS/400, but that doesn't warrant a costly move to x86 blades based on a pitch from an amply bosomed <a href="">Booth Babe</a>.</p> <p>No: again, you need to think in terms of what is best for the company. "The Greater Good", not flashy new toys.</p> <p>The flashy new toys thing is massively dangerous. &nbsp;There are companies out there who don&rsquo;t shop around, who aren&rsquo;t interested in actually researching what they&rsquo;re being sold, and where mis-selling of infrastructure happens a lot. &nbsp;It&rsquo;s down to the sysadmin (or engineer) to actually test stuff out, instead of going with whichever vendor has the best-looking hardware, or the most promised (but invariably individually licensed) feature sets. &nbsp;If that means going with slightly less bleeding-edge technology, because it&rsquo;s more tried and tested, then so be it.&nbsp;</p> <p>There is absolutely nothing wrong with being a lazy sysadmin. &nbsp;Being irresponsible, on the other hand, is a massively concerning trait, and should be addressed as soon as possible. &nbsp;</p> <p>I frequently find myself doing things to make my life easier.
&nbsp;I&rsquo;m a big fan of automated installations, Puppet, Kickstart/debian-installer, and centralised logging and monitoring. &nbsp;</p> <p>The reason for this is that I don&rsquo;t want to repeat myself. &nbsp;I find building systems very enjoyable, but the minutiae are very tiresome. &nbsp;I&rsquo;d rather do something once, and repeat the action, than do something a dozen times on different servers.</p> <p>Being a lazy sysadmin effectively means that you can concentrate on the important and interesting stuff, rather than spending all day working on something boring and trivial. &nbsp;</p> <p>Being irresponsible would be ignoring logs, being obstructive with documentation, and going out of your way to piss people off. &nbsp;This is the &ldquo;media&rdquo; view of many sysadmins, not helped by the wicked and evil character Dennis Nedry in Jurassic Park. &nbsp;You remember: he took power systems and security offline so that he could steal stuff unnoticed, then initiated logic bombs and so on to cover his tracks and prevent his detection. &nbsp;</p> <p>Don&rsquo;t be like that. &nbsp;You might end up like him, getting eaten by a Dilophosaurus.</p> <p>(You&rsquo;ll probably just get fired, and never work in IT again.)</p> Coming Out - My story <p>&nbsp;</p> <p>I came out roughly of my own accord. I think it was sometime in November 2000, but I can't be that precise on the date. &nbsp;It seems a long time ago now. &nbsp;I say 'roughly of my own accord' because, as the story will unfold, some might say I was outed. &nbsp;</p> <p>As a mere technicality, I came out to my internet group of friends a while before this. &nbsp;I remember that a lot more clearly as being a few days before my 13th birthday, or was it my 14th? Anyway.</p> <p>I found a huge group of other guys, all the same age (apparently(!)), who were all going through the same things.
&nbsp;</p> <p>When it came to actually coming out, I'd been planning my thoughts on paper, as I knew that the conversation itself would come up soon, and when it did, I'd wanna be ready. &nbsp;I wasn't the best secret-keeper at that age. &nbsp;I told a bunch of "friends" at school; some were supportive, some gave up my truths to the kids who bullied me anyway, and that gave them more ammunition. &nbsp;Anyway.&nbsp;</p> <p>Writing things down on paper proved to be the catalyst for my eventual coming out. &nbsp;One of my teachers got hold of this notebook. &nbsp;Fuck knows how. &nbsp;Perhaps I'd left it somewhere in a lapse of personal security.&nbsp;</p> <p>Anyway. &nbsp;She phoned my parents. &nbsp;And told them <strong>Everything</strong>.</p> <p>I got back from a youth group, and Dad said "One of your teachers called earlier, and we need to talk". &nbsp;My mind rattled through a bunch of alternative possibilities. &nbsp;Was I in trouble? What on earth had I done that could possibly cause that kind of parent-teacher interaction with such urgency?</p> <p>We sat down in the lounge, and Mum asked me outright: "Tom, are you gay?". &nbsp;I admitted it, and I don't remember a lot after that. &nbsp;I remember crying in her arms, more out of relief than anything else.</p> <p>Turns out my parents had known all along. <em>&nbsp;As they always seem to do.&nbsp;</em></p> <p>My parents and I have a very healthy, open dialogue about my homosexuality. &nbsp;They've met almost all of my boyfriends, and they've been out clubbing with me in Birmingham, for my 20th and 21st birthdays.
&nbsp;</p> <p>I only wish that everyone could have such open and accepting parents, but I'm afraid the truth of the matter is that not everyone is as accepting and modern-thinking as my folks.&nbsp;</p> <p>To that end, if you're coming out today, then rest assured: the community will support you, and the organisations that exist will do so too.</p> <p>Further Reading: <a href="/blogish/it-gets-better/">My It Gets Better story</a></p> <p>Endnote:<br />The next day, I had an interesting conversation with the headteacher of my school. Apparently the teacher in question had acted outside of school policies, and was pretty severely reprimanded in the coming weeks. &nbsp;I don't begrudge her actions today, but it's certainly not the best way to go about these things.</p> <p>For help and advice on coming to terms with being gay, you can call the Lesbian &amp; Gay Foundation&nbsp;helpline on 0845 3 30 30 30 (local call rate), between 10am and 10pm, or use their&nbsp;<a href="">online contact form</a> to receive a reply within 72 hours.&nbsp;</p> <p>Stonewall also offer <a href="">advice on coming out</a>.</p> A Sensible Java Build Tool <p>&nbsp;</p> <p>I've been writing Java in one sense or another for a few years now. &nbsp;I learnt stuff at university, then used it in a few jobs. &nbsp;I've written Beans and Applets, and various bits of stuff in between.&nbsp;</p> <p>It's fairly safe to say that I like Java. &nbsp;</p> <p>One thing, however, that's been a pretty consistent bugbear in all the time I've been writing Java, has been the classpath, and dependency resolution. &nbsp;Luckily, all that can now change. &nbsp;</p> <p>I used to think that Ant was a pretty neat build tool. &nbsp;All the IDEs supported it, it kinda worked most of the time, but sometimes building was a bit of a ballache - some stuff had to be in your lib/ folder, sometimes in Ant's lib/ too.
&nbsp;</p> <p>Lately though, and this week in particular, I've been playing with Maven. &nbsp;</p> <p>Maven is a pretty fucking cool build tool for Java applications. &nbsp;I suspect it probably works with other languages, but it's "designed" for Java.&nbsp;</p> <p>I don't think I really have the expertise or knowledge to explain how Maven works, partly because I haven't studied the inner workings that deeply, but also because it's far better explained in the official Maven documentation.</p> <p>Instead, I'm going to dive right in, and explain what I've been working on this week. &nbsp;</p> <p>The company I currently work for is making a pretty radical shift away from using PHP for everything. &nbsp;Instead, we've been investigating Java for creating a middleware layer that everything can talk to.</p> <p>I'm pretty chuffed with this, but I do wish that it had come a lot earlier on. &nbsp;If it had, I might not have been so quick to decide to leave when offered a better job.</p> <p>Basically, when we came up with this project, I insisted that we do it properly, for a change. &nbsp;</p> <p>I suggested that a good workflow would be something like: Netbeans IDE -&gt; Maven Project -&gt; Git SCM -&gt; Jenkins CI -&gt; Maven Repository (we chose Artifactory; I did test Sonatype Nexus too, but didn't like it).</p> <p>This is a good pattern for the Joel Test's "<em>Can you make a build in one step?</em>"</p> <p>I basically wanted to create a demo project that can be used as the basis for all future FR projects: I do the R&amp;D to make the initial POM work, then everyone else can clone this, or inherit from it.&nbsp;</p> <p>This decision was twofold: I also wanted to figure out JPA/Hibernate, and to have some clue how that works for reverse engineering the classes from an existing database. The answer to that is: pretty well, actually. But that's another story.</p> <p>My IDE of choice is Netbeans.
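</p>
<p>As an aside on the Git SCM -&gt; Jenkins CI leg of that workflow: the glue is just a tiny server-side hook. A sketch (the Jenkins URL is a placeholder, and I'm assuming the Git plugin's notifyCommit endpoint, which is one common way to trigger a build on push):</p>

```shell
# Server-side Git hook (e.g. hooks/post-receive) that pokes Jenkins
# after every push. JENKINS_URL and the repo URL are placeholders.
JENKINS_URL="${JENKINS_URL:-http://jenkins.example.com:8080}"

build_trigger_url() {
    # Jenkins' Git plugin exposes /git/notifyCommit?url=<repo> to kick
    # off builds for any job watching that repository.
    echo "${JENKINS_URL}/git/notifyCommit?url=$1"
}

notify_jenkins() {
    curl -fsS "$(build_trigger_url "$1")" >/dev/null
}
```

<p>e.g. calling <code>notify_jenkins git@scm.example.com:middleware.git</code> from the hook body, with the repo URL exactly as Jenkins knows it.</p> <p>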
&nbsp;I've been using it since I was at university, except for a small Android-related foray into Eclipse, and an experimental nosing around IntelliJ IDEA.&nbsp;</p> <p><strong>Stuff I did:</strong></p> <ol> <li>Created a new Netbeans Maven project from the quickstart archetype.</li> <li>Added the dependencies on Hibernate (all the sub-dependencies get resolved, and added).</li> <li>Added the &lt;scm&gt;&lt;/scm&gt; and &lt;ciManagement&gt;&lt;/ciManagement&gt; sections to the POM.</li> <li>Added maven-shade-plugin to let us build a fat JAR, which makes the jar bigger - it includes all the dependency JARs - but does make deployment a damn sight easier.</li> <li>Configured &lt;distributionManagement&gt;&lt;/distributionManagement&gt; to contain the URL of the repository we're using.</li> </ol> <p>That's pretty much it. &nbsp;<a href="">Here's the finished POM</a>, with various secret bits removed.&nbsp;</p> <p>When I edit something in Netbeans, and commit a change, a post-receive hook calls the Jenkins API, and builds the project. &nbsp;Jenkins then deploys the artifacts (a fat JAR and a POM) to Artifactory.</p> <p>Epic.</p> <p>&nbsp;</p> Deeply Concerning <p> <p>Right. &nbsp;This is important. &nbsp;I want you to stop what you're doing and read this. &nbsp;It won't take long.&nbsp;</p> <p>I've just witnessed another act of homophobic bullying amongst school children. &nbsp;</p> <p>Sadly, this time they weren't in uniform, so tracking down the school responsible is going to be somewhat harder. &nbsp;What I can tell you is that there were 4 boys, three black, one white, and all very troubling.</p> <p>The thing that troubles me most, as a gay man living in London, is that if these kids are to be believed, then we are all under threat. &nbsp;</p> <p>As the kids boarded the bus, three of them goaded the other one with chants like "<em>dirty queer</em>" and "<em>fuckin battyboy</em>". &nbsp;This alone troubles me.
&nbsp;</p> <p>I thought that now that Section 28 has been repealed, schools are supposed to encourage a level of tolerance and acceptance. &nbsp;This is evidently not true of these kids. &nbsp;</p> <p>I think we need to do something about this. &nbsp;<strong>Together</strong>. &nbsp;Not as a group of gay people trying to right the wrongs of the school curriculum, but as members of society.</p> <p>I genuinely feel bad for the kid they were harassing, that kid being merely an echo of my former self. &nbsp;</p> <p>One of the problems I experience when I see this type of incident is that I don't always feel comfortable intervening. &nbsp;Last time was a different case: the kids were all in&nbsp;<a href="">Holland Park School</a>&nbsp;uniforms, so if any of them tried anything funny, I'd have at least some idea where they were from.</p> <p>These kids today looked and acted a lot tougher. &nbsp;I tried my best to listen in to their further goading and haranguing over their tinny R&amp;B <a href="">sodcasting</a>, but didn't really get very far.</p> <p>The general high-level overview of it is "<em>You're different, we don't like that. You'd better die before we kill you</em>". &nbsp;I don't know about you, but I'd say that kind of talk is pretty bad for teenagers who, by that age, really should know better.</p> <p>So. &nbsp;Here's an interesting idea. &nbsp;I'd like to know what the actual problem is. &nbsp;</p> <p>Are schools not handling homophobia? When it comes up, do they brush it casually under the rug, like <strong>The Chase</strong> did?</p> <p>&nbsp;</p> <p>Do they have any out gay members of staff who are prepared to act as a positive role model for students? I know for damn sure that this would have helped when I was having similar problems.</p> <p>What exactly are kids taught these days with regard to homosexuality? &nbsp;I mean, when I was at school it was very glossed over.
&nbsp;Something along the lines of "<em>There are these people called homosexuals. &nbsp;What they do is bad.</em>"</p> <p>I would like all of you to write a letter to the Headteacher of your local school. &nbsp;If it's one that your children go to, then that's all the better. &nbsp;I want you to ask them the following questions.</p> <p><ol> <li>What does the school do to combat homophobic bullying?</li> <li>What level of teaching is there with regard to homosexuality?</li> <li>Do you have any out gay members of staff who act as outreach to gay kids growing up?</li> <li>If not, why not?</li> </ol></p> <p>Usually, I'm pretty proud of London. &nbsp;It's a great city.&nbsp;</p> <p>Sadly, today I feel let down by the city and its future generations. &nbsp;It's deeply concerning, and it's up to us to do something about it.&nbsp;</p> </p> Bored Engineer <p>So there's this saying, "There's nothing more dangerous than a bored engineer"; I tend to think it's true. &nbsp;I've had very little to do at work lately, which has been in equal parts frustrating and annoying. &nbsp;I like having stuff to do. I like having plans for the future, but at the moment, there's very little.</p> <p>Anyway. &nbsp;I popped into Westfield the other day, and caught a free BBC Prom. &nbsp;Very cool. &nbsp;Then I had a poke around on my mobile and realised two things.</p> <p>1) There is no mobile Proms website.</p> <p>2) There is no mobile Proms app!</p> <p>So I thought I'd have a go at writing one, given that I have a Google Nexus S, and all of the SDK bits.</p> <p>I wrote a parser/scraper for the BBC Proms website (about 100 lines of Python, using BeautifulSoup and simplejson) - I might GitHub this later on.</p> <p>I then set about writing an Android app that would let me browse the list of Proms, and give me information about where they are, what time, etc. &nbsp;It's pretty close to being complete.
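&nbsp;</p> <p>For flavour, the guts of that scraper boil down to "walk the listing markup, collect each prom, emit JSON". A stdlib-only sketch of the idea (the real one used BeautifulSoup, and the markup below is made up for illustration, not the BBC's actual page structure):</p>

```python
# Sketch of the scraper idea: turn an HTML listing of proms into JSON.
# Stdlib-only illustration; the markup structure here is invented.
import json
from html.parser import HTMLParser

class PromListParser(HTMLParser):
    """Collect the text of every <li class="prom"> into a list."""

    def __init__(self):
        super().__init__()
        self.proms = []
        self._in_prom = False

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples
        if tag == "li" and ("class", "prom") in attrs:
            self._in_prom = True

    def handle_endtag(self, tag):
        if tag == "li":
            self._in_prom = False

    def handle_data(self, data):
        if self._in_prom and data.strip():
            self.proms.append(data.strip())

def proms_to_json(listing_html):
    """Parse a listing page and emit the proms as a JSON document."""
    parser = PromListParser()
    parser.feed(listing_html)
    return json.dumps({"proms": parser.proms})

# Made-up markup, standing in for the real listing page.
sample = '<ul><li class="prom">Prom 1: First Night</li><li class="prom">Prom 2: Fantasia</li></ul>'
print(proms_to_json(sample))
```

<p>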
&nbsp;Another day of development and it'll do most stuff.</p> <p>If you've got a 2.1 or better Android, then you can have a play with the v0.2 Beta of PromGuide by clicking <a href="/media/PromGuide-0.2.apk" target="_blank">here</a>&nbsp;or scanning the following QR code:</p> <p><img src="" alt="0.2 PromGuide QR Code" width="249" height="249" /></p> <p>If you find it useful, or broken, or totally hopeless, leave me some feedback.</p> Cloud Backup Strategy <p> <p>It has recently been brought to my attention that a number of users of cloud-based hosting services tend to use an "integrated" backup solution provided by the cloud host. &nbsp;This is probably some form of snapshot-based backup of a server's state.&nbsp;</p> <p>I quite like the idea of doing this, especially if there's no impact to the server being backed up whilst the snapshot is taken.&nbsp;</p> <p>However, I can immediately see one big problem with it. &nbsp;</p> <p>At least one scenario I can see that would require me to restore a backup is failure of the server host. &nbsp;Under this circumstance, it might be that a) you will be unable to get hold of the backup, which is probably stored somewhere on their storage cloud, or b) you can get access to the storage, but the backup is in a proprietary format, either a raw snapshot or a VMDK disk image, which might be difficult/impossible to transfer to a different host. &nbsp;</p> <p>I'd be especially scared of using snapshot backups for a database server, because in the unlikely event that the restore target is different to the backed-up server, you might have some compatibility problems, especially if you're using x86 MySQL and go to an x86-64 host. &nbsp;</p> <p>For this reason, I think it's probably best to have a couple of different backup strategies. &nbsp;</p> <p>I suggest that having a snapshot backup is a good thing, and will allow a very fast restore process, but it's only useful while your server's host is online.
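&nbsp;</p> <p>To make the portable half of this concrete, here's a sketch of the sort of commands such an offsite backup might run - the paths and the database name ("appdb") are placeholders, not anything from a real system:</p>

```python
# Sketch of the platform-agnostic backup: dump the package list, the
# databases as plain SQL, and /etc. Paths and "appdb" are placeholders.

def backup_commands(backup_dir="/var/lib/backup", db="appdb"):
    """Return, in order, the shell commands this strategy would run."""
    return [
        # package selections, so `dpkg --set-selections` can restore them
        f"dpkg --get-selections > {backup_dir}/dpkg-state",
        # databases as plain SQL: restorable on any MySQL, any architecture
        f"mysqldump {db} > {backup_dir}/{db}.sql",
        # the config that makes the box *this* box
        f"tar czf {backup_dir}/etc.tar.gz /etc",
    ]

for cmd in backup_commands():
    print(cmd)
```

<p>On the restore side, `dpkg --set-selections &lt; dpkg-state` followed by `apt-get dselect-upgrade` puts the package set back.</p> <p>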
&nbsp;</p> <p>In the event that your host has gone down, it's important to have an offline/offsite backup. &nbsp;This alternative backup should also be as platform-agnostic as possible.</p> <p>In other words, any databases should be exported as SQL files, and as far as possible, the system state should be backed up. &nbsp;I tend to keep track of what packages have been installed by storing `dpkg --get-selections &gt; /var/lib/backup/dpkg-state` or similar. &nbsp;This means that if I have to rebuild a server, I can just use that file, and restore the state of package installations really quickly and easily. &nbsp;</p> <p>With that, and a copy of /etc, restoration should be pretty easy.</p> <p>On the other hand, the concept of trying to restore a VMware, KVM or Xen snapshot (which might be inaccessible, or otherwise unavailable for export/download) onto a different system entirely, frankly fills me with a little bit of fear.</p> <p>Given the choice, a snapshot restore is almost certainly preferable, but it'd be prudent to have a backup strategy for your backup strategy. ;)</p> </p> Britannia Country House Hotel, Manchester. <p>&nbsp;</p> <p>I cannot begin to understand what the General Manager of the Britannia Country House Hotel, Manchester (BCH) is thinking when he runs the hotel day-to-day.</p> <p>Never before have I found such incompetence among public-facing hoteliers. Firstly, a disabled-access room was requested at the time of making the booking. &nbsp;We were given a non-accessible room on the 4th floor.&nbsp;</p> <p>*pester*</p> <p>Now a disabled-access room on the 1st floor.&nbsp;</p> <p>4 hours later, the main lift packs up. Stops working entirely, because *apparently*, 5 days ago, it was full of 8 American tourists, with 8 biiiiig bags, all crammed in so tightly they exceeded the weight limit and had to be freed by the fire service.</p> <p>Did the BCH have the lift fully repaired since then? Apparently not.
&nbsp;We got in and it made a worrying *groink* noise before the doors closed.</p> <p>Anyway, 4 hours or so after check-in, the lift freezes and goes to the 4th floor, where it locks itself down, and won't move for love nor money.</p> <p>Leaving my disabled roomie trapped on the first floor.</p> <p>On investigation, it's revealed that they don't have an Evac-chair, nor a service lift, nor any means of getting people from the disabled-access rooms (on the first floor) down to safety. &nbsp;</p> <p>Fuck knows what their fire contingency plan is. &nbsp;(Actually, my roomie spoke to the duty manager, who was equally unaware of any fire contingency plans.)</p> <p>Apparently there wasn't a lift engineer available to fix it within 24 hours, so he had to be carried down a flight of stairs on a chair, by 4 heavy barmen. &nbsp;Not exactly dignified. &nbsp;</p> <p>So after a *lot* of jiggery-pokery with some more unhelpful, rude and incompetent hoteliers, we now had a room on the ground floor, save for being up 3 steps, which have a plywood/carpet ramp. &nbsp;Which moves and flexes with every step. &nbsp;It's also quite a steep ramp, and narrow enough to be probably impossible to navigate with a wheelchair.&nbsp;</p> <p>The room is also smaller than the previous room, with no disabled handrails, or pull cord.</p> <p>I also discovered, much to my infuriation (being someone who requires a hot shower in the morning, lest I be an evil cunt all day), that the shower didn't actually work. &nbsp;Some kind of mixer tap with a pull-up knob to divert the water flow, which actually doesn't pull up at all.</p> <p>Secondly, and more worryingly, the in-room phone doesn't work. &nbsp;Combine this with the room not having a disabled pull alarm, and it means that if Sam did fall in the bathroom, or anywhere else, he'd be completely stranded.
&nbsp;Can't rely on mobile phone signals in these rooms; the windows seem to be lead glass and the walls forged from bricks of Depleted Uranium.</p> <p>The hotel manager sent a maintenance man around, who poked at the shower, and the phone, and came to the conclusion that the shower was *so* old, and so full of limescale, that it was shagged, and the phone was, more frustratingly, "fucked". Apparently this hotel is made of 2 sections. &nbsp;The front bit is newer, and more modern, and the back bit is a converted block of flats. &nbsp;Apparently none of the phones in this back bit work.</p> <p>I managed to get 2 decent showers out of the shower, before it returned to original form, stopped being functionally usable, and was just a dribbly tap. &nbsp;Great.&nbsp;</p> <p>I would mention it to the hotel staff again, but I don't think they really give a fuck.</p> <p>So, it's Monday morning, and we're not due to check out until Tuesday morning. &nbsp;*knock knock*. Oh, it's you. Head of Housekeeping. &nbsp;I think I'll name you Chardonnay for the duration.&nbsp;</p> <p>Yes, we're not checking out until Tuesday.&nbsp;</p> <p>*4 hours later*</p> <p>*sounds of key in lock*</p> <p>I go and answer the door, before she has a chance to unlock it further, and see a middle-aged cleaning woman (Helga, perhaps?), who doesn't speak a fucking word of English, other than "my boss say this room empty".</p> <p>me: "Well, we're not checking out till tomorrow"</p> <p>her: "My boss say room empty"</p> <p>me: "Well, your boss is an idiot".</p> <p>her: "I go away now"</p> <p>5 minutes later, Chardonnay, Helga, and some guy turn up, and the guy&nbsp;says "Yes, the cleaning lady doesn't speak much English"</p> <p>me: "Yes, I told you earlier, we're not checking out today."</p> <p>*sighs*</p> <p>Seriously.
Would it be too much to ask for people who are employed in the UK to be able to speak passable English?</p> <p>Would it make sense if they knew what Do Not Disturb means?</p> <p>I wonder how many times Helga has caught someone in flagrante delicto whilst trying to service their rooms? &nbsp;Is she some kind of voyeuristic cleaning-pervert?</p> <p>"I wash your sheets, you make them dirty!"</p> <p>FFS, BCH. I'm used to far better customer service. Far better staff, far less rude cleaning staff, and generally, not being fucking disturbed when I leave a DND sign on the knob. &nbsp;What part of that is so fucking difficult to grasp?</p> <p>The available food at the hotel is similarly gash. &nbsp;Apparently they have an on-site pizza place. &nbsp;I'm yet to actually see anyone eating in it though. &nbsp;Someone asked at reception about the hotel pizza place, and they got given a Dominos menu. &nbsp;Insert comment here about dogfooding (or is that the toppings on the pizza?)</p> <p>There was a "Light Bites" menu available, which seems to have been mostly microwaved ready-meals, except for the "Stuffed Potato Skins", which were skins filled with tomato puree (tube quality), topped with cheddar, and microwaved.</p> <p>Eugh.
So so so acidic.&nbsp;</p> <p>Sam ordered the Bruschetta, and we were both surprised that it was ciabatta, untoasted, cut end-ways rather than lengthways, so it was 6 slices, each with a surface area of about 3 square inches, coated with a thick layer of margarine, and topped with some raw onion, raw peppers and raw tomatoes.&nbsp;</p> <p>Perhaps we're spoiled, and London really is the pinnacle of global cuisine, but something tells me that this isn't the case, and the cooks at the hotel are as incompetent as the rest of the fucking staff.</p> <p>On the day we checked in, Thursday, there was a "Carvery", which was actually just some lukewarm roast pork and pallid apple sauce, where I got a paltry 4 small slices of pig, and could have quite happily devoured 4x that amount, but apparently that wasn't an option. &nbsp;For this, we paid &pound;13.50.&nbsp;</p> <p>On other days, there was one of three options: something meaty and tasteless, something fishy that smelled funny, or something vegetarian and cold.</p> <p>Nothing particularly appetising, or nutritious. &nbsp;I am reminded of school dinners, with a similar calorific value and flavour level.</p> <p>Which reminds me. &nbsp;Further to the aforementioned disability problems, out of a possible 6 bars, only one was at ground level with no steps to get to, but this wasn't open anywhere near as often as any of the others. &nbsp;The main lobby bar is down a flight of 3 quite deep stairs. &nbsp;The bar in their built-in "nightclub" is down a flight of 4 steps, then up a further flight of 5.
&nbsp;The bar in the back "bistro" area requires climbing 6 steps, and descending 4.</p> <p>Basically, if you're unfortunate enough to be disabled, and unable to use stairs, you'd better be either teetotal or not thirsty, because your chances of getting a drink are pretty much nil.</p> <p>I'd hate to have to navigate the hotel in a wheelchair; many of the doors are seriously weighty, including the one to the corridor for our room, and that one doesn't open fully, because there's a mysteriously placed sticky-out bit of wall, which makes opening the door past about 60 degrees completely impossible.</p> <p>It's almost as if the floor plans were designed by Goebbels himself, as a disability assault course, designed to weed out the less capable.</p> <p>Perhaps a word of praise, now. &nbsp;Although only a brief one. &nbsp;The beer is cheap, cold and plentiful, and the bar staff are cute. &nbsp;However, they seem to hate the rest of the hotel staff as much as I do. &nbsp;A fantastic insight, for which I am deeply grateful: they have absolutely no faith in their management either.</p> <p>&nbsp;</p> <p>Overall review: &nbsp;Shocking. Don't stay here at any cost. &nbsp;If you do find yourself here, run like hell.</p> <p>I keep finding "quirks" about this place that leave me aghast and open-mouthed. &nbsp;The lift/disabled access thing being fairly prominent in my mind.&nbsp;</p> <p>Oh, and I saw a rat in the lobby.&nbsp;</p> <p>Photos of this hellish establishment can be found here:&nbsp;<a title="Holy fuck, this place is terrible!" href=";feat=directlink">;feat=directlink</a></p> Desktops as Servers <p> <p>Personally, I hate the idea of using a desktop as a server in a production environment. &nbsp;I'm going to define the term "production environment" first. If you've got an environment, any environment, where the service provided is relied on by anybody, for any reason, then that's a production environment.
&nbsp;If it's just for you, and you don't mind when it all goes wrong and the shit hits the fan, then that's fine.</p> <p><strong>Case in point:</strong> I've got 2 re-appropriated desktops as a pair of Domain Controllers for testing a domain deployment. &nbsp;Each desktop is running Windows 2008 R2 Server, and provides Active Directory, DHCP, DNS and Windows Deployment Services. &nbsp;This was fine for testing, and playing around with building workstations, but the problem comes when people find out about this, and want to rely on it. &nbsp;For about a week, I was experimenting with using Windows' DHCP and DNS servers for the entire office. &nbsp;This was fine and dandy until there was a power cut, and neither of the desktops came back on automatically. &nbsp;This is because, unlike most servers, the default ACPI configuration is to start "off", and not "last setting" or "on".</p> <p>So the desktops didn't boot up, and nobody could get a new DHCP lease. &nbsp;Bit of a bugger that, but easily fixed.</p> <p>In the event that I ever do get this kind of scenario in the office in production, where people are reliant on the availability of the Domain Controllers for login and file sharing, then I've already got some HP ProLiant servers specced up and ready to order.&nbsp;</p> <p>There are other problems too. &nbsp;Desktop hard disks aren't designed for 24/7/365 operation, and aren't designed for a high duty cycle like that of a server. &nbsp;What disk manufacturers call "Enterprise Disks" are much more sturdily built than "Desktop Disks"; they're designed to work harder, at higher temperatures, with higher duty cycles, and are generally designed to be always on. &nbsp;</p> <p>There's also a running trend amongst high-capacity desktop hard disks, where they're "Green" or "Energy Efficient". &nbsp;One of the ways that manufacturers implement this is having the disk stop spinning when it's not in use, or sending the entire unit to sleep.
&nbsp;If you have a RAID set built out of Green Disks, then you'll probably find at some point that the array ends up degraded - "broken" in layman's terms. &nbsp;There's probably nothing wrong with the disk, but the disk firmware has shut it down, or put it to sleep, because it's not immediately being used. &nbsp;The RAID controller, software or hardware, sees this as a disk failure, and all hell breaks loose. &nbsp;If you have 2 of them go to sleep in a RAID 5 array, then you're really screwed.</p> <p>Desktop motherboards are also a different breed; they're generally designed with Athlon or Intel Core processors in mind, which have very different design priorities to a server-grade Opteron or Xeon. &nbsp;They're not really designed with server operation in mind, and tend to be less performant than an equivalently clocked server processor under sustained load.</p> <p>On the topic of desktop motherboards, they're also less built for high-memory configurations, typically with 2 or 4 DDR3 slots, and their capability to accept ECC (Error Correcting Code) RAM is very variable. &nbsp;Some do, some don't.</p> <p>I like built-in redundancy, and defence-in-depth, especially when building server solutions. &nbsp;I like having ECC RAM; it's more expensive, but it does protect against bit-flip scenarios, the kind which could cause kernel oopses, panics and blue screens of death. &nbsp;I also like having more than one of things: multiple disks, and so on. &nbsp;I visibly squirm when I find SMEs using desktops as servers, in production, and then find that the "server" (or desktop) only has one hard disk.</p> <p>Server motherboards also often have neat features built in, like more PCI slots (and 64-bit-width ones -- handy for RAID cards).
&nbsp;There's also iLO/DRAC/IPMI for remote management built in - but remember, if you have remote management, make sure it's configured before it's too late.&nbsp;</p> <p>They also tend to have better BIOSes, which are designed for headless operation: no more "Keyboard not found - Please press F1 to continue" messages, which prevent your headless server from booting.</p> <p>Servers that are built as servers, on server hardware, cost more than a desktop, but last far longer. &nbsp;You get a much greater Return On Investment by not having to replace disks and memory that have failed in the first year, because they've simply worn out.&nbsp;</p> <p>As with any electronic equipment, the bathtub curve of failure rates applies, but the entire graph length is much shorter for consumer-grade hardware.&nbsp;</p> <p>If you look at the cost of a server alongside the cost of a desktop, then the cost of a server really is quite a lot higher. &nbsp;The rub is that the cost of downtime can be enormous, especially if the services provided by the server are core to the business, or core to the operations, such as logins and file sharing (in the case of an office domain).</p> <p>Hardware is cheap; downtime is damn expensive.&nbsp;</p> <p>Perhaps, alongside everything else, the old adage is truer than ever:</p> <p>You really do get what you pay for.</p> </p> mod_rewrite is killing social media. <p>&nbsp;</p> <h2>mod_rewrite is killing social media.</h2> <p>This is a little ranty, but it's really pissed me off lately.&nbsp;</p> <p>That&rsquo;s right. It&rsquo;s you.
The ones with image hotlink protection, and the ones who rewrite URLs to do strange and special SEO things, but who don&rsquo;t actually think about what happens when you send someone a link to something.</p> <p>(For the uninformed, hotlink protection is that thing where you get sent a link to an image, but the site owner is being draconian, and redirects you to Google, because your referer wasn&rsquo;t their own site, so the image must have been stolen and put on another webpage (!))</p> <p>Here&rsquo;s what happens. &nbsp;Someone makes a blog, and posts a funny image of a kitten. &nbsp;We all like kittens, so I copypasta the link, and send it to my friend.&nbsp;</p> <p>Problem is, the site owner is being a twat. They think that we&rsquo;re still in the 1990s, and bandwidth is expensive. &nbsp;They set cookies when I visit the site, and then they look for those when I look at their images.&nbsp;</p> <p>I post a tweet like &ldquo;Hey, check out this cute kitty!&rdquo;</p> <p>I have the cookies, so it looks fine. My friends, however, do not, so they get redirected to Google, or something equally stupid.</p> <p>Here&rsquo;s the result. Either I look stupid, or they look stupid, or both. &nbsp;Neither of these is a particularly good thing.&nbsp;</p> <p>I can&rsquo;t save the image and host it somewhere else, because that would be stealing it from the site owner / copyright holder, adding a dose of further legal problems, and also a massive layer of effort on top.&nbsp;</p> <p>Here&rsquo;s what site owners should do. &nbsp;Stop being a twat. &nbsp;If you&rsquo;re concerned about bandwidth usage from your assets, host them on Amazon&rsquo;s S3 cloud, and shovel it all through Cloudfront. &nbsp;Set up a CNAME to your Cloudfront Distribution point, and serve your static assets through there.
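&nbsp;</p> <p>To spell out the trick being complained about: those mod_rewrite rules reduce to a check on the Referer header. A sketch of the logic (not any particular site's actual rules; "example.com" is a placeholder host):</p>

```python
# The hotlink "protection" being ranted about, reduced to its logic:
# serve the image only when the Referer names the site itself.

def handle_image_request(referer, own_host="example.com"):
    """Return what such a server does with an image request (sketch)."""
    if referer and own_host in referer:
        return "serve image"
    # anyone following a shared link (no referer, or a foreign one)
    # gets bounced - the behaviour this post objects to
    return "redirect to google"

print(handle_image_request("http://example.com/kittens.html"))
print(handle_image_request("http://twitter.com/some/tweet"))
```

<p>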
&nbsp;You&rsquo;ve got enough bundled bandwidth in the Free Tier to last more than long enough, and you also leave a sensible system by which I can share your media files on the social web.</p> <p>The first time I found this today was on some guy&rsquo;s site where he claimed to be &ldquo;the self-appointed curator of the internet&rdquo;. &nbsp;Hell, I think I&rsquo;m better for the health and wellbeing of the internet, personally. &nbsp;I don&rsquo;t protect against hotlinking, because it&rsquo;s stupid. It&rsquo;s like anti-right-click scripts on websites. Those are fucking dumb too.&nbsp;</p> <p>By &ldquo;protecting&rdquo; your images with some mod_rewrite trickery, you&rsquo;re actually diminishing the traffic to your website. I&rsquo;m never going to link to you again, because you&rsquo;ve got crap policies. &nbsp;You&rsquo;ve also lost the inquisitive organic traffic sources, the people who go &ldquo;I wonder what else is on that site&rdquo;, because you bounce them through to Google, instead of your homepage, or the page that the image was originally on. &nbsp;That would be smart; it would mean you&rsquo;d get more traffic in general, more AdWords hits, etc. &nbsp;</p> <p>But no, you&rsquo;re all still living in the past, back in the days when bandwidth was an expensive commodity. &nbsp;Wake up and smell the megabits. &nbsp;We&rsquo;re not in that world any more. &nbsp;If your host is threatening to cut you off for costing them a fortune in bandwidth, tell them to fuck off, and find somewhere else. &nbsp;There&rsquo;s no shortage.</p> <p>Hell, go it alone on an EC2 micro instance on the free tier. &nbsp;I&rsquo;ll even tell you how to do it.&nbsp;</p> <p>Secondly, if I&rsquo;m visiting webpages on your site, I&rsquo;d like you to do 2 things. They&rsquo;re really simple, and you should have been doing them for years.&nbsp;</p> <p>1) When I click a link, I&rsquo;d like the address bar to change accordingly. &nbsp;Or you can show a permalink link.
&nbsp;One or the other. &nbsp;I&rsquo;d like to be able to share your website with my friends on Twitter, or IRC, or Facebook. &nbsp;I can&rsquo;t do this if you don&rsquo;t give me the links to share. &nbsp;All I end up sharing is an invalid link that then bounces them to an HTTP 500 error page, or a 302 redirect to Google. &nbsp;Yeah. Smart move there. &nbsp;NOT.</p> <p>2) This is the biggie. &nbsp;I&rsquo;d really like it if, once you&rsquo;ve generated a URL, it doesn&rsquo;t change. &nbsp;Ever. You could make my life immeasurably easier if I can keep a bookmark to your site for 10 years, and never have to wonder &ldquo;where&rsquo;d that page go? I&rsquo;m sure that URL is right...&rdquo;</p> <p>Oh, and read this: <a href="" target="_blank"></a></p> <p>&nbsp;</p> Seriously, What? <p>Sometimes you read something on the internet and think "Huh? Really?". &nbsp;When I read this, I swear, you could almost hear my brain go *boggle*. &nbsp;</p> <p>When I first started using Java, I remember reading something in the EULA (yes, I read it) about not using it for mission-critical or life-critical circumstances. &nbsp;Something about avionics and nuclear power stations.&nbsp;<br /><a href="" target="_blank">Specifically</a>: "<span style="font-family: Arial, Helvetica, FreeSans, Luxi-sans, 'Nimbus Sans L', sans-serif; font-size: 12px;">You acknowledge that Licensed Software is not designed or intended for use in the design, construction, operation or maintenance of any nuclear facility."</span></p> <p><span style="font-family: Arial, Helvetica, FreeSans, Luxi-sans, 'Nimbus Sans L', sans-serif; font-size: 12px;">&nbsp;</span>The thing is, we all click through these, because we all suspect that nobody would actually use Java for a nuclear power station, or, say, host a mission-critical service on the cloud.&nbsp;</p> <p>However, tonight, that is <a href=";tstart=0" target="_blank">exactly what it appears someone has done</a>.
&nbsp;I've also archived the page as a PDF, should it get deleted out of sheer terror.<br /><img id="plugin_obj_83" title="File - Scary infrastructure decisions ahead." src="/media/cms/images/file_icons/pdf.gif" alt="File - Scary infrastructure decisions ahead." /></p> <p>I am honest-to-FSM scared by the concept that there could be no built-in redundancy to that system. &nbsp;(Part of me wants their CTO to contact me WRT systems consultancy; the other part wants to run around screaming.)</p> <p>I think the commenters say it best, but I'll still add my $0.02 here.</p> <p>While Amazon EC2 may be compliant with a number of standards, and may have previously had no major issues, this latest incident should serve as a reminder to all users of cloud infrastructure. &nbsp;</p> <p>It's no different to any other system. &nbsp;It can go down, you can lose your data, and shit can hit the fan.</p> <p>Have lots of redundancy built in from day one. &nbsp;Have lots of different layers of security and redundancy, like defence-in-depth for nuclear reactors. &nbsp;</p> <p>Plan for the worst-case scenarios, because in systems engineering, we deal with the when, not the what-if.&nbsp;</p> The Name Game <h2>This is real-life Social Engineering.</h2> <p>(If you've just read this for the first time today, you should read all of it.)</p> <p><strong>The first meme we'll discuss is the "Royal Wedding Name". &nbsp;</strong></p> <p><strong>BOHICA.&nbsp;</strong></p> <p><strong>Again, it seems that some of you aren't understanding how these things work. &nbsp;The Royal Wedding Name asks for&nbsp;</strong></p> <ol> <li><strong>Your grandparent's name (first name, male or female)</strong></li> <li><strong>Your first pet's name</strong></li> <li><strong>The name of the street you&nbsp;grew up on.</strong></li> </ol> <div><strong><br /></strong></div> <div><strong>Right, you lot. Stop this now.
&nbsp;</strong> I hate these name game memes because, as you should remember from last time, they're a crafted attack to reveal bits of information about you. &nbsp;<a href="/blogish/identity-theft/#.UmeC-_lJMwo">Remember what I did to someone's facebook profile based on this info?</a></div> <div>This one's been going on for a lot longer than I thought. &nbsp;A lot of you will be using your real grandparents' names, and the real names of your pets, and the real streets you've lived on. &nbsp;That's just silly. &nbsp;And dangerous.</div> <div>And it's your fault if you get your identity stolen because of that.&nbsp;</div> <div><strong><br /></strong></div> <h3><strong>Other Similar Memes:</strong></h3> <p><strong>January 7th 2014:</strong></p> <p>"The Birds of a Feather Porn Star Name Game" - spotted on Twitter, <a href="">tweeted by an official Twitter stream for a new ITV show.</a></p> <p>&nbsp;</p> <blockquote class="twitter-tweet" lang="en"> <p>Here's a little naughty treat for the <a href="">@loosewomen</a>! What would your name be? <a href=";src=hash">#BOAF</a> <a href=""></a></p> &mdash; Birds of a Feather (@OfficialBOAF) <a href="">January 7, 2014</a></blockquote> <script src=""></script> <p>&nbsp;</p> <p><img src="" alt="" width="300" height="200" /></p> <p>I am, unsurprisingly, <strong>furious</strong>. &nbsp;I've attempted to contact the Twitter feed owner, to get them to pull these tweets off their stream.</p> <p>We've seen it from people who start memes, but this is the first time I've seen it with commercial sponsorship. &nbsp;It's not big, and it's not clever.
<strong>&nbsp;Stoppit.&nbsp;</strong></p> <p>&nbsp;</p> <p><strong>November 14th 2013</strong></p> <p>The "Elf Name" meme.</p> <p>First spotted on Facebook, seems to be spreading both on Facebook and Twitter as <a href=";src=typd">#myELFname</a>.&nbsp;</p> <p>This one asks for the first letter of your first name (nothing too telling here), and the month you were born in (slightly more PII (<a href="">Personally Identifiable Information</a>)). &nbsp;</p> <p><img src="" alt="Android screenshot " width="360" height="640" /></p> <p>&nbsp;</p> <p>There's nothing special about this one; it's only asking for minimal PII, but still. &nbsp;It's a slippery slope, and these memes are *still* occurring.</p> <p><strong>October 23rd 2013</strong></p> <p><strong>"Your Downton Name"</strong></p> <p>Comprised of your <strong>Grandparent's First Name</strong>, and your <strong>First School</strong>.</p> <p>Both of these are known questions for a number of secret-answer challenges.&nbsp;</p> <p><strong>February 7th 2013</strong></p> <p><strong>"The Corgi Name"</strong></p> <p>Comprised of your Zodiac sign, favourite colour, and the last digit of your telephone number. &nbsp;Apparently this is what passes for amusement these days. &nbsp;</p> <p>This one was heavily promoted by <a href="">@BuzzFeed</a>&nbsp;and gained 100+ retweets. &nbsp;They really should know better.</p> <div><strong>March 4th 2011</strong></div> <p>The current meme is the "Pornstar Name".</p> <p>This meme is asking for your <strong>First Pet</strong>, and your <strong>Mother's Maiden Name</strong>.&nbsp;</p> <p>&nbsp;</p> <p><strong>Earlier Occurrences</strong></p> <p>The NPR Name Game:</p> <p>Comprised of your <strong>Grandparent's Middle Name</strong>, and your first <strong>Foreign Penpal's Last Name</strong>.</p> <p><strong>Seriously, these are some of the most common security questions used on a very large number of websites.
</strong>&nbsp;By publicly tweeting the answer, you are handing over all the details a nefarious hacker needs to compromise your account, and steal your identity.</p> <p>I cannot stress this highly enough. &nbsp;<strong>Do NOT tweet your Porn Name / Pornstar Name, or any other of these Name Game memes.</strong></p> <p>There&rsquo;s often a meme going around on facebook/twitter/etc.. One of these note things: you do it, you tag your friends, they do it, and so on, or it proliferates on twitter.</p> <p>These bug me enormously, because they ask for a fair bit of information. &nbsp;Here&rsquo;s a brief summary of the answers you give.</p> <p>&nbsp;</p> <ol> <li>Your Full Name</li> <li>Your Mother&rsquo;s Middle Name.</li> <li>Your Grandfather&rsquo;s Name.</li> <li>Your favourite: Colour, Animal, Drink, Ice cream flavour, Cookie</li> <li>Place of Birth</li> <li>Street where you live</li> <li>Street you grew up on</li> <li>Name of your Pet</li> </ol> <p>&nbsp;</p> <p>I recognise some of those as secret question/answer pairs from a number of websites. &nbsp;</p> <p>I&rsquo;m really only kicking the tyres on this one, but what if someone designed these memes to gather data about people, including data about their past, place of birth, residential address, pet names, and other stuff that&rsquo;s commonly used as sample questions for &ldquo;Secret Question/Answer&rdquo; credentials online?</p> <p>Unsurprisingly, I decided not to participate in this one. &nbsp;In fact, I recommend that everyone who has done the &ldquo;Name Game&rdquo; note looks closely at their note privacy settings, just to make sure they don&rsquo;t mind everyone knowing this information about them.</p> <p>&nbsp;</p> Where are your eggs stored? <p>&nbsp;</p> <p>When I was growing up, one of the things that particularly interested me about the English language was idioms and proverbs. 
&nbsp;</p> <p>I think today, whilst many are still suffering the effects of the week, we should look a little more closely at one particular proverb, and perhaps its effective meaning today.</p> <p><strong>"Don't put all your eggs in one basket"</strong> :- This phrase is commonly (and some might say, <a href="" target="_blank">incorrectly</a>) attributed to Miguel Cervantes (in Don Quixote), but some sources have reported its usage as early as 1600. &nbsp;Also of little surprise is that many other historical cultures had similar phrases.</p> <p>OK. &nbsp;We've established that historical peoples knew about having redundancy in their Ova storage and distribution methods, so pray tell, why has this fantastic tradition been forgotten?</p> <p>I am, of course, talking about the recent (21/04/11) Amazon EC2 and related services outage. &nbsp;<a href="" target="_blank">Reddit, Foursquare and Quora</a> are the big 3 companies who've been very public about their outage, but I wonder how many smaller companies and startups who rely on Amazon services for their server needs are also ending up out of pocket (due to lost revenues), or simply offline.</p> <p>So the problem is this. &nbsp;Amazon are fucking cheap, in comparison to pretty much any other VPS solution. &nbsp;This is a royal pain in the arsehole, from a systems engineering point of view, because Amazon also price all of their other services similarly cheaply. &nbsp;S3 is Seriously Cheap Storage (they should have called it SCS, perhaps). &nbsp; There's also the Load-balancer and cloudfront CDN frontend, again, incredibly cheap. &nbsp;There's a real movement towards building one's entire infrastructure around the Amazon cloud, and I think this is the problem. &nbsp;Amazon even offer a DNS service (Route 53), so you can serve your website's DNS records from the cloud too. &nbsp;</p> <p>Can anyone see the problem with this? 
&nbsp;The intrinsically scalable architecture of the Amazon cloud certainly allows you to create lots of small servers for things, so you've got a webserver basket, containing a half dozen server-eggs; and another basket for database-eggs. &nbsp;There's a massive problem here. &nbsp;Enormous problem. &nbsp;All of your baskets are inside one enormous basket. &nbsp;One incredibly big basket called "the Amazon cloud". &nbsp;</p> <p>What appears to be happening to Amazon's cloud at the moment is one of two things:</p> <p><strong>1) </strong>People have built crap websites, or have only one egg. &nbsp;If you've only got one server, and it goes down, you're screwed. &nbsp;You might as well have a dedicated server from anywhere else. &nbsp;You've still got a massive Single Point of Failure, and when the worst case scenario happens, you're fucked.</p> <p><strong>2) </strong>People have lots of intra-cloud redundancy, but no inter-cloud redundancy. &nbsp;This is akin to having lots of small baskets of eggs, in one big picnic hamper.&nbsp;</p> <p>This is actually very common. &nbsp;It's trivially easy to construct a pretty big network on the Amazon Cloud: you add more EC2 compute nodes, then add some S3 storage, EBS block stores, Cloudfront CDN, oh, maybe Route 53 DNS, how about Simple Payments Service for micropayments, maybe Simple Message Queue, and that's before I get onto their database offerings.</p> <p>Amazon have gone a long way to making sure that everything you could ever need for this kind of system building architecture is there, in one place. &nbsp;<br />They're like Home Depot, only there's a greater chance of Amazon having what you want.&nbsp;</p> <p><strong>ERRR.<br /></strong>There's a problem here. &nbsp;I feel the same way about people who buy a 50-disk SATA array, and fill it with disks with the same batch number. 
&nbsp;It's no surprise that if one fails, you're probably going to get another failure, caused by the same bug or hardware problem. &nbsp;</p> <p>If you're going for true redundancy in the face of real adversity, then you need to start putting your eggs in many separate baskets. &nbsp;Globally distributed baskets. Baskets held by many different people. &nbsp;</p> <p>I generally approve of the use of S3 for system backups, because by and large, it's fast, cheap, and pretty secure (especially if you encrypt it). &nbsp;It's *really* fast if you're uploading from inside Amazon's network. &nbsp;There is an epic problem though. &nbsp;Say you take nightly snapshots, and upload them to S3. &nbsp;One day, your server goes down, either through Amazon's fault or for one of a number of other reasons. &nbsp;</p> <p>I can see 2 enormous problems here. &nbsp;Primarily, if it's a fault on the Amazon network, it may affect your snapshot storage, and your ability to access the snapshots in a timely fashion, so while your Disaster Recovery Plan may say "Download the disk image and redeploy", you may not be able to download the disk image. &nbsp;Then you're screwed.</p> <p>It's also possible that a disk error on the Amazon side corrupts your snapshot images, in which case, again, you're screwed. &nbsp;In a subtly different way.&nbsp;</p> <p>Secondly, and this is a far more "doh!" problem, you may be able to locate and download your disk images, but not decrypt them, because the encryption key is stored on the primary server (also inside the backup image, encrypted). &nbsp;This is easily solved. &nbsp;Copy the key, print it out, and store it in an envelope in the company safe / bank deposit box / other secure location.</p> <p>The biggest problem with all of this is that there doesn't seem to be a straightforward way to share data and server instances across diverse cloud providers. 
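Spreading the eggs across baskets starts with knowing which baskets are currently intact. A poor-man's multi-provider health check might look like the sketch below; the endpoint URLs are entirely hypothetical, and the probe function is left pluggable so a real HTTP check could be swapped in.

```python
# Sketch: poor-man's multi-provider failover check (hypothetical endpoints).
# The probe callable takes a URL and returns True/False, keeping the
# network layer pluggable (and testable); the point is the selection logic.

PROVIDERS = {  # hypothetical URLs, for illustration only
    "ec2": "http://eu.example-ec2.invalid/health",
    "rackspace": "http://lon.example-rackspace.invalid/health",
    "flexiscale": "http://uk.example-flexiscale.invalid/health",
}


def healthy_providers(providers, probe):
    """Return the names of providers whose health check passed."""
    return [name for name, url in providers.items() if probe(url)]


def pick_targets(providers, probe):
    """All healthy providers get traffic; if one cloud is offline,
    the remaining clouds mop up the traffic between them."""
    healthy = healthy_providers(providers, probe)
    if not healthy:
        raise RuntimeError("all providers down - time to check the baskets")
    return healthy
```

The real work (moving data and images between providers, and the DNS layer to steer traffic) is the hard part; this only covers deciding who is up.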
&nbsp;I'd like to build a universal image, and then deploy it to the Rackspace Cloud, Amazon EC2, Flexiscale, and so on, and be able to&nbsp;</p> <p><strong>a) </strong>interchange data between them easily (not too hard, but would require some API glue)</p> <p><strong>b) </strong>have a global system for GSLB between them, so that if EC2 is offline, then all traffic is mopped up by the other two clouds.</p> <p><strong>c) </strong>have a sensible "in-one-place" billing system (more API glue)</p> <p>Physically, and from an engineering point of view, the biggest challenge of that lot is b. &nbsp;You'd need true global redundancy, and that don't come cheap. &nbsp; However, I think that's the topic for another blogpost. &nbsp;</p> <p>In the meantime, perhaps you should evaluate where your eggs are, and how many baskets you have.</p> <p>You should worry somewhat when all of your eggs are in one superbasket. &nbsp;</p> <p>Then I think it's time for an ovum redistribution exercise.</p> <p>&nbsp;</p> ISC DHCP and PowerDNS <p> <p>Lately, I've been playing around with a pair of domain controllers in the office, trying to figure out a good way to implement a domain. &nbsp;See, the problem is, this kind of thing is a "nice-to-have" rather than a core requirement. &nbsp;At least as far as the business directors are concerned. &nbsp;Their argument is something like "It worked fine with just a bunch of PCs connected to a switch".</p> <p>I do like things manageable, and planned, and certainly now, as we're approaching 50 desktops in the office, plus mobile devices, plus laptops, and FSM knows what else, there's a real need for a bit more structure and management.</p> <p>I ditched the Draytek's built-in DHCP server to let me test out Windows 2008R2's DNS / DHCP server, which interoperate fabulously, but do have a few limitations when it comes to specifying static leases (outside of the dynamic range). 
&nbsp;Bit annoying.</p> <p>It does, however, do dynamic DNS updates: whenever a client gets a new lease, the DNS gets updated automatically. &nbsp;This is cool indeed.</p> <p>I've been thinking of a way to replace this DNS and DHCP functionality with a bit of open-source goodness, because it's a nice thing to have, and even nicer to have for free.</p> <p>I chose <a href="" target="_blank">PowerDNS</a>, because, well, I like it, and it's pretty scalable. &nbsp;Apparently it's the DNS of choice for the Wikimedia foundation, and I've used it before in a couple of other tasks. &nbsp;It's got a pretty nice MySQL backend, and also one for Postgres. &nbsp;For the time being, I'll be using the MySQL one, because that's what we tend to use around here.</p> <p>So.. for DHCP, I chose <a href="" target="_blank">ISC's DHCPd</a>, because it's easily installed in Ubuntu. &nbsp;Always a winner there.&nbsp;</p> <p>After a considerable amount of googling around, I figured out how to use the dhcpd.conf file to trigger an event on the commit, release and expiry hooks. &nbsp;<a href="" target="_blank"></a> and <a href="" target="_blank"></a>&nbsp;were pretty useful.</p> <p>Then all I had to do was write a bit of python that would interact with the database, and update the records table.</p> <p><strong>Two major things caught me out.&nbsp;</strong></p> <p><strong>1)</strong> Don't forget to COMMIT the data to the database: PowerDNS uses InnoDB on MySQL, so you'll need to commit the transaction, or bugger all happens.</p> <p><strong>2)</strong> apparmor on Ubuntu prevents dhcpd from using the exec() syscall. 
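For reference, the moving parts fit together roughly like this: in dhcpd.conf the hook is along the lines of `on commit { execute("/usr/local/bin/dhcp-event", ...); }` (see the dhcp-eval man page), and the updater is a few lines of database code. The sketch below is not my original script; it uses sqlite3 standing in for the MySQL client, against the PowerDNS generic-SQL records table, to show the explicit commit from gotcha #1.

```python
# Sketch of the commit-hook updater (sqlite3 standing in for MySQLdb).
# dhcpd would invoke this with roughly: action, leased address, hostname,
# e.g. via execute() with binary-to-ascii(10, 8, ".", leased-address).
# (dhcpd can only execute() an external script at all once apparmor
# permits it - see the note about gotcha #2.)
import sqlite3


def upsert_record(conn, fqdn, ip, domain_id=1, ttl=300):
    """Replace the A record for fqdn in the PowerDNS records table."""
    cur = conn.cursor()
    cur.execute("DELETE FROM records WHERE name = ? AND type = 'A'", (fqdn,))
    cur.execute(
        "INSERT INTO records (domain_id, name, type, content, ttl) "
        "VALUES (?, ?, 'A', ?, ?)",
        (domain_id, fqdn, ip, ttl),
    )
    conn.commit()  # InnoDB: without this, bugger all happens
```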
&nbsp;This is easily resolved by setting the apparmor profile for dhcpd from enforce to complain mode.</p> <p>Here's a couple of bits of code: one is the python updater, and the other shows how this all fits into the dhcpd.conf file.</p> <p><a title="" href="" target="_blank">;</a></p> <p><a title="dhcpd.conf example" href="" target="_blank"></a></p> <p>&nbsp;</p> </p> Policing the Tweet-waves <p>&nbsp;</p> <p>On Saturday morning, I noticed a particularly dangerous meme (for want of a better word), making the rounds on Twitter.</p> <p>Basically, <a href="" target="_blank">this image</a>&nbsp;was being retweeted over 25 times a minute. &nbsp;</p> <p>After some digging around, I managed to trace the source of the image (that is, its first known posting) to the /x/ board on 4chan.&nbsp;</p> <p><a href="" target="_blank">Tweetmeme</a>&nbsp;<a href="" target="_blank">tells us</a> that it was first reported as being tweeted by @<a href="" target="_blank">ryphons</a>, who still hasn't contacted me for further information, WRT the image source.</p> <p>Tweetmeme also reports over 400 retweets, but I'm certain that the actual figure is much much higher.</p> <p>So, the main thing I take issue with is that the image was being interpreted as fact, rather than a simulation, or prediction of what might happen. &nbsp;There is mass panic in Japan already; the last thing we need (as humanity) is the panic and hysteria to spread to the west coast of the USA.&nbsp;</p> <p>While I am not a nuclear physicist, and cannot directly comment on the state of the nuclear reactors, 
I can definitely say that there is something seriously wrong with taking this kind of "map" as fact.</p> <p>One of the most interesting things about this, aside from @ryphons not stating the original source (Was he the original creator?), is that the company whose logo is on the image "<a href="" target="_blank">Australian Radiation Services</a>" have no record, on their website, or on google's index of their site, of the image being theirs. &nbsp;</p> <p>That should be raising red flags for you already.</p> <p>Combine that with the fact that the apparent source of the image was twitter, and prior to that, 4chan, that most reputable news agency (!), and it should be fairly clear why I made the decision to attempt to stem the spread of this image across twitter.</p> <p>I came under quite a lot of flak from a number of tweeps, who were concerned that I was playing down the situation. &nbsp;Rightly so. &nbsp;I am / was trying to prevent undue panic and the spread of misinformation.&nbsp;</p> <p>Twitter is an incredibly powerful tool, allowing fairly free transfer of information between large groups of individuals. &nbsp;Sadly it's also got lots of wankers, spreading <strong>Fear, Uncertainty and Doubt</strong>. &nbsp;It's this that we (as the clever people on Twitter) need to stop.&nbsp;</p> <p>There are many sources of valid information about the situation in Japan, notably from @<a href="" target="_blank">arclight</a>&nbsp;and blogging scientists like (<a href="" target="_blank"></a>)</p> <p>&nbsp;</p> <p>I am a scientist. &nbsp;I don't believe in "god", I don't believe in "karma", and I certainly don't approve of trash media. &nbsp;I believe in factual information, and interpretations thereof by qualified individuals.&nbsp;</p> <p>I don't read the Daily Mail for exactly the same reason. &nbsp;What you've got to bear in mind is that the media outlets are in this for the money. 
&nbsp;They'll continue to print uncertain and scaremongering drivel, because that's what people buy, out of uncertainty, or purely morbid curiosity. &nbsp;Sad fact of the matter is that tabloid-quality news outsells fact and science by quite a large margin.</p> <p>That fact alone is quite sad. &nbsp;<span style="white-space: pre;"> </span></p> <p>&nbsp;</p> Proposal: Increasing Facebook Security <p>As I proved in my last blogpost, it's actually trivial to compromise a facebook account given a very small amount of personal information. &nbsp;After talking to a number of other geeks on Friday night, two things became quite apparent.&nbsp;</p> <ol> <li>Facebook security is poor, at best, and the ability to change the user's contact email address is shocking.</li> <li>Security questions and secret answers are easily exposed by social engineering; thus, these questions only work effectively if you have a completely different identity which you only use for secret questions and answers.</li> </ol> <p>I don't entirely approve of having secret questions that aren't related to you directly.. I mean, if you had a secret question which was "What is your mother's maiden name?", and you gave an answer which wasn't true, you'd have to do two things: a) remember that you lied, and b) always use the same one, or you'd be forever confused.</p> <p>Anyway. &nbsp;The real point to tonight's blogpost is that Facebook Security is gash. &nbsp;Seriously, even I was surprised that I was able to change my friend's contact email address, and successfully change his password. &nbsp;</p> <p>The only good thing about all of this, is that Facebook lock the account for 24 hours, and email the other email accounts. &nbsp;This was the only way that my friend was able to regain control of his account.</p> <p>I propose that facebook implement two-factor authentication for password resets, and possibly logins too. 
&nbsp;Given that Facebook already has and retains your phone numbers, it would be trivial, both in cost and implementation, to produce a mechanism of 2-factor authentication for advanced profile control.</p> <p><strong>User story:</strong></p> <ol> <li>Alice wants to reset her facebook password. &nbsp;</li> <li>She clicks the Forgot Password link, and correctly identifies her profile.</li> <li>She selects one of her registered phone numbers for 2-factor authentication.</li> <li>She then selects whether she is to receive a voice call, or SMS message.</li> <li>Facebook send a validation code to the number, either as an SMS, or a short voice call, reading out the code.</li> <li>Alice enters the validation code, confirming her identity.</li> </ol> <p>This system would only work if you couldn't change the numbers that Facebook could contact you on (like you can currently change your contact email address), and you had already confirmed your phone numbers with Facebook in advance (on registration, perhaps, it could authenticate your phone number).</p> <p>I don't suppose anyone who works for Facebook reads this, do they?</p> <p><em><strong>Interesting sidenote:</strong></em></p> <p>It appears that it is <a href="" target="_blank">not possible</a> to change a Facebook Security Question, for "security reasons". &nbsp;</p> <p style="padding-left: 30px;">"To protect account security, it is not possible to update your account&rsquo;s security question once you have added one.&nbsp;"</p> <p>Why the buggery not? &nbsp;This seems unusual. &nbsp;Surely these kinds of events (twitter memes, facebook notes for these Name Game things) expose users' security questions and answers, and the most important thing to do after a data breach is to change the credentials in question. &nbsp;</p> <p>Most peculiar...&nbsp;</p> Identity Theft <p>To prove a point about the latest "Pornstar Name" Meme that's currently going around Twitter. 
&nbsp;Basically, the meme asks for you to tweet your Pornstar name which is comprised of the name of your first pet, and your mother's maiden name.&nbsp;</p> <p>I'm furious about this. &nbsp;Those two names are the two most common answers to security questions found on a number of websites.</p> <p>So. &nbsp;A theory: "Given just a user's facebook name, and their Pornstar name, it should be possible to compromise their facebook account".</p> <p><strong>I did this test with the full permission of the real account holder. &nbsp;I do not condone the use of this information for nefarious or illegal purposes, it is presented for educational use only</strong>.</p> <p>Proof:</p> <p>Open facebook, and click the "Forgot Password" link.</p> <p><img id="plugin_obj_71" title="Picture - Forgot your password?" src="/media/cms/images/plugins/image.png" alt="Picture - Forgot your password?" /></p> <p>1) Identify the target account:</p> <p><img id="plugin_obj_72" title="Picture - Identify the account" src="/media/cms/images/plugins/image.png" alt="Picture - Identify the account" /></p> <p>2) Confirm the account, but click "No longer have access to these"</p> <p><img id="plugin_obj_66" title="Picture - Confirm the account" src="/media/cms/images/plugins/image.png" alt="Picture - Confirm the account" /></p> <p>3) Provide a new email address:</p> <p><img id="plugin_obj_67" title="Picture - Provide a new email." src="/media/cms/images/plugins/image.png" alt="Picture - Provide a new email." /></p> <p>4) Go check that email account for further details on how to proceed.</p> <p><img id="plugin_obj_68" title="Picture - step5.png" src="/media/cms/images/plugins/image.png" alt="Picture - step5.png" /></p> <p>4b) There is a missing step here. &nbsp;I forgot to screencap the bit where it asks your secret question, which may or may not be one of the ones referred to in the Meme, but I bet it is. 
&nbsp;Mother's maiden name and the names of pets are the most common questions.</p> <p>5) You can then create a new password :O (For an account you don't own... Yeah, it's pretty bad, this, isn't it?)</p> <p><img id="plugin_obj_69" title="Picture - step7.png" src="/media/cms/images/plugins/image.png" alt="Picture - step7.png" /></p> <p>6) There is, however, a problem. &nbsp;Facebook by default will lock the account for 24 hours. &nbsp;This does however protect the user, as it sends them a load of emails to their other email accounts, basically saying "OH SHIT, WHAT ARE YOU DOING?!!"</p> <p><img id="plugin_obj_70" title="Picture - Locked Account" src="/media/cms/images/plugins/image.png" alt="Picture - Locked Account" /></p> <h2>IMPORTANT:</h2> <p>I'm presenting this information as proof of the theory that the Pornstar Name meme is damaging, and provides enough information to compromise an account. &nbsp;</p> <p><strong>Again,&nbsp;I did this test with the full permission of the real account holder. &nbsp;I do not condone the use of this information for nefarious or illegal purposes, it is presented for educational use only.</strong></p> <p>&nbsp;</p> <p><strong>New Project: (or How to build an application in 5 days)</strong></p> <p><br />About a week ago, my good friend <a href="" target="_blank">@Moof</a> <a href="" target="_blank">asked the question</a> <em>&ldquo;Is there a website out there monitoring if countries currently in revolt have full connections to the internet? Is eg Bahrain disconnected?&rdquo;</em></p> <p><br />I thought this sounded like a challenge too good to pass up, and set about coming up with a way to figure out how we could programmatically determine the state of a country&rsquo;s internet. 
&nbsp;</p> <p><br />I&rsquo;ve lately come up against the problem that when faced with a new idea, the hardest part is getting it built, and working fast enough to ensure that your idea isn&rsquo;t stolen by another like-minded individual.&nbsp;</p> <p><br />With this in mind, I started work as soon as I&rsquo;d finished $dayjob at about 5pm on the 14th, and didn&rsquo;t stop until 3am. &nbsp;Putting together a week of 5pm - 3am development time, and calling in a favour from a <a href="" target="_blank">very good designer</a> I know, meant that we were able to launch the site by early Friday afternoon. &nbsp;</p> <p><br /><a href="" target="_blank"></a> is a simple at-a-glance view of the world&rsquo;s internet connection status. &nbsp;Every country has a button, with its name and flag, which is either Green, Orange or Red, depending on the status of their internet.<br />Green is a Systems OK, all checks passed state, Orange indicates that some of the country&rsquo;s servers are inaccessible, OR there are no servers registered for that country, and Red indicates that the country is Offline, ie, all servers registered against that country returned a false check status.</p> <p><br />The application is written exclusively in Python/Django, and backed onto a PostgreSQL database, with a hint of memcached in there to accelerate the page load-times. &nbsp;In the hour or two before go-live, I was experimenting with different caching settings.<br /><br />Using no page caching at all, the time to load the index page was about 4s (down to page generation, more than anything), rising to 8-10s whilst handling 20 concurrent connections. &nbsp;Moof expected a viral response to the site, especially if it ended up on <a href="" target="_blank">Linklog</a>, or <a href="" target="_blank">reddit</a>, so fast performance was a high priority.&nbsp;<br /><br />Due to the way the pages are generated, some of the data doesn&rsquo;t lend itself to caching. 
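As an aside before the caching details: the Green/Orange/Red rules described above reduce to a small function. This is a sketch of that logic, not the actual site code.

```python
def country_status(check_results):
    """Map a country's server check results to a traffic-light status.

    check_results: list of booleans, one per server registered against
    the country (True = check passed).
    Green  - Systems OK: all checks passed.
    Orange - some servers inaccessible, OR no servers registered.
    Red    - Offline: every registered server returned a false check.
    """
    if not check_results:
        return "orange"  # nothing registered: we can't claim it's up
    if all(check_results):
        return "green"
    if not any(check_results):
        return "red"
    return "orange"
```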
&nbsp;Static assets are already served from Nginx, so that&rsquo;s pretty fast and well behaved. &nbsp;The individual country pages (<a href="" target="_blank"></a>) don&rsquo;t lend themselves to caching, because some of the data is very changeable. &nbsp;<br />In spite of that, the service that provides the data for the graph does heavily cache the stream. &nbsp;Given that the resolution of the graph is on a scale of hours, the caching time reflects that, so that concurrent hits to a page will get cached graph data. &nbsp;</p> <p>We can also anticipate that more hits will occur to a country which is Offline or Unstable, as people will want to find out what&rsquo;s going on, so having some level of caching on those pages is very important.</p> <p>I experimented with a site-wide cache of all pages generated, but discovered early on that cache invalidation was a big problem: country statuses weren&rsquo;t updating quickly enough, based on the lifetime of the cache object, so as a trade-off between more up-to-date information and caching a little less, having a correct view of the global internet won out, naturally.<br /><br />The index page, now that the list is cached for 10 minutes, loads roughly 1600% faster than before. &nbsp;There are two tiers of caching taking place on this page: firstly, queries are cached with Memcached (transparently by Django), and sections of the index are template-cached.</p> <p><br />I&rsquo;m very aware that the site is currently prone to false-negatives, that is to say, sometimes countries appear Unstable or Offline when they&rsquo;re not, but we&rsquo;ve also seen good reporting of positives, such as Saturday morning when <a href="" target="_blank">Libya </a> was disconnected. 
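The caching trade-off above (cache lifetime versus status freshness) is the whole game here. To make it concrete, this is a minimal TTL cache standing in for the memcached layer; it's a sketch for illustration, not the site's code, with the clock injectable so the expiry behaviour is testable.

```python
import time


class TTLCache:
    """Minimal stand-in for a memcached-style cache: each entry carries a
    lifetime, trading freshness against regeneration cost. A short TTL
    keeps an Offline country from appearing Online for very long."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable for testing
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            del self._store[key]  # expired: force regeneration
            return None
        return value
```

With a 600-second TTL, this is roughly the behaviour of the 10-minute index cache: a cheap hit for ten minutes, then one expensive regeneration.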
&nbsp;<br /><strong>This is a beta service</strong>, at best, still under active development, and still very much reliant on the power of crowd-sourcing: people visiting our website, getting the word out about the application and the project, and ideally submitting IP addresses for us to check.<br /> The more IP addresses we&rsquo;ve got, the more accurate the check data will be, and then the more accurate the site will be.<br />It&rsquo;s very difficult to perform accurate statistical functions on a very small dataset, and when you do, the margin for error is vast.<br />We&rsquo;re actively improving the site to make it more feature-rich, as well as more accurate, by determining which servers to register against countries more intelligently.</p> <p>We&rsquo;ve got a reasonably good idea of what&rsquo;s required to make the data even more accurate, and we&rsquo;re working on that at the moment. &nbsp;</p> Monitoring with Munin <p>&nbsp;</p> <p>One of the things I&rsquo;m massively fond of when it comes to systems administration is logging and monitoring. &nbsp;I love <a href="" target="_blank">munin</a>, and still prefer it over <a href="" target="_blank">Cacti</a> and <a href="" target="_blank">Zabbix</a>. &nbsp;I think the main reason is that it allows plugins to be configured with absolutely no browser interaction. &nbsp;<br />Creating a new graph on cacti or zabbix requires a considerable number of clicks. &nbsp;It&rsquo;s easy to install new munin plugins with things like <a href="" target="_blank">Puppet</a>. &nbsp;So.. Munin. &nbsp;Let&rsquo;s take a bit of a closer look.</p> <p>There&rsquo;s two parts to a munin installation. &nbsp;<strong>Munin server</strong>, and <strong>munin-node</strong>. &nbsp;</p> <p>Munin server doesn&rsquo;t really do the cool stuff, just data aggregation and graph creation. &nbsp;</p> <p>I&rsquo;ve included an example munin.conf <a href="" target="_blank">here</a>.</p> <p>There&rsquo;s only a couple of quirks here. 
&nbsp;</p> <p>I&rsquo;ve found that for most installations, you can leave the vast majority of settings in place, as they come from the version installed by apt / yum / $package_manager_of_your_choice.</p> <p>So, the actual munin documentation suggests that use_node_name is a dodgy thing to do, but it&rsquo;s actually pretty useful, especially when you&rsquo;re defining SNMP hosts.</p> <p>use_node_name tells the grapher not to use the hostname that&rsquo;s in [brackets], but instead to use the name in the connection banner (you can see this yourself once munin is running: telnet (or nc) to localhost:4949, and look for the line &ldquo;#munin node at &lt;your host&gt;&rdquo;)</p> <p>SNMP hosts.. are without doubt the coolest thing that Munin can do. &nbsp;By default, the auto-configuration of SNMP hosts will allow you to monitor some interesting things about routers, switches and windows hosts. &nbsp; So.. the only major quirk is that the snmp plugins run on one of your munin-node instances, so you have to set that node as the address in the host definition. &nbsp;In the example, I&rsquo;ve done this on the munin server. &nbsp;</p> <p><strong>Munin-node. </strong>&nbsp;Very extensible, but as far as config goes, the default configuration that comes in the installation is more than capable.&nbsp;</p> <p><a href="" target="_blank">Here&rsquo;s mine</a>.</p> <p>If you have multiple munin-servers, or want to retrieve munin-plugin data from Nagios servers, then you can add multiple &ldquo;allow&rdquo; regex lines. &nbsp;</p> <p>&nbsp;</p> <p>So.. Munin plugins. &nbsp;This is the Really Cool Stuff.</p> <p>You can write munin plugins in any language you like. &nbsp;The vast majority on <a href="" target="_blank">Munin Exchange </a>&nbsp;are written in Perl or Bash. &nbsp;I prefer writing in Python, and the <a href="" target="_blank">munin-python</a>&nbsp;module is gorgeous. 
&nbsp;</p> <p>Basically, you need to handle two things, &ldquo;<em>config</em>&rdquo; and &ldquo;<em>run</em>&rdquo; modes. &nbsp;</p> <p>Munin-run is the thing that handles the plugin, and runs &ldquo;your-plugin config&rdquo;. &nbsp;This is what defines the format of the RRD files that munin uses to generate graphs. &nbsp;OK, so let&rsquo;s look at a simple munin plugin. &nbsp;I think we&rsquo;ll monitor... the number of files in /tmp (well, why not?)</p> <p><a title="Plugin details" href="" target="_blank"></a></p> <p>If we run that with python tmp_files config, then we get:</p> <pre>graph_title Number of Files in /tmp</pre> <pre>graph_category system</pre> <pre>graph_args --base 1000 -l 0</pre> <pre>graph_vlabel files</pre> <pre>files.info The number of files in /tmp</pre> <pre>files.warning 10</pre> <pre>files.critical 120</pre> <pre>files.min 0</pre> <pre>files.type GAUGE</pre> <pre>files.label files</pre> <p>and if we run it without &ldquo;config&rdquo;, we get:&nbsp;</p> <pre>files.value 18</pre> <p>So, that works. &nbsp;:)</p> <p>&nbsp;</p> <p>Now if we copy that into /usr/share/munin/plugins, and chmod +x, and symlink it into /etc/munin/plugins.. and restart munin-node..&nbsp;</p> <pre>$ sudo mv tmp_number /usr/share/munin/plugins/tmp_number</pre> <pre>$ sudo ln -s /usr/share/munin/plugins/tmp_number /etc/munin/plugins/tmp_number</pre> <pre>$ sudo chmod a+x /usr/share/munin/plugins/tmp_number</pre> <pre>$ sudo /etc/init.d/munin-node restart</pre> <pre> * Stopping Munin-Node                [ OK ]</pre> <pre> * Starting Munin-Node                [ OK ]</pre> <pre>$ munin-run tmp_number</pre> <pre>files.value 18</pre> <p>&nbsp;</p> <p>Cool. &nbsp;Right.. 
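The plugin itself boils down to something like the sketch below. This is a reconstruction using plain prints rather than the munin-python module, but it emits the same config and value output shown above.

```python
#!/usr/bin/env python
# Sketch of the tmp_files plugin: "config" mode describes the graph,
# plain mode emits the current value. (A reconstruction; the original
# used the munin-python module.)
import os
import sys


def config_lines():
    """Describe the graph and the 'files' field to munin."""
    return [
        "graph_title Number of Files in /tmp",
        "graph_category system",
        "graph_args --base 1000 -l 0",
        "graph_vlabel files",
        "files.info The number of files in /tmp",
        "files.warning 10",
        "files.critical 120",
        "files.min 0",
        "files.type GAUGE",
        "files.label files",
    ]


def value_lines(path="/tmp"):
    """Emit the current reading for the 'files' field."""
    return ["files.value %d" % len(os.listdir(path))]


if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else ""
    for line in (config_lines() if mode == "config" else value_lines()):
        print(line)
```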
now that&rsquo;s done, and munin-node&rsquo;s been restarted, all we have to do is wait a while, and the new graph will get created. &nbsp;This can take a while; 5-10 minutes is a good guesstimate, but it can be longer.</p> <p>This is the graph produced by the plugin:</p> <p><img style="vertical-align: middle;" src="/media/tmp_number-day.png" alt="Example output from tmp_munin plugin" width="495" height="271" /></p> <p>Clever, eh?</p> <p>If you find that you&rsquo;ve waited ages, and still have no graphs, take a look at /var/log/munin on the munin-server and munin-node. &nbsp;There&rsquo;s plenty of non-cryptic logging there, and it&rsquo;s all pretty self-explanatory.</p> <p>&nbsp;</p> Dedicated, Dedicated, Dedicated, Dedicated <p>After answering <a href="" target="_blank">this </a><a href="" target="_blank">question</a>, I reconsidered my answer a number of times, and I&rsquo;ve finally decided to rewrite it as a longer blog/essay on my website. &nbsp;One of my <a href="" target="_blank">fellow sysadmin types</a> on <a href="" target="_blank">Serverfault</a>&nbsp;wrote an <a href="" target="_blank">answer</a>&nbsp;from a blog-post, and I intend to do the opposite. &nbsp;<br />&nbsp;<br />Right. <br />I see a lot of questions which are basically, &ldquo;<em>I want my blog/social network/niche site/new product launch site to handle a whole bunch of traffic, how do I do it?</em>&rdquo;. &nbsp;</p> <p><br />That&rsquo;s pretty much what a lot of these questions boil down to, eventually. &nbsp;<br />I&rsquo;m going to make a few assumptions too. &nbsp;<br />Given that someone&rsquo;s taking the time to ask, I&rsquo;ll assume that they&rsquo;re actually concerned about the uptime of the site. &nbsp;For whatever reason, whether it&rsquo;s because their employer is telling them that they must have 5 nines uptime or better, or the site&rsquo;s actually making money for them. 
&nbsp;Whatever the reason, we can generally accept that these websites are business oriented, and <em>*should*</em> have a reasonable budget assigned. <br />After having worked for a few different companies now, I can also fully accept that this second assumption might be a bit generous, and not everyone has a good concept of how large, or encompassing, the budget should be.</p> <p>&nbsp;<br />Let&rsquo;s start at the bottom, with the really basic stuff. &nbsp;<br />One server will be OK for a certain level of uptime, but at some point you&rsquo;ll near the sharp rise of the <a href="" target="_blank">bathtub curve</a>, where the probability of hardware failure goes up rapidly. &nbsp;When it does fail (and it will), if Murphy&rsquo;s law is anything to go by, it&rsquo;ll be when you&rsquo;re out of town, at a wedding, or in the pub.<br />It&rsquo;s for this exact reason that, as a Systems Engineer, I can&rsquo;t count any lower than two. &nbsp;What I mean by this is that everything should come in pairs. &nbsp;Two servers, containing &gt;2 hard disks, 2 power supplies, and so on. &nbsp;<br />So, let&rsquo;s build a server, based on the above theories. &nbsp;<br />Disks fail lots. &nbsp;They&rsquo;ve got moving parts. &nbsp;So let&rsquo;s concentrate on those. &nbsp;If you&rsquo;ve only got one disk, and it fails, you&rsquo;re screwed. &nbsp;So let&rsquo;s put 2 disks into this server. &nbsp;<br />You&rsquo;ve got a choice again between hardware and software RAID. &nbsp;Linux software RAID is pretty good these days, but in some cases, hardware RAID is still preferable. &nbsp;I&rsquo;m a massive fan of <a href="" target="_blank">3ware</a> and <a href="" target="_blank">Adaptec</a> cards. &nbsp;Hardware RAID, if you get a good card, is invaluable. &nbsp;FakeRAID, as typically found on motherboards or low-end RAID cards, is a bit of a ripoff. &nbsp;It&rsquo;s actually a form of software RAID, and utilises the main CPU. 
&nbsp;On a hardware RAID card, the onboard CPU takes a massive load off your main CPU, and is more efficient at processing nested RAID levels than software RAID is. &nbsp;The main CPU should probably be doing the really cool stuff your server is designed for, not low-level work like disk processing.<br />There&rsquo;s also something to be said for hardware RAID when it comes to non-Linux operating systems. &nbsp;I gather that hardware RAID on Windows platforms is a lot more stable than software RAID on the same.<br />So, basically, if you value your sleep, and your uptime, then you&rsquo;re going to need to protect yourself from these failures.</p> <p><br />That&rsquo;s disks out of the way for the time being; let&rsquo;s talk about power. <br />Most good servers (and by good, I mean ones I&rsquo;d consider in a high-availability infrastructure) have the capability of dual, or multiple, power supplies. &nbsp;These are brilliant, and protect against PSU failure, and power rail failure. &nbsp;Be warned, however: if you connect the PSUs to different phases, you&rsquo;ll probably see a very pretty, yet expensive, fireworks show, and possibly set off the fire detection systems in the datacenter. &nbsp;Not a great idea.</p> <p><br />In spite of the benefits of multiple-PSU servers, they are more expensive, and to some extent don&rsquo;t offer a massive benefit: if the multiples are all plugged into the same power source, then you&rsquo;re really only protecting against PSU failure.<br />The biggest problem I&rsquo;ve seen in a datacenter, related to power, is the rack monkeys unplugging or rebooting the wrong server. 
&nbsp;As far as mitigating this goes, accidental unpluggings can be cut down with <a href="" target="_blank">locking C13 cables</a>, and remote-hands reboots can be avoided by using iLO/DRAC or an IP-PDU (Power Distribution Unit).<br />Whilst we&rsquo;re on the topic of ancillary rack hardware, things worth having:</p> <ul> <li>IP-PDU (<a href="" target="_blank">APC </a>are very good)</li> <li>IP-KVM (<a href="" target="_blank">Raritan </a>and <a href="" target="_blank">Avocent </a>both seem to be leaders in this market; Startech are OK, but the interface is a bit clunky.)</li> <li>IP-Serial Console (Raritan, Avocent, etc.)</li> </ul> <p>&nbsp;</p> <p>I&rsquo;ve rarely seen a 1U keyboard/monitor shelf in a rack. &nbsp;There&rsquo;s actually little point; you&rsquo;d be better off with a good Dell laptop and a USB-&gt;Serial cable, and perhaps stow a keyboard and monitor separately in your rack somewhere.</p> <p><br />Wow, I really digressed there. &nbsp;Sorry about that. &nbsp;Where was I? &nbsp;Disks, Power... let&rsquo;s look at the network.</p> <p><br />Good servers have multiple NICs. &nbsp;You need to design your network to make use of this. &nbsp;Having one server/one NIC is good until your switch dies, or the NIC dies, or similar. &nbsp;Then your server goes down, and people get shouty.<br />But again, a pair of NICs is no good if they&rsquo;re only connected to a single switch. &nbsp;Not only will it intensify any Spanning-Tree problems you may have, but it also provides no protection against switch failure.</p> <p>So, a pair of switches. &nbsp;Or similar multiples of two, thereof. 
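The arithmetic behind "everything in pairs" is worth sketching out. Assuming each device fails independently (a simplification; shared power or cabling breaks that assumption), a redundant pair is only down when both halves are down, while chaining single points of failure multiplies your exposure. A quick illustration, with helper names of my own:

```python
# Availability of a redundant pair vs. a serial chain of single
# points of failure, assuming independent failures (a simplification).

def pair_availability(single):
    """An active/passive pair is down only when both members are down."""
    return 1 - (1 - single) ** 2


def chain_availability(*stages):
    """A serial chain (e.g. NIC -> switch -> firewall) is up only if every stage is up."""
    result = 1.0
    for stage in stages:
        result *= stage
    return result


print(round(pair_availability(0.99), 6))          # 0.9999
print(round(chain_availability(0.99, 0.99), 6))   # 0.9801
```

So a pair of 99%-available switches behaves roughly like one 99.99%-available switch, whereas two unpaired 99% devices in series drop you to about 98%. Hence the pairs.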
&nbsp;</p> <p><br />Not only do you want a pair of switches on the network infrastructure, but you also want them to connect to a HA pair of firewalls. &nbsp;I quite like the <a href="" target="_blank">Cisco 55xx</a> series; the 5510 and better offer good Active/Standby pairing, so switching between the two is simple: they monitor each other, and share a virtual IP between them, with HSRP.</p> <p><br />Next up, routing. &nbsp;You&rsquo;ll want a couple, if not more, connections to the internets, for really nice stable connectivity. &nbsp;I&rsquo;ve argued this point over with a couple of colleagues, former and current. &nbsp;When you&rsquo;re relying on someone else for your network connectivity, and you only have a single connection to their network, regardless of how diverse their network may be, you still only have a single connection, and that&rsquo;s your biggest point of failure. &nbsp;You can harden your servers as much as possible against device and part failure, and I still highly recommend that you do, but if you don&rsquo;t have resiliency and redundancy at every level, then there&rsquo;ll still be a single point of failure somewhere on your network.</p> <p><br />It&rsquo;s actually perfectly acceptable to have multiple IP transits from different providers, providing different IP addresses, as long as your application can cope with that. &nbsp;</p> <p>The ultimate solution, however, requires having a couple of powerful routers at the edge of your network. &nbsp;<br />These connect to a couple of transit providers and advertise your IP addresses over BGP. &nbsp;You get a full internet routing table, and the rest of the internet sees the routes to your netblock. &nbsp;The real bugger is, though, that you&rsquo;ve gotta have a reasonably large netblock to get noticed. &nbsp;At the moment, this is a /24 or bigger. 
&nbsp;That&rsquo;s 256 IP addresses, and given the current rate of IPv4 depletion, getting justification for one of these is getting harder and harder.<br />Your carrier/transit provider can help you with the paperwork to get it all sorted out, though. &nbsp;To stop the full routing table becoming enormous, ISPs filter out netblocks smaller than a /24, so a certain amount of route aggregation takes place. &nbsp;</p> <p>Having BGP-capable routers, and maintaining your own connections to the internet, isn&rsquo;t a walk in the park. &nbsp;It&rsquo;s a specialist task, and requires a skilled systems engineer to operate and maintain it. &nbsp;Not a task for the unprepared.<br />There are other advantages, of course: if, for example, you use the Akamai CDN lots, and you&rsquo;re paying a lot for traffic to their network, then you may be able to enter into a peering agreement with them, where the traffic to their network is delivered cheaply, or for free, because it&rsquo;s mutually beneficial for both parties in the agreement. &nbsp;</p> <p>So I think we&rsquo;ve covered most of the major points on hardware and network resiliency and availability. &nbsp;Let&rsquo;s look at how to put that all together and build a hosting cluster.</p> <p>For the purposes of brevity and clarity, let&rsquo;s assume for the time being that you&rsquo;ve chosen to use a dedicated host provider.&nbsp;<br />If you&rsquo;re choosing to do the owned hardware / rented rack space thing, then there&rsquo;s not a great deal of difference in the actual configuration of the servers, but there&rsquo;s more complexity involved, etc.</p> <p>For most small/medium size hosting needs, I tend to recommend a 4 node hosting cluster. &nbsp;This is based on having 2 web servers, and 2 database backends. 
&nbsp;<br />When you start off with a single webserver, with the database server on the same physical box, the fastest way to increase performance is to split and have 2 physical boxes: one for the webserver, one for the database. &nbsp;The biggest problem with this is that speed and scalability/redundancy/resilience tend to go hand in hand. &nbsp;&nbsp;I personally don&rsquo;t like single points of failure, so having a pair of everything has become something of a personal motto. &nbsp;<br /><strong><em>&ldquo;I&rsquo;m a systems engineer, I can&rsquo;t count any lower than two&rdquo;<br /></em></strong></p> <p>So if you&rsquo;re concerned about uptime, then having a group of servers is a great thing. &nbsp;<br /><img style="vertical-align: middle;" src="" alt="Simple 4-node hosting cluster." width="550px;" height="434px;" /><br /> This is a rather quick and dirty sketch I knocked up in mspaint. &nbsp;</p> <p>I&rsquo;ve been asked to provide the configuration information for this cluster, so I&rsquo;ll copy that from my VM server later on. &nbsp;I tend to use my <a href="" target="_blank">powerful desktop</a> to build test infrastructures quite a lot, so knocking up and provisioning 4 VMs is no massive stress.</p> <p><br />A note now on configuration management. &nbsp;I have been in the position in the past where the &ldquo;How to build a server&rdquo; information is in a wiki page somewhere. &nbsp;This is alright, but you do tend to end up with a process that&rsquo;s difficult to keep documented. &nbsp;&nbsp;</p> <p><br />Instead, I prefer a combination of Preseeding and <a href="" target="_blank">Puppet</a>. &nbsp;&nbsp;For no other reason than it&rsquo;s what I prefer, we&rsquo;ll be using Ubuntu 10.04 LTS in this article, although the processes for any other distribution aren&rsquo;t too different. &nbsp;When it comes to Puppet, both Debian/Ubuntu and RHEL/CentOS work well. 
&nbsp;I haven&rsquo;t tested Puppet with any other distros, but I gather that it&rsquo;s fairly well supported across the board.</p> <p><br />Preseeding is the process of automating the steps of installing the operating system. &nbsp;It&rsquo;s basically an unattended installation process that tells the installer what you would have selected, had you been in front of every machine whilst installing. &nbsp;As you can no doubt imagine, when you&rsquo;re building a farm of servers, preseeding is a massive bonus, and a timesaver. <br />Ubuntu, Debian, and Redhat-like distributions all have a mechanism for preseeding a machine from bare metal. &nbsp;I suspect that other distros do similar things, but I&rsquo;ve never used them in a production environment. &nbsp;Preseeding is a fairly broad topic, so I&rsquo;ll cover that in a separate blogpost.</p> <p><br />One of the things I adore about Puppet is the community-contributed packages that are available at the touch of a button. &nbsp;I&rsquo;ve built a fairly comprehensive puppet infrastructure from a majority of contributed modules and packages. &nbsp;I tend to just <a href=";ie=UTF-8&amp;" target="_blank">search Google</a> for &ldquo;puppet-&rdquo; modules, like puppet-apache and so on. <br />I, like many other sysadmins and systems engineers, am quite lazy, and have lately started to reap more benefits from the open-source puppet community. &nbsp;You can pretty much build an entire infrastructure for a simple LAMP stack based solely on other people&rsquo;s puppet configs.</p> <p><br />Puppet is lovely; it really is very easy to get going with. &nbsp;Just start off with a server as your &ldquo;puppetmaster&rdquo;. &nbsp;I&rsquo;ve tended to go towards using <a href="" target="_blank">Amazon EC2</a> micro instances for these, for small deployments. &nbsp;When I&rsquo;m working on my VM network, I just use my workstation.</p> <p><br />I use bzr for my source control for puppet. 
&nbsp;I like bzr, and it&rsquo;s one of the best VCS tools I&rsquo;ve used. &nbsp;It doesn&rsquo;t matter what you use, as long as you use something. &nbsp;<br />But if you&rsquo;re not using source control, then there are bigger problems, and you need to rectify those first. <strong><em>Really</em></strong>.</p> <p><br />Luckily, the vast majority of people&rsquo;s contributed puppet modules and classes interoperate pretty well. &nbsp;Once you&rsquo;ve got a decent setup for the /etc/puppet directory, and configured puppet itself, then the next bit is really easy.</p> <p><br />A very basic guide to setting up puppet can be found here:&nbsp;<br /><a href="" target="_blank">Bitfield Consulting - Puppet</a></p> <p><a href="" target="_blank"></a><br />Software we&rsquo;re going to use:</p> <ul> <li><a href="" target="_blank">Ubuntu </a>10.04 LTS</li> <li><a href="" target="_blank">Apache </a>2.2</li> <li><a href="" target="_blank">Varnish </a>2.1.4 (although, in this article, I&rsquo;m using <a href="" target="_blank">Pound</a>, rather than Varnish; I&rsquo;ll detail Varnish in another blogpost)</li> <li><a href="" target="_blank">MySQL &nbsp;</a>5.2.something</li> <li><a href="" target="_blank">PHP</a> 5.3.something</li> <li><a href="" target="_blank">Wordpress</a>? I think WP will be OK, actually.&nbsp;</li> <li><a href="" target="_blank">Memcache </a>1.4.5</li> </ul> <p>&nbsp;</p> <p>All of it nice, free, open-source goodness. &nbsp;We like open-source.</p> <p><br /><a href="" target="_blank">This</a> is the full working puppet config that I use. &nbsp;It&rsquo;s all pretty much gleaned from other <a href=";ie=UTF-8&amp;" target="_blank">puppet-*</a> repos on github, and a few other places.</p> <p><br />So.. In theory, you should be able to check out a copy of the above, put it in /etc/puppet, and you should get a working puppetmaster, and be able to initialise 4 nodes. 
&nbsp;</p> <p><br />You will need to do some individual config, such as the loadbalancer setup, and adding vhosts for apache. &nbsp;I&rsquo;ve found that if you&rsquo;re building biiiig farms, with lots of the same stuff, then adding the vhost config to the puppet manifest is a good thing to do, but for 2 servers this kind of config is very easy to do by hand.</p> <p><br />So that&rsquo;s about it. &nbsp;I think. &nbsp;<br />I know I've digressed momentarily from the main stream of this evening&rsquo;s symposium <em>[extra points if you know where this is from]</em>, but I think it&rsquo;s for the better. &nbsp;There&rsquo;s a lot of bits and bobs that I left out, and perhaps shouldn&rsquo;t have, and some stuff I left in that perhaps shouldn&rsquo;t have been. &nbsp;I&rsquo;ve been meaning to write this up for a Very Long Time, and hope that it might be of some use to some of you, albeit under a somewhat bizarre set of circumstances <em>[and this..]</em>.</p> <p>&nbsp;</p> <p>I&rsquo;ll just recap briefly and say that when it comes to the server design for new projects, a VPS isn&rsquo;t a total writeoff, but I have found in a number of instances that IO performance is the biggest bottleneck on these virtualised systems. &nbsp;<br />That isn&rsquo;t to say that every new project and infrastructure requires a 4 node LAMP system, not by a long way, but if you actually have the traffic, the requirement, and the budget to do it, then with dedicated servers (or colocated/owned servers) you&rsquo;ll probably find considerably better performance than you would with a VPS.</p> <p><br />A final side-note on general price and suitability, and something that&rsquo;s more relevant to the first part of this article.<br />All servers are not created equal. &nbsp;A 1U server from HP might set you back as little as &pound;600, or as much as &pound;2500. 
&nbsp;On the other hand, you could build your own 1U servers from a 1U case and off-the-shelf parts, but the build quality will be lower. <br />Combine that with the fact that you don&rsquo;t get any kind of parts warranty to the same extent that you do with business-grade hardware, and that consumer parts aren&rsquo;t designed for a 100% duty cycle. &nbsp;<br />It is possible to make a desktop machine into a server, and do everything on the cheap, but I highly recommend against it. &nbsp;Things will fail, you won&rsquo;t get warning, you won&rsquo;t get warranty, and it won&rsquo;t be pretty.</p> <p><br />If you&rsquo;re making money from your infrastructure, or the systems that sit on it, then you&rsquo;ve got some mechanism of getting the money back from the outlay of &ldquo;doing it right&rdquo;. &nbsp;If you&rsquo;re just building a lab environment, or playing with toy servers in your parents&rsquo; basement, then good luck to you, enjoy everything you do, but don&rsquo;t host other people&rsquo;s data on your toys.</p> <p><br />High-availability isn&rsquo;t something to look at lightly; it&rsquo;s a pretty hardcore branch of systems engineering. &nbsp;You&rsquo;re playing with the big boys now, and you need to have invested a similar amount of money in your hardware as they have in theirs.&nbsp;</p> <p><br />Unless you&rsquo;re Google, Facebook, or Twitter. &nbsp;But you&rsquo;re not.</p> <p>(If you are Google, Facebook or Twitter, then please leave me some insightful comments ;)</p> Velleman 8055 Drivers <p> <p>About 5 years ago, I wrote some "drivers" to interface with a Velleman K8055 USB interface card. 
&nbsp;After a recent request, I have decided that they should be brushed up a little, and reinstated with a download link.</p> <p>I'm open to bug reports, but I can't promise that I'll be able to fix them quickly.&nbsp;</p> <pre>Instructions to compile/install

You'll need the following dependencies (Ubuntu package names):
libusb-dev libqt4-dev qt4-qmake

1) run qmake -o Makefile
2) cd lib
3) make all; make install
4) cd ..
5) make
6) plug in your 8055 usb board
7) sudo ./qcontrol
8) enjoy!</pre> <p>I'm reconsidering writing some python bindings for the driver, stay tuned!</p> </p> Legacies <p> <p>The legacy of a nobody</p> <p>&nbsp;</p> <p>I work hard, and try not to let the future bother me. &nbsp;I don't make 10 year plans; hell, I'm lucky if I know where I'll be in 30 days' time. &nbsp;I don't make massive future plans, because I've found that everything can change massively week-to-week, and hastily made plans are often proved unserviceable. &nbsp;I suppose this is par for the course when working in IT. &nbsp;There's a sheer unpredictability when working with the internet, long hours and late nights along the way, but basically all computer systems obey Murphy's Law. &nbsp;</p> <p>&nbsp;</p> <p>You will be interrupted mid-holiday, mid-wank, or if you're really lucky (or unlucky...) mid-shag. &nbsp;When you have a network to maintain, I like to think it's like being a parent with a child. &nbsp;Especially a toddler. &nbsp;There's always the haunting suspicion that something will happen, requiring emergency attention, although these things tend to be a lack of disk space, rather than a pea shoved into an unsuitable orifice.</p> <p>&nbsp;</p> <p>All that aside, these days, I find myself wondering more and more; "what will my legacy be?". &nbsp;That is to say, when I've died, how will I be remembered?&nbsp;</p> <p>I'm gay, and genetically messed up enough to be unable to have my own children; this alone is something that troubles me occasionally. 
&nbsp;Not the being gay bit, but being unable to continue my family's genes. &nbsp;Under the circumstances, that might be a good thing.&nbsp;</p> <p>In my father's and my own eyes, my grandfather was a great man. &nbsp;We both learned a great deal from him, both about engineering, and life in general. &nbsp;A great deal of my skill with metal and wood, and design, comes directly from his influence. &nbsp;My interest in computing is mostly my dad's influence, at an early age, with an Apple IIe. &nbsp;I guess I'm just trying to say that my ancestors are greater than me. &nbsp;Greatness being totally subjective, of course, but I still get the massive feeling that there's something epic missing. &nbsp;</p> <p>&nbsp;</p> <p>For a very long time, I looked at my peers who had had children with a sense of disdain. &nbsp;Perhaps something wasted, by having spawned so early on, but I now realise, at least they have done it. &nbsp;At whatever stage of life, at least they had the chance. &nbsp;</p> <p>&nbsp;</p> <p>Perhaps I could term my computer systems as my children. &nbsp;If that were the case, then I'd have had more than two dozen, over the last 10 years. &nbsp;Under this metaphor however, there's no longevity involved. &nbsp;Aside from some minor things which I know are still there, the majority of the systems with which I've worked have now ceased to be. &nbsp;Servers and Networks which I've designed and implemented have been replaced by smarter and faster ones. &nbsp;It's like a destructive evolution. &nbsp;One that leaves no fossils, no trace of earlier systems, and their designer. &nbsp;</p> <p>I haven't published any papers, written any journals, sown my seeds of academic greatness (ha!), or computational excellence. 
&nbsp;If I died tomorrow, there would be very little to mark my place in history.</p> <div></div> </p> N's Story <p> <p>After receiving such a great response to my own article (thanks everyone!), a good friend of mine asked whether I'd publish his similar story here.</p> <p>If any of these stories give enough hope to just one teenager (or anyone) to let them survive the hardship of coming out, and homophobic abuse, then that's enough.</p> <p>For various reasons which will become apparent as you read, N has decided to use this moniker to protect his identity.</p> <p>So, without further ado, here is N's Story. &nbsp;</p> <p>&nbsp;</p> <p><strong>It Gets Better (No, really)</strong></p> <p>Reading Tom&rsquo;s article (credit where credit&rsquo;s due) inspired me to write my own It Gets Better story, to which as you&rsquo;ve noticed I had to add an allusion to the fact that were it not for most of my friends (i.e. true friends, yes, again we come back to that careful wording, in some ways it&rsquo;s even more important in my case) I&rsquo;d be looking very lonely in my corner.&nbsp;</p> <p>You see, being gay was unthinkable... literally.&nbsp;</p> <p>I grew up with two parents who&rsquo;ve both decided that I&rsquo;m a poor excuse for a human being, as first my father and more recently my mother decided that disowning me was the better part of parenting. Now don&rsquo;t get me wrong, I deeply love and admire my mum. I want to stress that, because you&rsquo;re not going to like her much and I do want to insist that she has redeeming qualities.&nbsp;</p> <p>For example, she was very supportive financially and always pro-active in standing up for me when I was bullied at secondary school (something I&rsquo;ll come back to later in this article).&nbsp;</p> <p>She also proved to be surprisingly OK with other people&rsquo;s sons being gay... 
although in that respect she deserves to be cited as an example of what a NIMBY (<em>Not In My Back Yard</em>) is, because she certainly wasn&rsquo;t OK with me not being straight.</p> <p>One of the things that I found moving in Tom&rsquo;s article, in respect to my own experience, was how similar our experience of growing up gay was.&nbsp;</p> <p>We both had our first big crush at the same ages, i.e. 6-8 years old; in my case, my best friend&rsquo;s boyfriend.&nbsp;</p> <p>It&rsquo;s ironic that her dad was openly homophobic; &ldquo;poofs&rdquo; and &ldquo;queers&rdquo;, his very disgusted words on the subject, already hurt when it came up for whatever reason in conversation.&nbsp;</p> <p>I suppose the difference between my upbringing and Tom&rsquo;s was that no sooner did I realise that I Liked Other Boys at the innocent age of 6-8 (I only learned the word for it when I was about 10), than I was immediately hit by a wave of hostility, disgust and disapproval.&nbsp;</p> <p>Things became even worse when my parents separated, as I was stuck in the middle and both of them saw me as an extension of their ex-spouse.</p> <p>This meant that while sticking up for my mum at his house I was getting &ldquo;you&rsquo;re just like your father&rdquo; thrown at me (and occasionally fists, frozen bread, and on one occasion I was threatened with homelessness... well, I was 16 by the time that last incident happened), and on the other hand I was getting &ldquo;you and your mother&rdquo; from my &ldquo;dad&rdquo;, who then disowned me two years down the line when the divorce came through (I was 15 at this point).&nbsp;</p> <p>I smothered my feelings the same way a rat eats her young if you disturb her. 
It was a way of protecting myself from the hurt that I was getting, and was partly a conscious decision, partly a defensive reflex.&nbsp;</p> <p>Of course, this set me up for trouble, and just how much will soon become apparent.</p> <p>My first head-on encounter with this concerned a massive crush on my then best friend when I was ten, a tall blonde lad (yep, another rugby/ football player) who had my heart skipping beats every time we spent time together and then broke it by saying &ldquo;Oh God, you&rsquo;re not homosexual are you?&rdquo; A question provoked by a completely unrelated &ldquo;I have something to tell you&rdquo; that referred to a message from someone else.&nbsp;</p> <p>I stammered a &ldquo;No... No. I&rsquo;m not.&rdquo;&nbsp;</p> <p>Yes, that is correct. I denied my sexual orientation and my love three times... and then the break time ended. Oh the irony.&nbsp;</p> <p>There was a moment I&rsquo;ll never forget concerning him though, this time for happier reasons. During a school trip, one of the other boys was being very very nasty about my obvious feelings for our friend and the fact that I was always trying to please him. And quite frankly my dear, I didn&rsquo;t give a fuck.&nbsp;</p> <p>I felt like my chest was going to explode with that warm feeling that radiates out of you when you&rsquo;re happily in love and you don&rsquo;t care who knows. And yes, there was a slight sexual element to what I felt (use your imagination).&nbsp;</p> <p>Now the trouble that was brewing began when I started secondary school. Suddenly I went from being an unassuming pupil among others and occasionally a teacher&rsquo;s pet, to being a teacher&rsquo;s pet and a target for name-calling, stone throwing, being blamed for things I hadn&rsquo;t done and a few things I just prefer to forget now.&nbsp;</p> <p>As in Tom&rsquo;s case, it lasted all of five years and seems to be what people I went to school with mostly remember me for. 
I had very briefly explored my sexuality with a couple of other boys my age in primary school (sorry to disappoint, but apart from the usual stuff most little boys do, such as showing each other our penises behind a wall, it didn&rsquo;t go very far) but this was now out of the question, because I was already a loner and made to feel it.</p> <p>Noticing a girl in my class when I was 11 gave me an opportunity to try out romance that might dare to speak its name, and I repeated this a couple of times over the next few years, until I was 16.&nbsp;</p> <p>In every case it was timid, furtive, and ultimately purely platonic. What I tried to convince myself were crushes turned out to be a lonely teenage boy trying to A) be straight and B) make friends.&nbsp;</p> <p>As they were always well out of my &ldquo;league&rdquo; in at least one way or another, and as I never got anywhere, it was easy to think that it was just down to my isolation or for more noble motives (if they had boyfriends, for example).&nbsp;</p> <p>The fact that when one of them actually made a sexual pass at me I didn&rsquo;t like it and ran away should, of course, have made it obvious that I wasn&rsquo;t that kind of boy.</p> <p>Unfortunately, the overtly homophobic context I was wading in had started influencing me to the point that I went to great lengths to deny my feelings for other boys.&nbsp;</p> <p>Given that I was already trying to hide my socially acceptable feelings for girls, you&rsquo;ll appreciate why self-harming was only a step away whenever I so much as got a kick out of a hot Sixth Form boy or (male) fellow pupil smiling at me.&nbsp;</p> <p>I don&rsquo;t want to give too many ideas to anyone who&rsquo;s potentially fragile, but the mildest example happened when I was 12. A Sixth Form boy whose name I didn&rsquo;t know, but for whom I was carrying a torch of Olympic proportions, ran past and ruffled my hair with a friendly smile. 
I spent the rest of the day grinning from ear to ear with a happy look that lasted until I got home.&nbsp;</p> <p>As soon as I got to my bedroom, I consciously realised that I was in love with another boy. Something snapped and I started crying and cut off all the hair he&rsquo;d touched as if he&rsquo;d somehow &ldquo;made&rdquo; me gay. I&rsquo;ll draw a veil over the more painful and dangerous things I did to punish and &ldquo;cure&rdquo; myself over the years, but you get the idea.&nbsp;</p> <p>You may have noticed that I kept pushing my attraction to boys to the back of my mind and that convincing myself that it was as dirty, wrong and even dangerous as my parents, grandparents, teachers (thank you Section 28) and classmates said it was, took up every brief second of the time during which another boy would catch my eye.&nbsp;</p> <p>Obviously I had platonic relationships with other boys, and I had a few very loyal friends willing to risk being stoned in the Biblical sense of the word, which was the danger for anyone who ventured outside with me on the side of the school grounds by the playing fields.&nbsp;</p> <p>But there were of course Other Boys (men didn&rsquo;t interest me yet, unlike boys in my own year up to and including Sixth Formers) who I&rsquo;d see helping their parents on market stalls on Saturdays or at the swimming baths (particularly in the changing rooms obviously) and for whom I&rsquo;d have romantic feelings with a sexual element to them. 
This was something I definitely wasn&rsquo;t equipped to handle; and it eventually led to my nearly having a nervous breakdown just before my coming out at 17, to a friend who&rsquo;d literally come out to me a minute before.</p> <p>Bisexuality seemed the obvious term to define my sexuality at this point, as the only reason I&rsquo;d admitted to liking boys was that I couldn&rsquo;t hide or suppress it, and liking girls sexually was never called into question because it was assumed you did.&nbsp;</p> <p>I&rsquo;d also spent years trying to fantasise about girls, and succeeding (use your imagination if you must) and on those occasions in which a boy I liked was involved. This (very unhealthy) pattern carried over into two encounters with young women, both of which involved me reacting to being pounced on and kissed by kissing back and letting my imagination get me aroused.&nbsp;</p> <p>I&rsquo;d spent years getting turned on by imagining straight guys I fancied doing things with girls, so it was only after a random conversation with my father-in-law and wife that I realised just how much of a mess I&rsquo;d let my life become when it dawned on me, after years with the second woman (and paternity) that I&rsquo;d been barking up the wrong tree since 11 years old.</p> <p>I came out to my mother before I was ready, because the friend I came out to was threatening to tell her if I didn&rsquo;t.&nbsp;</p> <p><strong>A word on this </strong>&ndash; A) please don&rsquo;t submit to blackmail, and B) don&rsquo;t let other people&rsquo;s prejudices shape the way you see yourself.&nbsp;</p> <p>My mother&rsquo;s reaction to me telling her I was bisexual can be split into three stages.&nbsp;</p> <p>Her first reaction was an encouragement to try with girls and the ludicrous suggestion that I spend more time with her ex-husband, followed by a very hurt and disappointed order not to talk to anyone else, and particularly my sister about it.&nbsp;</p> <p>Her second reaction a few days 
later was to shove a leaflet on blood donation into my hand and to order me to read it *very* carefully. I think you get the message too. (Queer = AIDS = you die...) Her third reaction was a combination of avoiding the subject, one very brief &ldquo;And you&rsquo;re not gay&rdquo; (nice to know she was listening) at the end of a comment on an unrelated subject, and a steady stream of homophobic comments in which the word &ldquo;gay&rdquo; was always used as a provocation and as an insult.&nbsp;</p> <p>At the end of my first year of university she even told me that I was very selfish because there were rumours about my sexuality and she had to live with what the neighbours thought. So, on meeting a woman who loved me and whom I cared about deeply (and still do), I thought it only natural to be faithful and to build our relationship together. Whenever doubts began to accumulate, I dispelled them by telling myself that I was with someone who would please my family and that I was doing my duty both to her and to them. I didn&rsquo;t realise to what extent I&rsquo;d internalised my &ldquo;education&rdquo;, but that fact, of course, was going to open my eyes in its own time.</p> <p>I got a lot of homophobia while at University, including having stones thrown at me by a couple of other students. Don&rsquo;t be put off by this, because it doesn&rsquo;t invalidate the fact that universities take this very seriously and other students will support you just as they did in my case.&nbsp;</p> <p>I should also add that your true friends won&rsquo;t reject you and any gay or bisexual men among them will be more than happy to know that you&rsquo;re being yourself.&nbsp;</p> <p>One straight friend once said as much to me when I apologised for a crush on one of his friends from home. To be precise, he said &ldquo;Nah, don&rsquo;t be sorry. It&rsquo;s best to be honest about your feelings.
If he wants to be a woman about it that&rsquo;s his problem!&rdquo;&nbsp;</p> <p>&nbsp;</p> <p>My father-in-law and my wife are both Christian.&nbsp;</p> <p>I&rsquo;m not.&nbsp;</p> <p>They both believe that homosexuality is a sin, and so for reasons unrelated to me, when the topic came up during dinner, my father-in-law aired his view that &ldquo;It&rsquo;s obvious that homosexuality is a form of deviant behaviour&rdquo;.&nbsp;</p> <p>I replied that people who are homosexual aren&rsquo;t attracted to the opposite sex and that, although I respect the right of other people to hold the same view as he does on the subject, I disagree.&nbsp;</p> <p>What hit a nerve, I think, was when I insisted that it&rsquo;s in a person&rsquo;s nature and not just a question of preference, as almost every gay man I know (I don&rsquo;t know as many lesbian women) has zero attraction to women.&nbsp;</p> <p>I&rsquo;ll spare you the theology, but I was left with the nagging sensation of having accidentally breached a wall I&rsquo;d put up at the back of my mind, and it coincided with my attraction to women being questioned.&nbsp;</p> <p>A few friends, bi, gay and straight alike, had expressed surprise at me being with a woman, as I&rsquo;d never shown any interest in girls before. Now this in itself didn&rsquo;t bother me, because I believed, and still believe, that there are bisexual people out there. I know a fair few, and they&rsquo;re very visibly attracted to people of both sexes.&nbsp;</p> <p>What I realised, as I sat down with pen and paper and worked my way over my private life properly, was that I was looking at myself objectively for the first time since I was 10.&nbsp;</p> <p>Assuming that I liked girls was an obvious conclusion to draw as a teenager, but as the final pieces fell into place, it hit me like a sledgehammer that my defensive reflex against any form of sexuality had impaired my judgement.
The prejudice I&rsquo;d picked up that love without desire was pure had led me to the conclusion that I liked girls and to hate my romantic and physical feelings for other boys. And of course, hating myself for not being able to change was an inevitable side effect and conveniently stopped me from exploring my feelings, let alone my sexuality.</p> <p>And so the floodgates burst.&nbsp;</p> <p>Years of sabotaging budding relationships with other young men from the age of 17 to 22 and over, and running away from any gay or bisexual man who made a pass at me, suddenly became painfully fresh in my memory. So did years of avoiding any kind of contact with LGB organisations beyond professional relations while doing &ldquo;governmenty&rdquo; work, as one friend who I later learned is gay referred to it, for the Union of Students.&nbsp;</p> <p>Even my attraction to him was unrequited, as I decided not to ask him out (it turns out my hunch was right and that our attraction was mutual) to concentrate on sorting things out with my ex-boyfriend, who&rsquo;d recently been subjected to homophobia of life-threatening proportions... and who was in no shape to be in a relationship, of course, and therefore unattainable. And I&rsquo;d accepted the break-up in the first place because my mum had just had a serious car accident... something I&rsquo;ll always &ldquo;kick&rdquo; myself for, particularly now.&nbsp;</p> <p>And at no point did girls interest me other than as friends. A point of which I was reminded by the memory of a speaker at an event in one of the Union bars when he said that at our university there was a ratio of something like 5 girls per boy. One young guy not far from me said &ldquo;Shiiiiit....&rdquo; in awe.
As you&rsquo;ll have gathered, I was unfazed by this piece of information, not being interested in girls but very much under the spell of one of my very tall, muscular and athletic (and straight, so nothing happened) flatmates.&nbsp;</p> <p>Suffice it to say I&rsquo;ve managed to unwittingly come out several more times than I should have done, given that I came out to myself properly at about 10 and again at 14-15 and then again at 15 to a boy I fancied, or more precisely outed myself by replying &ldquo;Depends. Do you?&rdquo; to his hostile &ldquo;Do you smoke helmet?&rdquo;.</p> <p>I then came out at 17 to a few close friends. Naturally the entire Sixth Form got to know and were almost unanimously OK with it. As Tom mentioned in his article, and as my then boyfriend pointed out, girls are generally very supportive friends (if not among the most supportive) when you come out.&nbsp;</p> <p>Wives, understandably, are not so supportive. It&rsquo;s not a situation you&rsquo;d wish on anyone, but as I can&rsquo;t properly fulfil my role as a husband now that I&rsquo;m conscious of what really pushes my buttons, all the arguments about my sexual orientation being an &ldquo;abomination&rdquo; and something &ldquo;I can change&rdquo; if I really want to, become meaningless.&nbsp;</p> <p>I don&rsquo;t hold with running away and abandoning your dependents, but staying together on the grounds that I signed a contract despite the consequences for my wife and child would be just as irresponsible.&nbsp;</p> <p>The irony is that it was my wife who convinced me of that, by arguing exactly the opposite. Not least by equating the relationship I had with my (now ex-)boyfriend to the &ldquo;marriage&rdquo; that some people have with their dog, or comparing any form of LGB group with Sodom and Gomorrah.
By using the kind of ignorant comment I hear so often when you tell &ldquo;them&rdquo; that you&rsquo;re gay or bisexual - &ldquo;You don&rsquo;t look it.&rdquo;</p> <p>Now I&rsquo;m not a coward when it comes to physical dangers, and I&rsquo;ve survived enough people trying to grievously hurt me to be able to make that statement. Homophobic violence hurt me less than having to live with homophobic attitudes from people I actually care about, but conforming to what people expect of you doesn&rsquo;t require courage even though it&rsquo;s painful.&nbsp;</p> <p>Living your life according to what&rsquo;s right for you and others is much braver and also much more responsible. Not that I really stood a chance, but I did make several bad decisions, and trying to be straight when I thought I was bisexual was one of them.&nbsp;</p> <p>When you get into your twenties (or any age) and realise that you&rsquo;re living with the decision of a scared teen with no apparent support to turn to, it&rsquo;s time to stop running and repair the damage before it gets worse.&nbsp;</p> <p>If this sounds familiar, go back and read Tom&rsquo;s article; that&rsquo;s where to go from here.</p> <p>Given my current situation it&rsquo;s not easy to be optimistic, but that&rsquo;s where my friends come in. I have to admit to feeling jealous when I see how accepting some parents are, but the people who love me for the person I am will always be supportive.&nbsp;</p> <p>My best friend very tactfully let me know that he&rsquo;d known for years and that he understood how hard it had been for me to get the words out.&nbsp;</p> <p>One of my other close straight friends who&rsquo;s built like a rugby player even asked &ldquo;And how is that offensive?&rdquo; when I warned him that it was common knowledge on campus that I liked guys and that some people seemed to think we were a couple. I really hope he meets a girl who deserves that much sweetness.
Of course, friends of &ldquo;other&rdquo; persuasions who&rsquo;ve been through the same stuff as I have can read this and smile now and again as their own memories suddenly become more vivid.&nbsp;</p> <p>And while I&rsquo;m building a future that takes into account the fact that I have the right to be happy too, friends like that are the ones who remind me that it does get better.</p> <p>Oh, one more thing for my friends. Cheers!</p> </p> It Gets Better <p>&nbsp;</p> <p>I can clearly remember the reactions of most of my friends when I came out. &nbsp;I've worded that carefully, notice.. Friends. &nbsp;All of my true friends were supportive; one guy, a bit of a rugby lad, put his arm around me and said "Well done mate".</p> <p>Mind you, that was when I'd chosen to come out, of my own accord. &nbsp;</p> <p>Truth be told, I'd actually been outed years before that. &nbsp;Let's take a look back at that. &nbsp;</p> <p>I've known that I'm gay for a very long time indeed. &nbsp;I had a crush on a guy I knew while growing up in America, a kid called Evan. &nbsp;Let's see.. that would have been 1992-1994, so I'd have been ages 6-8.</p> <p>Evan's folks had this massive "ranch-like" place a bit out in the sticks, or at least, that's how I remember it. &nbsp;He had this "secret" hiding place that only he knew about, and he took me there, and we did things, and I Liked It. &nbsp;(I'm being vague on details here, use your imagination if you absolutely must.)</p> <p>In the very early years of exploring my sexuality, it was defined by points in time, with other boys my age, and "exploring" with them too. &nbsp;If I look back now, I can count about 6 guys who were similarly explorative, and this is still in the America Years. &nbsp;The strangest thing, perhaps because of our collective youth, was that nobody made a snide or derisive comment about any of this. &nbsp;It was just fun. &nbsp;It was never gay, queer, faggotry.
&nbsp;Perhaps we didn't know the words, but I think that's not true. &nbsp;Perhaps innocence just wins out over all, and we don't know that it's "wrong" or "bad". &nbsp;</p> <p>I think part of it is that we were mostly expat kids, at good schools, with good parents, and that kind of bullying and behaviour isn't really part of the close community that had been built up. &nbsp;I don't remember any instance of persistent bullying at the Montessori School I went to. &nbsp;I suspect that might be because everyone lived in fear of the Headmistress, a seriously scary woman, who is still Headmistress there to this day.</p> <p>But I digress.</p> <p>In 1994/5, dad's overseas job ended, and we moved back to a smallish town in Worcestershire, called Malvern. &nbsp;I started school at the "West Malvern Primary", mid-way through the term, I think, and was received with wonder and confusion. &nbsp;A softly spoken, somewhat gangly kid with an American accent and foppish ways. &nbsp;I know this for a fact. &nbsp;I did not fit in.</p> <p>I knew nothing of football, or well, anything that defines the British boyish primary school group of friends. &nbsp;</p> <p>As a result, I did a better job of making friends with the girls, well, some of them. &nbsp;One friend I made then was Robyn, who has been my friend ever since, and was really supportive when I came out in 6th Form.&nbsp;</p> <p>To the typical British schoolboy, apparently making friends with girls automatically makes you gay. &nbsp;I have to admit now, I've never followed the logic behind this one. &nbsp;I was a massive teacher's pet, and not particularly bothered about it. &nbsp;I enjoyed learning, and would have happily spent my lunchtimes tinkering with the aging BBC Micros or reading away in the library. &nbsp;In fact, as time wore on, this is exactly what I did.</p> <p>Around about 1996, I made a friend, a very boyish boy; he was on the school football team.
&nbsp;Somehow (and I really wish I could remember exactly how this came about), we ended up "exploring together" too. &nbsp;First at his house, watching Neighbours while we did it, then at my house, where there was some element of computer gameplay rewarded with a wank, and this carried on for some time.</p> <p>I once asked him, "What happens if I win the game?", and his reply has stayed with me for all these years, "I'll fucking suck your cock", he said. &nbsp;</p> <p>Actually, there was another guy with whom I had some brief encounters, or perhaps another two, but they were fleeting, and largely unremarkable. &nbsp;(I never did win that game.)</p> <p>The problem came a little later on, when we all started at a Secondary school (High School, for my American audience). &nbsp;Pretty much everyone who had been at the primary school went to one of two secondary schools. &nbsp;The Bible-bashing Dyson Perrins, or the slightly less mad, and much better, The Chase. &nbsp;For obvious reasons, I went to The Chase.</p> <p>This is where the trouble really started. &nbsp;Going from a small school in a good bit of town, to a much bigger school, with a massive variety of kids from different backgrounds and upbringings. &nbsp;This was terrifying. &nbsp;I still lacked the social skills to make new friends, and for a long time, was still riding on the old friends I had at primary school. &nbsp;The problem was, our timetables were largely different, and we didn't often see each other aside from a short period at lunchtimes.</p> <p>Having a hard time making friends was made worse when some of the boys from my primary school had said something to the older kids, or the other kids in my year, about the experiences we'd had the year before. &nbsp;Apparently now it was wrong, and very bad, and I was queer, and gay, and I had big ears, and I was weird and not quite right. &nbsp;</p> <p>Incidentally, whenever I got bullied, and told my parents, I always mentioned it based on the big ears fact.
&nbsp;I still wonder whether anything would have been handled differently if it had been recognised as homophobia at that stage. &nbsp;I suspect not, not least from the school's point of view. &nbsp;I just wasn't quite ready to admit it to my parents. &nbsp;Dunno what I'd have said under further questioning. &nbsp;It seemed better to base it on the tangible fact that, yes, I do have big ears.</p> <p>I suspect my parents already knew that there was more to this story, but there it is, that's the truth.</p> <p>To me, the scariest part of this story was that the bullying didn't stop for FIVE years. &nbsp;There was always somebody willing to make a jab at my alleged sexuality, or, with some cover story, make comments about the size of my ears, or the way I walk, or that I'm a geek, or a nerd, or any other derisive and derogatory remark they could invent.</p> <p>Not all schools are created equal, I understand this now, but to the 14-year-old me, this wasn't obvious. &nbsp;</p> <p>The general pattern was this: Get bullied, tell someone, the bully gets a light ticking off, you get bullied for a) the original reason, and b) being a grass.</p> <p>Now, the thing that annoys me most about all of this, retrospectively (although, it annoyed me then, too), was that The Chase had a lot of "threats" against bullies. &nbsp;"We'll do this, and we'll make an example of you in assembly, or we'll name and shame you to the local newspaper". &nbsp;None of this was ever done, not in any of my time there, and I suspect little has changed. &nbsp;Looking at the school's website today, they seem to have a clearer code on equal opportunities, but the bullying code hasn't changed. &nbsp;I do wonder if the prevalence of homophobic abuse at the school has changed.</p> <p>I'm not afraid or ashamed to admit that on a number of occasions, I considered suicide, but ultimately, the reason I'm still here today is because I didn't want to devastate my family.
&nbsp;I don't want to go into detail on that for a number of reasons. &nbsp;</p> <p>Let's jump forwards 2 years, to 6th form (or college), and I'm largely more comfortable with my friends group. &nbsp;The people around me are those who I want to be around me, and the folk who bullied me are now elsewhere, working in dead-end jobs where they will be for the rest of their lives. &nbsp;</p> <p>I joined a youth group in 2000, Malvern Young Firefighters, and met a great group of people from outside school; some I knew from primary school, others were unknowns, but the group, and leadership, allowed me to develop a new sense of self-confidence that had formerly been destroyed by years of bullying. &nbsp;</p> <p>There was rarely any bullying in this group, I think, for two reasons. &nbsp;1) The group leaders ruled strictly, and 2) the peer group was much more tightly knit.</p> <p>Everything changes again at University. &nbsp;<br />I went to Birmingham, and I spent the better part of 4 years there. &nbsp;It's funny, you go from having hidden your sexuality all through school and high school, to an environment where not only are there a few people who look like you, and think like you, and fuck like you, but there's hundreds of them. &nbsp;Birmingham has one of the finest, and friendliest LGBTQ societies that I've ever found.&nbsp;</p> <p>These people made me feel welcome; they protected and educated me about what it's actually like being gay. &nbsp;</p> <p>It's all different, and it gets better. It really does. &nbsp;<br />You too will find people like you at university, and in bigger cities, and in liberal arts colleges. &nbsp;You will find your first "real" boyfriend, and you'll go through everything that your peers at school went through at age 12 with a girl behind the bike-sheds.</p> <p>I'd never have met all the wonderful people I've met in the last 10 years, if I'd let the bullies win. &nbsp;I'd never have met my wonderful boyfriend if I'd died when I was 14.
&nbsp;</p> <p>First loves, first lovers, first boyfriends are all things that will happen, and can happen, but you have to give them the chance.&nbsp;</p> <p>Here's the important point: they're all right. &nbsp;Joel Burns, Tyler Oakley, and the numerous others on the Trevor Project and YouTube. &nbsp;It does get better. &nbsp;It got better for me, and I promise you, it'll get better for you.</p> <p>Sometimes you have to make a proactive stand, and get yourselves out of the situation; other things just change over time, like people's attitudes to homosexuality. &nbsp;It's a continuum, and it's changing all the time. &nbsp;You just have to give it time to change. &nbsp;The vast majority of people aren't perfect like us; they can't see the world the way we do, but the reason people are homophobic is because they're also ignorant, and they're afraid. &nbsp;</p> <p>If you've been reading this, and thought at any point, "<em>Hey, that's me!</em>" or, "<em>That's what they do to me</em>", and you're being bullied, for whatever reason: please don't suffer in silence, there's no need. &nbsp;Times are changing, and we live in a progressive world. &nbsp;There is somebody out there willing to listen. &nbsp;There are people who've been through the same things. &nbsp;There is support for gay teens, hell, there's support there for anyone.</p> <p>Please give yourselves a chance for things to get better. &nbsp;</p> <p><strong>It gets better. It really does.</strong></p> <p>&nbsp;</p> <p>If you're in the USA:</p> <p></p> <p>&nbsp;</p> <p>If you're in the UK:</p> <p></p> <p></p> <p>&nbsp;</p> <p>London Gay &amp; Lesbian Switchboard:</p> <p></p> <p>&nbsp;</p> <p>List of local LGBT support groups / helplines:</p> <p></p> <p>&nbsp;</p> Mysterious Tiles <p>I recently acquired some ceramic tiles, and after a good bit of cleaning, they're all presentable and nice.
&nbsp;There's 30 in total.</p> <p>Problem is, I'd like to know what the pattern is, who the designer/manufacturer was, and also whether they have any value.</p> <p>Some friends and family have suggested that they: "Look over 50 years old", "Look like a Morris pattern", "look handmade", "look valuable", "mediaevally&nbsp;beautiful", "worth finding out about".</p> <p>Pictures on Flickr:</p> <p><a href=""></a></p> <p><a href=""></a></p> Zen and the Art of Speccing Servers <p> <p>Say for example you want to build a new Virtualization cluster. &nbsp;You've chosen the CPUs you want, and know you want 32 GB of fast shiny RAM. &nbsp;</p> <p>The next thing to decide on is how the hell you're gonna store your VMDK (or otherwise) images, and then store the backups and snapshots too.</p> <p>So. &nbsp;A typical VM Host server might be one of three choices.</p> <p>For sake of argument, I'm using Dell as a vendor.</p> <p><strong>Option 1:</strong></p> <p>Dell R805, Dual AMD 2425HE, 6 cores per CPU, 2 CPUs.</p> <p>32 GB of fast DDR2 ECC RAM. &nbsp;</p> <p>Ah. Hard disks. Bugger.</p> <p>You can have only 2 disks in the R805 chassis. &nbsp;Bugger.</p> <p>I'll have 2 fast SAS 300GB 6Gbit 15K 2.5" drives, in RAID 1.</p> <p>Bugger. &nbsp;Only 300GB of storage. &nbsp;That's about enough for 3 small servers, or one big one.</p> <p>Bugger.</p> <p><strong>Approx Cost: &pound;3100</strong></p> <p>&nbsp;</p> <p>So, if I want to use the R805, I'm gonna need some kind of backend storage, be it NAS, or SAN, or a Unified Storage Device, providing NFS and iSCSI.
&nbsp;</p> <p><strong>Option 2:</strong></p> <p>Dell R815</p> <p>Dual or Quad CPU, also AMD, 8 or 12 cores per CPU.</p> <p>32 GB of RAM, again</p> <p>More disks!</p> <p>Split volumes, R1 / R5 (shame it's not R6, but there we go.)</p> <p>2x300GB SAS + 4x500GB SATA</p> <p>Giving 300GB + 1.3TB</p> <p>A bit better, but prohibitively expensive.</p> <p><strong>Dual 8 Core CPU = &pound;6208</strong></p> <p>Quad 8 Core CPU = &pound;6698</p> <p>Dual 12 Core = &pound;7408</p> <p>Quad 12 Core = &pound;8608</p> <p>Bugger.</p> <p>&nbsp;</p> <p><strong>Option 3:</strong></p> <p>Dell 2970</p> <p>Dual 2425HE, again</p> <p>32 GB RAM</p> <p>&nbsp;</p> <p><em>Option A (8x2.5" disks)</em></p> <p>2x300GB SAS + 6x500GB SATA</p> <p>= 300GB + 2.3TB</p> <p><strong>Total Cost: &pound;5125</strong></p> <p>&nbsp;</p> <p><em>Option B (6x3.5" disks)</em></p> <p>2x300GB SAS + 4x2TB SATA</p> <p>= 300GB + 5.7TB</p> <p><strong>Total Cost: &pound;4705</strong></p> <p>OR</p> <p>2x300GB SAS + 4x1TB SATA</p> <p>= 300GB + 2.8TB</p> <p><strong>Total Cost: &pound;4145</strong></p> <p>Right. &nbsp;Now. &nbsp;The interesting part is that for this last server, the cost of storage alone is only &pound;191/TB.</p> <p>One of the biggest problems associated with having large disk storage on the actual VM host itself is the problem of not being particularly able to free up pockets of unused disk space.</p> <p>Alternatively, a separate storage node would effectively allow better distribution of the storage, and exporting disks across the network. &nbsp;</p> <p>So let's price that up.</p> <p>&nbsp;</p> <p><strong>From;</strong></p> <p>(Because I like their up-front pricing, and shiny configurator)</p> <p>Supermicro chassis, Intel server mobo, Intel Xeon E5504, dual CPU, 24GB RAM</p> <p>6x300GB 15K SAS = 1.3TB</p> <p>6x2TB SATA = 9.5TB</p> <p>Total Storage: 10.8TB</p> <p><strong>Total Cost: &pound;6528</strong></p> <p>That's about &pound;605 per TB.
&nbsp;Not ideal.&nbsp;</p> <p>&nbsp;</p> <p>But there's no real doubt that using iSCSI (or NFS) would provide masses more flexibility for the provisioning of storage for this project. &nbsp;Because the initial plan involved high-availability, using IP-based network storage protocols would also allow the disk-traffic to be routed across the public internet, using some kind of VPN technology.</p> <p>My gut feeling is that the best solution is a cheap(-er) server, backed onto a more expensive disk storage unit. &nbsp;</p> <p>I did consider pricing up a DAS array, and connecting it to one or other of the VM Hosts directly, either by FC or SAS, but then in the remote case of a failure, the disks aren't easily exportable to another server, especially as SAS traffic can't be directly routed over the network.</p> </p> The Cost of Forward Thinking <p> <p>In the last two weeks, I've seen at least two websites fall off the internet because of a distinct lack of forward planning.</p> <p>Firstly, there was Derren Brown's blog.</p> <p>After Derren did his "The Events" trick with the lotto balls and dark magic, the number of fans hitting his page daily looking for clues, news, and gossip caused the server to fall over.&nbsp;It even caused some of the Channel 4 servers some traffic troubles (and they've got a lot of nodes!)</p> <p>Derren's blog was down for at least 2 days, as far as I could see.
&nbsp;If his producers/agents/IT manager had said "hey, this stunt might turn out to be popular, let's move onto a cloud infrastructure, with a CDN cache, we might have to invest a bit of money now, but we'll have better uptime than if we're just serving from a single 1U Dedi in a rack", then the site might have remained up and serving for far longer, enduring the wave of traffic generated by the publicity on TV.&nbsp;</p> <p>The second one of these was caused tonight by Dragons' Den Online, a cut-down version of the popular Dragons' Den format.</p> <p>The final segment was dedicated to a web startup, introducing Yet Another Social Network for families. Something about sharing photos, videos, calendars and wishlists.<br />Personally, I do all this with Flickr, Google Apps, and Amazon Wishlist.</p> <p>It was remarked to me at least once that this could be breaking down the nature of the family unit, because everyone spends their time in front of the computer instead of actually interacting with each other.</p> <p>But I digress.</p> <p>About 20 minutes ago, I was looking at their site, Family Fridge, and noticed that it winked out of existence as soon as the web address was mentioned.</p> <p>Yes, they got Slashdotted by the BBC.</p> <p>I've seen many a site get taken down by getting a FryTweet; that's a pretty effective way to kill a webserver. When 50,000+ followers all open the site at once, it's not good for any website.</p> <p>I suppose there's that old adage about "no such thing as bad publicity"...&nbsp;I can't help but apply the same scenario as before:<br />"If we spend a little money now, get a cloud computing services infrastructure, then we can use the Dragons' Den as advertising and get a whole stack of new members in one night."<br />Sure, upgrading the platform isn't free, but the potential increase in revenue from such a "publicity stunt" is significant, and should be enough to offset the cost of the new infrastructure.</p> <p>Moreover, I think it
proves to some extent that the investment might not be quite so sound.</p> <p>Scalability is something of a buzzword of the times we live and work in, but it's also very important: the moment you launch a product on Twitter or Facebook, you've instantly got a far wider audience than perhaps you initially anticipated.</p> <p>In my opinion, it looks kinda bad on the developers of this site that either they never anticipated that this would happen, or they don't care.</p> <p>On a technical note, they probably wouldn't need to go as far as a cloud-computing infrastructure, or even a CDN.&nbsp;<br />Simple page optimisations and front-end caching can make a world of difference compared with generating a new dynamic page for every single visitor.</p> <p>Knowing my luck, someone on Twitter or Facebook will pick this up as "Interesting" and I'll get a hundred requests a second, and my poor overworked hosting account at Streamline will get overwhelmed. &nbsp;</p> </p> The True Age Test <p> <p>A few weeks ago, I wrote about this Facebook meme, &ldquo;The Name Game&rdquo;, and I hypothesised that this wasn&rsquo;t a meme, but actually a data gathering exercise, possibly started by scammers.<br />I&rsquo;ve found another one. &nbsp;One of my friends took the &ldquo;True Age Test&rdquo;, and came out younger than their actual age. &nbsp;I&rsquo;ve just had a brief flick through the questions.</p> <p>It starts off with fairly harmless questions related to the app: &ldquo;What is your actual age, what race are you, how much exercise do you get&rdquo; etc&hellip;</p> <p>It rapidly progresses into &ldquo;Have you ever had any heart conditions, did anyone in your family die before the age of 60 from coronary related illnesses&rdquo;</p> <p>Later, &ldquo;Do you have diabetes.
Do you have any Digestive problems, Do you use drugs, How depressed do you feel, What is your relationship status&rdquo; and so on.</p> <p>Now, not only are these questions a bit personal, but there is no obvious information on how your data will be stored, used, or archived. &nbsp;Given that Facebook already shares a good proportion of your personal data with these applications, what is the probability that you&rsquo;ve just provided enough data to build up a profile of how much of a risk you would be to a) a future employer, b) a bank, building society, etc., or c) an insurance salesman?</p> <p>It also doesn&rsquo;t state (anywhere that I saw, anyway) what they&rsquo;re gonna do with the data. Is it transient, or stored in a file somewhere? &nbsp;How long is it stored for? Do they plan to sell the data? Domestically, or overseas?</p> <p>Also, without a comprehensive code review, it&rsquo;s not very easy for people to see whether the data is going to be exported through a backdoor in the code, so even if they say &ldquo;Oh no, the data isn&rsquo;t stored, or identifiable&rdquo;, there doesn&rsquo;t seem to be any easy way to prove that.</p> <p>IIRC, Facebook don&rsquo;t ask to see the source code of your application, so it might be quite easy for an individual with malevolent intent to gather a vast amount of potentially sensitive information.</p> <p>The motivation for people to participate in this application is simple: &ldquo;I want to prove that my &lsquo;real age&rsquo; is younger than my biological age, therefore I feel good about myself&rdquo;.</p> <p>We all want to feel good, don&rsquo;t we?</p> <p>But at what cost?</p> </p> Drabble <p> <p>I wonder if you&rsquo;ve heard of a Drabble?</p> <p>A drabble, simply put, is a story, normally science fiction or fantasy, that is exactly one hundred (100) words in length.
No more, no less.</p> <p>Here is mine:</p> <pre>It was a slow day in the spaceport.</pre> <pre>&ldquo;These rocket cowlings aren&rsquo;t going to fix themselves&rdquo;, Simon thought to himself, wistfully.</pre> <pre>It was 4 days since the incident, nobody said a word after it happened, not until this morning, that is.</pre> <pre>Simon knew exactly what to do; he lifted the great copper mallet above his head, and struck the cowling with all his might.</pre> <pre>The resonance shook the entire rocket, the mallet, his arm and the rest of his body. &ldquo;Damnit&rdquo;, Simon swore, just as a shadow appeared over Simon&rsquo;s left shoulder.</pre> <pre>&ldquo;I owe you a pint, for this&rdquo;, the shadow said.</pre> <pre><br /></pre> </p> The Wiki Problem <p> <p>I love collaborative websites. &nbsp;Wikipedia, blogs, community-oriented stuff like Stack Overflow and ServerFault.</p> <p>There is, however, the lingering problem of vandalism, and it&rsquo;s one that seems to crop up on pretty much every collaborative website I&rsquo;ve ever seen. Wikipedia gets a lot of newbie contribs which are utter nonsense: advertising, spam, page blanking and so on. &nbsp;There&rsquo;s a hefty team of people on Wikipedia, however, who go around reverting this kind of stuff. &nbsp;I&rsquo;m one of them. &nbsp;I use MediaWiki at work also, so I&rsquo;m pretty confident around the entire wiki platform, and IMHO, MediaWiki is the best wiki software out there.</p> <p>Anyway, on Saturday, I was quite pleased to discover that the Science Museum in London has now got a collaborative object wiki.</p> <p>I love the idea of having visitors add their own memories of stuff that is on exhibition.
&nbsp;It seems that it&rsquo;s mostly household items that are well commented on, for example Frigidaire refrigerators.</p> <p>It was on this site, on Saturday, that I discovered that they had fallen to the terrible plague of edit vandalism, and the homepage of the wiki was now some statement about some girl called Louise and her love of turkey and Cannock. It seemed she had also discovered her User Page, and decided to spread the nonsense to the public home page.</p> <p>I created an account, reverted her edit, left her a message on her talk page (sometimes these passive-aggressive things are all you can do!), and then had a rather nice thank-you message from one of the administrators.</p> <p>I think that might have been my 8th or 9th visit to the Science Museum. &nbsp;I&rsquo;m forever discovering new stuff there.. That, and they keep adding new stuff :-). &nbsp;I&rsquo;m quite looking forward to the future &ldquo;Biker Tribes&rdquo; exhibition, as I&rsquo;m rather mad about motorbikes these days (more on that soon! [Sidenote: Anyone following me on Flickr might be interested in my Motorbikes Collection]). &nbsp;<br />There&rsquo;s much more I could say in praise of the Science Museum, but I haven&rsquo;t the time, or pixels, left.</p> </p> Uncrackable Passwords <p> <p>I got an email today from some software company, trying to sell me a password management tool. &nbsp;I used to use KeePass, which was pretty effective. &nbsp;This one is considerably more expensive.
&nbsp;Among its features, it boasts:</p> <p><ol> <li>Generate uncrackable passwords using the integrated Password Formulator</li> <li>Maximum protection of your sensitive data thanks to the security algorithm Rijndael 256-Bit!</li> <li>Instead of passwords like &ldquo;toothbrush&rdquo; or &ldquo;Rover&rdquo;, which can both be cracked in a few minutes, you now use passwords like &ldquo;g\/:1bmV5&Prime;&pound;$p&rsquo;}=8&gt;,,/2&not;%`CN?\A:y:Cwe-k)mUpHiJu:0md7p@&lt;i&rdquo; (with a 1-GHz-Pentium-PC, it takes approx. 307 years to guess this password!).</li> <li>Password lists on the internet: Place your encrypted password lists on the Internet and enjoy access to all of them, no matter where you are!</li> <li>Protection from keylogging (intercepting of keystrokes) &ndash; All password fields are internally protected from keylogging.</li> </ol></p> <p>I&rsquo;ve got issues with all five points above.</p> <p><ol> <li>That&rsquo;s a pretty bold statement to say that your passwords are uncrackable.. I suspect they really mean that they haven&rsquo;t been able to crack them, or somebody hasn&rsquo;t been able to crack them YET.</li> <li>Another word for Rijndael&hellip; &nbsp;Yep, AES. &nbsp;Really nothing that sophisticated. &nbsp;Under closer inspection, they&rsquo;re really no better than the free alternatives.</li> <li>While &ldquo;g\/:1bmV5T$x_sb}8T4@CN?\A:y:Cwe-k)mUpHiJu:0md7p@&lt;i&rdquo; may be long and secure, mixing cases, alphanumerics and symbols, it&rsquo;s certainly not memorable. &nbsp;So what happens if you generate this password for XYZ internet banking service, and then you go on holiday and forget to pay a bill, or need to move some money about? You don&rsquo;t have your password safe with you. &nbsp;Bugger.</li> <li>Does anyone else think this is potentially asking for trouble? Assuming XYZ company is hosting them, &ldquo;securely&rdquo;, how can you prove they don&rsquo;t have a backdoor to decrypt the files?
&nbsp;Do you trust them? Considering you&rsquo;ve paid &euro;30 for this package, it&rsquo;s not really as binding as a really expensive legal SLA.</li> </ol></p> <p>The other thing that&rsquo;s at the front of my mind now is what password you use to lock the password safe itself. Do you use a long, complex, difficult-to-break one, which you&rsquo;ll probably never remember and will need to write down (therefore making it totally pointless anyway), or a simple short password like your first pet&rsquo;s name, and some thoughtful numbers after it?</p> <p>Sidenote to point 3. &nbsp;307 years on a 1GHz Pentium.. What about a dual quad-core Xeon? Or a distributed attempt across 256 nodes of dual quad-core Xeons? That&rsquo;s 2,048 cores, which alone brings 307 years down to around two months, before you even count per-core speed gains. &nbsp;Still, it&rsquo;s reaching a bit far, but it doesn&rsquo;t mean that this password is unbreakable. &nbsp;Not by a long way.</p> <p>Uh, right.. So this software is going to prevent me from putting a PS/2 or USB hardware keylogger between the PC and the keyboard? I think not. And if it claims to protect against software keylogging, how could you prove that it wasn&rsquo;t a keylogger itself? &nbsp;It would be a pretty ingenious way to harvest credentials: make the user believe they&rsquo;ve just bought a security enhancement, when really they&rsquo;re buying a back door. &nbsp;(I&rsquo;m not saying that&rsquo;s what they&rsquo;re doing, but it&rsquo;s certainly enough to make me want further verification of the publisher&rsquo;s honesty.)</p> <p>I really don&rsquo;t like the sound of this software; actually, I&rsquo;m not keen on this &ldquo;credentials management&rdquo; type thing at all. &nbsp;There are too many unanswered questions. &nbsp;And that&rsquo;s before we get onto the rather open question of the use of biometrics for passwords.
There seems to be a growing trend at the moment where biometric data (fingerprints, webcam images, iris scans) provide the password data, as opposed to the identity data that is then confirmed with a password.</p> <p>Private keys and passwords are easy to change when compromised, but how do you change your fingerprint, facial shape, or iris detail when your credentials are compromised?</p> </p> Epic Fail <p> <p>Another lesson learnt by a company that really should know better.</p> <p><strong>RAID != Backup.</strong></p> <p>This might be widely regarded as old news, but it&rsquo;s not too late IMO for me to add my $0.02.</p> <p>I picked this up on Slashdot about 20 minutes ago, and there are a few things that strike me as odd about the whole malarkey.</p> <p>Before I go any further though, I&rsquo;d never heard of Journalspace until this article arose; then again, they&rsquo;re not really in my general field of view. I&rsquo;ve always had my own blog, on my own space, so it&rsquo;s not really my &lsquo;thing&rsquo;.. Anyway, one thing that is my &lsquo;thing&rsquo; is data security and assurance.</p> <pre>Journalspace is no more.</pre> <pre>DriveSavers called today to inform me that the data was unrecoverable.</pre> <pre>Here is what happened: the server which held the journalspace data had two large drives in a RAID configuration. As data is written (such as saving an item to the database), it&rsquo;s automatically copied to both drives, as a backup mechanism.</pre> <p>Anyway&hellip; A few things strike me as odd.</p> <p>Was that RAID really their ONLY backup? For a site that had been going for 6 years, and probably had &gt;1000 users, I&rsquo;m surprised that they didn&rsquo;t write backups into their disaster recovery plan, and/or their plan to scale the site to meet their users&rsquo; needs.</p> <p>Blaming OSX? That&rsquo;s a bit of a low blow.
&nbsp; I&rsquo;ve never used OSX Server for webhosting, but it seems reasonably unlikely that this is the cause of their problems. &nbsp;And even if it was, that&rsquo;s still NO excuse not to have some form of backup.</p> <p>Disgruntled Employee Syndrome. &nbsp;While it&rsquo;s not always possible to keep 100% of employees happy 100% of the time, it is reasonably easy to revoke root keys on servers, delete user accounts, remove privileges, etc., when an employee leaves the company. &nbsp;Not doing so is like not taking their office keys from them when you escort them from the building. &nbsp;&ldquo;Come back and steal our data, we&rsquo;re practically leaving the whole office open&rdquo;.</p> <p>On a slightly different note, <em>RAID is not backup; it&rsquo;s part of the solution.</em></p> <p>Well, actually, it&rsquo;s more closely related to high availability and redundancy, but that is a different story.</p> <p>Off-machine backups are the key here. &nbsp;While they&rsquo;re costly and time-consuming to set up, they&rsquo;re also an essential part of the plan to maintain and scale.</p> <p>Mirrored disks in a RAID array will restore data fine if one of the disks fails, but if you run rm -rf /path/to/raid/folder, then the RAID will just mirror that deletion on each of the disks. &nbsp;Bye bye, data.&nbsp;</p> <p>I think it&rsquo;s somewhat unfair to expect all users to keep full backups of their own data. &nbsp;It&rsquo;s not an outrageous demand, but you kinda expect at least some form of backup on the provider&rsquo;s side, so that your 6 years of precious thoughts and feelings aren&rsquo;t lost into the ether one day.</p> <p>Given that each user might have 6 years of blogposts, maybe 2 a week?</p> <p>For 10,000 users, I make that about 600GB, averaging 100KB per blogpost.</p> <p>(6 years &times; 2 posts/week &times; 52 weeks &times; 100KB &times; 10,000 users &asymp; 624GB)</p> <p>Still, that&rsquo;s not a vast amount.. Could fit that on a fairly small DLT tape..
Could easily replicate that across a few servers, separated geographically; you could write it to a shitload of DVDs or a few dozen Blu-rays, or shove it on a portable USB HDD and stick it in a fire safe in the CTO&rsquo;s basement.</p> <p>Hell, it&rsquo;s still less than a terabyte of data.</p> </p> Chip, Pin, Password... <p> <p>The list goes on. &nbsp;It really doesn&rsquo;t end there.</p> <p>Anyone who uses internet banking these days will find themselves handing over a vast array of numbers and passwords, authentication tokens and browser cookies. &nbsp;You have a card, the card has a chip, you have a Challenge/Response card reader, and you have a PIN.</p> <p>There are at least half a dozen banks in the UK that I can name who use the Challenge/Response type card readers.</p> <p>To log into my online banking, I need my passwords and PINs, and if I want to do &ldquo;advanced functionality&rdquo; I need my card and Challenge/Response reader.</p> <p>Now.. this is all very cool, and I don&rsquo;t mind the CR device. &nbsp;No, my beef is with SecureCode.</p> <p>MasterCard / NatWest have licensed this &ldquo;extra level of auth&rdquo; for online transactions.</p> <p>Domino&rsquo;s Pizza, one of my favourite food retailers on the web, requires that I use my SecureCode password to authenticate that I&rsquo;m not a thief when I want to eat pizza.</p> <p>This is not helpful. &nbsp;The SecureCode password can&rsquo;t be any of the ones you already use for phone or netbanking, and has to be &gt;8 chars alphanumeric. &nbsp;No symbols, it seems..</p> <p>Ok, so you can&rsquo;t remember it.. no problem, you just enter your DOB and some card details, and it lets you through..</p> <p>How is that any more secure than just plain card-details auth? &nbsp;If anything.. 
isn&rsquo;t it less secure, because it&rsquo;s loading a separate site in an iframe on the retailer&rsquo;s website?</p> <p>Why can&rsquo;t I just use my Challenge/Response card reader and have everything work together?</p> <p>And secondly, why can&rsquo;t I use that to log in to online banking?</p> <p>If you work for NatWest, MasterCard, or any other monetary establishment, do pop a comment in and explain why the system is so archaic and, to be frank, SUCKS.</p> </p>
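<p>For the curious, the Challenge/Response card readers mentioned above work roughly like a one-time-code scheme. The banks&rsquo; actual protocol (EMV CAP) is proprietary and rather more involved, so the following is only a toy sketch under assumed names: a shared secret on the card&rsquo;s chip, a random challenge from the bank, and a truncated HMAC as the response, HOTP-style. Nothing here is the real SecureCode or CAP implementation.</p>

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret: stored on the card's chip, known only to the bank.
CARD_SECRET = b"example-card-secret"


def bank_issue_challenge() -> str:
    """Bank generates a random nonce and displays it to the customer."""
    return secrets.token_hex(4)


def reader_response(card_secret: bytes, challenge: str, pin: str) -> str:
    """Card reader mixes the chip's secret, the bank's challenge and the PIN
    into a short one-time code (truncated HMAC, as in HOTP-style schemes)."""
    digest = hmac.new(card_secret, (challenge + pin).encode(), hashlib.sha256).digest()
    # Truncate to an 8-digit code the customer can type back in.
    return str(int.from_bytes(digest[:4], "big") % 10**8).zfill(8)


def bank_verify(challenge: str, pin: str, code: str) -> bool:
    """Bank recomputes the expected code and compares in constant time."""
    expected = reader_response(CARD_SECRET, challenge, pin)
    return hmac.compare_digest(expected, code)
```

<p>The appeal of a scheme like this is exactly the complaint in the post: the secret never leaves the chip, nothing typed is reusable by a keylogger, and there&rsquo;s no &ldquo;forgot it? just enter your DOB&rdquo; fallback to undermine it.</p>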