MPs’ Web Sites

When I set up Planet Westminster in 2006 I thought it would be a relatively simple project to maintain. Over the years, more and more MPs would start blogs. Every couple of months I’d add the new ones and everything would be great.

It hasn’t worked out like that at all. MPs’ web sites have proved to be really difficult to keep track of.

The problem is, of course, that the vast majority of MPs have absolutely no idea how web sites, blogs or web feeds work. That’s to be expected. What’s less expected is that many of them seem to get round that problem by delegating the work to people who also have no idea how web sites, blogs or web feeds work.

I’ve just done a clean-up of the feeds I’m currently monitoring. Here are some of the problems I’ve dealt with.

A few MPs (including Douglas Carswell and Caroline Lucas) changed the address of their web feed. Just changed it. No notification as far as I can see. No attempt to redirect the old address to the new one. Just an old address returning a 404 error. Anyone who was subscribed to the old address would simply have stopped getting updates. It’s almost like they don’t want people to follow what they have to say.

Ed Miliband’s web site has just ceased to exist. It now redirects you to the main Labour Party web site. Because the leader of the party obviously has no constituency responsibilities. Or something like that.

John McDonnell seems very confused. In 2007 he had a web site at one address. In 2010, he was at john-mcdonnell.net. Both of those sites are now dead and he has moved to yet another address. It’s like no-one has told him that you can reuse web site addresses. I wonder what he’ll do once he’s run out of variations of his name on different top-level domains.

Eric Joyce has just lost control of his domain. His address currently goes to an unfinished web site campaigning for “John Smith for State Senator”. It doesn’t look as though Joyce realises this as he’s still promoting the web site on his Twitter profile.

Then there’s Rory Stewart. His web feed was returning data that my RSS parser couldn’t parse. Taking a closer look, it turned out that it was an HTML page rather than RSS or Atom. And it was an HTML page that advertised an online Canadian pharmacy pushing Cialis. Not really what an MP should be promoting.
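My parser wasn’t wrong to give up, and a feed reader can tell fairly cheaply whether it has been handed a real feed or a hijacked HTML page. Here’s a minimal sketch of the idea in Python (stdlib only; the function name is my own invention, not anything Planet Westminster actually runs):

```python
import xml.etree.ElementTree as ET

def classify_feed(payload: bytes) -> str:
    """Guess whether a payload is RSS, Atom, or something else
    (e.g. an HTML error or spam page) from its root element."""
    try:
        root = ET.fromstring(payload)
    except ET.ParseError:
        return "not-xml"          # HTML tag soup usually lands here
    tag = root.tag.split("}")[-1].lower()  # strip any XML namespace
    if tag == "rss":
        return "rss"
    if tag == "feed":
        return "atom"
    return "unknown-xml"
```

Anything that comes back as `not-xml` or `unknown-xml` gets flagged for a human to look at, which is how I spotted the pharmacy page.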

Stuff like this happens all the time. MPs need to take more notice of it. And they need help from people who know what they are talking about. My theory (and it’s one that I’ve written about before) is that MPs’ web sites and blogs are often overcomplicated because they are built by companies from a corporate IT background who dismiss the possibility of using something free like WordPress and instead over-engineer a solution using the tools they are comfortable with. It can’t be a coincidence that many of the worst MP web sites I’ve seen serve pages with a .aspx extension (sorry – only geeks will understand that).

I’m going to repeat an offer I’ve made before. If any MP wants a blog set up for them, then I’m happy to help, or to put them in touch with someone who can. It needn’t be expensive. It needn’t be complex. But it can be very effective. And it will work.

Update: Eric Joyce replied to me on Twitter. He said:

Thanks. It’s being worked on and they seem to have pointed it at an obvious specimen page.


General Election in Battersea

There’s a General Election coming up. And I’m really not sure who I’m going to vote for. But that’s ok because this is going to be the UK’s first Election Campaign 2.0 where candidates and voters are going to be in constant two-way communication through the medium of the interweb. All I need to do is to hook up to the web feeds generated by all of my local candidates and within hours I’ll be able to make up my mind.

Or something like that.

Those nice people from YourNextMP have a list of the candidates currently standing in Battersea. Of course it might not be complete yet as candidates still have a few more days to register. But it’s enough to be going on with.

Let’s start with the current MP. Martin Linton has one of the smallest majorities in the country. In 2005 he beat Tory candidate Dominic Schofield by only 163 votes. Battersea has had a few boundary tweaks and that majority has been re-estimated to 332. Anyway, it’s small and you’d think that Martin Linton would be keen to get his message out to as many people as possible. I had him knocking on my door a couple of weeks ago and he’s sent me a letter telling me how to contact him during the campaign. But he doesn’t exactly appear to have embraced the digital era. He has a web site, of course, but there’s no web feed for it. If you go to the news page on the site, then there is a web feed but you have to view the page source in order to find it [Update: I was being slightly unfair there. The page does have autodiscovery configured correctly for the web feed – it just doesn’t seem to work in my version of Google Chrome. I see the web feed icon in Firefox]. The stories on the news page aren’t dated, but by reading the web feed you can see that the page was last updated on March 26th. Nothing at all since the election was called. He also has a Twitter page, but that was set up in February and has six entries, the last of which was posted on March 29th. All very disappointing.
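Feed autodiscovery, for what it’s worth, is just a <link> element in the page’s head; no human should ever need to view source to find it, because any tool can. Here’s a rough sketch of the lookup a browser or aggregator does, in stdlib Python (class and function names are mine):

```python
from html.parser import HTMLParser

class FeedFinder(HTMLParser):
    """Collect the href of every <link rel="alternate"> whose type
    is an RSS or Atom MIME type -- the autodiscovery convention."""

    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if (a.get("rel", "").lower() == "alternate"
                and a.get("type", "").lower() in self.FEED_TYPES
                and a.get("href")):
            self.feeds.append(a["href"])

def discover_feeds(page_html):
    finder = FeedFinder()
    finder.feed(page_html)
    return finder.feeds
```

If `discover_feeds` on your candidate’s front page returns an empty list, your digital campaign has a problem.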

But perhaps this lack of digital effort by Martin Linton can be explained by the even greater lack of effort made by his biggest rival, the Conservative candidate Jane Ellison. She has a web site, but as far as I can see there are no web feeds on it at all. Oh, actually, she republishes two on her news page – one from the Wandsworth Conservatives and one from the borough council (which is run by the Conservatives). I can find no other trace at all of digital engagement from Ms Ellison. No blog, no Twitter page, no Flickr account. Nothing to get her message out to the digital natives of Battersea.

Moving to the other extreme, we should now look at Layla Moran, the Liberal Democrat candidate. Ms Moran seems to have more internet presence than all of the other candidates combined. She has a web site, a Twitter page, a Flickr account and a YouTube account. And they all have web feeds. And she uses all of these methods to get her message out and interact with people. In particular, she’s very active on Twitter and she responds to many messages that people send her there. I might not end up voting for her, but it certainly won’t be because I don’t know anything about her.

Next we come to Guy Evans of the Green Party. Guy doesn’t even have a web site. Perhaps Green Party policy is that computers are bad for the environment. But that also seems to be true of David Priestly of the Official Monster Raving Loony Party and Nicholas Rogers of the Jury Team. I can’t find any kind of internet presence for either of them.

That leaves us with the two independent candidates in the Battersea election: Tom Cox and Hugh Salmon. Cox seems to be winning on number of feeds (he has a blog, a Twitter page and a YouTube account) but he doesn’t seem to use any of them much. Salmon, on the other hand, only has a blog and a Twitter account, but his Twitter account is pretty active – certainly more active than Linton’s or Cox’s but nowhere near as active as Moran’s.

So we have (currently) eight candidates. And between them I’ve managed to find eleven web feeds. And what do you do when you have a number of web feeds that you want to follow? Of course you build a planet. So here’s my new Battersea GE2010 page which monitors and aggregates all of the feeds I’ve discussed above. Yes, my web design skills are rather rudimentary. Please offer to help if you think you can do better. And if you know of any candidates or feeds that I’ve missed, then please let me know.

There’s no reason why this approach couldn’t be used elsewhere. I built the page with Perlanet, which is free software, so you could build something similar for your constituency. And there are plenty of other tools for doing the same thing. Let me know if you do. I’ll set up a directory or something.

If this is supposed to be the first truly digital General Election then let’s see how true that is. From what I can see in Battersea, we’ve still got some distance to go. Maybe things will change in the next three weeks.


Hack Day Plans

This weekend is Yahoo!’s Hack Day. And as in the last two years, I’m going to be there. Although (also as in the last two years) I’m far too old and soft to consider staying up and hacking through the night. I’ll be leaving at a reasonable time on Saturday evening to get home to a comfortable bed. This will be easier than in previous years as this year’s venue is near the tube network (Alexandra Palace is a lovely venue – but a real bugger to get to).

So the question is, what to hack on. Actually I already have some ideas. And (unsurprisingly for those of you who are regular readers) it’ll be based around the local community stuff that I’ve been writing about (and talking to Lloyd about) recently.

Here’s the current plan.

Building local planets is all very well, but it can be hard work to get a good one going. As I’ve mentioned before, you need good local knowledge to pick up interesting feeds about a location. This certainly doesn’t scale to building local community sites for the whole of the UK (well, not without a lot of help). But I think you can get a lot of the way there – close enough to be useful – with an automated process. Last month I mentioned some feeds that I was using as a basis for all of my local planets. I think that’s an idea that is worth exploring further. There are other feeds that can be added to that list. Things like MySociety’s FixMyStreet and GroupsNearYou. There are also things like TheyWorkForYou’s feeds of when your MP has said something in Parliament.
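Under the hood, aggregating a set of feeds into a planet page boils down to parsing each one, pooling the entries and sorting by date. A bare-bones sketch in Python (not the Perlanet code; it assumes well-formed RSS where every item carries a pubDate, which real planet software can’t afford to assume):

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def merge_rss(documents):
    """Pool the <item>s from several RSS documents and return
    (date, title) pairs, newest first -- the core planet operation."""
    items = []
    for doc in documents:
        root = ET.fromstring(doc)
        for item in root.iter("item"):
            title = item.findtext("title", default="(untitled)")
            # assumes every item has an RFC 822 pubDate
            when = parsedate_to_datetime(item.findtext("pubDate"))
            items.append((when, title))
    items.sort(key=lambda pair: pair[0], reverse=True)
    return items
```

Everything else a planet does – fetching, caching, templating, coping with broken feeds – is wrapping around that merge.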

One problem with this approach is that localities aren’t named consistently. For some of these feeds you need a placename (a Google news search for news mentioning “Balham”) and for others you need a postcode (which MP represents SW12?). I’ve been looking at Yahoo!’s GeoPlanet API and it looks like it will get me some way towards solving this problem (as a bonus, there’s already a Perl module for it).
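The postcode half of the lookup is at least mechanical: the “outward code” (the part before the space – “SW12” in “SW12 9AZ”) is what constituency and council lookups generally key on. A quick sketch, assuming well-formed UK postcodes with the space present (the function name is mine):

```python
import re

def outward_code(postcode: str) -> str:
    """Extract the outward code -- 'SW12' from 'SW12 9AZ' --
    i.e. the area-and-district part of a UK postcode."""
    m = re.match(r"\s*([A-Z]{1,2}[0-9][A-Z0-9]?)\b", postcode.upper())
    if not m:
        raise ValueError(f"doesn't look like a UK postcode: {postcode!r}")
    return m.group(1)
```

Placenames are the genuinely hard half, which is exactly where something like GeoPlanet should earn its keep.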

All of which leads me to my plan. A service that builds automated web sites providing local information for communities in the UK. I’m imagining that you put in a post code (or, perhaps, a placename) and it goes away and builds a useful and interesting web site for you.

I have no idea how close I’ll get in 24 hours of hacking, but it will be an interesting experiment. If you’re going to be at Hack Day and this sounds interesting to you, then please get in touch.


Planet Atheist

I’ve mentioned before that I run a few planets. A planet is a simple web site which aggregates web feeds on a particular subject. They are named after the software which is used to build many such sites.

I’m always looking out for good ideas of other planets to add to my collection. Yesterday on irc, Dave Hodgkinson suggested a “planet sceptic” which is, of course, an excellent idea. It would be great to have a planet which aggregates a number of feeds from the growing sceptic/atheist community. And I thought it would be an interesting experiment to ask for ideas for the feeds to include.

Looking through my Bloglines subscriptions, I find a number of obvious candidates.

But I’m probably missing dozens of interesting feeds. If you have a suggestion, then please leave a comment. I’ll start with my list today but it’ll be easy enough to add stuff later.

Oh, and one other question. What should I call it? Dave originally suggested “Planet Sceptic”. Does “Planet Atheist” sound better? Or “Planet Rationalism”? Or perhaps “Planet Bright” (no, probably not that!) Again, let me know what you think in the comments.


Missing MPs’ Blogs

A while ago, I set up Planet Westminster – a pretty simple site that aggregates all of the MPs’ blogs that I could find. It was largely created to scratch a personal itch. I wanted a simple way to subscribe to all MPs’ blogs in my feed reader. And that’s really how I use it most of the time. I just read it in Bloglines and rarely bother to look at the site (which explains why I haven’t fixed the character-encoding problems that are obvious to anyone visiting the site).

But I had a look at it today. And I tweaked a couple of presentation problems. As part of the process, I ran the software which aggregates the feeds by hand a couple of times. And that showed me one interesting issue that I had previously missed. The program displays an error when it can’t find the feed that it’s looking for. It’s currently generating eleven “missing feed” errors. That’s out of thirty-six feeds that I currently monitor. Perhaps a couple of those could be put down to temporary network glitches, but that’s potentially over a quarter of the (small number of) blogging MPs who have either given up on blogging or have moved their feeds without putting redirection in place (that’s starting to become quite a regular topic round these parts).

At one point it looked like MPs might start blogging in reasonable numbers. We’d broken the 5% barrier. It would be a shame if they decided that it was a waste of their time and started to abandon it.

The errors I’m getting are as follows (with links to the missing web feeds). If any of these are your MP, then perhaps you’d investigate what’s going on and report back. One of them is my MP, Martin Linton, so I’ll start by investigating him.

Update: Having looked into it a bit further, I see that many of the problems are down to people moving their web feeds without putting redirection in place. Obviously I don’t blame the MPs for this, but it indicates how little their “tech support” people know about how this stuff works.

A few of the blogs have closed down though. And it’s interesting to note that in a couple of places a blog feed has been replaced by a news feed.

I need to put aside some time to do some more research into this in order to ensure that the data I have is up to date. And this is exactly the kind of information that PoliticalWeb is supposed to provide.


RSS Failure

Oops. Busted.

Earlier this year, I wrote a mild rant about web sites that change their RSS feeds without redirecting them, thereby losing a number of readers.

Last night mou commented on that entry pointing out that I’d done something very much like that myself. For the last two months, I haven’t been publishing a new index.rdf feed.

I strongly suspect that the date of the last new version of that file coincides with the date that I installed a new version of Movable Type and reset all of the templates to the defaults. By default, current versions of MT don’t seem to publish RSS feeds. They just publish an Atom version (atom.xml).

That’s no excuse though. I knew about that problem. Previously I’d worked around it by installing an RSS template from an older version of MT. I might do that again when I have some spare time to think about it. But in the meantime I’ve taken the easiest option and created a symbolic link from atom.xml to index.rdf. Hopefully that’ll work in the short term.

Apologies to anyone who was subscribed to the RSS feed and who, no doubt, thinks that I’ve dropped off the face of the world. I’m sorry that you’ll suddenly have two months worth of my nonsense to plough through this morning.

It might be a good time to mention the other feeds that I set up recently. There’s one that contains all of my long-form writing from this and other blogs, one that has shorter items from various microblogging platforms, and then there’s the feed from planet davorg which contains everything.


Redirecting RSS

I’ve harped on about this before, but I firmly believe that when you publish a URL on the web then it should be permanent. Of course you might want to change the way that your site is set up at some point in the future, but when you do that you should do everything you can to ensure that visitors using the old URLs are seamlessly redirected to the new URL.

And this is true of any kind of URL. It’s not just web pages. The same is true of the URLs of your web feeds. Many people who read your web feeds won’t check that they’re still reading the correct address. They’ll usually just assume that you’re still publishing the feed to the same place. Perhaps I’m not typical, but I subscribe to almost 200 feeds in Bloglines. If one of those feeds goes quiet, it could be weeks before I notice the problem and investigate what has happened.
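That “gone quiet” check is something a feed reader could perfectly well automate: if the newest entry in a feed is older than some threshold, flag it for investigation. A crude sketch (the threshold and function name are my own; Bloglines offers nothing like this, which is rather the point):

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def gone_quiet(rss_document, now=None, weeks=4):
    """True if the newest <item> in an RSS document is older than
    the given number of weeks -- a rough 'has this feed silently
    died or moved?' check for a feed reader to run."""
    now = now or datetime.now(timezone.utc)
    dates = [parsedate_to_datetime(d.text)
             for d in ET.fromstring(rss_document).iter("pubDate")
             if d.text]
    return not dates or max(dates) < now - timedelta(weeks=weeks)
```

It wouldn’t tell you *why* a feed went quiet – dead site, moved URL, or an author who simply stopped writing – but it would turn weeks of not noticing into days.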

When I was talking about the problems with the new Sun RSS feeds last year, I mentioned in passing that they had lost a lot of subscribers by just moving them to new URLs, but Martin covered it in more detail.

In the last few days, I’ve seen three instances of the same thing happening. Three places where a web feed just stopped working. Only one of them bothered to tell their users what was going on.

Firstly, I noticed that I was no longer getting updates from my MP’s web site. When I investigated further I found that they had redesigned the site and the URL for the feed had changed. Now I don’t expect my MP or his staff to understand stuff like this. But I expect they paid a lot of money to the people who redesigned the site. It would have been nice to think they were getting their money’s worth.

Secondly, this morning the BBC Doctor Who news site told me that it was moving (again, due to a redesign and change of technology). In this case they told their readers to resubscribe to the new feed, but a simple web redirection could have made it seamless. As a big Doctor Who fan, Martin has also covered this in some detail. I expect the BBC’s web department to have the experience to know that this is a really bad way to handle the move.

And finally, this afternoon I noticed that I wasn’t getting any news from BoingBoing. I only noticed this because I had submitted a story to them and was looking to see if it had been published. Like the BBC web group, the people behind BoingBoing should really know what they are doing and shouldn’t make such basic mistakes.

I think that web feeds are a great tool. They enable me to regularly read far more data from the web than I did before I used them. But it’s clear that many web site owners are publishing them because everyone else is doing it and they don’t really understand how important they are.

Update: Another one. Today (May 1st) I see that the Telegraph have moved all of their RSS feeds. At least they dropped a message about it into the old feed. But haven’t these people heard of URL redirection?



It’s nine years since I registered the domain and set up a web site there. And I’ve never really known what to do with it. Since I started blogging, it’s seemed even less useful. The blog front page was where all the interesting stuff happened. The main page just contained links to a few bad jokes and a couple of useful sub-sites. For years I just tinkered with the design a bit, but I was never really happy with it. Sometime early in 2005 I rewrote it so that it took a lot of its content from various RSS feeds that I published. But the code to do that was a really nasty hack which I’ve wanted to rewrite since the day I first wrote it.

A few weeks ago, I wrote Perlanet which is a simple program for aggregating web feeds and republishing the results. As I had some spare time yesterday, I rewrote the front page using Perlanet to do most of the heavy lifting. It now contains the full text of the most recent entries from my various blogs, together with examples of my latest flickr uploads and list of recent twitters and delicious links. It’ll be simple to add other feeds to the mix in the future.

I realise that this isn’t exactly new. People have had sites like this for years. But I’m happy at how quickly I managed to build this, and happier that it shows that Perlanet is as flexible as I wanted it to be. I’m also pretty happy with the way that it looks (although that is, I suspect, more to do with the Boilerplate CSS framework than my design skills).

I’ve also started to publish a number of Atom feeds. As you’ll see from the top right of the new page, there is one feed containing the blog entries, one containing the shorter stuff, one for photos (that’s just the original flickr feed but it might be expanded in the future) and one that contains everything (that’s the planet davorg feed). That allows readers a bit more flexibility over what content they subscribe to.

Oh, and I’ve also taken the opportunity to remove the links to all the old jokes. The pages are still there if you know where to look, but Google Analytics tells me that they won’t be missed.


More Planets

Over the weekend I found time to rebuild the rest of my missing planets. I’ve resurrected Planet Balham (Atom), Planet Westminster (Atom) and Planet Doctor Who (Atom). They all have Atom feeds available as well.

This has been an interesting test of Perlanet (my simple planet-building program). When building planet davorg, I was only using feeds that I had some kind of control over. It was therefore pretty simple to ensure that the web page created was valid HTML (though, due to some bugs in the Perl modules I’m using, the same can’t be said of the Atom feed). But with these new planets, I’m aggregating feeds from all sorts of places and am seeing problems that I hadn’t seen before. In particular I’ve changed Perlanet to deal with the cases where the feed can’t be downloaded for some reason (I think that some of the MPs on my list have stopped blogging) and where the feed isn’t valid.

There are also plenty of examples of feeds that have some pretty mad HTML in them which is breaking the layout of the output pages. On Planet Balham there seems to be some broken HTML that is badly affecting the <div>s on the page, moving the Google Adsense block halfway down the page. Also, the second half of the page is currently in italics due, I suspect, to an unclosed <i> tag. On Planet Westminster there’s also some kind of problem which means that the names of the feeds change size halfway down the page.

So it’s clear that I need to add something to clean up the feeds. I’ll probably look at using HTML::Tidy or HTML::Scrubber (perhaps both). Expect some better looking pages in the next few days.
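In the meantime, here’s roughly what the scrubbing stage has to do, sketched in Python rather than the Perl modules I’ll actually use: keep a whitelist of harmless tags, drop everything else, and force every tag that was opened to be closed so one entry’s broken markup can’t italicise the rest of the page.

```python
from html.parser import HTMLParser

class Scrubber(HTMLParser):
    """Rebuild feed HTML keeping only whitelisted tags, closing
    anything left open. Attributes are dropped entirely, which is
    cruder than HTML::Scrubber would be -- links lose their hrefs."""

    ALLOWED = {"p", "a", "em", "strong", "i", "b", "ul", "ol", "li"}

    def __init__(self):
        super().__init__()
        self.out = []
        self.stack = []        # currently open (allowed) tags

    def handle_starttag(self, tag, attrs):
        if tag in self.ALLOWED:
            self.out.append(f"<{tag}>")
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            # also close any tags opened inside it, keeping nesting valid
            while self.stack:
                open_tag = self.stack.pop()
                self.out.append(f"</{open_tag}>")
                if open_tag == tag:
                    break

    def handle_data(self, data):
        self.out.append(data)

def scrub(html):
    s = Scrubber()
    s.feed(html)
    s.close()
    while s.stack:             # force-close anything never closed
        s.out.append(f"</{s.stack.pop()}>")
    return "".join(s.out)
```

An unclosed <i> goes in one end and a properly closed one comes out the other, which is exactly the Planet Balham bug.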


Rebuilding Planets

A few months ago I moved this site to another server. At that point all of the “planets” that I was hosting stopped working too, because the software that I was using to build them wasn’t installed on the new server. And installing it was going to be a bit of a nightmare.

But over the last few days I’ve written a simple system that does much the same thing. You can read more of the technical details over at my use.perl blog (which is, of course, one of the sources aggregated into planet davorg).

Planet davorg is already back online. My other planets should be back over the next few days. Now I have the software, it’s just a case of writing a few configuration files.

Of course, once you start aggregating stuff like this, you run into the data repetition problems that Paul Mison mentioned last week. I should be able to use the same software (or something based on it) to easily offer readers a choice of feeds containing just the content they are interested in.