Opentech 2015

It’s three weeks since I was at this year’s Opentech conference and I haven’t written my now-traditional post about what I saw. So let’s put that right.

I got there rather later than expected. It was a nice day, so I decided that I would walk from Victoria station to ULU. That route took me past Buckingham Palace and up the Mall. But I hadn’t realised that Trooping the Colour was taking place, which made it impossible to get across the Mall and into Trafalgar Square. Of course, I didn’t realise that until I reached the corner of St James’s Park near Admiralty Arch. A helpful policeman explained what was going on and suggested that my best bet was to go to St James’s Park tube station and get the underground to Embankment. This involved walking most of the way back through the park. And when I got to the tube station, it was closed. So I ended up walking to Embankment.

All of which meant I arrived about forty minutes later than I wanted to and the first session was in full swing as I got there.

So what did I see?

Being Female on the Internet – Sarah Brown

This is the talk I missed most of – and the one I had really wanted to see. As I arrived, Sarah was just finishing her talk, and the audio doesn’t seem to be on the Opentech web site.

Selling ideas – Vinay Gupta

I think I didn’t concentrate on this as much as I should have. It was basically a talk about marketing – which is something that the geek community needs to get better at. Vinay illustrated his talk with examples from his Hexayurt project.

RIPA 2 – Ian Brown

Ian talked about potential changes to the Regulation of Investigatory Powers Act. It was all very scary stuff. The slides are online.

The 3rd year of Snowdenia — Caroline Wilson Palow

Caroline talked about Ed Snowden’s work and the way it is changing the world.

Privacy: I do not think that word means what you think it means — Kat Matfield

Kat has been doing research into how end users view privacy on the web. It’s clear that people are worried about their privacy, but they don’t know enough about the subject to focus their fear (and anger) on the right things.

The State of the Network Address — Bill Thompson

Bill thinks that many of the world’s woes are caused by people in power abusing the technological tools that geeks have built. And he would like us to do more to prevent them doing that.

The State of Data — Gavin Starks

Gavin works for the Open Data Institute. It’s his job to help organisations to release as much data as possible and to help the rest of us to make as much use of that data as possible. He talked about the problems that he sees in this new data-rich world.

Using data to find patterns in law — John Sheridan

John is using impressive text parsing and manipulation techniques to investigate the UK’s legislation. It sounds like a really interesting project.

Scenic environments, healthy environments? How open data offers answers to this age-old question. — Chanuki Seresinhe

The answer seems to be yes :-)

I stood as a candidate, and… — James Smith

James stood as a candidate in this year’s general election, using various geek tools to power his campaign. He talked through the story of his campaign and tried to encourage others to try the same thing in the next election.

Democracy Club — Sym Roe

The Democracy Club built a number of tools and web sites which assembled databases of information about candidates in the recent election – and then shared that data with the public. Sym explained why and how these tools were built.

The Twitter Election? — Dave Cross

This was me. I’ve already written up my talk.

Election: what’s next

This was supposed to follow my talk. Bill Thompson had some ideas to start the discussion and suggested that anyone interested retire to the bar. I put away my laptop and various other equipment and then set off to find them. But I failed, so I went home instead.

Yet another massively successful event. Thanks, as always, to all of the speakers and organisers.

TwittElection at OpenTech

Last Saturday was OpenTech. It was as great as it always is and I’ll write more about what I saw later. But I gave a talk about TwittElection in the afternoon and I thought it might be useful to publish my slides here along with a brief summary of what I said.

  • I started with a couple of screenshots of what TwittElection is. There’s basically a main page which shows how many days are left until the general election and a page for every constituency which has a widget displaying a Twitter list for all of the candidates in that constituency.
  • Why did I do it? Well, I love elections. I have vague memories of one (or perhaps both) of the 1974 general elections and I have closely followed every general election since then. In the 90s I was occasionally one of those annoying people who ask you for your voter number as you’re leaving the polling station, and in 2005 I worked all night to make sure that the results on the Guardian web site were up to date.
  • I love Twitter too. Who doesn’t?
  • In 2010 I created a site that monitored the candidates in my local constituency. It wasn’t just Twitter (which was far less important back then) but any kind of web feed that they produced. That’s easy enough to do for one constituency, but it’s a bit more of a challenge for 650.
  • The technology for the system was pretty simple. It was the data that was going to be a lot trickier.
  • Just as I was considering the project, Twitter made a couple of changes which made my life substantially easier. Firstly, they increased the number of Twitter lists that each user could create from 20 to 1000 (I needed 650). And secondly, they removed the restriction that tied each Twitter list widget to a specific list. Under the old system, I would have needed to create 650 individual widgets. Under the new system, I could create one widget and pass it a list ID in order to display any of my 650 lists.
  • I wrote the code in Perl. I made a throwaway remark about it being the “programming language of champions”. Someone in the audience tweeted that quote and it’s been retweeted rather a lot.
  • I hosted the site on Github Pages in case it got too popular. This was a ridiculous thing to be worried about.
  • I used Bootstrap (of course) and small amounts of various Javascript libraries.
  • The data was harder. We have 650 constituencies and each one will have about six candidates. That means I’ll be looking for data about something like 4,000 candidates. And there’s no official centralised source for this data.
  • Back in November I asked my Twitter followers if they knew of anyone who was collecting lists of candidates and Sam Smith put me in touch with the Democracy Club.
  • At the time, the Democracy Club were just building a new version of YourNextMP – a crowd-sourced list of candidates. It did all that I needed. Which made me very happy. [Note: My talk followed one from the Democracy Club which went into this in far more detail.]
  • So with data from YNMP and my code, the site was built.
  • And it worked pretty well. There were a few bugs (including one that was pointed out by a previous speaker in the same session) but they all got fixed quickly.
  • I became an expert in Twitter error codes.
  • 403 and 429 are the codes that Twitter returns when you make more API requests than you are allowed to. There are two ways to deal with Twitter’s rate limits. You can keep a careful count of your requests and stop before you hit the limits. Or you can keep going until you get one of these codes back at which point you stop. The second option is far simpler. I took the second option. [Note: At this point I forgot to mention that the rate limits were so, well…, limiting that when I got my first complete data dump from YNMP, it took almost two days to build all of the Twitter lists.]
  • 108 means you’re trying to do something with a user that doesn’t exist. Basically, you’ve got the username wrong. Sometimes this is because there’s a typo in the name that YNMP has been given. Sometimes it’s because the user has changed their Twitter username and YNMP doesn’t know about the change yet. One common cause for the latter is when MPs changed their Twitter usernames to remove “MP” whilst the campaign was in progress and legally, there were no MPs. [Note: One of the YNMP developers spoke to me afterwards and admitted that they should have handled Twitter usernames better – for example, they could have stored the ID (which is invariant) rather than the username (which can change).]
  • Error 106 means that the user has blocked you and therefore you can’t add that user to a Twitter list. This seems like strange behaviour given that candidates are presumably using Twitter to publicise their opinions as widely as possible.
  • The first time I was blocked it was @glenntingle, the UKIP candidate for Norwich North.
  • I wondered why he might be blocking me. A friend pointed out that he might be embarrassed by his following habits. It turned out that of the 700 people he followed on Twitter, all but about a dozen of them were young women posting pictures of themselves wearing very little.
  • There was some discussion of this amongst some of my friends. This was apparently noticed by Mr Tingle who first protected his tweets and then deleted his account.
  • I’m not sure how good I feel about hounding a candidate off of Twitter.
  • Another UKIP candidate, @timscottukip, also blocked me. And I heard of another who was running his account in protected mode.
  • Some users didn’t understand crowd-sourcing. Every constituency page included a link to the associated page on YNMP along with text asking people to submit corrections there. But I still got a lot of tweets pointing out errors in my lists.
  • 72% of candidates were on Twitter.
  • Results by party were mixed. 100% of the SNP candidates were on Twitter, but only 51% of UKIP candidates (or perhaps I couldn’t see the others as they were blocking me!)
  • Was it worth it? Well, only 1000 or so people visited the site over the course of the campaign.
  • I haven’t yet seen if I can get any stats on people using the raw Twitter lists rather than looking at my web site.
  • I need to rip out all of the information that is specific to that particular election and encourage people to use the code for other elections. YNMP is based on software called PopIt and I think my code could be useful wherever that is used.
  • There are 1790 days until the next UK general election (as of Saturday 13th June 2015).
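The rate-limit handling described in the notes above – “keep going until you get one of these codes back, at which point you stop” – can be sketched in a few lines. This is a hedged illustration rather than the real TwittElection code (which was Perl): `add_to_list` stands in for a real Twitter API call, simulated here so the example is self-contained, and 403/429 are the codes mentioned above.

```python
import time

RATE_LIMITED = {403, 429}   # Twitter's "slow down" status codes

def make_flaky_api():
    """Simulate a rate-limited API: the first attempt for each
    username fails with 429, and the retry succeeds."""
    seen = set()
    def add_to_list(list_id, username):
        if username not in seen:
            seen.add(username)
            return 429
        return 200
    return add_to_list

def add_all(members, add_to_list, wait_seconds=0):
    """Add members to a list one by one; on a rate-limit response,
    pause and retry the same member instead of counting requests."""
    added = []
    queue = list(members)
    while queue:
        status = add_to_list("some-list-id", queue[0])
        if status in RATE_LIMITED:
            time.sleep(wait_seconds)   # wait for the window to reset
            continue                   # then retry the same member
        added.append(queue.pop(0))
    return added

api = make_flaky_api()
print(add_all(["alice", "bob", "carol"], api))
```

The alternative – counting requests and stopping just short of each limit – means tracking every kind of request separately, which is why waiting for the error is so much simpler.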

OpenTech 2013

Yesterday was the (almost) annual OpenTech conference. For various reasons, the conference didn’t happen last year, so it was good to see it back this year.

OpenTech is the conference where I most wish I could clone myself. There are three streams of talks and in pretty much every slot there are talks I’d like to see in more than one stream. These are the talks that I saw.

Electromagnetic Field: Tales From the UK’s First Large-Scale Hacker Camp (Russ Garrett)
Last August, Russ was involved in getting 500 hackers together in a field near Milton Keynes for a weekend of hacking. The field apparently had better connectivity than some data centres. Russ talked about some of the challenges of organising an event like this and asked for help organising the next one which will hopefully take place in 2014.

Prescribing Analytics (Bruce Durling)
Bruce is the CTO of Mastodon C, a company that helps people extract value from large amounts of data. He talked about a project that crunched NHS prescription data and identified areas where GPs seem to have a tendency to prescribe proprietary drugs rather than cheaper generic alternatives.

GOV.UK (Tom Loosemore)
Tom is Deputy Director at the Government Digital Service. In less than a year, the GDS has made a huge difference to the way that the government uses the internet. It’s inspirational to see an OpenTech stalwart like Tom having such an effect at the heart of government.

How We Didn’t Break the Web (Jordan Hatch)
Jordan works in Tom Loosemore’s team. He talked in a little more detail about one aspect of the GDS’s work. When they turned off the old DirectGov and Business Link web sites in October 2012, they worked hard to ensure that tens of thousands of old URLs didn’t break. Jordan explained some of the tools they used to do that.
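I don’t know exactly which tools the GDS used, but at its core the technique is a big lookup table mapping every old URL to its new home. A minimal sketch, with made-up URLs, assuming the usual convention of a 301 redirect for pages that moved and a 410 for pages that were deliberately retired:

```python
# Hypothetical mapping from old DirectGov-style paths to new GOV.UK pages.
REDIRECTS = {
    "/old/tax-guide": "https://www.gov.uk/tax-guide",
    "/old/start-a-business": "https://www.gov.uk/start-a-business",
}

def respond(path):
    """Return (status, location) for a request to a legacy URL."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]   # moved permanently
    return 410, None                  # gone, and gone on purpose

print(respond("/old/tax-guide"))
```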

The ‘State of the Intersection’ address (Bill Thompson)
Bill’s talk was couched as a warning. For years, talks at OpenTech have been about the importance of Open Data and it’s obvious that this is starting to have an effect. Bill is worried that this data can be used in ways that are antithetical to the OpenTech movement and warned us that we need to be vigilant against this.

Beyond Open Data (Gavin Starks)
Gavin has been speaking at OpenTech since the first one in 2004 (even before it was called OpenTech) and, as with Tom Loosemore, it’s great to see his ideas bearing fruit. He is now the CEO of the Open Data Institute, an organisation founded by Tim Berners-Lee to promote the production and use of Open Data. Gavin talked about how the new organisation has been doing in its first six months of existence.

Silence and Thunderclaps (Emma Mulqueeny)
Emma has two contradictory-sounding ideas. The Silent Club is about taking time out in our busy lives to sit and be still and silent for an hour or so; and then sending her a postcard about what you thought or did during that time. The Thunderclap is a way to get a good effect out of that stack of business cards that we all seem to acquire.

Thinking Pictures (Paul Clarke)
Paul takes very good photographs and used some of them to illustrate his talk which covered some of the ethical, moral and legal questions that go through his mind when deciding which pictures to take, share and sell.

1080s – the 300seconds project (300seconds)
The 300 seconds project wants to get more women talking at conferences. And they think that one good way to achieve that is for new speakers to only have to talk for five minutes instead of the full 20 or 40 minutes (or more) that many conferences expect. The Perl community has been using Lightning Talks to do this with great success for over ten years, so I can’t see why they shouldn’t succeed.

Politics, Programming, Data and the Drogulus (Nicholas Tollervey)
Nicholas is building a global federated, decentralized and openly writable data storage mechanism. It’s a huge task and it’s just him working on the project on his commutes. Sounds like he needs a community. Which is handy as the very next talk was…

Scaling the ZeroMQ Community (Pieter Hintjens)
Pieter talked about how the ZeroMQ community runs itself. Speaking as someone who has run a couple of open source project communities, some of his rules seemed a little harsh to me (“you can only expect to be listened to if you bring a patch or money”) but his underlying principles are sound. All projects should aim to reach a stage where the project founders are completely replaceable.

The Cleanweb Movement (James Smith)
I admit that I knew nothing about the Cleanweb Movement. Turns out it’s a group of people who are building web tools which make it easier for people to use less energy. Which sounds like a fine idea to me.

Repair, don’t despair! Towards a better relationship with electronics (Janet Gunter and David Mery)
Janet and David started the Restart Project, which is all about encouraging people to fix electrical and electronic devices rather than throwing them out and buying replacements. They are looking for more volunteers to help people to fix stuff (and to teach people how to fix stuff for themselves).

CheapSynth (Dave Green)
Dave Green has been missing from OpenTech for a few years, but this was a triumphant return. He told us how you can build a cheap synth from a repurposed Rock Band game controller. He ended his talk (and the day) by leading the room in a rendition of Blue Money.

As always, OpenTech was a great way to spend a Saturday. Thank you to all of the organisers and the speakers for creating such an interesting day.


But I spent yesterday hacking on something. More on that later.

Watching the Press – Notes

Today, at Opentech, I gave a talk called “Watching the Press“. Here are some notes and references to go with the talk.

Downton Abbey
The Daily Mail claimed that two hours of material were being cut from Downton Abbey for broadcast in the US – because the plot was too complex for US viewers to follow. They mentioned that it was showing on PBS and that PBS didn’t show any adverts. The original broadcast took eight hours, of which two were taken up by adverts.

The Daily Mail story is here. And here’s an interesting blog post from Jace Lacob who explained this in some detail to the Daily Mail reporter.

Salt in Chippies
The Daily Express ran a headline saying “Salt Banned in Chipshops“. They went on to claim that “Salt shakers are being removed from fish and chip shops in a nanny state ruling on what we can eat”. The truth (as explained if you actually read the story) was that one council was suggesting that fast food restaurants might keep the salt behind the counter so that people had to ask for it.

Winterval/War on Christmas
Sigh. This one has run for so long that the tabloids have just been repeating each others’ stories for well over ten years. But there’s no truth at the heart of the story.

Last year Kevin Arscott did a sterling job in researching the full story of these rumours. His report is well worth reading (not that any tabloid journalists will ever bother).

Most tabloid journalists don’t understand science. Therefore their stories are often disastrous. The best example is obviously the tabloid stories which led to the MMR hysteria of the late 1990s. The tabloids still refuse to accept their part in this and still insist on referring to MMR as a controversial vaccine.

Tabloids also give uncritical coverage to pseudo-science. Three stories pulled at random from the Daily Mail.

The best source for research into pseudo-science in the press is, of course, Ben Goldacre’s Bad Science blog.

What has changed?
None of this is new. Tabloids have been doing this for years. So what has changed? I think that the internet has brought about three changes.

  1. The tabloids have large new audiences. Many of us would never pay for a tabloid newspaper, but if the content is available for free on their web site we’ll look at it. This is clear from most Daily Mail comment threads, where many of the commenters will be putting forward views that you don’t expect from traditional Mail readers.
  2. The internet makes it easier to check facts. The journalists don’t often take advantage of this, but we can. See my recent blog post on Google and Adele for a good example of this.
  3. The internet also makes it easy to share your findings about the press. Jan Moir found this out to her cost in October 2009. She described the reaction to her piece on the death of Stephen Gately as “a heavily orchestrated internet campaign“. It wasn’t, of course. But it very easily could have been.

Some Interesting Projects
Churnalism is a web site for comparing press releases with published stories. The similarities can be startling.
Istyosty is a site which caches Daily Mail content so that we can share links without them getting more click revenue.
Last year some of us tried to suggest some improvements to the Press Complaints Commission. See Tim’s blog post on the campaign for more details.
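The idea behind Churnalism – scoring how much of a published story is lifted from a press release – can be illustrated with Python’s standard difflib. The texts below are invented, and real churn detection is more sophisticated than a single similarity ratio, but the principle is the same:

```python
import difflib

press_release = (
    "Acme Corp today announced record profits of ten million pounds."
)
story = (
    "Acme Corp today announced record profits of ten million pounds, "
    "a spokesman said."
)

# Ratio of matching characters to total characters: 1.0 means identical.
ratio = difflib.SequenceMatcher(None, press_release, story).ratio()
print(round(ratio, 2))
```

A story that scores close to 1.0 against a press release is churn, not journalism.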

Press Watching Blogs
The Sun – Tabloid Lies : @the_sun_lies
Mailwatch : @mailwatch
Express Watch : @expresswatch
Five Chinese Crackers : @5ChinCrack
Enemies of Reason :
Tabloid Watch :
The Daily Quail :
Angry Mob :
Nadia Knows :

How can I help?
Follow our 3-step programme

  1. Read the tabloids (Google Reader is your friend)
  2. Check facts (at least more than the journalist did)
  3. Share your information (online and offline)

Tell us what you’ve found. We’ll help you spread the message.

The press lies to you. Let’s tell people.

Opentech Approaches

This year’s Opentech conference is this coming Saturday at ULU. It’s earlier than usual this year, so it might have crept up on you a bit.

I’m speaking at the conference again this year and I’ve been promoted to the main room. I’m on in the 4-5pm session speaking for twenty minutes on “Watching the Press”. I’ll be talking about how the internet makes it easier to keep tabs on the nonsense that the tabloids like to spread. I’ll be pointing out some of the more ridiculous stories that we’ve seen over the last few years and encouraging the audience to get involved in watching the press and raising awareness of its lies.

Over the weekend I’ll publish another post that will contain the slides from the talk along with lots of references to the various things I’ll be covering.

If you’re at the conference (and I highly recommend it) then please come up and say hello.

Opentech 2010

On Saturday I was at the Opentech conference. Some brief notes about the sessions I saw.

The day was sponsored by, so it seemed polite to see one of their sessions first. I watched Richard Stirling and friends talk about some of the work they’re doing on releasing lots and lots of linked data. There were some interesting-looking demonstrations (using a tool that, I believe, was called Datagrid [Update: Sam Smith reminds me that it was actually Gridworks]) but I was in the back half of the room and it was a little hard to follow the details. The session also had a demonstration of the new site.

The next session I attended was in the main hall. Hadley Beeman talked about the LinkedGov project which aims to take a lot of the data that the government are releasing and to improve it by adding metadata, filling in holes and generally cleaning it up.

Hadley was followed by Ben Goldacre and Louise Crow who have a cracking idea for a web site. They want to expose all of the clinical trial data which never gets published (presumably because the trial didn’t go the way that the people running it wanted it to go). They already have a prototype that demonstrates which pharmaceutical companies are particularly bad at this.

The final talk in this session was by Emma Mulqueeny and a few friends. They were introducing Rewired State, which runs hackdays to encourage people to build cool things out of government data. I was particularly impressed with Young Rewired State, which runs similar events aimed at people under the age of 18.

It was then lunchtime. That went disastrously wrong and I ended up not eating and getting back late so that I missed the start of the next session. Unfortunately I missed half of Louise Crow’s talk about MySociety’s forthcoming project FixMyTransport. I stayed to watch Tom Steinberg give an interesting explanation of why he though GroupsNearYou hadn’t taken off. Finally in this session, Tim Green and Edmund von der Berg talked about how three separate groups had worked together on some interesting projects during the last general election.

I was speaking in the next session. Unusually for Opentech, the organisers decided to have a session about the technology that underlies some of the projects that the conference is about. I talked about Modern Perl, Mark Blackman covered Modern FreeBSD and Tom Morris introduced Modern Java (or, more accurately, Scala).

The next session I attended was largely about newspapers. Phil Gyford talked about why he dislikes newspaper web sites and why he built Today’s Guardian – a newspaper web site that looks more like a newspaper. Gavin Bell talked about the future of social networking sites and Chris Thorpe talked about automating the kind of serendipity that makes newspapers such a joy to read.

For the final session I went back to the main hall. Mia Ridge talked about why the techies who work for museums really want to open up their data in the same way as the government is now doing and asked us to go banging on the museums’ doors asking for access to their data. And finally Robin Houston told some interesting stories about the 10:10 campaign.

As always the conference was really interesting. As always there were far too many things that I wanted to see and in every session I could have just as easily gone to see one of the other tracks. And as always, I have come away from the conference fired with enthusiasm and wanting to help all of the projects that I heard about.

Of course, that’s not going to happen. I’m going to have to pick one or two of them.

If you weren’t at Opentech, then you missed a great day out. You should make an effort to come along next year.

Opentech Overview

[Update: Details of this year’s Opentech conference are at]

Yesterday was the annual Opentech conference. I’m going to have some more to say about it in some detail over the next few days, but those thoughts are still percolating, so in the meantime here’s a list of the talks that I watched.

Community and Democracy in Hijacked Space

One of the Space Hijackers talked about some of their projects. If you haven’t heard of them, they are the people who drove a tank into the G20 protests. Their protests sound like a lot of fun.

Does FOI work? You bet! – Heather Brooke

Heather Brooke told the story of how she used the Freedom of Information Act to finally get details of MPs’ expenses out of the House of Commons. It was a long and complex story and Heather made it very interesting.

Digital Engagement – Richard Stirling (Cabinet Office)
Open Government Data – John Sheridan (OPSI)

Two civil servants talked about how the government is making more and more data available to the public. They asked people to take the data and build interesting applications with it – the more applications that get built, the easier it is for them to persuade people to release more data.

Opening Up Government Data: Give it to us Raw, Give it to us Now – Rufus Pollock (Open Knowledge Foundation)

Rufus Pollock of the Open Knowledge Foundation replied to the previous two talks, explaining where he thought the government’s current efforts are falling short. They need to do more, sooner, and they need to get the licensing right – the more open the licence is, the better.

10 Cultures – Bill Thompson

Fifty years on from the original, Bill Thompson updated CP Snow’s “Two Cultures” talk for the twenty-first century and turned the title into a geek joke. Thompson’s main point was that the people making the big decisions in the UK all hold PPEs from Oxbridge and know next to nothing about the opportunities that digital technologies can bring us. We need more geekery in the halls of power.

Beyond Bad Science – Ben Goldacre

Ben Goldacre’s topic dovetailed nicely with Thompson’s. If people were better educated in science then there would be less excuse for the appalling science journalism that we currently suffer from. Goldacre went on to talk about the bloggers who are doing sterling work revealing the dangerous science stories that the mainstream media aren’t covering and suggested some tools we could build to help them to work together more efficiently.

The Guardian and the Ian Tomlinson story – Paul Roache

Paul Roache talked about how the Guardian dealt with the Ian Tomlinson video. Normally an exclusive like that would have been held back for the next edition of the paper. In this case they took the unusual step of putting it on the web site first. This gamble seems to have paid off. Over the next day or so, the video was responsible for 20% of their web site traffic.

Opening up the Guardian – Simon Willison

Simon Willison talked about the Guardian’s Open API and Data Store. He also introduced the crowd-sourcing application they wrote to process the MPs’ expenses details once they were published.

Spread The Web – Fran Sainsbury & William Perrin
Local web beyond the hype – William Perrin

Two linked talks about how the internet can help organisations and communities to communicate. The first talk was about the number of organisations who have paid stupid sums of money for a proprietary web site that they find too hard to update and how in many cases a simple WordPress site would be far better suited for their purpose. In the second talk William Perrin talked about using simple sites (again, WordPress or a similar technology) to bring communities together. This is an area I have a lot of interest in.

4iP – Public service tools for empowerment – Tom Loosemore

Tom talked about 4ip, a Channel Four initiative to support innovative digital projects. Tom listed half a dozen or so interesting projects that they have already supported.

Just before Tom’s speech there was a slight change of plan as Sir Bonar Neville Kingdom spoke to us. The text of his speech is now online. I highly recommend that you read it.

A fabulous conference as always. My thanks to all of the organisers. More thoughts on it over the next few days.

Opentech 2008

I spent yesterday at Opentech. I had a great time there. Here are my thoughts on the talks that I saw.

Rembrandt, Pr0n and Robot Monkeys: Lessons From the Present About Flesh and Technology – Kim Plowright
This could have been interesting, but I think it was somewhat constrained by the short time allocated. It seemed to be a rather disjointed amble through a bit of history and a look at how people see their physical bodies in cyberspace.

Living with Chaos: Why Nothing is Simple in IT – Simon Wardley
If you’ve been following Simon’s blog, then you’ll be familiar with his view of the commoditisation of software. Somehow, I’ve missed seeing him giving his talks on the subject over the last couple of years, so it was nice to finally see one. Again, this would have benefited from a longer timeslot – but I think that’ll be a common complaint as I go through the day.

What the Frog’s Eye Tells the Future – Matt Webb
Exploring the early history of the science of cybernetics and pointing out some surprising coincidences and some interesting comparisons with today. Matt is always interesting and I’d love to read more about this subject.

Here’s The UK EFF – Becky Hogge and Danny O’Brien
The Open Rights Group was formed out of a talk that took place at the last Opentech conference in 2005, when a pledge was set up for people to agree to pay £5 a month to support such an organisation. In this talk Becky Hogge and Danny O’Brien (who I didn’t recognise in his full beard) talked about what had happened in the last three years.

Except they didn’t really. Mainly they just asked for money. Apparently, of the 1,000 people who signed the original pledge, only about 750 kept their promise and are making regular payments. So if you signed the pledge and haven’t set up your standing order then why not do so now? Or, if you didn’t sign the pledge but think that the UK needs a strong organisation campaigning for digital rights, then why not sign up? Or, if you are already making regular payments to them, why not increase the monthly amount? I just did.

Power to the people – one year on from the Power of Information Report
If you’ve seen the Show Us A Better Way site, then you’ll know that there’s a growing movement within the UK government to free up public data and make it available in easy-to-use formats. In this session, various people behind this initiative spoke about how they’ve got to the current situation and where they hope to go next. It’s great to see this amount of data coming from the civil service and it seems that the best way to encourage them is to use the data to create really cool things. You can find out more about the Power of Information team by reading their blog.

3 Years of OpenStreetMap – Nick Black
It’s been a while since I last looked at OpenStreetMap. And it looks like they’ve come a long way in a relatively short time.
Many of their maps now look really impressive. I shall be watching them far more closely in the future. I may even edit the occasional map.

Opening Data – Rufus Pollock
The Open Knowledge Foundation exists to promote the sharing of knowledge and data. They have a repository called CKAN (modelled on the Perl repository CPAN) where you can share any useful data that you have. Looks very interesting.

Planning Alerts – Duncan Parkes
I already knew about the Planning Alerts project. It’s one of those ideas that seems simple and obvious – but no-one had thought of it until recently. You go to their site, give them your postcode and email address, and they send you regular messages about planning applications in your area. I signed up a few months ago, but I only get alerts from Lambeth (about 300 metres to the east of my house) as they don’t yet have a parser to extract data from the Wandsworth Council site. They asked for help with missing councils. I should probably do that.

Publishing with Microformats – Jeremy Keith
Microformats is one of those areas that I’ve read about and really want to start using. But I haven’t really found a use for them yet. This talk helped a bit as it concentrated on using a couple of microformats (hCard and XFN) to mark up social relationships. I really need to investigate this further.
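To give a flavour of how XFN works: it just adds rel attributes to ordinary hyperlinks, which any HTML parser can then pick out. Here’s a minimal Python sketch (the markup, names and URLs are invented for the example) using only the standard library:

```python
from html.parser import HTMLParser

# XFN marks up social relationships with rel attributes on links,
# e.g. <a href="http://example.com/alice" rel="friend met">Alice</a>

class XFNParser(HTMLParser):
    """Collect (href, relationships) pairs from <a> tags with a rel attribute."""
    def __init__(self):
        super().__init__()
        self.relationships = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "rel" in attrs and "href" in attrs:
            # rel can hold several space-separated relationship terms
            self.relationships.append((attrs["href"], attrs["rel"].split()))

html = '''
<p>My friends:
  <a href="http://example.com/alice" rel="friend met">Alice</a>
  <a href="http://example.com/bob" rel="acquaintance">Bob</a>
</p>
'''

parser = XFNParser()
parser.feed(html)
for href, rels in parser.relationships:
    print(href, rels)
```

That’s the whole trick: the relationship data rides along inside markup you were publishing anyway.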

Information: Rewiring the London Gazette with RDFa – Jeni Tennison
Moving on from microformats, RDFa is a tremendously powerful way to add value to HTML pages. I’m not going to be using this any time soon, but it’s interesting to know it’s possible. And the data set (when it is released) is going to be incredible. The London Gazette (Jeni had a URL in her presentation, but I can’t remember it now) is the government’s official newspaper – it contains all of their announcements.
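The idea behind RDFa is that the HTML attributes carry subject/predicate/object triples that a machine can harvest. Here’s a toy Python sketch of that harvesting (the vocabulary and identifiers are invented for the example, and a real RDFa processor does far more – prefix resolution, subject inheritance and so on):

```python
from html.parser import HTMLParser

# RDFa embeds triples in attributes, e.g.
# <span about="#notice-123" property="gaz:issueDate">2008-07-01</span>
# meaning: subject #notice-123 has property gaz:issueDate with value 2008-07-01

class SimpleRDFaParser(HTMLParser):
    """Toy extractor: collects (about, property, text) triples."""
    def __init__(self):
        super().__init__()
        self.triples = []
        self._pending = None  # (about, property) awaiting text content

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "property" in attrs:
            self._pending = (attrs.get("about", ""), attrs["property"])

    def handle_data(self, data):
        if self._pending and data.strip():
            about, prop = self._pending
            self.triples.append((about, prop, data.strip()))
            self._pending = None

html = '<span about="#notice-123" property="gaz:issueDate">2008-07-01</span>'

parser = SimpleRDFaParser()
parser.feed(html)
print(parser.triples)
```

Once the data is expressed as triples like this, it can be merged with any other RDF data – which is where the “incredible data set” part comes in.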

The Bastard Child of Baird and Berners-Lee – Tom Loosemore
Tom gave an idea of some of the things he was thinking about just before he left the BBC a year ago. He’s basically talking about creating a network of recording boxes that will record all TV ever broadcast in the UK. Sounds cool – if slightly hamstrung by copyright rules.

Finding Good TV on the Interwebs with RDF and REST – Chris Jackson
Chris introduced URIplay – a project to catalogue and simplify the metadata that is broadcast alongside TV and radio. The idea is to make it easier to track down programmes that you want to watch.

Intro to Hadoop – Tom White
I only went to this by mistake. I turned up early for the Guardian talk. I knew nothing about Hadoop before the talk and I know almost nothing more now.

Building for the open web – Stephen Dunn and Mat Wall
Stephen and Mat talked about some of the design decisions that went into the recent (and ongoing) rebuild of the Guardian web site. It’s great to see a national media site designed by people who really understand how the web works and who are making an effort to exist within that ecosystem. There were also some interesting hints about the forthcoming Guardian Developer Network.

So that’s what I saw. I think I pretty much made the right choices, but with three tracks it’s impossible to see everything you want to see. I heard people saying interesting things about the talk on tracking arms dealers using Python. I also wish I could have seen the sessions on MySociety and OpenID. Hopefully there will be slides and video available online soon. I also felt that the “hallway track” was better than ever. Everywhere I went I found myself having interesting conversations with people.

I dashed home at the end in order to watch Doctor Who as soon as possible. I shouldn’t have bothered. What a waste of time that was.

Opentech 2008

The full schedule for Opentech has been announced. There are three tracks of talks and it looks like I’ll need a couple of clones in order to see everything that I want to see.

The previous Opentech conference (was it really three years ago?) was a lot of fun and I fully expect this one to be just as good. Registration is already open (you reserve a place and then pay a fiver on the door) and if previous experience is anything to go by, places will be booked up pretty quickly. I didn’t post this entry until I’d reserved mine :-)

Hope to see some of you there.


Just back from Opentech, so here are a few random notes. I’ll hopefully fill in more details later.

I started by listening to Danny O’Brien talking about “Living Live in Public”. Danny discussed his theory of how the geek world has a weird kind of celebrity where you can be incredibly famous to a very small subsection of the population. He also characterised fame as a situation where people know more about you than you know about them. Where’s the power in that relationship?

Then I went off to the seminar room to hear various people talking about Media Hacking. Before the talks started Ewan Spence, who was chairing the session, tried a bit of practical media hacking. He asked for volunteers who had an iPod Shuffle and five people came forward. He then put all of the iPods in a box, shook it up and handed them back at random. The iPod owner who was sitting in front of me returned to his seat distinctly unimpressed by the trick. The actual talks in the session started with Matt Westcott talking about running Linux on an iPod. An interesting trick, but not really interesting to me. Then Paul Mison spoke about ways of hacking iTunes. This was a good high-level survey, but could have done with being twice as long and more detailed. Mike Ryan introduced MythTV, the Open Source PVR package, which I’ll definitely be investigating further. Finally Michael Sparks introduced Kamaelia, a new BBC project for building complex applications out of simple components.

After lunch I was back in the main room for what were probably the two major talks of the day. The first was the official launch of BBC Backstage. Ben Metcalfe also announced a new Backstage data feed (containing weather data) and a competition to create an interesting application based on their recently announced TV schedule feed. Ben was followed by Jeremy Zawodny who was talking about how Yahoo! is opening up their data through the use of web services APIs. He also had some interesting thoughts about where the web services industry might be heading. An interesting question asked during that session was about the politics of persuading business managers that giving just anyone free access to all your company data is a good idea. Even more interesting if you know that the question was asked by someone who might be about to be involved in something very similar at another major content provider.

After a brief break (during which I got involved in an O’Reilly “meet the author” session) I went to a session on blogging and social software. Tom Reynolds gave some tips on how to write a work-based blog without getting fired, Paul Mutton drew graphs of social networks by monitoring IRC channels and Paul Lenz (from the company behind WhoShouldYouVoteFor) introduced their new site WhatShouldIReadNext.

Finally there was a session on web services. Don Young from Amazon gave what was a bit too much of a corporate presentation on Amazon Web Services, Gavin Bell talked about the concept of social documents and Lee Bryant introduced a couple of prototypes based on BBC Backstage data (did I mention the heavy BBC presence at the conference!) To finish off Simon Willison and Rob McKinnon talked about Greasemonkey. The timing was slightly unfortunate given the major security flaw that was found in Greasemonkey this week, but a fixed version is promised within days. Simon demonstrated Matthew Somerville’s script for fixing the Odeon web site, but the biggest applause was saved for Rob when he demonstrated his script that reformats the New Zealand equivalent of Hansard on the fly. It takes something that is really difficult to read and converts it into something that looks like TheyWorkForYou.

More data (and links!) later but for now, here’s a link to the “opentech” tags on Technorati and Flickr.