How To Travel From London To Paris

Imagine that you want to travel from London to Paris. Ok, so that’s probably not too hard to imagine. But also imagine that you have absolutely no idea how to do that and neither does anyone that you know. In that situation you would probably go to Amazon and look for a book on the subject.

Very quickly you find one called “Teach Yourself How To Travel From London To Paris In Twenty-One Days”. You look at the reviews and are impressed.

I had no idea how to get from London to Paris, but my family and I followed the instructions in this book. I’m writing this from the top of the Eiffel Tower – five stars.


I really thought it would be impossible to get from London to Paris, but this book really breaks it down and explains how it’s done – five stars.

There are plenty more along the same lines.

That all looks promising, so you buy the book. Seconds later, it appears on your Kindle and you start to read.

Section one is about getting from London to Dover. Chapter one starts by ensuring that all readers are starting from the same place in London, suggesting a particular tavern in Southwark where you might meet other travellers with the same destination. It then lays out a walking route that you might follow from Southwark to Canterbury. It’s written in slightly old-fashioned English and details of the second half of the route are rather sketchy.

Chapter two contains a route to walk from Canterbury to Dover. The language has reverted to modern English and the information is very detailed. There are reviews of many places to stay on the way – many of which mention something called “Trip Advisor”.

Section two is about crossing the channel. Chapter three talks about the best places in Dover to find the materials you are going to need to make your boat and chapter four contains detailed instructions on how to construct a simple but seaworthy vessel. The end of the chapter has lots of advice on how to judge the best weather conditions for the crossing. Chapter five is a beginner’s guide to navigating the English Channel and chapter six has a list of things that might go wrong and how to deal with them.

Section three is about the journey from Calais to Paris. Once again there is a suggested walking route and plenty of recommendations of places to stay.

If you follow the instructions in the book you will, eventually, get to Paris. But you’re very likely to come away thinking that it was all rather more effort than you expected it to be and that next time you’ll choose a destination that is easier to get to.

You realise that you have misunderstood the title of the book. You thought it would take twenty-one days to learn how to make the journey, when actually it will take twenty-one days (at least!) to complete the journey. Surely there is a better way?

And, of course, there is. Reading further in the book’s many reviews you come across the only one-star review:

If you follow the instructions in this book you will waste far too much time. Take your passport to St. Pancras and buy a ticket for the Eurostar. You can be in Paris in less than four hours.

The reviewer claims to be the travel correspondent for BBC Radio Kent. The other reviewers were all people with no knowledge of travel who just happened to come across the book in the same way that you did. Who are you going to trust?

I exaggerate, of course, for comic effect. But reviews of technical books on Amazon are a lot like this. You can’t trust them because in most cases the reviewers are the very people who are least likely to be able to give an accurate assessment of the technical material in the book.

When you are choosing a technical book you are looking for two things:

  • You want the information in the book to be as easy to understand as possible
  • You want the information in the book to be as accurate and up to date as possible

Most people pick up a technical book because they want to learn about the subject that it covers. That means that, by definition, they are unable to judge that second point. They know how easily they understood the material in the book. They also know whether or not they managed to use that information to achieve their goals. But, as my overstretched metaphor above hopefully shows, it’s quite possible to follow terrible advice and still achieve your goals.

I first became aware of this phenomenon in the late 1990s. At the time a large number of dynamic web pages were built using Perl and CGI. A lot of publishers saw this as a very lucrative market and dozens of books on the subject were published, many of which covered the Perl equivalent of walking from London to Paris. And because people read these books and managed to get to Paris (albeit in a ridiculously roundabout manner) they thought the books were great and gave them five-star reviews. Much to the chagrin of Perl experts who were standing on the kerbside of the A2 shouting “but there’s a far easier way to do that!”

This is still a problem today. Earlier this year I reviewed a book about penetration testing using Perl. I have to assume that the author knew what he was doing when talking about pen testing, but his Perl code was positively Chaucerian.

It’s not just book reviews that are affected. Any kind of technical knowledge transfer mechanism is open to the same problems. A couple of months ago I wrote a Perl tutorial for Udemy. It only covered the very basics, so they included a link to one of their other Perl courses. But having sat through the first few lessons of this course, I know that it’s really not very good. How did the people at Udemy choose which one to link to? Well it’s the one with the highest student satisfaction ratings, of course. It teaches the Perl equivalent of boat-building. A friend has a much better Perl course on Udemy, but they wouldn’t use that as it didn’t have enough positive feedback.

Can we blame anyone for this? Well, we certainly can’t blame the reviewers. They don’t know that they are giving good reviews to bad material. I’m not even sure that we can blame the authors in many cases. It’s very likely that they don’t know how much they don’t know (obligatory link to the Dunning–Kruger effect). I think that in some cases the authors must know that they are chancing their arm by putting themselves forward as an expert, but most of them probably believe that they are giving good advice (because they learned from an expert who taught them how to walk from London to Paris and so the chain goes back to the dawn of time).

I think a lot of the blame must be placed with the publishers. They need to take more responsibility for the material they publish. If you’re publishing in a technical arena then you need to build up contacts in that technical community so that you have people you can trust to give opinions on your books. If you’re publishing a book on travelling from London to Paris then see if you can find a travel correspondent to verify the information in it before you publish it and embarrass yourselves. In fact, get these experts involved in the commissioning process. If you want to publish a travel book then ask your travel correspondent friends if they know anyone who could write it. If someone approaches you with a proposal for a travel book then run the idea past a travel correspondent or two before signing the contract.

I know that identifying genuine experts in a field can be hard. And I know that genuine experts would probably like to be compensated for any time they spend helping you, but I think it’s time and money well-spent. You will end up with better books.

Or perhaps some publishers don’t care about the quality of their books. If bad books can be published quickly and cheaply and people still buy them, then what business sense does it make to make the books better?

If you take only one piece of advice away from this piece, make it this: don’t trust reviews and ratings of technical material.

And never try to walk from London to Paris (unless it’s for charity).

Writing Books (The Easy Bit)

Last night I spoke at a London Perl Mongers meeting. As part of the talk I spoke about a toolchain that I have been using for creating ebooks. In this article I’ll go into a little more detail about the process.

Basically, we’re talking about a process that takes one or more files in some input format and (as easily as possible) turns them into one or more output formats which can be described as “ebooks”. So before we can decide which tools we need, we should decide what those various file formats should be.

For my input format I chose Markdown. This is a text-based format that has become popular amongst geeks over the last few years. Geeks tend to like text-based formats more than the proprietary binary formats like those produced by word processors. This is for a number of reasons. You can read them without any specialised tools. You’re not tied down to using specific tools to create them. And it’s generally easier to manage them in a version control system like Git.

For my output formats, I wanted EPUB and Mobipocket. EPUB is the generally accepted standard for ebooks and Mobipocket is the ebook format that Amazon use. And I also wanted to produce PDFs, just because they are easy to read on just about any platform.

(As an aside, you’ll notice that I said nothing in that previous paragraph about DRM. That’s simply because nice people don’t do that.)

Ok, so we know what file formats we’ll be working with. Now we need to know a) how we create the input format and b) how we convert between the various formats. Creating the Markdown files is easy enough. It’s just a text file, so any text editor would do the job (it would be interesting to find out if any word processor can be made to save text as Markdown).
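For anyone who hasn’t seen it, a Markdown source file is just plain text with a little lightweight punctuation (the content here is invented purely for illustration):

```markdown
# Chapter One

This is an ordinary paragraph with some *emphasised* text
and a [link](https://example.com/).

- a bullet point
- another bullet point
```

That’s the whole trick: headings, emphasis, links and lists are all readable as-is, and the conversion tools turn them into proper ebook markup.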

To convert our Markdown into EPUB, we’ll need a new tool. Pandoc describes itself as “a universal document converter”. It’s not quite universal (otherwise that would be the only tool that we would need), but it is certainly great for this job. Once you have installed Pandoc, the conversion is simple:

pandoc -o your_book.epub title.txt --epub-metadata=metadata.xml --toc --toc-depth=2

There are two extra files you need here (I’m not sure why it can’t all be in the same file, but that’s just the way it seems to be). The first (which I’ve called “title.txt”) contains two lines. The first line has the title of your book and the second has the author’s name. Each line needs to start with a “%” character. So it might look like this:

% Your title
% Your name

The second file (which I’ve called “metadata.xml”) contains various pieces of information about the book. It’s (ew!) XML and looks like this:

<metadata xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:opf="http://www.idpf.org/2007/opf">
<dc:title id="main">Your Title</dc:title>
<meta refines="#main" property="title-type">main</meta>
<dc:creator opf:file-as="Surname, Forename" opf:role="aut">Forename Surname</dc:creator>
<dc:publisher>Your name</dc:publisher>
<dc:date opf:event="publication">2015-08-14</dc:date>
<dc:rights>Copyright ©2015 by Your Name</dc:rights>
</metadata>

So after creating those files and running that command, you’ll have an EPUB file. Next we want to convert that to a Mobipocket file so that we can distribute our book through Amazon. Unsurprisingly, the easiest way to do that is to use a piece of software that you get from Amazon. It’s called Kindlegen and you can download it from their site. Once it is installed, the conversion is as simple as:

kindlegen perlwebbook.epub

This will leave you with a file called “perlwebbook.mobi” which you can upload to Amazon.

There’s one last conversion that you might need. And that’s converting the EPUB to PDF. Pandoc will make that conversion for you. But it does it using a piece of software called LaTeX which I’ve never had much luck with. So I looked for an alternative solution and found it in Calibre. Calibre is mainly an ebook management tool, but it also converts between many ebook formats. It’s pretty famous for having a really complex user interface but, luckily for us, there’s a command line program called “ebook-convert” which we can use.

ebook-convert perlwebbook.epub perlwebbook.pdf

And that’s it. We start with a Markdown file and end up with an ebook in three formats. Easy.
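If you’re doing this regularly, the three commands can be chained in a short wrapper. This is just a sketch of my own (the `build_commands` helper and the filenames are my inventions, not part of any of the tools), and it only prints the commands unless you ask it to run them:

```python
import shutil
import subprocess

def build_commands(basename):
    """The three conversion steps, as argument lists for subprocess."""
    return [
        # Markdown (plus title.txt and metadata.xml) to EPUB
        ["pandoc", "-o", f"{basename}.epub", "title.txt",
         "--epub-metadata=metadata.xml", "--toc", "--toc-depth=2"],
        # EPUB to Mobipocket for Amazon
        ["kindlegen", f"{basename}.epub"],
        # EPUB to PDF via Calibre
        ["ebook-convert", f"{basename}.epub", f"{basename}.pdf"],
    ]

def run_all(basename):
    """Run each step, skipping any tool that isn't installed."""
    for cmd in build_commands(basename):
        if shutil.which(cmd[0]) is None:
            print("skipping (not installed):", cmd[0])
            continue
        subprocess.run(cmd, check=True)

# Show the commands for a book called "perlwebbook".
for cmd in build_commands("perlwebbook"):
    print(" ".join(cmd))
```

Calling `run_all("perlwebbook")` from a directory containing the Markdown source, title.txt and metadata.xml would execute the whole chain.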

Of course, that really is the easy part. There’s a bit that comes before (actually writing the book) and a bit that comes after (marketing the book) and they are both far harder. Last year I read a book called Author, Publisher, Entrepreneur which covered these three steps to a very useful level of detail. Their step two is rather different to mine (they use Microsoft Word, if I recall correctly) but what they had to say about the other steps was very interesting. You might find it worth a read if you’re thinking of writing (and self-publishing) a book.

I love the way that ebooks have democratised the publishing industry. Anyone can write and publish a book and make it available to everyone through the world’s largest book distribution web site.

So what are you waiting for? Get writing. If you find my toolchain interesting (or if you have any comments on it) then please let me know.

And let me know what you’ve written.

Financial Account Aggregation

Three years ago, I wrote a blog post entitled Internet Security Rule One about the stupidity of sharing your passwords with anyone. I finished that post with a joke.

Look, I’ll tell you what. I’ve got a really good idea for an add-on for your online banking service. Just leave the login details in a comment below and I’ll set it up for you.

It was a joke because it was obviously ridiculous. No-one would possibly think it was a good idea to share their banking password with anyone else.

I should know not to make assumptions like that.

Yesterday I was made aware of a service called Money Dashboard. Money Dashboard aggregates all of your financial accounts so that you can see them all in one convenient place. They can then generate all sorts of interesting reports about where your money is going and can probably make intelligent suggestions about things you can do to improve your financial situation. It sounds like a great product. I’d love to have access to a system like that.

There’s one major flaw though.

In order to collect the information they need from all of your financial accounts, they need your login details for the various sites that you use. And that’s a violation of Internet Security Rule One: you should never give your passwords to anyone else – particularly not passwords that are as important as your banking password.

I would have thought that was obvious. But they have 100,000 happy users.

Of course they have a page on their site telling you exactly how securely they store your details. They use “industry-standard security practices”, their application is read-only “which means it cannot be used for withdrawals, payments or to transfer your funds”. They have “selected partners with outstanding reputations and extensive experience in security solutions”. It all sounds lovely. But it really doesn’t mean very much.

It doesn’t mean very much because at the heart of their system, they need to log on to your bank’s web site pretending to be you in order to get hold of your account information. And that means that no matter how securely they store your passwords, at some point they need to be able to retrieve them in plain text so they can use them to log on to your bank’s web site. So there must be code somewhere in their system which punches through all of that security and gets the string “pa$$word”. In the worst case scenario, if someone compromises their servers they will be able to get access to your passwords.
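The distinction matters because a password that has to be replayed to a bank must be stored reversibly. Here’s a toy illustration of the difference (base64 is just an encoding standing in for real encryption here – none of this is actual security code):

```python
import base64
import hashlib

password = "pa$$word"

# One-way storage: a hash can be used to *check* a password a user
# types in, but the original text can never be recovered from it.
digest = hashlib.sha256(password.encode()).hexdigest()

# Reversible storage: whatever the aggregator's own code can decode
# back to plain text, an attacker who compromises that code and its
# keys can decode too.
stored = base64.b64encode(password.encode())
recovered = base64.b64decode(stored).decode()

print(recovered)  # the plain text comes straight back out
```

A bank only needs the first kind of storage. An aggregator that logs in as you is forced to use the second.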

If that doesn’t convince you, then here’s a simpler reason for not using the service. Sharing your passwords with anyone else is almost certainly a violation of your bank’s terms and conditions. So if someone does get your details from Money Dashboard’s system and uses that information to wreak havoc in your bank account – good luck getting any compensation.

Here, for example, are First Direct’s T&Cs about this (in section 9.1):

You must take all reasonable precautions to keep safe and prevent fraudulent use of any cards, security devices, security details (including PINs, security numbers, passwords or other details including those which allow you to use Internet Banking and Telephone Banking).

These precautions include but are not limited to all of the following, as applicable:


  • not allowing anyone else to have or use your card or PIN or any of our security devices, security details or password(s) (including for Internet Banking and Telephone Banking) and not disclosing them to anyone, including the police, an account aggregation service that is not operated by us

Incidentally, that “not operated by us” is a nice piece of hubris. First Direct run their own account aggregation service which, of course, they trust implicitly. But they can’t possibly trust anybody else’s service.

I started talking about this on Twitter yesterday and I got this response from the @moneydashboard account. It largely ignores the security aspects and concentrates on why you shouldn’t worry about breaking your bank’s T&Cs. They seem to be campaigning to get T&Cs changed to allow explicit exemptions for sharing passwords with account aggregation services.

I think this is entirely wrong-headed. I think there is a better campaign that they should be running.

As I said above, I think that the idea of an account aggregation service is great. I would love to use something like Money Dashboard. But I’m completely unconvinced by their talk of security. They need access to your passwords in plain text. And it doesn’t matter that their application only reads your data. If someone can extract your login details from Money Dashboard’s systems then they can do whatever they want with your money.

So what’s the solution? Well I agree with one thing that Money Dashboard say in their statement:

All that you are sharing with Money Dashboard is data; data which belongs to you. You are the customer, you should be telling the bank what to do, not the other way around!

We should be able to tell our banks to share our data with third parties. But we should be able to do it in a manner that doesn’t entail giving anyone full access to our accounts. The problem is that there is only one level of access to your bank account. If you have the login details then you can do whatever you want. But what if there was a secondary set of access details – ones that could only read from the account?

If you’ve used the web much in recent years, you will have become familiar with this idea. For example, you might have wanted to give a web app access to your Twitter account. During this process you will be shown a screen (which, crucially, is hosted on Twitter’s web site, not the new app’s) asking if you want to grant rights to this new app, and telling you which rights you are granting (“This app wants to read your tweets.” “This app wants to tweet on your behalf.”). You can decide whether or not to grant that access.

This is called OAuth. And it’s a well-understood protocol. We need something like this for the finance industry, so that I can say to First Direct, “please allow this app to read my account details, but don’t let them change anything”. If we had something like that, then all of these problems would be solved. The Money Dashboard statement points to the Financial Data and Technology Association – perhaps they are the people to push for this change.
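The shape of the idea can be sketched in a few lines. Everything here – the `Token` class, the scope names – is hypothetical, purely to show what scoped, read-only access looks like:

```python
class Token:
    """A hypothetical access token carrying the scopes a user granted."""
    def __init__(self, scopes):
        self.scopes = set(scopes)

def require_scope(token, scope):
    """Refuse any operation the token was not explicitly granted."""
    if scope not in token.scopes:
        raise PermissionError(f"token lacks the '{scope}' scope")

# The bank issues the aggregator a token limited to reading...
read_only = Token(["account:read"])

require_scope(read_only, "account:read")  # reading balances: allowed

# ...so even a stolen token cannot be used to move money.
try:
    require_scope(read_only, "account:write")
except PermissionError as e:
    print(e)
```

The crucial property is that the restriction is enforced by the bank, not promised by the aggregator – a stolen read-only token is annoying, not catastrophic.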

I know why Money Dashboard are doing what they are doing. And I know they aren’t the only ones doing it (Mint, for example, is a very popular service in the US). And I really, really want what they are offering. But just because a service is a really good idea, shouldn’t mean that you take technical short-cuts to implement it.

I think that the “Financial OAuth” I mentioned above will come about. But the finance industry is really slow to embrace change. Perhaps the Financial Data and Technology Association will drive it. Perhaps one forward-thinking bank will implement it and other banks’ customers will start to demand it.

Another possibility is that someone somewhere will lose a lot of money through sharing their details with a system like this and governments will immediately close them all down until a safer mechanism is in place.

I firmly believe that systems like Money Dashboard are an important part of the future. I just hope that they are implemented more safely than the current generation.


Opentech 2015

It’s three weeks since I was at this year’s Opentech conference and I haven’t written my now-traditional post about what I saw. So let’s put that right.

I got there rather later than expected. It was a nice day, so I decided that I would walk from Victoria station to ULU. That route took me past Buckingham Palace and up the Mall. But I hadn’t realised that Trooping the Colour was taking place, which made it impossible to get across the Mall and into Trafalgar Square. Of course I didn’t realise that until I reached the corner of St James’s Park near Admiralty Arch. A helpful policeman explained what was going on and suggested that my best bet was to go to St James’s Park tube station and get the underground to Embankment. This involved walking most of the way back through the park. And when I got to the tube station it was closed. So I ended up walking to Embankment.

All of which meant I arrived about forty minutes later than I wanted to and the first session was in full swing as I got there.

So what did I see?

Being Female on the Internet – Sarah Brown

This is the talk I missed most of, which is a shame as it’s the one I had most wanted to see. As I arrived, Sarah was just finishing, and the audio doesn’t seem to be on the Opentech web site.

Selling ideas – Vinay Gupta

I think I didn’t concentrate on this as much as I should have. It was basically a talk about marketing – which is something that the geek community needs to get better at. Vinay illustrated his talk with examples from his Hexayurt project.

RIPA 2 – Ian Brown

Ian talked about potential changes to the Regulation of Investigatory Powers Act. It was all very scary stuff. The slides are online.

The 3rd year of Snowdenia — Caroline Wilson Palow

Caroline talked about Ed Snowden’s work and the way it is changing the world.

Privacy: I do not think that word means what you think it means — Kat Matfield

Kat has been doing research into how end users view privacy on the web. It’s clear that people are worried about their privacy but that they don’t know enough about the subject to focus their fear (and anger) at the right things.

The State of the Network Address — Bill Thompson

Bill thinks that many of the world’s woes are caused by people in power abusing the technological tools that geeks have built. And he would like us to do more to prevent them doing that.

The State of Data — Gavin Starks

Gavin works for the Open Data Institute. It’s his job to help organisations to release as much data as possible and to help the rest of us to make as much use of that data as possible. He talked about the problems that he sees in this new data-rich world.

Using data to find patterns in law — John Sheridan

John is using impressive text parsing and manipulation techniques to investigate the UK’s legislation. It sounds like a really interesting project.

Scenic environments, healthy environments? How open data offers answers to this age-old question. — Chanuki Seresinhe

The answer seems to be yes :-)

I stood as a candidate, and… — James Smith

James stood as a candidate in this year’s general election, using various geek tools to power his campaign. He talked through the story of his campaign and tried to encourage others to try the same thing in the next election.

Democracy Club — Sym Roe

The Democracy Club built a number of tools and web sites which assembled databases of information about candidates in the recent election – and then shared that data with the public. Sym explained why and how these tools were built.

The Twitter Election? — Dave Cross

This was me. I’ve already written up my talk.

Election: what’s next

This was supposed to follow my talk. Bill Thompson had some ideas to start the discussion and suggested that anyone interested retire to the bar. I put away my laptop and various other equipment and then set off to find them. But I failed, so I went home instead.

Yet another massively successful event. Thanks, as always, to all of the speakers and organisers.

TwittElection at OpenTech

Last Saturday was OpenTech. It was as great as it always is and I’ll write more about what I saw later. But I gave a talk about TwittElection in the afternoon and I thought it might be useful to publish my slides here along with a brief summary of what I said.

  • I started with a couple of screenshots of what TwittElection is. There’s basically a main page which shows how many days are left until the general election and a page for every constituency which has a widget displaying a Twitter list for all of the candidates in that constituency.
  • Why did I do it? Well I love elections. I have vague memories of one (or perhaps both) of the 1974 general elections and I have closely followed every general election since then. In the 90s I was occasionally one of those annoying people who ask you for your voter number as you’re leaving the polling station and in 2005 I worked all night to make sure that the results on the Guardian web site were up to date.
  • I love Twitter too. Who doesn’t?
  • In 2010 I created a site that monitored the candidates in my local constituency. It wasn’t just Twitter (which was far less important back then) but any kind of web feed that they produced. That’s easy enough to do for one constituency, but it’s a bit more of a challenge for 650.
  • The technology for the system was pretty simple. It was the data that was going to be a lot trickier.
  • Just as I was considering the project, Twitter made a couple of changes which made my life substantially easier. Firstly they increased the number of Twitter lists that each user could create from 20 to 1000 (I needed 650). And secondly, they removed the restriction that Twitter list widgets were tightly associated with a specific list. Under the old system, I would have needed to create 650 individual widgets. Under the new system, I could create one widget and pass it a list ID in order to display any of my 650 lists.
  • I wrote the code in Perl. I made a throwaway remark about it being the “programming languages of champions”. Someone in the audience tweeted that quote and it’s been retweeted rather a lot.
  • I hosted the site on Github Pages in case it got too popular. This was a ridiculous thing to be worried about.
  • I used Bootstrap (of course) and small amounts of various Javascript libraries.
  • The data was harder. We have 650 constituencies and each one will have about six candidates. That means I’ll be looking for data about something like 4,000 candidates. And there’s no official centralised source for this data.
  • Back in November I asked my Twitter followers if they knew of anyone who was collecting lists of candidates and Sam Smith put me in touch with the Democracy Club.
  • At the time, the Democracy Club were just building a new version of YourNextMP – a crowd-sourced list of candidates. It did all that I needed. Which made me very happy. [Note: My talk followed one from the Democracy Club which went into this in far more detail.]
  • So with data from YNMP and my code, the site was built.
  • And it worked pretty well. There were a few bugs (including one that was pointed out by a previous speaker in the same session) but they all got fixed quickly.
  • I became an expert in Twitter error codes.
  • 403 and 429 are the codes that Twitter returns when you make more API requests than you are allowed to. There are two ways to deal with Twitter’s rate limits. You can keep a careful count of your requests and stop before you hit the limits. Or you can keep going until you get one of these codes back at which point you stop. The second option is far simpler. I took the second option. [Note: At this point I forgot to mention that the rate limits were so, well…, limiting that when I got my first complete data dump from YNMP, it took almost two days to build all of the Twitter lists.]
  • 108 means you’re trying to do something with a user that doesn’t exist. Basically, you’ve got the username wrong. Sometimes this is because there’s a typo in the name that YNMP has been given. Sometimes it’s because the user has changed their Twitter username and YNMP doesn’t know about the change yet. One common cause for the latter is when MPs changed their Twitter usernames to remove “MP” whilst the campaign was in progress and legally, there were no MPs. [Note: One of the YNMP developers spoke to me afterwards and admitted that they should have handled Twitter usernames better – for example, they could have stored the ID (which is invariant) rather than the username (which can change).]
  • Error 106 means that the user has blocked you and therefore you can’t add that user to a Twitter list. This seems like strange behaviour given that candidates are presumably using Twitter to publicise their opinions as widely as possible.
  • The first time I was blocked it was @glenntingle, the UKIP candidate for Norwich North.
  • I wondered why he might be blocking me. A friend pointed out that he might be embarrassed by his following habits. It turned out that of the 700 people he followed on Twitter, all but about a dozen of them were young women posting pictures of themselves wearing very little.
  • There was some discussion of this amongst some of my friends. This was apparently noticed by Mr Tingle who first protected his tweets and then deleted his account.
  • I’m not sure how good I feel about hounding a candidate off Twitter.
  • Another UKIP candidate, @timscottukip, also blocked me. And I heard of another who was running his account in protected mode.
  • Some users didn’t understand crowd-sourcing. Every constituency page included a link to the associated page on YNMP along with text asking people to submit corrections there. But I still got a lot of tweets pointing out errors in my lists.
  • 72% of candidates were on Twitter.
  • Results by party were mixed. 100% of the SNP candidates were on Twitter, but only 51% of UKIP candidates (or perhaps I couldn’t see the others as they were blocking me!)
  • Was it worth it? Well, only 1000 or so people visited the site over the course of the campaign.
  • I haven’t yet seen if I can get any stats on people using the raw Twitter lists rather than looking at my web site.
  • I need to rip out all of the information that is specific to that particular election and encourage people to use the code for other elections. YNMP is based on software called PopIt and I think my code could be useful wherever that is used.
  • There are 1790 days until the next UK general election (as of Saturday 13th June 2015).
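The “keep going until Twitter tells you to stop” strategy from the rate-limiting bullet above can be sketched like this (the `fetch_page` function here is a stub standing in for a real API call, not anything from Twitter’s actual client libraries):

```python
RATE_LIMITED = {403, 429}  # the "slow down" status codes mentioned above

def drain(fetch_page):
    """Call fetch_page until it signals a rate limit, then stop.

    fetch_page takes a page number and returns (status, items).
    No careful request counting needed: we just run until told off.
    """
    collected = []
    page = 0
    while True:
        status, items = fetch_page(page)
        if status in RATE_LIMITED:
            break  # hit the limit: stop and come back later
        collected.extend(items)
        page += 1
    return collected

# A stub API: three successful pages, then a 429.
responses = [(200, ["a"]), (200, ["b"]), (200, ["c"]), (429, [])]
result = drain(lambda page: responses[page])
print(result)  # ['a', 'b', 'c']
```

The simplicity is the whole point: the careful-counting alternative needs to know every endpoint’s limit in advance, while this approach just reacts to what the server says.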

Quoted By The Daily Mail

This morning Tweetdeck pinged and alerted me to this tweet from a friend of mine.

He was right too. The article was about Reddit’s Button and about half-way through it, they quoted my tweet.

My reaction was predictable.

I was terribly embarrassed. Being quoted in the Daily Mail isn’t exactly great for your reputation. So I started wondering if there was anything I could do to recover the situation.

Then it came to me. The Mail were following Twitter’s display guidelines and were embedding the tweets in the web page (to be honest, that surprised me slightly – I was sure they would just take a screenshot). This meant that every time someone looked at the Mail’s article, the Mail’s site would refresh its view of the tweet from Twitter’s servers.

You can’t edit the content of tweets once they have been published. But you can change some of the material that is displayed – specifically your profile picture and your display name.

So, over lunch I took a few minutes to create a new profile picture and I changed my display name to “The Mail Lies”. And now my tweet looks how you see it above. It looks the same on the Mail article.

As I see it, this can go one of two ways. Either the Mail notices what I’ve done and removes my tweet from the article (in which case I win because I’m no longer being quoted by the Daily Mail). Or they don’t notice and my tweet is displayed on the article in its current form – well, at least until I get bored and change my profile picture and display name back again.

This afternoon has been quite fun. The caper has been pretty widely shared on Twitter and Facebook, and a couple of people have told me that I’ve “won the internet”.

So remember boys and girls, publishing unfiltered user-generated content on your web site is always a dangerous prospect.

Public Service Announcement: Aegon Pensions

Do you have a personal pension with Aegon? If so, I suggest you ask them to double-check the statements they have been sending you, as they might well be incorrect. I’ve recently discovered that mine have been wrong to the tune of several thousand pounds for seven years.

This year I’ve been transferring all of my personal pensions to a SIPP at Hargreaves Lansdown. It has generally been a painless process. You fill in a form and send it to HL, they contact your current pension provider, and a week later the money is sitting in your HL account.

Of course, you’ll want to know how much is in your pension fund, so you know how much money to expect to be transferred. But your current provider will be sending you annual statements. As the stock market has been rising for a lot of the last twelve months, the amount you’ll get will almost certainly be a little more than the amount on your last statement.

But there will be two values on your statement – the fund value (FV) and the transfer value (TV). FV is the amount your fund is worth if you leave it with the current provider. TV is the amount they’ll send to your new provider. Looking at all of my statements, FV and TV were the same amount. So all was well with the world.

I found that I had six personal pensions (I really have no idea why I had so many – it seems rather more than you’d need) and, over a period of a few weeks, I set the transfers going on all of them. Five of them worked fine – I got a little more money than I expected. The sixth was with Aegon.

One Friday afternoon I got a phone call from an adviser at HL. Aegon wouldn’t make the transfer unless I confirmed that I was aware of the current valuations. He read out the valuations that Aegon had given him. TV was about 20% smaller than FV. This meant that I’d lose about a fifth of my money if I transferred the fund. I asked him to put the transfer on hold until I could confirm this with Aegon.
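The arithmetic is simple enough to sketch. The figures below are made up for illustration (I haven’t given the real ones), but they show how a 20% early exit fee turns a fund value into a much smaller transfer value:

```python
# Hypothetical figures: how a 20% early exit fee shrinks the
# transfer value (TV) relative to the fund value (FV).
fund_value = 50_000.00           # FV: what the fund is worth if you stay put
exit_fee_rate = 0.20             # the early exit fee applied on transfer
transfer_value = fund_value * (1 - exit_fee_rate)

print(transfer_value)            # a fifth of the money gone on the way out
```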

Aegon’s customer support line is closed over the weekend, so I couldn’t speak to them until Monday. But I double-checked my statements. There was a difference between FV and TV in 2007, but since 2008 every statement had shown the two values to be the same. And, naively, I assumed that my statements were accurate.

On Monday I called Aegon. Their customer support people tried to help but really all they could do was to pass my questions on and tell me to wait for ten days or so.

A couple of weeks later I got a reply which basically just said that my statements were wrong and that, yes, there was a 20% early exit fee on my plan. I wasn’t happy with that so I wrote back to them asking how their system could issue incorrect statements for seven years without anyone noticing.

Today I got a reply to that letter. Here’s what they say:

Statements are system generated reports which are issued annually. These are usually issued directly to Policyholders or Financial Advisers without being checked. It was only when you brought the error regarding values to our attention that the matter has been investigated and future automated statements have been inhibited.

So there you go. There was apparently a bug in Aegon’s system which went undetected for seven years, until I tried to transfer my pension fund away from them.

I’m going to continue to try and find out how I can get my money out of Aegon without losing a large chunk of it. Given that most of the industry doesn’t work the same way that they do, I suspect my best approach is to accuse them of mis-selling the policy in the first place.

But if you have been receiving statements from Aegon over the last seven years, I’d ask them to check the values if I were you. Let me know what you find out.

TwittElection

I was convinced that the general election in 2010 was going to be the “Twitter election”. I built a web site (now sadly lost somewhere in cyberspace) that monitored what the prospective parliamentary candidates (PPCs) in my local constituency were saying on Twitter. But, all in all, it wasn’t very impressive. I gave a talk about how disappointing it had all been and then I forgot about it all.

But there’s another general election coming. And, surely, this one must be the Twitter election? A lot has changed in the last five years. Everyone is using Twitter. Surely this time some useful and interesting political discussion will take place on Twitter.

I set the bar a lot higher this time. Instead of just monitoring my local constituency, I’ve created a site that monitors all 650 constituencies in the country. Each constituency has a page, and on that page you’ll find a Twitter widget displaying a list I’m curating of all the PPCs I can find for that constituency.

Well, when I say “I can find”, that’s a bit of a simplification. Obviously, finding details of all of the PPCs for 650 constituencies would be a bit of a mammoth task. But I’ve had help. There is a wonderful site called YourNextMP which is crowdsourcing details of all of the PPCs. And they have an API which allows me to grab their data periodically and update my information. If you have any information about PPCs in a constituency that they don’t already have, please consider adding it to their database.

After I found YourNextMP, it was just a simple matter of programming. I made heavy use of the Twitter API (via the Net::Twitter Perl module) and I’ve hosted the site on GitHub Pages (so I don’t need to worry if it suddenly gets massively popular). All of my code is available on GitHub – so feel free to send pull requests if there are features you’d like to add.
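The core of the job is just reshaping crowd-sourced candidate data into one Twitter list per constituency. The site itself is Perl, but here’s an illustrative Python sketch of that step; the JSON shape and field names (`constituency`, `twitter_username`) are my assumptions, not YourNextMP’s actual schema:

```python
# Sketch: given crowd-sourced candidate records (as you might fetch
# from an API like YourNextMP's), group the known Twitter usernames
# by constituency, ready to sync into per-constituency Twitter lists.
from collections import defaultdict

def handles_by_constituency(candidates):
    """Map each constituency name to the Twitter usernames of its PPCs."""
    lists = defaultdict(list)
    for person in candidates:
        handle = person.get("twitter_username")
        if handle:  # skip candidates with no known account
            lists[person["constituency"]].append(handle)
    return dict(lists)

# Hypothetical sample data, standing in for an API response.
sample = [
    {"name": "A. Candidate", "constituency": "Battersea",
     "twitter_username": "a_candidate"},
    {"name": "B. Candidate", "constituency": "Battersea",
     "twitter_username": None},
    {"name": "C. Candidate", "constituency": "Putney",
     "twitter_username": "c_candidate"},
]

print(handles_by_constituency(sample))
```

Running the grouping periodically against fresh API data is what keeps the lists up to date as new candidates are added to the database.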

Oh, and obviously there’s a Twitter account – @TwittElection. Follow that if you want updates about the site or general chatter about the election campaign.

Today marks 100 days until the general election. I thought that was an appropriate day on which to officially launch the site.

Please let me know if you find the site useful.

Taxing Affairs

People complain about the Inland Revenue. Of course they complain about the taxes they have to pay. But they also complain about the level of service they get from the people in the tax office. Sometimes they might question the level of intelligence of people in the tax office.

Here’s an example of why they might do that. It concerns my company’s VAT return.

At the end of November I needed to fill in a VAT return. My accountants have a great online system that does all the calculations for me. I can even submit the return online. The only thing it doesn’t do is transfer the money to HMRC.

So on the 29th November I logged in to my account and saw that I needed to pay HMRC £X. I logged into my company bank account and transferred £X to HMRC’s account.

In December I went off on holiday.

In early January I returned from holiday and found a letter from HMRC saying that I hadn’t submitted a VAT return and that they had therefore estimated my payment as £Y. £Y was less than £X.

I emailed my accountant, she looked into it and soon realised what the problem was. Although I had made the payment for the VAT owed, I hadn’t actually submitted the return. I logged on and did that immediately. This should have been the end of the matter.

This morning I received another letter from HMRC. One written on 9th January. Today is 17th January. I don’t know what mechanism HMRC use to send letters, but it’s not particularly fast.

This morning’s letter said that my company had an outstanding VAT debt of £377.30.

£377.30 is £X – £Y. That is, it is exactly the amount by which the payment I made was greater than their estimate of my liability.

This is very confusing. I have paid more than they estimated that I owed and they still think that I owe them money. I have no idea how they could have reached that conclusion.

But I hope my accountant can find out when she gets back to her office on Monday.

2014 in Gigs

Slightly later than usual, here’s my overview of the gigs I saw in 2014.

I saw 45 gigs in 2014. That’s 25% down on 2013’s 60 (which is my current record). Letting it drop below an average of one a week is disappointing. I’ll have to try harder this year.

I saw both Martin Carthy and Chvrches three times in 2014 and Annie Eve twice. Martin Carthy is definitely the artist I’ve seen most since I’ve been keeping track of such things. And it’s the first time for many years that I haven’t seen Amanda Palmer. But that’s only because she didn’t play London in 2014 (well, she played one small gig at the British Library, but I didn’t hear about it until it was far too late to get tickets).

What was less than impressive? Well, my review of Yes at the Albert Hall upset a couple of Yes fans. And Eddi Reader wasn’t as good as the previous time I saw her. But, in general, the quality of things I saw was pretty high. Perhaps I was being more picky and that’s why I saw fewer shows.

Anyway, here (in chronological order) are my ten favourite gigs of the year:

  • Haim – Haim were on my top ten list from 2013. I saw them again early in 2014 and they were just as good.
  • Chvrches – I saw Chvrches three times. I’m going to choose the Somerset House show as my favourite, because I was standing about five rows away from the stage.
  • Annie Eve – I saw Annie Eve twice. I think the first show (at the Lexington) was just better, but only because the Lexington is a much better venue than the Borderline. I’d love to see her play somewhere like the Union Chapel.
  • Rick Wakeman – Something a bit different here. Rick Wakeman playing all of Journey to the Centre of the Earth. Very cheesy. Very pompous. Very wonderful.
  • Lorde – Lorde couldn’t be more different than Rick Wakeman! But this was probably my favourite gig of the year. I can see myself enjoying Lorde shows for many years to come. Coincidentally, Lorde was also the support at the next gig on the list.
  • Arcade Fire – I had wanted to see Arcade Fire for a few years. This show was every bit as overblown and wonderful as I hoped it would be.
  • Hazel O’Connor – Another complete contrast. 80s legend playing a low-key show in the pub at the end of my road. Wonderful stuff.
  • Kate Bush – Probably on everyone’s list. For all the obvious reasons.
  • Tunng – Always love seeing Tunng. And this career retrospective show was great.
  • Peter Gabriel – And another stadium show to close with. I was astonished to find out that it was twenty years since I saw Peter Gabriel. It certainly won’t be another twenty until I see him again.

As always, there were shows that were unlucky to fall just outside the top ten list. Special mentions should go to Paper Aeroplanes, Neutral Milk Hotel, Lisa Knapp and Banks.

Right, so what’s happening this year?