Tracking Packages

Last week I ordered a new laptop. Yesterday, Dell’s web site told me that it had shipped. And, an hour or so later, they gave me a link to track the delivery on the UPS web site.

Now that link is fun of course. But refreshing the page dozens of times a day gets a bit boring. So I started to look for alternatives. Firstly, UPS have a service that sounds like it emails you whenever the status changes. So I set that up. I got an initial email at that point, but nothing more – even as the status changed a couple of times. Reading the description more closely, it seems it only sends an email a) when there’s a delay, b) when the package is delivered and c) when explicitly requested on the web site. So that’s no good.

Of course, what I really wanted was a web feed. Something that I could subscribe to in Google Reader that would always show me the latest status. Being a geek I started to think about writing a program that would grab the information from the web site periodically and turn it into a web feed. But I stopped myself before I started writing any code. “Surely,” I thought to myself, “I can’t be the first person to want this. Something like this must already exist.”

I was right, of course. A quick Google brought me to Boxoh. Give them a tracking number (and it’s not just UPS – they also understand FedEx, DHL and USPS) and they will not only give you the web feed that I wanted, but also a Google map showing the progress of your package. How cool is that?

There appears to be no way to embed the map on another web site, but that’s the only fault I can find with the site.

Here’s the progress that my laptop is making. It started in Shanghai, before travelling to Incheon in South Korea and Almaty in Kazakhstan from where it flew to Warsaw, which is where it currently is. On Monday it’s due to arrive in Balham.

I love the fact that I can track it so easily. And I’m more than a little jealous of its travels.

Did Twitter Censor #GodIsNotGreat?

[Executive summary: Betteridge’s Law (probably) applies]

The Twitter furore over the #GodIsNotGreat hash tag has pretty much died down now, but there’s one branch of the debate that is still getting comments and retweets. Here’s an example from johnwilander.

#GodIsNotGreat pulled from trends because christians protest. But #ReasonsToBeatYourGirlfriend was allowed. Stay classy, @Twitter.

As I mentioned a couple of days ago, the hashtag vanished from the list of global trending topics on Friday morning. And this conspiracy theory leapt up almost immediately. As far as I can see, none of the people repeating this claim have any evidence to back it up – which is more than somewhat ironic given Hitchens’ evidence-driven view of the world.

The argument seems to go like this: At one point the hashtag was trending. Then Christians got upset and started making death threats aimed at the people who started the trend. Soon after that, the hashtag was no longer trending. Therefore Twitter must have given in to Christian bullying and censored the hashtag.

Whilst it all sounds frighteningly possible, I hope I don’t have to spell out the flaws in the logic. If you can’t work it out for yourself then I recommend the Wikipedia article on Correlation does not imply Causation.

I could be wrong here. There might be some irrefutable piece of evidence proving conclusively that Twitter deliberately censored the hashtag. If there is, then I haven’t seen it and I’d be grateful to anyone who could bring it to my attention.

There is, however, some evidence that Twitter didn’t censor the hashtag. On Friday morning, with the debate still raging, a Facebook friend in Canada pointed out that it was still trending there. In the middle of the afternoon someone pointed out that it was still trending in San Francisco. So if Twitter were censoring it, they weren’t doing a very good job. There’s even someone who apparently works for Twitter saying that they didn’t do it.

Of course, none of this is conclusive evidence that Twitter didn’t censor the hashtag. But balancing some evidence for non-censorship against absolutely no evidence at all for the censorship I know which side I come out on.

All of which leaves us searching for an explanation for the sudden disappearance. And, to be honest, I don’t think we really need to look too hard. Things stop being trending topics all the time. Things have to drop out of the list so that new things can come in. Otherwise the list would constantly be full of nonsense about Justin Bieber and Twilight. The Twitter trending topics algorithm can’t possibly just measure the popularity of topics. That would be incredibly dull. Instead, what it does is to look for changes in popularity. A steady buzz of the same few million people talking about a particular topic doesn’t get noticed, but a sudden increase in the number of people discussing the same topic does. The Buffer blog has a good explanation of this and the official Twitter blog says much the same thing.
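A toy sketch of that spike-detection idea (entirely my own illustration – Twitter’s real algorithm is obviously far more sophisticated, and the function name and thresholds here are invented):

```python
# Toy spike detector: a topic "trends" when its current mention rate
# jumps well above its recent baseline -- sheer volume isn't enough.
def is_trending(counts, window=6, spike_factor=3.0):
    """counts: hourly mention counts, oldest first.
    Returns True if the latest hour is a sharp jump over the
    average of the preceding `window` hours."""
    if len(counts) < window + 1:
        return False
    baseline = sum(counts[-window - 1:-1]) / window
    return counts[-1] > spike_factor * max(baseline, 1)

# A huge but steady topic never trends...
steady = [1_000_000] * 7
# ...while a small topic that suddenly explodes does.
spike = [200, 210, 190, 205, 220, 215, 5_000]

print(is_trending(steady))  # steady buzz of Bieber chat: not trending
print(is_trending(spike))   # sudden surge of interest: trending
```

Under a model like this, #GodIsNotGreat dropping out of the list needs no conspiracy at all: once the initial surge levels off, the topic stops registering, however many people are still tweeting about it.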

I’m sure that this won’t convince the conspiracy theorists. “Ah,” they’ll say, “That’s all very convenient. But that just gives Twitter an easy way to cover up their censorship.” Which is true, I suppose, but hardly a basis for a rational discussion.

And that’s the most disappointing thing to come out of this affair. The people making these accusations are fans of Christopher Hitchens. You would hope they’d be from the more rational end of the spectrum. You’d hope that they would be above making accusations like this without evidence. I guess no-one is immune from irrationality.

But I’m going to go out on a limb here. And lay my cards on the table. And other clichés that Hitchens would despise.

Twitter (probably) didn’t censor the #GodIsNotGreat hashtag.

Update: The author of the tweet I quoted above seems to agree with me.

LoveFilm and Silverlight

Yesterday, LoveFilm announced that they are changing the technology which powers their film streaming service. From early in January the existing Flash-based system will be replaced by one which uses Microsoft’s Silverlight technology. This is extremely disappointing for a couple of reasons.

Firstly, there’s the immediate technological fallout. Silverlight doesn’t run on as many platforms as Flash does. Anyone running an older (non-Intel) Mac will no longer be able to use this service. Neither will people running Linux on their PC. This also means that people trying to access the service on an Android device will be out of luck. I don’t know how many of LoveFilm’s customers this will affect, but it can’t be a trivial number.

But it’s the second reason that makes me even more depressed. And that’s the reasoning behind the decision. Paul Thompson, the project manager for the streaming service says this:

We’ve been asked to make this change by the Studios who provide us with the films in the first place, because they’re insisting – understandably – that we use robust security to protect their films from piracy, and they see the Silverlight software as more secure than Flash.

Simply put: without meeting their requirements, we’d suddenly have next-to-no films to stream online.

This is a change that the company have been forced into by the studios who make the films that LoveFilm want to stream. The studios believe that their content needs to be protected from piracy and that Silverlight provides a higher level of security than Flash does.

They’re probably right. But they’re fighting the wrong battle.

Remember when all the digital music that you could buy had DRM? Remember what a pain it was keeping track of how to play particular tracks or which devices you were allowed to play them on? Or perhaps you don’t remember that because you were sensible enough to steer clear of that madness. Perhaps you did what most people did and just ripped your CDs or *ahem* “acquired” music from elsewhere. Eventually the record companies realised that they were fighting a battle that they couldn’t win and now we all happily buy MP3s with no DRM. Well, I say “all”, but one of the fallouts from this battle is that a generation grew up with no experience of paying for music. There are still a large number of people who think nothing of downloading music of dubious provenance rather than buying it from Amazon or iTunes. If the record companies had seen sense earlier, they might not have lost an entire generation’s worth of income.

And that’s apparently where we see ourselves again now. The film studios think they are protecting their content, but actually they are training people to go elsewhere. I would love to be able to buy digital copies of films to download or to rent access to streaming versions, but they need to be DRM-free versions that I can use as I want to use them. Not crippled versions that I can only use on devices and in ways that are approved by the studios. And if the studios are going to stop suppliers from giving me what I want, then I’ll go elsewhere. It’s not as if it’s hard to track down versions of any film or TV show that has ever been released on DVD. Or shown on a digital TV channel. We all know where to get these things, right? And we all use them. Because we’re being trained to believe that it’s the easiest way to get hold of this content. And when the easiest way is also the cheapest way, the studios lose out.

It’s not just the film studios who are re-fighting the same battle. Book publishers are doing the same thing. Pretty much any Kindle book that you buy from Amazon will have DRM. The publishers are following exactly the same short-sighted logic and reaching the same flawed conclusions. They have a slight advantage over the record labels and film studios as their old-style product is a lot harder to rip into digital format. But the arguments against what they’re doing are just as valid. Kindle book DRM has been broken repeatedly. And once the DRM is removed from just one copy of a product, the producer of that product has lost the game.

Those who do not learn from history are condemned to repeat it. The film studios and the publishers are repeating the mistakes that the record labels were making last decade. They run the risk of alienating and losing the support of a whole generation of potential customers.

Update: I should point out that there is a Linux port of Silverlight called Moonlight. But, as I understand it, it doesn’t support the DRM features that LoveFilm would be relying on.

Hating Gnome 3

I’ve been using Linux as my desktop operating system for about fifteen years. For most of that time I’ve used GNOME as my desktop environment. That’s longer than I ever used Windows so it’s become ingrained into the way I work. I’d guess that I’m at least 50% more efficient using GNOME than I am using any other desktop environment.

Then, a couple of months ago I upgraded to Fedora 15 which included the new GNOME 3. And everything changed.

And I really mean everything. GNOME 2 would be recognisable to someone used to using Windows or Apple’s OS X. It had menus which opened windows and those windows could be minimised into icons. Your most frequently used icons could be dropped onto your desktop for easy access. It’s the way that graphical user interfaces have worked for decades.

But the GNOME developers decided that this de facto standard was no longer what they wanted. Menus, they decided, were old-fashioned. What people really needed was to search for the name of the program they wanted to run by activating a hot-spot in the top-left corner of the screen and then typing. And no-one really needs icons all over their desktop. That just looks untidy. Oh, and minimising programs – who uses that? They’ve removed the minimise button from all windows. And if you manage to work out how to minimise a window (by right-clicking in the title bar to get a menu) the window minimises into nowhere rather than into the icon dock that we’re used to.

As I say, pretty much everything changed. My first impressions were that I hated it.

But I decided to give it a fair chance and I’ve been using it on three computers for six or eight weeks to see if I’d get used to it.

And I still hate it.

I’ve found out that there are ways to bend it back towards usability. Various extensions can be installed to fiddle with the minimal default set of icons in the top panel. Things like adding a drive menu and removing the accessibility icon. There’s a ‘tweak advanced settings’ tool that you need to install. That allowed me to put icons back on my desktop and return the missing minimise and maximise buttons to all windows. Oh, and somehow I managed to get a permanent Mac-style program launcher on the right-hand side of the screen. It’s not menus, but it’s better than the standard approach for the most common programs I use.

But it’s still not right. I can’t find a way to get my menus back. And, probably most importantly to me, I can’t find a way to put iconised windows anywhere useful (or, indeed, anywhere visible).

I’m sure that the GNOME developers thought they had good reasons for all of the individual changes that they made. But together they make for a completely different experience for the user. I’d probably be more productive in Windows than I am in GNOME 3. Windows is certainly far more like GNOME 2 than GNOME 3 is.

I don’t know who I’m more angry with. The GNOME developers for deciding to release a product that is so completely different to the previous version. Or the Fedora team for including it as the standard desktop in their latest version.

Some of you are probably thinking – ah, but surely GNOME is Open Source; why not just fork GNOME 2 and use that on Fedora? I really hope that someone does that, but I’m sure that a project like that is well beyond my expertise.

If that doesn’t happen, I’m probably going to have to look for an alternative desktop environment. I think that KDE still looks like a standard GUI. Perhaps I’ll give that a go. Or people have been trying to convince me to use a Mac for several years. I never seriously considered it because I didn’t want to learn a new desktop environment.

But if I’m being forced to learn a new environment anyway, then I should probably consider a Mac too.

MPs on Twitter

Did you ever make a chance remark that planted a seed of an idea which then grabbed hold of you and refused to let you go until you’d done something about it?

That happened to me on Sunday. I was cleaning up some broken feeds on Planet Westminster when I tweeted:

Cleaning up some broken feeds on Planet Westminster (http://bit.ly/47fCK) Interesting how many MPs’ blogs have vanished since the election.

And a couple of minutes later I added:

Someone should monitor the numbers of MPs actively blogging and tweeting over time. Maybe that should be me.

And that was it. I realised that I’d get no rest until I’d started work on the project.

Yesterday I published a graph of the number of MPs on Twitter over time. It’s only the first step. I want to start tracking how active they are and how well they interact with other Twitter users. Expect more graphs to appear on that page over the coming weeks.

I have to thank the nice people over at TweetMinster. They are doing all the hard work of actually tracking the MPs on Twitter. All I’m doing is processing their list.

A few caveats. Currently the graph is generated manually, so it won’t be kept up to date automatically. Also it just works from the date that people on the list joined Twitter. It doesn’t handle people leaving Twitter – they’ll just come off the list and all of their data will vanish from the graph. So it doesn’t track things like Nadine Dorries’ two (or is it three?) flirtations with Twitter.

You should also note that I don’t handle people joining Twitter before they become an MP. For example, the first MP to join Twitter was Julian Huppert on 2nd May 2007. But he didn’t become an MP until three years later.
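For the curious, the underlying calculation is simple enough to sketch in a few lines of Python (the names and dates below are invented, and – as per the caveats above – this ignores people leaving Twitter or joining before becoming an MP):

```python
# Sketch of how the graph data could be built: sort the MPs on the
# list by the date they joined Twitter, then emit a running total.
from datetime import date

joins = [  # invented stand-ins for the TweetMinster list
    ("MP One",   date(2008, 3, 14)),
    ("MP Two",   date(2008, 11, 2)),
    ("MP Three", date(2009, 5, 20)),
    ("MP Four",  date(2009, 5, 28)),
]

running = []
total = 0
for name, joined in sorted(joins, key=lambda j: j[1]):
    total += 1
    running.append((joined, total))

for day, count in running:
    print(day.isoformat(), count)
```

Each (date, count) pair becomes one point on the graph.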

So take it all with a pinch of salt. But I think it’s an interesting start. Let me know what you think. And feel free to suggest other useful graphs that I could create.

And, yes, I’ll get round to doing blogs too at some point.

Free Web Advice: TalkTalk

Ten days ago I got a cold-call from TalkTalk. They called me on a number which is registered with the TPS and I have no existing business relationship with them so they should not have called that number.

In this situation most people, and this includes me, will probably just be mildly rude to the caller and hang up. But on this occasion I decided that I would take it further. I went to their web site to find a way to complain to them.

They don’t make it easy to find a way to get in touch via their web site, but eventually I found this form. The form starts by asking what your question is about. But the choice of subjects doesn’t include “Unwanted Cold Calls”. Eventually I decided to use “Joining TalkTalk” as it was the only option that seemed even vaguely appropriate. My problems didn’t end there as the form then changed to present me with another list of options to choose from. Once more none of them matched so I chose “Before You Order” which was, at least, technically accurate.

Filling in the rest of the form was easy. I gave them my contact details, selected the option saying that I wasn’t a customer and wrote a description of my complaint.

Lesson one: Making it hard to contact you will not stop people from contacting you. It will only ensure that they are a little bit more angry with you when they eventually work out how to do it.

A couple of days later I got a reply by email. But it was useless. They said that they would remove my details from their marketing list (within 28 days!) but completely ignored my request for an explanation of why they thought it was reasonable to call me in the first place. So I replied to the email explaining in some detail why their response was unsatisfactory.

A few minutes later, I got an email telling me that my message could not be delivered as the email address was unknown. They had sent the email from an invalid email address. Presumably this is to stop people getting into a dialogue with them. Maybe it works for some people, but it didn’t work for me. I went straight back to the web form from hell and explained their shortcoming to them.

Lesson two: Never ever send customer complaint responses from an undeliverable email address. It gets your customers (and potential customers) really angry.

A couple of days later I got another reply. This one came from someone who at least seemed willing to try to deal with my problem. But they seemed somewhat confused. They said that they were unable to locate my file in their system and asked me to confirm whether or not I was a TalkTalk customer. Two problems with this. Firstly, they’re asking me to provide more details and not giving me an easy way to get the information back to them. And secondly, a few paragraphs back when I was talking about filling in the form for the first time I said that I “selected the option saying that I wasn’t a customer”. Yes, this information is included in the contact form. So why ask me for it?

Lesson three: If you ask someone for more information in order to progress a complaint, give them an easy way to get back to you. Otherwise they’ll just get even more angry.

Lesson four: If your contact form collects information, then make sure that information is available to the people dealing with the complaint. Asking people to repeat information that they have already given you is a great way to make them really angry.

I went back to the dreaded web form and filled it in again. Every reply I get has a case number assigned to it. Each new reply I submit generates a new case number. I’ve been copying the case numbers from the emails I’ve received and pasting them into the new request in the hope that someone will tie all of the replies together into a single thread.

Lesson five: Make it easy for your customer (or potential customer) to track the progress of their single ticket through your system. Forcing people to open multiple tickets for the same issue will just confuse your support staff and anger your customers.

Five simple lessons. All based around the idea that you really don’t want to make customers (or potential customers) angry. Let’s review the list.

Lesson one: Making it hard to contact you will not stop people from contacting you.
Lesson two: Never ever send customer complaint responses from an undeliverable email address.
Lesson three: If you ask someone for more information in order to progress a complaint, give them an easy way to get back to you.
Lesson four: If your contact form collects information, then make sure that information is available to the people dealing with the complaint.
Lesson five: Make it easy for your customer (or potential customer) to track the progress of their single ticket through your system.

Throughout this piece I’ve portrayed myself as a potential customer. I’m not, of course. The way the company have dealt with this complaint has ensured that I’m never going to do business with TalkTalk.

But I’ll continue pushing this until they answer my questions. I’ll let you know how I get on.

Free Web Advice: VirginMedia

I’m not a web designer, but I’ve been working in this industry since before there were web sites so I like to think I know a bit about what does and doesn’t work as far as web site usability goes. It’s mainly the stuff which doesn’t work that stands out. And there’s so much of it.

Earlier this week I was using the VirginMedia web site. Specifically, I wanted to log on to my account and download a PDF copy of my latest bill. There were three things in the process that really annoyed me. I should point out that I’m a registered user of the site, so I already had an account set up.

Username or email
The login screen asks for your username and password. That’s pretty standard stuff, of course. But when a site asks me for a username then I assume that it is going to be “davorg” (the username I’ve used on web sites for as long as I can remember). In this case, that’s not what they wanted. Your username on the VirginMedia site is your email address. Other sites use email addresses as your username, but in most cases they then label the field as “email”. Labelling it as “username” adds an unnecessary complication. I gave them my username and, as it was incorrect, the error message pointed out that my username would, in fact, be my email address. So they recovered from the problem well, but there was a moment or two of unnecessary frustration.

Limited length passwords
Having established what my username was, my next problem was remembering my password. I tried a few likely candidates and, eventually, resorted to the “forgot my password” link. That sent me an email containing a link to a page where I could set a new password. And that’s when I remembered why I had forgotten the original password.

VirginMedia have strange limits on what can go in your password. They have the usual stuff about having both letters and digits in your password, but they also have a maximum length of ten characters. That’s why I couldn’t remember it – most of my standard passwords are longer than that. It seems strange to restrict users to such short passwords.

It’s worrying in another way too. If you’re following best practice for dealing with users’ passwords then you won’t be storing the password in plain text. You’ll store a hashed version of the password instead. And many of the popular hashing algorithms (for example, MD5) have the property that no matter how long the text that you start with is, the hashed version will always be the same length. So you create a database column of that length and you don’t need to restrict your users at all. Having this restriction isn’t conclusive proof that they’re storing plain text passwords, but it’s enough to worry me slightly.
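That fixed-length property is easy to demonstrate (I’m using MD5 here only because it’s the example above – for actual password storage you’d want a salted, deliberately slow algorithm, not plain MD5):

```python
import hashlib

# A hash digest has a fixed length no matter how long the input is,
# so a hashed-password column never needs a ten-character limit.
for password in ("short", "a" * 10, "a" * 500):
    digest = hashlib.md5(password.encode()).hexdigest()
    print(len(password), len(digest))  # the digest is always 32 hex chars
```

Whatever you feed in, you get 32 hexadecimal characters back, so a length cap on the user’s password buys the database nothing.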

Naming downloaded files
Having (finally) logged into my account it was easy enough to find the link to download my current bill. And within seconds I had the file on my computer. But the file was called “GeneratePDF”. And when I come to download next month’s bill that will also be called “GeneratePDF”. What has happened here is that GeneratePDF is the address in their web site that is used to, well, generate PDFs. And in the absence of other information, browsers will name downloaded files using the address that they came from. It’s easy enough to change that default behaviour using the content-disposition header. Using this header it would be easy to tell my browser to save the downloaded file as, for example, vm-2011-05.pdf. Anything would have been more useful than the current set-up. Notice that the current name doesn’t even have a ‘.pdf’ extension so it’s likely that on some computers double-clicking the downloaded file won’t open it in the user’s PDF-reading software.
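In whatever server-side code generates the bill, setting that header is a one-liner. Here’s a minimal sketch (the function name and filename scheme are my invention, not VirginMedia’s actual code):

```python
# Minimal sketch: tell the browser what to call the downloaded file.
# Without Content-Disposition the browser falls back to the last part
# of the URL ("GeneratePDF"); with it, the save dialog suggests a
# sensible, dated name with a proper .pdf extension.
def pdf_response_headers(year, month):
    filename = f"vm-{year}-{month:02d}.pdf"   # e.g. vm-2011-05.pdf
    return {
        "Content-Type": "application/pdf",
        "Content-Disposition": f'attachment; filename="{filename}"',
    }

print(pdf_response_headers(2011, 5)["Content-Disposition"])
```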

So there you have three things that annoyed me about the VirginMedia site. And the really annoying thing is that two of them (the first and third) are really trivial to fix. The second is probably harder to fix, but it’s possibly evidence of some rather broken design decisions taken early in the process of developing this web site.

I tweeted these three issues on Wednesday and I got a response from the virginmedia Twitter account saying “Ok, some fair points there. Will feed this back for you, thanks for taking the time to let us know!” I’ll be downloading my bill every month, so I’ll let you know if anything gets fixed.

Social Networking 101

If you have a blog and a Twitter account then it’s nice to feed your tweets onto the front page of your blog. It can be an effective way to let your friends see what you’re saying in both places.

If, however, you later delete your Twitter account then it’s probably a good idea to remove the widget from your blog.

There’s one very important reason for doing this. Eventually Twitter will allow your deleted account name to be recycled. And then someone else will be able to post tweets which automatically appear on your blog.

Say, for example, you’re an MP who has made a few enemies in her time. And say that you’ve flounced away from Twitter claiming that it is a “sewer”. In that situation you probably don’t want to leave a way open for people who don’t like you to post whatever they want on your web site.

I mean, if you’re currently campaigning about abstinence and sex education, you probably don’t want your web site to say:

I think sex before marriage should be discouraged. It’s better if at least one of you is married, doesn’t matter who to particularly.

Or:

I suppose with fisting there’s no risk of pregnancy.. ..maybe kids should be taught about that?

Sometimes I wonder if the money that Nadine Dorries spent on “PR” wouldn’t have been better spent on IT consultancy.

They’ll fix it eventually, so Tim has captured it for us.

Update: And it’s gone. That was slightly quicker than I expected. I’m now expecting a blog post from her accusing someone (probably Tim) of hacking her computer.

Independent URLs

Today Twitter got very excited about a story on the Independent web site. Actually, it wasn’t the story that got people excited, it was the URL that was being shared for the story. The story was some nonsense about Kate Middleton’s face being seen in a jelly bean. The URL was:

http://www.independent.co.uk/life-style/food-and-drink/utter-PR-fiction-but-people-love-this-shit-so-fuck-it-lets-just-print-it-2269573.html

And if you click on it, sure enough, it takes you to the story on the Independent web site. Some people presented this as evidence of a joker (or, worse, a republican) taking control of the web site. But the actual explanation is a little more complex than that.

The real URL – the one that the Independent published on the site and in its web feed – was somewhat different. It was:

http://www.independent.co.uk/life-style/food-and-drink/kate-middleton-jelly-bean-expected-to-fetch-500-2269573.html

That seems far more reasonable, doesn’t it (well, of course, the story is still completely ridiculous, but we’ll ignore that). So what was going on?

Well, if you look closely at both URLs you’ll see that the number at the end of them (2269573) is the same. That number is obviously the unique identifier for this story in the Independent’s database. That is the only information that the web site needs in order to present a visitor with the correct story. So the web site is being quite clever and ignoring any text that precedes that number. This means that you can put any text that you want in the URL and it will still work correctly as long as you have the correct identifier at the end. So the URL could just as easily have been one of these:
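My guess at the routing rule, sketched in Python (I obviously haven’t seen the Independent’s code – this just reproduces the behaviour as observed):

```python
import re

# Guess at the Independent's routing rule: only the trailing number
# before ".html" matters; any slug text before it is ignored.
def story_id(url):
    m = re.search(r'(\d+)\.html$', url)
    return int(m.group(1)) if m else None

official = ("http://www.independent.co.uk/life-style/food-and-drink/"
            "kate-middleton-jelly-bean-expected-to-fetch-500-2269573.html")
prank = ("http://www.independent.co.uk/life-style/food-and-drink/"
         "you-can-put-any-text-here-2269573.html")

print(story_id(official))                      # 2269573
print(story_id(official) == story_id(prank))   # both resolve to the same story
```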

http://www.independent.co.uk/life-style/food-and-drink/why-do-people-still-read-the-indy-2269573.html

http://www.independent.co.uk/life-style/food-and-drink/you-can-put-any-text-here-2269573.html

The slight problem that the Independent had was that the alternative version of the URL was being shared so widely that Google was ranking it higher than the official version. So when people were Googling for the “kate middleton jelly bean” story, Google was presenting them with the dodgy version of the URL.

So why do the Independent use such a clever system if it’s so open to abuse?

One reason is for search engine optimisation. As I said above, you only really need the unique identifier for the story in order to find it in the database. And that means that the URL can be simplified to:

http://www.independent.co.uk/life-style/food-and-drink/2269573.html

But that doesn’t give Google much information about the content. So it’s generally considered good practice to have some text in the URL as well. And I suppose one of the simplest ways to implement that is to ignore everything in the URL except the last sequence of digits. That’s apparently what the Independent do.

There’s an alternative approach. And that’s to include both the text and the identifier in the URL. And to only accept a URL as valid if both match exactly. I can think of a good reason why that might not work for a newspaper web site. Sometimes newspapers change the headline on a story. And sometimes that change is for legal reasons. In cases like that you really don’t want to have the old headline left around in the URL. And you don’t want to change the URL as any links to the original URL will no longer work. In cases like that, the Independent’s approach works well. You can change the headline (and, hence, the URL) as often as you like and everything will still work.

Incidentally, whilst researching this post I found that the Daily Mail had written a rather gloating article about the Independent’s problems today. The URL for that article is:

http://www.dailymail.co.uk/sciencetech/article-1378504/Embarrassment-Independent-URL-twitter-fiasco.html

What’s interesting to note is that the text portion of that link is just as flexible as the Independent link. I can change it to:

http://www.dailymail.co.uk/sciencetech/article-1378504/The-Daily-Mail-Is-A-Bit-Crap.html

And everything still works correctly. The big difference between the two implementations is that the Mail version will redirect the browser to the canonical version of the URL whereas the Independent will leave the alternative URL in the browser address bar. I have to say that, in this case, I think the Daily Mail is right.
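A sketch of that redirect-to-canonical approach (invented code, not the Mail’s actual implementation – the slug table here stands in for their articles database):

```python
# Sketch of the Daily Mail-style behaviour: any slug finds the story,
# but a wrong slug earns a 301 redirect to the canonical URL.
CANONICAL_SLUGS = {  # invented stand-in for the articles database
    2269573: "kate-middleton-jelly-bean-expected-to-fetch-500",
}

def resolve(story_id, slug):
    canonical = CANONICAL_SLUGS.get(story_id)
    if canonical is None:
        return (404, None)
    if slug != canonical:
        # A permanent redirect keeps old links working even after a
        # headline (and therefore the canonical slug) has changed.
        return (301, f"/{canonical}-{story_id}.html")
    return (200, None)

print(resolve(2269573, "you-can-put-any-text-here"))
print(resolve(2269573, "kate-middleton-jelly-bean-expected-to-fetch-500"))
```

The prank URL still works, but anyone following it ends up looking at the official address.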

It’s not just newspapers that have this flexible approach to URLs. Amazon URLs have a flexible text section in them too. Each item that Amazon sells has a unique identifier, so the canonical Amazon URL looks like:

http://www.amazon.co.uk/dp/B003U9VLKG/

But whenever you see a URL on Amazon, they have added a descriptive text field:

http://www.amazon.co.uk/Harry-Potter-Deathly-Hallows-Part/dp/B003U9VLKG/

But, as with the newspaper URLs, that text field can be changed to anything. It’s only the identifier that is required.

http://www.amazon.co.uk/Three-Hours-Of-A-Boy-Looking-Glum-In-A-Tent/dp/B003U9VLKG/

Hours of fun for all the family.

Your mission, should you choose to accept it, is to find other web sites where there’s an ignored text section in their URLs. Please post the best ones you find in the comments.

Bonus points for getting one of the papers to write about your prank.

Update: Here’s Independent editor Martin King’s take on the incident. He says that the system is used for exactly the two reasons I mentioned above – “The feature has search engine benefits but from an editorial perspective it enables us to change repeatedly a headline on a moving article.”

Moonfruit and Techcrunch

For the past few weeks I’ve been working with Moonfruit. They have been working to replace their rather aging web site with something that looks a lot more contemporary.

Today was the day that the new version went live. And it was also the day that I got an interesting lesson in how marketing works in our digital world.

The company’s co-founder Wendy Tan White had been interviewed by Techcrunch and we were expecting that article to be published at about lunchtime. In order to get an idea of when the article went live, I set up a search panel on TweetDeck watching for mentions of “moonfruit” on Twitter.

During the morning there was a steady stream of mentions. This was largely people pushing their Moonfruit-hosted web sites. Then at about 12:25 that all changed. Where previously each update of the search was bringing in two or three new results, suddenly there were twenty in one go. And then another twenty. And another. And another.

On closer inspection I saw that the vast majority of them were exact reposts of this tweet from @techcrunch.

500 Startups Bites Into Moonfruit’s Simple Site Builder For Design Fans http://tcrn.ch/dYbp98 by @mikebutcher

Some of them were retweets, but most of them were automated reposts (often using Twitterfeed). In the first twenty-five minutes I estimate that the story was reposted 400 times. By now (about nine hours later) the number must be two or three times that.

I was astonished to see this volume of reposts. I knew that a story on Techcrunch was good publicity, but I had no idea just how good it was. That’s an incredible number of people who have been told about this article – and, hence, the Moonfruit relaunch.

But there’s another side to this. Why are there so many automated systems set up to repost tweets from Techcrunch? I know that Techcrunch is a useful source of tech news, but doesn’t that mean that anyone who is interested in tech news will already be following @techcrunch on Twitter? If every tweet from @techcrunch is repeated a few hundred times and @techcrunch posts a few dozen tweets each day, isn’t that a few thousand pointless tweets? I’m sure that these two or three hundred reposters aren’t amplifying Techcrunch’s reach by two or three hundred times. I’d be surprised if they were amplifying it by even ten times.

So what is the point of these hundreds of reposting engines? Is it some kind of spam system? Or an SEO trick? Or are there really hundreds of people out there who think that their followers benefit from reposted content from Techcrunch?

You might be wondering why I haven’t linked to any of the reposts. Well, of course, in the nine hours it’s taken me to get round to writing this post, most of them have vanished from Twitter’s search engine. Does that mean they were scams that Twitter has cleaned up? Or do tweets just have a really short lifespan in Twitter’s search engine?