Imminent Death Of Twitter Predicted: A case study, the Malaysian F1 GP.
First, to explain the title: “Imminent death of X predicted” is a snowclone, and its entry* in the Hacker’s Dictionary sticks in my mind.
Hugh Macleod proposed “All online social networks eventually turn into a swampy mush of spam” as Hugh’s Law. A few weeks ago an article entitled "Can Twitter Survive What is About to Happen to It?" – which could serve as a proof of Hugh’s Law – got a lot of attention, not least from the twitterati. It said “There is soon going to be vastly more content in Twitter, and too much of it will be noise”, which fits the “Imminent Death” template perfectly.
I want to call out one source of noise which that article missed: the “blind retweet”. We had a good example of this among PowerShell folk recently – Hal Rottenberg posted a message asking people to "Please help me share the news of my #VMware #PowerShell book pre-order”. Over the next few hours 50 or so people reposted the message. Now, a re-tweet by someone widely followed of someone obscure is good: if you’re big in the PowerShell or VMware communities on Twitter, then telling your followers would help. But Hal is widely followed, the re-posters are less followed than he is, and anyone interested in the book saw the post the first time, either because they follow Hal or because they watch the tags #VMware and #PowerShell. Those possible book buyers got a tide of messages repeating the same thing. It didn’t help Hal share the news, because that needed to be done by posting somewhere else, or at least long enough after the first post to catch people who missed it.
So which sources of noise did the author of "Can Twitter Survive..." identify? One of them was
- Hypertweeting. Some Twitter users tweet legitimately, but far too much. Or the content they tweet is just inane.
This only becomes a problem when someone or something you think is worth following is swamped by a tide of posts with little worthwhile content. For an example let’s turn to F1, and James Allen. As an F1 commentator he divided people: either he was the best commentator since Murray Walker, or the worst commentator since Murray Walker**. His blog shows the journalism skills he’s honed over the years. When the cars are on the track, however, his tweeting goes berserk, and because it is stream-of-consciousness stuff, those journalistic skills go out of the window. On Sunday night I fired up my PowerShell library for Twitter, and ran
# Fetch James Allen's last 200 tweets, then put just their timestamps on the clipboard
$JA = Get-TwitterUserTimeLine Jamesallenonf1 200
$JA | select created_at | clip
And pasted the times of his posts into Excel: during Sunday’s race he posted 102 tweets in 108 minutes. In qualifying he managed 65 tweets in 68 minutes. Twitter is a lousy medium for a running commentary: his useful insights don’t come through the deluge of stuff I could get more easily elsewhere.
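Excel isn’t strictly necessary, of course. Here is a minimal sketch of the same arithmetic done in PowerShell, assuming created_at arrives in Twitter’s usual “Sun Apr 05 12:00:00 +0000 2009” form (the exact string depends on the API wrapper):

# Strip the "+0000" offset, parse each timestamp, and sort them oldest-first
$times = $JA | foreach { [datetime]::ParseExact(($_.created_at -replace ' \+0000 ',' '), 'ddd MMM dd HH:mm:ss yyyy', [Globalization.CultureInfo]::InvariantCulture) } | sort
# The tweet count over the span between first and last gives the posting rate
"{0} tweets in {1:N0} minutes" -f $times.Count, ($times[-1] - $times[0]).TotalMinutes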
- Notification Overload. Another issue is the rise of Twitter bots from various services, whether benign in nature or deliberately spammy:
- News and content sites are starting to pump updates into Twitter for every article they publish.
I follow Autosport magazine on Twitter, which is a classic case of pumping “updates into Twitter for every article”: across all forms of motorsport they made 51 posts on Saturday and 33 on Sunday, and I can’t filter those to just F1 – I’m not interested in Moto GP or IRL. Autosport does have “per category” RSS feeds, and James Allen’s blog has RSS too. So the moral of that is, I suppose, don’t follow on Twitter what you can subscribe to via RSS. However, unless I take steps to filter them out I still get the tweets in a search for F1…
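Subscribing from PowerShell is only a couple of lines anyway. A sketch, assuming a per-category feed URL of roughly this shape (illustrative, not confirmed):

# Pull Autosport's F1-only RSS feed instead of following the firehose on Twitter
$wc = New-Object System.Net.WebClient
[xml]$rss = $wc.DownloadString("http://www.autosport.com/rss/f1news.xml")
$rss.rss.channel.item | select title, pubDate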
Back in PowerShell I thought I’d have a look at the last 1500 posts on F1 (the maximum Twitter will allow). Since I did this at 11:30 PM it didn’t cover the race or the immediate aftermath, when James Allen and Autosport were at their peak – the oldest tweet I got came in at 3 PM, some four hours after the race finished. Yet lots of tweets said things like “Wow it is raining a lot in the F1”: everyone who cared either knew already or was trying not to find out until they watched a recording, and a tweet which reports a recording as if it were a live event is just more noise. This PowerShell got me the people who had made 10 or more posts in that time:
# Search for F1, with -deep fetching the maximum 1500 results Twitter allows
$f1 = Get-TwitterSearch "F1" -deep
# Group by author, and keep the names of anyone with 10 or more posts
$f1multi = $f1 | group author | sort Count -desc | where {$_.count -gt 9} | foreach {$_.name}
9 posters had posted 149 tweets between them – although that ignores ollieparsley of “Footy tweets”, who announced the creation of a similar service for F1 using 37 different aliases which he controls. All 37 posts were made in the space of a minute, which I’d call deliberately spammy. (He has registered F1_ and the names of all 10 teams and all 20 drivers, which I hope gets him another cease-and-desist notice.) There was also the person who made 10 tweets to tell 10 people about a blog post.
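Bursts like that are easy to spot mechanically. A sketch, assuming pubdate is a string ending in seconds-plus-offset that can be truncated to minute precision (the format depends on the search feed):

# Count posts per author per minute; more than a handful in one minute is a splurge
$f1 | group { "{0} {1}" -f $_.author, ($_.pubdate -replace ':\d\d\s.*$','') } |
    where {$_.Count -gt 5} | sort Count -desc | format-table Count, Name -a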
What were those multiple posters putting up? I got the information into the clipboard with this PowerShell:
# Tabulate the noisy authors' tweets, sorted by author and time, and copy to the clipboard
$f1 | where {$f1multi -contains $_.author} | sort author,pubdate |
    Format-Table -a title,author,pubdate | Out-String -Width 300 | clip
All these posters posted in great splurges (10 per minute or more) of links to the same handful of stories. Twitter’s 140-character limit compounds the problem because the links use shortening services (TinyURL, Snurl, Bit.ly, is.gd and so on). The services don’t all return the same short URL for the same page (and even if they did, different people might use different services to link to the same page), so without some client-side processing we can’t tell when the same page is being linked to by multiple people – which means Twitter can’t point to popular stories being linked to (as Digg, Technorati, StumbleUpon, del.icio.us etc. can). Again, the widely followed person who links to something is useful to their followers. The person who posts links to pages we all read anyway – and for the 10th time with a commonly followed tag – is just helping to turn it into the swampy mush of spam.
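That client-side processing isn’t hard, though. A sketch: ask each short URL where it points by reading the redirect’s Location header without following it, then group on the target (the function name is mine, not part of any library, and $links stands in for the short URLs extracted from the tweets):

# Resolve a short URL to its target by reading the Location header of the redirect
function Resolve-ShortUrl ($url) {
    $request = [System.Net.WebRequest]::Create($url)
    $request.AllowAutoRedirect = $false      # don't follow the 301, just inspect it
    $response = $request.GetResponse()
    $target = $response.Headers["Location"]
    $response.Close()
    if ($target) { $target } else { $url }   # not a redirect? return it unchanged
}
# Group links by where they really point, so ten short URLs to one story count as one
$links | group { Resolve-ShortUrl $_ } | sort Count -desc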
People often cite Metcalfe’s Law: the value of a network is proportional to the square of the number of users connected to it. The theory being that if 10 people are on a network each can talk to 9 others, so that’s 90 possible conversations, but if 1,000 people are connected that’s 999,000 possible conversations. The problem with Metcalfe’s Law is that all connections are assumed to add equal value. Of course that is not the case – the fact that I can get messages from some people actually reduces the value of the system – so there is an optimum size for each individual on the network, and the key is how to segment it. Telephone systems and e-mail are effectively segmented to the people you know; as for Twitter… well, if I were developing a Twitter client I’d concentrate on ways to do that filtering, along the lines sketched below.
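For instance, reusing the search objects from above, a client could simply mute anyone whose posting rate crosses a threshold – a crude sketch, with the threshold picked arbitrarily:

# Hide authors who posted 10 or more times in the search window
$noisy = $f1 | group author | where {$_.Count -gt 9} | foreach {$_.Name}
$f1 | where {$noisy -notcontains $_.author} | sort pubdate | format-table title, author -a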
Footnotes.
* The entry reads: “Imminent Death Of The Net Predicted: Since USENET first got off the ground in 1980-81 it has grown exponentially, approximately doubling in size every year. On the other hand, most people feel the signal-to-noise ratio of USENET has dropped steadily. These trends led, as far back as mid-1983, to predictions of the imminent collapse or death of the net. Ten years and numerous doublings later, enough of these prognostications have been confounded that the phrase ‘Imminent Death Of The Net Predicted!’ has become a running joke, hauled out any time someone grumbles about the S/N ratio or the huge and steadily increasing volume, or the possible loss of a key node, or the potential for lawsuits when ignoramuses post copyrighted material, etc. etc. etc.”
** For those who don’t follow F1 in Britain: Murray Walker was its first regular commentator on TV and continued until he retired, when James Allen took over. Many people loved Murray, and many thought he was an idiot; some even seemed to think both.