Government Klout: Does Your Public Sector "Digital IQ" Measure Anything Important?
If you work in the public or civic sectors, you have now been assigned a Digital IQ, whether you asked for it or not.
There is no surer way to get people talking about your research than to rank entities in a list, sort them into easily digestible categories, and declare that yours is the definitive study of who's hot and who's not.
However, for those being ranked, the real question is not what one's score is, but whether the score measures anything meaningful, and if so, what. Here I analyze the results of a new study about the digital skill sets of organizations operating in the public sector.
What's Your Public Sector Digital IQ?
The public sector's digital prowess has now been quantified, with everything from government and military agencies, political and advocacy groups, and various kinds of associations and organizations thrown into the competitive mix. Scott Galloway, a Clinical Associate Professor of Marketing at New York University's Stern School of Business (and the founder of "a think tank for digital innovation" called L2), published the study with his team (Maureen Mullen, Danielle Bailey, Tanuj Parikh, and Christine Patton of L2, Craig Markus of McCann Erickson, and Sanjay Rupani of The George Washington University), along with Doug Guthrie, Dean of The George Washington University School of Business in Washington, DC.
This study, L2's Digital IQ Index: Public Sector, is available in full here. It is described by the authors as "the definitive benchmark for online competence."
These digital competencies are boiled down to four easily digestible categories that communicate how smart each organization is compared to the others. Not unlike a 1950s grade school principal crudely assessing her students, public sector organizations are assigned the descriptions Genius, Gifted, Challenged, or Feeble. According to the press release, one of the study's key findings was that, "While more than 80 percent of organizations are present on at least one social media platform, the majority of public sector organizations measured were characterized as either 'challenged' or 'feeble' in the index."
For the record, these are the top organizations ranked in the study:
1. National Aeronautics and Space Administration (NASA)
2. The White House
3. People for the Ethical Treatment of Animals (PETA)
4. United States Army
5. Democratic National Committee (DNC)
6. World Wildlife Fund – U.S. (WWF)
7. Republican National Committee (RNC)
8. Nature Conservancy
9. American Association of Retired Persons (AARP)
10. Department of State (DOS)
My three main critiques of the L2 study on the Digital IQ of the Public Sector are the following:
(1) It makes little sense to group these diverse public sector organizations into an artificial "industry." Government agencies, political campaigns, nonprofit charities, and other organizations are entirely different operationally.
(2) Many of the primary conclusions of the study were already well known prior to quantification. Sometimes an obvious, qualitative answer is fine, and the quantification is largely unnecessary.
(3) The data being collected and analyzed to create the rankings are largely divorced from organizational missions. The organizational mission is the only thing that matters.
There is also an excellent, complementary critique of a similar L2 study by the Open Forum Foundation, focused more specifically on the precise metrics.
Does It Make Sense to Group Diverse Public Sector Organizations?
As one can see above, the top 10 list is a mix of government agencies (like NASA), political groups (like the DNC), and advocacy or other organizations (like AARP). The rest of the list of 100 is as well, of course. But an important question to ask straightaway is whether this makes any sense to do in the first place.
The justification seems to be that in seven previous studies, the L2 group looked at entire industries like Automotive and Retail. Thus, these predominantly business school professors and researchers appear to consider the Public Sector a similar "industry," within which entities should be ranked and even compared across industries like so many stocks on the NYSE. From the report, p. 12:
More than 50 percent of the organizations indexed registered Digital IQs in the Feeble and Challenged ranks, suggesting that most public sector organizations have yet to unlock the power of digital platforms. This is a stark contrast to the Digital IQ rankings of more digitally-mature private sector industries, including Automobile (32 percent Challenged & Feeble) and Specialty Retail (42 percent Challenged & Feeble).
The meaning of such comparisons for public sector employees or outside observers is murky at best (and even if there were a meaning, the statistical significance of the differences between 50%, 42%, and 32%, if any, is not given). As someone with some experience on the subject, I don't think it's useful to compare NASA and the Commerce Department with Ford and GM, or Ralph Lauren and Ann Taylor, mainly because no one of any consequence cares.
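To make that missing significance check concrete, here is a minimal sketch of a two-proportion z-test in Python. The 100-organization sample size comes from the public sector index itself; the report does not give sample sizes for the Automotive and Specialty Retail indexes, so the 100 used for them below is purely an assumption for illustration.

```python
# Hypothetical significance check for the "Challenged & Feeble" percentages.
# All sample sizes other than the public sector's 100 are assumptions.
from math import sqrt, erf

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                          # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))    # standard error of p1 - p2
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return z, p_value

# 50 of 100 public sector orgs vs. an assumed 42 of 100 specialty retailers...
print(two_proportion_ztest(50, 100, 42, 100))  # z ~ 1.13, p ~ 0.26: not significant
# ...and vs. an assumed 32 of 100 automotive brands.
print(two_proportion_ztest(50, 100, 32, 100))  # z ~ 2.59, p ~ 0.01: borderline
```

Under these assumed sample sizes, the gap between the public sector and specialty retail would not be statistically significant at all, which is exactly why the report should have shown its work.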
Further, the public sector is not an "industry" in the sense that the L2 study describes, because the entities within it (unlike Ford and GM, etc.) operate by completely different rules ranging from financial to legal to operational. In other words, where the funding for digital initiatives and programs comes from, on what timeline, with what laws and rules attached, etc. are remarkably different across government, political, advocacy, and other groups. How and why a group like the Army uses digital tools is truly different from what the DNC does with them, which is different still from a group like PETA. One need only look at the graph about online ad spending on p. 18 to see this; of course, anyone with knowledge of the field already knew that political and advocacy groups spend more here than, say, the Executive Branch.
One fantastic example of how organizations within the "public sector industry" operate their digital tools under different constraints is evident in an Army Times study of the Army's internal digital portal, Army Knowledge Online (AKO). Users who were surveyed had complaints about its slow speed, its burdensome security features, its haphazard search engine, and more. In an Army Times interview, Gary Winkler, the Army's top official in charge of AKO, defended the system in some ways, but also pointed out how bureaucratically difficult it is to change things. As the "program executive officer for enterprise information systems at Fort Belvoir," it might seem to an outsider that Mr. Winkler would have tons of power; however, here are four interesting factoids about the limitations put on a person in his position:
(1) Changes to AKO must be filtered through a 40-member Army CIO Executive Board, which includes members from each Army command, all headquarters organizations, plus the Army Reserve and the Army National Guard. This body also determines per-year AKO funding. It is chaired by the Chief Information Officer of the Army.
(2) Security requirements like password rules are under the separate control of the Army Forces Cyber Command (Fort Belvoir, Virginia), which itself works under the U.S. Cyber Command (Fort Meade, Maryland), which is in turn subordinate to the U.S. Strategic Command (Omaha, Nebraska).
(3) Personnel deployed around the world have entirely different computer facilities at their disposal, and thus cannot always do things the easy way; in other words, there is a "best case scenario" envisioned by people in Maryland and elsewhere which doesn't always exist. For example, on AKO you can use an ID card called a "CAC card" to log in, bypassing passwords and questions, but you need a special CAC card reader to do so. The article relates this anecdote:
Maj. Jason B. Nicholson, a foreign area officer assigned to the U.S. Embassy in Dar es Salaam, Tanzania, said he is nine hours from the closest CAC reader. He once mistyped his password, was locked out, and couldn't remember the correct format for a security question about his birthday.
(4) AKO is the portal that takes personnel to over 600 different applications, forms, and the like, but AKO often doesn't have control over those documents. Winkler relates in his Army Times interview:
MyForms is on the Army Publishing Directorate’s servers, medical records belong to the Army Medical Department, personnel management information to Human Resources Command, and finances to the Assistant Secretary of the Army (Financial Management & Comptroller).
AKO is not the only entity in the government operating under various levels of bureaucracy, constraints, and rules - far from it. And it is hard to believe that, say, the DNC or PETA operates its digital assets and plans under anything resembling the constraints described above. Thus, organizations within this artificial public sector "industry" can be compared, but the comparison is devoid of meaning. Further, none of the realism described above seems to be incorporated into the Digital IQ scoring; one might think a variation that "normalized" the IQ scores for such factors would be valuable (one could consider bureaucracy something akin to socioeconomic background or other factors that influence human behavior in a sociology study).
It is further unclear what the graph on p. 16 of the report actually means. No group on average is Genius or Feeble, and all of them are more or less in the middle (two Challenged, four Average, one Gifted). What is the significance of advocacy groups having an average score of 101, versus 98 for the Executive Branch and 91 for independent agencies? Obviously, some groups are somewhat better than others (mathematically this must be the case unless they are all tied), but did anyone with knowledge of the field not already know that the armed forces are currently more adept at using digital tools than lobbyists?
[Incidentally, the Armed Forces are part of the Executive Branch of government, and so their being put in a separate category by the study's authors is somewhat arbitrary and therefore potentially misleading. Yes, they operate differently from many other government entities, but any number of other distinctions like this could have been made (e.g., intelligence agencies, government agencies with an overseas role, etc.).]
Do These Digital IQ Rankings Mean Anything Useful?
The authors of the L2 study outline where their rankings came from on p. 4 - a mix of analyses of everything from primary websites to social media presence to digital marketing to mobile compatibility. This seems logical and broad enough, and the purpose of this article is not to take a fine-toothed comb to these measurements and question whether social media should be 20% of the total and mobile 10%, or vice versa. Rather, I will assume that the measurements are fine, and question whether the overall data analysis reports anything novel or useful.
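For readers unfamiliar with how composite indexes of this kind are typically built, here is a toy sketch in Python. It is not L2's actual methodology; the component names and weights below are assumptions chosen only to show how weighted component scores can be rescaled onto an IQ-like scale with a mean of 100, which is consistent with the near-100 group averages discussed above.

```python
# A toy illustration (NOT L2's actual methodology): hypothetical component
# scores are combined with assumed weights, then rescaled so the index has a
# mean of 100 and a standard deviation of 15, mimicking the IQ framing.
from statistics import mean, pstdev

# Assumed weights for illustration only; the report's real weighting may differ.
WEIGHTS = {"site": 0.4, "social": 0.2, "marketing": 0.3, "mobile": 0.1}

def digital_iq(raw_scores: list[dict[str, float]]) -> list[float]:
    """Weighted sum of component scores, rescaled to an IQ-like scale."""
    totals = [sum(WEIGHTS[k] * org[k] for k in WEIGHTS) for org in raw_scores]
    mu, sigma = mean(totals), pstdev(totals)
    return [100 + 15 * (t - mu) / sigma for t in totals]

orgs = [
    {"site": 90, "social": 95, "marketing": 80, "mobile": 60},  # a NASA-like leader
    {"site": 70, "social": 40, "marketing": 55, "mobile": 30},  # a mid-pack agency
    {"site": 50, "social": 20, "marketing": 35, "mobile": 10},  # a "Feeble" laggard
]
print([round(iq) for iq in digital_iq(orgs)])  # e.g., [119, 98, 83]
```

Note one consequence of this kind of rescaling: the average is pinned at 100 by construction, so a score only says how an organization compares with the rest of the index, not whether the sector as a whole is doing well or poorly.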
Many of the author-described "key findings" from the report are fairly basic: social media platforms are being adopted more now than they had been previously, different digital tools (web chat, video, podcasts, blogs...) are used to different degrees, and social media drives traffic to organization websites.
Other findings seem impressive on the surface but in my opinion are of questionable value. For example, on p. 21 there is a discussion and chart of Facebook "likes" across a number of organizations, with the U.S. Marine Corps having clearly the highest among them. (How did they do it? By asking their community to spread the word. Groundbreaking.)
What does it mean that the USMC has about six times more Facebook likes than the Army or NASA? Other than the fact that six times as many people pushed a certain button, I have no idea. And if the authors of the study do, they have kept it from their readers. Pages 22-23 offer similarly nebulous conclusions, for example, that the White House is "Best in Tweet" because it has 1.8 million Twitter followers (impressively, the most powerful institution in the world has almost caught up to singer Ricky Martin, whose heyday was during the Clinton Administration).
There are similar rankings for YouTube, too. What does it mean that a video like Saturday Night Live's "I'm on a Boat" has over five times as many YouTube views as the total views of all United Nations videos combined? Does it mean that people care about Andy Samberg five times more than Ban Ki-moon? Maybe, maybe not - but the L2 study offers no insight either way. As valid as the underlying data may be, it is hard to understand how "being the best" by these standards (i.e., ranking high on the list with a high Digital IQ) is worth much, except in the broadest sense. And in the broadest sense, the quantification is unnecessary - with a simple Web search and 60 seconds of investigation, one can plainly see that NASA is doing a qualitatively "better" job utilizing digital tools for marketing and outreach than, for example, the Commerce Department.
I have been criticizing such overly simplistic quantitative analyses of social media metrics for the public sector for some time now; see, for example, this post about government agencies on Facebook for the O'Reilly Radar blog from Sept. 2009, Fallacious Celebrations of Facebook Fans. As I wrote then,
The meaningful question is not about who has more fans, but about who can authentically and transparently - and usefully - interact with citizens to provide social and intellectual value and become the pulse of their conversations.
This statement applies to many of the things that were measured in this Digital IQ study. Just because something can be quantified does not mean that it is useful to do so. Indeed, many unimportant things in life are easy to quantify, and many very important things are difficult to quantify.
Finding Meaningful Digital Behaviors To Measure
It is now fair to ask: So what is meaningful to measure?
I would suggest more difficult things like the following: the degree to which an organization has used digital tools to be more transparent than it was two years ago, the relative amount of useful information delivered to and utilized by citizens, the number of mainstream media stories that can be traced back to a digital "seed" planted by an organization on a blog, and the level of genuine citizen involvement (whether this be commenting, collaboration, co-creation, crowdsourcing, or even do-it-yourself (DIY) initiatives) with the processes of government. This point of view is largely absent from the L2 study.
In the end, there is only one important question to ask, whose answer (or its components) should be measured: Have digital tools and skills been deployed in a way that helped an organizational mission be completed faster, cheaper, or better? In my opinion, social media statistics divorced from organizational missions are nearly meaningless. This applies to both the public sector and the private sector.
Some of the examples mentioned in the L2 report, such as the Army National Guard's creative "Show Us Your Guns" campaign for military recruitment (p. 35), are probably meaningful and aligned with organizational missions. This critique of the L2 study does not take away from the value of such successful public sector digital campaigns; to the contrary, I argue that the metrics used to measure them often oversimplify what was actually done, reducing interacting systems of human emotions and relationships to a spreadsheet full of tweets, likes, and views that can be sorted, ranked, and published. Such reduction comes at the expense of a holistic understanding of digital behavior at multiple levels within a complex social graph.
Where Do Public Sector Digital Metrics Go in the Future?
Unquestionably, measuring and analyzing how digital tools are being used in the public sector is important. Perhaps the L2 report is a first step toward meaningfully doing that. People working on everything from Government 2.0 to the 2012 Presidential campaigns to raising awareness of causes around the world have an interest in the strategies, tactics, and metrics involved.
L2 is "a membership organization that brings together thought leadership from academia and industry to drive digital marketing innovation." Thus, feedback on this article, and the study in general, is valuable. So, if you are an employee in the public sector, perhaps at one of these organizations in the Digital IQ Index, what do you think of the study? What about if you're a private sector marketing or similar expert? Email L2 directly, as they welcome in their report (info@L2ThinkTank.com).
Images of binary code, IQ, mayonnaise, T-Rex, and measuring tape used under Creative Commons. AKO logo image from the Army.