External forces and their impacts on academic and professional publishing
This chapter discusses the long-term implications facing scholarly publishers and other stakeholders in academic information services now that the Internet has made scholarly communications available to anyone with a web connection. The chapter covers emerging user expectations, issues of trust and authority, implications of social sharing, pressures on business models, the expansion of information publishing outputs, and sources of stability within the tumult of change.
Of all the changes the Internet has introduced for scholarly publishers, authors and information specialists, perhaps the most fundamental is that scholarly communications are now on full display to the outside world.
Scholarly publishing was relatively cloistered for much of its history. Barriers included remoteness, gatekeepers such as admissions officers and librarians, specialised language and jargon, and difficulty discovering academic resources. Now, all of this has been changed by a networked information infrastructure that is vast, integrated, open, highly engineered and always on. It’s very unlike the world in which scholarly communications and academic cultures evolved, and the pressures this basic change has unleashed are being felt in publishing now, but there is sufficient momentum to believe this is only the beginning of a whole suite of changes.
Physicians now routinely face patients who have pre-diagnosed themselves via Google (Bird et al., 2010). Experts outside the academic sphere frequently comment on, and even force corrections to, scholarly articles. Attempts at humor in the literature can no longer be assumed to be ‘inside jokes’, and are quickly, sometimes painfully, exposed to ridicule (Anon., 2011). Questions about who the audience now is have the potential to reset our basic quality thresholds.
The simplistic approach of ‘dumbing down’ scientific information for the public is fraught with risks for scholarly publishers, who often don’t know quite how to do it, can’t do it systematically enough or simply can’t invest the millions of dollars it takes to do it right. And it’s not clear whether the public wants more of it than current providers can offer – providers (such as WebMD) that are well funded, established and capable of taking on interlopers.
While the public is drifting into the scholarly realm, scientists and researchers are using publicly available tools more and more. Specialist tools such as PubMed are being subsumed by general tools (e.g. Google and Google Scholar) that have greater scope, better engineering and often more useful information presentations. Pieces of privately held information are moving online, with laboratory data captured in genomic databases, for instance. And hallway conversations, once only possible synchronously and face-to-face, can now occur asynchronously in online forums or at a distance through services such as WebEx.
Accordingly, scholarly publishers have had to expand their repertoire and tool sets. Now, experts in search engine optimisation (SEO), user experience and user interface design, analytics, and social marketing are part of most major publishing organisations, and appear here and there even at the smallest publishers. Editorial pace has quickened everywhere as the news cycle has become more important to authors, funders and consequently publishers. What used to be competition for authors and funds – which could be achieved largely by specialised market positioning – is increasingly predicated on competition for time and attention. Traffic, usage and prominence all depend on these.
In short, scholarly publishers are now integrated with the broader communication sphere. No longer is the deliberate, self-defined pace of academia the primary factor driving knowledge generation and cultural transmission within science. No longer is the audience for scholarly research a closed system. And no longer is competition for supremacy based only on securing the best content. But while change has come and more is coming, predicting where we are headed is difficult. Change can arrive suddenly or slowly, and sometimes in unpredictable ways.
Thomas Friedman famously titled a book The World Is Flat. The title is provocative, but the world is much more complicated than that, and the information space has peculiarities all its own. Perhaps a more useful starting point is Clay Shirky’s statement that ‘abundance changes everything’ (Shirky, 2010). This simple statement reflects a profound change in the assumptions of our information world. When print resources were scarce, pricing was simpler, functions such as preservation and duplication were clearly necessary and comparatively straightforward, and the information world that had evolved within scarcity was stable and familiar. Incremental changes – moving from lead type to phototypesetting, shifting from sheet-fed to web offset printing – occurred within a world of information scarcity, where setting a price and counting copies aligned.
The world of scarcity for scholarly publishers also meant that libraries, academic departments, personal subscriptions and specialty bookstores generally defined the information distribution system. It would be rare for a patient, a non-professional or a non-expert to read specialist literature, and even harder for them to secure copies for personal use. The scholarly world was remote and had a semi-permeable barrier around it, one that permitted information flows at a slow, sporadic rate.
With everything shoveled onto the Internet, a surprising change occurred quickly and irrevocably – the ‘university without walls’ (Kassirer, 1999) became a reality, with some uncomfortable consequences. Suddenly, scholarly publishers found themselves subject to additional scrutiny from the media and the public; user interfaces and online feature sets were naturally compared with those of major online players; and new dimensions of information management emerged almost overnight.
Scholarly publishers found themselves in a ‘flat’ information world. But while the world was flat from an information standpoint, the terrain for making it work for various customer constituents while generating a sustainable commercial model remained uncertain and certainly not flat. In fact, it has proven to be full of daunting new peaks to scale and ravines to bridge. These are the complexities beyond Friedman’s meme.
The vaccine and autism controversy provides one of the most poignant stories from the first decade of online publishing. Due largely to the newfound exposure to scientific information the public was experiencing, and its ability to use the Internet to publish its own thinking, a breakdown in a scholarly publishing competency – the publication of a flawed study – was exacerbated by inadequate competitive know-how in the information world.
The paper in question (Wakefield et al., 1998) suggested a link between thimerosal-containing vaccines and a supposed increase in autism. The paper was ultimately retracted years after publication, and its lead author barred from practising medicine, but why this flawed study caused such problems is instructive. In a prior era, it would have been unlikely to ignite a wildfire.
Scott Karp, formerly of the Atlantic and now running Publish2, followed this story closely. Soon after the vaccine–autism connection was suggested and began to take hold in the public imagination, the American Academy of Pediatrics, among others, published studies showing that the connection was non-existent, and ultimately issued practice guidelines and policy statements underlining the lack of evidence, all in the hope of swaying public opinion with scientific findings. Unfortunately, while the Academy was doing excellent scientific and policy work, it was doing a poor job of SEO. Meanwhile, celebrity bloggers like Jenny McCarthy, whose child is autistic, became major sources of information for anyone searching on these issues on Google and other search engines. Resources from the American Academy of Pediatrics, not geared for search engine indexing, could only be found deep in search engine listings, making them virtually invisible to the majority of searchers.
The controversy lasted far too long, and was extended to no small extent by the inability of scientific organisations to make important content prominent in the modern information world. In the age of information abundance, rising to the top is vital. The information world may be flat, but it’s not two-dimensional. Achieving that third dimension – prominence – requires knowledge of the tools of online discovery, the ability to utilise a powerful brand online, and the skill to render trust proxies in the digital realm. The public is watching now.
The outside world served as a model for publishers moving online. Consequently, advertising was one of the first big hopes for the new world of online publishing. Projecting the broader information landscape inward, it was imagined in some circles that scholarly publishers might finally be able to capitalise on their affluent, highly educated audience in ways print had never allowed. However, there were two major barriers – first, the commercial space wasn’t prepared to spend large amounts of money online; and second, that same audience was spending 90 per cent or more of its time online elsewhere, because of the low engagement offered by narrow, archive-like specialist publishing sites. Clearly, more work needed to occur before scholarly publishers could compete effectively online for audience engagement. Only then would advertising, subscription dollars and other commercial opportunities arise.
Today, some scholarly publishers are doing well with online advertising, but these tend to be those with major audience segments, larger and more experienced sales forces and sophisticated offerings. In short, those with advertising programs resembling mass media advertising programs are doing better. Yet, online advertising remains relatively undervalued compared with radio, print, television or outdoor (Meeker, 2010), suggesting that those succeeding now in these areas might do even better when ad rates rise to reflect audience engagement. The Matthew Effect – in which the rich get richer and the poor get poorer – is clearly at play.
Fortunately for scholarly publishers, one commercial opportunity arose quickly and right in front of them – namely, the institutional site license. For many publishers, these content licenses became the bread-and-butter of their online revenues. However, there were opportunity costs below the surface – a lack of a direct customer relationship with the end-user, and susceptibility to fluctuations in university funding.
In 2008, the macro-economic downturn around the world set off alarm bells in many publishing houses. Academic libraries were being asked to reduce their expenditures as part of general belt-tightening, while at the same time publishers were seeing usage through site licenses increase significantly. That is, more value was flowing through site licenses at the very time libraries had to consider paying less. This mini-crisis has mainly played out quietly, though in some cases (Anderson, 2010) it has played out in full view.
Publishers continue to serve the institutional library market, but they are also slowly diversifying their revenue streams through new products aimed at other parts of academic budgets (e.g. open access initiatives aimed at research funding bodies and academic publication funds, data products aimed at department and personal budgets, and integrated products aimed at academic IT budgets). As this diversification proceeds, the information available through traditional site licenses will probably become less and less interesting to the practicing researcher or practitioner. The modern products that require investments to build and market will follow the money elsewhere.
Pricing practices have also been shaped by reference to outside sources, most notably successful online file retailers such as Apple and its iTunes store. The $0.99 pricing Apple originated for individual songs was cited by Geoff Bilder in his iPub idea (Bilder, 2008), and extended commercially by DeepDyve (DeepDyve, 2009), which added a Netflix-inspired model of renting articles in order to approach both publishers and the market with something novel. It seems unrealistic to expect success by modeling pricing approaches on a popular commodity like music – after all, scholarly content is niche, keepsake and generates a low volume of sales. However, the fact that leaders in the field are advocating and experimenting with models like this reveals the extent to which publishers and entrepreneurs in the scholarly space are looking outside for inspiration.
One way that publishers are coping with the Internet is through better integration into larger architectures. For major publishers, this means things such as ScienceDirect and the ‘Big Deal’. For smaller publishers, this means syndication, aggregation, SEO, partnerships, presence on common technology platforms and the like. In the era of abundance, being in more than one place is a good strategic option. Also, selling a package that allows buyers to get more for their money and approach the nirvana of one-stop shopping is often advantageous.
Yet these larger architectures teeter on the edge of being ‘black boxes’, with opaque rules for prominence, payment or both. Google’s algorithm is routinely adjusted, sometimes harshly, making SEO as much an act of monitoring and adjusting as strategising and tagging. Aggregators are large, impersonal and employ formulae for royalties that are often hard to pin down, but are often too good to give up in the short run (which inertia often extends into the long run). The lack of individual user data within institutional licenses may prove to be a problem in the long term, for all involved.
The information world may be flat, but there are still intersections such as Google, Ovid, EBSCO, ScienceDirect and conglomerate sales. Maintaining publishing prominence now means managing your presence through multiple layers and systems.
Because the customers we find on the Internet have expectations derived from the larger information realm, their demands and preferences deviate somewhat from those of the traditional journal or book consumer. Nevertheless, the PDF has retained its value as the content container of choice – for now. Pressures from multimedia publishing, data publishing and visualisation publishing are impinging on the article economy, where the PDF dominates. Better portable solutions are also eroding its dominance. Browser plug-ins like Readability allow users to send web pages directly to e-reading devices like the Kindle, removing the need to print and providing a satisfactory, portable and disposable reading experience.
Search engines effectively eliminated print indices and other traditional content access products, but it is only over the past 5–7 years that Google has emerged as the dominant search tool for many academic disciplines, supplanting even PubMed and its ilk.
From a browsing perspective, the flat information world has made faster editorial production around selected content a priority for many publishers, so that news sites, social sharing and email alerting can occur and drive traffic and awareness, while also satisfying authors. Interpretive editorial features are also more common now, as the Internet rewards some degree of repetitive content linking and layering, while the wider audience benefits from synthesised content.
For editors, faster publication has meant everything from publishing raw manuscripts online as soon as they are accepted (Journal of Biological Chemistry and others) to creating fast-track peer-review systems. This usually requires faster peer review either generally or selectively; less or faster copy editing and formatting in order to publish the information first; and tighter coordination with authors’ institutions for public and media relations, especially around big meetings.
Given the increased importance of branding in a flat information world, many publishers are engaged in brand management and rebranding initiatives. Although such efforts are not entirely new, their pace has quickened and the stakes around them are higher. Products and brands have been stretched to accommodate new Internet-inspired initiatives. Blogs, brand embellishment for fast-track articles, new article types, new journal siblings and some non-journal initiatives have been launched over the past decade. As these brand extensions and enhancements have proliferated, concerns have been raised about brand confusion and brand over-extension. In some cases, it is no longer clear what the brand promise is, as the brand supports initiatives of varying quality, based on various authorship and review models, and geared to non-traditional user groups.
While products and brands have been stuffed full of interesting and useful new initiatives, deriving new revenues from these has proven more difficult. Concentrated purchasing power, a fragmented and often modest-sized core audience, users who are accustomed to having information appear to be free and sizeable competing outlets (aggregators, proxy access and the like) all make it more difficult to directly commercialise new offerings. Many publishers are seeking new purchasing outlets, reshaping or integrating products to appeal to different user groups (and, hence, different budgets) and investing in new product launches. Many publishers now have a significant percentage of revenues coming from recent product development initiatives.
As Simon Waldman discusses in his book Creative Disruption, experimentation and innovation can be driven by economic necessity, an often overlooked factor in market evolutions. The economic downturn of 2008–09 led many publishers to begin diversification efforts, which are now beginning to bear fruit. In short, the pace of change has accelerated because of macro-economic pressures.
The mobile networked information device lurked on the outskirts of user adoption for years, slowly gaining adherents, then rapidly achieved market power with devices like Amazon’s Kindle, Apple’s iPhone and iPad, and various Google Android devices. It’s now fair to say that non-mobile information consumption is on its way to becoming the exception rather than the rule. As Kevin Kelly has asserted in What Technology Wants (his final paper book, he says), we are in a screen-centric culture, and these screens are replacing paper as the substrate of choice. Becoming ‘people of the screen’ means that distinctions between mobile and deskbound are less useful than distinctions between screen-based and paper-based or connected and disconnected.
Journals were among the first print-based scholarly publications to move online, closely shadowed by catalogs and indices. This was partly because the articles within journals were easily separated from the issue batch, and were short enough for users to print on their own. Books, on the other hand, took much longer to feel the sway of new technologies, primarily because print-on-demand technologies were slower to become mainstream and because other options for individual consumption of very long texts had not yet emerged. However, as soon as viable e-books arrived with the likes of the Kindle and the Nook, the book market changed rapidly and fundamentally, with more independent authors, new price points, new power centers and bankrupted bookstore chains.
This was not due to a shift from immobile to mobile – books were as mobile as e-books. It was a shift from paper to screen, the implications of which are not only user preference, but also price, market dynamics, manufacturing processes and sales channels. E-books often sell for less, but also can be shorter than traditional books, or even constituted out of pieces taken from separate titles as some publishers, such as O’Reilly and the National Academies, have shown.
As users become more screen-centric, more change will come. A recent pilot of the Amazon Kindle DX at various universities showed that while the DX had several flaws, users wanted it to be more like their computer, not more like their printed materials. The direction of change is clear.
Sustainability is a clear goal that publishers, authors, librarians, institutions and readers all share. It’s not just a code word for financial stability, but has many facets in the scholarly world. However, while elements such as prestige, reputation, impact factor and career have been affected to some degree by the changes in scholarly publishing, the underlying business model has been directly attacked by some as unfair and unsustainable, with options offered that aren’t clearly superior from a cost–benefit standpoint. Part of this critique has been the notion that the public now has a stake in research output, drawing a straight line from taxpayer to published paper. Such pressures to make papers based on government-funded research freely available could never have been contemplated in the print era; they are clearly a consequence of the availability the Internet has created.
The print model of scarcity was solved by the subscription, a form of sales that matched print nearly perfectly – it allowed publishers a steady flow of income, allowed readers and publishers to share content batches of mixed quality, and created a long-term relationship for stable cashflows on the publisher side and high brand affinity on the reader side. This panacea ended with the Internet, as the print bundle fell apart under the pressure of search engines; as sales moved from the individual to the aggregator (whether that aggregator is an institution or one like Ovid); and as brand loyalty could be fulfilled without direct interaction with the publisher.
The dilemma of customer contact has hit publishers and librarians in a number of ways, from reduced or anemic subscription data to less foot traffic at library facilities. For publishers, some solutions have included creating services only available to identified users, selling products through non-traditional channels such as sponsorships, and launching products geared for personal devices (e.g. smartphones and iPads). Because partners like Apple and Amazon withhold customer data, this latter approach has proven problematic in a new way. For libraries, efforts to bring patrons into facilities through new amenities, training courses and rented space have not turned the tide. Worse, such trends may be nothing more than a slippery slope to library brand devaluation as the library becomes about square footage rather than about expertise and knowledge. The direct relationship with either patrons or readers has been difficult to retain, redefine or re-establish.
A major alternative to the subscription model has been open access publishing. This model, financed up-front by authors rather than by spreading the costs of publishing across multiple papers and an entire audience, has proven successful in two main ways: for selective journals, it has provided a partial revenue solution without creating a barrier to success; for less selective journals, it has provided a full revenue solution without seriously impacting the prestige of the journal genre. Because author–reader overlap is large in many fields, open access can work in many places with little downside.
Many traditional publishers are launching open access journal programs or standalone open access journals, partly as a way to fend off competition and partly as a way to garner revenues from the open access funds many institutions and charitable bodies have established. How well this works long term has yet to be seen. Market dynamics seem destined to hold these service-based prices back to a common mean, making differential pricing unlikely for publishers accustomed to these practices in the subscription market.
The publishing business model online depends on traffic – from usage statistics to advertising impressions, if users aren’t visiting a site, the business model suffers to some extent. Journals, in particular, create value by filtering content and aggregating an audience with a high affinity around that content. So when something attempts to replicate those traits, publishers should pay attention.
The emergence of social media has created new ways to drive traffic via Facebook and Twitter, but it has also created new filtering and affinity mechanisms for users. Nick Bilton, in his book I Live in the Future & Here’s How It Works, discusses two of these new filtering and affinity trends, which he names ‘trust networks’ and ‘anchoring communities’.
Trust networks themselves are not new – anonymous editors, fellow readers and distant authors constitute the trust network a journal typically offers – but Facebook and Twitter offer trust networks consisting of your friends and favorite colleagues. Publishers have realised they need to be present in these trust networks, and so have launched their own Twitter feeds and Facebook pages, all in hopes of being ‘liked’ on Facebook and ‘followed’ on Twitter. Again, new tasks have emerged because of external technological and social developments.
Anchoring communities are a slight variant on the trust network. An anchoring community is where people go to orient themselves in whatever mode they’re currently in. If you’re in news acquisition mode, your anchoring community may be the New York Times, CNN, Twitter or Facebook. If you’re in literature surveillance mode, your anchoring community may be Google, PubMed, Facebook or a journal portal site. Understanding these modes of functioning and then creating the proper starting point for users takes user-centered design in new directions, away from the archival journal site and toward something more dynamic, broad-based and timely. Users now expect a rich launch pad from any online property; providing something less can set you back.
Publishers are playing with integrating social tools into their sites, from sharing buttons to commenting features to article ranking devices. But the question of community management goes deeper than technology solutions, and editorial groups are not inherently community organisers. There has not yet been an effective community effort in academic publishing, but the possibility seems to exist. Efforts such as the American Institute of Physics’ UniPHY offer hope in this area.
Efforts like the Open Researcher & Contributor ID (ORCID) have been launched to deal with the ambiguity of author names, partially in response to the increased pace of publication and contribution emerging from the social Web.
A natural consequence of external forces reshaping scholarly publishing is that user workflows are now a central question. When information devices are on hand everywhere, how people use information becomes ever more important. This raises the question, ‘Which people?’ And this is where I believe we are heading into some fraught areas.
One of the most common distinctions that has haunted the emergence of digital publishing has been the myth of young versus old, also known as the digital native versus digital immigrant distinction. This myth has largely been disproven, especially in light of many indications that culture trumps technology. That is, middle-aged and older academics and scientists are more prone to adopt new tools for expressing their results and opinions than younger colleagues simply because they are safer in doing so. Younger scientists and academics are gun-shy, carefully avoiding missteps online that could derail their careers. Designing for the young researcher or young academic is very likely designing for the most hesitant user of new technologies.
But there is a reason to look at the young, because the older academics and scientists are actively modeling some of their young colleagues’ most obvious behaviors, such as device adoption, social media use and communication preferences. There is a virtuous cycle here, with cautious younger members of the community inadvertently showing some of what they prefer but not using it professionally, and older colleagues adopting what is apparent and more confidently turning it to professional ends. But even for older academics, the culture of ‘publish or perish’ and the follow-on demands for legitimate, citable publications in prestigious branded journals has led to relatively unimpressive efforts in new media.
The digital revolution has promised increased transparency into customer behavior, yet after more than a decade, many analytics initiatives seem to generate more heat than light. Part of the challenge is the openness of online, which makes usage and the definition of the user less predictable. Also, transparency has proved somewhat challenging for both institutional and advertising sales, laying bare some of the mismatches in expectations between constituents on either side of the buyer–seller paradigm. There is no doubt that our sites and initiatives throw off more data than print ever did, but deciding what matters, what it means and how one set of data compares with another has in some ways returned us to clearing houses of measurement and third-party data sources. The notion of the self-sufficient data warehouse has proven to have dramatic limitations.
Many things have remained stable throughout these turbulent times. First, the ‘publish or perish’ culture has not only been stable, it has become more demanding as more academics and scientists compete for grants, publication and reputation. Second, despite a torrent of legitimate criticism, the impact factor has become even more ingrained and powerful in many scientific cultures, especially in Asia and Europe, with financial rewards or tenure tied directly to impact numbers. Third, brands have retained their power, if not increased their sway. Readers need clear ways to cut through search results and save time, and brands provide reliable guidance in many cases. Authors need efficient paths to publication, and brands are a guide for submissions. Finally, the behavior of academics and scientists at various career stages seems as predictable as ever. Scientific culture has not changed as dramatically as information distribution technologies, and in some ways may have become more rigid because of what these changes have enabled.
In The Clock of the Long Now, Stewart Brand articulates the layers of change as concentric circles. The outermost, which he labels ‘Fashion’, changes erratically and often. Subsequent layers, such as ‘Infrastructure’ and ‘Culture’, change much more slowly. We have experienced many changes in the fashion of information distribution and acquisition, from print to simple print-like websites to fully realised digital authoring and distribution platforms. Throughout this, however, the one piece of information fashion that has not changed is the authoring toolset, which mostly relies on Microsoft Word. Because of this, and because the infrastructure has also begun to change to the point where video and still cameras are everywhere, geolocation is commonplace, and other sensors can be strapped onto cheap mobile computers, I would watch for changes in the fashion of authorship. We are already seeing this with video abstracts in some journals, and papers that rely more on visualisation than on text presentation.
Culture will change more slowly, but it will change. In the past few years, I have seen more and more researchers and practitioners willing to admit they use tools like Google and Wikipedia over presumably superior sources like PubMed and textbooks. Speed and comprehensiveness are competing effectively with other dimensions of value and driving some new choices. Prohibitions on citing outside the accepted literature are being relaxed here and there. The first cultural tremors can be seen in Thomson Reuters’ adoption of the Eigenfactor and in new developments at PubMed, to name just two. I think some of the major providers of infrastructure will be redeveloping their offerings over the next decade, in response to larger cultural tremors. And financial tremors in academia will continue, especially as the ‘higher education bubble’ is examined. Whether it withstands scrutiny, pops or slowly deflates, caution will creep into planning and budgeting, further depleting the inventory of available options. Hard choices in a world of ‘satisficing’ (Simon, 1956), free tools and options may drive fundamental behavioral and economic change. What is now ‘dabbling’ in the electronic world with these new computer displays, apps and tools may become hard-core and irreversible reliance on them.
When scholarship moved onto the Internet, it opened itself to new external forces. Now, in fact, it may be claimed the tide has turned, and that it is the Internet that is beginning to invade academic research and study. The action is coming from outside, the external world is dictating terms and we are responding as the Internet moves into our realms. Efforts to take the lead often ring hollow, as new commercial players and laser-focused upstarts often seem to have the upper hand over slower-moving and established information providers.
The movement from scarcity to abundance has paved many new roads for users – patients, consumers, practitioners and scientists from other disciplines – while adding new devices, new capabilities and new expectations. As scientists and academics find success using these new tools and balancing these new information collaborators and competitors, the pressures publishers and librarians have been experiencing will ultimately intertwine with the cultures of science and academia. And that’s when we may be able to say we’ve truly witnessed a revolution.
Anderson, K. (2010) Available at: http://scholarlykitchen.sspnet.org/2010/06/14/the-latest-library-as-purchaser-crisis-are-we-fighting-the-wrong-battle/ [Accessed 2011].
Anon. (2011) Retraction Watch. Available at: http://retractionwatch.wordpress.com/2011/04/06/forget-chocolate-on-valentines-day-try-semen-says-surgery-news-editor-retraction-resignation-follow/ [Accessed 2011].
Bilder, G. (2008) Available at: http://sspnet.org/documents/300_geoffrey%20bilder_ssp_2008_iPub.pdf [Accessed 25 June 2011].
DeepDyve (2009) Available at: http://blog.deepdyve.com/2009/11/03/a-new-market-opportunity/ [Accessed 25 June 2011].
Meeker, M. (2010) Available at: http://www.slideshare.net/marketingfacts/mary-meekers-internet-trends-2010 [Accessed 2011].