Publishing and communication strategies
This chapter details publishing and communication strategies in the context of the digital technology revolution. The major focus is on scientific and academic journals publishing, but the chapter also discusses the rarely documented tradecraft involved in the historical strategic development of particular product types against a backdrop of wider market evolution. Illustrative case-study exhibits detail the strategic development of selected products and markets, and explain the powerful Network Effects that underlie the success of journals that work to match their communities more closely.
Heraclitus of Ephesus can truly be considered the Publisher’s philosopher. He elaborated the doctrine of change being central to the universe, and he established the term ‘Logos’1 in Western philosophy, meaning both the source and the fundamental order of the universe.
Change does indeed seem to have been endemic in the publishing and communication worlds since the beginning of the technological and industrial wave based on the invention of the microprocessor in 1971 (Figure 5.1). For the last 20 years or more, every time publishers have gathered together, the talk has been of nothing but change – changes in technology, economic and political context, library finances and sociocultural change. Although there is a real tradition and continuity in our industry, we can take another image from Heraclitus, that even though a person steps in the same river, those who step into it are always washed by different waters: ‘each individual atom of water does not constantly change; the totality of things constantly changes’ (Shiner, 1974).
This period since 1971 has also witnessed the current wave based on information and communication technologies (ICT), initially leading to process revolutions in our industry, and increasingly now, as we truly enter the deployment phase in the 2010s (Figure 5.2), to new products emerging. This digital revolution has driven a paradigm shift in our industry from print-based manufacturing to online service provision, causing major disruptions for both markets and products. As with any technological revolution, there are both threats and opportunities which arise from these ‘waves of creative destruction’ (Schumpeter, 1975).
Figure 5.2 5th Great Surge based on ICT Source: Based on the work of Carlota Perez, in particular Technological Revolutions and Financial Capital, figure 7.1, p. 74. Edward Elgar Publishing, 2002. Reproduced with permission. The authors are indebted to Professor Perez for elaborating this version especially for this chapter.
Further, the economic analysis conducted by Carlota Perez indicates the role played by financial capital in the global publishing industry from the 1980s up to the early years of the 21st century. Her analysis also clearly indicates the turning point that the industry has now reached, a theme that we develop later in this chapter.
These concerns that disruption was beginning to occur in our industry were raised most graphically by the physicist Michael Nielsen at a workshop for editors at the American Institute of Physics in 2009, in a talk to be found on his blog – ‘Is scientific publishing about to be disrupted?’.2 Nielsen’s views and his recent championing of Open Science3 have raised many issues. The key points of his hypothesis are:
Scientific publishers should be terrified that some of the world’s best scientists, people at or near their research peak, people whose time is at a premium, are spending hundreds of hours each year creating original research content for their blogs.
Contrary to the Nielsen hypothesis, if one area of publishing has responded to the challenges of digital technology, and embraced many opportunities offered, it has been journals publishing. The essential lacuna in Nielsen’s hypothesis is his confusion of science and research with their mediation into peer-reviewed content, and the incredibly complex set of services, activities and products that publishers bring to the material output of the academic and research worlds.
This chapter will focus on both products and markets, referring along the way to the key media, but always within the context of the digital technology revolution. For this reason – and also because it represents the specialism of the authors – we will focus on scientific and academic journals publishing. The chapter features exhibits predominantly taken from Taylor & Francis’s publishing programme as illustrative case studies on the strategic development of products and markets.
Since the year 2000, it is estimated that the journals industry has invested over £2 billion in technological systems and has innovated in areas such as electronic editorial systems, author tools, production workflow, plagiarism checking, content management, online content platforms, global sales management and many more elements outlined in Figure 5.3. We hope to show that in the Web 3.0 world and beyond, which values expert material and the transmission of peer-reviewed knowledge, journals and the publishing industry are rising to the challenge – with other media still highly valued despite the changed technological and economic contexts.
From their origins in recording the ‘minutes of science’, journals in the 20th century were often viewed as a slightly ‘eccentric offshoot’ of monograph publishing that publishers had to be involved in to cover their fields fully. Both Elsevier and Robert Maxwell began the industrialisation of journal publishing in the late 1940s and 1950s; not the least of the innovations were the pre-paid subscription, the aggressive launching of international titles (such as Elsevier’s Biochimica et Biophysica Acta in 1946) and titles in applied science, and computer-based subscription management and fulfilment (Cox, 2002).
Yet journal publishing at that time and up until the mid to late 1980s was very much based on a ‘black box’ business model, with the content put together by author, editor and peer reviewer, and consumed by researcher/reader (and the same academic or scientist may play each of these roles at different times), with the ‘industry’ operating on the outside of the academic world dealing with production, printing, marketing and sales, subscription management and distribution – a relationship between publisher, library agent and librarian. But it was always the case, as with book commissioning, that good relationship management was a key skill for the publisher – getting the best editors, linking with eminent learned societies, and selecting a good, balanced editorial board who would not only add lustre to the title but help promote and sell subscriptions to their and their colleagues’ institutions.
Technology began to drive e-publishing in the 1960s and 1970s, initially through the delivery of secondary publishing – abstracting and indexing (A&I) products – via the Dialog system, a pre-Internet provider of online bibliographic information. Initially, databases such as ERIC (education), INSPEC (electrical engineering) and MathSciNet (mathematical sciences) were delivered on magnetic tape, but by the 1980s a low-priced dial-up version of a subset of Dialog was marketed to individual users as Knowledge Index. This included INSPEC, MathSciNet and more than 200 other bibliographic and reference databases. Even at this early stage, subscribers to the A&I databases were linked to document retrieval services such as CARL/UnCover, which would go to physical libraries to copy materials for a fee and send them to the service subscriber.
The origins of A&I products can be traced back to An Index to Periodical Literature, first published in 1848 by William Frederick Poole, who had recognised that periodical articles were often being overlooked for lack of an effective indexing service. Originally, abstracting services developed to aid research scientists in keeping abreast of the exponential growth in scientific journal publications, focusing on comparatively narrow subject areas.
From their origin as a service to the scientific community, abstracting services have grown in number to an estimated 1500 today, many of them founded in print but now found online. The conversion from print to online format allowed libraries to build in links to their own online public access catalogues (OPACs) as well as to electronic and print document delivery services. More recently, Jasco (2007) has argued that traditional abstracting services may eventually be outmoded because more efficient searches can be made using powerful search engines. However, step-change improvements in search and retrieval software have not necessarily made the task of identifying relevant material any easier for the researcher. There is still enormous value in A&I services structured around a bespoke taxonomy developed by subject specialists, which can provide much more granular and targeted search results than Google or even specialist search software (see Exhibit 1).
Such challenges to commercial producers of A&I services have been met by producers of bibliometric databases (e.g. SCOPUSTM) where the value of cross-referencing, citation and other metadata have been recognised as quality indicators and thus drivers of publication metrics, which allow users to identify quantifiably authoritative pieces of research from within an access-controlled online environment. The original purpose of A&I services to help researchers overcome the difficulties of tracing potentially useful articles scattered over not only the extant print periodical and other literature, but now predominantly the Internet, remains valid. More than ever, discoverability and search and navigation are central to research providers and users. The challenge for A&I service producers today is to retain integrity, identity, validity and authority in an age of seemingly ubiquitous sources of one-off and serial research and scholarly literature. Additionally, from these searches, rapid delivery of the primary material needs to be designed in, as well as linking to datasets, chemical reference works, handbooks, patent databases, etc., meeting the challenges of linking on the semantic web.
Within the physical sciences, A&I databases have long been a key tool for the research community, and in the online age they have rapidly developed to incorporate many features making access to and usage of the data faster, more intuitive and enriched. An indexing service gives a full bibliographical reference, while an abstracting service in addition provides a brief summary of the article’s or report’s content. All forms of A&I can be searched by subject topic, author name(s) or subject keywords such as a chemical, plant or process name. A&I providers now need to develop and make searchable related datasets, reference links and related patents, and, as provision develops further, structural tagging and bespoke identifiers such as the International Chemical Identifier – InChI (see Exhibit 2).
Abstracting and indexing are becoming ever more interactive for the researcher or academic, with newer and faster ways of getting to the relevant information in the literature (see Exhibit 3). This now includes techniques such as content and data mining, defined as the automated processing of large amounts of digital content for the purposes of information retrieval (be it structural data, tabulated results or data analysis), information extraction and meta-analysis. As the available services continue to develop, semantic annotation – giving a meaningful context to information content – could become a new standard for STM content, facilitating improved and deeper search and browse facilities into articles and research. Enhanced in-depth searching has driven the steady development of automated extraction tools that present the important elements contained within documents and articles: not only the expected scientific elements (genes, proteins, chemical compounds, diseases) but now also business-relevant elements such as company names, people/academics, products and places. The continuing development of these techniques helps to identify the relationships between the elements of interest to the reader across significant numbers of documents.
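The entity-extraction approach described above can be illustrated with a deliberately minimal sketch. The term lists, entity types and sample abstract below are invented for illustration; production text-mining systems rely on curated ontologies and statistical natural language processing rather than simple dictionary lookups.

```python
import re
from collections import defaultdict

# Invented, illustrative lexicon mapping entity types to known terms.
ENTITY_LEXICON = {
    "chemical": ["aspirin", "benzene", "ethanol"],
    "disease": ["influenza", "diabetes"],
    "organisation": ["Elsevier", "Taylor & Francis"],
}

def extract_entities(text):
    """Return {entity_type: [matched terms]} found in the text."""
    found = defaultdict(list)
    for entity_type, terms in ENTITY_LEXICON.items():
        for term in terms:
            # Whole-word, case-insensitive match against the text.
            if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
                found[entity_type].append(term)
    return dict(found)

abstract = "We compare aspirin uptake in diabetes patients."
print(extract_entities(abstract))  # → {'chemical': ['aspirin'], 'disease': ['diabetes']}
```

A real service would go further, linking each matched term to a stable identifier (a gene accession, an InChI string, an authority record) so that relationships can be aggregated across large document sets.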
At the beginning of the 1990s there were very few full-text primary journals distributed via the Dialog system mentioned above, and fewer than a dozen titles had been launched for online delivery as ASCII text via email. In 1991 Paul Ginsparg began the LANL preprint server for physics at Los Alamos (now known as arXiv), which at the time was seen as a major threat to the existence of traditional physics journals (although it has transpired that this system itself has required significant funding, and the preprint server has come to play a different role from the journal publication of versions of scientific record). At the same time, the results of the ADONIS project, digital scans of biomedical journals on CD, led to the launch of a (not very long-lived) commercial product. There then began a period of intense experimentation, often involving libraries and technology providers in addition to both commercial and learned society publishers. In 1992 the American Association for the Advancement of Science launched the Online Journal of Current Clinical Trials; Elsevier started Project Tulip with the University of California system, delivering scanned images of materials science journals to UC libraries; and the Red Sage project (named after a congenial restaurant favoured by publishers in Washington, DC), a collaboration between AT&T Labs, UC San Francisco and a group of learned society and commercial publishers and university presses, sought to develop an experimental digital journal library for the Health Sciences. These projects all provided valuable data, insights and experiences, but at this time the online product essentially represented a ‘dumb’ digital replication of the print version.4
By 1995–96 most major publishers were announcing their intentions to produce online versions of their print editions, and experiments such as the UK SuperJournal project began to look at how e-journals were actually used, and the features that users wanted (see Exhibit 4). What is remarkable is both how much attachment there still was to the print paradigm in the 1996–98 period, and how, in little over a decade, all of the requested core user requirements are now taken for granted by users of online journals.
At the turn of the millennium, most major publishers had full e-versions of their journals available, and some e-journals were already starting to contain features not available in the print version. In addition, publishers’ versions were starting to compete with aggregators’ licensed versions of full-text databases – such as BIS, EBSCO Publishing, Ovid and BIDS. These aggregated databases typically provided unlinked ‘dumb’ online versions which simply replicated the print edition. Publishers responded by seeking embargoes on their content.
Further, journals and the activities surrounding research and the publishing workflow began to change. This is reflected in what has been termed the ‘new research and learning landscape’, shown in Figure 5.4, which integrates the publishing process much more closely into the academic and research process. Publishers can no longer sit as part of a ‘black box’ process outside the academic and research worlds, but must be integrated into them as full partners. There has always been a teaching role for many journal articles, and this has been embraced much more in recent years with the decline of much research monograph publishing, and often because of the long gestation period of book publishing compared with article publication.
Figure 5.4 Journals in the new research and learning landscape Source: L. Dempsey, ‘The (Digital) Library Environment: Ten Years After’, Ariadne, Issue 46, 2006, UKOLN, University of Bath http://www.ariadne.ac.uk/issue46/dempsey/, figure 2, which is adapted from the OCLC Environmental Scan (see http://www.oclc.org/reports/escan/research/newflows. htm), which is in turn adapted from Liz Lyon, ‘eBank UK: Building the links between research data, scholarly communication and learning’, Ariadne, July 2003, Issue 36: http://www.ariadne.ac.uk/issue36/lyon/
In addition, the shift from print to online has led to a significant change in workflow whereby the print version (perhaps a single archival volume or an amalgam of issues printed together) becomes an ‘offcut’ of a fully digital flow process from author to reader (Figure 5.5). The attention of publishers has also moved much further from production of publishing products to provision of publishing services within research networks, as shown in Figure 5.6. This in turn can be seen both as the fundamental shift from journal to network and as a precursor to potential new products, especially driving innovations in ‘multi-layered’ network products of varieties of book, journal and reference content, including multimedia, datasets, etc.
Figure 5.6 From products to services, and from journals to digital research networks. Inspired by and adapted from a presentation by Stephen Rhind-Tutt, Alexander Street Press: http://alexanderstreet.com/articles/index.htm
Alongside these changes in both processes and the publishing landscape, journal publishers have been required to demonstrate the value of the services that they are providing – much of which is of course driven by investment in technological systems as well as through changing the ‘terms of trade’ of the business. So we can list the following:
Although the decline of the journal has been much heralded – with the development of large databases of journal content on the one hand, and the individual article pay-per-view (PPV) market developing on the other – the fact is that it is the journal, its brand and the reputational factors associated with it that still provide the key trust factor for the content. This will continue to be the case in the future online ‘multi-product content’ or ‘multi-layered content’, supplying journal and book content along with associated data and supplementary materials.
By 1997, a conference sponsored by the Association of Research Libraries in the USA was contemplating the decline of the scholarly monograph, with some referring to ‘its comatose body’. Indeed, one reason given for the continuing popularity of the research article, and its growing role in coursepacks and as a teaching tool, is that monographs are seen as declining in relevance and timeliness by the reader, in viability by the commercial publisher, and even in sustainability by the university presses. Every publisher uses the term in a slightly different way, but in general ‘monograph’ has become a catch-all term for a book that is not of a reference type, that presents primary material, and that may be multi-authored, single-authored or an edited collection.5
It is worth drawing a distinction between humanities titles, the viability of which is fundamentally dependent on the credibility of the scholar(s) as author(s); social science monographs, which today depend on the globalisation of markets and on new markets as library budgets shrink in the established markets of the US and Europe; and STM titles, which face the same phenomenon as social science, but with the added twist of a degree of dependence on the market vicissitudes of practitioner, professional and industrial audiences. But in the second decade of the 21st century, the essential commercial equation for a commercial scholarly monograph publisher making the publishing decision is:
In general, however, university presses cannot take this approach and make as narrowly commercial a publishing decision, as they are much more producer-led rather than reader-led, and highly dependent on their host institutions or supporting foundations for sustaining their quality and prestige publishing. For the most part, on the basis of this model, commercial book publishers find it hard to compete with the prestige of university presses when it comes to the commissioning of the highest prestige material.
But there is no doubt that there is a funding crisis for the traditional university press model, as presses try to come to grips with the new digital culture and the challenges of the Open Access lobby in both journal and book publishing.6 The free-online-plus-digital-print-on-demand (POD) model, recently espoused by Bloomsbury Academic among other commercial publishers, is unproven and potentially hastens income decline as more people become comfortable reading online (driven not least by new-generation e-reader platforms and mobile technology). There is a real need to retain strong revenue streams as new layered content with multimedia features becomes available, such as the recent launch of The Waste Land app.7
Successful commercial monograph publishing is in fact in rude health in the 2010s. Although unit sales have shrunk by around 75 per cent since the early 1990s, a multifold increase in publishing output has balanced this decline, and, with changes in digital process and the marketplace, the economics of monographs have perhaps never been healthier. This derives from a number of factors, based partly on the pricing up of hardback library editions, and the adoption of a POD paperback edition if there is sufficient perceived demand in addition to library sales. Other factors include:
There are also significant developments through application of new digital possibilities by offering enhanced collections, particularly in STM subjects, in the form of enhanced databases (see Exhibit 5).
As monographs appeared to decline in the mid-1990s, publishers piled into textbooks, seeking to replicate those that were commercially successful propositions. A new generation of student customers each year, promising repeat sales year on year, is superficially very attractive. The textbook is, however, a high-cost, labour-intensive product. It needs to be kept up to date as a full subject review; it requires top authors, who need to be kept happy and will require a significant financial interest in the work; it needs good in-house development editors able to monitor and interpret intelligence on the courses in the main markets that the book will serve; and the digital opportunities for ‘add-ons’ such as CD-ROMs, companion websites, multimedia supplementary material, etc., all come at a cost.
Thus the market has been starting to resist what are becoming very expensive items, and the textbook business is somewhat idiosyncratic in that the people who are recommending the books through adoption for their courses are not the ones who have to pay. Various phenomena are causing problems for the current textbook market:
The large reference work and encyclopaedia continue to go through hard times under the challenge of digital. Witness the demise of Encyclopaedia Britannica under the challenge of Encarta in the 1990s, and now, as a top reference publisher said to us recently, ‘We are getting murdered by Wikipedia and other free reference content’. A good reference work or encyclopaedia has great potential as an online digital product, especially when linked as a multi-layer with a publisher’s other digital content, but the investment required is not insignificant. And the integration of different products is not necessarily straightforward, especially when that content has been digitised to different standards, in different formats or is sitting on different platforms. Despite this, successful products still remain (see Exhibit 7).
There has, however, been growth in recent years in shorter reference works – handbooks – collections of survey articles/entries of around 5000–7000 words (such as the current volume). These are especially popular in STM subjects, and are typically cheaper than monographs in hardback, as most will have potential for industrial or professional sales. A good, definitive reference handbook can gain real prestige by putting its stamp on a particular field – as traditional print or replica e-book.
Technical proceedings as a product have not changed dramatically in recent years, reflecting their primary purpose as an archival record. They remain popular in applied scientific and technical disciplines, such as computer science, where a greater tradition exists for publishing conference papers alongside books and journal articles. Technical proceedings are indeed moving increasingly online, sold as e-books, although often as a supplement to the print edition rather than a substitute.
The digital age brings a number of new challenges and opportunities for publishers of technical proceedings. Dual pressure comes from conference organisers demanding a lower upfront financial commitment towards publication alongside greater project management support from the publisher to bring the publication together, including electronic submission and review software, author templates and instructions, handling of author queries, and managing copyright transfers. Customers are also demanding more sophisticated online products, which may further put pressure on publishers’ margins through the need to intervene more on manuscripts to improve consistency of style, introduce XML tagging to aid search and discovery, and host authors’ supplementary material.
Delivered effectively, these new requirements can enhance the product, maximise visibility and reinforce good working relationships between the conference and publisher, but the specialisation and investment now involved in securing technical proceedings contracts and selling them in sufficient quantities have narrowed the range of publishers involved in this sphere. Further, Open Access advocates within fields such as computer science have been urging simple author posting of conference proceedings in repositories, as an example of DIY publishing.
So far, we have looked at the many and varied outputs of scholarly communication as, effectively, standalone products. A single book or database appeals to a set of researchers active in a specific field. Commercial success is a function of the inherent quality of the book or database, how effectively it is promoted to potential customers and how able (and, more importantly, willing) they are to pay.
Does this mean that the performance of a scholarly publishing house is simply the sum of the sales of its products? On a crude level, the answer is of course ‘yes’. What it overlooks, though, is a key element in determining underlying and future performance – the competitive environment in which a firm operates.
Take a simple example. Publisher X may have the best book on Genetic Regulation this year and achieve excellent sales. Next year, Publisher Y may put out a superior book in the same field which has far greater appeal for customers. Sales of Publisher X’s book are extremely disappointing in the second year. The book is unchanged, but its performance is now lacklustre. It has suffered because its ‘market penetration’ – the book’s share of total sales of Genetic Regulation texts – has declined due to the competing book. Publisher X can respond with a vigorous marketing campaign. Given that there is now a superior product on the market, the results of this are likely to be modest. The book’s sales have peaked and will soon decline toward zero (or to the online ‘Long Tail’, which is much the same thing).
The typical scholarly monograph has a sales life cycle lasting several years. A very few exceptional titles continue selling for a decade or more (Nitterhouse, 2005). To manage this inevitable life cycle pattern of peak and trough, publishers build portfolios of products in their chosen subject areas, with new content added each year. As the sales of individual products move into decline, new products pick up the slack, ideally creating a stable financial performance for the publisher.
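The portfolio-smoothing logic described above can be sketched numerically. The decay curve and figures below are invented purely for illustration; real monograph sales curves vary widely by field and format.

```python
# Illustrative sketch (invented numbers): overlapping monograph life
# cycles smoothing total portfolio revenue. Each title sells on a
# simple geometrically decaying curve; launching a new title each year
# offsets the decline of older ones.

def title_sales(age, peak=100, decay=0.5):
    """Unit sales of a single title in a given year of its life."""
    return peak * (decay ** age) if age >= 0 else 0

def portfolio_sales(year, launch_years):
    """Total sales across all titles launched in the given years."""
    return sum(title_sales(year - launched) for launched in launch_years)

# One new title launched every year from year 0 to year 9.
launches = list(range(10))
steady = [portfolio_sales(y, launches) for y in range(5, 10)]
print(steady)  # totals settle near peak / (1 - decay) = 200
```

With these assumed parameters, any single title's sales halve each year, yet the portfolio total stabilises: the geometric series converges, so steady annual launches yield roughly constant overall revenue, which is the pattern the publisher is aiming for.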
Figure 5.7 Igor Ansoff’s Product/Market matrix Source: Igor Ansoff (1987) New Corporate Strategy. New York: Wiley. Reproduced by permission.
Fighting for share of an existing market – the ‘Market Penetration’ strategy – is what Publisher X does when promoting its Genetic Regulation book aggressively to counteract the competitor title. It is a way of extending a book’s sales life. Decline will still come, but hopefully later, and with the additional revenues earned outstripping marketing spend.
Product Development – building a portfolio of Genetic Regulation texts – will enable Publisher X to generate a healthy, stable business. This business, however, will only ever be a portion of total sales of books in the field. In other words, it is limited by the size of the publishing market in Genetic Regulation. Publisher X can focus on high-margin products within existing markets, constraining monograph output and emphasising textbooks, and perhaps deploying telesales teams to maximise sales. This is in effect a coping strategy. Sooner or later, the market’s limits will be reached. Further progress depends on Ansoff’s other two strategies – ‘Market Development’ or ‘Diversification’.
Market Development involves finding new customers for existing products. This can either mean selling to non-purchasing customers in current markets or seeking new customers in different segments. Publisher X’s Genetic Regulation book may have sold well in North America, but have scope to improve in Europe with effective marketing. Alternatively, having sated demand with Genetics researchers, Publisher X might develop a secondary market with Botanists interested in gene regulation. These new markets may be relatively small but, as costs are limited to sales, marketing and distribution, margins can be high.
Publishers are not always the driver of Market Development strategies. Amazon now accounts for 25 per cent of university press book sales (Esposito, 2009) and has expanded the customer base for scholarly books beyond publishers’ traditional markets. This is a significant departure from historical practice, where publishers would select and negotiate the market and segments in which a book is sold.
Diversification is the biggest leap of all – creating new products in new markets, preferably building on strength in related areas. It targets the most vibrant, dynamic segments of a market, but is high risk – in terms of both the chance of failure in an unknown market and forgone opportunities in familiar segments. For this reason, Diversification has been called the ‘suicide cell’ of Ansoff’s matrix. And scholarly publishing is an industry which has historically been averse to speculative endeavour, let alone suicidal behaviour. There are consequently few examples of successful product diversification in scholarly and professional publishing. Most impressive is LexisNexis, now a division of Elsevier, with turnover of £2.6bn in 2010.8 This mighty oak grew from a full-text search facility indexing legal cases in New York and Ohio, launched by Mead Data Central as LEXIS in 1973. By 1980, Mead had hand-keyed all US federal and state legislation, creating a unique, comprehensive resource for the US legal profession. That same year, NEXIS was added, providing journalists with a searchable database of legal news articles (Harrington, 1984). Another illustration, also from Elsevier, is Scopus, which expanded a family of abstracting and indexing products into a comprehensive bibliometric resource capable of competing with Web of Science. A third example is Wiley Job Network, a paid-for recruitment site built on the back of Wiley’s publishing business (see http://www.wileyjobnetwork.com).
We mention Diversification in passing. Many of the products created – being in new market segments – cease being scholarly publishing as we recognise it. The rewards can be large, but the risks involved should not be underestimated.
Translated from books to journals, Ansoff’s four strategies take rather different form. Why? Books and journals both experience economies of scale, meaning that unit costs typically decline as portfolios or print runs grow. Journals additionally have ‘Network Effects’. This means that their value grows disproportionately – roughly with the number of possible connections within the community, as Metcalfe’s law suggests – as that community grows (see Exhibit 9). Books conversely have intrinsic value, purchased simply for what is inside their covers.
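The contrast between the two forces can be sketched numerically. The snippet below is our own illustration, not drawn from the source: all costs and community sizes are hypothetical, and the network value uses the simple Metcalfe-style formula of counting possible pairwise connections.

```python
# Illustrative sketch: economies of scale spread a fixed cost over more
# units, so unit cost falls; Network Effects make value grow roughly
# with the square of community size. All figures are hypothetical.

FIXED_COST = 100_000.0   # hypothetical fixed cost of running a title
VARIABLE_COST = 20.0     # hypothetical marginal cost per unit

def unit_cost(volume: int) -> float:
    """Economies of scale: fixed costs spread over more units."""
    return FIXED_COST / volume + VARIABLE_COST

def network_value(community_size: int) -> float:
    """Metcalfe-style Network Effect: value proportional to the
    number of possible connections, n * (n - 1) / 2."""
    return community_size * (community_size - 1) / 2

for n in (100, 1_000, 10_000):
    print(f"n={n:>6}: unit cost {unit_cost(n):>9.2f}, "
          f"network value {network_value(n):>14,.0f}")
```

Doubling volume roughly halves the fixed-cost component of unit cost, while doubling the community roughly quadruples the network value – which is why, as the text argues, books benefit from scale alone but journals need both scale and community reach.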
While a book business benefits from scale, a journal business requires both scale and widespread accessibility and visibility within its research communities. This fundamental difference between books and journals was for a long time concealed beneath the superficial similarity of the two printed products, leading some people to assume that the same publishing strategies work for both. This is not the case, as we shall explore.
Journals succeed when an active, well-networked Editor serves a viable research community to a high standard. Put another way, a network product (the journal) converts the expertise of a network (the research community) into tangible, digestible outputs (articles, issues). Henry Oldenburg achieved this with the launch of Philosophical Transactions of the Royal Society in 1665. He could do so because a core ‘invisible college’ (the membership of the Royal Society) had been deliberately constructed (de Solla Price and Beaver, 1966). Its output was a network product which happened to be a printed journal, the best dissemination technology of the time.
1. Enhanced communication – digital technology and the development of social media have transformed researchers’ modes of communication, allowing a journal’s editorial team to be much closer to its research community and facilitating much faster communication.
Why do these developments matter? They are important because it is now both feasible and realistic for journals and other academic and scientific publications to be finely attuned to their audience and author base, and to evolve in step with their research community. What was previously a staccato back-and-forth exchange characterised by long silences is becoming a continuous process of co-creation, with scholarly communication leading and catalysing cutting-edge research. The more closely a journal matches its community, the more powerful the Network Effects the journal creates.
Let us return to Ansoff’s Product/Market matrix and examine why the underlying differences between books and journals might favour divergent publishing strategies. We will not dwell on Diversification strategies, which so often create products other than journals.
Like their book counterparts, journals publishers naturally pursue a Market Penetration strategy for some titles – trying to increase market share for existing journals in existing markets. This involves familiar spadework – setting up electronic submission systems, operating production management systems, producing high-quality print and online issues with increasing rapidity, promotion, conference representation, indexing in a disturbingly wide variety of sources, managing Editorial Boards so that they benefit a journal, improving the author experience, and so on. A competent publisher will undertake these activities as a matter of course.
The core decision a publisher must make is whether to concentrate on Product Development (launching new journals or acquiring titles) or Market Development (selling current journals to new audiences) – or try both. Which approach is best varies with each different journal list.
Product Development in a journals business focuses on building networks of researchers around a title. In practice, this depends upon both the Editor’s and the publisher’s ability to refine and improve the journal in light of that community’s evolving needs, so generating the strongest Network Effects. We consider this in the context of new launch journals first.
Achieving best fit with the community takes many forms. For instance, new technologies can help researchers work more effectively. In 2008, Taylor & Francis’ journal Food Additives and Contaminants spun off a Part B publishing surveillance papers. Food scientists need to keep track of problems highlighted in these surveillance papers, often wanting to engage with the detail of the data rather than the accompanying research article. We thus built a dedicated database containing the datasets of all papers published in FAC Part B, allowing researchers to search systematically for data on additives and contaminants pertinent to their research.
New journals can also provide higher levels of service than incumbents. Remote Sensing Letters launched in 2010 to provide rapid communication in the remote sensing field. The Editors aim to make decisions on papers within 50 days of submission, and after acceptance edited articles are published online on average in 30 days. The response has been emphatic. The journal received 174 submissions in its first year, whilst usage in the first half of 2011 is 67 per cent up on the same period in 2010. RSL will receive its first Impact Factor in 2012.
Effective launches can help new research communities consolidate. The award-winning Routledge journal Jazz Perspectives has provided a rallying point for researchers in its field, whilst Celebrity Studies is currently raising the prestige of an intriguing interdisciplinary research area. Taylor & Francis’ journal Environmental Communication, driven forward by Editor Professor Steve Depoe, helped the International Environmental Communication Association (IECA) form in 2011. The IECA already has 214 members,10 still part way through its inaugural year.
What is true of new journals often applies to existing journals. Professor Len Evans, Editor-in-Chief of Biofouling, identified that authors increasingly expected very quick turnaround times. In 2007, Taylor & Francis established a 3-week Accept-to-Online-Publication time for the journal, backing the new service level with vigorous promotion. Between 2007 and 2010, full rate library subscriptions increased by 14 per cent (they had previously been in decline), revenue rose 134 per cent and gross margin improved significantly. Submissions over the same period grew by 148 per cent, full text downloads by 222 per cent and citations by 60 per cent. The journal has never been in better shape.
Such initiatives enable a journal to mesh efficiently with its community, extending its network of researchers. Their benefit is multiplied many times when sales reach makes the journal available widely across the globe. It makes no difference how this sales reach is achieved – consortial deals and Open Access can both deliver very wide readership.
Quite how much expanded sales reach can enhance a journal is demonstrated by a natural experiment. Environmental Technology joined Taylor & Francis in 2008, just short of the journal’s 30th anniversary. Between 2007 (the last year with the previous publisher, where it was published online but had no consortial sales deals) and 2010 usage increased by a breathtaking 2726 per cent. The Impact Factor is on a strong upward path, rising in 2009 and 2010, and the journal will publish 67 per cent more pages in 2012 than it did in 2007.
Sales models can create substantial challenges for publishers pursuing a Product Development strategy through new journal launches, however. Should new journals be included in sales deals to drive usage and gain an Impact Factor in the shortest time, or excluded to allow full rate subscriptions to grow? Some publishers have concluded that there is no answer to this riddle, and now only launch new titles on Open Access models. This throws up questions itself. Why should an author pay to publish in an unproven title? There is a genuine concern that Open Access publishing may not serve poorly funded, niche research areas effectively, especially those in Humanities and Social Science, and that it will become increasingly difficult to find appropriate outlets in these fields.
What is undeniable is that journals need to maximise access to their research networks, and so generate strong enough Network Effects to compete with established journals. Which economically sustainable business model is best able to deliver this access will be the subject of heated discussion for many years to come.
A journals Market Development strategy utilises many of the same approaches, but has the differential goal of establishing a title within an unfamiliar research network. Verspagen and Werker (2003) conducted an interesting study of how a mix of existing and new journals moved into the field of Evolutionary Economics after its rebirth in the early 1980s. Most successful of all was Research Policy, a wide-ranging journal which researchers identified as the most important journal in this field both pre-1985 and in 2003. The journal was established at the start of the period, but diversified to capture this emergent area and remain a key outlet within it.
As well as expanding across disciplinary boundaries, a journal can spread its geographical footprint. Regional Studies introduced German and French abstracts in 2002 to build readership in those countries and raise the Regional Studies Association’s profile there. Spanish abstracts followed in 2004, and Mandarin in 2008. Between 2002 and 2010, published papers from German and French researchers grew by 50 per cent and citations to the journal by 170 per cent,11 testament to the success of this targeted internationalisation strategy.
But how do Editorial initiatives of this sort help develop a business? Journal publishing is what is sometimes called a ‘dog food business’. Purchasers are not reached directly, but rather by influencing users. Market share is built first in the form of quality articles, which drive readership and so citations, generating high Impact Factor. This in turn draws in better submissions, raising the standard of articles published. It is often described as a ‘virtuous circle’, and is a demonstration of Network Effects in practice. Increasingly, librarians refer to usage data when deciding whether to renew a subscription or to purchase a large sales collection. It follows that Market Development strategies which successfully increase a journal’s quality and worldwide appeal will tend to generate revenue for the publisher, even if this takes the negative form of protecting revenue which would otherwise have been lost.
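The ‘virtuous circle’ described above is a positive feedback loop, and its compounding character can be made concrete with a toy model. This sketch is our own illustration, not from the source: the coefficients, the crude Impact Factor proxy and the linear readership assumption are all invented for demonstration.

```python
# A toy model of the 'virtuous circle': quality articles draw readers,
# readers generate citations, citations raise the Impact Factor, and a
# higher Impact Factor attracts stronger submissions the following year.
# All coefficients are hypothetical.

def simulate_virtuous_circle(years: int,
                             quality: float = 1.0,
                             feedback: float = 0.1) -> list:
    """Return the year-by-year trajectory of article quality under
    positive feedback from the previous year's citation performance."""
    history = [quality]
    for _ in range(years):
        citations = quality * 100          # readership drives citations
        impact = citations / 100           # crude Impact Factor proxy
        quality *= 1 + feedback * impact   # better submissions follow
        history.append(quality)
    return history

print(simulate_virtuous_circle(5))
```

Because each year’s growth factor itself depends on the current quality level, the trajectory accelerates rather than growing at a constant rate – a compact way of seeing why established, well-cited journals are so hard for newcomers to dislodge.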
Market development strategies can also directly earn revenue. Paid supplements are a very profitable addition to the journal mix, bringing high-margin income for relatively little work. It is still possible, if rare, for journals in the Medical and Pharmacological sectors to earn six-figure sums from supplements each year. Similarly, article reprints have underlain the profitability of Pharma journals for decades. What has become apparent in recent years, however, is that these non-recurrent sales can disappear rapidly in the face of recession, government regulation and low-cost online alternatives. They are unlikely to regain their former importance.
As with books, not all Market Development is publisher-driven. Google is now a powerful source of journal readership and draws in pay-per-view sales from a much broader-based audience. DeepDyve (www.deepdyve.com) wants to make a business from selling journal content outside traditional markets, using an ‘iTunes’ business model for short-term rental of articles at low prices. Meanwhile researcher networks like Mendeley (www.mendeley.com) have the potential to open up new markets whilst simultaneously demonetising them, through a peer-to-peer model of article posting. The Chinese curse exhorts: ‘May you live in interesting times’. These are interesting times indeed.
Our original question asked whether a journal publisher should emphasise Product or Market Development, or indeed both. There is no simple answer to this – it depends on the circumstances of an individual journal list and the environment it operates in. What we can say is that the research community continually grows and evolves. Much of this change can be accommodated by existing journals (Ansoff’s Market Penetration strategy). But emerging research areas may require different kinds of journals (think of the innovative Journal of Visualized Experiments or Journal of Maps, for instance). Publishers must continue to expand their portfolio to meet these changing requirements, through launches or acquisition. If they don’t, other publishers will almost certainly occupy the vacant ground. Most likely, several publishers will have journals in a field, competing for primacy. The winner, as we have noted before, will be the one whose journals are best able to generate Network Effects and draw the community together around them.
If Product Development is an essential strategy for keeping a publisher’s portfolio aligned with research output, there is also a place for Market Development. This may best be thought of as a higher level activity – a means of extracting greater revenue from maturing journals.
When a journal is young, Product Development is the optimal strategy. As it grows and establishes itself, the focus shifts to Market Penetration. And finally, as the journal matures, it should seek out opportunities for revenue from non-traditional markets – stressing Market Development. Ansoff’s matrix gives us a key to managing the life cycle of a journal.
We now turn to how, and why, publishers build portfolios of journals. Historically, the barriers to entering scholarly publishing have been high. Typesetting and printing were highly skilled tasks before Desktop Publishing, and were costly. Conversely, incentives were limited, with most scholarly publishing performed by university presses and learned societies as a labour of love. These society journals – located at the centre of perhaps thousands of researchers in a given subject – had excellent, unparalleled connectedness to their research community, providing a large competitive advantage.
As we have seen, the world began changing with Robert Maxwell’s commercialisation of scholarly publishing in the 1950s, and was then transformed by the affordable, powerful computer technology from the 1980s. Now, one person can launch and run a journal. They may not do it well, but if they can generate sufficient submissions, they will be able to publish articles and issues, and perhaps to generate revenue.
Does that mean that everyone will develop portfolios of successful journals? A glance around the scholarly publishing world suggests not. Elsevier, Springer and Wiley published 46.9 per cent of all articles indexed in Thomson Reuters’ Journal Citation Reports in 2009.12 Independent presses publish individual titles for sure, and small clusters possibly, but to develop a quality programme requires scale and many, many skills. Indeed, managing a large journal portfolio is significantly more complicated than was ever the case in the past. Aside from academic Editors, it requires the collaboration of experts in copy-editing, typesetting, proof reading, printing, warehousing, online publication, digital conversion and content management, website design and hosting, submissions and review software, sales systems, customer services, marketing, library relations, editorial management, bibliometrics, finance, strategy and human resources, inter alia – and probably needs to be able to deploy these skills in many countries globally.
Why do publishers go to the trouble of building large journal programmes? The short answer is that only on very large lists can both economies of scale and Network Effects be fully realised. The coexistence of these two forces has long been recognised as a characteristic feature of the software industry (Schmalensee, 2000). Journals, unlike books, are beginning to behave like an online business.
Economies of scale are generated in familiar ways: printers make special rates for a publisher who brings 1000 journals; it is more affordable to attend a conference relevant for 50 journals than to display a single title, and likewise to spread the investment required for an online platform across a large portfolio; global sales teams generate more revenue per transaction on larger lists. These economies of scale provide a business with the cashflow needed to fund future investment and development. Network Effects enable the publisher’s journals to compete effectively in the market. They are maximised by worldwide sales reach, excellent peer review systems, production which is in step with the research community, a richly interlinked online platform, researcher loyalty and effective marketing strategies.
A journal publisher that currently achieves economies of scale but not Network Effects may well be profitable and successful today – but won’t be in a decade’s time. Both are now essential for success, and both are optimised on very large portfolios.
Society Friend – providing flexible, tailored services which make learned societies want to publish with you. At a certain point, positive feedback loops kick in, as societies trust publishers associated with high-quality society journals and who have the infrastructure (both in systems and attitude) required for effective society publishing. Blackwell, the so-called ‘honorary not-for-profit’ prior to their acquisition by Wiley, built a comprehensive journal business with this strategy through the 1990s. Sage is now seemingly trying to replicate that strategy.
Factory Ships – build presence by creating fleets of very large, well-managed journals and actively seeking out quality papers. This is analogous to the Factory Ships which dominate modern fisheries, locating fish with advanced sonar and harvesting them on an industrial scale. The strategy particularly suits owned journals, where a publisher has a free hand to amend the journal to best suit the Factory Ship role. As large journals tend to be cost-efficient, this can be a highly profitable publishing model. Elsevier has been extremely effective at acquiring and developing Factory Ship journals, particularly in STM disciplines.
Platform Publishing – this is a software industry concept, which involves taking a successful product and cloning it for a new field. Wiley’s phenomenally successful For Dummies series is a classic example of platform publishing, taking a clearly defined formula and adapting it for a wide range of topics. A different take on platform publishing is given by Cell Press, where Elsevier have created a family of five quality journals around the single title Cell since its purchase in 1999. Even more prolific has been Nature, which has now sired 37 daughter journals, all bearing the name ‘Nature…’.
New Kid in Town – when one has little or no presence in a discipline, an option is to launch titles in emerging and evolving research fields. The risks of launching in an unknown field are, of course, high, and librarians are additionally reluctant to subscribe to new titles. For these reasons, many established publishers have turned away from new journal launches. It is perhaps not a coincidence that smaller, ‘upstart’ publishers have launched titles in the niches left unoccupied, with mixed success. Well-thought-through, intelligently conceived launches – like those of Public Library of Science – have flourished. Many others have not. Routledge has attained a leading position in Social Science fields such as Education, Politics and International Relations, and Area Studies, driven mainly from a strategy of launches by the then Carfax imprint in the 1985–95 period.
Starting with a core of quality titles makes all of these strategies easier. Business intelligence gleaned from the research priorities of governments and funders, together with links to key up-and-coming scientists and academic research ‘stars’, can help sharpen the portfolio focus. Throughout, the goal should be to produce good journals publishing high-quality papers. It is not necessary for a title to be the best in its field, but it must serve its research community supremely well, continuously evolving in the light of feedback from scholars. Any of these strategies, done well, can then be monetised through an appropriate business model.
In reality, most publishers blend these approaches depending on the opportunities of their particular market segments. For example, a significant portion of Elsevier’s Factory Ship journals arrived in acquisitions such as North Holland (1971), Pergamon (1991) and Academic Press (2001), while Elsevier’s market-leading presence in Food Science and Geography arose from the New Kid in Town strategy of systematically launching journals in emerging disciplines. Meanwhile, Elsevier publishes many high-quality society journals in the Health Sciences, and pursues Platform Publishing with the highly successful Trends journals and Advances annuals. There is no one right way, but rather a suite of strategies to be employed according to the specific external conditions of any journal market, and internal corporate environment.
The desired end point is a comprehensive, quality journal portfolio in the publisher’s chosen subject area(s). This allows economies of scale to be realised, and significant Network Effects to be generated. For some publishers, one subject or a cluster of related subjects is enough. The American Institute of Physics has recently refocused on its core journal programme, for instance (Bousfield, 2011). Others aspire to complete spectrum coverage. An intriguing area for future investigation will be how journals with different roles in the research community – review journals, mainstream Factory Ships, niche sub-field outlets, letters journals, thought leaders, and so on – combine to form journal ecosystems within (or without) a publisher’s portfolio.
A key task becomes the development of metrics to understand journal health and ensure that each title makes a genuine contribution to the portfolio. If not, disposal or closure may be required. Writing almost 15 years ago, Gillian Page, Bob Campbell and Jack Meadows provided checklists for assessing the health of a journal and determining remedial action if it is ailing (Page et al., 1997). These contain sensible advice and proven formulae for managing complex situations. Today’s reality is perhaps more pragmatic. Faced with engrained difficulties, the simplest option is to consult the journal’s authors through a longitudinal or web survey, examine the journal’s key bibliometrics against its close competitors and discuss the results with the Editor-in-Chief. If a clear action plan is not produced, the journal may well have reached its final destination. There is so much information available today that the challenge is not so much diagnosing the malady, but rather creating a consensual, robust plan for resolution.
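The pragmatic health check described above – lining a journal’s key bibliometrics up against its close competitors – can be sketched in a few lines. This is our own construction, not a formula from Page et al.; the journal names, metrics and figures are all hypothetical.

```python
# A minimal sketch of a journal health check: rank a title against its
# close competitors on a handful of bibliometrics. All names and
# numbers below are invented for illustration.

journals = {
    "Our Journal":  {"impact_factor": 1.8, "submissions": 320, "downloads": 41_000},
    "Competitor A": {"impact_factor": 2.4, "submissions": 510, "downloads": 66_000},
    "Competitor B": {"impact_factor": 1.5, "submissions": 280, "downloads": 35_000},
}

def rank_on(metric: str) -> list:
    """Rank all titles on one metric, best first."""
    return sorted(journals, key=lambda j: journals[j][metric], reverse=True)

for metric in ("impact_factor", "submissions", "downloads"):
    print(f"{metric}: {' > '.join(rank_on(metric))}")
```

A title trailing its competitors on every metric is a candidate for the survey-and-discussion process the text describes; consistent strength on one metric but not others points instead at a specific remedial action.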
As we earlier observed, the changing dynamics of journal publishing have favoured certain publishers over others. Where once large society journals were dominant, they have increasingly ceded primacy to Factory Ship journals (currently, the state-of-the-art repository for Network Effects) and the nascent ‘mega-journals’ like Zootaxa and PLoS ONE. A key question now for journal publishers is whether self-organising groups of researchers, drawn together around issues of Open Access and data transparency through social media, will take the mantle from Factory Ships. This question won’t be answered for many years to come.
Arguments around the ‘Serials Crisis’ are much wider than differing business models – they hinge on who has the power (and moral authority?) to control the research communication process, and what level of services and quality standards (and their inherent costs) are required by those producing and consuming (and assessing?) the material output of research communities worldwide. Will semantic tagging marginalise established journal publishers? Will automated parsing systems render traditional copy-editing and typesetting irrelevant? Will Open Data – i.e. the notion that some data should be freely available to everyone to use, data-mine, mash up and republish as they wish, without restriction by copyright, patents or other ‘ownership’ mechanisms – redefine the relationships between researcher, government, research funder and publisher? We can but guess at the answers to these questions, but the journey is likely to be eventful!
What publishers can do now is continue to manage their journal portfolios intelligently with the long term in mind, manage each journal individually, invest in technology which improves researchers’ publishing experience and experiment in search of newer, more powerful modes of communication. Positioned thus, they will be able to incorporate and benefit from breakthrough technologies when they arrive.
The world has changed, but the fundamentals haven’t. The other major philosopher who employs a river-image is Wittgenstein. He distinguishes between the movement of the waters on the riverbed and the shift of the bed itself; and states that the bank of the river consists partly of hard rock subject to no alteration or to only an imperceptible one, and partly of sand, which may get washed away, or perhaps more sand will be deposited (Wittgenstein, 1979). For our purposes, the moving waters could be seen here as representing the changing journal content and its surrounding context and technologies, and the riverbed as the basic structure of scholarly communication with the solid rock and the more mutable sand. Will the rocky riverbed itself change, with some of the river banks getting swept away – a true paradigm shift heralding a new age of scholarly communication – or is it all just the sand and shingle moving around with the flow of the waters?13
Scholarly publishing involves groups of researchers judging what is worthwhile work through conference presentations, sharing drafts and research outputs, informal discussion and ultimately peer review; publishers then convert this material into readable and/or functional form and distribute it through journals to small groups of researchers active in that field; researchers read the work and cite the papers of most relevance, and so create a hierarchy of importance for papers, journals and authors; in turn, this influences future submission behaviour, creating a strong positive feedback loop.
Put another way, there are huge volumes of communication between researchers about their work. Publishers convert this into formal, version-of-record papers – which are the building blocks of future science. Modern publishers have taken on the mantle of latter-day alchemists, turning the lead of grey literature into the gold of highly cited papers.
As Michael Mabe notes elsewhere in this volume, the journal age was launched by Henry Oldenburg in 1665, establishing the four principles of registration, dissemination, peer review and archive. These have been the constants, the unchanging, immutable bedrock of scholarly communications – yet are there signs, for example in the rise of the scientist’s blog, that they represent a riverbed which is eroding? To these principles we may now add further key publishing requirements which the journals industry is best placed to provide, namely discoverability and access – and which can be characterised as representing a new, more permanent deposit on the riverbed and river banks.
Vannevar Bush announced the beginning of the end of this first journal age with his 1945 essay, ‘As we may think’ (Bush, 1945). Bush dreamt of a time when a globally networked community of scientists collaborated together for the common good. There are many indications that that time has come.
Familiar attributes guide good publishers – service for authors, relationship management, the centrality of peer review, dissemination and high-quality publications, inter alia. On these solid foundations, network science is rising, shaped by powerful Network Effects and lubricated by incredibly rapid, seamless communication between scholars. It will be a fluid, continuously evolving form of science, closer to the evolution of synapses in a brain than the clattering iron frame of a printing press. Above all else, it will be a competitive world, in which progressive, valuable initiatives are rewarded and useless, irrelevant activities are not. We can be certain that community relevance, brand, quality, reputation and the trust factor – all deriving from ‘root journals’ – will continue to play a strong role in research network communication processes and products.
Book publishing is experiencing transformation on a similar scale. Whether this signals the end of the modern book age – as ushered in by Johannes Gutenberg with movable type printing in 1439 – or simply the latest reinvention of an endlessly reinvented industry remains to be seen.
1. Treating each publication as its own ‘small business’ is a real differentiator even within the business template – the aim should be for each publication to serve its own particular niche and network to the highest standard possible.
2. Is each title positioned to deal with our changing world? Every publication needs to re-examine its aims and scope and its editorial policies as new technology opens up new content and market opportunities.
3. The external environment is harsh and will show up the badly equipped – so publishers need to deal with pressures from researchers, authors, editors, societies, research funders, libraries and governments, and be efficient in dealing with them.
We would like to thank the following colleagues at Taylor & Francis Group for their contributions to and assistance in developing this chapter: Alan Jarvis, Denise Schank, Ian White, Daniel Keirs, Colin Bulpitt, Tony Bruce, Jo Cross, James Hardcastle, Oliver Walton and Lyndsey Dixon. The responsibility for the use of their helpful information and contributions remains, of course, the authors’.
Publishing Research Consortium (PRC): http://www.publishingresearch.net/
International Association of Scientific, Technical and Medical Publishers (STM): http://www.stm-assoc.org/
Association of Learned, Professional and Society Publishers (ALPSP): http://www.alpsp.org.uk/
UK Serials Group (UKSG): http://www.uksg.org/
Publishers Association (PA): http://www.publishers.org.uk/
The Association of American Publishers (AAP): http://www.publishers.org/
Society for Scholarly Publishing (SSP): http://www.sspnet.org/
Research Information Network: http://www.rin.ac.uk/
Centre for Information Behaviour and the Evaluation of Research, University College London: http://www.ucl.ac.uk/infostudies/research/ciber/downloads/
1. Word, reason, plan, or account; the fundamental order in a changing world. There is a publishing journal, founded by Gordon Graham, formerly Chairman of Butterworths: LOGOS: The Journal of the World Book Community http://www.brill.nl/logos
8. Reed Elsevier Annual Report 2010 (2011), accessed online 26 September 2011: http://reports.reedelsevier.com/ar10/business_review/lexisnexis/2010_financial_performance
11. Data retrieved from Thomson Reuters’ Web of Knowledge on 27 September 2011.
12. Data collated from Thomson Reuters’ 2009 Journal Citation Reports, with thanks to James Hardcastle and Jo Cross in Taylor & Francis’ Research & Business Information team.
13. We are grateful to our Editor, Bob Campbell, for bringing the philosophers’ river-images to our attention and starting to elaborate this idea in the context of a possible journals industry paradigm shift.