
5

Publishing and communication strategies

David Green and Rod Cookson

Abstract:

This chapter details publishing and communication strategies within the context of the digital technology revolution. The major focus is on scientific and academic journals publishing, but the chapter also discusses the little-documented tradecraft involved in the historical strategic development of particular product types against a backdrop of wider market evolution. Illustrative case-study exhibits detail the strategic development of selected products and markets, and explain the powerful Network Effects that underpin the success of journals that most closely match their communities.

Key words

Publishing strategy

communications strategy

product development

market development

market penetration

diversification

Ansoff matrix

Network Effects

economies of scale

‘Nothing Endures but Change’

Heraclitus, c. 535–475 BCE

Introduction

Heraclitus of Ephesus can truly be considered the Publisher's philosopher. He elaborated the doctrine that change is central to the universe, and he established the term 'Logos'1 in Western philosophy, meaning both the source and the fundamental order of the universe.

Change does indeed seem to have been endemic in the publishing and communication worlds since the beginning of the technological and industrial wave based on the invention of the microprocessor in 1971 (Figure 5.1). For the last 20 years or more, every time publishers have gathered together, the talk has been of nothing but change – changes in technology, economic and political context, library finances and sociocultural change. Although there is a real tradition and continuity in our industry, we can take another image from Heraclitus: even though a person steps into the same river, those who step into it are always washed by different waters – 'each individual atom of water does not constantly change; the totality of things constantly changes' (Shiner, 1974).

Figure 5.1 Kondratiev waves in history

This period since 1971 has also witnessed the current wave based on information and communication technologies (ICT), initially leading to process revolutions in our industry, and increasingly now, as we truly enter the deployment phase in the 2010s (Figure 5.2), to new products emerging. This digital revolution has driven a paradigm shift in our industry from print-based manufacturing to online service provision, causing major disruptions for both markets and products. As with any technological revolution, there are both threats and opportunities which arise from these ‘waves of creative destruction’ (Schumpeter, 1975).

Figure 5.2 5th Great Surge based on ICT Source: Based on the work of Carlota Perez, in particular Technological Revolutions and Financial Capital, figure 7.1, p. 74. Edward Elgar Publishing, 2002. Reproduced with permission. The authors are indebted to Professor Perez for elaborating this version especially for this chapter.

Further, the economic analysis conducted by Carlota Perez indicates the role played by financial capital in the global publishing industry from the 1980s up to the early years of the 21st century. Her analysis also clearly indicates the turning point that the industry has now reached, a theme that we develop later in this chapter.

These concerns that disruption was beginning to occur in our industry were most graphically raised by the physicist Michael Nielsen at a workshop for editors at the American Institute of Physics in 2009, in a talk to be found on his blog – 'Is scientific publishing about to be disrupted?'2 Nielsen's views and recent championing of Open Science3 have raised many issues. The key points of his hypothesis are:

■ Scientific publishers should be terrified that some of the world's best scientists, people at or near their research peak, people whose time is at a premium, are spending hundreds of hours each year creating original research content for their blogs.

■ In comparison, scientific journals are standing still.

■ Scientific publishing is moving from being a production industry to a technology industry.

■ The cost of distributing information has now dropped almost to zero, and production and content costs have also dropped radically.

■ The world's information is now rapidly being put into a single, active network, where it can wake up and come alive.

■ The result is that the people who add the most value to information are no longer the people who produce and distribute it, but rather the technologists, the programmers.

Contrary to the Nielsen hypothesis, if one area of publishing has responded to the challenges of digital technology, and embraced many of the opportunities offered, it has been journals publishing. The essential lacuna in Nielsen's hypothesis is his conflation of science and research with their mediation into peer-reviewed content, and with the incredibly complex set of services, activities and products that publishers bring to the material output of the academic and research worlds.

This chapter will focus on both products and markets, referring along the way to the key media, but always within the context of the digital technology revolution. For this reason – and also because it represents the specialism of the authors – we will focus on scientific and academic journals publishing. The chapter features exhibits predominantly taken from Taylor & Francis’s publishing programme as illustrative case studies on the strategic development of products and markets.

Strategic developments in the scientific and academic publishing industry

Since the year 2000, it is estimated that the journals industry has invested over £2 billion in technological systems and innovated in areas such as electronic editorial systems, author tools, production workflow, plagiarism checking, content management, online content platforms, global sales management and many more of the elements outlined in Figure 5.3. We hope to show that in the Web 3.0 world and beyond, which values expert material and the transmission of peer-reviewed knowledge, journals and the publishing industry are rising to the challenge – with other media still highly valued despite the changed technological and economic contexts.

Figure 5.3 The scale of journal publishers’ technological and content investment since 2000 Source: ‘Access to Research Outputs: A UK Success Story’, PA, STM, ALPSP London 2010.

From their origins in recording the 'minutes of science', journals in the 20th century were often viewed as a slightly 'eccentric offshoot' of monograph publishing in which publishers had to be involved in order to cover their fields fully. Elsevier and Robert Maxwell both began the industrialisation of journal publishing in the late 1940s and 1950s, not the least of the innovations being the pre-paid subscription, the aggressive launching of international titles (such as Elsevier's Biochimica et Biophysica Acta in 1946) and titles in applied science, and computer-based subscription management and fulfilment (Cox, 2002).

Yet journal publishing at that time, and up until the mid-to-late 1980s, was very much based on a 'black box' business model: content was put together by author, editor and peer reviewer, and consumed by researcher/reader (the same academic or scientist might play each of these roles at different times), while the 'industry' operated outside the academic world, dealing with production, printing, marketing and sales, subscription management and distribution – a relationship between publisher, library agent and librarian. But it was always the case, as with book commissioning, that good relationship management was a key skill for the publisher: getting the best editors, linking with eminent learned societies, and selecting a good, balanced editorial board who would not only add lustre to the title but also help promote and sell subscriptions to their own and their colleagues' institutions.

Abstracting and indexing – secondary publishing

Technology began to drive e-publishing in the 1960s and 1970s. This involved delivery of secondary publishing – abstracting and indexing (A&I) – products, initially through the Dialog system, which was one of the predecessors of the Internet as a provider of bibliographic information. Initially, databases such as ERIC (education), INSPEC (electrical engineering) and MathSciNet (mathematical sciences) were delivered as magnetic tapes, but by the 1980s a low-priced dial-up version of a subset of Dialog was marketed to individual users as Knowledge Index. This included INSPEC, MathSciNet and more than 200 other bibliographic and reference databases. Even at this early stage, subscribers to the A&I databases were linked to document retrieval services such as CARL/UnCover, which would go to physical libraries to copy materials for a fee and send them to the service subscriber.

The origins of A&I products can be traced back to An Index to Periodical Literature, first published in 1802 by William Frederick Poole, who had recognised that periodical articles were often being overlooked due to a lack of an effective indexing service. Originally, abstracting services developed to aid research scientists in keeping abreast of an exponential growth in scientific journal publications, focusing on comparatively narrow subject areas.

From their origin as a service to the scientific community, the number of abstracting services has grown to an estimated 1500 today, many of which were founded in print but are now to be found online. The conversion of print to online format allowed libraries to build in links to their own online public access catalogues (OPACs) as well as to electronic and print document delivery services. More recently, Jasco (2007) has argued that traditional abstracting services may eventually be outmoded because more efficient searches can be made using powerful search engines. However, quantum improvements in search and retrieval software have not necessarily meant that the task of identifying relevant material has been made any easier for the researcher. There is still enormous value in A&I services being structured on the basis of application of a bespoke taxonomy developed by subject specialists, which can provide much more granular and targeted search results than Google or even specialist search software (see Exhibit 1).
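To see why a curated taxonomy can out-perform free-text matching, consider this toy contrast in Python (the records, taxonomy terms and query are invented for the example; real A&I services apply far richer controlled vocabularies):

```python
# Toy contrast between free-text matching and taxonomy-based retrieval.
# Records and vocabulary are invented purely for illustration.
RECORDS = [
    {"title": "Mercury levels in river fish",
     "taxonomy": ["pollution", "heavy metals"]},
    {"title": "The orbit of Mercury revisited",
     "taxonomy": ["planetary science"]},
]

def free_text(query: str) -> list:
    # Naive substring search over titles, as a search engine might do.
    return [r["title"] for r in RECORDS if query.lower() in r["title"].lower()]

def taxonomy_search(term: str) -> list:
    # Retrieval against terms assigned by subject specialists.
    return [r["title"] for r in RECORDS if term in r["taxonomy"]]

print(free_text("mercury"))             # both papers - the homonym pollutes results
print(taxonomy_search("heavy metals"))  # only the relevant, expert-indexed paper
```

The homonym ('Mercury' the element versus the planet) defeats the free-text match but not the expert-assigned subject terms, which is the granularity argument made above in miniature.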

Such challenges to commercial producers of A&I services have been met by producers of bibliometric databases (e.g. Scopus™), where the value of cross-referencing, citation and other metadata has been recognised as a set of quality indicators and thus drivers of publication metrics, which allow users to identify quantifiably authoritative pieces of research from within an access-controlled online environment. The original purpose of A&I services – to help researchers overcome the difficulties of tracing potentially useful articles scattered over not only the extant print periodical and other literature, but now predominantly the Internet – remains valid. More than ever, discoverability, search and navigation are central to research providers and users. The challenge for A&I service producers today is to retain integrity, identity, validity and authority in an age of seemingly ubiquitous sources of one-off and serial research and scholarly literature. Additionally, rapid delivery of the primary material needs to be designed into these searches, as well as linking to datasets, chemical reference works, handbooks, patent databases, etc., meeting the challenges of linking on the semantic web.

Within the physical sciences, A&I databases have long been a key tool for the research community, and in the online age they have rapidly developed to incorporate many features making access and usage of the data faster, more intuitive and enriched. An indexing service gives a full bibliographical reference, whilst an abstracting service in addition provides a brief summary of the article's or report's content. All forms of A&I can be searched by subject topic, author name or subject keywords such as a chemical, plant or process name. A&I providers now need to develop and make available searchable related datasets, reference links and related patents, and, as these provisions develop further, structural tagging and bespoke identifiers such as the International Chemical Identifier – InChI (see Exhibit 2).

Exhibit 1: Scientific A&I database case-study – Chemical Abstracts Service

A major example of an abstracting and indexing database that has developed in tandem with the web is the Chemical Abstracts Service, which is a division of the American Chemical Society. In this example proprietary databases have been developed and expanded over numerous years and have thus become recognised by chemical and pharmaceutical companies, universities, government organisations and patent offices globally as an important source of chemical information. Chemical Abstracts Service has become vitally important to the physical sciences community by combining their databases with advanced search and analysis technologies, delivering current, complete and crosslinked digital information allowing scientific discovery. This information contains full linking of metadata and data sources.

What makes Chemical Abstracts important to researchers is that at its core is a periodical index providing summaries and indexes of disclosures in recently published scientific documents across all major publishing houses. The index incorporates 8000 journals, technical reports, conference proceedings, theses and dissertations as well as numerous new books in any of 50 major languages. In addition, the index incorporates patent specifications from 27 countries and two international organisations, giving the full database enriched content for the researcher to draw upon.

As well as the published articles and reviews, patents are also covered in detail by Chemical Abstracts, and they account for over 15 per cent of the documents available for search and retrieval. The majority of patents that are of interest to chemists cover compositions of matter (i.e. new chemical compounds, mixtures, pharmaceuticals) or processes (i.e. synthesis). Patents are also beginning to include three-dimensional atomic structures and structural databases and their uses, as access to information becomes easier. As hundreds of thousands of patents are issued annually worldwide, a great deal of scientific and editorial effort has gone into organising these patent documents for effective retrieval via A&I services such as Chemical Abstracts.

Exhibit 2: InChI – web-based storage and retrieval of chemical information

An example of how the provision of information digitally is developing is the InChI (International Chemical Identifier). InChIs are text strings comprising different layers and sublayers of information defining a compound or substance. The different layers defining the chemical are separated by slashes (/). To ensure users easily understand how to find a particular chemical, each InChI string starts with the InChI version number, followed by the main layer.

To determine a particular chemical, the main layer contains sublayers for the chemical formula, atom connections and hydrogen atoms, which together uniquely identify the chemical. Depending on the structure of the molecule, the main layer may be followed by additional layers, for example for charge, stereochemical and/or isotope information. This information can then be overlaid by scientists, publishers and database providers to enable web-based linking between sources of chemical content – whether web page, journal or magazine, or database – leading to article text identifying the compound or structure being discussed (the InChIKey and InChI are unique to each chemical).
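As a minimal illustration of this layer structure, the standard InChI for ethanol can be split on its slash separators (a sketch only; real toolkits such as the official InChI software perform full validation):

```python
# A minimal sketch of splitting a standard InChI string into its layers.
# The example string is the standard InChI for ethanol.
inchi = "InChI=1S/C2H6O/c1-2-3/h3H,2H2,1H3"

prefix, _, body = inchi.partition("=")  # prefix: "InChI"; body: the layers
layers = body.split("/")                # layers are separated by "/"

version = layers[0]   # "1S" - version number ("S" marks a standard InChI)
formula = layers[1]   # "C2H6O" - chemical formula sublayer
extra = layers[2:]    # "c..." atom connections, "h..." hydrogen atoms, etc.

print(version, formula, extra)
# 1S C2H6O ['c1-2-3', 'h3H,2H2,1H3']
```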

By searching on the identifier via a resolver (such as the ChemSpider InChI Resolver), a lookup is performed between the shorter InChIKey and the full InChI, returning digitally all relevant compound or publication information. The software to enable these searches is freely downloadable online, highlighting how easily information can be sourced and returned in the digital age.

http://www.inchi-trust.org/index.php?q=node/6
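A sketch of such a resolver lookup over HTTP follows. The URL pattern shown is that of the NCI/CADD Chemical Identifier Resolver; both the URL and the 'stdinchi' representation name should be treated as assumptions to verify against the service's current documentation:

```python
# A hedged sketch of resolving an InChIKey back to a full InChI over HTTP.
# The endpoint and representation name are assumptions, not a documented
# contract - check the resolver's documentation before relying on them.
from urllib.request import urlopen
from urllib.parse import quote

def resolve_inchikey(inchikey: str) -> str:
    url = ("https://cactus.nci.nih.gov/chemical/structure/"
           f"{quote(inchikey)}/stdinchi")
    with urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8").strip()

# InChIKey for ethanol, used here as an illustrative input.
print(resolve_inchikey("LFQSCWFLJHTTHZ-UHFFFAOYSA-N"))
```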

Abstracting and indexing are becoming ever more interactive for the researcher or academic, with newer and faster ways of getting to the relevant information in the literature (see Exhibit 3). This now includes techniques such as content and data mining, defined as the automated processing of considerable amounts of digital content for the purposes of information retrieval (be it structural data, tabulated results or data analysis), information extraction and meta-analysis. As the services available continue to develop, semantic annotation, giving a meaningful context to information content, could develop into a new standard for STM content and facilitate improved and deeper search and browse facilities into articles and research. Enhancements in the capability of A&I products to perform in-depth searches have meant that there has recently been consistent development of automated extraction tools, which present the important elements contained within documents and articles – not only the expected scientific elements (genes, proteins, chemical compounds, diseases) but now also business-relevant elements such as company names, people, products and places. The continuing development of these techniques aids in identifying the relationships between the elements of interest to the reader across significant numbers of documents.
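A toy sketch of such dictionary- and pattern-based extraction is below. The vocabularies, the gene-symbol pattern and the sample sentence are all invented for illustration; production systems use curated ontologies and trained models:

```python
# Toy entity extraction of the kind described above: dictionary lookups
# plus a crude pattern for gene-symbol-shaped tokens. All vocabularies
# and the sample text are invented for demonstration.
import re

CHEMICALS = {"ethanol", "benzene", "aspirin"}
COMPANIES = {"Elsevier", "ChemAxon"}
GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{2,5}\b")  # rough gene-symbol shape

def extract_entities(text: str) -> dict:
    words = set(re.findall(r"[A-Za-z]+", text))
    return {
        "chemicals": sorted(w for w in words if w.lower() in CHEMICALS),
        "companies": sorted(w for w in words if w in COMPANIES),
        "genes": sorted(set(GENE_PATTERN.findall(text))),
    }

sample = "BRCA1 expression fell when ethanol was added (reagents: ChemAxon)."
print(extract_entities(sample))
# {'chemicals': ['ethanol'], 'companies': ['ChemAxon'], 'genes': ['BRCA1']}
```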

Exhibit 3: Social Science database case-study – Educational Research Abstracts Online/ERA

In the mid-to-late 1990s, the Social Science database Educational Research Abstracts Online (ERA) was a 'flat text' (print) abstracting journals programme. This comprised ten segmented A&I print publications in the field of education, which served a useful current-awareness purpose for the individual user. The challenge was to convert this into a unitary, searchable and structured online database which could represent the 'front end' of global educational research and offer the user a route into the full-text documents that lay 'behind' the abstract.

This has been achieved to a large extent via reference linking, which either allows access to subscribed-to primary material or offers a click-through to a document delivery/pay-per-view (PPV) service via the British Library Document Supply Centre (BLDSC). The next phase of development for ERA is to capitalise on the semantic search capabilities of the worldwide web, in order to weave its content into the wider research literature now available online.
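In outline, reference linking of this kind maps stored metadata to an outbound URL – a DOI resolver link where a DOI is known, otherwise a document-delivery request. A minimal sketch (the fallback endpoint is hypothetical, not a real BLDSC API):

```python
# A minimal sketch of reference linking: turning a stored bibliographic
# record into an outbound link. The DOI resolver URL pattern is standard;
# the document-delivery fallback endpoint is invented for illustration.
from urllib.parse import urlencode

def reference_link(record: dict) -> str:
    if record.get("doi"):
        return f"https://doi.org/{record['doi']}"
    # Hypothetical document-delivery search, not an actual service API.
    return "https://docdelivery.example.org/search?" + urlencode(
        {"title": record["title"], "year": record["year"]})

print(reference_link({"doi": "10.1000/xyz123"}))                    # resolver link
print(reference_link({"title": "Learning to read", "year": 1998}))  # PPV fallback
```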

Journals – primary research publishing

At the beginning of the 1990s there were very few full-text primary journals distributed via the Dialog system mentioned above, and fewer than a dozen titles had been launched for online delivery as ASCII text via email. In 1991 Paul Ginsparg began the LANL preprint server for physics at Los Alamos (now known as arXiv), which at the time was seen as a major threat to the existence of traditional physics journals (although it has transpired that this system itself has required significant funding, and the preprint server has come to play a different role from the journal publication of versions of scientific record). At the same time, the results of the ADONIS project, digital scans of biomedical journals on CD, led to the launch of a (not very long-lived) commercial product. There then began a period of intense experimentation, often involving libraries and technology providers in addition to both commercial and learned society publishers. In 1992 the American Association for the Advancement of Science launched the Online Journal of Current Clinical Trials; Elsevier started Project Tulip with the University of California system, delivering scanned images of materials science journals to UC libraries; and the Red Sage project (named after a congenial restaurant favoured by publishers in Washington, DC), a collaboration between AT&T Labs, UC San Francisco and a group of learned society and commercial publishers and university presses, sought to develop an experimental digital journal library for the Health Sciences. These projects all provided valuable data, insights and experiences, but at this time the online product essentially represented a 'dumb' digital replication of the print version.4

By 1995–96 most major publishers were announcing their intentions to produce online versions of their print editions, and experiments such as the UK SuperJournal project began to look at how e-journals were actually used, and the features that users wanted (see Exhibit 4). What is remarkable is both how much attachment there still was to the print paradigm in the 1996–98 period, and how, in little over a decade, all of the requested core user requirements are now taken for granted by users of online journals.

At the turn of the millennium, most major publishers had full e-versions of their journals available, and some e-journals were already starting to contain features that were not available in the print version. In addition, publishers' versions were also starting to compete with aggregators' licensed versions of full-text databases – such as BIS, EBSCO Publishing, Ovid and BIDS. These aggregated databases typically provided unlinked 'dumb' online versions which simply replicated the print edition. Publishers responded in part by seeking embargoes on their content.

Exhibit 4: The SuperJournal project

The SuperJournal research project was funded by UK JISC in the eLib Programme, to study the factors that would make electronic journals successful. The objective was to identify the features that deliver real value, and to explore the implications with stakeholders in the publishing process: readers, authors, libraries and publishers. The research was conducted over three years (1996–98) and the project ended in December 1998. The project developed an easy-to-use electronic journal application and made clusters of journals in Communication and Cultural Studies, Molecular Genetics and Proteins, Political Science and Materials Chemistry available to UK university test sites via the World Wide Web. At the start of the project readers were asked about their expectations for electronic journals. For two years their usage was recorded and analysed. At the end of the project, they were asked to comment on what they did and didn't like about SuperJournal, and the features they wanted in future electronic journals and services. Key findings were that users wanted:

■ a critical mass of journals in each field,

■ certain key functionality – browse, search and print,

■ a deep and permanent backfile,

■ seamless discovery and access, but with a choice of services for finding what is needed in their subjects, and

■ the time-saving convenience of desktop access, as well as the broadest possible choice of content.

http://www.superjournal.ac.uk/sj/index.htm

Further, journals and the activities surrounding research and the publishing workflow began to change. This is reflected in what has been termed the 'new research and learning landscape', shown in Figure 5.4. This landscape integrates the publishing process much more closely into the academic and research process. Publishers can no longer sit as part of a 'black box' process outside the academic and research worlds, but must be integrated into them as full partners. There has always been a teaching role for many journal articles, and this has been embraced much more in recent years with the decline of much research and monograph publishing, and often because of the long gestation period of book publishing as opposed to article publication.

Figure 5.4 Journals in the new research and learning landscape Source: L. Dempsey, 'The (Digital) Library Environment: Ten Years After', Ariadne, Issue 46, 2006, UKOLN, University of Bath http://www.ariadne.ac.uk/issue46/dempsey/, figure 2, which is adapted from the OCLC Environmental Scan (see http://www.oclc.org/reports/escan/research/newflows.htm), which is in turn adapted from Liz Lyon, 'eBank UK: Building the links between research data, scholarly communication and learning', Ariadne, July 2003, Issue 36: http://www.ariadne.ac.uk/issue36/lyon/

In addition, the shift from print to online has led to a significant change in workflow whereby the print version (perhaps a single archival volume or an amalgam of issues printed together) becomes an ‘offcut’ of a fully digital flow process from author to reader (Figure 5.5). The attention of publishers has also moved much further from production of publishing products to provision of publishing services within research networks, as shown in Figure 5.6. This in turn can be seen both as the fundamental shift from journal to network and as a precursor to potential new products, especially driving innovations in ‘multi-layered’ network products of varieties of book, journal and reference content, including multimedia, datasets, etc.

Figure 5.5 The online journal article workflow in the 21st century (Taylor & Francis example)

Figure 5.6 From products to services, and from journals to digital research networks. Inspired by and adapted from a presentation by Stephen Rhind-Tutt, Alexander Street Press: http://alexanderstreet.com/articles/index.htm

Alongside these changes in both processes and the publishing landscape, journal publishers have been required to demonstrate the value of the services that they provide – much of which is of course driven by investment in technological systems, as well as by changing the 'terms of trade' of the business. So we can list the following:

■ online submission and peer review systems,

■ smarter search and navigation in 'discoverability' technologies,

■ author templates and authoring tools,

■ originality checking with the CrossCheck system,

■ getting author and researcher rights in balance through new copyright and licensing initiatives,

■ developing new business models,

■ providing legal advice to editors and learned societies,

■ driving reputational factors such as branding and trust through publishing integrity programmes and third-party metrics, and

■ not least, serving editors and authors by providing clearly targeted channels of research dissemination through marketing, sales and promotional activities.

Although the decline of the journal has been much heralded, with the development of large databases of journal content on the one hand, and the individual article pay-per-view (PPV) market developing on the other, the fact is that the journal, its brand and the reputational factors associated with it still provide the key trust factor for the content. This will continue to be the case in the future online 'multi-product content' or 'multi-layered content' environment, supplying journal and book content along with associated data and supplementary materials.

Monographs

By 1997, a conference sponsored by the Association of Research Libraries in the USA was contemplating the decline of the scholarly monograph, with some referring to 'its comatose body'. Indeed, one reason given for the continuing popularity of the research article, and its growing role in coursepacks and as a teaching tool, is that monographs are seen as declining in relevance and timeliness by the reader, in viability by the commercial publisher, and even in sustainability by the university presses. Every publisher uses the term in a slightly different way, but in general 'monograph' has become a catch-all term for a book that is not a reference work, that presents primary material, and which may be multi-authored, single-authored or an edited collection.5

It is worth drawing a distinction between humanities titles, whose viability is fundamentally dependent on the credibility of the scholar(s) as author(s); social science monographs, which today depend on the globalisation of markets and on new markets as library budgets shrink in the established markets of the US and Europe; and STM titles, which face the same phenomenon as social science, but with the added twist of a degree of dependence on the market vicissitudes of practitioner, professional and industrial audiences. But in the second decade of the 21st century, the essential commercial equation that a commercial scholarly monograph publisher weighs when making the publishing decision is:
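In outline – with every symbol below an illustrative assumption rather than a definitive formulation – the test weighs expected net revenue against cost:

```latex
\text{publish if}\qquad
\underbrace{q \, p \, (1-d)}_{\text{expected net sales revenue}}
\;>\;
\underbrace{C_{o} + q \, c_{u} + R}_{\text{origination, production and royalties}}
```

where q is expected unit sales (largely hardback library copies), p the list price, d the average trade discount, C_o the fixed origination cost (editorial work, typesetting), c_u the unit production cost and R author royalties.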

In general, however, university presses cannot take this approach and make as narrowly commercial a publishing decision, as they are much more producer-led rather than reader-led, and highly dependent on their host institutions or supporting foundations for sustaining their quality and prestige publishing. For the most part, on the basis of this model, commercial book publishers find it hard to compete with the prestige of university presses when it comes to the commissioning of the highest prestige material.

But there is no doubt that there is a funding crisis in the traditional university press model, as presses try to come to grips with the new digital culture and the challenges of the Open Access lobby in both journal and book publishing.6 The free-online-plus-digital-print-on-demand (POD) model, recently espoused by Bloomsbury Academic among other commercial publishers, is unproven and potentially hastens income decline as more people become comfortable reading online (driven not least by new-generation e-reader platforms and mobile technology). There is a real need to retain strong revenue streams as new layered content with multimedia features becomes available, such as the recent launch of The Waste Land app.7

Successful commercial monograph publishing is in fact in rude health in the 2010s. Although unit sales have shrunk by around 75 per cent since the early 1990s, a multifold increase in publishing output has balanced this decline, and, with changes in digital processes and the marketplace, the economics of monographs have perhaps never been healthier. This derives from a number of factors, based partly on the pricing-up of hardback library editions and the adoption of a POD paperback edition where there is sufficient perceived demand in addition to library sales. Other factors include:

■ lower costs through digital processing;

■ add-on revenues and enhanced editions through e-books;

■ new sales channels such as Amazon and e-book stores;

■ new rights and licensing deals with e-book aggregators;

■ new value being driven from the backlist, and sales revivals of out-of-print titles through digitisation.

There are also significant developments through application of new digital possibilities by offering enhanced collections, particularly in STM subjects, in the form of enhanced databases (see Exhibit 5).

Exhibit 5: Enhanced database case-study: CRCnetBASEs

The CRCnetBASEs were launched in 1999 as one of the first online e-book platforms in academic publishing. Since then they have grown to include more than 7000 online references in more than 40 subject disciplines. CRCnetBASEs offer instant access to one of the world's premier collections of scientific and technical references, with added support for libraries. The largest repository of scientific and technical information is SCI-TECHnetBASE, which holds all online CRC Press content. It includes the entire line of science and technical books published through the CRC imprint. The database contains more than 6000 online volumes and is continually growing. Users can conduct sophisticated searches across the entire site or by sub-discipline, and create an online resource centre tailored to the needs of the library or organisation.

As the CRCnetBASEs have developed, they have incorporated facilities to support work on converting raw materials or chemicals into more useful forms, spearheading the hunt for valuable new materials or techniques by providing the references and resources that underpin such work. They also offer insights from experts on how to ensure processes are operated safely, sustainably and economically. In particular, they can be used as a collection of interactive databases and dictionaries, allowing researchers to perform exhaustive searches using a variety of criteria, including structure searching and, in CHEMnetBASE, up-to-date chemical tagging via the InChI project. The platform also provides ways to create customised tables and export the data in a choice of formats, maximising the scope of the information available to the researcher.

Furthermore, it is possible to access the Combined Chemical Dictionary (CCD), a chemical database containing entries on over 570 000 substances, including:

■ Dictionary of Natural Products

■ Dictionary of Organic Compounds

■ Dictionary of Drugs

■ Dictionary of Inorganic and Organometallic Compounds

■ Dictionary of Commonly Cited Compounds

■ Dictionary of Marine Natural Products

■ Dictionary of Food Compounds

■ Dictionary of Carbohydrates.

Users can search this content with the intuitive and powerful MarvinSketch Java applet and JChem search engine from ChemAxon™, drawing their own structure queries. A polymers property database also offers a source of scientific and commercial information on over 1000 polymers; each entry includes trade names, properties, constituent monomers, commercial applications, manufacturing processes and references, enhancing the user's experience and offering much greater usability.

A product such as the CRCnetBASE, whilst fundamentally a database of online book products, therefore keeps fully abreast of technological developments in digital content provision and linking, continually enhancing its value proposition as a key digital STM resource.

Textbooks

As monographs appeared to decline in the mid-1990s, publishers piled into textbooks, seeking to replicate the commercially successful propositions. A new generation of student customers each year, promising repeat sales, is superficially very attractive. But the textbook is also a high-cost, labour-intensive product. It needs to be kept up to date as a full subject review; it requires top authors, who need to be kept happy and will require a significant financial interest in the work; there need to be good in-house development editors able to monitor and interpret intelligence on courses in the main markets the book will serve; and the digital opportunities for 'add-ons' such as CD-ROMs, companion websites and multimedia supplementary material all come at a cost.

Thus the market has been starting to resist what have become very expensive items, and the textbook business is somewhat idiosyncratic in that the people who recommend the books, through adoption for their courses, are not the ones who have to pay. Various phenomena are causing problems for the current textbook market:

■ the burgeoning second-hand market

■ 'student sharing'

■ the increased risk of piracy

■ competition from new digital media, such as companion websites, bespoke coursepacks (based on current book and journal content) and virtual learning environments.

Nevertheless, a successful textbook in a key subject which keeps up to date and makes the most of digital possibilities remains highly viable for the astute and experienced publisher (see Exhibit 6).

Reference works (including encyclopaedias)

The large reference work and encyclopaedia continue to go through hard times under the challenge of digital. Witness the demise of Encyclopaedia Britannica under the challenge of Encarta in the 1990s, and now, as a top reference publisher said to us recently, ‘We are getting murdered by Wikipedia and other free reference content’. A good reference work or encyclopaedia has great potential as an online digital product, especially when linked as a multi-layer with a publisher’s other digital content, but the investment required is not insignificant. And the integration of different products is not necessarily straightforward, especially when that content has been digitised to different standards, in different formats or is sitting on different platforms. Despite this, successful products still remain (see Exhibit 7).

Exhibit 6: Textbook case study – Molecular Biology of the Cell (Garland Science)

‘This is a large book, and it has been a long time in gestation— three times longer than an elephant, five times longer than a whale.’ From the Preface of the first edition of Molecular Biology of the Cell.

The first edition's author team comprised James D. Watson, Bruce Alberts, Dennis Bray, Julian Lewis, Martin Raff and Keith Roberts, who started writing the book in 1978. It was published in 1983 to considerable acclaim (Friedberg, 2005). It is often referred to as 'the Bible' of cell biology due to its role in establishing the parameters of cell biology literacy.

Throughout its five editions the standards set in the first edition have been adhered to, as follows:

Authorship: Alberts, Lewis, Raff and Roberts have been joined by Alexander Johnson and Peter Walter. Each of the authors is responsible for a cluster of chapters, while Roberts conceptualises and sketches the illustrations. The chapters are extensively reviewed by scientific experts in the numerous sub-fields covered in each chapter. For each edition, there may be as many as 250 expert reviewers. Each chapter undergoes extensive developmental editing to ensure that the level, voice and style are consistent throughout.

Illustrations: More than half of the illustrations are original to the book. The authors take great care to explain the concepts as carefully in the pictures as in the legends and text. The rest of the illustrations are drawn from the scientific literature to ensure scientific accuracy and authority.

Pedagogy: The structure of Molecular Biology of the Cell is based upon a formula invented by James Watson for the first edition of Molecular Biology of the Gene. Each chapter is divided into short sections under declarative headings known as concept headings.

A brief version of Molecular Biology of the Cell, Essential Cell Biology, was published in 1997 and quickly established itself as the leading undergraduate cell biology textbook.

Both books are accompanied by a library of cell biology videos and animations which both reinforce the text’s concepts and expand on the content and scope of the book. Other supplements include PowerPoint slides of all figures, extensive problems, quizzes and test questions.

Both books are now available in downloadable e-book format for purchase in their entirety, as individual chapters and for limited-term rentals. In the first six months, sales of the e-books are still a small proportion of print sales, although demand is growing and it is expected that e-books will comprise a sizeable proportion of sales within the next 12 months as more cheap e-readers are sold. Essential Cell Biology is soon to be available in media-integrated format for use on the iPad and, eventually, on Android devices.

Twenty-five years after the publication of the first edition of Molecular Biology of the Cell, it remains the leading book in sales and in intellectual achievement. The global course market is approximately 250 000 students and has been stable for the last 10 years.

There has, however, been growth in recent years in shorter reference works – handbooks – collections of survey articles/entries of around 5000–7000 words (such as the current volume). These are especially popular in STM subjects and are typically cheaper than monographs in hardback, as most have the potential for industrial or professional sales. A good, definitive reference handbook can gain real prestige by putting a stamp on a particular field – as traditional print or replica e-book.

Technical proceedings

Technical proceedings as a product have not changed dramatically in recent years, reflecting their primary purpose as an archival record. They remain popular in applied scientific and technical disciplines, such as computer science, where a greater tradition exists for publishing conference papers alongside books and journal articles. Technical proceedings are indeed moving increasingly online, sold as e-books, although often as a supplement to the print edition rather than a substitute.

Exhibit 7: Encyclopaedia case study – The Routledge Encyclopedia of Philosophy

The original product

The Routledge Encyclopedia of Philosophy, or REP, was published as a ten-volume set in 1998. A CD-ROM was also offered as part of the package. The culmination of seven years of intense work by the General Editor Edward Craig and a team of over 20 subject editors, REP numbered over 1300 contributors and 2000 entries. Shortly after release of the print version, work began on a fully searchable and extensively cross-referenced dynamic Web version, the Routledge Encyclopedia of Philosophy Online. Launched in 2001, REP Online expands this acclaimed Encyclopedia well beyond print reference boundaries. The full content of the classic ten-volume set remains at its core, but REP Online is a living resource, growing along with the discipline itself. Each year the REP releases a diverse range of new material, including newly commissioned entries by leading scholars and revised versions of existing entries. By 2010 REP Online contained over 100 new and revised entries.

The marketplace

The market for large-scale online academic reference products such as REP Online is driven mainly by academic libraries, which form its core customer base. Whilst the market for multi-volume print-based reference works has diminished significantly, demand for quality, peer-reviewed academic reference works available online remains strong. The North American market is particularly important. In 2011 individual subscriptions became available for the first time. The marketplace for such digital products faces competition from a variety of sources, some in the form of similar subscription-based products from other publishers and some from free resources such as the Stanford Encyclopedia of Philosophy.

Challenges

As with journals, delivering a steady stream of new quality content remains a key challenge for any online digital reference product such as REP. Other challenges come in the form of working with external platform providers and keeping essential parts of the site, such as the home page, up to date; the archiving of entries when revisions or replacements are commissioned; and maintaining the REP profile through reputational and trust factors in a marketplace increasingly populated by unreviewed content which can be freely accessed.

The digital age brings a number of new challenges and opportunities for publishers of technical proceedings. Dual pressure comes from conference organisers demanding a lower upfront financial commitment towards publication alongside greater project-management support from the publisher to bring the publication together, including electronic submission and review software, author templates and instructions, handling of author queries and managing copyright transfers. Customers are also demanding more sophisticated online products, which may further squeeze publishers' margins through the need to intervene more on manuscripts to improve consistency of style, introduce XML tagging to aid search and discovery, and host authors' supplementary material.
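As a minimal sketch of the XML tagging step mentioned above (the element names are illustrative, not a specific DTD or schema such as JATS):

```python
# A minimal sketch of wrapping a proceedings paper's metadata in simple,
# searchable XML. Element names and values are invented for illustration.
import xml.etree.ElementTree as ET

paper = ET.Element("paper", id="proc-2011-042")
ET.SubElement(paper, "title").text = "Fatigue in Bridge Decks"
ET.SubElement(paper, "author").text = "A. N. Author"
ET.SubElement(paper, "keywords").text = "fatigue, infrastructure"
ET.SubElement(paper, "abstract").text = "..."

print(ET.tostring(paper, encoding="unicode"))
# <paper id="proc-2011-042"><title>Fatigue in Bridge Decks</title>...</paper>
```

Even markup this simple lets a platform index titles, authors and keywords as distinct fields rather than undifferentiated text, which is the discoverability gain being paid for.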

Delivered effectively, these new requirements can enhance the product, maximise visibility and reinforce good working relationships between the conference and publisher, but the specialisation and investment now involved in securing technical proceedings contracts and selling them in sufficient quantities have narrowed the range of publishers involved in this sphere. Further, Open Access advocates within fields such as computer science have been urging simple author posting of conference proceedings in repositories, as an example of DIY publishing.

Exhibit 8: A technical proceedings case study

Since the 1970s, a prestigious professional membership society based in the UK has published the proceedings of its annual conference on state-of-the-art technical developments in the fields of infrastructure and engineering. The c. 70 papers appearing in the final publication total up to 700 pages, and are thoroughly reviewed and selected by a panel of experts drawn from academia and industry.

The organisers pass accepted and edited papers to a commercial publisher as camera-ready copy, with a preface prepared by the editorial panel. The publisher adds a contents page, paginates and adds headers and footers to the manuscripts, designs the cover, and then prints, binds and delivers copies in bulk to the conference for distribution to delegates.

The publisher offers the proceedings for general sale after the conference through all usual channels, priced at approximately £150/US$220, with strong appeal to academic and professional libraries as an archive volume. Sales in the low hundreds are typically forecast and prove profitable, with the conference receiving a royalty.

Product development vs. market development

So far, we have looked at the many and varied outputs of scholarly communication as, effectively, standalone products. A single book or database appeals to a set of researchers active in a specific field. Commercial success is a function of the inherent quality of the book or database, how effectively it is promoted to potential customers and how able (and, more importantly, willing) they are to pay.

Does this mean that the performance of a scholarly publishing house is simply the sum of the sales of its products? On a crude level, the answer is of course ‘yes’. What it overlooks, though, is a key element in determining underlying and future performance – the competitive environment in which a firm operates.

Take a simple example. Publisher X may have the best book on Genetic Regulation this year and achieve excellent sales. Next year, Publisher Y may put out a superior book in the same field which has far greater appeal for customers. Sales of Publisher X’s book are extremely disappointing in the second year. The book is unchanged, but its performance is now lacklustre. It has suffered because its ‘market penetration’ – the book’s share of total sales of Genetic Regulation texts – has declined due to the competing book. Publisher X can respond with a vigorous marketing campaign. Given that there is now a superior product on the market, the results of this are likely to be modest. The book’s sales have peaked and will soon decline toward zero (or to the online ‘Long Tail’, which is much the same thing).

The typical scholarly monograph has a sales life cycle lasting several years. A very few exceptional titles continue selling for a decade or more (Nitterhouse, 2005). To manage this inevitable life cycle pattern of peak and trough, publishers build portfolios of products in their chosen subject areas, with new content added each year. As the sales of individual products move into decline, new products pick up the slack, ideally creating a stable financial performance for the publisher.
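A toy model makes this portfolio arithmetic concrete (the decay curve is invented, not publishing data): launch one cohort of titles per year, let each follow the same declining life-cycle curve, and total sales plateau once the cohorts overlap.

```python
# Toy model of the portfolio effect described above: each year's new titles
# follow the same declining sales curve, and the overlapping cohorts sum to
# a roughly stable total. The curve itself is invented for illustration.
life_cycle = [100, 60, 35, 20, 10, 5]  # unit sales per cohort, by age in years

def total_sales(year: int) -> int:
    # One new cohort is launched every year from year 0 onward.
    return sum(life_cycle[age] for age in range(min(year + 1, len(life_cycle))))

for y in range(8):
    print(f"year {y}: {total_sales(y)} units")
# Totals climb from 100, then plateau at 230 once six cohorts overlap.
```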

Exploring scholarly communication with Ansoff’s Product/Market matrix

This portfolio strategy is an example of ‘Product Development’, one of four core strategies for competitive markets outlined by Igor Ansoff in his Product/Market matrix (Figure 5.7) (Ansoff, 1987).

Figure 5.7 Igor Ansoff’s Product/Market matrix Source: Igor Ansoff (1987) New Corporate Strategy. New York: Wiley. Reproduced by permission.

Fighting for share of an existing market – the ‘Market Penetration’ strategy – is what Publisher X does when promoting its Genetic Regulation book aggressively to counteract the competitor title. It is a way of extending a book’s sales life. Decline will still come, but hopefully later, and with the additional revenues earned outstripping marketing spend.

Product Development – building a portfolio of Genetic Regulation texts – will enable Publisher X to generate a healthy, stable business. This business, however, will only ever be a portion of total sales of books in the field. In other words, it is limited by the size of the publishing market in Genetic Regulation. Publisher X can focus on high-margin products within existing markets, constraining monograph output and emphasising textbooks, and perhaps deploying telesales teams to maximise sales. This is in effect a coping strategy. Sooner or later, the market's limits will be reached. Further progress depends on Ansoff's other two strategies – 'Market Development' or 'Diversification'.

Market Development involves finding new customers for existing products. This can either mean selling to non-purchasing customers in current markets or seeking new customers in different segments. Publisher X’s Genetic Regulation book may have sold well in North America, but have scope to improve in Europe with effective marketing. Alternatively, having sated demand with Genetics researchers, Publisher X might develop a secondary market with Botanists interested in gene regulation. These new markets may be relatively small but, as costs are limited to sales, marketing and distribution, margins can be high.

Publishers are not always the drivers of Market Development strategies. Amazon now accounts for 25 per cent of university press book sales (Esposito, 2009) and has expanded the customer base for scholarly books beyond publishers' traditional markets. This is a significant departure from historical practice, whereby publishers would select and negotiate the markets and segments in which a book is sold.

Diversification is the biggest leap of all – creating new products in new markets, preferably building on strength in related areas. It targets the most vibrant, dynamic segments of a market, but is high risk – in terms of both the chance of failure in an unknown market and forgone opportunities in familiar segments. For this reason, Diversification has been called the ‘suicide cell’ of Ansoff’s matrix. And scholarly publishing is an industry which has historically been averse to speculative endeavour, let alone suicidal behaviour. There are consequently few examples of successful product diversification in scholarly and professional publishing. Most impressive is LexisNexis, now a division of Elsevier, with turnover of £2.6bn in 2010.8 This mighty oak grew from a full-text search facility indexing legal cases in New York and Ohio, launched by Mead Data Central as LEXIS in 1973. By 1980, Mead had hand-keyed all US federal and state legislation, creating a unique, comprehensive resource for the US legal profession. That same year, NEXIS was added, providing journalists with a searchable database of legal news articles (Harrington, 1984). Another illustration, also from Elsevier, is Scopus, which expanded a family of abstracting and indexing products into a comprehensive bibliometric resource capable of competing with Web of Science. A third example is Wiley Job Network, a paid-for recruitment site built on the back of Wiley’s publishing business (see http://www.wileyjobnetwork.com).

We mention Diversification in passing. Many of the products created – being in new market segments – cease being scholarly publishing as we recognise it. The rewards can be large, but the risks involved should not be underestimated.

Why Ansoff’s model works differently for journals

Translated from books to journals, Ansoff's four strategies take a rather different form. Why? Books and journals both experience economies of scale, meaning that unit costs typically decline as portfolios or print runs grow. Journals additionally have 'Network Effects': their value increases exponentially as their community grows (see Exhibit 9). Books, conversely, have intrinsic value, being purchased simply for what is inside their covers.

While a book business benefits from scale, a journal business requires both scale and widespread accessibility and visibility within its research communities. This fundamental difference between books and journals was for a long time concealed beneath the superficial similarity of the two printed products, leading some people to assume that the same publishing strategies work for both. This is not the case, as we shall explore.

Exhibit 9: Network Products – and the power of Network Effects

Products which become more valuable the more they are used (and the more users they have) are said to have 'Network Effects'. These are also called 'positive externalities'. Classic examples include telephone networks (of curiosity value if one person owns a phone, hugely powerful once millions connect) and Facebook (of increasing value to users as more of their friends join). Products and markets with Network Effects increase in value exponentially the more they are used (Katz, 1994). Metcalfe's Law formalises this for telephone networks, stating that the value of a telecommunications network is proportional to the square of the number of users the network has (Gilder, 2003).

Journals exhibit strong Network Effects. If 10 000 scholars read a journal online, it is heavily cited, has a high Impact Factor and is influential in its discipline. A journal with 50 readers is an obscure title in a research niche. All other things being equal, usage determines the relative value of these two journals, even though the cost base of the two is identical. Phil Davis and Leah Solla have generalised this relationship for Chemistry journals, tentatively showing that journal usage has a Power Law relationship with the size of the user base, rising disproportionately as the user base grows (Davis and Solla, 2003).
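In symbols – a paraphrase of the two claims above, with the constants generic rather than drawn from the cited studies – these relationships read:

```latex
% Metcalfe's Law: the value V of a network grows with the square of its n users
V \propto n^{2}

% Davis/Solla-style power law: journal usage U versus user base N, with b > 1
U = a \, N^{b}, \qquad b > 1
```

Any exponent b greater than 1 is enough to produce the winner-takes-most hierarchies described below; strictly quadratic growth is not required.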

Books, on the other hand, have no Network Effects. Their value is intrinsic. The hundredth person to read a book has the same reading experience as the first, and value is not changed through use. Whilst uninterested in Network Effects, a book publisher will be highly conscious of economies of scale. A print run of 10 000 books has far lower unit costs than one of 50 copies, creating a far higher margin on each book sold.

Network Effects matter because they amplify differences between journals, and do so disproportionately. The reasons for this are self-evident. A journal in a global ‘Big Deal’ has higher readership than a self-published title with a handful of subscribers. Likewise, a journal at a subject-specialist publisher like the American Medical Association is more visible and more highly read than one at a publisher with no other Medical titles. Sales reach, learned society memberships, prominence within a research community, promotion, online platform and researcher habit all generate positive externalities. Some people argue that Open Access, semantic enrichment, data integration and other techniques for embedding content more deeply in the research community have similar effects. The jury is still out, but we should expect that some of these initiatives will prove useful. Journals with strong Network Effects see readership and Impact Factor rise consistently, drawing in a better and better selection of the available papers. Ultimately, these journals rise to the top of their fields, with a hierarchy of journals arrayed below them in accordance with their externalities.

So far, so conventional. Do we really need Network Effects to explain how the journal market operates? We do, and Metcalfe's Law sheds light on why. There are a finite number of researchers in any field. Tenopir and King (2008) have found that the average US scientist reads more than 280 articles annually (obviously this varies between disciplines), a number which is increasing but again is finite. As journals with greater Network Effects gain quality papers, other journals must lose out, concentrating good research increasingly in small clusters of titles. And indeed this is what we see. In Environmental Science, for instance, more than 77 per cent of all JCR-ranked research was published in journals of 100+ papers in 2010, up from 69 per cent in 2006. Over the same period, the median size of journals in the area rose from 68 to 80 articles.9

In the wider world, markets governed by Network Effects are extremely hierarchical. The 90+ per cent market share that Microsoft enjoyed in computer operating systems for some years is the extreme, monopolistic end of the spectrum. Ted Lewis, however, has demonstrated that an equilibrium state for a network market with four significant players is 63 per cent share for the market leader, 26 per cent for the 'fast follower', and just 8 per cent and 2 per cent for the two 'laggards' (Lewis, 2009). The proportions vary between markets and depending on how many participants a market has, but the principle is clear – a single dominant player tends to gain a significant structural advantage in network markets.

What is the optimal strategy for a journal publisher in the age of Network Effects? It is likely to be a hybrid. Journal publishing still contains large fixed costs – for example, in building and developing online platforms, editorial and production processes, and in operating global marketing and sales forces. Economies of scale spread these costs across more titles as journal portfolios grow, lowering the cost borne by each journal. Network Effects meanwhile rise as accessibility and visibility increase, making journals more valuable as their circulation and embeddedness in the research community rise. Publishers who can achieve universal access and high visibility in their research communities, whilst being large enough to realise economies of scale – and generate positive cashflow – will dominate the next decade of journal publishing.

Intriguingly, this future is business-model neutral. Everything is in play.


9Data collated from Thomson Reuters’ 2009 Journal Citation Reports, with thanks to James Hardcastle in Taylor & Francis’ Research & Business Information team.

Journals succeed when an active, well-networked Editor serves a viable research community to a high standard. Put another way, a network product (the journal) converts the expertise of a network (the research community) into tangible, digestible outputs (articles, issues). Henry Oldenburg achieved this with the launch of Philosophical Transactions of the Royal Society in 1665. He could do so because a core ‘invisible college’ (the membership of the Royal Society) had been deliberately constructed (de Solla Price and Beaver, 1966). Its output was a network product which happened to be a printed journal, the best dissemination technology of the time.

Three significant changes have occurred since the time of Oldenburg, however:

1. Enhanced communication - digital technology and the development of social media have transformed researchers’ modes of communication, allowing a journal’s editorial team to be much closer to its research community and facilitating much faster communication.

2. Bibliometric analysis - enabling the editorial team to see clearly the research publication trends in their field of interest, and the position their journal occupies within it.

3. Authors have a voice - and are systematically contributing to the journal publishing process for the first time.

Why do these developments matter? They are important because it is now both feasible and realistic for journals and other academic and scientific publications to be finely attuned to their audience and author base, and to evolve in step with their research community. What was previously a staccato back-and-forth exchange characterised by long silences is becoming a continuous process of co-creation, with scholarly communication leading and catalysing cutting-edge research. The more closely a journal matches its community, the more powerful the Network Effects the journal creates.

Let us return to Ansoff’s Product/Market matrix and examine why the underlying differences between books and journals might favour divergent publishing strategies. We will not dwell on Diversification strategies, which so often create products other than journals.

Like their book counterparts, journals publishers naturally pursue a Market Penetration strategy for some titles – trying to increase market share for existing journals in existing markets. This involves familiar spadework – setting up electronic submission systems, operating production management systems, producing high-quality print and online issues with increasing rapidity, promotion, conference representation, indexing in a disturbingly wide variety of sources, managing Editorial Boards so that they benefit a journal, improving the author experience, and so on. A competent publisher will undertake these activities as a matter of course.

The journals strategy conundrum – product or market development?

The core decision a publisher must make is whether to concentrate on Product Development (launching new journals or acquiring titles) or Market Development (selling current journals to new audiences) – or try both. Which approach is best varies with each different journal list.

Product development in journals publishing

Product Development in a journals business focuses on building networks of researchers around a title. In practice, this depends upon both the Editor’s and the publisher’s ability to refine and improve the journal in light of that community’s evolving needs, so generating the strongest Network Effects. We consider this in the context of new launch journals first.

Achieving best fit with the community takes many forms. For instance, new technologies can help researchers work more effectively. In 2008, Taylor & Francis’ journal Food Additives and Contaminants spun off a Part B dedicated to surveillance papers. Food scientists need to keep track of problems highlighted in these surveillance papers, often wanting to engage with the detail of the data rather than the accompanying research article. We thus built a dedicated database containing the datasets of all papers published in FAC Part B, allowing researchers to search systematically for data on additives and contaminants pertinent to their research.

New journals can also provide higher levels of service than incumbents. Remote Sensing Letters launched in 2010 to provide rapid communication in the remote sensing field. The Editors aim to make decisions on papers within 50 days of submission, and after acceptance edited articles are published online on average in 30 days. The response has been emphatic. The journal received 174 submissions in its first year, whilst usage in the first half of 2011 is 67 per cent up on the same period in 2010. RSL will receive its first Impact Factor in 2012.

Effective launches can help new research communities consolidate. The award-winning Routledge journal Jazz Perspectives has provided a rallying post for researchers in its field, whilst Celebrity Studies is currently raising the prestige of an intriguing interdisciplinary research area. Taylor & Francis’ journal Environmental Communication, driven forward by Editor Professor Steve Depoe, helped the International Environmental Communication Association (IECA) form in 2011. The IECA already has 214 members,10 still part way through its inaugural year.

What is true of new journals often applies to existing journals. Professor Len Evans, Editor-in-Chief of Biofouling, identified that authors increasingly expected very quick turnaround times. In 2007, Taylor & Francis established a 3-week Accept-to-Online-Publication time for the journal, backing the new service level with vigorous promotion. Between 2007 and 2010, full rate library subscriptions increased by 14 per cent (they had previously been in decline), revenue rose 134 per cent and gross margin improved significantly. Submissions over the same period grew by 148 per cent, full text downloads by 222 per cent and citations by 60 per cent. The journal has never been in better shape.

Such initiatives enable a journal to mesh efficiently with its community, extending its network of researchers. Their benefit is multiplied many times when sales reach makes the journal available widely across the globe. It makes no difference how this sales reach is achieved – consortial deals and Open Access can both deliver very wide readership.

Quite how much expanded sales reach can enhance a journal is demonstrated by a natural experiment. Environmental Technology joined Taylor & Francis in 2008, just short of the journal’s 30th anniversary. Between 2007 (the last year with the previous publisher, where it was published online but had no consortial sales deals) and 2010 usage increased by a breathtaking 2726 per cent. The Impact Factor is on a strong upward path, rising in 2009 and 2010, and the journal will publish 67 per cent more pages in 2012 than it did in 2007.
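To put that figure in perspective (our arithmetic, from the percentage just quoted), a 2726 per cent increase means 2010 usage was more than 28 times the 2007 level:

\[
U_{2010} = U_{2007} \times \left(1 + \tfrac{2726}{100}\right) \approx 28.3\,U_{2007}
\]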

Sales models can create substantial challenges for publishers pursuing a Product Development strategy through new journal launches, however. Should new journals be included in sales deals to drive usage and gain an Impact Factor in the shortest time, or excluded to allow full rate subscriptions to grow? Some publishers have concluded that there is no answer to this riddle, and now only launch new titles on Open Access models. This approach throws up questions of its own. Why should an author pay to publish in an unproven title? There is a genuine concern that Open Access publishing may not serve poorly funded, niche research areas effectively, especially those in Humanities and Social Science, and that it will become increasingly difficult to find appropriate outlets in these fields.

What is undeniable is that journals need to maximise access to their research networks, and so generate strong enough Network Effects to compete with established journals. Which economically sustainable business model is best able to deliver this access will be the subject of heated discussion for many years to come.

Market development in journals publishing

A journals Market Development strategy utilises many of the same approaches, but with a different goal: establishing a title within an unfamiliar research network. Verspagen and Werker (2003) conducted an interesting study of how a mix of existing and new journals moved into the field of Evolutionary Economics after its rebirth in the early 1980s. Most successful of all was Research Policy, a wide-ranging journal which researchers identified as the most important journal in this field both pre-1985 and in 2003. The journal was established at the start of the period, but diversified to capture this emergent area and remain a key outlet within it.

As well as expanding across disciplinary boundaries, a journal can spread its geographical footprint. Regional Studies introduced German and French abstracts in 2002 to build readership in those countries and raise the Regional Studies Association’s profile there. Spanish abstracts followed in 2004, and Mandarin in 2008. Between 2002 and 2010, published papers from German and French researchers grew by 50 per cent and citations to the journal by 170 per cent,11 testament to the success of this targeted internationalisation strategy.

But how do Editorial initiatives of this sort help develop a business? Journal publishing is what is sometimes called a ‘dog food business’. Purchasers are not reached directly, but rather by influencing users. Market share is built first in the form of quality articles, which drive readership and so citations, generating high Impact Factor. This in turn draws in better submissions, raising the standard of articles published. It is often described as a ‘virtuous circle’, and is a demonstration of Network Effects in practice. Increasingly, librarians refer to usage data when deciding whether to renew a subscription or to purchase a large sales collection. It follows that Market Development strategies which successfully increase a journal’s quality and worldwide appeal will tend to generate revenue for the publisher, even if this takes the negative form of protecting revenue which would otherwise have been lost.
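One hedged way to formalise this virtuous circle – an illustrative formulation of ours, with S_t and \gamma as assumed quantities rather than measured ones – is as a positive feedback between submissions and Impact Factor:

\[
% S_t: quality-weighted submissions in year t; \gamma: strength of author attraction
S_{t+1} = S_{t}\,(1 + \gamma\,\mathrm{IF}_{t}), \qquad \mathrm{IF}_{t+1} = f(S_{t+1}), \quad f' > 0,\; \gamma > 0
\]

So long as \gamma and f' remain positive, each turn of the loop raises both submissions and Impact Factor – until the finite pool of papers discussed earlier imposes a ceiling.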

Market development strategies can also directly earn revenue. Paid supplements are a very profitable addition to the journal mix, bringing high-margin income for relatively little work. It is still possible, if rare, for journals in the Medical and Pharmacological sectors to earn six-figure sums from supplements each year. Similarly, article reprints have underlain the profitability of Pharma journals for decades. What has become apparent in recent years, however, is that these non-recurrent sales can disappear rapidly in the face of recession, government regulation and low-cost online alternatives. They are unlikely to regain their former importance.

As with books, not all Market Development is publisher-driven. Google is now a powerful source of journal readership and draws in pay-per-view sales from a much broader-based audience. DeepDyve (www.deepdyve.com) wants to make a business from selling journal content outside traditional markets, using an ‘iTunes’ business model for short-term rental of articles at low prices. Meanwhile networks like Mendeley (www.mendeley.com) have the potential to open up new markets whilst simultaneously demonetising them, through a ‘peer-to-peer’ model of article posting. The apocryphal Chinese curse runs: ‘May you live in interesting times’. These are interesting times indeed.

Choosing between product and market development

Our original question asked whether a journal publisher should emphasise Product or Market Development, or indeed both. There is no simple answer to this – it depends on the circumstances of an individual journal list and the environment it operates in. What we can say is that the research community continually grows and evolves. Much of this change can be accommodated by existing journals (Ansoff’s Market Penetration strategy). But emerging research areas may require different kinds of journals (think of the innovative Journal of Visualized Experiments or Journal of Maps, for instance). Publishers must continue to expand their portfolio to meet these changing requirements, through launches or acquisition. If they don’t, other publishers will almost certainly occupy the vacant ground. Most likely, several publishers will have journals in a field, competing for primacy. The winner, as we have noted before, will be the one whose journals are best able to generate Network Effects and draw the community together around them.

If Product Development is an essential strategy for keeping a publisher’s portfolio aligned with research output, there is also a place for Market Development. This may best be thought of as a higher-level activity – a means of extracting greater revenue from maturing journals.

When a journal is young, Product Development is the optimal strategy. As it grows and establishes itself, the focus shifts to Market Penetration. And finally, as the journal matures, it should seek out opportunities for revenue from non-traditional markets – stressing Market Development. Ansoff’s matrix gives us a key to managing the life cycle of a journal.

Programme management and portfolio development

We now turn to how, and why, publishers build portfolios of journals. Historically, the barriers to entering scholarly publishing have been high. Typesetting and printing were highly skilled tasks before Desktop Publishing, and were costly. Conversely, incentives were limited, with most scholarly publishing performed by university presses and learned societies as a labour of love. These society journals – located at the centre of perhaps thousands of researchers in a given subject – had unparalleled connectedness to their research community, providing a large competitive advantage.

As we have seen, the world began changing with Robert Maxwell’s commercialisation of scholarly publishing in the 1950s, and was then transformed by the affordable, powerful computer technology from the 1980s. Now, one person can launch and run a journal. They may not do it well, but if they can generate sufficient submissions, they will be able to publish articles and issues, and perhaps to generate revenue.

Does that mean that everyone will develop portfolios of successful journals? A glance around the scholarly publishing world suggests not. Elsevier, Springer and Wiley published 46.9 per cent of all articles indexed in Thomson Reuters’ Journal Citation Reports in 2009.12 Independent presses publish individual titles for sure, and small clusters possibly, but to develop a quality programme requires scale and many, many skills. Indeed, managing a large journal portfolio is significantly more complicated than was ever the case in the past. Aside from academic Editors, it requires the collaboration of experts in copy-editing, typesetting, proof reading, printing, warehousing, online publication, digital conversion and content management, website design and hosting, submissions and review software, sales systems, customer services, marketing, library relations, editorial management, bibliometrics, finance, strategy and human resources, inter alia – and probably needs to be able to deploy these skills in many countries globally.

Why do publishers go to the trouble of building large journal programmes? The short answer is that only on very large lists can both economies of scale and Network Effects be fully realised. The coexistence of these two forces has long been recognised as a characteristic feature of the software industry (Schmalensee, 2000). Journals, unlike books, are beginning to behave like an online business.

Network Effects and economies of scale

Economies of scale are generated in familiar ways: printers offer special rates to a publisher who brings 1000 journals; it is more affordable to attend a conference relevant to 50 journals than to display a single title, and likewise to spread the investment required for an online platform across a large portfolio; global sales teams generate more revenue per transaction on larger lists. These economies of scale provide a business with the cashflow needed to fund future investment and development. Network Effects enable the publisher’s journals to compete effectively in the market. They are maximised by worldwide sales reach, excellent peer review systems, production which is in step with the research community, a richly interlinked online platform, researcher loyalty and effective marketing strategies.
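In textbook terms – a sketch under the standard fixed-plus-variable cost assumption, with F and v as illustrative symbols rather than figures from any publisher’s accounts – average cost per journal falls as fixed platform, sales and marketing costs are spread across n titles:

\[
% F: shared fixed costs; v: variable cost per journal
\mathrm{AC}(n) = \frac{F}{n} + v, \qquad \mathrm{AC}(n) \to v \ \text{as} \ n \to \infty
\]

The larger the portfolio, the closer average cost approaches the variable floor v – which is precisely the cashflow advantage described above.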

A journal publisher that currently achieves economies of scale but not Network Effects may well be profitable and successful today – but won’t be in a decade’s time. Both are now essential for success, and both are optimised on very large portfolios.

Publishers have attained large scale through four basic strategies:

- Society Friend – providing flexible, tailored services which make learned societies want to publish with you. At a certain point, positive feedback loops kick in, as societies trust publishers associated with high-quality society journals and who have the infrastructure (both in systems and attitude) required for effective society publishing. Blackwell, the so-called ‘honorary not-for-profit’ prior to their acquisition by Wiley, built a comprehensive journal business with this strategy through the 1990s. Sage is now seemingly trying to replicate that strategy.

- Factory Ships – build presence by creating fleets of very large, well-managed journals and actively seeking out quality papers. This is analogous to the Factory Ships which dominate modern fisheries, locating fish with advanced sonar and harvesting them on an industrial scale. The strategy particularly suits owned journals, where a publisher has a free hand to amend the journal to best suit the Factory Ship role. As large journals tend to be cost-efficient, this can be a highly profitable publishing model. Elsevier has been extremely effective at acquiring and developing Factory Ship journals, particularly in STM disciplines.

- Platform Publishing – this is a software industry concept, which involves taking a successful product and cloning it for a new field. Wiley’s phenomenally successful For Dummies series is a classic example of platform publishing, taking a clearly defined formula and adapting it for a wide range of topics. A different take on platform publishing is given by Cell Press, where Elsevier have created a family of five quality journals around the single title Cell since its purchase in 1999. Even more prolific has been Nature, which has now sired 37 daughter journals, all bearing the name ‘Nature…’.

- New Kid in Town – when one has little or no presence in a discipline, an option is to launch titles in emerging and evolving research fields. The risks of launching in an unknown field are, of course, high, and librarians are additionally reluctant to subscribe to new titles. For these reasons, many established publishers have turned away from new journal launches. It is perhaps not a coincidence that smaller, ‘upstart’ publishers have launched titles in the niches left unoccupied, with mixed success. Well thought-through, intelligently conceived launches – like those of Public Library of Science – have flourished. Many others have not. Routledge has attained a leading position in Social Science fields such as Education, Politics and International Relations, and Area Studies, driven mainly by a strategy of launches by the then Carfax imprint in the 1985–95 period.

Starting with a core of quality titles makes all of these strategies easier. Business intelligence gleaned from the research priorities of governments and funders, together with links to key up-and-coming scientists and academic research ‘stars’, can help sharpen the portfolio focus. Throughout, the goal should be to produce good journals publishing high-quality papers. It is not necessary for a title to be the best in its field, but it must serve its research community supremely well, continuously evolving in the light of feedback from scholars. If any of these strategies is executed well, it can then be monetised through an appropriate business model.

Delivering on publishing strategies

In reality, most publishers blend these approaches depending on the opportunities of their particular market segments. For example, a significant portion of Elsevier’s Factory Ship journals arrived in acquisitions such as North Holland (1971), Pergamon (1991) and Academic Press (2001), while Elsevier’s market-leading presence in Food Science and Geography arose from the New Kid in Town strategy of systematically launching journals in emerging disciplines. Meanwhile, Elsevier publishes many high-quality society journals in the Health Sciences, and pursues Platform Publishing with the highly successful Trends journals and Advances annuals. There is no one right way, but rather a suite of strategies to be employed according to the specific external conditions of any journal market, and internal corporate environment.

The desired end point is a comprehensive, quality journal portfolio in the publisher’s chosen subject area(s). This allows economies of scale to be realised, and significant Network Effects to be generated. For some publishers, one subject or a cluster of related subjects is enough. The American Institute of Physics has recently refocused on its core journal programme, for instance (Bousfield, 2011). Others aspire to complete spectrum coverage. An intriguing area for future investigation will be how journals with different roles in the research community – review journals, mainstream Factory Ships, niche sub-field outlets, letters journals, thought leaders, and so on – combine to form journal ecosystems within (or without) a publisher’s portfolio.

A key task becomes the development of metrics to understand journal health and ensure that each title makes a genuine contribution to the portfolio as a whole. If not, disposal or closure may be required. Writing almost 15 years ago, Gillian Page, Bob Campbell and Jack Meadows provided checklists for assessing the health of a journal and determining remedial action if it is ailing (Page et al., 1997). These contain sensible advice and proven formulae for managing complex situations. Today’s reality is perhaps more pragmatic. Faced with engrained difficulties, the simplest option is to consult the journal’s authors through a longitudinal or web survey, examine the journal’s key bibliometrics against its close competitors and discuss the results with the Editor-in-Chief. If a clear action plan is not produced, the journal may well have reached its final destination. There is so much information available today that the challenge is not so much diagnosing the malady, but rather creating a consensual, robust plan for resolution.

As we earlier observed, the changing dynamics of journal publishing have favoured certain publishers over others. Where once large society journals were dominant, they have increasingly ceded primacy to Factory Ship journals (currently, the state-of-the-art repository for Network Effects) and the nascent ‘mega-journals’ like Zootaxa and PLoS ONE. A key question now for journal publishers is whether self-organising groups of researchers, drawn together around issues of Open Access and data transparency through social media, will take the mantle from Factory Ships. This question won’t be answered for many years to come.

Arguments around the ‘Serials Crisis’ are much wider than differing business models – they hinge on who has the power (and moral authority?) to control the research communication process, and what level of services and quality standards (and their inherent costs) are required by those producing and consuming (and assessing?) the material output of research communities worldwide. Will semantic tagging marginalise established journal publishers? Will automated parsing systems render traditional copy-editing and typesetting irrelevant? Will Open Data – i.e. the notion that some data should be freely available to everyone to use, data-mine, mash up and republish as they wish, without restriction by copyright, patents or other ‘ownership’ mechanisms – redefine the relationships between researcher, government, research funder and publisher? We can but guess at the answers to these questions, but the journey is likely to be eventful!

What publishers can do now is continue to manage their journal portfolios intelligently with the long term in mind, manage each journal individually, invest in technology which improves researchers’ publishing experience and experiment in search of newer, more powerful modes of communication. Positioned thus, they will be able to incorporate and benefit from breakthrough technologies when they arrive.

The Tao of Academic Publishing

The world has changed, but the fundamentals haven’t. The other major philosopher who employs a river-image is Wittgenstein. He distinguishes between the movement of the waters on the riverbed and the shift of the bed itself; and states that the bank of the river consists partly of hard rock subject to no alteration or to only an imperceptible one, and partly of sand, which may get washed away, or perhaps more sand will be deposited (Wittgenstein, 1979). For our purposes, the moving waters could be seen here as representing the changing journal content and its surrounding context and technologies, and the riverbed as the basic structure of scholarly communication with the solid rock and the more mutable sand. Will the rocky riverbed itself change, with some of the river banks getting swept away – a true paradigm shift heralding a new age of scholarly communication – or is it all just the sand and shingle moving around with the flow of the waters?13

Scholarly publishing involves groups of researchers judging what is worthwhile work through conference presentations, sharing drafts and research outputs, informal discussion and ultimately peer review; publishers then convert this material into readable and/or functional form and distribute it to small groups of researchers active in that field through journals; researchers read and cite the work, citing papers of most relevance, and so create a hierarchy of importance for papers, journals and authors; in turn, this influences future submission behaviour, creating a strong positive feedback loop.

Put another way, there are huge volumes of communication between researchers about their work. Publishers convert this into formal, version-of-record papers – which are the building blocks of future science. Modern publishers have taken on the mantle of latter-day alchemists, turning the lead of grey literature into the gold of highly cited papers.

As Michael Mabe notes elsewhere in this volume, the journal age was launched by Henry Oldenburg in 1665, establishing the four principles of registration, dissemination, peer review and archive. These have been the constants, the unchanging, immutable bedrock of scholarly communications – yet are there signs, for example in the rise of the scientist’s blog, that they represent a riverbed which is eroding? To these principles we may now add further key publishing requirements which the journals industry is best placed to provide, namely discoverability and access – and which can be characterised as representing a new, more permanent deposit on the riverbed and river banks.

Vannevar Bush announced the beginning of the end of this first journal age with his 1945 essay, ‘As we may think’ (Bush, 1945). Bush dreamt of a time when a globally networked community of scientists collaborated for the common good. There are many indications that that time has come.

Potentially, we now stand at the beginning of the second journal age.

Familiar attributes guide good publishers – service for authors, relationship management, the centrality of peer review, dissemination and high-quality publications, inter alia. On these solid foundations, network science is rising, shaped by powerful Network Effects and lubricated by incredibly rapid, seamless communication between scholars. It will be a fluid, continuously evolving form of science, closer to the evolution of synapses in a brain than the clattering iron frame of a printing press. Above all else, it will be a competitive world, in which progressive, valuable initiatives are rewarded and useless, irrelevant activities are not. We can be certain that community relevance, brand, quality, reputation and the trust factor – all deriving from ‘root journals’ – will continue to play a strong role in research network communication processes and products.

Book publishing is experiencing transformation on a similar scale. Whether this signals the end of the modern book age – as ushered in by Johannes Gutenberg with movable type printing in 1439 – or simply the latest reinvention of an endlessly reinvented industry remains to be seen.

What bedrock strategies should guide the publisher in times of such profound change? Happily, the core values of effective publishing remain unchanged:

1. Treating each publication as its own ‘small business’ is a real differentiator even within the business template – the aim should be for each publication to serve its own particular niche and network to the highest standard possible.

2. Position each title to deal with our changing world – every publication needs to re-examine its aims and scope, and its editorial policies, as new technology opens up new content and market opportunities.

3. The external environment is harsh and will show up the badly equipped – so publishers need to deal with pressures from researchers, authors, editors, societies, research funders, libraries and governments, and be efficient in dealing with them.

4. We should be as concerned with the intellectual health of our publications as we are with their commercial health – as a quality driver.

5. Quality and excellence will be rewarded, in better publications, better usage, better sales, better revenues and better returns.

It is up to us who work in the industry to follow the Zen of Publishing, and balance the Yin of Change with the Yang of Continuity.

‘Returning to the source is stillness, which is the way of nature. The way of nature is unchanging. Knowing constancy is insight.’ Lao Tzu, Tao Te Ching (c. 604–531 BCE)

Acknowledgments and sources of further information

We would like to thank the following colleagues at Taylor & Francis Group for their contributions to and assistance in developing this chapter: Alan Jarvis, Denise Schank, Ian White, Daniel Keirs, Colin Bulpitt, Tony Bruce, Jo Cross, James Hardcastle, Oliver Walton and Lyndsey Dixon. The responsibility for the use of their helpful information and contributions remains, of course, the authors’.

Thanks to Bob Campbell and Ian Borthwick for their editorial guidance, and especially to Bob Campbell for helping develop the philosophical aspects of this chapter.

We are also grateful for industry material available through the trade bodies’, researcher and industry groups’ websites, where there is a motherlode of further information and references:

Publishing Research Consortium (PRC): http://www.publishingresearch.net/

International Association of Scientific, Technical and Medical Publishers (STM): http://www.stm-assoc.org/

Association of Learned, Professional and Society Publishers (ALPSP): http://www.alpsp.org.uk/

UK Serials Group (UKSG): http://www.uksg.org/

Publishers Association (PA): http://www.publishers.org.uk/

The Association of American Publishers (AAP): http://www.publishers.org/

Society for Scholarly Publishing (SSP): http://www.sspnet.org/

Research Information Network: http://www.rin.ac.uk/

Centre for Information Behaviour and the Evaluation of Research, University College London: http://www.ucl.ac.uk/infostudies/research/ciber/downloads/

References

Ansoff, I. New Corporate Strategy. New York: Wiley; 1987.

Bousfield, D. AIP chooses to focus on core business. Outsell Insights, 28 June 2011.

Bush, V. As we may think. The Atlantic. 1945; 176(1):101–108.

Cox, B. The Pergamon phenomenon 1951–1991: Robert Maxwell and scientific publishing. Learned Publishing. 2002; 15:273–278.

Davis, P. M., Solla, L. An IP-level analysis of usage statistics for electronic journals in chemistry: making inferences about user behavior. Journal of the American Society for Information Science and Technology. 2003; 54(11):1062–1068.

Esposito, J. Creating a consolidated online catalogue for the University Press community. LOGOS: The Journal of the World Book Community. 2009; 20(1–4):42–63.

Friedberg, E. C. The Writing Life of James D. Watson. Woodbury, NY: Cold Spring Harbor Laboratory Press; 2005.

Gilder, G. Metcalfe’s law and legacy. Forbes ASAP; 1993.

Harrington, W. G. A brief history of computer-assisted legal research. Law Library Journal. 1984; 77(3):543–556.

Jasco, P. A look at the endangered species of the database world. Information World Review. 2000; 164:72–73.

Katz, M. Systems competition and network effects. Journal of Economic Perspectives. 1994; 8(2):93–115.

Lewis, T. G. Network Science: Theory and Practice. Hoboken, NJ: Wiley; 2009.

Nitterhouse, D. Digital Production Strategies for Scholarly Publishers. Chicago: University of Chicago Press; 2005.

Okerson, A. ‘A History of E-journals in 10 Years: and What it Teaches Us’. Presentation to an EBSCO Seminar, Jerusalem, 13 August 2000.

Page, G., Campbell, R., Meadows, J. ‘Managing a list of journals’, in Journal Publishing. Cambridge, UK: Cambridge University Press; 1997, pp. 321–45.

Schmalensee, R. Antitrust issues in Schumpeterian industries. The American Economic Review. 2000; 90(2):192–196.

Schumpeter, J. A. ‘Creative destruction’, in Capitalism, Socialism and Democracy. New York: Harper; 1975 (orig. pub. 1942), pp. 82–5.

Shiner, R. A. Wittgenstein and Heraclitus: Two River-Images. Philosophy. 1974; 49:191–197.

de Solla Price, D., Beaver, D. Collaboration in an Invisible College. American Psychologist. 1966; 21(11):1011–1018.

Tenopir, C., King, D. W. Electronic journals and changes in scholarly article seeking and reading patterns. D-Lib Magazine. 2008; 14(11/12).

Verspagen, B., Werker, C. The invisible college of the economics of innovation and technological change. Estudios de economia aplicada. 2003; 21(3):393–419.

Wittgenstein, L. On Certainty. Oxford: Basil Blackwell; 1979.


1Word, reason, plan, or account, the fundamental order in a changing world. There is a publishing journal, founded by Gordon Graham, formerly Chairman of Butterworths: LOGOS: The Journal of the World Book Community http://www.brill.nl/logos

2http://michaelnielsen.org/blog/is-scientific-publishing-about-to-be-disrupted

3http://www.openscience.org/blog/?p=269

4A good source tracking the development of e-journals in the 1990s is Okerson (2000).

5http://www.arl.org/resources/pubs/specscholmono/renfro~print.shtml

6http://www.aaupnet.org/images/stories/documents/aaupbusinessmodels2011.pdf

7http://touchpress.com/titles/thewasteland

8Reed Elsevier Annual Report 2010 (2011), accessed online 26 September 2011: http://reports.reedelsevier.com/ar10/business_review/lexisnexis/2010_financial_performance

10Data retrieved from http://environmentalcomm.org/founding-members on 27 September 2011

11Data retrieved from Thomson Reuters’ Web of Knowledge on 27 September 2011.

12Data collated from Thomson Reuters’ 2009 Journal Citation Reports, with thanks to James Hardcastle and Jo Cross in Taylor & Francis’ Research & Business Information team.

13We are grateful to our Editor, Bob Campbell, for bringing the philosophers’ river-images to our attention and starting to elaborate this idea in the context of a possible journals industry paradigm shift.