Does journal publishing have a future?
The demise of the peer-reviewed journal has been often predicted especially since the advent of digital publishing. This chapter examines what forces might impinge on journal publishing to precipitate such a collapse and whether they might succeed.
Publishing and information science conferences over the last four decades have regularly debated whether the journal has a future. Originally these speculative sessions were predicated on the imminence of the electronic transition. With that transition now receding into history, the speculation continues, but driven by other engines. This chapter examines what forces might undermine the journal and looks at whether they are likely to succeed.
To merely pose the question ‘Does journal publishing have a future?’ does not help answer it; we must deconstruct what a functioning journal system does, and for whom, in order to analyse how and where it might collapse. A standard business tool for strategic market analysis, the political, economic, sociological and technological (PEST) approach (Aguilar, 1967), is very helpful here: it breaks our Big Question down into four domains, each with its own Key Question:
Key Question 1: Research behaviour: will researchers still communicate and be evaluated by journal publication?
Many analyses of the scholarly communication system focus on the what and the how of communication but rarely the why. Understanding the why is locked up in the sociology and psychology of human discovery: how can I establish that I discovered something first, how sure can I be that my claims will be accepted, and how do I prevent others stealing my ideas and passing them off as their own? These issues lie at the heart of answering our first question. But aspects of this sit even deeper, embedded in the DNA of what it means to say something is knowledge.
In Plato’s epistemological formulation, knowledge is justified true belief, that is to say, I can only assert that a belief (an observation if you will) is knowledge if it can be justified and it turns out to be true. In philosophy of science, experimental results are observed (the collection of beliefs), they are put together to create theories that might explain them (justification) and are subsequently tested by further observation to see if they are true. In this context the scientific method is an epistemological engine generating knowledge out of observation.
This philosophical structure is also embedded in the evolved system of scholarly communication. An investigator reports on his or her observations (beliefs), relates them to the existing literature and the tests he or she conducts (the justification) and has them tested by external critical comment by peer reviewers (the test of truth). Of course this is only partially correct as the truth of the hypothesis is rarely contained or proved by a single experiment or paper reporting it. For the whole body of work in a field, however, the analogy is apt and I believe goes some way to explaining the essential conservatism of how academic work is reported. Albeit dimly, investigators know they have to demonstrate that what they saw once in their apparatus on a wet Wednesday is true everywhere for everyone for all time. For science, the status of knowledge is insufficient: that knowledge must be objective, seen by all. In Ziman’s phrase, science is public knowledge (Ziman, 1968).
The fundamental needs of researchers have been well studied over the last twenty or so years (Tenopir and King, 2000; Mabe and Amin, 2002; Mabe, 2003, 2009; Mabe and Mulligan, 2011). One of the key conclusions of this work is that what researchers want of their communication system very much depends on their role. What they want as producers of research is very different from what they want as consumers of it. We can sum up the findings of these studies as follows:
The journal as a method of organising and communicating knowledge serves all these needs through the information functions it possesses. These functions were established right at the creation of the journal by its inventor, the diplomat and administrator Henry Oldenburg (1619–77), who introduced them when he conceived the world’s first scientific journal, Philosophical Transactions, in 1665. The functions of the journal à la Oldenburg (Zuckerman and Merton, 1971; Merton, 1973), which deliver the author and reader needs outlined above, are:
And these functions are all achieved via creation and then management of the ‘journal brand’: the type of material and range of authors published, the rigour of the peer review process, and the attitude of the community the journal serves to its quality and importance.
Oldenburg created the world’s first research journal as first Joint Secretary of the newly founded Royal Society of London. He did this to solve a number of challenges faced by early scientists. Principal among these was the desire to establish priority: the first observers of a phenomenon or result wanted their status as discoverers to be publicly acknowledged and secured before they were prepared to share their results with their colleagues. Oldenburg realised that a periodical publication run by an independent third party could resolve this dilemma for the pioneering scientists of his age by faithfully recording the name of a discoverer, the date he or she submitted the paper, as well as his or her description of the discovery. We can see this clearly in the letters that survive from Oldenburg to his patron Sir Robert Boyle (Hall and Hall, 1965–86), one of the founders of the Royal Society, and in the surviving records of the Royal Society:
[We must be] very careful of registring as well the person and time of any new matter.., as the matter itselfe; whereby the honor of the invention will be inviolably preserved to all posterity. [Oldenburg to Boyle, 24 November 1664]
Launched on 6 March 1665, Philosophical Transactions did exactly this. In its monthly issues, it registered the name of the authors and date that they sent their manuscripts to Oldenburg as well as recording their discoveries in their own words. This simple act secured the priority for first authors and encouraged them to share their results with others, safe in the knowledge that their ‘rights’ as ‘first discoverers’ were protected by so doing.
Philosophical Transactions from the outset did not publish all the material it received; the Council of the Royal Society reviewed the contributions sent to Oldenburg before approving a selection of them for publication. Albeit primitive, this is the first recorded instance of ‘peer review’. It was quickly realised by Oldenburg’s contemporaries that the accumulating monthly issues of the journal also represented a record of the transactions of science of archival value.
The four functions of Oldenburg’s journal, registration, dissemination, peer review and archival record, are so fundamental to the way scientists behave and how science is carried out that all subsequent journals, even those published electronically in the 21st century, have conformed to Oldenburg’s model. All modern journals carry out the same functions as Oldenburg’s and all journal publishers are Oldenburg’s heirs.
The growth in the size of the literature since Oldenburg’s day, with at the time of writing over 25 000 active peer-reviewed journals in existence, has made finding individual articles increasingly difficult for readers. This need to locate and retrieve has sometimes been called the ‘fifth function’: navigation. Fifty years after the creation of the journal, the first abstracting journals appeared, with the role of helping readers navigate through the expanding number of papers. Price showed in the 1960s how the growth of this secondary publishing kept pace with the primary literature, estimating a ratio of about 300:1 between primary and secondary journal numbers (Price, 1963). Journals also developed their own indexes (subject and author) and published them on a regular basis.
All of this navigation apparatus has of course been swept away by the introduction of abstracting and indexing databases, first in paper form and now electronically. These have drawn upon a key feature required for registration and dissemination: consistent bibliographic citation. The journal system has since spawned a unique digital identifier for each article, the DOI (digital object identifier), which acts both as a persistent pointer to the article and as a link to it. Citation has taken on a life of its own as a result of citation indices [originally developed by Garfield (1955) to help identify the core literature] being expanded and often misused to form a ‘brownie point’ system for identifying top journals, scholars and their institutions. Such quality metrics have become extremely important for the government agencies that fund research and have to justify their expenditure of public money, and in turn have begun to drive institutional macro-level behaviour and policy. This important topic is dealt with elsewhere in this book (see Chapter 10).
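The DOI’s dual role as identifier and link can be sketched in a few lines. A DOI consists of a registrant prefix (always beginning ‘10.’) and a publisher-assigned suffix, separated by a slash, and it becomes a clickable link when appended to the central resolver at doi.org. The sketch below uses a made-up DOI purely for illustration; it is not a real article identifier.

```python
def parse_doi(doi: str) -> tuple[str, str]:
    """Split a DOI into its registrant prefix and item suffix."""
    prefix, _, suffix = doi.partition("/")
    if not prefix.startswith("10.") or not suffix:
        raise ValueError(f"not a well-formed DOI: {doi!r}")
    return prefix, suffix

def resolver_url(doi: str) -> str:
    """Turn a bare DOI into a resolvable link via the doi.org resolver."""
    parse_doi(doi)  # validate before building the URL
    return "https://doi.org/" + doi

# '10.1234/example.5678' is a hypothetical DOI, used for illustration only.
print(parse_doi("10.1234/example.5678"))    # ('10.1234', 'example.5678')
print(resolver_url("10.1234/example.5678"))
```

Because the resolver redirects to whichever platform currently hosts the article, the same citation string remains valid even when publishers reorganise their websites, which is precisely the ‘persistent pointer’ property noted above.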
Evidence of the way researcher needs are served by the journal functions can be found in the literature. Probably the biggest continuous survey of author attitudes to publishing has been that carried out by Elsevier, reported by Mulligan and Mabe (2011). A summary of one whole year of responses from 63 384 authors is illustrated in Figure 17.1. This shows that the concerns about publication speed and therefore priority (the registration function in effect) remain paramount alongside those of quality (peer review, the certification function).
Despite considerable changes in the economic, sociological and technical environments within which academic authors publish over the last 300 years, their needs and therefore the journal functions remain unchanged. We can see this in the physical form of journal articles. Figures 17.2–17.4 show the opening pages of scientific articles published in 1672 in Philosophical Transactions (Newton’s famous report on the spectrum of white light), in 1985 in Nature (Nobelists Kroto, Smalley et al. announcing the discovery of buckminsterfullerene) and in 2009 in the online edition of Tetrahedron Letters. In all three cases the same structural features can be found: the title of the article and the name and affiliation of its author(s); the date the paper was received by the journal and, in the later examples, when it was accepted for publication; and the date of publication. Together these instantiate the registration function (who, what, when) and the certification function (date of acceptance after peer review and publication). The presence of the journal title (with bibliographic citation metadata fulfilling the navigation function) in each case also illustrates the importance of that branding to the status of the article and can be viewed as an example of the dissemination function.
Could these functions be delivered in another way or by a different type of publication vehicle? It is clear that despite considerable evolution in many other spheres, the mechanisms for delivering author and reader needs seem to be largely in a state of stasis. If we were to view this as an evolutionary path we might conclude that natural selection had forced the manner in which scholarly material is published down what is largely a single route. The major new selective pressure, however, is the introduction of digital publishing via the Internet. So far this has not had the effect that many have predicted, but clearly, with only 20 years under its belt, the World Wide Web may yet surprise us: that is a topic for the next Key Question. Nevertheless there have been some novel approaches to scholarly publishing inspired by digital developments and these should be examined in the light of the functional model discussed above.
Discussions about whether the cardinal four functions could be delivered separately have been around for some time (Smith, 1999; Crow, 2002; Van de Sompel et al., 2004) and have been amplified by the growth of institutional repositories for digital material at scholarly institutions. There have also been subject-based repositories developed by associations or funding bodies. Could these replace the journal?
Figure 17.5 shows a cross-comparison of various publishing vehicles and whether they are able to deliver the functions needed by academics. Unless documents have already been registered and certified by the journal process they will have the same uncertain status as any material that can be found on the Internet. The only vehicle that delivers all the functions simultaneously in one place as part of a unitary act of publication, distancing the interests of the author from those of the certifier, is the scholarly journal. Those who favour repositories either have to accept that journals are needed to give the documents status or have to find novel mechanisms that can work within a repository framework.
One suggestion has been to create ‘peer review panels’ that would sample the papers uploaded to repositories around the world, select the best and provide the certification function. It is assumed by adherents of this approach that the repositories are able to register, disseminate and archive. While the latter two functions could be achieved by repositories (although dissemination is about much more than simply placing stuff online and involves the brand value of the collecting entity as well as active dissemination tools), it is far from clear that an institutional repository could function as an independent, community-run, international collection and act as a registration centre, as these repositories are located in, funded by and intended to serve specific universities in specific nations. This might be possible with a subject-based repository, but even here issues of national politics (such repositories tend to be run and funded by nationally based and government-funded institutions) come to the fore. One cannot imagine a repository run by an agency of the former US Bush administration being too keen on publishing stem cell research.
I also believe that the idea of ‘peer review panels’ floating around and singling papers out for stardom from the morass of non-peer-reviewed content is organisationally flawed. Under the current journal system, authors (who have a clear self-interest in the matter) seek out journals that are appropriate for their needs and hope to succeed through the journal’s peer review system. Under the proposed ‘overlay journal’ system the peer review panels would have to read everything on all repositories worldwide before deciding which papers to review thoroughly and give status. In the current system authors are motivated to seek out a brand which ‘pulls’ them towards it and which thereby re-establishes and enhances its identity. In the alternative system, the peer review panel has to find the papers it wants in the haystack of un-refereed material on the web and ‘push’ status onto them. What if the author does not like the overlay journal panel’s standards – does he or she have any choice in the matter? This also leaves aside the motivating factors for the ‘overlay journal team’. In the case of the traditional journal the editor-in-chief evaluates what he or she receives and sends it on to appropriate peers for review. The editor can exercise control over what is received and over who (and how often) an academic is asked to peer review. In a very real sense the publication is publicly recognised as ‘his/her’ journal, a reputational reward in addition to any honorarium or expenses received. In the case of the panel of reviewers it is unclear why they would comb the whole literature and then peer review what they select, especially if the authors then say they do not want to appear there. Human factors are important in publishing and contribute to its success or failure.
A much bigger issue is whether the canonical journal function system will be affected by changes in demographics or by the singular nature of the practice in some disciplines. To examine this possibility, let’s look at what would cause each of the functions to cease to work.
Registration is about establishing priority and ownership of an idea. Clearly, one’s attitude to the ownership of the ideas presented in a paper will depend in part on how many authors that article has. So one factor that would contribute to a lessening of the desire to be registered would be a significant growth in co-authorship levels.
Certification is principally about peer review. Peer review practice will tend to vary by discipline, reflecting the nature of the research undertaken. The need for peer review will also be affected by the size of the discipline and whether practitioners could reasonably be familiar with most of the others working in their field. In areas where collaboration levels are high, the examination of the paper by co-authors may be so extensive that the additional review by referees appointed by a journal may not be seen to add much.
Dissemination is about public active disclosure. This will depend upon the levels of access to the journal by readers and the visibility of the content to search. While a situation could be envisaged where access levels (via subscription-related models) would be too low to sustain the interest of authors in publishing in a journal – a situation that was being approached in the paper-only publishing era – the growth of electronic delivery to usage levels in excess of those achieved even at the peak of paper circulation suggests this function is unlikely to break down, despite vociferous claims to the contrary. Indeed, recent studies show that academic satisfaction levels concerning access to research articles are very high, and higher than for other types of content, especially data (Publishing Research Consortium, 2010).

The archival function is about ensuring permanence, and this depends upon the organisations involved and the technology applied. Currently, archive arrangements for articles are well thought through and do not raise undue concerns, although the situation for data is very different (Smit and Gruttermeier, 2011).
So, where could function breakdown occur? Figure 17.6 (from Mabe and Amin, 2002) shows the growth in co-authorship over the last 50 years. The average paper in 1950 had about 1.75 authors; by 2000 this had grown to just under 4. This does not suggest that the strength of the registration function will have been significantly undermined by 2050, when, assuming a similar growth in collaboration, the average paper will possibly have eight co-authors. Each author may care a little less than if the paper were theirs alone, but they will still care. The exception here, as in many cases, can be found in the field of high-energy physics, where there are few but very large research groups, and papers with over 100 co-authors (even over 200) are not uncommon. In this case, whether each of these myriad co-authors cares enough for registration to matter becomes a moot point. It is not uncommon for such papers to appear first on preprint servers such as arXiv, and one can posit that the diminished need for registration contributes to this. This does not just affect registration, however, because, as noted above, if a hundred or more co-authors have collectively corrected a draft paper it is difficult to see how a further two anonymous referees are going to help much. As co-authorship grows, then, certification also becomes less important.
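The 2050 figure above is simple arithmetic: if the mean number of authors per paper roughly doubled between 1950 and 2000, and that multiplicative trend simply continues, the extrapolated value lands at around eight to nine. A quick sketch follows; the 1950 and 2000 values are read off the chapter’s description of Figure 17.6, so treat them as illustrative approximations rather than exact data.

```python
# Mean authors per paper, as described in the text (approximate
# readings of Figure 17.6, used here for illustration only).
authors_1950 = 1.75
authors_2000 = 3.9   # "just under 4"

# Assume the same multiplicative growth over the next half-century.
growth_per_half_century = authors_2000 / authors_1950   # about 2.23x
authors_2050 = authors_2000 * growth_per_half_century

print(round(authors_2050, 1))  # about 8.7, i.e. eight or so co-authors
```

The exact figure depends entirely on the assumption that collaboration keeps growing at the same proportional rate; the point of the passage is only that even a doubling of co-authorship leaves each author with enough stake for registration to matter.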
The nature of the discipline also affects the degree to which the certification function is important. Those subjects where the number of practitioners is small and the personnel of research groups are known widely throughout the discipline will have less need of an independent anonymous check to enhance trust and authority. If I know the researchers well I can probably judge how much I trust their work. These conditions are fulfilled in smaller subdisciplines where each researcher knows the others, such as theoretical physics. Equally, if the nature of the research work is sui generis – that is, if it is contained within the paper without any need for external experiment (as in mathematics, theoretical physics and economics), or if it describes or actually operates a model (as do the programs of computer science) – the reader may form his or her own view of the trustworthiness of the paper without any need for independent reviewers. On the other hand, for most other subjects, where the number of research groups is large, their members are not known personally to each other, and the work is of an experimental nature, trust and authority can only be enhanced by the act of peer review.
In 2005 I directed a major survey study to test some of these hypotheses (see Figure 17.7 and Mulligan and Mabe, 2011). The vast majority of respondents to the survey gave answers that emphasised the continued importance of the four functions to the work of the journal even in the Internet age. However, a number of questions related to the decoupling of the article from the journal had statistically significant deviations from the majority view in particular subject areas. These were high-energy physics, mathematics, computer science and economics, exactly as predicted by the functional model.
So where does this leave us in answering our first Key Question? From a philosophical perspective, the journal model is well designed to fit the requirements of knowledge generation. It also fits well with the generalised needs of authors and readers, and this has not been unduly affected by the rise of digital publishing or by so-called alternative systems such as the repository landscape. That said, there is a minority of subjects whose practice does not sit well with the canonical journal model, and here we are seeing differences. In most cases these deviations are due to factors intrinsic to specific disciplines rather than to the more general growth in co-authorship. I think we can safely say that, at least for the foreseeable future and with the noted exceptions of high-energy physics, economics, mathematics and computer science, most researchers will continue to want to communicate and be evaluated by journal publication.
Key Question 2: Technology: has the Internet changed everything?
The late 1990s, the era of the first Internet bubble, were a heady time. Many observers were convinced that the introduction of the Internet and the web would fundamentally overthrow almost all existing practice. Two quotations catch the Zeitgeist rather well:
It is sometimes difficult to remember that most of the Internet tools we have become familiar with and use virtually every day are barely 20 years old. In scholarly publishing the key landmarks have been
So, has the Internet changed everything? Since over a generation has now passed with electronic content and services as the norm, it is difficult to argue that the observed lack of change is due to users only just getting used to the new medium. We are at the early stage of maturity in terms of technology adoption. The results of recent studies on motivations for publishing by researchers seem to show that less has changed than we might hope or fear (University of California, 2007; RIN, 2009; Harley et al., 2010; Mulligan and Mabe, 2011).
The Core Trends Survey (Mulligan and Mabe, 2011) was organised and funded by Elsevier with the collaboration of the CIBER research unit at UCL and the NOP polling organisation. It was inspired by an earlier survey conducted at the very end of the paper publishing era (Coles, 1993), and with over 6000 respondents, Core Trends is one of the largest surveys of its kind to date. In the Coles study, a number of motivations were identified and respondents invited to indicate which two were the most important. Figure 17.7 shows the results from this study in comparison with the results from the later one.
Dissemination is the most important motivation in the Coles study, with 57 per cent of the respondents indicating this is the reason why they publish. However, first stated motivations can be misleading; respondents often choose a response that is ‘top of mind’, which can conveniently convey a more altruistic position than might truly exist, especially when the respondents are sophisticated and know that their answers are likely to be seen by their peers. Analysis of secondary motivations can get beyond this somewhat ‘obvious’ response to reveal the ‘covert’ motivations that are more likely to be influential in driving behaviour. In the Core Trends study, respondents were also invited to indicate which two motivations were the most important for them. The most important motivation was again ‘dissemination’ (73 per cent); ‘furthering my career’ and ‘future funding’ were the key secondary motivations.
By comparing the two studies we can determine whether there has been motivational shift in the intervening 12 years between the surveys and whether the Internet has had an effect on researcher motivations. In terms of change to secondary motivations, ‘recognition’ and ‘establishing precedent’ have clearly increased, especially the latter, but in general there has been no substantive shift in secondary motivations (which are more likely to represent the covert or ‘underlying’ motivators and relate directly to the canonical journal functions) as the research community moved from a fully paper-based environment to a virtually fully electronic one.
Yet clearly some things have changed. At the obvious level virtually all journals are now online and online has become the definitive edition with paper as the add-on. There are new communication possibilities, especially in the earlier informal interactions that precede formal public publication:
With the exception of wikis (which create the entirely new functionality of many-to-many written interaction), these are new tools but they are new tools for old purposes. They make communicative acts that were already happening more efficient through the application of technology. The challenge for the publisher, though, is how such enhancements might be monetised, and so far there is no clear answer.
We can think of this as a communication ecology (Altheide, 1994) with each communication type occupying a particular niche. In almost all cases, the addition of a technology component does not change the communication instance but merely enhances it. Table 17.1 shows the dimensions that a communication instance may adopt. Table 17.2 gives an example of how this works, looking at the instance of a live lecture and the consequences of technological enhancement.
Directionality: unidirectional (except for Q&A).
Enhancement: none in the lecture hall itself, but technology allows development to ‘at a distance’ and ‘recorded’ forms – the broadcast (but with reduced directionality) and the webcast (with no reduction in directionality).
Thus an enhanced lecture which becomes a television broadcast or a webcast is still a lecture although much more useful for distance and time-shifted applications. Similarly, a one-to-one interactive communication between two people is at its simplest either an in-person conversation or an exchange of letters, or else a telephone conversation or an exchange of emails, or else a VoIP conversation or an exchange of instant messages. These are all the same communicative act but with slightly different enhancements that can both add and detract from the utility of the communicative act, but they are not something entirely new. Looking at the current publishing technology landscape with communications ecology glasses on reveals why so little appears to have changed in the fundamentals.
The biggest imponderable for this Key Question is whether a killer application could arise that would sweep all away. Based on the analysis here, it seems unlikely given that all the communication ecological niches appear to be occupied, but it cannot be ruled out entirely. So the Internet has not changed everything yet – but there still remains a remote possibility that it might, and it certainly has changed attitudes, as we shall see later.
Key Question 3: Business models: will there be any viable business models to sustain publishing operations with net returns?
There are a wide variety of business models which publishers can adopt singly or jointly for the journal. These range from supply-side models involving author payments, to demand-side reader payments, and to tolls and tariffs of various sorts. The most familiar approaches have been user-based models where either the reader pays directly, or the readers’ agents (in an academic context, libraries) pay, or national authorities pay through national site licences or their equivalent.
Supply-side payment models have been introduced over the last decade or so. These, commonly called ‘open access’ models, involve authors paying directly to publish, their institution paying for them to be able to publish, or, in the case of funded research, their funders paying. Third-party tolls and tariffs have existed for some time and largely comprise advertising or telecommunication access charges. Experience has shown that in the academic marketplace there is fairly limited applicability for advertising models, except in broad-spectrum journals with a magazine component (Nature, Science, BMJ, etc.).
Sponsorship of publications by charities, foundations, companies or government has also existed for some time, long before this approach was subsumed under the open access heading (a significant proportion of journals listed in the Directory of Open Access Journals are actually using a sponsorship model). The most recent business model arrival has been the time-share or rental approach of the new start-up DeepDyve in which online access only (without being able to print) is granted for a limited amount of time for a micropayment (usually a few dollars).
Currently, the majority business model remains a subscription or electronic licensing one, with about 95 per cent of the market by article share. Pay-to-publish models have roughly a 2–5 per cent share. Overlapping with both of these is advertising, also with about 5 per cent, but used jointly with subscriptions or another approach. The biggest challenge to sustainable business models in the future has been the enthusiasm for a variety of approaches clustered under the moniker ‘open access’, which, unlike any other business model variant, have vocal advocacy from the ‘open access movement’. So what are these varieties of open access, and do they challenge future sustainability?
A rough working definition of open access would be ‘a combination of philosophy and business models allowing all readers (not just those within institutions) free online access to scholarly research literature without payment at point of use’. Increasingly, the terms of access and use granted to the user are assuming greater importance, with unrestricted re-use being termed by advocate Peter Suber ‘libre open access’ and access with restrictions, such as non-commercial re-use only, being termed ‘gratis open access’. The details of such doctrinal niceties are beyond the scope of this chapter but are becoming important and could easily affect the viability of the models proposed.
Open access comes in a number of variants, depending upon what is made open, when it is made open and how it is made open. The what question itself derives from the progressive smearing out of the published/unpublished dichotomy: once a stark binary choice in the paper world, it has digitally become a spectrum allowing varying degrees of ‘published’.
stage one – author’s un-refereed draft manuscript for consideration by a journal, often called (especially in physics) a preprint (Author’s Original in the NISO nomenclature, see NISO, 2008);
stage two – the author’s final refereed manuscript accepted for publication by a journal, containing all changes required as a result of peer review but without copy-editing or any of the sophisticated digital enhancements possessed by the final article on a platform (Accepted Manuscript in the NISO nomenclature); and
stage three – the final published article as it appears on the journal’s platform, copy-edited, formatted and digitally enhanced (Version of Record in the NISO nomenclature).
In terms of when it is made open, there are two possibilities: immediately upon publication or at some time period after it, often called an embargo period. The how question is largely one of business model, if there is one. Using these definitions it is possible to disentangle the often complex mix of open access variants currently practised and these are shown in Table 17.3.
Each of the four types of open access described in Table 17.3 has advantages and disadvantages. In the case of ‘Gold’ and ‘Delayed’ open access, the final peer-reviewed published version of the article is made freely available via two different but sustainable business models. ‘Gold’ is a pay-to-publish model. Because it uses research funds to pay for publication, the money available to publish increases in line with the growth in funded research, thereby avoiding one of the biggest causes of the serials crisis in the subscription or licensing system: the inability of library funding to keep pace with the volume of research published. It also makes the article available online immediately upon publication. The downside is that unless care is taken with the choice of the re-use rights environment, the roughly 20 per cent of journal income arising from corporate subscribers (who read but do not author) is lost.
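The serials-crisis arithmetic described above can be made concrete with a toy projection. The growth rates below are illustrative assumptions for the sake of the sketch, not figures drawn from this chapter:

```python
# Toy sketch of the 'serials crisis' arithmetic: if research output grows
# faster than library budgets, subscription income per article must fall,
# whereas 'Gold' pay-to-publish income scales with output by construction.
# All growth rates here are assumptions, purely for illustration.

def project(start: float, annual_rate: float, years: int) -> float:
    """Compound growth of a quantity over a number of years."""
    return start * (1 + annual_rate) ** years

years = 10
articles = project(100, 0.035, years)        # assumed article growth ~3.5%/yr
library_budget = project(100, 0.02, years)   # assumed budget growth ~2%/yr

per_article_index = library_budget / articles * 100
print(f"After {years} years, subscription income per article falls to "
      f"{per_article_index:.1f}% of its starting level; 'Gold' income "
      f"per article tracks output and stays at 100%.")
```

Any positive gap between the two growth rates produces the same squeeze; the chosen numbers only affect how fast it bites.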
The ‘Delayed’ approach has the advantage that no new business model is required, but it rests on the hypothesis that free availability of content does not affect the business model. For short embargoes under a subscription model this assumption seems counterintuitive, and much heat has been expended in arguing whether free availability undermines sales. The risk is clearly visible in data made available by Elsevier from their ScienceDirect platform, shown in Figure 17.8. Even the fastest-moving disciplines (life sciences and medicine) would potentially give up a significant proportion of their saleable downloads if content were made free at an embargo period of six months from publication. For slower-moving fields such as chemistry, only half of lifetime downloads have occurred after 18 months, and the social sciences do not even reach this halfway mark by then.
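The embargo risk can be illustrated with a simple decay model. Assuming, purely for illustration and not from the Elsevier data, that download activity decays exponentially with a discipline-specific half-life, the share of lifetime downloads still to come when an embargo ends is easy to compute:

```python
import math

def downloads_after_embargo(embargo_months: float, half_life_months: float) -> float:
    """Fraction of an article's lifetime downloads occurring after the
    embargo ends, under an exponential-decay model of download activity."""
    return math.exp(-math.log(2) * embargo_months / half_life_months)

# Hypothetical download half-lives in months -- illustrative assumptions only
half_lives = {"life sciences": 6, "chemistry": 18, "social sciences": 36}

for field, h in half_lives.items():
    frac = downloads_after_embargo(6, h)
    print(f"{field}: {frac:.0%} of lifetime downloads fall after a 6-month embargo")
```

The slower the field, the larger the fraction of saleable downloads a short embargo gives away, which is the pattern the ScienceDirect figures suggest.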
The self-archiving or ‘Green’ approach comes in a variety of flavours, depending upon whether the author is simply depositing the accepted manuscript version of his or her paper spontaneously in the institution’s digital repository or whether deposit is occurring systematically under a funding body (or other) mandate. While the haphazard deposit of these intermediate but peer-reviewed items will probably have little effect on the journals from which they come, the systematic variant not only starts to resemble an earlier version of the ‘Delayed’ approach but also has no business model at all. Apparently free, it is in fact a ‘nobody-pays’ model, parasitic on the status, authority and critical honing provided to an article by the journal system. At the time of writing, the European Commission-funded PEER Project has been gathering data to help resolve what the effect of a Europe-wide embargo mandate might be, and to look at the practical viability of ‘Green’ open access more widely. The results are expected during 2012 (www.peerproject.eu).
Preprint servers have existed for a long time, right from the onset of digital publishing in 1991, when Paul Ginsparg created the high-energy physics preprint server arXiv at Los Alamos National Laboratory (Ginsparg, 1994). At the time it was claimed that this was ‘publishing for free’, but since then it has become clear that it is neither publishing nor free. Nevertheless, as a clearing-house for draft papers providing an early-warning system for particular research communities it has proved popular and successful. The idea has not spread very far from its home in physics. It has been embraced by mathematics, in places by economics, and by computer science, but it has gained little traction in the life sciences and is positively deprecated in chemistry, where the prior appearance of an un-refereed manuscript is still treated as first publication and bars the author from submitting the public preprint to any journal.
As these early, draft non-peer-reviewed manuscripts come directly from their authors and have no journal investment in them, their widespread availability causes none of the potential parasitism of ‘Green’. However, they are just un-refereed drafts and most researchers still want to see the final paper and have the reassurance that it was accepted and published by a journal that they value.
It is clear that open access models are here to stay, but it is far from clear whether they are yet truly sustainable. There are considerable question marks over ‘Green’ and ‘Delayed’, and preprints do not really deliver the sort of authoritative content users demand. ‘Gold’ has the potential to be a viable alternative to existing model mixes, as a recent study confirms (RIN CEPA, 2011), but the issues of re-use of material and cannibalisation of corporate revenue remain, as does the question of whether article processing charges high enough to sustain a fully 100 per cent ‘Gold’ universe would be acceptable to the market. Net returns would still be possible, but might require considerable curtailment of services.
We are already seeing experiments with minimalist ‘Gold’ open access journals: those where editorial intervention, and any peer review involving more than basic methodological checks, have been abandoned. PLoS ONE was the forerunner of this new type of ‘no-frills’, high-volume, low-cost journal publishing. Trading on the PLoS brand and a perception (not always borne out in reality) of extreme speed of publication, it has become a runaway success, publishing thousands of papers at a modest but certainly far from zero publishing fee. This has enabled PLoS to cross-subsidise its very expensive, high-quality titles, which were not breaking even at article processing fees in the order of $3,000. Many other publishers have jumped on this bandwagon, launching look-alikes in various disciplines. If successful, these initiatives could change the market significantly.
Key Question 4: Political Zeitgeist: will public (political) attitudes regarding the Internet make publishing impossible?
In the section on technology I remarked that surprisingly little had changed in terms of the fundamentals. The broader sociopolitical ramifications of the digital revolution have nonetheless been considerable. The Internet’s effects on values and attitudes to information have been profound, and it is these that have created a very challenging environment for political and public discourse.
The key factor in this is simple and obvious: Digital is Different! Digital documents can be reproduced infinitely and are, despite the best efforts of software producers, infinitely changeable. This has profound consequences for all players in the chain: for business models, copyright, issues of authority and trust, and public attitudes. If one copy can serve the world, then access controls, or perhaps the abandonment of access barriers altogether, become central concerns. If each document can be altered, how can we trust that we really are looking at the final version? If documents are decoupled from the organisations that issue them, how can we be sure that they carry the authority of those organisations with them?
Some of these issues are resolvable through branding and watermarking (such as the CrossRef project CrossMark which will certify with a logo the Versions of Record of articles). Issues of business models and copyright are much less tractable. The biggest problems of all, however, are those created by the attitudes which digital operation throws up. Some of these arise from the culture of the Internet but others are sui generis.
‘e = free’. Here the non-tangibility of digital objects (‘e’ standing for electronic) prejudices the consumer into thinking that the non-physical must be free or at lower cost (a debate already happening over the pricing of e-books). It has been estimated that for STM journal publishing a fully digital chain with no paper products at all would potentially reduce prices by about 10–20 per cent but certainly not to zero (Ware and Mabe, 2009).
‘yours = mine’. Here the ethos of the 1960s hippy counterculture meets the expectations of the Internet generation. This follows quite naturally from the first slogan: if electronic objects are ‘free’, why should they not be copied and shared? The recording industry has already discovered the consequences of this phenomenon (napsterisation and the like) and the motion picture industry is having similar problems. E-books are regularly pirated, and the market cannot understand why an e-book is not substantially cheaper than any other edition (or indeed free). The radical wings of the open source and open access movements are much influenced by this supposition.
‘(intellectual) property = theft’. There have always been those who disagree in principle with the concept of copyright (perhaps most notoriously the science fiction writer Cory Doctorow, who views it as a threat to democracy!). In the past, the physical difficulty of making copies of a work reinforced the legal notions of intellectual property (IP). Now the tables are turned: the digital universe pulls seductively in the opposite direction to copyright law, leading to calls to ‘update’ IP law and bring copyright out of the ‘quill pens and paper’ era into the sunlit digital uplands. The habits of the mass market and the lure of ‘free’ attract politicians into supporting such measures.
‘public funding = public access’. This is probably the most dangerous, pernicious and erroneous slogan for the journal publishing community. Although it is true that the majority of research worldwide is funded by government (the public purse), publication is not. It could be argued that the outputs of research funding are the data collected and the preprints (if there are any). As I pointed out above, and within the limits imposed by those disciplines that frown on the public posting of preprints, making preprints fully available is possible and would have no effect on journals whatsoever. But this is not what the politicians want. They want something (the peer-reviewed final published paper, with all the investment made in it by the publisher) for nothing, and this slogan has underlain most political discussions of ‘Green’ open access mandates, especially in the USA and the EU.
So, to answer the last Key Question, can publishing still be successful in such a hostile political environment? The jury is out on this, but providing a copyright framework is maintained and politicians can be persuaded that what is apparently ‘free’ may turn out to be expensive for their economies in the long run, there is some hope.
So, can we now answer the question posed at the outset: ‘Does journal publishing have a future?’ I hope I have shown that scholarly behaviour is remarkably unchanged, and while technology has provided new tools, these are new tools for the same old purposes.
Based on these conclusions, the answer to our Big Question would be ‘Yes, probably’. And yet it is clear that profound behaviour shifts have been observed in a few, largely predictable, subject areas. It is also clear that while we do not yet have the killer application, the memes of the Internet world are leaking out to affect attitudes to information across the board. Where these attitudes feed public and political positions they threaten to undermine the basic evidence-based approaches we have lived with for many years. Some business models will work in the future, but all of this depends on continuing respect for copyright, a sympathetic IP law regime and business conditions that make publishing economic.
Predicting the future is a difficult game. Gottlieb Daimler, inventor of the petrol-powered car, said in 1889: ‘There will never be a mass market for motor cars - about 1,000 in Europe - because that is the limit on the number of chauffeurs available!’ I hope this chapter’s conclusions may fare better in the future than his.
Crow, R. The Case for Institutional Repositories: A SPARC Position Paper. SPARC. Available at: http://www.arl.org/sparc/bm~doc/ir_final_release_102.pdf, 2002.
Harley, D., Krzys, S., Earl-Novell, S., Lawrence, S. and King, C. J. Final Report: Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines. CSHE 1.10, Centre for Studies in Higher Education, UC Berkeley. Available at: http://cshe.berkeley.edu/publications/publications.php?id=351, 2010.
Mabe, M. A. What do authors really care about? Presentation at the Fiesole Collection Development Retreat 2003. Available at: http://digital.casalini.it/retreat/2003_docs/Mabe.ppt, 2003.
NISO. Journal Article Versions. Available at: http://www.niso.org/publications/rp/RP-8-2008.pdf, 2008.
Publishing Research Consortium. Access versus Importance. Available at: http://www.publishingresearch.net/documents/PRCAccessvsImportanceGlobalNov2010_000.pdf, 2010.
RIN. Communicating Knowledge: How and Why UK Researchers Publish and Disseminate their Findings. London: Research Information Network. Available at: http://www.rin.ac.uk/communicating-knowledge, 2009.
RIN CEPA. Heading for the Open Road: Costs and Benefits of Transitions in Scholarly Communication. London: Research Information Network. Available at: http://www.rin.ac.uk/system/files/attachments/Dynamics_of_transition_report_for_screen.pdf, 2011.
University of California. Faculty Attitudes and Behaviors Regarding Scholarly Communication: Survey Findings from the University of California. Available at: http://osc.universityofcalifornia.edu/responses/materials/OSC-survey-full-20070828.pdf, 2007.
Ware, M. and Mabe, M. A. The STM Report: An Overview of Scientific and Scholarly Journals Publishing. Oxford: International Association of Scientific, Technical and Medical Publishers. Available at: http://www.stm-assoc.org/2009_10_13_MWC_STM_Report.pdf, 2009.