THREE Business and Technology – Peter F. Drucker on Nonprofits and the Public Sector


Business and Technology

TECHNOLOGY HAS BEEN FRONT-PAGE NEWS for well over a century—and never more so than today. But for all the talk about technology, not much effort has been made to understand it or to study it, let alone to manage it. Economists, historians, and sociologists all stress the importance of technology—but then they tend to treat it with “benign neglect,” if not with outright contempt. (On this, see the “Historical Note” at the end of this essay.)

More surprisingly, business and businessmen have done amazingly little to understand technology and even less to manage it. Modern business is, to a very considerable extent, the creature of technology. Certainly the large business organization is primarily the business response to technological development. Modern industry was born when the new technology of power generation—primarily water power at first—forced manufacturing activities out of home and workshop and under the one roof of the modern “factory,” beginning with the textile industry in eighteenth-century Britain. And the large business enterprise of today has its roots in the first “big business,” the large railroad of the mid-nineteenth century, that is, in technological innovation. Since then, the “growth industries,” down to computer and pharmaceutical companies of today, have largely been the outgrowth of new technology.

At the same time, business has increasingly become the creator of technology. Increasingly, technological innovation comes out of the industrial laboratory and is being made effective through and in business enterprise. Increasingly, technology depends on business enterprise to become “innovation”—that is, effective action in economy and society.

Yet business managers, or at least a very sizable majority of them, still look upon technology as something inherently “unpredictable.” Organizationally and managerially, technological activity still tends to be separated from the main work of the business and organized as a discrete and quite different “R & D” activity which, while in the business, is not really of the business. And until recently business managers, as a rule, did not see themselves as the guardians of technology and as concerned at all with its impact and consequence.

That this is no longer adequate should be clear to every business manager. It is indeed the thesis of this essay that business managers have to learn that technology is managerial opportunity and managerial responsibility. This means specifically:

1. Technology is no more mysterious or “unpredictable” than developments in the economy or society. It is capable of rational anticipation and demands rational anticipation. Business managers have to understand the dynamics of technology. At the very least, they have to understand where technological change is likely to have major economic impact and how to convert technological change into economic results.

2. Technology is not separate from the business and cannot be managed as such if it is to be managed at all. Whatever role “R & D” departments or research laboratories play, the entire business has to be organized as an innovative organization and has to be capable of technological (but also of social and economic) innovation and change. This requires major changes in structure, in policies, and in attitude.

3. The business manager needs to be concerned as much with the impacts and consequences of technology on the individual, society, and economy as with any other impacts and consequences of his actions. This is not talking “social responsibility”—that is, responsibility for what goes on in society (e.g., minority problems). This is responsibility for impact of one’s own actions. And one is always responsible for one’s impact.

These last ten years there has been a widely reported “disenchantment with technology.” It is by no means the first one in recent history (indeed, similar “disenchantments” have occurred regularly every fifty years or so since the mid-eighteenth century). What is certain, however, is that technology will be more important in the last decades of this century and will, in addition, change more than in the decades just past. Such great needs as the energy crisis, the environmental crisis, and the problems of modern urban society make this absolutely certain. Indeed, one can anticipate, with high probability, that the next twenty-five years will see as much, and as rapid, technological change as did the “heroic age” of invention, the sixty years between the mid-nineteenth century and the outbreak of World War I. In that period, which began in 1856 with Perkin’s discovery of aniline dyes and Siemens’ design of the first workable dynamo, and which ended in 1911 with the invention of the vacuum tube and with it of modern electronics, today’s “modern”—and even tomorrow’s “postmodern”—worlds were born. In this “heroic age” a new major invention appeared on the average every fifteen to eighteen months, to be followed almost immediately by the emergence of a new industry based on it. The next twenty-five or thirty years, in all likelihood, will far more resemble this late nineteenth-century period than the fifty years after World War I, which, technologically speaking, were years of refinement and modification rather than of invention. To the business and the businessmen who persist in the traditional attitude toward technology, the attitude which sees in it something “mysterious,” something “outside,” and something for which other people are responsible, technology will therefore be a deadly threat. But to business and businessmen who accept that technology is their tool, but also their responsibility, technology will be a major opportunity.

Anticipating and Planning Technology

The “unpredictability of technology” is an old slogan. Indeed, it underlies to a considerable extent the widespread “fear of technology.” But it is not even true that invention is incapable of being anticipated and planned. Indeed, what made the “great inventors” of the nineteenth century—Edison, Siemens, or the Wright brothers—“great” was precisely that they knew how to anticipate technology, to define what was needed and would be likely to have real impact, and to plan technological activity for the specific break-through that would have the greatest technological impact—and, as a result, the greatest economic impact.

It is even more true in respect to “innovation” that we can anticipate and plan; indeed, with respect to “innovation,” we have to anticipate and plan to have any effect. And it is, of course, with “innovation” rather than with “invention” that the businessman is concerned. Innovation is not a technical, but a social and economic, term. It is a change in the wealth-producing capacity of resources through new ways of doing things. It is not identical with “invention,” although it will often follow from it. It is the impact on economic capacity, the capacity to produce and to utilize resources, with which “innovation” is concerned. And this is the area in which business is engaged.

It should be said that technology is no more “predictable” than anything else. In fact, predictions of technology are, at best, useless and are likely to be totally misleading. Jules Verne, the French science fiction writer of a hundred years ago, is remembered today because his predictions have turned out to be amazingly prophetic. What is forgotten is that Jules Verne was only one of several hundred science fiction writers of the late nineteenth century—which indeed was far more the age of science fiction writing than even the present decade. And the other 299 science fiction writers of the time, whose popularity often rivaled and sometimes exceeded that of Jules Verne, were all completely wrong. More important, however, no one could have done anything at the time with Jules Verne’s predictions. For most of them, the scientific foundations needed to create the predicted technology did not exist at the time and would not come into being for many years ahead.

For the businessman—but also for the economist or politician—what matters is not “prediction,” but the capacity to act. And this cannot be based on “prediction.”

But technology can be anticipated. It is not too difficult—though not easy—to analyze existing businesses, existing industries, existing economies and markets to find where a change in technology is needed and is likely to prove economically effective. It is somewhat less easy, though still well within human limitations, to think through the areas in which there exists high potential for new and effective technology.

We can say flatly first that wherever an industry enjoys high and rising demand, without being able to show corresponding profitability, there is need for major technological change and opportunity for it. Such an industry can be assumed, almost axiomatically, to have inadequate, uneconomic, or plainly inappropriate technology. Examples of such industries would be the steel industry in the developed countries since World War II or the paper industry. These are industries in which fairly minor changes in process, that is, fairly minor changes in technology, can be expected to produce major changes in the economics of the industry. Therefore, these are the industries which can become “technology prone.” The process either is economically deficient or it is technically deficient—and sometimes both.

We can similarly find “vulnerabilities” and “restraints” which provide opportunities for new technology in the economics of a business and in market and market structure. The questions “What demands of customer and market do the present business and the present technology not adequately satisfy?” and “What demands of customer and market remain unsatisfied?”—that is, the basic questions underlying market planning—are also the basic questions for defining what technologies are needed, appropriate, and likely to produce economic results with minimum effort.

A particularly fruitful way to identify areas in which technological innovations might be both accessible and highly productive is to ask: “What are we afraid of in this business and in this industry? What are the things which all assert ‘can never happen,’ but which we nonetheless know perfectly well might happen and could then threaten us? Where, in other words, do we ourselves know at the bottom of our hearts that our products, our technology, our whole approach to the satisfaction we provide to market and customer, is not truly appropriate and no longer completely serves its function?” The typical response of a business to these questions is to deny that they have validity. It is the responsibility of the manager who wants to manage technology for the benefit of his business and of his society to overcome this almost reflexive response and to force himself and his business to take these questions seriously. What is needed is not always new technology. It might equally be a shift to new markets or to new distributive channels. But unless the question is asked, technological opportunities will be missed, will indeed be misconceived as “threats.”

These approaches, of which only the barest sketch can be given in this essay, apply just as well to needs of the society as to needs of the market. It is, after all, the function of the businessman to convert need, whether of individual consumer or of the community, into opportunities for business. It is for the identification and satisfaction of that need that business and businessmen get paid. Today’s major problems, whether of the city, of the environment, or of energy, are similarly opportunities for new technology and for converting existing technology into effective economic action. At the same time, businessmen in managing technology also have to start out from the needs of their own business for new products, new processes, new services to replace what is rapidly becoming old and obsolescent, that is, to replace today. To identify technological needs and technological opportunities one also starts out, therefore, with the assumption that whatever one’s business is doing today is likely to be obsolete fairly soon.

This approach assumes a limited and fairly short life for whatever present products, present processes, and present technologies are being applied. It then establishes a “gap,” that is, the sales volume which products and processes not yet in existence will have to fill in two, five, or ten years. It thus identifies the scope and extent of technological effort needed. But it also establishes what kind of effort is needed. For it determines why present products and processes are likely to become obsolescent, and it establishes the specifications for their replacement.
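The gap logic described above can be made concrete with a toy calculation (all figures and rates here are hypothetical, chosen only for illustration): if today’s products are assumed to lose a fixed share of their sales each year as they become obsolete, the difference between the revenue target and the projected residual revenue is the volume that products not yet in existence will have to supply.

```python
# Toy "gap analysis" sketch; all figures are hypothetical.
# Current revenue erodes as today's products become obsolete;
# the gap is what new, not-yet-existing offerings must fill.

def revenue_gap(current_revenue, target_growth, decay_rate, years):
    """Return the revenue gap after `years`, assuming present
    products lose `decay_rate` of sales per year while the
    business targets `target_growth` annual growth overall."""
    target = current_revenue * (1 + target_growth) ** years
    residual = current_revenue * (1 - decay_rate) ** years
    return target - residual

if __name__ == "__main__":
    # Two-, five-, and ten-year horizons, as in the text.
    for horizon in (2, 5, 10):
        gap = revenue_gap(100.0, 0.05, 0.15, horizon)
        print(f"year {horizon}: new products must supply {gap:.1f}")
```

The widening of the gap with the planning horizon is the point: it defines not only the scope of the technological effort needed but also how soon it must begin.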

Finally, to be able to anticipate technology, to identify what is needed and what is possible, and above all what is likely to be productive technology, the business manager needs to understand the dynamics of technology. It is simply not true that technology is “mysterious.” It follows fairly regular and fairly predictable trends. It is not, as is often said, “science.” It is not even the “application of science.” But it does begin with new knowledge which is then, in a fairly well-understood process, converted into effective—that is, economically productive—application.

The Pace of Technology

It is often asserted today that technology is moving at a lightning pace, as compared with earlier times. There is no evidence for this assertion. It is equally asserted that new knowledge is being converted much faster into new technology than at any earlier time. This is demonstrably untrue. In fact, there is a good deal of evidence that it takes longer today to convert new knowledge, and especially new scientific knowledge, into technology than it did in the nineteenth century—if not, indeed, in the eighteenth and earlier centuries. There is a lead time, and it is fairly long.

It took some twenty-odd years from Siemens’ design of the first effective dynamo to Edison’s development of the electric light bulb, which first made possible an electrical industry. It has taken at least as long, in fact it has taken longer, from the design of the first working computer in the early forties to the production of truly effective computers—let alone to the development of the “software” without which a computer is (as was the early electric company) a “cost center” rather than a producer of wealth and economic assets. And there are countless similar examples. The lead time for the conversion of new knowledge into effective technology varies greatly between industries. It is perhaps shortest in the pharmaceutical industry. But even there, it is closer to ten years than to ten months. And, in any one industry, the lead time seems to be fairly constant.

What has shortened is the time between the introduction of new technology into the market and its general acceptance. There is less time to establish a pioneering position, let alone a leadership position. But even there, the “lead time” has not shortened as dramatically as most people, including most businessmen, assume. For both the electric light bulb and the telephone, that is, for the 1880s, the lead time between a successful technological invention and widespread—indeed, worldwide—acceptance was a few months. Within five years after Edison had shown his light bulb to the invited journalists, every one of the major electrical manufacturing companies in existence today in the Western world (excepting only Philips in Holland) was established, in business, and a leader in its respective market. And the same held true for the telephone and for telephone equipment.

In other words, it is the job of the businessman to understand what new knowledge is becoming acceptable and available, to assess its possible technological impact, and to go to work on converting it into technology—that is, into processes and products. He has to know, for this, not only the science and technology of his own field. Above all, he has to know that major technological “breakthroughs” very often, if not usually, originate in a field of science or knowledge that is different from that in which the old technology had its knowledge foundations. In this sense, the typical approach to “research,” that is, the approach of developing specialized expertise in the field in which one already is active, is likely to be a bar to technological leadership rather than its main pillar, as is commonly believed. What is needed, in other words, is a “technologist,” rather than a “scientist.” And often a layman, with good “feel” for science and technology, and with genuine intellectual interest, does this much better than the highly trained specialist in a technical or a scientific field—who is likely to become the prisoner of his own advanced knowledge.

It is not necessary, it is indeed not even desirable, for the businessman to become a “scientist” or even a “technologist.” His role is to be the manager of technology. This requires an understanding of the process of technology and of its dynamics. It requires willingness to anticipate tomorrow’s technology and, above all, willingness to accept that today’s technology with its processes and products is becoming obsolete rapidly. It requires identifying the needs for new technology and the opportunities for it, in the vulnerabilities and restraints of the business, in the needs of the market and in the needs of society. Above all, it requires acceptance of the fact that technology has to be considered a major business opportunity, the identification and exploitation of which is what the businessman is paid for.

The Innovative Organization

The next quarter century, as has already been said, is likely to require innovation and technological change as great as any we have ever witnessed. Most of this, however, in sharp contrast to the nineteenth century, will have to be done in and by established organizations, and especially in and by established businesses.

It is not true, as is often said, that “big business monopolizes innovation.” On the contrary, the last twenty-five years have been preeminently years in which small business and often new and totally unknown businesses produced a very large share of the most effective innovations. Xerox was nothing but a small paper merchant as late as 1950. Even IBM was still a small company and a mere pygmy, even in its own office equipment industry, as late as World War II. Most of today’s pharmaceutical giants were either small companies at the end of World War II or barely in existence, and so on.

But still, increasingly, the major effort in technological change is in development and in market introduction. These do not require “genius.” They require very large numbers of competent people and large sums of capital. Both are indeed found in established institutions, whether business or government.

Altogether the existing businesses will have to become innovative organizations. For the last fifty to seventy-five years our emphasis has, properly, been on managing what we already know and understand. For the pace of technological innovation—and even the pace of economic change—in these last seventy-five years was, contrary to popular belief, singularly slow.

Now business will again have to become entrepreneurial. And the entrepreneurial function, as the first great Continental European economist J. B. Say (1767–1832) saw clearly almost two centuries ago, is to move existing resources from areas of lesser productivity to areas of greater productivity. It is to create wealth not by discovering new continents, but by discovering new and better uses for the existing resources and for the known and already exploited economic potentials. And technology, while not the only tool for this purpose, is an important one and may well be the most important one.

The great task of business can be defined as counteracting the specific “law of entropy” of any economic system: the law of the diminishing productivity of capital. It was on this “law” that Karl Marx based his prediction of the imminent demise of the “bourgeois system.” Yet capital has not only not become less productive, it has steadily increased its productivity in the developed countries—contrary to the assumed “law.” But Karl Marx was right in his premise. Left to its own devices, any economy will indeed move toward steadily diminishing productivity of capital. The only way to prevent it from becoming entropic, the only way to prevent it from degenerating into sterile rigidity, is the constant renewal of the productivity of capital through entrepreneurship—that is, through moving resources from less productive into more productive employment. This, therefore, makes technology the more important the more highly developed technologically a society and economy become.

In the next twenty-five years, when the world will have to grapple with a population problem, an energy problem, a resources problem, and a problem of the basic community, that is, the city, this function is likely to become increasingly more critical—independent, by the way, of the political, social, or economic structure in a developed economy, that is, independent of whether the “system” is “capitalist,” “socialist,” “Communist,” or something else.

This will require businessmen to learn how to build and how to manage an innovative organization. Normally, the innovative organization is discussed in terms of “creativity” and of “attitudes.” What it requires, however, are policies, practices, and structure. It requires, first, that management anticipate technological needs, identify them, plan for them, and work on satisfying them.

It requires, second, and perhaps more important, that management systematically abandon yesterday.

“Creativity” is largely an excuse for doing nothing. The problem in most organizations which are incapable of innovation and self-renewal is that they cannot slough off the old, the outworn, the no longer productive. On the contrary, they tend to allocate to it their best resources, especially of good people. And any body incapable of eliminating waste products poisons itself eventually. What is needed to make an organization innovative is a systematic policy for abandoning the no longer truly productive, the no longer truly contributing. The innovative organization requires, above all, that every product, every process, every activity, be put “on trial for its life” periodically—maybe every two or three years. The question should be asked: “If we did not do this already, would we now—knowing what we now know—go into it?” And if the answer is No, then one does not ask: “Should we abandon it?” Then one asks: “How can we abandon it, and how fast?”

The organization, whether business, university, or government agency, which systematically sloughs off yesterday, need not worry about “creativity.” It will have such a healthy appetite for the new that the main task of management will be to select from among the large number of good ideas for the new the ones with the highest potential of contribution and the highest potential of successful completion.

But beyond this, the innovative organization needs specific policies. It needs measurement and information systems which are appropriate to the economic reality of innovation—and a regular, moderate, and continuous “rate of return on investment” is the wrong measurement. Innovation, by definition, is only cost for many years, before it produces a “profit.” It is first an investment—and a return only much later. But that also means that the rate of return must be far larger than the highest “rate of return” for which managers plan in a managerial type of business. Precisely because the lead time is long and the failure rate high, a successful innovation in an innovative organization must aim at creating a new business with its potential for creating wealth, rather than a nice and pleasant addition to what we already have and what we already do.
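The cash-flow shape of innovation described above—years of pure cost before any return—can be sketched with a simple net-present-value comparison. The discount rate and all cash flows below are hypothetical, invented only to illustrate the argument: a project that burns cash for several years justifies itself only if it ends in something far larger than a marginal addition to the existing business.

```python
# Sketch of innovation economics; all figures are hypothetical.
# NPV discounts each year's cash flow back to the present, so
# early losses weigh heavily and distant payoffs are shrunk.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows at a discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Innovation: five years of R&D cost, then returns only from year 6 on.
innovation = [-10, -10, -10, -10, -10, 0, 15, 30, 45, 60]
# A "managerial" project: small, steady, immediate returns.
incremental = [5] * 10

print(npv(innovation, 0.10))
print(npv(incremental, 0.10))
```

Only because the eventual payoff is large does the innovation project come out positive at all; scale its later cash flows down to the steady project’s level and the long lead time sinks it. That is the arithmetic behind the demand that innovation aim at a new business rather than a pleasant addition.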

Finally, we will have to realize that innovative work is not capable of being organized and done within managerial components, that is, units concerned primarily with work on today and on tomorrow morning. It needs to be organized separately, with different structural principles and in different structural components. Above all, the demands on managerial self-discipline and on clarity of direction and objectives are much greater in innovative work and have to be extended to a much larger circle of people. And therefore, the innovative organization, while organically a part of the ongoing business, needs to be structurally and managerially separate. Businessmen, to be able to build and lead innovative organizations, will, therefore, have to be able to do both—manage what is already known and create the new and unknown. They will have to be able to optimize the existing business and to maximize the potential business.

These, to most businessmen, are strange and indeed somewhat frightening ideas. But there are plenty of truly innovative businesses around—in practically every country—to show that the task can be done, and is indeed eminently doable. In fact, what is needed primarily is recognition—lacking so far in most management thinking and in almost all management literature—that the innovative organization is a distinct and different organization, and is not only a slightly modified managerial organization.

Responsibility for the Impact of Technology

Everybody is responsible for the impact of his actions. This is one of the oldest principles of the law. It applies to technology as well. There is a great deal of talk today about “social responsibility.” But surely the first point is not responsibility for what society is doing, but responsibility for what one is doing oneself. And therefore, technology has to be considered under the aspect of the businessman’s responsibility for the social impacts of his acts. In particular, there is the question of the “by-product impacts,” that is, the impacts which are not part of the specific function of a process or product but are, necessarily or not, occurring without intention, without adding to the intended or wanted contribution, and indeed as an additional cost—for every by-product which is not converted into a “salable product” is, in effect, a waste and therefore a cost.

The topic of the responsibility of business for its social impacts is a very big one. And the impacts of technology, no matter how widely publicized today, are among the lesser impacts. But they can be substantial. Therefore, the businessman has to think through what his responsibilities are and how he can discharge them.

There is, these days, great interest in “technology assessment,” that is, in anticipating impacts and side effects of new technology before going ahead with it. The U.S. Congress has actually set up an Office of Technology Assessment. This new agency is expected to be able to predict what new technologies are likely to become important, and what long-range effects they are likely to have. It is then expected to advise government what new technologies to encourage and what new technologies to discourage, if not to forbid altogether.

This attempt can only end in fiasco. “Technology assessment” of this kind is likely to lead to the encouragement of the wrong technologies and the discouragement of the technologies we need. For future impacts of new technology are almost always beyond anybody’s imagination.

DDT is an example. It was synthesized during World War II to protect American soldiers against disease-carrying insects, especially in the tropics. Some of the scientists then envisaged the use of the new chemical to protect civilian populations as well. But not one of the many men who worked on DDT thought of applying the new pesticide to control insect pests infesting crops, forests, or livestock. If DDT had been restricted to the use for which it was developed, that is, to the protection of human beings, it would never have become an environmental hazard; use for this purpose accounted for no more than 5 or 10 percent of the total at DDT’s peak, in the mid-sixties. Farmers and foresters, without much help from the scientists, saw that what killed lice on men would also kill lice on plants, and made DDT into a massive assault on the environment.

Another example is the “population explosion” in the developing countries. DDT and other pesticides were a factor in it. So were the new antibiotics. Yet the two were developed quite independently of each other; and no one “assessing” either technology could have foreseen their convergence—indeed, no one did. But more important as causative factors in the sharp drop in infant mortality, which set off the “population explosion,” were two very old “technologies” to which no one paid any attention. One was the elementary public health measure of keeping latrine and well apart—known to the Macedonians before Alexander the Great. The other one was the wire-mesh screen for doors and windows invented by an unknown American around 1860. Both were suddenly adopted even by backward tropical villages after World War II. Together they were probably the main “causes” of the “population explosion.”

At the same time, the “technology impacts” which the “experts” predict almost never occur. One example is the “private flying boom,” which the experts predicted during and shortly after World War II. The private plane, owner-piloted, would become as common, we were told, as the Model T automobile had become around World War I. Indeed, “experts” among city planners, engineers, and architects advised New York City not to go ahead with the second tube of the Lincoln Tunnel, or with the second deck on the George Washington Bridge, and instead to build a number of small airports along the west bank of the Hudson River. It would have taken fairly elementary mathematics to disprove this particular “technology assessment”—there just is not enough airspace for commuter traffic by air. But this did not occur to any of the “experts”: no one then realized how finite airspace is. At the same time, almost no “expert” foresaw the expansion of commercial air traffic at the time the jet plane was first developed or that it would lead to mass transportation by air, with as many people crossing the Atlantic in one jumbo jet twelve times a day as used to go once a week in a big passenger liner. To be sure, transatlantic travel was expected to grow fast—but of course it would go by ship. These were the years in which all the governments along the North Atlantic heavily subsidized the building of new super-luxury liners, just when the passengers deserted the liner and switched to the new jet plane.

A few years later, we were told by everybody that “automation” would have tremendous economic and social impacts—it has had practically none. The computer offers an even odder story. In the late forties, nobody predicted that the computer would be used by business and governments. While the computer was a “major scientific revolution,” everybody “knew” that its main use would be in science and warfare. As a result, the most extensive market research study undertaken at that time reached the conclusion that the world computer market would, at most, be able to absorb 1,000 computers by the year 2000. Only twenty years later, there were some 150,000 computers installed in the world, most of them doing the most mundane bookkeeping work.

Then, a few years later, when it became apparent that business was buying computers for payroll or billing, the “experts” predicted that the computer would displace “middle management,” so that there would be nobody left between the chief executive officer and the foreman. “Is middle management obsolete?” asked a widely quoted Harvard Business Review article in the early fifties; and it answered this rhetorical question with a resounding Yes. At exactly that moment, the tremendous expansion of middle management jobs began. In every developed country middle management jobs, in business as well as in government, have grown about three times as fast as total employment in the last twenty years: and this growth has been parallel to the growth of computer usage.

Anyone depending on “technology assessment” in the early fifties would have abolished the graduate business schools as likely to produce graduates who could not possibly find jobs. Fortunately, the young people did not listen and flocked in record numbers to the graduate business schools so as to get the good jobs which the computer helped to create.

But while no one foresaw the computer impact on middle management jobs, every “expert” predicted a tremendous computer impact on business strategy, business policy, planning, and top management—on none of which the computer has, however, had the slightest impact at all. At the same time, no one predicted the real “revolution” in business policy and strategy in business in the fifties and sixties, the merger wave and the “conglomerates.”

Difficulty of Prediction

It is not only that man no more has the gift of prophecy in respect to technology than in respect to anything else. The impacts of technology are actually more difficult to predict than most other developments. In the first place, as the example of the “population explosion” shows, social and economic impacts are almost always the result of the convergence of a substantial number of factors, not all of them technological. And each of these factors has its own origin, its own development, its own dynamics, and its own experts. The “expert” in one field, e.g., the expert on epidemiology, never thinks of plant pests. The expert on antibiotics is concerned with the treatment of disease—whereas the actual explosion of the birth rate largely resulted from elementary and long-known public health measures.

But, equally important, what technology is likely to become important and have an impact, and what technology either will fizzle out—like the “flying Model T”—or will have minimal social or economic impacts—like “automation”—is impossible to predict. And which technology will have social impacts and which will remain just technology is even harder to predict. The most successful prophet of technology, Jules Verne, predicted a great deal of twentieth-century technology a hundred years ago (though few scientists or technologists of that time took him seriously). Yet he anticipated absolutely no social or economic impacts, but an unchanged mid-Victorian society and economy. Economic and social prophets, in turn, have the most dismal record as predictors of technology.

The one and only effect an “Office of Technology Assessment” is likely to have, therefore, would be to guarantee full employment to a lot of fifth-rate science fiction writers.

The Need for Technology Monitoring

However, the major danger is that the delusion that we can foresee the “impacts” of new technology will lead us to slight the really important task. For technology does have impacts, and serious ones, beneficial as well as detrimental ones. These do not require prophecy. They require careful monitoring of the actual impact of a technology once it has become effective. In 1948, practically no one correctly saw the impacts of the computer. Five and six years later, one could and did know. Then one could say: “Whatever the technological impact, socially and economically this is not a major threat.” In 1943, no one could predict the impact of DDT. Ten years later, DDT had become a worldwide tool of farmer, forester, and livestock breeder, and, as such, a major ecological factor. Then thinking about what action to take should have begun: work should have been started on developing pesticides without DDT’s major environmental impact, and the difficult “tradeoffs” between food production and environmental damage should have been faced—tradeoffs which neither unlimited use nor the present complete ban on DDT sufficiently considers.

“Technology monitoring” is a serious, an important, indeed a vital task. But it is not “prophecy.” The only thing possible, in respect to new technology, is speculation with about one chance in a hundred of being right—and a much better chance of doing harm by encouraging the wrong, or discouraging the most beneficial, new technology. What needs to be watched is “developing” technology, that is, technology which has already had substantial impacts, enough to be judged, to be measured, to be evaluated.

And “monitoring” a “developing” technology for its social impacts is, above all, a managerial responsibility.

But what should be done once such an impact has been identified? Ideally, it should be eliminated. Ideally, the fewer the impacts, the fewer “costs” are being incurred, whether actual business costs, externalities, or social costs. Ideally, therefore, businesses start out with the commitment to convert the elimination of such an impact into “business opportunity.”

And where this can be done, the problem disappears, or rather, it becomes a profitable business and the kind of contribution for which business and businessmen are properly being paid. But where this is not possible, business should have learned, as a result of the last twenty years, that it is the task of business to think through what kind of regulation is appropriate. Sooner or later, the impact becomes unbearable. It does no good to be told by one’s public relations people that the “public” does not worry about the impact, that it would, in fact, react negatively toward any attempt to come to grips with it. Sooner or later, there is then a “scandal.” The business which has not worked on anticipating the problem and on finding the right solution, that is, the right regulation, will then find itself both stigmatized and penalized—and properly so.

This is not the popular thing to say. The popular thing is to assert that the problems are obvious. They are not. In fact, anyone who would have asked for regulation to cut down on air pollution from electric-power plants twenty or even ten years ago would have been attacked as an “enemy of the consumer” and as someone who, “in the name of profit,” wanted to make electricity more expensive. (Indeed, this was the attitude of regulatory commissions when the problem was brought to their attention by quite a few power companies.) When the Ford Motor Company in the early fifties introduced seat belts, it almost lost the market. And the pharmaceutical companies were soundly trounced by the medical profession every time they timidly pointed out that the new high-potency drugs required somewhat more knowledge of pharmacology, biology, and biochemistry than most practicing physicians could be expected to have at their disposal.

But these examples also, I think, bring out that the “public relations” attitude is totally inappropriate and, in fact, self-defeating. They bring out that neglect of the impacts and willingness to accept that “nobody is worried about it” in the not-so-very-long run penalizes business far more seriously than willingness to be unpopular could possibly have done.

Therefore, in technology-monitoring, the businessman not only has to organize an “early-warning” system to identify impacts, and especially unintended and unforeseen impacts. He then has to go to work to eliminate such impacts. The best way, to repeat, is to make the elimination of these impacts into an opportunity for profitable business. But if this cannot be done, then it is the better part of wisdom to think through the necessary public regulation and to start early the education of public, government, and also of one’s own competitors and colleagues in the business community. Otherwise the penalty will be very high—and the technology we need to tackle the central problems of “post-industrial” society will meet with growing resistance.


Technology is certainly no longer the Cinderella of management, which it has been for so long. But it is still to be decided whether it will become the beautiful and beloved bride of the prince, or instead turn into the fairy tale’s wicked stepmother. Which way it will go will depend very largely on the business executive and his ability and willingness to manage technology. But which way it will go will also very largely determine which way business will go. For we need new technology, both major “breakthroughs” and the technologically minor but economically important and productive changes to which the headlines rarely pay attention. If business cannot provide them, business will be replaced as a central institution—and will deserve to be replaced. Managing technology is no longer a separate and subsidiary activity that can be left to the “longhairs” in “R & D.” It is a central management task.

A Historical Note

The absence of any serious concern with, and study of, technology among the major academic disciplines is indeed puzzling. In fact, it is so puzzling as to deserve some documentation.

The nineteenth-century economist usually stressed the central importance of technology. But he did not go beyond paying his elaborate respects to technology. In his system, he relegated technology to the shadowy limbo of “external influences,” somewhat like earthquakes, locusts, or wind and weather, and as such incomprehensible, unpredictable, and somehow not quite respectable. Technology could be used to explain away phenomena which did not fit the economist’s theoretical model. But it could not be used as part of the model. The twentieth-century Keynesian economist does not even make the formal bow to technology which his nineteenth-century predecessor regarded as appropriate. He simply disregards it. There are, of course, exceptions. Joseph Schumpeter, the great Austro-American economist, in his first and best-known work on the dynamics of economic development, put the “innovator” into the center of his economic system. And the innovator in large part was a technological innovator. But Schumpeter found few successors. Among living economists only Kenneth Boulding at the University of Colorado seems to pay any attention to technology. The ruling schools, whether Keynesian, Neo-Keynesian, or Friedmanite, pay as little attention to technology as the pre-industrial schools of economists, such as the Mercantilists before Adam Smith. But they have far less excuse for this neglect of technology.

Historians, by and large, have paid even less attention to technology than economists. Technology was more or less considered as not worth the attention of a “humanist.” Even economic historians gave very little attention to technology until fairly recently. Interest in technology as a subject of study for the historian did not begin until Lewis Mumford’s book Technics and Civilization (New York: Harcourt Brace, 1934). It was not until twenty-five years later that systematic work on the study of the history of technology began, with the publication in England of A History of Technology, edited by Charles Singer (London: Oxford University Press, 1954–58), 5 vols.; and shortly thereafter in the United States with the founding of the Society for the History of Technology in 1958 and of its journal, Technology and Culture. The relationship between technology and history has further been discussed in the first American textbook, Technology in Western Civilization, edited by Melvin Kranzberg and Carroll W. Pursell, Jr. (New York: Oxford University Press, 1967), 2 vols., and in my essay volume Technology, Management & Society (New York: Harper & Row, 1970), especially in the essays “Work and Tools,” first published in Technology and Culture (Winter 1959); “The Technological Revolution”; “Notes on the Relationship of Technology, Science and Culture,” first published in Technology and Culture (Fall 1961); and “The First Technological Revolution and Its Lessons,” delivered as the Presidential Address to the Society for the History of Technology in December 1965 and first published in Technology and Culture (Spring 1966). The California medievalist Lynn White, Jr., has done pioneering work on the impact of technological changes on society and economy, especially in his book Medieval Technology and Social Change (London: Oxford University Press, 1962).
But the only work that tries successfully to integrate technology into history, particularly economic history, is the recent book by the Harvard economic historian David S. Landes, The Unbound Prometheus: Technological Change and Industrial Development in Western Europe from 1750 to the Present (Cambridge: Cambridge University Press, 1969). Outside of the English-speaking countries, only one historian of rank has given any attention to technology, the German Franz Schnabel in his Deutsche Geschichte im Neunzehnten Jahrhundert (Freiburg: Herder, 1929–37).

Perhaps even more perplexing is the attitude of the sociologist. While the word “technology” goes back to the seventeenth century, it first became a widely used term as a slogan, if not a manifesto, of the early sociologists in the late eighteenth century. To call the first technical university, founded in 1794, the École Polytechnique was, for instance, a clear declaration of the central importance of technology to society and social structure. And the early fathers of sociology, especially the great French sociologists Saint-Simon and Auguste Comte, did indeed see technology as the great liberating force in society. Marx still echoes some of this—but then relegates technology to the realm of secondary phenomena. Sociologists since then have tended to follow Marx and to put the emphasis on property relationships, kinship relationships, and on everything else but technology. There are plenty of slogans, such as that of “alienation.” But there has been practically no work done. And technology is barely mentioned in the major sociological theories of the last, that is, the post-Marx, century, from Max Weber to Marcuse and from Lévy-Bruhl to Lévi-Strauss and Talcott Parsons. Technology either does not exist at all for the sociologist, or it is an unspecified “villain.”

In other words, the scholars have yet to start work on technology, as the way man works; as the extension of the limited physical equipment of the biological creature that is man; as a part—a major part—of man’s intellectual history and intellectual achievement; and as a human achievement which, in turn, influences the human condition profoundly. However, the businessman cannot wait for the scholars. He has to manage technology now.

First published in Labor, Technology and Productivity, edited by Jules Backman (New York: New York University Press, 1974).