Beating Murphy’s Law


IN THE PRESS, managers trumpet the success of new technologies. But all too often the private answer to “How is the new equipment working?” is “Badly.” Managers lament that their experience is living proof of Murphy’s famous Law: “Whatever can go wrong, will.” Yet the introduction of new technology is essential for long-term survival. Several examples illustrate how Murphy can intrude.

A southern furniture manufacturer decided to implement computer-controlled fabric cutting equipment. The head of the project, a top engineer, visited firms that had already successfully installed the equipment, developed a detailed action plan, and projected the expected efficiency gains (see Figure 1).

Nine months later the expected gains had yet to materialize. The problems could be traced to a multitude of unforeseen circumstances. The software to support the cutting equipment was not prepared on time. The changes in production scheduling, which were made to keep the equipment’s utilization high, required more indirect labor than originally planned. New cutting tables were too wide. The cutting department found that their old jack-of-all-trades approach resulted in too much waiting time and that they had to work as specialized teams. Downstream from the cutting department, the sewing staff found their productivity reduced—the fabric was poorly cut, primarily because the suppliers sent nonstandard widths. Upstream and downstream, direct and indirect, cost and quality; everything that could go wrong, did. Murphy’s Law had taken over the project.

In a large aluminum processing plant, the “JRC” electromechanical pump, designed to stir molten metal in an electrical remelt furnace, was expected to improve furnace efficiency, reduce fuel and labor costs, and lengthen furnace life. Although the new pump had been successfully used in more than a dozen steel operations and in at least one other aluminum foundry, the innovation ran into multiple problems. The silicon materials in the nose piece turned brittle in the extremely high temperature baths. The vendor (which had licensed the original technology from this same aluminum company’s laboratories) was undergoing a major personnel restructuring and was inattentive. Spare parts were very slow to arrive. Previously, the operators had fed the furnace infrequently; they prided themselves on pushing the furnaces past their designed limits by loading in large quantities of metal every few hours. The JRC required more “baby-sitting”—the operators had to load in smaller quantities of scrap more frequently so that the pump did not choke.

The vendor blamed the pump’s erratic performance on inadequate maintenance. The furnace operators blamed technical flaws and the vendor’s indifference. And the original inventor blamed the operators.

Multiple tests conducted over a period of months to determine the costs and benefits of the new pump were inconclusive because of downtime, noncomparable conditions from one test to another, several changes in the physical orientation of the pump within the furnace, and fluctuations in the volume of metal passing through the facility.

Meanwhile, social demand for recycling of aluminum cans continued to mount, but there were not enough crucible trucks to handle any increased flow of metal from the remelt furnaces. Moreover, some of the downstream production managers considered the recycled metal to be a contaminant and undesirable for many applications. Therefore, there was little internal demand for increasing production through the remelt furnace. After six months of trials (and tribulations), the project was cancelled.

A U.S. electronics company introduced an MRPII production scheduling system at a European site to improve scheduling of the manufacturing process. At the same time, the site was ramping up from distribution to full production. Lacking both personnel and adequate computer power, the plant rapidly ran into difficulties. Processes became confused as some operators and supervisors, seeing the inaccuracy of the computer output, returned to previous systems. Some of the previous systems were computerized but on a remote computer system, while others were manual. Without accurate and timely data from many of the plant functions, the computer systems spiraled downward into more confusion. Finally, the plant manufacturing processes ground to a complete halt. The plant was “brought to its knees.” For almost three weeks no products were shipped. This stoppage reverberated around the world, as many other plants were unable to ship without the components produced at the European site. U.S. expediters flew in, but their activity added little to the productivity of the plant and much to the confusion. The plant manager raced from one hot spot to another, trying to personally resolve many of the problems. As the backlogs mounted, contract workers were hired to push the product through manually, ultimately doubling the size of the plant. It took over twelve months for the plant to return to normal.

This disaster cost plant management much credibility at the corporate level, where the problem had been extremely visible because of the shipping halt. The plant manager also commented that it took the workforce a year to regain confidence in his management team.

Expect the Unexpected

Recent studies suggest that these examples are the norm rather than the exception. Majchrzak points out that an estimated 50 to 75 percent of U.S. firms experience failure in implementing advanced manufacturing technology.1 A study of nine plants over a three-to-nine-year period showed that while it was impossible to predict exactly what would go wrong when new technology was introduced, it was easy to predict that something would. The plants routinely experienced a pronounced drop in productivity following the introduction of new equipment—a drop whose dollar impact often exceeded the purchase price of the equipment itself.2 Yet in most cases these were not radical, new technologies. Even the purchase of incremental, relatively inexpensive equipment typically leads to a productivity downturn. Indeed, a study of the Census Bureau’s Longitudinal Establishment Data Base confirmed the presence of investment-related indirect costs for a sample of over 1,000 plants over a nine-year period.3 Apparently implementation always involves the unexpected.

Figure 2 shows a typical pattern exhibited by these adjustment costs. Engineering projects a clear improvement, recognizing that it will take time to ramp up to the new level of performance. But in reality, performance deteriorates shortly prior to installation as the plant makes ready for the new equipment. Anywhere from several weeks to more than a year later, the plant again reaches its old performance level and, with luck, surpasses it. The area of the dip in Figure 2 shows the magnitude of the adjustment costs.
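For readers who want to make the “area of the dip” concrete, here is a minimal sketch of one way it could be valued from plant data. The baseline, weekly output figures, and contribution margin below are invented for illustration; they are not taken from any of the plants discussed in this article.

```python
# A toy calculation of the adjustment cost represented by the dip in Figure 2:
# the cumulative shortfall of actual output against the pre-installation
# baseline, valued at contribution margin. All numbers are invented.
baseline = 1000                                                   # units per week before the change
weekly_output = [980, 940, 870, 820, 850, 900, 960, 1010, 1050]  # weeks around startup
margin = 25.0                                                     # contribution dollars per unit

shortfall = sum(max(baseline - units, 0) for units in weekly_output)
adjustment_cost = shortfall * margin
print(f"Adjustment cost over the dip: ${adjustment_cost:,.0f}")
```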

This basic pattern has been found empirically in a variety of manufacturing facilities, including auto components plants, paper mills, and commercial kitchens.4 Statistically significant negative effects often persist for more than a year after the introduction of the new equipment. A study of twelve plants by Hayes and Clark found that “in most cases the additional cost, over several months, of adding new equipment (in terms of lost labor productivity, increased waste, equipment idle time, and so forth) appeared to be greater than the cost of the equipment itself.”5 The essence of this Murphy Curve is captured by the manager’s proverb “short-term pain for long-term gain,” but the short term can be long and the pain intense. In some cases, as with the aluminum pump described earlier, the pain is so great that the project is cancelled, leading to no gain at all.

Information Failure

Why do these projects incur adjustment costs? The obvious answer is, “People resist change.” Sometimes this is a factor. Yet in many instances the workforce has clearly “bought into” the change before it occurs, and major adjustment costs still emerge well in excess of the installation costs. Why?

The greatest costs stem from unforeseen mismatches between the new technology’s capabilities and needs, and the existing process and organization. New equipment rarely behaves precisely as expected; often it needs debugging and adaptation.6 Implementation is complete only when the old and the new have been successfully adapted to each other. Debugging could proceed quickly and inexpensively were it not for the fact that the new equipment is an integral part of a larger, ongoing system. Changes in the new equipment ripple through other departments. Finding the root cause of these ripple effects can be time consuming. Once causes are found and solutions implemented, a second wave of problems may emerge elsewhere and the problem-solving process begins anew. This iterative process of problem-diagnosis-solution-problem continues until performance is significantly improved, another new process draws off resources, or the project is abandoned, whichever comes first.

If managers knew precisely what sort of problems were going to emerge, they could anticipate and correct them before they occur. But managers cannot perfectly anticipate problems because they and their subordinates have limited knowledge of both the new technology and their existing processes. It is impossible to predict exactly what problems will arise when the imperfectly understood equipment is installed on the imperfectly understood shop floor.

There are a number of reasons for incomplete knowledge of technology. The firm’s application may be unique, or at least on a different scale from any others. Or, the firm may have failed to investigate other users’ experience.

Imperfect knowledge about the existing process can be attributed to three major causes. The first is the degree to which “art” plays a role in the process. Few processes are understood to the level of “science.”7 In other words, there are some unknowns or unquantifiable variables like “feel” that affect the process. The second cause is that managers and engineers don’t know in detail what is actually being done on the shop floor. Most plants exhibit deviations from original work design and variation from worker to worker. In many cases these deviations are to the benefit of the organization. They may represent the effects of decentralized problem solving, for example. But for good or ill, these unknown modifications still represent a potential source of problems when new technology is introduced. The third cause is a lack of scientific method in most manufacturing operations. We do not mean a lack of experiments, although most operations run very few. We mean a failure to capture the data and information created by changes instituted in the past. Whether the changes succeeded or failed, they represent a potentially rich method of revealing process interrelationships. Sadly, the data are often not recorded and thus are subsequently forgotten.

It is imperfect knowledge that makes Murphy’s Law possible, and in our examples it is clear that what you don’t know can hurt you. The fact that suppliers to the furniture maker had historically shipped nonstandard material widths wasn’t the problem; the problem was that the firm was unaware that they were nonstandard. Knowledge was the aluminum company’s problem as well; it suffered from inadequate technical problem solving and downstream managers who did not understand that the recycled metal was not a contaminant. Finally, in our MRP example, it was a failure of information that brought the plant to its knees. The problem was not imperfect technology or improper organization. The problem was imperfect knowledge about the interaction of the technology with the organization.

Beating Murphy’s Law

Our studies of hardware and software introduction in a variety of settings suggest some critical rules for beating Murphy’s Law.8 The rules are simple, but they represent a radically different approach to implementation of technology, an approach based on generating knowledge about the new equipment and the existing organization, systems, and processes. Some of this knowledge comes from solving problems as they come up. A systematic approach to learning also seeks knowledge proactively, before and during the implementation process.

Rule #1. Think of Implementation as R&D

A conventional process of new equipment adoption consists of approval, planning, and implementation. One participant in our studies observed, “Management saw [the technology] as a cassette you just plug in.” With this perspective there are only three tasks to perform: decide whether the firm wants the cassette (approval); assign responsibility for plugging it in (planning); and plug it in (implementation). While a corporation may wish to identify these phases for control purposes, we reject them for successful new technology acquisition. Acquisition should instead be considered an ongoing process of data gathering and learning that evolves over time. Initially, an organization must focus on technical data regarding equipment options and costs and a study of existing applications. Eventually, the technology goes through startup and data is generated in-house. But in every phase the goal is to learn all that can possibly be learned at that point in time. In effect, the introduction of technology should be considered less as an investment issue or technical issue and more as a question of research design.

Any good research design requires careful attention to the type of experiments that will be performed. The experiments should address both technical and organizational questions. Managers who understand that they are managing organizational change, not just technical change, are better positioned to direct the learning process. The work of technology managers should include: working very closely with users, whose role should be as codevelopers rather than receivers of the technology; constantly redefining the necessary support structure in the user organization, identifying and targeting potentially weak links; enlarging the definition of the technology to include the delivery system or other linkages on which the technology is critically dependent; and experimenting as consciously and productively with organizational forms as with technical ones, capitalizing wherever possible on experiments occurring naturally in the company.9

Rule #2. Ask “What Made It Hard?” Not “How Well Did It Work?”

Both our furniture maker and our aluminum producer looked outside the firm early in the approval process to learn about the technology through the experience of others. But their investigations were geared toward the project’s approval phase and focused on how the potential benefits could be quantified.

The firm must look beyond the simple “How well does the technology work?” When evaluating prospective new technology, ask “How did you make this technology work? What had to be changed? What was hardest?” Too often, firms look to outside experience only to help them decide whether or not to buy the equipment, and waste the opportunity to learn how to avoid experiencing someone else’s problems.

It isn’t necessary that the firm with experience be in the same business as you. A different kind of business may not help you answer the “Should we buy it?” question, but it may be able to answer implementation questions. Also, be sure to look within your own company for experience. One plant’s risky learning by doing can become another plant’s inexpensive learning by example.

Our studies suggest that technical knowledge, about the hardware itself, transfers more easily than organizational knowledge, about how best to use it. One reason is the way technical projects are staffed. Typically, a technical expert is assigned responsibility for technical ramp-up, and therefore has an incentive to seek out technical information. Some technical knowledge learned elsewhere is often embodied in the equipment or software routines, making physical transfer easier. A second reason is that, although poor documentation is a problem for all types of learning, people are more likely to document technical than organizational solutions. Finally, most managers believe that their “people problems” are different from anyone else’s. After all, their people are different.

It may seem obvious that learning from others’ experience is a good idea. However, we have observed very different levels of receptiveness and aggressiveness among plants in acquiring knowledge from outside. The prevalence of the Not Invented Here syndrome is testimony to a gap between what could be done and what is done. Hayes and Clark describe the problems one company had in transferring knowledge among its own plants making the same product: “The transfer of relevant information appeared to be limited by the organizational difficulties associated with coordinating and communicating engineering knowledge, by the desire of each plant to protect its proprietary knowledge, and by each plant’s reluctance to assimilate superior techniques developed at other plants.”10 Corporate attitudes and reward systems can exacerbate this problem.11

The interfirm transfer situation can be even worse. Not only are firms concerned about trade secrecy, but many find it hard to admit that competitors and firms in other industries may be ahead in understanding certain areas.

Learning from outside the plant is not easy, but it can be heavily influenced by management actions, and perhaps even more by management attitudes. To be successful, passive observation is not enough. At this early stage the inquiry should be an active, even aggressive, targeted search for information.

Rule #3. Learn in Many Ways at Once

The adage “An ounce of prevention is worth a pound of cure” has been known to every manager since grade school, yet many seem fonder of “A penny saved is a penny earned.” Too often, learning is an afterthought. Our analysis points to the critical role of systematic and early learning to avoid Murphy’s Law.

Broadly speaking, there are four methods of learning that a firm can use:12

  • vicarious learning—learning from the experience of others;
  • simulation—constructing artificial models of the new technology and experimenting with it;
  • prototyping—actually building and operating the new technology on a small scale in a controlled environment; and
  • on-line learning—examining the actual, full-scale technology implementation while it is operating as part of the normal production process.

A clear hierarchy exists among these four methods: costs get higher moving down the list, but so does fidelity.

As we move down the list toward on-line learning, trials (controlled experiments) cost more both directly and in opportunity costs (valuable people who could have been doing something else and, once on-line, lost production time). Once the actual system is built and started up, major changes require major expenditures of time and money. Changes increase disruptions and hurt performance. In contrast, changing a machine in a simulation may take only a few minutes of computer time and an hour to look at the results. Yet managers in our studies consistently underinvest in preimplementation learning, choosing in effect to do most of their learning later, when it is most expensive.

Part of the reason is the issue of fidelity. The more the learning experience corresponds accurately to the real situation in the factory, the higher its fidelity. The lower-cost learning methods have less fidelity to actual plant floor conditions. By definition, on-line learning has the highest fidelity—whatever you see happening will happen in reality! Prototypes can have excellent fidelity, but simulation has good fidelity only for those issues that are already understood at the level of science, hence that can be accurately modeled. Vicarious learning has good fidelity in those areas in which the other sites are similar to your own, but poor fidelity in areas where they are different.

Many managers appear unwilling to invest in learning by methods that offer less than perfect fidelity. They fail to recognize that learning need not be all or nothing. Another plant’s experience will not be completely relevant, but it is still possible to identify some issues that can be addressed vicariously. Similarly, simulation and prototyping can be effectively targeted at specific questions.

The consequence of these two competing hierarchies is important: use a mixed strategy for learning. Learn as much as possible using the low-cost, low-fidelity methods, but realize that some learning will probably be necessary from all four methods. On-line learning is the “residual” method—any bugs and problems that weren’t caught earlier will cause problems during startup of the full system. Diligent effort should be made to catch as many of these as possible during the earlier stages, but realize that, because of incomplete fidelity, some of them will only show up at the end. Therefore, resources (people and time) must be set aside for learning on-line.

Furthermore, the ideal learning strategy includes parallel and simultaneous use of all methods, not just sequential use. For example, opportunities for improvements that are not uncovered until the prototype is running may become the target of simulation. Prototype pilot lines should be kept going in parallel with the main production line, as test beds for diagnostic experiments and trials of changes. Both technical and organizational learning must be documented and remembered. As noted earlier, one plant’s Murphy’s Law disaster is another plant’s opportunity for vicarious learning.

Rule #4. Simulate and Prototype Everything

Since on-line learning is an expensive second best, and vicarious learning can only give limited guidance, effective simulation and prototyping are critical. A simulation of a new technology is a model of how it will work. Simulations can range from simple mathematical models such as spreadsheets, through elaborate Monte Carlo computer models, to physical models of the entire plant, before and after the new technology is introduced. For example, in the steel industry it is common to simulate changes using scaled-down models with water in place of molten steel. Simulations of organizational technology such as a new order-processing system can range from a simple walk through by representative personnel, to a complete mock-up using dummy data and dummy versions of the final software.

Simulation technology has improved dramatically in recent years due to advances in computer hardware (engineering workstations and personal computers are more than adequate for simulations of most production processes) and especially in easy-to-use, special-purpose simulation languages. It is often possible to do a crude but useful initial simulation in less than a week of effort. In fact, we recommend that any new technology involving more than a few person-months of total effort should probably be simulated in some way. Complex or large technology should usually receive several simulations targeted at different levels of detail and different aspects of the total system.

Simulations are by definition incomplete representations of reality. They are especially useful when dealing with complex changes, where the individual pieces of the technology are reasonably clear, but their interactions with each other and with the rest of the plant are not. The simulation then shows the overall effect of the total system. The other power of simulations is that they are very easy to change. Once the simulation is built, numerous alternative configurations of the technology can be tried rapidly. The overall effect can be a quantum leap in the ability to refine and improve system designs.13
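To illustrate how quick and crude such a simulation can be, the sketch below models a two-stage line in which cutting feeds sewing, loosely echoing the furniture example. It is a toy Monte Carlo model, not a model of any plant we studied; every rate, uptime, and defect fraction is an invented assumption chosen only to show the mechanics of comparing configurations.

```python
# A toy Monte Carlo simulation of a two-stage line: cutting feeds sewing.
# All parameters are invented; the point is how quickly alternative
# configurations can be compared once even a crude model exists.
import random

def simulate_day(cut_rate, uptime, bad_cut_fraction, sew_capacity):
    """Return pieces completed in one simulated day."""
    cut = cut_rate * uptime * (1 + random.uniform(-0.1, 0.1))  # cutting output varies day to day
    good = cut * (1 - bad_cut_fraction)                        # usable pieces reaching sewing
    rework = cut * bad_cut_fraction * 2                        # bad cuts consume extra sewing time
    return min(good, max(sew_capacity - rework, 0))            # downstream stage is the constraint

def average_output(days=1000, **params):
    return sum(simulate_day(**params) for _ in range(days)) / days

# Manual cutting vs. a hypothetical computer-controlled cutter that is faster
# but, at startup, less reliable and less accurate (the Murphy Curve in miniature).
old = average_output(cut_rate=400, uptime=0.90, bad_cut_fraction=0.02, sew_capacity=380)
new = average_output(cut_rate=550, uptime=0.75, bad_cut_fraction=0.08, sew_capacity=380)
print(f"Old process: {old:.0f} pieces/day; new process at startup: {new:.0f} pieces/day")
```

Even a model this crude makes the interaction visible: the faster cutter can lower total output at startup because poor cuts load the sewing stage with rework.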

A prototype is a small-scale construction or isolated version of the final system for learning purposes, using methods as close as possible to the final technological target. The purpose of prototyping is to learn about problems and opportunities that were not found during the simulation but that will cause delays or expense if they are left for on-line learning. The roles of pilot plants and pilot lines, which are themselves prototypes, are well known and appreciated in many industries. Less recognized is that even small changes can be prototyped. Cleverness pays. Chaparral Steel, a well-known mini-mill, tries to prototype suggested process changes before full-scale implementation. In one example, engineers on the continuous caster proposed splash guards to improve quality by reducing splashover of molten steel between strands of the caster. To test the idea, they installed a plywood guard as a prototype. Naturally the hot steel ate away the plywood, but by keeping it under a continual water jet they prolonged its life long enough to discover that reducing splashes did indeed improve product uniformity and quality.

Rule #5. “Everything” Includes the Organization

Simulation is equally applicable to organizational changes, though it is rarely applied. Simulating the organizational change that accompanies the technical innovation can be as simple, and as difficult, as defining and trying out new relationships. When another plant we studied was about to implement a new MRPII system, the implementation manager persuaded representatives from all the potentially affected functions (shipping, purchasing, inventory, etc.) to come together in one room for several days to go through a noncomputerized simulation of the information flows that would be precipitated by the change. The group worked with the MRPII software and a scripted set of problems that the implementation manager had developed before the meeting. The simulation served to educate the various functions about the coming system. The most important outcome of the meetings was unanticipated: the supervisors got to know each other and talked about the process interdependencies that the new system was going to cause or exacerbate. Because they came to understand the needs of other functions, whose representatives they often had not even known before, the participants negotiated compromises and agreements that forestalled problems when the actual system was implemented.

Organizational prototyping, like technical prototyping, is the execution of a design on a small scale for the express purpose of evaluating that design from an organizational viewpoint. In electronics, a prototype circuit board is frequently made up in a lot of ten boards, each of which is sent to a different team of experts for assessment from its particular viewpoint: engineering design, manufacturing, quality assurance, purchasing, and so on. Although each of these functions may have been involved in the design process (or so one hopes), the physical embodiment of the design may raise new issues. Technologies that may appear benign and incremental before they are put into operation can have unanticipated effects. With organizational prototypes managers can anticipate needed alterations, potential pitfalls, and opportunities for additional benefits by observing the technology-organization interaction in microcosm before launching the full-scale production change.

Pilot runs of a new technology offer the opportunity for organizational prototyping, but they are rarely used for that purpose. Usually test runs are conducted by technical staff to learn about potential problems in the physical system. Little attention is paid to the possibility of learning about organizational effects and opportunities, such as changes in roles, conflicts with existing rewards and incentives, differing responses to and use of the new technology depending upon operator background and skill, the different meaning that the technology has for different groups of users, and the most effective organizational structure.

The best use of prototypes is conducting experiments with alternative organizational design choices. Such experiments occur naturally in organizations when more than one site adopts a technology simultaneously. For instance, in a company we studied, a new computer-aided design (CAD) system was introduced to automate engineering schematics. At one site, the engineers had to enter their own data into the system and generate the schematics. At another site, a central core of technical support staff worked the CAD system so that the engineers’ role was largely unchanged. Since the CAD system was not scheduled to be rolled out to the rest of the corporation for some months, these different managerial treatments could have been observed to learn the most cost-effective use of the technology and to identify problems and opportunities for later sites. Similarly, when Digital Equipment’s eXpert SELling assistant (XSEL) was first rolled out, the European offices chose to have technical assistants run the software, whereas most U.S. offices assigned that duty to the sales representatives. The XSEL program office team was too preoccupied with fire fighting to conduct any systematic investigations of these alternative ways of deploying the expert system.14 In both examples, organizational prototyping occurred but the managers involved were unable to exploit it.

Managers react differently to the concept of organizational simulation and prototyping depending on their orientation toward resolving uncertainty. Those who are accustomed to voyages of technical discovery (e.g., engineers designing products or knowledge engineers) regard experimentation as a natural road toward certainty. Those who are accustomed to questions for which there are no absolute answers (e.g., general managers dealing with personnel problems) regard negotiation among conflicting views as a natural road to resolution of uncertainty.

In the development and deployment of new technologies, both approaches are necessary. It is important for the technically oriented manager to recognize that uncertainty goes beyond technical parameters to include the values and perceptions of users. Therefore, the final form of the technology must be negotiated among multiple sets of users, and the value for the different users must be established through interpersonal contact. Similarly, the more generally oriented manager needs to become comfortable with the concept of experimentation (prototyping) as a legitimate way to manage the gradual shaping of the organizational environment.

An organizational prototype need not be large or expensive to be effective. As we will describe later, some plants have used just one person or a small team of people to explore the organizational implications of a new technology before full-scale launch, and this organizational prototyping proved to be an important factor in beating Murphy’s Law.

Rule #6. Follow Lewis and Clark

It is tempting to suggest that all of these problems reflect a failure to plan properly for the implementation of change. But in all our examples the companies prepared detailed plans. One firm had literally filled a bookshelf with a detailed plan of action. When asked how useful it was, the managers responded ruefully, “We had to abandon the plan after the first day or so of actual operation.”

The problem is not with planning per se but with the substance of a plan. Most firms plan specific actions to take if certain contingencies arise. But even a half dozen contingencies, each with a choice of three responses, generate 729 different combinations. If the sequence in which the contingencies arise also matters, the planning team faces over 500,000 possible scenarios to plan for.
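The arithmetic behind that explosion is easy to check; the two-line calculation below simply reproduces the figures in the sentence above.

```python
# Six contingencies, three candidate responses each; if the order in which
# contingencies arise also matters, multiply by the 6! possible sequences.
from math import factorial

combinations = 3 ** 6                    # 729 response combinations
scenarios = combinations * factorial(6)  # 524,880 ordered scenarios
print(combinations, scenarios)
```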

When Lewis and Clark headed west from St. Louis they did not attempt to specify in advance their exact trail and how they would cope with each expected contingency. They realized that the wilderness ahead was too unknown and the contingencies too many. Rather, they set out with a general sense of their route (up the Missouri River and over the Rockies), a good store of resources, and a team that had familiarized itself with everything known about the wilderness ahead. The expedition took advantage of opportunities as they presented themselves, most notably the appearance of Sacagawea, the guide they met en route. They consciously ran experiments; for a significant portion of the return trip, they separated into two groups to time alternate routes. And they documented every step of their journey for those who would follow.

Too often managers forget that new technologies have more in common with Lewis and Clark’s wilderness than today’s travel, when one can simply decide to go, plan a route, and arrive safely at the planned destination. With new technologies the unexpected may loom as high as the Rocky Mountains. It is virtually impossible, to say nothing of inefficient, to plan for every possible contingency. Planning should not seek to be a set of actions with a checklist for repairing the system should something go awry. Planning must provide a guiding structure for discovering and solving problems. It should focus more on what to look for and think about than on what to do. It should plan for an expedition of discovery, not a drive to a relative’s house; it should be a research design, not a recipe.

Rule #7. Produce Two Outputs: Salable Products and Knowledge

Eventually, the new technology is up and running. The new process produces not only salable products, but also usable knowledge. Production time, management time, labor, and materials should be budgeted for making both types of output.15

On-line learning provides the opportunity to gain usable knowledge. It can eliminate or ameliorate the problems that cause the Murphy Curve. The faster these problems are detected and rectified, the faster performance will attain its intended level. Such learning is not automatic, however. Careful observation and even controlled experiments are usually needed to understand and solve these problems.

At its simplest, on-line learning requires watching the operation of the new technology, noticing problems, and then developing countermeasures or solutions for them. The solutions may be technological (change the new equipment or some of the old equipment so they fit together better) or organizational (change the way the equipment is used, or adapt a part of the organization, such as the reward systems).

While problems such as frequent breakdowns or obvious software bugs can often be identified by observation or by statistical analysis of the first weeks of operation, working out causes and solutions of some problems usually calls for controlled experiments. These are deliberate temporary changes in the technology or organization, and the measurement of their effects. The goal of these experiments should not just be to make the problem go away for a while, but to understand its causes so that it can be fixed at the roots. The causes of quality problems, for example, may be quite subtle, especially if the new technology is a big departure from familiar methods. In this case a dramatic but useful type of experiment is to try a mixture of large process changes with the expectation of making the problem worse in some instances and better in others. With enough changes in control variables and performance outcomes, the underlying relationships can then be inferred from the pattern of results.
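For readers who want to see the inference step in miniature, here is a minimal sketch of a two-factor designed experiment analyzed by least squares. The factor names (scrap load size and feed interval), the coded settings, and the defect rates are all invented for illustration; the point is only how deliberate changes plus a simple fitted model can reveal underlying relationships.

```python
# A toy two-factor experiment: deliberately vary scrap load size and feed
# interval (coded -1 = low, +1 = high), record defect rates, then estimate
# each factor's effect and their interaction by least squares. Data invented.
import numpy as np

settings = np.array([
    [-1, -1], [-1, +1], [+1, -1], [+1, +1],
    [-1, -1], [-1, +1], [+1, -1], [+1, +1],  # the design replicated once
])
defect_rate = np.array([2.1, 2.4, 3.8, 5.9, 2.0, 2.6, 3.6, 6.2])

# Model: defect_rate = b0 + b1*load + b2*interval + b3*(load*interval)
design = np.column_stack([
    np.ones(len(settings)),
    settings[:, 0],
    settings[:, 1],
    settings[:, 0] * settings[:, 1],
])
coefficients, *_ = np.linalg.lstsq(design, defect_rate, rcond=None)
print("intercept, load, interval, interaction:", coefficients.round(2))
```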

On-line learning is clearly a costly way to beat Murphy’s Law. While it eventually solves the problems if done well, it does so only after performance has been hurt. Furthermore, the effort of learning will itself hurt performance temporarily. Engineering and problem-solving resources have to be taken away from other improvement efforts. Precious machine time is usually needed to run controlled experiments and to examine the operation of the system in careful detail, at slow speeds. This may have high opportunity costs if the problems are causing the new technology to be a temporary bottleneck. Fortunately, even on-line learning does not necessarily have to be done during normal working hours. Special blocks of time can be set aside for running experiments at night or on weekends. Such time, and the associated people and other resources, should be planned for in advance, even though the specific problems to be solved are not known at the planning stage.

Budgeting Time for the Seven Rules

The core of our argument has been that to beat Murphy’s Law it is necessary to plan for and manage directed learning. Anything you don’t learn about early will hurt you later. A variety of kinds of knowledge are needed, and there are many methods of learning, but most organizations we see have tremendous difficulty pursuing that learning systematically. Instead, continual pressure to “get it designed and operating fast” and then to “get as much out of the equipment as possible starting Day 1” leads to reactive, unsystematic, and ultimately very expensive problem solving, with little real learning coming out of it.

One of the prime culprits is the narrow perspective of traditional investment analysis. Investments are treated as black boxes that yield predictable cash streams. In contrast, the learning activities that precede successful implementations will look like expenditures with no quantifiable return. Accounting measures of performance while an organization is learning will look worse than if it were just pushing ahead.16

To take the worst case scenario, suppose that our recommended mixed strategy for learning has not been followed effectively, and as a result a number of bugs and problems are built into the technology when it starts up. Considerable on-line learning will then be needed to uncover and fix them. To do this learning rapidly will require committing considerable time and energy to experiments. This uses up time that could have been used for salable output, albeit at low levels of productivity. Since output is probably already backlogged and oversold, it takes real courage to dedicate production time to experiments unlikely to produce salable products.

One simple solution is to budget for this learning time throughout the project schedule. In particular, keep a reserve of production time for on-line learning in the first several months of startup. This is over and above the planned lower output during startup. Ten percent of production time is a realistic amount if vicarious learning, simulation, and prototyping have been done thoroughly.

The Rules in Practice: Different Kinds of Knowledge

In summary, managers typically underinvest in learning both before and after startup. This is particularly true of the organizational changes relating to new technologies. To correct these deficiencies, firms must radically alter the way they think about and plan the implementation of technology.

Our seven rules reflect a different vision of what it means to implement technology. We have seen sites that excel at this method of introducing change. In a study of six plants in two companies that were implementing very similar software packages for support of manufacturing purchasing and control, it was possible to clearly differentiate successful from unsuccessful implementation.

None of the less successful sites developed all of the various kinds of site-specific knowledge needed for implementation—the know-how and know-why. They tended to focus on one of the kinds of knowledge suggested in Figure 3, rather than on all four. Sometimes they viewed implementation primarily as a technical issue. The technical support personnel were expert in the software but not in purchasing. Because of their technical focus, they were uninterested in the strategic reasoning for the selection and initiation of the new software and thus were ill-suited to convey to users a sense of purpose and meaning behind the innovation. Nor were technical personnel familiar with the in-house software systems to which the new technology was being linked at each site. Hence no one possessed adequate understanding of the technical system as a whole. Finally, because the implementation teams in these sites generally viewed the technical realm—not the organizational—as their province, they were not prepared to deal with the changes in buying behavior required by the new system. Consequently, the implementation teams at the less successful (more costly) sites spent months of fruitless effort making detailed plans for a system that they did not really understand in practice. For example, in one plant, the massive twenty-seven-person implementation team met daily for seven months before the change to the new software. They filled large notebooks with detailed plans of action. Yet no one took the time to actually use the available simulation that a second plant in the same company later used to good effect. When the day came to turn on the system at the first plant, the notebooks turned out to be largely irrelevant. One person said, “We did a lot of planning we could never use.” No one had anticipated most of the real problems that experimentation with real data in the simulation might have revealed.

Not only were many organizational changes unanticipated in this plant, but there was no one to model—or even guide—the organizational transition. No one had been designated as an organizational change agent. The manufacturing floor supervisors who had customarily made purchasing, scheduling, and inventory decisions were suddenly shorn of expertise. A member of the implementation team said, “We took all that [expertise] away from them, literally overnight. Everyone was reduced to the same baseline.” The supervisors were afraid to make any decisions, for the new system might countermand them. Consequently, there was no one to shape the human systems and procedures into the necessary configuration. No one understood the interaction of the software with the procedures.

At another plant in the same company, managers had focused heavily on know-why, educating people on the theory behind the innovation more than on the practice. An external consultant had conducted courses on the philosophy of manufacturing resource planning that underlay many of the innovations, including the new software package. However, the courses were held two years before the system was introduced! Knowledge was dim indeed by the time it was really needed. In this same plant, the technical experts had not foreseen the need to reproduce in the new software some critical linkages to other technologies provided by existing systems (architectural interdependencies) and had underestimated the importance to the plant of some previously available reports that were unavailable with the new system.

In contrast, the most successful sites all deliberately invested in the creation of local user-experts, whose job it became to anticipate, model, prototype, and teach the new behavior necessitated by the technology, especially during the critical period of change from the old to the new system. For instance, one buyer’s regular parts load was cut 75 percent so that he could become an expert in the system and head up the implementation project. He spent three months simulating the system and then, with one other senior buyer, brought up all his parts on the system three months before the other fourteen buyers. He explained, “We told the other buyers that we were working out all the bugs before we turned it over to them.”

These user-experts served as scouts in foreign territory. They already knew their own organization’s tasks and procedures intimately. Therefore, when they became knowledgeable about the software, they had the personal experience to recognize both the organizational and technical impacts. Their hands-on trial of the system was a form of organizational prototyping. By virtue of their contact with the software developers and their designation by management as the resident experts, they also became acquainted with the strategic business purpose and the systems logic behind the software. They explored all four quadrants of the implementation knowledge grid.

The user-experts did not need to become the foremost available experts on all aspects of the technology and its interaction with current systems. Their task was not to solve all problems, but to identify them. They were well qualified to decide what kinds of information should be passed along to their fellow users and to anticipate where extra effort and resources would be needed to support the change. In the more successful sites, they also rewrote the corporate documentation, breaking it down into bite-sized pieces and customizing it for the other buyers. The early scouting effort also helped the user-experts identify knowledge that already existed in a highly transferable form (e.g., software or training modules), and hence experience that could be gained vicariously.

Thus, these user-experts helped diagnose the implementation situation, direct the learning effort, and prototype the needed changes in routine. Moreover, they identified the kinds of resources needed for successful implementation and pinpointed the timing of those resources. For instance, at one highly successful plant, a problem desk was staffed twenty-four hours a day on the plant floor during the technology change. Conversely, in a plant where Murphy’s Law presided, the implementation team who had spent months (fruitlessly) planning were reassigned back to other jobs the day that the new system was turned on. The plant manager acted as though he could thwart Murphy by simply denying that anything could go wrong.

Implementation as Strategy

Implementation can cause pain if poorly managed. But if managed well it can be a source of enormous gain. Manufacturing managers often build plants in out-of-the-way locations to get labor at lower cost, but give little attention to the nuts and bolts of technology implementation, which are ultimately more important. The firm that is better than its competitors at implementation receives new technology at what amounts to a discount price. The learning organization can also achieve greater benefits from its technology. Once you’ve beaten Murphy’s Law in startup, it’s not time to stop learning. The nature of the learning should shift gradually from debugging to continual improvement.

Even more exciting is the prospect of how the effective implementor might change over time. Faced with lower technology adjustment costs, the effective implementor can spend more on technology than the ineffective implementor. If managed properly, this creates more opportunities to learn; learning reduces adjustment costs further, leading to more investment. The effective implementor can place itself on an ever-improving spiral, leaving the poor implementor farther and farther behind.

Murphy’s Law is a fact of life. Unexpected problems will emerge when technology or any other change is introduced in a complex environment. It is up to the manager to determine whether Murphy will serve as the corporate scapegoat for poor performance, or the goad that can lead to competitive excellence through systematic learning.

References

1. A. Majchrzak, The Human Side of Factory Automation (San Francisco: Jossey-Bass, 1988).

2. R.H. Hayes and K.B. Clark, “Why Some Factories Are More Productive Than Others,” Harvard Business Review, September–October 1986, pp. 66–73.

3. F.R. Lichtenberg, “Estimation of the Internal Adjustment Costs Model Using Longitudinal Establishment Data,” Review of Economics and Statistics, August 1988, pp. 421–430.

4. On auto components, see:

B.E. Ichniowski, “How Do Labor Relations Matter? A Study of Productivity in Eleven Manufacturing Plants” (Cambridge, Massachusetts: MIT Sloan School of Management, Ph.D. Diss., 1983).

On paper mills, see:

W.B. Chew, “Productivity and Change: Short-Term Effects of Investments on Factory Level Productivity” (Cambridge, Massachusetts: Harvard University, Ph.D. Diss., 1986).

On commercial kitchens, see:

W.B. Chew, T.F. Bresnahan, and K.B. Clark, “Measurement, Coordination, and Learning in a Multiplant Network,” in Measures for Manufacturing Excellence, ed. R.S. Kaplan (Boston: Harvard Business School Press, 1990).

5. R.H. Hayes and K.B. Clark, “Exploring the Sources of Productivity Differences at the Factory Level,” in The Uneasy Alliance, eds. K.B. Clark et al. (Boston: Harvard Business School Press, 1985).

6. D. Leonard-Barton, “Implementation as Mutual Adaptation of Technology and Organization,” Research Policy 17 (1988): 251–267.

7. R. Jaikumar and R.E. Bohn, “The Development of Intelligent Systems for Industrial Use: A Conceptual Framework,” in Research on Technological Innovation, Management, and Policy, Vol. 3, ed. R. Rosenbloom (Greenwich, Connecticut: JAI Press, 1986), pp. 169–211.

8. See D. Leonard-Barton, “Implementing New Production Technologies: Exercises in Corporate Learning,” in Managing Complexity in High Technology Industries: Systems and People, eds. M. Von Glinow and S. Mohrman (London: Oxford Press, 1989); and

Chew (1986).

9. M.J. Tyre and O. Hauptman, “Effectiveness of Organizational Response Mechanisms to Technological Change in the Production Process,” Organization Science, forthcoming. This study of forty-eight new process introduction projects in one company showed value for both interfunctional and interorganizational coordination mechanisms for problem solving.

10. Hayes and Clark (1985).

11. Chew et al. (1990).

12. R. Bohn, “Learning by Experimentation in Manufacturing” (Boston: Harvard Business School, Working Paper No. 88-001, 1988).

13. Bohn (1988).

14. D. Leonard-Barton, “The Case for Integrative Innovation: An Expert System at Digital,” Sloan Management Review, Fall 1987, pp. 7–19.

15. R.E. Bohn and R. Jaikumar, “The Dynamic Approach: An Alternative Paradigm for Operations Management,” Proceedings of the ASME Conference, Atlanta, Georgia, 1988.

16. See R.S. Kaplan, “Must CIM Be Justified by Faith Alone?” Harvard Business Review, March–April 1986, pp. 87–95.
