Capturing the Real Value of Innovation Tools
Advances in development tools have tremendous potential for increasing productivity, cost savings and innovation. To reap the full benefits of such technologies, though, companies need to avoid some common pitfalls.
When Intel announces yet another breakthrough in chip technology, the triumph is as much a testimony to the rapid advances of modern development tools as it is to the skills of the research and development team. Indeed, the exponential performance gains of integrated circuits have fueled dramatic advances in computer simulation and tools for today’s design teams. This progress has now come full circle: Today’s complex chips would be impossible to design and manufacture without the tools that they helped to create. Not surprisingly, companies in many fields have invested billions of dollars, expecting that these innovation tools will lead to huge leaps in performance, reduce costs and somehow foster innovation.
But tools, no matter how advanced, do not automatically confer such benefits. In the excitement of imagining how much improvement is possible, companies can easily forget that these artifacts don’t create products and services all by themselves. People, processes and tools are jointly responsible for innovation and development. In fact, when incorrectly integrated into an organization (or not integrated at all), new tools can actually inhibit performance, increase costs and cause innovation to founder.1 In a nutshell, tools are only as effective as the people and organizations using them.
SPECIAL REPORT: IT-DRIVEN INNOVATION
This article is featured in a special series on how IT is remaking the way the best companies get innovation done.
Embedded Tools
Many new products or services depend on innovations in development tools. My research has found that new tools can significantly increase developers’ problem-solving capacity as well as their productivity, enabling them to address categories of problems that would otherwise be impossible to tackle. This is particularly true in the pharmaceutical, aerospace, semiconductor and automotive industries, among others. Furthermore, state-of-the-art tools can enhance the communication and interaction among communities of developers, even those who are “distributed” in time and space. In short, new tools (particularly those that exploit information technology) do hold the promise of faster, better, cheaper.
But that potential should be tempered: New tools must first be integrated into a system that is already in place. Specifically, they must be integrated into the work that needs to be done, not unilaterally pasted onto existing routines or substituted for what is presumed to be an equivalent. It is important to remember that tools are embedded within the organizations that deploy them as well as within the tasks the tools themselves are dedicated to performing. Moreover, each organization’s approach to how people, processes and tools are integrated is unique — a result of formal and informal routines, culture and habits. These long-standing patterns are reinforced project after project. Whether formally designated as a product development “system” or not, organizational patterns that have existed for years function as integrated entities — for better or worse. Attempting to change one aspect threatens to disrupt the overall system. How, then, can companies adopt and incorporate new tools into their existing systems to facilitate, instead of hamper, innovation?
Tools and the Auto Industry
Answers to that question can be found in several studies investigating new product development at different car manufacturers around the world. The global automotive industry provides a good microcosm for understanding tool usage for two reasons. The first is the sheer amount of tooling involved in automotive development, which is impressive in its own right and contains important general lessons for innovation management. The second is the complexity, economic significance and relentless pace of change in the auto industry, which has made it the subject of considerable research, including studies on the adoption of new tools. The data highlights how fundamentally important the issue of tools in use is, offering an almost natural experiment in the introduction of computer-integrated tools across firms. Results of this research suggest important lessons for any company incorporating new tools in its R&D process.
During the mid-1980s, Kim Clark and Takahiro Fujimoto of Harvard Business School launched a landmark study of automotive development performance and organization at 20 U.S., European and Japanese companies.2 After interviewing managers at nearly all of the world’s automakers and collecting an impressive data set on 29 car projects, the researchers concluded that the Japanese approach to product development was, on average, nearly twice as efficient as its Western counterparts and a year faster in bringing new product concepts to market.
To explain that dramatic difference, Clark and Fujimoto cited five factors. First, Japanese firms were particularly effective at leveraging supplier capabilities and simplifying project coordination. Second, the best firms applied manufacturing expertise to routine development activities such as prototyping, die making, pilot runs and production ramp-up. Third, products could be brought to market faster partly because of increased overlapping of upstream and downstream activities and better communication and hand-offs of work. Fourth, Japanese projects involved on average half as many long-term participants as U.S. and European projects, thus leading to wider task assignments for individual engineers. And, finally, the best firms employed “heavyweight” project management, leading them to excel in time, cost and quality. In the early 1990s, a follow-up study showed that U.S. and European automotive firms had been able to narrow the gap in development performance by adopting Japanese-style supplier management practices, higher degrees of simultaneous engineering and stronger project management systems.3
In 1998, I began collaborating with Fujimoto, now at the University of Tokyo, on a new round of research that would build on the prior two studies. Before we started to collect data, however, we made a significant change that resulted directly from ongoing field research and case writing in the automotive industry. Three-dimensional computer-aided design, computer-aided engineering and new rapid prototyping tools were fundamentally revolutionizing automotive development, but these changes had not yet been studied systematically. Hence, we decided to include the use of such new tools as a major part of our study. Participating firms answered about 400 questions regarding each car project, and that information was augmented by site visits and interviews at each participating company. (See “About the Research.”)
The combined research program on global automotive development performance now consists of primary data from 72 new projects that were carried out in the United States, Europe and Japan between 1980 and 1999. To understand how performance evolved over this 20-year period of constant change, we looked at the number of total engineering hours invested in each project and the amount of time companies needed to bring a new concept to market. The first variable, engineering hours, measures the level of resources required to take a concept to market introduction.4 It includes all internal hours spent on design, engineering, prototype construction and so on, as well as any external hours subcontracted to engineering service firms. Not only do engineering hours have a direct impact on the total cost of a project, they also tie up important resources, thus limiting a firm’s R&D pipeline.
The second variable, total lead time, measures the calendar time that a company needs to define, design, engineer and introduce a new vehicle to the market.5 The clock starts when a new vehicle concept is initiated and stops with the first retail sales to customers. The longer it takes to bring a product to market, the more difficult it is for companies to respond to changing technologies and customer needs, thus increasing the risk of missing market windows. Conversely, projects that are completed too hastily run the risk of products with poor quality or too little functionality.
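Both measures were adjusted for project complexity, as described in endnotes 4 and 5: raw values were regressed on complexity controls, and the residuals were treated as the adjusted performance scores. The sketch below is only an illustration of that kind of residual adjustment; it uses entirely synthetic data and hypothetical column names and is not the study’s actual data set or analysis.

```python
# Illustrative only: a minimal sketch of the complexity adjustment described
# in endnotes 4 and 5. Synthetic data, hypothetical column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 72  # same order of magnitude as the 72 projects in the combined sample

projects = pd.DataFrame({
    "body_types": rng.integers(1, 5, n),          # body styles per project
    "pct_new_parts": rng.uniform(30, 95, n),      # % value of newly designed parts
    "supplier_design": rng.uniform(10, 60, n),    # % of design done by suppliers
    "category": rng.choice(["micro", "compact", "midsize", "luxury"], n),
})

# Synthetic outcome: engineering hours rise with complexity, plus noise
projects["eng_hours"] = (
    5.0e5
    + 2.0e5 * projects["body_types"]
    + 1.0e4 * projects["pct_new_parts"]
    + rng.normal(0, 1.5e5, n)
)

# Regress raw hours on the complexity controls (category as a fixed effect)
model = smf.ols(
    "eng_hours ~ body_types + pct_new_parts + supplier_design + C(category)",
    data=projects,
).fit()

# Adjusted performance = actual minus predicted hours. Positive values mean
# worse-than-expected resource usage given the project's complexity.
projects["adjusted_hours"] = projects["eng_hours"] - model.fittedvalues
print(projects[["eng_hours", "adjusted_hours"]].head())
```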
A Conundrum
To examine the relationship between R&D performance and new tools, we first looked at the changes in project productivity that had occurred since Clark and Fujimoto’s study in the 1980s. The results tell an interesting story: Although the transition from the 1980s to early 1990s can be characterized by Western companies’ closing of the productivity gap with respect to Japanese firms, data from the late 1990s shows the gap increasing again. (See “Resource Usage in the Global Auto Industry” and “Time to Market in the Global Auto Industry.”) What had happened?
One possibility is that companies “bought” development time by putting more engineering resources to work. The assumption is that project tasks can be broken up and worked on in parallel through the addition of more engineers.6 As a result, the total project can be completed more quickly. Unfortunately, managing R&D projects is not that simple. Although companies can buy some time by throwing more resources at projects, adding more people also introduces organizational and task complexity. Moreover, the strategy doesn’t fundamentally improve how new products are developed. Examining the relationship between adjusted engineering hours and development lead time for all 72 projects in the automotive study strongly supports this notion: In fact, companies that brought vehicles to market more quickly needed fewer development resources. (See “Time to Market and Resource Usage in the Global Auto Industry.”) It seems that the ability to organize differently, employ better processes and manage projects well created fundamental capabilities that delivered both higher speed and greater efficiency.
Perhaps the most surprising finding was the convergence of many development practices that had accounted for performance differences in the 1980s. These included stronger project management, Japanese-style supplier management practices and the use of simultaneous engineering. Given that, we were surprised to find that the performance differences had actually widened again in Japan’s favor during the 1990s. What was going on?
Certainly, other forces must have been at play, including rapid advances of new technologies for CAD/CAE, computer-aided manufacturing and the availability of much-improved rapid prototyping tools. Together, such tools have been fundamentally changing the way developers experiment, solve problems, learn and interact with others, as well as manage information (and this transformation wasn’t just occurring in the automotive industry). Senior R&D managers were in fact telling us that “digital development” had been the most significant change in their entire careers. Thus, faced with a performance gap that had to be explained, we looked at the use of advanced tools as a possible driver of the Japanese advantage.
Interestingly, an analysis of the most recent car projects revealed that Western firms were leading Japanese competitors in at least two very important technology areas. Specifically, the most sophisticated CAD tools, including three-dimensional solid and surface models, were used much more extensively by U.S. and European firms. In contrast, some Japanese firms were still relying to some degree on less sophisticated tools, such as wire frames and two-dimensional models. (See “Technology of Tools Used in the Global Auto Industry.”)7 That same pattern occurred in the application of computer simulations to investigate the crashworthiness of car designs. Project data and interviews with experts confirmed that Western firms were using more complex models and tools with greater user friendliness than were many of their Japanese counterparts. For example, the number of finite elements — a measure of a simulation model’s fidelity — was higher in the United States and Europe than in Japan. (See “Complexity of Crash Simulation Models Used in the Global Auto Industry.”)
A deeper analysis of the data and interviews with managers revealed that the reasons for the recurring performance gap were complex. The poor economic performance of some Japanese firms in the 1990s while their Western competitors posted record profits surely removed some of the competitive pressures that the latter felt during the late 1980s. Furthermore, not only complacency but bureaucracy, poor planning and a short-term outlook certainly played important roles at some companies. But the research suggests a more fundamental reason for the apparent conundrum: Leading-edge tools do not result in exponential leaps in performance unless they are accompanied by change. Put another way, a company’s existing processes, organizational structure, management and culture can easily become a bottleneck when seeking to unlock the potential of new tools. In fact, research in several industries suggests that some firms can excel with “new” tools that are just good enough instead of being state of the art.
Behind the Conundrum: Common Pitfalls
Attaining the full benefits of new development tools is hardly a simple or straightforward matter. All too often, companies spend millions of dollars on tools that fail to deliver on their promise, and the culprit is typically not the technology itself but the use of that technology. Research in the automotive and other industries has uncovered a number of common pitfalls for companies adopting new R&D tools.8
Pitfall #1: Using New Tools Merely as Substitutes
When new tools first become available, companies decide whether to invest in them by determining whether existing activities can be accomplished faster or less expensively. Thus, proponents of CAE tools initially argued that substituting virtual prototypes for physical ones could, by itself, save millions of dollars. And, indeed, savings were realized through this simple act of replacement. But the greater value of state-of-the-art tools lies beyond their ability to function merely as substitutes. One manager in our study explains this by using the analogy of being stuck in morning traffic. Even if he had a Ferrari, his daily commute wouldn’t be any faster unless he could find a new route that took advantage of the sports car’s capabilities. Similarly, a company can’t unlock the full potential of leading-edge tools unless it also finds new ways to experiment, learn and manage innovation.
Consider German automaker BMW’s experience with advanced computer simulation tools.9 In the late 1990s, the company’s sales volumes were getting smaller for each model because changing customer demands led to increasingly differentiated markets. In response, BMW had no choice but to substantially increase the productivity and speed of its development system. To accomplish that, BMW wanted to perform more work in parallel, which would require greater coordination among the teams involved, and the company looked to computer tools as a possible solution. Through computer simulations, “virtual cars” that existed only in digital memory and not in the real world could be tested in parallel with ongoing design activities. The world of virtual reality also provided a logical venue for coordinating the efforts of different functional divisions of a company, such as between design and engineering.
BMW’s experience highlights the importance of understanding the role of processes and people in the adoption and use of new tools. A big advantage of simulated tests is that they can be used much earlier in the innovation process than can more costly physical tests. That, in turn, allows people to experiment with more design options and to find problems before significant resources are committed. It also enables engineers to kill a bad idea early, before it takes on a life of its own as an expensive formal project. To reap such benefits, though, BMW had to reorganize the way different groups worked together and change habits that had been so effective in the past.
Interestingly, our research suggests that some Japanese firms had an advantage precisely in this area, in spite of a tool “disadvantage.” (See “Timing and Availability of Simulation Tools and Prototypes in the Global Auto Industry.”) To test crashworthiness, these firms ran their first simulated experiments only months after vehicle layout started. Most likely, these models were far from perfect, but their creation and testing forced the technical communication and problem solving that are needed in parallel work. In contrast, non-Japanese firms started using their first simulation models months later in the process. Similarly, companies that used simulation earlier also made physical prototypes available to their developers much more quickly. These prototypes were necessary to complement computer simulations when the fidelity of the virtual models wasn’t close enough to reality. The combined result was more rapid experimentation and problem solving in their R&D organizations when such activities mattered most: during early development.
Pitfall #2: Adding (Instead of Minimizing) Interfaces
Iterative problem solving often involves different functional groups or departments. For the process to work, these efforts must be coordinated. Engineers from different disciplines design parts of a product that have to function as a whole, while prototypes are often built by another group. In such environments, iterative problem solving requires fluid hand-offs from one team to another, without the information loss and time delays that are often associated with organizational interfaces between the groups.
New tools, particularly those that are IT-mediated, can, by themselves, reduce some of these losses because information transfer is both reduced and standardized. Some CAD tools, for instance, allow a single master representation of an object that can be modified by developers. This is in sharp contrast to the practice of having many models and prototypes in multiple forms for a single product under development, which then turns any design change into a major obstacle. At the same time, these new tools create other interface problems, both functional and organizational. In the global automotive study, we examined organizational interfaces that could inhibit problem-solving cycles. In particular, we investigated how development work was divided between technology specialists and engineers. Companies employed specialists — people focused on a tool itself — to build up their expertise in the new technology, but the downside was that problem solving could be slowed when the integration of this expertise was not managed well.
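To make the single-master-representation point concrete, here is a deliberately simplified, hypothetical sketch (not any vendor’s actual CAD API): when every group works against the same master object, a design change propagates immediately, whereas a duplicated copy drifts out of sync and must be reconciled by hand.

```python
# Hypothetical illustration of a "single master representation" versus
# duplicated models; not any real CAD system's API.
from dataclasses import dataclass, field

@dataclass
class MasterModel:
    """Single source of truth for a part's design parameters (hypothetical)."""
    name: str
    parameters: dict = field(default_factory=dict)

    def update(self, **changes):
        self.parameters.update(changes)

# Engineering and crash simulation both reference the SAME master object.
door = MasterModel("front_door", {"thickness_mm": 1.2})
engineering_view = door
crash_sim_view = door

engineering_view.update(thickness_mm=1.4)          # design change by one group
print(crash_sim_view.parameters["thickness_mm"])   # 1.4 -- change propagates

# Contrast: a duplicated model must be reconciled by hand after every change.
legacy_copy = MasterModel("front_door", dict(door.parameters))
door.update(thickness_mm=1.5)
print(legacy_copy.parameters["thickness_mm"])      # still 1.4 -- out of sync
```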
Western firms were also employing more tool specialists than Japanese firms. (See “Use of Tool Specialists in the Global Auto Industry.”) Although these individuals supported the engineers, they were not expert designers and, in fact, they tended to separate the engineers from the design details and tools. In contrast, companies such as Japan’s Toyota Motor Corp. preferred simpler tools that were transparent to engineers and lowered the barriers between groups. In general, the Japanese engineers were performing more CAD/CAE work themselves, effectively reducing the number of interfaces involved overall and speeding up experimentation and problem solving. It is important to note that when project engineers are more skilled at using design tools, they are less likely to relinquish integration to technology specialists, who tend to be less familiar with the system aspects of the product under development.
Pitfall #3: Changing Tools, But Not People’s Behavior
As discussed earlier, the promise of fewer expensive physical prototypes has been a powerful argument for investing in new tools, because this benefit can be measured and toted up easily. Even companies that are reluctant to rethink their R&D processes should see some quick gains when their development teams switch from expensive physical models to cheaper virtual ones. But the research suggests that replacing expensive prototypes with computer simulation is hardly straightforward or simple. An analysis of physical prototypes per project showed no significant overall decrease in the number of prototypes built, and Western firms on average built at least as many physical car prototypes as their Japanese competitors, in spite of their use of more advanced digital tools.10
Perhaps the cost savings in prototyping were realized in new projects that started in the very late 1990s and thus were not part of our sample, as more recent anecdotal evidence suggests. Or maybe the increase in automotive regulation (for example, in the crash-testing area) increased the need to test more prototypes and, as a result, an even greater number of prototypes would have been needed if digital tools had not been available.
But in my interviews with managers, I have come across another compelling explanation that should not be underestimated: The rate of technological change often exceeds that of behavioral change. (See “Implementation Issues.”) That is, when the knowledge base of an organization depends on the use of particular materials, prototypes and tools, engineers will not easily dismiss much of what they know, nor will they change how they work overnight. That’s why people had trouble accepting the results of a simulated test when they had spent years or even decades learning from physical models. Thus, when senior managers increased a team’s budget to run more computer simulations, anticipating substantial savings, they were sorely disappointed, because people ended up building more physical prototypes to verify that the simulations were accurate. The result was larger investments in both information technology and costly physical prototypes. In some cases, the engineers’ skepticism was well founded, because the virtual tests turned out to be poor substitutes. But in areas like crashworthiness, management’s inability to convince people to change their work patterns led to many wasted resources.
Tools in Use
The drive to compete through relentless innovation means that organizations must both keep up with and take advantage of new innovation tools. The essential issue here is that managers should think of “tools in use” as a core concept. As research has shown, the effectiveness of the same tool can be totally different depending on how the technology is deployed. It is the use of a tool that determines whether value is created (or resources are wasted).
For their part, engineers, designers and technicians often think in terms of how tools function vis-à-vis the work at hand. Articulated or not, they grasp the relationship between what they must do and how they accomplish it. That, of course, can lead to a deep-seated mistrust of anything new. Many resist embracing new tools because they fear a disruption to the established (and proven) ways of doing things. Because of that, managers need to establish the connection between a new tool and its specific usage before introducing that technology into the workplace.
If managers, especially senior executives who have the final say about which new tools their companies adopt, recognize that such artifacts are embedded in people’s work, routines and processes, and are ultimately part of an innovation system, they can regard such decisions more holistically on the one hand and more “situationally” on the other. The direct connections between a new tool and the work that must be done should always be considered, in and of themselves but also in light of an organization’s strategic purpose. After all, it is not the new tool per se that makes the difference. What matters is how it can be deployed within a particular situation that is integrated into an innovation system dedicated to pursuing corporate goals.
References
1. A detailed discussion is contained in S.H. Thomke, “Experimentation Matters: Unlocking the Potential of New Technologies for Innovation” (Boston: Harvard Business School Press, 2003), which is the source of some of the material included in this article.
2. The study methods and findings were published in K.B. Clark and T. Fujimoto, “Product Development Performance: Strategy, Organization, and Management in the World Auto Industry” (Boston: Harvard Business School Press, 1991).
3. The general findings from the second round of research can be found in D. Ellison, K. Clark, T. Fujimoto and Y. Hyun, “Product Development Performance in the Auto Industry: 1990s Update,” working paper 95-066, Harvard Business School, Boston, 1995.
4. Total engineering hours are hours spent directly on projects by engineers, technicians and other employees. Measured activities include concept generation, product planning and product engineering carried out in-house or subcontracted to engineering firms. The numbers exclude suppliers’ engineering hours, general overhead, new engine and transmission development, process engineering and pilot production. To account for project complexity, the following variables were measured: (1) number of body types per project (for example, two- or four-door sedans), (2) total percentage value of new parts that were designed (platform-type projects typically had values of more than 80%), (3) product category (micro, compact, mid-size and luxury) and (4) the supplier contribution to design. These variables were similar to project controls used in K.B. Clark and T. Fujimoto’s original 1991 study (see reference 2). A regression analysis showed that body type, new part design and product category were highly significant (at less than 5%), whereas supplier design contribution was significant only at the 12% level. The variables’ regression coefficients were used to predict engineering hours for each project (given its complexity), and the predicted values were then subtracted from the actual values reported by firms. Positive residual values indicated worse than expected performance and vice versa.
5. Total development time is the longest time-to-market measure, extending from the initiation of concept development to market introduction. Other measures that are often used in industry journals cover the time from program or design approval to the start of production, which is much shorter and was also measured as part of our study. As with engineering hours, a regression analysis was used to determine the effect of project complexity on development time. The variables new part design, suppliers’ design contribution and product category were significant (at less than 10%); body type was significant only at the 18% level. The variables’ regression coefficients were used to predict development time for each project (given its complexity), and the predicted values were then subtracted from the actual values reported by firms. Positive residual values indicated worse than expected time and vice versa.
6. For a discussion of this assumed trade-off and actual empirical evidence, see K.B. Clark and T. Fujimoto, “Product Development Performance” (1991); F.P. Brooks Jr., “The Mythical Man-Month: Essays on Software Engineering,” Anniversary Edition (Boston: Addison-Wesley, 1995); and G.P. Pisano, “Development Factory: Unlocking the Potential of Process Innovation” (Boston: Harvard Business School Press, 1996).
7. The data shown were collected for six different subsystems of a car (body-in-white, interior, instrument panel, seats, suspension and engine/transmission) in 18 projects. For simplicity, the data are shown here in aggregated form by reporting only averages. Similar data were also collected for each firm’s supplier base, which showed similar regional differences but also a lower level of tool use by suppliers when compared to auto firms.
8. Detailed prescriptive advice for implementing development tools is contained in chapters 5 and 6 of S.H. Thomke, “Experimentation Matters” (2003).
9. The following discussion draws extensively from S. Thomke and A. Nimgade, “BMW AG: The Digital Auto Project (A),” Harvard Business School case no. 699-044 (Boston: Harvard Business School Publishing, 1998).
10. Physical prototypes in the automotive industry are usually made from metal or other materials that allow for functional evaluations. In contrast, partial or full-scale models made from clay, foam, wood or other similar materials are not reported here. In our research, we expected the number of physical prototypes to be affected by project complexity, but the results of a regression analysis showed otherwise. The number of body types per project, the ratio of new part design, suppliers’ contribution to design and product category had no effect on the number of prototypes built per project, and thus it was unnecessary to adjust the reported data (the significance of the regression analysis was greater than 50%). We did observe, however, that car programs with higher expected sales volumes also ended up with larger prototyping budgets, which, in turn, led to more prototypes being built.
11. For details on how the first study data were collected, see K.B. Clark and T. Fujimoto, “Product Development Performance” (1991), p. 369.