Strategic management of technological innovation – SUMMARY (course summary, Merceologia)

Summary of chapters 1, 2, 3, 4, 5, 10, 11, 12 and 13 of the book Strategic Management of Technological Innovation by Schilling. Course: Innovative Technologies and Sustainability, A. Rocchi.


Strategic management of technological innovation

Chapter 1 – The importance of technological innovation

Technological innovation: the act of introducing a new device, method or material for application to commercial or practical objectives. In many industries technological innovation is the most important driver of competitive success. Firms in a wide range of industries rely on products developed within the past five years for almost one-third of their sales and profits.

The increasing importance of innovation is due in part to the globalization of markets. Foreign competition has put pressure on firms to continuously innovate in order to produce differentiated products and services. Introducing new products helps firms protect their margins, while investing in process innovation helps firms lower their costs. Advances in IT have also played a role in speeding the pace of innovation, as they have made it easier and faster for firms to design and produce new products. These technologies help firms develop and produce more variants for narrow customer groups' (niche) needs, achieving differentiation from their competitors. While producing multiple product variations used to be expensive and time-consuming, flexible manufacturing technologies now enable firms to seamlessly transition from producing one product model to the next. Firms further reduce production costs by using common components in many of the models.

As firms such as Toyota, Samsung and Sony adopt these new technologies, they raise the bar for competitors, triggering an industrywide shift to shortened development cycles and more rapid new product introductions. The results are greater market segmentation and rapid product obsolescence. Product life cycles (the time between a product's introduction and its withdrawal from the market or replacement by a next-generation product) have become very short. This encourages firms to focus increasingly on innovation as a strategic imperative: a firm that does not innovate quickly finds its margins diminishing as its products become obsolete.

The impact of technological innovation on society

Gross domestic product (GDP): the total annual output of an economy as measured by its final purchase price. Externalities: costs (or benefits) that are borne by individuals other than those responsible for creating them. Thus, if a business emits pollutants in a community, it imposes a negative externality on the community members; if a business builds a park, it creates a positive externality for community members.

The push for innovation has had a clearly positive effect on society. Innovation enables a wider range of goods and services to be delivered to people worldwide. It has made the production of food and other necessities more efficient, yielded medical treatments that improve health conditions and enabled people to travel to and communicate with almost every part of the world. The aggregate impact of technological innovation can be observed by looking at GDP. The average world GDP per capita has risen steadily since 1971. The historic rate of economic growth in GDP could not be accounted for entirely by growth in labor and capital inputs. The economist Robert Solow argued that this unaccounted-for economic growth represented technological change: technological innovation increased the amount of output achievable from a given quantity of labor and capital.
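This argument can be written compactly with the standard growth-accounting decomposition. The following is a minimal sketch, assuming a Cobb-Douglas production function (the functional form and symbols are illustrative, not given in the summary): output Y is produced from capital K and labor L, and the portion of output growth that input growth cannot explain is attributed to technology A.

```latex
% Growth accounting (illustrative): the unexplained part of output growth
% is attributed to technological change.
\[
Y_t = A_t\,K_t^{\alpha}\,L_t^{1-\alpha}
\qquad\Longrightarrow\qquad
\underbrace{\frac{\Delta A_t}{A_t}}_{\text{Solow residual}}
\;\approx\;
\frac{\Delta Y_t}{Y_t}
-\alpha\,\frac{\Delta K_t}{K_t}
-(1-\alpha)\,\frac{\Delta L_t}{L_t}
\]
```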
Solow received a Nobel Prize for his work in 1987, and the residual became known as the Solow residual. While GDP has its shortcomings as a measure of standard of living, it does relate to the amount of goods consumers can purchase. Thus, to the extent that goods improve quality of life, we can ascribe some beneficial impact to technological innovation.

Sometimes technological innovation results in negative externalities, as it may create pollution, erosion, elimination of natural habitats and depletion of ocean stocks. However, technology is knowledge to solve problems and pursue our goals. Sometimes this knowledge is applied to problems hastily, without full consideration of the consequences and alternatives, but overall it will probably serve us better to have more knowledge than less.

Innovation by industry: the importance of strategy

The majority of effort and money invested in technological innovation comes from industrial firms. However, in the frenetic race to innovate, many firms rush into new product development without clear strategies or well-developed processes for choosing and managing projects. Such firms often initiate more projects than they can effectively support, choose projects that are a poor fit with the firm's resources and objectives, and suffer long development cycles and high project failure rates. Successful innovators have clearly defined innovation strategies and management processes.

Most innovative ideas do not become successful new products. According to a study that combines data from prior studies of innovation success rates with data on patents, venture capital funding and surveys, it takes about 3,000 ideas to produce one significantly new and successful commercial product. The innovation process is often conceived of as a funnel, with many potential new product ideas going in the wide end but very few making it through the development process. While government plays a significant role in innovation, industry provides the majority of the research and development (R&D) funds that are ultimately applied to technological innovation.

Improving a firm's innovation success rate requires a well-crafted strategy. A firm's innovation projects should align with its resources and objectives, leveraging its core competencies and helping it achieve its strategic intent. A firm's organizational structure and control systems should encourage the generation of innovative ideas while also encouraging efficient implementation. A firm's new product development process should maximize the likelihood of projects being both technically and commercially successful. To achieve this, a firm needs an in-depth understanding of the dynamics of innovation, a well-crafted innovation strategy, and well-designed processes for implementing the innovation strategy.

Chapter 2 – Sources of innovation

Innovation: the practical implementation of an idea into a new device or process. Innovation can arise from many different sources. It can originate with individuals or users who design solutions for their own needs. Innovation can also come from the research efforts of universities, government labs and incubators, or private non-profit organizations. One primary engine of innovation is firms. Firms are well suited to innovation activities because they typically have greater resources than individuals, as well as a management system. Firms also face strong incentives to develop differentiating new products and services, which may give them an advantage over non-profit or government-funded entities.
An even more important source of innovation, however, does not arise from any one of these sources alone, but rather from the linkages between them. Networks of innovators that leverage knowledge and other resources from multiple sources are one of the most powerful agents of technological advance. Sources of innovation thus form a system: firms, individuals, universities, private non-profits and government-funded research.

Creativity

Creativity is the underlying process for innovation. Creativity enables individuals and organizations to generate new and useful ideas. Creativity is considered a function of intellectual abilities, knowledge, thinking styles, personality traits, intrinsic motivation and environment. Innovation begins with the generation of new ideas, which are something imagined or pictured in the mind. Creativity is the ability to produce new and useful work/ideas. Novel work must be different from work that has been previously produced and it must be surprising; therefore it cannot simply be the next logical step in a series of known solutions.

Another important source of innovation comes from public research institutions such as universities, government labs and incubators, which develop innovations that might not otherwise have been developed. Many universities have a research mission, and in recent years universities have become more active in setting up technology transfer offices: offices designed to facilitate the transfer of technology developed in a research environment to an environment where it can be commercially applied (i.e., activities to directly commercialize the inventions of faculty). In the United States, the creation of university technology transfer offices accelerated rapidly after the Bayh-Dole Act was passed in 1980. This act allowed universities to collect royalties on inventions funded with taxpayer dollars; before this, the federal government was entitled to all rights from federally funded inventions. Universities also contribute significantly to innovation through the publication of research results that are incorporated into the development efforts of other organizations and individuals.

Governments also play an active role in conducting research and development (in their own laboratories), funding the R&D of other organizations and creating institutions to foster collaboration networks and to nurture start-ups, for example science parks (regional districts, typically set up by government, to foster R&D collaboration between government, universities and private firms) and incubators (institutions designed to nurture the development of new businesses that might otherwise lack access to adequate funding or advice). In some countries, government-funded research and development exceeds industry-funded research; for example, the US government was the main provider of research and development funds in the United States in the 1950s and 1960s, accounting for as much as 66.5% in 1964. Its share has fallen significantly since then, and in 2009 US government spending accounted for only 31% of the nation's R&D spending.
However, the decline in the government share of spending is largely due to the rapid increase in industry R&D funding rather than a real decline in the absolute amount spent by the government. Since the 1950s, national governments have actively invested in developing science parks to foster collaboration between national and local government institutions, universities and private firms. These science parks often include incubators which help overcome the market failure that can result when a new technology has the potential for important societal benefits, but its potential for direct returns is highly uncertain. Private nonprofit organizations (such as research institutes and nonprofit hospitals) are another source of innovation. These organizations both perform their own R&D and fund R&D conducted by others. Innovation in collaborative networks Probably the most significant source of innovation does not come from individual organizations or people, but from the collaborative networks that leverage resources and capabilities across multiple organizations or individuals. Collaborative networks are particularly important in high- technology sectors, where it is unlikely that a single individual or organization will possess all of the resources and capabilities necessary to develop and implement a significant innovation. By providing member firms access to a wider range of information (and other resources) than individual firms possess, interfirm networks can enable firms to achieve much more than they could achieve individually thus they are an important engine of innovation. The mid-1990s saw record peaks in alliance activity as firms scrambled to respond to rapid change in information technologies. This resulted in a very large and dense web of connected firms. However, there was a subsequent decline in alliance activity toward the end of the decade that caused the web to diminish in size and splinter apart into two large components and many small components. The large component on the left is primarily made up of organizations in the chemical and medical industries. The large component on the right is primarily made up of organizations in electronics- based industries. This probably resulted in a substantial change in the amount of information transmitted between firms. Collaboration is often facilitated by geographical proximity, which can lead to regional technology clusters such as the Silicon Valley’s semiconductor firms. City and state governments might like to know how to foster the creation of a technology cluster in their region in order to increase employment, tax revenues and other economic benefits. For firms, understanding the drivers and benefits of clustering is useful for developing a strategy that ensures the firm is well positioned to benefit from clustering. Technology clusters: regional clusters of firms that have a connection to a common technology and may engage in buyer, supplier and complementor relationships, as well as research collaboration. This can lead to greater innovation productivity. Though advances in information technology have made it easier, faster, and cheaper to transmit information great distances, several studies indicate that knowledge does not always transfer readily via such mechanisms, therefore a tech cluster benefits from the proximity in knowledge exchange. Proximity and interaction can directly influence firms’ ability and willingness to exchange knowledge. 
Knowledge that is complex (has many underlying components or many interdependencies between those components, or both) or tacit (cannot be readily codified – documented in written form) may require frequent and close interaction to be meaningfully exchanged. Firms may need to interact frequently to develop common ways of understanding and articulating the knowledge before they are able to transfer it. Also, closeness and frequency of interaction can influence a firm's willingness to exchange knowledge. When firms interact frequently and over time, they can develop trust and reciprocity norms and greater knowledge of each other, and their repeated interactions give them information as to the likelihood of their partner's behaving opportunistically. A shared understanding of the rules of engagement emerges, where each partner understands its obligations with respect to how much knowledge is exchanged, how that knowledge can be used and how they are expected to reciprocate.

A cluster of firms with high innovation productivity can lead to more new firms starting up in the vicinity and can attract other firms to the area. As firms grow, divisions may be spun off into new firms, entrepreneurial employees may start their own enterprises, and supplier and distributor markets emerge to service the cluster. Successful firms also attract new labor to the area and help to make the existing labor pool more valuable by enabling individuals to gain experience working with the innovative firms. The increase in employment and tax revenues in the region can lead to improvements in infrastructure, schools and other markets that service the population. The benefits firms reap by locating in close geographical proximity to each other are known collectively as agglomeration economies.

There are also some downsides to geographical clustering. First, the proximity of many competitors serving a local market can lead to competition that reduces their pricing power. Second, close proximity of firms may increase the likelihood of a firm's competitors gaining access to the firm's knowledge. Third, clustering can potentially lead to traffic congestion, high housing costs and higher concentrations of pollution.

A big part of the reason that technologies are often regionally localized is that technological knowledge is held by people, and people are often only reluctantly mobile. A well-known study found that engineers in Silicon Valley were more loyal to their craft than to any particular company, but they were also very likely to stay in the region even if they changed jobs. This was due in part to the labor market for their skills in the region and in part to the disruption in an individual's personal life if he or she were to move out of the region. Thus, if for some reason an innovative activity starts in a geographic location, the knowledge and expertise that accumulates might not spread readily into other geographic locations, leading to a localized cluster of technological expertise.

Studies have indicated that while many innovative activities appear to have some geographic component, the degree to which innovative activities are geographically clustered depends on things such as:
- the nature of the technology, such as its underlying knowledge base or the degree to which it can be protected by patents or copyright, and the degree to which its communication requires close and frequent interaction.
- industry characteristics, such as the degree of market concentration or stage of the industry life cycle, transportation costs, and the availability of supplier and distributor markets.
- the cultural context of the technology, such as the population density of labor or customers, infrastructure development, or national differences in the way technology development is funded or protected.

Technology spillovers are positive externality benefits of one firm's R&D, resulting from the spread of that knowledge – acquired through R&D – across organizational or regional boundaries. Evidence suggests that technology spillovers are a significant influence on innovative activity. Whether R&D benefits will spill over is partially a function of the strength of protection mechanisms such as patents, copyrights, and trade secrets. Since the strength of protection mechanisms varies significantly across industries and countries, the likelihood of spillovers varies also. The likelihood of spillovers is also a function of the nature of the underlying knowledge base (e.g., tacit knowledge may not flow readily across firm boundaries) and the mobility of the labor pool.

Chapter 3 – Types and patterns of innovation

Technological innovation can come from many sources and take many forms. Different types of technological innovations offer different opportunities for organizations and society, and they pose different demands upon producers, users and regulators. There is no single agreed-upon taxonomy to describe different kinds of technological innovations, but a few commonly used dimensions are useful for understanding some key ways that one innovation may differ from another. The path a technology follows through time is termed its technology trajectory (the path a technology takes through its lifetime; this path may refer to its rate of performance improvement, its rate of adoption in the marketplace or other change of interest). Though many factors can influence these technology trajectories, some patterns have been consistently identified across many industry contexts and over many periods. Understanding these patterns of technological innovation provides a useful foundation for formulating technology strategy.

Types of innovation

Technological innovations are often categorized into different types. Different types of innovation require different kinds of underlying knowledge and have different impacts on the industry's competitors and customers. Four of the dimensions most commonly used to categorize innovations are: product versus process innovation, radical versus incremental innovation, competence-enhancing versus competence-destroying innovation, and architectural versus component innovation.

Product innovations are embodied in the outputs of an organization – its goods or services. Process innovations, instead, are innovations in the way an organization conducts its business, such as in the techniques of producing or marketing goods or services. Process innovations are often oriented toward improving the effectiveness or efficiency of production by, for example, reducing defect rates or increasing the quantity that may be produced in a given time. A process innovation such as a genetic algorithm used in drug discovery, for instance, can speed up a firm's ability to develop a product innovation (a new therapeutic drug). New product innovations and process innovations often occur in tandem. First, new processes may enable the production of new products. Second, new products may enable the development of new processes. Finally, a product innovation for one firm may simultaneously be a process innovation for another. A well-known example of a technology's performance trajectory is the pattern of improvement in microprocessors that became known as Moore's Law.
In 1965, Gordon Moore, cofounder of Intel, noted that the density of transistors on integrated circuits had doubled every year since the integrated circuit was invented. That rate has since slowed to doubling every 18 months, but the rate of acceleration is still very steep. Figure 3.2 reveals a sharply increasing performance curve. However, Intel’s rate of investment (research and development dollars per year) has also been increasing rapidly, as shown in Figure 3.3. Not all of Intel’s R&D expense goes directly to improving microprocessor power, but it is reasonable to assume that Intel’s investment specifically in microprocessors would exhibit a similar pattern of increase. Figure 3.3 shows that the big gains in transistor density have come at a big cost in terms of effort invested. Though the curve does not yet resemble the traditional s-curve, its rate of increase is not as sharp as when the curve is plotted against years. Gordon Moore predicted that transistor miniaturization will reach its physical limits about 2017. Technologies do not always get the opportunity to reach their limits; they may be rendered obsolete by new, discontinuous technologies (a technology that fulfills a similar market need by building on an entirely new knowledge base). Initially, the technological discontinuity may have lower performance than the incumbent technology. In early stages, effort invested in a new technology may reap lower returns than effort invested in the current technology and firms are often reluctant to switch. However, if the disruptive technology has a steeper s-curve or an s-curve that increases to a higher performance limit, there may come a time when the returns to effort invested in the new technology are much higher than effort invested in the incumbent technology. New firms entering the industry are likely to choose the disruptive technology and incumbent firms face the difficult choice of trying to extend the life of their current technology or investing in switching to the new technology. If the disruptive technology has much greater performance potential for a given amount of effort, in the long run it is likely to displace the incumbent technology, but the rate at which it does so can vary significantly. S-curves are also often used to describe the diffusion of a technology. Unlike s-curves in technology performance, s-curves in technology diffusion (the spread of a technology through a population) are obtained by plotting the cumulative number of adopters of the technology against time. This yields an s-shape curve because adoption is initially slow when an unfamiliar technology is introduced to the market; it accelerates as the technology becomes better understood and utilized by the mass market, and eventually the market is saturated so the rate of new adoptions declines. One rather curious feature of technology diffusion is that it typically takes far more time than information diffusion. For example, Mansfield found that it took 12 years for half the population of potential users to adopt industrial robots, even though these potential users were aware of the significant efficiency advantages the robots offered. This could be because of the complexity of the knowledge underlying new technologies and the development of complementary resources that make those technologies useful. 
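The s-shaped diffusion pattern described above is easy to sketch numerically. The following is a minimal illustration, assuming a logistic curve with a half-adoption point at 12 years to echo the industrial-robot example; the functional form and parameters are assumptions, not data from the text.

```python
import math

def cumulative_adopters(t, market_size=1.0, midpoint=12.0, steepness=0.5):
    """Logistic s-curve of diffusion: cumulative share of potential users who
    have adopted by year t. midpoint is the year at which half of the potential
    market has adopted; all parameters here are illustrative."""
    return market_size / (1.0 + math.exp(-steepness * (t - midpoint)))

for year in (0, 4, 8, 12, 16, 20, 24):
    print(year, round(cumulative_adopters(year), 2))
# Adoption starts slowly, accelerates around the midpoint, then saturates.
```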
Although some of the knowledge necessary to utilize a new technology might be transmitted through manuals or other documentation, other aspects of knowledge necessary to fully realize the potential of a technology might be built up only through experience. Some of the knowledge about the technology might be tacit and require transmission from person to person through extensive contact. Many potential adopters of a new technology will not adopt it until such knowledge is available to them, despite their awareness of the technology and its potential advantages. Furthermore, many technologies become valuable to a wide range of potential users only after a set of complementary resources are developed for them. Finally, it should be clear that the s-curves of diffusion are in part a function of the s-curves in technology improvement: as technologies are better developed, they become more certain and useful to users, facilitating their adoption. Furthermore, as learning curve and scale advantages accrue to the technology, the price of finished goods often drops, further accelerating adoption by users. The rate at which a technology improves over time is often faster than the rate at which customer requirements increase over time. This means technologies that initially met the demands of the mass market may eventually exceed the needs of the market. Furthermore, technologies that initially served only low-end customers (segment zero) may eventually meet the needs of the mass market and capture the market share that originally went to the higher-performing technology. Several authors have argued that managers can use the s-curve model as a tool for predicting when a technology will reach its limits and as a prescriptive guide for whether and when the firm should move to a new, more radical technology. Firms can use data on the investment and performance of their own technologies, or data on the overall industry investment in a technology and the average performance achieved by multiple producers. Managers could then use these curves to assess whether a technology appears to be approaching its limits or to identify new technologies that might be emerging on s-curves that will intersect the firm’s technology s-curve. Managers could then switch s-curves by acquiring or developing the new technology. However, as a prescriptive tool, the s-curve model has several serious limitations. First, it is rare that the true limits of a technology are known in advance and there is often considerable disagreement among firms about what a technology’s limits will be. Second, the shape of a technology’s s-curve is not set in stone. Unexpected changes in the market, component technologies, or complementary technologies can shorten or extend the life cycle of a technology. Furthermore, firms can influence the shape of the s-curve through their development activities. For example, firms can sometimes stretch the s-curve through implementing new development approaches or revamping the architecture design of the technology. Christensen provides an example of this from the disk-drive industry. A disk drive’s capacity is determined by its size multiplied by its areal recording density; thus, density has become the most pervasive measure of disk-drive performance. In 1979, IBM had reached what it perceived as a density limit of ferrite- oxide–based disk drives. It abandoned its ferrite-oxide–based disk drives and moved to developing thin-film technology, which had greater potential for increasing density. 
Hitachi and Fujitsu continued to ride the ferrite-oxide s-curve, ultimately achieving densities that were eight times greater than the density that IBM had perceived to be a limit. Finally, whether switching to a new technology will benefit a firm depends on a number of factors, including: (a) the advantages offered by the new technology, (b) the new technology’s fit with the firm’s current abilities (and thus the amount of effort that would be required to switch, and the time it would take to develop new competencies), (c) the new technology’s fit with the firm’s position in complementary resources and (d) the expected rate of diffusion of the new technology. Thus, a firm that follows an s-curve model too closely could end up switching technologies earlier or later than it should. Technology cycles The s-curve model above suggests that technological change often follows a cyclical pattern. First, a technological discontinuity causes a period of turbulence and uncertainty, followed by rapid improvement, and producers and consumers explore the different possibilities enabled by the new technology. As producers and customers begin to converge on a consensus of the desired technological configuration, a dominant design emerges. The dominant design provides a stable benchmark for the industry, enabling producers to turn their attention to increasing production efficiency and incremental product improvements, then diminishing returns. This cycle begins again with the next technological discontinuity. The first design based on the initial technological discontinuity rarely becomes the dominant design. There is usually a period in which firms produce a variety of competing designs of the technology before one design emerges as dominant. The dominant design rarely embodies the most advanced technological features available at the time of its emergence. It is instead the bundle of features that best meets the requirements of the majority of producers and customers. The emergence of a new technological discontinuity can overturn the existing competitive structure of an industry, creating new leaders and new losers. Schumpeter called this process creative destruction, and argued that it was the key driver of progress in a capitalist society. Several studies have tried to identify and characterize the stages of the technology cycle in order to better understand why some technologies succeed and others fail, and whether established firms or new firms are more likely to be successful in introducing or adopting a new technology. One technology evolution model that rose to prominence was proposed by Utterback and Abernathy. They observed that a technology passed through distinct phases. In the first phase (what they termed the fluid phase), there was considerable uncertainty about both the technology and its market. Products or services based on the technology might be crude, unreliable, or expensive, but might suit the needs of some market niches. In this phase, firms experiment with different form factors or product features to assess the market response. Eventually, however, producers and customers begin to arrive at some consensus about the desired product attributes, and a dominant design (a product design that is adopted by the majority of producers, typically creating a stable architecture on which the industry can focus its efforts) emerges. 
The dominant design establishes a stable architecture for the technology and enables firms to focus their efforts on process innovations that make production of the design more effective and efficient or on incremental innovations to improve components within the architecture. Utterback and Abernathy termed this phase the specific phase because innovations in products, materials, and manufacturing processes are all specific to the dominant design. In this era, firms focus on efficiency and market penetration. Firms may attempt to achieve greater market segmentation by offering different models and price points. They may also attempt to lower production costs by simplifying the design or improving the production process. This period of accumulating small improvements may account for the bulk of the technological progress in an industry, and it continues until the next technological discontinuity.

Understanding the knowledge that firms develop during different eras lends insight into why successful firms often resist the transition to a new technology, even if it provides significant advantages. During the era of incremental change, many firms cease to invest in learning about alternative design architectures and instead invest in refining their competencies related to the dominant architecture. Most competition revolves around improving components rather than altering the architecture; thus, companies focus their efforts on developing component knowledge and knowledge related to the dominant architecture. As firms' routines and capabilities become more and more wedded to the dominant architecture, the firms become less able to identify and respond to a major architectural innovation. For example, the firm might establish divisions based on the primary components of the architecture and structure the communication channels between divisions on the basis of how those components interact. In the firm's effort to absorb and process the vast amount of information available to it, it is likely to establish filters that enable it to identify the information most crucial to its understanding of the existing technology design. As the firm's expertise, structure, communication channels and filters all become oriented around maximizing its ability to compete in the existing dominant design, they become barriers to the firm's recognizing and reacting to a new technology architecture.

While many industries appear to conform to this model in which a dominant design emerges, there are exceptions. In some industries, heterogeneity of products and production processes is a primary determinant of value, and thus a dominant design is undesirable. Art and cuisine, for example, may be industries in which there is more pressure to do things differently than to settle upon a standard.

Chapter 4

Why dominant designs are selected

A firm may find itself locked out of the market if it is unable to adopt the dominant technology. Such standards battles are high-stakes games, resulting in big winners and big losers. Increasing returns to adoption also imply that technology trajectories are characterized by path dependency (when end results depend greatly on the events that took place leading up to the outcome; it is often impossible to reproduce the results that occur in such a situation), meaning that relatively small historical events may have a great impact on the final outcome.
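The self-reinforcing dynamic behind path dependency can be illustrated with a toy simulation in the spirit of classic increasing-returns adoption models (a Polya-urn process). Everything below, including the parameters, is an illustrative assumption rather than a model from the text: each new adopter picks between two technologies with probability proportional to their current installed bases, so a small random early lead tends to get locked in.

```python
import random

def simulate_standards_battle(n_adopters=10_000, seed=None):
    """Toy model of increasing returns to adoption: each adopter chooses
    technology A or B with probability proportional to the current installed
    base, so early random leads become self-reinforcing (path dependency)."""
    rng = random.Random(seed)
    installed = {"A": 1, "B": 1}  # seed each technology with one early adopter
    for _ in range(n_adopters):
        total = installed["A"] + installed["B"]
        choice = "A" if rng.random() < installed["A"] / total else "B"
        installed[choice] += 1
    return installed

# Re-running the same model gives very different winners depending on which
# technology happens to pull ahead early.
for run in range(3):
    print(simulate_standards_battle(seed=run))
```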
Though the technology’s quality and technical advantage undoubtedly influence its fate, other factors, unrelated to the technical superiority or inferiority, may also play important roles. For instance, timing may be crucial; early technology offerings may become so entrenched that subsequent technologies, even if considered to be technically superior, may be unable to gain a foothold in the market. How and by whom the technology is sponsored may also impact adoption. If, for example, a large and powerful firm aggressively sponsors a technology (perhaps even pressuring suppliers or distributors to support the technology), that technology may gain a controlling share of the market, locking out alternative technologies. The influence of a dominant design can also extend beyond its own technology cycle. As the dominant design is adopted and refined, it influences the knowledge that is accumulated by producers and customers, and it shapes the problem-solving techniques used in the industry. Firms will tend to use and build on their existing knowledge base rather than enter unfamiliar areas.12 This can result in a very “sticky” technological paradigm that directs future technological inquiry in the area.13 Thus, a dominant design is likely to influence the nature of the technological discontinuity that will eventually replace it. Such winner-take-all markets demonstrate very different competitive dynamics than markets in which many competitors can coexist relatively peacefully. These markets also require very different firm strategies to achieve success. Technologically superior products do not always win—the firms that win are usually the ones that know how to manage the multiple dimensions of value that shape design selection. Multiple dimensions of value The value a new technology offers a customer is a composite of many different things. We first consider the value of the stand-alone technology, and then show how the stand-alone value of the technology combines with the value created by the size of the installed base and availability of complementary goods. In industries characterized by increasing returns (when the rate of return (not just gross returns) from a product or process increases with the size of its installed base), this combination will influence which technology design rises to dominance. The value a new technology offers to customers can be driven by many different things, such as the functions it enables the customer to perform, its aesthetic qualities, and its ease of use. To help managers identify the different aspects of utility a new technology offers customers, W. Chan Kim and Renee Mauborgne developed a “Buyer Utility Map.”15 They argue that it is important to consider six different utility levers, as well as six stages of the buyer experience cycle, to understand a new technology’s utility to a buyer. The stages they identify are purchase, delivery, use, supplements, maintenance, and disposal. The six utility levers they consider are customer productivity, simplicity, convenience, risk, fun and image, and environmental friendliness. Creating a grid with stages and levers yields a 36-cell utility map (see Figure 4.3). Each cell provides an opportunity to offer a new value proposition to a customer. A new technology might offer a change in value in a single cell or in a combination of cells. For example, when retailers establish an online ordering system, the primary new value proposition they are offering is greater simplicity in the purchase stage. 
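The 6 x 6 structure of the Buyer Utility Map is straightforward to represent directly. A minimal sketch follows; the stages and levers are the ones named in the text, while the example entry for the online-ordering case is an illustrative assumption.

```python
# The six buyer-experience stages and six utility levers named in the text.
STAGES = ["purchase", "delivery", "use", "supplements", "maintenance", "disposal"]
LEVERS = ["customer productivity", "simplicity", "convenience",
          "risk", "fun and image", "environmental friendliness"]

# Build the 6 x 6 grid: every (lever, stage) cell is a potential value proposition.
utility_map = {(lever, stage): None for lever in LEVERS for stage in STAGES}
assert len(utility_map) == 36

# Hypothetical entry: an online ordering system mainly offers greater
# simplicity in the purchase stage.
utility_map[("simplicity", "purchase")] = "online ordering reduces purchase effort"

print([cell for cell, value in utility_map.items() if value])
```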
On the other hand, as shown in Figure 4.3, the introduction of the Honda Insight hybrid-electric vehicle offered customers greater productivity (in the form of gas savings), image benefits, and environmental friendliness in the customer's use, supplements, and maintenance stages, while providing the same simplicity and convenience of regular gasoline-only-powered vehicles. Kim and Mauborgne's model is designed with an emphasis on consumer products, but their mapping principle can be easily adapted to emphasize industrial products or different aspects of buyer utility. For example, instead of having a single entry for customer productivity, the map could have rows for several dimensions of productivity such as speed, efficiency, scalability, and reliability. The map provides a guide for managers to consider multiple dimensions of technological value and multiple stages of the customer experience. Finally, the new benefits have to be considered with respect to the cost to the customer of obtaining or using the technology: it is the ratio of benefits to cost that determines value.

In industries characterized by network externalities, the value of a technological innovation to users will be a function not only of its stand-alone benefits and cost, but also of the value created by the size of its installed base and the availability of complementary goods (see Figure 4.4(a)). Thus, the value to consumers of using the Windows operating system is due in part to the technology's stand-alone value (for example, the ability of the operating system to make it easy for consumers to use the computer), the installed base of the operating system (and thus the number of computers with which the user can easily interact), and the availability of compatible software. Visualizing the value of technological innovations in this way makes it clear why even innovations that offer significant improvements in technological functionality often fail to displace existing technologies that are already widely adopted: even if a new innovation has a significant advantage in functionality, its overall value may be significantly less than the incumbent standard.

This situation is poignantly illustrated in the case of NeXT computers. In 1985, Steve Jobs and five senior managers of Apple Computer founded NeXT Incorporated. They unveiled their first computer in 1988. With a 25-MHz Motorola 68030 and 8 MB of RAM, the machine was significantly more powerful than most other personal computers available. It offered advanced graphics capability and even ran an object-oriented operating system (called NextStep) that was considered extremely advanced. However, the machine was not compatible with the IBM-compatible personal computers (based on Intel's microprocessors and Microsoft's operating system) that had become the dominant standard. The machine thus would not run the vast majority of software applications on the market. A small contingent of early adopters bought the NeXT personal computers, but the general market rejected them because of a dire lack of software and uncertainty about the company's viability. The company discontinued its hardware line in 1993 and ceased development of NextStep in 1996.

A similar battle was playing out in 2012 between smartphone operating systems, though in this case there were two contenders who were more evenly matched: Apple's iOS and Google's Android. Both companies offered smartphone operating systems with intuitive, powerful, and aesthetically pleasing interfaces (technological utility).
Both were aggressively building communities of applications providers that provided large ranges of interesting and/or useful applications (complementary goods). Both were also trying to build installed base through aggressive marketing and distribution. Market share estimates of the two systems varied widely based on the timing of the data announcements, the geographical scope considered, and the product scope considered, but in early 2012 it was clear that Apple and Google were in a head-to-head battle for dominance, whereas RIM's BlackBerry and Microsoft's Windows Phone 7 were barely in the race (for more on this, see the section on the "Segment Zero" threat to Microsoft in Chapter 3).

As shown in Figure 4.4(b), it is not enough for a new technology's stand-alone utility to exceed that of the incumbent standard. The new technology must be able to offer greater overall value. For the new technology to compete on its stand-alone utility alone, that utility must be so great that it eclipses the combined value of an existing technology's stand-alone utility, its installed base, and its complementary goods. In some cases, the new technology may be made compatible with the existing technology's installed base and complementary goods, as in Figure 4.4(c). In this case, a new technology with only a moderate functionality advantage may offer greater overall value to users.

Sony and Philips employed this strategy with their high-definition audio format, Super Audio CD (SACD), a high-density multichannel audio format based on a revolutionary "scalable" bit-stream technology known as Direct Stream Digital (DSD). Anticipating that users would be reluctant to replace their existing compact disc players and compact disc music collections, Sony and Philips made the new Super Audio CD technology compatible with existing compact disc technology. The Super Audio CD players included a feature that enables them to play standard CDs, and the recorded Super Audio CDs included a CD audio layer in addition to the high-density layer, enabling them to be played on standard CD systems. Customers can thus take advantage of the new technology without giving up the value of their existing CD players and music libraries.

When users are comparing the value of a new technology to an existing technology, they are weighing a combination of objective information (e.g., actual technological benefits, actual information on installed base or complementary goods), subjective information (e.g., perceived technological benefits, perceived installed base or complementary goods), and expectations for the future (e.g., anticipated technological benefits, anticipated installed base and complementary goods). Thus, each of the primary value components described above also has corresponding perceived or anticipated value components (see Figure 4.5). In Figure 4.5(a), the perceived and anticipated value components map proportionately to their corresponding actual components. However, as depicted in Figure 4.5(b), this need not be the case. For instance, perceived installed base may greatly exceed actual installed base, or customers may expect that a technology will eventually have a much larger installed base than competitors, and thus the value accrued from the technology's installed base is expected to grow much larger than it is currently. Firms can take advantage of the fact that users rely on both objective and subjective information in assessing the combined value offered by a new technology.
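A back-of-the-envelope sketch can make this comparison concrete. The component scores, weights and scale below are purely illustrative assumptions (not taken from the text); the point is only that overall value combines stand-alone utility, installed base and complementary goods, and that buyers may act on perceived rather than actual components.

```python
def overall_value(standalone, installed_base, complements,
                  weights=(0.5, 0.25, 0.25)):
    """Illustrative composite of the value components discussed in the text
    (all inputs on an arbitrary 0-10 scale; the weights are assumptions)."""
    w_s, w_i, w_c = weights
    return w_s * standalone + w_i * installed_base + w_c * complements

# A new technology with better stand-alone utility but almost no installed
# base or complements, versus an entrenched incumbent (hypothetical scores).
print(overall_value(9, 1, 1))  # new technology -> 5.0
print(overall_value(6, 9, 9))  # incumbent      -> 7.5, still higher overall

# Buyers act on perceived or anticipated components, so convincing them that
# a large installed base and complement pool are coming changes the outcome.
print(overall_value(9, 7, 7))  # new technology as perceived -> 8.0
```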
For example, even a technology with a small installed base can achieve a relatively large mind share through heavy advertising by its backers. Producers can also shape users' expectations of the future installed base and availability of complements through announcements of preorders, licensing agreements, and distribution arrangements. For example, when Sega and Nintendo were battling for dominance in the 16-bit video game console market, they went to great lengths to manage impressions of their installed base and market share, often to the point of deception. At the end of 1991, Nintendo claimed it had sold 2 million units of the Super Nintendo Entertainment System in the U.S. market. Sega disagreed, arguing that Nintendo had sold 1 million units at most. By May 1992, Nintendo was claiming a 60 percent share of the 16-bit market, and Sega was claiming a 63 percent share! Since perceived or expected installed base may drive subsequent adoptions, a large perceived or expected installed base can lead to a large actual installed base.

Such a tactic also underlies the use of "vaporware" (products that are not actually on the market and may not even exist but are advertised) by many software vendors. By building the impression among customers that a product is ubiquitous, firms can prompt rapid adoption of the product when it actually is available. Vaporware may also buy a firm valuable time in bringing its product to market. If other vendors beat the firm to market and the firm fears that customers may select a dominant design before its offering is introduced, it can use vaporware to attempt to persuade customers to delay purchase until the firm's product is available. The video game console industry also provides an excellent example here. When Sega and Sony introduced their 32-bit video game consoles (the Saturn and PlayStation, respectively), Nintendo was still a long way from introducing its next-generation console. In an effort to forestall consumer purchases of 32-bit systems, Nintendo began aggressively promoting its development of a 64-bit system (originally named Project Reality) in 1994, though the product would not actually reach the market until September 1996. The project underwent so many delays that some industry observers dubbed it "Project Unreality." Nintendo was successful in persuading many customers to wait for its Nintendo 64.

A firm can choose not to exploit its monopoly power, thus flattening the monopoly costs curve. For instance, one of the most obvious assertions of monopoly power is typically exhibited in the price charged for a good. However, a firm can choose not to charge the maximum price that customers would be willing to pay for a good. For example, many people would argue that Microsoft does not charge the maximum price for its Windows operating system that the market would bear. However, a firm can also assert its monopoly power in more subtle ways, by controlling the evolution of the industry through selectively aiding some suppliers or complementors more than others, and many people would argue that in this respect, Microsoft has taken full advantage of its near-monopoly power.

Chapter 5

The previous chapter pointed out that some industries are characterized by increasing returns to adoption, meaning that the more a technology is adopted, the more valuable it becomes.
In such industries, timing can be crucial—a technology that is adopted earlier than others may reap self- reinforcing advantages such as greater funds to invest in improving the technology, greater availability of complementary goods, and less customer uncertainty. On the other hand, the same factors that cause increasing returns to adoption may make very early technologies unattractive: if there are few users of the technology or availability of complementary goods is poor, the technology may fail to attract customers. A number of other first-mover advantages, and disadvantages, can shape how timing of entry is related to likelihood of success. Entrants are often divided into three categories: first movers (or pioneers – the first entrants to sell in a new product or service category), which are the first to sell in a new product or service category; early followers (also called early leaders – entrants that are early to market, but not first), which are early to the market but not first; and late entrants (entrants that do not enter the market until the time the product begins to penetrate the mass market or later), which enter the market when or after the product begins to penetrate the mass market. The research on whether it is better to be a first mover, early follower, or late entrant yields conflicting conclusions. Some studies that contrast early entrants (lumping first movers and early followers together) with late entrants find that early entrants have higher returns and survival rates, consistent with the notion of first-mover (or at least early-mover) advantage.1 However, other research has suggested the first firm to market is often the first to fail, causing early followers to outperform first movers.2 Still other research contends the higher returns of being a first mover typically offset the survival risk.3 A number of factors influence how timing of entry affects firm survival and profits. In this chapter, we will first examine first-mover advantages and disadvantages. We will then look more closely at what factors determine the optimal timing of entry, and its implications for a firm’s entry strategy. First mover advantages Being a first mover may confer the advantages of brand loyalty and technological leadership, preemption of scarce assets, and exploitation of buyer switching costs.4 Furthermore, in industries characterized by increasing returns, early entrants may accrue learning and network externality advantages that are self-reinforcing over time. The company that introduces a new technology may earn a long-lasting reputation as a leader in that technology domain. Such a reputation can help sustain the company’s image, brand loyalty, and market share even after competitors have introduced comparable products. The organization’s position as technology leader also enables it to shape customer expectations about the technology’s form, features, pricing, and other characteristics. By the time later entrants come to market, customer requirements may be well established. If aspects that customers have come to expect in a technology are difficult for competitors to imitate (e.g., if they are protected by patent or copyright, or arise from the first mover’s unique capabilities), being the technology leader can yield sustained monopoly rents (the additional returns – either higher revenues or lower costs – a firm can make from being a monopolist, such as the ability to set high prices, or the ability to lower costs through greater bargaining power over suppliers.). 
Even if the technology characteristics are imitable, the first mover has an opportunity to build brand loyalty before the entry of other competitors. Firms that enter the market early can preemptively capture scarce resources such as key locations, government permits, access to distribution channels, and relationships with suppliers. For example, companies that wish to provide any wireless communication service must license the rights to broadcast over particular radio frequencies from the government. In the United States, the Federal Communications Commission (FCC) is primarily responsible for allotting rights to use bands of radio frequencies (known as the spectrum) for any wireless broadcasting. The FCC first allocates different portions of the spectrum for different purposes (digital television broadcasting, third-generation wireless telecommunication, etc.) and different geographic areas. It then auctions off rights to use these segments to the highest bidders. This means that early movers in wireless services can preemptively capture the rights to use portions of the wireless spectrum for their own purposes, while effectively blocking other providers. By 2003, the proliferation of wireless services had caused the spectrum to become a scarce commodity, and the FCC was under pressure to allow the holders of wireless spectrum rights to sublet unused portions of their spectrum to other organizations.

Once buyers have adopted a good, they often face costs to switch to another good. For example, the initial cost of the good is itself a switching cost, as is the cost of complements purchased for the good. Additionally, if a product is complex, buyers must spend time becoming familiar with its operation; this time investment becomes a switching cost that deters the buyer from switching to a different product. If buyers face switching costs, the firm that captures customers early may be able to keep those customers even if technologies with a superior value proposition are introduced later.

This is often the reason given for the dominance of the QWERTY typewriter keyboard. In 1867, Christopher Sholes began experimenting with building a typewriter. At that time, letters were struck on paper by mechanical keys. If two keys were struck in rapid succession, they often would jam. Key jamming was a particularly significant problem in the 1800s, because typewriters then were designed so that keys struck the back side of the paper, making it impossible for users to see what they were typing. The typist thus might not realize he or she had been typing with jammed keys until after removing the page. Sholes designed his keyboard so that commonly used letter combinations were scattered as widely as possible over the keyboard. The QWERTY keyboard also puts a disproportionate burden on the left hand (3,000 English words can be typed with the left hand alone, while only 300 can be typed with the right hand alone). This positioning of keys would slow the typing of letter combinations, and thus reduce the likelihood of jamming the keys. Over time, many competing typewriter keyboards were introduced that boasted faster typing speeds or less-tiring typing. For example, the Hammond and Blickensderfer "Ideal" keyboard put the most commonly used letters in the bottom row for easy access, and used only three rows total.
Another example, the Dvorak keyboard, placed all five vowels and the three most commonly used consonants in the home row, and common letter combinations required alternating hands frequently, reducing fatigue. However, QWERTY's early dominance meant typists were trained only on QWERTY keyboards. By the time Dvorak keyboards were introduced in 1932, tens of millions of typists were committed to QWERTY keyboards: the switching costs of learning how to type all over again were more than people were willing to bear. Even after daisywheel keys (and later, electronic typewriters) removed all possibility of jamming keys, the QWERTY keyboard remained firmly entrenched. August Dvorak is said to have died a bitter man, claiming, "I'm tired of trying to do something worthwhile for the human race. They simply don't want to change!"

In an industry with pressures encouraging adoption of a dominant design, the timing of a firm's investment in new technology development may be particularly critical to its likelihood of success. For example, in an industry characterized by increasing returns to adoption, there can be powerful advantages to being an early provider; a technology that is adopted early may rise in market power through self-reinforcing positive feedback mechanisms, culminating in its entrenchment as a dominant design. Intel is an apt example of this. Intel's Ted Hoff invented the first microprocessor in 1971, and in 1975, Bill Gates and Paul Allen showed that it could run a version of BASIC that Gates had written. Gates's BASIC became widely circulated among computer enthusiasts, and as BASIC was adopted and applications developed for it, the applications were simultaneously optimized for Intel's architecture. IBM's adoption of Intel's 8088 microprocessor in its PC introduction secured Intel's dominant position, and each of Intel's subsequent generations of products has set the market standard.

First mover disadvantages

Despite the great attention that first-mover advantages receive, there are also arguments for not entering a market too early. In a historical study of 50 product categories, Gerard Tellis and Peter Golder found that market pioneers have a high failure rate, roughly 47 percent, and that the mean market share of market pioneers is 10 percent. By contrast, early leaders (firms that enter after market pioneers but assume market leadership during the early growth phase of the product life cycle) averaged almost three times the market share of market pioneers. Tellis and Golder point out that the market may often perceive first movers to have advantages because it has misperceived who the first mover really was. For example, while today few people would dispute Procter & Gamble's claim that it "created the disposable diaper market," in actuality Procter & Gamble entered the disposable market almost 30 years after Chux, a brand owned by a subsidiary of Johnson & Johnson. In the mid-1960s, Consumer Reports ranked both products as best buys. However, over time Pampers became very successful and Chux disappeared, and eventually people began to reinterpret history. Other studies have found that first movers earn greater revenues than other entrants, but that they also face higher costs, causing them to earn significantly lower profits in the long run. First movers typically bear the bulk of the research and development expenses for their product or service technologies, and they must also often pay to develop suppliers and distribution channels and to build consumer awareness.
A later entrant often can capitalize on the research and development investment of the first mover, fine-tune the product to customer needs as the market becomes more certain, avoid any mistakes made by the earlier entrant, and exploit incumbent inertia (the tendency for incumbents to be slow to respond to changes in the industry environment due to their large size, established routines, or prior strategic commitments to existing suppliers and customers). Later entrants can also adopt newer and more efficient production processes while early movers are either stuck with earlier technologies or must pay to rebuild their production systems. Developing a new technology often entails significant research and development expenses, and the first to develop and introduce a technology typically bears the brunt of this expense. By the time a firm has successfully developed a new technology, it may have borne not only the expense of that technology but also the expense of exploring technological paths that did not yield a commercially viable product. This firm also typically bears the cost of developing necessary production processes and complementary goods that are not available on the market. Since the new product development failure rate can be as high as 95 percent, being the first to develop and introduce an unproven new technology is expensive and risky. By contrast, later entrants often do not have to invest in exploratory research. Once a product has been introduced to the market, later entrants can also study it and learn from the market's reaction without bearing the same exploratory costs. Whether it pays to enter early or late depends on several factors; for instance, if the innovation relies on enabling technologies that are still immature, that may favor waiting for those enabling technologies to be further developed.
4. Do complementary goods influence the value of the innovation, and are they sufficiently available? If the value of an innovation hinges critically on the availability and quality of complementary goods, then the state of complementary goods determines the likelihood of successful entry. Not all innovations require complementary goods, and many innovations can utilize existing complementary goods. For example, though numerous innovations in 35-mm cameras have been introduced in the last few decades, almost all have remained compatible with standard rolls of 35-mm film; thus availability of that complementary good was ensured. If, on the other hand, the innovation requires the development of new complementary goods, then a pioneer must find a way to ensure their availability. Some firms have the resources and capabilities to develop both a good and its complements, while others do not. If the firm's innovation requires complementary goods that are not available on the market, and the firm is unable to develop those complements, successful early entry is unlikely.
5. How high is the threat of competitive entry? If there are significant entry barriers or few potential competitors with the resources and capabilities to enter the market, the firm may be able to wait while customer requirements and the technology evolve. Over time, one would expect customer expectations to become more certain, enabling technologies to improve, and support goods and services to be developed, thus increasing the likelihood that the technology the firm sponsors will possess a set of attributes that meet consumer demands. However, if the technology proves to be valuable, other firms are also likely to be attracted to the market. Thus, if entry barriers are low, the market could quickly become quite competitive, and entering a market that has already become highly competitive can be much more challenging than entering an emerging market.
Margins may already have been driven down to levels that require competitors to be highly efficient, and access to distribution channels may be limited. If the threat of competitive entry is high, the firm may need to enter earlier to establish brand image, capture market share, and secure relationships with suppliers and distributors. This is discussed further in the Research Brief “Whether and When to Enter?” 6. Is the industry likely to experience increasing returns to adoption? In industries that have increasing returns to adoption due to strong learning curve effects or network externalities, allowing competitors to get a head start in building an installed base can be very risky. If a competitor’s offering builds a significant installed base, the cycle of self-reinforcing advantages could make it difficult for the firm to ever catch up. Furthermore, if there are forces encouraging adoption of a single dominant design, a competitor’s technology may be selected. If protection mechanisms such as patents prevent the firm from offering a compatible technology, the firm may be locked out. 7. Can the firm withstand early losses? As was discussed earlier, a first mover often bears the bulk of the expense and risk of developing and introducing a new innovation. First movers thus often need significant amounts of capital that either is available internally (in the case of large firms) or can be accessed externally (e.g., through the debt or equity markets). Furthermore, the first mover must be able to withstand a significant period with little sales revenue from the product. Even in the case of successful new technologies, often a considerable period elapses between the point at which a first mover introduces a new innovation and the point at which the innovation begins to be adopted by the mass market. The s-curve shape of technology diffusion illustrates this aptly. New innovations tend to be adopted very slowly at first, while innovators and early adopters try the technology and communicate their experience to others. This slow initial takeoff of new innovations has caused the demise of many start-up firms. For example, in the personal digital assistant (PDA) industry—the precursor to smartphones—start-ups such as GO Corporation and Momenta had received accolades for their technology designs, but were unable to withstand the long period of market confusion about PDAs and ultimately ran out of capital. Companies such as IBM and Compaq survived because they were large and diversified, and thus not reliant on PDA revenues. Palm was a relatively late mover in the PDA industry so it did not have to withstand as long of a takeoff period, but even Palm was forced to seek external capital and was acquired by U.S. Robotics, which was later bought by 3COM. On the other hand, firms with significant resources also may be able to more easily catch up to earlier entrants.20 By spending aggressively on development and advertising, and leveraging relationships with distributors, a late entrant may be able to rapidly build brand image and take market share away from earlier movers. For example, though Nestlé was very late to enter the freeze-dried coffee market with Taster’s Choice, the company was able to use its substantial resources to both develop a superior product and rapidly build market awareness. It was thus able to quickly overtake the lead from General Foods’ Maxim. 8. Does the firm have resources to accelerate market acceptance? 
A firm with significant capital resources not only has the capability to withstand a slow market takeoff, but also can invest such resources in accelerating market takeoff. The firm can invest aggressively in market education, supplier and distributor development, and development of complementary goods and services. Each of these strategies can accelerate the early adoption of the innovation, giving the firm much greater discretion over entering early. Thus, a firm’s capital resources can give it some influence on the shape of the adoption curve. 9. Is the firm’s reputation likely to reduce the uncertainty of customers, suppliers, and distributors? In addition to capital resources, a firm’s reputation and credibility can also influence its optimal timing of entry. A firm’s reputation can send a strong signal about its likelihood of success with a new technology. Customers, suppliers, and distributors will use the firm’s track record to assess its technological expertise and market prowess. Customers may use the firm’s reputation as a signal of the innovation’s quality, and thus face less ambiguity about adopting the innovation. A firm with a well-respected reputation for successful technological leadership is also more likely to attract suppliers and distributors. This was aptly demonstrated in Microsoft’s entry into the videogame console industry: despite having little experience in producing hardware, suppliers and distributors eagerly agreed to work with Microsoft because of its track record in personal computing. Other things being equal, an entrant with a strong reputation can attract adoptions earlier than entrants without strong reputations. Strategies to improve timing options As should now be clear, managing the timing of entry into the market is a complex matter. If the technology has a clear advantage to consumers, entering the market early may give the entrant a path-dependent advantage that is nearly impossible for competitors to overcome. If, on the other hand, a firm enters a market very early and the advantages of the technology are not very clear to consumers, there is a strong possibility that the technology will receive a tepid welcome. Confounding this risk is the fact that watchful competitors may be able to use the firm’s failure to their advantage, refining the technology the firm has introduced to the market and making any corrections necessary to improve the technology’s market acceptance. The later entrant may be able to enter at a lower cost because it can capitalize on the research and development of the early firm, and use knowledge of the market gained from observing the early entrant’s experience. In the above, it is assumed that timing of entry is a matter of choice for the firm. However, implicit in this assumption is a corollary assumption that the firm is capable of producing the technology at any point in the time horizon under consideration. For this to be true, the firm must possess the core capabilities required to produce the technology to consumer expectations, or be able to develop them quickly. Furthermore, if the firm intends to refine an earlier entrant’s technology and beat the earlier entrant to market with a new version of this technology, it must have fast- cycle development processes. 
If a firm has very fast cycle development processes, the firm not only has a better chance at being an early entrant, but it can also use experience gained through customers' reactions to its technology to quickly introduce a refined version of its technology that achieves a closer fit with customer requirements. In essence, a firm with very fast development and deployment processes should be able to take advantage of both first- and second-mover advantages. The research on new product development cycle time indicates that development time can be greatly shortened by using strategic alliances, cross-functional new product development teams, and parallel development processes (when multiple stages of the new product development process occur simultaneously).
Chapter 10 – Organizing for innovation
A firm's size and its structural dimensions (formalization, standardization and centralization) shape how it innovates. Formalization is the degree to which the firm uses rules, procedures and written documentation to structure the behavior of individuals or groups. Formalization can substitute for some direct managerial oversight and help ensure consistent, reliable operations, but high levels of formalization can also make the firm rigid and stifle innovation. Standardization is the degree to which activities in a firm are performed in a uniform manner. It may be used to ensure that quality levels are met and that customers and suppliers are responded to consistently and equitably. However, by minimizing variation, standardization can limit the creativity and experimentation that leads to innovative ideas. Centralization is the degree to which decision-making authority is kept at top levels of the firm, while decentralization is the degree to which decision-making authority is pushed down to lower levels of the firm. Centralization can refer both to the geographical location of activities (the degree to which activities are performed in a central location for the firm) and to where power and authority over activities are located (activities might occur in locations far from the corporate headquarters, but the authority and decision making over those activities may be retained at headquarters, leading to greater centralization than their physical location would suggest). For firms that have multiple R&D projects ongoing, whether to centralize or decentralize R&D activities is a complex issue. Decentralizing R&D activities to the divisions of the firm enables those divisions to develop new products or processes that closely meet their particular division's needs. The solutions they develop are likely to fit well within the operating structure of the division and be closely matched to the requirements of the customers served by that division. The development projects also take advantage of the diversity of knowledge and market contacts that may exist in different divisions. But there is much risk of reinventing the wheel when R&D activities are decentralized. Many redundant R&D activities may be performed in multiple divisions and the full potential of the technology to create value in other parts of the firm may not be realized. Furthermore, having multiple R&D departments may cause each to forgo economies of scale and learning-curve effects. By contrast, if the firm centralizes R&D in a single department, it may maximize economies of scale in R&D, enabling greater division of labor among the R&D specialists and maximizing the potential for learning-curve effects through the development of multiple projects. It also enables the central R&D department to manage the deployment of new technologies throughout the firm, improving the coherence of the firm's new product development efforts and avoiding the possibility that valuable new technologies are underutilized throughout the organization. The use of a centralized versus decentralized development process varies by type of firm and industry.
There is some disagreement about whether centralization enhances or impedes a firm’s flexibility and responsiveness to technological change (or other environmental shifts). A highly centralized firm may be better able to make a bold change in its overall direction because its tight command-and-control structure enables it to impose such change on lower levels of the firm in a decisive manner. Decentralized firms may struggle to get the cooperation from all the divisions necessary to undergo a significant change. But decentralized firms may be better able to respond to some types of technological or environmental change because not all decisions need to be passed up the hierarchy to top management; employees at lower levels are empowered to make decisions and changes independently and thus may be able to act more quickly. The combination of formalization and standardization results in a mechanistic structure . Mechanistic structures (an organization structure characterized by a high degree of formalization and standardization, causing operations to be almost automatic or mechanical) are often associated with greater operational efficiency, particularly in large-volume production settings. The careful adherence to policies and procedures combined with standardization of most activities results in a well-oiled machine that operates with great consistency and reliability. While mechanistic structures are often associated with high centralization, it is also possible to have a highly decentralized mechanistic structure by using formalization as a substitute for direct oversight. By establishing detailed rules, procedures and standards, top management can push decision-making authority to lower levels of the firm while still ensuring that decisions are consistent with top management’s objectives. Mechanistic structures, however, are often deemed unsuitable for fostering innovation. They achieve efficiency by ensuring rigid adherence to standards and minimizing variation, potentially stifling creativity within the firm. Organic structures (an organization structure characterized by a low degree of formalization and standardization: employees may not have well-defined job responsibilities and operations may be characterized by a high degree of variation) that are more free-flowing and characterized by low levels of formalization and standardization, are often considered better for innovation and dynamic environments. In the organic structure, employees are given far more latitude in their job responsibilities and operating procedures. Since much innovation arises from experimentation and improvisation, organic structures are often thought to be better for innovation despite their possible detriment to efficiency. Many of the advantages and disadvantages of firm size are related to the structural dimensions of formalization, standardization and centralization. Large firms often make greater use of formalization and standardization because as the firm grows it becomes more difficult to exercise direct managerial oversight. Formalization and standardization ease coordination costs, at the expense of making the firm more mechanistic. Many large firms attempt to overcome some of this rigidity and inertia by decentralizing authority, enabling divisions of the firm to behave more like small companies. 
Most firms must simultaneously manage their existing product lines with efficiency, consistency and incremental innovation, while still encouraging the development of new product lines and responding to technological change through more radical innovation. Tushman and O'Reilly argue that the solution is to create an ambidextrous organization. An ambidextrous organization is one with the ability to behave almost as two different kinds of companies at once. Different divisions of the firm may have different structures and control systems, enabling them to have different cultures and patterns of operations. It is a firm with a complex organizational form that is composed of multiple internally inconsistent architectures that can collectively achieve both short-term efficiency and long-term innovation. Such firms might utilize mechanistic structures in some portions of the firm and organic structures in others. This is one of the rationales for setting up an R&D division that is highly distinct (either geographically or structurally) from the rest of the organization; a firm can use high levels of formalization and standardization in its manufacturing and distribution divisions, while using almost no formalization or standardization in its R&D division. Incentives in each of the divisions can be designed around different objectives, encouraging very different sets of behavior from employees. A firm can also centralize and tightly coordinate activities in divisions that reap great economies of scale such as manufacturing, while decentralizing activities such as R&D into many small units so that they behave like small, independent ventures. Whereas traditional research emphasized the importance of diffusing information across the firm and ensuring cross-fertilization of ideas across new product development efforts, recent research suggests that some amount of isolation of teams, at least in early development stages, can be valuable. When multiple teams interact closely, there is a risk that a solution that appears to have an advantage (at least at the outset) will be too rapidly adopted by other teams. This can cause all of the teams to converge on the same ideas, thwarting (hindering) the development of other creative approaches that might have advantages in the long run. Consistent with this, a significant body of research on "skunk works" has indicated that there can be significant gains from isolating new product development teams from the mainstream organization. Skunk Works® is a term that originated with a division of Lockheed Martin that was formed in 1943 to quickly develop a jet fighter for the US Army. The term "skunk works" has since evolved to refer more generally to new product development teams that operate nearly autonomously from the parent organization, with considerable decentralization of authority and little bureaucracy. Separating the teams from the rest of the organization permits them to explore new alternatives, unfettered (unrestricted) by the demands of the rest of the organization. Similarly, firms that have multiple product divisions might find that one or more divisions need a more organic structure to encourage creativity and fluid responses to environmental change, while other divisions benefit from a more structured and standardized approach. In 1980, Apple was churning out Apple II personal computers at a fast clip.
However, Steve Jobs was not content with the product design; he wanted a computer so user-friendly that it would appeal even to people who had no interest in the technological features of computers. Jobs did not believe that the growing corporate environment at Apple was conducive to nurturing a revolution, so he created a separate division for the Macintosh that would have its own unique culture. He tried to instill a free-spirited entrepreneurial atmosphere reminiscent of the company's early beginnings in a garage, where individualistic and often eccentric software developers would flourish. The small group of team members was sheltered from corporate commitments and distractions. If big firms can have internal structures with the incentives and behavior of small firms, then much of the logic of the impact of firm size on technological innovation rates becomes moot (debatable). A single organization may have multiple cultures, structures and processes within it; large firms may have entrepreneurial divisions that can tap the greater resources of the larger corporation, yet have the incentive structures of small firms that foster the more careful choice of projects or enhance the motivation of R&D scientists. Such entrepreneurial units may be capable of developing discontinuous innovations within the large, efficiency-driven organizations that tend to foster incremental innovations. Firms can also achieve some of the advantages of mechanistic and organic structures by alternating through different structures over time. Schoonhoven and Jelinek studied Intel, Hewlett-Packard, Motorola, Texas Instruments, and National Semiconductor and found that these firms maintained a "dynamic tension" between formal reporting structures, quasiformal structures and informal structures. While the organizations had very explicit reporting structures and formalized development processes, the organizations were also reorganized frequently to modify reporting relationships and responsibilities in response to a changing environment. Thus, while the organizations used seemingly mechanistic structures to ensure systematic and efficient production, frequent reorganizing enabled the firms to be flexible. These firms also used what Schoonhoven and Jelinek term quasiformal structures in the form of teams, task forces and dotted-line relationships (reporting relationships that were not formally indicated on the organizational chart). These quasiformal structures were more problem-focused and could change faster than the rest of the company. They also provided a forum for interaction across divisions and thus played an important boundary-spanning role. One advantage of quasiformal structures is that they fostered interactions based on interests rather than on hierarchy. This can foster more employee motivation and cross-fertilization of ideas. Some of the downsides to such quasiformal structures were that they required time to manage and they could be hard to kill. Since the quasiformal structures were not part of the formal reporting structure, it could sometimes be difficult to establish who had the authority to disband them.
Modularity and loosely coupled organizations
Another method firms use to strike a balance between efficiency and flexibility is to adopt standardized manufacturing platforms or components that can then be mixed and matched in a modular production system.
This enables them to achieve standardization advantages (such as efficiency and reliability) at the component level, while achieving variety and flexibility at the end product level. Modularity refers to the degree to which a system's components may be separated and recombined. Making products modular can exponentially increase the number of possible product configurations achievable from a given set of components.
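As a rough numerical illustration of this point, the sketch below counts how many distinct end products can be assembled when each standardized component slot offers a handful of interchangeable options; the slots and option counts are hypothetical examples, not figures from the text.

# Minimal sketch: how modular "mix and match" multiplies product configurations.
# The component slots and option counts below are hypothetical.
from math import prod

component_options = {
    "platform/chassis": 3,   # interchangeable options available for this slot
    "engine/drivetrain": 4,
    "interior package": 5,
    "infotainment unit": 3,
}

configurations = prod(component_options.values())
print(f"Options per slot: {component_options}")
print(f"Distinct end-product configurations: {configurations}")   # 3*4*5*3 = 180

# Adding just one more option to every slot grows the total multiplicatively,
# which is why modularity expands product variety so quickly.
print(f"With one extra option per slot: {prod(n + 1 for n in component_options.values())}")   # 4*5*6*4 = 480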
For example, a multinational auto manufacturer may empower one of its European divisions with the responsibility for developing new subcompact models that most closely fit the European markets but that may ultimately also be sold in the US, Canada and South America. In the meantime, its American division might bear the bulk of the responsibility for collaborating with other manufacturers to develop more efficient manufacturing processes that will ultimately be deployed corporate wide. Thus, while innovation is decentralized to take advantage of resources and talent pools offered in different geographic markets, it is also globally coordinated to meet companywide objectives. This approach also attempts to enable the learning accrued through innovation activities to be diffused throughout the firm. This strategy can be quite powerful in its ability to tap and integrate global resources, but it is also expensive in both time and money as it requires intensive coordination. In both the locally leveraged and globally linked strategies, R&D divisions are decentralized and linked to each other. The difference lies in the missions of the R&D divisions. In the locally leveraged strategy, the decentralized R&D divisions are largely independent of each other and work on the full scope of development activities relevant to the regional business unit in which they operate. This means, for example, that if their regional business unit produces and sells health care items, beauty care products, and paper products, the R&D division is likely to work on development projects related to all of these products. However, to ensure that the best innovations are leveraged across the company, the company sets up integrating mechanisms (such as holding regular cross-regional meetings or establishing a liaison such as an international brand custodian) to encourage the divisions to share their best developments with each other. By contrast, in the globally linked strategy, the R&D divisions are decentralized, but they each play a different role in the global R&D strategy. Instead of working on all development activities relevant to the region in which they operate, they specialize in a particular development activity. For example, an R&D division may be in a regional business unit that produces and sells health care, beauty care and paper products, but its role may be to focus on developing paper innovations, while other R&D divisions in the firm work on health care items or beauty care products. The role of the division should exploit some local market resource advantage (such as abundant timber or a cluster of chemical technology firms). This strategy attempts to take advantage of the diversity of resources and knowledge in foreign markets, while still linking each division through well-defined roles in the company’s overall R&D strategy. Bartlett and Ghoshal argue that, overall, the multinational firm’s objective is to make centralized innovation activities more effective (better able to serve the various local markets) while making decentralized innovation activities more efficient (eliminating redundancies and exploiting synergies across divisions). Bartlett and Ghoshal propose that firms should take a transnational approach where resources and capabilities that exist anywhere within the firm can be leveraged and deployed to exploit any opportunity that arises in any geographic market. 
They argue that this can be achieved by: - Encouraging reciprocal interdependence among the divisions of the firm (each division must recognize its dependency on the other divisions of the firm) - Utilizing integration mechanisms across the divisions, such as division-spanning teams, rotating personnel across divisions, and so on - Balancing the organization’s identity between its national brands and its global image. Chapter 11 – Managing the new product development process In many industries, the ability to develop new products quickly, effectively and efficiently is now the single most important factor driving firm success. In industries such as computer hardware and software, telecommunications, automobiles and consumer electronics, firms often depend on products introduced within the past five years for more than 50% of their sales. Yet, despite the attention for new product development, the failure rates for new product development projects are still agonizingly high. By many estimates, more than 95% of all new product development projects fail to result in an economic return. Many projects are never completed and of those that are, many flounder in the marketplace. Thus, a considerable amount of research has been focused on how to make the new product development process more effective and more efficient. Objectives of the new product development process For new product development to be successful, it must simultaneously achieve three sometimes conflicting goals: (1) maximizing the product’s fit with customer requirements, (2) minimizing the development cycle time (make it into the market + costs), and (3) controlling development costs. For a new product to be successful in the marketplace, it must offer more compelling features, greater quality or more attractive pricing than competing products. Despite the obvious importance of this imperative, many new product development projects fail to achieve it. This may occur for a number of reasons. First, the firm may not have a clear sense of which features customers value the most, resulting in the firm’s overinvesting in some features at the expense of features the customer values more. Firms may also overestimate the customer’s willingness to pay for particular features, leading them to produce feature-packed products that are too expensive to gain significant market penetration. Firms may also have difficulty resolving heterogeneity in customer demands; if some customer groups desire different features from other groups, the firm may end up producing a product that makes compromises between these conflicting demands and the resulting product may fail to be attractive to any of the customer groups. Numerous new products have offered technologically advanced features compared to existing products but have failed to match customer requirements and were subsequently rejected by the market. For example, Apple’s Newton MessagePad was exceptional on many dimensions, however, it was still much too large to be kept in a pocket, limiting its usefulness as a handheld device and many corporate users thought the screen was too small to make the product useful for their applications. Another example is Philips’ attempt to enter the video game industry. In 1989, Philips introduced its Compact Disc Interactive (CD-i). The CD-i offered a number of educational programs and played audio CDs. 
However, Philips had overestimated how much customers would value (and be willing to pay for) these features: the CD-i was priced at $799 and it was also very complex. Even products that achieve a very close fit with customer requirements can fail if they take too long to bring to market. Bringing a product to market early can help a firm build brand loyalty, preemptively capture scarce assets and build customer switching costs. A firm that brings a new product to market late may find that customers are already committed to other products. An early entrant also has more time to develop complementary goods that enhance the value and attractiveness of the product. Products introduced to the market earlier are likely to have an installed base and complementary goods availability advantage over later offerings. Another important consideration regarding development cycle time (the time elapsed from project initiation to product launch, usually measured in months or years) relates to the cost of development and the decreasing length of product life cycles. First, many development costs are directly related to time. Both the expense of paying employees involved in development and the firm's cost of capital increase as the development cycle lengthens. Second, a company that is slow to market with a particular generation of technology is unlikely to be able to fully amortize the fixed costs of development before that generation becomes obsolete. This phenomenon is particularly vivid in dynamic industries such as electronics where life cycles can be as short as 12 months. Companies that are slow to market may find that by the time they have introduced their products market demand has already shifted to the products of a subsequent technological generation. Finally, a company with a short development cycle can quickly revise or upgrade its offering as design flaws are revealed or technology advances. A firm with a short development cycle can take advantage of both first-mover and second-mover advantages. Some researchers have pointed out the costs of shortening the development cycle and rushing new products to market. For example, Dhebar points out that rapid product introductions may cause adverse consumer reactions as consumers may regret past purchases and be wary of new purchases for fear they will rapidly become obsolete. Other researchers have suggested that speed of new product development may come at the expense of quality or result in sloppy market introductions. Compressing development cycle time can result in overburdening the development team, leading to problems being overlooked in the product design or manufacturing process. Adequate product testing may also be sacrificed to meet development schedules. However, despite these risks, most studies have found a strong positive relationship between speed and the commercial success of new products. Sometimes a firm engages in an intense effort to develop a product that exceeds customer expectations and brings it to market early, only to find that its development costs have ballooned to the point that the product cannot earn an attractive return.
Involving customers and suppliers in the development process
One way firms improve the fit between new products and customer requirements is to involve customers, particularly lead users, directly in the development process. Lead users face the same general needs of the marketplace as other users, but they are likely to face those needs months or years earlier and expect to benefit significantly from a solution to those needs. According to a survey by the Product Development & Management Association, on average, firms report using the lead user method to obtain input into 38 percent of the projects they undertake.
Much of the same logic behind involving customers in the new product development process also applies to involving suppliers. By tapping into the knowledge base of its suppliers, a firm expands its information resources. Suppliers may be actual members of the product team or consulted as an alliance partner. In either case, they can contribute ideas for product improvement or increased development efficiency. For instance a supplier may be able to suggest an alternative input (or configuration of inputs) that would achieve the same functionality but at a lower cost. Additionally, by coordinating with suppliers, managers can help to ensure that inputs arrive on time and that necessary changes can be made quickly to minimize development time. Consistent with this argument, research has shown that many firms produce new products in less time, at a lower cost, and with higher quality by incorporating suppliers in integrated product development efforts. For example, consider Chrysler. Beginning in 1989, Chrysler reduced its supplier base from 2,500 to 1,140, offering the remaining suppliers long-term contracts and making them integrally involved in the process of designing new cars. Chrysler also introduced an initiative called SCORE (Supplier Cost Reduction Effort) that encouraged suppliers to make cost-saving suggestions in the development process. The net result was $2.5 billion in savings by 1998. Boeing’s development of the 777 involved both customers and suppliers on the new product development team; United employees (including engineers, pilots, and flight attendants) worked closely with Boeing’s engineers to ensure that the airplane was designed for maximum functionality and comfort. Boeing also included General Electric and other parts suppliers on the project team, so that the engines and the body of the airplane could be simultaneously designed for maximum compatibility. Crowdsourcing: a distributed problem-solving model whereby a design problem or production task is presented to a group of people who voluntarily contribute their ideas and effort in exchange for compensation, intrinsic rewards, or a combination thereof. Firms can also open up an innovation task to the public through crowdsourcing. Crowdsourcing platforms such as InnoCentive, Yet2.com, and TopCoder present an innovation problem on a public Web platform, and provide rewards to participants who are able to solve them. Often individuals participate for the sheer excitement and challenge of solving the problem, or for social or reputational benefits, rather than because they anticipate earning large rewards. For example, Ben & Jerry’s asked its customers to invent their new varieties of ice cream flavors—the submitters of the best flavors were given a trip to the Dominican Republic to see a sustainable fair trade cocoa farm. LG similarly used crowdsourcing to develop a new mobile phone, for a reward of $20,000. Tools for improving the new product development process Some of the most prominent tools used to improve the development process include stage-gate processes, Quality Function Deployment (QFD) (“house of quality”), design for manufacturing, failure modes and effects analysis, computer-aided design and computer aided manufacturing . These tools can greatly expedite the new product development process and maximize its fit with customer requirements. 1. 
To help avoid the risk of pushing bad projects forward, many managers and researchers suggest implementing tough go/kill decision points, at which managers must evaluate whether to kill the project or allow it to proceed. The most widely known development model incorporating such go/kill points is the stage-gate process developed by Robert G. Cooper. The stage-gate process provides a blueprint for moving projects through different stages of development. At each stage, a cross-functional team of people (led by a project team leader) undertakes parallel activities designed to drive down the risk of a development project. At each stage of the process, the team is required to gather technical, market and financial information to use in the decision to move the project forward (go), abandon (kill), hold or recycle the project. Before each stage there is a go/kill gate. These gates are designed to control the quality of the project and to ensure that the project is being executed in an effective and efficient manner. Gates act as a funnel. Each stage of a development project typically costs more than the stage preceding it; therefore, breaking down the process into stages deconstructs the development investment into a series of incremental commitments. Expenditures increase only as uncertainty decreases.
2. QFD was developed as a process for improving the communication and coordination among engineering, marketing and manufacturing personnel. It achieves this by taking managers through a problem-solving process in a very structured fashion. The organizing framework for QFD is the "house of quality". The house of quality is a matrix that maps customer requirements against product attributes. This matrix is completed in a series of steps:
1. The team must first identify customer requirements.
2. The team weights the customer requirements in terms of their relative importance from a customer's perspective.
3. The team identifies the engineering attributes that drive the performance of the product.
4. The team enters the correlations between the different engineering attributes to assess the degree to which one characteristic may positively or negatively affect another.
5. The team fills in the body of the central matrix. Each cell in the matrix indicates the relationship (or lack thereof) between an engineering attribute and a customer requirement, and a number indicates how strong that relationship is.
6. The team multiplies the customer importance rating of a feature by its relationship to an engineering attribute (one, three, or nine). These numbers are then summed for each column, yielding a total for the relative importance of each engineering attribute.
7. The team evaluates the competition.
8. Using the relative importance ratings established for each engineering attribute and the scores for competing products (from step 7), the team determines target values for each of the design requirements.
9. A product design is then created based on the design targets from step 8. The team evaluates the new design and assesses the degree to which each of the customer requirements has been met.
The great strength of the house of quality is that it provides a common language and framework within which the members of a project team may interact. It makes the relationship between product attributes and customer requirements very clear, highlights the competitive shortcomings of the company's existing products and helps identify what steps need to be taken to improve them.
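To make steps 2, 5 and 6 concrete, here is a minimal sketch; the customer requirements, engineering attributes, importance weights and 1/3/9 relationship scores are invented for illustration and are not taken from the text.

# Minimal house-of-quality sketch (steps 2, 5 and 6 described above).
# All requirements, attributes, weights and scores are hypothetical.

customer_importance = {          # step 2: importance from the customer's perspective
    "easy to open": 5,
    "keeps contents fresh": 4,
    "low price": 3,
}

relationships = {                # step 5: body of the matrix; strength is 1, 3 or 9 (0 = no relationship)
    "lid torque":     {"easy to open": 9, "keeps contents fresh": 3, "low price": 0},
    "seal material":  {"easy to open": 1, "keeps contents fresh": 9, "low price": 3},
    "wall thickness": {"easy to open": 0, "keeps contents fresh": 3, "low price": 9},
}

# step 6: multiply importance by relationship strength, then sum down each column
relative_importance = {
    attribute: sum(customer_importance[req] * strength for req, strength in column.items())
    for attribute, column in relationships.items()
}

for attribute, total in sorted(relative_importance.items(), key=lambda kv: -kv[1]):
    print(f"{attribute}: {total}")
# lid torque: 5*9 + 4*3 = 57; seal material: 5*1 + 4*9 + 3*3 = 50; wall thickness: 4*3 + 3*9 = 39

The attributes with the highest totals are the ones a design team would concentrate on when setting the design targets in step 8.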
3. Design for manufacturing methods (DFM) are another way of facilitating integration between engineering and manufacturing, and of bringing issues of manufacturability into the design process as early as possible. DFM is a way of structuring the new product development process. Often this involves articulating a series of design rules. The purpose is typically to reduce costs and boost product quality by ensuring that product designs are easy to manufacture. The easier products are to manufacture, the fewer the assembly steps required and the higher labor productivity will be, resulting in lower unit costs. Designing products to be easy to manufacture also decreases the likelihood of making mistakes in the assembly process, resulting in higher product quality. Bringing manufacturing considerations into an early stage of the design process can shorten development cycle time. In these ways, DFM can also increase the product's fit with customer requirements.
4. Failure modes and effects analysis (FMEA) is a method by which firms identify potential failures in a system, classify them according to their severity, and put a plan into place to prevent the failures from happening. Potential failure modes are identified and evaluated on three criteria of the risk they pose: severity, likelihood of occurrence and the inability of controls to detect the failure. Each criterion is given a score, and a composite risk priority number is created for each failure mode by multiplying its scores together. The firm can then prioritize its development efforts to target the potential failure modes that pose the greatest composite risk. The firm might find, for example, that it should focus first on failure modes that have less severe impacts but occur more often and are less detectable.
5. Computer-aided design (CAD) is the use of computers to build and test product designs. Rapid advances in computer technology have enabled the development of low-priced and high-powered graphics-based workstations with which it is possible to construct a three-dimensional image of a product or subassembly to be developed and tested in virtual reality. Engineers can quickly adjust prototype attributes by manipulating the three-dimensional model. This can reduce cycle time and lower costs.
6. Computer-aided manufacturing (CAM) is the implementation of machine-controlled processes in manufacturing. CAM is faster and more flexible than traditional manufacturing. A recent incarnation of computer-aided manufacturing is three-dimensional printing (also known as additive manufacturing), in which a design developed in a computer-aided design program is printed in three dimensions by laying down thin strips of material until the model is complete.
Tools for measuring new product development performance
Many companies use a variety of metrics to measure the performance of their new product development process. Such performance assessments help the company improve its innovation strategy and development processes. Both the metrics used by firms and the timing of their use vary substantially across firms. Measures of the success of the new product development process can help management to:
• Identify which projects met their goals and why.
• Benchmark the organization's performance compared to that of competitors or to the organization's own prior performance.
• Improve resource allocation and employee compensation.
• Refine future innovation strategies.
It's important to use multiple measures to have a fair representation of the effectiveness of the firm's development process or its overall innovation performance.
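As a minimal illustration of combining several such measures, the sketch below computes two commonly tracked figures (average development cycle time and the share of current sales earned by products launched within the past five years) for a small set of projects; all of the project data are hypothetical.

# Minimal sketch of combining several NPD performance measures.
# All project figures below are hypothetical.
from statistics import mean

# (name, cycle time in months, launch year, current-year revenue in $M)
projects = [
    ("A", 14, 2016, 4.0),
    ("B", 22, 2018, 9.5),
    ("C", 9, 2019, 6.5),
    ("D", 18, 2012, 11.0),   # an older product that is still selling
]
CURRENT_YEAR = 2020

avg_cycle_time = mean(cycle for _, cycle, _, _ in projects)
total_sales = sum(rev for _, _, _, rev in projects)
recent_sales = sum(rev for _, _, year, rev in projects if CURRENT_YEAR - year <= 5)

print(f"Average development cycle time: {avg_cycle_time:.1f} months")
print(f"Share of sales from products launched in the past five years: {100 * recent_sales / total_sales:.0f}%")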
Also, the firm's development strategy, industry and other environmental circumstances must be considered when formulating measures and interpreting results. For example, a firm whose capabilities or objectives favor development of breakthrough projects may experience long intervals between product introductions and receive a low score on measures such as cycle time or percent of sales earned on projects launched within the past five years, despite succeeding at its strategy, and vice versa.
New Product Development Process Metrics
Many firms use a number of methods to estimate the effectiveness and efficiency of the development process. To use such methods, it is important to first define a finite period in which the measure is to be applied in order to get an accurate view of the company's current procedures and reward systems.
Chapter 12 – Managing new product development teams
Firms typically organize development projects around teams, which range from functional and lightweight teams, in which members stay within their functional departments, to heavyweight and autonomous teams, in which members are dedicated to and often co-located with the project. Members of autonomous teams are removed from their functional divisions and are held fully accountable for the success of the project. They typically excel at rapid and efficient new product development. Thus, autonomous teams are typically considered to be appropriate for breakthrough projects and some major platform projects. Many autonomous teams go on to become separate divisions of the firm. The potential for conflict between the functions and the team, particularly the project manager, rises with the move from functional teams to autonomous teams.
The management of new product development teams
For a new product development team to be effective, its leadership and administrative policies should be matched to the team's structure and needs.
Team Leadership
The team leader is responsible for directing the team's activities, maintaining the team's alignment with project goals and serving as a communicator between the team and senior management. Different types of teams have different leadership needs. While lightweight teams might have a junior or middle-management leader who provides basic coordination between the functional groups, heavyweight and autonomous teams require senior managers with significant experience and organizational influence. Project managers in heavyweight and autonomous teams must have high status within the organization, act as a concept champion for the team within the organization, be good at conflict resolution, be "multilingual" (fluent in the languages of marketing, engineering and manufacturing) and be able to exert influence upon the engineering, manufacturing and marketing functions.
Team Administration
To ensure that members have a clear focus and commitment to the development project, many organizations have heavyweight and autonomous teams develop a project charter and contract book. The project charter encapsulates the project's mission and articulates exact and measurable goals for the project. It may also stipulate the team's budget, its reporting timeline and the key success criteria of the project. Establishing an explicit set of goals for the project helps ensure a common understanding of the project's purpose and priorities, helps structure the new product development process and can facilitate cooperation between team members. The contract book defines in detail the basic plan to achieve the goal laid out in the project charter. Typically, it will estimate the resources required, the development time schedule and the results that will be achieved. It's a tool for monitoring and evaluating the team's performance in meeting objectives. It is also an important mechanism for establishing team commitment to the project and a sense of ownership.
Managing Virtual Teams Recent advances in information technology have enabled companies to make greater use of virtual teams. Virtual teams are teams in which members may be a great distance from each other, but are still able to collaborate via advanced information technologies such as videoconferencing, groupware and e-mail or Internet chat programs. This enables individuals with unique skills to work on a project, regardless of their location. This is especially valuable for a company whose operations are highly global. Virtual teams pose a distinct set of management challenges. Collocation facilitates communication and collaboration. Proximity and frequent interaction help teams to develop shared norms. Virtual teams must often rely on communication channels that are much less rich than face-to-face contact and face significant hurdles in establishing norms. In the forming of virtual teams, it is important to select personnel who are both comfortable with the technologies being used and who have strong interpersonal skills. They must be able to work independently and tend to seek interaction rather than avoid it. Virtual teams also face challenges in developing trust, resolving conflict and exchanging tacit knowledge. Chapter 13 – Crafting a deployment strategy The value of any technological innovation is only partly determined by what the technology can do. A large part of the value of an innovation is determined by the degree to which people can understand it, access it and integrate it within their lives. Deployment is a core part of the innovation process. Deployment strategies can influence the receptivity of customers, distributors and complementary goods providers. Effective deployment strategies can reduce uncertainty about the product, lower resistance to switching from competing or substitute goods and accelerate adoption. Ineffective deployment strategies can cause even brilliant technological innovations to fail. Launch timing The timing of the product launch can be a significant part of a company’s deployment strategy . Generally, firms try to decrease their development cycles in order to decrease their costs and to increase their timing of entry options, but this does not imply that firms should always be racing to launch their products as early as possible. A firm can strategically use launch timing to take advantage of business cycle or seasonal effects, to position its product with respect to previous generations of related technologies, and to ensure that production capacity and complementary goods or services are in place. The role of each of these tactics is illustrated in the video game industry. If a console is introduced too early, it is likely to receive a tepid welcome because customers who recently purchased the previous-generation console are reluctant to spend money on a new console so soon. Optimizing Cash Flow versus Embracing Cannibalization For firms introducing a next generation technology into a market in which they already compete, entry timing can become a decision about whether and to what degree to embrace cannibalization which is when a firm’s sales of one product (or at one location) diminish its sales of another of its products (or at another of its locations). Research on product life cycles has emphasized the importance of timing new product introduction so as to optimize cash flows or profits from each generation and minimize cannibalization. 
If a firm’s current product is very profitable, the firm will often delay introduction of a next generation product until profits have begun to significantly decrease for the current product  maximize the return on investment in developing each generation of the product. However, in industries driven by technological innovation, delaying the introduction of a next generation product can enable competitors to achieve a significant technological gap. If competitors introduce products with a large technological advantage over the firm’s current products, customers might begin abandoning the firm’s technology. By providing incentives for existing customers to upgrade to its newest models, the firm can further remove any incentive customers have to switch to another company’s products when they purchase next generation technology. Licensing and compatibility Making a technology more open (not protecting it vigorously or partially opening the technology through licensing) could speed its adoption by enabling more producers to improve and promote the technology and allowing complementary goods developers to more easily support the technology. However, making a technology completely open poses several risks: - other producers may drive the price of the technology down to a point at which the firm is unable to recoup its development expense. - if competition drives the price down so no producer earns significant margins on the technology, no producer will have much incentive to further develop the technology. - it may cause its underlying platform to become fragmented as different producers alter it to their needs, resulting in loss of compatibility across producers and the possible erosion of product quality. In deploying a technological innovation, often a firm must decide how compatible (or incompatible) to make its technology with that provided by others or with previous generations of its own technology. If there is an existing technology with a large installed base or availability of complementary goods, the firm can sometimes leverage the value of that installed base and complementary goods by making its technology compatible with current products. If the firm wishes to avoid giving away its own installed base or complementary goods advantages to others, it may protect them by ensuring its products are incompatible with those of future entrants. Firms must also decide whether or not to make their products backward compatible with their own previous generation of technology. Backward compatible means that products of a technological generation can work with products of a previous generation. For example, a computer is backward compatible if it can run the same software as a previous generation of the computer. A firm that both innovates to prevent a competitor from creating a technological gap and utilizes backward compatibility can leverage the existing value yielded by a large range of complementary goods to its new platforms. While such a strategy may cause the firm to forfeit some sales of complementary goods for the new platform (at least initially), it can also effectively link the generations through time and can successfully transition customers through product generations while preventing competitors from having a window to enter the market. Pricing Pricing is a crucial element in the firm’s deployment strategy. Price simultaneously influences the product’s positioning in the marketplace, its rate of adoption and the firm’s cash flow . 
Before a firm can determine its pricing strategy, it must determine the objectives of its pricing model.
Survival pricing: if a firm is in an industry characterized by overcapacity or intense price competition, its objective may be simple survival, and prices are set merely to cover variable costs and some fixed costs. This is a short-run strategy.
Maximizing current profits: in the long run the firm will want to create additional value; it first estimates costs and demand, then sets the price that maximizes cash flow or the rate of return on investment.
For new technological innovations, firms often emphasize either a maximum market-skimming objective or a maximum market-share objective, and the two most common pricing strategies are therefore market skimming and penetration pricing.
Skimming: the firm attempts to maximize the margins earned on early sales by initially setting prices high, both to signal to the market that the product is a significant innovation offering a substantial performance improvement and to help the firm recoup its initial development expenses (assuming there is also high initial demand).
Penetration: the firm attempts to maximize market share when achieving high volume is important. The price is set very low (or the product is given away free) to maximize the good's market share, in the hope of rapidly attracting customers, driving volume up and driving production costs down.
High initial prices (skimming) may also attract competitors to the market and can slow adoption of the product. If costs are expected to decline rapidly with the volume of units produced, a skimming strategy can actually prove less profitable than a pricing strategy that stimulates more rapid customer adoption (a brief numerical sketch of this trade-off follows the distribution tactics below).

Distribution
Firms introducing a technological innovation can use strategic alliances or exclusivity contracts to encourage distributors to carry and promote their goods. Firms that already have relationships with distributors for other goods are at an advantage in pursuing this strategy.
Bundling Relationships
Firms can also accelerate distribution of a new technology by bundling it with another product that already has a large installed base, piggybacking on its success. As customers become familiar with the product, their ties to the technology (such as the cost of training) increase, and their likelihood of choosing the technology in future purchases may also increase. This is a successful way for firms to build their installed base and ensure the provision of complementary goods.
Contracts and Sponsorship
Firms can also set up contractual arrangements with distributors, complementary goods providers and even large end users to ensure that the technology is used, in exchange for price discounts, special service contracts, advertising assistance and the like. As the new equipment's benefits become clear to these users, their likelihood of purchasing additional goods increases, and they also pass the word on to others.
Guarantees and Consignment
If there is uncertainty about the new product or service, the firm can encourage distributors to carry the product by offering them guarantees (such as promising to take back unsold stock) or by agreeing to sell the product on consignment. A similar argument applies to complementary goods producers: if they are reluctant to support the technology, the firm can guarantee that particular quantities of complementary goods will be purchased, or it can provide the capital for their production, thus bearing the bulk of the risk.
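To make the earlier pricing point concrete, the sketch below compares cumulative profit under a skimming price path and a penetration price path when unit costs fall along a simple experience curve. It is a minimal illustration only: the prices, sales volumes and the 80 percent learning rate are all assumed numbers, not figures from the summary or the textbook, and Python is used purely for the arithmetic.

```python
# Illustrative comparison of skimming vs. penetration pricing when unit costs
# fall with cumulative volume (a simple experience-curve assumption).
# All figures are hypothetical; they only make the trade-off visible.
import math

def unit_cost(cumulative_units, first_unit_cost=100.0, learning_rate=0.8):
    """Unit cost falls to 80% of its value each time cumulative volume
    doubles (experience-curve form: c = c0 * n ** log2(learning_rate))."""
    return first_unit_cost * (cumulative_units ** math.log2(learning_rate))

def cumulative_profit(prices, volumes):
    """Total profit over several periods, costing each unit at the
    experience-curve cost implied by the units produced before it."""
    profit, produced = 0.0, 0
    for price, volume in zip(prices, volumes):
        for _ in range(volume):
            produced += 1
            profit += price - unit_cost(produced)
    return profit

# Skimming: high initial price, modest volumes that grow slowly.
skim = cumulative_profit(prices=[150, 120, 100], volumes=[100, 200, 300])

# Penetration: a low price from the start attracts far more buyers,
# which pushes the firm down the cost curve much faster.
penetrate = cumulative_profit(prices=[90, 90, 90], volumes=[600, 900, 1200])

print(f"Skimming profit:    {skim:10,.0f}")
print(f"Penetration profit: {penetrate:10,.0f}")
```

With these assumed numbers the penetration path earns more in total despite its thinner margins, because the much larger volume drives unit costs down quickly; with slower cost declines or less price-sensitive demand the comparison can easily reverse, which is exactly the judgment the choice of pricing objective is meant to capture.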
Marketing
The marketing strategy for a technological innovation must consider both the nature of the target market and the nature of the innovation. The three major marketing methods are advertising, promotions and publicity/public relations.
Advertising
Many firms use advertising to build public awareness of their technological innovation. The firm needs to craft an effective advertising message and choose advertising media that can convey this message to the appropriate target market. An effective advertising message balances being entertaining and memorable against providing a significant quantity of informative content. The media are generally chosen based on their match to the target audience, the richness of information or sensory detail they can convey, their reach and their cost per exposure.
Promotions
Firms can also use promotions at the distributor or customer level to stimulate purchase or trial. Promotions are usually temporary selling tactics that might include:
• Offering samples or free trials.
• Offering cash rebates after purchase.
• Including an additional product (a "premium") with purchase.
• Offering incentives for repeat purchase.
• Offering sales bonuses to distributor or retailer sales representatives.
• Using cross-promotions between two or more noncompeting products to increase their pulling power.
• Using point-of-purchase displays to demonstrate the product's features.
Publicity and PR
Many firms use free publicity to generate word of mouth. Viral marketing is an attempt to capitalize on individuals' social networks to stimulate word-of-mouth advertising: information is sent directly to targeted individuals ("seeding") who are well positioned in their social networks, because people may be more receptive to, or have greater faith in, information that comes through personal contacts. Firms may also sponsor special events, contribute to good causes, exhibit at trade associations or encourage online consumer reviews to generate public awareness and goodwill.
Tailoring the Marketing Plan to Intended Adopters
Innovations tend to diffuse through the population in an s-shaped pattern: adoption is initially slow because the technology is unfamiliar, then accelerates as the technology becomes better understood, and eventually slows again as the market becomes saturated and the rate of new adoptions declines (a minimal sketch of such an s-curve is given at the end of this chapter). These stages of adoption have been related to adopter categories: innovators come first; early adopters follow and cause adoption to accelerate; the early majority and late majority adopt as the innovation penetrates the mass market; and finally the laggards adopt as the innovation approaches saturation. The characteristics of these groups make them responsive to different marketing strategies:
- innovators and early adopters are typically looking for very advanced technologies that offer a significant advantage over previous generations. They are willing to take risks and to pay high prices, but they may also demand customization and technical support.
- the early majority requires that the company communicate the product's completeness, its ease of use, its consistency with the customer's way of life and its legitimacy. Firms often find it difficult to make the transition from successfully selling to early adopters to selling to the early majority.
- the late majority and laggards are often targeted through channels similar to those used for the early majority, although with an emphasis on reducing the cost per exposure.
For these later adopters, the marketing message must stress reliability, simplicity and cost-effectiveness.
Using Marketing to Shape Perceptions and Expectations
When distributors and customers assess the value of a technological innovation, they are swayed not only by evidence of the innovation's actual value, but also by their perception of its value and their expectations for its value in the future. Advertising, promotions and publicity can play a key role in influencing the market's perceptions and expectations about the size of the installed base and the availability of complementary goods. Preannouncements can generate excitement about a product before its release, while press releases forecasting sales can convince customers and distributors that the product's installed base will increase rapidly. The firm's reputation also sends a signal about its likelihood of success.
Preannouncements and Press Releases
A firm that aggressively promotes its products can increase both its actual installed base and its perceived installed base. Even products with relatively small installed bases can obtain relatively large mind share through heavy advertising, and a large perceived installed base can in turn lead to a large actual installed base. Such a tactic underlies the use of "vaporware" by many software vendors: products that are advertised before they are actually on the market and that may not even exist yet. If other vendors beat the firm to market and the firm fears that customers may settle on a dominant design before its own offering is introduced, it can use vaporware to try to persuade customers to delay their purchase until the firm's product is available.
Reputation
A firm's reputation for both technological and commercial competence critically influences the market's expectations about its likelihood of success.
Credible Commitments
A firm can also signal its commitment to an industry by making substantial investments that would be difficult to reverse.
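The s-shaped diffusion pattern referred to in the section on tailoring the marketing plan can be illustrated with a simple logistic curve. The sketch below is only an illustration: the market size, adoption speed and midpoint are assumed values, and the adopter-category boundaries use the commonly cited Rogers cumulative proportions (2.5%, 16%, 50%, 84%), which the summary itself does not state.

```python
# Minimal logistic (s-curve) sketch of cumulative adoption over time.
# Market size, speed and midpoint are hypothetical; category boundaries
# follow the commonly cited Rogers cumulative proportions.
import math

MARKET_SIZE = 1_000_000   # assumed number of potential adopters
GROWTH_RATE = 0.6         # assumed speed of adoption
MIDPOINT = 10             # assumed period at which half the market has adopted

def cumulative_adopters(t):
    """Logistic cumulative adoption: slow start, rapid middle, saturation."""
    return MARKET_SIZE / (1 + math.exp(-GROWTH_RATE * (t - MIDPOINT)))

def adopter_category(cumulative_share):
    """Label the marginal adopters reached at a given cumulative share."""
    if cumulative_share < 0.025:
        return "innovators"
    if cumulative_share < 0.16:
        return "early adopters"
    if cumulative_share < 0.50:
        return "early majority"
    if cumulative_share < 0.84:
        return "late majority"
    return "laggards"

for period in range(0, 21, 2):
    adopters = cumulative_adopters(period)
    share = adopters / MARKET_SIZE
    print(f"period {period:2d}: {adopters:9,.0f} adopters ({share:6.1%}) "
          f"-> currently reaching the {adopter_category(share)}")
```

Running the loop shows the familiar pattern behind the adopter categories: early periods add few users, the middle periods add the bulk of the market, and the final periods add only the remaining laggards, which is why the marketing emphasis shifts from advanced features toward reliability, simplicity and a low cost per exposure.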