The insurance industry is particularly interested in sustainability, given the impact that climate change has had on the industry’s profitability. In fact, climate change is the number one risk to the insurance industry. According to an Ernst & Young study,Ernst & Young (2008). climate change could result in increased mortality and health problems, increased environmentally related litigation, increased conflicts over control of resources, and negative impacts on capital markets.
According to a 2005 study by the Association of British Insurers, if carbon dioxide emission levels are doubled, the capital requirement for insurers could increase by \$76 billion, an 80%–90% increase, due to the increased risk of tropical cyclones in the United States and Japan.Association of British Insurers (2005). Allianz, Europe’s largest insurer, estimated that losses due to climate change could be as high as \$400 billion. In addition to property loss, insured companies may face carbon-regulatory risks governing their investment and insurance policies on green projects. Given these challenges, the industry is addressing the concept of sustainability and is taking notice of social, environmental, and economic impacts.
Many insurers have increased their focus on financial risk management. Yet proactive insurers are making progress in developing both investment strategies to “participate in the ‘green’ revolution in the financial markets” and in creating new climate-friendly products to address climate change risk.Mills (2007). Many of these financial products deal with green building, hurricane-resistant design, promotion of alternate fuels, and sustainable driving practices to reduce carbon emissions. Proactive insurers encourage the insured to participate in the insurance sustainability effort.
Insurance companies play an important role in social, economic, and ecological sustainability. Swiss Re has sold weather-risk products to 320,000 small farmers in India. For renewable energy-related insurance products, Willis Holdings covers potential power underproduction of wind farms. As a pioneer in offering green-building policies, Lexington Insurance Company will pay the insured to rebuild a home using environmentally friendly and energy-efficient materials after it is destroyed by a natural disaster.Tergesen (2008).
In Japan, Sompo Japan Insurance and Tokio Marine Nichido Fire Insurance Co., Ltd. have given premium discounts to 10 million policyholders who drive low-emitting cars. Travelers and Farmers cut 10% off the policy premium for hybrid cars. Progressive and GMAC insurance companies offer pay-as-you-drive (PAYD) policies in parts of the United States. In the U.S., automobiles account for 25% of all GHG emissions and it is anticipated that implementing PAYD policies and hybrid vehicle incentives could reduce emissions by 10%.Bordoff (2008).
Increasingly, insurance companies have utilized exclusion clauses—tightened conditions to foster the right decisions by customers. Some insurance companies limit liabilities for emitters of greenhouse gases and for companies that do not have a climate mitigation plan in place. “Development and establishment of business-continuity management (BCM) procedures [is used as] a prerequisite for adding on business interruption coverage to a company’s property insurance.”Ross, Mills, and Hecht (2007). Two of the world’s largest reinsurers, Swiss Re and Munich Re, require disclosure of a company’s climate strategy in their directors and officers insurance applications.Makower (2005).
As this chapter has demonstrated, the finance function, as well as the finance industry, is greatly impacted by sustainability considerations. Every aspect of finance, from investments to banking and from trading to insurance and risk, requires new thinking when we consider the social, economic, and environmental impact of business.
Products and processes have historically been designed for cradle to grave. That is, design has only considered the product from the point of manufacture to disposal. With growing awareness of environmental impacts and companies’ tendency to externalize costs, there has been a shift toward thinking about design in terms of cradle to cradle, or from the point of acquisition of raw materials to the point of recycle and reuse.McDonough and Braungart (2002). Cradle to cradle design requires a shift in thinking about traditional manufacturing, recycling, and environmentalism. Rather than settling for the least environmentally damaging approach, it encourages us to create and design a better one, integrating nature into the design process with a goal of zero waste. Products and processes embodying this design philosophy can receive Cradle to Cradle certification.McDonough Braungart Design Chemistry, LLC (2008).
4.02: Biomimicry
Biomimicry is an innovative method that searches for sustainable solutions by incorporating features found in nature into the design of products. Using biomimicry, sustainable businesses can look at nature in new ways to understand how it can help solve problems. Nature can be seen from three different perspectives: nature as model, nature as measure, and nature as mentor.Benyus (1997). Nature as model implies the emulation of natural forms, processes, or systems in product design. Nature as measure implies evaluating what is being designed against criteria from nature to see if current methods are as efficient as nature’s. Nature as mentor means creating a bond or relationship with nature, treating nature as a partner and teacher rather than just a source of raw materials.Benyus (1997).
Many industries have benefited from biomimicry. In the transportation industry, the fastest train in the world, the Shinkansen Bullet Train of the Japan Railways Group, incorporated biomimicry into its revised design. The initial design produced a loud noise whenever the train emerged from a tunnel. Designers remodeled the nose of the train after the beak of the kingfisher, a bird that dives into water to catch fish. Not only did the modification create a quieter train, but it also resulted in less electricity usage and faster travel time.Biomimicry Institute (2009). This is an excellent example of using nature to improve engineering.
Another example is GreenShield, a fabric finish made by G3i, which provides the same water and stain repellency as conventional fabric finishes with 8 times fewer harmful chemicals.Biomimicry Institute (2009). The innovation was inspired by the water repellency of the lotus plant’s leaves: the leaf’s surface texture traps air so that water droplets float and slide off cleanly, carrying dirt away with them.
After studying the flippers, fins, and tails of whales, dolphins, and sharks, the company WhalePower applied biomimicry to design a far more efficient wind turbine blade with less drag, increased lift, and delayed stall. The company expects to apply its design to fan blades of all types to gain up to 20% increased efficiencies and quieter operations.WhalePower (n.d.).
The air conditioning system of Eastgate Building, an office building in Zimbabwe, was modeled from self-cooling mounds made by termites. The building uses 90% less energy than conventional buildings of the same size, and the owners have been able to spend \$3.5 million less on air-conditioning costs.Biomimicry Institute (2009).
These are but a few examples of the many improvements in design that have been brought about through biomimicry, or nature-inspired design. Sustainable businesses can find workshops, research reports, biological consulting, field excursions, and other resource information from the Biomimicry Guild, an environmental consultation firm, and from the Biomimicry Institute, a nonprofit advocacy group. The Institute has developed an online interactive resource, AskNature.org,Retrieved March 23, 2009, from http://www.asknature.org which allows users to pose a problem and receive feedback in the form of multiple ideas or examples from nature that might be useful in solving it.
As environmental awareness becomes more prevalent, businesses are assessing how their activities affect the environment. The environmental performance of products and processes has become a key issue, which is why some companies are investigating ways to minimize effects on the environment. Life cycle analysis (LCA, sometimes referred to as life cycle assessment) measures the environmental impact of specific products or processes from cradle to grave. Cradle to grave begins with the gathering of raw materials from the earth to create the product and ends at the point of materials disposal, recycle, or reuse (although LCA uses the term cradle to grave, recycle and reuse scenarios can be built into the analysis for a more accurate cradle to cradle analysis). LCA provides a snapshot in time of a specific product from a specific manufacturer, and it may be difficult to generalize findings. However, LCA is a useful tool for making product and process decisions that consider environmental criteria. The benefit of LCA is that businesses can identify the most effective improvements to reduce cumulative environmental impacts resulting from all stages in the product life cycle, often including upstream and downstream impacts not considered in more traditional analyses (e.g., raw material extraction, material transportation, ultimate product disposal, etc.). LCA is widely used for different purposes by different groups: environmental groups use it to inform consumers on what to buy, legislators use it for creating rules and regulations, and manufacturers use it as they seek to improve design and production standards. Less commonly used methods for environmental comparisons include value–impact assessments, environmental option assessments, and impact analysis matrices.
The LCA process is a systematic, phased approach comprising four components: goal definition and scoping, inventory analysis, impact assessment, and interpretation. The first stage is goal definition and scoping, which identifies the purpose of the analysis and the context in which the assessment will be conducted. In defining the scope of the LCA, it is important to define the system boundaries, since they can affect the outcomes of an LCA. Therefore, when comparing multiple products, such as plastic versus corn-based disposable cutlery, it is essential to ensure that the same system boundaries are used to examine both. A functional unit also needs to be selected, such as a box of cereal, a bar of soap, or a ton of grain. The definition of the boundaries should include where the material is extracted (the cradle) and what the final disposal point for the product is (the grave).
The next stage is the inventory analysis where data is collected related to energy, water, and materials usage. LCA includes an analysis of what has been used from the environment, such as raw materials, and what has been released into the environment, such as GHG emissions, solid waste disposal, and wastewater discharges. When moving to the inventory analysis stage, sustainable companies find it much easier to envision the system boundaries for data collection by developing a model of the life cycle or a flow diagram. A flow diagram is a map depicting inputs and outputs within the system boundaries. The diagram allows the investigator to break down the system into a set of subsystems that represent particular phases of the life cycle and shows linkages across these phases.Bhat (1996). For example, the flow chart may include raw material extraction, raw material processing, transportation, manufacture, production fabrication, filling and packaging, assembly, distribution, use, reuse, maintenance, recycle, and waste disposal. The focus of the inventory analysis is data collection of the raw material and energy consumption and emissions to air, water, and land. Data can be collected from various sources.
Suppliers of materials and energy as well as consultants specializing in sustainability can provide valuable information. Other sources that can provide information are government and industrial databases, government reports, existing LCA reports, and laboratory test data. LCA, though very valuable to sustainable businesses, is complex and labor intensive. Software is available to eliminate the need to conduct complex calculations. A sample of LCA software tools can be found at the following Web site: www.life-cycle.org/?page_id=125.Gloria (2009).
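The inventory bookkeeping behind a flow diagram can be sketched in a few lines of code: each stage records its resource inputs and emission outputs per functional unit, and the inventory is the sum across every stage inside the system boundary. The sketch below is illustrative only; every stage name and figure is an invented placeholder, not data from any LCA database.

```python
from collections import Counter

# Each life-cycle stage maps flow names to amounts per functional unit.
# All stage names and figures are invented placeholders.
stages = {
    "raw_material_extraction": {"energy_MJ": 12.0, "water_L": 30.0, "co2_kg": 0.9},
    "manufacturing":           {"energy_MJ": 25.0, "water_L": 10.0, "co2_kg": 1.8},
    "transportation":          {"energy_MJ": 6.0,  "water_L": 0.0,  "co2_kg": 0.5},
    "use":                     {"energy_MJ": 3.0,  "water_L": 5.0,  "co2_kg": 0.2},
    "disposal":                {"energy_MJ": 1.0,  "water_L": 0.0,  "co2_kg": 0.3},
}

def total_inventory(stages):
    """Aggregate inputs and outputs per functional unit across all stages."""
    totals = Counter()
    for flows in stages.values():
        totals.update(flows)  # Counter sums amounts flow-by-flow
    return dict(totals)

for flow, amount in total_inventory(stages).items():
    print(f"{flow}: {amount:.1f}")
```

One convenience of this representation is that changing the system boundary is just a matter of adding or removing stages from the dictionary, which makes it easy to see how boundary choices change the totals.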
The two final stages, life cycle impact analysis and interpretation, evaluate the effects of resources and emissions identified in the previous stage. The third stage uses the findings of the inventory analysis to conduct an impact analysis that considers the consequential effects on population and ecology. Impact analysis provides quantifiable impact information on such issues as environmental and human health, resource depletion, and social welfare. The steps that have been identified with the impact analysis stage are identifying relevant environment impact categories, for example, global warming or acidification; classification or classifying carbon dioxide in relation to global warming; characterization or modeling the potential impact of carbon dioxide on global warming; describing impacts in ways for comparison; sorting and ranking indicators; weighting the most important impacts; and evaluating the results.Scientific Applications International Corporation (2006). The final stage is to interpret the findings from the previous stages to make informed decisions for products and processes.Scientific Applications International Corporation (2006).
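The classification and characterization steps just described reduce, for each impact category, a list of inventory flows to a single indicator. A minimal sketch for the global warming category follows; the 100-year characterization factors are the widely published IPCC AR4 global warming potentials (CO2 = 1, CH4 = 25, N2O = 298), while the emission amounts themselves are invented.

```python
# 100-year global warming potentials (kg CO2-eq per kg of gas), per IPCC AR4.
GWP_100 = {"co2": 1.0, "ch4": 25.0, "n2o": 298.0}

# Invented inventory-stage emissions for one functional unit, in kg.
inventory_kg = {"co2": 3.7, "ch4": 0.02, "n2o": 0.001}

def global_warming_score(inventory, factors):
    """Classification + characterization: total kg CO2-eq for the category."""
    return sum(amount * factors[gas] for gas, amount in inventory.items())

score = global_warming_score(inventory_kg, GWP_100)
print(f"{score:.3f} kg CO2-eq")  # 3.7 + 0.02*25 + 0.001*298 = 4.498
```

Other impact categories (acidification, eutrophication, and so on) work the same way with their own factor tables, after which the per-category indicators can be normalized, weighted, and ranked as the text describes.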
The greatest benefit of an LCA is that it allows scientific comparison of products or processes in order to determine the most environmentally friendly option from cradle to grave. This scientific evidence may or may not support our beliefs about the best choice among options (see Note 5.4 "Test Your Knowledge"). However, the limitations of LCA studies should be understood when interpreting results. LCA studies are a static profile capturing the qualities of a specific product at a moment in time. The studies are constrained by the product (or process) selected, the manufacturer selected, its manufacturing practices, its supply chain practices, and the other boundaries of scope defined at the onset of the study. In addition, there are numerous approaches to the use of LCA, which further restrict comparison of studies. For example, depending on the purpose of the LCA, researchers may opt to use economic input–output LCA, screening LCA, process LCA, hybrid LCA, full-product LCA, financial LCA, life cycle energy analysis, or other specific approaches. As such, there exists much controversy over LCA study results as an indication of eco-friendliness.Narayan and Patel (n.d.). Furthermore, there is criticism that LCA studies focus only on environmental aspects and neglect other aspects of sustainability. While not a perfect method, LCA is the best model that exists for considering the environmental impact of products, processes, and services.
TEST YOUR KNOWLEDGE
Based on the results of life cycle analysis (LCA) studies,* which is the more environmentally friendly choice?
1. Paper or Styrofoam cup? LCA research shows production of Styrofoam is less energy and water intensive than paper cups and that production of paper cups creates more greenhouse gas (GHG) emissions.Haag, Maloney, and Ward (2006). The conclusion: Styrofoam is better from an environmental standpoint, but neither is ideal.Haag et al. (2006).
2. Stainless steel coffee mug or ceramic mug or Styrofoam cup? LCA research shows a reusable ceramic mug is more environmentally friendly than Styrofoam as long as it is used at least 46 times (that’s 46 cups of coffee!).Paster (2006). The LCA also shows that a stainless steel mug must be used at least 396 times to be more environmentally friendly than Styrofoam.Paster (2006).
3. Biodegradable to-go food containers or Styrofoam? LCA research shows biodegradable bioplastic containers made from corn or other agricultural products create more GHG emissions than Styrofoam.Athena Sustainable Materials Institute (2006).
4. Bioplastic disposable cutlery or plastic? LCA research shows that bioplastic products made from corn or other agricultural products (such as PLA or PHA) require more energy and produce more GHG emissions in manufacturing than do petroleum-based plastic cutlery.Gerngross and Slater (2000).
5. Biodegradable or plastic or paper bags? LCA research shows that plastic bags produce the least environmental impact in manufacturing, transportation, and recycling.Lilienfeld (2007).
* Since the time of the studies mentioned here, products and processes may have improved, thus impacting the results if another LCA study were to be conducted today. Updated LCA studies are needed.
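The reusable-mug results in item 2 above follow from simple break-even arithmetic: a reusable cup "pays back" its larger production footprint once the per-use saving over a disposable has covered it. A sketch with invented energy figures (not the numbers from the studies cited):

```python
import math

def breakeven_uses(reusable_production, per_wash, disposable_per_use):
    """Smallest n with reusable_production + n*per_wash <= n*disposable_per_use."""
    saving_per_use = disposable_per_use - per_wash
    if saving_per_use <= 0:
        return None  # washing costs as much as a disposable: never breaks even
    return math.ceil(reusable_production / saving_per_use)

# Hypothetical figures: a ceramic mug takes 900 kJ to make and 18 kJ per wash,
# while each disposable cup takes 38 kJ to make.
print(breakeven_uses(900, 18, 38))  # -> 45 uses to break even
```

The same structure explains why the stainless steel mug's break-even point is so much higher: a larger production footprint in the numerator pushes the required number of uses up.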
As an example, an LCA of PLA (a corn-based bioplastic manufactured by Dow Chemical’s NatureWorks, LLC) versus plastic found that the manufacture of plastic was less energy intensive, thus emitting fewer greenhouse gases during the manufacturing process, and that the plastic manufacturing process required less water. Therefore, the conclusion was that plastic was a better choice than PLA from an environmental impact standpoint. However, when the manufacturer of PLA, NatureWorks LLC, began purchasing wind power carbon offsets in 2006, the company’s LCA studies suggested that NatureWorks’s PLA was now the better choice from an environmental impact standpoint.Vink (2007). Others have disagreed with these results, arguing that the purchase of wind power carbon offsets (an investment in another company’s wind power project) does not bring the wind power to the NatureWorks manufacturing facility and, as such, does not reduce the intensity of the electricity consumption during the PLA manufacturing process.Athena Institute (2006). As this example demonstrates, an LCA study compares specific products and determines their impact at a point in time, given the manufacturer, its various processes, and the boundaries defined for the study. This limits generalization of the findings to similar products by other manufacturers.
4.04: Crowdsourcing
Organizations have long used techniques such as brainstorming, the Delphi technique, and quality circles for employees and managers to generate creative solutions to problems. Crowdsourcing is a similar idea on a larger scale, using the Web to reach a larger set of problem solvers.Howe (2006). Problems are made available via the Internet in the form of an open call for solutions. Participants (the crowd) may be customers, suppliers, employees, member communities, or simply the general public. The participants suggest solutions to the problem, discuss their merits or disadvantages, and select favorite choices. They can be motivated to do so through awards, recognition, or financial compensation. Participants are often potential end users of the product and are generally willing to provide ideas and solutions from that perspective.
Sustainable businesses can benefit from crowdsourcing, which has also been referred to as community-based design, as a substitute for in-house R&D to reduce overhead and staffing expenses. Businesses can create their own online crowdsourcing site, or they can utilize one of the many sites that are currently available. Online discussion and voting from the community at large provides results similar to company-driven marketing research. Companies can obtain feedback, ideas, and solutions from a wider range of talent, which can conceivably yield better products with faster time to market and at lower costs.
As an example, InnoCentive provides outsourced research functions in a variety of disciplines such as life sciences, computer science, business and entrepreneurship, engineering, and chemistry. Sustainable organizations can register with InnoCentive as solution seekers, while individuals can register as solvers. Organizations post a dilemma or problem for which they are seeking a solution, and the open community of solvers is available to offer suggestions and solutions.
For example, SunNight Solar developed solar-powered flashlights for use in developing countries and areas without electricity. The initial design provided task lighting, but the goal was to create another design to replace kerosene lanterns (a safety and environmental hazard) and to illuminate entire rooms. After several failed design attempts, SunNight Solar CEO Mark Bent turned to InnoCentive and put forth the design challenge to InnoCentive’s social network of over 140,000 solvers. The challenge was solved and the new SL-2 light, or Super BOGO, was sent into production.
Other crowdsourcing venues that serve a broad range of industries or disciplines include Innovation Exchange, NineSigma, Fellowforce, and Yet2.com.Retrieved March 23, 2009, from http://www.yet2.com CrowdSPRING focuses on contributions for logo design, business card design, graphic design, Web site design, and photography.Retrieved March 26, 2009, from http://www.crowdspring.com Amazon created a platform called the Amazon Mechanical Turk on which tasks called “HITs” (Human Intelligence Tasks) can be made public for people to work on and receive compensation.Retrieved March 26, 2009, from https://www.mturk.com/mturk/welcome
As with other functions of the business, sustainability brings new ways of thinking to the task of R&D. From the way products are designed to the way research is conducted and problems are solved, sustainability challenges our old mindsets.
The first element of the marketing mix is the product. The sustainable business addresses issues related to the product’s design, packaging, and branding.
Sustainable businesses focus on green product design and development, as discussed in Chapter 5. Green product design and development engages in design for the environment, sustainable product architectures, design for flexibility and reuse, green product testing, design for recycling, and life cycle analysis (LCA) for sustainability.
In designing for the environment, the sustainable business will become familiar with the International Organization for Standardization (ISO) 14000 standards, which focus on environmental management issues. The standards are quality guidelines for companies to continuously identify, control, and improve environmental performance. The sustainable business will take steps to ensure that product testing does not cause unnecessary and harmful social or environmental impacts. Design for recycling, flexibility, and reuse not only reduces environmental impact but can also create cost efficiencies for the organization. It is important that the company conduct LCA on products and processes (discussed in Chapter 5). LCA is a method to better understand the impact of a product, service, or process throughout the entire duration of its life, from acquisition of raw materials through use or reuse to its eventual disposal.
Sustainability can also be applied to service design. Businesses providing services such as hospitals, hotels, and restaurants will focus on issues such as minimizing nonrenewable energy consumption, protecting water sources, enhancing the indoor air quality for the consumer, and using environmentally preferable products in providing those services.
A sustainable business also increases efforts to reduce waste and environmental impact through product packaging. Reducing the size of the package or redesigning the shape may result in increased efficiencies in storage and transportation. Eliminating plastic wrap or liners from products will reduce the amount of waste transferred to the landfill. Furthermore, biodegradable, recyclable, and reusable materials for packaging will significantly reduce the long-term environmental impact of packaging. Lastly, the packaging material itself may be altered.
Wal-Mart Stores, Inc. (and Sam’s West, Inc.) was the first to implement a packaging scorecard to evaluate the impact of packaging from suppliers. The scorecard criteria cover such items as greenhouse gas/carbon dioxide (GHG/CO2) emissions per ton of production, product–package ratio, cube utilization, recycled content, renewable energy, and transportation. Businesses using a packaging scorecard have an objective measure of commitment to sustainability efforts and can inform suppliers of that commitment.
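A packaging scorecard of this kind can be reduced to a weighted sum over criteria. The criteria names below mirror the list in the text, but the weights and the supplier's 0–100 ratings are hypothetical, not Wal-Mart's actual scorecard values.

```python
# Hypothetical criterion weights (must sum to 1); not Wal-Mart's actual values.
weights = {
    "ghg_per_ton_of_production": 0.15,
    "product_package_ratio":     0.15,
    "cube_utilization":          0.15,
    "recycled_content":          0.25,
    "renewable_energy":          0.15,
    "transportation":            0.15,
}

def package_score(ratings, weights):
    """Weighted sum of criterion ratings (0-100 scale); higher is better."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(ratings[criterion] * w for criterion, w in weights.items())

# Invented ratings for one supplier's packaging proposal.
supplier_a = {
    "ghg_per_ton_of_production": 70,
    "product_package_ratio":     80,
    "cube_utilization":          60,
    "recycled_content":          90,
    "renewable_energy":          40,
    "transportation":            75,
}
print(f"supplier A: {package_score(supplier_a, weights):.2f} / 100")
```

Because each criterion is scored separately before weighting, a buyer can show a supplier exactly which dimension (say, recycled content) is dragging its overall score down.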
Another packaging inroad is the concept of eco-labeling. An eco-label is a label or symbol, such as ENERGY STAR, EcoLogo, or Green Seal, that educates and informs the buyer of certain environmental claims. Sustainable businesses are urged to use industry-wide labels, standardized under ISO 14024, which are generally recognized by the public, rather than proprietary labels, which do not carry the same credibility. Other types of eco-labels may provide information on the product through its life cycle, such as the origin and history of the product or the amount of greenhouse gas emissions created in production. This approach is currently being used in Patagonia’s Footprint Chronicles and Wal-Mart’s Love, Earth jewelry line. Consumers are able to track the life of the product from raw materials to retail sale.
Lastly, any business should avoid the use of vague terms on packaging, such as green, nonpolluting, natural, eco-friendly, and others. If using such terms, a business should be ready to provide evidence to support its claims. This includes full awareness and understanding of processes and product supply chains. For example, a company that claims its organic product was produced without chemicals or pesticides may find that contaminants have crept in from processing or transport and have made the claim ultimately false. Such vagueness has the potential to be misinterpreted and misunderstood in numerous ways by consumers.
A company will develop a brand in order to give itself and its products an identity. Branding builds an emotional bond or connection with the consumer, and with that bond an organization can earn the consumer’s loyalty. Sufficient consideration should be given to determining a brand name or symbol that identifies the brand with the company’s sustainability philosophy and that captures the essence of the sustainable properties of the product. A sustainable business will have the triple bottom line (people, planet, profit) at the base of its branding. Sustainability and branding should present a seamlessly integrated front; separated from sustainability, branding risks becoming irrelevant or overlooked. Green companies will also want to differentiate themselves from other green companies on the basis of their sustainability. As an increasing number of organizations go green, it will become increasingly important to set themselves and their marketing efforts apart from the competition.
Pricing is a major element in sustainability marketing. Issues such as price elasticity, premium pricing, and perceived value pricing will be discussed in relation to pricing for sustainability.
In the past, environmental and social costs were considered external to production costs and had not, as a general rule, been included in the setting of prices. However, as stakeholders and legislation increase demands on companies to provide more sustainable solutions, companies have been driven to consider these costs within pricing policies. Sustainable companies reexamine costing methods (as discussed in Chapter 4 and Chapter 8) and begin to consider the real and actual social, economic, and environmental costs associated with products and services.
The demand for environmentally friendly products is inelastic, for the most part, meaning that a change in the price has little or no effect on the quantity that consumers are willing to buy. Consumers have generally been willing to pay a slight green premium, or higher price, for environmentally friendly products. Through premium pricing, sustainable businesses can continue to invest in innovations and development of sustainable processes. However, premium pricing does not have to be the case. In Chapter 2, there are several considerations to help the sustainable business reduce costs through increased efficiency and reduced waste. When the sustainable business is successful in reducing costs from these efficiencies, it will have more flexibility in pricing policies.
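The inelasticity claim above can be made concrete with the midpoint (arc) elasticity formula: demand is inelastic when the magnitude of elasticity is below 1, in which case a green premium raises revenue even though a few buyers drop out. The prices and quantities below are purely illustrative.

```python
def arc_elasticity(p1, p2, q1, q2):
    """Midpoint price elasticity of demand: percent change in quantity
    divided by percent change in price, each measured against the midpoint."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Hypothetical: a 10% green premium (price 5.00 -> 5.50) reduces units sold
# only from 1000 to 960, i.e. demand barely responds to the price change.
e = arc_elasticity(5.00, 5.50, 1000, 960)
print(round(e, 2))  # -0.43: |e| < 1, so demand is inelastic
```

With these numbers, revenue rises from 5,000 to 5,280, which is the flexibility that lets a sustainable business fund further innovation out of its premium pricing.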
Consumers have also become very knowledgeable and aware of sustainability alternatives and issues in recent times. When considering pricing strategies, companies need to be committed to ensuring that their sustainable products perform at least as well as, and ideally better than, products that do not make sustainability claims. Companies may want to use perceived value pricing, a market-based approach to pricing as opposed to pricing based on the cost to make the product. The price is set by estimating consumers’ perceptions of the benefit they think they will receive from the product or service.
5.03: Place (Distribution)
After producing the product, a business must distribute its goods and services to the consumer. A sustainable business will want to create an efficient distribution system. In particular, logistics plays a vital part in the distribution system. Logistics is the freight transport of goods from manufacturer to distributor and onward to the point of consumption.
The sustainable business may be interested in collaborative planning, forecasting, and replenishment, which focuses on information sharing among trading partners in order to develop a joint market plan. Not only can businesses share information, but they can also share transportation, warehousing, and infrastructure. The use of just-in-time electronic data interchange and electronic point-of-sale concepts allows ordering and stocking to be more cost effective and timely, which creates replenishment efficiencies in the system: companies hold less stock, goods are shipped only when needed, and unnecessary shipping is reduced.
Reverse logistics is another concept that has arisen from the increase in efforts to reduce waste. Reverse logistics is the movement of a product backward through the supply channel to be reused, recycled, or reprocessed. Sustainable companies should create a continuous process that plans for products to be flagged for recycling or reuse at whatever point is most efficient. Agents in the chain should be identified that are in a position to collect the used products, classify and sort them, and then transport them back to the manufacturer. Kodak, the camera manufacturer, has been very successful using reverse logistics and remanufacturing for its single-use cameras through retail photo processing. Another company, Lexmark, a printer and toner cartridge manufacturer, created a process in which the customer handles reverse logistics through rebate programs and incentives for returning used cartridges.Majumder and Groenevelt (2001).
Freight is transported via various means, such as roadways, waterways, railways, and air. Each has its advantages and disadvantages. The sustainable business will examine the viability of using efficient modes, such as rail or waterways, to transport the product whenever possible. These modes can reduce transportation costs by moving more of the product at one time versus multiple road transports with smaller loads. In addition, fewer loads result in fewer road accidents, which impacts the triple bottom line from a social perspective.
Roadway travel is by far the most heavily used means of freight transport and, from a sustainability standpoint, it is also among the least efficient. When using the roadway for transport, the sustainable business will use transportation modeling to determine the most efficient distribution system, minimizing distances and transportation costs. Without precise planning, transport vehicles will often run only partially loaded or even empty. The sustainable business may be able to collaborate with other businesses to maximize transportation loading in both directions where feasible. In addition, distribution facilities should be centrally located to minimize travel distances.
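As a rough illustration of the transportation modeling mentioned above, the classic center-of-gravity method locates a distribution facility at the demand-weighted centroid of the points it serves, a common first approximation for minimizing total weighted travel distance. The outlet coordinates and volumes below are invented for illustration.

```python
# Center-of-gravity facility location: a simple sketch of one
# transportation-modeling technique. All data here are hypothetical.

def center_of_gravity(points):
    """points: list of (x, y, volume) tuples on a map grid.
    Returns the volume-weighted centroid as an (x, y) pair."""
    total = sum(v for _, _, v in points)
    x = sum(px * v for px, _, v in points) / total
    y = sum(py * v for _, py, v in points) / total
    return x, y

# Retail outlets as (x, y, weekly truckloads) -- invented figures
outlets = [(10, 20, 30), (40, 60, 10), (25, 45, 20)]
site = center_of_gravity(outlets)  # candidate distribution-center location
```

In practice the centroid is only a starting point; real routing and siting tools then adjust for road networks, load limits, and backhaul opportunities.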
In order to reduce emissions, the transportation fleet should be checked periodically for fuel efficiency and emission performance. Fleet vehicles should not be allowed to idle when parked, which wastes fuel unnecessarily. Trucks have typically had to keep their engines idling to power internal systems such as radios, air-conditioning, and refrigeration. IdleAire manufactures a system that provides truck stops with a power grid for truck hookup, supplying power to trucks while they are parked. Using this product, the state of New York expects to reduce emissions from commercial truck idling by 98%.Washington State University Extension Energy Program (n.d.).
The sustainable business should also plan routes for maximum efficiency, such as UPS’s right-turn-only policy, and include stop points at diesel stations that offer truck stop electrification to provide trucks with grid-based electricity. Companies that ship both refrigerated and nonrefrigerated products may consider dual-temperature vehicles that move both product types in the same shipment and decrease the need for separate carriage.
Another example of transportation innovation in product distribution can be found at Hindustan Lever (HLL), Unilever’s subsidiary in India. The company’s laboratories developed a method that allows ice cream to be transported cheaply throughout the country in nonrefrigerated trucks. This innovation significantly reduced electricity consumption, eliminated the need for refrigerants, and was cheaper than previous transportation methods.Prahalad and Hart (2002).
5.04: Promotion
Companies engage in promotion of products and services through advertising, public relations, word of mouth, and point of sale. The following paragraphs will discuss selected topics related to sustainable marketing promotion, such as advertising issues, cause-related marketing, sustainable promotional products, and greenwashing concerns.
Advertising is the most familiar element of promotion for reaching potential customers. Businesses also use sales promotions, personal selling, direct marketing, and public relations to communicate their message to potential customers. Market segment groups identified as particularly attractive for the sustainable business include Lifestyles of Health and Sustainability (LOHAS) and Cultural Creatives. The LOHAS segment of the population is described as individuals committed to health, the environment, social justice, personal development, and sustainable living. The Cultural Creatives segment of the population is described as individuals committed to spirituality, social justice, and environmentalism. Together, they represent a sizable and growing percentage of the population.
Whether a business specifically targets the LOHAS or Cultural Creatives segments of the population or targets the general population, consumers are attracted to ethical marketing practices. A sustainable business often engages in cause-related marketing, or connecting its branding image with certain causes to which consumers will strongly relate. For the sustainable business, the cause is sustainability, and therefore it is critical to communicate the social and environmental benefits of products. It is also important that consumers are able to see a clear connection between the company (or its brand image) and the charitable cause it supports. When consumers consider the product, the corporation’s ethics and values, as reflected in its choices of charitable causes, should be transparent to the consumer.
Two specific types of cause-related marketing are green marketing and social marketing. Green marketing refers to the marketing of products or services that are environmentally friendly. The U.S. Federal Trade Commission and the Canadian Standards Association both provide guidelines for making environmental claims about products. Social marketing refers to the marketing of products or services for social good. Sustainable businesses often partner with nonprofit organizations to promote social change or to donate a percentage of profit to these organizations. Well-known examples include the partnerships between Susan G. Komen for the Cure and (PRODUCT) RED and the various businesses that support these causes. Due to the emotional connections in linking a cause with a brand, consumer response may actually be stronger through these forms of cause-related marketing than through advertising alone.
Additional marketing promotion considerations are the marketing materials and promotional items themselves. Marketing materials (including business cards) and promotional items should reflect the sustainable business’s commitment to environmental and social responsibility. Ideally, the materials and items used by the sustainable business produce minimal waste, require fewer resources in production, are recycled, reusable, or biodegradable, use soy-based inks and nontoxic components, and avoid PVC plastic and other harmful materials. Examples of eco-friendly promotional products are items made from PLA, a corn-based biodegradable plastic (such as pens or coffee mugs); organic products (such as T-shirts and bags); recycled products (such as mouse pads, umbrellas, and clothing); and renewable energy powered products (such as solar-powered or water-powered flashlights, calculators, and radios).
There are numerous communication channels to reach sustainability-minded consumers and to promote your sustainability message. See Note 6.6 "Promote Your Sustainability Message" for a small sample of the many print and online outlets for both advertising and press releases.
Promote Your Sustainability Message
There are many print and online media outlets to reach sustainability-minded consumers, such as
• Business Ethics Magazine
• ClimateChangeCorp
• Corporate Knights
• CSRwire
• Environmental Leader
• Environmental News Network
• Ethical Corporation
• GOOD Magazine
• GreenMoney Journal
• Greener World Media and its associated sites (such as GreenBiz)
• Grist Magazine
• LOHAS Journal
• Matter Network
• Mother Jones Earth
• NEED Magazine
• Plenty Magazine
• Sustainable Business Design Blog
• Sustainable Industries
• TreeHugger
• Triple Pundit
• World Business Council for Sustainable Development
The sustainable business’s marketing emphasis will be on openness, honesty, and transparency in any product or company claims. An effort to promote a single token product or act of a company as sustainable, green, or environmentally friendly will be met with skepticism by critics and will earn the company a reputation for greenwashing. Greenwashing is the act of creating an environmental spin on products or activities without a genuine business-wide commitment to sustainability. Sustainability is a company-wide goal that permeates every task, role, department, division, and activity of the company. Unwitting businesses may engage in greenwashing for a variety of reasons, such as a lack of understanding of sustainability. Other reasons may include attempts to expand market share, attract and manage employees, attract investors, derail critics, circumvent regulatory issues, and improve image. However, greenwashing may damage an otherwise credible business’s image or reputation.
The sustainable business can avoid greenwashing by avoiding vague terms (such as green, nonpolluting, and eco-friendly), providing substantial evidence to support any sustainability claims, staying clear of irrelevant claims, and providing specific details to curtail misunderstandings. Partnering with one’s harshest critics and with nongovernmental organizations, such as the Environmental Defense Fund, the American Red Cross, the National Wildlife Federation, and The Climate Group, may provide the organization some guidance in making meaningful progress toward sustainability and in creating positive impressions.
Suspected greenwashing can draw attention and can subject companies to violations of various federal and state laws. In particular, the Federal Trade Commission (FTC) set forth its Green Guides in 1992 and revised them in 1998 to provide basic principles on what is permissible in green marketing claims. Because the guides are not legally binding, there has been little enforcement pressure on companies to follow them closely. However, the FTC’s task is to monitor and prevent unfair and deceptive practices and to bring action against a company if it believes the company has committed deceptive practices. The criteria for deceptive practices are based on whether a claim can be substantiated, whether the claim is vague and misleading, and whether the claim overstates environmental benefits.
Due to the rise in green marketing claims, the FTC is in the process of again updating the guidelines. A new chair of the FTC, William Kovacic, has been appointed and appears to be a strong advocate of addressing greenwashing. Companies are likely to observe stronger enforcement of the FTC Act with regard to greenwashing. The FTC has been holding public meetings on topics related to green marketing, such as green buildings, carbon offsets, and renewable energy certificates. The revised Green Guides are to be released in 2009.
In addition to FTC Green Guides for businesses, several third-party Web sites seek to help consumers identify cases of greenwashing. GreenPeace offers a Greenwash Detection Kit,Retrieved March 23, 2009, from archive.greenpeace.org/comms/97/summit/greenwash.html TerraChoice details the Six Sins of Greenwashing,Retrieved March 23, 2009, from sinsofgreenwashing.org/findings/greenwashing-report-2007/ CorpWatch tracks offenders through its Greenwash AwardsRetrieved March 23, 2009, from www.corpwatch.org/article.php?list=type&type=102 and related publications,Bruno (2002). and EnviroMedia Social Marketing and the University of Oregon maintain the Greenwashing Index.Retrieved March 23, 2009, from www.greenwashingindex.com The FTC and third parties are each placing growing emphasis on separating greenwashing from authentic green claims.
This chapter has shown that sustainability impacts marketing decisions made within the standard marketing mix of product, price, place, and promotion. Sustainable businesses will design, package, brand, price, distribute, and promote products and services with social, economic, and environmental impacts in mind.
6.01: Information Technology
According to a recent study,McKinsey & Company (2008). the carbon dioxide (CO2) emissions of the U.S. information technology (IT) industry already exceed the emissions of entire nations, such as Argentina, the Netherlands, and Malaysia. At the current pace, emissions are expected to quadruple and the IT industry is expected to exceed the airline industry in emissions by 2020. The research shows that the U.S. IT industry is increasing its energy usage at a rate of 10%–20% annually. The study estimates that at this rising rate of energy usage, the United States will need to build 30 new coal-fired or nuclear power plants by 2015 solely to support the nation’s IT usage.
The Smart 2020 reportGlobal eSustainability Initiative (2008). estimates that IT has the potential to reduce worldwide global emissions by 15% by 2020. According to this report, the greatest global opportunities for IT to help reduce emissions are in the areas of smart motor systems in China’s manufacturing industry, smart logistics in Europe’s transport and storage industries, smart building technologies in North America, and smart grid technologies in India.
In order to address growing concerns over the environmental impact of the IT industry and to take advantage of opportunities, the proactive and sustainability-focused business will develop green IT strategies. Green IT strategies are not only proactive and environmentally friendly but can also ultimately reduce the company’s energy consumption and costs.
There are a number of suggestions for green IT strategies. For example, the same McKinsey & CompanyMcKinsey & Company (2008). study suggests that most companies could double the energy efficiency of their data centers by 2012. The researchers propose automobile CAFE-type industry standards (corporate average fuel economy [CAFE] standards require an automaker to meet minimum average fuel efficiency across its entire fleet of manufactured vehicles). These CAFE-type industry standards would be used for measuring efficiency in conjunction with the following suggestions: creating an energy-efficiency dashboard, sealing cable cutouts, turning off and removing excess hardware, raising data center temperature set points, virtualization, and upgrading equipment.
Greening the data center is often the starting point of green IT strategies. The first step in your green IT strategy is to know current energy usage, where energy is used and by what specific equipment, what usage is efficient, and what usage is wasteful. There are a number of IT-enabled energy-reduction systems (such as EnviroCube or EnerSure monitoring devices or Verdiem software tools), smart metering, and other technologies that can ultimately reduce cooling costs and electricity consumption. As if that is not incentive enough, the U.S. Environmental Protection Agency (EPA) is currently developing an ENERGY STAR rating for data center infrastructure, and the European Commission has developed a Code of Conduct for Green Data Centers. We will now look at some specific green IT strategies designed to increase efficiency and decrease energy consumption.
Storage
Storage resource management (SRM) helps identify underutilized capacity, removes or reassigns unused storage, identifies old or noncritical data that could be moved to less expensive storage, removes inappropriate data, and helps predict future capacity requirements. SRM can increase storage utilization and decrease power needs. Companies that have used SRM have experienced utilization improvements of 30%–40%.Harrison (2008).
Storage virtualization allows the work of several storage networks and devices to be integrated to appear as one virtual storage site. Storage virtualization can improve storage utilization by allowing storage to be assigned where it is needed.
Another tool is continuous data protection, which offers continuous or real-time byte-level backup of changes to documents. This often requires less storage space than traditional file-level backups.
Yet another option for reducing storage costs is storage tiering. Tiered storage assigns categories of data to specific types of storage media. The categories are company-defined based on levels of security and protection, usage, performance, or other considerations. This process can be automatically managed through software programs. The benefit of tiered storage is that it allows companies to increase utilization rates and decrease power consumption and cooling costs.
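A minimal sketch of what an automated tiering rule might look like follows; the tier names, thresholds, and security rule are assumptions for illustration, not those of any particular storage product.

```python
# Hypothetical tiering policy: map a file to a company-defined storage
# tier from its access recency and a security flag -- the kind of rule
# that tiering software automates. Tier names and cutoffs are invented.

def assign_tier(days_since_last_access, confidential):
    if confidential:
        return "tier1-encrypted-san"   # high-protection media regardless of age
    if days_since_last_access <= 30:
        return "tier1-fast-disk"       # hot data stays on fast disk
    if days_since_last_access <= 365:
        return "tier2-capacity-disk"   # warm data on cheaper, denser disk
    return "tier3-archive"             # cold data to low-power archive media
```

Pushing cold data down-tier is what drives the utilization and power benefits: the fast, power-hungry tier stays small and full rather than large and mostly idle.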
Servers
One green IT approach being used is server consolidation, which reduces the number of servers used by running multiple applications on each server. Another approach to reducing energy usage and increasing energy efficiency is server virtualization. Similar to storage virtualization discussed earlier, server virtualization allows virtual machines to run on one piece of hardware, at both the server and PC level.
Cloud computing is an option that allows access to computer technology via the Internet without your company purchasing or managing the technology. Cloud computing can be used with data centers, networks, configuration, software, hardware, infrastructure, platforms, services, and storage. Cloud computing can ultimately reduce costs while increasing utilization and efficiency. The FTC and computing professionals are beginning to address security issues in this new arena of cloud computing.Condon (2009).
Desktops
Green PCs are designed to minimize the use of electricity and to meet the Environmental Protection Agency’s ENERGY STAR standards (the ENERGY STAR standards for computers were updated in 2007). One example is thin clients, diskless machines that consume a fraction of the power of standard desktop machines; the average desktop computer uses 4 to 8 times more energy than a thin client.Naegel (2009). Another option to consider is a laptop rather than a desktop, since laptops consume roughly one-fifth the energy of desktops.Chua (n.d.). Lastly, the use of an ENERGY STAR–rated LCD monitor will reduce energy consumption.
Ideally, desktops should use 4 watts of energy or less in sleep mode and 50 watts or less when idle. For laptops, the ideal is 2 watts or less in sleep mode and 14 to 22 watts or less in idle mode.Chua (n.d.). However, the EPA estimates that fewer than 10% of computers are set to use the sleep or hibernation mode.Chua (n.d.). This power-saving feature can easily be set up on your computer through the Control Panel’s power options, although turning off your computer at the end of every workday is the best choice. Employees could also use a desktop device, such as EcoButton, to put the computer into sleep mode. Smart power strips can also conserve energy by turning off items after a period of inactivity. Smart strips are useful for printers, monitors, computers, and other items that can be powered down at the end of each day.
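The wattage figures above lend themselves to a quick back-of-the-envelope estimate of what sleep mode saves per desktop. The powered-on hours and electricity price below are assumptions for illustration.

```python
# Savings from sleeping instead of idling, using the desktop ideals
# cited above (50 W idle vs. 4 W sleep). Hours and rate are invented.

IDLE_W, SLEEP_W = 50, 4     # watts, from the figures in the text
HOURS_PER_DAY = 16          # assumed: nights and weekends left powered on
PRICE_PER_KWH = 0.10        # assumed utility rate, USD

def annual_savings(idle_w=IDLE_W, sleep_w=SLEEP_W,
                   hours=HOURS_PER_DAY, price=PRICE_PER_KWH):
    kwh_saved = (idle_w - sleep_w) / 1000 * hours * 365
    return kwh_saved, kwh_saved * price

kwh, dollars = annual_savings()  # roughly 269 kWh and $27 per machine per year
```

Multiplied across an office of hundreds of machines, even these conservative assumptions make the case for enforcing power-management settings centrally.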
In addition to energy efficiency, green PCs are designed to contain fewer toxic materials (such as lead) in production and shipping and to contain more components that are made from recycled parts and that can again be recycled at the end of the machine’s usefulness. The EPA’s Electronic Product Environmental Assessment Tool allows you to compare computer models before making a purchase. See Note 7.8 "Greener Printing From Your Computer" for tips on how to be more environmentally friendly when printing from your desktop.
Greener Printing From Your Computer
Before you print that next document, here are some ways you can achieve greener printing from your computer.
1. Make sure you are using an ENERGY STAR printer (and computer). You may think this one’s a no-brainer and you’ve got it covered, but wait . . . did you know that computer standards were revised in 2007 and new printer standards take effect this year? If your computer is older than 2007 and your printer is older than 2009, it may no longer meet ENERGY STAR standards, even though it met the standards that were in place at the time it was manufactured. If you should decide to upgrade, don’t forget to recycle the old one!
2. Change the margins. Studies at both Penn State University and Michigan State University found that changing margins can save paper. The Penn State study suggested that changing all university printer default margins to 0.75" (adding 19% more print space to the page) could save the university over \$122,000 a year, and Michigan State estimated a savings of \$67,512 a year.
3. Use paper with recycled content. Although both the Penn State and Michigan State studies found that switching to recycled content paper was more expensive, this has not been the case in my consulting experience. Many businesses that are not under contractual purchasing agreements do have the flexibility to comparison shop. A recent client was able to save 10% on paper costs by switching from virgin fiber to recycled content paper. Other “green” options are to look for unbleached paper or, better yet, tree-free paper!
4. Recycle and buy recycled. Recycle your paper, toner cartridges, and ink-jet cartridges. And don’t forget to buy recycled, too!
5. Install software to manage and reduce paper usage. Print management software programs (such as PaperCut, GreenPrint, and many others) can reduce printed pages and printer waste.
6. Use vegetable-based ink toner. SoyPrint is an environmentally friendly alternative to petroleum-based toner. Look for additional vegetable-based toners and ink-jet cartridges to hit the market soon.
7. Change the font. A Dutch company has created Ecofont, a new font that requires up to 20% less ink.Retrieved from www.ecofont.eu/english.html Ecofont is free to download and use.
By utilizing a combination of these suggestions, students at the University of Arkansas at Little Rock found that the College of Business could save 39% to 43% per year in paper and ink costs.Barakovic et al. (2009). Above all, as your company upgrades computing equipment, seek out recycling centers or take-back programs for monitors, desktops, laptops, and other electronic items.
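The margin arithmetic in tip 2 can be checked in a few lines. The starting margins assumed below (1.25-inch sides, 1-inch top and bottom, a common word-processor default of the era) are an assumption, which is why the result differs somewhat from the 19% figure the studies cite.

```python
# Print-area gain from shrinking margins on a US-letter page (8.5" x 11").
# The "old" margin values are assumed defaults, not from the studies.

def print_area(page_w, page_h, left, right, top, bottom):
    return (page_w - left - right) * (page_h - top - bottom)

old = print_area(8.5, 11, 1.25, 1.25, 1.0, 1.0)    # 6.0" x 9.0"  = 54.0 sq in
new = print_area(8.5, 11, 0.75, 0.75, 0.75, 0.75)  # 7.0" x 9.5"  = 66.5 sq in
gain = (new - old) / old                           # about 23% more print space
```

Fewer pages per document translates directly into the paper and ink savings the universities measured.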
E-Recycling
Many electronic items (monitors, computers, keyboards, televisions, external hardware devices, calculators, cell phones, and virtually anything that requires power for operation) can be donated to charitable organizations or repaired for continued use. For those electronics that cannot be repaired, electronics recycling (or e-cycling) is an option. The EPARetrieved March 23, 2009, from www.epa.gov/epawaste/conserve/materials/ecycling/donate.htm and Earth911 Web sites are the most comprehensive sources for finding where, what, and how to recycle in your local area. By donating unwanted electronics to charities or by recycling nonworking electronics, the sustainable business is doing its part to reduce electronics waste and divert it from the landfill.
6.02: Information Systems
In addition to the technology behind greening your computing operations, there are numerous software programs, or management information systems (MIS), to support corporate sustainability performance and to aid in executive decision-making tasks. MIS exist to measure any number of performance indicators related to social, environmental, and economic impact that are important to your company. Specific MIS can track carbon or greenhouse gas emissions (referred to as enterprise carbon accounting software), energy usage, compliance with voluntary and regulatory standards (such as ISO standards), environmental performance, supplier performance, or other sustainability indicators identified by your company. In addition to tracking sustainability-related performance indicators, software programs exist that are integrated with the Global Reporting Initiative (GRI) framework (see Chapter 8) for ease in reporting sustainability performance.
Prior to selecting software programs, you should be clear on what principles, standards, measurement and accounting tools, reporting, assurance, and stakeholder engagement protocols the company is following (see Chapter 8 and Chapter 9). Your company should select an appropriate MIS that supports the corporate conduct standards it is pursuing, measures and tracks the indicators of those standards, provides accessible data, and allows ease of reporting data progress on the standards (see Chapter 9). If the company does not subscribe to any particular voluntary or regulatory corporate conduct standards, the MIS should then meet the unique needs of the company for measurement, tracking, and reporting self-selected indicators.
An excellent resource for staying abreast of sustainability-related news in IT and information systems is Greener Computing. Other resources for computing professionals are Computer Professionals for Social Responsibility, the Green Grid, Climate Savers Computing Initiative, Green Computing Impact Organization, and the Green Electronics Council. For technology administrators, the Green ICT Strategies Course is free open-source courseware sponsored by the Australian Computer Society.
IT and MIS are both in a central position to help the organization reach its sustainability goals. That is, IT can help the organization operate in a more efficient and environmentally friendly manner, while MIS can serve an important role in transparency and gathering information for monitoring and reporting sustainability performance.
7.01: Measurement and Accounting Tools
A plethora of measurement and accounting tools is available, depending on the direction your company has decided to follow in terms of social impact, environmental impact, economic impact, or a complete three-dimensional approach to sustainability. Measurement and accounting tools refer to calculators and formulas and are not to be confused with standards, benchmarks, or thresholds for achievement (to be discussed in Chapter 9). These measurement and accounting tools allow the company to measure its current behavior to establish a baseline, to set goals for improvement, and to measure future behavior to determine progress. This chapter will introduce you to the most common tools used by sustainable businesses.
Measuring Impact Tool
The World Business Council for Sustainable Development and the International Finance CorporationWorld Business Council for Sustainable Development and International Finance Corporation (2008). have jointly created the Measuring Impact Tool. This tool offers the broadest three-dimensional sustainability coverage by measuring governance, (environmental) sustainability, assets, people, and financial flows. The Measuring Impact Tool is designed to work with the Global Reporting Initiative and the International Finance Corporation’s Performance Standards for assessing projects on social and environmental standards before making investment decisions.
Greenhouse Gas Protocol
There are a number of other measurement and accounting tools focused only on the environmental dimension of sustainability. The Greenhouse Gas (GHG) Protocol was jointly created by the World Resources Institute and the World Business Council for Sustainable Development.World Resources Institute and World Business Council for Sustainable Development (2004, 2005). The GHG Protocol guides a company in creating base year measurements of GHG emissions, both direct and indirect, and allows the company to determine its own future goals for reduction. No comparative threshold or standard is provided. This tool can be used to implement the ISO 14064 standard on GHG emissions, and work currently underway will soon show how the GHG Protocol can be used with the Kyoto Protocol.Although there are a plethora of online carbon calculators available to companies, they do not measure the full scope of emissions as detailed in the GHG Protocol.
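At its simplest, the base-year accounting the GHG Protocol calls for reduces to multiplying activity data by emission factors and summing across sources. The factors and quantities below are invented placeholders for illustration, not official GHG Protocol values.

```python
# Sketch of a GHG-Protocol-style inventory. Emission factors here are
# illustrative only -- real inventories use published, region-specific
# factors and track scope 1 (direct) and scope 2 (indirect) separately.

EMISSION_FACTORS = {                # kg CO2e per unit (hypothetical)
    "natural_gas_m3": 1.9,          # direct: fuel burned on site
    "fleet_diesel_l": 2.7,          # direct: company vehicles
    "grid_electricity_kwh": 0.5,    # indirect: purchased electricity
}

def inventory(activities):
    """activities: {source: quantity consumed}; returns total kg CO2e."""
    return sum(qty * EMISSION_FACTORS[src] for src, qty in activities.items())

base_year = inventory({"natural_gas_m3": 10_000,
                       "fleet_diesel_l": 5_000,
                       "grid_electricity_kwh": 120_000})
# Future reduction goals are then set and tracked against base_year.
```

The point of the base year is exactly this: a fixed reference total against which each subsequent year's inventory, computed the same way, can be compared.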
Global Water Tool
The World Business Council for Sustainable Development’sWorld Business Council for Sustainable Development (2007). Global Water Tool is currently under development with other groups around the world in order to standardize water footprint measurement, accounting, and reporting.
Global Environmental Management Initiative
In addition, the Global Environmental Management Initiative Water Tool,Global Environmental Management Initiative (2002). while not a quantifiable measurement tool, offers a guide for the corporation in analyzing corporate water usage throughout the supply chain, determining water-related risks and opportunities, and determining if the business case exists to create a water strategy. Both of these water tools are related to a specific environmental focus on water usage and do not consider broader environmental impacts.
Life Cycle Assessments
Life cycle analyses (or assessments, LCAs) are another tool used to measure the environmental impact of a company’s performance related to one specific product or service. LCAs do not assess the overall environmental performance of a company; they are focused only on the product or process under review. Nonetheless, LCA is a useful measurement tool for the sustainable business to help determine impacts of various products and services. Please refer to Chapter 5 for further discussion on applications of LCA.
7.02: Reporting
The Global Reporting Initiative (GRI) is the world’s most frequently used reporting guideline and format.KPMG International (2008). Currently in its third version, G3, this standard was used in reporting by nearly 1,500 businesses worldwide in 2007 and is becoming the accepted standard for reporting. The GRI is a template designed to be customized to the business; it offers industry-specific supplements to address the unique needs of the business. There are a number of software programs designed to aid in GRI reporting.
7.03: Assurance and Stakeholder Engagement
The final issues to consider in sustainability accounting are auditing and assurance as well as stakeholder engagement throughout the entire process. Sometimes referred to as a social (or environmental) audit, an ethical audit, or monitoring, auditing and assurance allows verification that proper checks and balances are in place to support the claims of the organization. There are currently two general assurance standards available, the AA1000 Assurance Standard and the International Standard on Assurance Engagements (ISAE) 3000, and one stakeholder engagement standard, AA1000 Stakeholder Engagement Standard.
AA1000 Assurance Standard
AccountAbility’sAccountAbility (2008). AA1000 Assurance Standard seeks to create a process for implementation and reporting of the AA1000 Framework. To ensure consistency in implementing the assurance standards, AccountAbility offers certification courses to become a Sustainability Assurance Practitioner.
International Standard on Assurance Engagements 3000
As another option, the International Auditing and Assurance Standards Board of the International Federation of AccountantsInternational Federation of Accountants (2003). has put forth the International Standard on Assurance Engagements (ISAE) 3000 standards for auditing nonfinancial statements. Keeping in mind that sustainability accounting is optional in the United States, some organizations may opt for providing internal assurance of activities and reporting. However, to increase credibility, organizations should opt for external third-party assurance from independent boards or firms providing sustainability audits or related services.
AA1000 Stakeholder Engagement Standard
Stakeholder engagement is another critical element that must be implemented throughout the entire sustainability accounting process. Stakeholder engagement is a process to promote cooperation between the organization and all its stakeholders as a means to involve and respond to the interests of stakeholders. AccountAbilityAccountAbility (2005a, 2005b). has issued the AA1000 Stakeholder Engagement Standard; however, it appears that most organizations develop their own stakeholder engagement process.
7.04: Accounting Methods
In recent years, overhead costs have become an increasingly significant part of product cost. Managers need high quality cost information to maintain greater control of processes and achieve quicker responses to competitive pressures. As a result, firms are using activity-based costing (ABC) to pinpoint internal company costs associated with each step in a production or service-related activity.Kaplan and Cooper (1998). While ABC is appropriate for financial reporting according to Generally Accepted Accounting Principles (GAAP), sustainable businesses seek to account for all costs over the long term. That is, sustainable businesses are looking beyond internal costs and are including broader considerations such as costs associated with the entire value chain or, as discussed in past chapters, the costs associated with cradle to cradle activities. Sustainability costing seeks to internalize those costs that have been historically externalized. The sustainable business now considers the financial costs of products and services over their lifetime and throughout the supply chain rather than passing those costs to society and the environment.
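The ABC mechanics described above can be sketched in a few lines: overhead is pooled by activity, each pool is divided by its total cost-driver volume to get a rate, and products are charged by how many driver units they consume. All figures below are invented for illustration.

```python
# Activity-based costing sketch: trace overhead to activities, then
# assign it to products via cost-driver consumption. Hypothetical data.

activity_costs = {"machine_setups": 20_000, "inspections": 10_000}  # overhead pools, USD
driver_totals  = {"machine_setups": 100,    "inspections": 500}     # total driver units

def abc_overhead(product_usage):
    """product_usage: {activity: driver units consumed by the product}.
    Returns the overhead assigned to the product."""
    return sum(activity_costs[a] / driver_totals[a] * units
               for a, units in product_usage.items())

# Product A uses 10 setups ($200 each) and 50 inspections ($20 each)
product_a = abc_overhead({"machine_setups": 10, "inspections": 50})
```

A setup-heavy, low-volume product absorbs far more overhead under ABC than under a single plant-wide rate, which is precisely the cost visibility managers use it for.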
Accounting methods taking a longer term orientation include life cycle costing, life cycle environmental cost analysis, and full cost accounting. Life cycle costing (LCC) or life cycle cost analysis seeks to fully capture and internalize costs by examining the total cost from inception costs of products (development or purchase, delivery, installation) to operating costs (energy, water, maintenance, and repair) to end-of-life costs of products (removal, replacement, salvage, disposal).Barringer (2003). LCC cannot be used for financial reporting and, in general, is not consistent with GAAP, but is a useful tool for managers in costing from a planning standpoint.
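The three cost phases named above (inception, operating, end of life) can be combined into a single present-value figure. The sketch below uses invented numbers and an assumed 5% discount rate purely for illustration; LCC practice varies and this is not a reporting-grade calculation.

```python
# Hypothetical life cycle costing (LCC) sketch: inception cost plus
# discounted operating costs over the asset's life plus discounted
# end-of-life cost. All figures and the discount rate are assumptions.

def life_cycle_cost(inception, annual_operating, years, end_of_life, rate):
    """Present value of all costs from acquisition through disposal."""
    operating_pv = sum(annual_operating / (1 + rate) ** t for t in range(1, years + 1))
    disposal_pv = end_of_life / (1 + rate) ** years
    return inception + operating_pv + disposal_pv

# Example: $10,000 purchase, $1,200/yr energy and maintenance for 10 years,
# $500 disposal cost, 5% discount rate.
total = life_cycle_cost(10_000, 1_200, 10, 500, 0.05)
print(round(total, 2))  # about 19573.04
```

A manager comparing alternatives would run this for each option; a product with a higher purchase price but lower energy use can easily show the lower life cycle cost.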
Life cycle environmental cost analysis (LCECA) is another form of LCC; however, the objective of LCECA is to include eco-costs into the total costs of the product, or the direct and indirect costs of the environmental impacts caused by the product. With LCECA, sustainable businesses can more clearly identify feasible alternatives for cost-effective, environmental products.Kumaran, Ong, Tan, and Nee (2001).
Full-cost accounting (FCA), also known as total cost accounting, broadens the assessment of external costs and incorporates future costs. This approach seeks to determine the full cost of the societal, economic, and environmental impact (triple bottom line) of a given manufacturing or service activity. Fundamental to FCA is the valuation of the opportunity costs, hidden costs, or trade-offs that were made when the option to use a particular limited resource was selected.Carter, Perruso, and Lee (2008).
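One way to picture FCA is as the conventional unit cost plus priced-in externalities. The externality quantities and prices below (a carbon price, a water price) are assumptions invented for the example, not figures from the text or from any standard.

```python
# Hypothetical full-cost accounting (FCA) sketch: conventional cost per unit
# plus external (societal and environmental) costs priced in.
conventional_cost = 8.00          # direct + overhead cost per unit, $ (assumed)
externalities = {
    "co2_tonnes": (0.002, 50.0),  # (quantity per unit, assumed $/tonne CO2)
    "water_m3":   (0.5,   0.80),  # (quantity per unit, assumed $/m3 water)
}

external_cost = sum(qty * price for qty, price in externalities.values())
full_cost = conventional_cost + external_cost
print(full_cost)  # 8.00 + (0.002*50 + 0.5*0.80) = 8.5
```

The hard part of FCA is not the arithmetic but the valuation step: choosing defensible prices for impacts that markets do not price directly.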
Accounting professionals are in a unique position to help the organization accurately measure and report social, economic, and environmental impacts. Various accounting methods and measurement and accounting tools aid in capturing the real costs of products and processes. Furthermore, a common sustainability-reporting framework exists to guide organizations in understanding what items to report. Lastly, guidelines for assurance and stakeholder engagement also exist to provide assistance for businesses. | textbooks/biz/Business/Advanced_Business/A_Primer_on_Sustainable_Business/07%3A_Accounting/7.02%3A_Reporting.txt |
It is not unusual for sustainability to be championed by one person, department, or division. If this is the case in your company, we applaud you for your initiative and foresight! Or perhaps there has not yet been any particular sustainability emphasis within your company and you wonder where to start. As such, we make these suggestions:
1. Prove the business case. Start with a small project in your division or department. Over time, refine the project so that it can be scaled and transferred to other areas of the company. Above all, make the business case by calculating the positive impacts and results of the project (often quantified in terms of savings, other improvements, or both).
2. Establish a green team. A green team can explore options for sustainability and identify the low-hanging fruit (easy-to-implement projects that are low cost but offer high returns). Work with others who share your vision for a sustainable workplace.
3. Raise awareness. Education and awareness are critical for change. Use a newsletter, Web site, discussion group, bulletin board, or other means of communication to publicize successes and educate others on sustainability impacts. One thing we have learned is that you must show people how sustainability (and its impacts) relates to them.
If your company has moved beyond the stage of sustainability as incremental improvements, then your company is well on its way toward embracing sustainability as strategy. We devote the rest of this chapter to a discussion of how sustainability is deeply embedded throughout the organization as a strategic priority of the company.
8.02: Sustainability as Strategy
Sustainability as strategy will encompass all aspects of the company’s operations, as demonstrated in the previous chapters. Sustainability as strategy entails a new perspective, recognizing that financial gain is not the only imperative of the firm. Rather, social, environmental, and economic gains can be enjoyed by all, and business is the vehicle through which they can happen. Your business can be used to make the world a better place. This idea meets much resistance from those who have been trained to believe that profit is the only purpose of business. That is, some may balk at the idea that a business has any responsibility beyond that to its shareholders.Friedman (1970).

For those sharing this perspective, consider the future risks inherent in current operational practices if more stringent social, environmental, or economic regulations emerge. Every executive and Board of Directors should be attuned to world trends impacting the global business environmentDoering et al. (2002). and conduct a risk audit of current operations (and the supply chain) to identify vulnerabilities in light of these global trends. A risk audit would entail an honest evaluation of energy usage, water usage, waste produced, toxins used or produced (or both), human resources practices, value and supply chain operations, community relations, regulations and standards, customer relations, technology, and the like. Furthermore, an honest assessment of strengths, weaknesses, opportunities, and threats is in order. The progressive company will view potential risks as opportunities to improve the organization and to seize new market opportunities. In the words of Peter Drucker, “Every single social and global issue of our day is a business opportunity in disguise.”Cooperider (2008).
Many successful businesses are exemplars of sustainable and responsible business practices, some of them since before it was “fashionable.” Classic examples include Ben & Jerry’s (now owned by Unilever), Whole Foods Market, Body Shop (now owned by L’Oréal), ShoreBank, Interface, Newman’s Own, Burt’s Bees, Seventh Generation, Tom’s of Maine (now owned by Colgate), Greyston Bakery, Green Mountain Coffee Roasters, Armstrong International, Virgin Group, Golden Temple, and many others. Today we see a new generation of companies continuing and even expanding on sustainable and responsible business models. A brief overview of a very small selection of these companies is provided in Chapter 10 of this book. These companies serve as role models for others pursuing sustainability.
While many businesses will forge their own path toward sustainability, there is a growing infrastructure of principles and standards to help guide and provide direction to companies. Adoption of these principles and standards is voluntary, allowing businesses the flexibility to choose among the many options available. We will discuss the most commonly adopted principles and standards.
Principles of Corporate Conduct
There currently exists a growing body of protocols for businesses that seek to be sustainable. In addition to creating a strong values-based and ethical corporate culture, many businesses will explore the numerous principles for corporate behavior. Principles of corporate behavior are broad, sweeping guidelines to which the business subscribes and which reflect the values and goals of the business. Companies will select one or more that are most appropriate to the type of business and that reflect the outcomes the business wishes to achieve. Whether or not the business elects to become an official signatory of the principles, they can still offer guidance on the type of values the corporation will seek to uphold. We will briefly explain the most common principles for corporate behavior.
United Nations Global Compact. Among the most commonly referenced set of principles for corporate conduct is the United Nations Global Compact. The UN Global Compact contains 10 principles for responsible and sustainable business activity in the areas of human rights, labor, the environment, and anticorruption. Over 4,700 businesses worldwide have become signatories (participation is also open to nonprofits, academic institutions, and municipalities).United Nations Global Compact (2008). The UN Global Compact is the business extension of broader UN goals, including the UN Millennium Development Goals (MDG) for governments and international organizations. The UN MDG set forth eight goals (with 21 accompanying targets) related to poverty, education, gender equality, child mortality, maternal health, disease, the environment, and global partnerships. The MDG initiative has been signed by 189 UN member states and international organizations with the goal of achievement by 2015.
AA1000 Framework. Another popular set of principles for corporate conduct is the AccountAbility 1000 (AA1000) series. The AA1000 Framework seeks to engage all stakeholders in determining the organization’s course toward its vision. The AA1000 Framework is designed to complement the Global Reporting Initiative (GRI), the most frequently used sustainability reporting framework worldwide (discussed in Chapter 8).
Caux Round Table Principles. The Caux Round Table Principles provide a global vision for business conduct based upon shared values. The principles were developed in 1994 and offer a self-appraisal tool through which organizations can assess and improve their progress.
ISO 26000. The International Organization for Standardization’s (ISO) 26000 guidelines were released in 2010 and serve as a set of principles or guidelines on corporate responsibility, or the relationship between a business and all its stakeholders. The ISO 26000 standards serve as guidelines only and are not part of the ISO certification process.
The Natural Step. The Natural Step puts forth four broad beliefs or philosophies on how business should operate within the natural environment. For those who subscribe to these value statements, the Natural Step offers a framework and tools to assist businesses.
The Aspen Principles. The Aspen Institute’s Business and Society Program provides educators and executives with research, information, and opportunities for sustainability and values-based leadership; it has also put forth the Aspen Principles. These principles suggest that a long-term focus will ultimately lead to value creation for the corporation. Specifically, they promote improved corporate governance as a means toward long-term value creation for the company, economic growth for the nation, and better service to society.
Coalition of Environmentally Responsible Economies Principles. For the business that chooses to focus only on environmental impact, the Coalition of Environmentally Responsible Economies (CERES) Principles focus on the environment and climate change.
There are a number of less frequently used principles for corporate conduct. These include the defunct UN Human Rights Norms for Business, the Organization for Economic Cooperation and Development Principles of Corporate Governance and Guidelines for Multinational Enterprises, the International Chamber of Commerce Business Charter for Sustainable Development, and the Global Sullivan Principles of Social Responsibility.
Standards
After determining the principles to which a business will subscribe, the next step is to select standards for performance. Some standards identify specific guidelines for corporate behavior while others detail specific quantifiable benchmarks to achieve. There have been efforts to create uniform standards that apply to all organizations and all industries; these have had mixed success. Uniform standards include the Sustainability-Integrated Guidelines for Management, or SIGMA Project, Certified B Corporations, the Corporate Responsibility Index, and the now defunct Social Venture Network Standards of Corporate Responsibility. In addition, there are a growing number of local, regional, and national organizations that identify required criteria to become certified as a sustainable or green business (e.g., Bay Area Green Business Program).
SIGMA Project. Project SIGMA offers guidelines for companies on social, environmental, and economic performance. The guidelines attempt to integrate five types of capital (human, financial, social, manufactured, and natural) while practicing accountability and transparency with all stakeholders.
Certified B Corporations. B corporations are a new type of corporation. To be certified as a B corporation requires companies to (a) meet comprehensive and transparent social and environmental performance standards, (b) amend governance documents to incorporate the interests of all stakeholders, and (c) build collective voice through the power of a unifying brand.
Corporate Responsibility Index. Business in the Community’s Corporate Responsibility Index is an online survey of participating companies’ performance in seven areas of corporate responsibility: strategy, integration, management, social impact, environmental impact, assurance, and disclosure. The annual results are compiled to create a benchmark of corporate responsibility. Participating companies receive a personalized report to compare their own practices to the average benchmark. This process highlights the gap between current performance and the industry benchmark.
Not all standards address the full three-dimensional realm of sustainability. Some standards focus only on the social or environmental performance of an organization; other standards apply only to a particular industry.
Standards for Social Performance. Standards with a more narrow focus on socially related concerns include ISO 9000 (quality management), SA 8000 (labor standards), Ethical Trading Initiative (ETI, labor standards), OHSAS 18001 (occupational health and safety), FairTrade (agriculture and handicrafts from emerging economies), and the Standards of Excellence in corporate community involvement (corporate citizenship).
Standards for Environmental Performance. Standards with a more narrow focus on environmentally related concerns include ISO 14000, the Kyoto Protocol, LEED (Leadership in Energy and Environmental Design) certification from the U.S. Green Building Council, and the Forest Stewardship Council. In addition, there is explosive growth in the number of local, regional, and national organizations offering certification as a green business.
Standards for Industry Performance. Standards with a focus on a particular industry are too numerous to mention and exist for every known industry. However, among the more well-known industry standards are the Apparel Industries Partnership (apparel), Fair Labor Association (apparel), Common Codes for the Coffee Community (coffee), Responsible Care (chemicals), Extractive Industries Transparency Initiative (mining, oil, gas, etc.), Green Computing Maturity Model Process (computing), RugMark (handwoven rugs), Equator Principles (banking and finance), and the AIChE Sustainability Index (engineering and scientific firms), just to mention a few.
While adoption of principles and standards is neither required nor necessary for sustainability, it does add credibility to the organization’s sustainability efforts. Upon determining principles for corporate conduct and specific standards to follow, the sustainable business turns to the task of implementing the sustainability strategy throughout the various functional areas of the company and tracking and measuring sustainability performance (as explained in each of the preceding chapters).
As a strategy, sustainability requires leadership and top-level commitment, strong values and ethics deeply embedded in the corporate culture, and incorporation throughout all business activities. Sustainability must be embedded in the core competencies and competitive position of the company and engage all stakeholders. Finally, reexamination of the business model, organizational structure, reward system, and other management systems are in order. We will examine each of these in further detail.
Leadership and Top-Level Commitment
Sustainability requires commitment by the Board of Directors, CEO, and top management team. This commitment and leadership begins at an executive level and is spread throughout the organization. Leadership and top-level commitment demonstrate that sustainability is a priority for the organization. Many corporations have created new positions, such as Corporate Responsibility Officer or Corporate Sustainability Officer, to oversee this aspect of company operations.
In addition to supporting sustainability as a value of the organization, many organizations, such as the U.S. Green Building Council, have turned to dynamic governance (also termed sociocracy)Endenburg (1998); Siong and Chen (2007); Buck and Villines (2007). as a model for corporate governance, decision making, and organizational structure. The sociocratic model has four principles: decisions are made by consent, the organization is a hierarchy of semiautonomous circles, circles are double-linked with two representatives from each circle serving on the next circle up in the hierarchy, and elections are held by consent. The model is inclusive, gives everyone a voice, and reaches consensus more easily and quickly than traditional governance, decision making, or organizational structure models.
Values and Ethics
One thing we see in common throughout sustainable organizations is a strong values-based and ethical corporate culture. In fact, it is argued that the strategic deployment of corporate values is a necessary building block for competitive advantage in this new era of sustainable business.Landrum, Boje, and Gardner (2009); Rochlin and Googins (2005). Training and development opportunities for employees will focus on personal growth and development, instilling corporate values and ethics, and promoting sustainability.Landrum, Boje, et al. (2009).
Core Competencies and Competitive Position
As we have seen throughout this book, sustainability encompasses the entire organization. Sustainability is deeply integrated throughout all activities, functions, operations, and business activities. Sustainability should also be deeply embedded in the company’s core competenciesHamel and Prahalad (1990). and contribute to a strong competitive position for the company. That is, your business must develop strengths, competencies, and expertise in a way that sets it apart from its competitors (which makes the business unique, one-of-a-kind, and different) and that produces a result that is valued by customers.Hamel and Prahalad (1990). The business must develop a skill set that promotes its core competencies and strengthens its competitive position so that the business becomes known as the place to patronize for those who seek out that particular core competence.
As an example, if you think of a business that has the absolute lowest prices, one particular business may come to mind. Or if you think of a business that has combined low prices and stylish or trendy items, another particular business may come to mind. These descriptions might identify the particular business’s core competency (or what they are known for, the business’s area of expertise). It is also certain that a broad skill set has been developed across all functions and dimensions of the business to promote and advance the core competency, thereby strengthening its competitive position in the market place.
A sustainable business must identify its core competency (what it is known for), identify the set of skills across the entire range of business functions that must be developed in order to perfect the core competency, and use this information to strengthen its competitive position against rivals. Sustainability must be rooted in the core competencies and must contribute to strengthening the company’s competitive position; sustainability should be the linchpin of, rather than peripheral to, the company’s strategy.
Stakeholder Engagement and Assurance
Sustainability requires a shift in mindset in the way companies interact with stakeholders. Companies have historically viewed stakeholders in terms of their threat and power and have developed strategies for managing stakeholders in order to reduce their threat and neutralize their power.Freeman (1984); Mitchell, Agle, and Wood (1997). By contrast, a sustainable business will interact with stakeholders, including critics, listen to their concerns, and will seek to engage them in identifying plausible solutions. There appears to be no prominently used stakeholder engagement standard although several exist, including AA1000 Stakeholder Engagement Standard and the SIGMA Project’s Stakeholder Engagement Tool (both discussed in Chapter 8). It appears that most companies develop their own approach to stakeholder engagement. As such, companies must consider how each stakeholder will be impacted within the sustainability efforts.
Suppliers. A commitment to sustainability will require that the company engage its suppliers in the move toward more sustainable business practices. This will require a critical analysis of suppliers’ current social, environmental, and economic impacts. It is of critical importance to engage suppliers in your transition toward sustainability so that your business has a complete understanding of the supplies being used, the conditions under which they were produced, and their associated impacts. Sustainable businesses often work with suppliers to help them become more sustainable. Furthermore, suppliers need to understand what types of products and services you seek to support your sustainability strategy.
Customers. Customers can offer valuable insights regarding your business and should be engaged in sustainability efforts. In addition, customers should be part of the sustainable business’s education and communication efforts related to sustainability. This group of stakeholders might ultimately be affected by changes in product or service offerings.
Employees. Employees can be engaged in the sustainability process in a number of ways. Training and education will be critical (as discussed in Chapter 3). For example, employees must understand their role in the sustainability strategy, rewards for achieving sustainability goals, and the change in corporate emphasis from a profit orientation to a more balanced triple bottom line orientation. Employees must also frequently receive communications related to sustainability progress. Lastly, employees can be an invaluable source of sustainability-related innovations.
Shareholders. Shareholders must also understand the change in corporate emphasis from profit orientation to triple bottom line. Studies show that sustainability-focused companies outperform other companies. Most recently, a study of companies with a commitment to sustainability showed that they continued to outperform other companies even in the midst of the economic crisis of May through November 2008.A. T. Kearney, Inc. (2009).
Society. Communities and society at large are important stakeholders that must be included in a company’s sustainability efforts. Americans are skeptical of and generally do not trust businesses, particularly big businesses.Deutsch (2005). Furthermore, once image and reputation problems arise, they may be difficult to overcome.
As we discuss society as a stakeholder, globalization and international strategies bear mention here. Once a company begins conducting business outside its own borders, the sustainable business will become cognizant of the unintended consequences of traditional international strategies.Landrum (2009). Companies have been accused of exploiting human and natural resources in areas in which they have business operations.
Base of the pyramid (BOP) strategies seek to address these concerns and improve the social, environmental, and economic performance of corporations conducting business in emerging economies.Prahalad and Hart (2002). Not without criticism,Landrum (2007). BOP strategies are an effort to adopt localized nonethnocentric partnership-based approaches to conducting business in emerging markets. BOP strategies also seek social, environmental, and economic benefits for all partners involved. The Base of the Pyramid Protocol 2.0Simanis and Hart (2008). provides an excellent standard for conducting business in emerging economies.
One example of a BOP strategy is Grameen Bank. Muhammad Yunus started Grameen Bank as a means of providing credit to the poorest residents in rural Bangladesh. Loans are made to an individual, without collateral, whose family and friends guarantee the loan. Loans are typically small, or microloans, but can make a significant impact in residents’ quality of life. Yunus was awarded the Nobel Peace Prize in 2006 for this social banking model and strategy that ultimately fights poverty and promotes self-sufficiency in BOP communities.
Other stakeholders. The list of a company’s potential stakeholders is much larger than the five groups of stakeholders mentioned here. Other possible stakeholders include creditors, environmental organizations, nonprofits, government, and many more. The sustainable organization will engage each group in a cooperative dialogue to generate mutual benefit.
Numerous academic centers, research centers, and nonprofit organizations around the world work with businesses toward a sustainable future. Among those centers and organizations are the Applied Sustainability Center, Business Alliance for Local Living Economies, Center for Business as an Agent of World Benefit, Center for Companies That Care, Center for Corporate Citizenship, Center for Responsible Business, Center for Sustainable Business Practices, Center for Sustainable Enterprise, Center for Sustainable Global Enterprise, Consortium on Green Design and Manufacturing, Enterprise for a Sustainable World, Erb Institute for Sustainable Global Enterprise, Ethical Trading Initiative, Forum for Corporate Sustainability Management, Global Institute of Sustainability, Green Design Institute, Minnesota Center for Corporate Responsibility, National Association of Socially Responsible Organizations, Peace Through Commerce, World Business Council for Sustainable Development, and World Resources Institute. Sustainable businesses recognize the importance of mutual learning and networking with others in order to generate a shared knowledge base.
Assurance. It is important to provide assurance (a social audit, ethical audit, or monitoring) that systems are in place to track and measure sustainability claims made by a company. There are two widely used assurance standards that companies will want to consider: AccountAbility’s AA1000 Assurance Standard 2008 and the International Auditing and Assurance Standards Board’s International Standard for Assurance Engagements (ISAE 3000). Both are discussed in detail in Chapter 8.
Business Model, Systems, and Structure
Incorporating sustainability throughout all functional areas of the business and across the entire supply chain of the business will require closer examination of the business model being used, the various management systems in place (including reward systems), and the organizational design or structure in place; changes may be in order. A business model is the way in which a company’s value chain is organized in order to be most efficient and effective in achieving its social, environmental, and economic goals while making a profit.
A particular example of an innovative business model emerging in this era of sustainable business is a social or open business model that engages stakeholders in determining and defining how the business will operate. Stakeholders are the decision makers and contribute to the ongoing operations of the business. First termed crowdsourcing,Howe (2006). social business models leverage the power of mass collaboration in creating a successful business.Tapscott and Williams (2006). One example of a successful social business model is the sports apparel company nvohk, where anyone can become a partner for \$50. Partners contribute apparel and logo designs; vote on designs, advertising, sponsorships, and which charities receive 10% of the company profits; and make many other company-related decisions.
Furthermore, the company may need to reexamine its management and control systems (including corporate governance and reward systems), organizational structure, corporate culture, and other aspects of the business (such as the discussion on dynamic governance earlier in this chapter). For example, as with all aspects of strategy and strategic planning, the company must set sustainability-related goals, measure results, train, educate, and involve employees and other stakeholders, and tie rewards to the achievement of goals. The organizational hierarchy in place must be one that supports the sustainability-related goals and objectives of the strategic plan. In short, sustainability must be well planned and coordinated across all activities of the corporation, with the business model, systems, and structure all supporting the strategic plan’s sustainability-related goals.
We have presented an enormous amount of information throughout this book that may appear overwhelming. At this point you are probably wondering where to begin. First, keep in mind that there is no easy one-step approach to becoming sustainable; sustainability is a continuous process that requires critical self-analysis, honesty, innovation, and risk. That is, before beginning this journey toward sustainability, a business should be prepared to be self-reflective, critical, and honest about all its operations and associated impacts, and a business should be ready to take risks and be innovative, moving beyond its comfort zone, or business as usual.
Second, consider that sustainability encompasses the operations of the entire business: every process, every activity, and every function. A business will not be able to implement one or a few changes and proclaim that the business has achieved sustainability. A business should be prepared to apply the aforementioned critical self-analysis, honesty, innovation, and risk across all processes, all activities, and every function of the business. Sustainability is a company-wide change in mindset, philosophy, views, and practices related to how the business operates.
Lastly, realize that sustainability incorporates a triple bottom line in evaluating company performance: the environmental, social, and economic impact of the business (also referred to as planet, people, and profit). Since pursuit of this triple bottom line is central to sustainability, our discussion on this point bears repeating.
The efforts that a business makes to reduce its environmental impact are equated with the term going green. Since green modifications can often be translated into financial terms (cost, return on investment, savings), this is often the first step a business will pursue in beginning the sustainability journey. Among some of the commonly implemented activities here are creating company “green teams” to explore and champion ways to become more environmentally friendly, recycling and reducing waste, using recycled products, changing to compact fluorescent lightbulbs and retrofitting other lighting, implementing energy-saving activities, pursuing LEED certification, and implementing ISO 14001 standards.
The efforts that a business makes to increase its social impact often refer to the impact of company policies, procedures, practices, and operations on employees, on those employed by its suppliers, and on communities, cultures, and society. A business should critically evaluate the impact of its own practices and policies on employees. A business should also demand transparency from suppliers to understand where all supplies were generated and the conditions under which they were produced. Common activities of a sustainable business include the use of Fair Trade products (such as coffee in the break room), avoidance of products that may have been made with child or forced labor, contributions to solving social problems, implementation of SA 8000 standards, providing fair and safe working conditions, living wages, insurance and other benefits, and offering employees a work–life balance.
The efforts that a business makes to maximize its economic impact often refer to the economic impact the business has on communities or societies within which it operates. This does not refer to the “profit” the company shows on financial statements but rather refers to how the community or society “profits” from the presence of the business, which, in turn, will result in continued profitability for the company. That is, economic impact refers to the continued prosperity of the business due to the economic benefit it provides to the community or society. Common activities include the payment of fair and living wages, providing positive impacts on the local economy and on local economic development (job creation, tax dollars, property values), and assessing the stress or relief created for local public service systems as a result of the business’s operations.
So how can your business become a sustainable business? To begin your journey, we recommend that you pick one thing, one process, one activity, or one department. Be prepared to apply critical self-analysis and be honest in identifying the associated environmental, social, and economic impact of current business practices, processes, and operations. Begin by measuring the current impact, set goals and timelines for improvement, and then track and measure those improvements and results. Do not be afraid to experiment and learn what other companies are doing. Involve and listen to employees, suppliers, customers, and others, including critics.
As your company begins its sustainability journey, remember that changes will impact operations company-wide. Therefore, sustainability education is important for employees, suppliers, and customers alike, as is communication of progress toward sustainability goals. It is also important not to overstate claims or accomplishments (referred to as greenwashing). Yet another word of caution is to remember that sustainability is three-dimensional. While the concept of green is becoming mainstream, sustainability requires that you not overlook the other areas of impact (social and economic impacts). As a company begins to build a track record of changes and successes, continue bringing more processes, activities, and departments into the fold until the entire organization is focused on the triple bottom line of sustainability. Above all, remember that as a company pursues sustainability, there is no end to this journey; it is a continuous process of refinement in the way we view business within the context of society. Refer to Note 9.6 "How to Begin the Journey Toward Sustainability" for additional tips.
We return to our definition introduced at the beginning of the book: a sustainable business is one that operates in the interest of all current and future stakeholders in a manner that ensures the long-term health and survival of the business and its associated economic, social, and environmental systems. Sustainability requires a new view of business and a new philosophy on how business should be conducted. Armed with this new perspective, we believe that business can become a vehicle for positive change.
How to Begin the Journey Toward Sustainability
1. Educate, inform, and engage stakeholders.
2. Pick one thing (one process, one activity, or one department).
3. Identify and measure its associated environmental, social, and economic impact as a result of current business practices, processes, and operations.
4. Engage stakeholders in identifying areas for improvement, creating measurable goals, and setting timelines for achievement.
5. Assign specific tasks and responsibilities.
6. Track, measure, and document results.
7. Refine and adjust as needed.
8. Communicate progress.
9. Expand efforts to other processes, activities, and departments (and repeat the previous steps).
10. Share your knowledge; mentor others. | textbooks/biz/Business/Advanced_Business/A_Primer_on_Sustainable_Business/08%3A_Next_Steps-_Sustainability_Strategy/8.04%3A_Conclusion.txt |
Now that you are familiar with the concept of sustainable business and how it impacts every aspect of the business, we are delighted to turn to real case examples of sustainable business practices. Fortunately, there are an increasing number of businesses moving toward sustainability. While the examples are too numerous to list here, we have selected a small sample of for-profit entities that are striving to maximize social, environmental, and economic impacts. Although space prohibits us from providing an in-depth look at each company, we have briefly highlighted some of the unique contributions each is making toward sustainability.
These case examples showcase the wide array of approaches being used by businesses of varying sizes in various industries. Some of these companies are making gains in one of the dimensions of sustainability (social, environmental, economic); others have a fully developed three-dimensional approach to sustainability. But what each of these case examples has in common is that they demonstrate it is possible to successfully pursue sustainability and a triple bottom line. You need look no further than the following companies for proof of those who exemplify our own motto: “Make a Profit, Make an Impact, Make a Difference. Because Sustainable Business is Good Business.”©
Alaffia Sustainable Skin Care (http://www.alaffia.com; this and all Web sites in the following notes were retrieved March 23, 2009)
Alaffia Sustainable Skin Care (Olympia, Washington) is the North American retail and wholesale distributor of Fair Trade shea butter, African black soap, and tropical oils from the Alaffia/Agbanga Karite Cooperative in Togo, Africa.
The company follows a triple bottom line approach (people, profit, and planet). Alaffia’s relationship with the Cooperative brings income to and empowers communities in Togo. Additionally, Alaffia and Agbanga Karite donate 10% of sales proceeds (or 30% of income, whichever is greater) to community empowerment projects, AIDS and malaria outreach, and educational scholarships in Togo.
Alaffia sponsors Bicycles for Education, donates school supplies and uniforms, funds reforestation projects, and started the Alaffia Women’s Clinic in Togo. Alaffia also provides scholarships to Washington state students, donates soap and lotion to women’s shelters, offers Fair Trade talks and tours of the Washington facility, and conducts community outreach and education on Fair Trade. With the help of others, the nonprofit Global Alliance for Community Empowerment (GACE) was formed to oversee community projects that focus on self-empowerment, the advancement of fair trade, education, sustainable living, and gender equality in Togo.
Through work individually and with GACE, Agbanga Karite Cooperative has provided more than 300 children with books, uniforms, and supplies for the 2004–2005 school year; paid the school enrollment fees for these children; donated desks and chairs to a local primary school in the village of Adjorogo; and donated and installed new school roofs on rural schools in central Togo.
baabaaZuZu (http://baabaazuzu.com)
baabaaZuZu (Lake Leelanau, Michigan) makes clothing from items that would otherwise be discarded. All clothing is made from 100% recycled materials, primarily wool and tweed. Most of the supply comes from secondhand shops. Each product is unique, but they all have a common pocket and hand-sewn blanket stitch. The product line consists of jackets, vests, hats, scarves, mittens, purses and bags, pins, and Christmas stockings.
Better World Club (http://www.betterworldclub.com)
Better World Club (Portland, Oregon) is a nationwide auto and travel club. An alternative to other auto and travel clubs, the Better World Club provides emergency roadside assistance, travel planning services (auto, flight, and hotel), maps, trip routing services, partnership discounts, and auto insurance.
In addition to the standard fare for auto and travel clubs, the Better World Club also offers bicycle roadside assistance, discounts on hybrid or biodiesel auto rentals, discounts at eco-lodging facilities, discounts on eco-tours, membership discounts for hybrid vehicle owners, an online carbon emissions calculator, carbon offsets for your auto or travel plans, and a donation of 1% of revenue to environmental cleanup efforts and advocacy.
BetterWorld Telecom (http://www.betterworldtelecom.com)
BetterWorld Telecom (Reston, Virginia) is a telecommunications company providing voice and data solutions for businesses and organizations with social and sustainable missions. The company donates 3% of revenues (administered by the BetterWorld Charitable Foundation) to nonprofit organizations through grants that help children, education, fair trade, and the environment. The company’s goal is to donate \$1 million per year by 2012.
BetterWorld Telecom is striving for a paperless operation. When paper usage is necessary, it is 100% recycled or tree-free kenaf paper. The company is also carbon-neutral.
Boulevard Bread Company (http://www.boulevardbread.com)
Boulevard Bread Company (Little Rock, Arkansas) is a multisite restaurant committed to being a low-impact and environmentally friendly business. The company buys organic produce from local sources when possible. The company uses biodegradable and compostable disposable utensils and cups made from corn or potato by-products. Carry-out containers that are not compostable are recyclable. Boulevard Bread sells only 100% Fair Trade and organic coffee, uses earth-friendly cleaners, uses recycled paper products, and recycles glass, cardboard, aluminum, and plastics. The company is pursuing zero waste. All locations have been retrofitted with energy-efficient lighting, and the main site has installed a tankless water heater.
Boulevard Bread Company recently joined forces with other local restaurants to create the Green Restaurant Alliance to network and support area restaurants pursuing environmentally friendly operations. In addition, Boulevard Bread supports the community through charitable donations, collaboration, local sustainable agriculture, and through training and mentoring other green food businesses.
Boutique Mix (www.boutiquemix.com)
Boutique Mix (Washington, DC) is a fashion boutique offering “An International Ethnik Chik Kollection” of unique items from around the world. Boutique Mix sources natural organic handmade items following Fair Trade principles and nonhandmade items that are organic and use low-impact dyes and processes. Boutique Mix also offers its own line of Miatta-MiMi jewelry and gift baskets using beads and other accessories collected around the world.
An incredible 25% of all profits go toward charitable causes. Thirty-five percent of the charitable proceeds go toward rebuilding Sierra Leone by providing school supplies and other necessities to needy children, another 35% goes toward sponsoring children around the world through Plan USA, Children International, St. Jude’s Children’s Hospital, and the Christian Children’s Fund. The remaining 30% of charitable proceeds go toward Kiva loans for entrepreneurs in developing countries and to the Rebuilding Sierra Leone One Child at a Time campaign.
Brilliant Earth (www.brilliantearth.com)
Brilliant Earth (San Francisco, California) specializes in conflict-free diamond jewelry. The conflict-free diamonds are from Canadian mines that follow the country’s environmental laws, the most rigorous in the world. Sapphires used in Brilliant Earth jewelry are sourced from Australia or Malawi following Fair Trade principles. When possible, gold and platinum are reclaimed through recycled jewelry and industrial waste. Brilliant Earth dedicates 5% of profits to the nonprofit organizations Green Diamonds and MedShare International to support African communities negatively affected by the diamond trade industry.
Burgerville (http://www.burgerville.com)
Burgerville (Vancouver, Washington) is a chain of 39 Pacific Northwest quick-service restaurants offering seasonal, organic, local, and healthy food. In addition, they use hormone-free milk, and kids’ meals come with safe and educational toys, such as biodegradable garden pots and vegetable seed packets. Burgerville purchases wind power credits covering 100% of their energy usage, they recycle used canola oil into biodiesel, and they offer affordable health care to employees. They are working to bring full recycling and composting to all 39 restaurants.
Caracalla (http://www.caracalla.com)
Caracalla (Little Rock, Arkansas) is a salon and day spa with an aggressive recycling program that extends beyond the typical recycling of waste. Caracalla supports the reduce, reuse, recycle mantra in several unique ways: it buys reclaimed items for retail sale (such as mittens and hats made from old discarded sweaters), sells vintage items, recycles cut hair by sending it to Matter of Trust to be woven into hair mats capable of absorbing oil spills, and recycles worn pantyhose and stockings with Matter of Trust for the same purpose. In addition, the company purchases and sells recycled items, such as paper, bags, office supplies, toilet tissue, hand towels, pet toys, and even biodegradable bags for picking up dog waste. The salon is decorated with reclaimed and vintage items and uses or sells eco-friendly products, such as homemade herbal wraps (no packaging waste!), bamboo hairbrushes, hemp bags, natural hair and body products, soy candles in recycled glass jars, efficient lighting, and reusable coffee mugs.
Caracalla supports the local economy by purchasing from local and organic suppliers, particularly other sustainable or green businesses, and buys in bulk to reduce packaging waste.
The company also supports the local community through charitable donations and by offering free haircuts to customers who are donating hair to charity.
Clean Air Lawn Care (http://www.cleanairlawncare.com)
Clean Air Lawn Care (Fort Collins, Colorado) uses solar-powered lawn mowers for yard care. Trucks are equipped with solar panels to recharge the mowers throughout the day. When it is not possible to use solar-powered mowers, the company uses conventional mowers fueled with biodiesel. Clean Air Lawn Care will also remove yard waste to an organic waste recycling center, where available. The company purchases carbon offsets for the business and is carbon neutral. On the company Web site, you will find an online calculator to determine the carbon emissions of your current mowing methods. You will also find a scholarship application for environmentally minded students preparing to enroll in college for the first time.
Clean Green Collision (www.cleangreencollision.com)
At Clean Green Collision (Oakland, California), precautions are taken during auto repair to ensure that dust, remnants, and hazardous chemicals do not enter the car and leave odors and fumes that could potentially harm customers. Filtration is an important part of Clean Green Collision’s eco-friendly approach: paint fumes and other emissions are filtered, air in the sanding area is filtered twice, and there is a filtration system to capture emissions from welding. Other eco-friendly efforts include photosynthesis curing, use of water-based paints, remodeling with recycled and reclaimed windows and doors, and use of local suppliers. The shop claims it currently creates only 30%–40% of the emissions of a typical body shop, and the company’s goal is to operate a 100% emission-free auto body business.
Creative Paper Wales (www.creativepaperwales.co.uk/index.asp)
Creative Paper Wales (Wales, United Kingdom) makes only recycled paper products. All manufacturing processes are environmentally friendly and minimize waste. The company supports Fair Trade. Creative Paper Wales is home to the ever popular Sheep Poo Paper and Reindeer Poo Paper, made from sheep and reindeer dung, respectively. The company offers to make paper from anything you desire, except live trees.
CREDO Mobile (www.credomobile.com)
CREDO Mobile (San Francisco, California) was created in 1985 to help make the world a better place. Every time customers use their wireless, credit card, or long-distance services, the company donates a portion of the charges to progressive nonprofit organizations working for peace, human rights, economic justice, education, and the environment. The company offsets its carbon emissions, and innovative mobile activism allows subscribers to stay on top of fast-moving and progressive issues and take action right from their phones.
Earth Class Mail (http://www.earthclassmail.com)
Earth Class Mail (Seattle, Washington) offers online post office boxes and mail services. Customers view scanned images of mail received and, for each piece, decide to open and scan the contents, recycle it, archive it, or have it forwarded to them via surface mail. The company’s Web site states that the average person recycles 20% of their mail, whereas Earth Class Mail customers recycle more than 90% of their mail.
Earth Tones (earthtones.com)
Earth Tones (Denver, Colorado) bills itself as “The Environmental Internet & Phone Company.” The company offers Internet access and long-distance and wireless phone services. Earth Tones is a for-profit company created in 1993 by a coalition of nonprofit environmental organizations. The company donates 100% of profits to environmental organizations, including Environment America, National Environmental Law Center, the Green Life, Campaign to Save the Environment, Toxics Action Center, ecopledge.com, Free the Planet!, and Recycling Action Campaign. Earth Tones offers online billing or (recycled) paper billing and phone recycling for customers. In addition, the Web site has resources available to everyone, including Green Alerts and a marketplace.
ECO Car Wash (http://www.ecocarwash.com)
ECO Car Wash (Portland, Oregon) is a multilocation car wash that recycles 100% of the water used in washing. The car wash’s computer-controlled water management system uses 25 to 40 gallons of freshwater per vehicle wash, far less than hand washing at home. Additionally, ECO Car Wash uses water-soluble, bio-based, and biodegradable cleaning products. Furthermore, the company uses wind energy in all facilities. To support the community, ECO Car Wash makes contributions to several charitable organizations, including Providence Hospital, Shriners Hospital, the Grotto, and Children’s Charity Ball.
Eco-Libris (http://www.ecolibris.net)
Eco-Libris (Newark, Delaware) is a carbon offset program. Book lovers and reading aficionados everywhere can buy an “offset” for every book they read. At Eco-Libris, the idea is simple: people can plant 1.3 trees for every book they read. Eco-Libris’s planting partners plant trees in Nicaragua, Belize, Guatemala, Honduras, and Panama (all in Central America) and in Malawi (Africa).
EDUN (www.edunonline.com)
EDUN (Dublin, Ireland) is a socially conscious clothing company launched to create sustainable employment in developing countries. EDUN has established the Conservation Cotton Initiative (CCI) to improve the livelihoods of communities in Africa by promoting cotton grown organically or through methods that are part of a transition from conventional to organic production. CCI also works to incorporate sustainable conservation agricultural practices and the protection of wildlife. In addition to the EDUN retail collection of items made with organic cotton, edun LIVE (www.edun-live.com) is a business-to-business solution for anyone who wants ethically produced blank T-shirts. Edun LIVE seeks to provide sustainable employment in Sub-Saharan Africa through high-volume sales of blank T-shirts. As part of edun LIVE, the company has created edun LIVE on campus (www.edunliveoncampus.com), a partnership with Miami University of Ohio, to sell blank T-shirts to campus organizations, with the goal to eventually expand to additional campuses. EDUN and edun LIVE products are currently produced in India, Peru, Tunisia, Kenya, Lesotho, Mauritius, and Madagascar. The company works with Verite for third-party monitoring and reporting of socially responsible business practices.
Fair Trade Sports (www.fairtradesports.com)
Fair Trade Sports (Bainbridge Island, Washington) is a sports ball and equipment distributor and manufacturer. The company ensures all its hand-stitched balls are made by adults who are paid fair wages and who are provided healthy working environments. Additionally, since the sports ball business can be seasonal, the company offers microcredit loans to workers. The inner air bladders of the balls are made with FSC-certified latex from rubber plantations and then sent to Pakistan for assembly into sports balls. In the first ever Fair Trade deal with a plantation, Fair Trade Sports sources rubber from the Frocester Plantation in Sri Lanka and from the New Ambadi Rubber Estate. Following the deal with Fair Trade Sports, the Frocester Plantation then created the Fair Trade Welfare Society for the plantation’s rubber tappers and employees. Early funds generated from the Society led to the installation of a pump and piping system for nearby plantation households to access well water and to the restoration of a restroom facility on the plantation. All after-tax profits of Fair Trade Sports are donated to children’s charities to help at-risk children around the world.
FIO360 (http://fio360.com)
FIO360 (Atlanta, Georgia) is the nation’s first eco-early care and learning boutique. The building is the first child care center to be LEED-certified and has floors that emit radiant heat and are made from virgin rubber plants, paint that is zero-VOC (volatile organic compounds), and solar tubes for lighting. The center uses organic furnishings, such as imported organic rugs, organic wooden toys, no PVC plastic products, and organic mattresses free of formaldehyde and other chemicals. Children are served organic and hormone-free meals using local fresh ingredients created by the center’s chef. The center also uses nontoxic personal care products on children and environmentally friendly cleaning products throughout the building. The curriculum is holistic, promotes multicultural awareness and learning, and, of course, includes environmental education.
Free Range Studios (freerangestudios.com)
Free Range Studios (Washington, DC) is a full-service creative agency delivering progressive socially minded messages for clients. You may be familiar with some of Free Range Studios’s flash movies (e.g., Sam Suds, The Meatrix, Friends With Low Wages, Grocery Store Wars, Say No to Blood Diamonds), written reports (prepared for Amnesty International, Green Mountain Coffee Roasters, and the ACLU), or the company’s work with socially conscious individuals, nonprofits, and businesses. In addition to Free Range Studios’s socially conscious creative work, the company also seeks to reduce its environmental impact and give back to communities through the use of triple bottom line accounting, 100% wind power, eco-printing, and other initiatives.
Frog’s Leap Winery (http://frogsleap.com)
Frog’s Leap Winery (Rutherford, California) is committed to sustainable farming and traditional farming techniques, including dry farming, which requires tilling every 10 days to hold moisture and which eliminates the need for irrigation. All wines are made from organically grown grapes. The winery has been 100% solar-powered since 2005, and the Hospitality Center and administrative offices are in a LEED-certified building.
Gaia Napa Valley Hotel and Spa (http://www.gaianapavalleyhotel.com)
Gaia Napa Valley Hotel and Spa (American Canyon, California) is the world’s first Gold LEED–certified hotel. To achieve this, all wood used in the construction was FSC-certified, paints are low VOC, and carpets contain postconsumer recycled material. In addition, restroom construction used recycled tiles and granite, and low flush toilets and showerheads were installed. The hotel’s koi pond uses filtered recycled water, and the facility installed Solatube lighting, solar panels, and a reflective roof coating. The hotel is furnished with natural, organic, and recycled materials, has all-natural and organic landscaping, and uses green cleaning products. To reduce waste, bulk soap, lotion, and shampoo dispensers are used in guest rooms and only recycled paper is used. There are recycling bins throughout the property, and educational kiosks inform guests of the environmental attributes of the property.
Galactic Pizza (http://www.galacticpizza.com)
Galactic Pizza (Minneapolis, Minnesota) makes excellent pizza from local and organic ingredients. The company emphasizes environmental and social responsibility in its operations. The company engages in many sustainability initiatives. For example, when possible, electric vehicles are used for deliveries, the restaurant uses 100% renewable wind energy, organic items are on the menu, purchase of the Second Harvest Heartland pizza generates a \$1 donation to this hunger relief organization, packaging is either made from recycled materials or is biodegradable, hemp products are on the menu, the menus are printed on hemp paper, produce comes from farms in Minnesota or Wisconsin when possible, the company recycles and composts, and 5% of pretax profits are donated to charity.
Great Elephant Poo Poo Paper Company Ltd. (http://www.poopoopaper.com)
The Great Elephant Poo Poo Paper Company Ltd. (Toronto, Ontario) recycles the waste of African and Asian elephants from elephant conservation parks and turns it into over 150 unique (and odorless) paper products. The paper products are handcrafted by artisans. A portion of profits is donated to elephant welfare and conservation programs.
Great Lakes Brewing Company (http://www.greatlakesbrewing.com)
Great Lakes Brewing Company (Cleveland, Ohio) is a microbrewery focused on the triple bottom line. The company recycles waste, uses recycled products, and has invested in energy efficiency. To pursue sustainability even further, Great Lakes Brewing Company has incorporated zero-waste initiatives into its day-to-day operations. The ultimate goal is to mimic nature, where 100% of resources are used in closed-loop ecosystems. This is accomplished in several ways. Certain bread and pretzels found on the menu are made using grains from the brewing process. Brewery grains are also used as a substrate for growing organic shiitake and oyster mushrooms. The company also composts waste to create fertilizer to grow herbs and vegetables for menu items. In addition, the beer delivery truck, the Fatty Wagon, runs on 100% pure vegetable oil.
Green Microgym (http://www.thegreenmicrogym.com)
The Green Microgym (Portland, Oregon) is one of the few fitness facilities in the world operating partially on solar and human power. While the facility is fully equipped with all the standard equipment found in any gym, the equipment has been retrofitted to capture, store, and reuse energy produced from the use of elliptical trainers and stationary bikes. The company has a goal of net-zero energy usage. The “Burn & Earn” program pays members \$1 for every hour spent generating (or saving) electricity. The Green Microgym uses recycled rubber, Marmoleum, and eco-friendly cork flooring, ENERGY STAR ceiling fans, LCD televisions, compact fluorescent bulbs, energy-efficient treadmills, dual flush toilets, green cleaning supplies, and paper products made with recycled content.
Greenforce (www.greenforce.biz)
Greenforce (San Francisco, California) offers residential and commercial cleaning services using environmentally friendly cleaning products and methods. The company uses natural nontoxic biodegradable supplies and HEPA microfiltered vacuums. Greenforce thoroughly researches cleaning products to find those that perform as well as conventional products, and all staff are trained in green cleaning methods. On its Web site, Greenforce lists the products used and recommended by the company. In addition to eco-friendly cleaning, Greenforce offsets emissions created from travel to its cleaning sites (carbon neutral cleaning).
Greyston Bakery (http://www.greystonbakery.com)
Greyston Bakery (Yonkers, New York) is an example of social entrepreneurship at its finest. The for-profit bakery was started to provide employment opportunities and economic renewal for this inner-city community. All profits from Greyston Bakery go to support the Greyston Foundation, which offers affordable child care for the community, affordable housing for homeless and low-income families, and affordable health care for persons with HIV. The bakery’s facility was selected as a Top Ten Green Project in 2004 for its use of natural light, rooftop gardens, efficient machinery, and the use of outdoor air to cool baked goods. The bakery produces many traditional baked goods but is well known as the exclusive supplier of brownies for Ben & Jerry’s ice cream products.
Habana Outpost (http://www.habanaoutpost.com)
Habana Outpost (Brooklyn, New York) is a one-of-a-kind restaurant experience that begins with the outdoor food truck, a restored U.S. postal service truck. Habana Outpost is solar-powered; has both indoor and outdoor seating; uses compostable biodegradable plates, cups, and utensils; has tables made from recycled materials; operates a rainwater collection system to water plants and flush toilets; runs a human-powered bicycle-propelled juice blender; and composts and recycles waste.
In addition to these restaurant features, Habana Outpost serves as a community gathering place offering weekly movie nights and a host of other activities. For example, the Kid’s Corner offers ecological activities and an “alternative heroes” coloring book (about real-life heroes!). The restaurant hosts a weekend market of local vendors and weekly fashion shows for local designers. The restaurant also hosts an annual Earth Day Expo of informative and interactive displays on sustainability and has a gallery display featuring local artists’ works.
Habana Outpost is one of three Habana restaurants in New York City. The company operates Habana Works, Inc., a nonprofit offering free sustainability-related workshops through various programs such as Habana Labs and Urban Studio Brooklyn. Habana Labs is dedicated to researching, developing, applying, and teaching the best technology related to ecology and sustainable energy. The most recent Habana Labs project is the Offgrid Outlet, a motorized, sun-following solar panel. Another program of Habana Works is the Urban Studio Brooklyn, an architectural design and build program that recently launched the Fishmobile, a human-powered mobile fishing clinic and wetlab.
Higher Grounds Trading Company (http://www.highergroundstrading.com)
Higher Grounds Trading Company (Traverse City, Michigan) sells organic and Fair Trade coffee. But the company’s commitment to sustainability goes beyond the products it sells. The company has a strong environmental emphasis in supporting sustainable agriculture, recycling, composting, and purchasing postconsumer recycled paper for office supplies.
The company has an even stronger social emphasis through its business operations. The Trade for a Change fund-raising program allows nonprofit organizations to sell Higher Grounds’s organic and Fair Trade blends and thus increases sales for the coffee farmers. Sales of Coffees for Change blends generate donations for organic agriculture, education about economic justice, protection of bird habitat and indigenous rights, and the construction of potable water systems. Sales of Water Carrier’s Blend generate a \$5 donation through the Water for All campaign for the construction of sustainable water systems in coffee-growing countries.
Through the Oromia Photo Project, Oromia Coffee Farmers Grower Union farmers’ activities are documented. Each week, new photos are added to the Web site so that you can learn more about how the coffee is produced. For each pound of the Ethiopian Oromia coffee sold, Higher Grounds will add an additional \$1 tip to go back to the farmers.
Higher Grounds’s Fair Trade Tours invites you to join them on a trip to partner farms and Fair Trade collaborators. You can choose from trips to Africa, Central America, or South America, and \$100 per participant is donated to a local project.
Hopworks Urban Brewery
www.hopworksbeer.com
Hopworks Urban Brewery (Portland, Oregon) is a brewpub offering organic beer and restaurant menu items made from local ingredients. Hopworks Urban Brewery refers to itself as an eco-brewpub and touts everything from composting to rain barrels to being powered by 100% renewable energy. The brew kettle and the delivery truck both run on biodiesel, heat from the pizza oven is captured to warm the brewing water, and hot water from the wort heat exchanger is recovered for the next brew. Many recycled and recovered materials were used in the remodeling process, low- and zero-VOC finishes were applied, a rain barrel collection system was installed, and native landscaping is used. The brewery also installed water- and energy-efficient equipment, designed the space for natural lighting, and offers bicycle parking and a bike repair stand. The company’s waste recycling programs strive for zero waste and recycle food waste for animal feed and composting.
Hotlips Pizza
http://www.hotlipspizza.com
Hotlips Pizza (Portland, Oregon) is a family-owned four-restaurant business. Hotlips Pizza uses as many locally grown ingredients as possible, including wheat, vegetables, cheese, and meat. The company tracks food miles, uses LED lighting, delivers pizza by bicycle or electric car, captures the heat from pizza ovens to heat the water, composts waste, and is exploring alternative fuel use to heat the pizza ovens.
Immaculate Baking Company
http://www.immaculatebaking.com
Immaculate Baking Company (Hendersonville, North Carolina) bakes gourmet all-natural and organic cookies and organic ready-to-bake cookie dough. The company philosophy is “Bake well, be creative, have fun and give back.” Immaculate Baking Company works hard to maximize its social impact by baking “Cookies With a Cause.” The company created the Folk Artist’s Foundation to provide support and exposure for folk artists. Folk art also adorns all cookie packaging. In addition, the company created the Soul Food Fund “artreach” programs to help kids of all ages express themselves creatively. As an aside, the company holds the distinction of baking the World’s Biggest Cookie in 2003—102 feet wide and over 40,000 pounds.
Indigenous Designs
http://www.indigenousdesigns.com
Indigenous Designs (Santa Rosa, California) sells organic Fair Trade fashions created by their own artisan network across South America. All items are handmade by artisans using traditional techniques, natural colors, natural dyes, and low-impact dyes. Indigenous Designs also partners with nongovernmental organizations and others to help provide training, educational materials, and equipment to the artisans.
In addition to organic Fair Trade fashions, Indigenous Designs purchases local green power to offset carbon emissions from its business activities, encourages employees to bike to work, and claims that about 20% of employees own and drive hybrid or biodiesel cars.
IceStone
http://www.icestone.biz
IceStone (Brooklyn, New York) manufactures surfaces made from recycled glass and concrete. By recycling glass and concrete, IceStone saves hundreds of tons of glass from landfills each year. The products are cradle to cradle certified and are manufactured in a day-lit factory. The factory has a cool, low-emissions manufacturing process. IceStone is working to become carbon-neutral, purchases renewable energy credits, and strives to reduce energy usage. The company is working toward water reduction goals, and over 80% of the company’s waste is recycled, recovered, or composted. IceStone is implementing a greywater recycling system. All petroleum-based machine lubricants have been replaced with soy-based lubricants. Additionally, IceStone conducts environmental education programs for employees.
In keeping with its mission, IceStone provides living wages, health benefits, education programs, and life-skills training to employees, including free English as a Second Language classes, all of which are tracked in a social audit with third-party verification. IceStone’s donation program provides free or discounted material to projects that share similar social and environmental goals, with Habitat for Humanity receiving annual donations. The company also partners with community, nonprofit, academic, industrial assistance, and local social services groups to promote green-collar job creation, sustainable business practices, and the development of the green building industry.
Within the supply chain, IceStone encourages suppliers to improve sustainability standards. IceStone’s glass and mother-of-pearl are recycled from post-industrial and post-consumer sources. IceStone advocates for stronger glass recycling programs in New York in order to create an infrastructure that allows the commercial reuse of regional waste glass. The company buys cement regionally and advocates for the greening of the cement industry. IceStone continuously conducts product research to seek the most eco-friendly and local materials possible.
Izzy’s Ice Cream Café
http://www.izzysicecream.com
Izzy’s Ice Cream Café (St. Paul, Minnesota) makes homemade ice cream using local ingredients when possible, such as local maple syrup and dairy and cream from local and family-owned farms. Since making and freezing ice cream is an energy-intensive process, the ice cream parlor runs entirely on solar power. The shop is working to add more solar panels to its roof in order to supply solar power to the neighborhood. The company also delivers ice cream in thermo-insulated bags instead of refrigerated trucks.
Keen Footwear
http://www.keenfootwear.com
Keen Footwear (Portland, Oregon) began in 2003 with the Hybrid: part shoe, part sandal; a cross between an athletic shoe and a sandal. The company now has a line of shoes, Ventura, that are 100% vegan and created through environmentally friendly manufacturing processes. The Transport bag collection is made from recycled aluminum and rubber reclaimed from the shoe factory floors. Even the packaging is environmentally friendly with shoe boxes made of 100% recycled materials, soy-based inks, water-based glues, and biodegradable materials. The shoe boxes are smaller than standard shoe boxes, resulting in less materials, labor, and waste.
Keen Footwear uses third-party independent monitoring of its operations, is seeking Fair Labor Association accreditation, and is currently preparing its first Accountability Report, following the Global Reporting Initiative guidelines. The Keen Foundation supports environmental and social causes.
Little Rock Green Garage
littlerockgreengarage.com
Little Rock Green Garage (Little Rock, Arkansas) is attempting to embrace environmental sustainability through all aspects of its operations and seeks to become one of the country’s first green auto repair facilities. The garage recycles waste, buys in bulk, uses refillable containers, and specializes in the repair of fuel-efficient vehicles.
LJ Urban
www.ljurban.com
LJ Urban (Sacramento, California) is a real estate development company that has set out to be a catalyst of social change. One of the company’s interesting projects involves building an eco-urban community, appropriately named The Good Project. The Good Project consists of LEED-certified homes with ENERGY STAR appliances, solar panels, air intake air-conditioning, tankless water heaters, dual flush toilets, low-flow plumbing fixtures, reflective roofing, recycled countertops and insulation, compact fluorescent lights and occupancy sensors, and more eco-friendly features. The Good Project I is complete, and the company is now creating the Good Project II, which will also feature a community garden in the design. One of the most unique parts of the Good Project I was the Do-Some-Good-Now Commitment. For every eco-urban home sold, LJ Urban trained a local mason in West Africa to build sustainable homes. LJ Urban’s Good Projects were inspired by the simplicity of TOMS Shoes’s model of giving away a pair of shoes to children in need for every pair that was purchased.
Llamadas Pedaleadas (Pedaled Phone Calls)
www.pedaleadas.com
Llamadas Pedaleadas (Managua, Nicaragua), or Pedaled Phone Calls, is a bicycle-pedaled mobile cart with public telephones on board. Using recycled parts found in a junkyard, the company created a battery that can be recharged by pedal power. Electricity is generated as the operator pedals to a destination; if the battery runs low on arrival, the operator can drop the kickstand and cycle in place. The mobile cart can be moved to any location, such as a park or festival, to provide public telephone service for consumers. The company’s goal is to create a ready-made business for local entrepreneurs and to increase access to affordable telephony for base-of-the-pyramid customers.
Massanelli’s Cleaners
www.massanelliscleaners.com
Massanelli’s Cleaners (Jonesboro, Arkansas) offers dry-cleaning and fire-water recovery and restoration services. Massanelli’s Cleaners utilizes a completely environmentally friendly, nontoxic, odorless cleaning process that has been thoroughly tested by the Environmental Protection Agency and poses neither short- nor long-term health risks. Cleaning agents are 100% biodegradable and earth-friendly, and the perchloroethylene-free (perc-free) cleaning process is gentle not only on clothing and textiles but also on the environment.
In an effort to further reduce the carbon footprint of Massanelli’s Cleaners, the company has joined the CarbonFree Small Business Program. The company has been recognized for environmental stewardship and was an official sponsor of the Green Jobs Now fair held at the University of Arkansas at Little Rock. Massanelli’s Cleaners supports numerous charitable organizations and has a strong philanthropy program.
Natural Fusion Hair Studio
http://www.naturalfusionhairstudio.com
Natural Fusion Hair Studio (Frederick, Maryland) is an environmentally friendly hair salon. The salon seeks to reduce energy and water usage throughout its operations, recycles, uses nontoxic environmentally friendly cleaners, refills bottles, uses only natural and organic hair and beauty products, and purchases from beauty supply companies with sustainable practices. In addition, the salon gives back to the community and local charities. The salon occupies a historic house; during remodeling, the owners preserved the original wood floors and added linoleum where new flooring was needed. The cutting stations are 1920s vanities, and the salon has utilized antiques whenever possible.
Peace Cereal
www.peacecereal.com
Golden Temple’s Peace Cereal (Eugene, Oregon) is a line of organic cereals devoted to personal health and a peaceful planet. Ten percent of the proceeds from Peace Cereal sponsor the annual International Peace Prayer Day gathering. The company gives awards to peace activists and grants to nonprofit organizations working for peace. In addition, Peace Cereal founded the Socially Responsible Business Awards.
Pinehurst Inn
http://www.pinehurstinn.com
Pinehurst Inn Bed & Breakfast (Bayfield, Wisconsin) is a historic inn, built in 1885. The Pinehurst Inn uses solar hot water heaters, green cleaning products, and organic linens and towels. Pinehurst Inn composts food and garden waste, recycles, avoids chemical treatments on lawn and gardens, serves locally grown organic food and organic coffees and teas, and has converted their vehicle (the Grease Car) to run on recycled grease. In 2003, the owners added the Garden House, a green building that is energy-efficient and that used sustainable materials in construction. The Pinehurst Inn also purchases carbon offsets for the business as well as offsets for 50% of customers’ travel to the inn.
Pizza Fusion
www.pizzafusion.com
Pizza Fusion (Fort Lauderdale, Florida) is a pizza chain with a wide variety of organic, vegan, gluten-free, and lactose-free menu items. Seventy-five percent of the menu is organic; the company only uses all-natural free-range chicken and organic beef, and it serves organic drinks.
Each month, Pizza Fusion hosts fun lessons on sustainability through “Organics 101” for kids. The company delivers pizzas in company-owned hybrid vehicles, uses compostable food containers, and offsets 100% of power consumption through renewable energy certificates. Each franchise restaurant is LEED certified. Pizza Fusion also encourages customers to return their pizza boxes for recycling, and the Web site offers tips for sustainable living alongside a carbon footprint calculator.
sweetriot
www.sweetriot.com
Trendy chocolatier sweetriot (New York, New York) makes all-natural chocolate treats (called “peaces”) and works to create a more just and celebrated multicultural world. sweetriot gets its all-natural cacao from countries of origin in Latin America and abides by ethical and Fair Trade sourcing practices. The finished dark chocolate–covered cacao goodies are packaged in recycled and reusable tins featuring the work of emerging artists. If you do not have local recycling facilities, the company encourages you to return your tin to them for recycling. sweetriot offsets all employee travel and office emissions and offers customers the option to offset carbon dioxide emissions for shipping their order. The company promotes fair human resources practices and work–life balance, and it also supports nonprofits that share similar values and ideals.
SunNight Solar
http://www.sunnightsolar.com
SunNight Solar (Houston, Texas) is a company focused on the triple bottom line that makes solar-powered flashlights. The lights are rugged and durable and suited for harsh conditions in which no light is available. The lights use a low-environmental impact battery and can be used for either task lighting or room lighting. The solar-powered lights offer an alternative to kerosene, wood, and other forms of lighting used in developing countries.
SunNight Solar is home to the extremely popular BoGo Light program. For each flashlight purchased, the company donates one flashlight to a nonprofit for distribution in a developing country and gives them \$1 per flashlight to offset importation and distribution costs. The company sponsors several campaigns that maximize its social impact. Lights for Good is a fund-raising partnership with nonprofit organizations. WarLights allows you to purchase a flashlight for distribution to American troops in Iraq and Afghanistan. Three new giving programs are being developed: Save Our Sisters (which will donate lights to women’s groups and collectives in developing countries), Village Lights, and Need It/Take It.
Thanksgiving Coffee Company
http://www.thanksgivingcoffee.com
Thanksgiving Coffee Company (Fort Bragg, California) roasts Fair Trade, organic, and kosher blends of coffee. The company purchases coffee beans directly from small family farms and cooperatives in Guatemala, Ethiopia, Rwanda, Uganda, and Nicaragua. The company partners with nonprofits to support sustainable farming practices and environmental causes. The company recycles, composts, uses biodiesel in delivery trucks, and uses recycled paper. In 2002, the company purchased its first carbon offsets and became the first carbon neutral coffee company.
TOMS Shoes
http://tomsshoes.com
TOMS Shoes (Santa Monica, California) was founded with the singular mission of improving the lives of children by providing shoes to those in need. Shoes are produced in Argentina and China following fair labor practices while creating minimal environmental impact. Factories are monitored by TOMS and third-party independent auditors. TOMS Shoes are sold online and in retail locations around the world with the promise that for each pair purchased, TOMS will give one pair to a child in need in Argentina, South Africa, and other locations around the world. To date, TOMS has donated over 60,000 pairs of shoes during Shoe Drops around the world. Through its nonprofit, Friends of TOMS, the public is invited to participate in Shoe Drops. The documentary For Tomorrow: The TOMS Shoe Story follows the early days of the company and its initial Shoe Drops.
Tropical Salvage
http://www.tropicalsalvage.com
Tropical Salvage (Portland, Oregon) is a tropical wood furniture company that never cuts down a single tree to make a product. Items are made from reclaimed wood and trees from rivers and lakes; flood, landslide, and volcanic debris; and construction sites. The wood and trees are then transported to one of two facilities in Indonesia where artisans build, carve, and finish the wood to create beautiful furniture and decorative items. Items are then shipped to North America for retail sale. Tropical Salvage is collaborating with the nonprofit Institute for Culture and Ecology to create the Jepara Forest Conservancy, a public forest park and environmental education facility.
VerTerra
http://www.verterra.com
VerTerra (New York, New York) is a manufacturer of disposable dinnerware. Plates, bowls, and cups are made from 100% renewable and compostable plant matter and water. To create the products, fallen leaves are collected from plantations, taken to the factory, sprayed with high-pressure water, steamed, and UV-sterilized. In the manufacturing process, the company recaptures over 80% of the water used. No chemicals, lacquers, glues, bonding agents, or toxins are ever used. The entire process uses only a fraction of the energy typically used for recycling. The disposable dinnerware products are durable, naturally biodegrade in two months, and can be used in the microwave, oven, and refrigerator. Items are made in South Asia by VerTerra’s own employees, who receive fair wages in safe working conditions and are provided access to health care.
White Bear Racquet and Swim Club
www.wbfit.com
White Bear Racquet and Swim Club (White Bear Lake, Minnesota) has fully embraced sustainability. The sustainability section of the club’s Web site outlines the many initiatives the company has undertaken in the quest for a more environmentally friendly facility. While too numerous to list, here is a small sampling of what the company has accomplished.
White Bear Racquet and Swim Club has replaced incandescent lights; increased the use of natural lighting; replaced chlorine with a salt water system for the pool; replaced a five-court tennis bubble with a permanent, super-insulated tennis building featuring in-court radiant heat plus cooling and heating powered by ground-source heat pumps (the old courts required over \$44,000 in heating costs; the new courts require less than \$300); and installed a super-efficient lighting system. In addition, White Bear Racquet and Swim Club installed water-saving showerheads, restored outside land to its natural state (eliminating the need for watering, mowing, and fertilizing), reduced waste, began using local and organic foods, began using natural green cleaning products, and incorporated office furniture that is made from renewable or recycled materials and can all be recycled.
White Dog Café
http://www.whitedog.com
The White Dog Café (Philadelphia, Pennsylvania) is a restaurant that supports sustainable agriculture by purchasing seasonal, local, organic ingredients from local farmers whenever possible. In addition to supporting sustainable agriculture, the White Dog Café partners with “sister” restaurants in the area that are minority-owned. This project encourages customers to visit neighborhoods they otherwise might not visit and to support minority-owned businesses and cultural institutions. The sister restaurant project also has an international dimension to foster awareness, communication, and economic justice worldwide. The international program offers educational tours to the countries of international sister restaurants, runs a chef exchange program, hosts international visitors, and promotes Fair Trade.
White Dog Café has a mentoring program with a local high school’s restaurant, hotel, and tourism program, organizes community tours through different Philadelphia neighborhoods, hosts annual multicultural events, participates in Take a Senior to Lunch Day, and hosts speakers each month on various social and policy issues. White Dog Café donates an amazing 20% of pretax profits to nonprofits and the café has also created its own nonprofit, White Dog Community Enterprises.
Zambezi Organic Forest Honey
www.zambezihoney.com
Zambezi Organic Forest Honey (Oxford, Ohio) was founded by former Peace Corps volunteers who spent time in Zambia, Africa. Zambezi Organic Forest Honey helps local Zambian beekeepers access new markets for the organic honey that the Lunda people have been farming as a way of life for over 500 years.
Zambian beekeepers who register with the company cooperative gain access to free training on sustainable beekeeping, agriculture, and forestry practices; free education in literacy, mathematics, and small-business skills; and free beekeeping supplies. Farmers are under no obligation to sell solely to the company, which fosters further economic growth in the region. The company pays, on average, 40% above market prices for the organic honey, and the company collective currently has 5,000 registered beekeepers. In addition, Zambezi Honey donates a portion of profits back to Zambia for projects in malaria prevention, HIV/AIDS education, school scholarships, and rural-income generation grants.
• 1: An Introduction
Chapter One introduces the key themes covered in this book.
• 2: Electronic commerce technology
Chapter Two deals with the technology that underlies electronic commerce. Specifically, we discuss the methods that computers use to communicate with each other. We compare and contrast: the Internet (which is global in nature and has the potential to communicate with multiple stakeholder groups); the intranet (which focuses on internal communications within the organization–such as communication with employees); the extranet (which concentrates on exchanges with a specific business partner).
• 3: Attracting and retaining visitors
This chapter introduces elements of electronic strategy. In particular, we describe business practices that evolve because of the way that the Web changes the nature of communication between firms and customers. We describe attractors, which firms use to draw visitors to their Web site, including sponsorship, the customer service center, and the town hall.
• 4: Promotion - Integrated Web communications
This is the first of a series of five chapters that discuss the four major functions of marketing: promotion, price, distribution, and product (service). As the Web is a new communications medium, we devote two chapters to promotion. In Chapter Four, we introduce a model for thinking about communication strategy in cyberspace: the Integrated Internet Marketing model.
• 5: Promotion and purchase - Measuring effectiveness
Chapter Five describes new methods for measuring communication effectiveness in cyberspace. Specifically, we discuss the Internet as a new medium, in contrast to broadcasting and publishing. Currently, Web users perceive this medium to be similar to a magazine, perhaps because 85 percent of Web content is text. Other capabilities of the Web (e.g., sound) are not extensively used at this point.
• 6: Distribution
Historically, organizations needed to be large to respond to logistical challenges. The advent of electronic commerce has the potential to transform logistics and distribution. Today, a small software firm in Austin, Texas, can deliver its product (via the Web) to a customer in Seoul, South Korea. The economic landscape is altered dramatically. This chapter (along with the others) is future oriented as we outline strategic directions that are likely to be successful in the twenty-first century.
• 7: Service
Services are increasingly important in the U.S. economy. In Chapter Seven, we describe how electronic commerce blurs the distinction between products and services. Traditionally, services are a challenge to market because of four key properties: intangibility, simultaneity, heterogeneity, and perishability. In this chapter, we show how electronic commerce can be used to overcome traditional problems in services marketing.
• 8: Pricing
Price directly affects a firm’s revenue. Chapter Eight describes pricing methods and strategies that are effective in cyberspace. We take a customer value perspective to illustrate various price-setting strategies (e.g., negotiation, reducing customer risk) and show how these strategies can be used to attain organizational objectives.
• 9: Post-Modernism and the Web - Societal effects
This chapter concentrates on societal changes that are encouraged by electronic commerce (and other related trends). Through the metaphors of modernism and postmodernism, we show how electronic commerce influences: perceptions of reality; notions of time and space; values; attitudes toward organizations. Chapter Nine is future oriented and discusses electronic commerce as a revolutionary force that has the potential to transform society and transform consumers’ perceptions of business practice.
Richard T. Watson (University of Georgia, USA)
Electronic commerce is a revolution in business practices. If organizations are going to take advantage of new Internet technologies, then they must take a strategic perspective. That is, care must be taken to make a close link between corporate strategy and electronic commerce strategy.
In this chapter, we address some essential strategic issues, describe the major themes tackled by this book, and outline the other chapters. Among the central issues we discuss are defining electronic commerce, identifying the extent of a firm’s Internet usage, explaining how electronic commerce can address the three strategic challenges facing all firms, and understanding the parameters of disintermediation. Consequently, we start with these issues.
Electronic commerce defined
Electronic commerce, in a broad sense, is the use of computer networks to improve organizational performance. Increasing profitability, gaining market share, improving customer service, and delivering products faster are some of the organizational performance gains possible with electronic commerce. Electronic commerce is more than ordering goods from an on-line catalog. It involves all aspects of an organization’s electronic interactions with its stakeholders, the people who determine the future of the organization. Thus, electronic commerce includes activities such as establishing a Web page to support investor relations or communicating electronically with college students who are potential employees. In brief, electronic commerce involves the use of information technology to enhance communications and transactions with all of an organization’s stakeholders. Such stakeholders include customers, suppliers, government regulators, financial institutions, managers, employees, and the public at large.
Who should use the Internet?
Every organization needs to consider whether it should have an Internet presence and, if so, what should be the extent of its involvement. There are two key factors to be considered in answering these questions.
First, how many existing or potential customers are likely to be Internet users? If a significant proportion of a firm’s customers are Internet users, and the search costs for the product or service are reasonably (even moderately) high, then an organization should have a presence; otherwise, it is missing an opportunity to inform and interact with its customers. The Web is a friendly and extremely convenient source of information for many customers. If a firm does not have a Web site, then there is the risk that potential customers, who are Web savvy, will flow to competitors who have a Web presence.
Second, what is the information intensity of a company’s products and services? An information-intense product is one that requires considerable information to describe it completely. For example, what is the best way to describe a CD to a potential customer? Ideally, text would be used for the album notes listing the tunes, artists, and playing time; graphics would be used to display the CD cover; sound would provide a sample of the music; and a video clip would show the artist performing. Thus, a CD is information intensive; multimedia are useful for describing it. Consequently, Sony Music provides an image of a CD’s cover, the liner notes, a list of tracks, and 30-second samples of some tracks. It also provides photos and details of the studio session.
The two parameters, number of customers on the Web and product information intensity, can be combined to provide a straightforward model (see Exhibit 1) for determining which companies should be using the Internet. Organizations falling in the top right quadrant are prime candidates because many of their customers have Internet access and their products have a high information content. Firms in the other quadrants, particularly the low-low quadrant, have less need to invest in a Web site.
Exhibit 1: Internet presence grid
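As an illustration (not part of the original text), the grid's classification logic can be sketched in a few lines of Python; the function name and the quadrant labels are invented for this sketch:

```python
def internet_presence_priority(customer_web_access, information_intensity):
    """Classify a firm on the Internet presence grid (illustrative sketch).

    Both inputs are "low" or "high". Firms high on both dimensions are
    prime candidates for a Web presence; firms low on both have the
    least need to invest; the mixed quadrants fall in between.
    """
    if customer_web_access == "high" and information_intensity == "high":
        return "prime candidate"
    if customer_web_access == "low" and information_intensity == "low":
        return "low priority"
    return "moderate priority"

# A music retailer: many customers online, information-intense product (a CD).
print(internet_presence_priority("high", "high"))  # prime candidate
```

The point of the sketch is simply that the two parameters combine independently: moving a firm up either axis raises the case for a Web presence, and only the low-low quadrant suggests little need to invest.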
Why use the Internet?
Along with other environmental challenges, organizations face three critical strategic challenges: demand risk, innovation risk, and inefficiency risk. The Internet, and especially the Web, can be a device for reducing these risks.
Demand risk
Sharply changing demand or the collapse of markets poses a significant risk for many firms. Smith-Corona, one of the last U.S. manufacturers of typewriters, filed for bankruptcy in 1995. Cheap personal computers destroyed the typewriter market. In simple terms, demand risk means fewer customers want to buy a firm’s wares. The globalization of the world market and increasing deregulation expose firms to greater levels of competition and magnify the threat of demand risk. To counter demand risk, organizations need to be flexible, adaptive, and continually searching for new markets and stimulating demand for their products and services.
The growth strategy matrix [Ansoff, 1957] suggests that a business can grow by considering products and markets, and it is worthwhile to speculate on how these strategies might be achieved or assisted by the Web. In the cases of best practice, the differentiating feature will be that the Web is used to attain strategies that would otherwise not have been possible. Thus, the Web can be used as a market penetration mechanism, where neither the product nor the target market is changed. The Web merely provides a tool for increasing sales by taking market share from competitors, or by increasing the size of the market through new usage occasions. The U.K. supermarket group Tesco is using its Web site to market chocolates, wines, and flowers. Most British shoppers know Tesco, and many shop there. The group has sold wine, chocolates, and flowers for many years. Tesco now makes it easy for many of its existing customers (mostly office workers and professionals) to view the products in a full-color electronic catalogue, fill out a simple order form with credit card details, write a greeting card, and arrange delivery. By following these tactics, Tesco is not only taking business away from other supermarkets and specialty merchants, it is also increasing its margins on existing products through a premium pricing strategy and markups on delivery.
Alternatively, the Web can be used to develop markets, by facilitating the introduction and distribution of existing products into new markets. A presence on the Web means being international by definition, so for many firms with limited resources, the Web will offer hitherto undreamed-of opportunities to tap into global markets. Icelandic fishing companies can sell smoked salmon to the world. A South African wine producer is able to reach and communicate with wine enthusiasts wherever they may be, in a more cost-effective way. To a large extent, this is feasible because the Web enables international marketers to overcome the previously debilitating effects of time and distance, the negotiation of local representation, and the considerable costs of producing promotional material.
A finer-grained approach to market development is to create a one-to-one customized interaction between the vendor and buyer. BankAmerica offers customers the opportunity to construct their own bank by pulling together the elements of the desired banking service. Thus, customers adapt the Web site to their needs. Even more advanced is an approach where the Web site is adaptive. Using demographic data and the history of previous interactions, the Web site creates a tailored experience for the visitor. Firefly markets technology for adaptive Web site learning. Its software tries to discover, for example, what type of music a visitor likes so that it can recommend CDs. Firefly is an example of software that, besides recommending products, electronically matches a visitor’s profile to create virtual communities, or at least groups of like-minded people–virtual friends–who have similar interests and tastes.
Any firm establishing a Web presence, no matter how small or localized, instantly enters global marketing. The firm’s message can be watched and heard by anyone with Web access. Small firms can market to the entire Internet world with a few pages on the Web. The economies of scale and scope enjoyed by large organizations are considerably diminished. Small producers do not have to negotiate the business practices of foreign climes in order to expose their products to new markets. They can safely venture forth electronically from their home base. Fortunately, the infrastructure–international credit cards (e.g., Visa) and international delivery systems (e.g., UPS)–for global marketing already exists. With communication via the Internet, global market development becomes a reality for many firms, irrespective of their size or location.
The Web can also be a mechanism that facilitates product development, as companies who know their existing customers well create exciting, new, or alternative offerings for them. The Sporting Life is a U.K. newspaper specializing in providing up-to-the-minute information to the gaming fraternity. It offers reports on everything from horse and greyhound racing to betting odds for sports ranging from American football to snooker, and from golf to soccer. Previously, the paper had been restricted to a hard copy edition, but the Web has given it significant opportunities to increase its timeliness in a time-sensitive business. Its market remains, to a large extent, unchanged–bettors and sports enthusiasts in the U.K. However, the new medium enables it to do things that were previously not possible, such as hourly updates on betting changes in major horse races and downloadable racing data for further spreadsheet and statistical analysis by serious gamblers. Most importantly, The Sporting Life is not giving away this service free, as have so many other publishers. It allows prospective subscribers to sample for a limited time, before making a charge for the on-line service.
Finally, the Web can be used to diversify a business by taking new products to new markets. American Express Direct is using a Web site to go beyond its traditional traveler’s check, credit card, and travel service business by providing on-line facilities to purchase mutual funds, annuities, and equities. In this case, the diversification is not particularly far from the core business, but it is feasible that many firms will set up entirely new businesses in entirely new markets.
Innovation risk
In most mature industries, there is an oversupply of products and services, and customers have a choice, which makes them more sophisticated and finicky consumers. If firms are to continue to serve these sophisticated customers, they must give them something new and different; they must innovate. Innovation inevitably leads to imitation, and this imitation leads to more oversupply. This cycle is inexorable, so a firm might be tempted to step off it. However, choosing not to adapt and not to innovate will lead to stagnation and demise. Failure to be as innovative as competitors–innovation risk–is a second strategic challenge. In an era of accelerating technological development, the firm that fails to continually improve its products and services is likely to lose market share to competitors and maybe even disappear (e.g., the typewriter company). To remain alert to potential innovations, among other things, firms need an open flow of concepts and ideas. Customers are one viable source of innovative ideas, and firms need to find efficient and effective means of continual communication with customers.
Internet tools can be used to create open communication links with a wide range of customers. E-mail can facilitate frequent communication with the most innovative customers. A bulletin board can be created to enable any customer to request product changes or new features. The advantage of a bulletin board is that another customer reading an idea may contribute to its development and elaboration. Also, a firm can monitor relevant discussion groups to discern what customers are saying about its products or services and those of its competitors.
Inefficiency risk
Failure to match competitors’ unit costs–inefficiency risk–is a third strategic challenge. A major potential use of the Internet is to lower costs by distributing as much information as possible electronically. For example, American Airlines now uses its Web site for providing frequent flyers an update of their current air miles. Eventually, it may be unnecessary to send expensive paper mail to frequent flyers or to answer telephone inquiries.
The cost of handling orders can also be reduced by using interactive forms to capture customer data and order details. Savings result from customers directly entering all data. Also, because orders can be handled asynchronously, the firm can balance its work force because it no longer has to staff for peak ordering periods.
Many Web sites make use of FAQs–frequently asked questions–to lower the cost of communicating with customers. A firm can post the most frequently asked questions, and its answers to these, as a way of expeditiously and efficiently handling common information requests that might normally require access to a service representative. UPS, for example, has answers to more than 40 frequent customer questions (e.g., What do I do if my shipment was damaged?) on its FAQ page. Even the FBI’s 10 Most Wanted list is on the Web, and the FAQs detail its history, origins, functions, and potential.
Disintermediation
Electronic commerce offers many opportunities to reformulate traditional modes of business. Disintermediation, the elimination of intermediaries such as brokers and dealers, is one possible outcome in some industries. Some speculate that electronic commerce will result in widespread disintermediation, which makes it a strategic issue that most firms should carefully address. A closer analysis enables us to provide some guidance on identifying those industries least, and most, threatened by disintermediation.
Consider the case of Manheim Auctions. It auctions cars for auto makers (at the termination of a lease) and rental companies (when they wish to retire a car). As an intermediary, it is part of a chain that starts with the car owner (lessor or rental company) and ends with the consumer. In a truncated value chain, Manheim and the car dealer are deleted. The car’s owner sells directly to the consumer. Given the Internet’s capability of linking these parties, it is not surprising that moves are already afoot to remove the auctioneer.
Edmunds, publisher of hard-copy and Web-based guides to new and used cars, is linking with a large auto-leasing company to offer direct buying to customers. Cars returned at the end of the lease will be sold with a warranty, and financing will be arranged through the Web site. No dealers will be involved. The next stage is for car manufacturers to sell directly to consumers, a willingness Toyota has expressed and that large U.S. auto makers are considering. On the other hand, a number of dealers are seeking to link themselves to customers through the Internet via the Autobytel Web site. Consumers contacting this site provide information on the vehicle desired and are directed to a dealer in their area who is willing to offer them a very low markup on the desired vehicle.
We gain greater insight into disintermediation by taking a more abstract view of the situation (see Exhibit 2). A value chain consists of a series of organizations that progressively convert some raw material into a product in the hands of a consumer. The beginning of the chain is O₁ (e.g., an iron ore miner) and the end is Oₙ (e.g., a car owner). Associated with a value chain are physical and information flows, and the information flow is usually bi-directional. Observe that it is really a value network rather than a chain, because any organization may receive inputs from multiple upstream objects.
Exhibit 2: Value network
Consider an organization that has a relatively high number of physical inputs and outputs. It is likely this object will develop specialized assets for processing the physical flows (e.g., Manheim has invested heavily in reconditioning centers and is the largest non-factory painter of automobiles in the world). The need to process high volume physical flows is likely to result in economies of scale. On the information flow side, it is not so much the volume of transactions that matters since it is relatively easy to scale up an automated transaction processing system. It is the diversity of the information flow that is critical because diversity increases decision complexity. The organization has to develop knowledge to handle variation and interaction between communication elements in a diverse information flow (e.g., Manheim has to know how to handle the transfer of titles between states).
Combining these notions of physical flow size and information flow diversity, we arrive at the disintermediation threat grid (see Exhibit 3). The threat to Manheim is low because of its economies of scale, large investment in specialized assets that a competitor must duplicate, and a well-developed skill in processing a variety of transactions. Car dealers are another matter because they are typically small, have few specialized assets, and little transaction diversity. For dealers, disintermediation is a high threat. The on-line lot can easily replace the physical lot.
Exhibit 3: Disintermediation threat grid
We need to keep in mind that disintermediation is not a binary event (i.e., it is not on or off for the entire system). Rather, it is on or off for some linkages in the value network. For example, some consumers are likely to prefer to interact with dealers. What is more likely to emerge is greater consumer choice in terms of products and buying relationships. Thus, to be part of a consumer’s options, Manheim needs to be willing to deal directly with consumers. While this is likely to lead to channel conflict and confusion, it is an inevitable outcome of the consumer’s demand for greater choice.
Key themes addressed
First, we introduce a number of new themes, models, metaphors, and examples to describe the business changes that are implied by the Internet. An example of one of our metaphors is Joseph Schumpeter’s notion of creative destruction . That is, capitalist economies create new industries and new business opportunities. At the same time, these economies are destructive in that they sweep away old technologies and old ways of doing things. It is a sobering message that none of the major wagon makers was able to make the transition to automobile production. None of the manufacturers of steam locomotives became successful manufacturers of diesel locomotives. Will this pattern continue for the electronic revolution?
Amazon.com has relatively few employees and no retail outlets; and yet, it has a higher market capitalization than Barnes & Noble, which has more than one thousand retail outlets. Nonetheless, Barnes & Noble is fighting back by creating its own Web-based business. In this way, the Internet may spawn hybrid business strategies–those that combine innovative electronic strategies with traditional methods of competition. Traditional firms may survive in the twenty-first century, but they must adopt new strategies to compete. In this book, we introduce a variety of models for describing these new strategies, and we describe new ways for firms to compete by taking advantage of the opportunities that electronic commerce reveals.
Exhibit 4. Key themes addressed by this book
• New models, theories, metaphors, and examples for describing electronic commerce and its impact on business and society
• New models for creating businesses (via the Internet)
• Hybrid models that combine Internet strategies with traditional business strategies
• New forms of human behavior (e.g., chat rooms, virtual communities)
• New forms of consumer behavior (e.g., searching for information electronically)
• Postmodernism and the Web
• Describing the reliability and robustness of the technology that underlies the Internet and its multi-media component (the Web)
• Describing how organizations can compete today, with an emphasis on outlining electronic commerce strategies and tactics
• The Internet creates value for organizations
• The Internet enhances consumers’ life quality
• Predicting the future, especially the impact of information technology on future business strategies and business forms (e.g., “Amazoning” selected industries)
• Describing technology trends that will emerge in the future
• New ways of communicating with stakeholders and measuring communication effectiveness
• Comparing and contrasting the Internet with other communication media (e.g., TV and brochures)
• Key features of the Internet which make it a revolutionary force in the economy (a force of creative destruction)
• Speed of information transfer and the increasing speed of economic transactions
• Time compression of business cycles
• The influence of interactivity
• The power and effectiveness of networks
• Opportunities for globalization and for small organizations to compete
• The multi-disciplinary perspective that is necessary to comprehend electronic commerce and the changes it inspires in the economic environment. Here, we focus on three disciplinary approaches:
• Marketing, marketing research, and communication
• Management information systems
• Business strategy
• Elements that underlie effective Web pages and Web site strategy.
• New kinds of human interactions that are enhanced by the Internet, such as:
• Electronic town hall meetings
• Brand communities (e.g., the Web page for Winnebago owners)
• Chat rooms
• Virtual communities
• New marketing strategies for pricing, promoting, and distributing goods and services
At the same time that information technology has the potential to transform business operations, it also has the potential to transform human behaviors and activities. The focus of our book is business strategy; so we concentrate on those human activities (e.g., consumer behavior) that intersect with business operations. Some examples of consumer behaviors that we discuss include: virtual communities; enhanced information search via the Web; e-mail exchanges (e.g., word-of-mouth communications about products, e-mail messages sent directly to organizations); direct consumer purchases over the Web (e.g., buying flowers, compact disks, software). Of course, the Internet creates new opportunities for organizations to gather information directly from consumers (e.g., interactively). The Internet provides a place where consumers can congregate and affiliate with one another. One implication is that organizations can make use of these new consumer groups to solve problems and provide consumer services in innovative ways. For instance, software or hardware designers can create chat rooms where users pose problems. At the same time, other consumers will visit the chat room and propose suggested solutions to these problems.
Value to organizations is one of our themes. As described previously, organizations can create value via the Internet by improving customer service. The stock market value of some high technology firms is almost unbelievable. Consider the U.S. steel industry, which dominated the American economy in the late nineteenth century and the first half of the twentieth century. As of March 1999, the combined market capitalization of the 13 largest American steel firms (e.g., U.S. Steel and Bethlehem Steel) was approximately USD 6 billion, less than one-third the value of the Internet bookseller, Amazon.com. On most days, the market capitalization of Microsoft rises or falls by more than the market capitalization of the entire U.S. integrated steel industry. Firms such as Microsoft do not have extensive tangible assets, as the steel companies do. In contrast, Microsoft is a knowledge organization, and it is this knowledge (and ability to invent new technologies and new technological applications) that creates such tremendous value for shareholders.
At the same time, technology creates value for consumers. Some of this value comes in the form of enhanced products and services. Some of the value comes from more favorable prices (perhaps encouraged by the increased competition that the Internet can bring to selected industries). Some of the value comes in the form of enhanced (and more rapid) communications–communications between consumers and communications between organizations and consumers. In brief, the Internet raises quality of life, and it has the potential to perform this miracle on a global scale.
To date, the Internet has begun to make some big changes in the business practices in selected industries. For instance, electronic commerce has taken over 2.2 percent of the U.S. leisure travel industry. In the near future, the Internet has the potential to transform many other industries. For instance, the USD 71.6 billion furniture business is a possibility. Logistics is a key for success in this industry. Consumers would expect timely delivery and a mechanism for rejecting and returning merchandise if it didn’t meet expectations.
What is the future of electronic commerce? As in any field of human endeavor, the future is very difficult to predict. We describe the promise of electronic commerce. As reflected in the stock prices of e-commerce enterprises, the future of electronic commerce seems very bright indeed. In this book, we present some trends to come, by taking a business strategy approach.
One way to try to understand the future of the Internet is by comparing it to other (communication) technologies that have transformed the world in past decades (e.g., television and radio). Another way to understand the Internet is to consider the attributes that make it unique. These factors include the following:
• the speed of information transfer and the increasing speed of economic transactions;
• the time compression of business cycles;
• the influence of interactivity;
• the power and effectiveness of networks;
• opportunities for globalization.
The Internet is complex. We adopt an interdisciplinary approach to study this new technology and its strategic ramifications. Specifically, we concentrate on the following three disciplines: management information systems, marketing, and business strategy. As described at the outset of this chapter, we show how the Internet is relevant for communicating with multiple stakeholder groups. Nonetheless, since we approach electronic commerce from a marketing perspective, we concentrate especially on consumers (including business consumers) and how knowledge about their perspectives can be used to fashion effective business strategies. We focus on all aspects of electronic commerce (e.g., technology, intranets, extranets), but we focus particular attention on the Internet and its multi-media component, the Web.
For a variety of reasons, it is not possible to present a single model to describe the possibilities of electronic commerce. For that reason, we present multiple models in the following chapters. Some firms (e.g., Coca-Cola) find it virtually impossible to sell products on the Internet. For these firms, the Internet is primarily an information medium, a place to communicate brand or corporate image. For other firms (e.g., Microsoft), the Internet is both a communication medium and a way of delivering products (e.g., software) and services (e.g., on-line advice for users). In brief, one business model cannot simultaneously describe the opportunities and threats that are faced in the soft drink and software industries. The following section provides more details about this book and the contents of the remaining chapters.
Conclusion
As the prior outline clearly illustrates, this is a book about electronic commerce strategy. We focus on the major issues that challenge every serious thinker about the impact of the Internet on the future of business.
Cases
Dutta, S., and A. De Meyer. 1998. E*trade, Charles Schwab and Yahoo!: the transformation of on-line brokerage. Fontainebleau, France: INSEAD. ECCH 698-029-1.
Galal, H. 1995. Verifone: The transaction automation company. Harvard Business School, 9-195-088.
McKeown, P. G., & Watson, R. T. (1999). Manheim Auctions. Communications of the AIS, 1(20), 1-20.
Vandermerwe, S., and M. Taishoff. 1998. Amazon.com: marketing a new electronic go-between service provider. London, U.K.: Imperial College. ECCH 598-069-1.
Introduction
In the first chapter, we argued that organizations need to make a metamorphosis. They have to abandon existing business practices to create new ways of interacting with stakeholders. This chapter will provide you with the wherewithal to understand the technology that enables an organization to make this transformation.
Internet technology
Computers can communicate with each other when they speak a common language or use a common communication protocol. Transmission Control Protocol/Internet Protocol (TCP/IP) is the communication network protocol used on the Internet. TCP/IP has two parts. TCP handles the transport of data, and IP performs routing and addressing.
Data transport
The two main methods for transporting data across a network are circuit and packet switching. Circuit switching is commonly used for voice and packet switching for data. Parts of the telephone system still operate as a circuit-switched network, in which each link of a predetermined bandwidth is dedicated to a predetermined number of users for a period of time.
The Internet is a packet switching network. The TCP part of TCP/IP is responsible for splitting a message from the sending computer into packets, uniquely numbering each packet, transmitting the packets, and putting them together in the correct sequence at the receiving computer. The major advantage of packet switching is that it permits sharing of resources (e.g., a communication link) and makes better use of available bandwidth.
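The split-number-transmit-reassemble cycle can be sketched in a few lines of Python. This is an illustrative model only; real TCP also handles acknowledgments, retransmission, and flow control:

```python
import random

def to_packets(message, size):
    """Split a message into fixed-size pieces, each tagged with a sequence number."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort packets by sequence number and rebuild the original message."""
    return "".join(data for _, data in sorted(packets))

packets = to_packets("Hello, electronic commerce!", 5)
random.shuffle(packets)        # the network may deliver packets in any order
print(reassemble(packets))     # prints the original message
```

Because each packet carries its own sequence number, the receiver can restore the message no matter how the network reorders it in transit.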
Routing
Routing is the process of determining the path a message will take from the sending to the receiving computer. The IP part of TCP/IP is responsible for dynamically determining the best route through the network. Because routing is dynamic, packets of the same message may take different paths and not necessarily arrive in the sequence in which they were sent.
Addressability
Messages can be sent from one computer to another only when every server on the Internet is uniquely addressable. The Internet Network Information Center (InterNIC) manages the assignment of unique IP addresses so that TCP/IP networks anywhere in the world can communicate with each other. An IP address is a unique 32-bit number consisting of four groups of decimal numbers in the range 0 to 255 (e.g., 128.192.73.60). IP numbers are difficult to recall. Humans can more easily remember addresses like aussie.mgmt.uga.edu. A Domain Name Server (DNS) converts aussie.mgmt.uga.edu to the IP address 128.192.73.60. The exponential growth of the Internet will eventually result in a shortage of IP addresses, and the development of next-generation IP (IPng) is underway.
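The relationship between dotted-decimal notation and the underlying 32-bit number can be shown with a short Python sketch (the address is the example from the text; each decimal group supplies 8 of the 32 bits):

```python
def ip_to_int(dotted):
    """Pack a dotted-quad IPv4 address into its underlying 32-bit integer."""
    parts = [int(p) for p in dotted.split(".")]
    if len(parts) != 4 or not all(0 <= p <= 255 for p in parts):
        raise ValueError("not a valid IPv4 address")
    n = 0
    for p in parts:
        n = (n << 8) | p          # shift in 8 bits per group
    return n

def int_to_ip(n):
    """Unpack a 32-bit integer back into dotted-quad form."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(ip_to_int("128.192.73.60"))             # the address as a single number
print(int_to_ip(ip_to_int("128.192.73.60")))  # round-trips to 128.192.73.60
```

A Domain Name Server performs the analogous, but network-wide, mapping from a name such as aussie.mgmt.uga.edu to this numeric form.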
Infrastructure
Electronic commerce is built on top of a number of different technologies. These various technologies created a layered, integrated infrastructure that permits the development and deployment of electronic commerce applications (see Exhibit 5). Each layer is founded on the layer below it and cannot function without it.
Exhibit 5: Electronic commerce infrastructure
National information infrastructure
This layer is the bedrock of electronic commerce because all traffic must be transmitted by one or more of the communication networks comprising the national information infrastructure (NII). The components of an NII include the TV and radio broadcast industries, cable TV, telephone networks, cellular communication systems, computer networks, and the Internet. The trend in many countries is to increase competition among the various elements of the NII to increase its overall efficiency because it is believed that an NII is critical to the creation of national wealth.
Message distribution infrastructure
This layer consists of software for sending and receiving messages. Its purpose is to deliver a message from a server to a client. For example, it could move an HTML file from a Web server to a client running Netscape. Messages can be unformatted (e.g., e-mail) or formatted (e.g., a purchase order). Electronic data interchange (EDI), e-mail, and hypertext transfer protocol (HTTP) are examples of messaging software.
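To make the idea of a formatted message concrete, the following Python sketch composes the text of a minimal HTTP GET request, the kind of message this layer carries from a browser to a Web server (the host and path are illustrative):

```python
def build_get_request(host, path="/"):
    """Compose a minimal HTTP/1.0 GET request message as plain text."""
    return (f"GET {path} HTTP/1.0\r\n"   # request line: method, resource, version
            f"Host: {host}\r\n"          # header naming the server
            "\r\n")                      # blank line ends the header section

request = build_get_request("www.example.com", "/index.html")
print(request)
```

Sent over a TCP connection, this handful of lines is all a server needs to locate the requested resource and return it.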
Electronic publishing infrastructure
Concerned with content, the Web is a very good example of this layer. It permits organizations to publish a full range of text and multimedia. There are three key elements of the Web:
• A uniform resource locator (URL), which is used to uniquely identify any resource on the Web;
• A network protocol (HTTP);
• A structured markup language, HTML.
Notice that the electronic publishing layer is still concerned with some of the issues solved by TCP/IP for the Internet part of the NII layer. There is still a need to consider addressability (i.e., a URL) and have a common language across the network (i.e., HTTP and HTML). However, these are built upon the previous layer, in the case of a URL, or at a higher level, in the case of HTML.
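Python's standard library can be used to show how a URL bundles together the addressing and protocol concerns just described (the URL itself is illustrative):

```python
from urllib.parse import urlparse

url = "http://aussie.mgmt.uga.edu/catalog/books.html"
parts = urlparse(url)

print(parts.scheme)   # the network protocol (http)
print(parts.netloc)   # the uniquely addressable server
print(parts.path)     # the resource on that server
```

One string thus carries the protocol to speak, the server to contact, and the document to request.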
Business services infrastructure
The principal purpose of this layer is to support common business processes. Nearly every business is concerned with collecting payment for the goods and services it sells. Thus, the business services layer supports secure transmission of credit card numbers by providing encryption and electronic funds transfer. Furthermore, the business services layer should include facilities for encryption and authentication (see the section on security).
Electronic commerce applications
Finally, on top of all the other layers sits an application. Consider the case of a book seller with an on-line catalog (see Exhibit 6). The application is a book catalog; encryption is used to protect a customer’s credit card number; the application is written in HTML; HTTP is the messaging protocol; and the Internet physically transports messages between the book seller and customer.
Exhibit 6. An electronic commerce application
Electronic commerce applications | Book catalog
Business services infrastructure | Encryption
Electronic publishing infrastructure | HTML
Message distribution infrastructure | HTTP
National information infrastructure | Internet
Electronic publishing
Two common approaches to electronic publishing are Adobe’s portable document format (PDF) and HTML. The differences between HTML and PDF are summarized in Exhibit 7.
Exhibit 7. HTML versus PDF
HTML | PDF
A markup language | A page description language
HTML files can be created by a wide variety of software; most word processors can generate HTML | PDF files are created using special software sold by Adobe that is more expensive than many HTML creation alternatives
Browser is free | Viewer is free
Captures structure | Captures structure and layout
Can have links to PDF | Can have links to HTML
Reader can change presentation | Creator determines presentation
PDF
PDF is a page description language that captures electronically the layout of the original document. Adobe’s Acrobat Exchange software permits any document created by a DOS, Macintosh, Windows, or Unix application to be converted to PDF. Producing a PDF document is very similar to printing, except the image is sent to a file instead of a printer. The fidelity of the original document is maintained–text, graphics, and tables are faithfully reproduced when the PDF file is printed or viewed. PDF is an operating system independent and printer independent way of presenting the same text and images on many different systems.
PDF has been adopted by a number of organizations, including the Internal Revenue Service for tax forms. PDF documents can be sent as e-mail attachments or accessed from a Web application. To decipher a PDF file, the recipient must use a special reader, supplied at no cost by Adobe for all major operating systems. In the case of the Web, you have to configure your browser to invoke the Adobe Acrobat reader whenever a file with the extension pdf is retrieved.
HTML
HTML is a markup language, which means it marks a portion of text as referring to a particular type of information. HTML does not specify how this is to be interpreted; this is the function of the browser. Often the person using the browser can specify how the information will be presented. For instance, using the preference features of your browser, you can indicate the font and size for presenting information. As a result, you can significantly alter the look of the page, which could have been carefully crafted by a graphic artist to convey a particular look and feel. Thus, you may see an image somewhat different from what the designer intended.
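A short example using the standard library's HTML parser shows that HTML records only the structural role of each piece of text; nothing in the markup dictates fonts or layout, which is why two browsers may render the same page differently:

```python
from html.parser import HTMLParser

class StructureLister(HTMLParser):
    """Collect the structural tags a document applies to its text."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)   # record the role, e.g. heading or emphasis

parser = StructureLister()
parser.feed("<h1>Catalog</h1><p>Order <em>today</em> on-line.</p>")
print(parser.tags)   # ['h1', 'p', 'em']
```

The markup says "this is a heading" and "this is emphasized"; deciding what a heading or emphasis looks like is left entirely to the browser.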
HTML or PDF?
The choice between HTML and PDF depends on the main purpose of the document. If the intention is to inform the reader, then there is generally less concern with how the information is rendered. As long as the information is readable and presented clearly, the reader can be given control of how it is presented. Alternatively, if the goal is to influence the reader (e.g., an advertisement) or maintain the original look of the source document (e.g., a taxation form or newspaper), then PDF is the better alternative. The two formats coexist. A PDF document can include links to an HTML document, and vice versa. Also, a number of leading software companies are working on extensions to HTML that will give the creator greater control of the rendering of HTML (e.g., specifying the font to be used).
Electronic commerce topologies
There are three types of communication networks used for electronic commerce (see Exhibit 8), depending on whether the intent is to support cooperation with a range of stakeholders, cooperation among employees, or cooperation with a business partner. Each of these topologies is briefly described, and we discuss how they can be used to support electronic commerce.
Exhibit 8. Electronic commerce topologies
Topology | Internet | Intranet | Extranet
Extent | Global | Organizational | Business partnership
Focus | Stakeholder relationships | Employee information and communication | Distribution channel communication
The Internet is a global network of networks. Any computer connected to the Internet can communicate with any server in the system (see Exhibit 9). Thus, the Internet is well suited to communicating with a wide variety of stakeholders. Adobe, for example, uses its Web site to distribute software changes to customers and provide financial and other reports to investors.
Exhibit 9: The Internet
Many organizations have realized that Internet technology can also be used to establish an intra-organizational network that enables people within the organization to communicate and cooperate with each other. This so-called intranet (see Exhibit 10) is essentially a fenced-off mini-Internet within an organization. A firewall (see the Firewall section later in this chapter) is used to restrict access so that people outside the organization cannot access the intranet. While an intranet may not directly facilitate cooperation with external stakeholders, its ultimate goal is to improve an organization’s ability to serve these stakeholders.
Exhibit 10: An intranet
Exhibit 11: An extranet
The Internet and intranet, as the names imply, are networks. That is, an array of computers can connect to each other. In some situations, however, an organization may want to restrict connection capabilities. An extranet (see Exhibit 11) is designed to link a buyer and supplier to facilitate greater coordination of common activities. The idea of an extranet derives from the notion that each business has a value chain and the end-point of one firm’s chain links to the beginning of another’s. Internet technology can be used to support communication and data transfer between two value chains. Communication is confined to the computers linking the two organizations. An organization can have multiple extranets to link it with many other organizations, but each extranet is specialized to support partnership coordination.
The economies gained from low-cost Internet software and infrastructure mean many more buyers and supplier pairs can now cooperate electronically. The cost of linking using Internet technology is an order of magnitude lower than using commercial communication networks for electronic data interchange (EDI) , the traditional approach for electronic cooperation between business partners.
EDI
EDI, which has been used for some 20 years, describes the electronic exchange of standard business documents between firms. A structured, standardized data format is used to exchange common business documents (e.g., invoices and shipping orders) between trading partners. In contrast to the free form of e-mail messages, EDI supports the exchange of repetitive, routine business transactions. Standards mean that routine electronic transactions can be concise and precise. The main standard used in the U.S. and Canada is known as ANSI X.12, and the major international standard is EDIFACT. Firms following the same standard can electronically share data. Before EDI, many standard messages between partners were generated by computer, printed, and mailed to the other party, which then manually entered the data into its computer. The main advantages of EDI are:
• paper handling is reduced, saving time and money;
• data are exchanged in real time;
• there are fewer errors since data are keyed only once;
• enhanced data sharing enables greater coordination of activities between business partners;
• money flows are accelerated and payments received sooner.
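The central idea of EDI, a fixed, agreed-upon segment layout that lets both partners' computers exchange a document without rekeying, can be sketched in a few lines of Python. The segment names below are loosely modeled on the X12 810 (invoice) transaction, but the delimiters and layout here are invented for illustration and are not the real standard.

```python
# Much-simplified illustration of an EDI-style structured document.
# Segment names loosely follow the X12 810 (invoice) transaction;
# the delimiters and layout are invented for illustration only.
invoice = "|".join([
    "ST*810",                 # transaction set header: 810 = invoice
    "BIG*19990315*INV001",    # invoice date and invoice number
    "IT1*1*10*EA*4.50",       # line item: qty 10, each, unit price 4.50
    "TDS*4500",               # total due, in cents
    "SE*5",                   # trailer: number of segments
])

# The receiving partner's computer splits on the agreed delimiters
# instead of a clerk re-entering the data by hand:
segments = [seg.split("*") for seg in invoice.split("|")]
total_cents = int(segments[3][1])
print(total_cents)   # 4500
```

Because the layout is standardized, the same parsing code works for every invoice either partner sends, which is what eliminates the rekeying errors mentioned above.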
Despite these advantages, for most companies EDI is still the exception, not the rule. A recent survey in the United States showed that almost 80 percent of the information flow between firms is on paper. Paper should be the exception, not the rule. Most EDI traffic has been handled by value-added networks (VANs) or private networks. VANs add communication services to those provided by common carriers (e.g., AT&T in the U.S. and Telstra in Australia). However, these networks are too expensive for all but the largest 100,000 of the 6 million businesses in existence today in the United States. As a result, many businesses have not been able to participate in the benefits associated with EDI. However, the Internet will enable these smaller companies to take advantage of EDI.
Internet communication costs are typically less than with traditional EDI. In addition, the Internet is a global network potentially accessible by nearly every firm. Consequently, the Internet is displacing VANs as the electronic transport path between trading partners.
The simplest approach is to use the Internet as a means of replacing a VAN by using a commercially available Internet EDI package. EDI, with its roots in the 1960s, is a system for exchanging text, and the opportunity to use the multimedia capabilities of the Web is missed if a pure replacement strategy is applied. The multimedia capability of the Internet creates an opportunity for new applications that spawn a qualitatively different type of information exchange within a partnership. Once multimedia capability is added to the information exchange equation, then a new class of applications can be developed (e.g., educating the other partner about a firm’s purchasing procedures).
Security
Security is an eternal concern for organizations as they face the dual problem of protecting stored data and transported messages. Organizations have always had sensitive data to which they want to limit access to a few authorized people. Historically, such data have been stored in restricted areas (e.g., a vault) or encoded. These methods of restricting access and encoding are still appropriate.
Electronic commerce poses additional security problems. First, the intent of the Internet is to give people remote access to information. The system is inherently open, and traditional approaches of restricting access by the use of physical barriers are less viable, though organizations still need to restrict physical access to their servers. Second, because electronic commerce is based on computers and networks, these same technologies can be used to attack security systems. Hackers can use computers to intercept network traffic and scan it for confidential information. They can use computers to run repeated attacks on a system to breach its security (e.g., trying all words in the dictionary for an account’s password).
Access control
Data access control, the major method of controlling access to stored data, often begins with some form of visitor authentication, though this is not always the case with the Web because many organizations are more interested in attracting rather than restricting visitors to their Web site. A variety of authentication mechanisms may be used (see Exhibit 12). The common techniques for the Internet are account number, password, and IP address.
Exhibit 12. Authentication mechanisms
Class                     Examples
Personal memory           Name, account number, password
Possessed object          Badge, plastic card, key, IP address
Personal characteristic   Fingerprint, voiceprint, signature, hand size
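The account-number/password mechanism is typically implemented by storing a salted hash of each password rather than the password itself, so a stolen account file does not reveal the passwords. A minimal Python sketch (illustrative only, not production-grade):

```python
import hashlib
import secrets

# Sketch of password authentication with salted hashing. Only the salt
# and digest are stored; the password itself never is.
def store_password(password):
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest

def authenticate(password, salt, digest):
    # Re-hash the supplied password and compare it with the stored digest.
    candidate = hashlib.sha256((salt + password).encode()).hexdigest()
    return secrets.compare_digest(candidate, digest)

salt, digest = store_password("meekatharra")
print(authenticate("meekatharra", salt, digest))   # True
print(authenticate("wrong-guess", salt, digest))   # False
```

The salt defeats the dictionary attack mentioned earlier in only a limited way: it prevents precomputed tables, but a persistent attacker can still try all dictionary words, which is why systems also limit repeated login attempts.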
Firewall
A system may often use multiple authentication methods to control data access, particularly because hackers are often persistent and ingenious in their efforts to gain unauthorized access. A second layer of defense can be a firewall, a device (e.g., a computer) placed between an organization’s network and the Internet. This barrier monitors and controls all traffic between the Internet and the intranet. Its purpose is to restrict the access of outsiders to the intranet. A firewall is usually located at the point where an intranet connects to the Internet, but it is also feasible to have firewalls within an intranet to further restrict the access of those within the barrier.
There are several approaches to operating a firewall. The simplest method is to restrict traffic to packets with designated IP addresses (e.g., only permit those messages that come from the University of Georgia–i.e., the address ends with uga.edu). Another screening rule is to restrict access to certain applications (e.g., Web pages). More elaborate screening rules can be implemented to decrease the ability of unauthorized people to access an intranet.
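These screening rules can be sketched as a toy packet filter in Python; the allowed domain suffix and port numbers below are hypothetical examples of a rule set, not a real firewall configuration.

```python
# Toy packet filter illustrating simple firewall screening rules.
# The rule set (domain suffix, permitted ports) is hypothetical.
ALLOWED_SUFFIX = "uga.edu"    # admit only traffic from this domain
ALLOWED_PORTS = {80, 443}     # admit only Web traffic

def admit(source_host, dest_port):
    """Apply both screening rules to an incoming packet."""
    return source_host.endswith(ALLOWED_SUFFIX) and dest_port in ALLOWED_PORTS

print(admit("server.uga.edu", 80))       # True
print(admit("outside.example.com", 80))  # False: wrong domain
print(admit("server.uga.edu", 23))       # False: telnet not permitted
```

A real firewall applies many such rules in order, and the tradeoff discussed next is essentially about how elaborate this rule set should be.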
Implementing and managing a firewall involves a tradeoff between the cost of maintaining the firewall and the loss caused by unauthorized access. An organization that simply wants to publicize its products and services may operate a simple firewall with limited screening rules. Alternatively, a firm that wants to share sensitive data with selected customers may install a more complex firewall to offer a high degree of protection.
Coding
Coding or encryption techniques, as old as writing, have been used for thousands of years to maintain confidentiality. Although encryption is primarily used for protecting the integrity of messages, it can also be used to complement data access controls. There is always some chance that people will circumvent authentication controls and gain unauthorized access. To counteract this possibility, encryption can be used to obscure the meaning of data. The intruder cannot read the data without knowing the method of encryption and the key.
Societies have always needed secure methods of transmitting highly sensitive information and confirming the identity of the sender. In an earlier time, messages were sealed with the sender’s personal signet ring–a simple, but easily forged, method of authentication. We still rely on personal signatures for checks and legal contracts, but how do you sign an e-mail message? In the information age, we need electronic encryption and signing for the orderly conduct of business, government, and personal correspondence.
Internet messages can pass through many computers on their way from sender to receiver, and there is always the danger that a sniffer program on an intermediate computer briefly intercepts and reads a message. In most cases, this will not cause you great concern, but what happens if your message contains your name, credit card number, and expiration date? The sniffer program, looking for a typical credit card number format of four blocks of four digits (e.g., 1234 5678 9012 3456), copies your message before letting it continue its normal progress. Now, the owner of the rogue program can use your credit card details to purchase products in your name and charge them to your account.
Without a secure means of transmitting payment information, customers and merchants will be very reluctant to place and receive orders, respectively. When the customer places an order, the Web browser should automatically encrypt the order prior to transmission–this is not the customer’s task.
Credit card numbers are not the only sensitive information transmitted on the Internet. Because it is a general transport system for electronic information, the Internet can carry a wide range of confidential information (financial reports, sales figures, marketing strategies, technology reports, and so on). If senders and receivers cannot be sure that their communication is strictly private, they will not use the Internet. Secure transmission of information is necessary for electronic commerce to thrive.
Encryption
Encryption is the process of transforming messages or data to protect their meaning. Encryption scrambles a message so that it is meaningful only to the person knowing the method of encryption and the key for deciphering it. To everybody else, it is gobbledygook. The reverse process, decryption, converts a seemingly senseless character string into the original message. A popular form of encryption, readily available to Internet users, goes by the name of Pretty Good Privacy (PGP) and is distributed on the Web. PGP is a public domain implementation of public-key encryption.
Traditional encryption, which uses the same key to encode and decode a message, has a very significant problem. How do you securely distribute the key? It can’t be sent with the message because if the message is intercepted, the key can be used to decipher it. You must find another secure medium for transmitting the key. So, do you fax the key or phone it? Either method is not completely secure and is time-consuming whenever the key is changed. Also, how do you know that the key’s receiver will protect its secrecy?
A public-key encryption system has two keys: one private and the other public. A public key can be freely distributed because it is quite separate from its corresponding private key. To send and receive messages, communicators first need to create separate pairs of private and public keys and then exchange their public keys. The sender encrypts a message with the intended receiver’s public key, and upon receiving the message, the receiver applies her private key (see Exhibit 13). The receiver’s private key, the only one that can decrypt the message, must be kept secret to permit secure message exchange.
Exhibit 13: Encryption with a public-key system
The elegance of the public-key system is that it totally avoids the problem of secure transmission of keys. Public keys can be freely exchanged. Indeed, there can be a public database containing each person’s or organization’s public key. For instance, if you want to e-mail a confidential message, you can simply obtain the intended receiver’s public key and encrypt your entire message with it prior to transmission.
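The arithmetic behind a public-key system can be illustrated with a toy RSA-style calculation in Python. The primes below are far too small to be secure and are purely illustrative; a real system uses keys hundreds of digits long, generated by a vetted cryptographic library.

```python
# Toy RSA-style public-key encryption (illustrative only).
p, q = 61, 53
n = p * q                  # the modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent: (e, n) is the PUBLIC key
d = pow(e, -1, phi)        # private exponent: (d, n) is the PRIVATE key

message = 42               # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)    # sender encrypts with the public key
recovered = pow(ciphertext, d, n)  # receiver decrypts with the private key
print(recovered)           # 42
```

Note that knowing (e, n) is of no help in decrypting: only the holder of d can recover the message, which is why the public key can be published freely.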
Exhibit 14: Message before encryption
To: George Zinkhan <[email protected]>
From: Rick Watson <[email protected]>
Subject: Money
––––––––––––––––––––––––––––––
G’day George
I hope you are enjoying your stay in Switzerland. Could you do me a favor? I need USD 50,000 from my secret Swiss bank account. The name of the bank is Aussie-Suisse International in Geneva. The account code is 451-3329 and the password is ‘meekatharra’. I’ll see you (and the money) at the airport this Friday.
Cheers
Rick
Consider the message shown in Exhibit 14; the sender would hardly want this message to fall into the wrong hands. After encryption, the message is totally secure (see Exhibit 15). Only the receiver, using his private key, can decode the message.
Exhibit 15: Message after encryption
To: George Zinkhan <[email protected]>
From: Rick Watson <[email protected]>
Subject: Money
-----BEGIN PGP MESSAGE-----
Version: 2.6.2
hEwDfOTG8eEvuiEBAf9rxBdHpgdq1g0gaIP7zm1OcHvWHtx 9 ip27q6vI tjYbIUKDnGjV0sm2INWpcohrarI9S2xU6UcSPyFfumGs9pgAAAQ0euRGjZY RgIPE5DUHG uItXYsnIq7zFHVevjO2dAEJ8ouaIX9YJD8kwp4T3suQnw7/d 1j4edl46qisrQHpRRwqHXons7w4k04x8tH4JGfWEXc5LB hcOSyPHEir4EP qDcEPlblM9bH6 w2ku2fUmdMaoptnVSinLMtzSqIKQlHMfaJ0HM9Df4kWh ZbY0yFXxSuHKrgbaoDcu9wUze35dtwiCTdf1sf3ndQNaLOFiIjh5pis bUg 9rOZjxpEFbdGgYpcfBB4rvRNwOwizvSodxJ9H VdtAL3DIsSJdNSAEuxjQ0 hvOSA8oCBDJfHSUFqX3ROtB3 yuT1vf/C8Vod4gW4tvqj8C1QNte ehxg==
=fD44
-----END PGP MESSAGE-----
Signing
In addition, a public-key encryption system can be used to authenticate messages. In cases where the content of the message is not confidential, the receiver may still wish to verify the sender’s identity. For example, one of your friends may find it amusing to have some fun at your expense (see Exhibit 16).
Exhibit 16: Message before signing
To: Rick Watson <[email protected]>
From: [email protected]
Subject: Invitation to visit the White House
––––––––––––––––––––––––––––––
Dear Dr. Watson
It is my pleasure to invite you to a special meeting of Internet users at the White House on April 1st at 2 pm. Please call 212-123-7890 and ask for Mr. A. Phool for complete details of your visit.
The President
If the President indeed were in the habit of communicating electronically, it is likely that he would sign his messages so that receivers could verify their authenticity. A sender’s private key is used to create a signed message. The receiver then applies the sender’s public key to verify the signature (see Exhibit 17).
Exhibit 17: Signing with a public-key system
A signed message has additional encrypted text containing the sender’s signature (see Exhibit 18). When the purported sender’s public key is applied to this message, the identity of the sender can be verified (it was not the President).
Exhibit 18: Message after signing
To: Rick Watson <[email protected]>
From: [email protected]
Subject: Invitation to visit the White House
––––––––––––––––––––––––––––––
Dear Dr. Watson
It is my pleasure to invite you to a special meeting of Internet users at the White House on April 1st at 2 pm. Please call 212-123-7890 and ask for Mr. A. Phool for complete details of your visit.
The President
-----BEGIN PGP SIGNATURE-----
Version: 2.6.2
iQCVAwUBMeRVVUblZxMqZR69AQFJNQQAwHMSrZhWyiGTieGukbhPGUNF3aB qm7E8g5ySsY6QqUcg2zwUr40w8Q0Lfcc4nmr0NUujiXkqzTNb 3RL41w5x fTCfMp1Fi5Hawo829UQAlmN8L5hzl7XfeON5WxfYcxLGXZcbUWkGio6/d4r 9Ez6s79DDf9EuDlZ4qfQcy1iA==G6jB
-----END PGP SIGNATURE-----
Imagine you pay USD 1,000 per year for an investment information service. The provider might want to verify that any e-mail requests it receives are from subscribers. Thus, as part of the subscription sign-up, subscribers have to supply their public key, and when using the service, sign all electronic messages with their private key. The provider is then assured that it is servicing paying customers. Naturally, any messages between the service and the client should be encrypted to ensure that others do not gain from the information.
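Signing reverses the roles of the keys: the sender encrypts a digest of the message with the private key, and anyone holding the public key can check it. A toy RSA-style sketch in Python (tiny illustrative primes, not a real signature scheme):

```python
import hashlib

# Toy RSA-style digital signature (illustrative only; real signatures
# use large keys and a vetted cryptographic library).
p, q = 61, 53
n = p * q
e = 17                               # public exponent, used to VERIFY
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, used to SIGN

def digest(text):
    """Reduce a message to a number small enough for the toy key."""
    return int.from_bytes(hashlib.sha256(text.encode()).digest(), "big") % n

message = "Please send USD 50,000 to account 451-3329."
signature = pow(digest(message), d, n)       # signed with the private key

# Anyone holding the public key (e, n) can check the signature:
print(pow(signature, e, n) == digest(message))   # True
# Altering the message breaks the check, since the digests no longer match:
print(pow(signature, e, n) == digest(message + " Hurry!"))
```

This is exactly the subscriber-verification scheme described above: the service stores each subscriber's public key and checks every incoming request against it.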
Electronic money
When commerce goes electronic, the means of paying for goods and services must also go electronic. Paper-based payment systems cannot support the speed, security, privacy, and internationalization necessary for electronic commerce. In this section, we discuss four methods of electronic payment:
• electronic funds transfer
• digital cash
• ecash
• credit card
There are four fundamental concerns regarding electronic money: security, authentication, anonymity, and divisibility. Consumers and organizations need to be assured that their on-line orders are protected, and organizations must be able to transfer many millions of dollars securely. Buyers and sellers must be able to verify that the electronic money they receive is real; consumers must have faith in electronic currency. Transactions, when required, should remain confidential. Electronic currency must be spendable in small amounts (e.g., less than one-tenth of a cent) so that high-volume, small-value Internet transactions are feasible (e.g., paying 0.1 cent to read an article in an encyclopedia). The various approaches to electronic money vary in their capability to solve these concerns (see Exhibit 19).
Exhibit 19. Characteristics of electronic money
               Security   Authentication   Anonymity   Divisibility
EFT            High       High             Low         Yes
Digital cash   Medium     High             High        Yes
Ecash          High       High             High        Yes
Credit card    High       High             Low         Yes
Any money system, real or electronic, must have a reasonable level of security and a high level of authentication, otherwise people will not use it. All electronic money systems are potentially divisible. There is a need, however, to adapt some systems so that transactions can be automated. For example, you do not want to have to type your full credit card details each time you spend one-tenth of a cent. A modified credit card system, which automatically sends previously stored details from your personal computer, could be used for small transactions.
The technical problems of electronic money have not been completely solved, but many people are working on their solution because electronic money promises efficiencies that will reduce the costs of transactions between buyers and sellers. It will also enable access to the global marketplace. In the next few years, electronic currency will displace notes and coins for many transactions.
Electronic funds transfer
Electronic funds transfer (EFT), introduced in the late 1960s, uses the existing banking structure to support a wide variety of payments. For example, consumers can establish monthly checking account deductions for utility bills, and banks can transfer millions of dollars. EFT is essentially electronic checking. Instead of writing a check and mailing it, the buyer initiates an electronic checking transaction (e.g., using a debit card at a point-of-sale terminal). The transaction is then electronically transmitted to an intermediary (usually the banking system), which transfers the funds from the buyer’s account to the seller’s account. A banking system has one or more common clearinghouses that facilitate the flow of funds between accounts in different banks.
Electronic checking is fast; transactions are instantaneous. Paper handling costs are substantially reduced. Bad checks are no longer a problem because the seller’s account balance is verified at the moment of the transaction. EFT is flexible; it can handle high volumes of consumer and commercial transactions, both locally and internationally. The international payment clearing system, consisting of more than 100 financial institutions, handles more than one trillion dollars per day.
The major shortfall of EFT is that all transactions must pass through the banking system, which is legally required to record every transaction. This lack of privacy can have serious consequences. Cash, in contrast, gives anonymity.
Digital cash
Digital cash is an electronic parallel of notes and coins. Two variants of digital cash are presently available: prepaid cards and smart cards. The phonecard, the most common form of prepaid card, was first issued in 1976 by the forerunner of Telecom Italia. The problem with special-purpose cards, such as phone and photocopy cards, is that people end up with a purse or wallet full of cards. A smart card combines many functions into one card: it can serve as personal identification, credit card, ATM card, telephone credit card, critical medical information record, and cash for small transactions. A smart card, containing memory and a microprocessor, can store as much as 100 times more data than a magnetic-stripe card, and its microprocessor can be programmed.
The stored-value card, the most common application of smart card technology, can be used to purchase a wide variety of items (e.g., fast food, parking, public transport tickets). Consumers buy cards of standard denominations (e.g., USD 50 or USD 100) from a card dispenser or bank. When the card is used to pay for an item, it must be inserted in a reader. Then, the amount of the transaction is transferred to the reader, and the value of the card is reduced by the transaction amount.
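The mechanism just described can be sketched in a few lines of Python: the card holds a balance that the reader decrements at each purchase. The class name and serial number below are hypothetical.

```python
# Minimal sketch of a stored-value card. The card holds a balance that
# the reader decrements at each purchase; class and serial are hypothetical.
class StoredValueCard:
    def __init__(self, denomination, serial):
        self.balance = denomination
        self.serial = serial        # a unique serial aids loss reporting

    def pay(self, amount):
        """Transfer value to the reader if the card holds enough."""
        if amount > self.balance:
            return False            # insufficient stored value
        self.balance -= amount
        return True

card = StoredValueCard(50.00, "SVC-0001")
print(card.pay(3.75), card.balance)     # True 46.25
print(card.pay(100.00), card.balance)   # False 46.25
```

A real card performs this logic in the chip's microprocessor, so the balance cannot be altered without the reader's participation.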
The problem with digital cash, like real cash, is that you can lose it or it can be stolen. It is not as secure as the other alternatives, but most people are likely to carry only small amounts of digital cash, so security is not so critical. Because each smart card is likely to have a unique serial number, consumers can limit their loss by reporting a stolen or misplaced smart card to invalidate its use. Adding a PIN to a smart card can raise its security level.
Twenty million smart cards are already in use in France, where they were introduced a decade ago. In Austria, 2.5 million consumers carry a card that has an ATM magnetic stripe as well as a smart card chip. Stored-value cards are likely to be in widespread use in the United States within five years. Their wide-scale adoption could provide substantial benefits. Counting, moving, storing, and safeguarding cash is estimated to cost 4 percent of the value of all transactions. There are also significant benefits to be gained because banks don’t have to hold as much cash on hand, and thus have more money available for investment.
Ecash
Digicash of Amsterdam has developed an electronic payment system called ecash that can be used to withdraw and deposit electronic cash over the Internet. The system is designed to provide secure payment between computers using e-mail or the Internet. Ecash can be used for everyday Internet transactions, such as buying software, receiving money from parents, or paying for a pizza to be delivered. At the same time, ecash provides the privacy of cash because the payer can remain anonymous.
To use ecash, you need a digital bank account and ecash client software. The client is used to withdraw ecash from your bank account, and store it on your personal computer. You can then spend the money at any location accepting ecash or send money to someone who has an ecash account.
The security system is based on public-key cryptography and passwords. You need a password to access your account and electronic transactions are encrypted.
Credit card
Credit cards are a safe, secure, and widely used remote payment system. Millions of people use them every day for ordering goods by phone. Furthermore, people think nothing of handing over their card to a restaurant server, who could easily find time to write down the card’s details. In the case of fraud in the U.S., banks already protect consumers, who are typically liable for only the first USD 50. So, why worry about sending your credit card number over the Internet? The development of secure servers and clients has made transmitting credit card numbers extremely safe. The major shortcoming of credit cards is that they do not support person-to-person transfers and do not have the privacy of cash.
Secure electronic transactions
Electronic commerce requires participants to have a secure means of transmitting the confidential data necessary to perform a transaction. For instance, banks (which bear the brunt of the cost of credit card fraud) prefer credit card numbers to be hidden from prying electronic eyes. In addition, consumers want assurance that the Web site with which they are dealing is not a bogus operation. Two forms of protecting electronic transactions are SSL and SET.
SSL
Secure Sockets Layer (SSL) was created by Netscape for managing the security of message transmissions in a network. SSL uses public-key encryption to encode the transmission of secure messages (e.g., those containing a credit card number) between a browser and a Web server.
The client part of SSL is part of Netscape’s browser. If a Web site is using a Netscape server, SSL can be enabled and specific Web pages can be identified as requiring SSL access. Other servers can be enabled by using Netscape’s SSLRef program library, which can be downloaded for noncommercial use or licensed for commercial use.
SET
Secure Electronic Transaction (SET) is a financial industry innovation designed to increase consumer and merchant confidence in electronic commerce. Backed by major credit card companies, MasterCard and Visa, SET is designed to offer a high level of security for Web-based financial transactions. SET should reduce consumers’ fears of purchasing over the Web and increase use of credit cards for electronic shopping. A proposed revision, due in 1999, will extend SET to support business-to-business transactions, such as inventory payments.
Visa and MasterCard founded SET as a joint venture on February 1, 1996. They realized that in order to promote electronic commerce, consumers and merchants would need a secure, reliable payment system. In addition, credit card issuers sought the protection of more advanced anti-fraud measures. American Express has subsequently joined the venture.
SET is based on cryptography and digital certificates. Public-key cryptography ensures message confidentiality between parties in a financial transaction. Digital certificates uniquely identify the parties to a transaction. They are issued by banks or clearinghouses and kept in registries so that authenticated users can look up other users’ public keys.
Think of a digital certificate as an electronic credit card. It contains a person’s name, a serial number, expiration date, a copy of the certificate holder’s public key (used for encrypting and decrypting messages and verifying digital signatures), and the digital signature of the certificate-issuing authority so that a recipient can verify that the certificate is real. A digital signature is used to guarantee a message sender’s identity.
The SET components
Cardholder wallet
The application on the cardholder’s side is also called the digital wallet. This software plug-in contains a consumer’s digital certificate, shipping address, and other account information. This critical information is protected by a password, which the owner must supply to access the stored data. In effect, an electronic wallet stores a digital representation of a person’s credit card and enables electronic transactions.
Merchant server
On the merchant side, a merchant server accepts electronic credit card payments.
Payment gateway
The payment gateway is the bridge between SET and the existing payment network. A payment gateway application translates SET messages for the existing payment system to complete the electronic transaction.
Certificate authority
The certificate authority issues and manages digital certificates, which are proofs of the identities for all parties involved in a SET transaction.
The process
The following set of steps illustrates SET in action.
• The customer opens a MasterCard or Visa account with a bank.
• The customer receives a digital certificate (an electronic file), which functions as a credit card for on-line transactions. The certificate includes a public key with an expiration date and has been digitally signed by the bank to ensure its validity.
• Third-party merchants also receive digital certificates from the bank. These certificates include the merchant’s public key and the bank’s public key.
• The customer places an electronic order from a merchant’s Web page.
• The customer’s browser receives and confirms that the merchant’s digital certificate is valid.
• The browser sends the order information, which is encrypted with the merchant’s public key; the payment information, which is encrypted with the bank’s public key (and so can’t be read by the merchant); and information that ensures the payment can be used only with the current order.
• The merchant verifies the customer by checking the digital signature on the customer’s certificate. This may be done by referring the certificate to the bank or to a third-party verifier.
• The merchant sends the order message along to the bank. This includes the bank’s public key, the customer’s payment information (which the merchant can’t decode), and the merchant’s certificate.
• The bank verifies the merchant and the message. The bank uses the digital signature on the certificate with the message and verifies the payment part of the message.
• The bank digitally signs and sends authorization to the merchant, who can then fill the order.
• The customer receives the goods and a receipt.
• The merchant gets paid according to its contract with its bank.
• The customer gets a monthly bill from the bank issuing the credit card.
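The key privacy property in these steps, that the merchant can read the order but not the card number, can be sketched with two toy RSA key pairs in Python. The tiny primes are purely illustrative; real SET relies on certificates and full-strength cryptography.

```python
# Sketch of SET-style dual encryption: order details readable by the
# merchant, payment details readable only by the bank. Toy RSA keys
# with tiny illustrative primes (not secure).
def make_keys(p, q, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (e, n), (d, n)          # (public key, private key)

merchant_public, merchant_private = make_keys(61, 53)
bank_public, bank_private = make_keys(89, 97)

def encrypt(m, public_key):
    e, n = public_key
    return pow(m, e, n)

def decrypt(c, private_key):
    d, n = private_key
    return pow(c, d, n)

order_item = 7        # order details, encoded as a small number
card_number = 1234    # payment details, encoded as a small number

purchase = {
    "order": encrypt(order_item, merchant_public),   # merchant can decrypt
    "payment": encrypt(card_number, bank_public),    # only the bank can
}

print(decrypt(purchase["order"], merchant_private))  # 7
print(decrypt(purchase["payment"], bank_private))    # 1234
```

Because the payment field was encrypted with the bank's public key, the merchant's private key is useless against it, which is the property the next paragraph highlights.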
The advantage of SET is that a consumer’s credit card number cannot be deciphered by the merchant. Only the bank and card issuer can decode this number. This facility provides an additional level of security for consumers, banks, and credit card issuers, because it significantly reduces the ability of unscrupulous merchants to establish a successful Web presence.
In order to succeed, SET must displace the current standard for electronic transactions, SSL, which is simpler than SET but less secure. Because of SSL’s simplicity, it is expected to provide tough competition, and may remain the method of choice for the interface between the on-line buyer and the merchant. The combination of SSL and fraud-detection software has so far provided low-cost, adequate protection for electronic commerce.
Cookies
The creator of a Web site often wants to remember facts about you and your visit. A cookie is the mechanism for remembering details of a single visit or storing facts between visits. A cookie is a small file (no more than 4 KB) stored on your hard disk by a Web application. Cookies have several uses.
• Visit tracking: A cookie might be used to determine which pages a person views on a particular Web site visit. The data collected could be used to improve site design.
• Storing information: Cookies are used to record personal details so that you don’t have to supply your name and address each time you visit a particular site. Most subscription services (e.g., The Wall Street Journal) and on-line stores (e.g., Amazon.com) use this approach.
• Customization: Some sites use cookies to customize their service. A cookie might be used by CNN to remember that you are mainly interested in news about ice skating and cooking.
• Marketing: A cookie can be used to remember what sites you have visited so that relevant advertisements can be supplied. For example, if you frequently visit travel sites, you might get a banner ad from Delta popping up next time you do a search.
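All four uses rest on the same mechanism: a Set-Cookie header stores the data, and the browser sends it back on later visits. A minimal sketch using Python’s standard library, with invented cookie names and values:

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header that stores a visitor's
# interests between visits (name and value are invented examples).
cookie = SimpleCookie()
cookie["interests"] = "skating+cooking"
cookie["interests"]["max-age"] = 60 * 60 * 24 * 30  # keep for 30 days
print(cookie.output())  # e.g. Set-Cookie: interests=skating+cooking; Max-Age=2592000

# On the next visit, the browser returns the value in a Cookie header,
# which the site parses to customize the page.
returned = SimpleCookie()
returned.load("interests=skating+cooking")
print(returned["interests"].value)
```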
Cookies are a useful way of collecting data to provide visitors with better service. Without accurate information about people’s interest, it is very difficult to provide good service.
Both Internet Explorer and Netscape Navigator allow surfers to set options for various levels of warnings about the use of cookies. Visitors who are concerned about the misuse of cookies can reject them totally, with the consequent loss of service.
Conclusion
The rapid growth of electronic commerce is clear evidence of the reliability and robustness of the underlying technology. Many of the pieces necessary to facilitate electronic commerce are mature, well-tested technologies, such as public-key encryption. The future is likely to see advances that make electronic commerce faster, less expensive, more reliable, and more secure.
Cases
Austin, R. D., and M. Cotteleer. 1997. Ford Motor Company: maximizing the business value of Web technologies. Harvard Business School, 9-198-006.
Parent, M. 1997. Cisco Systems Inc.: managing corporate growth using an Intranet. London, Canada: University of Western Ontario. 997E018.
Introduction
The Web changes the nature of communication between firms and customers. The traditional advertiser decides the message content; on the Web, the customer selects the message. Traditional advertising primarily centers on the firm broadcasting a message. The flow of information is predominantly from the seller to the buyer. However, the Web puts this flow in reverse thrust. Customers have considerable control over which messages they receive because it is primarily by visiting Web sites that they are exposed to marketing communications. The customer intentionally seeks the message.[1]
The Web increases the richness of communication because it enables greater interactivity between the firm and its customers and among customers. The airline can e-mail frequent flyers special deals on underbooked flights. The prospective book buyer can search electronically by author, title, or genre. Customers can join discussion groups to exchange information on product bugs, innovative uses, gripes about service, and ask each other questions. Firms and customers can get much closer to each other because of the relative ease and low cost of electronic interaction.
Although there is some traditional advertising on the Web, especially that associated with search engines, in the main the communication relationship is distinctly different. This shift in communication patterns is so profound that major communication conglomerates are undergoing a strategic realignment. Increasingly, customers use search and directory facilities to seek information about a firm’s products and services. Consequently, persuading and motivating customers to seek out interactive marketing communication and interact with advertisers is the biggest challenge facing advertisers in the interactive age.
In the new world of Web advertising, the rules are different. The Web, compared to other media, provides a relatively level playing field for all participants in that:
• access opportunities are essentially equal for all players, regardless of size;
• share of voice is essentially uniform–no player can drown out others;
• initial set-up costs present minimal or nonexistent barriers to entry.
A small company with a well-designed home page can look every bit as professional and credible as a large, multinational company. People can’t tell if you do business from a 90-story office building or a two-room rented suite. Web home pages level the playing field for small companies.
Differentiation–success in appealing to desirable market segments so as to maintain visibility, create defensible market positions, and forge institutional identity–is considered to be a central key to survival and growth for businesses in the new electronic marketplace. In other words:
How do you create a mountain in a flat world?
An attractor is a Web site with the potential to attract and interact with a relatively large number of visitors in a target stakeholder group (for example, an auto company will want to attract and interact with more prospective buyers to its Web site than its competitors). While the Web site must be a good attractor, it must also have the facility for interaction if its powers of attraction are to have a long life span. Merely having attraction power is not enough–the site might attract visitors briefly or only once. The strength of the medium lies in its abilities to interact with buyers, on the first visit and thereafter. Good sites offer interaction above all else; less effective sites may often look more visually appealing, but offer little incentive to interact. Many organizations have simply used the Web as an electronic dumping ground for their corporate brochures–this in no way exploits the major attribute of the medium–its ability to interact with the visitor. Purely making the corporate Web site a mirror of the brochure is akin to a television program that merely presents visual material in the form of stills, with little or no sound. Television’s major attribute is its ability to provide motion pictures and sounds to a mass audience, and merely using it as a platform for showing still graphics and pictures does not exploit the medium. Thus, very little television content is of this kind today. Similarly, if Web sites are not interactive, they fail to exploit the potential of the new medium. The best Web sites both attract and interact–for example, the BMW site shows pictures of its cars and accompanies these with textual information. More importantly, BMW allows the visitor to see and listen to the new BMW Z3 coupe, redesign the car by seeing different color schemes and specifications, and drive the car using virtual reality. This is interaction with the medium rather than mere reaction to the medium.
We propose that the strategic use of hard-to-imitate attractors, building blocks for gaining visibility with targeted stakeholders, will be a key factor in on-line marketing. Creating an attractor will, we believe, become a key component of the strategy of some firms. This insight helps define the issues we want to focus on in this chapter:
• identification and classification of attractors;
• use of attractors to support a marketing strategy.
Types of attractors
Given the recency of the Web, there is limited prior research on electronic commerce, and theories are just emerging. In new research domains, observation and classification are common features of initial endeavors. Thus, in line with the pattern coding approach of qualitative research, we sought overriding concepts to classify attractors.
To understand how firms distinguish themselves in a flat world, we reviewed marketing research literature, surfed many Web sites (including specific checks on innovations indicated in What’s New pages or sections), monitored Web sites that publish reviews of other companies’ Web efforts, and examined prize lists for innovative Web solutions.
After visiting many Web sites and identifying those that seem to have the potential to attract a large number of visitors, we used metaphors to label and group sites into categories (see Exhibit 1). The categories are not mutually exclusive, just as the underlying metaphors are not distinct categories. For example, we use both the archive and entertainment park as metaphors. In real life, archives have added elements of entertainment (e.g., games that demonstrate scientific principles) and entertainment parks recreate historical periods (e.g., Frontierland at Disney).
Exhibit 1.: Types of attractors
The entertainment park
The archive
Exclusive sponsorship
The Town Hall
The club
The gift shop
The freeway intersection or portal
The customer service center
The entertainment park
Web sites in this category engage visitors in activities that demand a high degree of participation while offering entertainment. Many use games to market products and enhance corporate image. These sites have the potential to generate experiential flow, because they provide various degrees of challenge to visitors. They are interactive and often involve elements and environments that promote telepresence experiences. The activities in the entertainment park often have the character of a contest, where awards can be distributed through the network (e.g., the Disney site). These attractors are interactive, recreational, and challenging. The potential competitive advantages gained through these attractors are high traffic potential (with repeat visits) and creation or enforcement of an image of a dynamic, exciting, and friendly corporation.
Examples in this category include:
• GTE Laboratories’ Fun Stuff part of its Web site, which includes Web versions of the popular games MineSweeper, Rubik’s cube, and a 3D maze for Web surfers to navigate;
• The Kellogg Company’s site lets young visitors pick a drawing and color it by selecting from a palette and clicking on segments of the picture;
• Visitors to Karakas VanSickle Ouellette Advertising and Public Relations can engage in the comical Where’s Pierre game and win a T-shirt by discovering the whereabouts of Pierre Ouellette, KVO’s creative big cheese;
• Joe Boxer uses unusual effects and contests for gaining attention. For solving an advanced puzzle, winners gain supplies of virtual underwear. Instructions such as “Press the eyeball and you will return to the baby,” are a blend of insanity and advertising genius.
The archive
Archive sites provide their visitors with opportunities to discover the historical aspects of the company’s activities. Their appeal lies in the instant and universal access to interesting information and the visitor’s ability to explore the past, much like museums or maybe even more like the more recently created exploratoria (entertainment with educational elements). The credibility of a well-established image is usually the foundation of a successful archive, and building and reinforcing this corporate image is the main marketing role of the archive.
The strength of these attractors is that they are difficult to imitate, and often impossible to replicate. They draw on an already established highly credible feature of the company, and they bring an educational potential, thus reinforcing public relations aspects of serving the community with valuable information. The major weakness is that they often lack interactivity and are static and less likely to attract repeat visits. The potential competitive advantage gained through these attractors is the building and maintenance of the image of a trusted, reputable, and well-established corporation.
Examples in this category include:
• Ford’s historical library of rare photos and a comprehensive story of the Ford Motor Company;
• Boeing’s appeal to aircraft enthusiasts by giving visitors a chance to find out more about its aircraft through pictures, short articles on new features, and technical explanations;
• Hewlett-Packard’s site where everyone can check out the Palo Alto garage in which Bill Hewlett and Dave Packard started the firm.
Exclusive sponsorship
An organization may be the exclusive sponsor of an event of public interest, and use its Web site to extend its audience reach. Thus, we find on the Internet details of sponsored sporting competitions and broadcasts of special events such as concerts, speeches, and the opening of art exhibitions.
Sponsorship attractors have broad traffic potential and can attract many visitors in short periods (e.g., the World Cup). They can enhance the image of the corporation through the provision of timely, exclusive, and valuable information. However, the benefits of the Web site are lost unless the potential audience learns of its existence. This is a particular problem for short-term events when there is limited time to create customer awareness. Furthermore, the information on the Web site must be current. Failure to provide up-to-the-minute results for many sporting events could have an adverse effect on the perception of an organization.
Examples of sponsorship include:
• Texaco publishes the radio schedule for the Metropolitan Opera, which it sponsors on National Public Radio;
• Coca-Cola gives details of Coke-sponsored concerts and sporting events;
• Planet Reebok includes interviews with the athletes it sponsors. The Web site permits visitors to post questions to coaches and players.
A Web site can provide a venue for advertisers excluded from other media. For instance, cigarette manufacturer Rothmans, the sponsor of the Cape Town to Rio de Janeiro yacht race, has a Web site devoted to this sporting event.
The town hall
The traditional town hall has long been a venue for assembly where people can hear a famous person speak, attend a conference, or participate in a seminar. The town hall has gone virtual, and these public forums are found on the Web. These attractors can have broad traffic potential when the figure is of national importance or is a renowned specialist in a particular domain. Town halls have a potentially higher level of interactivity and participation and can be more engaging than sponsorship. However, there is the continuing problem of advising the potential audience of who is appearing. There is a need for a parallel bulletin board to notify interested attendees about the details of town hall events. Another problem is to find a continual string of drawing card guests.
Examples in this category are:
• Tripod, a resource center for college students, has daily interviews with people from a wide variety of areas. Past interviews are archived under categories of Living, Travel, Work, Health, Community, and Money.
• CMP Publications Inc., a publisher of IT magazines (e.g., InformationWeek ), hosts a Cyberforum, where an IT guru posts statements on a topic (e.g., Windows 2000) and responds to issues raised by readers.
The club
People have a need to be part of a group and have satisfactory relationships with others. For some people, a Web club can satisfy this need. These are places to hang out with your friends or those with similar interests. On the Internet, the club is an electronic community, which has been a central feature of the Internet since its foundation. Typically, visitors have to register or become members to participate, and they often adopt electronic personas when they enter the club. Web clubs engage people because they are interactive and recreational. Potentially, these attractors can increase company loyalty, enhance customer feedback, and improve customer service through members helping members.
Examples include:
• Snapple Beverage Company gives visitors the opportunity to meet each other with personal ads (free) that match people using attributes such as favorite Snapple flavor;
• Zima’s loyalty club, Tribe Z, where members can access exclusive areas of the site;
• Apple’s EvangeList, a bulletin board for maintaining the faith of Macintosh devotees.
An interesting extension of this attractor is the electronic trade show, with attached on-line chat facilities in the form of a MUD (multiuser dungeon) or MOO (multiuser dungeon object oriented). Here visitors can take on roles and exchange opinions about products offered at the show.
The gift shop
Gifts and free samples nearly always get attention. Web gifts typically include digitized material such as software (e.g., screensavers and utilities), photographs, digital paintings, research reports, and non-digital offerings (e.g., a T-shirt). Often, gifts are provided as an explicit bargain for dialogue participation (e.g., the collection of demographic data).
Examples include:
• Ameritech’s Claude Monet exhibition where you can download digital paintings;
• Kodak’s library of colorful, high-quality digital images that are downloadable;
• Ragu Foods offers recipes, Italian-language lessons, merchandise, and stories written by Internet users. You can e-mail a request for product coupons. There is culture, too, in the form of an architectural tour of a typical Pompeiian house;
• MCA/Universal Cyberwalk offers audio and video clips from upcoming Universal Pictures’ releases, and a virtual tour of Universal Studios, Hollywood’s new ride based on Back to the Future. There is even a downloadable coupon hidden in the area that will let you bypass the line for the ride at the theme park.
One noteworthy subspecies of the gift is the software utility or update. Many software companies distribute upgrades and complimentary freeware or shareware via their Web site. In some situations (e.g., a free operating system upgrade), this can generate overwhelming traffic for one or two weeks. Because some software vendors automatically notify registered customers by e-mail whenever they add an update or utility, such sites can have bursts of excessively high attractiveness.
The freeway intersections or portals
Web sites that provide advanced information processing services (e.g., search engines) can become n-dimensional Web freeway intersections with surfers coming and going in all directions, and present significant advertising opportunities because the traffic flow is intense–rather like traditional billboard advertising in Times Square or Piccadilly Circus. Search engines, directories, news centers, and electronic malls can attract hundreds of thousands of visitors in a day.
Some of these sites are entry points to the Web for many people, and are known as portals. These portals are massive on-ramps to the Internet. A highly successful portal, such as America Online, attracts a lot of traffic.
Within this category, we also find sites that focus upon specific customer segments and try to become their entry points to the Web. Demography (e.g., an interest in fishing) and geography (such as Finland Online’s provision of an extensive directory for Finland) are possible approaches to segmentation. The goal is to create a one-stop resource center. First movers who do the job well are likely to gain a long-term competitive advantage because they have secured prime real estate, or what conventional retailers might call a virtual location.
Examples include:
• Yahoo!, a hierarchical directory of Web sites;
• ISWorld, an entry point to serve the needs of information systems academics and students;
• AltaVista, a Web search engine originally operated by Digital (since acquired by Compaq Computers) as a means of promoting its Alpha servers.
The customer service center
By directly meeting their information needs, a Web site can be highly attractive to existing customers. Many organizations now use their Web site to support the ownership phase of the customer service life cycle. For instance, Sprint permits customers to check account balances, UPS has a parcel tracking service, many software companies support downloading of software updates and utilities (e.g., Adobe), and many provide answers to FAQs or frequently asked questions (e.g., Fuji Film). The Web site is a customer service center. When providing service to existing customers, the organization also has the opportunity to sell other products and services. A visitor to the Apple Web site, for example, may see the special of the week displayed prominently.
Summary
Organizations are taking a variety of approaches to making their Web sites attractive to a range of stakeholders. Web sites can attract a broad audience, some of whom are never likely to purchase the company’s wares, but could influence perceptions of the company, and certainly increase word-of-mouth communication, which could filter through to significant real customers. Other Web sites focus on serving one particular stakeholder–the customer. They can aim to increase market share by stimulating traffic to their site (e.g., Kellogg’s) or to increase the share of the customer by providing superior service (e.g., the UPS parcel tracking service).
Of course, an organization is not restricted to using one form of attractor. It makes good sense to take a variety of approaches so as to maximize the attractiveness of a site and to meet the diverse needs of Web surfers. For example, Tripod uses a variety of attractors to draw traffic to its site. By making the site a drawing card for college students, Tripod can charge advertisers higher rates. As Exhibit 2 illustrates, there are some gaps. Tripod is not an archive or the exclusive sponsor of an event.
Exhibit 2.: Tripod’s use of attractors
Type of attractor Tripod’s approach
Entertainment park Limited development, except for a novel concentration game in which members can test their memory by matching different types of contraceptives.
Town Hall Daily interviews on topics of likely interest to college students. Past interviews can be recalled.
Club Only members can use HereMOO , a graphical, interactive environment in which members can interact. Visitors can join Tripod by providing some basic demographic data. Also, members can build a home page.
Gift shop Every 25th new member wins a T-shirt and every 10th new member wins a bottle opener key chain. There are also weekly competitions.
Freeway intersection or portal An entry point for a number of news services (e.g., USA Today ) and stock prices provided by other Web sites.
Customer service center A travel planner and daily reminder are examples of services that members can use.
Attractiveness factors
The previous examples illustrate the variety of tactics used by organizations to make their sites attractors. There is, however, no way of ensuring that we have identified a unique set of categories. There may be other types of attractors that we simply did not recognize or uncover in our search. To gain a deeper understanding of attractiveness, we examine possible dimensions for describing the relationship between a visitor and a Web site. The service design literature, and in particular the service process matrix, provide the stimulus for defining the elements of attractiveness.
The service process matrix (see Exhibit 3), with dimensions of degree of labor intensity and interaction and customization, identifies four types of service businesses. Labor-intensive businesses have a high ratio of cost of labor relative to the value of plant and equipment (e.g., law firms). A trucking firm, with a high investment in trucks, trailers, and terminals, has low labor intensity. Interaction and customization are, respectively, the extent to which the consumer interacts with the service process and the service is customized for the consumer.
Exhibit 3.: The service process matrix (Adapted from Schmenner)
Because services are frequently simultaneously produced and consumed, they are generally easier to customize than products. A soft drink manufacturer would find it almost impossible to mix a drink for each individual customer, while dentists tend to customize most of the time, by treating each patient as an individual. The question facing most firms, of course, is to what extent they wish to customize offerings.
For many services, customization and interaction are associated. High customization often means high interaction (e.g., an advertising agency) and low customization is frequently found with low interaction (e.g., fast food), though this is not always the case (e.g., business travel agents have considerable interaction with their customers but little customization because airline schedules are set). The push for lower costs and control is tending to drive services towards the diagonal. The traditional carrier, for example, becomes a no-frills airline by moving towards the lower-left.
If we now turn to the Web, labor intensity disappears as a key element because the Web is an automated service delivery system. Hence, we focus our attention on interaction and customization and split these out as two separate elements to create the attractors grid (see Exhibit 4). Attractors require varying degrees of visitor interaction. A search engine simply requires the visitor to enter search terms. While a customer may make many searches, on any one visit there is little interaction. Just like a real entertainment park, a Web park is entertaining only if the visitor is willing to participate (e.g., play an interactive game). The degree of customization varies across attractors from low (e.g., the digital archive) to high (e.g., a customer service center).
Each of the four quadrants in the attractors grid has a label. A utility (e.g., search engine) requires little interaction and there is no customization, each customer receives the same output for identical keywords. A service center provides information tailored to the customer’s current concern (e.g., what is the balance of my account?). In mass entertainment (e.g., an entertainment park), the visitor participates in an enjoyable interaction, but there is no attempt to customize according to the needs or characteristics of the visitor. The atmosphere of a club is customized interaction. The club member feels at home because of the personalized nature of the interaction.
Exhibit 4.: Attractors grid
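The grid can be made concrete as a small lookup table. The quadrant labels follow the text; the use of discrete high/low levels is an illustrative simplification:

```python
# Map a site's degree of interaction and customization onto the four
# quadrants named in the text. The high/low cutoffs are illustrative.
QUADRANTS = {
    ("low", "low"): "utility",              # e.g., a search engine
    ("low", "high"): "service center",      # e.g., account-balance lookup
    ("high", "low"): "mass entertainment",  # e.g., an entertainment park
    ("high", "high"): "club",               # personalized interaction
}

def classify(interaction, customization):
    """Return the attractor quadrant for a given (interaction, customization) pair."""
    return QUADRANTS[(interaction, customization)]

print(classify("low", "low"))    # utility
print(classify("high", "high"))  # club
```

The "up the diagonal" push discussed next corresponds to moving a site from the ("low", "low") cell toward ("high", "high").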
In contrast to the service process matrix’s push down the diagonal, the impetus with attractors should be towards customized service–up the diagonal (see Exhibit 4). The search engine, which falls in the utility quadrant, needs to discover more about its visitors so that it can become a customer service center. Similarly, mass entertainment should be converted to the personalized performance and interaction of a club. The service center can also consider becoming a club so that frequent visitors receive a special welcome and additional service, like hotel guests who are recognized by the concierge. Indeed, commercial Internet success may be dependent on creating clubs or electronic communities.
Where possible, organizations should be using the Web to reverse the trend away from customized service by creating highly customized attractors. Simultaneously, we could see the synergistic effects of both trends. A Web application reduces labor intensity and increases customization. This can come about because the model in Exhibit 3 assumes that people deliver services, but when services are delivered electronically, the dynamics change. In this respect, the introduction of the Web is a discontinuity for some service organizations, and represents an opportunity for some firms to change the structure of the industry.
A potential of the Web is that it will make mass customization work. It will enable customized service to each customer, while serving millions of them at the same time. All customers will get more or less what they want, tailored to what is unique to them and their circumstances. This will be achieved, almost without exception, by information technology. The really important aspect of this is that by mass customization, the firm will learn from customers; more importantly, customers are more likely to remain loyal, not so much because the firm serves them so well, but because they do not want to teach another firm what’s already known about them by their current provider.
Sustainable attractiveness
The problem with many Web sites, like many good ideas, is that they are easily imitated. In fact, because the Web is so public, firms can systematically analyze each other’s Web sites. They can continually monitor the Web presence of competitors and, where possible, quickly imitate many initiatives. Consequently, organizations need to be concerned with sustainable attractiveness–the ability to create and maintain a site that continues to attract targeted stakeholders. In the case of a Web site, sustainable attractiveness is closely linked to the ease with which a site can be imitated.
Attractors can be classified by ease of imitation, an assessment of the cost and time to copy another Web site’s concept (see Exhibit 5). The easiest thing to reproduce is information that is already in print (e.g., the corporate brochure). Product descriptions, annual reports, price lists, product photographs, and so forth can be converted quickly to HTML, GIFs, or an electronic publishing format such as Adobe’s portable document format (PDF). Indeed, this sort of information is extremely common on the Web, and so bland that we consider it has minimal attractiveness.
Exhibit 5.: Ease of imitation of attractors
Ease of imitation Examples of attractors
Easy Corporate brochure
Imitate with some effort Software utilities; directory or search engine
Costly to imitate Advanced customer service application; sponsorship; valuable and rare resources
Impossible to imitate Archive with some exclusive features; well-established brand name or corporate image
There is a variety of attractors, such as utilities, that can be imitated with some effort and time. The availability of multiple search engines and directories clearly supports this contention. The original offerer may gain from being a first mover, but distinctiveness will be hard to sustain. Nevertheless, while investing in easily imitated attractors may provide little gain, firms may have to match their competitors’ offerings so as to remain equally attractive, thus echoing the notion of strategic necessity of the strategic information systems literature. Attractors are more like services than products, and service innovations generally are more easily imitated, just as the first life insurance company to offer premium discounts to nonsmokers was easily imitated (and therefore not remembered).
While a search engine or directory can be imitated, what is more difficult to copy is location or identity. Some search engines are better placed than others. For example, clicking on Netscape’s Search button gives immediate access to Netscape’s search engine, and additional clicks are required to access competitive search engines. This is like being the first gas station after the freeway exit or the only one on a section of highway with long distances between exit ramps. It is one of the best pieces of real estate on the information superhighway, and certainly Netscape should gain a high rent for this spot.
The key to imitation is whether a firm possesses valuable and rare resources and how much it costs to duplicate these resources or how readily substitutes can be found. Back-end computer applications that support Web front-end customer service can be a valuable resource, though not rare. FedEx’s parcel tracking service is an excellent example of a large-investment back-end IT application easily imitated by competitor UPS. IT investment can create a competitive advantage, but it is unlikely to be sustainable because competitors can eventually duplicate the system.
Sponsorship is another investment that can create a difficult-to-imitate attractor. Signing a long-term contract to sponsor a major sporting or cultural event can create the circumstances for a long-lived attractor. Sponsorship is a rare resource, but its very rareness may induce competitors to escalate the cost of maintaining sponsorship for popular events. Contracts eventually run their course, and failure to win the next round of the bidding war will mean loss of the attractor.
There are some attractors that can never be imitated or for which there are few substitutes. No other beverage company can have a Coke Museum–real or virtual. Firms with respected and well-known brands (e.g., Coca-Cola) have a degree of exclusiveness that they can impart to their Web sites. The organization that owns a famous Monet painting can retain exclusive rights to offer the painting as a screensaver. For many people, there is no substitute for the Monet painting. These attractors derive their rareness from the reputation and history of the firm or the object. History can be a source of enduring competitiveness and, in this case, enduring attractiveness.
This analysis suggests that Web application designers should try to take advantage of:
• prior back-end IT investments that take time to duplicate;
• special relations (e.g., sponsorship);
• special information resources (e.g., an archive);
• established brand or image (part of the enterprise’s history);
• proprietary intellectual/artistic capital (e.g., a Monet painting).
Strategies for attractors
Stakeholder analysis can be a useful tool for determining which types and forms of attractors to develop. Adapting the notion that a firm should sell to the most favorable buyers, an organization should concentrate on using its Web site to attract the most influential stakeholders. For example, it might use an attractor to communicate with employees or it may want to attract and inform investors and potential suppliers.
After selecting the targeted stakeholder group, the organization needs to decide the degree of focus of its attraction. We proffer a two-stage process for selecting the properties of an attractor (see Exhibit 6). First, identify the target stakeholder groups and make the site more attractive to these groups–the influence filter. Second, decide the degree of customization–the target refractor. For example, Kellogg’s Web site, designed to appeal to all young children, filters but is not customized. American Airlines’ Web site is an implementation of filtering and customization. The site is designed to attract prospective flyers (filtering). Frequent flyers, an important stakeholder group, have access to their mileage numbers by entering their frequent flyer number and a personal code (customization).
Exhibit 6.: Attractor strategies
Broad attraction
A broad attractor can be useful for communicating with a number of types of stakeholders or many of the people in one category of stakeholders. Many archives, entertainment parks, and search engines have a general appeal, and there is no attempt to attract a particular segment of a stakeholder group. For example, Goodyear Tire & Rubber Company’s Web site, with its information on tires, is directed at the general tire customer. A broad attractor provides content with minimal adjustment to the needs of the visitor. Thus, many visitors may not linger too long at the site because there is nothing that particularly catches their attention or meets a need. In terms of the attractors grid, broad attractors are utilities or mass entertainment.
Specialized attraction
A specialized attractor appeals to a more narrow audience. UPS, with its parcel tracking system, has decided to focus on current customers. A customer can enter a tracking number to determine the current location of a package and download software for preparing transportation documentation. A specialized attractor can be situation dependent. It may attract fewer visitors, but nearly all those who make the link find the visit worthwhile. A specialized attractor may be a utility (providing solutions to a particular class of problem) or a service center (providing service to a specific group of stakeholders), in terms of the attractors grid.
Personalized attraction
The marketer’s goal is to develop an interactive relationship with individual customers. Personalized attractors, an incarnation of that dream, can be customized to meet the needs of the individual visitor. Computer magazine publisher Ziff-Davis offers visitors the opportunity to specify a personal profile. After completing a registration form, the visitor can then select what to see on future visits. For instance, a marketing manager tracking the CAD/CAM software market in Germany can set a profile that displays links to new stories on these topics. On future visits to the Ziff-Davis site, the manager can click on the personal view button to access the latest news matching the profile. The Mayo Clinic uses the Internet Chat facility to host a series of monthly on-line forums with Clinic specialists. The forums are free, and visitors may directly question an endocrinologist, for instance. Thus, visitors can get advice on their particular ailments.
There are two types of personalized attractors. Adaptable attractors can be customized by the visitor, as in the case of Ziff-Davis. The visitor establishes what is of interest by answering questions or selecting options. Adaptive attractors learn from the visitor’s behavior and determine what should be presented. Advanced Web applications will increasingly use a visitor’s previously gathered demographic data and record of pages browsed to create dynamically a personalized set of Web pages, just as magazines can be personalized.
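The difference between adaptable and adaptive attractors can be made concrete with a short sketch. The code below is illustrative only, with hypothetical story topics and a toy visitor model: the adaptable selection filters on a profile the visitor declared (as in the Ziff-Davis example), while the adaptive selection infers interests from the visitor's recorded page views.

```python
from collections import Counter

class Visitor:
    """Toy visitor model: a declared profile plus observed browsing behavior."""
    def __init__(self, declared_interests=None):
        self.declared = set(declared_interests or [])  # adaptable: set by the visitor
        self.page_views = Counter()                    # adaptive: learned from behavior

    def record_view(self, topic):
        self.page_views[topic] += 1

def adaptable_selection(visitor, stories):
    """Show only stories matching the profile the visitor set up."""
    return [s for s in stories if s["topic"] in visitor.declared]

def adaptive_selection(visitor, stories, top_n=2):
    """Rank the visitor's most-viewed topics and show matching stories."""
    top_topics = {t for t, _ in visitor.page_views.most_common(top_n)}
    return [s for s in stories if s["topic"] in top_topics]

stories = [
    {"title": "CAD/CAM sales up in Germany", "topic": "cad-cam"},
    {"title": "Clinic hosts endocrinology forum", "topic": "health"},
    {"title": "Browser wars heat up", "topic": "internet"},
]

visitor = Visitor(declared_interests=["cad-cam"])
for topic in ["internet", "internet", "health"]:
    visitor.record_view(topic)

print([s["title"] for s in adaptable_selection(visitor, stories)])
print([s["title"] for s in adaptive_selection(visitor, stories)])
```

The adaptable view returns only the declared CAD/CAM story, while the adaptive view surfaces the internet and health stories the visitor's behavior suggests, even though the visitor never asked for them.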
One advantage of a personalized attractor is that it can create switching costs, which are not necessarily monetary, for the visitor. Although establishing a personal profile for an adaptable site imposes only a modest cost on the visitor, it can still create some impediment to switching. An adaptive Web site further raises costs because the switching visitor will possibly have to suffer an inferior service while the new site learns what is relevant to the customer. Furthermore, an organization that offers an adaptable or adaptive Web site as a means of differentiation learns more about each customer. Since the capacity to differentiate is dependent on knowing the customer, the organization is better placed to further differentiate itself. Personalized attractors can provide a double payback–higher switching cost for customers and greater knowledge of each customer.
The flexibility of information technology means that organizations can build a Web page delivery platform that will produce a variety of customized pages. Thus, it is quite feasible for the visitor to determine before each access whether to receive a standard or customized page. For example, visitors could decide to receive the standard version of an electronic newspaper or one that they tailored. This choice might go hand in hand with a differential pricing mechanism so that visitors pay for customization, just as they do with many physical products. Flexible Web server systems should make it possible for organizations to provide simultaneously both broad and customized attractors. The choice then is not between types of attractors, but how much the visitor should pay for degrees of customization.
Conclusion
Because we often learn by modeling the behavior of others, we have used metaphors and examples to illustrate the variety of attractors that are currently operational. These should provide a useful starting point for practitioners designing attractors because a variety of stimuli is one of the most important means of stimulating creative behavior. However, we have no way of verifying that we have covered the range of metaphors, and other useful ones may emerge as organizations discover innovative uses of the Web. The attractors grid is a more formal method of classifying attractors and, provided we have identified the key parameters for describing attractors, it does indicate complete coverage of the types of attractors.
The difference in the direction of the diagonal in the service process matrix and attractors grid suggests a discontinuity in the approach to delivering service. For some services, there should no longer be a reduction but an increase in customization as human-delivered services are replaced by Web service systems. Thus, this chapter provides two decision aids, metaphors and the attractor grid, for those attempting to identify potential attractors, and these challenge managers to rethink the current trend in service delivery.
The attractor strategy model is the third decision aid proffered. Its purpose is to stimulate thinking about the audience to be attracted and the degree of interactivity with it. The attractor strategy model is promoted as a tool for linking attractors to a stakeholder-driven view of strategy. In our view, attractors are strategic information systems and must be aligned with organizational goals.
Web sites have the potential for creating competitive advantage by attracting numerous visitors so that many potential customers learn about a firm’s products and services or influential stakeholders gain a positive impression of the firm. The advantage, however, may be short-lived unless the organization has some valuable and rare resource (e.g., sponsorship of a popular sporting event) that cannot be duplicated. A valuable, but not necessarily rare, resource for many organizations is the current IT infrastructure. Firms should find it useful to re-examine their existing databases to gauge their potential for highly attractive Web applications. Building front-end Web applications to create an attractor (e.g., customer service) can be a quick way of capitalizing on existing investments, but competitors are likely to be undertaking the same projects. IT infrastructure, however, is not enough to create a sustained attractor. The key assets are managerial IT skills and the recognition that information itself can create competitive advantage. Sustainable attractiveness depends on managers understanding what information to deliver and how to present it to stakeholders.
Cases
Sviokla, J. 1996. Edmund’s–www.edmunds.com. Harvard Business School, 9-397-016.
1. This chapter is based on Watson, R. T., S. Akselsen, and L. F. Pitt. 1998. Attractors: building mountains in the flat landscape of the World Wide Web. California Management Review 40 (2): 36-56.
Introduction
Communication is the very heart of marketing, and for years companies have fashioned communication strategies based on print, radio, and TV media to broadcast their message, but times are changing. In the age of the Internet, Benetton uses QuickTime VR to establish the atmosphere of its retail outlets; ABN Amro has a banner advertisement directly behind the goal at an Internet soccer game; Sony provides downloadable audio clips of its latest CDs; and Voice of America makes available, via FTP, software for predicting high-frequency broadcast propagation. These companies recognize that the Internet is an all-purpose communication medium for interacting with a wide variety of stakeholders. They know they must manage their brands and corporate image in cyberspace. They also know that the Internet is not just the Web, but a range of technologies that, in combination, can be a potent marketing strategy.
As organizations stampede to the Internet, they need a systematic way to examine opportunities and relate them to available Internet tools. In particular, they need a cohesive marketing strategy for exploiting Internet technologies. Integrated Internet Marketing (I2M) is a structured approach to combining marketing strategy with Internet technology. I2M promotes creation of a strategy that synergistically exploits the range of Internet technologies (e.g., text, audio, video, and hyperlinking) to achieve marketing goals.
This chapter, abundantly illustrated with instances of how companies are using the Internet to market wisely, presents the I2M model. A concluding case study demonstrates how one company, Benetton, is fashioning a coherent Internet-based strategy.
Internet technology for supporting marketing
To understand the potential of Internet marketing, knowledge of the different Internet tools is necessary. For convenience, some of these tools are grouped together and treated collectively because of common features (see Exhibit 1).
Exhibit 1.: Internet technologies
Asynchronous text: E-mail is generally used for one-to-one and one-to-few communications. A bulletin board (in the form of a newsgroup or listserv) can handle one-to-many and many-to-many communications. Examples: Cathay Pacific uses a one-to-many bulletin board to advise prospective customers of special airfares. Claris uses bulletin boards in the many-to-many mode to support the exchange of ideas among customers and support staff.
Synchronous text: Chat enables several people to participate in a real-time text-based discussion. A chat session is conducted on a channel, and those connected to the channel receive all messages broadcast. Example: The American Booksellers Association uses chat to interview authors.
File transfer: File transfer protocol (FTP) permits the exchange of files across the Internet. Example: Oracle uses FTP to distribute a 90-day trial version of Power Objects, a software product.
Telnet: Telnet enables an authorized user to connect to and run programs on another computer. Example: The Library of Congress Information System (LOCIS) is accessible using Telnet.
Audio: Audio files are either downloaded and then played, or played as they are downloaded (so-called streaming audio). Example: ABC uses Progressive Networks’ RealAudio to deliver a news bulletin.
Video: Video files, like audio, are either downloaded and then played, or played as they are downloaded (so-called streaming video). Example: PBS uses VDOnet Corp. technology to broadcast samples of its programs.
Newswire: An electronic newswire broadcasts stock prices, sports scores, news, weather, and other items. Example: Companies are using PointCast to reach employees with internal news.
Search engine: A search engine supports finding information on the Web. Simple engines find Web pages. More advanced engines locate information based on defined attributes (e.g., cheapest model Y of brand X camera). Example: Internet Air Fares allows visitors to search for the cheapest airfares on a particular route.
Virtual reality: The visitor can look around a location through a full 360 degrees, as well as zoom in and out. Example: Honda uses QuickTime VR to enable prospective customers to view its latest models, both inside and outside.
The Web as an integrating technology
The Web is the umbrella technology that can provide a single interface to each of the technologies described in Exhibit 1. The hypertext feature of the Web enables links to be created within a document or to another document anywhere on the Web. This supports rapid navigation of Web sites. The multimedia capability means that a Web page can display graphics and video, play sound and animations, and provide support for on-line forms and multiple windows. The Web is the means by which a company can use a variety of Internet tools to interact with customers and other influential stakeholders. It can shape and direct the dialogue between an organization and its stakeholders. To a large extent, an organization’s Web site defines the organization–establishing an enduring image in the minds of stakeholders. We maintain that organizations need a cohesive approach for using Internet technologies for communication.
Integrated Internet Marketing
The interactive and multimedia capabilities of the Web, combined with other Internet facilities such as e-mail’s support for personal and mass communication, present a range of tools for interacting with customers. Furthermore, the Web can provide an interface to back-end applications (e.g., databases and expert systems technology). Consequently, the Internet offers an excellent basis for a variety of marketing tactics, which permits the development of a model for Integrated Internet Marketing (I2M). The concepts of integrated Internet communication apply to all forms of communication, not just that between seller and buyer.
I2M (see Exhibit 2) is the coordination of Internet facilities to market products and services, shape stakeholders’ (customers, in particular) attitudes, and establish or maintain a corporate image. The central idea of I2M is that an organization should coordinate its use of the Internet to develop a coherent, synchronized marketing strategy.
Exhibit 2.: Integrated Internet Marketing
The Web offers a unique way to shape corporate image because it provides a means of communicating with so many stakeholder groups. For example, most organizations are interested in the ambiance or atmospherics that their establishment creates for the customer, where the term atmospherics refers to an organization’s retail environment. The Web provides an opportunity for customers to experience an organization’s atmospherics without actually being there (as the case later in this chapter demonstrates).
In the same way, the Web provides new opportunities in terms of signs, word of mouth, personal experiences, and public relations. Traditional marketing theory and practice have discovered that it is very difficult to manage a corporate image so that the identical image is communicated to every stakeholder group. The Web provides a powerful tool to assist managers in communicating a unified image.
The I2M matrix
The I2M matrix (see Exhibit 3) can be used by firms to search systematically for opportunities for using the Internet to support marketing strategies. The concept is that each cell of the matrix is a focal point for brainstorming. An interactive version of the matrix can be used to stimulate thinking by showcasing how organizations are using a particular cell. Thus, clicking on the cell at the intersection of Atmospherics and Asynchronous text would jump to a page containing links to organizations using asynchronous text (e.g., a bulletin board) to establish atmosphere. Apple, an example for this cell, has established a bulletin board, EvangeList, to keep the faith of Macintosh aficionados. Postings to this bulletin board evoke an image of a feisty Braveheart valiantly fighting the Sassenachs (also known as Intel and Microsoft).
Exhibit 3.: The I2M matrix
Columns: Asynchronous text, Synchronous text, File transfer, Telnet, Audio, Video, Newswire, Search engine, Virtual reality
Rows: Atmospherics, Employees, Litter, News stories, Signs, Personal experiences, Advertising, Word of mouth, Public relations, Products and services, Popular culture
Because we often learn by modeling the behavior of others, linking I2M cells to existing Web examples assists managers in identifying opportunities for their organization. Furthermore, by providing a variety of examples for each cell, creative behavior is aroused because each example can be a different stimulus.
News stories
Traditionally, organizations have relied on news media and advertisements to transmit their stories to the customer. Naturally, the use of intermediaries can pose problems. For example, news stories, not reported as envisaged, can result in the customer receiving a distorted, unintended message. When dealing with the Pentium hullabaloo, Intel CEO Andy Grove used the Internet to communicate directly with customers by posting the company’s press release on its Web page, as does Reebok.
Advertising
The hyperlink, a key feature of the Web, permits a reader to jump to another Web site by clicking on a link. An advertiser can place hyperlink signs or logos at relevant points on the Web so that interested readers may be enticed to link to the advertiser’s Web site. Hyperlinks are the billboards of the information highway. They are most valuable when they appear on Web pages read by many potential consumers, such as CNN or USA Today. As it is very easy to record the number of links from one page to another, it is relatively simple for advertisers to place a value on a particular hyperlink and for the owners of these pages to demand an appropriate rent.
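Because every click-through can be logged, pricing a hyperlink placement is straightforward to sketch. The snippet below is a hypothetical illustration (the referrer names, click counts, and per-click rate are invented): it counts clicks by referring page and computes a simple pay-per-click rent, working in whole cents to avoid floating-point rounding.

```python
from collections import Counter

class LinkTracker:
    """Counts click-throughs from each referring page."""
    def __init__(self):
        self.clicks = Counter()

    def record_click(self, referrer):
        self.clicks[referrer] += 1

    def rent_cents(self, referrer, rate_cents_per_click=5):
        """Pay-per-click valuation of a hyperlink placement, in whole cents."""
        return self.clicks[referrer] * rate_cents_per_click

tracker = LinkTracker()
for _ in range(1200):
    tracker.record_click("cnn.com")       # a heavily trafficked placement
for _ in range(300):
    tracker.record_click("usatoday.com")  # a lighter one

print(tracker.rent_cents("cnn.com"))      # 6000 cents, i.e., USD 60
print(tracker.rent_cents("usatoday.com")) # 1500 cents, i.e., USD 15
```

The owner of the heavily trafficked page can thus demand a proportionally higher rent, exactly as the text suggests.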
Atmospherics
A Web site is the information age’s extension of society’s long history of developing attractive artificial environments. It parallels the Greek temple and Gothic cathedral of past centuries. These buildings were designed to evoke certain feelings within visitors (e.g., reverence). Similarly, a Web site should achieve a specific emotional effect on the visitor that prolongs browsing of a site.
Alberto’s nightclub in Mountain View, California, stimulates interest by creating an aura of excitement and action. The visual on its home page exudes the ethos of the club. The Web provides an opportunity for customers to experience an organization’s atmospherics without actually being there.
Employees
E-mail and bulletin boards have become effective methods of communicating with employees, particularly for highly dispersed international organizations. Because policy changes can be distributed inexpensively and instantly, the organization can gain a high degree of consistency in its communications with employees and other stakeholders. Instead of an in-house newsletter, an intranet can be used to keep employees informed of company developments. Previous issues of the newsletter can be made available, perhaps via a search engine, and there can be links to other related articles. For example, a story on new health benefits can have links to the firm’s benefits policy manual.
Use of e-mail and the Web should lead to consistent internal communication, a necessary prerequisite of consistent external communication with customers, suppliers, shareholders, and other parties. A well-informed employee is likely to feel greater involvement with the organization and more able to perform effectively.
Litter
The discarded Big Mac wrapper blowing across the highway does little for McDonald’s corporate image. On the Internet, an advertisement arriving along with other e-mail may be perceived by some readers as highly offensive electronic pollution. Sending junk e-mail, also known as spamming, has aroused the ire of many Internet users, and America Online has taken action to block e-mail from certain firms and accounts. Just as offensive to some Web surfers are large or inappropriate graphics. These can be time polluters–wasting time and bandwidth as they load. Organizations need to ensure that their Internet communications are not offensive or time-wasting to visitors.
The Web makes it easy for unhappy consumers to create a Web site disparaging a company or product. A disgruntled Ford owner has created a Web site for the Association of Flaming Ford Owners. Consequently, firms must monitor such sites and Internet traffic about them to head off PR disasters.
Signs
Most organizations prominently display their logos and other identifying signs on their buildings, packaging, and other visual points of customer contact. There has been a clear transfer of this concept to the Web. A corporate logo frequently is visually reinforced by placing it on each Web page.
Organizations can be extremely creative in their use of signs. Reykjavik Advertising, with a collection of pages for a variety of Icelandic clients, makes clever use of the puffin, Iceland’s national bird. Reykjavik Advertising’s so-called traffic puffin indicates movement relative to a page hierarchy: back, up, or forward. It is an interesting alternative to the bland arrows of a Web browser. The traffic puffin appears on each page. After viewing the pages, a clear impression of the resourceful use of the puffin remains. A new medium creates opportunities for reinventing signs.
Animation is another way firms can reinvent their signs. Manheim Auctions, the Atlanta-headquartered car auction firm, uses animation to reinforce recognition of its corporate logo. The inner part of its circular logo rotates. Animation catches the eye and makes the visitor more aware of the Manheim logo.
Personal experience
Customers often prefer to try products before buying, and some software providers take advantage of this preference. Qualcomm widely distributes a freeware version of Eudora Light, an e-mail package. Customers who adopt the freeware version can easily upgrade to a commercial version, which offers some appealing additional features. In Qualcomm’s case, the incentive for the customer to upgrade is increased functionality.
Another approach is taken by game maker Storm Impact, which distributes TaskMaker as freeware. The full functionality of the game is available to play the first two tasks; however, the next eight tasks require payment of USD 25. On receipt of payment, a registration code to unlock the remaining tasks is e-mailed so that the next task can be tackled immediately. These examples support the notion of sampling–something which has previously been very difficult in the case of services and less tangible products.
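One common way to implement such an unlock scheme is to derive the registration code from the purchaser's e-mail address with a keyed hash, so the product can verify the code offline without storing a customer list. The sketch below is illustrative only, not Storm Impact's actual mechanism; the secret key and e-mail address are invented.

```python
import hashlib
import hmac

SECRET = b"vendor-private-key"  # hypothetical key known only to the vendor

def issue_code(email):
    """Generated by the vendor and e-mailed to the customer after payment."""
    digest = hmac.new(SECRET, email.lower().encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

def unlock(email, code):
    """Run inside the product to verify the code and unlock the remaining tasks."""
    return hmac.compare_digest(issue_code(email), code)

code = issue_code("player@example.com")
print(unlock("player@example.com", code))          # True: remaining tasks unlocked
print(unlock("player@example.com", "wrong-code"))  # False: still locked
```

Because the code is a deterministic function of the e-mail address and the vendor's secret, the same code can be re-issued to a customer who loses it, and a code cannot be forged without the secret.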
Word of mouth
Gossip and idle chatter around the water fountain are now complemented by e-mail and bulletin boards. The impact of these electronic media can be quite profound, as Intel discovered when the flaw in the Pentium chip was revealed in a message on the Internet. The incident was quickly conveyed to millions of Pentium customers, who bombarded Intel with e-mail. Word of mouth does not adequately describe the situation when a single electronic message can reach hundreds of thousands of people in a matter of minutes. It’s more like a tsunami gathering momentum and crashing on the corporate doorstep before managers realize even a ripple of discontent. Bad news travels extremely fast on the Internet. News is not always bad; Land’s End publishes customers’ testimonials about its products.
Corporations need to monitor bulletin boards that discuss their products and those of their competitors. As a result, they can quickly detect emerging problems and respond to assertions that may be incorrect. Eavesdropping on customers’ conversations is an important source of market intelligence, and it is becoming an important element of public relations.
Public relations
When IBM announced its takeover bid for Lotus, it used the Internet to reach its stakeholders, media, and Lotus employees. Once the financial markets had been notified, IBM’s Web page featured the letter from IBM CEO Louis Gerstner to Jim Manzi, Lotus CEO. Also included were the internal memo to IBM employees, press release, audio clip of Gerstner explaining the offer, and a transcript of Gerstner’s 45-minute news conference. By the end of the day, 23,000 people had accessed the Web page–about double the normal traffic. In contrast, Lotus’s page had a four-paragraph statement from Manzi, and a company spokesperson said Lotus would respond when it had more to say about the offer.
As IBM demonstrated, the Web can be an effective public relations tool. The advantage is that a company can immediately transmit its message to stakeholders without relying on intermediaries, such as newspapers and TV, to redistribute messages. Of course, mass mailing is also a method for directly reaching stakeholders, but a letter lacks the recency and multimedia features of the Web.
Products and services
There are now thousands of firms using the Internet to deliver products and services. Software companies are selling software directly from Web sites (e.g., Adobe sells fonts) and many companies deliver services via their Web site (e.g., UPS permits customers to track parcels).
Computer firms struggle to solve hardware and software problems for a multitude of customers. This is a problem that can easily spiral out of control. One approach is to let customers solve each other’s problems. As sure as there is one customer with a problem, there is another who has solved it or who would love the opportunity to tackle a puzzler. If customers can be convinced to solve each other’s problems, then this creates the possibility of lowering the cost of customer service and raising customer satisfaction levels.
Thus, the real task is to ensure that the customer with the problem finds the customer with the solution. Apple, like many hardware and software firms, has a simple system for improving customer service. It uses a listserv to network customers using similar products. As a result, the customers support each other, reducing the number of people that Apple has to support.
Popular culture
Firms have discovered that popular culture (including movies, songs, and live performances) can be used to publicize their goods. As the Internet develops, clearly labeled products and ads are appearing in virtual network games. A popular MUD, Genocide, already features well-known fast-food stores. Goalkeeper, an Internet soccer simulator, lets visitors kick a soccer ball to try to beat the goalkeeper. The background of the game, a soccer stadium, includes typical sports arena advertising, including a banner for ABN AMRO, one of the world’s top 20 banks.
Conclusion
As transactions are increasingly conducted electronically, a firm’s Web site will be its defining image and the main point of interaction with many stakeholders. Consequently, firms must ensure that they take full advantage of the technology available to maximize their impact. A systematic approach, using the I2M matrix and modeling the behavior of others, provides a framework for designing and implementing an effective Web site that takes full advantage of the Internet tools. Integrated use of this technology, however, is not enough. An enterprise, with a jumble of different page layouts and icons, communicates disorganization. The collective image of the Web site must communicate the overall integration and message of the organization. Not only must use of Internet tools be integrated, but also a corporation’s entire Web presence must be cohesive in order to communicate a consistent message to stakeholders.
Cases
Subirana, B., and S. Palavecino. 1998. Amadeus: starting on the Internet and electronic commerce. Barcelona, Spain: IESE. ECCH 198-024-1.
Introduction
The Web has attracted a great deal of attention in recent years–perhaps most significantly in the influential business press and popular culture. Uniform Resource Locators (URLs) appear in many advertisements, and Business Week devotes a page to listing the URLs of its advertisers.[1]
Reporting on the Web is currently fascinating to general readers and listing URLs is helpful to consumers. However, systematic research is required to reveal the true nature of commerce on the Web. This is true particularly from the perspective of the Web in marketing communication, and especially so for the Web as an advertising medium or tool. In this chapter, we provide a brief overview of the Web as a phenomenon of the late 20th century; then we explore the Web as an advertising medium, using established theoretical models of consumer and industrial buying behavior; finally, we develop a model of Web conversion efficiency–its power to move the customer from being a passive Internet surfer to an interactive user of the medium.
The Internet and the World Wide Web
Cyberspace, or to give it its less clichéd name, the Internet (the net), is a new medium based on broadcasting and publishing. However, unlike traditional broadcast media, it facilitates two-way communication between actors; unlike most personal selling (telemarketing being the obvious exception), it is not physically face to face, but neither is it time-bound. The medium possesses interactivity–it has the facility for individuals and organizations to communicate directly with one another regardless of distance or time. The Web has introduced a much broader audience to the net. Furthermore, it allows anyone (organization or individual) to have a 24-hour-a-day presence on the Internet.
The Web is not a transient phenomenon. It warrants serious attention by business practitioners. Statistics support this, although one astute observer recommends strongly that all estimates be made in pencil only, as the growth is so rapid. No communication medium or electronic technology, not even fax or personal computers, has ever grown as quickly.
An electronic trade show and a virtual flea market
While most academics and practitioners might be starting to think about, and even acknowledge, the importance of a Web site as a marketing communication tool, to date little systematic research has been conducted into the nature and effectiveness of this medium. Most of the work done so far has been of a descriptive nature–“what the medium is,” using such surrogate measures as the size of the Web audience to indicate its potential. While these endeavors might add to our general understanding, they do not address more specific issues of concern, such as the communication objectives that advertisers might have, and how they expect Web sites to achieve these objectives. Neither do these studies assess the effectiveness of this new medium from the perspective of the recipient of the message (the buyer , to use the broadest marketing term).
The Web is rather like a cross between an electronic trade show and a community flea market. As an electronic trade show , it can be thought of as a giant international exhibition hall where potential buyers can enter at will and visit prospective sellers. Like a trade show, they may do this passively, by simply wandering around, enjoying the sights and sounds, pausing to pick up a pamphlet or brochure here, and a sticker, key ring, or sample there. Alternatively, they may become vigorously interactive in their search for information and want-satisfaction, by talking to fellow attendees, actively seeking the booths of particular exhibitors, carefully examining products, soliciting richer information, and even engaging in sales transactions with the exhibitor. The basic ingredients are still the same. As a flea market , it possesses the fundamental characteristics of openness, informality, and interactivity, a combination of a community and a marketplace or marketspace. A flea market is an alternative forum that offers the consumer an additional search option, which may provide society with a model for constructing more satisfying and adaptive marketplace options. The Web has much in common with a flea market.
The central and fundamental problem facing conventional trade show and flea marketers is how to convert visitors, casually strolling around the exhibition center or market, into customers at best, or leads at least. Similarly, a central dilemma confronting the Web advertiser is how to turn surfers (those who browse the Web) into interactors ( attracting the surfers to the extent that they become interested; ultimately purchasers; and, staying interactive, repeat purchasers). An excellent illustration of a Web site as electronic trade show or flea market is to be found at the site established by Security First Network Bank, which was one of the first financial services institutions to offer full-service banking on the Internet. The company uses the graphic metaphor of a conventional bank to communicate and interact with potential and existing customers, including an electronic inquiries desk, electronic brochures for general information, and electronic tellers to deal with routine transactions. Thus, the degree of interaction is dependent on the individual surfer–those merely interested can take an electronic stroll through the bank, while those desiring more information can find it. Customers can interact to whatever degree they wish–transfer funds, make payments, write electronic checks, talk with electronic tellers (where they are always first in line), and see the electronic bank manager for additional requests, complaints, and general feedback.
We have taken the notion of trade shows as a marketing communication tool and extended it to the possible role of the Web site as an advertising medium. This is speculated upon, in the context of both the buying and selling process stages, and in both industrial and consumer contexts, in Exhibit 1. The relative (to mass advertising and personal selling) communication effectiveness of a Web site is questioned graphically in Exhibit 1, although without prior quantitative data, it is mere conjecture at this stage to posit a profile. By simply placing a question mark between mass advertising and personal selling in the figure, we tempt the reader to contemplate the communication profile of the Web. Industrial buying can be thought of as the series of stages in the first column of Exhibit 1. The buyer's information needs differ at each stage, as do the tasks of the marketing communicator. In column 2, a model of the steps in the consumer decision-making process for complex purchases is shown, and it will be seen that these overlap the steps in the buying-phases model to a considerable extent. The tasks that confront the advertiser and the seller in both industrial and consumer markets can similarly be mapped against these stages, through a series of communication objectives. This is shown in column 3. Each of these objectives requires different communication tasks of the seller, and these are similarly outlined in column 4. So, for example, generating awareness of a new product might be most effectively achieved through broadcast advertising, while closing a sale would best be achieved face to face, in a selling transaction. Most marketers, in both consumer and business-to-business markets, employ a mix of communication tools to achieve various objectives in the marketing communication process, judiciously combining advertising and personal selling.
The relative cost-effectiveness of advertising and personal selling in performing marketing communication tasks depends on the stage of the buying process, with personal selling becoming more cost effective the closer the buyer gets to the latter phases in the purchasing sequence–this is shown in column 5. A central question then is where does a Web site fit in terms of communication effectiveness? Again, rather than profile this, we leave it to the reader.
Exhibit 1.: Buying and selling and Web marketing communication
At this point, we re-emphasize that the Web is still in its infancy, which means that no identifiable attempts have so far appeared in scholarly journals that methodically clarify its anticipated role and performance. This deficiency probably stems from the fact that few organizations or individuals have even begun to spell out their objectives in operating a Web site, let alone quantified them. This is not entirely unexpected–unlike expenditure on broadcast advertising, or the long-term financial commitment to a sales force, the establishment of a Web site is a relatively inexpensive venture, from which retraction is easy and rapid. It is not unlikely that many advertisers are on the Web simply because it is relatively quick and easy, and because they fear that the consequences of not having a presence will outweigh whatever might be the outcomes of a hasty, ill-conceived one. This lack of clear, quantified objectives and understanding, together with the absence of a unified framework for evaluating performance, has compelled decision makers to rely on intuition, imitation, and advertising experience when conceptualizing, developing, designing, and implementing Web sites.
These two concerns–the lack of clear or consistent objectives and the relationship of those objectives to the variables under the control of the firm–are the issues that engage us here. We propose a more direct assessment of Web site performance using multiple indices, such that differing Web site objectives can be directly translated into appropriate performance measures. We then explicitly link these performance measures to tactical variables under the control of the firm and present a conceptual framework relating several of the most frequently mentioned objectives of Web site participation to measures of performance associated with Web site traffic flow. We then develop a set of models linking the tactical variables to six performance measures that Web advertisers and marketers can use to assess a Web site's effectiveness and its performance against objectives. Finally, we discuss normative implications and suggest areas for further development.
The role of the Web in the marketing communication mix
Personal selling is usually the largest single item in the industrial marketing communications mix, while broadcast advertising is typically the dominant means by which marketers reach consumers. Where do Web sites fit? The Web site is something of a cross between direct selling (it can engage the visitor in a dialogue) and advertising (it can be designed to generate awareness, explain/demonstrate the product, and provide information–without interactive involvement). It can play a cost-effective role in the communication mix in the early stages of the buying process–need recognition, development of product specifications, and supplier search–but can also be useful as the buying process progresses toward evaluation and selection. Finally, the site is also cost-effective in providing feedback on product/service performance. Web sites might typically be viewed as complementary to the direct selling activity by industrial marketers, and as supplementary to advertising by consumer marketers. For example, Web sites can be used to:
• gain access to previously unknown or inaccessible buying influences. Cathay Pacific Airlines uses a Web site to interview frequent international airline flyers, and determine their preferences with regard to airline, destination, airport, and even aircraft. Much of the active ticket purchasing is not normally done by these individuals, but by a secretary or personal assistant acting on their behalf.
• project a favorable corporate image. Guinness allows surfers to download from its Web site its latest television commercial, which can then be used as a screen saver. While the advertiser has not made the objectives of this strategy public, conceivably the approach builds affinity with the corporate brand as fun involvement, while the screen saver provides a constant reminder of the advertising message.
• provide product information. Many business schools are now using their Web sites to provide information on MBA and executive programs–indeed, there is now even an award to the business school judged to have the most effective Web site in North America. Similarly, Honda uses its Web site to give very detailed information about its latest models. Not only can the surfer download video footage and sound about the latest Honda cars, but by clicking the mouse on directional arrows, can get different visual perspectives of the vehicles, both from outside and inside the car.
• generate qualified leads for salespeople. The South African life assurance company SANLAM uses its Web site to identify customer queries, and if needed, can direct sales advisers to these.
• handle customer complaints, queries, and suggestions. Software developers such as Silverplatter are using their Web sites as a venue for customers to voice complaints and offer suggestions about the product. While this allows customers a facility to let off steam, it also allows the marketer to appear open to communication, and perhaps more importantly, to identify and rectify commonly occurring problems speedily.
• allow customers access to the firm's system through its Web site. FedEx's surprisingly popular site allows customers to track their shipments traveling through the system by typing in the package receipt number. “The Web is one of the best customer relationship tools ever,” according to a FedEx manager.
• serve as an electronic couponing device. A company called E-Coupon.com targets college students, because they possess two important characteristics–they are generally very computer literate and also need to save money. The site features lists of participating campus merchants, including music stores, coffee houses, and pharmacies. Students click on shop names to get a printable picture of a coupon on their computer screen, which they can take to shops for discounts or free samples; in return, they fill out a demographic profile and answer questions about product use.
In summary, different organizations may have different advertising and marketing objectives for establishing and maintaining a Web presence. One organization might wish to use the Web as a means of introducing itself and its new products to a potentially wide, international audience. Its objectives could be to create corporate and product awareness and inform the market. In this instance, the Web site can be used to expedite the buyer's progress through phases 1 and 2 in Exhibit 1. On the other hand, if the surfer knows the firm and its products, then the net dialogue can be used to propel this customer down to the lower phases in the buying progression. Another firm may be advertising and marketing well-known existing products, and its Web site objectives could be to solicit feedback from current customers as well as inform new customers.
Thus, Web sites can be used to move customers and prospects through successive phases of the buying process. They do this by first attracting surfers, making contact with interested surfers (among those attracted), qualifying/converting a portion of the interested contacts into interactive customers, and keeping these interactive customers interactive. Different tactical variables, both directly related to the Web site as well as to other elements of the marketing communication mix, will have a particular impact at different phases of this conversion process. For example, hot links (electronic links which connect a particular site to other relevant and related sites) may be critical in attracting surfers; once attracted, however, it may be the level of interactivity on the site that is critical to making these surfers interactive. This kind of flow process is analogous to that for the adoption of new packaged goods (market share of a brand = proportion aware × proportion of new buyers given awareness × repeat purchasing rate given awareness and trial) and in organizational buying (where the probability of choice is conditional on variables such as awareness, meeting specifications, and preference).
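The multiplicative flow logic above can be made concrete with a small worked example. All the rates below are hypothetical illustrations, not figures from the chapter:

```python
# Hypothetical funnel: the overall outcome is the product of per-stage
# conversion rates, mirroring the packaged-goods analogy in the text
# (aware x trial-given-aware x repeat-given-trial).
def overall_rate(stage_rates):
    """Multiply per-stage conversion rates into one end-to-end rate."""
    result = 1.0
    for r in stage_rates:
        result *= r
    return result

# Illustrative figures only.
aware = 0.40              # proportion of the market aware of the brand
trial_given_aware = 0.25  # proportion of aware buyers who try it
repeat_given_trial = 0.50 # proportion of triers who repurchase

share = overall_rate([aware, trial_given_aware, repeat_given_trial])
print(round(share, 3))  # 0.05
```

The same multiplication applies to the Web funnel: a weak rate at any single stage depresses the end-to-end result, which is why the chapter ties distinct tactical variables to each stage.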
Web marketing communication: a conceptual framework
Based on the above, we model the flow of surfer activity on a Web site as a six-stage process, which is shown in Exhibit 2. The variables and measures shown in Exhibit 2 are defined in Exhibit 3.
Exhibit 2.: A model of the conversion process on the Web
Exhibit 3.: Web efficiency variables

Variable  Meaning
Q0  Number of people with Web access
Q1  Number of people aware of the site
Q2  Number of hits on the site
Q3  Number of active visitors to the site
Q4  Number of purchases
Q5  Number of repurchases
Not all surfers on the Web are part of the relevant target audience for a given firm. Surfers fall into one of two groups:

• those potentially interested in the organization (a proportion α),

• those not interested (a proportion 1 − α).

The attractiveness of having a Web site for the organization depends on Q0α, the number of potentially interested surfers on the Web (where Q0 is the net size measured in terms of surfers). The first stage of the model represents the flow by which surfers on the net become aware of the firm's Web site; only a fraction of the potentially interested surfers (Q0α) becomes aware of it. This fraction is the awareness efficiency (η0) of the Web site. The awareness efficiency measures how effectively the organization is able to make surfers aware of its Web site. Advertisers and marketers can employ reasonably common and well-known awareness-generating techniques to affect this, such as including the Web site address in all advertising and publicity, on product packaging, and on other corporate communication materials, such as letterheads, business cards, and brochures.

The awareness efficiency index is:

η0 = Q1 / (Q0α)
The second stage of the model concerns attempts to get aware surfers to find the Web site. We distinguish between active and passive information seekers. Active seekers (Q1a) are those who intentionally seek to hit the Web site, whereas passive seekers (Q1b) are those aware surfers whose primary purpose in surfing was not necessarily to hit the Web site. Only a fraction of the aware surfers visit the firm’s Web site. The second stage of the model thus represents the locatability/attractability efficiency (η1) of the Web site. This measures how effectively the organization is able to convert aware surfers into Web site hits, either by facilitating active seeking behavior (surfers who actively look for the Web site), or by attracting passive seekers (not actively looking for the Web site, but not against finding it).
Enabling active seekers to hit the Web site easily can be achieved by maximizing the locatability of the site–such as using multiple sites (e.g., Web servers in the U.S., Europe, and Asia), names for the site that can be easily guessed (e.g., www.apple.com), and enhancing server speed and bandwidth (the number of visits which can be handled concurrently). Tools to attract passive seekers include using a large number of relevant hot links (e.g., EDS has a link from ISWorld, the Web site for information systems academics, to its Web site), embedding hot links in sponsored Web sites (e.g., IBM sponsors the Wimbledon Tennis Tournament Web site), and banner ads on search engines. We summarize the locatability/attractability index as:

η1 = Q2 / Q1

where hits (Q2) refers to the number of surfers who alight on the Web site.
At this stage, it should be apparent that there is a difference between a hit and a visit . Merely hitting or landing on a site does not mean that the surfer did anything with the information to be found there–the surfer might simply hit and move on. A visit, as compared to a hit, implies greater interaction between the surfer and the Web page. It may mean spending appreciable time (i.e., > x minutes) reading the page. Alternatively, it could be completing a form or querying a database. Although the operational definition of a visit is to some extent dependent on the content and detail on the page, the overriding distinctive feature of a visit is some interaction between the surfer and the Web page.
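The hit-versus-visit distinction described above can be operationalised as a simple rule over server-log records. The record fields, the threshold standing in for the chapter's unspecified "x minutes," and the interaction flags are all assumptions for illustration:

```python
# Classify a hit as a "visit" if the surfer interacted with the page:
# stayed longer than a threshold, completed a form, or queried a database.
VISIT_THRESHOLD_MINUTES = 2  # the chapter leaves "x minutes" open; assumed here

def is_visit(minutes_on_page, completed_form=False, queried_database=False):
    return (minutes_on_page > VISIT_THRESHOLD_MINUTES
            or completed_form
            or queried_database)

# Three hypothetical log records.
hits = [
    {"minutes_on_page": 0.2},                          # hit-and-move-on
    {"minutes_on_page": 5.0},                          # lingered: a visit
    {"minutes_on_page": 0.5, "completed_form": True},  # interacted: a visit
]
visits = sum(is_visit(**h) for h in hits)
print(visits)  # 2
```

Whatever rule is chosen, the key point from the text survives: the defining feature of a visit is some interaction between the surfer and the page, not the mere landing.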
The next phase of the model concerns the efficiency and ability of the Web site in converting a hit into a visit. The third stage of our model represents the contact efficiency (η2) of the Web site. This measures how effectively the organization transforms Web site hits into visits. The efforts of the advertiser at this stage should be focused on turning a hit into a worthwhile visit. Thus, the hit should be interesting, hold the visitor's attention, and persuade them to stay awhile to browse. The material should be readable–the concept of readability is a well-established principle in advertising communication. Visual effects should be appealing–sound and video can hold interest as well as inform. The possibility of gaining something, such as winning a prize in a competition, may be effective. The interface should be easy and intuitive. We summarize the contact efficiency index as:

η2 = Q3 / Q2
Once the visitor is engaged–in real time–in a visit at the Web site, he or she should be able to do one or both of the following:
• establish a dialogue (at the simplest level, this may be signing an electronic visitors’ book; at higher levels, this may entail e-mail requests for information). The visitors’ book at the Robert Mondavi Wineries’ Web site not only allows visitors to complete a questionnaire and thus receive very attractive promotional material, including a recipe brochure, it also allows the more inquisitive visitor to ask specific questions by e-mail. It is important to note that it is feasible to establish the dialogue in a way that elicits quite detailed information from the visitor–for example, by offering the visitor the opportunity to participate in a competition in exchange for information in the form of an electronic survey, or by promising a reward for interaction (the recipe booklet in the preceding example).
• place an order. This may be facilitated by ensuring simplicity of the ordering process, providing a secure means of payment, as well as options on mode of payment (e.g., credit card, check, electronic transfer of funds). Alternative ordering methods might also be provided (e.g., telephone, e-mail, or a postal order form that can be downloaded and printed). For example, the electronic music store CDnow offers a huge variety of CDs and other items such as tapes and video cassettes. It provides visitors with thousands of reviews from the well-respected All-Music Guide as well as thousands of artists’ biographies. A powerful program built into the site allows a search for recordings by artist, title, and key words. It also tells about an artist’s musical influences and lists other performers in the same genre. Each name is hotlinked so that a mouse click connects the visitor to even more information. CDnow’s seemingly endless layers of sub-directories makes it easy and fun to get lost in a world of information, education, and entertainment–precisely the ingredients for inducing flow through the model. More importantly, from a measurability perspective, the site converts some of its many visitors to buyers.
This capability to turn visitors into purchasers we term conversion efficiency, and summarize it in the form of an index as follows:

η3 = Q4 / Q3
The final stage in the process entails converting purchases into re-purchases. The firm should consider the proficiency of the Web site not only to create purchases, but to turn these buyers into loyal customers who revisit the site and purchase on an ongoing basis. Variables which the marketer can influence include:
• regular updating and refreshing of the Web site. It is more likely that customers will revisit a Web site that is regularly revised and kept current;
• soliciting purchase satisfaction and feedback to improve the product specifically, and interaction generally;
• regular updating and exploiting of the transaction database. Once captured, customer data becomes a strategic asset, which can be used to further refine and retarget electronic marketing efforts. This can take a number of forms: customers can be reminded electronically to repurchase (e.g., an e-mail to a customer to have a car serviced); customers can be invited to collaborate with the marketer (e.g., loyal customers can be rewarded for referrals by supplying the e-mail addresses of friends or colleagues who may be leads).
This capability to turn purchasers into repurchasers we term retention efficiency, and summarize as follows:

η4 = Q5 / Q4
Finally, we define a sixth, overall average Web site efficiency index (ηAv), which can be thought of as a summary of the conversion process outlined in Exhibit 2:

ηAv = (η0 + η1 + η2 + η3 + η4) / 5
This index can be an effective way to establish the extent to which Web site advertising and marketing objectives have been met. The measure is particularly relevant for a Web direct mail order operation where the main objective is to generate purchases and repeat purchases. However, a simple average may in other cases be misleading, and a more refined and appropriate measure might be a weighted average. A weighted average index is defined below:
ηw = μ0η0 + μ1η1 + μ2η2 + μ3η3 + μ4η4

where μi is the weighting accorded to each of the five efficiency indices in the model (the μi summing to one). So, for example, some advertisers might regard visits to the Web site as a very important criterion of its success (objective), without wishing or expecting these visits to necessarily result directly in sales. Other advertisers and marketers might want the visit to result in dialogue, which could result in sales, but only indirectly–mailing or faxing further information, accepting a free product sample, or requesting a sales call. Another group of Web advertisers might wish to emphasize retention efficiency. They would want to use the Web as a medium for establishing dialogue with existing customers and facilitating routine reordering. It would therefore be useful for advertisers and marketers wishing to establish overall Web efficiency to be able to weight Web objectives in terms of their relative importance.
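The six measures can be tied together in a short script. The traffic counts, the interest fraction, and the weights below are invented for illustration; the formulas assume the simple successive-ratio structure implied by the text (each index is the ratio of adjacent Q values, and the weights sum to one):

```python
# Hypothetical traffic counts for the conversion model (Q0..Q5) and an
# assumed proportion alpha of surfers potentially interested in the firm.
Q = {0: 1_000_000, 1: 50_000, 2: 20_000, 3: 5_000, 4: 500, 5: 200}
alpha = 0.10  # assumed interest fraction; not given in the chapter

eta = {
    0: Q[1] / (Q[0] * alpha),  # awareness efficiency
    1: Q[2] / Q[1],            # locatability/attractability efficiency
    2: Q[3] / Q[2],            # contact efficiency
    3: Q[4] / Q[3],            # conversion efficiency
    4: Q[5] / Q[4],            # retention efficiency
}

eta_avg = sum(eta.values()) / len(eta)               # simple average index
weights = {0: 0.1, 1: 0.1, 2: 0.2, 3: 0.3, 4: 0.3}  # illustrative; sums to 1
eta_weighted = sum(weights[i] * eta[i] for i in eta)

print(round(eta_avg, 3), round(eta_weighted, 3))  # → 0.33 0.29
```

Shifting the weights toward conversion and retention, as a mail-order operation might, changes the weighted index without touching the underlying traffic data, which is precisely why a weighted measure lets different advertisers score the same site against different objectives.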
Caching and undercounting
The previously developed model assumes that all hits are counted. However, there are hits that are never detected by a Web server because pages can be read from a cache memory rather than the server. A cache is temporary memory designed to speed up access to a data source. In the case of the Web, pages previously retrieved may be stored on the disk (the cache in this case) of the personal computer running the browser. Thus, when a person is flipping back and forth between previously retrieved pages, the browser retrieves the required pages from the local disk rather than the remote server. The use of a cache speeds up retrieval, reduces network traffic, and decreases the load on the server. As a consequence, however, data collected by a Web server undercount hits. The extent of undercounting depends on the form of caching.
Netscape, one of the most popular browsers, offers three levels of caching: once per session, always, and never . In terms of undercounting, the worst situation is never , which implies that if the page is in cache, the browser will not retrieve a new version from the server. This also means the customer could be viewing a page that could be months out of date. Always means the browser always checks to ensure that the latest version is about to be displayed. A hit will not be recorded if the page in the cache is the current version. The default for Netscape, once per session , results in undercounting but does mean the customer is reading current information, unless that page changes during the session.
The existence of a proxy server can further exacerbate undercounting. A proxy server is essentially a cache memory for a group of users (e.g., department, organization, or even country). Requests from a browser to a Web server are first routed to a proxy server, which keeps a copy of pages it has retrieved and distributed to the browsers attached to it. When any browser served by the proxy issues a request for a page, the proxy server will return the page if it is already in its memory rather than retrieve the page from the original server. For instance, a company could operate a proxy server to improve response time for company personnel. Although dozens of people within the organization may reference a particular Web page, the originating server may score one hit per day for the company because of the intervening proxy server. To further complicate matters, there can be layers of proxy servers, and one page retrieved from the original Web server may end up being seen by thousands of people within a nation. Clearly, the proliferation of proxy servers, which is likely to happen as the Web extends, will result in severe undercounting.
The use of cache memory or proxy servers will result in undercounting of hits (Q2) and active visitors (Q3). Consequently, the locatability/attractability index ( η1) will be underestimated since Q2 is the numerator in the index’s equation, and the conversion efficiency index (η3) will be overestimated as Q3 is in the denominator. It is more difficult to conjecture the effect on the contact efficiency index (η2). One possibility is that the index is underestimated because active visitors browse the site more frequently than those who just hit, and as a result are more likely to read the page from cache memory.
Clearly, empirical research is required to estimate correction factors for η1, η2, and η3. Unfortunately, these correction factors are likely to differ by page and change over time as the distribution of proxy servers changes. Therefore, the initial perception that the Web enables the ready calculation of efficiency measures needs to be tempered by the recognition that cache memory can distort the situation.
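The undercounting mechanics can be illustrated with a toy simulation: a proxy server fetches each page from the origin server once and serves every later request from its cache, so the origin's hit counter sees only a fraction of the true page views. The traffic pattern below is invented:

```python
# Toy model: many browsers behind one proxy request the same pages.
# The origin server records a hit only when the proxy does not already
# hold the page, so server-side counts understate true page views.
def simulate(requests):
    """requests: list of (user, page) pairs. Returns (true_views, origin_hits)."""
    proxy_cache = set()
    origin_hits = 0
    for _user, page in requests:
        if page not in proxy_cache:
            origin_hits += 1      # proxy fetches from the origin once per page
            proxy_cache.add(page)
        # cached copies served locally are invisible to the origin server
    return len(requests), origin_hits

# 30 users behind one proxy each view the same two pages.
reqs = [(u, p) for u in range(30) for p in ("home", "products")]
true_views, hits = simulate(reqs)
print(true_views, hits)  # 60 2
```

A 30-fold gap between true views and recorded hits from a single shared cache shows why layered proxies can make server-side correction factors large, page-specific, and unstable over time.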
The counting problem caused by caching is not unlike other counting problems encountered by advertisers. Viewership, listenership, and readership of conventional media are cases in point. The issue of readership, for example, has perplexed advertisers, researchers, and publishers for many years: How does one measure readership? Is it merely circulation? Circulation probably undercounts in one way, because there may be more than one reader (e.g., two people read the subscription to Wired ), or overcounts in another (e.g., no one reads the subscription). We thus believe that caching is a new variation of the same old counting problem, and creative managers will need to discover innovative ways to solve it.
Conclusion
A fundamental problem in researching the effectiveness of marketing mix variables, such as pricing strategy or advertising, is that of isolating them from others. This is compounded further when the effects of a variable can be indirect, or have a prolonged lag effect. Cases in point are advertising’s ability to create awareness, which might or might not lead to an immediate sale, and its lag effects–consumers remember slogans long after campaigns have ended, and the effects of this on sales continue to intrigue researchers. Thus, advertisers and marketers sustain their efforts in searching for ways in which returns to marketing investments generally, and communication capital in particular, can be enhanced. This highlights the importance of establishing specific communication objectives for Web sites, and for identifying measurable means of determining the success of Web ventures. There is perhaps some solace to be gained from realizing that the Web is a lot more measurable than many other marketing communication efforts, with feedback being relatively quick, if not immediate.
The Web is a new medium which is characterized by ease of entry, relatively low set-up costs, globalness, time independence, and interactivity. As such, it represents a remarkable new opportunity for advertisers and marketers to communicate with new and existing markets in a very integrated way. Many advertisers will use it to achieve hitherto undreamed-of success; for others, it will be an opportunity lost and a damp squib. We hope that the process model for assessing Web site efficiency will help produce more of the former outcome. From an academic perspective, the model can be used to develop research propositions concerning the maximization of Web site efficiency and, using data from real Web sites, to test these propositions. For the practitioner, the model provides a sequence of productivity measures which can be calibrated with relative ease. The challenge facing both parties, however, is to maximize the creativity that will justify advertising and marketing investments in a Web presence.
Cases
Roos, J., M. Lissack, and D. Oliver. 1998. Bringing the Internet to the masses: America Online Inc. (AOL). Lausanne, Switzerland: IMD. ECCH 398-184-1.
Christiaanse, E., J. Been, and T. van Diepen. 1997. KLM Cargo “bringing worlds together” Breukelen, Netherlands: Nijenrode University. ECCH 397-067-1.
1. This chapter is based on Berthon, P. R., Pitt, L. F., & Watson, R. T. (1996). The World Wide Web as an advertising medium: towards an understanding of conversion efficiency. Journal of Advertising Research, 36(1), 43-54.
Introduction
The Internet and the Web will radically change distribution. The new medium undermines key assumptions upon which traditional distribution philosophy is based, and in practice renders many conventional channels and intermediaries obsolete.
In simple markets of old, producers of goods or services dealt directly with the consumers of those offerings. In some modern business-to-business markets, suppliers also interact on a face-to-face basis with their customers. However, in most contemporary markets, mass production and mass consumption have caused intermediaries to enter the junction between buyer and seller. These intermediaries have either taken title to the goods or services in their flow from producer to customer, or have, in some way, facilitated this by their specialization in one or more of the functions that have to occur for such movement to occur. These flows of title and functions, and the intermediaries who have facilitated them, have generally come to be known as distribution channels. For a majority of marketing decision makers, dealing with the channel for their product or service ranks as one of the key marketing quandaries faced. In many cases, despite what the textbooks have suggested, there is frequently no real decision as to who should constitute the channel–rather, it is a question of how best to deal with the incumbent channel.

Marketing channel decisions are also critical because they intimately affect all other marketing and overall strategic decisions. Distribution channels generally involve relatively long-term commitments, but if managed effectively over time, they create a key external resource. Small wonder then that they exhibit powerful inertial tendencies, for once they are in place and working well, managers are reluctant to fix what is not broken.

We contend that the Web will change distribution like no other environmental force since the industrial revolution. Not only will it modify many of the assumptions on which distribution channel structure is based, in many cases, it will transform and even obliterate channels themselves. In doing so, it will render many intermediaries obsolete, while simultaneously creating new channels and, indeed, new intermediaries.
First, we review some of the rationale for distribution channel structure and identify the key tasks of a distribution channel. Second, we consider the Internet and the Web, and describe three forces that will affect the fundamental functions of distribution channels. This then enables the construction of a technology-distribution function matrix, which we suggest is a powerful tool for managers to use to assess the impact that electronic commerce will have on their channels of distribution. Next, we visit each of the cells in this matrix and present a very brief case of a channel in which the medium is currently affecting distribution directly. Finally, we conclude by identifying some of the long-term effects of technology on distribution channels, and possible avenues for management to explore to minimize the detrimental consequences for their distribution strategies specifically, and for overall corporate strategy in general.
What is the purpose of a distribution strategy?
The purpose of a distribution channel is to make the right quantities of the right product/service available at the right place, at the right time. What has made distribution strategy unique relative to the other marketing mix decisions is that it has been almost entirely dependent on physical location. The old saying among retailers is that the three keys to success are the 3 Ls: Location, Location, Location!
Intermediaries provide economies of distribution by increasing the efficiency of the process. They do this by creating time, place, and possession utility, or what we have referred to simply as right product, right place, right time. Intermediaries in the distribution channel fulfill three basic functions.
• Intermediaries support economies of scope by adjusting the discrepancy of assortments. Producers supply large quantities of a relatively small assortment of products or services, while customers require relatively small quantities of a large assortment of products and services. By performing the functions of sorting and assorting, intermediaries create possession utility through the process of exchange and also create time and place utilities. We refer to these activities as reassortment/sorting, which comprises:
• Sorting, which consists of arranging products or services according to class, kind, or size.
• Sorting out, which would refine sorting by, for example, grading products or output.
• Accumulation, which involves the aggregation of stocks from different suppliers, such as all (or the major) producers of household equipment or book publishers.
• Allocation, which is really distribution according to a plan–who will get what the producer(s) produced. This might typically involve an activity such as breaking bulk.
• Assorting, which has to do with putting an appropriate package together. Thus, a men's outfitter might provide an assortment of suitable clothing: shirts, ties, trousers, socks, shoes, and underclothes.
• Intermediaries routinize transactions so that the cost of distribution can be minimized. Because of this routinization, transactions do not need to be bargained on an individual basis, which would tend to be inefficient in most markets. Routinization facilitates exchange by leading to standardization and automation. Standardization of products and services enables comparison and assessment, which in turn abets the production of the most highly valued items. By the standardization of issues, such as lot size, delivery frequency, payment, and communication, a routine is created to make the exchange relationship between buyers and sellers both effective and efficient. In channels where it has been possible to automate activities, the costs of activities such as reordering can be minimized–for example, the automatic placing of an order when inventories reach a certain minimum level. In essence, automation involves machines or systems performing tasks previously performed by humans–thereby eliminating errors and reducing labor costs.
• Intermediaries facilitate the searching processes of both producers and customers by structuring the information essential to both parties. Sellers are searching for buyers and buyers are searching for sellers, and at the simplest level, intermediaries provide a place for these parties to find each other. Searching occurs because of uncertainty. Producers are not positive about customers’ needs and customers cannot be sure that they will be able to satisfy their consumption needs. Intermediaries reduce this uncertainty for both parties.
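The automation described under routinization above–placing an order automatically when inventory reaches a certain minimum level–is, in essence, a reorder-point rule. A minimal sketch follows; the function name, threshold, and quantities are illustrative assumptions, not figures from the chapter:

```python
def check_reorder(on_hand: int, reorder_point: int, order_qty: int) -> int:
    """Return the quantity to order: order_qty once stock has fallen to or
    below the reorder point, otherwise 0 (no order placed)."""
    return order_qty if on_hand <= reorder_point else 0

# Hypothetical routine: reorder 500 units whenever stock drops to 100 or fewer.
print(check_reorder(on_hand=80, reorder_point=100, order_qty=500))   # → 500
print(check_reorder(on_hand=250, reorder_point=100, order_qty=500))  # → 0
```

Because the rule is standardized, no bargaining or human judgment is needed for each transaction–exactly the cost saving routinization provides.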
We will use these functions of reassortment/sorting, routinization, and searching in our construction of a technology-distribution function grid, or what we call the Internet Distribution Matrix.
What does technology do?
Understandably, at this early stage, attention has focused either on the Web from a general marketing perspective or on the Web as a marketing communication medium. While this attention is important and warranted, less has been given to the Web's impact on distribution channels, and this impact may turn out to be even more significant than its impact on communication. Indeed, as we shall argue, distribution may in the future change from channels to media. We discern three major effects that electronic commerce will have on distribution: it will kill distance, homogenize time, and make location irrelevant. These effects are now discussed briefly.
The death of distance
In the mid-1960s, an Australian named Geoffrey Blainey wrote a classic study of the impact of geographic isolation on his homeland. He argued that Australia (and, of course, neighbors such as New Zealand) would find it far more difficult to succeed in terms of international trade because of the vast physical distances between the country and world markets. Very recently, Frances Cairncross played on Blainey's title, The Tyranny of Distance, by calling her work on the convergence of three technologies (telephone, television, and computer) The Death of Distance. She contends that "distance will no longer determine the cost of communicating electronically." For the distribution of many products–those that can be digitized, such as pictures, video, sound, and words–distance will thus have no effect on costs. The same is true for services. For all products, distance will have substantially less effect on distribution costs.
The homogenization of time
In the physical market, time and season govern trading, and therefore, by definition, distribution. We see evidence of this in the form of opening hours, in activities that occur by time of day, and in social and climatic seasonality. The virtual marketplace is atemporal: a Web site is always open. The seller does not need to be awake to serve the buyer and, indeed, the buyer does not have to be awake, or even physically present, to be served by the seller. The Web is independent of season, and it can even be argued that these media create their own seasonality (such as a Thanksgiving Web browse). Time can thus be homogenized–made uniformly consistent for all buyers and all sellers. Time and distance vanish, and action and response are simultaneous.
The irrelevance of location
Any screen-based activity can be operated anywhere on earth. The Web bookstore Amazon.com, one of the most written about of the new Web-based firms, supplies books to customers who can be located anywhere, from book suppliers who can be located anywhere. The location of Amazon.com matters to neither book buyers nor book publishers. No longer will location be key to most business decisions. We have moved from marketplace to marketspace. To compare marketspace-based firms to their traditional marketplace-based alternatives, one needs to contrast three issues: content (what the buyer purchases), context (the circumstances in which the purchase occurs), and infrastructure (simply what the firm needs in order to do business).
The best way to understand a firm like Amazon.com as a marketspace firm is to simply compare it to a conventional bookstore on the three criteria of content, context, and infrastructure. Conventional bookstores sell books; Amazon.com sells information about books. It offers a vast selection and a delivery system. The interface in a conventional bookstore situation is in a shop with books on the shelves; in the case of Amazon.com, it is through a screen. Conventional bookstores require a shop with shelves, people to serve, a convenient location, and most of all, large stocks of books; Amazon.com requires a fast efficient server and a great database. Try as they might, conventional bookstores can never stock all the books in print; Amazon.com stocks no, or very few, books, but paradoxically, it stocks them all. It really matters where a conventional bookstore is located (convenient location, high traffic, pleasant surroundings); Amazon.com’s location is immaterial. Technology is creating many marketspace firms. In doing so, cynics may observe that it is enacting three new rules of retailing: Location is irrelevant, irrelevant, irrelevant.
The Internet distribution matrix
Contrasting the three effects of technology vertically, with the three basic functions of distribution channels horizontally, permits the construction of a three-by-three grid, which we call the Internet distribution matrix. This is shown in Exhibit 1. We suggest that it can be a powerful tool for managers who wish to identify opportunities for using the Internet and the Web to improve or change distribution strategy. It can also assist in the identification of competitive threats by allowing managers to concentrate on areas where competitors might use technology to perform distribution functions more effectively. Frequently, competition may not be from acknowledged, existing competitors, but from upstarts and from players in entirely different industries.
Exhibit 1: The Internet distribution matrix
Each cell in the matrix permits the identification of an effect of technology on a distribution function. So, for example, the manager is able to ask what effect the death of distance will have on the function of reassortment and sorting, or what effect the irrelevance of location will have on the activity of searching, in his or her firm. In order to stimulate thought in this regard, and to aid vicarious learning, we now offer a number of examples of organizations using their Web sites to exploit the effects of technology on distribution functions. It should be pointed out that neither the technological effects nor the distribution functions are entirely discrete–that is, uniquely identifiable in and of themselves. In other words, it is not possible to say that a particular Web site is only about the death of distance and not about time homogenization or location irrelevance. Nor is it possible to say that, just because a Web site changes reassortment and sorting, it does not affect routinization and searching. Like most complex organizational phenomena, these forces all interact with each other in reality, and so we have, we hope, at least succeeded in identifying a case that illustrates interesting practice in each instance.
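For readers who find a concrete representation useful, the matrix can be sketched as a simple lookup table, with each cell holding the example organization the chapter examines for that combination of effect and function. (The representation itself is ours; the pairing of firms to cells follows the chapter's section headings.)

```python
# The Internet distribution matrix: three technology effects crossed with
# three distribution functions. Each cell names the chapter's example case.
EFFECTS = ["death of distance", "homogenization of time", "irrelevance of location"]
FUNCTIONS = ["reassortment/sorting", "routinization", "searching"]

matrix = {
    ("death of distance", "reassortment/sorting"): "Music Maker",
    ("death of distance", "routinization"): "DuPont Lubricants / GE Plastics",
    ("death of distance", "searching"): "Lufthansa",
    ("homogenization of time", "reassortment/sorting"): "Duke GEMBA",
    ("homogenization of time", "routinization"): "British Airways Executive Club",
    ("homogenization of time", "searching"): "Monster Board / jobs.ac.uk / SeaNet",
    ("irrelevance of location", "reassortment/sorting"): "Dell Computer",
    ("irrelevance of location", "routinization"): "Caterpillar / General Electric",
    ("irrelevance of location", "searching"): "Eagle Star",
}

# A manager can then ask, for example: who is exploiting the death of
# distance to change the searching function?
print(matrix[("death of distance", "searching")])  # → Lufthansa
```

Walking the nine cells in this way is exactly the exercise the matrix is intended to support: each (effect, function) pair becomes a question about one's own channel.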
The effects of technology on distribution channels
In this section, we move through the cells in the Internet distribution matrix and, in order of sequence, present cases of firms using their Web sites to exploit the effects of technology in changing distribution functions.
The death of distance and reassortment and sorting
Music Maker is a Web site that allows customers anywhere to create CDs of their own by sorting through vast lists of recordings by various artists of every genre. The Web site charges per song, and then allows the customer to also personalize the CD by designing, coloring, and labeling it. The company then presses the CD and delivers it to the customer. Rather than compile a collection of music for the average customer, like a traditional recording company, or attempting to carry an acceptable inventory, like a good conventional record store, Music Maker lets customers do reassortment and sorting for themselves, regardless of how far away they may be from the firm in terms of distance. If a customer wants Beethoven’s “Fifth” and Guns’N’Roses on the same CD, they can have it. At present, distance is only a problem for delivery, and not for reassortment/sorting; however, in the not-too-distant future, even this will not be an impediment. As the costs of digital storage continue to plummet, and as transmission rates increase, customers may simply download the performances they like, rather than have a CD delivered physically, and then press their own CD, or simply store the sound on their hard drive.
The death of distance and routinization
A problem frequently encountered by business-to-business marketers with large product ranges is that of routinely updating their catalogs. This is required in order to accurately reflect the availability of new products and features, changes and modifications to existing products, and of course, price changes. Once the changes have been made, the catalog then needs to be printed, and physically delivered to customers who may be geographically distant, with all the inconvenience and cost that this type of activity incurs. The problem is compounded, of course, by a need for frequent update, product complexity, and the potentially large number of geographically dispersed customers.
DuPont Lubricants markets a large range of lubricants for special applications to customers in many parts of the world. Its catalog has always been subject to change with regard to new products, new applications of existing products, changes to specifications, and price changes. Similarly, GE Plastics, a division of General Electric, offers a large range of plastics with applications in many fields, and the company faced similar problems. Both firms now use virtual routinization by way of their Web sites to replace the physical routinization that updating printed catalogs previously required. This can be done for customers regardless of distance, and the virtual catalog is, in a real sense, delivered instantaneously. Users have immediate access to the latest product descriptions, specifications, and prices, and are also able to search the catalog for the best lubricant or plastic for a particular application, as the case may be.
The death of distance and searching
Anyone who has experienced being a traveler in country A who wants to purchase an airline ticket to travel from country B to country C will know the frustration of being at the mercy of travel agents and airlines in the home country and in the other two. Prices of such tickets verged on the extortionate, and the customer was virtually powerless as he or she tried to deal with parties in foreign countries at a distance, unable to shop on the ground (locally) and make the best deal. The German airline Lufthansa's global reservation system lets travelers book fares from anywhere in the world, to and from anywhere in the world, and permits them to pick up their tickets at the airport. Unlike the Web sites of many airlines, which tend to be dedicated to their own flights, Lufthansa's allows the customer to access the timetables, fares, and routes of its competitors. In this way, distance no longer presents an obstacle to customers in their search for need satisfaction, because Lufthansa is able to interact directly with customers all over the world.
The homogenization of time and reassortment/sorting
In a conventional setting, students who wish to complete a degree need to be in class to take the courses they want, at the same time as the faculty who present those courses and the other students who take them. Where two desired courses clash directly with regard to time slots, or are presented close together at opposite ends of the campus or on different campuses, the student is generally not able to take more than one course at a particular time. This problem is particularly prevalent in many MBA programs with regard to elective courses, and students have to choose among appealing offerings in a way that generally results in satisficing rather than optimizing. Traditional distance-learning programs have attempted to overcome these problems but have been only partially successful, for the student misses the live interaction that real-time classes provide. The Global Executive MBA (GEMBA) program of Duke University's Fuqua School of Business allows its students to take elective course lectures anywhere, anytime, over the Internet, and uses the medium to permit students to interact with faculty and fellow students. As the on-line brochure states, "Thanks to a unique format that combines multiple international program sites with advanced interactive technologies, GEMBA students can work and live anywhere in the world while participating in the program." Students enroll for the course from many different parts of the world and in many time zones, yet are now able to self-assort the MBA program that they really want.
The homogenization of time and routinization
Every two months, British Airways mails personalized information to the many millions of members of its frequent flyer Executive Club. The problem is that this information is out-of-date on arrival. When club members wish to redeem miles for free travel, they either have to call the membership desk at the airline to determine the number of available miles, or, more commonly, request a travel agent to do so for them. There is also the problem of determining how far the member can travel on the miles available.
Nowadays, members have on-line, up-to-the-minute access to information on their status on the British Airways Web site. By entering a frequent flyer number and a security code, a member is able to get a report on available miles and check on the latest transactions that have earned miles. The member is then presented with a color map of the globe with the city of preferred departure at the center. Other cities to which the member would be able to fly on the available miles are highlighted. The member is also able to do what-if querying of the site by increasing the number of passengers or upgrading the class of travel. Time is homogenized, and transactions routinized, because members can perform these activities when it suits them, without having to wait for a mailed report or for the travel agent's office to open. What would be a highly customized activity (determining where the member could fly to, and how) when performed by humans is reduced to a routine by a system.
The homogenization of time and searching
In many markets, the need to reduce uncertainty by searching is compounded by the problem that buyer and seller operate in different time zones or at different hours of the day or week. Even simple activities, such as routine communication between the parties, become problematic. Employee recruitment presents a good example of these issues–companies search for employees and individuals search for jobs. Both parties in many situations rely on recruitment agencies to enter the channel as intermediaries, not only to simplify their search processes, but also to manage their time (such as, when will it suit the employer to interview, and the employee to be interviewed?).
A number of enterprising sites for recruitment have been set up on the Web. One of these, Monster Board, lists around 50,000 jobs from more than 4,000 companies, including blue-chip employers rated among the best. It keeps potential employees informed by providing customized e-mail updates for job seekers and, of course, potential employers are able to access résumés of suitable candidates on-line, anytime.
The recruitment market also provides excellent examples of getting it wrong and getting it right on the Web as a distribution medium. For many years, the Times Higher Education Supplement has offered the largest market for jobs in higher education in the United Kingdom and the British Commonwealth. Almost all senior, and many lower-level, positions in universities and tertiary institutions are advertised in the Times Higher. In 1996, the Times Higher set up a Web site where job seekers could conveniently browse and sort through all the available positions. This must have affected sales of the Times Higher, for within a short while the Web site began to require registration and subscription, perhaps in an attempt to shore up revenues affected by a decline in circulation. Charging on the Web–and knowing what to charge for, and how–are issues with which most managers are still grappling. Surfers, perhaps enamored of the fact that most Internet content is free, seem unwilling to pay for information unless it produces real, tangible, immediate, and direct benefits.
Universities in the United Kingdom may have begun to sense that their recruiting was less effective, or someone may have had a bright idea. At the same time as the Times Higher was attempting to charge surfers for access to its jobs pages, a consortium of universities set up a Web site called jobs.ac.uk, to which they all post available positions. Not only is the job seeker able to specify and search by criteria, but once a potential position is found, he or she is able to link directly to the Web site of the institution for further information on issues such as the student body, research, facilities, and faculty–or whatever else the institution has placed on its site. Jobs.ac.uk does not need to be run at a profit, as the Times Higher does. The benefits to the advertising institutions come in the form of reduced job-advertising costs and presence on a site where job seekers will obviously come to look for positions. This is similar to the way shoppers reduce their search in the real world by shopping in malls where there is more than one store of the type they intend to patronize.
In traditional markets, where searching requires a physical presence, both buyer and seller need to interact at a mutually suitable time. Of course, this time is not necessarily suitable to the parties in a real sense, and is typically the result of a compromise.
Those who wish to transport large quantities of goods by sea either need to wait until a shipper in another country opens the office before placing a telephone call, or communicate by facsimile and wait for an answer. But what if capacity could be ascertained, and then reserved automatically? And, what if a shipper had spare capacity and wished to sell it urgently? SeaNet is a network that serves the global maritime industry 24 hours a day, regardless of time zones, by facilitating search for buyers and sellers. Reports indicate that this award-winning site became cash positive within a year and experiences subscription renewals at a rate of 90 percent. Shippers can post their open positions, orders, sales, and purchase information onto the site. This information is updated almost instantly and can be accessed by any shipping company anywhere in the world searching the Internet in order to do business–not just subscribers. Companies that want to do business can then contact the seller by e-mail, or by more conventional methods. With the help of SeaNet's site, shippers can find the information they need quickly and easily.
The irrelevance of location and reassortment/sorting
Conventional computer stores attempt to serve the average customer by offering a range of standard products from computer manufacturers. Manufacturers rely on these intermediaries to inform them about what the typical customer requires, and then produce an average product for this market. Customers travel to the store that is physically near enough to them in order to purchase the product. In this market, location matters. The store must be accessible to customers and, of course, be large enough to carry a reasonable range of goods, as well as provide access and parking to customers.
Dell Computer is one of the real success stories of electronic commerce: estimates of daily sales from its Web site are revised upward almost daily and, at the time of writing, exceed USD 6 million. The company has been a sterling performer through the latter half of the 1990s, and much of this recent achievement has been attributed to its trading over the Internet. Using Dell's Web site, a customer is able to customize a personal computer by specifying (clicking on a range of options) such attributes as processor speed, RAM size, hard drive, CD-ROM, and modem type and speed. A handy calculator instantly updates customers on the cost of what they are specifying, so that they can adjust their budgets accordingly. Once satisfied with a specified package, the customer can place the order and pay on-line. Only then does Dell commence work on the machine, which is delivered to the customer just over a week later. Even more important, Dell only places orders for items such as monitors from Sony, or hard drives from Seagate, once the customer's order is confirmed. The PC industry leader Compaq's current rate of stock turnover is 12 times per year; Dell's is 30. This may merely seem like attractive accounting performance until one realizes the tremendous strategic advantage it gives Dell. When Intel launches a new, faster processor, Compaq effectively has to sell six-week-old stock before it is able to launch machines with the new chip. Dell only has to sell ten days' worth. Dell's location is irrelevant to customers–the company is where customers want it to be. Dell actually gets customers to do some work for the company, by having them do the reassortment and sorting themselves.
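The strategic weight of those turnover figures is easy to verify with back-of-the-envelope arithmetic: average days of stock on hand is simply 365 divided by the annual stock-turnover rate. (The "six weeks" and "ten days" above are rounded approximations of the same calculation.)

```python
def days_of_inventory(annual_turns: float) -> float:
    """Average days of stock on hand implied by an annual stock-turnover rate."""
    return 365 / annual_turns

print(round(days_of_inventory(12), 1))  # Compaq, 12 turns/year: ≈ 30.4 days of stock
print(round(days_of_inventory(30), 1))  # Dell, 30 turns/year:   ≈ 12.2 days of stock
```

When a new chip launches, Dell is therefore carrying roughly a third of the obsolescing inventory that Compaq must clear first.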
The irrelevance of location and routinization
Location has typically been important to the establishment of routines, efforts to standardize, and automation. It is easier and less costly for major buyers to set up purchasing procedures with suppliers who are nearby, if not local, particularly when the purchasing process requires lengthy face-to-face negotiation over issues such as price, quality, and specification. Recent examples of major business-to-business purchasing off Web sites, however, have tended to negate this conventional wisdom.
Caterpillar made its first attempt at serious on-line purchasing on 24 June 1997, when it invited preapproved suppliers to bid on a USD 2.4 million order for hydraulic fittings–simple plastic parts that cost less than a dollar each but that can bring a USD 2 million bulldozer to a standstill when they go wrong. Twenty-three suppliers elected to make bids in an on-line process on Caterpillar's Web site. The first bids came in high, but by lunchtime only nine were still left revising offers. By the time the session closed at the end of the day, the low bid was 22 cents; the previous low price Caterpillar had paid for the component was 30 cents. Caterpillar now attains an average saving of 6 percent through its Web site supplier-bidding system.
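On that single component the saving is far larger than the 6 percent average across all purchases, as a quick check of the price drop shows:

```python
old_price, new_price = 0.30, 0.22  # previous low price vs. winning bid, in USD
saving = (old_price - new_price) / old_price
print(f"{saving:.1%}")  # → 26.7%
```

The 6 percent figure is an average over Caterpillar's whole bidding program; individual auctions, as here, can yield much deeper cuts.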
General Electric was one of the first major firms to exploit the Web’s potential in purchasing. In 1996, the firm purchased USD 1 billion worth of goods from 1,400 suppliers over the Internet. As a result, the company reports that the bidding process has been cut from 21 days to 10, and that the cost of goods has declined between 5 and 20 percent. Previously, GE had no foreign suppliers. Now, 15 percent of the company’s suppliers are from outside North America. The company also now encourages suppliers to put their Web pages on GE’s site, and this has been found to effectively attract other business.
The irrelevance of location and searching
Location has in the past been critical to the function of search. Most buyers patronize proximal suppliers because the costs of searching further afield generally outweigh the benefits of a possible lower price. This also creates opportunities for intermediaries to enter the channel. They serve local markets by searching for suppliers on their behalf, while at the same time serving producers by giving them access to more distant and disparate markets. Travel agents and insurance brokers are typical examples of this phenomenon. They search for suitable offerings for customers from a large range of potential suppliers, while at the same time finding customers these suppliers would not have been able to reach directly in an economical fashion. In these situations the intermediary owns the customer and, as a result, commands the power in the channel. Interactive marketing enables suppliers to win back power from the channel by interacting directly with customers, and thus learning more about them.
The U.K. insurance company Eagle Star now allows customers to obtain quotes on auto insurance directly off its Web site. It offers a 15 percent discount on purchase and allows credit card payment. The company reports selling 200 policies per month in the first three months of this operation, generating USD 290,000 in premiums, and making 40,000 quotations. While it could be argued that these numbers are minuscule compared to the broker market, it should be remembered that this type of distribution is still in its infancy. Customers may prefer dealing directly with the company, regardless of its or their location, and in doing so create opportunities for the company to interact with them even further.
Some long-term effects
The long-term effects of the death of distance, the homogenization of time, and the irrelevance of location on the evolution of distribution channels will be manifold and complex. However, we comment here on three effects that are already becoming apparent, and that will undoubtedly affect distribution as we know it in profound ways.
First, we may in the future talk of distribution media rather than distribution channels in the case of most services and many products. A medium can variously be defined as: something, such as an intermediate course of action, that occupies a position or represents a condition midway between extremes; an agency by which something is accomplished, conveyed, or transferred; or a surrounding environment in which something functions and thrives.
Traditionally, distribution channels have been conduits for moving products and services. The effects of the three technological phenomena discussed above will be to move distribution from channels to media. Increasingly in the future, distribution will be through a medium rather than a channel.
The key distinction that we make between a channel and a medium in this context concerns the notion of interactivity. Electronic media such as the Internet are intrinsically interactive. Thus, whereas channels were typically conduits for products, an electronic medium such as the Internet has the potential to go beyond the simply passive distribution of products and services, to be an active (and central) creative element in the production of the product or service. From virtual markets (e.g., Priceline.com) through virtual communities (e.g., Firefly) to virtual worlds (e.g., The Palace), the hypermedia of the Web actively constitute respectively a market, a community, and a virtual world. The medium is thus the central element that allows consumers to co-create a market in the case of Priceline, their own service and product in the case of Firefly, and their virtual world in the case of The Palace. Critically, in each instance, the primary relationship is not between customers, but with the mediated environment with which they mutually interact. In summary, McLuhan's well-known adage that the "medium is the message" can be complemented, in the case of an interactive electronic medium such as the Web, with the addendum that, in some cases, the "medium is the product."
A second effect of these forces on channel functions may be a rise in commoditization, as channels have a diminished effect on the marketer's ability to differentiate a product or service. Commoditization can be seen as a process by which the complex and the difficult become simple and easy: so simple that anyone can do them, and does. Commoditization may easily be a natural outcome of competition and technological advance, which may see prices plunge and essential differences vanish. Commoditization will be accelerated by the evolution of distribution media, which will speed information flow and thus make markets more efficient. The only antidotes to commoditization will be a niche market too small to be attractive to others, innovation sufficiently rapid to stay ahead of the pack, or a monopoly. No one needs reminding that the last option is even more difficult to establish than the preceding two.
Disintermediation (and also reintermediation) is the third effect that we discern. As networks connect everybody to everybody else, they increase the opportunities for shortcuts, so that when buyers can connect straight from the computer on their desk to the computer of an insurance company or an airline, insurance brokers and travel agents begin to look slow, inconvenient, and overpriced. In the marketing of products, as opposed to more intangible services, this is also being driven by cheap, convenient, and increasingly universal distribution networks such as FedEx and UPS. No longer does a consumer have to wait for a retailer to open, drive there, attempt to find a salesperson who is generally ill-informed, and then pay over the odds to purchase a product, assuming the retailer has the required item in stock. Products and prices can be compared on the Web, and much information gleaned. If one supplier is out of stock or more expensive, there is no need to drive miles to a competitor. There are generally many competitors, and all are equidistant, a mere mouse click away. These phenomena will all lead to what has been termed disintermediation, a situation in which traditional intermediaries are squeezed out of channels. As networks turn increasingly mass market, there is a continuous contest of disintermediation (see also the disintermediation threat grid).
The Web also creates opportunities for reintermediation, where intermediaries may enter channels facilitated electronically. Where this occurs, it will be because they perform one of the three fundamental channel functions of reassortment and sorting, routinization, or searching more effectively than anyone else can. Thus, we are beginning to see new intermediaries set up sites that facilitate simple price search, such as the U.K.-based site Cheapflights, which enables a customer to search for the cheapest flight on a route, and more advanced sites (e.g., Priceline), which actually purchase the cheapest fare when customers state what price they are prepared to pay. In a world where new and unknown brands may have an uphill battle to establish themselves, there may be opportunities for sites set up as honest brokers, merely to validate brands and suppliers on Web sites. In these constant games of disintermediation and reintermediation, customer relationships will be the winners' prize.
Dealing effectively with distribution issues in the future will require an understanding of the new distribution media, and of how the new model will differ from the old. Most extant distribution and communication models are based on centralization, where investment at the core is substantial (as shown in Exhibit 2) and considerably lower on the periphery.
Exhibit 2.: The mass model of distribution and communication
In the new model, shown in Exhibit 3, investment is everywhere, and everywhere quite low. Essentially all that is required is a computer and a telephone line, and anyone can enter the channel, either as a supplier or as a customer. Intermediaries can also enter or exit the channel easily; however, their entry and continued existence will still depend on the extent to which they fulfill one or more of the basic functions of distribution. It will also depend on the effects that technology has on distribution in the markets they choose.
Exhibit 3.: The network model of distribution and communication
Conclusion
In this chapter, we have developed the Internet distribution matrix, and suggested that it can be used by existing firms and entrepreneurs to identify at least three things. First, how the Internet and its multimedia platform, the Web, might offer opportunities to perform the existing distribution functions of reassortment/sorting, routinization, and searching more efficiently and effectively. Cases of organizations using the medium to perform these activities, such as those that we have identified, can stimulate thinking. Second, the matrix can enable the identification of competitors poised to use the media to change distribution in the industry and the market. Finally, the matrix may enable managers to brainstorm about how an industry can be vulnerable. Neither the firm nor its immediate competitors may be contemplating using the Web to achieve radical change. However, that does not mean that a small startup is not doing so. And the problem with such small startups is that they will not operate in a visible way, or on a predictable timetable. In many cases, they might not even take an industry by storm, but they might very well deprive a market of its most valuable customers, as they exploit technology to change the basic functions of distribution.
Cases
Dutta, S., A. De Meyer, and S. Kunduri. 1998. Auto-By-Tel and General Motors: David and Goliath. Fontainebleau, France: INSEAD. ECCH 698-066-1.
Jelassi, T., and H. S. Lai. 1996. CitiusNet: the emergence of a global electronic market. Fontainebleau, France: INSEAD and EAMS. ECCH 696-009-1.
Subirana, B., and M. Zuidhof. 1996. Readers Inn: virtual distribution on the Internet and the transformation of the publishing industry. Barcelona, Spain: IESE. ECCH 196-026-1. | textbooks/biz/Business/Advanced_Business/Book%3A_Electronic_Commerce_-_The_Strategic_Perspective_(Watson_Brethon_Pitt_and_Zinkan)/06%3A_Distribution.txt |
Introduction
In many advanced economies, services now account for a far greater proportion of gross national product than manufactured goods (e.g., more than 75 percent of GDP and jobs in the U.S.). Yet it is only in recent years that marketing academics, practitioners, and indeed service firms have begun to give serious attention to the marketing of services, as distinct from products. It is generally thought that the marketing of services is more difficult, complex, and onerous because of the differences between services and products. The Web, we believe, will forever change this received wisdom. Most of the problems of services really don't matter on the Web. Services are no longer different in a difficult way. Using the Web to deliver services overcomes previously conceived limitations of services marketing, and in many cases, it creates hitherto undreamed-of opportunities for services marketers.[1]
The Web offers marketers the ability to make available full-color virtual catalogues, provide on-screen order forms, offer on-line customer support, announce and even distribute certain products and services easily, and elicit customer feedback. The medium is unique because the customer generally has to find the marketer rather than vice versa, to a greater extent than is the case with most other media. In this chapter, we show how the Web is overcoming the traditional problems associated with the marketing of services. We are entering the era of cyberservice .
What makes services different?
What makes services different from products? In other words, what special characteristics do services possess? Services possess four distinct features not held by products, and an understanding of these features is necessary to anticipate problems and to exploit the unique opportunities that some of them provide. The unique characteristics of services are:
• Intangibility: Unlike products, services are intangible, or impalpable: they cannot be seen, held, or touched. Whereas products are palpable things, services are performances or experiences. The main problem that intangibility creates for services marketers is that they have nothing to show the customer. Thus, experience and credence qualities are especially important in the case of services.
• Simultaneity: In the case of goods, production and consumption are separated; these activities need not occur at the same time or place. In the case of services, it is generally true that the producer and consumer both have to be present when a service is enacted.
• Heterogeneity: Products tend to possess a sameness, or homogeneity, that is not achieved by accident. Manufacturing lines produce homogeneous products and have quality control procedures in place to test products as they come off the line, and to ensure that defective products don't reach the market. Services, by contrast, are heterogeneous. They vary in output, and mistakes happen in real time, in the customer's face, which creates a number of challenges for the services marketer.
• Perishability: Because services are produced and consumed simultaneously, they cannot be inventoried. For example, if there are twenty empty seats on an aircraft for a particular flight, the airline can’t say, “Don’t worry, stick them in a cupboard. We’ll certainly be able to sell them over Thanksgiving.” They are lost forever.
Cyberservice
Cyberservice overcomes many of the traditional problems of services marketing by giving the marketer undreamed of control over the previously capricious characteristics of services. This is because the Web, as an interactive medium, combines the best of mass production (based in the manufacture of products) and customization (typically found in custom-made services). The Web is the ultimate tool for mass customization. It has the ability to treat millions of customers as though they were unique. In this section, we illustrate how this is being done by innovative organizations using their Web sites to manage the difficulties previously caused by service characteristics.
Managing intangibility
1. Use the Web to provide evidence
Because customers can’t see the service, we have to give them evidence of what it is they will get. This is a stratagem long employed by successful services marketers. McDonald’s emphasizes its commitment to cleanliness not only by having clean restaurants, but by constantly cleaning. Cyberservice puts evidence management into overdrive. The Royal Automobile Club (RAC) enables users to enroll for membership on-line. Information provided on the site includes details of the benefits of RAC membership, the extent of assistance the club has provided, the service options available, and methods of payment. Most importantly, however, the site also e-mails a new member within a few minutes of his or her joining. This message confirms all details, and provides instantaneous, tangible proof of membership in the form of a membership number. Once the member notes this number, or better still, prints the e-mail message, it is as good as having a policy document. Under traditional service delivery systems, such as the mail, this would take at least a few days. While the member might have received confirmation over a telephone, the Web site provides instant tangible assurance.
One of Ford Motor Company’s most innovative U.S. dealers is planning to install live video cameras in its service bays and relay a live feed to its Web site. Customers will be able to visit the service center and check the progress of their car’s service. By opening up its service center for continuous customer inspection, the dealer is making very evident the quality of its service.
2. Use the Web site to tangibilize the intangible
Although services are considered intangible, effective Web sites can, and should, give services a tangible dimension. There is a simple, but critical, reason for this: when you can’t really see what it is that you’re buying, you look for clues, or what psychologists call cues. The prospective visitor to a Disney theme park is about to part with a not inconsiderable amount of money. No matter how much he or she has heard from friends and associates, until the visit actually occurs, the visitor will not be able to judge the quality of the experience. The Disney Web site tangibilizes a future dream. It provides graphic details on the parks themselves, allows children to see and listen to their favorite characters, examine the rides that they might take, and get further information, before booking the visit. It is well to remember, in general, that when managing Web sites, three critical elements stand out:
• Quality of the Web site: A site must have quality text, graphics, video, and sound. When the customer sees the Web site and not the firm, the Web site becomes the firm!
• Frequency of update: Surfers will generally not visit a site frequently unless it changes regularly. A Web site, no matter how engaging on first impression, will fail if it is not seen to change, refresh, and generally be perceived as up-to-date. Interpreted from the customer’s perspective, a regularly updated site says that there is someone behind the Web site who cares enough about it and, most importantly, who is concerned enough about the customer to constantly reinvigorate it. The Web site is the firm’s street front. Customers expect it to change, just like the window displays of department stores.
• Server speed: In the pre-cyberservice days, service speed counted. In the Web environment, the surrogate for service speed is server speed and ease of navigation. Just as the customer won’t wait endlessly in line for a bank teller, a fast food restaurant server, or a travel agent, they will not wait forever to access a slow Web site on a sluggish server. Customers will simply move on. Immediacy is central to service and a defining expectation in cyberspace.
3. Sampling in cyberspace
It is very difficult to sample a service. The best way to convince someone to purchase wine is to have them try a sample glass. If they like it, they may buy a case, or at least a bottle. Wine estates and fine wine stores realize this and use tastings as a major element of promotional strategy. Similarly, car dealers arrange demonstration drives, and bookstores have their wares on display for customers to browse through before making a choice. Sampling is far more difficult with services, because they are intangible. The Web has the potential to change all this.
Each year, Harvard Business School Publishing Services (HBSP) generates many millions of dollars worth of business selling case studies, multimedia programs, books, and of course, the famous Harvard Business Review . Previously, an instructor anywhere in the world wishing to examine a Harvard case study had to order a sample copy either by telephone, fax, or in writing, and then wait some days for the item to arrive, after having been physically dispatched by HBSP. Nowadays, approved instructors from all over the world browse the Harvard site, using powerful search facilities to find cases and other materials in which they are interested. When something relevant is found, the instructor downloads it in Adobe Acrobat format and prints it, complete with a watermark indicating that the case is a sample, not for further reproduction. The instructor can then decide whether to order the item. Similarly, the Web site also allows surfers to enroll for regular electronic updates on abstracts of new cases, articles, books, and other products that may be of interest. Visitors can also subscribe to receive, bimonthly, the abstracts of articles in the latest Harvard Business Review .
4. Multiplying memories
Because services are intangible, the customer frequently relies on the testimony of others (word of mouth) to a greater extent than in the case of products. Whereas in the case of a product, the customer actually has something to show for it, with services there is usually just a memory.
Vivid Travel Network is a collection of Web sites based in San Francisco that links and integrates travel information resources from all over the world. The key feature of the service, in this context, is that it brings together people with experiences of different travel locations with people interested in visiting those locations. Those who have visited a location relive their vacation by writing about it, engaging in discussion and recollection with others who have also been there. At the same time, they provide valuable and highly credible word of mouth information to prospective visitors by allowing vast networks to multiply memories.
Managing simultaneity
Some of the features of simultaneity that the Web allows services marketers to manage include:
1. Customization
Because services are produced and consumed simultaneously, there is a possibility that the provider can customize the service. If this is done well, it can lead to giving the customer what he or she wants to a far greater extent than is the case with most products. The Web has the ability to excel at this, and because its capacity is based on information technology, data storage, and data processing, rather than employees and physical location, it can do it on a scale that traditional service providers would find impossible to match. Pointcast offers an individually customized news retrieval service. The customer selects categories of personal interest, such as news, sport, stock quotes, and weather. The service then scans news providers, and compiles a customized offering for each person, which is updated regularly either by the individual requesting additional items, or by the software learning what the individual likes and prefers, and searching for information that will satisfy these needs. Thus, no two individuals receive the same service from Pointcast.
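The logic of such category-based personalization can be sketched in a few lines of code. The stories, categories, and interest profiles below are invented for illustration; this is not Pointcast's actual system, merely a minimal model of how a customized feed can be assembled for each individual:

```python
# A minimal sketch of individually customized news retrieval:
# each user states interest categories, and the service filters
# the common pool of stories into a personal feed.

def personalize(stories, interests):
    """Return only the stories whose category matches this user's interests."""
    return [s for s in stories if s["category"] in interests]

stories = [
    {"headline": "Dow closes higher", "category": "stocks"},
    {"headline": "Storm front moves east", "category": "weather"},
    {"headline": "Cup final tonight", "category": "sport"},
]

# Two users with different interest profiles receive different services.
feed_a = personalize(stories, {"stocks", "weather"})
feed_b = personalize(stories, {"sport"})
```

Because the filtering runs per user against a shared pool, the marginal cost of treating each additional customer as unique is close to zero, which is precisely the scale advantage the text describes.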
2. Managing the customer as a part-time employee
In order to obtain services, the customer generally has to come inside the factory. Thus, in most conventional service situations, clients enter banks, vacationers go inside travel agencies, and university students attend classes in classrooms. Furthermore, once inside the service factory, the customer actually has to do a bit of work. Indeed, in the case of some services, a substantial amount of work. Not only does the customer come inside a service factory, and do some work, in many cases the quality of the service the customer receives is almost as dependent on the customer as it is on the efforts of the service provider. The customer can therefore be seen as a co-producer in service firms, and is, in a substantial sense, a part-time employee. In most service settings, this can be an opportunity to save costs and spark innovation.
The Web site of a well-known international service company illustrates how the medium can be used to manage profitably customers as part-time employees. The international courier company UPS allows customers access to its system through its Web site. The site reduces uncertainty by allowing customers to track shipments traveling through the system by entering the package receipt number. Furthermore, customers can request a pickup and find the nearest drop-off site. UPS still uses a large team of service agents and a major telephone switchboard to deal with customer inquiries. Now, however, millions of tracking requests are handled on-line each month, many of which would otherwise have been made through the more expensive and time-consuming telephone system. Clearly, UPS gains considerable savings by switching customers from telephone to Web parcel tracking. Furthermore, customers prefer this form of service delivery; otherwise, they would not have adopted it with such alacrity.
3. Innovation as part of customer participation
If we understand that, in service settings, the customer is a necessary co-producer and participant in the service creation process, then we can become aware of many possible service innovations that can create advantage in competitive markets. If the customer is willing to do some work, we can create enjoyable environments for them to do it in, and we can also devise service efficiencies that lead to significant cost reductions.
Firefly is an example of using the customer’s willingness to participate in the service production process to create service innovations on the Web. The Firefly network creates virtual communities of customers by getting them not only to give a lot of information about themselves, but also to do a lot of the work required to create this virtual community. Customers give information about their preferences regarding books, music, or films. Firefly then builds a profile of the customer’s likes, which is continually updated as the customer keeps on providing more information–usually in the form of ratings on scales. Customers are also put in touch with others who have similar interests to their own. This information is then correlated with other customers’ interests and enjoyment profiles to recommend new music, books, or films. Customers also give their opinions of the films, music, or books that they have seen, and this is then fed back to other customers. This information is not only very valuable to the customer, but a major asset to the company itself, which it can sell to film producers, record companies, or book sellers. Customers are thus not only co-creators of their own service and enjoyment; they also produce on behalf of Firefly a very valuable and saleable information asset.
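The correlation of profiles described above is the core of what is now called collaborative filtering. The following is a deliberately simplified sketch, with invented users, titles, and a toy similarity measure rather than Firefly's actual algorithm: find the other customer whose ratings most resemble yours, and recommend the items that customer rated highly and you have not yet seen.

```python
# Toy collaborative filtering: customers' ratings (1-5) of titles.
# Names, titles, and scores are fabricated for illustration.

ratings = {
    "ann":  {"Casablanca": 5, "Alien": 2, "Amadeus": 5},
    "bob":  {"Casablanca": 5, "Alien": 1, "Heat": 4},
    "carl": {"Casablanca": 1, "Alien": 5, "Heat": 2},
}

def similarity(a, b):
    """Closeness of two rating profiles on the items both users rated:
    1.0 means identical ratings, 0.0 means maximally different."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(4 - abs(a[i] - b[i]) for i in shared) / (4 * len(shared))

def recommend(user):
    """Suggest items rated 4 or higher by the most similar other
    customer that this user has not yet rated."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: similarity(ratings[user], ratings[u]))
    return [i for i, r in ratings[nearest].items()
            if r >= 4 and i not in ratings[user]]
```

Here `recommend("ann")` suggests "Heat", because bob's tastes track ann's closely and bob rated it highly. Every rating a customer contributes sharpens the profiles, which is why the customers' own work becomes the company's saleable asset.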
4. Service industrialization
While service firms have to put up with the fact that the customer comes inside the factory, this is not always strictly true. It might be more appropriate to say that a fundamental dilemma facing services marketers is to decide on the extent to which they want the customer to come inside the factory.
It has been argued that service firms would be more successful if they provided less service, not more! They should industrialize themselves, and become more like mass producers of goods than benevolent panderers to the whims of individuals. Rather than try to solve the problems that arise in service firms, they should try to eliminate them. Don’t fix the system; change the system. In doing so, they will be giving customers what they really want: not more service, but less service! To many marketers in general, and service marketers in particular, this might sound like heresy. However, a simple Web example allows us to illustrate these points vividly.
Consider how you would normally obtain a telephone number that you were unable to find. You would call directory inquiries, carefully enunciate the name and what you know of the address of the desired party, wait while the operator found it (hopefully!), and then listen to a computer voice rapidly read the number. A Web site, Switchboard.com, is a giant national database that contains the names, telephone numbers, and addresses of more than 100 million households and a further million businesses in the U.S. Visitors simply type in a name to get a listing of all of the people in the country by that name. Further information that the visitor has, such as state, city, or street name, helps narrow the search considerably. The visitor is able to print and keep the listing, once found, and also use the Web site to automatically send a postcard to the person just tracked down. This is a Web site with one of the longest average visit lengths, for once visitors realize its potential to find one number, they immediately see its value in being able to search for, and contact, long-lost family members, friends, and schoolmates. Yet this unique service is entirely produced by machines.
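The progressive narrowing that the visitor performs can be sketched as a simple filter over directory records. The records and field names below are fabricated for illustration, not Switchboard's actual schema; the point is that each extra field the visitor supplies shrinks the result set:

```python
# Sketch of progressive search narrowing on a directory database.
# All records are invented for illustration.

people = [
    {"name": "J. Smith", "city": "Athens", "state": "GA"},
    {"name": "J. Smith", "city": "Athens", "state": "OH"},
    {"name": "K. Jones", "city": "Boston", "state": "MA"},
]

def search(records, **criteria):
    """Keep only the records matching every supplied field; each
    additional field (state, city, ...) narrows the listing further."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

hits = search(people, name="J. Smith")                   # two listings
narrowed = search(people, name="J. Smith", state="GA")   # one listing
```

No operator is involved at any step: the customer does the work, the machine does the matching, and the "less service" is experienced as more.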
The directory assistance example illustrates how redesigning the system to provide less service, by replacing the human element with a machine, actually provides more service. Customers now have access to more information, which is so often the core element of any service.
5. Reducing customer errors
When customers are part of the production process, their errors can directly affect the service outcome. Indeed, one-third of all customer complaints are related to problems caused by customers. Thus, ways must be found to make the consumer component, as well as the producer component, of services fail-safe. Customer errors arise during preparation for the encounter, the service encounter, or the resolution of the encounter. Some examples illustrate how cyberservice reduces or avoids customer errors in each of the stages.
Encounter preparation
Customers can be reminded of what they need to do prior to the encounter–what to bring, the steps to follow, which service to select, and where to go. Hampton Inn generates personalized driving instructions for travelers to get them from their starting location to the selected Hampton Inn at their destination. Travelers can select their type of route, direct or scenic.
The encounter
An advantage of cyberservice is that customers can be led precisely through a process, repeatedly. For example, when buying books from Amazon.com, the customer is stepped through the process of selecting books and providing payment and shipping details. No steps can be missed, and the system checks the validity of entered information. Furthermore, customers don’t type in book titles (a possible customer error); these are selected by clicking. Many Web sites require customers or prospects to enter their e-mail address twice because of the observed high customer data-entry error rate. Of course, wherever possible, pull-down selection lists should be used so that customers have less opportunity to make errors.
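The fail-safing devices just described, double-entered e-mail addresses and selection from a fixed list, can be sketched as a simple validation routine. The field names and titles below are hypothetical, not Amazon's actual order form:

```python
# Sketch of fail-safing a customer order form: the system, not the
# customer, catches data-entry errors before the order is accepted.

TITLES = ["Hamlet", "Moby-Dick", "Ulysses"]  # pull-down list: no free typing

def validate_order(form):
    """Return a list of error messages; an empty list means the form is clean."""
    errors = []
    if form["email"] != form["email_confirm"]:   # double entry catches typos
        errors.append("E-mail addresses do not match.")
    if "@" not in form["email"]:
        errors.append("E-mail address looks invalid.")
    if form["title"] not in TITLES:              # only list entries accepted
        errors.append("Unknown title selected.")
    return errors

clean = validate_order({"email": "a@b.com", "email_confirm": "a@b.com",
                        "title": "Hamlet"})
flawed = validate_order({"email": "a@b.com", "email_confirm": "a@bcom",
                         "title": "Hamlet"})
```

The design choice is the one the text argues for: rather than training customers to make fewer mistakes, the process is structured so that mistakes are caught, or made impossible, at the moment of entry.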
Encounter resolution
On-line catalog companies, such as REI, e-mail customers a copy of their order so customers can correct any errors they may have made when entering delivery and order details.
Managing heterogeneity
Once more, there are a few things that the services marketer can manage on the Web in order to overcome the problems occasioned by service heterogeneity. Indeed, the Web offers unique opportunities in this regard.
1. Service standardization on the Web site
Some services marketers are reluctant to standardize service activities because they feel that this tends to mechanize and dehumanize an interaction between individuals. In some circumstances this is true, but that doesn’t mean that managers shouldn’t look for opportunities to produce service activities in as predictable and uniform a way as possible. Many people are cynical about the sincerity of the greeting, thanks, and farewell that one receives in a McDonald’s restaurant. However, by standardizing something as simple as this, the company has ensured that everyone is greeted, thanked, and bid farewell, in a setting where real warmth and friendliness don’t matter all that much anyway. McDonald’s has succeeded in eliminating much of the unpredictability that customers still face in so many other similar restaurant settings: surliness or complete indifference, or alternatively, service which is gushingly insincere. The real skills of services marketers become apparent in their ability to decide what should be standardized, and what should not.
Security First Network Bank (SFNB), which was one of the first financial services institutions to offer full-service banking on the Internet, uses a graphic metaphor–a color picture of the lobby of a traditional bank–to communicate and interact with potential and existing customers. Whereas in a real bank the customer might encounter great or indifferent service, warmth or rudeness, competence or incompetence, depending on the individual who serves them, in SFNB, the service is relevant and highly consistent.
2. Electronic eavesdropping on customers’ conversations
Firms must listen to different consumer groups to ensure that they are hearing what customers are saying and how they are perceived as responding to their complaints, concerns, and ideas. They need to listen to three types of customers: external customers, competitors’ customers, and internal customers (employees).
Every day on the Internet, customers are talking about products. Newsgroups and listservs provide forums for consumers throughout the world to pass comment on a company’s products and services. Furthermore, bad news travels at megabits per second to millions of customers, as Intel found when the flawed Pentium chip was detected. Companies can eavesdrop on these conversations and respond when appropriate. In addition, they can collect and analyze customers’ words to learn more about their customers and those of their competitors. Internally, an organization can set up electronic bulletin boards to foster communication from internal customers.
Traditional focus groups meet at the same time and in the same place. Our early work with electronic focus groups indicates that these chains of time and space can be easily snapped. We have successfully operated electronic focus groups spanning seven time zones and three countries.
Cyberservice means listening to more customers more intently and reacting electronically in real-time. It also means everyone in the organization can listen to customers. Key insights can be broadcast on internal bulletin boards so that everyone understands what the customer truly wants. There has never been a better opportunity to get closer to customers and stay focused on their needs.
3. Service quality
Whereas good quality can be controlled into, and bad quality out of, the production process for goods, in the case of services this is made much more difficult by heterogeneity. Thus, service quality needs to be carefully managed. In order for it to be managed, of course, it needs to be measured. If you can’t measure something, you can’t manage it. In the last ten years, tremendous progress has been made in the measurement of service quality.
Interactive, Web-based questionnaires are a convenient and inexpensive way of collecting customers’ perceptions of service quality or some other aspect of a service. Computing and IT services at the University of Michigan has an on-line survey for its customers to complete. An on-line version of SERVQUAL, a widely used measure of service quality, can capture customers’ expectations and perceptions of service quality and e-mail these data to a market research company. The real pay-off of Web-based questionnaires is in reducing the length of the feedback loop so that service quality problems are rapidly detected and corrected before too many customers are disaffected.
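The scoring behind such a questionnaire is straightforward to automate. The sketch below, with invented respondent data, computes SERVQUAL-style gap scores (perception minus expectation) per dimension; the dimension names follow the published scale, but everything else is illustrative.

```python
# Hypothetical sketch: scoring a SERVQUAL-style web questionnaire.
# SERVQUAL treats service quality as the gap between perceptions and
# expectations; negative gaps flag dimensions where service falls short.
# All respondent scores below are invented for illustration.

DIMENSIONS = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

def gap_scores(expectations, perceptions):
    """Return the per-dimension gap (perception - expectation)."""
    return {d: perceptions[d] - expectations[d] for d in DIMENSIONS}

expectations = {"tangibles": 6.1, "reliability": 6.5, "responsiveness": 6.0,
                "assurance": 5.8, "empathy": 5.5}
perceptions  = {"tangibles": 5.9, "reliability": 5.2, "responsiveness": 6.2,
                "assurance": 5.8, "empathy": 5.1}

gaps = gap_scores(expectations, perceptions)
# Dimensions in need of attention, worst shortfall first:
shortfalls = [d for d, g in sorted(gaps.items(), key=lambda kv: kv[1]) if g < 0]
print(shortfalls)
```

A real survey backend would aggregate such gaps across many respondents, but the short feedback loop the text describes comes from exactly this kind of calculation running as responses arrive.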
Managing perishability
Because products are produced before they are consumed, many can be stored until needed. Services cannot, for they are produced and consumed simultaneously, as we know. This gives them the characteristic of perishability. Services cannot be inventoried. To understand and minimize the effects of service perishability, astute services marketers are using Web sites to manage two things, supply and demand.
1. Managing supply on the Web site
Managing supply in a conventional service setting requires controlling all those factors of service production which affect the customer’s ability to acquire and use the service. Thus, it traditionally includes attention to such variables as opening and closing hours, staffing, and decisions as to how many customers will be able to use the service at any particular time. On the Web, these issues are circumvented, for the Web site gives the services marketer the ability to provide 24-hour service to customers anywhere. British Airways uses its Web site to provide services that, under conventional circumstances, would have been limited by people, time, and place. Customers are now able to purchase tickets off the Web site at any time convenient to them, without standing in line, from any place. British Airways also provides a service to its Executive Club members whereby it regularly mails details on the frequent flyer program and miles available, and staffs a desk during office hours to which calls can be made. The human, time, location, and cost limitations of this are obvious.
2. Directing demand on the Web site
Services marketers also cope with service perishability by managing demand. That is, they use aspects of the services marketing mix, such as promotions, pricing, and service bundling, to stimulate or dampen demand for the service. Most service businesses are characterized by a high fixed cost component as a proportion of the total cost structure. Thus, in many situations, even very low prices for those last few seats or those last few rooms can be easily justified–20 or 30 percent of list price is still better than nothing when the service would have perished anyway. Many airlines are now conducting on-line ticket auctions on their Web sites as a way of managing demand. Airlines typically fill only two-thirds of their available capacity. By auctioning off unsold seats for imminent flights at low prices, the potential exists to approach 100 percent capacity. This is likely to result in substantial increases in airline profits, as full capacity on flights is reached with little or no increase in total costs.
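The arithmetic behind the last-seat argument is worth making explicit. The sketch below uses invented figures (capacity, load factor, fares, and per-passenger cost are all assumptions) to show why clearing perishable unsold seats at even 30 percent of list price lifts profit when fixed costs dominate.

```python
# Illustrative arithmetic (all figures invented): auctioning perishable
# unsold seats at a deep discount. Because the cost structure is mostly
# fixed, almost all marginal revenue drops straight to profit.

capacity      = 300        # seats on the flight
load_factor   = 2 / 3      # share of seats sold at list prices
list_price    = 400.0      # average list fare
auction_frac  = 0.30       # unsold seats cleared at 30% of list
marginal_cost = 15.0       # small per-passenger cost (meals, fuel increment)

unsold = capacity * (1 - load_factor)
auction_revenue = unsold * list_price * auction_frac
auction_profit  = auction_revenue - unsold * marginal_cost

# Extra profit versus letting the seats perish unsold:
print(round(auction_profit))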
Finally, some services marketers make good use of service bundling–putting together inclusive packages of services in a way that allows value to the customer to far exceed what he or she would have spent purchasing each component of the bundle individually. Microsoft’s travel Web site, Expedia.com, allows customers to shop for vacations, flights, car rentals, and tours and to combine these into personalized travel bundles, all from one location.
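The economics of such a bundle can be sketched in a few lines. Component names and prices below are invented; the point is simply that the bundle price sits below the sum of the parts, so the customer’s saving is visible at a glance.

```python
# Illustrative bundle economics (prices invented): a personalized travel
# bundle priced below the sum of its individually purchased components.

components = {"flight": 450.0, "hotel": 380.0, "car": 120.0, "tour": 90.0}
bundle_price = 899.0

a_la_carte = sum(components.values())
customer_saving = a_la_carte - bundle_price
print(customer_saving)  # value the customer gains by buying the bundle
```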
Conclusion
Services possess unique characteristics: intangibility, simultaneity, heterogeneity, and perishability. These have traditionally presented serious challenges to the services marketer. Cyberservice has the ability to ameliorate many of the problems traditionally associated with service, and even turn them into singular opportunities. Ironically, in the near future, it may be products that are more troublesome to marketers than services. The Web overturns the traditional hierarchy between products and services. How does cyberservice achieve this? The answer lies in three characteristics of cyberspace–the ability to quantize, search, and automate. Quantization of services (the breaking down of services into their smallest constituent elements) allows unparalleled mass customization (the recombination of elements into unique configurations). Search facilitates hyper-efficient information markets, matching supply and demand at a level previously unattainable. Automation allows service bottlenecks to be bypassed, returning power and choice to the customer, and overcomes the traditional limitations of time and space.
Cases
Charlet, J.-C., and E. Brynjolfsson. 1998. BroadVision . Graduate School of Business, Stanford University, OIT-21.
Huff, S. L. 1998. Scantran . London, Canada: University of Western Ontario. 997E010.
Charlet, J.-C., and E. Brynjolfsson. 1998. Firefly Network (A) . Graduate School of Business, Stanford University, OIT-22A.
1. An earlier version of this chapter appeared in Pitt, Leyland F., Pierre Berthon, and Richard T. Watson. 1999. Cyberservice: taming service marketing problems with the World Wide Web. Business Horizons 42 (1):11-18.
Introduction
Uniquely among the marketing mix variables, price directly affects the firm’s revenue. Thus, the setting of prices is a critical issue facing managers. Traditional economic theory argues that decision-makers are rational, and that managers will set prices to maximize the firm’s surplus. Consumers are similarly rational and will seek to maximize their surplus by purchasing more of a product or service at lower prices than they will when prices are higher. Prices in markets that approach a form of pure competition are set by a confluence of supply and demand, and firms attempt to price goods and services so that marginal revenues equal marginal costs. Yet, in the real world of marketing, there is ample evidence of the bounded rationality of marketing decision-makers who seem to set prices with things other than profit maximization in mind. Pricing strategy sometimes focuses on market share objectives, while at other times it concentrates on competitors by either seeking to cooperate with or destroy them. Frequently, pricing is about brand or product image, as marketers seek to enhance the status of a brand by concentrating on its position in the mind of the customer, rather than on volume. Likewise, customers are in reality as emotional as they are rational, and purchase brands for the status and experiences that they confer, rather than merely on the utility that they provide.
From a marketing perspective, managers have tended to employ a range of pricing strategies to attain various organizational objectives. Most marketing textbooks describe the pricing of new products as high on launch and then the lowering of these prices at a later stage in order to skim the cream off the market. Or, firms attach low prices to new products right from the beginning of the life cycle, in order to ward off competition and penetrate the market. Managers have also resorted to pricing tactics such as discounting and rebates, price bundling, and psychological or odd-number pricing in order to appeal to customers. While theory suggests that customers are rational, the reality of most markets has meant that this rationality is bounded by such issues as product and information availability, the cost of search, and the inability of small customers to dictate price in any way to large suppliers. The advent of a new medium will change–is in fact already changing–the issue of price for both suppliers and customers in a way that is unprecedented. While the Internet, and its multimedia platform, the Web, have been seen by most marketers to be primarily about promotion and marketing communication, the effects that they will have on pricing will in all likelihood be far more profound.
In this chapter, we explore the impact that the Web will have on both the pricing decisions that managers make, and the pricing experiences that customers will encounter. For comfortable marketers, the Web may have the most unsettling pricing implications they have yet encountered; for the adventurous, it will offer hitherto undreamed-of opportunities. For many customers, the Web will bring the freedom of the price-maker, rather than the previously entrenched servitude of the price-taker. We introduce a scheme for considering the forces that determine a customer’s value to the firm, and the nature of exchange. We use this scheme to enable the identification of forces that will affect pricing on the Web, and then suggest strategies that managers can exploit.
Web pricing and the dynamics of markets
For customers, the Web facilitates search. Search engines such as Excite, Yahoo!, and Lycos allow the surfer to seek products and services by brand from a multitude of Web sites all over the world. They are also able to hunt for information on solutions to problems from a profusion of sites, and access the opinions and experiences of their peers in different parts of the world by logging on to bulletin boards and chat rooms. The use of such agents has been touted to reduce buyers’ search costs across standard on-line storefronts, specialized on-line retailers, and on-line megastores, and to transform a diverse set of offerings into an economically efficient market. The new promise of intelligent agents (pieces of software that will search, shop, and compare prices and features on a surfer’s behalf) gives the Internet shopper further buying power and choice.
The search phase in the consumer decision-making process, which can be costly and time-consuming in the real world, is reduced in terms of both time and expense in the virtual. An abundance of choice leads to customer sophistication. Customers become smarter, and exercise this choice by shopping around, making price comparisons, and seeking greatest value in a more assertive way. Marketers attempt to deal with this by innovation, but this in turn leads to imitation by competitors. Imitation leads to more oversupply in markets, which further accelerates the cycle of competitive rationality by creating more consumer choice. The Web has the potential to accelerate this cycle of competition at a rate that is unprecedented in history, creating huge pricing freedoms for customers, and substantial pricing dilemmas for marketers.
There are two simple but powerful models that may enable us to gain greater insight into pricing strategies on the Web. We integrate these into a scheme that is illustrated graphically in Exhibit 1 (Customer value categories and exchange spectrum). The first of these simply applies the well-known Pareto principle, also known as the 80-20 rule, to the customer base of any firm. For most organizations, all customers are not created equal: some are much more valuable than others. For example, one Mexican cellular phone company found that less than 10 percent of its customers accounted for around 90 percent of its sales, and that about 80 percent of customers accounted for less than 10 percent. Seen another way, while margins earned on the most valuable customers allowed the Mexican company to recoup its investment in them in a matter of months, low-value customers took more than six years to repay the firm’s investment in them.
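The Pareto logic above is easy to operationalize against a customer database. The sketch below, with invented customer IDs and revenue figures, finds the smallest group of customers accounting for a target share of revenue, the kind of analysis behind the Mexican cellular example.

```python
# A minimal sketch of a Pareto (80-20) analysis of a customer base.
# Customer IDs and revenues are invented; the point is the
# cumulative-share logic used to split high- from low-value customers.

def pareto_split(revenues, top_share=0.8):
    """Return (top, rest): the smallest set of customers accounting
    for at least `top_share` of total revenue, and everyone else."""
    ranked = sorted(revenues.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(revenues.values())
    top, cum = [], 0.0
    for cust, rev in ranked:
        if cum >= top_share * total:
            break
        top.append(cust)
        cum += rev
    return top, [c for c, _ in ranked[len(top):]]

revenues = {"c1": 900, "c2": 50, "c3": 30, "c4": 15, "c5": 5}
top, rest = pareto_split(revenues)
print(top)  # one customer alone clears the 80% threshold here
```

Run against real transaction data, the same function would yield the C/B/A tiers discussed below.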
In the diagram in Exhibit 1 (Customer value categories and exchange spectrum), we have divided a firm’s customer base into four groups, which may best be understood in terms of the frequent flyer schemes run by most airlines nowadays. By far the largest group numerically, the C category customers nevertheless account for a very small percentage of an airline’s revenues and profits. These are probably customers who are not even members of the frequent flyer program, and if they are, they are likely to be blue card members who never accumulate enough air miles to spend on anything. They are unlikely to be loyal customers; they don’t fly often, and when they do, their main consideration is the ticket price. For the sake of a few dollars, euros, or yen, they will happily switch airlines and fly on less than convenient schedules. Category B customers are like the silver card frequent flyers of an airline. They fly more frequently than Cs, and may even accumulate enough miles or points to claim rewards. However, they are still likely to be price sensitive, and exhibit signs of promiscuity by shopping around for the cheapest fares. The A category customers represent great value to the firm; in airline terms these are gold card holders. They use the product or service very frequently, and are probably so loyal to the firm that they do not shop around for price, even when there may be significant differences between suppliers. Because they represent substantial value to a firm such as an airline, they may be rewarded not only with miles, but with special treatment, such as upgrades, preferential seating, and the use of lounges. Finally, the A+ category represents a very small but very valuable group of customers who account for a disproportionately large contribution to revenues and profits. Not only do these customers reap the rewards of value and loyalty, they are probably known by name to the firm, which performs service beyond the normal for them.
An unsubstantiated but persistent rumor has it that there is a small handful of British Airways customers for whom the airline will even delay the Concorde!
The second model in Exhibit 1 is derived from Deighton and Grayson’s (1995) notion of a spectrum of exchange based on the extent to which an exchange between actors is voluntary. Thus, at one extreme, exchange between actors can be seen as extremely involuntary, as in the case of theft by force. At least one party to this type of exchange does not wish to participate, but is forced to by the other’s actions. At the other extreme, an example of an extremely voluntary form of exchange would be the trading of stocks or shares by two traders on a stock exchange trading floor. This type of exchange is unambiguously fair, with no need for inducement for either party to act. Here, both actors participate entirely voluntarily for mutual gain; neither could buy or sell the same stocks at a better price elsewhere. Indeed, economists would argue that this bilateral exchange is the closest approximation to pure competition in the microeconomic sense. The two fully informed parties believe that each will be better off after the exchange. The market is highly efficient if price itself contains all the information that the parties need to make their decisions. Market efficiency is the percentage of maximum total surplus extracted in a market. In competitive price theory, the predicted market efficiency is 100 percent, where trading maximizes all possible gains of buyers and sellers from the exchange.
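The market-efficiency measure just defined can be computed directly. In the sketch below, buyer valuations, seller costs, and the realized surplus are all invented; maximum surplus comes from matching the highest-valuation buyers with the lowest-cost sellers for as long as the pairing still yields a gain.

```python
# A sketch of market efficiency: the share of the maximum possible total
# surplus that actual trades extracted. Valuations, costs, and the
# realized surplus figure are invented for illustration.

def max_surplus(valuations, costs):
    """Pair highest valuations with lowest costs while surplus is positive."""
    v = sorted(valuations, reverse=True)
    c = sorted(costs)
    return sum(b - s for b, s in zip(v, c) if b > s)

def efficiency(realized_surplus, valuations, costs):
    """Realized surplus as a fraction of the theoretical maximum."""
    return realized_surplus / max_surplus(valuations, costs)

valuations = [100, 90, 60, 40]   # buyers' willingness to pay
costs      = [30, 50, 70, 95]    # sellers' costs
# Suppose the trades that actually occurred extracted 100 units of surplus:
print(round(efficiency(100, valuations, costs), 2))
```

Here the maximum surplus is 110 (the pairs 100-30 and 90-50 trade; the rest yield no gain), so 100 units realized corresponds to about 91 percent efficiency.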
Exhibit 1.: Customer value categories and exchange spectrum
Returning once more to the other end of the spectrum, the next least voluntary form of exchange between actors is theft by stealth, where one actor appropriates the possessions of the other without the other’s knowledge. This follows on to the next point of fraud, where one party to the exchange enters into a transaction with the other in such a way that he or she is deliberately deceived, tricked, or cheated into giving up possessions without receiving the expected payment in return. Back on the other extreme of the spectrum, there are commodity exchanges, where actors buy and sell commodities such as gold, oil, copper, grain, and pork bellies. There is little or no difference between the product of one supplier and another–gold is gold is gold, commodities are commodities. The price of the commodity contains sufficient information for the parties to decide whether they will transact, and one seller’s commodity is exactly the same as another’s.
Between the extremes of the spectrum there is a gray area, which we label a range of marketing effectiveness. Adjacent to fraud there is what Deighton and Grayson refer to as seduction, which is an interaction between marketer and consumer that transforms the consumer’s initial resistance to a course of action into willing, even avid, compliance. Seduction induces consumers to enjoy things they did not intend to enjoy, because the marketer entices the consumer to abandon one set of social agreements and collaborate in the forging of another.
Second, and next to commodities, there is the vast array of products and services purchased and consumed by customers. While the customer may in many cases be seduced into purchasing these, frequently some of these products and services bear many of the characteristics of commodities. In a differentiated market, products vary in terms of quality or cater to different consumer preferences, but frequently the only real differences between them may be a brand name, packaging, formulation, or the service attached to them.
Where does marketing, as we know it, work best along this spectrum of exchange? The answer is, in a narrow band, labeled the range of marketing effectiveness; straddling most products and services, and extending from somewhere near the middle of seduction to somewhere near the near edge of commodities. Here, the parties are not equally informed. There is information asymmetry, and the merit of the transaction is more certain for one party than for the other. Marketing induces customers to exchange by selling to them, informing them, or making promises to them. Obviously, activities such as theft by force or stealth, and also fraud, cannot be seen as marketing. Yet, marketing is also unnecessary, or at best perfunctory, at the other end of the spectrum. Two traders on a stock exchange floor can hardly be said to market to each other when they trade bundles of stocks or shares. The price contains all the information the parties to the transaction need to do the deal. The market is simply too efficient in these areas for marketing to work well; almost paradoxically, it is true to say that marketing is not effective when markets are efficient.
Bringing the two concepts (the Pareto distribution of the customer base, and the exchange spectrum) together may help us understand pricing strategy more effectively, particularly with regard to the effect of the Web on pricing for both sellers and customers. The objective of firms, with regard to the Pareto distribution, should be to:
• migrate as many customers upward as possible. That is, to turn C customers into Bs, Bs into As, and so forth. By doing this, the firm will increase its customer equity or, in simple terms, maximize the value of its customer transaction base.
Forces in the market, however, including competition and customer sophistication, tend to:
• force the customer distribution down, turning As into Bs, and Bs into Cs.
Similarly, in the case of the exchange spectrum, marketing’s task is one of:
• moving products or services away from the zone of commodities, and toward the location of seduction.
Likewise, the marketplace forces of competition and customer sophistication have the effect of:
• commoditization, a process by which the complex and the difficult become simple and easy–so simple and easy that anybody can do them, and does. Commoditization is a natural outcome of competition and technological advance: people learn better ways to make things, and how to do so cheaper and faster. Prices plunge and essential differences vanish. Cheap PCs and mass-market consumer electronics are obvious examples of this.
It is thus incumbent upon managers to understand the forces that may impel markets towards a preponderance of C customers, and products and services towards commodities. Technology is manifesting itself in many such effects, and the Web is an incubator at present. On a more positive note, technology also offers managers some exciting tools with which to overcome the effects of market efficiency and with which to halt, or at least decelerate, the inevitable degradation of the customer base. These are the issues that are now addressed.
Flattening the pyramid and narrowing the scope of marketing
While firms attempt to migrate customers upward in terms of customer value, and to broaden the range of marketing effectiveness on the spectrum of exchange, there are forces at work in the market that push in the opposite direction. While these forces occur naturally in most markets, the effect of information technology has been to put them into overdrive. These forces are now discussed.
Technology facilitates customer search
Information search by customers is a fundamental step in all models of consumer and industrial buying behavior. Search is not without sacrifice in terms of money, and especially, time. A number of new technologies are emerging on the Internet that greatly facilitate searching. These vary in terms of their ability to search effectively, and also with regard to what they achieve for the searcher. Of course, some are well along the road to full development and implementation, and others are still on drawing boards. The tools also range from a simple facilitation of search, through more advanced proactive seeking, to the actual negotiation of deals on the customer’s behalf. However, all hold significant promise. These tools are described briefly in Exhibit 2.
Exhibit 2. Tools that facilitate customer search
Search engine: Software that searches Web sites by key word(s). Examples: AltaVista and HotBot.
Directory: A Web site containing a hierarchically structured directory of Web sites. Example: Yahoo!
Comparison site: A Web site that enables comparisons of a product/service category by attributes and price. Example: CompareNet, a Web site that lists comparative product information and prices.
Shopbot: A program that shops the Web on the customer’s behalf and locates the best price for the sought product. Examples: bots used by the search engines Lycos and Excite.
Intelligent agent: A software agent that will seek out prices and features and negotiate on price for a purchase. Example: Kasbah, a bot being developed by MIT, which can negotiate based on the price and time constraints provided.
At the very least, tools in Exhibit 2, such as search engines, directories, and comparison sites can reduce the customer’s costs of finding potential suppliers, and those of making product and price comparisons. More significantly, the more sophisticated tools, such as true bots and agents, will seek out lowest prices and even conduct negotiations for lower prices.
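At its core, a shopbot of the kind listed in Exhibit 2 is a simple aggregator: gather quotes, compare, return the cheapest. The toy version below uses invented site names and prices; a real bot would scrape or query the storefronts rather than take a hard-coded list.

```python
# A toy shopbot, in the spirit of the tools in Exhibit 2: given quotes
# gathered from several (hypothetical) storefronts, return the best
# offer. Site names and prices are invented for illustration.

def best_offer(quotes):
    """quotes: list of (site, price) pairs. Return the cheapest quote."""
    return min(quotes, key=lambda q: q[1])

quotes = [("store-a.example", 312.00),
          ("store-b.example", 297.50),
          ("store-c.example", 305.99)]

site, price = best_offer(quotes)
print(site, price)
```

The economic effect described in the text follows from exactly this mechanism: once comparison is this cheap for the buyer, sellers' price dispersion collapses.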
Reduction of buyers’ transaction costs
Nobel prize winner in economics, Ronald Coase, introduced the notion of transaction costs to the economics literature. Transaction costs are a set of inefficiencies that should be added to the price of a product or service in order to measure the performance of the market relative to the non-market behavior in firms. Of course, there are also transaction costs to buyers, including consumers. The different types of transaction costs, examples of these for customers, and how the Web may reduce them are illustrated in Exhibit 3. Obviously, some of these transaction cost reductions are real and monetary; in other cases, they may be more psychic in nature–such as the relating of poor service over the Internet on bulletin boards as a form of customer revenge (and this in turn can reduce transaction costs for other customers).
Exhibit 3. Transaction costs and the Web
Search costs (finding buyers, sellers): A collector of tin soldiers wishes to identify sources. He can use search engines and comparison sites, using the search term “tin soldier.”
Information costs (learning): A prospective customer wishes to learn more about digital cameras and what is available. Previously, she would have had to read magazines, talk to knowledgeable individuals, and visit stores. She can now access firm and product information easily and at no cost, obtain comparative product information, and access suppliers on the Web.
Bargaining costs (transacting, communicating, negotiating): The time normally taken by a customer to negotiate can now be used for other purposes, as intelligent agents transact and negotiate on the customer’s behalf. On-line bidding systems can achieve similar results. For example, GE in 1996 purchased USD 1 billion of goods from 1,400 suppliers, and there is evidence of a substantial increase since. Significantly, the firm’s bidding process has been cut from 21 days to 10.
Decision costs: The cost of deciding between Supplier A and Supplier B, or Product A and Product B. The Web makes information available on suppliers (on their own or on comparative Web sites) and on products and services. For example, TravelWeb allows customers to compare hotels and destinations on-line.
Policing costs (monitoring cheating): Previously, customers had to wait to receive statements and accounts, and then check paper statements for correctness. On-line banking enables customers to check statements in real time. Chat lines frequently alert participants to good and bad buys, and to potential product and supplier problems (e.g., the flaw in Intel’s Pentium chip was communicated extensively over the Internet).
Enforcement costs (remedying): When a problem exists with a supplier, how does the customer enforce contractual rights? In the non-Web world, this might require legal assistance. Publicizing the infringement of one’s rights would be difficult and expensive. Chat lines and bulletin boards offer inexpensive revenge, if not monetary reimbursement!
Customers make, rather than take, prices
Particularly in consumer markets, suppliers tend to make prices while customers take them. A notable exception would be auctions, but the proportion of consumer goods purchased in this way has always been very small, and has been mainly devoted to used goods. There are a number of instances on the Web where the opposite situation is now occurring. On-line auctions allow cybershoppers to bid on a vast range of products, and also on services such as airline tickets and hotel rooms. Already, many are finding bargains at the hundreds of on-line auction sites that have cropped up. Onsale.com is a huge auction Web site that runs seven live auctions a week, where people outbid one another for computer gear and electronics equipment. Onsale buys surplus or distressed goods from companies at fire-sale prices, so it can weather low bids.
At a higher level of customer price making, Priceline.com invites customers to name their price on products and services ranging from airline tickets to hotel rooms, and new cars to home mortgages. In the case of airline tickets, for example, customers name the price they are willing to pay for a ticket to a destination, and provide credit card details to establish good faith. Priceline then contacts airlines electronically to see if the fare can be obtained at the named price or lower, and undertakes to return to the customer within an hour. Priceline’s margin is the differential between the customer’s offer price and the fare charged by the airline.
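The matching logic behind such a name-your-price intermediary can be sketched in a few lines. Airline names, fares, and the offer below are invented; the mechanic is as described: accept the offer if any supplier’s fare is at or below it, and keep the differential as margin.

```python
# A sketch of name-your-price mechanics, as described above: the
# intermediary accepts the customer's offer if some supplier can fill
# it at or below the offered price, keeping the difference as margin.
# Supplier names and fares are invented.

def match_offer(offer, supplier_fares):
    """Return (accepted, margin). Margin = offer minus cheapest
    qualifying fare; (False, 0.0) if no supplier can meet the offer."""
    eligible = [f for f in supplier_fares.values() if f <= offer]
    if not eligible:
        return False, 0.0
    return True, offer - min(eligible)

fares = {"airline-x": 240.0, "airline-y": 210.0, "airline-z": 265.0}
accepted, margin = match_offer(230.0, fares)
print(accepted, margin)
```

Note that the customer never sees which airline filled the offer, which is what lets the intermediary capture the spread.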
Customers control transactions
Caterpillar uses its Web site to invite bids on parts from preapproved suppliers. Suppliers bid on-line over a specified period and a contract is awarded to the lowest bidder. Negotiation time is reduced and average savings on purchases are now 6 percent. In this way, the customer has taken almost total control of the transaction, for it has become difficult for suppliers to compete on anything but price. There is little opportunity to differentiate products, engage in personal selling, or to add service, as traditional marketing strategy would suggest suppliers do.
A return to one-on-one negotiation
In pre-mass market times, buyers and sellers negotiated individually over the sale of many items. It is possible that markets can move full circle, as buyers and sellers do battle in the electronic world. The struggle should result in prices that more closely reflect their true market value. We will see more one-on-one negotiation between buyers and sellers. As negotiation costs decrease significantly, it might be practical to have competitive bidding on a huge range of purchases, with a computer bidding against another computer on behalf of buyers and sellers.
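What computer-versus-computer bidding might look like can be sketched with a toy concession protocol: each agent moves toward the other by a fixed step per round until their offers cross, or until both hit their limits with no zone of agreement. All parameters below are invented.

```python
# A toy bilateral negotiation between a buyer's agent and a seller's
# agent: each concedes a fixed step per round until the offers cross.
# Limits, starting offers, and the step size are all invented.

def negotiate(buyer_limit, seller_limit, buyer_start, seller_start,
              step=5.0, max_rounds=100):
    """Return the agreed price, or None if the limits never overlap."""
    bid, ask = buyer_start, seller_start
    for _ in range(max_rounds):
        if bid >= ask:                       # offers crossed: split the difference
            return (bid + ask) / 2
        bid = min(bid + step, buyer_limit)   # buyer concedes upward
        ask = max(ask - step, seller_limit)  # seller concedes downward
        if bid == buyer_limit and ask == seller_limit and bid < ask:
            return None                      # both at their limits, no deal
    return None

print(negotiate(buyer_limit=120, seller_limit=90,
                buyer_start=80, seller_start=140))  # → 110.0
```

Real negotiating agents such as the Kasbah project use richer concession strategies, but the essential point stands: once negotiation is this cheap, it becomes practical on even small purchases.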
Commoditization and efficient markets
The first goods to be bartered in electronic markets have been commodities. Price rather than product attributes, good selling, or warm advertising, is the determining factor in a sale. When the commodity happens to be perishable–such as airline seats, oranges, or electricity–the Web is even more compelling. Suppliers have to get rid of their inventory fast or lose the sale. The problem on the Web is that when customers can easily compare prices and features, commoditization can also happen to some high-margin products. Strong brand names alone may not be enough to maintain premium prices. In many cases, branded products may even prove to be interchangeable. While customers may not trust a new credit card company that suddenly appears on the Web because they do not know its name, they may easily switch between Amex and Diners Club, or Visa and MasterCard.
Migrating up the pyramid and more effective marketing
It is possible that a marketer considering the forces discussed above may become pessimistic about the future of marketing strategy, especially concerning the flexibility of pricing possibilities. Yet, we contend that all is not doom and gloom, and that there are strategies which managers may exploit that will allow them to migrate customers up the Pareto pyramid, and which will make marketing more effective in a time of market efficiency. These strategies are now discussed.
Differentiated pricing all the time
The information age, and the advent of computer-controlled machine tools, lets consumers have it both ways: customized and cheap, automated and personal. This deindustrialization of consumer-driven economics has been termed mass customization. The Web has already been an outstanding vehicle for mass customization, with personalized news services such as CNN and PointCast, personalized search engines such as My Yahoo!, and the highly customized customer interaction pages of on-line stores such as Amazon.com. However, the Web also gives marketers the opportunity to exploit a phenomenon that service providers such as airlines have long known: the same product or service can have different values to different customers. Airlines know that the Friday afternoon seat is more valuable to the business traveler, and charge accordingly. The Web should allow the ultimate in price differentiation: by customizing the interaction with the customer, the price can also be differentiated to the ultimate extent, so that no two customers pay the same price.
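One way to picture such differentiation is a pricing rule that adjusts a base fare from signals about the customer’s likely willingness to pay. The segments, signals, multipliers, and prices below are all invented; the point is only that the same seat is quoted differently to different profiles.

```python
# A sketch of differentiated pricing for the same seat: the quoted fare
# depends on (hypothetical) signals about willingness to pay. Segments,
# signal names, and multipliers are all invented for illustration.

BASE_FARE = 200.0

def personalized_fare(profile):
    """Adjust the base fare using simple demand signals."""
    fare = BASE_FARE
    if profile.get("trip_purpose") == "business":
        fare *= 1.6          # business demand is less price-elastic
    if profile.get("days_ahead", 30) < 3:
        fare *= 1.3          # last-minute urgency
    if profile.get("loyalty_tier") == "gold":
        fare *= 0.9          # retain high-value customers
    return round(fare, 2)

print(personalized_fare({"trip_purpose": "business", "days_ahead": 2}))
print(personalized_fare({"trip_purpose": "leisure", "days_ahead": 40,
                         "loyalty_tier": "gold"}))
```

In practice airlines implement this through fare classes and yield-management systems rather than per-customer rules, but the Web makes the per-customer version technically feasible.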
Creating customer switching barriers
Technology allows sellers to collect detailed data about customers’ buying habits, preferences, even spending limits, so they can tailor their products and prices to the individual buyer. Customers like this because it recognizes them as individuals and serves them better–recommends books that match their preferences, rather than some critic’s; advises on music that matches their likes, rather than the top twenty; and puts them in touch with people or jobs that match them, rather than a list of names or an address list of employers. This, in turn, creates switching barriers for customers that competitors will find difficult to overcome by mere price alone. While the customer may be able to purchase the product or service at a lower price on another Web site, that site will not have taken the time or effort to learn about the customer, and so will not be able to serve the customer as well. In terms of economics, the customer will not actually be purchasing the same item.
Use technology to de-menu pricing
Most firms have resorted to menu or list pricing systems in the past to simplify the many problems caused by attempting to keep prices recorded and up-to-date. Pricing is not just about the Web: within firms, there can be private networks or extranets that link them with their suppliers and customers. Extranets make it possible to get a precise handle on inventory, costs, and demand at any given moment, and to adjust prices instantly. Without automation, there is a significant cost associated with changing prices, known as the menu cost. For firms with large product or service lines, it used to take months for price adjustments to filter down to distributors, retailers, and salespeople. Streamlined networks reduce menu cost and time to near zero, so there is no longer a good excuse for not changing prices when they need to be changed.
Be much better at differentiation: stage experiences
The more like a commodity a product or service becomes, the easier it is for customers to make price comparisons and to buy on price alone. Marketers have attempted to overcome this in the past by differentiating products by enhancing quality, adding features, and branding. When products reached a phase of parity, marketers entered the age of service, and differentiated on the basis of customer service. However, in an era of increasing service parity, it is the staging of customer experiences that may be the ultimate and enduring differentiator. The Web provides a great theater for the staging of unique personal experiences, whether esthetic, entertaining, educational, or escapist, and for which customers will be willing to pay.
Understand that customers may be willing to pay more
Marketers will make a big mistake by assuming that customers will expect and want to pay less on the Web than they do in conventional channels. Indeed, managers in many industries have a long record of assuming that customers underestimate the value of a product or service to them, and would typically pay less for it if given the chance. There is a very successful restaurant in London that invites customers to pay for a meal what they think it is worth. Some exploit the system and eat for free; however, on average, customers pay prices that give the establishment a handsome margin.
Consider total purchase cost
The purchase price is one element of the total cost of acquiring a product or service. Searching, shipping, and holding costs, for instance, can contribute substantially to the acquisition cost of some products. In those circumstances, where Web-based purchasing enables a customer to reduce the total cost of a purchase, that person may be willing to pay more than through a traditional channel. This argument can be formulated mathematically.
Let T = total acquisition cost,
P = purchase price,
O = other costs associated with the purchase (including opportunity costs);
then T = P + O.
If we use w and t as subscripts to refer to Web and traditional purchases, then, all things being equal, consumers will prefer to purchase via the Web when:
Tw < Tt.
Furthermore, consumers should be willing to pay a premium of δ = Pw – Pt, provided that δ < Ot – Ow.
For industrial buyers, opportunity costs may be a significant component of the total costs of a purchase. Also, particularly busy consumers will recognize the convenience of Web purchasing. Both of these groups are likely to be willing to pay a premium price for products purchased via the Web, if the result is a reduction in the total purchase cost. As a general pricing strategy, Web-based merchants should aim to reduce customers’ Ow so they can raise Pw to just below the point where Tw = Tt.
The Web creates new ways for sellers to reduce the total costs that are faced by purchasers. Sellers can capitalize on these cost reductions by charging higher prices than those that are charged in traditional outlets.
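The total-cost argument can be checked with a few lines of code. The sketch below uses entirely hypothetical prices and costs, and the helper `total_cost` is our own illustrative naming, not something from the text:

```python
# Illustrative sketch of the total-acquisition-cost argument.
# All figures are invented; T = P + O as defined in the text.

def total_cost(price, other_costs):
    """Total acquisition cost T = purchase price P + other costs O."""
    return price + other_costs

# Hypothetical figures: a product priced slightly higher on the Web,
# but with far lower search, travel, and opportunity costs.
p_web, o_web = 24.00, 2.00         # Pw, Ow
p_trad, o_trad = 22.00, 9.00       # Pt, Ot

t_web = total_cost(p_web, o_web)     # Tw
t_trad = total_cost(p_trad, o_trad)  # Tt

# Consumers prefer the Web channel whenever Tw < Tt.
assert t_web < t_trad

# The premium delta = Pw - Pt is sustainable while delta < Ot - Ow.
delta = p_web - p_trad
assert delta < (o_trad - o_web)

# The pricing strategy from the text: raise Pw to just below the
# point where Tw = Tt, i.e. just under Pt + Ot - Ow.
p_web_max = p_trad + o_trad - o_web
print(p_web_max)
```

With these invented numbers the Web merchant charges \$2 more than the traditional channel yet still leaves the buyer \$5 better off in total cost, so the price could in principle rise to just below \$29.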
Establish electronic exchanges
Many firms, particularly those in business-to-business markets, may find it more effective to barter rather than sell when prices are low. A number of electronic exchanges have already been successfully established to enable firms to barter excess supplies of components or products that would otherwise have been sold at very low prices. In this way, the firm rids itself of excess stock and receives value in exchange in excess of the price that would have been realized. For example, Chicago-based FastParts Inc. and FairMarket Inc. in Woburn, Massachusetts, operate thriving exchanges where computer electronics companies swap excess parts.
Maximize revenue not price
Many managers overlook a basic economic opportunity: in many instances, it is better to maximize revenue rather than price. Airlines have perfected the science of yield management, concocting complicated pricing schemes that not only defy customer comparison, but also permit revenue maximization on a flight, even though the average fare might be lower. Many airlines now use Web sites to sell tickets on slow-to-fill or ready-to-leave flights, either as specials or in ticket auctions. They also make use of external services, such as Priceline.com, wherein the customer, in a real sense, creates an option (the right, but not the obligation, to sell a ticket) that lets the airline both discern market conditions and sell last-minute capacity. Apart from their own Web sites, airlines, hotels, and theaters can also use sites such as lastminute.com to market seats, rooms, and tickets a day or two before the due date.
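A toy calculation may make the revenue-versus-price point concrete. The segment sizes, fares, and fencing rule below are all hypothetical; the sketch simply shows that segmented fares can raise total revenue even though the average fare falls:

```python
# Hypothetical illustration of revenue vs. price maximization on a
# 100-seat flight with two invented customer segments.

seats = 100
business = {"size": 30, "willing_to_pay": 400}
leisure = {"size": 90, "willing_to_pay": 120}

def revenue_single(fare):
    """Revenue from charging one fare to everyone."""
    buyers = sum(seg["size"] for seg in (business, leisure)
                 if seg["willing_to_pay"] >= fare)
    return fare * min(buyers, seats)

high_only = revenue_single(400)  # only business flyers buy
low_all = revenue_single(120)    # everyone buys, but the plane is full

# Yield management: fence the segments (e.g. advance-purchase or
# Saturday-night-stay rules) and charge each near its willingness to pay,
# filling the remaining 70 seats with leisure travelers.
managed = 30 * 400 + 70 * 120

print(high_only, low_all, managed)
```

Under these invented figures the managed scheme earns more than either single price, while its average fare (\$204) is far below the \$400 business fare, which is exactly the pattern the text describes.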
Reduce the buyer’s risk
Every purchase incorporates an element of risk, and basic finance proclaims that risk and return are directly related. Thus, consumers may be willing to pay a higher price if they can lower the risk of their transaction.
Consider the case of auto dealers who can either buy a used car at an auto auction or purchase on-line via the Web. With on-line buying, it is possible for dealers to reduce their risk. Dealers can treat the on-line system as part of their inventory and sell cars off this virtual lot. The dealer can buy cars as needed to meet customer demand. In the best-case scenario, a buyer requests a particular model, the dealer checks the Web site, puts a hold on a particular car, negotiates the price with the buyer, and then buys the car via the Web. In effect, the dealer sells the car before buying it. In this case, the dealer avoids the risks associated with buying a car in anticipation of finding a customer.
Dealers can be expected to pay a premium when the risk of the transaction is reduced. As Exhibit 4 illustrates, some dealers may perceive buying a car at an auction as higher risk, and thus expect a higher return compared to buying on-line. The difference in the return is the premium that a dealer will be willing to pay for a car purchased on-line, all other things being equal.
Exhibit 4. Risk and return trade-off
Web-based merchants who can reduce the buyer’s risk should be able to command a higher price for their product. Typical methods for reducing risk include higher quality and more timely information, and reducing the length of the buy and resell cycle. This risk effect that we describe should be equally applicable to both organizational buyers and individual consumers. Again, the Web creates a special opportunity for sellers to reduce the risks that buyers face. In turn, sellers can charge a higher price to buyers for this benefit (risk reduction), which has been created on-line.
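The dealer’s premium can be expressed as the gap between the maximum prices implied by each channel’s required return. The following is a hypothetical sketch of the risk/return trade-off, with invented figures and a deliberately simple one-period return model:

```python
# Hypothetical sketch: a dealer demands a higher expected return on a
# riskier channel; the gap in required returns translates into a premium
# the dealer will pay for the lower-risk on-line purchase.

expected_resale = 11_000        # invented expected resale price of the car

required_return_auction = 0.12  # riskier: buy blind, hold inventory
required_return_online = 0.07   # lower risk: buy after the customer commits

# Maximum purchase price in each channel that still earns the required
# return: price = resale / (1 + required return).
max_price_auction = expected_resale / (1 + required_return_auction)
max_price_online = expected_resale / (1 + required_return_online)

# The premium for the on-line car, all other things being equal.
premium = max_price_online - max_price_auction
print(round(premium, 2))
```

Under these assumptions the dealer rationally pays several hundred dollars more for the on-line car, purely because the transaction carries less risk.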
Conclusion
The Internet and the World Wide Web will have a fundamental influence on the pricing strategy of firms. Similarly, the technology will open many doors to buyers hitherto closed by the effects of time, cost, and effort. In this chapter, we have illustrated the effects of the new technology on price from two perspectives. First, the technology has the potential to change the shape and structure of the firm’s customer base. At worst, it will flatten the customer base, turning the majority of a firm’s customers into transactional traders who buy on the spot market. However, used wisely, it has the potential to migrate a significant number of a firm’s customers up the value triangle, narrowing the customer base and enabling the firm to build relationships with customers that negate the impact of mere price alone.
Second, the new medium has the potential to move customers along the exchange spectrum in ways, and at rates, that have not hitherto been experienced. Technology may combine with market forces to reduce the vast majority of a firm’s transactions to the level of commodity trades, leaving managers with little opportunity to make prices. A far more optimistic scenario, however, sees managers using the technology in combination with other marketing strategies to seduce the customer into a mutually valuable relationship. The chapter identifies the effects of technology and the forces in the market that have the potential to flatten and homogenize customer base triangles and shift customers disproportionately towards the commodity end of the exchange spectrum. The chapter also identifies a number of approaches available to managers to put the brakes on these processes and, indeed, to use the new technology to accelerate more effective pricing strategy.
Marketers have always viewed price as one of the instruments of policy in the marketing mix: a variable which, theoretically at least, can be manipulated and controlled according to circumstances in the business environment and the nature of the target market. In practice, however, many pricing decisions are not taken by marketers, and are based more on issues such as cost and competition than on any notion of customer demand. Seen pessimistically, price decision making has been, and may continue to be, a mechanistic process of calculating costs and applying markups, or a knee-jerk reaction to market conditions and competitive behavior. A more optimistic view is that pricing decisions can be as creative as those taken with regard to the development of new products and services, or the development of advertising campaigns. Indeed, pricing may be the last frontier for marketing creativity. Ignored or utilized mechanically, the Internet and the Web may be the vehicles that destroy the last vestiges of managerial pricing discretion. In the hands of the wise, these vehicles may be the digital wagons that carry pricing pioneers to the edge of the cyber frontier.
Cases
McKeown, P. G., and R. T. Watson. Manheim Online. Terry College, University of Georgia. Contact [email protected] for a copy.
Introduction
How are we to make sense of the Web and our involvement in it? This issue is no light matter, for how we make sense of what was, and is, delimits what will be. Thus, as more and more organizations establish a presence on the Web, the question of how to exploit the new medium presents challenges to practitioners and academics alike. How should economic and symbolic activity be conducted and conceptualized? How can we make sense of the new medium and our involvement in it? Different assumptions about this new medium will result in diverse activities–and the accompanying creation of different futures, and for businesses, varying degrees of marketing success or failure. This chapter explores the phenomenon of the Web using themes characterizing postmodernism, which is a collection of practices and thoughts that characterizes the information age. Postmodernism offers unique insights into information-rich contexts such as the Web.
Current media views and perspectives on the Web vary from dismissing it as a fad, to acclaiming it as the most significant contribution to communication since Gutenberg’s invention of movable type. Trying to make sense of the Web is no simple matter, yet as an increasing number of organizations establish a presence in the medium, the need becomes pressing. Traditional models of business are unlikely to prove effective. While trends such as changing technology, commercialization, globalization, and demographics are important in understanding the Web, they represent only half the story.
More fundamental shifts can be uncovered by changing to a higher level of abstraction, by shifting from elements to relationships. Such has been the work of a divergent body of thinkers from artists to philosophers, historians to scientists, whose fragmented works have come to be known as postmodern. Indeed, postmodernism is seen as the label for thinking that resonates most strongly with the Information Age, just as modernism was the philosophy that embodied the Industrial Age. While there is little agreement on, or indeed collective understanding of, what constitutes postmodernism, various broad, overlapping themes are discernible.
In this chapter, we explore the Web through the postmodern themes of fragmentation, dedifferentiation, hyperreality, time and space, paradox, and anti-foundationalism. The first two themes–fragmentation (disintegration) and dedifferentiation–represent the opposites (or counterparts) of two of modernism’s favorite systems concepts, integration and differentiation. The themes of hyperreality and space-time counter the traditional modernist assumptions of what constitutes reality and progress. Anti-foundationalism, pastiche, and pluralism all question the modernist love of the one right answer (theory, way, view, voice, etc.). Although these themes are present in all media, we argue that it is the Web that most typifies postmodernist thought. This may be an important insight, for virtual realms (of which the Web is perhaps the most important) comprise perhaps the greatest marketing and organizational challenge and opportunity of the late twentieth century. Moreover, it was marketing practitioners who were among the first to embrace and explore the Web. Indeed, some argue that, beyond being a technological medium, the Web is primarily a marketing medium.
What is modernism?
Modernity comprises those efforts to develop objective knowledge, absolute truths, universal morality and law, and autonomous art. It is the sustained attempt to free human thinking and action from the irrationality of superstition, myth, and religion. It comprises the basic summons toward human emancipation, clearly enunciated in the Enlightenment, a philosophical movement of the eighteenth century that emphasized the use of reason to bring about humanitarian reforms. Modernism has, at its heart, the idea of the rational person as the primary vehicle for progress and liberation. It stresses unity (underneath we are all the same) and progress (tomorrow will be better than today). So, to be modern is to find oneself in an environment that promises adventure, power, joy, growth, and transformation of ourselves and the world. Its themes, in contrast to postmodernism, comprise integration, differentiation, objective reality, linear time and delineated space, orthodoxy, unity, and foundationalism.
And Post-Modernism?
Modernism and postmodernism can be thought of as umbrella terms comprising many threads. However, modernism is a more coherent movement (because it values coherence) that has at its heart one fairly distinct core philosophy, ideology, and belief system. In contrast, postmodernism is characterized by multiple ideologies, multiple philosophies, and multiple beliefs. Indeed, postmodernism in some of its many guises actively seeks to undermine ideology and belief. Although nominally a late twentieth-century movement, postmodernism has intellectual roots that can be traced back to Heraclitus, a fifth-century B.C. philosopher. The movement seeks to undermine and debunk the assumptions underpinning previous ages’ thought systems and discourses. Obviously, this has the potential of degenerating into a rejection of everything.
The differences between modernism and postmodernism are summarized in Exhibit 1 (Themes–modern and postmodern perspectives). We explore these issues specifically in relation to the Web. The specific themes employed are fragmentation, dedifferentiation, hyperreality, time and space, paradox, and anti-foundationalism.
Exhibit 1. Themes–modern and postmodern perspectives
Theme | Modernism | Postmodernism
Relationships between elements in a system | Integration and differentiation | Disintegration (fragmentation) and dedifferentiation
Reality | Reality is objective, “out there,” discovered, and physical–“reality” | Reality is subjective, “in here,” constructed, and imagined–“hyperreality”
Time and space | Linear, unitary, progressive chronology; space is delineated–space is time | Cyclic, multithreaded, fragmented chronology; space is imploded (negated)–time is space
Values | Orthodoxy, consistency, and homogeneity | Paradox, reflexivity, and pastiche
Attitude towards organizations and the social institutions that produce them | Foundationalism | Anti-foundationalism
Before commencing our exploration, a number of points should be made. First, there are aspects of the Web that are undeniably modern. Indeed, the Web can be viewed as the latest technological development of the modernist dream of adventure, progress, and liberation. However, it is our intention to focus on the Web’s postmodern aspects. Second, ironically and yet relevant to a discussion of postmodernism, it is only the existence of a modern infrastructure (computers, integrated networks, and universal communication protocols) that enables a virtual and quintessentially postmodern world to be created. Finally, although the themes discussed are presented as distinct categories, this is for presentation purposes only. The categories are far from mutually exclusive–each contains, reflects, and refracts elements of the other.
Each theme is now discussed in turn under two sections. First, the theme is outlined in general abstract terms. Second, it is explored in specific relation to the Web.
Fragmentation
There is fragmentation or disintegration of traditional systems at all levels, including countries (the U.S.S.R. has broken up into many autonomous republics and the U.K. is devolving to give power to elected parliaments for Scotland and Wales), social groups (the family), political parties (the Communist party in many countries), and organizations (AT&T broke into three businesses in 1996). People’s lives are becoming increasingly disjointed and fragmented in contemporary society.
Fragmentation and the Web
Fragmentation is apparent in a number of different spheres on the Web. First, the Web offers the ultimate in niche marketing: millions of discussion groups, newsgroups, special interest groups, and a greater diversity of products and services than any shopping or strip mall. Indeed, a significant amount of the material placed on the Internet is designed to reach a single person, a handful of people, or a group of less than 1,000.
Second, the very fact that people find companies’ Web sites, rather than companies finding prospective customers, as in traditional media, means that the premise of mass marketing is rendered questionable at best, and irrelevant at worst. The advent of push technologies, though, may render part of the Web a little more familiar to traditional marketing. However, to bank on this is to misunderstand the nature of the Web and ignore its possibilities.
Third, people experience and behave differently in the new medium, with the Web resulting in a fragmentation of consensus. Research suggests that people feel more able to disagree and express differences in virtual media, and specifically on the Internet. Respondents in computer-mediated environments are more frank on sensitive topics, yet more inclined to offer false information in order to avoid identification. There is a lack of self-awareness and self-regulation of behavior. As well, the new medium has fueled and facilitated, to an unprecedented degree, the fragmentation of the self. Individuals participating in MUDs, MOOs, and discussion groups regularly adopt multiple, often contradictory identities, personas, and personalities. For example, research reports that 20 percent of participants in these forums regularly pose as the opposite gender.
Fourth, the Web is the ultimate global presence. This would seem to result in unprecedented unification and integration, yet the more closely we are linked, the more pronounced our differences become. Digitization breaks down wholes or entities (people, personalities, human beings) into millions of fragments, disconnected minutiae that can then be recombined across people into dehumanized profiles. This fragmentation mirrors the underlying Internet communication protocol, packet switching, which disassembles messages into packets. These fragments, mingled with many other fragments, are transported from sender to receiver, where they are finally reassembled. The Web takes this digitization and packetizing to unprecedented lengths, with Internet companies, from banks to bookshops, typically knowing much more about their customers than traditional marketplace-based firms. Yet, paradoxically, as technology facilitates the much sought-after one-to-one customer interaction, the customer becomes ever more fleeting, for the same technology allows customers to recreate and reinvent themselves in a collage of new co-existing images.
The Web fragments, and the successful Web companies of tomorrow, will exhibit this process–because their customers will.
Dedifferentiation
The dedifferentiation [1] of traditional system boundaries comprises the blurring, erosion, elimination, and washing away of established political, social, and economic boundaries (be these hierarchical or horizontal). Examples include boundaries between high and low culture, education and entertainment, teaching and acting, politics and show business, programs and advertisements, philosophy and literature, fact and fiction, author and reader, science and religion, producer and consumer. It is the dissolution of established distinctions that is captured by terms such as edutainment (an entertaining computer program that is designed to be educational), infomercial (a television show that is an extended advertisement), and docudrama (a drama dealing freely with historical events).
Dedifferentiation and the Web
The Web dissolves perimeters of time, place, and culture: boundaries between nations, between home and work, between intimate time and business time, between night and day, and between individuals and organizations. There is no sovereignty in a boundaryless, electronic world. Capital, consumers, and corporations, in the form of communication packets, cross political boundaries millions of times every day. We explore two distinctions that the Web is blurring: fact and fantasy, and public and private.
First, although hyperreality will be discussed in detail in the next section, it is important to point out that the distinction between reality and virtual reality diminishes on the Web. Fact and fantasy combine, and the distinction between representations and their physical form becomes increasingly blurred. As Web usage increases, and more and more cultural objects are viewed on computer screens, there is likely to be a growing confusion of the representation with the original objects it portrays. Amazon.com, promoted as the world’s largest bookstore, physically stocks only a few best-sellers. The Web site is the defining presence. The reality is created not by bricks, mortar, and paper, but by digitized fragments displayed on a computer screen.
An example from the Web that illustrates this, and also the resulting blurring of the distinction between high and low culture, is Le Musée Imaginaire. Le Musée Imaginaire sells paintings by the world’s most famous artists, such as Van Gogh, Canaletto, and Turner, to the world’s most famous people, such as Arnold Schwarzenegger, Sophia Loren, and Michael Jackson. The irony is that they are all fakes–genuine authentic fakes. (This can be taken both ways: the pictures are fakes, and the people who buy them are fakes in the sense of being actors and actresses.) The fact that the site has received no less than 15 Web-design or cool-site awards is testimony to a cyberculture that values the image equal to, or indeed over and above, the real. Indeed, given exact replication, how can one distinguish the authentic from the fake?
A search engine may return 10,000 hits on Shakespeare, but cannot tell you which sites contain genuine content written by the Bard, which contain informed discussion of his works, or which are complete nonsense. This echoes the widespread problem in cyberspace of establishing authenticity and, indeed, questions the very notion of our prior conceptual distinctions. When everything is a re-presentation, how can one speak of an original?
The distinction between private and public is also rendered especially problematic on the Web. All activity (personal and commercial) in cyberspace is routinely monitored to a degree unimaginable in the physical world. A person’s activities can be, and routinely are, catalogued in minute detail, and used to build intimate and revealing profiles of that person. People remain ambivalent to this monitoring, for on the one hand, it can help in channeling products and services that have added value to the individual, while on the other, it can represent a flagrant breach of a person’s privacy.
In summary, the Web blurs the distinction between private and public in such a way as to make it difficult to compartmentalize our lives in the same way as in the physical world.
Hyperreality
Hyperreality occurs wherein the artifact is even better than the real thing. In a three-stage process, we have (1) the real original, (2) the image of the original, and (3) the image uncoupled and freed from the real original. Examples include the fantasy world of theme parks (Disneyland), virtual reality (role-playing MUDs, MOOs, and GMUKs), situation comedies (Third Rock from the Sun), films (The Lost World), and computer games (Myst). These are examples of what was previously considered a simulation or reflection becoming real–indeed, more real than the real thing. Hyperreality provokes a general loss of the sense of authenticity–i.e., of what is genuine, real, or original.
Hyperreality and the Web
The Web is hyperreality. Surfers experience telepresence–the extent to which persons feel present in the hypermedia environment of the Web–when they enter states of high flow. During periods of high flow, time stands still, energy is boundless, and action is effortless. The Web surfer is at one with the Internet, in the same sense that an ocean surfer can get totally immersed in a wave. Thus, surfing is an apt metaphor for describing sustained Web browsing.
Telepresence and flow can lead to addictive surfing, where the normal world is rejected in favor of the virtual, and often fantasy, world of the Web. For example, PJC Ventures is selling plots of land via the Web for USD 9.95 per 100 acres. There is nothing particularly hyperreal about this, other than possibly the low price, until one finds out that the plots are on Mars, Pluto, and the other planets! The detachment from reality becomes even more extreme in the face of a U.S. Supreme Court ruling and a 1967 multilateral treaty specifying that no person or country can own any part of space. Despite this, some 1,000 plots of land have been sold on Mars and a further 13,000 on the Moon.
The sense of hyperreality is magnified as it becomes increasingly difficult to distinguish between genuine and spoof sites (e.g., Microsnot vs. Microsoft), and between professional sites run by qualified practitioners and amateur sites run by unqualified enthusiasts (e.g., British Medical Journal vs. Dr. Mom). Digital images can be, and are, seamlessly modified. Consider the site Hillary’s Hair, which allows surfers to view a vast range of pictures of the First Lady sporting various hairstyles, ranging from the elegant to the very unflattering.
A more dramatic illustration of the hyperreal world created by the Web is the case of bots or intelligent agents, which are autonomous, humanlike computer programs that can help in a variety of tasks. Bots can maintain and optimize your computer, navigate through a complex on-line file structure, and advise players in MUDs, MOOs, etc. Bots are virtual creations designed to pass as human beings. As the sophistication of these agents increases, people have been observed to develop emotional relationships with these bots, often unaware that they are virtual creations. However, perhaps even more importantly, those who are aware that these agents are virtual, still find themselves emotionally engaged and treat them as real people.
The case of Julia, an agent of the Mass-Neotek family of robots, has been documented by Foner [1993], who recalls people’s attitudes towards, treatment of, and emotional involvement, with the robot as a real person. Furthermore, he reproduces the log of an amusing, yet faintly troubling series of exchanges, covering a 13-day period, between Julia and a love-smitten suitor called “Barry” (name changed), who was blissfully unaware of her virtuality. As Foner wryly observes, it was not entirely clear whether Julia had passed a Turing test [2] or Barry had failed one.
In conclusion, the Web represents a new context where human agents are replaced with virtual agents, and reality is superseded by hyperreality.
Time and space
In the postmodern world, there has been a shift from the standard of linear progress, where the future is always something better than the past, to a model of circularity, where the past is continually recycled, reused, reinterpreted, and reinvented. Similarly, our experience of space has changed–the world has become a village and the universe, a microverse. These changes portray a general collapse and fragmentation of time and space.
Time and space on the Web
Cyberspace is not a matter of place, but the instant, the eternal present, where pasts and futures are continually recycled in eternal replication. In the computer world of the Web, the physical real is digitized and the digital becomes the real.
Electronic speed has fueled and facilitated the collapsing of space and time in all media. Many traditional media are unable to keep up. Thus, products are often out of date before the consumer gets them home: clothes, software, newspapers, and magazines (the news and weather are now reported immediately on the Web, rendering many newspapers out of date and irrelevant). In contrast, on the Web the only real currency is the current. For example, one of the authors recently bought the latest version of Norton Anti-Virus, only to be confronted, on loading the software, with the warning that the virus library used to identify malicious code was out of date. However, the program also offered to download the latest library via the Web. This principle is taken one stage further by an innovative piece of software, Oil Change, which allows a person’s computer to automatically update its software via the Web the instant an upgrade becomes available. It also undoes any changes so that the user can work with previous versions of the software if he or she chooses.
The Web enables on-line, 24-hour, 365-day buying, selling, and consuming, with real-time delivery of certain products, services, and software. The Web facilitates the decoupling of local time and local space, the desynchronization of local schedules, and the synchronization of global ones. Thus, a wired person can work or teach a class simultaneously in Paris, New York, and Tokyo–while living in the Alps.
The two sides of postmodern time, desynchronization and synchronization, are particularly apparent in cyberspace. On the one hand, the Web is the ultimate source of instant gratification, while on the other, the Web is the ultimate titillation, where gratification is always deferred–one click, one instant, one hypertext link away. The Web feeds desire’s ultimate object, desire. This may explain the addictive, drug-like nature of cyberspace commented on in many magazines and newspapers. Surfing the Web echoes the all-consuming board-surfer’s search for the perfect wave.
Fragmentation and digitization of time and space allow recombination into novel configurations that surpass the traditional limitations of space and time. Thus, the Web is facilitating an explosion of virtual companies: teleworking (where distance is negated) replaces local-working (where space and distance predominate–i.e., commuting distance, physical location, quality of the physical offices, etc.).
The U.K.-based Internet Shopper Ltd. is run entirely through Web-mediated teleworking, boasting a staff of some 20, all of whom work from home. Employees are based all over the U.K., from the South East Coast to the Scottish Highlands. All staff were hired over the Internet, work via the Internet, socialize via the Internet (many of the staff have never met face to face), and find their next job via the Internet. Products are developed, refined, sold, and supported via the Internet. In this case, teleworking has dramatically changed working patterns. Employees can structure their days as they please, working when it suits them rather than when they are traditionally expected to be at work. Furthermore, the distinction between work and holiday is becoming increasingly blurred, with employees working via cell phones while basking on the beach.
Finally, the Web is also the ultimate source of endless recycling, replaying, and re-editing of the past. Consider retro-software and retro-computer sites, where one can relive the earliest versions of Space Invaders or run a favorite Sinclair ZX Spectrum program. Furthermore, because all communication can be recorded on the Web, it is possible for people to relive on-line relationships at any time. Alexa is creating an Archive of the Web for pages that are no longer available. You can relive your favorite Web site of 1996, even though it was erased a year ago.
Paradox, reflexivity, and pastiche
Postmodernism values the other, the paradox (literally that which is beyond belief), the eccentric (that which is out from the center–the decentered). Thus, the theme here is the questioning, and at times active sabotage, of the normal, the orthodox, the stable, and the consistent. It appears as the active seeking of the abnormal, the paradoxical, the dysfunctional, and the excluded. It is the active embracing of the other–indeed, of others.
On the creative side, paradox and reflexivity are actively employed in pastiche. [3] This comprises an often colorful, tongue-in-cheek collage style, or an ironic, self-referential mixing of codes (be these theoretical, philosophical, architectural, artistic, cinematic, literary, musical, etc.).
Paradox, reflexivity, pastiche and the Web
The Web embodies the dual nature of contemporary social phenomena. Duality means that many contemporary social phenomena are not experienced in a simple, unitary, fashion, but as two, often contradictory, parts. Thus, for example, the Web is experienced as both a liberator (it can liberate people from the confines of traditional time and space) and tyrant (it can be addictive, encouraging compulsive behavior and alienation). It is both constructed (people build Web sites, participate in discussion groups, and shape the way the Web evolves, etc.) and constructor (the Web changes the way we interact and the way in which we construct and experience phenomena–including ourselves).
Computer viruses and hackers also illustrate the duality of the Web. On the one hand, hackers routinely indulge in seemingly malicious destructive activity, while on the other hand, they actively promote the free flow of information. They are reflexively coupled to the world they oppose–the more they hack and create viruses, the more people try to protect themselves and their information. As a result, an ecology has developed in which anti-virus and security software programmers become dependent on the hackers, the parasites, for their existence–the parasites have their parasites.
Consider the phenomenon of avatars used in MOOs and GMUKs. Avatars [4] typically refer to pictures (photos, drawings, and cartoons) or graphical objects that people use to represent themselves in cyberhabitats. They can be swapped or modified at will and, in some cases, even stolen. For the purposes of this discussion, it is interesting to observe that they both reveal and conceal. They can selectively amplify or hide an aspect of a person’s character, as well as allow a person to gain experiences outside his or her everyday self.
Finally, most Web sites exemplify pastiche. Styles and themes are borrowed (literally–HTML and JavaScript are routinely lifted from other sites) and mixed freely. Spoof sites, which parody other (typically mainstream) sites, are common (e.g., there are many spoof, irreverent “Spice Girl” sites).
Anti-foundationalism
Anti-foundationalism is a general antipathy towards and rejection of the establishment and orthodoxy. There is a distaste for conforming to doctrines or practices that are held to be right or true by an authority, standard, or tradition. Anti-foundationalism also means a general disbelief of theories, philosophies, or political systems that claim to offer universal goals, rules, truths, or knowledge–and the social institutions that claim to produce them. Examples of these include communism and capitalism and many other social, religious, political, and scientific grand theories.
Anti-foundationalism and the Web
The Web embodies the anti-foundational philosophy of postmodernism in a number of ways. First, the model upon which the Web is based is not the traditional one-to-many of traditional broadcast media, but a many-to-many model in which no one controls the message. Second, the Web effectively has no controlling center or hierarchy. The medium is radically decentered. Nobody controls the Internet. Third, the medium is not stable. It is evolving at an unprecedented rate and in unpredictable directions. The ground is always in motion. There is no foundational control and no one architect; rather, the Web is created by the millions of interactions of all its members.
The logic of the Web is quite different from that of the physical, linear world. The Web is hypertext and hypermedia. It is free of the constraints of traditional writing. A hypermedium is not a closed work with a stable meaning, but an open fabric of links that are in the process of constant revision and supplementation. The traditional author’s voice is undermined, and the traditional relationship between author and reader is overthrown. Each reader creates his or her own text and own meaning.
Not surprisingly, the issue of copyright and intellectual property law has become a major issue on the Web. Sites like Total News manage to use other news providers’ proprietary content for their own ends while avoiding a breach of copyright laws. The manipulation, editing, threading, and recombination of text, images, sound, and video are fashionable on the Web.
In the fastest growing segments of MOOs and GMUKs there is no game or competition, other than spontaneous role playing and symbolic exchange. In short, there is no overall purpose or goal, no rules or regulation. Individuals create their own rules, reasons, and relations–none is prespecified.
Conclusion
In the modern hi-tech world, there is an ongoing elimination of the distinction between psyche and the environment, between waking and dreaming, between the conscious and the subconscious. When these important boundaries are blurred, people start to lose a sense of themselves. We argue that the Web dramatically speeds up this process. Cyberspace embodies the sudden, hyperreal dynamics of the dream. The conventional rules of time, space, logic, and identity are suspended. The surrealism, simultaneity, and instantaneous change that occur in the dream are embodied in the Web.
The Web is rapidly becoming the major medium through which people communicate, make decisions, and even construct their social identities. For some organizations (e.g., Amazon.com and CDNow), the Web is already the dominant forum for business transactions. Making sense of the Web, to the extent that postmodernism facilitates this comprehension, will be essential for insightful organizational practice. The Information Age organization and its stakeholders inhabit the Web. Business research fields (such as consumer behavior, organizational design, and information systems) are based on investigations of corporations and stakeholders interacting in North American Industrial Age settings. The Web eradicates much of this theory, just as the disintegration of the Soviet Union swept away established foreign policy. Now, we need to develop theories of management that incorporate national culture and a networked cybersociety. Postmodernist thinking is a stimulus for fashioning new theories of management and business practice.
The Web confronts modernism because it is a major shift that shakes the very foundation of established management thought. The dominance of broadcast (push technology) has been usurped by the Web (pull technology), and the receiver has taken control from the sender of the timing and content of messages. In the world of advertising, the control of time and space has shifted hands. The trend to decustomize service has been reversed as the Web facilitates mass customization. Services are being fragmented to support one-to-one interaction. New firms, the anti-foundationalists, can threaten the establishment within months of their birth (e.g., Netscape threatened Microsoft, and Amazon.com is still a major threat to Barnes & Noble). Understanding postmodernism is not an easy task, but then again, understanding the consequences of the Web is a major intellectual challenge. Reflecting on postmodernism and its themes should help managers make sense of this new cybersociety.
Cases
De Meyer, A., S. Dutta, and L. Demeester. 1998. Celebrity sightings. Fontainebleau, France: INSEAD. ECCH 398-074-1.
Dutta, S., A. De Meyer, and P. Evrard. 1997. LOT Polish airlines & the Internet: flying high in cyberspace. Fontainebleau, France: INSEAD. ECCH 698-031-1.
1. Dedifferentiation means the reversion of specialized structures (such as cells) to a more generalized or primitive condition. In contrast, differentiation implies development from the simple to the complex.
2. A Turing test, originally conceived by the mathematician Alan Turing, is a test of whether a computer can pass as being human to another human.
3. A musical, literary, or artistic composition made up of selections from different works.
4. An incarnation in human form.
Learning Objectives
1. Gain an understanding of environmental issues’ historical antecedents.
2. Identify key events leading to regulatory action.
3. Understand how those events shaped eventual business actions.
Sustainability innovations, currently driven by a subset of today’s entrepreneurial actors, represent the new generation of business responses to health, ecological, and social concerns. The entrepreneurial innovations we will discuss in this book reflect emerging scientific knowledge, widening public concern, and government regulation directed toward a cleaner economy. The US roots of today’s sustainability innovations go back to the 1960s, when health and environmental problems became considerably more visible. By 1970, the issues had intensified such that both government and business had to address the growing public worries. The US environmental regulatory framework that emerged in the 1970s was a response to growing empirical evidence that the post–World War II design of industrial activity was an increasing threat to human health and environmental system functioning.
We must keep in mind, however, that industrialization and in particular the commercial system that emerged post–World War II delivered considerable advantages to a global population. To state the obvious: there have been profoundly important advances in the human condition as a consequence of industrialization. In most countries, life spans have been extended, infant mortality dramatically reduced, and diseases conquered. Remarkable technological advances have made our lives healthier, extended education, and made us materially more comfortable. Communication advances have tied people together into a single global community, able to connect to each other and advance the common good in ways that were unimaginable a short time ago. Furthermore, wealth creation activity by business and the resulting rise in living standards have brought millions of people out of poverty. It is this creative capacity, our positive track record, and a well-founded faith in our ability to learn, adapt, and evolve toward more beneficial methods of value creation that form the platform for the innovative changes discussed in this text. Human beings are adept at solving problems, and problems represent system feedback that can inform future action. Therefore, we begin this discussion with a literal and symbolic feedback loop presented to the American public in the 1960s.
Widespread public awareness about environmental issues originated with the publication of the book Silent Spring by Rachel Carson in 1962. Carson, a biologist, argued that the spraying of the synthetic pesticide dichlorodiphenyltrichloroethane (DDT) was causing a dramatic decline in bird populations, poisoning the food chain, and thus ultimately harming humans. Similar to Upton Sinclair’s 1906 book The Jungle and its exposé of the shocking conditions in the American meatpacking industry, Silent Spring was a dramatic challenge to the chemical industry and to the prevalent societal optimism toward technology and post–World War II chemical use. Its publication ignited a firestorm of publicity and controversy. Predictably, the chemical industry reacted quickly and strongly to the book’s threat and was critical of Carson and her ideas. In an article titled “Nature Is for the Birds,” industry journal Chemical Week described organic farmers and those opposed to chemical pesticides as “a motley lot” ranging from “superstition-ridden illiterates to educated scientists, from cultists to relatively reasonable men and women” and strongly suggested that Carson’s claims were unwarranted.“Nature Is for the Birds,” Chemical Week, July 28, 1962, 5, quoted in Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 51. Chemical giant Monsanto responded directly to Carson by publishing a mocking parody of Silent Spring titled The Desolate Year. The book, with a “prose and format similar to Carson’s…described a small town beset by cholera and malaria and unable to produce adequate crops because it lacked the chemical pesticides necessary to ward off harmful pests.”Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 51.
Despite industry’s counteroffensive, President Kennedy, in part responding to Carson’s book, appointed a special panel to study pesticides. The panel’s findings supported her thesis.Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 57. However, it wasn’t until 1972 that the government ended the use of DDT.A ban on DDT use went into effect in December 1972 in the United States. See US Environmental Protection Agency, “DDT Ban Takes Effect,” news release, December 31, 1972, accessed April 19, 2011, www.epa.gov/history/topics/ddt/01.htm.
Figure 1.1 shows how toxins concentrate in the food chain. Humans, as consumers of fish and other animals that accumulate DDT, are at the top of the food chain and therefore can receive particularly high levels of the chemical. Even after developed countries had banned DDT for decades, in the early part of the twenty-first century the World Health Organization reapproved DDT use to prevent malaria in less developed countries. Lives were saved, yet trade-offs were necessary. Epidemiologists continue to associate high concentration levels with breast cancer and negative effects on the neurobehavioral development of children.Brenda Eskenazi, interviewed by Steve Curwood, “Goodbye DDT,” Living on Earth, May 8, 2009, accessed November 29, 2010, www.loe.org/shows/segments.htm?programID=09-P13-00019&segmentID=3; Theo Colburn, Frederick S. vom Saal, and Ana M. Soto, “Developmental Effects of Endocrine-Disrupting Chemicals in Wildlife and Humans,” Environmental Health Perspectives 101, no. 5 (October 1993): 378–84, accessed November 24, 2010, www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1519860. DDT, along with several other chemicals used as pesticides, is a suspected endocrine disruptor; the concern is not just with levels of a given toxin but also with the interactive effects of multiple synthetic chemicals accumulating in animals, including humans.
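Because biomagnification is multiplicative, its compounding effect can be sketched with a toy calculation. The starting concentration, the trophic levels, and the roughly tenfold per-step magnification factor below are hypothetical round numbers chosen purely to illustrate the mechanism; they are not the measured values behind Figure 1.1.

```python
# Toy model of biomagnification: a persistent toxin's concentration
# compounds as it moves up the food chain. ALL numbers are hypothetical
# illustrations, not measured DDT values.

levels = ["water", "plankton", "small fish", "large fish", "fish-eating bird"]
step_factor = 10.0       # assumed (hypothetical) tenfold increase per trophic step
concentration = 3e-6     # hypothetical starting concentration in water, in ppm

for level in levels:
    print(f"{level:>16}: {concentration:.6f} ppm")
    concentration *= step_factor
```

Under these assumed numbers, four trophic steps multiply the concentration ten-thousandfold (from 0.000003 ppm in water to 0.03 ppm in the top predator), which is why organisms at the top of the chain, including humans, carry the heaviest body burdens.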
Throughout the 1960s, well-publicized news stories were adding momentum to the call for comprehensive federal environmental legislation. The nation’s air quality had deteriorated rapidly, and in 1963 high concentrations of air pollutants in New York City caused approximately three hundred deaths and thousands of injuries.G. Tyler Miller and Scott Spoolman, Living in the Environment: Principles, Connections, and Solutions, 16th ed. (Belmont, CA: Brooks/Cole, 2009), 535. At the same time, cities like Los Angeles, Chattanooga, and Pittsburgh had become infamous for their dense smog. Polluted urban areas, once considered unpleasant and unattractive inconveniences that accompanied growth and job creation, were by the 1960s definitively connected by empirical studies to a host of respiratory problems.
Urban air quality was not the only concern. Questions were also being raised about the safety of drinking water and food supplies that were dependent on freshwater resources. In 1964, over a million dead fish washed up on the banks of the Mississippi River, threatening the water supplies of nearby towns. The source of the fish kill was traced to pesticide leaks, specifically endrin, which was manufactured by Velsicol.Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 52. Several other instances of polluted waterways added to the public’s awareness of the deterioration of the nation’s rivers, streams, and lakes and put pressure on legislators to take action. In the mid-1960s, foam from nonbiodegradable cleansers and laundry detergents began to appear in rivers and creeks. By the late 1960s, Lake Erie was so heavily polluted that millions of fish died and many of the beaches along the lake had to be closed.G. Tyler Miller and Scott Spoolman, Living in the Environment: Principles, Connections, and Solutions, 16th ed. (Belmont, CA: Brooks/Cole, 2009), 535. On June 22, 1969, the seemingly impossible occurred in Ohio when the Cuyahoga River, which empties into Lake Erie, caught fire, capturing the nation’s attention. However, it was not the first time; the river had burst into flame multiple times since 1868.
Cuyahoga River fire
Chocolate-brown, oily, bubbling with subsurface gases, it oozes rather than flows. “Anyone who falls into the Cuyahoga does not drown,” Cleveland’s citizens joke grimly. “He decays.” The Federal Water Pollution Control Administration dryly notes: “The lower Cuyahoga has no visible life, not even low forms such as leeches and sludge worms that usually thrive on wastes.” It is also—literally—a fire hazard. A few weeks ago, the oil-slicked river burst into flames and burned with such intensity that two railroad bridges spanning it were nearly destroyed. “What a terrible reflection on our city,” said Cleveland Mayor Carl Stokes sadly.“America’s Sewage System and the Price of Optimism,” Time, August 1, 1969, accessed March 7, 2011, www.time.com/time/magazine/article/0,9171,901182,00.html#ixzz19KSrUirj.
Adding to air and drinking water concerns was the growing problem of coastal pollution from human activity. Pollution from offshore oil drilling gained national attention in 1969 when a Union Oil Company offshore platform near Santa Barbara, California, punctured an uncharted fissure, releasing an estimated 3.25 million gallons of thick crude oil into the ocean. Although neither the first nor the worst oil spill on record, the accident coated the entire coastline of the city of Santa Barbara with oil, along with most of the coasts of Ventura and Santa Barbara counties. The incident received national media attention given the beautiful coastal location of the spill. In response to the spill, a local environmental group calling itself Get Oil Out (GOO) collected 110,000 signatures on a petition to the government to stop further offshore drilling. President Nixon, a resident of California, complied and imposed a temporary moratorium on California offshore development.Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 57–58.
Influenced by these events and the proliferation of environmental news stories and public discourse, citizens of industrialized countries had begun to shift their perceptions about the larger physical world. Several influential books and articles introduced to the general public the concept of a finite world. Economist Kenneth Boulding, in his 1966 essay “The Economics of the Coming Spaceship Earth,” coined the metaphors of “spaceship Earth” and “spaceman economy” to emphasize that the earth was a closed system and that the economy must therefore focus not on “production and consumption at all, but the nature, extent, quality, and complexity of the total capital stock.”See Kenneth E. Boulding, “The Economics of the Coming Spaceship Earth,” in Environmental Quality in a Growing Economy, ed. Henry Jarrett (Baltimore: Johns Hopkins University Press, 1966), 3–14. Paul Ehrlich, in the follow-up to his 1968 best seller The Population Bomb, borrowed Boulding’s metaphor in his 1971 book How to Be a Survivor to argue that in a closed system, exponential population growth and resource consumption would breach the carrying capacity of nature, assuring misery for all passengers aboard the “spaceship.”Philip Shabecoff, A Fierce Green Fire: The American Environmental Movement (New York: Hill & Wang, 1993), 95–96. Garrett Hardin’s now famous essay, “The Tragedy of the Commons,” was published in the prestigious journal Science in December 1968.Kenneth E. Boulding, “The Economics of the Coming Spaceship Earth,” in Valuing the Earth, Economics, Ecology, Ethics, ed. Herman Daly and Kenneth Townsend (Cambridge, MA: MIT Press, 1993), 297–309; Paul Ehrlich, The Population Bomb (New York: Ballantine Books, 1968); Paul Ehrlich, How to Be a Survivor (New York: Ballantine Books, 1975). It emphasized the need for new solutions to problems not easily addressed by technology, referring to pollution that involved public commons such as the air, water, soil, and oceans. 
These commonly used resources are shared in terms of access, but no single person or institution has formal responsibility for their protection.
Another symbolic turning point came in December 1968, when the Apollo 8 crew photographed the earth rising over the lunar horizon. The “Earthrise” image became an icon for the environmental movement. During that time period and subsequently, quotations proliferated about the new relationship between humans and their planetary home. In a speech at San Fernando Valley State College on September 26, 1966, the vice president of the United States Hubert H. Humphrey said, “As we begin to comprehend that the earth itself is a kind of manned spaceship hurtling through the infinity of space—it will seem increasingly absurd that we have not better organized the life of the human family.” In the December 23, 1968, edition of Newsweek, Frank Borman, commander of Apollo 8, said, “When you’re finally up at the moon looking back on earth, all those differences and nationalistic traits are pretty well going to blend, and you’re going to get a concept that maybe this really is one world and why the hell can’t we learn to live together like decent people.”
Key Takeaways
• By the 1970s, the public began to recognize the finite resources of the earth and to question how much environmental degradation the planet could sustain as environmental catastrophes grew in size and number.
• Chemical contaminants were discovered to accumulate in the food chain, resulting in much higher concentrations of toxins at the top.
• Key events and publications educated citizens about the impact of human activities on nature and the need for new approaches. These included the Santa Barbara oil spill, Silent Spring, and “The Tragedy of the Commons.”
Exercises
• How do you think Americans’ experience of abundance, economic growth, and faith in technology influenced perceptions about the environment?
• How did these perceptions change over time and why?
• Compare your awareness of environmental and health concerns with that of your parents or other adults of your parents’ generation. Name any differences you notice between the generations.
• What parallels, if any, do you see between today’s discussions about environmental issues and the history provided here?
Learning Objectives
1. Understand the initial framework for US environmental regulation.
2. Explain why and how companies changed their policies and practices.
In response to strong public support for environmental protection, newly elected president Nixon, in his 1970 State of the Union address, declared that the dawning decade of the 1970s “absolutely must be the years when America pays its debt to the past by reclaiming the purity of its air, its waters and our living environment. It is literally now or never.”Richard Nixon Foundation, “RN In ‘70—Launching the Decade of the Environment,” The New Nixon Blog, January 1, 2010, accessed March 23, 2011, blog.nixonfoundation.org/2010/01/rn-in-70-the-decade-of-the-environment. Nixon signed into law several pieces of legislation that serve as the regulatory foundation for environmental protection today. On January 1, 1970, he approved the National Environmental Policy Act (NEPA), the cornerstone of environmental policy and law in the United States. NEPA states that it is the responsibility of the federal government to “use all practicable means…to improve and coordinate federal plans, functions, programs and resources to the end that the Nation may…fulfill the responsibilities of each generation as trustee of the environment for succeeding generations.”See National Environmental Policy Act of 1969, 42 U.S.C. § 4321–47. GPO Access US Code Online, “42 USC 4331,” January 3, 2007, accessed April 19, 2011, frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=browse_usc&docid=Cite:+42USC4331, Jan 3, 2007. In doing so, NEPA requires federal agencies to evaluate the environmental impact of an activity before it is undertaken. Furthermore, NEPA established the Environmental Protection Agency (EPA), which consolidated the responsibility for environmental policy and regulatory enforcement at the federal level.
Also in 1970, the modern version of the Clean Air Act (CAA) was passed into law. The CAA set national air quality standards for particulates, sulfur oxides, carbon monoxide, nitrogen oxide, ozone, hydrocarbons, and lead, averaged over different time periods. Two levels of air quality standards were established: primary standards to protect human health, and secondary standards to protect plant and animal life, maintain visibility, and protect buildings. The primary and secondary standards often have been identical in practice. The act also required that emissions standards be set for new stationary sources, such as power plants, as well as for cars and trucks, and required states to develop implementation plans indicating how they would achieve the guidelines set by the act within the allotted time. Congress directed the EPA to establish these standards without consideration of the cost of compliance.Walter A. Rosenbaum, Environmental Politics and Policy, 2nd ed. (Washington, DC: Congressional Quarterly Press, 1991), 180–81.
To raise environmental awareness, Senator Gaylord Nelson of Wisconsin arranged a national teach-in on the environment. Nelson characterized the leading issues of the time as pesticides, herbicides, air pollution, and water pollution, stating, “Everybody around the country saw something going to pot in their local areas, some lovely spot, some lovely stream, some lovely lake you couldn’t swim in anymore.”Gaylord Nelson, interview with Philip Shabecoff, quoted in Philip Shabecoff, A Fierce Green Fire: The American Environmental Movement (New York: Hill & Wang, 1993), 114–15. This educational project, held on April 22, 1970, and organized by Denis Hayes (at the time a twenty-five-year-old Harvard Law student), became the first Earth Day.Hayes organized Earth Day while working for US Senator Gaylord Nelson. Hayes, a Stanford- and Harvard-educated activist with a law degree, helped found Green Seal, one of the most prominent ecolabeling systems in the United States, and directed the National Renewable Energy Laboratory under the Carter administration. On that day, twenty million people in more than two thousand communities participated in educational activities and demonstrations to demand better environmental quality.Tyler Miller Jr., Living in the Environment: Principles, Connections, and Solutions, 9th ed. (Belmont, CA: Wadsworth, 1996), 42. The unprecedented turnout reflected growing public anxiety. Health and safety issues had become increasingly urgent. In New York City, demonstrators on Fifth Avenue held up dead fish to protest the contamination of the Hudson River, and Mayor John Lindsay gave a speech in which he stated “Beyond words like ecology, environment and pollution there is a simple question: do we want to live or die?”Joseph Lelyveld, “Mood Is Joyful Here,” New York Times, April 23, 1970, quoted in Philip Shabecoff, A Fierce Green Fire: The American Environmental Movement (New York: Hill & Wang, 1993), 113. 
Even children’s books discussed the inability of nature to protect itself against the demands, needs, and perceived excesses associated with economic growth and consumption patterns. The 1971 children’s book The Lorax by Dr. Seuss was a sign of the times with its plea that someone “speak for the trees” that were being cut down at increasing rates worldwide, leaving desolate landscapes and impoverishing people’s lives.
Earth Day fueled public support and momentum for further environmental regulatory protection, and by 1972 the Federal Water Pollution Control Act (FWPCA) had set a goal to eliminate all discharges of pollutants into navigable waters by 1985 and to establish interim water quality standards for the protection of fish, shellfish, wildlife, and recreation interests by July 1, 1983.Walter A. Rosenbaum, Environmental Politics and Policy, 2nd ed. (Washington, DC: Congressional Quarterly Press, 1991), 195–96. Growing concern across the country about the safety of community drinking water supplies culminated in the Safe Drinking Water Act (SDWA) of 1974. This legislation established standards for turbidity, microbiological contaminants, and chemical agents in drinking water.Walter A. Rosenbaum, Environmental Politics and Policy, 2nd ed. (Washington, DC: Congressional Quarterly Press, 1991), 206–7. The Endangered Species Act (ESA) of 1973 forbade the elimination of plant and animal species and “placed a positive duty on the government to act to protect those species from extinction.”Philip Shabecoff, A Fierce Green Fire: The American Environmental Movement (New York: Hill & Wang, 1993), 175. Ten years after the publication of Silent Spring, the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) was updated to prohibit or severely limit the use of DDT, aldrin, dieldrin, and many other pesticides. As a result, levels of persistent pesticides measured in human fatty tissues declined from 8 parts per million (ppm) in 1970 to 2 ppm by the mid-1980s.Philip Shabecoff, A Fierce Green Fire: The American Environmental Movement (New York: Hill & Wang, 1993), 46–47.
Corporate Response: Pollution Control
Pollution control typified the corporate response to environmental regulations from the genesis of the modern regulatory framework in the 1970s through the 1980s. Pollution control is an end-of-the-pipe strategy that focuses on waste treatment or the filtering of emissions or both. Pollution control strategies assume no change to product design or production methods, only attention to air, solid, and water waste streams at the end of the manufacturing process. This approach can be costly and typically imposes a net burden on the company, though it can avert fines levied by regulatory agencies for noncompliance. Usually pollution control is implemented by companies to comply with regulations and reflects an adversarial relationship between business and government. The causes of this adversarial attitude were revealed in a 1974 survey by the Conference Board—an independent, nonprofit business research organization—that found that few companies viewed pollution control as profitable and none found it to be an opportunity to improve production procedures.Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 81. Hence, from a strictly profit-oriented viewpoint, one that considers neither public reaction to pollution nor potential future liability as affecting the bottom line, pollution control put the company in a “losing” position with respect to environmental protection.
The environmental regulatory structure of the United States at times has forced companies into a pollution control position by mandating specific technologies, setting strict compliance deadlines, and concentrating on cleanup instead of prevention.Michael Porter and Claas van der Linde, “Green and Competitive: Ending the Stalemate,” Harvard Business Review 73, no. 5 (September/October 1995): 120–34. This was evident in a 1986 report by the Office of Technology Assessment (OTA) that found that “over 99 percent of federal and state environmental spending is devoted to controlling pollution after waste is generated. Less than 1 percent is spent to reduce the generation of waste.”US Congress, Office of Technology Assessment, Serious Reduction of Hazardous Waste (Washington, DC: US Government Printing Office, 1986), quoted in Stephan Schmidheiny, with the Business Council for Sustainable Development, Changing Course (Cambridge, MA: MIT Press, 1992), 106. The OTA at that time noted the misplaced emphasis on pollution control in regulation and concluded that existing technologies alone could prevent half of all industrial wastes.Stephan Schmidheiny, with the Business Council for Sustainable Development, Changing Course (Cambridge, MA: MIT Press, 1992), 100.
Economists generally agree that it is better for regulation to require a result than to mandate a particular means of accomplishing that result. Requiring an emissions outcome rather than a specific control technology is preferred because it gives firms an incentive to reduce pollution rather than simply move hazardous materials from one place to another, which does not solve the original problem of waste generation. For example, business researchers Michael Porter and Claas van der Linde draw a distinction between good regulations and bad regulations by whether they encourage innovation and thus enhance competitiveness while simultaneously addressing environmental concerns. Pollution control regulations, they argue, should promote resource productivity but often are written in ways that discourage the risk taking and experimentation that would benefit society and the regulated corporation: “For example, a company that innovates and achieves 95 percent of target emissions reduction while also registering substantial offsetting cost reductions is still 5 percent out of compliance and subject to liability. On the other hand, regulators would reward it for adopting safe but expensive secondary treatment.”Michael Porter and Claas van der Linde, “Green and Competitive: Ending the Stalemate,” Harvard Business Review 73, no. 5 (September/October 1995): 120–34. Regulations that discouraged innovation and mandated the end-of-the-pipe mind-set that was common among regulators and industry in the 1970s and 1980s contributed to the adversarial approach to environmental protection. As these conflicts between business and government heated up, new science, an energy crisis, and growing public protests fueled the fire.
Global Science, Political Events, Citizen Concern
In 1972, a group of influential businessmen and scientists known as the Club of Rome published a book titled The Limits to Growth. Using mathematical models developed at the Massachusetts Institute of Technology to project trends in population growth, resource depletion, food supplies, capital investment, and pollution, the group reached a three-part conclusion. First, if the then-present trends held, the limits of growth on Earth would be reached within one hundred years. Second, these trends could be altered to establish economic and ecological stability that would be sustainable far into the future. Third, if the world chose to select the second outcome, chances of success would increase the sooner work began to attain it.Philip Shabecoff, A Fierce Green Fire: The American Environmental Movement (New York: Hill & Wang, 1993), 96. Also see Donella H. Meadows, Dennis L. Meadows, Jørgen Randers, and William W. Behrens III, The Limits to Growth (New York: Universe Books, 1972), 23–24. Again, the notion of natural limits was presented, an idea at odds with most people’s assumptions at the time. For the people of a country whose history and cultural mythology held the promise of boundless frontiers and limitless resources, these full-Earth concepts challenged deeply held assumptions and values.
Perhaps the most dramatic wake-up call came in the form of political revenge. Americans were tangibly and painfully introduced to the concept of limited resources when, in 1973, Arab members of the Organization of Petroleum Exporting Countries (OPEC) banned oil shipments to the United States in retaliation for America’s support of Israel in its eighteen-day Yom Kippur War with Syria and Egypt. Prices for oil-based products, including gasoline, skyrocketed. The so-called oil shock of 1973 triggered double-digit inflation and a major economic recession.Tyler Miller Jr., Living in the Environment: Principles, Connections, and Solutions, 9th ed. (Belmont, CA: Wadsworth, 1996), 42. As a result, energy issues became inextricably interwoven with political and environmental issues, and new activist groups formed to promote a shift from nonrenewable, fossil fuel–based and heavily polluting energy sources such as oil and coal to renewable, cleaner sources generated closer to home from solar and wind power. However, with the end of gasoline shortages and high prices, these voices faded into the background. Of course, a strong resurgence of such ideas followed the price spikes of 2008, when crude oil prices exceeded \$140 per barrel.Energy Information Administration, Department of Energy, “Petroleum,” accessed November 29, 2010, www.eia.doe.gov/oil_gas/petroleum/info_glance/petroleum.html.
Video Clip
NBC Nightly News Coverage of OPEC Meeting
(click to see video)
In the years following the 1973 energy crisis, public and government attention turned once again toward the dangers posed by chemicals. On July 10, 1976, an explosion at a chemical plant in Seveso, Italy, released a cloud of the highly toxic chemical called dioxin. Some nine hundred local residents were evacuated, many of whom suffered disfiguring skin diseases and lasting illnesses as a result of the disaster. Birth defects increased locally following the blast, and the soil was so severely contaminated that the top eight inches from an area of seven square miles had to be removed and buried.Clive Ponting, A Green History of the World (New York: Penguin Books, 1991), 372–73. Andrew Hoffman, in his study of the American environmental movement in business, noted that “for many in the United States, the incident at Seveso cast a sinister light on their local chemical plant. Communities became fearful of the unknown, not knowing what was occurring behind chemical plant walls.…Community and activist antagonism toward chemical companies grew, and confrontational lawsuits seemed the most visible manifestation.”Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 73.
Over time, these developments built pressure for additional regulation of business. Politicians continued to listen to the concerns of US citizens. In 1976, the Toxic Substances Control Act (TSCA) was passed over intense industry objections. The TSCA gave the federal government control over chemicals not already regulated under existing laws.John F. Mahon and Richard A. McGowan, Industry as a Player in the Political and Social Arena (Westport, CT: Quorum Books, 1996), 144. In addition, the Resource Conservation and Recovery Act (RCRA) of 1976 expanded control over toxic substances from the time of production until disposal, or “from cradle to the grave.”Philip Shabecoff, A Fierce Green Fire: The American Environmental Movement (New York: Hill & Wang, 1993), 269. The following year, both the CAA and Clean Water Act were strengthened and expanded.According to the US Environmental Protection Agency, “The Clean Water Act (CWA) establishes the basic structure for regulating discharges of pollutants into the waters of the United States and regulating quality standards for surface waters. The basis of the CWA was enacted in 1948 and was called the Federal Water Pollution Control Act, but the act was significantly reorganized and expanded in 1972. ‘Clean Water Act’ became the Act’s common name with amendments in 1977.” Under the CWA, wastewater standards were set for industry and water quality standards were set for all surface-water contaminants. In addition, permits were required to discharge pollutants under the EPA’s National Pollutant Discharge Elimination System (NPDES) program. See US Environmental Protection Agency, “Laws and Regulations: Summary of the Clean Water Act,” accessed March 7, 2011, www.epa.gov/lawsregs/laws/cwa.html.
In the late 1970s, America’s attention turned once again to energy issues. In 1978, Iran triggered a second oil shock by suddenly cutting back its petroleum exports to the United States. A year later, confidence in nuclear power, a technology many looked to as a viable alternative form of energy, was severely undermined by a near catastrophe. On March 28, 1979, the number two reactor at Three Mile Island near Harrisburg, Pennsylvania, lost its coolant water due to a series of mechanical failures and operator errors. Approximately half of the reactor’s core melted, and investigators later found that if a particular valve had remained stuck open for another thirty to sixty minutes, a complete meltdown would have occurred. The accident resulted in the evacuation of fifty thousand people, with another fifty thousand fleeing voluntarily. The amount of radioactive material released into the atmosphere as a result of the accident is unknown, though no deaths were immediately attributable to the incident. Cleanup of the damaged reactor has cost \$1.2 billion to date, almost twice its \$700 million construction cost.Tyler Miller Jr., Living in the Environment: Principles, Connections, and Solutions, 9th ed. (Belmont, CA: Wadsworth, 1996), 387. In large part due to the Three Mile Island incident, all 119 nuclear power plants ordered in the United States since 1973 were cancelled.Tyler Miller Jr., Living in the Environment: Principles, Connections, and Solutions, 9th ed. (Belmont, CA: Wadsworth, 1996), 385. No new commercial nuclear power plants have been built since 1977, although some of the existing 104 plants have increased their capacity. However, in 2007, the Nuclear Regulatory Commission received the first of nearly twenty applications for permits to build new nuclear power plants.Energy Information Administration, Department of Energy, “U.S. Nuclear Reactors,” accessed November 29, 2010, www.eia.doe.gov/cneaf/nuclear/page/nuc_reactors/reactsum.html.
One of the most significant episodes in American environmental history is Love Canal. In 1942, Hooker Electro-Chemical Company purchased the abandoned Love Canal property in Niagara Falls, New York. Over the next eleven years, 21,800 tons of toxic chemicals were dumped into the canal. Hooker, later purchased by Occidental Chemical Corporation, sold the land to the city of Niagara Falls in 1953 with a warning in the property deed that the site contained hazardous chemicals. The city later constructed an elementary school on the site, with roads and sewer lines running through it and homes surrounding it. By the mid-1970s, the chemicals had begun to rise to the surface and seep into basements.Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 79. Local housewife Lois Gibbs, who later founded the Citizens’ Clearinghouse for Hazardous Wastes, noticed an unusual frequency of cancers, miscarriages, deformed babies, illnesses, and deaths among residents of her neighborhood. After reading an article in the local newspaper about the history of the canal, she canvassed the neighborhood with a petition, alerting her neighbors to the chemical contamination beneath their feet.Aubrey Wallace, Eco-Heroes (San Francisco: Mercury House, 1993), 169–70. On August 7, 1978, President Carter declared Love Canal a federal emergency, beginning a massive relocation effort in which the government purchased 803 residences in the area, 239 of which were destroyed.Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 79.
Love Canal led directly to one of the most controversial pieces of environmental legislation ever enacted. On December 12, 1980, President Carter signed into law the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), or Superfund. This law made companies liable retroactively for cleanup of waste sites, regardless of their level of involvement. Love Canal also signaled the beginning of a new form of environmental problem. As environmental historian Hoffman indicated, “Environmental problems, heretofore assumed to be visible and foreseeable, could now originate from an unexpected source, appear many years later, and inflict both immediate and latent health and ecological damage. Now problems could emerge from a place as seemingly safe as your own backyard.”Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 79.
In the face of vehement industry opposition, the states and the federal government managed to put in place a wide-ranging series of regulations that defined standards of practice and forced the adoption of pollution control technologies. To oversee and enforce these regulations, taxpayers’ dollars now funded a large new public bureaucracy. In the coming years, the size and scope of those agencies would come under fire from proindustry administrations elected on a platform of smaller government and less oversight and intervention.
In the meantime, the creation of the EPA compelled many states to create their own equivalent departments for environmental protection, often to administer or enforce EPA programs if nothing else. According to Denise Scheberle, an expert on federalism and environmental policy, “few policy areas placed greater and more diverse demands on states than environmental programs.”Denise Scheberle, Federalism and Environmental Policy: Trust and the Politics of Implementation, 2nd ed. (Washington, DC: Georgetown University Press, 2004), 5. Some states, such as California, continued to press for stricter environmental standards than those set by the federal government. Almost all states have seen their relationships with the EPA vary from antagonistic to cooperative over the decades, depending on what states felt was being asked of them, why it was being asked, and how much financial assistance was being provided.
Despite growing public awareness and the previous decade of federal legislation to protect the environment, scientific studies were still predicting ecological disaster. President Carter’s Council on Environmental Quality, in conjunction with the State Department, produced a study in 1980 of world ecological problems called The Global 2000 Report. The report warned that “if present trends continue, the world in 2000 will be more crowded, more polluted, less stable ecologically, and more vulnerable to disruption than the world we live in now. Serious stresses involving population, resources, and the environment are clearly visible ahead. Despite greater material output, the world’s people will be poorer in many ways than they are today.”United States Council on Environmental Quality and the Department of State, The Global 2000 Report to the President (Washington, DC: US Government Printing Office, 1980), 1.
Despite forecasts like this, the election of Ronald Reagan in November of 1980 marked a dramatic decline in federal support for existing and planned environmental legislation. With Reagan’s 1981 appointments of two aggressive champions of industry, James Watt as secretary of the interior and Anne Burford as administrator of the EPA, it was apparent that the nation’s environmental policies were a prime target of his “small government” revolution. In its early years, the Reagan administration moved rapidly to cut budgets, reduce environmental enforcement, and open public lands for mining, drilling, grazing, and other private uses. In 1983, however, Burford was forced to resign amid congressional investigations into mismanagement of a toxic waste cleanup, and Watt resigned after making public statements widely viewed as insensitive and taking actions seen as damaging to the environment. Under Burford’s successors, William Ruckelshaus and Lee Thomas, the environmental agency returned to a moderate course as both men made an effort to restore morale and public trust.
However, environmental crises continued to shape public opinion and environmental laws in the 1980s. In December 1984, approximately forty-five tons of methyl isocyanate gas leaked from an underground storage tank at a Union Carbide pesticide plant in Bhopal, India. The accident, which was far worse than the Seveso incident eight years earlier, caused 2,000 immediate deaths, another 1,500 deaths in the ensuing months, and over 300,000 injuries. The pesticide plant was closed, and the Indian government took Union Carbide to court. Mediation resulted in a settlement payment by Union Carbide of \$470 million.Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 96. Over twenty-five years later, in 2010, courts in India were still determining the culpability of the senior managers involved.
Film Footage from Bhopal, India
video.google.com/videoplay?docid=-7024605564670228808&ei=CSgxSoCmOIq4rgLpptXgBA&q=Bhopal%2C+India&hl=en&client=firefox-a
This video, made in 2006 by Encyclomedia, shows images of victims of the Union Carbide chemical leak being treated in 1984.
This disaster produced the community “right to know” provision in the Superfund Amendments and Reauthorization Act (SARA) of 1986, requiring industries that use dangerous chemicals to disclose the type and amount of chemicals used to citizens in the surrounding area who might be affected by an accident.Walter A. Rosenbaum, Environmental Politics and Policy, 2nd ed. (Washington, DC: Congressional Quarterly Press, 1991), 80. The right-to-know provision was manifested in the Toxics Release Inventory (TRI), in which companies made public the extent of their polluting emissions. This information proved useful for communities and industry by making both groups more aware of the volume of pollutants emitted and the responsibility of industry to lower these levels. The EPA currently releases this information at http://www.epa.gov/tri; other pollutant information is available at www.epa.gov/oar/data.
In 1990, Thomas Lefferre, an operations vice president for Monsanto, highlighted the sensitizing effect of this new requirement on business. He wrote, “If…you file a Title III report that says your plant emits 80,000 pounds of suspected carcinogens to the air each year, you might be comforted by the fact that you’re in compliance with your permit. But what if your plant is two blocks from an elementary school? How comfortable would you be then?”Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 179.
Until the mid-1980s, environmental disasters were perceived to be confined to geographically limited locations and people rarely feared contamination from beyond their local chemical or power plant. This notion changed in 1986 when an explosion inside a reactor at a nuclear plant in Chernobyl in the Ukraine released a gigantic cloud of radioactive debris that standard weather patterns spread from the Soviet Union to Scandinavia and Western Europe. The effects were severe and persistent. As a result of the explosion, some 21,000 people in Western Europe were expected to die of cancer and even more to contract the disease as a result. Reindeer in Lapland were found to have levels of radioactivity seven times above the norm. By 1990 sheep in northwest England and Wales were still too radioactive to be consumed. Within the former Soviet Union, over 10,000 square kilometers of land were determined to be unsafe for human habitation, yet much of the land remained occupied and farming continued. Approximately 115,000 people were evacuated from the area surrounding the plant site, 220 villages were abandoned, and another 600 villages required “decontamination.” It is estimated that the lives of over 100,000 people in the former Soviet Union have been or will likely be severely affected by the accident.Clive Ponting, A Green History of the World (New York: Penguin Books, 1991), 377; World Health Organization, “Health Effects of the Chernobyl Accident: An Overview,” Fact sheet no. 303, April 2006, accessed April 19, 2011, www.who.int/mediacentre/factsheets/fs303/en/index.html.
Other environmental problems of an international scale made headlines during the 1980s. Sulfur dioxide and nitrogen oxides from smokestacks and tailpipes can be carried over six hundred miles by prevailing winds and often return to the ground as acid rain. Wheeling, West Virginia, once received rain with a pH value almost equivalent to that of battery acid.Tyler Miller Jr., Living in the Environment: Principles, Connections, and Solutions, 9th ed. (Belmont, CA: Wadsworth, 1996), 436. As a result of such deposition, downwind lakes and streams become increasingly acidic and toxic to aquatic plants, invertebrates, and fish. The proportion of lakes in the Adirondack Mountains of New York with a pH below 5.0 jumped from 4 percent in 1930 to over 50 percent by 1970, resulting in the loss of fish stocks. Acid rain has also been implicated in damage to forests at elevations above two thousand feet. The northeastern United States and eastern Canada, located downwind from large industrialized areas, were particularly hard hit.Clive Ponting, A Green History of the World (New York: Penguin Books, 1991), 367. Rain in the eastern United States is now about ten times more acidic than natural precipitation. Similar problems occurred in Scandinavia, the destination of much of Europe’s airborne pollution.
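The claim that eastern rain is “about ten times more acidic” than natural precipitation is easier to interpret with the logarithm in view: pH is the negative base-10 logarithm of hydrogen-ion concentration, so a tenfold difference in acidity corresponds to just one pH unit. A minimal sketch of that arithmetic (the pH values used here are illustrative, not measured data):

```python
def relative_acidity(ph_a: float, ph_b: float) -> float:
    """How many times more acidic a solution at ph_a is than one at
    ph_b, using pH = -log10([H+]), i.e., [H+] = 10 ** (-pH)."""
    return 10 ** (ph_b - ph_a)

# Unpolluted rain is roughly pH 5.6; rain one full pH unit lower
# (pH 4.6) is about ten times more acidic, and a two-unit drop
# (pH 3.6) would be about a hundred times more acidic.
print(relative_acidity(4.6, 5.6))  # about 10
print(relative_acidity(3.6, 5.6))  # about 100
```

This is why pH readings near 5.0 in Adirondack lakes, against a natural baseline near 5.6, represent a substantial chemical change despite the small-looking difference in the numbers.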
A 1983 report by a congressional task force concluded that the primary cause of acid rain destroying freshwater in the northeastern United States was probably pollution from industrial stacks to the south and west. The National Academy of Sciences followed with a report asserting that by reducing sulfur oxide emissions from coal-burning power plants in the eastern United States, acid rain in the northeastern part of the country and southern Canada could be curbed. However, the Reagan administration declined to act, straining relations with Canada, especially during the 1988 visit of Canadian Prime Minister Brian Mulroney.Walter A. Rosenbaum, Environmental Politics and Policy, 2nd ed. (Washington, DC: Congressional Quarterly Press, 1991), 184. Acid rain was finally addressed in part by the Clean Air Act Amendments of 1990.
The CAA, a centerpiece of the environmental legislation enacted during what might be called the first environmental wave, was significantly amended in 1990 to address acid rain, ozone depletion, and the contribution of one state’s pollution to states downwind. The act included a groundbreaking clause allowing the trading of pollution permits for sulfur dioxide and nitrogen oxide emissions from power plants in the East and Midwest. Plants now had market incentives to reduce their pollution emissions: surplus allowances could be bought and sold on the Chicago Board of Trade. A company’s effort to go beyond compliance enabled it to earn an asset that could be sold to firms that did not meet the standards. Companies were thus enticed to protect the environment as a way to increase profits, a mechanism considered by many to be a major advance in the design of environmental protection.
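The incentive logic behind allowance trading can be sketched in a few lines. Every plant, cost figure, and cap below is invented for illustration; the point is only that when a low-cost abater over-complies and sells its surplus allowances to a high-cost abater, the same total emissions reduction costs less than an identical cut mandated at every plant:

```python
# Hypothetical sulfur dioxide market: two plants must jointly cut
# 100 tons. Each has a constant marginal abatement cost ($/ton);
# all numbers are invented for illustration.
plants = {"A": 200, "B": 800}
required_cut = 100  # tons of SO2 to eliminate overall

# Uniform mandate: each plant must cut 50 tons on its own.
uniform_cost = sum(cost_per_ton * 50 for cost_per_ton in plants.values())

# Trading: plant A (the cheap abater) cuts all 100 tons and sells its
# surplus allowances to plant B at any price between $200 and $800,
# leaving both plants better off than under the uniform mandate.
trading_cost = plants["A"] * required_cut

print(uniform_cost)   # 50000
print(trading_cost)   # 20000 -- same emissions cut at 40% of the cost
```

The environmental result (100 tons removed) is identical in both cases; trading simply steers the abatement effort toward whoever can do it most cheaply, which is the sense in which such regulation “requires a result” rather than a means.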
This policy innovation marked the beginning of market-oriented mechanisms to solve pollution problems. The Clean Air Interstate Rule (CAIR) expanded the scope of the original trading program and was reinstated after various judicial challenges to its method. The question of whether direct taxes or market solutions are best continues to be debated, however. With President Obama’s election in 2008, the question of federal carbon taxes in the United States versus allowing regional and national carbon markets to evolve became a hot topic for national debate.
Another problem that reached global proportions was ozone depletion. In 1974, chemists Sherwood Rowland and Mario Molina announced that chlorofluorocarbons (CFCs) were lowering the average concentration of ozone in the stratosphere, a layer that blocks much of the sun’s harmful ultraviolet rays before they reach the earth. Over time, less protection from ultraviolet rays will lead to higher rates of skin cancer and cataracts in humans as well as crop damage and harm to certain species of marine life. By 1985, scientists had observed a 50 percent reduction of the ozone in the upper stratosphere over Antarctica in the spring and early summer, creating a seasonal ozone hole. In 1988, a similar but less severe phenomenon was observed over the North Pole. Sensing disaster, Rowland and Molina called for an immediate ban of CFCs in spray cans.
Such a global-scale problem required a global solution. In 1987, representatives from thirty-six nations met in Montreal and developed a treaty known as the Montreal Protocol. Participating nations agreed to cut emissions of CFCs by about 35 percent between 1989 and 2000. This treaty was later expanded and strengthened in Copenhagen in 1992.Tyler Miller Jr., Living in the Environment: Principles, Connections, and Solutions, 9th ed. (Belmont, CA: Wadsworth, 1996), 317–27. The amount of ozone-depleting substances close to Earth’s surface consequently declined, whereas the amount in the upper atmosphere remained high. The persistence of such chemicals means it may take decades for the ozone layer to return to the density it had before 1980. The good news was that the rate of new destruction approached zero by 2006.World Meteorological Organization, Scientific Assessment of Ozone Depletion: 2006, Global Ozone Research and Monitoring Project—Report No. 50 (Geneva, Switzerland: World Meteorological Organization, 2007), accessed November 29, 2010, www.wmo.ch/pages/prog/arep/gaw/ozone_2006/ozone_asst_report.html. It is interesting to note that businesses opposed restrictions on CFC use until patent-protected alternative materials were available to substitute for CFCs in the market.
The increasingly global scale of environmental threats and the growing awareness among nations of the interrelated nature of economic development and stable functioning of natural systems led the United Nations to establish the World Commission on Environment and Development (WCED) in 1983. The commission was convened the following year, led by chairwoman Gro Harlem Brundtland, former prime minister of Norway. In 1987, the so-called Brundtland Commission produced a landmark report, Our Common Future, which tied together concerns for human development, economic development, and environmental protection with the concept of sustainable development. Although this was certainly not the first appearance of the term sustainable development, to many the commission’s definition became a benchmark for moving forward: “Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” Around that same time, the phrase environmental justice was coined to describe the patterns of locating hazardous industries or dumping hazardous wastes and toxins in regions predominantly home to poor people or racial and ethnic minorities.
Pollution Prevention
By the mid-1970s, companies had begun to act to prevent pollution rather than just mitigate the wastes already produced. Pollution prevention refers to actions inside a company and is called an in-the-pipe, as opposed to an end-of-the-pipe, method of environmental protection. Unlike pollution control, which only imposes costs, pollution prevention offers an opportunity for a company to save money and implement environmental protection simultaneously. Still used today, the approach is one that companies often enter tentatively, looking for a quick payback; over time, however, many have achieved significant positive financial and environmental results. When this happens, it helps open minds within companies to the potential of environmentally sound process redesign or reengineering, which yields ecological and health benefits as well as improved profitability.
There are four main categories of pollution prevention: good housekeeping, materials substitution, manufacturing modifications, and resource recovery. The objective of good housekeeping is for companies to operate their machinery and production systems as efficiently as possible. This requires an understanding and monitoring of material flows, impacts, and the sources and volume of wastes. Good housekeeping is a management issue that ensures preventable material losses are not occurring and all resources are used efficiently. Materials substitution seeks to identify and eliminate the sources of hazardous and toxic wastes such as heavy metals, volatile organic compounds, chlorofluorocarbons, and carcinogens. By substituting more environmentally friendly alternatives or reducing the amount of undesirable substances used and emitted, a company can bypass the need for expensive end-of-the-pipe treatments. Manufacturing modifications involve process changes to simplify production technologies, introduce closed-loop processing, and reduce water and energy use. These steps can significantly lower emissions and reduce costs. Finally, resource recovery captures waste materials and seeks to reuse them in the same process, as inputs for another process within the production system, or as inputs for processes in other production systems.Stephan Schmidheiny, with the Business Council for Sustainable Development, Changing Course (Cambridge, MA: MIT Press, 1992), 101–4.
One of the earliest instances of pollution prevention in practice was 3M’s Pollution Prevention Pays (3P) program, established in 1975. The program achieved savings of over half a billion dollars in capital and operating costs while eliminating 600,000 pounds of effluents, air emissions, and solid waste. This program continued to evolve within 3M and became integrated into incentive systems, rewarding employees for identifying and eliminating unnecessary waste.Joseph Fiksel, “Conceptual Principles of DFE,” in Design for Environment: Creating Eco-Efficient Products and Processes, ed. Joseph Fiksel (New York: McGraw-Hill, 1996), 53. Other companies, while not pursuing environmental objectives per se, have found that total quality management (TQM) programs can help achieve cost savings and resource efficiencies consistent with pollution prevention objectives through conscious efforts to reduce inputs and waste generation.
Though pollution prevention is a significant first step in corporate environmental protection, Joseph Fiksel identifies several limitations to pollution prevention as typically practiced. First, it only incrementally refines and improves existing processes. Second, it tends to focus on singular measures of improvement, such as waste volume reduction, rather than on adopting a systems view of environmental performance. Renowned systems analyst Donella Meadows offered a simple definition of a system as “any set of interconnected elements.” A systems view emphasizes connections and relationships.Donella H. Meadows, “Whole Earth Models and Systems,” Coevolution Quarterly 34 (Summer 1982): 98–108, quoted in Joseph J. Romm, Lean and Clean Management (New York: Kodansha, 1994), 33. Third, as most of the gains are often in processes that were not previously optimized for efficiency, the improvements are not repeatable. Fourth, pollution prevention is detached from a company’s business strategy and is performed on a piecemeal basis.Joseph Fiksel, “Conceptual Principles of DFE,” in Design for Environment: Creating Eco-Efficient Products and Processes, ed. Joseph Fiksel (New York: McGraw-Hill, 1996), 54.
According to a 1989 National Academy of Engineering report by Robert Ayres, 94 percent of the material used in industrial production is thrown away before the product is made.Robert U. Ayres, “Industrial Metabolism,” in Technology and Environment, ed. Jesse H. Ausubel and Hedy E. Sladovich (Washington, DC: National Academy Press, 1989), 26; Robert Solow, “Sustainability: An Economist’s Perspective,” in Economics of the Environment, 3rd ed., ed. Robert Dorfman and Nancy S. Dorfman (New York: W. W. Norton, 1993), 181.
Key Takeaways
• In the 1970s, the federal government mandated certain standards and banned some chemicals outright in a command-and-control approach.
• Pollution prevention provided the first significant opportunity to reconcile business and environmental goals.
• Environmental problems grew in geographic scale and intensity through the 1980s, creating a growing awareness that more serious measures and new thinking about limits to growth were required.
Exercises
1. Compare and contrast pollution control and pollution prevention based on (a) their effectiveness and ease of administration as regulations, and (b) their effects on business processes and opportunities.
2. How did trends in environmental issues and regulations change and stay the same in the 1970s and 1980s as compared to earlier decades?
3. Do you see any overlap in circumstances today and the events and perspectives in the 1980s?
Learning Objectives
1. Understand how business opportunities arise from changes in environmental regulation as well as from the growing public demand to protect the environment and health.
2. Analyze how globalization and environmental hazards contributed to the development of sustainability as a framework for business and government.
In the United States, the slow pace of government action on environmental protection during the 1980s began to change with the Superfund reauthorization in 1986. The following year, Congress overrode President Reagan’s veto to amend the Clean Water Act to control nonpoint sources of pollution such as fertilizer runoff.Philip Shabecoff, A Fierce Green Fire: The American Environmental Movement (New York: Hill & Wang, 1993), 230. As America’s economy continued to expand during the 1980s, so did its solid waste problem. The issues of America’s bulging landfills and throwaway economy were captured by the image of the Mobro 4000, a barge carrying 3,168 tons of trash that set sail from Islip, Long Island, New York, on March 22, 1987.William Rathje and Cullen Murphy, Rubbish! (New York: Harper Perennial, 1992), 28. The barge spent the next fifty-five days in search of a suitable location to deposit its cargo while drawing significant media attention.Philip Shabecoff, A Fierce Green Fire: The American Environmental Movement (New York: Hill & Wang, 1993), 271. Meanwhile, New York City’s Fresh Kills Landfill became the largest landfill in the world. The following summer, the issue of waste returned to the headlines when garbage and medical waste, including hypodermic needles, began washing onto beaches in New York and New Jersey, costing coastal counties in New Jersey an estimated \$100 million in tourist revenue. Public outcry spurred the federal government to ban ocean dumping of municipal waste. The states of New York and New Jersey subsequently closed several coastal sewage treatment plants, upgraded others, and enacted laws for medical waste disposal.Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 120–21.
America’s reliance on fossil fuels was brought to the forefront once again when the Exxon Valdez supertanker ran aground in Prince William Sound, Alaska, on March 24, 1989. Over 10 million gallons of crude oil spilled from the ship, polluting 1,200 miles of coastline. Approximately 350,000 sea birds, several thousand rare otters, and countless other animals were killed. In 2010, lasting damage from the spill was still documented. The accident coincided with and helped to further a generational peak in environmental awareness.
Legal judgments against Exxon exceeded \$5 billion, and the incident single-handedly led to the enactment of the Oil Pollution Act of 1990, which mandated safety measures on ocean crude oil transport.Andrew J. Hoffman, From Heresy to Dogma: An Institutional History of Corporate Environmentalism (San Francisco: New Lexington Press, 1997), 121–22. By the early 1990s, the chemical and energy industries were becoming increasingly proactive on environmental matters, looking beyond regulatory compliance toward crafting a specific environmental management strategy. The nature of government regulation began to change as well, with increasing emphasis on goals rather than technology-forcing to achieve those goals (e.g., the Clean Air Act Amendments of 1990). This allowed industry more flexibility in selecting approaches to emissions reductions that made financial sense.
Improved regulatory design focused on goals and results rather than means and prescribed technical fixes, representing what many viewed as a positive policy strategy evolution. This adaptation by government occurred in part as a response to industry resistance to government imposition of “command and control” requirements. Often neglected in polarized discussions that simplistically frame business against government is the fact that governments are steadily adjusting, updating, and refining regulatory approaches to better reflect new knowledge, technology, and business realities. It should be kept in mind that the history of environmental and sustainability issues in business is an evolutionary process of constantly interacting and interdependent cross-sector participants that may collide but ultimately adapt and change. Just as the regulatory bodies have had to adapt to changing and emerging resource, waste stream, Earth system, and health problems, so too have environmental groups and companies had to acknowledge a novel cascade of problems associated with industrial production. Shifting, give-and-take, back-and-forth dynamics characterized the terrain even as new participants emerged. Examples of this evolution were the rising numbers of health, equity, energy, and environmental nongovernmental activist organizations, many of which had lost faith in governments’ capacities to solve problems. However, pressures on government by such groups might cause a regulatory response that creates an unintended new pollution problem. For example, does a focus on reducing large particulate matter in the air from vehicle emissions drive higher emissions of microsized particles that create a new set of medical challenges and respiratory afflictions? In addition, the environmental community is not monolithic.
These organizations range from law-defying extreme activists attacking corporations to pragmatic, collaborative science-based nongovernmental organizations (NGOs) working closely with companies to generate solutions. Despite this rich evolutionary adaptive phenomenon across sectors, for the most part companies remained relatively resistant to environmental groups through the 1990s.
Compliance was still the primary goal, and companies combining forces to set industry standards became a method of forestalling regulation. Unless they were singled out due to their industry’s visibility or poor reputation, most companies continued to see health and environmental issues as a burden and additional cost. Environmentalism was associated with tree-huggers, altruists, overhead cost burdens, and public sector fines and regulation.
As if on a parallel yet nonintersecting path, in 1989, a special issue of Scientific American articulated the state of scientific understanding of the growing global collision among human economic growth patterns, ecological limits, and population growth, and the urgency of addressing it. For the first time, the need to reexamine dominant policies and economic growth models was being raised in a leading US scientific journal.
In fact, debate on scientific evidence and necessary global action was expanding to challenge the one-dimensional view held by most corporate leaders. With the rise in environmental problems at the global scale, the United Nations (UN) convened a conference on the environment in Rio de Janeiro in June of 1992, which became known as the Rio Earth Summit. Attending this unprecedented forum were more than 100 heads of state, representatives from 178 nations, and 18,000 people from 7,000 NGOs. Major results included a nonbinding charter for guiding environmental policies toward sustainable development, a nonbinding agreement on forestry management and protection, the establishment of the UN Commission on Sustainable Development, and conventions on climate change and biodiversity that at the time had not yet been ratified by enough nations to go into effect. Despite the lack of binding treaties, the Rio Earth Summit succeeded in articulating general global environmental principles and guidelines in a consensus-driven setting involving participation by most of the world’s nations.Tyler Miller Jr., Living in the Environment: Principles, Connections, and Solutions, 9th ed. (Belmont, CA: Wadsworth, 1996), 706.
While there may have been less activity in the United States at the time, a new era was under way internationally. Creation of the World Business Council for Sustainable Development (WBCSD) marked a turning point in global business engagement. In preparation for the Rio Earth Summit, Swiss industrialist Stephan Schmidheiny organized the WBCSD in 1990. The council featured over fifty business leaders from around the world. Their task was without precedent, as Schmidheiny explained: “This is the first time that an important group of business leaders has looked at these environmental issues from a global perspective and reached major agreements on the need for an integrated approach in confronting the challenges of economic development and the environment.”Stephan Schmidheiny, with the Business Council for Sustainable Development, Changing Course (Cambridge, MA: MIT Press, 1992), xxi.
The WBCSD published a book in 1992 titled Changing Course, in which the objectives of business and the environment were argued to be compatible. Schmidheiny wrote that business must “devise strategies to maximize added value while minimizing resource and energy use,” and that “given the large technological and productive capacity of business, any progress toward sustainable development requires its active leadership.”Stephan Schmidheiny, with the Business Council for Sustainable Development, Changing Course (Cambridge, MA: MIT Press, 1992), 9. This language represented a mainstreaming of what is called eco-efficiency in business. The WBCSD opened new doors. Its work signaled acceptance of the new term sustainable business and hinted at sustainability as a term that referred to an alternative economic growth pattern. Sustainable business, defined as improving the efficiency of resource use, was beginning to be recognized by global business leaders as an activity in which corporations could legitimately engage. The important shift under way was that the notion of sustainability was moving from small pockets of visionary business leaders and development specialists to the broader international business community.
It made sense. Population projections showed emerging economies growing at an accelerating rate. Their societies’ legitimate aspirations to live according to Western developed economies’ standards would require a tremendous acceleration in the throughput of raw materials, massive growth in industrial activity, and unprecedented demand for energy. People were beginning to wonder how that growth could be achieved in a way that preserved ecological systems, protected human health, and supported stable, viable communities. Figure 1.8 shows the significant increases in emerging economy populations compared to developed countries after 1950.
Of no small significance, certain publications emerged and within a few years were read widely by those interested in the debates over economic growth and population trajectories. In 1993, Paul Hawken authored The Ecology of Commerce, which brought to the public’s attention an alternative model of commerce without waste that relies on renewable energy sources, eliminates toxins, and thrives on biodiversity. Hawken moved beyond the WBCSD goals of minimization (eco-efficiency) by suggesting a restorative economy “that is so intelligently designed and constructed that it mimics nature at every step, a symbiosis of company and customer and ecology.”Paul Hawken, The Ecology of Commerce (New York: Harper Business, 1993), 12, 15. Written for a broad audience, Hawken’s book became a must-read for those trying to grasp the tensions among economic growth, the viability of natural systems, and the possibilities for change. An entrepreneur himself, Hawken looked to markets, firms, and an entrepreneurial mind-set to solve many of the problems.
In 1991, strategy thinker and Harvard Business School professor Michael Porter published articles about green strategy in Scientific American, and in 1995 his article with Claas van der Linde called “Green and Competitive: Ending the Stalemate” appeared in the Harvard Business Review.Michael E. Porter and Claas van der Linde, “Green and Competitive: Ending the Stalemate,” Harvard Business Review 73, no. 5 (September/October 1995): 120–34. Publication in a top business journal read by executives was important because it sent a strong signal to business that new ideas were emerging: integrating environmental and health concerns into strategy could enhance a company’s competitive position. Business-executive-turned-educator Robert Frosch had already published his ideas about recovering waste materials in closed-loop systems in “Closing the Loop on Waste Materials.”Robert A. Frosch, “Closing the Loop on Waste Materials,” in The Industrial Green Game (Washington, DC: National Academy Press, 1997), 37–47. For a former executive of a major corporation to talk about recovering and using waste streams as assets and inputs for other production processes represented a breakthrough. Earlier classics such as Garrett Hardin’s “The Tragedy of the Commons” and Kenneth Boulding’s “The Economics of the Coming Spaceship Earth” continued to serve as foundations for new thinking about the contours of future business growth.Garrett Hardin, “The Tragedy of the Commons,” Science 16 (1968): 1243–48; Kenneth Boulding, “The Economics of the Coming Spaceship Earth” (paper presented at the Sixth Resources for the Future Forum on Environmental Quality in a Growing Economy, Washington, DC, March 8, 1966). A body of research and new reasoning was accumulating and diffusing, driving change in how people thought.
Even as the relationship among conventional business perspectives and environmental, health, and social issues shifted, albeit slowly, global problems continued to mount. Climate change debate moved from exclusively scientific conversations to mainstream media outlets. In the summer of 1988, an unprecedented heat wave struck the United States, killing livestock by the thousands and wiping out a third of the country’s grain crop. The issue of global warming or, more appropriately, global climate change entered the headlines with new force.Kirkpatrick Sale, The Green Revolution: The American Environmental Movement, 1962–1992 (New York: Hill & Wang, 1993), 71. During the heat wave, Dr. James E. Hansen of the National Aeronautics and Space Administration (NASA) warned a Senate committee that the greenhouse effect—the process by which excessive levels of various gases in the atmosphere cause changes in the world’s climate—had probably already arrived.Philip Shabecoff, A Fierce Green Fire: The American Environmental Movement (New York: Hill & Wang, 1993), 196. The United Nations Environment Programme and the World Meteorological Organization established the Intergovernmental Panel on Climate Change (IPCC) in 1988 to study climate change. With input from over nine hundred scientists, the IPCC published its Second Assessment Report in 1995, which concluded that by the year 2100, temperatures could increase from 2°F to 6°F, causing seas to rise from 6 to 38 inches with changes in drought and flooding frequency. Citing a 30 percent rise in atmospheric carbon dioxide since the dawn of the Industrial Age, the IPCC reported that “the balance of evidence suggests a discernible human influence on global climate.” Twenty-four hundred scientists endorsed these findings.Paul Raeburn, “Global Warming: Is There Still Room for Doubt?” BusinessWeek, November 3, 1997, 158.
As with the issue of ozone depletion, an international conference was convened in December 1997 in Kyoto, Japan, to address the problem of global climate change. Representatives from over 160 nations hammered out an agreement known as the Kyoto Protocol to the United Nations Framework Convention on Climate Change (UNFCCC). The protocol, seen as a first step in addressing climate change issues, required developed nations to reduce their emissions of greenhouse gases by an average of 5.2 percent below 1990 levels by the years 2008 to 2012. Regulated greenhouse gases included carbon dioxide, nitrous oxide, methane, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride. To date, the US Senate has not ratified the agreement, and President George W. Bush rejected the Kyoto Protocol.
Subsequent IPCC assessment reports refined the predictions for particular regions of the world; the fourth was published in 2007. Other materials followed, such as the National Academy Press publication The Industrial Green Game in 1997, as leading scientists and business experts spoke out together about a need for new thinking. The book highlighted issues of national if not international concern, such as product redesigns and management reforms whose intent was to avoid environmental and health problems before they arose. A full life-cycle approach and systems thinking, deemed essential to the new industrial green game, were fundamental to the evolving alternative paradigm.
The global environmental threat from industrial chemicals was brought to the public’s attention with the 1996 publication of a book titled Our Stolen Future, which quickly became known as the sequel to Silent Spring. The authors, Theo Colborn, John Peterson Myers, and Dianne Dumanoski, building on decades of scientific research, raised the prospect that the human species, through a buildup of certain synthetic chemicals in human cells, might be damaging its ability to reproduce and properly develop. These chemicals, called “endocrine disrupters,” mimic natural hormones and thus disturb reproductive and developmental processes. Initial studies linked these chemicals to low sperm counts, infertility, genital deformities, neurological and behavioral disorders in children, hormonally triggered human cancers, and developmental and reproductive abnormalities in wildlife.Theo Colborn, Dianne Dumanoski, and John Peterson Myers, Our Stolen Future (New York: Dutton, 1996), vi. The buildup of chemical contaminants in the human body was documented in research reported in 2010 by the US Centers for Disease Control.From the US Centers for Disease Control and Prevention, “National Report on Human Exposure to Environmental Chemicals,” accessed December 29, 2010, http://www.cdc.gov/exposurereport; “The Fourth National Report on Human Exposure to Environmental Chemicals is the most comprehensive assessment to date of the exposure of the U.S. population to chemicals in our environment. CDC has measured 212 chemicals in people’s blood or urine—75 of which have never before been measured in the U.S. population. What’s new in the Fourth Report: The blood and urine samples were collected from participants in CDC’s National Health and Nutrition Examination Survey, which is an ongoing survey that samples the U.S. population every two years. Each two year sample consists of about 2,400 persons. The Fourth Report includes findings from national samples for 1999–2000, 2001–2002, and 2003–2004. The data are analyzed separately by age, sex and race/ethnicity groups. The Updated Tables, July 2010 provides additional data from the 2005–2006 survey period for 51 of the chemicals previously reported through 2004 in the Fourth Report and the new addition of four parabens and two phthalate metabolites in 2005–2006.” New science showing the transfer of chemicals from mother to fetus through the umbilical cord and from mother to child through breast milk brought new attention to chemicals and human health in 2009.Sara Goodman, “Tests Find More Than 200 Chemicals in Newborn Umbilical Cord Blood,” Scientific American, December 2, 2009, accessed March 7, 2011, www.scientificamerican.com/article.cfm?id=newborn-babies-chemicals-exposure-bpa.
Unfortunately, most leaders in the business community and business schools were not ready to discuss the scientific evidence and its implications. In the US business community, where the prior politics of environmentalism and business resistance to the threat of regulation had polarized debate, the conversations were not productive. Top business schools followed mainstream business thinking well into the first decade of the twenty-first century, marginalizing the topics as side issues to be dealt with exclusively by ethics professors or shunting them to courses or even other schools that focused on regulation, public policy, or nonprofit management.
Endocrine Disrupters
Men with higher levels of a metabolite of the phthalate DBP [dibutyl phthalate] have lower sperm concentration and mobility, low enough to be beneath levels considered by the World Health Organization to be healthy. Exposures were not excessive, but instead within the range experienced by many people.Our Stolen Future, “Semen Quality Decreases in Men with Higher Levels of Phthalate,” www.ourstolenfuture.org/newscience/oncompounds/phthalates/2006/2006-1101hauseretal.html.
Slowly, however, the groundwork was laid for significant and prevalent changes in how businesses relate to the environment. In the 1987 Our Common Future report discussed in Chapter 1, Section 1.2, the commission wrote, “Many essential human needs can be met only through goods and services provided by industry.…Industry extracts materials from the natural resource base and inserts both products and pollution into the human environment. It has the power to enhance or degrade the environment; it invariably does both.”World Commission on Environment and Development, Our Common Future (New York: Oxford University Press, 1987), 206.
Embedded within the statement was a particular linkage among previously conflicting interests. This would usher in a new way of doing business. As Mohan Munasinghe of the IPCC explained, “sustainable development necessarily involves the pursuit of economic efficiency, social equity, and environmental protection.”Mohan Munasinghe, Wilfrido Cruz, and Jeremy Warford, “Are Economy-wide Policies Good for the Environment?” Finance and Development 30, no. 3 (September 1993): 40. Thus, beginning in the 1990s, thanks to the efforts of a small number of pioneering firms and spokespersons able to span the science-business gap, sustainability as a business strategy was emerging as a powerful new perspective to create value for multiple stakeholders. A sustainable business perspective—and the sustainability innovations created by entrepreneurs—is the current evolutionary stage in an increasingly sophisticated corporate response to environmental and social concerns.
Key Takeaways
• In the 1980s and 1990s, population growth and the scale of industrialization and concomitant environmental concerns led to the pursuit of “sustainable” business models that acknowledged health and ecological system constraints.
• Pressure on companies grew due to scientific discoveries about pollutants, waste disposal challenges, oil spills, and other accidents.
• Proliferation and diffusion of reports educated the public and government officials, resulting in increased pressure for regulatory action and corporate response.
Exercises
1. What opportunities for business innovation and entrepreneurship can you identify given the trends and historical information?
2. What implications can you deduce from the population growth trends projected for the next fifty years?
3. If entrepreneurial opportunity is a response to inefficiencies in the market, what inefficiencies can you identify?
4. Summarize the mind-set of someone born in the 1960s with respect to knowledge and attitudes about sustainability compared to someone born in the late 1980s or 1990s.
An Overview of the Historical Context for Sustainable Business in the United States, 1960–2000
Sources: Richard R. Johnson, Andrea Larson, and Elizabeth Teisberg, The Path to Sustainable Business: Environmental Frameworks, Practices and Related Tools, UVA-ENT-0033 (Charlottesville, VA: Darden Business Publishing, University of Virginia, 1997); updated by author Andrea Larson to 2009. See comprehensive update: Andrea Larson, Sustainability and Innovation: Frameworks, Concepts, and Tools for Product and Strategy Redesign, UVA-ENT-0138 (Charlottesville, VA: Darden Business Publishing, University of Virginia, January 2010).

| Year | Event | Legislation | Environmental Framework for Business (full discussions appear in Chapter 3) |
| --- | --- | --- | --- |
| 1962 | Silent Spring | | |
| 1963 | New York City smog-related fatalities | | |
| 1964 | Mississippi River fish kills | | |
| 1969 | Cuyahoga River fire; Santa Barbara oil spill; Moon landing | | |
| 1970 | First Earth Day | National Environmental Policy Act (NEPA); Clean Air Act (CAA) | Pollution control |
| 1972 | The Limits to Growth | Federal Water Pollution Control Act (FWPCA; became Clean Water Act); Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) | |
| 1973 | “Oil shock” | Endangered Species Act (ESA) | |
| 1974 | | Safe Drinking Water Act (SDWA) | |
| 1975 | | | Pollution prevention |
| 1976 | Seveso explosion | Toxic Substance Control Act (TSCA); Resource Conservation and Recovery Act (RCRA) | |
| 1977 | | Clean Air Act amendments; Clean Water Act amendments | |
| 1978 | Love Canal; Second “oil shock” | | |
| 1979 | Three Mile Island | | |
| 1980 | Global 2000 Report | Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA, a.k.a. Superfund) | |
| 1983 | Federal acid rain studies | | |
| 1984 | Bhopal | | |
| 1985 | Ozone hole over Antarctica discovered | | |
| 1986 | Chernobyl | Superfund Amendments and Reauthorization Act (SARA) | |
| 1987 | Mobro 4000 trash barge; Montreal Protocol; Our Common Future | Clean Water Act amendments | Sustainable development |
| 1988 | Medical waste on NY and NJ beaches; Global warming | | |
| 1989 | Exxon Valdez | | Industrial ecology; The Natural Step (a framework discussed in Chapter 3) |
| 1990 | World Business Council for Sustainable Development (WBCSD) formed | Clean Air Act Amendments of 1990 | |
| 1992 | Rio Earth Summit; Changing Course | | Design for Environment (DfE); Eco-efficiency |
| 1993 | The Ecology of Commerce | | Sustainable design |
| 1996 | Our Stolen Future | | |
| 1997 | Kyoto Protocol | | |
| 2001 | Toxic dust from World Trade Center and Pentagon attacks | | |
| 2002 | Cradle to Cradle: Remaking the Way We Make Things | | Eco-effectiveness |
| 2005 | Capitalism at the Crossroads; EU begins greenhouse gas emission trading scheme | | Beyond greening |
| 2006 | An Inconvenient Truth | | |
| 2007 | Melamine-tainted pet food and leaded toys from China | Supreme Court rules in Massachusetts v. Environmental Protection Agency (EPA) that EPA should regulate carbon dioxide and greenhouse gases under CAA | |
| 2008 | Summer gas prices exceed \$4 per gallon | Consumer Product Safety Improvement Act | |
| 2009 | Regional Greenhouse Gas Initiative begins trading | | |
Learning Objectives
1. Appreciate the scope and complexity of the challenges that have recently spurred sustainability innovation with respect to energy and materials.
2. Gain insight into the fundamental drivers creating opportunities for entrepreneurs and new ventures in the sustainability innovation arena.
Sustainability innovators create new products and services designed to solve the problems created by the collision of economic growth, population growth, and natural systems. They seek integrated solutions that offer financial remuneration, ecological system protection, and improved human health performance, all of which contribute to community prosperity. Sustainability innovation, growing from early ripples of change in the 1980s and 1990s, now constitutes a wave of creativity led by a growing population of entrepreneurial individuals and ventures. This form of creativity applies to raw materials selection, energy use, and product design as well as company strategies across supply chains. It encompasses renewable energy technologies to reduce pollution and climate impacts as well as the safer design of molecular materials used in common household products. Today’s tough economic times and need for job creation, while seemingly detracting from environmental concerns, in fact underscore the importance of reducing energy, material, and waste costs, savings that a sustainability lens makes visible. In addition, because environmental health and ecological system degradation issues will only increase with economic growth, and public concern is unlikely to fade, those firms that explore sustainability efficiencies and differentiation opportunities now will be better positioned to weather the economic downturn.
Research indicates that individuals and ventures that pursue these objectives often work through networks of diverse supply-chain collaborations to realize new and better ways of providing goods and services. As a result, a plethora of substitute products, technologies, and innovative ways of organizing that address pollution, health, resource use, and equity concerns are being introduced and tested in the marketplace. This is the challenge and the excitement of sustainability innovation. In this chapter we look more closely at sustainability innovation. What forces have driven it, and how is it being defined?
Two areas, energy and materials, provide useful entry points for exploring why businesses are increasingly using sustainability frameworks for thinking about the redesign of their products and operations. However, in the first decade of the twenty-first century, the media and public increasingly focused on climate change as the top environmental issue. Severe storms and other extreme weather patterns predicted by climate change scientists had become more evident. Hurricane Katrina in New Orleans, accelerated Arctic and Antarctic warming, rising ocean levels, and increasing carbon dioxide (CO2) concentrations were discussed widely in scientific reports and the mainstream media as examples of how human actions shaped natural systems’ dynamics. At the biological level, the accumulation of industrial chemicals in adults’ and children’s bodies was reported as one of many wide-ranging examples of system equilibrium disruption. There was growing discussion of tipping points and ways to contain change within an acceptable range of variation for continued human prosperity.
Partly in response to this growing concern, globally and within nation-states, markets for carbon; clean and more efficient energy; and safer, cleaner products have grown rapidly. These markets will continue to expand given economic growth trajectories, the rapid movement of more people into a global middle class, and the constrained capacities of natural systems, including our bodies, to absorb the impacts.
While some hear only negative news in these words, entrepreneurs and innovators typically do not spend much time on the negative messages. They use innovation to create alternatives. They envision new and better possibilities. They take action to address perceived inefficiencies and to solve problems. Health and environmental problems, the inefficiencies related to pollution, and the newly understood health threats are viewed as opportunities for entrepreneurially minded individuals and ventures to offer substitutes.
The shift in perception about industrial and commercial pollution and adverse impacts has been augmented by a new appreciation of the scale and scope of human activity. For example, a short time ago pollution was considered a manageable local problem (and even a visible indicator of economic progress). Today our scientific knowledge has advanced to see not just visible acute pollution challenges as health problems but also molecular depositions far from their source; in other words, problems stretching across local, regional, and even global scales are major unintended effects of industrialization.
Table 2.1 Changes in the Character of the Ecological and Health Challenges, Pre-1980s versus Post-1980s

| Pre-1980s | Post-1980s |
|---|---|
| Minor | Systemic |
| Localized | Global |
| Dispersed and separate | Tightly coupled |
| Simple | Complex |
| Isolated | Ubiquitous |
| Stable and visible | Turbulent and hard to discern |
| Slow-moving | Accelerated |
By 2010 there was scientific and policy acknowledgement of the physical impossibility of maintaining ecosystems’ stability in the face of the existing and the anticipated scale and scope of pollution levels. A biosphere that seemed a short time ago to be infinite in its capacity to absorb waste and provide ecosystem services showed growing evidence of limits. Thus today, satisfying the legitimate material and energy demands of billions of upwardly mobile people in the global community, without severely disrupting ecosystem functions and exacting harsh human costs, is a first-order challenge for economic and business design. This problem is soluble, but it requires creativity that reaches beyond conventional thinking to imagine new models for economic growth and for business. In fact, companies in increasing numbers are now adopting sustainability principles in their product designs and strategies. Recognizing the problem-complexity shift represented by the second column in Table 2.1, companies are taking on what can be called a sustainability view of their world. The changes under way are captured in Table 2.2, which compares the old business approach, defined by more narrowly framed environmental issues, and leading entrepreneurial innovators’ perspectives on sustainability challenges.
Table 2.2 Traditional View versus Sustainability View

| Traditional view | Sustainability view |
|---|---|
| Rhetoric and greenwash | Operational excellence |
| Cost burden | Efficiencies |
| Compliance | Cost competitiveness/strategic advantage |
| Doing good/altruism | Strong financial performance |
| Peripheral to the business | Core to the business |
| Technology fix | Frameworks, tools, and programs |
| Reactive | Innovative and entrepreneurial |
Let’s start at a more macro level of analysis that allows us to track the reframing of what historically have been called environmental concerns. To better understand the functioning and interdependencies of the natural and human-created systems of which we are a part, we can look at basic energy and material flows. Even a cursory look reveals some of the major challenges. Fossil fuel energy consumption is closely linked to local and global climate modification, ocean acidification (and consequently coral reef degradation that undermines ocean food supplies), and ground-level air pollution, among other problems. Materials extraction and use are tightly coupled with unprecedented waste disposal challenges and dispersed toxins. Furthermore, in our search for energy and materials to fuel economic growth and feed more people, we have been systematically eliminating the habitat and ecosystems on which our future prosperity depends.
In 1900 a business did not have to think about its impact on the larger natural world. However, with population growth, a rapidly expanding global economy, and greater transparency demanded from civil society, firms feel increasing pressure to adapt to a more constrained physical world. The existing business model is being challenged by entrepreneurial innovations offering different ways of thinking about business in society. Thus, by studying sustainability innovation, we are able to look at alternative business models for the future.
Americans have long voiced support for environmental issues in public opinion polls. That concern has grown, especially as human-influenced climate change became increasingly apparent and a harbinger of broader ecological and health challenges. Even as the US economy faltered dramatically in late 2008, 41 percent of respondents to a survey for the Pew Research Center stated in January 2009 that the environment should remain the president’s top priority, while 63 percent thought the same when President Bush was in office in 2001.Pew Research Center for the People and the Press, “Economy, Jobs Trump All Other Policy Priorities in 2009,” news release, January 22, 2009, accessed March 27, 2009, http://people-press.org/report/485/economy-top-policy-priority. In a different series of polls conducted by Pew between June 2006 and April 2008, over 70 percent of Americans consistently said there is “solid evidence” that global warming is occurring, and between 41 and 50 percent said human activity is the main cause. Independents and Democrats were one and one-half times to twice as likely as Republicans to agree to the statements, indicating ongoing political divisions over the credibility or impartiality of science and how it should inform our response to climate change.Pew Research Center for the People and the Press, “A Deeper Partisan Divide over Global Warming,” news release, May 8, 2008, accessed March 27, 2009, people-press.org/reports/pdf/417.pdf. Regardless of climate change public opinion polls, however, by 2010 energy issues had gained national attention for an ever-broadening set of reasons.
In fact, by 2010 climate change often was linked to energy independence and energy efficiency as the preferred strategy for getting both liberals and conservatives to address global warming. This approach emphasized saving money by saving energy and deploying innovative technology rather than relying on federal mandates and changes to social behavior to curb emissions. Under President Obama, the federal government was asked to do more. Energy independence included reduced reliance on imported oil as well as nurturing renewable energy technologies and local solutions to electricity, heating and cooling, and transportation needs. The Energy Independence and Security Act of 2007, among other things, increased fuel economy standards for cars, funded green job training programs, phased out incandescent light bulbs, and committed new and renovated federal buildings to being carbon-neutral by 2030.
Meanwhile, renewable energy sources continue to inch upward. By 2007, just over 71 quadrillion British thermal units of energy were produced in total in the United States. About 9.5 percent of that energy came from renewable sources: hydroelectric (dams), geothermal, solar, wind, and wood or other biomass. Indeed, wood and biomass accounted for about 52 percent of all renewable energy production, while hydroelectric power represented another 36 percent. Wind power represented about 5 percent of renewable energy and solar 1 percent.Energy Information Administration, Department of Energy, “Table 1.2: Primary Energy Production by Source, 1949–2009,” Annual Energy Review, accessed March 27, 2009, www.eia.doe.gov/emeu/aer/txt/ptb0102.html. The numbers were relatively small, but each of these markets was experiencing double-digit growth rates, offering significant opportunities to investors, entrepreneurs, and firms that wanted to contribute to cleaner energy and reduced fossil fuel dependence.
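The percentages above are easy to sanity-check with a little arithmetic; the following sketch (using the 2007 figures cited in the text, rounded) converts the cited shares into absolute quadrillion Btu ("quads"):

```python
# Back-of-the-envelope check of the 2007 US energy figures cited above.
# All figures are approximate, as in the text.
total_quads = 71.0            # total US primary energy production, quadrillion Btu
renewable_share = 0.095       # ~9.5% came from renewable sources

renewable_quads = total_quads * renewable_share   # ~6.7 quads

# Shares of the renewable slice, per the text
breakdown = {"wood/biomass": 0.52, "hydroelectric": 0.36, "wind": 0.05, "solar": 0.01}
by_source = {name: renewable_quads * share for name, share in breakdown.items()}

for name, quads in by_source.items():
    print(f"{name}: ~{quads:.2f} quads")
```

Even though wind works out to only about a third of a quad, the double-digit growth rates mentioned in the text apply to that small base, which is why investors found the sector attractive.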
In fact, climate change took center stage among environmental issues in the first decade of this century, with public awareness heightened by unusual weather patterns. Hurricane Katrina, which devastated New Orleans in 2005, was interpreted as a sign of worse storms to come. The Intergovernmental Panel on Climate Change (IPCC) released its Fourth Assessment Report in 2007. This report affirmed that global climate change was largely anthropogenic (caused by human activity) and indicated that change was occurring more rapidly than anticipated. Almost a doubling of the rate of sea level rise was recorded from 1993 to 2003 compared to earlier rates, and a steady increase in the ocean’s acidity was verified.Rajendra K. Pachauri and Andy Reisinger, eds. (core writing team), Climate Change 2007: Synthesis Report (Geneva, Switzerland: Intergovernmental Panel on Climate Change, 2008), accessed November 30, 2010, http://www.ipcc.ch/publications_and_data/publications_ipcc_fourth_assessment_report_synthesis_report.htm. The ocean’s pH decreased about 0.04 pH units from 1984 to 2005. pH is measured on a logarithmic scale from 0 to 14, with a decrease of one pH unit meaning a tenfold increase in acidity. The 2006 Stern Review on the Economics of Climate Change, commissioned by the Treasury of the United Kingdom, attempted to put a cost on the price of business as usual in the face of climate change. It estimated climate change could incur expenses equivalent to 5 to 20 percent of the global gross domestic product (GDP) in the coming decades if nothing changed in our practices, whereas acting now to mitigate the impact of climate change would cost only about 1 percent of global GDP. As the report concluded, “Climate change is the greatest market failure the world has ever seen.”Sir Nicholas Stern, Stern Review on the Economics of Climate Change (London: HM Treasury, 2006), viii, accessed March 26, 2009, http://www.hm-treasury.gov.uk/sternreview_index.htm.
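Because the pH scale is logarithmic, even the seemingly tiny 0.04-unit drop cited above corresponds to a meaningful change in acidity. A quick calculation (using only the figure from the text) makes this concrete:

```python
# pH is the negative base-10 logarithm of hydrogen-ion concentration,
# so a pH *drop* of d units multiplies the H+ concentration by 10**d.
delta_ph = 0.04   # reported ocean pH decrease, 1984-2005 (from the text)

acidity_ratio = 10 ** delta_ph              # ~1.10x the original concentration
percent_increase = (acidity_ratio - 1) * 100

print(f"~{percent_increase:.1f}% increase in acidity")

# For comparison, a full one-unit pH drop means a tenfold increase:
print(10 ** 1)  # 10
```

In other words, a 0.04-unit decline is roughly a 10 percent rise in hydrogen-ion concentration, which is why ocean scientists treat small pH shifts as significant.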
Also in 2007, former vice president Al Gore’s documentary on climate change, An Inconvenient Truth, won an Oscar for best feature documentary, while Gore and the IPCC were jointly awarded the Nobel Peace Prize. Although debates over the science continued, the consensus of thousands of scientists worldwide that the atmospheric concentrations of CO2 were at least in part man-made firmly placed global climate and fossil fuel use on the agenda. National policies and the US military engagements related to securing and stabilizing oil imports and prices focused attention further on avoiding oil dependency. Indicating resource issues’ close link to social conflicts, in 2008 the National Intelligence Estimate report from the CIA and other agencies warned climate change could trigger massive upheaval, whether from natural disasters and droughts that destabilized governments or increased flows of climate refugees, both the result of and cause of competition over resources and civil unrest.
Trailer for An Inconvenient Truth
The 2006 film An Inconvenient Truth chronicles the perils of climate change and former US Vice President Al Gore’s work to alert people to the danger.
www.climatecrisis.net/trailer.
The 2008 Olympic Games in Beijing, meanwhile, highlighted the increasing pollution from high-growth industrializing countries. That year China eclipsed the United States as the leading emitter of CO2, while Chinese officials had to take steps to prevent athletes and tourists from choking in Beijing’s notorious smog. To reduce the worst vehicle emissions in the days leading up to the games, cars with even license plate numbers could drive one day, odd the next, and factories were shut down.Paul Kelso, “Olympics: Pollution over Beijing? Don’t Worry, It’s Only Mist, Say Officials,” Guardian (London), August 6, 2008, accessed November 30, 2010, http://www.guardian.co.uk/sport/2008/aug/06/olympics2008.china; Talea Miller, “Beijing Pollution Poses Challenge to Olympic Athletes,” PBS NewsHour, May 16, 2008, accessed November 30, 2010, www.pbs.org/newshour/indepth_coverage/asia/china/2008/athletes.html. India also has struggled to curb pollution as its industrialization accelerates. The World Bank estimated India’s natural resources will be more strained than any other country’s by 2020.“India and Pollution: Up to Their Necks in It,” Economist, July 17, 2008, accessed November 30, 2010, http://www.economist.com/world/asia/displaystory.cfm?story_ id=11751397.
To those living in a developed country, particularly in the United States where climate change continues to be debated, warming temperatures can seem somewhat abstract. The following links provide narratives and visual appreciation for how climate change actually influences many people around the world.
Bangladesh Migration Forced by Sea-Level Rise
http://www.guardian.co.uk/environment/video/2009/nov/30/bangladesh-climate-migration
A More General Travelogue (Nepal to Bangladesh) of Effects of Glacial Retreat on People
http://www.guardian.co.uk/environment/video/2009/dec/07/copenhagen-nepal-bangladesh
Glacier Melt in China Affects People
http://www.guardian.co.uk/environment/video/2008/jul/25/glacier.tian
Global Warming Affects Inuit in Canada
www.cbsnews.com/video/watch/?id=3181766n
Broad scientific consensus on climate change and its origin, the increased concentration of greenhouse gases (GHGs) in the atmosphere, has motivated hundreds of US cities, from Chicago to Charlottesville, to pledge to follow the Kyoto Protocol, an international agreement among countries formally initiated in 1997 whose goal is to reduce GHG emissions. These cities committed to cutting emissions within their municipalities through a variety of mechanisms, including setting green building standards.
This city movement is under way despite eight years of opposition from President Bush’s administration and the Obama administration’s unsuccessful effort to promote a national carbon policy. States also took the lead on many other environmental issues, and according to the Pew Center on Global Climate Change, as of January 2009, twenty-nine states had mandatory renewable energy portfolio standards to encourage the growth of wind, solar, and other energy sources besides fossil fuels. This meant states set target dates by which some percentage (5 to 25 percent, for example) of the energy used within the state must come from renewable energy technology. Another six states had voluntary goals.Pew Center on Global Climate Change, “Renewable & Alternative Energy Portfolio Standards,” October 27, 2010, accessed November 30, 2010, www.pewclimate.org/what_s_being_done/in_the_states/rps.cfm. California’s 2006 Global Warming Solutions Act committed the state to reduce GHG emissions from stationary sources. In fall 2010, California voters affirmed the state’s comprehensive climate law designed to promote renewable energy, green-collar jobs, and lower-emission vehicles, along with other advanced sustainability-focused technologies.

Transportation is also a heavy contributor to CO2 emissions. Regulation of GHG emissions from vehicles may join a series of other regulations on mobile pollution sources. Because trading programs have succeeded in reducing nitrogen oxides and sulfur dioxide from stationary sources, vehicles have increased their relative contribution to acid rain and ground-level ozone, or smog. Each vehicle today may pollute less than its counterpart in 1970, but Americans have more cars and drive them farther, thus increasing total pollution from this sector. The US Environmental Protection Agency (EPA) acknowledges, “Transportation is also the fastest-growing source of GHGs in the U.S., accounting for 47 percent of the net increase in total U.S. emissions since 1990.”US Environmental Protection Agency, Office of Transportation and Air Quality, “Transportation and Climate: Basic Information,” last modified September 14, 2010, accessed November 30, 2010, www.epa.gov/OMS/climate/basicinfo.htm. Other countries have seen similar increases in vehicles and their associated pollution.
Although few countries regulated GHGs from vehicles as of 2009, many have focused on reducing other pollutants. The United States, the European Union, India, China, and other countries realized that particulate matter emissions from diesel fuel in particular could not be controlled at the tailpipe or locomotive exhaust vent without changing the whole supply chain, and without that change, about 85 percent of the largest cities in developing countries would continue to suffer poor air quality.United Nations Environment Programme, Partnership for Clean Fuels and Vehicles, “Background,” accessed November 30, 2010, www.unep.org/pcfv/about/bkground.asp. Thus US refineries have been mandated to produce diesel fuel at or below fifteen parts sulfur per million. This is being phased in for vehicles, trains, ships, and heavy equipment from 2006 to 2014. The lower sulfur content both reduces the sulfur dioxide formed during combustion and allows the use of catalytic converters and other control technology that would otherwise be rapidly corroded by the sulfur.
For CO2 from these mobile sources, in 2009 President Obama asked the EPA to reconsider California’s request to regulate GHG emissions from vehicles, a request initially denied under the Bush administration despite a 2007 Supreme Court ruling that required the EPA to regulate GHGs under the Clean Air Act. Assuming California adopts stricter vehicle emissions standards, almost twenty other states will adopt those standards. Moreover, the American Recovery and Reinvestment Act of 2009 appropriated billions of dollars for green infrastructure, including high-speed rail.
Interactive Timeline of California Petition to Regulate GHGs from Cars
www.americanprogress.org/issues/2009/01/emissions_timeline.html
The Kyoto Protocol itself, nonetheless, faced an uncertain fate under the Obama administration. Discussions for the successor to Kyoto were held in December 2009 in Copenhagen. In the interim between those two frameworks, over 180 nations plus nongovernmental organizations (NGOs)—many criticized for the carbon footprint of traveling in private jets—attended the UN Bali Climate Change Conference in December 2007.
As climate change and its consequences have become increasingly accepted as real, more people and institutions are considering their “carbon footprints,” the levels of CO2 associated with a given activity. A number of voluntary programs, such as the Climate Registry, ISO 14000 for Environmental Management, and the Global Reporting Initiative, emerged to allow organizations and businesses to record and publicize their footprint and other environmental performance tracking. To assess and abet such efforts, in 2000 the US Green Building Council introduced a rating system called Leadership in Energy and Environmental Design (LEED). Buildings earn points for energy efficiency, preserving green space, and so on; points then convert to a certification from basic to platinum. The 7 World Trade Center building, for instance, was gold certified upon its reconstruction in 2006.Taryn Holowka, “7 World Trade Center Earns LEED Gold,” US Green Building Council, March 27, 2006, accessed March 27, 2009, www.usgbc.org/News/USGBCNewsDetails.aspx?ID=2225. Other green building programs have appeared, while groups such as TerraPass and CarbonFund began selling carbon offsets that let people compensate for the impact of their emissions. Investors also have jumped in. Sustainable-investment funds allow people to buy stocks in companies screened for environmental practices and to press shareholder resolutions. For example, institutional investors representing state retirement funds have asked for evidence that management is fulfilling its fiduciary responsibility to protect the stock price against climate change impacts and other unexpected ecological and related political surprises.
The Social Investment Forum’s 2007 Report on Socially Responsible Investing Trends in the United States noted that about 11 percent of investments under professional management in the United States—\$2.7 trillion—adhered to one or more strategies of “socially responsible investment,” a category encompassing governance, ecological, health, and safety concerns.Social Investment Forum, 2007 Report on Socially Responsible Investing Trends in the United States (Washington, DC: Social Investment Forum Foundation, 2007), accessed March 27, 2009, www.socialinvest.org/resources/research.
Materials and Chemicals
In conjunction with threats to the globe’s ecosystems (a somewhat removed and therefore abstract notion for many), people became increasingly aware of threats to their personal health. This concern shifts attention from climate and energy issues at a more macro level to the material aspects of pollution and resource management.
Knowledge about health threats from chemical exposure goes back centuries. Lead and mercury have been known human toxins for hundreds of years; the “mad hatter” syndrome was caused by hat makers’ exposure to mercury, a neurotoxin. The scale and scope of chemicals’ impacts, combined with dramatically improved scientific analysis and monitoring, distinguish today’s challenges from those of the past. Bioaccumulation and persistence of chemicals, the interactive effects among chemicals once in the bloodstream, and the associated disruptions of normal development have continued to cause concern through 2010. Chemical off-gassing from materials used to build Federal Emergency Management Agency (FEMA) temporary housing trailers, which caused health problems for victims of Hurricane Katrina; the ongoing health problems of early responders to the 9/11 terrorist attack in New York City; and health issues associated with bisphenol A (BPA) in hard plastic containers and food and beverage cans are some of the well-known public concerns raised in the last few years.The US Department of Health and Human Services offers suggestions to parents on limiting children’s exposure. See US Department of Health and Human Services, “Bisphenol A (BPA) Information for Parents,” accessed November 30, 2010, http://www.hhs.gov/safety/bpa.
The national Centers for Disease Control and Prevention began periodic national health and exposure reports soon after the publication of Our Stolen Future, authored by Theo Colborn, Dianne Dumanoski, and John Peterson Myers.See the home page for the book: “Our Stolen Future,” accessed March 7, 2011, www.ourstolenfuture.org Considered by many as the 1990s sequel to Rachel Carson’s groundbreaking 1962 book Silent Spring, which informed and mobilized the public about pesticide impacts, Our Stolen Future linked toxins from industrial activity to widespread and growing human health problems including compromises in immune and reproductive system functions. In 2005, the federal government’s Third National Report on Human Exposure to Environmental Chemicals found American adults’ bodies contained noticeable levels of over one hundred toxins (our so-called body burden), including the neurotoxin mercury taken up in our bodies through eating fish and absorbing air particulates (from fossil fuel combustion) and phthalates (synthetic materials used in production of personal care products, pharmaceuticals, plastics, and coatings such as varnishes and lacquers). Phthalates are associated with cancer outcomes and fetal development modifications.
BPA, an endocrine-disrupting chemical that can influence human development even at very low levels of exposure, has been associated with abnormal genital development in males, neurobehavioral problems such as attention deficit/hyperactivity disorder (ADHD), type 2 diabetes, and hormonally mediated cancers such as prostate and breast cancers.Frederick S. vom Saal, Benson T. Akingbemi, Scott M. Belcher, Linda S. Birnbaum, D. Andrew Crain, Marcus Eriksen, Francesca Farabollini, et al., “Chapel Hill Bisphenol A Expert Panel Consensus Statement: Integration of Mechanisms, Effects in Animals and Potential to Impact Human Health at Current Levels of Exposure,” Reproductive Toxicology 24, no. 2 (August/September 2007): 131–38, accessed November 30, 2010, www.ewg.org/files/BPAConsensus.pdf.
A recent update found three-fourths of Americans had triclosan in their urine, with wealthier Americans having higher levels.The report and updates are available from Centers for Disease Control and Prevention (CDC). See Centers for Disease Control and Prevention, “National Report on Human Exposure to Environmental Chemicals,” last modified October 12, 2010, accessed November 30, 2010, http://www.cdc.gov/exposurereport. This antibiotic is added to soaps, deodorants, toothpastes, and other products. In the first decade of the twenty-first century, pharmaceutical companies were coming under greater scrutiny as antibiotics and birth control hormones were found in city water supplies; the companies had to begin to assess their role in what has come to be called the PIE (pharmaceuticals in the environment) problem. Children, because of their higher consumption of food and water per body weight and their still-vulnerable and developing neurological, immune, and reproductive systems, are especially at risk.
The Prevalence of Contamination
Virtually all of America’s fresh water is tainted with low concentrations of chemical contaminants, according to the new report of an ambitious nationwide study of streams and groundwater conducted by the U.S. Geological Survey.C. Lock, “Portrait of Pollution: Nation’s Freshwater Gets Checkup,” Science News, May 22, 2004, accessed March 7, 2011, findarticles.com/p/articles/mi_m1200/is_21_165/ai_n6110353.
Europe has led the world in its public policy response to reduce the health risks of chemicals. After many years of debate and discussion with labor, business, and government, the EU adopted the “precautionary principle” in 2007, requiring manufacturers to show chemicals were safe before they could be introduced on a wide scale.European Commission, “What Is REACH?,” last modified May 20, 2010, accessed November 30, 2010, ec.europa.eu/environment/chemicals/reach/reach_intro.htm. The REACH regulation—Registration, Evaluation, Authorization, and Restriction of Chemicals—will be phased into full force by 2018. REACH requires manufacturers and importers to collect and submit information on chemicals’ hazards and practices for safe handling. It also requires the most dangerous chemicals to be replaced as safer alternatives are found.
The opposite system, which gathers toxicological information only after chemicals have spread, prevails in the United States. Hence only after a spate of contaminated products imported from China sickened children and pets did Congress pass the US Consumer Product Safety Act amendments in 2008 to ban lead and six phthalates from children’s toys. However, another common plastics additive, BPA, was not banned. Often found in #7 plastics, including popular water bottles seen on college campuses around the country, BPA was linked to neurological and prostate problems by the National Toxicology Program.National Institute of Environmental Health Sciences, National Toxicology Program, Bisphenol A (BPA) (Research Triangle Park, NC: National Institutes of Health, US Department of Health and Human Services, 2010), accessed November 30, 2010, www.niehs.nih.gov/health/docs/bpa-factsheet.pdf. Although the US Food and Drug Administration (FDA), unlike its EU and Canadian counterparts, chose not to ban the chemical, many companies stopped selling products with BPA.
Environmental Health Information
Environmental Health News provides environmental health information, global and updated daily.
www.environmentalhealthnews.org.
Indeed, consumers have been increasingly wary of materials that inadvertently enter their bodies through the products they use, the air they breathe, and the foods they eat. Sales of organic and local foods have been rising rapidly in numbers and prominence since the 1990s due to a greater focus on health. According to the Organic Trade Association, organic food sales climbed from \$1 billion in 1990 to \$20 billion in 2007.Organic Trade Association, “Industry Statistics and Projected Growth,” June 2010, accessed November 30, 2010, http://www.ota.com/organic/mt/business.html. Once found only in natural food stores, organic foods have been sold predominantly in conventional supermarkets since 2000.Carolyn Dimitri and Catherine Greene, Recent Growth Patterns in the U.S. Organic Foods Market, Agriculture Information Bulletin No. AIB-777 (Washington, DC: US Department of Agriculture, Economic Research Service, 2002), accessed December 1, 2010, www.ers.usda.gov/publications/aib777/aib777.pdf. Meanwhile, community-supported agriculture by 2007 encompassed nearly 13,000 farms as people grew more interested in sourcing from their local food shed.US Department of Agriculture, “Community Supported Agriculture,” last modified April 28, 2010, accessed November 30, 2010, http://www.nal.usda.gov/afsic/pubs/csa/csa.shtml. In addition to protection against food supply disruption due to fuel price volatility, terrorist attack, or severe weather (most foods are transported over 1,000 miles to their ultimate point of consumption, creating what many view as undesirable distribution system vulnerabilities), local food production ensures traceability (important for health protection), higher nutritional content, fewer or no chemical preservatives to extend shelf life, and better taste while providing local economic development and job creation.
Whether from energy production or materials processing, a major challenge across the board is where to put the waste. As visible and molecular waste accumulates, there are fewer places to dispose of it. Global carbon sinks, the natural systems (oceans and forests) that can absorb GHGs, show signs of stress. Oceans may have reached their peak absorption as they acidify and municipal waste washes onshore. Forests continue to shrink, unable to absorb additional CO2 emissions still being pumped into the atmosphere. The United Nations’ Food and Agriculture Organization reported that from 1990 to 2005, Africa lost about 3.1 percent of its forests; South America lost around 2.5 percent; and Central America, which had the highest regional rate of deforestation, lost nearly 6.2 percent of its forests. Individual countries have been hit particularly hard: Honduras lost 37 percent of its forests in those 15 years, and Togo lost a full 44 percent. However, the largest absolute loss of forests continues in Brazil, home of the Amazon rain forest. Brazil’s forests have been shrinking annually since 1990 by about three million hectares—an area about the size of Connecticut and Massachusetts combined.Food and Agriculture Organization of the United Nations, “Global Forest Resources Assessment 2005,” last modified November 10, 2005, accessed March 26, 2009, http://www.fao.org/forestry/32033/en.
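The country figures above are cumulative losses over the 15-year reporting period; converting them to annualized compound rates (a back-of-the-envelope sketch using only the percentages cited in the text) shows how quickly the worst-hit countries were losing forest each year:

```python
# Convert a cumulative multi-year forest loss into the equivalent annual
# compound loss rate, using: remaining = (1 - annual_rate) ** years
def annual_loss_rate(total_loss_fraction, years=15):
    return 1 - (1 - total_loss_fraction) ** (1 / years)

# Cumulative 1990-2005 losses cited in the text
losses = {"Honduras": 0.37, "Togo": 0.44, "Africa (region)": 0.031}

for place, loss in losses.items():
    print(f"{place}: ~{annual_loss_rate(loss) * 100:.2f}% lost per year")

# Brazil's absolute loss: ~3 million hectares per year over the same 15 years
brazil_total_ha = 3_000_000 * 15   # ~45 million hectares, 1990-2005
```

Togo’s 44 percent cumulative loss, for instance, works out to nearly 4 percent of remaining forest disappearing every year, a pace that compounds just as interest does.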
Video Clip
World Wildlife Fund Video on Deforestation
Solid waste, particularly plastics, has also come under increasing scrutiny because of its proliferation in and outside of landfills. Estimates put the number of plastic bags used annually in the early 2000s between five hundred billion and five trillion.John Roach, “Are Plastic Grocery Bags Sacking the Environment?,” National Geographic News, September 2, 2003, accessed November 30, 2010, news.nationalgeographic.com/news/2003/09/0902_030902_plasticbags.html; “The List: Products in Peril,” Foreign Policy, April 2, 2007, accessed March 25, 2008, www.foreignpolicy.com/story/cms.php?story_id=3762. These bags, made from oil, are linked to clogged waterways and choked wildlife. Mumbai, India, forbade stores from giving out free plastic bags in 2000. Bangladesh, Ireland, South Africa, Rwanda, and China followed suit with outright bans or fees for the bags.“The List: Products in Peril,” Foreign Policy, April 2, 2007, accessed March 25, 2008, www.foreignpolicy.com/story/cms.php?story_id=3762; “China Bans Free Plastic Shopping Bags,” International Herald Tribune, January 9, 2008, accessed November 30, 2010, www.iht.com/articles/2008/01/09/asia/plastic.php. San Francisco became the first US city to ban plastic bags at large supermarkets and pharmacies in 2007.Charlie Goodyear, “S.F. First City to Ban Plastic Shopping Bags,” San Francisco Chronicle, March 28, 2007, accessed March 25, 2009, www.sfgate.com/cgi-bin/article.cgi?file=/c/a/2007/03/28/MNGDROT5QN1.DTL. Los Angeles passed a similar ban in 2008 that takes effect in 2010 unless California adopts rules to charge patrons twenty-five cents per bag. Los Angeles had estimated that its citizens alone consumed about 2.3 billion plastic bags annually and recycled less than 5 percent of them.David Zahniser, “City Council Will Ban Plastic Bags If the State Doesn’t Act,” Los Angeles Times, July 23, 2008, accessed March 25, 2009, http://articles.latimes.com/2008/jul/23/local/me-plastic23.
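The Los Angeles figures invite a quick back-of-the-envelope check. The sketch below is illustrative arithmetic only, built from the rounded estimates quoted above (about 2.3 billion bags per year, under 5 percent recycled); it shows how many bags escape recycling each year even when the recycling rate is taken at its upper bound:

```python
# Back-of-the-envelope arithmetic using the Los Angeles figures quoted in
# the text. Both inputs are rounded estimates, not precise measurements.
bags_per_year = 2.3e9    # estimated annual consumption of plastic bags
recycling_rate = 0.05    # upper bound on the share recycled (<5 percent)

recycled_max = bags_per_year * recycling_rate
unrecycled_min = bags_per_year - recycled_max

print(f"Recycled (at most):    {recycled_max:,.0f} bags")
print(f"Unrecycled (at least): {unrecycled_min:,.0f} bags")
```

Even at the most generous reading of the recycling rate, roughly 2.2 billion bags per year entered the waste stream in Los Angeles alone.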
The Life Cycle and Impact of Business Activity on a Global Scale
www.storyofstuff.com
Bottled water may now face a similar fate because of the tremendous increase in trash from plastic bottles and the resources consumed to create, fill, and ship those bottles.Charles Fishman, “Message in a Bottle,” Fast Company, July 1, 2007, accessed March 26, 2009, www.fastcompany.com/magazine/117/features-message-in-a-bottle.html. New York City, following San Francisco; Seattle; Fayetteville, Arkansas; and other cities, has curbed buying bottled water with city money.Jennifer Lee, “City Council Shuns Bottles in Favor of Water from Tap,” New York Times, June 17, 2008, accessed March 26, 2009, www.nytimes.com/2008/06/17/nyregion/17water.html. The inability of natural systems to absorb the flow of synthetic waste was dramatically communicated with reports and pictures of the Great Pacific Garbage Patch, also known as the North Pacific Gyre. Pacific Ocean currents create huge eddies where plastic waste is deposited and remains in floating islands of garbage.
Video Clip
Greatgarbagepatch.org
Although manufacturers of other products from CDs to laundry detergent have already decreased the amount of packaging they use, and although many American municipalities have increased their recycling capacity, the results are far less than what is required to achieve sustainability, and they still lag behind Europe’s progress. The European Parliament and Council Directive 94/62/EC of December 1994 set targets for recycling and incinerating packaging to create energy. By 2002, recycling rates in the EU exceeded 55 percent for glass, paper, and metals, although only about 24 percent of plastic was being recycled.Europa, “Packaging and Packaging Waste,” accessed March 27, 2009, europa.eu/scadplus/leg/en/lvb/l21207.htm#AMENDINGACT. An EU directive from 2003 addressed electronic waste specifically, requiring manufacturers of electronic equipment to set up a system to recycle their products. Target recycling rates were initially set at 70 percent by weight for small, household electronics and 80 percent for large appliances, with separate rates for recycling or reusing individual components.Europa, “Waste Electrical and Electronic Equipment,” last modified January 6, 2010, accessed November 30, 2010, europa.eu/scadplus/leg/en/lvb/l21210.htm. The United States as of March 2009 had no federal mandate for reclaiming electronic waste (e-waste), although some states had implemented their own rules.US Environmental Protection Agency, “eCycling: Regulations/Standards,” last modified February 23, 2010, accessed November 30, 2010, www.epa.gov/epawaste/conserve/materials/ecycling/rules.htm. Companies such as Dell, criticized for their lack of attention to e-waste, responded to NGO and public concern with creative solutions. 
Working with citizen groups, Dell was able to shift from viewing e-waste as someone else’s problem to developing a profit-making internal venture that reused many electronic devices, put disassembled component materials back into secondary markets, and reduced the dumping of e-waste into poor countries.
KEY TAKEAWAYS
• The world is composed of energy and materials, and how we design business activity defines the ways we use energy and materials.
• There is growing concern that current patterns of use for energy and materials are not sustainable. Waste streams are the focus of much of this concern.
EXERCISES
1. Propose an idea for a product that has sustainability concepts designed in from the outset. How does this change your thinking about resources you might use? How might it change processes of decision making within the firm and across supply chains?
2. What key elements characterize the standard model of business? What barriers can you list that would need to be overcome to move a mainstream business to a sustainability view?
Learning Objectives
1. Understand how sustainability innovation has been defined.
2. Begin to apply the basic ideas and concepts of sustainability design.
Recognition that the global economy is processing the world’s natural resources and generating waste streams at an unprecedented scale and scope calls for the redesign of commercial activity. Reconfiguring how we conduct business and implementing business practices that preserve the world’s natural resources for today’s communities and the economic, environmental, and social health and vitality of future generations only recently has become a priority. This notion lies at the heart of sustainability. Sustainability in the business sense is not about altruism and doing what is right for its own sake. Businesses with successful sustainability strategies are profitable because they integrate consideration of clean design and resource conservation throughout product life cycles and supply chains in ways that make economic sense. Sustainability innovation is about defining economic development as the creation of private and social wealth to ultimately eliminate harmful impacts on ecological systems, human health, and communities.
Awareness of the problem of pollution and resource limits has existed for decades but until now only in fragmented ways across informed academic and scientific subcommunities. Today it is becoming self-evident that our past patterns of energy and material use must be transformed. While some still question the seriousness of the challenges, governments and companies are responding. Government is imposing more environmental, health, and safety regulatory constraints on business. However, while regulation may be an important part of problem solving, it is not the answer. Fortunately, businesses are stepping up to the challenge. In fact, the inherent inefficiencies and blind spots that are built into the accepted business and growth models that have been debated and discussed for many years are beginning to be addressed by business. Entrepreneurial innovators are creating solutions that move us away from needing regulation. In addition, recently the critiques have moved from periphery to mainstream as it has become increasingly clear to the educated public that the economic practices that brought us to this point are not sufficient to carry us forward. Since governments alone cannot solve the problems, it will take the ingenuity of people across sectors to generate progress. Sustainability innovation offers a frame for thinking about how entrepreneurial individuals and firms can contribute.
The new models of business sustainability are emerging. They are based on current science, pressure from governments, and citizen demand and envision a world in which human economic development can continue to be sustained by natural systems while delivering improved living standards for more people. That is the goal; however, it takes concrete actions striving toward that ideal to make headway. Those entrepreneurs and ventures embodying the ideal of sustainability have found creative ways to achieve financial success by offering products that improve our natural environment and protect and preserve human health, equity, and community vitality. We will now explore this term, sustainability, and its significance in entrepreneurial thinking.
General Definition
Sustainability innovation reflects the next generation of economic development thinking. It couples environmentalism’s protection of natural systems with the notion of business innovation while delivering essential goods and services that serve social goals of human health, equity, and environmental justice. It is the wave of innovation pushing society toward clean technology, the green economy, and clean commerce. It is the combined positive, pragmatic, and optimistic efforts of people around the world to refashion economic development into a process that addresses the fundamental challenges of poverty, environmental justice, and resource scarcity. At the organizational level, the term sustainability innovation applies to product/service and process design as well as company strategy.
Sustainability and sustainability innovation have been defined by different individuals representing diverse disciplines and institutions. Certain fundamentals lie at the concepts’ core, however, and we illuminate these fundamentals in the discussion that follows. Keep in mind that any given definition’s precision is less important than the vision and framework that guide actions in the direction of enduring healthy economic development. Later we will examine concepts and tools that are used to operationalize sustainability strategy and design. It is by combining existing definitions with an understanding of sustainability’s drivers and then studying how entrepreneurial innovators implement the concept that you gain the full appreciation for the change sustainability represents. Note that you will find the terms sustainability, sustainable business, and even sustainability innovation used loosely in the media and sometimes applied to activities that are only continued (“sustained”) as opposed to the meaning of sustainability we work with in this text. Our definition addresses the systemic endurance and smooth functioning of ecological systems and the preservation of carrying capacities, together with protection of human health, social justice, and vibrant communities. We are interested in entrepreneurial and innovative disruption that can accelerate progress along this path.
Sustainability: Variations on a Theme
Paul E. Gray, a former president of the Massachusetts Institute of Technology (MIT), stated in 1989 that “furthering technological and economic development in a socially and environmentally responsible manner is not only feasible, it is the great challenge we face as engineers, as engineering institutions, and as a society.”Paul E. Gray, “The Paradox of Technological Development,” in Technology and Environment (Washington, DC: National Academy Press, 1989), 192–204. This was his expression of what it meant for MIT to pursue sustainability ideas.
Sustainability Defined by Chemical Engineers
A sustainable product or process is one that constrains resource consumption and waste generation to an acceptable level, makes a positive contribution to the satisfaction of human needs, and provides enduring economic value to the business enterprise.Bhavik R. Bakshi and Joseph Fiksel, “The Quest for Sustainability: Challenges for Process Systems. Engineering,” AIChE Journal 49, no. 6 (2003): 1350.
Sustainability Defined by The Natural Step
Pediatric cancer physician and researcher Karl-Henrik Robèrt, the founder of an educational foundation called The Natural Step that helps corporations and municipalities implement sustainability strategies, conveys sustainability this way: “Resource utilization should not deplete existing capital, that is, resources should not be used at a rate faster than the rate of replenishment, and waste generation should not exceed the carrying capacity of the surrounding ecosystem.”Karl-Henrik Robèrt, The Natural Step: A Framework for Achieving Sustainability in Our Organizations (Cambridge, MA: Pegasus, 1997).
The Natural Step, a framework to guide decision making and an educational foundation with global reach based in Stockholm, Sweden, offers a scientific, consensus-based articulation of what it would mean for sustainability to be achieved by society and for humans to prosper and coexist compatibly with natural systems. Natural and man-made materials would not be extracted, distributed, and built up in the world at a rate exceeding the capacity of nature to absorb and regenerate those materials; habitat and ecological systems would be preserved; and actions that create poverty by undermining people’s capacity to meet fundamental human needs (for subsistence, protection, identity, or freedom) would not be pursued. These requisite system conditions acknowledge the physical realities of resource overuse and pollution as well as the inherent threat to social and political stability when human needs are systematically denied.
Sustainability Defined in a Business Operations Journal
The search for sustainability can lead to innovation that yields cost savings, new designs, and competitive advantage. Like the quality gurus who called for zero defects, the early adopters of the sustainability perspective may seem extreme in calling for waste-free businesses in which the nonproduct outputs become inputs for other products or services. But sustainability’s zero-waste goal offers a critical, underlying insight: health, environmental, and community social issues offer opportunities for businesses.Andrea L. Larson, Elizabeth Olmsted Teisberg, and Richard R. Johnson, “Sustainable Business: Opportunity and Value Creation,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000), 2.
Examining innovative leaders provides a window into the future through which we can see new possibilities for how goods and services can be delivered if sufficient human ingenuity is applied. The approach extends the premises of entrepreneurial innovation, a long-standing driver of social and economic change, to consider natural system viability and community health. Drawing on systems thinking, ecological and environmental health sciences, and the equitable availability of clean commerce economic development opportunities, sustainability innovation offers a fast-growing market space within which entrepreneurial leaders are offering solutions and paths forward to address some of society’s most critical challenges.
It is important to recognize sustainability’s cross-disciplinary approach. Sustainability in business is about designing strategies for value creation through innovation using an interdisciplinary lens. Specialization and grounding in established disciplines provide requisite know-how, but sustainability innovation requires the ability to bridge disciplines and to rise above the narrow bounds and myopia of specialized training in conventional economic models to envision new possibilities. Sustainability innovation occurs when entrepreneurs and ventures stretch toward a better future to offer distinctly new products, technologies, and ways of conducting business. The empirical evidence suggests that while entrepreneurs who succeed typically bring their uniquely specialized know-how to the table, they also have a systems view that welcomes and mixes diverse perspectives to create change.
Business has traveled a long distance from the adversarial pollution control days of the 1970s in the United States, when systemic ecological problems were first acknowledged. Companies were asked to bear the costs of environmental degradation yet often lacked the ability or know-how to realize any rewards for those investments. Decades ago, the goals were narrow: compliance and cost avoidance. Today the intersecting environmental, health, and social challenges are understood as more complex. Community prosperity requires a far broader view of economic development. It requires a sustainability mind-set. While the challenges are undeniably serious, as our examples will show, the entrepreneurial mind sees wide open opportunities.
A growing number of companies now recognize that improving performance and innovation across the full sustainability agenda—financial, ecological, environmental, and social health and prosperity—can grow revenues, improve profitability, and enhance their brands. Sustainability strategies and innovations also position businesses favorably in markets, as their slower-learning competitors fail to develop internal and supply-chain competencies to compete. We predict that within a relatively short period of time what is now considered sustainability innovation will become mainstream business operation.
World Resources Institute’s Corporate Ecosystem Services Review
www.wri.org/project/ecosystem-services-review
Sidestepping the need for sustainability may prove difficult. Population growth rates and related higher levels of waste guarantee environmental concerns will grow in importance. The government and the public are increasingly concerned with the extent and severity of air, water, and soil contamination and the implications of natural resource consumption and pollution for food production, drinking water availability, and public health. As environmental and social problems increase, public health concerns are likely to drive new approaches to pollution prevention and new regulations encompassing previously unregulated activities. As concerns increase, so will the market power of sustainable business. The opportunities are there for the entrepreneurially minded. Sustainability innovation offers solutions.
The entrepreneurial leaders forging ahead with sustainability innovation understand the value of partnerships with supply-chain vendors and customers, nongovernmental organizations (NGOs), public policy agencies, and academia in pursuing product designs and strategies. Many of their innovations are designed to avoid the need for regulation by steadily reducing adverse ecological and health impacts, with the goal of eliminating negative impacts altogether. Significantly, environmental and associated health, community, and equity issues are integrated into core business strategy and thus into the operations of the firm and its supply chains.
Start-up firms and small to midsized companies have always been major movers of entrepreneurial innovation and will continue to lead in sustainability innovation. However, even large firms can offer innovative examples. Indeed, Stuart Hart in his 2005 book Capitalism at the Crossroads: The Unlimited Business Opportunities in Solving the World’s Most Difficult Problems argues that multinational corporations have the capacity and qualities to address the complicated problems of resource constraints, poverty, and growth.Stuart L. Hart, Capitalism at the Crossroads (Upper Saddle River, NJ: Wharton School Publishing, 2005). According to analysts of what is termed “bottom of the pyramid” markets where over two billion people live on one to two dollars a day, developing countries represent both a market for goods and the potential to introduce sustainable practices and products on a massive scale.
Sustainable Business: Opportunity and Value Creation
• Sustainable business strategies are ones that achieve economic performance through environmentally and socially aware design and operating practices that move us toward a cleaner, healthier, more equitable (and hence more stable) world.
• Sustainable business entrepreneurs understand that sustainability opportunities represent a frontier for creativity, innovation, and the creation of value.
By the first decade of the twenty-first century, a growing number of business executives believed that sustainability should play a role in their work. PricewaterhouseCoopers found that in 2003, 70 percent of CEOs surveyed believed that environmental sustainability was important to overall profit. By 2005, that number had climbed to 87 percent.Karen Krebsbach, “The Green Revolution: Are Banks Sacrificing Profits for Activists’ Principles?” US Banker, December 1, 2005, accessed March 27, 2009, www.accessmylibrary.com/coms2/summary_0286-12108489_ITM. In a later PricewaterhouseCoopers survey of technology executives, 71 percent said they did not believe their company was particularly harmful to the environment, yet 61 percent said it was nonetheless important that they reduce their company’s environmental impact. The majority of executives also believed strong demand existed for “green” and cleaner products and that demand would only increase.PricewaterhouseCoopers, “Going Green: Sustainable Growth Strategies,” Technology Executive Connections 5 (February 2008), accessed March 27, 2009, www.pwc.com/images/techconnect/TEC5.pdf.
Such employers as well as employees have begun striving toward sustainability. Labor unions and environmentalists, once at odds, jointly created the Apollo Alliance to promote the transition to a clean energy environment under the slogan “Clean Energy, Good Jobs.” Van Jones, formerly with the Obama administration, led Green For All, an organization that proposed the new green economy tackle poverty and pollution at the same time through business collaboration in cities to provide clean energy jobs.
Video Clip
Van Jones on Green for All
Meanwhile, numerous large and well-known companies, including DuPont, 3M, General Electric, Walmart, and FedEx, have taken steps to save money by using less energy and material or to increase market share by producing more environmental products. Walmart, for instance, stated that as of 2009 its “environmental goals are simple and straightforward: to be supplied 100 percent by renewable energy; to create zero waste; and to sell products that sustain our natural resources and the environment.”Walmart, “Sustainability,” accessed March 27, 2009, http://walmartstores.com/Sustainability. But transitioning from a wasteful economic system to one that conserves energy and materials and dramatically reduces hazardous waste, ultimately reversing the ecological degradation and social inequity often associated with economic growth, takes a major shift in the collective state of mind.
Assumptions that Earth systems, regional and local ecological systems, and even the human body can be sustained and can regenerate in the face of negative impacts from energy and material consumption have proven wrong. Linear processes of extracting or synthetically producing raw materials, converting them into products, using those products, and throwing them away to landfills and incinerators increasingly are viewed as antiquated, old-world designs that must be replaced by systems thinking and life-cycle analysis. These new models will explicitly consider poverty alleviation, equity, health, ecological restoration, and smart energy and materials management as integrated considerations. The precise outline of the new approach remains ambiguous, but the direction and trajectory are clear. While government policies may contribute guidelines and requirements for a more sustainable economic infrastructure, the business community is the most powerful driver of rapid innovation and change. The entrepreneurs are leading the way.
In conclusion, economic development trajectories both in the United States and worldwide are now recognized as incompatible with ecological systems’ viability and long-term human health and social stability. Wetlands, coastal zones, and rain forests are deteriorating, while toxins and air and water pollution harm human health and drive political unrest and social instability; witness the growing numbers of environmental refugees. Even large Earth systems, such as the atmosphere and nitrogen and carbon cycles, are endangered. The business models we created in the nineteenth and twentieth centuries that succeeded in delivering prosperity to ever greater numbers of people did not anticipate the exponential population explosion, technological capability to extract and process ever-greater volumes of materials, natural resource demand, growing constraints on resources, political unrest, fuel cost volatility, and limits of ecological systems and human bodies to assimilate industrial waste.
Scholars and students of business will look back on the early decades of the twenty-first century as a transition as the human community responded to scientific feedback from natural systems and took to heart the desire to extend true prosperity to greater numbers by redesigning business. To the extent that this effort will be deemed successful, much of the credit will go to the entrepreneurial efforts to experiment with new ideas and to drive the desired change. No single venture or individual can address the wide range of sustainability concerns. It is the combination of large and small efforts across sectors and industries around the world that will create an alternative future. That is how change happens—and entrepreneurs are at the cutting edge.
KEY TAKEAWAYS
• Sustainability innovation provides new ways to deliver goods and services that are explicitly designed to create a healthier, more equitable, and prosperous global community.
• The sustainability design criteria differ from conventional business approaches by their concurrent and integrated incorporation of economic performance goals, ecological system protection, human health promotion, and community vitality. A new model is emerging through the efforts of entrepreneurial leaders.
Exercises
1. Identify an ecological, equity, health, or product safety problem you see that might be addressed through a sustainability innovation approach. What causes the problem? What kind of shift in mind-set may be required to generate possible solutions?
Learning Objectives
1. Provide an overview of the basic stages of corporate engagement.
2. Explore the evolutionary character of private sector adaptation.
During the 1990s and the first decade of the twenty-first century, start-up ventures and large corporations adopted a variety of approaches to shape what we now call sustainability-based product and strategy designs. A sustainability approach acknowledges the interdependencies among healthy economic growth and healthy social and ecological systems. Sustainability innovation and entrepreneurship seeks to optimize performance across economic, social, and ecological business dimensions. Applied broadly across countries, this effort will evolve a design of commerce aligned and compatible with human and ecosystem health. A growing number of firms are applying creative practices demonstrating the compatibility of profit, community health, and viable natural systems. This discussion provides an introduction to some of the most important approaches firms use to guide this transition.Some topics discussed here have well-developed research literature and are taught as courses in engineering, chemistry, and executive business programs. A word of caution: terms do not have precise or universal meanings. Different academics and practitioners offer alternative views, and thus definitions may vary; this overview employs a consensus definition of a tool or concept as it is expressed by the author or authors primarily responsible for creating that tool or concept.
The spectrum of approaches can be viewed along a continuum toward the ideal of sustainability. Imagine a timeline. The Industrial Revolution has unfolded on the left side with time moving toward the right on a continuum. We are quickly learning how and why our industrial system, as currently designed, can undermine biosphere systems such as the atmosphere, water tables, fisheries, or soil fertility. With entrepreneurial actors leading the way, our response is to adapt our institutions and our mind-sets. Ultimately the evolution of new knowledge will create new rules for commerce, driving a redesign of our commercial systems to coevolve more compatibly with the natural world and human health requirements. Currently we are in a transition from the left side of the continuum to the right. On the far right of the continuum is the ideal state in which we achieve a design of commerce compatible with human prosperity and ecosystem health. This ideal state includes provision of goods and services to support a peaceful global community, one that is not undermined by violence and civil unrest due to income and resource disparities. Is this ideal state unrealistic? Having a human being walk on the moon was once thought impossible. Electricity was once unknown. Global treaties were considered impossible before they were achieved. Humans shape their future every day, and they can shape this future. In fact, the author’s decades of research show people are already shaping it. It’s a question of whether the reader wants to join in.
Looking at the timeline—or continuum—as a whole, the transition from the Industrial Revolution toward the ideal state can be characterized by imagining a “filter” of environmental and health protection imposed on manufacturing processes. This process is well under way around the world. The filter first appeared at the “end of the pipe” where waste pollution moved from a facility to the surrounding water, air, and soil. With the first round of US regulations in the 1970s (mirrored by public policies in many other countries in the intervening years), typical end-of-the-pipe solutions included scrubbers, filters, or on-site waste treatment and incineration. These are called pollution control techniques, and regulations often specified the solution through fiat or “command and control” legislation.
Over time, as laws became more stringent, the conceptual filter for pollution control moved from filters on smokestacks outside a firm to operating and production processes inside. These in-the-pipe techniques constitute pollution prevention measures in manufacturing and processing that minimize waste and tweak the production system to operate as efficiently as possible. Pollution prevention measures repeatedly have been shown in practice to reduce costs and risks, offering improvements in financial performance and even the quality and desirability of the final products.
In the third and final stage of social and ecological protection, the stage in which sustainability innovation thrives, the conceptual filter is incorporated into the minds of product designers, senior management, and employees. Thus the possibilities for ecological disruption and human health degradation can be removed at the early design stages by the application of human ingenuity. Fostered by a systems mind-set and informed by current science, this ingenuity enables an evolutionary adaptation of firms toward the ideal sustainability state. Seeing this design creativity at work—for example, producing clean renewable energy for electricity and benign, recyclable materials—provides a window to a future landscape in which the original Industrial Revolution is rapidly evolving to its next chapter.
Eco-efficiency describes many companies’ first efforts to reduce waste and use fewer energy and material inputs. Eco-efficiency can reduce materials and energy consumed over the product life cycle, thus minimizing waste and costs while boosting profits. Considering eco-efficiency beyond the level of the individual company leads to rethinking the industrial sector. Instead of individual firms maximizing profits, we see a web of interconnected corporations—an industrial ecosystem—through which a metabolism of materials and energy unfolds, analogous to the material and energy flows of the natural world. The tools for design for environment (DfE) and life-cycle analysis (LCA) from the field of industrial ecology provide information on the complete environmental impact of a product or process from material extraction to disposal. Other approaches to product design, such as concurrent engineering, aid in placing the filter of environmental protection in a design process that invites full design participation from manufacturing, operations, and marketing representatives as well as research and development designers.
When powerful new business perspectives emerge, they often appear to be fads. Concentrating on quality, for example, seemed faddish as the movement emerged in the 1980s. Over time, however, total quality as a concept and total quality management (TQM) programs became standard practice. Now, over two decades after the quality “fad” was introduced to managers around the world, product quality assurance methods are part of the business fundamentals that good managers understand and pursue. Similarly, sustainability has been viewed as a fad. In fact, as its parameters are more carefully defined, it is increasingly understood as an emerging tenet of excellence.For a comprehensive discussion of sustainability as an emerging tenet of excellence, see Andrea Larson and Elizabeth Teisberg, eds., “Sustainable Business,” special issue, Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000).
When we look at the emerging wave of sustainability innovation, we can view it as an adaptive process indicating that businesses are moving toward more intelligent interdependencies with natural systems. It is clear that companies are under growing pressure to offer cleaner and safer alternatives to existing products and services. This is in large part because the footprint, or cumulative impact, of business activity is becoming clearer. Pressures on companies to be transparent and factor in full costs, driven by a wide range of converging and increasingly urgent challenges from climate change and environmental health problems to regulation and resource competition, now accelerate change and drive innovation. Furthermore, growing demand for fresh water, food, and energy puts the need for innovative solutions front and center in business. In this chapter, we will look at the major shifts occurring and consider the role of paradigms and mind-sets. A presentation of core concepts, practical frameworks, and tools follows.
Approximate Timing of Major Approaches/Frameworks

| Framework | Approximate Date of Emergence | Perspective |
|---|---|---|
| Pollution control (reactive) | 1970s | Comply with regulations (clean up the pollution) using technologies specified by government. |
| Pollution prevention (proactive) | 1980s | Manage resources to minimize waste based on better operating practices (prevent pollution); consistent with existing total quality management efforts. |
| Eco-efficiency | 1990s | Maximize the efficiency of inputs, processing steps, waste disposal, and so forth, because it reduces costs and boosts profits. |
| Industrial ecology, green chemistry and engineering, design for environment, life-cycle analysis, concurrent engineering | 1990s | Incorporate ecological/health impact considerations into product design stage; extend this analysis to the full product life cycle. |
| Sustainability innovation | 2000s | Combine all the above in a systems thinking approach that drives entrepreneurial innovation. |
KEY TAKEAWAYS
• Business practices have moved along a continuum, with an increasing attention to environmental, social, and health concerns.
• Corporate practice has evolved from rudimentary pollution control to product design changes that take into consideration the full life cycle of products including their energy and material inputs.
• As a consequence of new knowledge and evolutionary learning, sustainability issues are now in the forefront as companies experiment with ways to optimize performance across economic, social, and environmental factors.
EXERCISES
1. Identify a business and describe what operational changes would be made if senior management applied life-cycle analysis sequentially to its operations and supply chain.
2. Select a product that you use. Identify as many inputs (energy, materials, and labor) as you can that enable that product to be available to you. Where and how might you apply these ideas to the production and delivery of the product?
Learning Objectives
• Explain how paradigms and innovation affect our perception of the possibilities for sustainable business.
• Understand why new ideas, often introduced through innovative thinking and action, can meet with initial resistance.
The early decades of the twenty-first century will mark a transition period in which conventional economic models that assume infinite capacities of natural systems to provide resources and absorb waste no longer adequately reflect the reality of growth and its related environmental and health challenges. Providing material goods and creating prosperous communities for expanding populations in ways that are compatible with healthy communities and ecosystems are the core challenges of this century.
Not surprisingly, entrepreneurial innovators are stepping up to provide alternatives better aligned with the constraints of population growth, material demand, and limited resources. This activity is consistent with the role of society’s entrepreneurs. They are the societal subgroup that recognizes new needs and offers creative solutions in the marketplace. However, innovators and their new ways are often misunderstood and rejected, at least initially. We can better understand the challenges facing sustainability entrepreneurs who produce new products and technologies by examining how a paradigm is created and replaced.
Education, cultural messages (conveyed through family, media, and politics), and social context provide us with ideas about how the world works and shape our mind-sets. Formalized and sanctioned by academic fields and canonical textbooks, assumptions become set paradigms through which we understand the world, including our role in it and the possibilities for change. Despite new knowledge, the reality of daily living, and the results of scientific research generating empirical evidence that can challenge core assumptions, it is well known that individuals and societies resist change and hold fast to their known paradigms. Why? Because the unquestioned assumptions have functioned well for many in the population, inertia is powerful, and often we lack alternatives that will explain and bring order to what appears to be contradictory information about how new or unprecedented events are unfolding.
The fact that reality does not correspond to our assumptions can be ignored or denied for a long time if no alternative path is perceived. For years, pollution was acknowledged and accepted as the price of progress, the cost that must be paid to keep people employed and maintain economic growth. “Clean commerce” was an oxymoron. Furthermore, specialized disciplines in academia create narrow intellectual silos that become impediments to broader systems views. In business, functional silos emerge as companies grow. Communication between research and development and manufacturing breaks down, manufacturing experts and marketing staff are removed from each other’s work and even geographically separate, and sales departments rarely have the opportunity to provide feedback to designers. These realities present barriers to understanding the complex nature–human relationship shift in which we are now engaged.
It is only when the incongruity between reality and our perceived understanding of that same world presents a preponderance of data and experience challenging accepted thought patterns that new explanations can surface, be seriously discussed, and be legitimized by mainstream institutions (universities, corporations, and governments). Recently, climate change, toxin-containing household products, the collapse of ocean fisheries, the global asthma epidemic, and other challenges for which no simple answers seem possible have provided incentives for people to imagine and begin to build a different business model.
In fact, business consultants, architects, engineers, chemists, economists, and nonprofit activists have been grappling for many decades with limits to economic growth. Interdisciplinary science has become increasingly popular, and higher funding levels signal recognition that research and solutions need to bridge conventionally segregated and bound areas of thought (e.g., economics, biology, psychology, engineering, chemistry, and ecology). The new approaches to resource use, pollution, and environmental and equity concerns have opened new avenues for thought and action.
A body of ideas and approaches reflects movement toward inter- and even metadisciplinary understanding. Similarities across these approaches will be readily apparent. In fact, in combination, each of these seemingly disparate efforts to close the gap between what we have been taught about economic growth and what we have observed in the last few decades reveals common themes to guide entrepreneurial innovation and business strategy. In Chapter 3, Section 3.3, we will explore some metaconcepts.
KEY TAKEAWAYS
• Educational institutions, cultural values, and everyday practices create and sustain assumptions that become paradigms, which then influence what we consider possible.
• The 1990s and first decade of the twenty-first century witnessed a variety of difficult and growing environmental and social problems and, in response, the introduction of new concepts for business. These sustainability concepts may offer an approach more attuned to the problems businesses face now and will face in the future.
EXERCISES
1. How do paradigms and entrepreneurial innovation interact?
2. What are the advantages and disadvantages of specialization when thinking about social and environmental issues and business?
Learning Objectives
1. Identify the roles of carrying capacity and equity in the four key metaconcepts of sustainability.
2. Compare and contrast the four key metaconcepts, including their assumptions, emphases, and implications.
3. Apply the metaconcepts to identify sustainable business practices.
An educated entrepreneur or business leader interested in sustainability innovation should understand two core ideas. The first is that sustainability innovation ultimately contributes to preservation and restoration of nature’s carrying capacity. Carrying capacity refers to the ability of the natural system to sustain demands placed upon it while still retaining the self-regenerative processes that preserve the system’s viability indefinitely. Note that human bodies have carrying capacities, and thus we are included in this notion of natural carrying capacities. For example, similarly to groundwater supplies or coastal estuaries, children’s bodies can be burdened with pollutants only up to a point, beyond which the system collapses into dysfunction and disease.
The second core idea is equity, leading to our discussion of environmental justice as the second metaconcept category. Prosperity that preserves and restores natural system carrying capacities but structurally excludes many people from its benefits is not sustainable, practically or morally. Sustainability scholars have suggested that a “fortress” future lies ahead if equity issues are not considered core to sustainability goals: the wealthy will defend their wealth from behind gated communities, while the poor live with illness, pollution, and resource scarcity.
Sustainability innovations guided by the following approaches aim to sustain biological carrying capacities and healthy human communities that strive toward equity. The ideal is that we tap into every person’s creativity and bring it to bear on how we learn to live on what scientists now call our “full Earth.”
Each of our four key metaconcepts—sustainable development, environmental justice, earth systems engineering and management, and sustainability science—addresses ideas of equity and carrying capacity in a slightly different way. Earth systems engineering and management and sustainability science focus on technology and carrying capacity, while sustainable development and environmental justice emphasize social structures and equity. Yet each metaconcept recognizes that equity and carrying capacity are linked; humans have both social and material aspirations that must be met within the finite resources of the environment.
Sustainable Development
Sustainable development refers to a socioeconomic development paradigm that achieves more widespread human prosperity while sustaining nature’s life-support systems. Under sustainable development, the next generation’s choices are extended rather than attenuated; therefore, sustainable development addresses equity issues across generations to not impoverish those generations that follow. Introduced in the Brundtland Commission’s 1987 report, which focused attention on the interrelated and deteriorating environmental and social conditions worldwide, sustainable development would balance the carrying capacities of natural systems (environmental sustainability) with sociopolitical well-being. While debate continues on the challenges’ details and possible solutions, there is widespread scientific consensus that continued escalation in scale and scope of resource and energy consumption cannot be maintained without significant risk of ecological degradation accompanied by potentially severe economic and sociopolitical disruption. In 1992, the Economic Commission for Europe described societal transformation toward sustainable development moving through stages, from ignorance (problems are not widely known or understood) and lack of concern, to hope in technology-based fixes (“technology will solve our problems”), to eventual conversion of economic activities from their current separation from ecological and human health goals of society to new forms appropriately adapted to ecological laws and the promotion of community well-being. The goal of sustainable development, though perhaps impossible to reach, would be a smooth transition to a stable carrying capacity and leveling of population growth. Societies would evolve toward more compatible integration and coevolution of natural systems with industrial activity.
Because corporations are among the most powerful institutions in the world today, they are viewed as instrumental in creating the transition from the current unsustainable growth trajectory to sustainable development.
Environmental Justice
Environmental justice emerged as a mainstream concept in the 1980s. Broad population segments in the United States and elsewhere increasingly acknowledged that racial and ethnic minorities and the poor (groups that often overlap) suffered greater exposure to environmental hazards and environmental degradation than the general population. Following pressure from the Congressional Black Caucus and other groups, the US Environmental Protection Agency (EPA) incorporated environmental justice into its program goals in the early 1990s. The EPA defined environmental justice as “the fair treatment and meaningful involvement of all people regardless of race, color, national origin, or income with respect to the development, implementation, and enforcement of environmental laws, regulations, and policies.” The EPA also stated that environmental justice “will be achieved when everyone enjoys the same degree of protection from environmental and health hazards and equal access to the decision-making process to have a healthy environment in which to live, learn, and work.”US Environmental Protection Agency, “Compliance and Enforcement: Environmental Justice,” last updated November 24, 2010, accessed December 3, 2010, www.epa.gov/oecaerth/environmentaljustice. Other definitions of environmental justice similarly include an emphasis on stakeholder participation in decisions and an equitable distribution of environmental risks and benefits.
Environmental justice in the United States grew out of a civil rights framework that guarantees equal protection under the law, which globally translated into the framework of universal human rights. It crystallized as a movement in the years 1982–83, when hundreds of people were jailed for protesting the location of a hazardous waste dump in a predominantly black community in North Carolina.April Mosley, “Why Blacks Should Be Concerned about the Environment: An Interview with Dr. Robert Bullard,” November 1999, Environmental Justice Resource Center at Clark Atlanta University, accessed July 2, 2009, www.ejrc.cau.edu/nov99interv.htm. In 1991, the National People of Color Environmental Leadership Summit first convened and drafted the “Principles of Environmental Justice,” which were later circulated at the 1992 Rio Earth Summit.United Church of Christ, Toxic Wastes and Race at Twenty: 1987–2007 (Cleveland, OH: United Church of Christ, 2007), 2. The 2002 UN World Conference against Racism, Racial Discrimination, Xenophobia, and Related Intolerance also embraced environmental justice in its final report.United Nations, United Nations Report of the World Conference against Racism, Racial Discrimination, Xenophobia and Related Intolerance (Durban, South Africa: United Nations, 2001), accessed December 3, 2010, www.un.org/WCAR/aconf189_12.pdf.
Although the placement of hazardous waste dumps and heavily polluting industries in areas predominantly inhabited by minorities, such as incinerators in the Bronx in New York City and petrochemical plants along Louisiana’s Cancer Alley, remains the most glaring example of environmental injustice, the concept encompasses myriad problems. For instance, housing in which minorities and the poor are concentrated may have lead paint (now a known neurotoxin) and proximity to the diesel exhaust of freeways and shipping terminals.David Pace, “More Blacks Live with Pollution,” Associated Press, December 13, 2005, accessed December 1, 2010, http://www.precaution.org/lib/05/more_blacks_live_with_pollution.051213.htm; American Lung Association, “Comments to the Environmental Protection Agency re: Ocean Going Vessels,” September 28, 2009, accessed April 19, 2011, www.lungusa.org/get-involved/advocate/advocacy-documents/Comments-to-the-Environmental-Protection-Agency -re-Ocean-Going-Vessels.pdf. Migrant agricultural laborers are regularly exposed to higher concentrations of pesticides. As heavy industries relocate to areas where labor is cheaper, those regions and countries must shoulder more of the environmental and health burdens, even though most of their products are exported. For instance, demand for bananas and biodiesel in the Northern Hemisphere may accelerate deforestation in the tropics.
Climate change has also broadened the scope of environmental justice. Poor and indigenous people will suffer more from global warming: rising seas could eliminate Pacific island societies and inundate countries such as Bangladesh, while warming disrupts livelihoods in the Arctic and droughts intensify in Africa. Hurricane Katrina, which some scientists saw as a signal of the growing force of storms, was a dramatic reminder of how poor people have more limited access to assistance during “natural” disasters. In addition, those groups least able to avoid the consequences of pollution often enjoy less of the lifestyle that caused that pollution in the first place.
Spotting environmental injustice can sometimes be simple. However, to quantify environmental justice or its opposite, often called environmental racism, demographic variables frequently are correlated to health outcomes and environmental risk factors with an accepted degree of statistical significance. Rates of asthma, cancer, and absence from work and school are common health indicators. Information from the EPA’s Toxic Release Inventory or Air Quality Index can be combined with census data to suggest disproportionate exposure to pollution. For example, children attending schools close to major highways (often found in low-income neighborhoods) experience decreased lung health and capacity.
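The correlation approach described above can be sketched in a few lines of code. The data below are entirely synthetic and illustrative; a real study would combine records such as the EPA’s Toxic Release Inventory with census tract demographics and test for statistical significance.

```python
# Illustrative sketch: correlating a neighborhood-level pollution exposure
# index with a demographic variable, as environmental-justice studies do.
# All values are synthetic, not drawn from any real dataset.
import math

# Hypothetical census tracts: percent minority residents and an
# air-pollution exposure index (higher = more exposure).
pct_minority = [12, 25, 40, 55, 63, 71, 80]
exposure_idx = [3.1, 4.0, 5.2, 6.8, 7.1, 8.3, 9.0]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(pct_minority, exposure_idx)
print(f"Pearson r = {r:.3f}")  # close to 1.0 for this synthetic data
```

A strong positive coefficient alone does not establish injustice; researchers would still need significance tests and controls for confounding variables before drawing conclusions.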
Higher Exposure to Pollution
For 2007, host neighborhoods with commercial hazardous waste facilities are 56% people of color whereas non-host areas are 30% people of color. Thus, percentages of people of color as a whole are 1.9 times greater in host neighborhoods than in non-host neighborhoods.…Poverty rates in the host neighborhoods are 1.5 times greater than non-host areas (18% vs. 12%) and mean annual household incomes in host neighborhoods are 15% lower (\$48,234 vs. \$56,912). Mean owner-occupied housing values are also disproportionately low in neighborhoods with hazardous waste facilities.United Church of Christ, Toxic Wastes and Race at Twenty: 1987–2007 (Cleveland, OH: United Church of Christ, 2007), 143.
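The ratios quoted in the passage above follow directly from its figures and can be checked with simple arithmetic:

```python
# Check the ratios quoted from Toxic Wastes and Race at Twenty (2007).
host_poc, nonhost_poc = 56, 30                # % people of color
host_pov, nonhost_pov = 18, 12                # % poverty rate
host_income, nonhost_income = 48234, 56912    # mean annual household income ($)

print(f"People-of-color ratio: {host_poc / nonhost_poc:.1f}x")  # prints 1.9x
print(f"Poverty ratio: {host_pov / nonhost_pov:.1f}x")          # prints 1.5x
gap = (nonhost_income - host_income) / nonhost_income
print(f"Income gap: {gap:.0%}")                                 # prints 15%
```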
Video Clip
Fight for Environmental Justice in Chester, Pennsylvania. https://www.youtube.com/watch?v=5Opr-uzet7Q
Earth Systems Engineering and Management
With discussion of earth systems engineering (ESE), we transition from social and community concerns to human impacts on large-scale natural systems. Sometimes referred to as earth systems engineering and management, ESE is a broad concept that builds from these basic premises:
1. People have altered the earth for millennia, often in unintended ways with enduring effects, such as the early deforestation of ancient Greece.
2. The scale of that alteration has increased dramatically with industrialization and the population growth of the twentieth century.
3. Our institutions, ethics, and other behaviors have yet to catch up to the power of our technology.
4. Since the world has become increasingly less natural and more—or entirely—an artifact of human activity, we should use technology to help us understand the impact of our alterations in the long and short terms. Instead of desisting from current practice, we should continue to use technology to intervene in the environment albeit in more conscious, sustainable ways. However, the interactions of human and natural systems are complex, so we must improve our ability to manage each by better understanding the science of how they operate and interact, building better tools to manage them, and creating better policies to guide us.National Academy of Engineering, Engineering and Environmental Challenges: Technical Symposium on Earth Systems Engineering (Washington, DC: National Academy Press, 2000), viii.
Defining ESE
The often unintended consequences of our technologies reflect our incomplete understanding of existing data and the inherent complexities of natural and human systems. Earth systems engineering is a holistic approach to overcoming these shortcomings. The goals of ESE are to understand the complex interactions among natural and human systems, to predict and monitor more accurately the impacts of engineered systems, and to optimize those systems to provide maximum benefits for people and for the planet. Many of the science, engineering, and ethical tools we will need to meet this enormous challenge have yet to be developed. National Academies of Science, Engineering and Environmental Challenges: Technical Symposium on Earth Systems Engineering (Washington, DC: National Academies Press, 2000), viii.
In 2000, Nobel laureate Paul Crutzen coined the term “anthropocene” to describe the intense impact of humanity upon the world. Anthropocene designates a new geological epoch beginning with the advent of the Industrial Revolution. In this epoch, as opposed to the preceding Holocene, humans increasingly dominate the chemical and geologic processes of Earth, and they may continue to do so for tens of thousands of years as increased concentrations of greenhouse gases (GHGs) linger in the atmosphere.
Professor Braden Allenby, a former vice president of AT&T who holds degrees in law, economics, and environmental science, argues we must embrace this anthropogenic (human-designed) world and make the most of it. An early and consistent proponent of ESE, he wrote in 2000, “The issue is not whether the earth will be engineered by the human species, it is whether humans will do so rationally, intelligently, and ethically.”Braden Allenby, “Earth Systems Engineering and Management,” IEEE Technology and Society Magazine 19, no. 4 (Winter 2000–2001): 10–24. Thus ESE differs from other sustainability concepts and frameworks that seek to reduce humanity’s impact on nature and to return nature to a more equal relationship with people. Allenby believes technology gives people options, and investing in new technologies to make human life sustainable will have a greater impact than trying to change people’s behaviors through laws or other social pressures.
Brad Allenby Discusses Earth Systems Engineering
mitworld.mit.edu/video/531
ESE could be deployed at various scales. One of the more extreme is geoengineering, which emerged in the 1970s and resurfaced after 2000 as efforts to curb greenhouse gas emissions floundered and people reconsidered ways to arrest or reverse climate change. Geoengineering would manipulate the global climate directly and massively, either by injecting particles such as sulfur dioxide into the atmosphere to block sunlight or by sowing oceans with iron to encourage the growth of algae that consume carbon dioxide (CO2). The potential for catastrophic consequences has often undermined geoengineering schemes, many of which are already technologically feasible and relatively cheap. On the scale of individual organisms, ESE could turn to genetic engineering, such as creating drought-resistant plants or trees that sequester more CO2.
Reflection on ESE
David Keith, an environmental scientist at the University of Calgary, talks about the moral hazard of ESE at the 2007 Technology, Entertainment, and Design (TED) Conference.
Keith discusses the history of geoengineering since the 1950s and argues that more people must seriously discuss ESE because it would be cheap and easy for any one country to pursue unilaterally, for better or worse.
www.ted.com/talks/david_keith_s_surprising_ideas_on_climate_change.html
Sustainability Science
Sustainability science was codified as a multidisciplinary academic field between 2000 and 2009 with the creation of a journal called Sustainability Science, a study section within the US National Academy of Sciences and the Forum on Science and Innovation for Sustainable Development, which links various sustainability efforts and individuals around the world. Sustainability science aims to bring scientific and technical knowledge to bear on problems of sustainability, including assessing the resilience of ecosystems, informing policy on poverty alleviation, and inventing technologies to sequester CO2 and purify drinking water. William C. Clark, associate editor of the Proceedings of the National Academy of Sciences, writes, “Like ‘agricultural science’ and ‘health science,’ sustainability science is a field defined by the problems it addresses rather than by the disciplines it employs. In particular, the field seeks to facilitate what the National Research Council has called a ‘transition toward sustainability,’ improving society’s capacity to use the earth in ways that simultaneously ‘meet the needs of a much larger but stabilizing human population…sustain the life support systems of the planet, and…substantially reduce hunger and poverty.’”William C. Clark, “Sustainability Science: A Room of Its Own,” Proceedings of the National Academy of Sciences 104, no. 6 (February 6, 2007): 1737–38.
Like ecological economics, sustainability science seeks to overcome the splintering of knowledge and perspectives by emphasizing a transdisciplinary, systems-level approach to sustainability. In contrast to ecological economics, sustainability science often brings together researchers from a broader base and focuses on devising practical solutions. Clark calls it the “use-inspired research” typified by Louis Pasteur.
Sustainability science arose largely in response to the increasing call for sustainable development in the late 1980s and early 1990s. The core question became how? The number of scholarly articles on sustainability science increased throughout the 1990s. In 1999, the National Research Council published Our Common Journey: A Transition Toward Sustainability. The report investigated how science could assist “the reconciliation of society’s development goals with the planet’s environmental limits over the long term.” It set three main goals for sustainability science research: “Develop a research framework that integrates global and local perspectives to shape a ‘place-based’ understanding of the interactions between environment and society.…Initiate focused research programs on a small set of understudied questions that are central to a deeper understanding of interactions between society and the environment.…Promote better utilization of existing tools and processes for linking knowledge to action in pursuit of a transition to sustainability.”National Research Council, Our Common Journey: A Transition toward Sustainability (Washington, DC: National Academy Press, 1999), 2, 10–11.
Shortly thereafter, an article in Science attempted to define the core questions of sustainability science, again focusing on themes of integrating research, policy, and practical action across a variety of geographic and temporal scales.Robert W. Kates, William C. Clark, Robert Corell, J. Michael Hall, Carlo C. Jaeger, Ian Lowe, James J. McCarthy, et al., “Sustainability Science,” Science 292, no. 5517 (April 27, 2000): 641–42.
At about the same time, groups such as the Alliance for Global Sustainability (AGS) formed. AGS is an academic collaboration among the Massachusetts Institute of Technology, the University of Tokyo, the Swiss Federal Institute of Technology, and Chalmers University of Technology in Sweden. The alliance seeks to inject scientific information into largely political debates on sustainability. Members of the alliance also created the journal Sustainability Science. Writing in the inaugural edition, Hiroshi Komiyama and Kazuhiko Takeuchi described sustainability science as broadly addressing three levels of analysis and their interactions: (1) global, primarily the natural environment and its life-support systems; (2) social, primarily comprising human institutions and collective activities; and (3) human, largely addressing questions of individual health, happiness, and prosperity (Figure 3.1).Hiroshi Komiyama and Kazuhiko Takeuchi, “Sustainability Science: Building a New Discipline,” Sustainability Science 1, no. 1 (October 2006): 1–6.
KEY TAKEAWAYS
• The broad metaconcepts in sustainability emphasize equity and maintenance of the earth’s carrying capacity, despite an increased human population.
• Sustainability metaconcepts focus on balancing the needs of humans and their environment, present and future generations, and research and policy. These problems are complex, and the metaconcepts therefore tend to endorse an interdisciplinary, systems-level view.
• Equity considerations as design criteria offer opportunities for novel approaches to product and business competitiveness while preserving socially and politically stable communities.
EXERCISES
1. Make a diagram comparing and contrasting the four metaconcepts, including their implications, assumptions, and past successes. Then present to others the framework you find most compelling and explain why. If you prefer, synthesize a fifth metaconcept to present.
2. Select an industry and briefly research how the four metaconcepts have changed its practices and may guide future changes.
Learning Objectives
1. Understand the core premises of each framework or tool.
2. Compare and contrast the frameworks and tools to evaluate the contributions of each to sustainability thinking.
3. Apply the frameworks and tools to improve existing products and services or to create new ones.
This section lists and discusses a set of frameworks and tools available to business decision makers. Those who are starting companies or those inside established firms can draw from these ideas and conduct further research into any tool that is of particular interest. Our purpose is to educate the reader about the variety and content of tools being applied by firms that are active in the sustainability innovation space. Each tool is somewhat different in its substance and applicability. The following discussion moves from the most general to the most specific. For example, The Natural Step (TNS) is a broad framework used by firms, municipalities, and nonprofit organizations, whereas industrial ecology is an academic field that has provided overarching concepts as well as developed product design tools. Natural capitalism is a framework developed by well-known energy and systems expert Amory Lovins together with L. Hunter Lovins and author-consultant Paul Hawken. Ecological economics is a branch of economics that combines analysis of environmental systems with economic systems, while cradle-to-cradle is a design protocol with conceptual roots in the field of industrial ecology. Nature’s services refers to the ability of natural systems to ameliorate human waste impacts, and the related concept of ecosystem service markets refers to the burgeoning arena of markets for the services natural systems provide to business and society. The biomimicry approach calls for greater appreciation of nature’s design models as the inspiration for human-designed technology. Green chemistry is a fast-expanding challenge to the conventional field of chemistry: it invites the use of twelve principles for the design of chemical compounds. Green engineering offers guiding design parameters for sustainability applied to engineering education.
Life-cycle analysis, design for environment, concurrent engineering, and carbon footprint analysis are tools for analysis and decision making at various levels of business activity including within the firm and extending to supply chains. There is no “right” framework or tool. It depends on the specific task at hand. Furthermore, some of these tools share common assumptions and may overlap. However, this is a useful sample of the types of frameworks and tools in use. Reviewing the list provides the reader with insights into the nature and direction of sustainability innovation and entrepreneurship.
The Natural Step
TNS is both a framework for understanding ecological principles and environmental problems and an international nonprofit education, consultation, and research institution based in Sweden. TNS was founded in 1989 by Swedish pediatric oncologist Dr. Karl-Henrik Robèrt. In his medical practice, Dr. Robèrt observed an increase of rare cancers in children too young to have damaged their cells through lifestyle choices. He began to explore environmental causes: human-caused pollution arising from industrial and commercial activity. Once engaged in the process and frustrated by the polarized public and scientific debates over pollution, Dr. Robèrt enlisted leading Swedish scientists to identify irrefutable principles from which productive debate could follow. These principles became the basis for the TNS framework now used by many businesses worldwide to guide strategy and product design.Andrea Larson and Wendy Warren, The Natural Step, UVA–G–0507 (Charlottesville: Darden Business Publishing, University of Virginia, 1997), 1–3.
The principles the scientists distinguished during the consensus-building process are three well-known and very basic physical laws. The first law of thermodynamics, also known as the law of conservation of energy, states that energy cannot be created or destroyed, only changed in form. Whether electrical, chemical, kinetic, heat, or light, the total energy remains constant. Similarly, the law of conservation of matter tells us that the total amount of matter is constant and cannot be created or destroyed.These two laws assume that matter and energy are not being converted into each other through nuclear processes, but when fission and fusion are taken into account, mass-energy becomes the new conserved quantity. Finally, by the second law of thermodynamics, we know that matter and energy tend to disperse. Greater entropy, or disorder, is the inevitable outcome. Think about the decomposition of discarded items. Over time, they lose their structure, order, and concentration; in other words, they lose their quality.
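These three laws can also be stated symbolically. The following is standard physics notation, not drawn from TNS materials:

```latex
% First law of thermodynamics: energy is conserved, only changing form.
% \Delta U: change in internal energy; Q: heat added to the system;
% W: work done by the system.
\Delta U = Q - W

% Conservation of matter: total mass in a closed system is constant.
\frac{dm_{\text{total}}}{dt} = 0

% Second law: the entropy S of an isolated system never decreases,
% which is why matter and energy tend to disperse and lose quality.
\Delta S \ge 0
```

The second law is the one doing the conceptual work in TNS: structure and concentration (quality) are consumed even when matter and energy themselves are not.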
In our biosphere, these laws imply things do not appear or disappear; they only take on different forms. All energy and matter remain, either captured temporarily in products or dispersed into the air, water, and soil. The matter humans introduce into the biosphere from the earth’s crust (e.g., by mining and drilling) or from corporate research laboratories (synthetic compounds) eventually is released and dispersed into the larger natural systems, including the air we breathe, water we drink, and food we eat. Furthermore, humans do not literally “consume” products. We only consume or use up their quality, their purity, and their manufactured temporary structure. Thus there is no “away” when we throw things away.
However, if the law of entropy dictates that matter and energy tend toward disorder rather than toward complex materials and ecosystems, what keeps the earth’s systems running? An outside energy input is needed to create order. That energy is the sun. While the earth is essentially a closed system with respect to matter, it is an open system with respect to energy. Hence net increases in material quality on Earth ultimately derive from solar energy, present or ancient.Karl-Henrik Robèrt, Herman Daly, Paul Hawken, and John Holmberg, “A Compass for Sustainable Development,” Natural Step News 1 (Winter 1996): 4.
Green plant cells, as loci of photosynthesis, curb entropy by using sunlight to generate order. The cells produce more structure, quality, and order than they destroy through dissipation. Plants thereby regulate the biosphere by capturing carbon dioxide (CO2), producing oxygen for animal life, and creating food. Fossil fuels, meanwhile, are simply that: the end products of photosynthesis in fossil form.
The Natural Step for Business
To summarize, while the Earth is a closed system with regard to matter, it is an open system with respect to energy. This is the reason why the system hasn’t already run down with all of its resources being converted to waste. The Earth receives light from the sun and emits heat into space. The difference between these two forms of energy creates the physical conditions for order in the biosphere—the thin surface layer in the path of the sun’s energy flow, in which all of the necessary ingredients for life as we know it are mingled.Brian Nattrass and Mary Altomare, The Natural Step for Business (Gabriola Island, BC: New Society Publishers, 1999), 35.
Cyclical systems lie at the heart of TNS framework. While the natural world operates in a continuously regenerative cyclical process—photosynthesis produces oxygen and absorbs CO2; plants are consumed, die, and decay, becoming food for microbial life; and the cycle continues—humankind has typically used resources in a linear fashion, producing waste streams both visible and molecular (invisible) that cannot all be absorbed and reassimilated by nature, at least not within time frames relevant for preservation of human health and extension of prosperity to billions more who demand a better life. The result is increasing accumulations of pollution and waste coupled with a declining stock of natural resources.Andrea Larson and Joel Reichert, IKEA and the Natural Step, UVA-G-0501 (Washington, DC: World Resources Institute and Darden Graduate School of Business Administration, 1998), 18. In the case of oil, global society must address both declining resources and control of existing resources by either unstable governments or regimes whose aims can oppose their own populations’ and other countries’ well-being.
TNS System Conditions
From these foundational scientific principles, which dictate a compelling logic for decision making, a framework followed: the four TNS system conditions.
1. The first system condition states that “substances from the earth’s crust must not systematically increase in the ecosphere.” This means that the rate of extraction of fossil fuels, metals, and other minerals must not exceed the pace of their slow redeposit and reintegration into the earth’s crust. The phrase “systematically increase” in the systems conditions deserves elaboration. The natural system complexity that has built and sustains the biosphere maintains systemic equilibrium within a certain range. We now recognize that humans contribute to CO2 atmospheric buildup, potentially tipping climate to a new equilibrium to which we must adapt.
2. The second system condition requires that “substances produced by society must not systematically increase in the ecosphere.” These substances, synthetic compounds created in laboratories, must be produced, used, and released at a rate that does not exceed the rate with which they can be broken down and integrated into natural cycles or safely incorporated in the earth’s crust (soil, water).
3. The third condition states that “the physical basis for productivity and diversity of nature must not be systematically diminished.” This requirement protects the productive capacity and diversity of the earth’s ecosystems as well as the green plant cells, the photosynthesizers on which the larger ecological systems depend.
4. Finally, the fourth system condition, a consideration of justice, calls for the “fair and efficient use of resources with respect to meeting human needs.”
Under TNS framework, these four system conditions act as a compass that can guide companies, governments, nonprofit organizations, and even individuals toward sustainability practices and innovation.Karl-Henrik Robèrt, Herman Daly, Paul Hawken, and John Holmberg, “A Compass for Sustainable Development,” Natural Step News 1 (Winter 1996): 4–5. Here, “sustainability” explicitly refers to a carrying capacity or ability of natural systems to continue the age-old regenerative processes that have maintained the requisite chemistry and systems balance to support life as we know it.
TNS framework has been applied in many corporations and is seen by some as a logical extension of quality management and strategic systems thinking.Andrea Larson and Wendy Warren, The Natural Step, UVA–G–0507 (Charlottesville: Darden Business Publishing, University of Virginia, 1997), 2. It incorporates environmental and health protection into decision making by using scientific principles. TNS allows a company to understand the physical laws that drive environmental problems and defines the broad system conditions that form a “sustainable” society. These conditions provide a vehicle to assess progress, and from them companies can develop a strategy applicable to their products and services. Design teams can ask whether particular product designs, materials selection, and manufacturing processes meet each of the system conditions and can adjust in “natural steps”—that is, steps that are consistent with financially sound decision making in the direction of meeting the system conditions. TNS does not provide a detailed how-to regarding specific product design; however, with the knowledge and framework provided by TNS, companies can develop a more informed approach and strategic position and begin to take concrete steps customized to their unique circumstance with respect to natural resource use and waste streams.
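The screening question design teams ask (does this design meet each system condition?) can be sketched as a simple checklist. This is a hypothetical illustration: the attribute names and pass/fail logic are simplifications invented here, not part of TNS materials:

```python
# Illustrative screening of a product design against the four TNS
# system conditions. Attribute names and logic are hypothetical.

TNS_CONDITIONS = {
    "crust_substances": "No systematic increase of substances from the earth's crust",
    "synthetic_substances": "No systematic increase of society-produced substances",
    "natural_productivity": "No systematic degradation of nature's productivity and diversity",
    "fair_use": "Fair and efficient use of resources to meet human needs",
}

def screen_design(design: dict) -> dict:
    """Return a pass/fail result for each system condition.

    Missing attributes default to the cautious (failing) answer.
    """
    return {
        "crust_substances": not design.get("uses_virgin_fossil_or_mined_inputs", True),
        "synthetic_substances": design.get("synthetics_biodegradable_or_recovered", False),
        "natural_productivity": not design.get("degrades_ecosystems", True),
        "fair_use": design.get("meets_needs_efficiently", False),
    }

design = {
    "uses_virgin_fossil_or_mined_inputs": False,    # e.g., recycled metals only
    "synthetics_biodegradable_or_recovered": True,  # closed-loop take-back program
    "degrades_ecosystems": False,
    "meets_needs_efficiently": True,
}

results = screen_design(design)
for key, passed in results.items():
    print(f"{TNS_CONDITIONS[key]}: {'PASS' if passed else 'FAIL'}")
```

In practice the answers are matters of evidence and degree rather than booleans; the point of the sketch is only that each condition yields a concrete question a design team can ask and track over successive "natural steps."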
The Natural Step as an Institution
To learn more about The Natural Step as a framework or institution, go to http://www.naturalstep.org.
Industrial Ecology
Business activity currently generates waste and by-products. Unlike natural systems, modern human societies process resources in a linear fashion, creating waste faster than it can be reconstituted into reusable resources. According to the National Academy of Engineering, on average 94 percent of the raw materials used to make a product ends up as waste; only 6 percent ends up in the final product. Whereas pollution control and prevention focus on minimizing waste, industrial ecology accepts waste streams as inevitable and treats them as useful inputs to other industrial and commercial processes. Continued provision of needed goods and services to growing populations in a finite biosphere becomes at least conceptually possible if all waste generated by business and consumer behavior is taken up by other industrial and commercial processes or safely returned to nature.
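The cited 6 percent figure is a measure of material efficiency. A small sketch contrasts the linear flow with a hypothetical closed loop in which part of each process's waste becomes feedstock for a neighboring process; every number other than the cited 6 percent is an illustrative assumption:

```python
# Linear flow: per the National Academy of Engineering figure cited in
# the text, only 6% of raw material ends up in the product; 94% is waste.
raw_material = 100.0       # arbitrary units of raw material
product_fraction = 0.06    # cited average material efficiency

linear_product = raw_material * product_fraction
linear_waste = raw_material - linear_product

# Closed-loop variant: assume (hypothetically) that half of each
# process's waste is taken up as feedstock by another process, and
# iterate until the residual waste is negligible.
recovery_rate = 0.5        # hypothetical uptake fraction
total_product, waste = 0.0, raw_material
while waste > 1e-9:
    total_product += waste * product_fraction
    waste = waste * (1 - product_fraction) * recovery_rate

print(f"linear: product={linear_product:.1f}, waste={linear_waste:.1f}")
print(f"looped: product={total_product:.1f} from the same {raw_material:.0f} units")
```

Under these assumptions the same raw material yields roughly 11.3 units of product instead of 6, which is the arithmetic behind the industrial-ecology claim that reconceiving wastes as inputs raises system-wide resource productivity.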
Consequently, the field of industrial ecology assumes the industrial system exists as a human-produced ecosystem with distinct material, energy, and information flows similar to any other ecosystem within the biosphere. It therefore must meet the same physical constraints as other ecosystems to survive. As a systems approach to understanding the interaction between industry and the natural world, industrial ecology looks beyond the linear cradle-to-grave viewpoint of design—you source materials, build the product, use the product, and throw it away—and imagines business as a series of energy and material flows in which ideally the wastes of one process serve as the feedstock of another. Accordingly, nature’s processes and business activities are seen as interacting systems rather than separate components. They form an industrial web analogous to but separate from the natural web from which they may nonetheless draw inspiration.Hardin B. C. Tibbs, “Industrial Ecology: An Environmental Agenda for Industry,” Whole Earth Review 4, no. 16 (Winter 1992): 4–19; Deanna J. Richards, Braden Allenby, and Robert A. Frosch, “The Greening of Industrial Ecosystems: Overview and Perspective,” in The Greening of Industrial Ecosystems, ed. Deanna J. Richards and Braden Allenby (Washington, DC: National Academy Press, 1994), 3.
Clinton Andrews, a professor of environmental and urban planning, suggested a series of themes for industrial ecology based on natural metaphors: “Nutrients and wastes become raw materials for other processes, and the system runs almost entirely on solar energy. The analogy suggests that a sustainable industrial system would be one in which nearly complete recycling of materials is achieved.” Andrews described the present industrial systems as having “primitive metabolisms,” which will be “forced by environmental and social constraints to evolve more sophisticated metabolisms.…Inexhaustibility, recycling, and robustness are central themes in the industrial ecology agenda.”Clinton Andrews, Frans Berkhout, and Valerie Thomas, “The Industrial Ecology Agenda,” in Industrial Ecology and Global Change, ed. Robert Socolow, Clinton Andrews, Frans Berkhout, and Valerie Thomas (Cambridge: Cambridge University Press, 1994), 471–72. Theoretically, restructuring industry for compatibility with natural ecosystems’ self-regulation and self-renewal would reduce the current human activity that undermines natural systems and creates the growing environmental health problems we face.
In 1977, American geochemist Preston Cloud observed that “materials and energy are the interdependent feedstocks of economic systems, and thermodynamics is their moderator.”Suren Erkman, “Industrial Ecology: An Historical View,” Journal of Cleaner Production 5, no. 1–2 (1997): 1–10. Cloud’s point about thermodynamics anticipates TNS, and he was perhaps the first person to use the term “industrial ecosystem.”Preston Cloud, “Entropy, Materials and Posterity,” Geologische Rundschau 66, no. 3 (1977): 678–96, quoted and cited in John Ehrenfeld and Nicholas Gertler, “Industrial Ecology in Practice: The Evolution of Interdependence at Kalundborg,” Journal of Industrial Ecology 1, no. 1 (Winter 1997): 67–79. Despite earlier analogies between the human economy and natural systems, this correspondence did not gain widespread currency until 1989 when business executive Robert Frosch and Nicholas Gallopoulos first coined the term “industrial ecology”Robert A. Frosch and Nicholas E. Gallopoulos, “Strategies for Manufacturing,” Scientific American 261, no. 3 (September 1989): 144–52. and described it in Scientific American as follows:
In nature an ecological system operates through a web of connections in which organisms live and consume each other and each other’s waste. The system has evolved so that the characteristic of communities of living organisms seems to be that nothing that contains available energy or useful material will be lost. There will evolve some organism that will manage to make its living by dealing with any waste product that provides available energy or usable material. Ecologists talk of a food web: an interconnection of uses of both organisms and their wastes. In the industrial context we may think of this as being use of products and waste products. The system structure of a natural ecology and the structure of an industrial system, or an economic system, are extremely similar.Robert A. Frosch, “Industrial Ecology: A Philosophical Introduction,” Proceedings of the National Academy of Sciences, USA, vol. 89 (February 1992): 800–803.
Professor Robert U. Ayres clarified process flows within the natural and industrial systems by naming them the “biological metabolism” and the “industrial metabolism.”Ayres coined the term “industrial metabolism” at a conference at the United Nations University in 1987. The proceedings of this conference were published in Robert U. Ayres and Udo Ernst Simonis, eds., Industrial Metabolism (Tokyo: United Nations University Press, 1994). The feedstocks of these systems are known as “biological nutrients” and “industrial nutrients,” respectively, when they act in a closed cycle (which is always the case in nature, and rarely the case in industry).See Robert U. Ayres, “Industrial Metabolism: Theory and Practice,” in The Greening of Industrial Ecosystems, ed. Deanna J. Richards and Braden Allenby (Washington, DC: National Academy Press, 1994), 25; Robert U. Ayres and Udo Ernst Simonis, eds., Industrial Metabolism (Tokyo: United Nations University Press, 1994). In an ideal industrial ecosystem, there would be, as Hardin Tibbs wrote, “no such thing as ‘waste’ in the sense of something that cannot be absorbed constructively somewhere else in the system.” This suggests that “the key to creating industrial ecosystems is to reconceptualize wastes as products.”Hardin B. C. Tibbs, “Industrial Ecology: An Environmental Agenda for Industry,” Whole Earth Review 4, no. 16 (Winter 1992): 4–19.
Others have pointed out that “materials and material products (unlike pure services) are not really consumed. The only thing consumed is their ‘utility.’”Robert U. Ayres and Allen V. Kneese, “Externalities: Economics and Thermodynamics,” in Economy and Ecology: Towards Sustainable Development, ed. Franco Archibugi and Peter Nijkamp (Dordrecht, Netherlands: Kluwer Academic Publishers, 1989), 90. This concept has led to selling the utilization of products rather than the products themselves, thus creating a closed-loop product cycle in which manufacturers maintain ownership of the product. For example, a company could lease the service of floor coverings rather than sell carpeting. The responsibility for creating a system of product reuse, reconditioning, and other forms of product life extension, or waste disposal, then falls on the owner of the product—the manufacturer—not the user.Walter R. Stahel, “The Utilization-Focused Service Economy: Resource Efficiency and Product-Life Extension,” in The Greening of Industrial Ecosystems, ed. Deanna J. Richards and Braden Allenby (Washington, DC: National Academy Press, 1994), 183. This product life cycle can be described as being “from cradle back to cradle,” rather than from cradle to grave, which is of primary importance in establishing a well-functioning industrial ecosystem.Walter R. Stahel, “The Utilization-Focused Service Economy: Resource Efficiency and Product-Life Extension,” in The Greening of Industrial Ecosystems, ed. Deanna J. Richards and Braden Allenby (Washington, DC: National Academy Press, 1994), 183. The cradle-to-cradle life cycle became so important to some practitioners that it emerged as an independent concern.
The challenges to establishing a sophisticated industrial ecosystem are many, including identifying appropriate input opportunities for waste products amid ownership, geographic, jurisdictional, informational, operational, regulatory, and economic hurdles. Although industrial ecology could theoretically link industries around the globe, it has also been used at a local scale to mitigate some of these challenges. Several eco-industrial parks are currently in development (Kalundborg, Denmark, is the well-known historical example) where industries are intentionally sited together based on their waste products and input material requirements. If the interdependent system components at the site are functioning properly, the emissions from the industrial park are zero or almost zero. Problems arise when companies change processes, move facilities, or go out of business. This disrupts the ordered and tightly coupled chain of interdependency, much as when a species disappears from a natural ecosystem. Industrial ecology thus provides a broad framework and suggests practical solutions.
Natural Capitalism
Natural capitalism is a broad social and economic framework that attempts to integrate insights from eco-efficiency, nature’s services, biomimicry, and other realms to create a plan for a sustainable, more equitable, and productive world. Paul Hawken, author of The Ecology of Commerce, and Amory Lovins and L. Hunter Lovins, cofounders of the resource analysis organization Rocky Mountain Institute and coauthors with Ernst von Weizsäcker of Factor Four: Doubling Wealth, Halving Resource Use, were independently looking for an overall framework to implement the environmental business gains they had studied and advocated. After learning of each other’s projects, they decided in 1994 to collaborate on Natural Capitalism:
Some very simple changes to the way we run our businesses, built on advanced techniques for making resources more productive, can yield startling benefits both for today’s shareholders and for future generations. This approach is called natural capitalism because it’s what capitalism might become if its largest category of capital—the “natural capital” of ecosystem services—were properly valued. The journey to natural capitalism involves four major shifts in business practices, all vitally interlinked:
• Dramatically increase the productivity of natural resources.…
• Shift to biologically inspired production models.…
• Move to a solution-based business model.…
• Reinvest in natural capital.…Amory Lovins, L. Hunter Lovins, and Paul Hawken, “A Road Map for Natural Capitalism,” Harvard Business Review 77, no. 3 (May–June 1999): 146–48.
The Big Picture of Interdependence
In all respects, Natural Capitalism is about integration and restoration, a systems view of our society and its relationships to the environment. Paul Hawken, Amory Lovins, and L. Hunter Lovins, Natural Capitalism: Creating the Next Industrial Revolution (Boston: Little, Brown, 1999), xii–xiii.
Natural capitalism emphasizes a broad and integrated approach to sustainable human activity. Although economic, environmental, and social goals had been conventionally seen in conflict, natural capitalism argues, “The best solutions are based not on tradeoffs or ‘balance’ between these objectives but on design integration achieving all of them together.”Paul Hawken, Amory Lovins, and L. Hunter Lovins, Natural Capitalism: Creating the Next Industrial Revolution (Boston: Little, Brown, 1999), xi. Hence, by considering all facets of the problem in advance, business can yield dramatic, multiple improvements and will drive environmental progress. For perhaps the simplest example, using more sunlight and less artificial light in buildings lowers energy costs, reduces pollution, and improves workers’ outlook and satisfaction, and hence their productivity and retention rates.
Like similar broad frameworks for sustainability, natural capitalism perceives a variety of current structures, rather than lack of knowledge or opportunity for profit, as obstacles to progress: perverse incentives from government tax policy hamper change, the division of labor and capital investments among different groups does not reward efficiency for the entire system but only the cheapest choice for each individual, companies do not know how to value natural capital properly, and so on.
Moving Away from Fossil Fuels
Amory Lovins talks about weaning the US economy off oil, 2005 Technology, Entertainment, and Design (TED) Conference.
Lovins argues that interlocking government incentives, rewards, market forces, and other system-level considerations can easily create the conditions to reduce US oil use.
www.ted.com/talks/lang/eng/amory_lovins_on_winning_the_oil_endgame.html
Natural capitalism also criticizes eco-efficiency as too narrow: “Eco-efficiency, an increasingly popular concept used by business to describe incremental improvements in materials use and environmental impact, is only one small part of a richer and more complex web of ideas and solutions.…More efficient production by itself could become not the servant but the enemy of a durable economy.”Paul Hawken, Amory Lovins, and L. Hunter Lovins, Natural Capitalism: Creating the Next Industrial Revolution (Boston: Little, Brown, 1999), xi–xii.
Natural capitalism does, however, see eco-efficiency as one important component of curbing environmental degradation. Adapting the best-available technology and designing entire systems, rather than just pieces, to function efficiently from the outset saves money quickly. That money can be invested in other changes. Indeed, natural capitalism’s case studies argue major gains in productivity by reconceiving entire systems are often cheaper than minor gains from incremental improvements.
Natural capitalism’s three other principles emphasize eliminating waste entirely and uniting environmental and economic gains. For instance, mimicking natural production systems means waste from one process equals food for another in a closed loop. Shifting from providing goods to providing services holds manufacturers accountable for their products and allows them to benefit from their design innovations while eliminating the waste inherent in planned obsolescence. Finally, companies can reinvest in natural capital to replenish, sustain, and expand the services and goods ecosystems provide. Beyond mimicry, letting nature do the work in the first place means that benign, efficient processes, such as using wetlands to process sewage, can replace artificial and often more dangerous and energy-intensive practices.
For example, a study of forests around the Mediterranean suggested that preserving forests may provide greater economic value than consuming those forests for timber and grazing land. Forests contribute immensely to clean waterways by limiting erosion and filtering pollutants. They can also sequester CO2, provide habitats for other valuable plants and animals, and encourage recreation and tourism. Investing in forests could therefore return dividends in various ways.
Ecological Economics
Ecological economics as a field of study was formalized in 1989 with the foundation of the International Society for Ecological Economics (ISEE) and the first publication of the journal Ecological Economics. The move toward ecological economics had roots in the classical economics, natural sciences, and sociology of the mid-nineteenth century but gained significant momentum in the 1970sJuan Martinez-Alier with Klaus Schlüpmann, Ecological Economics: Energy, Environment and Society (Oxford: Basil Blackwell, 1987). as the strain between human activity (economics) and natural systems (ecology) intensified but no discipline or even group of disciplines examined the interaction of those two systems specifically. Robert Costanza commented on the problem and the need for a new approach: “Environmental and resource economics, as it is currently practiced, covers only the application of neoclassical economics to environmental and resource problems. Ecology, as it is currently practiced, sometimes deals with human impacts on ecosystems, but the more common tendency is to stick to ‘natural’ systems.…[Ecological economics] is intended to be a new approach to both ecology and economics that recognizes the need to make economics more cognizant of ecological impacts and dependencies; the need to make ecology more sensitive to economic forces, incentives, and constraints.”Robert Costanza, “What Is Ecological Economics?,” Ecological Economics 1 (1989): 1.
The 2 × 2 diagram in Figure 3.5 depicts how ecological economics embraces a wide array of disciplines and interactions among them. For instance, conventional economics examines only transactions within economic sectors, while conventional ecology examines only transactions within ecological sectors. Other specialties arose to examine inputs from ecosystems to the economy (resource economics) or from the economic system to the environment (environmental economics and impact analyses). Ecological economics encompasses all possible flows among economies and ecosystems.
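The four cells of that diagram can be written out directly as a small lookup, mapping each (source, destination) flow to the subfield that studies it. The labels follow the text's description; the code structure itself is only an illustration:

```python
# Flows among economic and ecological sectors, and the subfield that
# studies each, per the 2 x 2 diagram described in the text.
flows = {
    ("economy", "economy"): "conventional economics",
    ("ecology", "ecology"): "conventional ecology",
    ("ecology", "economy"): "resource economics",
    ("economy", "ecology"): "environmental economics / impact analysis",
}

def field_for(source: str, destination: str) -> str:
    """Name the subfield studying the flow from `source` to `destination`."""
    return flows[(source, destination)]

# Ecological economics, by contrast, encompasses all four cells at once:
assert set(flows) == {(a, b) for a in ("economy", "ecology")
                             for b in ("economy", "ecology")}

print(field_for("ecology", "economy"))  # → resource economics
```

The point of the mapping is the final assertion: each older specialty claims one cell, while ecological economics claims the whole table, treating the economy as a subsystem embedded in Earth systems.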
Ecological economics examines how economies influence ecologies and vice versa. It sees economic activity as occurring only within the confines of Earth’s processes for maintaining life and equilibrium and ecology as overwhelmingly influenced by humans, even if they are but one species among many. In short, the global economy is a subset of Earth systems, not a distinct, unfettered entity. Earth’s processes and resultant equilibrium are threatened by massive material extraction from and waste disposal into the environment, while material inequality among societies and people threatens long-term prosperity and social stability. Hence the constitution of the ISEE propounds the “advancement of our understanding of the relationships among ecological, social, and economic systems and the application of this understanding to the mutual well-being of nature and people, especially that of the most vulnerable including future generations.”International Society for Ecological Economics, “Constitution: Article II. Purpose,” accessed December 1, 2010, www.ecoeco.org/content/about/constitution. The field continues to emphasize broadly and rigorously investigating interdependent systems and their material and energy flows.
Indeed, ecological economics began as a transdisciplinary venture. That variety in academic disciplines is reflected in the field’s seminal figures: Robert Costanza earned a master’s degree in urban and regional planning and a doctorate in systems ecology, Paul Ehrlich was a lepidopterist, Herman Daly was a World Bank economist, and Richard Norgaard an academic one. Diversity and breadth were enshrined in the ISEE constitution because “in an interconnected evolving world, reductionist science has pushed out the envelope of knowledge in many different directions, but it has left us bereft of ideas as to how to formulate and solve problems that stem from the interactions between humans and the natural world.”International Society for Ecological Economics, “Constitution: Article II. Purpose,” accessed December 1, 2010, http://www.ecoeco.org/content/about/constitution. Hence ecological economics has studied an array of issues, frequently including equitable economic development in poorer countries and questions of sustainable scale within closed systems.
Ecological Economics for Policy
Robert Costanza, Joshua Farley, and Jon Erickson discuss policy tools derived from ecological economic principles.
mitworld.mit.edu/video/531
Nonetheless, there has been some discussion of whether ecological economics should remain an eclectic category or become a defined specialty with concomitant methodologies.Richard B. Norgaard, “Ecological Economics: A Short Description,” Forum on Religion and Ecology, Yale University, 2000, accessed June 25, 2009, fore.research.yale.edu/disciplines/economics/index.html. Ecological economics tends to use different models than mainstream economics and has a normative inclination toward sustainability and justice over individual preference or maximizing return on investments.Mick Common and Sigrid Stagl, Ecological Economics: An Introduction (Cambridge: Cambridge University Press, 2005), 10; Paul Ehrlich, “The Limits to Substitution: Meta-Resource Depletion and a New Economic-Ecological Paradigm,” Ecological Economics 1 (1989): 11. Moreover, while a mainstream economics degree still requires no environmental education, some doctoral programs now grant a separate degree in ecological economics, while others offer it as a field for specialization. The location of ecological economics courses within university economics departments, however, suggests that contrary to the founding aspirations of the field, ecological economics has become the purview of economists more than ecologists in the United States.
Cradle-to-Cradle
Cradle-to-cradle is a design philosophy articulated in the book of the same name by William McDonough and Michael Braungart in 2002.William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002). As of 2005, cradle-to-cradle is also a certification system for products tested by McDonough Braungart Design Chemistry (MBDC) to meet cradle-to-cradle principles. The basic premise of cradle-to-cradle is that for most of industrial history, we have failed to plan for the safe reuse of materials or their reintegration into the environment. This failure, born of ignorance rather than malevolence, wastes the value of processed goods, such as purified metals or synthesized plastics, and threatens human and environmental health. Hence McDonough and Braungart propose “a radically different approach for designing and producing the objects we use and enjoy…founded on nature’s surprisingly effective design principles, on human creativity and prosperity, and on respect, fair play, and good will.”William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002), 6.
Consider the Ants
Consider this: all the ants on the planet, taken together, have a biomass greater than that of humans. Ants have been incredibly industrious for millions of years. Yet their productiveness nourishes plants, animals, and soil. Human industry has been in full swing for little over a century, yet it has brought about a decline in almost every ecosystem on the planet. Nature doesn’t have a design problem. People do.William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002), 16.
In this approach, ecology, economy, and equity occupy equally important vertices of a triangle of human activity, and waste is eliminated as a concept in advance, as all products should be designed to become harmless feedstocks or “nutrients” for other biological or industrial processes. These closed loops acknowledge that matter is finite on Earth, that Earth is ultimately humanity’s only home, and that the only new energy comes from the sun. Cradle-to-cradle thus shares and elaborates some of the basic understandings of TNS and industrial ecology, albeit with an emphasis on product design and life cycle.
McDonough is an architect who was inspired by elegant solutions to resource scarcity that he observed in Japan and Jordan. In the United States, he was frustrated by the dearth of options for improving indoor air quality in buildings in the 1980s. He also was frustrated with eco-efficiency’s “failure of imagination,” although eco-efficiency was a trendsetting business approach at the time. Eco-efficiency stressed doing “less bad” but still accepted the proposition that industry would harm the environment; hence, eco-efficiency would, at best, merely delay the worst consequences or, at worst, accelerate them. Furthermore, it implied economic activity was intrinsically negative. McDonough specified his personal frustration: “I was tired of working hard to be less bad. I wanted to be involved in making buildings, even products, with completely positive intentions.”William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002), 10.
Cradle-to-Cradle Design
William McDonough talks about cradle-to-cradle design at the 2005 TED conference.
www.ted.com/index.php/talks/william_mcdonough_on_cradle_to_cradle_design.html
Braungart, meanwhile, was a German chemist active in the Green Party and with Greenpeace: “I soon realized that protest wasn’t enough. We needed to develop a process for change.”William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002), 11. He created the Environmental Protection Encouragement Agency (EPEA) in Hamburg, Germany, to promote change but found few chemists had any concern for environmental design, while industrialists and environmentalists mutually demonized each other.
After Braungart and McDonough met in 1991, they drafted cradle-to-cradle principles and founded MBDC in 1994 to help enact them. One of their early successes was redesigning the manufacture of carpets for the Swiss firm Rohner Textil AG. The use of recycled plastics in manufacturing carpet was rejected, as the plastic itself is hazardous; humans inhale or ingest plastics as they are abraded and otherwise degraded. Hence McDonough and Braungart designed a product safe enough to eat. They used natural fibers and a process that made effluent from the factory cleaner than the incoming water. This redesign exemplified McDonough and Braungart’s idea of “eco-effectiveness,” in which “the key is not to make human industries and systems smaller, as efficiency advocates propound, but to design them to get bigger and better in a way that replenishes, restores, and nourishes the rest of the world” and that returns humans to a positive “dynamic interdependence” with rather than dominance over nature.William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002), 78, 80.
McDonough and Braungart’s efforts proved that cradle-to-cradle design was possible, concretely illustrating concepts important to cradle-to-cradle design while affirming the prior decades of conceptual work. The first concept of eco-effectiveness or ecological intelligence to be realized in cradle-to-cradle was the sense of nature and industry as metabolic systems, fed by “biological nutrients” in the “biosphere” and “technical nutrients” in the “technosphere,” or industry. “With the right design, all of the products and materials of industry will feed these two metabolisms, providing nourishment for something new,” thereby eliminating waste.William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002), 104.
McDonough and Braungart operationalized and popularized the concept of “waste equals food,” and by that phrase they mean that the waste of one system or process must be the “food” or feedstock of another. They were drawing on the industrial ecology writing of Robert Ayres, Hardin Tibbs, and others, since in a closed loop the waste is a nutrient (and an asset) rather than a problem for disposal. Hence waste equals food.Paul Hawken, Amory Lovins, and L. Hunter Lovins, Natural Capitalism: Creating the Next Industrial Revolution (Boston: Little, Brown, 1999), 12. Also see Paul Hawken and William McDonough, “Seven Steps to Doing Good Business,” Inc., November 1993, 81; William McDonough Architects, The Hannover Principles: Design for Sustainability (Charlottesville, VA: William McDonough Architects, 1992), 7. A core goal of sustainable design is to eliminate the concept of waste so that all products nourish a metabolism. Although lowering resource consumption has its own returns to the system, the waste-equals-food notion allows the possibility for nontoxic “waste” to be produced without guilt as long as the waste feeds another product or process.
To explain further the implications of designing into the two metabolisms, McDonough and Braungart and Justus Englefried of the EPEA developed the Intelligent Product System, which is a typology of three fundamental products that guides design to meet the waste-equals-food test. The product types are consumables, products of service, and unsalables.Paul Hawken, Amory Lovins, and L. Hunter Lovins, Natural Capitalism: Creating the Next Industrial Revolution (Boston: Little, Brown, 1999), 67; William McDonough, “A Boat for Thoreau: A Discourse on Ecology, Ethics, and the Making of Things,” in The Business of Consumption: Environmental Ethics and the Global Economy, ed. Laura Westra and Patricia H. Werhane (Lanham, MD: Rowman and Littlefield, 1998), 297–317.
A “consumable” is a product that is intended to be literally consumed, such as food, or designed to safely return to the biological (or organic) metabolism where it becomes a nutrient for other living things.Paul Hawken and William McDonough, “Seven Steps to Doing Good Business,” Inc., November 1993, 81. McDonough added that “the things we design to go into the organic metabolism should not contain mutagens, carcinogens, heavy metals, persistent toxins, bio-accumulative substances or endocrine disrupters.”William McDonough, “A Boat for Thoreau: A Discourse on Ecology, Ethics, and the Making of Things,” in The Business of Consumption: Environmental Ethics and the Global Economy, ed. Laura Westra and Patricia H. Werhane (Lanham, MD: Rowman and Littlefield, 1998), 297–317. For an explanation of endocrine disrupters, see Theo Colburn, Dianne Dumanoski, and John Peterson Myers, Our Stolen Future (New York: Dutton, 1996).
A “product of service,” on the other hand, provides a service, as suggested by Walter Stahel and Max Börlin, among others.Walter R. Stahel, “The Utilization-Focused Service Economy: Resource Efficiency and Product-Life Extension,” in The Greening of Industrial Ecosystems, ed. Deanna J. Richards and Braden Allenby (Washington, DC: National Academy Press, 1994), 183; Robert U. Ayres and Allen V. Kneese, “Externalities: Economics and Thermodynamics,” in Economy and Ecology: Towards Sustainable Development, ed. Franco Archibugi and Peter Nijkamp (Dordrecht, Netherlands: Kluwer Academic Publishers, 1989), 90. Examples of service products include television sets (which provide the service of news and entertainment), washing machines (which provide clean clothes), computers, automobiles, and so on. These products would be leased, not sold, to a customer, and when the customer no longer required the service of the product or wanted to upgrade the service, the item would be returned to the producer to serve as a nutrient to the industrial metabolism. This system of design and policy provides an incentive for the producer to use design for environment (DfE) and concurrent engineering to design for refurbishing, disassembly, remanufacture, and so forth. Braungart suggests that “waste supermarkets” could provide centralized locations for customer “de-shopping,” where used service products are returned and sorted for reclamation by the producer.Paul Hawken and William McDonough, “Seven Steps to Doing Good Business,” Inc., November 1993, 81; Michael Braungart, “Product Life-Cycle Management to Replace Waste Management,” in Industrial Ecology and Global Change, ed. Robert Socolow, Clinton Andrews, Frans Berkhout, and Valerie Thomas (Cambridge: Cambridge University Press, 1994), 335–37.
An “unsalable,” also known as an “unmarketable,” is a product that does not feed metabolism in either the technosphere or the biosphere and thus should not be made. Unsalables include products that incorporate dangerous (radioactive, toxic, carcinogenic, etc.) materials or that combine both biological and technical nutrients in such a way that they cannot be separated. These latter combinations are “monstrous hybrids” from the cradle-to-cradle perspective or “products plus”—something we want plus a toxin we do not. Recycling, as Ayres explained, has become more difficult due to increasingly complex materials forming increasingly complex products. His example was the once-profitable wool recycling industry, which has now virtually disappeared because most new clothes are blends of fibers from both the natural and industrial metabolisms that cannot be separated and reprocessed economically.Robert U. Ayres, “Industrial Metabolism: Theory and Practice,” in The Greening of Industrial Ecosystems, ed. Deanna J. Richards and Braden Allenby (Washington, DC: National Academy Press, 1994), 34–35.
In a sustainable economy, unsalables would not be manufactured. During the transition, unsalables, as a matter of business and public policy, would always belong to the original manufacturer. To guarantee that unsalables are not dumped or otherwise discharged into the environment in irretrievable locations, “waste parking lots” operated perhaps by a public utility would be established so that these products can be stored safely. The original manufacturers of the unsalables would be charged rent for the storage until such time as processes are developed to detoxify their products. All toxic chemicals would contain chemical markers that identify the chemical’s owner, and the owner would be responsible for retrieving, mitigating, or cleaning up its toxins should they be discovered in lakes, wells, soil, birds, or people.Paul Hawken and William McDonough, “Seven Steps to Doing Good Business,” Inc., November 1993, 81; Michael Braungart, “Product Life-Cycle Management to Replace Waste Management,” in Industrial Ecology and Global Change, ed. Robert Socolow, Clinton Andrews, Frans Berkhout, and Valerie Thomas (Cambridge: Cambridge University Press, 1994), 335–37.
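The three product types amount to a classification rule over a product’s material properties. A minimal sketch of that rule follows; the attribute names and the decision order are illustrative assumptions, not part of the Intelligent Product System as published by McDonough, Braungart, and Englefried.

```python
# Hypothetical sketch of the Intelligent Product System typology as a
# classification rule. Attribute names and decision order are assumptions.

def classify_product(biodegradable: bool, contains_toxins: bool,
                     separable_materials: bool) -> str:
    """Assign a product to one of the three Intelligent Product System types."""
    if contains_toxins:
        # Dangerous (toxic, radioactive, etc.) materials fail both metabolisms.
        return "unsalable"
    if biodegradable:
        # Safe return to the biological metabolism as a nutrient.
        return "consumable"
    if separable_materials:
        # Technical nutrients reclaimable by the producer after a lease.
        return "product of service"
    # Inseparable mixes of biological and technical nutrients are
    # "monstrous hybrids" and therefore also unsalable.
    return "unsalable"

print(classify_product(biodegradable=True, contains_toxins=False,
                       separable_materials=False))   # consumable
print(classify_product(biodegradable=False, contains_toxins=False,
                       separable_materials=True))    # product of service
print(classify_product(biodegradable=False, contains_toxins=True,
                       separable_materials=False))   # unsalable
```

The final branch captures Ayres’s wool-blend example: fibers from both metabolisms, combined so they cannot be separated and reprocessed, fall through to unsalable.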
The second principle of ecological intelligence, “use current solar income,” is derived from the second law of thermodynamics. Though the earth is a closed system with respect to matter, it is an open system with respect to energy, thanks to the sun. This situation implies that a sustainable, steady-state economy is possible on Earth as long as the sun continues to shine.Robert U. Ayres and Allen V. Kneese, “Externalities: Economics and Thermodynamics,” in Economy and Ecology: Towards Sustainable Development, ed. Franco Archibugi and Peter Nijkamp (Dordrecht, Netherlands: Kluwer Academic Publishers, 1989), 105. Using current solar income requires that Earth capital not be depleted—generally mined and burned—as a way to release energy. Thus all energy must be either solar or from solar-derived sources such as wind power, photovoltaic cells, geothermal, tidal power, and biomass fuels.Geothermal power, although perhaps more plentiful than other sources, ultimately derives from heat within Earth’s mantle and is thus not technically solar derived. Fossilized animals and plants, namely oil and coal, while technically solar sources, fail the current solar income test, and their use violates the imperative to preserve healthy natural system functioning since burning fossil fuels alters climate systems and produces acid rain among other adverse impacts.
The third principle of ecological intelligence is “respect diversity.” Biodiversity, the characteristic that sustains the natural metabolism, must be encouraged through conscious design. Diversity in nature increases overall ecosystem resilience to exogenous shocks. Clinton Andrews, Frans Berkhout, and Valerie Thomas suggest applying this characteristic to the industrial metabolism to develop a similar robustness.Clinton Andrews, Frans Berkhout, and Valerie Thomas, “The Industrial Ecology Agenda,” in Industrial Ecology and Global Change, ed. Robert Socolow, Clinton Andrews, Frans Berkhout, and Valerie Thomas (Cambridge: Cambridge University Press, 1994), 472–75. (See Andrews’s guiding metaphors for industrial ecology earlier in this section.) Respecting diversity, however, has a broader interpretation than just biological diversity. In its broadest sense, “respect diversity” means “one size does not fit all.” Every location has different material flows, energy flows, culture, and character.William McDonough, “A Boat for Thoreau: A Discourse on Ecology, Ethics, and the Making of Things,” in The Business of Consumption: Environmental Ethics and the Global Economy, ed. Laura Westra and Patricia H. Werhane (Lanham, MD: Rowman and Littlefield, 1998), 297–317. Therefore, this principle attempts to take into account the uniqueness of place by celebrating differences rather than promoting uniformity and monocultures.
In addition to the requirement of ecological intelligence, an additional criterion similar to the fourth system condition of TNS asks of the design, “Is it just?” Justice from a design perspective can be tricky to define or quantify and instead lends itself to qualitative reflection. However, the sustainable design framework forces an intergenerational perspective of justice through its design principles and product typology. As William McDonough explains, products designed to fit neither the biological nor industrial metabolism inflict “remote tyranny” on future generations as they will be left with the challenges of depleted Earth capital and wastes that are completely useless and often dangerous.William McDonough, “A Boat for Thoreau: A Discourse on Ecology, Ethics, and the Making of Things,” in The Business of Consumption: Environmental Ethics and the Global Economy, ed. Laura Westra and Patricia H. Werhane (Lanham, MD: Rowman and Littlefield, 1998), 297–317.
Finally, cradle-to-cradle eco-effectiveness “sees commerce as the engine of change” rather than the inherent enemy of the environment and “honors its ability to function quickly and productively.”William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002), 150. Companies should make money, but they must also protect local cultural and environmental diversity, promote justice, and in McDonough’s world, be fun.
Nature’s Services
Nature’s services emerged in the late 1990s as a practical framework to put a monetary value on the services that ecosystems provide to humans to better weigh the trade-offs involved with preserving an ecosystem or converting it to a different use. The nature’s services outlook posits two things. First, “the goods and services flowing from natural ecosystems are greatly undervalued by society…[and] the benefits of those ecosystems are not traded in formal markets and do not send price signals.”Gretchen Daily, ed., Nature’s Services: Societal Dependence on Natural Ecosystems (Washington, DC: Island Press, 1997), 2. Second, we are rapidly reaching a point of no return, where we will have despoiled or destroyed so many ecosystems that the earth can no longer sustain the burgeoning human population. Nature’s systems are too complex for humans to understand entirely, let alone replace if the systems fail. Indeed, Stanford biology professor Gretchen Daily was inspired to edit the book Nature’s Services, published in 1997, after “a small group of us [scientists] gathered to lament the near total lack of public appreciation of societal dependence upon natural ecosystems.”Gretchen Daily, ed., Nature’s Services: Societal Dependence on Natural Ecosystems (Washington, DC: Island Press, 1997), xv. Daily expanded on these concepts in the 2002 book The New Economy of Nature.
Ecosystem Survival Is Human Survival
Unless their true social and economic value is recognized in terms we all can understand, we run the grave risk of sacrificing the long-term survival of these natural systems to our short-term economic interests.Gretchen Daily, ed., Nature’s Services: Societal Dependence on Natural Ecosystems (Washington, DC: Island Press, 1997), xx.
Nature’s services consist primarily of “ecosystem goods” and “ecosystem services.” Natural systems have developed synergistic and tightly intertwined structures and processes within which species thrive, wastes are converted to useful inputs, and the entire system sustains itself, sustaining human life and activity as a subset. For instance, ecosystem services include the carbon and nitrogen cycles, pollination of crops, or the safe decomposition of wastes, all of which can involve species from bacteria to trees to bees. Healthy ecosystems also provide “ecosystem goods, such as seafood, forage, timber, biomass fuels, natural fibers, and many pharmaceuticals, industrial products, and their precursors.”Gretchen Daily, ed., Nature’s Services: Societal Dependence on Natural Ecosystems (Washington, DC: Island Press, 1997), 3. In short, ecosystems provide raw materials for the human economy or provide the conditions that allow humans to have an economy in the first place.
Although these natural goods and services can be valued “biocentrically” (i.e., for their intrinsic worth) or “anthropocentrically” (i.e., for their value to humans), the nature’s services framework focuses on the latter because its audience needs a way to incorporate ecosystems into conventional, cost-benefit calculations for human projects. For instance, if a field is “just there,” the conventional calculation of the cost of converting it to a parking lot will focus much more on the price of asphalt and contractors than on the value lost when the field can no longer filter water, support plants and wildlife, grow food, or provide aesthetic pleasure. A nature’s services outlook instead captures the value of the functioning field so that it can be directly compared to the value of a parking lot.
Anthropocentric valuation schemes can take numerous forms. They can consider how ecosystems contribute to broad goals of sustainability, fairness, and efficiency, or to more direct economic activity. For instance, a farmer could calculate the avoided cost of applying pesticides whenever a sound ecosystem or biological method instead controls pests. A state forestry agency could calculate the direct value of consuming ecosystem products, such as the value of trees cut and ultimately sold as lumber, or it could calculate the indirect value of using the same forest for recreation and tourism, perhaps by calculating travel costs and other fees people are willing to bear to use that forest.
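The avoided-cost and travel-cost methods just described reduce to simple arithmetic. The sketch below works through one of each; all figures (acreage, prices, visit counts) are invented for illustration and do not come from the source.

```python
# Hypothetical sketch of two anthropocentric valuation methods.
# All numbers are invented for illustration.

# Avoided cost: a sound ecosystem controls pests, so the farmer skips spraying.
acres = 200
pesticide_cost_per_acre = 35.0          # assumed $/acre/year
avoided_cost = acres * pesticide_cost_per_acre

# Travel-cost method: the indirect recreation value of a forest, approximated
# by what visitors are willing to spend to reach and use it.
visits_per_year = 12_000
avg_travel_and_fees = 18.50             # assumed $/visit
recreation_value = visits_per_year * avg_travel_and_fees

print(f"Avoided pesticide cost: ${avoided_cost:,.0f}/year")      # $7,000/year
print(f"Recreation value:       ${recreation_value:,.0f}/year")  # $222,000/year
```

Either figure can then enter an ordinary cost-benefit calculation, which is the point of the anthropocentric approach: the field or forest stops being “just there” and acquires a price that can be weighed against the parking lot.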
Estimating the value of nature can be difficult, especially because we are not used to thinking about buying and selling its services, such as clean air and clean water, or we see them as so basic that we want them to be free to all. Moreover, most people do not even know the services nature provides or how those services interact. Nonetheless, in addition to the aforementioned methods, economists and others trying to use nature’s services often survey people’s willingness to pay for nature, such as using their willingness to protect an endangered animal as a proxy for their attitude toward that animal’s ecosystem as a whole. One spectrum of approaches to valuation is illustrated in Figure 3.7, where use value reflects present anthropocentric value and nonuse value encompasses biocentric value as well as anthropocentric value for future generations.
In addition to the uncertainty of ascertaining values for everything an ecosystem can do, the nature’s services approach faces the issues of whether some people’s needs should be valued more than others’ and of how present choices will constrain future options. Nature’s services practitioners also must be able to calculate changes in value from incremental damage, not just the total value of an ecosystem. For example, clear-cutting one hundred acres of rain forest to plant palm trees is one problem; eradicating the entire Amazon rain forest is quite another. Destroying the first hundred acres might have a very different cost than destroying the last hundred. Hence the nature’s services approach attempts to characterize ecosystems, their goods and services, and their interdependence with ever greater resolution so that the results can be included in economic calculations. Finally, once those values are quantified, their corresponding ecosystems need to be protected as would any other asset. Systems for monitoring and safeguarding nature’s services must emerge concurrently with estimates of their worth.
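The incremental-damage point can be made concrete with a toy model. In the sketch below, the value of the remaining ecosystem is a hypothetical concave function of acreage, so the marginal loss rises as the ecosystem shrinks; the functional form and the scaling constant are invented for illustration, not an estimate of any real ecosystem.

```python
# Toy model: why clearing the first hundred acres costs far less than
# clearing the last hundred. The value function is a made-up illustration.
import math

TOTAL_ACRES = 100_000
K = 1_000.0  # hypothetical scaling constant, $ per sqrt(acre)

def ecosystem_value(acres: float) -> float:
    """Hypothetical concave total value of the remaining ecosystem."""
    return K * math.sqrt(acres)

def loss_from_clearing(acres_before: float, cleared: float) -> float:
    """Change in value from clearing `cleared` acres of `acres_before`."""
    return ecosystem_value(acres_before) - ecosystem_value(acres_before - cleared)

first_100 = loss_from_clearing(TOTAL_ACRES, 100)  # ≈ $158
last_100 = loss_from_clearing(100, 100)           # $10,000

print(f"Value lost clearing the first 100 acres: ${first_100:,.0f}")
print(f"Value lost clearing the last 100 acres:  ${last_100:,.0f}")
```

Under this assumed curve the last hundred acres are worth over sixty times the first hundred, which is why practitioners need marginal values, not just an ecosystem’s total value.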
Robert Costanza and collaborating scientists and economists wrote one of the first papers on the financial value of ecosystems, “The Value of Ecosystem Services: Putting the Issues in Perspective,” published in Ecological Economics in 1998.Robert Costanza, Ralph d’Arge, Rudolf de Groot, Stephen Farber, Monica Grasso, Bruce Hannon, Karin Limburg, et al., “The Value of Ecosystem Services: Putting the Issues in Perspective,” Ecological Economics 25, no. 1 (April 1998): 67–72, doi:10.1016/S0921-8009(98)00019-6. It and the review article “The Nature and Value of Ecosystem Services” by Kate Brauman, Gretchen Daily, T. Ka’eo Duarte, and Harold Mooney are worth reading for an accessible discussion of ecosystem services.Kate A. Brauman, Gretchen C. Daily, T. Ka’eo Duarte, and Harold A. Mooney, “The Nature and Value of Ecosystem Services: An Overview Highlighting Hydrologic Services,” Annual Review of Environment and Resources 32, no. 6 (2007): 1–32, doi:10.1146/annurev.energy.32.031306.102758.
Biomimicry
Biomimicry, expounded by Janine Benyus in a book of the same name, is “the conscious emulation of life’s genius” to solve human problems in design, industry, and elsewhere.Janine M. Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997), 2. Biomimicry also spawned a consultancy and nonprofit organization, both based in Montana. The Biomimicry Guild helps companies apply biomimicry’s principles, while the Biomimicry Institute aspires to educate a broad audience and spread those principles. Biomimicry’s core assumption is that four billion years of natural selection and evolution have yielded sophisticated, sustainable, diverse, and efficient answers to problems such as energy use and sustainable population growth. Humans now have the technology to understand many of nature’s solutions and to apply similar ideas in our societies from the level of materials, such as mimicking spider silk or deriving pharmaceuticals from plants, to the level of ecosystems and the biosphere, such as improving our agriculture by learning from prairies and forests or reducing our greenhouse gas emissions by shifting toward solar energy.
Biomimicry
Janine Benyus talks about biomimicry at the 2005 TED conference.
www.ted.com/talks/janine_benyus_shares_nature_s_designs.html
Biomimicry does not, however, merely exploit nature’s design secrets in conventional industry, whether to make Velcro or genetically engineered corn. Instead, biomimicry requires us to assume a sustainable place within nature by recognizing ourselves as inextricably part of it. Biomimicry focuses “not on what we can extract from the natural world, but on what we can learn from it.”Janine M. Benyus, prologue to Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997). This emphasis leads to three precepts: nature is a model for sustainable designs and processes, nature is the measure for successful solutions, and nature is our mentor. It also lends urgency to protecting ecosystems and cataloguing their species and interdependencies so that we may continue to be inspired, aided, and instructed by nature’s ingenuity. In these respects, biomimicry most resembles industrial ecology and nature’s services but clearly shares traits with other frameworks and concepts.
Nature as the Ultimate Model
In short, living things have done everything we want to do, without guzzling fossil fuel, polluting the planet, or mortgaging their future. What better models could there be?…This time, we come not to learn about nature so that we might circumvent or control her, but to learn from nature, so that we might fit in, at last and for good, on the Earth from which we sprang.Janine M. Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997), 2, 9.
Nature’s ingenuity, meanwhile, displays recurrent “laws, strategies, and principles”:
Nature
• runs on sunlight.
• uses only the energy it needs.
• fits form to function.
• recycles everything.
• rewards cooperation.
• banks on diversity.
• demands local expertise.
• curbs excesses from within.
• taps the power of limits.Janine M. Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997), 7.
Benyus was frustrated that her academic training in forestry, in contrast, focused on analyzing discrete pieces, which initially prevented her and others from seeing principles that emerge from analyzing entire systems. Similarly, solutions to problems of waste and energy need to operate with the big picture in mind. Benyus explicitly allied biomimicry with industrial ecology and elucidated ten principles of an economy that mimicked nature:Janine M. Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997), 252–277. Italicized items in the list are Benyus’s wording.
1. Use waste as a resource. Whether at the scale of integrated business parks or the global economy, “all waste is food, and everybody winds up reincarnated inside somebody else. The only thing the community imports in any appreciable amount is energy in the form of sunlight, and the only thing it exports is the by-product of its energy use, heat.”Janine M. Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997), 255.
2. Diversify and cooperate to fully use the habitat. Symbiosis and specialization within niches assure that nothing is wasted and provide benefits to other species or parts of the ecosystem, just as businesses benefit other companies or parts of industry when they collaborate to facilitate efficiency, remanufacturing, and other changes.
3. Gather and use energy efficiently. Use fossil fuels more efficiently and invest them in producing what truly matters in the long run while shifting to solar and other renewable resources.
4. Optimize rather than maximize. Focus on quality over quantity.
5. Use materials sparingly. Dematerialize products and reduce packaging; reconceptualize business as providing services instead of selling goods.
6. Don’t foul the nests. Reduce toxins and decentralize production of goods and energy.
7. Don’t draw down resources. Shift to renewable feedstocks but use them at a low enough rate that they can regenerate. Invest in ecological capital.
8. Remain in balance with the biosphere. Limit emissions of greenhouse gases, chlorofluorocarbons, and other pollutants that severely disrupt natural cycles.
9. Run on information. Create feedback loops to improve processes and reward environmental behavior.
10. Shop locally. Using local resources constrains regional populations to sizes that can be supported, reduces transportation needs, and lets people see the impact of their consumption on the environment and suppliers.
While biomimicry’s concepts can be used at different scales, they have already been directly applied to improve many conventional products. Butterflies alone have provided much help. For example, Lotusan paint uses lessons from the surface structure of butterfly wings to shed dirt and stay cleaner, obviating the need for detergents, while Qualcomm examined how butterfly wings scatter light to develop its low-energy and highly reflective Mirasol display for mobile phones and other electronics. These and other products have been catalogued by the Biomimicry Institute at AskNature.org.
Green Chemistry
Green chemistry, now a recognized field of research and design activity, grew from the awareness that conventional ways to synthesize chemicals consumed large amounts of energy and materials and generated hazardous waste, while the final products themselves were often toxic to humans and other life and persisted in the environment. Hence green chemistry seeks to produce safer chemicals in more efficient and benign ways as well as to neutralize existing contaminants. Such green chemicals typically emulate the nontoxic components and reactions of nature.
Note
1,300 Liters of Solvent for 1 Kilogram of Viagra
Green chemistry emerged as a field after the US Environmental Protection Agency (EPA) began the program “Alternative Synthetic Pathways for Pollution Prevention” in response to the 1990 Pollution Prevention Act. In 1993 the program, renamed “Green Chemistry,” established the Presidential Green Chemistry Challenge Award to encourage and recognize research that replaces dangerous chemicals and manufacturing processes with safer alternatives. Recent winners of the award have created ways to make cosmetics and personal products without solvents and an efficient way to convert plant sugars into biofuels.US Environmental Protection Agency, “Presidential Green Chemistry Challenge: Award Winners,” last updated July 28, 2010, accessed December 3, 2010, www.epa.gov/greenchemistry/pubs/pgcc/past.html. In 1997, the nonprofit Green Chemistry Institute was established; it would later become part of the American Chemical Society. The following year, the Organisation for Economic Co-operation and Development (OECD) created the Sustainable Chemistry Initiative Steering Group, and Paul Anastas and John Warner’s book Green Chemistry: Theory and Practice established twelve principles for green chemistry.Paul T. Anastas and John C. Warner, Green Chemistry: Theory and Practice (Oxford: Oxford University Press, 1998). The principles are quoted on the EPA website, US Environmental Protection Agency, “Green Chemistry: Twelve Principles of Green Chemistry,” last updated April 22, 2010, accessed December 1, 2010, www.epa.gov/greenchemistry/pubs/principles.html. Recognized as leaders in the green chemistry field, Anastas and Warner have continued to advance the ideas through innovation, education, and policy, with Warner helping to create the Warner Babcock Institute to support this mission. Paul Anastas, meanwhile, was confirmed as head of the EPA’s Office of Research and Development in 2010. Their green chemistry principles are reflected in a hierarchy of goals set by the Green Chemistry program:
1. Green Chemistry: Source Reduction/Prevention of Chemical Hazards
• Design chemical products to be less hazardous to human health and the environment*
• Use feedstocks and reagents that are less hazardous to human health and the environment*
• Design syntheses and other processes to be less energy and materials intensive (high atom economy, low E-factor)
• Use feedstocks derived from annually renewable resources or from abundant waste
• Design chemical products for increased, more facile reuse or recycling
2. Reuse or Recycle Chemicals
3. Treat Chemicals to Render Them Less Hazardous
4. Dispose of Chemicals Properly
*Chemicals that are less hazardous to human health and the environment are:
• Less toxic to organisms and ecosystems
• Not persistent or bioaccumulative in organisms or the environment
• Inherently safer with respect to handling and useUS Environmental Protection Agency, “Introduction to the Concept of Green Chemistry: Sustainable Chemistry Hierarchy,” last updated April 22, 2010, accessed December 1, 2010, www.epa.gov/greenchemistry/pubs/about_gc.html.
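One quantitative notion in the hierarchy above, atom economy, is easy to make concrete: it is the molecular weight of the desired product divided by the combined molecular weight of all reactants. A minimal sketch in Python (the addition reaction and molecular weights are illustrative):

```python
def atom_economy(product_mw, reactant_mws):
    """Percentage of the reactants' mass that ends up in the desired product.

    A high atom economy means little mass is shed as by-products (waste).
    """
    return 100.0 * product_mw / sum(reactant_mws)

# Illustrative addition reaction: ethylene + HCl -> chloroethane.
# Every reactant atom ends up in the product, so atom economy is 100%.
ethylene_mw, hcl_mw = 28.05, 36.46       # molecular weights, g/mol
chloroethane_mw = ethylene_mw + hcl_mw   # addition reactions form no by-product
print(atom_economy(chloroethane_mw, [ethylene_mw, hcl_mw]))  # 100.0
```

A substitution reaction that expels a leaving group would score well below 100 percent, flagging built-in waste before any material is ever made.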
Green chemistry also refers to a journal devoted to the topic (Green Chemistry), and one of its associate editors, Terry Collins, has identified steps to expand green chemistry. First, incorporate environmental considerations and sustainability ethics into the training of all chemists and their decisions in the laboratory. Second, be honest about the terms green or sustainable and the evidence for the harm chemicals cause. For instance, a cleaner, more efficient way to produce a certain product may be progress, but if the product itself remains highly toxic and persistent in the environment, it is not exactly green. Consequently, “since many chemical sustainability goals such as those associated with solar energy conversion call for ambitious, highly creative research approaches, short-term and myopic thinking must be avoided. Government, universities, and industry must learn to value and support research programs that do not rapidly produce publications, but instead present reasonable promise of promoting sustainability.”Terry Collins, “Toward Sustainable Chemistry,” Science 291, no. 5501 (2001): 48–49.
Collins has devised ways to degrade toxic chemicals already in the environment. He formed a spin-off from Carnegie Mellon University, GreenOx Catalysts, to develop and market his products, which have safely broken down anthrax as well as hazardous waste from paper pulp mills. Green chemistry, however, does not exist merely in government or university enclaves. In 2006, the Dow Chemical Company, with annual sales over \$50 billion, declared sustainable chemistry as part of its corporate strategy.Dow Chemical, “Innovative Insect Control Technology Earns Dow Another Green Chemistry Award,” news release, June 26, 2008, accessed June 26, 2009, www.dow.com/news/corporate/2008/20080626a.htm; Dow Chemical, “Dow Sustainability—Sustainability at Dow,” accessed June 26, 2009, http://www.dow.com/commitments/sustain.htm. DuPont, meanwhile, created a Bio-Based Materials division that has focused on using corn instead of petroleum to produce polymers for a variety of applications, from carpets to medical equipment, while also reducing greenhouse gas emissions.DuPont, “DuPont Bio-Based Materials—Delivering Sustainable Innovations That Reduce Reliance on Fossil Fuels,” fact sheet, accessed June 26, 2009, vocuspr.vocus.com/VocusPR30/Newsroom/MultiQuery.aspx?SiteName= DupontNew&Entity=PRAsset&SF_PRAsset_PRAssetID_EQ=101244&XSL=MediaRoomText &PageTitle= Fact%20Sheet&IncludeChildren=true&Cache=. Since synthetic chemicals are the basic building blocks of most modern products, from shoes to iPhones to food preservatives, green chemistry can play a significant role in sustainability. Cradle-to-cradle design, earth systems engineering, and virtually every other framework and tool can benefit from more environmentally friendly materials at the molecular level. As John Warner, a key figure in educating companies about green chemistry providing innovation and new materials across sectors, states,
The field of chemistry has been around in a modern interpretation for about 150 years, [and] we have invented our pharmaceuticals, our cosmetics, our materials, in a mindset that has never really focused on sustainability, toxicity and environmental impact. When one shifts to thinking in that way, it actually puts you in a new innovative space. In that new innovative space, that is the hallmark of creativity. What companies find is instead of it slowing them down, it accelerates time to market because they run into less hurdles in the regulatory process and in the manufacturing process. And it puts them in spaces that they weren’t normally in because they’ve approached it from another angle. Chemicals policy creates the demand. Green chemistry is not chemical policy. Green chemistry is the supply side, the science of identifying those alternatives. And so hand in hand, those two efforts accomplish the goals of more sustainable futures. But they’re not the same.Jonathan Bardelline interview of John Warner, “John Warner: Building Innovation Through Green Chemistry,” October 18, 2010, accessed March 7, 2011, www.greenbiz.com/blog/2010/10/18/john-warner-building-innovation- green-chemistry?page=0%2C1.
Green Engineering
Green engineering, as articulated by Paul Anastas and Julie Zimmerman, is a framework that can be applied at scales ranging from molecules to cities to improve the sustainability of products and processes. Green engineering works from a systems viewpoint and is organized around twelve principles that should be optimized as a system. For instance, one should not design a product for maximum separation and purification of its components (principle 3) if that choice would actually degrade the product’s overall sustainability.
The Twelve Principles of Green Engineering
• Principle 1: Designers need to strive to ensure that all material and energy inputs and outputs are as inherently nonhazardous as possible.
• Principle 2: It is better to prevent waste than to treat or clean up waste after it is formed.
• Principle 3: Separation and purification operations should be designed to minimize energy consumption and materials use.
• Principle 4: Products, processes, and systems should be designed to maximize mass, energy, space, and time efficiency.
• Principle 5: Products, processes, and systems should be “output pulled” rather than “input pushed” through the use of energy and materials.
• Principle 6: Embedded entropy and complexity must be viewed as an investment when making design choices on recycle, reuse, or beneficial disposition.
• Principle 7: Targeted durability, not immortality, should be a design goal.
• Principle 8: Design for unnecessary capacity or capability (e.g., “one size fits all”) solutions should be considered a design flaw.
• Principle 9: Material diversity in multicomponent products should be minimized to promote disassembly and value retention.
• Principle 10: Design of products, processes, and systems must include integration and interconnectivity with available energy and materials flows.
• Principle 11: Products, processes, and systems should be designed for performance in a commercial “afterlife.”
• Principle 12: Material and energy inputs should be renewable rather than depleting.Paul Anastas and Julie Zimmerman, “Design through the Twelve Principles of Green Engineering,” Environmental Science and Technology 37, no. 5 (2003): 95A.
Green engineering considers two basic priorities above all others: “life-cycle considerations” and “inherency.” Life-cycle considerations require engineers and designers to understand and assess the entire context and impact of their products from creation to end of use. Inherency means using and producing inherently safe and renewable or reusable materials and energies. Inherency sees external ways to control pollution or contain hazards as a problem because they can fail and tend to tolerate or generate waste. In this sense, inherency is a stringent form of pollution prevention.
Meanwhile, waste is a concept important in many of the principles of green engineering. As Anastas and Zimmerman explain, “An important point, often overlooked, is that the concept of waste is human. In other words, there is nothing inherent about energy or a substance that makes it a waste. Rather it results from a lack of use that has yet to be imagined or implemented.”Paul Anastas and Julie Zimmerman, “Design through the Twelve Principles of Green Engineering,” Environmental Science and Technology 37, no. 5 (2003): 97A. Waste has often been designed into systems as a tolerable nuisance, but increasingly, we cannot deal with our waste, whether toxins, trash, or ineffective uses of energy and resources. To avoid material waste, for example, we can design products to safely decompose shortly after their useful lifetime has passed (e.g., there is no point in having disposable diapers that outlast infancy by millennia). To avoid wastes within larger systems, we can stop overdesigning them based on worst-case scenarios. Instead, we should design flexibility into the system and look to exploit local inputs and outputs, much as a hybrid car recovers energy from braking to recharge its battery while a conventional car loses that energy as heat. We can also recognize that some highly complex objects such as computer chips may be better off being collected and reused, whereas simpler objects such as paper bags may be better off being destroyed and recycled. In essence, green engineering advocates avoiding waste and hazards to move toward sustainability through more thorough, creative planning and design.
Summary of Perspective of Green Engineering
• Material — Input: renewable/recycled, nontoxic. Output: easily separable and recyclable/reusable, nontoxic, no waste (eliminated or used as feedstock for something else).
• Energy — Input: renewable, not destructive to obtain. Output: no waste (lost heat, etc.), nontoxic (no pollution, etc.).
• Human intelligence — Input: creative, systems-level design to avoid waste, renew resources, and so forth in new products and processes. Output: sustainability.
Life-Cycle Analysis
Life-cycle analysis (LCA) methods are analytical tools for determining the environmental and health impacts of products and processes from material extraction to disposal. Engaging in the LCA process helps reveal the complex resource web that fully describes the life of a product and aids designers (among others) in finding ways to reduce or eliminate sources of waste and pollution. A cup of coffee is commonly used to illustrate the resource web of a product life cycle.
The journey of the cup of coffee begins with the clearing of forests in Colombia to plant coffee trees. The coffee trees are sprayed with insecticides manufactured in the Rhine River Valley of Europe; effluents from the production process make the Rhine one of the most polluted rivers in the world, with much of its downstream wildlife destroyed. When sprayed, the insecticides are inadvertently inhaled by Colombian farmers, and the residues are washed into rivers, adversely affecting downstream ecosystems. Each coffee tree yields beans for about forty cups of coffee annually. The harvested beans are shipped to New Orleans in a Japanese-constructed freighter made from Korean steel, the ore of which is mined on tribal lands in Papua New Guinea. In New Orleans, the beans are roasted and then packaged in bags containing layers of polyethylene, nylon, aluminum foil, and polyester. The three plastic layers were fabricated in factories along Louisiana’s infamous “Cancer Corridor,” where polluting industries are located disproportionately in African American neighborhoods. The plastic was made from oil shipped in tankers from Saudi Arabia. The aluminum foil was made from Australian bauxite strip-mined on aboriginal ancestral land and then shipped in barges fueled by Indonesian oil to refining facilities in the Pacific Northwest. These facilities derive their energy from the hydroelectric dams of the Columbia River, which have destroyed salmon fishing runs considered sacred by Native American groups. The bags of coffee beans are then shipped across the United States in trucks powered by gasoline from Gulf of Mexico oil refined near Philadelphia, a process that has contributed to serious air and water pollution, fish contamination, and the decline of wildlife in the Delaware River basin. And all of this ignores the cup that holds the coffee.Alan Thein Durning and Ed Ayres, “The History of a Cup of Coffee,” World Watch 7, no. 5 (September/October 1994): 20–23.
The coffee example illustrates the complexity in conducting an LCA. The LCA provides a systems perspective but is essentially an accounting system. It attempts to account for the entire resource web and all associated points of impact and thus is understandably difficult to measure with complete accuracy. The Society of Environmental Toxicology and Chemistry (SETAC) has developed a standard methodology for LCA. The following are the objectives of this process:Joseph Fiksel, “Methods for Assessing and Improving Environmental Performance,” in Design for Environment: Creating Eco-Efficient Products and Processes, ed. Joseph Fiksel (New York: McGraw Hill, 1996), 116–17.
• Develop an inventory of the environmental impacts of a product, process, or activity by identifying and measuring the materials and energy used as well as the wastes released into the environment.
• Assess the impact on the environment of the materials and energy used and released.
• Evaluate and implement strategies for environmental improvement.
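Because the inventory step above is essentially bookkeeping, a toy version can be sketched in a few lines of Python. The life-cycle stages, flows, and figures here are hypothetical placeholders, not measured data:

```python
# Hypothetical life-cycle inventory for one product unit: each stage records
# energy used (MJ) and CO2 released (kg). Real LCAs track many more flows.
inventory = {
    "raw material extraction": {"energy_mj": 12.0, "co2_kg": 0.9},
    "manufacturing":           {"energy_mj": 30.0, "co2_kg": 2.1},
    "transport":               {"energy_mj":  8.0, "co2_kg": 0.6},
    "use":                     {"energy_mj": 45.0, "co2_kg": 3.0},
    "disposal":                {"energy_mj":  2.0, "co2_kg": 0.4},
}

def totals(inv):
    """Aggregate each flow across all life-cycle stages."""
    out = {}
    for stage in inv.values():
        for flow, amount in stage.items():
            out[flow] = out.get(flow, 0.0) + amount
    return out

def hotspots(inv, flow):
    """Rank stages by their share of one flow -- where redesign pays off most."""
    return sorted(inv, key=lambda stage: inv[stage][flow], reverse=True)

print(totals(inventory))                 # {'energy_mj': 97.0, 'co2_kg': 7.0}
print(hotspots(inventory, "co2_kg")[0])  # 'use'
```

Even this toy accounting shows why boundary-setting is contested: adding or dropping a stage changes both the totals and which stage looks like the hotspot.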
The process of conducting an LCA often reveals sources of waste and opportunities for redesign that would otherwise remain unnoticed. As Massachusetts Institute of Technology professor and author John Ehrenfeld points out, “Simply invoking the idea of a life cycle sets the broad dimensions of the framework for whatever follows and, at this current stage in ecological thinking, tends to expand the boundaries of the actors’ environmental world.”John Ehrenfeld, “The Importance of LCAs—Warts and All,” Journal of Industrial Ecology 1, no. 2 (1997): 46. LCAs can be used not only as a tool during the design phase to identify environmental hotspots in need of attention but also as a tool to evaluate existing products and processes. LCA may also be used to compare products. However, one must be careful that the same LCA methodologies are used for each item compared to guarantee accurate relative results.
LCA has several limitations. The shortcomings most commonly cited include the following:
• Defining system boundaries for LCA is controversial.
• LCA is data-intensive and expensive to conduct.
• Inventory assessment alone is inadequate for meaningful comparison, yet impact assessment is fraught with scientific difficulties.
• LCA does not account for other, nonenvironmental aspects of product quality and cost.
• LCA cannot capture the dynamics of changing markets and technologies.
• LCA results may be inappropriate for use in eco-labeling.Joseph Fiksel, “Methods for Assessing and Improving Environmental Performance,” in Design for Environment: Creating Eco-Efficient Products and Processes, ed. Joseph Fiksel (New York: McGraw Hill, 1996), 113.
Concurrent Engineering
Concurrent engineering is a design philosophy that brings together the players in a product’s life cycle during the design stage. It presents an opportunity to integrate environmental protection in the design process with input from representatives across the entire product life cycle. Participants in a concurrent engineering design team include representatives of management, sales and marketing, design, research and development, manufacturing, resource management, finance, field service, customer interests, and supplier interests. The team’s goal is to improve the quality and usability of product designs, improve customer satisfaction, reduce cost, and ease the transition of the product from design to production. Definitions of concurrent engineering vary, but the key concepts include using a team to represent all aspects of the product life cycle, focusing on customer requirements, and developing production and field support systems early in the design process.Susan E. Carlson and Natasha Ter-Minassian, “Planning for Concurrent Engineering,” Medical Devices and Diagnostics Magazine, May 1996, 202–15.
While seemingly a commonsense approach to design, concurrent engineering is far from typical in industry. The traditional procedure for product design is linear, where individuals are responsible only for their specific function, and designs are passed from one functional area (e.g., manufacturing, research and development, etc.) to the next. This approach can be characterized as throwing designs “over the wall.” For example, an architect may design a building shell, such as a steel skyscraper around an elevator core, and then pass the plans to a construction engineer who has to figure out how to route the heating, ventilating, and air-conditioning ducts and other building components. This disjunction can create inefficiency. Concurrent engineering instead would consider the many services a building provides—for example, lighting, heating, cooling, and work space—and determine the most efficient ways to achieve them all from the very beginning. Concurrent engineering therefore shortens the product development cycle by increasing communication early, resulting in fewer design iterations.Susan E. Carlson and Natasha Ter-Minassian, “Planning for Concurrent Engineering,” Medical Devices and Diagnostics Magazine, May 1996, 202–15.
Companies that employ a concurrent engineering design philosophy feature empowered design teams that are open to interaction, new ideas, and differing viewpoints.Susan Carlson-Skalak, lecture to Sustainable Business class (Darden Graduate School of Business Administration, University of Virginia, Charlottesville, VA, November 17, 1997). Concurrent engineering then is an effective vehicle to implement product design frameworks such as DfE, sustainable design, and even the process-oriented tool TNS, which is not a design framework per se but can be used effectively as a guide to change decision making during design.
Design for Environment
DfE is an eco-efficiency strategy that allows a company to move beyond end-of-the-pipe and in-the-pipe concepts like pollution control and pollution prevention to a systems-based, strategic, and competitively critical approach to environmental management and protection.Braden R. Allenby, “Integrating Environment and Technology: Design for Environment,” in The Greening of Industrial Ecosystems, ed. Deanna J. Richards and Braden Allenby (Washington, DC: National Academy Press, 1994), 140–41. It is a proactive approach to environmental protection in which the entire life-cycle environmental impact of a product is considered during its design.Thomas E. Graedel, Paul Reaves Comrie, and Janine C. Sekutowski, “Green Product Design,” AT&T Technical Journal 74, no. 6 (November/December 1995): 17. DfE is intended to be a subset of the Design for X system, where X may be assembly, compliance, environment, manufacturability, material logistics and component applicability, orderability, reliability, safety and liability prevention, serviceability, and testability.Thomas E. Graedel and Braden R. Allenby, Industrial Ecology (Englewood Cliffs, NJ: Prentice Hall, 1995), 186–87. Design for an end goal allows properties necessary to achieve that goal to be integrated most efficiently into a product’s life cycle. Hence DfE, like concurrent engineering, becomes a critical tool for realizing many aspirations of frameworks, such as cradle-to-cradle, or other tools, such as green supply chains.
Within the domain of DfE are such concepts as design for disassembly, refurbishment, component recyclability, and materials recyclability. These concepts apply to reverse logistics, which allow materials to be collected, sorted, and reintegrated into the manufacturing supply stream to reduce waste. Reverse logistics become especially important for green supply chains.
DfE originated in 1992, mostly through the efforts of a few electronics firms, and is described by Joseph Fiksel as “the design of safe and eco-efficient products.”Joseph Fiksel, “Introduction,” in Design for Environment: Creating Eco-Efficient Products and Processes, ed. Joseph Fiksel (New York: McGraw Hill, 1996), 3; Joseph Fiksel, “Conceptual Principles of DFE,” in Design for Environment: Creating Eco-Efficient Products and Processes, ed. Joseph Fiksel (New York: McGraw Hill, 1996), 51. These products should minimize environmental impact, be safe, and meet or exceed all applicable regulations; be designed to be reused or recycled; reduce material and energy consumption to optimal levels; and ultimately be environmentally safe when disposed. In accomplishing this, the products should also provide a competitive advantage for a company.Bruce Paton, “Design for Environment: A Management Perspective,” in Industrial Ecology and Global Change, ed. Robert Socolow, Clinton Andrews, Frans Berkhout, and Valerie Thomas (Cambridge: Cambridge University Press, 1994), 350.
Green Supply Chain
Green supply-chain management requires that sustainability criteria be considered by every participant in a supply chain at every step, from design to material extraction, manufacture, processing, transportation, storage, use, and eventual disposal or recycling. This is a broader systems view than conventional supply-chain management, in which separate entities transform raw materials at the beginning of the chain into a product at the end, while environmental costs are borne by other companies, countries, or consumers: each link in the chain receives an input without asking about its origins and forgets about the output once it is out the door. In contrast, the green supply chain considers the entire pathway and internalizes some of these environmental costs, ultimately turning them into sources of value.
Green supply chains thus modify conventional supply chains in two significant ways: they increase sustainability and efficiency in the existing forward supply chain and add an entirely new reverse supply chain. A green supply chain encourages collaboration among members of the chain to understand and share sustainability performance standards, best practices, innovations, and technology while the product moves through the chain. It also seeks to reduce waste along the forward supply chain and to reduce and ideally eliminate hazardous or toxic materials, replacing them with safer ones whenever possible. Finally, through the reverse supply chain, green supply chains seek to recover materials after consumption rather than return them to the environment as waste.
Expanded reverse logistics would ultimately replace the linearity of most production methods—raw materials, processing, further conversions and modification, ultimate product, use, disposal—with a cradle-to-cradle, cyclical path or closed loop that begins with the return of used, outmoded, out of fashion, and otherwise “consumed” products. The products are either recycled and placed back into the manufacturing stream or broken down into compostable materials. The cycle is never ending as materials return in safe molecular structures to the land (taken up and used by organisms as biological nutrients) or are perpetually used within the economy as input for new products (technical nutrients). Consequently, green supply chains appear implicitly in many conceptual frameworks while drawing on various sustainability tools, such as LCA and DfE.
Companies typically funnel spent items from consumers into the reverse supply chain by either leasing their products or providing collection points or other means to recover the items once their service life has ended.Shad Dowlatshahi, “Developing a Theory of Reverse Logistics,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 143–55. Once collected, whether by the original manufacturer or a third party, the products can be inspected and sorted. Some items may return quickly to the supply chain with only minimal repair or replacement of certain components, whereas other products may need to be disassembled, remanufactured, or cannibalized for salvageable parts while the remnant is recycled or sent to a landfill or incinerator.
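The inspect-and-sort step just described is, in effect, a triage rule. A sketch of that logic in Python, with illustrative condition categories rather than any industry-standard classification:

```python
def disposition(item):
    """Route a returned item through the reverse supply chain.

    `item` is a dict holding an inspected 'condition' and whether salvageable
    parts remain; both categories are illustrative, not an industry standard.
    """
    if item["condition"] == "good":
        return "minor repair, return to supply chain"
    if item["condition"] == "repairable":
        return "disassemble and remanufacture"
    if item.get("salvageable_parts"):
        return "cannibalize parts, recycle remnant"
    return "recycle or dispose (landfill/incinerator)"

# A batch of collected returns flows through the same triage.
returns = [
    {"condition": "good"},
    {"condition": "worn", "salvageable_parts": True},
    {"condition": "worn", "salvageable_parts": False},
]
for item in returns:
    print(disposition(item))
```

The ordering of the checks encodes the cradle-to-cradle preference: keep products whole when possible, recover components next, and treat material recycling or disposal as the last resort.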
Concern for green supply-chain topics emerged in the 1990s as globalization and outsourcing made supply networks increasingly complex and diverse while new laws and consumer expectations demanded companies take more responsibility for their product across the product’s entire life.Jonathan D. Linton, Robert Klassen, and Vaidyanathan Jayaraman, “Sustainable Supply Chains: An Introduction,” Journal of Operations Management 25, no. 6 (November 2007): 1075–82; National Environmental Education and Training Foundation, Going Green Upstream: The Promise of Supplier Environmental Management (Washington, DC: National Environmental Education and Training Foundation, 2001). The green supply chain responds to these complex interacting systems to reduce waste, mitigate legal and environmental risks, reduce adverse health impacts throughout the value added process, improve the reputations of companies and their products, and enable compliance with increasingly stringent regulations and societal expectations. Thus green supply chains can boost efficiency, value, and access to markets, which then boost a company’s environmental, social, and economic performance.
Carbon Footprint Analysis
Carbon footprint analysis is a tool that organizations can use to measure direct and indirect emissions of greenhouse gases associated with their provision of goods and services. Carbon footprint analysis is also known as a greenhouse gas inventory, while greenhouse gas accounting describes the general practice of measuring corporate greenhouse gas emissions. The measurement of greenhouse gas emissions (1) allows voluntary disclosure of data to organizations such as the Carbon Disclosure Project, (2) facilitates participation in mandatory emissions regulatory systems such as the Regional Greenhouse Gas Initiative, and (3) encourages the collection of key operational data that can be used to implement business improvement projects.
Similar to generally accepted accounting principles in the financial world, a set of standards and principles has emerged that guide data collection and reporting in this new area. In general, companies and individuals calculate their corporate emissions footprint for a twelve-month period. They are also increasingly calculating the footprint of individual products, services, events, and so forth. Established guidelines for greenhouse gas accounting, such as the Greenhouse Gas Protocol, define the scope and methodology of the footprint calculation.
The Greenhouse Gas Protocol, one commonly accepted methodology, is an ongoing initiative of the World Resources Institute and the World Business Council for Sustainable Development.The Greenhouse Gas Protocol Initiative, “About the GHG Protocol,” accessed July 2, 2009, http://www.ghgprotocol.org/about-ghgp. The Greenhouse Gas Protocol explains how to do the following:
1. Determine organizational boundaries. Corporate structures are complex and include wholly owned operations, joint ventures, and other entities. The protocol helps managers define which elements compose the “company” for emissions quantification.
2. Determine operational boundaries. Once managers identify which branches of the organization are to be included, they must identify and evaluate which specific emissions sources will be included.
3. Identify indirect sources. Sources that are not directly owned or controlled by the company but that are nonetheless influenced by its actions are called indirect sources, for instance, electricity purchased from utilities that produce indirect emissions at the power plant or emissions from employee commuting, suppliers’ activities, and so forth.
4. Track emissions over time. Companies must select a “base year” against which future emissions will be measured, establish an accounting cycle, and determine other aspects of how they will track emissions over time.
5. Collect data and calculate emissions. The protocol provides specific guidance about how to collect source data and calculate emissions of greenhouse gases. As a rule of thumb, the amount of energy consumed is multiplied by a series of source-specific “emissions factors” to estimate the quantity of each greenhouse gas produced by the source. Because multiple greenhouse gases are measured in the inventory process, the emissions for each type of gas are then multiplied by a “global warming potential” (GWP) to generate a “CO2 equivalent” to facilitate streamlined reporting of a single emissions number. CO2 is the base because it is the most abundant greenhouse gas and also the least potent one.United Nations Framework Convention on Climate Change, “GHG Data: Global Warming Potentials,” accessed July 2, 2009, http://unfccc.int/ghg_data/items/3825.php. For instance, over a century, methane would cause over twenty times more warming than an equal mass of CO2:
Total emissions in \(CO_2eq = \sum (\text{fuel consumed} \times \text{fuel emissions factor} \times \text{GWP})\).
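The rule of thumb above can be sketched in a few lines of code. The fuel quantities, emissions factors, and the methane GWP of 21 below are illustrative placeholder values for this sketch, not official protocol figures:

```python
# Sketch of the Greenhouse Gas Protocol arithmetic:
# total CO2-equivalent = sum over sources of
#   fuel consumed x fuel emissions factor x global warming potential (GWP).
# GWP of CO2 is 1 by definition; methane's 100-year GWP of 21 here is an
# illustrative value consistent with "over twenty times" in the text.

sources = [
    {"name": "boiler (CO2)", "fuel_units": 1000.0, "factor": 2.3, "gwp": 1.0},
    {"name": "fleet (CO2)",  "fuel_units": 500.0,  "factor": 2.7, "gwp": 1.0},
    {"name": "leaks (CH4)",  "fuel_units": 10.0,   "factor": 1.0, "gwp": 21.0},
]

def total_co2eq(sources):
    """Sum fuel consumed x emissions factor x GWP across all sources."""
    return sum(s["fuel_units"] * s["factor"] * s["gwp"] for s in sources)

print(total_co2eq(sources))  # 2300 + 1350 + 210 = 3860.0
```

In practice the emissions factors come from published source-specific tables and the GWP values from standard references such as the UNFCCC data cited above; the same loop scales unchanged from one facility to thousands.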
The method for calculating emissions from a single facility or vehicle is the same as that for calculating emissions for thousands of retail stores or long-haul trucks; hence, quantifying the emissions of a Fortune 500 firm or a small employee-owned business involves the same process.
Companies can reduce their carbon footprint by reducing emissions or acquiring “offsets,” actions taken by an organization or individual to counterbalance the emissions, by either preventing emissions somewhere else or removing CO2 from the air, such as by planting trees. Offsets are traded in both regulated (i.e., government-mandated) and unregulated (i.e., voluntary) markets, although standards for the verification of offsets continue to evolve due to questions about the quality and validity of some products. A company can theoretically be characterized as “carbon neutral” if it causes no net emissions over a designated time period, meaning that for every unit of emissions released an equivalent unit of emissions has been offset through other reduction measures or that the company uses energy only from nonpolluting sources.
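The offset bookkeeping described above reduces, in the simplest case, to comparing gross emissions against offsets acquired over the same period. A minimal sketch, with hypothetical tonnage figures:

```python
# Simplified "carbon neutral" test from the definition above: a company
# causes no net emissions over a period if offsets counterbalance every
# unit of emissions released. Tonnage figures are hypothetical examples.

def net_emissions(gross_emissions_tons, offsets_tons):
    """Net footprint for the period after subtracting offsets."""
    return gross_emissions_tons - offsets_tons

def is_carbon_neutral(gross_emissions_tons, offsets_tons):
    """Carbon neutral when offsets fully counterbalance gross emissions."""
    return net_emissions(gross_emissions_tons, offsets_tons) <= 0

print(is_carbon_neutral(5000, 5000))  # True
print(is_carbon_neutral(5000, 3000))  # False
```

Real accounting is more involved, since offset quality and verification standards vary, as the text notes, but the period-by-period netting logic is the core of the claim.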
KEY TAKEAWAYS
• Business systems and the economy are subsystems of the biosphere.
• Businesses, including companies and supply chains with their interdependent ties to natural systems, are, like those natural systems, composed of material, energy, and information flows.
• Mutually reinforcing compatibility between business and natural systems supports prosperity while sustaining and expanding the goods and services ecosystem services provide.
• Biologically inspired business models and product designs can offer profitable paths forward.
• Constraints, rather than limiting possibilities, can open up new space for business innovation and redesign.
EXERCISES
1. Select a product you use frequently. Describe its current life cycle and component and material composition based on what you know and can determine from a short search for information. Then describe how this same product would be designed, used, and handled through the end of its life if the product’s designers used the ideas introduced in this chapter. Be specific about what concepts and tools you are applying to your analysis.
2. Explain what is meant by this quotation from Chapter 3, Section 3.4: “Eco-efficiency, an increasingly popular concept used by business to describe incremental improvements in material use and environmental impact, is only one small part of a richer and more complex web of ideas and solutions.…more efficient production by itself could become not the servant but the enemy of a durable economy.”
3. Describe the ramifications when a company’s activities are not all at the same location along the continuum of sustainability.
4. Where have you seen the sustainability design ideas discussed in this chapter applied? Write a paragraph describing your observations. What new insights have you gained through exposure to these ideas?
Learning Objectives
• Understand the constituent elements of the entrepreneurial process.
• Gain appreciation for how the elements fit together to form a whole.
In this chapter, we examine the ways in which entrepreneurial ventures combine the classic entrepreneurial process with sustainability concepts. This combination encompasses design approaches and corporate competencies that generate new offerings that achieve revenue growth and profitability while enhancing human health, supporting ecological system stability, and contributing to the vitality of local communities. This chapter shows the interconnections across sustainability, innovation, and entrepreneurship to give the reader greater understanding of a current global phenomenon: the search for new products, technologies, and ways of conducting business that will replace the old with designs intended to help solve some of society’s most challenging issues.
When products are designed and business strategies are structured around systems thinking that is associated with sustainability, the outcome, as in any system composed of interacting and interdependent parts, emerges as larger than the sum of its constituent elements. So we should keep in mind, as we dissect the entrepreneurial process into its core elements, that we do so for analytic purposes—first to understand the individual parts and then to see how they come together. Once that picture is clear, the reader will have gained new insights into what entrepreneurs active in the sustainability innovation space actually do. The Walden Paddlers case discussed in Chapter 4, Section 4.5, is a representative example of this approach.
Bear in mind that sustainability, innovation, and entrepreneurship are terms used to represent a wide range of ideas, depending on the context. However, just because they have come into common use and have been interpreted broadly does not mean they cannot be defined in focused and practical ways to help guide entrepreneurial individuals in business. Individuals and companies are, in fact, implementing sustainability designs and strategies through the use of innovative initiatives. At the present time, these three terms—sustainability, innovation, and entrepreneurship—are our best and most accurate descriptors of what is happening in the marketplace. No one term covers all the ground required. In the following sections, we examine entrepreneurial process and then discuss sustainability concepts to explain how the necessary parts merge to create a holistic picture.
Entrepreneurial activity can seem mysterious for those not familiar with the phenomenon. US culture has created heroic myths around its most famous entrepreneurs, reinforcing the idea that entrepreneurship is about individuals. As a consequence, many people believe those individuals are born entrepreneurs. In fact, it is more accurate to talk about entrepreneurship as a process. More frequently than not, a person becomes an entrepreneur because she or he is compelled to pursue a market opportunity. Through that activity—that process—entrepreneurship unfolds. A typical story of entrepreneurship is one in which the entrepreneur is influenced by his or her engagement with favorable conditions, circumstances in which an idea comes together successfully with a market opportunity. An individual has an idea or sees a problem needing a solution and generates a way to meet that need. A new venture is initiated and, if successful, an ongoing business created. Thus entrepreneurship—the creation of new ventures as either new companies or initiatives within larger organizations—is about the process of individuals coming together with opportunities, resulting in specific customers being provided with new goods and services.
For purposes of this discussion, entrepreneurship is not constrained to starting a company. While that definition is commonly assumed, entrepreneurship and entrepreneurial innovation can occur in a variety of settings including small or large companies, nonprofit organizations, and governmental agencies. Entrepreneurship emerges under widely diverse circumstances, typically in response to new conditions and in pursuit of newly perceived opportunities. We focus here not on the average new venture set up to compete under existing rules against existing companies and delivering products or services comparable to those already in the market. Rather, our focus is on entrepreneurial innovators who forge new paths and break with accepted ways of doing business, creating new combinations that result in novel technologies, products, services, and operating practices—that is, substantial innovation.
In that regard, our approach is aligned with entrepreneurship as defined by twentieth-century economist and entrepreneurship scholar Joseph Schumpeter, who pointed out that change in societies comes as a result of innovation created by entrepreneurs. His emphasis was on innovation and the entrepreneur’s ability through innovation to generate new demand that results in significant wealth creation. Peter Drucker, a twentieth- and twenty-first-century scholar of entrepreneurship, echoed similar ideas many decades later. Entrepreneurship is innovative change through new venture creation; it is the creation of new goods and services, processes, technologies, markets, and ways of organizing that offer alternatives with the intention of better meeting people’s needs and improving their lives. Innovation encompasses the creative combination of old and novel ideas that enables individuals and organizations to offer desired alternatives and replacements for existing products and services. These innovative products and ways of doing business, typically led by independent-thinking entrepreneurial individuals, constitute the substitutions that eventually replace older products and ways of doing things. Sustainability entrepreneurship and innovation build on the basics of this accepted view of innovation and entrepreneurship and extend it to encompass life-cycle thinking, ecological rules, human health, and social equity considerations.
Understanding entrepreneurial processes and the larger industrial ecosystem at work requires that we break down the subject matter into separate pieces and then recombine them. The pieces need to be examined on their own merit and then understood in relation to one another. We start with understanding the entrepreneurial process and move to examining the elements of sustainability innovation. Each piece is a necessary, but not sufficient, part of the puzzle. By examining the pieces carefully, we can see in Chapter 4, Section 4.1, how the entrepreneurial process unfolds and in Chapter 4, Section 4.2, what entrepreneurial leaders do to integrate sustainability principles.
Bear in mind that the mental exercise required in the following discussion is useful not only as an analytic approach for entrepreneurs or investors but also to set out core business plan elements. Business plans require elaboration on the market opportunity, a thorough understanding of what the entrepreneur brings to the business and the qualifications of the management team, and a clear articulation of the product or service offered and why a customer would purchase it. The business plan also must discuss the resources needed to launch the business and the market entry strategy proposed to establish early sales, lock in reliable suppliers, and provide a platform for growth. Thus learning and applying the analytical steps discussed in this section has direct synergies with writing a business plan.
Analysis of Entrepreneurial Process
Successful entrepreneurship occurs when creative individuals bring together a new way of meeting needs and a market opportunity. This is accomplished through a patterned process, one that mobilizes and directs resources to deliver a specific product or service to customers using a market entry strategy that shows investors financial promise of building enduring revenue and profitability streams. Sustainability adds to the design of a product and operations by applying the criteria of reaching toward benign (or at least considerably safer) energy and material use, a reduced resource footprint, and elimination of inequitable social impacts due to the venture’s operations, including its supply-chain impacts.
Entrepreneurial innovation combined with sustainability principles can be broken down into the following five key pieces for analysis. Each one needs to be analyzed separately, and then the constellation of factors must fit together into a coherent whole. These five pieces are as follows:
• Opportunity
• Entrepreneur/team
• Product concept
• Resources
• Entry strategy
Successful ventures are characterized by coherence or “fit” across these pieces. The interests and skills of the entrepreneur must fit with the product design and offering; the team’s qualifications should match the required knowledge needed to launch the venture. The market opportunity must fit with the product concept in that there must be demand in the market for the product or service, and of course, early customers (those willing to purchase) have to be identified. Finally, sufficient resources, including financial resources (e.g., operating capital), office space, equipment, production facilities, components, materials, and expertise, must be identified and brought to bear. Each piece is discussed in more detail in the sections that follow.
The Opportunity
The opportunity is a chance to engage in trades with customers that satisfy their desires while generating returns that enable you to continue to operate and to build your business over time. Many different conditions in society can create opportunities for new goods and services. As a prospective entrepreneur, the key questions are as follows:
• What are the conditions that have created a marketplace opportunity for my idea?
• Why do people want and need something new at this point in time?
• What are the factors that have opened up the opportunity?
• Will the opportunity be enduring, or is it a window that is open today but likely to close tomorrow?
• If you perceive an unmet need, can you deliver what the customer wants while generating durable margins and profits?
Sustainability considerations push this analysis further, asking how you can meet the market need with the smallest ecological footprint possible. Ideally, this need is met through material and energy choices that enhance natural systems; such systems include healthy human bodies and communities as well as environmental systems. Sustainability considerations include reducing negative impact as well as working to improve the larger system outcomes whenever and wherever financially possible. Let us examine the different pieces separately before we try to put them all together. The Walden Paddlers case in Chapter 4, Section 4.5, provides a company example to apply these concepts in their entirety.
Opportunity conditions arise from a variety of sources. At a broad societal level, they are present as the result of forces such as shifting demographics, changes in knowledge and understanding due to scientific advances, a rebalancing or imbalance of political winds, or changing attitudes and norms that give rise to new needs. These macroforces constantly open up new opportunities for entrepreneurs. Demographic changes will dictate the expansion or contraction of market segments. For example, aging populations in industrialized countries need different products and services to meet their daily requirements, particularly if the trend to stay in their homes continues. Younger populations in emerging economies want products to meet a very different set of material needs and interests. Features for cell phones, advanced laptop computer designs, gaming software, and other entertainment delivery technologies are higher priorities to this demographic group.
Related to sustainability concerns, certain demographic shifts and pollution challenges create opportunities. With 50 percent of the world’s population living in urban areas for the first time in history, city air quality improvement methods present opportunities. Furthermore, toxicological science tells us that industrial chemicals ingested by breathing polluted air, drinking unclean water, and eating microscopically contaminated food pass through the placenta into growing fetuses. We did not have this information ten years ago, but monitoring and detection technologies have improved significantly over a short time frame and such new information creates opportunities. When you combine enhanced public focus on health and wellness, advanced water treatment methods, clean combustion technologies, renewable “clean” energy sources, conversion of used packaging into new asset streams, benign chemical compounds for industrial processes, and locally and sustainably grown organic food, you begin to see the wide range of opportunities that exist due to macrotrends.
When we speak of an opportunity, we mean the chance to satisfy a specific need for a customer. The customer has a problem that needs an answer or a solution. The opportunity first presents itself when the entrepreneur sees a way to innovatively solve that problem better than existing choices do and at a comparable price. Assuming there are many buyers who have the same problem and would purchase the solution offered, the opportunity becomes a true business and market opportunity. When opportunities are of a sufficient scale (in other words, enough customers can be attracted quickly), and revenues will cover your costs and promise in the near term to offer excess revenue after initial start-up investment expenditures are repaid, then you have a legitimate economic opportunity in the marketplace.
It is important to understand that ideas for businesses are not always actual opportunities; unless suppliers are available and customers can be identified and tapped, the ideas may not develop into opportunities. Furthermore, an opportunity has multiple dimensions that must be considered including its duration, the size of the targeted market segment, pricing options that enable you to cover expenses, and so forth. These dimensions must be explored and analyzed as rigorously as possible. While business plans can serve multiple purposes, the first and most important reason for writing a business plan is to test whether an idea is truly an economically promising market opportunity.
The Entrepreneur
The opportunity and the entrepreneur must be intertwined in a way that optimizes the probability for success. People often become entrepreneurs when they see an opportunity. They are compelled to start a venture to find out whether they can convert that opportunity into an ongoing business. That means that, ideally, the entrepreneur’s life experience, education, skills, work exposure, and network of contacts align well with the opportunity.
However, before we talk about alignment, which is our ultimate destination, we look at the entrepreneur. Consider the individual entrepreneur as a distinct analytic category by considering the following questions:
• Who is this person?
• What does this person bring to the table?
• What education, skills, and expertise does this person possess?
Like the opportunity, the entrepreneur can be broken down into components. This analysis is essential to understanding the entrepreneur’s commitment and motivations. Analysis of the entrepreneur also indicates the appropriateness of the individual’s capacities to execute on a given business plan. The components are as follows:
• Values. What motivates the individual? What does he or she care enough about to devote the time required to create a new venture?
• Education. What training has the individual received, what level of formal education, and how relevant is it to the tasks the venture requires to successfully launch?
• Work experience. Formal education may be less relevant than work experience. What prior jobs has the individual held, and what responsibilities did he have? How did he perform in those positions? What has he learned?
• Life experience. What exposure to life’s diversity has the individual had that might strengthen (or weaken) her competencies for building a viable business?
• Networks. What relationships does the individual bring to the venture? Have her prior experiences enabled her to be familiar and comfortable with a diverse mix of people and institutions so that she is able to call upon relevant outside resources that might assist with the venture’s execution?
If any one category could claim dominance in shaping the outcome of an innovative venture, it is that of the entrepreneur. This is because investors invest in people first and foremost. A good business plan, an interesting product idea, and a promising opportunity are all positive, but in the end it is the ability of the entrepreneur to attract a team, get a product out, and sell it to customers that counts. While management teams must be recruited relatively quickly, typically there is an individual who initially drives the process through his or her ability to mobilize resources and sometimes through sheer force of will, hard work, and determination to succeed. In challenging times it is the entrepreneur’s vision and leadership abilities that can carry the day.
Ultimately, led by the entrepreneur, a team forms. As the business grows, the team becomes the key factor. The entrepreneur’s skills, education, capabilities, and weaknesses must be augmented and complemented by the competencies of the team members he or she brings to the project. The following are important questions to ask:
• Does the team as a unit have the background, skills, and understanding of the opportunity to overcome obstacles?
• Can the team act as a collaborative unit with strong decision-making ability under fluid conditions?
• Can the team deal with conflict and disagreement as a normal and healthy aspect of working through complex decisions under ambiguity?
If a business has been established and the team has not yet been formed, these questions will be useful to help you understand what configuration of people might compose an effective team to carry the business through its early evolutionary stages.
Resources
Successful entrepreneurial processes require entrepreneurs and teams to mobilize a wide array of resources quickly and efficiently. All innovative and entrepreneurial ventures combine specific resources such as capital, talent and know-how (e.g., accountants, lawyers), equipment, and production facilities. Breaking down a venture’s required resources into components can clarify what is needed and when it is needed. Although resource needs change during the early growth stages of a venture, at each stage the entrepreneur should be clear about the priority resources that enable or inhibit moving to the next stage of growth. What kinds of resources are needed? The following list provides guidance:
• Capital. What financial resources, in what form (e.g., equity, debt, family loans, angel capital, venture capital), are needed at the first stage? This requires an understanding of cash flow needs, break-even time frames, and other details. Back-of-the-envelope estimates must be converted to pro forma income statements to understand financial needs.
• Know-how. Record keeping and accounting and legal process and advice are essential resources that must be considered at the start of every venture. New ventures require legal incorporation, financial record keeping, and rudimentary systems. Resources to provide for these expenses must be built into the budget.
• Facilities, equipment, and transport. Does the venture need office space, production facilities, special equipment, or transportation? At the early stage of analysis, ownership of these resources does not need to be determined. The resource requirement, however, must be identified. Arrangements for leasing or owning, vendor negotiations, truck or rail transport, or temporary rental solutions are all decision options depending on the product or service provided. However, to start and launch the venture, the resources must be articulated and preliminary costs attached to them.
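The “Capital” questions above ultimately feed a break-even estimate before any pro forma statements are built. A back-of-the-envelope sketch, in which every figure is a hypothetical placeholder a founder would replace with real estimates:

```python
# Back-of-the-envelope break-even estimate for a new venture's capital
# planning. All numbers below are hypothetical illustrations.

fixed_costs_per_month = 20_000.0   # e.g., rent, salaries, legal/accounting
price_per_unit = 50.0
variable_cost_per_unit = 30.0

def break_even_units(fixed_costs, price, variable_cost):
    """Units that must be sold so contribution margin covers fixed costs."""
    contribution_margin = price - variable_cost
    return fixed_costs / contribution_margin

units = break_even_units(fixed_costs_per_month, price_per_unit,
                         variable_cost_per_unit)
print(units)  # 20000 / (50 - 30) = 1000.0 units per month
```

A calculation like this only frames the cash-flow question; as the text notes, rough estimates must still be converted into pro forma income statements to understand the venture’s true financial needs.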
The Product/Service Concept
What are you selling? New ventures offer solutions to people’s problems. This concept requires you not only to examine the item or service description but also to understand what your initial customers see themselves buying. A customer has a need to be met. He or she is hungry and needs food. Food solves the problem. Another customer faces the problem of transferring money electronically and needs an efficient solution, a service that satisfies the need. Automatic teller machines are developed and services are offered. Other buyers want electricity from a renewable energy source; their problem is that they want their monthly payments to encourage clean energy development, not fossil fuel–based electricity. In any of these situations, in any entrepreneurial innovation circumstance in fact, as the entrepreneur you must ask the following questions:
• What is the solution for which you want someone to pay?
• Is it a service or product, or some combination?
• To whom are you selling it? Is the buyer the actual user? Who makes the purchase decision?
• What is the customer’s problem and how does your service or product address it?
Understanding what you are selling is not as obvious as it might sound. When you sell an electric vehicle you are not just selling transportation. The buyer is buying a package of attributes that might include cutting-edge technology, lower operating costs, and perhaps the satisfaction of being part of a solution to health, environmental, and energy security problems.
Entry Strategy
Another category to examine carefully at the outset of a venture is market entry strategy. Your goal is to create something where nothing previously existed. Mobilizing resources, analyzing your opportunity, producing your first products for sale—none of these proves the viability of your business. Only by selling to customers and collecting the payments, expanding from those earliest buyers to a broader customer base, and scaling up to sufficient revenue streams to break even and then profit do you prove the enduring viability of the enterprise. Even then, a one-product operation is not a successful business; it is too vulnerable. A successful entrepreneur should consider additional products or services. Living through the early stages of a venture educates you about the customer and market and can point you to new opportunities you were unable to see previously. Your product concept at the end of year two may be, and often is, different from your original vision and intent.
The process of entrepreneurship melds these pieces together in processes that unfold over weeks and months, and eventually years, if the business is successful. Breaking down the process into categories and components helps you understand the pieces and how they fit together. What we find in retrospect with successful launches is a cohesive fit among the parts. The entrepreneur’s skills and education match what the start-up needs. The opportunity can be optimally explored with the team and resources that are identified and mobilized. The resources must be brought to bear to launch the venture with an entry strategy that delivers the product or service that solves customers’ problems. Disparities among these core elements are signs of trouble. If your product launch requires engineering and information technology expertise and your team has no one with that knowledge, your team does not “fit” with the product. If you launch the product and have insufficient funds to sustain operations, perhaps you did not adequately calculate the capital resources required to reach the break-even point. Each category must be analyzed and thoroughly understood and all puzzle pieces joined to create the integrated picture required for financial success. In Chapter 4, Section 4.2, we will look at the core elements of sustainability innovation.
KEY TAKEAWAYS
• Entrepreneurship is the creation of new ways of meeting needs through novel products, processes, services, technologies, markets, and forms of organizing.
• Entrepreneurial ventures can be start-ups or occur within large companies.
• Entrepreneurship is an innovation process that mobilizes people and resources.
• Key to entrepreneurial success is the fit among the entrepreneur/team, the product concept, the opportunity, the resources, and the entry strategy.
EXERCISE
• In small teams, identify a successful entrepreneurial venture in your community and interview the entrepreneur or members of the management team. Define and describe the key elements of the entrepreneurial process for this enterprise. Analyze the fit between the entrepreneurial founder and the product or service, the fit between the product and the opportunity, and the fit between the resources and the entry strategy.
Learning Objectives
1. Understand the elements of sustainability innovation.
2. Explain how they can apply to existing companies and new ventures.
In this section, we discuss the ways in which entrepreneurial organizations integrate sustainability ideas into their ventures. Five core elements are necessary—systems thinking, molecular thinking, leveraging weak ties, collaborative adaptation, and radical incrementalism. Each contributes to innovation by opening up new vistas for creativity. For example, systems thinking allows participants to see previously hidden linkages and opportunities within a broader context. Molecular thinking initiates possibilities for innovation through substitution of more benign materials. The use of outside ties contributes novel perspectives and information to the decision process. Collaboration across functional and organizational boundaries helps generate new solutions. Radical incrementalism leads to system-wide innovation. Each of the core elements will be discussed and illustrated with examples.
Systems Thinking
Perhaps the most fundamentally distinctive feature of those engaged with sustainability innovation is the notion of systems thinking. Systems thinking does not mean “systems analysis,” which implies a more formal, mathematical tool. Nor is systems thinking one-dimensional, as we shall see. Systems thinking is best illustrated by contrasting it to linear thinking, the approach historically associated with business decision making. Linear thinking assumes businesses create and sell, each business focusing on its own operations. Supplier or customer activities are relevant only to the extent that understanding them can generate greater sales and profitability. This linear approach frames business activity as making and selling products that customers use and throw away. Therefore, conventional linear thinking in business ignores consideration of the product’s origins; the raw materials and labor input to make it; and the chemical, engineering, and energy-consuming processes required to convert raw materials into constituent components and the ultimate finished product. In addition, it does not consider the effects of the product’s use and the impacts when it is discarded at the end of its useful life.
In contrast, systems thinking applied to new ventures reminds us that companies operate in complex sets of interlocking living and nonliving systems, including markets and supply chains as well as natural systems. These natural systems can range from the atmosphere, to a wetlands area, to a child’s immune system. Bear in mind that systems thinking can be applied to new ventures whether the firm sells products or provides services. If the venture is a service business, conventional business thinking can obscure the fact that service delivery involves information technology including hardware, software, servers, and energy use (heating and cooling). Service businesses may use office buildings and have employees who travel daily to the office and deliver services using truck fleets. Thus service businesses and their related supply chains also can benefit from the application of sustainability thinking and systems thinking. In sum, every venture rests within and is increasingly buffeted by shifts in natural and commercial systems that may be influenced through the direct or indirect reach of its activities.
Taking a systems perspective reminds us that we are accustomed to thinking of businesses in terms of discrete units with clear boundaries between them. We forget that these boundaries exist primarily in our minds or as legal constructs. For example, we may view a venture or company as a discrete entity. By extension we perceive a boundary between the firm and its suppliers and customers. Yet research suggests that the most successful business innovations arise from activities that cross category boundaries. Thus if one’s mental map imposes boundaries, options may be unnecessarily constrained. In fact, given the dominance of linear thinking in business, systems thinking can give you an advantage over your more narrowly focused competitor. Your linear-oriented competitor may target incremental improvements to existing processes and shortchange research and development investments in longer-term goals—and then be surprised by unanticipated innovations in the industry. However, because you perceive the larger systems in which the venture is embedded, you can anticipate opportunities and be poised to act. Not only does the broader systems view lead to more opportunities, enabling you to adapt your competencies, it also holds the potential of producing outcomes that better serve the needs of customers and employees, your community, and your shareholders.
In other words, systems thinking asks you to see the larger picture, which, in turn, opens up new opportunity space. Let’s look more closely at the systems view through an analogy. When you imagine a river, what do you see? A winding line on a map? A favorite fishing spot? Or the tumbling, rushing water itself? Do you include the wetlands and their wildlife, visible and microscopic? Do you see the human communities along the water? Do you see the ultimate end points of the water flows—the estuaries, deltas, and the sea? Do you include the water cycle, from the ocean, through evaporation, to the rain in the mountains that regenerates the headwaters of the river? In other words, do you see the river as its component parts or as an integrated living system?
Sustainability applied to new ventures incorporates systems thinking. If you think only about the fish or the single stream, you miss what makes the river alive; you miss what it feeds and what feeds it. Similarly, your venture is part of a set of interlocking and interdependent systems characterized by suppliers and buyers as well as by energy and material flows. The more you are aware of these systems and their relationships to your company, the more rigor you bring to product design and strategy development and the more sophisticated your analysis of how to move forward.
Another advantage of systems thinking is its invitation to jettison outdated ideas about the environment. The environment has, in the past, been considered “out there” somewhere, separate and apart from people and businesses. In reality, the environment is not external to business. Indeed, it is coming to comprise an integral new set of competitive factors that shape options and opportunities for entrepreneurs and firms. For ventures to successfully launch and grow in the twenty-first century, it is essential to understand this more expansive systems definition of the new competitive conditions.
When systems thinking guides strategy and action, the collision between business and natural systems becomes a frontier of opportunity. Systems thinking can encourage and institutionalize the natural ability of companies to evolve—not through small adaptations but through creative leaps. The companies discussed in this section demonstrate these tactics in action. For example, AT&T shows how a company can work from a systems view to optimize benefits across multiple systems. Shaw Industries underwent a profound strategic reorientation when it redesigned its products—carpets—not in the traditional linear make-use-waste model but in a new circular strategy. Shaw now takes back carpets at the end of their use life, disassembles them, and remanufactures them as new carpets. This is a radical rethinking of the value of a product. Coastwide Lab offers an example of a systems view that helped a smaller company generate systems solutions for customers, not just products. All three sustainability-inspired strategies indicate a stepped-up understanding of the broader systems in which the business operates. Systems thinking allowed each company to recognize new opportunities in its competitive terrain and to act on them in innovative ways that greatly improved its competitive position.
AT&T
In hindsight, it seems obvious that AT&T, a telecommunications company, should be an early advocate of its employees telecommuting to work. At the outset, however, there was more doubt than confidence in the telecommuting arrangement. Yet it was soon shown that AT&T’s innovative policy—grounded in systems and sustainability thinking—resulted in productivity growth, lower overhead costs, greater employee retention, reduced air pollution, lower gasoline and thus oil consumption, and more satisfied employees.
Was telecommuting an environmental policy because it reduced pollution, a cost-cutting measure because overhead and real estate costs dropped, or a national security measure because it lowered oil consumption? Perhaps it was a human resources initiative since it resulted in more satisfied employees. All of these descriptions are accurate, yet no single measure fully captures the systemic nature and benefits of this sustainability approach to rethinking work. AT&T’s telecommuting policy is an example of systems thinking. Between 1998 and 2004, then AT&T vice president Braden Allenby led the telecommuting initiative.Braden R. Allenby is currently professor of civil and environmental engineering and professor of law at Arizona State University and moved from his position as the environment, health, and safety vice president for AT&T in 2004. His systems thinking comes naturally as author of multiple publications on industrial ecology, design for the environment, and earth systems engineering and management. His coediting of The Greening of Industrial Ecosystems, published by the National Academy Press in 1994, and his authorship of Environmental Threats and National Security: An International Challenge to Science and Technology, published by Lawrence Livermore National Laboratory in 1994, and Information Systems and the Environment, published by the National Academy Press in 2001, also enhanced his ability to see natural systems as integral to corporate strategy. With his systems focus on linkages and interdependencies rather than emphasis on discrete units, Allenby looked to inputs and outputs, processes, and feedback, taking into consideration multiple viewpoints within and outside AT&T. Over time, analysis pinpointed a cross-cutting convergence of factors that, when targeted for optimization, produced positive benefits across the system of AT&T’s financial performance, employees, communities, and air pollution emissions.
New questions were asked. What is the relationship among working at home, spending hours in a car, spending time at a remote AT&T site, and productivity? What gasoline volumes, carbon dioxide (CO2) levels, greenhouse gas emissions, and dollar savings for AT&T are involved when telecommuting is an option for managers? If there are benefits for certain employees and the company, what about extending the policy to other employees? What is AT&T’s contribution to urban vehicle congestion, and can a telecommuting program help reduce gasoline use in a way that reduces oil dependency while benefiting towns, employees, and the firm? We know intuitively that these factors are interrelated, but it is unusual for a senior corporate executive to examine them from a strategic perspective. In this case, the telecommuting policy saved the company millions of dollars while raising productivity and enhancing AT&T’s reputation. Sustainability strategies will always be tailored to a venture’s unique competencies and circumstances; it will grow organically from the business you are in, the products you make, and the employees you hire.
Braden Allenby is a trained systems thinker and has contributed extensive writings on industrial ecology. Allenby saw the opportunity for telecommuting to reduce costs for AT&T and reduce pollution while raising employee productivity and satisfaction. As the environment, health, and safety vice president at AT&T, Allenby took the strategic view as opposed to the compliance perspective prescribed for many environment, health, and safety office heads. By the late 1990s AT&T had moved out of manufacturing. The key to the company’s success became service, and the key to high-quality service was application of in-house technology know-how by productive, satisfied employees.
Allenby quietly and successfully promoted telecommuting within the firm for over ten years, despite opposition. It helped that the program was not seen as a conventional “environmental” one that some might have assumed imposed irretrievable overhead costs. Inevitable resistance included the usual institutional inertia against change but also managers’ and employees’ discomfort with unfamiliar telecommuting job structures and loss of easy metrics for productivity. “Time at desk” was still equated with individual productivity as though the assembly line mentality of “if I don’t see you working, you probably aren’t working” held firm in the twenty-first-century information-age economy. In addition, many questioned how telecommuting relates to environment, health, and safety. Furthermore, weak technology, such as limited home computer bandwidth, and a shortage of individuals willing to lead slowed the process.
Despite obstacles, over time significant benefits were returned to AT&T as well as to its employees, their families, and their communities. Real estate overhead costs decreased (offices could be closed down) while productivity and job satisfaction increased according to the company’s Telework Center of Excellence studies.Joseph Roitz, Binny Nanavati, and George Levy, Lessons Learned from the Network-Centric Organization: 2004 AT&T Employee Telework Results (Bedminster, NJ: AT&T Telework Center of Excellence, 2005). Brad Allenby provided me with this source. Survey results showed that not having to commute and gaining uninterrupted time to concentrate increased each telecommuter’s workday by one additional productive hour, translating to an approximate 12.5 percent productivity increase. Upgrades to communication technology enabled easier phone messaging through personal computers and saved about one hour per week, an approximate 2.5 percent increase in telecommuters’ productivity.
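The percentages reported in the AT&T survey follow from simple arithmetic. As a quick sketch, the figures below assume a baseline 8-hour workday and 40-hour workweek, which the report does not state explicitly:

```python
# Back-of-the-envelope check of AT&T's reported productivity figures.
# Assumed baselines (not stated in the source): 8-hour workday, 40-hour workweek.

WORKDAY_HOURS = 8
WORKWEEK_HOURS = 40

# One extra productive hour per day from eliminating the commute
commute_gain = 1 / WORKDAY_HOURS    # 0.125

# One hour saved per week from PC-based phone messaging
messaging_gain = 1 / WORKWEEK_HOURS  # 0.025

print(f"Commute elimination: {commute_gain:.1%} productivity gain")  # 12.5%
print(f"Messaging upgrade: {messaging_gain:.1%} productivity gain")  # 2.5%
```

Under those assumed baselines, the computed gains match the approximate 12.5 percent and 2.5 percent figures in the survey results.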
The program expanded rapidly as financial and other advantages proved the efficacy of telecommuting. About 35,000 AT&T management employees were full-time telecommuters in 2002, representing 10 percent of the workforce. By 2004 that number had expanded to 30 percent. Another 41 percent worked from home one to two days a week. Detailed records were kept on the telecommuting program’s benefits and costs. Records included the number of employees who telecommuted and how many days they telecommuted per month, whether on the road, at home, or in a telecenter or satellite office. An annual survey provided the quantitative data and subjective elements of participation, such as employee perceptions of the personal and professional benefits.
Important results relevant for other companies were described in the AT&T report: “Work/family balance and improved productivity remain the top-tier benefits. Typically, these two things are seen as mutually exclusive—spending more time with one’s family while simultaneously getting more work done would seem to be impossible—but teleworkers are able to have their cake and eat it, too.”Joseph Roitz, Binny Nanavati, and George Levy, Lessons Learned from the Network-Centric Organization: 2004 AT&T Employee Telework Results (Bedminster, NJ: AT&T Telework Center of Excellence, 2005). Brad Allenby provided me with this source. Feedback on disadvantages of telework was recorded and used to adjust the program optimally.
The positive externalities reported were reduced use of fossil fuel resources, reduced vehicular air pollution, reduced contribution to greenhouse gases and global climate change, reduced runoff of automobile fluids, and decreased air deposition of nitrogen oxides (NOx) that lead to water pollution. AT&T estimated that “since one gallon of gasoline produces 19 lbs. of carbon dioxide (CO2), the 5.1 million gallons of gas our employee teleworkers didn’t use in 2000 (by avoiding 110 million miles of driving by telecommuting) equate to almost 50,000 tons of CO2. Similar benefits result from reductions in NOx and hydrocarbons.”Braden Allenby, “Telework: The AT&T Experience” (testimony before the House Subcommittee on Technology and Procurement Policy, Washington, DC, March 22, 2001), accessed December 2, 2010, www.fluxx.net/toronto/soc4.html. Reduced emissions may provide AT&T with assets in the form of emission credits to be used as internal offsets or sold at market price.
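AT&T’s emissions estimate can be verified directly from its stated conversion factor. The sketch below assumes “tons” means US short tons (2,000 lbs), which the source does not specify:

```python
# Reproducing AT&T's CO2 estimate from its stated figures:
# 1 gallon of gasoline produces 19 lbs of CO2; 5.1 million gallons avoided.
# Assumption: "tons" are US short tons (2,000 lbs each).

LBS_CO2_PER_GALLON = 19
GALLONS_AVOIDED = 5_100_000
LBS_PER_SHORT_TON = 2_000

co2_lbs = GALLONS_AVOIDED * LBS_CO2_PER_GALLON
co2_tons = co2_lbs / LBS_PER_SHORT_TON

print(f"{co2_tons:,.0f} short tons of CO2 avoided")  # 48,450
```

The result, 48,450 short tons, is consistent with the “almost 50,000 tons” cited in Allenby’s testimony.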
Results of the telecommuting policy included the following:
• Reduced costs for real estate and overhead.AT&T estimated savings of \$75 million a year when it first changed its policies to make salespeople and consultants mobile. Jennifer Bresnahan, “Why Telework?,” CIO Enterprise 11, no. 7 (January 15, 1998): 28–34.
• Employee productivity gains: AT&T estimated that increased productivity due to telework was worth \$100 million a year. Eighty percent of employees surveyed said the change had improved their productivity.
• Improved employee quality of life and morale: Eliminating the stress and wasted time of commuting contributed to productivity.
• Employee retention and related cost savings: AT&T employees turned down other job offers in part because of the telecommuting options they enjoyed.
• Appropriate management metrics: AT&T accelerated a transition from time-at-desk management to management by results and, more broadly, learned how to effectively manage knowledge workers in a rapidly changing, increasingly knowledge-based economy (seen as a competitive advantage).
• Security: After the 9/11 attacks on the World Trade Center and the Pentagon, a more dispersed workforce was viewed as a way to increase institutional resiliency and limit the impact of an attack (or for that matter any disaster, natural or otherwise).
As the AT&T example shows, when systems thinking guides strategy and action, the collision between business and natural systems can become a frontier of opportunity. Systems thinking can encourage and institutionalize the natural ability of companies to evolve, not through small adaptations but through creative leaps.
Shaw Industries
Shaw Industries underwent a profound strategic reorientation and redesigned its products—carpets—not in the traditional linear make-use-waste model but in a sustainability-inspired circular strategy. Shaw now takes back products at the end of their useful life, disassembles them, and remanufactures them as new carpets. This is a radical rethinking of the value of a product using systems terms.
In 2003, Shaw’s EcoWorx product won the US Green Chemistry Institute’s Green Chemistry Award for Designing Safer Products. The company combined application of green chemistry principles with a cradle-to-cradle design approach to create new environmentally benign carpet tile.Shaw Industries worked with William McDonough and Michael Braungart, an architect and chemist who conceived the cradle-to-cradle design approach that considers the ultimate end of products from the very beginning of their design in order to reduce waste and toxicity. The product met the rising demand for sustainable products, helping define a new market space that emerged in the late 1990s and 2000s as buyers became more cognizant of health hazards associated with building materials and furnishings. EcoWorx also educated the marketplace on the desirability of sustainable products as qualitatively, economically, and environmentally superior substitutes, in this case for a product that had been in place for thirty years.See Jeffrey W. Segard, Steven Bradfield, Jeffrey J. White, and Mathew J. Realff, “EcoWorx, Green Engineering Principles in Practice,” Environmental Science and Technology 37, no. 23 (2003): 5269–77.
Carpeting is big business. In 2004, the global market for carpeting was about \$26 billion, and it was expected to grow to \$73 billion in 2007. The carpeting and rug sectors expected a combined growth rate of 17 percent that year. Shaw Industries of Dalton, Georgia, was the world’s largest carpet manufacturer in 2004. Its carpet brand names include Cabin Crafts, Queen, Designweave, Philadelphia, and ShawMark. The company sells residential products to distributors and retailers and offers commercial products directly to customers through Shaw Contract Flooring. The company also sells laminate, ceramic tile, and hardwood flooring. In 2003, Shaw recorded \$4.7 billion in sales.
Now acknowledged as an innovator in sustainable product design and business strategy, by early 2005, Shaw had completed a successful transformation to an environmentally benign carpet tile system design. Customers self-selected EcoWorx over tiles containing polyvinyl chloride (PVC), driving the new technology to over 80 percent of Shaw’s total carpet tile production. In retrospect, selecting carpet tiles as a key part of its sustainability strategy looks like a smart decision. In 2005, carpet tile was the fastest growing product category in the commercial carpet market.
In hindsight, Shaw’s decision seems the only way forward in the highly competitive floor covering business. However, in 1999, Shaw Industries Vice President Steve Bradfield described the carpet industry as “a marketing landscape that looked increasingly like a quagmire of greenwash.” Waste issues were putting pressure on the industry to clean up its act. Carpet took up considerable space in municipal landfills, took a long time to decompose, and was notoriously difficult to recycle. Moreover, carpet was coming under increasing scrutiny for its association with health problems.
In the late 1990s, companies vied to project the best image of environmental responsibility. However, Shaw Industries moved beyond marketing hype to a strategy that eliminated hazardous materials and recovered and reused carpet in a closed materials cycle. Shaw had to differentiate itself and create new capabilities and even new markets. EcoWorx, designed with cradle-to-cradle logic, required more innovation than simply the product. To implement its strategy, the company had to think in systems and design products not in the linear make-use-waste model but in cycles. For Shaw, this meant it must collect, disassemble, and reuse the old carpet tile material in new products. Moreover, the materials used in its products needed to be environmentally superior to anything used before.
Shaw was not the first company to think of this approach. In 1994, Ray Anderson of Interface Flooring Systems set the bar high for the industry by declaring sustainability as a corporate (and industry) goal.Ray Anderson, Mid Course Correction: Toward a Sustainable Enterprise: The Interface Model (Atlanta, GA: Peregrinzilla, 1998). While smaller in scale than Shaw Industries, Interface succeeded in changing the terms of the debate. For Shaw, the biggest player in the field, to not only rise to the challenge but to champion the way forward was not something one could necessarily predict.
Shaw’s EcoWorx, the replacement system for the PVC-nylon incumbent system, drove double-digit growth for carpet tile after its introduction in 1999. The system made it possible to recycle both the nylon face and the backing components into next-generation face and backing materials for future EcoWorx carpet tile. Shaw used its own EcoSolution Q nylon 6 branded fiber that would be recycled as a technical nutrient through a recovery agreement with Honeywell’s Arnprior depolymerization facility in Canada. The nylon experienced no loss of performance or quality reduction and cost the same or less.
Seeking every way possible to reduce materials use, remove hazardous inputs, and maintain or improve product performance, Shaw made the following changes:
• Replacement of PVC and phthalate plasticizer with an inert and nonhazardous mix of polymers ensuring material safety throughout the system. (PVC-contaminated nylon facing cannot be used for noncarpet applications of recycled materials.)
• Elimination of antimony trioxide flame retardant associated with harm to aquatic organisms.
• Dramatic reduction of waste during the processing phases by immediate recovery and use of the technical nutrients. (The production waste goal is zero.)
• A life-cycle inventory and mass flow analysis that captures systems impacts and material efficiencies compared with PVC backing.
• Efficiencies (energy and material reductions) in production, packaging, and distribution—40 percent lighter weight of EcoWorx tiles over PVC-backed tiles yields transport and handling (installation and removal/demolition) cost savings.
• Use of a minimum number of raw materials, none of which lose value, as all can be continuously disassembled and remanufactured.
• Use of a closed-loop, integrated plant-wide cooling water system providing chilled water for the extrusion process as well as the heating and cooling system.
• Provision of a toll-free phone number on every EcoWorx tile for the buyer to contact Shaw for removal of the material for recycling.
Models assessing comparative costs of the conventional versus the new system indicated the recycled components would be less costly to process than virgin materials. Essentially, EcoWorx tile remains a raw material indefinitely.
Moreover, as is typical of companies actively applying a systems-oriented innovation to product lines, Shaw has found other opportunities for cost reduction and new revenue. For example, Shaw projects \$2.5 million in overall savings per year from a Dalton, Georgia, steam energy plant designed collaboratively with Siemens Building Technologies. Manufacturing waste by-products are converted into gas that fuels a boiler to produce fifty thousand pounds of steam per hour that will be used on-site for manufacturing. The facility lowers corporate plant emissions, eliminates postmanufacturing carpet waste, and provides the Dalton manufacturing site with a fixed-cost reliable energy source, which is no small benefit in a time of high and fluctuating energy prices.
Once the power of systems thinking becomes clear, returning to a compartmentalized or linear view becomes an irrational abandonment of essential knowledge. Systems thinking illuminates how the world actually works and how actions far beyond what we can see influence our decisions and choices. It frees us to imagine alternative future products and services and create positive outcomes for more stakeholders. For Shaw, the benefits of thinking in systems were clear. The takeaway is that breaking out of the traditional linear approach to products and designing from a systems perspective can lead to differentiation, new competitive advantage, and tangible results.
Coastwide Labs
Systems thinking encourages systems solutions for your customers. Once you see the broader systems context and tightly coupled interdependencies, you have the opportunity to simultaneously solve multiple customer problems and provide a comprehensive “answer” for which they could not even form the right question.
Coastwide Laboratories, when it was a stand-alone company before being acquired by Express and then Staples, sold systems solutions to its customers. Coastwide’s approach was developed over several years and culminated in a complete strategic transformation in 2006. The change separated the firm from its competitors and enabled it to shape a regional market to its advantage. Rewards included customer retention, increased sales to existing customers, new customers, dominant market share in a seven-state region, and brand visibility. By selling systems solutions, Coastwide Labs reduced regulatory burdens for itself and its customers, reduced costs for both, and removed human health and environmental threats across the supply chain. The company tracked an array of trends and systems that influenced its market and customers. The resulting perspective put senior management in the driver’s seat to benefit from and shape those trends in ways that also meet customers’ latent needs.
Context is important. For decades, Coastwide’s product formulations, typical for the industry, were consistent with expectations for old-style janitorial products. The company made or bought cleaners, disinfectants, floor finishes and sealers, and degreasers and provided a full line of sanitary maintenance equipment and supplies. Performing the cleaning function was the primary requirement; other health and ecosystem impact considerations did not emerge until years later.
Serving the US Pacific Northwest region, Coastwide competed in a growing market in the 1990s, driven by expanding high-tech firms that emerged or grew rapidly in the 1980s and 1990s (e.g., Microsoft, Intel, Amgen, and Boeing). By the 1990s, the growth of overall demand for cleaning products had tapered off and the products were essentially commodities. This meant that growth, improved sales, and profitability depended on either increasing market share or offering value-added services. The commercial and industrial cleaning products industry remained fragmented in 2000, with many small companies, each with less than \$5 million in revenue, competing as producers, distributors, or both.
However, this sleepy, traditional industry was about to wake up. In August 2002, Coastwide—by then a commercial and industrial cleaning product formulator and distributor—introduced the Sustainable Earth line of products. This experimental line was designed for performance efficacy, easy use, and low to zero toxicity. By 2006, the line had grown to dominate the company’s strategy, positioning Coastwide as the largest provider of safe and “clean” cleaning products, janitorial supplies, and related services in the region. The market extended from southern Canada to central California and east to Idaho.
The Sustainable Earth line enabled Coastwide to lower its customers’ costs for maintenance by offering system solutions. Customers reported higher dilution rates for chemicals, dispensing units that eliminated overuse, improved safety for the end user, and less employee lost-work time from health problems associated with chemical exposure. Higher dilutions also reduced the packaging waste stream, thereby reducing customer waste disposal fees. TriMet, the Portland, Oregon, metropolitan area’s municipal bus and light rail system, reduced its number of cleaning products from twenty-two to four by switching to Sustainable Earth products. Initial cleaning chemical cost savings to the municipality amounted to 70 percent, not including training cost savings associated with the inventory simplification. In 2006, the Sustainable Earth line performed as well as or better than the category leaders while realizing a gross margin over 40 percent higher than on the company’s conventional cleaners.
Perhaps most telling, in 2006 Coastwide transformed its overall corporate strategy around what the company terms “sustainability” products. All cleaning product lines were replaced with sustainably designed formulations. It is important to keep in mind that health benefits and improved water quality in the region’s cities were not the reasons for this strategy; they characterized opportunities for innovation that drove lower costs for buyers and higher revenues for Coastwide. Through carefully crafted positioning, the company has become a major player, creating and shaping the market to its advantage.
Coastwide’s strategic roots were in its early systems approach to meet customers’ full-service needs, long before environmental and sustainability vocabulary entered the business mainstream. The corporate vision evolved from simply selling cleaning products to offering unique, nonhazardous cleaning formulations at the lowest “total cost” to the buyer. Eventually, Coastwide addressed its customers’ comprehensive maintenance and cleaning needs—in other words, their system’s needs—which only later included sustainability features.
The cleaning product markets are more complicated than one might suspect. Several factors shaped industry selling strategies. Customers needed multiple cleaning products and equipment for different applications. However, buyers had more than cleaning needs. Fast-growing and large electronics manufacturers with clean rooms had to protect their production processes from contaminants or suffer major financial losses from downtime, as much as a million dollars a day. In addition, a barrage of intensifying local, state, and federal regulatory requirements demanded safe handling, storage, and disposal of all toxic and hazardous materials. These legal mandates imposed additional costs such as protective clothing, training, and hazardous waste disposal fees. Adding complexity, historic buying patterns fragmented purchase decisions. One facility maintenance manager ordered a set of products from one supplier; a second ordered different products from another supplier. As a result, companies with geographically dispersed sites made nonoptimal choices from both a price and a systems sense. As in many compartmentalized companies, jobs were divided with people working against each other, sometimes under the same roof. Maintenance bought the products; the environment, health, and safety group was responsible for knowing what was in the products as well as for workers’ safety and health; and manufacturing had to ensure pristine production.
Furthermore, all buyers contended with wastewater disposal regulations that forbade contaminated water from leaving the premises and entering the water supply system, but the requirements differed depending on the local or state regulations. Typically, minimal or no training was given to the maintenance staff members who actually used the hazardous cleaning chemicals. High janitorial employee turnover and low literacy rates made it expensive to hire and train employees. A 150 to 200 percent annual turnover rate was typical for this employee group, imposing its own unique costs and health risks on the employer. The low status of the maintenance and janitorial function didn’t help: the job was delegated in the organization to the staff that did the cleaning work, or one supervisory level above. In other words, despite the many small areas needing the customer’s attention forming a complex set of interrelated factors (a system), responsibility was either nonexistent or fragmented across different departments that traditionally had no incentive to communicate.
More history magnifies the systems thinking in action. In the late 1990s, buyers wanted stockless systems with just-in-time delivery and single source purchasing to avoid dealing with seven or eight companies for ninety cleaning items. Coastwide had designed its first system-solution contract in the late 1980s when it contracted with Tektronix, a test, measurement, and monitoring computer equipment producer, then the largest Oregon employer and a high-tech company with a dozen operating locations. Coastwide offered to supply all Tektronix maintenance needs, including training personnel to use cleaning products safely. Getting Tektronix’s business required knowing the company’s different facilities, various manufacturing operations requirements, and maintenance standards. It also meant that Coastwide presented an analysis showing Tektronix the economics of why it made sense to outsource the company’s system needs. Coastwide had to understand the buyer’s internal use and purchasing systems, including its costs and chemical vulnerabilities.
Roger McFadden, Coastwide’s chemist and senior product development person—the internal entrepreneur, or intrapreneur—took on the additional job of keeping a list of chemicals the buyer wanted kept out of its facilities due to clean room contamination risks. McFadden saw this change as an opportunity to look at a variety of suspect chemicals on various health, safety, and environmental lists. The lists were growing for the customer and regulatory agencies. Eventually Coastwide was asked to handle the complete health and safety functions for this customer, and later for others, because it could do so at a lower cost, with customized analyses presented to each buyer and a systems perspective that optimized efficiencies across linked system parts and tagged areas for continuous improvement. Important interrelated issues for Roger McFadden included product contamination, regulations, customers’ workers’ compensation and injury liability, and chemical compound toxicity thresholds and cancer rates.
To compete with foresight Coastwide also had to stay current on and continuously adapt its solutions services to larger and increasingly more relevant trends. McFadden served on the Governor’s Community Sustainability Taskforce for Oregon and in the process gained more information about the science of toxicity, state regulatory intentions, and changing governmental agency purchasing practices. This led to expanded sales to the state and city governments and to Nike, Hewlett-Packard, and Intel. Coastwide’s involvement with broader community issues translated into flows of information to senior management that helped the firm position itself and learn despite constantly moving terrain.
McFadden’s first step was to rethink the cleaning product formulations. The products had to work well while posing no health or environmental risk. The second step was to expand the product line so that customers would source a range of products solely from Coastwide, a step that provided customers with assurance that all cleaning products met uniform “clean” and low- or zero-toxicity specifications. Coastwide extended its “cleaner cleaners” criteria to auxiliary products. For example, PVC-containing buckets were rejected in favor of those made from safely reusable polyethylene. Used buckets were picked up by Coastwide’s distribution arm, with the containers color coded to ensure no other containers (whose contents the company could not know) would inadvertently be brought back.
Understanding the interconnections across systems continued to bring Coastwide financial and competitive benefits. By 2005 the major trade organization for the industrial cleaning industry, the International Sanitary Supply Association (ISSA), began highlighting members’ green cleaning products and programs. Grant Watkinson, president of Coastwide, was featured on the organization’s website. The US Green Building Council developed its Leadership in Energy and Environmental Design (LEED) program, which set voluntary national standards for high-performance sustainable buildings. LEED assigned points that could be earned by organizations requesting certification if they integrated system-designed cleaning practices. Since many major corporations and organizations gain productivity and reputation advantages from having their buildings LEED certified, Coastwide was positioned with more knowledge and media visibility as this market driver accelerated a transition to lower-toxicity and more benign materials.
In addition, Coastwide was in a far better position than its competition when Executive Order 13148, Greening the Government Through Leadership in Environmental Management, appeared. This order set strict requirements for all federal agencies to “reduce [their] use of selected toxic chemicals, hazardous substances, and pollutants…at [their] facilities by 50 percent by December 31, 2006.”National Environmental Policy Act, “Executive Order 13148,” accessed March 7, 2011, ceq.hss.doe.gov/nepa/regs/eos/eo13148.html.
By 2006 most of the major institutional cleaning-products companies across the country had “green” product offerings of some sort, but Coastwide already was well ahead of them. Building service contractor and property manager customers told Coastwide they were awarded new business because of the “green” package Coastwide offered. Some buyers used the Sustainable Earth line as part of their marketing programs to differentiate and enhance the value of their services. The city of San Francisco specified Coastwide’s line even though the company did not have sales representatives in that market (sales were through distributors). Inquiries from the US Midwest, South, and East Coast increased in 2006, and Roger McFadden and the firm’s corporate director of sustainability were frequently invited to speak in various US and Canadian cities outside Coastwide’s market area. In sum, by making sure it understood the dynamics of the systems relevant to its success and its customers’ benefit, Coastwide created a successful strategy because, in its competitive environment, it was just good business.
Results for Coastwide included the following:
• The industry average net operating income was 2 percent; Coastwide averaged double or triple that level.
• Sales in 2005 increased by 8 percent, driven by market share increases in segments where the most Sustainable Earth products were sold; operating profits rose by an even larger percentage.
• The number of new customers rose over 35 percent in 2005, largely attributable to Sustainable Earth product lines.
Coastwide’s solution for buyers went further than any other firm’s to blend problem solving around a company’s unique needs with changing regulatory system requirements and emerging human health and ecosystem trends. Coastwide, through McFadden’s entrepreneurial innovation, saw an opportunity in the complex corporate, regulatory, and ecological systems and in its customers’ need for a sustainable response. By understanding the systems in which you operate, you can discover higher-level solutions that yield competitive advantage. By 2010 McFadden had become Staples’s senior scientist, advising the \$27 billion office products company on its sustainability strategy.
In each of these instances, entrepreneurial (or intrapreneurial) leaders made decisions from a systems perspective. The individuals came to this understanding in different ways, but this way of seeing their companies’ interdependencies with both living and nonliving systems allowed them to introduce innovative ways of doing business, create new product designs and operating structures, and generate new revenues. Systems analysis is an effective problem-solving tool in dynamic, complex circumstances where economic opportunities are not easily apparent. A systems perspective accommodates the constant changes that characterize the competitive terrain.
To recap, we provide the following tactics to help you think in systems terms:
• Design products in “circles,” not lines.
• Optimize across multiple systems.
• Sell systems solutions.
This kind of broader systems-oriented strategy will be increasingly important for claiming market share in the new sustainability market space. Increasingly, senior management, and eventually everyone within firms and their supply chains, will understand that the future lies on a path toward benign products (no harm to existing natural systems) or products that—at the end of use—are returned so that their component parts can be used to make equal or better quality new products. (The point is not the goal but the continuous effort.) Systems thinking applied to entrepreneurial innovation is not merely a tool or theory—it is increasingly a mind-set, a survival skill, and a key to strategic advantage.
KEY TAKEAWAYS
• A systems approach to business is a reminder that companies operate in complex sets of interlocking living and nonliving systems, including markets and supply chains as well as natural systems.
• Systems thinking can open up new opportunities for product and process redesign and lead to innovative business models.
• Individuals with a creative bent can lead sustainability innovation changes inside small or large firms.
EXERCISE
1. In teams, identify a commonly used product. Try to name all the component parts and material inputs involved in bringing the product to market. List the ways in which producing that item likely depended on, drew from, and impacted natural systems over the product’s life.
Learning Objectives
1. Explore systems thinking at the molecular level.
2. Focus on materials innovation.
3. Provide examples of green chemistry applications.
In this discussion, we encourage you to think on the micro level, as though you were a molecule. We tend to focus on what is visible to the human eye, forgetting that important human product design work takes place at scales invisible to human beings. Molecular thinking, as a metaphorical subset of systems thinking, provides a useful perspective by focusing attention on invisible material components and contaminants. In the first decade of the twenty-first century there has been heavy emphasis on clean energy in the media. Yet our world is composed of energy and materials. When we do examine materials we tend to focus on visible waste streams, such as the problems of municipal waste, forgetting that some of the most urgent environmental health problems are caused by microscopic, and perhaps nanoscale, compounds. These compounds contain persistent contaminants that remain invisible in the air, soil, and water and subsequently accumulate inside our bodies through ingestion of food and water. Thinking like a molecule can reveal efficiency and innovation opportunities that address hazardous materials exposure problems; the principles of green chemistry give you the tools to act on such opportunities. The companies discussed in this section provide examples of successful sustainability innovation efforts at the molecular level.
Green chemistry, an emerging area in science, is based on a set of twelve design principles.Paul T. Anastas and John C. Warner, Green Chemistry: Theory and Practice (Oxford: Oxford University Press, 1998). Application of the principles can significantly reduce or even eliminate generation of hazardous substances in the design, manufacture, and application of chemical products. Green chemistry offers many business benefits. Its guiding principles drive design of new products and processes around health and environmental criteria and can help firms capture top-line (revenue) and bottom-line (profitability) gains within the company and throughout value chains. As public demand and regulatory drivers for “clean” products and processes grow, molecular thinking enables entrepreneurs inside large and small companies to differentiate their businesses and gain competitive advantage over others who are less attuned to the changing market demands.
In the ideal environment, green chemistry products are derived from renewable feedstocks, and toxicity is deliberately prevented at the molecular level. Green chemistry also provides the means of shifting from a petrochemical-based economy based on oil feedstocks (from which virtually all plastics are derived) to a bio-based economy. This has profound consequences for a wide range of issues, including environmental health, worker safety, national security, and the agriculture sector. While no one scientific approach can supply all the answers, green chemistry plays a foundational role in enabling companies to realize concrete benefits from greener design.
What does it mean to pursue sustainability innovation at the molecular level? When chemicals and chemical processes are selected and designed to eliminate waste, minimize energy use, and degrade safely upon disposal, the result is a set of processes streamlined for maximum efficiency. In addition, hazards to those who handle the chemicals, along with the chemicals’ inherent costs, are designed out of both products and processes. With the growing pressure on firms to take responsibility for the adverse impacts of business operations throughout their supply chain and the demand for greater transparency by corporations, forward-thinking organizations—whether start-ups or established firms—increasingly must assess products and process steps for inherent hazard and toxicity.
12 Principles of Green Chemistry
1. Prevent waste, rather than treat it after it is formed.
2. Maximize the incorporation of all process materials into the final product.
3. Use and generate substances of little or no toxicity.
4. Preserve efficacy of function while reducing toxicity.
5. Eliminate or minimize use of or toxicity of auxiliary substances (e.g., solvents).
6. Recognize and minimize energy requirements; shoot for room temperature.
7. Use renewable raw material feedstock, if economically and technically possible.
8. Avoid unnecessary derivatization (e.g., blocking group, protection/deprotection).
9. Consider catalytic reagents superior to stoichiometric reagents.
10. Design end product to innocuously degrade, not persist.
11. Develop analytical methodologies that facilitate real-time monitoring and control.
12. Choose substances/forms that minimize potential for accidents, releases, and fires. Paul T. Anastas and John C. Warner, Green Chemistry: Theory and Practice (Oxford: Oxford University Press, 1998), 30.
Molecular thinking, applied through the use of the green chemistry principles, guides you to examine the nature of material inputs to your products. Once again, a life-cycle approach is required to consider, from the outset, the ultimate fate of your waste outputs and products. This analysis can occur concurrently with delivering a high-quality product to the buyer. Thus thinking like a molecule asks business managers and executives to examine not only a product’s immediate functionality but its entire molecular cycle from raw material, through manufacture and processing, to end of life and disposal. Smart decision makers will ask, Where do we get our feedstocks? Are they renewable or limited? Are they vulnerable to price and supply fluctuations? Are they vulnerable to emerging environmental health regulations? Are they inherently benign or does the management of risk incur costs in handling, processing, and disposal? Managers and sustainability entrepreneurs also must ask whether chemicals in their products accumulate in human tissue or biodegrade harmlessly. Where do the molecular materials go when thrown away? Do they remain stable in landfills, or do they break down to pollute local water supplies? Does their combination create new and more potent toxins when incinerated? If so, can air emissions be carried by wind currents and influence the healthy functioning of people and natural systems far from the source?
Until very recently these questions were not business concerns. Increasingly, however, circumstances demand that we think small (at the molecular and even nano levels) to think big (providing safe products for two to four billion aspiring middle-class citizens around the world). As we devise more effective monitoring devices that are better able to detect and analyze the negative health impacts of certain persistent chemical compounds, corporate tracking of product ingredients at the molecular level becomes imperative. Monitoring chemical materials to date has been driven primarily by increased regulation, product boycotts, and market campaigns by health-oriented nonprofit organizations. But instead of a reactive defense against these growing forces, forward-thinking entrepreneurial companies and individuals see new areas of business opportunity and growth represented by the updated science and shifting market conditions.
Green chemistry design principles are being applied by a range of leading companies across sectors including chemical giants Dow, DuPont, and Rohm and Haas and consumer product producers such as SC Johnson, Shaw Industries, and Merck & Co. Small and midsized businesses such as Ecover, Seventh Generation, Method, AgraQuest, and Metabolix also play a leading innovative role. (See the Presidential Green Chemistry Challenge Award winners for a detailed list of these businesses.)US Environmental Protection Agency, “Presidential Green Chemistry Challenge: Award Winners,” last updated July 28, 2010, accessed December 3, 2010, www.epa.gov/greenchemistry/pubs/pgcc/past.html. Currently green chemistry–inspired design and innovation has made inroads into a range of applications, including the following:
• Adhesives
• Cleaning products
• Fine chemicals
• Fuels and renewable energy technologies
• Nanotechnologies
• Paints and coatings
• Pesticides
• Pharmaceuticals
• Plastics
• Pulp and paper
• Textile manufacturing
• Water purification
Included in green chemistry is the idea of the atom economy, which would have manufacturers use as fully as possible every input molecule in the final output product. The pharmaceutical industry, an early adopter of green chemistry efficiency principles in manufacturing processes, uses a metric called E-factor to measure the ratio of inputs to outputs in any given product.The definition of E-factor is evolving at this writing. Currently pharmaceutical companies engaged in green chemistry are debating whether to include input factors such as energy, water, and other nontraditional inputs. In essence, an E-factor measurement tells you how many units of weight of output one gets per unit of weight of input. This figure gives managers a sense of process efficiency and the inherent costs associated with waste, energy, and other resources’ rates of use. By applying green chemistry principles to pharmaceutical production processes, companies have been able to dramatically lower their E-factor—and significantly raise profits.
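The E-factor calculation described above is simple division; here is a minimal sketch (the batch figures below are hypothetical, chosen only to illustrate the metric):

```python
def e_factor(total_input_kg: float, api_kg: float) -> float:
    """Kilograms of all inputs (raw materials, solvents, processing
    chemicals) consumed per kilogram of active product ingredient (API)."""
    return total_input_kg / api_kg

# Hypothetical batch: 5,000 kg of total inputs yield 100 kg of API,
# meaning 4,900 kg of material leaves the process as waste.
ratio = e_factor(total_input_kg=5_000, api_kg=100)
print(ratio)  # 50.0
```

A lower E-factor means less purchased material is diverted into waste streams, which is why the metric maps so directly onto both process cost and environmental burden.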
Merck & Co., for example, uncovered a highly innovative and efficient catalytic synthesis for sitagliptin, the active ingredient in Januvia, the company’s new treatment for type 2 diabetes. This revolutionary synthesis generated 220 pounds less waste for each pound of sitagliptin manufactured and increased the overall yield by nearly 50 percent. Over the lifetime of Januvia, Merck expects to eliminate the formation of at least 330 million pounds of waste, including nearly 110 million pounds of aqueous waste.US Environmental Protection Agency, “Presidential GC Challenge: Past Awards: 2006 Greener Synthetic Pathways Award,” last updated June 21, 2010, accessed December 2, 2010, www.epa.gov/greenchemistry/pubs/pgcc/winners/gspa06.html.
Pfizer
In 2002, pharmaceutical firm Pfizer won the US Presidential Green Chemistry Challenge Award for Alternative Synthetic Pathways for its redesign of the manufacturing process for sertraline hydrochloride (HCl). Sertraline HCl is the active ingredient in Zoloft, the most prescribed agent of its kind used to treat depression. In 2004, global sales of Zoloft were \$3.4 billion. Pharmaceutical wisdom holds that companies compete primarily on the nature of the drug and secondarily on process, with “maximum yield” as the main objective. Green chemistry adds a new dimension to this calculus: Pfizer and other pharmaceutical companies are discovering that by thinking like a molecule and applying green chemistry process innovations, they can improve their atom economy dramatically.
In the case of Pfizer, the company saw that it could significantly cut input costs. The new commercial process offered dramatic pollution prevention benefits, reduced energy and water use, and improved safety and materials handling. As a consequence, Pfizer significantly improved worker and environmental safety while doubling product yield. This was achieved by analyzing each chemical step. The key improvement in the sertraline synthesis was reducing a three-step sequence in the original process to a single step.Stephen K. Ritter, “Green Challenge,” Chemical & Engineering News, 80, no. 26 (2009): 30. Overall, the process changes reduced the solvent requirement from 60,000 gallons to 6,000 gallons per ton of sertraline. On an annual basis, the changes eliminated 440 metric tons of titanium dioxide-methylamine hydrochloride salt waste, 150 metric tons of 35 percent hydrochloric acid waste, and 100 metric tons of 50 percent sodium hydroxide waste. With hazardous waste disposal growing more costly, this represented real savings now and avoided possible future costs.
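The scale of the solvent reduction reported above can be checked with simple arithmetic; this sketch uses only the figures quoted in the text:

```python
# Solvent use per ton of sertraline, before and after the redesign (gallons).
old_solvent_gal = 60_000
new_solvent_gal = 6_000
reduction = 1 - new_solvent_gal / old_solvent_gal
print(f"Solvent reduction: {reduction:.0%}")  # Solvent reduction: 90%

# Annual waste streams eliminated (metric tons), per the text.
eliminated_tons = {
    "titanium dioxide-methylamine hydrochloride salt": 440,
    "35% hydrochloric acid": 150,
    "50% sodium hydroxide": 100,
}
print(sum(eliminated_tons.values()))  # 690
```

A 90 percent cut in solvent per ton of product, plus nearly 700 metric tons of hazardous waste avoided each year, is the kind of result that turns a process change into a durable cost advantage.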
By redesigning the chemical process to be more efficient and produce fewer harmful and expensive waste products, the process of producing sertraline generated both economic and environmental/health benefits for Pfizer. Typically, 20 percent of the wholesale price is manufacturing costs, of which approximately 20 percent is the cost of the tablet or capsule with the remaining percentage representing all other materials, energy, water, and processing costs. Using green chemistry can reduce both of these input costs significantly. As patents expire and pharmaceutical products are challenged by cheaper generics, maintaining the most efficient, cost-effective manufacturing process will be the key to maintaining competitiveness.
As mentioned earlier, E-factor analysis offers the means for streamlining materials processing and capturing cost savings. An efficiency assessment tool for the pharmaceutical industry, E-factor is defined as the ratio of total kilograms of all input materials (raw materials, solvents, and processing chemicals) used per kilogram of active product ingredient (API) produced. A pivotal 1994 study indicated that as standard practice in the pharmaceutical industry, for every kilogram of API produced, between twenty-five and one hundred kilograms or more of waste was generated—a ratio still found in the industry. By the end of the decade, E-factors were being used more frequently to evaluate products. Firms were identifying drivers of high E-factor values and taking action to improve efficiency. Multiplying the E-factor by the estimated kilograms of API produced by the industry overall suggested that, for the year 2003, as much as 500 million to 2.5 billion kilograms of waste could be the by-product of pharmaceutical industry API manufacture. This waste represented a double penalty: the costs associated with purchasing chemicals that are ultimately diverted from API yield and the costs associated with disposing of this waste (ranging from one to five dollars per kilogram depending on the hazard). Very little information is released by competitors in this industry, but a published 2004 GlaxoSmithKline life-cycle assessment of its API manufacturing processes revealed approximately 75 to 80 percent of the waste produced was solvent (liquid) and 20 to 25 percent solids, of which a considerable proportion of both was likely hazardous under state and federal laws.
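The industry-wide waste estimate quoted above follows from multiplying the E-factor range by estimated API output. A back-of-envelope check, assuming 2003 industry API production of roughly 20 to 25 million kilograms (an assumed figure, chosen because it reproduces the range given in the text):

```python
e_low, e_high = 25, 100          # kg waste per kg API (1994 study range)
api_low, api_high = 20e6, 25e6   # assumed industry-wide API output, kg

waste_low = e_low * api_low      # 500 million kg
waste_high = e_high * api_high   # 2.5 billion kg

# Disposal at $1-5 per kg of waste, per the text, bounds the cost exposure.
cost_low, cost_high = waste_low * 1, waste_high * 5
print(f"Waste: {waste_low:.2e} to {waste_high:.2e} kg")
print(f"Disposal cost: ${cost_low:.2e} to ${cost_high:.2e}")
```

Even the low end of this range makes clear why disposal costs alone create a strong financial incentive to drive E-factors down.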
For years, the pharmaceutical industry stated it did not produce the significant product volumes needed to be concerned about toxicity and waste, particularly relative to commodity chemical producers. However, government and citizen concern about product safety and high levels of medications in wastewater combined with the growing cost of hazardous waste disposal is changing that picture relatively quickly. With favorable competitive conditions eroding, companies have been eager to find ways to cut costs, eliminate risk, innovate, and improve their image.
After Pfizer implemented the award-winning green chemistry process as standard in sertraline HCl manufacture, its experience indicated that the process changes reduced E-factor ratios to ten to twenty kilograms of waste per kilogram of API. The potential to dramatically reduce E-factors through green chemistry could be significant. Other pharmaceutical companies that won Presidential Green Chemistry Challenge Awards between 1999 and 2010—Lilly, Roche, Bristol-Myers Squibb, and Merck—reported improvements of this magnitude after the application of green chemistry principles. Additionally, Pfizer was awarded the prestigious UK Crystal Faraday Award for innovation in the redesign of the manufacturing process of sildenafil citrate (the active ingredient in the product Viagra).
Not surprisingly, thinking like a molecule applied through use of green chemistry’s twelve principles fits easily with existing corporate Six Sigma quality programs whose principles consider waste a process defect. “Right the first time” was an industry quality initiative backed strongly by the US Food and Drug Administration. Pfizer’s Dr. Berkeley (“Buzz”) Cue (retired but still actively advancing green chemistry in the industry), credited with introducing green chemistry ideas to the pharmaceutical industry, views these initiatives as a lens that allows the companies to look at processes and yield objectives in a more comprehensive way (a systems view), with quality programs dovetailing easily with the approach and even enhancing it.
Dr. Cue, looking back on his history with green chemistry and Pfizer, said, “The question is what has Pfizer learned through understanding Green Chemistry principles that not only advantages them in the short term, but positions them for future innovation and trends?”Phone interview with Berkeley Cue, retired Pfizer executive, July 16, 2003. This is an important question for entrepreneurs in small firms and large firms alike. If you think like a molecule, overlooked opportunities and differentiation possibilities present themselves. Are you calculating the ratio of inputs to outputs? Has your company captured obvious efficiency cost savings, increased product yield, and redesigned its molecules to be more effective for customers across the life cycle? Are you missing opportunities to reduce or eliminate regulatory oversight by replacing inherently hazardous and toxic inputs with benign materials? Regulatory compliance for hazardous chemical waste represents a significant budget item and cost burden. Those dollars would be more usefully spent elsewhere.
Green chemistry has generated breakthrough innovations in the agriculture sector as well. Growers face a suite of rising challenges connected with using traditional chemical pesticides. A primary concern is that pests are becoming increasingly resistant to conventional chemical pesticides. In some cases, pesticides must be applied two to five times to accomplish what a single application did in the 1970s. Moreover, pests can reproduce and mutate quickly enough to develop resistance to a pesticide within one growing season. Increased rates of pesticide usage deplete soil and contaminate water supplies, and the costs of these negative side effects (so-called externalities) are shifted away from producers and onto individuals and society at large.
AgraQuest
AgraQuest is an innovative small company based in Davis, California. The company was founded by entrepreneur Pam Marrone, a PhD biochemist with a vision of commercially harnessing the power of naturally occurring plant defense systems. Marrone had left Monsanto, where she had originally been engaged to do this work, when that company shifted its strategic focus to genetically modified plants. Marrone looked for venture capital and ultimately launched AgraQuest, a privately held company, which in 2005 employed seventy-two people and expected sales of approximately \$10 million.
AgraQuest strategically differentiated itself by offering products that provided the service of effective pest management while solving user problems of pest resistance, environmental impact, and worker health and safety. AgraQuest provides an exemplary case study of green chemistry technology developed and brought to market at a competitive cost. The company is also a prime example of how a business markets a disruptive technology and grapples with the issues that confront any challenge to the status quo.
About AgraQuest
Powering today’s agricultural revolution for cleaner, safer food through effective biopesticides and innovative technologies for sustainable, highly productive farming and a better environment.
As a leader in innovative biological and low-chemical pest management solutions, AgraQuest is at the forefront of the new agriculture revolution and a shift in how food is grown. AgraQuest focuses on discovering, developing, manufacturing and marketing highly effective biopesticides and low-chem pest and disease control and yield enhancing products for sustainable agriculture, the home and garden, and food safety markets. Through its Agrochemical and BioInnovations Divisions, AgraQuest provides its customers and partners with tools to create value-enhancing solutions.Andrea Larson and Karen O’Brien, from field interviews; untitled/unpublished manuscript, 2006.
Winner of the Presidential Green Chemistry Challenge Small Business Award in 2003 for its innovative enzymatic biotechnology process used to generate its products, AgraQuest employed a proprietary technology to screen naturally occurring microorganisms to identify those that may have novel and effective pest management characteristics.US Environmental Protection Agency, “Green Chemistry: Award Winners,” accessed July 28, 2010, www.epa.gov/gcc/pubs/pgcc/past.html. AgraQuest scientists traveled around the world gathering microbe samples, identifying those that fight the diseases and pests that destroy crops. Once located, these microorganisms were screened, cultivated, and optimized in AgraQuest’s facilities and then sent in powder or liquid form to growers. In field trials and in commercial use, AgraQuest’s microbial pesticides have been shown to attack crop diseases and pests and then completely biodegrade, leaving no residue behind. Ironically, AgraQuest’s first product was developed from a microbe found in the company’s backyard—a nearby peach orchard. Once the microbe was identified, company biochemists analyzed and characterized the compound structures produced by selected microorganisms to ensure there were no toxins, confirm that the product biodegraded innocuously, and identify product candidates for development and commercialization.
The company, led by entrepreneur Marrone, has screened over twenty-three thousand microorganisms and identified more than twenty product candidates that display high levels of activity against insects, nematodes, and plant pathogens. These products include Serenade, Sonata, and Rhapsody biological fungicides; Virtuoso biological insecticide; and Arabesque biofumigant. The market opportunities for microbial-based pesticides are extensive. Furthermore, the booming \$4 billion organic food industry generates rising demand for organic-certified pest management tools. As growers strive to increase yields to meet this expanding market, they require more effective, organic means of fighting crop threats. AgraQuest’s fungicide Serenade is organic certified to serve this expanding market, and other products are in the pipeline.
The US Environmental Protection Agency (EPA) has streamlined the registration process for “reduced-risk” bio-based pesticides such as AgraQuest’s to help move them to market faster. The Biopesticides and Pollution Prevention Division oversees regulation of all biopesticides and has accelerated its testing and registration processes. The average time from submission to registration is now twelve to fourteen months rather than five to seven years.
Moreover, since the products biodegrade and are inherently nontoxic to humans, they are exempt from testing for “tolerances”—that is, the threshold exposure to a toxic substance to which workers can legally be exposed. This means that workers are required to wait a minimum of four hours after use before entering the fields, whereas conventional pesticides require a seventy-two-hour wait. Shorter restricted-entry intervals save growers time and money. Therefore, AgraQuest products can act as “crop savers”—used immediately prior to harvest in the event of bad weather. To growers of certain products, such as wine grapes, this can mean the difference between success and failure for a season.
AgraQuest deployed exemplary green chemistry and sustainability innovation strategies. The opportunity presented by the problems associated with conventional chemical pesticides was relatively easy to perceive, but designing a viable alternative took real ingenuity and a dramatic diversion from well-worn industry norms. Thinking like a molecule in this context enabled the firm to challenge the existing industry pattern of applying toxins and instead examine how natural systems create safe pesticides. Marrone and her team have been able to invent entirely new biodegradable and benign products—and capitalize on rising market demand for the unique array of applications inherent in this type of product.
As the science linking cause and effect grows more sophisticated, public concern about the human health and environmental effects of pesticides is increasing.Rick A. Relyea, “The Impact of Insecticides and Herbicides on the Biodiversity and Productivity of Aquatic Communities,” Ecological Applications 15, no. 2 (2005): 618–27; Xiaomei Ma, Patricia A. Buffler, Robert B. Gunier, Gary Dahl, Martyn T. Smith, Kyndaron Reinier, and Peggy Reynolds, “Critical Windows of Exposure to Household Pesticides and Risk of Childhood Leukemia,” Environmental Health Perspectives 110, no. 9 (2002): 955–60; Anne R. Greenlee, Tammy M. Ellis, and Richard L. Berg, “Low-Dose Agrochemicals and Lawn-Care Pesticides Induce Developmental Toxicity in Murine Preimplantation Embryos,” Environmental Health Perspectives 112, no. 6 (2004): 703–9. Related to this is an international movement to phase out specific widely used pesticides such as DDT and methyl bromide. Moreover, a growing number of countries impose trade barriers on food imports due to residual pesticides on the products.
In this suite of challenges facing the food production industry, AgraQuest found opportunity. The logic behind AgraQuest’s product line is simple: rather than rely solely on petrochemical-derived approaches to eradicating pests, AgraQuest products use microbes to fight microbes. Over millennia, microbes have evolved complex defense systems that we are only now beginning to understand. AgraQuest designs products that replicate and focus these natural defense systems on target pests. When used in combination with conventional pesticides, AgraQuest products are part of a highly effective pest management system that has the added benefit of lowering the overall chemical load released into natural systems. Because they are inherently benign, AgraQuest products biodegrade innocuously, avoiding the threats to human health and ecosystems—not to mention associated costs—that growers using traditional pesticides incur.
NatureWorks
In a final example, NatureWorks, Cargill’s entrepreneurial biotechnology venture, designed plastics made from biomass, a renewable input. The genius of NatureWorks’ biotechnology is that it uses a wide range of plant-based feedstocks and is not limited to corn, thus avoiding competition with food production. NatureWorks’ innovative breakthroughs addressed the central environmental problem of conventional plastic. Derived from oil, conventional plastic, a nonrenewable resource associated with a long list of environmental, price, and national security concerns, has become a major health and waste disposal problem. By building a product around bio-based inputs, NatureWorks designed an alternative product that is competitive in both performance and price—one that circumvents the pollution and other concerns of oil-based plastics. As a result of its successful strategy, NatureWorks has shifted the market in its favor.
NatureWorks LLC received the 2002 Presidential Green Chemistry Challenge Award for its development of the first synthetic polymer class to be produced from renewable resources, specifically from corn grown in the American Midwest. At the Green Chemistry and Engineering conference and awards ceremony in Washington, DC, attended by the president of the US National Academy of Sciences, the White House science advisor, and other dignitaries from the National Academies and the American Chemical Society, the award recognized the company’s major biochemistry innovation, achieved in large part under the guidance and inspiration of former NatureWorks technology vice president Patrick Gruber.
Gruber was an early champion of sustainability innovation. As an entrepreneur inside a large firm, he led the effort that resulted in NatureWorks’ bio-based plastic. Together with a team of chemical engineers, biotechnology experts, and marketing strategists, Gruber spearheaded the effort to marry agricultural products giant Cargill with chemical company Dow to create the spin-off company originally known as Cargill Dow and renamed NatureWorks in January 2005. Gruber was the visionary who saw the potential for a bio-based plastic and the possibilities for a new enzymatic green chemistry process to manufacture it. He helped drive that process until it was cost-effective enough to produce products competitive with conventional products on the market.
NatureWorks’ plastic, whose scientific name is polylactic acid (PLA), has the potential to revolutionize the plastics and agricultural industries by offering biomass-based biopolymers as a substitute for conventional petroleum-based plastics. NatureWorks resins were named and trademarked NatureWorks PLA for the polylactic acid derived from base plant sugars. In addition to replacing petroleum as the material feedstock, PLA resins have the added benefit of being compostable (safely biodegraded) or even infinitely recyclable, which means they can be reprocessed again and again. This provides a distinct environmental advantage, since recycling—or “down-cycling”—postconsumer or postindustrial conventional plastics into lower quality products only slows material flow to landfills; it does not prevent waste. Moreover, manufacturing plastic from corn field residues results in 30 to 50 percent fewer greenhouse gas emissions when measured from “field to pellet.” Additional environmental and health benefits have been identified by a thorough life-cycle analysis. In addition, PLA resins, virgin and postconsumer, can be processed into a variety of end uses.
In 2005, NatureWorks CEO Kathleen Bader and Patrick Gruber were wrestling with a number of questions. NatureWorks’ challenges were operational and strategic: how to take the successful product to high-volume production and how to market the unique resin in a mature plastics market. NatureWorks employed 230 people distributed almost equally among headquarters (labs and management offices), the plant, and international locations. As a joint venture, the enterprise had consumed close to \$750 million in capital and was not yet profitable, but it held the promise of tremendous growth that could transform a wide range of markets worldwide. In 2005, NatureWorks was still the only company in the world capable of producing large-scale corn-based resins that exhibited standard performance traits, such as durability, flexibility, resistance to chemicals, and strength—all at a competitive market price.
The plastics industry is the fourth largest manufacturing segment in the United States behind motor vehicles, electronics, and petroleum refining. Both the oil and chemical industries are mature and rely on commodities sold on thin margins. The combined efforts of a large-scale chemical company in Dow and an agricultural processor giant in Cargill suggested Cargill Dow—now NatureWorks—might be well suited for the mammoth task of challenging oil feedstocks. However, a question inside the business in 2005 was whether the company could grow beyond the market share that usually limited “environmental” products, considered somewhere between 2 and 5 percent of the market. Was PLA an “environmental product,” or was it the result of strategy that anticipated profound market shifts?
NatureWorks brought its new product to market in the late 1990s and early 2000s at a time of shifting market dynamics and converging health, environmental, national security, and energy independence concerns. These market drivers gave NatureWorks a profound edge. Oil supplies and instability concerns loomed large in 2005 and have not subsided. Volatile oil prices and political instability in oil-producing countries argued for decreasing dependence on foreign oil to the extent possible. The volatility of petroleum prices between 1995 and 2005 wreaked havoc on the plastics industry. From 1998 to 2001, natural gas prices (which typically tracked oil prices) doubled, then quintupled, then returned to 1998 levels. The year 2003 was again a roller coaster of unpredictable fluctuations, causing a Huntsman Chemical Corp. official to lament, “The problem facing the polymers and petrochemicals industry in the U.S. is unprecedented. Rome is burning.”Reference for Business, “SIC 2821: Plastic Materials and Resins,” accessed January 10, 2011, http://www.referenceforbusiness.com/industries/Chemicals-Allied/Plastic-Materials-Resins.html. In contrast, PLA, made from a renewable resource, offered performance, price, environmental compatibility, high visibility, and therefore significant value to certain buyers for whom this configuration of product characteristics is important.
Consumers are growing increasingly concerned about chemicals in products. This provides market space for companies that supply “clean materials.” NatureWorks’ strategists knew, for example, that certain plastics were under increasing public scrutiny. Health concerns, especially those of women and children, have put plastics under suspicion in the United States and abroad. The European Union and Japan have instituted bans and regulatory frameworks on some commonly used plastics and related chemicals. Plastic softeners such as phthalates, among the most commonly used additives, have been labeled in studies as potential carcinogens and endocrine disruptors. Several common flame retardants in plastic can cause developmental disorders in laboratory mice. Studies have found plastics and related chemicals in mothers’ breast milk and babies’ umbilical cord blood samples.Sara Goodman, “Tests Find More Than 200 Chemicals in Newborn Umbilical Cord Blood,” Scientific American, December 2, 2009, accessed January 10, 2011, www.scientificamerican.com/article.cfm?id=newborn-babies-chemicals-exposure-bpa; Frédéric Dallaire, Éric Dewailly, Gina Muckle, and Pierre Ayotte, “Time Trends of Persistent Organic Pollutants and Heavy Metals in Umbilical Cord Blood of Inuit Infants Born in Nunavik (Québec, Canada) between 1994 and 2001,” Environmental Health Perspectives 111, no. 13 (2003): 1660–64.
Consumer concern about chemicals and health opens new markets for “clean” materials designed from a sustainability innovation perspective. In addition, international regulations are accelerating growth in the market. In 1999, the European Union banned the use of phthalates in children’s toys and teething rings and in 2003 banned some phthalates for use in cosmetics. States such as California have taken steps to warn consumers of the suspected risk of some phthalates. The European Union, California, and Maine banned the production or sale of products using certain polybrominated diphenyl ethers (PBDEs) as flame retardants. In 2006, the European Union was in the final phases of legislative directives to require registration and testing of nearly ten thousand chemicals of concern. The act, called Registration, Evaluation, Authorization, and Restriction of Chemicals (REACH), became law in 2007 and regulates the manufacture, import, marketing, and use of chemicals. All imports into Europe need to meet REACH information requirements for toxicity and health impacts. Companies are required to demonstrate that a substance does not adversely affect human health, and chemical property and safe use information must be communicated up and down supply chains to protect workers, consumers, and the environment.
All of these drivers contributed to the molecular thinking that generated NatureWorks’ corn-based plastics. The volatility of oil prices, growing consumer concerns about plastics and health, waste disposal issues, and changing international regulations are among the systemic issues creating a new competitive arena in which bio-based products based on green chemistry design principles can be successfully introduced.
Given higher levels of consumer awareness in Europe and Japan, NatureWorks’ plastic initially received more attention in the international market than in the United States. In 2004, IPER, an Italian food market, sold “natural food in natural packaging” (made with PLA) and attributed a 4 percent increase in deli sales to the green packaging.Carol Radice, “Packaging Prowess,” Grocery Headquarters, August 2, 2010, accessed January 10, 2011, www.groceryheadquarters.com/articles/2010-08-02/Packaging-prowess. NatureWorks established a strategic partnership with Amprica SpA in Castelbelforte, Italy, a major European manufacturer of thermoformed packaging for the bakery and convenience food markets. Amprica was moving ahead with plans to replace the plastics it used, including polyethylene terephthalate (PET), polyvinyl chloride (PVC), and polystyrene, with the PLA polymer.
In response to the national phaseout and ultimate ban of petroleum-based shopping bags and disposable tableware in Taiwan, Wei-Mon Industry (WMI) signed an exclusive agreement with NatureWorks to promote and distribute packaging articles made with PLA.NatureWorks LLC, “First Launch by Local Companies of Environmentally Friendly Paper & Pulp Products with NatureWorks PLA,” June 9, 2006, accessed January 7, 2011, www.natureworksllc.com/news-and-events/press-releases/2006/6-9-06-wei-mon-extrusion-coated-paper-launch.aspx. In other markets, Giorgio Armani released men’s dress suits made completely of PLA fiber and Sony sold PLA Discman and Walkman products in Japan. Due to growing concerns about the health impacts of some flame-retardant additives, NEC Corp. of Tokyo combined PLA with a natural fiber called kenaf to make an ecologically and biologically neutral flame-resistant bioplastic.“NEC Develops Flame-Resistant Bio-Plastic,” GreenBiz, January 26, 2004, accessed December 2, 2010, www.greenbiz.com/news/news_third.cfm?NewsID=26360.
The US market has been slower to embrace PLA, but Walmart’s purchasing decisions may change that. In fact, NatureWorks’ product solves several of Walmart’s problems. Walmart has battled corporate image problems on several fronts—in its treatment of employees, as a contributor to “big box” sprawl, and in its practice of outsourcing, among others. Sourcing NatureWorks’ bio-based, American-grown, corn-based plastic not only fits into Walmart’s larger corporate “sustainability” effort but addresses US dependence on foreign oil and supports the American farmer.
The spectrum of entrepreneurial activities in the sustainable materials arena is wide. While some entrepreneurs are early entrants who are fundamentally reconfiguring product systems, others take more incremental steps toward adopting cleaner, innovative materials and processes. However, incremental changes can be radical when taken cumulatively, as long as one constantly looks ahead toward the larger goal.
Many companies, within the chemical industry and outside, now understand that cost reductions and product/process improvements are available through green chemistry and other environmental efficiency policies. Documented cost savings in materials input, waste streams, and energy use are readily available. In recognition of the efficiency gains to be realized, as well as risk reduction and regulatory advantages, most firms acknowledge the benefits that result from developing a strategy with these goals in mind. In addition, companies know they can help avoid the adverse effects of ignoring these issues, such as boycotts and stockholders’ resolutions that generate negative publicity.
However, the efficiency improvement and risk reduction sides of environmental concerns and sustainability are only the leading edge of the opportunities possible. Sustainability strategies and innovative practices go beyond incremental improvement to existing products. This future-oriented business strategy—geared toward new processes, products, technologies, and markets—offers profound prospects for competitive advantage over rival firms.
As the molecular links among the things we make and macrolevel issues such as health, energy independence, and climate change become more widely understood, companies that think strategically about the chemical nature of their products and processes will emerge as leaders. A “think like a molecule” approach to designing materials, products, and processes gives entrepreneurs and product designers an advantage. By combining this mode of operating with systems thinking and the other sustainability approaches discussed in Chapter 4, Section 4.4, you will have a strategy that will enable you not to merely survive but to lead in the twenty-first century.
KEY TAKEAWAYS
• Invisible design considerations—for example, the design of molecular materials—must be factored into consideration of sustainability design.
• Green chemistry offers principles to guide chemical design and production.
• Thinking like a molecule opens new avenues for progress toward safer product innovation.
EXERCISE
1. Contact your local government and ask about chemical compounds from industrial and commercial activity that end up in the water and air. What are the government’s major concerns? What are the sources of problematic chemicals? What is being done to reduce their release? Go to blog.epa.gov/blog/2010/12/17/tri or http://www.epa.gov/tri to read about the Toxic Release Inventory. Search the inventory for evidence of hazardous chemicals used in your area.
Learning Objectives
• Understand the notion of weak ties.
• Know how and why weak ties contribute to innovation.
Firms that carve out positions on the cutting edge of sustainable business share a common feature. They reach out to attract new information from nontraditional sources. Developing the capacity to seek, absorb, and shape changing competitive conditions with respect to human activity and natural systems through weak tie cultivation holds a key to successful innovation.Mark Granovetter, “The Strength of Weak Ties,” American Journal of Sociology 78, no. 6 (1973): 1360–80. This is not surprising. Business success depends on continuous revitalization of strategic capabilities. Good strategy creates the future in which a company will succeed.
Not all individuals or companies can embrace change, however. In the past, revitalization of existing firms meant analysis of standard factors: competitors, market size and growth, product attributes, past consumer behavior, pricing strategies, and marketing programs. We suggest that limiting yourself to conventional analysis constrains strategic options.
To compete in the sustainability arena, companies must go beyond what has worked in the past and seek perspectives outside the historically assumed subset. We argue that incorporating rigorous sustainability analysis into your market positioning is likely to yield opportunities that can be keys to future success. What does this mean when it comes to environmental topics, opportunities in green chemistry applications, implementing sustainability principles in operations, and the myriad other environmental and health imperatives that fall under the term sustainability? It means developing what are called, in the academic literature on networks, weak ties with unconventional partners who provide you with increasingly essential strategic information. This does not mean that “the answer” will be easily found. It does mean that the net must be thrown wider to access information relevant to strategic success.
Sustainability innovation and entrepreneurship involves traveling across new ground. Imagine you will be accompanying the early nineteenth-century explorers Lewis and Clark to explore the unfamiliar territory of the American West. You will be the first European Americans to chart a course from the eastern seaboard to the Pacific Ocean. The year is 1803, and there are very few maps of the American interior. The ones that exist are sketchy at best. How would you prepare for such a journey? You might talk to your friends and acquaintances to learn what they know about the terrain you’ll be covering. To get the information necessary to survive this foray into the unknown, however, you would probably go outside your immediate circle to talk with trappers, Native Americans, French traders, natural scientists, and other voyagers—people from diverse walks of life. You would need to build new relationships, or weak ties, to access a wide range of people who will provide you with the necessary information to move forward.
These ties are called “weak” not because they lack substance or will disappoint you but because they lie outside the traditional network of relationships on which you or the company depends. Contrasting weak ties with “strong ties” highlights their unique characteristics. Strong ties, as a category of network relations, have immediate currency and often long-standing rich histories with extensive mutual exchange. An example in an established company would be an existing relationship with a funder or supplier; for a start-up, it may be someone with whom the company has a history of successful collaboration. Typically, strong ties are to people and organizations you see often and to which you frequently turn for input. In the case of large firms, important strong ties may be those formed between heads of independent business units within the same organization. Alternatively, they might be ties to reliable suppliers or even to the board of directors and the people with whom that group associates.
Research indicates, however, that the longer the duration of strong ties between two entities, the more similar the entities’ perspectives are. People from the same circles tend to share the same pools of information. Under normal circumstances this is fine; we augment and reinforce each other’s understanding of how the known world works. However, it is likely that information from strong ties will add only minimal value to the information you already possess. When we want to take action in an arena outside the familiar terrain, information from strong ties often proves insufficient. We would argue, moreover, that relying solely on strong ties can actually deprive you of information, thereby insulating you from potentially important emergent data and trends.
In contrast, weak ties bring new or previously marginal information to the forefront. They enable you to reach outside the normal boundaries of “relevant” strategic information. Weak ties trigger innovative thinking because they bring in fresh ideas—viewpoints likely to diverge from yours or from senior management’s—and data otherwise overlooked or dismissed because they have not been a priority historically. Sometimes the most fruitful weak ties are to individuals or organizations previously considered to be your adversaries. Not surprisingly, the most innovative ideas for success may well come from those quarters most critical of how business has traditionally been done.
To successfully traverse the relatively unfamiliar territory of entrepreneurship and sustainability, you need to seek information from weak ties to access emergent perspectives and new scientific data that make what used to be peripheral issues—as many ecological and environmental health issues have been—now salient to strategic success. Perspectives gained from weak ties enable discerning companies to differentiate themselves and gain relative to their competitors. They can be formed with a range of individuals and organizations—including academics, consultants, nonprofit research institutes, government research organizations, and nongovernmental organizations (NGOs). The latter community is often business’s harshest critic on environmental issues. It is for this reason that business is increasingly forming weak ties to NGOs to engage them in thinking strategically about solutions.
Toward this end, it is important to understand that the NGO community is not homogeneous. There is a spectrum of groups active on environmental and sustainability issues. They range from those that view business as antithetical to social and ecological concerns to those that seek partnership and joint solutions. Certainly any weak tie relationship requires due diligence, and partnerships must be considered carefully, but there is a wealth of untapped expertise and stakeholder value that is potentially available to you.
The accounts that follow illustrate effective use of weak ties to help craft sustainability strategies. Home Depot’s president Arthur Blank found new perspectives through weak ties by seeking input from NGOs critical of the company’s old-growth forest purchasing practices prior to 1999. That year Home Depot, the largest home improvement retailer in the United States, was also the largest lumber retailer in the world, selling between 5 and 10 percent of the global market. The company recorded \$38 billion in sales and employed more than 200,000 people in 930 stores. It also had been repeatedly voted “Most Admired Specialty Retailer.”
Faced with negative publicity and store boycotts by activist groups, however, the company’s openness to learning about alternative sourcing opportunities led to invitations to NGO representatives to meet with Home Depot’s senior management. Those new contacts—and the information flows they facilitated—helped put Home Depot on a track and timetable for dramatically reducing and ultimately ending old-growth forest wood purchasing and store sales. Stated Arthur Blank at the time, “Our pledge to our customers, associates and stockholders is that Home Depot will stop selling wood products from environmentally sensitive areas. Home Depot embraces its responsibility as a global leader to help protect endangered forests. By the end of 2002, we will eliminate from our stores wood from endangered areas—including certain lauan, redwood and cedar products—and give preference to ‘certified’ wood.”CBC News, “Home Depot Going Green,” November 10, 2000, accessed January 7, 2011, www.cbc.ca/money/story/1999/08/27/homedepot990827.html.
Certified wood is defined as lumber tracked from the forest, through manufacturing and distribution, to the customer to assure that harvesting the wood takes into account a balance of social, economic, and environmental factors. Home Depot’s ultimate goal was to sell only products made from certified lumber, but initially only about 1 percent of timber available was certified. How was Home Depot’s demand—let alone the industry’s—going to be met? The answer was that Home Depot’s decision moved markets. Vendors were asked to dramatically increase their supplies of certified lumber, driving demand back through the supply chain to lumber companies that expanded their activity in sustainably managed forestry.
Evidence that companies are seeking new perspectives grows each year as firms expand their range of conversations about improved practices to citizens groups, environmental scientists, and even international experts from other countries and industries. These groups are outsiders—examples of weak ties—because historically they have not been sought for strategically relevant information. Increasingly, however, this pattern is being adopted even by companies whose market scanning processes were previously limited to competitor data and narrowly conceived industry trends.
As the larger picture of economic activity’s impact on nature’s life support systems and the quality of life becomes more important to business, these ties now serve as conduits for knowledge on how and where the company might improve its overall strategy and performance. The known links between deforestation, climate change, and species extinction, combined with evidence implicating raw material processing methods in ecological and human health threats, including mutations, require, for fiduciary reasons, that companies buying and selling lumber pay attention to these issues. Firms that actively seek new perspectives that may have a bearing on their business success going forward will have a distinct advantage over those whose efforts are minimal, poorly designed, or viewed as marketing “greenwash.” Gaining true strategic leverage requires leadership. Home Depot was fortunate to have a leader with the broad intellect capable of seeing and implementing a wise path for the firm.
Statement from Home Depot on Wood Purchasing
We pledged to give preference to wood that has come from forests managed in a responsible way and to eliminate wood purchases from endangered regions of the world. Today there is limited scientific consensus on “endangered regions” of forestry. We have broadened our focus to understand the impact of our wood purchases in all regions and embrace the many social and economic issues that must be considered in recognizing “endangered regions” of forests. To fulfill the pledge, it was necessary to trace the origin of each and every wood product on our shelves. After years of research, we now know item by item—from lumber to broom handles, doors to molding and paneling to plywood—where our wood products are harvested.Home Depot, “Wood Purchasing,” accessed March 16, 2011, corporate.homedepot.com/wps/portal/Wood_Purchasing.
General Electric, Dell, and IKEA each pursued different types of weak ties. General Electric (GE) publicly announced the integration of environmental issues into product research and development (R&D) strategy and pursued weak ties to help develop strategy both by systematically contacting outside experts and by convening a series of gatherings of national experts and senior GE executives. In the process, GE unearthed previously unappreciated areas of technical innovation of great current and potential value to the company and launched a new corporate R&D strategy called “Ecomagination.” Dell worked with some of its harshest NGO critics to understand the emerging perspectives on managing electronic waste. The NGO links were Dell’s weak ties. This process of engagement not only helped Dell manage a public relations problem but—much to the company’s surprise—created a profitable new secondary service business that differentiated Dell as an industry leader in managing electronic waste. In another example, IKEA searched for assistance in its effort to reorient strategy after being embarrassed by a product that failed to meet European environmental regulatory requirements. IKEA’s weak tie to NGO consultant The Natural Step not only helped IKEA solve immediate product issues but helped fundamentally reorient company strategy on materials. IKEA’s openness to new information played a role in differentiating the company and augmented its existing reputation for design and low cost. The following sections include accounts of these companies’ activities with lessons to be learned about profitably pursuing weak ties.
GE
In June 2005, GE CEO Jeff Immelt announced GE’s new sustainability strategy, “Ecomagination,” at a press event with Jonathan Lash, executive director of the environmental nonprofit organization World Resources Institute. Ecomagination, said Immelt, aims to “focus our unique energy, technology, manufacturing, and infrastructure capabilities to develop tomorrow’s solutions such as solar energy, hybrid locomotives, fuel cells, lower-emission aircraft engines, lighter and stronger materials, efficient lighting, and water purification technology.”Joel Makower, “‘Ecomagination’: Inside GE’s Power Play,” GreenBiz, May 10, 2005, accessed December 3, 2010, http://www.greenbiz.com/news/columns_third.cfm?NewsID=28061.
Specifically, GE announced it would more than double its research investment in cleaner technologies, from \$700 million in 2004 to \$1.5 billion by 2010. GE also pledged to improve its own environmental performance by “reducing its greenhouse gas emissions 1% by 2012 and the intensity of its greenhouse gas emissions 30% by 2008, both compared to 2004 (based on the company’s projected growth, GE says its emissions would have otherwise risen 40% by 2012 without further action).”Joel Makower, “‘Ecomagination’: Inside GE’s Power Play,” GreenBiz, May 10, 2005, accessed December 3, 2010, http://www.greenbiz.com/news/columns_third.cfm?NewsID=28061.
GE’s 2005 strategy was driven to a large degree by the cultivation of weak ties. Characteristic of many large firms active in eco-efficiency, GE had long viewed itself as a leader in environmental productivity improvements because it built energy-efficient airplane engines and other smaller systems and appliances that dramatically reduced resource and electricity use. However, these were design improvements that lacked the broader sweep of a systems view. To bring in new thinking and develop a new competitive stance, GE’s senior management aggressively sought perspectives from atypical sources.Thanks to Jon Freedman at GE Water, formerly with GE corporate marketing and a leader in the Ecomagination policy development process, for information about GE’s activity.
The Ecomagination story begins in 2003 and 2004 when three-year strategic plans drawn up by GE’s business unit CEOs were presented to corporate CEO Jeff Immelt. These indicated market opportunities in green-friendly products across all the units. Core customers were asking for products designed to address escalating resource scarcity and pollution pressures. Clean water and clean energy featured prominently. At the same time, Immelt had received periodic inquiries publicly (in the form of shareholder petitions) and privately as to how GE would respond in an increasingly resource-constrained world. What was GE’s position on environmental issues? Did it have a position?
A project to research the questions and trends was assigned and scoped out. GE assembled a team to interview thought leaders and experts outside the company in a variety of sectors. Academic experts in many fields, futurists, other business leaders, and leading NGOs were systematically interviewed as part of the information gathering that ultimately informed top management.
Through this process, topics were identified as relevant to GE’s markets and offerings. In 2004, GE hosted by-invitation-only meetings of top GE decision makers and a subset of outside experts to look at trends in water and energy concerns five to ten years out. Major customers, the dozen top executives at GE including the CEO, and a select group of outside expert advisors were present at the meetings from beginning to end, an attendance record unusual in the corporate world. In total, over one hundred experts inside and outside GE were consulted, forty leading companies studied, and multiple internal GE seminars and brainstorming sessions convened to discuss megatrends influencing GE’s future businesses.
As a result of this process, GE found that it was already seeing \$10 billion annual revenues from existing green technologies and services. The relative value of this activity was unexpected. Rather than being something foreign or new, GE was already seeing high returns from existing green technology innovations. This perspective, when combined with the outside expert feedback on likely trends, confirmed for GE management that their efforts should be redoubled to generate revenues of at least \$20 billion by 2010, with application of more aggressive targets thereafter.
Clean Edge, a research and advisory firm, estimated in 2006 that global markets for three of GE’s identified technologies—wind power, solar photovoltaics, and fuel cells—would grow to more than \$100 billion within 10 years, from some \$16 billion in 2006. This figure did not include clean-water technologies, in which GE has also invested heavily. A previous study predicted that the market for world water treatment technologies would reach \$35 billion by 2007.Joel Makower, “‘Ecomagination’: Inside GE’s Power Play,” GreenBiz, May 10, 2005, accessed December 3, 2010, http://www.greenbiz.com/news/columns_third.cfm?NewsID=28061.
Weak ties influenced GE’s strategy formation in a number of ways. First, the ties helped GE design metrics to measure the current and potential values of some of its “green” technologies. One of GE’s weak ties was to GreenOrder, a New York–based consultancy specializing in sustainable business. According to GreenOrder, GE identified 17 products representing about \$10 billion in annual sales as part of the Ecomagination platform on which it planned to build. In doing so, the company undertook intensive processes to identify and qualify current Ecomagination products, analyzing the environmental attributes of GE products relative to benchmarks such as competitors’ best products, the installed base of products, regulatory standards, and historical performance. For each Ecomagination product, GE created an extensive “scorecard” quantifying the product’s environmental attributes, impacts, and benefits relative to comparable products.Joel Makower, “‘Ecomagination’: Inside GE’s Power Play,” GreenBiz, May 10, 2005, accessed December 3, 2010, http://www.greenbiz.com/news/columns_third.cfm?NewsID=28061. Doing this analysis was one of the key roles played by GreenOrder.
As a result of these metrics, GE’s corporate Global Research Center doubled its R&D spending on Ecomagination products and associated services. Business units were required to focus on enhanced internal environmental performance and new product offerings. By October 2005, a senior vice president and officer of the corporation was appointed who reported directly to the CEO and took responsibility for the quantitative tracking of business units’ progress to both “walk the talk” internally and drive new product ideas.
The firm’s strategy change was driven by a historically unprecedented search for new information that used many weak ties to gain emerging perspectives and new science data. This process gave senior management a broader view of global resource trends and allowed the company to gauge how it could best leverage its assets and capabilities to both profit from and contribute to solutions.
In contrast to many firms that are low-key about their environmental activities (to avoid criticism of falling short of the ideal), Jeff Immelt put GE out on a limb. The company, already criticized for environmental transgressions such as that in the Hudson River,In 2002 the EPA decided to dredge 2.65 million cubic yards of sediment—enough dirt to fill an area the size of ten football fields to a height of 145 feet—which is expected to cost GE about \$460 million. The dredging is aimed at removing polychlorinated biphenyls (PCBs) dumped into the river from GE plants in Hudson Falls, New York, and Fort Edward, New York, from 1947 to 1977, before PCB use was banned. Deborah Brunswick, “EPA: Hudson River Dredging Delayed,” CNNMoney, July 26, 2008, accessed December 3, 2010, money.cnn.com/2006/07/28/news/companies/hudson_river. will be held to a higher, self-defined standard. There is reasoned debate, moreover, on the “greenness” of some of the technologies that GE is putting forward (nuclear power, “clean” coal, etc.). No company with a brand as well known as GE’s can afford to not deliver. Time will tell how successful GE’s strategy will be, but suffice it to say that a company such as GE does not make such a significant and public move without a thoroughly reasoned strategy. The GE example shows the formative role that weak ties can play in a company’s strategic transformation.
Dell
Next, we look at Dell. One news report read, “Las Vegas, Nevada, January 9, 2002, environmentalists dressed in prison uniforms circled a collection of dusty computers outside the Consumer Electronics Show…to protest Dell Computer’s use of inmates to recycle computers. ‘I lost my job. I robbed a store. Went to jail. I got my job back,’ chanted five mock prisoners wearing ‘Dell Recycling Team’ signs and linked by chains. While Dell’s executives gathered at the huge electronics convention, the ‘high-tech chain gang,’ members of the Silicon Valley Toxics Coalition, attracted a small crowd outside.”Janelle Carter, “Senate Rejects Felon Vote Bid,” Associated Press, February 15, 2002, accessed December 10, 2011, www.sjcite.info/prison.html. Dell executives were understandably embarrassed by this incident. The assumption inside the company was that it was doing what it reasonably could about product recycling—a thorn in the paw of the industry lion. However, this public relations fiasco drew attention to an issue that no one in the industry was adequately addressing: electronic waste is a burgeoning problem that, if not dealt with, would come back to haunt all players in the industry.
Disposal of electronic products represents one of the fastest growing industrial waste streams. Roughly one thousand hazardous materials used in manufacturing personal computers alone pose problems of human exposure to heavy metals, drinking water contamination, and air quality problems. With the rapid retirement of old models, a staggering volume of computers and other electronic equipment now migrates around the world. Only a small fraction goes to reuse programs. The majority are shipped to landfills and incinerators, or sent as waste to foreign countries. In response to the public health threats from hazardous materials in electronics waste streams, the European Union, Japan, China, and states within the United States are regulating electronic waste. One such regulation in the European Union is the Restrictions on Hazardous Substances in Electrical and Electronic Equipment.NetRegs, “Restriction Of Hazardous Substances in Electrical and Electronic Equipment (RoHS),” last updated October 15, 2010, accessed December 3, 2010, www.netregs.gov.uk/netregs/63025.aspx. “Product take-back” laws—and the threat of more such regulations in the future—are stimulating companies to experiment with a variety of means to take back and reuse products. (See the sidebar in this section.) Whether you agree or disagree with these actions, they are one of many drivers of sustainability strategies today:
Producers will be responsible for taking back and recycling electrical and electronic equipment. This will provide incentives to design electrical and electronic equipment in an environmentally more efficient way, which takes waste management aspects fully into account. Consumers will be able to return their equipment free of charge. In order to prevent the generation of hazardous waste, the proposal for a Directive on the restriction of the use of certain hazardous substances requires the substitution of various heavy metals and brominated flame retardants in new electrical and electronic equipment from 1 January 2008 onwards.Proposal for a Directive of the European Parliament and of the Council on Waste Electrical and Electronic Equipment and on the restriction of the use of certain hazardous substances in electrical and electronic equipment. European Commission, “Recast of the WEEE and RoHS Directives proposed,” COM (2000), accessed March 16, 2011, ec.europa.eu/environment/waste/weee_index.htm.
Dell is one of the largest personal computer manufacturers in the world. It is an information technology supplier and partner and sells a comprehensive portfolio of products and services directly to customers worldwide. Dell dealt with a US government contractor, UNICOR, which employed prison inmates to recycle outdated computers. The justification was cost; since recycling products was assumed to be a net cost to the company, efforts were made to cut associated expenses.
In February 2002, the Basel Action Network released an alarming report about end-of-life electronics exported and dumped in Asia. The report, “Exporting Harm: The High-Tech Trashing of Asia,” focused a significant amount of media and NGO attention on what computer manufacturers were doing to offer customers options for responsible electronics disposal. Later that year, the Computer Take-Back Coalition launched its “Toxic Dude” website, targeting Dell for not doing enough on computer recycling and reuse. Socially responsible investors (SRIs) and a variety of NGOs, including the aforementioned Silicon Valley Toxics Coalition and the Texas Campaign for the Environment, increased pressure on Dell to do more about electronic waste issues.
Following the prison-garbed protest, Dell began engaging in frequent conversations with these and other NGOs. These were Dell’s weak ties—new sources of information outside the company. Dell found that having conversations with these groups helped the company create a more strategically astute direction for its product end-of-life programs. Dell, a relatively young company that had grown rapidly, had not previously formed relationships with health and environmental NGOs. Through these conversations, Dell fundamentally reconfigured its recycling and reuse services for customers. As a leader in supply-chain management, productivity, and efficiency, the company designed an “asset recovery” program for end-of-life products—a program that would maximize quality and minimize costs for its recycling programs. Much to Dell’s surprise, the program not only minimized cost but generated value while also enhancing Dell’s brand and reputation as a responsible corporate citizen.
Early in 2003, Dell restructured its recycling program to make it easier for users and more proactive for the company. The “Dell Recycling” program was simplified and made more visible to customers. The company launched a national recycling tour consisting of one-day no-cost computer recycling events in cities across the country, with the objective of raising consumer awareness of computer recycling issues and solutions. When Dell first offered printers among its array of products, the company included free recycling of old printers. Ongoing discussions with NGOs informed the approaches chosen.
In late 2003 Dell broadened its national network of approved recyclers by partnering with two private companies to support its environmental programs for retiring, disassembling, reusing, and recycling obsolete computer equipment. Dell discontinued its partnership with UNICOR. These changes helped Dell grow its environmental programs more quickly and efficiently, improve the economics and convenience for customers, and properly dispose of customers’ old systems with minimal environmental or health impact. Moreover, the company began to see value in reclaiming assets rather than just costs in disposing of waste, a fundamental reorientation that would not have been possible without the weak ties that helped the company rethink its relationship with waste.
Tod Arbogast, who led Dell’s sustainable business efforts, stated,
The early discussions we had with NGOs and SRIs led to brainstorming sessions both within the company and with these stakeholders. Stakeholder input helped shape what we are doing now and it continues to be a valuable dialogue to this day. We came to realize that we could meet both our business objectives as well as the environmental goals we were being asked to adopt with new product recovery services offered to our customers. For example, our product recovery programs for our business customers have both helped grow the amount of used computers we are recovering and have become profitable. We’ve taken this same focus of meeting both sustainability and business goals into many areas since then including workplace conditions in our supply chain, chemical use policies and regular transparent reporting on all of these efforts to a broad set of external stakeholders. Connecting our sustainability objectives to our business objectives helps us get a broader set of internal colleagues supporting our efforts and helps us continue to expand our sustainability programs.Tod Arbogast, interview by author in preparation of book manuscript, summer 2006.
By engaging with vocal critics and environmental advocates and having open and honest dialogue with NGOs, the company effectively improved its end-of-life disposal offers by making them easier, more affordable, and more visible to customers. Dell was able to reach outside the company to get the additional information it needed to make this possible. By learning from the feedback it received and adjusting several of its tactics for raising awareness among consumers about responsible computer recycling, Dell created what is today one of the industry’s most aggressive and comprehensive recycling offers. In addition to the positive brand enhancement that came with having an environmentally responsible business offer, Dell also gained from showing customers that it could manage the entire life cycle of its technology equipment.
The story of electronics waste is not over. Dell and other leading companies are under intense scrutiny by NGOs to fulfill their commitments on waste management and toxics issues. Moreover, as a society, we still have a long way to go. To inspire more corporate action, in 2005, Calvert Investments and other SRIs filed shareholder resolutions with six computer companies, asking them to begin planning for recycling and take-back. As a result, Dell was the first US computer company to commit to setting recycling and take-back goals for personal computers.
IKEA
Global home furnishings retailer IKEA was stunned by claims in the 1990s that one of its most popular products—the Billy bookcase—was off-gassing formaldehyde at levels above German government safety standards. The resulting crisis for this company led to IKEA’s search for ways to prevent such an issue from happening in the future. After talking with different environmental groups and receiving much criticism but little concrete direction, IKEA turned to The Natural Step (TNS), an environmental educational organization headquartered in Stockholm, Sweden. Karl Henrik Robèrt, founder of TNS and an oncologist who became an environmental health activist due to children’s inexplicably rising cancer rates, was repeatedly invited to talk with IKEA’s senior management team and train them in the TNS process. By teaching the group about overlooked market conditions that would increasingly impinge on IKEA’s worldwide practices, Robèrt catalyzed the group to commit to the first step of designing a green furniture line offering—and this weak tie ultimately helped IKEA develop its overarching sustainability strategy.
The task of “fixing” the company after its regulatory embarrassment seemed enormous to senior executives at the time. But the basic environmental education and criteria for designing both products and strategy offered by the TNS educational framework allowed the senior executives to see a path forward. The major learning point is that without seeking outside perspectives from the very groups that had been most critical of the corporation, IKEA would not have found Dr. Robèrt and the TNS ideas that were eventually integrated into the company’s strategy.
Working with Robèrt helped IKEA leaders see their industry from the outside; thereafter, they viewed steps transitioning toward “sustainable business” as noncontroversial. IKEA leaders were simply adapting to new scientific and health research data and integrating that data with their strategic choices. In their earliest experience with TNS, that meant certain chemicals known to be toxic to cells (causing cell mutation) would not be used in any production steps required to make residential household furniture. The solution of removing unsafe materials fit with IKEA’s corporate purpose of improving the lives of its customers.
The first concrete product that resulted from this solution was IKEA’s “eco-furniture” line, but the perspectives on materials and IKEA’s strategic positioning went far beyond one product line. IKEA continued to set some of the highest environmental strategy standards in the industry. As one of the first adopters of sustainability standards, IKEA has set the bar that others seek to match. The company’s initial corporate environmental action plan was called Green Steps, which was based on four intended actions/conditions posed in the form of questions:
1. Is the company systematically reducing its dependency on mining and nonrenewable sources?
2. Is the company reducing the use of long-lasting, unnatural substances?
3. Is the company reducing its encroachment on nature and its functions?
4. Is the company reducing unnecessary use of resources?
To ensure this policy is followed, IKEA trains all employees and regularly provides them with clear and up-to-date environmental information. The company also established an internal Environment Council, and all business plans and reports describe environmental measures and costs pertaining to the Green Steps.
IKEA does not manufacture its own products but instead commands a large international supply chain. The IKEA Group has nearly 220 stores in 33 countries. Nearly 1,600 suppliers manufacture products for IKEA. IKEA’s purchasing is carried out through 43 trading service offices around the world. IKEA mainly sources from European countries, but purchases from developing countries and countries in transition are rapidly increasing. A limited part of the supply comes from the industrial group of IKEA, Swedwood, which has 35 factories in 9 countries.
IKEA has taken steps to work with and educate current and potential suppliers on its environmental specifications and expectations. In this way, the company is shifting the industry standards, as captured in “The IKEA Way on Purchasing Home Furnishing Products” (IWAY). This guiding document supports the IKEA vision and business idea, outlining in great detail its expectations and procedures for suppliers. IWAY is administered and monitored by IKEA of Sweden Trading Services Office and by a global compliance group.“IKEA & the Environment—An Interview with Anders Berglund,” EarthShare Washington, accessed December 3, 2010, www.esw.org/giving/ikea.html.
IKEA has won many environmental business awards and is a leader in setting high standards for its products, particularly environmental standards. As one of the early adopters of a green strategic approach to how it conducts business, IKEA now enjoys brand recognition as the company that not only sells low cost, well-designed home furnishings but clean and safe products as well.
These examples illustrate senior managers responding to a changing business environment by establishing weak ties to outsiders who provide content on a new strategic direction for the company. These managers took advice from sources considered unconventional—even threatening—and used it for their companies’ financial and strategic gain. In these cases, we see three types of weak ties: to professional experts, to NGOs, and to an environmental educational organization.
There is no way to predict what outside source will offer weak tie benefits to your venture. However, a good way to find such sources is to trace connections from your insider strong-tie group outward to relevant outsider voices. As noted, environmental groups and other NGOs are not homogeneous; some are more willing and able to work with entrepreneurs and companies than others. Certain leaders and their organizations are well established and widely respected. You need to research the topics that represent opportunities for your venture and then identify individuals and organizations with whom conversation may be fruitful. Ideally, you want to initiate weak tie conversations with individuals and groups aligned with sustainability solutions who do not take issue with your proposed or existing practices. You need a set of weak ties willing to join with you over time to help inform strategy.
In summary, if entrepreneurs do not seek outsider perspectives on the shifting state of the competitive game, they will be blinded to forces that hold, in some cases, the overnight potential to undermine the venture’s efforts. On the positive side, access to emergent perspectives and new scientific data on sustainability issues holds promise of strategic advantage. Access to this information enables discerning entrepreneurs to gain relative to competitors because information flows from weak ties bring tighter cohesion between a firm’s strategic thinking and the shifting conditions that shape market opportunities. Weak ties are a bridge to innovation, competitive differentiation, and new market opportunities.This discussion draws on the work of Mark Granovetter, “The Strength of Weak Ties: A Network Theory Revisited,” Sociological Theory 1 (1983): 201–33, accessed March 7, 2011, www.si.umich.edu/~rfrost/courses/SI110/readings/In_Out_and_ Beyond/Granovetter.pdf. Using weak ties for sustainability innovation can be understood as a parallel to adaptation in biology. As the complexity of business decisions and market dynamics grows, the effective use of weak ties can mean the difference between learning and not learning, at the individual, corporate, and supply-chain levels. We would argue that in the twenty-first century, it is essential to seek better information drawn from wider sources logically linked to a firm’s social and environmental footprint to adapt intelligently.
KEY TAKEAWAYS
• Incorporating sustainability considerations into business requires reaching out beyond conventional sources of business information.
• Entrepreneurs and businesses that tap into weak tie relationships around sustainability concerns can use them to find new ideas for products and services.
• Adaptation to the new business conditions in which environmental, health, and community concerns have become more important requires cultivation of weak ties.
EXERCISE
1. Identify a business you would like to create. What health, community, and environmental concerns might emerge as you imagine building your firm? Where would you turn for advice and information to anticipate how you should respond? Why?
Learning Objectives
1. Understand how implementation is carried out.
2. Learn about collaborative processes for adaptation and innovation.
Value-added networks (VANs) are necessary to implement sustainability innovation strategies; VANs provide the horsepower to implement projects and are the means to translate your strategic vision into competitive products or services. VANs are action oriented and results driven.
VANs are distinct from weak ties. The primary contribution of weak ties is new and diverse information that links strategy more coherently with broader systemic forces. Weak ties bridge the corporation to the “outside” world’s events and stakeholders. In contrast, VANs are composed of closer and stronger ties within your firm and its inner circle of collaborators. They are ties that can be intentionally and strategically joined to add value throughout the implementation process. Weak ties also differ from VANs in that they might be critics or even opponents of your company. The purpose of weak ties is information access beyond the known and the predictable, while the purpose of VANs is to take action. Weak ties serve an essential role for bringing creative alternative perspectives to the business at the options generation stage. VANs enable adaptive collaboration.
VANs can offer a wealth of creativity in the implementation process. VANs can be familiar faces in your backyard, or they might include suppliers or customers. They are an untapped, underappreciated resource for implementation ideas, feedback, and adaption as a plan is implemented. Rarely do company executives directly create and monitor VANs. More often they create the circumstances and culture that allow VANs to form and the protection and incentives for them to be effective. Our research indicates that where sustainability innovation strategies are successfully implemented, a group had come together with sufficient senior backing and the skills, resources, and authority to drive the project forward. It should perhaps go without saying that VANs tend to be more successful in implementing sustainability innovations in companies already open to change and known to be culturally innovative.
Membership in VANs can be formal or informal. If sustainability goals have been embraced by a company, the process might be more formal. If sustainability is being explored by only a subset of the firm, but resources and legitimacy are present, the process may be more organic. Sometimes all that is lacking to catalyze a VAN is the context for the right question, for example, asking a long-standing supplier, “Can we do this better if we integrate environmental/sustainability attributes?” When asked to provide greener, more benign materials, a supplier replied to one of the managers interviewed for this book, “Yes, sure, we can do that. You just never asked before.” In this situation, the collaborative VAN simply emerged, its leaders and other participants identifying themselves by stepping forward once the space is created for them to act and flourish.
VANs are often informal structures; they are interwoven in and under the firm’s formal administrative and functional hierarchies. However VANs are structured for a firm’s circumstances, there are certain things entrepreneurs and managers can do to provide conditions conducive to innovation. First, incentives for innovation and experimentation must be part of the picture. Making it safe to experiment is another essential element, as is fostering a culture where “there are no dumb questions” or “issues off the table.” Creating special, finite committees or advisory panels may be an effective approach for your context; if it is, be sure you reward members for their participation.
The VANs discussed here are the sets of relationships mobilized around sustainability innovation that contribute specific resources to converting ideas into action. In short, VANs are your nearest and best resource for inspiration, input, and feedback on how you can improve what you do and for practical ideas on how to implement and modify sustainability practices.
The examples that follow illustrate companies and individuals able to implement sustainability strategies by drawing knowledge and resources from VANs. Walden Paddlers’ VAN, under the direction of the entrepreneur-founder, illustrates that organizational boundaries—and as we will discuss, even the existence of an organization in some instances—are irrelevant to successful implementation. This example may seem odd to those unfamiliar with the rise of virtual organizations and virtual companies since the 1990s. The Walden Paddlers example is a powerful way of showing the effectiveness of determined efforts to employ VANs to implement sustainability strategy visions regardless of organizational structure.
Moreover, VANs can serve to implement strategy in diverse settings: Walden Paddlers was a fledgling enterprise and United Technologies Corporation (UTC) an established, multibillion-dollar global company. Walden had no existing procedures; UTC has decades of established operations procedures. Walden makes recreational kayaks; UTC makes massive industrial products. The companies have very different circumstances yet use similar strategies and tactics.
Walden illustrates how a sustainability innovation vision can create and mobilize a network and resources around cutting-edge product innovations. Perhaps because sustainability goals can resonate strongly with the values of contributors, VANs can build a distinct energy and momentum. The vision defined by sustainability objectives acts like an extra lift under a VAN’s wings. The UTC example shows how VANs form between innovators across functionalities. To borrow from UTC’s experience: work with innovators in other fields. Differentiation is a moving target; your VAN can help you stay on top of it and continually redefine it.
Tactics for Catalyzing Value-Added Networks
• Start with a compelling vision.
• Don’t take “no” for an answer—find people whose values align with yours.
• Work with innovators in other fields.
There will always be pessimists, the lazy, the comfortable, and people whose income depends on continuing the existing way of doing business. These are not the people you want in your VANs. Their attitude is “no,” and they bring imaginations to match. Entrepreneur Paul Farrow’s launch, successful growth, and ultimate sale of Walden Paddlers provide an unusual illustration of building a VAN to successfully implement strategy. All new initiatives and fledgling enterprises are start-ups and need to recruit resource- and information-rich participants by building lateral networks. In most companies, implementing sustainability strategy will, to a certain extent, constitute a deviation from the norm because it represents a new activity with all the characteristics of entrepreneurial initiatives. This means creating networks of like-minded others who understand and rally behind a powerful vision.
This account provides the core steps that enabled this VAN to succeed. Grit and determination to proceed despite hearing repeated discouraging feedback is part of the process. VANs share this with any innovation process, but remember that strategy that incorporates sustainability values into the core represents a larger and more far-reaching innovation of knowledge and meaning than a new product alone.
Walden Paddlers
Walden Paddlers represented a sustainability-oriented company from its inception. Paul Farrow built his company and core VAN from scratch. One day, on vacation in Maine, he made a back-of-the-envelope calculation suggesting that recycled plastics made into recreational kayaks represented a market opportunity—thirty-five pounds of plastic at forty cents per pound sold for more than four hundred dollars at retail. Farrow saw the possibility of a higher quality product at a lower price to the user, and a profitable company. The question he pondered was whether he could create a new market space for kayaks made from used milk bottles. All he knew at that point was that he had a business idea worth exploring. He knew nothing about kayaks (except enjoying them for recreation) or recycled plastic, but he did know a little about plastics manufacturing.
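Farrow's back-of-the-envelope arithmetic can be sketched directly from the figures in the case: roughly thirty-five pounds of recycled plastic at forty cents per pound, against a retail price above four hundred dollars. The calculation below uses only those numbers; any further margin split (labor, molding, distribution) is not given in the case and is deliberately omitted.

```python
# Unit economics behind Farrow's market-opportunity hunch.
# Inputs are the case's own figures: ~35 lb of recycled resin at
# ~$0.40/lb, sold for more than $400 at retail.

def material_cost(pounds, price_per_pound):
    """Raw material cost for one kayak hull."""
    return pounds * price_per_pound

def material_share_of_retail(pounds, price_per_pound, retail_price):
    """Fraction of the retail price consumed by raw material."""
    return material_cost(pounds, price_per_pound) / retail_price

cost = material_cost(35, 0.40)                    # $14.00 of resin
share = material_share_of_retail(35, 0.40, 400)   # 3.5% of retail
print(f"material cost: ${cost:.2f} ({share:.1%} of a $400 retail price)")
```

With raw material at only a few percent of the retail price, the envelope math leaves generous room for manufacturing, distribution, and profit, which is what made the opportunity worth exploring despite the expert skepticism described next.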
The project began as many sustainability initiatives do. He talked with people he expected to understand his vision, experts in plastics and material science. He was summarily informed by materials specialists from preeminent Boston-area academic establishments that no one could make high-performance plastic for recreational kayaks from recycled materials. It was common knowledge; the composition of recycled plastics made it impossible. The recycled resins, appropriate for downcycling into speed bumps or perhaps waste cans, would not yield high-performance, aesthetically attractive kayak hulls. Furthermore, the industry lacked equipment to handle the new material and specifications. In conclusion, it could not be done.
Challenging the received wisdom of experts requires reaching beyond them to more open-minded fellow travelers, those with less invested in existing knowledge, objectives, and methods. With only his aspiration of earning a living doing something he believed in and that would help protect the natural environment, and a vague picture of using recycled resins to create a kayak of some sort (for a market that might or might not exist), Paul Farrow kept talking to people about his idea and gathering data. He sought the advice of materials science experts who would take his ideas seriously. He conducted research on the prospective customer segment and communicated through his extended family and network of friends that he had this crazy idea. In the process, he found a few receptive individuals who were willing to talk with him and consider the possibilities.
Your VAN can take form from unexpected locations. Reminded by his wife that he had a brother-in-law attending Rensselaer Polytechnic Institute in New York state, Farrow made some phone calls. His brother-in-law had taken a course on materials with a nationally known professor. Through persistence, several phone calls later Farrow connected with the professor, who had recently started a company with one of his former engineering students, Jeff Allott. Allott, now a product designer for the company General Composites, was coincidentally a paddle sports enthusiast and was intrigued by Farrow’s plan. Allott was also anticipating that the company’s government contracts would taper off in the near future, and General Composites needed to diversify. Moreover, Allott liked the notion of designing an unprecedented material that the experts had deemed impossible to create. Why not create a high-performance, aesthetically attractive, inexpensive recreational kayak from recycled milk bottles? Why can’t positive expectations for health, ecology, community, and financial gains be optimized simultaneously?
This was a typical entrepreneurial endeavor during which Farrow repeatedly heard “no” in response to his questions. Eventually he received a “maybe” from a more imaginative individual who could see the new market space. The pattern of “no” and a few “maybes” repeated itself with manufacturers, national retailers, distributors, and component suppliers. From his innumerable rejections, Farrow collected valuable information about how to implement his vision, which he used to refine and recalibrate his plan. In this learning process Farrow’s VAN identified itself in a self-selecting, self-organizing fashion typical of new enterprises.
Each node in the network was a person with close knowledge about how to implement the proposal. Each suggested ways forward and was willing to collaborate with untested strategy, design protocols, product ideas, and market segment definitions that had unknown but possibly significant returns. Farrow also tapped into each individual’s sense of competitive challenge, fun, and creativity posed by accomplishing something the so-called experts said was impossible. The results of the process were a set of innovations, an award-winning kayak, and a profitable company.
This story teaches the necessity of carefully selecting VAN participants whose goals are aligned with yours. The first manufacturer to sign on was Hardigg Industries. Its manufacturing manager was curious about working with the new recycled plastic resins and driven by the economic pressure of unused plant capacity. This seasoned manager was also interested in the prospect of growing a new customer base in recycled plastic molding. In fact, Hardigg’s management was so motivated to try new approaches in recycled plastics that it contributed capital to the start-up by agreeing to generous terms that acknowledged the start-up’s cash-strapped condition. Hardigg invested \$200,000 in new equipment and drew up a flexible, informal contract based on shared returns and aligned future interests should the venture take off.
The start-up’s next phase illustrates how sustainability innovations are created. Extensive experimentation with different plastic compounds and resin colors followed. There were adjustments to the equipment to modulate temperatures and vary cooling times and methods. Farrow, along with the manufacturer and the designer, spent many hours testing, analyzing, discussing, and retesting. It was a microcosm of any implementation situation characterized by innovation and entrepreneurial process: learn as you go, draw from the creativity and imagination of your partners, collaborate, adapt and incorporate new knowledge along the way, and allow the feedback and events to shape the path and even the destination.
Entrepreneurs need to keep searching for allies to fill in the VAN gaps. The right mix of recycled plastic had to be developed to match the materials specifications of the product and the high heat demands of the molding equipment. Turned down by multiple plastic recyclers, Farrow finally found a Connecticut recycler who was trying to build his business and had a reputation for being open to new ideas. That recycler joined the emerging VAN and experimented with different collected plastics, testing a variety of pellets for melt consistency, texture, and color. More weeks of prototype experimentation unfolded, with Paul Farrow, Jeff Allott, the recycler, and the head of manufacturing at Hardigg designing and redesigning incrementally, ultimately succeeding in producing the first kayak.
Now Farrow had to address how to sell the kayak. What was the least expensive and most leveraged way to test the market? Attracted to the idea of selling more environmentally responsible kayaks, leading national sports equipment retailers were open to Farrow’s product ideas. From extensive discussions with retailers such as REI, Eastern Mountain Sports, and L. L. Bean emerged optimal wholesale and retail pricing strategies, creative in-store marketing, and colorful packaging that protected the kayak when placed on a vehicle roof rack. In other words, the collaborative retailers literally told Farrow what decisions to make on pricing, marketing, and packaging to optimize sales.
A successful VAN process will elicit energy and initiative from those self-selected to be involved because they know that business, the environment, and communities are not separate. Explicit sustainability strategies attract committed people and release their creativity. Dale Vetter, an operations expert and Farrow’s friend and former business colleague, was drawn into the business, bringing operating skills that complemented Farrow’s finance know-how and general management experience. Vetter’s creative redesign of the transport system that moved the kayaks from the manufacturer to Walden’s tiny warehouse and office headquarters outside Boston resulted in dramatically improved logistics efficiencies and reduced labor costs. The kayak seat supplier was persuaded by Farrow and Vetter to take back its packaging, ultimately saving itself money when it discovered a method to recycle its packaging materials. This allowed Walden to avoid expensive Boston-area waste disposal fees.
Farrow has downplayed the challenges of creating his company, yet in its time Walden Paddlers implemented an early model of sustainability innovation that functioned under an innovative corporate structure. The company was one of the earliest documented virtual corporations.See also extensive literature on “network organizations”: Mark Granovetter, “Economic Action and Social Structure: A Theory of Embeddedness,” American Journal of Sociology 91 (1985): 481–510; Walter W. Powell, “Neither Market Nor Hierarchy: Network Forms of Organization,” in Research in Organizational Behavior, ed. Barry M. Staw and L. L. Cummings (Greenwich, CT: JAI, 1990), 12:295–336; Andrea Larson, “Social Control and Economic Exchange: Conceptualizing Network Organizational Forms” (paper presented at the Annual Meeting of the American Sociological Association, Washington, DC, August 1990); Walter W. Powell, “Hybrid Organizational Arrangements: New Form or Transitional Development?,” California Management Review 30, no. 1 (1983): 67–87; H. B. Thorelli, “Networks: Between Markets and Hierarchies,” Strategic Management Journal 7 (1986): 37–51; Andrea Larson with Jennifer Starr, “A Network Model of Organization Formation,” Entrepreneurship Theory and Practice 17, no. 2 (Winter 1993): 5–15; Andrea Larson, “Network Dyads in Entrepreneurial Settings: A Study of the Governance of Exchange Relationships,” Administrative Science Quarterly 37, no. 1 (March 1992): 76–104; Andrea Larson, “Partner Networks: Leveraging External Ties to Improve Entrepreneurial Performance,” Journal of Business Venturing 6, no. 3 (May 1991): 173–88; Andrea Larson, “Strategic Alliances: A Study of Entrepreneurial Strategies for the 1990s” (paper presented at the Eleventh Annual Babson College Entrepreneurship Research Conference, Babson College, Babson Park, MA, 1991). The company continued to innovate in materials, product design, transportation systems, vendor relations, and wholesale buyer collaborations.
Farrow was a sincere, informed, and modest yet passionate catalyst. Each VAN participant got hooked on his vision, and Farrow worked to ensure their economic interests were aligned. Both vision and potential returns were critical.
VAN participants, along with Farrow, heard discouraging comments throughout the start-up’s early stages. Farrow laughed as he said, “You have to get used to hearing ‘no.’ Your attitude has to be, ‘so what’? So you hear ‘no’ repeatedly.”Paul Farrow, interview with author, July 1996. Farrow’s casual way of talking about the implementation process masked his determination, persistence, and willingness to learn and adapt and to compromise when economic necessity required. The perfect would not shut out the good. His attitude was contagious and created the required commitment to make this idea fly. He commented on the people who said “no” to him: “Those people just have less imagination. But those aren’t the ones you want to work with. Do people think I’m a little odd in my passion for the vision? Sure, but you keep talking to people until you find the right partners who believe and will work hard to make the impossible happen.”
The Walden Paddlers case shows how you may need to create and inspire your VAN while you are on the journey. If there are no precedents, the VAN literally creates what it is doing as it goes forward. Farrow had only one of the requirements needed to build a company: a vague idea backed by some rough financial calculations. He needed a materials specialist to design the first kayak from recycled plastic because he knew nothing about designing kayaks and even less about materials science. He needed manufacturers with knowledge of molding equipment. He needed operations capability, administrative processes for health benefits and hiring, transportation services, and retail and wholesale outlets. Yet within eight years he had built a virtual corporation before “virtual” or “network” organizations were recognized as legitimate forms for business. He defied conventional wisdom on materials design and sold high-performance, aesthetically attractive, 100 percent recycled and recyclable recreational kayaks through nationally known retailer chains. In addition, he sold his company at an undisclosed price, gave himself time off to build a vacation home with his wife and three sons, then took on a new corporate sustainability challenge with a small, growing company. How did he do it? It was important that he didn’t accept the notion that his vision could not be realized. He formed his VAN of like-minded others and together they made it real.
What else can we learn from this case? Farrow questioned the conventional business wisdom—a common practice among entrepreneurial individuals. Their commitment to an unproven premise can be intense, and they may seem as though they can will vision into action and results single-handedly. However, implementation needs and invites collaborators.
Another lesson from the Walden Paddlers’ example is that it took patience to allow solutions to emerge and evolve from the network participants’ contributions. All participants had to be open to learning and finding the right “partners” willing to go outside their comfort and expertise zones to invest time and resources in a new idea. Don’t be surprised if it takes time to find willing partners. There are too many strong influences at work that cause people and firms to be insular.
Finally, you don’t need extensive resources, just enough to get to the next step. At every stage, the VAN became more closely aligned, tapping into its growing collective wisdom, imagination, and resources. The most underrated resource for breakthrough ideas might be the network of people you already know inside your firm or the network you can build outside through your company’s supplier and customer relationships.
Creativity and imagination drawn from people who initially may be considered outsiders can be pivotal to a company’s success. These individuals and their institutions can come to have a strong stake in the outcome, and they have the knowledge to generate paths forward that otherwise would remain latent. In Paul Farrow’s case, there were no vertically integrated functions; he was building from the ground up. Within an established firm, some functional activities in the VAN are typically incorporated into the formal boundaries of the organization (e.g., design, product development, manufacturing, marketing, sales). Others lie outside with suppliers and buyers or other key allies. Implementation requires you to ignore conventional corporate boundaries and view the VAN as a lateral web of information and material flows through which ideas and resources can be mobilized. There is no reason not to tap into this potential power.
United Technologies Corporation
United Technologies Corporation (UTC), despite its large size and dominance in mature markets with mature products, remains remarkably innovative, including its leadership in sustainability strategy. In the 1990s, UTC CEO George David announced the company’s goal of reducing its environmental footprint by a factor of ten. Explicitly committed to sustainability from the top, UTC was ahead of its time for an aerospace and building products and services firm. Management has since driven resource use efficiency programs through the business units and transitioned into new product designs that provide the power and performance people want for vehicles and operations while advancing sustainability’s agenda of positive health, ecological, and overall natural-system robustness.
Its disciplined process of bringing innovative ideas to market explains UTC’s success over the years. The keys to UTC’s success were highly motivated VANs formed across business units and with outside customers and supply-chain participants that drove the new ideas to successful commercialization. These VANs are at the leading edge of solving problems with technology and market receptivity and are characterized by creative and innovative participants who bring extra dedication to sustainability ideas.
The company’s alternative power products business unit, UTC Power, faced a challenge, however. UTC’s goal for that unit was to shift the market paradigm for power generation in stationary applications and transportation. The issues for large power consumers are straightforward. Customers want energy efficiency and reliability, lower bills, and protection from grid outages. They need system resiliency to assure ongoing operations and customer satisfaction in case of weather or other disruptions. For example, supermarket chains, hotels, and hospitals experienced the impact of Hurricane Katrina and the human and financial losses when their doors had to be closed.
UTC Power has a portfolio of solutions that offers power generation solutions in a variety of new technology combinations. However, when you are working with new products and new markets, a paradigm shift requires extraordinary effort. In UTC Power’s case, you see examples that build on the company’s competencies in technology innovation and management of massive supply chains to form VANs with more creativity than the norm. Jan van Dokkum, president of the UTC Power business, described the unique VAN situation as follows: “We carefully analyze the market for opportunities to improve emissions and efficiency. We then work closely with UTRC [UTC Research Center], buy standard, volume-produced equipment, optimize the system, and, finally, work with the customer to deliver high levels of service.”Jan van Dokkum, phone interview with author, June 21, 2001.
UTC’s PureComfort heating and cooling energy system is a good example. The PureComfort system offers the customer three features in one: electrical power, heating, and cooling. The system operates either off the electrical grid or connected to it and thus can serve as a cheaper and more reliable ongoing operating power source, even when the grid goes out. Highly motivated existing VANs at UTC drive conventional products and markets effectively, but for a new product and new markets plus a sustainability-focused change, there are extra drivers, particularly once the product goes to market. The PureComfort system project began under the leadership of the corporate UTRC, working with autonomous business units Carrier and UTC Power. The group brainstormed combining their expertise to produce new products for new markets. They looked for ways to improve building system efficiencies by using the “waste” from power-generating equipment (e.g., microturbines or reciprocating engines) as a “fuel” for heating and cooling equipment. They collected the hot exhaust from the supplier-produced microturbines and ran it to a Carrier double-effect absorption chiller, which produces hot and cool water. They found the exhaust flow rate and temperature ideal for generating cold or hot water, thus creating three-in-one equipment producing on-site electricity, hot water, and cold water for refrigeration.
The A&P supermarket chain installed a PureComfort system in its store in Mount Kisco, New York. A&P chose the highly efficient heating, cooling, and power system because it leads to energy savings and ultimately reduces the store’s dependence on the grid. The new rooftop unit uses underground-supplied natural gas to generate electricity for the store. Then it generates cold water, runs it to refrigerator “chillers,” and provides heat when needed. The UTC PureComfort unit produces combined power, heating, and cooling at greater than 80 percent efficiency rates compared to approximately 33 percent from the electric grid. Remote monitoring by UTC Power means the company’s service people will be at the A&P store to fix a problem before the people at A&P even realize one exists.
Meeting customers’ multiple cooling, heating, and power needs with an innovative integrated, reliable on-site system solution at a cost reduction from existing options addressed UTC Power’s strategic goals to deliver new products and new revenues. At the same time these offerings provided very low emissions, reduced customers’ energy costs, lowered grid dependence, and assured standby power supply. While it would not have necessarily called its strategy “green,” and its sales force is not necessarily hearing the term “sustainability” from its customers, UTC Power nonetheless has incorporated the core ideas into its strategy. These products provide safer, cleaner, and more reliable power sources than the alternatives available, at commensurate prices that are less expensive when full costs are considered.
However, the issue was not whether the PureComfort system met buyer needs or satisfied sustainability requirements; it did. The challenge was whether customers’ standard way of meeting power needs—paying for electricity from the grid—could change to a solution that required new purchase practices and economic calculations as well as different impacts on the company’s profit and loss statement and balance sheets.
Breakthroughs happen when VAN teams can tap into an intangible creativity source in sustainability agendas: the energy, the extra little bit of horsepower, or a passion for the technology and market changes. UTC Power experienced this type of breakthrough in its work with the city of London and the Ritz Carlton hotel chain in San Francisco. In each situation the VAN participants were well known for being creative, innovative, and willing to spend extra time to find solutions. New competitive space and successful positioning in that space were realized by firms working with other firms also positioned in the same market frontier.
The catalyst for this creativity is the process dynamics of UTC Power’s technology design to achieve clean, safe, reasonably priced products combined with supply-chain partners that want to save money and assure performance but also have an absolute commitment to creating sustainability solutions through redesign of products and procedures. This means there is more continuity and commitment in teams because participants are passionate about seeing their ideas come to fruition. VAN participants will go the extra distance. When innovators talk with other innovators about how to implement sustainability innovations, results are achieved.
UTC Power uses its internal, highly disciplined product development process and committed working relationships with buyers and original equipment manufacturers to accelerate learning and feedback and to improve its power products. UTRC also employs an innovation effort, working with the business units that have identified UTC technologies for new, market-ready products and markets. The PureComfort system process started with a small, multidivisional group looking at opportunities at the intersection of power, heating, and cooling.
Brainstorming engineers, who did not usually work together, found the intersection of power, heating, and cooling rife with possibilities and developed a second product, known as the PureCycle 200 system. Together they altered standard Carrier industrial cooling equipment by converting it to run “backward”; instead of using electricity to produce cooling, the system uses waste heat to produce electricity. The system uses field-tested Carrier technology to provide turnkey, zero-emission, reliable, low-cost electricity from various industrial heat sources. The electricity can be used on-site where it is produced or sold to the electric utility grid. Customers can potentially make money by offsetting traditional fossil fuel electricity generation. The payback and savings depend on the geographic location in the United States and the price of the displaced energy.
It is not necessarily easy building new types of supply-chain relationships to implement sustainability innovations. In UTC Power’s case, cross-business unit sales and service provisions had to be tightly coordinated, and getting electric utilities to buy excess power from buyers has been an uphill battle. Even with these challenges, a major obstacle lies in developing trust with the end users, specifically the facility leader who makes the purchase decision and who is paid to be conservative. It is a tough sell because the system (though not the components) is new. It is mechanical and therefore may need servicing. Facility managers fear the unit will fail, and they have to be educated about the system, which takes time. Finally, having the system installed may seem “inconvenient,” as it can disrupt current operations during the switch.
Thus the value proposition has to be communicated effectively. UTC Power has developed economic models that show payback time frames for equipment installed in different geographic locations according to size of facility, electricity rates based on different fuel sources, and seasonal demand. In addition, a turnkey service contract is offered that monitors units from UTC centers in Charlotte or Hartford where operators have the technological ability to locate errors. As UTC Power continued to refine its extensive supply-chain coordination, more new opportunities for innovation emerged.
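The economic models mentioned above are, at their simplest, payback calculations: installed cost recovered through annual energy savings, which vary with facility size, electricity rates, and seasonal demand. The sketch below shows that structure; every numeric input is a hypothetical placeholder, since the case discloses no actual figures.

```python
# Simple-payback sketch of the kind of economic model described in
# the text. All numeric inputs are hypothetical placeholders.

def annual_savings(kwh_per_year, grid_rate, onsite_cost_per_kwh):
    """Yearly dollars saved by displacing grid power with on-site power."""
    return kwh_per_year * (grid_rate - onsite_cost_per_kwh)

def simple_payback_years(installed_cost, yearly_savings):
    """Years until cumulative savings recover the installed cost."""
    return installed_cost / yearly_savings

savings = annual_savings(500_000, 0.14, 0.09)      # $25,000 per year
years = simple_payback_years(200_000, savings)     # 8.0 years
print(f"annual savings ${savings:,.0f}; payback {years:.1f} years")
```

The “all in service” contract described in the next paragraph changes this picture by eliminating the up-front installed cost entirely, matching payments to the savings as they accrue.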
Fuel price volatility, changing and more violent weather patterns, deregulation, supply interruptions, and rolling blackouts and brownouts in the Northeast and California have generated considerable interest in distributed (nongrid, noncentralized), on-site, clean and reliable electricity, heat, and cooling power sources. To capture this interest while overcoming the natural resistance of cautious buyers is still a challenge. UTC Power and UTC are addressing this challenge by creating an “all in service” solution. Through a long-term contract, a customer avoids the up-front cash cost and spreads it over time, thereby better matching the cost with the energy savings.
Another value proposition involves public health. An important sign of change that should be noted by all managers is occurring in UTC Power’s urban bus transportation markets. Buyers such as the city of London and AC Transit in Oakland, California, are building previously externalized health costs into their purchase decisions. A regional public transit authority, AC Transit considers the cost of respiratory and other air-pollution-related illness resulting from diesel gasoline combustion, particularly from buses. Incorporating more of the full system costs into the equation shifts the price-performance calculation for conventional bus drivetrains compared with fuel cell systems. The price of the latter looks more attractive when adjusted downward by health cost savings due to reduced particulate matter and other air pollutants from transportation.
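The shift described above, internalizing health costs into the purchase decision, can be expressed as a simple full-cost comparison: each drivetrain's price plus its air-pollution-related health costs over the service life. The figures below are hypothetical placeholders chosen only to show how a higher sticker price can still win on a full-cost basis; they are not data from AC Transit or the city of London.

```python
# Full-cost comparison that internalizes externalized health costs,
# as described for the bus-transit buyers. All figures hypothetical.

def full_cost(purchase_price, annual_health_cost, years):
    """Sticker price plus pollution-related health costs over the life."""
    return purchase_price + annual_health_cost * years

LIFE = 12  # assumed service life in years
diesel = full_cost(400_000, 30_000, LIFE)     # $760,000 all-in
fuel_cell = full_cost(650_000, 2_000, LIFE)   # $674,000 all-in
print(f"diesel: ${diesel:,}; fuel cell: ${fuel_cell:,}")
```

Under these illustrative assumptions the fuel cell bus, despite a higher purchase price, is cheaper once particulate-related health costs are counted, which is exactly the recalculation the text says shifted the price-performance comparison.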
Through product take-back, UTC Power is getting a handle on design for disassembly. The company’s team must determine what parts are recoverable and recyclable and the economics of remanufacturing the leased units brought back for repair or at the end of their useful life. Extending this concept to field-installed stationary fuel cell power units, UTC Power found that the reverse logistics and reuse/recycling of materials and parts could actually make money. The notion of leasing transportation or stationary power plant fuel cell stacks has engaged UTC Power even more closely with its suppliers and buyers along the value chain to source recyclable materials and components. Successful supply-chain coordination within the company and outside is important to the success of any leasing solution and to the systems redesign for disassembly and recyclability.
Because new ideas that challenge existing ways of operating require early adopters, innovators initially tend to work with and sell to other innovators. UTC Power is building new markets through cooperation with forward-thinking internal UTC executives and staff in other business units, and combining that synergy with eager corporate buyers trying to solve urgent problems (e.g., harsh storms in tropical geographies, zero-downtime requirements for electrical power) or open-minded municipalities searching for creative cost-cutting measures.
Conclusion
As we noted at the outset of this section, VANs are necessary to implement sustainability strategies. VANs provide the horsepower to implement projects. They are the means to translate vision into competitive products or services. Whatever your business is, catalyzing VANs is essential to put your nascent strategy into action. The following are strategies for working with VANs:
• Start with a compelling vision.
• Don’t take “no” for an answer; find people whose values align with yours.
• Work with innovators in other fields.
Since by definition you will be forging a new path, you will hear “no” a lot. Don’t stop there: seek out those who understand the bigger vision and are inspired by the prospect of inventing the way forward with you. Source participants from your existing suppliers or find new ones inspired by your green strategic vision and the multiple gains, including financial, that would come to participating organizations that develop new capacities. Collaborate closely with other innovators in other functions or fields. Since differentiation is a moving target, call upon your VANs to help you continuously redesign and improve, moving individual participants in and out of the constellation of skill sets and leadership attributes you need. Implementing strategy requires new approaches to your existing relationships, tapping into the latent creativity that is there.
KEY TAKEAWAYS
• Innovation is carried out by teams working collaboratively.
• Create teams that foster creativity by including individuals who are open to change.
EXERCISE
1. Working with a partner, imagine a new product or process you want to create. Identify who would want it as well as what VANs and weak ties could help you implement it. How could they help? What would be the benefit for them? | textbooks/biz/Business/Advanced_Business/Book%3A_Sustainability_Innovation_and_Entrepreneurship/04%3A_Entrepreneurship_and_Sustainability_Innovation_Analysis/4.05%3A_Adaptive_Collaboration_through_Value-Added_Networks.txt |
Learning Objectives
1. Examine the role of incremental steps in innovation.
2. Understand how systems changes can result from combining small steps.
Some companies enter the market with a mission of challenging existing products with sustainable replacements. Their strategy is radical from the start. Others, typically larger established firms, gain momentum in sustainability innovation by building upon incremental improvements in products and systems. Business analysis often juxtaposes incremental change with radical or dramatic change; a common assumption is that the two are mutually exclusive. Moreover, literature in the sustainability field privileges the latter over the former, dismissing incremental change as timid at best and “greenwash” at worst—accusations that may indeed hold true at times. Separating the two concepts, incremental and radical, can be useful for heuristic purposes. Perhaps doing so is also psychologically satisfying; it’s either this or it’s that.
In real life, however, people in business make a series of small steps over time that add up to larger, more profound change. Sometimes early successes build momentum for bigger changes that previously were viewed as too radical or risky. Alternatively, incremental successes can build courage and internal support, stimulating requisite imagination and energy to design more radical and innovative changes. By consciously pursuing incremental changes with a radical ultimate goal and tracking progress, one can catalyze significant innovation and ultimately differentiate the firm.
Radical incrementalism involves small, carefully selected steps that result in learning that in turn reveals new opportunities. It means taking marginal, integrated progress toward more ambitious sustainability goals. Ideally, your whole company would participate in discussing and defining ideal characteristics of this goal, track milestones along the way, observe lessons, and feed this data back into the definition of the goal and the next steps forward.
Others have used the term radical incrementalism to describe a deliberate strategy for business operations (particularly in information technology) in which a series of small changes are enacted one after the other, resulting in radical cumulative changes in infrastructure. Our use of the concept differs in that while company strategists should have a vision of what sustainability means for their company, the incremental steps to get there necessarily shape the course. In other words, the feedback you get along the way will accelerate, alter, and inform your next actions. This is iterative and adaptive learning—one gains knowledge along the way that affects future decisions. The companies we examine here demonstrate this strategy.
Corporate adoption of green and sustainability strategies is gaining global momentum. Its implications are radical for firms, supply chains, and consumers because it represents a significant challenge to conventional ways of doing business. We present leaders here because they offer us a window to the future. In this section and the discussion of adaptive collaboration through value-added networks (VANs) in Chapter 4, Section 4.5, we discuss the means to implement sustainability innovation. The result, for those companies that successfully pursue it, is new market space shaped to the lead firm’s advantage. However, just as the journey of one thousand miles begins with a single step, so does the radical shift toward sustainability involve incremental changes.
Kaiser Permanente
Kaiser Permanente (KP) deliberately adopted a radically incremental approach to implementing its strategy. The company has a sustainability perspective on its corporate purpose (health care) that widens the meaning of “health care” to include not only medical treatment but the broader community health impacts of its facilities and operations and the materials it sources. We examine here one relatively small decision in KP’s broader strategy: the company’s decisions on the use of polyvinyl chloride (PVC), a material of increasing environmental concern. Specifically, we will look at KP’s choices regarding flooring. KP measured everything it did to build the business case for greening each incremental step and discovered there were significant economic benefits to be gained by seemingly small changes. Moreover, these incremental decisions have had radical impacts on the company’s success and have facilitated moving forward on other sustainability fronts.

This discussion puts KP’s incremental step on flooring in the wider context of green buildings as an important arena for companies to measure the collective impact of seemingly small decisions. We present the business case for greener buildings and the economic and environmental benefits that they generate for companies as an integral part of their strategy.

Next, we will discuss SC Johnson’s award-winning product sustainability assessment tool, Greenlist. As SC Johnson (SCJ) evolved its efforts to incorporate sustainability into its corporate strategy, it constructed a powerful tool to measure the range of environmental impacts of chemical inputs into its products. As a result, the company has significantly altered its environmental footprint, improved product performance, and achieved significant cost savings. Moreover, this tool has had broader catalytic effects on SCJ’s supply chain and competitors. By patenting Greenlist, SCJ hopes to widen the circle even more.
Both of our company examples, KP and SCJ, illustrate the following three radically incremental tactics:
1. Set big goals but take moderate, integrated steps.
2. Measure everything—build your business case.
3. Incorporate knowledge gained back into new product and process design.
Both KP and SCJ illustrate the tactics we advocate: set big goals but take moderate, integrated steps to get there. Both companies have religiously monitored and measured their progress to build the business case for the next ambitious step. Now both are grappling with incorporating the knowledge gained from their earlier successes into future product designs, process designs, or both.
KP is the largest health management organization in the United States, with 8.2 million members and over 500 hospitals and medical buildings under management. KP’s Green Building Committee first met in 2001 to determine priority projects it would take on. Seated at the table were representatives from interior design firms, construction companies, health nongovernmental organizations (NGOs), and architects, along with KP’s national environmental health and safety staff (labor joined later). KP’s interest focused on identifying an area where the firm could move relatively quickly to eliminate a problematic chemical and thereby make a demonstrable difference for human and community health and ecological well-being. The group made the decision to investigate PVC-free flooring. Given growing research on PVC’s toxicity to humans throughout its life cycle, this choice met the group’s selection criteria. It was a radically incremental step.
KP does not move precipitously. Prudent spending and sound financial performance enable KP to deliver quality care, convenience, access, and affordability. KP is also dedicated to individual and community health and is science-driven and acutely sensitive to lowering the costs of health care. In this last respect, there is no choice in the health care industry; new drugs and procedures, health care worker shortages, provider consolidation, aging populations, and the rise of chronic health conditions across population segments continually drive costs up. Careful consideration of costs therefore must be part of the equation for procurement and strategic change. At the same time, senior management took seriously strong core values, including resource stewardship and leadership in improving the quality of life of the communities in which KP operates.
John Kouletsis, director of strategic planning and design, called the organization “fearlessly incremental” in its strategic approach. Though it takes on big issues, the company is meticulous in accumulating quantitative and qualitative evidence to support decisions, especially major changes in purchasing. Company leadership is akin to the old political notion of statesmanship. Strategy is guided by the belief that what is good for the environment and the community is good for the health maintenance organization (HMO) members and therefore good for KP’s financial success. KP employs a systems view of health care, incorporating environmental and community aspects, and this wider perspective on health informs the company’s green strategic decisions.
Jan Stensland was half of the duo in strategic sourcing and technology for KP. Her friendly, easy-going exterior belied intensity, intelligence, and absolute dedication to achieving the multidimensional objectives of her job. She conversed equally comfortably about material costs per square foot, parts per million contaminants, construction specifications, human health, and PVC exposure research. She also tracked internal rates of return for new decisions—for example, alternative flooring technology projects under consideration to renovate dozens of medical buildings throughout California, ten states, and Washington, DC, where thousands of patients and staff would spend time over the next several decades. While health is in the forefront of her mind, her proposals must show how the company will save money or get better spaces for the same cost. The national health care crisis of escalating costs is the elephant in her office, and she stares it down with an optimizing strategy across financial, community, health, and environmental objectives.
Stensland’s team sought ways to influence KP’s suppliers’ research and development (R&D) shops to redesign products so that health care facilities would be more effective measured in terms of patient treatment, disease prevention, and costs. Thus business effectiveness is viewed in a larger social context. Stensland thinks in terms of today and fifteen years out in talks with suppliers, working through negotiations to maximize health benefits and minimize costs for multiple stakeholders.
For example, 16 percent of KP’s 8.2 million members suffer from asthma. The rate of children’s asthma recently has risen to an epidemic level of 27 to 30 percent in some counties in California. Chronic respiratory and immune system problems increasingly have been linked to low-level exposures to different chemical compounds. There are considerable health impacts and significant monies at stake; therefore, suppliers bid with particular attention to KP’s interests. Moreover, the health care industry often follows KP’s lead. When KP was first among HMOs to move away from PVC gloves due to escalating allergic reactions and their associated costs, the industry followed, opening up opportunities for firms able to provide substitutes. However, that was only KP’s first effort involving PVC.
PVC
KP’s decision in early 1999 to begin to phase out the use of PVC was commendable but controversial. PVC is ubiquitous; it is used to make many everyday materials and is a key component of medical products such as IV bags and tubing. There is also growing evidence that it is a substance of concern. According to the Healthy Building Network, dioxin (the most potent carcinogen known), ethylene dichloride, hydrochloric acid, and vinyl chloride are unavoidably created in the production of PVC and can cause severe health problems, including cancer and birth defects.
Kathy Gerwig, director of environmental stewardship at KP, views the firm as taking a precautionary approach, meaning that where there is credible evidence that a material it is using may result in health and environmental harm, it should strive to replace that material with safer alternatives. As a senior manager, Gerwig is convinced there is enough evidence about the hazards of vinyl that the responsible course of action for a health care organization is to replace it with healthier commercially available alternatives that are equal or superior in performance, especially in the design and construction of their buildings.
Stensland described the company’s work on non-PVC flooring as ongoing—one piece of a larger puzzle with short-term wins and long-term goals. Thinking this intently about materials takes time but yields good results. The subcommittee assigned to investigate whether substitutes were available for PVC flooring found that the inexpensive per-square-foot price of vinyl did not reflect true life-cycle, health, and environmental costs. PVC flooring was discovered to carry high maintenance costs not previously considered because they were not included in the first-cost price of the flooring. True costs are often disguised when budgets separate purchasing (for new construction or renovation) from ongoing operations once the flooring is installed.
KP conducted pilot projects in several of its medical office buildings and hospitals, administering tests, comparing maintenance budgets in vinyl and nonvinyl flooring buildings, and interviewing the people who cleaned the floors in those facilities. These investigations revealed that up to 80 percent of flooring maintenance costs could be eliminated with the use of a rubber flooring product (Nora, from Freudenberg Building Systems) and another non-PVC flooring product, Stratica, an ecopolymeric product. The rubber and Stratica flooring products were more stain and slip resistant and had improved acoustic properties. But that was not the end of the story.
Qualitative issues related to flooring often translated into significant ongoing expenses. “Slips, trips, and falls” are major problems in buildings and an early indicator of problems with flooring. Accidents can lead to expensive settlements awarded to injured employees and visitors. Stensland analyzed the square footage costs across buildings and examined data for two years running. The company’s new attention to the nature of, and differences across, various flooring materials uncovered two KP locations where rubber flooring was installed and for which data showed zero slips, trips, and falls. Furthermore, data from nurses revealed that the harder vinyl floors generated more complaints and work absences among those on their feet all day. Non-PVC rubber flooring improved conditions for nurses and accomplished the environmental and health strategic goals. Analyses were conducted at multiple facilities. The magnitude of the flooring issue was significant for the company and its contract suppliers; in 2005, the company managed sixty-four million square feet of flooring. By 2015, it expects to have eighty-four million square feet under management.
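The reasoning behind KP’s analysis can be sketched as a simple total-cost-of-ownership comparison. All dollar figures, the square footage, and the service life below are hypothetical illustrations, not KP data; the only source-based input is the pilots’ finding that rubber flooring eliminated up to 80 percent of maintenance costs:

```python
# Hypothetical life-cycle cost comparison for two flooring options.
# Every dollar figure here is an illustrative assumption; the only
# source-based number is the ~80% maintenance-cost reduction that
# KP's pilot projects found for rubber flooring versus vinyl.

def life_cycle_cost(first_cost_per_sqft, annual_maintenance_per_sqft,
                    sqft, years):
    """Total cost of ownership: purchase price plus cumulative maintenance."""
    return sqft * (first_cost_per_sqft + annual_maintenance_per_sqft * years)

SQFT = 100_000          # one hypothetical medical office building
YEARS = 20              # assumed service life of the flooring

vinyl_maint = 1.50                  # assumed $/sqft/year to strip, wax, buff
rubber_maint = vinyl_maint * 0.20   # pilots showed up to 80% lower upkeep

vinyl = life_cycle_cost(2.00, vinyl_maint, SQFT, YEARS)    # low first cost
rubber = life_cycle_cost(4.50, rubber_maint, SQFT, YEARS)  # pricier up front

print(f"Vinyl:  ${vinyl:,.0f}")
print(f"Rubber: ${rubber:,.0f}")
# Despite more than double the first cost, the rubber option's total cost
# is far lower once maintenance over the service life is included.
```

Under these assumptions, the “cheap” vinyl option ends up costing roughly three times as much over twenty years, which is exactly the kind of result that let KP build the business case for substitution before accident and absence costs were even counted.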
However, that doesn’t solve the problem of flooring replacement in existing facilities. With regularly scheduled replacement of flooring in the more than five hundred medical buildings in the system, could PVC be eliminated there as well in a variety of areas? KP turned to the Collins and Aikman Corporation (C&A), its carpet supplier, and required that C&A develop a non-PVC carpet backing (the underlayer of carpeting contained most of the materials of concern), preferably at the same price. The manufacturer brought the new offering back to KP six months ahead of schedule. An equivalently priced new carpet backing whose performance exceeded the PVC-backed carpet was now available not just for KP but for all the manufacturer’s customers. The new material used postconsumer recycled polyvinyl butyral, the film used on safety glass for windshields that protects car passengers from broken glass in accidents. An enterprising engineer had discovered he could use the discarded sticky “waste” compound found at recyclers and brought it back into the materials stream for new applications.
By asking suppliers for alternative, safer products, KP—due to its size—has been driving the market toward products that reduce resource use and improve health conditions by eliminating chemical hazards and lowering maintenance expenses. Incremental steps are taken toward sustainability goals, pulling markets and supply chains along in what ultimately constitutes radical change: the substitution of a new, better product design for the old.
There are other examples. Refrigerants used in medical facility chiller systems have had the same problems as refrigerants in general use. When contracts for refrigerants came up for reconsideration, KP put bidders on notice that any problematic chemical in use or being phased out by 2008 could not be used in chillers. York Incorporated, an award-winning firm for its product efficiency and advanced technical designs, won the bid, producing new chillers with benign refrigerants in a unit that was 25 percent more energy efficient than the market standard. Thousands of chillers across hundreds of medical office buildings and hospitals now drive substitution of a radically more effective system for the existing products.
There are other examples of KP’s radically incremental approach. One of the companies selected to provide KP’s elevators produced a super energy-efficient design that addresses KP’s goal for more energy-efficient equipment, helping drive and justify that supplier’s improvements to its product design. Another elevator company had switched from petrochemical-based hydraulic fluids to soy-based fluids and was investigating more sustainable elevator car finish materials. In 2006 KP was talking with furniture and textile manufacturers to provide non-PVC upholstery. By 2005, KP was leading an effort to bring locally grown organic food into its hospitals, supporting local organic markets and working with food service suppliers like Sysco together with local growers to reduce fuel consumption in distribution. The goal is delivery of “clean” foods without chemical additives at reasonable cost to members and patients. The slow food movement, a grassroots and rapidly spreading effort to improve the quality of food through organic practices and limited-radius distribution from the growing site, gains momentum when a company the size of KP focuses on locally grown organic produce.The head of Slow Food USA’s office and founder of Slow Food International, Carlo Petrini, views the organic and local food movements that have reinvigorated farmers’ markets and microbreweries across the United States as representative of a new dialogue emerging between traditional knowledge and advancing science knowledge that is creating a new business reality and a different model of business.
KP’s incremental steps to upgrade facilities add up to radical change. KP has put sustainable building design and construction practices into all new construction and “rebuilds” (KP renovations) through facility templates. These practices incorporate the following:
• Implementing efficient water and energy systems
• Using the least toxic building materials
• Recycling demolition debris, diverting thousands of tons of materials from landfills
• Making use of daylight whenever possible
• Managing storm water to enhance surrounding habitats
• Reducing site development area (e.g., total gross square footage) to concentrate and limit total paving and other site disturbances
• Installing over fifty acres of reflective roofing
• Publishing an Eco Toolkit reference book and providing it to KP capital project team members and more than fifty architects and design alliance partners
KP also incorporates health and ecosystem considerations into national contracts. These considerations include the following:
• Reducing the toxicity and volume of waste
• Increasing postconsumer recycled content
• Selecting reusable and durable products
• Eliminating mercury content
• Selecting products free from PVC and di-2-ethylhexyl phthalate (DEHP)
Successful changes include replacing three DEHP-containing medical products in the neonatal intensive care units with alternatives, ensuring the continued elimination of mercury-containing medical equipment from standards, and negotiating a national recycling contract. KP’s purchasing standards include 30 percent postconsumer content office paper and mercury-free and latex-free products.
In addition, KP facilities often partner with local community organizations to implement community initiatives. One example is a mercury thermometer exchange at Kaiser Permanente Riverside (CA) Medical Center. A total of 540 pounds of material were collected from 3,000 mercury thermometers. Over 1,200 digital thermometers were distributed. “Kaiser Permanente’s accomplishments in environmental performance are impressive and unique,” said Kathy Gerwig, director of environmental stewardship. “We hope that by changing our practices, we can drive change throughout the health care industry.”GreenBiz Staff, “Kaiser Permanente Turns Green,” GreenBiz, April 22, 2003, accessed January 7, 2011, http://www.greenbiz.com/news/2003/04/22/kaiser-permanente-turns-green.
KP’s metrics demonstrating the benefits of its sustainability efforts include the following:
• In 2003, KP diverted 8,000 tons of solid waste from landfills.
• In 2003, KP reused or safely redeployed more than 40,000 pieces of electronic equipment, weighing 410 tons and containing 10,500 pounds of lead.
• KP eliminated 27,000 grams of mercury from KP health care operations by phasing out mercury-containing blood pressure devices, thermometers, and gastrointestinal equipment.
• KP phased out one hundred tons of single-use devices in 2003.
Energy conservation measures at KP prevented the creation of more than seventy million pounds of air pollutants annually. In aggregate, pollution prevention activities eliminated the purchase and disposal of forty tons of hazardous chemicals. Other activities reported by the company in 2005 are as follows:
• Waste minimization resulting in the recycling of nine million pounds of solid waste
• Electronic equipment disposition resulting in the recycling of 36,000 electronic devices containing 10,500 pounds of lead
• Optimal reuse of products that led to reprocessing 53,851 pounds of medical devices and supplies
• Capital equipment redistribution
• Greening janitorial cleaning products, eliminating exposure risks for employees, lowering costs, gaining system efficiencies, and improving performance
• Recycling and reuse of 8,300 gallons of solvents
• Energy conservation resulting in the recycling of 30,000 spent fluorescent lamps
In conclusion, KP provides a compelling example of the immediate gains to be had through pursuing sustainability practices in radically incremental steps. KP’s senior management team works from the premise that human health and environmental health are the same thing. As an institution engaged with human health, it makes sense for KP to be active in resolving a paradox facing the health care industry: that hazardous chemicals used in medical products and buildings have harmful effects on patients and employees. It makes sense to coordinate purchasing across member medical centers and hospitals to ensure improved health conditions for members and the communities in which they live. The opportunities are vast for KP. That means the hundreds of suppliers that provide technical and routine needs for the company and the more than two thousand minor and major construction projects under way at any one time also can take advantage of new sustainability-inspired market space opportunities. The question is which ones will step up to the challenge and follow KP into the next generation of “good business”?
Radical incrementalism means taking small, carefully selected steps that result in learning that in turn reveals new opportunities. In this case a seemingly small decision on a seemingly innocuous issue—flooring—resulted in larger systemic changes across the company and its supply chains, even sending an urgent signal to the flooring industry. By greening its flooring, KP is improving health by eliminating a questionable material, improving working conditions and health for nurses, and reducing costs by bringing employee absences down and lowering accident liability costs. Putting the pieces together took time; KP staff members measured each step and outcome to evaluate the effects on cost and performance. Moreover, the results are driving bigger goals. Three years from the start-up of the project, KP made a new-construction standards change: no PVC vinyl flooring would be used in any future facilities. If we take into account all the other incremental changes KP is making, the systemic and company benefits are profound. KP’s radically incremental steps are part of its strategy to better support community health while it grows its operations.
We turn next to sustainability ideas applied to facilities. Buildings are not just where your business activities happen. Your facilities—and the decisions you make about resources, energy, materials, and so forth—are a significant investment and can either add to or subtract from your bottom line. They can also add to or subtract from your overall strategy. Buildings and their operating systems are an excellent area in which you can realize the benefits of radically incremental steps.
Among the many industries developing innovative strategies to increase profits and address environmental and related community quality of life concerns, the building sector presents some of the most accessible incremental opportunities that can aggregate into radical returns. Compared to standard buildings, “green” buildings can provide greater economic and social benefits over the life of the structures, reduce or eliminate adverse human health effects, and even contribute to improved air and water quality. Opportunities for reducing both costs and natural system impacts include low-disturbance land use techniques, improved lighting design, high-performance water fixtures, careful materials selection, energy-efficient appliances and heating and cooling systems, and on-site water treatment and recycling. Less familiar innovations include natural ventilation and cooling without fans and air conditioners; vegetative roofing systems that cool buildings, provide wildlife habitat, and reduce storm water runoff; and constructed wetlands that help preserve water quality while reducing water treatment costs.
The building industry and growing numbers of private companies are responding to these opportunities. Valuable economic benefits are being realized in improved employee health and productivity, lower costs, and enhanced community quality of life. Since 2000, adoption of green design and construction techniques has been greatly aided and accelerated by the Leadership in Energy and Environmental Design (LEED) rating system.
LEED is a voluntary green building rating system established by architects, interior designers, and the construction industry through a consensual process during the 1990s. The US Green Building Council (USGBC), a voluntary membership coalition, developed and continues to review the LEED standards. LEED guides building owners, architects, and construction firms to use industry standards and advances in those standards for environmental and health performance across a wide range of building criteria including site design, building materials selection, and energy systems. While each modification and upgrade to the building and site may seem small unto itself, the changes combine to create a dramatically more efficient building system with far lower operating costs and more satisfied owners over the life of the structure. While there is valid criticism about some of the specifications within LEED and its impact on innovation in the materials industry, overall the system has helped green the building industry.The Healthy Building Network criticizes the USGBC and LEED for continuing to include PVC in green building specifications. Others have criticized the LEED process for inhibiting innovation because it freezes the specific definition of “green” in a moment in time. This can mean that unforeseen, even greener, innovations will be left out of the criteria.
Green buildings perform the same functions and serve the same purposes as conventional buildings but with a smaller ecological footprint. They employ optimized and often innovative design features to reduce natural systems impacts throughout a building’s life cycle and all across the supply chain of materials, components, and operations.
Green buildings provide a range of benefits to stakeholders, from developers and owners to occupants and communities. Structural, mechanical, and landscape design elements can maintain comfort and indoor air quality, conserve resources, and minimize use of toxic materials while reducing pollution and damage to local ecosystems. A broad range of green design techniques, technologies, and operational strategies are available to building architects, engineers, and owners. Every building is different, and there is no single green design formula. However, there are common design objectives and classes of benefits. The potential benefits of green building practices include the following:
• Less disruption of local ecosystems and habitats
• Resource conservation
• Decreased air, water, and noise pollution
• Superior indoor air quality
• Fewer transportation impacts
While they may entail higher up-front costs (but not necessarily;Lisa Fay Matthiessen and Peter Morris, “Costing Green: A Comprehensive Cost Database and Budgeting Methodology,” US Green Building Council, July 2004, accessed January 10, 2011, www.usgbc.org/Docs/Resources/Cost_of_Green_Full.pdf.), in the long term, green buildings can recoup the added cost. Careful design choices for particular locations can reduce that difference to zero. Some of the economic benefits they generate include the following:
• Lower capital costs. With careful design, measures such as passive solar heating, natural ventilation, structural materials and design improvements, and energy and water efficiency can reduce the size and cost of heating and cooling systems and other infrastructure. A new bank in Boise, Idaho, was able to take advantage of such considerations to go from an initially planned LEED Silver to an actual LEED Platinum with no added cost.US Green Building Council, “Banner Bank Building: Green Is Color of Money,” 2006, available from the project profiles at www.usgbc.org/DisplayPage.aspx?CMSPageID=1721.
• Lower operations and maintenance (O&M) costs. On average, LEED buildings use 25–45 percent less energy per square foot than conventional buildings, and they consume less water.Cathy Turner and Mark Frankel (New Buildings Institute), Energy Performance of LEED for New Construction Buildings (Washington DC: US Green Building Council, 2008), accessed January 31, 2011, http://www.usgbc.org/ShowFile.aspx?DocumentID=3930; Greg Kats, Greening Our Built World: Costs, Benefits, and Strategies (Washington, DC: Island Press, 2009). The US Environmental Protection Agency (EPA) reported that office buildings that meet the energy efficiency requirements of the Energy Star program use 40–50 percent less energy than other buildings.Energy Star is familiar to many people for rating the energy efficiency of appliances, but a separate Energy Star certification system also exists for entire buildings. For the comparison, see EPA, Energy Star and Other Climate Protection Programs 2007 Annual, October 2008, accessed January 11, 2011, http://www.energystar.gov/ia/news/downloads/annual_report_2007.pdf.
• Increased market value. Green buildings can increase market value through reduced operating costs, higher lease premiums, competitive features in tight markets, and increased residential resale value. For instance, a 2008 study of Energy Star and LEED-certified office buildings versus conventional ones found that the green office buildings had higher occupancy rates and could charge slightly higher rents, making the market value of a green building typically \$5 million greater than its conventional equivalent.Piet Eichholtz, Nils Kok, and John M. Quigley, “Doing Well by Doing Good? Green Office Buildings” (Program on Housing and Urban Policy Working Paper No. W08-001, Institute of Business and Economic Research, Fisher Center for Real Estate & Urban Economics, University of California, Berkeley, 2008), accessed January 28, 2011, www.jetsongreen.com/files/doing_well_by_doing_good_green_ office_buildings.pdf.
• Less risk and liability. Using best practices yields more predictable results, and healthier indoor environments reduce health hazards. Some insurers offer discounts for certified green buildings, and others offer to pay to rebuild to green standards after damage.For instance, Fireman’s Fund Insurance Company, “Insurers Offer Rewards for Going Green,” 2010, accessed January 11, 2011, www.firemansfund.com/about-fireman-s-fund/our-commitments/about-our-green-insurance/Pages/insurers-offer-rewards-for-going-green.aspx; or Zurich in North America, “Green Buildings Insurance Article,” 2010, accessed January 11, 2011, http://www.zurichna.com/zna/realestate/greenbuildingsinsurancearticle.htm.
• Increased employee productivity. Green buildings increase occupant productivity due to better lighting and more comfortable, quiet, and healthy work environments. This improvement can be at least equal to buildings’ lifetime capital and O&M costs and is the largest potential economic benefit of green buildings. For example, a survey of employees at two companies that moved from conventional buildings into LEED-certified ones found the new buildings added on average about 40 hours per year per employee in increased productivity.Amanjeet Singh, et al., “Effects of Green Buildings on Employee Health and Productivity,” American Journal of Public Health 100, no. 9 (2010): 1665–68. Nationwide, the value of improved office worker productivity from indoor environmental improvements is estimated to be in the range of \$20–160 billion.William J. Fisk, “Health and Productivity Gains from Better Indoor Environments and Their Relationship with Building Energy Efficiency,” Annual Review of Energy and the Environment 25 (2000): 537–66.
• Reduced absenteeism. Lawrence Berkeley National Laboratory calculates that improvements to indoor environments could reduce health care cost and work losses by 9 percent to 20 percent from communicable respiratory diseases, 18 percent to 25 percent from reduced allergies and asthma, and 20 percent to 50 percent from other nonspecific health and discomfort effects, saving \$17–48 billion annually.William J. Fisk, “Health and Productivity Gains from Better Indoor Environments and Their Relationship with Building Energy Efficiency,” Annual Review of Energy and the Environment 25 (2000): 537–66.
• Market perception of quality. Green buildings require careful design attention and the use of best practices, and they display superior performance, signaling quality to the market.
• Promotion of innovation. Green buildings employ new ideas and methods that produce significant improvements.
• Access to government incentives. A growing number of federal, state, and local agencies require green features and offer tax credits and other incentives such as faster, less costly planning and permit approvals.
Green buildings provide a tangible means of measuring incremental steps that can aggregate into radical system-level benefits. Moreover, they are a visible area in which to demonstrate corporate sustainability strategy—the benefits derived from greening facilities and building systems add up to significant cost savings and represent a demonstrable area in which to see near-term return on investment in green technologies and operating systems.
SC Johnson
We turn next to the example of incremental changes creating system innovations at SC Johnson. By the mid-1990s, SC Johnson (SCJ) had a very respectable record on corporate environmental responsibility. In 1975, SCJ voluntarily removed ozone-threatening chlorofluorocarbon (CFC) propellants from its products worldwide, three years before the US government banned CFCs. In 1992, when eco-efficiency was introduced as a cost savings measure by the World Business Council for Sustainable Development (WBCSD), SCJ was one of the first companies to join the WBCSD. By using fewer resources far more efficiently, the company trimmed millions of dollars of unnecessary costs. It eliminated over 420 million pounds of waste from products and processes over the ten-year period prior to 2004, resulting in cost savings of more than \$35 million.
In addition, the company built a landfill gas–powered turbine cogeneration energy plant that delivers 6.4 megawatts of electricity and some 40,000 pounds per hour of steam for SCJ’s Waxdale manufacturing facility in Wisconsin. This energy project enabled SCJ to halve its use of coal-generated utility electricity and thereby cut its carbon emissions.
SCJ is a 120-year-old family-owned (sixth generation) firm with explicit commitments to innovation, high-quality products, environmental concerns, and the communities in which it operates. SCJ is a consumer packaged goods (CPG) company and a “chemical formulator”—a company that chooses from a menu of chemical inputs to make its consumer products. With such well-known brands as Pledge, Windex, and Ziploc, the company had over \$6.5 billion in sales in 2006 and sold its products in more than 110 countries.
In holding up sustainability criteria as goals, SCJ had set off on a journey in which the end destination was not entirely clear, and by the new millennium company strategists knew it was time to evaluate the systems currently in place. SCJ’s earlier positive results motivated the company to look for more opportunities, so it stepped back and looked at the progress it had made over a decade. Company strategists discovered that while eco-efficiency had become second nature to product design at SCJ, strategy needed to shift beyond capturing relatively easy efficiencies and move deeper. They engaged outside expertise to help develop and introduce product design tools that could be used to build preferred ingredient choices into product and packaging design. The result of this assessment was the development of a new product evaluation tool, Greenlist.
Greenlist is a tool SCJ developed to improve the quality of its products through better understanding of the health and environmental impact of material inputs. In the Greenlist database are 2,300 chemicals including surfactants, insecticides, solvents, resins, propellants, and packaging. Criteria measured include the chemicals’ biodegradability, aquatic toxicity, vapor pressure, and so forth. Through Greenlist, SCJ has reduced its environmental impact while simultaneously witnessing increases in production and sales growth.
Greenlist is a patented rating system (US Patent No. 6,973,362) that classifies raw materials used in SCJ’s products according to their impact on the environment and human health. Greenlist has helped SCJ phase out certain raw materials and use materials considered to be environmentally “better” and “best.” The result is a process that gives SCJ scientists access to ingredient ratings for any new product or reformulation and enables them to continuously improve the environmental profile of the company’s products.
The Greenlist screening process covers over 90 percent of the company’s raw materials volume and is continually updated as new findings emerge. Materials are assigned a score from a high of 3 to a low of 0. An ingredient with a 3 rating is considered “best,” 2 is “better,” and 1 is “acceptable.” Any material receiving a 0 is called a restricted use material (RUM) and requires company vice presidential approval for use. If a material is unavoidable and has a low score, the goal is to reduce and eliminate its use as soon as substitutes are available. When existing products are reformulated, the scientist must include ingredients that have ratings equal to or higher than the original formula.
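The scoring rules above can be sketched as simple checks. The sketch below is purely illustrative: the ingredient names and data are invented, and SCJ’s actual Greenlist criteria, weightings, and formulations are proprietary. It assumes the reformulation rule applies ingredient by ingredient, which is one reasonable reading of the description above.

```python
# Hypothetical sketch of Greenlist-style screening rules as described
# in the text. Ratings: 3 = "best", 2 = "better", 1 = "acceptable",
# 0 = restricted use material (RUM). All ingredient data are invented.

RUM = 0  # a 0 score requires company vice presidential approval


def needs_vp_approval(formula):
    """A formulation containing any 0-rated material needs sign-off."""
    return any(score == RUM for score in formula.values())


def reformulation_allowed(original, proposed):
    """When reformulating, each ingredient's rating must be equal to
    or higher than the rating it had in the original formula."""
    return all(proposed[name] >= original.get(name, 0)
               for name in proposed)


original = {"surfactant_a": 1, "solvent_b": 2}
proposed = {"surfactant_a": 2, "solvent_b": 2}  # surfactant upgraded

print(reformulation_allowed(original, proposed))  # True
print(needs_vp_approval(proposed))                # False
```

In practice such a screen would sit on top of the underlying criteria (biodegradability, aquatic toxicity, vapor pressure, and so forth) that generate the 0–3 ratings in the first place.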
While some raw materials with a 0 score are not restricted by government regulatory requirements, over the years SCJ has elected to limit their use, replacing these 0-rated materials with alternatives that are more biodegradable and have a better environmental and health profile.
An example of Greenlist in action involves one of SCJ’s glass cleaner products. In 2002 and again in 2004, SCJ assessed the formulation of Windex blue glass cleaner to reduce volatile organic compounds. The reformulations reduced health and environmental impacts while increasing the product’s cleaning performance by 40 percent and growing its market share by 4 percent.
When SCJ introduced Greenlist in 2001, the company set a goal to improve its baseline Greenlist score for all raw material purchases from 1.2 to 1.41 by 2007. This goal was accomplished in early 2005. In 2001, SCJ’s use of “better” and “best” materials was at 9 percent of all raw materials scored, and by 2005, this number increased to 28 percent of all raw materials scored. The company uses an annual planning process to help drive these scores, and the Greenlist results are shared in the company’s annual public report.SC Johnson, “RESPONSIBILITY = SCIENCE: SC Johnson Public Report 2009,” accessed March 7, 2011, http://www.scjohnson.com/Libraries/Download_Documents/2009_SC_Johnson_Public_Report.sflb.ashx.
Moreover, SCJ has eliminated all PVC packaging (a step taken to eliminate risk and liability) and, with performance results remaining stable or improving, has moved to sourcing 10 percent of its surfactants from bio-based rather than oil-based materials. Each change required coordination with suppliers, which in turn have made the more efficient or benign substitutes available to other customers as demand for “clean” materials grows.
SCJ has patented Greenlist, but it has made the process licensable by other companies at no charge (although SCJ’s formulations remain protected). The goal is to encourage application of Greenlist thinking and analysis across industry sectors. The company has already shared its Greenlist process with the US EPA, Environment Canada, the Chinese Environmental Protection Agency, industry associations, universities, and other corporations. Moreover, the company has been able to use insights from Greenlist to work with partner suppliers to help identify and develop ingredients that are more environmentally sustainable.
To date, “the company has been recognized with over 40 awards for corporate environmental leadership from governments and non-governmental organizations, including the World Environment Center Gold Medal, and Environment Canada’s Corporate Achievement Award. SCJ received the first-ever Lifetime Atmospheric Achievement Award from the US Environmental Protection Agency.”Five Winds International, “Greening the Supply Chain at SC Johnson: A Case Study,” accessed December 3, 2010, www.fivewinds.com/_uploads/documents/g60tzmxo.pdf. In 2005, SCJ announced that it had entered into a voluntary partnership with the EPA under the agency’s Design for the Environment (DfE) program. SCJ is the first major CPG company to partner with EPA on the program, which promotes innovative chemical products, technologies, and practices that benefit human health and the environment. In 2006, SCJ received the Presidential Green Chemistry Challenge Award for its Greenlist process.
SCJ has evolved its sustainability strategy from well-meant but relatively piecemeal efficiency efforts to developing an award-winning, innovative product assessment tool. The company has achieved real leadership in the world of consumer products manufacturing. Not only has the company strategically positioned itself ahead of the pack by anticipating regulatory restrictions before they happen, but it has developed enviable preferred purchaser relationships with its suppliers. SCJ has simplified its materials inputs list to fewer, greener inputs and is helping suppliers develop market leadership in supplying greener inputs. Moreover, SCJ is trying to teach the world how it does what it does—and it is doing this for free.
An area in which the company has recognized it needs to take further steps is in incorporating Greenlist further upstream in the product design process. SCJ’s goal is to use the tool not only to assess existing products but also to inspire breakthrough green innovations to capture new market space. Given the company’s track record of conscious evolution of its strategy, this is not an unrealistic goal.
Radical incrementalism, as we have seen, offers a path that can both deliver real-time benefits and lead to market-shifting innovation. KP and SCJ demonstrate the tactics we advocate here: set big goals but take moderate, integrated steps to get there. Both companies have religiously monitored and measured their progress to build the business case for the next ambitious steps. Consequently, both now grapple with incorporating the knowledge gained from their earlier successes into future product designs, process designs, or both.
Being radically incremental requires having an ambitious goal of corporate sustainability, but it does not imply that you will be able to map out all the steps with clockwork accuracy. It does mean, however, that one’s incremental steps must be integrated, that each success and failure must be evaluated, and that the road map under one’s feet must be redrawn accordingly. Being radical takes courage, but so does radical incrementalism. Courage and resolve build, however, with each successful step.
KEY TAKEAWAY
Radically incremental tactics include the following:
1. Setting big goals but taking moderate, integrated steps toward those goals.
2. Measuring everything (metrics are critical)—to build your business case.
3. Incorporating knowledge gained back into the process for new product and process design.
EXERCISES
1. List the small incremental steps Kaiser Permanente and SC Johnson took and the larger changes they added up to over time.
2. Select a familiar product and list all the incremental small steps that could be applied to its design, use, and disposal that would reduce the product’s ecological/health footprint. As you consider these changes, look for imaginative leaps you could make to redesign the entire product, provide for the buyer’s need in new ways altogether, or consolidate incremental changes into a systems redesign involving supply chain partners that could improve the product and lower costs at the same time.
Learning Objectives
1. Understand the basic causes and effects of climate change.
2. Know the regulatory frameworks governments have used to address climate change.
3. Identify business responses and opportunities related to climate change.
The thickness of the air, compared to the size of the Earth, is something like the thickness of a coat of shellac on a schoolroom globe. Many astronauts have reported seeing that delicate, thin, blue aura at the horizon of the daylit hemisphere and immediately, unbidden, began contemplating its fragility and vulnerability. They have reason to worry.Carl Sagan, Billions and Billions (New York, NY: Random House 1997), 86.
- Carl Sagan
Since the beginning of their history, humans have altered their environment. Only recently, however, have we realized how human activities influence earth’s terrestrial, hydrological, and atmospheric systems to the extent that these systems may no longer maintain the stable climate and services we have assumed as the basis of our economies. The science of climate change developed rapidly in the late twentieth century as researchers established a correlation between increasing atmospheric concentrations of certain gases, human activities emitting those gases, and a rapid increase in global temperatures. Many, but by no means all, international policy makers spurred research as it became apparent that impacts ranging from melting polar ice caps to acidified oceans and extreme weather patterns could be attributed to anthropogenic (human) influences on climate. Global businesses, many of which initially balked at potential economic disruption from changes in the use of fossil fuel and other business practices, have largely acceded to the need for change. Nonetheless, the overall response to the challenge has been slow and not without resistance, thereby increasing both the potential opportunities and the urgency.
The Science of Global Climate Change
In the early 1820s, Joseph Fourier, the French pioneer in the mathematics of heat diffusion, became interested in why some heat from the sun was retained by the earth and its atmosphere rather than being reflected back into space. Fourier conceived of the atmosphere as a bell jar with the atmospheric gases retaining heat and thereby acting as the containing vessel. In 1896, Swedish Nobel laureate and physicist Svante August Arrhenius published a paper in which he calculated how carbon dioxide (CO2) could affect the temperature of the earth. He and early atmospheric scientists recognized that normal carbon dioxide levels in the atmosphere contributed to making the earth habitable. Scientists also have known for some time that air pollution alters weather. For example, certain industrial air pollutants can significantly increase rainfall downwind of their source. As intensive agriculture and industrial activity have expanded very rapidly around the world since 1850 (Figure 5.1), a growing body of scientific evidence has accumulated to suggest that humans influence global climate.
The earth’s climate has always varied, which initially raised doubts about the significance of human influences on climate or suggested our impact may have been positive. Successive ice ages, after all, likely were triggered by subtle changes in the earth’s orbit or atmosphere and would presumably recur. Indeed, changes in one earth system, such as solar energy reaching the earth’s surface, can alter other systems, such as ocean circulation, through various feedback loops. The dinosaurs are thought to have gone extinct when a meteor struck the earth, causing tsunamis, earthquakes, fires, and palls of ash and dust that would have hindered photosynthesis and lowered oxygen levels and temperatures. Aside from acute catastrophes, however, climate has changed slowly, on the scale of tens of thousands to millions of years. The same paleoclimatological data also suggest a strong correlation between atmospheric CO2 levels and surface temperatures over the past 400,000 years and indicate that the last 20 years have been the warmest of the previous 1,000.National Oceanic and Atmospheric Administration Paleoclimatology, “A Paleo Perspective on Global Warming,” July 13, 2009, accessed August 19, 2010, www.ncdc.noaa.gov/paleo/globalwarming/home.html.
In the last decades of the twentieth century, scientists voiced concern over a rapid increase in “greenhouse gases.” Greenhouse gases (GHGs) were named for their role in retaining heat in earth’s atmosphere, causing a greenhouse effect similar to that in Fourier’s bell jar. Increases in the atmospheric concentration of these gases, which could be measured directly in modern times and from ice core samples, were correlated with a significant warming of the earth’s surface, monitored using meteorological stations, satellites, and other means (see Figure 5.2).
The gases currently of most concern include CO2, nitrous oxide (N2O), methane (CH4), and chlorofluorocarbons (CFCs). CO2, largely a product of burning fossil fuels and deforestation, is by far the most prevalent GHG, albeit not the most potent. Methane, produced by livestock and by decomposition in landfills and sewage treatment plants, contributes per unit twelve times as much to global warming as does CO2. N2O, created largely by fertilizers and coal or gasoline combustion, is 120 times as potent. CFCs, wholly synthetic in origin, have largely been phased out by the 1987 Montreal Protocol because they degraded the ozone layer that protects earth from ultraviolet radiation (Figure 5.3). The successor hydrochlorofluorocarbons (HCFCs), however, are GHGs with potencies one to two orders of magnitude greater than that of CO2.
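Potency multipliers like these let a mixed-gas inventory be expressed in a single unit, tonnes of CO2-equivalent. The sketch below uses the multipliers quoted in the paragraph above for illustration; note that IPCC 100-year global warming potentials differ (roughly 25 for methane and 298 for N2O), and the inventory figures are invented.

```python
# Converting a mixed-gas emissions inventory into CO2-equivalents,
# weighting each gas by its potency relative to CO2. The multipliers
# follow the text above (CH4 = 12x, N2O = 120x) and are illustrative;
# standard IPCC 100-year values are higher.

POTENCY = {"co2": 1, "ch4": 12, "n2o": 120}


def co2_equivalent(emissions_tonnes):
    """Sum emissions weighted by potency, in tonnes CO2-equivalent."""
    return sum(POTENCY[gas] * tonnes
               for gas, tonnes in emissions_tonnes.items())


inventory = {"co2": 1000.0, "ch4": 10.0, "n2o": 1.0}  # tonnes of each gas
print(co2_equivalent(inventory))  # 1000 + 120 + 120 = 1240.0
```

The example shows why small tonnages of the more potent gases matter: one tonne of N2O counts as much here as 120 tonnes of CO2.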
In response to such findings, the United Nations and other international organizations gathered in Geneva to convene the First World Climate Conference in 1979. In 1988, a year after the Brundtland Commission called for sustainable development, the World Meteorological Organization (WMO) and the United Nations Environment Programme (UNEP) created the Intergovernmental Panel on Climate Change (IPCC). The IPCC gathered 2,500 scientific experts from 130 countries to assess the scientific, technical, and socioeconomic aspects of climate change, its risks, and possible mitigation.The IPCC comprises three working groups and a task force. Working Group I assesses the scientific aspects of the climate system and climate change. Working Group II addresses the vulnerability of socioeconomic and natural systems to climate change, negative and positive consequences of climate change, and options for adapting to those consequences. Working Group III assesses options for limiting greenhouse gas emissions and otherwise mitigating climate change. The Task Force on National Greenhouse Gas Inventories implemented the National Greenhouse Gas Inventories Program. Each report has been written by several hundred scientists and other experts from academic, scientific, and other institutions, both private and public, and has been reviewed by hundreds of independent experts. These experts were neither employed nor compensated by the IPCC nor by the United Nations system for this work. The IPCC’s First Assessment Report, published in 1990, concluded that the average global temperature was indeed rising and that human activity was to some degree responsible (Figure 5.4). This report laid the groundwork for negotiation of the Kyoto Protocol, an international treaty to reduce GHG emissions that met with limited success. Subsequent IPCC reports and myriad other studies indicated that climate change was occurring faster and with worse consequences than initially anticipated.
Charles David Keeling
Modern systematic measurement of CO2 emissions began with the work of scientist Charles David Keeling in the 1950s. The steady upward trajectory of atmospheric CO2 graphed by Dr. Keeling became known as the Keeling curve. This comment is from a front page New York Times article on December 21, 2010: “In later years, as the scientific evidence about climate change grew, Dr. Keeling’s interpretations became bolder, and he began to issue warnings. In an essay in 1998, he replied to claims that global warming was a myth, declaring that the real myth was that ‘natural resources and the ability of the earth’s habitable regions to absorb the impacts of human activities are limitless.’ In an interview in La Jolla, Dr. Keeling’s widow, Louise, said that if her husband had lived to see the hardening of the political battle lines over climate change, he would have been dismayed. “He was a registered Republican,” she said. “He just didn’t think of it as a political issue at all.”Justin Gillis, “Temperature Rising: A Scientist, His Work and a Climate Reckoning,” New York Times, December 21, 2010, www.nytimes.com/2010/12/22/science/earth/22carbon.html?_r=1&pagewanted=2.
Effects and Predictions
The IPCC Fourth Assessment Report in 2007 summarized much of the current knowledge about global climate change, which included actual historical measurements as well as predictions based on increasingly detailed models.Rajendra K. Pachauri and Andy Reisinger, eds. (core writing team), Climate Change 2007: Synthesis Report (Geneva, Switzerland: Intergovernmental Panel on Climate Change, 2008). Available from the Intergovernmental Panel on Climate Change, “IPCC Fourth Assessment Report: Climate Change 2007,” accessed August 19, 2010, http://www.ipcc.ch/publications_and_data/ar4/syr/en/contents.html. A fifth assessment report was begun in January 2010 but has yet to be completed. Unless otherwise footnoted, all numbers in this list are from the fourth IPCC assessment. These findings represent general scientific consensus and typically have 90 percent or greater statistical confidence.
The global average surface temperature increased 0.74°C ± 0.18°C (1.3°F ± 0.32°F) from 1906 to 2005, with temperatures in the upper latitudes (nearer the poles) and over land increasing even more. In the same period, natural solar and volcanic activity would have decreased global temperatures in the absence of human activity. Depending on future GHG emissions, the average global temperature is expected to rise an additional 0.5°C to 4°C by 2100, which could put over 30 percent of species at risk for extinction. Eleven of the twelve years from 1995 to 2006 were among the twelve warmest since 1850, when sufficient records were first kept. August 2009 had the hottest ocean temperatures and the second hottest land temperatures ever recorded for that month, and 2010 tied 2005 as the warmest year in the 131-year instrumental record for combined global land and ocean surface temperature.Data more current than the fourth IPCC report are available from NASA and NOAA, among other sources, at NASA, “GISS Surface Temperature Analysis (GISTEMP),” accessed January 27, 2011, data.giss.nasa.gov/gistemp; and National Oceanic and Atmospheric Administration, “NOAA: Warmest Global Sea-Surface Temperatures for August and Summer,” September 16, 2009, accessed January 27, 2011, www.noaanews.noaa.gov/stories2009/20090916_globalstats.html.
Precipitation patterns have changed since 1900, with certain areas of northern Europe and eastern North and South America becoming significantly wetter, while the Mediterranean, central Africa, and parts of Asia have become significantly drier. Record snowfalls in Washington, DC, in the winter of 2009–10 reflected this trend, as warmer, wetter air dumped nearly one meter of snow on the US capital in two storms.Bryan Walsh, “Another Blizzard,” Time, February 10, 2010, accessed January 7, 2011, www.time.com/time/health/article/0,8599,1962294,00.html.
Coral reefs, crucial sources of marine species diversity, are dying, due in part to their sensitivity to increasing ocean temperatures and ocean acidity. Oceans acidify as they absorb additional CO2; lower pH numbers indicate more acidic conditions. Ocean pH decreased 0.1 units between 1750 and 2000 and is expected to decrease an additional 0.14 to 0.35 units by 2100. (A pH difference of one is the difference between lemon juice and battery acid.)
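A change of 0.1 pH units may sound small, but pH is a base-10 logarithmic scale (pH = −log10 of hydrogen-ion concentration), so small drops translate into large concentration changes. A quick arithmetic check:

```python
# pH is a base-10 logarithmic scale: pH = -log10([H+]). A drop of
# 0.1 pH units therefore corresponds to roughly a 26 percent increase
# in hydrogen-ion concentration; a drop of 0.35 units, to over 120
# percent (i.e., more than doubling).


def hplus_increase(ph_drop):
    """Fractional increase in [H+] for a given decrease in pH."""
    return 10 ** ph_drop - 1


print(round(hplus_increase(0.1) * 100))   # 26 (percent)
print(round(hplus_increase(0.35) * 100))  # 124 (percent)
```

This is why oceanographers treat the historical 0.1-unit decline, and the projected further decline, as substantial shifts in ocean chemistry.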
Glaciers and mountain snowpacks, crucial sources of drinking water for many people, have been retreating for the past century. From 1979 to 2006, Arctic ice coverage declined between 6 and 10 percent, with declines in summer coverage of 15–30 percent (Figure 5.7).
Seas have risen 20 to 40 centimeters over the past century as glaciers melted and water expanded from elevated temperatures. Sea levels rose at a rate of 1.8 (±0.5) millimeters per year from 1961 to 2003. From 1993 to 2003 alone, that rate was dramatically higher: 3.1 (±0.7) millimeters per year. An additional rise in sea level of 0.4 to 3.7 meters (1.3 to 12.1 feet) is expected by 2100. The former amount would threaten many coastal ecosystems and communities;James G. Titus, K. Eric Anderson, Donald R. Cahoon, Dean B. Gesch, Stephen K. Gill, Benjamin T. Gutierrez, E. Robert Thieler, and S. Jeffress Williams (lead authors), Coastal Elevations and Sensitivity to Sea-Level Rise: A Focus on the Mid-Atlantic Region (Washington, DC: US Climate Change Science Program, 2009), accessed August 19, 2010, http://www.epa.gov/climatechange/effects/coastal/sap4-1.html. the latter would be enough to submerge completely the archipelago nation of the Maldives. If trends continue as predicted, inundation of global coastal areas and island communities may soon present major human migration and resettlement challenges. Many consider this the most critical climate change issue.
Trees are moving northward into the tundra. A thawing permafrost, meanwhile, would release enough methane to catastrophically accelerate global warming.National Science Foundation, “Methane Releases from Arctic Shelf May Be Much Larger and Faster Than Anticipated,” news release, March 4, 2010, accessed January 7, 2011, http://www.nsf.gov/news/news_images.jsp?cntn_id=116532&org=NSF and http://www.nsf.gov/news/news_summ.jsp?cntn_id=116532&org=NSF&from=news. Other species, too, are migrating or threatened. The polar bear population, for example, is expected to decline by two-thirds by 2050 as its ice pack habitat disintegrates under current trends.US Geological Survey, “USGS Science to Inform U.S. Fish & Wildlife Service Decision Making on Polar Bears, Executive Summary,” accessed January 27, 2011, www.usgs.gov/newsroom/special/polar_bears/docs/executive_summary.pdf. Warmer waters will also increase the range of cholera and other diseases and pests.World Health Organization, “Cholera,” June 2010, accessed August 19, 2010, www.who.int/mediacentre/factsheets/fs107/en/index.html.
At the same time that humans have increased production of GHGs, they have decreased the ability of the earth’s ecosystems to reabsorb those gases. Deforestation and conversion of land from vegetation to built structures reduces the size of so-called carbon sinks. Moreover, conventional building materials such as pavement contribute to local areas of increased temperature, called heat islands, which in the evenings can be 12°C (22°F) hotter than surrounding areas. These elevated local temperatures further exacerbate the problems of climate change for communities through energy demand, higher air-conditioning costs, and heat-related health problems.US Environmental Protection Agency, “Heat Island Effect,” accessed January 27, 2011, www.epa.gov/heatisland.
By impairing natural systems, climate change impairs social systems. A shift in climate would alter distributions of population, natural resources, and political power. Droughts and rising seas that inundate populous coastal areas would force migration on a large scale. Unusually severe weather has already increased costs and death tolls from hurricanes, floods, heat waves, and other natural disasters. Melting Arctic ice packs have also led countries to scramble to discover and dominate possible new shipping routes. When the chairman of the Norwegian Nobel Committee awarded the 2007 Nobel Peace Prize to the IPCC and Al Gore, he said, “A goal in our modern world must be to maintain ‘human security’ in the broadest sense.” Similarly, albeit with different interests in mind, the United States’ 2008 National Intelligence Assessment, which analyzes emerging threats to national security, focused specifically on climate change.Ole Danbolt Mjøs, “Award Ceremony Speech” (presentation speech for the 2007 Nobel Peace Prize, Oslo, Norway, December 10, 2007), accessed January 7, 2011, http://nobelprize.org/nobel_prizes/peace/laureates/2007/presentation-speech.html.
Scientists have tried to define acceptable atmospheric concentrations of CO2 or temperature rises that would still avert the worst consequences of global warming while accepting that we are unlikely to undo our changes entirely. NASA scientists and others have focused on the target of 350 parts per million (ppm) of CO2 in the atmosphere.James Hansen, Makiko Sato, Pushker Kharecha, David Beerling, Valerie Masson-Delmotte, Mark Pagani, Maureen Raymo, Dana L. Royer, and James C. Zachos, “Target Atmospheric CO2: Where Should Humanity Aim?” The Open Atmospheric Science Journal 2 (2008): 217–31. Their paleoclimatological data suggest that a doubling of CO2 in the atmosphere, which is well within some IPCC scenarios for 2100, would likely increase the global temperature by 6°C (11°F). Atmospheric CO2 levels, however, passed 350 ppm in 1990 and reached 388 ppm by early 2010. This concentration will continue to rise rapidly as emissions accumulate in the atmosphere. Worse, even if the CO2 concentration stabilizes, temperatures will continue to rise for some centuries, much the way a pan on a stove keeps absorbing heat even if the flame is lowered. Hence scientists have begun to suggest that anything less than zero net emissions by 2050 will be too little, too late; policy makers have yet to embrace such aggressive action.H. Damon Matthews and Ken Caldeira, “Stabilizing Climate Requires Near-Zero Emissions,” Geophysical Research Letters 35, no. 4: L04705 (2008), 1–5.
International and US Policy Response
The primary international policy response to climate change was the United Nations Framework Convention on Climate Change (UNFCCC). The convention was adopted in May 1992 and became the first binding international legal instrument dealing directly with climate change. It was presented for signature at the Earth Summit in Rio de Janeiro and went into force in March 1994 with signatures from 166 countries. By 2010 the convention had been accepted by 193 countries.United Nations Framework Convention on Climate Change, “Status of Ratification of the Convention,” accessed January 27, 2011, http://unfccc.int/kyoto_protocol/status_of_ratification/items/2613.php. UNFCCC signatories met in 1997 in Kyoto and agreed to a schedule of reduction targets known as the Kyoto Protocol. Industrialized countries committed to reducing emissions of specific GHGs, averaged over 2008–12, to 5 percent below 1990 levels. The European Union (EU) committed to an 8 percent reduction and the United States to 7 percent. Other industrialized countries agreed to lesser reductions or to hold their emissions constant, while developing countries made no commitments but hoped to industrialize more cleanly than their predecessors. Partly to help developing countries, the Kyoto Protocol also created a market for trading GHG emission allowances. If one country developed a carbon sink, such as by planting a forest, another country could buy the amount of carbon sequestered and use it to negate the equivalent amount of its own emissions.
The Kyoto Protocol has ultimately suffered from a lack of political will in the United States and abroad. The United States signed it, but the Senate never ratified it. US President George W. Bush backed away from the emission reduction targets and eventually rejected them entirely. By the time he took office in 2001, a 7 percent reduction from 1990 levels for the United States would have translated into a 30 percent reduction from 2001 levels. US GHG emissions, instead of declining, rose 14 percent from 1990 to 2008 (see Figure 5.8 for related energy consumption).US Environmental Protection Agency, 2010 Greenhouse Gas Inventory Report (Washington, DC: US Environmental Protection Agency, 2010), accessed January 29, 2011, http://www.epa.gov/climatechange/emissions/downloads10/US-GHG -Inventory-2010_ExecutiveSummary.pdf. Almost all other Kyoto signatories will also fail to meet their goals. The EU, in contrast, is on track to meet or exceed its Kyoto targets.European Union, “Climate Change: Progress Report Shows EU on Track to Meet or Over-Achieve Kyoto Emissions Target,” news release, November 12, 2009, accessed August 19, 2010, http://europa.eu/rapid/pressReleasesAction.do?reference=IP/09/1703&format=HTML&aged=0&language=EN&guiLanguage=en. GHG pollution allowances for major stationary sources have been traded through the EU Emissions Trading System since 2005. The consensus in Europe is that the Kyoto Protocol is necessary and action is required to reduce GHGs.
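The widening gap described above can be restated as simple arithmetic. In the sketch below, the 1990–2001 growth figure is an illustrative assumption chosen to reproduce the roughly 30 percent cut quoted in the text, not official inventory data:

```python
# Sketch of the Kyoto arithmetic described above (illustrative only).
# Normalize 1990 US GHG emissions to an index of 1.0.
e_1990 = 1.0
kyoto_target = e_1990 * (1 - 0.07)       # 7% below 1990 levels = 0.93

# Assumption: emissions grew by roughly a third between 1990 and 2001,
# which is what would make the target a ~30% cut from 2001 levels.
e_2001 = e_1990 * 1.33
cut_from_2001 = 1 - kyoto_target / e_2001
print(f"Required cut from 2001 levels: {cut_from_2001:.0%}")   # ~30%

# The text reports that actual emissions rose 14% from 1990 to 2008.
e_2008 = e_1990 * 1.14
overshoot = (e_2008 - kyoto_target) / kyoto_target
print(f"2008 emissions exceeded the Kyoto target by {overshoot:.0%}")
```

The general point is that a fixed percentage cut from a past baseline becomes a steeper cut from the present the longer emissions keep growing.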
The Kyoto Protocol expires in 2012, so meetings have begun to negotiate new goals. In December 2007, UNFCCC countries met in Bali to discuss a successor treaty. The conference made little headway, and countries met again in December 2009 in Copenhagen. That conference again failed to generate legally binding reduction goals, but the countries confirmed the dangers of climate change and agreed to strive to limit temperature increases to no more than 2°C total. A subsequent meeting was held in Cancun, Mexico, in late 2010.
Individual countries and US states and agencies have acted, nonetheless, in the absence of broader leadership. In 2007, EU countries set their own future emissions reduction goals, the so-called 20-20-20 strategy of reducing emissions 20 percent from 1990 levels by 2020 while reducing energy demand 20 percent through efficiency and generating 20 percent of energy from renewable resources. In January 2008 the European Commission proposed binding legislation to implement the 20-20-20 targets. This “climate and energy package” was approved by the European Parliament and Council in December 2008. It became law in June 2009.European Commission, “The EU Climate and Energy Package,” accessed January 29, 2011, ec.europa.eu/clima/policies/brief/eu/package_en.htm and ec.europa.eu /environment/climat/climate_action.htm. In the Northeast United States, ten states collaborated to form the Regional Greenhouse Gas Initiative (RGGI), which caps and gradually lowers GHG emissions from power plants by 10 percent from 2009 to 2018. A similar program, the Western Climate Initiative, is being prepared by several western US states and Canadian provinces, and California’s Assembly Bill 32, the Global Warming Solutions Act, set a state GHG emissions limit for 2020.California Environmental Protection Agency Air Resources Board, “Assembly Bill 32: Global Warming Solutions Act,” accessed August 19, 2010, http://www.arb.ca.gov/cc/ab32/ab32.htm. Likewise, the federal government under President Barack Obama committed to reducing its emissions, while the US Environmental Protection Agency (EPA), in response to a 2007 lawsuit led by the state of Massachusetts, prepared to regulate GHGs under the Clean Air Act.
On December 23, 2010, the New York Times reported, “The Environmental Protection Agency announced a timetable on Thursday for issuing rules limiting greenhouse gas emissions from power plants and oil refineries, signaling a resolve to press ahead on such regulation even as it faces stiffening opposition in Congress. The agency said it would propose performance standards for new and refurbished power plants next July, with final rules to be issued in May 2012.”Matthew L. Wald, “E.P.A. Says It Will Press on With Greenhouse Gas Regulation,” New York Times, December 23, 2010, www.nytimes.com/2010/12/24/science/earth/24epa.html?_r=1&ref=environmentalprotectionagency.
Members of Congress, however, have threatened to curtail the EPA’s power to do so, either by altering the procedures for New Source Review that would require carbon controls or by legislatively decreeing that global warming does not endanger human health.“Coal State Senators Battle EPA to Control Greenhouse Gases,” Environmental News Service, February 23, 2010, accessed January 7, 2011, www.ens-newswire.com/ens/feb2010/2010-02-23-093.html; Juliet Eilperin and David A. Fahrenthold, “Lawmakers Move to Restrain EPA on Climate Change,” Washington Post, March 5, 2010, accessed January 7, 2011, www.washingtonpost.com/wp-dyn/content/article/2010/03/04/AR2010030404715.htm. In contrast, one bill to combat climate change would have reduced US emissions by 80 percent from 2005 levels by 2050. It passed the House of Representatives in 2009 but failed to make it to a Senate vote.
Corporate Response and Opportunity
Certain industries are more vulnerable than others to the economic impacts of climate change. Industries that are highly dependent on fossil fuels or are high CO2 emitters, such as oil and gas companies, cement producers, automobile manufacturers, airlines, and power plant operators, are closely watching legislation related to GHGs. The reinsurance industry, which over the past several years has taken large financial losses due to extreme weather events, is deeply concerned about global climate change and liabilities for its impacts.
Given the potential costs of ignoring climate change, the costs of addressing it appear rather minimal. In 2006, the UK Treasury released the Stern Review on the Economics of Climate Change. The report estimated that the most immediate effects of global warming would cause damages of “at least 5% of global GDP each year, now and forever. If a wider range of risks and impacts is taken into account, the estimates of damage could rise to 20% of GDP or more.” Actions to mitigate the change, in contrast, would cost only about 1 percent of global GDP between 2010 and 2030.Lord Stern, “Executive Summary,” in Stern Review on the Economics of Climate Change (London: HM Treasury, 2006), 1, accessed January 7, 2011, http://www.hm-treasury.gov.uk/sternreview_index.htm.
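The Stern Review’s comparison can be restated numerically using only the figures quoted above:

```python
# Stern Review estimates, expressed as shares of global GDP per year.
damage_low, damage_high = 0.05, 0.20    # "at least 5%", up to "20% or more"
mitigation_cost = 0.01                  # ~1% of global GDP, 2010-2030

# Ratio of avoided damages to mitigation spending.
print(round(damage_low / mitigation_cost))    # 5x
print(round(damage_high / mitigation_cost))   # 20x
```

On these estimates, every unit of GDP spent on mitigation avoids roughly five to twenty units of annual damages, which is the core of the report’s argument for early action.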
Corporate reactions have ranged from immediate action to reduce or eliminate GHG emissions and active engagement in carbon trading markets, at one end, to active opposition to new policies that might require changes in products or processes, at the other. Anticipatory firms are developing scenarios for potential threats and opportunities related to those policies, public opinion, and resource constraints. Among the companies actively pursuing GHG reductions, some cite financial gains from their actions. Walmart and General Electric both committed to major sustainability efforts in the first decade of the twenty-first century, as have many smaller corporations. Central to their strategies are GHG reduction tactics.
Excessive GHG emissions may reflect inefficient energy use or loss of valuable assets, such as when natural gas escapes during production or use. The Carbon Disclosure Project emerged in 2000 as a private organization to track GHG emissions for companies that volunteered to disclose their data. By 2010, over 1,500 companies belonged to the organization, and institutional investors used these and other data to screen companies for corporate social responsibility. Out of concern for good corporate citizenship and in anticipation of potential future regulation, GHG emissions trading has become a growing market involving many large corporations. The emissions trading process involves credits for renewable energy generation, carbon sequestration, and low-emission agricultural and industrial practices that are bought and sold or optioned in anticipation of variable abilities to reach emissions reduction targets. Some companies have enacted internal, competitive emissions reduction goals and trading schemes as a way to involve all corporate divisions in a search for efficiency, cleaner production methods, and identification of other opportunities for reducing their contribution to climate change.
In parallel to tracking GHG emissions, clean tech or clean commerce has become increasingly prevalent as a concept and a term to describe technologies, such as wind energy, and processes, such as more efficient electrical grids, that do not generate as much or any pollution. New investments in sustainable energy increased between 2002 and 2008, when total investments in sustainable energy projects and companies reached \$155 billion, with wind power representing the largest share at \$51.8 billion.Rohan Boyle, Chris Greenwood, Alice Hohler, Michael Liebreich, Eric Usher, Alice Tyne, and Virginia Sonntag-O’Brien, Global Trends in Sustainable Energy Investment 2009, United Nations Environment Programme, 2008, accessed January 29, 2011, sefi.unep.org/fileadmin/media/sefi/docs/publications/Global_Trends_2008.pdf. Also in 2008, sustainability-focused companies as identified by the Dow Jones Sustainability Index or Goldman Sachs SUSTAIN list outperformed their industries by 15 percent over a six-month period.Daniel Mahler, Jeremy Barker, Louis Belsand, and Otto Schulz, Green Winners (Chicago: A. T. Kearney, 2009), 2, www.atkearney.com/images/global/pdf/Green_winners.pdf.
Conclusion
Our climate may always be changing, but humans have changed it dramatically in a short time with potentially dire consequences. GHGs emitted from human activities have increased the global temperature and will continue to increase it, even if we ceased all emissions today. International policy makers have built consensus for the need to curb global climate change but have struggled to take specific, significant actions. In contrast, at a smaller scale, local governments and corporations have attempted to mitigate and adapt to an altered future. Taking a proactive stance on climate change can make good business sense.
At a minimum, strategic planning should be informed by climate change concerns and the liabilities and opportunities they create. Whether pursued by large firms or smaller companies, one important form of entrepreneurial innovation inspired by today’s climate challenges is to apply tools that shrink climate and resource footprints and systemically reduce energy and material inputs. When applied within firms and across supply chains, such tools increase profitability by lowering costs. More important, these measures can lead to innovations made visible by the effort. Opportunities for product design and process improvements that reduce climate change impact while increasing resource efficiency and consumer loyalty make clear business sense. Companies that chart a course around the most likely set of future conditions, with an eye to competitive advantage, good corporate citizenship, and stewardship of natural resources, are likely to optimize their profitability and flexibility, and hence their strategic edge, in the future.
KEY TAKEAWAYS
• Scientific consensus holds that human activity now influences the global climate.
• Greenhouse gases (GHGs), of which carbon dioxide (CO2) is predominant, trap heat through their accumulation in the atmosphere.
• Governments at all levels and corporations are designing mechanisms and strategies for addressing climate change by monetizing impacts.
• Companies are well advised to stay current with the science and analyze their liabilities and opportunities as emissions restrictions are increasingly imposed through tax or market means.
EXERCISES
1. Gradual warming of the earth’s temperature is one indication/prediction of climate scientists. What other impacts are being felt today, or are likely to be felt in the future?
2. Given the climate change trends, what social and environmental concerns appear most significant?
3. What are the implications of climate change, and of regulation of GHG emissions, for companies?
4. Under what conditions does a climate change strategy become an opportunity or otherwise make sense for a firm? | textbooks/biz/Business/Advanced_Business/Book%3A_Sustainability_Innovation_and_Entrepreneurship/05%3A_Energy_and_Climate/5.01%3A_Climate_Change.txt |
Learning Objectives
1. Understand the conditions under which entrepreneurial leaders can work inside large companies.
2. Examine how and why sustainability implementation can require working with multiple stakeholders to increase social, environmental, and business benefits.
3. Identify how to translate sustainability thinking into viable corporate strategy.
4. Illustrate how to positively pair ecosystems, climate, sustainable development, and community contribution.
The first case looks at how a young entrepreneur, recently out of graduate training, successfully built an innovative pilot effort within a large firm that develops real estate and manages ski resorts.
It might seem unlikely that a real estate developer, much less a project focused on expanding a ski resort, could provide a model of sustainable business practices, but real estate developer East West Partners (EWP) has done just that through its collaboration with a ski resort called Northstar Tahoe. Land conservation, waste reduction, and the adoption of wind energy are all part of EWP’s incorporation of environmental and community considerations into every aspect of the project. At the same time the developer realizes significant cost savings and builds a reputation that enhances its competitive advantage. This was accomplished when top leadership created the opportunity for a young manager with a newly minted MBA to integrate sustainability thinking into strategy in innovative ways.
East West Partners and the Northstar Development
East West Partners was founded in the 1970s by a group of real estate professionals working in the Richmond, Virginia, area. To “protect what we’re here to enjoy” was a founding principle for EWP. In the mid-1980s, two senior EWP partners formed autonomous divisions in North Carolina and Colorado, maintaining a commitment to community and environmental quality and a loose affiliation with the Virginia group.
In 2000, Booth Creek Holdings, Northstar ski resort’s parent company, approached EWP’s Colorado office about a joint venture to develop land owned by the resort. Their subsequent agreement created East West Partners, Tahoe. EWP’s initial decision to partner in the redevelopment of Northstar was based on the project’s positive economic potential and a sense of fit between EWP’s and Northstar’s business philosophies. The project was big. Northstar, a popular, family-oriented ski resort, owned hundreds of acres of land that could be developed into residential home sites, each with a market value of hundreds of thousands of dollars. The expansion and redevelopment of Northstar-at-Tahoe, which included a ski village with an ice rink and a massive increase in resort housing, including fractional-ownership condominiums, was expected to cost \$2.7 billion over fifteen years. EWP would get zoning approvals, develop land, and build residences and commercial properties, profiting ultimately from property sales and management.
EWP Tahoe’s chief executive, Roger Lessman, and project manager, David Tirman, reasoned that through careful design and the latest green building techniques they could develop new homes with limited environmental impact that would save money on owner operations, particularly energy and water costs. Furthermore, environmentally responsible development and a proactive approach with the local communities would enhance community relations, possibly ease government approvals, and add to the sales appeal of their properties.
By mid-2002, however, the importance of environmental performance and the level of effort necessary to incorporate it into branding and marketing had exceeded initial expectations. Within a year of helping area residents develop a new community plan, EWP discovered that a small but vocal group of citizens was unilaterally antigrowth and opposed to any development, regardless of efforts toward sustainability. It became clear to Lessman and Tirman that they would need help working with the community and establishing EWP as a resort development industry leader sensitive to local social and environmental concerns.
The Ski Resort Industry
In the early 1990s, no single ski company could claim more than 3 percent of the North American market. But industry shifts were under way, and by 2002 about 20 percent of US ski resorts captured 80 percent of skier visits. Total US ski visits in 2001–2 numbered 54,411,000, with the four largest companies accounting for about 15,000,000. The trends toward acquisitions and larger companies with multiple resorts were accelerating. So too were the industry’s awareness of and concern about global warming and the changing weather patterns that accompany it, influencing snowfall and spring melt. Because the industry depends intimately on well-functioning natural systems, is acutely weather dependent, and sells the aesthetic beauty of nature that customers travel to enjoy (and pay to surround themselves with), the term sustainability was increasingly familiar in ski resort strategy discussions.
During the 1990s the industry emphasized ski villages and on-mountain residences. The affluence of aging baby boom generation skiers and their growing affinity for amenities such as shopping, restaurant dining, and off-season recreation alternatives led to a development surge in ski area villages and mountain communities. Unfortunately, social and environmental issues developed alongside the economic windfall provided by ski area land development. The second homes and high-end shops that attracted wealthy skiers also displaced lower-income residents who lived and worked in or near resort areas. Wildlife that was dependent on the fragile mountain habitat was displaced as well.
Environmental groups issued scathing reports on the damage caused by ski area development and rated ski areas for their impacts on wildlife. In October 1998, environmental activists in Vail, Colorado, protested a ski area expansion into Canada lynx habitat by burning ski resort buildings in a \$12 million fire.Hal Clifford, “Downhill Slide,” Sierra, January/February 2003, 35, accessed January 7, 2011, www.sierraclub.org/sierra/200301/ski.asp. Elsewhere, local citizen groups pursued less radical and perhaps more effective means of protecting mountain land and communities through actions that blocked, delayed, and limited development plan approvals by local zoning boards. In the California market, land developers faced very difficult government approval processes. Local government agencies and citizens were key players who could block or supply approvals for land development plans.
EWP’s Approach
The proactive approach that EWP adopted—engaging all relevant actors in an open process—had both benefits and drawbacks. It seemed that a small group of citizens would inevitably oppose development of any kind, and keeping that group informed might not have been in a developer’s best interest. On the other hand, a majority of nongovernmental organizations (NGOs) and local residents were likely to see the merit of socially and environmentally sustainable development, which argued for EWP’s full disclosure of its plans with sustainability considerations factored in throughout. The trust of locals, won through an open and transparent planning process, seemed to speed approvals and inform and even attract customers. EWP’s decision was to proceed with the sustainability-infused strategy and accept the risk that construction delays related to its proactive approach could cause added expenses, potentially overwhelming the benefits of goodwill, market acceptance, and premium pricing.
New Leadership Needed
EWP executives knew that environmental concerns were high on the list of factors they should consider in the Northstar development project given the area’s high sensitivity to environmental health and preservation issues. Not only were prospective buyers more environmentally aware, but also, in the California market, land developers faced a very difficult government approval process relative to that in other states.
To address these concerns, in the summer of 2002 Lessman hired Aaron Revere as director of environmental initiatives and made him responsible for ensuring that no opportunity for environmental sustainability was overlooked in building and operating the resort consortium. Revere, a recent University of Virginia environmental science and Darden School of Business MBA graduate, made it clear to subcontractors and materials suppliers that attempts to substitute techniques or materials that circumvented environmental design features would not be tolerated. With complete top management support, Revere’s efforts met with little or no internal resistance. Coworkers wanted to help preserve the natural beauty of the areas they worked in and took a strong interest in new methods for reducing environmental impact.
In the new development model Revere proposed, sustainability would be a defining criterion from the outset. He presented top management with a business plan for making environmental amenities a central platform that differentiated EWP’s project designs. He developed sustainability guidelines and outlined a strategy for making the Tahoe projects’ environmental criteria a model for design and marketing. EWP would streamline government approvals by meeting with community stakeholders and outlining EWP’s program for corporate responsibility before a project began. Contractors, subcontractors, suppliers, and maintenance services interested in working with an EWP project would know as much about a project’s environmental and social criteria as they did about its economics. Marketing and sales personnel would be educated about the sustainability qualities of the project from the start and were expected to use those qualities to help generate sales. As the story unfolded, early tests of EWP’s ability to translate ideals into concrete actions with measurable results came quickly.
A Cornerstone of Sustainable Development: LEED Certification
Revere was pleased to find that other top employees, particularly Northstar project manager David Tirman, had already written of EWP’s intent to make environmental sustainability a key feature. The Leadership in Energy and Environmental Design (LEED) green building certification served as a cornerstone in these efforts. The LEED system was the result of a collaborative panel of respected green building specialists convened by the US Green Building Council (USGBC). The USGBC was formed in 1993 to address the growing US interest in sustainable building technology. The group was associated with the American Institute of Architects (AIA), the leading US architectural design organization. USGBC created the LEED system to provide unambiguous standards that would allow purchasers and end users to determine the validity of environmental claims made by builders and developers. Additionally, LEED provided conscientious industry players with a marketing tool that differentiated their products according to their efforts to minimize adverse health and environmental impacts while maintaining high standards for building quality and livability.
EWP expected to be among the largest builders of LEED-certified projects as that certification system branched into residential buildings. EWP encouraged customers who bought undeveloped lots to use LEED specifications and was offering guidelines and recommended suppliers and architects. By 2006, LEED certification was sought for all Northstar structures.
Successful projects implemented with LEED certification by 2007 included careful dismantling of the clock tower building at Northstar. EWP worked with the nonprofit group Institute for Local Self-Reliance (ILSR) to develop a deconstruction and sales strategy for the assets. Revere, who with three other EWP employees had become a LEED-certified practitioner, documented the percentage of waste diverted from the landfill, energy savings, and CO2 offset credits that would result in tax benefits to EWP.
EWP’s renovation of Sunset’s restaurant on the shore of Lake Tahoe was already under way when Revere was hired. Revere nevertheless wanted to pursue LEED certification for every possible Tahoe Mountain Resorts structure. He soon became a familiar figure at the restaurant, finding design changes, products, and processes that captured environmentally effective building opportunities in the simplest and most efficient ways. His presence on the job enabled him to see new opportunities: a system for dispensing nonpolluting cleaning chemicals was installed; “gray water” from sinks was drained separately, run through a special coagulation and filtration system, and reused to water landscaping plants outside the restaurant; and sawdust from sanding the recycled redwood decking was captured and prevented from entering Lake Tahoe.
The end result of Revere’s efforts and the enthusiastic participation of the architect, contractors, workers, and even the chefs was the first restaurant renovation to receive LEED certification and a marketing tool that appealed to the resort’s environmentally aware clientele. By the time the renovation was completed, Revere estimated that the expense of seeking superior environmental performance was a negligible part of total renovation cost. Savings on operations—due to low energy-use lighting, maximum use of daylight and air circulation, natural cooling, and superior insulation—were expected to more than pay for the additional cost within the first two to three years.
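The payback reasoning Revere applied can be sketched with hypothetical figures. The dollar amounts below are assumptions for illustration only; the case itself reports just a "negligible" premium recovered within two to three years:

```python
# Simple-payback sketch for a green renovation (hypothetical numbers).
def simple_payback_years(green_premium: float, annual_savings: float) -> float:
    """Years for annual operating savings to recover the up-front premium."""
    return green_premium / annual_savings

# Assumption: a $30,000 green premium recovered through $12,000/year in
# energy and water savings from efficient lighting, daylighting, natural
# cooling, and superior insulation.
print(simple_payback_years(30_000, 12_000))   # 2.5 years, within the
                                              # 2-3 year range cited
```

Simple payback ignores discounting; over a building's multidecade life, savings that continue after the payback point are pure operating gains.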
Conservation and Development: Building Partnerships through an Oxymoron
While the pursuit of LEED certification for buildings was an excellent step toward reducing environmental impact, Revere and EWP management knew that they would have to do more to persuade the local community of their commitment. In 2002, the problem of habitat degradation from ski areas became the topic of considerable negative press. The environmental group Ski Area Citizens’ Coalition (SACC) published claims that ski areas had transitioned from economically marginal winter recreation facilities to year-round resorts with premium real estate developments, mostly without sensitivity to environmental and social issues. The group went on to rate several prominent ski areas on environmental concerns, issuing grades from A to F, on its website.Ski Area Citizens’ Coalition, “Welcome to the 2011 Ski Area Report Card,” accessed January 7, 2011, http://www.skiareacitizens.com.
Since the SACC weighted its ratings heavily on habitat destruction, and new construction necessarily destroyed habitat, Northstar, which planned a 200-acre expansion of its ski area, a 21-acre village and a 345-acre subdivision, fared poorly. While other ski areas with more land and larger residential areas had disturbed more habitats, the SACC viewed past development as “water over the dam.” In the eyes of the group, Northstar’s planned expansion of both ski trails and housing overwhelmed any possible sustainable development efforts. Though the SACC rating would probably have little if any impact on the number of skiers visiting Northstar or the number of new homes sold, EWP executives were nevertheless annoyed. They were working hard to be good stewards of the land, determined to set an example for profitable, socially and environmentally responsible development and operations without giving up their planned projects.
Rather than ignore the SACC rating and environmentalists’ concerns about development of any wilderness area, EWP management, under Aaron Revere’s leadership, began an open and direct dialogue with conservation groups such as Sierra Watch and the Mountain Area Preservation Foundation. In March 2005, the groups reached what many termed as a precedent-setting agreement to limit Northstar’s development of its eight thousand acres of land to fewer than eight hundred acres. In addition, the agreement required a transfer fee on all Northstar real estate sales to be used to purchase and protect sensitive wildlife habitat in the Martis Valley area of Tahoe. The fees were expected to total more than \$30 million for the Martis Valley alone. In contrast, the previous two state conservation bonds raised \$33 million for the entire Sierra mountain range.
In addition, the agreement called for a “habitat conservation plan” for the more than seven thousand acres of Northstar land not earmarked for residential and commercial development. EWP viewed that agreement as having dual benefits. Through the agreement, environmental and community groups dropped their opposition to the development projects proposed by Northstar, and a large tract of land was protected for the foreseeable future. The additional revenue generated for the purchase of more protected acreage allowed EWP to do more than simply responsibly develop land. Through the strategic intent to develop highly desirable and environmentally sustainable properties, the company had designed a new method of generating funds for the protection of the natural environment that is by definition key to its properties’ success.
Measuring Success and Making a Difference
Aaron Revere’s definition of his job with EWP included proving wherever possible the commercial viability of “doing the right thing.” What preserved and enhanced the natural environmental systems on which the resort depended would serve the longer-term economic interests of the owner. But Revere was interested in the quantitative gains in the short and intermediate terms. He wanted to add to the growing pool of data in the ski industry on the cost differentials between typical construction and development practices and those that strived to incorporate sustainable design elements. Tahoe Mountain Resorts provided an ideal opportunity for tracking improvements and measuring the economic benefits that sustainable practices brought to the company. Metrics included biodiversity/natural capital (ecosystem, flora and fauna, and rare species assessments), air and water quality, and water and energy use. Revere’s strategy included building an environmental initiation team within EWP/Northstar. He also sought early adopters in both Tahoe Mountain Resorts and nearby Booth Creek who would build sustainability into the corporate culture and brand. Sales and marketing people were encouraged to view sustainability features as what he termed “cooler and sexier” selling points that could command a premium price. Revere used weekly e-mail advisories to help keep implementation ideas fresh in the minds of his coworkers. He wanted to put local and organic food items on the menus of Tahoe Mountain Resort restaurants and eliminate the serving of threatened species such as Chilean sea bass and swordfish—the idea was to be consistent and authentic across operations. 
Advisories sent to colleagues included the following: “Consider permeable paving stones or grass instead of asphalt, stockpiling snow from road-clearing above ‘sinks’ that would replenish aquifers, preformed walls, VOC [volatile organic compound]-free paints, stains, and sealants, water-conserving sensors on faucets and lights, and recyclable carpeting.”Andrea Larson, East West Partners: Sustainable Business Strategy in Real Estate and Ski Resorts, UVA-ENT-0093 (Charlottesville: Darden Business Publishing, University of Virginia, October 21, 2008).
The California Waste Management Board awarded EWP its Waste Reduction Awards Program’s highest honor for eight consecutive years (1997–2004). Describing EWP, the board stated, “To date, East West Partners has achieved successful and unique waste reduction and recycling activities within its Coyote Moon golf course operations, Wild Goose restaurant operations, general office operations, and the planning of Old Greenwood and the Northstar Ski Village. From May 2002 to May 2003, East West Partners successfully diverted an estimated 12.5 tons of material from landfill. These efforts to ‘remove the concept of waste’ from their company vocabulary saved East West Partners thousands of dollars.”Andrea Larson, East West Partners: Sustainable Business Strategy in Real Estate and Ski Resorts, UVA-ENT-0093 (Charlottesville: Darden Business Publishing, University of Virginia, October 21, 2008).
Under Revere’s direction, EWP achieved Audubon International’s Gold Level certification for the Gray’s Crossing Golf Course. Only three other golf courses in the nation had achieved this status for exceptional environmental sensitivity in the design and operations of both the facility and the community that surrounds it. Working with Revere and EWP’s hand-picked contractors, the Audubon sustainable development experts were sufficiently impressed by the company’s sincere efforts on sustainability as a strategic theme that they offered to work with EWP to make the redevelopment of a second course, the Old Greenwood Golf Course, a Gold Level project as well. Sustainable design principles applied to golf courses created significant cost and environmental savings, requiring only 50 percent as much water and fertilizer as conventional courses. Typical of the myriad implementation choices made across Revere’s projects, cost savings, allocation of precious water to better purposes, and a halving of synthetic chemical use merged in what was ultimately seen as just good business.
Additional Reading
1. Auden Schendler, Aspen Skiing Company’s Testimony to the US House of Representatives, Committee on Natural Resources, Subcommittee on Energy and Mineral Resources, Oversight Hearing; “Towards a Clean Energy Future: Energy Policy and Climate Change on Public Lands,” March 15, 2007, Aspen Skiing Company, www.aspensnowmass.com/environment/images/ASC_House_Climate_Testimony.pdf.
2. Don C. Smith, “Greening the Piste,” Refocus (November–December 2004): 28–30. Article is available through the Darden Library search engine Science Direct, www.sciencedirect.com/science?_ob=MImg&_imagekey=B73D8-4F26XDH-Y-1&_cdi=11464&_user=709071&_orig=search&_coverDate=11%2F01%2F2004&_sk=999949993&view=c&wchp=dGLzVzz-zSkWz&md5=cc88a8caa01db6edb009bbed8bbca727&ie=/sdarticle.pdf.
KEY TAKEAWAYS
• Climate change is already influencing mountain ice packs and snowfall patterns, shortening ski seasons, and requiring ski resorts to adapt their strategies.
• Sincere efforts with stakeholders can create opportunities and help generate creative solutions.
• A committed, determined, educated entrepreneurial individual can create change within a large firm.
• Well-implemented sustainability concepts deliver concrete business benefits, both operational and strategic.
EXERCISES
1. What factors drove EWP to incorporate sustainability approaches into its strategy?
2. What are the roles of climate and climate change in shaping strategy for this company at the Tahoe location?
3. What are the changes Aaron Revere instituted? How did they contribute to operations and strategy for the firm? What learning could be transferred to other parts of the parent company’s activities?
4. Given the tasks Aaron Revere had when he began his job, identify no less than five of the most significant challenges he faced in this job. Use the case information, your knowledge of business, and your own experience and imagination to anticipate what you believe Aaron would tell you were his major challenges.
5. Prepare an analysis of key factors that explain Revere’s success. Come to class ready to present your analysis and defend your argument.
Learning Objectives
1. Understand how measurable sustainability goals can drive business decisions.
2. Explain how projects within a company can contribute to larger changes in corporate culture and sustainability.
The second case, Frito-Lay (PepsiCo), examines innovative activity that has been ongoing for several years at a manufacturing facility in Arizona. Large firms typically struggle to implement significant change, yet this example shows how established companies can take steps that ultimately create innovative and systemic outcomes guided by sustainability principles that benefit multiple stakeholders.
It was late 2007, and Al Halvorsen had assembled his team of managers from across Frito-Lay North America (FLNA) to make a final decision on an ambitious proposal to take one of its nearly forty manufacturing plants “off the grid”The expression “off the grid” means reducing or eliminating a facility’s reliance on the electricity and natural gas grids and on water utilities for production inputs. through the installation of cutting-edge energy- and water-saving technologies. After a decade of successful initiatives to improve the productivity of operations and to reduce the energy and other resources used in the production of the company’s snack products, senior managers had decided that it was time to take their efforts to the next level.
Frito-Lay’s resource conservation initiatives started in the late 1990s. Company managers recognized potential operating challenges as they faced rising utility rates for water, electricity, and natural gas; increasing resource constraints; and expected government-imposed limits on greenhouse gas (GHG) emissions. These challenges had implications for the company’s ability to deliver sustained growth to its shareholders.
The programs the company put in place resulted in a decade of efficiency improvements, leading to incremental reductions in fuel consumption, water consumption, and GHG emissions. Each project’s implementation helped the operations and engineering teams within the organization grow their institutional knowledge and expertise in a range of emerging technologies.
By 2007, managers were starting to wonder how far they could take efforts to improve the efficiency and environmental impact of operations. Al Halvorsen was several months into a new initiative to evaluate the feasibility of bundling several innovative technologies at one manufacturing facility to maximize the use of renewable energy and dramatically reduce the consumption of water. By leveraging the expertise of the in-house engineering team, and grouping a number of technologies that had been previously piloted in isolation at other facilities, Halvorsen believed that a superefficient facility prototype would emerge that could serve as a learning laboratory for the improvement of the company’s overall manufacturing practices.
Halvorsen had asked the members of his cross-functional team of managers from across the organization to evaluate the broad scope of challenges involved with creating what was dubbed a “net-zero” facility. The project would likely push the boundaries of current financial hurdles for capital expenditure projects but would result in a number of tangible and intangible benefits. After months of evaluation, the time had come to decide whether to go forward with the project.
A Tasty History
Frito-Lay North America is one of the nation’s best-known snack-food companies, with origins in the first half of the twentieth century. In 1932, Elmer Doolin started the Frito Company after purchasing manufacturing equipment, customer accounts, and a recipe from a small corn-chip manufacturer in San Antonio, Texas. That same year, Herman W. Lay of Nashville, Tennessee, started a business distributing potato chips for the Barrett Food Products Company.
The two companies experienced dramatic growth in the ensuing years. Herman Lay expanded his distribution business into new markets and in 1939 bought the manufacturing operations of Barrett Foods to establish the H. W. Lay Corporation. The Frito Company expanded production capacity and broadened its marketing presence by opening a western division in Los Angeles in 1941. Although the war years posed significant challenges, the two companies emerged intact and won the hearts of American GIs with products that provided a tasty reminder of home.
Both companies experienced rapid growth in the postwar boom years, fueled by an ever-expanding product selection and the development of innovative distribution networks. By the mid-1950s, the H. W. Lay Corporation was the largest manufacturer of snack foods in the United States, and the Frito Company had expanded its reach into all forty-eight states. As the two companies expanded nationally, they developed cooperative franchise arrangements. In 1961, after several years of collaboration, the companies merged to form Frito-Lay Inc., the nation’s largest snack-food company.
In the years following the creation of Frito-Lay, the company continued to experience rapid growth and changes in its ownership structure. In 1965, a merger with Pepsi-Cola brought together two of the nation’s leading snack and beverage companies under one roof. The resulting parent, PepsiCo Inc., was one of the world’s leading food companies in 2007 and a consistent presence on Fortune’s “America’s Most Admired Companies” list. The company includes a number of other iconic brands such as Tropicana juices, Gatorade sports drinks, and Quaker foods. (See Figure 5.9 for a diagram of PepsiCo’s business units.)
By 2007, the Frito-Lay business unit owned more than fifteen brands that each grossed more than \$100 million in annual sales. The most well-known brands included Lay’s potato chips, Fritos corn chips, Cheetos cheese-flavored snacks, Ruffles potato chips, Doritos and Tostitos tortilla chips, Baked Lay’s potato crisps, SunChips multigrain snacks, and Rold Gold pretzels.
The Vision for a More Sustainable Snack Company
By the 1990s, PepsiCo’s Frito-Lay business unit was experiencing healthy growth in earnings and was continuing to expand internationally. In the United States and Canada, Frito-Lay North America was operating more than forty manufacturing facilities, hundreds of distribution centers and sales offices, and a sizeable fleet of delivery vehicles. As the company grew, the costs associated with operating these assets increased as well.
Increasing resource costs, fuel price volatility, and emerging concerns about future resource availability started to worry managers during this time period. Members of the environmental compliance group took the initiative and expanded their traditional regulatory compliance role to also focus proactively on resource conservation as a cost-reduction strategy. Later, a resource conservation and energy team was formed at Frito-Lay’s Plano, Texas, headquarters to coordinate efficiency initiatives across the portfolio of manufacturing and distribution facilities. At the facility level, “Green Teams” and “Energy Teams,” consisting of plant operators and line workers, assembled to closely monitor daily energy and water usage and to identify and implement productivity-boosting resource conservation projects.
Initial results of the resource conservation program were positive, with projects paying back well within the corporate financial benchmark of two to three years and achieving incremental reductions in energy and water use. The company’s senior management, including then CEO Al Bru, took notice of these results and set the stage for a more ambitious program at a time when competitors were only dabbling in the implementation of more efficient business processes.
In 1999, Senior Vice President of Operations Jim Rich challenged the team to expand its efforts into a company-wide effort to reduce resource use and costs. Managers at headquarters defined a set of stretch goals that, if achieved, would take the company’s efforts to the cutting edge of what was feasible on the basis of available technologies while still meeting corporate financial hurdles for the approval of capital expense projects. This set of goals, affectionately known as the BHAGs (“Big Hairy Audacious Goals”),The term “Big Hairy Audacious Goals” is borrowed from James Collins and Jerry Porras’s book Built to Last: Successful Habits of Visionary Companies (New York: HarperCollins, 1997). called for the following efforts:
• A reduction in fuel consumption per pound of product (primarily natural gas) by 30 percent
• A reduction in water consumption per pound of product by 50 percent
• A reduction in electricity consumption per pound of product by 25 percent
Over the next eight years, the Resource Conservation Team and facility Green Teams set about designing, building, and implementing projects across the portfolio of FLNA facilities. Both new and established technologies were piloted, and responsibility was placed on line employees to implement improved operating practices and to monitor variances in resource usage from shift to shift. A growing group of in-house engineering experts—both at headquarters and at manufacturing facilities—oversaw these initiatives, bypassing the need to hire energy service companies (ESCOs), outside consultants often hired for these types of projects, and ensuring that FLNA developed and retained valuable institutional knowledge.
By 2007, the team estimated that it was saving the company \$55 million annually in electricity, natural gas, and water expenses, compared with 1999 usage, as a result of the projects implemented to date. Piloted technologies included photovoltaic cells, solar concentrators, landfill gas capture, sky lighting, process steam recapture, and many other energy and water efficiency measures.
In 2006, Indra Nooyi was selected as the new chairman and CEO of PepsiCo. As a thirteen-year veteran of the company, and the former CFO, she was supportive of the resource conservation initiatives at Frito-Lay and within other operating divisions. Nooyi set forth her vision for PepsiCo of “Performance with Purpose” in a speech on December 18, 2006, in New Delhi. “I am convinced that helping address societal problems is a responsibility of every business, big and small,” she said. “Financial achievement can and must go hand-in-hand with social and environmental performance.”Indra K. Nooyi, “Performance with a Purpose” (speech by PepsiCo President and CEO at the US Chamber of Commerce–India/Confederation of Indian Industry, New Delhi, India, December 18, 2006), accessed January 10, 2011, www.wbcsd.org/DocRoot/61wUYBaKS2N35f9b41ua/IKN-IndiaSpeechNum6FINAL.pdf. This statement established her triple-bottom-line vision for growth at the company.The term triple-bottom-line refers to a concept advanced by John Elkington in his book Cannibals with Forks: The Triple Bottom Line of 21st Century Business (Mankato, MN: Capstone Publishers, 1998). Companies that embrace triple-bottom-line thinking believe that to achieve sustained growth in the long term, they must demonstrate good financial, environmental, and social performance, also referred to as “sustainable business.”
In line with this new vision, and with the support of the FLNA finance team, what started as a productivity initiative began to push the boundaries of traditional business thinking about the value of “sustainable” operating practices. By the end of the twenty-first century’s first decade, all PepsiCo business units were adding environmental and resource conservation criteria to the capital expense approval process. With buy-in from the FLNA CFO, the benchmarks for capital expenditure projects were extended if a project could demonstrate additional benefits outside of traditional net present value calculations. This change was justified on the following grounds:
• New technological and manufacturing capabilities are of long-term value to the company and can result in future cost-cutting opportunities.
• Pilot projects that combine multiple technologies serve as a proof of concept for previously undiscovered operational synergies.
• Such projects are a part of overall corporate risk mitigation strategy to reduce dependence on water, energy, and raw materials in the face of resource cost pressures and an increasingly resource-constrained world.
• Sustainably manufactured products will have a place in the marketplace and will contribute to sales dollars, customer loyalty, and increased market share relative to competitors who do not innovate.
• Emerging government regulation, particularly with regard to carbon, could create additional value streams. For example, under a cap-and-trade system, projects that reduce net emissions would potentially generate carbon credits, which could be sold in a market.
• Water, electric, and natural gas inflation rates have been increasing even beyond expectations.
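The extended-hurdle logic described above can be sketched numerically. All figures below (the capital cost, annual savings, discount rate, and five-year extended payback limit) are hypothetical, chosen only to illustrate how a project that misses the standard two-to-three-year payback benchmark might still clear an extended test once strategic benefits are documented; they are not figures from the case.

```python
# Sketch of the capital-approval logic described above, with hypothetical
# numbers: a project is screened against the corporate payback benchmark,
# and the hurdle is relaxed when strategic (non-NPV) benefits are claimed.

def simple_payback_years(capex, annual_savings):
    """Years for cumulative savings to repay the initial outlay."""
    return capex / annual_savings

def npv(rate, capex, annual_savings, years):
    """Net present value of a uniform annual savings stream."""
    return -capex + sum(annual_savings / (1 + rate) ** t
                        for t in range(1, years + 1))

# Hypothetical efficiency project: $1.5M capex, $450K/year utility savings.
capex, savings = 1_500_000, 450_000
payback = simple_payback_years(capex, savings)  # about 3.3 years

# Misses the strict two-to-three-year benchmark...
meets_standard_hurdle = payback <= 3.0

# ...but qualifies under an extended horizon if the strategic benefits
# (risk mitigation, proof of concept, brand value) justify the longer wait.
meets_extended_hurdle = payback <= 5.0 and npv(0.08, capex, savings, 10) > 0

print(round(payback, 2), meets_standard_hurdle, meets_extended_hurdle)
```

The point of the sketch is that the decision rule, not the arithmetic, changed at FLNA: the same cash flows that fail a strict payback screen can pass once longer-term value streams are allowed to count.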
Measuring and Reporting GHG Emissions
A secondary benefit of FLNA’s conservation initiatives was the collection of rich data about operations, productivity, and resource usage. The efforts of each facility Energy Team to implement the corporate resource conservation program resulted in an in-depth understanding of the impact each project had on fuel and electricity consumption in the manufacturing process. Managers at headquarters were able to piece together an aggregate picture of energy consumption across the organization.
Around the same time period, managers within the environmental compliance group started to voice their opinion that the company should be documenting its success in improving the energy efficiency of its operations. During the 1990s, the issue of climate change was receiving increased attention globally—and the Clinton administration was warning that reductions in US emissions of GHGs would be necessary in the future as a part of the solution to this emerging global problem. FLNA managers believed that future climate regulation was likely and were concerned that they might be penalized relative to their competitors in the event that the government limited GHG emissions from manufacturing operations. Future emissions caps were likely to freeze a company’s emissions at their current levels or to mandate a reduction to a lower level. Managers were concerned that all the reductions in emissions made by the company prior to the establishment of a regulatory limit would be ignored. As a result, they sought out potential venues for documenting their successes.
Through dialogues with the US Environmental Protection Agency (EPA), the company learned about a new voluntary industry partnership program aimed at the disclosure and reduction of corporate emissions of GHGs. The Climate Leaders program was the flagship government initiative aimed at working with US companies to reduce GHG emissions, and it provided its partners with a number of benefits. The program, a government-sponsored forum for disclosure of emissions information, offered consulting assistance to companies in the creation of a GHG emissions inventory. In exchange for these benefits, Climate Leaders partners were required to annually disclose emissions and to set a meaningful goal and date by which they would achieve reductions.
In 2004, FLNA signed a partnership agreement with Climate Leaders—publicly disclosing its corporate emissions since 2002.The Climate Leaders program allowed individual business units or parent corporations to sign partnership agreements. In the years since FLNA signed its partnership with Climate Leaders, PepsiCo started reporting the aggregate emissions of all business units via the Carbon Disclosure Project (CDP). The emissions data presented in this case are included in the consolidated emissions reported by PepsiCo through the CDP. By joining the program, FLNA challenged itself to improve the efficiency of its operations even more. A corporate goal of reducing carbon dioxide (CO2) equivalent emissions per ton of manufactured product by 14 percent from 2002 to 2010 was included as a part of the partnership agreement. Public inventory results through 2007 are provided in Table 5.1 and include emissions from the following sources:
• Scope 1.The terms Scope 1 and Scope 2 refer to categories of greenhouse gas emissions as defined by the World Business Council for Sustainable Development/World Resource Institute Greenhouse Gas Protocol, which is the accounting standard used by Climate Leaders, the Carbon Disclosure Project, and other organizations. Scope 1 emissions are direct. Scope 2 emissions are indirect. Natural gas, coal, fuel oil, gasoline, diesel, refrigerants (hydrofluorocarbons [HFCs], perfluorocarbons [PFCs]).
• Scope 2. Purchased electricity, purchased steam.
Table 5.1 FLNA Public GHG Inventory Results, 2002–7
Year | Scope 1 Emissions (Metric Tons CO2 Eq) | Scope 2 Emissions (Metric Tons CO2 Eq) | Total Emissions (Metric Tons CO2 Eq) | Metric Tons of Product Manufactured | Normalized Total
2002 | 1,072,667 | 459,088 | 1,530,755 | 1,287,069 | 1.19
2003 | 1,081,634 | 452,812 | 1,534,446 | 1,304,939 | 1.18
2004 | 1,066,906 | 455,122 | 1,522,028 | 1,324,137 | 1.15
2005 | 1,113,061 | 464,653 | 1,577,714 | 1,401,993 | 1.13
2006 | 1,076,939 | 456,466 | 1,533,405 | 1,394,632 | 1.10
2007 (Projected) | 1,084,350 | 442,425 | 1,526,775 | 1,442,756 | 1.06
Source: PepsiCo Inc., Annual Reports, 2002–7, accessed March 14, 2011, pepsico.com/Investors/Annual-Reports.html; US EPA Climate Leaders inventory reporting, 2002–7; and Environmental Protection Agency, Climate Leaders, “Annual GHG Inventory Summary and Goal Tracking Form: Frito-Lay, Inc., 2002–2009,” accessed March 17, 2011, www.epa.gov/climateleaders/documents/inventory/Public_GHG_Inventory_FritoLay.pdf.
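The table’s last column is simply total emissions divided by metric tons of product. A short script, using only the figures from Table 5.1, confirms both the normalized column and the roughly 11 percent intensity reduction from 2002 to 2007 that the case cites later.

```python
# Recompute Table 5.1's normalized intensity (total emissions divided by
# metric tons of product) and the 2002-2007 reduction in that intensity.

totals = {  # year: (total emissions in metric tons CO2 eq, tons of product)
    2002: (1_530_755, 1_287_069),
    2003: (1_534_446, 1_304_939),
    2004: (1_522_028, 1_324_137),
    2005: (1_577_714, 1_401_993),
    2006: (1_533_405, 1_394_632),
    2007: (1_526_775, 1_442_756),  # projected
}

normalized = {y: e / p for y, (e, p) in totals.items()}
for y in sorted(normalized):
    print(y, round(normalized[y], 2))  # matches the table's last column

reduction = 1 - normalized[2007] / normalized[2002]
print(f"Intensity reduction 2002-2007: {reduction:.1%}")  # about 11 percent
```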
Taking the Next Step
By 2007, FLNA was well on its way to achieving the goal of a 14 percent reduction in normalized emissionsEmissions reduction goals are generally stated in either “absolute” or “normalized” terms. In the former, a company commits to reduce the total emissions generated over some period of time. In the latter, a commitment is made to reduce the emissions generated per some unit of production (e.g., pounds of product, units manufactured, etc.). A normalized emissions metric can illustrate increased efficiency in manufacturing a product or producing a service over time and is often preferred by businesses that are growth oriented.—having reduced emissions by 11 percent in the prior five years. Resource conservation projects had been rolled out at plants and distribution centers across North America to improve the efficiency with which products were manufactured and distributed to retailers.
Over the same seven-year period, top-line sales grew by 35 percent.Sales data are extracted from publicly available PepsiCo Inc. annual reports, 2002–7. PepsiCo, “Annual Reports,” accessed January 7, 2011, www.pepsico.com/Investors/Annual-Reports.html. As a result of the increase in sales and decrease in emissions intensity, absolute emissions, or the sum total of emissions from all sources, remained relatively flat during this period. (See Figure 5.10 for a summary of growth in sales and emissions over time.)
For most companies, this substantial reduction in emissions intensity per unit of production would be cause for celebration. Although FLNA managers were pleased with their progress, they were hopeful that future projects could reduce absolute emissions—enabling the company to meet or exceed future regulatory challenges by arresting the growth of GHG emissions while continuing to deliver sustained growth in earnings to shareholders. For the innovators at FLNA, and PepsiCo as a whole, this strategy was part of fulfilling the “Performance with a Purpose” vision set forth by their CEO.
It was time to set a new goal for the team. As they had done almost ten years before, members of the resource conservation team floated ideas about how they could push the limits of available technologies to achieve a new, more aggressive goal of cutting absolute resource usage without limiting future growth prospects. A variety of technologies was available to the team, many of which had been piloted separately at one or more facilities.
One manager asked the question, “What if we could package all these technologies together in one place? How far off the water, electricity, and natural gas grids could we take a facility?”Andrea Larson, Frito-Lay North America: The Making of a Net Zero Snack Chip, UVA-ENT-0112 (Charlottesville: Darden Business Publishing, University of Virginia, May 4, 2009). The team developed this kernel of an idea, which became the basis for a new type of facility. The vision for this net-zero facility was simple: to maximize the use of renewable energy and to dramatically reduce the consumption of water in a manufacturing plant.
Going Net Zero at Casa Grande
Planning for its pilot net-zero facility began in earnest. Rather than build a new manufacturing facility, managers selected one of the company’s existing plants for extensive upgrades. But selecting which plant to use for the pilot was in itself a challenge, due to the varying effectiveness of certain renewable technologies in different geographic regions, production line characteristics, plant size considerations, and other factors.
With the assistance of the National Renewable Energy Laboratory (NREL), members of the headquarters operations team began evaluating a preselected portfolio of seven plants on the basis of a number of key criteria. Available energy technologies were mapped over geographic facility locations to predict potential effectiveness (e.g., solar panels were more effective in sunnier locales). An existing software model was modified to determine the best combination of renewable technologies by location while minimizing life-cycle costs of the proposed projects.
The results of the NREL model, when combined with a number of other qualitative factors, pointed to the Casa Grande, Arizona, manufacturing plant as the best location to pilot the net-zero facility. Casa Grande’s desert location in the distressed Colorado River watershed made it a great candidate for water-saving technologies, and the consistent sunlight of the Southwest made it a prime facility for solar energy technologies. Approximately one hundred acres of available land on the site provided plenty of space for deploying new projects. In addition, Casa Grande was a medium-size manufacturing operation, ensuring that the project would be tested at a significant scale to produce transferable results.
Casa Grande was a manufacturing location for Lay’s potato chips, Doritos tortilla chips, Fritos corn chips, and Cheetos cheese-flavored snacks and was the planned location for a future SunChips multigrain snacks production line. Although the ingredients for each product were different, the production processes were somewhat similar. Water was used in the cleaning and processing of ingredients. Energy in the form of electricity and natural gas was used to power production equipment, heat ovens, and heat cooking oil. A summary diagram of the production process for the snacks is provided in Figure 5.11.
Per the net-zero vision, a number of new technologies were being evaluated in concert as replacements for current technologies. These proposals included the following:
• A concentrated solar heating unit. Hundreds of mirrors positioned outside the facility would concentrate and redirect solar energy to heat water in a pipe to very high temperatures. The water would be pumped into the facility and used as process steam to heat fryer oil. Frito-Lay had successfully tested this technology at a Modesto, California, plant.
• Photovoltaic solar panels to generate electricity.
• A membrane bioreactor and nanofiltration system to recover and filter processing wastewater to drinking water quality for continuous reuse in the facility.
• A biomass-burning power plant to generate process steam or electricity. Sources of biomass for the plant could include crop waste from suppliers, waste from the production process, and sediments collected in the membrane bioreactor.
Although this combination of technologies had never before been piloted at a single facility, the results from individual projects at other facilities suggested that results at Casa Grande would be very promising. Based on these past experiences, the resource conservation team expected to achieve a 75 percent reduction in water use, an 80 percent reduction in natural gas consumption, and a 90 percent reduction in purchased electricity. Approximately 80 percent of the reduction in natural gas would come through the substitution of biomass fuels. (See Table 5.2 for a summary of historical and projected resource use and production at Casa Grande.)
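Applying these reduction targets to Casa Grande’s 2007 usage and prices from Table 5.2 gives a rough sense of the annual utility spend at stake. The case notes that the operating data are disguised, so this is an order-of-magnitude illustration, not an actual company savings figure.

```python
# Rough annual savings implied by the net-zero reduction targets, applied
# to Casa Grande's 2007 usage and prices (Table 5.2; data are disguised,
# so treat the result as illustrative only).

usage_2007 = {
    "electricity": (19_873_454, 0.083),  # kWh, dollars per kWh
    "natural_gas": (386_428, 8.05),      # mmBtu, dollars per mmBtu
    "water":       (165_612, 1.53),      # kilo-gallons, dollars per kilo-gallon
}
reduction = {"electricity": 0.90, "natural_gas": 0.80, "water": 0.75}

savings = {k: qty * price * reduction[k]
           for k, (qty, price) in usage_2007.items()}
for k, v in savings.items():
    print(f"{k}: ${v:,.0f} per year")
print(f"total: ${sum(savings.values()):,.0f} per year")
```

Even on disguised numbers, the exercise shows why natural gas dominates the savings picture: at 2007 prices the gas bill is roughly twice the electricity bill, and more than ten times the water bill.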
Table 5.2 Summary of Resource Use and Production at Casa Grande, Arizona, 2002–10 (Projected)
Electricity Usage (kWh) Average per kWh Price (Dollars) Natural Gas Usage (mmBtu) Average per mmBtu Price (Dollars) Water Usage (Kilo-Gallons) Average per Kilo-Gallon Price (Dollars) Metric Tons of Product Manufactured
2002 18,000,000 0.072 350,000 4.00 150,000 1.20 45,455
2003 18,360,000 0.074 357,000 4.60 153,000 1.26 46,818
2004 18,727,200 0.076 364,140 5.29 156,060 1.32 48,223
2005 19,101,744 0.079 371,423 6.08 159,181 1.39 49,669
2006 19,483,779 0.081 378,851 7.00 162,365 1.46 51,159
2007 19,873,454 0.083 386,428 8.05 165,612 1.53 52,694
2008 (Projected) 20,270,924 0.086 394,157 9.25 168,924 1.61 54,275
2009 (Projected) 20,676,342 0.089 402,040 10.64 172,303 1.69 55,903
2010 (Projected) 21,089,869 0.091 410,081 12.24 175,749 1.77 57,580
Note: Actual operating data are disguised but directionally correct.
Source: Andrea Larson, Frito-Lay North America: The Making of a Net Zero Snack Chip, UVA-ENT-0112 (Charlottesville: Darden Business Publishing, University of Virginia, May 4, 2009).
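As a rough illustration, the expected reductions can be applied to the 2010 projections in Table 5.2 to estimate annual resource cost savings. This is a back-of-envelope sketch only: it assumes the reductions apply in full at projected 2010 prices and ignores the capital cost of the new equipment and the cost of biomass fuel.

```python
# Projected 2010 resource use and prices at Casa Grande (Table 5.2).
usage = {"electricity_kwh": 21_089_869, "gas_mmbtu": 410_081, "water_kgal": 175_749}
price = {"electricity_kwh": 0.091, "gas_mmbtu": 12.24, "water_kgal": 1.77}

# Net-zero team's expected reductions: 90% electricity, 80% gas, 75% water.
reduction = {"electricity_kwh": 0.90, "gas_mmbtu": 0.80, "water_kgal": 0.75}

# Avoided purchases, valued at projected 2010 prices.
savings = {k: usage[k] * reduction[k] * price[k] for k in usage}
total = sum(savings.values())

for k, v in savings.items():
    print(f"{k}: ${v:,.0f}")
print(f"total annual savings: ${total:,.0f}")
```

On these numbers the avoided natural gas purchases dominate, which is consistent with the team's emphasis on biomass substitution.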
Making the Call: Evaluating the Project at Casa Grande
After months of preparation and discussions, the net-zero team gathered in Plano, Texas, and via teleconference to decide the fate of the Casa Grande project. In the room were representatives from Operations, Marketing, Finance, and Public Affairs. On the phone from Arizona was Jason Gray, chief engineer for the Casa Grande facility and head of its Green Team. Leading the discussion were Al Halvorsen, the Resource Conservation Team leader, and Dave Haft, group vice president for Sustainability and Productivity.
The meeting was called to order and Halvorsen welcomed the team members, who had spent several months evaluating Casa Grande’s viability as the net-zero pilot facility. “Each of you was charged with investigating the relevant considerations on the basis of your functional areas of expertise,” Halvorsen said. “I’d like to start by going around the table and hearing the one-minute version of your thoughts and concerns before digging into the details. Let’s begin by hearing from the facility team.”Andrea Larson, Frito-Lay North America: The Making of a Net Zero Snack Chip, UVA-ENT-0112 (Charlottesville: Darden Business Publishing, University of Virginia, May 4, 2009). Unless otherwise specified, quotations in this section are from this source.
Each of the managers shared his or her synopsis.
Jason Gray, chief facility engineer at Casa Grande, said,
There’s a strong interest among the Green Team and our line workers about the possibility of being the proving ground for a new company-wide environmental initiative. But we need to recognize the potential challenges associated with layering in all these technologies together at once. In the past, our efficiency-related projects have involved proven technologies and were spread out incrementally over time. These projects will hit in rapid succession. That being said—we’ve always rallied around a challenge in the past. I imagine that we’ll hit a few snags on the way, but we’re up for it.
Larry Perry, group manager for environmental compliance and engineering, said,
On the whole, we are very optimistic about the reductions in energy and water usage that can be achieved as a result of the proposed mix of technologies at the facility. These reductions will have a direct impact on our bottom line, taking operating costs out of the equation and further protecting the company against future spikes in resource prices. In addition, our improved energy management will yield significant reductions in greenhouse gas emissions—perhaps even opening the door for our first absolute reductions of company-wide emissions. Although the carbon numbers are not yet finalized, we are working to understand the potential financial implications if future government regulations are imposed.
Anne Smith, brand manager, said,
Casa Grande is the proposed site of a new SunChips manufacturing line. Although this line won’t account for all our production of SunChips snacks, it could strengthen our existing messaging tying the brand to our solar-energy-driven manufacturing initiatives. While we are optimistic that our sustainable manufacturing initiatives will drive increased sales and consumer brand loyalty, we have been unable to directly quantify the impact to our top line. As always, although we want to share our successes with the consumer, we want to continue to make marketing decisions that will not be construed as “green-washing.”
Bill Franklin, financial analyst, said,
I’ve put together a discounted cash-flow model for the proposed capital expense projects, and over the long term we just clear the hurdle. Although this is an NPV-positive project, we’re a few years beyond our extended payback period for energy projects. I know there are additional value streams that are not included in my analysis. As a result, I’ve documented these qualitative benefits but have excluded any quantitative impacts from my DCF analysis.
Aurora Gonzalez, public affairs, said,
As we look to the future, we all need to be aware that potential green-washing accusations are a primary concern. We must balance the desire to communicate our positive strides, while continuing to emphasize that our efforts in sustainability are a journey with an undetermined ending point.
Al Halvorsen and Dave Haft listened attentively, aware that the decision had to accommodate the diverse perspectives and resonate strategically at the top level of the corporation. Discussion ensued, with strong opinions expressed. After the meeting ended, Halvorsen and Haft agreed to talk privately to reach a decision. An assessment of the facility’s carbon footprint would be part of that decision.
The following discussion provides background and analytic guidance for understanding carbon footprint analysis. It can be used with the preceding case to provide students with the tools to calculate the facility’s carbon footprint. The material is broadly applicable to any facility; thus, the formulas provided in this section may be useful in applying carbon footprint analysis to any company’s operations.
Corporate GHG Accounting: Carbon Footprint Analysis. This section is a reprint of Andrea Larson and William Teichman, “Corporate Greenhouse Gas Accounting: Carbon Footprint Analysis,” UVA-ENT-0113 (Charlottesville: Darden Business Publishing, University of Virginia, May 4, 2009).
For much of the twentieth century, scientists speculated that human activities, such as the widespread burning of fossil fuels and large-scale clearing of land, were causing the earth’s climate system to become unbalanced. In 1979, the United Nations took a preliminary step to address this issue when it convened the First World Climate Conference. In the years that followed, governments, scientists, and other organizations continued to debate the extent and significance of the so-called climate change phenomenon. During the 1990s, scientific consensus on climate change strengthened significantly. By the turn of the century, approximately 99 percent of peer-reviewed scientific articles on the subject agreed that human-induced climate change was a reality.See Naomi Oreskes, “Beyond the Ivory Tower: The Scientific Consensus on Climate Change,” Science 306, no. 5702 (December 3, 2004): 1686, accessed February 6, 2009, www.sciencemag.org/cgi/content/full/306/5702/1686; Cynthia Rosenzweig, David Karoly, Marta Vicarelli, Peter Neofotis, Qigang Wu, Gino Casassa, Annette Menzel, et al., “Attributing Physical and Biological Impacts to Anthropogenic Climate Change,” Nature 453 (May 15, 2008): 353–57; National Academy of Sciences Committee on the Science of Climate Change, Climate Change Science: An Analysis of Some Key Questions (Washington, DC: National Academy Press, 2001); and Al Gore, An Inconvenient Truth (New York: Viking, 2006). While modelers continued to refine their forecasts, a general consensus emerged among the governments of the world that immediate action must be taken to reduce human impacts on the climate system.A broader discussion of the history and science of climate change is beyond the scope of this note. For additional information synthesized for business students on this subject, see Climate Change, UVA-ENT-0157 (Charlottesville: Darden Business Publishing, University of Virginia, 2010).
Large numbers of businesses initially responded to the climate change issue with skepticism. The American environmental regulatory landscape of the 1970s and 1980s was tough on business, with sweeping legislative initiatives relating to air quality, water quality, and toxic waste remediation. Private industry was still reacting to this legislation when scientific consensus was building on climate change. Many companies were content to wait for scientists and government officials to reach an agreement on the best path forward before taking action or, in some instances, to directly challenge the mounting scientific evidence.
In recent years, however, a number of factors have contributed to a shift in corporate opinion. These factors include growing empirical data of human impacts on the global climate system, definitive reports by the UN Intergovernmental Panel on Climate Change, and increased media and government attention on the issue. Perhaps most significant, however, is the impact that rising energy costs and direct pressure from shareholders to disclose climate-related operating risks are having on business managers who can for the first time connect this scientific issue with financial considerations.
A number of leading companies and entrepreneurial start-ups are using the challenge of climate change as a motivating force to shift strategic direction. These companies are measuring their GHG emissions, aggressively pursuing actions that will reduce emissions, and shifting product and service offerings to meet new customer demands. In the process, they are cutting costs, reducing exposure to weather and raw material risks, and unlocking growth opportunities in the emerging markets for carbon trading.
This technical note introduces a number of concepts relating to how companies are responding to the issue of climate change, with the goal of helping business managers develop a practical understanding in several key areas. The purposes of this note are to (1) present a working language for discussing climate issues, (2) introduce the history and motivation behind corporate emissions disclosure, and (3) describe a basic calculation methodology used to estimate emissions.
Carbon, Footprints, and Offsets
As with any emerging policy issue, a vocabulary has evolved over time to support climate change discussions. Academics, policy makers, nongovernmental organizations (NGOs), and the media speak in a language that is at times confusing and foreign to the uninitiated. An exhaustive introduction of these terms is not possible in this section, but a handful of frequently used terms that are central to understanding the climate change issue in a business context are introduced in the following paragraphs.
The Greenhouse Effect
Earth’s atmosphere allows sunlight to pass through it. Some of that sunlight is absorbed by the planet’s surfaces and later re-radiated toward space. The atmosphere traps a portion of this outgoing energy, retaining it much like the glass walls of a greenhouse and maintaining a range of temperatures on the planet that can support life. Climate scientists have concluded that human activity has dramatically increased concentrations of certain gases in the atmosphere, blocking the return of solar energy to space and leading to higher average temperatures worldwide.
GHGs
The atmospheric gases that contribute to the greenhouse effect include (but are not limited to) CO2, methane (CH4), nitrous oxide (N2O), and chlorofluorocarbons (CFCs). Note that not all the gases in earth’s atmosphere are GHGs; for example, oxygen and nitrogen are widely present but do not contribute to the greenhouse effect.
Carbon
Carbon is a catchall term frequently used to describe all GHGs. The word is shorthand for carbon dioxide (CO2), which, as the most prevalent GHG, has become the standard by which emissions of other GHGs are reported. Emissions of gases such as methane are “converted” to a “CO2 equivalent” in a process similar to converting foreign currencies into a base currency for financial reporting purposes. The conversion is made on the basis of the impact of each gas once it is released into the earth’s atmosphere, as measured relative to the impact of CO2.
Footprint
A footprint is the measurement of the GHG emissions resulting from a company’s business activities over a given time period. In general, companies calculate their corporate emissions footprint for a twelve-month period. Established guidelines for GHG accounting are used to define the scope and methodology to be used in the creation of the footprint calculation. The term carbon footprint is sometimes used interchangeably with greenhouse gas inventory. In addition to enterprise-wide inventories, companies and individuals are increasingly calculating the footprint of individual products, services, events, and so forth.
Offset
In the most basic sense, an offset is an action taken by an organization or individual to counteract the emissions produced by a separate action. If, for example, a company wanted to offset the GHG emissions produced over a year at a manufacturing facility, it could either take direct actions to prevent the equivalent amount of emissions from entering the atmosphere from other activities or compensate another organization to take this action. This latter arrangement is a fundamental concept of some government-mandated emissions regulations. Within such a framework, a paper mill that switches from purchasing coal-generated electricity to generating on-site electricity from scrap-wood waste could generate offset credits and sell these credits to another company looking to offset its emissions. Offsets are known by a variety of names and are traded in both regulated (i.e., government-mandated) and unregulated (i.e., voluntary) markets. Standards for the verification of offsets continue to evolve due to questions that have been raised about the quality and validity of some products.
Carbon-Neutral
A company can theoretically be characterized as carbon-neutral if it causes no net emissions over a designated time period, meaning that for every unit of emissions released, an equivalent unit of emissions has been offset through other reduction measures. Companies that have made a carbon-neutrality commitment attempt to reduce their emissions by becoming as operationally efficient as possible and then purchasing offsets equivalent to the remaining balance of emissions each year. Although most companies today emit some level of GHGs via operations, carbon markets enable the neutralization of their environmental impact by paying another entity to reduce its emissions. In theory, such arrangements result in lower net global emissions of GHGs and thereby give companies some credibility to claim relative neutrality with regard to their impact on climate change.
Cap-and-Trade System
A number of policy solutions to the climate change challenge are currently under consideration by policy makers. A direct tax on carbon emissions is one solution. An alternative market-based policy that has received a great deal of attention in recent years is the cap-and-trade system. Under such a system, the government estimates the current level of a country’s GHG emissions and sets a cap (an acceptable ceiling) on those emissions. The cap represents a target level of emissions at or below the current level. After setting this target, the government issues emissions permits (i.e., allowances) to companies in regulated industries. The permits provide the right to emit a certain quantity of GHGs in a single year. The permits in aggregate limit emissions to the level set by the cap.
Initial permit distribution approaches range from auctions to government allocation at no cost to individual firms. In either case, following the issuance of permits, a secondary market can be created in which companies can buy and sell permits. At the end of the year, companies without sufficient permits to cover annual corporate emissions of GHGs either purchase the necessary permits in the marketplace or are required to pay a penalty. Companies who have reduced their emissions at a marginal cost lower than the market price of permits typically choose to sell their allotted permits to create additional revenue streams. To steadily reduce economy-wide emissions over time, the government lowers the cap (and thus further restricts the supply of permits) each year, forcing regulated companies to become more efficient or pay penalties. The cap-and-trade approach is touted as an efficient, market-based solution to reducing the total emissions of an economy.
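The buy-versus-abate decision described above can be sketched as a simple cost comparison. The sketch below assumes a constant marginal abatement cost per ton, which is a simplification (real abatement cost curves slope upward), and all the numbers in the usage notes are hypothetical.

```python
def compliance_cost(emissions, permits, mac, permit_price):
    """Return (strategy, cost) for one compliance period under cap-and-trade.

    emissions, permits: tons of CO2e emitted vs. permits allocated
    mac: constant marginal abatement cost per ton (a simplifying assumption)
    permit_price: market price per permit (each covers one ton)
    A negative cost means net revenue from selling surplus permits.
    """
    shortfall = emissions - permits
    if shortfall <= 0:
        # Under the cap: unused permits can be sold at the market price.
        return ("sell surplus", shortfall * permit_price)
    # Over the cap: cover the shortfall whichever way is cheaper.
    abate_cost = shortfall * mac
    buy_cost = shortfall * permit_price
    if abate_cost < buy_cost:
        return ("abate", abate_cost)
    return ("buy permits", buy_cost)
```

For example, a firm emitting 120,000 tons against a 100,000-ton allocation abates if its marginal abatement cost ($18/ton, say) is below the permit price ($25/ton), and buys permits otherwise; a firm under its cap earns revenue by selling its surplus.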
Corporate Climate Change
Corporate attitudes about climate change shifted dramatically between 2006 and 2009, with dozens of large companies announcing significant sustainability initiatives. During this time, major business periodicals such as BusinessWeek and Fortune for the first time devoted entire issues to “green” matters, and the Wall Street Journal launched an annual ECO:nomics conference to bring together corporate executives to answer questions on how their companies are solving environmental challenges. Today, a majority of large companies are measuring their carbon footprints and reporting the information to the public and shareholders through established channels. (See the discussion of the Carbon Disclosure Project later in this section.)
A number of companies that were silent or openly questioned the validity of climate science during the 1990s are now engaged in public dialogue and are finding ways to reduce emissions of GHGs. In 2007, a group including Alcoa, BP, Caterpillar, Duke Energy, DuPont, General Electric, and PG&E created the US Climate Action Partnership (USCAP) to lobby Congress to enact legislation that would significantly reduce US GHG emissions. By 2009, USCAP had added approximately twenty more prominent partners and taken steps to pressure legislators for a mandatory carbon cap-and-trade system. The organization included the Big Three US automakers, a number of major oil companies, and some leading NGOs.
In addition to financial considerations, the case for corporate action on climate change is strengthened by a number of other factors. First, the proliferation of emissions regulations around the world creates a great deal of uncertainty for US firms. A company operating in Europe, California, and New England could face three separate emissions regulatory regimes. Without a more coordinated effort on the part of the United States and other governments to create unified legislation, firms could face an even more kaleidoscopic combination of regulations. Business leaders are addressing these concerns by becoming more actively engaged in the policy debate.
A second motivator for corporate action is shareholder pressure for increased transparency on climate issues. As our understanding of climate change improves, it is clear that impacts in the natural world as well as government-imposed emissions regulations will have a tremendous effect on the way that companies operate. Climate change has emerged as a key source of risk—an uncertainty that shareholders feel entitled to more fully understand.
In 2002, a group of institutional investors united to fund the nonprofit Carbon Disclosure Project (CDP). The organization serves as a clearinghouse through which companies disclose emissions data and other qualitative information to investors. The CDP has become the industry standard for voluntary corporate emissions reporting, and each year the organization solicits survey questionnaire responses from more than three thousand firms. In 2008, three hundred institutional investors representing over \$57 trillion in managed assets supported the CDP.For details about the questionnaire, see the Carbon Disclosure Project, “Overview,” www.cdproject.net/en-US/Respond/Pages/overview.aspx.
In 2007, the CDP received survey responses from 55 percent of the companies in the Fortune 500 list. This high level of participation speaks to the seriousness with which many companies are addressing climate change.
The Greenhouse Gas Protocol
The measurement of GHG emissions is important for three reasons: (1) a complete accounting of emissions allows for voluntary disclosure of data to organizations such as the CDP, (2) it provides a data set that facilitates participation in mandatory emissions regulatory systems, and (3) it encourages the collection of key operational data that can be used to implement business improvement projects.
GHG accounting is the name given to the practice of measuring corporate emissions. Similar to generally accepted accounting principles in the financial world, it is a set of standards and principles that guide data collection and reporting in this new field. The Greenhouse Gas Protocol is one commonly accepted methodology for GHG accounting and is the basis for voluntary reporting initiatives such as the CDP. It is an ongoing initiative of the World Resources Institute and the World Business Council for Sustainable Development to provide a common standard by which companies and governments can measure and report emissions of GHGs.
The Greenhouse Gas Protocol provides critical guidance for companies attempting to create a credible inventory of emissions resulting from their operations. In particular, it explains how to do the following:
• Determine organizational boundaries. Corporate structures are complex and include wholly owned operations, joint ventures, subsidiaries, collaborations, and a number of other entities. The protocol helps managers define which elements compose the “company” for emissions quantification purposes. A large number of companies elect to include all activities over which they have “operational control” and can thus influence decision making about how business is conducted.
• Determine operational boundaries. Once managers identify which branches of the organization are to be included in the inventory, they must identify and evaluate which specific emissions sources will be included. The protocol identifies two major categories of sources:
Direct sources. These are sources owned or controlled by the company that produce emissions. Examples include boilers, furnaces, vehicles, and other production processes.
Indirect sources. These sources are not directly owned or controlled by the company but are nonetheless influenced by its actions. A good example is electricity purchased from utilities that produce indirect emissions at the power plant. Other indirect sources include employee commuting, emissions generated by suppliers, and activities that result from the customer use of products, services, or both.
• Track emissions over time. Companies must select a “base year” against which future emissions will be measured, establish an accounting cycle, and determine other aspects of how they will track emissions over time.
• Collect data and calculate emissions. The protocol provides specific guidance about how to collect source data and calculate emissions of GHGs. The next section provides an overview of these concepts.
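One way to picture the boundary-setting and tracking steps above is a small inventory structure that tags each source as direct or indirect and records a base year. The class names here are illustrative, not part of the protocol; the source quantities anticipate the cabinetmaker example later in this note.

```python
from dataclasses import dataclass, field

@dataclass
class EmissionsSource:
    name: str
    scope: str              # "direct" or "indirect", per the protocol's categories
    annual_quantity: float  # consumption over the accounting cycle
    unit: str

@dataclass
class Inventory:
    company: str
    base_year: int          # the year future emissions are measured against
    sources: list[EmissionsSource] = field(default_factory=list)

    def by_scope(self, scope: str) -> list[EmissionsSource]:
        return [s for s in self.sources if s.scope == scope]

inv = Inventory("Example Co.", base_year=2008)
inv.sources += [
    EmissionsSource("shop natural gas", "direct", 26_700, "MMBtu"),
    EmissionsSource("delivery truck gasoline", "direct", 2_455, "gallons"),
    EmissionsSource("purchased electricity", "indirect", 115_400, "kWh"),
]
```

Keeping the scope tag on each source makes it straightforward to report direct and indirect totals separately, as voluntary programs such as the CDP expect.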
A Basic Methodology for Calculating Emissions
The calculation of GHG emissions is a process that differs depending on the emissions source.Although fossil fuel combustion is one of the largest sources of anthropogenic GHG emissions, other sources include process emissions (released during chemical or manufacturing processes), landfills, wastewater, and fugitive refrigerants. For the purposes of this note, we only present energy-related emissions examples. As a general rule of thumb, a consumption quantity (fuel, electricity, etc.) is multiplied by a series of source-specific “emissions factors” to estimate the quantity of each GHG produced by the source. (See Table 5.3 for a list of relevant emissions factors by type of fuel for stationary sources, Table 5.4 for mobile sources, and Table 5.5 for electricity purchases from producers.) Each emissions factor is a measure of the average amount of a given GHG, reported in weight, that is generated from the combustion of a unit of the energy source. For example, a gallon of gasoline produces on average 8.7 kg of CO2 when combusted in a passenger vehicle engine.Time for Change, “What Is a Carbon Footprint—Definition,” accessed January 29, 2011, http://timeforchange.org/what-is-a-carbon-footprint-definition.
Table 5.3 Emissions Factors for Stationary Combustion of Fuels
Emissions Source GHG Type Emissions Factor Starting Unit Ending Unit
Natural gas CO2 52.79 MMBtu kg
Natural gas CH4 0.00475 MMBtu kg
Natural gas N2O 0.000095 MMBtu kg
Propane CO2 62.73 MMBtu kg
Propane CH4 0.01 MMBtu kg
Propane N2O 0.000601 MMBtu kg
Gasoline CO2 70.95 MMBtu kg
Gasoline CH4 0.01 MMBtu kg
Gasoline N2O 0.000601 MMBtu kg
Diesel fuel CO2 73.2 MMBtu kg
Diesel fuel CH4 0.01 MMBtu kg
Diesel fuel N2O 0.000601 MMBtu kg
Kerosene CO2 71.58 MMBtu kg
Kerosene CH4 0.01 MMBtu kg
Kerosene N2O 0.000601 MMBtu kg
Fuel oil CO2 72.42 MMBtu kg
Fuel oil CH4 0.01 MMBtu kg
Fuel oil N2O 0.000601 MMBtu kg
Source: EPA Climate Leaders GHG Inventory Protocol Core Module Guidance, October 2004, accessed February 6, 2009, www.epa.gov/climateleaders/resources/cross-sector.html.
Table 5.4 Emissions Factors for Mobile Combustion of Fuels
Emissions Source GHG Type Emissions Factor Starting Unit Ending Unit
Gasoline, cars CO2 8.79 gallons kg
Diesel, cars CO2 10.08 gallons kg
Gasoline, light trucks CO2 8.79 gallons kg
Diesel, light trucks CO2 10.08 gallons kg
Diesel, heavy trucks CO2 10.08 gallons kg
Jet fuel, airplanes CO2 9.47 gallons kg
Source: EPA Climate Leaders GHG Inventory Protocol Core Module Guidance, October 2004, accessed February 6, 2009, www.epa.gov/climateleaders/resources/cross-sector.html.
Table 5.5 Source-Specific Emissions Factors
eGRID Subregion Acronym eGRID Subregion Name CO2 Emissions Factor (lb/MWH) CH4 Emissions Factor (lb/MWH) N2O Emissions Factor (lb/MWH)
AKGD ASCC Alaska Grid 1,232.36 0.026 0.007
AKMS ASCC Miscellaneous 498.86 0.021 0.004
AZNM WECC Southwest 1,311.05 0.017 0.018
CAMX WECC California 724.12 0.030 0.008
ERCT ERCOT All 1,324.35 0.019 0.015
FRCC FRCC All 1,318.57 0.046 0.017
HIMS HICC Miscellaneous 1,514.92 0.315 0.047
HIOA HICC Oahu 1,811.98 0.109 0.024
MROE MRO East 1,834.72 0.028 0.030
MROW MRO West 1,821.84 0.028 0.031
NEWE NPCC New England 927.68 0.086 0.017
NWPP WECC Northwest 902.24 0.019 0.015
NYCW NPCC NYC/Westchester 815.45 0.036 0.005
NYLI NPCC Long Island 1,536.80 0.115 0.018
NYUP NPCC Upstate NY 720.80 0.025 0.011
RFCE RFC East 1,139.07 0.030 0.019
RFCM RFC Michigan 1,563.28 0.034 0.027
RFCW RFC West 1,537.82 0.018 0.026
RMPA WECC Rockies 1,883.08 0.023 0.029
SPNO SPP North 1,960.94 0.024 0.032
SPSO SPP South 1,658.14 0.025 0.023
SRMV SERC Mississippi Valley 1,019.74 0.024 0.012
SRMW SERC Midwest 1,830.51 0.021 0.031
SRSO SERC South 1,489.54 0.026 0.025
SRTV SERC Tennessee Valley 1,510.44 0.020 0.026
SRVC SERC Virginia/Carolina 1,134.88 0.024 0.020
Source: Andrea Larson and William Teichman, “Corporate Greenhouse Gas Accounting: Carbon Footprint Analysis,” UVA-ENT-0113 (Charlottesville: Darden Business Publishing, University of Virginia, May 4, 2009).
Because multiple GHGs are measured in the inventory process, the accounting process calculates emissions for each type of gas. As common practice, emissions of non-CO2 gases are converted to a “CO2 equivalent” to facilitate streamlined reporting of a single emissions number. In this conversion, emissions totals for a gas like methane are multiplied by a “global warming potential” to convert to a CO2 equivalent. (See Table 5.6 for a list of GHGs and their global warming potentials.)
Table 5.6 Global Warming Potentials
GHG Global Warming Potential
CO2 1
CH4 25
N2O 298
Source: EPA Climate Leaders GHG Inventory Protocol Core Module Guidance, October 2004, accessed February 6, 2009, www.epa.gov/climateleaders/resources/cross-sector.html.
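The rule of thumb (consumption quantity times a source-specific emissions factor, then times the gas's global warming potential, summed across gases) can be sketched using the natural gas factors from Table 5.3 and the GWPs from Table 5.6:

```python
# Emissions factors for stationary natural gas combustion (Table 5.3), kg per MMBtu.
EF_NATURAL_GAS = {"CO2": 52.79, "CH4": 0.00475, "N2O": 0.000095}

# Global warming potentials (Table 5.6), relative to CO2.
GWP = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2e_from_fuel(mmbtu, factors=EF_NATURAL_GAS):
    """kg CO2-equivalent from burning `mmbtu` of fuel: quantity x factor x GWP."""
    return sum(mmbtu * ef * GWP[gas] for gas, ef in factors.items())

print(round(co2e_from_fuel(1_000)))  # 1,000 MMBtu of natural gas -> 52937 kg CO2e
```

Swapping in another fuel's row from Table 5.3 (propane, diesel, and so on) reuses the same function unchanged.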
Given the scale of many companies, it is easy to become overwhelmed by the prospect of accounting for all the GHG emissions produced in a given year. In reality, quantifying the emissions of a Fortune 50 firm or a small employee-owned business involves the same process. The methodology for calculating emissions from a single facility or vehicle is the same as that used to calculate emissions for thousands of retail stores or long-haul trucks.
For the purposes of this note, we will illustrate the inventory process for a sole proprietorship. The business owner is a skilled cabinetmaker who manufactures and installs custom kitchen cabinets, bookshelves, and other high-end products for homes. She leases several thousand square feet of shop space in High Point, North Carolina, and owns a single gasoline-powered pickup truck that is used for delivering products to customers.
The business owner consults the Greenhouse Gas Protocol and identifies three emissions sources. Direct emissions sources include the gasoline-powered truck and a number of natural-gas-powered tools on the shop floor. Indirect emissions sources include the electricity that the company purchases from the local utility on a monthly basis.
The owner starts by collecting utility usage data. She reviews accounts payable records for the past twelve months to determine the quantities of fuel and electricity purchased. The records reveal total purchases of 26,700 MMBtus of natural gas; 2,455 gallons of gasoline; and 115,400 kWh of electricity. The calculations for emissions from each source are illustrated in Table 5.7.
Table 5.7 Direct and Indirect Emissions from All Sources
(a) Direct emissions from natural gas combustion
MMBtus of natural gas consumed × GHG emissions factor × Global warming potential = kg of gas
Assumption: The combustion of natural gas emits three GHGs: CO2, CH4, and N2O.
GHG MMBtus of natural gas Emissions factor Global warming potential kg CO2 equivalent
CO2 26,700 52.79 1 1,409,493
CH4 26,700 0.00475 25 3,171
N2O 26,700 0.000095 298 756
Total kg 1,413,420
(b) Direct emissions from vehicle gasoline combustion
For the purposes of this note, we assume that emissions of CH4 and N2O from the combustion of gasoline are small enough to be negligible; many technical experts take this approach when estimating emissions from gasoline combustion. The omission of calculations for CH4 and N2O is justified on the basis that the emissions factor used for CO2 assumes that 100 percent of the fuel is converted into gas during the combustion process. In reality, combustion in a gasoline engine is imperfect, and close to 99 percent of the fuel is actually converted to gases (the rest remains as solid matter). The resulting overestimation of CO2 emissions more than compensates for the omission of CH4 and N2O.
Gallons of gasoline consumed × GHG emissions factor × Global warming potential = kg of gas
Assumption: The combustion of gasoline in vehicles produces negligible amounts of GHGs other than CO2.
GHG | Gallons of gasoline | Emissions factor (kg gas/gallon) | Global warming potential | kg CO2 equivalent
CO2 | 2,455 | 8.79 | 1 | 21,579
Total kg | | | | 21,579
(c) Indirect emissions from the consumption of purchased electricity
kWh of electricity purchased × GHG emissions factor × Global warming potential = kg of gas
Assumption: The electricity the cabinetmaker’s business uses is generated in a region in which the mix of fuels used by utilities, when combusted, emits three GHGs: CO2, CH4, and N2O.
Note: Emissions factors for purchased electricity differ depending on the method of power production used by an electric utility (e.g., coal-fired boilers emit greenhouse gases, whereas hydroelectric generation does not). In the United States, region-specific emissions factors are published that reflect the mix of fuels used by electric utilities within a given region to generate electricity. An in-depth explanation of the process for deriving these emissions factors is beyond the scope of this note; however, a listing of recent regional emissions factors for purchased electricity is provided in Exhibit 3. For the purposes of this calculation, the emissions factors in Exhibit 3 were converted to an emissions factor in kg CO2/kWh.
GHG | kWh of electricity purchased | Emissions factor (kg gas/kWh) | Global warming potential | kg CO2 equivalent
CO2 | 115,400 | 0.515 | 1 | 59,405
CH4 | 115,400 | 0.00001089 | 25 | 31
N2O | 115,400 | 0.00000907 | 298 | 312
Total kg | | | | 59,748
Source: Andrea Larson and William Teichman, “Corporate Greenhouse Gas Accounting: Carbon Footprint Analysis,” UVA-ENT-0113 (Charlottesville: Darden Business Publishing, University of Virginia, May 4, 2009).
The total kilogram emissions from all three sources represent the total annual footprint for this business (subject to the boundary conditions defined in this exercise). This total is stated as “1,494,747 kg of CO2 equivalent.”
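Each of the three calculations above follows the same pattern: activity data × emissions factor × global warming potential, summed over the gases involved. Below is a minimal Python sketch of that arithmetic using the figures from tables (a)–(c); the function and variable names are ours, not from the note.

```python
# kg CO2-equivalent = activity data x emissions factor x global warming potential,
# summed over gases. GWPs are the values used in this note.
GWP = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2e_kg(activity, factors):
    """Sum kg CO2-equivalent across gases for one emissions source."""
    return sum(activity * factor * GWP[gas] for gas, factor in factors.items())

natural_gas = co2e_kg(26_700, {"CO2": 52.79, "CH4": 0.00475, "N2O": 0.000095})
gasoline = co2e_kg(2_455, {"CO2": 8.79})
electricity = co2e_kg(115_400, {"CO2": 0.515, "CH4": 0.00001089, "N2O": 0.00000907})

total_kg = natural_gas + gasoline + electricity
print(round(natural_gas), round(gasoline))  # matches tables (a) and (b)
# Note: the electricity subtotal computed from the printed 0.515 factor differs
# from table (c)'s 59,748 kg by a few dozen kilograms, because the table's CO2
# factor is rounded for display.
```

The computed total lands within about 30 kg of the stated 1,494,747 kg footprint; the small residual comes entirely from the rounded electricity factor.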
KEY TAKEAWAYS
• Efficiency improvements can lead to larger systems changes.
• Companies seek greater control over their energy and resource inputs and use to save costs, protect the environment, and improve their image.
• Firms can cooperate with and contribute to the communities in which they reside.
• Basic carbon footprints can be calculated for facilities or larger entities.
EXERCISES
1. If you are Al Halverson, what considerations are in the forefront of your mind as you consider the net-zero facility decision? If you oppose the idea, what arguments would you marshal? If you favor the decision, what is your rationale?
2. Optional: For the Casa Grande facility, calculate the metric tons of GHG emissions from electricity and natural gas usage for each year from 2002 to 2007. (Pay close attention to units when applying emissions factors.)
3. Optional: Project the estimated reduction in GHG emissions and operating cost savings that will result from the proposed net-zero project in years 2008–2010. Assume for the purposes of your analysis that all equipment upgrades are made immediately at the start of 2008.
Learning Objectives
1. Give an example of how biomimicry can be used to help solve business problems.
2. Identify the unique challenges of a sustainability-oriented start-up in a mature and conservative industry.
3. Analyze how a highly innovative company, still in the research and demonstration stage, will identify early customers and generate revenues to prove the commercial viability of the technology.
In our final case, we have the opportunity to see the early-stage challenges of a high-potential entrepreneurial venture in California. Based on the entrepreneur’s patented scientific knowledge, the firm is scaling up technology to sequester carbon emissions.
Brent Constantz had three decades of entrepreneurial experience, starting with companies built on his study of how cements form in coral reefs and seashells. Yet those same reefs and shells were threatened by ocean acidification from anthropogenic carbon dioxide (CO2) emissions (Figure 5.12). Constantz had a simple insight: if humans could make cement as marine life does (biomimicry), without burning fuel and converting minerals in high-temperature processes, then we could significantly reduce our greenhouse gas (GHG) emissions. With that idea, the Calera Corporation was born.
Calera’s goal was to make synthetic limestone and a carbonate cement, both used as major feedstocks for concrete, by mimicking nature’s low-energy process. Calera’s process aimed to precipitate carbonate cement from seawater (ideally retentate left by desalination) and combine it with a strong alkaline base. (To precipitate means to separate from a solution or suspension, in this case to form solids from an aqueous solution.) When Constantz accidentally discovered CO2 could enhance his process, he sought a source of CO2. When he brought his technology and his challenge to clean tech venture capitalist Vinod Khosla, Calera became a carbon capture and sequestration (CCS) technology company, one with massive storage potential if located proximate to point sources of pollution: power plants emitted 40 percent of US carbon dioxide in 2008 and industrial process facilities another 20 percent. Yet a high level of technical risk remained, along with unknowns about the breadth of applicability given the process’s requirement for brines and alkaline materials. Khosla, as the principal investor, shared Constantz’s vision: Calera was a high-risk, high-impact swing that would either completely change assumptions about the power and cement industries (a home run) or fail outright (a strikeout).
In two and one-half years, Calera went from small batch processing in a lab as a proof of concept to constructing a continuously operating demonstration plant that suggested the feasibility of large-scale operations. In the process Constantz continued to uncover new possibilities. Since his process stripped magnesium and calcium ions from any water charged with minerals, such as seawater, some wastewaters, and brines, it could potentially yield potable water. Could the venture provide water purification technology as well? Could it be economic? Furthermore, wherever seawater and strong bases were not available, Calera needed to replace or produce them. Consequently, Calera developed a more energy-efficient process to use saltwater to produce sodium hydroxide, the base it needed. With that technology, Calera could potentially impact the mature chlor-alkali industry. There were also environmental remediation possibilities. Calera’s initial process had used the base magnesium hydroxide that had been discarded by other companies at its Moss Landing demonstration site. In lieu of seawater, Calera could use subsurface brines, which were often left behind by oil and gas drilling as hazardous wastes. As Constantz and his growing team saw their opportunities expand, the company grew rapidly. If everything worked as hoped, Calera’s method seemed a magic sponge capable of absorbing multiple pollutants and transforming them into desirable products. The reality, though full of possibilities, was complex with many practical hurdles.
Along the way, the Calera team had identified and added to the firm’s multiple areas of expertise—often as the company ran into the complexity of a developing process. Calera also attracted a wide range of curious onlookers who could someday become prospective customers. Government agencies and other companies were eager to get in on the action. To position itself favorably, Calera needed to understand its core competencies and identify key collaborators to bring the new technology to full-scale operation at multiple sites. Simultaneously, it needed to protect its intellectual property and forge a defensible market position. As a high-risk, highly capital-intensive start-up with a huge number of uncertainties and potential ways to address many markets and positively affect the environment, what business model made sense?
The Cement Industry
CO2-sequestering cement could make a significant impact. In 2008, 2.5 billion metric tons of Portland cement were produced with between 0.8 and 1 ton of CO2 emitted for every ton of cement.All tons indicate metric tons throughout this case. For production information, see Carrie Sturrock, “Green Cement May Set CO2 Fate in Concrete,” San Francisco Chronicle, September 2, 2008, accessed January 8, 2011, articles.sfgate.com/2008-09-02/news/17157439_1_cement-carbon-dioxide-power-plants. In 2001 in the United States, the world’s third-largest producer of cement, the average CO2 intensity of cement production was 0.97 tons CO2/ton cement, ranging by kiln from 0.72 tons CO2/ton cement to 1.41 tons CO2/ton cement. Coal was the overwhelming energy source (71 percent) of cement kilns, followed by petroleum coke and other fuels. See Lisa Hanle, Kamala Jayaraman, and Joshua Smith, CO2 Emissions Profile of the U.S. Cement Industry (Washington, DC: US Environmental Protection Agency, 2006), accessed January 8, 2011, www.epa.gov/ttnchie1/conference/ei13/ghg/hanle.pdf. Globally, the average CO2 intensity for cement production in 2001 was around 0.82 tons CO2/ton cement. See Ernst Worrell, Lynn Price, C. Hendricks, and L. Ozawa Meida, “Carbon Dioxide Emissions from the Global Cement Industry,” Annual Review of Energy and Environment 26, no. LBNL-49097 (2001): 303–29, accessed January 8, 2011, industrial-energy.lbl.gov/node/193. Numbers from California alone in 2008 put CO2 intensity there at 0.85 tons CO2/ton cement. See California Environmental Protection Agency Air Resources Board, “Overview: AB 32 Implementation Status” (presentation at the California Cement Industry workgroup meeting, Sacramento, CA, April 10, 2008), accessed May 29, 2010, www.arb.ca.gov/cc/cement/meetings/041008/041008presentations.pdf. 
China produced nearly 1.4 billion tons of cement in 2008, followed by India (about 200 million tons) and the United States (100 million tons).“Research Report on China’s Cement Industry, 2009,” Reuters, March 5, 2009, accessed January 8, 2011, www.reuters.com/article/pressRelease/idUS108100+05-Mar-2009+BW20090305; David Biello, “Cement from CO2: A Concrete Cure for Global Warming?” Scientific American, August 7, 2008, accessed January 8, 2011, www.scientificamerican.com/article.cfm?id=cement- from-carbon-dioxide; India Brand Equity Foundation, “Cement,” accessed January 8, 2011, www.ibef.org/industry/cement.aspx. Consequently, production of Portland cement, the main binder for conventional concrete, accounted for between 5 and 8 percent of global GHG emissions, making it one of the more GHG-intense industries (Figure 5.14).
Portland cement production generates CO2 in two ways (Figure 5.15). The first source of emissions is calcination, which decomposes quarried limestone (calcium carbonate) into quicklime (calcium oxide) and releases CO2 as a by-product. The second source is the heat needed to achieve calcination, which requires temperatures over 2700°F (1500°C), or almost one-third the surface temperature of the sun. These temperatures are generally achieved by burning fossil fuels or hazardous wastes containing carbon. Sustaining such temperatures consumes around 3 to 6 gigajoules (1,000 to 2,000 kWh) of energy per ton of cement, making energy costs around 14 percent of the value of total shipments.An alternative method, wet production, has largely been phased out due to its higher energy consumption. Ernst Worrell, “Energy Use and Efficiency of the U.S. Cement Industry” (presentation to the Policy Implementation Committee of the Energy Conservation and GHG Emissions Reduction in Chinese TVEs Project, Berkeley, CA, September 18, 2003). (By comparison, the typical US home uses around 11,000 kWh per year.US Energy Information Administration, “Frequently Asked Questions,” accessed January 29, 2011, www.eia.doe.gov/ask/electricity_faqs.asp#electricity_use_home.)
Since emissions from calcination are dictated by the chemistry of the reaction and cannot be changed, to save energy and lower emissions, kilns have striven to use heat more efficiently. In California, for instance, emissions from calcination remained steady at 0.52 tons of CO2 per ton of cement from 1990 to 2005, while emissions from combustion declined from 0.40 tons of CO2 per ton of cement to 0.34 tons.California Environmental Protection Agency Air Resources Board, “Overview: AB 32 Implementation Status” (presentation at the California Cement Industry workgroup meeting, Sacramento, CA, April 10, 2008), accessed May 29, 2009, www.arb.ca.gov/cc/cement/meetings/041008/041008presentations.pdf. Lowering emissions further, however, had proven difficult.
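The California figures just cited make the point numerically: combustion emissions can be squeezed down through efficiency, but calcination emissions are fixed by chemistry. A quick sketch (the year keys and arithmetic are ours):

```python
# California cement kiln CO2 intensity (tons CO2 per ton of cement), per the
# Air Resources Board figures cited above.
calcination = {1990: 0.52, 2005: 0.52}  # fixed by the chemistry of the reaction
combustion = {1990: 0.40, 2005: 0.34}   # reducible via more efficient heat use

total = {yr: calcination[yr] + combustion[yr] for yr in (1990, 2005)}
reduction = (total[1990] - total[2005]) / total[1990]
print(total[1990], total[2005], f"{reduction:.1%}")
# Efficiency gains cut total intensity from 0.92 to 0.86 tons CO2/ton cement,
# about 6.5 percent -- and calcination now dominates the remainder, which is
# why lowering emissions further has proven difficult.
```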
Video Clip
How Cement Is Made
(click to see video)
Given the carbon intensity of cement production, governments increasingly have attended to emissions from cement kilns. Calcination alone emitted 0.7 percent of US CO2 in 2007, a 34 percent increase since 1990 and the most of any other industrial process except energy generation and steel production.US Environmental Protection Agency, Fast Facts: Inventory of U.S. Greenhouse Gas Emissions and Sinks: 1990–2008 (Washington DC: US Environmental Protection Agency, 2010), accessed January 8, 2011, http://www.epa.gov/climatechange/emissions/downloads10/US-GHG-Inventory-Fast-Facts-2008.pdf. California’s Assembly Bill 32, the Global Warming Solutions Act of 2006, includes cement kilns under its GHG emissions reduction program, which would require kilns to further reduce their emissions starting in 2012. The EPA’s Greenhouse Gas Reporting Rule from April 2009 also requires kilns to send data about their GHG emissions to the EPA, a prerequisite for any eventual mandatory emissions reductions.
In addition to being energy and CO2 intense, cement production is also a capital-intense industry. A kiln and its concomitant quarrying operations may require an investment on the order of \$1 billion. Consequently, about a dozen large multinational companies dominate the industry. In 2010 there were 113 cement plants in the United States in 36 states, but foreign-owned companies accounted for about 80 percent of US cement production.
Despite this ownership structure, actual cement production and consumption is largely regional. The cement industry moves almost 100 percent of its product by truck; the majority goes to ready-mix concrete operators, from plant to use. The entire US cement industry shipped \$7.5 billion of products in 2009, a decline from \$15 billion in 2006 since domestic construction had declined.Portland Cement Association, “Overview of the Cement Industry: Economics of the U.S. Cement Industry,” December 2009, accessed January 8, 2011, http://www.cement.org/basics/cementindustry.asp. Worldwide, the cement industry represented a \$140 billion market in 2009 with about 47 percent poured in China.
Although cement can be used to produce mortar, stucco, and grout, most cement is used to produce concrete. To make concrete, cement is mixed in various proportions with water and aggregates, including fine aggregates such as sand and coarse aggregates such as gravel and rocks. (Concrete cement is commonly called simply concrete, although asphalt is also technically a type of concrete where the binder is asphalt instead of Portland cement.) The cement itself comes in five basic classes, depending on the desired strength, time to set, resistance to corrosion, and heat emitted as the cement sets, or hydrates. Though cement plays a crucial role in the properties of concrete, the other ingredients also matter. Aggregates help give concrete its strength and appearance. Plasticizers can be added in smaller quantities, as can materials such as coal fly ash or slag from blast furnaces to vary the concrete’s strength, weight, workability, and resistance to corrosion. Some states, such as California, require fly ash and slag be added to concrete to reduce its GHG intensity, improve the durability of the final material, and prevent these aggregates from entering landfills as waste materials.
A typical mix of concrete might contain by mass one part water, three parts cement, six parts fine aggregate, and nine parts coarse aggregate. Thus a cubic yard of concrete, which weighs roughly two and one-half tons (2,000 to 2,400 kg/m3), would require approximately 300 pounds (36 gallons) of water, 900 pounds of cement (9.5 bags, or 9.5 cubic feet), and 4,500 pounds of total aggregates. Varying amounts of air can also be trapped, or entrained, in the product. Cement, at around \$100/ton in 2010, is normally about 60 percent of the total cost of poured concrete. Aggregates, in contrast, cost closer to \$10/ton.
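As a check on the quantities above, the 1:3:6:9 mass ratio scales directly to the per-cubic-yard figures quoted. A sketch in Python (names are ours; the cost split uses the quoted 2010 prices and covers materials only, not labor or delivery):

```python
# 1 part water : 3 cement : 6 fine aggregate : 9 coarse aggregate, by mass,
# scaled so cement = 900 lb per cubic yard, as in the text.
parts = {"water": 1, "cement": 3, "fine": 6, "coarse": 9}
lb_per_part = 900 / parts["cement"]                # 300 lb per part
mix_lb = {k: v * lb_per_part for k, v in parts.items()}

total_lb = sum(mix_lb.values())                    # 5,700 lb per cubic yard
aggregates_lb = mix_lb["fine"] + mix_lb["coarse"]  # 4,500 lb, as quoted

# Material costs at the 2010 prices above ($100/ton cement, $10/ton aggregate):
cement_cost = mix_lb["cement"] / 2000 * 100        # dollars per cubic yard
aggregate_cost = aggregates_lb / 2000 * 10
```

On these material prices alone, cement is roughly two-thirds of the cost, broadly consistent with the text’s statement that cement is about 60 percent of the total cost of poured concrete.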
Making concrete adds more GHG emissions from, for instance, quarrying and transporting stone and keeping the water at the right temperature (from 70 to 120°F) to mix effectively. As the cement in concrete cures, it carbonates: CO2 interacts with the alkaline pore solutions in the concrete to form calcium carbonate. This process takes decades and never accounts for more than a few percent of the carbon sequestered in cement.
By using less energy, Calera’s process already promised lower emissions. More important, using a standard construction material, cement, to capture CO2 would mean sequestration capacity scaled directly with economic activity as reflected in new construction. For instance, the Three Gorges Dam in China used approximately fifty-five million tons of concrete containing eight million tons of cement. The concrete in the dam is enough to pave a sixteen-lane highway from San Francisco to New York.Bruce Kennedy, “China’s Three Gorges Dam,” CNN, accessed January 8, 2011, www.cnn.com/SPECIALS/1999/china.50/asian.superpower/three.gorges. The comparison road value is derived from the Hoover Dam, which used approximately 6 million tons of concrete. US Department of the Interior, “Hoover Dam: Frequently Asked Questions and Answers,” accessed January 8, 2011, www.usbr.gov/lc/hooverdam/faqs/damfaqs.html. Hence if Calera cement had been used in that dam, it could have sequestered roughly four million tons of CO2 rather than emitting approximately seven million additional tons of it, for a net difference of eleven million tons. If Calera had manufactured the stones used as aggregate in the dam’s concrete, emissions potentially could have been reduced even more, so long as Calera’s process produced fewer emissions than quarrying the equivalent aggregate. The promise remained but so did the question: in how many places was the Calera process viable, and where did the economics make sense?
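The dam comparison can be made explicit. In the sketch below, the per-ton emission and sequestration rates are inferred from the case’s round figures (roughly 7 million tons emitted, or 4 million tons sequestered, per 8 million tons of cement); they are illustrative assumptions, not Calera data:

```python
cement_tons = 8e6                    # cement used in the Three Gorges Dam
portland_rate = 7e6 / cement_tons    # ~0.875 t CO2 emitted per t cement (inferred)
calera_rate = 4e6 / cement_tons      # ~0.5 t CO2 sequestered per t cement (inferred)

emitted = cement_tons * portland_rate    # ~7 Mt that Portland cement releases
sequestered = cement_tons * calera_rate  # ~4 Mt that Calera cement would store
net_swing = emitted + sequestered        # ~11 Mt net difference, as in the text
```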
Constantz Looks for an Opening
Brent Constantz had focused his career on how nature makes cements and how we can apply those techniques to other problems. He now faced the challenge of moving from niche markets for small-scale, specialty medical cements to the mainstream of international construction, commodity materials, and carbon sequestration. For these markets Calera’s product promised negative net CO2 emissions but first had to compete on cost, set time, strength, and durability. Calera would need to pass all appropriate standards as well as target applications for which people would be willing to pay a premium for carbon-negative concrete. In addition, the chain of liability often terminated at the cement producer in the highly litigious construction industry. Consequently, Calera cement had to be deemed beyond reproach to penetrate the market. But if it was, then its ability to reduce GHG emissions would appeal to many in the construction industry who sought to lower costs and improve their environmental image.
A rock climber and wind surfer, Constantz earned his BA in Geological Sciences and Aquatic Biology from the University of California–Santa Barbara in 1981 and went on to earn his master’s (1984) and PhD (1986) in Earth Sciences from University of California–Santa Cruz. He received a US Geological Survey postdoctoral fellowship in Menlo Park, California, during which he studied isotope geochemistry. Next as a Fulbright Scholar in Israel, he studied the interaction of crystals and proteins during biomineralization. At that time, Constantz developed medical cements to help heal fractured or worn bones, and in 1988 he founded his first company, Norian Corporation, in Cupertino, California, to commercialize those medical cements. When Norian was sold in 1998 to Synthes, a company with \$3.4 billion in sales in 2009,Synthes, “Synthes Reports 2009 Results with 9% Sales Growth and 13% Net Earnings Growth in Local Currency (6% and 12% in US\$),” news release, February 17, 2010, accessed January 8, 2011, www.synthes.com/html/News-Details.8013.0 .html?&tx_synthesnewsbyxml_pi1[showUid]=39. Constantz became a consulting professor at Stanford University, where he continued to teach courses on biomineralization, carbonate sedimentology, and the “Role of Cement in Fracture Management” through 2010.Stanford biography at Stanford Biodesign, “People: Brent Constantz Ph.D.,” accessed January 8, 2011, biodesign.stanford.edu/bdn/people/bconstantz.jsp.
During his time at Stanford, Constantz founded and provided leadership for three more medical cement companies: Corazon, bought by Johnson & Johnson; Skeletal Kinetics, bought by Colson Associates; and Biomineral Holdings, which Constantz still controlled. He served on the board of directors of the Stanford Environmental Molecular Science Institute and also won a variety of awards, including a University of California–Santa Cruz Alumni Achievement Award in 1998 and a Global Oceans Award in 2004 for advancing our understanding of and helping to conserve oceans.
Indeed, climate change’s impact on oceans was increasingly on Constantz’s mind. In an interview with the San Francisco Chronicle, Constantz stated, “Climate change is the largest challenge of our generation.”Carrie Sturrock, “Green Cement May Set CO2 Fate in Concrete,” San Francisco Chronicle, September 2, 2008, accessed January 8, 2011, articles.sfgate.com/2008-09-02/news/17157439_1_cement-carbon-dioxide-power-plants. Constantz was concerned specifically with ocean acidification, which was destroying coral, the very topic that had inspired him for years. As CO2 is emitted into the atmosphere, a portion is absorbed by the oceans, forming carbonic acid by roughly the same process that gives carbonated beverages their bubbles. Constantz recognized that the process threatened by CO2 emissions—natural biomineralization—was also a solution. He founded Calera Corporation in 2007.
The name Calera is Spanish for lime kiln, but it also refers to a stratum of limestone underlying parts of California. That layer likely formed one hundred million years ago when seafloor vents triggered precipitation of calcium carbonate. Constantz found that a similar inorganic process to precipitate carbonates could make construction-grade cement. In fact, early lab work revealed the surprising finding that adding CO2 could increase the reaction’s yield eightfold. In one of his regular conversations with Khosla about the company, Constantz wondered out loud where to get more CO2. Khosla, a prominent clean tech investor, immediately saw the answer: carbon sequestration. If Calera could make cement with CO2, cement could now be produced that was, in fact, carbon negative. First-round funding for the enterprise came from Khosla in 2007. No business plan was written, and in 2010 there still was no formal board or enough clarity to develop a strategic plan.
Calera’s method puts power plant flue gases that contain CO2 in contact with concentrated brines or concentrated seawater, which contain dissolved magnesium and calcium ions. Hydroxides and other alkaline materials are added to the seawater to speed the reaction between the CO2 and minerals.See Brent R. Constantz, Cecily Ryan, and Laurence Clodic, Hydraulic cements comprising carbonate compound compositions, US Patent 7735274, filed May 23, 2008, and issued June 15, 2010. That reaction precipitates carbonates of magnesium and calcium, the cementitious materials found in coral reefs and seashells, thus storing the CO2 and leaving behind demineralized water. Unlike conventional cement kilns, Calera can produce its cement at temperatures below 200°F (90°C), dramatically lowering emissions of CO2 from fuel combustion (Figure 5.16 and Figure 5.17). In principle, Calera could produce and sell its aggregate, essentially manufactured stones; powdered stones, or cement, the binder in concretes; or supplementary cementitious material (SCM), an additive to improve the performance of concrete that can be added to the cement blend directly or later added to the concrete.
Yet in 2010, each of these materials was in the midst of optimization and testing. Some were early in their product development phase. Furthermore, even though Constantz held nearly two hundred patents or pending patents, including two for Calera’s processes, one for producing the carbonate cement, and another for demineralizing water, the medical cements he was accustomed to in earlier ventures typically used grams or less at a time, not tons or kilotons, and did not require massive machinery, tracts of land, and large capital investments. Calera faced another challenge: the industrial ecosystem.
One practical application of industrial ecology is the collocation of factories or processes that can use each other’s wastes as feedstocks. When the waste stream of one plant becomes the material input of the next, the net effect is to save energy and material and reduce the necessary infrastructure. The most famous industrial ecology park, in Kalundborg, Denmark, included a power plant, a refinery, a pharmaceutical company, a drywall manufacturer, and a fish farm.The park has a website: Industrial Symbiosis, “Welcome to the Industrial Symbiosis,” accessed January 8, 2011, http://www.symbiosis.dk. The power plant, for instance, treated its flue gas to trap sulfur dioxide emissions and thereby produced gypsum, the raw material for drywall. Hot water from the power station went to the fish farm, as did wastes from the pharmaceutical company that could be used as fertilizer. Constantz saw an existing symbiosis between cement plants, power stations, and water supplies, but he would have to plan carefully to insert Calera effectively into that ecology.
If he could enter the markets, Constantz felt the opportunity was there. He commented on the global market for Calera’s technology:
Almost everywhere else in the world but the U.S. can projects get the value for carbon emission reductions. In cap and trade systems, the government sets a “cap” on emissions; if a business’s emissions fall below the cap, it can sell the difference on the market to companies that want to exceed their cap. If Calera proves out, it can go anywhere, set up next to a power plant and get our revenue just by selling carbon credits. That means we could produce cement in a developing country where they basically can’t afford concrete, so they otherwise couldn’t build out their infrastructure or even build houses. And the more cement Calera produces, the more carbon dioxide we remove from the atmosphere.Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010).
As Constantz reflected in his Los Gatos office on Calera’s potential impact on climate change, he observed, “A sufficiently high carbon price would enable a number of business models. Low prices limited the options available to Calera.” Calera planned to offer sequestration services to power plants or other heavy industrial users as its primary business and was therefore interested in any CO2 emissions. “We look at CO2 as a resource—not a pollutant—and a scarce resource. To replace all Portland cement with Calera cement, which we want to do, we would need about 19 billion tons of CO2 annually, forever.”Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010).
Government carbon regulations could help Calera generate revenue and customers but were not viewed as crucial. In the European Union’s Emissions Trading Scheme, CO2 in July 2010 traded at around €14/ton, or \$18/ton. The Northeastern states’ Regional Greenhouse Gas Initiative (RGGI) that began in 2009 capped GHG emissions from power plants at 188 million tons immediately, roughly a quarter of total US emissions, and will cut GHG emissions of RGGI sources 10 percent from that level by 2018. RGGI allowances sold at between \$1.86 and \$2.05 per ton at auction in December 2009.European Energy Exchange, “Emission Rights,” accessed January 10, 2011, http://www.eex.com/en; Regional Greenhouse Gas Initiative, “Auction Results: Auction 6,” December 2, 2010, accessed January 10, 2011, www.rggi.org/market/co2_auctions/results/auction6. Since RGGI allowed sources to cover up to 10 percent of their emissions by buying offsets, Calera planned to try to convince power companies to enter agreements with Calera rather than buy permits to meet their obligations. On the other side of the country, the Western Climate Initiative (WCI) was designing a cap-and-trade system for power generation and fuel consumption. The WCI comprised eleven jurisdictions (western US states and Canadian provinces) and would take full effect in 2015, with earlier phases beginning in 2012. In many cases there was strong interest for the future but little appetite for risk or actual implementation in the present, with the possible exception of suppliers to the California electricity market.
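To see why the carbon price matters so much to Calera’s model, consider credit revenue per ton of cement at the prices just quoted. This is an illustrative sensitivity only: the 0.5 tons of CO2 sequestered per ton of cement is an assumption inferred from the dam example earlier in the case, not a Calera figure.

```python
# Hypothetical credit revenue per ton of Calera cement at 2009-2010 carbon
# prices. The sequestration rate below is an assumption, not Calera data.
sequestered_t_per_t_cement = 0.5
prices_usd_per_t_co2 = {"EU ETS (July 2010)": 18.0, "RGGI (Dec 2009 auction)": 2.0}

revenue = {market: price * sequestered_t_per_t_cement
           for market, price in prices_usd_per_t_co2.items()}
for market, value in revenue.items():
    print(f"{market}: ${value:.2f} per ton of cement")
```

Against cement selling at roughly \$100/ton, EU-level prices would add about 9 percent in credit revenue and RGGI-level prices about 1 percent, which helps explain Constantz’s insistence that Calera be profitable even without a carbon price.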
At the federal level, Calera also lobbied to have the American Clean Energy and Security Act of 2009 (HR 2454, the Waxman-Markey Bill) include sequestration other than by solely geological means; otherwise, Calera would not be recognized as providing offsets worth allowances in a trading program. The bill exited committee in May 2009 with the expanded sequestration options but then stalled. Before that, carbon capture and sequestration (CCS) debates had focused on geological sequestration, but that solution was expensive, required massive federal subsidies to CO2 emitters, and, according to a 2008 McKinsey & Company report, would not be commercially feasible for another twenty years.McKinsey Climate Change Initiative, Carbon Capture and Storage: Assessing the Economics (New York: McKinsey, 2008), accessed January 10, 2011, www.mckinsey.com/clientservice/sustainability/pdf/CCS_Assessing_the_Economics.pdf. Despite the enticing estimates that centuries’ worth of CO2 emissions could be stored underground,Joseph B. Lassiter, Thomas J. Steenburgh, and Lauren Barley, Calera Corporation, case 9-810-030 (Boston: Harvard Business Publishing, 2009), 3, accessed January 10, 2011, www.ecch.com/casesearch/product_details.cfm?id=91925. skeptics wondered how long it would stay there, as a sudden release of stored CO2 would be catastrophic. They further noted that gradual leaks would defeat the technology’s purpose and potentially acidify groundwater, causing new problems. Everyone, meanwhile, agreed that much depended on the price of carbon, which was contingent on evolving carbon markets in the United States and Europe.
A new bill with a mix of carbon trading and taxes was in the works in March 2010, and in the absence of congressional action, the EPA was preparing to regulate CO2 under the Clean Air Act following the Supreme Court’s 2007 decision Massachusetts v. EPA.Massachusetts v. Environmental Protection Agency, 549 US 497 (2007), accessed January 10, 2011, http://www.supremecourt.gov/opinions/06pdf/05-1120.pdf. Despite the overall failure of the Copenhagen Climate Conference in December 2009—Constantz considered the attempt to negotiate a successor to the Kyoto Protocol “a joke”Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010).—the United States did pledge, nonbindingly, to reduce its GHG emissions 17 percent from 2005 levels by 2020 and ultimately 83 percent by 2050, a significant departure from the stance of the previous Bush administration. In January 2010, President Obama announced that, under Executive Order 13514, the federal government would reduce its GHG emissions 28 percent from 2008 levels by 2020. The federal government was the single largest consumer of energy in the United States. Nonetheless, Constantz claimed that even without climate change regulations, “We will be profitable, we don’t care, we don’t need a price on carbon.”
Moss Landing
Aside from climate change legislation, Constantz witnessed regulatory agencies “bending over backward to help us. Fortunately, people are in favor of what we’re doing because I think they see the higher purpose toward which we’re dedicated.”Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010). Calera’s process had proven effective, for instance, at trapping sulfur dioxide emissions, currently regulated in the United States under the Acid Rain Program and other standards. Water regulators and air boards alike, a total of nine agencies, eased the way for Calera’s first plant at Moss Landing, California. The site, two hundred acres along Monterey Bay, had seven three-million-gallon tanks for storing seawater, a total volume equivalent to thirty Olympic swimming pools, and permits for pumping sixty million gallons of seawater per day, or nearly seven hundred gallons per second, through the original World War II–era redwood pipe. The site also had five million tons of magnesium hydroxide left from earlier operations, which included making bombs.
In June 2008, Calera collaborated with the nearby Monterey Bay Aquarium Research Institute and Moss Landing Marine Lab to assess and minimize impacts on the bay’s marine ecosystems. Water was a key element of the Calera process, and the company did everything it could to minimize its use. Constantz told a local paper, “We wanted to make sure we weren’t going to do any harm. We’re right next to these world-class oceanographic institutions. These places can publish papers about [the process], whereas most parts of the world don’t have scientists of that caliber to sign off on it.”Lizzie Buchen, “A Green Idea Set in Cement,” Monterey County Herald, October 4, 2008, accessed January 10, 2011, www.montereyherald.com/news/ci_10637168. Calera was interested in using the power plant’s water, potentially reducing demand for and impacts on Monterey Bay water. Constantz knew Moss Landing would set the standard for future plants. In fact, turning a site with a negative environmental history into a location that demonstrated clean energy and potable water technologies appealed to the entire management team.
The magnesium hydroxide, meanwhile, formed a gray and white crust that stretched for hundreds of yards and was visible from the sky. It provided the alkalinity for Calera’s early production. Massive metal sheds on the otherwise muddy soil housed a variety of production lines. Equally important, across the street stood the largest power plant on the West Coast, Dynegy’s 2,500 MW natural gas-fired plant.
In August 2008, Calera opened its test cement production plant. In April 2009, it achieved continuous operation and was capturing CO2 emissions from a simulated 0.5 MW coal-fired power plant with 70 percent efficiency.Joseph B. Lassiter, Thomas J. Steenburgh, and Lauren Barley, Calera Corporation, case 9-810-030 (Boston: Harvard Business Publishing, 2009), 1, accessed January 10, 2011, www.ecch.com/casesearch/product_details.cfm?id=91925. In December 2009, Calera ran a pipe beneath the road to tap into Dynegy’s flue stack, somewhat like sticking a straw in a drink, to capture emissions equivalent to a 10 MW plant as Calera moved up to a demonstration-scale project. By spring 2010 the demonstration plant, twenty times the size of the pilot plant, had achieved continuous operation.
A typical cement plant may produce between five hundred thousand and two million tons of cement annually, which meant Calera’s Moss Landing Cement Company would remain a rather small player—or become a massive consumer of water. Seawater is typically only 0.1 percent magnesium ions and 0.04 percent calcium ions.Jay Withgott and Scott Brennan, Environment: The Science Behind the Stories, 3rd ed. (San Francisco: Pearson Benjamin Cummings, 2008), 445. Hence if Calera could extract those ions with perfect efficiency, it could create about 240 tons of calcium and magnesium daily, enough to make just under 590 tons of Calera cement daily. At continuous operation, Calera could produce only about 215,000 tons of cement annually. Calera’s Moss Landing plant could therefore sequester just over 100,000 tons of CO2 per year at full operation with its current water permit.These values assume that Calera’s cement is composed of calcium and magnesium carbonates. Calcium carbonate has a molecular weight of 100 grams per mole; magnesium carbonate has a molecular weight of 84 grams per mole. CO2 thus represents almost exactly half of the weight of each ton of Calera cement produced from standard seawater. This CO2 proportion, however, would not include emissions from energy needed to operate the plant.
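The figures above can be checked with a short back-of-envelope calculation. The molecular weights are standard; the one assumption, following the footnote’s own reasoning, is that Calera cement is a blend of calcium and magnesium carbonates in the same Mg:Ca ratio as seawater.

```python
# Back-of-envelope check of the Moss Landing figures, assuming (as
# the footnote does) a calcium/magnesium carbonate blend in the same
# Mg:Ca ratio as standard seawater.

MW_CA, MW_MG, MW_CO2 = 40.08, 24.31, 44.01   # g/mol
MW_CACO3, MW_MGCO3 = 100.09, 84.31           # g/mol

# Seawater mass fractions from the text: 0.1% Mg, 0.04% Ca.
mol_mg = 0.001 / MW_MG     # moles of Mg per gram of seawater
mol_ca = 0.0004 / MW_CA    # moles of Ca per gram of seawater

# CO2 captured per unit mass of carbonate produced.
co2_mass = MW_CO2 * (mol_mg + mol_ca)
carbonate_mass = MW_MGCO3 * mol_mg + MW_CACO3 * mol_ca
co2_fraction = co2_mass / carbonate_mass   # ~0.50, as the footnote says

# Annualize the case's daily production figure.
daily_cement_tons = 590
annual_cement_tons = daily_cement_tons * 365          # ~215,000 tons/yr
annual_co2_tons = annual_cement_tons * co2_fraction   # just over 100,000

print(round(co2_fraction, 2), annual_cement_tons, round(annual_co2_tons))
```

Running the numbers confirms the internal consistency of the case: CO2 is almost exactly half the mass of the carbonate output, so roughly 215,000 tons of cement per year implies a little over 100,000 tons of CO2 sequestered.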
Disruption: Opponents and Competitors
Calera, however, had the promise to be more than a cement plant. Because it could sequester CO2, sulfur dioxide, and mercury into carbonates, Calera offered a multipollutant control and remediation technology that might prove cheaper than existing methods and generate additional income from the sale of its by-products, cement and demineralized water. The promise of these multiple benefits attracted attention from many quarters. California’s Department of Transportation was interested, since California used more concrete than any other state and was developing its own GHG cap-and-trade program. Egyptian, Moroccan, and Saudi Arabian researchers and builders had expressed interest in the process because of its desalination aspect, and the zero-emissions showcase Masdar City in the United Arab Emirates had considered using Calera cement.Ben Block, “Capturing Carbon Emissions…in Cement?” Worldwatch Institute, January 26, 2009, accessed May 25, 2009, www.worldwatch.org/node/5996. Power plants and cement kilns were seeking ways to lower their emissions of all pollutants. Early in 2010, Calera was awarded a grant from the Australian government to build a demonstration plant to capture carbon from a coal-fired plant, which, like most plants in the state of Victoria, burned particularly dirty brown coal. Constantz by January 2010 had “a backlog of 70 people” representing “100 projects.” He noted that “selecting the right one is a proprietary, large process,” which includes consideration of local feedstocks, regulations, buyers and suppliers, incentives, and other factors.Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010).
In addition to considering his suitors, material resources, and business opportunities, Constantz also had to consider his competition. Other companies were trying to make cement in innovative ways to reduce GHG emissions. In 1979, German-born architect Wolf Hilbertz had published a way to produce calcium carbonate from seawater via electrolysis.Wolf Hilbertz, “Electrodeposition of Minerals in Sea Water: Experiments and Applications,” IEEE Journal on Oceanic Engineering 4, no. 3 (1979): 94–113, accessed January 10, 2011, www.globalcoral.org/IEEE_JOUR_1979small.pdf. That method had been commercialized as Biorock, also the name of the company, and used to help restore coral reefs by plating calcium carbonate onto rebar. The company Biorock, however, did not seem interested in pursuing terrestrial applications. In contrast, Novacem in England planned to use magnesium oxide and other additives to lower processing temperatures and obviate GHG emissions from cement kilns. Other companies were also attempting to sequester CO2 in cement. Carbon Sciences of Santa Barbara planned to use mine slime (water plus magnesium and calcium residues left in mines) and flue gas to make cement, and Carbon Sense Solutions in Nova Scotia planned to use flue gases to cure cement, thereby absorbing CO2. Nonetheless, Calera so far had kept ahead of these possible competitors and worked on ensuring that its products met familiar engineering performance standards to speed adoption.
Building performance aside, climate scientist Ken Caldeira at the Carnegie Institution’s Department of Global Ecology had publicly doubted that the Calera process would reduce net carbon emissions, as it currently used magnesium or sodium hydroxides, which would have to be produced somehow and did not seem included in life-cycle analyses of carbon emissions. Caldeira had also said that Calera basically took dissolved limestone and converted it back into limestone, and there were active online discussions on this issue.The debate seems to occur mainly over e-mail and groups, for instance, Climate Intervention, “Calera—Fooling Schoolchildren?” accessed January 10, 2011, http://groups.google.com/group/climateintervention/browse_thread/thread/7b5ff4ee64ce759d?pli=1. Calera simply waited for its patents to be published rather than directly refute the charge.
Portland cement was the industrial standard and had been since its invention in 1824. Any change was likely to encounter resistance from producers and consumers, and the standards-setting bodies were necessarily conservative and cautious. An array of organizations, from the American Society for Testing and Materials (ASTM) to the Portland Cement Association and the American Concrete Institute, in addition to individual companies, conducted rigorous quality tests and set many standards.
Ironically, rather than seeing himself as an opponent of the Portland cement industry, Constantz considered himself an ally: “I think we’re going to save their entire industry. As soon as there’s carbon legislation, the asphalt industry is going to eat their lunch. The Portland cement industry is really in trouble without us and they know that. That’s why they’re calling us up.”Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010). After all, the industry had tried to reduce emissions by increasing efficiency but could only do so much. Calera’s process appeared to be the breakthrough the industry needed. Moreover, the infrastructure already existed to link cement plants with power plants because the latter often have to dispose of fly ash. Likewise, power plants also consume lots of water, meaning the infrastructure existed to supply the Calera process, presuming the water contained sufficient salts.
Constantz felt Calera could disrupt the carbon sequestration industry, primarily oil and gas exploration companies that had been advocating enhanced recovery through injecting CO2 underground as a form of geological carbon sequestration: injecting compressed CO2 underground forced more oil and gas to the surface. Khosla agreed but was uncertain about the breadth of applicability of the Calera process. An attractive business and a few plants were definitely possible, but Calera had yet to prove it was anything more than a solution for some special cases.
To do so, Calera hoped to outperform all other CCS options, especially retrofits of existing plants. Even if the technical and environmental problems could be solved for widespread CCS, it would be costly, especially in a world without a price on carbon. In April 2010, the US Interagency Task Force on Carbon Capture and Storage estimated the cost of building typical CCS into new coal-fired plants (greenfield development) to be \$60 to \$114 per metric ton of CO2 avoided, and \$103/ton for retrofitting existing plants. That translated into increased capital costs of 25–80 percent. Such plants were also expected to consume 35–90 percent more water than similar plants without CCS.Interagency Task Force on Carbon Capture and Storage, Report of the Interagency Task Force on Carbon Capture and Storage (Washington DC: US Environmental Protection Agency/US Department of Energy, 2010), 27, 33–35, accessed January 10, 2011, http://www.epa.gov/climatechange/downloads/CCS-Task-Force-Report-2010.pdf. The report did not consider a model like Calera’s to be CCS; instead, it defined CCS only as geological sequestration.
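To see why those per-ton figures were daunting, it helps to translate them into an added cost per MWh of electricity. The sketch below assumes, as a rule of thumb not given in the case, that a coal-fired plant emits roughly one metric ton of CO2 per MWh generated.

```python
# Rough translation of the task force's per-ton CCS costs into an
# added cost per MWh of electricity. Assumption (not from the case):
# a coal-fired plant emits roughly 1 metric ton of CO2 per MWh.

co2_tons_per_mwh = 1.0                # assumed emission intensity for coal

greenfield_cost_per_ton = (60, 114)   # $/ton CO2 avoided, new build
retrofit_cost_per_ton = 103           # $/ton CO2 avoided, retrofit

added_cost_per_mwh = tuple(c * co2_tons_per_mwh
                           for c in greenfield_cost_per_ton)
retrofit_cost_per_mwh = retrofit_cost_per_ton * co2_tons_per_mwh

# Against wholesale power prices of very roughly $30-60/MWh at the
# time, CCS could double or more the cost of coal-fired electricity.
print(added_cost_per_mwh, retrofit_cost_per_mwh)
```

Under this assumption, conventional CCS would add on the order of \$60 to \$114 per MWh, comparable to or larger than the wholesale price of the electricity itself, which is why a price on carbon was seen as essential to its economics.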
Available CCS technologies required substantial energy to operate, a burden known as the parasitic load placed on the power plants whose emissions they sequestered. This parasitic load represented a heavy cost and penalty for the power plant because it was essentially lost electricity, which translated directly into lost revenue. To cover the electricity needed to operate any system that trapped CO2 emissions from the flue and still supply its other customers, the power plant would have to consume more coal and operate longer for the same income.
Constantz noted that geologic CCS typically had parasitic loads around 30 percent. To solve this issue, Calera’s business model was to buy power at wholesale price, becoming the power plant’s electricity customer. The plant could increase its capacity factor to cover this additional power demand or reduce its power sales to the grid without much of a revenue loss. From the plant’s perspective, then, Calera did not alter revenue, unlike other options. Constantz believed Calera’s energy consumption could be much lower than that of CCS assuming the right local mineral and brine inputs could be exploited. In addition, to optimize its power use and price, Calera was designing a process that could take advantage of off-peak power. However, it remained uncertain how many locations met mineral input requirements to make the Calera process economically attractive.
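The difference between the two models can be sketched with illustrative numbers. The plant capacity and wholesale price below are assumptions for the sake of the sketch; only the 30 percent parasitic load comes from Constantz.

```python
# Illustrative comparison of a conventional CCS parasitic load with
# Calera's buy-power-at-wholesale model. The capacity and wholesale
# price are assumed numbers; only the 30% load comes from Constantz.

capacity_mw = 1000          # assumed plant capacity
wholesale_price = 40.0      # assumed $/MWh earned on power sales
parasitic_fraction = 0.30   # typical geologic-CCS load, per Constantz

# Conventional CCS: 30% of output is consumed internally, earning nothing.
revenue_ccs = capacity_mw * (1 - parasitic_fraction) * wholesale_price

# Calera model: the plant sells that same 30% of output to Calera at
# wholesale, so hourly revenue at full output is unchanged.
revenue_calera = capacity_mw * wholesale_price

print(revenue_ccs, revenue_calera)  # 28000.0 vs 40000.0 per hour
```

In the CCS case the plant forgoes 30 percent of its sales; in the Calera case that same electricity still has a paying customer, which is why, from the plant’s perspective, Calera did not alter revenue.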
Calera could also disrupt conventional pollution control industries. Existing technologies for controlling sulfur oxides (SOx), mercury, and other emissions could be supplanted by Calera’s technology. Such pollutants were subject to either cap-and-trade programs or Best Available Control Technology rules, which required companies to install whatever available pollution control technology achieved the best results. The cost to power plants of removing these pollutants from flue gas could run as high as \$500 to \$700 per kW of generating capacity.Joseph B. Lassiter, Thomas J. Steenburgh, and Lauren Barley, Calera Corporation, case 9-810-030 (Boston: Harvard Business Publishing, 2009), 7, accessed January 10, 2011, www.ecch.com/casesearch/product_details.cfm?id=91925. Early experiments suggested that Calera’s process could trap these pollutants with over 90 percent efficiency in a single system, though nitrogen oxides (NOx) would still need to be dealt with.
Conceivably, utilities could balk at the prospect of selling a large portion of their electricity to Calera, even if Calera set up shop where carbon was capped, such as the European Union, or approached companies wanting to reduce their emissions voluntarily. Utilities could switch to natural gas or find other ways to cut emissions. Calera, however, saw enough value in its own process and the coal-fired infrastructure that it had considered buying power plants outright and operating them itself.
Finally, Calera considered the possibility of providing a form of energy storage. Power plants could operate more at night, typically when demand was lower, to supply energy for Calera’s electrochemistry process, effectively storing energy in the form of other chemicals. During the day, there would be no increased energy demand from Calera, thereby increasing a power plant’s total energy output. In the same manner, Calera could also store energy from wind farms or other renewable sources.
Managing Growth
With many people eager to exploit Calera’s technology, the company emphasized maintaining control. From the very beginning, Constantz limited outside investment to the well-known venture capitalist Vinod Khosla. Khosla cofounded Sun Microsystems in 1982 and left five years later for the venture capital firm Kleiner Perkins Caufield and Byers. He founded his own firm, Khosla Ventures, in Menlo Park, California, in 2004, and invested his own money in sustainable and environmental business innovations. By May 2009, Khosla had made a significant investment in Calera. Even after two rounds of investment, the addition of seven seasoned vice presidents for functions ranging from intellectual property to government affairs, and the successful move from batch process to continuous-operation pilot plant to demonstration plant, Calera still had a board of only two members: Constantz and Samir Kaul of Khosla Ventures. Constantz believed “the largest risk of this company or any other company in this space is board problems.”Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010). Because Calera had just one investor, it had been spared the conflicts among multiple board members that, in his view, can tank visionary start-ups. Bad advice or conflicts posed a bigger threat than “the technology or the market,” a lesson Constantz had taken to heart from his previous enterprises.
The company also protected itself from liability by creating special-purpose entities (SPEs) to operate individual projects. According to Constantz, “We’re a corporation licensing its technology and intellectual property to other separate companies [SPEs] we’ve set up.”Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010). For example, the Moss Landing facility was owned and operated by the Moss Landing Cement Company, which, in turn, Calera owned. This division reduced the threat of litigation and insurance costs at Calera’s headquarters in nearby Los Gatos in Silicon Valley, because cement production and associated construction were heavy industries in which equipment scale and complexity could involve expensive mistakes and working conditions posed many hazards. Everyone at the Moss Landing site was required to wear hard hats and safety glasses, and the sodium hydroxide produced by on-site electrochemistry was a hazardous, caustic material.
The company also had grown to absorb more areas of technical expertise. Aurelia Setton came to Calera in mid-2008 as senior manager of corporate development after completing her MBA at Stanford Business School. She became director of strategic planning in the summer of 2009. Young and committed to sustainable business thinking, Setton had seen the company realize the implications of different technology applications and then move to recruit experts in those areas. First it was how to produce cement with less energy and then how to boost its ability to sequester CO2. Then it was water purification. Then it was electrochemistry, the process of extracting chemicals through splitting them in solution. “If we see enough value in it, we bring it in-house,” Setton said.Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010).
Nonetheless, Calera had to recognize limits. For instance, Setton knew “we are not a manufacturing company. Those partnerships are very complicated. People are very interested in getting into our IP [intellectual property], and we need their help, but there’s only one Calera and several of them.” Hence Calera felt it could dictate its terms.
To facilitate deployment, Calera entered a worldwide strategic alliance with Bechtel in December 2009. Bechtel is a global engineering, procurement, and construction (EPC) firm with forty-nine thousand employees. Based in San Francisco, Bechtel operates in about fifty countries and generated \$31.4 billion in revenues in 2008. Its past projects included the Channel Tunnel connecting England and France; the San Francisco–area metro system, Bay Area Rapid Transit (BART); and military bases, oil refineries, airports and seaports, nuclear and fossil-fuel-fired power plants, and railroad infrastructure. Calera worked closely with the Renewables and New Technology division in Bechtel’s Power Business Unit. That division had experience with CCS and government grant applications and contracts, which could help Calera. Bechtel also offered a massive network of suppliers. “We didn’t want to go out to a lot of EPC firms,” Constantz explained. “We opted to just go to one firm and let them see what we’re doing.”Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010). Bechtel advised Calera on the construction of its demonstration plant and played a pivotal role in worldwide deployment.
Calera pursued other possible collaborators. One was Schlumberger, the oil field and drilling firm with seventy-seven thousand employees and \$27 billion in revenues in 2008. Calera sought Schlumberger’s expertise in extracting subsurface brines, which were needed to replace seawater for Calera’s process for inland locations. In early 2010, Calera was also in the midst of signing a deal with a big supplier for its electrochemistry operations. Finally, for power plants, Constantz considered Calera “just another industrial user. We can fight over who keeps carbon credits and all that, but the only time we have a relationship is if they invest in a plant, and we don’t need them to invest.”Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010). Nonetheless, Setton believed Calera had leverage in negotiating the terms with a power plant for electricity and CO2.
Quantifying Economic Opportunities
By mid-2010, Setton conceived of Calera’s possible services as spanning four major categories: clean power, material efficiency, carbon management, and environmental sustainability (Figure 5.20). These opportunities were often interconnected, complex, and affected by changing regulations and markets, so to make money, the company had to manage this complexity and educate multiple audiences. It seemed a daunting, though exciting, balancing act for Setton. It was one she had a chance to hone when the Australian government and TRUEnergy wanted to see what Calera could do.
The Latrobe Valley, site of TRUEnergy’s Yallourn Power Station in the state of Victoria, Australia, contains about 20 percent of the world’s and over 90 percent of Australia’s known reserves of lignite, or brown coal, an especially dirty and consequently cheap coal. In 2006–2007, Australia produced 65.6 million metric tons of brown coal valued at A\$820 million, or about US\$10/ton.Ron Sait, “Brown Coal,” Australian Atlas of Minerals Resources, Mines, and Processing Centers, accessed January 10, 2011, www.australianminesatlas.gov.au/aimr/commodity/brown_coal_10.jsp. Australia accounted for about 8 percent of the world’s coal exports, and its lignite accounted for about 85 percent of electricity generation in Victoria. The Labor Government had proposed carbon trading in 2009, but that plan had been faltering through 2010. The coal industry nonetheless had invested in various demonstration projects to make brown coal a cleaner source of electricity. Bringing a Calera demonstration plant to the Yallourn Power Station was another such endeavor. The Calera project would eventually be increased to a scale of 200 MW.
The entire Yallourn Power Station had a capacity of 1,480 MW and a voracious demand for resources. The plant needed thousands of tons of water per hour at full capacity, some of which had to be sent to treatment afterward. The plant also had the low energy-conversion efficiency typical of coal-fired plants. Compounding that, the plant’s brown coal had a low energy density, about 8.6 gigajoules per ton. In addition, combusting brown coal creates more SOx and NOx than other fuels, and both pollutants were regulated in Australia.The exact NOx and SOx emissions, before pollution control, depend on the design of the combustion unit, but for a variety of designs the US Environmental Protection Agency estimated SOx emissions to be 5 to 15 kg per ton of lignite burned and NOx emissions to be 1.8–7.5 kg per ton of lignite burned. See US Environmental Protection Agency, “Chapter 1: External Combustion Sources,” in AP 42, Compilation of Air Pollutant Emission Factors, Volume 1: Stationary Point and Area Sources, 5th ed. (Research Triangle Park, NC: US Environmental Protection Agency, 1998), 7–8, accessed January 10, 2011, http://www.epa.gov/ttn/chief/ap42/ch01/index.html. Although it is difficult to put an exact price on the cost of controlling emissions in Australia, trading programs in the United States give some insight. The United States runs a cap-and-trade program for NOx and SOx for power plants on the east coast, and from January 2008 to July 2010, permits to emit one ton of SOx decreased from approximately \$500 to \$50 per ton, while NOx allowances started around \$800 before peaking near \$1,400 and decreasing to \$50 per ton. See the Federal Energy Regulatory Commission, Emissions Market: Emission Allowance Prices (Washington, DC: Federal Energy Regulatory Commission, 2010), accessed January 10, 2011, www.ferc.gov/market-oversight/othr-mkts/emiss-allow/2010/07-2010-othr-emns-no-so-pr.pdf. Since the price of an allowance ideally represents the marginal cost of abating an additional ton of emissions, it reflects the cost of control technology. Calera claimed its process, as noted earlier, could achieve up to 90 percent CO2 reduction and do so at a lower price if local resources could provide valuable feedstock.
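A rough calculation suggests the scale of that fuel demand. The capacity and energy density come from the case; the 25 percent thermal efficiency is an assumed figure, consistent with the low conversion efficiency the case describes but not stated in it.

```python
# Rough fuel-demand estimate for Yallourn. Capacity (1,480 MW) and
# brown-coal energy density (8.6 GJ/ton) come from the case; the 25%
# thermal efficiency is an assumption, consistent with the "low
# energy-conversion efficiency" the case describes.

capacity_mw = 1480
efficiency = 0.25                       # assumed conversion efficiency
energy_density_gj_per_ton = 8.6         # brown coal, from the case

thermal_mw = capacity_mw / efficiency   # heat input needed, in MW
gj_per_hour = thermal_mw * 3600 / 1000  # 1 MW = 1 MJ/s, so GJ of heat/hour
coal_tons_per_hour = gj_per_hour / energy_density_gj_per_ton

print(round(coal_tons_per_hour))  # on the order of 2,500 tons of coal/hour
```

Under these assumptions the plant would burn on the order of 2,500 tons of brown coal every hour at full output, which gives a sense of both its resource appetite and the volume of flue gas a Calera system would have to process.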
Calera planned to look for local brines to provide alkalinity for its process. If they were unavailable, Calera would produce alkalinity with its proprietary electrochemistry process, which would increase the cost of cement production. The economics of the project would depend primarily on the price it could get for its cement. Calera had the potential to use wastewaters to provide calcium (Figure 5.21): about one hundred miles from the TRUEnergy plant, a large-scale desalination project was under construction, providing a potential feedstock for the Calera process. Utilizing such wastewater streams also offered potential revenue: as an example, in Europe, a desalination plant had to pay up to €200 per ton to dispose of its brine. Although prices would be different for Australia, Calera could be paid to take such waste brine for its process. Calera also considered using fly ash, a coal-combustion waste, for additional alkaline material.
With many variables and several unknowns, determining the cost of each part of the process was critical to assessing the viability of the entire project. The models, however, depended on assumptions that changed constantly as the project configuration and other factors changed. Nobody had ever built a Calera system in the field, which left much uncertainty in the actual numbers and in broader strategy. Under many scenarios, Calera’s energy demand would remain far less than the parasitic load of other CCS options. In some scenarios, however, Calera would need to derive closer to 50 percent of its alkaline ions from electrochemistry, a high energy requirement. How many sites could compete with CCS under that energy requirement? How should that affect Calera’s business model and expansion plan?
TRUEnergy itself stood to benefit greatly from Calera, beyond the CO2 capture potential. TRUEnergy was a wholly owned subsidiary of the CLP Holdings Group, an electricity generation, distribution, and transmission investor based in Hong Kong with assets in India, China, Southeast Asia, and Australia. Lessons CLP learned now could pay dividends later, and the company had committed to lowering its carbon intensity.CLP Holdings, “Climate Vision,” accessed January 10, 2011, www.clpgroup.com/ourvalues/environmental/climatevision/Pages/climatevision.aspx. The Yallourn Power Station, which could have an operating life of forty or more years, could attempt to gain a strategic advantage and an improved public image by reducing its carbon emissions in anticipation of eventual regulation. The plant could also use Calera’s process to lower SOx emissions directly, since Calera’s cement could trap these pollutants. Indirectly, if Calera purchased power at night, the plant could decrease SOx emissions at the times when they were most destructive—typically hot, sunny afternoons—and when SOx controls were typically most expensive. This load shifting could save money on pollution controls or new generation capacity.
Next Steps
Setton sat in her office, adjoining Constantz’s, in the building Calera shared with the Los Gatos Public Library. Outside her door, a dozen employees worked at cubicles whose low, translucent partitions made them more side-by-side desks than cubicles. In the foyer, a light flashed through a toy-sized display representing CO2 moving from a power plant to a Calera cement plant and then to a concrete mixer truck; bits of chalky stones, like the ones in vials on Constantz’s desk, represented Calera’s product. The company had grown rapidly and showed enormous promise, but it had yet to build the full-scale commercial plants that would fulfill that promise. Setton summarized the situation: “To innovate means you have to protect yourself, have to convince people, have to prove quickly, and have to deploy widely. Two strategic questions are important: one, what are the partnerships that will help us convince the world and bring it to reality, and second, how fast can we deploy. That means resources and allocation. How much do we keep in house, how much do we outsource without losing our protection. Those are key questions as we grow fast.”Andrea Larson and Mark Meier, Calera: Entrepreneurship, Innovation, and Sustainability, UVA-ENT-0160 (Charlottesville: Darden Business Publishing, University of Virginia, September 21, 2010).
The Calera case offers an example of an entrepreneur taking a process performed naturally, but at a small scale in coral reef formation, and applying the inherent principles to cement production on a large scale. This imitation of natural system chemistry and function represents a growing inspirational focus and concrete product design approach for innovation. The following discussion introduces students to the notion of biomimicry in business.
Biomimicry
What better models could there be?…This time, we come not to learn about nature so that we might circumvent or control her, but to learn from nature, so that we might fit in, at last and for good, on the Earth from which we sprang.Janine M. Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997), 2, 9.
- Janine Benyus
Humans have always imitated nature, so biomimicry in practice is probably as old as humanity; as a formal concept, however, it is much newer. As a design philosophy, biomimicry draws upon nature to inspire and evaluate human-made products and strategies for growth. Biomimetic designers and engineers first examine how plants, animals, and ecosystems solve practical problems and then mimic those solutions or use them to spur innovation. Plants and animals have evolved in relation to each other and the physical world over billions of years. That evolution has yielded successful strategies for adaptation and survival that can, in turn, inform business products, practices, and strategic choices. Nature’s sustainability strategies—a systems perspective, resource efficiency, and nontoxicity—form the core of biomimicry and offer a model on which to base sustainable innovations in commerce.
Key Concepts
Janine Benyus, a forester by training, is the central figure in articulating and advocating the principles of biomimicry. In her 1997 book Biomimicry: Innovation Inspired by Nature, she coined the term biomimicry and defined it as “the conscious emulation of life’s genius” to solve human problems in design and industry.Janine M. Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997), 2. Benyus has called it “innovation inspired by nature. It’s a method, a way of asking nature for advice whenever you’re designing something.”Michael Cervieri, “Float Like a Butterfly—With Janine Benyus,” ScribeMedia, October 22, 2008, accessed April 12, 2010, http://www.scribemedia.org/2008/10/22/float-like-a-butterfly-with-janine-benyus. Benyus also founded the Biomimicry Guild, a consultancy that helps companies apply biomimetic principles, and the Biomimicry Institute, a nonprofit organization that aspires to educate a broad audience.To view a twenty-three-minute video of Janine Benyus talking about biomimicry at the 2005 Technology, Entertainment, Design conference, see Janine Benyus, “Janine Benyus Shares Nature’s Designs,” filmed February 2005, TED video, 23:16, from a speech at the 2005 Technology, Entertainment, and Design conference, posted April 2007, accessed April 12, 2010, www.ted.com/talks/janine_benyus _shares_nature_s_designs.html.
Benyus was frustrated that her academic training focused on analyzing discrete pieces of life because it prevented her and others from seeing principles that emerge from analyzing entire systems. Nature is one such system, and Benyus calls for designers and businesses to consider nature as model, mentor, and measure. As she points out, four billion years of natural selection and evolution have yielded sophisticated, sustainable, diverse, and efficient answers to problems such as energy use and population growth. Humans now have the technology to understand many of nature’s solutions and to apply similar ideas in our societies whether at the materials level, such as mimicking spider silk or deriving pharmaceuticals from plants, or at the level of ecosystems and the biosphere, such as improving agriculture by learning from prairies and forests or reducing our GHG emissions by shifting toward solar energy. As the final step, if we assess our own products and practices by comparing them with natural ones, we will have a good sense of how sustainable they ultimately are.
Indeed, Benyus identified a list of principles that make nature sustainable and could do the same for human economic activity:
• Runs on sunlight
• Uses only the energy it needs
• Fits form to function
• Recycles everything
• Rewards cooperation
• Banks on diversity
• Demands local expertise
• Curbs excesses from within
• Taps the power of limitsJanine M. Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997), 7.
Such biomimetic principles could be, and have been, exploited to make innovative products in conventional industries. For instance, an Italian ice-axe manufacturer modified its product design after studying woodpeckers. The new design proved more effective and generated higher sales.Kate Rockwood, “Biomimicry: Nature-Inspired Products,” Fast Company, October 1, 2008. Extrapolated further, biomimicry urges us to assume a sustainable place within nature by recognizing ourselves as inextricably part of it. Biomimicry focuses “not on what we can extract from the natural world, but on what we can learn from it.”Janine M. Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997). It also lends urgency to protecting ecosystems and cataloging their species and interdependencies so that we may continue to be inspired, aided, and instructed by nature’s ingenuity.
In its broader, systems-conscious sense, biomimicry resembles industrial ecology and nature’s services but clearly shares traits with William McDonough’s concept of cradle-to-cradle design, Karl-Henrik Robèrt’s Natural Step guidelines, and other sustainability strategies and theories.Each of these concepts relates to sustainable business and each has its own heritage. Hence the concepts are summarized here with a suggestion for further reading. Industrial ecology refers to the industry practice of collocation, which uses wastes from one process as input for another, such as using gypsum recovered from scrubbing smokestack emissions to make drywall. See Thomas E. Graedel and Braden R. Allenby, Industrial Ecology, 2nd ed. (Upper Saddle River, NJ: Prentice Hall, 2003). Nature’s services refer to the ways natural processes, such as photosynthesis and filtration in wetlands, provide goods and benefits to humans, such as clean air and clean water. See Gretchen Daily, ed., Nature’s Services: Societal Dependence on Natural Ecosystems (Washington, DC: Island Press, 1997). Cradle-to-cradle design emphasizes that products should be made to be safely disassembled and reused, not discarded, at the end of their lives to become feedstocks for new products or nutrients for nature. See William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002). The Natural Step is a strategic framework that considers human economic activity within the broader material and energy balances of the earth; it holds that because we cannot exhaust resources or produce products that nature is unable to safely replenish or degrade, we must switch to renewable and nontoxic materials. 
See Karl-Henrik Robèrt, The Natural Step Story: Seeding a Quiet Revolution (Gabriola Island, Canada: New Society Publishers, 2008); Natural Step, “Home,” accessed April 12, 2010, http://www.naturalstep.org; and Natural Step USA, “Home,” accessed April 12, 2010, www.naturalstep.org/usa. Benyus has even explicitly aligned biomimicry with industrial ecology to enumerate ten principles of an economy that mimics nature.Janine M. Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997), 252–77.
1. “Use waste as a resource,” whether at the scale of integrated business parks or the global economy.
2. “Diversify and cooperate to fully use the habitat.” Symbiosis and specialization within niches assure that nothing is wasted and provide benefits to other companies or parts of industry.
3. “Gather and use energy efficiently.” Use fossil fuels more efficiently while shifting to renewable resources.
4. “Optimize rather than maximize.” Focus on quality over quantity.
5. “Use materials sparingly.” Dematerialize products and reduce packaging; reconceptualize business as providing services instead of selling goods.
6. “Don’t foul the nests.” Reduce toxins and decentralize production of goods and energy.
7. “Don’t draw down resources.” Shift to renewable feedstocks, but use them at a low enough rate that they can regenerate. Invest in ecological capital.
8. “Remain in balance with the biosphere.” Limit or eliminate pollution.
9. “Run on information.” Create feedback loops to improve processes and reward environmentally restorative behavior.
10. “Shop locally.” Use local resources for resiliency and to support regional populations, reduce transportation needs, build local economies, and let people see the impact of their consumption on the environment and local economic vitality.
Examples of Biomimetic Products
While biomimicry’s concepts can be applied at various scales, they are most often considered at the level of individual products or technologies. Velcro is perhaps the best-known example. In the 1940s, Swiss engineer George de Mestral noticed burrs stuck to his clothes and his dog’s fur after they went for a hike. He analyzed the burrs and fabric under a microscope and saw how the hooks of the former tenaciously gripped the loops of the latter. He used this observation to invent Velcro, a name he derived from velours (French for velvet) and crochet (French for hook). Over the next several years, he switched from cotton to nylon to improve product durability and refined the process of making his microscopic arrays of hooks and loops (Figure 5.22). He then began to file patents worldwide. Velcro is now used in countless ways—including space suits, wallets, doll clothes, and athletic shoes.
Plants inspired another example of early design biomimicry.Many examples of biomimicry can be found at the Biomimicry Institute’s website, Ask Nature, “Home,” accessed April 12, 2010, http://www.asknature.org. The 2009 Biomimicry Conference in San Diego included an overview of biomimetic products; footage from that conference is available: “Biomimicry Conference 2009—San Diego,” YouTube video, 3:26, from the Zoological Society of San Diego’s Biomimicry Educational Conference on October 1–2, 2009, posted by MEMSDisplayGuy, November 9, 2009, accessed April 8, 2010, www.youtube.com/v/r-WUPr5LUR8. Joseph Paxton, a gardener, was charged with caring for an English duke’s giant Amazon water lily (Victoria amazonica), which British travelers had brought back from South America in the 1830s. The lily pads were so massive and buoyant that Paxton could put his young daughter on them and they would not sink. Intrigued, Paxton studied the underside of the water lily. He then used the rib-and-spine design that kept the water lilies afloat to build a greenhouse. A few years later, he applied the same principles to design the Crystal Palace for the 1851 Great Exhibition in London (Figure 5.23). The building relied on cast iron ribs to support glass plates and was a forerunner of modular design and modern greenhouses.Lucy Richmond, “The Giant Water Lily That Inspired the Crystal Palace,” Telegraph (UK), May 7, 2009, accessed January 10, 2011, http://www.telegraph.co.uk/comment/letters/5285516/The-giant-water-lily-that-inspired-the-Crystal-Palace.html; “Leaves Given Structural Support: Giant Water-Lily,” Ask Nature, accessed April 8, 2010, asknature.org/strategy/902666afb8d8548320ae0afcd54d02ae.
More recently, architects have learned how to regulate building temperatures by studying termite mounds. In 1995, architect Mick Pearce and engineers from Arup Associates obviated the need for an air-conditioning system for the Eastgate Centre in Harare, Zimbabwe, by using a series of air shafts and the thermal mass of the building. That alone saved \$3.5 million in construction costs. The shops and offices in the Eastgate Centre use 65 percent less energy than comparable buildings to maintain a comfortable temperature, reducing total energy needs by 10 percent and making rent 20 percent cheaper than comparable buildings. The design was inspired by termites (Macrotermes michaelsei) that built mounds ten to twenty feet tall while maintaining the structures’ internal temperature at 87°F, the ideal temperature to grow the fungi the termites eat, even when external temperatures dropped to 70°F. The termites use heat stored in the mud to help regulate temperature and open and close hatches in shafts that vent hot air and draw in cooler air.“Eastgate Centre Building: Passive Heating and Cooling Saves Energy,” Ask Nature, accessed November 16, 2009, www.asknature.org/product/373ec79cd6dba791 bc00ed32203706a1; “Ventilated Nests Remove Heat and Gas: Mound-Building Termites,” Ask Nature, accessed April 12, 2010, http://www.asknature.org/strategy/8a16bdffd27387cd2a3a995525ea08b3; Abigail Doan, “Green Building in Zimbabwe Modeled after Termite Mounds,” Inhabit, December 10, 2007, accessed January 10, 2011, inhabitat.com/building-modelled-on-termites-eastgate-centre-in-zimbabwe.
Interface Flooring Systems, a sustainability-minded carpet company, took another lesson from nature: leaves and twigs never look out of place on the forest floor, no matter how they are scattered or if they vary subtly in hue and shape. In 2000, the InterfaceFLOR division built this lesson into its Entropy line of carpet tiles, part of its platform of biomimetic products. Each carpet tile has a different random pattern within a basic design and color variations within an overall palette (Figure 5.24). This variation creates a harmonious whole and eliminates the need to match specific dyes or install tiles in a particular direction, which in turn saves money, material, and time for the initial installation and subsequent repairs. The company estimates installing Entropy carpet wastes only 1.5 percent of the carpet compared with the industry average of 14 percent for broadloom carpet.“i2™ Carpet and Flooring: As-Needed Tile Replacement Saves Resources,” Ask Nature, accessed January 10, 2011, www.asknature.org/product/a84a9167f21f1cc 690e0e673c4808833; InterfaceFLOR, “i2™ Modular Carpet—How Nature Would Design a Floor,” accessed April 12, 2010, http://www.interfaceflor.com/Default.aspx?Section=3&Sub=11.
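The waste comparison above reduces to simple arithmetic. A minimal sketch in Python, where the 1,000-square-meter job size is a hypothetical assumption and the 1.5 percent and 14 percent waste rates are the figures cited above:

```python
# Waste arithmetic for the Entropy carpet example. The 1,000 m^2 job
# size is a hypothetical assumption; the waste rates are those cited
# in the text (1.5% for Entropy tiles vs. 14% for broadloom carpet).
job_area_m2 = 1000.0

entropy_waste = job_area_m2 * 0.015    # Entropy carpet tiles
broadloom_waste = job_area_m2 * 0.14   # industry-average broadloom

savings = broadloom_waste - entropy_waste
print(f"Carpet saved on a {job_area_m2:.0f} m^2 job: {savings:.0f} m^2")
```

On these assumed rates, tile installation wastes roughly one-ninth as much carpet as broadloom on the same job.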
Biomimicry can also aid sophisticated electronics. The communication technology company Qualcomm has applied the principle that makes butterflies and peacock feathers iridescent to full-color electronic displays, from cell phones to tablet computers. Its product, Mirasol, relies on what Qualcomm calls interferometric modulation within a microelectromechanical systems device. The display consists of pixels that contain two layers: a glass plate and a reflective layer over a base substrate. Minute voltage differences change the distance between the plates in individual pixels, producing interference patterns that create different colors. The pixels do not need their own backlighting, unlike LCDs, and hence use very little energy and remain highly visible even in bright sunlight. The technology won several awards from 2008 to 2010, including the Wall Street Journal 2009 Technology Innovations Award in the semiconductor category and LAPTOP magazine’s 2010 Best Enabling Technology.Qualcomm, “Mobile Displays: Mirasol Display Technology,” accessed January 10, 2011, www.qualcomm.com/products_services/consumer_electronics/displays/mirasol; “Mirasol Display Hands-On High-Res,” YouTube video, 0:58, posted by engadget, January 8, 2010; accessed April 12, 2010, www.youtube.com/v/jmpBgaPGYKQ; Mirasol Displays, “How It Works,” accessed January 10, 2011, http://www.mirasoldisplays.com/how-it-works; and Mirasol Displays, “Press Center: Awards,” accessed January 10, 2011, http://www.mirasoldisplays.com/awards.
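The interference logic behind such gap-based color selection can be sketched with the standard thin-gap condition: reflection is reinforced at wavelengths where twice the gap depth equals a whole number of wavelengths (2d = m * lambda). The gap depths below are illustrative assumptions, not Qualcomm's actual device parameters.

```python
# Thin-gap interference sketch: a gap of depth d strongly reflects
# visible wavelengths lambda satisfying 2*d = m*lambda for integer m.
# The gap depths used here are illustrative, not Qualcomm's parameters.
def reflected_wavelengths_nm(gap_nm, visible=(380.0, 750.0)):
    """Visible wavelengths (nm) constructively reinforced by the gap."""
    waves, m = [], 1
    while True:
        lam = 2.0 * gap_nm / m
        if lam < visible[0]:      # below the visible band: done
            break
        if lam <= visible[1]:     # inside the visible band: keep it
            waves.append(lam)
        m += 1
    return waves

print(reflected_wavelengths_nm(325))  # → [650.0], i.e., red
```

Varying the gap depth pixel by pixel, as the voltage differences in the text do, selects different reflected colors without any backlight.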
Conclusion
Nature provides a rich source of ideas that can make human-designed products and corporate strategies more efficient and resilient, and less toxic—and therefore more sustainable. Nature’s ecosystems avoid waste: what is discarded by one species is often used by another as input or nutrition. Nature solves problems with the materials at hand, the very building blocks of life, rather than exotic and synthetic chemicals. Its systems are self-energizing; nature runs on sunlight, mediated by photosynthesis. When strategy executives or product designers operate from a biomimicry vantage point, considering its principles and the examples of plants and animals that apply, they can use nature’s models to create sustainable business innovations.
KEY TAKEAWAYS
• Biomimicry can offer new ideas for solving some of our seemingly intractable ecological and environmental health problems.
• Entrepreneurs emerge from a wide variety of backgrounds; it is more a question of “fit” among the entrepreneur, the product/technology, and the market need that creates the opportunity.
• Success is not just about having a unique or superior technology; it is, perhaps most important, about finding early customers and generating revenue streams that satisfy investors.
EXERCISES
1. Describe each of the following for Calera:
1. entrepreneur
2. opportunity
3. product
4. concept
5. resources
6. market
7. entry
2. What are Calera’s major challenges now? What does the company have to get right in the short run to succeed? Prepare your analysis as a presentation with recommendations.
3. Name advantages or disadvantages in having the financial backing of Vinod Khosla.
Learning Objectives
1. Learn the definition of a green supply chain.
2. Understand how integration of green and sustainability knowledge can improve the performance of supply chains.
3. Define reverse logistics, life cycle assessment, and design for environment in the context of supply chains.
4. Gain insight into the strategic value of greening your supply chain.
Regardless of how you might feel about Walmart, the effects of the company’s sustainability policies are being felt worldwide through its supply chains. On February 1, 2007, Walmart President and CEO Lee Scott announced that his company’s “Sustainability 360” program would expand Walmart’s sustainability efforts from its operations into its supply chains by “tak[ing] in,” as Scott said, “our entire company—our customer base, our supplier base, our associates, the products on our shelves, the communities we serve.”Walmart, “Wal-Mart CEO Lee Scott Unveils ‘Sustainability 360,’” news release, February 1, 2007, accessed January 10, 2011, http://walmartstores.com/pressroom/news/6237.aspx. Walmart customers could now track the company’s “Love, Earth” jewelry all the way back to the mine or buy fish certified by the Marine Stewardship Council. In 2010 the company announced the goal of a twenty-million-metric-ton greenhouse gas emission reduction from its global supply chain (encompassing over one hundred thousand suppliers).Walmart, “Sustainability Fact Sheet: Wal-Mart Takes the Lead on Environmental Sustainability,” news release, March 1, 2010, accessed January 30, 2011, http://walmartstores.com/download/2392.pdf. Furthermore, Walmart enlisted the nonprofit Carbon Disclosure Project, backed by institutional investors with \$41 trillion in assets as of September 2007, to help Walmart’s suppliers of DVDs, toothpaste, soap, milk, beer, vacuum cleaners, and soda assess and reduce their carbon footprints.Ylan Q. Mui, “Wal-Mart Aims to Enlist Suppliers in Green Mission,” Washington Post, September 25, 2007, accessed January 10, 2011, www.washingtonpost.com/wp-dyn/content/article/2007/09/24/AR2007092401435.html.
Indeed, with roughly one hundred thousand suppliers, two million employees, and millions of customers per day,Walmart, “Sustainability Fact Sheet: Wal-Mart Takes the Lead on Environmental Sustainability,” news release, March 1, 2010, accessed January 30, 2011, http://walmartstores.com/download/2392.pdf. Walmart’s operations and those it encouraged, from product design and resource extraction through final consumption and disposal, could massively influence societies and the natural environment. As such impacts attracted attention, so did the benefits of and the need for greener supply networks.
Green supply chains (GSCs) became Supply Chain Digest’s number one supply-chain trend of 2006 as more companies such as Walmart embraced them.Dan Gilmore, “Top Ten Supply Chain Trends of 2006,” Supply Chain Digest, January 4, 2006, accessed January 10, 2011, http://www.scdigest.com/assets/FirstThoughts/07-01-04.cfm?cid=871&ctype=content. Fully developed green supply chains consider sustainability for every participant at every step, from design to manufacture, transportation, storage, and use to eventual disposal or recycling. This attentiveness would reduce waste, mitigate legal and environmental risks, minimize and possibly eliminate adverse health impacts throughout the value-added process, improve the reputations of companies and their products (enhancing brands), and enable compliance with increasingly stringent regulations and societal expectations. Thus GSCs offer the opportunity to boost efficiency, value, and access to markets through improving a company’s environmental, social, and economic performance.
Improving Conventional Supply Chains
In its simplest form, a conventional supply chain assumes that firms take raw materials at the beginning of the supply chain and transform them into a product at the end of the supply chain. Ultimately, the supply chain terminates at the point of the final buyer purchasing and using the product (see Figure 6.1). Vertical integration absorbs steps in the supply chain within a single corporation that conducts exchange through internal transfer pricing agreements. Disaggregation maintains ownership in discrete businesses that determine prices through market-based transactions.
A company that sells a final product must meet certain requirements and interact with suppliers, third-party logistics providers, and other stakeholder groups that can influence the entire process. Each institution tries to shape the supply chain to its own advantage. As the product moves from design to consumption (black arrows), waste and other problems (gray arrows) accrue. Whether those problems are unfair wages, deforestation, or air pollution, these costs are not necessarily reflected in the price of the finished product but are instead externalized to the public in some fashion or expected to be borne by intermediate links in the conventional chain.
While the term supply chain implies a one-way, linear relationship among participants (e.g., from concept, to resource extraction, to processing, to component manufacturing, to system integration, to final assembly, etc.), the chain is more accurately described as a network of individuals and organizations. Managing such networks can become quite complex, especially as they sprawl over more of the globe. Conventional supply-chain management plans, implements, and controls the operations of the supply chain as efficiently as possible—typically, however, from a limited vantage point that ignores and externalizes many costs.
In contrast, a green supply chain takes a broader, systems view that internalizes some of these costs and ultimately turns them into sources of value. Green supply chains thus modify conventional supply chains in two significant ways: they increase sustainability and efficiency in the existing forward supply chain and add an entirely new reverse supply chain (see Figure 6.2).
Improving Logistics
A company can select various ways to improve the sustainability of its logistics systems. The company may communicate sustainability standards backward to suppliers and require them to adopt environmental management systems or certifications, such as ISO 14001; survey and monitor suppliers’ current practices or products for their sustainability and offer training, technology, and incentives to improve those practices or products;According to the International Organization for Standardization, which established this qualification, ISO 14001 “gives the requirements for quality management systems [and] is now firmly established as the globally implemented standard for providing assurance about the ability to satisfy quality requirements and to enhance customer satisfaction in supplier–customer relationships.” International Organization for Standardization, “ISO 14001:2004,” accessed January 10, 2011, http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=31807. require suppliers to avoid certain hazardous ingredients and label others; and/or ask suppliers and other supporting firms, such as transportation companies, to suggest ways to improve the efficiency and sustainability of the whole process. Hence companies “greening” their supply chains are likely to communicate and collaborate more with suppliers and subcontractors to innovate and find the best solutions. They might also reach out to nongovernmental organizations (NGOs) and government agencies for further assistance.
For example, US-based DesignTex, in the 1990s a leader in the contract textile industry and now a subsidiary of US commercial furniture manufacturer Steelcase,DesignTex, “Designtex, A Steelcase Company: Our Company,” accessed January 30, 2011, store.designtex.com/ourcompany.aspx?f=35398. chose to pursue an environmentally friendly commercial upholstery fabric. DesignTex collaborated with a small Swiss firm called Rohner Textil AG, chemical corporation Ciba Geigy, and the Environmental Protection Encouragement Agency (a German NGO) to determine product specifications, develop fabric requirements, and identify substitute benign chemicals for the toxic chemicals present along the fabric supply chain.Matthew M. Mehalik, “Sustainable Network Design: A Commercial Fabric Case Study,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 180–89. The new product’s supply chain originated from the wool of free-range sheep and ramie grown without pesticides to a yarn-twisting mill and dye manufacturers, with scraps of the textile generated along the way being sold to farmers and gardeners for mulch.
Surprisingly, the production changes did not just reduce DesignTex’s environmental impact; they also added value: The factory’s effluent became cleaner than the incoming water supply. Regulatory paperwork was eliminated. Workers no longer needed protective masks or gloves, which eliminated health risks and liability exposure.William McDonough and Michael Braungart, “Waste Equals Food,” in Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002). Because of these decreased costs and the tax relief for the accompanying environmental investments, the innovation showed a payback period of only five years.Matthew M. Mehalik, “Sustainable Network Design: A Commercial Fabric Case Study,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 180–89. It also was an early, successful illustration of cradle-to-cradle design, the cyclical design protocol that allows biologically benign products to safely return to nature.
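The five-year figure follows from the simple payback-period formula: initial investment divided by annual savings. The dollar amounts in this sketch are hypothetical placeholders; the case reports only the resulting five-year period.

```python
# Simple payback-period calculation. The investment and annual-savings
# amounts are hypothetical placeholders; the case reports only that the
# DesignTex/Rohner changes paid back in about five years.
def payback_period(initial_investment, annual_savings):
    """Years until cumulative savings cover the up-front investment."""
    return initial_investment / annual_savings

print(payback_period(500_000, 100_000))  # → 5.0
```

Any combination of eliminated regulatory paperwork, avoided safety equipment, and tax relief that raises annual savings shortens the payback proportionally.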
Reverse Logistics
In addition to dramatically improving conventional supply-chain logistics, green supply chains extend past the point of product use, where conventional chains end, and consider how to recover and reuse materials—questions of reverse logistics. Many companies already have rudimentary reverse logistics systems to deal with customers’ returns of items they do not want or that were found defective or otherwise unsatisfactory. An expanded reverse logistics system would ultimately replace the linearity of most production methods—raw materials, to processing, to further conversions and modification, to ultimate product, to use, to disposal—with a cradle-to-cradle, cyclical path or closed loop that begins with the return of used, outmoded, out-of-fashion, and otherwise “consumed” products. The products are either recycled and placed back into the manufacturing stream or broken down into compostable materials. The cycle is never ending because materials return to the land in safe molecular structures (taken up and used by organisms as biological nutrients) or are perpetually used within the economy as input for new products (technical nutrients).
Companies typically funnel spent items from consumers into the reverse supply chain by leasing their products or providing collection points or by other means of recovering the items once their service life ends.Shad Dowlatshahi, “Developing a Theory of Reverse Logistics,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 143–55. For example, Canon and Xerox provide free shipping to return used toner cartridges and have thus collectively recovered over one hundred thousand tons of ink and cartridges since 1990.Canon, “Toner Cartridge Return Program,” accessed October 2, 2009, www.usa.canon.com/templatedata/AboutCanon/ciwencrpr.html; Xerox, “Prevent and Manage Waste,” accessed January 10, 2011, www.xerox.com/about-xerox/recycling/supplies/enus.html.
Once collected, whether by the original manufacturer or a third party, the products could be inspected and sorted. Some items might return quickly to the supply chain with only minimal repair or replacement of certain components, whereas other products might need to be disassembled, remanufactured, or cannibalized for salvageable parts while the remnant is recycled or sent to a landfill or incinerator. “Companies that remanufacture are estimated to save 40–60 percent of the cost of manufacturing a completely new product…while requiring only 20 percent of the effort,” leading to significant, structural savings, wrote Shad Dowlatshahi in Interfaces.Shad Dowlatshahi, “Developing a Theory of Reverse Logistics,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 144. Moreover, the reverse supply chain might spawn new suppliers as well as other sources of revenue for companies that engage in collection, disassembly, and so on, making the entire network more efficient.Joy M. Field and Robert P. Sroufe, “The Use of Recycled Materials in Manufacturing: Implications for Supply Chain Management and Operations Strategy,” International Journal of Production Research 45, no. 18–19 (October 2007): 4439–63. This concept of an eco-efficient closed loop thereby makes green supply chains a central piece of sustainable industrial ecosystems.
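Dowlatshahi's figures translate into a simple cost model. A hedged sketch, with a hypothetical new-unit cost standing in for real data:

```python
# Remanufacturing economics from the Dowlatshahi figures: saving
# 40-60% of new-build cost. The $200 new-unit cost is a hypothetical
# input used only to make the range concrete.
def remanufactured_cost(new_unit_cost, savings_rate):
    """Cost to remanufacture one unit, given the fraction of new-build cost saved."""
    return new_unit_cost * (1.0 - savings_rate)

new_cost = 200.0
best = remanufactured_cost(new_cost, 0.60)   # roughly 80 per unit
worst = remanufactured_cost(new_cost, 0.40)  # roughly 120 per unit
print(f"Remanufacturing cost range: {best:.0f} to {worst:.0f} per unit")
```

At these assumed rates, every remanufactured unit costs between 40 and 60 percent of a new one, which is the structural saving the reverse supply chain captures.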
Life-Cycle Assessment and Design for Environment
The same techniques that improve the sustainability of conventional logistics also aid reverse logistics. In addition, green supply chains fundamentally require two tools: life-cycle assessment (LCA) and design for environment (DfE). According to the US Environmental Protection Agency’s National Risk Management Research Laboratory, LCA takes the viewpoint of a product, process, or service by “(1) compiling an inventory of relevant energy and material inputs and environmental releases; (2) evaluating the potential environmental impacts associated with identified inputs and releases; [and] (3) interpreting the results to help you make an informed decision,” typically to minimize negative impacts across the entire life of the product.US Environmental Protection Agency, “Life-Cycle Assessment (LCA),” accessed January 10, 2011, http://www.epa.gov/ORD/NRMRL/lcaccess. For examples, see Maurizio Bevilacqua, Filippo Emanuele Ciarapica, and Giancarlo Giacchetta, “Development of a Sustainable Product Lifecycle in Manufacturing Firms: A Case Study,” International Journal of Production Research 45, no. 18–19 (2007): 4073–98, as well as Stelvia Matos and Jeremy Hall, “Integrating Sustainable Development in the Supply Chain: The Case of Life Cycle Assessment in Oil and Gas and Agricultural Biotechnology,” Journal of Operations Management 25, no. 6 (2007): 1083–82. This analysis helps identify the points in the green supply chain that detract from ultimate sustainability and establishes a baseline for improvement. For example, Walmart’s third-party logistics provider in Canada began using railways more than roads to supply ten stores, thereby cutting carbon emissions by 2,600 tons. The company estimated it would save another \$4.5 million and prevent 1,400 tons of waste annually by switching from cardboard to plastic shipping crates.“Wal-Mart’s ‘Green’ Campaign Pays Off in Canada,” DC Velocity, October 1, 2007, accessed October 2, 2009, www.dcvelocity.com/news/?article_id=1338.
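The first two LCA steps (compiling an inventory of inputs and releases, then evaluating impacts) can be illustrated with a toy freight-emissions comparison in the spirit of the Walmart rail example. The emission factors and shipment figures below are illustrative assumptions, not Walmart's data.

```python
# Toy life-cycle inventory: CO2 released by freight transport, used to
# compare a road lane against a rail lane. The emission factors and the
# shipment size/distance are illustrative assumptions, not Walmart data.
EMISSION_FACTORS = {  # kg CO2 per tonne-km (assumed rough magnitudes)
    "road": 0.10,
    "rail": 0.03,
}

def freight_emissions_kg(mode, tonnes, km):
    """Step 1 (inventory): CO2 released moving `tonnes` of goods `km` kilometers."""
    return EMISSION_FACTORS[mode] * tonnes * km

# Step 2 (impact evaluation): compare the two modes on one shipping lane.
road = freight_emissions_kg("road", 500, 1_000)
rail = freight_emissions_kg("rail", 500, 1_000)
print(f"Switching this lane from road to rail saves {road - rail:,.0f} kg CO2")
```

Repeating this inventory for each lane, material, and process stage, then interpreting the totals (step 3), is what identifies the baseline and the weak points the text describes.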
Application of DfE acknowledges that design determines a product’s materials and the processes by which the product is made, shipped, used, and recovered. Hence DfE could be used to avoid toxic materials from the outset; minimize energy and material inputs; and facilitate disassembly, repair, and remanufacturing. For instance, Hewlett Packard (HP) used DfE “product stewards,” whose role, HP explained, was as follows: “[Product stewards] are integrated into product design and research and development teams to identify, prioritize, and recommend environmental design innovations to make products easier to disassemble and recycle. Such features include modular designs, snap-in features that eliminate the need for glues and adhesives, fewer materials, and molded-in colors and finishes instead of paint, coatings, or plating.”Hewlett-Packard, “HP to Eliminate Brominated Flame Retardants from External Case Parts of All New HP Brand Products,” news release, November 1, 2005, accessed January 11, 2011, www.hp.com/hpinfo/newsroom/press/2005/051101a.html.
Conversely, process designs could influence product designs through new technology that implements an innovative idea. For example, in the Walden Paddlers case discussed in Section 4.5, Hardigg Industries was a plastics-molding company that partnered with Clearvue Plastics to create plastic pellets with 50 percent recycled content, which Hardigg thought was impossible until it was encouraged by the entrepreneurial founder of Walden Paddlers. Later, Hardigg was able to change its rotomolding technology to allow for the use of 100 percent recycled resins. Through the use of recycled materials and Clearvue’s innovation, Hardigg was able to lower costs, establish a competitive advantage within its industry, attract new customers, and increase customer satisfaction.Paul H. Farrow, Richard R. Johnson, and Andrea L. Larson, “Entrepreneurship, Innovation, and Sustainability Strategies at Walden Paddlers, Inc.,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 215–25.
Greener Supply Chains: Accelerating Response to Changed Context
Although green supply chains could present novel challenges, they had spread to address a convergence of legal requirements, consumer expectations, and competition for continued profitability. In 2001, a study of twenty-five suppliers showed 80 percent received significant requests to improve the environmental quality of their operations and products, and they in turn asked their suppliers to do the same.Business for Social Responsibility Education Fund, Suppliers’ Perspectives on Greening the Supply Chain (San Francisco: Business for Social Responsibility Education Fund, 2001), accessed January 11, 2011, www.getf.org/file/toolmanager/O16F15429.pdf. A larger survey from 2008 indicated 82 percent of respondents were planning to implement or were already implementing green supply-chain management strategies.Walfried M. Lassar and Adrian Gonzalez, The State of Green Supply Chain Management: Survey Results (Miami, FL: Ryder Center for Supply Chain Management, Florida International University, 2008), accessed January 11, 2011, grci.calpoly.edu/projects/sustaincommworld/pdfs/WP_Florida_Supply_Chain_Mgmt.pdf. The trend toward green supply chains was expected to continue.
Concern for green supply-chain topics emerged in the 1990s as, on one hand, globalization and outsourcing made supply networks increasingly complex and diverse and, on the other hand, new laws and consumer expectations increasingly demanded that companies take more responsibility for their products across the entire life of those products.Jonathan D. Linton, Robert Klassen, and Vaidyanathan Jayaraman, “Sustainable Supply Chains: An Introduction,” Journal of Operations Management 25, no. 6 (November 2007): 1075–82; Going Green Upstream: The Promise of Supplier Environmental Management (Washington, DC: National Environmental Education and Training Foundation, 2001), accessed January 11, 2011, www.neefusa.org/pdf/SupplyChainStudy.pdf. Companies had to more closely monitor their suppliers. Total quality management and conventional supply-chain management adapted to address some of these challenges in “a paradigm shift [that] occurred when the scope of analysis was broadened beyond what was customary [for operations analysts] at the time.”Charles J. Corbett and Robert D. Klassen, “Expanding the Horizons: Environmental Excellence as Key to Improving Operations,” Manufacturing and Service Operations Management 8, no. 1 (Winter 2006): 5–22. These broader management practices and ISO 9001 in turn laid the foundation for green supply-chain management and ISO 14001.
Between 2000 and 2009, the increased emphasis on sustainability expanded the scope further and deeper into environmental, public health, and community/social issues and embraced stakeholders beyond consumers and investors.Charles J. Corbett and Robert D. Klassen, “Expanding the Horizons: Environmental Excellence as Key to Improving Operations,” Manufacturing and Service Operations Management 8, no. 1 (Winter 2006): 5–22. This new paradigm of “extended producer responsibility,” which included a call for greater transparency and accountability, also compelled companies toward green supply-chain design.Markus Klausner and Chris T. Hendrickson, “Reverse-Logistics Strategy for Product Take-Back,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 156–65.
Laws to reduce human exposure to hazardous and toxic chemicals drive corporate attention to supply-chain materials use. Noncompliance with laws could hurt profits, market share, and brand image. For example, Dutch customs agents prevented approximately \$160 million worth of Sony PlayStation consoles from entering Holland in December 2001 because cadmium levels in their wiring exceeded levels set by Dutch law.Adam Aston, Andy Reinhardt, and Rachel Tiplady, “Europe’s Push for Less-Toxic Tech,” BusinessWeek, August 9, 2005, accessed January 11, 2011, http://www.businessweek.com/technology/content/aug2005/tc2005089_9729 _tc_215.htm. Sony disputed the root cause with its Taiwanese cable supplier but nonetheless had to pay to store, refurbish, and repack the machines.
Most forward-thinking global firms moved toward adopting consistent standards across all their markets, as opposed to different standards for different countries. Hence the tightest rules from one place tended to become the de facto global standard. For example, the EU’s directives 2002/95/EC on “the Restriction of the Use of certain Hazardous Substances in Electrical and Electronic Equipment” (RoHS) and 2002/96/EC on “Waste Electrical and Electronic Equipment” (WEEE) had many ramifications for suppliers and producers in the electronics industry. RoHS required all manufacturers of electronics and electrical equipment sold in Europe by July 2006 to substitute safer materials for six hazardous substances, such as lead and chromium. WEEE required producers to collect their electronic waste from consumers free of charge.European Commission, “Environment: Waste Electrical and Electronic Equipment,” accessed January 11, 2011, http://ec.europa.eu/environment/waste/weee/index_en.htm. The EU’s 2006 directive on “Registration, Evaluation, Authorization, and Restriction of Chemicals” (REACH) might further tighten global standards for producers and suppliers because it “gives greater responsibility to industry to manage the risks from chemicals and to provide safety information on the substances.”European Commission, “Environment: REACH,” accessed January 11, 2011, ec.europa.eu/environment/chemicals/reach/reach_intro.htm. Similar efforts have begun in Asia with Japan’s Green Procurement rules and China’s Agenda 21 goals.Adam Aston, Andy Reinhardt, and Rachel Tiplady, “Europe’s Push for Less-Toxic Tech,” BusinessWeek, August 9, 2005, accessed January 11, 2011, http://www.businessweek.com/technology/content/aug2005/tc2005089_9729 _tc_215.htm.
Consumers and institutional investors, meanwhile, have exerted pressure on companies through a variety of tactics, from socially responsible investment screening criteria to market campaigns for engaging in fair trade or ending sweatshop labor. Failure to publicly improve practices anywhere along the supply chain could hurt brand image and curtail access to markets. American universities and colleges founded the Worker Rights Consortium in 2000 “to assist universities with the enforcement of their labor rights codes of conduct, which were adopted to protect the rights of workers producing apparel and other goods bearing university names and logos.”Worker Rights Consortium, “Mission: History,” accessed October 2, 2009, www.workersrights.org/about/history.asp. Retailers such as Canada’s Hudson’s Bay Company began to audit suppliers’ factories for compliance with labor standards.Tim Reeve and Jasper Steinhausen, “Sustainable Suppliers, Sustainable Markets,” CMA Management 81, no. 2 (April 2007): 30–33. By 2005, the Investor Environmental Health Network, following the effective strategy of institutional investors negotiating with companies for more action and accountability on climate change, was encouraging investment managers and corporations to reduce high-risk toxic chemicals used in their products and used by companies in which they invest.
Successful Green Supply Chains Manage Added Complexity
Businesses might face novel challenges when implementing, operating, or auditing green supply chains. Given these challenges, businesses that already used an environmental management system were better equipped to build a green supply chain.Nicole Darnall, G. Jason Jolley, and Robert Handfield, “Environmental Management Systems and Green Supply Chain Management: Complements for Sustainability?” Business Strategy and the Environment 17, no. 1 (2008): 30–45; Toshi H. Arimura, Nicole Darnall, and Hajime Katayama, Is ISO-14001 a Gateway to More Advanced Voluntary Action? A Case for Green Supply Chain Management, RFF DP 09-05 (Washington, DC: Resources for the Future, 2009), accessed January 11, 2011, www.rff.org/documents/rff-dp-09-05.pdf. Nonetheless, all businesses could take steps to green their chains.
“Green” has become strategic. When sustainability is recognized as an operating and strategic opportunity, as in the cases of General Electric and Walmart, senior management supports green supply-chain initiatives and integrates them into the business’s core capabilities.Terry F. Yosie, Greening the Supply Chain in Emerging Markets: Some Lessons from the Field (Oakland, CA: GreenBiz, 2008), accessed January 11, 2011, http://www.greenbiz.com/sites/default/files/document/GreenBiz_Report_Greening _the_Supply_Chain.pdf; Samir K. Srivastava, “Green Supply-Chain Management: A State-of-the-Art Literature Review,” International Journal of Management Reviews 9, no. 1 (March 2007): 53–80. In 2010, however, authority over green supply chains still tended to be held by a variety of groups, such as supply-chain managers, environmental health and safety offices, and sustainability divisions.Walfried M. Lassar and Adrian Gonzalez, The State of Green Supply Chain Management: Survey Results (Miami, FL: Ryder Center for Supply Chain Management, Florida International University, 2008), accessed January 11, 2011, grci.calpoly.edu/projects/sustaincommworld/pdfs/WP_Florida_Supply_Chain_Mgmt.pdf. Personnel who might have once functioned separately within a company often had to collaborate and create new teams for green supply chains to work effectively, and those people needed time for the green supply chains to yield their maximum benefits.
Companies must actively include suppliers and service providers in greening supply chains so that they can build trust, lend their own expertise to increase sustainability, and receive adequate guidance and assistance in improving their operations.Mark P. Sharfman, Teresa M. Shaft, and Robert P. Anex Jr., “The Road to Cooperative Supply-Chain Environmental Management: Trust and Uncertainty among Pro-active Firms,” Business Strategy and the Environment 18, no. 1 (January 2009): 1–13. Businesses must state clear and reasonable expectations and allow sufficient lead time for suppliers to respond. They must also be willing to listen to suppliers.Business for Social Responsibility Education Fund, Suppliers’ Perspectives on Greening the Supply Chain (San Francisco: Business for Social Responsibility Education Fund, 2001), accessed January 11, 2011, www.getf.org/file/toolmanager/O16F15429.pdf. Furthermore, companies cannot simply issue guidelines from their headquarters; their representatives must instead be available on the ground and cooperating with local contacts to ensure results and prevent increased competition within the supply chain.Terry F. Yosie, Greening the Supply Chain in Emerging Markets: Some Lessons from the Field (Oakland, CA: GreenBiz, 2008), accessed January 11, 2011, http://www.greenbiz.com/sites/default/files/document/GreenBiz_Report_Greening _the_Supply_Chain.pdf. Indeed, suppliers need incentives and assurance that their share of the profit will be protected if they innovate to improve the process because maximizing the overall value of the supply chain may reduce value for individual links.Jonathan D. Linton, Robert Klassen, and Vaidyanathan Jayaraman, “Sustainable Supply Chains: An Introduction,” Journal of Operations Management 25, no. 6 (November 2007): 1078. 
For example, a design for disassembly that relies on pieces that snap together may obviate the need for suppliers of adhesives, even as it creates demand for disassembly and remanufacturing services.
Reverse supply chains complicate the overall supply chain, and therefore they need to be carefully crafted and considered in overall product design, production, and distribution. Materials and components recovered from used products need to reenter the same forward supply chain as new materials or components. Hence companies must recover items efficiently, train employees or subcontractors to assess properly the condition of a recovered item and what is salvageable and what is not, and manage their inventory to even out variation in the rate and quality of returned items.V. Daniel R. Guide Jr., Vaidyanathan Jayaraman, Rajesh Srivastava, and W. C. Benton, “Supply-Chain Management for Recoverable Manufacturing Systems,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 125–42; also Nils Rudi, David F. Pyke, and Per Olav Sporsheim, “Product Recovery at the Norwegian National Insurance Administration,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 166–79. They must also balance the availability of salvaged components or recycled materials with the need for new components or materials, especially as certain proprietary parts become unavailable or production processes change. In cases when consumers may want the same item they had before with only minor changes, such as a vehicle, businesses will also have to track individual pieces through disassembly and refurbishment.V. Daniel R. Guide Jr., Vaidyanathan Jayaraman, Rajesh Srivastava, and W. C. Benton, “Supply-Chain Management for Recoverable Manufacturing Systems,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 125–42.
After establishing a green supply chain, companies need to assess its performance. In their 2008 survey of seventy supply-chain executives, Lassar and Gonzalez noted, “Almost 40 percent of the 56 firms that are active with green activities do not have any metrics to measure green/sustainability results in their firms.”Walfried M. Lassar and Adrian Gonzalez, The State of Green Supply Chain Management: Survey Results (Miami, FL: Ryder Center for Supply Chain Management, Florida International University, 2008), accessed January 11, 2011, grci.calpoly.edu/projects/sustaincommworld/pdfs/WP_Florida_Supply_Chain_Mgmt.pdf. Companies with metrics tracked quantities such as fuel use and packaging. Another study corroborates this trend: what metrics companies do have tend to cluster around eco-efficiency indicators, such as packaging used or miles traveled, likely because those are the easiest to observe, quantify, and associate with specific actions.Vesela Veleva, Maureen Hart, Tim Greiner, and Cathy Crumbley, “Indicators for Measuring Environmental Sustainability,” Benchmarking 10, no. 2 (2003): 107–19. Companies can, however, include broader measures such as customer satisfaction. Even then, a company may fall short: a systems-based, health-oriented, green approach to design does not always succeed. Some view Frito-Lay’s SunChips compostable bag, introduced to the market as biodegradable bags became the fastest-growing segment in packaging, as a failure because of the loud noise it made when handled. Since the crinkling of the bags, at up to eighty-five decibels, was comparable to glass breaking or an engine revving, the company went back to the drawing board with this packaging design.
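Eco-efficiency indicators of the kind the surveys describe are simple impact-per-output ratios, which is exactly why they are the easiest metrics to adopt. A minimal sketch using invented shipment data (all four input figures are assumptions):

```python
# Eco-efficiency ratios: environmental impact per unit of output.
# All input figures are invented for illustration.
units_shipped = 120_000   # products delivered this period
fuel_liters   = 85_000    # fleet fuel consumed
packaging_kg  = 14_400    # packaging material used
freight_km    = 310_000   # total distance traveled

# Normalize each impact by output so periods of different
# sales volume can be compared on an equal footing.
fuel_per_unit      = fuel_liters / units_shipped           # L per unit
packaging_per_unit = packaging_kg / units_shipped * 1000   # g per unit
km_per_unit        = freight_km / units_shipped            # km per unit

print(f"Fuel:      {fuel_per_unit:.3f} L/unit")
print(f"Packaging: {packaging_per_unit:.0f} g/unit")
print(f"Distance:  {km_per_unit:.2f} km/unit")
```

Tracking these ratios over time gives a company a baseline; broader measures such as customer satisfaction or supplier labor conditions resist this kind of arithmetic, which helps explain why so many firms stop at eco-efficiency.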
Fading Extrinsic Challenges
Finally, green supply chains had to overcome institutional inertia and confusion. First, large companies with financial and political resources tended to resist change, especially at the outset, because of their large capital and infrastructural investments in the status quo. Walmart’s green initiative, however, appeared to be a turning point, moving other large enterprises toward green supply chains.
Second, in 2009, no official criteria defined a green supply chain. Standards such as ISO 14000 usually focus on a single entity and not the supply chain, while legal requirements often focus on products and ingredients. ISO 14001, the core voluntary set of standards, is used by firms to design an environmental management system that provides internal monitoring and provides practices, procedures, and tools for systematic efforts to improve performance. However, nothing defines how much of the supply chain is required to have ISO 14000 or other certifications to qualify for the green supply chain label. When Home Depot solicited its suppliers for candidates to its Eco Options marketing campaign, one manufacturer praised the plastic handles of its paintbrushes as more environmentally sensitive than wooden handles, while another praised the wooden handles of its paintbrushes as environmentally better than plastic.Clifford Krauss, “At Home Depot, How Green Is That Chainsaw?” New York Times, June 25, 2007, accessed January 11, 2011, www.nytimes.com/2007/06/25/business/25depot.html?_r=1.
The lack of standards could promote individual certification programs, such as the cradle-to-cradle certification provided by McDonough Braungart Design Chemistry, LLC, which implies a corresponding green supply chain. This program, however, is private, largely to protect the confidential business information of its clients to ensure their cooperation, and has therefore been criticized for its lack of transparency.Danielle Sacks, “Green Guru William McDonough Must Change, Demand His Biggest Fans,” Fast Company, February 26, 2009, accessed January 11, 2011, www.fastcompany.com/blog/danielle-sacks/ad-verse-effect/william -mcdonough-must-change; Diana den Held, “‘Criticism on Cradle to Cradle? Right on Schedule,’ Says Michael Braungart,” Duurzaam Gebouwd (blog), March 20, 2009, accessed October 2, 2009, www.duurzaamgebouwd.nl/index.php?pageID=3946&messageID=1936. However, the cradle-to-cradle approach is now being explored in California as a statewide system to encourage safer, less polluting design protocols. In the worst cases, vague standards or opaque processes can lead to charges of “greenwashing,” or exaggerating or fabricating environmental credentials.Melissa Whellams and Chris MacDonald, “What Is Greenwashing, and Why Is It a Problem?” Business Ethics, accessed October 2, 2009, http://www.businessethics.ca/greenwashing. Greenwashing distracts people who are serious about taking care of the environment with counterproductive activities, misinforms the public, and undermines the credibility of more substantial initiatives of others.
Nonetheless, resistance to change and the lack of an official definition reflect extrinsic problems rather than problems intrinsic to the mechanics of green supply chains. Such problems are more about marketing than about function. As green supply chains prove themselves through superior performance, they will likely become more studied, better understood and defined, and more widespread. Firms that understand these issues as strategic can start by examining the inherent risks of leaving their supply chains unexamined and by envisioning a future market position in which a green, differentiated product and brand will grow revenues.
Green Supply Chains Improve Performance
Green supply chains yield a wide range of benefits. They can reduce a company’s negative environmental or social impact, decrease operating costs, increase customer service and sales, promote innovation, and mitigate regulatory risk. The most immediate benefits of green supply chains are reduced environmental harm and operations costs. For example, Fuji Xerox adopted a cradle-to-cradle philosophy that emphasized supporting document services over a life cycle rather than selling photocopiers and forgetting about them. Fuji Xerox leased equipment and recovered 99 percent of materials from used equipment in Asia in 2006, saving \$13 million on new materials, generating an additional \$5.4 million in revenue, and reducing raw material consumption by 2,000 tons at its factories in China.Fuji Xerox Australia, “Fuji Xerox Innovation Makes Business and Environmental Sense,” news release, September 25, 2007, accessed January 11, 2011, www.fujixerox.com.au/about/media/articles/546. Government institutions could also benefit. For example, Norway’s health-care system saved money by refurbishing more medical equipment.Nils Rudi, David F. Pyke, and Per Olav Sporsheim, “Product Recovery at the Norwegian National Insurance Administration,” Interfaces: International Journal of the Institute for Operations Research and the Management Sciences 30, no. 3 (May/June 2000): 166–79. Decreased costs could even accrue to suppliers.Business for Social Responsibility Education Fund, Suppliers’ Perspectives on Greening the Supply Chain (San Francisco: Business for Social Responsibility Education Fund, 2001), accessed January 11, 2011, www.getf.org/file/toolmanager/O16F15429.pdf.
Another benefit of green supply chains was increased innovation, largely because people who had not worked together before began to collaborate, and new challenges elicited new answers. By collaborating with suppliers and designers to design its cradle-to-cradle system, Fuji Xerox saw the opportunity to make material and component improvements. Redesigning a spring and a roller saved the US affiliate approximately \$40 million annually.Corporate Societal Responsibility: Knowledge Learning through Sustainable Global Supply Chain Management, p. 14, accessed April 2, 2011, www.reman.org/pdf/Fuji-Xerox.pdf.
Moreover, green supply chains can lead to improved customer satisfaction and higher sales. Through product recovery programs, Dell increased sales and strengthened its brand reputation for customer satisfaction and corporate citizenship. Dell Asset Recovery Services (ARS) designed a customized solution that quickly recovered 2,300 servers from the Center for Computational Research at the University at Buffalo, SUNY. “That solves two problems for us,” said SUNY’s Tom Furlani. “It helps get rid of the old equipment in a cost-effective way, and it allows us to get new, faster equipment that is under warranty.” In addition to secure destruction of hard drive data, the Dell ARS maintains a zero landfill policy and a zero trash export policy. Unwanted equipment is disassembled into materials that reenter the manufacturing stream.Dell, That’s Refreshing, case study, November 2006, accessed January 11, 2011, www.dell.com/downloads/global/services/suny_ars_casestudy.pdf. This step also placed Dell in a more favorable position with the Basel Action Network, an NGO that targeted the company as contributing to e-waste exports to emerging economies.
Finally, green supply chains mitigate regulatory burdens and litigation risk. With the increasing severity of environmental regulations in different regions of the world and the global scale of today’s supply chains for even simple products (e.g., cloth from Latin America, cut and assembled into a shirt in China, and the product itself sold in Europe), green supply chains play a critical role in the operations strategy of multinational organizations. The consequences of not meeting regulations in a particular location can be major. For instance, Chinese suppliers have suffered from scandals over lead paint in toys and toxins in pet food and powdered milk, costing companies money in recalls and prompting calls for tighter regulation. In 2009, drywall produced in China was implicated in emissions of toxic sulfur compounds in homes built in America between 2004 and 2008, causing problems for homeowners, builders, and state regulatory agencies.Michael Corkery, “Chinese Drywall Cited in Building Woes,” Wall Street Journal, January 12, 2009, accessed January 11, 2011, http://online.wsj.com/article/SB123171862994672097.html; Brian Skoloff and Cain Burdeau, Associated Press, “Chinese Drywall Poses Potential Risks,” US News and World Report, April 11, 2009, accessed January 11, 2011, www.usnews.com/science/articles/2009/04/11/chinese-drywall-poses-potential-risks?PageNr=1.
Conclusion
Green supply chains have arisen in response to multiple, often interwoven problems: environmental degradation, rising prices for energy and raw materials, and global supply chains that link labor and environmental standards in one country with legal and consumer expectations in another. Green supply chains strive to ensure that value creation, rather than risk and waste, accumulates at each step from design to disposal and recovery. They have gained an audience among large and small organizations across cultures, regions, and industries. Managing complex relationships and flows of materials across companies and cultures may pose a key challenge for green supply chains. Nonetheless, those challenges are not insurmountable, and the effort to green a supply chain can provide significant benefits.
KEY TAKEAWAY
• Green and sustainability thinking can improve supply-chain management to save money, improve products, and enhance brands.
EXERCISES
1. Select a common product and identify the many inputs and stages in its production that were required to deliver it to your hands.
2. Now analyze ways to “green” that supply chain; try to think of every possible way to apply sustainability concepts to optimize the supply-chain outcomes.
3. Discuss the barriers you might find in implementing that supply-chain strategy with real suppliers.
4. Go to the Green Design Institute at Carnegie Mellon University (www.eiolca.net) and explore the LCA method.
Learning Objectives
1. Evaluate and explain the conditions under which sustainability strategies succeed.
2. Discuss health challenges that offer market opportunities.
3. Analyze factors that favor sustainability innovation processes.
In our first case we have the opportunity to track Method, an entrepreneurial consumer products company, through two stages of its early growth. The first case presents the company and its unique sustainability strategy, highlighting both the scope of its efforts and unanticipated challenges that arose. Technical notes are provided for background on health threats from exposure to toxic materials in everyday life. The second Method case provides a 2010 update on the company’s activities and its distinctive focus on innovation process. It is preceded by a discussion of toxicity issues intended to highlight Method’s ongoing innovative efforts to differentiate itself as a company that delivers supply-chain solutions to the chemical hazards increasingly on the minds of consumers and scientists.
It was spring 2007, and Method cofounder Adam Lowry was deep in thought over enchiladas at Mercedes, a restaurant a block from his company’s office on Commercial Street in San Francisco. He began to sketch ideas on a piece of paper to sort the issues troubling him. As a company known for environmentally healthy household products with designer brand appeal, Method was eager to develop a biodegradable cleaning cloth. Sourcing polylactic acid (PLA) cloth from China had not been in his plans, but every US PLA manufacturer Lowry had talked to told him it was impossible for them to create the dry floor dusting cloth he wanted. There was also a genetic modification issue. US PLA producers did not screen their corn plant feedstock to determine whether it came from genetically modified organisms (GMOs). However, Lowry wondered, weren’t any bio-based and biodegradable materials a better alternative than oil-based polyester, the material used by the competition? Yet certain major retailers were unwilling to stock products that weren’t certifiably GMO-free. It was hard enough to manage a fast-growing new company, but why did some people seem willing to stop progress while they held out for perfection on the environmental front? The naysayers made Lowry think carefully about what it meant to be true to the environmental philosophy that formed the backbone of his business. He had often said that Method’s business was to change the way business was conducted. But where should the company draw the line?Andrea Larson, Method: Entrepreneurial Innovation, Health, Environment, and Sustainable Business Design, UVA-ENT-0099 (Charlottesville: Darden Business Publishing, University of Virginia, March 26, 2007). All quotations and references in this section, unless otherwise noted, come from this case.
As a hot new company that had received widespread publicity for its dedication to environmental values and healthy, clean production, use, and disposal of all its products, Method had set high standards. In a relatively short time, it had created a model for excellence in integrating health and environmental concerns into corporate strategy. From only a germ of an idea in 1999, Method had experienced explosive growth during the intervening years. The company proved that home cleaning products could evolve from toxic substances that had to be locked away from children and hidden in cupboards to nice-smelling, stylishly packaged, biodegradable, benign products that consumers proudly displayed on their countertops. In 2006, Inc. magazine listed Method at number seven of the five hundred fastest and most successfully growing firms in the United States. Method stood out in many ways from the typical entrepreneurial firm.
Leveraging only \$300,000 in start-up capital, twentysomethings Adam Lowry and Eric Ryan caused small-scale “creative destruction” across a \$17 billion industry in the United States by emphasizing the health, environmental, and emotional aspects of the most mundane of products: household cleaners. The company’s differentiating characteristic? Lowry and Ryan assumed from the start that incorporating ecological and human health concerns into corporate strategy was simply good business. By 2007, Method was growing rapidly and was profitable with forty-five employees and annual revenues of more than \$50 million. Its products were available in well-known distribution channels (drugstores, department stores, supermarkets, and other retail outlets) in the United States, Canada, Australia, and the United Kingdom. Customers embraced Method’s products, giving the company live feedback on its website, praising the firm and providing tips for the future. They were a loyal crowd and a signal that the time was right for this kind of business model. They even requested T-shirts featuring the Method brand, and the company responded by offering two different shirts: one that said, “Cleans like a mother” and another that simply said, “Method,” both with the company slogan—”People against dirty”—on the back. A baseball cap was also available.
Indeed, “People against dirty” was Method’s stated mission. The company website explains it this way: “Dirty means the toxic chemicals that make up many household products, it means polluting our land with nonrecyclable materials, it means testing products on innocent animals.…These things are dirty and we’re against that.” Under Lowry and Ryan’s leadership, Method shook up the monolithic and staid cleaning-products markets by delivering high-performance products that appealed to consumers from a price, design, health, and ecological perspective—simultaneously. From the original offering of a clear cleaning spray, Method’s product line had expanded by 2007 to some 125 home-care products, including dishwashing liquids and hand and body soaps. The “aircare” line, an array of air fresheners housed in innovatively designed dispensers, extended the product offerings in 2006, and the O-mop was added in 2007.
All products were made in alignment with Method’s strategy. They had to be biodegradable; contain no propellants, aerosols, phosphates, or chlorine bleach; and be packaged in minimal and recyclable materials. Method used its product formulation, eye-catching design, and a lean outsourcing network of fifty suppliers to remain nimble and quick to market while building significant brand loyalty.
Method sold its products in the United States through several national and regional groceries, but one of the company’s key relationships was with Target, the nation’s number-two mass retailer in 2007. Through Target’s 1,400 stores in 47 states, Method reached consumers across the United States. International sales were expanding, and the firm was regularly in discussion with new distribution channels.
An Upstart Innovator in an Industry of Giants
The US market for soaps and cleaning products did not seem a likely industry for innovation and environmental consciousness. It was dominated by corporate giants, many of which were integral to its founding. Although the soap and cleaning product industry was fragmented around the edges, with a typical supermarket stocking up to forty brands, market share was dominated by companies such as SC Johnson, Procter & Gamble (P&G), Unilever, and Colgate-Palmolive.
To put Method’s position in perspective, its total annual sales were approximately 10 percent of Procter & Gamble’s 2006 sales in dish detergent alone (\$317.6 million). P&G’s total annual sales in the category were more than \$1 billion. Furthermore, the market for cleaning products was under steady cost pressure from private-label brands, rising raw materials prices, and consumers’ view of these products as commodities. Companies that reported positive numbers in the segment between 2000 and 2006 did so by cutting costs and consolidating operations. Startups such as Seventh Generation attempted to penetrate the mass market with “natural” products, but those products were largely relegated to health food stores and chains such as Whole Foods. For Method to have obtained any foothold in this heavily consolidated segment dominated by market giants seemed improbable at best. But for Method founders Lowry and Ryan, the massive scale and cost focus of their competitors offered an opportunity.
Method to Their Madness
“You have all your domestic experiences in that house or wherever you live,” Ryan explained. And so, “from the furniture you buy to your kitchenware, you put a lot of thought and emotion into what you put in that space. Yet the commodity products that you use to maintain this very important space tend to be uninteresting, ugly, and toxic—and you hide them away.”Andrea Larson, Method: Entrepreneurial Innovation, Health, Environment, and Sustainable Business Design, UVA-ENT-0099 (Charlottesville: Darden Business Publishing, University of Virginia, March 26, 2007). Lowry and Ryan didn’t understand why it had to be that way.
They decided to take the opposite approach; if they could create products that were harmless to humans and the natural environment and were attractively designed with interesting colors and aromas, they could disrupt an industry populated with dinosaurs. By differentiating themselves from the competition in a significant and meaningful way, Lowry and Ryan hoped to offer an attractive alternative that also reduced the company’s ecological footprint and had a positive environmental impact. “It’s green clean for the mainstream,” said Lowry, “which wouldn’t happen if it wasn’t cool.”Andrea Larson, Method: Entrepreneurial Innovation, Health, Environment, and Sustainable Business Design, UVA-ENT-0099 (Charlottesville: Darden Business Publishing, University of Virginia, March 26, 2007).
To make green cool, Method took a two-pronged approach. First, it formulated new product mixtures that performed as well as leading brands while minimizing environmental and health impacts. Cleaning product manufacturers had been the target of environmental complaints since the 1950s, when the federal government enacted the Federal Water Pollution Control Act in part to address the foaming of streams caused by surfactants, chemicals used in soaps and detergents to increase cleaning power. In addition to surfactants, household cleaners often contained phosphates, chemicals used as water softeners that also acted as plant nutrients, providing an abundant food source for algae. Fast-growing algae resulted in algal blooms, which depleted oxygen levels and starved aquatic life. Water sources contaminated with phosphates were also toxic for animals to drink. Another environmentally problematic compound in cleaning products was chlorine bleach, which when released into the environment could react with other substances to create toxic compounds. According to the Method website, “A major problem with most household cleaners is that they biodegrade slowly, leading to an accumulation of toxins in the environment. The higher the concentration of toxins, the more dangerous they are to humans, animals, and plant life. The key is to create products that biodegrade into their natural components quickly and safely.”Andrea Larson, Method: Entrepreneurial Innovation, Health, Environment, and Sustainable Business Design, UVA-ENT-0099 (Charlottesville: Darden Business Publishing, University of Virginia, March 26, 2007).
With a degree in chemical engineering from Stanford University, experience researching “green” plastics, and a stint at a climate-change think tank, Lowry saw these issues as opportunities.
Method counted on the competition’s seeing environmental and health issues as “problems.” Doing so allowed Method to seize competitive advantage through designing out human health threats and ecological impacts from the start, while their larger competitors struggled to deal with increasing legislative and public image pressures. Method products sold at a slight premium to compensate for the extra effort. “I knew as a chemical engineer that there was no reason we couldn’t design products that were nontoxic and used natural ingredients,” Lowry said. “It would be more expensive to do it that way. But that was okay as long as we created a brand that had a ‘premiumness’ about it, where our margins would support our extra investments in product development and high-quality ingredients.”Andrea Larson, Method: Entrepreneurial Innovation, Health, Environment, and Sustainable Business Design, UVA-ENT-0099 (Charlottesville: Darden Business Publishing, University of Virginia, March 26, 2007).
The second prong of Method’s attack on the entrenched cleaning products industry was to utilize design and brand to appeal to consumers tired of the same old products. In an industry rife with destructive price competition, Method realized it would have to be different. The founders believed that their competition was so focused on price that “they weren’t able to invest in fragrance or interesting packaging or design.” Lowry explained, “Our idea was to turn that reality on its head and come up with products that absolutely could connect with the emotion of the home. We wanted to make these products more like ‘home accessories.’ We believed there was an opportunity to really reinvent, and in the end, change the competitive landscape.”Andrea Larson, Method: Entrepreneurial Innovation, Health, Environment, and Sustainable Business Design, UVA-ENT-0099 (Charlottesville: Darden Business Publishing, University of Virginia, March 26, 2007).
By focusing their marketing and packaging as the solution “against dirty,” they tapped into consumers’ disquiet with the ingredients in their household cleaners. Through packaging that stood out from the rest, they created the opportunity to deliver the environmental and health message of the products’ ingredients.
Design of packaging to deliver that message was integral to Method’s success from its first sale. Method’s home-brewed cleaning formulas for kitchen, shower, bath, and glass surfaces were originally packaged in clear bottles that stood out on a shelf. “The manager of the store just liked the way the packaging looked,” said David Bennett, the co-owner of Mollie Stones, a San Francisco Bay–area grocer that was Method’s first retail customer. “It looked like an upscale product that would meet our consumer demands, so we went with it.”Andrea Larson, Method: Entrepreneurial Innovation, Health, Environment, and Sustainable Business Design, UVA-ENT-0099 (Charlottesville: Darden Business Publishing, University of Virginia, March 26, 2007).
As design continued to be a key element of Method’s appeal, the company recruited Karim Rashid, a renowned industrial designer who had worked with Prada and Armani. Rashid was responsible for bringing a heightened sense of style to Method’s packaging while continuing to focus on environmental impact. This led to the use of easily recycled number one and number two plastics (the types of plastic most commonly accepted by municipal recycling centers). Method’s approach seemed to represent a younger generation’s more holistic mental model. This small firm seemed to provide a window into a future where health, environmental, and what were increasingly called “sustainability issues” would be assumed as part of business strategy and product design.
Wipes, the O-mop, and PLA Material
PLA was an innovative and relatively new plastic material derived from plants such as corn, rice, beets, and other starch-based agricultural crops. PLA biodegraded at the high temperatures and humidity levels found in most composting processes. NatureWorks operated the first large-scale plant in the United States to produce PLA in resin (pellet) form, based on milled material made from farm-supplied corn and corn waste. The resin pellets went to a fiber manufacturer that made bales; those bales of PLA material went next to the nonwoven cloth manufacturer, which converted them into giant rolls of nonwoven cloth. Next, a converter took the bulk nonwoven cloth, cut it into shapes, and packaged it according to the specifications of a customer such as Method. When NatureWorks first began operations, demand was limited. That picture changed quickly between 2004 and 2006, and by 2007 the plant could not produce its PLA feedstock resins fast enough to meet worldwide demand. PLA came out of the facility in pellet form and was melted, extruded, spun, and otherwise manipulated by converters at different steps of the supply chain into a virtually endless spectrum of materials for different applications across a wide range of product categories.
As a replacement for ubiquitous oil-based plastic feedstock, PLA promised a departure from the petroleum-based plastic materials that had come to dominate since synthetic plastics were first developed in volume after World War II. PLA had proved itself a particularly high-performing and cost-effective raw material that was well suited as a substitute for polyethylene terephthalate (PET) in many applications. PET was the oil-based polymer known generically as polyester and used extensively in packaging, films, and fibers for textiles and clothing.
The competition’s wipes and mop heads were made of petroleum-based, nonbiodegradable plastic material, typically polyester or polypropylene. Microfiber was fiber with strands measuring less than one denier, a unit of weight used to describe extremely fine filaments and equal to a yarn weighing one gram per nine thousand meters. Microfiber and the denier unit were first associated with material in women’s hosiery, but technology advances permitted polyester microfiber production for very fine fiber applications, and just as microfiber had become common in clothing lines, it was also adopted as a more effective wiping and cleaning material. Whether made from corn or oil, microfiber, used by most companies selling residential cleaning wipes by 2006, made an excellent cleaning cloth: its structure enabled the fiber surface to pick up dirt and dust more effectively than conventional materials and methods, and the wipes could be washed and reused, providing greater durability than alternative products that were typically thrown away immediately after use.
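The denier arithmetic above is simple enough to sketch. The helper below (an illustrative Python snippet; the function names are hypothetical, not from the source) converts a filament’s mass and length into denier and flags strands fine enough to count as microfiber:

```python
# Denier: the mass in grams of 9,000 meters of a filament.
DENIER_REFERENCE_LENGTH_M = 9_000

def denier(mass_g: float, length_m: float) -> float:
    """Linear density in denier for a filament of the given mass and length."""
    return mass_g * DENIER_REFERENCE_LENGTH_M / length_m

def is_microfiber(mass_g: float, length_m: float) -> bool:
    """Microfiber strands measure less than one denier."""
    return denier(mass_g, length_m) < 1.0

# A yarn weighing one gram per nine thousand meters is exactly 1 denier.
print(denier(1.0, 9_000))         # 1.0
# A finer strand, 0.4 g per 9,000 m, qualifies as microfiber.
print(is_microfiber(0.4, 9_000))  # True
```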
Consistent with Method’s environmental and sustainability philosophy, Lowry wanted to use bio-based materials, specifically PLA nonwoven cloth, for the dry floor dusting product. Ultimately he wanted PLA to be the basis for all fibers used, both nonwoven disposable cloth and reusable woven microfiber. If customers weren’t grabbed by the marketing message that the mop was sexy and hip (a message consistent with Method’s playful tone), they might be pulled in by the ergonomic O-mop’s more effective, biotech-based, and nontoxic floor cleaning.
Lowry knew most disposable wipes ended up in landfills, not compost piles, even with their extended life. So the company supported municipal recycling and composting infrastructure development in an effort to encourage cradle-to-cradleCradle-to-cradle was an increasingly popular term that referred to a product cycle in which materials could be manufactured, used, then broken down and used again with no loss of quality; for more information on this concept, see William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002). resource use, or at least raise awareness and encourage behavior in that direction. Method estimated that eighty-three thousand tons of “wipe” material made of polyester or polypropylene plastic was ending up in landfills annually, enough to fill nine thousand tractor-trailers. If using PLA could reduce oil feedstock use even a little, he reasoned, it was an improvement. Even if the PLA fiber went to landfills, where temperature and humidity never reached the ideal composting levels that would quickly and thoroughly break it down, it would still decompose safely, perhaps after one to two months, unlike oil-based fibers, which could remain in landfill disposal sites in the same condition for thousands of years.
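Method’s estimate above implies a rough per-trailer load. As a quick back-of-the-envelope check (a sketch; the per-trailer figure is derived here, not stated in the source):

```python
# Method's estimate: 83,000 tons of polyester/polypropylene wipe material
# landfilled annually, enough to fill 9,000 tractor-trailers.
tons_per_year = 83_000
trailer_count = 9_000

tons_per_trailer = tons_per_year / trailer_count
print(f"{tons_per_trailer:.1f} tons of wipes per tractor-trailer")  # ~9.2 tons
```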
The market for bio-based plastic materials had taken off by 2007, but Lowry had had no luck finding a US manufacturer to create a PLA-based fabric suitable for the white, nonwoven, dry-floor duster cloth used with the O-mop. He had just talked with the last on his list of PLA manufacturers, and the answer was no. They had all told him it couldn’t be done. The material was too brittle, they couldn’t process it, it wouldn’t run on their machines, and the strands were too weak. In short, PLA nonwoven cloth for this application was technologically impossible.
Lowry picked up the phone and placed a call to a company he knew in China—a departure from business as usual given that 90 percent of Method’s inputs were sourced in the United States. Chinese suppliers often were excellent, but domestic sourcing was preferable to avoid the high transportation costs of moving product long distances. Typically the farther the transport requirement, the greater the fossil fuel use, so the choice seemed inconsistent with the firm’s sustainability approach. But Lowry was sure the dry-floor dusting cloth could be made with PLA resins, and the Chinese manufacturer confirmed it. Lowry placed the order. A Taiwanese fiber manufacturer would make the bales and send them to the Chinese nonwoven cloth manufacturer that would pass on the cloth to a nearby converter that would in turn cut and package it to meet Method’s needs. Lowry knew the suppliers were good and reliable and that the product would arrive promptly. Perhaps all Method’s PLA products would need to come from China. But was sourcing from the other side of the world “sustainable” in the sense that he and Ryan tried to apply sustainability principles to the company’s operations?
The other issue on Lowry’s mind was that Method’s products could be deemed unacceptable in certain distribution channels that would not tolerate any genetically modified organisms (GMOs) in their products. PLA was produced from agricultural material (often corn or cornfield waste material) that was brought by farmers to a centrally located milling plant that converted it and separated out the components from which PLA was made. There was no monitoring of the corn coming into the milling facility; thus there was no guarantee that all inputs to the PLA resin-producing process were free of GMOs. If Lowry used PLA, it meant certain large and reputable buyers would refuse to put Method products on their shelves. Even so, to Lowry, it seemed preferable to substitute PLA for petroleum-derived products and compromise on the GMO issue for the time being. After a particularly discouraging conversation with a company that declined to do business with Method until it agreed to stop using GMO agricultural inputs, he decided to write out his thoughts in an essay, both to sort them out for himself and to draft a position paper that he could later post on the Method website.
As our knowledge base grows regarding exposure to toxins, we become more informed and better equipped to find solutions. We are capable of learning and absorbing feedback from the environment and our bodies. Lead was removed from gasoline in the United States, and extensive efforts were made to remove lead-based paint from older homes, thereby significantly reducing exposure to lead (a neurotoxin), particularly for children. Chlorofluorocarbons (CFCs), known to break down upper-atmosphere ozone, were banned, enabling recovery of the ozone layer and, over time, a reduction of the ozone hole that formed every year over parts of the Southern Hemisphere. As a species, we act, we receive feedback, we adjust and adapt. We are beginning to learn and adapt with respect to toxic chemical exposure. However, materials toxicity and contamination are only starting to receive attention and remain secondary in the media due to the current focus on climate and energy issues (topics that also require attention to materials and toxic inputs and outputs). Nevertheless, materials issues will be acknowledged and addressed. The pattern will be similar to other arenas that challenge human ingenuity: most people will be overwhelmed by the scale of the problem, while others, the entrepreneurial individuals (and the ventures they create), will drive innovation to create benign alternatives.See http://www.warnerbabcock.com for an example of a company committed to change.
The next two sections provide additional background information on toxic substances. They are followed by a second case on Method that demonstrates how forward-thinking companies work on an ongoing basis to eliminate questionable chemical compounds from their products through innovative processes that lead to breakthrough designs and safer products in the marketplace.
Toxic Chemicals: Responding to Challenges and Opportunities
In the early 1960s, US scientist and writer Rachel Carson spoke about the risks of toxic chemicals: “We are subjecting whole populations to exposure to chemicals which animal experiments have proved to be extremely poisonous and in many cases cumulative in their effect. These exposures now begin at or before birth and—unless we change our methods—continue through the lifetime of those now living. No one knows what the results will be, because we have no previous experience to guide us.”Rachel Carson, Silent Spring (New York: Houghton Mifflin, 1962).
We have made progress in the face of the abundant evidence that increases in cancer and other disease rates are the result of exposure to chemicals. The US Environmental Protection Agency (EPA) was established in 1970 partly in response to Carson and others who foresaw the dangers of society’s ill-informed experimentation with toxic chemicals. Similar agencies now exist in most countries and within the United Nations. Environmental and health nongovernmental organizations (NGOs) have become powerful change agents. Federal and state laws and international agreements have been passed banning or severely restricting the manufacture and use of certain exceptionally dangerous and persistent chemicals. However, progress is slow and public awareness insufficient. We remain vulnerable to both existing chemicals and hundreds of new ones that are invented and introduced into commerce daily.Andrea Larson, Darden Business School technical note, Toxic Chemicals: Responding to Challenges and Opportunities, UVA-ENT-0043 (Charlottesville: Darden Business Publishing, University of Virginia, 2004). Information presented in this section comes from this study.
More than thirty years after Carson’s book Silent Spring was published, scientists Theo Colborn and John Peterson Myers and a coauthor renewed the warning about widespread molecular toxins in the book Our Stolen Future (1996):
The 20th century marks a true watershed in the relationship between humans and the earth. The unprecedented and awesome power of science and technology, combined with the sheer number of people living on the planet, has transformed the scale of our impact from local and regional to global. With that transformation, we have been altering the fundamental systems that support life. These alterations amount to a great global experiment—with humanity and all life on earth as the unwitting subjects. Synthetic chemicals have been a major force in these alterations. Through the creation and release of billions of pounds of man-made chemicals over the past half-century, we have been making broad-scale changes to the earth’s atmosphere and even in the chemistry of our own bodies.…The global scale of the experiment makes it extremely difficult to assess the effects. Over the past fifty years, synthetic chemicals have become so pervasive in the environment and in our bodies that it is no longer possible to define a normal, unaltered human physiology. There is no clean, uncontaminated place, nor any human being who hasn’t acquired a considerable load of persistent hormone-disrupting chemicals. In this experiment, we are all guinea pigs and, to make matters worse, we have no controls to help us understand what these chemicals are doing.Theo Colborn, Dianne Dumanoski, and John Peterson Myers, Our Stolen Future (New York: Penguin Group, 1996), 239–40.
Synthetic chemicals are everywhere—in the plastics used in packaging, cars, toys, clothing, and electronics and in glues, coatings, fertilizers, lubricants, fuels, and pesticides. We make or “synthesize” chemicals from elements present in nature. Many “organic”“Organic” chemicals are chemicals that have a carbon backbone. Some occur naturally and some are synthetic. There is no connection between the term organic as it is used in chemistry and the use of the word in phrases such as organic food or organic farming. or carbon-based chemicals are derived from petroleum. We use synthetics to serve many purposes that natural materials cannot serve as well, and industry and consumers often save money in the process. Without synthetics, we wouldn’t have computers, television, and most drugs and medical equipment. Synthetic chemicals, however, have dangers as well as benefits. Those dangers are often unknown or even unsuspected when a chemical is first introduced. They may become evident only after thousands or even millions of pounds of that chemical have been released into the environment through industrial and agricultural processes and energy generation, or as products, emissions, or other wastes.
Synthetic chemicals’ detrimental environmental and health consequences are unintentional. The pesticide dichloro-diphenyl-trichloroethane (DDT), for example, was never intended to kill bald eagles or robins.Rachel Carson, Silent Spring (New York: Houghton Mifflin, 1962), 118–22. The chlorine bleaching process used in paper mills wasn’t meant to disrupt the endocrine systems of fish downstream.Ann Platt McGinn, Why Poison Ourselves? A Precautionary Approach to Synthetic Chemicals, Worldwatch Paper #153 (Washington, DC: Worldwatch Institute, November 2000), 22. Polychlorinated biphenyls (PCBs) and pesticide residues weren’t supposed to end up in human breast milk, nor were they supposed to affect the immune and endocrine systems or possibly cause sperm decline and even infertility in men.Theo Colborn, Dianne Dumanoski, and John Peterson Myers, Our Stolen Future (New York: Penguin Group, 1996), 178.
History
Synthetic chemicals were first produced in laboratories during the nineteenth century. DDT was invented in 1874 in Germany and began its infamous career as a pesticide in the 1930s. Before World War II, pesticides consisted mainly of metals such as arsenic, copper, lead, manganese, and zinc and of compounds found in plants such as rotenone, nicotine sulfate, and pyrethrum. Plastics from cellulose were first created in the 1890s. Beginning in about 1900, synthetic plastics produced from oil began to find their way into industry. Polyvinyl chloride (a.k.a. “vinyl” or PVC) was discovered in the 1920s. PCBs were introduced in the 1920s. Steady progress through the early twentieth century led to rapid breakthroughs during the World War II years and the creation of thousands of new chemicals every year since. Some toxic chemicals are not created intentionally. Dioxins, for example, are by-products from chlorine-product manufacturing, combustion (especially of plastics), and paper bleaching.Ann Platt McGinn, Why Poison Ourselves? A Precautionary Approach to Synthetic Chemicals, Worldwatch Paper #153 (Washington, DC: Worldwatch Institute, November 2000), 9.
For most people, it would be hard to deny the benefits of the chemical era. Pharmaceuticals, plastics, semiconductors, disinfectants, and food preservatives are just a few of the many synthetic chemical–based conveniences on which we have come to depend. However, rather like the famous story of the sorcerer’s apprentice, the junior-level alchemist who knows enough to unleash the forces of magic but not enough to control them, we have the capacity to create a vast array of products with synthetic chemicals but are politically and technologically constrained in our ability to cope with the pollution and wastes we create along the way.
The chemists, physicists, engineers, and corporations who brought us the “green revolution” in agriculture, plastics, fuel for our vehicles, microchips, and myriad other useful products have also given us many unintended consequences. Even if you eat organic foods, prefer natural wood and leather furniture, and wear only organic cotton and wool clothing, the house you live in, the car you drive, and nearly everything else that you consume is dependent on synthetic chemicals at some point in its life cycle.
Impacts
Hazards associated with toxic ingredients in pesticides, solvents, lubricants, plastics, fuels, exhaust gases, cleaning fluids, and hundreds of other consumer and industrial substances are generally thought of in terms of impacts on human health, wildlife, and ecosystems. Human health impacts from toxic synthetic chemicals range from minor skin irritations and sinus conditions to chronic asthma, severe nervous system disorders, respiratory illnesses, cancers, and immune system dysfunction. Table 6.1 shows some classes of chemicals known to cause cancer in the workplace.Peter H. Raven and George B. Johnson, Biology, 5th ed. (New York: McGraw Hill, 1999), 342, table 17.3.
Table 6.1 Chemical Carcinogens in the Workplace
| Exposure | Chemical | Cancer | Workers at Risk |
|---|---|---|---|
| Common | Benzene | Myelogenous leukemia | Painters, dye users, furniture finishers |
| Common | Diesel exhaust | Lung | Railroad and bus-garage workers |
| Common | Mineral oils | Skin | Metal machining |
| Common | Pesticides | Lung | Sprayers |
| Common | Cigarette tar | Lung | Smokers |
| Uncommon | Asbestos | Mesothelioma, lung | Brake-lining and insulation workers |
| Uncommon | Synthetic mineral fibers | Lung | Wall and pipe insulation installers; duct-wrapping workers |
| Uncommon | Hair dyes | Bladder | Hairdressers and barbers |
| Uncommon | Paint | Lung | Painters |
| Uncommon | PCBs | Liver, skin | Hydraulic fluids and lubricants workers |
| Uncommon | Soot | Skin | Chimney sweeps, bricklayers, firefighters |
| Rare | Arsenic | Lung, skin | Insecticide/herbicide sprayers; tanners; oil refiners |
| Rare | Formaldehyde | Nose | Hospital and lab workers; wood, paper mill workers |
Source: Andrea Larson, Darden Business School technical note, Toxic Chemicals: Responding to Challenges and Opportunities, UVA-ENT-0043 (Charlottesville: Darden Business Publishing, University of Virginia, 2004).
Ecological Impacts
Wildlife and ecosystems are often impaired by toxic chemical exposure long before we are aware that any damage has been done. In the mid-1980s, scientists found that the alligators in central Florida’s Lake Apopka were born with faulty reproductive systems following an accidental spill from the Tower Chemical Company more than ten years earlier. In 1998, farmland near the lake was allowed to flood as part of a wetland restoration project. Years of pesticide-intensive farming had taken its toll. Vast numbers of fish-eating birds such as herons and egrets died as toxic chemicals from flooded agricultural fields moved up the food chain from algae and small aquatic animals to the amphibians and fish species the birds ate. By the time the birds consumed the chemicals, they had bioaccumulated to concentrations that caused acute poisoning.Ted Williams, “Lessons from Lake Apopka,” Audubon, July–August 1999, 64–72.
Polar bears also are suffering from bioaccumulation of toxins, but their pollutants come from thousands of miles away, carried by ocean and air currents. The toxins are concentrated through the food chain until prey species such as seals have millions of times the amount of heavy metal or persistent organic chemical that is found in the water.Theo Colborn, Dianne Dumanoski, and John Peterson Myers, Our Stolen Future (New York: Penguin Group, 1996), 88–91.
Virtually no place on earth is free from contamination by synthetic chemicals. They have been found in water, air, and human beings all over the globe. Some of the highest concentrations have been found near the Arctic Circle in the breast milk of indigenous people.Theo Colborn, Dianne Dumanoski, and John Peterson Myers, Our Stolen Future (New York: Penguin Group, 1996), 107. Some lakes in Norway, Sweden, and northern Canada are essentially dead from acid rain caused by power plants hundreds of miles away.G. Tyler Miller, Living in the Environment, 10th ed. (Belmont, CA: Wadsworth, 1998), 481. Populations of amphibians, long considered an indicator species for pollution, are declining all over the world, even in remote Amazonian forests, in part because of pesticides and other pollutants.Ashley Mattoon, “Deciphering Amphibian Declines,” in State of the World 2001 (Washington, DC: Worldwatch Institute, 2001), 63–82, accessed January 11, 2011, www.globalchange.umich.edu/gctext/Inquiries/Module%20Activities/State%20of%20the%20World/Amphibian%20Declines.pdf.
Tests for synthetic chemicals consistently find them in humans. For example, plastic additives providing flexibility, such as phthalates, are known for their endocrine-disrupting potential; they pass from tubing and bags used in intravenous medical preparations into the patients attached to them.Our Stolen Future, “About Phthalates,” accessed January 30, 2011, www.ourstolenfuture.org/newscience/oncompounds/phthalates/phthalates.htm. The same chemicals may end up in babies’ mouths when they chew on a soft plastic toy.Our Stolen Future, “About Phthalates,” accessed January 30, 2011, www.ourstolenfuture.org/newscience/oncompounds/phthalates/phthalates.htm. Window blinds and other hard plastic products sometimes contain lead. Wells and municipal water supplies contain varying concentrations of chemical contaminants. It may be indicative of the complexity of testing for, and guarding against, hazardous pollutants in water supplies that the US EPA sets drinking water standards for only thirty-three of the hundreds of pesticides in current use.Payal Sampat, Deep Trouble: The Hidden Threat of Groundwater Pollution, Worldwatch Paper #154 (Washington, DC: Worldwatch Institute, 2000), 27.
How Chemicals Cause Damage
Between fifty thousand and one hundred thousand synthetic chemicals are in commercial use, with more entering commerce every day.Ann Platt McGinn, Why Poison Ourselves? A Precautionary Approach to Synthetic Chemicals, Worldwatch Paper #153 (Washington, DC: Worldwatch Institute, November 2000), 7. The problem is that some of those chemicals cause illness or death to people, animals, and plants. Some, such as chemicals used in warfare and pesticides, were intended to kill or impair specific organisms, but the bulk of the harm from synthetic chemicals is unintended. Many of the consequences of our great experiment in chemical production and use have come as surprises to the scientists who created them.
Bioaccumulation
Traces of persistent synthetic chemicals are found in animals, with especially high concentrations at the top of the food chain. In a process known as bioaccumulation, persistent toxic wastes like PCBs, present in water and sediments, are eaten by phytoplankton and zooplankton that store them at about 250 and 500 times their ambient concentration. Those tiny creatures are in turn eaten by slightly larger animals such as microscopic shrimp, building PCB levels to tens of thousands of times that of the surrounding water. The shrimp are consumed by animals such as small fish, in whose tissues PCB concentrations may reach hundreds of thousands of times ambient levels. A larger fish eats the smaller fish and stores PCBs in concentrations millions of times higher. A top predator, such as a gull or a fish-eating eagle, eats the fish, accumulating up to twenty-five million times the original PCB concentration level. Finally, the chemical reaches a concentration where toxicity becomes manifest, and the gull can no longer produce viable offspring.Theo Colborn, Dianne Dumanoski, and John Peterson Myers, Our Stolen Future (New York: Penguin Group, 1996), 27. Human beings are not exempt from chemical bioaccumulation. Chemical pollutants are found in virtually all humans: in our blood, fat tissues, and breast milk. The US Centers for Disease Control reports on pollutants present in human bodies, describing the “body burden” of accumulated chemicals.
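The step-by-step multiplication described above can be sketched numerically. The following is a minimal illustration, assuming a hypothetical ambient PCB concentration and rounding the text’s rough concentration factors to single placeholder values (real factors vary widely by ecosystem and species):

```python
# Sketch of PCB bioaccumulation up a food chain, using the rough
# multipliers described in the text. The ambient concentration and
# the intermediate factors are illustrative assumptions, not data.

ambient_pcb = 1e-9  # ambient PCB concentration in water (arbitrary units)

# Approximate concentration factors relative to the surrounding water.
food_chain = [
    ("plankton", 500),          # ~250-500x ambient
    ("shrimp", 45_000),         # tens of thousands of times ambient
    ("small fish", 500_000),    # hundreds of thousands of times ambient
    ("large fish", 5_000_000),  # millions of times ambient
    ("gull", 25_000_000),       # up to ~25 million times ambient
]

for organism, factor in food_chain:
    concentration = ambient_pcb * factor
    print(f"{organism:>10}: {concentration:.2e} ({factor:,}x ambient)")
```

The point of the sketch is simply that each trophic step multiplies the previous step’s concentration, so even vanishingly small ambient levels can reach biologically significant concentrations in top predators.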
Unfortunately, the old adage that “the dose makes the poison” doesn’t always apply. That belief assumes that the lower the dose, the lower the adverse effect. We now find that some chemicals stimulate cellular change at very low and at high exposure levels, while little influence is discernible at midrange exposures. Some chemicals, including tetraethyl lead, many pesticides, and other persistent organic pollutants (POPs), are known to cause reproductive and developmental problems before birth and during the first few years of life. Those impacts may occur even at concentrations so small that they are measured in parts per trillion. For that reason, the EPA works under the assumption that there is no safe exposure level for chemicals classed as probable human carcinogens.
Impacts on Children and Pregnant Women
The study of chemical threats to children’s health is still in an early stage. The EPA created the Office of Children’s Health Protection in 1997 in recognition of the need to address risks to children that are potentially different from risks to adults.
EPA’s traditional method of setting human health protection standards has relied almost exclusively on the assessment of risks to adults. That kind of broad focus is understandable, given how little was understood about environmental risk before 1970. It was assumed that people were comparable in terms of their response to exposures to pollution. As we learned more about the effects of environmental contaminants on human health, the differences among subsets of the population, particularly differences among children and adults, began to emerge.
A child’s nervous system, reproductive organs, and immune system grow and develop rapidly during the first months and years of life. As organ structures develop, vital connections between cells are established. Those delicate developmental processes in children may easily and irreversibly be disrupted by toxic environmental substances such as lead.
Neurotoxins that may have only a temporary ill effect on an adult brain can cause enduring damage to a child’s developing brain. The immaturity of children’s internal systems, especially in the first few months of life, affects their ability to neutralize and rid their bodies of certain toxics. If cells in the developing brain are destroyed by lead, mercury, or other neurotoxic chemicals, or if vital connections between nerve cells fail to form, the damage is likely to be permanent and irreversible.
Increasing Regulations
Rapidly expanding scientific understanding of chemicals and their impacts has resulted in closer regulatory oversight.
The EPA has faced an embarrassing backlog of chemical risk assessments for many years. In 1998, the agency developed a system for high production volume (HPV) chemicals. The program was intended to move testing forward through voluntary cooperation from industry in assessing approximately three thousand chemicals produced in volumes of one million pounds per year or more. The EPA-sponsored national computerized database known as the Toxic Release Inventory (TRI) tracks toxic chemicals that are being used, manufactured, treated, transported, or released into the environment. Section 313 of the Emergency Planning and Community Right-to-Know Act (EPCRA) of 1986 specifically requires manufacturers to report releases of six hundred designated toxic chemicals into the environment. The reports are submitted to the EPA and state governments. EPA compiles this data in the online, publicly accessible TRI.US Environmental Protection Agency, “Toxics Release Inventory (TRI) Program,” accessed January 31, 2011, http://www.epa.gov/tri.
There are five end points for the screening tests: acute toxicity, chronic toxicity, mutagenicity, ecotoxicity, and environmental fate. Of the chemicals required for testing under the TRI, only 55 percent, or about 680, have been tested.US Environmental Protection Agency, “Toxics Release Inventory (TRI) Program, TRI Chemical List,” accessed January 30, 2011, www.epa.gov/tri/trichemicals. Seven percent of all other chemicals have complete test data. Only 25 percent of the 491 chemicals examined by the EPA because of their use in consumer products brought into the home and used by children and families have data. Of the three thousand HPV chemicals imported into or produced in the United States at over one million pounds annually, 43 percent have no basic toxicity testing data available. The government depends on companies to report; however, no testing data have been submitted by 148 of the 830 companies producing chemicals in the high-volume range. A total of 459 companies sell products for which half or fewer of the chemicals used have been reported under the required testing protocols. Only twenty-one companies have submitted complete screening data for all chemicals they produce. The EPA observes that filling in the screening data gaps would cost about \$427 million, or about 0.2 percent of annual sales for the top one hundred US chemical companies.US Environmental Protection Agency, “HPV Chemical Hazard Data Availability Study: High Production Volume (HPV) Chemicals and SIDS Testing,” accessed January 29, 2011, www.epa.gov/hpv/pubs/general/hazchem.htm.
A significant step toward international restrictions on some of the most hazardous chemicals is evident in the sequence of conventions on POPs. POPs are widely considered the least acceptable hazardous chemicals: they persist in the environment for decades without degrading into harmless substances, they are organic, and they are highly toxic pollutants. Some other chemicals that are themselves relatively harmless create persistent toxic by-products, such as dioxins, as they are combusted or degrade.
On May 22, 2001, delegates from 127 countries, including the United States, formally signed the international treaty on POPs in Stockholm, Sweden. The signatories pledged to phase out the production and use of the twelve chemicals listed in Table 6.2. These twelve POPs are the first targets for an international convention restricting the trading and use of POPs.
Table 6.2 The United Nations’ Top Twelve Persistent Organic Pollutants (POPs)
Pollutant Date Introduced Uses, Pests, and Crops
Aldrin 1949 Insecticide—termites—corn, cotton, and potatoes
Chlordane 1945 Insecticide—termites
DDT 1942 Insecticide—mosquitoes
Dieldrin 1948 Insecticide—soil insects—fruit, corn, and cotton
Endrin 1951 Rodenticide and insecticide—cotton, rice, and corn
Heptachlor 1948 Insecticide—soil insects, termites, ants, and mosquitoes
Hexachlorobenzene 1945 Fungicide and pesticide by-product and contaminant
Mirex 1959 Insecticide—ants and termites—also used as a fire retardant (unusually stable and persistent)
Toxaphene 1948 Insecticide—ticks and mites (a mixture of up to 670 chemicals)
PCBs 1929 Used primarily in capacitors and transformers and in hydraulic and heat transfer systems. Also used in weatherproofing, carbonless copy paper, paint, adhesives, and plasticizers in synthetic resins
Dioxins 1920s By-products of combustion (especially of plastics) and of chlorine product manufacturing and paper bleaching
Furans 1920s By-products, especially of PCB manufacturing, often with dioxins
Source: Andrea Larson, Darden Business School technical note, Toxic Chemicals: Responding to Challenges and Opportunities, UVA-ENT-0043 (Charlottesville: Darden Business Publishing, University of Virginia, 2004).
Acceptable Risk?
How much risk to our health and environment are we willing to accept and pass on to future generations in return for the benefits we expect from a new chemical? Many people would immediately answer, “None; it’s unacceptable to pass on any risk!” Chemical industry advocates recommend applying cost-benefit analysis to hazardous chemicals. They point out that it may be reasonable to eliminate 80 percent of the risk of a substance, but it costs a great deal to eliminate the last 20 percent. They would prefer that we accept the remaining risk and spend the savings on other pressing concerns.
Chemical risks are associated with four main variables for human health: exposure to a chemical, toxicity of the chemical, dosage received, and response (acute or chronic illness). Multiple exposures to several different chemicals and possible synergistic effects are sometimes accounted for as well. Ecological impacts are an added concern reviewed in some risk assessments. The reality is that the US regulatory system for monitoring chemicals is insufficient for the scope of the task. Reform of the key legislation, the Toxic Substances Control Act, may not be possible under the current political polarization in the United States. Some people have concluded that while targeting more benign, or fully benign, chemical components for products in the private sector is to be commended, nothing will take the place of dramatic chemical regulatory reform at the federal level.
The challenges are significant. It is hard to know exactly the risk to which we are exposed. Whose responsibility is it to assess risks from chemicals and communicate them to end users and others who may share the impacts? Limits to environmental regulatory budgets, industry resistance to regulatory constraints, public debt and sentiment that larger government is not the right choice, and increasing complexity of toxicology science combine to make it difficult for government to provide reassuring answers.
Alternative Approaches
Should those who benefit from the sale and use of toxic chemicals be held accountable for damages they cause if they knew or suspected harmful impacts? What if they were unaware that they were doing harm?
What are the opportunities for firms in this arena? It is important to learn from our mistakes. Cleaning up a Superfund siteSuperfund sites are highly polluted areas registered with the US Environmental Protection Agency. A multibillion-dollar fund for cleaning up those sites is financed by the companies that caused the pollution in accordance with the “polluter pays” principle. or settling lawsuits with survivors of chemical experiments such as those involving asbestos and diethylstilbestrol (DES) can bankrupt a company. Many women who took the fertility drug DES on the advice of their physicians gave birth to children with malformed reproductive organs and unusual reproductive system cancers. Worker exposure in asbestos insulation factories led to a signature form of deadly cancer known as mesothelioma, yet asbestos is still not banned.
The Precautionary Principle
In the future, given the right mix of politics, economics, public pressure, and tragic consequences, industry may find itself forced to change from a status quo of “make it now and find out what harm it does later” to something resembling the “precautionary principle” espoused by many governments and environmental groups, today the dominant regulatory approach in the European Union. The precautionary principle states that “even in the face of scientific uncertainty, the prudent stance is to restrict or even prohibit an activity that may cause long-term or irreversible harm.”Ann Platt McGinn, Why Poison Ourselves? A Precautionary Approach to Synthetic Chemicals, Worldwatch Paper #153 (Washington, DC: Worldwatch Institute, November 2000), 17–18. That concept places the burden of proof on those who would create a potential risk rather than on those who would face its impacts. Currently, most environmental disputes follow the opposite pattern: those who are concerned about a potentially hazardous activity must prove that unreasonably high risk exists before the advocates of the activity can be expected to change. Applied to synthetic chemicals, the precautionary principle might lead us to look for alternatives to certain classes of chemicals, such as organohalogens (organic compounds that contain chlorine, fluorine, bromine, iodine, or astatine), which have proven exceptionally dangerous.
You Make It, You Own It
Imagine an economy in which consumers lease products: instead of owning the product, they buy only the services it provides. For example, many copier companies lease their machines, selling document reproduction services rather than copiers. One proposed system would tag chemicals (as “technical nutrients”) with molecular markers. The materials would remain the property of the manufacturer, which would own not only the product but also the waste, toxicity, and liability it may cause. Cradle-to-cradle product management would keep unavoidable toxins in closed-loop systems of cyclical use and reuse. Ideally, companies would make either “biological nutrients” that return safely to the earth or “technical nutrients” that stay in technical cycles managed by the companies that use them.Robert A. Frosch and Nicholas E. Gallopoulos, “Strategies for Manufacturing,” Scientific American 261, no. 3 (September 1989): 144–52; Robert U. Ayres, “Industrial Metabolism,” in Technology and Environment, ed. Jesse H. Ausubel and Hedy E. Sladovich (Washington, DC: National Academy Press, 1989).
The Next Problem
If industry fails to reach such a level of self-regulation, mankind will undoubtedly face new surprises from our production and use of chemicals. The early pioneers of the internal combustion engine saw it as a cure for streets covered with horse manure, the pollutant of their day. They never dreamed that their innovation would produce the air pollution that now kills thousands of people every year. Without a more prudent approach, we may find that our new inventions create unforeseen dangers as well. A few of the many candidates for the next revolutions in chemical use include GMOs, nanoscale molecular machines, and exotic molecules such as buckyballs. Some of those will probably never do any harm and may prove valuable. Others may harm our bodies and the natural systems that we depend on in ways that we cannot foresee. Foresight requires considering an innovation’s risk of doing harm at least as carefully as we explore its potential benefits.
Thoughts for Commercial Enterprises
Some of our past experiments with chemicals provide opportunities for future technology. For example, devices that “sniff out” explosives may be used to detect and destroy abandoned land mines. Nontoxic substitutes for innumerable cleaners, solvents, lubricants, adhesives, medicinal supplies, bleaches, disinfectants, and hundreds of other products are waiting to be discovered. Agriculture needs cleaner, cheaper, safer substitutes for its pesticides and chemical fertilizers.
Alternatives
There already are safer alternatives for many of the processes and products that involve toxic chemicals, and companies are working diligently to discover more. Clean energy generation, such as fuel cells, solar cells, and wind power, had become a hot topic on Wall Street by 2005. Yet all these energy technologies need assessment from a component toxicity perspective and life-cycle view as well. Integrated pest management and organic farming are gaining popularity as the local food movement accelerates. Scientists are looking to nature for solutions to industrial as well as agricultural problems. The budding field of biomimicry explores and seeks to mimic the processes in nature that create materials and energy at ambient temperatures without using toxic chemicals. For example, spiders make waterproof webs that are twice as strong as Kevlar without toxicity. Abalones create shatterproof ceramics using seawater as their raw material. Leaves create food and useful chemical energy from sunlight, water, and soil.Janine Benyus, Biomimicry: Innovation Inspired by Nature (New York: William Morrow, 1997). Some bacteria even digest toxic organic chemicals and excrete harmless substances in the process.
Challenges and Opportunities
Both challenges and opportunities lie in learning to assess risks and to develop a clear vision of the short- and long-term benefits and the legal, financial, and social risks associated with new chemicals and the technologies they enable. Many options exist to help businesses design environmental and social responsibility into their products and services. Proven techniques include pollution prevention (P2), design for environment (DfE), The Natural Step (TNS) framework, and cradle-to-cradle thinking. In some cases, those options include efficiency improvements that have short-term payback periods. Other techniques inspire valuable innovations with long-term financial benefits, improved public image, and better employee morale—a stakeholder approach. P2 can save money by eliminating waste in industrial processes and avoiding costly regulatory requirements and toxic waste disposal costs. The DfE school of thought recommends adding design criteria that insist on processes and products that are free from toxic chemicals throughout the product life cycle. Dr. Karl-Henrik Robèrt, father of TNS, suggested asking six questions about a persistent toxin such as dioxin before continuing to use it: “Is the material natural? Is it stable? Does it degrade into harmless substances? Does it accumulate in bodily tissues? Is it possible to predict the acceptable tolerances? Can we continue to place this material safely in the environment?”Paul Hawken, The Ecology of Commerce (New York: Harper Business, 1993), 53.
Exposure to Toxins Presents New Health Issues
With consumers increasingly concerned about toxins in products after reports of lead in toys and of endocrine-disrupting synthetic chemicals (such as BPA, or bisphenol A) in plastic containers and in soft plastics chewed on by teething children, “clean” products are a major concern for parents today. Toxic chemicals designed into products will receive more attention going forward as scientific knowledge advances on how living organisms, including humans, absorb such chemical compounds. The next section, on chemicals in breast milk, helps inform the reader, from an often overlooked vantage point, why these issues are becoming more visible and what opportunities are associated with the search for solutions.
A Reason for Environmental Health Concerns: Chemicals in Breast Milk
Breast-feeding advocates often refer to breast milk as “liquid gold.” Besides its direct benefit of feeding a growing baby, breast milk contains antibodies to protect infants from disease, nutrients to support organ development, and enzymes to aid digestion. Research has shown that the unique composition of human milk enhances brain development and lowers the risk and severity of a variety of serious childhood illnesses and chronic diseases, including diarrhea, lower respiratory infection, bacterial meningitis, urinary tract infections, lymphoma, and digestive diseases.Andrea Larson, Darden Business School technical note, Environmental Health: Chemicals in Breast Milk, UVA-ENT-0078 (Charlottesville: Darden Business Publishing, University of Virginia, 2004). All information in this section by author. There are also significant benefits to women who breast-feed, such as reduced risk of breast and ovarian cancer and osteoporosis.US Department of Health and Human Services, “The Surgeon General’s Call to Action to Support Breastfeeding, 2011,” accessed January 30, 2011, http://www.surgeongeneral.gov/topics/breastfeeding/calltoactiontosupport breastfeeding.pdf.
Although breast milk is recognized by doctors, public health officials, and scientists as the best first food for an infant, it is not pure. Many synthetic chemicals released into the environment, intentionally or not, can be found in breast milk. Famous “bad actors” such as DDT and PCBs, as well as less well-known substances such as flame retardants (polybrominated diphenyl ethers, or PBDEs), have been detected in human breast milk around the world. Many of those synthetic chemicals are known or suspected causes of cancer, and they have been linked to other health problems such as diabetes, reproductive disorders, and impaired brain development. The health benefits of breast-feeding far outweigh the possible negative effects of chemical contaminants in breast milk, but the presence of those chemicals remains a cause for concern.
Many of the synthetic chemicals that have been found in breast milk have some general properties in common. They can be described as bioaccumulative and persistent. A substance that bioaccumulates is one that, once introduced into the environment, collects in living organisms that are exposed by breathing air, eating plants that have taken up the chemical from the soil, or drinking water that is contaminated with the substance. Thus bioaccumulating chemicals find their way into and up the food chain. Many such chemicals are not soluble in water but rather are soluble in fat. That means that instead of being expelled, they bind to fatty tissue and remain in the body. A chemical that is termed “persistent” is just that: it stays around. Chemicals that are persistent take a long time to be broken down and expelled, if they ever are. Many such synthetic chemicals resemble natural hormones and chemicals in the human body, which is why they are not easily broken down and expelled by the body.
Breast milk has a high fat content, which means it draws certain synthetic chemicals to it. To produce milk, a mother’s body utilizes stored fat, thus some of the synthetic chemicals that have accumulated in body fat over a woman’s lifetime are released in the production of breast milk and passed on to nursing infants. In many cases, human milk contains chemical residues in excess of limits established for commercially marketed food.Sandra Steingraber, Living Downstream: An Ecologist Looks at Cancer and the Environment (New York: Addison-Wesley, 1997), 168.
Few countries regularly track contaminants in breast milk, but recent studies from around the world show that synthetic chemicals can be found in breast milk in both industrialized and developing countries. From the Arctic to Africa, in Europe, in the Americas, and in Asia, those chemicals have taken up residence in the environment and in human bodies.
The chemicals found in breast milk are of concern not simply because they demonstrate the global dispersal and persistence of some chemicals but also because exposure to them has been linked to negative health effects. It may be true that no study has ever shown that a child exposed to a specific chemical from breast milk will develop a specific disease, but a growing body of science tells us that there are links between human health and exposure to toxic chemicals in the environment.
The primary chemicals of concern that scientists have found in breast milk include dioxins, furans, and PCBs, as well as pesticide residues such as DDT, chlordane, aldrin, dieldrin, endrin, heptachlor, hexachlorobenzene, mirex, and toxaphene. Those chemicals, nine of which are pesticides, are recognized as highly toxic by the international health community and are scheduled for phaseout worldwide as part of the International Treaty on Persistent Organic Pollutants. Other chemicals found in breast milk include brominated flame retardants such as PBDEs, solvents such as tetrachloroethylene, and metals such as lead, mercury, and cadmium.Natural Resources Defense Council, “Healthy Milk, Healthy Baby: Chemical Pollution and Mother’s Milk,” accessed January 30, 2011, http://www.nrdc.org/breastmilk/chems.asp. Metals and solvents do not bind to fat, so they are not stored in the body for long; however, they do pass from the mother’s blood into her breast milk and to her baby. Exposure to heavy metals and solvents, like exposure to POPs, has been linked to health effects.
To further explore the issue of synthetic chemicals in breast milk, let’s look at three examples: dioxins, PBDEs, and dieldrin.
Dioxins
Dioxins are chemical by-products and comprise a number of chemicals with similar molecular structure, seventeen of which are considered to be highly toxic and cancer causing. They are not produced intentionally and are created in a range of manufacturing and combustion processes, including the following:
• Production of certain pesticides (the defoliant Agent Orange was notoriously contaminated with dioxin)
• Paper pulp bleaching
• Municipal waste incineration
• Hospital waste incineration
• Production and incineration of PVC
• Diesel engine exhaust
Humans are primarily exposed to dioxins and furans through the food they eat. Dioxins are released into the air, and then rain, snow, and other natural processes deposit them onto soil and water, where they combine with sediments and contaminate crops and animals. Dioxin binds tightly to fat and therefore quickly bioaccumulates and persists for a long time in the body. Because it is initially airborne, dioxin has been detected in breast milk around the world, even in places with little or no industrial activity, such as the native Inuit villages in northern Canada.
Dioxin is one chemical that has been the subject of many studies. Exposure to low levels of dioxin, levels as low as those detected in breast milk, has been linked to impaired immune systems, leading to a higher prevalence of certain childhood conditions such as chest congestion. Scientists have found a correlation between high levels of dioxin in body fat and thyroid dysfunction. Thyroid hormone is important to proper brain development, especially early in life. Other studies have associated dioxin exposure with more feminized play behavior in boys and girls. Researchers have discovered that dioxin exposure may also increase the risk of diseases such as endometriosis and diabetes. Non-Hodgkin’s lymphoma and cancers of the liver and stomach have also been connected to dioxin.Lois Marie Gibbs, Dying from Dioxin (Cambridge, MA: South End Press, 1995), 138.
Dioxin continues to be released into the environment from industrial processes, but efforts are being made to reduce levels released. The World Health Organization conducted two breast milk studies in Europe in 1986 and 1993. Comparing the two revealed a decrease in dioxin levels.Gina M. Solomon and Pilar M. Weiss, “Chemical Contaminants in Breast Milk: Time Trends and Regional Variability,” Environmental Health Perspectives 110, no. 6 (June 2002): 343. That result demonstrates that efforts to reduce the creation and release of dioxin do lessen the amount of the chemical accumulated in breast milk.
PBDEs
Unlike dioxin, little is known about the possible health effects of PBDEs. PBDEs are synthetic chemical fire retardants that are added to plastics, electronics, furniture, and many other home and office products. They are not actually bound to those products, so they are slowly released into the environment over time.
What is known about PBDEs is that they are “rapidly building up in the bodies of people and wildlife around the world.”Marla Cone, “Cause for Alarm over Chemicals,” Los Angeles Times, April 20, 2003, accessed January 11, 2011, http://articles.latimes.com/2003/apr/20/local/me-chemicals20. In 2003, the European Union banned two PBDEs that were shown to be accumulating in human bodies; countries outside Europe have yet to place any restrictions on PBDEs, and their use continues to increase.
A study in Sweden demonstrated a steep increase in the levels of PBDEs measured in women’s breast milk.Natural Resources Defense Council, “Healthy Milk, Healthy Baby: Chemical Pollution and Mother’s Milk,” accessed January 11, 2011, http://www.nrdc.org/breastmilk/chems.asp. Sweden and other Scandinavian countries have been especially concerned with contaminants deposited by rain and snow as prevailing weather patterns bring pollution from countries to their south.
Very little is known about the specific ways PBDEs may contribute to human disease. PBDEs, however, demonstrate many properties that are very similar to dioxin and to PCBs. They persist a very long time in the environment and in the body. They are suspected of impairing thyroid function and brain development. Like dioxin, they are also suspected to cause cancer and have been linked to non-Hodgkin’s lymphoma.K. Hooper and T. A. McDonald, “The PBDEs: An Emerging Environmental Challenge and Another Reason for Breast-Milk Monitoring Programs,” Environmental Health Perspectives 110, no. 6 (June 2002): A339–47, quoted in Gina M. Solomon, “Flame Retardant Chemical Detections Rising in Breast Milk,” Quarterly Review, Harvard Medical School Center for Health and the Global Environment 2, no. 2 (2000), accessed January 30, 2011, www.ncbi.nlm.nih.gov/pmc/articles/PMC1240888.
As scientists uncover more information about how PBDEs are absorbed by the body and how such exposures might affect human health, the chemicals, like other POPs, may be subject to bans in many countries. Many European manufacturers are already scaling back use of some PBDEs based on what is already known about their health effects.Marla Cone, “Cause for Alarm over Chemicals,” Los Angeles Times, April 20, 2003, accessed January 11, 2011, http://articles.latimes.com/2003/apr/20/local/me-chemicals20.
Dieldrin
Dieldrin is an example of the many pesticides that have been banned or severely restricted worldwide. Dieldrin and its sister pesticides aldrin and endrin are banned from use in the United States. In some countries, they are permitted for specific uses under severe restriction. Dieldrin has been used in agriculture for soil and seed treatment as well as for control of mosquitoes and tsetse flies. Other uses for dieldrin include veterinary treatments for sheep, wood treatment against termites, and mothproofing of woolen products.
Dieldrin binds to soil and sediments. It is introduced to the human body primarily through eating contaminated fish, meat, and dairy products and through eating crops grown on soil treated with dieldrin. Dieldrin has been detected in 99 percent of breast milk samples tested for its presence.Natural Resources Defense Council, “Healthy Milk, Healthy Baby: Chemical Pollution and Mother’s Milk,” accessed January 11, 2011, http://www.nrdc.org/breastmilk/chems.asp. Studies done over time show that levels of dieldrin have been decreasing since the chemical was banned. Dieldrin is in the same family of pesticides as DDT. Like DDT, dieldrin is a carcinogen and can interfere with the body’s natural hormone system. Dieldrin is more toxic than DDT but does not persist as long in the environment.Natural Resources Defense Council, “Healthy Milk, Healthy Baby: Chemical Pollution and Mother’s Milk,” accessed January 11, 2011, http://www.nrdc.org/breastmilk/chems.asp.
Conclusion
Even though it contains synthetic chemical contaminants, breast milk is still the best food for babies, according to research. Infant formulas are not a more healthful substitute; after all, most formulas have to be mixed with water or milk and therefore are not free of contaminants. Moreover, formulas lack many of the other nutrients, antibodies, and fats found in breast milk.
The presence of chemicals in breast milk shows that these chemicals are found in most people, particularly people in industrialized countries. Breast milk, then, is both a measure of what environmental exposures give cause for concern and a measure of the effectiveness of efforts to reduce the prevalence of these synthetic chemicals in the environment. As the body of science connecting childhood exposures to these toxic chemicals to human health effects grows, it appears that breast milk contamination will be a growing cause for concern.
KEY TAKEAWAYS
• By applying a sustainability approach, one can find opportunities even in mature industries dominated by global giants.
• Markets exist and are likely to grow for products that reduce exposure to toxic chemicals.
• Application of sustainability principles presents both opportunities and unique dilemmas.
EXERCISES
1. What is Method’s strategy?
2. What role does sustainability innovation play in the company’s strategy?
3. Why was Adam Lowry confronted with the PLA dilemma?
4. What should he do about PLA? Why?
5. How is the model of chemical development and deployment changing? How could that change be accelerated?
Learning Objectives
1. Analyze a company culture conducive to innovation.
2. Understand how collaborative innovation processes work.
3. Discuss emerging new models of business.
This second Method case examines the process by which the firm created a breakthrough product design in 2010. Method also became a B Corporation, joining a fast-growing number of other companies committed to making money and using business innovation to solve health, social, and environmental problems by paying attention to toxicity and broad stakeholder interests. (A detailed discussion of B Corporations follows the case.) Together the two Method cases offer insights into how entrepreneurially minded individuals can address chemical contamination and design concerns through innovative approaches.
Method Products Inc. had hit a sweet spot for its buyers by 2010.Andrea Larson and Mark Meier, Method Products: Sustainable Innovation as Entrepreneurial Strategy, UVA-ENT-0159 (Charlottesville: Darden Business Publishing, University of Virginia, 2010). Since its founding in 2000, the privately held Method had a clear mission: make good-smelling, high-performing household cleaners that were healthy throughout their material life spans and packaged in attractive, eye-catching, and eco-designed containers. Said Adam Lowry, cofounder and chief “greenskeeper”: “We wanted to change the way people view home cleaning. There is a disconnect between the way people feel about and care for their homes and the design of the products they use to clean them. We set out to evolve the household cleaner from an object that lived under the sink to a countertop accessory and must-have item by providing cool-looking, effective, nontoxic products that are healthy for both the environment and the home.”Andrea Larson and Mark Meier, Method Products: Sustainable Innovation as Entrepreneurial Strategy, UVA-ENT-0159 (Charlottesville: Darden Business Publishing, University of Virginia, 2010). All other quotations in this section not otherwise attributed come from this case study.
As scientific studies revealed growing health problems associated with chemical exposure and regulation of chemicals steadily rose around the world, more informed customers were seeking effective but healthier cleaning product options on retail shelves.
Despite the company’s small size and entrepreneurially disruptive approach, Method’s stylish cleaning products had quickly become state of the art in an industry moving toward sustainable business thinking. “We want to be thought leaders and we want to evoke change,” said cofounder and design guru Eric Ryan.Andrea Larson and Mark Meier, Method Products: Sustainable Innovation as Entrepreneurial Strategy, UVA-ENT-0159 (Charlottesville: Darden Business Publishing, University of Virginia, 2010). Method had in fact altered the once staid market for cleaning products in which large competitors traditionally fought over shelf facings, thin margins, and fractional market share points. Method introduced a three-times-more-concentrated laundry detergent in 2004 that met Walmart’s requirement that all detergent suppliers concentrate their products to save resources on packaging and shipping. In 2006, Method began getting many of its products cradle-to-cradle certified by McDonough Braungart Design Chemistry. This certification meant the products were nontoxic and used fewer resources throughout their life cycles. The next year, Method became one of the founding B Corporations, a form of company that built environmental and social goals into its charter and passed third-party standards for sustainable practices. Method also worked through its public communications and website to express the goals and values that were an integral part of its company culture and products—that is, protecting health, children, and pets through eco-friendly and socially conscious products designed from a full life-cycle perspective.
Yet Lowry wanted to go further to catalyze the next wave of innovation within the company’s product categories. He and the company aspired to launch two major products a year. In 2008, Method turned to laundry detergents, a several-billion-dollar market in the United States alone. It devised an eight-times-concentrated detergent in an encapsulated tablet, or monodose format, which would further save packaging and product materials and drastically reduce the energy used in manufacturing and distribution. The consumer could toss the tablet in the washing machine with a load of laundry. It was convenient, efficient, less messy, and prevented the use of excess soap in each laundry load.
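The logistics payoff of concentration can be sketched with back-of-the-envelope arithmetic. The doses, density, and packaging overhead below are illustrative assumptions for the sketch, not Method’s actual figures:

```python
# Rough sketch: how concentrating a detergent cuts the mass shipped per load.
# All numbers are hypothetical assumptions, not data from the case.

def shipped_mass_per_load(dose_ml: float, density_g_per_ml: float,
                          packaging_fraction: float) -> float:
    """Grams shipped per laundry load: product plus packaging overhead."""
    product_g = dose_ml * density_g_per_ml
    return product_g * (1 + packaging_fraction)

# Conventional 2x detergent: assume ~60 ml per load.
conventional = shipped_mass_per_load(dose_ml=60, density_g_per_ml=1.0,
                                     packaging_fraction=0.10)
# 8x concentrate: assume one-quarter the dose volume of the 2x product.
concentrated = shipped_mass_per_load(dose_ml=15, density_g_per_ml=1.0,
                                     packaging_fraction=0.10)

print(f"conventional: {conventional:.1f} g/load")            # 66.0 g/load
print(f"concentrated: {concentrated:.1f} g/load")            # 16.5 g/load
print(f"reduction: {1 - concentrated / conventional:.0%}")   # 75%
```

Under these assumptions, quadrupling the concentration removes roughly three-quarters of the shipped mass per load, which is why retailers such as Walmart pushed suppliers toward concentrates.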
At a critical point in the product development process, everything about the monodose was working except one thing: the gel that encapsulated the detergent didn’t dissolve entirely in cold water, a result of the company’s decision not to use animal-derived ingredients such as the gelatins most often used for capsules. A bit of the plant-based gel casing could remain in the washing machine during the rinse cycle. Could Method and its loyal customers accept the residue as the price for a much more concentrated and environmentally compatible detergent formulation? Method could turn to petroleum-derived or gelatin capsules rather than plant-based materials for the capsule to solve the problem. Alternatively, the company could abandon the monodose concept and its inherent benefits. Lowry and his team contemplated what to do.
The Emergence of Method
Adam Lowry and Eric Ryan knew each other growing up in Michigan, where their families had entrepreneurial companies that became significant suppliers to the automobile industry. Lowry earned his bachelor of science in chemical engineering from Stanford University and worked on climate change policy at the Carnegie Institution for Science, an organization that focused on innovation and discovery. There, he helped develop software tools for the study of global climate change. In his postcollege work experience, Lowry honed his unique approach to commercial environmentalism, which would form the basis of Method’s success. Through his combined education and employment, Lowry became convinced that business was “the most powerful agent for positive change on the planet. But it’s not business as we know it today. It is fundamentally and profoundly different. It is business redesigned.”Method, “Methodology: Behind the Bottle,” accessed August 24, 2010, www.methodhome.com/behind-the-bottle.
In 2006, Ryan was named one of Time magazine’s eco-leaders and received similar accolades from Vanity Fair. Ryan attended the University of Rhode Island and went into marketing, eventually doing work for the Gap and Saturn. The old high school classmates ran into each other on an airplane in 1997 and realized they lived on the same block of Pine Street in San Francisco. They soon thereafter became roommates, helping to maintain a house full of college fraternity brothers. As the story went, no one liked to clean. The two spent time discussing what was cool and what was not in commercial markets—and hence ripe for innovation. The pair settled on cleaning products, a bastion of typically harsh, dangerous chemicals—definitely not cool.
Lowry and Ryan used their bathtub to mix their own cleaners from natural, fragrant, and gradually more benign and renewable ingredients. Initial funding of \$300,000 was provided through convertible debt from friends and family. They filled one beer pitcher at a time with their cleaners and made their first sale in February 2001 to a local grocery store—an order they scrambled to fill from their bathtub. The next day, Lowry and Ryan hired their new boss, an entrepreneurial CEO with an MBA and years of experience in the consumer packaged goods industry.
The founders knew presentation would matter when it came to making an impression on customers. Lowry said, “[We want to] inspire a happy, healthy home revolution.…We want it to be happy because, in our opinion, way too much of the green movement has been heavy handed in sort of an education-based instead of inspiration-based [way]. That’s one of the reasons that you see us concentrating so much on making our products not only work great and be green but be beautiful as well.”“Adam Lowry, the Man Behind the Method (Cleaning Products),” interview by Jacob Gordon, Treehugger Radio, podcast audio and transcript, April 30, 2009, accessed August 24, 2010, www.treehugger.com/files/2009/04/th-radio-adam-lowry.php.
They cold-pitched New York–based industrial designer Karim Rashid, whose aesthetically appealing work had appeared in museums, and he accepted the offer to design for Method. Soon, his designs would appear on countertops across the country. Rashid’s name and Ryan’s contacts got the company a pilot deal with Target to sell Method products at two hundred stores around Chicago and San Francisco. Method found a contract manufacturer to scale up production. Method’s commitment to excellence and attention to detail impressed Target, especially after a problem with leaky containers led Lowry and other Method employees to walk through Target and pull the leaky bottles off the shelves themselves; their container supplier quickly corrected the problem.
Method’s growth accelerated even as the company stayed true to its core values of style and social and environmental soundness. Target decided to carry Method products in all its stores, and Method went from having \$16 in cash, credit card debt, and arrears to vendors in 2001 to profitability in 2005.“How Two Friends Built a \$100 Million Company,” Inc., accessed January 11, 2011, www.inc.com/ss/how-two-friends-built-100-million-company. In 2006, Method experienced rapid growth, ending the year with about forty-five employees, fifty vendors and suppliers, and a foothold in the United Kingdom. The next year pushed sales to \$71 million. Method expanded rapidly from hand soaps and countertop cleaners to body washes, floor cleaners, dish soaps, and laundry detergents. These products were carried by large retail chains such as Costco, Target, Lowe’s, and Whole Foods and generated over \$100 million in revenue by 2010. Method’s former CEO distilled the company’s approach for success in 2006: “Method has to enter a category with a huge disruption. The story cannot be copied overnight or eroded by competitors. It has to have disruptive packaging, ingredients, and fragrance.”Stephanie Clifford, “Running Through the Legs of Goliath,” Inc., February 1, 2006, accessed January 12, 2011, www.inc.com/magazine/20060201/goliath.html.
The company also continued to use naturally occurring or naturally derived ingredients as much as possible. If synthetic ingredients were needed, they were screened for biodegradability and toxicity to humans and the environment but without the use of animal testing. People for the Ethical Treatment of Animals (PETA) gave Lowry and Ryan its 2006 award for Person of the Year. The founding duo wrote a guidebook in 2008 titled Squeaky Green: The Method Guide to Detoxing Your Home. Their company, meanwhile, strove to reduce its carbon footprint through efficiency, switching to biodiesel trucks, or buying offsets such as methane digesters for manure at three Pennsylvania dairy farms. It also became the first company to introduce a custom-made bottle manufactured from 100 percent post–consumer recycled (PCR) polyethylene terephthalate (PET), which carries a resin identification code, or recycling number, of 1.
Despite its innovation, growth, and sterling public image, Method remained tiny relative to the competition. While Seventh Generation, an established producer of green cleaners and a fellow B Corporation, had sales comparable to Method’s, generating \$93 million in revenue in 2007 and over \$120 million the following year, the makers of conventional cleaning products were orders of magnitude larger. One of the largest companies in the world, Procter & Gamble (P&G) had a market capitalization of \$180 billion in April 2010, and its Household Care business unit alone had sales of \$37.3 billion in 180 countries in 2009. P&G’s laundry detergents in 2009 included Tide, launched in 1946 as the first synthetic heavy-duty detergent and now a billion-dollar brand; Gain, another billion-dollar brand; Ace and Dash, each of which generated over \$500 million in sales; and Cheer.Jeffrey Hollender, “How I Did It: Giving Up the CEO Seat,” Harvard Business Review, March 2010, accessed January 11, 2011, http://hbr.org/2010/03/how-i-did-it-giving-up-the-ceo-seat/ar/1; Procter & Gamble, 2009 Annual Report, accessed January 11, 2011, www.pg.com/en_US/investors/financial_reporting/annual_reports.shtml. In short, P&G’s laundry detergents dominated that market and by themselves generated more than thirty times Method’s revenue.
Other giants with broad product portfolios operated in the laundry detergent market. In 2009, Unilever, also a food producer, had total sales of about \$55 billion. Colgate-Palmolive, known for toothpastes and dish soaps, had sales of \$15 billion. Clorox, best known for its chlorine bleach, had sales of \$5.5 billion. Clorox also had proven particularly adroit at moving into the green cleaning market. It launched its Green Works line in the United States in 2008, rapidly expanded into fourteen countries, and, according to the company, captured 47 percent of the natural cleaner market from mid-2008 to mid-2009, more than doubling the closest competitor’s share. Church & Dwight Co., makers of Arm & Hammer brand baking soda, pulled in another \$2.5 billion in sales in 2009 and marketed a series of baking soda–based green cleaners and laundry detergents under its Arm & Hammer Essentials line.Unilever, Annual Report and Accounts 2009, accessed January 12, 2011, annualreport09.unilever.com; Colgate-Palmolive, Colgate-Palmolive Company 2009 Annual Report, accessed January 12, 2011, http://www.colgate.com/app/Colgate/US/Corp/Annual-Reports/2009/HomePage.cvsp; Clorox Company, “Financial Overview,” accessed August 24, 2010, http://investors.thecloroxcompany.com/financials.cfm; Clorox Company, 2010 Annual Report to Shareholders and Employees, accessed January 12, 2011, www.thecloroxcompany.com/investors/financialinfo/annreports/clxar10/ar10_complete.pdf; Church & Dwight, “Church & Dwight Reports 2009 Earnings per Share of \$3.41,” news release, February 9, 2010, accessed January 12, 2011, investor.churchdwight.com/phoenix.zhtml?c=110737&p=irol-newsArticle&t=Regular&id=1385342&.
As Method mulled its own new laundry detergent, it remained very much aware of its smaller stature. As Eric Ryan told Inc. magazine in 2007, “When you run through the legs of Goliath, you need to spend a lot of time thinking about how to act so you don’t put yourself in a place you can be stepped on.”“How Two Friends Built a \$100 Million Company,” Inc., accessed January 11, 2011, www.inc.com/ss/how-two-friends-built-100-million-company#4. Josh Handy, the lead Method designer, made a similar point: “Where we’ve gone awry sometimes is when we’ve forgotten how small we are and therefore while we talked about ourselves as being the biggest green brand in the world, which typically we were, that’s the wrong mindset for Method. What we are is the 35th-smallest cleaning products brand in the world.”Andrea Larson and Mark Meier, Method Products: Sustainable Innovation as Entrepreneurial Strategy, UVA-ENT-0159 (Charlottesville: Darden Business Publishing, University of Virginia, 2010).
Handy understood that Method’s work environment had to support the creativity required for David-esque innovation. After Handy came to Method, he actively encouraged people to break rules to innovate, at one point literally drawing on a piece of furniture. Other employees followed his lead, and soon a room of once-uniform and uncomfortable white furniture was thus decorated and dubbed “the Wiggle Room.” Commitment to giving people and ideas room to “wiggle” was serious. The mission was stated as “Keep it weird, keep it real, keep it different,” and, as Eric Ryan commented,
We don’t build rockets over here; we build soap. And it’s hard to be different in soap. So ideas have to be flowing. We have to have an environment where people are comfortable sharing ideas. We do everything we can to make people as connected as possible. We have to have every brain in the game. The more different an idea, the more fragile it is and the more likely it is beaten down and doesn’t go anywhere. We have to cultivate our ability to be different, to be open to ideas. It means putting as much work into the culture as the product you are creating.Andrea Larson and Mark Meier, Method Products: Sustainable Innovation as Entrepreneurial Strategy, UVA-ENT-0159 (Charlottesville: Darden Business Publishing, University of Virginia, 2010).
Concentrating the Formula and Pushing the Monodose to Failure
When Method decided to pursue an improved detergent in early 2008, it turned over the matter to its team of “green chefs,” including Fred Holzhauer, whom Lowry characterized as being “as close to a true mad scientist as anyone I’ve ever met.” Lowry gave the green chefs the mission of creating a better detergent and trusted them to figure out the details. “What we do is set up a system,” Lowry explained, “a way of working, an environment that allows the innovation to occur within the boundaries that we want.”Andrea Larson and Mark Meier, Method Products: Sustainable Innovation as Entrepreneurial Strategy, UVA-ENT-0159 (Charlottesville: Darden Business Publishing, University of Virginia, 2010). Drummond Lawson, Method’s “Green Giant” (or director of sustainability), seconded that notion. Method’s strategy was to hire creative people and then get out of their way. In the case of Holzhauer, Lawson said, “He has this opportunity to really play with everything in the lab, bring it through, and get some prototypes up to the point where we can take them and put them in other people’s hands. Whereas if we mandate—go make this formulation with these characteristics—he’d be checked out and bored and gone. You’d end up with what you ask for instead of myriad opportunities.”Drummond Lawson, interview with author, San Francisco, January 19, 2010; unless otherwise indicated all subsequent attributions derive from this interview.
Method’s green chefs decided to build upon the success of Method’s three-times-concentrated detergent. Further concentration would decrease the water in the product, thereby decreasing volume, mass, packaging, storage space, and freight costs. Method also wanted to encapsulate the detergent in tablet form for the user’s convenience and to reduce detergent wasted by inaccurate measurement. The green chefs listed their goals and all the tools at their disposal, including the conventional harsh, artificial tools. They focused initially on getting the detergent formula right. As Holzhauer explained,
The first thing you do is you build it the old way. You build it with all the nasty stuff that a competitor would and say, “What would be the highest performing biggest-payoff thing that we could build?” And you build it and then you say, “This stuff rocks. This is my benchmark. How close can I get using materials that are available with green?” Then there’s the whole process of drawing lines through a whole bunch of stuff you wish you could use. And you’re left with some holes. You’re definitely left with some holes.Fred Holzhauer, interview with author, San Francisco, January 19, 2010; unless otherwise indicated all subsequent attributions derive from this interview.
Method, in collaboration with the Environmental Protection Encouragement Agency based in Hamburg, Germany, had amassed a list of safe, biodegradable chemicals to use as starting points for its products. Holzhauer had to ensure no unwanted interactions among those ingredients, but he also had to fill the holes in his toolkit to get the results he wanted. Method prized its products’ effectiveness above all else. If the products didn’t clean, it didn’t matter that they were natural, nontoxic, and beautiful. Far from limiting Method, the constraints beyond performance pushed the company to be more innovative.
Lowry had once called trade-offs among these various qualities “just a symptom of poor design.” In addition, Lowry considered it essential “to make sure that what you’re doing is really compelling for reasons other than being green. It has to be great in its own right, and green has to be just another part of its quality. The whole idea of eco-entrepreneur should become the standard for entrepreneurship in general.”Susanna Schick, “Interview with OppGreen 2009 Speaker, Adam Lowry, Chief Greenskeeper of Method,” Opportunity Green, November 3, 2009, accessed January 12, 2011, http://opportunitygreen.com/green-business-blog/2009/11/03/interview-with -oppgreen-2009-speaker-adam-lowry-chief-greenskeeper-of-method. Therefore, the chefs pushed further. They began to consult their networks.
Holzhauer said, “This is where collaboration and innovation really pay off. You start asking people who make detergents, [who] know you’re handy, and you leverage your relationships and you say, ‘Hey, would you guys entertain this thought? You guys can make sodium lauryl sulfate, but nobody’s making MIPA sulfate, and that’s the kind of tool that would really make a difference in what I’m doing, and it’s just not available commercially. How about you whip me up a lab sample, you know?’ And [from that] you get a new tool and you try it.”
The chefs continued to use their contacts, get new tools, and test and alter them. They eventually refined the formula, dubbed “smartclean,” to be as effective as “the nasty stuff” yet naturally based and eight times as concentrated. They found their detergent showed nonlinear improvement; doubling the concentration more than doubled its effectiveness.
Unexpectedly, they had moved into a new realm of chemistry in which few people had any experience working with such concentrated liquids. They kept testing the formula until they understood at the molecular level exactly what was happening (Figure 6.3). They also realized that the increased effectiveness meant they needed far less of the detergent to do the job, which meant the product could now compete on cost.On April 23, 2010, special sales excluded, Method’s detergent sold on Amazon.com for about \$0.31 per load, the same price as Tide with Febreze, while Seventh Generation sold for about \$0.27 per load and Gain for \$0.19 per load.
The gel capsule, however, continued to be a problem. Holzhauer talked to people in the paintball industry to get a sense of how big he could make a glycerin or gelatin capsule to hold the detergent. He wanted something that could fit in your hand, and the paintball people thought that could be done. Holzhauer concentrated the detergent enough so the capsule size would contain all the detergent needed. Yet the detergent was so effective that it dissolved the capsules as well unless they were made sufficiently thick, at which point they would not fully dissolve in cold-water wash conditions. Holzhauer tinkered with the formula. He knew petroleum-based and animal-derived ingredients could make it all work perfectly, but that violated Method’s premise of sustainability and ethics. “We tried and tried and tried. We just never got where we wanted to go,” Holzhauer said.Andrea Larson and Mark Meier, Method Products: Sustainable Innovation as Entrepreneurial Strategy, UVA-ENT-0159 (Charlottesville: Darden Business Publishing, University of Virginia, 2010).
Surviving Failure: Collaboration and the Container
Ever optimistic that a tablet/capsule solution would be found, the chefs still wanted to test the “smartclean” detergent itself. They thought of a diaper cream Method already sold and got a simple idea: put the detergent in the pump dispenser used for the diaper cream. Let people squirt the detergent straight into the washing machine instead of dissolving a tablet. Suddenly, the critical solution had taken hold in the chef group: a pump instead of a tablet for this detergent. Holzhauer talked with Josh Handy to refine the pump bottle. “If you get a bright idea,” Holzhauer said, “you walk over to Josh’s desk and you say, ‘Hey, dude. I need one of these. Could you whip it up for me?’ And he’s like, ‘Sure.’ He’ll ask a few questions about it and make sure it’s worth the time, but he’ll do it.”Andrea Larson and Mark Meier, Method Products: Sustainable Innovation as Entrepreneurial Strategy, UVA-ENT-0159 (Charlottesville: Darden Business Publishing, University of Virginia, 2010). Handy began working on Holzhauer’s pump and posted his drawings outside the bathroom so that employees from all over could see them and provide feedback.
Detergents typically consisted of two parts: a tail that could stick to oils (and other grime) and a head that stuck to water, so the dirt could be rinsed away in the wash. This structure led detergent molecules to clump into spheres, called micelles, in water because the hydrophilic heads faced out, while the hydrophobic tails, consisting of fat-like chemicals, faced in. Conventional detergents worked by breaking open the micelles in the washing machine so the tails could grab dirt, then getting the micelles to clump with the dirt in the middle so it could be washed away in the water. Breaking open micelles required agitation and thermal energy.
Method’s smartclean laundry detergent kept hydrophobic tails on the outside, available to interact with dirt immediately. This inverted micelle made the detergent more efficient: cleaning was improved while less detergent was wasted because it interacted more readily with dirt and less energy was needed to agitate and heat the laundry. This inverted property also reduced the amount of water needed in the liquid, thereby concentrating the detergent and reducing its mass and volume.
The switch from gel tablet to pump sent Holzhauer back to the lab to refine the formula. The detergent was incredibly viscous and had to be tweaked to work in a pump. It also had to be uniformly mixed so that each squirt dispensed exactly the same proportion of ingredients; in the tablet the ingredients would mix eventually once the tablet dissolved and thus could start off unevenly dispersed. Holzhauer wanted to keep tweaking the formula, which he had already refined to 95 percent effectiveness using all benign and renewable ingredients, but Method was preparing to launch the product. Holzhauer patented his work to date and continued to work on a revised version for future release.
The new detergent formula appeared to work in the pump, so Method shifted emphasis to making sure it could get the container it needed. Handy’s final design featured a pump mechanism that was easy to depress without Herculean strength (Figure 6.5 and Note 6.14 "Video Clip"). The pump would also encourage people to use the recommended amount, unlike conventional detergent container caps, which were designed to be much bigger than the amount of liquid actually needed and thus encouraged people to overuse detergent. A standard cap and bottle also made measurement a two-handed task, and a full bottle of typical two-times concentrate could easily weigh seven pounds or more. In contrast, Method’s customer could hold a laundry basket or child in one hand and dispense the necessary smartclean detergent—four short squirts—with the other. Method’s fifty-load smartclean container, when full, weighed less than two pounds.
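The weight figures in the case translate into a stark per-load comparison. A minimal sketch, using the bottle weights quoted above; note that the conventional bottle’s load count is an assumption for illustration, since the case gives only its weight:

```python
# Per-load weight comparison. The 2-lb / 50-load figures come from the case;
# the conventional bottle's 32-load capacity is a hypothetical assumption.

def weight_per_load(bottle_weight_lb: float, loads: int) -> float:
    """Pounds of full bottle per wash load."""
    return bottle_weight_lb / loads

smartclean = weight_per_load(2.0, 50)    # "less than two pounds," fifty loads
conventional = weight_per_load(7.0, 32)  # "seven pounds or more," loads assumed

print(f"smartclean:   {smartclean:.3f} lb/load")    # 0.040 lb/load
print(f"conventional: {conventional:.3f} lb/load")  # 0.219 lb/load
```

Even with a generous load count assumed for the conventional bottle, the smartclean container carries roughly a fifth of the weight per load, which is what made one-handed dispensing practical.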
Video Clip
Sticky (Laundry) Situation
Handy’s design next had to be mass-produced. That task fell largely to the packaging engineering and project management teams. Collaboration was the key for these groups; said one packaging engineer, “I can literally turn my chair around and help contribute.”Andrea Larson and Mark Meier, Method Products: Sustainable Innovation as Entrepreneurial Strategy, UVA-ENT-0159 (Charlottesville: Darden Business Publishing, University of Virginia, 2010). Method employees worked at common long tables rather than in cubicles, so they could overhear discussions and add to them. The project manager believed innovation was more important than strict adherence to procedures. The packaging engineers, known internally as plastic surgeons, found they could use a stock engine (the internal workings of the pump) for the detergent but needed a custom top to match Method’s aesthetic and operating needs. They reached out to various suppliers and found only one willing to collaborate on the custom design. Method agreed to pay for the tooling necessary to produce the top.
Method wanted the bottle itself to have transparent windows to show the contents and the newly designed angled dip tube that sucked in the detergent. That transparency would allow customers to ensure the tube always rested at the bottom of the container to extract all the detergent rather than leave some behind. Method selected an independent, California-based plastics manufacturer that produced almost two hundred million containers annually and had experience using post–consumer recycled number-two plastic, or high-density polyethylene (HDPE). Method pushed the recycled content as high as it could before it started sacrificing transparency. Hence the company ultimately settled on 50 percent virgin HDPE and 50 percent PCR HDPE.
Simultaneously, Method worked to produce the detergent in sufficient volume. Method’s operations department began working with the contract manufacturer it selected to make the product, a supplier for almost all the biggest personal care and cleaner companies, including P&G, Unilever, and others. Method’s operations team ushered the smartclean detergent from batches in Method’s labs to identical but more massive batches in the factory. A pilot manufacturing run revealed new problems—and a new opportunity. The first factory batch was contaminated by dirt from the bottling equipment because the Method detergent was so powerful, it cleaned out the lines as it moved through the system. Although Method had to nix the batch, it stumbled into a potential market: industrial cleaners. Indeed, the manufacturer began using the smartclean laundry detergent as its default factory equipment cleaner.
Finally, Method needed new shrink-wrapping equipment to put a label around the bottle’s unique shape and to keep the pump locked while allowing customers to unscrew the cap and sniff the detergent, a design requirement responding to the desire of many buyers to smell the contents. Method worked with the manufacturer to get the results it needed and invested in the new equipment.
Quantifying Sustainability
Moving through the development and production of the smartclean laundry detergent, Method wanted to assess the product’s environmental impact. The first step was cradle-to-cradle certification, and smartclean was the first laundry detergent to obtain it. Smartclean also was recognized by the US Environmental Protection Agency’s Design for Environment program because of its nontoxic, biodegradable formulation.
In addition, Method wanted to calculate the detergent’s overall carbon footprint. It collaborated with Planet Metrics, a Silicon Valley start-up founded in 2008 with \$2.3 million in venture capital Series A funding. The young company was eager to collaborate with a company such as Method to build its reputation and beta-test its Rapid Carbon Modeling software. The ultimate goal of the software was to give companies a quick way to calculate returns on investment for various sustainability options. It did so by measuring life-cycle carbon emissions from a product throughout Scopes 1, 2, and 3. Those scopes are defined by the Greenhouse Gas Protocol accounting method as follows:
• Scope 1. All direct GHG emissions.
• Scope 2. Indirect GHG emissions from consumption of purchased electricity, heat, or steam.
• Scope 3. Other indirect emissions, such as the extraction and production of purchased materials and fuels, transport-related activities in vehicles not owned or controlled by the reporting entity, electricity-related activities (e.g., transmission losses) not covered in Scope 2, outsourced activities, waste disposal, and so forth.Greenhouse Gas Protocol Initiative, “Calculation Tools: FAQ,” accessed August 24, 2010, http://www.ghgprotocol.org/calculation-tools/faq.
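The three-scope breakdown above is simply an accounting convention: each scope groups emission sources, and a product's life-cycle footprint is the sum across scopes. A minimal sketch of that aggregation follows; the source names and tonnage figures are hypothetical, for illustration only, not Method's actual data.

```python
# Minimal sketch of GHG Protocol scope accounting.
# All emission sources and figures below are hypothetical (t CO2e).

emissions = {
    "scope1": {"onsite_fuel_combustion": 120.0},     # direct emissions
    "scope2": {"purchased_electricity": 340.0},      # indirect, purchased energy
    "scope3": {                                      # other indirect emissions
        "purchased_materials": 910.0,
        "upstream_transport": 210.0,
        "waste_disposal": 45.0,
    },
}

def scope_total(scope):
    """Sum the emission sources reported under one scope."""
    return sum(emissions[scope].values())

def total_footprint():
    """Total across all three scopes (a cradle-to-gate-style roll-up)."""
    return sum(scope_total(s) for s in emissions)

for s in emissions:
    print(f"{s}: {scope_total(s):.1f} t CO2e")
print(f"total: {total_footprint():.1f} t CO2e")
```

In practice a tool such as Planet Metrics's software would populate each scope from life-cycle inventory data rather than hand-entered figures, but the roll-up logic is the same.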
Planet Metrics analyzed Method’s detergent from cradle to gate: all the activities and material needed to produce the product and get it ready to ship to retailers. Emissions from later stages (shipping to retailers, recycling old bottles, and so on) would presumably also be lower, because the bottles used less material overall, meaning less energy consumed and less mass moved for an equivalent cleaning capacity. Looking just at cradle to gate, however, Method’s smartclean detergent had a per-load carbon footprint 35 percent smaller than that of the average two-times-concentrated laundry detergent. It also used 36 percent less plastic and 33 percent less oil and energy. Finally, consumers would be more likely to use the appropriate amount of detergent, making the actual reductions even greater.
Conclusion
Reflecting on the path of the smartclean laundry detergent, Lowry considered it a success born of failure. People were given space to create and collaborate within Method and across its supply chain. They didn’t shut down after encountering obstacles; they got more creative and talked to each other. Lowry noted that some workers in the factories where the detergent was made volunteered to work extra, unpaid shifts because they believed they were part of something bigger: a social change, not just another way to make money from people’s dirty laundry. He said, “Cultures are the only sustainable competitive advantage. We don’t see the innovation per se as the competitive advantage. We see the ability to innovate as the competitive advantage. If you’re going to do that you’ve got to build a different type of company where you literally are built—the people and the culture of the place—around the ability to bring the best ideas forward and let them live and let them thrive. Each innovation gives us license to innovate again.”Andrea Larson and Mark Meier, Method Products: Sustainable Innovation as Entrepreneurial Strategy, UVA-ENT-0159 (Charlottesville: Darden Business Publishing, University of Virginia, 2010).
By summer 2010, it appeared the new laundry detergent was a successful launch. What was the next focus for this young, fast-moving company? Typical growth challenges faced the firm and its entrepreneurial founders. As it grew from start-up to midsize, could its innovative output be maintained? How should management’s attention be allocated across its innovation imperatives and its proliferation of product offerings and growth demands? And what was the end point? Was the goal to grow Method indefinitely?
Next we look more closely at the emerging category of B Corporations. Although the number of companies listed as or applying to become B Corporations remains relatively small, the rapid spread of the designation in the short time it has been available suggests growing interest in this alternative business model. The B Corporation designation extends entrepreneurial innovation toward cleaner, more benign, and less destructive business footprints by legally protecting a company’s commitment to those strategic goals, even when the firm is acquired by a larger corporation that may not share the same values.
B Corporation: A New Sustainable Business Model
We envision a new sector of the economy which harnesses the power of private enterprise to create public benefit.
- B Lab, “Declaration of Interdependence,” 2010
Jay Coen Gilbert and Bart Houlahan were friends as undergraduates at Stanford University. In 1993, a few years after graduation, they helped start the basketball shoe and apparel company AND1. As the company grew, cofounder Gilbert and president Houlahan emphasized financial success along with corporate social responsibility (also called triple-bottom-line strategy and sustainable business): AND1 paid employees respectable wages, donated 5 percent of profits to charity, and made sure factories in China met their standards. The company was generating close to \$250 million in annual revenue when it was sold to American Sporting Goods Inc. in 2005. Gilbert and Houlahan were personally enriched but disempowered: they watched their effort to create an innovative business model vanish under the new owners.Susan Adams, “Capitalist Monkey Wrench,” Forbes, March 25, 2010, accessed January 12, 2011, http://www.forbes.com/forbes/2010/0412/rebuilding-b-lab-corporate -citizenship-green-incorporation-mixed-motives.html; Peter Van Allen, “American Sporting Goods Buys AND1,” Philadelphia Business Journal, May 18, 2005, accessed January 12, 2011, http://www.bizjournals.com/philadelphia/stories/2005/05/16/daily16.html.
Gilbert and Houlahan were not alone. Ben & Jerry’s, known for its ice cream and its social responsibility, was sold to Unilever in 2000. Although some board members had misgivings, they voted for the sale because Vermont, like most states, required the board to act in the interest of the shareholders, which meant accepting Unilever’s exceptionally lucrative offer.
Gilbert and Houlahan wanted a way to protect the triple bottom line (combined financial, social, and environmental performance) of companies even as companies switched owners, evolved over time, or grappled with shareholders’ desire for greater dividends. That desire motivated them to contact another Stanford classmate, Andrew Kassoy, a private equity investor with the MSD Capital real estate fund of the Michael & Susan Dell Foundation. Each man invested \$1 million of his own money to start the nonprofit B Lab in 2006 to bring about their shared vision of capitalist corporations working simultaneously toward financial health and social and environmental benefits.Susan Adams, “Capitalist Monkey Wrench,” Forbes, March 25, 2010, accessed January 12, 2011, http://www.forbes.com/forbes/2010/0412/rebuilding-b-lab-corporate -citizenship-green-incorporation-mixed-motives.html; April Dembosky, “Protecting Companies that Mix Profitability, Values,” NPR Morning Edition, March 9, 2010, accessed January 12, 2011 http://www.npr.org/templates/story/story.php?storyId=124468487; B Corporation, “About B Corp.: The Team,” accessed April 18, 2010, www.bcorporation.net/index.cfm/nodeID/0360E845-9F78-4D71-8833-677CAC12CEF4/fuseaction/content.page.
With additional funding from the Rockefeller Foundation, B (as in benefit) Lab created various tools to help companies achieve these broader goals. B Lab developed the B Impact Rating System (BIRS), which companies could use to assess their social and environmental performance. B Lab also established standards for transparency and a basic legal framework that companies could incorporate into their articles of incorporation to safeguard their social and environmental goals, especially in times of transition. Finally, Gilbert, Houlahan, and Kassoy recruited eighty-one companies that scored high enough on the BIRS and were willing to commit formally to transparency and working for the greater public benefit. Thus, in late 2007, the first B Corporations appeared.
B Corporations were companies that were third-party-certified by B Lab, demonstrating they were serious about sustainability strategies and corporate social responsibility. They sought certification because they wanted to distinguish themselves from competitors, wanted to reassure consumers and investors, and fundamentally believed that “doing good” had to become part of business itself, not ancillary to it. Hence B Corporations shared “[a vision that] is simple yet ambitious: to create a new sector of the economy which uses the power of business to solve social and environmental problems.…As a result, individuals and communities will have greater economic opportunity, society will have moved closer to achieving a positive environmental footprint, more people will be employed in great places to work, and we will have built more local living economies in the U.S. and across the world.”B Corporation, “Why B Corps Matter,” accessed April 18, 2010, www.bcorporation.net/why.
Objectives and Advantages of B Corporation Status
B Corporations shifted the emphasis of business from shareholder value to stakeholder value. Employees, consumers, and communities, including the environment, should all benefit from economic activity. B Corporations hoped to create these benefits in three ways. First, in addition to financial goals, they established explicit social and environmental goals and strategies. Second, these companies were transparent about their operations, their broader stakeholder goals, and their progress toward those goals. To be certified as B Corporations, companies had to earn at least eighty of two hundred possible points on the BIRS survey and submit to random audits of their social and environmental performance. That way, investors and customers knew where their money really went, and the B Corporation could dissociate itself from “greenwashing” and other brand risks. Finally, B Corporations incorporated their sustainability principles explicitly into their governance documents. Formalizing these principles was believed to help these companies survive transitions and gave B Corporations some legal grounds for considering social and environmental consequences as well as shareholder returns in their decisions.
To be considered for certification, a company paid an application fee to B Lab and submitted its BIRS survey responses along with documentation for some of the answers. The BIRS survey covered an array of categories organized largely by stakeholders: Accountability, Employees, Consumers, Community, and Environment. For instance, under Employees, a series of questions covered employee benefits, including health insurance coverage and premiums, sick days and maternity leave, training opportunities, flexible schedules, and so on. Completing the BIRS took about sixty to ninety minutes, according to B Lab, and after submitting the survey to B Lab, companies received a report (Figure 6.6). Those companies that met or exceeded the eighty-point minimum could be certified. Through mid-2010, more than four-fifths of companies that applied for B Corporation status had been rejected. In addition to screening new companies, B Lab audited 10 percent of existing B Corporations in any given year, and any company that fell below acceptable performance standards had ninety days to correct the problem.B Corporation, “The B Impact Rating System,” accessed January 12, 2011, www.bcorporation.net/index.cfm/fuseaction/content.page/nodeID/f6780de0-cf1b-44a3-b8e4-195abbe68fb5; B Lab, Large Manufacturer Impact Assessment: Version 2.0, accessed January 12, 2011, www.bcorporation.net/resources/bcorp/documents/2010-B-Impact-Assessment%20%281%29.pdf; B Corporation, “Audits,” accessed January 31, 2011, www.bcorporation.net/audits.
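The certification rule described above reduces to a simple threshold: sum the points earned across the stakeholder categories and compare the composite against the eighty-point minimum. The sketch below illustrates that rule; the category names follow the text, but the point values for the sample applicant are hypothetical.

```python
# Sketch of the BIRS pass/fail rule: a company must earn at least
# 80 of 200 possible points to qualify for certification.
# Category scores below are hypothetical, for illustration only.

CERTIFICATION_MINIMUM = 80

def composite_score(category_scores):
    """Total B Impact score across all stakeholder categories."""
    return sum(category_scores.values())

def is_certifiable(category_scores):
    """True if the composite score meets the 80-point minimum."""
    return composite_score(category_scores) >= CERTIFICATION_MINIMUM

applicant = {
    "Accountability": 14,
    "Employees": 22,
    "Consumers": 9,
    "Community": 25,
    "Environment": 18,
}

print(composite_score(applicant))   # 88
print(is_certifiable(applicant))    # True
```

The high rejection rate reported through mid-2010 (more than four-fifths of applicants) suggests most companies fell below this threshold or could not document their answers.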
The rating criteria were continually reviewed and revised by B Lab’s Standards Advisory Council, which included one B Lab member and eight independent members from business, academic, and nonprofit organizations. In 2010, the BIRS was in version 2.0, with version 3.0 under development. The original version 1.0 was developed from various extant corporate social responsibility metrics plus input from more than six hundred reviewers. Since B Lab provided its rating system to anyone, not just applicants, more than one thousand companies used BIRS in 2009 to monitor their performances. In conjunction with private investment funds and government agencies, B Lab in 2010 was also developing a Global Impact Investing Rating System for investors.B Corporation, 2009 B Corporation Annual Report, 5, accessed January 12, 2011, www.bcorporation.net/index.cfm/fuseaction/content.page/nodeID/dec9e60f -392c-4207-8538-be73be69cf85/externalURL.
In exchange for working for the public benefit and submitting to greater transparency and scrutiny, B Corporations received a number of benefits. First, they reduced the effects of labor, environmental, and other problems to their companies’ brands. Second, they shared ideas and services with each other. Indeed, in addition to routinely swapping best practices, B Corporations provided services to one another at a discount and helped each other find like-minded suppliers, consultants, and investors. Third, B Corporations had access to support from B Partners (non-B Corporations that nonetheless supported the concept) and the B Lab, which promoted B Corporations and helped devise metrics and attract investors and customers.
Among other things, B Lab advocated for state laws that favored B Corporations. As of 2010, no state yet recognized B Corporations as a distinct legal form, although the Maryland legislature passed a bill March 29, 2010, that would create separate legal recognition for B Corporations and give them some protection if shareholders sued to improve their returns at the expense of social and environmental goals.Douglas Tallman, “Maryland in Line to Become B Corporations Pioneer,” Gazette.Net, March 29, 2010, accessed January 12, 2011, www.gazette.net/stories/03292010/polinew175638_32561.php. Six other states were considering comparable laws, and the city of Philadelphia had already announced it would give \$4,000 in tax breaks to twenty-five B Corporations in the years 2012 through 2017. Ultimately, B Lab hoped the IRS would recognize B Corporations with a different tax status and expected B Corporations to equal nonprofits’ current share of GDP, about 5 percent, within twenty years.“The B Corporation: A Business Model for the New Economy,” Impact Investor, accessed January 12, 2011, http://www.theimpactinvestor.com/b-corp-model-rewrites-the-c.html; B Corporation, “Why B Corps Matter,” accessed April 18, 2010, www.bcorporation.net/why.
Other benefits may accrue to B Corporations. One may be attracting more talented and dedicated employees, because people are motivated to work for organizations that care about a triple (economic, social, environmental) rather than a single bottom line. Schools may help push their graduates in that direction. Already, the Yale School of Management has offered to forgive its graduates’ loans if they work for B Corporations.Carole Bass, “B School B Good,” Yale Alumni Magazine, March 29, 2010.
The B Corporation Community
As of March 2010, 285 B Corporations existed in 54 industries in 27 states and the District of Columbia. Most were in California, which had 81 B Corporations, followed by Pennsylvania with 37, and New York with 20. One-third of all B Corporations were in financial or business-to-business services (Figure 6.7). Collectively, the B Corporation community generated \$1.1 billion in revenues and saved more than \$750,000 through discounts they offered each other.B Lab, “Home,” accessed April 20, 2010, http://www.bcorporation.net.
B Corporations encompassed a diverse group, including shoemaker Dansko, renewable energy contractors and lawyers, real estate management firms, banks, and tea distributor Numi Organic Tea. They ranged from older companies to relatively young entrepreneurial ventures and included both explicitly green service providers and more conventional service providers. B Corporations tended to be smaller, privately held, and incorporated in states that encouraged sustainable businesses. Several of them are profiled briefly here.
King Arthur Flour
Based in Norwich, Vermont, King Arthur Flour was 100 percent employee owned and the country’s oldest flour maker, having operated continuously for more than two hundred years. It had gross sales of more than \$3 million in 2009 at its flagship store and was “the number-one selling unbleached flour in every market where we have full distribution.”King Arthur Flour Company, “About the King Arthur Flour Company,” accessed April 20, 2010, http://www.kingarthurflour.com/about. The company was committed to environmental stewardship and did not use chemical additives or genetically modified wheat in its flours. Employees could take forty paid hours each year to volunteer at nonprofit organizations, and the company donated 5 percent of profits to charities and offered free baking classes to children. The company won numerous awards for its efforts, including a 2008 Wall Street Journal Top Small Workplaces Award and a 2008 WorldBlu Most Democratic Workplaces Award. King Arthur Flour was a founding B Corporation, certified in June 2007.King Arthur Flour Company, “Good Works,” accessed April 20, 2010, www.kingarthurflour.com/about/goodworks.html#a2; B Corporation, “King Arthur Flour,” accessed January 12, 2011, www.bcorporation.net/kingarthurflour.
Seventh Generation
Seventh Generation had been making nontoxic, sustainability-oriented cleaning and household paper products since 1990. In 2008, the sale of its products saved more than fifty million gallons of water and one million gallons of petroleum compared with conventional products, and the company generated around \$4 million in pretax profits, 10 percent of which was donated to charities. Seventh Generation took its name from the Iroquois injunction to “consider the impact of our decisions on the next seven generations.”Seventh Generation, Corporate Responsibility 2.0: Our Corporate Consciousness Report, 2009, accessed January 12, 2011, http://www.7genreport.com; Seventh Generation, “About Us: About Seventh Generation,” accessed April 20, 2010, http://www.seventhgeneration.com/about?link-position=footer.
Like King Arthur Flour, Seventh Generation was based in Vermont and became a founding B Corporation in June 2007. According to Jeffrey Hollender, “Executive Chairperson and Chief Inspired Protagonist,” as well as the coauthor of The Responsibility Revolution,Jeffrey Hollender, The Responsibility Revolution: How the Next Generation of Businesses Will Win (Hoboken, NJ: Jossey-Bass, 2010). “Seventh Generation decided to become a B Corporation because there needs to be standards around corporate responsibility. In a landscape in which every company now says they’re a responsible business, there is no way for consumers, investors, and other stakeholders to tell real responsible businesses apart from those businesses that just say they are. The dual focus of B Corp, which involves a change to your bylaws and a comprehensive evaluation, is the best way to separate companies that really are responsible from ones that just pretend to be so.”B Corporation, 2009 B Corporation Annual Report, 12, accessed January 12, 2011, www.bcorporation.net/index.cfm/fuseaction/content.page/nodeID/dec9e60f-392c-4207-8538-be73be69cf85/externalURL.
Trillium Asset Management
This Boston-based firm pioneered socially responsible investing in 1982 and in June 2008 became a B Corporation with a Composite B Score of 116.9 points. The company handled about \$900 million in investments in 2009 from individuals and institutions and was “deeply committed to using the power of capital markets to move toward a sustainable economy that properly values people and planet.”B Corporation, “Trillium Asset Management,” accessed April 18, 2010, www.bcorporation.net/trillium; Trillium Asset Management, “Trillium Asset Management Corporation Announces Hiring of Matthew Patsky as Its New CEO,” news release, October 21, 2009, accessed January 12, 2011, trilliuminvest.com/news-articles-category/trillium-announces-hiring-of-matthew-patsky-as-its-new-ceo. Trillium’s thirty employees included several people focused on ecological and social impact research, and all employees could benefit from generous profit sharing. In addition, Trillium purchased carbon offsets for its operations and took other steps to improve its own sustainability record.
Greenlight Apparel
Founded to combat child labor, Greenlight Apparel gave 5 percent of sales to charities that helped achieve its mission. The company garnered 76 percent of the possible BIRS points when it was certified in December 2009. Greenlight’s interest was as follows: “[We] became a B Corporation because we wanted to add a third-party endorsement to our social and environmental efforts. Not only does it signify our willingness to strive to be a better corporation by our internal measures, but also gives us an opportunity to measure our impact against our peers.”B Corporation, “Greenlight Apparel,” accessed April 18, 2010, www.bcorporation.net/greenlightapparel.
The company, with headquarters in Fremont, California, began as a project for five business school students at the University of California–Davis after they observed labor conditions at apparel factories in Asia. The students assembled a business proposal that made it to the finals of the Global Social Venture Competition. Their first chance to prove their business model came when a Silicon Valley marathon race placed an order for shirts.
Deep Ecology
Deep Ecology was a scuba diving shop in Haleiwa, Hawaii, that was dedicated to protecting marine wildlife and habitats while providing divers of all levels with a great experience. Started in 1996 by Ken O’Keefe with \$8,000, the shop regularly dispatched employees to pick up trash from the ocean and rescued animals trapped by debris or abandoned fishing lines and nets, so-called ghost nets. Eventually, O’Keefe realized an environmental focus could differentiate his company from other dive shops. He changed the company’s name from North Shore Diving Headquarters to Deep Ecology because it would make it easier to franchise new shops “and more importantly, it reflects our unparalleled commitment to protection of the marine environment.”Ken O’Keefe, “Our History,” Deep Ecology, accessed April 18, 2010, http://www.oahuscubadive.com/our_history.html; B Corporation, “Deep Ecology,” accessed April, 18, 2010, www.bcorporation.net/deepecology. When Deep Ecology was certified as a B Corporation in December 2009, its rating was highest in the Environment category.
Conclusion
In 2010, B Corporation status was growing in importance as a reliable standard by which companies could demonstrate their commitment to social and environmental goals concurrently with their commitment to financial performance. That commitment to stakeholders, not just shareholders, was verified by the nonprofit B Lab and allowed B Corporations of all kinds to attract sustainability-conscious customers, investors, and vendors. That verification also allowed B Corporations to receive legal and technical support and various incentives to pursue their commitments over the long run.
KEY TAKEAWAYS
• When you are a small competitor challenging large incumbent firms, you must continue to innovate to differentiate your products. The innovation process requires input and creativity from a wide range of participants.
• “Failure” is another opportunity to learn and regroup; a culture that supports this view is more likely to foster institutional innovation and creativity.
EXERCISES
1. What cultural factors at Method help or hinder innovative initiatives?
2. Describe the innovation process for this new laundry detergent product introduction.
3. What is necessary to turn failure into success?
4. Imagine you are an executive who has to convince your board of directors to convert the company into a B Corporation. What would you argue? What would you expect the board to argue?
5. Identify a company that interests you and develop a strategy based on what you learned from the Method cases. | textbooks/biz/Business/Advanced_Business/Book%3A_Sustainability_Innovation_and_Entrepreneurship/06%3A_Clean_Products_and_Health/6.03%3A_Sustainability_Innovation_as_Entrepreneurial_Strategy.txt |
Learning Objectives
1. Examine the benefits of and barriers to a green chemistry innovation effort in the pharmaceutical industry.
2. Appreciate how health and environmental issues can be viewed as opportunities, not burdens.
3. Analyze the operating and financial benefits of green chemistry innovation.
At Pfizer, Yujie Wang reviewed the presentation she had prepared for the executive committee’s strategy meeting later that afternoon. She wanted to build on the company’s previous successes in green chemistry. Three of the committee members were familiar with the ideas, and she could count on their support. Four others had pushed for new ideas to be fed into their group over the last year. Depending on the strength of her argument this time, they might be persuaded to support the project. The remaining two members, who had significant responsibility for product development and operations, respectively, were somewhat less predictable. She had informed them of her progress during the project, but they seemed uninterested at best. Then again, they were busy people, and it had been hard to schedule the intermediate briefings she wanted to hold to update everyone. She knew the executives must be won over at least to a stance of “no opposition” to the proposal she would make.
Pharmaceuticals and Personal Care Products
The objective of an efficacious pharmaceutical is to make certain molecules biologically active in humans. Not surprisingly, however, the same molecules that produce the desired results can have adverse effects in the body as well as postpatient, after the drug is excreted from the body and its active ingredients are released from disposal pipes into streams and other water bodies.
Regulations require extensive pretesting of drug toxicity (typically conducted by subcontractors) on different aquatic and mammalian species. Some argue the tests are sufficient; critics question how accurately those surrogate studies can predict real-world results. Sweden, a nation that has aggressively studied chemical impacts on health and ecological systems, actively restricts nonbenign drug manufacture and distribution, requiring labeling of environmental toxins and imposing sales caps and even bans. The European Union’s 2005 Registration, Evaluation, Authorization, and Restriction of Chemicals (REACH) legislation would impose additional requirements on drug manufacturers serving a market of 450 million people.
According to the US Environmental Protection Agency (EPA), pharmaceuticals and personal care products (PPCPs) presented scientific concerns for the following reasons:
Large quantities of a wide spectrum of PPCPs (and their metabolites) can enter the environment following use by multitudes of individuals or domestic animals and subsequent discharge to (and incomplete removal by) sewage treatment systems. PPCP residues in treated sewage effluent (or in terrestrial runoff or directly discharged raw sewage) then enter the environment. All chemicals applied externally or ingested (and their bioactive transformation products) have the potential to be excreted or washed into sewage systems and from there discharged to the aquatic or terrestrial environments. Input to the environment is a function of the efficiency of human/animal absorption and metabolism and the efficiency of the waste treatment technologies employed—if any (sewage is sometimes discharged without treatment by storm overflow events, failure of systems, or “straight piping”). Removal efficiencies from treatment plants vary from chemical to chemical and between individual sewage treatment facilities (because of different technologies employed and because of operational fluctuations and “idiosyncrasies” of individual plants). Obviously, discharge of untreated sewage maximizes occurrence of PPCPs in the environment. No municipal sewage treatment plants are engineered for PPCP removal. The risks posed to aquatic organisms (by continual lifelong exposure) and to humans (by long-term consumption of minute quantities in drinking water) are essentially unknown. While the major concerns to date have been the promotion of pathogen resistance to antibiotics and disruption of endocrine systems by natural and synthetic sex steroids, many other PPCPs have unknown consequences. The latter are the focus of the ongoing U.S. EPA Office of Research and Development (ORD) work summarized here.US Environmental Protection Agency, “PPCPs: Frequent Questions,” accessed January 12, 2011, www.epa.gov/ppcp/faq.html.
Pfizer
In 2005 Pfizer employed fifteen thousand scientists and support staff in seven major labs around the world. Every weekday thirty-eight thousand sales representatives sold Pfizer products. The company’s \$3 billion annual advertising budget made it the fourth-largest US advertiser. In spring 2005 Pfizer was interviewing to fill the position of vice president of green chemistry. The new position reported to Dr. Kelvin Cooper, senior vice president, Worldwide Pharmaceutical Sciences, Pfizer Global R&D. The individual who would fill the position would need to examine the competitive challenges ahead, the internal progress to date, and ways to build on the Zoloft and Viagra innovation success stories in the context of a green chemistry embedded in corporate strategy. In the short run, how could the company take the lessons learned from those two cases and apply them beneficially elsewhere?
Exploring those questions had been Yujie Wang’s task for the past two months. The innovations provided dramatic cost savings, and the removal of toxic materials reduced both costs and risk. Given growing global attention to corporate accountability, increased government scrutiny of pharma companies, and the fast-growing popularity of sustainable business strategy, could adoption of a green chemistry strategy help Pfizer’s reputation and offer growth as well as profit opportunities? In this industry, companies competed primarily on drug offerings and secondarily on process, with “maximum yield” as the main objective to maximize profitability.
Adding sustainability to the mix meant explicitly integrating human and community health as well as ecological system preservation into corporate performance. Sustainable development ideas introduced decades earlier had been transformed into business practices and were implemented through strategy by well-known global companies such as Toyota, General Electric, Walmart, Electrolux, and United Technologies. Walmart and General Electric announced sustainability as part of their core strategy in 2005. The goal was to achieve financial success concurrently with these broader objectives.
Debate on climate change and discussion of pollutants’ effect on human health and the environment had raised awareness of the human influence on natural systems, and consequently financial institutions and insurance companies were paying more attention to firms’ existing and future liabilities. In the face of increased scrutiny by governments and nongovernmental organizations (NGOs), firms were starting to assess their own vulnerabilities and opportunities with respect to such topics. Sustainability and sustainable business were two common terms in that discussion. Others in business used the phrase triple bottom line, which referred to performance across financial, social, and ecological standards, or strategy attuned to economy, equity, and environment.
According to Joanna Negri, a process chemist and manager, and a member of the company’s green chemistry team, Pfizer “views sustainability and green chemistry as outcomes of good science—and this provides competitive business advantage through enhanced efficiency and safer processes.”Alia Anderson, Andrea Larson, and Karen O’Brien, Pfizer Pharmaceuticals: Green Chemistry Innovation and Business Strategy, UVA-ENT-0088 (Charlottesville: Darden Business Publishing, University of Virginia, January 22, 2007). Unless otherwise noted, other quotations in this section come from this case.
Green Chemistry at Pfizer
In 2002 Pfizer won the US Presidential Green Chemistry Award for Alternative Synthetic Pathways for its innovation of the manufacturing process for sertraline (“sir-tra-leen”) hydrochloride (HCl). Sertraline HCl was the active ingredient in the pharmaceutical Zoloft. Zoloft, in 2005 the most prescribed agent of its kind, was used to treat clinical depression, a condition that struck more than twenty million US adults and cost society nearly \$44 billion annually. As of February 2000, more than 115 million Zoloft prescriptions had been written in the United States; 2004 global sales grew to \$3.11 billion.Patrick Clinton and Mark Mozeson, “Pharm Exec 50,” PharmExec, May 2010, accessed January 12, 2011, pharmexec.findpharma.com/pharmexec/data/articlestandard//pharmexec/222010/671415/article.pdf.
Applying the principles of green chemistry to the Zoloft line, Pfizer dramatically improved the commercial manufacturing process of sertraline HCl. After meticulously investigating each of the chemical steps, Pfizer implemented green chemistry technology for this complex commercial process, which required extremely pure product. As a result, Pfizer significantly improved both worker and environmental safety. The new commercial process (referred to as the “combined” process) offered dramatic pollution prevention benefits including improved safety and material handling, reduced energy and water use, and a doubled overall product yield.US Environmental Protection Agency, “2002 Greener Synthetic Pathways Award,” accessed January 31, 2011, www.epa.gov/gcc/pubs/pgcc/winners/gspa02.html. That success inspired green chemistry enthusiasts at Pfizer to look for other manufacturing processes to which the principles could be applied.
Complicating matters, however, was the state of the pharmaceutical industry in 2005: it was beleaguered by multiple issues affecting brand and profit margins, criticism of industry’s policies on access to drugs in poorer communities, and lawsuits resulting from unexpected side effects. Could greener processes provide Pfizer an edge in this shifting landscape? Would they generate both the cost savings needed to justify the effort and the social capital that would support Pfizer’s reputation, brand, and even its license to operate?
In 2001, informal conversations at a conference at the University of Massachusetts’s Center for Sustainable Production had marked the beginning of Pfizer’s involvement in green chemistry. While there, Dr. Berkeley Cue, then vice president of pharmaceutical sciences research at Groton Labs (reporting to Pfizer Global R&D’s Cooper) was surprised to learn that some Pfizer environment and safety chemists in attendance shared his interest. Impressed by the green chemistry work of professor and chemist John Warner at University of Massachusetts, Cue believed the approach held potential for Pfizer.
During 2001 through 2004 Cue built a group at Groton, pulling in the discovery chemists from R&D to optimize products from the design stage. In talking with other R&D sites at Pfizer, the network quickly spread to the UK offices and to California’s Pfizer R&D center in La Jolla. When Pfizer purchased Pharmacia in 2003, the company discovered that some of its new acquisition’s R&D people were interested in green chemistry. Cue described his role as supporting a bottom-up initiative: “I brought people together in a tactical way and provided resources to give them a strategy and a voice upwards in the organization, and out.”Alia Anderson, Andrea Larson, and Karen O’Brien, Pfizer Pharmaceuticals: Green Chemistry Innovation and Business Strategy, UVA-ENT-0088 (Charlottesville: Darden Business Publishing, University of Virginia, January 22, 2007).
In late 2003 a steering committee was formed to address the importance of the ideas for the corporation overall. Soon the active pharmaceutical ingredient (API) chemists joined in, and communication about the ideas expanded to legal and corporate affairs and R&D/manufacturing codevelopment teams. The committee communicated the message up and down the corporate hierarchy. Even the global marketing division was interested in the potential of this approach. By 2005, Pfizer had green chemistry activity in all seven of its R&D sites and had even begun to educate the federal oversight agency for the pharmaceutical industry, the Food and Drug Administration (FDA). (The FDA, with its legislative commitment to not compromise patient safety, was viewed by many as a demanding taskmaster that could dictate significant green chemistry changes to production that, although beneficial, would require long approval time frames.)
E-Factors and Atom Economy
Green chemistry is the design of chemical products or processes that eliminates or reduces the use and generation of hazardous substances. The application of green chemistry principles provided a road map that enabled designers to use more benign and efficient methods.
The industry used an assessment tool called E-factor to evaluate all major products. E-factor was defined in this industry as the ratio of total kilograms of all materials (raw materials, solvents, and processing chemicals) used per kilogram of API produced. Firms were identifying drivers of high E-factor values and taking actions to improve efficiency.
A pivotal 1994 study indicated that for every kilogram of API produced, between twenty-five and one hundred kilograms or more of waste was generated as standard practice in the pharmaceutical industry, a figure that was still common to the industry in 2005. Multiplying the E-factor by the estimate of kilograms of API produced by the industry overall suggested that for the year 2003 as much as 500 million to 2.5 billion kilograms of waste could be the by-product of pharma API manufacture. That waste represented a double penalty: costs associated with purchasing chemicals that were diverted from API yield and costs associated with disposing of that waste (ranging from \$1 to \$5 per kilogram). Very little information was released by industry competitors, but a published 2004 GlaxoSmithKline life-cycle assessment of its API manufacturing processes revealed that 75–80 percent of the waste produced was solvent (liquid) and 20–25 percent solid, of which a considerable proportion was likely hazardous under state and federal law.
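The waste and cost arithmetic above can be sketched in a few lines. The E-factor range and disposal costs are the case's figures; the industry-wide API output is an assumed round number (the case's own 500 million to 2.5 billion kilogram waste range implies an estimate of roughly 20–25 million kilograms), used here only to illustrate the multiplication.

```python
# Back-of-the-envelope E-factor arithmetic using the case's figures.
# ASSUMPTION: the industry-wide API output of 20 million kg/year is an
# illustrative round number, not a figure stated in the case.

E_FACTOR_LOW, E_FACTOR_HIGH = 25, 100   # kg waste per kg API (1994 study)
API_OUTPUT_KG = 20_000_000              # assumed industry API output, kg/year
DISPOSAL_LOW, DISPOSAL_HIGH = 1, 5      # disposal cost, USD per kg of waste

waste_low = E_FACTOR_LOW * API_OUTPUT_KG     # 500 million kg
waste_high = E_FACTOR_HIGH * API_OUTPUT_KG   # 2 billion kg

disposal_cost_low = waste_low * DISPOSAL_LOW
disposal_cost_high = waste_high * DISPOSAL_HIGH

print(f"Annual waste: {waste_low / 1e9:.1f}-{waste_high / 1e9:.1f} billion kg")
print(f"Disposal alone: ${disposal_cost_low / 1e9:.1f}-{disposal_cost_high / 1e9:.1f} billion")
```

Even before counting the purchase price of the wasted inputs, disposal alone runs to billions of dollars a year under these assumptions, which is the "double penalty" the case describes.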
For years pharma had said it did not produce significant enough product volumes to be concerned about toxicity and waste, particularly relative to commodity chemical producers. But with the competitive circumstances changing, companies were eager to find ways to cut costs, eliminate risk, and improve their image. After implementing its award-winning process as standard in sertraline manufacture, Pfizer found that green chemistry–guided process changes could bring E-factor ratios down to ten to twenty kilograms of waste per kilogram of API. The potential to dramatically reduce E-factors through “benign by design” principles could, indeed, be significant. Eli Lilly, Roche, and Bristol-Myers Squibb—all winners of a Presidential Green Chemistry Award between 1999 and 2004—reported improvements of this magnitude after green chemistry principles had been applied.
Predictably, green chemistry also fit with Six Sigma, the principles of which considered waste a process defect. “Right the first time” was an industry initiative that the FDA backed strongly. Groton’s Cue viewed green chemistry as a lens that allowed the company to look at processes and yield objectives in a more comprehensive way, with quality programs dovetailing easily with this approach.
Pfizer Company Background
Pfizer Inc., the world’s largest drug company, was created in 1849 by Charles Pfizer and his cousin Charles Erhart in Brooklyn, New York. The company began its climb to the top of the industry in 1941, when it was asked to mass-produce penicillin for the war effort. In the 1950s, the company opened branches in Belgium, Canada, Cuba, Mexico, and the United Kingdom and began manufacturing in Asia, Europe, and South America. Pfizer expanded its research and development, introducing a range of drugs and acquiring consumer products such as Bengay and Desitin, and by the mid-1960s, Pfizer’s annual worldwide sales had grown to \$500 million. Pfizer engaged in the discovery, development, manufacturing, and marketing of prescription medicines, as well as over-the-counter products, for humans and animals. In 2003, 88 percent of Pfizer’s revenue was generated from the human pharmaceuticals market, 6.5 percent from consumer health-care products, and 4 percent from animal health products.Pfizer, 8K Filing and 2003 Performance Report, Exhibit 99, January 22, 2004. Pfizer was traded on the New York Stock Exchange as ticker PFE. Its major competitors included Merck & Co. and Johnson & Johnson in the United States, GlaxoSmithKline Plc of the United Kingdom, and Novartis of Switzerland.Business and Company Resource Center, Pharmaceuticals Industry Snapshot, 2002.
Throughout the world, more than one billion prescriptions were written for Pfizer products in 2003.Pfizer, 8K Filing and 2003 Performance Report, Exhibit 99, January 22, 2004. In 2004, fourteen of Pfizer’s drugs were top sellers in their therapeutic categories, including Zoloft, erectile dysfunction therapy Viagra, pain management medication Celebrex, and cholesterol-lowering drug Lipitor.Business and Company Resource Center, Pharmaceuticals Industry Snapshot, 2002. The company’s many over-the-counter remedies included Benadryl and Sudafed. Subsidiaries in the Pfizer pharmaceutical group included Warner-Lambert, Parke-Davis, and Goedecke. In 2000, Pfizer merged with Warner-Lambert, making the company one of the top five drugmakers in the world. Pfizer then acquired pharmaceuticals company Pharmacia in 2003, making it the largest drug company in the world. This acquisition allowed Pfizer to diversify its product line because Pharmacia owned a range of therapeutic products in new areas, such as oncology, endocrinology, and ophthalmology.Business and Company Resource Center, Pharmaceuticals Industry Snapshot, 2002. The merger, which cost Pfizer \$54 billion, also greatly expanded its pipeline through Pharmacia’s research in atherosclerosis, diabetes, osteoporosis, breast cancer, neuropathic pain, epilepsy, anxiety disorders, and Parkinson’s disease. By 2004, Pfizer had locations in 80 countries and sold products in 150 countries. In 2003, Pfizer also began selling some of its nonpharmaceutical businesses, such as the Adams confectionary unit (to Cadbury Schweppes) and Schick-Wilkenson Sword shaving products (to Energizer Holdings).Business and Company Resource Center, Pharmaceuticals Industry Snapshot, 2002. Pfizer was headquartered in New York and in 2005 had four subsidiaries involved in pharmaceuticals, consumer health care, and animal health care. Three subsidiaries conducted their business under the Pfizer company name, the fourth as Agouron Pharmaceuticals.
Pfizer posted total revenues for 2003 at \$45.2 billion worldwide, an increase of 40 percent from 2002, and net income of \$3.9 billion. While the company’s largest market was in the United States, Pfizer’s international market grew 56 percent in 2003, to revenues of \$18 billion. According to Karen Katen, executive vice president of the company and president of Pfizer Global Pharmaceuticals, “[Pfizer’s] portfolio of leading medicines, which spanned most major therapeutic categories, drove Pfizer’s strong revenue growth in the fourth quarter and full-year 2003.”Alia Anderson, Andrea Larson, and Karen O’Brien, Pfizer Pharmaceuticals: Green Chemistry Innovation and Business Strategy, UVA-ENT-0088 (Charlottesville: Darden Business Publishing, University of Virginia, January 22, 2007). In fall 2004 Pfizer appeared well positioned for continued industry leadership and projected strong financial performance. The company had a target of \$54 billion for its 2004 revenue and planned to spend about \$7.9 billion in R&D during 2004.Pfizer, 8K Filing and 2003 Performance Report, Exhibit 99, January 22, 2004. “In the dynamic environment of today’s worldwide pharmaceutical industry,” said David Shedlarz, executive vice president and chief financial officer, “Pfizer is uniquely well-positioned to sustain our strong and balanced performance, leverage past and future opportunities, reinforce and extend our differentiation from others in the industry, and exploit both our operational flexibility and our proven abilities to execute.”Pfizer, 8K Filing and 2003 Performance Report, Exhibit 99, January 22, 2004.
Industry Challenges
But despite Pfizer’s optimism and past financial success, by early 2005 the entire pharmaceuticals industry was suffering from a devastating lack of customer trust. From 1990 to 2004, the industry experienced a series of well-publicized criticisms. Most contentious among these critiques was the accessibility of AIDS drugs to patients in southern Africa. Analysts such as Merrill Goozner, former chief economics correspondent for the Chicago Tribune, suggested in 1999 that private pharmaceutical companies contributed to the global AIDS crisis by claiming that lowering the price of drugs or easing patent protection for manufacturers in third-world countries would “stifle innovation.”Merrill Goozner, “Third World Battles for AIDS Drugs,” Chicago Tribune, April 28, 1999, accessed January 12, 2011, http://articles.chicagotribune.com/1999-04-28/news/9904280067_1_compulsory-licensing-south-africa-aids-drugs. In 2004 products from a flu vaccine production plant in the United Kingdom, critical to the US supply, were blocked due to health and safety concerns. The same year, New York Attorney General Eliot Spitzer filed suit against pharma giant GlaxoSmithKline, saying that the company concealed important information about the safety and efficacy of Paxil, an antidepressant drug. Adding to the controversy surrounding the pharmaceutical industry, popular filmmaker Michael Moore announced plans in 2005 to create a documentary called Sicko, which would use interviews with physicians, patients, and members of Congress to expose an industry that Moore claimed “benefits the few at the expense of the many.”Alissa Simon, quoted in BBC News, “Press Views: Michael Moore’s Sicko,” May 19, 2007, accessed January 31, 2011, http://news.bbc.co.uk/2/hi/6673039.stm.
A poll conducted in December 2004 showed that Americans held pharmaceutical companies in the same low esteem as tobacco companies.Marcia Angell, “Big Pharma Is a Two-Faced Friend,” Financial Times (London), July 19, 2004, accessed January 12, 2011, www.globalaging.org/health/us/2004/pharma.htm. The pressure on Pfizer grew in late 2004 when prescriptions for its Celebrex pain relief and arthritis drug fell 56 percent in December following the company’s announcement that the drug was linked to cardiovascular risk (heart attacks and strokes), a problem similar to Merck & Co.’s with its billion-dollar blockbuster drug Vioxx. (Merck, which was suspected of concealing Vioxx’s potentially lethal side effects to maintain sales, had withdrawn the drug from the market in September 2004, undermining both public confidence in the pharma industry and the regulatory oversight of the US Food and Drug Administration.)Theresa Agovino, “Pharmaceutical Industry Limps into 2005,” Boston Globe, December 19, 2004, accessed January 31, 2011, www.boston.com/business/year_in_review/2004/articles/pharmaceutical_industry_limps_into_2005. Pfizer ceased advertising Celebrex. In December 2004, the S&P 500 Pharmaceutical Subindustry Index was down 12.8 percent for the year, though the S&P 500 was up 6.8 percent.
The pharmaceutical industry was a high-risk, high-reward business. Consumers demanded lifesaving drug discoveries that were safe and affordable. In the United States, drug patents only lasted for five to ten years, so pharmaceutical companies were constantly threatened by generic competition. In 2004, it cost an estimated \$897 million to develop and test a new medicine; about 95 percent of chemical formulas failed during this process. In 2002, the FDA approved only seventeen new drugs, the lowest number since 1983. In an attempt to boost innovation, pharmaceutical companies sharply increased R&D spending, with Pfizer investing \$7 billion in R&D in 2003, leading the industry by a margin of several billion.David Rotman, “Can Pfizer Deliver?” Technology Review, February 2004, accessed January 12, 2011, http://www.techreview.com/biomedicine/13462/?mod=related.
In 2005, Pfizer managed the world’s largest private pharmaceutical research effort, with more than thirteen thousand scientists worldwide. That tremendous investment, however, was not translating into drug output, which had been spiraling downward since 1996. In January 2005, Pfizer had 130 new molecules in its pipeline of new medicines, along with 95 projects to expand the use of therapies currently offered.Nancy Nielson, “Pfizer, A New Mission in Action,” in Learning to Talk: Corporate Citizenship and the Development of the U.N. Global Compact (Sheffield, UK: Greenleaf, 2004), 242–55. To meet its 2005 goal of double-digit growth of annual revenues, Pfizer planned to file applications for twenty new drugs before 2010.Nancy Nielson, “Pfizer, A New Mission in Action,” in Learning to Talk: Corporate Citizenship and the Development of the U.N. Global Compact (Sheffield, UK: Greenleaf, 2004), 242–55. Analysts viewed that unprecedented growth rate skeptically, saying that Pfizer had only seven drugs in the FDA testing phases.
From 1993 to 2003, Pfizer spent about \$2 billion on drugs that failed in advanced human testing or were pulled off the market due to problems such as liver toxicity. Thus Pfizer decided in 2005 to shift its R&D focus to analyzing past failed drug experiments to find patterns that might help detect toxicity earlier in the expensive testing process.
From 1995 to 2005, pharma companies invested significant R&D funding into genomics experiments, which were very expensive and yielded less-than-revolutionary results. After a decade of investments in high-powered genomic tools, pharmaceutical companies were in their most prolonged and painful dry spell in years. “Genomics is not the savior of the industry. The renaissance is in chemistry,” said Rod MacKenzie, Pfizer’s vice president of discovery research in Ann Arbor, Michigan.Alia Anderson, Andrea Larson, and Karen O’Brien, Pfizer Pharmaceuticals: Green Chemistry Innovation and Business Strategy, UVA-ENT-0088 (Charlottesville: Darden Business Publishing, University of Virginia, January 22, 2007).
Brand Protection
To counteract a growing reputation for unwillingness to engage with certain NGOs, Pfizer became one of the earliest US signers of the voluntary UN Global Compact, which defined principles for corporate behavior including human rights, labor, and the environment. The UN Global Compact was designed to open dialogue among business, governments, NGOs, and society at large. The compact required use of the precautionary principle, a guide to company decision making holding that a “lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”United Nations, Report of the United Nations Conference on Environment and Development: Rio de Janeiro, 3–14 June 1992, August 12, 1992, accessed January 12, 2011, www.un.org/documents/ga/conf151/aconf15126-1annex1.htm. A study in 2003 by the International Institute for Management Development in Geneva found that stakeholders expected more social responsibility from the pharmaceutical sector than from any other industry. Pfizer transformed its quarterly financial report into a “performance report,” which included updates on corporate citizenship.Nancy Nielson, “Pfizer, A New Mission in Action,” in Learning to Talk: Corporate Citizenship and the Development of the U.N. Global Compact (Sheffield, UK: Greenleaf, 2004), 242–55. The company was rated in the Chronicle of Philanthropy as “the world’s most generous company.”Alia Anderson, Andrea Larson, and Karen O’Brien, Pfizer Pharmaceuticals: Green Chemistry Innovation and Business Strategy, UVA-ENT-0088 (Charlottesville: Darden Business Publishing, University of Virginia, January 22, 2007).
In the pharmaceuticals industry, innovation can be stifled by the complexity of global business, science, government, religion, and public response, all colliding over issues of life and death. AIDS was driving high demand for more breakthrough medicines but at an affordable price. “We have learned that no single entity—whether business, government, or NGO—can alone bridge the deep divides between poverty and affluence, health and disease, growth and stagnation. As the world’s foremost pharmaceutical company, we have an important obligation to take a global leadership role,” Pfizer chairman Hank McKinnell commented.Pfizer, Medicines to Change the World: 2003 Annual Review, accessed January 12, 2011, www.pfizer.nl/sites/nl/wiezijnwij/Documents/annualreportpfizer2003.pdf.
In 2000, Pfizer conducted focus groups at several Pfizer locations around the world to create a new mission. First it was decided that Pfizer would measure itself on a combination of financial and nonfinancial measures, reflecting stakeholders’ changing expectations of business. Second, Pfizer would no longer measure itself solely against others in the pharmaceuticals industry but against all other companies in all industries. The new mission statement was as follows: “We will become the world’s most valued company to patients, customers, colleagues, investors, business partners and to the communities where we live. This is our shared promise to ourselves and to the people we serve. Pfizer’s purpose is to dedicate ourselves to humanity’s quest for longer, healthier, happier lives through innovation in pharmaceutical, consumer and animal health products.”Alia Anderson, Andrea Larson, and Karen O’Brien, Pfizer Pharmaceuticals: Green Chemistry Innovation and Business Strategy, UVA-ENT-0088 (Charlottesville: Darden Business Publishing, University of Virginia, January 22, 2007).
Pfizer stated that it measured progress as putting people and communities first, operating ethically, being sensitive to the needs of its colleagues, and preserving and protecting the environment.
In 2002, Pfizer donated \$447 million to programs like its Diflucan Partnership Program, which provided health-care training and free medicine to treat HIV/AIDS-related infections to patients in Africa, Haiti, and Cambodia. That year Pfizer also held an internal symposium on green chemistry, a design approach that continued to drive manufacturing toward more benign material use.
In 2003, Pfizer became a member of the World Business Council for Sustainable Development, the International Business Leaders Forum, and Business for Social Responsibility, organizations that provide resources to firms to promote sustainable business practices internationally, sometimes referred to as triple-bottom-line performance (economy, environment, equity). Pfizer set a company goal for 2007 to reduce carbon dioxide emissions by 35 percent per million dollars of sales and, by 2010, to supply 35 percent of global energy needs through cleaner sources. Pfizer was a member of the EPA’s Climate Leaders Program, a voluntary industry-government partnership. Pfizer was again included in the Dow Jones Sustainability Asset Management Index, a global index that tracks the performance of leading companies not only in economic terms but also against environmental and social standards.
Zoloft
Zoloft was released in 1992 and was approved for six mood and anxiety disorders, including depression, panic disorder, obsessive-compulsive disorder (OCD) in adults and children, post-traumatic stress disorder (PTSD), premenstrual dysphoric disorder (PMDD), and social anxiety disorder (SAD).Pfizer, Medicines to Change the World: 2003 Annual Review, accessed January 12, 2011, www.pfizer.nl/sites/nl/wiezijnwij/Documents/annualreportpfizer2003.pdf. Zoloft was the most prescribed depression medication, with more than 115 million Zoloft prescriptions written in the United States in the drug’s first seven years on the market.US Environmental Protection Agency, “2002 Greener Synthetic Pathways Award: Pfizer, Inc.,” accessed January 12, 2011, www.epa.gov/greenchemistry/pubs/pgcc/winners/gspa02.html. According to Pfizer’s 2003 filings, Zoloft brought in \$3.1 billion in worldwide revenue, with \$2.5 billion coming from the US market. Those revenues showed an increase of 16 percent worldwide, 14 percent in the United States, and 23 percent internationally during the fourth quarter of 2003 compared to the same period of the previous year.Pfizer, 8K Filing and 2003 Performance Report, Exhibit 99, January 22, 2004. Zoloft sales comprised approximately 9 percent of Pfizer’s total US sales in 2003, second only in sales percentage to Lipitor.
In 2002, Pfizer was awarded the Green Chemistry Award for Alternative Synthetic Pathways. Pfizer received the award for its development of the sertraline process, an innovative process for deriving Zoloft, for which sertraline is the active ingredient. Since developing the new process in 1998, Pfizer successfully implemented it as the standard in sertraline manufacture. To make Zoloft, a pure output of sertraline must be isolated from a reaction that occurs in solvent (or in a combination of solvents). The “combined” process of isolating sertraline was the third redesign of the commercial chemical process since its invention in 1985.US Environmental Protection Agency, “2002 Greener Synthetic Pathways Award: Pfizer, Inc.,” accessed January 31, 2011, www.epa.gov/greenchemistry/pubs/pgcc/winners/gspa02.html. Each of those redesigned reactions decreased the number of solvents used, thus simplifying both the process (through energy required and worker-safety precautions) and the resulting waste disposal. The traditional process used titanium tetrachloride, a liquid compound that was toxic, corrosive, and air sensitive (meaning it formed hydrochloric acid when it came in contact with air).US Environmental Protection Agency, “2002 Greener Synthetic Pathways Award: Pfizer, Inc.,” accessed January 31, 2011, www.epa.gov/greenchemistry/pubs/pgcc/winners/gspa02.html. Titanium tetrachloride was used in one phase of the process to eliminate water, which reversed the desired reaction if it remained in the mix. In the process of “dehydrating” this step of the reaction, the titanium tetrachloride reacted to produce heat, hydrochloric acid, titanium oxychloride, and titanium dioxide. Those by-products were carefully recovered and disposed, which required an additional process (energy), inputs (washes and neutralizers), and costs (waste disposal). 
The new process blended the two starting materials in the benign solvent ethanol and relied on the regular solubility properties of the product to control the reaction. By completely eliminating the use of titanium tetrachloride, the “combined” process removed the hazards to workers and the environment associated with transport, handling, and disposal of titanium wastes.US Environmental Protection Agency, “2002 Greener Synthetic Pathways Award: Pfizer, Inc.,” accessed January 31, 2011, www.epa.gov/greenchemistry/pubs/pgcc/winners/gspa02.html. Using ethanol as the solvent also significantly reduced the use of one of the starting materials, monomethylamine, and allowed this material to be recycled back into the process, increasing efficiency.
Another accomplishment of the new process was discovering a more selective catalyst. The original catalyst caused a reaction that created unwanted by-products. Removing these impurities required a large volume of solvent as well as substantial energy. Also, portions of the desired end product were lost during the purification process, decreasing overall yield. The new, more selective catalyst produced lower levels of impurities, which in turn had the effect of requiring less of the reactant (mandelic acid) for the next and final reaction in the process. Finally, the new catalyst was recovered and recycled, providing additional efficiency.
By redesigning the chemical process to be more efficient and produce less harmful or expensive waste products, the “combined” process of producing sertraline provided both economic and environmental/health benefits. Typically 20 percent of the wholesale price was manufacturing costs, of which approximately 20 percent was the cost of the tablet or capsule with the remaining percentage representing all other materials, energy, water, and processing costs. With generics on the horizon, achieving materials and processing cost reductions could prove a decisive capability differentiator.
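The cost structure in the paragraph above implies a specific "addressable" slice for green chemistry savings. A minimal sketch of that arithmetic, assuming a placeholder wholesale price of \$1.00 per dose (the price itself is not given in the case):

```python
# Cost-structure arithmetic from the case: manufacturing is ~20% of the
# wholesale price, and the tablet/capsule itself is ~20% of manufacturing.
# ASSUMPTION: the $1.00 wholesale price is a placeholder for illustration.

wholesale_price = 1.00                             # assumed $ per dose
manufacturing = 0.20 * wholesale_price             # 20% of wholesale price
tablet_or_capsule = 0.20 * manufacturing           # 20% of manufacturing cost
process_costs = manufacturing - tablet_or_capsule  # materials, energy, water, processing

# Green chemistry savings act on the process share: 16% of the wholesale price.
print(f"Addressable process cost: {process_costs / wholesale_price:.0%} of price")
```

At any given price, roughly 16 percent of revenue sits in materials, energy, water, and processing, which is why process redesign mattered as generic competition approached.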
Subsequent to receiving the green chemistry award, Pfizer realized an even more efficient process, building on the earlier successes. The starting material for sertraline, called tetralone, contained an equal mixture of two components. One produced sertraline; the other formed a by-product that had to be removed, making the process only half as productive. Using a cutting-edge separation technology called multiple-column chromatography (MCC), Pfizer scientists were able to fractionate the starting material into the pure component that yields sertraline. The other component could be recycled back to the original 1:1 mixture, which could then be mixed with virgin starting material and resubjected to MCC separation. This new process was reviewed and approved for use by the FDA. The net result was twice as much sertraline produced from a unit of starting material. Half the manufacturing plant capacity was required per unit of sertraline produced.
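The recycle loop described above can be checked with a toy mass balance. This sketch assumes an idealized process (perfect MCC separation and lossless conversion of the recycled component back to the 1:1 mixture), which is my simplification of the case's description:

```python
# Toy mass balance for the MCC recycle loop. Each pass, half of the 1:1
# feed is the sertraline-forming component; the other half is converted
# back to the 1:1 mixture and re-fed. ASSUMPTION: no losses anywhere.

def useful_kg(virgin_kg=1.0, passes=30):
    feed = virgin_kg   # material entering the MCC separation this pass
    useful = 0.0       # cumulative sertraline-forming component recovered
    for _ in range(passes):
        useful += 0.5 * feed   # desired half is separated out
        feed = 0.5 * feed      # other half is recycled as fresh 1:1 feed
    return useful

# Without recycling, 1 kg of tetralone yields only 0.5 kg of useful material;
# with the loop, the recoverable amount approaches the full 1 kg, i.e., the
# "twice as much sertraline per unit of starting material" in the case.
print(useful_kg(passes=1), round(useful_kg(), 6))
```

The single-pass figure (0.5 kg) versus the converged figure (approaching 1.0 kg) is exactly the doubling the case reports, before accounting for real-world separation and recycling losses.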
A Depressing Decree from the United Kingdom
In December 2003, the Medicines and Healthcare Products Regulatory Agency (MHRA) of the United Kingdom included Zoloft (sold in the United Kingdom as Lustral) on a list of antidepressants banned from use for the treatment of children and teenagers younger than age eighteen.“UK Set to Ban Antidepressants for Children,” AFX International Focus, December 10, 2003. The safety and efficacy of the drugs was in question, a concern brought to the attention of UK health officials after high rates of suicide were observed in patients taking certain antidepressants. Of the major antidepressants, only Eli Lilly’s Prozac remained permitted for use in treating UK children.“UK Set to Ban Antidepressants for Children,” AFX International Focus, December 10, 2003. Pfizer immediately released a statement disagreeing with the findings of the MHRA, claiming that its “controlled clinical-trial data in pediatric and adolescent depression shows no statistically significant association between use of Zoloft and either suicidal ideation or suicidal behavior in depressed pediatric and adolescent populations.”Pfizer, 8K Filing and 2003 Performance Report, Exhibit 99, January 22, 2004. After reviewing Pfizer’s studies of Zoloft in pediatric populations, the FDA’s office of pediatric therapeutics concluded in 2003 that there were no safety signals calling for FDA action beyond ongoing monitoring of adverse events.Pfizer, 8K Filing and 2003 Performance Report, Exhibit 99, January 22, 2004.
Conclusion
Market and industry turbulence was standard for pharma decision makers, but the confluence of regulation, public distrust, technological change, medical and ecological studies, costly company errors, economic decline, and prohibitive R&D investment requirements made the decision circumstances particularly constrained in 2005. What could green chemistry offer within that context, if anything? Yujie Wang made last-minute changes to her priority list of recommendations and saved the slide presentation to a Zip drive. It was time to head down the hall to the executive committee meeting and try to convince the audience of the value of green chemistry going forward.
KEY TAKEAWAY
• Green chemistry represents an opportunity for the pharmaceutical industry, which is relatively inefficient in its use of energy and materials, to find cost savings and stimulate innovation.
EXERCISES
1. How and why did this process innovation happen?
2. Estimate the potential savings (in dollars) of applying green chemistry innovations to Zoloft. Use information from the case (Zoloft sales, sales price per dose, average dose, waste disposal costs) and make reasonable assumptions if needed to determine your calculation. Be prepared to present your analysis to the class. What are the potential savings if this practice is implemented more broadly?
3. What drivers are in play with respect to green chemistry, inside and outside Pfizer?
4. Can you identify strategic opportunities for Pfizer? On what contingencies do they depend?
5. How would you define the responsibilities of Pfizer’s new vice president? What priorities need attention first? | textbooks/biz/Business/Advanced_Business/Book%3A_Sustainability_Innovation_and_Entrepreneurship/06%3A_Clean_Products_and_Health/6.04%3A_Green_Chemistry_Innovation_and_Business_Strategy.txt |
Learning Objectives
1. Evaluate opportunities and challenges for sustainability innovation within the modular building market.
2. Analyze the value of collaborations for innovation.
3. Examine the stages of growth for a start-up firm active in the sustainability space.
Introduction
Project Frog was an innovative designer of kits to rapidly build energy-efficient, greener, healthier, and affordable buildings. The company was transitioning from start-up to the next phase of growth just as the 2008–10 economic recession brought virtually all new building construction to a halt across the United States. Conditions forced the company to rethink strategy, conserve cash, and further refine its product and its processes. The company’s Crissy Field project, completed in early 2010, provided a critically important demonstration of its designs, and as the economy began to turn around in 2010, geographic expansion and new market segments—possibly government, retail, and health care—were planned. Architect, designer, and founder Mark Miller; president Adam Tibbs; and new CEO Ann Hand also hoped to meet more aggressive margin targets that would enable the company to triple revenue and be profitable early in 2011, only five years after start-up. Venture capital funding from RockPort Capital Partners and investor exit expectations required rapid ramping up of projects in the short run. Miller summarized the overlap of Project Frog’s products and the lead venture capital investor’s interests: “Their vision for energy and resource efficiency and innovative products is perfectly aligned with the Project Frog approach: to be better, greener, faster and cheaper.”Project FROG, “Project FROG, Makers of Smart Building Systems, Closes Series B Funding with RockPort Capital Partners,” news release, Business Wire, November 19, 2008, accessed January 28, 2011, www.thefreelibrary.com/Project+,+Makers+of+Smart+Building+Systems,+Closes+Series+B...-a0189242085.
In late spring 2010, having just moved from an operating role to a board position, Miller was focused on strategic concerns and how best to explain and sell his product to a broader range of buyers, including the military and potentially disaster-relief agencies. The Project Frog office, a short walk from San Francisco’s Embarcadero district, was informal and open. Although Miller occupied the only office with a door, it was bounded by two glass walls and a clear acrylic panel he used as a whiteboard. He could often be found crisscrossing the office or standing at someone’s workstation or at the table where Tibbs sat. Meanwhile, new CEO Ann Hand had set up her computer, sharing the long table with Tibbs. She sought to translate her experience at BP as senior vice president of global marketing and innovation into a strategy for Project Frog to build its brand and scale up. The senior team saw huge potential in Project Frog, but they had many decisions to make and priorities to set. Most important, they wanted to ensure Project Frog met key business goals as they focused on preparing to give the venture capital investors a successful exit in just a few years, either taking the company public or finding a buyer.
History
Mark Miller was no stranger to new design and enterprises in architecture. He graduated from Haverford College in 1984 and earned his master’s in architecture and a prestigious Keasbey Fellowship at Cambridge University. He went to Kuala Lumpur as a Henry Luce Scholar, helping design refugee camps among other projects and deepening his strong interest in the relationship between culture and architecture. He also was certified by the American Institute of Architects. Miller later served as director of corporate and technology projects and director of the Asia Projects Group for the firm Kaplan McLaughlin Diaz in San Francisco, where his portfolio included Euro Disney. In 2000, he used \$50,000 in personal savings to start MKThink, a design and architecture firm in San Francisco focused on innovative architectural design. Staff that included anthropologists conducted careful human behavior research to understand what people in work spaces truly need for high productivity and high performance. MKThink designed advanced offices and campuses for Sun Microsystems and General Electric’s Warren Tech Center in Michigan and worked extensively with Stanford University on several projects, including a dozen at the law school, work for the education and engineering schools, and the business school’s relocation analysis.
Around 2000, Miller began to think seriously about the education market and temporary or portable classrooms, the trailers that frequently begin as stopgaps and become permanent features of many schools despite their unhealthy interior environments and energy inefficiency. Miller said, “Design should speak to the issues of the day, and technology needs to enable the human condition, not dominate it. So what are the issues of today? Well, we’ve got a problem: 35% of the kids in the state of California go to school in out-of-date trailers. That’s an issue of the day. It’s how do you educate kids in public schools and what are their facilities like in solving that systematically? We have the technology, we have the knowledge. We can solve this.”Andrea Larson and Mark Meier, Project FROG: Sustainability and Innovation in Building Design, UVA-ENT-0158 (Charlottesville: Darden Business Publishing, University of Virginia, 2010). Other quotations in this section, unless otherwise noted, also refer to this case study.
Solutions oriented, Miller saw an opportunity to meet the challenge. Existing portable buildings were relatively cheap, but how well did they address how students learn and what teachers need to be high-performing instructors? How could technology and design come together to create healthier schools while addressing the desperate need for more classrooms as well as rising and more volatile energy costs? Why accept existing answers? Smart buildings were emerging as alternatives. Estimates were that school overcrowding and insufficient tax revenues to pay for new school facilities would continue to force public school students into trailer classrooms, and this was not just California’s problem.
MKThink had always had a research component that enabled it to consider problems in its field, write half a dozen white papers a year, and present at conferences. By 2004, that research focused on the problem of unhealthy learning environments for children’s education. After all, 60 percent of the firm’s work was related to education. The group knew it had a solution, but not yet a new company, when it devised the basic idea for modular buildings that would be better places for kids to learn and more energy efficient. “I’m making a product that makes a system that becomes a kit. You could call it Lego and Tinker Toys on steroids,” Miller said. Witnessing the devastation and aftermath of the 2004 Indonesian tsunami and Hurricane Katrina in 2005 in the United States confirmed for Miller that better buildings also needed to be constructed quickly. “That was the birth of Frog”—Flexible Response to Ongoing Growth—Miller told GreenerBuildings magazine. “Frogs are green. They only jump forward and—one of my favorites—each frog is a prince with the message, ‘Do not be afraid of what’s not familiar.’ Because if you embrace it, it is a prince.”Leslie Gueverra, “Project FROG Becomes a Cinderella Story for Modular Construction,” GreenerBuildings, November 25, 2008, accessed January 28, 2011, http://www.greenbiz.com/news/2008/11/25/project-frog-becomes-cinderella-story-modular-construction.
By late 2005, Miller had decided to form a new company with two MKThink partners and two others, an industrial designer and a metal fabricator with a strong record of working well together. Together with angel investors, family and friends initially contributed \$1.2 million to launch Project Frog in 2006. Their driving mantra was “better, greener, faster, cheaper.” Their mission was to “provide global impact and market leadership in green building products and systems.” Miller emphasized what Project Frog was not: “We are not a construction company. We are not about better trailers.”
The metal fabricator, Bakir Begovic, became board vice chairman for Project Frog. He received his BS in mechanical, industrial, and manufacturing engineering from California Polytechnic State University in 1996. He had previous experience in various high-tech firms. Begovic was founding principal of B&H Engineering, a semiconductor manufacturing and technology firm with an emphasis on metal fabrication, manufacture, and assembly. He was also chair of the board of directors for Acteron, a coating company.
Indeed, Project Frog needed such architectural and high-tech manufacturing expertise because it hoped to combine and optimize the best of modular and traditional construction—cheap and mass-manufactured and also energy efficient and conducive to occupant comfort and productivity. To achieve all these results, Project Frog needed to innovate. Since 1947, productivity in manufacturing in the United States had increased sevenfold. In construction, in contrast, productivity had actually declined slightly. If Project Frog could harness the efficiency of manufacturing and bring it into the field of construction, the company could radically outperform the industry, which was used to margins of only a few percent. Instead of conceiving of a classroom as the culmination of a long, unique construction process involving myriad players, Miller conceived of it as a “technology-infused product,” likening it to an iPhone, that could be produced from standardized parts and plans on a large scale in a variety of locations. Project Frog would thereby consolidate many tasks that were normally parceled among architects, engineers, and contractors, making the process more efficient and hence cheaper for the consumer and more profitable for Project Frog. In some sense, buying a Project Frog building was like purchasing a PC kit or an IKEA bookshelf: a lot of thought went into designing and configuring the components, but it was up to the end user to assemble them or hire someone who could.
The company’s buildings, erected from standardized kits it designed and contractors assembled, would require less energy and materials to build and to operate. They offered spacious layouts designed to aid user productivity, health, and comfort. Units offered abundant natural light, state-of-the-art clean air systems, high-performance heating, ventilation, and air-conditioning (HVAC) systems, customized microclimate controls, and excellent acoustic performance. They also could be built faster because they did not require a new architectural design, engineering analysis, lengthy approval processes each time, nor did they require as much work and coordination of supplies from contractors. Project Frog modules also used recycled material, from steel beams to carpets and tiles, and were designed to support green “living” roofs and solar panels. Finally, more efficient design meant fewer machines and less labor were needed to assemble a building with fewer materials wasted. The net impact could be significant: even though construction was about 5 percent of the US economy, buildings accounted for about 40 percent of energy use and produced about two-thirds of landfill waste.
Sophisticated design and modeling software enabled this reconceptualization from construction process to manufactured product. Project Frog’s engineering designers began with SolidWorks, software used to design products as diverse as airplanes and cell phones. The designers infused into their plans and predictive performance models data about the actual environmental performance of building materials—data that were regularly updated with measurements from new buildings. This design and analysis became the core of Project Frog’s competence and intellectual property, which was the subject of several patent applications. The company also consulted Loisos+Ubbelohde, based in nearby Alameda, California, to help develop the energy modeling for its initial Project Frog kit system. That energy firm had previously worked on the Gap’s headquarters in New York and Apple’s Fifth Avenue store.Sarah Rich, “Project Frog’s 21st-Century Buildings,” Dwell, April 1, 2009, accessed January 30, 2011, http://www.dwell.com/articles/project-frogs-21st-century-buildings.html. George Loisos and Susan Ubbelohde had directed significant government and university research programs on building energy use and efficiency, and their collaboration was a significant addition to the Frog design team.
Better control of manufacturing allowed Project Frog to use a set of basic parts with minor modifications to produce an array of products. Project Frog chose, however, to outsource the actual fabrication to others instead of having to build its own capacity. Project Frog sought partners to supply the steel structure, glass panels, curtain walls, ceilings, and finishing, such as external siding or carpets. A reporter for Forbes magazine described the result: “They snap together for a not bad look, as if a bunch of Swedish designers got hold of a really big Erector set.”Quentin Hardy, “Ideas Worth Millions,” Forbes, January 29, 2009, accessed January 30, 2011, http://www.forbes.com/2009/01/29/innovations-venture-capital-technology_0129_innovations.html.
Miller chose to focus on the educational market in the early days of the company. Education was the largest segment of the \$400 billion construction market, accounting for about one-fourth of both the traditional and modular markets. Furthermore, few people were involved in purchase decisions relative to the size of the projects; schools generally wanted to go green and efficient; and they seldom had much money but often needed buildings quickly. Educational institutions had long needed to add or subtract space rapidly as schools and communities changed. California had issued bonds at various times since 2002 to raise money to construct new schools to keep pace with its population. Compounding that growth, California was trying to reduce its average class size, requiring even more space. Hence when funding was available, construction could easily fall behind demand. Miller had seen the shortcomings of the portable, temporary choices schools had previously relied on.
Schools would also save time on design because they would choose from a limited number of prefabricated choices and configure and combine them as needed. Project Frog’s designs were precertified in California by the Division of the State Architect, saving about six months on permitting individual projects. (The Division of the State Architect oversaw the design and construction of K−12 schools and community colleges and also developed and maintained building codes.) The State Allocation Board Office of Public School Construction noted that it took two to four years to design, build, and inhabit an average school for two thousand students, while portable classrooms took nine to fifteen months to plan and inhabit. Finally, students learned better when indoor air and light quality were better, thus schools had often been proponents of green construction.
Studies from 1999 through 2006 provided evidence of the link between green design and student performance. Window area correlated with improvement in math and reading, better air reduced asthma and other ailments that affected attendance, and improved temperature control increased the ability of students and teachers to concentrate. Meanwhile, money saved from operating more efficient buildings could be used to educate students. Project Frog thus used passive design, large windows and coatings, and other methods to improve learning and cut costs. California had strict energy-efficiency standards under Title 24, and the state specifically allotted \$100 million in 2009 for High Performance Incentive grants to improve energy efficiency or maximize daylight in K−12 schools.
That grant, however, was still in the future when Project Frog began with two pilot projects in California, a preschool and a racing school. The results pleased customers, but Project Frog was not making enough money from them. The company received \$2.2 million from angel investors in 2007 and had revenue of around \$3.7 million with sixteen full-time employees. However, it was burning about \$300,000 per month and had missed project completion deadlines. Nonetheless, in 2008 Miller projected the company would generate over \$50 million in revenue by 2010. Then portents of a recession began to appear.
Completed Projects as of Spring 2010
Project Frog gained momentum with a number of projects (see Note 7.3 "Project Examples"). The following are the most notable ones:
• Child Development Center at City College San Francisco, 2007. Constructed 9,400 square feet of space for children, teachers, and administrators.
• Jim Russell Racing Drivers School Learning and Technology Center, Sonoma, California, 2007. Constructed 14,000 square feet of classroom and meeting space.
• Greenbuild Conference Boston, 2008. Constructed and unveiled the 1,280-square-foot Frog 2.0 in one week.
• Jacoby Creek Charter School, Bayside, California, 2009. Replaced a Northern California school’s trailers with Frog classrooms, paid for by a state grant.
• Vaughn Next Century Learning Center, Los Angeles, 2010. Built a 3,000-square-foot structure for the charter school’s Infrastructure Career Academy, designed to train students in green-collar jobs.
• Crissy Field Center, San Francisco, 2010. Built Golden Gate National Park a 7,400-square-foot education center to Leadership in Energy and Environmental Design (LEED) Gold standards, including classrooms, offices, and a café.
• Watkinson School, classrooms for Global Studies Program, Connecticut, 2010. Built 3,800 square feet of Frog Zero classroom and lab space.
Project Examples
Customers were pleased with the buildings’ performance. Project Frog’s purchase price was 25–40 percent lower than traditional construction. Operating costs could be as much as 50–70 percent lower than conventional or trailer construction. The new Frog Zero units could claim a 75 percent reduction in energy demand through use of occupancy and daylight sensors, smart wall panels that absorbed and reflected light, natural light optimization, glare control, superior air quality, microclimate customization through advanced climate control technology, and enhanced acoustics. Carpeting and interiors were screened for toxicity, making them superior to the standard fittings of conventional portables (pressed-wood furniture, vinyl walls, and new paint and carpet), which could release invisible toxic gases known as volatile organic compounds (VOCs). The most advanced line, the Frog Zero buildings, were designed to produce at least as much energy as they used, making them energy neutral or better. Built from renewable or recyclable materials, the units could be disassembled easily and were designed with 100 percent recyclability potential.
However, the major appeal of any unconventional classroom construction was typically price. Project Frog’s California prices fell between those for traditional construction and those for portable or trailer classrooms. In California, laws had actually mandated that 30 percent of new classroom construction be portables, to avoid overbuilding classrooms that would become vacant when birth rates declined. But some school districts facing unexpected and shifting population demographics found themselves housing 50 percent of their students in portables that ranged from relatively new to over forty years old. In Florida, 75 percent of portables that were intended as temporary structures were later classified as “permanent” classroom spaces. Estimates for 2009 placed six million students in portable classrooms; in 2003, it was estimated that 220,000 portable classrooms served public school systems nationwide. The perception of lower quality was often justified: portables were poorly suited to music and language learning, and they suffered from heating and cooling inefficiencies, a lack of natural light, and poor air quality, all of which undermined the performance of students and teachers.
The Industry
As of June 2009, all but seven states had some kind of energy-efficiency requirements for government buildings.Pew Center on Global Climate Change, “Building Standards for State Buildings,” June 16, 2009, accessed January 30, 2011, www.pewclimate.org/what_s_being_done/in_the_states/leed_state_buildings.cfm. About half those states required LEED Basic or equivalent certification specifically, and increasingly, states such as California and municipalities such as Boston and San Francisco required any large new construction or renovation to meet green building standards. LEED, created by the US Green Building Council (USGBC), was widely used to measure building efficiency and environmental impact and came in various levels, from Basic to Platinum. Other rating systems existed, especially as LEED Basic came to be considered too lax or inappropriate for homes or other structures, but LEED continued to be the industry norm. Buildings earned points toward certification based on site selection and design, environmental performance, and other attributes. The US General Services Administration (GSA), which oversaw many federal properties and purchases, began requiring LEED Silver certification in 2009. A study by McGraw-Hill Construction calculated the size of the green building market to be \$10 billion in 2005 and \$42 billion in 2009, and it estimated the market would be worth between \$96 billion and \$140 billion by 2013, with the education sector accounting for 15–30 percent of that market.McGraw-Hill Construction, 2009 Green Outlook: Trends Driving Change, accessed January 26, 2011, http://construction.com/market_research/reports/GreenOutlook.asp.
Meeting those standards and the needs of the client, however, traditionally involved an array of people. Architects devised plans, and construction engineers decided how to implement them safely. Government agencies had to approve those plans, and then an array of craftspeople—masons, carpenters, electricians, glaziers, and so on—were marshaled by a general contractor to execute the plans. Each new participant took a slice of the profit and decreased efficiency, since each influenced only one small piece of the project rather than its end-to-end life-cycle design. Furthermore, involving more people increased the chance for delay and cost overruns, and the longer a project continued, the more likely weather or supply disruptions could slow it further. A single building could take years to plan and build. Hence construction typically had low margins and was unattractive to venture capitalists.
Indeed, when Project Frog sought investors, it found itself being compared to steel manufacturers. Investors had no idea how to value the company accurately: it wasn’t traditional construction, nor was it traditional manufacturing. Project Frog combined many of the previously disparate aspects of construction in its predesigned, preapproved kit, which sped construction and limited the number of people involved, including distinct craft unions that would fight for their shares of the project. That increased the company’s profit while decreasing cost to clients. Miller encountered one other problem he didn’t anticipate: Project Frog was too fast. Schools typically forecast building new classrooms five to ten years out and had correspondingly sluggish procurement processes. Consequently, schools had a hard time determining how to buy something that could be standing and in use six months later.
Changes and Challenges
Project Frog president Adam Tibbs had shown a proclivity for entrepreneurial initiatives early, having started and sold a lawn-mowing company as a kid before earning his bachelor of arts in English from Columbia University in 1995. He worked as an editorial assistant for the Columbia University Press, where he gravitated toward digital publications, and then joined Nicholson NY, an Internet and software consulting company, where he managed major projects from 1996 to 1998. In 1999 he founded Bluetip, a software development and incubator company. Bluetip spun off and sold several companies before Tibbs entered real estate development in New York and the Virgin Islands. He bought a house in the country and set out to write a novel. He also consulted for nonprofits and often borrowed Miller’s office when he came to San Francisco, where his friend and eventual wife worked at MKThink. Eventually he went to work for Project Frog, where he arrived as president in June 2007.
In 2008 Project Frog began to redesign its base module and reorganize its business processes. Tibbs noticed that the original Project Frog designs were simply overbuilt; the same result could be achieved with less material and less design time. Tibbs was quick to note, “If you remove green from the table, the way we do things is still better. The innovation is business processes in an industry that doesn’t have any business processes.” Looking back, Tibbs recalled, “We stopped selling and redesigned from the ground up. We tried to bring intelligence in-house and keep it there.” The international law firm Wilson Sonsini Goodrich & Rosati was brought in to “clean up” the company’s procedures and documentation.
Meanwhile, Miller and his team examined their previous projects and relied on input from their own green material researchers as well as suppliers, especially steel manufacturer Tom Ahlborn, about how to improve environmental performance and efficiency. Ahlborn was based in California. He made the frame for the modules and also assembled them on-site. Hence his experience allowed engineers to make improvements along the entire life cycle of the project. After eighteen months of design, the 1,280-square-foot Frog 2.0 was unveiled at the Greenbuild Conference in Boston, where contractor Fisher Development Inc. assembled the demo module in only seven days to allay fears that Project Frog would miss deadlines again. The new design also earned California’s Division of the State Architect (DSA) precertification and an award from the Modular Building Institute. The new Frog 2.0 was anticipated to be 25–40 percent cheaper to build and 50–75 percent cheaper to operate; it met baseline LEED Silver standards and could potentially be energy neutral when outfitted with photovoltaic panels (part of the Frog Zero option). The components were recyclable or compostable and engineered for seismic design category E (which included San Francisco; the highest category was F). Moreover, the building could withstand 110-mile-per-hour winds and be assembled in one-half to one-fifth the time of a traditional building. Since the basic plans had to be approved by engineering and architecture firms in fifty states, Frog 2.0 also streamlined documentation and certification.
On the financial side, the Wilson Sonsini law firm introduced Project Frog to a few venture capital companies. A deal for \$8 million in Series B funding closed in November 2008. A partner from the venture capital fund joined Project Frog’s board of directors. The partner said of the new partnership, “This is a truly pioneering company. Project FROG is developing dynamic concepts from a product design and manufacturing platform and applying those innovations to the building industry. Project FROG has a critical grasp on the technical and market advancements that will be game changers in the green building industry. These attributes solidify Project FROG’s position as a leader in this fast growing marketplace.”Rockport Capital, “Project FROG Closes \$8MM Series B Financing Led by RockPort Capital Partners,” press release, November 19, 2008, accessed January 30, 2011, www.rockportcap.com/press-releases/project-frog-closes-8mm-series-b-financing-led-by-rockport-capital-partners.
Though still \$4 million short of its goal, Project Frog kept costs low and in 2010 raised an additional \$5.2 million through debt financing and promissory notes.Project FROG, “Project FROG, Makers of Smart Building Systems, Closes Series B Funding with RockPort Capital Partners,” news release, Business Wire, November 19, 2008, accessed January 30, 2011, www.reuters.com/article/2008/11/19/idUS111863+19-Nov-2008+BW20081119. In 2008, Project Frog won the Crunchies Award for Best Clean Tech company, given for compelling start-ups and Internet or technology innovation. Things continued to look up for the company when the Office of Naval Research asked the venture capital community about green buildings. The military was particularly interested in energy efficiency after paying exorbitant sums to keep fuel on the front lines in Iraq and Afghanistan. It had begun to see energy efficiency as a national security issue and sustainability (making sure the military had a positive footprint in terms of community, ecological, and health impacts of its operations) as key to continuing to operate bases in communities around the world. The investors recommended Project Frog, which eventually began work with the Navy on projects in Hawaii.
Even as Project Frog continually strove to distinguish itself from traditional trailer manufacturers, competition emerged from other modular groups. Miller believed that modular offerings sacrificed quality and green features. Nonetheless, they remained attractive to some clients such as cash-strapped schools.
New Hire, Next Steps, and Exit Strategy
Project Frog needed a way to stay ahead of the competition. Its improved Frog 2.0 certainly would help, and Frog Zero was the first energy-neutral building of its kind; streamlining business practices was now a priority. Project Frog turned to its supply chain to boost efficiency and profit.
Ash Notaney had worked with Booz Allen on strategy and supply-chain issues for twelve years. Through a mutual friend, he met Adam Tibbs and began to offer advice to the company about supply-chain management. In January 2010, he was hired. He noticed right away that people at Project Frog talked to one another; meetings were rare, which kept people available at their desks for interaction; the hierarchy was flat; and there were no corporate silos. “I don’t think we even had an organizational chart until one of the investors asked to see one,” Notaney recalled.Andrea Larson and Mark Meier, Project FROG: Sustainability and Innovation in Building Design, UVA-ENT-0158 (Charlottesville: Darden Business Publishing, University of Virginia, 2010). Other quotations in this section, unless otherwise noted, also refer to this case study. The spirit of collaboration was reflected in the office space: there were no cubicles, just tables where people worked side by side. Notaney literally sat with marketing to one side and the president to the other. Exposed HVAC conduits and hanging lights marked the building for what it was: a renovated roundhouse for streetcars that used to run along the Embarcadero. About two dozen employees were at work in the office on a given day, and probably two-thirds were under thirty years old. Clear plastic bins held sample materials from Project Frog buildings: exterior siding, interior wall, flooring, even bolts. Engineers continually manipulated plans on their SolidWorks screens.
Notaney began working with suppliers to collaborate more with Project Frog. The Crissy Field, Vaughan, and Jacoby projects used the same company, Ahlborn Structural Steel, to manufacture and assemble most of the kit. Tom Ahlborn, in particular, had been an excellent partner, continuing to suggest ways to improve the steel manufacture and assembly. Project Frog in return helped him cut costs and shared projected sales and volume of purchases over the coming year, with increasingly detailed projections for closer time periods. Ahlborn became the preferred vendor for steel in any project unless contract stipulations or geography made it impossible. The company also used the same construction firm, Fisher Development Inc., for three of its installations. Fisher was based in San Francisco but worked nationally as a general contractor and construction manager. The firm had worked with clients such as Williams-Sonoma and Hugo Boss and had assembled Project Frog’s demonstration module at the Greenbuild Conference. Fisher had also worked on the Watkinson School in Connecticut. Although no single Project Frog building gave Fisher much money, the firm appreciated that construction was predictable and short, which allowed it to finish a project at a profit and move on. Moreover, Fisher believed Project Frog was ripe to expand into markets beyond education, and consequently all the small buildings would begin to add up.
Meanwhile, Project Frog worked with YKK and its partner Erie Architectural Products to procure exterior glass panels and curtain walls. The new glass panels could be installed legally and technically by steel unions, which meant Project Frog’s contractors no longer needed to have glaziers on-site. The panels could also be modified for optimal performance in different environmental conditions. Roof panel suppliers were also involved, but to date the most effective relationships had been with Ahlborn and Fisher. Notaney was working to develop strategic partnerships with other suppliers.
The relationship with Fisher made sense for Tibbs as well. “We pick a guy we trust to fulfill our brand promise and make it a pleasurable experience,” he said. After all, the company wanted to meet aggressive targets for margins and revenue. The company needed to sell the value of the learning experience its buildings created. Further, Tibbs wanted the company to grow not just by getting more deals in more markets but by keeping more of the money for Project Frog from each deal by integrating more features into its own manufactured kit. A switch to ceilings that integrated insulation and panels as well as the structural frame moved the company further along that path.
Tibbs continued to push for automating more of the design, improving algorithms, filing patents, and protecting the company’s earlier patents. He brought in GTC Law Group of Boston for patent advice. Tibbs wanted a way for clients to select features through online models and see the corresponding performance characteristics of the different designs. Once a plan was chosen, the computer could confirm the design, print a plan for the architect, and print any necessary parts designs and orders for suppliers.
In 2010, Project Frog raised an additional \$5.2 million through convertible notes. That brought another venture capital director onto the company’s board. He joined Ann Hand, who had a spot by virtue of being Project Frog’s CEO; Miller, who had moved out of daily operations not long after Hand had arrived; and the lead venture capital partner from the B round. The fifth seat on the board, by charter designated for an independent member, remained vacant.
By summer of 2010, the market seemed to be improving, and Project Frog was on track to double its revenue that year. In fact, Project Frog was poised to flourish in a market that had changed radically from 2007. Miller said, “We mitigate risk. Clients are smarter and much more rigorous about goals and timeframes. Everyone wants to do green. That’s changed. It has to be green, and it has to be cost-effective. They go together. That’s just the way it is now.”
The Crissy Field Center in Golden Gate National Park attracted 1,500 people to its grand opening and made a strong impression on visitors. Hundreds of people became Facebook fans of Project Frog. Guided tours of the center continued to draw many visitors through spring 2010 as did the building’s café. Miller said with pride, “People walk into Crissy Field and say, ‘I want one of these.’ People don’t usually buy buildings that way.” But now with Project Frog, they could. In 2010 Project Frog had something very tangible and attractive to sell.
Miller continued to ponder how best to present his product. The company offered a unique synthesis of product and technology; sometimes he called it a product-oriented technology company. He liked the idea of portraying Project Frog as an integrated space and energy package in one leased product rather than a building with a mortgage that would also cut a client’s energy costs. Furthermore, if prices reached the levels they had in 2007, breakeven could be cut in half. Miller wanted to underline that in a way people could understand and incorporate into their accounting. He worried, however, that the company might default downward into a conventional construction company if it did not maintain its industry expertise and vision for innovation at the edge of the industry.
The decision about an exit strategy also remained. Project Frog could go public. It also could court potential buyers. Yet many attitudes still reflected the confusion early investors felt about Project Frog’s business. The venture capitalists struggled to find comps (comparable firms) to do the valuation. Various corporations with related business entities had expressed interest in investing in Project Frog. Each saw something it liked because the company integrated so many previously distinct businesses. Tibbs conjectured a global construction company or European modular building maker could make a bid. “We have about a three-year expectation to exit,” Tibbs said. “I’m hoping to accelerate that.” The whiteboard behind him was covered with red marker goals and graphs for the coming years. “If things go according to plan, we should be profitable by Q1 next year. For me, going public would be more fun because I’ve never done that before.”
Project Frog and its venture capitalist investors appeared to share a business philosophy about green and what Mark Miller referred to as “edge of the grid energy areas”—the overlooked but attractive opportunities for innovation now that businesses and consumers were interested in saving energy and willing to invest in technology controls. The buyer had to get over the conventional “first cost” mentality, however. The new approach required monetizing the life cycle of the solution. It might mean taking facilities off the balance sheet.
Mark Miller was interested in these options, but his mind was focused on more immediate concerns:
We have to make sales and we have to execute. We have the product designed and defined. Now we need revenue. We’re inventing a category though. The VCs understand that and they like us, but aren’t sure how to think about us. We were one of the last VC deals done before the economy collapsed. And of course the market stopped for us too. I mean schools have no money and states are basically bailing out. And sales cycles are long because buyers have to be educated. We have our work cut out for us.
Award Criteria: City College of San Francisco, Child Development Center
Thermal Comfort Strategy
The units at CCSF strongly support thermal comfort, enhancing occupant productivity and satisfaction. The number of operable windows for ventilation exceeds minimum requirements. The efficiency and quality of thermal comfort with the Raised Floor System is superior to the overhead or wall-mounted, fan-diffused systems in most modular units. Air is supplied by multiple floor diffusers, creating an upward flow of fresh air via natural convection that is exhausted through ceiling return outlets, unlike overhead systems that mix cool and heated air near the ceiling and spend energy forcing it down to user zones. Cool air is supplied at higher temperatures and lower velocity than in overhead systems, reducing discomfort from high air speeds and cold spots. Energy savings are due to the diffusers’ close proximity to occupants and their user-defined location, direction, and flow; the living roof, which supports consistent indoor temperatures; R-19 rigid expanded polystyrene (EPS) in the roof and the floor; and R-15 EPS in the walls.
Indoor Air Quality Strategy
The CCSF classrooms exemplify FROG’s effort to circumvent the health problems, low test scores, and high absentee rates posed by indoor air pollution. We use low/no-VOC carpet tiles, ceiling tiles, and interior paint. Sealants meet or exceed the requirements of the Bay Area Air Quality Management District Regulation 8, Rule 51. Intersept antimicrobial preservative in the carpet tiles combats a broad spectrum of bacteria and fungi. BioBlock inhibits the spread of mold and mildew on ceiling tiles. Underfloor air distribution delivers outside air from below directly to the occupants’ breathing zone. New air replaces contaminated air instead of diluting it with old air, the method of most portables. FROG units allow for up to 100% outside air, providing clean air to the occupants and reducing any remaining VOCs and bacteria in occupied areas. Unlike most portables’ fiberglass batt insulation, FROG’s Ultratouch cotton fiber insulation resists microbial growth, doesn’t cause skin irritation, and is formaldehyde-free.
Daylighting Strategy
The FROG building’s integrated system of customizable window wall units, sunshades, and clerestory windows allows the interiors of the CCSF classrooms to be illuminated far more naturally and efficiently than any other modular classroom. The customizable window wall system (85% of the exterior walls) consists of interchangeable window/wall panels of user-specified colors and materials. Each 2’ × 4’ panel can be high-performance glass or insulated composite panel. Design customization allows a perfect balance between the need for abundant light in some areas (i.e., play/learning rooms) and less in others (i.e., nap areas), shadow reduction, and/or heat gain. Sunshades are mounted to the south and west sides of the curtain walls to protect each classroom from an excess of direct sunlight and reduce glare. A signature feature of FROG’s structure is the unique clerestory. Each unit’s sloped roof assembly is enveloped on three sides by clerestory windows that flood the unit with natural light.
Acoustic Strategy
The acoustical ceiling panels used at CCSF have a Noise Reduction Coefficient (NRC) of 0.70. This reduces most echoing within the building, thus increasing speech clarity. In addition to the R-19 roof insulation (with space for an additional R-19), the living roof reduces outside noise transmission. The underfloor air distribution system implements a pressurized plenum and harnesses natural convection to assist the airflow out of the floor diffusers and directly into the occupied zone, eliminating the noisy ducts of traditional portable models, which carry air pushed at high velocities. Most modular classrooms use a wall-mounted HVAC system, resulting in high levels of noise. Project FROG eliminates this excess noise with its Powerpak, which places the HVAC system in an exterior room separated from the learning area by an auxiliary room or restroom and an extra-thick wall filled with sound-attenuating insulation.
Materials and Site Strategy
The FROG units at CCSF use high-quality recycled/recyclable materials, including recyclable acoustic ceiling tiles (75% recycled content [RC]); raised floor tiles (33.9% RC: 1.8% post-consumer [PC], 22.1% post-industrial [PI]); and Ultratouch batt insulation (85% PI recycled natural fibers). Carpet tiles (44% RC) and vinyl tiles (92% RC, 25% PC) can be replaced individually (instead of the entire floor) and reused. Most modular buildings consist of wood; all FROG units are steel (up to 100% RC), which can always be recycled. Non-steel materials (i.e., wood) are field cut, creating excess waste; FROG parts are cut in a metal shop, and all excess is recycled. The FROG units are designed for minimal site disruption. Each unit’s foundation takes up less than half of the overall square footage of the unit itself, requiring only 7.5 cubic yards of concrete. The living roof reduces rainwater runoff; serves as a protective layer, increasing the building’s lifespan; and contributes to water and air purification.
Architectural Excellence
Customized and flexible, the new campus at CCSF is architecturally stylish inside and out. Exciting and expanding upward, the undulating roofline rises in the middle and lowers at the sides to provide a dramatic expression. The grand curved rear (which hides unsightly mechanical equipment) is trimmed with rounded edges to set a modern tone. The customized exterior earth tone colors were chosen to blend with the surrounding neighborhood context. The window wall system has interesting patterns of wall vs. window to create a unique exterior and functional interior. To foster creativity and encourage collaboration, the interior is full of natural light with optimal acoustics and clear sight lines. The careful configuration and positioning of the units creates a comfortable and safe campus environment, and is truly beautiful from every angle.
Economic Practicality
By using FROG units, CCSF realized significant economic savings that will multiply over time. Due to grouping and orientation, the CCSF FROG units are more than 30% more energy efficient than Title 24 requires. FROGs are built quickly, enabling buyers to save on construction escalation costs (up to 12% per year). Since FROG units are California DSA Pre-Certified buildings and can be approved “over the counter,” the permit fees are lower than for traditional construction. FROG installation costs are lower than traditional construction since units can be installed on a variety of surfaces with minimal waste, site preparation, clean-up, and landscaping. FROG buildings will perform optimally and in line with permanent structures. Costs associated with removal, demolition, and temporary building replacements are eliminated. The use of steel and glass eliminates roof, wall, and flooring degradation for low long-term costs. FROG modular building requires less on-site skilled labor.
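The escalation savings claimed above can be made concrete with a rough calculation. The sketch below is purely illustrative: the project cost and delay figures are hypothetical assumptions, not data from the CCSF project; only the up-to-12%-per-year escalation rate comes from the award entry.

```python
# Illustrative estimate of construction-escalation savings from faster delivery.
# All inputs except the 12% escalation rate are hypothetical assumptions.

def escalation_cost(base_cost, annual_rate, years_delayed):
    """Extra cost from price escalation over a delay, compounded annually."""
    return base_cost * ((1 + annual_rate) ** years_delayed - 1)

base_cost = 2_000_000      # assumed project cost in dollars
rate = 0.12                # escalation of up to 12% per year (from the award entry)
conventional_delay = 1.0   # assumed extra year for conventional construction
modular_delay = 0.2        # assumed delay for a quickly deployed modular kit

savings = (escalation_cost(base_cost, rate, conventional_delay)
           - escalation_cost(base_cost, rate, modular_delay))
print(f"Estimated escalation savings: ${savings:,.0f}")
```

Under these assumed numbers, shaving roughly ten months off the schedule avoids most of a year’s price escalation on a \$2 million project, on the order of \$200,000.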
Other
Energy Efficiency: FROG succeeded in making CCSF the most energy efficient of its kind. The raised floor system delivers air via floor diffusers directly to the occupied zone, creating an upward flow of fresh air by natural convection. By using higher-temperature air for cooling, the system can utilize outside air for a longer period, thereby reducing HVAC energy consumption. The natural light from the clerestory and window walls decreases the artificial light necessary for internal illumination. The glass is Solarban 70XL Solar Control Low-E and blocks 63% of the direct solar heat, reducing the energy and costs of cooling while still providing the benefits of natural light. The smart lighting system balances the amount of natural light with daylight sensors, allowing for less energy usage, and reduces wasted energy with occupancy sensors. Photovoltaic panels produce energy on-site for the units’ use and distribute it back to the city grid when the units are not in use.Modular Business Institute, “City College of San Francisco—Child Development Center,” accessed January 30, 2011, http://www.modular.org/Awards/AwardEntryDetail.aspx?awardentryid=370.
Project Frog Wins 2008 Crunchies Award for “Best Clean Tech”
SAN FRANCISCO, Calif.—January 13, 2009—Project FROG, San Francisco-based manufacturer of LEED rated high performance building systems, is pleased to announce it was honored on Friday with a 2008 Crunchies Award for “Best Clean Tech” company. The Crunchies, co-hosted by GigaOm, VentureBeat, Silicon Valley Insider and TechCrunch, is an annual industry award that recognizes and celebrates the most compelling start-ups, internet and technology innovations of the year. “We were honored just to be included as a finalist, so we were surprised and thrilled to receive the award for Best Clean Tech Company,” said Mark Miller, founder and CEO of Project FROG. “Clean Technology is an emerging field with tremendous opportunity for innovation, and we have great need for creative entrepreneurs, venture capitalists and especially prescient media such as the sponsors of the Crunchies. The other finalists are remarkable companies with important innovation and technology, and it’s a privilege to be recognized among them.” The awards were host to more than 80 nominees across 16 categories, and winners included Facebook, GoodGuide, Amazon Web Services and Google Reader. Better Place was the runner-up in the Clean Tech category.
About Project FROG
Better, greener, faster, cheaper. Smart. Project Frog, Inc. is a venture-backed company founded in 2006 with the mission of designing and manufacturing smart buildings—high-performance, green building systems that are healthy, quick to deploy, affordable, sustainable and permanent. The company’s leadership team comprises award-winning business professionals, engineers, architects as well as accomplished entrepreneurs and innovative builders. FROG (Flexible Response to Ongoing Growth) products are contemporary, highly functional, energy efficient, quick-to-deploy and adaptable. The recipient of numerous industry awards, Project FROG is at the forefront of change for a new standard in green building. For more information, visit http://www.projectfrog.com.Cleantech PR Wire, “Project FROG Wins 2008 Crunchies Award for ‘Best Clean Tech,’” press release, January 13, 2009, accessed March 7, 2011, www.ct-si.org/news/press/item.html?id=5279.
Project Frog Building Systems for the Future
I caught a small segment of an Anderson Cooper 360 show that highlighted the first energy-efficient building in New England. It’s also the only independent school in Hartford, Connecticut. Watkinson School—Center for Science and Global Studies is a Project Frog design. Project Frog’s website states it “makes the most technologically advanced, energy-efficient building systems on the planet. Employing innovative clean technology across the construction spectrum.” I was impressed, but then again I’ve always been in the modern, contemporary mode, which is Project Frog’s style.
Watkinson School needed a new building, and fast. So in keeping with the theme of science and global studies, which surely covers global climate change, the school went with Project Frog’s building plans and concepts, and seven months later the building was ready. It leaves no carbon footprint and costs far less to run than a conventional building.
Check out the segment I saw on CNN and Project Frog’s website for more information. To me this looks like the way to go for charter schools, new office buildings, retail, and hopefully homes of the future. And the biggest news here: it’s cheaper than standard building structures. Project Frog’s website lists the qualities of its buildings:
Better
Healthier: low VOC, high air quality, abundant daylight
Higher quality: engineered, factory built, premium materials
Safer: 2008 IBC, zone 4 seismic, 110+ mph wind
Greener
Materials: high recycled content
Operations: 50–70% less consumption
Waste reduction: near-zero on-site construction waste
Faster
Purchase: single integrated point of purchase
Permit: weeks, not months
Build: 5× faster than traditional construction
Cheaper
Purchase: 25–40% less first cost
Operate: 50–75% less operational cost
Recycle: 100% recycle potential
I think we’re going to hear a whole lot more about Project Frog. Finally, a company that presents a win-win situation for new building construction. Oh, I forgot to include that local contractors put up the buildings too.
Other Stories
green.venturebeat.com/2010/01/19/project-frog-leaps-ahead-with-5-2m-for-greener-school-buildings
“Project Frog Building Systems for the Future,” BlogsMonroe.com, March 23, 2010, accessed April 5, 2010, www.blogsmonroe.com/world/2010/03/project-frog-building-systems-for-the-future.
Ann Hand, New CEO at Project Frog
World-Class Green Energy Executive to Grow Markets and Scale Business for Leading Manufacturer of Smart Buildings
SAN FRANCISCO—(BUSINESS WIRE)—Project Frog (http://www.projectfrog.com), leading manufacturer of smart building systems, announced today that Ann Hand has joined the company as Chief Executive Officer. She will provide strategic leadership as Project FROG seeks to capitalize on the high growth market for green buildings with its innovative high performance building systems.
“I am delighted that Ann has decided to join the Project FROG team,” said founder Mark Miller. “I look forward to working closely with her to develop our next generation of green building products and accelerate our growth. Ann has a great track record of building scalable businesses with sustainability as a cornerstone.”
Ann is a highly experienced executive within the clean energy sector and comes to Project FROG from BP where she was Senior Vice President of Global Brand Marketing and Innovation with responsibility for driving operational performance across 25,000 retail gas stations. Prior to that role, she was CEO of BP’s Global Liquefied Petroleum Gas business unit and oversaw 3,000 employees in 15 countries. Before BP, Ann held marketing, finance and operation positions at Exxon Mobil and McDonald’s Corporation.
“I believe in the mission of this company, the quality of its people and the potential of our technology to transform the building industry,” said Ann. “I was fortunate to have the satisfaction of making things ‘a little better’ at BP, and am compelled by the opportunity at Project FROG to change how buildings are built and redefine standards for how they perform…we can make construction a lot better.”
Chuck McDermott, a Project FROG board member and General Partner at RockPort Capital Partners says, “Ann is a very dynamic executive who understands how to create vision and build brands. We’re confident that she will provide important leadership as Project FROG diversifies products that grow markets and monetize its game-changing innovation.”
About Project FROG
Better, Greener, Faster. Smart. Project FROG makes the most technologically advanced, energy-efficient building systems on the planet. Employing innovative clean technology across the construction spectrum, Project FROG aims to transform the building industry by creating new standards for healthy buildings that significantly reduce energy consumption and construction waste. Venture funding from Rockport Capital facilitated entrance into education and governmental markets in California, New England and Hawaii. Near-term plans include expansion into new geographies and market sectors.
Project Frog’s smart building systems are frequent recipients of industry awards for their design and performance. For more information, visit http://www.projectfrog.com.
About RockPort Capital Partners
RockPort Capital Partners, http://www.rockportcap.com, is a leading venture capital firm partnering with clean tech entrepreneurs around the world to build innovative companies and bring disruptive technologies and products to the 21st century. RockPort’s investment approach is distinguished by collaboration with management teams to foster growth and create value. Combining domain expertise with policy and international experience, RockPort has a proven track record of leveraging its insights and networks to foster growth and create value.Business Wire, “Ann Hand New CEO at Project FROG,” news release, September 22, 2009, accessed September 1, 2010, http://www.businesswire.com/news/home/20090922005679/en/Ann-Hand-CEO-Project-FROG.
Interview with CEO Ann Hand
alisterpaine.info/2009/11/16/ceo-interview-ann-hand-of-project-frog
Time-Lapse Video of Project Frog Building at Greenbuild
it.truveo.com/Project-FROG-at-Greenbuild-2008/id/2823405421
Rating Environmental Performance in the Building Industry: Leadership in Energy and Environmental Design (LEED)
LEED provides building owners and operators a concise framework for identifying and implementing practical and measurable green building design, construction, operations and maintenance solutions.US Green Building Council, “Intro—What LEED Is,” accessed January 28, 2011, www.usgbc.org/DisplayPage.aspx?CMSPageID=1988.
- US Green Building Council
Environmentally preferable, “sustainable,” or “green” building uses optimal and innovative design and construction to provide economic, health, environmental, and social benefits. Green buildings cost little or nothing more to build than conventional facilities and typically cost significantly less to operate and maintain while having a smaller impact on the environment.Davis Langdon, Cost of Green Revisited: Reexamining the Feasibility and Cost Impact of Sustainable Design in the Light of Increased Market Adoption, July 2007, accessed January 28, 2011, www.centerforgreenschools.org/docs/cost-of-green-revisited.pdf; Steven Winter Associates Inc., GSA LEED Cost Study, October 2004, accessed January 28, 2011, www.wbdg.org/ccb/GSAMAN/gsaleed.pdf; US Green Building Council–Chicago Chapter, Regional Green Building Case Study Project: A Post-Occupancy Study of LEED Projects in Illinois, Fall 2009, accessed January 28, 2011, www.usgbc-chicago.org/wp-content/uploads/2009/08/Regional-Green-Building-Case-Study-Project-Year-1-Report.pdf. These savings plus a burnished environmental reputation and improved indoor comfort mean green buildings can command higher rents and improve occupant productivity.Piet Eichholtz, Nils Kok, and John M. Quigley, “Doing Well by Doing Good? Green Office Buildings” (Program on Housing and Urban Policy Working Paper No. W08-001, Institute of Business and Economic Research, Fisher Center for Real Estate & Urban Economics, University of California, Berkeley, 2008), accessed January 28, 2011, www.jetsongreen.com/files/doing_well_by_doing_good_green_office_buildings.pdf. In addition, green buildings’ life-cycle costing provides a more accurate way to evaluate long-term benefits than the traditional focus on initial construction cost alone.Andrea Larson, Jeff York, and Mark Meier, “Rating Performance in the Building Industry: Leadership in Energy and Environmental Design” (UVA-ENT-0053), 2010 Darden Case Collection.
All other references in this section, unless otherwise noted, come from this source.
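Life-cycle costing, mentioned above, can be sketched as a simple present-value comparison: first cost plus the discounted stream of operating costs. The sketch below is a minimal illustration under entirely hypothetical figures (first costs, operating costs, discount rate, and analysis period are all assumptions, not data from the case); it shows how a modest first-cost premium can be outweighed by lower operating costs over a building’s life.

```python
# Hypothetical life-cycle cost comparison. All figures are illustrative
# assumptions, not data from the LEED studies cited in the text.

def life_cycle_cost(first_cost, annual_operating_cost, years, discount_rate):
    """First cost plus the present value of operating costs over the period."""
    pv_operations = sum(
        annual_operating_cost / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )
    return first_cost + pv_operations

# Conventional building: lower first cost, higher energy/maintenance spend.
conventional = life_cycle_cost(1_000_000, 60_000, years=30, discount_rate=0.05)
# Green building: assumed 2% first-cost premium, 40% lower operating cost.
green = life_cycle_cost(1_020_000, 36_000, years=30, discount_rate=0.05)

print(f"Conventional: ${conventional:,.0f}  Green: ${green:,.0f}")
```

Under these assumed inputs the green building’s 30-year life-cycle cost comes out well below the conventional building’s, even though its first cost is higher, which is exactly the comparison a first-cost-only evaluation misses.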
Although many were interested in the idea of green building, in the early 1990s green building was difficult to define, which slowed the market adoption of its principles and practices. In response, the USGBC was formed in 1993 in association with the American Institute of Architects, the leading US architectural design organization. By 2000, USGBC had about 250 members that included property owners, designers, builders, brokers, product manufacturers, utilities, finance and insurance firms, professional societies, government agencies, environmental groups, and universities. Those council members helped create the LEED rating system, released to the public in 2000. The LEED standard intended to transform the building market by providing guidelines, certification, and education for green building. Thus architects, clients, and builders could identify and acquire points across a variety of environmental performance criteria and then apply for independent certification, which verified the green attributes of the building for others, such as buyers or occupants.
LEED quickly expanded as it filled the need for a reliable definition of green building. Within two years of its release, LEED captured 3 percent of the US market, including 6 percent of commercial and institutional buildings under design that year. By 2003, USGBC had more than three thousand members, more than fifty buildings had been LEED certified, and more than six hundred building projects totaling more than ninety-one million square feet were registered for future certification in fifty US states and fifteen countries.US Green Building Council, Building Momentum: National Trends and Prospects for High-Performance Green Buildings, February 2003, 1, 11, 13, accessed January 28, 2011, www.usgbc.org/Docs/Resources/043003_hpgb_whitepaper.pdf.
LEED found multiple proponents. In December 2005, USGBC made the Scientific American 50, the magazine’s prestigious international list of “people and organizations worldwide whose research, policy, or business leadership has played a major role in bringing about the science and technology innovations that are improving the way we live and offer the greatest hopes for the future.”US Green Building Council, “USGBC Named to ‘Scientific American 50,’” news release, January 1, 2006, accessed January 28, 2011, www.usgbc.org/News/PressReleaseDetails.aspx?ID=2045. The federal government, through divisions such as the General Services Administration and US military, began providing incentives and requiring that its projects be LEED certified. The trademarked LEED certification became the de facto green building code for many locations, such as the cities of Santa Monica and San Francisco, or was rewarded with tax breaks, such as in New York, Indiana, and Massachusetts. Corporate and public sector organizations with certified or registered buildings soon included Genzyme, Honda, Toyota, Johnson & Johnson, IBM, Goldman Sachs, Ford, Visteon, MIT, and Herman Miller.
By July 2010, USGBC membership had jumped to over 30,000, more than 155,000 building professionals had been credentialed formally in the LEED system, and 6,000 buildings had been certified as meeting LEED criteria. The LEED system had been revised and expanded to include homes, renovation, and neighborhood development, not just individual new commercial buildings. Almost half the US states had begun to require LEED or equivalent certification for most state buildings. Hence, despite its shortcomings and competition, LEED remains the best-known green building program, and USGBC remains a committee-based, member-driven, consensus-focused nonprofit coalition leading a national effort to promote high-performance buildings that are environmentally responsible, profitable, and healthy places to live and work.
Why the Building Industry?
Buildings consume many resources and produce much waste. In the United States, buildings consume about 40 percent of all energy, including 72 percent of electricity, and 9 percent of all water. As a result, buildings produce about 40 percent of all greenhouse gas emissions. They also produce solid waste. A 2009 EPA study estimated that in one year, building construction, renovation, and demolition alone produced 170 million tons of debris, about half of which went straight to landfills.D&R International Ltd., “1.1: Buildings Sector Energy Consumption,” in 2009 Buildings Energy Data Book (Silver Spring, MD: US Department of Energy, 2009), 1–10, accessed January 28, 2011, buildingsdatabook.eren.doe.gov/docs/DataBooks/2009_BEDB_Updated.pdf; D&R International Ltd., “8.1: Buildings Sector Water Consumption,” in 2009 Buildings Energy Data Book (Silver Spring, MD: US Department of Energy, 2009), 8-1, table 8.1.1, accessed January 28, 2011, buildingsdatabook.eren.doe.gov/docs/DataBooks/2009_BEDB_Updated.pdf; US Green Building Council, “Green Building Facts,” accessed March 23, 2011, http://www.usgbc.org/ShowFile.aspx?DocumentID=5961; US Environmental Protection Agency, Estimating 2003 Building-Related Construction and Demolition Materials Amounts, accessed January 28, 2011, www.epa.gov/wastes/conserve/rrr/imr/cdm/pubs/cd-meas.pdf. Since Americans spend 90 percent of their time indoors, the building environment is also key to overall health.
The construction industry has major economic impacts. Construction and renovation together form the largest sector of US manufacturing, and buildings and building products span more Standard Industrial Classification codes than any other industrial activity. The value of new construction put in place rose from \$800 billion in 1993 to peak at nearly \$1.2 trillion in 2006, equal to 5 to 8 percent of GDP over that span. About half of construction in the past two decades has been residential and about one-third commercial, manufacturing, office, or educational space (Figure 7.8). Including highways and other nonbuilding construction, total construction is roughly 70 percent private and 30 percent public.US Census Bureau, “Construction Spending: Total Construction,” accessed September 3, 2010, www.census.gov/const/www/totpage.html. Hence the building sector presents some of the most accessible opportunities to develop innovative strategies for increasing profits and addressing environmental and related community quality-of-life concerns.
Buildings, however, have some characteristics that can impede environmental design. They have a thirty- to forty-year life cycle from planning, design, and construction through operations and maintenance (O&M) and renovation to ultimate demolition or recycling. This long, varied life span requires advance planning to maximize environmental benefits and minimize harm and can lock older, less efficient, or hazardous technologies such as asbestos or lead paint in place. Indeed, advance planning is key. Structural and site design is the most important factor determining performance and cost throughout a building’s life.
LEED Silver-Qualified PNC Firstside Center
Buildings also involve multiple stakeholders, which can complicate optimization of the system. Costs are borne by one or more parties, such as owners, operators, and tenants. This division can hamper maximizing the overall efficiency of the building, as various groups vie for their own advantage or simply fail to coordinate their efforts. Wages and benefits paid to occupant employees dwarf all other expenses but are typically not included in building life-cycle costs. Depending on the arrangement, a tenant may pay for most of O&M but have had no say in the original design or site selection. A system such as LEED can make all parties aware of environmental performance and thus help them collaborate to improve it while also assuring others that the building has been designed to a certain standard.
How LEED Works
USGBC created the LEED Green Building rating system to, in the council’s words, transform the building market by doing the following:
• Defining green building by establishing a common standard of measurement
• Promoting integrated, whole-building design practices
• Recognizing environmental leadership in the building industry
• Stimulating competition
• Raising consumer awareness of green building benefits
To achieve these goals, LEED provides a comprehensive framework for assessing the environmental performance of a building over its lifetime as measured through the following categories (Table 7.1):
• Sustainable sites. Minimizing disruption of the ecosystem and new development.
• Water efficiency. Using less water inside and in landscaping.
• Energy and atmosphere. Minimizing energy consumption and emissions of pollutants.
• Materials and resources. Using recycled or sustainable building materials and recycling construction debris.
• Indoor environmental quality. Maximizing indoor air quality, daylight, and comfort.
• Innovation and design process. Fostering breakthroughs and best practices.
• Regional priorities. Credits that vary by site to reward local priorities.
Projects within a given LEED rating system can earn points in each category, and all points are equal no matter the effort needed to achieve them. For instance, installing bike racks and a shower in an office building can earn one point for Sustainable Sites, as can redeveloping a brownfield. Merely including a LEED Accredited Professional (LEED AP) on the design team earns a point for Innovation and Design. The same action could also earn multiple points across categories. Installing a green roof could potentially manage storm water runoff, mitigate a local heat island, and restore wildlife habitat. The most points are concentrated in energy efficiency, which accounts for nearly one-third of all possible points (Figure 7.10). Under LEED 3, released in 2009, once a project gains 40 of the possible 110 points and meets certain prerequisites, such as collecting recyclable materials, it can apply for LEED Basic certification. (The criteria are slightly different for LEED for residences.) This point system makes LEED flexible about how goals are met, rewards innovative approaches, and recognizes regional differences. This systems perspective distinguishes LEED from conventional thinking.
Table 7.1 LEED for New Construction Rating System
Sustainable Sites 26
Prereq 1 Construction Activity Pollution Prevention 0
Credit 1 Site Selection 1
Credit 2 Development Density and Community Connectivity 5
Credit 3 Brownfield Redevelopment 1
Credit 4.1 Alternative Transportation—Public Transportation Access 6
Credit 4.2 Alternative Transportation—Bicycle Storage and Changing Rooms 1
Credit 4.3 Alternative Transportation—Low-Emitting and Fuel-Efficient Vehicles 3
Credit 4.4 Alternative Transportation—Parking Capacity 2
Credit 5.1 Site Development—Protect or Restore Habitat 1
Credit 5.2 Site Development—Maximize Open Space 1
Credit 6.1 Stormwater Design—Quantity Control 1
Credit 6.2 Stormwater Design—Quality Control 1
Credit 7.1 Heat Island Effect—Nonroof 1
Credit 7.2 Heat Island Effect—Roof 1
Credit 8 Light Pollution Reduction 1
Water Efficiency 10
Prereq 1 Water Use Reduction—20% Reduction 0
Credit 1 Water Efficient Landscaping 2 to 4
Credit 2 Innovative Wastewater Technologies 2
Credit 3 Water Use Reduction 2 to 4
Energy and Atmosphere 35
Prereq 1 Fundamental Commissioning of Building Energy Systems 0
Prereq 2 Minimum Energy Performance 0
Prereq 3 Fundamental Refrigerant Management 0
Credit 1 Optimize Energy Performance 1 to 19
Credit 2 On-Site Renewable Energy 1 to 7
Credit 3 Enhanced Commissioning 2
Credit 4 Enhanced Refrigerant Management 2
Credit 5 Measurement and Verification 3
Credit 6 Green Power 2
Materials and Resources 14
Prereq 1 Storage and Collection of Recyclables 0
Credit 1.1 Building Reuse—Maintain Existing Walls, Floors, and Roof 1 to 3
Credit 1.2 Building Reuse—Maintain 50% of Interior Nonstructural Elements 1
Credit 2 Construction Waste Management 1 to 2
Credit 3 Materials Reuse 1 to 2
Credit 4 Recycled Content 1 to 2
Credit 5 Regional Materials 1 to 2
Credit 6 Rapidly Renewable Materials 1
Credit 7 Certified Wood 1
Indoor Environmental Quality 15
Prereq 1 Minimum Indoor Air Quality Performance 0
Prereq 2 Environmental Tobacco Smoke (ETS) Control 0
Credit 1 Outdoor Air Delivery Monitoring 1
Credit 2 Increased Ventilation 1
Credit 3.1 Construction IAQ Management Plan—During Construction 1
Credit 3.2 Construction IAQ Management Plan—Before Occupancy 1
Credit 4.1 Low-Emitting Materials—Adhesives and Sealants 1
Credit 4.2 Low-Emitting Materials—Paints and Coatings 1
Credit 4.3 Low-Emitting Materials—Flooring Systems 1
Credit 4.4 Low-Emitting Materials—Composite Wood and Agrifiber Products 1
Credit 5 Indoor Chemical and Pollutant Source Control 1
Credit 6.1 Controllability of Systems—Lighting 1
Credit 6.2 Controllability of Systems—Thermal Comfort 1
Credit 7.1 Thermal Comfort—Design 1
Credit 7.2 Thermal Comfort—Verification 1
Credit 8.1 Daylight and Views—Daylight 1
Credit 8.2 Daylight and Views—Views 1
Innovation and Design Process 6
Credit 1.1 Innovation in Design: Specific Title 1
Credit 1.2 Innovation in Design: Specific Title 1
Credit 1.3 Innovation in Design: Specific Title 1
Credit 1.4 Innovation in Design: Specific Title 1
Credit 1.5 Innovation in Design: Specific Title 1
Credit 2 LEED Accredited Professional 1
Regional Priority Credits 4
Credit 1.1 Regional Priority: Specific Credit 1
Credit 1.2 Regional Priority: Specific Credit 1
Credit 1.3 Regional Priority: Specific Credit 1
Credit 1.4 Regional Priority: Specific Credit 1
Total 110
Source: US Green Building Council, “LEED for New Construction and Major Renovation,” accessed March 7, 2011, http://www.usgbc.org/ShowFile.aspx?DocumentID=1095.
LEED has been amended regularly to respond to emerging needs. Partly in reaction to criticism that LEED focused too narrowly on new commercial construction, USGBC developed different LEED rating systems for different types of projects. In addition to the original LEED for New Construction and Major Renovation (LEED-NC), there are now LEED for Schools, LEED for Existing Building Operations and Maintenance (LEED-EB O&M), LEED for Commercial Interiors (LEED-CI), and LEED for Core and Shell (LEED-CS), all of which use the above categories and have similar, albeit slightly different, distributions of the 110 possible points among the categories.Rating systems are available at US Green Building Council, “LEED Resources and Tools: LEED 2009 Addenda,” accessed September 3, 2010, www.usgbc.org/DisplayPage.aspx?CMSPageID=2200#BD+C. The more recent LEED for Neighborhood Development (LEED-ND) and LEED for Homes have the same point approach but different categories. LEED-ND awards points for Innovation and Design and Regional Priorities plus Smart Location and Linkage, Neighborhood Pattern and Design, and Green Infrastructure and Buildings. LEED for Homes largely follows the categories of other building types but also has Locations and Linkages distinct from Sustainable Sites to encourage walking, infill, and so forth; Awareness and Education to encourage homeowners to educate others; and a Home Size Adjustment to acknowledge that bigger homes, efficiency notwithstanding, consume more resources than smaller ones. LEED for Homes also has 136, not 110, possible points with a lower threshold for Basic certification. LEED for Retail and LEED for Healthcare (versus more generic commercial buildings covered by LEED) were in development as of July 2010 and likely to be launched within a year.
Sample LEED Rating System Version 2.1 Credit
Energy & Atmosphere Credit 1: Optimize Energy Performance 1–10 Points
Intent
Achieve increasing levels of energy performance above the prerequisite standard to reduce environmental impacts associated with excessive energy use.
Requirements
Reduce design energy cost compared with the energy cost budget for energy systems regulated by ASHRAE/IESNA Standard 90.1-1999 (without amendments), as demonstrated by a whole building simulation using the Energy Cost Budget Method described in Section 11 of the Standard.
Table 7.2 Comparison of New versus Existing Buildings
New buildings (%) Existing buildings (%) Points
15 5 1
20 10 2
25 15 3
30 20 4
35 25 5
40 30 6
45 35 7
50 40 8
55 45 9
60 50 10
Source: Data from ASHRAE/IESNA Standard 90.1-1999.
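Table 7.2 follows a simple linear pattern: new buildings earn one point for each 5 percent of energy-cost savings starting at 15 percent, and existing buildings one point for each 5 percent starting at 5 percent, up to a cap of 10 points. A minimal sketch of that lookup (the function name is invented for illustration; an actual LEED submittal requires a whole-building energy simulation, not this arithmetic):

```python
def ea_credit1_points(percent_savings: float, existing: bool = False) -> int:
    """Points for Energy & Atmosphere Credit 1 under LEED v2.1 (Table 7.2).

    New buildings:      15% -> 1 point, 20% -> 2, ..., 60% -> 10.
    Existing buildings:  5% -> 1 point, 10% -> 2, ..., 50% -> 10.
    """
    threshold = 5 if existing else 15          # minimum savings for 1 point
    if percent_savings < threshold:
        return 0
    points = 1 + int((percent_savings - threshold) // 5)
    return min(points, 10)                     # credit is capped at 10 points

# Spot-check a few rows against Table 7.2
assert ea_credit1_points(15) == 1
assert ea_credit1_points(35) == 5
assert ea_credit1_points(60) == 10
assert ea_credit1_points(5, existing=True) == 1
assert ea_credit1_points(50, existing=True) == 10
```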
Regulated energy systems include heating, cooling, fans, and pumps (HVAC), service hot water, and interior lighting. Nonregulated systems include plug loads, exterior lighting, garage ventilation, and elevators (vertical transportation). Two methods can be used to separate the energy consumption of regulated systems. The energy consumption for each fuel may be prorated according to the fraction of energy used by regulated and nonregulated systems. Alternatively, separate meters (accounting) may be created in the energy simulation program for regulated and nonregulated energy uses.
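The proration method described above is straightforward arithmetic. A hypothetical sketch, with fuel quantities and regulated fractions invented purely for illustration:

```python
# Split each fuel's simulated consumption between regulated and
# nonregulated uses, prorated by the regulated fraction for that fuel.
# (All figures below are made-up example values, not LEED data.)
fuel_use = {"electricity_kwh": 1_200_000, "natural_gas_therms": 40_000}
regulated_fraction = {"electricity_kwh": 0.75, "natural_gas_therms": 0.5}

regulated = {fuel: use * regulated_fraction[fuel] for fuel, use in fuel_use.items()}
nonregulated = {fuel: use - regulated[fuel] for fuel, use in fuel_use.items()}

print(regulated)     # {'electricity_kwh': 900000.0, 'natural_gas_therms': 20000.0}
print(nonregulated)  # {'electricity_kwh': 300000.0, 'natural_gas_therms': 20000.0}
```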
If an analysis has been made comparing the proposed design to local energy standards and a defensible equivalency (at minimum) to ASHRAE/IESNA Standard 90.1-1999 has been established, then the comparison against the local code may be used in lieu of the ASHRAE Standard.
Project teams are encouraged to apply for innovation credits if the energy consumption of nonregulated systems is also reduced.
Optimize Energy Performance: 1–10 Points
Submittals
Complete the LEED Letter Template incorporating a quantitative summary table showing the energy-saving strategies incorporated in the building design.
Demonstrate via summary printout from energy simulation software that the design energy cost is less than the energy cost budget as defined in ASHRAE/IESNA 90.1-1999, Section 11.
Potential Technologies and Strategies
Design the building envelope and building systems to maximize energy performance. Use a computer simulation model to assess the energy performance and identify the most cost-effective energy efficiency measures. Quantify energy performance as compared with a baseline building.Reprinted courtesy of the US Green Building Council, LEED 2009 for New Construction and Major Renovations Rating System (Washington DC: US Green Building Council, 2009), last updated October 2010, accessed January 31, 2011, www.usgbc.org/DisplayPage.aspx?CMSPageID=220&.
To be LEED certified, a project is first registered for a few hundred dollars with the Green Building Certification Institute (GBCI), an independent spin-off of USGBC that assumed sole responsibility for certifying LEED buildings and training LEED APs in 2009. Documentation is gathered to demonstrate compliance with LEED criteria and then submitted to the GBCI along with another fee, over \$2,000 for an average project, for certification. Bigger projects cost more to certify, and higher levels of certification are available with more points: 50 points earns Silver, 60 Gold, and 80 or more Platinum (Figure 7.11). Higher certification typically correlates with less energy use. A 2008 study by USGBC and the New Buildings Institute found that in the United States, newly built LEED Basic commercial buildings (including offices and laboratories) used 24 percent less energy per square foot than the average of all commercial building stock, while LEED Gold and Platinum buildings used 44 percent less energy than the average. Just over half of the LEED buildings, however, performed significantly better or worse than predicted at the outset of the project, with one quarter actually consuming more energy than the code baseline.Cathy Turner and Mark Frankel (New Buildings Institute), Energy Performance of LEED for New Construction Buildings (Washington DC: US Green Building Council, 2008), accessed January 31, 2011, http://www.usgbc.org/ShowFile.aspx?DocumentID=3930.
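Taken together with the 40-point floor mentioned earlier, the LEED 3 tiers form a simple step function from point total to certification level. A minimal sketch of those thresholds (the function is illustrative, not part of any USGBC tool, and it ignores the prerequisite checks a real application must also satisfy):

```python
def leed_level(points: int) -> str:
    """Map a LEED 3 point total (out of 110) to a certification tier.

    Assumes all prerequisites are met; real certification requires
    the prerequisites regardless of how many points are earned.
    """
    if points >= 80:
        return "Platinum"
    if points >= 60:
        return "Gold"
    if points >= 50:
        return "Silver"
    if points >= 40:
        return "Certified (Basic)"
    return "Not certified"

assert leed_level(40) == "Certified (Basic)"
assert leed_level(55) == "Silver"
assert leed_level(80) == "Platinum"
```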
LEED 3 was intended to address some of these prediction problems as well as criticisms that LEED could reward, for instance, a building for air-conditioning the desert as long as it did so more efficiently than comparable buildings. LEED 3 added online tools to facilitate planning and certification. It also harmonized criteria among its rating systems for different types of projects and added points to categories that made a larger overall difference in energy use, such as building near existing public transportation infrastructure instead of a more remote location. LEED already had been twice revised prior to LEED 3, and USGBC continues to support LEED as it evolves and expands.
To simplify use and speed adoption, LEED refers to existing industry standards of practice. LEED for Homes specifies ANSI (American National Standards Institute) Z765 for calculating square footage for the Home Size Adjustment. LEED for Operations and Maintenance adheres to ASHRAE (American Society of Heating, Refrigeration, and Air-Conditioning Engineers) standards for ventilation and various American Standards for Testing and Materials (ASTM) standards for lighting and reflectance.
Many credits require submission of a letter signed by the architect, engineer, owner, or responsible party and verification of the claims in language provided by a specific LEED template. To maintain the credibility of the third-party rating system, claims to credits are subject to auditing by GBCI.
Green Building Costs and Benefits
Green building costs and benefits have multiple aspects. For LEED certification in particular, direct project costs include the administrative costs of the application process and fees, which can run into the thousands of dollars, as well as the financial impacts on building design, construction, and operation due to implementation of LEED-related measures. These costs should be evaluated in terms of total cost of ownership, including both first costs and operating costs over the building’s life cycle. Indirect costs are often harder to assess but are worthy of consideration.
Green building can add little to nothing to total design and construction cost, at least for the lower levels of LEED certification or equivalent green building codes. A study by global construction consultant Davis Langdon in 2006 found “no significant difference in average costs for green buildings as compared to nongreen buildings. Many project teams are building green buildings with little or no added cost to the amount a traditional building costs, and with budgets well within the cost range of nongreen buildings with similar programs.”Davis Langdon, Cost of Green Revisited: Reexamining the Feasibility and Cost Impact of Sustainable Design in the Light of Increased Market Adoption, July 2007, accessed January 28, 2011, www.centerforgreenschools.org/docs/cost-of-green-revisited.pdf. Green design may require particular attention and effort in the initial phases, and design costs are generally higher, but more and more firms see green as part of the standard package, not an addition. Other studies of specific buildings by the GSA and various organizations found that green design might cost a few percentage points more but significantly reduced operating costs and improved occupant comfort.Steven Winter Associates Inc., GSA LEED Cost Study, October 2004, accessed January 28, 2011, www.wbdg.org/ccb/GSAMAN/gsaleed.pdf; US Green Building Council–Chicago Chapter, Regional Green Building Case Study Project: A Post-Occupancy Study of LEED Projects in Illinois, Fall 2009, accessed January 28, 2011, www.usgbc-chicago.org/wp-content/uploads/2009/08/Regional-Green-Building-Case-Study-Project-Year-1-Report.pdf. The City of Portland, Oregon, for example, had eighteen LEED buildings in 2004 and saved more than \$1 million per year in avoided wastewater treatment costs and another \$1 million a year in lower energy bills.Mike Italiano (board member, US Green Building Council), personal communication, March 14, 2003.
In some cases, highly innovative design features might retard both market and regulatory acceptance of green buildings (especially at the local level where green design knowledge may be low), slowing the project timetable and increasing costs. For example, regulators who are unfamiliar with constructed wetlands might doubt their effectiveness as a way to reduce the impacts of storm water runoff. Similarly, the real estate market in some areas, due to a lack of familiarity, might question the value of a geothermal heating system, or condo association rules might prohibit a supplemental solar electric system.
Nonetheless, green building, especially when certified to LEED or another standard, offers many benefits. Environmentally, it reduces the strain on the local ecosystem, conserves resources and habitat, and improves indoor air quality. Economically, green building lowers operating costs, can garner tax incentives, improves public image, can lower insurance costs, improves employee productivity and attendance, and increases market value. Indeed, in a 2008 study, Piet Eichholtz and collaborators compared 700 Energy Star and LEED-certified office buildings to 7,500 conventional ones and found that the green office buildings had higher occupancy rates and could charge slightly higher rents, making the market value of a green building typically \$5 million greater than its conventional equivalent.The report states, “The results show that large increases in the supply of green buildings during 2007–2009, and the recent downturns in property markets, have not significantly affected the rents of green buildings relative to those of comparable high quality property investments; the economic premium to green building has decreased slightly, but rents and occupancy rates are still higher than those of comparable properties.” The report also concludes that green certification commands higher rental premiums and asset value at resale: “We find that green buildings have rents and asset prices that are significantly higher than those documented for conventional office space, while controlling specifically for differences in hedonic attributes and location using propensity score weights.” Piet Eichholtz, Nils Kok, and John M. Quigley, The Economics of Green Building, 3, 20, accessed January 26, 2011, www.ctgbc.org/archive/EKQ_Economics.pdf.
Given these benefits, green building will likely expand. With so much money on the line, the need for verified environmental performance and design standards will remain strong.
Alternatives to and Criticisms of LEED
Despite growth in the green building market, its \$42 billion in 2009 represented less than 10 percent of total building construction. One criticism of LEED is that as a voluntary standard, it does not force enough change fast enough. Public policy analyst David Hart concluded LEED “is inevitably bumping up against its limits” and does not “act assertively to pull along the trailing edge of ‘brown building’ practice.”David M. Hart, “Don’t Worry About the Government? The LEED-NC ‘Green Building’ Rating System and Energy Efficiency in US Commercial Buildings” (MIT-IPC-Energy Innovation Working Paper 09-001, Industrial Performance Center, Massachusetts Institute of Technology, 2009), accessed January 31, 2011, http://web.mit.edu/ipc/publications/pdf/09-001.pdf. As more governments and organizations adopt LEED or similar standards because it gives them an established, reliable metric, the market could shift more quickly toward greener construction.
A second persistent criticism of LEED has been that basic certification doesn’t represent much improvement over conventional building. As recently as 2010, renowned architect Frank Gehry criticized LEED for crediting “bogus stuff” that doesn’t truly pay off.Blair Kamin, “Frank Gehry Holds Forth on Millennium Park, the Modern Wing, and Why He’s Not into Green Architecture,” Cityscapes (blog), Chicago Tribune, April 7, 2010, accessed January 31, 2011, featuresblogs.chicagotribune.com/theskyline/2010/04/looking-down-on-the-stunning-view-of-the-frank-gehry-designed-pritzker-pavilion-from-the-art-institute-of-chicagos-renzo-pian.html. LEED certification, in this line of reasoning, distracts people from more ambitious targets, and the money spent on registration and certification—ranging from about \$2,000 for smaller buildings for USGBC members to \$27,500 for larger buildings for nonmembers—could instead be spent on more environmental improvements.For costs, see Green Building Certification Institute, “Current Certification Fees,” 2010, accessed January 31, 2011, http://www.gbci.org/main-nav/building-certification/resources/fees/current.aspx; and Green Building Certification Institute, “Registration Fees,” accessed January 31, 2011, http://www.gbci.org/Certification/Resources/Registration-fees.aspx. For criticism, see Anya Kamenetz, “The Green Standard?,” Fast Company, October 1, 2007, accessed January 31, 2011, www.fastcompany.com/magazine/119/the-green-standard.html?page=0%2C0. Such fees also mean USGBC and GBCI have an economic stake in making LEED the dominant standard of certification. USGBC has even criticized California’s State Building Code for the CalGreen label because USGBC feared the label would create confusion and detract from LEED’s value.“California’s Building Code Turns a Deeper Shade of Green,” Green Business, January 14, 2010, accessed January 31, 2011, http://www.greenbiz.com/news/2010/01/14/californias-building-code-turns-deeper-shade-green.
Finally, LEED unabashedly focuses on energy use as its main criterion for environmental performance. That has led to criticism from the nonprofit Environment and Human Health Inc. (EHHI) that LEED does too little to keep toxic materials out of buildings. An EHHI report from 2010 urged USGBC to discourage “chemicals of concern” such as phthalates and halogenated flame retardants and to include more medical professionals on its board. A USGBC vice president said he was willing to collaborate with critics to improve LEED, provided the expectations were reasonable: “LEED could say there should be no chemicals in any building and no energy used and no water and every building should give back water and energy. We could do all that, and no one would use the rating system. We can only take the market as far as it’s willing to go.”Suzanne Labarre, “LEED Buildings Rated Green…and Often Toxic,” Fast Company, June 3, 2010, accessed January 31, 2011, www.fastcompany.com/1656162/are-leed-buildings-unhealthy. Also Tristan Roberts, “New Report Criticizes LEED on Public Health Issues,” Environmental Building News, June 3, 2010, accessed January 31, 2011, http://www.buildinggreen.com/auth/article.cfm/2010/6/3/New-Report-Criticizes-LEED-on-Public-Health-Issues.
Yet LEED seems to have found just where the market is willing to go. Other certification systems exist but have not attained the status that LEED has. Green Globes, for instance, began in 2000, the same year as LEED, and had an online component from its inception. Green Globes offers a similar performance rating system, and certification is often cheaper than LEED. Green Globes is more prevalent in Canada, but in the United States it is being incorporated as ANSI’s official green building standard.Green Globes, “What Is Green Globes?,” accessed September 3, 2010, http://www.greenglobes.com/about.asp. The US EPA also awards Energy Star certification to buildings that perform in the seventy-fifth percentile or better for energy efficiency in their category. Builders can apply by designing for Energy Star and completing an online application; actual operating data, however, are necessary to earn the final Energy Star label.Energy Star, “The Energy Star for Buildings & Manufacturing Plants,” accessed January 26, 2011, http://www.energystar.gov/index.cfm?c=business.bus_bldgs. There is no fee for certification. Finally, various regional certification programs exist, from EarthCraft in the southeast United States to Build It Green in California. These systems tend to be tailored more specifically to their locations.
Green building has become increasingly desirable. LEED and other certification systems have helped to make it even more desirable by creating trust. Builders, regulators, or the average person can know that LEED certification guarantees a baseline of environmental consideration without having to know the details of those measures or how they work in the building. LEED in particular has proven powerful and flexible enough to spread internationally and to undergo frequent revision of its existing rating systems and expansion into brand new ones.
KEY TAKEAWAYS
• Challenging the building and construction industry and its submarkets with new products and unprecedented supply-chain requirements requires managing not only technology development but also market perception and accepted practices.
• Economic downturns add unique opportunities and challenges for new ventures.
• Meeting third-party standards offers market differentiation.
EXERCISES
1. Put together an analysis of the major elements of entrepreneurial venturing and sustainability innovation applied to Project Frog.
2. In teams, identify a differentiated and innovative company and interview senior management about their market and how they overcame challenges to convince early customers to accept their product or service.
Learning Objectives
1. Compare internal and external impediments to a company’s shift toward a sustainability strategy for a new building design.
2. Understand how and why decision participants might end up at cross purposes in implementing green building designs.
3. Identify traits of successful sustainability innovation processes.
The next case is Hermes Microtech.This case was prepared by Batten fellow Chris Lotspeich in collaboration with author Andrea Larson. Andrea Larson and Chris Lotspeich, “Greening” Facilities: Hermes Microtech, Inc., UVA-ENT-0054 (Charlottesville: Darden Business Publishing, University of Virginia, 2004). Case can be accessed through the Darden Case Collection at https://store.darden.virginia.edu. Created as an amalgam of various company experiences, this case shows the decision-making complexity of building design and construction. The viewpoints of various participants provide insights into why sustainability concerns change decision processes and therefore can be so difficult for conventional organizations.
Greening Facilities: Hermes Microtech Inc.
Heather GlenName has been changed. This case is an amalgamation of different business scenarios that case researcher/writer Chris Lotspeich created. The case is not about one single company and none of the names are real; note tongue-in-cheek choice of names. pushed back in her chair in her office at Hermes Microtech Inc., which gave her a commanding view of the books, binders, notes, and messages piled around her computer. The sunset was fading out over the Pacific, and as the last of her colleagues left, she welcomed the quiet opportunity to contemplate the task before her. Hermes CEO Alden Torus had just approved the most important project in Glen’s career to date, and she didn’t want to waste any time getting started. Glen had one month to organize an initial meeting of all key participants involved in creating and building Hermes’s new headquarters. For the first time, the company would bring together professionals from each phase of facilities design, construction, and operation to initiate project planning, and Glen would run the meeting. Although she was not the construction project manager, Glen was going to try to change the way her company built and ran its facilities to make them more environmentally friendly—and in the process transform the company itself.
Much had happened in the eighteen months since Glen had been appointed special projects coordinator by Sandy Strand, Hermes’s executive vice president of environment and facilities (E&F). Strand had asked her to lead efforts to make environmental quality a higher priority in the company’s buildings and facilities, a goal the CEO shared. Glen’s work in implementing energy-efficiency improvements at one of their microchip factories had produced mixed results. She learned a great deal about the technical potential for improvement from that pilot project, but her most valuable lessons concerned the organizational dynamics of the design-build effort. She realized that the most important factors for success—as well as the greatest challenges—lay in renovating the decision-making process rather than in different design and technology choices.
As dusk fell and the cubicles outside her office sank into shadow, photo sensors increased the brightness of the fluorescent light fixtures above Glen’s desk. She sipped another mouthful of coffee to stave off any drowsiness that might follow the meal she had just shared with Torus and Strand. Torus had called the dinner meeting to discuss how best to make the company’s next planned facility an environmentally friendly or “green” building. He wanted that to happen because he believed it would benefit the company, and he had supported Strand and Glen’s efforts. Yet Torus knew it would be a challenge to change the way the organization went about the design-build process.
“I am realistic about the constraints on my ability to effect change on this topic,” Torus had told them. “My time and attention are consumed with more traditional core business issues. I can make it clear to others that I support the goal of environmental improvements, but I need to rely on you to make it happen.” Torus asked Strand and Glen to suggest how best to proceed. He liked Glen’s proposal that everyone involved in the full life cycle of the building join in an initial integrated design workshop to initiate the project. “I can’t spare the time to attend the full meeting, but I can kick it off with introductory remarks,” Torus said to her. “Send me a one-page memo with the three to five most important things you want me to say.”
After dinner, Glen had returned to the office to draft an e-mail invitation to workshop participants. In her mind’s eye, she saw their faces, and reviewed their roles in the project and in the greening efforts to date.
The Hermes Story
Hermes was a medium-sized microelectronics manufacturer based in California’s Silicon Valley. The company started as a military contractor but grew to focus on consumer electronics through a series of mergers, acquisitions, and spin-offs. It made a mix of microchips spanning a range of capabilities and applications, from complex and costly chips for personal computers and cellular phones to simpler, cheaper devices for consumer appliances and automobiles. Hermes was essentially a component maker; almost all its customers were original equipment manufacturers (OEMs). Its ten manufacturing facilities, three R&D laboratories, and twenty sales offices in the United States, Europe, and Asia employed ten thousand people and generated annual revenues of \$1 billion, with a net profit of \$100 million.
Hermes CEO Alden Torus had been with the company since its founding twenty-five years earlier. The son of immigrants, he had started in the product development department and worked his way up through the ranks. Torus was an effective and charismatic engineer with a good head for business strategy and an encyclopedic memory for detail. He epitomized the corporate culture at Hermes: hardworking and production focused, he put in long hours to help develop and launch new products. Torus understood the importance of the first-mover advantage in the fast-paced microelectronics industry. Innovation was highly valued at Hermes, and product R&D was a spending priority.
Microchip Market Dynamics
Microchips were a commodity, competition was stiff, and profit margins were relatively narrow. The industry’s business cycle was highly variable, typified by regular and significant swings in price and profits. The driving influence was the rapid pace of technological development, characterized by Moore’s Law, the observation that computing power doubles roughly every eighteen months. Racing each other as well as technical evolution, makers churned out increasingly sophisticated products, shrinking both transistor sizes and product development periods. Time to market was a critical competitive factor. The time available for new product launches rarely exceeded eighteen months, including process and yield improvements. The sector was sensitive to macroeconomic conditions, particularly consumer spending. More than 85 percent of Hermes’s revenues came from chips embedded in consumer products.
Another influence on supply and demand fluctuations was the uneven or “lumpy” process of step function increases in production capacity. Microchip manufacturing was capital intensive, and new fabrication facilities—“fabs”—took many months to bring online. When chip demand rose far enough, competing manufacturers responded quickly and invested in new capacity. Those fabs tended to come online at about the same time; the surge in supply depressed prices, inventories built up, and the market slumped. Eventually demand and prices rose again, followed by a new round of investment in manufacturing capacity for the latest products.
Chip fabs were costly and complex. Microchips were made on silicon wafers in a series of steps that were carried out within high-tech devices called tools, each of which cost millions of dollars. The tools operated inside carefully climate-controlled environments called clean rooms. Microelectronics production was very sensitive to disruption and contamination by microscopic particles. Line stoppages could ruin production batches and cost more than \$1 million per day, or as much as tens of thousands of dollars per minute for some product lines. Clean rooms were isolated seismically from the rest of the fab on dedicated support pillars, so that vibrations from minor earthquakes or even nearby truck traffic did not disrupt the tools. Process water was deionized and highly filtered before being piped into the clean room and the tools.
Fabs had extensive HVAC systems with high-performance filters to maintain the clean room’s temperature, humidity, and quantity of airborne particulates within stringent parameters. The air handlers, fans, pumps, furnaces, and chillers were located outside the clean room and delivered conditioned air and cooling water into the clean room via ducts and pipes. Those HVAC systems typically made up 40–50 percent of a fab’s electricity consumption. Fab electricity use ranged from three to fifteen megawatts (MW), depending on the size of the facility.
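The scale of those HVAC loads can be sketched with rough arithmetic from the figures above (3–15 MW total load, HVAC at 40–50 percent of it). The electricity rate and continuous-operation assumption below are illustrative, not figures from the case.

```python
# Rough estimate of annual fab HVAC electricity cost, using the ranges given
# in the text. The utility rate is a hypothetical assumption for illustration.
HOURS_PER_YEAR = 8760          # fabs run around the clock (assumption)
RATE_USD_PER_KWH = 0.10        # assumed electricity rate (illustrative)

def annual_hvac_cost(fab_load_mw, hvac_share):
    """Annual HVAC electricity cost in dollars for a continuously running fab."""
    hvac_kw = fab_load_mw * 1000 * hvac_share
    return hvac_kw * HOURS_PER_YEAR * RATE_USD_PER_KWH

low = annual_hvac_cost(3, 0.40)    # small fab, low HVAC share
high = annual_hvac_cost(15, 0.50)  # large fab, high HVAC share
print(f"${low:,.0f} to ${high:,.0f} per year")
```

Even at these assumed rates, HVAC electricity runs to seven figures annually, which is why a 30–60 percent savings opportunity was worth the consultants’ attention.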
The Evolution of Hermes’s Environmental Strategy
Microchip manufacturing involved numerous hazardous materials, toxic emissions, and energy-intensive processes. Maintaining worker safety and managing pollution were critical functions. Potentially dangerous emissions were highly regulated and strictly controlled. Traditionally, environmental health and safety (EHS) management and strategy had focused on end-of-the-pipe problems and solutions, such as treating acid-contaminated exhaust air before it was released into the atmosphere. More recently, increased attention and effort had focused on pollution prevention strategies that reduced dangerous emissions by changes in production processes. Such strategies could meet regulated emissions control requirements at less cost than end-of-the-pipe methods and often yielded economic benefits through waste reduction and other manufacturing improvements. Hermes’s environmental activities were representative of the industry in that regard. In the mid-1990s, Hermes consolidated the EHS department and the maintenance department into one E&F department.
CEO Alden Torus did not pay much attention to environmental issues during most of his career. Like most of his colleagues, he regarded pollution control as a cost of doing business, driven by compliance with ever-increasing government regulations. He considered such matters to be the responsibility of the environment and facilities department but neither a high priority for senior management nor a central element of corporate strategy. He maintained that perspective during his tenure as VP of production and his early years as CEO.
Torus’s perspective began to change when his young son developed a rare form of cancer. During the course of his son’s treatment, he discovered that several other children in his neighborhood had the same type of cancer. His teenage daughter was passionate about environmental issues and had often complained about the extent of environmental contamination in Silicon Valley, asking her father to do something about it. Chemical feedstocks and by-products of electronics manufacturing had contaminated groundwater at more than one hundred locations. Santa Clara County had twenty-nine federally designated “Superfund” toxic waste sites, the highest concentration in the nation. Torus began to wonder if that had anything to do with his son’s illness. His son recovered after long and difficult treatment, but other children with the same disease died. Although no link to any specific chemical or site was established, that family crisis prompted Torus to rethink his views on industrial pollution.
Prompted by his children, Torus began to explore new perspectives. His friend Sandy Strand, Hermes’s VP of E&F, had long been interested in the potential business opportunities described by leading advocates of the integration of ecology and commerce (see Figure 7.12 for an organizational chart). Strand introduced Torus to the writing of such thinkers as Paul Hawken, Amory Lovins, and William McDonough and the work of organizations such as The Natural Step, the Coalition for Environmentally Responsible Economies, and the World Business Council for Sustainable Development. Torus learned about new business tools and strategies, including environmental management systems, green design, and industrial ecology. He heard from other CEOs about businesses in a wide range of industries that were finding profit and competitive advantage through innovation and collaboration with leading practitioners. Soon Torus joined Strand in the belief that Hermes could realize many business benefits by incorporating more environmental and social factors with traditional economic considerations into what author John Elkington called a new “triple bottom line.”
But where would they begin? Torus and Strand shared a long-term view of the transitional process of moving their industry (and the world economy) toward the vision of a more sustainable condition. Neither man advocated rapid change without regard to cost. They continued to believe that their priority was economic success and that building the business case for green business initiatives was essential. They recognized that they were well ahead of most of their colleagues on those issues and were pragmatic about the potential scope and pace of change, particularly within the managerial constraints of executive responsibility in a publicly traded company. They had limited time and attention to devote to a new strategic initiative, capital resources were perpetually constrained, and the company lacked experience with many of the promising approaches. Yet they wanted to start somewhere—and steadily, if slowly, develop momentum for organizational change.
Hermes’s Green Initiatives
Torus began by sharing his vision of the future with the company and the public and declaring his support for prudent green business initiatives. His advocacy did not require much of his time, but it provided crucial top-level support for the employees who would carry most of the responsibility for project implementation. Initial efforts would pursue incremental improvements toward clear, measurable objectives. Those efforts would be supported by education and training, recruitment of skilled staff, and outside expertise where necessary. Hermes had built its success on innovation and rigorous quality management.
Torus set two initial priorities: (1) development of a new, more environmentally friendly line of chips and (2) a 20 percent improvement in energy and water efficiency over five years. Those programs would have to pay for themselves within five years.
The green chip project would be implemented by the R&D and operations divisions of the production department, headed by Executive VP of Production Christopher “Chip” Smith. In addition to traditional areas of performance improvement, the new microprocessor had a design goal of using at least 15 percent less electricity than the previous model, which would appeal to OEM buyers and consumers because it would extend the battery life of portable devices such as laptops and cell phones. Manufacturing process improvements would reduce waste and toxic pollution. Hermes would advertise these attributes to differentiate their product, attract environmentally conscious consumers, and boost sales, thereby (hopefully) paying for the effort.
The energy and water efficiency effort would be implemented by the facilities maintenance division of the E&F department and the operations division of the production department. The program, which would pay for itself through avoided costs, would be headed by Heather Glen, then a special assistant to Strand. At the time, Glen was a bright young electrical engineer and recent MBA graduate who had sought a position with Hermes because she had heard about the company’s greening efforts and wanted to work in that field. She had been at Hermes for one year and had spent most of that time pulling together an overview of all its fabs’ environmental performance and energy and water use. She had also initiated a pilot program to save energy through lighting retrofits at the company’s headquarters and two other office spaces, which were successful though small in scope.
Initial Efforts: The F3 Fab Energy Survey
Strand hired a team of consultants led by Rocky Mountain Institute (RMI), a nonprofit research and consulting organization. He had seen a lecture by Amory Lovins, RMI’s CEO and a resource efficiency pioneer, in which Lovins described RMI’s energy-efficiency work in fabs that saved up to half of the HVAC energy cost-effectively. He invited Lovins to meet with Torus, who agreed to a pilot effort at Hermes’s F3 fab near Dallas, Texas. Glen was designated project coordinator and liaison with RMI.
F3 was chosen because it was one of the most energy-intensive fabs in the company, water costs were relatively high, and a significant expansion was planned. The facility was built in the early 1970s by another firm and had been acquired by Hermes in the late 1980s. A renovation called Phase I was done in the late 1990s to accommodate a new production line, with only minor changes to the original HVAC system. A new addition was planned with another clean room and dedicated HVAC utilities, called Phase II. The initial drawings for Phase II had been completed by Expedia Design Company, Hermes’s long-standing architectural and engineering design vendor. EDC was a fab design vendor to several firms in the industry and had a reputation for speed and competitive fees.
The RMI consulting team was led by Bill Greenman, an architect with an MBA and a background in green design. Technical services were provided by Peter Rumsey and John Blumberg from Rumsey Engineers, an engineering design firm and frequent RMI partner that specialized in energy-efficient HVAC systems for clean rooms and green buildings. Their objective was to briefly survey F3 to identify existing opportunities for improvement and conduct a streamlined design review of Expedia’s plans for the Phase II expansion. The deliverable was a report with a list of recommendations that would be practical but general in nature, rather than a detailed engineering study based on performance measurements. The report would not include design plans or payback calculations. That introductory visit was intended to identify potential areas of improvement for further investigation and to provide an opportunity for the company and the consultants to learn more about each other. The limited scope of work also kept the consulting fees low.
Glen had been to the F3 site only once before, although she had worked with its facilities staff on her energy performance assessment. She flew from the company’s headquarters in the Silicon Valley to Texas and met the RMI team there for the two-day survey. The team spent the first morning describing their approach and being briefed on the facility. They then toured the site for the rest of that day and much of the second, working with the chief engineer and facilities staff to understand HVAC and controls systems, water use, and operating procedures. At the end of the second day, the team presented its initial conclusions and recommendations in a meeting attended by facilities staff, the site’s general manager Regina Shinelle, Expedia’s Phase II project manager Art Schema, and Strand, who flew in for the occasion.
The RMI team estimated that low- and no-cost changes to F3’s current operations could save up to 10 percent of the HVAC electricity almost immediately, such as utilizing evaporative “free” cooling in dry periods by operating all the cooling towers in parallel at low speed to reduce reliance on electric chillers. Another 15–25 percent savings were attainable with modest retrofit investments and estimated paybacks of two to three years, including pumping and fan system upgrades. Significant investments could reduce site HVAC energy use by more than 50 percent, requiring changes to Expedia’s Phase II design to allow consolidation of the two clean rooms’ independent process cooling systems into a centralized plant serving both buildings. The estimated payback period would be at least five years if it were to be conducted as a retrofit, once Phase II had been completed, or much sooner if combined with proposed Phase II energy-efficiency improvements.
The RMI team noted that significant opportunities for energy efficiency were not captured by the current Phase II design. These included larger low-friction air handlers with smaller fan motors and variable-speed drives, high-performance cooling towers, heat-reflective coatings on rooftop air intake ducts, and upgraded sensors and controls. Such measures would decrease HVAC energy use and cost by 30 to 60 percent, with paybacks ranging from immediate to several years depending on the measure. They would also increase construction costs, although some component capital costs would fall due to smaller equipment such as motors and chillers. The extent of this proposed redesign would be significant and would take weeks or even months.
With the exception of the centralized cooling plant, most recommended measures would not interrupt production and involved no intrusion into the clean room space. All the suggested methods had been demonstrated within the industry but not all in one place, and few had been tried within Hermes. RMI suggested that Hermes establish energy performance benchmarks to be used as guidelines for both existing fab operations and new design specifications.
Rumsey Engineers’ Blumberg worked on water efficiency measures and proposed a method for reclaiming wastewater for evaporative cooling. But when he investigated techniques for reusing some of the acid rinse water that drained from a tool, the production manager rebuked him for interfering with manufacturing matters and the idea was dropped.
The Phase II review also noted that Expedia’s design was an almost exact replica of another Hermes fab that was more than ten years old, which itself was based on blueprints from the 1970s. That became apparent when the team asked about a piping diagram showing an unusual zigzag in midair, and a facilities engineer named Steve Sparks replied that there was a structural pillar in that location in the fab these plans were drawn from—a pillar absent in Phase II. It did not appear to the RMI team that any performance improvements had been incorporated into the successive iterations of that design.
Such “copy exactly” practices were common in the microelectronics industry. Microchip manufacturing was extremely complicated. The sequence involved thousands of process variables and chemical interactions that were so complex as to defy full comprehension. Performance parameters and specifications were exacting, as minor deviations could be disastrous, and if problems occurred they needed to be isolated and identified. Time-to-market deadlines were unforgiving, and meeting them required an extraordinary level of control over process variables. Therefore, when something worked, it was copied exactly. A pilot production line for new product development was essentially “cloned” multiple times to create a high-volume manufacturing facility. That mind-set shaped all aspects of facilities design, even areas outside the clean room that did not require such stringent inflexibility. “Copy exactly” reduced fab design effort, time, and cost but also hindered the adoption of technological and process improvements, including energy-conserving features.
Implementation Challenges
A few weeks after the survey in Texas, RMI and Rumsey Engineers submitted a brief report (see Table 7.3) summarizing their observations and recommendations, which was circulated at the site and among senior management including Strand, Torus, and Smith. Meetings were held to discuss the recommendations and strategies for implementation. The reactions were mixed.
Table 7.3 Executive Summary of Recommendations from Rumsey Engineers’ Review of Baseline Ventilation System Design for Hermes Office Building Renovation
| Specifications | Baseline Design Criteria | Proposed Design Criteria—Larger Ducts | Proposed Design Criteria—Larger Ducts and Lower Face Velocity Air Handler |
|---|---|---|---|
| Duct spec | Avg. diameter is 36 in. | Avg. diameter is 40 in.; increase duct area 20% (reduce external pressure loss by 36%) | Avg. diameter is 40 in.; increase duct area 20% and increase air handler size (reduce total pressure loss by 36%) |
| Design face velocity (fpm) | 500 | 500 | 400 |
| Design flow (cfm) | 50,000 | 50,000 | 50,000 |
| Design total static pressure (in.) | 4.5 | 3.6 | 2.9 |
| Internal pressure loss (AHU; in.) | 2.0 | 2.0 | 1.3 |
| External pressure loss (ducting; in.) | 2.5 | 1.6 | 1.6 |
| Fan efficiency (%) | 70 | 70 | 70 |
| Motor efficiency (%) | 90 | 90 | 90 |
| Operating face velocity (fpm) | 500 | 400 | 400 |
| Operating flow (cfm) | 50,000 | 32,500 | 32,500 |
| Operating total static pressure (in.) | 4.5 | 2.0 | 1.7 |
| Internal pressure loss (AHU; in.) | 2.0 | 1.1 | 0.8 |
| External pressure loss (ducting; in.) | 2.5 | 0.9 | 0.9 |
| Fan efficiency (%) | 70 | 70 | 70 |
| Motor efficiency (%) | 90 | 90 | 90 |
| Motor HP | 60 | 50 | 50 |
| Motor VFD | No | Yes | Yes |
| Annual operating hours | 3,560 | 3,560 | 3,560 |
| Annual energy use (kWh) | 149,000 | 44,000 | 37,000 |
| Annual energy cost (\$) | 22,350 | 6,500 | 5,550 |

Assumptions:

- Building size = 50,000 square feet (SF)
- Design flow = 50,000 cubic feet per minute (cfm)
- Operating flow = 32,500 cfm for the proposed designs (with VFD); 50,000 cfm for the base case (without VFD)
- Operating hours per year = 3,560 (10 hours per day)

| | Case 1: Baseline design | Case 2: Proposed design (larger ducts) | Case 2 cost or (savings) (\$) | Case 3: Proposed design (larger ducts and larger air handler) | Case 3 cost or (savings) (\$) |
|---|---|---|---|---|---|
| Capital costs | | | | | |
| Duct cost (\$) | 120,000 | 130,000 | 10,000 | 130,000 | 10,000 |
| Fan motor VFD cost (\$) | | 10,000 | 10,000 | 10,000 | 10,000 |
| Air handler cost (\$) | 60,000 | 60,000 | 0 | 63,000 | 3,000 |
| Marginal cost (\$) | | | 20,000 | | 23,000 |
| Operating costs | | | | | |
| Fan motor energy cost/yr (\$) | 22,350 | 6,500 | (15,850) | 5,550 | (16,800) |
| Payback | | \$20,000 ÷ \$15,850/yr = 1.3 yrs | | \$23,000 ÷ \$16,800/yr = 1.4 yrs | |
| ROI | | \$15,850 ÷ \$20,000 = 79% | | \$16,800 ÷ \$23,000 = 73% | |
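The energy figures in Table 7.3 can be reproduced from the standard fan-power relation: brake horsepower = (airflow in cfm × static pressure in in. w.c.) ÷ (6,356 × fan efficiency), with electrical input then divided by motor efficiency. The sketch below checks the table’s annual kWh and payback numbers under that relation; the 6,356 constant is the conventional engineering factor, and the roughly \$0.15/kWh electricity rate is inferred from the table’s own cost and energy columns rather than stated in the case.

```python
# Reconstructing the fan-energy figures in Table 7.3 from the standard fan
# power relation. Small differences from the table come from rounding in the
# table's published kWh values.
HP_TO_KW = 0.746
RATE = 22350 / 149000           # implied electricity rate, $0.15/kWh

def annual_fan_kwh(cfm, static_in, fan_eff=0.70, motor_eff=0.90, hours=3560):
    """Annual fan electricity (kWh) from airflow and total static pressure."""
    brake_hp = cfm * static_in / (6356 * fan_eff)   # fan shaft power
    input_kw = brake_hp / motor_eff * HP_TO_KW      # electrical input
    return input_kw * hours

baseline = annual_fan_kwh(50000, 4.5)   # table: ~149,000 kWh
case2    = annual_fan_kwh(32500, 2.0)   # table: ~44,000 kWh
case3    = annual_fan_kwh(32500, 1.7)   # table: ~37,000 kWh

for label, kwh, marginal in [("larger ducts", case2, 20000),
                             ("ducts + larger AHU", case3, 23000)]:
    savings = (baseline - kwh) * RATE
    print(f"{label}: saves ${savings:,.0f}/yr, payback {marginal/savings:.1f} yrs")
```

Running the numbers this way recovers paybacks of roughly 1.3 and 1.4 years, matching the table and confirming that both proposed designs beat the site’s eighteen-month payback threshold.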
The RMI team had been greeted with initial skepticism by the site’s facilities staff members, who were wary of outside interference, had never heard of RMI, and were confused about the unusual nonprofit-corporate consulting partnership. Chief Engineer Tom Dowit had been a particularly reluctant participant. It was rumored that Dowit had called the survey “just another far-fetched scheme of those environment division idealists” that was going to cost his facilities division money and distract him from his primary job of ensuring that the production division could maximize output. He was openly skeptical during RMI’s initial presentations, although as the survey progressed, he grudgingly acknowledged the value of some of the team’s observations, stating at one point that he would have made some of the same improvements if he’d been given permission and funding. But he grew defensive—and at one point openly derisive—during the final presentation as the team described opportunities to save tens of thousands of dollars.
Glen realized that Dowit might have feared that the consultants were making him look bad by finding large cost savings he had not uncovered himself in recent years. But she also understood why he might have taken a highly cautious approach to new techniques. The facilities engineering staff had a difficult job, with a great deal of responsibility for maintaining highly complex and sensitive production equipment. They had limited input into tool selection and operation, yet when something went wrong, they often got the blame. The facilities department budget was constantly squeezed, pressuring engineers to cut corners. Many production managers viewed facilities as an overhead cost center that played a subordinate support role to manufacturing’s revenue generation.
The rest of Dowit’s staff agreed that many of the recommendations were technically feasible and had already successfully implemented some operational changes since the visit. Their initial skepticism about “a bunch of academics who were coming here to write on blackboards and waste our time” subsided over the course of the survey as the RMI team’s skills became apparent, and most people quickly came to respect the consultants’ abilities and ideas. The staff would need additional money for retrofits to capture further savings. A few of the consultants’ ideas had been suggested in the past by site facilities staff, including measures used at other Hermes fabs, but most had been rejected because they did not meet the site’s requirement that retrofit investments have a maximum payback period of eighteen months.
Facilities Engineer Steve Sparks enthusiastically supported the energy-efficiency efforts and confided in Glen. He lamented the inefficiency of F3’s older equipment, pointing out that Dowit kept it running on a “shoestring budget.” Dowit blamed the spending constraints on the comptroller, but Sparks suspected that Dowit was also currying favor with the production department by minimizing O&M spending. Sparks had worked at another fab just after Dowit had left there as chief engineer (the same facility Expedia had used as a template for Phase II). Sparks thought that Dowit’s cost cutting might have helped him get promoted to this position at F3, but it had also run down the mechanical systems and left his successor with deferred upkeep costs. “To be fair,” Sparks added, “Dowit is not unusual in this careful approach; he’s good at it, and he has been rewarded for it. This is typical of the facilities culture at Hermes.”
F3 General Manager Shinelle had little interest in any project that diverted attention from production and no interest in slowing down the Phase II expansion. She did not meet the RMI team until their final presentation and did not say much then or in subsequent meetings; what she did say tended to agree with Dowit. “This facility works, and energy is 2 percent of the cost of our chips,” she said. “I can’t spend time worrying about it. We need to use our limited investment capital to get new, high-quality products quickly to market.” Glen had the impression that Shinelle participated only reluctantly and would not have done so at all if it wasn’t clear that Strand had requested she host the pilot energy survey. Ultimately, Shinelle agreed to direct Dowit’s staff to select “a few” of the more cost-effective measures that met the site’s eighteen-month payback criteria, and she would approve those facilities funding requests.
Nevertheless, Shinelle refused to make any changes to the Phase II design that would slow the project timeline. Nudged by Strand, she directed Dowit to check with Expedia and see if there was still time to order more efficient motors than the inexpensive but relatively low-efficiency types specified in the design—as long as the cost premium did not exceed 10 percent. “The production division can’t afford to pay more for this expansion,” she insisted. “We lose tens of thousands of dollars of sales revenue every week that we delay getting Phase II manufacturing up and running. We have to stay within budget and on schedule.” Glen understood that Shinelle’s annual performance bonus was probably tied to that very achievement and that in any event the facilities division would be paying the utility bills.
Expedia’s Art Schema had been unsure how to respond to the design review comments during the on-site presentation. Although his primary clients Shinelle and Dowit seemed to think that the RMI team’s input wouldn’t change much if anything about Phase II, Schema could see that Strand supported the consultants’ efforts. Schema limited his comments to polite expressions of interest in the findings and promised to give them detailed consideration.
Within a week of receiving the RMI report, Schema sent a critique of the design review to F3, and Shinelle e-mailed copies to Glen and Strand. Expedia’s point-by-point response acknowledged the merit of a few of RMI’s suggestions but dismissed most of the recommendations as too costly, impractical, or impossible. The tenor of the response was that half of RMI’s recommendations were off-base and the other half were nothing new to Expedia. Schema’s cover letter read in part: “Expedia provides superior reliability and security. Our architects and engineers have built our close relationship with Hermes by delivering economical designs that work, as proven in previous projects. We leverage our skills and experience to consistently deliver low bids and rapid turnaround times, which benefit both Hermes and Expedia. We are open to discussions about changes to design criteria at any time with you, our valued clients.”
Shinelle defended Expedia’s approach and service in her attached e-mail. “Expedia has always been there for us and has never let us down. They have played a key role in Hermes’s agility and speed in product development and launch. Let’s not mess up a good partnership with untested ideas.” It occurred to Glen that both Shinelle and Smith had risen through the ranks of the production department boosted by reputations as star managers of fab construction projects—success stories that Expedia had helped build. In addition, Shinelle’s product quality and yield record at F3 was unmatched across Hermes’s manufacturing sites for its consistency, and she had a reputation for bringing new products to market very rapidly.
Mixed Results
Six months later, Glen and Strand regarded the F3 survey as only a partial success. On the upside, technical results were positive. F3 facilities staff had successfully implemented most of the RMI team’s low- and no-cost recommendations. Sparks and his colleagues were impressed by the new techniques, welcomed corporate-level support for investment in system improvements, and were openly supportive of the energy-efficiency efforts. They convinced Dowit to request that the RMI team return to conduct more detailed analysis of some of the more involved recommendations. The fact that Torus had mentioned the pilot effort at F3 in a companywide webcast address, praising the site manager and chief engineer’s efforts, helped their cause. But unlike the first visit, which was underwritten by Strand’s office, subsequent fees would have to come from the site’s operating budget. Shinelle agreed to allocate them in principle but said no such expenditures could be undertaken until the following quarter at the earliest.
On the downside, F3’s Phase II expansion project went ahead as designed. Some motor efficiency upgrades were incorporated at the last minute at minimal extra cost, but the scramble to change equipment orders at a late stage in a tight schedule resulted in some grumbling by Dowit and Schema.
Strand’s environment department had engaged the RMI team to conduct similar general surveys at two more sites in Oregon and the Silicon Valley, accompanied by Glen in each case. The visits occurred three months after the F3 survey, and the team’s recommendations had been submitted but not acted upon. Those visits had paralleled the experience at F3. The team worked with facilities division staff on energy improvements that did not risk interfering with production. (Manufacturing water efficiency was no longer investigated following Blumberg’s rebuke at F3.) Technical efficiency opportunities were similar; so, too, were the political dynamics. Some facilities staff members were skeptical, but receptivity increased as awareness of the RMI team’s capabilities spread by word of mouth and direct experience. Production staff members were more wary; the word going around the department was that the energy program was an expensive nuisance.
Expedia seemed to be torn about the energy program. Its design work was directly challenged by the consultants’ critiques, its managers’ personal networks and alliances were aligned with Hermes’s production department (the one that hired it), and its designers were not keen to devote a great deal of effort to restructure its cost-effective copy-exactly approach. However, Expedia also wanted to please its client and recognized that Hermes’s CEO was interested. Glen noted that Art Schema, the Hermes account manager at Expedia, had avoided directly criticizing the RMI team’s work—that had been left to subordinates—and had signaled his openness to discussing new frameworks for doing business. In a brief aside as a meeting broke up, he told Glen, “We can design more energy-efficient systems; Hermes has never asked us to.”
The Change Agent’s Dilemma
Glen saw her task as essentially intrapreneurial. She was trying to harness resources to realize a new vision of the future. She marveled at how difficult it was to be innovative even in a company built around the creation of new ideas, techniques, and products. She faced a big challenge in trying to change business as usual, pushing against a persistent headwind of inertia and resistance to new methods. The semiconductor industry was typified by a very cautious and conservative corporate culture, stemming from exacting technical and process requirements, safety risks posed by hazardous materials, the high cost of downtime, and brutal competition in a fast-moving marketplace. (It wasn’t for nothing that Intel CEO Andy Grove’s book was titled Only the Paranoid Survive.)
Glen had to persuade many people to change the way they did things, both with different departments at Hermes and with outside vendors. She sometimes felt like an outsider herself during the site visits; even colleagues from the facilities division of her own department viewed her as an environment division staffer from the corporate office. Glen was grateful that the RMI team could back up its claims with practical expertise. Her position afforded little formal authority to dictate change, although executive endorsement lent her informal authority, and her training provided only limited credibility with facilities engineers. The RMI team lacked authority but was building credibility with demonstrated skills, one site survey at a time, complementary to her strengths.
Her colleagues knew she had Strand and even Torus backing her, but executive time and attention were limited, and she was largely left to her own devices to manage the process and implement change. Glen sensed that the production-focused skeptics in the opposing camp would respond positively to the energy program when she or Strand was present but would then return to the status quo as soon as the efficiency advocates weren’t looking, hoping they could wait it out until the CEO retired and the issue dissipated. She recalled the Chinese saying about that attitude among middle management: “Heaven is high and the Emperor is far away.”
A New Opportunity: The Green Headquarters Building
Despite his authority as CEO and personal credibility as a successful manager and leader, Alden Torus could not afford to dedicate much political and social capital to any efforts not directly focused on commercial success. Yet his interest in sustainable business opportunities remained strong, and he wanted to choose his interventions carefully to provide the greatest leverage for change. If he was going to risk his reputation and get out ahead of his colleagues on an unfamiliar issue, he wanted it to count. He was pleased with the early phases of the energy and water efficiency efforts, although it was becoming clear that the process of organizational learning and transformation would not be rapid. He wanted to expand awareness of—and attention to—environmental dimensions of commerce that went beyond using resources more efficiently.
One strategy under consideration was to establish companywide emissions reductions targets for gases that contributed to climate change. Specified targets could provide coherence to energy-efficiency efforts across the company’s facilities and prevent individual sites from “cream skimming” only those opportunities with the most attractive paybacks—an approach that often rendered longer payoff measures uneconomic under current investment criteria. Torus suspected that bundling projects for investment would increase the average payback periods but yield larger overall emissions reductions. Internal emissions trading might further reduce the total cost of such efforts by directing funds to the highest-leverage opportunities.
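The cream-skimming dynamic Torus worried about can be sketched numerically. In the Python sketch below, the project names, costs, and savings are invented placeholders, not Hermes figures; it shows how funding only the quickest-payback measures leaves most of the available savings (a rough proxy for emissions reductions) untapped, while bundling lengthens the portfolio payback but captures far more.

```python
# Illustrative only: costs and savings are assumed figures, not Hermes data.
# Simple payback = capital cost / annual savings.
projects = [
    {"name": "lighting retrofit", "cost": 40_000,  "annual_savings": 32_000},
    {"name": "motor upgrades",    "cost": 100_000, "annual_savings": 40_000},
    {"name": "HVAC redesign",     "cost": 300_000, "annual_savings": 60_000},
]

def portfolio_payback(items):
    """Return (payback in years, total annual savings) for a set of projects."""
    cost = sum(p["cost"] for p in items)
    savings = sum(p["annual_savings"] for p in items)
    return cost / savings, savings

# "Cream skimming": fund only measures that pay back within two years.
quick_wins = [p for p in projects if p["cost"] / p["annual_savings"] <= 2]
skim_payback, skim_savings = portfolio_payback(quick_wins)

# Bundling: fund the whole portfolio as one investment.
bundle_payback, bundle_savings = portfolio_payback(projects)

print(f"cream-skimmed: {skim_payback:.2f}-yr payback, ${skim_savings:,}/yr saved")
print(f"bundled:       {bundle_payback:.2f}-yr payback, ${bundle_savings:,}/yr saved")
```

With these assumed numbers, skimming funds only the lighting retrofit, while the bundle roughly quadruples annual savings at a still-modest blended payback, which is the trade Torus suspected bundling would make.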
Torus saw a good opportunity in the company’s decision to consolidate the corporate headquarters and western US sales offices into one location. He asked the board of directors to support construction of a green building and invited RMI’s Lovins and Greenman to the board meeting to describe the potential benefits. Green buildings used more environmentally friendly materials and design and construction practices and typically reduced utility bills by as much as 50 percent through energy and water efficiency. They did not have to cost more to build than conventional buildings, although they required careful design attention. The board was intrigued by research indicating that worker productivity typically increased in green buildings by an average of 5 percent, which would be even more valuable to the company than eliminating the utility bills entirely.
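The board’s intuition about productivity can be checked with rough arithmetic: in a typical office, fully loaded payroll costs per square foot exceed utility costs by roughly two orders of magnitude, so even a small productivity gain outweighs the entire utility bill. The per-square-foot figures below are generic rules of thumb, not Hermes data.

```python
# Back-of-envelope comparison, per year, for the planned renovation.
# Cost-per-square-foot figures are assumed rules of thumb, not Hermes data.
area_sqft = 50_000          # the four-story headquarters building
salaries_per_sqft = 250.0   # assumed fully loaded payroll cost, $/sqft/yr
utilities_per_sqft = 2.5    # assumed energy and water cost, $/sqft/yr

# A 5 percent productivity gain versus eliminating utility bills entirely.
productivity_gain = 0.05 * salaries_per_sqft * area_sqft
utility_elimination = utilities_per_sqft * area_sqft

print(f"5% productivity gain: ${productivity_gain:,.0f}/yr")
print(f"zero utility bills:   ${utility_elimination:,.0f}/yr")
```

Under these assumptions, the productivity gain is worth several times the full utility budget, which is why the board found the 5 percent figure so compelling.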
“But what constitutes ‘green’ building?” the board asked. Lovins and Greenman had said that each project was unique, and there were no simple standards to apply to a design, no Band-Aids that would make it green. However, third-party accreditation was available through the LEED rating system established in 2000 by the US Green Building Council (USGBC), a respected consensus coalition of stakeholders from all aspects of the building industry. LEED certification required that best practices be used in certain core aspects of building construction and operation. It provided a list of techniques and practices, most of which were rooted in existing industry standards. Designers and builders could incorporate features chosen from this menu of options to earn points toward certification. LEED provided a framework for action with defined objectives and established criteria for what was “green.” USGBC data from scores of completed projects indicated that the basic level of certification added 0–5 percent to a building’s initial cost (not factoring in typical operating cost savings), and the primary factor in that variability was the skill and experience of the design-build team. LEED was well received in the industry, grew rapidly, and within three years of its release was being applied to more than 5 percent of all planned commercial and institutional construction and major renovation projects in the United States.
The board approved the project. Torus believed the building would provide a potent educational symbol of the business benefits of green design and serve as a tool for organizational learning. He thought it likely that the new headquarters’ innovative design approaches would appeal to Hermes’s corporate culture, particularly that of the production department. He liked the idea that strategic planning and new product conceptual development would occur in a unique facility.
As with other Hermes facilities, the new office building development was being managed by the production department. (Most Hermes building projects were production related, so to simplify administration the production department oversaw all new construction.) Torus had decided to make the new headquarters a green building after the project had already begun. The plan had been to completely renovate a four-story, fifty-thousand-square-foot office building in the Silicon Valley. The project management team had been designated, and the design and construction contractors had already been chosen based on their conceptual design: Hermes’s traditional partners, Expedia Design Company and Advanced Building Services (ABS). Art Schema was the Expedia project manager, and William Ditt was the ABS construction manager. Both had worked extensively with Hermes facilities in the past but had minimal experience with green building techniques. The next project milestone was to be a review of Expedia’s initial plans for the building core and utilities, but Torus had put the process on hold when he decided to seek board approval to make the renovation a LEED building project. It was not too late to change the design to meet that new objective.
Torus decided to build on the momentum of the energy-efficiency program and the company’s growing relationship with RMI. RMI and Rumsey Engineers would be retained as design consultants, based on their credibility within the company and their reputation as leaders in the green design field. Glen was tasked with leading the greening effort toward the goal of attaining LEED certification and with continuing her role as liaison to the RMI team.
Executive VP of Production Chip Smith had chosen Regina Shinelle as project manager and Tom Dowit as chief engineer. Glen could not help wondering whether that was a positive development and what Smith’s true intentions were. Smith had not revealed much about his opinion of the greening efforts; although he acted supportive in Torus’s presence, on most issues he embodied the production department’s perspective. Now Glen would be working with the two people who had presented the most stubborn resistance to her efforts and who did not share her priorities. If the renovation failed to attain LEED certification or performed poorly, it would be a major setback to the sustainability program. But if the collaborative effort resulted in an economical, high-performance LEED building, it would bring positive recognition to all participants and perhaps create greater buy-in for sustainability efforts among the skeptics companywide.
Glen was pleased that Steve Sparks had been named facilities director for the new headquarters. He had been the most enthusiastic supporter of the efficiency efforts at F3, and his persistent efforts had played a key role in successful implementation of the recommended measures, despite more hesitant colleagues. Glen had suggested to Strand that Sparks would be an ideal internal candidate for the position. Sparks was excited about the promotion and the opportunity to be more involved in green design.
Greening Strategies
Glen’s primary proposal as the project’s sustainability coordinator was to arrange an integrated design process called a charrette. This facilitated, multidisciplinary meeting would bring together project participants, stakeholders, and outside experts in the same room (often around the same table) at the earliest practical point in a project. The goal was to clarify desired outcomes, identify obstacles, and devise strategies for attaining the best overall result. That integrative process helped participants to understand their differing perspectives and incentives, exchange ideas, build trust, work out problems, and create consensus. The approach took some time, but the investment of extra effort could significantly improve plans and specifications, streamline construction, reduce total costs, and increase building performance. “An axiom of design is that all the big mistakes are made on the first day,” Greenman told Glen. “Most of a building’s life-cycle cost is determined by the tiny fraction of the budget spent on initial design. Carpenters know it makes sense to measure twice and cut once. A charrette helps us to do that.”
The charrette was to last for two days and would be held in Hermes’s R&D center conference facilities. Hermes R&D staff had used similar techniques for product design, but the approach had never been tried on a facilities project. Participants commented that never before had all the parties spanning the service life of a Hermes building project met together simultaneously.
The meeting would begin with team introductions, followed by presentations on green design and LEED by the RMI team (which had obtained excellent results in past charrettes). Glen planned to describe the list of LEED requirements and the credit areas that she thought were best suited for exploration. There were several areas she identified as readily achievable and many more worthy of deeper exploration. The group would pick an initial set of LEED credit areas to pursue. That would take much of the first day.
The most detailed technical subject would be a collective consideration of HVAC design alternatives. Rumsey Engineers had reviewed the preliminary design drawn up by Expedia before the green objectives were set—now called the baseline case. Rumsey had submitted a proposal outlining recommendations for increasing the ventilation system efficiency. It involved spending more money on construction to save money on operation. The executive summary of recommendations and estimated costs and benefits was to be circulated to each participant. That discussion would begin on the first day and carry over to the second day if necessary.
The last and most difficult topic, but perhaps the most important, would concern potential policy and procedural changes that might foster more efficient facilities investments. Hermes’s traditional approach to requiring, financing, designing, building, and operating its facilities was functional but not optimal. The energy-efficiency program of retrofit improvements had proven that there was widespread waste of energy and capital within the company’s facilities. It had also highlighted aspects of the process that hindered improvement. It was in the interest of management and shareholders to create a more efficient process.
Most of those issues were not unique to Hermes but were characteristic of the industry. Buildings were made in a collective but not well-optimized production process. As Greenman put it, “If a camel is a horse designed by a committee, then most buildings are camels.” Some decisions produced short-term savings for certain participants but degraded building performance or imposed long-term costs on the owners and occupants. Usually those choices made business sense to each decision maker and were not intended to cause problems elsewhere. Those challenges were a function of the rules of the game, and it was worth exploring whether changing any of those rules would produce better buildings. Glen’s discussion would examine the participants’ roles, incentives and disincentives, and the impact of financial and investment criteria. It had the potential to make some participants uncomfortable but also to yield significant process improvements.
Those thoughts were racing in Glen’s mind on the night Torus had approved the charrette. She was excited and a little anxious as she set out to draft a meeting invitation and brief description. She hoped that the charrette would reduce rather than inflame any latent (or blatant) tensions and conflicts among participants. Greenman had assured her the process usually worked surprisingly well, but she could see how achieving consensus might also seem like herding cats. She considered the cast of characters she now had to work with, each representing a different organization or department, and made notes summarizing her interpretation of each participant’s perspective going into the project.
She needed to identify the obstacles and opportunities in the group dynamic and select strategies that provided the highest leverage for change. The CEO had offered her the opportunity to try a few new approaches and policies that he could announce in his introductory remarks. She thought that a small number of well-targeted measures could “change the rules of the game” for key participants in the design-build process by providing different incentives or by removing important disincentives. That would help steer the group’s decision toward a successful outcome for this project, and perhaps for future facilities as well.
Hermes Personnel
• Regina Shinelle, project manager, production department, facilities division. “I’ve got to get this facility built on time and under budget. The faster the construction and the lower the capital expenditure, the bigger my bonus. I’ve relied on Expedia Design contractors in the past for prompt and reliable service. There is no room in this process for trial and error.”
• Tom Dowit, project chief engineer, environment department, facilities division. “It’s my job to ensure facility operations support production, while minimizing risk and cost. I don’t have the money or leeway to do anything expensive or risky. I’ve got to maintain the systems and pay the utility bills, and if my costs go down this year, my budget gets cut next year. Keep it simple, and if it ain’t broke don’t fix it—that’s my philosophy. I’ve been doing this a long time, and my approach works. That’s why I’ve been asked to oversee the design and construction process and contractors for this new building project. I use the value engineering process to review all aspects of design and construction, and either approve or reject proposed elements to control costs. Sure, there are some opportunities for improvement, but this green design stuff gets too much attention, costs too much, and slows me down. What a pain in the neck.”
• Steve Sparks, headquarters facilities director, environment department, facilities division. “I will be responsible for the new building’s operations and maintenance once it is completed. I am excited to work in a green building. I’ve seen some of those techniques work in practice and I believe there is great potential for facilities improvements. I’m surprised I was asked to participate in this design meeting. Usually the production department calls us when a facility is built, hands us the keys at the ribbon-cutting, and says ‘keep it clean and running.’ Or they tell us ‘new fab tools are going online in two weeks, make sure they have power and water.’ I look forward to having a say in decisions that affect my job.”
• Susan Legume, comptroller, production department. “My priority is cost control. I’ve got my eye on the bottom line and my mind on shareholder value. Capital spending is one area where costs can spiral, especially construction projects with multiple contractors. My job is to ride the project manager hard and squeeze the vendors harder. They can really nickel-and-dime the budget away if we don’t watch out. The value engineering process gives us a chance to rein in runaway spending by cutting unnecessary purchases from the design or construction.”
Design-Build Contractors
• Art Schema, project manager for Expedia Design Company. “We deliver reliability and security. Our architects and engineers have built this business by delivering economical designs that work, as proven in previous projects. We leverage our skills and experience to consistently deliver low bids and rapid turnaround times, which helps us win projects. Doing anything differently increases our costs; unfamiliar design techniques increase our risk exposure and project expense. As designer of record for this project, we would have to carefully consider whether we could sign off on any exotic features or plans. Of course, customer service is our number one priority and Hermes is a valued client, so we are open to whatever we are asked to do, as long as it is clearly defined and we are compensated for our efforts.”
• William Ditt, construction manager for Advanced Building Services. “I cut this bid to the bone to get this contract. We’ll make up for it by cutting a few corners, based on my experience where it will do the job. I’ve got to maintain cash flow by getting this done as fast as possible so I can move on to another project. I rely on my supplier network to get me the parts I need, quickly and at low cost. Any delays risk cutting into my thin profit margin and my tight schedule.”
Green Design Consultants
• Bill Greenman, consultant, Rocky Mountain Institute. “Nonprofit organizations such as RMI can collaborate fruitfully with businesses to make money while protecting the environment. We are design consultants and process facilitators. Although we do not ourselves provide architectural plans or engineering designs, our partner Rumsey Engineers can do that. We can help with LEED certification and suggest ideas and best practices that we’ve seen used successfully elsewhere. Standard design-build practices do not produce green buildings. Green building is new to the industry as a whole, and we have experience with these innovative techniques. Green building can be profitable, but each case is unique and requires increased design effort and careful project management.”
• Peter Rumsey, consultant, Rumsey Engineers. “We specialize in whole-systems energy-efficient design. We can deliver equivalent or superior HVAC performance at lower energy use and reduced cost of ownership (although our systems might cost more up front to build). It is always best to incorporate green techniques early in the project timeline, starting with the initial phases of design. All too often we’re called in at the last minute when the plans have been completed and it is very difficult to make changes, at least at low cost and minimal hassle.”
Environment, Entrepreneurship, and Innovation: Systems Efficiency Strategies for Industrial and Commercial Facilities
Many managers are unaware of the strategic advantages and cost savings possible through systems analysis applied to material, energy, and water use in building design and operation. This section provides whole-systems strategies for improving resource efficiency in industrial and commercial buildings. (This background note was prepared by Batten fellow Chris Lotspeich in collaboration with author Andrea Larson: Andrea Larson and Chris Lotspeich, Environment, Entrepreneurship, and Innovation: Systems Efficiency Strategies for Industrial and Commercial Facilities, UVA-ENT-0052 [Charlottesville: Darden Business Publishing, University of Virginia, 2008]; the note can be accessed through the Darden Case Collection at https://store.darden.virginia.edu.) It explains systems thinking and integrated, multidisciplinary methods that can stimulate innovation in both the equipment (technical) systems that make up facilities and the human (organizational) systems involved in the design-build-operate process. Identifying and using key leverage points and systemic synergies can dramatically increase the performance of buildings and the groups of people who make and run them. In practice those approaches have saved money, reduced environmental impacts, improved worker health and productivity, attracted new employees, greatly decreased operating costs while adding little or nothing to initial costs, and in some cases even decreased capital costs.
Resource Efficiency: Doing More with Less
Resource efficiency (also called “resource productivity” and “eco-efficiency”) provides cost-saving methods for reducing a company’s environmental and health impacts. Businesses consume resources to deliver goods and services and to create socioeconomic benefits. Primary resource inputs are materials, water, and energy. Their use directly links industrial activity to the earth through extraction, pollution, and waste generation. (Labor, money, and time are also economic inputs, although environmental and health impacts associated with their use are generally more indirect; we will focus on physical and energy resource use.) In any firm that manages for maximum efficiency, the life-cycle resource intensity and environmental “footprint” of a given product or company are evaluated across the supply chain, from the natural resource base through manufacturing and use to ultimate disposal or recycling.
Ideally resource efficiency enables the delivery of goods and services of equal or better quality while reducing both the costs and impacts of each unit of output. Systems efficiency strategies go beyond conservation by boosting productivity and differentiating the firm. When efficiency measurement stimulates innovation, doing more and better with less fosters revenue growth. Innovation and the entrepreneurial initiative that drives it result in the delivery to market of new goods and services with superior performance or other attributes that out-compete existing products and industries.
This Schumpeterian “creative destruction” (the creation of new products, processes, technologies, markets, and organizational forms) is fundamental to capitalism. A capitalist economizes on scarce capital resources by investing to improve productivity. The resource intensity of each unit of production tends to fall over time as knowledge and technology improve. Those dynamics have already increased resource productivity. For example, in the United States the amount of energy consumed per dollar of GDP has decreased in all but five of the years since 1976—for a total drop of more than 35 percent between 1973 and 2000. That improvement is good, but the reality is that standard practices have tended to prompt relatively incremental improvements. The potential for much greater productivity increases remains untapped, awaiting the systematic and synergistic application of best practices and better technologies. Unfortunately, market barriers and organizational behaviors maintain standard practices, thus hindering progress.
Overcoming those obstacles requires leadership, comprehensive strategies, and organizational change, but radical resource efficiency can be achieved. Radical resource efficiency results from effective management combined with innovative practices. Systems thinking and end-use, least-cost analysis (discussed later in this section) are essential conceptual frameworks for rapid improvement. Doing more with less is a basic and accepted business objective and a central concept of practices such as total quality management. Thus resource efficiency measures provide a familiar, practicable, and visibly beneficial first step.
“Greening” Facilities: A Good Place to Start
Buildings are one of an organization’s primary interfaces with natural systems via the impacts of materials, energy, water, and land use. Consequently, they deserve attention from both systems dynamics and corporate strategy perspectives. Buildings and facilities are ideal sites for initial resource efficiency efforts in most companies. Every business uses buildings and pays literal overhead costs to keep the roof up. Yet the simultaneous financial, environmental, and health leverage that buildings offer is often overlooked.
Most buildings are relatively wasteful of money and resources compared with state-of-the-art green building examples. Best practices can yield large improvements in building performance, occupant health and productivity, and environmental impacts. These benefits come with 30–50 percent lower operating costs and on average only 2–7 percent higher initial costs (and, in some cases, decreased capital costs). Those benefits have been widely demonstrated in environmentally preferable or “green” buildings certified under USGBC’s Leadership in Energy and Environmental Design (LEED) rating system and in buildings carrying the Energy Star label, a joint program of the US Environmental Protection Agency and the Department of Energy.
There are many areas for performance improvement. The opportunities discussed here are primarily but not exclusively in energy use. Typically, those are the easiest opportunities to identify and offer the quickest benefits at the least risk to most businesses. The major categories of energy savings opportunities include lighting; motors; pumps and fans; heating, ventilation, and air-conditioning (HVAC) systems; building envelope; thermal integration of temperature differences and heat flows; load management; measurement and controls; and operational techniques. Keep in mind that the same systems thinking can be applied to other dimensions of a company’s operations, including its supply chain.
Common resource efficiency opportunities in most building systems are quantifiable, proven, and relatively easy to understand and implement. Such opportunities are widespread due to technological improvements and because the design-build process consistently produces structural and mechanical systems that are relatively inefficient and overbuilt. Factories are particularly attractive subjects because manufacturing is a resource-intensive enterprise. Offices and other commercial buildings also offer potential. The economic and environmental gains are greatest in new design and construction, but retrofit opportunities abound.
Implementing a suite of proven best practices and technologies carries a high probability of yielding cost-effective improvements in the short term. These measures increase profits directly: each dollar of saved overhead goes straight to the bottom line, whereas each dollar of new sales adds to profit only at the margin. Cost reduction offers more limited profit-growth potential than revenue expansion, but this oft-neglected frontier can add value at lower risk than launching new products and services. In some cases, significant savings through more efficient resource use can free up relatively inexpensive capital for higher-priority investments.
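The point that saved overhead goes straight to the bottom line can be made concrete with one line of arithmetic: at a given net profit margin, a dollar of avoided cost is worth the same as many dollars of new revenue. The margin used below is an assumed illustrative value, not a figure from the text.

```python
# A dollar of avoided overhead flows straight to profit, while a dollar of
# new revenue contributes only its net margin. Both inputs are assumptions
# chosen for illustration.
net_margin = 0.08            # assumed 8% net profit margin
annual_savings = 100_000.0   # annual efficiency savings, dollars

# Revenue required to produce the same profit as the savings.
equivalent_sales = annual_savings / net_margin
print(f"${annual_savings:,.0f} of savings ~ ${equivalent_sales:,.0f} of new sales")
```

At an 8 percent margin, every dollar of savings is worth twelve and a half dollars of sales, which is why efficiency retrofits can compete with revenue growth for capital despite their smaller headline numbers.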
Systems Thinking
Strategies discussed here are informed by systems thinking and the principles of system dynamics. These representative approaches to technology, design, and management have been successfully applied in a broad spectrum of facilities and contexts. As we have discussed, systems can be technical or organizational. Buildings are “technical” systems comprising subsystems such as climate control, water and plumbing, lighting, and others. Buildings are designed, built, and operated by “organizational” systems that include owners, architects, engineers, builders, tenants, and others. As with other manufacturing activities, this organizational system comprises individuals and teams executing an iterative process that results in a product (the building). Well-established systems analysis tells us that small changes at key nodes or input variables of complex systems can result in large changes in system outcomes. Thus identifying and using insights about key leverage points can significantly increase the performance of buildings as well as the groups that make and run them.
Implementation strategies typically are directed at creating change by making the business case for efficiency improvements and providing incentives for desired present and future behavior. As the reader knows, not all approaches will yield economic results in every context because conditions vary widely at different facilities and companies. There is no magic formula for success, nor can we provide an exhaustive list of opportunities. Rather, this discussion is intended as an introduction to representative opportunities and to methods for realizing their greatest value.
Leadership, Management, Innovation, and Entrepreneurship
Realizing those potential benefits requires that standard practices be changed. It is a leadership and management challenge that involves entrepreneurial innovation. Building design, construction, and operation is a complex process involving many participants, including developers, architects, contractors and subcontractors, clients, and end users. Greening that process encompasses design, engineering, and technology, and the management of information, money, and organizational behavior. The organizational learning value is high and spans a range of disciplines and enterprise functions. The successful integration of the varied participants involved in a building’s life cycle is a primary challenge to green building champions and is perhaps the most influential factor in achieving radical improvements in building performance.
When it comes to adopting a green building design, differences between managers and leaders are also a consideration. Management strategies are arguably more conservative than leadership initiatives. Managers typically seek stability and risk reduction as they help steer an organization toward defined goals. Managers tend to favor slower, more incremental change. In contrast, the more entrepreneurial leaders are innovation oriented and take greater risks to move an organization farther and faster toward end states that radically differ from the existing patterns. These leaders often are not formal, official leaders. They may emerge as leaders of change. Acting as a change agent is essentially entrepreneurial because implementing significant organizational change requires vision and initiative, not a risk-reduction mind-set. Entrepreneurs have a vision of a new future reality and harness resources to realize that vision. Entrepreneurial leadership seeks to create innovative change in a company’s products and services. Acting entrepreneurially within one’s own organization is what consultant Gifford Pinchot III terms “intrapreneuring.”Gifford Pinchot, Intrapreneuring: Why You Don’t Have to Leave the Corporation to Become an Entrepreneur (New York: Harper & Row, 1985). See also Elizabeth Pinchot and Gifford Pinchot, The Intelligent Organization (San Francisco: Berrett-Koehler Publications, 1996); and Gifford Pinchot and Ron Pellman, Intrapreneuring in Action: A Handbook for Business Innovation (San Francisco: Berrett-Koehler Publications, 1999). Sustaining innovation often requires organizational change, also potentially an innovative act.
A would-be change agent usually has limited resources with which to attain his or her objectives. He or she typically lacks formal authority over all the process participants whose cooperation is needed to reach a goal. Consequently, a systems perspective is valuable. An intrapreneur can identify and focus on leverage points in the system to effect the most change with limited resources. Identifying technical synergies can yield cost-effective performance improvements. (Examples are discussed later in this section.) Influencing the decision rules of participants can shift organizational process outcomes. Persuasion can substitute for compulsion. Identifying benefits and incentives for the participant decision maker can help build buy-in to the change agent’s approach.
Green buildings are innovative products with dramatically improved performance relative to standard buildings. Those improvements depend heavily on better technical subsystems, such as energy and water use, but they are ultimately determined by the actions and outcomes of the organizational design-build-operate system, which is in effect the manufacturing process.
The economic benefits of greening facilities provide the strongest motivating factor and a common denominator for undertaking new practices involving disparate parties, unfamiliar methods, and the challenges of change. The dollar is the universal solvent, the value-neutral language of business. All participants can agree to the goal of cost cutting, regardless of their beliefs or perspectives on the environmental and social aspects.
Initial successes in green building can free up resources and build stakeholder knowledge, buy-in, and confidence. These traits are useful for further, more challenging steps toward sustainability, such as product and business model redesign.
This is not to say that efficiency measures are easy—they are not. The process requires unlearning old techniques and reforming the traditional process. Even modest changes can meet with significant resistance. But greening strategies use proven tools and techniques that can be discussed in quantifiable terms of engineering and financial analysis, simplifying the challenge of implementing new ways of doing things. Expert assistance is readily available, and successful systems and buildings provide literal examples. Skeptical participants might believe that certain measures “can’t work here,” but they can be shown buildings where such techniques have worked in a wide range of climates and structures. The merits can be presented with numbers rather than assertions.
Systematic Resistance
Green building is growing rapidly and moving into the mainstream of the construction industry. Nevertheless, many people continue to view it as a leading-edge activity rather than standard practice, despite demonstrated benefits. The diffusion of this innovation is still in its early stages. As with many innovations, organizational behavior, more than technology, is the crux of the issue: it determines whether resource-efficient decisions are made and implemented. That should not be surprising. After all, the usual ways of doing things seem to work. Buildings get built, their systems function, people occupy them and go about their business, and complaints are relatively few. Architects and engineers get paid and move on to the next project. Most of the parties involved are satisfied. If the system is not broken, why fix it?
Follow the Money to Find the Motives
Some might ask, if green building is so cost-effective, why isn’t more of it happening in the free market? Surely if it were profitable, people would do it. But in the workaday world, green building experience is lacking and schedule and budget pressures limit the amount of effort that can be put into design and construction. If the owner doesn’t ask for green features, it is up to another project participant to promote them. Champions of sustainable design face many obstacles to implementing their ideas, both in the marketplace and even within their own organizations. Selling environmentally friendly approaches and equipment to clients, managers, and colleagues often remains challenging, especially if taking those approaches or using that equipment asks them to do anything differently or spend more time and money. In addition, most design and construction professionals have little or no training or direct experience in sustainable building techniques. They don’t see much incentive to try something new if they think it might increase the risk of a lost bid or an unhappy client. If common practices, habits, and perspectives don’t prioritize green techniques then, as the saying goes, it can be hard to teach old dogs new tricks.
The picture is changing rapidly. Public agencies, architects, interior designers, construction companies, and other professionals are increasingly realizing the benefits of green buildings and are asking for—and getting—better results. Has it swept the country? No, but people are doing it and making money. There are many demonstrated economic benefits to more sustainable real estate development, but the problem is they don’t all accrue to the same parties. Some benefits aren’t counted directly in our economic system, such as reduced environmental impacts. But most important, we don’t live in a free market; we live in the real world. Free markets exist only in theories and textbooks. Actual markets function under the influence of human and organizational behaviors and dynamics that prevent optimal results.
In politics, it is said that if you want to know why something happens (or doesn’t), follow the money. The same is true in building design and construction. We must look more closely at the economic incentives (and disincentives) facing the various parties to the design-build process to understand why more buildings aren’t more sustainable.
Usually, several different companies and individuals are involved in a construction project. Sometimes one party profits at the expense of another party in the same project (even in the same firm). For example, a contractor or project manager might buy cheaper, less efficient mechanical equipment to save money or speed delivery. As a result, the tenant or facilities manager pays higher energy bills. For each decision or action, determine who benefits and you will often understand why a better outcome for society and the environment (if not for the owner) didn’t occur.
Market dynamics and business models shape the decision rules of participants in the process and thus the outcomes. For example, the after-tax return on increasing the diameter of wire by just one size in a standard US office lighting circuit typically approaches 200 percent per year. The wire-size table in the National Electrical Code is meant only to help prevent fires, not save money, and hence specifies wire with half the diameter—and four times the electrical losses due to greater resistance—as would be economically desirable. However, an electrician altruistic enough to buy the larger (and more expensive) wire would no longer be the low bidder and wouldn’t get the job. This example embodies two barriers to more efficient buildings: a life-safety minimum-requirement code misinterpreted as an economic optimum, and a split incentive between the party who chooses the wire size and the one who later pays the electric bills.
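The economics of wire upsizing can be illustrated with a short calculation comparing resistive (I²R) losses for two common copper wire sizes. This is a sketch only: the circuit load, run length, operating hours, energy price, and incremental wire cost below are all assumptions, and actual returns depend on local prices and duty cycles.

```python
# Illustrative sketch of the wire-upsizing economics described above.
# All loads, lengths, hours, and prices are assumptions for illustration.

def i2r_loss_watts(current_a, ohms_per_m, round_trip_m):
    """Resistive loss in a circuit: P = I^2 * R."""
    return current_a**2 * ohms_per_m * round_trip_m

AWG12 = 0.005211   # ohms per meter, copper (nominal published value)
AWG10 = 0.003277   # one common size larger

current = 16.0          # amps of circuit load (assumed)
round_trip = 60.0       # meters of conductor, out and back (assumed)
hours = 4000            # operating hours per year (assumed)
price = 0.10            # $ per kWh (assumed)
extra_wire_cost = 18.0  # incremental cost of the larger wire, $ (assumed)

saved_w = (i2r_loss_watts(current, AWG12, round_trip)
           - i2r_loss_watts(current, AWG10, round_trip))
annual_savings = saved_w / 1000 * hours * price
simple_roi = annual_savings / extra_wire_cost  # per year, pretax

print(f"Power saved: {saved_w:.0f} W")
print(f"Annual savings: ${annual_savings:.2f}")
print(f"Simple ROI: {simple_roi:.0%} per year")
```

Even with these conservative assumptions the larger wire pays for itself in under two years; with higher energy prices or longer operating hours, returns approach the figure cited in the text.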
It is worthwhile to examine the incentives and disincentives faced by the various parties to the design-build process, and explore why standard practices and paradigms often block environmental improvements, to determine effective remedies.
The Current Design-Build Process Paradigm
Consider a representative list of the different parties involved in creating typical commercial buildings. The owner might be a building developer seeking to sell or lease the property, or it might be a business, public agency, educational institution, or other organization that owns its buildings. The project manager might be an employee of the owner or a general contractor. The design is created by contractors and consultants, or sometimes by staff of the business owner, including architects, structural engineers, and mechanical engineers. Construction is typically contracted out, or sometimes performed by a unit of the developer or business owner. Facility managers operate and maintain the buildings.
Now consider some of the common pressures and motivations that each of these parties faces. Any of them can champion sustainable design but also can undermine it—often unintentionally—by pursuing goals that their position or employer’s policies dictate. Each project and decision maker is different, so generalizations are useful only to a limited extent. Nevertheless, one can draw insights by considering typical incentives and disincentives that come with a given job description and role in the design-build process, regardless of the opinions and values of the person who is doing that particular job. As scholars of organizational behavior note, the adage “where you stand depends on where you sit” applies.
Developers often build on speculation. They will find a buyer eventually. The lower their initial costs, the greater their potential profit from sale or lease. The structural shell is designed before tenants are found, and performance specifications are unlikely to exceed minimum building code requirements. Developers can buy low-quality equipment to save themselves money, and they don’t ultimately pay the resulting higher energy bills. They might be experienced in green building techniques but probably are not. Many see little incentive to risk slowing their project turnover rate, increasing costs, or alienating potential customers with unfamiliar green features.
Tenants usually have little control over building design and tend to have a short-term perspective on costs. Even buyers of spec buildings often have no influence on the design or performance.
Organizations that own their buildings are more likely to take a more integrated, long-term perspective on life-cycle cost and performance (especially for new construction). They might be more interested in green building concepts than other players—or at least more likely to push for improvements. Even then, senior managers might share and communicate a greener vision but face competing pressures from project managers or department heads within their own firm or among their contractors.
Project managers are often rewarded for completing work ahead of schedule and under budget. This can provide incentives to cut corners, reject or redo design features and specifications (such “value engineering” often undermines integrated design), squeeze more out of contractors, and proceed with the most readily available options without pausing to make improvements or even to correct noncritical shortcomings and mistakes. If the manager’s budget is funding construction but not building operation, there might be an incentive to use cheaper but lower-quality materials and equipment and leave any increased maintenance or cost concerns to somebody else. These factors apply to owners’ employees and general contractors alike.
Architects are encouraged to innovate and are rewarded for interesting new designs with recognition and further work. However, environmental attributes do not often rank high in the review criteria of their clients and peers. Architects might have significant training or experience in whole-system, resource-efficient sustainable design but probably do not. If the client hasn’t asked them to create a green building, they have little incentive to struggle to explain the potential benefits to the owner or contractor. When fees are based on a percentage of project cost, the compensation structure rewards architects for what they spend and not for what they save the client (or whoever ultimately pays the utility bills) in reduced energy or water use and costs.
Architects and engineers must work together on the same design, but that does not mean that they necessarily coordinate their efforts to produce an optimal building. In many cases, the architects and engineers are from different contractors. Even when they are from two departments within the same firm, all too often there is relatively little communication and harmonization of design approaches and equipment specifications. The architect completes the design with minimal input from the engineers and in effect rolls up the drawings and pushes them through a little hole in the wall into the engineering department to execute the next project phase. The design process is sequential rather than simultaneous.
There are two main types of engineers involved in building. Structural engineers are relatively conservative in their approach because if their design doesn’t work, someone could die. Safety and consistency are prioritized over innovation. Mechanical engineers (MEs) face less pressure in that their worst-case design failure scenario is that building occupants might have to buy a fan or heater. But MEs are ultimately responsible for the majority of a building’s energy use. For example, HVAC systems comprise almost half of the energy use of a typical San Francisco office building, the largest share of the load. (The next-largest energy consumer is lighting at more than one-fourth, and plug loads account for more than 10 percent of the building’s total electricity use.) Yet better mechanical systems designs are typically invisible to users. Even if those paying the utility bills realize lower costs, unless they share the savings with the engineering team, the MEs are typically not rewarded for innovation or greater effort to green the design.
Both types of engineers face incentives to overdesign structural and mechanical systems, as excess capacity provides a margin of security (but often wastes resources). Both types labor under the same tight budgets and short timelines. They often specify average- rather than premium-quality equipment to cut initial costs and use design rules of thumb to save time. Indeed, if a problem arises, the engineer’s best defense is that the design follows standard practice. Techniques that worked in the past (or at least did not fail) are copied and reused. Measurement and analysis of previous structures’ actual performance is not commonly incorporated into improving the next similar design. Unlike architects, engineers are quite happy to make a building look and perform like the one next door. Those habitual approaches produce functional but overly energy-intensive designs.
Facility managers’ experience and input are rarely solicited and incorporated into the design process. Typically, the managers are handed the keys after the building is complete and tasked with keeping the lights on and the floors clean on a limited budget. Increasingly, their function is outsourced. Their staff might not have the time or training to commission, maintain, and operate systems at peak environmental performance. They might not pay the utility bills or have much funding for investment in building improvements. Even if they do, they might not be inclined to increase energy and water efficiency and cut costs if their reward is a smaller budget next year.
Apart from the owner, no single participant in this group decision-making system has compelling authority over the others, and none can exert determining influence over the process. Even the owner must exert considerable effort to ensure that her objectives survive every step of the sequence. The typical result of this collective process is safe, sometimes interesting-looking structures with poor energy performance and average (frequently excessive) environmental impacts.
Most of the parties to design-build projects are used to these standard approaches and common dynamics, adhere to them habitually, and expect them intuitively. They see nothing abnormal and perceive little need for improvement, given that for the most part the end-user clients and occupants are satisfied or at least not complaining any more than usual. No market failures are required to explain this outcome, although it imposes unnecessary costs on society. All the participants in this process are acting in their rational economic self-interest, within the bounds of their knowledge. If a camel is “a horse designed by a committee,” as the joke goes, then, in effect, all buildings are camels: their design intention has been subverted by the process.
Strategies for a Greener Facility Design-Build Process
The standard process produces suboptimal buildings because participants pursue their own objectives, even to a limited extent, rather than compromising more and cooperating in greater harmony to obtain optimal outcomes for building owners and users over the long term. Thus green building champions are necessarily change agents. Their challenge is to influence the organizational system by influencing the participants as well as technology and design. This experience can be as difficult as herding cats.
Only by providing participants with compelling reasons to change their approach, such as financial benefits and strategic advantage, can you foster lasting change.
The following paragraphs give a brief overview of some remedies to the common barriers to greener buildings.
Start early. It is very important to incorporate green elements from the very beginning of a project. An old design axiom says that all the really important mistakes are made on the first day. Even small decisions early in the process have significant influence on future building performance and costs. It is worthwhile to “measure twice and cut once” where building design and performance are concerned.
Increased awareness of green building techniques and demonstrated successes would benefit all the parties. Formal education plays a crucial role, but the pace of market transformation can be slow as graduates enter the workplace and make their mark. More and better in-service training and user-friendly resource materials for busy professionals can help shift existing practices faster. Positive, hands-on field experience with sustainable building is perhaps the most potent learning tool.
Encourage the use of outside energy-efficiency reviewers. Doing so can help establish a common baseline for design objectives and performance benchmarks. Authoritative third-party project assessments reinforce the importance of ensuring that specified and installed equipment and systems operate efficiently. For example, many energy companies provide energy-efficiency design assistance, useful resources and support, and sometimes economic incentives.
Building codes (such as California’s stringent Title 24) and voluntary guidelines (such as LEED) can improve building design and performance as well as help educate practitioners. The LEED rating system provides a framework for setting shared goals, a template for project execution, and neutral evaluation criteria based on consensus best practices and measurable criteria (for more information on the LEED rating system, see Chapter 7, Section 7.1.)
Set targets and rewards for performance. Use specific metrics and performance criteria. Provide clear financial incentives for high-quality work. For example, provide a bonus payment if the building’s performance exceeds California’s Title 24 Energy Code by more than 40 percent or if LEED rating points are earned. Performance-based fees compensate architects and engineers in part based on measured savings in energy and water efficiency relative to preagreed building performance standards, an incentive for more efficient design.
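A performance-based fee can be sketched as a simple calculation: the design team earns a bonus share of measured savings beyond an agreed baseline. The baseline, measured use, price, and bonus share below are invented terms for illustration, not figures from the text.

```python
# Hedged sketch of a performance-based design fee. All terms are
# assumptions: a real contract would define baseline, measurement,
# and sharing terms in detail.

baseline_kwh = 1_200_000  # agreed baseline annual energy use (assumed)
measured_kwh = 900_000    # measured post-occupancy use (assumed)
price = 0.10              # $ per kWh (assumed)
bonus_share = 0.20        # fraction of first-year savings paid as bonus

savings = max(0.0, (baseline_kwh - measured_kwh) * price)
bonus = savings * bonus_share

print(f"Measured savings: ${savings:,.0f}")
print(f"Design-team bonus: ${bonus:,.0f}")
```

The `max(0.0, ...)` guard reflects a common design choice: the fee rewards verified savings but does not penalize the team if the baseline is not beaten.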
The most effective approach is neither a technology nor a set of guidelines and benchmarks but rather to redesign the process itself. An integrated design process brings together project participants, stakeholders, and outside expertise at the earliest practical point in the project to collaborate, cocreate, and execute a shared vision. Often called a charrette, such an intensive, multidisciplinary-facilitated meeting can help identify and overcome many of the barriers to optimal green design.
This integrative process assists participants in articulating their differing perceptions and incentives and allows them to exchange ideas, work out problems, and establish common terminology and objectives. It creates a communication space in which to build mutual understanding and trust, clarifies owners’ goals and options, and helps participants agree on any mutual trade-offs and concessions that might be required to achieve an optimal result. Those exercises can significantly improve plans and specifications, streamline construction, reduce total costs, and increase building performance—increasing the chances that systems will work as they are intended to, rather than just as they are designed to.
End-Use, Least-Cost Analysis
End-use, least-cost analysis is a core concept of whole-systems, resource-efficient design. Historically, energy resource discussions have focused on supply: where do we get more, and how much does it cost? But people don’t want barrels of oil or kilowatt-hours of electricity per se; they want the services that energy ultimately provides, such as hot showers, cold beer, comfortable buildings, light, torque, and mobility. Considered from the demand as well as the supply side of the equation, least-cost analysis identifies the cheapest, cleanest way to deliver each of these services. Often the better, more cost-effective way is using less energy more productively, with smarter technologies. Efficient end use can thus compete with new supply as an energy resource and leverage bigger savings in resources, cost, and upstream pollution across the whole system.
Saving energy (especially electricity) is cheaper than consuming fuel to generate it. Surveys of utility-directed “demand-side management” efforts to save electricity show that saved watts—or what Amory Lovins calls “negawatts”—typically cost \$0.02 to \$0.025 per saved kilowatt-hour or less. That is less expensive than the marginal cost of electricity from any other source of supply and, unlike most types of generation, emits no pollution. Although the potential savings are finite, they are significant.
Consider a pumping example (Figure 7.13). The end use is to move a unit of fluid through an industrial pipe. The pump runs on electricity. Thermal losses occur when coal is burned at a power plant to produce steam that a generator converts to electricity. Energy losses compound in transmission and distribution, in the motor and pump, the throttling balance valve, and pipe friction, until ultimately only 10 percent of the coal’s embodied energy does the desired work.
Where is the biggest leverage point for resource efficiency? Conversion efficiency can be improved at the power plant (e.g., with heat recapture and cogeneration) and at other points along the delivery chain. Yet the biggest “bang for the buck” lies closest to the end-use application. For example, bigger pipes have less friction, reducing pumping requirements. That leverages upstream savings, turning losses into compounding savings. Each unit of conserved pumping energy in the pipe saves ten units of fuel, cost, and pollution at the power plant.
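The compounding-loss arithmetic behind this example can be sketched directly by multiplying stage efficiencies along the delivery chain. The individual efficiencies below are assumed illustrative values, chosen so the chain delivers roughly 10 percent of the fuel’s energy to the end use, as in the example.

```python
# Sketch of compounding conversion losses from power plant to pipe.
# Stage efficiencies are illustrative assumptions, not measured data.

stages = {
    "power plant":        0.35,  # fuel to electricity
    "transmission/dist.": 0.91,
    "motor":              0.90,
    "pump":               0.77,
    "throttle valve":     0.66,
    "pipe friction":      0.69,
}

delivered = 1.0
for name, eff in stages.items():
    delivered *= eff
    print(f"after {name:>20}: {delivered:.2f} of fuel energy remains")

# Each unit of pumping energy saved at the end use avoids
# 1/delivered units of fuel, cost, and emissions at the power plant.
print(f"Upstream leverage factor: {1 / delivered:.1f}x")
```

Because the losses multiply, a saving at the far downstream end of the chain is leveraged by the inverse of the chain’s overall efficiency, about tenfold here.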
Generating Returns from Integrated Systems
Integrated design methodology optimizes the relationships among the components in technical systems as well as among a facility’s component subsystems. The performance of many mechanical systems is undermined by design shortcuts, compromised layouts, and penny-wise, pound-foolish capital cost cutting. An integrated design approach can recognize and mitigate these effects at the same or reduced construction cost. It is much more cost-effective to integrate these elements into the initial design than to try to squeeze them into the project later—or retrofit them after completion. Maximum savings are achieved by first minimizing load at the end-use application, before selecting the energy supply or applying energy-conserving measures “upstream” toward the motor or other energy-conversion device.
Consider the pumping example depicted in Figure 7.13. Bends in pipes or ducts increase friction and thus pumping power requirements. Optimal pipe and duct layouts eliminate bends. Larger-diameter pipe sizing is also very important because friction falls as nearly the fifth power of pipe diameter. Smaller pumping requirements enable smaller pumps, motors, and electrical systems, reducing capital costs. Larger pipes also maintain equivalent fluid flow at less velocity, enabling significant pumping energy savings. The “cube law” relationship between pump impeller power and fluid flow means that decreasing velocity by half drops pumping power use by almost seven-eighths. (Those same dynamics and potential savings also apply to ducts and fans.)
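The two scaling relations cited above, friction falling as roughly the fifth power of pipe diameter at constant flow and pump power scaling with the cube of fluid velocity, can be checked with a few lines. The specific diameters below are illustrative.

```python
# Minimal sketch of the two scaling relations cited in the text.
# Both are approximations; real systems also depend on fittings,
# roughness, and flow regime.

def friction_ratio(d_new, d_old):
    """Approximate pressure-drop ratio at constant flow:
    friction falls roughly as diameter to the fifth power."""
    return (d_old / d_new) ** 5

def power_ratio(v_new, v_old):
    """Pump 'cube law': power scales with velocity cubed."""
    return (v_new / v_old) ** 3

# One pipe size up, e.g. diameter from 100 mm to 125 mm (illustrative):
print(f"Friction at same flow: {friction_ratio(125, 100):.0%} of original")

# Halving fluid velocity (e.g., by roughly doubling pipe area):
print(f"Pump power at half velocity: {power_ratio(0.5, 1.0):.1%} of original")
```

A 25 percent increase in diameter cuts friction by roughly two-thirds, and halving velocity leaves only one-eighth of the pumping power, the “almost seven-eighths” saving described above.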
That approach was pioneered by Singaporean engineer Lee Eng Lock. Lee tutored Dutch engineer Jan Schilham at Interface Corporation, who applied those techniques to a pumping loop for a new carpet factory. A top European company designed the system to use pumps requiring a total of 95 horsepower. But before construction began, Schilham upsized the pipes and thus downsized the pumps. The original designer had chosen the smaller pipes because, according to the traditional cost-benefit analysis method, the extra cost of larger ones wasn’t justified by the pumping energy savings.
Schilham further reduced friction with shorter, straighter pipes by laying out the pipes first, then positioning the equipment that they connected. Designers normally position production equipment without concern for efficient power configuration, and then have a pipe fitter connect the components with long runs and numerous bends. Those simple design changes cut the power requirement to only 7 horsepower—a 92 percent reduction. The redesigned system cost less to build and to operate, was easier to insulate, involved no new technology, and worked better in all respects. That small example has important implications: pumping is the largest application of motors, and motors use three-quarters of all industrial electricity in the United States, or three-fifths of all electricity.
Inventor Edwin Land said, “People who seem to have had a new idea have often simply stopped having an old idea.”Andrea Larson and Mark Meier, Project FROG: Sustainability and Innovation in Building Design, UVA-ENT-0158 (Charlottesville: Darden Business Publishing, University of Virginia, 2010). The old idea is one of diminishing returns—that the greater the resource saving, the higher the cost. But that old idea is giving way to the new idea that innovative design can make big energy savings less expensive to attain than small savings. Such “tunneling through the cost barrier” has been proven in many kinds of technical systems. (A few other examples are highlighted later in this section.)
Noted green architect William McDonough said, “Our culture designs the same building for Reykjavik and Rangoon; we heat one and cool the other; why do we do it that way? I call this the ‘Black Sun.’”Andrea Larson and Mark Meier, Project FROG: Sustainability and Innovation in Building Design, UVA-ENT-0158 (Charlottesville: Darden Business Publishing, University of Virginia, 2010). Facilities’ energy intensity is chiefly in the HVAC systems that create interior comfort by compensating for climatic conditions and that provide (or remove) industrial process heat and cooling.
Cooling systems are typically designed to serve peak load, regardless of how frequently that occurs. The chilled water temperature is often determined by the most extreme thermal requirements of a small subset of the total load, such as one or two machines out of many. That results in excess cooling capacity and inefficient operation at partial loads. It is much more efficient to segregate the loads with parallel chilled water piping loops at two different temperatures. One higher-temperature loop with dedicated chillers optimized for that temperature can serve the majority of a facility’s load. A second lower-temperature loop with a smaller high-efficiency chiller can serve the most demanding subset of the load. This can improve overall cooling plant efficiency by 25 percent or more. Higher temperature chillers cost less than lower temperature chillers of equal capacity.
“Thermal integration” leverages temperature differences. Many businesses consume energy to create heat and then spend even more energy removing waste heat from their processes and facilities, without matching up the two. Instead, they should strive to make full use of available energies before discarding them to the environment. Waste heat from an oven or boiler can be used to preheat wash water or intake air. Winter or night cool air, groundwater, or utility water can provide free cooling. Heat exchangers can allow energy transfer between media that should not mix. Such measures can reduce or eliminate HVAC capacity.
Lighting is generally one of the most cost-effective energy savings opportunities, due to the rapid pace of improvements in lighting technology and design. Retrofits usually offer attractive paybacks, averaging roughly 30 percent ROI. Yet the impact on building systems extends beyond illumination. Energy-efficient bulbs also emit less heat, thereby reducing facility cooling loads, enabling HVAC capacity cost savings.
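The retrofit economics can be sketched as a simple ROI and payback calculation. All of the costs and savings below are assumptions for illustration, not figures from the text; the HVAC bonus term reflects the reduced cooling load mentioned above.

```python
# Hedged sketch of lighting-retrofit economics. All figures assumed.

retrofit_cost = 30000.0     # $ installed (assumed)
lighting_kwh_saved = 75000  # kWh saved per year (assumed)
price = 0.10                # $ per kWh (assumed)
hvac_bonus = 0.15           # extra savings from reduced cooling load (assumed)

annual_savings = lighting_kwh_saved * price * (1 + hvac_bonus)
roi = annual_savings / retrofit_cost
payback_years = retrofit_cost / annual_savings

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Simple ROI: {roi:.0%}; payback: {payback_years:.1f} years")
```

With these assumed numbers the simple ROI lands near the roughly 30 percent figure cited above; note that counting the avoided cooling energy, a whole-system effect, is what pushes it there.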
Money
Making the business case for efficiency improvements is perhaps the most important yet most challenging task facing a sustainability champion. Most companies spend a small fraction of their costs on energy, and it does not command much executive attention. Facilities maintenance is a far lower priority to most senior managers than production, sales, and customer service. Yet saving 1–2 percent of total costs matters, even in financial terms alone.
Whole-Systems, Life-Cycle Costing
Green building experience shows that cost-effective energy savings of 30–50 percent are achievable in many facilities worldwide. Much of this wasted energy and excess mechanical and electrical systems capacity results from minimizing first cost instead of cost of ownership, especially in fast-track projects. High-efficiency design and equipment can cost more up front. Penny-wise, pound-foolish shortcuts and cost cutting degrade performance and increase energy bills for a facility’s lifetime. Smart money looks at the big picture, not just the price tag.
Pervasive overemphasis on short-term first costs results in wasteful decisions. In facilities design and construction, the “value engineering” process is intended to save the owners money. Plans are reviewed and components are approved or rejected with a line-item-veto approach. Although that method can squeeze increments of capital cost out of a design, in actuality it undermines both long-term value and engineering integrity. A component-focused approach erodes design integration (and often function) and negates whole-systems benefits. Paying more for one component can often downsize or eliminate others, reducing total system capital cost as well as operating cost. Optimizing components for single benefits, not whole systems or multiple benefits, “pessimizes” the system. A first-cost approach might benefit one department’s budget one time but imposes increased operating costs on the firm for decades to come. Look for the cheapest total cost of owning and operating the entire system of which the device is a component.
Whole-system, life-cycle costing incorporates both capital and operating costs (as well as downtime costs, changes in output, the value of reliability, and other factors). It allows companies to assess the actual total cost of ownership, a better reflection of the financial impact of decisions on a company and its shareholders. Those techniques should credit savings from reduced infrastructure (recall the “big pipes, small pumps” example).
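Life-cycle costing can be sketched as first cost plus the present value of operating costs over an analysis period. The two options, their costs, the period, and the discount rate below are all assumptions for illustration.

```python
# Minimal life-cycle costing sketch: first cost plus discounted
# operating costs. All figures are illustrative assumptions.

def life_cycle_cost(first_cost, annual_operating, years, discount_rate):
    """First cost + present value of annual operating costs."""
    pv_ops = sum(annual_operating / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return first_cost + pv_ops

# A cheaper-to-build but costlier-to-run option vs. a high-efficiency one:
cheap = life_cycle_cost(first_cost=100_000, annual_operating=25_000,
                        years=20, discount_rate=0.07)
premium = life_cycle_cost(first_cost=130_000, annual_operating=15_000,
                          years=20, discount_rate=0.07)

print(f"Low-first-cost option:  ${cheap:,.0f}")
print(f"High-efficiency option: ${premium:,.0f}")
print(f"Life-cycle savings from the premium option: ${cheap - premium:,.0f}")
```

Under these assumptions the option that costs \$30,000 more up front saves several times that amount over twenty years, which is exactly the comparison a first-cost-only review never makes.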
Some designers try to save money by using standard rules of thumb and even copying old designs without improving upon them. That helps them offer low bids to secure work. Facilities owners might find such practices appealing to reduce short-run costs or to help reduce construction project timelines. Although facilities construction timing is critical to some industries’ profit model (e.g., electronics), fast-track design should not become standard procedure because speed comes at the price of lost efficiency and project value. Evaluate and improve upon past designs using operator feedback and careful measurement. Often the perceived need for fast design and construction is caused by lack of planning and preparation. Over time, fast design can inadvertently become a substitute for these vital steps.
Prioritization of Improvements
Whole-systems investment criteria are relevant to how proposed improvements are implemented. Green design consultants and champions often rank their suggestions according to their cost and return on investment (ROI). Managers are tempted to go for the “low-hanging fruit” and select the most financially attractive measures first (or only), to reduce costs. This is also true of energy service companies (ESCOs), which consult to firms on efficiency opportunities and often help implement the measures. Many ESCOs upgrade lighting, share the savings resulting from reduced bills with their clients—and stop there. But “cream-skimming” only the most attractive savings can render less financially attractive measures uneconomical when they are considered individually rather than as part of a systemic set of upgrades. This can make the larger potential total savings of the whole set of opportunities difficult or impossible to attain. Maximize cost-effective improvements by considering all proposed green measures as a package, and reinvest the resources freed up via the larger cost reduction projects into other, less individually attractive projects. Only in that way will you be able to attain large systemic improvements cost-effectively.
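The cream-skimming problem can be sketched with hypothetical numbers (the costs, savings, and 30 percent hurdle rate below are assumptions, not figures from the text): judged one at a time against a strict hurdle, only the best measure survives, yet the whole package still clears a cost-of-capital benchmark and is worth doing as a set.

```python
# Hypothetical illustration of "cream-skimming" vs. packaging measures.
# Costs, savings, and the 30% hurdle are assumed for illustration only.

measures = {
    "lighting retrofit": {"cost": 100_000, "annual_savings": 45_000},  # 45% ROI
    "HVAC controls":     {"cost": 150_000, "annual_savings": 30_000},  # 20% ROI
    "envelope upgrades": {"cost": 200_000, "annual_savings": 36_000},  # 18% ROI
}

HURDLE = 0.30           # strict simple-ROI hurdle applied measure by measure
COST_OF_CAPITAL = 0.15  # assumed marginal cost of capital

for name, m in measures.items():
    roi = m["annual_savings"] / m["cost"]
    verdict = "accepted" if roi >= HURDLE else "rejected"
    print(f"{name}: ROI {roi:.0%} -> {verdict} individually")

# Considered as one package, the same three measures together:
total_cost = sum(m["cost"] for m in measures.values())
total_savings = sum(m["annual_savings"] for m in measures.values())
package_roi = total_savings / total_cost
print(f"Package ROI: {package_roi:.0%} "
      f"(well above the {COST_OF_CAPITAL:.0%} cost of capital)")
```

Only the lighting retrofit passes the strict hurdle alone, but the blended package ROI comfortably exceeds the cost of capital, so cherry-picking the lighting measure would leave profitable systemic savings on the table.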
Investment Criteria: Payback and ROI
Retrofit improvements are often blocked by nonsensical financial hurdles. Upgrades are commonly held to higher ROI standards than are purchases of new equipment. Reexamine investment criteria to avoid distortions and inconsistencies. Most companies seem to apply an eighteen-month to two-year cap on payback periods for investments in efficiency, although the rationale for doing so remains unclear. This is a much stricter standard than the typical investment criteria for new capacity or supply investments, which are closer to the cost of capital (e.g., about 11–15 percent ROI). Harmonize payback and ROI requirements so that operating and financial people speak a common language; otherwise they cannot compare investment opportunities on a level playing field. There are multiple approaches to calculating payback and ROI; simplified methods are used here for the purposes of discussion.
The payback period for implementing an energy-efficiency measure can be calculated as the implementation cost divided by the annual energy savings in dollars. The resulting figure is the number of years of operation required to fully recover the capital investment. The ROI method used by the Department of Energy compares the projected annual cost (CA) after implementing a project with the baseline annual cost (CB) before implementing it. Expressed as a formula, ROI is the ratio of anticipated annual cost savings (CB − CA) to projected implementation cost (CI), expressed as a percentage.
Let us consider a hypothetical example of lighting and HVAC improvements. CA are energy costs after the energy-efficiency project is implemented, CB are the energy costs before implementing the project, and CI is the cost of the project. Using this calculation method, the ROI for the measures are shown in Table 7.4.
Table 7.4 Sample ROI
Measure Savings (US\$) Costs (US\$) ROI (%)
HVAC measures 85,600 262,800 33
Lighting measures 37,200 100,800 37
Total 122,800 363,600 34
The ROI is 33 percent and 37 percent for HVAC and lighting measures, respectively. That means that each year an average of 34 percent of the original investment is recovered through energy savings—several times higher than the typical ROI requirement for investments in new productive capacity. If a company’s marginal cost of capital is, for example, 15 percent per year, that implies that the company is willing to accept a payback in the six- to seven-year range for additional capacity. Insisting that energy efficiency pay as much as \$0.04 to \$0.08 per kilowatt hour more for negawatts than for new electricity supply deprives shareholders of profits.
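The ROI formula can be checked against Table 7.4 with a short script (a sketch; the dollar figures come from the table, and the simple-payback column is derived from them):

```python
# Reproduce Table 7.4's ROI figures using the DOE formula
# ROI = (CB - CA) / CI, where CB - CA is annual savings and CI is project cost.
# Simple payback = CI / annual savings is derived for comparison.

projects = {
    "HVAC measures":     {"annual_savings": 85_600, "cost": 262_800},
    "Lighting measures": {"annual_savings": 37_200, "cost": 100_800},
}

for name, p in projects.items():
    roi = p["annual_savings"] / p["cost"]
    payback = p["cost"] / p["annual_savings"]
    print(f"{name}: ROI {roi:.0%}, simple payback {payback:.1f} years")

total_savings = sum(p["annual_savings"] for p in projects.values())
total_cost = sum(p["cost"] for p in projects.values())
print(f"Combined: ROI {total_savings / total_cost:.0%}, "
      f"payback {total_cost / total_savings:.1f} years")
```

Run as written, this reproduces the 33, 37, and 34 percent figures from Table 7.4 and shows that the implied simple paybacks are about three years, far shorter than the six- to seven-year payback a 15 percent cost of capital would tolerate.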
Figure 7.14 shows the US Environmental Protection Agency’s suggested ranking system for prioritizing efficiency investments. Each box represents a category of equivalent payback and qualitative ROI criteria. Investments for both new equipment and upgrade improvements can be assessed on this equivalent basis with an eye toward added value to the firm.
While building green may require collaboration among many different people at multiple points in the process, the effort can be well worth it. For little to no additional upfront costs, green buildings save operating costs and improve occupant productivity. The key, however, is to optimize the entire system rather than to view the design, construction, and operation of a building as unrelated parts.
KEY TAKEAWAYS
• A systems approach can significantly improve building designs and ongoing operating costs by optimizing performance across all aspects of the building system.
• Some barriers remain to green building, such as inadequate funding models and lack of knowledge, but those barriers are decreasing steadily.
EXERCISES
1. What are the major challenges facing Heather Glen?
2. What obstacles did she confront in the past, and how did those help prepare her for her current task?
3. What can she do to successfully implement this new project?
Learning Objectives
1. Analyze strategies to spur and sustain innovation within a mature industry.
2. Understand how a variety of innovations can accumulate to a significant breakthrough.
3. Examine how cradle-to-cradle design is implemented.
The carpet industry is the battlefield where the war for sustainability is being waged.
- Architect William McDonough
The Shaw Industries case examines the production of a cradle-to-cradle carpet product in which a waste stream becomes a material input stream. (This case was written by Alia Anderson and Karen O’Brien under the supervision of author Andrea Larson and developed under a cooperative effort by the Batten Institute, the American Chemical Society, and the Environmental Protection Agency’s Office of Pollution Prevention: Alia Anderson, Andrea Larson, and Karen O’Brien, Shaw Industries: Sustainable Business, Entrepreneurial Innovation, and Green Chemistry, UVA-ENT-0087 [Charlottesville: Darden Business Publishing, University of Virginia, 2006], available through the Darden Case Collection at https://store.darden.virginia.edu. Unless otherwise noted, quotations in this section refer to this case.) In this situation we look at innovation challenges faced by a large global competitor.
In 2003 Shaw’s EcoWorx carpet tiles won a US Green Chemistry Institute’s Presidential Green Chemistry Challenge Award. The company had earned the award by combining the application of green chemistry and engineering principles (Table 7.5) with a cradle-to-cradle design (often called C2C) approach to create a closed-loop carpet tile system, a first in the industry. (See William McDonough and Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things [New York: North Point Press, 2002] for extensive discussion of the C2C frame of reference; the field of industrial ecology provides a conceptual basis for this discussion, as in Thomas E. Graedel and Braden R. Allenby, Industrial Ecology [Englewood Cliffs, NJ: Prentice Hall, 1995].) The product met the rising demand for “sustainable” innovations, helping to create a new market space in the late 1990s and 2000s as buyers became more cognizant of human health and ecosystem hazards associated with interior furnishings.
At the time, Steve Bradfield, Shaw’s contract division vice president for environmental development, commented on the process of creating the EcoWorx innovation, a process that by no means was over: “The 12 Principles and C2C provide a framework for development of EcoWorx that incorporates anticipatory design, resource conservation, and material safety.” (Alia Anderson, Andrea Larson, and Karen O’Brien, Shaw Industries: Sustainable Business, Entrepreneurial Innovation, and Green Chemistry, UVA-ENT-0087 [Charlottesville: Darden Business Publishing, University of Virginia, 2006].) The framework was part of a larger sustainability strategic effort that the contract division was leading at Shaw. The company also needed to explain the benefits of the EcoWorx system and educate the marketplace on the desirability of sustainable products as qualitatively, economically, and environmentally superior replacements for a product system that had been in place for thirty years. Change was difficult, especially when the gains from a substitute product were not well understood by the end user or the independent distributor. It was also difficult internally for a Shaw culture that didn’t fully comprehend the need to move beyond conservation.
Table 7.5 The Twelve Principles of Green Chemistry and Green Engineering
Green Chemistry
1 Prevention. It is better to prevent waste than to treat it or clean it up after it has been created.
2 Atom Economy. Synthetic methods should be designed to maximize the incorporation of all materials used in the process into the final product.
3 Less Hazardous Chemical Syntheses. Wherever practicable, synthetic methods should be designed to use and generate substances that possess little or no toxicity to human health and the environment.
4 Designing Safer Chemicals. Chemical products should be designed to effect their desired function while minimizing their toxicity.
5 Safer Solvents and Auxiliaries. The use of auxiliary substances (e.g., solvents, separation agents, etc.) should be made unnecessary wherever possible and innocuous when used.
6 Design for Energy Efficiency. Energy requirements of chemical processes should be recognized for their environmental and economic impacts and should be minimized. If possible, synthetic methods should be conducted at ambient temperature and pressure.
7 Use of Renewable Feedstocks. A raw material or feedstock should be renewable rather than depleting whenever technically and economically practicable.
8 Reduce Derivatives. Unnecessary derivatization (use of blocking groups, protection/deprotection, temporary modification of physical/chemical processes) should be minimized or avoided, if possible, because such steps require additional reagents and can generate waste.
9 Catalysis. Catalytic reagents (as selective as possible) are superior to stoichiometric reagents.
10 Design for Degradation. Chemical products should be designed so that at the end of their function they break down into innocuous degradation products and do not persist in the environment.
11 Real-Time Analysis for Pollution Prevention. Analytical methodologies need to be further developed to allow for real-time, in-process monitoring and control prior to the formation of hazardous substances.
12 Inherently Safer Chemistry for Accident Prevention. Substances and the form of a substance used in a chemical process should be chosen to minimize the potential for chemical accidents, including releases, explosions, and fires.
Green Engineering
1 Inherent Rather Than Circumstantial. Designers need to strive to ensure that all materials and energy inputs and outputs are as inherently nonhazardous as possible.
2 Prevention Instead of Treatment. It is better to prevent waste than to treat or clean up waste after it is formed.
3 Design for Separation. Separation and purification operations should be designed to minimize energy consumption and materials use.
4 Maximize Efficiency. Products, processes, and systems should be designed to maximize mass, energy, space, and time efficiency.
5 Output Pulled versus Input Pushed. Products, processes, and systems should be “output pulled” rather than “input pushed” through the use of energy and materials.
6 Conserve Complexity. Embedded entropy and complexity must be viewed as an investment when making design choices on recycling, reuse, or beneficial disposition.
7 Durability Rather Than Immortality. Targeted durability, not immortality, should be a design goal.
8 Meet Need, Minimize Excess. Design for unnecessary capacity or capability (e.g., “one size fits all”) solutions should be considered a design flaw.
9 Minimize Material Diversity. Material diversity in multicomponent products should be minimized to promote disassembly and value retention.
10 Integrate Material and Energy Flows. Design of products, processes, and systems must include integration and interconnectivity with available energy and materials flows.
11 Design for Commercial “Afterlife.” Products, processes, and systems should be designed for performance in a commercial “afterlife.”
12 Renewable Inputs. Material and energy inputs should be renewable rather than depleting.
Source: P. T. Anastas and J. C. Warner, Green Chemistry: Theory and Practice (New York: Oxford University Press, 1998), 30; and P. T. Anastas and J. B. Zimmerman, “Design through the 12 Principles of Green Engineering,” Environmental Science and Technology 37, no. 5 (2003): 95–101. Used by permission.
The US Carpet Industry
World War II demanded wool, then the dominant carpet material, for military uniforms and blankets, providing an incentive for companies to research and create alternative fibers. This move toward alternatives was part of the general wartime drive that culminated in the introduction of synthetic materials (man-made) for many uses. After the war, manufacturers continued to develop various new natural and synthetic materials. By the 1960s, DuPont and Chemstrand’s man-made nylon and acrylic materials supplied most of the growing carpet industry’s textile fiber needs. An average American household could now afford machine-tufted synthetic carpets that replaced the expensive woven wool carpets of the past. By 2004 nylon accounted for 68 percent of the fibers used in carpet manufacturing, followed by 22 percent polypropylene and 9 percent polyester, with wool constituting less than 0.7 percent of the total.
By the 1970s, carpet flooring was the dominant aesthetic standard in a high proportion of industrialized countries for residential and commercial flooring markets. Historically, woven wool carpets (in which the carpet surface and backing were essentially one layer) gave way to tufted (fibers pulled through a matrix web) and needle-punched carpets bonded by a latex backing layer using an array of synthetic face fibers and backing materials. Carpet tiles, the fastest growing segment of the commercial carpeting industry, were expected to steadily replace much of the rolled broadloom carpet used historically in offices and other commercial locations. Regardless of design, all carpeting had traditionally been a complex matrix of dissimilar materials constructed without any thought of disassembly for recycling. (It was not until 1994 that the industry began to take a more serious look at sustainability. One early adopter was the carpet tile innovator Interface Inc., which took steps to integrate sustainability throughout the company from top to bottom, reducing scrap waste, identifying operational inefficiencies, lowering energy use through solar and other innovations, and introducing a carpet leasing program through which it collected and recycled end-of-use carpet. Independently, however, other carpet producers began developing their own programs and initiatives, programs that some would contend exceeded the solutions Interface devised.)
Shaw Industries, Mohawk Industries, and Beaulieu of America were the three largest carpet producers in 2004. Interface was the largest carpet tile manufacturer. Invista, a fiber spin-off of DuPont, and Solutia were the sole US producers of Nylon 6.6, a type suitable for carpet. Honeywell and vertically integrated carpet giant Shaw Industries were the major producers of Nylon 6 for carpet use. Price competition, economic downturn, and overcapacity had taken a heavy toll on American fiber and carpet companies. Unlike the broader textile industry, the nylon fiber and carpet producers did not see an influx of low-cost imports due to high transportation costs, relatively low labor costs associated with US fiber and carpet production, and the difficulties in finding viable US distribution channels for imports. The industry was consolidating and companies vertically integrated, formed alliances, or organized around market niches as lower carpet and floor covering sales tracked personal income insecurity and general economic turbulence. The first few years of the twenty-first century witnessed the loss of more than 90,000 US textile jobs and 150 plant closings. The carpets and rugs sector experienced sluggish growth. Growth rebounded by 2005, but competition was fierce and buyers would not tolerate higher prices or lower product performance.
Shaw Industries
In 2006, Shaw Industries of Dalton, Georgia, was the world’s largest carpet manufacturer, selling in Canada, Mexico, and the United States and exporting worldwide. The company’s historic carpet brand names, including Cabin Crafts, Queen Carpet, Salem, Philadelphia Carpets, and ShawMark, were de-emphasized relative to the consolidated Shaw brand. Shaw sold residential products to large and small retailers and to the much smaller distributor channel. Shaw offered commercial products primarily to commercial dealers and contractors, including its own Spectra commercial contracting locations, through Shaw Contract, Patcraft, and Designweave. The company also sold laminate, ceramic tile, and hardwood flooring through its Shaw Hard Surfaces division, and rugs through the Shaw Living division. Shaw Industries was publicly traded on the New York Stock Exchange (NYSE) until 2000, when it was purchased by Warren Buffett’s Berkshire Hathaway Inc. Shaw’s stock had been one of the best-performing stocks on the NYSE in the 1980s, but Wall Street’s dot-com focus of the 1990s depressed the stock price of Shaw and other manufacturers. The Berkshire buyout took the Wall Street factor out of Shaw’s management strategy, and 2001 through 2006 were record earnings years for the company.
Between 1985 and 2006, Shaw Industries made a string of acquisitions, including other large carpet makers, fiber-dyeing facilities, and fiber extrusion and yarn mills, moving steadily toward broad vertical integration of inputs and processes. The firm’s expensive forays into retail stores ended, and Shaw concentrated on shifting its outside purchases of fiber to internal fiber production. Shaw polymerized, extruded, spun, twisted, and heat-set its own yarn and tufted, dyed, and finished the carpet. Several key acquisitions included the following:
• Amoco’s polypropylene operations, 1992. Polypropylene fiber, used mainly in berber-style residential products, reached a high-water mark of approximately 30 percent of the fiber usage in the carpet industry. By purchasing the Amoco plants in Andalusia, Alabama, and Bainbridge, Georgia, in 1992, Shaw became the world’s largest polypropylene carpet fiber producer, extruding all the fiber for its polypropylene carpets.
• Queen Carpet, 1998. Shaw purchased the fourth-largest manufacturer of carpets, Queen Carpet, which had \$800 million in sales the previous year.
• The Dixie Group, 2003. The Dixie Group, one of the nation’s biggest manufacturers of carpets for the mobile home and manufactured housing industry, sold six yarn-tufting, dyeing, finishing, and needlebond mills and distribution factories to Shaw for \$180 million in October 2003. Included in the sale was a carpet recycling facility.
• Honeywell Nylon 6 operations, 2005. Shortly after its acquisition of the BASF carpet nylon business, Honeywell decided to exit the Nylon 6 carpet fiber business by selling to Shaw its South Carolina fiber mills at Anderson, Columbia, and Clemson. This made Shaw the world’s largest producer of Nylon 6 carpet fiber. Honeywell’s 50 percent interest in the Evergreen postconsumer Nylon 6 depolymerization facility was included in the deal.
• Evergreen Nylon Recycling Facility, 2006. Within two months of closing the Honeywell acquisition, Shaw purchased the remaining 50 percent interest in the Augusta, Georgia, Evergreen caprolactam monomer recovery facility from Dutch State Mines (DSM). (Caprolactam was the monomer building block of Nylon 6.) This gave Shaw 100 percent ownership of the postconsumer Nylon 6 depolymerization facility, which had been closed since late 2001 due to low monomer prices. Shaw moved quickly to refurbish and restart the facility for production of thirty million pounds of caprolactam monomer by early 2007. This gave Shaw the only source of postconsumer Nylon 6 monomer, which could be continuously returned to carpet production.
Carpet Tile
For Shaw, the obvious place to start thinking about product redesign was at the top of the carpet hierarchy: carpet tile. Its high price in comparison with broadloom carpets, its thermoplastic polyvinyl chloride (PVC) plastisol backing, and its relative ease of recovery from commercial buildings where large volumes of product could be found made it the best hope for early success. That may have been the first and last point of agreement among fiber and carpet manufacturers as sustainability began to take on widely differing meanings. Given that lack of definition and standard measures of sustainability, marketing literature could be confusing for specifiers and end users looking to compare the environmental impacts of competitive carpet tiles.
Carpet tile as a product category bridged most commercial market segments (e.g., offices, hospitals, and universities). On the market for more than thirty years, it was introduced originally as a carpet innovation that enabled low-cost replacement of stained or damaged tiles, rotation of tiles in zones of high wear, and easy access to utility wiring beneath floors. Carpet tile’s higher cost, high mass and embodied energy, more stringent backing adhesion performance specifications compared with broadloom, and double-digit market growth rate made it a logical focus for exploring alternative tile system designs.
Carpet tile was composed of two main elements, the face fiber and the backing. The face was made from yarn spun from either Nylon 6 or Nylon 6.6 fiber, the only viable nylons in carpet use. US carpet tile was traditionally made with PVC plastisol backing systems, which provided the tile’s mechanical properties and its dimensional stability. PVC was under suspicion, however, due to the potential of the plasticizer to migrate from the material, potentially causing health problems and product failures. The vinyl chloride monomer in PVC was also a source of health concern for many. Most carpet tiles were made with a thin layer of fiberglass in the PVC backing to provide dimensional stability. These tiles ranged from eighteen inches to thirty-six inches square and required high dimensional stability to lie flat on the floor.
Backing provided functions that were subject to engineering specifications, such as compatibility with floor adhesives, dimensional stability, securing the face fibers in place, and more. Selecting backing materials and getting the chemistry and physical attributes right for the system’s performance took time and resources, and added cost. Since the mid-1980s, the backing problems associated with PVC had led several companies, including Milliken, to seek PVC-alternative backings. In 1997, Shaw asked the Dow Chemical Company to provide new metallocene polyolefin polymers to meet Shaw’s performance specifications for a thermoplastic extruded carpet tile backing. Shaw added a proprietary compounding process to complete the sustainable material design. Seeking every way possible to reduce materials use and remove hazardous inputs, yet maintain or improve product performance, Shaw made the following changes:
• Replacement of PVC and phthalate plasticizer with an inert and nonhazardous mix of polymers, ensuring material safety throughout the system (third-party tested for health and safety through the McDonough Braungart Design Chemistry toxicity protocol, which PVC cannot meet).
• Elimination of antimony trioxide flame retardant associated in research with harm to aquatic organisms (and replacement with benign aluminum trihydrate).
• Dramatic reduction of waste during the processing phases by immediate recovery and use of the technical nutrients resulting from on-site postindustrial recycling of backing waste. (The production waste goal is zero.)
• A life-cycle inventory and mass flow analysis that captures systems impacts and material efficiencies compared with PVC backing. (Carpet tile manufacturing energy was slightly lower for polyolefin, but polyolefin supply chain–embodied energy was more than 60 percent lower than PVC.)
• Efficiencies (energy and material reductions) in production, packaging, and distribution—40 percent lighter weight of EcoWorx tiles over PVC-backed carpet tiles yielded cost savings in transport and handling (installation and removal/demolition cost savings).
• Use of a minimum number of raw materials, none of which lose value, because all can be continuously disassembled and remanufactured through a process of grinding and airflow separation of fiber and backing, facilitating recycling of both major components.
• Use of a closed-loop, integrated, plantwide cooling water system providing chilled water for the extrusion process as well as the heating and cooling system (HVAC).
• Provision of a toll-free number on every EcoWorx tile for the buyer to contact Shaw for removal of the material for recycling at no cost to the consumer (supported by a written Shaw Environmental Guarantee).
Although Shaw had not yet begun to get carpet tiles back for recycling because of the minimum ten years of useful life, models assessing comparative costs of the conventional feedstock versus the new system indicated the recycled components would be less costly to process than virgin materials.
EcoWorx Innovation
The EcoWorx system developed by Shaw Industries offered a way to analyze and refine the C2C design of a carpet tile system without regard to technology constraints of the past. The Twelve Principles of Green Chemistry and Green Engineering and C2C provided a detailed framework in which to evaluate a new technology for engineering a successful carpet tile production, use, and recovery system. The EcoWorx system also utilized Shaw’s EcoSolution Q Nylon 6 premium-branded fiber system, which was designed to use recycled Nylon 6 and in 2006 embodied 25 percent postindustrial recycled content in its makeup from blending and processing Nylon 6 fiber waste.
The EcoSolution Q Nylon 6 branded fiber system could be recycled as a technical nutrient through a reciprocal recovery agreement with Honeywell’s Arnprior depolymerization facility in Canada without sacrificing performance or quality or increasing cost. But Shaw’s original intention to take the Nylon 6 waste stream through the Evergreen Nylon Recycling Facility at Augusta, Georgia, was made possible with Shaw’s purchase of the Honeywell/DSM joint venture. The depolymerization process was restarted in February 2007. That allowed Shaw’s carpet tile products to make a cradle-to-cradle return to manufacturing, with nylon fiber from tile made into more nylon fiber and backing returned to backing.
Shaw’s objective was to create technology for an infinitely recyclable carpet tile, one that could be entirely recycled with no loss in quality from one life cycle to the next. The notion of closed-cycle carpet tiles forced the complex issue of compatibility between face fiber (the soft side on which people walk) and the backing. As for which face fiber to use, current technology allowed only Nylon 6 fiber to be reprocessed. The Nylon 6 material retained its flexibility and structure through multiple reprocessing cycles by disassembling the Nylon 6 molecules with heat and pressure to yield the monomer building block, caprolactam. This recycled monomer was identical in chemical makeup to virgin caprolactam. In contrast, Nylon 6.6 could not be economically depolymerized due to its molecular structure. Nylon 6.6 incorporated two monomer building blocks resulting in greater disassembly cost and complexity.
In 1997 Bradfield and Shaw chemist Von Moody discussed a particular method of processing polyolefin resins that produced flexible, recyclable polymers. Polyolefins were an intriguing material for Shaw to explore as carpet backing, given the company’s purchase of Amoco polypropylene (a type of polyolefin) extrusion facilities. After nearly \$1 million invested in research and development and a pilot backing line, the tests suggested that polyolefins could be melted and separated from Nylon 6 and therefore successfully recycled into like-new materials. Shaw created the pilot backing line with the intention of “fast-prototyping” the polyolefin backing by modeling the performance attributes of Shaw’s PVC backings. This prototyping risk might easily have failed but was instead the start of EcoWorx.
Shaw first introduced EcoWorx commercially in 1999. As a polyolefin-backed carpet tile, EcoWorx offered an alternative to the industry standard PVC backing at comparable cost, 40 percent less weight, and equal or improved effectiveness across all performance categories. EcoWorx earned the 1999 Best of Neocon Gold Award at the prestigious and largest annual interior furnishings and systems show in the United States. In 2002 the company’s EcoWorx tile called “Dressed to Kill” won the carpet tile Neocon Gold Award for design, effectively mainstreaming the new material. By 2002, Shaw had announced EcoWorx as the standard backing for all its new carpet tile introductions. Indeed, customers preferred the new product; consequently, by 2004 EcoWorx accounted for 80 percent of carpet tile sold by Shaw—faster growth than anticipated. At the end of 2004, Shaw left PVC in favor of the EcoWorx backing, accomplishing a complete change in backing technology in a brief four years.
EcoWorx as a system of materials and processes proved significantly more efficient. The backing was dramatically lighter than that of PVC-backed tiles. The EcoWorx process, which used electric thermoplastic extrusion rather than a traditional gas-fired or forced-air oven, was more energy-efficient. The process combined an ethylene polymer base resin (developed by Dow Chemical) with high-density polyethylene (HDPE), fly ash for bulk (instead of the virgin calcium carbonate traditionally used), oil that improved the product’s compatibility with the floor glue, antimicrobial properties, and black pigment in a proven nontoxic construction. This compound was applied to the carpet backs using a low-odor adhesive to maintain high indoor air quality standards. The backing material was combined with a nonwoven fiberglass mat for stability. Shaw’s agreement with customers at the point of sale was that Shaw would pay to have the carpet returned to it. Back in its plant Shaw would shred the carpet and separate the backing stream from the fiber stream. The “infinitely recyclable” duo of Shaw’s Nylon 6 fibers (marketed as EcoSolution Q) and EcoWorx backing received acclaim throughout the industry. Shaw’s competitive cost and exceptional performance compared with traditional products allowed it to step beyond the limits of the “green” niche market. Especially important, Shaw’s research showed that the cost of collection, transportation, elutriation (the process of shredding returned tiles and purifying the shreds by washing, straining, or separating by weight), and return to the respective nylon and EcoWorx manufacturing processes was less than the cost of using virgin raw materials. Shaw tripled the production capacity in 2000, and by the end of 2002, shipments of EcoWorx tiles exceeded those of PVC-backed styles. (Steve Bradshaw [Shaw Industries], in discussion with author, March 2005.)
Shaw continued to expand its collection and recycling capacity in preparation for 2009, when the first round of EcoWorx carpet that was released in 1999 would reach the end of its first life cycle. It appeared that Shaw would be the first to close the industrial loop in the carpet tile industry.
The Recycling Challenge: Fits and Starts
Recycling carpet was a difficult endeavor because carpet was a complex composite of face fibers, glues, fillers, stabilizers, and backings, each with varying capacity to be melted and reused. Approximately 70 percent of the face fiber used in carpets was made of either Nylon 6 or Nylon 6.6, with each of the two types holding an equal share of the nylon carpet fiber market. Neither fiber had the production capacity to serve the entire carpet industry.
Both nylons made excellent carpet. Although recovered Nylon 6.6 could be recycled into other materials (not carpeting), such as car parts and highway guard rails, the economic incentives for companies were low, and many people argued that “downcycling” in this way only postponed discarding the product in a landfill by one life cycle.
The development of technology for recycling Nylon 6 fibers into new carpet face fiber represented a major shift. Honeywell International Inc., a major supplier of the Nylon 6 fiber used by carpet manufacturers, was so confident about the market potential for recycled Nylon 6 fiber that it developed the \$80 million Evergreen Nylon Recycling Facility in Augusta, Georgia, in 1999. Unfortunately, the cost of recycled caprolactam was not competitive with virgin caprolactam (used in making Nylon 6) at the time, and the plant closed in 2001.Katherine Salant, “Carpet Industry Makes Strides in Reducing Footprint, but Path Includes Several Obstacles,” Washington Post, January 31, 2004.
The Honeywell Evergreen Nylon 6 depolymerization unit was restarted in early 2007, a restart made possible by Shaw’s 2006 purchase of Honeywell’s carpet fiber facilities. After acquiring Honeywell’s interest in Evergreen, Shaw negotiated for the DSM portion of the joint venture, giving it 100 percent ownership. In February 2007, Shaw reopened the Evergreen facility to produce caprolactam for Shaw Nylon 6 polymerization operations. In 2007, Shaw owned and operated the only commercially scaled postconsumer Nylon 6 monomer recycling facility in the world. Invista and Solutia, the only producers of Nylon 6.6, had a long history of technical development and response to competitive challenges. Promising work was under way in dissolution technologies that would allow postconsumer Nylon 6.6 to be recycled in an economical manner, restoring the uneasy balance between the two nylon types on the environmental front. Nylon 6.6 was here to stay, and industry observers said large-scale recycling of Nylon 6.6 was a matter of when, not if, the process would be perfected. However, in 2007 no hint of plans for a Nylon 6.6 recycling facility had yet surfaced.
Environmental and Health Concerns Associated with Carpeting
After World War II, the design and manufacture of products from man-made and naturally occurring chemicals provided a wide range of inexpensive, convenient, and dependable consumer goods on which an increasing number of people relied worldwide. Behind the valuable medicines, plastics, fuels, fertilizers, and fabrics lay new chemicals and processes that were not time tested but appeared to have superior performance relative to prewar materials. Most of the polymer building blocks were developed by chemists between 1950 and 2000, as both a result and a driver of the post–World War II economic boom.
By the 1990s the growing rate of carpet usage had led to serious concern over waste disposal; 95 percent of carpet ended up in landfills. In 2001, this waste stream was reported at 4.6 billion pounds in the United States.Carpet America Recovery Effort, “Memorandum of Understanding for Carpet Stewardship (MOU),” accessed January 31, 2011, www.carpetrecovery.org/mou.php#goals. Growing water quality, cost, and land-use issues related to carpet disposal generated significant pressure from government and commercial buyers for the development of carpet recycling technology. In January 2002, carpet and fiber manufacturers signed the National Carpet Recycling Agreement together with the Carpet and Rug Institute (the industry trade association), state governments, nongovernmental organizations (NGOs), and the EPA. This voluntary agreement established a ten-year schedule to increase the levels of recycling and reuse of postconsumer carpet and reduce the amount of waste carpet going to landfills. The agreement set a national goal of diverting 40 percent of end-of-life carpet from landfill disposal by 2012.
One result of the national agreement was the 2002 creation of the Carpet America Recovery Effort, a partnership of industry, government, and NGOs designed to enhance the collection infrastructure for postconsumer carpet and report on progress in the carpet industry toward meeting the national goals defined in the National Carpet Recycling Agreement.Carpet America Recovery Effort, “Memorandum of Understanding for Carpet Stewardship (MOU),” accessed January 31, 2011, www.carpetrecovery.org/mou.php#goals. In the late 1990s Presidential Executive Order 13101, a purchasing guide, was fueling demand for “environmentally preferable products” by government and by purchasers that received federal funds. This program introduced the idea of multiple-environmental-impact purchasing evaluations as a replacement for the outdated practice of relying solely on recycled content as the measure of product sustainability.
However, the problems with carpeting would not be addressed so easily. As monitoring equipment capabilities advanced between 1990 and 2005, new health and ecological impact hazards associated with certain widely used chemicals were identified. “Environment” was a topic that historically related to on-site toxins and compliance activity, with “health” referring to effects that surfaced after the product left the company; both concerns were relegated to the environment, health, and safety office inside a corporation. But scientists, design engineers, and increasingly middle and senior management needed to incorporate a broader understanding of such concerns into the ways products were designed and made. This was particularly true in the construction and home furnishing sectors, where greater use of chemicals combined with less than adequate ventilation and more architecturally tight building designs to create health problems.
As far back as 1987, the US Consumer Product Safety Commission, the federal agency that monitors commercial product safety, received more than 130 complaints about flu and allergy symptoms and eye and throat irritations that began directly following the installation of new carpet. Although that was a small number, such data often represented the tip of a health-problem iceberg. Over the next few years, air quality research led to the well-publicized concept of “sick building syndrome”—a condition in which occupants experienced acute illness and discomfort linked to poor indoor air quality. Carpets were not the only culprits. Wall materials and wall coverings (paint and wallpaper) as well as various hardwood floor treatments also were implicated. To the industry’s dismay, the EPA listed “chemical contaminants from indoor sources, including adhesives [and] carpeting…that may emit volatile organic compounds (VOCs)” as contributors to sick building syndrome.American Lung Association, American Medical Association, US Consumer Product Safety Commission, and US Environmental Protection Agency, Indoor Air Pollution: An Introduction for Health Professionals, accessed January 26, 2011, http://www.epa.gov/iaq/pdfs/indoor_air_pollution.pdf. It was not the building itself that was sick but its occupants. At the time, the US Centers for Disease Control reported “body burdens” of chemicals in people’s bloodstreams from unidentified sources. Under increasing study were babies’ body burdens—the pollutants in infants’ blood and organ tissues—later known to result from placental cycling of blood, oxygen, and nutrients between mother and child.
Simultaneously, concern was mounting throughout the 1990s over PVC plastic that contained phthalate plasticizers. Phthalates were added to PVC during processing to make the resulting plastic soft and flexible; however, researchers discovered that phthalate molecules did not chemically bind to the PVC and therefore leached out of products. Though there was debate about the level of harm that leaching caused humans, reputable studies linked phthalates to reproductive and endocrine disorders in animals. Environmental health science reports and concerns over PVC plasticizers grew steadily between 1995 and 2005. California planned to add di-2-ethylhexyl phthalate (DEHP) to a list of chemicals known to cause birth defects or reproductive harm. The list, contained in Proposition 65, followed on the heels of warnings from the Food and Drug Administration, National Toxicology Program, and Health Canada that DEHP may cause birth defects and other reproductive harm. Furthermore, incineration of PVC released highly toxic organochlorine by-products, including microscopic dioxins, into the atmosphere, where they moved with regional weather patterns, returning to the lower atmosphere and eventually to earth through the hydrologic cycle. Laboratory and production-worker data had for many years linked inhaled dioxins to cancer, growth disruptions, and developmental problems in humans. By July 2005, links between commonly used chemicals, even in very low doses, and human health deficiencies were being discussed on the front page of the Wall Street Journal. Despite the evidence against PVC, the California Department of General Services (DGS) approved PVC carpet tile in its 2006 California Gold Carpet Standard and instituted a 10 percent postconsumer recycled content requirement for all state carpet purchases, which virtually guaranteed increased purchases of PVC carpets.
The DGS also refused to allow exemptions for non-PVC materials that had not been on the market long enough to be recovering adequate quantities of postconsumer material.
Many carpet manufacturers focused their early environmental efforts on reducing trim waste from industrial and installation processes (eco-efficiency). Trim waste cost the industry an estimated \$25 million per year in unused carpet production and disposal fees, but this represented only 2 percent of total carpet production and, though important, made a relatively small impact on the end-of-life waste volume issue. As efficiency strategies became more systems oriented, a competitive market grew for technology to recover and recycle postconsumer carpet.
Indeed, for many years real solutions to the problems of end-of-life recycling of carpet were lost in the clutter of the first and easiest step in environmental stewardship—reduction of materials, water, energy usage, and waste. Capabilities were developed—typically under a company’s environment, health, and safety office—that essentially absorbed a quality and cost-cutting issue under the compliance function. With respect to carpeting materials, efforts were concentrated on the 2 percent of all carpet materials that remained as scrap in the manufacturing plants. More than 98 percent of all materials entering the carpet manufacturing stream were shipped to the customer as finished carpet. Once used and in need of replacement, this postconsumer carpet traditionally ended its life in landfills.
Other environmental efforts in the carpet industry focused on converting or recycling products such as polyethylene terephthalate (PET) plastic bottles (waste streams from other industries) into carpet fiber, incorporating the recovered materials into new products. This effort was encouraged by the Comprehensive Procurement Guideline (CPG) program (RCRA 6002, 1998), which required federal agencies to purchase items containing recovered and reused postconsumer materials. Of the forty-nine items listed in the program, PET carpet face fiber, carpet backing, and carpet cushioning were included.US Environmental Protection Agency, “Indoor Air Quality: Indoor Air Facts No. 4 (Revised) Sick Building Syndrome,” last updated September 30, 2010, accessed January 26, 2011, http://www.epa.gov/iaq/pubs/sbs.html. EPA provided lists that gave priority to products containing a high percentage of postconsumer material. Shaw EcoWorx was not included on the vendor list because, although it was an innovative breakthrough designed for a 100 percent recovery rate, it had yet to complete its initial life cycle. The CPG also proposed designating nylon carpets (fiber and backing). However, because little postconsumer nylon was yet available in the market, that designation would have boosted federal purchases of PVC carpets at a time when non-PVC carpets were gaining market share but had not yet been in service long enough for postconsumer material to return and make CPG compliance possible.
In 2006, recycled plastic remained more costly than virgin fibers, which limited the carpet industry’s enthusiasm for this measure. But plastic came from oil, a feedstock source increasingly subject to price volatility and unstable supply. Crude oil prices rose from about \$25 per barrel in the 1990s to more than \$60 in 2006. In the face of this price uncertainty and consistently high oil prices, Shaw concentrated on and eventually achieved systems economics under which recovered EcoWorx materials came back as feedstock at a cost below that of virgin materials. Standards such as the CPG might have served as a disincentive to material innovation if first-generation products such as EcoWorx had been required to contain significant postconsumer recycled content to qualify. The irony was that EcoWorx had won an EPA-sponsored Presidential Green Chemistry Challenge Award in 2003 in the safer chemicals category, yet an EPA nylon carpet CPG designation would effectively prevent federal agencies from purchasing it. Shaw and others devoted significant resources over a four-year period to persuading the EPA to abandon the nylon carpet CPG designation in favor of a multiple-impact assessment of carpet.
Green Building Council and LEED
Steve Bradfield was an early supporter of the US Green Building Council’s (USGBC) Leadership in Energy and Environmental Design (LEED) program, which established standards for environmentally preferred building materials and construction. Bradfield had participated for several years in the architecture and building industries’ movement to reduce and eliminate problematic materials that were increasingly linked with respiratory, allergy, and other human health problems. In 2003, Bradfield talked about Shaw’s sustainability policy. (The following year, he would testify before Congress in support of green legislation.) Shaw’s policy, Bradfield explained, articulated the firm’s corporate strategy to move steadily toward a cradle-to-cradle and a solar-powered future.
The Green Chemistry Research and Development Act of 2004
House Committee on Science Hearing, March 17, 2004
Steve Bradfield, Vice President for Environmental Development, Shaw Industries, Representing the Carpet and Rug Institute
(Excerpt)
Imagine a future when no carpet goes to a landfill, but is separated into its constituent parts at the end of its useful life to be sustainably recycled over and over again. This is happening today with some carpet types, but not enough as yet to significantly divert the 4.5 billion pounds of carpet that went to our nation’s landfills in 2003. Green chemistry can help to develop beneficial uses for the materials used to make carpet today and assure that steady progress is made toward sustainable materials that can go directly back into carpet production in the future.
Perhaps the most compelling reason to support green chemistry and the growth of sustainable materials and processes in carpet is jobs. Annual carpet production and consumption in the U.S. of \$12 billion is equal to the rest of world carpet production and consumption combined. Carpet jobs will stay in the U.S. if we can develop ways to keep postconsumer carpet materials in sustainable closed-loop recycling systems that reduce the need for virgin raw materials and lower the energy embodied in successive generations of carpet products. Why would any U.S. company choose to manufacture overseas if their valuable raw materials are being collected and recycled at lower cost, with no sacrifice of performance, from American homes and businesses in close proximity to the means of production?
The economic benefits of green chemistry are quantifiable in each of the examples given herein. As an industry, green chemistry has helped to reduce the water required for dyeing a square yard of carpet from 14.9 gallons in 1995 to 8.9 gallons in 2002. The energy required from thermal fuels to make a square yard of carpet has fallen from 14.5 million BTUs in 1995 to 10.3 million BTUs in 2002. Today the carpet industry has the same level of CO2 emissions it reported in 1990 yet it produces 40% more carpet.
Shaw’s experience with green chemistry is representative of the developments that are ongoing in the industry. By way of illustration, Shaw’s polyolefin carpet tile backing has fueled an average annual growth rate in carpet tile of almost 15% per year over the last four years. This growth provides 440 jobs in our Cartersville, Georgia, carpet tile facility and generates more than \$100 million in revenue. It has reduced packaging costs by 70%, shipping costs by 20% and resulted in more than \$100,000 in annual postindustrial scrap recovery. The recovery of the postconsumer carpet tile will result in even more second-generation savings. Other manufacturers can share economic success stories that are just as compelling.
In 1950 the carpet industry shipped 97 million square yards of carpet. In 2001 we shipped 1.879 billion square yards. Between 1965 and 2001 carpet prices increased by 90.4%, while over the same period automobile prices increased 180.4% and the combined total of all commodities increased 315.4%. More than 80% of the U.S. carpet market is supplied by mills located within a 65-mile radius of Dalton, Georgia. Carpet is important to the economy of Georgia and the United States. Green chemistry is an important tool to facilitate its continued growth.
In conclusion, we support the adoption of the Green Chemistry Research and Development Act of 2004 with the suggestions that Congress encourage a cooperative effort among government, academia, and business; that Congress seek additional incentives to reward those companies that commercialize green chemistry developments; that obstacles to the green chemistry discovery process be removed from current federal environmental programs; and that adoption of green chemistry in the broader context of sustainable product development should become a primary instrument of pollution prevention policy in the United States with the additional goals of job creation and economic improvement.Testimony accessed March 7, 2011, http://www.gpo.gov/fdsys/pkg/CHRG-108hhrg92512/html/CHRG-108hhrg92512.htm.
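The efficiency figures cited in the testimony can be restated as percentage reductions. The short calculation below (a classroom sketch, not part of the testimony itself) shows the scale of the per-square-yard improvements, including the implied per-unit CO2 reduction when total emissions stay flat while output grows 40%:

```python
# Percentage reductions implied by the figures in Bradfield's testimony.
# Water and energy are per square yard of carpet (1995 vs. 2002); CO2 is
# total emissions held at 1990 levels while output grew 40%.

def pct_reduction(before, after):
    """Return the percentage drop from `before` to `after`."""
    return 100 * (before - after) / before

water = pct_reduction(14.9, 8.9)      # gallons of dye water per sq yd
energy = pct_reduction(14.5, 10.3)    # million BTUs of thermal fuel per sq yd
co2_per_yard = 100 * (1 - 1 / 1.4)    # same total CO2, 40% more carpet

print(f"Water use down {water:.1f}%")                   # about 40.3%
print(f"Thermal energy down {energy:.1f}%")             # about 29.0%
print(f"CO2 per square yard down {co2_per_yard:.1f}%")  # about 28.6%
```

In other words, the industry figures amount to roughly a 40% cut in dye water, a 29% cut in thermal energy, and a 29% cut in CO2 emissions per square yard produced.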
In 2006 LEED requirements did not factor in EcoWorx’s recovery and reuse benefits in awarding points to companies looking to achieve higher LEED rankings. But the USGBC had begun a dialogue on how to incorporate multiple metrics, including cradle-to-cradle design points, into the 2007 version of LEED. At the same time, many corporations that were committed to sustainability practices, or at least wanted to gain positive publicity for their efforts, were setting LEED certification levels among their goals for their headquarters buildings.
Environmental pressure had been mounting for several years in the carpet industry. Said William McDonough, architect, environmentalist, and promoter of the cradle-to-cradle design approach with Michael Braungart, “The carpet industry is the battlefield where the war for sustainability is being waged.”Alia Anderson, Andrea Larson, and Karen O’Brien, Shaw Industries: Sustainable Business, Entrepreneurial Innovation, and Green Chemistry, UVA-ENT-0087 (Charlottesville: Darden Business Publishing, University of Virginia, 2006). Indeed, so many carpet companies seemed to be actively marketing carpet sustainability in comparison with other industries that the question of “Why carpet?” was often asked. With Presidential Executive Order 13101, the purchasing mandate, and others fueling the demand for “environmentally preferable products” in government, a new breed of environmentalist had appeared by the late 1990s, ready to constructively engage with industry but still offering conflicting views of what constituted sustainable design in the absence of consensus on a national standard.
The first LEED Green Building Rating System was completed in 2000 and grew quickly into an internationally recognized certification program for environmentally sensitive design. Recognizing that buildings account for 30 percent of raw materials use and 30 percent of waste output (136 million tons annually) in the United States,US Green Building Council, An Introduction to the US Green Building Council, accessed January 31, 2011, www.usgbc.org/Docs/About/usgbc_intro.ppt. the USGBC, an organization affiliated with the American Institute of Architects, gathered representatives from all sectors of the building industry to develop this voluntary and consensus-based rating system. By adhering to an extensive point system with categories such as Indoor Environmental Quality, Materials and Resources, and Water Efficiency, both new buildings and interior renovations could become LEED certified at different levels of excellence (Certified, Silver, Gold, and Platinum). Carpet selection became an integral element of LEED certification through materials requirements such as “Recycled Content,” “Low-Emitting Materials–Flooring Systems,” and “Low-Emitting Materials–Adhesives and Sealants.”See, for instance, US Green Building Council, “LEED 2009,” accessed January 31, 2011, http://www.usgbc.org//ShowFile.aspx?DocumentID=5719. But LEED offered few incentives for other important environmental impact reductions.
Between 2000 and 2004, the LEED Green Building Rating System gathered more than 3,500 member organizations and certified projects in 49 states and 11 countries.US Green Building Council, An Introduction to the US Green Building Council, accessed January 31, 2011, www.usgbc.org/Docs/About/usgbc_intro.ppt. LEED’s continued influence in the building industry was secured by policies in the US Department of the Interior, EPA, General Services Administration, Department of State, Air Force, Army, and Navy, mandating differing levels of LEED standards for future buildings. By 2005 California, Maine, Maryland, New Jersey, New York, Oregon, and many cities across the United States also had legislated LEED standards for construction and procurement at various levels, either through mandates on capital developments or tax credits to developers who met the requirements.US Green Building Council, LEED Initiatives in Government by Type, May 2007, accessed January 31, 2011, https://www.usgbc.org/ShowFile.aspx?DocumentID=1741.
Certifiers
Third-party organizations, both for profit and not for profit, were proliferating in 2005–6 in a bid to gather the critical mass necessary to be recognized as the certifier of choice for the many different aspects of the environmental patchwork of metrics defining that elusive goal called sustainability. Even self-certification programs from various industry associations attempted to build consensus. Recycled content seemed to be the path of least resistance, but life-cycle analysis, embodied energy studies, and variations on the complex theme of “closing the loop” proliferated and jockeyed for position in the new “industry” of environmental and health performance. Unfortunately, an inevitable “unintended consequence” of these efforts was confusion and controversy among stakeholders.
What Next?
As Steve Bradfield reflected on challenges in the near future, he said he hoped the innovations required to implement the EcoWorx strategy would continue to draw on the extensive capabilities of Shaw and its partner firms. Certainly whatever transpired had to be consistent with Shaw’s Environmental Vision Statement. Questions went through his mind. Did the company fully anticipate the requirements of reverse logistics systems design? Had they identified the probable challenges and bottlenecks? Was the Shaw culture changing quickly enough to execute the strategy successfully? Would the company have sufficient capacity for the disassembly stage? The capacity of the elutriation system initially would allow Shaw to recycle 1.8 million square yards of carpet per year. This equipment enabled separation of the backing and fiber in a single pass and was expected to meet the anticipated future growth capacity requirement of the returned postconsumer material over the next five to ten years. But would the economics of the system meet the organization’s expectations?
Shaw’s Environmental Vision Statement
Environmental sustainability is our destination and cradle-to-cradle is our path. Our entire corporation and all stakeholders will value and share this vision.
Through eco-effective technology we will continuously redesign our products, our processes, and our corporation.
We will take responsibility for all that we do and strive to return our products to technical nutrient cycles that virtually eliminate the concept of waste.
We will plan for generations, while accepting the urgency of the present. We are committed to the communities where we live and work. Our resources, health, and diversity will not be compromised.
We look forward to a solar-powered future utilizing the current solar income of the earth, anticipating declining solar costs and rising fossil fuel costs as technology and resource depletion accelerate.
We will lead our industry in developing and delivering profitable cradle-to-cradle solutions to our free-market economy. Economy, equity, and ecology will be continually optimized.
Honesty, integrity, and hard work remain our core values. We will continue to deliver unsurpassed safety, quality, design, performance, and value to our customers.Shaw Industries, “Shaw Industries Announces New Environmental Policy to Drive Manufacturing Processes,” press release, December 4, 2003, accessed March 7, 2011, www.shawcontractgroup.com/Contentpress_releases./pr_031204_Environmental.pdf.
In 2007, Bradfield knew that EcoWorx had become a major driver in the phenomenal growth of Shaw’s carpet tile business. In late 2006, the company had introduced EcoWorx broadloom, a twelve-foot roll version of the EcoWorx technology that brought cradle-to-cradle design to the staid broadloom business. Bradfield’s recent promotion to corporate director of environmental affairs for the \$5.8 billion Shaw organization signaled the adoption of cradle-to-cradle goals across every division and functional area—a major achievement given the humble beginnings of what had started out as a commercial carpet initiative. A new Shaw environmental website, http://www.shawgreenedge.com, offered a single destination for anyone interested in the initiatives driving Shaw’s sustainability efforts.
KEY TAKEAWAYS
• There are many drivers behind sustainability innovation changes in a large flooring firm.
• Cradle-to-cradle thinking can inform redesign and manufacturing of new flooring.
• Sustainability practices provide financial and strategic advantages.
EXERCISES
1. Create a graphic representation of the reverse supply chain. What challenges do you think Shaw will have going forward?
2. Describe what you see as innovative in the case and list the factors you believe were drivers of that innovation.
3. Analyze and assess Shaw’s EcoWorx story as a strategy. What are the arguments in favor of it? Against it?
4. Explain the benefits to the firm of sustainable design using green chemistry principles and cradle-to-cradle thinking.
5. What, if any, accounting consideration must be given a product that is expected to return perpetually as a new raw material?
6. What use might the EcoWorx product, cradle-to-cradle, and green chemistry principles have to inform product and process design in other product markets? Bring an illustration to class to discuss.
Learning Objectives
1. Become familiar with some key innovations and entrepreneurial opportunities in the biomaterials arena.
2. Analyze the possibilities of biomaterials as an alternative feedstock platform to fossil fuels.
3. Examine the barriers and opportunities in producing biomass feedstock through a venture inside a large corporation.
4. Compare the innovative venture inside a big firm with a subsequent stand-alone start-up.
In the NatureWorksAndrea Larson, Alia Anderson, and Karen O’Brien, Natureworks: Green Chemistry’s Contribution to Biotechnology Innovation, Commercialization, and Strategic Positioning, UVA-ENT-0089, 2006 (Charlottesville: Darden Business Publishing, University of Virginia, 2006). All quotations and references are from this source unless otherwise indicated. case, students examine challenges of commercializing polylactic acid (PLA), a disruptive technology innovation that substitutes corn-based biomass for oil-based feedstock. NatureWorks was the first US firm to create—and bring to commercial scale—biomass feedstock for a wide variety of applications including plastic components, thin film, and fabrics.
In 2002 a ten-year joint venture between US agricultural giant Cargill Inc. and Dow Chemical received the prestigious Presidential Green Chemistry Challenge Award from the American Chemical Society’s (ACS) Green Chemistry Institute for its development of the first synthetic polymer class to be produced from renewable resources, specifically from corn grown in the American Midwest. The product was biomass material and held the potential to substitute a renewable feedstock (raw material) for petroleum-based polymers. Presented at the Green Chemistry and Engineering conference and awards ceremony in Washington, DC, attended by the president of the US National Academy of Sciences, the White House science advisor, and other dignitaries from the National Academies and the American Chemical Society, the award recognized the venture’s innovative direction. In January 2005, Cargill chose to acquire Dow’s share of the venture. Now the fledgling company had to learn to fly.
NatureWorks’ bio-based plastic resins were named and trademarked NatureWorks PLA for the polylactic acid that composed the base plant sugars. In addition to replacing petroleum as the material feedstock, PLA resins had the added benefit of being compostable (safely biodegraded) or even infinitely recyclable, which meant they could be reprocessed into the same product again and again. That feature provided a distinct environmental advantage over recycling, or “downcycling,” postconsumer or postindustrial materials into lower-quality products, which merely slowed material flow to landfills by one or two product life cycles. Additional life-cycle environmental and health benefits had been identified by a thorough life-cycle analysis (LCA) from corn to pellets. PLA resins, virgin or postconsumer, could then be processed into a variety of end uses.
By early 2005, CEO Kathleen Bader and Chief Technical Officer Pat Gruber were wrestling with a number of questions. NatureWorks’ challenges were both operational and strategic:
• How to take the successful product to high-volume production
• How to market the unique resin in a mature plastics market
With Cargill’s January 2005 decision to acquire Dow’s share of the venture, there were also questions about the structure of NatureWorks going forward.
Kathleen Bader had been at Dow for thirty years before joining NatureWorks in 2004. She had managed Dow’s Styrenics and Engineered Products, a \$4 billion business, between 1999 and 2003. She led Dow’s Six Sigma program implementation. As a NatureWorks board member who had long championed the technology, Bader had confidence in its future and supported it from her budget at Dow. She was a logical fit at the helm. One of her first decisions involved selecting a retail alliance partner and narrowing a list of prospective customers. Limited resources constrained her choices.
There were other issues, including application challenges when converting PLA resins to different plastic forms, the controversy over genetically modified organisms (GMOs), and appropriate market positioning for a “sustainable” product, still a vague concept to many. Many executives in the company knew all too well that positioning their new product would take far more than simply getting the technology right.
In spring 2005 NatureWorks employed 230 people, split almost equally among headquarters (labs and management offices), the plant, and the international division. International consisted primarily of the European Union; the Hong Kong representative who had worked with the Japanese market had been brought back to headquarters in early 2004. As a joint venture the enterprise had consumed close to \$750 million in capital and was not yet profitable, but it held the promise of tremendous growth that could transform a wide range of markets worldwide. In 2005 NatureWorks was still the only company in the world capable of large-scale production of bio-based resins that exhibited standard performance traits such as durability, flexibility, and strength—all at a competitive market price.
The Plastics Industry
The plastics industry was the fourth-largest manufacturing segment in the United States behind motor vehicles, electronics, and petroleum refining. In 2001, the United States produced 101.1 billion pounds of resins from oil and shipped \$45.5 billion in plastic products.Encyclopedia of Business, 2nd ed., s.v. “SIC 2821: Plastic Materials and Resins,” accessed January 31, 2011, http://www.referenceforbusiness.com/industries/Chemicals-Allied/Plastic-Materials-Resins.html. Both the oil and chemical industries were mature and relied on commodities sold on thin margins. The combined efforts of a large-scale chemical company in Dow and an agricultural processing giant in Cargill suggested Cargill Dow—now NatureWorks—was in some ways well suited for the mammoth task of challenging oil feedstock. However, could the small company grow beyond the market share that usually limited environmental products, considered somewhere between 2 and 5 percent of the market? And for that matter, should PLA be considered an “environmental product”?
Wave of Change
The rising wave of interest and activity in biomaterials had pushed industrial biotechnology into the economic mainstream by 2005. Projects to convert renewable resources into industrial chemicals proliferated, funded by government, corporate, and private capital. Major agricultural companies and chemical giants had teamed up to produce carpeting, paint, inks, solvents, automobile panels, and roofing material made from plants. Production of plant-derived fuels, such as ethanol and biodiesel, was growing. Advocates pointed to the benefits: equally dependable, lower-cost, less polluting feedstocks; more environmentally friendly products and processes with fewer toxic by-products; reduced reliance on imported oil; and a smaller environmental footprint.
McKinsey & Company (Zurich) estimated the 5 percent market share represented by biotechnology products in 2004 could jump to 10–20 percent by 2010, with the biggest shift occurring in biotech processes to make bulk chemicals, polymers, and specialty chemicals. Developments in enzymatic biocatalysis were already allowing for the production of new materials with improved properties compared to existing products. Bioprocesses enabled production of existing chemicals at lower cost. The textiles, energy, chemical, and pharmaceuticals industries were all transforming in the face of biotechnology advances. Within this larger dynamic, PLA was just one of many “platform” materials available to be converted into a range of derivative products.
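A quick back-of-the-envelope check, purely illustrative and not from the McKinsey report, shows what the quoted projection implies: moving from a 5 percent share in 2004 to 10–20 percent by 2010 requires the biotech share to compound at roughly 12–26 percent per year.

```python
# Illustrative arithmetic only: implied compound annual growth of the
# biotech products' market share under the McKinsey projection cited above
# (5% in 2004 rising to 10-20% by 2010, i.e., over six years).

def implied_cagr(start_share: float, end_share: float, years: int) -> float:
    """Annual compound growth rate needed to move start_share to end_share."""
    return (end_share / start_share) ** (1 / years) - 1

low = implied_cagr(0.05, 0.10, 6)   # doubling of share over six years
high = implied_cagr(0.05, 0.20, 6)  # quadrupling of share over six years
print(f"Implied annual growth of share: {low:.1%} to {high:.1%}")
```

The calculation treats share growth in isolation; if the overall chemicals market were itself growing, absolute biotech sales would need to grow even faster.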
NatureWorks was contributing to creating, and being carried forward by, this wave of biotechnology innovation. Factors were converging to create new markets worldwide. According to Fortune magazine (July 2003), “Sales that large [\$280 billion by 2012] would displace a notable quantity of oil, freeing it up for other uses and helping keep prices down—though no one can yet estimate by how much. It would also shift the source of industrial chemicals from foreign countries to farm fields nearer the markets where the end products will be consumed. That would cut transportation costs and conceivably reduce dependence on foreign oil.”Stuart F. Brown, “Bioplastic Fantastic Bugs That Eat Sugar and Poop Polymers Could Transform Industry—and Cut Oil Use Too,” Fortune, July 21, 2003, accessed March 8, 2011, http://money.cnn.com/magazines/fortune/fortune_archive/2003/07/21/346098/index.htm.
Pat Gruber, chief technology officer for NatureWorks LLC, had known of biotechnology innovation’s potential since his graduate school days in biochemistry. Gruber’s interest in environmental issues had a long history, going back to high school, where he had enjoyed and shown an aptitude for biology and chemistry. He had always liked crossing between the systems perspective of biology and the molecular building-block orientation of chemistry.
In the same year that NatureWorks’ achievements were recognized with the Green Chemistry Challenge Award for innovation, the company brought online a plant with a capacity of 300 million pounds (140,000 metric tons) that promised to turn Gruber’s team’s breakthroughs into a viable and very large business. In 2003 the business went on to win Chemical Engineering’s prestigious Kirkpatrick Award for Chemical Engineering Achievement for “bringing to market a technology that allows abundant, annually renewable resources to replace finite petroleum, to make consumer goods without sacrificing performance or price.”
NatureWorks Pre-2005: The Cargill Dow Joint Venture (CD)
Cargill, the largest privately held company in the United States, was also the largest agricultural processor in the country, with 2004 revenues of \$63 billion. The company served the food processing, food service, and retail food industries. The origins of NatureWorks went back to 1988, when Pat Gruber joined Cargill after graduate school. Sponsorship by Cargill’s corn milling division launched what was then a small research project. During the 1990s Gruber and his team had acquired considerable biomaterials and bioprocessing expertise, but Cargill sought a polymer partner that would bring plastic processing and application knowledge as well as market know-how. Cargill processed and sold high-volume meats, corn, and other agricultural products to large customers such as Walmart and McDonald’s but knew little about resin converters, thermomolding lines, or polymer science applications, traditional domains of the plastics industry. As a Cargill employee summed it up in the early 1990s, “We know food, we don’t know chemicals.” On the chemicals side, experts in the chemical industry in the early 1990s generally did not believe it was possible to create a carbohydrate feedstock (plant-based starches and sugars) that would match the performance of, and compete on cost with, petroleum-based plastics.
Biography
Patrick Gruber, PhD
Vice President and Chief Technology Officer
Cargill Dow LLC
At the time this case was written, Patrick Gruber was vice president and chief technology officer of Cargill Dow LLC, which he cofounded in 1997. A decade earlier, Gruber had become interested in the use of renewable resources to develop and produce industrial chemicals. Products derived from industrial biotechnology, he argued, could equal or surpass petrochemical products and reduce our environmental footprint on a global scale.
The holder of forty-eight US patents, Gruber was highly recognized for his contributions to both sustainability and business, to science and commerce. His achievements included the 2002 Presidential Green Chemistry Challenge Award, the 2001 Discover Award for Environmental Innovation from the Christopher Columbus Fellowship Foundation, the 2003 Lee W. Rivers Innovation Award from the Commercial Market Development Association, the 2002 Julius Stieglitz Award presented by the ACS and the University of Chicago, the 2003 Society of Plastics Engineers’ Emerging Technology Award, and Chemical Engineering’s Kirkpatrick Award. He also received Popular Mechanics’ Design and Engineering Award, Industry Week’s Technology of the Year Award, Finance and Commerce’s Innovator of the Year Award, the US Department of Energy OIT Technology of the Year Award, Frost and Sullivan’s Technology of the Year Award, and the Industrial Energy Technology Conference Energy Award.
In addition to a BS in chemistry and biology and a PhD in chemistry from the University of Minnesota, he earned an MBA from the University of Minnesota’s Carlson School of Management.
Gruber held a number of positions at Cargill Inc. before cofounding Cargill Dow, including the bioproducts area’s technology director (1995 to 1998) and bioscience technical director (1998 through 1999). Gruber headed the company’s renewable bioplastics project in 1988, during which time he and his team developed the lactic acid polymer now known as NatureWorks PLA and Ingeo fibers. It was this invention that led to the formation of Cargill Dow LLC.
Ultimately Cargill found an interested partner in Dow Chemical, a \$40 billion commodity chemical and plastics manufacturer. Dow was active in oil-based raw materials, plastics, additives, processing aids, and solvents applied across multiple industries. In 2004, Dow’s commitment to its oil-based plastics businesses was expressed in plans to site large-scale plastics feedstock production facilities next to oil wells in the Arabian Peninsula. Dow also had major commitments to polypropylene (made from natural gas released in oil drilling) and polyethylene. Although Dow had considerable plastics science expertise, at the time Dow did not make polyethylene terephthalate (PET), the material PLA most likely would replace.
In 1995 the working partnership officially became a joint venture, a fifty-fifty undertaking between the two parent companies, Cargill and Dow. Though small, the enterprise was monitored closely because costs would show in red on the budgets of units within both companies. The initial \$100 million investment carried with it the assumption that Cargill, primarily an agricultural commodity trading company, would contribute its corn and biological process expertise, while Dow brought polymer science, process control methods, and plastic supply-chain marketing knowledge from its commodity plastic polymer businesses. Dow also had a large biotech effort in its pharmaceutical intermediates business that could provide complementary knowledge for chemical production. The agreement between the two industry giants seemed ideal. Furthermore, the structure of the plastics industry, dominated by large companies generating high-volume, low-margin mature commodity plastics through established supply chains, virtually ensured that small players with limited capital would not last.
Board communications issues and the turnover of three CEOs, as well as four marketing VPs, between 1997 and 2004 had reduced the joint venture’s effectiveness over its short life. Some thought the parent companies did not focus on the details of the business’s unique challenges. Others believed the joint venture had served its useful life and a new ownership structure was necessary to move forward.
The assumption by many outside the company was that PLA would be adopted quickly. However, the complexity of differentiating the corn-based plastic pellets that left the Nebraska PLA plant, selling sufficient volume to downstream buyers to raise plant capacity utilization above 70 percent, and selling the plastic as part of a buyer’s sustainability strategy proved to be a tough challenge.
By 2005, when Cargill Dow became NatureWorks, it could claim more than fifteen years’ experience in biopolymer technology and applications. However, some believed that Cargill still viewed Dow as the polymer company that provided the “technology.” Managing under two different parent organizations created its own set of issues. Two accounting books had to be kept. Fiscal calendars and IT software systems were different. Dow required its process methods and proprietary software be purchased and incorporated by the joint venture. The plant was located on Cargill’s property, thus Cargill was paid by NatureWorks for site management services in addition to the corn raw material, and the business tapped into Cargill’s steam and electric infrastructure.
A member of the top management team commented in 2004 that until recently there had been no meaningful discussion between Cargill and Dow about what each of the investing parent companies wanted from its investment. Complicating matters was Cargill’s historical unwillingness to discuss GMOs and its general reluctance to engage in public dialogue regarding environmental concerns, and sustainability in particular. Dow, on the other hand, understood the growing interest in the sustainability agenda and had experience, although not necessarily success, in dealing with environmental groups and the growing regulatory activity.
Making Plastic
The Cargill Dow undertaking was an industrial biotech project, as opposed to molecular or gene-focused biotechnology. It had evolved from the interest of Cargill’s corn-milling business director in finding new product opportunities for corn sugars. Among the key questions when the two companies were considering the project in the 1990s were the following:
• Was it possible to create a cost- and performance-comparable plastic product using corn sugar instead of petroleum as the primary feedstock?
• And was there a business in bioplastics?
PLA innovation held the potential to revolutionize the plastics and agricultural industries by offering benign bio-based polymers to substitute for conventional petroleum-based plastics. In those days, however, plastics industry experts repeatedly told Pat Gruber and his small team that they would never find a low-cost biological supply for lactic acid production. They were informed that polymers from that source could never work in the variety of applications they had in mind. Yet the team of scientists Pat Gruber formed around the PLA project kept at their work, believing the technology could be developed and that markets would favor environmentally preferable and renewable resource–based materials. Using Cargill’s corn-milling facility and a 34,000-ton-per-year prototype lactic acid pilot plant built in 1994, the small and expensive project moved determinedly forward.
PLA was not new. Wallace Carothers, the DuPont scientist who invented nylon, first discovered the lactic acid polymer in the 1920s, and DuPont research continued through the 1930s. Plant sugars were processed into polymers in small volumes in the laboratory, producing characteristics very similar to those of petroleum-based polymers, the traditional building blocks of commodity plastics. However, costs were orders of magnitude too high and the material’s technical performance was not acceptable for large-scale plastics and fibers applications. While research continued on PLA and polylactides, the DuPont-ConAgra joint venture “Ecochem” of the early 1990s ultimately failed. Subsequently, only small volumes of PLA plastic were produced for specialized applications in which the safe dissolution of the material was valued (implants and controlled drug release applications, for example). In the first decade of the twenty-first century, medical sutures made from PLA were sold by DuPont for \$1,000 per kilo. Cost and technology constraints had prohibited PLA production in large volumes or for alternative uses.
Conventional plastic is made by cracking petroleum through heating and pressure. Long chains of hydrocarbons are extracted and combined with various additives to produce polymers that can be shaped and molded. The polymer material, called resin, comes in the form of pellets, powder, or granules and is sold by the chemical manufacturer to a processor. The processor, also called a converter, blends resins and additives to produce a buyer’s desired product characteristics. For example, an automobile dashboard part needs to be flexible. The processor blends in plasticizer additives to make the resin more flexible and moldable. Plasticizers, often supplied by specialty chemical providers, are the most commonly used additives. Other additives include flame retardants, colorants, antioxidants, antifungal ingredients, impact modifiers (to increase materials’ resistance to stress), heat or light stabilizers (to resist ultraviolet rays), and lubricants. In addition to those additives, some plastics also include fillers such as glass or particulate materials. First-tier processing companies typically sold resins with specific qualities in the form of rolled sheets or pellets. Additional converters along the supply chain melted the sheets or resin pellets and converted them by processes such as injection molding (for storage tubs such as yogurt containers or waste bins), blow molding (for plastic drink bottles), and extrusion (for films).Encyclopedia of Business, 2nd ed., s.v. “SIC 2821: Plastic Materials and Resins,” accessed January 31, 2011, http://www.referenceforbusiness.com/industries/Chemicals-Allied/Plastic-Materials-Resins.html.
In contrast, NatureWorks’ process for creating a proprietary polylactide, trade named NatureWorks PLA (for plastics) and Ingeo (for fibers), was based on the fermentation, distillation, and polymerization of a simple plant sugar, corn dextrose. The process harvested the carbon stored in plant sugar and made a PLA polymer with characteristics similar to those of traditional thermoplastics. The production steps were as follows:
• Starch was separated from corn kernels.
• Enzymes converted starch to dextrose (a simple sugar).
• Bacterial culture fermented the dextrose into lactic acid in a biorefinery.
• A second plant used a solvent-free melt process to manufacture lactide polymers.
• Polymer emerged from the plant in the form of resin pellets.
• Pellets had the design flexibility to be made into fibers, coatings, films, foams, and molded containers.
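The production steps above can be sketched as a simple staged pipeline. This is a purely illustrative model of the corn-to-pellet sequence described in the case, not NatureWorks’ actual process software; all function and stage names are hypothetical.

```python
# Hypothetical sketch of the corn-to-PLA production sequence as a staged
# pipeline. Each stage maps one intermediate material to the next, mirroring
# the bulleted steps in the text. Names are invented for illustration.

from typing import Callable, List

def separate_starch(material: str) -> str:
    """Starch is separated from corn kernels."""
    return "starch" if material == "corn kernels" else material

def enzymatic_conversion(material: str) -> str:
    """Enzymes convert starch to dextrose (a simple sugar)."""
    return "dextrose" if material == "starch" else material

def ferment(material: str) -> str:
    """Bacterial culture ferments dextrose into lactic acid in a biorefinery."""
    return "lactic acid" if material == "dextrose" else material

def polymerize(material: str) -> str:
    """A solvent-free melt process manufactures lactide polymers."""
    return "lactide polymer" if material == "lactic acid" else material

def pelletize(material: str) -> str:
    """Polymer emerges from the plant as resin pellets for converters."""
    return "PLA resin pellets" if material == "lactide polymer" else material

PIPELINE: List[Callable[[str], str]] = [
    separate_starch, enzymatic_conversion, ferment, polymerize, pelletize,
]

def run_pipeline(feedstock: str) -> str:
    """Pass the feedstock through each production stage in order."""
    product = feedstock
    for stage in PIPELINE:
        product = stage(product)
    return product

print(run_pipeline("corn kernels"))  # PLA resin pellets
```

The pellets that leave the final stage are the same design-flexible resin the text describes, ready to be converted into fibers, coatings, films, foams, and molded containers.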
NatureWorks’ manufacturing sequence reduced consumption of fossil fuel by 30–50 percent compared with oil-based conventional plastic resins. PLA plastic waste safely composted in about forty-five days if kept moist and warm (above 140 degrees Fahrenheit) or, once used, could be burned like paper, producing few by-products. PLA offered a renewable resource replacement material for PET and polyester, both used widely in common products such as packaging and clothing.
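As a back-of-the-envelope illustration of the 30–50 percent fossil-fuel reduction cited above, the sketch below computes the implied range against a baseline. The 80 MJ/kg baseline figure is a hypothetical number chosen only to show the arithmetic; it is not from the case.

```python
# Illustrative only: what a 30-50% fossil-fuel reduction means against a
# HYPOTHETICAL baseline for conventional oil-based resin (80 MJ/kg is an
# invented figure for demonstration, not a number from the source).

BASELINE_MJ_PER_KG = 80.0  # hypothetical fossil energy per kg of conventional resin

def pla_energy_range(baseline: float,
                     low_cut: float = 0.30,
                     high_cut: float = 0.50) -> tuple:
    """Fossil energy per kg of PLA if it cuts the baseline by 30-50 percent."""
    return baseline * (1 - high_cut), baseline * (1 - low_cut)

lo, hi = pla_energy_range(BASELINE_MJ_PER_KG)
print(f"PLA fossil energy: {lo:.0f}-{hi:.0f} MJ/kg "
      f"vs {BASELINE_MJ_PER_KG:.0f} MJ/kg baseline")
```

Whatever the true baseline, the structure of the calculation is the same: the percentage reduction applies to the fossil inputs of the conventional resin PLA displaces.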
Field corn was the most abundant and cheapest source of fermentable sugar in the world, and the standard variety used by NatureWorks (yellow dent number 2) was commonly used to feed livestock.Erwin T. H. Vink, Karl R. Rábago, David A. Glassner, and Patrick R. Gruber, “Applications of Life Cycle Assessment to NatureWorks™ Polylactide (PLA) Production,” Polymer Degradation and Stability 80 (2003): 403–19, accessed April 19, 2011, http://www.natureworksllc.com/the-ingeo-journey/eco-profile-and-lca/~/media/the_ingeo_journey/ecoprofile_lca/ecoprofile/ntr_completelca_ecoprofile_1102_pdf. The corn was sent to a mill, where it was ground and processed to isolate the sugar molecules (dextrose). Dextrose was purchased from Cargill and fermented using a process similar to that used in beer and wine production. That fermentation yielded lactic acid. The lactic acid was processed, purified, melted, cooled, and chopped into pellets. It was then ready for sale and to be made by processing companies along the supply chain into cups, plates, take-home containers, polyester-like fabrics, or laptop computer covers. Once the product was used, it could be either composted (meaning it would biodegrade) or melted down and recycled into equal-quality products.Erwin T. H. Vink, Karl R. Rábago, David A. Glassner, and Patrick R. Gruber, “Applications of Life Cycle Assessment to NatureWorks™ Polylactide (PLA) Production,” Polymer Degradation and Stability 80 (2003): 403–19, accessed April 19, 2011, http://www.natureworksllc.com/the-ingeo-journey/eco-profile-and-lca/~/media/the_ingeo_journey/ecoprofile_lca/ecoprofile/ntr_completelca_ecoprofile_1102_pdf. Though NatureWorks had the technical capacity to combine postconsumer PLA products with virgin corn feedstock to make new products, large-scale collection required a reverse logistics system.
Bader and Gruber hoped that capability would someday exist, allowing them to close the loop of their industrial process and practice fully renewable, “cradle-to-cradle”Robert A. Frosch and Nicholas E. Gallopoulos, “Strategies for Manufacturing,” Scientific American 261, no. 3 (September 1989): 144–52; see also William McDonough and Stanley Braungart, Cradle to Cradle: Remaking the Way We Make Things (New York: North Point Press, 2002). manufacturing, a new model then gaining credence as a substitute for the linear, cradle-to-grave industrial process that had traditionally characterized Western industrial economies.
A key breakthrough dramatically reduced the cost of manufacturing the lactic acid used to make PLA polymers. A new fermentation and distillation process enabled cheaper purification, better optical composition control, and significant yield increases over existing practice, in which two-thirds of the material inputs to conventional PLA processing were lost to waste streams. The company’s patented new process permitted the inexpensive production of different PLA grades for multiple markets in a flexible manufacturing system within a single plant, while adhering to environmentally sound practices throughout.
Buyers
Typically buyers such as food service companies (Sysco, Guest Services), restaurant chains, and supermarkets needing hundreds of thousands of drinking cups would contract with cup producers that had relationships with materials converters, which in turn purchased either plastic resins or previously fabricated plastic sheets, foams, or coatings. Some supply chains were simple, with only three steps from NatureWorks feedstock resins to the ultimate user. Other supply chains could be much longer and more complex. Long-established and preferential working relationships with plastic resins producers were standard, as were multiyear contracts and lines optimized for conventional materials. But converters could be persuaded to source differently and to change molds and even line equipment if customers demanded. Fortunately PLA could be dropped into PET molds and lines with only minor changes. It was harder to drop PLA into polystyrene lines, and optimizing for PLA might mean cutting new tools, new mold designs, or even new lines, depending on the application. For example, PLA thickness might be less than that of the conventional plastic sheets it replaced, requiring retooling to thinner sheets. Conversion to PLA could mean significant additional throughput or faster line times (cost savings), but it might also require expenditures of time and money. That could yield financial gains to converters, but few were interested in making changes when profit margins already were slim.
The Market
NatureWorks brought its new product to market in the late 1990s and early 2000s at a time of economic recession, uncertain market dynamics, and rapidly intersecting health, environmental, national security, and energy independence concerns. While the economy seemed to settle by 2005, oil supplies and dependency concerns loomed large, with oil prices exceeding \$65 per barrel. Volatile prices and political instability in oil-producing countries argued for the United States and other oil-dependent economies to reduce their reliance on petroleum. European countries were moving more quickly than the United States, however.
Yet plastics were a visible reminder of societies’ heavy reliance on petroleum-based materials. The US food industry and demographic trends were creating rapidly growing markets for convenient prepared foods, and clear plastic packaging helped get customers’ attention at retail. Consumers had become increasingly well informed about chemicals in products and were becoming more aware that few had been tested for health impacts. Certain plastics known to leach contaminants even under normal use conditions were facing government and health nonprofits’ scrutiny. Health concerns, in particular those related to infants, children, and pregnant women, had put plastics under the microscope in the United States, but nowhere near the microscopic focus plastics had received in the European Union and Japan, where materials bans and regulatory frameworks received significant citizen support. Strong interest in green building in China and Taiwan along with strong government motivations and incentives to reduce oil dependency (true also for Europe) drove international market buyers to find alternative feedstock for plastic.
The volatility of petroleum prices between 1995 and 2005 wreaked havoc on the plastics industry. From 1998 to 2001, natural gas prices (which typically tracked oil prices) doubled, then quintupled, then returned to 1998 levels. The year 2003 was again a roller coaster of unpredictable fluctuations, causing a Huntsman Chemical Corp. official to lament, “The problem facing the polymers and petrochemicals industry in the United States is unprecedented. Rome is burning.”Robert A. Frosch and Nicholas E. Gallopoulos, “Strategies for Manufacturing,” Scientific American 261, no. 3 (September 1989): 144–52. Others remained confident that oil supplies, then central to plastics production, would be secured one way or another.
In contrast to petroleum-based plastics and fabrics, PLA, made from a renewable resource, offered performance, price, environmental compatibility, and high visibility, and therefore significant value to certain buyers and consumers for whom this configuration of product characteristics was important. But there was an information gap. Most late supply-chain buyers and individual consumers had to be reminded that plastics came from oil.
Competition
Several companies throughout the world had perfected and marketed corn-based plastic materials on a small scale. Japan was an early player in PLA technology. By the 1990s Shimadzu and Mitsui Toatsu in Japan were producing limited quantities of PLA and exploring commodity plastics applications. Their leadership reflected Japanese technological skills, greater public and government concern for environmental and related health issues, and greater waste disposal concerns given limited territory and a dense population. By 2004, Japanese companies were buying NatureWorks PLA and transporting the pellets to Chinese subsidiaries for research and production. Japan had already safely incinerated and composted PLA.
Larger companies were taking stabs at bio-based materials, but none was as far along or as targeted as NatureWorks. For example, Toyota had entered a joint venture with trading house Mitsui & Co. Ltd., which produced PLA from sweet potatoes. Toyota reportedly used PLA resins in its Prius hybrid car. Toyota announced plans in 2004 to construct a pilot plant to produce bioplastics made from vegetable matter. A new facility—to be built within an existing manufacturing plant in Japan—was expected to generate one thousand tons of the PLA plastics annually. Operations began in August 2004. Competitors and critics called these claims “greenwash”: they were skeptical of Toyota’s real intention to become a producer of its own plastic resins, a vertical integration step atypical of the auto company. But Toyota’s Biogreen Division recently had purchased a biopolymer feedstock company.
DuPont had a seven-year research program with biotechnology company Genencor using its enzyme to create a predominantly corn-based fiber called SoronaPeter Mapleston, “Automakers Work on Sustainable Platforms,” Modern Plastics 80, no. 3 (March 2003): 45, accessed January 31, 2011, http://plasticstoday.com/articles/automakers-work-sustainable-platforms. through a joint venture with Tate & Lyle. The Sorona polymer, expected to replace the company’s more expensive petrochemical-based product, was to emerge from a new, 100-million-pound-capacity plant in 2005. Sorona was only half bio-based, however, still relying on petroleum for half its feedstock. DuPont’s goal was to have 25 percent of its revenues derived from products made using renewable materials by 2015. Eastman Chemical Company’s new product called “Eastar Bio GP & Ultra Copolyester” was designed to biodegrade to biomass, water, and carbon dioxide in a commercial composting environment in 180 days.
Metabolix (Cambridge, Massachusetts) was awarded \$1.6 million from the Department of Commerce’s Advanced Technology Program to help fund a project to improve the efficiency of a bioprocess to make polyhydroxyalkanoate (PHA) biodegradable plastics from corn-based sugars. Metabolix said it was engineering bacteria to make production of PHA cost competitive with petrochemical-based plastics. A report on Metabolix in 2002 stated,
Genetically engineered microbes that produce thermoplastic polymers by fermenting cornstarch or sugar are going to start nibbling away at hydrocarbon-based resins more quickly than is generally expected. That is the view of James Barber, president of Metabolix Inc., whose company operates a pilot plant for polyhydroxyalkanoate (PHA) fermentation at its headquarters in Cambridge, Massachusetts. Metabolix was created in 1992 to develop PHA technology. In 2001, the company acquired Biopol technology from Monsanto. Biopol was originally developed by ICI in the 1980s. A recent \$7.4 million grant to Metabolix by the U.S. Dept. of Energy will help develop a new route to bioproduction of PHA. Instead of fermentation, Metabolix will investigate making PHA through photosynthesis in the leaves or roots of the switchgrass plant. This is a fast-growing, native American grass that grows relatively well even on marginal farmland. “Direct plant-grown PHA could allow us to challenge volume resins in lower-cost packaging and other markets,” Barber says.“Low-Cost Biopolymers May Be Coming Soon,” Plastics Technology, April 1, 2002, accessed January 31, 2011, http://www.thefreelibrary.com/Low-cost+biopolymers+may+be+coming+soon.+%28Your+Business+in+Brief%29.-a084944193.
Germany’s BASF began R&D collaboration with Metabolix in 2003 to investigate PHA’s materials and processing properties. However, much of that competitive activity was intended to forge “platform” technical capacities to use biomaterials and processing for a wide variety of pharmaceutical and industrial applications; it was in its infancy and was not necessarily seen as a threat to NatureWorks. In late 2004, agriculture giant Archer Daniels Midland formed a fifty-fifty joint venture with Metabolix to make alternatives to petrochemical plastics.
In terms of its stage and scale of technology, NatureWorks was alone among companies in the emerging industry, a situation that created additional challenges. Buyers preferred comparing the cost and performance of two products rather than having to choose the only product available. In addition, NatureWorks could hardly lobby for government subsidies or regulations for its industry, since it was the sole representative of that industry.
Yet factors continued to line up favorably. The chemically tough nature of oil-based plastic polymers was both their most desirable and most problematic trait. Plastic polymers can take hundreds and even thousands of years to break down. With steadily increasing consumption rates of plastics (predicted to be 2.58 billion tons between 2004 and 2015“Global Plastic Companies Plan to Make Biodegradable Products,” Financial Express (Delhi), October 4, 2004, accessed January 31, 2011, www.financialexpress.com/news/global-plastic-companies-plan-to-make -biodegradable-products/57219/0.) and short product life spans (approximately 30 percent of plastic is used in packaging; this material is thrown away immediately), communities faced a significant solid waste problem. In 2004, plastic represented almost 40 percent of the municipal waste stream by tonnage.“Global Plastic Companies Plan to Make Biodegradable Products,” Financial Express (Delhi), October 4, 2004, accessed January 31, 2011, www.financialexpress.com/news/global-plastic-companies-plan-to-make -biodegradable-products/57219/0. The disposal issue had caused several countries to create a requirement for recyclability in plastic products. In 1994, the European Union passed the Packaging Recovery and Recycling Act, which required member nations to set targets for recovery and recycling of plastic wastes. By 2005, manufacturers had to take packaging back. The European Union also set a precedent with the Directive on End-of-Life Vehicles, which established a goal of 85 percent reuse and recycling (by weight of vehicle parts) by 2006. NatureWorks set up its EU office in 1996.
Similar laws followed in 1997 in Japan. One stated that the manufacturer was responsible for the cost of disposal of plastic packaging. Japan added to its waste regulations in 2001 by mandating that all electronics must contain 50–60 percent recyclable materials and that the manufacturers must take the electronic device back at the end of its useful life. This spurred the Japanese GreenPla designation (so named for green plastics, not PLA). This was a strict labeling program that identified products that met all government regulations for recyclability. The first product to receive the GreenPla designation was NatureWorks PLA resins.
In 2003, Taiwan initiated a phaseout of polystyrene foam and shopping bags. These regulations used the “polluter pays” approach, which made manufacturers responsible for the disposal and reuse of their products. The efforts were designed to inspire a movement toward the development of “readily recyclable” products, and two of three implementation phases were complete. The last phase would fine people for using nonbiodegradable materials. Whether termed sustainable business, triple bottom line (economic, social, and environmental performance), 3E’s (economy, equity, ecology), or simply good business, drivers of change were growing.
Additives
No discussion of plastics can leave out the issue of additives and related health concerns. Chemical specialty companies provided packages of additives that converters incorporated into melted resins to achieve the customer’s desired look and performance. One physical characteristic of plastic molecules is that the additives are not chemically bound in the polymers but rather physically bound (envision the additive molecule “sitting” inside a web of plastic molecules, rather than being molecularly “glued” in place). That means that as plastics undergo stress under normal use, such as heat or light, or pressure in a landfill, additive molecules are released into the environment. These “free-ranging” additives were causing scientists to raise questions about health impacts. Alarming data were accumulating from sources such as the American National Academy of Sciences and the US Centers for Disease Control. A 2005 Oakland Biomonitoring Project found evidence of the following chemicals in the blood of a twenty-month-old child in California: dichloro-diphenyl-trichloroethane (DDT), polychlorinated biphenyl (PCBs), mercury, cadmium, plasticizers, and flame retardants (polybrominated diphenyl ethers, or PBDEs); PBDEs, known to cause behavioral changes in rats at 300 parts per billion (ppb), registered at 838 ppb in the child.
Plasticizers, such as phthalates, were the most commonly used additives and had been labeled in studies as potential carcinogens and endocrine disruptors. Several common flame retardants regularly cause developmental disorders in laboratory mice. Possibly most startling were studies that found significant levels of phthalates, PDBEs, and other plastic additives in mothers’ breast milk. Those findings were confirmed for women in several industrially developed economies including the United Kingdom, Germany, and the United States.
Science trends had led to a series of regulations that plastic producers and other companies active in the international market could not ignore. In 1999, the EU banned the use of phthalates in children’s toys and teething rings and, in 2003, banned some phthalates for use in beauty products. California took steps to warn consumers of the suspected risk of some phthalates. The EU, California, and Maine banned the production or sale of products using certain PDBE flame retardants.
Attempting to address the fact that the majority of the thousands of chemical additives used in plastics have never been tested for health impacts, in 2005 the EU was in the final phases of legislative directives that required registration and testing of nearly ten thousand chemicals of concern. The act, called Registration, Evaluation, Authorization, and Restriction of Chemicals (REACH), was expected to become law in 2006. Imports into Europe would need to conform to REACH requirements for toxicity and health impacts. Europe used the precautionary principle in its decisions about chemicals use: unwilling to wait until conclusive scientific data proved causation, member countries decided that precautionary limits on, and monitoring of, chemicals would best protect human and ecological health.
Sales in Europe
NatureWorks’ innovation had received more attention in the international market than in the United States. In 2004, IPER, an Italian food market, sold “natural food in natural packaging” (made with PLA) and attributed a 4 percent increase in deli sales to the green packaging.Carol Radice, “Packaging Prowess,” Grocery Headquarters, August 2, 2010, accessed January 10, 2011, www.groceryheadquarters.com/articles/2010-08-02/Packaging-prowess. NatureWorks established a strategic partnership with Amprica SpA in Castelbelforte, Italy, a major European manufacturer of thermoformed packaging for the bakery and convenience food markets. Amprica was moving ahead with plans to replace the plastics it used, including PET, polyvinyl chloride (PVC), and polystyrene with the PLA polymer. In response to the national phaseout and ultimate ban of petroleum-based shopping bags and disposable tableware, Taiwan-based Wei-Mon Industry signed an exclusive agreement with NatureWorks to promote and distribute packaging articles made with PLA.World Business Council for Sustainable Development, “NatureWorks by Cargill Dow LLC: Capturing Consumer Attention and Loyalty,” accessed April 19, 2011, www.wbcsd.org/web/publications/case/marketing_natureworks _full_case_web.pdf. In other markets, high-end clothing designer Giorgio Armani released men’s dress suits made completely of PLA fiber; Sony sold PLA Discman and Walkman stereos in Japan; and, due to growing concerns about the health impacts of some flame retardant additives, NEC Corp. of Tokyo had combined PLA with a natural fiber called kenaf to make an ecologically and biologically neutral flame-resistant bioplastic.“NEC Develops Flame-Resistant Bio-Plastic,” GreenBiz, January 26, 2004, accessed January 27, 2011, http://www.greenbiz.com/news/2004/01/26/nec-develops-flame-resistant-bio-plastic.
Though the US market had not embraced PLA, there were signals that a market would evolve. In its eleven “green” grocery stores, Wild Oats Markets Inc.—a growing supermarket chain based in Portland, Oregon—switched to PLA packaging in its deli and salad bar. The stores advertised the corn-based material and had special recycling collection bins for the plastic tubs, which looked identical to petroleum-based containers. Wild Oats collected used PLA containers and sent them to a composting facility. The chain planned to expand that usage nationally to all seventy-seven Wild Oats stores,World Business Council for Sustainable Development, “NatureWorks by Cargill Dow LLC: Capturing Consumer Attention and Loyalty,” accessed April 19, 2011, www.wbcsd.org/web/publications/case/marketing_natureworks _full_case_web.pdf. scooping its larger rival, Whole Foods. Smaller businesses such as Mudhouse, a chain of homegrown coffee shops in Charlottesville, Virginia, had changed over to NatureWorks’ PLA plastic clear containers for cold drinks, sourced from Plastics Place in Kalamazoo, Michigan, a company that stated its mission as “making things right.”
NatureWorks marketing head Dennis McGrew noted that the more experimental companies and the firms trying to catch competitors were moving more quickly to explore PLA applications. It was significant that both smaller early adopter purchasers as well as large companies were interested. Soon, mainstream companies entered the mix. In 2004, Del Monte aced its rival Dole at the southern California food show with PLA fresh fruit packaging. Also that year, Marsh Supermarkets in Indianapolis agreed to use PLA packaging at its stores, representing an important new retail channel: the traditional supermarket.Carol Radice, “Packaging Prowess,” Grocery Headquarters, August 2, 2010, accessed January 10, 2011, www.groceryheadquarters.com/articles/2010-08-02/Packaging-prowess.
Clothing Fiber from PLA
Opportunities for fiber applications were growing. NatureWorks launched the Ingeo brand of PLA in January 2002, targeting fiber markets then dominated by PET, polyamide, and polypropylene fibers. Ingeo could be used for clothing, upholstery, carpets, and nonwoven furnishings as well as fiberfill for comforters and for industrial applications. By 2004 the company FIT had developed a range of man-made fibers derived from PLA polymers following the signing of a master license agreement between the Tennessee-based fiber maker and NatureWorks to produce and sell the fibers under the brand name Ingeo in North America and in select Asian markets. The agreement included technology licenses, brand rights, and raw material supply to manufacture and sell Ingeo. The US supply chain for apparel fiber had moved to Asia in the 1990s, making India and China the fabric markets to watch.
In 2004, Faribault Woolen Mill Company sold blankets and throws made with 100 percent PLA and a PLA/wool blend. Biocorp North America Inc., based in Louisiana, was one of a handful of companies producing compostable PLA cutlery and was able to offer the new product at a price competitive with conventional disposable knives, forks, and spoons. Biocorp had success selling its corn-based cutlery to sizable buyers such as Aramark and the US Environmental Protection Agency. In 2003, Ford introduced its Model U SUV, which boasted a range of “green” features such as a hydrogen engine; soy-based foam seating; and tires, roofing, and carpet mats all made with NatureWorks’ PLA.Joann Muller, “Lean Green Machine,” Forbes, February 3, 2003, accessed January 31, 2011, http://www.forbes.com/global/2003/0203/023.html. Though the new model was only a “concept vehicle,” Ford claimed that it was using the same cradle-to-cradle approach to design a market-ready vehicle.
Genetically Modified Organisms (GMOs)
A significant obstacle to marketing NatureWorks PLA in the United States was that the corn feedstock included genetically modified (called GM or GMO) corn. That PLA was certified to be free of any detectible genetic material by GeneScan Inc. and that the base sugar source (GMO or not) had no impact on PLA performance did not persuade the naysayers. Furthermore, the business was not in a position to control the corn sources coming to the mill and GMO and non-GMO were typically intermixed.
When the revolutionary NatureWorks PLA product was initially released in 2002, outdoor clothing company Patagonia jumped at the chance to use it. After approving the suitability of PLA fibers for its products and moving toward a sizable partnership, Patagonia realized that the corn feedstock, like nearly all the corn produced in the United States, had been genetically modified to be more pest resistant. Patagonia shared the concerns of many environmental nongovernmental organizations throughout the world that GMO products had not received sufficient testing for full ecological and social impact. The uncertainty that still surrounded GMO products caused such groups to lobby for a total ban on GMOs until more sound investigations were conducted. Patagonia abandoned the NatureWorks partnership and launched a publicity campaign against PLA. Environmental groups also questioned the use of food material (the corn) as feedstock when hunger remained a seemingly intractable problem internationally. NatureWorks expected to spend about \$2 billion on commercial development and production technology development to enable the conversion of other agriculturally based materials, such as corn stalks and other postharvest field waste, wheat straw, and grasses, into PLA.
Though NatureWorks would have preferred to produce GMO-free products, it was challenging to purchase separate quantities of non-GMO corn at a comparable price. In 2002 the company quantified the proportion of GMO/non-GMO corn in its final resin and designed a system of offsets to support customer choice regarding non-GMO sourcing. In this system, any PLA customer could pay \$.10 more per pound of PLA. NatureWorks would use this money to buy an equivalent offset amount of non-GMO corn (per one pound of PLA) for the processing plant’s primary feedstock. Though resin purchasers (under the direction of their buyers) could not guarantee that the product was 100 percent non-GMO, they could voice their preference for non-GMO corn. NatureWorks experts pointed out that since the genetically modified DNA was no longer present in the corn after it had been fermented, hydrolyzed, and distilled to make PLA, this system was the only way to work proactively on this customer issue. However, parent company Cargill had reservations about the program. Public Affairs and Communications Director Ann Tucker was working on reconfiguring the program on a more customer-directed and focused platform in early 2005. Sensitivity to the issues and the use of terms like genetically modified was not limited to Cargill. Dow had preferred that the company not say “from renewable resources.”
In 2005 the plant was operating at a lower capacity than projected. Bader was hearing the refrain repeatedly: “You cost a lot of money, make the bleeding stop” and “Your product doesn’t work because it does not offer a ‘drop-in’ (easily adopted) substitute for PET and polystyrene.” It was hard to determine and stay focused on priorities. There was so much to be done simultaneously. The top management team had to constantly ask themselves what core issues should be tackled first and what strategy would generate essential sales volumes.
Marketing
After successfully overcoming the scientific and technological barriers of producing PLA on a large scale, the team now faced the challenge of creating and managing a new market, a challenge that had not been attempted for thirty years. Manufacturers did not understand how to reconfigure their machinery to handle this new polymer, and many customers needed convincing that sustainable products were worth the investment. The pilot plant in Nebraska had the capacity to produce only 300,000 million pounds of plastic per year, hardly a contribution to the three billion oil-based pounds produced in the world annually.
Dennis McGrew, chief marketing officer, joined NatureWorks in April 2004 after twenty-one years at Dow in the plastics side of the business. McGrew was solutions oriented and brought with him considerable experience working on new business models for materials markets. The challenge as he described it was “taking PLA from niche to a broad market play.” NatureWorks had a solution for companies that wanted to move in the direction of more sustainably designed corporate strategies. For McGrew the company was selling resin pellets, but really what it had to sell was environmental responsibility. McGrew had realigned commercialization to global markets where environmental concerns were more familiar concepts.
Formerly a marginal topic, by 2005 sustainable business practices had entered the mainstream. Although the definition of sustainability depended somewhat on one’s perspective, it was clear insurers, investors, banks, end consumers, and governments worldwide were placing increasing emphasis on corporate accountability for the impact of their activities on communities, health, and the natural environment. Large companies were publishing social and environmental reports in response to investor demand, and there was a significant movement toward uniform international corporate reporting standards on what was called triple-bottom-line performance (economic, social, and environmental). The Dow Jones Sustainability Index tracked high performers in sustainable management practices. In April 2005, JPMorgan, the third-largest bank in the United States, announced a new policy of guidelines restricting lending and underwriting when projects harm the environment, following European financial institutions’ strategies. As the first US financial institution to incorporate environmental risk management into the due diligence process of its private equity divisions, the signal sent a message far beyond financial markets. A negative reputation for a company going forward could result in more expensive capital, higher insurance premiums, costlier bank credit, lower stock price, and even consumer boycotts.
These larger trends might support initiatives by firms such as NatureWorks but seemed remote to Bader and her senior management team. To go from niche to mainstream with PLA, it was essential that NatureWorks create an ongoing profitable business. This meant going from tens of millions of pounds of PLA produced to hundreds of millions of pounds.
KEY TAKEAWAYS
• There are ways to decouple economic growth from fossil fuels through materials innovation based on sustainability principles.
• Entrepreneurs manage significant strategic and operating barriers that are further complicated when working with disruptive technology that must move from development to commercialization.
• Ventures within large companies face their own set of challenges due to the parent organization’s scale, vested interests, and culture.
• Sustainability, a new branding and marketing category, faces challenges throughout the supply chain.
EXERCISES
1. What is this product in its markets? Is this a good opportunity? What is the potential for this product? Be specific about volume, markets, and applications.
2. What particular difficulties arise when working with innovative products such as PLA?
3. What does PLA displace; what does PLA complement? What are the implications of being a complement or a displacement?
4. What are the major marketing and sales (commercialization) challenges for NatureWorks?
5. What are the supply-chain issues? How might they be resolved?
6. How would you have managed the commercialization process differently? | textbooks/biz/Business/Advanced_Business/Book%3A_Sustainability_Innovation_and_Entrepreneurship/08%3A_Biomaterials/8.01%3A_NatureWorks_-_Green_Chemistrys_Contribution_to_Biotechnology_Innovation.txt |
2
Key Words and Concepts
• Parties
• Common law
• Customs and practices of the construction industry
• Statutes
• Regulations
• Contractors
• Architect/engineers
• Owners
• Public owner
• Private owner
• Service and supply organizations
• Labor force
• Government in its regulatory capacity
• General public
• Construction document sequence
• National Labor Relations Act
• Davis-Bacon Act
• Lien laws
• Miller Act
• License laws
• Subcontractor listing laws
• Equal Employment and disadvantaged business opportunity laws
• Uniform Commercial Code
• Tort law
• Contract liability
• Express contract provisions
• Implied contract provisions
• Tort liability
• Statutory liability
• Strict liability
• Absolute liability
This book deals with the business and legal aspects of construction contracting practice from the perspective of a participating contractor. The word legal connotes the operation or existence of law. In what follows, law is meant to include federal- or state-enacted laws or statutes, the rules of federal and state regulatory bodies promulgated to give practical effect to enacted statutes, and the common law. Common law is that body of past court decisions, dating from the legal practice in England prior to American independence, that serves as authority or precedent governing future decisions. It can be thought of as “judge-made” law. Since judges have been, and continue to be, influenced by the customs and practices of the construction industry, these customs and practices in a sense are part of the law as well.
Before examining in detail the law as just defined, we should look into the various elements of the construction industry. Who is involved? Who are the players or—as one usually hears—who are the parties who in one way or another participate in the construction process?
The Typical Parties
Although others may be peripherally involved, the important parties certainly include the following major party groups.
Construction Contractors and Subcontractors
First, construction contractors and their subcontractors are obviously the key participants. These are the entities charged with the responsibility of actually putting construction work in place. That is, they are the entities who determine the means, methods, techniques, sequence, and procedures and who direct the actual construction activities.
Architect/Engineers
The architect/engineer (A/E) who designs the work and often administers the construction phase of the project personifies the second important group of participants. These entities are the creators of the drawings and specifications for the planned construction.
Construction Owners
The construction owners for whom the work is done and without whom there would be no construction industry constitute the third important segment. This group is the source of the money that drives the industry. Construction contracts with private owners often operate very differently from those with public owners. For that reason, the distinction between private and public owners is important.
The private owner includes just about any person or entity that is not a local, state, or national governmental body. Examples include you or your neighbor who wants a home built and large commercial entities such as restaurant and retail chains, real estate developers, and the giant industrial corporations. The private sector also includes quasi-public bodies that may be regulated by state governments but are still private companies. Examples in the western United States include, among others, the Pacific Gas and Electric Company and Southern California Edison Company, which are regulated by the State of California Public Utilities Commission.
On the other hand, the public owner can be a local, state, or federal governmental body. The public sector also includes entities created for specific purposes by actions of the voters, such as school districts, water supply and sewer districts, and transportation or transit authorities.
Service and Supply Organizations
A fourth segment consists of the service and supply organizations of the industry, such as the firms that manufacture and market construction equipment. Other examples include the producers of the basic materials of construction such as cement, concrete aggregates and other stone products, lumber and timber products, steel, petroleum products, and many other raw materials or manufactured items.
Insurance companies and sureties are service organizations. There is an important difference between the insurance and surety businesses, even though the same entity often engages in both; that difference is explained in Chapters 8 and 9.
What about banking institutions? Do you consider them to be important service organizations? If you understand the significance of the term “credit line,” you know the answer. Few owners or contractors could exist without the participation of the banks, which furnish construction loans for owners and equipment loans or operating capital loans for contractors.
Finally, the service and supply group of entities includes consultants and attorneys who furnish personal services or advice. Consultants include specialty designers for such requirements as dewatering and ground support systems, management consultants, scheduling consultants, construction claims consultants, and many others. Attorneys provide legal advice to the various parties involved in construction and represent them in court as well as in many business situations.
Labor Force
Another major category of participant is the labor force. Without this segment, nothing would get built. Labor force, as used here, means not only organized labor, consisting of international and local labor unions, but also that very large group of workers in this country who comprise the open shop or merit shop segment.
Local, State, and Federal Governments
Another category of player is local, state, and federal governments, not in their previously discussed role as construction owners, but in their regulatory capacity as the promulgators of many of the rules and regulations governing the operation of the industry.
General Public
Finally, that broad body of persons constituting the general public must be included. Construction does not occur in a vacuum, and large projects, particularly those in heavily populated areas, temporarily affect the lives of many persons who are not involved in the actual construction work but who are simply living or working in the area. Construction projects can greatly affect the general public in two ways. First, construction planning must consider the impact on the public during actual construction. Second, planning must cover any permanent effects on the public, including the environment, which is also “public.”
The provision of large programs of general liability insurance addresses the first concern, and the increasing requirements for environmental impact assessments, which take place before actual construction work is permitted, attest to the second.
Rules for Participants
The major participants or players in the construction process have just been discussed. What constitutes or defines the manner in which these participants interact or should interact?
Contracts
A primary body of rules for the conduct of the construction process is derived from the provisions of contracts and contract-related documents agreed to by participants in the industry. Figure 1-1 is a flowchart that represents the typical construction document sequence in which the more common contracts used in construction relate to each other.
The process typically starts with an owner who wants a project or a facility built. This person (or entity) signs a professional service contract with an architect/engineer to design the project and create a set of drawings and specifications. Next, the project is advertised for bids by a contract-related document called an advertisement. The advertisement often results in a pre-bid contract among two or more contractor bidders setting forth the terms of their agreement to submit a bid jointly and, if the bid is successful, to construct the project jointly. Such a contract between contractors is called a joint venture agreement. The bid itself is a contract-related offer to the owner, submitted by a single contractor or by joint venture contractor bidders, to construct the project under stated commercial terms.
Bids are evaluated by the owner, eventually resulting in a prime construction contract to which the owner and the successful contractor or joint venture bidder are parties. The existence of the prime construction contract usually generates the need for surety bonds, insurance policies, subcontract agreements, labor agreements, and purchase order agreements, all of which are contracts that involve the prime contractor as a party. At this level, these secondary contracts, which flow from the existence of the prime construction contract and involve the prime contractor as a party, are called first-tier contracts.
If first-tier subcontract agreements are drawn up, they may generate a new family of second-tier surety bonds, insurance contracts, subcontract agreements, labor agreements, and purchase order agreements. In a similar manner, it is possible that a family of third-tier contracts may be generated.
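The tiered document sequence just described is, in effect, a tree rooted at the prime construction contract. As a purely illustrative aid (not part of the text, and with all class, field, and party names invented for the example), the hierarchy can be sketched in Python:

```python
# Illustrative sketch: modeling the contract tiers that flow from a prime
# construction contract. The prime contract generates first-tier contracts
# (bonds, insurance, subcontracts, labor agreements, purchase orders); a
# first-tier subcontract can generate second-tier contracts, and so on.
# All names here are hypothetical, chosen only to mirror the sequence above.

from dataclasses import dataclass, field

@dataclass
class Contract:
    kind: str     # e.g., "surety bond", "subcontract agreement", "purchase order agreement"
    party: str    # the contractor or subcontractor who is a party to it
    children: list = field(default_factory=list)  # contracts generated at the next tier

def tiers(contract, level=1, out=None):
    """Walk the tree, collecting (tier number, kind, party) for each generated contract."""
    if out is None:
        out = []
    for child in contract.children:
        out.append((level, child.kind, child.party))
        tiers(child, level + 1, out)
    return out

prime = Contract("prime construction contract", "prime contractor", [
    Contract("surety bond", "prime contractor"),
    Contract("subcontract agreement", "prime contractor", [
        Contract("purchase order agreement", "first-tier subcontractor"),
        Contract("subcontract agreement", "first-tier subcontractor"),
    ]),
])

for level, kind, party in tiers(prime):
    print(f"tier {level}: {kind} ({party})")
```

The point of the sketch is simply that each tier exists only because a contract at the tier above it was signed, which is why these are called first-, second-, and third-tier contracts.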
Laws, Statutes, and Regulations of Governmental Agencies
A second important source of rules governing the construction process consists of three separate categories of laws, statutes, and regulations of governmental agencies.
The Federal Procurement Statutes are the source of basic rules and authority for the contracting regulations promulgated by the executive branch of the federal government. These statutes can be found in the United States Code (USC).
The Federal Regulations are the detailed rules for contracting by the federal government. These rules are currently called the Federal Acquisition Regulations (FAR). In addition, most federal agencies and subagencies have supplemented the FAR, resulting in the Department of Defense Federal Acquisition Regulation Supplement (DFARS) and the U.S. Army Federal Acquisition Regulation Supplement (AFARS). Subsubagencies have also created supplements, such as the U.S. Army Corps of Engineers Federal Acquisition Regulation Supplement (EFARS). These various regulations are published by the respective agencies. Some of them are also published by commercial publishers such as Commerce Clearing House (CCH), and most are published in the Federal Register and codified yearly in the Code of Federal Regulations (CFR).
State and local laws and ordinances are published and codified by the various states according to individual practices, which vary from state to state and municipality to municipality. Some of the more prominent federal and state laws include the following:
• National Labor Relations Act (the Wagner Act). This federal act is the primary law in the United States governing the relations between employers and their work force.
• Davis-Bacon Act. This federal act establishes minimum wage rates that must be paid on any federal project or on any project that is financed with a significant amount of federal funds.
• State Mechanic’s Lien Laws, the federal Miller Act, and the various state “little” Miller Acts. These state and federal laws operate to ensure that persons or entities providing labor or materials for construction projects receive the payment that they are due.
• State Contractor License Laws. A number of states have enacted laws that require persons or entities to demonstrate certain minimum qualifications, post a bond, and pass a qualification examination in order to operate a construction contracting business.
• State Subcontractor Listing Laws. A number of states have enacted laws to prevent “bid shopping,” a practice of some prime contractors that is unfair to subcontractors and material suppliers. In California, for instance, prime contractor bidders are required by law to list in their prime bid the name of the subcontractor, the type of work, and the subcontract price for every item of work that they intend to subcontract with a subcontract value greater than one-half of one percent of the prime contract bid price. Upon award of the prime contract, the prime contractor is then compelled to subcontract the listed work to the listed subcontractor at the listed subcontract price. The prime contractor is relieved of that obligation only if the named subcontractor is unable or unwilling to enter into a subcontract agreement with substantially the same terms and conditions as the prime contract, or, alternately, the prime contractor elects to perform the listed work with his or her own forces.
• Equal Employment Opportunity Laws, Disadvantaged Business and Women-Owned Business Participation Laws, and Other Forms of “Set-Aside” Laws. These laws and the ensuing regulations at the city, state, and federal levels are intended to remedy past patterns of discrimination in employment and business opportunity based on ethnic origin or sex.
• Uniform Commercial Code. This code has been adopted by statute in virtually every state. Its primary purpose is to establish fair and uniform trade practices applying to the sale of goods as distinct from the performance of construction services. Nevertheless, the code has an enormous impact on the construction industry since so many of the “nuts-and-bolts” transactions in the industry involve the sale of goods.
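The subcontractor listing threshold in the California listing-law item above reduces to simple arithmetic: a subcontract must be listed when its value exceeds one-half of one percent of the prime bid price. A minimal sketch, using a hypothetical prime bid amount for illustration:

```python
def must_list_subcontractor(subcontract_value: float, prime_bid: float) -> bool:
    """California listing rule as described in the text: a subcontractor
    must be listed when the subcontract value exceeds one-half of one
    percent (0.5%) of the prime contract bid price."""
    return subcontract_value > 0.005 * prime_bid

# Hypothetical \$10,000,000 prime bid: the listing threshold is \$50,000.
print(must_list_subcontractor(60_000, 10_000_000))  # True  -> must be listed
print(must_list_subcontractor(40_000, 10_000_000))  # False -> need not be listed
```

Note that the rule as stated is "greater than," so a subcontract of exactly 0.5% of the prime bid would fall just below the listing requirement.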
Tort Law
The third and final contributor to the rules governing the conduct of construction operations is a body of the common law called tort law. What is a tort? Broadly speaking, a tort is a civil wrong. The central concept of tort law is that in living our daily lives we cannot with impunity, either intentionally or unintentionally, conduct our affairs in a manner that will injure or damage others.
Liability in the Construction Process
At this point, we have discussed the participants in the construction industry and briefly examined the sources of the rules for the construction scenario. A common thread throughout is the liability involved in practically all forms of construction-related activity. This thread consists of three broad classes of liability arising in one or more separate ways.
Contract Liability
The most prominent and obvious way a participant in the construction process becomes exposed to potential liability is by becoming a party to a legal contract. This first broad class of liability is called contract liability and results when a party to the contract breaches the contract by failing to conform to one or more of its provisions.
There are two basic kinds of contract provisions. The first is a provision that is plainly written in the text of the contract document itself. This type of provision is called an express contract provision. It is stated explicitly in the contract. Most people understand provisions that are stated prominently and clearly in black letters in the text of the contract document.
The second kind of contract provision flows from the contract but is not in the form of an explicit statement. These provisions come from time-tested, commonly held understandings that are implied by the contract. Such commonly held understandings are said to be implicit in the contract and are considered implied contract provisions—or implied warranties. An example of an implied warranty is that each party when entering into a contract implicitly warrants that he or she will not act, or fail to act, in a manner that interferes with the other parties’ ability to perform their duties under the contract. These commonly held understandings come both from the customs and practices of the construction industry and from past decisions of courts, known as case law, another term for “judge-made law” or “common law.”
Examples of both kinds of contract provisions are covered in later chapters. Breach of either kind results in contract liability.
Tort Liability
The second broad class of liability flowing to persons engaged in construction is tort liability based on tort law. The general tort concept, discussed earlier in this chapter, is part of the common law. Tort liability does not depend on the existence of a contract. Many individuals in all segments of the industry continually incur tort liability without realizing it because they mistakenly believe their liabilities are limited to those resulting from breach of the provisions in the contracts to which they are a party.
A second mistaken notion is that tort liability arises only when a person knowingly and intentionally acts in a manner that injures or damages another. Intentional acts that injure others certainly do create tort liability, but tort liability can also be created by an act that is unintentionally committed. An intentional tort would be to damage someone else’s property on purpose, whereas if the damage was caused because one was negligent, the tort would be unintentional. An intentional tort may constitute a criminal act in addition to a civil wrong, giving rise to criminal penalties as well as monetary damages.
Statutory Liability
The third broad class of liability is that imposed by law or statute and is called statutory liability. This class of liability flows directly from the provisions of enacted laws or statutes that apply in specific localities as well as from federal laws that apply throughout the United States. As is true with contract and tort liabilities, statutory liabilities may be either express or implied.
Strict Liability
Any form of liability, whether contract, tort, or statutory, is a serious matter to be avoided whenever possible. A second descriptor, frequently applied to all three broad classes of liability, is strict liability, which means that it is not necessary to prove fault or negligence to establish that a person or entity is liable for some act or failure to act. The mere fact that the act or failure to act occurred is all that is necessary to establish the liability.
Strict liability is usually associated with tort liability situations, but it can apply to other classes of liability as well. For instance, express warranties that are frequently included in construction contracts also impose strict liability on the contractor in the event of failure to honor or make good the terms of the warranty. Such a warranty might provide, for instance, that the contractor warrants that a roof installed under the contract will not leak for a period of, say, five years, and that if it does leak within this period, the contractor will make repairs at no additional cost to the owner so that the roof will not leak. Unless the roof leaked for some reason for which the owner was directly or indirectly responsible, courts will interpret the requirements of the warranty strictly. The owner would not have to prove that the contractor was at fault or was negligent. The mere fact that the roof leaked is all that is necessary to establish the contractor’s liability for the necessary repairs.
Just how strictly express warranties can be enforced is illustrated by a 1988 case in which the Iowa Supreme Court affirmed a lower court’s award of foreseeable consequential damages suffered by the owner of a distillery due to a design/build contractor’s failure to meet the performance guarantees stated in the contract. The contract provided that the completed plant would be capable of producing 190,000 gallons of ethanol per month and in so doing would not consume more than 36,000 BTU of heat per gallon of ethanol produced. The completed plant never met the stated output and consumed more BTUs per gallon than stated, causing the owner to lose money. The plant eventually was closed. The damages awarded the owner included the operating losses suffered during the period of plant operation plus the difference between the initial cost of the completed plant and its greatly diminished salvage value.[1]
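The damages in the Iowa case reduce to simple arithmetic: operating losses during the period of plant operation, plus the difference between the plant's initial cost and its diminished salvage value. A minimal sketch, using hypothetical dollar figures for illustration only (the opinion's actual amounts are not stated in the text):

```python
def warranty_damages(operating_losses: float,
                     initial_cost: float,
                     salvage_value: float) -> float:
    """Consequential damages as described for the ethanol-plant case:
    operating losses suffered during plant operation, plus the
    diminution in value (initial cost less salvage value)."""
    return operating_losses + (initial_cost - salvage_value)

# Hypothetical figures, for illustration only:
print(warranty_damages(operating_losses=1_200_000,
                       initial_cost=6_000_000,
                       salvage_value=500_000))  # 6700000
```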
A special kind of strict liability that applies to construction contracting is liability to third parties for damage that results from the performance of ultrahazardous construction activities such as blasting or demolition work in urban environments or liability for any activity that causes damage to a property owner’s land (as distinct from damage to buildings or other improvements on the land). This particular type of liability is sometimes called absolute liability. The standard of care used in conducting the construction operations or the precautions taken to avoid damage are not taken into consideration. The only thing that matters is that the damage was caused by the construction activity. If it was, the contractor’s liability is absolute.
Conclusion
This chapter briefly examined the major participants in the construction industry in the United States today, the rules by which they interact with each other, and the different kinds of liability knowingly or unknowingly assumed by participants in the industry. The emphasis here is that the first and foremost source of the rules governing the interactions of participants is the contracts into which they voluntarily enter.
Succeeding chapters continue this emphasis on the importance of contracts. Chapter 2 discusses the elements required for contract formation from a construction industry perspective leading into privity of contract and other contract relationships. The following chapters deal with the more important details of the contracts most commonly used in the construction industry.
Questions and Problems
1. What is common law? How have the customs and practices of the construction industry influenced its evolution?
2. What are the seven main groups of participants in the construction industry? Who are typical members of each group?
3. What are the seven major statutes—or groups of statutes, laws, or regulations—that were identified in this chapter? What is each intended to accomplish?
4. What is a tort? Is tort law a part of the common law?
5. Define and suggest possible examples of the following:
1. Contract liability
2. Tort liability
3. Express terms or provisions
4. Implied terms or provisions (sometimes called implied warranties)
5. Statutory liability
6. Strict liability
7. Absolute liability
6. Can certain contract liabilities also be strict liabilities? Can tort liabilities be strict liabilities? Can statutory liabilities be strict liabilities? Explain each of your answers.
7. Consider the following factual situation: The parties to a construction contract were a contractor and the U.S. Army Corps of Engineers (the Corps). The work of the contract was to construct an earth levee embankment along a river from material borrowed from a designated borrow pit shown on the contract drawings. To provide a means for the contractor to haul the borrow material from the borrow pit to the levee site, the Corps had previously obtained a 50-foot-wide easement through a landowner’s property. The contract contained no provisions one way or the other concerning potential damage to the landowner’s property, only that the contractor would be allowed to haul earth over the 50-foot easement previously obtained by the Corps. During the course of the contract work, construction equipment leaked crankcase oil, and a large quantity of fuel oil was inadvertently spilled by the contractor on the land within the 50-foot easement. The landowner alleged serious damage to the land and sued the contractor for damages. On the basis of these facts, answer the following questions:
1. If the damage to the land could be proved and you were the contractor, would you concentrate your legal efforts in attempting to convince the court that you had no liability or in seeking to prove that the extent of the actual damage was small? Explain your answer.
2. If the contractor had decided to contest the liability issue and the court ruled in favor of the landowner, what kind of liability would be involved: contract, tort, or statutory? Explain your answer.
3. If liability was found to exist in (b), would it flow from violation of some express provision of the contract or statutory law or from violation of an established principle founded in the common law? Explain your answer.
8. Consider these two situations:
Situation A
A construction contract contains a provision that the contractor must guarantee that a sewer system to be installed will not be subject to groundwater infiltration of over 500 gallons per day per inch diameter of pipe per mile. Upon completion of the installation, tests reveal infiltration of 12,500 gallons per day in an 18,000-foot run of six-inch pipe.
Situation B
A contractor encounters rock in an urban area excavation and elects to loosen it by blasting. The advice of experts was obtained, and the blasting operations were conducted with great skill and care. However, cracking occurred in the plaster walls of several houses in the immediate vicinity of the work as a result of the blasting. Briefly discuss these two situations, stating whether either of them results in liability for the contractor and, if so, the type or kind of liability involved.
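Whether the Situation A guarantee was met can be checked with a short calculation: the allowance scales with the guaranteed rate (gallons per day per inch of diameter per mile), the pipe diameter in inches, and the run length in miles. A minimal sketch (the only added assumption is the standard conversion of 5,280 feet per mile):

```python
def allowable_infiltration(gpd_per_inch_mile: float,
                           diameter_in: float,
                           run_ft: float) -> float:
    """Infiltration allowance in gallons per day for a pipe run:
    guaranteed rate x pipe diameter (inches) x run length (miles)."""
    return gpd_per_inch_mile * diameter_in * (run_ft / 5280)

# Situation A: 500 gal/day per inch-mile, 6-inch pipe, 18,000-foot run.
allowed = allowable_infiltration(500, 6, 18_000)
print(round(allowed))    # 10227 gallons/day allowed
print(12_500 > allowed)  # True -> measured infiltration exceeds the guarantee
```

Since the measured 12,500 gallons per day exceeds the roughly 10,227 gallons per day allowed, the express warranty was not met.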
1. Farm Fuel Products Corp. v. Grain Processing Corp., 429 N.W.2d 153 (Iowa 1988).
Key Words and Concepts
• Contract formation
• Offer
• Acceptance
• Consideration
• Offering entity’s standard terms and conditions
• Conflict with the prime contract bidding documents
• Counteroffer
• Negotiation
• Meeting of the minds
• Subcontractor listings in bids
• Nonenforceable contracts
• Privity of contract
• Third-party beneficiary theory
• Intended v. incidental beneficiary
• Multiple prime contracts
Chapter 1 discussed various sources of the rules by which the construction industry operates. One important source of these rules was found to be the actual contracts entered into by the various “players” or participating entities in the industry. The first part of this chapter will examine the concepts of contract formation followed by a discussion of privity of contract and other contract relationships.
What Constitutes a Contract?
Since contracts are so important in defining the rules by which the construction industry operates, it should be obvious that when two parties enter into a contractual relationship, each would know and acknowledge that fact. However, this is not always the case. When one of the parties denies that a contract exists, it becomes important to understand when, and how, legally binding contracts are formed. Three elements for contract formation are necessary: an offer, an acceptance, and consideration.
Offer
What is an offer? What is its essential nature? One legal authority has defined an offer as a manifestation of interest or willingness to enter into a bargain made in such a way that the receiving party will realize that furnishing unqualified acceptance will seal the bargain.[1] If the willingness to enter into a bargain is manifested so that the person to whom it is made is aware, or should be aware, that some further manifestation of willingness will be required before an unqualified acceptance would seal the bargain, then what has transpired is not an offer.[2] For example, a house painter who declares, “I’ll paint your house for a price of \$3,000 during the third week of September, provided my other work will let me,” or words to that effect, has not made a binding legal offer because the manifestation of willingness is qualified or “hedged.”
What about the format of the offer? Is any particular format required? In the general case, no format is required as long as the offer meets reasonable standards of completeness and clarity. However, there are exceptions, the most prominent being the particular kind of offer occurring in construction that we refer to as a bid or a proposal. Bids and proposals are usually made in response to an advertised notice called an invitation for bid (IFB) or a request for proposals (RFP). Both an IFB and an RFP by their written terms usually require that the bid or proposal be in a specific format; if it is not, it is considered a “nonconforming” offer and will be rejected. Other than in situations where a format is specified, no mandatory format is required for an offer to be legally sufficient.
Does the offer have to be in writing? Generally, it does not—that is, a verbal offer that meets reasonable standards of completeness and clarity can be legally sufficient. Again, there are important exceptions. Bids and proposals made in response to advertised IFB or RFP notices invariably require written submissions. Also, offers for the sale of goods are governed by the provisions of the Uniform Commercial Code (UCC), which requires offers of over \$500 value to be in writing. Other local statutes may impose requirements on commercial transactions within the jurisdiction of the locality, including, in some instances, a requirement that an offer must be in writing to be legally binding. Other than these kinds of exceptions, a valid offer can be either written or oral.
In every case, whether written or oral, a legally binding offer must be clear. It must define or describe that which is being offered. In the previous simplistic example, there is a lot of difference between
“I’ll paint your house for a price of \$3,000 during the third week of September provided my other work will let me.”
and
“I’ll paint your house for a price of \$3,000. My price includes scraping off all existing loose, flaking paint to bare wood, priming bare wood with Sherwin-Williams exterior primer, and applying two coats of Sherwin-Williams exterior house enamel, colors of your choice, one for the body of the house and one for the trim. Glazing work or repair of downspouts and drains is not included. The work will commence the third week in September and be completed that week, weather permitting.”
The second version, even if it were expressed verbally, is probably sufficiently clear and definitive to constitute a valid legal offer. The first is not, completely aside from the presence of the qualification.
Moving on, what defines the duration of an offer? Put another way, once given, for how long is an offer good? Sometimes an explicit statement in an offer clarifies that the offer will be good for only the period stated. Also, when offers or bids are made pursuant to the terms and conditions of an IFB or an RFP, the period for which the bidder may be held to the terms of his or her offer will ordinarily be explicitly stated in the IFB or RFP. Other than in exceptions such as those just given, an offer will be deemed legally valid until it is formally withdrawn. If the offer is not formally withdrawn, it will be deemed valid for a reasonable time. Unfortunately, there is no universally accepted definition of a “reasonable time.” Reasonable time thus becomes what a judge or an arbitrator thinks is reasonable in a particular case should a dispute arise.
Offers can be withdrawn in at least two different ways of importance to construction practitioners. First, if the offer contains a statement establishing a fixed duration, withdrawal at the end of that stated period would be implicit. Second, an offer that does not contain a statement establishing a fixed duration can usually be unilaterally withdrawn by the person or entity making it at any time prior to acceptance.
A particular issue concerning offers commonly results in construction disputes—whether or not the offering entity’s standard terms and conditions are deemed applicable to an offer. Another word for standard terms and conditions is boilerplate: the fine print typically appearing on the back of vendors’ sales offers that has been carefully drafted to their advantage. Obviously, if the face of the offer explicitly states that it includes the offeror’s standard terms and conditions, these would apply. In addition, even though not explicitly stated on the face of the offer, the offeror’s standard terms and conditions apply if it could be shown that the person to whom the offer was made knew about them through previous dealings with the offeror where such standard terms and conditions did apply. For instance, a contractor who had habitually purchased form lumber from a particular supplier and who knew about the supplier’s standard terms and conditions and had accepted them in the past probably would be held to that knowledge and acceptance in regard to a new offer, even though the face of the offer did not explicitly state that it was subject to the supplier’s standard terms and conditions.
Another issue unique to the construction industry is created when the subcontractor and supplier bids to prime contractors include standard terms and conditions that are in conflict with the prime contract bidding documents. This can occur even when these bids are stated to be in accordance with the prime contract bidding documents, as illustrated in the following common bidding situation:
A subcontractor submits a bid to a prime contractor who, in turn, is submitting a prime bid to the owner in accordance with the prime contract bidding documents. The sub-bid states prominently on its face that it is submitted “in accordance with the prime contract bidding documents” or words to that effect. However, buried in the boilerplate on the back of the subcontractor’s quotation form is a statement that the sub-bid offer is good for a period of ten days and, if not accepted within this period, the subcontractor is not bound to the offer. The effective date of the sub-bid is the same as the date of the prime bid. The prime contract bidding documents require the prime bid to the owner to be held open for a period of 60 days. The prime contractor’s bid is the lowest, and the prime contractor is awarded the contract 45 days after the bid date and shortly thereafter attempts to enter into a subcontract with the subcontractor. The subcontractor refuses to honor the sub-bid on the grounds that the offer expired ten days after the date of the prime bid.
Now what? Clearly, the prime contractor was in no position to contract with the subcontractor until awarded the prime contract by the owner. How would this conflict be resolved?
The sub-bid was stated to be “in accordance with the prime contract bidding documents.” Thus, if it can be established that the subcontractor knew, or should have known, about the prime contract bidding provisions, those provisions would take precedence over the bidder’s standard terms and conditions. Chapter 7 deals with this completely unnecessary kind of conflict between prime contractors and their subcontractors and suppliers and explains how to avoid it.
Acceptance
Moving on to the acceptance, the second element that must exist to form a contract, a number of points are important. Obviously, for the acceptance to have any relevance and legal meaning, it must be an acceptance of whatever was offered. A form of acceptance that changes the offer in any significant respect is not an acceptance at all but a counteroffer. An exchange of offers and counteroffers between two parties constitutes a negotiation. In a negotiation, only the final offer and acceptance matter in respect to contract formation. A contract between two parties cannot be legally binding until and unless there is a meeting of the minds—that is, the mutual agreement is not made under duress—at the time the contract is formed. Both parties must understand and accept that they have mutually agreed to be bound by the same set of terms and conditions or, in other words, by the final offer and acceptance. The trouble starts when the parties later discover that they did not have a common understanding of the agreement. Such is the genesis of many construction contract disputes.
As in the case of the offer, the acceptance may normally be written or oral and, if written, may be in any format, provided that a true meeting of the minds results. The only exceptions are where written or specifically formatted acceptances are required by the terms of an IFB or an RFP, by local statute, or by state laws that have adopted the Uniform Commercial Code, which specifically requires that an acceptance be in writing.
The construction industry has also spawned a recurring dispute involving a question about acceptance found in no other line of commercial activity. The dispute arises when the advertised bidding documents require prime contractors to list the names of their subcontractors on the face of the prime bid, indicating that they have relied on those sub-bids and have incorporated them in the prime bid (see sub-bid listing laws discussed in Chapter 1). This type of requirement is fairly common and when present is ordinarily known and understood by both prime contractors and subcontractors prior to the sub-bids being given. Under these circumstances, subcontractors often contend that the prime contractor’s act of incorporating the sub-bid in the prime bid and listing the name of the subcontractor constitutes a legally binding acceptance of the sub-bid offer. Unfortunately, from the subcontractor’s point of view, courts and boards have generally held to the contrary—that is, in and of itself, the use of a sub-bid and the listing of the subcontractor by a prime contractor in the bid to an owner does not constitute a legally binding acceptance of that sub-bid. An acceptance of any offer, including sub-bids, must be communicated directly from the party to whom the offer was made to the party who made it rather than communicated indirectly through a third party (the owner). This holding may seem unfair, particularly when contrasted to the doctrine of promissory estoppel discussed in Chapter 12. Nonetheless, this has been the usual case law ruling whenever this question arises.
Consideration
The third and final element necessary for contract formation is the consideration. In construction, the consideration may be money, but not always. It can just as well be some other “cash good” thing, such as the discharge of an obligation that has a value. The value may not be great. The main point is that consideration for both parties to the contract must always be present in one form or another in order for a contract to be formed. One way to think of consideration is that each party must have a rational reason for entering into the contract and an expectation of receiving something of value for performing the contract satisfactorily. In a construction contract, the owner’s consideration is getting the project work performed and the contractor’s consideration is receiving the contract price.
Contract Must Not Be Contrary to Law—Nonenforceable Contracts
A binding contract can never be formed without the presence of the three necessary elements for contract formation explained previously. However, the undeniable presence of the offer, acceptance, and consideration does not always guarantee the existence of a binding legal contract. In addition, the contract must not contravene the law and, if a public contract, must not contravene or be contrary to public policy. For example, an otherwise valid contract to set fire to a building enabling the other party to the contract to collect the insurance would not be legally enforceable.
In a construction industry context, a more common situation arises in public work where an otherwise valid contract is entered into by a public official not legally empowered to contract for the work. Such circumstances can lead to cases where the contractor has performed the work in good faith and then been unable to secure payment.
A good illustration of normal contractual relationships being voided because the contract was illegal is afforded by an Alabama case where the contractor was not paid for extra work performed because the governor of the state had not approved the contract/change order as required by statute.[3] Another is a New York case where the contractor was not paid for work performed and was forced to pay back monies that had previously been paid, because the city commissioner who had awarded the contract was found to have been bribed.[4]
Privity of Contract and Other Contract Relationships
In construction contract disputes, the threshold question is: “Does a contractual relationship exist between the parties to the dispute?” As discussed in Chapter 1, two important kinds of liability are contract liability and tort liability. Contract liabilities arise whenever the provisions of a contract, whether express or implied, are breached (broken) by one of the parties to the contract.
Privity of Contract
Contract liability flows from the existence of a contract. Without a contract, there can be no contract liability and, consequently, no sustainable legal cause of action for breach of contract. Therefore, if a party has been damaged by another and seeks redress through a lawsuit under a theory of contract liability, that party must first establish the existence of a contract with the party who caused the damage by breaching that contract. The existence of such a contractual relationship is called privity of contract.
It is not uncommon for a construction contractor to sue architect/engineers or construction managers for alleged failure to properly perform their duties associated with the construction contract, although the contractor does not have a contract directly with them. Such lawsuits must be based on tort with the contractor claiming tortious interference with construction activities or negligence on the part of the architect/engineer or construction manager. In tort cases, a cause of action does not depend on the existence of a contract, so the privity of contract issue does not arise.
Third-Party Beneficiary Relationship
Contract or tort liability involving two parties is straightforward enough. However, a more complex situation sometimes arises in construction cases where lawsuits involving contract-type liabilities may be sustained against a party with whom one does not have a contract. Such lawsuits depend on the third-party beneficiary theory. The basic concept is that when each of two or more separate entities has a valid contract with a common third entity, they may be third-party beneficiaries of the contract between the “common” entity and the other noncommon entities. This relationship is illustrated in Figure 2-1.
In this situation, entity A has a valid contract with owner C, and entity B also has a valid contract with owner C. If the third-party beneficiary relationship is found to exist between entities A and B, A can sue B for damages suffered by A if B breaches some provision of B‘s contract with C. Likewise, B can sue A for damages suffered by B if A breaches some provision of A’s contract with C.
For example, suppose that an owner C has a loan agreement with bank B to advance funds to cover monthly approved estimates for work completed on a construction project. Owner C also has separately contracted with contractor A to construct the project according to approved drawings and specifications. All goes well until bank B stops advancing funds without legal justification, thus breaching B’s contract with owner C. Under these hypothetical circumstances, contractor A could probably successfully sue bank B and collect full payment for work completed on the argument that A was an intended third-party beneficiary of the bank’s contract with owner C, even though no contract existed between the contractor and the bank.
Third-Party Beneficiary Intent
The third-party beneficiary relationship illustrated in Figure 2-1 will be deemed to exist if it can be shown that the three parties had the intent to establish it when the contracts were formed. How can such an intent be shown? Obviously, if the wording of the contracts contains any explicit provisions establishing such intent, that intent would be deemed to exist.
Even though the contracts do not contain explicit language establishing intent, courts sometimes reasonably infer that the parties had that intent from the circumstances surrounding the particular contracts. In deciding whether to apply the third-party beneficiary rule, courts will make a careful distinction regarding the beneficiary relationship. It is not enough that one party may benefit from the fruits of the other’s contract with the common owner. Courts will also want to know whether the benefit was incidental or intended. A third-party beneficiary relationship that is merely incidental is insufficient to establish rights of recovery. On the other hand, an intended third-party beneficiary relationship will establish rights of recovery.
Suppose, for instance, that in the previous example the bank loan agreement did not state the proceeds of the loan were intended to finance the construction project and did not refer to the construction project in any other way. In those circumstances, a court might conclude that the construction contractor’s benefit from the loan proceeds was merely incidental rather than intended, thus denying the contractor rights of recovery.
Multiple Prime Contracts
A second common example is the multiple prime contractor situation. Suppose that civil works contractor A, mechanical contractor B, and electrical contractor C each contract directly with common owner D on the same project, requiring the work forces of all three contractors to be present on the site simultaneously. If the written or implied terms of each prime contract with owner D make little or no reference to the other prime contracts, and any one of the contractors A, B, or C damages one or more of the others through a breach of their individual contracts with owner D, the damaged contractor(s) would have no right of recovery directly against the contractor who caused the damage. The contracts with D do not establish an intended third-party beneficiary relationship. As discussed in Chapter 13 on contract breaches, the damaged contractor under the circumstances just described could well have a valid breach of contract cause of action against the owner for failing to manage or control the other prime contractors properly, but a lawsuit based on the third-party beneficiary rule would fail.
This principle is well-illustrated by an Illinois case where the appellate court ruled that a prime contractor on a multiple-prime contract could not sue one of the other prime contractors who had allegedly caused a delay because the multiple-prime contract documents did not create an affirmative duty toward other contractors on the site. There was no intended third-party beneficiary relationship. The court ruled, however, that the damaged contractor could sue the owner for failure “to properly supervise the construction project.”[5]
Years ago, the previous situation frequently occurred in projects involving multiple prime contracts. Now, owners often protect themselves from such breach of contract suits by inserting identical language in each prime contract stating that, if one of the prime contractors damages any of the others in the course of their individual contracts, the damaged contractors’ only recourse is to seek recovery directly from the contractor causing the damage. In no circumstances will the owner be responsible.
An excellent example of this latter approach was the policy of the Massachusetts Water Resources Authority (MWRA) in administering the construction of the Boston Harbor Project, an immense sewerage collection and treatment project that is one of the largest projects of its type in the world. Approximately 30 or 40 separate prime contractors were engaged within a very confined site on Deer Island on the north end of Boston Harbor. These prime contracts contained precisely the kind of language just described, effectively insulating the MWRA from breach of contract claims from individual contractors alleging failure of the owner to properly control this virtual army of prime contractors working on the site.
The MWRA case is an example of the use of exculpatory clauses—or disclaimers—that lay off a liability that the owner would otherwise have. Under the circumstances described above, the intended third-party beneficiary relationship would be established and, if any of the prime contractors damaged any of the others through a breach of contract with the owner, the damaged contractor could directly sue the contractor causing the damage, even though privity of contract did not exist.
A Tennessee case illustrates the right of a co-prime contractor to sue another co-prime because the project bid documents provided that several prime contractors would be working on the site and that their progress schedules were to be “strictly observed.” In that case, the court felt that such language was sufficient to establish the third-party beneficiary relationship, because it implied that each contractor could rely on that requirement in the other’s contracts being complied with.[6]
In either of the multiple prime contract situations just described, if the behavior of the contractor causing the damage were so bad as to constitute disregard of a civil duty owed to the other primes, the damaged prime contractor would have legally sustainable grounds for suing in tort, which does not depend on privity of contract, as an alternative to the other available theories of recovery.
For example, if a civil work contractor unnecessarily and carelessly created excessive quantities of abrasive dust that interfered with a mechanical contractor’s assembly and installation of permanent machinery, the mechanical contractor would probably be successful in suing the civil work contractor in tort.
Conclusion
This chapter reviewed the elements necessary for contract formation from the perspective of the construction industry, the privity of contract concept, and the application of the third-party beneficiary relationship to common construction contracting situations. Chapter 3 will present an overview of prime construction-related contracts as an introduction to more detailed discussion of prime construction contracts presented in later chapters.
Questions and Problems
1. What is an offer? Need it be in a particular format? Need it be in writing? Under what circumstances would a specific format be required? Under what circumstances would the offer have to be in writing?
2. What does the issue of clarity have to do with the legal sufficiency of an offer?
3. Must a legally sufficient offer necessarily contain an explicit statement of the time duration within which the offer may be accepted? Under what circumstances are explicit duration statements required? If a legally sufficient offer does not contain an explicit duration statement, how long would the offer be deemed to be open to acceptance?
4. In what ways may an offer be withdrawn?
5. Discuss the rules that determine when an offer in the construction industry is deemed to incorporate the offeror’s standard terms and conditions. Make clear the circumstances in which such an offer would be so deemed. Explain the usual rule that courts follow when an offer purported to be in accordance with bid plans and specifications is made by an entity whose standard terms and conditions conflict with the specified bid provisions.
6. What is the fundamental requirement that a legally sufficient acceptance must meet? What is a purported acceptance that changes the terms of an offer called? Do the rules concerning the question of oral v. written form in the acceptance vary from those applying to offers?
7. Discuss the usual court holding in the case of a subcontractor’s claim that a prime contractor’s listing of the subcontractor in the prime’s bid to the owner constitutes a legally binding acceptance by the prime of the subcontractor’s offer.
8. Is the presence of consideration necessary for formation of a legally binding contract? Must the consideration be money? If not, what must it be?
9. Under what two circumstances discussed in this chapter would a contract containing legally sufficient elements of offer, acceptance, and consideration be invalid and nonenforceable?
10. What is the threshold question in breach of contract cases? What does privity of contract mean? Why is privity important?
11. What two situations discussed in this chapter would permit a sustainable lawsuit where privity of contract would not matter?
12. What rule will courts apply in deciding whether or not a claimed third-party beneficiary relationship exists?
13. An owner has entered into separate prime construction contracts with contractor A, contractor B, and contractor C, all on the same construction project. Each contract contains similar provisions, none of which state any benefit that A, B, and C have as a result of the others’ contracts with the owner. None of the contracts refer to the others in any way. Although not negligent, contractor B falls far behind schedule in performance of the contract, which causes significant increases in the cost of performance of the separate contracts that contractors A and C have with the owner.
(a) Are contractors A and C likely to succeed in sustaining a lawsuit against B for damages suffered due to B’s failure to perform contract work in a timely manner? State the basis for your opinion.
(b) Assuming slightly different facts, namely that B conducted contract operations in a grossly careless and unsafe manner that adversely affected A’s and C’s operations, respond to question (a), including the basis for your opinion.
1. Second Restatement of Contracts § 24.
2. Id. § 26.
3. Rainer v. Tillett Bros. Const. Co., 381 So. 2d (Ala. 1980).
4. S.T. Grand, Inc. v. City of New York, 344 N.Y.S.2d 938 (N.Y. 1973).
5. J. F. Inc. v. S. M. Wilson & Co., 504 N.E.2d 1266 (Ill. App. 1987).
6. Moore Construction Co., Inc. v. Clarksville Department of Electricity, 707 S.W.2d 1 (Tenn. App. 1986). | textbooks/biz/Business/Advanced_Business/Construction_Contracting_-_Business_and_Legal_Principles/1.02%3A_Contract_Formation_Privity_of_Contract_and_Other_Contract_Relationships.txt |
Key Words and Concepts
• Owner–architect/engineer contract
• Owner–CM contract
• Owner–contractor contract
• Design only
• Construct only
• Design–construct
• Turnkey
• Fast-track
• Construction management
• Agency relationship
• Commercial terms
• Risk of performance
• Cost-reimbursable commercial terms
• Fixed-price commercial terms
• Cost plus a percentage fee
• Cost plus a fixed fee
• Cost plus an incentive fee
• Target estimate
• Guaranteed maximum price
• Relationship of risk to profit
• Lump sum contract
• Schedule-of-bid-items contract
The prime contract is the start of the construction contract’s hierarchical chain. It is from this contract that subcontracts and sub-subcontracts are derived, as well as many of the related secondary contracts discussed in Chapter 1. Not all construction-related prime contracts are the same, or even necessarily similar, although as pointed out in Chapter 2, they all contain the three essential elements of offer, acceptance, and consideration that are fundamental to their formation.
What are the generic types of construction-related prime contracts, and what are the major distinguishing features between them? This chapter examines these questions from the standpoint of the identity of the contracting entities, the nature of the contractual services provided, and the commercial terms under which these contracts operate.
The Parties to Construction-Related Prime Contracts
Construction-related prime contracts involve owners, architect/engineers, construction managers, and construction contractors. In each case, the owner typically contracts with one of the others, depending on the particular purpose to be accomplished by the contract.
Owner–Architect Contracts and Owner–Engineer Contracts
As discussed in Chapter 1, architect/engineers (A/Es) are entities that typically design projects, prepare drawings and specifications for the construction contract, and in some instances perform field inspection services and administration of the construction contract. Architectural firms and engineering firms provide similar types of services. The difference between them is that architects deal with residential, commercial, and institutional buildings, whereas engineering companies deal with engineered structures such as highways, dams, bridges, tunnels, and heavy industrial buildings and structures. Prime contracts between owners and architects are called owner–architect contracts, whereas such contracts with engineers are called owner–engineer contracts.
Owner–Construction Manager Contracts
Construction managers (CMs) are distinctly different entities from A/Es. Their role is to manage the construction aspects of a project on behalf of the owner, usually as the owner’s agent. A prime contract between an owner and a construction manager is called an owner–CM contract.
Owner–Contractor Contracts
The fourth and final construction-related prime contract party is the construction contractor, the actual builder who determines the means, methods, techniques, sequence, and procedures and directs the actual construction operations. Contracts between owners and construction contractors are called owner–contractor contracts.
The Nature of the Contractual Service Provided
Another way to separate or distinguish one prime construction-related contract from another is by the nature of the contractual services that each involves.
Design Only Services
One obvious category of services is design only, which pertains to owner–A/E contracts. The use of the modifier “only” distinguishes this category of contract service from another called design–construct (design–build). The creation of drawings and specifications is a necessary part of the design process. Thus, design only is normally understood to include the preparation of a complete set of drawings and specifications used to secure bids and to construct the project. Design only contracts may also include assisting the owner in obtaining and evaluating bids for the purpose of awarding a construction contract, providing general inspection services during construction, and providing monthly certified estimates of construction work satisfactorily performed. These estimates are the basis of monthly progress payments and final payment to the construction contractor. Such contracts seldom require continuous on-site presence of the designer during construction or exhaustive site inspections to ensure compliance with the drawings and specifications. Only such inspection services necessary to reasonably assure general compliance are normally required under a design only contract.
Construct Only Services
The second obvious kind of contractual service is construct only, pertaining to owner–contractor contracts. This is the typical service provided by construction contractors. It includes assuming full contractual responsibility to perform the work according to the requirements of the drawings and specifications. Again, the modifier “only” is used to distinguish pure construction contracts from design–construct contracts.
Design–Construct Services
Recently, a hybrid form of contract has become prominent, where the contractual services of design only and construct only contracts are incorporated into design–construct contracts, sometimes called design–build contracts. In this form of contract, the architectural or engineering design work, creation of the drawings and specifications, and actual construction work are all performed by a single entity. Therefore, the owner enjoys the advantage of dealing throughout with only one party that has complete responsibility. A number of companies furnish complete design–construct services using their own forces. Other companies market design-construct services as joint ventures or by using a subcontract to provide part of the required services. An A/E may form a joint venture with a construction contractor or enter into a subcontract with a construction contractor for the construction portion of the overall project. More commonly, reciprocal arrangements are made with the construction contractor in the lead role.
Design–construct contracts can be very large and complex. One such contract in the heavy engineering field was the North Fork Hydroelectric Project on the Stanislaus River in central California completed for the Calaveras County Irrigation District in the late 1980s. This \$450-million project consisting of a complex of dams, tunnels, and powerhouses was built by a joint venture of two large construction contractors who entered into a subcontract with a prominent A/E to provide the extensive design engineering services required. An even larger (\$1.2 billion) design–construct contract was undertaken to design and build a rapid transit system for the City of Honolulu. The contract consisted of three phases for preliminary design, final design and construction, and an initial period of system operation. The contracting parties were the City and County of Honolulu and a joint venture of four large engineering and construction companies.[1]
Turnkey and Fast-Track Design–Construct Services
The two buzzwords often used in connection with design-build contracts are turnkey and fast-track.
Turnkey refers to a type of design-construct contract in which the contractor performs virtually every task required to produce a finished, functioning facility. This includes, in addition to the normal design–construct duties, procuring all permits and licenses and procuring and delivering all permanent machinery or equipment that may be involved. It would not be unusual for an owner who had contracted on a design–construct basis for a complete hydroelectric power station to furnish the turbines, generators, transformers, and switchgear, requiring the contractor to design and construct the balance of the facility (including furnishing all other necessary equipment and materials) around this owner-procured permanent equipment. Such a contract would be a design–construct contract, but it would not be a turnkey contract. If the contractor also furnished the equipment items just listed, the design-construct contract would also be a turnkey contract. All turnkey contracts are necessarily design–construct, but many design–construct contracts are not turnkey.
A fast-track project is one in which the construction phase is started at a point when only limited design work has been completed. For example, site grading and structure excavation begin when foundation design work is complete, but design work for all subsequent elements of the project, although in progress, is incomplete. This approach has the obvious advantage, on paper at least, of shortening the overall delivery period for the completed facility, as Figure 3-1 illustrates. Since “time is money,” fast-track project delivery offers considerable potential savings to an owner. However, several severe risks accompany the fast-track approach that can erode the potential savings. The foremost risk is that after construction is in place a problem may develop with subsequent design that requires costly and time-consuming changes to work already completed. At the very least, the owner loses the flexibility to make relatively inexpensive changes reflecting new and unexpected requirements, an advantage enjoyed throughout the design phase of a non-fast-track project.
Sometimes, the fast-track approach is used when the design and construction entities are not the same, each operating under separate contracts with the owner. This creates even greater risk for the owner, particularly if the design phase is not carefully managed. Errors, changes, or delays in design that impact construction are almost certain to result in claims from the construction contractor for additional compensation and time for contract performance.
Construction Management Services
The final type of contract service involved in construction-related contracts is construction management, pertaining to owner–construction manager contracts. A distinction should be made between this use of the term construction management as an administrative service performed for an owner and the meaning of that term as it relates to the direct management of construction operations by a construction contractor’s organization. Although many of the same professional qualifications are required, the two activities are distinctly different. When services are being furnished on a construction management contract, the construction manager (CM) normally furnishes purely professional services as an agent of the owner and does not perform significant actual construction work—that is, an agency relationship is created between the CM and the owner. Although performing no actual construction, the CM may provide such “general conditions” items as utilities, sanitary services, trash removal, and general elevator or hoisting services for the benefit of the construction contractor or contractors. The CM’s role as a provider of professional services is not unlike that of the A/E, who also provides professional services with the aim of serving the owner’s interest.
CMs may be involved in the very early stages of a project, even the predesign phase, to assist the owner in planning the project and in preparing a predesign conceptual estimate of the probable project cost. This involvement may continue through the design and preparation of the contract documents phase, where the CM will provide constructability advice, evaluations of alternate designs, and assistance in obtaining and evaluating bids for the construction of the project. During construction, the CM provides general contract administration, performs inspection services to ensure compliance with the plans and specifications, and assists in closing out the contract.
A CM acting as the owner’s agent is normally precluded from performing any actual construction work. However, in one form of CM contract, the agency relationship is partly replaced by the more normal owner-construction contractor relationship, where the CM’s interest is separate from the owner’s. Under this form of CM contract, the CM is part general contractor and does perform part of the construction work in addition to previously described CM services.
Although both entities are agents of the owner, CM and A/E services are essentially different. Figure 3-2 compares typical A/E and CM services. An A/E who has designed the project may also serve the owner as a CM. The same A/E entity may have two separate contracts with the owner, one for design services and another for CM services, or a single contract that provides for both.
Commercial Terms
Another major difference in construction-related prime contracts centers on commercial terms. This part of the contract establishes the method of payment to the party providing the services and defines where the financial risk of performance lies. The two broad classes of commercial terms for construction-related contracts are cost-reimbursable terms (cost-reimbursable contracts) and fixed-price terms (fixed-price contracts). A cost-reimbursable contract is one performed almost entirely on the owner’s funds. As the provider of the contract services incurs costs in providing the services, the owner periodically reimburses the provider for these incurred costs, usually on a monthly basis. The provider thus has little or no funds tied up in the contract and the payments received from the owner are directly dependent on the costs of the services provided. In contrast, there is no relation between the costs that the provider of services may be incurring and payment received from the owner on fixed-price contracts. The owner pays the fixed price stipulated in the contract regardless of what costs the provider is incurring. The fixed price is normally paid in a series of progress payments, usually monthly, as the services are provided.
Although there is basically only one form of fixed-price commercial terms, there are a number of different forms of cost-reimbursable terms.
Cost Plus Percentage Fee Terms
The simplest form of cost-reimbursable commercial terms is the cost plus percentage fee (CPPF) basis of payment, sometimes referred to as a cost plus or a time and materials basis. Many owner–A/E and owner–CM contracts operate on this form as do many small construction contracts. The owner agrees to reimburse the costs incurred by the provider of the services and, in addition, to pay a fee equal to a fixed percentage of incurred costs that is stipulated in the contract. Aside from the practice of professionalism and the desire of the provider to protect his or her reputation for fair dealing in order to secure additional business, there is no incentive for the provider to control costs. Theoretically, the more money spent, the more earned. In the case of construction contracts, this form of commercial terms has a particularly great potential for abuse.
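The arithmetic behind CPPF terms can be sketched in a few lines; the 10% fee rate and the cost figures below are assumptions chosen only to show how the fee rides on incurred cost:

```python
# Cost plus percentage fee (CPPF): the owner reimburses incurred cost and
# pays a fee equal to a fixed percentage of that cost. The 10% rate and
# the dollar amounts are hypothetical.

def cppf_outlay(incurred_cost, fee_pct):
    """Owner's total outlay: reimbursed cost plus a cost-proportional fee."""
    return incurred_cost * (1 + fee_pct)

# Because the fee grows with cost, an overrun raises the provider's fee too,
# which is the incentive problem the text describes.
low = cppf_outlay(13_500_000, 0.10)   # lower cost, lower fee
high = cppf_outlay(16_500_000, 0.10)  # higher cost, higher fee
print(low, high)
```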
Cost Plus Fixed Fee Terms
Because of the potential for abuse of cost plus percentage fee terms, the cost plus fixed fee (CPFF) form of commercial terms evolved. This form of payment is often used in federal government contracts for military-related construction when war or the threat of war has created conditions where firm pricing is not feasible. It is also broadly used for owner-A/E and owner-CM contracts and for private construction contracts when for one reason or another the drawings and specifications are not definitive enough to permit firm pricing. In this form of commercial terms, the owner reimburses all of the service provider’s costs and pays a fee that is fixed at the beginning of the contract. This fee will not change unless the scope of the services provided is expanded by change order to the contract. The determination of the fee is usually based on an estimate of the probable cost of the services to be provided or, sometimes in the case of owner-A/E or owner-CM contracts, on a percentage of the estimated construction cost of the project involved that is agreed to by the parties prior to entering into the contract. This form of commercial terms ensures that, if the costs overrun the original estimate without a change in scope, the provider of the services will not benefit by an increased fee as is the case under CPPF terms.
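As a minimal sketch of the contrast with CPPF terms, assuming a hypothetical \$1,000,000 fee:

```python
# Cost plus fixed fee (CPFF): cost is reimbursed, but the fee is set at
# contract signing and does not change without a change order expanding
# the scope. The $1M fee and the cost figures are hypothetical.

def cpff_outlay(incurred_cost, fixed_fee):
    """Owner reimburses all cost; the fee component never varies with cost."""
    return incurred_cost + fixed_fee

# An overrun increases the owner's reimbursement but not the provider's fee.
print(cpff_outlay(13_500_000, 1_000_000))  # 14500000
print(cpff_outlay(16_500_000, 1_000_000))  # 17500000
```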
Target Estimate (Cost Plus Incentive Fee) Terms
A more sophisticated form of cost-reimbursable commercial terms is the target estimate form, sometimes called cost plus incentive fee (CPIF) terms. The target estimate is an estimate agreed upon by the parties prior to entering into the contract as the most probable cost of providing the contemplated services. A fee as payment for the services is also agreed to, based on the magnitude of the target estimate, with the proviso that the parties will share the benefits or penalties of any underruns or overruns in the actual costs incurred in providing the services compared to the target estimate. The exact formula for the sharing of the underruns or overruns must also be agreed to at the outset and can vary widely depending on the particular contract. For instance, the formula could provide that the parties split underruns or overruns 50-50. It is not unusual for the provider of services to insist that the formula set a cap on the provider’s share of any overruns, the cap usually being equal to the amount of the agreed-upon fee. In all of the previously discussed forms of commercial terms, the provider of the services bears none of the financial risk of performance. In the target estimate arrangement, however, the provider does assume part of this risk, depending on the exact formula agreed upon. Ordinarily, the target estimate approach requires that fairly definitive information about the services to be provided be known at the outset. As a result, the target estimate will be relatively more accurate than the initial estimate for a cost plus fixed-fee contract, although probably not as accurate as an estimate for a fixed-price contract.
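One way such a sharing formula might be implemented, assuming the 50-50 split and the fee-sized overrun cap mentioned above (both of which are contract-specific choices, not fixed rules):

```python
# Target-estimate (CPIF) fee adjustment sketch. The 50-50 sharing split and
# the cap equal to the base fee mirror the examples in the text; a real
# contract could specify any formula.

def cpif_fee(target, actual_cost, base_fee, provider_share=0.5, overrun_cap=None):
    """Adjust the agreed fee by the provider's share of any underrun or overrun."""
    adjustment = provider_share * (target - actual_cost)  # + for underrun, - for overrun
    if overrun_cap is not None and adjustment < -overrun_cap:
        adjustment = -overrun_cap  # cap the provider's share of an overrun
    return base_fee + adjustment

# $15M target, $1M base fee: a $1.5M underrun raises the fee to $1.75M...
print(cpif_fee(15_000_000, 13_500_000, 1_000_000))  # 1750000.0
# ...while an equal overrun cuts it to $250,000.
print(cpif_fee(15_000_000, 16_500_000, 1_000_000))  # 250000.0
```

With a cap equal to the base fee, the fee can fall to zero on a severe overrun but never goes negative, which is the usual point of the cap.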
Guaranteed Maximum Price Terms
Another form of cost-reimbursable commercial terms is the guaranteed maximum price (GMP) arrangement. This form is similar to the target estimate form in that the parties agree on an initial estimate for the cost of the contemplated services and on a fee for the provider based on this estimated cost. The agreed-upon estimate for the cost of providing the services and the agreed-upon fee, usually along with an allowance for contingencies, are then added together to yield the guaranteed maximum price which, as its name implies, is a price that the provider contractually guarantees will be the owner’s maximum financial exposure for the services received. The owner then reimburses the provider for all costs of the services as they are incurred and makes pro rata payments of the agreed-upon fee as would be the case for CPFF and target estimate contracts. The difference is that once the owner has paid out funds equal to the GMP, no further payment is made. The provider must then continue to perform at his or her own expense until all of the agreed-upon services have been performed according to the contract terms. If a point is reached when all services have been provided according to the contract terms and the owner’s financial outlay is less than the GMP, the owner receives the total benefit of the savings. The GMP form of commercial terms has gained enormous popularity in recent years, particularly for contracts in the field of residential and commercial building construction. Obviously, unless the GMP is set at an inflated level compared to a reasonable estimate of the cost of providing the services, the provider assumes a considerable risk of performance under this form of commercial terms.
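A minimal sketch of the GMP payment cap, with invented figures for the estimate, fee, and contingency allowance:

```python
# Guaranteed maximum price (GMP): agreed estimate + fee + contingency set
# the cap on the owner's outlay; any saving below the GMP belongs entirely
# to the owner. All dollar amounts here are hypothetical.

def gmp_outlay(incurred_cost, fee, gmp):
    """Owner's financial exposure is capped at the guaranteed maximum price."""
    return min(incurred_cost + fee, gmp)

gmp = 15_000_000 + 1_000_000 + 500_000  # estimate + fee + contingency allowance

print(gmp_outlay(13_500_000, 1_000_000, gmp))  # underrun: owner pays 14500000
print(gmp_outlay(16_500_000, 1_000_000, gmp))  # overrun: capped at 16500000
```

In the overrun case the provider continues performing at his or her own expense once the cap is reached, which is why the GMP transfers a considerable risk of performance to the provider.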
Fixed-Price Contracts
All of the preceding forms of commercial terms apply to cost-reimbursable contract situations. The one other broad class of contract is the fixed-price contract, also called a firm-price contract, or sometimes a lump sum, or hard money contract. All four terms mean that the provider will be paid an agreed fixed price for providing the contractually stipulated services. There is no relationship between the payment received from the owner and the costs incurred by the provider. The financial risk of performance is borne entirely by the provider of the services. Fixed-price commercial terms require a particularly definitive mutual understanding of the scope of services to be provided. In the case of construction contracts, such an understanding is difficult to attain unless a complete and accurate set of plans and specifications is available, upon which the fixed price can be determined and agreed.
In any form of contracting, there is a definite relationship of risk to profit. When the commercial terms of any performance contract require that the performer or provider assume the entire financial risk of performance, that performer is taking a far greater risk than under other commercial terms. It follows that the provider is entitled to greater profit than would be the case if less risk were assumed. Therefore, the profit potential in fixed-price contracting is much greater than for other forms of contracting, particularly for construction contracts. The fixed-price or hard money contract is the traditional form around which today’s construction contracting industry evolved. The underlying philosophy of this form of contracting has been whimsically described by construction contractors as a matter of “what you bid and what you thought” v. “what you did and what you got.”
Fixed-price contracts in construction take one of two different forms. The first is a true lump sum contract, where payment is made in a total fixed monetary amount called the lump sum contract price. Usually, a breakdown of the lump sum price agreed to by the owner and the contractor is used as the work progresses to determine the appropriate part of the lump sum price to be paid monthly for work performed that month. The sum of the monthly payments will equal the lump sum contract price. Unless the scope of the work specified in the contract is changed, the lump sum price will not change.
The second form of fixed-price contract is the schedule-of-bid-items contract. In this type of contract, work is broken down into a series of bid items, each for a discrete element of the project work. Each bid item contains a title or name that describes the particular element of work involved, an estimated quantity and unit of measurement for the units of work in the item, an agreed fixed unit price, and finally, an extension price for the bid item consisting of the product of the fixed unit price and the estimated quantity of units of work. For instance, a bid item might read
BI 21—Powerhouse Structural Excavation
10,200 cy @ \$12.25 per cy = \$124,950.
As the actual work progresses, the quantity of units of work performed is physically measured or counted in the field, which, when multiplied by the fixed unit price stated in the contract, determines what the contractor will be paid that month for the work of that particular bid item. Some bid items are specified by the bid form to be fixed lump sum prices. The total contract price paid to the contractor is the monetary sum of all unit price extensions and lump sum amounts for the quantities of work actually performed. Payment is usually made monthly for measured quantities of work units actually performed that month. If no changes are made in the nature of the work described in the various bid items, the fixed unit prices and fixed-bid-item lump sum prices will not change even though the quantity of work units actually performed for the unit-price-bid items may turn out to be more or less than stated in the contract.
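The bid-item arithmetic above can be sketched in a few lines of Python, using the hypothetical BI 21 figures from the text. The monthly measured quantity below is an assumed illustration, not a figure from the text.

```python
# Schedule-of-bid-items payment arithmetic (cf. BI 21 above).

def extension_price(unit_price, quantity):
    """Extension price = fixed unit price x quantity of work units."""
    return round(unit_price * quantity, 2)

# The extension shown on the bid form uses the *estimated* quantity.
print(extension_price(12.25, 10_200))  # 124950.0

# A monthly payment uses the quantity actually measured in the field
# that month; the fixed unit price itself never changes.
measured_this_month = 1_850  # cy, hypothetical field measurement
print(extension_price(12.25, measured_this_month))  # 22662.5
```

The point of the sketch is that only the quantity varies with actual performance; the unit price fixed at bid time is applied to whatever quantity is measured.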
Contracts of this type contain language to the effect that the bid item quantities are provided for bidding purposes only and are not warranted or guaranteed by the owner. Thus, the total contract price (the sum of the bid items) paid by the owner for actual contract performance may turn out to be more or less than the apparent contract price at the time the contract is signed. This can occur even when there are no changes, depending on the accuracy of the contractually stated quantities of units of work to be performed under the various bid items. Since the fixed unit and lump sum prices are determined by competitive bidding or negotiation prior to contract formation, the potential for differences between the contractually stated and the eventual measured quantities when the actual work is performed creates some interesting problems for both owner and contractor that are beyond the scope of this book.
Figure 3-3 illustrates some of the comparative consequences of previously discussed forms of commercial terms. The table is constructed around the performance of a hypothetical project with an assumed estimated cost of \$15,000,000, representing the best estimate possible at the time the contract was signed. The table indicates the consequences to the contractor and to the owner for both cost underrun (\$13,500,000) and cost overrun (\$16,500,000) outcomes under the various forms of commercial terms illustrated.
[table id=3 /]
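The comparison in Figure 3-3 can be illustrated with a short Python sketch. Only the \$15,000,000 estimate and the \$13,500,000 underrun and \$16,500,000 overrun outcomes come from the text; the fixed fee, the CPPF percentage, the CPIF sharing ratio, the GMP cap, and the lump sum price below are all assumed values for illustration only.

```python
# Comparative consequences of commercial terms (cf. Figure 3-3).
# Every fee figure below is an assumption; only the estimate and the
# underrun/overrun outcomes come from the text.

ESTIMATE = 15_000_000
FIXED_FEE = 900_000      # assumed fee for CPFF, CPIF, and GMP
PCT_FEE = 0.06           # assumed percentage for CPPF
SHARE = 0.5              # assumed contractor share of over/underrun (CPIF)
GMP_CAP = ESTIMATE + FIXED_FEE       # assumed guaranteed maximum price
FIXED_PRICE = ESTIMATE + FIXED_FEE   # assumed lump sum bid

def owner_pays(terms, actual_cost):
    """Total the owner pays under each form of commercial terms."""
    if terms == "CPPF":   # cost plus a percentage of actual cost
        return actual_cost * (1 + PCT_FEE)
    if terms == "CPFF":   # cost plus a fixed fee
        return actual_cost + FIXED_FEE
    if terms == "CPIF":   # incentive fee: savings and overruns are shared
        return actual_cost + FIXED_FEE + SHARE * (ESTIMATE - actual_cost)
    if terms == "GMP":    # cost plus fixed fee, capped at the GMP
        return min(actual_cost + FIXED_FEE, GMP_CAP)
    if terms == "FIXED":  # owner pays the same amount regardless of cost
        return FIXED_PRICE
    raise ValueError(terms)

for actual in (13_500_000, 16_500_000):  # underrun and overrun outcomes
    for terms in ("CPPF", "CPFF", "CPIF", "GMP", "FIXED"):
        paid = owner_pays(terms, actual)
        print(f"{terms:5} cost={actual:,} owner pays={paid:,.0f} "
              f"contractor margin={paid - actual:,.0f}")
```

Under these assumed numbers, the contractor's margin under fixed-price terms swings from a gain on the underrun to a loss on the overrun, while under CPFF the fee is unchanged in either outcome, which is exactly the risk-to-profit relationship discussed at the start of this section.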
Conclusion
This chapter presented a general overview of prime construction-related contracts from the standpoint of the typical parties involved, the nature of the services contracted for, and the commercial terms.
Chapter 4 will augment this general discussion by examining the format and general components of the prime construction contract between owner and general contractor for the performance of construction work. Chapter 5 will then concentrate on the content of the key clauses of such contracts.
Questions and Problems
1. Who are the four typical parties involved in most construction-related prime contracts? What is the nature of the contract services performed for each of the three prime contract types discussed in this chapter?
2. What do the terms turnkey and fast-track mean? Discuss the relationship of each to design-construct contracts.
3. How do the services provided by a construction manager in an owner–CM contract and by a general contractor in an owner-contractor contract differ? Is it ever possible for a single construction contractor to function partly as a CM and partly as a general contractor on the same project?
4. In a typical project where the owner contracts with a CM and the work is performed by a number of individual trade construction contractors, with whom do the trade contractors contract? Who bears the financial risk of performance for any overruns in the estimated value of the payments to the trade contractors—the CM or the owner?
5. Define each and explain the differences between CPPF, CPFF, CPIF, GMP, and fixed-price commercial terms. Discuss the allocation of the risk of performance between owner and the provider of the services for each of these commercial terms arrangements. Does the amount of profit or fee that the provider of the services can reasonably expect to receive relate to the allocation of risk of performance? How?
6. Consolidated Energy Corporation (CE) entered into a contract with the Slippery Hills Utility District (SHUD) to perform a feasibility study for a hydroelectric project on the basis that payment to CE would include actual costs of all direct salaries and expenses required for the study multiplied by a billing rate factor of 1.85. Following receipt of a favorable report (which SHUD and CE considered to complete the first contract), SHUD and CE entered into a second contract under which CE was to completely design a dam and powerhouse and, concurrently with the design work, was to start construction of the project and pursue construction to final completion. SHUD reserved to itself the task of procuring the hydraulic turbines and generators according to CE’s design. CE was to be paid all costs for its work plus a fee of \$5,000,000, with the provision that CE would absorb any costs in excess of a total project cost of \$35,000,000 (including the \$5,000,000 fee, but excluding the cost of the hydraulic turbines and generators).
1. What kind of a contract was the first contract with respect to commercial terms?
2. Briefly discuss the second contract, identifying the type of contract service provided, whether it was a turnkey or nonturnkey contract, and the commercial terms.
3. Had CE agreed to perform the same contract work for the unqualified sum of \$35,000,000, what kind of contract would result from the standpoint of commercial terms?
4. Had SHUD and CE agreed to share equally any cost savings under \$30,000,000 and to each pay one-half of any overruns, what kind of contract would result from the standpoint of commercial terms?
7. A contract was entered into for which an estimate of project costs (exclusive of the contractor’s fee) equal to \$12,250,000 was agreed to by the parties. The contractor’s fee was agreed to be 4% of the estimated cost. The contract further provided that the owner would reimburse all project costs to the contractor as they were expended and pay the contractor’s fee periodically as the work progressed with the proviso that the owner’s obligation to pay costs and fee was limited to a total sum of \$12,962,500. Any expenditures in excess of this total that were necessary to complete the work were to be for the account of the contractor.
1. With respect to commercial terms, what type of contract was this?
2. When the project was completed, the total costs, exclusive of fee, amounted to \$11,275,000. How much did the owner pay for the job?
3. Under the circumstances in (b), how much money did the contractor gain or lose from the entire transaction?
4. If the total project costs had been \$13,625,000, how much would the owner have paid for the job?
5. Under the circumstances in (d), how much money did the contractor gain or lose?
8. A contract was entered into for which an estimate of the project costs (exclusive of the contractor’s fee) equal to \$22,425,000 was agreed to by the parties. The contractor’s fee was agreed to be 6% of the estimated cost. The contract further provided that the owner would reimburse all project costs to the contractor as they were expended and would pay the contractor’s fee periodically as the work progressed. The contract further provided that the owner and contractor would share in any cost overruns or underruns, 60% to the owner and 40% to the contractor.
1. With respect to commercial terms, what kind of contract was this?
2. When the project was completed, the total costs, exclusive of contractor’s fee, amounted to \$20,125,000. How much money did the owner pay for the job?
3. Under the circumstances in (b), how much money did the contractor gain or lose from the entire transaction?
4. If the total costs on project completion, exclusive of contractor’s fee, had been \$24,975,000, how much would the owner have paid for the job?
5. Under the circumstances in (d), how much money would the contractor gain or lose from the entire transaction?
1. Because of a change in political sentiment driven by a competition for funds and a recession economy, this project was terminated at the end of the first phase and remains uncompleted at this writing. | textbooks/biz/Business/Advanced_Business/Construction_Contracting_-_Business_and_Legal_Principles/1.03%3A_The_Prime_ContractAn_Overview.txt |
Learning Objectives
• Owner-contractor contracts
• Fixed-price, competitively bid contracts
• Standard forms-of-contract
• Federal government construction contract
• AIA contracts
• EJCDC contract
• State highway department contracts
• Other agency contracts
• One-of-a-kind contracts
• Bidding documents
• General conditions
• Supplementary conditions
• Specifications
• Drawings or plans
• Reports of investigations of physical conditions
Continuing the overview of construction-related prime contracts presented in Chapter 3, this chapter focuses on the particular construction-related prime contract of interest to construction contractors—that is, owner–contractor contracts for construction services. This focus will be concentrated even further by confining the discussion to fixed-price contracts arrived at by competitive bidding.
Generally, someone who is knowledgeable and comfortable operating in the competitively bid, fixed-price contract environment finds little difficulty when operating under other forms of construction contracts. The reverse is not always true.
Standard Forms of Contract
A number of standard forms-of-contract for fixed-price, competitively bid prime construction contracts are widely used today. A discussion of the more prominent of these follows.
Federal Government Construction Contract
Foremost among standard forms-of-contract is the federal government construction contract. This form of contract is normally used by all branches of the federal government for construction work. Prominent examples of different federal agencies using this contract include the General Services Administration, the Bureau of Reclamation, the U.S. Army Corps of Engineers, the U.S. Navy Facilities Engineering Command, the U.S. Bureau of Public Roads, and the National Park Service. The actual contracts, depending on the particular federal agency, all differ slightly in the wording of the basic provisions, and the titles used for the contract document divisions vary. However, the contracts are of the same type and contain the same basic provisions.
A typical instance where this form-of-contract was used is the U.S. Army Corps of Engineers Lock and Dam No. 26 project on the Mississippi River. This immense public works project, north of St. Louis, Missouri, involved a series of major contracts beginning in the early 1980s. Bids were taken for the third contract of the series on August 23, 1985, three to four months after it was advertised, so that bidding contractors would have time to prepare their fixed-price bids. The bidding documents consisted of two four-inch-thick volumes of technical specifications, four two-inch-thick volumes of drawings, and seven or eight extensive addenda, each of which made numerous changes in all of the other documents, including previously issued addenda. Obviously, preparing a fixed-price bid for this contract was a complicated matter requiring hundreds of hours. Smaller projects entail fewer documents and require less effort to prepare a bid. But regardless of the size of the federal project, the essential contract provisions under which the project is to be built will be the same. The larger, fixed-price federal contracts that contain a schedule of bid items are the most complex and offer the best example of the variety of problems that can occur. Five bids were received for this Lock and Dam No. 26 contract, ranging from a low bid of \$227 million to a high of \$288 million.
American Institute of Architects Contracts
A second important standard form-of-contract is the American Institute of Architects (AIA) Standard Form of Agreement Between Owner and Contractor. The two companion documents necessary to form the complete contract are AIA Form A-101 and AIA Form A-201. This contract is by far the most widely used form for fixed-price building construction work in both the public and private sectors, particularly the private sector. Entire texts have been written by legal scholars on this particular contract.[1]
Associated General Contractors Contracts
The AGC Standard Form Prime Contract Between Owner and Contractor is recommended for use by the Associated General Contractors of America (AGC). This contract is commonly used on private work and is suitable for both building construction and engineered construction projects. Its usage is less broad than that of the AIA contract.
Engineers Joint Contract Documents Committee Contract
Another form-of-contract, the Engineers Joint Contract Documents Committee (EJCDC) Contract, is used primarily for engineered construction in the private sector. Its use has also been endorsed by the Associated General Contractors of America.
State Highway Department Contracts
Another broad class of competitively bid, fixed-price contracts consists of the state highway department contracts of the various states. These contracts tend to be similar in format, no doubt because the construction work within each state is similar. The influence of the Federal Highway Administration (FHWA) has forced this similarity. The format usually consists of an infrequently published “bible,” which contains all general provisions and standard technical specifications of the state. Often a revision manual will be periodically published with changes. Then, in addition to the “bible” and its revision manual, each particular project will have its own set of “special provisions” that apply to that particular project. The special provisions contain site-specific provisions and information and any further changes to the “bible” as it relates to that specific project. The technical requirements of these state highway department contracts tend to be similar, even though some general provisions may vary. These contracts, like most others, are written by the owner agencies. From the standpoint of the legal rights afforded the contractor, these contracts vary considerably. Contractors who bid frequently in a particular state are aware of the provisions of that state’s contract and know what to expect.
Other Agency Contracts
Many other agencies traditionally build infrastructure systems over time through a series of recurring contracts for similar construction work. Examples are the rapid transit districts and water and sewer districts of the large metropolitan centers, as well as state agencies (other than highway departments), such as the California Department of Water Resources and the California Department of Architecture. Each agency tends to create its own unique form of prime construction contract, often based on the federal government contract, which it then uses over and over. Construction contractors who frequently bid to one or more of these agencies become familiar with the terms of the particular form that each agency uses.
One-of-a-Kind Contracts
Occasionally, contracts are created for a particular project. These one-of-a-kind contracts tend to vary widely. Little about them is standard or traditional, either in format or detailed provisions. Since contracting parties can agree to anything that is not contrary to law, these isolated, individual, one-of-a-kind contracts can take almost any hybrid form that the parties concoct. They are limited only by the imagination of the parties who draft them, each of whom attempts to secure the most favorable agreement possible from that party’s point of view. Disputes that arise from one-of-a-kind contracts are usually more difficult to resolve because there is no past pattern of experience, as is the case with one of the “tried-and-true” standard forms-of-contract.
Typical Documents Comprising the Contract
Fixed-price, competitively bid contracts are composed of certain fairly typical documents. With the exception of one-of-a-kind contracts, the major categories of most contracts of this type are the following:
• Bidding documents, consisting of the “Invitation to Bid,” the “Instructions to Bidders,” and the “Bid Form”
• General Conditions of Contract
• Supplementary Conditions of Contract
• Specifications
• Drawings
• Reports of investigations of physical conditions
Some contracts may not contain all of these categories but, with the exception of one-of-a-kind contracts, none is likely to contain material that won’t logically fit into one category or another.
Bidding Documents
The first category, bidding documents, normally begins with an advertisement, originally discussed in Chapter 1. The back section of contemporary industry periodicals, such as the Engineering News Record, contains a plethora of bid advertisements with every new issue. The advertisement identifies the project for which bids are desired, the owner, the time and place of the bid opening, and instructions to potential bidders on how to obtain a full set of contract documents.
The second document in the bidding group is usually the Invitation for Bids (IFB) or, sometimes, a Request for Proposals (RFP). The federal government and some other owners use the IFB when bidders must strictly conform to the drawings and specifications and the RFP when bidders may propose variations for the project. Both typically include the following:
• A description of the contract work
• The identity of the owner
• The place, date, and precise time of the bid opening
• The penal sum of the required bonds (bid bond, performance bond, and labor and material payment bond)[2]
• A description of the drawings and specifications, their cost, and where they may be obtained
• The length of time after bid opening that bids will be deemed good (duration of bids)
• Rules regarding the withdrawal or modification of bids and late bids
• Information regarding any planned pre-bid conferences and pre-bid site inspections
• Particular requirements of law of which the owner wants bidders to be aware
• Any special instructions, other requirements, or other information that the owner wants to point out to bidders
In addition to the IFB or RFP, the contract documents may also contain a section called Instructions to Bidders. When used, this section is an adjunct to the instruction portion of the IFB or RFP. Sometimes all necessary instructions are contained within the IFB or RFP, and there is no separate Instructions to Bidders section. More logically, the Instructions to Bidders is a separate document, and the IFB or RFP contains all of the other necessary but noninstructional information that a bidder needs.
In every case, the contract documents contain the Bid Form. Bidders complete this document, sign, seal, and turn it in at the appointed place, prior to the deadline set for the submittal of bids. The fully executed Bid Form constitutes the “offer” element necessary for contract formation, discussed in Chapter 2. Note that the Bid Form must be completely filled out, signed, and sealed, all in accordance with the IFB or RFP and the Instructions to Bidders to constitute a responsive bid. The contents of the Bid Form usually include the following:
• A definitive statement of the general terms and conditions of the offer. This statement is normally unilaterally determined by the owner and is preprinted on the form.
• The format of the commercial terms applying to the offer. Again, this format is normally determined unilaterally by the owner either as a single lump sum total price or as a schedule of bid-item prices. In the first case, the bid form contains a single blank space in which the bidder is instructed to enter a single lump sum price for the entire project. In the second case, the form contains a numbered series of all bid items for the project, each consisting of a description of the work for discrete parts of the project and either blanks for unit prices and extensions against a preprinted quantity of work or a single blank for a lump sum price. The total bid in this case is the sum of the unit price extensions and lump sum prices. With either a single lump sum format or a schedule-of-bid-items format, the bidder fills in the blanks for defining the precise commercial terms of the bid.
• Supplementary information that the owner may want to know about the bidder. This usually consists of information about the bidder’s financial strength and past experience.
• Additional information for federal bids. The bid form for federal contracts contains a number of “Certifications and Representations” in affidavit form, such as noncollusion and nonsegregated facilities affidavits, required to comply with federal law.
• Affirmative action requirements for public projects. Bid Forms for public projects usually require written goals and timetables for meeting the requirements of equal opportunity legislation and minority business enterprise/women business enterprise requirements.
• Bid security. Finally, the Bid Form must contain the required bid security, usually in the form of a bid bond issued by an approved surety. Sometimes, a certified check must be presented for the bid security.
Oddly enough, private sector bids often require much more supplementary information on the Bid Form than do public sector bids. And, among public projects, Bid Forms for federal contracts usually require less supplementary information than the average.
A final interesting point concerning bidding documents is that the AIA approach excludes the bidding documents from the contract. Article 1 of AIA A-201, General Conditions of the Contract for Construction, states:
The Contract Documents do not include Bidding Documents such as the Advertisement or Invitation to Bid, the Instructions to Bidders, sample forms, the Contractor’s Bid or portions of Addenda relating to any of these, or any other documents, unless specifically enumerated in the Owner–Contractor Agreement.
Why would the AIA wish to exclude the bidding documents from the contract? The rationale seems to be that the eventual contract is considered to be the end result of a negotiation, not the result of a binding firm-price bid. The bid is regarded as merely the starting point for the ensuing negotiation. Most other forms-of-contract include the bidding documents as part of the contract.
General Conditions of Contract
The second section of the documents that normally comprise the contract is the General Conditions of Contract, often referred to simply as the General Conditions, or sometimes, General Provisions. Here are found very definitive statements, clause by clause, of all general terms and conditions that govern the performance of the contract work. In the case of the federal government and other agencies that frequently contract for construction work, the general concept of this section of the documents is to include all clauses that will remain the same, contract after contract, changing very infrequently. Many of these standard clauses in federal contracts pertain to the requirements of the Federal Acquisition Regulations, which by law must be included in every federal construction contract.
Supplementary Conditions of Contract
In addition to the General Conditions or General Provisions, most construction contracts contain a section called Supplementary Conditions or Special Conditions. The idea of this section is to include clauses dealing with general matters that apply to the instant contract only—that is, those that are either site-specific or in some other way apply only to the specific contract. Such matters might better be called “project-specific” matters. Some forms of contract do not have a Special (or Supplementary) Conditions section. Instead they include all general matters, whether standard or project-specific, in the General Provisions section. It is also common to include general project-specific matters in Division 1 of the Specifications section. In the Uniform Construction Index (UCI) form of technical specifications, which is widely used, Division 1 is titled “General Requirements.” Thus, to be entirely sure that nothing of a general nature has been overlooked in a particular case, it is necessary to carefully read the General Conditions, the Supplementary Conditions (if included), and Division 1 of the Technical Specifications.
One important area of the Supplementary Conditions for contracts where federal funds are involved is the Davis-Bacon Wage Determination originally discussed in Chapter 1. By federal law, wages paid the workers on any such project must be at least as high as listed in the Davis-Bacon Determination for each trade classification involved in the work. Even where federal funds are not required, many states require that prevailing wages be paid on public work. These rates are set by a commissioner on a project-to-project basis at a level he or she has determined through investigation to equal the “prevailing” wage for each classification of work in the locality of the project. This determination is obviously significant to contractors interested in submitting a bid. For example, if the determination is set at low “open-shop” rates, potential bidding contractors, bound by union labor agreements that require payment of higher rates, know that they are competing at a disadvantage and might be well advised not to bid at all. On the other hand, if the Davis-Bacon commissioner has determined the “prevailing” rates to be union labor agreement rates, all bidders are on a more equal footing. Open-shop or merit-shop contractors will have to pay the same rates as union contractors.
Specifications
The technical requirements for each division of work in the contract will be completely detailed in that section of the contract document called the Specifications. The format usually conforms to the Uniform Construction Index, which is understood by virtually every segment of the industry. Depending on the size of the contract, the Specifications can be voluminous. It is necessary that completely definitive requirements be carefully stated so that both parties to the contract have a mutual understanding of the precise technical standards the project work must meet.
Drawings
The next important section of the contract documents is the Drawings, which complement the Specifications. The Drawings must be sufficiently complete to adequately show exactly what is to be built. Certain features of the work may be shown in fairly general terms, with the requirement stated that the contractor must prepare detailed shop drawings that conform to and augment the general contract drawings. These must be submitted to the owner or the owner’s engineer for approval prior to fabrication of the material covered by the shop drawings. For example, a contractor may supply detailed bar-bending schedules and placing drawings for reinforcing steel and structural steel fabrication and erection drawings, including the connections. However, the basic contract drawings advertised for fixed-price bids must be sufficiently clear and accurate so that, if contractors carefully conform to them, a satisfactorily constructed product will result. If either the Drawings or Specifications do not meet this standard, the owner may incur severe liability under the Spearin Doctrine, which is discussed in Chapter 13.
Reports of Investigations of Physical Conditions
An additional and final section that may or may not be included as an integral part of the contract documents consists of various reports of investigations of physical conditions at the project site. These reports often concern geotechnical aspects of subsurface soil or rock conditions. They usually appear in the form of written evaluations and soil boring logs describing subsurface conditions. Other examples are weather records and, in the case of projects on or near streams and rivers, stream flow hydrographs. These reports are probably the more common examples of this type of information, but basically any included information describing physical conditions at the site falls into this category. A more detailed discussion of these kinds of reports and whether or not they are considered to be part of the contract is included in Chapter 5.
Conclusion
This chapter focused on the format and the general contents of the major component sections of prime contracts between owners and general contractors for the performance of construction work. The prominent forms-of-contract commonly used today were also briefly discussed.
Chapter 5 will show why contractors need to understand the nature of the potential contract before they commit to any particular construction project. Also, the details of the critical or “red flag” clauses contained in such contracts will also be analyzed from the point of view of the bidding contractor.
Questions and Problems
1. What is the historic, traditional form of contract upon which the present-day construction industry is based? What are the seven forms of contract discussed in this chapter? Why might a one-of-a-kind contract cause later trouble?
2. What major categories could you expect to find in the documents particularly related to bidding for a typical competitively bid fixed-price contract and what type of information or requirements are contained in each? Would every set of contract documents be likely to contain a General Conditions section? A Supplementary or Special Conditions section? In what three possible places in a set of typical contract documents would you look to be certain that all matters of general importance (other than technical matters and details on the drawings) were examined and noted? Which document part defines the offer element necessary for contract formation?
3. Why is the Davis-Bacon Determination important in a set of contract documents? Where would you expect to find it? Would documents for every project be expected to contain a Davis-Bacon Determination? If not, in which category of projects would you expect to find it?
4. What is the attitude concerning bidding documents held by the AIA? Do most other forms of contract reflect the AIA attitude?

Questions 5 and 6 assume that the reader has access to a set of typical federal contract documents for an actual project and to AIA Document A-201 (General Conditions of Contract).
5. With respect to the federal contract, determine the following and cite the section of the documents from which you obtained the answer (place the appropriate abbreviation from the following list in parentheses at the end of each answer):
[table id=1 /]
1. What is the date and time of bid opening?
2. What is the penal sum of the required performance bond?
3. Will there be a pre-bid conference?
4. What is the number of days that bids must be held open for acceptance?
5. Do bidders have to state whether they are a small business concern?
6. Do bidders have to certify that they do not maintain segregated facilities?
7. How many milestone completion dates are there?
8. What is the amount of liquidated damages for each day that each milestone is late?
9. Is there a clause pertaining to suspension of work?
10. Does the government have the right to occupy a completed part of the work?
11. Is there a clause dealing with variations in work quantities?
12. What is the date of the Davis-Bacon Determination?
13. How long does the contractor have to perform the entire project?
14. How many bid items are there?
15. Is there a clause pertaining to changes in the work?
16. Is there a clause dealing with differing site conditions?
17. How often can the contractor expect to receive progress payments?
18. Is there a clause pertaining to default terminations and excusable delay?
19. Is there a clause for termination of the contract for the convenience of the government?
20. Is there a clause concerning contract disputes?
6. With respect to the AIA document A-201 (General Conditions of Contract), determine the following and indicate where in the document you obtained the answer. At the end of each answer, cite the source in the document by writing in parentheses the article and subarticle.
1. Is the owner empowered to stop the work?
2. Is the owner empowered to terminate the contract?
3. Does this form of contract contemplate or imply a fixed time for completion of the work?
4. Is the contractor required to indemnify and hold harmless the owner?
5. Does this contract contemplate changes in the work?
6. Does this contract provide relief for the contractor’s failure to perform due to conditions beyond the contractor’s control?
7. Does this contract provide that either the owner or contractor can make a claim against the other for damages suffered?
8. Is the contractor likely to be liable to the owner for damages caused by late completion?
9. Does the contract contain the equivalent of the differing site conditions clause found in a federal contract?
10. Is the contractor required to carry insurance?
11. Does the contract imply that there could be payments made to the contractor by the owner in the event of owner-caused delays?
12. Does the contract provide for progress payments?
1. See Sweet, Justin, Sweet on Construction Industry Contracts: Major AIA Documents (New York: John Wiley & Sons, 1987).
2. Bond requirements for fixed-price, competitively bid contracts are discussed in Chapter 9.
Key Words and Concepts
• “Red flag” clause
• Dispute resolution clause
• Sovereign immunity
• Changes clause
• Differing site conditions clause
• Delays and suspensions
• “No-damages-for-delay” clause
• Default terminations
• Convenience terminations
• Time provisions
• Notice to proceed provisions
• “Stepped” notices to proceed
• Single completion time
• Milestone completion times
• Liquidated damages provision
• Actual damages
• Availability of the site
• Restrictions to site availability
• Payment provisions
• Payment frequency
• Payment for materials and fabricated items
• Retention
• Mobilization allowance
• Final payment
• Exculpatory clauses
• Disclaimers
• Attitude of courts to disclaimers
• Present trend on underground construction
• Geotechnical design summary report
• Geotechnical baseline report
• Insurance requirements
• Surety bond requirements
• Indemnification
• Basis of quantity measurement
• Variation in quantities clause
• Equal employment opportunity/disadvantaged/women-owned business requirements
• Escalation provisions
Before preparing a cost estimate and submitting a competitive bid for a contract, a contractor must first be sure that the various sections of the contract documents are complete. Chapter 4 identified the major categories of typical contract documents and discussed the general nature of each.
Once this first step has been completed, a prudent contractor bidder will do much more before making a decision on whether to proceed. It is imperative to know what kind of a contractual situation will be encountered if a bid is submitted and the contract awarded. Will the contract be fair, or will it be heavily biased in favor of the owner? Aside from the financial “risk of performance” associated with the actual construction work, what contractual risks lie buried in the contract language?
It took a wrenching personal experience for this author to appreciate the true consequences of failure to identify properly and answer these questions at the time of bidding for a large bridge substructure project. In that instance, the fact that the owner, a state department of transportation, was shielded by the doctrine of sovereign immunity applying to all contracts with that state was not discovered until long after the project was bid, the contract entered into, and major disputes had developed in the course of the work. The eventual resolution of the contractor’s claims, which was not appealable, was not obtained until 16 years after the completion of the work.
So, the lesson to be learned from this chapter is how to avoid unknowingly assuming the risks inherent in such situations.
Threshold “Red Flag” Clauses
An old adage states: “Do not sign a contract until you have read and understood every word.” Today, literal compliance with this rule is not practical, even if one wanted to do that. Reading contract language is a tedious, sleep-inducing activity, and most people hate to do it. Also, there is an obvious difference between signing a contract and committing to the preparation of an estimate and bid. However, submittal of a bid places the contractor in a position where failure to proceed with signing the contract and completing the project according to the contract terms can become extremely costly. This is true because bid security is normally required in the case of fixed-price construction contracts. Further, the preparation of a cost estimate for a fixed-price construction project is time consuming and expensive. Although a potential bidder can always drop further consideration of a project after work on the estimate has begun, the time and money expended up to that point is lost, and failing to proceed after starting can be destructive to a construction organization’s morale. So the old adage might just as well be stated: “Do not undertake a cost estimate and start bid preparation until you have read and understood the potential contract.” How can one approach this ideal? One good way is to seek out, carefully examine, and understand the provisions of certain key contract clauses, often referred to as “red flag” clauses. Generally, these clauses tell bidding contractors the kind of contractual situation they will encounter if they are successful in securing that particular contract.
Experienced construction executives probably would agree on the choice of clauses included in this chapter even if they did not agree on the precise “pecking order” in which the clauses are listed. Each “red flag” clause will be discussed from the standpoint of what the clause typically provides and why it is important. In this discussion, clause titles are generic. Actual titles may vary from contract to contract.
Dispute Resolution and Governing Law Clause
Bidding contractors need to be aware of the contract dispute resolution provisions. If disputes arise, who will resolve them and by what set of rules?
A well-drafted dispute resolution clause spells out precisely what steps the contractor and owner are required to take to resolve disputes between them, usually defining time limits within which various procedural steps must be initiated. Some contracts specify straightforward and reasonably simple procedures, whereas others are excessively complicated and time consuming. In extreme cases, the contract states that the architect/engineer’s or owner’s decision is final and binding, ostensibly leaving the contractor no recourse in the event of disagreement with that decision. Usually, such A/E and/or owner decisions will be binding on the contractor on matters having to do with the standard of acceptability of the performed contract work. Whether such decisions will be supported by the courts with respect to “questions of law” depends on whether the work is public or private and on the law of the state in which the project is located. Federal contracts do not contain clauses providing that the engineer’s decisions are final and binding.
A well-drafted dispute resolution clause also states the means by which the dispute will eventually be resolved if the parties cannot come to an agreement. The normal possibilities are arbitration (usually under the auspices of the American Arbitration Association), submittal to an administrative board of the owner-agency involved, submittal to a specially appointed contract disputes review board, or a formal trial in a court-of-law. In the latter case, the contract may specify the particular court that will try the case. Of special concern from the contractor’s standpoint are the contract provisions in the few states that have not waived sovereign immunity (see the introduction to this chapter). In these instances, the contractor may not sue the state in a court-of-law on a matter arising from the contract. The only procedure open to the contractor is referral to a claims court, controlled by the state with whom the contractor has the dispute. There is no appeal. If a monetary award is made to the contractor, the state may not be required to make payment until and unless the state legislature passes a specific bill appropriating the necessary funds.
Finally, a well-drafted clause states what legal rules will apply—that is, what the governing law will be. The federal government contract clause states that disputes will be settled in accordance with the Contract Disputes Act of 1978, a federal law. The AIA contract clause states that the law of the state where the contract is performed will apply. The importance of this clause cannot be overemphasized because the laws of different states vary considerably.
The subject of dispute resolution is more fully discussed in Chapter 23.
Changes Clause
Every construction contract today contains a changes clause. However, the detailed provisions of the clause vary from contract to contract. The clause generally defines the owner’s right to change the contract unilaterally, places limitations on that right, establishes the contractor’s duty to perform the change, and the contractor’s right to be paid for performing the change. These details range from the provisions of the changes clause in the federal government contract, which are broad and evenhanded, to provisions in some contracts that are grossly unfair and which place the contractor at a distinct disadvantage when the owner makes changes. Chapter 14 focuses exclusively on the detailed provisions of the changes clause.
Differing Site Conditions Clause
The differing site conditions (DSC) clause is probably next in importance. Not all construction contracts contain a DSC clause. Many contractors put this clause at the head of the “red flag” list and will not submit a bid if the contract documents do not include a fair and comprehensive DSC clause. This clause is sometimes called a “changed conditions” clause. In the case of the AIA contract, the clause is called “concealed conditions.”
The relevance of this clause to underground construction is discussed in Chapter 4. This clause also normally applies to any physical site condition found during contract performance that materially differs from those indicated in the contract documents or from conditions normally encountered in the type of work of the contract. The detailed provisions range from the DSC clause in the federal government contract, which is comprehensive, fair, and serves as a model clause for the industry as a whole, to contracts with clauses containing less explicit language, to contracts containing no DSC clause at all. Chapter 15 is entirely devoted to the subject of differing site conditions and the operation of this clause.
Delays and Suspensions of Work
Another important “red flag” clause deals with delays and suspensions of work. There are several important aspects of this subject. First, construction contracts usually impose severe liabilities on the contractor because of generally stringent requirements for work to meet narrow technical standards within fixed time requirements. Such is the nature of construction contracting. However, both owners and contractors understand that certain conditions may occur under which the contractor’s failure to perform within the required time limits will be excused. Such conditions are sometimes called conditions of force majeure and normally would include acts, or failures to act, by the owner or others and “acts of God” that delay or prevent the contractor’s performance. The delays and suspension of work clause usually carefully enumerates what constitutes a reason for excusable delay for that particular contract. Bidding contractors need to make certain that this list is broad enough to include situations that the contractor knows from past experience may occur.
Again, the federal contract provisions are broad and fair and afford contractors time relief for delays that are truly beyond their control. Other contracts may designate far fewer situations for which relief will be granted, and some even require the contractor to complete the work by a certain stated date, called a “date certain,” under any and all circumstances; in other words, the contractor is granted no relief whatsoever.
A second question arises regarding responsibility for extra costs arising when the owner either delays or suspends the work. Who pays? The federal suspension of work clause and other similar clauses contain provisions that are complete, explicit, and generally state that the government pays if the government causes a delay that damages the contractor. Other contracts range from some that are completely silent on the subject, to those containing “no-damages-for-delay” clauses, which state that a contractor’s relief for an owner-caused delay to the contractor’s operations is limited to an extension of time only. Some contractors will not bid on contracts containing a no-damages-for-delay clause. No-damages-for-delay clauses are further discussed in Chapter 16.
Terminations and Partial Terminations
An important “red flag” clause closely related to the delay and suspension of work clause is that dealing with terminations and partial terminations. Here, we are talking about the owner’s right, not to delay the work, but to unilaterally terminate all of the work of the contract or to terminate some divisible part of the work. Most construction contracts give the owner this right under either of the following sets of circumstances. First, the owner may terminate the contract when (1) the contractor’s performance falls far behind a reasonable time schedule, (2) the contractor’s work fails to meet contract quality requirements, or (3) the contractor becomes financially insolvent. Second, the owner may terminate the contract without disclosing any reason.
The first set of circumstances constitutes a default of the contract by the contractor. Most contracts give the owner broad powers in these circumstances to remove the contractor from the site, to take over the equipment and materials on the site (whether paid for or not), and to complete the work or cause it to be completed by others. If the owner’s costs in completing the work exceed the unpaid portion of the original contract price, the difference must be paid by the contractor. Such terminations are called default terminations.
The second kind of termination is called a termination for the convenience of the owner or, more simply, a convenience termination. Contracts differ greatly in regard to this kind of termination. The difference is not in providing the right of the owner to effect such terminations (almost all present-day contracts provide for this), but in the provisions dealing with the final payment that the contractor receives if the owner does decide to terminate the contract or part of the contract short of completion. Many thorny questions arise. The contractor may be heavily committed to the project when suddenly, without warning, the contract is terminated. The contractor normally will have ongoing commitments in the form of purchase orders and subcontract agreements that must also be terminated and for which the contractor will incur unavoidable costs. The owner is undoubtedly due a credit for the value of the uncompleted work, but how large a credit? Is the contractor entitled to anticipated profit on the uncompleted work? Resolution of these and many similar questions never comes easily. Many contracts are completely silent on determination of a fair and equitable settlement in the event of a convenience termination, in essence leaving the parties to “fight it out.” On the other hand, the federal contract and other similar contracts deal more or less effectively with this subject, depending on the particular contract. Since a convenience termination is an act completely beyond the contractor’s control and must be regarded as a definite possibility, the detailed provisions dealing with payment are vitally important. Chapter 16 deals with some of the unique problems associated with both default and convenience terminations.
Other Important “Red Flag” Clauses
By the time that a contractor understands the threshold “red flag” clauses just discussed, he or she will have a fairly good indication of the type of owner and type of contract involved with the project. If the contractor is still interested in submitting a bid, the balance of the “red flag” clauses provides further critical information. These clauses, some general and others highly project-specific, include the following:
• Time provisions
• Liquidated or actual damages for late completion
• Site availability and access to the site
• Payment and retention provisions
• Reports of physical site conditions
• Exculpatory clauses
• Insurance and bond provisions
• Indemnification clauses
• Measurement and payment provisions
• Variation in quantities
• Equal employment opportunity and disadvantaged business assistance requirements
• Escalation provisions
The detailed provisions of many of these clauses are complex and their potential impact on the contractor’s cost of performance can be enormous. Each will be discussed in some detail.
Time Provisions
The time provisions are project-specific and are much broader than simply a statement of how much time that the contractor has been given to complete the work. Contractors should look for at least the following additional information:
1. What are the notice to proceed (NTP) provisions? Will the contractor receive NTP reasonably promptly after signing the contract, or does the owner have the right to delay giving NTP, perhaps indefinitely? In some instances, a delayed NTP may be an advantage to the contractor, but normally, once the contractor is committed to the project, starting work as soon as possible is to the contractor’s advantage and delaying it is a great disadvantage. Many contracts provide that NTP will be given within a specifically stated number of calendar days from the bid date or from the date of execution of the contract, whereas others are silent on this point. Another question is whether the contract provides for more than one, or a series of “stepped” NTPs, each NTP releasing only limited parts of the project or limited activities that the contractor may perform under that particular NTP. Many difficult contractual problems can ensue when stepped NTPs are used or when the owner issues NTPs that are qualified. Both practices severely limit the contractor’s flexibility.
2. What is the time period after NTP within which the contract work must be completed? The bidder should not simply assume that the time period stated in the contract is necessarily reasonable. The contractor will normally be held to that period whether it is reasonable or not, so if the stated period appears to be insufficient, the bidder had best not proceed further unless extra costs are included in the bid estimate to meet the required schedule. Otherwise, the bidder should expect to pay liquidated damages for late completion. Some contractors will not submit a bid if they believe that the time allowed for contract performance is unreasonably short.
3. Is a single completion time for the total project specified, or is a series of “milestones” listed that must be met within specified time limits, each milestone completion time pertaining to a discrete part of the contract work? Contractors frequently find that they can meet the final completion deadline without undue difficulty but that intermediate milestones prove difficult and costly, particularly when very early milestone dates are specified. If milestone dates are specified, bidding contractors must take note and analyze them carefully.
Liquidated or Actual Damages for Late Completion
Closely related to the time provisions are other project-specific provisions stating the type of contractor liability that will result in the event of late completion. If the required contract work is not completed within the stated period, the contract has, in effect, been breached by the contractor, entitling the owner to be paid damages. The monetary amount will be determined in one of two very different ways.
1. Most construction contracts include a liquidated damages provision. This provision states that, for every day the project work remains uncompleted beyond the time allotted for contract performance, the contractor shall pay the owner a stated dollar amount. The concept of liquidated damages is that the true or actual damages suffered by the owner are often difficult to determine accurately. Therefore, to obviate the need for making this determination, the owner and contractor agree at the time of contract formation on a dollar amount per calendar day that will be accepted by both parties as proper and appropriate recompense for any delay in contract completion. If the contractor completes the work late, the owner does not have to prove that any damages were thereby suffered but is entitled to the liquidated damages figure stated in the contract. If the contract provides for milestone completion dates in addition to the final completion date, liquidated damages may apply separately to each milestone date, with the total liquidated damages liability of the contractor being cumulative. Obviously, the magnitude of the stated daily liquidated damage figures may be of great concern to bidding contractors. In the writer’s experience, these have varied from less than \$50 per day to over \$50,000 per day.
2. Monetary damages due the owner in the event of late completion may also be determined by proof of actual damages. If the contract does not contain a liquidated damages provision, the owner is entitled by application of common law principles to be paid the actual monetary damages suffered due to late completion. Although a contractor may finish a project late and pay nothing because the owner suffered no consequential damages, the owner’s damages in the event of late completion are sometimes immense and can be proven in court. Such contracts can represent a far greater financial risk to the contractor than if the contract provided for liquidated damages, depending, of course, on the stated daily liquidated damages figure. Thus, the absence of a liquidated damages provision in the contract is not necessarily beneficial from the contractor’s standpoint. If a liquidated damages provision is not included, bidding contractors would be well advised to determine the potential magnitude of actual damages the owner might suffer in the event of late completion before proceeding further. Liquidated damages are discussed in further detail in Chapter 17.
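The cumulative-milestone arithmetic described above can be sketched briefly. All milestone names, daily rates, and day counts below are hypothetical, chosen only to illustrate how separate milestone liabilities accumulate:

```python
# Illustrative sketch of cumulative liquidated damages across milestone
# dates. All dollar figures and day counts are hypothetical, not drawn
# from any actual contract.

# (milestone name, liquidated damages per calendar day, days late)
milestones = [
    ("Milestone 1 - substructure complete", 1_500, 12),
    ("Milestone 2 - superstructure complete", 2_500, 0),
    ("Final completion", 5_000, 8),
]

total = 0
for name, rate_per_day, days_late in milestones:
    damages = rate_per_day * days_late
    total += damages
    print(f"{name}: {days_late} days late -> ${damages:,}")

# The contractor's total liability is cumulative across milestones.
print(f"Total liquidated damages: ${total:,}")
```

With these assumed figures, being 12 days late on the first milestone and 8 days late on final completion produces a combined liability of \$58,000, even though the second milestone, met on time, incurs nothing.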
Site Availability and Access to the Site
Another project-specific matter concerns the availability of the site to the contractor. An implied obligation of the owner in every construction contract is to make the project site and reasonable access to it available to the contractor at the time of notice to proceed without restrictions unless the contract contains provisions to the contrary. Some contracts do contain such provisions, placing restrictions either on the availability of the site or on the means of access to it. The restrictions may state that the entire site, or separate portions of the site, will be made available at a time considerably later than the date of notice to proceed and/or that the contractor must complete the work and relinquish the site or sites by stipulated dates that are earlier than the final completion date of the contract.
A large contract for rapid transit construction in Atlanta in 1968 illustrates the problems inherent in stepped or phased construction site release dates. In that case, the bid documents provided for a highly fragmented schedule of release dates for construction work areas due to difficulty encountered by the Metropolitan Atlanta Rapid Transit Authority (MARTA) in obtaining title to the property involved. Included in these stepped work area release dates were specified dates for three areas contiguous to the main underground subway station: one area for ventilation ducts and the remaining two for auxiliary entrances to the station. All three were to be released to the contractor at dates stated in the contract that were considerably later than release of work areas necessary for the construction of the main underground station itself. After construction started, MARTA’s difficulties continued, delaying release of these three areas beyond the dates stated in the contract, which created great uncertainty in scheduling and executing the contractor’s work activities. For many months, it was not known when, or even if, these three areas would be released. After delaying station construction significantly, the area for the ventilation ducts was eventually released, but the two areas for auxiliary station entrances were never released and eventually were deleted from the contract. The end result of delay and uncertainty of release for these three areas was that the project was in a continual state of flux for a considerable period of the contract, which seriously affected the contractor’s ability to schedule and carry out the underground station work. Considerable extra costs were generated which, fortunately, were eventually recognized and reimbursed by MARTA.
Another Georgia project involved the construction of 16 freeway bridges for the state department of transportation. Each bridge occupied an individual work site that the bidding documents indicated would be released to the contractor for bridge construction at stated dates. The contractor’s bid was prepared in anticipation that the individual sites would be released for construction in the order and by the dates stated in the bidding documents. After submitting the low bid and entering into the contract, the contractor commenced construction according to the bid plan, which relied on the bridge site release sequence and dates stated in the contract. Only two of the 16 bridge sites were released by the dates stated, the remainder being released from one to nine months late. The contractor’s planned work schedule and sequence of crew and equipment movement from site to site were totally disrupted, forcing the contractor to build the job on a continually changing schedule as first one and then another of the bridge sites were released late and out of sequence in a completely unpredictable pattern. The contractor filed claims for the greatly increased performance costs due to disruption and delay to the planned work program upon which the bid was based. The claims were denied, and the contractor sued in the Georgia courts.
Unfortunately for the contractor, the Georgia Court of Appeals reversed a trial court jury verdict that awarded the contractor substantial breach of contract damages for the state’s failure to release the bridge sites as stated in the contract.[1]
Apparently, in Georgia, contractors are not entitled to rely on these types of representations in bidding documents. Aside from the correctness or the incorrectness of the Georgia Court of Appeals decision, this case dramatically illustrates the risk posed to bidding contractors when the bidding documents indicate that the work site will be turned over to the contractor for construction in a piecemeal fashion.
Another form of site availability restriction is limiting the contractor’s right to occupy the site to certain days and to certain hours of the day, often to periods in the middle of the night. Such provisions are common for contracts for work in city streets, such as underground utility work or paving work. Similar restrictions may be placed on the availability of the means of access to the project site or sites, even to the extreme of stating that the contractor must make their own arrangements for access to the site. Another common form of restriction is to limit the hours in the workday during which certain kinds of construction activity, such as operating heavy haulage equipment or conducting blasting work, are allowed.
These and similar restrictions can affect the contractor’s cost of performance enormously. Such restrictions may not be immediately apparent when a bidder skims through a set of contract documents. However, a contractor bidding on a project must examine the documents carefully and clearly understand any restrictions to avoid later misfortune.
Payment and Retention Provisions
The general importance of the payment provisions is obvious. However, several specific aspects are somewhat less so. All large construction contracts call for successive progress payments to be made to the contractor, each based on an estimated value of the work satisfactorily completed during the progress payment time period. The important issues here are the duration of the progress payment period and how long after the end of that period the contractor can expect to receive payment. Provisions vary. A monthly payment frequency is common, as is the provision that the contractor receives payment within 30 days from the date that the owner’s architect or engineer certifies the correctness of the estimate of work completed. In extremely large projects, a two-week progress payment period may be stated. Less obvious are the provisions governing the extent to which payment is made for the value of materials and fabricated products, such as structural steel and precast concrete, procured and paid for by the contractor but not yet incorporated into the work. Many contracts include the value of such items in progress payments, usually at less than their full value, with the balance of the contractor’s purchase price to be paid when the materials have been fully incorporated into the work. More restrictive payment terms do not recognize any value of such materials until they have been fully incorporated into the work.
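As a rough sketch of the financing lag these provisions create, the following uses hypothetical dates and assumes a 5-day certification interval; neither figure comes from any particular contract:

```python
# Sketch of the cash lag between incurring cost and receiving a progress
# payment under a monthly payment period, with payment due 30 days after
# the engineer certifies the estimate. The dates and the 5-day
# certification interval are hypothetical assumptions.

from datetime import date, timedelta

period_end = date(2024, 3, 31)                         # end of monthly progress period
certification = period_end + timedelta(days=5)         # engineer certifies the estimate
payment_received = certification + timedelta(days=30)  # owner pays within 30 days

# For work spread evenly over the month, cost incurred at mid-period is
# a rough average point for the contractor's outlay.
avg_cost_date = date(2024, 3, 16)
lag_days = (payment_received - avg_cost_date).days
print(f"Average financing lag: {lag_days} days")
```

Under these assumptions the contractor finances, on average, about 50 days of work before the corresponding progress payment arrives, which is one reason the cost of money appears in bid estimates.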
A good example of the impact that payment for fabricated materials stored on site can have on a contractor’s cash flow is afforded by the Bolton Hills Tunnel Contract constructed for the Maryland Mass Transit Administration in the late 1970s. That contract involved \$12 million worth of fabricated steel segmented liner plate to be procured by the contractor and installed as the tunnels were excavated. Fabrication and delivery to the site had to occur a number of months prior to payment being received for the installed plate in the tunnels. The fabricator required payment on delivery to the contractor’s storage yard at the site, and the contractor would have been badly strapped for cash had the contract not provided for partial payment upon delivery of the fabricated liner plate to the job site. Had the contractor been required to carry the full investment of the value of the liner plate until payment on installation, the bid would have been higher.
Another key issue is the retained percentage provision. Retained percentage or retention is a deduction made from each progress payment prior to paying over the balance to the contractor. The owner holds these retained monies until after satisfactory completion of the contract and acceptance of the work to provide a fund to remedy nonconforming work that the contractor may refuse or fail to remedy, as well as to provide some protection to the owner if the contractor falls in default of the contract and/or abandons the contract. After the satisfactory completion and acceptance of all contract work by the owner, the retained funds, less any amounts used to correct defects or satisfy other unpaid obligations of the contractor to the owner, are paid over to the contractor, constituting final payment of the contract price.
A common retention percentage is 10%, meaning that the contractor will receive only 90% of the value of each progress payment. Thus, when the contractor has fully completed all contract work, the owner will hold 10% of the contract price up until the point of final payment. On many types of work, the prime contractor’s markup on the estimated project cost will be less than 10%, meaning that, unless the contractor has retained 10% from all payments made to the material suppliers and subcontractors, the contractor could well be in a negative cash flow position at the completion of the contract work, even though the contract will eventually be profitable once the owner makes final payment. Under these circumstances, the point at which the final payment is made becomes increasingly important.
A more reasonable approach that has gained popularity in recent years is for the owner to retain 10% for the first 50% of the contract, then cease further retention, if the contractor is performing satisfactory work and is conforming to an agreed-upon project progress schedule. Further, at some subsequent point when most of the work has been completed, say, 90-95% of the total, and the contractor’s performance continues to be satisfactory with respect to quality and schedule, the retention is reduced to 200% of the estimated value of the uncompleted work. The remainder of the retention is then paid over as the final contract payment on completion and acceptance of all the work. Another popular approach is for the owner to deposit the retainage in an escrow account for which the interest is payable to the contractor. The owner may also permit the contractor to pledge interest-bearing securities or provide an irrevocable letter of credit in lieu of retention.
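The cash effect of a reduced-retention schedule like the one just described can be sketched as follows. The 10% rate on the first 50% of the contract, the 90% completion trigger, and the 200% cap follow the hypothetical schedule in the text; the function name and structure are illustrative.

```python
def retention_held(contract_price, pct_complete):
    """Retention held by the owner at a given stage of completion.

    Hypothetical schedule from the text: retain 10% of each payment
    through the first 50% of the contract value, cease further
    retention thereafter, and once the work is at least 90% complete,
    cap total retention at 200% of the value of the uncompleted work.
    """
    earned = contract_price * pct_complete
    # 10% retained only on the first half of the contract value
    base = 0.10 * min(earned, 0.50 * contract_price)
    if pct_complete >= 0.90:
        # reduce to twice the estimated value of the remaining work
        cap = 2.0 * contract_price * (1.0 - pct_complete)
        return min(base, cap)
    return base
```

On a \$1,000,000 contract, retention under this schedule peaks at \$50,000 once the work is half done, then falls (for example, to \$20,000 at 99% complete) and reaches zero at completion, when the final payment releases the balance.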
The retention provisions in a particular case can have a significant effect on the contractor’s cash flow, resulting in more or less investment of the contractor’s own funds in the project. This becomes even more of a concern in the case of major projects for heavy engineering civil works construction, significantly affecting the amount that bidding contractors include in their cost estimates for the cost of money, which in turn affects the total cost that the owner pays for the project.
Many heavy construction projects require large investments of capital before the work of the project can begin. These early cash outlays are required for the purchase of construction equipment, freight and assembly of the equipment at the work site, and installing extensive plant facilities such as temporary utility distribution systems, batch plants, material handling systems, and heavy-duty repair facilities. If not provided for separately in some manner, these large early expenditures can only be recovered through progress payments as the permanent work of the project is put in place.
Such projects may extend over several years, resulting in large investments tied up in the project for some time before being recovered. The time value of these invested funds is considerable and raises the bid price that the owner pays for the project. Another effect is to eliminate otherwise qualified bidders who may not have the financial strength to afford the initial high investment. Even if they have the necessary funds or the credit line to borrow them, they may wish to employ these resources elsewhere. Further, large public owners frequently can borrow at lower rates of interest than contractors.
For all these reasons, projects of this type frequently contain a lump sum bid item called a mobilization allowance, usually fixed by the owner in the bid form at a stated dollar amount. The project specifications make clear the kinds of costs that the mobilization allowance is intended to cover, and the contractor may immediately bill these large early expenses against the mobilization bid item. This lowers the contractor’s investment expense considerably and thus lowers the bid price to the owner, in the same manner that favorable retention provisions and other favorable payment terms lower the bid price. The presence or absence of a mobilization payment provision is therefore important to bidding contractors; without it, the bid must contain large interest charges as a necessary cost.
The importance of the time when final payment is made was previously mentioned, especially when the total retained funds are sizable. Some contracts contain straightforward procedures for contract closeout and release of final payment to the contractor. Others result in final payment months after completion and acceptance of the work. Contracts with a high retained percentage and with difficult and time-consuming closeout procedures obviously are not desirable from the contractor’s standpoint and therefore usually carry a much higher bid markup.
Reports of Physical Site Conditions
As mentioned in Chapter 4, it is important to determine precisely which documents comprise the contract and whether the contract documents include reports of physical conditions at the site, particularly for projects involving underground construction.
If such reports are included, are they to be considered part of the contract documents? Practices vary. Some owners make it very clear that the material has been included for the use of bidding contractors with the understanding that bidders shall consider it accurate and rely on it in determining their bids. Under these circumstances, the information implicitly is part of the contract documents. This fact will usually also be stated explicitly.
In contrast, other owners seek to limit or to avoid completely the liability that would ordinarily flow to them if physical site conditions data they furnish proves to be incorrect. They usually do this by means of a clause stating that the data is not part of the contract documents and that the owner and the architect/engineer bear no liability for any errors or inaccuracies that may later be found. Such a clause is one example of a type of clause called a disclaimer and, although not generally favored, may be recognized by the courts, provided that it is prominent and unambiguous and does not conflict with other contract provisions. On the other hand, many courts are reluctant to give this type of disclaimer full force and effect. They reason that if the owner does not want bidding contractors to use the information and rely on it in formulating the bid, why include the information with the contract documents at all? Better to let the bidders make their own investigation of the site conditions. A problem with this approach is that the bidding period is usually too short to permit bidders to make in-depth investigations of physical site conditions, particularly underground conditions.
The following two cases afford good examples of courts’ reactions to the enforceability of disclaimers. In the first, a South Dakota state highway contract requiring borrow excavation contained the following disclaimer:
The information covering the pit for the project is given to you for informational purposes only. The Department of Transportation does not guarantee the quantity or the quality of the material listed in the above information. Interested contractors should investigate the area before considering it for bidding purposes.
The successful bidder had received the bidding documents including the borrow pit data only two weeks before bids were due. The borrow pit data proved to be grossly inaccurate and, after submitting a claim which was denied, the contractor sued to recover the increased cost of performance due to the inaccurate borrow pit information. The Supreme Court of South Dakota ruled against the contractor saying:
The unambiguous language of a contract defeats an implied warranty claim. There is no ambiguity in this case as to the parties’ intentions. The burden was placed on Mooney’s by expressed contract to determine the nature of the material in the pit. It appears that Mooney’s would now try to escape this contractor responsibility through use of an implied warranty.
Thus, the disclaimer was enforced.[2]
The opposite ruling resulted in a Maryland case where Baltimore County took bids for underwater repairs to the concrete piers for a bridge. The contract required the contractor to build sheet pile cofferdams around the piers, then dewater and excavate the bed of the river inside the cofferdams, exposing the piers. Once the piers were exposed, the contractor was required to chip away and replace deteriorated concrete, which was represented in the bidding documents to average approximately six inches in thickness.
The bidding documents contained a disclaimer to the effect that the site data included was for information purposes only and did not purport to represent actual conditions and did not relieve bidders of the obligation to verify independently all such data before submitting a bid.
During actual construction, the river bed was found to consist of hard material that was difficult to excavate, instead of the soft material represented in the bidding documents, and it contained numerous large boulders interfering with the cofferdam construction. The contractor incurred large cost overruns in performing the work. Further, only two inches of deteriorated concrete was found on the surface of the piers instead of the six inches stated in the bid documents. As a result, the contractor was compensated for only 114 cubic yards of concrete removed instead of the 230 cubic yards calculated in the bid. The contractor sued to recover these large losses.
On appeal from an adverse lower court decision, the Court of Special Appeals of Maryland ruled for the contractor, stating that reliance on the data in the bidding documents was justified because there was no possible way that bidders could verify the data or otherwise obtain other more accurate data. The court further stated that the county’s data published in the bidding documents resulted from four years of periodic underwater inspections that could not possibly be duplicated by bidding contractors in the short period allowed for bidding.[3]
A more modern view for projects involving significant underground construction has resulted from the work of the Technical Committee on Contracting Practices of the Underground Technology Research Council sponsored jointly by the American Society of Civil Engineers and the American Institute of Mining, Metallurgical and Petroleum Engineers.[4] This approach requires that the owner have an adequate geotechnical investigation carried out pre-bid and include the engineer’s analysis of the data resulting from the investigation in the form of a geotechnical “baseline” report that the bidders may use in the preparation of their bids. Depictions of the logs of the actual soil borings and data recorded from various physical tests performed on materials at the site may or may not be included as part of the contract documents, but the engineer’s geotechnical baseline report including a summary of the analysis of all the detailed data and the engineer’s conclusions will be included as part of the contract. This engineer’s geotechnical evaluation is frequently called the geotechnical design summary report (GDSR) or, more recently, the geotechnical baseline report (GBR). If geotechnical conditions actually encountered during construction are more adverse than described in the geotechnical baseline report, the contractor is afforded relief for any ensuing loss of time or money through the provisions of the differing site conditions clause of the contract. This latter clause will be examined in detail in Chapter 15.
Exculpatory Clauses in General
Exculpatory clauses or disclaimers were discussed above in connection with their use in limiting the owner’s liability for inaccurate or otherwise misleading reports of physical site conditions. This is but one example of such clauses. Today, many knowledgeable persons, including contractors, owners, and architect/engineers, oppose the use of this type of clause, and many courts are reluctant to enforce them. However, their use persists. Bidding contractors who encounter them in contract documents and assume they will not be enforced do so at their peril. In any contract bidding situation, all exculpatory clauses must be identified and their potential impact evaluated.
Insurance and Bond Provisions
Insurance and bond provisions, briefly mentioned in previous chapters, are of paramount importance and could probably be considered a threshold matter. Ordinarily, bidding contractors will be able to meet the insurance requirements, provided they are not so stringent or unusual that the required policies are not available in the insurance market. The question then becomes one of insurance premium cost. Sufficient money must be included in the bid cost estimate to pay the required policy premiums for the life of the project. The advice of insurance specialists is usually needed to assure this; therefore, the insurance requirement provisions of the documents should be reviewed pre-bid by either in-house specialists on the contractor’s staff, an insurance broker, or both.
The contractor’s situation with regard to the surety bond requirements is considerably different. Here, cost is not the only question, although the premiums for the required package of bonds can be a substantial sum that must be included in the bid cost estimate. The major question is whether the contractor will be able to obtain the bonds at all. The answer depends on the relationship between the particular contractor and the surety companies. This, in turn, depends on the contractor’s financial strength and performance record on past contracts, the likely contract price for the project under consideration, and the contractor’s backlog of bonded work.
The key bond is the performance bond. If the contractor’s surety commits to furnishing the performance bond, the other normally required bonds (the bid bond and the labor and materials payment bond) will also be furnished by the surety. The surety’s agreement to provide the required project bonds must be secured very early in the bid preparation process. Without this commitment, the contractor cannot sensibly proceed, unless there is reason to believe that some development occurring during the bid preparation period will cause the surety to commit to furnish the required bonds. Chapter 8 is devoted to the subject of insurance contracts and Chapter 9 to surety bonds.
Indemnification Requirements
Many construction contracts require the contractor to “indemnify and hold harmless” the owner and the architect/engineer from all losses that they may suffer arising from any act or failure to act of the contractor in the performance of the contract work. Such indemnification usually extends to providing the legal defense in court for the indemnified parties if they are sued by persons or entities who allege they have been damaged as a result of the contract work and, if a judgment is obtained against the indemnified parties, to pay the judgment.
An indemnification clause imposes serious potential liabilities on the contractor. Many cannot afford to accept this risk unless the risk is insurable. A serious problem arises when the indemnification requirements are unreasonably broad and the contractor finds that they cannot be covered by insurance. For instance, such requirements have sometimes gone to the extreme of requiring the contractor to indemnify the owner and architect/engineer for the consequences of their own negligence. Therefore, before proceeding very far in pursuit of a potential contract, it is essential that contractors find out what the indemnification requirements are and determine whether they are insurable.
The federal government has not waived sovereign immunity with respect to tort liability. Neither have many of the individual states. These entities are thus protected from lawsuits by third parties arising from any act or failure to act on the part of their contractors. Construction contracts with these entities may not contain an indemnification clause.
Measurement and Payment Provisions
In the case of schedule-of-bid-items contracts, which are common in engineered construction and are paid on the basis of unit prices for measured quantities of work put in place, the basis of quantity measurement and the exact rules determining which items of work will be paid for separately and which will be included in the payment for other items are very important. Usually, the answers to both questions will be found in the measurement and payment language of the specifications. Contractors may well decide whether to bid or decline to bid a job on the basis of the measurement and payment provisions if they have been written so unclearly or unfairly that they impose risks on the contractor that cannot be evaluated. Although detailed discussion of this subject is beyond the scope of this text, suffice it to say here that the measurement and payment provisions must be carefully examined and understood to avoid later unpleasant surprises.
Variation in Quantities Clause
The variation in quantities clause is also important in contracts for heavy engineered construction paid on a unit price basis, particularly those where the estimated quantities are large and can potentially underrun or overrun. The bid unit prices on such contracts normally contain one component to cover the contractor’s direct costs of performing the work and a separate component to cover the distributed portion of the contractor’s total job overhead and general and administrative expense. These latter costs are more or less fixed and independent of the final quantity of work done under the various bid items. An underrun in quantity will mean that the contractor will not recover all necessary fixed costs for the job, thereby suffering a loss. On the other hand, if the quantities overrun, the contractor will recover more than the fixed costs and reap an unexpected financial gain. The contractor will also experience a loss or reap a gain separately with respect to the profit and contingency components, which are also included in the bid unit price.
One widely used form of the variation in quantities clause provides that the bid unit price on any bid item in the job applies for all actual measured final quantities of work that fall within 15% under or over the estimated quantity shown on the schedule of bid items (the bid quantity). If the actual measured final quantity turns out to be less than 85% of the bid quantity, the unit price will be renegotiated upward if necessary to enable the contractor to recover the distributed fixed costs that would otherwise be lost. Similarly, if the actual final quantity turns out to be over 115% of the bid quantity, the unit price will be negotiated downward if necessary to prevent the contractor from over-recovering fixed costs.
Some clauses provide that the adjustment will “be based upon any increase or decrease in costs due solely to the variations above 115% or below 85% of the estimated quantity.” Under this form of the clause, the manner in which the contractor distributed the fixed general costs and profit to the various bid items is left unaltered when unit price adjustments due to quantity variations are considered. Unless the contractor’s actual costs of performing the work are directly affected solely by the increase or decrease in the quantity of work actually performed (either over 115% or under 85% of the bid quantity, respectively), there will be no unit price adjustment.
Under either form of the clause, the actual percentage figures controlling when the clause becomes operative may vary for particular contracts, but the principles of operation remain as explained above.
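The trigger logic of the first form of the clause can be summarized in a short sketch. The 85%/115% thresholds follow the text; the function name, parameter names, and return strings are illustrative.

```python
def quantity_variation(bid_qty, actual_qty, lower=0.85, upper=1.15):
    """Classify a measured final quantity under a variation in
    quantities clause (thresholds default to the 15% band described;
    particular contracts may use different percentages)."""
    ratio = actual_qty / bid_qty
    if ratio < lower:
        # underrun beyond 15%: contractor may not recover fixed costs
        return "underrun: unit price may be renegotiated upward"
    if ratio > upper:
        # overrun beyond 15%: contractor would over-recover fixed costs
        return "overrun: unit price may be renegotiated downward"
    return "within limits: bid unit price applies"
```

For example, a final quantity of 800 units against a bid quantity of 1,000 falls below the 85% threshold and opens the unit price to upward renegotiation, whereas 1,100 units stays within the band and the bid unit price stands.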
Equal Employment Opportunity and Disadvantaged/ Women-Owned Business Requirements
Equal employment opportunity requirements and disadvantaged/women-owned business subcontracting requirements that are frequently included in contracts in the public sector were briefly mentioned in Chapter 1. These requirements usually take the form of specifying goals by trades for
1. The percentage of the contractor work force that should be filled by women and/or members of ethnic minorities; and
2. The percentage of the total contract price that should represent either materials purchased from, or services subcontracted with, disadvantaged business enterprises (DBEs) or women-owned business enterprises (WBEs).
The requirement for stated percentages of women or minority employees is seldom a problem in today’s contracting world, but there has been a great deal of trouble and litigation concerning the requirement for purchasing materials from, and subcontracting with, DBEs and WBEs. The issues involved are socioeconomic and often highly charged politically and emotionally. They are beyond the scope of this book. Suffice it to say that bidding contractors must know what the DBE and WBE requirements are and follow the stated instructions to the letter when submitting bids. Significant costs may be involved, which must be included in the bid price.
Escalation Provisions
Escalation provisions can be important in long-term contracts spanning a number of years. The essential idea of an escalation provision is that, in order to induce a lower bid price, the owner agrees to take the risk or part of the risk of increases in the cost of labor and key construction materials above the levels that existed at the time bids were taken.
Such contracts normally include a schedule of the labor hourly rates and the unit costs of key materials existing at the time of bid upon which the contractor’s bid was based. The contractor’s certified payrolls and paid invoices for all materials, maintained during contract performance, determine the actual manhours worked, labor rates paid, actual quantities of materials purchased and actual prices paid, all of which establish a basis for computing escalation costs. The contract will provide that the owner pay the contractor for all or a stated percentage of the escalation cost, in addition to the normal contract price determined by the bid.
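The escalation computation just described can be sketched as follows. The data shapes, function name, and the owner-share parameter are illustrative assumptions; actual contracts spell out their own computation rules.

```python
def escalation_payment(labor, materials, owner_share=1.0):
    """Escalation owed under a clause like the one described.

    labor:      list of (hours_worked, base_rate, actual_rate)
    materials:  list of (quantity, base_price, actual_price)
    owner_share: fraction of escalation the owner agreed to bear
                 (the 100% default is illustrative).
    Base rates/prices are those scheduled at bid time; actuals come
    from certified payrolls and paid invoices.
    """
    labor_esc = sum(h * (actual - base) for h, base, actual in labor)
    mat_esc = sum(q * (actual - base) for q, base, actual in materials)
    return owner_share * (labor_esc + mat_esc)
```

For example, 1,000 hours worked at \$33.00 against a scheduled \$30.00 rate, plus 500 units of a material bought at \$4.50 against a scheduled \$4.00, yields \$3,250 of escalation; if the clause assigns the owner only half the risk, the payment is halved accordingly.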
The majority of contracts do not contain escalation provisions. However, when escalation provisions are present, the contractor is relieved of considerable risk, which will result in a lower bid price to the owner. Since the owner, not the contractor, is taking the risk, the owner will be the beneficiary of any savings when anticipated escalation does not occur, which would otherwise have been included in the bid price and thus paid by the owner even though these costs were not incurred by the contractor.
Conclusion
This chapter concluded the focus on prime construction contracts by first explaining why contractors must understand the implications of potential contracts for construction work by seeking out the “red flag” clauses and thereafter looking at the details of the clauses themselves.
The following chapters will shift emphasis from the prime construction contract to some of the prominent secondary contracts that are closely related to prime construction contracts, starting with Chapter 6 on the subject of labor agreements.
Questions and Problems
1. Why is it important for bidding contractors to search out and become familiar with the “red flag” clauses discussed in this chapter? At what point in the bid preparation process should this be done?
2. Why is it important for a bidding contractor to know that an owner may be protected under sovereign immunity?
3. What point does this chapter make concerning the treatment that a contractor can expect under the federal contract in contrast to other contracts with respect to disputes, changes, differing site conditions, delays, and terminations?
4. What are the five threshold “red flag” clauses discussed in this chapter?
5. What are the three separate aspects of time provisions that were discussed?
6. What are liquidated damages and actual damages? Does the contractor normally have any input to the amount of a contractually provided liquidated damages daily figure? Why or why not? Which is preferable from the contractor’s standpoint—liquidated damages or actual damages? Of what significance is the contractually stipulated liquidated damages daily rate?
7. What is meant by the phrase “conditions of force majeure”? Cite examples. In what way is this subject of concern to bidding contractors?
8. What is the significance of site availability provisions? Discuss the several aspects of such provisions.
9. What are four important aspects of payment provisions discussed in this chapter?
10. What is the difference in the problem created for bidding contractors by the insurance provisions and the bond provisions of typical contract documents? Which is more likely to present a difficulty to a bidding contractor? Why?
11. What are the two contrasting attitudes of owners toward inclusion of reports on physical site conditions as part of the contract discussed in this chapter? What is an exculpatory clause or disclaimer? What is the attitude of the courts regarding disclaimers of the accuracy of reports of physical site conditions? What is the modern view of how owners seeking bids on underground projects should handle disclosure and intended bidder reliance on reports of geotechnical investigations? Do you think it makes sense to expand this view to other types of projects as well?
12. What is meant by indemnification? How do bidding contractors normally protect themselves from indemnification clauses? What is the general concern that may arise with indemnification requirements?
13. Why is a variation in quantities clause important? How does a typical variation in quantities clause work? Would such a clause apply to the type of contract where the contract price was bid as a single lump sum? Why or why not?
14. Which group of clauses included in equal employment opportunity or DBE/WBE requirements is likely to cause the most difficulty for contractors?
15. What are escalation provisions? How do they work? Do all contracts contain them?
1. Department of Transportation v. Fru-Con Corporation, 426 S.E.2d 905 (Ga. App. 1992).
2. Mooney's, Inc. v. South Dakota Department of Transportation, 482 N.W.2d 43 (S.D. 1992).
3. Raymond International, Inc. v. Baltimore County, 412 A. 2d 1296 (Md. App. 1980).
4. See Avoiding and Resolving Disputes During Construction, published by the American Society of Civil Engineers, 345 East 47th St., New York, NY 10017-2398. | textbooks/biz/Business/Advanced_Business/Construction_Contracting_-_Business_and_Legal_Principles/1.05%3A_OwnerConstruction_Contractor_Prime_Contract_Red_Flag_Clauses.txt |
Key Words and Concepts
• Employers
• Union organizations
• Single-employer or multi-employer parties
• Local and international unions
• Basic and specialty crafts
• Local and national agreements
• Local area-wide agreements
• Project agreements
• Industrial work agreements
• Maintenance work agreements
• Single and multi-craft agreements
• Single area system agreements
• National trade agreements
• National special purpose agreements
• Union security
• Union jurisdiction
• Hiring hall
• Grievance procedure
• Work stoppage and lockout
• Subcontracting clause
• Wage/benefit rates
• Hours worked and hours paid
• Workday and workweek
• Overtime
• Shift work
• Work rules
• Manning
• Stewards
• Me too/most favored nation provisions
The last three chapters have discussed the distinguishing features of construction industry prime contracts in general and then focused on owner-construction contractor contracts for construction services. The “red flag” clauses that determine how individual construction contracts deal with certain critical issues were examined in detail.
The focus of this chapter is on construction labor agreements, one of a series of contracts closely related to the prime construction contract. Persons aspiring to manage construction operations should be familiar with the structure of organized construction labor in the United States and with the provisions of typical labor agreements for at least three reasons. First, managers of construction operations may be employed by union contractors who consistently work under labor agreements and are generally bound by their terms. Second, even contractors who normally work on an open- or merit-shop basis may, in particular circumstances, decide to sign and be bound by a labor agreement for a particular job. Third, both union and open- or merit-shop contractors need to know under what conditions the other will be working in order to evaluate their competitive advantages or disadvantages—in other words, to get a “handle” on the competition.
Any construction superintendent or project manager knows that the cost of labor is, by far, the most volatile, difficult to control element of total construction cost. Therefore, for the sizable segment of the industry that employs union labor, the collective bargaining, or labor agreement, governing the relationships of construction employers with their workers becomes a very important agreement indeed. It is not possible to estimate accurately the probable labor element of the cost of construction without an intimate understanding of such agreements. Simply knowing the wage rates is not enough. Large cost issues depend on the intricacies of the overtime and shift work provisions, general work rules, manning requirements, and other cost-generating provisions that are often contained in labor agreements. Further, once a construction contract has been entered into with an owner, the contractor cannot effectively manage the job or control costs without a complete understanding of these often complex provisions. Each of these considerations is discussed in this chapter.
The Parties
The parties to construction labor agreements are contractor employers and union organizations. This can be represented as the beginning of a “relationship tree,” as in Figure 6-1.
Further, the employer parties in the relationship tree can be expanded to include single-employer or multi-employer parties, as shown in Figure 6-2.
A single employer consists of one contractor, whereas a multi-employer party is a group of contractors that have banded together to form an employers’ association. Examples of employers’ organizations include the various state Associated General Contractors (AGC) organizations, the National Association of Homebuilders, and the National Constructors Association.
Union organizations consist of either local unions or international unions, with separate unions for the basic crafts and for the specialty crafts, as shown in a further expansion of the relationship tree in Figure 6-3.
The basic crafts consist of operating engineers, teamsters, carpenters (including piledrivers and millwrights), ironworkers, masons (cement finishers), and (although strictly speaking, not a craft) laborers. All crafts in construction other than the basic crafts are called specialty crafts, which include electricians, plumbers, sheetmetal workers, tile setters, and boilermakers, to name a few.
Common Types of Labor Agreements
Turning from the parties to the agreement itself, we see a number of features that distinguish one labor agreement from another such as the geographical limits of the agreement. Labor agreements can be local or national in geographical scope, as indicated as the beginning of a second relationship tree in Figure 6-4.
A local agreement involves just one particular local union. Usually, it also involves one craft, so it is really a local single craft agreement. Certain local agreements can be multi-craft agreements. All of the local unions of a particular craft in the United States report to and are a part of a governing national body called the international union for that craft. If the agreement is made directly with the international union, it is binding on all of the local unions throughout the country and is called a national agreement.
Local agreements can be subdivided into a number of categories expanding the second relationship tree as shown in Figure 6-5.
A local area-wide agreement is one that applies to the full geographical limit of the particular local’s territory, which might be limited to a particular county (or counties) within a state, to the entire state, or, in a few instances, to a group of several states. A project agreement is one that applies to a particular project named in the agreement and to no others. Some projects consist of just one construction prime contract, and a “single project” agreement would be applied to that single job only. An excellent example of this type of project agreement was one negotiated between the contractor joint venture partners who constructed the Stanislaus North Fork Project in central California a few years ago.[1] This agreement was negotiated with all of the basic crafts expected to be employed on the project and applied only to that project. The terms and conditions of the agreement were considerably more favorable to the contractor employer than other local agreements in existence in that section of California at that time. When the project work was completed, the agreement automatically terminated.
Some large projects amount to an infrastructure system built by a number of similar prime construction contracts over a number of years, and a single-area system agreement is an agreement that would apply to each of the separate projects within that system and to them only. The San Francisco Bay Area Rapid Transit District subway was constructed under a single-area system agreement in the late 1960s and 1970s, and the Los Angeles Area Rapid Transit District subway and the Boston Harbor Project for tunnel work and sewerage treatment plant construction in Massachusetts were both built under these types of agreements.
National agreements can be subdivided into national trade agreements and national special purpose agreements, further expanding the second relationship tree as indicated in Figure 6-6.
National trade agreements apply nationwide between the signatory employer and every local union for the particular trade, regardless of location. Currently, many contractor employers hold national trade agreements with each of the basic crafts and/or the various specialty crafts. National special purpose agreements are those made across the trades where all of the signatory trades are engaged in a specific common and narrowly defined type of work, such as industrial work or purely maintenance activity. The industrial work agreement shown in Figure 6-6 applies to the construction of industrial facilities such as factories and plants. The maintenance work agreement, sometimes called the National President’s Maintenance Agreement, applies only to the performance of maintenance work in existing industrial facilities.
Previous chapters have dealt with various aspects of construction industry contracts. Are labor agreements contracts? Chapter 2 discussed the three elements necessary for contract formation, which are the offer, the acceptance, and the consideration. Where and in what form are these elements found in the labor agreement? The offer and the acceptance occur through a long series of offers and counteroffers constituting a classic example of a negotiation. Practically everyone has heard of "labor negotiations." For unions and union employers, labor negotiations occur at regular cycles, either annually or every two or three years. The capacity to perform construction work, on the one hand, and the wage rates and fringe benefits stated in the labor agreement, on the other, constitute the necessary consideration element. So, labor agreements are very much contracts and are subject to the same laws and rules of interpretation as are other contracts.
Labor Agreement Threshold “Red Flag” Provisions
As in prime construction contracts, labor agreements between contractor employers and labor unions contain “red flag” provisions. The threshold provisions include the following:
• Union security provisions
• Union jurisdiction
• Hiring hall provisions
• Grievance procedures
• Work stoppage/lockout provisions
• Subcontracting clause
Union Security Provisions
Union security provisions establish that union membership is a condition of employment. For new employees who are not already union members, a set period of time (usually a matter of days) is also established within which they must join the union in order to remain employed. It should be noted that in the several “right-to-work” states in the United States, the requirement for union membership as a condition of employment is contrary to state law and cannot be legally enforced. This does not mean that labor unions are illegal in “right-to-work” states, only that membership in a union cannot be demanded as a condition of employment.
Union Jurisdiction
Union jurisdiction provisions deal with the scope of the agreement, both in terms of the work performed and the geographical extent of the agreement. They list the specific kinds of work that can be performed only by the union members and specify the geographical area covered by the agreement. Union jurisdiction provisions usually differ among the various separate union craft agreements in an area. The provisions commonly conflict, with each craft reserving or claiming the same work. Employers signatory to several agreements applying to a common construction project cannot meet all of these conflicting conditions. The problem is generally resolved by the employer conducting a “mark up” meeting at the start of the project where all items of work in the project are assigned to one craft or another in accordance with the traditional work practices in the area, known as area practice. Unions disagreeing with such assignments can file a protest with the National Joint Board for Settlement of Jurisdictional Disputes, a national body that will hold a hearing and either support the employer’s assignment or alter it. Most unions and union employer groups are stipulated to the Board, which means that they agree to abide by the Board decisions.
Hiring Hall Provisions
Labor agreements may contain a hiring hall provision. This is a critical provision for the contractor-employer since the specific requirements can have a significant effect on hiring flexibility. A typical hiring hall provision states that the union is the exclusive source of referrals to fill job openings. If the contractor needs to hire new employees, they must be requested from the union hiring hall. When there is no hiring hall provision, the contractor can fill job openings from other sources (“hire off the bank”), as long as the requirements of the union security provisions are met.
Significant differences can exist in the hiring hall provisions in different labor agreements. For example, some provisions give the union the right to designate foremen; in others, the employer has that right. Since foremen are the first level of management on any project, it is essential for the contractor to know who will control their selection. Also, in some hiring hall provisions, the employer has the right to bring in a specified number of key employees; in others, the contractor must rely solely on the labor force in the area of the project. Many contractors have developed a following of workers with proven skills who will move to a new area in order to maintain continuous employment. It is a significant advantage to the contractor to fill key positions with these long-term employees.
Grievance Procedures
Each labor agreement contains a set grievance procedure for settling disputes between the contractor and the union. This clause is analogous to the dispute resolution clause in a prime construction contract between a contractor and owner. It is obviously important to know exactly what steps will be taken at each point in this procedure as well as what will happen if initial efforts fail to produce a settlement, which leads directly to the next threshold provision: work stoppage/lockout provisions.
Work Stoppage/Lockout Provisions
Most labor agreements contain work stoppage and lockout provisions. These clauses require that the union continue work (no work stoppage) while a dispute is being settled and that the employer continue to offer employment (no lockout). However, even if the agreement contains work stoppage and lockout provisions, the agreement may provide that the union may cease work if the workers are not being paid or if the work site is unsafe. When improperly used, this latter provision can lead to work stoppages on spurious grounds in order to pressure contractor employers during disputes. A dispute over the exact pay rate in a particular case, or similar arguments concerning the amount that the workers are paid (as long as they are being paid some rate provided in the agreement), does not constitute sufficient grounds for the union to engage in a work stoppage. A work stoppage on these grounds would constitute a breach of the labor agreement. In these circumstances, the proper course of action for the union would be to file a grievance.
Subcontracting Clause
The effect of a subcontracting clause is to bind the contractor to employ only subcontractors who agree to the terms of the contractor’s labor agreement. The provisions of the clause will not necessarily require that a subcontractor actually sign a similar agreement with the union. They may only bind the subcontractor to abide by the terms and conditions of the contractor’s labor agreement and to make the required fringe benefit payments into the union trust funds. Much controversy has occurred over the inclusion of such clauses in labor agreements, since the requirement to use union subcontractors can make a prime contractor’s bid noncompetitive in areas where open-shop contractors are competing for the work. When the writer was managing a union contracting organization, bids were sometimes lost for this reason when open-shop competition existed.
Other “Red Flag” Provisions
In addition to the threshold provisions, the following “red flag” provisions, although secondary, are also important:
• Wage/benefits hourly rates
• Normal workday and workweek
• Overtime definition and pay premium
• Shift work definition and pay premium
• Work rules and manning provisions
• Steward provisions
• Me too/most favored nation provisions
Wage/Benefits Hourly Rates
The agreed-upon wage and fringe benefit rates obviously form the main body of the contract consideration and determine what the various crafts are to be paid. Usually, wage and fringe benefit rates are stated in terms of an amount per hour. An important point is whether the stated fringe amount per hour is to be paid on an hours-worked or on an hours-paid basis. Consider the following case:
Base rate: \$17.50 per hour
Health and welfare: \$1.50 per hour
Pension: \$0.75 per hour
Vacation: \$0.50 per hour
The question arises when a worker works overtime. For instance, if the overtime premium was time-and-a-half and the worker works 11 hours on a particular day, the respective amounts paid directly to the worker and paid into the union trust funds on an hours-worked basis would be
Worker gets: \$17.50 × (8 + 1.5 × 3) = \$17.50 × 12.5 hrs. paid = \$218.75
Trust funds get: (\$1.50 + \$0.75 + \$0.50) × 11 hrs. worked = \$2.75 × 11 = \$30.25
On an hours-paid basis, the respective amounts paid to the worker and to the union trust funds would be
Worker gets: \$218.75
Trust funds get: \$2.75 × 12.5 hrs. paid = \$34.38
The difference in total trust fund payments in this example, \$34.38 - \$30.25 = \$4.13, amounts to 13.6% of the lower amount. A large amount of overtime on a major project can result in a considerable difference in the contractor's labor costs, depending on the method by which labor fringes are paid.
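The hours-worked versus hours-paid arithmetic above can be verified with a short calculation. Python is used here purely for illustration; all of the rates are the ones given in the example.

```python
# Fringe payment comparison: hours-worked vs. hours-paid basis.
# Rates are taken from the worked example in the text.
base_rate = 17.50                      # $/hr base wage
fringes = 1.50 + 0.75 + 0.50           # H&W + pension + vacation = $2.75/hr

hours_worked = 11.0                    # 8 straight-time + 3 overtime hours
overtime_multiplier = 1.5
hours_paid = 8 + overtime_multiplier * (hours_worked - 8)   # 12.5 hrs paid

worker_pay = base_rate * hours_paid                 # $218.75 to the worker
trust_hours_worked = fringes * hours_worked         # $30.25 on hours-worked basis
trust_hours_paid = fringes * hours_paid             # $34.375 on hours-paid basis

difference = trust_hours_paid - trust_hours_worked  # $4.125/day extra per worker
```

Multiplied across a large crew working sustained overtime, this per-worker, per-day difference is what makes the payment basis a "red flag" item when pricing work.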
Normal Workday and Workweek
Workday and workweek provisions include clauses defining the standard workday, in terms of the number of hours worked (8 hours), the consecutive number of hours worked between starting time and lunch or dinner breaks, and the minimum required hours off between consecutive shifts worked by the same worker. They may also include defining the number of hours for “show-up time” to be paid if work is canceled after a person reports to work and then is sent home due to inclement weather, the minimum number of hours that a worker must be paid after starting to work, and similar rules that result in workers being paid for more hours than they actually work.
For example, it is not uncommon for workers who actually report to work and who are then sent home without performing any work at all to be paid a minimum of two hours at the straight time rate unless they had been advised by the contractor employer before they left their homes for work not to come to work that day due to inclement weather. Similarly, in such circumstances if workers actually were put to work on arriving at the jobsite at the start of the work shift and then were sent home due to inclement weather shortly thereafter, a minimum of four hours at the straight time rate commonly is required to be paid.
These provisions also establish the number of workdays that constitute a standard workweek (5 workdays) and a range of normal or standard starting and quitting times for each standard shift during the standard workday. Some labor agreements include guaranteed 40-hour-week clauses, which provide that workers are guaranteed 40 hours’ pay for the week once work is started on the first day in any one workweek. It matters not that work had to be suspended because of inclement weather or other circumstances completely beyond the control of the contractor employer. The workers still receive pay at the straight time rate for the entire week.
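A rough sketch of what a guaranteed 40-hour-week clause can cost follows; the crew size and wage rate are assumed for the illustration, not taken from any actual agreement.

```python
# Hypothetical cost of a guaranteed 40-hour-week clause when weather
# suspends work after the week has started.  Crew size and wage rate
# are assumed purely for illustration.
crew_size = 10
straight_rate = 27.50          # $/hr, assumed
days_lost_to_weather = 1       # work started Monday; Tuesday rained out

# Without the clause, no pay accrues for the lost day.
pay_without_clause = 0.0

# With the clause, every worker is still paid 8 straight-time hours.
pay_with_clause = crew_size * 8 * straight_rate * days_lost_to_weather
# -> $2,200 of labor cost for a day on which no work was performed
```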
Overtime Definition and Pay Premium
Overtime definition and pay premium provisions establish a schedule of overtime pay rates, usually in terms of a multiple of the basic hourly pay rate (time-and-a-half, double time, or triple time). These provisions also define when the overtime rate is to be paid: for example, after a stated number of hours worked in a day or week (8 hours per day or 40 hours per week), on weekends and holidays, or when the hours worked do not fall between normal shift starting and ending times. In special circumstances, provisions may be included that allow exceptions to what would otherwise be considered overtime work. For instance, some projects require work to be performed in the middle of the night only when no work is being performed during the other two shifts. This would be common in work performed in heavily trafficked streets in urban areas. In these circumstances, the contractor employer usually can obtain the union's agreement to pay a wage rate for this night work that is higher than the straight time rate that would be paid if the work was performed on standard day shift but considerably less than the overtime rate that otherwise would be required to be paid.
Shift Work and Pay Premium
Shift work and pay premium provisions define standard work shifts (first, second, and third shifts or "day," "swing," and "graveyard" shifts) based on the particular hours during the day that the shift works. A typical arrangement would be day shift: 8 a.m. to 4:30 p.m. with a 1⁄2 hour meal break; swing shift: 4:30 p.m. to 12:30 a.m. with a 1⁄2 hour meal break; and graveyard shift: 12:30 a.m. to 8 a.m. with a 1⁄2 hour meal break. However, a particular range of times for starting each shift is usually stated. The provisions may also contain clauses requiring that once a swing or graveyard shift is started, the workers must continue to work and be paid for a full workweek, and the union must be given a minimum notice period before shift work is to start. This author has experienced contracts in some jurisdictions where the requirement for continued payment for workers on shift work throughout the full workweek was extremely costly. Once shift work was started in a given week, the crews involved had to be paid their shift work wages for the entire week, even though work was required to be suspended because of inclement weather or other circumstances beyond the control of the contractor employer.
The pay premium provisions for shift work are usually stated in terms of straight-time hours to be worked for eight hours’ straight-time pay. For example, a day-shift worker will work eight hours and receive eight hours’ pay; a swing-shift worker, seven-and-a-half hours for eight hours’ pay; and a graveyard-shift worker, only seven hours for eight hours’ pay. If a day-shift worker works ten hours, he or she would be paid eight hours’ straight time and two hours at the specified overtime rate. A swing-shift worker would be paid eight hours’ straight time and two-and-a-half hours at the specified overtime rate. A graveyard-shift worker would receive eight hours at straight time and three hours’ pay at the specified overtime rate. Such a scenario is a typical arrangement, but the specific premium pay provisions may vary from shift to shift and from agreement to agreement.
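The shift-premium arrangement just described can be expressed as a small calculation. The typical 8/7.5/7-hour shifts from the text and a time-and-a-half overtime rate are assumed; actual agreements vary.

```python
# Straight-time hours worked for 8 hours' pay, per the typical
# arrangement described in the text (assumed, not universal).
straight_hours_for_full_pay = {"day": 8.0, "swing": 7.5, "graveyard": 7.0}

def hours_paid(shift: str, hours_worked: float, ot_multiplier: float = 1.5) -> float:
    """Straight-time-equivalent hours paid to a shift worker."""
    threshold = straight_hours_for_full_pay[shift]
    overtime = max(0.0, hours_worked - threshold)
    # A completed shift earns 8 hours' straight-time pay; a partial
    # shift is assumed here to pay actual hours worked.
    straight = 8.0 if hours_worked >= threshold else hours_worked
    return straight + ot_multiplier * overtime

# Ten hours worked on each shift, matching the text's example:
# day: 8 + 1.5 x 2 = 11.0; swing: 8 + 1.5 x 2.5 = 11.75;
# graveyard: 8 + 1.5 x 3 = 12.5 straight-time-equivalent hours paid.
```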
It should be clear from the preceding that such matters as overtime work and shift work must be very carefully managed when the labor agreement contains expensive provisions such as a guaranteed 40-hour-week clause, shift work clauses requiring pay for the entire week, and so on. Otherwise, costs will quickly get out of hand. Also, these provisions must be clearly understood when pricing construction work in advance of actual construction, as when formulating bids or proposals.
Work Rules and Manning Provisions
Every labor agreement will contain work rules and manning provisions. These address such issues as when foremen, general foremen, or master mechanics must be utilized; what number of workers are required for standard crews; and the requirements for employing apprentices and helpers. Some agreements are very restrictive and allow the contractor little flexibility in determining the number of workers that must be hired. Others give the contractor the right to determine crew sizes and to hire the workers as the contractor sees fit. These provisions and the presence or absence of restrictive productivity-limiting practices are of critical importance to the contractor. Construction work cannot be accurately priced or managed effectively without an intimate understanding of these matters.
The manning rules can take a number of different forms, many of which greatly limit the employer’s flexibility and result in hiring additional workers who are not actually needed to get the work done. Examples of such restrictive provisions include the following:
• When equipment is broken down and being repaired in the field, the regular equipment operator is required to be present to assist the mechanic who is assigned to make repairs, rather than operate another operable unit, even though the operator is not a mechanic and is of no practical assistance.
• An equipment operator may change equipment only one time during any one shift. This requirement is particularly onerous and expensive to the contractor–employer on small jobs with several pieces of equipment that are not required to be operated continuously. For instance, a contractor doing utility work might conceivably be using a small backhoe, a front-end loader, and a small dozer intermittently where each piece of equipment is only operated a few hours during the shift. Only one operator, capable of operating any of the three pieces of equipment, is required in order to perform the required work operations.[2] Nonetheless, in some jurisdictions, the contractor would be required to employ three operators even though it would not be possible to operate all three pieces of equipment simultaneously.
• An operator must be assigned for a stated number of pumps, compressors, or welding machines on the job, regardless of whether an operator is actually needed. In one instance in the eastern United States, this work rule has resulted in contractors utilizing an inefficient jet-eductor dewatering system instead of a more efficient deep well system because the work rules applicable to the project mandated that one operator was required around the clock for every three deep wells (of which a large number were required), whereas fewer operators were required for the jet-eductor system.
• Stated crew sizes must be used, such as a minimum of four or five in a piledriving crew when only three are actually needed to do the work.
• An oiler must be assigned to each crane over a stated size, whether or not an oiler is needed for the safe operation and maintenance of the crane. This frequently has resulted in an assignment of an oiler to a single operator center-mount crane even though an oiler is not needed and there is no place on the crane for the oiler to ride safely when the operator is driving the crane from one work location to another.
• Laborers must be assigned to assist carpenter crews at a stated number of laborers for a stated number of carpenters, regardless of how many laborers are actually needed.
• Laborers must be assigned to dewatering pumps, even when the pumps are electrically powered and automatically controlled so that they require little or no attention at all.
Steward Provisions
A labor agreement will usually contain provisions relating to the union shop steward, an individual appointed by the union to deal with the employer on behalf of the union employees at the site. The steward is not the same as the union business agent, who represents the union in dealing with all contractor employers within a certain area but who is not normally continually on the jobsite. The steward is employed by the contractor, ostensibly as a regular craft employee expected to do a normal day’s work. The steward provisions permit the steward to engage in union activities while on the job. A danger to the contractor is that overly permissive language in the agreement permits the steward to engage in full-time union activities and perform little or no work.
Me Too/Most Favored Nation Provisions
Me too/most favored nation provisions can be very expensive to the contractor. “Me too” clauses entitle a worker to be paid the highest overtime rate of any craft actually working overtime on that day, even though the rate in the worker’s union agreement is lower. Most favored nation provisions require that any clauses that are less favorable than similar clauses of subsequent agreements negotiated with another craft are replaced with the more favorable language of the later agreement with the other craft.
A telling example of the extra labor expense that can be generated by such clauses was a project in the eastern United States in which this author was involved a number of years ago. The project involved structural concrete work, requiring carpenters, operating engineers, laborers, and cement masons to be conducted simultaneously with excavation and ground support operations requiring operating engineers, piledrivers, and laborers. The labor agreements provided that all crafts on the job received overtime pay at time-and-a-half with the exception of the carpenters and piledrivers who received double time.[3] Frequently, concrete placements would run into the second shift with laborers, operating engineers, and cement masons being required for a number of hours at the overtime rate of time-and-a-half. Frequently, lagging crews, part of the ground support operation, were also required to work overtime to lag up ground that had been excavated during the day shift. This crew consisted of seven or eight laborers and one piledriver whose sole job was to cut the lagging boards to length with a chain saw so that the cut boards could be installed by the laborers. Thus, among all of the workers on overtime, sometimes totaling as many as 20, there was one piledriver. Because the piledriver was entitled to overtime at the double-time rate, each of the others was also required to be paid overtime at the double-time rate, even though their agreements called for overtime at the time-and-a-half rate.
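The effect of the "me too" clause in the anecdote above can be sketched numerically. The 20-worker crew containing one piledriver comes from the anecdote; the wage rate and overtime hours are assumed for illustration.

```python
# Hypothetical extra cost of a "me too" overtime clause.
crew = 20                      # workers on overtime, incl. 1 piledriver
piledrivers = 1                # entitled to double time by their own agreement
base_rate = 25.00              # $/hr, assumed for illustration
ot_hours = 3                   # overtime hours that evening, assumed

# Without the clause: 19 workers at time-and-a-half, 1 at double time.
without_clause = ((crew - piledrivers) * 1.5 + piledrivers * 2.0) * base_rate * ot_hours

# With the clause: the single double-time craft pulls everyone to 2.0x.
with_clause = crew * 2.0 * base_rate * ot_hours

extra = with_clause - without_clause   # 19 workers x 0.5x premium x $25 x 3 hrs
```

Even at these modest assumed numbers, one piledriver on the crew adds a half-time premium for every other worker on overtime, night after night.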
Fortunately, for the good of the industry, such onerous provisions are antiquated today, and few labor agreements contain them. Many current labor agreements are fair and even-handed but, as the incidents just related indicate, each new agreement must be carefully read and understood to avoid unpleasant surprises.
Conclusion
This chapter presented a brief general survey of the types of labor agreements commonly in use in the United States today according to whom the employer and union parties are likely to be, the geographical limits of the agreements, and the general nature of the construction labor involved.
Also, the details of typical provisions found in labor agreements were discussed, particularly emphasizing those provisions of special importance to contractors when pricing and managing construction work.
Chapter 7 moves on to construction purchase orders and subcontract agreements. Both are additional examples of contracts closely related to the prime contract between the construction contractor and the owner.
Questions and Problems
1. Define or explain the following relevant labor agreement terms:
1. Single employer and multi-employer party
2. Local and international unions
3. Basic crafts and specialty crafts (name the six basic crafts)
4. Local and national labor agreements
5. Local area-wide agreements
6. Project agreements: single project and single-area system agreements
7. National trade agreements and national special purpose agreements
8. Single craft and multi-craft agreements
2. Do labor agreements contain the three elements of offer, acceptance, and consideration necessary for contract formation? How do the offer and acceptance typically occur? What parts of a labor agreement comprise the consideration element?
3. What are the six threshold clauses usually found in a construction labor agreement? What is the general subject matter of each?
4. What are three important aspects of hiring that may be contained in a hiring hall clause? What does “hiring off the bank” mean when the project is operating under a labor agreement? Do workers “hired off the bank” have to be union members or agree to become union members?
5. What is a work stoppage? A lockout? What two circumstances will always be viewed by courts as justifying a union’s refusal to continue work? Would a dispute over the proper rate of pay qualify as constituting one of the preceding circumstances? Would such a dispute be subject to the procedure set forth under the grievance clause?
6. What is the essential meaning of a subcontracting clause? What is the typical union position on subcontracting clauses? Why? What is the employer’s position? Why?
7. What are the seven additional “red flag” clauses discussed in this chapter? What is the general subject matter of each?
8. What is the distinction between payment of union benefits on an hours-worked basis and on an hours-paid basis? Under what circumstances on a project does this distinction become important?
9. What is meant by a standard or normal workday? Standard or normal workweek? What is meant by “show-up time”? What is a shift? What does shift work mean? What is overtime? Overtime premium? How many hours of work comprise a standard (normal) straight time day shift? A swing shift? A graveyard shift? How many work shifts constitute a standard (normal) workweek? How many hours at the straight time rate are usually paid for the standard (normal) day shift? For swing shift? For graveyard shift?
10. What are work rules? Manning provisions? What are the two examples of work rules and the five examples of manning provisions cited in this chapter?
11. What is a steward? A business agent? Who pays each? Does a steward perform construction work on the project?
12. What is a “me too” provision? A most favored nation provision?
13. The following question assumes that you have access to an actual construction industry labor agreement. Refer to the particular labor agreement that you have and answer the following questions:
1. Who is the union party? The employer party?
2. Is the employer party a single party employer or a multi-party employer?
3. Does the union party consist of basic crafts, specialty crafts, or a mixture of basic and specialty crafts?
4. List every “red flag” provision discussed in this chapter that you can find in the labor agreement and cite the article number for each provision that you list.
14. A project required 117,900 actual carpenter work-hours, of which 15% were performed on an overtime basis. The carpenter base pay was \$27.50 per hour, and the total union fringes were \$6.27 per hour. Overtime work was paid at double time. How much additional labor expense would the contractor employer incur if the union agreement called for payment of union fringes on an hours-paid basis rather than on an hours-worked basis?
15. A project cost estimate indicates that the work will require an average of six crane operators for a total of 6,494 crane operator-hours and an average of nine other heavy-equipment operators for a total of 9,720 other heavy-equipment operator-hours. The estimate was made on the basis that all work would be performed on a one-shift-per-day, five-days-per-workweek basis. The climate at the site was such that 12% of the normal workdays in the five-day workweek were expected to be lost because of inclement weather. This fact was recognized in the anticipated project schedule. How much more labor cost would have to be anticipated for crane operators and heavy-equipment operators if the labor agreement for the project provided for a guaranteed 40-hour week for these particular operating engineer classifications and provided that labor fringes were to be paid on an hours-paid rather than an hours-worked basis than if the labor agreement did not contain these two provisions? The following wage rates applied:
[table id=4 /]
16. Project work on a major Midwestern river was progressing on the basis of a normal eight-hour-day shift, five days per week. The owner directed the contractor to accelerate completion of a contractually mandated milestone for the project by putting the work crews on shift work, three shifts per day, five days per week. The work involved for the milestone required piledrivers and operating engineers. The labor costs per day for the total crew were \$3,840 base pay and \$768 union fringes per each eight-hour-day shift. The portion of the work to be accelerated would have required an additional 60 full eight-hour shifts of work or 60 x 8 = 480 crew-hours to complete. The project labor agreement provided for shift work but stated that once shift work was established at the beginning of the workweek, the crews must be paid for the entire week for each new week’s work started even if work were temporarily suspended later in the week because of inclement weather or otherwise. The labor agreement further provided that union fringes were to be paid on an hours-paid basis and that the day shift was to receive eight hours’ pay for eight hours’ work; the swing shift eight hours’ pay for seven-and-a-half hours’ work; and the graveyard shift eight hours’ pay for seven hours’ work. The acceleration order was issued in the middle of the winter when an average of four shifts per five-day workweek (two shifts on swing shift and two shifts on graveyard shift) could be expected to be lost because of extremely cold weather. Assuming that there would be no loss of efficiency for work performed on swing and graveyard shifts, determine how many days would be required to reach the milestone required by the acceleration order and what the extra labor cost for complying with the acceleration order would be.
1. This project is one of two projects cited in Chapter 3 as examples of very large public works design–construct projects.
2. Many operating engineers possess the capability to operate all three pieces of equipment.
3. Carpenters and piledrivers both belong to the same international union.
Key Words and Concepts
• Buyer
• Seller
• Sale of goods
• Significance of labor at construction site
• Uniform Commercial Code
• Purchase orders for provision of services at site
• One-time/continuing supply
• Maximum quantity/approximate quantity
• Conflicts in boilerplate
• Flow-down language
• “Red flag” purchase order provisions
• Rules of payment and quantity measurement
• “No pay until paid” provisions
• F.O.B. point/freight/risk of loss
• Sales tax
• Purchase order terms and conditions
• Long form/short form purchase orders
• Essence of subcontracts
• Flow of contract liability
• Subcontract work per plans and specifications
• Incidental subcontract work
• Compliance with general terms and conditions of prime contract
• “Red flag” subcontract provisions
Purchase orders and subcontracts are additional contracts that are closely related to the construction contract between owner and prime construction contractor. Although similar in some respects, they are fundamentally different in purpose.
Occasionally, a purchase order is used when a subcontract agreement would have been more appropriate and vice versa. Both are important contracts, and construction practitioners need to understand the purpose and key features of each in order to decide which should be used for a particular business transaction and to draft it correctly.
Purchase Orders
Construction purchase orders generally are intended for transactions that involve the sale of goods by a seller and delivery of those goods to a contractor buyer at the site of a construction project. This purpose should be distinguished from the provision of services or the performance of work involving labor at or on the construction project site. For example, consider a transaction to provide fabricated structural or reinforcing steel to a construction jobsite. Such an undertaking clearly involves the provision of extensive quantities of labor to perform the fabrication work in addition to furnishing the basic raw material, but this labor is provided at the steel fabricator’s plant or yard, not at the construction project site. The fabricator is furnishing “goods” in the form of the fabricated structural or reinforcing steel.
Goods or Provision of Services?
In settling construction contract disputes, courts occasionally struggle with the issue of whether particular transactions constitute the sale of goods or the provision of services. Resolution of this issue may determine whether the provisions of the Uniform Commercial Code that affect the seller’s potential liability apply. The issue is often resolved on the basis of whether the court believes the final product consists mostly of a manufactured material or product or mostly of on-site construction labor.
In a Florida case, a road-building contractor was sued because a passing car hit a drop-off during road-paving operations, went out of control, killed the driver, and injured a passenger. The suit alleged that the contractor was negligent in the “manufacture of a product.” Under this theory, the contractor would be subject to strict liability for defects in the product. A trial court concluded that the contractor was a “manufacturer” and was liable for defects in the product being manufactured, which allegedly caused the accident. The Supreme Court of Florida reversed the trial court, holding instead that the construction of a public road pursuant to a Department of Transportation contract did not constitute the manufacture of a “product” and that the doctrine of strict liability could therefore not be applied.[1]

In another case, the United States Court of Appeals ruled that a contractor joint venture acted in the capacity of a merchant when it purchased a tunnel-boring machine, even though the machine was intended for use on a sewer construction project, and that the transaction was therefore governed by the Uniform Commercial Code.[2]

In a third case, a public utility contracted for the design and construction of a large reinforced concrete cooling tower for a power plant. After completion, the cooling tower exhibited a number of problems, and the utility sued the design-build contractor that had designed and constructed it. One of the utility’s positions was that designing and constructing a reinforced concrete cooling tower constituted the manufacture of a “product,” thus bringing the contract under the mantle of the Uniform Commercial Code, which afforded better avenues of recovery for the utility. The resolution of this kind of issue once again depends on a court’s determination of whether the final installed product consists mostly of a manufactured material or product or mostly of on-site construction labor.
The lesson for construction practitioners is to treat all transactions involving the provision of significant amounts of on-site construction labor as a construction operation requiring the use of a subcontract agreement. On the other hand, transactions that do not involve the provision of significant amounts of labor at the construction site should be treated as the sale of goods. The sale of goods should be handled with a purchase order.
Use of Purchase Orders for Certain Jobsite Services
Although not appropriate for transactions involving significant amounts of labor at the construction site, purchase orders are commonly used for certain services on or at the site that involve minimal labor. Examples are the provision of chemical toilets, which are periodically serviced by the provider, the provision and collection of trash containers, and similar services.
Purchase Order Quantity Limitations
A purchase order can be limited to a one-time transaction that occurs once and is finished, or it may provide for a continuing supply of goods on an “as-required basis.” Purchase orders for a continuing supply may be either “open-ended” as to the total quantity to be furnished, or they may be limited to some stipulated maximum quantity stated in the purchase order. Some purchase orders state an approximate quantity.
Conflicts with Seller’s Sales Quotations
A “battle of the forms” may later develop when there are conflicts in boilerplate—that is, when the fine print on the back of a vendor’s preprinted sales quotation form conflicts with similar boilerplate on the back of a preprinted purchase order form. Vendors naturally try to obtain the most favorable sales terms possible, and the written quotations that they furnish usually contain conditions-of-sale language preprinted on the back of the quotation form aimed at achieving that result. Contractor/purchasers do likewise by using standard preprinted purchase order forms with general conditions printed on the back that put the contractor/buyer in the most advantageous position. To further complicate matters, buyers and sellers often negotiate “special” or “supplementary” terms and conditions applying to a particular transaction and to that particular transaction only. If any of these various terms and conditions conflict, and the conflicts are not identified and eliminated from the purchase order, future disputes between the parties are likely.
The courts’ reaction to these disputes has been mixed. In one case, a contractor purchased water treatment materials from a supplier under a sales quotation that stated in the boilerplate on the back of the quotation that, among other things, the supplier disclaimed the implied warranties under the Uniform Commercial Code. The contractor’s purchase order did not, by its terms, waive any of the contractor’s warranty rights, including those contained in the UCC. The South Carolina Court of Appeals ruled that the language of the purchase order did not disclaim the specific language in the supplier’s sales quotation and that the sales quotation governed.[3]
In an opposite holding, the Court of Special Appeals of Maryland held that an equipment supplier’s standard terms and conditions, which contained a disclaimer of liability for failure to make timely delivery, were overridden by the contractor’s purchase order that contained contradictory terms.[4]
Drafters of purchase orders should harmonize the preprinted purchase order and sales quotation language by making clear that one or the other controls. Also, it is important to make sure that the words entered on the face of the purchase order document describing the instant transaction do not conflict with the boilerplate on the back of whichever preprinted form is intended to control. Otherwise, “an argument waiting to happen” has almost certainly been created.
Flow-Down Language from Prime Contracts
When purchase orders flow from prime construction contracts, the two are closely related. Such purchase orders often contain explicit flow-down language intended to make all applicable provisions of the prime contract also apply to the purchase order.
Additionally, prime contractors need to specify that materials furnished by vendors for use on the project meet all requirements of the prime contract. The best way to do this is to incorporate the applicable section of the prime contract technical specifications by reference to the section and paragraph numbers when describing the material to be furnished in the purchase order.
“Red Flag” Purchase Order Provisions
As with other contracts, certain purchase order provisions stand out because of the important particulars to which the buyer and seller agreed—that is, “red flag” purchase order provisions. The following discussion is not all-inclusive but covers the most critical provisions that usually should be included.
Necessary Identifying Information
At the outset, the purchase order should prominently identify the following on the face of the document, using correct name styles:
• Construction project for which the prime construction contract is held by the buyer
• Owner for that project
• Architect/engineer
• Contractor buyer
• Seller
Description of the Goods Purchased
An accurate and complete description of each separate item to be furnished must appear, including any appropriate references to specific sections of the prime contract plans and technical specifications where necessary. Part and parcel of this description is the quantity of each separate item to be furnished.
Shipping Instructions
Complete shipping instructions should be included, designating the exact name and address of the intended receiving party and instructions on how the goods are to be packaged and marked. This is particularly important in purchase orders for fabricated reinforcing steel that is to be cut, bent, and tagged so that the individual bars are identified before delivery to the jobsite, and in similar supply purchase orders for products such as miscellaneous metal or structural steel. Without correct definitive markings, these types of products can be extremely difficult to even locate in the contractor’s lay-down area after delivery, let alone identify each piece for correct installation or erection.
When the goods purchased are susceptible to damage in shipment or when identification of individual items at the jobsite may be difficult, packaging and marking instructions are particularly important. In one case, granite facing slabs that were quarried in the southeastern United States and shipped some distance to the jobsite developed disfiguring scars after the facing slabs had been erected on the face of the building. The scars were found to have resulted from the careless use of steel banding straps during shipment to the jobsite. Great expense was incurred to remove the disfigured slabs and replace them with new slabs from the quarry. The importance of the particulars of the packaging and shipping instructions in a case like this is obvious.
Pricing and Basis of Quantity Measurement
The purchase price and the basis of quantity measurement for payment must be clearly stated. The purchase price is normally stated for each line item in the purchase order as a lump sum price or as a unit price and extension against a stated quantity. The nominal dollar amount for the entire purchase order is the sum of the lump sum prices and/or extensions for all the line items.
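The arithmetic described above—lump sum prices plus unit-price extensions summing to the nominal purchase order amount—can be sketched as follows. The line items, quantities, and prices below are purely hypothetical illustrations, not figures from the text.

```python
# Each line item is either a lump sum or a unit price applied to a stated
# quantity; the nominal purchase order amount is the sum of both kinds.
lump_sum_items = {
    "Mobilization of delivery equipment": 2_500.00,   # hypothetical
}
unit_price_items = {
    # item: (quantity, unit, unit price in $)
    "Fabricated reinforcing steel": (120, "ton", 850.00),   # hypothetical
    "Miscellaneous metal": (15, "ton", 1_200.00),           # hypothetical
}

total = sum(lump_sum_items.values())
for item, (qty, unit, price) in unit_price_items.items():
    extension = qty * price          # extension = quantity x unit price
    total += extension
    print(f"{item}: {qty} {unit} @ ${price:,.2f} = ${extension:,.2f}")

print(f"Nominal purchase order amount: ${total:,.2f}")
```

Here the two extensions (\$102,000.00 and \$18,000.00) plus the \$2,500.00 lump sum give a nominal purchase order amount of \$122,500.00.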
Just as the rules forming the basis of quantity measurement for pay purposes were of critical importance in prime construction contracts (refer to Chapter 5), so are these rules important for purchase orders. Is the supplier of the material to be paid on the same basis as the contractor/buyer, or on some other basis? Purchase orders often contain a flow-down provision stating that the same rules defining the basis of measurement for payment from the owner to the prime contractor in the prime construction contract will also apply to the purchase order for payment from the prime contractor buyer to the seller.
Note that even in these instances, when the rules for measurement for pay purposes are stated in the purchase order to be the same as for the prime contract, the lump sum and unit prices typically will be lower for the purchase order since only the component of the total pay item for furnishing the required material is represented in the purchase order price.
Not all purchase orders contain flow-down provisions from prime construction contracts relating to measurement and payment. Prime contractors routinely purchase many items of materials or goods for which payment will be completely unrelated to the provisions of the prime contract.
Payment and Retention Provisions
The payment and retention terms have exactly the same significance to a seller under a purchase order contract as they do to the prime contractor under the prime contract with the owner. Their positions are virtually identical, just one step apart on the payment ladder. In this respect, it is common for prime contractors to impose the same payment and retention terms on their vendors, who are the sellers under the purchase order agreements, as the owner imposes on the prime contractor in the payment and retention provisions of the prime contract.
A frequently recurring contract problem between vendors and contractor buyers is the “no pay until paid” dilemma. The same problem arises between prime contractors and their subcontractors when the primes are using payments from the owner as their source of funds. Purchase order and subcontract payment terms usually contain a clause stating that the prime does not have an obligation to make payment until and unless payment has been received from the owner. The provision further states that once the prime has received payment, the vendor or subcontractor must be paid within a stated period, usually ten calendar days. Such provisions are legal and enforceable up to a point. However, situations can arise where the owner never pays the prime contractor, due to insolvency, being legally prevented from paying, or some other reason. Then what? Does the prime have an enforceable liability to pay, or is the vendor or subcontractor simply out of luck?
Most courts view the “no pay until paid” clause as less than absolute—that is, it will be enforced with respect to the timing of the prime’s obligation to make payment but does not excuse the prime from eventually paying. In the end, the prime, not the vendor or subcontractor, will have to pay and absorb the loss. In a few states, courts will relieve the prime from making payment if the purchase order or subcontract agreement explicitly states that receipt by the prime of payment from the owner is a “condition precedent” to any obligation of the prime to pay the vendor or subcontractor.
The following two examples illustrate the predominant court holding. In a New Jersey case, a U.S. District Court held that although the project owner’s insolvency resulted in nonpayment to a prime contractor on a shopping center project for the site work required for the project, the prime contractor was still liable for payment to the subcontractor who had actually performed the work, even though the subcontract agreement provided that the prime contractor was to pay the subcontractor “within thirty (30) days after acceptance and receipt of final payment from the owner of the building.” The prime contractor never received final payment from the owner for the work performed by the subcontractor but was still required to pay \$101,760 to the subcontractor, the final balance due under the terms of the subcontract agreement.[5]
In another example, a second-tier subcontractor remained unpaid for installation of exterior windows, walls, and doors for a building project because the prime contractor had encountered financial difficulties and had failed to pay the first-tier subcontractor for the installation work performed by the second-tier subcontractor. The subcontract agreement provided that
Subject to the terms and conditions of this contract, final payment will be made to UPG upon final acceptance of the work by the owner, the approval thereof by the architect, and the receipt of payment in full from the general contractor.
In overturning a lower trial court decision, the Commonwealth Court of Pennsylvania said:
The language of Article Six… merely addresses the time at which payment is to be made… Accordingly, we conclude that the trial judge erred in interpreting the language of the second paragraph of Article Six as imposing absolute conditions precedent to UPG’s entitlement to MTI’s final payment.[6]
An example of a case that turned on the presence of the words “condition precedent” in the payment language of the subcontract agreement is afforded by the 1991 decision by the Court of Special Appeals of Maryland. In that case, the general contractor had subcontracted exterior masonry work on a residential condominium project under a subcontract agreement that contained the following payment language:
It is specifically understood and agreed that the payment to the trade contractor is dependent, as a condition precedent, upon the construction manager receiving contract payments, including retainer [sic] from the owner.
After receiving progress payments, the subcontractor submitted a final invoice for \$283,079, which the prime contractor refused to pay because payment had not been received from the project owner, who eventually filed for bankruptcy. In absolving the prime contractor from liability for payment of the \$283,079, the court said:
In addition to providing a standard “pay-when-paid” clause, this contract further provides that payment by Carley Capital Group to Gilbane is a condition precedent to Gilbane’s obligation to pay Brisk. Regardless of whether the parties discussed the prospect of owner insolvency during their negotiations, the objective meaning of the clause is clear—Gilbane, the construction manager (general contractor), is not obligated to Brisk, the trade contractor (subcontractor), unless and until Gilbane is paid by the owner, Carley Capital Group.[7]
Specified Delivery Schedule
The purchase order should carefully define the required delivery schedule for the material. This delivery requirement is equivalent to the statement of allowed contract time in a prime construction contract. In situations where the contractor/buyer is held to a tight performance period for the prime contract work, obtaining required materials from suppliers on time is obviously important.
Required Delivery Point
Another essential element is a statement defining the required point of delivery. This determines who pays the freight charges, which can be considerable, and who is responsible for the material in the event of damage or loss during shipment. Many purchase orders specify the delivery point to be the construction project jobsite. This typically would be done by stating “F.O.B. construction jobsite” (F.O.B. means “free on board”). In this case, the cost of loading the material by the seller and the delivery cost (the freight) are deemed to be included in the purchase price. Title does not pass to the buyer until the material reaches the jobsite, where the buyer is required to unload the material. Under these circumstances, the seller assumes the risk of loss or damage during transit.
An alternate arrangement is for the purchase order to designate the delivery point to be “F.O.B. seller’s plant or yard.” In this case, title passes to the buyer when the goods are loaded by the seller onto transit vehicles supplied or arranged for by the buyer. The buyer owns and is responsible for the material from that point on. Which provision is stated in the purchase order is obviously important to both parties.
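The allocation of freight cost and transit risk under the two F.O.B. arrangements described above can be summarized in a small sketch. The rule entries simply restate the preceding discussion in tabular form; they are illustrative, not legal language.

```python
# Who bears freight cost and transit risk, keyed by the stated F.O.B. point,
# restating the two arrangements discussed in the text.
FOB_RULES = {
    "jobsite": {
        "freight_paid_by": "seller",  # freight deemed included in the price
        "risk_in_transit": "seller",  # title passes on arrival at the jobsite
        "unloading_by": "buyer",
    },
    "seller's plant": {
        "freight_paid_by": "buyer",   # buyer supplies or arranges transport
        "risk_in_transit": "buyer",   # title passes on loading at the plant
        "unloading_by": "buyer",
    },
}

def transit_loss_falls_on(fob_point: str) -> str:
    """Return which party bears a loss occurring during shipment."""
    return FOB_RULES[fob_point]["risk_in_transit"]

print(transit_loss_falls_on("jobsite"))         # seller
print(transit_loss_falls_on("seller's plant"))  # buyer
```

In the tunnel-boring machine incident described next, the answer to "who pays?" would turn on exactly this lookup: which F.O.B. point the purchase order designated.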
The following situation rather dramatically illustrates the importance of the preceding provisions in a purchase order. A tunnel contractor had procured a 12-foot diameter tunnel-boring machine (TBM) from a seller located some distance from the site of the tunnel project. During transport to the site, the TBM broke free of its lashings on the transport vehicle, rolled off, bounced on the roadway shoulder, and came to rest in an adjoining farmer’s field. In a case like this, the particulars of the purchase order regarding the F.O.B. point were obviously important:
• Who paid for recovering the TBM from the field, reloading it, and for any damage suffered, the tunnel contractor or the seller?
• Suppose the TBM had rolled off the opposite side of the transport vehicle onto the opposing traffic lane and had collided with an oncoming car, injuring or killing innocent third parties. Who would be liable, the tunnel contractor or the seller?
The resolution of these and similar questions can be pre-agreed by buyer and seller by designating the intended F.O.B. point.
Sales Taxes
Similarly, the purchase order should make clear that any applicable sales taxes either are, or are not, included in the stated purchase price. Purchase orders can be written either way.
Purchase Order General Conditions
Most construction contractors’ purchase order contracts with material suppliers contain general conditions printed on the back of the purchase order form. These are usually titled “Purchase Order Terms and Conditions.” They normally contain all of the following general clauses usually found in prime contracts (discussed in detail in Chapter 5):
• Disputes resolution
• Changes
• Termination provisions
• Provisions in the event of late delivery
• Conditions excusing late or nondelivery
• Insurance and bond requirements
• Indemnification
• Escalation
These clauses have the same significance for the contractor/buyer and the vendor/seller as they do for the owner and the contractor under the prime construction contract.
Special or Supplementary Provisions
Finally, the purchase order may contain a section titled “Special Provisions” or “Supplementary Provisions,” where the buyer and seller record any special agreements or terms pertaining to that particular transaction. In the event of conflict with other provisions of the purchase order, courts give more weight to specially recorded terms than to preprinted terms. Although it concerns a conflict in a prime contract rather than in a purchase order, the following case perfectly illustrates the normal judicial treatment of the issue involved.
A pipeline company awarded a contract to a contractor to lay a 109-mile petroleum pipeline in Kansas. The contract, which in this case was typewritten, contained a broad indemnification clause, which, among other things, held the contractor responsible for all damages to the owner’s property. The contractor added a handwritten clause to the typewritten contract that stated:
Contractor shall not be liable under any circumstances or responsible to company for consequential loss or damages of any kind whatsoever including but not limited to loss of use, loss of product, loss of revenue or profit.
This handwritten addition was initialed by executives for both the contractor and owner prior to contract execution.
After completion of the project, the pipeline ruptured. The pipeline company attempted to obtain compensation for lost oil, cleanup costs, and damage to surrounding property. The contractor pointed to the handwritten clause as a defense against the claim of liability for these clearly consequential costs.
The Supreme Court of Kansas determined that the handwritten addition was in direct conflict with the typewritten indemnity clause in the contract. In resolving this conflict, the court ruled for the contractor by stating:
The second handwritten sentence of 2.03, when given its plain and ordinary meaning, clearly limits Willbros’ liability to Wood River for consequential damages. This handwritten provision controls and modifies the printed provision in 2.01 whereby Willbros agrees to pay Wood River for damages to Wood River’s property.[8]
Rather than wait for the decision of a court, which may take years, the best procedure is to coordinate carefully all provisions of the purchase order, each with the others, to identify and remove conflicts. This is commonly done by “lining out” conflicting preprinted language that the buyer and seller intend to delete from the agreement.
AGCC Forms of Purchase Order Agreements
Most general contractors have devised their own preprinted purchase order forms. However, for contractors who have not or in situations where vendors object to a particular company’s form of agreement, standard forms of agreement promulgated by the Associated General Contractors of America (AGC) are available. Examples of such AGC forms are those published by the Associated General Contractors of California (AGCC). Form AGCC-6 (Long Form Purchase Order) is intended for large transactions that extend over some period of time. Form AGCC-7 (Short Form Purchase Order) is intended for smaller, less complicated transactions. Users of both are advised by the AGCC to consult legal counsel before using or modifying these forms.
Subcontract Agreements
A threshold point regarding subcontracts is that a prime contract between an owner and a prime construction contractor must exist before a construction subcontract can exist. The previous discussion focused on purchase orders that result from a prime construction contract, although a contractor will often also write purchase orders for miscellaneous goods that have no relationship to a particular prime contract. In the case of subcontract agreements, however, there must be a preexisting related prime contract.
The essence of a subcontract transaction is that a prime contractor who holds a separate contract with an owner decides to “lay off” or subcontract a portion of the work to another contractor, called a subcontractor. The parties to the subcontract agreement, therefore, become the contractor and the subcontractor. It is important to realize that even when subcontracting a portion of the prime contract work to a subcontractor, the prime contractor still retains the original liability to the owner for the performance of that work according to the prime contract terms. What has occurred is the establishment of a secondary liability of the subcontractor to the prime contractor for the performance of the subcontract work in accordance with the terms of the subcontract.
Construction subcontract agreements will always involve the provision of significant amounts of labor, largely or entirely on the site of the prime construction contract. In addition, subcontract agreements may also involve the provision of materials—both materials that are permanently incorporated into the work (permanent materials) and those that are not permanently incorporated (job materials and supplies commonly called expendable materials). Subcontracts also may involve the use of construction equipment at the jobsite by the subcontractor for the performance of the subcontract work. In short, the subcontract involves all of the elements of work to be performed just as if the prime contractor had done the subcontracted work directly.
The subcontract work may be work that is directly spelled out and precisely described in the prime contract plans and specifications, or it may be work that, although related to the prime contract, is not directly spelled out and thus would be considered incidental to the contract. An example of the former is a subcontract calling for furnishing and driving precast concrete bearing piling that are clearly shown on the prime contract plans and completely described in the prime contract technical specifications. An example of incidental subcontract work would be a subcontract written by an excavation contractor on an earthfill dam contract with a building subcontractor to furnish and erect a temporary shop building on the project site to be used for repairing the prime contractor’s heavy earth-moving equipment. The prime contract would not ordinarily specify the construction of such a shop building as an item of required contract work so this work, although required, would be incidental to the prime contract.
Even in the case of a subcontract for work that is merely incidental, the subcontract often imposes some of the prime contract requirements on the performance of that work. For example, in the case of the above subcontract for constructing a shop building, the subcontract agreement commonly requires the subcontractor’s compliance with the prime contract on Davis-Bacon minimum wage rates, wage-hour laws, regulations pertaining to equal opportunity employment practices, and so on.
Subcontract “Red Flag” Provisions
Following are some of the more important “red flag” provisions of subcontract agreements.
Necessary Identifying Information
As with purchase orders, subcontract agreements should prominently provide the following identifying information using correct name styles where applicable:
• Project for the prime contract
• Owner for that project
• Architect/engineer
• Prime contractor
• Subcontractor
Description of the Subcontract Work
The work to be performed by the subcontractor must be carefully and completely described, incorporating direct references to all applicable drawings and technical specifications and all other applicable sections of the prime construction contract. For some subcontracts, this description of the subcontract work will be relatively simple. In other cases, the description may comprise a number of pages of information and schedules of work items. When the subcontract work is exactly as specified in the prime contract, it is normal to describe the work by citing the particular drawings and sections of the technical specifications in the prime contract that define the work without reproducing them in the text of the subcontract. If pertinent portions of the prime contract general provisions are meant to apply, the subcontract should so state explicitly.
Pricing and Basis of Quantity Measurement
The subcontract price or prices and the rules to be applied to establish the basis of measurement must be clearly stated, in exactly the same manner as for purchase orders. For items of subcontract work directly lifted from the prime contract, the basis of measurement will often be exactly as stated in the prime contract, as if the work were being performed by the prime contractor. In other words, the subcontractor will be paid by the contractor in the same manner that the contractor is paid for the work by the owner, although generally not at the same price or prices. If the subcontract work is incidental to the prime contract, payment to the subcontractor is not related to the payment provisions of the prime contract.
Payment and Retention Provisions
The payment and retention provisions have exactly the same significance as they do for prime contracts and purchase orders and will not be discussed further (see the earlier discussion in this chapter on purchase orders). Also the “no pay until paid” issue arises in the same way as for purchase orders and is generally treated by the courts similarly. The clause will hold up except in cases where the owner never pays the prime contractor. Then, the prime will normally have to pay the subcontractor even though payment has not been received from the owner.
Contractor Control of Performance Time Requirements
Since the prime contractor is subject to the overall project deadlines, stringent performance time requirements can be expected to be written into all subcontracts. Generally, the prime contractor will retain the right to determine when the subcontractor is to perform the subcontract work. The subcontract will often state that “subcontractor shall perform the subcontract work on a schedule to be determined by contractor” or words to that effect. Under this arrangement, the contractor can schedule the subcontractor to perform the subcontract work in a manner that conforms to the overall project schedule, often requiring the subcontractor to move in, perform work, and move out a number of separate times.
Generally, the effect of these or similar provisions is to give the prime contractor complete control of the time requirements for the performance of the subcontractor’s work. However, this control must be exercised in a reasonable manner. Notification of when the subcontract work will be required must be furnished early enough for the subcontractor to plan and execute work efficiently. The prime contractor may not demand the impossible.
Two separate cases, one in Texas and one in Idaho, illustrate how courts treat this issue. In the Texas case, a formwork subcontractor failed to staff the project at the required level of 40 to 48 manhours per day. This failure would have delayed the project by eight months if allowed to continue. The prime contractor terminated the subcontractor for default and performed the balance of the work with their own forces at an average of 68 manhours per day. The Court of Appeals of Texas ruled that the subcontractor had breached the subcontract and that the prime contractor was not required to sit by helplessly while the subcontractor fell further and further behind schedule. The default termination was upheld, and the prime contractor was allowed to recover the increased costs of completing the work from the subcontractor.[9]
In the Idaho case, when rainy weather delayed a subcontractor’s performance of foundation and other concrete work on a commercial building project, the prime contractor improperly pressured the subcontractor to pour concrete under unreasonable weather conditions. When the subcontractor refused, the prime faxed a termination letter to the subcontractor without any prior notice. Further, evidence at the trial indicated that the prime’s project manager directed a subordinate to “document” the subcontractor’s alleged poor performance. The subordinate then went back through the contractor’s daily report log, altering it by adding negative comments about the subcontractor’s performance.
The subcontractor sued, alleging breach of contract. The trial court jury found for the subcontractor and awarded them payment for all of the work performed, lost profit on the unperformed work, and \$25,000 in punitive damages. On appeal, the Idaho Court of Appeals affirmed the jury award stating:
There is substantial evidence that Citadel’s actions were an extreme deviation from reasonable standards of business conduct. Because punitive damages are an appropriate sanction for oppressive conduct in the marketplace, we conclude that the District Court did not err in submitting the issue of punitive damages to the jury.[10]
Some subcontracts may explicitly state the start and finish dates for the subcontractor’s work. If this is the case, the subcontractor is liable for the consequences of failing to meet those deadlines, subject, of course, to any conditions of force majeure that are stated in the subcontract.
Damages in the Event of Late Completion
A well-drafted subcontract must deal with the subcontractor’s liability for damages in the event of failure to perform in accordance with the subcontract time requirements. A flow-down clause may impose the liquidated damages liability of the prime contractor to the owner on the subcontractor to the extent that the subcontractor’s failure to perform leads the owner to assess liquidated damages against the prime. In addition, the subcontract often will explicitly state the flow of contract liability—that is, the contract will provide that the prime contractor can recover additional damages from the subcontractor, if the prime’s cost of performance was increased as a result of delays caused by the subcontractor. In this situation, the liquidated damages collected from the subcontractor (equal to what the prime had to pay the owner) constitutes just one element of the total damages suffered by the prime due to the subcontractor’s failure to perform.
Subcontract Changes Clause
Most subcontracts will also give the prime contractor the right to make changes unilaterally in the subcontract work, delay or suspend the subcontract work, or terminate it in the same manner that the owner has these rights in the prime contract. Also, the subcontractor’s rights and obligations are similar to those of the prime contractor in similar circumstances under the provisions of the prime contract.
Insurance and Bond Requirements
The subcontract should clearly state the insurance and bond requirements that the subcontractor must meet. The insurance requirements are generally the same as for the prime contractor with respect to work under the prime contract. The subcontract may or may not require the subcontractor to furnish a performance bond and a labor and material payment bond, depending on the requirements of the contractor who drafts the agreement. Bid bonds are normally not required.
When the prime contractor intends that the subcontractor furnish a performance bond, many contractors write the subcontract to provide that failure to furnish a performance bond constitutes a material breach of the subcontract. This enables the contractor to terminate the subcontractor for cause and engage another subcontractor in the event that the subcontractor refuses, or is unable, to furnish the bond. In these circumstances, any price increase would be for the account of the original subcontractor. Subcontractors should not bid to general contractors or sign subcontracts drawn in this manner unless they are certain that they will be able to furnish a performance bond.
Indemnification
When the prime contract contains an indemnification clause, all subcontracts should contain a similar clause, so that the prime’s liability to the owner and architect/engineer for acts committed by the subcontractor is passed through to the subcontractor. Even when the prime contract does not contain an indemnification requirement, whether because the owner is protected by sovereign immunity or simply chose not to require one, many subcontracts will still contain a comprehensive indemnification clause requiring the subcontractor to indemnify the prime. Although the owner may be protected by sovereign immunity, the prime contractor is not and thus may require the protection afforded by the indemnification clause against the consequences of the subcontractor’s acts or failures to act.
48-Hour and 72-Hour Clauses
Any well-drawn subcontract agreement enables the contractor to compel the subcontractor to perform the subcontract work in a timely manner under the contractor’s general direction and control. This control will extend at least as far as the owner’s control with respect to the contractor’s performance under the prime contract and, in some cases, even further. The specific control provisions are the “48-hour” and “72-hour” clauses, present in most subcontract agreements.
The 48-hour clause pertains to the contractor’s right, after directing the subcontractor to remedy some default causing a problem on the project (such as failing to pick up their construction debris from the work site), to perform the necessary corrective work with the contractor’s forces for the account of the subcontractor if the subcontractor fails to remedy the default within 48 hours of receipt of the contractor’s directive. The 72-hour clause permits the contractor to terminate the subcontract agreement for default, after furnishing notice that the subcontractor is in default, the particulars of the default, and the corrective action required to remedy the default. The notification must be in writing and must put the subcontractor on notice that the default must be remedied within 72 hours from the date and time of receipt of the notice. If the subcontractor does not remedy the default within 72 hours of the notice, the contractor may terminate the subcontract. In particular cases, time limits other than 48 hours and 72 hours may be specified, although these time limits are common.
Both clauses are necessary to ensure the contractor’s control over the subcontractor to protect the contractor’s position with the owner under the provisions of the prime contract. Both are reasonable, provided they are fairly administered. Unreasonable exercise of these clauses could constitute a material breach of the subcontract on the part of the contractor (see Chapter 13).
Union Labor Only Clause
A clause should also be included in the subcontract agreement that binds the subcontractor to the provisions of any labor agreements containing a subcontracting clause to which the prime contractor is a party (see Chapter 6). Such a clause requires the prime to subcontract work only to subcontractors who agree to sign or be bound by the terms of the prime contractor’s labor agreement. The only way that the prime contractor can avoid breaching labor agreements containing such clauses is by inserting a clause in all subcontracts that ensures that the subcontractors will either sign, or at least agree to abide by, the terms of the prime’s labor agreements.
AGCC Forms of Subcontract
As in the case of purchase orders, most contractors have devised their own preprinted subcontract agreement forms. In situations where such agreements are not used, the standard forms of subcontract such as those promulgated by the Associated General Contractors of California (AGCC) are available. The short form standard subcontract, Form AGCC-4, is used for relatively minor, short-term subcontract situations. The long form standard subcontract, AGCC-3, is used for more complex, long-term subcontracts. The AGCC advises users to consult legal counsel when using or modifying these forms.
Conclusion
This chapter emphasized the fundamental difference between construction purchase orders and subcontracts and made clear the type of transaction for which each should be used. The close relationship of both documents to the prime construction contract was also emphasized, as was the necessity for drafters of these documents to avoid conflicts between boilerplate preprinted on the back of the document forms and project-specific provisions entered on the face of the documents. Finally, the reasons why the typical purchase order and subcontract “red flag” clauses are necessary were stated, and such clauses were examined in detail. Chapter 8 covers insurance contracts, another type of contract closely related to the prime construction contract.
Questions and Problems
1. Who are the parties to a construction purchase order? What type of transaction is involved? How is this transaction distinguished from that of a subcontract? Is the provision of certain kinds of jobsite services properly handled by means of a purchase order? What kind of services?
2. With regard to purchase orders, what is the meaning of “open-ended,” “one-time transaction,” “maximum quantity,” and “approximate quantity”?
3. How do conflicts in purchase order terms and conditions-of-sale terms arise? What is the problem? What is the solution?
4. What is flow-down language? What is the easiest way to be certain that materials furnished under a purchase order will meet the requirements of the prime construction contract?
5. What are the generic names and the typical content of the “red flag” clauses for purchase orders discussed in this chapter?
6. What typical flow-down language regarding the basis for measurement for payment is found in a purchase order? In what situation would a purchase order typically not contain flow-down language regarding measurement for payment?
7. What is the import of the typical “no-pay-until-paid” provision in purchase orders? To what extent is this provision enforceable? In what situation is it not enforceable?
8. What are two separate aspects of the declaration of the F.O.B. point in a purchase order? What do the letters F.O.B. mean? How do purchase orders handle the question of sales tax?
9. What are the eight key issues discussed in this chapter covered by the purchase order terms and conditions typically found preprinted on the back of purchase orders? Are these subjects common to prime construction contracts also? What are special provisions or supplementary provisions with respect to a purchase order?
10. What must preexist for a construction subcontract agreement to exist? Does this apply to construction purchase orders? Why or why not?
11. What is the essence of a subcontract agreement? Is the prime contractor’s contract liability to the owner for work included in a subcontract changed in any way? What is the chain or flow of contract liability when a construction subcontract is created?
12. What single most important fact about a subcontract distinguishes it from a purchase order? What may be provided by the subcontractor in addition to on-site labor? Must the work of a construction subcontract necessarily be directly and completely spelled out in the prime construction contract? May it be? Cite examples.
13. Is incidental subcontract work necessarily subject to the general terms and conditions of the prime contract? Can it be? How can a prime contractor ensure that it is?
14. Are provisions concerning how the subcontractor will be paid and the basis for measurement for payment for subcontracts typically handled differently or the same as for purchase orders? How about the payment and retention provisions?
15. Describe two ways discussed in this chapter to require that the subcontract provides that the subcontractor will perform the subcontract work in a manner that conforms to the time schedule of the prime contract.
16. Under a typical subcontract agreement, what are two separate kinds of monetary damages for which subcontractors may be liable if they fail to meet the subcontract time of performance requirements? Explain the basis for each. Do conditions of force majeure apply to subcontract work?
17. Do typical subcontract provisions in regard to changes, delays, suspensions of work, and terminations differ, or are they the same as for prime construction contracts? For the subcontract situation, whose position is equivalent to that of the project owner? To that of the prime contractor?
18. Are insurance and bond requirements for construction subcontracts generally the same or different than for prime contracts? How do many contractors state the requirements for the furnishing of a performance bond by the subcontractor? Why should such a clause give a subcontractor pause?
19. If a prime contract does not contain a requirement that the prime contractor indemnify the owner, should subcontracts flowing from that prime contract still contain a broad clause that the subcontractor indemnify the prime contractor? Why or why not?
20. What is the importance of the 48-hour clause? The 72-hour clause? Why are these clauses necessary?
21. How should a prime contractor signatory to labor agreements containing subcontracting clauses avoid exposure to breach of those agreements when writing subcontract agreements? What could happen if this protection is not provided?
1. Edward M. Chadbourne, Inc. v. Vaughn, 491 So. 2d 551 (Fla. 1986).
2. S & M Joint Venture v. Smith Internat'l Inc., 669 F.2d 1106 (6th Cir. 1982).
3. Mace Industries, Inc. v. Paddock Pool Equipment Co., Inc., 339 S.E.2d 527 (S.C. App. 1986).
4. USEMCO, Inc. v. Marbro Co., Inc., 483 A.2d 88 (Md. App. 1984).
5. Seal Tite Corp. v. Ehret, Inc., 589 F. Supp. 701 (D.N.J. 1984).
6. United Plate Glass Co. v. Metal Trends Industries, Inc., 525 A.2d 468 (Pa. Commw. 1987).
7. Gilbane Building Co. v. Brisk Waterproofing Co., Inc., 585 A.2d 248 (Md. App. 1991).
8. Wood River Pipeline Co. v. Willbros Energy Services Co., 738 P.2d 866 (Kan. 1987).
9. D.E.W., Inc. v. Depco Forms, Inc., 827 S.W.2d 379 (Tex. App. 1992).
10. Cuddy Mountain Concrete, Inc. v. Citadel Construction, Inc., 824 P.2d 151 (Idaho App. 1992).
Key Words and Concepts
• Insured
• Carrier
• Worker’s compensation insurance
• Employer’s liability coverage
• State worker’s compensation commission
• U.S. Longshoremen’s and Harbor Workers’ Act
• Jones Act
• Factors governing worker’s compensation insurance base premiums
• Modifiers
• Public liability insurance
• Exclusions
• XCU hazards
• Endorsements
• Additional named insureds
• Deductibles
• Primary policy
• Umbrella policy
• Public liability premium structure
• Retro insurance policies
• Occurrence
• P & I policies
• Builder’s risk insurance
• Consequential damages
• Proximate costs
• Named peril policy
• All risk policy
• Error, omission, or deficiency exclusion
• Losses to temporary structures
• Builder’s risk premium structure/monetary limits
• Equipment floater insurance
• Hull insurance
• Miscellaneous construction insurance policies
• Owner provided insurance programs
• “Red flag” clauses
• Policy term
• Subrogation
• Occurrence policies
• Claims-made policies
• Escalation in premiums
Thus far, the discussion in this book has covered both prime construction contracts and three examples of closely related contracts: the labor agreement, purchase order agreements, and subcontract agreements. This chapter continues with insurance contracts, a fourth category of contract closely related to prime construction contracts.
Individual companies engaged in the practice of construction contracting are exposed to many risks and liabilities besides the monetary risk of performance of the construction work itself. For example, they are responsible for the health and safety of their employees while on the job, injury or property loss to third parties, loss or damage to construction work in place but not yet accepted by the owner, and loss or damage to their construction equipment. Additionally, they bear liability stemming from indemnification clauses in prime contracts, purchase orders, and subcontracts (see Chapters 5 and 7). These risks and liabilities are so large that contractors must purchase insurance policies to protect, or partially protect, themselves. Some of these policies are required by the terms of prime contracts and subcontracts or by statute. In many respects, construction industry insurance contracts are similar to those used throughout the business world. However, these contracts also contain provisions unique to the construction industry.
The primary parties to construction-related insurance policies consist of the construction contractor (the insured) and an insurance company who provides the required insurance coverage (the carrier, sometimes referred to as “the company”). Additional parties are sometimes named as additional named insureds.
The following are the important individual policies for the construction industry:
• Worker’s compensation and employer’s liability policies
• Public (or third-party) liability policies
• Builder’s risk policies
• Equipment floater policies
• Miscellaneous policies for special situations and needs
Each of these policies insures against loss from a different kind of risk or liability.
Worker’s Compensation and Employer’s Liability Policies
Virtually every state in the nation imposes a statutory liability on employers, including construction employers, in the event that their employees are injured or killed in the course of performing their employment duties. The liability is absolute and does not depend on the circumstances of an occurrence or who is at fault. The dollar amount of the liability to the employer for any specific occurrence is normally established by statute for the particular state involved.
The essence of the worker’s compensation and employer’s liability insurance contract is that the insurer agrees, for a price (the “premium”), to
1. Assume the liability imposed on the insured contractor employer by the worker’s compensation laws of the state named in the policy when an employee is injured or killed.
2. Assume any other liability that may flow to the contractor employer related to injury or death of employees.
Worker’s Compensation Section
The liability assumed by the insurer under the worker’s compensation section of the policy is that liability defined by the statute for the particular state or states named in the policy. The monetary amount of this liability is the benefit level set by the state worker’s compensation commission, a regulatory body set up by the statute. The worker’s compensation commission normally sets the premium level that the contractor will pay the insurer for obtaining coverage, although this method of setting premiums is presently undergoing revision in some states, particularly in California.
Employer’s Liability Section
Under this section of the policy, the insurer agrees to assume any other liability that the insured employer may have in addition to those imposed by the worker’s compensation law. Under the worker’s compensation statutes of the various states, the employer’s liability to the employee is limited to the benefit level stated in the statute. The employer cannot be sued by the employee for additional compensation. Therefore, an injured employee or the heirs of an employee who was killed may, in addition to collecting the statutory benefits, sue the owner, architect/engineer, or construction manager for the construction project where the employee was working. In this case, an indemnification clause in the prime contract would create a liability for the contractor, which would be independent of the contractor’s liability to the employee created by the worker’s compensation statutes.
Such an indemnification clause requires the contractor to “indemnify and hold harmless” the owner, architect/engineer, and construction manager, which means that the contractor would have to defend all such employee lawsuits brought against these entities and, if a judgment were awarded, pay the judgment. It is this potential additional liability of the contractor that the insurance carrier assumes under the employer’s liability section of the policy.
USL&HW Act and the Jones Act
Construction projects involving maritime operations on or over navigable streams and rivers come under the jurisdiction of two federal laws with substantially higher benefits than the state worker’s compensation statutes. These laws are the U.S. Longshoremen’s and Harbor Workers’ Act (USL&HW Act) and the Jones Act (for the crews of marine vessels). Workers (or their heirs) covered by these laws who are injured or killed on the job may elect to be paid benefits under the federal law rather than under the state worker’s compensation statutes. Federal benefits are higher, and, therefore, the premiums for insurance coverage are also higher than for the same kinds of work not performed on or over navigable streams or rivers.
Premium Structure
Worker’s compensation and employer’s liability insurance base premiums are usually stated in terms of a percentage of payroll for each particular labor classification involved (for example, so many dollars per \$100 of payroll). This is true for every state except the state of Washington, where premiums are calculated in terms of dollars per worker-hour. Except in Washington, open shop contractors, who pay generally lower wage rates, have a substantial initial competitive advantage over union contractors. The worker-hour system in the state of Washington, therefore, tends to “level the playing field” in regard to the premium costs for worker’s compensation insurance.
The two principal factors governing the worker’s compensation insurance base premiums set by the various worker’s compensation commissions for each individual labor classification are:
1. The particular state where the work is being performed; and
2. The kind of work being done, which clearly bears on the likelihood of workers becoming injured or killed.
The net result is that worker’s compensation base premium rates vary widely between states and between labor classifications within each state. For example, a few years ago, the premium for the classification of general carpentry in California was \$13.91 per \$100, whereas, in Indiana, the premium was \$3.56 per \$100. At the same time, the premium in Hawaii was \$56.46 per \$100. In each case, the premium is the amount that the contractor insured has to pay the insurer to cover the contractor’s statutory liability for exactly the same kind of work.
The kind of work also affects the premium. For instance, during this same period, the rate in California for concrete sidewalk work was \$6.50 per \$100, whereas the rate for roofing work was \$30.33 per \$100, almost five times higher.
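Because these rates are simply dollars of premium per \$100 of payroll, the underlying computation is a one-line formula. The sketch below (Python) makes the state-to-state spread concrete; the payroll figure is hypothetical, while the rates are the general carpentry rates quoted above.

```python
def base_premium(payroll_dollars, rate_per_100):
    # Worker's compensation base premium: the rate is quoted in
    # dollars of premium per $100 of payroll for a labor classification.
    return payroll_dollars / 100 * rate_per_100

payroll = 250_000  # hypothetical general-carpentry payroll, in dollars

# Rates quoted in the text for general carpentry (per $100 of payroll):
for state, rate in [("California", 13.91), ("Indiana", 3.56), ("Hawaii", 56.46)]:
    print(f"{state}: ${base_premium(payroll, rate):,.2f}")
```

For the same \$250,000 of payroll, the identical work costs roughly four times as much to insure in California as in Indiana, and Hawaii is higher still.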
Premium Modifiers
Premiums are also affected by the rate modifier, a factor based on a particular employer’s safety record and previous claims history. The base premium is multiplied by the modifier, which can range from about 0.75 (very good) to 1.50 (very bad), to determine the actual premium that a particular employer pays. The use of modifiers is currently undergoing revision in some states.
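The modifier arithmetic described above is a straight multiplication against the base premium. A minimal sketch, using a hypothetical base premium and the modifier range given in the text:

```python
def actual_premium(base_premium, modifier):
    # The experience modifier scales the base premium: roughly 0.75 for
    # a very good safety and claims record, up to about 1.50 for a very
    # bad one.
    return base_premium * modifier

base = 10_000.0  # hypothetical base premium, in dollars
print(actual_premium(base, 0.75))  # very good record -> 7500.0
print(actual_premium(base, 1.50))  # very bad record  -> 15000.0
```

A contractor with a poor record can thus pay twice as much as a careful competitor for identical coverage, which is the financial incentive the modifier system is designed to create.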
Public Liability Policies
By the very nature of their work, contractors are exposed to significant potential liability for damages suffered by noncontractually-involved third parties (the general public). Depending on the nature of the project involved, these potential liabilities can be modest or of enormous magnitude. For example, building a one-story school building on the outskirts of a small country town would involve relatively little potential liability. On the other hand, performing underground utility work involving gas mains in a downtown urban location would entail great potential liability. Construction mishaps on projects of this type have resulted in explosions destroying several blocks of street, causing loss of life and millions of dollars of property damage. Construction contractors cannot afford to risk liabilities of this magnitude, so they purchase public, or third-party, liability insurance policies to protect themselves.
The essence of the public liability insurance contract is that the insurance company, in exchange for the premium, agrees to assume the liabilities of the insured contractor subject to a stated deductible amount, up to stated monetary limits of the policy. In addition to paying any judgment awarded in the event of third-party lawsuits against the contractor, the insurance company will furnish and pay for the legal defense in court of such suits.
It is important to understand that public liability insurance (often called comprehensive liability insurance or comprehensive general liability insurance) protects the insured contractor from the claims of third parties only, as distinct from claims of the owner who is one of the parties to the construction contract. Contractors have occasionally attempted to use comprehensive general liability insurance policies to cover the costs for making good any defective contract work performed by them or by their subcontractors. The following two court decisions demonstrate the futility of such attempts.
In the first case, a general contractor for a condominium project in Florida subcontracted the furnishing and erection of prestressed concrete. The subcontract provided that the subcontractor would be covered by a comprehensive general liability policy obtained by the general contractor. Two years after completion of the project, the owner sued the general contractor for breach of contract due to several construction defects. The general contractor settled the suit with the owner and then sued the subcontractor whose prestressed concrete work was part of the defective work. The subcontractor argued that they were protected by the comprehensive general liability policy. The District Court of Florida rejected this argument stating:
If insurance proceeds could be used to pay for repairing and/or replacing of poorly constructed products, a contractor or subcontractor could receive initial payment for its work and then receive subsequent payment from the insurance company to repair and replace it. Equally repugnant on policy grounds is the notion that the presence of insurance obviates the obligation to perform the job initially in a workman like manner.[1]
In the second case, a Minnesota court ruled similarly but even more forcefully. The owner of a newly constructed high-rise apartment building sued the general contractor because masonry walls on the completed project were cracking and spalling. As procured by the general contractor, the comprehensive general liability policy included an endorsement that explicitly excluded coverage for property damage to work constructed by the general contractor, but a similar exclusion for work constructed by others “on behalf of” the general contractor had been deleted from the policy. The masonry work was constructed “on behalf of” the general contractor by a masonry subcontractor. Even so, the Minnesota Supreme Court ruled that the coverage of the policy did not apply to defects in the constructed work of the project, no matter by whom it was constructed.
In defense of the owner’s lawsuit for breach of contract due to the defective masonry walls, the general contractor argued that although the insurance policy did not cover the defective walls if they had constructed the walls with their own forces, the deletion of the phrase “on behalf of” meant that the policy did cover defects in work constructed by their subcontractor. The court rejected the argument that the cost of making good any defects in the subcontractor’s work was covered by the policy, concluding instead that general liability insurance policies do not protect contractors or subcontractors against contractually assumed business risks such as failure to complete a project properly.[2]
Clearly then, comprehensive general liability insurance policies only protect the insured against the claims of third parties who have been injured or whose property has been damaged in some way by construction activities related to the referenced project.
Normal Liabilities That Are Covered
The liabilities assumed by the insurer are limited to the risk of loss or injury to third parties only, usually caused by any or all of the following:
• Construction operations at the project site
• Ownership, operation, or use of the site itself
• Operations of the insured’s subcontractors
• Automotive operations related to the work at the site
By endorsement, a special provision expanding coverage, the policy can be made to cover additional liabilities such as those resulting from injuries to others occurring after the project has been completed (“completed operations” coverage) and liabilities flowing to the insured because of some separate contract that is related to the prime contract.
Exclusions, Endorsements, and Deductibles
A number of exclusions may apply to the coverage of the policy. Common exclusions include the XCU hazards. The “X” exclusion (explosion) excludes liabilities arising from the use of explosives by the contractor or from any other kind of explosion. The “C” (collapse) exclusion excludes liabilities arising from some form of structural collapse occurring as a result of the insured’s excavation operations, pile driving, or other foundation work activity. A structural steel frame collapse caused by a rigging accident during the structural steel erection (and thus unrelated to foundation operations) would not fall within the “C” exclusion. The “U” (underground) exclusion excludes liability for damage to existing underground utilities caused by the contractor’s construction operations such as excavation or pile driving.
It is normally possible to obtain public liability insurance without some or all of the XCU exclusions (a contractor who wants real protection cannot accept them), but deleting the exclusions from the policy will result in higher premiums.
Endorsements are the opposite of exclusions. They are special provisions added to the policy that expand the coverage. Both endorsements and exclusions are matters of agreement between the contractor and the insurance company, not requirements of law.
A deductible is an amount stated in the policy that must be exceeded before the insurance company has any liability. The amount of the deductible is a matter of agreement between the contractor and the insurance company. The higher the deductible, the lower the premium that the contractor pays for the insurance.
Monetary Limits: Primary and Umbrella Policies
The contractor and the insurance company may set the monetary limits as high as they choose and agree on, although the provisions of prime contracts usually set minimum limits for the third-party liability policy that the contractor is required to carry. By custom and practice of the industry, insurance coverage involving large monetary limits is often provided through a primary policy tailored to meet the monetary limit requirements of the contractor’s prime construction contract with the owner, whereas an excess or “umbrella” policy is designed to raise the monetary limits to a much higher level. Frequently, the limits required by the provisions of the prime contract, while high enough to satisfy the owner, are not high enough to satisfy a prudent contractor. Hence, the need for the umbrella policy.
Premium Structure
The public liability premium structure for both primary and umbrella policies can be reckoned in two distinctly different ways. The contractor may pay a premium:
1. Based on payroll dollars expended in a manner similar to that for worker’s compensation insurance premiums.
2. Based on a fixed percentage of the prime contract price.
Most contractors prefer the second method since, if the estimated labor for the project should overrun, the contractor at least avoids being doubly penalized by paying more for third-party liability insurance.
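The double-penalty point can be made concrete with a short numerical sketch. All figures below are hypothetical, chosen only to illustrate how a labor overrun affects each premium method:

```python
# Hypothetical comparison of the two public liability premium methods
# when estimated labor overruns. All figures are illustrative only.

def payroll_based_premium(actual_payroll, rate):
    """Method 1: premium reckoned on payroll dollars actually expended."""
    return actual_payroll * rate

def contract_based_premium(contract_price, rate):
    """Method 2: premium reckoned as a fixed percentage of the prime contract price."""
    return contract_price * rate

contract_price = 10_000_000
estimated_payroll = 3_000_000
actual_payroll = 3_600_000          # a 20% labor overrun

# Rates chosen so both methods cost the same if the labor estimate holds.
payroll_rate = 0.01
contract_rate = payroll_based_premium(estimated_payroll, payroll_rate) / contract_price

as_bid = payroll_based_premium(estimated_payroll, payroll_rate)
method_1 = payroll_based_premium(actual_payroll, payroll_rate)    # grows with the overrun
method_2 = contract_based_premium(contract_price, contract_rate)  # fixed regardless

print(method_1 - as_bid)  # 6000.0 of extra premium under method 1
print(method_2 - as_bid)  # 0.0 extra premium under method 2
```

Under method 1 the contractor who has already absorbed a labor overrun also pays more insurance premium on that same overrun; under method 2 the premium is fixed at bid time, which is why contractors prefer it.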
Contractors can often obtain lower premium charges for public liability insurance by arranging a “retro” insurance policy with their insurance carrier. Under this arrangement, more favorable premium rates are offered for a low claim/loss record for previous years, whereas an unfavorable claims/loss record will cause future premiums to rise. This system is similar to the premium modifier system used on worker’s compensation insurance, previously discussed.
Definition of Occurrence
The term occurrence has special meaning in the insurance industry. An occurrence is an event that gives rise to a claim that the insurance company must pay. Ordinarily, the occurrence is an accident from the point of view of the insured. There are some limits on the type of event that can constitute an occurrence under the terms of the policy. To constitute an occurrence, the event must be something that was neither “intended” nor “expected” by the insured. For example, a contractor who failed to take precautions against gravel spilling from trucks in a haul operation over public roads, knowing that the gravel would spill but counting on insurance to cover the cost of any broken windshields that would result, may find the insurance company refusing to pay claims for broken windshields on the grounds that the contractor “expected” the gravel to spill and, by doing nothing to prevent it, “intended” that the gravel would spill.
Along these same lines, once a particular occurrence has taken place, the insured has a duty to do everything reasonably possible to ensure that the same occurrence does not happen again. If it does reoccur and the insured cannot show that everything reasonably possible was done to prevent the reoccurrence, the insurance company is not likely to pay.
P & I Policies
Public liability policies covering marine operations are called protection and indemnity policies (P & I policies). They operate in essentially the same way as third-party policies covering land-based operations.
Builder’s Risk Policies
A third major type of construction insurance policy is builder’s risk insurance, sometimes called “installation floater” insurance. According to the terms of most prime contracts, the risk of physical loss to construction work put in place by a contractor rests with the contractor until the work is completed and accepted by the owner. This risk can be enormous in dollar terms. Builder’s risk insurance can be obtained to cover all or part of this risk.
The essence of a builder’s risk policy is that the insurer, for a price (the premium), agrees to assume the risk of physical damage to or loss of work in place and will pay the insured contractor the value of the work that was lost or damaged, subject to any agreed deductible, up to the monetary limits of the policy.
Limitation on Policy Coverage
The insurance company’s liability is limited to the value of the work that was lost up to the monetary limit of the policy, but does not include consequential damages, such as lost time or increased cost of performance, that the insured contractor may have suffered as a result of the loss. For example, if a fire destroys a school building project while it is under construction, the contractor’s builder’s risk insurance would pay for the cost of replacing the construction work lost—that is, the proximate costs. However, the contractor would not recover any of the extra costs that would be incurred due to the extra time required because the work had to be repeated, any increased costs due to labor and material escalation on later work, or other increased costs of that type.
Named Peril v. All Risk Policies
Builder’s risk insurance can be obtained as either a named peril or an all risk policy. The named peril policy, as its name implies, insures against loss only for those risks or perils, such as fire or flood, named in the policy. The all risk policy protects against loss caused by any risk or peril, subject only to any exclusions named in the policy.
Exclusions and Deductibles
There usually will be exclusions in builder’s risk policies, even in the all risk type of policy. Some of the more common exclusions include the following:
• Loss due to strikes, lockouts, war, riot, and so on
• Loss due to court orders and ordinances
• Loss due to occupancy or use by the owner
• Any portion of a loss resulting from the insured contractor’s failure to take reasonable precautions to limit the extent of the loss
• Loss due to an error, omission, or deficiency in the owner’s design of the project or the owner’s architect/engineer’s design of the project
When written in the form just stated, this last exclusion would not apply to losses due to an error, omission, or deficiency in any of the contractor’s operations and would not apply to losses due to negligence of the contractor’s employees. The logic behind the exclusion applying to losses caused by errors, omissions, and deficiencies in the work of the owner, or of the A/E engaged by the owner, but not to losses caused by similar failings of the contractor requires explanation.
Builder’s risk insurance is basically contractor’s insurance, although it is often procured by the owner. Contractors need the insurance because they cannot count on their forces being error free. However, the policy is not intended to underwrite the owner’s work or the work of an A/E engaged by the owner. These entities usually purchase separate errors and omissions insurance to protect them from the consequences of faults in their work. The contractor does not need protection against faults in the owner’s work because, under the terms of the prime construction contract, losses due to the fault of the owner or A/E are the owner’s responsibility, not the contractor’s. The loss of a completed roof structure because of a collapse caused by an error in the A/E’s structural calculations would not be covered by the builder’s risk policy and would be the responsibility of the owner. On the other hand, if the roof collapse was due to a rigging accident, in turn caused by the failure of the contractor to install an adequate temporary guying system, the loss would be the contractor’s responsibility and would be covered by the contractor’s builder’s risk policy if the exclusions to the policy were stated in the form shown previously.
Of course, if the exclusions were stated in a more restrictive form with respect to the contractor’s operations or those of subcontractors, the coverage of the policy would be altered considerably. An example of a more restrictive form of exclusion is afforded by a 1979 Wisconsin case in which a general contractor constructing a dormitory had purchased a builder’s risk policy that contained an exclusion for “loss or damage caused by faulty materials, improper workmanship or installation, errors in design or specifications.” During construction, a retaining wall constructed by a subcontractor collapsed, causing extensive damage to the dormitory. When the insurance company refused to pay the general contractor’s claim against the builder’s risk policy, the general contractor sued, arguing that there had been no faulty workmanship on their part and, therefore, the policy should cover the costs of repairing the damage. A trial court granted summary judgment for the insurance company—meaning that, as a matter of law in view of facts that were not in dispute, the insurance policy did not cover the occurrence, and a trial was, therefore, not necessary.[3]
On appeal, the Wisconsin Supreme Court affirmed the trial court’s decision that the policy did not cover faulty construction work no matter whether performed by the general contractor or a subcontractor but sent the case back to the trial court to determine whether faulty work on the part of the subcontractor was, in fact, the sole cause of the collapse. That question required a trial for its determination.[3]
Thus, unlike the rigging accident scenario described before, the policy in this case did not cover occurrences that were the result of faulty construction work. The exact language of the exclusions is clearly a matter of great importance.
Like public liability insurance, builder’s risk insurance claim payments are usually subject to a deductible amount that must be exceeded before the insurance company has any liability. The amount of the deductible is a matter of agreement between the insurance company and the contractor. Higher deductibles result in lower premiums that the contractor pays for the insurance.
Temporary Structures
Builder’s risk policies traditionally cover losses to the contractor’s temporary structures in addition to losses to the permanent work. The cost of replacing such structures as equipment shops, falsework, cofferdams, and access bridges or trestles in the event of their loss can be very large, and it is important to the contractor that these structures are included in the builder’s risk policy. However, even though temporary structures are included in this manner, builder’s risk policies do not cover the contractor’s construction equipment or tools.
To put this into perspective, consider what builder’s risk coverage would mean in practice after the occurrence of the following hypothetical rigging accident. Because of a rigging failure, a gang form that is being erected by a truck crane is totally destroyed after falling through an access walkway, and the falling gang form then destroys a previously poured concrete floor slab on a project under construction. The truck crane turns over and is also wrecked. Which losses would be covered by a typical builder’s risk policy?
Replacement of the permanent floor slab would obviously be covered, since it is part of the permanent project being built under the construction contract. In addition, replacement of the access walkway and the gang form would be covered because they are temporary structures erected at the site by the contractor for the process of building the permanent work. However, repair of the contractor’s crane would not be covered because it is a unit of construction equipment specifically excluded from coverage.
Premium Structure
Two methods are commonly used to determine the premiums for builder’s risk insurance. The first provides for periodically increasing premium payments, with the amount of each premium payment increasing as the value of the work actually in place—and thus at risk—increases. This method is logical but more complicated than the second method, which provides for flat premium payments for the duration of the project. The flat premium payment is equivalent to the average of the periodically increasing premium payments determined by the first method. With the second method, the insured overpays with respect to the actual risk in the early stages of the project and underpays in the latter stages. The second method is the prevalent method in use in the industry. The premium is usually stated as a percentage of the full contract price.
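The equivalence of the two premium methods can be sketched numerically. Assuming, purely for illustration, that the value of work in place grows linearly over a twelve-month project:

```python
# Builder's risk premiums: periodically increasing payments vs. a flat
# payment equal to their average. All numbers are hypothetical.

contract_price = 12_000_000
months = 12
monthly_rate = 0.001  # monthly premium as a fraction of work value at risk

# Method 1: each month's premium is based on the value of work in place,
# assumed here to build up linearly over the project.
work_in_place = [contract_price * m / months for m in range(1, months + 1)]
increasing_premiums = [v * monthly_rate for v in work_in_place]

# Method 2: a flat monthly premium equal to the average of method 1.
flat_premium = sum(increasing_premiums) / months

total_1 = sum(increasing_premiums)
total_2 = flat_premium * months
print(abs(total_1 - total_2) < 1e-6)           # True: same total cost overall
print(flat_premium > increasing_premiums[0])   # True: insured overpays early...
print(flat_premium < increasing_premiums[-1])  # True: ...and underpays late
```

The flat method trades exactness for simplicity: the totals match, but the timing of the payments no longer tracks the value actually at risk.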
Monetary Limits of Policy
The amount of the premium is influenced to some extent by the monetary limits of the policy. The policy can be written for the full contract price or for an amount less than the full contract price. The premium for coverage up to a limit less than the full contract price would be somewhat less.
Reasons for Carrying Builder’s Risk Insurance
Many prime contracts require the contractor to carry builder’s risk insurance with stated minimum monetary limits. In this case, the contractor has no choice and must obtain the insurance. A contractor who has the option usually evaluates the risk, compares the costs and probability of occurrence of possible losses against the certain cost of the policy premiums, and makes the decision on that basis. Obviously, the nature of the project is a major influence on the decision. A contractor whose contract consisted solely of constructing a large pedestal-type concrete foundation for a steam turbine at an open rural site would be exposed to virtually no risk of loss, but if the project consisted of a multi-story, wood, low-cost housing complex in a congested urban setting, the exposure to risk would be relatively high.
The more costly that builder’s risk coverage becomes, the more it becomes a factor in competing for construction work. Wealthy companies possess the ability to “self-insure,” and in cases in which the contract documents do not require the contractor to carry builder’s risk insurance, these companies have a cost advantage over smaller companies who must purchase the insurance to avoid catastrophe in case of a loss.
Equipment Floater Policies
The fact that builder’s risk policies exclude construction equipment leads to the fourth major kind of construction insurance called equipment floater insurance. This type of policy protects the contractor against physical damage or loss to tools and construction equipment (including theft). On many kinds of construction projects, the contractor’s tools and equipment can be exposed to considerable risk of loss or damage. In this type of insurance contract, the insurance company agrees to make good any loss or damage to the contractor’s equipment subject to any agreed deductible up to the policy limits. Equipment floater policies, like builder’s risk policies, can be all risk policies or may insure only against certain named perils. All risk policies are more common.
Method of Determining Loss
The policy stipulates the method of determining the value of a total loss to the contractor’s equipment. Commonly used methods include the following:
• Under the replacement value method, the insurer will pay the cost of replacing the lost unit with an equivalent new unit.
• Under the book value method, the insurer will pay only the depreciated value of the unit on the contractor’s books at the time of the loss.
• Under the pre-agreed value method, the insurer will pay a value pre-agreed by the parties and stated in the policy for each unit of equipment insured.
The book value method is the most common.
Premium Structure
Equipment floater insurance premiums are usually reckoned as a percentage of the value of the covered equipment per year (usually the book value). As the equipment depreciates, the book value is less, and the premiums decline.
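As a rough illustration of this premium structure, the sketch below assumes straight-line depreciation; the actual depreciation method and rates depend on the contractor’s accounting and are not specified in the text:

```python
# Equipment floater premiums reckoned as a percentage of book value.
# Straight-line depreciation is assumed purely for illustration.

purchase_price = 500_000
salvage_value = 50_000
useful_life_years = 5
premium_rate = 0.02  # hypothetical: 2% of book value per year

annual_depreciation = (purchase_price - salvage_value) / useful_life_years

# Book value at the start of each policy year, and the premium it implies.
book_values = [purchase_price - annual_depreciation * y
               for y in range(useful_life_years)]
premiums = [bv * premium_rate for bv in book_values]

for year, (bv, p) in enumerate(zip(book_values, premiums), start=1):
    print(f"Year {year}: book value {bv:,.0f}, premium {p:,.0f}")
```

As the equipment depreciates from \$500,000 to \$140,000 on the books, the annual premium in this sketch falls from \$10,000 to \$2,800.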
Equipment Floater Insurance for Marine Equipment Operations
Equipment floater insurance is called hull insurance when it covers permanently floating marine equipment, such as barges, tugs, and dredges. When land-based equipment is working from barges on water—for example, a crawler crane working from crane mats on a barge deck—the equipment on the barge is normally insured under the equipment floater policy, but at a higher premium for the time that the equipment is on the water. The barge would be covered by a separate hull insurance policy.
Evaluating the Need for Equipment Floater Insurance
A final point on equipment floater insurance concerns the need for the insurance. The chances of damage or loss for certain kinds of equipment may be relatively high, whereas, for others, it is virtually nil. For example, contrast the previously mentioned example of the crawler crane operating on crane mats on a flat-decked work barge on a river bridge crossing with a service crane in the contractor’s home-base shop or yard complex. The former certainly would be insured, whereas the latter probably would not. The need for insurance is highly project-specific and use-specific. When the purchase of new equipment is financed by pledging the equipment as collateral for the loan, the lending institution will insist that the equipment be insured to its full value.
Miscellaneous Policies for Special Situations
A final construction insurance category consists of the miscellaneous construction insurance policies that are sometimes obtained by construction contractors. Chief among these are:
• Railroad protective insurance (usually required when working over or near a railroad right-of-way), in which the railroad is insured against loss caused by contractor’s operations.
• Transit insurance, which covers loss or damage when items are being transported (refer, for instance, to the TBM incident related in Chapter 7).
• Business interruption insurance, which is intended to cover the costs incurred by a contractor when normal business is interrupted by some event beyond the contractor’s control. This type of insurance is seldom purchased because it is so expensive.
• Fidelity and forgery insurance, which is intended to replace losses due to malfeasance in the office place by a contractor’s own employees, such as theft or forgery—that is, so-called “white collar” crime.
Owner-Provided Insurance Programs
Under owner-provided insurance programs, the owner furnishes all or part of the insurance policies required for a project at no charge to the contractor. The project bidding documents will include the scope of the insurance coverage provided. For instance, the AIA form of contract provides that the owner will provide builder’s risk insurance for the full value of the contract work. It is not uncommon for large public owners to provide a complete package consisting of worker’s compensation insurance, major public liability insurance, and builder’s risk insurance. This latter arrangement is often called a “wrap-up” insurance program.
Under this arrangement, the contractor excludes from the bid all premium costs for the insurance designated to be provided by the owner, including premium monies only for insurance policies required by law or by the terms of the contract that the owner is not providing. The bid may also include the premium costs for any additional insurance that the contractor considers necessary on and above that provided by the wrap-up program.
Contractors generally do not favor owner-provided wrap-up insurance. They prefer to control the insurance procurement process so they can benefit from the bargaining power derived from a favorable claims/loss history and their past relationships with a particular insurance company. Under owner-provided wrap-up insurance arrangements, contractors with low claims or loss experience lose the benefit of their superior past performance.
“Red Flag” Insurance Provisions
As with all contracts, insurance policies contain certain provisions that are particularly important to both construction contractors and their insurance carriers. An adequate comprehension of the protection actually provided by the policy is impossible unless these provisions are completely understood; hence, the following discussion of the principal “red flag” clauses.
Named Exclusions
Probably the most important of all of the “red flag” provisions is the named exclusion section of the contract. This section includes a clear statement of the particular risks and hazards that are excluded from the coverage of the policy; however, insureds sometimes overlook important exclusions simply because they do not read the policy carefully.
Additional Named Insureds
An insurance policy may, by endorsement, add other insured parties in addition to the insured contractor. These other insured parties are called additional named insureds. The named parties then have the same insurance coverage as the insured contractor. Adding additional named insureds increases the risk that the insurer is assuming, and thus also increases the premium paid by the insured.
As previously discussed, the indemnification clause in prime construction contracts states that the contractor promises to indemnify and hold harmless the owner, the architect/engineer (and, sometimes, the construction manager as well) from the claims of third parties arising out of any act or failure to act of the contractor. Most contractors cannot afford to accept the risk that this clause imposes on them and must obtain insurance coverage. By naming the owner, the architect/engineer, and construction manager as additional named insureds in the public liability insurance contract, the contractor obtains this necessary protection.
As discussed earlier in this chapter, the contractor meets the requirements of the prime contract indemnification clause with respect to claims against the owner and A/E by contractor’s employees who become injured on the job (or by the heirs of workers who are killed) through the employer’s liability section of the worker’s compensation policy.
Deductibles
A deductible is an amount stated in the policy that, in the event of a loss, the contractor must pay or absorb “off the top,” leaving the balance of the loss for the insurance company’s account. If the amount of the loss is less than the deductible, the insurance company has no liability. This provision is common in the insurance industry and results in lower premiums than if the entire loss was paid by the insurance company. However, the insured should be aware of the magnitude of the deductible in a particular case in relation to the premium paid for the insurance.
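The allocation just described can be sketched in a few lines; the dollar amounts are hypothetical:

```python
# How a deductible allocates a loss between insured and insurer, as
# described in the text: the insured absorbs the deductible "off the
# top," and the insurer pays the balance up to the policy limit.

def insurer_payment(loss, deductible, policy_limit):
    """Insurer's liability for a given loss."""
    if loss <= deductible:
        return 0.0  # loss below the deductible: insurer owes nothing
    return min(loss - deductible, policy_limit)

print(insurer_payment(4_000, 5_000, 1_000_000))      # 0.0 (below deductible)
print(insurer_payment(50_000, 5_000, 1_000_000))     # 45000
print(insurer_payment(2_000_000, 5_000, 1_000_000))  # 1000000 (capped at limit)
```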
Policy Term
An additional important clause deals with the time period, or term, that the insurance policy is in effect. The policy term is stated either as fixed dates between which the coverage is in effect or, alternately, as running until the prime contract work is completed and accepted by the owner. Thus, depending on how the term is stated, the period during which coverage is in effect may differ considerably if delays occur and the work takes longer than the contractor planned when the coverage was placed. If the policy term is stated in the fixed date form and expires before the project is finished, the contractor must renew the policy to continue the insurance coverage. If the contractor should inadvertently fail to renew, the result could be catastrophic. When the policy is renewed, the contractor has to pay an additional premium. On the other hand, if the policy is written to remain in effect until the project is completed and accepted, the contractor is fully protected for the original premium, even if there are delays.
Subrogation
Another “red flag” clause concerns subrogation. An insurer who has been granted subrogation rights literally “stands in the insured’s shoes” with regard to any right or remedy that the insured may have against the party who was at fault causing claims to be made. This means that once the insurance company has paid a claim or judgment entered against the insured, they are free to attempt to recover the money paid by suing (in the name of the insured) the original party at fault. They have this right whether the insured wants its name to appear in such a lawsuit or not. Contractors may not want to be named as plaintiffs in lawsuits in which they no longer have a direct interest. If this is the case, it is important when they consider a potential insurance contract to note whether or not the insurance company has been granted subrogation rights.
Policy Cancellations
Finally, insurance policies normally contain provisions establishing the right of the insurance company to unilaterally cancel the policy prior to the expiration of the normal term. The contractor must be sure this clause also requires the insurance company to give adequate notice before canceling so the contractor will have a reasonable opportunity to canvass the market and replace the insurance at a reasonable premium.
Recent Trends in the Construction Insurance Industry
The insurance industry has changed significantly in recent years. Two examples occurring in the mid-1980s were the emergence of the claims-made policy and a dramatic increase in premium levels coupled with reduced coverage. Fortunately, from the construction contractor’s point of view, both trends had abated by the mid-1990s.
Claims-Made v. Occurrence Policies
The traditional form of insurance policy is the occurrence policy. The insured is covered if the occurrence giving rise to the loss takes place within the term of the policy, even though the claim with respect to the loss is made after the expiration of the policy. Claims or lawsuits commonly are initiated several years after the event causing the loss occurred, often after the project has been completed and the policy has expired. Occurrence policies cover this situation, as long as the event giving rise to the loss and to the claim occurred during the policy period.
In the mid-1980s, the insurance industry aggressively promoted a different type of policy called a claims-made policy. In this form of coverage, the insurer is liable only when both the event giving rise to the claim and the claim itself occur during the policy term. If the actual claim is not made during the policy period, the policy will not cover it, even though the event giving rise to the claim occurred during the policy period. Contractors have no way to control when third parties may decide to file claims and are thus at considerable risk under this type of policy. Under a claims-made policy, contractors cannot meet prime contract indemnification requirements without putting their entire companies on the line. Although the trend toward claims-made policies is currently on the wane, contractors should be alert to their existence and note this aspect of policy coverage provisions carefully.
Premium Escalation and Diminished Coverage
The mid-1980s also saw a marked escalation in insurance premiums. The increases far exceeded price increases generally, and many smaller contractors were driven out of business. Even some larger companies experienced severe difficulties. For example, prior to 1985, one large contractor bought public liability insurance in a combined policy covering all of its projects for a premium of about 0.6% of the company’s total annual labor exposure (the total yearly labor cost expenditure for all of the company’s projects). The total annual labor exposure was about \$15,000,000, so the annual premium was \$90,000 (\$15,000,000 times 0.006). The limit of the third-party liability coverage was an aggregate amount of \$30,000,000.
By 1987, the same company paid 5% of its total labor exposure for only one-third the coverage. This meant that the annual premium increased to \$750,000 (\$15,000,000 times 0.05), and coverage decreased to an aggregate amount of \$10,000,000. Builder’s risk insurance premiums increased by about the same proportion. Fortunately, insurance premiums have since abated.
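The premium arithmetic in this example can be verified directly:

```python
# Reproducing the premium figures from the example: the same labor
# exposure, but a much higher rate and much lower coverage by 1987.

labor_exposure = 15_000_000

premium_1985 = labor_exposure * 0.006  # 0.6% of total annual labor exposure
premium_1987 = labor_exposure * 0.05   # 5% of total annual labor exposure

coverage_1985 = 30_000_000
coverage_1987 = coverage_1985 / 3      # only one-third the coverage

print(premium_1985)                 # 90000.0
print(premium_1987)                 # 750000.0
print(premium_1987 / premium_1985)  # roughly an 8.3x premium increase
print(coverage_1987)                # 10000000.0
```

In other words, the contractor paid over eight times as much premium for one-third the protection.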
Conclusion
This chapter discussed the general need for construction contractors to protect themselves from various kinds of losses by purchasing insurance policies. The common insurance policies in use in the construction industry today were listed, followed by a discussion of the kinds of protection provided by each policy and details of how the policy generally operates. The principal “red flag” clauses pertaining to insurance policies in general were briefly discussed, followed by a brief examination of some recent trends in the construction insurance field.
Chapter 9, on the subject of surety bonds, will round out this book’s discussion of closely related contracts that arise or result from the existence of prime construction contracts.
Questions and Problems
1. What are the five general categories of insurance contracts discussed in this chapter?
2. What is the essence of a worker’s compensation and employer’s liability insurance contract? Can a worker who has been injured on the job sue the employer for damages? Who can be sued? Explain the two separate kinds of liability involved. Which of these liabilities is statutory, and which is contractual?
3. What is the function of a state worker’s compensation commission?
4. What is the general import of the Longshoremen’s and Harbor Worker’s Act? The Jones Act? Are these state or federal laws? What is their relationship to state worker’s compensation laws? Which is more favorable to the worker? In what way? Can a worker receive benefits under the Longshoremen’s and Harbor Worker’s Act and state worker’s compensation laws simultaneously? Are the insurance premiums under the Longshoremen’s and Harbor Worker’s Act and the Jones Act higher or lower than those for coverage under most state worker’s compensation laws?
5. How are premium payments for worker’s compensation and employer’s liability policies usually reckoned? How is this done in the state of Washington? Which is better from the standpoint of a contractor employing union labor in a high wage rate area? Why? What two factors influence how high the basic premium (before application of a rate modifier) will be? Are the differences in premiums that could result from the first factor great or small? How great? How about the premium differences resulting from the second factor? What is a rate modifier? Can it have a significant effect on a contractor’s competitive position? How?
6. Hypothetical contractor A performs an annual construction volume of work containing a labor component of \$15,000,000 at the base pay level—that is, excluding union fringes and all forms of insurance premiums and taxes. Their average worker’s compensation insurance premium is \$19.00 per \$100 of payroll (calculated on base pay). Their experience modifier is 0.72. Hypothetical contractor B performs the same annual volume with the same labor component and same average worker’s compensation insurance premium rate, but their experience modifier is 1.45. What is the dollar difference in annual total worker’s compensation insurance premiums paid by contractors A and B?
7. What is the essence of a public liability or third-party liability insurance policy? Who are the beneficiaries under this kind of insurance? Do risks covered by this kind of insurance vary much from project to project? What were the two extreme examples discussed in this chapter? Can you think of other greatly contrasting risks that might be encountered? What about a contract for the disposal of toxic wastes from a construction site?
8. What is completed operations coverage? How may it be included in the policy? What are exclusions? What are the XCU hazards? Can they be excluded? Are they always excluded? If they are not excluded, what is the effect on the premium? What are deductibles? What relation do they have to the premium?
9. Are the monetary limits and inclusion or absence of exclusions in public liability insurance policies governed by law, or are they matters of agreement between the contractor and insurance company? What is a primary policy? An umbrella policy? What are the two different ways in which the premium for public liability insurance is reckoned? Which is the most favorable from the contractor’s standpoint? Why?
10. What is an occurrence? What is the significance of the phrase “neither intended or expected”? What duty does the contractor have with regard to the reoccurrence of an event that has resulted in a claim against the policy?
11. What is P & I insurance?
12. What is the essence of a builder’s risk policy? Does the need for builder’s risk insurance apply equally to all construction contracts? What two contrasting examples were discussed in this chapter?
13. What kind of losses are typically covered under a builder’s risk policy? What is the difference between consequential damages and proximate damages associated with a causal event? Does a builder’s risk policy respond to both? To either? If so, to which?
14. What is the difference between a named peril builder’s risk policy and an all risk policy? Can exclusions still apply to an all risk policy? If so, what are some of the common ones that might apply?
15. What is the distinction between how the errors, omissions, or deficiencies exclusion of a builder’s risk policy may be applied (depending on the wording of the exclusion language in the policy) to the insured contractor’s operations and to those of the designer when a project or portion of project is lost or destroyed during construction due to an error, omission, or deficiency? Explain the logic of this distinction.
16. What are the two ways of reckoning builder’s risk premiums discussed in this chapter? How is the premium usually stated in an insurance broker’s quotation to a contractor?
17. Is the need for builder’s risk insurance the same for different types of projects? What would be an example of a project that does not justify it? A project that does? Is builder’s risk insurance ever contractually required? Do some contractor’s “self-insure”? Which ones?
18. What is the essence of an equipment floater policy? What three methods are discussed in this chapter for determining what the policy will pay in the event of a loss? Can equipment floater insurance be a named peril or an all risk policy?
19. What is hull insurance? How is equipment floater insurance handled under one policy with respect to equipment such as crawler cranes operating from mats on floating barges as well as on dry land? Is the need for equipment floater insurance a variable, depending on the piece of construction equipment involved and the project involved? Cite some examples to illustrate your answer.
20. What are some of the types of miscellaneous insurance policies discussed in this chapter? What are the risks against which they insure?
21. What is an additional named insured? What is the relationship of additional named insureds on a public liability policy to the indemnification requirements in a prime construction contract?
22. What does right of subrogation mean with respect to an insurance contract? What is a common contractor attitude regarding an insurance company’s rights of subrogation?
23. What is an occurrence policy? A claims-made policy? Why is a claims-made policy not responsive to the needs of a construction contractor? What trends occurred in the 1980-1990 period in the insurance industry with regard to the level of premiums charged?
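Several of the review questions above reduce to straightforward arithmetic. Question 6, for example, can be checked with a short computation. This is a minimal sketch, assuming the premium formula described in the chapter (base payroll, times the rate per \$100 of payroll, times the experience modifier); the function name is illustrative:

```python
def workers_comp_premium(base_payroll, rate_per_100, modifier):
    """Annual worker's compensation premium: the manual premium
    (payroll times rate per $100) times the experience modifier."""
    return base_payroll * (rate_per_100 / 100.0) * modifier

# Contractors A and B from question 6: the same $15,000,000 base payroll
# and $19.00 per $100 rate, but different experience modifiers.
premium_a = workers_comp_premium(15_000_000, 19.00, 0.72)  # $2,052,000
premium_b = workers_comp_premium(15_000_000, 19.00, 1.45)  # $4,132,500
difference = premium_b - premium_a                         # $2,080,500
```

The calculation makes the point of questions 5 and 6 concrete: with identical volumes and rates, the experience modifier alone separates the two contractors by more than \$2 million per year.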
Key Words and Concepts
• Hired guarantees
• Surety
• Principal
• Obligee
• Guarantee
• Penal sum
• Premium
• Indemnitor
• Indemnity agreement
• Differences between surety bonds and insurance contracts
• Requirements for serving as surety
• Surety belief in principal
• Bid bond
• Performance bond
• Determination of surety obligation
• Common owner misconception about performance bonds
• Effect of excess early contract payments
• Contractor’s protection of bonding capacity
• Labor and material payment bonds
• Mechanic’s liens
• Claimants
• Used or reasonably required in performance of construction contract
• Work guarantee bonds / Lien discharge bonds
• Second-tier bonds / Third-tier bonds / Subcontract performance bonds
• Subcontract labor and material payment bonds / Material supplier bonds
Surety bonds are unique examples of important construction industry documents that flow from prime construction contracts. Essentially, they are hired guarantees. Guarantees of what? Who makes them and to whom? On whose behalf are the guarantees made? Why are the guarantees required? What happens when demands are made that the guarantees be made good? This chapter deals with these issues.
Relevant Parties and Surety Bond Terms
To understand the purpose and operation of the various construction industry surety bonds, you should become familiar with the following commonly used terms:
• Surety
• Principal
• Obligee
• Guarantee
• Penal sum
• Premium
• Indemnitor
Surety
The surety (sometimes called the obligor or the bonding company) is a financial institution possessing great wealth and stability. Sureties will be required to furnish convincing evidence of their financial strength and are often required by the terms of prime construction contracts to be registered as approved sureties and to appear as such on a published list maintained by the U.S. government.
Principal
The entity that obtains and furnishes the bond, and whose performance the bond guarantees (typically the construction contractor), is called the principal. The surety is the entity that furnishes the guarantee that the bond promises.
Obligee
The guarantee promised by the bond is made to an entity called the obligee. In the case of bid bonds and performance bonds furnished by the prime construction contractor, the owner of the project being constructed is the obligee. In the case of labor and material payment bonds furnished by the prime contractor, the owner is usually the obligee for the use and benefit of subcontractors and material suppliers. In some jurisdictions, the subcontractors and material suppliers themselves are considered to be the obligees. When performance bonds are furnished by a subcontractor, the prime contractor is the obligee.
Guarantee
The guarantee is a promise made by the surety to the obligee that, if the principal should fail to carry out fully and faithfully whatever particular duty to the obligee is stated in the bond, the surety steps in and either performs that duty or causes it to be performed by others. The exact nature of the guarantee varies, depending on the type of surety bond involved. This concept is different from that of an insurance policy where the insurer agrees to pay for a loss resulting from some unexpected catastrophe or from claims made by third parties to the construction contract. Essentially, the guarantee is a case of the surety underwriting the performance of the principal.
Penal Sum
Although the surety guarantees the performance of the principal, there is a monetary limit to the guarantee called the penal sum of the bond. The amount of the penal sum is stated in different ways, depending on the type of bond. For example, the penal sum of one type of surety bond, called a bid bond, is usually 10% of the amount of the bid, whereas the penal sum of performance bonds and labor and material payment bonds is usually 100% of the contract price. The penal sum is the upper limit of the surety’s potential financial liability to the obligee.
Premium
The premium is the fee that the principal pays to the surety in exchange for providing the guarantee to the obligee. Before 1985, bond premiums on large contracts for well-established contractors ranged from 1⁄2 to 3⁄4% of the total contract price for a package consisting of the bid, performance, and labor and materials payment bonds. The cost of the same bond package in the small contract market was between 1 1⁄2 and 2% of the construction contract price. Bond premiums have escalated, then stabilized somewhat, since that time.
Indemnitor
An indemnitor is a person or entity who promises to pay the surety back for any cost that the surety incurs if called upon to make good the guarantee. The principal always is an indemnitor. A surety often also requires personal indemnification from the officers or owners of the entity that is the principal. This concept of personal indemnification is the origin of the oft-repeated expression “going on the line.” If the principal is a subsidiary company of some other entity, the surety generally wants indemnification from the parent company as well as from the subsidiary. In other words, the surety makes sure that it is indemnified by the entity “where the money is” and from which the assets cannot be transferred by accounting manipulations in the face of an impending claim against the bond.
How do Surety Bonds Work?
The potential liability assumed by a surety greatly exceeds the premium charged for underwriting the performance of the principal. In part, the surety is operating on an actuarial basis, but additional considerations lie behind the willingness of the surety to assume the liability involved. These other considerations are explained in this section.
Indemnity Agreement
The surety bond proper is a legal instrument that results from a separate contract between the surety and the principal, in which the surety agrees, for a price (the premium), to guarantee the principal’s performance with respect to some obligation to the obligee that the principal has assumed. In this sense, the bond is the evidence that the obligee wants to see that this separate contract exists. The separate contract, which the obligee never sees directly, is called an indemnity agreement. In addition to specifying that the surety will provide the required guarantee to the obligee, the indemnity agreement will provide that the principal and all other named indemnitors who may be a party to the agreement will pay the surety back for any losses that the surety incurs in making good the guarantee.
Surety Bonds v. Insurance Contracts
What are the differences between surety bonds and insurance contracts? Under an insurance contract, the insurer agrees, for the premium, to pay damages or replace something that has been lost or destroyed as a result of the occurrence of a covered event, such as an accident or a fire (including claims made against the insured by third parties arising from these occurrences). Surety bonds are very different. In the event of a call against a surety bond, the surety’s obligations are not triggered by an event such as an accident or fire. Instead, the call against the surety’s guarantee is made as a result of some kind of alleged failure of the principal to perform.
How Good Is the Guarantee?
The guarantee is as good as the financial resources and integrity of the surety. The main requirement for serving as surety is that the entity must be perceived as having great financial strength with a history and reputation of living up to its obligations. The obligee would not have confidence in the guarantee unless these requirements were met.
Surety’s Belief in the Contractor’s Ability to Perform
The potential cost to the surety in the event of a call on its guarantee can be enormous. (The penal sum of the performance bond for the Lock and Dam No. 26 contract discussed in Chapter 4 was the full contract price of \$227 million.) Insurance contracts may also involve large risks, but there is a key difference. In providing insurance to a contractor, the insurer is betting that it can predict the likelihood of losses on an actuarial basis accurately enough so that it makes money on the average. The competence and financial strength of the contractor are not key factors in the insurer’s decision-making process.
The surety bond case is very different. Since it is the performance of the principal that is being guaranteed, the surety has to believe in the principal and be convinced that the principal has the intention, the resources, and the ability to perform. Even though protected, at least on paper, by the terms of the indemnity agreement, sureties simply will not furnish bonds to construction contractors if they have any doubt about the contractor’s ability to perform.
Bid Bonds
An important element of the bidding process is that owners have the assurance that the bidding contractor who is awarded the contract will accept and sign it and will furnish all insurance policies and additional surety bonds required by the bid documents. If the successful bidder refuses to sign the contract, the owner must accept a higher price for the work or rebid the project. Neither alternative is desirable. Even though the owner can sue the low bidder for damages, the project will be delayed, and it may not be possible to recover all of the costs. The bid bond protects the interests of the owner against this potential loss.
Bid Bond Guarantee
The guarantee of the bid bond is twofold:
1. The surety guarantees to the obligee (the project owner) that the principal will enter into the contract in the event of an award; and
2. The surety guarantees that the principal will furnish the performance bond and insurance policies required by the contract.
Bid Bond Penal Sum
The penal sum for bid bonds can be expressed in one of two distinctly different ways:
1. As either a fixed amount of money or as a percentage of the bid total, which serves as liquidated damages for failure to enter into the contract. According to the terms of the bond, the obligee does not have to prove actual damages but must merely show that the principal failed to enter into the contract and/or failed to furnish the required insurance policies and other required bonds.
2. The penal sum can be stated in the form of actual damages suffered, up to a stated limit. Here, the obligee must prove the extent of actual damages before the surety will pay on the guarantee.
The first method is more common, and 10% of the total bid is usually the stated amount. However, even though the bond is written in this form, courts sometimes limit the owner’s recovery to actual damages.
Some bid documents provide that bidders furnish a certified check in the amount of 10% of the bid price instead of a bid bond. The checks are returned to all unsuccessful bidders the day following the bid opening—and to the successful bidder when the signed contract and the required insurance policies and bonds are received by the owner.
Performance Bonds
Owners naturally want assurance that, once they have awarded a contract, the contractor will perform according to the contract’s terms. This assurance is provided by the performance bond.
Performance Bond Guarantee
The guarantee is the surety’s promise to fulfill the principal’s obligations to perform the separate contract that the principal has made with the obligee if the principal is unwilling or unable to perform. Before a call against the guarantee can be legally sustained, the obligee must clearly establish that the principal is in default of the terms of the contract. It is sometimes unclear whether the contractor principal is truly in default or if the principal’s performance or lack of performance has been caused by an act or failure to act of the owner obligee or has been the result of some other condition of force majeure. Under these circumstances, establishing a de facto (actual) default can be a complicated matter, which often is settled only in court following protracted litigation.
Surety’s Options to Make Good the Guarantee
Once convinced that the principal is truly in default, the surety has three options for making good the guarantee to the obligee. The second and third methods are more common.
1. Assist the principal to remedy the default. Ordinarily this would be accomplished by advancing funds to the principal but not taking control of the contract.
2. Take control of contract performance and complete the work by engaging another contractor or by retaining the principal and subsidizing project operations and actively directing the work.
3. Allow the obligee to complete the contract by engaging another contractor and, when the work has been completed, pay money to the obligee for any excess costs incurred in completing the work.
Penal Sum—How Much Does the Surety Pay?
The penal sum for performance bonds is usually 100% of the contract price, which is the upper limit of the surety’s monetary exposure in the event of a default. Determination of surety obligations can be reached in various ways. One method is for the surety and the obligee to negotiate a fixed amount to discharge the guarantee (up to the penal sum limit) to be paid by the surety “up front” before the balance of the work is completed. If the actual costs incurred by the obligee in completing the work exceed this amount, the surety pays nothing further. If the actual costs are less than the agreed-upon amount, the obligee keeps the difference. This method amounts to the surety buying its way out of the liability.
A second, more common procedure is for the surety to agree to pay the obligee’s actual costs to complete the contract work, less any unpaid contract balance, plus any liquidated damages that may be due under the contract. The amount that the surety must pay is limited to the penal sum of the performance bond. The way that the surety’s payment is calculated under the second method is illustrated with the following hypothetical example:
A contractor defaults on a \$5,000,000 construction contract after beginning work and receiving a total of \$1,500,000 in progress payments for work completed prior to the default. The contractor has furnished the owner with a performance bond with a penal sum equal to the full contract price. The surety agrees that the owner (obligee) should complete the work, which costs the owner \$4,750,000. In addition, the project is finally finished well after the required completion date of the original contract and, under the terms of that contract, \$300,000 in liquidated damages are due.
Unpaid Balance = \$5,000,000 – \$1,500,000 = \$3,500,000
Surety’s Obligation = \$4,750,000 – \$3,500,000 + \$300,000 = \$1,550,000
If it had cost the owner \$5,750,000 to complete the work, the surety’s obligation would increase to \$2,550,000. However, if the owner had spent \$8,750,000 to finish the work, the surety would have to pay only \$5,000,000, the amount of the penal sum of the bond, not the \$5,550,000 that would otherwise be calculated.
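The second method of reckoning the surety’s obligation can be summarized in a short sketch. The function name and structure are illustrative, not from the text; the arithmetic mirrors the hypothetical example above, including the cap at the penal sum:

```python
def surety_obligation(contract_price, payments_to_date, completion_cost,
                      liquidated_damages, penal_sum):
    """Surety's payment under the actual-cost method: the owner's cost
    to complete, less the unpaid contract balance, plus liquidated
    damages, capped at the penal sum of the performance bond."""
    unpaid_balance = contract_price - payments_to_date
    owed = completion_cost - unpaid_balance + liquidated_damages
    return min(max(owed, 0), penal_sum)

# The chapter's example: a $5,000,000 contract, $1,500,000 paid before
# the default, $300,000 in liquidated damages, and a penal sum equal to
# the full contract price.
surety_obligation(5_000_000, 1_500_000, 4_750_000, 300_000, 5_000_000)  # 1,550,000
surety_obligation(5_000_000, 1_500_000, 8_750_000, 300_000, 5_000_000)  # capped at 5,000,000
```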
Owner’s Misconception About Performance Bonds
A common misconception about performance bonds by owners is that they sometimes mistakenly feel that they have absolute power over the contractor because they hold a performance bond. They believe that the surety will immediately respond to complaints and force the contractor to do whatever the owner wants done. This expectation is not likely to be fulfilled for a number of reasons.
First, the surety will not act until and unless they believe that the principal is truly in default and will not or cannot cure the default. The surety is even less inclined to act if the principal is financially strong. The principal has indemnified the surety. If the principal has substantial assets, the surety knows that it will recover any money that it might have to pay if the owner sues to enforce performance of the bond guarantee and if the decision of the court supports the owner’s position. Therefore, there is less incentive for the surety to act immediately.
A second factor may motivate the surety not to act on the guarantee in questionable cases. Under the legal theory of subrogation, the surety has all of the contractual rights of the principal and is not likely to remedy an alleged default until all viable legal defenses to the owner’s claim of default have been investigated and found to be of no avail. If the surety should pay the obligee when there is a viable legal defense to the claim of default, it may be legally found to be a volunteer and be unable to recover the money paid to the obligee from the principal and other indemnitors.
Thus, the owner/obligee may be surprised to find that the surety sides with the contractor/principal when faced with a call on the guarantee. To collect eventually on the performance bond guarantee, the obligee must be legally correct on the facts of the alleged default, or both the principal and surety will be excused from their obligations. However, if the allegation of default is legally correct, the obligee eventually is made whole, but only up to the penal sum limit of the bond.
Excess Early Contract Payments
It has previously been mentioned that the obligee’s recovery is limited to the penal sum of the bond. However, the obligee may recover less than the excess costs required to complete a defaulted contract even though the penal sum of the performance bond has not been exceeded. Suppose that, at the time of the default, the owner had overpaid the contractor; that is, the payments made for the work completed exceeded its actual value relative to the cost of the work remaining.
Under these circumstances, the effect of excess early contract payments is that the owner/obligee may be unable to collect the full cost to complete the work less the unpaid balance of the contract, even if that total is less than the penal sum of the performance bond. The surety will contest its obligation to pay the full amount by showing that the owner improperly overpaid the contractor for the work actually performed. This will reduce the unpaid contract balance at the time of the default and increase the amount that the owner is asking the surety to pay. If the surety can prove overpayment to the contractor, it will reduce the amount to be paid to the obligee accordingly. This is a real danger to the owner in paying out on contracts where payment schedules have been heavily front-end loaded—that is, payment heavily unbalanced in favor of work items scheduled to be performed early in the contract.
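The effect of a proven overpayment fits the same payment calculation illustrated earlier in the chapter: the surety credits the overpaid amount against what the owner asks it to pay, dollar for dollar. This is a hedged sketch; the function name, structure, and the \$400,000 overpayment figure are all hypothetical:

```python
def obligation_after_overpayment(completion_cost, unpaid_balance,
                                 liquidated_damages, proven_overpayment,
                                 penal_sum):
    """If the surety proves the owner overpaid for work in place, the
    overpayment is added back to the unpaid contract balance, which
    reduces the surety's payment dollar for dollar."""
    owed = completion_cost - (unpaid_balance + proven_overpayment) + liquidated_damages
    return min(max(owed, 0), penal_sum)

# Reusing the chapter's figures: if $400,000 of the progress payments is
# shown to be an overpayment (a hypothetical amount), the surety's
# obligation drops from $1,550,000 to $1,150,000.
obligation_after_overpayment(4_750_000, 3_500_000, 300_000, 400_000, 5_000_000)
```

This is why heavily front-end-loaded payment schedules expose the owner: every dollar paid ahead of the value of the work in place is a dollar the surety may refuse to cover after a default.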
Contractor Protection of Bonding Capacity
It may appear from the preceding discussion that the performance bond provides little protection for the obligee after all. This would be more true if it were not for the necessity of contractor/principals to protect their bonding capacity. Since so much construction work is bonded, contractors must maintain their ability to obtain performance bonds. Smaller contractors particularly do not want their surety to receive complaints about their performance, jeopardizing their ability to secure bonds for future projects. For this reason, they are more likely to respond to owners’ demands when complaints over performance arise.
If being declared in default looms as a real possibility, contractors generally do everything possible to avoid the surety taking over their contracts for two main reasons:
1. If the surety should take a loss, the contractor will find it difficult to ever get another bond.
2. Since the surety is indemnified, the contractor loses control of expenditures in the event of a takeover but not the legal obligation to pay the surety back for them. As long as they can maintain operations, contractors naturally prefer to spend their money themselves rather than have the surety or the owner spend it for them.
Labor and Material Payment Bonds
Private construction contracts are subject to mechanic’s lien laws, which enable subcontractors or suppliers to file liens, or legal claims, against the project property if they are not paid by the owner’s contractor for their services or furnished materials. Payment bonds assure that such persons or entities are paid by the surety if the contractor/principal refuses or is unable to pay. If there is no bond and liens are perfected (that is, where, through a lawsuit, liability to pay the amount of the lien is established by a court), owners have to pay the claimant to avoid having their property sold to satisfy the lien. For this reason, private owners want payment bonds.
A mechanic’s lien cannot be placed against federal government or other public property. However, at the federal level, and in most states, public policy demands that subcontractors and suppliers on public projects be paid for their work. Thus, Congress has enacted the Miller Act, and many of the state legislatures have enacted “Little Miller Acts,” both of which, among other things, require labor and material payment bonds on public construction contracts, federal and state respectively.
Labor and Material Payment Bond Guarantee and Claimants
The guarantee of a labor and material payment bond is the surety’s promise to pay claimants if the principal is unable or refuses to pay them. Understanding the guarantee depends on understanding the definition of a claimant. For a person or entity to be a proper claimant, a number of conditions must be met.
First, a construction contract between the principal and the obligee must exist, and this contract must be referenced in the bond. The bond then defines claimants as those persons or entities who have contracts with the principal to perform services or furnish materials on the project pursuant to the principal’s construction contract with the obligee. Thus, someone who wants to collect on a payment bond must first be sure that this requirement is satisfied—that is, that the claimant has a contract with the principal who, in turn, has a contract with the obligee. Some payment bonds go further, including as claimants those persons or entities who have contracts with other entities that, in turn, have contracts with the principal.
In the case of federal contracts for which a Miller Act bond is required, sub-subcontractors and material suppliers to subcontractors of the principal are also treated as claimants. So are material suppliers to sub-subcontractors. Material suppliers to the principal are always considered to be claimants. However, second-tier material suppliers to first-tier material suppliers who hold a purchase order contract with the principal do not qualify.
The federal practice is mirrored in the laws of some states, as the following Kansas case illustrates. The City of Wichita awarded a prime contract to a general contractor for construction of a sewage digester. The general contractor (Penta) then awarded a contract to Wells Products Corp. for furnishing the digester floating cover and gas compressor system. Penta also furnished a public works payment and performance bond in accordance with the Kansas “Little Miller Act” statute. Wells Products Corp. failed to pay J. W. Thompson Co., one of its suppliers, who then brought a claim against the prime contract payment bond. As in federal contracts, Kansas public works payment bonds protect only suppliers to subcontractors, not suppliers to suppliers. It, therefore, became important for the trial court to decide whether Wells Products Corp. functioned as a subcontractor or as a supplier to Penta. J. W. Thompson Co., who was, in effect, a second-tier supplier, argued that Wells Products Corp. had furnished personnel on site for the purpose of assisting in adjusting and starting up the equipment and, therefore, was functioning as a subcontractor. The prime contractor claimed that the agreement with Wells Products Corp. was a sales agreement since the actual contract evidenced all of the “trappings” of a sales agreement and contained none of the clauses usually found in subcontracts. The trial court concluded that Wells Products Corp. was a subcontractor, making J. W. Thompson Co. a proper claimant under the prime contract labor and material payment bond. The Kansas Supreme Court reversed the trial court, deciding instead that Wells was a material supplier, not a subcontractor. The Kansas Supreme Court stated:
Modern conditions frequently demand a high degree of specialization in manufacturers and suppliers. The facts that Wells had the duties to inspect Penta’s installation of the components purchased from Wells, to be present at the start up, and to instruct the City of Wichita employees on the use of the gas compressor system were key factors in the trial court’s conclusion that Wells was in fact a subcontractor. But it is clear that such activities are common in the construction of sophisticated systems.
Since J. W. Thompson Co. was determined by the Kansas Supreme Court to be a supplier to a supplier, they could not recover against the payment bond.[1]
This case illustrates that, in claims against labor and material payment bonds, the determination of who qualifies as a claimant and who doesn’t can become a very complicated matter that will be decided based on the specific facts in each case and on the statutes that apply in the particular jurisdiction involved. The specific wording of the bond itself is also very important in defining who may qualify as a claimant.
Claimant status, in and of itself, does not guarantee the right to recovery under the bond. Although the wording of individual bonds may differ, most generally require that additional tests be met. Claimants must prove that they have not been paid within a period of time stated in the bond after completing services or furnishing materials and that the services or the materials they furnished were used or reasonably required in the performance of the construction contract. The phrase “used or reasonably required” originally meant literally “bricks and mortar in place at the site of the work,” and courts would exclude such items as the delivery costs of materials because they were not incurred on the work site. Modern courts tend to construe the meaning of the phrase more broadly, but many cost items are still excluded. For example, the overhead expense of a home office, expense of estimating material and subcontract quotations, and cost of negotiating and preparing purchase orders and subcontract agreements are ordinarily excluded.
Other First-Tier Bonds
A number of less common first-tier bonds are also used in the construction industry.
Work Guarantee Bonds
In a work guarantee bond, the surety guarantees that the completed construction work of the principal will meet the requirements of a warranty contained in the contract. A roofing bond, for example, could be written with respect to an explicit warranty stated in the contract that the completed roof will not leak or require replacement for a minimum of five years after it is accepted by the owner. Such contracts sometimes permit the owner to hold part of the retention until the end of the warranty period. By putting up a bond, the contractor can secure the release of the retained funds. The specific guarantee of this bond is that, if the roof leaks and the contractor either cannot or will not return and repair it, the surety will pay the cost of repairs.
Lien Discharge Bonds
In a lien discharge bond, the surety guarantees that the principal pays the obligee in the event that the obligee is compelled at some future date to satisfy a lien placed on the facility constructed by the project because the principal had not paid the lien claimant. A prime contractor who refused payment to a supplier or subcontractor who had filed a lien because of a dispute over that party’s performance would obtain a lien discharge bond. This bond protects the owner against a possible adverse judgment on the lien placed on the owner’s property by the subcontractor or supplier. Therefore, the owner need not withhold money from the final payment to the prime contractor to protect its interests. If the contractor had previously furnished a payment bond, the owner already has that guarantee and would not require an additional lien discharge bond.
Subcontract Bonds and Material Supplier Bonds
All of the bonds previously discussed have been first-tier bonds. Many possible lower levels, or tiers, of contracts can relate to the same project. For example, the prime contractor’s subcontractor may subcontract a portion of the subcontract work to yet another entity. This situation gives rise to the need for subcontract second-tier bonds and sub-subcontract third-tier bonds.
Typically, the counterparts of first-tier bonds that can be obtained for lower tiers are subcontract performance bonds and subcontract labor and material payment bonds. Each serves the same purposes and operates in the same general manner for lower tiers as for the first tier. Essentially, the parties simply change seats and shift down one tier. For example, in the case of a subcontract performance bond, the contractor becomes the obligee instead of the owner, the subcontractor becomes the principal instead of the contractor, and so on. The surety position would remain the same as for a first-tier bond. The wording of lower-tier bonds differs slightly from that of first-tier bonds, but they operate the same way.
Material supplier bonds are another example of lower-tier bonds that are available. Generally, they are intended to cover claims from a subcontractor (or material supplier) against a material supplier holding a material supply contract with the prime contractor. If such material suppliers fail to pay their own material suppliers or subcontractors, the surety would respond.
Determining the need for lower-tier bonds requires business judgment as well as legal advice from attorneys knowledgeable in the bonding field.
Conclusion
This chapter explained that surety bonds are very different from insurance policies in that they are essentially hired guarantees rather than protection from the consequences of some physical catastrophe. Relevant surety bond terms and the way in which surety bonds work were then discussed. The details of operation of the bonds in common construction industry use followed, including subcontract and material supplier bonds.
The following Chapter 10, on construction joint-venture agreements, concludes the examination of the normal industry contracts that are closely related to the prime construction contract.
Questions and Problems
1. What is a surety? Who is a principal? An obligee? What is the general nature of a surety bond guarantee? What is a penal sum? The premium? Who is an indemnitor?
2. How do surety bonds work? What is the essence of a surety bond? What is an indemnity agreement?
3. How do surety bonds differ from insurance contracts?
4. What is the primary requirement for an entity to serve as a surety? What belief must a surety hold with regard to a principal before the surety will furnish a bond?
5. What is the purpose of a bid bond? What does the guarantee of the bond promise? What are the two parts of a bid bond guarantee? What are two ways of stating the penal sum for bid bonds? Which way is more common?
6. What is the purpose of a performance bond? What does the guarantee of the bond promise? What fact must be clearly established before a surety can properly be expected to make good on the guarantee?
7. What three options does a surety normally have once it becomes convinced that the principal is in default? Which options are more commonly utilized? When the surety agrees to pay money to the obligee, what two alternate means are used to determine the surety’s obligation? Which of these latter two methods is the more common? What is the top limit of the surety’s obligation in any case?
8. What is the misconception that owners sometimes hold about performance bonds? Is the surety as likely to act to cure an alleged default if the principal is financially strong and has the means to cure the default? Why not? What is another factor that a surety will carefully consider in making the decision on whether to cure an alleged default? If an owner is legally correct in alleging that the principal is in default, will recovery under the bond ultimately be realized?
9. Even when the surety agrees to cure a default by paying the owner/obligee money, under what two circumstances could the owner be paid less money than the excess costs expended in completing the contract?
10. What is a typical construction contractor’s mindset in regard to bonding capacity? Why will contractors at all costs try to avoid a surety’s taking over performance of their contract?
11. What is the reason that labor and materials payment bonds are required by owners on projects subject to mechanic’s lien laws? Why are they required on public contracts that are not lienable? What are the Miller Act and “Little Miller Acts”? What do they provide with regard to payment bonds?
12. What is the guarantee of a labor and materials payment bond? Who is a claimant? What does an entity who has not been paid have to establish in order to be considered a claimant?
13. Payment can be claimed for what kind of things under a payment bond? What did the words “used or reasonably required” originally mean to courts? What do the words mean today?
14. What is a work guarantee bond? How is it used? What is the guarantee? Answer the same three questions with respect to a lien discharge bond. What are second-tier bonds? What are the shifts in position of the various parties from first-tier bonds to second-tier bonds?
15. A contractor entered into a \$6,755,000 subcontract with a subcontractor, who furnished a performance bond of 100% of the subcontract price. After the contractor paid the subcontractor \$5,252,000 for work completed, the subcontractor became bankrupt, and the contractor terminated the subcontract for default. With the agreement of the surety, the contractor engaged another subcontractor who completed the work of the original subcontract for a total additional cost to the contractor of \$3,927,000. The work was completed 185 calendar days later than the subcontractor was contractually bound to complete the original subcontract. The liquidated damages for the original subcontract were \$1,500 per calendar day. The subcontractor’s surety refused to pay the amount of its normal obligation on the grounds that the contractor had grossly overpaid the subcontractor for work completed. The contractor sued the surety. At the trial, the court determined that the contractor had, in fact, overpaid the subcontractor prior to the default by \$1,403,000.
1. What is the actual unpaid balance of the subcontract in dollars at the time of the default?
2. What should the unpaid balance have been in dollars at the time of the default according to the ruling of the court?
3. What would the surety’s monetary liability in dollars have been if the court had ruled that there had been no overpayment to the subcontractor prior to the default?
4. What was the surety’s monetary liability in dollars in view of the court’s actual decision?
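The arithmetic of problem 15 can be organized as a short calculation. This sketch assumes the conventional measure of the surety’s money obligation discussed earlier in the chapter (the excess cost of completion over the unpaid subcontract balance, plus liquidated damages, with the surety credited for any overpayment and liability capped at the penal sum); it is an illustration, not an authoritative solution.

```python
# Figures from the problem statement (all in dollars)
subcontract_price = 6_755_000
paid_before_default = 5_252_000
completion_cost = 3_927_000        # paid to the replacement subcontractor
overpayment = 1_403_000            # overpayment found by the court
liquidated_damages = 185 * 1_500   # 185 days late at $1,500 per day

# 1. Actual unpaid balance of the subcontract at the time of default
actual_balance = subcontract_price - paid_before_default
# 2. Balance that should have remained, per the court's ruling
proper_balance = actual_balance + overpayment
# 3. Liability had there been no overpayment: excess completion cost
#    over the unpaid balance, plus liquidated damages
no_overpayment_liability = (completion_cost - actual_balance) + liquidated_damages
# 4. Under the actual ruling, the surety is credited with the overpayment
actual_liability = no_overpayment_liability - overpayment

print(actual_balance, proper_balance, no_overpayment_liability, actual_liability)
# 1503000 2906000 2701500 1298500
```

The penal sum here is the full subcontract price of \$6,755,000, so the cap does not come into play.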
1. J. W. Thompson Co. v. Wells Products Corp., 758 P.2d 738 (Kan. 1988).
Key Words and Concepts
• Joint and several liability
• Conventional joint ventures
• Item joint ventures
• Formation and termination of joint ventures
• Agreement on terms of bid
• Participation formula
• General management matters
• Managing partner
• Management fees
• Management committee
• Working capital
• Capital calls
• Failure to meet capital calls
• Return of capital
• Investment of excess funds
• Accounting matters
• Bond and indemnification matters
• Insurance
• Bankruptcy of a partner
• Equipment acquisition provisions
• Item/conventional joint-venture similarities and differences
Two or more construction contractors sometimes compete for a particular project as a joint venture by pooling their resources and sharing the risk and potential profit. Several factors make this practice attractive. Joint venturing may make it possible for a single contractor to participate in a bid on a project that would otherwise be too large a risk. Since each partner in a construction joint venture usually prepares an independent cost estimate for the performance of the project, risk is reduced by basing the joint-venture bid on more than a single cost estimate—that is, two (or more) heads are better than one.
To submit a joint-venture bid, two or more contractors form a new and separate legal entity to submit the bid and, if the joint-venture bid is successful, this entity then executes the ensuing construction contract. The concept is not unlike that of a partnership between individuals, except that entire companies are involved as partners.
When a joint-venture entity submits a successful bid, a series of issues are created that affect both the owner of the project and the joint-venture partner companies. For instance, who is responsible to the owner for contract performance? Who is liable in the event of a default? The partners, in turn, need to decide how liability as well as profits are to be shared between them, where they will get their working capital, who will direct the day-to-day activities on the job, who will own the joint venture’s assets, and what will happen if an individual partner cannot or will not meet its obligations to the other partners.
The subject of this chapter—the joint-venture agreement—is the construction industry document that addresses these concerns. This agreement is a contractor’s contract. Although it refers to the owner and the prime construction contract to which the joint-venture entity and the owner are the parties, it is not a flow-down agreement. Usually, the joint-venture agreement will predate the prime contract.
Joint and Several Liability
The fundamental principle behind joint-venture agreements is that the partners agree to be jointly and severally liable with respect to the duties, obligations, and liabilities of the joint venture. Joint and several liability means that, if the other partners are unable or unwilling to meet their shares of joint-venture obligations, each partner company can be held liable not only for its own share but for the other partners’ shares as well, that is, for the joint venture’s total obligation. Without joint and several liability, owners would not award construction contracts to joint ventures.
Conventional v. Item Joint Ventures
Two basic types of joint-venture arrangements are common in the construction industry: conventional joint ventures and item joint ventures. In a conventional joint venture, the partners (two or more in number) agree to share benefits and liabilities according to a participation formula, with each partner accepting its specified share of each according to the formula (subject to the previously explained principle of joint and several liability). Usually, in a conventional joint venture, the actual on-site construction work will be performed by the field forces of just one of the partners. The cost of providing the field forces and other costs of actual construction are charged to the joint venture. In item joint ventures, each partner (usually only two) agrees to be responsible for a separate physical part of the contract work. Each partner constructs that separate part with their own individual field forces according to the contract specifications, incurs separate costs, and retains payment for that part of the work from the owner. One partner may profit while the other suffers a loss. The partners do not mutually share the risks and benefits of the total contract. Rather, each accepts the risks and benefits accruing to each separate part of the contract. The aforesaid arrangements are internal between the partners. There is still only one joint-venture entity responsible to the owner for the total project, and there is still joint and several liability.
A common example where item joint ventures are used is a highway construction contract that contains heavy grading and paving work and bridge work. A grading and paving contractor who does not do bridge work may form an item joint venture with a bridge builder who does not do heavy grading and paving. If the joint-venture bid is successful, the grading and paving contractor will perform the grading and paving work, while the bridge contractor performs the bridge work. This arrangement permits bidding without depending on subcontract bids and can result in certain advantages.
Conventional Joint Ventures
The mode of operation of a conventional joint venture is that of a single independent entity, with its own assets, bank accounts, books of account, and management structure. The joint-venture agreement should contain the following key provisions, which define this mode of operation.
Formation and Termination Matters
The formation section of the agreement normally states that the joint venture is formed for the purpose of submitting a bid for some specific project named in the agreement and, if the bid is successful, to enter into a contract for the project with the owner and construct the project according to the terms of the contract. It should be made clear that the agreement is limited to the single project stated in the agreement and that the agreement expires when the project is completed and all of the terms of the joint-venture agreement have been fulfilled. In other words, the agreement does not create a permanent marriage between the partners, nor is the agreement intended to place any limitation on other business of any of the partners.
The agreement also normally states that each partner in the joint venture is responsible for that partner’s pre-bid expense incurred in investigating the project and preparing an independent cost estimate on which to base the bid.
The agreement provides that no bid shall be submitted until and unless all the partners agree on the terms of the bid. Further, the agreement provides that if a partner disagrees with the terms of the bid, they may withdraw from the joint venture at that point, permitting the remaining partners to continue with the bid.
Once the agreement has been signed and the partners start preparing their bid for the project, a partner who decides to withdraw is precluded by the terms of the agreement from either submitting a separate individual bid for the named project or from becoming a member of another joint venture that submits a separate bid for the same project.
The joint venture created as a result of the agreement constitutes a completely separate legal entity that will exist throughout the entire life of the agreement. That entity must have a legal name-style in order to do business, and the agreement includes this agreed-upon name-style. The agreement states the termination provisions applying to the joint venture. The purpose of these provisions is to establish what happens after the joint venture submits its bid if the bid is not successful and, alternatively, what happens if the bid is successful and a construction contract is awarded to the joint venture. The agreement normally provides that, if no contract is awarded, the agreement expires and has no further force and effect. In the event of a contract award, the agreement remains in effect until each and every provision of that contract has been carried out. Provisions are also included that apply after completion of the construction contract, such as those dealing with disposition of the assets of the joint venture.
The provisions dealing with the assets of the joint venture usually state that no partner will accrue any right to any of the joint-venture assets until the construction contract with the owner has been fully completed according to its terms. The agreement then details the specifics for the division of assets among the partners when the construction contract has been completed. These normally provide that the liquid assets will be distributed to the partners according to the participation formula stated elsewhere in the agreement and that nonliquid assets be sold at auction and the proceeds similarly distributed. Any remaining nonliquid assets are then usually distributed to the partners as the partners may agree at that time.
Participation Percentages
A conventional joint-venture agreement must establish how partners share the assets and liabilities of the joint venture. A participation formula must be stated defining the proportional share of each partner in percentages, which total 100% for all of the partners.
Since partners are jointly and severally bound to the owner, if one or more partners should refuse or be unable to meet their proportional share of any liability that the joint venture may have, the remaining partners are required to make up the delinquent partner’s share as well as meet their own obligation. Because of this possibility, the agreement provides for cross-indemnification for losses, wherein each partner furnishes indemnification to the other partners for any losses suffered by any of them because that partner failed to meet its full share of any liabilities. This indemnification permits partners who had to pay more than their proportionate shares of a loss, because of another partner’s failure to pay, to get their money back eventually, as long as the delinquent partner has any assets or later comes into any assets.
General Management Matters
Another major section of a conventional joint-venture agreement deals with general management matters, which include at least five separate general management areas:
1. The agreement must provide the name of the managing partner. Usually, the managing partner is the partner company that has the largest participation percentage and provides the field organization that performs the actual construction work of the contract.
2. The agreement defines the authority of the managing partner. This authority gives the managing partner the ability to legally obligate the joint-venture entity and to designate the individual who will occupy the position of project manager. The project manager usually is a proven member of the managing partner’s permanent organization but can be any person satisfactory to the managing partner.
3. The agreement should deal with the often contentious subject of a management fee to be paid to the managing partner (over and above their share of the joint-venture profit) for furnishing the field forces and managing the project. Some agreements provide that the managing partner will not receive a fee. Others may provide that the managing partner receive a fixed fee stated in the agreement, regardless of whether the joint venture makes a profit. This latter arrangement, where the fee is unrelated to profit, is not popular since many contractors feel that it is inequitable for the managing partner to be paid a fee when the joint venture fails to make a profit (or incurs a loss). Consequently, a more common method is for the management fee to be dependent on the profit earned by the joint venture. Under this arrangement, the managing partner is paid a fee equal to a stated percentage of the total joint-venture profit prior to the distribution of the remaining profit to the partners according to the participation formula. For instance, consider the following specific case:
Partner A (managing partner): 60% participation
Partner B: 25% participation
Partner C: 15% participation
The management fee for the managing partner is 10%. The total joint-venture profit = \$3,500,000. This would result in the following profit distribution:
A’s share: 0.10 x \$3,500,000 + 0.60 x (0.90 x \$3,500,000) = \$350,000 + \$1,890,000 = \$2,240,000
B’s share: 0.25 x (0.90 x \$3,500,000) = \$787,500
C’s share: 0.15 x (0.90 x \$3,500,000) = \$472,500
Total: \$3,500,000
Obviously, the greater the joint-venture profit, the greater the management fee taken off the top prior to the distribution of the balance. If no profit or a loss results, there will be no management fee.
4. The agreement normally establishes a management committee to set joint-venture policy for the guidance of the project manager. The authority of the committee, the committee’s meeting schedule, and the voting rights of the partners in making committee decisions will generally be set forth.
5. Joint-venture agreements sometimes delineate specific tasks or services to be performed by a partner for the benefit of the joint venture for which the partner will be paid directly. Such payments are considered to be normal costs of the joint venture. Common examples include data-processing services performed in the managing partner’s home office or design engineering services for project temporary structures performed in the home office of one of the partners. In these cases, the agreement may also state the dollar value of the compensation to be paid for such services.
Some agreements provide that each partner bill the joint venture each month for an amount equal to their proportionate share of 10% of the revenue received by the joint venture from the owner for that month and that these amounts be paid out to each partner as compensation for that partner’s home office general and administrative effort chargeable to the joint venture. Such a provision in the joint-venture agreement is then advanced as support for a 10% charge on all construction contract change orders to meet the expenses of each partner’s home office general and administrative management effort.
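The profit distribution worked out in the management-fee example above can be generalized as a short calculation. This is an illustrative sketch only; the function name is arbitrary, and the no-fee-on-loss rule follows the fee arrangement described in the text.

```python
def distribute_profit(profit, participation, managing_partner, fee_rate):
    """Managing partner takes a fee off the top of any profit; the
    remainder is split according to the participation formula."""
    fee = fee_rate * profit if profit > 0 else 0.0  # no fee if no profit or a loss
    remainder = profit - fee
    shares = {p: pct * remainder for p, pct in participation.items()}
    shares[managing_partner] += fee
    return shares

participation = {"A": 0.60, "B": 0.25, "C": 0.15}
shares = distribute_profit(3_500_000, participation, "A", 0.10)
for partner, amount in shares.items():
    print(partner, round(amount))  # A 2240000, B 787500, C 472500
```

With a loss instead of a profit, the fee term drops out and each partner simply absorbs its participation percentage of the loss.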
Working Capital Matters
Three main points dealing with working capital should be addressed in a conventional joint-venture agreement. The first, the capital call, is a request made from time to time by the managing partner for each partner to contribute funds to the joint venture for operating capital. The partners are required to respond to capital calls by contributing their proportionate share of the total call. When a call is made, a date is set by which all contributions must be received.
The second point deals with the consequences of a partner’s refusal or failure to meet a capital call. Specific agreements vary but generally provide that from that point onward the delinquent partner loses all voting rights and all rights to a share of the joint-venture profits until the delinquency is made up. Under some agreements, delinquent partners lose all rights permanently. However, delinquent partners are not relieved of their share of the joint venture’s full liabilities. These potential liabilities flow to the partners as a result of each partner signing the joint-venture agreement and, if such joint-venture liabilities eventually occur, they can only be discharged by partners paying their proportionate shares.
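A capital call allocation can be sketched as follows. Treating a delinquent partner’s shortfall as spread pro rata among the paying partners is an assumption for illustration; in practice any remaining partner can be called on for the full shortfall, and the cross-indemnification provisions govern eventual recovery from the delinquent partner.

```python
def allocate_call(total_call, participation, delinquent=frozenset()):
    """Each partner's share of a capital call by participation percentage;
    a delinquent partner's share is spread pro rata over the partners who pay."""
    paying = {p: pct for p, pct in participation.items() if p not in delinquent}
    base = sum(paying.values())
    return {p: total_call * pct / base for p, pct in paying.items()}

participation = {"A": 0.60, "B": 0.25, "C": 0.15}
print(allocate_call(1_000_000, participation))
# A 600,000; B 250,000; C 150,000
print(allocate_call(1_000_000, participation, delinquent={"C"}))
# A and B absorb C's share pro rata: roughly 705,882 and 294,118
```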
The third point deals with the return of capital to the partners. Conventional joint-venture agreements typically provide that the joint venture only retain funds sufficient to guarantee the ability to meet all future liabilities. In other words, the joint-venture management will not retain funds in excess of reasonable needs. A prudent joint-venture management committee will construe this provision very conservatively, particularly in situations where, because of front-end loading, the joint venture’s earnings from contract work are disproportionately high in relation to the cost of the work performed. Unanticipated costs later in the job may require the expenditure of those funds, and an imprudent partner may have committed them elsewhere and be unable to respond to the capital call asking for their return. For this reason, some joint-venture agreements specify that no excess funds be distributed until the project work is completed. This latter provision is viewed by some as overly conservative and is less frequently used than simply providing for distribution of funds in “excess of reasonable needs.” Additional provisions in this section typically deal with such matters as how the joint venture will report income for tax purposes, where options are permitted by the tax laws, conditions under which a partner may borrow from the joint venture against its share of equity, and requirements for the prudent and conservative investment of excess funds in interest-bearing securities, pending eventual distribution to the partners.
Accounting Matters
A conventional joint-venture agreement must also deal with at least six separate accounting matters:
1. The partners must agree on the bank or banks where the joint-venture bank accounts are to be established. The joint-venture agreement should either list the banks or provide a means for their selection.
2. The agreement should provide that separate books of account be set up for all necessary joint-venture accounting records and that joint-venture accounts must not intermingle with the accounts of any other business entity, particularly those of any of the partners.
3. The agreement should state the required frequency of financial reports to the partners, normally monthly.
4. The tax reporting declaration, where there is an option, should be stated in the agreement.
5. The agreement should contain a fiscal year declaration fixing the starting and ending dates of the particular fiscal year for the joint venture agreed on by the partners.
6. If a home office charge by the managing partner for the provision of data-processing and accounting services is intended, in addition to the management fee, the agreement should so provide.
Bond and Indemnification Matters
A number of provisions in conventional joint-venture agreements deal with bond and indemnification matters. Three main points are of interest:
1. Usually only one package of bonds is put up by the joint venture, in which the agreed-upon name-style of the joint venture appears as principal. The sureties of the several partners, through internal indemnification agreements among themselves, arrange for one of the sureties to furnish the necessary guarantees on the bonds and to sign the bonds as surety. For all of this to occur, each partner has to indemnify its individual surety for its proportionate share of the contract. Such indemnifications may involve personal indemnifications by the owners and/or officers of the individual contractor partners.
2. When personal indemnifications are required by a partner’s surety, a prudent joint-venture partner will insist that similar personal indemnifications be furnished to all of the other partners as well. Therefore, many joint-venture agreements provide that such “like indemnifications” be given by each partner to each of the others.
3. Bond brokerage fees on large contracts can be sizeable. Since the brokers of all of the partners are usually involved to some degree, the total brokerage commission is sometimes split among them in proportion to the partners’ participation percentages. Some joint-venture agreements provide for this.
Insurance Matters
Conventional joint-venture agreements also contain provisions regarding insurance. Since the joint venture is going to be a separate, independent operating entity, the joint venture name-style appears as the insured on all the normal insurance policies discussed in Chapter 8. The joint-venture agreement insurance provisions deal with five main points:
1. All partners in the joint venture should be named as additional named insureds on all joint-venture insurance policies. Otherwise, partners can be sued individually regarding a joint-venture matter and required to defend such suits and pay any judgments that are entered against them without any joint-venture insurance protection.
2. Many joint-venture partner companies do not want the insurance company to have subrogation rights. (See discussion of subrogation in Chapter 8.) If this is the case, the joint-venture agreement should require that the insurance company’s subrogation rights be waived.
3. The added protection of a completed operations endorsement with third-party liability insurance was discussed in Chapter 8. The joint-venture agreement should require that this endorsement be included with the joint-venture’s third-party liability policy.
4. When the prime construction contract requires the joint venture to indemnify the owner and architect/engineer, the joint-venture agreement should provide that the joint-venture’s third-party liability insurance policy be written with the owner and architect/engineer named as additional named insureds for the reason explained in Chapter 8.
5. Joint-venture insurance policies should be written to cover the use of partner-furnished construction equipment rented to the joint venture so that the individual partners who actually own the equipment are protected.
Partner Bankruptcy Provisions
The typical conventional joint-venture agreement also contains provisions dealing with the unhappy event of the bankruptcy of one or more of the partners. First, the agreement normally provides that bankrupt partners will immediately lose rights to all further profit and all management committee rights but are not relieved of their share of liability.
The agreement also provides that surviving partners assume the bankrupt partner’s share of any further joint-venture profits, pro rata to the surviving partner’s original shares, and then complete the construction contract according to its terms.
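Using the participation percentages from the management-fee example earlier in the chapter, the pro-rata reallocation of a bankrupt partner’s profit share can be sketched as follows (an illustration, not language from any actual agreement).

```python
def reallocate_shares(participation, bankrupt_partner):
    """Surviving partners absorb a bankrupt partner's share of further
    profits pro rata to their original participation percentages."""
    survivors = {p: pct for p, pct in participation.items() if p != bankrupt_partner}
    total = sum(survivors.values())
    return {p: pct / total for p, pct in survivors.items()}

new_shares = reallocate_shares({"A": 0.60, "B": 0.25, "C": 0.15}, "C")
# A takes 0.60/0.85 and B takes 0.25/0.85 of any further profits
```

Note that the reallocation affects profits only; the bankrupt partner’s share of liabilities remains its own under the agreement.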
Construction Equipment Acquisition and Disposal
Comprehensive conventional joint-venture agreements deal with the problems of how, and from where, the joint venture will obtain the necessary construction equipment to construct the project through its equipment acquisition provisions. The value of the necessary equipment acquisitions can run to many millions of dollars for large projects.
A comprehensive agreement provides for acquisition of the equipment as determined by the management committee in one of three basic ways:
1. The partners may contribute the necessary cash for the joint venture to purchase new or used equipment outright from third parties.
2. Some or all of the equipment may be purchased from one or more of the partners at sale prices to be mutually agreed upon. An alternate form of purchase from a partner includes a guaranteed buy-back agreement at the end of the project. With this arrangement, it is often also specified that the original sale price and the buy-back price be determined by an independent equipment appraiser.
3. The equipment may be rented, either from one or more of the partners or from third parties at rental rates to be approved by the joint-venture management committee. A related provision is that when equipment rented from one of the partners is damaged while in use by the joint venture, such damage will be repaired at the expense of the joint venture, normal wear and tear excepted.
Item Joint Ventures
The mode of operation for item joint ventures (usually only two partners) is that each partner operates as a separate company. The partners have little in common except a common name-style and common contract bonds furnished to the owner. Each partner has separate assets, bank accounts, books of account, and profits or losses. The item joint-venture agreement provides the specifics of the key arrangements between the partners.
Comparisons with Conventional Joint-Venture Agreements
Item joint ventures are similar to conventional joint ventures in the following ways:
• The partners of item joint ventures are jointly and severally bound to the owner under common contract bonds.
• There will be a single management interface with the owner, and the item joint-venture agreement indicates which joint-venture partner will provide this interface.
• The purpose clause of the item joint-venture agreement states that the agreement is for the purpose of submitting a bid and performing the resulting construction contract if one is awarded.
• The item joint-venture agreement contains provisions similar to conventional joint-venture agreements in dealing with termination in case no contract is awarded, as well as when a contract is awarded.
• The agreement specifies the agreed-upon name-style for the joint venture.
• The provisions of the item joint-venture agreement in regard to pre-bid expense are the same as for conventional joint-venture agreements. Both agreements call for each partner to bear its own pre-bid expense.
Although item joint-venture agreements have many of the same features as conventional joint-venture agreements, many other features are different:
• Item joint-venture agreements contain no requirement for common agreement on terms of the bid. Instead, the agreement provides that each partner controls the portion of the schedule of bid items pertaining to its own work but has no control of the portions of the schedule that pertain to the other partner’s work.
• There are no joint-ownership-of-assets provisions in an item joint-venture agreement, since each partner maintains full ownership of its own assets.
• No common participation in profits or losses is provided for in an item joint-venture agreement. Partners benefit or suffer separately on their own section of the job.
• The item joint-venture agreement provides that the managing partner will control affairs of the other partner only in respect to the necessary interface with the owner and serve only as a necessary administrative conduit. The managing partner is not given the power to bind the other partner legally to anything.
• The item agreement does not provide for a management fee, except for costs of the administrative interface services performed by the managing partner on behalf of the other partner.
• The item joint-venture agreement does not provide for a management committee.
• No common working capital or common books of account, bank accounts, tax returns, or insurance policies are provided for in an item joint-venture agreement. Partners maintain their own.
Conclusion
This chapter concluded the discussion of common construction industry contracts by examining construction joint-venture agreements. It explained why contractors enter into joint-venture agreements and introduced the principle of joint and several liability, without which joint ventures could not exist. The chapter examined typical provisions of both conventional and item joint-venture agreements in detail and concluded with a discussion of the similarities and differences between these two types of joint-venture arrangements.
The following two chapters on the subject of the bidding process in the construction industry shift emphasis from the contract documents themselves to how the customs and practices of the industry and past decisions of courts have influenced contract operation and interpretation.
Questions and Problems
1. What reasons for the formation of construction contractor joint ventures were discussed in this chapter? What does “jointly and severally bound” mean? Why is it important with respect to construction joint ventures?
2. What are the two major types of construction joint ventures? What are the distinguishing features of each?
3. With respect to conventional joint-venture agreements, what are the seven aspects of formation and termination matters that were discussed?
4. What is a participation formula for a conventional joint venture? What does cross-indemnification for losses mean? Why is it important?
5. What is meant by the term managing partner? What are two important powers normally bestowed on the managing partner by a conventional joint-venture agreement?
6. Discuss two alternate arrangements for the payment of a management fee to the managing partner. Which is generally favored by contractors? Why?
7. What is a joint-venture management committee? How does it usually work? What is its primary function? Is it intended to control the day-by-day management of the work?
8. What are some of the home office services for which a managing partner might reasonably bill the joint venture in addition to any management fee? Why do some joint ventures follow the practice of having each partner bill the joint venture each month for its proportionate share of 10% of the joint-venture revenue for that month?
9. What is a capital call? Is notice generally required? What normally happens if a partner fails to meet a capital call?
10. What is the reason for a joint venture’s management committee to be very conservative in returning funds in excess of immediate needs? Why is conservatism in this respect particularly important when a job’s payment schedule has been front-end-loaded? What should be done with such excess capital?
11. What are six separate issues regarding accounting matters for conventional joint ventures that should be addressed by a comprehensive joint-venture agreement?
12. What are the three separate aspects of bond and indemnification matters that were discussed in this chapter with regard to conventional joint-venture agreements?
13. What were the five main points made with regard to insurance matters?
14. What two consequences will follow the bankruptcy of a conventional joint-venture partner?
15. What are three ways for a conventional joint venture to obtain construction equipment?
16. What are six similarities between conventional joint-venture agreements and item joint-venture agreements? What are seven differences between the two kinds of agreements?
17. Contractor A, contractor B, and contractor C are in a conventional joint venture with shares of 55%, 25%, and 20%, respectively. Contractor A is the sponsor with a management fee equal to 7 1⁄2% of any profits of the joint venture. All capital calls are met by all partners. On completion, the job has made a \$2,750,000 profit prior to any distributions.
1. What is the total amount that each partner will receive from the job?
2. If contractor B failed to respond to any capital calls, what would be the total amount that each partner would receive from the job?
3. If contractor B failed to respond to any capital calls and, instead of making a profit of \$2,750,000, the job incurred a loss of \$200,000, what is each partner’s liability for the loss?
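Part 1 of problem 17 can be checked with a short calculation. The sketch below is a hypothetical interpretation that assumes the sponsor’s 7 1⁄2% management fee is deducted from the profit first and the remainder is distributed according to the participation shares; the chapter’s rules on failed capital calls would be needed to answer parts 2 and 3.

```python
# Hypothetical sketch for part 1 only, assuming the management fee is
# paid to the sponsor off the top and the remainder is split by shares.
shares = {"A": 0.55, "B": 0.25, "C": 0.20}
profit = 2_750_000

fee = 0.075 * profit                 # sponsor's management fee: $206,250
remainder = profit - fee             # $2,543,750 left to distribute

distribution = {p: s * remainder for p, s in shares.items()}
distribution["A"] += fee             # sponsor A also receives the fee

for partner, amount in sorted(distribution.items()):
    print(f"Contractor {partner}: ${amount:,.2f}")
```

Under this interpretation, the three distributions sum back to the full \$2,750,000 profit.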
Key Words and Concepts
• Public/private sectors
• Difference in bid rules
• Bidding statutes
• Bid document addendum
• Purpose of bidding statutes
• Influence of the federal policy
• Requirements of the United States Code
• Public owner’s compliance with bidding rules
• Material impropriety
• Factual determination of low bid
• Unit prices
• Written price extensions
• Bid total
• Alternate bids
• Responsive bidder
• Responsible bidder
• Late bids
• Rejection of all bids
• Irregularities and informalities
• Bidder’s property right to the contract
• Bid protests
• Status
• Timeliness
• Successful protests
• Right to reject all bids not absolute
The first ten chapters of this book deal with specific types of contracts that are widely used in the industry. From here on, the focus will be on the customs and practices of the industry and with past decisions of our courts that govern how these contracts operate and how they are interpreted. Chapter 11 is the first of two chapters on the subject of bids and proposals.
Public and Private Sector Bidding
Bidding practices of the public and private sectors of the industry differ tremendously. The term public in this context means that the construction work is financed by public funds in the form of tax dollars or the proceeds from the sale of municipal, state, or federal bonds.
Public and private work have different bid rules. Public construction contracts are advertised and let in accordance with the bidding statutes and other legislatively mandated rules of the particular governmental entity that is paying for the construction work. For instance, when the work is financed with federal funds, the laws and regulations promulgated by federal agencies and bodies govern the process of advertising and awarding construction contracts. Similarly, state, county, and municipal governments have statutes and regulations that govern when their funds are used to pay for the cost of the work. In addition, special governmental or quasi-governmental bodies such as sewer or rapid transit districts are often established by special enabling legislation. The enabling legislation usually provides definitive rules for advertising and awarding the construction contracts required to carry out the mission of the particular special body involved.
Unlike public owners, private owners can establish whatever rules they want. They also can change the rules at will, with the result that these rules are not necessarily observed. Although the public owner has the ability to set particular rules and to change them by issuing an addendum to the bidding documents, this power is severely regulated. A bid document addendum is a modification to the bidding documents formally issued by the owner to all holders of bidding documents before bids are received. In the public sector, there must be a reasonable time period between the issue date of the last addendum and the date of the bid opening to ensure that all bidders have sufficient time to reflect properly the import of the addendum in their bids. Bidders are required to list on the bid form all addenda received for their bids to be considered responsive. Failure to list addenda may result in the bid being rejected.
In the private sector anything can happen, whereas in the public sector the result will usually be that the job will be awarded to the lowest “responsive” and “responsible” bidder. These terms have important special meanings that will be discussed later in this chapter.
Public Bidding Statutes
The requirements of the federal, state, and local bidding statutes and resulting regulations make the outcome of the bidding process in the public sector very predictable compared to the private sector. The purposes of public bidding statutes are:
1. To protect public funds. In other words, bidding statutes are designed to ensure that the public pays the minimum possible price for construction work, as determined by open competitive bidding.
2. To protect and ensure a continuation of the free enterprise system upon which the political and economic structure of the United States is founded.
The public bidding statutes are stringently written and enforced to ensure that public sector construction contracting remains honest. Increasingly, those who violate the rules find themselves subject to both civil and criminal liability. Errant construction companies have been assessed large fines and their owners or officers sent to prison along with the corrupt public officials who have been caught, tried, and convicted of violating the public trust.
Federal Construction Contract Procurement Policy
Because numerous separate statutes regulate the public bidding and contract award process for different public owners, a discussion of specific rules that may apply in a particular case is not practical. However, examining the federal construction contract procurement policy, which is broadly reflected throughout all public construction work in the United States today, will help in understanding the basic principles behind most bidding statutes. The influence of the federal government policy has been enormous, and the federal contracting rules serve as a model for the rest of the public sector. Therefore, understanding the major federal rules will aid in understanding the general requirements of public sector bidding and contract award at most other levels.
The federal rules set forth in the United States Code include the following five broad requirements:
1. There must be sufficient advertising time between the first advertisement of the bid and the bid opening so that prospective bidders know about the project and have sufficient opportunity to prepare their bids.
2. The bidding documents must be sufficiently clear and detailed to assure free and open competition. The purpose of this requirement is to assure that each bid received represents a price tendered by each individual bidder to construct the identical project.
3. There must be a public bid opening and a public reading of all bids received at the date, time, and place stated in the bid advertisement. This requirement ensures that every person present at the bid opening has the opportunity to hear the bid prices tendered by the various bidders. It follows from this requirement that the contents of all the bids received and opened become public knowledge and that any bid received may be examined by any person with a legitimate interest in doing so. It should be noted in connection with this rule that a procurement procedure leading to a negotiated contract is also permitted by the federal rules and is occasionally employed for certain projects. In these cases, there is no public bid opening, and the government does not publicly divulge the contents of the various proposals received. The procedure requiring a public bid opening and a public reading of all bids received and opened is far more common.
4. The contract must be awarded to the lowest responsive and responsible bidder whose bid is in the best interest of the government. For contracts other than those awarded on a negotiated basis, this requirement will usually be satisfied by the lowest bid received from a responsible bidder that is fully responsive to the terms and conditions of the bidding documents. The requirement also applies to contracts that are negotiated in that the government is required to award the contract to the bidder whose proposal is determined (price and other factors considered) to be in the best interest of the government.
5. All bids may be rejected when rejection is determined to be in the best interest of the government.
When these federal rules are applied to public sector bidding, the usual result will be that the contract is awarded to the lowest responsive, responsible bidder. It matters not that this bidder has no past relationship with the public owner or that the public owner might prefer that another bidder had won the contract. All that matters is that the bidder to whom the job is awarded be the lowest responsible bidder whose bid is responsive to the terms and conditions stated in the bidding documents.
Public Owners’ Actions After Bids Received
To comply with the basic principles just stated, a public owner will normally take a number of separate actions once bids or proposals have been received for a construction project.
Material Improprieties
A public owner must determine whether there is any material impropriety that would preclude award of a public contract. A material impropriety can be anything that is not proper in either the bidding documents or the bidding process. Examples include such acts as bribery, bid rigging, or offering private clarification of bid document requirements to selected bidders, or anything else that would impugn the integrity of the bidding process. A material impropriety can also include unfair or improper resolution of errors or ambiguities in the bidding documents or in the bids received that make it impossible to be certain that each bid is for exactly the same intended work.
Factual Determination of the Low Bid
A public owner must make a factual determination of the low bid. This is more complicated than simply noting and recording which bid submitted has the lowest dollar figure written in the space for the total bid price. The public owner must also make certain that the bids received include no arithmetic mistakes or discrepancies, or, if such mistakes or discrepancies are found, that the apparent low bid remains low when they are corrected.
The rules governing the determination of the low bid may be set forth in the bidding statutes applying to the project and are usually stated in the bidding documents themselves. One common question is whether the unit price or the written price extension determines the intended bid price for a bid item in a schedule-of-bid-items bid when there is a discrepancy between them. Typically, the rules state that the unit price governs. Also, when the price extensions and lump sum prices in a schedule-of-bid-items bid are totaled, they sometimes do not equal the written bid total. The normal rule in this situation is that the correct total be substituted for the figure written in the bid form and be considered the bidder’s intended bid.
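These correction rules amount to a simple recomputation. The sketch below, using hypothetical bid items, shows the usual approach: each extension is recomputed from the governing unit price, and the corrected sum of extensions replaces the written bid total.

```python
# Hypothetical schedule-of-bid-items check: unit prices govern the
# extensions, and the recomputed sum governs over the written total.
def corrected_bid_total(items):
    """items: list of (quantity, unit_price, written_extension) tuples."""
    return sum(qty * price for qty, price, _written_ext in items)

items = [
    (2400, 500.00, 1_200_000.00),  # extension consistent with unit price
    (100,   80.00,     8_800.00),  # written extension wrong: 100 x 80 = 8,000
]

written_total = 1_208_800.00       # total as written on the bid form
print(f"written: {written_total:,.2f}  "
      f"corrected: {corrected_bid_total(items):,.2f}")
```

Here the second item’s written extension overstates the unit-price arithmetic by \$800, so the bid would be corrected downward to \$1,208,000 before comparing it with the other bids.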
The preceding rules are illustrated by an Alaska case in which the Alaska State Department of Transportation took bids on a schedule-of-bid-items contract for grading and drainage work. The apparent low bidder was announced at the bid opening, but when the bids were checked, it was discovered that the sixth bidder had made an arithmetical mistake in summing the bid-item extensions, resulting in an apparent bid total higher than the arithmetical total of all of the bid-item extensions. The Department of Transportation corrected the sixth bidder’s arithmetical mistake, making them the low bidder. Over the original low bidder’s protest, the contract was awarded to the sixth bidder on the basis of the corrected bid. The low bidder filed suit to set aside the award.
The standard specifications governing the bid provided that, in case of discrepancies between prices written in words and those written in figures, the prices written in words would govern and that, in case of discrepancy between unit bid prices and extensions, unit prices would govern. The original low bidder argued that these provisions did not apply since there was no discrepancy in the sixth bidder’s bid between the unit prices written in words or numbers or between the unit prices and the bid-item extensions. The error occurred in the addition of the extensions. The State Department of Transportation argued that if the specifications permitted correcting unit price extensions, the State clearly had the power to correct the addition error in totaling the extensions. The Supreme Court of Alaska agreed that, if the State Department of Transportation was empowered to correct arithmetic errors in bid-item extensions, it was implicitly empowered to correct the arithmetic total of those extensions. Additionally, the court noted that the bid specifications provided that the total of the bid-item extensions was merely for informational and comparative purposes at the bid opening. The unit prices controlled, and the downward correction of the sixth bidder’s bid total was proper.[1]
Alternate bids will be considered in making the factual determination of the low bid if the bid documents provide for alternate bids and include the rules for evaluating alternates. These rules must be such that the determination of which bid is low will be a factual and objective process wherein all bidders are treated equally.
A final point is that the low bid determination cannot be made on a basis different from that indicated in the bid documents. That is, the public owner cannot change the basis expressly stated or implied by the bid documents for determining the low bid and then make a determination on this changed basis. For instance, if after the bids are opened, the owner alters the bid quantities on a schedule-of-bid-items bid to quantities different from the quantities stated in the bid documents, the order of bidders, low to high, may be drastically altered. Such a practice is strictly prohibited.
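The danger this prohibition guards against can be shown numerically. In the hypothetical sketch below, two bidders price the same two bid items; the low bidder changes when the owner substitutes altered quantities for the advertised ones after the bid opening.

```python
# Hypothetical illustration: re-evaluating unit-price bids against
# quantities different from those advertised can flip the bidder order.
def bid_total(quantities, unit_prices):
    return sum(q * p for q, p in zip(quantities, unit_prices))

unit_prices = {"Bidder 1": (100.00, 50.00),
               "Bidder 2": (90.00, 75.00)}

as_advertised = (1000, 500)   # quantities stated in the bid documents
altered = (1000, 100)         # quantities substituted after bid opening

for name, prices in unit_prices.items():
    print(name,
          bid_total(as_advertised, prices),
          bid_total(altered, prices))
# Bidder 1 is low on the advertised quantities (125,000 vs. 127,500)
# but high on the altered quantities (105,000 vs. 97,500).
```

Because each bidder distributes indirect costs and profit among the bid items differently, any post-opening change in quantities re-weights the unit prices and can reorder the bidders, which is why evaluation must use the advertised quantities.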
The preceding point was convincingly demonstrated to this author in the mid-1970s when a division of his company was the low bidder by a narrow margin on a contract for driving eight 20 ft. diameter tunnels through a railroad embankment. Each tunnel was approximately 300 feet long, and payment was to be made on the basis of a unit price per foot of tunnel actually driven and measured for payment. The bid quantity for the tunnel excavation bid item was stated to be approximately 2,400 feet.
The bid schedule contained one lump sum bid item for mobilization and seven other unit-price bid items, all of them for minor work except for the tunnel excavation bid item, which constituted more than 95% of the work of the project. The specified contract duration was two years. The contract stated that after the expiration of one year, the owner, a medium-sized city, could elect to delete one of the eight tunnels from the contract even though bids were to be submitted for constructing all eight tunnels. This fact resulted in individual bidders distributing fixed indirect costs and anticipated job profit to the individual bid items in a highly variable manner.
At a meeting the day after the bid opening attended by the city engineer, the city attorney, and the author, the city engineer announced that he had determined that, if the city decided after the first year of construction to delete one of the tunnels, which the city engineer considered to be likely, the author’s company would become the third bidder on the basis of seven tunnels even though it was the low bidder on the basis of the as-advertised eight tunnels. The city engineer then indicated his inclination to award the construction contract on the basis of the seven-tunnel scenario. Before the author had an opportunity to voice an objection, the city attorney interrupted, advising the city engineer in no uncertain terms that such an act on the city’s part would be illegal and would not receive the support of the city attorney’s office. The matter was thereupon immediately dropped, and the discussion shifted to details germane to award of the contract on the basis of the author’s bid. Bids for public work must be evaluated on the basis advertised in the bid documents, not on some other basis.
Responsive and Responsible Bidders
The public owner must make a separate determination that the low bidder is both a responsive bidder and a responsible bidder. These terms sound like much the same thing but are, in fact, very different.
A responsive bidder is one who has filled out and signed the bid forms in accordance with the bidding instructions and who has submitted an unqualified bid in full conformance with the requirements of the bid documents. There may be no additions or alterations of any kind.
A responsible bidder is one who possesses sufficient financial resources to undertake the project and, in addition, has the necessary experience and a track record indicating the ability to execute successfully the work of the contract.
A public contract cannot be properly awarded to a bidder who has not been determined by the public owner to be both responsive and responsible. Bid responsiveness is determined by examination of the bid itself, which cannot be altered by the bidder once it is submitted and opened. On the other hand, bidder responsibility is a matter that the public owner can determine after the bid opening. Both bid responsiveness and bidder responsibility must be conclusively demonstrated to the public owner’s satisfaction prior to the award of the contract.
Three federal contract decisions by the Comptroller General of the United States are good examples where low bids were rejected on the grounds that they were nonresponsive. In the first case, the low bidder had submitted a preprinted bid bond that differed materially from the terms of the required bid bond for government contracts.[2]
In the second case, the apparent low bidder submitted a bid bond that contained the following notation:
If this contract includes the removal of asbestos materials, then this bond is to be null and void.
Removal of asbestos materials was, in fact, required by the contract. Post-bid-opening assurances by the low bidder’s bonding company that they would waive the restriction noted in the bond were to no avail in persuading the Comptroller General to consider the bid responsive, since determination of bidder responsiveness is a matter that must be based entirely on the bid as it appeared at the time of the bid opening.[3]
In the third case, the low bid had been declared nonresponsive by the Army Corps of Engineers because the bidder “clarified” the specifications by adding the following statement to the bid:
Bid based on Army furnishing four voice-grade phone lines to building 9370.
The Comptroller General supported the Corps’ determination of nonresponsiveness stating:
By qualifying its bid, Howard has attempted to shield itself from responsibilities from which other bidders would not be similarly protected. Since the phone lines at issue will be required under the contract, Howard’s clarification has the effect of shifting these costs to the Army. We therefore find that Howard’s bid was properly rejected as nonresponsive.[4]
A case where the low bidder was denied the contract on the grounds that it failed to meet the bidder responsibility requirements occurred in New Jersey, where the Army Corps of Engineers took bids for a hazardous waste remediation project. The Corps asked the low bidder to provide references as part of the pre-award survey, then contacted those references and reviewed internal government records regarding prior projects performed by the bidder. Considerable negative information emerged, including a prior project owner’s complaint that the bidder had refused to honor a warranty, an allegation that on a prior contract the bidder had allowed the release of contaminated water and gas, and an incident in which the bidder’s personnel had been indicted and convicted for submitting false payment requests. Additionally, the low bidder’s proposed project manager appeared to lack adequate experience on similar projects. When the government contracting officer determined that the low bidder was a nonresponsible bidder and ineligible for contract award, the low bidder went to court, arguing that it had been improperly debarred from government contracting without due process of law. The U.S. District Court for the District of Columbia determined that a rational basis for the government’s nonresponsibility determination existed and refused to disturb the contracting officer’s finding.[5]
In another case, however, the Comptroller General allowed a bidder to furnish information after the bid opening demonstrating that its proposed subcontractor possessed the required specialized experience. The low bidder had submitted a list of projects performed by its proposed subcontractor, but this list did not meet the five-year experience period required by the specifications. When questioned by the contracting officer after the bid, the low bidder supplied additional information showing earlier projects and was awarded the contract. The second low bidder protested that the low bidder’s bid should have been rejected as nonresponsive because complete information had not been supplied with the bid. In rejecting the second bidder’s protest, the Comptroller General stated:
Even though the solicitation provided that a bidder’s failure to submit with its bid evidence of compliance with this requirement would render the bid nonresponsive, such a solicitation provision is not effective to convert a matter of responsibility into one of responsiveness. Information concerning a prospective contractor’s responsibility may be submitted any time prior to award.[6]
Rejection of Late Bids
A public owner normally must reject bids received after the time specified in the bid documents for submitting bids. The only exception might be when a bidder can show that the lateness in submitting the bid is due to circumstances totally beyond that bidder’s control and that accepting the bid would not prejudice the position of other bidders whose bids were submitted within the time limit. In other words, for a late bid to be accepted, it must be determined that the late bidder gained no advantage over competitors as a result of submitting the bid late, such as receiving a last-minute price cut in a major subcontract quotation.
In practice, late bids are usually rejected, but not always. For instance, a New Jersey court permitted the acceptance of a late bid where the bidder had phoned in shortly before the bid opening, advising that it was being delayed by inclement weather and would arrive shortly. The bidder submitted the bid two minutes late, but before any bids had been opened. Under these circumstances, the court judged that permitting the acceptance of the bid and awarding the contract to the late bidder was proper.[7] Similarly, the Comptroller General permitted the acceptance of a late bid because a government representative had directed the bidder to the wrong room. The bidder arrived at the designated place for the bid opening about a minute early. A government representative mistakenly gave the bidder inaccurate information on where the bids were being received. In this case, the tardiness was caused by improper government action, no bids had been opened when the bid was received, and the Comptroller General ruled that the acceptance of the bid and award of the contract was proper.[8]
In spite of occasional exceptions, bidders should assume that bids will be rejected if submitted late.
Rejection of All Bids
Public owners may reject all bids upon a determination that rejection is in the public interest. However, once a public owner has rejected all bids, the contract cannot be awarded later unless the entire advertising and bidding process is repeated and entirely new bids are received. Once bids are rejected, they remain rejected.
Bid Irregularities / Informalities
As previously pointed out, one form of a material impropriety precluding award of a public contract is error or ambiguity in the bids received that make it impossible to determine that each bid is for exactly the same work. Such error and/or ambiguity are known as bid irregularities or informalities. If a public owner awards a contract on the basis of a bid containing an irregularity or informality, the other bidders may sue to prevent the award of the contract or, if it has already been awarded, to set aside the award. Therefore, it is important to understand what these terms mean and when their presence may disqualify a bid.
Major and Minor Irregularities / Informalities
The terms bid irregularity and bid informality mean the same thing; both have to do with bidder responsiveness. Essentially, they refer to a deviation from the literal requirements of the bidding instructions in the format and content of a submitted bid. A bid with an irregularity or informality is, by definition, not fully responsive, so the question becomes one of deciding whether the deviation is significant enough to cause the bid to be rejected.
A major irregularity or informality means one that has an important effect on the terms of the bid, whereas a minor irregularity or informality is one of less significance. A bid containing a major irregularity is required to be rejected, whereas a minor irregularity may be waived by the owner.
Rule for Determining Major or Minor Irregularities
How does the public owner determine whether an irregularity is major or minor? Although there are no universally accepted rules, there is one very practical guide that serves to identify a major irregularity or informality. If the irregularity or informality is such that it could reasonably relieve the bidder of the contractual obligations assumed by submitting the bid, the irregularity or informality should be deemed major. An obvious example of this type of irregularity or informality is the submittal of an unsigned bid. Since a bidder usually cannot be legally held to the terms of an unsigned bid, this irregularity would probably be considered major, requiring the bid to be rejected even though the bidder may want the owner to accept the bid and award the contract. Other examples of major irregularities are the failure to list subcontractors when such a listing is required by bidding statutes or the failure to include a signed bid bond in the required form with the bid.
An example where a minor informality was waived is afforded by a federal case where the Comptroller General ruled that the low bidder’s failure to acknowledge receipt of a bid addendum extending the contract performance time could be waived as a minor informality. The addendum in question changed the contract performance time on a river channel project from 100 calendar days to 130 calendar days. The low bidder failed to acknowledge receipt of the amendment in its bid. The Army Corps of Engineers waived this irregularity and awarded the contract, and the second lowest bidder filed a protest. The Comptroller General noted that failure to acknowledge all addendums usually renders a bid nonresponsive; but when the effect of an addendum is to make the contract requirements less stringent rather than more stringent, failure to acknowledge may be waived as a minor informality.[9]
In another case, however, the Comptroller General ruled that failure of the low bidder to acknowledge receipt of an addendum altering a Davis-Bacon wage rate determination may not be waived and rendered the bid nonresponsive. The low bidder argued that its collective bargaining agreement obligated it to pay Davis-Bacon wages and that its low bid would remain unchanged, with or without the addendum. Therefore, the government should have waived this minor informality. The Comptroller General disagreed, stating that Davis-Bacon wage rate determinations exist for the protection of the contractor’s employees and their rights may not be waived under any circumstances. Therefore, the low bidder’s failure to acknowledge the correction to the Davis-Bacon wage determination was an informality that rendered the bid nonresponsive.[10]
Similarly, the Delaware Supreme Court ruled that subcontractor listing requirements must be strictly followed when receiving bids. On a project for improvements to sewage treatment facilities, the low bidder failed to list the subcontractors that it intended to employ on the project. The Delaware statutes required bidders on state projects to list all subcontractors that would be used. The low bidder had indicated “none” in the space provided for listing the electrical subcontractor. When the State Department of Natural Resources and Environmental Control rejected the bid, the low bidder went to court to have the decision reversed. In upholding the rejection of the bid, the Delaware Supreme Court acknowledged that rejection of the low bid would result in higher costs to the taxpayers but, nonetheless, stated that the state statute reflected a clear legislative intent to prevent “bid shopping and the evils which are said to arise from such a practice.” Therefore, the statute must be strictly enforced despite the increased expenditure of public funds.[11]
Bidder’s Property Right to the Contract
The usual result of the public bidding process, as just described, is that the lowest responsive and responsible bidder is awarded the contract. However, there is no property right to the potential construction contract established by the mere fact that a bidder is the low bidder. We have already seen that a bidder must be determined by the public owner to also be responsive and responsible before the contract can be awarded. Even when the low bidder is determined to be both responsive and responsible, the bidder still does not acquire a property right in the potential contract because the owner may reject all of the bids if such an action is in the public interest. Only when the public owner decides to award the contract can the lowest responsive and responsible bidder be thought to have a property right to the contract.
Bid Protests
Bid protests are formal objections filed by a bidder to some aspect of the bidding process. They may be objections to the bid document terms and conditions, in which case they should be filed before the bid opening date. Bid protests can also be filed after the bids have been opened to challenge the award of a contract to a low bidder if the protester believes that the bid was irregular or improper.
Status to File Bid Protests
Not just anyone has the status (standing) to file a bid protest. Who does? Generally, this right is vested in any potential bidder when the protest is lodged prior to the bid opening. Similarly, any actual bidder has status to file a bid protest after the bid opening. There may also be others, but those just cited are always considered to have status.
Timeliness
The timeliness of the protest affects its chance of success. Protests concerning the terms and conditions of the bid documents should be made before rather than after the bid opening. Those lodged after the bid opening will probably be to no avail. The further in advance of the bid opening date that such a protest is made, the better chance it will have.
A bid protest regarding the award of the contract should be made as soon as possible after the bid opening and/or the owner’s declared intention to make the award.
Protest to Whom?
Bid protests can be directed to the administrative bodies overseeing the particular office of the public owner who is taking bids. Such bodies can be expected to investigate and intervene if the protest is meritorious. Alternatively, bid protests may be directed to a court of law having jurisdiction in the locality where the bids are taken. In that case, the bid protester would seek injunctive action by the court. Also, bid protests may be simultaneously directed to both the agency administrative body and the courts. Typically, the court’s decision will prevail—that is, the court may support the protest and order relief with respect to a bid protest that has been previously rejected by the administrative agency involved.
What Can Be Gained by a Bid Protest?
Successful protests depend on both the timing and nature of the protest. For example, when the protest concerns the terms and conditions of the bid documents, a successful protest can result in an injunction issued by a court or administrative action on the part of the public owner’s parent agency that prevents bids from being taken until the objectionable terms in the bid documents are changed. On the other hand, when the protest concerns the awarding of the contract, a successful protest can result in injunctive or administrative action to prevent the public owner from awarding the contract or to compel cancellation of the original award and re-award of the contract to a named alternative bidder.
A number of years ago, the author’s company and another general contractor were each individually certified by a major city to have met all bidder prequalification requirements for a sewer tunnel project funded by the federal Environmental Protection Agency (EPA). Bidder prequalification was required as a condition for bidding. As frequently happens, the two companies decided during the bidding period to bid together as a joint venture and asked the city to furnish a registered set of bidding documents in the name of the agreed joint venture. The city arrogantly refused to certify the joint venture as meeting prequalification requirements, even though each partner was individually prequalified, and refused to issue the registered set of bid documents that would permit the joint venture to bid.
Such an egregious, arbitrary action clearly has the effect of limiting free and open competition. We therefore immediately lodged an administrative protest with the EPA office disbursing federal funds for the project. After conducting a fact-finding hearing, the EPA froze the funding for the project, delaying the bid opening for over a month. To restore the federal funding, the city was required to rescind their previous action, permitting our joint venture to bid. This result came about only because of the strong protest lodged with the EPA.
The chances of securing the preceding results are much improved by a timely filing of the bid protest. However, it sometimes happens that, due to the time required to resolve the issue, all that can be gained is a “paper victory” or, at best, a recovery of the costs of bid preparation and submittal. For example, if a contract award protest is not resolved until after the contract has been awarded and the work started, a court is not likely to force the owner to cancel the existing contract in midstream and award the balance of the work to the bidder filing the protest. The court may, however, award the protesting bidder damages equal to the bid preparation and submittal costs.
Rejection of All Bids in the Public Interest
As previously stated, a public owner has the right to reject all bids. However, this right is not absolute. There are limitations.
The first limitation is that bids may be properly rejected only after a formal determination or finding that such rejection is in the public interest. This determination cannot be arbitrary and must be based on reasonably compelling grounds. Examples of reasonably compelling grounds would include discovery of major irregularities or informalities in the bidding documents or in the bids received, the low bid exceeding available funds (or, even if within available funds, exceeding the architect’s or engineer’s estimate), some demonstrable last-minute change in the immediate or ultimate need for the project work, and so on.
For instance, in a New Jersey bid protest case, a court ruled that economic considerations, including the prospect of a more favorable bidding climate, justified the owner’s rejection of all bids. The Township of Belleville took bids for a street and utility improvement contract. The apparent low bidder had omitted the bid bond and was allowed to go to its office, retrieve the bond, and return to the bid opening 20 minutes later with the bond. The township then accepted the low bid. The second low bidder protested that the bid submitted without the bond at the time of bid opening was nonresponsive and had to be rejected. A trial court agreed with this second bidder. However, rather than award the contract to the second bidder, the township then elected to reject all bids and resolicit at a later date. The second bidder challenged this decision. The Superior Court of New Jersey ruled in favor of the township, stating that, although a public project owner is not allowed unfettered discretion in rejecting all bids, they are allowed to take economics into consideration. In the opinion of the court, the township made a good-faith decision that the best interest of the public would be served by a rebid.[12]
The second limitation is that if the public owner rejects all bids and cannot justify the determination that the rejection of the bids was in the public interest, the bid rejection is subject to court challenge and reversal. Although the outcome of such a case is uncertain and the benefit of any doubt will probably be given to the public owner, reversal is possible.
A Louisiana court ruled that when a public owner rejects all bids, it must inform the bidders of the cause for the rejection. The State of Louisiana Legislative Budgetary Control Consul took bids for renovation on the state capitol building. All bids were rejected and the project put out to bid again. One of the original bidders demanded to know the reasons for all bids being rejected and went to court to compel an answer. The Consul argued that they were not expressly required to divulge the reason for the rejection of the bids.
The Court of Appeals of Louisiana agreed that the state statute authorizing public owners to reject all bids did not expressly require divulgence of the cause for the rejection but said that divulgence was an implicit requirement if the statute was to serve its intended purpose. The court said:
If only the public entity knows the reason for the rejection of bids, but yet refuses to divulge the reason for the rejection, then what safeguard is there that the rejection was for just cause? To conclude otherwise is to make a mockery of the law. We are of the opinion that the bidder in the instant case has a right to know the reason for the rejection, and the legislature has imposed a duty on the public entity to inform a requesting bidder of the reason for the rejection.[13]
Normally, however, all bids will not be rejected, and the outcome of the bid opening will be the award of the construction contract to the lowest responsive and responsible bidder.
Conclusion
This chapter emphasized the great difference in bidding practices of the public and private sectors of the construction industry, the importance of public bidding statutes, and the tremendous influence of the federal bidding policy as set forth in the United States Code. The actions that should be taken by a public owner after taking bids were discussed as well as the general subjects of bidder responsiveness and responsibility, bid irregularities and informalities, late bids, bid protests, and rejection of all bids in the public’s best interest. Chapter 12 deals with the important subject of mistakes in bids.
Questions and Problems
1. What is the fundamental difference between public and private bidding? What are two examples mentioned in this chapter illustrating the freedom that private owners have in setting bidding terms and conditions? Can the public owner set its own rules? Can a public owner change the rules once a set of bid documents has been issued? What is an addendum?
2. What are two main purposes served by public bidding statutes? Who creates these statutes? What three sources of bidding statutes were discussed?
3. What are the five main requirements of the federal bidding policy as set forth in the United States Code?
4. Is the federal policy very influential? Must all federal contracts be awarded to the lowest responsive and responsible bidder?
5. What is a material impropriety? When an extension for a unit-price bid item on a schedule-of-bid-items bid conflicts with the unit price that was bid, which usually governs? When the sum of the lump sum items and the extensions of the unit-price items on a schedule-of-bid-items bid form is incorrectly added and entered as the bid total, what will be taken to be the bidder’s intended bid—the total written in or the correct total?
6. Can alternate bids be considered in making a determination of the low bid? Under what conditions? Can a public owner change the basis for determination of the low bid from that explicitly or implicitly stated in the published bid documents?
7. What does bidder responsiveness mean? What does bidder responsibility mean? At what point in time must bidder responsiveness be demonstrated? How about bidder responsibility?
8. What are bid irregularities or informalities? Explain the two broad classes of bid irregularities and informalities discussed in this chapter. What is the rule discussed in this chapter as a practical test to distinguish one class from the other? What three examples of major irregularities were mentioned?
9. Does a bidder necessarily have a property right to a contract on which it was the lowest bidder? To a contract on which it was the lowest responsive and responsible bidder? Does a bidder ever have a property right to a contract? Under what circumstances?
10. Who has status to file a bid protest prior to bids being opened? How about following bid opening? When should protests concerning bidding terms and conditions be filed? When should protests of the award of the contract to a particular bidder be filed?
11. What can a protester hope to gain from a bid protest concerning bidding terms and conditions? A protest concerning award to a particular bidder? A protest to contract award to a particular bidder once the job is well under way?
12. What are the two kinds of bodies to which a bid protest can be made? Can the protest be made to both simultaneously? In case of differing decisions, which body’s decision governs?
13. What must a public owner do prior to rejecting all bids received? What may happen if the public owner fails to perform this step or does so without having reasonably compelling grounds? Is the outcome certain in these cases? What are three examples of reasonably compelling grounds?
1. Vintage Constr., Inc. v. State Dept. of Transp., 713 P.2d 1213 (Alaska 1986).
2. Matter of Allgood Electric Co. Comp. Gen. No. B-235171 (July 18, 1989).
3. Matter of Star Brite Construction Co., Inc. Comp. Gen. No. B-255206 (February 8, 1994).
4. Matter of Howard Electrical & Mechanical, Inc. Comp. Gen. No. B-228356 (January 6, 1988).
5. Geo-Con, Inc., 853 F.Supp. 537 (D.D.C. 1994).
6. Matter of BBC Brown Boveri, Inc., Comp. Gen. No. B-227903 (September 28, 1987).
7. William M. Young & Co., Inc. v. West Orange Redeveloping Agency, 311 A.2d 390 (N.J. Super. A.D. 1973).
8. Matter of Baeten Construction Co.,Comp. Gen. No. B-210681 (August 12, 1983).
9. Matter of Patterson Enterprises Limited, Comp. Gen. No. B-207105 (August 16, 1982).
10. Matter of Bin Construction Co., Inc., Comp. Gen. No. B-206526 (June 30, 1982).
11. George & Lynch, Inc. v. Division of Parks and Recreation, 465 A.2d 345 (Del. 1983).
12. Marvec Construction Corp. v. Township of Belleville, 603 A.2d 184 (N.J. Super. L. 1992).
13. Milton J. Womac, Inc. v. Legislative Budgetary Control Consul, 470 So.2d 460 (La. App. 1985). | textbooks/biz/Business/Advanced_Business/Construction_Contracting_-_Business_and_Legal_Principles/1.11%3A_Bid_and_Proposals.txt |
Learning Objectives
• Firm bid rule
• Doctrine of mistake
• Meeting of the minds
• Right of bidder to withdraw
• Rescinded contract
• Six tests for right to withdraw a bid
• Importance of timeliness in declaring a mistake
• Proof of mistake
• Duty of owner to request bid verification
• Contract reformation
• Required conditions for reformation
• Sub-bids and material price quotations
• Promissory estoppel
• Required elements to establish liability
• Reliance
• Reasonable reliance
In the previous chapter, the bidding and contract award process was extensively discussed. The point was made that this process is consistent and predictable for the public sector of the industry, but not for the private sector. This chapter deals with an additional aspect of the bidding process applying to the public sector: mistakes in bids.
Firm Bid Rule and Doctrine of Mistake
Clearly, public owners are subject to many restrictions in the advertising, bidding, and contract award process as illustrated by federal law and the federal construction procurement policy. Similarly, these same procurement rules impose an important requirement on bidders called the firm bid rule.
The firm bid rule is not limited to federal government contracts but is a consistent feature of all public procurements. Under this rule, a submitted bid is understood and required to be firm. The price is fixed, not subject to negotiation, and the only terms and conditions of the bid are those established by the owner’s bid documents. Once the construction contract is awarded, the bidder is legally bound to perform the contract according to those terms and conditions. An exception to this in federal practice is the case in which the government calls for proposals leading to a negotiated contract. Under those circumstances, the bidder’s proposal is subject to further discussion under rules determined in advance by the government and stated in the request for proposals.
Not only are public bids required to be firm, but public owners also require that bid security be provided to guarantee that the low responsive and responsible bidder will enter into a contract and furnish the required bonds. As discussed in Chapter 9, this security will usually be a bid bond or a certified check in the amount of 10% of the bid price. When used, certified checks are returned uncashed to the unsuccessful bidders, usually the day following the bid opening. The check is returned to the successful bidder when the required bonds and insurance policies are furnished and the contract signed. If the successful bidder then fails to sign the contract and furnish the required bonds and insurance policies, the bid security is forfeited.
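The bid security arithmetic described above is straightforward. The following minimal sketch assumes the 10% rate stated in the text; the function name is an illustrative assumption.

```python
# Minimal sketch: the bid security (bid bond or certified check) required
# with a public bid, assuming the 10%-of-bid-price rate stated in the text.

def bid_security(bid_price: float, rate: float = 0.10) -> float:
    """Amount of security that must accompany the bid."""
    return bid_price * rate

# A $5,000,000 bid would require $500,000 in bid security.
print(bid_security(5_000_000))  # 500000.0
```

If the successful bidder fails to sign the contract and furnish the required bonds and insurance, this amount is forfeited; otherwise the security is returned.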
Since the terms and conditions of the contractor’s bid, except for the pricing, are entirely determined by the owner, the firm bid rule imposes immense liability on public bidders. For the bid price, they undertake a firm obligation to perform the contract work strictly in accordance with the owner’s terms and conditions, typically consisting of section after section of highly technical specifications. Not only that, the bidder must perform all of the contract work within fixed time limitations that are often very restrictive.
The severe implications of the firm bid rule raise the question of what happens when a low bidder makes a mistake and submits a bid with a price lower than intended. Under the doctrine of mistake, the bidder on federal contracts and in most states may be relieved of the duty to perform the contract and, in certain circumstances, may be allowed to correct the bid and still be awarded the contract. Several logical reasons underlie this concept.
First, from the standpoint of equity, one party to a contract should not be permitted to profit unconscionably because of a mistake of the other party. A corollary point is that a bid containing a mistake does not represent the intent of the bidder, and a contract based on such a bid cannot represent a meeting of the minds. Without such a meeting of the minds with respect to the three elements required for contract formation (offer, acceptance, and consideration), there can be no proper, legally binding contract.
The doctrine of mistake, as it has been applied by our courts, usually has resulted in bidders who make a mistake in their bids on federal contracts (and other public contracts following the federal policy) being allowed to withdraw. In this case, the potential contract would be said to be a rescinded contract. When the contract has been rescinded, both the bidder and the bidder’s surety are released from the normal obligations guaranteed by the bid bond.
Generalized Rules for Withdrawal
If low bidders were indiscriminately released from the obligations of their bids whenever they claimed that they had made a mistake, the integrity of the public bidding process would be undermined. Bidders who were low by large margins could avoid performing the contract by the simple expedient of claiming that they had made a mistake. Therefore, the kinds of mistakes that permit bidders to withdraw are strictly limited, and our courts have defined narrow generalized grounds for withdrawal. The following six separate tests for withdrawing a bid must be met by a bidder who has made a mistake in a bid:
1. The claimed mistake must be material—that is, it must make a significant difference in the total bid price.
2. The claimed mistake must be subject to objective determination. This means that the nature and magnitude of the mistake must be clearly demonstrable by examining the bid or bid preparation documents.
3. The claimed mistake must be clerical in nature as opposed to a mistake in judgment. An example of a clerical mistake would be a mistaken total for a column of figures or some other demonstrable arithmetic mistake. An example of a mistake in judgment would be overestimating the productivity of a pile driving crew, resulting in an estimated cost for that work that was far too low. In an Iowa case, a contractor was relieved of its bid because of a bid error attributed to a last-minute recording of a subcontractor’s price as \$22,000 instead of the correct price of \$220,000. The contractor had requested bid withdrawal immediately after the bid opening.[1] Similarly, in a New York case, a contractor who had intended to make a last-minute price reduction of \$21,300 inadvertently transposed this reduction to the final bid papers as \$213,000. The contractor informed the owner immediately of the mistake and requested withdrawal of its bid. When the owner refused to allow the contractor to withdraw and awarded the contract, the contractor refused to perform, and the owner sued for monetary damages and for forfeiture of the contractor’s bid bond. The contractor and surety moved to have the alleged contract rescinded. The court ruled for the contractor, stating: “There was never any meeting of the minds of the parties which could give rise to a contract since the bidder never submitted its real bid but instead, an erroneous one not at all expressing its intent.”[2] However, in a case where the bid involved the construction of bridge decking over the Mississippi River between Missouri and Tennessee, the Missouri Supreme Court refused to excuse a bidder who had claimed two separate mistakes, one involving the use of incorrect labor rates and the second involving the omission of state sales tax. The court concluded that both mistakes were judgmental, not clerical. The court also concluded that the bidder conducted a poor pre-bid investigative analysis of the local conditions affecting its bid.[3] These cases illustrate the distinction that courts make between judgmental and clerical errors.
4. It must be clear that the owner would unconscionably profit from the mistake if the bidder were not allowed to withdraw. This is really an extension of the first test mentioned, that of materiality.
5. The position of the owner must not be prejudiced except for the loss of bargain resulting from allowing the bidder to withdraw. If the bidder who submitted a bid that was too low as the result of a mistake is allowed to withdraw, the owner obviously loses the benefit of the bargain that otherwise would have been enjoyed. This consequence of bidder withdrawal is inevitable. However, the owner should lose nothing else. This test is usually associated with the timeliness of the bidder’s claim of mistake. If the bidder waited for a considerable period before calling an obvious mistake to the owner’s attention, the owner will lose a great deal of valuable time in addition to the obvious loss of the bargain of the low price.
6. The bidder’s mistake should not have resulted from a failure to perform some positive legal duty or from gross or culpable negligence. In other words, if the bidder made no effort to ascertain the local laws and regulations that clearly affect the contract work or prepared the bid in a haphazard and careless way that indicated gross negligence, relief may not be granted.
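The six tests above function as a conjunctive checklist: a mistaken bidder must satisfy all of them to withdraw. The following is a hedged illustrative sketch of that checklist (not legal advice); the key names and the example facts are assumptions for illustration.

```python
# Hypothetical sketch: the six court-defined tests for allowing a mistaken
# bidder to withdraw, modeled as a conjunctive checklist. All six must pass.

TESTS = (
    "material",                   # significantly affects the total bid price
    "objectively_determinable",   # demonstrable from bid preparation documents
    "clerical_not_judgmental",    # arithmetic/transcription error, not judgment
    "unconscionable_profit",      # owner would unconscionably profit otherwise
    "owner_not_prejudiced",       # owner loses only the bargain (timely report)
    "no_gross_negligence",        # no neglected legal duty or gross negligence
)

def may_withdraw_bid(mistake: dict) -> tuple:
    """Return (allowed, failed_tests) for a claimed bid mistake."""
    failed = [name for name in TESTS if not mistake.get(name, False)]
    return (len(failed) == 0, failed)

# Example: the $22,000-vs-$220,000 transcription error, reported immediately
# after bid opening, would plausibly satisfy all six tests.
claim = {name: True for name in TESTS}
print(may_withdraw_bid(claim))  # (True, [])

# A judgmental error (e.g., overestimating crew productivity) fails test 3.
judgment_error = {**claim, "clerical_not_judgmental": False}
print(may_withdraw_bid(judgment_error))  # (False, ['clerical_not_judgmental'])
```

Note how this mirrors the Missouri bridge-decking case: the claimed mistakes were judgmental, so relief was denied even though the other tests might have been met.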
Timeliness in Reporting Mistakes
Reporting a bid mistake as soon as possible is a key element in gaining relief from its consequences. Failure to do so could very well preclude the bidder’s right of withdrawal because of the requirement that the owner’s position not be prejudiced beyond loss of bargain. Delay in declaring the mistake could easily result in such prejudice.
Proof of Mistake
Before the bidder and bidder’s surety are released from the obligations of the bid, proof of the mistake is required. The burden of proof is on the bidder, and the proof must be clear.
The best evidence to prove a mistake is the written bid preparation “papers,” which can include anything written, ranging from the formal bid preparation sheets on the bidder’s stationery, to computer printouts, to even such things as notations on telephone memo pads or scraps of paper such as the back of an envelope. Finalizing a bid is often a stressful and frenetic affair resulting in many opportunities for making mistakes. A bidder who has made a mistake cannot afford to be shy and must be prepared to explain exactly how the mistake occurred.
Duty to Verify a Low Bid
Not only do bidders for public construction contracts have certain rights when they discover a bid mistake, but some public owners also have a duty to verify the low bid when a mistake is suspected. There are three important points in connection with this duty to request bid verification.
1. The federal practice requires the government to seek verification of the low bid when a mistake is suspected or should have been suspected. The government normally does this by promptly requesting the low bidder to check the bid and confirm in writing that it is correct and represents the intent of the bidder. The government request for verification should be made in writing.
2. The fact that a low bid is substantially lower than the next lowest bid or substantially lower than the government estimate is in and of itself cause to suspect that a mistake was made.
3. When a specific mistake is suspected, it is not enough that the government merely seek general verification of the bid. In these circumstances, the government should direct the bidder’s attention to the specific area of the bid where the mistake is suspected and request confirmation of the bidder’s intent with respect to that specific area of the bid.
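Point 2 above amounts to a screening rule: a bid far below the competition or the government estimate should, by itself, trigger a verification request. The sketch below illustrates that rule under stated assumptions; the 20% threshold is borrowed from the subway-bid anecdote later in this section and is not a fixed regulatory figure.

```python
# Illustrative sketch only: when should a low bid trigger a verification
# request? The 20% threshold is an assumption for illustration; federal
# practice states no fixed percentage.

def needs_verification(low_bid: float, second_bid: float,
                       owner_estimate: float, threshold: float = 0.20) -> bool:
    """Flag a low bid as suspect if it is substantially below either the
    next lowest bid or the owner's estimate."""
    below_second = (second_bid - low_bid) / second_bid >= threshold
    below_estimate = (owner_estimate - low_bid) / owner_estimate >= threshold
    return below_second or below_estimate

# Hypothetical figures: a bid 20% below both the second bid and the estimate
# (as in the author's subway-project anecdote) should be verified.
print(needs_verification(8_000_000, 10_000_000, 10_000_000))  # True
print(needs_verification(9_900_000, 10_000_000, 10_000_000))  # False
```

Even when this screen fires, the cases that follow show a general verification request is not enough if a specific mistake is suspected; the request must point to the suspect area of the bid.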
The following cases illustrate this point. In the first case, a public utility in Oregon took bids for a project in which the specifications stated that concrete-encased duct banks were to be used under railroads or roadways in filled areas. The electrical drawings did not show any railroads or roadways or any other clear indications of duct banks extending under these kinds of surface features. Even after the issue of an addendum consisting of additional drawings, the presence of duct banks under railroads or roadways was unclear. When the bids were opened, the owner’s consulting engineer suspected that most bidders had failed to provide for concrete-encased duct banks under road areas and recommended that the owner contact the low bidder before the contract award and ascertain that they had included the cost of concrete encasements for the duct banks under roads in their bid. The owner did contact the low bidder but only inquired whether they were satisfied with their bid. The inquiry did not mention the particular mistake that was suspected by the engineer. The low bidder rechecked the bid, did not discover the mistake, and confirmed the bid and executed the contract.
Once the contract was underway, the owner sent the contractor a set of drawings clearly indicating the requirement for concrete encasement around the duct banks and the location of duct banks under various roads. The contractor immediately informed the owner that the duct bank encasement work on the new drawings constituted a constructive change to the contract and would require additional compensation. A dispute ensued that eventually wound up in court.
The Court of Appeals of Oregon was not persuaded by the owner’s contention that they had warned the contractor by requesting them to recheck the bid prior to award of the contract, ruling instead that when the owner had reason to know the low bidder had likely made a mistake and strongly suspected where the potential mistake lay, the so-called warning was completely insufficient. The court further ruled that the concrete encasement requirement constituted a change to the contract and that the contractor was entitled to additional payment.[4]
In a federal case, the U.S. Claims Court (now the United States Court of Federal Claims) ruled that the government improperly accepted a bid knowing that the bid contained a mistake and knowing the general area where the mistake was made. Bids were taken for the construction of a health center in Utah where the bidders were given approximately a seven-week period in which to prepare their bids. The project required a modular storage system (MSS), which the specifications indicated must be manufactured as a unit by a single manufacturer. The bid documents indicated that a separate addendum would be issued prior to the bid date listing those firms qualified to manufacture the MSS.
The specifications made continuous reference to a particular manufacturer for the MSS system. The addendum that was to have been issued no less than 72 hours prior to the bid opening was never issued. The trial testimony indicated that the government architect had discussed the addendum with the government but was told that there was insufficient time to issue it. The architect’s proposed addendum indicated only one manufacturer qualified to supply the system, and that manufacturer was different than the one referenced in the specifications.
The low bidder, whose bid was considerably below the government’s estimate, orally advised the government on the day of the bid opening that their bid did not contain any costs for the MSS. Later, in response to an oral inquiry from the government for confirmation of its bid, the low bidder confirmed its bid in writing, believing (according to the trial evidence) that the nonissuance of the addendum meant that no costs were intended by the government to be included for the MSS. The oral “inquiry” from the government consisted of a telephone call from a government representative to the contractor’s office leaving a message with a person who answered the phone that only requested bid confirmation. There was no reference of any kind to the costs for the MSS system. In ruling for the contractor, who eventually went to court after the contract was awarded, the court noted that the government had actual knowledge that the low bid did not include costs for the MSS because they had been so advised orally at the bid opening. Under these circumstances, a request for bid verification that did not specifically refer to this problem was judged to be insufficient. The court said:
Such failure, in light of the defendant’s actual knowledge that D & D was misreading the specifications, i.e., believing that receipt of the Addendum was a condition precedent to including bidding costs on the MSS, indicates at the very minimum of bad faith on the government’s part, and ordinarily would entitle plaintiff to an equitable adjustment.
The court concluded that the contractor was entitled to an equitable adjustment for costs of the MSS system that had been omitted from the bid.[5]
The owner’s request for verification for a low bid can produce a completely different result. A number of years ago, the author’s company had submitted a bid for a subway project that was 20% below the second low bid and the owner’s estimate. The bid had been based on our interpretation of the requirements for support of the underground openings required by the project. We received both oral and written notification that our bid was very low, along with the request that we confirm our bid in writing. After checking the bid for errors and finding none, we carefully explained by letter our interpretation of the ground support specifications upon which our bid was based, and that, based on that interpretation, our bid was correct. Upon receipt of our letter, the owner, a major rapid transit district, advised that our interpretation of their specifications was erroneous and construction of the project according to our interpretation and by our intended methods would not be acceptable to them. Following an extended series of conferences, the owner finally agreed that our interpretation of their specifications was possible, although not what they had intended. All bids were rejected, and the project readvertised for bids with revised drawings and specifications making clear exactly what the owner required. The author’s company was once again the low bidder, although at a considerably higher figure. Had there not been a requirement for bid verification on the part of the owner, this misunderstanding regarding the ground support requirements for the job would not have surfaced until after the contract had been entered into, probably resulting in a major dispute.
Possible Outcomes of Mistake Verification
A number of outcomes are possible under federal rules when a bid mistake has been discovered and verified. The usual result is that the mistaken bidder withdraws the bid, and the potential contract is rescinded. However, that is not always the case.
First, if the lowest responsive, responsible bidder who has made a bid mistake is willing to waive the right of relief, the discovery of the bid mistake will not matter, and the bidder will be awarded the contract at the original bid price. If the magnitude of the error is not too great, many bidders will elect this option. In most cases, waiver of the bidder’s right of relief will be effected by the bidder’s simply remaining silent after the mistake is discovered—that is, not informing the owner that a mistake was made.
Second, in some circumstances, a better result for the bidder on a federal government contract may be obtained with a contract reformation. The bidder may be allowed to correct the mistake resulting in the contract being reformed rather than rescinded, as when a mistaken bid is withdrawn. In this case, the reformed contract price will be the original bid price corrected upward by the amount of the mistake. Such a correction is allowed only when the correction does not alter the order of bidders in terms of lowest bid price to highest.
Formerly, the government would permit this option only when it could be conclusively shown on the face of the bid itself what the dollar amount of the intended bid would have been without the mistake. In effect, this is what occurs when the government makes upward corrections in erroneous unit price extensions and errors in the addition of the total of the individual bid items in a schedule-of-bid-items bid. More recently, bidders have been allowed to make upward corrections to the contract price based on demonstration of a mistake in the bid work papers as well, as distinct from a mistake demonstrable on the face of the bid, provided that reference to the bid papers clearly establishes the amount of the intended bid. For instance, the Comptroller General of the United States supported a government contracting officer’s decision allowing a bidder who had misplaced a decimal point when transposing the cost of subcontracted electrical work to correct its bid, raising the bid total to within one percent of the second low bidder. When the second low bidder filed a protest, the Comptroller General ruled that it was proper to allow bid correction because the bidder submitted clear evidence of both the existence of a mistake and the intended bid price. The bidder’s worksheets indicated not only the misplaced decimal point but also the intended markup to be applied to the subcontract work. Therefore, it was possible to determine the intended bid price with precision.[6]
The Comptroller General acted similarly over the objection of the second low bidder in another case by allowing the low bidder to increase its bid by the amount of omitted home office overhead cost. The bid preparation worksheets indicated that the bidder intended to include home office overhead costs of \$370,000, but because of a decimal point mistake, included only \$37,000. The low bid was corrected upward by \$333,000, still leaving it low. The correction was allowed because the low bidder’s worksheets furnished convincing evidence of the intended bid price.[7]
However, the Comptroller General refused to reverse a contracting officer’s determination that a low bidder not be allowed to increase its bid when the bidder claimed that they had mishandled the interrelationship between their base bid and certain option items. When the contracting officer examined the low bidder’s bid preparation papers, he discovered that it was possible to arrive at two different bid prices. The Comptroller General said that a mistaken bid can be raised only when the bidder can provide clear evidence of the intended bid amount and that when examination of the bid papers indicated that it was possible to arrive at two different bid prices, this standard had not been met. The bidder was allowed to withdraw its bid but was not allowed to make an upward correction.[8]
A third point is that contract reformation is possible even when the bid mistake is not discovered until after the contract has been entered into. However, the reformed contract total can never exceed the price of the next higher bid.
Finally, if the correction of a mistake would result in the contract price increasing to a figure higher than that of the second lowest bid, the only remedy available to the bidder is the withdrawal of the mistaken bid and rescission of the contract. If the contract has already been entered into, the only possible remedy would be cancellation of the contract.
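The correction rules just described reduce to a simple numerical test: an upward correction is permitted only when the intended bid amount can be established with precision and the corrected figure still leaves the bidder lowest. The following sketch uses hypothetical figures (loosely modeled on the misplaced-decimal overhead example, not taken from any cited case) to illustrate the test:

```python
# Hypothetical illustration of the federal bid-correction rule discussed
# above: an upward correction is allowed only if the corrected bid does
# not displace the bidder from the low position.

def may_correct_upward(bid_as_submitted, intended_bid, other_bids):
    """Return True if an upward correction to intended_bid would be
    permitted, i.e., the corrected bid remains below all competing bids."""
    if intended_bid <= bid_as_submitted:
        return False  # only upward corrections are at issue here
    return intended_bid < min(other_bids)

# Assumed figures: the bidder included $37,000 of home office overhead
# instead of the intended $370,000, a $333,000 upward correction.
submitted = 4_000_000
intended = submitted + (370_000 - 37_000)
competing_bids = [4_500_000, 4_750_000]

print(may_correct_upward(submitted, intended, competing_bids))          # still low: allowed
print(may_correct_upward(submitted, submitted + 600_000, competing_bids))  # displaces low bidder: not allowed
```

In the second call the corrected total would exceed the second low bid, so the only remedy left to the bidder would be withdrawal of the mistaken bid, as the text above describes.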
Promissory Estoppel
General contractors commonly rely on price quotations from subcontractors and material suppliers to competitively and accurately determine their costs for significant portions of their work. Very few can efficiently execute all of the work required by the typical prime construction contract, and most tend to build organizations that focus on performing particular kinds of work only. In addition, few prime contractors are also construction material suppliers. Therefore, general contractors bidding for prime construction contracts depend on price quotations received from subcontractors and material suppliers, the lowest of which will be included in the general contractor’s bid to the owner.
The subcontractors’ and material suppliers’ price quotations are based on the drawings and specifications that are part of the bidding documents prepared by the owner for each project. These are the same documents upon which the general contractor relies, and it is presumed that the general contractors, subcontractors, and material suppliers all have the same understanding of the requirements of the project drawings and specifications when the price quotations are offered and received. Further, when subcontractors and material suppliers tender their price quotations to general contractors, they understand that the general contractors will rely on these quotations and consider them to be in strict conformance with the project drawings and specifications unless advised otherwise. The subcontract and material supply price quotations are typically received only a short time before the prime contract bids are due.
If the subcontractor or material supplier should then refuse or otherwise fail to honor the quotation, the general contractor who is determined to be the low bidder and awarded the prime contract usually is forced to obtain the subcontract work or materials from others whose price quotation was higher. Since the price differential would not have been included in the prime contract bid to the owner, the general contractor has been damaged. These damages can be recovered under the doctrine of promissory estoppel.
Concept of Promissory Estoppel
Promissory estoppel is based on the concept of equity and requires that one who has placed another in a changed and untenable position by promising a certain performance is “estopped” from denying the performance. One who denies performance must make good the damage caused by failure to perform as promised. Although involving the common law principle of damages for breach of contract, promissory estoppel does not depend on the existence of a contract between the general contractor and the subcontractor or material supplier. In the bidding situation just described, a contract between the general contractor and the subcontractor or material supplier has not yet come into being. It is the refusal or failure of the subcontractor or material supplier to enter into a contract based on the price quotation that triggers the application of promissory estoppel.
Elements Necessary to Establish Liability
The subcontractor or supplier who refuses to honor a price quotation to a general contractor is liable for the resulting damages if the general contractor can prove the following elements that establish liability:
1. The general contractor must establish that a clear and definite offer was made by the subcontractor or material supplier.
2. The general contractor must establish that at the time the offer was made, the subcontractor or material supplier knew that the general contractor would rely on the offer. This condition of reliance is satisfied if the subcontractor or material supplier knew that the purpose for which the general contractor was receiving quotations was to use the lowest of the quotations received in the prime bid to the owner.
3. The general contractor must have relied on the offer in the prime bid to the owner, and the reliance must be considered to be reasonable; that is, the price quotation must not be so much lower than others received that the general contractor would have reason to suspect a mistake in the quotation or a misunderstanding regarding the scope of work included and specifications that apply.
4. Before the subcontractor or material supplier can be held liable, the fact that the general contractor has or will be damaged by the failure to perform and the extent of the damages must be established.
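The measure of recovery under the doctrine is the price differential the general contractor is forced to absorb, and recovery depends on all four elements being proved. A minimal sketch (the element names, function, and figures are hypothetical, chosen only to mirror the list above):

```python
# Hypothetical sketch of promissory-estoppel damages as described above:
# recovery is the difference between the defaulting quotation relied on in
# the prime bid and the replacement price actually paid, available only
# when all four liability elements are established.

def estoppel_damages(quoted_price, replacement_price, elements_proved):
    """Return recoverable damages, or 0 if any liability element fails."""
    if not all(elements_proved.values()):
        return 0
    return max(replacement_price - quoted_price, 0)

elements = {
    "clear_and_definite_offer": True,
    "knew_of_reliance": True,
    "reasonable_reliance": True,   # fails if the quote was suspiciously low
    "damages_established": True,
}

# Figures loosely modeled on the Preload Technology price differential.
print(estoppel_damages(1_000_000, 1_155_056, elements))  # 155056
```

If, for example, the quotation was so far below the others that reliance was not reasonable (as in the Illinois case discussed later), the third element fails and nothing is recoverable.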
The following cases illustrate the application of the preceding rules. In an Alaska case, the low bidder had relied on an electrical quotation received prior to its bid to the owner. Two days after the bid opening, the electrical contractor informed the prime contractor that it had omitted certain work from its subcontract bid and would be unable to perform the electrical work at the quoted price. Following the prime contract award, the prime contractor awarded the electrical subcontract to the second low electrical bidder and sued the low electrical bidder for the price differential. The low electrical bidder defended the suit, arguing that its quotation was nothing more than an offer to enter into a subcontract and that no binding subcontract was formed since the prime had never formally accepted the offer.
In ruling that the original electrical subcontractor was liable on the basis of promissory estoppel, the Supreme Court of Alaska indicated that the subcontractor would be held to its bid if it was foreseeable that the prime contractor would act in reliance on the bid by incorporating it into the prime contract. The court further said:
It is industry custom for subcontractors to submit bids at the last moment. This trade practice has evolved because of the industry demands for firm, current prices. The custom is facilitated by the ease with which a bid can be placed without the formalities of a contract. However, if the contractor is to deliver a set price to an owner, these bids must be binding for a reasonable time.[9]
In a federal case, the U.S. Court of Appeals also ruled that a subcontractor was bound to its quotation to a prime contractor submitting a bid to an owner. In a contract for the construction of a storage reservoir, the prime contractor informally requested bids from subcontractors for earthwork and piping. The low subcontract bidder submitted their quotation and, at the prime’s request, confirmed it with a detailed written breakdown. After award of the contract to the prime contractor, the subcontractor advised that they would be unable to perform the subcontract work on the project due to “changes in our workload and other developments.” The prime contractor awarded the subcontract for the work to the next lowest subcontractor available and sued the original low bidder for the \$155,056 price differential. The U.S. District Court held that the subcontractor was liable for the price differential based on the doctrine of promissory estoppel. The U.S. Court of Appeals affirmed.[10]
On the other hand, an Illinois court refused to hold a subcontractor liable on the grounds that the prime contractor was unable to prove that its reliance on the sub-bid offer was reasonable and justifiable. The court found that there was such a great disparity in bids received from subcontractors for the same work that the prime contractor should not have relied on the low bid without verification. The court said:
We hold that the trial court properly refused to apply the Doctrine of Promissory Estoppel because Nielsen knew, or should have known of the obviously mistaken bid by National. Consequently, such reliance as Nielsen claims it placed on National’s bid was as a matter of law not reasonable.[11]
Although the preceding discussion has been framed in terms of price quotations to general contractors in a bidding situation, the doctrine of promissory estoppel is a general legal principle that can be applied to other situations in construction as well. Construction owners, for instance, can have an expectation induced as well when receiving bids from contractors. When a bid bond or other form of bid security is required, however, the owner’s interest is protected without resort to the doctrine of promissory estoppel.
Conclusion
This chapter concluded an examination of construction industry bidding practices with a brief discussion of the firm bid rule, the doctrine of mistake, bid rescission and reformation, and promissory estoppel. The next seven chapters focus on the operation and interpretation of the contracts that result from the bidding and award process.
Questions and Problems
1. What is the import of the firm bid rule? Does it apply to all federal projects? To most? To what type of project does it not apply? What is the import of the doctrine of mistake? Why is it so important to contractors bidding competitively today?
2. Why doesn’t a contract based on a bid containing a mistake represent a meeting of the minds? Could a meeting of the minds result if the contractor elected to waive the normal right to relief?
3. What does “rescinded” mean in the context of a contract based on a bid containing a mistake? What are the six tests that must be met in order for the potential contract to be rescinded when a bidder declares a mistake in the bid?
4. Does a bidder who has made a mistake in the bid have to prove the existence of the mistake before being allowed to withdraw the bid? What is the best way to prove the existence of the mistake? What are bidding “papers”? What do they include?
5. What three points does this chapter make about a public owner’s duty to verify a suspected bid mistake?
6. Once a bidder has proved the existence of a mistake and has established grounds for withdrawal of the bid, what are the three possible outcomes under federal rules? What is a reformed contract? In the recent past, what two requirements had to be met for a federal contract that had not been awarded to be reformed on account of a bid mistake? How have these requirements changed?
7. Why is timeliness so important with respect to declaring a bid mistake? What does “loss of bargain” mean in this connection? Is it affected by the timing of the declaration of a bid mistake? What is affected?
8. Does the doctrine of promissory estoppel depend on the existence of a contract? What is the central idea of the doctrine? What are the four aspects of today’s competitive bidding situation discussed in this chapter that make the doctrine so important? Does the doctrine apply to other situations in construction?
9. What are the four necessary elements that must be proved to recover damages under the doctrine of promissory estoppel? Distinguish between reliance and reasonable reliance with reference to the doctrine. Give an example of this distinction in the typical sub-bid/prime bid situation.
10. A material supplier gives a clear, firm quote to a contractor, who is bidding a well-publicized construction job. The supplier knows the contractor is bidding the job as a prime contractor when the quote is given. The price quoted was reasonable compared to other quotes that the contractor received for the same material. The contractor uses the supplier’s quote in the prime bid, is the low bidder, and is awarded the prime contract. The supplier then refuses to furnish the material, and the contractor has to spend an additional \$200,000 over the amount of the supplier’s quote to obtain the same material from another supplier.
1. Is the contractor likely to get a judgment for the \$200,000 by suing the supplier? Why or why not?
2. If, at the time the quote was given, the supplier did not know that the contractor was bidding the job as a prime contractor and did not know why the contractor wanted the quote, would the contractor be likely to get a judgment for the \$200,000? Why or why not?
3. If the supplier’s price was 40% of the next lowest bid and, without further contact with the supplier, the contractor used the price in the bid, would the contractor be likely to get a judgment? Why or why not?
4. If the contractor had received the supplier’s price so late that it could not be used in the prime bid to the owner but took the price over the phone anyway, and the supplier then refused to furnish the material for that price, would the contractor be likely to get a judgment? Why or why not?
1. M. J. McGough Co. v. Jane Lamb Memorial Hospital, 302 F. Supp. 482 (D.C.S.D. Iowa 1969).
2. City of Syracuse v. Sarkisian Bros., Inc., 451 N.Y.S.2d 945 (App. Div. 1982).
3. State of Missouri v. Hensel Phelps Constr. Co., 634 S.W.2d 168 (Mo. 1982).
4. Ace Electric Co. v. Portland General Elec. Co., 637 P.2d 1366 (Or. App. 1981).
5. Derrick & Dana Contracting, Inc. v. United States, 7 Cl. Ct. 627 (1985).
6. Matter of Guardian Construction, Comp. Gen. No. B-220982 (March 6, 1986).
7. Matter of Lash Corporation, Comp. Gen. No. B-233041 (February 6, 1989).
8. Matter of H. A. Lewis, Inc., Comp. Gen. No. B-249368 (November 16, 1992).
9. Alaska Bussell Electric Co. v. Vern Hickel Construction Co., 688 P.2d 576 (Alaska, 1984).
10. Preload Technology, Inc. v. A. B. & J. Construction Co., Inc., 696 F.2d 1080 (5th Cir. 1983).
11. S. M. Nielsen Co. v. National Heat & Power Co., Inc., 337 N.E.2d 387 (Ill. App. 1975).
Key Words and Concepts
• Breach of contract
• Privity of contract
• Proof of breach
• Materiality of the contract breach
• Protest and reservation of rights
• Waiver of rights
• Written notice of protest
• Effect of disclaimers
• Anticipatory breach of contract
• Express contract provisions
• Implied warranties
• Failure to make payment
• Interference with contractual performance
• The Spearin Doctrine
• Misrepresentation
• Nondisclosure of superior knowledge
• Improper termination of contract
• Uncompleted punch list work
Up to this point, we have dealt in general terms with construction prime contracts, contracts closely related to prime contracts, and the bidding and contract award process, including the subject of mistakes in bids. This chapter covers issues connected with breach of contract. Following chapters concern the actual operation of contracts in practice.
Consider the following scenarios:
A contract between a prime contractor and an owner provides that the owner will make monthly progress payments within 30 calendar days from the engineer’s approval of the contractor’s estimate of work performed the previous month, subject to a 10% retention requirement. The owner, however, consistently does not pay until an average of 60 days after the engineer’s approval and in some instances as late as 90 days after monthly estimate approval. Or suppose that the owner states at some point that all future progress payments will be withheld until the contractor agrees to withdraw a claim filed on a previously disputed contract matter.
Consider a subcontract situation in which a prime contractor treats a subcontractor as the owner treated the contractor in the situation described in the previous paragraph. Or suppose a subcontractor refuses to continue subcontract performance and walks off the job because of a dispute over the proper amount of payment for a subcontract work item previously performed. Suppose that after having entered into a subcontract agreement with a subcontractor, the prime contractor arbitrarily terminates the subcontractor in favor of another subcontractor who offers to perform the subcontract work for a lower price.
All of these situations have one thing in common: each constitutes a breach of contract entitling the nonbreaching party to recover monetary damages resulting from the breach.
Breach of Contract and Materiality of Breach
Breach of Contract
A breach of contract is a default of a contract obligation, or, in other words, a refusal or a failure by a party to a contract to meet some duty required by the contract. The failure can be one of either omission or commission. There can be no breach of contract unless privity of contract exists (see Chapter 2).
Once privity of contract has been established, two additional elements must be proven to show that a contract breach has occurred. First, it must be proven that the contract imposed a specific duty on one party or the other or on all parties to the contract. This duty may be either a requirement to perform certain acts or duties or to refrain from certain acts. Second, it must be proven that there was a failure to meet that duty.
Materiality of the Breach
The concept of materiality previously discussed with respect to bidding irregularities and mistakes in bids (see Chapters 11 and 12) also applies to contract breaches. Because of the complexity of construction contract terms and conditions and technical specifications in today’s world, few contracts are performed to completion without a variety of breaches by both owner and contractor parties to the contract. The question then becomes how important is a particular contract breach—or, what is the materiality of the contract breach?
The more “material” the breach, the greater the rights and remedies that accrue to the nonbreaching party or parties. Minor breaches may not give rise to any remedy at all, whereas a major or material breach of contract may relieve the nonbreaching party from any obligation to continue performance of the contract. This means, in theory, that an owner has the right to terminate the contract in the case of a material breach by the contractor and the contractor may refuse further performance and abandon the contract in the case of a material breach by the owner. Of course, just how serious or major a breach has to be to constitute a “material” breach is an important legal question requiring advice of competent counsel. Hasty or precipitous action should never be taken on the grounds of a perceived material breach of contract by the other party. The legal consequences of later being found incorrect are extremely serious.
Having a means to judge how important a particular breach is would be highly desirable. Unfortunately, there are only a few recognized rules to help resolve this question. First, look to the wording of the contract itself, which may make it apparent that some matters are much more important than others. For instance, strongly worded explicit provisions that are prominent, clear, and not in conflict with other provisions of the contract are given great weight.
Second, consider the response of the nonbreaching party at the time of the breach. This response generally indicates how the nonbreaching party viewed the seriousness of the breach when it occurred. Courts usually consider the response to be an indication of the materiality of the breach. For example, the party who immediately sends a written protest and reservation of rights to the breaching party clearly has notified that party that a breach has occurred, that damages will result, and that the nonbreaching party expects to be compensated for those damages. A message has been sent that something important has occurred. Contrast this reaction with a case in which the nonbreaching party takes no action at all following the breach. Such a failure to call notice to the breach may constitute a complete waiver of rights and remedies or, at the very least, may reduce the materiality that a court may later attach to the breach, thus reducing the nonbreaching party’s rights and remedies.
The following cases illustrate what can happen when a party acquiesces to the other party’s breach without protest. In one case, the owner and contractor had a disagreement over the handling of a subcontractor’s payment during the construction of a residence. At that point, the owner took over all project accounting and made payments directly to subcontractors and suppliers, telling the contractor that it should consider itself a volunteer if it continued work. Nonetheless, the contractor did continue work for five additional months, at which point the owner ordered the contractor off the job. When the contractor filed a lien to secure payment for work performed, a trial court found that the owner had breached the contract by preventing the contractor from continuing performance but that the contractor could not recover damages because the contract had been rescinded. The Court of Appeals of Indiana upheld the trial court, stating that by failing to demand modification or termination, the contractor had consented to rescission of the contract.[1]
In another case, the Supreme Court of Arkansas held that a contractor who performed extra work directed by the owner on a shopping center without demanding a change order for the work, by acquiescence, agreed that the contract should be interpreted to have included that work. The owner had demanded the removal of unsuitable soils and replacement with more easily compacted soils beneath the paving sections of the project, and the contractor performed the work without protest or demand for a change order. When the contractor later filed a claim for payment for the work, the owner refused to pay. In ruling for the owner, the court held that
Where a contract is ambiguous, the court will accord considerable weight to the construction the parties themselves give to it, evidenced by subsequent statements, acts, and conduct. This record reflects that throughout the performance of the contract, Coney did all the undercutting that was required. Not only were there no claims for extra work, there is virtually nothing in the record to indicate that undercutting was of any serious concern to Coney prior to this litigation.[2]
This case illustrates that the contractor’s failure to protest and demand a change order not only barred recovery for the extra work performed but resulted in the court believing that the extra work was intended to be in the contract in the first place.
Written Notice of Protest
Since the rights and remedies available to the nonbreaching party are proportional to the materiality of the breach, it is extremely important that the nonbreaching party immediately sends a written notice of protest and reservation of rights when any breach occurs unless the matter is extremely minor. What appears to be a relatively minor matter may turn out to have serious consequences. A businesslike written protest and reservation of rights protects the nonbreaching party’s interests.
Contractors naturally wish to avoid being considered “claim happy,” and some fear retribution as a result of putting the owner on notice of alleged breaches. However, the contractor ultimately is judged by overall performance and the degree of professionalism exhibited. In the long run, a reasonable owner respects the contractor for making contractual positions clear from the beginning, as long as the contractor behaves reasonably. If the owner is not reasonable when advised of a breach, the contractor is better off discovering this fact earlier rather than later in the course of performing the work of the contract.
Effect of Disclaimers or Exculpatory Clauses
Consideration of contract breaches often involves the matter of disclaimers or exculpatory clauses (see Chapters 4 and 5). These clauses may govern whether a contract breach has occurred.
First, recall that a disclaimer or exculpatory clause is a clause in the contract stating that a party to the contract, usually the owner, is not liable for the consequences of some act or failure to act that otherwise would have been a breach of contract. The mere presence of a disclaimer in a contract does not necessarily mean that it will be enforced by the courts. Some courts are disinclined to give certain types of disclaimers full force and effect. However, if the disclaimer is prominent and clear and does not conflict with any other provision of the contract, it probably will be enforced. A disclaimer that conflicts with other contract provisions generally is not enforced, particularly in federal government contracts. Courts reason, using an old adage, that the left hand cannot properly take away what the right hand has bestowed.
Anticipatory Breaches of Contract
A threat by a party to a construction contract to take some course of action, or to refuse to perform some duty required by the contract, that would constitute a breach if actually carried out can, in and of itself, constitute a contract breach. The breach resulting from this type of situation is called an anticipatory breach of contract. As an example, consider a contract in which the owner controls and is contractually required to provide the means of access to the project. If the owner advised the contractor at some point during contract performance that the contractor’s access would be shut off in two weeks, such advice would constitute an anticipatory breach.
Because the nonbreaching party can be damaged by the threat alone, it is not necessary to wait until the threatened action actually occurs to accrue rights of relief. For instance, in the previous example, the contractor faced with the access closure might immediately expend monies to procure or construct an alternate means of access in anticipation of losing the contractually provided access. Once the money has been spent, the damages have been incurred, even if the owner should later relent on the threatened closure. The contractor has accrued the right to be compensated for the money spent. The breach is created by the threat itself.
In such situations, the nonbreaching party should file a written protest and reservation of rights in anticipation of the threatened event after ascertaining that the breaching party really means to carry the threat out. There must be evidence of the clear intent to commit the threatened act or the clear intent not to act, as the case may be. Such intent is usually established by the words and acts of the breaching party at the time. “Words and acts of a party” means their written or oral communications and general behavior.
An anticipatory breach can also arise from an announced intention not to act when the contract requires some action to be taken.
Express Obligations and Implied Warranties
As discussed in Chapter 1, there are two types of contract provisions: express contract provisions and implied provisions (or implied warranties). Both types result in contract obligations or duties.
Express Obligations
Express obligations result directly from the clear meaning of the written words in the contract. The possibilities for breaches of express contract obligations are virtually endless since there is no limit to the different provisions that parties may expressly insert into the contract.
Implied Obligations (Implied Warranties)
Implied obligations are not expressly stated in the contract. Rather, they are the result of widely shared and well-understood implications of the contract. Frequently referred to as implied warranties, implied obligations are much more limited in number than express obligations and are frequently repeated from one contract to another.
Frequent Breach of Contract Situations
In practice, most common contract breaches are breaches of implied warranties. Far fewer common breaches involve express contract obligations. Of the following breach situations, all but the first are breaches of an implied warranty.
Failure to Make Payment for Completed Work
The obligation to make payment for completed work is always expressly stated in the contract (see Chapter 5). The obligation would be implied even if it were not expressly stated. Failure to make payment is a “material breach” and excuses the nonbreaching party from further performance—that is, courts will support the right of a contractor or a subcontractor who is not being paid to stop work and abandon the project or the right of a material supplier to abandon a purchase order contract and cease supplying material. This particular breach by an owner—or contractor in subcontract and purchase order situations—invariably excuses further performance of the contract. A party who is not being paid cannot be expected to continue to perform.
In a leading case on this issue, the U.S. Supreme Court stated in 1919 that
In a building or construction contract like the one in question, calling for the performing of labor and furnishing of materials covering a long period of time and involving large expenditures, a stipulation for payments on account to be paid from time to time during the progress of the work must be deemed so material that a substantial failure to pay would justify the contractor in declining to proceed…[3]
The failure to pay must be “substantial” to justify a contractor abandoning the work. In a Kansas case, a subcontract for concrete work to be performed for a general contractor provided that the subcontractor was to submit invoices by the 25th of each month and the prime contractor would forward the invoices to the owner for approval and payment. The subcontract also provided that the prime contractor was under no obligation to pay the subcontractor for work performed until the prime contractor had been paid for the work by the owner. During performance, the turnaround time between the subcontractor’s invoice submittal and receipt of payment ran 36 to 38 days. The subcontractor walked off the job and sued the prime contractor for breach of contract, stating that the lack of prompt payment prevented it from meeting its payroll. In reversing a trial court decision in favor of the subcontractor, the Supreme Court of Kansas determined that the subcontract did not require the prime contractor to make payment within 30 days as the subcontractor alleged and that, even if it did, a delay of six to eight days was not enough to justify the subcontractor’s abandonment of the work.[4]
Although courts will support contractors walking off the job when failure to pay reaches substantial proportions, stopping work and abandoning the contract under other breach situations is an extremely risky course for any contractor or subcontractor to take and can result in the contractor or subcontractor being held to have materially breached the contract.
As discussed in Chapter 7, contractors often include clauses making payment to the material supplier or subcontractor conditional upon being paid by the owner. However, courts have usually held such clauses enforceable only with regard to the timing of the prime contractor’s payment to the material supplier or subcontractor. Such clauses permit delay in making payment when the owner has not paid the contractor for the work in question, but if the contractor continues to withhold payment from the subcontractor after it has become clear that the owner will never pay, the contractor will be in breach of contract. Extremely strong, clear, and prominent contract language is required to establish the contractor’s right to withhold payment altogether from a material supplier or subcontractor who has properly performed the contract when the owner does not pay. Even then, courts may refuse to enforce the right because they strongly support the proposition that a material supplier or subcontractor that has performed according to the contract is entitled to be paid.
Interference with Contractual Performance
Every contract includes an implied warranty that no party shall act or fail to act in a manner that impedes or interferes with the other party’s ability to perform the contract work. There is an implied duty of cooperation.
Frequently encountered examples of breaches caused by interference with contractual performance include the owner’s failure to coordinate properly the work of multiple prime contractors, taking unreasonable time to check and approve shop drawings, or failing to make the site or access to it available to the contractor. Another frequent claim of contract breach arises from a prime contractor’s failure to coordinate properly the work of subcontractors, so that one subcontractor’s work interferes with another’s.
The following cases illustrate incidents in which courts found breaches of contract due to interference. In Illinois, an electrical contractor on a multiple prime project recovered lost labor productivity because of the owner’s failure to coordinate properly the various prime contractors on the site. The electrical contractor’s progress was hampered by the general building contractor’s failure to complete rough-in work. The electrical contractor was also forced to perform work in a start-and-stop, out-of-sequence manner because the general building contractor was frequently moving its crews about the site. In ruling for the electrical contractor, the Appellate Court of Illinois said:
Although we agree the District’s duty to keep the project in the state of forwardness is not tantamount to a warranty guaranteeing that no delays will occur, if the District either actively created or passively permitted to continue a condition over which it had control which made performance of the contract more difficult or expensive, it may be held to have breached an implied contractual duty for which it must respond in damages.[5]
In a Florida case involving construction of a shopping center, the District Court of Appeals of Florida ruled that the owner’s lateness in providing necessary drawings and specifications and in executing required change orders amounted to active interference in the work.[6]
In another case, during performance of a government contract requiring the installation of meters in apartments housing naval personnel, the contractor encountered recurring problems with noncooperative occupants. The U.S. Court of Appeals determined that the Navy had breached the contract by failing to provide reasonable access for the performance of the work. In reversing an earlier decision by the Armed Services Board of Contract Appeals, the court held:
After the contractor notified the project manager that the contractor’s reasonable efforts had not resulted in gaining entry to certain apartments, the Navy was under an implied obligation to provide such access so that the contractor could complete the contract within the time required by its terms. Consequently, if any part of the contractor’s work was thereafter delayed for an unreasonable period of time because of the Navy’s failure to provide access to the apartments, the contractor is, under the “Suspension of Work” clause entitled to an increase in the cost of performing the contract.[7]
The preceding cases are illustrative only. Abundant case law decisions support many other forms of breaches of contract caused by interference.
The Spearin Doctrine
Perhaps the most important and well-known implied construction contract warranty is the Spearin Doctrine, which refers to the owner’s implied warranty of the accuracy and sufficiency of the drawings and specifications. The Spearin Doctrine resulted from a landmark case decided in 1918 by the U.S. Supreme Court on appeal from the U.S. Court of Claims (now the United States Court of Federal Claims). Spearin had a contract with the government to construct a dry dock project that contained a large sewer. Spearin performed the contract work strictly in accordance with the government’s drawings and specifications. During contract performance, a storm occurred, and the completed sewer burst, destroying itself and causing considerable damage to the balance of the work in progress. The government took the position that the possibility of damage to the work during the life of the project was a risk that Spearin as the contractor had assumed. Spearin disagreed. The decision in this case has proven to be the Magna Carta of rights for construction contractors. In ruling for the contractor, the court said:
If the contractor is bound to build according to plans and specifications prepared by the owner, the contractor will not be responsible for the consequences of defects in the plans and specifications.[8]
Simply stated, the Spearin Doctrine says that the owner warrants the accuracy and sufficiency of the drawings and specifications that are for the contractor’s use in performing the contract work. The basic principle involved applies to owners, prime contractors, or anyone who contracts with and furnishes drawings and specifications for the use of the party actually doing the work. This means that if the drawings and specifications are precisely followed and the result is not satisfactory, the responsibility rests with the entity that furnished the drawings and specifications. Additionally, the responsibility for the consequences of errors and omissions lies with the furnisher of the drawings and specifications. This responsibility extends to the cost of attempting to comply with defective drawings and specifications, including the costs of all delays involved, such as time needed for the drawings and specifications to be corrected.
An exception to the applicability of the Spearin Doctrine is the situation when the specifications are of the performance type. Performance specifications are those that simply define the requirements that the finished product must meet, leaving it to the contractor to devise the design, means, methods, and materials required to meet the specified requirements. Here, if the finished product does not meet project requirements, the contractor bears the liability.
A number of other implied warranties are similar in principle to the Spearin Doctrine:
• Architect/engineers impliedly warrant that their design work is competently performed and conforms to the normal standards of the profession.
• When the contract calls for owner-furnished materials or equipment, owners impliedly warrant that the materials or equipment that they furnish are proper and suitable for their intended purpose.
• When a contract requires the contractor to follow a specified erection procedure or construction sequence, the owner impliedly warrants that the contractually specified construction method or procedure will work and produce the desired result. If it does not, the responsibility for both the poor result and the costs associated with attempting to comply with the specified method or procedure lie with the owner.
• When architect/engineers, construction managers, or contractors provide cost estimates to owners, they impliedly warrant that these cost estimates are reasonably accurate.
• Contractors who perform construction work for laypersons who are relying on the contractor’s skill and expertise impliedly warrant that such construction work will be done properly and will result in a product generally satisfactory for the intended purpose.
In all of these situations, the entity that impliedly furnishes the particular warranty involved is responsible for the damages suffered by the other party to the contract if the warranted promise is not fulfilled.
Misrepresentation
In a sense, misrepresentation can be considered a breach of an implied warranty that representations in the contract documents are accurate. If actual conditions turn out to be materially different from those indicated in the contract documents, a misrepresentation breach of the contract has occurred, entitling the nonbreaching party to damages consisting of the costs and delays resulting from reliance on the representation.
The representation in the contract documents need not necessarily be explicit. It may be implied or suggested by the information that is provided. A Missouri contractor on a highway grading project recovered damages from the Missouri Highway Commission after discovering that the cuts and fills on the project were not balanced, even though the contract documents did not explicitly state that they would be balanced. The Missouri Court of Appeals held that other information in the contract had the effect of representing to the contractor that the cuts and fills were balanced.[9]
Misrepresentation can be either intentional or nonintentional. Nonintentional misrepresentation is more common. When misrepresentation can be shown to have been intentional, it is also a tort. Tortious misrepresentation subjects the wrongdoer to punitive damages in addition to the actual damages resulting from the misrepresentation.
Three essential elements must be proven to establish misrepresentation. First, there must have been a positive representation, either expressed or implied by other expressed representations in the contract documents. Second, the representation must subsequently be found to be either untrue or incorrect. Finally, the nonbreaching party must have both relied on the representation and suffered damage as a result of that reliance.
Nondisclosure of Superior Knowledge
Another important breach of a contract implied warranty is the nondisclosure of superior knowledge. This concerns a situation in which some material condition or circumstance emerges during the course of contract performance that makes performance more difficult and costly, about which the contract documents are totally silent. If it can be proven that the owner or, in the case of a subcontract, the prime contractor, was aware of the condition or circumstance and either deliberately concealed or failed to disclose it, such nondisclosure constitutes a breach of the contract. The nonbreaching party is then entitled to damages amounting to the extra cost in dealing with the nondisclosed condition or circumstance as well as the cost of any resulting delays.
In a sense, nondisclosure of superior knowledge is a form of negative misrepresentation. Both parties to the contract are commonly understood to warrant that they have made available to the other party all information or data that they possess that might affect the other party’s performance of the contract work.
The doctrine of nondisclosure of superior knowledge has evolved from a long line of court cases. In the leading case, an industrial manufacturer had a contract to manufacture a product for the federal government. The government failed to disclose the fact that it was necessary to grind a new disinfectant prior to blending it in with the other ingredients. The government had sponsored research on the development of the product and was aware that the grinding process would be required to meet the product’s specifications. The U.S. Court of Claims (now the United States Court of Federal Claims) determined that the government breached the contract by not disclosing its superior knowledge. The court said:
Where the “balance of knowledge” favors the government, it must disclose its knowledge, lest by silence it “betray a contractor into a ruinous course of action.”[10]
In a later classic construction case, the U.S. Navy contracted with a joint-venture contractor to construct very tall radio towers on the northwest coast of Australia. The contractor’s performance was adversely affected by a destructive pattern of high winds and dangerous offshore currents. The court found that the Navy knew about the winds and the currents but did not disclose this superior knowledge to prospective bidders by including the known data in the bid documents or otherwise making this superior knowledge known. The court, in finding for the contractor and awarding the resulting extra costs, stated that under these circumstances “the government cannot remain silent with impunity.”[11]
This principle can apply to any contractual relationship in which one party possesses superior knowledge affecting the other party’s burden of performance and does not disclose it prior to entering into the contract. Also, the duty to disclose superior knowledge continues after award of the contract throughout contract performance.
Improper Termination of Contract
Another breach situation that occasionally arises is improper termination of the contract. This breach is really a form of interference with contractual performance, discussed earlier in this chapter. If an owner or, in the case of a subcontract, a contractor, improperly terminates the contract, the contractor or subcontractor has been prevented from performing.
Most construction contracts contain express provisions for termination for default (see Chapter 5 on “red flag” clauses). However, when these provisions are improperly or unjustly invoked, the party invoking them has committed a material breach of contract. In other words, the party terminating the contract must be certain that the other party actually was in default.
If the terminating party is not correct and a court later finds the termination improper, the act of terminating the contract may itself be declared a material breach of the contract, entitling the terminated party to all damages flowing from the improper termination. These can include damage to a contractor’s reputation, loss of bonding capacity, and in some cases the bankruptcy of the company. The monetary damages are usually very substantial.
For example, the writer was involved as an expert witness in a case where a private contract had been signed for the renovation of an existing structure to convert it to a large central office facility. The owner terminated the design-build contractor at approximately the 95% completion point, alleging that the contractor was behind schedule and was producing shoddy work. The owner then entered into a contract with another contractor for the completion of the original work plus a number of changes and additions. Substantial monies were due the original contractor at the time of termination for a number of months of contract work that had been performed, consisting primarily of work performed by a large number of subcontractors, all of whom remained unpaid. A board of arbitrators first ruled that all of the subcontractors were entitled to be paid in full and directed the design-build contractor to immediately pay them. The board then ruled that the design-build contractor be paid by the owner for all payments made to subcontractors plus their own costs and a reasonable profit thereon. Additionally, the arbitrators were so offended by the circumstances of the termination, which they determined to be totally unjustified, that they took the unusual step of ordering that the owner pay all costs of the arbitration proceedings.
Owners and contractors administering subcontracts are often under the mistaken impression that they can properly terminate a construction contract (or subcontract) for default because the contractor or subcontractor fails to promptly correct minor defects, called “punch list” items, after the contract work is substantially complete. Uncompleted punch list items do not constitute a breach. The contract cannot be properly terminated on this account.
For instance, the Corps of Engineers Board of Contract Appeals determined that a road construction contractor who had achieved substantial completion, but who had not performed punch list items, was improperly terminated for default by the government. They concluded that once it had the use of the project for its intended purpose, the government must pay for contract performance. The board said:
Failure to correct minor deficiencies in a substantially completed contract is not a default; it is a constructive, deductive change… to declare a contract in default under such circumstances would work a forfeiture—a result the law abhors.[12]
In this situation, the contractor or subcontractor is entitled to be paid the balance of the contract price, less the actual cost to remedy any punch list items that the owner actually remedies, or engages others to remedy, less any diminished value to the completed project for punch list items that either are not remedied by choice or that are impossible to remedy.
Conclusion
Any discussion of possible breach of contract situations can be virtually endless, particularly if one attempts to elaborate on breaches of express contract obligations. This chapter is merely a brief look at this important subject. The next chapter examines some important provisions and ramifications of contract changes clauses.
Questions and Problems
1. What is a contract breach? What two elements (in addition to privity of contract) are necessary to prove a contract breach? Do breaches involve acts of commission, acts of omission, or both?
2. Are all breaches equally material? Why is the degree of materiality important? What should the nonbreaching party do when the contract has been breached? Why are some contractors reluctant to put the owner on notice that the contract has been breached? Why should a contractor’s actions in a breach situation not be governed by this concern?
3. What two means might a court employ to determine the materiality of a breach?
4. What is the significance of a disclaimer with respect to contract breaches? What two conditions must exist for a disclaimer to be given full force and effect?
5. What is an anticipatory breach? How may a party be damaged by a threat of something that is not carried out? How can the nonbreaching party judge whether the threat posed in an anticipatory breach is likely to be carried out?
6. Must the obligation element of a contract breach be expressed, implied, or can it be either? What type of contract breach tends to recur in similar ways more frequently?
7. What is the single breach of an express contract obligation discussed in this chapter? Can this breach be a material breach excusing continued performance by the nonbreaching party?
8. Explain the breach of interference. What implied warranty is involved? What single example of a contractor-committed interference breach and what three examples of owner-committed interference breaches were discussed in this chapter?
9. What is misrepresentation? What implied warranty is involved? What three elements are necessary to prove misrepresentation?
10. How did the Spearin Doctrine originate? What is the implied warranty involved? Describe five implied warranties that are similar to the Spearin Doctrine.
11. What is the doctrine of nondisclosure of superior knowledge? Give the names and the details of the cases mentioned in this chapter that illustrate the principle of this doctrine. What is the central implied warranty?
12. Why is improper contract termination a form of interference? Is this breach a material breach? If the terminating party is wrong, what is a court of law likely to decide?
13. Do uncompleted punch list items constitute a material breach that justifies termination of the contract? How is final payment to the contractor reckoned when all of the contract work is complete except for punch list items that the contractor either cannot or will not remedy?
14. Indicate whether each of the following occurrences during the performance of a federal construction contract is (a) a breach of an express condition of the contract, (b) a breach of an implied warranty of the contract, or (c) not a breach but an occurrence contemplated by the contract and dealt with by one or more of the standard clauses of the contract. Refer to Chapter 5 on standard (“red flag”) clauses.
1. Contractor encounters a differing site condition.
2. Government refuses to grant a timely, fully justified, and documented request for an extension of time.
3. Government suspends work on a portion of the project.
4. Government does not disclose to bidders or to the successful contractor important information affecting the contractor’s cost of performance.
5. Government fails to pay properly submitted monthly progress payment requests in a timely manner.
6. Government orders acceleration.
7. Government terminates contract without stating any reason.
8. A government-specified construction method proves completely unsatisfactory when the contractor follows it.
9. Government changes the specified manner in which the work is to be performed.
10. The plans contain a number of serious errors.
1. Glen Gilbert Construction Co., Inc. v. Garbish, 432 N.E.2d 455 (Ind. App. 1982).
2. RAD-Razorback Limited Partnership v. B. G. Coney Co., 713 S.W.2d 462 (Ark. 1986).
3. Guerini Stone Co. v. Carlin Constr. Co., 248 U.S. 334 (1919).
4. Havens v. Safeway Stores, 678 P.2d 625 (Kan. 1984).
5. Amp-Rite Electric Co., Inc. v. Wheaton Sanitary District, 580 N.E.2d 622 (Ill. App. 1991).
6. Newberry Square Development Corp. v. Southern Landmark, Inc., 578 So. 2d 750 (Fla. App. 1991).
7. Blinderman Construction Co., Inc. v. United States, 695 F.2d 552 (Fed. Cir. 1982).
8. United States v. Spearin, 248 U.S. 132, 39 S. Ct. 59, 63, L. Ed. 166 (1918).
9. Idecker, Inc. v. Missouri State Highway Commission, 654 S.W.2d 617 (Mo. App. 1983).
10. Helene Curtis Industries, Inc. v. United States, 312 F.2d 774 (Ct. Cl. 1963).
11. Hardeman-Monier-Hutcheson v. United States, 458 F.2d 1364, 198 Ct. Cl. 472 (1972).
12. Appeal of Wolfe Construction Co., Eng. BCA No. 3610 (June 29, 1984). | textbooks/biz/Business/Advanced_Business/Construction_Contracting_-_Business_and_Legal_Principles/1.13%3A_Breach_of_Contract.txt |
Key Words and Concepts
• Three questions central to the changes concept
• Federal contract changes clause
• Respects in which contract cannot be unilaterally changed
• General scope of the contract
• Change order, change directive, change notice
• Formal change to the contract
• Equitable price adjustment
• No pay without signed change order
• Two-part change to the contract
• Oral change orders
• Constructive changes
• Change element
• Order element
• Constructive change notice requirement
• Cardinal changes
• Forward-priced changes
• Retrospectively priced changes
• Force account
• Extended contract performance situations
• Breach damages not limited by changes clause
• Impact costs / Time-related impacts / Loss-of-efficiency impacts
• Change order payment disputes
• Current judicial attitude to payment disputes
• Conditions likely to result in payment for changes
• Proper contractor reaction to oral or written directives
Think back for a moment to some trip that you had planned. Did the undertaking unfold exactly as envisioned? Probably not. Now suppose that you had been compelled to carry out the plan exactly as originally conceived, regardless of the circumstances encountered, the additional expense, and the impracticality of adhering to the original plan. Such a situation would obviously be far from desirable, especially if a flight were canceled or a bridge washed out!
This analogy can be applied to construction contracting, in which change is virtually inevitable. Even small, simple projects normally involve necessary or at least desirable changes, whereas large, complex projects sometimes involve thousands of changes. Without an agreed-upon, orderly procedure for making desired or necessary changes to the contract, a construction owner would be placed in a situation similar to the one you faced on your trip.
Contract Change Procedure
After accepting the reality that changes are inevitable in construction contracts and procedures for handling such changes are necessary, those drafting construction contracts must consider these central questions:
• Will the owner have the right to unilaterally make changes to the work?
• Will the contractor be compelled to carry out changes made by the owner?
• After performing changed work directed by the owner, will the contractor be entitled to payment for the additional costs incurred?
From a contractual point of view, answers to these questions cannot be implied. They must be clearly stated in the written contract. A well-drafted changes clause explicitly answers each question in the affirmative and provides detailed language defining the entire contract change procedure.
Federal Contract Changes Clause
A changes clause is not an exculpatory clause excusing the owner from liability for changes. Rather, the clause provides for a structured way for the owner to direct changes and for the contractor to perform them and be properly compensated. The federal contract changes clause states:
(a) The Contracting Officer may, at any time, without notice to the sureties, by written order designated or indicated to be a change order, make changes in the work within the general scope of the contract, including changes
(1) In the specifications (including drawings and designs);
(2) In the method or manner of performance of the work;
(3) In the Government-furnished facilities, equipment, materials, services, or site; or
(4) Directing acceleration in the performance of the work.
(b) Any other written order or an oral order (which, as used in this paragraph (b), includes direction, instruction, interpretation, or determination) from the Contracting Officer, that causes a change shall be treated as a change order under this clause, provided, that the Contractor gives the Contracting Officer written notice stating (1) the date, circumstances, and source of the order and (2) that the Contractor regards the order as a change order.
(c) Except as provided in this clause, no order, statement, or conduct of the Contracting Officer shall be treated as a change under this clause or entitle the Contractor to an equitable adjustment.
(d) If any change under this clause causes an increase or decrease in the Contractor’s cost of, or the time required for, the performance of any part of the work under this contract, whether or not changed by any order, the Contracting Officer shall make an equitable adjustment and modify the contract in writing. However, except for an adjustment based on defective specifications, no adjustment for any change under paragraph (b) of this clause shall be made for any costs incurred more than 20 days before the Contractor gives written notice as required. In the case of defective specifications for which the Government is responsible, the equitable adjustment shall include any increased cost reasonably incurred by the Contractor in attempting to comply with the defective specifications.
(e) The Contractor must assert its right to an adjustment under this clause within 30 days after (1) receipt of the written change order under paragraph (a) of this clause or (2) the furnishing of a written notice under paragraph (b) of this clause, by submitting to the Contracting Officer a written statement describing the general nature and amount of the proposal, unless this period is extended by the Government. The statement of proposal for adjustment may be included in the notice under paragraph (b) above.
6. No proposal by the Contractor for an equitable adjustment shall be allowed if asserted after final payment under this contract.[1]
The words change order as used in the federal clause mean a directive from the contracting officer (the owner) or designated representative to make a change in the work within the general scope of the contract, including
• Changing the details of original work
• Adding new work
• Deleting original work
• Changing the method or manner of performance of the original work, which could mean changing the times in the day, or days in the week, month, or year, during which work may be performed
• Shortening the time period allowed for completion of the work (acceleration)
• Slowing the rate at which the work may be performed
• Changing the commitments of the government with respect to materials, facilities, equipment, services, or the site conditions to be furnished to the contractor
Although this list of potential changes is very broad, federal contracts cannot be changed in two respects. The government cannot unilaterally change any of the general conditions (“General Provisions” in the federal contract) and cannot unilaterally make changes that are beyond the general scope of the contract.
For instance, clauses such as the differing site conditions clause, the suspension of work clause, or the changes clause itself cannot be unilaterally changed or deleted.
The general scope of the contract means the size, type of construction work, and the intended purpose of the work to be contracted for, as contemplated by the government and the contractor when the contract was signed. Changes outside this intended general scope are not permitted. For instance, adding the construction of a boiler house for a steam heating system to an original contract for the grading and paving of a parking lot would be a change beyond the scope of the contract, whereas simply changing the configuration of the parking lot on the same site would not be. Adding quantities of paving would not normally be a change beyond the scope of the contract, nor would changing design details for the paving. However, if such changes were made in quantities that doubled or tripled the contract price, the general scope of the contract would probably be judged to have been exceeded.
Specifics in Changes Clauses
The federal contract changes clause is broadly regarded as the model clause in the industry. It is comprehensive, fair, and has stood the test of time. Although the federal clause has been widely copied, clauses in other contracts vary. Following are some important points to note when examining an unfamiliar changes clause.
Distinctions Between Contract Change Terms
There is an important distinction between the term change order, as previously discussed, and formal change to the contract, a term that does not even appear in the federal clause. Change order means a directive from the owner or designated representative to the contractor to make some change (see the preceding examples). Formal change to the contract means the written modification to the contract that describes the change and states the increase (or, in the case of a deletion, decrease) in total contract price and total time for contract performance. Formal changes to the contract are written legal documents executed by the owner and contractor at some time after the change order has been issued. Other terms often used in lieu of change order are change directive and change notice.
Who Is Empowered to Make Changes?
The changes clause addresses the issue of who can make changes in different ways, depending on the contract. The specific language in the contract must be carefully read to obtain a definitive answer. Note that the federal clause mentions only the “contracting officer,” a person defined in the federal acquisition regulations. After the contract has been awarded, the contractor is always formally advised of the name of the contracting officer. In practice, the contracting officer frequently designates others as authorized representatives to issue change orders to the contractor.
Contractors who perform changes ordered by persons without authority under the contract run the risk of not being paid for the change. For instance, a contractor providing construction services to the government under a fixed-price contract discovered that payment would not be made for the extra work of attending meetings, performing inspections, and providing other services not included in the contract that had been directed by a government official bearing the title of “project coordinator.” The role and authority of the project coordinator were not defined in the contract. The Veterans Administration Board of Contract Appeals ruled that, although the contractor had performed the work in good faith, the contract provisions requiring authorization of changes by the contracting officer would be strictly enforced. In the Board’s words:
It has long been a tenet of Federal contract law that employees without actual authority cannot bind the government. … It is the duty of the contractor, when ordered by an unauthorized Government employee to perform work obviously beyond the contract requirements, to promptly register a protest with the Contracting Officer.[2]
Similarly, a contractor constructing a building for the Postal Service found that specification relaxations approved by the government inspector were not binding on the government. The inspector believed he had authority to approve “minor changes,” and the contracting officer did not learn what had occurred until after the work had been completed, at which point he refused to ratify the change. The Postal Service Board of Contract Appeals ruled that, although there may have been an honest misunderstanding, the inspector could not alter the contract. The Board said:
Although Mr. Hale agreed to relax the specifications, his agreement was not binding on the government, as he lacked authority to change the specifications. The notice to proceed, after designating the Contracting Officer Representative, gave notice to Appellants that changes were reserved to the Contracting Officer. Mr. Hale’s misrepresentation as to his authority did not create any right in Appellants to avoid the contract’s express terms.[3]
A more difficult question arises when the person ordering the change has “apparent authority.” This apparent authority can be created by previous actions of the owner, such as readily paying for previous changes ordered by that person, which create the impression that the owner intended that person to have such authority. The point is illustrated by a North Carolina case where the issue became the apparent authority of the prime contractor’s field superintendent. A subcontract agreement for excavation and grading work provided that the subcontractor would be paid extra for rock excavated, the quantity to be measured by the general contractor’s engineer. During contract performance, representatives of the subcontractor and the prime contractor’s field superintendent agreed that rock measurements would no longer be required to be made by the prime contractor’s engineer. Rather, other prime contractor on-site personnel could take the measurements. This agreement was confirmed in writing by the subcontractor in a letter to the prime contractor. Subsequently, the prime contractor refused to make payment because their engineer had not performed the measurements for the rock quantity excavated. At that point, the subcontractor abandoned work and sued the prime contractor, alleging breach of contract. The prime contractor countersued for the extra cost of obtaining another subcontractor to complete the work. A trial court ruled in favor of the subcontractor. In affirming the trial court decision, the Court of Appeals of North Carolina said that the “dominant question” was whether the prime contractor’s field superintendent “had authority to modify the contract with subcontractor by dispensing with the requirement that ADC’s engineers measure the rock…. ” Trial evidence indicated that there were other occasions when the prime contractor’s field superintendent had orally ordered additional work to be performed as changes to the contract, all of which were subsequently paid.
Evidence also indicated that substantial quantities of rock were excavated on the project without the prime contractor’s engineers being sent to the site to do the measuring and that the prime contractor knew that the measuring was being done by other site personnel and initially continued to pay the subcontractor’s invoices. For these reasons, the Court of Appeals concluded that the prime contractor’s field superintendent had authority to orally modify the written subcontract.[4]
When the changes clause makes clear where the authority lies, problems such as those just described are more easily avoided.
Who Is Empowered to Make Formal Changes to the Contract?
Executing a formal change to the contract on behalf of the owner is different from ordering the change to be made. Both should be formalized in writing, with a clear, unambiguous description of the change, and the formal change to the contract should also include the agreed-upon change in contract price and contract time extension (if any). A person possessing authority to act for the owner in increasing the contract price and the time allowed for contract performance always possesses the authority to order changes, but the reverse is often not true—that is, the person with authority to order the change may not possess the authority to change the contract price or the time for performance. It is, therefore, helpful if the changes clause makes clear which representative of the owner possesses the authority to perform each separate function. In some jurisdictions, defining respective functions and designating who is empowered to perform them is governed by statute.
How Are Price and Time Adjustments Determined?
Interestingly, this is one area where the changes clause in the federal contract is silent, saying only that there shall be an equitable adjustment to the contract price and time. Changes clauses in other contracts usually state precisely how the change in price will be determined, often specifying several alternate methods. If the owner and contractor do not agree on the price change, the changes clause usually prescribes that payment will be made by the force account method, the details of which are spelled out in the clause. The force account method of payment is more fully discussed later in this chapter.
“No Pay Without Signed Change Order” Language
Most changes clauses contain language intended to strictly limit the contractor’s right to payment to only those changes authorized by the owner prior to the change being undertaken. Usually, the clause will require that the authorization be in writing.
Some especially restrictive clauses require that the actual formal change to the contract, stating the agreed price and time for performance, be executed prior to performance of the change. Such provisions are unworkable today. Even in the most efficient owner’s organization, it takes too long administratively after the change has been ordered to agree on price and time and to prepare an appropriate formal change to the contract. The project would grind to a halt in the meantime. The problem is exacerbated because large jobs commonly involve several hundred or even several thousand changes.
It is a different matter when the changes clause merely says there shall be no payment without a signed change order (also called change notice or change directive) in hand prior to performance of the change. This at least permits the work to be completed while the price and time changes are negotiated and the formal change to the contract prepared, although this procedure requires the contractor to carry the financial burden of performing the change in the interim. To solve this problem, some contracts provide for a two-part change to the contract, in which the contractor is promptly paid demonstrable costs under a Part I change to the contract prior to finalization of the change under a later Part II change to the contract.
Sometimes, change orders are written on a “price not to exceed” basis. Under this arrangement, the contractor is assured payment up to the not-to-exceed limit, although payment normally is not received until a formal change to the contract has been executed by both parties.
In practice, contractors are given many oral change orders (the federal contract mentions the words “oral order”). After the contractor has performed the changed work, owners sometimes refuse to make payment, claiming that the work was not authorized by a written change order. Sometimes the refusal to pay is based on a claim that the person who issued the oral order was not authorized to do so or that person will deny that he or she issued the order. The attitude of our courts toward payment disputes of this type is discussed later in this chapter.
Constructive Changes
A constructive change is a change that is not acknowledged by the owner as such when it occurs, but which nonetheless is a change. In this situation, the owner takes the position that whatever the contractor is directed to do or is prevented from doing is not a change, but rather is required or prohibited by the original contract, as the case may be. In these situations, the contractor is required to proceed according to the owner’s instructions but is free to assert and later attempt to prove that the owner’s instructions constituted a change order. If the contractor is correct, courts will deem that a constructive change has occurred, and the contractor will be awarded the costs incurred plus a reasonable profit thereon.
Two elements must be proved to establish a constructive change. First, the change element must be proved. Proof hinges on the facts of each particular case. The court must be convinced that a true change occurred in the work or requirements of the contract. Second, the order element must be proved. This is established entirely by the owner’s acts or words, whether written or oral. It is not sufficient that the owner made a “suggestion” that something be done or an “observation” that something might be a “good idea.” There must have been an actual order or directive to the contractor or a course of conduct by the owner that had the practical effect of such an order or directive.
The following cases illustrate court determination of three different types of constructive change. In the first case, a contractor constructing an underground parking garage for the federal government recovered extra costs when earlier permission to alter excavated slopes was rescinded by the government after the majority of the slopes had been excavated. The project specifications required slopes for the exterior berms to be excavated at a one-foot vertical to two-foot horizontal slope and also required an excavation bracing system. During performance, the contractor requested and received permission to cut the slopes at the steeper ratio of 1 to 1.5. After most of the slopes had been cut and a new bracing system designed based on the steeper slopes, the government rescinded its earlier approval of the steeper slopes. The contractor asserted a constructive change for the extra costs of reverting to the original system. The government argued that it had the right to rescind earlier permission and to insist on compliance with the excavation slopes specified in the contract. The U.S. Claims Court (now the United States Court of Federal Claims) agreed that the government could rescind its earlier approval but ruled that the contractor was entitled to an equitable adjustment for the extra costs caused by the rescission. The equitable adjustment awarded the contractor not only the additional construction costs of revising the slopes back to one-foot vertical on two-foot horizontal but also included the costs incurred for redesigning the bracing system to accommodate the originally specified slopes.[5]
In the second case, the Armed Services Board of Contract Appeals ruled that a government contracting officer’s refusal to allow the contractor to use its intended method of performance was a constructive change. The contract work involved installing a telephone switching system at an Army ocean terminal. New cable was to be installed along three miles of wharves, but the contract documents did not indicate where or how the cable was to be attached to the wharves. The contractor had planned to strap the cable to the guardrails, but the contracting officer required that the cables be installed underneath the wharves, which required drilling 15,000 bolt holes through reinforced concrete. When the contractor appealed the contracting officer’s denial of the contractor’s claim for a constructive change, the board ruled for the contractor, holding that:
By disapproving appellant’s proposed method, the Government required appellant to employ a more expensive and time-consuming method of installing the cable on the wharves and thereby constructively changed the terms of the contract. The appellant is entitled to additional compensation and performance time for that constructive change.[6]
In the third case, the federal government had awarded a contract for construction of a new auto repair shop at a naval air station. The contract specified that the new shop was to be built during the first phase of the project, during which time the existing shop on the site was to remain in operation. The second phase of the contract consisted of demolishing the original shop. The contractor intended to grade and pave the entire site around the existing shop during the first phase and indicated this intention on its critical path method (CPM) schedule, which the government approved. When the government refused to give the contractor access to the entire site to perform the intended grading, the contractor asserted this refusal was a constructive change. The Armed Services Board of Contract Appeals concluded that the contractor’s interpretation of the contract was reasonable, in that, although the contract required the existing shop to remain operational, it placed no restrictions on access to the site. Further, the board said that the government’s approval of the contractor’s CPM schedule was evidence of the reasonableness of the contractor’s expectation. Since the contractor had incurred considerable additional costs and delay due to performing the site work in two separate phases, the board ruled that the contractor was entitled to an equitable adjustment for a constructive change to the contract.[7]
Constructive Change Notice Requirements
A contractor who believes a constructive change has occurred must give prompt written notice of the constructive change to the owner. Notice is crucial to preserve the contractor’s rights of recovery for the additional costs and extra contract time associated with the change. Without such notice, it can later be argued (rightly or wrongly) that the owner was unaware that the contractor regarded the owner’s instructions to constitute a change to the contract for which the contractor expected payment.
It should be noted that the federal contract changes clause refers directly to the constructive change situation and to the importance of notice in the second full paragraph of the clause.
Cardinal Changes
A cardinal change is a change to the contract that, because of its size or the nature of the changed work, is clearly beyond the general scope of the contract. It is beyond the reasonable contemplation of the owner and contractor at the time of contract formation. Additive cardinal changes are illegal on public contracts, even if both owner and contractor agree to the change, because such a large addition of work violates public bidding statutes guaranteeing free and open competition. On private work, such a change is not illegal and not improper if both owner and contractor agree to the change. However, even in private work, a cardinal change cannot be forced upon the contractor.
These principles are illustrated by a recent decision of the United States Court of Federal Claims. The Department of Energy had awarded a performance specification contract for the construction of a fabric filter particle collection system (that is, a “baghouse”) of an open-end design to accommodate potentially explosive conditions. Among other performance specifications, one specification called for an inlet gas operating range of 0.6 to 1.6 psi. The contractor interpreted this specification to mean that gas entering the baghouse would exert a pressure of 0.6 to 1.6 psi at an imaginary line separating the inlet pipe from the baghouse, whereas the government insisted that a constant internal operating pressure must be maintained throughout the baghouse within that range. During construction, the contracting officer demanded written assurances from the contractor that the baghouse would maintain a constant internal operating pressure in the range of 0.6 to 1.6 psi. The contractor refused on the grounds that, given the government’s open-end design, it was impossible to comply with the contracting officer’s demand. The government terminated the contractor for default.
The court found that the government’s insistence on a constant internal operating pressure of 0.6 to 1.6 psi was a constructive change to the contract because the specifications did not stipulate any particular internal operating pressure. Further, the court ruled that the government’s directive constituted a cardinal change to the contract. In converting the default termination to a termination for the convenience of the government, the court said:
If the requirements that the government imposed on Airprep, in this case, were in the general scope of the contract, then Airprep was obligated to perform, even if the government misinterpreted the contract. A contractor has no right to stop work if the project to be constructed is fundamentally the same as the one contracted to build. A contractor is not, however, obligated to undertake “cardinal changes”—drastic modifications beyond the scope of the contract work… changes that alter the nature of the thing to be constructed.[8]
No universal standards exist to determine precisely how large or how unusual the change must be to constitute a cardinal change. In some cases, courts have ruled that an extraordinary number of changes, each of which was not excessive in itself, amounted to a cardinal change. In one such case, a private owner awarded a guaranteed maximum price (GMP) contract for the modernization of a paper mill. Once contract performance started, the owner issued a steady stream of drawing revisions, in most instances ignoring the contract requirement for written change orders. More than 16,000 manhours of redesign effort were expended by the owner, resulting in such an excessive number of changes that the California Court of Appeals determined that the degree of change was far beyond the contemplation of the parties at the time the contract was entered into and that by issuing excessive revisions and radically altering the scope of the work, the owner had abandoned the original contract. In the court’s words:
When an owner imposes upon the contractor an excessive number of changes such that it can fairly be said that the scope of the work under the original contract has been altered, an abandonment of contract properly may be found. In these cases, the contractor, with the full approval and expectation of the owner, may complete the project. Although the contract may be abandoned, the work is not.
Since the work performed benefitted the owner and was performed with their approval, the contractor was entitled to recover its total direct costs plus a reasonable overhead and profit.[9]
Fortunately, relatively few contracts result in such massive change.
As a general operating principle, a contractor should refuse to perform a change believed to be a cardinal change, except under the threat of being placed in default of the contract by the owner. Even then, the contractor should proceed only after notifying the owner in writing that the directive to perform the work is a cardinal change and that performance is being compelled under protest. This principle holds on both public and private contracts, unless the contractor on private work is willing to perform the cardinal change.
An architect/engineer or construction manager who compels a contractor to perform a cardinal change on either public or private work has committed a tortious act and may be sued in tort (see Chapter 1) even though privity of contract does not exist. This would be in addition to whatever contractual remedies that the contractor has with respect to the owner.
Price and Time Adjustments for Contract Changes
Forward Pricing
If the contractor and owner agree on the price and time requirement for the changed or additional work before starting performance of the change, the change is said to be forward priced. Under fixed-price contracts, the contractor assumes the full financial risk of performance in the same manner as for the original contract when changes are forward priced. For this reason, the adjustment to contract price and time should include, in addition to a reasonable profit, an allowance to cover the risk that the contractor is assuming. Depending on the nature of the change, the price and time adjustment to the contract will be greater than if the owner were assuming the risk.
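As a minimal numeric sketch of this principle, the forward price of a change can be thought of as the estimated cost marked up for both profit and the risk the contractor is assuming. All percentages and figures below are illustrative assumptions, not values prescribed by any contract:

```python
# Hypothetical forward pricing of a change. Under a fixed-price
# contract the contractor assumes the performance risk, so the
# negotiated price adds a risk allowance on top of estimated cost
# and profit. All percentages below are illustrative assumptions.

def forward_price(estimated_cost, profit_pct=0.10, risk_pct=0.08):
    """Return the quoted price: cost plus profit plus risk allowance."""
    return estimated_cost * (1 + profit_pct + risk_pct)

# A $50,000 estimated change priced with 10% profit and an 8% risk allowance
print(f"${forward_price(50_000.0):,.2f}")
```

If the owner instead retained the risk (for example, by pricing the change retrospectively from records), the risk allowance would drop out and the quoted figure would be correspondingly lower.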
Once agreement has been reached on a forward-priced change order, the payment terms may not be altered. This principle is illustrated by a Corps of Engineers Board of Contract Appeals decision on a mass transit contract. The contractor had negotiated a forward-priced lump sum change order with the resident engineer, whom the contract documents had designated as the authorized representative of the contracting officer. Following the contractor’s completion of the work covered by the change order, the contracting officer would not agree to the negotiated price, demanding instead that the contractor provide proof of the actual costs incurred. The board ruled that since the resident engineer, acting within the scope of his contractual authority, had negotiated the forward-priced change with the contractor and the contractor had performed the change in good faith, the contracting officer could not require an after-the-fact accounting of actual costs.[10]
Retrospective Pricing
When price and time adjustments to the contract are not determined until after the changed or additional work has been completed, the change has been retrospectively priced. In this situation, the basis of the price and time adjustment normally will be job records maintained by the contractor or owner, or both. If the contractor and owner cannot agree on the proper price and time adjustments, the dispute must be resolved under the dispute resolution provisions of the contract. In any event, the price and time adjustment is determined retrospectively, either by the contractor and owner, or by others.
Force Account
Force account is a particular form of retrospective pricing in which the contract spells out a specific procedure for arriving at the price adjustment when the contractor and the owner fail to agree on the price by forward pricing. Force account is also widely used to determine price adjustments for miscellaneous minor added work.
When force account is used, daily records are kept of labor, material, and equipment usage expended on the changed work by the general contractor and all subcontractors involved. The records are agreed upon daily and signed by representatives of both owner and contractor. When the work has been completed, the records are used as the basis for computing the direct costs associated with the change. The force account provisions then state fixed percentages of labor, materials, equipment operation, and subcontract costs that are allowed for overhead and profit markups, regardless of what the contractor’s actual overhead costs may be.
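The computation described above can be sketched as follows. The daily records, dollar amounts, and markup percentages here are all hypothetical; in practice the markups are the fixed percentages stated in the contract's force account provisions:

```python
# Hypothetical force account computation: direct costs are totaled
# from the daily signed records, then the contract's fixed
# overhead-and-profit markups are applied per cost category,
# regardless of the contractor's actual overhead.
# All figures below are illustrative assumptions.

daily_records = [
    # (labor, materials, equipment, subcontract) costs for each day
    (4200.0, 1800.0, 950.0, 0.0),
    (3900.0, 2500.0, 950.0, 1200.0),
]

# Contract-specified markup percentages (hypothetical values)
MARKUP = {"labor": 0.20, "materials": 0.15, "equipment": 0.10, "subcontract": 0.05}

def force_account_total(records, markup):
    """Total payment: direct costs per category plus fixed markups."""
    totals = {"labor": 0.0, "materials": 0.0, "equipment": 0.0, "subcontract": 0.0}
    for labor, materials, equipment, subcontract in records:
        totals["labor"] += labor
        totals["materials"] += materials
        totals["equipment"] += equipment
        totals["subcontract"] += subcontract
    return sum(cost * (1 + markup[cat]) for cat, cost in totals.items())

print(f"${force_account_total(daily_records, MARKUP):,.2f}")
```

Because the markup percentages are fixed in advance, the only quantities negotiated after the fact are the direct costs themselves, which is why the daily signing of the records by both parties matters so much.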
Application of Force Account Provisions to Extended Performance Situations
If the force account markup percentages are too low compared to the contractor’s actual overhead costs, the contractor will receive less than an equitable cost adjustment. This is particularly true when the contract performance time has been extended because of changes directed by the owner or, as frequently occurs, because the contractor encounters differing site conditions (see Chapter 15). In these situations, the contract time and price adjustments are more equitably determined by using the force account records as the best evidence both of the change in contract performance time and of the direct cost portion of the contract price change. The indirect cost portion of the contract price change is then determined on the basis of the contractor’s actual indirect costs, which take the extension of contract performance time into consideration. In addition, the contractor is allowed a reasonable profit.
Use of Force Account Records in Determining Breach of Contract Damages
The provisions of the changes clause, including the force account provisions, are contractually prescribed procedures that parties to a contract should follow for matters falling within the purview of the contract. However, when the contract has been breached, the proper determination of the monetary value of the breach damages is not limited by the changes clause in the contract. For instance, if the owner has breached the contract, the contractor is entitled to be paid all costs resulting from the breach, both direct and indirect, plus a reasonable profit. If force account records have been kept, they are the best possible evidence of the contractor’s direct costs. However, the contractor’s actual indirect costs should be paid in lieu of the force account markup percentages, and a reasonable profit on both direct and indirect costs should be added to make the contractor whole. This is true whether the contract performance time has been extended by the breach or not.
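The difference between the two measures of recovery can be made concrete with a small comparison. Every figure below is an assumption chosen only to illustrate why a contractor whose actual indirect costs exceed the fixed markup would press for breach damages rather than the force account formula:

```python
# Hypothetical comparison: force account markup recovery versus
# breach damages measured by actual costs. All figures are assumed.

direct_costs = 100_000.0     # from the force account daily records
markup_pct = 0.15            # contract's fixed overhead-and-profit markup
actual_indirect = 22_000.0   # contractor's demonstrated indirect costs
profit_pct = 0.10            # reasonable profit on direct + indirect costs

# Recovery under the contract's force account formula
force_account_recovery = direct_costs * (1 + markup_pct)

# Breach damages: actual direct and indirect costs plus reasonable profit
breach_damages = (direct_costs + actual_indirect) * (1 + profit_pct)

print(f"Force account: ${force_account_recovery:,.2f}")
print(f"Breach damages: ${breach_damages:,.2f}")
```

In this sketch the fixed markup understates the contractor's actual indirect costs plus profit, so the breach measure makes the contractor whole where the force account formula would not.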
Impact Costs
Costs flowing from the change in addition to the proximate costs (meaning the direct labor, materials, and so on, actually incurred at the time of performing the change) are impact costs. These consist of (1) the time-related costs that flow from the change and (2) the effect that the change may have on the efficiency of performance of the original unchanged work.
The time-related costs usually consist of extended job overhead and extended home office overhead costs because the project took longer to complete as a result of the change. They also frequently include labor and material escalation costs and the higher cost of performing work in inclement weather. All of these kinds of costs are associated with the original work on the project being performed later than it would have been if the change had not occurred.
For instance, suppose the project was scheduled to be completed in a northern city by mid-September and was proceeding on schedule when a large quantity of extra work was directed to be performed in April of the project’s final year. The extra work extended the completion of all following original work four months past September into a severe winter. Also, ready-mix concrete prices and craft labor rates both increased on October 1. Clearly, additional costs would be incurred for winter protection, concrete cost increases, and increases in craft wages.
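A rough tabulation of the time-related impact costs in the example above might look like the following. Every rate and quantity in this sketch is a hypothetical figure assumed for illustration; the text does not supply numbers.

```python
# Hypothetical time-related impact costs for a four-month winter extension
# (all rates and quantities below are assumed for illustration).
months_extended = 4
job_overhead_per_month = 45_000.0       # extended field office, supervision, utilities
home_office_per_month = 12_000.0        # allocated extended home office overhead

remaining_concrete_cy = 2_000           # concrete placed after the Oct. 1 price increase
concrete_escalation_per_cy = 6.50       # $/cubic yard ready-mix price increase

remaining_labor_hours = 15_000          # craft hours worked after the wage increase
wage_escalation_per_hour = 1.75         # $/hour craft wage increase

winter_protection = 60_000.0            # heating, enclosures, and snow removal, lump sum

impact = (months_extended * (job_overhead_per_month + home_office_per_month)
          + remaining_concrete_cy * concrete_escalation_per_cy
          + remaining_labor_hours * wage_escalation_per_hour
          + winter_protection)
print(f"Total time-related impact: ${impact:,.0f}")
```

Each line of the tabulation corresponds to one of the time-related cost categories named in the text: extended job and home office overhead, material and labor escalation, and the cost of winter work.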
Loss-of-efficiency costs in the performance of the unchanged work are additional costs incurred to complete part or all of the original unchanged work on the project due to the disruptive effect of the changed work. This is particularly important when a large number of changes must be dealt with on a continuing basis. Numerous studies indicate such situations can have a devastating effect on construction costs for a number of reasons, such as crowding of the trades, frequent moving of crews with associated starts and stops, frequent requirements for overtime, the necessity of going through a learning curve more times than would otherwise be necessary, and the general effect on morale due to continual changes and delays.
The National Electrical Contractors Association has conducted studies to assist in quantifying the loss of efficiency due to these effects. Also, the Business Roundtable has published data illustrating the loss of labor efficiency when excessive overtime is worked on an extended basis.
When the forward-priced method of pricing changes is used, impacts can be included on an estimated basis along with the proximate costs. Otherwise, they and the proximate costs will be determined retrospectively.
Change Order Payment Disputes
Change order payment disputes frequently arise between owners and contractors and between contractors and subcontractors. At least three separate root causes are responsible:
• The owner or, in the case of a subcontract, the contractor, claims that the work was not authorized in advance by a signed change order.
• The person alleged to have directed the work denies directing it, or the owner or, in the case of a subcontract, the contractor, claims that person did not have authority to order the work performed.
• The contractor or subcontractor alleges that the direction received from the owner or contractor respectively constitutes a constructive change to the contract.
Judicial Attitude to Payment Disputes
In some cases, changed work has been performed by the contractor in good faith, and courts have denied payment on the grounds of the absence of a signed change order or other proper advance authorization. However, the current judicial attitude is to apply equitable principles to avoid unjust enrichment of the owner in the owner–contractor relationship or of the contractor in the contractor–subcontractor relationship. Courts are heavily influenced by the contemporaneous words, acts, and conduct of the parties and by their past patterns of behavior. Contemporaneous words, acts, and conduct refer to how the parties behaved when changes in the contract work were actually performed. Past patterns of behavior means the way in which the parties handled similar changes earlier in the contract. For instance, suppose an owner had consistently paid the contractor for changed or additional work orally directed by the resident engineer throughout contract performance and then refused to pay for a particularly large later change that the contractor performed on the resident engineer’s oral direction. Even if the changes clause said there would be no pay for work performed without a signed change order, a court today would probably hold that the contractual provision had been waived by the owner’s earlier behavior.
The following cases illustrate situations in which the contractor was not paid for performing extra work because the work was performed in the absence of a signed change order. In a 1984 Ohio case, an excavation contractor on a bridge improvement project encountered Brea sandstone in an excavation represented in the contract to contain no rock. Brea sandstone is extremely hard. The county engineer acknowledged that the material excavated was not as represented in the contract and directed the contractor to blast and remove the rock and to keep track of costs for payment purposes. The county engineer witnessed the performance of the work. However, the county commission refused to pay for all but a small portion of the extra work on the grounds that Ohio law requires extra work be authorized in writing and approved by the county commission. When the contractor sued for the balance, a trial court ordered payment but was reversed by the Court of Appeals of Ohio.[11]
Similarly, a contractor in Florida on an airport project failed to secure payment for extra work directed by the owner. The contract contained a changes clause requiring advance written authorization for any extra work. During construction, the owner revised the plans for the underground drainage system, resulting in considerable extra work. No change order was ever issued. When the contractor presented an itemized claim for the extra work after completion of construction, the owner refused to pay. Despite the fact that the owner had ordered the extra work and it was satisfactorily performed, the District Court of Appeals of Florida supported the owner’s position. The contractor was not paid for the work.[12]
However, the following cases illustrate the current trend in judicial attitude. In a 1993 Arkansas case involving the construction of a residence, the topographical information furnished by the owner proved to be inaccurate, which required a large number of changes to be made by the construction contractor due to inaccurate ground elevations. The owner orally directed the changes and paid progress payments systematically for a number of them. Eventually, the owner refused to make a progress payment, alleging that changes had been made without written authorization as required by the General Conditions of the construction contract. The Supreme Court of Arkansas ruled in favor of the contractor on the grounds that the owner was aware of the changes, orally assented to them, made progress payments, and continued to approve changes orally. The owner could not behave in this manner and then rely on the “no pay without written change order” language as grounds for refusing to make payment.[13]
In a Wyoming case involving a guaranteed maximum price (GMP) contract for the conversion of an existing building to a restaurant, the contract required changes in the work to be authorized in writing by the owner as a condition for adding their value to the guaranteed maximum price. To meet the owner’s schedule, the contractor was repeatedly asked to make renovations that were beyond the scope of the original contract. All extra work performed was by oral direction of the owner. When the owner refused to increase the guaranteed maximum price by the value of the extra work, the contractor sued. In ruling for the contractor, the Wyoming Supreme Court held that
The habitual disregard of a provision which requires that change orders for extras be in writing, if determinable as a matter of fact, can amount to a waiver of the contractual requirement. It is apparent from the record that the parties ignored the writing requirement and frequently orally agreed to “extras.” The record also clearly demonstrates that the provision requiring written approval of all changes in the work was waived by the words and conduct of the parties.[14]
A 1994 Missouri case resulted in the same holding. The framing subcontractor on a large apartment complex in Kansas City assisted the concrete and plumbing subcontractors at the prime contractor’s request in order to expedite the work and was paid for this extra work. At that point, the general contractor instructed the subcontractor not to include extra work in future payment requests because it created a problem with the owner and construction lender. The subcontractor was directed just to keep track of the extra work, which was to be paid separately. When the subcontractor continued to perform extra work orally directed by the prime contractor and separately billed for it, the general contractor paid for part of the work, but not all. The subcontractor eventually filed a mechanic’s lien on the project for the value of the unpaid extra work. The project owner and general contractor then argued that, in the absence of written change orders, the subcontractor was not entitled to payment and should not be allowed to maintain a lien. The Missouri Court of Appeals found that the general contractor, through the words and deeds of its site representatives, had waived the written change order requirement. The court said:
All of the extra work performed and the extra materials supplied for buildings 12 through 17 were furnished either at the direction or under the supervision of Mr. Ryan or the agents of Ryan’s Construction. Based on the action of Ryan Construction concerning the extra work performed by Henley on buildings 12 through 17, the general contractor’s conduct concerning extras in general, and the large scale of the extra work, the trial court could reasonably infer that Ryan Construction either expressly or by acquiescence waived the written change order requirement with regards to the claim for extras on buildings 12 through 17. The trial court did not err in including these extras in the mechanic’s lien.[15]
As the preceding cases indicate, waiver of a contractual right occurs when the parties’ behavior is inconsistent with the enforcement of that right.
Orders for Payment of Disputed Changes
Regardless of the literal wording of the changes clause, the following conditions usually result in a court’s order for payment for changes:
• The owner, or contractor in the case of a subcontract, approves the work being done;
• The owner or contractor authorizes or allows the work to proceed; and
• The owner or contractor knows that the contractor or subcontractor respectively expects to be paid for the work.
Proper Contractor Reaction to Oral or Written Directives
When oral or written instructions or directives are received from the owner that the contractor believes constitute a change order, the proper reaction is as follows:
• Promptly request a written change order.
• If a change order is not received, proceed only after written advice to the owner that the instruction or directive received constitutes a change to the contract and that the work is being undertaken in expectation that payment will be made for the change.
• If the owner maintains that the directed work is not a change, but, at the same time, insists that the instruction or directive be carried out, the contractor must proceed with the work. However, this should be done only after advising the owner in writing that the work is being performed under protest and that all rights under the contract or subcontract are reserved.
• File a claim for the costs and time involved in performing the changed work and proceed under the disputes resolution provisions of the contract.
If the contractor’s position is contractually proper, the chances of eventual recovery of costs and time for performing the work according to oral or written directives is greatly enhanced by following this procedure.
Conclusion
This chapter highlighted the concept and operation of contract changes clauses. One reason that construction contracts, particularly heavy construction contracts, undergo changes is that the contractor encounters differing site conditions, the subject of the following chapter.
Questions and Problems
1. What three interrelated rights and/or obligations are central to the concept of contract changes? What would happen in today’s contracting world if construction contracts did not contain changes clauses? Is a changes clause an exculpatory clause? If not, what is the purpose of the clause?
2. Are all changes clauses more or less the same, or are they different? What seven broad kinds of changes provided for by the federal changes clause were listed in this chapter?
3. What is the difference between a change order (change directive, change notice) and a formal change to the contract?
4. Explain the differences in legal empowerment required by an individual in the owner’s organization to order changes and to authorize formal changes to the contract.
5. Why is it important that a contractor who has been directed to perform a change be certain that the person from whom he received the order had proper authority to order changes? What is apparent authority? How can it be created?
6. What does the federal contract changes clause have to say about the change in contract price resulting from changes? What are some other methods for determining change order pricing?
7. What is the intent of the “no pay without signed change order” language? What would be the effect in today’s construction world of a clause providing that a formal change to the contract be signed before actual work on the change could begin?
8. Explain the concept of two-part formal changes to the contract, including why such a procedure is often utilized.
9. What is the danger to the contractor in performing added or changed work on the basis of an oral change order?
10. What is a constructive change? Explain the nature of the two necessary elements to establish that a constructive change has occurred.
11. Explain the importance of the contractor giving prompt written notice of constructive change. What argument might the owner later make if notice is not given?
12. What is a cardinal change? Why are cardinal changes illegal in public work? Can a contractor properly be forced to perform a cardinal change in private work? Do universally accepted precise standards exist to define when a cardinal change has occurred?
13. Explain the difference between forward-priced changes and retrospectively priced changes.
14. What is force account? Explain typical force account provisions.
15. Explain how application of force account provisions might result in the contractor receiving less than an equitable adjustment in contract price after performing changed work. How is this possibility of inequitable payment affected when the changed work extends the period of contract performance?
16. Explain why the provisions of the changes clause, including the force account provisions, do not apply when determining breach-of-contract damages. Explain how force account records can still be helpful when determining such damages.
17. What are impact costs? Name three separate examples of time-related impact costs.
18. What are loss-of-efficiency impact costs? Name five general causes for these kinds of extra costs.
19. What is the current judicial tendency in dealing with change order payment disputes?
20. Why are words, acts, and conduct of the parties to the contract and their earlier patterns of behavior important when a court seeks to resolve a change order payment dispute?
21. What three conditions, when met, will usually result in a court ordering that payment be made in cases involving change order payment disputes?
22. What four-step procedure was outlined in this chapter that a contractor (or subcontractor) should follow to ensure eventual payment for changed work when the owner (or contractor) denies that his (or her) directive constitutes a change to the contract but insists that the directive be carried out?
1. F.A.R. 52.243-4, 48 C.F.R. 52.243-4 (Nov. 1996).
2. Appeal of Bud Rho Energy Systems Inc., VABCA No. 2208 (December 31, 1985).
3. Appeal of Henry Burge and Alvin White, PSBCA No. 2431 (May 19, 1989).
4. Son-Shine Grading, Inc. v. ADC Constr. Co., 315 S.E.2d 346 (N.C. App. 1984).
5. Baltimore Contractors, Inc. v. United States, 12 Cl. Ct. 328 (1987).
6. Appeal of Communications International, Inc., ASBCA No. 30976 (October 23, 1987).
7. Appeal of West Coast General Corporation, ASBCA No. 35900 (April 14, 1988).
8. Airprep Technology, Inc. v. United States, 30 Fed. Cl. Ct. 488 (1994).
9. C Norman Peterson Co. v. Container Corporation of America, 218 Cal. Rptr. 592 (Cal. App. 1985).
10. Appeal of Excavation Construction, Inc., ENGBCA No. 4106 (December 27, 1985).
11. Cleveland Trinidad Paving Co. v. Board of County of Commissioners of Cuyahoga County, 472 N.E.2d 753 (Ohio App. 1984).
12. Southern Roadbuilders, Inc. v. Lee County, 495 So.2d 189 (Fla. App. 1986).
13. Hempel v. Bragg, 856 S.W.2d 293 (Ark. 1993).
14. Huang International, Inc. v. Foose Construction Co., 734 P.2d 975 (Wyo. 1987).
15. T. D. Industries, Inc. v. The Lakes Project Investors, 883 S.W.2d 44 (Mo. App. 1994). | textbooks/biz/Business/Advanced_Business/Construction_Contracting_-_Business_and_Legal_Principles/1.14%3A_Contract_Changes.txt |
Key Words and Concepts
• The federal differing site conditions clause
• Type I (Category I) conditions
• Type II (Category II) conditions
• Duty of contractor to give notice
• Duty of government to investigate/issue a determination
• Equitable adjustment
• Not an exculpatory clause
• No implied right to relief
• Conflicting exculpatory clauses
• Lack of notice may not prejudice government rights
• Constructive notice
• Lack of notice may bar recovery
• Condition difference must be material
• Failure to make adequate site inspection
• Latent conditions
• Patent conditions
• Importance of prompt written notice
• Request for owner’s instructions/directive
• Reservation of rights
• Filing of claim
• Contractor/owner agreement on equitable adjustment
• Equitable adjustment determined by others
• Contractor must prove cost and time impacts
• Determination by provisions of the changes clause
• Force account equitable for direct costs only
The differing site condition clause has an interesting history. Today many contractors refuse to submit bids on projects where unknown site conditions could result in surprises and the contract does not contain a differing site conditions clause. This was not always the case.
In the past, the risk of encountering adverse physical conditions at the site that were unknown when the contract was entered into was borne by the contractor. For instance, if excavation work that was expected to be entirely in soil turned out to be in rock below a certain depth, the contractor was bound to complete the contract without any adjustment whatsoever. In addition to absorbing the additional costs of excavating rock instead of soil, the contractor could also be required to pay liquidated damages if, due to encountering the rock, the time allowed for performance of the contract was exceeded. In other words, the contractor, not the owner, bore the entire risk of cost and time performance regardless of what was encountered.
Recognizing this considerable risk exposure, prudent contractors included substantial cost contingencies in their bids to protect themselves if unknown adverse site conditions were encountered. The contingencies were, therefore, also included in the contract price. Thus, if these conditions were actually encountered, the owner, in effect, had already paid the cost to deal with them. However, this practice also resulted in the owner paying costs to overcome unknown adverse conditions whether such conditions actually were encountered or not, frequently producing a windfall for the contractor.
The federal government eventually realized that considerable savings in federal contract dollars were possible by the government assuming the risk of unknown adverse site conditions rather than imposing this risk on the contractor. This was the genesis of the current differing site conditions clause. This clause provides that, if unknown conditions are encountered during performance of the work that differ materially from the conditions represented in the contract or those ordinarily encountered in the work of the contract, the contract price and time for performance will be increased accordingly. Under this arrangement, the government pays and allows extra contract time only for conditions that are actually encountered. Contractors do not include cost contingencies in their bids for unknown adverse site conditions and reap no “windfall” if such conditions are not encountered.
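The owner’s economics behind this shift can be seen in a simple expected-cost comparison. The probability and cost figures in the sketch below are assumed purely for illustration and do not come from the text.

```python
# Hypothetical expected-cost comparison: contractor contingency pricing versus
# the owner assuming the risk via a differing site conditions clause (figures assumed).
p_adverse = 0.25                  # assumed probability adverse conditions are actually encountered
cost_if_encountered = 400_000.0   # assumed cost to overcome them if they occur

# Without the clause: a prudent contractor prices the full contingency into
# every bid, so the owner pays it whether or not the conditions materialize.
owner_cost_without_clause = cost_if_encountered

# With the clause: the owner pays only when the conditions are actually
# encountered, so over many projects the expected outlay is the probability-
# weighted cost.
owner_expected_cost_with_clause = p_adverse * cost_if_encountered

savings = owner_cost_without_clause - owner_expected_cost_with_clause
print(f"Expected savings to the owner per project: ${savings:,.0f}")
```

Under these assumed figures the owner’s expected outlay drops from \$400,000 to \$100,000 per project, which is the “considerable savings in federal contract dollars” the government realized by assuming the risk itself.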
The Federal Differing Site Conditions Clause
The federal contract differing site conditions clause reads as follows:
DIFFERING SITE CONDITIONS
(a) The Contractor shall promptly, and before such conditions are disturbed, give a written notice to the Contracting Officer of (1) subsurface or latent physical conditions at the site which differ materially from those indicated in this contract, or (2) unknown physical conditions at the site, of an unusual nature, which differ materially from those ordinarily encountered and generally recognized as inhering in work of the character provided for in this contract.
(b) The Contracting Officer shall investigate the site conditions promptly after receiving the notice. If the conditions do materially so differ and cause an increase or decrease in the Contractor’s cost of, or the time required for, performing any part of the work under this contract, whether or not changed as a result of the conditions, an equitable adjustment shall be made under this contract and the contract modified in writing accordingly.
(c) No request by the Contractor for an equitable adjustment to the contract under this clause shall be allowed, unless the Contractor has given the written notice required; provided, that the time prescribed in (a) above for giving written notice may be extended by the Contracting Officer.
(d) No request by the Contractor for an equitable adjustment to the contract for differing site conditions shall be allowed if made after final payment under this contract.[1]
Type I Differing Site Conditions
The first of the two differing site condition types described in the federal clause refers to any physical condition encountered in the work of the contract that differs materially from a condition indicated in the contract documents. In other words, the condition must be indicated a certain way in the contract documents and, when encountered during actual performance, must be found to be materially different. Such a condition is commonly called a type I or Category I condition. It is not necessary that the indication in the contract be explicit. In other words, conditions implied by the drawings and specifications taken as a whole, as well as conditions that are expressly stated, are considered by a court to be “indications” of the contract.
Two facts must exist to establish a type I differing site condition. First, the contract documents must have indicated a physical condition in a certain way. Second, when the condition was encountered during actual performance, it was found to be materially different. A simple example of a type I differing site condition is finding wet, sticky clay at a location in an excavation where the soil boring logs, which were stated to be part of the contract documents, indicated that the material would be damp sand. In the absence of a differing site condition clause in the contract, this situation constitutes misrepresentation on the part of the owner. Without the clause, a contractor who encounters such a condition has no means of relief except to sue the government for breach of contract and prove in court the necessary elements to establish a misrepresentation breach (see Chapter 13).
Type II Differing Site Conditions
The second type of differing site condition referred to in the federal contract is called a type II or Category II condition. This refers to a physical condition encountered during contract performance that differs materially from conditions normally expected in the type of construction work involved in the contract. In this case, the difference is not between an encountered condition and a condition shown or indicated a certain way in the drawings or other parts of the contract documents, but rather is a difference between the conditions encountered and the conditions considered normal or usual for the type of construction work being done. In other words, the condition encountered must be of such an unusual nature that it could not have been reasonably anticipated for the type of project at hand. To establish a type II differing site condition, a contractor must prove that the condition encountered is truly unusual and thus could not have been anticipated when the contract was signed.
An example of a type II differing site condition is finding a material in an excavation that, even though identified correctly on the soil boring logs, behaves in a manner materially different from the material’s usual behavior—that is, exhibits some abnormal physical property that could not have been reasonably anticipated by an experienced contractor. Without the differing site conditions clause, the contractor has no other means of relief unless it can be established that the government knew about the abnormal behavior of the material and did not disclose this superior knowledge prior to contract formation.
An excellent illustration of a type II differing site condition occurred on one of the tunnels for the Boston Harbor Project in Massachusetts. The tunnel was founded in massive competent argillite, and the specifications required that it be excavated by use of a tunnel-boring machine (TBM). The spoil, or “muck,” produced by a TBM normally consists of small rock chips no larger than two to three inches maximum dimension grading on down to sand size. When handled by the TBM discharge conveyors and muck haulage equipment, the material usually behaves much like sand and gravel. Instead, in a limited section of the Boston tunnel, the TBM produced muck that resembled wet flowing concrete that was difficult to handle on the tunnel muck conveyors and haulage equipment. The material was correctly described geologically in the contract documents, but its behavior was highly unusual and could not have been expected in the work of the contract at hand.
Duty of Contractor to Give Notice
Note that the federal clause requires the contractor to promptly notify the government whenever either type I or type II differing site conditions are encountered and “before such conditions are disturbed.” The purpose of this duty of the contractor to give notice is to provide an opportunity for the government to view and investigate the condition to verify that the condition is, in fact, a differing site condition. If the condition is disturbed or obliterated, it may be difficult or impossible to do this, which could effectively bar recovery under the clause.
A secondary purpose for giving notice promptly is to provide the government the opportunity to direct the actions to be taken by the contractor in dealing with the differing site condition. Since the government is paying the costs, it clearly has the right to direct the manner in which the condition is dealt with in the field when a choice is available. Also, some encounters with differing site conditions make it necessary to redesign all or part of the project, a function obviously controlled by the government in its capacity as owner.
Duty of Government to Promptly Investigate
The federal clause provides that, once notified, the government has a positive duty to investigate the condition and make a determination that it is or is not a differing site condition. Failure to investigate promptly and make a determination in good faith is a breach of contract.
The significance of this point was illustrated in a contract for the construction of an immigration processing center where the contract documents contained detailed representations of the subsurface soil conditions. During performance, the contractor encountered organic muck that was not indicated in the soil information included with the contract documents. The presence of this material made it impossible to construct the building’s concrete foundations in the manner described in the contract. Although the contractor promptly informed the government’s site representatives when the muck was encountered and requested instructions, three months passed before the government finally acknowledged a differing site condition and directed the contractor to remove the muck. The contracting officer agreed to pay the direct costs for removing the muck but refused to pay delay damages or other impact costs caused by the three months’ delay. When the contractor sued, the U.S. Court of Claims (now the United States Court of Federal Claims) held that the government’s slow response had brought the contract work to a complete halt and that under these circumstances the government must pay the contractor not only its direct costs but all increased costs of contract performance including delay damages.[2]
If the government determines that the condition is not a differing site condition, the contractor may accept the decision or, as with any other contracting officer’s decision, dispute the determination under the provisions of the disputes resolution clause in the contract.
Equitable Adjustment Provided
The federal clause makes clear that if the contracting officer finds that the condition is a differing site condition that increases or decreases the cost or time for performance of the work, an equitable adjustment will be made to the contract price and time. This promise is unequivocal and cannot be overridden by any other provision of the contract. Most, if not all, encounters with differing site conditions result in upward adjustments in contract price and time.
Differing Site Conditions and Government Liability
The federal differing site conditions clause is not an exculpatory clause. It is true that, in the case of a type I differing site condition, the clause does partially exculpate or remove the stigma of fault or blame associated with a breach of contract by the government, but it does not operate to relieve the government of liability. Rather, it has the reverse effect of explicitly establishing the government’s liability for costs and contract time to overcome the condition and provides an orderly process by which the contractor may claim and recover these costs through an equitable adjustment to the contract.
Thus, the clause provides a contract remedy as distinct from a breach remedy—that is, the contractor’s right to relief is based on a specific provision of the contract that promises relief. Without this contractual remedy, the contractor’s only avenue for relief is to sue the government for breach of contract, alleging misrepresentation in the case of a type I condition or failure of the government to disclose superior knowledge in a type II condition.
Other Differing Site Conditions Clauses
The right to relief based on differing site conditions is not an implied right of the contract. There is no right of relief unless the contract contains a differing site conditions clause promising relief. In a typical case on this point, a U.S. District Court ruled that the absence of a differing site condition clause in the contract placed the risk of subsurface conditions squarely with the contractor.[3] Inexperienced contractors sometimes make the mistake of assuming that they will automatically receive cost and time adjustments for encountering conditions different than they expected.
Although many public contracts, and even many in the private sector, contain the federal differing site conditions clause verbatim or nearly verbatim, many others do not. In some contracts, the analogous clause is titled “Changed Conditions” or “Concealed Conditions.” If the federal clause is not used, it is important to read the alternate clause carefully to see what it does and does not provide.
Does the Clause Cover Both Type I and Type II Conditions?
The wording of the clause is particularly important in determining whether both type I and type II differing site conditions are included. Type I conditions are almost always included, but some differing site conditions clauses do not include type II conditions. Under clauses providing only for type I differing site conditions, there will be no relief unless the condition actually encountered was indicated differently in the contract documents. It makes no difference how unusual the condition actually was.
Does the Contract Contain Conflicting Exculpatory Clauses?
Some contracts contain conflicting exculpatory clauses—that is, they conflict directly with a differing site conditions clause contained in the same contract. For instance, if the contract contains soil boring logs and a differing site conditions clause, the contractor is clearly protected if adverse soil conditions different from those indicated in the boring logs are encountered. However, if the contract also contains a clause stating that the owner will not be responsible for the accuracy of the soil boring logs, an obvious conflict has been created.
Court decisions resolving this conflict have been mixed. The current judicial and administrative trend is to favor the differing site conditions clause over the exculpatory clause, often on the basis of a “precedence of contract documents clause” that gives precedence to general conditions clauses over clauses in other parts of the contract. The following cases illustrate this point.
A subcontract for performance of excavation work in Illinois contained a differing site condition clause allowing additional compensation for “subsurface and/or latent conditions at the site materially differing from those shown on the Drawings or indicated in the Specifications.” When the excavation subcontractor encountered pockets of peat which were not indicated on the boring logs and which substantially increased the costs of excavation, they submitted a claim under the differing site condition clause. The general contractor would not pay, claiming that the subcontractor was not entitled to rely on the boring logs because the specifications expressly disclaimed responsibility for their accuracy. A trial court ruled for the subcontractor, holding that when a contract contains both a differing site condition clause and a disclaimer of site condition data, the differing site condition clause takes precedence. The Appellate Court of Illinois affirmed on the grounds that the subcontract contained an “order of precedence” clause establishing the precedence of the general conditions over the specifications.[4]
In a similar case, an excavating subcontractor in Idaho encountered subsurface water in the soil that was so serious that its trucks were mired up to the wheel hubs. A pre-bid site inspection had revealed only a dry cracked surface. The standard AGC form of subcontract agreement had been used that incorporated a differing site conditions clause. However, the contract also contained a disclaimer assigning to the subcontractor the risk for
All loss or damage arising out of the nature of the work aforesaid, or from action of the elements, or from unforeseen difficulties or obstructions which may be encountered in the prosecution of the work until its acceptance by the Principal, and for all risks of every description connected with the work.
When the subcontractor filed a claim for the extra expense in dealing with the muddy conditions, the general contractor refused to pay, arguing that the subcontract agreement imposed that risk on the subcontractor.
In spite of the disclaimer, the Idaho Supreme Court held that the subcontractor had encountered site conditions differing from those indicated in the contract documents and that could not be seen during a reasonable pre-bid site inspection. The subcontractor was awarded additional compensation.[5]
In an earlier federal case, the Engineer Board of Contract Appeals found that a contractor was entitled to extra compensation under the differing site condition clause when it was discovered that a government-approved quarry could not produce acceptable stone when commercially feasible construction methods were employed, even though the government had disclaimed in the contract any knowledge of whether the approved quarry contained acceptable material.
A general contractor had entered into a contract for the construction of a perimeter dike in Lake Huron, Michigan, and had subcontracted the production of stone from the government-approved quarry. The subcontractor was able to produce stone from the quarry only by the use of commercially infeasible and costly procedures. The general contractor was forced to switch to an alternate source 100 miles farther from the jobsite. When the contractor filed a claim for the extra costs involved, the government cited the disclaimer, arguing that designation of the original quarry as an approved source of stone did not amount to a representation regarding the cost of production or of the suitability of the material removed. However, the Board held that the inability to produce satisfactory material from the quarry using normal commercial construction methods amounted to an “unforeseen condition” within the meaning of the differing site condition clause, entitling the contractor to compensation for the additional costs involved in obtaining the stone from the alternate source.[6]
Without the presence of a differing site condition clause, exculpatory language in the contract disclaiming responsibility for the accuracy of the site conditions represented poses a great risk to the contractor. Courts generally will enforce these disclaimers unless it can be shown that the owner withheld site information in its possession from bidding contractors.
A 1987 decision of the New Jersey Supreme Court underscores this point. A highway contractor for the Department of Transportation (DOT) encountered soft soil conditions in saturated clay that greatly increased excavation costs. Nothing in the contract documents indicated that such conditions would be encountered, but the contract did not contain a differing site condition clause. The contract did contain a clause disclaiming the DOT’s responsibility for the accuracy or completeness of site condition data and said that the contractor would not be entitled to a price increase due to differing site conditions. When the contractor sued for additional compensation, they were able to show at the trial that, prior to taking bids, the DOT had received a letter from a consultant warning of difficult work conditions that would be caused by the saturated soil. This letter was never made available, or in any way disclosed, to bidders. In ruling for the contractor, the court stated that, although the DOT could not be held liable for failing to depict site conditions accurately, it must disclose all relevant information in its possession to bidders. The court further said that there was no doubt that the letter contained information that would have assisted bidders in pricing and planning the contract work. For this reason, the contractual disclaimer was unenforceable, and the contractor was entitled to recover its increased costs.[7]
What Are the Notice Requirements?
The federal clause provides that the contractor notify the government promptly when differing site conditions have been encountered and “before such conditions have been disturbed.” The reasons for this requirement were discussed earlier in this chapter. Under the federal contract, the contractor’s failure to furnish notice in accordance with the requirements of the clause is not necessarily fatal to the success of a differing site conditions claim. If it can be demonstrated that the lack of notice did not prejudice the rights of the government in any way, recovery under the clause will usually not be barred. Prejudice to the government’s rights could be caused both by denying the opportunity to make an investigation to verify the condition before it was disturbed and by precluding the opportunity to direct and control the course of action to be taken to deal with the condition. For this reason, when notice has not been given, the contractor must be able to show that the owner was not placed at a disadvantage (or prejudiced) in either of these ways to recover under a differing site conditions claim. The contractor must clearly establish that lack of notice could not possibly have made any difference—that is, there must be no doubt that the condition was a differing site condition and that the contractor took the only possible course of action, or at least a course that was no more costly than, and equally acceptable from the government’s standpoint as, any other course that might have been taken.
The notice requirements in the differing site conditions clause in other contracts can be considerably more restrictive than in the federal clause, particularly in clauses stating that the prompt furnishing of notice is a condition precedent to recovery under the clause. Courts will be more inclined to give full force and effect to the literal interpretation of clauses containing such language rather than applying the “no prejudice to the rights of the owner” standard.
What Are the Owner’s Responsibilities Under the Clause?
The government’s contractual duty under the federal clause to investigate and determine whether the conditions encountered by the contractor are differing site conditions was discussed earlier in this chapter. The contractor has a legitimate right to know whether the owner agrees that the conditions encountered constitute differing site conditions under the contract and whether a cost and time adjustment to the contract will be forthcoming. The importance of the cost adjustment is obvious. The time adjustment is also important when significant time is involved since the contractor bears the burden of completing the project within the contractually stipulated time allowance. This increase in time allowance should be equal to the additional time needed to complete the project because of differing site conditions. The contractor is entitled to know whether the completion date will be extended by the owner in order to realistically and economically schedule the remaining contract work.
If a time extension is not forthcoming when significant time has been lost, usually the only way the project can be completed by the original completion date is by accelerating the rate of performance of the remaining work, a costly undertaking. Thus, it is important for the contractor to receive the results of the owner’s determination promptly.
Once the owner’s determination has been obtained, the contractor at least knows the owner’s position. If the owner determines that the encountered conditions do not constitute differing site conditions under the contract, the contractor must either accept the determination or dispute it under the dispute resolution provisions of the contract. In either case, the contractor must absorb the extra costs involved (temporarily, at least) and attempt to complete the unextended contract on time by accelerating performance or risk being held in default by the owner. Clearly, the contractor cannot properly explore available options without knowing the owner’s position. The federal clause imposes the duty of making a prompt investigation and determination on the government. Clauses in other contracts may or may not impose a similar duty on the owner. If the clause does not impose this contractual duty, the contractor is placed in a very disadvantageous position when differing site conditions are encountered.
Reasons for Denying Differing Site Condition Claims
Once the contractor has claimed differing site conditions, the owner may deny the claim. Common reasons for denial follow.
Lack of Notice
As discussed earlier, most differing site conditions clauses require the contractor to furnish prompt notice, sometimes (as in the federal clause) before the conditions are disturbed. Lack of notice can bar an otherwise valid claim if prejudice to the owner’s interests can be shown. Some courts interpret the notice clause so strictly that a valid claim will be disallowed even when it is shown that the lack of notice did not prejudice the owner’s interests.
The following cases illustrate how courts deal with the lack of notice issue. In the first case, a government contractor removing and stockpiling riprap from a government quarry encountered explosive charges in the rock left by a previous government contractor. Although the contractor did not provide prompt written notice as required by the federal differing site conditions clause, they later submitted a claim for lost productivity because of the explosive charges found in the quarry. When the contracting officer failed to pay, the contractor filed an appeal with the Interior Board of Contract Appeals. The board held that the contractor’s failure to give notice was prejudicial to the government because the contracting officer did not know about the conditions encountered by the contractor. Having this knowledge would have enabled the contracting officer to elect to terminate the contract for the convenience of the government rather than pay the increased costs involved in dealing with the explosives. The contractor’s appeal was denied.[8]
In the second case, a contractor for a federal contract for the construction of a post office building encountered soft clay not indicated on the soil boring logs when excavating the site. They removed the clay without putting the government on notice after calling in a consultant who advised that there was no reasonable alternative. By the time the government’s architect learned of the situation, the contractor had removed the clay and was backfilling the area. The contracting officer denied the contractor’s differing site condition claim because of failure to comply with the notice requirement providing the government an opportunity to investigate and control the fix. The contractor appealed.
The Postal Service Board of Contract Appeals found that a differing site conditions claim can be denied because of lack of notice but only when the government can show that its options were limited by the lack of notice. This was not true in this case, and the government had suffered no prejudice. The board said:
There is sufficient reliable evidence to conclude that a differing site condition existed in the southwest corner of the site. The government, however, has not demonstrated there was a reasonable alternative to the method adopted by the contractor to deal with the problem which would have been more efficient or less costly. Accordingly, the contractor may recover the costs of removing and replacing 1,794 cubic yards of soil in the southwest corner of the post office site.[9]
Difference Not Material
The owner may deny a contractor’s claim on the basis that the condition is not different, or not sufficiently different, from the condition indicated in the contract (Type I differing site condition) or from the conditions normally encountered (Type II differing site condition). To qualify as either a Type I or Type II differing site condition, the condition difference must be material. Marginal differences are not sufficient.
For instance, the Armed Services Board of Contract Appeals was not convinced that an 18-inch difference between the depth of an existing sewer line shown on contract drawings and the actual depth of the sewer line encountered during contract performance was a “material” difference under the meaning of the differing site conditions clause. The drawings indicated that the invert of the sewer line was 10 feet below the ground surface. The contractor asserted that the 18-inch lower depth of the sewer required working below the water table, necessitating more expensive construction techniques. In ruling that the 18-inch difference was not a material difference, the board said:
We are simply not persuaded on the evidence that had the sewer line been 18 inches higher, none of this would have happened and instead, the 8 foot section of pipe could have been replaced with the rubber tire backhoes without the shoring, a trench box, or dewatering.[10]
Unfortunately, there are no generally accepted rules for deciding whether a particular difference is significant enough to be material. The question often rests on judicial determination.
Failure to Conduct an Adequate Pre-Bid Site Inspection
Frequently, owners deny contractors’ differing site conditions claims based on the owner’s contention that, if the contractor had conducted a reasonable and proper site inspection prior to contract formation, the condition would have been discovered and the contractor would have included additional costs in the bid to deal with it. Most bid documents strongly suggest or even require that the contractor make such a site inspection prior to submitting a bid. A bidding contractor who had knowledge prior to the bid that an actual physical condition at the site was more severe than indicated in the contract documents and who had then bid only an amount to cover the less severe condition indicated in the contract cannot reasonably expect relief under the differing site conditions clause. For this reason, the argument that the contractor failed to make an adequate pre-bid site inspection can be effective in barring the contractor’s differing site conditions claim.
On the other hand, the contractor will not be held to a standard of clairvoyance—that is, the requirement for a reasonable pre-bid site inspection does not mean that the contractor will be held responsible for the discovery of latent conditions or be held responsible for failing to make “a skeptical analysis of the plans and specifications.” This means that unless there are specific instructions to verify certain measurements or to determine certain quantities of work to be done, the contractor is entitled to take the drawings and specifications at face value and to rely on them.
The application of this general concept to differing site conditions is illustrated by the words of the U.S. Court of Claims (now the United States Court of Federal Claims) in a related case:
Contractors are businessmen, and in the business of bidding on Government contracts, they are usually pressed for time and are consciously seeking to underbid a number of competitors. Consequently, they estimate only those costs which they feel the contract terms will permit the Government to insist upon in the way of performance. They are obligated to bring to the Government’s attention major discrepancies or errors which they detect in the specifications or drawings, or else fail to do so at their peril. But they are not expected to exercise clairvoyance in spotting hidden ambiguities in the bid documents, and they are protected if they innocently construe in their own favor an ambiguity equally susceptible to another construction.[11]
A latent condition is one that is hidden or not obvious, whereas a patent condition is obvious. Generally speaking, bidding contractors are only expected to note patent conditions in pre-bid site inspections. If a condition is not patent, a bidding contractor’s failure to discover it during a pre-bid site inspection will not bar a later claim for a type I condition under the differing site conditions clause.
The following cases illustrate how our courts have dealt with the site inspection issue. In the first case, the contractor was denied a differing site conditions claim because it failed to conduct any pre-bid site inspection at all. The contract required renovation of dormitories at a military base, and many of the contract drawings bore the notation for the contractor to “verify in field” many of the building dimensions. The government provided bidders an opportunity to inspect the building prior to submitting bids. During performance, the contractor encountered a number of discrepancies in various building dimensions from those indicated on the drawings and asserted a differing site conditions claim. The Armed Services Board of Contract Appeals denied the claim on the grounds that the discrepancies could have been detected during a reasonable pre-bid site inspection. The contract called for the renovation of an old building, and the drawings specifically required field verification. The board said:
The contractor certainly knew that this was a renovation contract which included demolition from the invitation to bid. There were two scheduled walk-through site investigations where the type of construction and likelihood of irregular dimensions could be uncovered. Unfortunately, the contractor chose not to look at the subject of its bid. It chose to rely on what because of the nature of the undertaking were less than perfect drawings, definitively labeled as such by the terms “verify” and “verify in field.”[12]
In the next case, the contractor conducted a pre-bid site inspection, but the General Services Administration Board of Contract Appeals concluded that the inspection was inadequate. The contract required the renovation of a ten-story building including the replacement of the flooring. During performance, the contractor found that the north wall of the building was out of square with the other walls, which increased the total square footage of each floor. They asserted this was a differing site condition. The board concluded that the drawings strongly suggested that the walls were out of square and that the contract documents required the contractor to conduct a pre-bid site inspection and verify the dimensions shown on the drawings. The board felt that if the contractor had complied with these requirements, it would have known the exact floor area of the building. In denying the contractor’s claim, the board opined:
The contract drawings gave ample indication that a problem possibly existed regarding the angles at which the east and west walls intersected with the north wall. The effect of uneven angle of intersection on the calculations of surface areas is obvious, and the significance of this fact is only enhanced by the fact that we are dealing here with a 10-story building. When, in making the site inspection, the party ignores such data in contract drawings and makes no measurements for purposes of verification, we cannot conclude that the inspection is reasonably adequate.[13]
In another case, the Armed Services Board of Contract Appeals supported a contractor’s differing site condition claim, holding that the contractor had no obligation to pretest soil samples to determine whether subsurface conditions were suitable for the proper bedding of pipe. In a contract to replace sewer lines at an Air Force base, the contract documents expressly represented that the material to be excavated would be sand and that no hard material would be encountered. Pipe was required to be set on a bedding of sand or gravel. The contractor priced its bid on the basis that it could bed the pipe on the native material but instead encountered hardpan sandstone that had to be removed and replaced to bed the pipe properly. The contracting officer denied the contractor’s differing site conditions claim, asserting that a more thorough pre-bid site inspection would have revealed the presence of the hardpan. The board ruled that the pre-bid site inspection requirement did not impose a duty on the contractor to test subsurface materials. The contractor was entitled to rely on the affirmative representations in the contract documents, and the removal and replacement of the hardpan constituted a differing site condition.[14]
Dealing With Differing Site Conditions
The following course of action will greatly enhance the chances of an equitable contract cost and time adjustment being granted when differing site conditions are encountered.
Prompt Written Notice
The importance of prompt written notice to the owner that differing site conditions have been encountered cannot be overemphasized. The notice should be given before the conditions are disturbed. Although constructive notice may have occurred, written notice is far preferable. An example of constructive notice would be a contractor encountering a differing site condition during excavation operations when the owner’s inspector was present, observed the condition, and thus was aware of it.
The written notice should also request the owner to investigate promptly the encountered conditions and to issue a determination that differing site conditions have been encountered.
Request for Owner’s Instructions
The contractor should also request the owner’s instructions or directive on how to deal with the encountered conditions, unless there is only one possible course of action. Further, the contractor should advise the owner that contract performance will be delayed if instructions or a directive are not received within a reasonable period of time.
Failure to Receive Determination or Receipt of Adverse Determination
If the owner either fails to make a determination within a reasonable period of time or determines that the encountered condition does not constitute a differing site condition, the contractor must assume that no cost or time adjustment to the contract is immediately forthcoming. Unless the contractor is prepared to concede the matter, the owner should be advised in writing that the contractor disagrees with the determination and is reserving all rights under the contract. A claim should then be filed in accordance with the disputes resolution provisions of the contract for later adjudication by others. In the interim, contract work must be continued according to the owner’s instructions or directive with no guarantee that an equitable contract cost or time adjustment will ever be received. Although placed in a very disadvantageous position, the contractor has no alternative but to proceed on this basis. If the encountered condition is truly a differing site condition under the contract, the contractor usually will eventually be made whole through the disputes resolution provisions of the contract.
Determination of the Equitable Adjustment
The adjustment in contract price and time may be determined by agreement between contractor and owner or, if the owner and contractor are unable to agree, the equitable adjustment may be determined by others under the dispute resolution provisions of the contract. In either case, the contractor must prove cost and time impacts—that is, the performance cost increases and the extension of overall contract performance time that form the basis for the equitable adjustment.
As a general rule, the same principles that govern determination of contract price and time adjustments resulting from contract changes apply (see Chapter 14). In fact, the differing site conditions clause in many contracts provides that the price and time adjustment be determined by the provisions of the changes clause, although the federal clause and the clauses in some other contracts are silent on this point. If force account provisions under a changes clause are used to determine differing site conditions cost and time adjustments, the difficulty discussed in Chapter 14 arises when the force account indirect cost markups are not high enough to meet the contractor’s actual costs when project performance time has been extended. In this case, the contractor would receive less than an equitable adjustment. Therefore, when the contract has been extended due to differing site conditions, force account provisions can be considered equitable for direct costs only. Indirect costs should be determined on the basis of the contractor’s provable actual costs and a reasonable profit should be added.
Conclusion
Differing site conditions, contract changes, and breach of contract situations usually result in delay to the project. In the following chapters, we turn to the general subject of delay and how it is handled in a contractual sense.
Questions and Problems
1. Explain the reason why differing site conditions clauses are included in construction contracts.
2. Without a differing site conditions clause in the contract, what must the contractor do to obtain relief if conditions are encountered that are different from those indicated in the contract documents?
3. What is a type I differing site condition? A type II? Does the federal differing site conditions clause include both?
4. What does the federal clause provide regarding the contractor’s duty to notify the government when a differing site condition has been encountered? What does the clause state that the contracting officer must do when notified that a differing site condition has been encountered?
5. Explain why the federal clause is not an exculpatory clause.
6. Can the rights provided by differing site conditions clauses ever be considered to be implied by the contract?
7. What four specific points should you look for when reading the differing site conditions clause in contracts other than the federal contract?
8. List the three common reasons discussed in this chapter that owners deny contractor differing site conditions claims.
9. What is constructive notice? Is constructive notice an adequate substitute for written notice? What request should a contractor make to the owner as part of a written notice that a differing site condition has been encountered? What should the contractor do when the owner does not respond within a reasonable period of time?
10. What two actions should the contractor take after receiving the owner’s determination regarding previous notice of encountering a differing site condition when the contractor disagrees with the determination?
11. By what two avenues mentioned in this chapter can the amount of differing site conditions cost and time adjustments to the contract be determined? What does the contractor have to prove regardless of which avenue is used?
12. What other prominent contract clause provisions are frequently used to make the equitable adjustment resulting from encountering a differing site condition? What restrictions should be placed on the use of force account provisions to ensure that an equitable adjustment is reached?
1. F.A.R. 52.236-2, 48 C.F.R. 52.236-2 (Nov. 1996).
2. Beauchamp Construction Co., Inc. v. United States, 14 Cl. Ct. 430 (1988).
3. Pinkerton and Laws Co., Inc. v. Roadway Express, Inc., 650 F.Supp. 1138 (N.D. Ga. 1986).
4. Roy Strom Excavating & Grading Co., Inc. v. Miller-Davis Co., 501 N.E.2d 717 (Ill. App. 1986).
5. Beco Corp. v. Roberts & Sons Construction Co., Inc., 760 P.2d 1120 (Idaho, 1988).
6. Appeal of Construction Aggregates Corporation, ENGBCA No. 4242 (Dec. 31, 1980).
7. P. T. & L. Construction Co. v. State of New Jersey Department of Transportation, 531 A.2d 1330 (N.J. 1987).
8. Appeal of M. D. Activities, IBCA No. 2113 (Dec. 7, 1987).
9. Appeal of M & M Builders, Inc., PSBCA No. 2886 (May 29, 1991).
10. Appeal of H. V. Allen Co., Inc., ASBCA No. 40645 (Oct. 4, 1990).
11. Blount Bros. Construction Co. v. United States, 346 F.2d 962 (Ct. Cl. 1965).
12. Appeal of Zenith Construction, ASBCA No. 33576 (Mar. 11, 1989).
13. Appeal of J. S. Alberici Construction Co., Inc., GSBCA No. 9897 (Aug. 31, 1989).
14. Appeal of Tenaya Construction, ASBCA No. 27799 (Nov. 5, 1986). | textbooks/biz/Business/Advanced_Business/Construction_Contracting_-_Business_and_Legal_Principles/1.15%3A_Differing_Site_Conditions.txt |
Key Words and Concepts
• Time is of the essence
• Suspension of work
• Delay
• Increases in direct/time related costs
• Inefficiency due to interruptions of performance
• Excusable delay
• Compensable delay
• Contractual provisions for compensable delay
• The federal contract suspension of work clause
• No-damages-for-delay clauses
• Attitude of courts toward no-damages-for-delay clauses
• Contracts that are silent on delay
• Delay in early completion situations
• Root causes of delay and suspensions of work
• Importance of the notice requirement
• Constructive notice
• How terminations differ from delays or suspensions of work
• Federal contract default termination clause
• Federal contract termination-for-convenience clause
• Genesis of termination-for-convenience clause
• Abuse of discretion
• Termination-for-convenience clause in other contracts
Delays and Suspensions of Work
Delays in construction contracting can be both psychologically and financially destructive, just as they are in everyday life. Whether the delay results from an act of God, breach of contract by one of the parties, or differing site conditions, its impact on construction contracts is often catastrophic. The old adage “time is money” is definitely true in these situations.
Time Is of the Essence
Construction prime contracts and subcontracts often contain a statement that “time is of the essence.” These words appear to mean that contract performance is to start promptly and continue without interruption until completion within the specified time period. Taken literally, the words mean that the contractor or subcontractor has an absolute duty to perform all contract requirements with no delay whatsoever and is in breach of the contract for failing to complete the contract work within the contractually specified time. Similarly, these words also suggest that an owner who does not promptly review and approve shop drawings or promptly perform other contractually specified duties has breached the contract.
The common judicial view is not quite so stringent. Courts usually apply the time-is-of-the-essence concept only to delays in performance that are unreasonable. In this view, construction contracts by their very nature are so fraught with the possibility of delay that some delay is almost inevitable. Also, the clause is sometimes interpreted to mean that the contractor or subcontractor is required to meet time deadlines, but the owner or prime contractor in subcontract situations is not—that is, it is a “one-way street.” However, contractors, subcontractors, and owners would be well advised to act as though time-is-of-the-essence requirements will be strictly enforced with respect to their commitments to others. They would equally be wise not to count too heavily on reciprocal commitments made by others being strictly enforced.
Delays v. Suspensions of Work
Interruptions to work can result in either a delay or a suspension of work. A suspension of work results from a written directive of the owner to stop performance of all or part of the contract work. When this occurs, work on the entire project or on some discrete part of the project ceases entirely until the owner lifts the suspension. A delay differs from a suspension in two ways: First, a delay may be only a slowing down or a temporary interruption of the work without stopping it entirely. Second, whether a slowing down or a temporary interruption of work, a delay is triggered by something other than a formal directive from the owner to stop work. As with suspensions, delays can affect the entire project or only a discrete portion of it. Suspensions of work and delays can be caused by a variety of conditions: bad weather, strikes, equipment breakdowns, shortages of materials, changes, differing site conditions, or some act, or failure to act, of the owner separate from a directive to the contractor to stop work. Regardless of the cause, and whether within the control of the parties to the contract or not, suspensions and delays can be devastating for both parties to the contract.
The distinction between a suspension of work and a delay is a technical one. In the following discussion, the word delay indicates a loss of time, whether caused by a suspension of work or by some other delaying factor. Such delays result in increases to direct and indirect time-related costs for both the contractor and owner, with the magnitude of the cost increases depending on the extent of the suspension or delay. In addition to these increases in time-related costs, the contractor often experiences increases in direct costs due to inefficiencies caused by the interruption of performance.
The owner’s cost increases usually involve additional project administration costs since supervisory staff is on the job longer as well as consequential cost increases due to the project going on line later than anticipated. As any cost estimator knows, time-related costs have a tremendous impact on the overall cost of performance. The potential magnitude of these costs makes interruption in the performance of the work a very serious matter for both owner and contractor. There is no doubt that time is money in the construction contracting world.
Compensable v. Excusable Delay
Once contract time has been lost, a threshold question is whether the delay is compensable or excusable—that is, whether the contractor will be paid, or made whole, for the extra costs incurred as a result of the delay or whether only an extension of contract time will be granted.
An excusable delay is a non-compensable loss of time for which the contractor will receive an extension of time but no additional payment. Excusable delays are not the fault of either party to the contract. Although given an extension of time, the contractor must bear the costs associated with the delay. Since the owner also absorbs its own time-related costs, it too bears the consequences of the delay. Thus, each party bears its own share of the costs of an excusable delay. Common examples of excusable delays include strikes (unless caused by the contractor’s breach of a labor contract or some act contrary to reasonable labor management) and inclement weather over and above the normal inclement weather experienced at the project’s location.
A compensable delay entitles the contractor both to a time extension and to compensation for the extra costs caused by the delay. Unless the contract contains an enforceable no-damages-for-delay clause, an owner-caused delay is a compensable delay. It is also possible that delays that would normally be merely excusable become compensable if they flow from an earlier compensable delay. An example is a case where an owner-caused delay forced follow-on work to be performed at a time of year when normal weather-related delays are likely to occur, when that work would have been completed before the inclement weather had the owner-caused delay not occurred. In this situation, the extra costs resulting from performing in the normal inclement weather, although ordinarily not compensable, become compensable.
Contractual Provisions for Compensable Delay
A contractor cannot reasonably expect to be paid for delays that are self-inflicted. On the other hand, one would expect that the contractor be compensated when the delay is caused by the owner. The extent to which the contractor is entitled to compensation for extra costs resulting from delays and suspensions varies according to the contractual provisions for compensable delay. Reading and understanding these provisions is critical to protection of the interests of both contractor and owner.
The Federal Suspension of Work Clause
The federal contract suspension of work clause reads as follows:
SUSPENSION OF WORK
1. The Contracting Officer may order the Contractor, in writing, to suspend, delay, or interrupt all or any part of the work of this contract for the period of time that the Contracting Officer determines appropriate for the convenience of the Government.
2. If the performance of all or any part of the work is, for an unreasonable period of time, suspended, delayed, or interrupted (1) by an act of the Contracting Officer in the administration of this contract, or (2) by the Contracting Officer’s failure to act within the time specified in this contract (or within a reasonable time if not specified), an adjustment shall be made for any increase in the cost of performance of this contract (excluding profit) necessarily caused by the unreasonable suspension, delay, or interruption and the contract modified in writing accordingly. However, no adjustment shall be made under this clause for any suspension, delay, or interruption to the extent that performance would have been so suspended, delayed, or interrupted by any other cause, including the fault or negligence of the Contractor or for which an equitable adjustment is provided for or excluded under any other term or condition of this contract.
3. A claim under this clause shall not be allowed (1) for any costs incurred more than 20 days before the Contractor shall have notified the Contracting Officer in writing of the act or failure to act involved (but this requirement shall not apply as to a claim resulting from a suspension order), and (2) unless the claim, in an amount stated, is asserted in writing as soon as practicable after the termination of the suspension, delay, or interruption, but not later than the date of final payment under the contract.[1]
Note that the clause first establishes the authority of the contracting officer to order the contractor to “suspend, delay, or interrupt all or any part of the work….” Then, it promises that, if the performance of all or any part of the work is suspended, delayed, or interrupted for an “unreasonable” period of time by an act or failure to act of the contracting officer, an adjustment will be made for any increase in the cost of contract performance, excluding profit.
A separate clause in the federal contract provides that the contracting officer will “extend the time for completing the work” for justifiable cause, which includes delay due to acts or failure to act of the government. The contractor must notify the contracting officer of the cause of the delay within ten days of its occurrence or within such further period of time before the date of final payment under the contract that may be granted by the contracting officer.
Thus, the federal contract provides that the contractor receive both the costs and an appropriate extension of contract time for delay caused by any government act or failure to act administratively in respect to contract changes, constructive changes, differing site conditions, and so on. Therefore, delays of this type are compensable delays under the terms of the federal contract.
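The 20-day cost cutoff in paragraph 3 of the suspension of work clause can be sketched in a few lines. This is only an illustration with hypothetical dates and amounts; the clause language, not this arithmetic, governs in practice:

```python
from datetime import date, timedelta

def recoverable_costs(costs, notice_date, window_days=20):
    """Sum the delay costs recoverable given the notice date: costs
    incurred more than `window_days` days before written notice are
    disallowed. (Per the clause, this cutoff does not apply to a claim
    resulting from a formal suspension order.)"""
    cutoff = notice_date - timedelta(days=window_days)
    return sum(amount for day, amount in costs if day >= cutoff)

# Hypothetical example: $1,000/day of delay cost from June 1 through
# June 30, with written notice given June 30 -- only the costs from
# June 10 forward (21 days) remain recoverable.
daily_costs = [(date(2023, 6, 1) + timedelta(days=i), 1000) for i in range(30)]
print(recoverable_costs(daily_costs, date(2023, 6, 30)))  # 21000
```

The sketch makes the practical point of the clause concrete: every day the contractor waits to give written notice pushes the recoverable window forward and forfeits the earliest-incurred costs.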
Delays and Suspensions in Other Contracts
Although many federal contract provisions are widely copied throughout the industry, the federal delay provisions are often not contained in other contracts. The federal contract approach could be said to be at one end of the spectrum and contracts containing no-damages-for-delay clauses at the opposite end.
No-Damages-for-Delay Clauses
A typical no-damages-for-delay clause reads as follows:
NO DAMAGES FOR DELAY
The Contractor (Subcontractor) expressly agrees not to make, and hereby waives, any claim for damages on account of any delay, obstruction or hindrance for any cause whatsoever, including but not limited to the aforesaid causes, and agrees that its sole right and remedy in the case of any delay . . . shall be an extension of the time fixed for completion of the Work.
Under these provisions, the contractor’s or subcontractor’s relief in the event of delay “for any cause whatsoever” is limited to an extension of contract time for whatever period the delay can be shown to have extended overall contract performance. There is no cost adjustment. Taken literally, this clause means that the contractor receives no relief other than a time extension even in instances where the owner’s acts or failure to act, including the owner’s negligence, caused the delay. This provision is a classic example of an exculpatory clause.
Judicial Attitudes on No-Damages-for-Delay Clauses
Judicial response to no-damages-for-delay clauses has been mixed. Courts in some states are loath to enforce the clause because the contract documents are drafted by the owner and advertised on a “take it or leave it” basis, compelling the contractor to accept the clause or refrain from bidding. Contracts resulting from such bidding documents are called contracts of adhesion. Many feel that such contracts are bargains unfairly struck, particularly when the potential delay may be caused by some act or failure to act on the part of the owner. When the owner’s acts or omissions have been particularly egregious, courts often refuse to enforce the clause. However, this is not universally true. In the state of New York, for example, courts generally enforce no-damages-for-delay clauses on the reasoning that the bidding contractors were aware of the risks imposed by the clause and should have included sufficient contingencies in the bid to cover them. This mindset is the opposite of the thinking behind the differing site conditions clause and similar clauses by which owners try to eliminate large bid contingencies by creating even-handed bidding conditions.
The following cases illustrate the courts’ uneven treatment of this issue. For example, the highest court of the state of New York held that a no-damages-for-delay clause prevented contractor recovery of even those damages caused by the owner’s active interference with the contractor’s work. The contractor completed its contract 28 months later than originally scheduled and attributed the delay to the city’s failure to coordinate its prime contractors and to interference with the sequence and timing of the contractor’s work. A trial court, hearing the contractor’s suit for \$3.3 million, instructed the jury that the contractor could not recover unless the city’s active interference resulted from bad faith or deliberate intent, and the jury denied recovery. The Court of Appeals of New York upheld the trial court’s instructions, describing the no-damages-for-delay clause as “a perfectly common and acceptable business practice” that “clearly, directly and absolutely” barred recovery of delay damages.[2]
Courts in Illinois and Iowa have ruled similarly. In the Illinois case, when the contractor received notice to proceed with construction of a new high school, the site was not ready. After the site became available, the owner began to issue a barrage of change orders that eventually totaled more than \$2.1 million. In a lawsuit filed by the contractor to collect delay damages, an official of the owner testified that in order to avoid cost escalation, the contract had been awarded before all design decisions had been finalized. The architect’s field representatives testified that, although this was new construction, it resembled a remodeling job before it was completed. Nonetheless, the trial court denied recovery of delay damages. In affirming the trial court, the Appellate Court of Illinois held, “If the contract expressly provides for delay or if the right of recovery is expressly limited or precluded, then these provisions will control.” The court further opined:
Lombard’s experience with public construction projects should have enabled it to protect itself from risks by either increasing its bid or negotiating the deletion of this contractual provision…. In any event, the Commission bargained for the right to delay with the insertion of the no-damage provision.[3]
In the Iowa case, a contractor installing lighting and signs on a new interstate highway was not permitted to start work until two years after the contract was awarded because of delays by others in the construction of the highway. The Iowa Department of Transportation paid for cost escalation on certain materials but refused to compensate the contractor for the delay, relying on a no-damages-for-delay clause in the contract. The trial court directed a verdict for the Transportation Department, and the Supreme Court of Iowa upheld the directed verdict. Incredibly, the Supreme Court said:
There was no evidence that 2-year delays were unknown or even that they were uncommon in highway construction.[4]
Other courts have taken a more lenient view. In Missouri, a contract was awarded for the alteration of the superstructures of two bridges. Separate prime contracts had been let for the construction of the bridges’ substructures. The superstructure contractor received notice to proceed eight months before scheduled completion of the substructures, and to comply with the superstructure schedule, the contractor immediately placed mill orders and began steel fabrication. Because of a differing site condition problem, the substructure completion was delayed, and the superstructure contractor was forced to start field work 175 days behind schedule. The owner granted a 175-day time extension but refused to pay additional compensation, relying on a no-damages-for-delay clause included in the contract. At the subsequent trial, it was found that when the notice to proceed was issued, the owner was aware of the differing site condition problem and the likelihood that the substructure contractor would be delayed. A federal district court awarded the superstructure contractor substantial delay damages, and the U.S. Court of Appeals affirmed. Both courts held that active interference is a recognized exception to the enforceability of no-damages-for-delay provisions. The U.S. Court of Appeals further stated that active interference requires a willful bad faith act by the owner, which in this case had occurred because the owner knew that a delay by the substructure contractor was likely but nonetheless issued a notice to proceed to the superstructure contractor.[5]
In a Florida case, the contractor for the construction of a shopping center was delayed because the owner was late in providing necessary drawings and specifications and delayed executing change orders, even though written change authorization was required before the contractor could proceed with the work. The contract contained a no-damages-for-delay clause. The contract was completed behind schedule, and when the owner withheld final payment, the contractor sued for the contract balance plus damages incurred because of the owner-caused delay. In ruling for the contractor, the District Court of Appeal of Florida held that active interference by the project owner is a well-recognized exception to the enforceability of no-damages-for-delay clauses and that the owner’s unreasonable delay in issuing drawings and specifications and executing change orders amounted to active interference.[6]
Contracts With No Provisions for Delays
Some contracts are silent on the issue of damages for delay. They contain no express language that either establishes or denies the contractor’s right to be paid for the extra costs associated with owner-caused delays. Under these circumstances, the only way the contractor can recover the costs and lost time associated with owner-caused delays is through a lawsuit proving breach of contract on the part of the owner. The particular breach that would have to be proved would be the breach of the owner’s implied warranty not to impede or interfere with the contractor’s performance (see Chapter 13). Although a heavy burden, this is a far better situation for the contractor than if the contract contained a no-damages-for-delay clause. Of course, for the contractor, the best contract contains fair and equitable provisions promising compensation for costs and time extension for delays caused by the owner.
Delay in Early Completion Situations
Occasionally, a contractor makes a claim for recovery of extra costs resulting from a suspension, delay, or interruption of work by the owner even though all the contract work is completed by or before the contractually specified completion date. In delay in early completion situations, is the contractor entitled to be paid delay costs?
This question can be illustrated by comparing this situation to one in which the delay causes the contract to be completed after the specified contract completion date (see Figure 16-1). In the first case, after working for 14 months at a pace sufficient to meet contract requirements, the contractor was delayed for four months. Performance then continued at the pre-delay pace, and the project was completed four months late. Without the delay, the project would have been completed on time. If the delay was caused by the owner and the contract does not contain a no-damages-for-delay clause, the contractor is entitled to the extra costs for the four-month delay as well as a four-month time extension.
In the second case, the contractor worked 14 months at a pace faster than that required to meet the 24-month contractual requirement when the four-month delay occurred. Following the delay, the contractor progressed at this same pace for four additional months and finished the contract in 22 months, two months early. In this situation, it is more difficult for the contractor to sustain a claim for damages. Many owners take the position that, since the contractor finished the contract early, there was no damage caused by the delay, and, thus, the contractor is not entitled to either a time extension or extra costs. Presumably, these owners consider that the contractor’s bid was based, or should have been based, on taking the full allowable time for contract completion and, since performance was not delayed beyond the time allowed for completion, the contractor is due nothing.
The weakness of this position is that the contractor accepted all risk of performance of the contract and, in the absence of an owner-caused delay, would be liable for all extra time-related costs if the contract was not finished on time as well as for any contractually mandated liquidated damages. It cannot then reasonably be argued that the contractor should not also be entitled to save costs by finishing the work earlier than required by the contract if able to do so. Therefore, the owner causing a four-month delay is liable for the resulting extra costs to the contractor even though the contractor finishes the contract work early.
In this case, the contractor also should have been given a four-month extension of time at the conclusion of the delay, extending the time allowed for contract performance to 28 months after notice to proceed. Had the contract been extended in this manner, the contractor finished six months early, just as would have been the case if there had been no owner-caused delay. If not prohibited from doing so by explicit contract language, the contractor has the right to complete the contract early and, if prevented from doing so by an owner-caused four-month delay, as in this case, is entitled to be paid any extra incurred costs due to the delay.
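The arithmetic of the two cases compared above can be sketched as follows. The monthly time-related cost is a hypothetical figure; the month counts follow the Figure 16-1 scenarios:

```python
def delay_outcome(contract_months, unimpeded_finish, delay_months, monthly_time_cost):
    """Compare the actual finish (after an owner-caused delay) against the
    contract deadline. Under the principle discussed in the text, the
    extension and delay damages track the delay itself, regardless of
    whether the project still finishes within the contract time."""
    actual_finish = unimpeded_finish + delay_months
    return {
        "actual_finish": actual_finish,
        "months_late_vs_contract": max(0, actual_finish - contract_months),
        "extension_due": delay_months,
        "delay_damages": delay_months * monthly_time_cost,
    }

# Case 1: pace just meets the 24-month deadline; a 4-month delay makes
# the job finish in month 28, four months late.
print(delay_outcome(24, 24, 4, 50_000))
# Case 2: the faster pace would have finished in month 18; the same
# 4-month delay still leaves the job done in month 22, two months early,
# yet the contractor incurred the same 4 months of time-related cost.
print(delay_outcome(24, 18, 4, 50_000))
```

The sketch highlights the point of the early-completion principle: the damages figure is identical in both cases because it is driven by the delay, not by the relationship of the finish date to the contract deadline.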
Federal case law and most state law is highly supportive of the preceding principle. The Armed Services Board of Contract Appeals ruled in a 1982 case that a contractor had the right to complete the contract ahead of schedule and the government was liable for preventing or hindering early completion. A contract for the renovation of military buildings provided that the government was to arrange access to the individual buildings within two weeks of the contractor’s request for access. The government failed to provide access within the time specified for several of the buildings. Even though completing the project in less than the contractually stipulated time, the contractor submitted a claim for delay damages. The board said:
Barring express restrictions in the contract to the contrary, the construction contractor has the right to proceed according to his own job capabilities at a better rate of progress than represented by his own schedule. The government may not hinder or prevent earlier completion without incurring liability.[7]
In another federal case, an excavation contractor had submitted a schedule showing project completion in February, indicating the intention to work under winter conditions. However, the contractor’s in-house schedule, based on more optimistic production, indicated much earlier project completion. The contractor was achieving its in-house schedule, but when the quantity of unclassified material to be excavated overran the estimated bid quantity by 41%, the contractor was prevented from completing excavation before the winter season, forcing it to shut down until spring. The government denied the contractor’s claim for delay damages on the grounds that the submitted schedule indicated working under winter conditions. The U.S. Court of Claims (now the United States Court of Federal Claims) held that it did not matter that the contractor had not informed the government of its intended schedule. The court stated:
There is no incentive for a contractor to submit projections reflecting an early completion date. The government bases its progress payments on the amount of work completed each month, relative to the contractor’s proposed progress charts. A contractor which submits proposed progress charts using all the time in the contract, and which demonstrates that work is moving along ahead of schedule will receive full and timely payments. If such a contractor falls behind its true intended schedule, i.e., its accelerated schedule, it will still receive full and timely progress payments, so long as it does not fall behind the progress schedule which it submitted to the government.
On the other hand, if a contractor which intended to finish early reflected such intention in its proposed progress charts, it would have to meet that accelerated schedule in order to receive full and timely progress payments; any slowdown might deprive the contractor of such payments even if the contractor is performing efficiently enough to finish within the time allotted in the contract. In short, a contractor cannot lose when it projects that it will use all the time allowed, but it can be hurt by projecting early completion.[8]
Owners have more difficulty understanding their liability when they are not aware that the contractor intends to finish the work early. Contractors are therefore well advised to put the owner formally on notice whenever they plan to complete the contract work earlier than required by the contract, even though, as the preceding Court of Claims decision demonstrates, there is no contractual requirement to do so. Although not stated in the court decision, even if a contractor informs the owner of its intent to finish early, it retains the contractual right to revert to the original completion date if future events should force a change in plan.
There are exceptions to these rules. Some contracts contain explicit provisions that, although not prohibiting early completion, make clear that the owner will only be responsible for otherwise compensable delay costs or time extensions for delays that extend contract performance beyond the contractually stipulated date. For instance, contracts for subway construction in Los Angeles and a tunnel contract on the Boston Harbor Project in Massachusetts contain contract provisions to this effect. The language in the Boston Harbor Project reads as follows:
An adjustment in Contract Time will be based solely upon net increases in the time required for the performance for completion of parts of the Work controlling achievement of the corresponding Contract Time(s) (Critical Path). However, even if the time required for the performance for completion of controlling parts of the Work is extended, an extension in Contract Time will not be granted until all of the available Total Float is consumed and performance for completion of controlling Work necessarily extends beyond the Contract Time.[9]
The contract separately provided that without an extension of contract time, there would be no extra payment for time-related costs.
When provisions of this kind are included in the contract, they will be enforced and the contractor will receive no compensation for delay damages when completing the contract early.
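Under a float-absorption provision like the Boston Harbor language quoted above, the adjustment reduces to simple arithmetic: a critical-path delay first consumes the available Total Float, and only the excess earns a time extension (and, under the companion provision, any time-related payment). A minimal sketch, with hypothetical day counts:

```python
def time_extension(critical_delay_days, total_float_days):
    """Days of contract time extension under a total-float-absorption
    clause: the delay on the controlling (critical-path) work must first
    consume all available Total Float; only the remainder extends the
    Contract Time. Without an extension, no delay costs are payable."""
    return max(0, critical_delay_days - total_float_days)

# A 60-day critical-path delay against 90 days of available float:
# the float absorbs it all, so no extension and no delay payment.
print(time_extension(60, 90))   # 0
# A 120-day delay against the same 90 days of float: 30-day extension.
print(time_extension(120, 90))  # 30
```

The design of such clauses effectively assigns ownership of the schedule float to the owner, which is why, as the text notes, a contractor finishing early under them recovers nothing for delay.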
Causes for Delays and Suspensions of Work
What are the root causes of delays and suspensions of work? What causal events seem to occur again and again? The following are typical examples:
Defective Specifications
One of the most common causes of delay is defective specifications, for which the owner is liable under the Spearin Doctrine. When the drawings and specifications contain errors or omissions, costly delays often result: first, in attempting to comply with the erroneous drawings and specifications and, second, in waiting for the errors to be corrected and revised drawings and specifications to be issued.
Site Availability Problems
Another common cause of delay is lack of site availability at the time the notice to proceed is issued. Unless the contract provides otherwise, the contractor is entitled to the full use of the site at the time of notice to proceed. If the site is not available at that time, the contractor may be delayed. Also, an owner’s failure to provide a reasonable means of access to the work or interruption of access previously provided may delay the contractor.
Changes and Differing Site Conditions
Delays are also caused by changes directed by the owner, including changes because of problems associated with encountering differing site conditions. Just the requirement to perform added work may delay completion of the contract, and additional time is often lost waiting for the architect or engineer to revise the drawings and specifications when changes are required. This is particularly true when differing site conditions are encountered.
Owner’s Failure to Act Administratively
The owner may delay the contractor by failing to act or by acting in a dilatory manner administratively. The contractor is entitled to expect reasonable promptness in performance of contractual acts required of the owner, such as approvals of shop drawings and so on. If the owner does not cooperate, the contractor is delayed. Problems also arise when the contractor needs additional information, instructions, or a directive to proceed in connection with changes or differing site conditions, and the owner either refuses or is unreasonably slow in providing the needed information, thus delaying the contractor.
Case law decisions previously cited in Chapters 13, 14, and 15 illustrate the courts’ handling of such causes of delay.
Notice Requirements
The federal suspension of work clause and the clauses in most other construction contracts that promise relief for the contractor in the event of suspensions, delays, or interruptions contain a stringent notice requirement. Several aspects of this requirement are important.
Purpose of the Notice Requirement
Usually the contractor is required to furnish written notice to the owner within a stated period of time following any event that the contractor contends has caused or will cause a delay. Without such notice, the owner may not know that some act or failure to act is delaying the contractor. The requirement is reasonable, and failure on the contractor’s part to comply with it may result in waiver of entitlement to relief.
A secondary reason for notice is to establish a start date for the delay. This reason applies to all delays, both compensable delays caused by the owner and excusable delays. Although not appreciated at the time, in case of dispute, the time extent of many delays may have to be decided by a court or arbitrator years after the event, and a record establishing the start date can be invaluable.
Case law decisions denying contractors’ recovery due to lack of notice are legion. For instance, the U.S. Court of Appeals denied any recovery for extra costs when a contractor encountered more subsurface rock than indicated in the contract documents because it failed to notify the owner within five days of any event that could give rise to a claim for additional compensation or an extension of time, as the contract required. The contractor’s claim, without prior notice of claim, was filed three months after completion of the work where the rock was encountered.[10]
Similarly, the Armed Services Board of Contract Appeals denied a contractor’s claim for extra compensation due to the poor condition of exterior surfaces on Navy housing units that were being painted. The claim was not raised until after the contractor had applied primer and finished coats to the surfaces. Once the surface had been primed and painted, there was no way for the government to evaluate or verify the contractor’s allegations.[11]
Constructive Notice
In some circumstances, the owner may be held to have received constructive notice of delay. For instance, if an act of God shuts down the work or the owner issues a written directive to suspend all work, the owner is presumed to be aware of the associated delay. Constructive notice means that, even though the owner has not been formally notified, the owner knows that the work is being delayed.
In Ohio, a contractor on a sewer construction project failed to give the owner notice when differing site conditions were encountered. The contractor had bid the project on the basis that it would be possible to bore a tunnel and jack the sewer pipe into place for most of the job, which would have been possible according to the soil boring logs contained in the contract documents. During the work, saturated silty sand was encountered, which had not been indicated in the boring logs, preventing the contractor from using the jacked-pipe method of construction. A far more expensive open-cut method was required. The owner’s representatives were present at the site throughout performance and were aware of the soil conditions encountered. Additionally, many meetings were held to discuss the problem, and extensive written correspondence passed between the contractor and the owner.
When the contractor submitted the claim under the differing site condition clause, the owner denied the claim on the grounds that timely notice had not been given as required by the contract. The Court of Appeals of Ohio ruled that the contractor’s claim was not barred by failure to give written notice because the owner through its on-site representatives knew of the conditions, which served as constructive notice of the situation. In the words of the court:
There is no reason to deny the claim for lack of written notice if the District was aware of differing soil conditions throughout the job and had a proper opportunity to investigate and act on its knowledge, as a purpose of formal notice would thereby have been fulfilled.[12]
Although most courts would probably rule as the Ohio court did in similar circumstances, the contractor should always promptly give written notice of a delay to the owner.
Terminations
There is an obvious difference between terminations and suspensions or delays. Suspensions or delays mean a slowing of work or a cessation of work that is temporary in nature. However, terminations mean the cessation is permanent. Some construction contracts contain provisions where, under circumstances stated in the contract, both the owner and the contractor may terminate the contract, but the following discussion refers only to situations in which the contract is unilaterally terminated by the owner.
Requirement for an Enabling Clause
The owner’s right to terminate the contract depends on the existence of a specific clause in the contract giving the owner that right. Practically all construction contracts contain clauses permitting the owner to terminate the contract when the contractor is not meeting the contract requirements. Such terminations are called default terminations. Today, most contracts also contain a clause permitting termination for the convenience of the owner. In both cases, specific contract clauses establish the owner’s right to take the termination action.
Default Terminations
The federal contract default termination clause provides in pertinent part:
DEFAULT (FIXED-PRICE CONSTRUCTION)
1. If the Contractor refuses or fails to prosecute the work or any separable part, with the diligence that will insure its completion within the time specified in this contract including any extension, or fails to complete the work within this time, the Government may, by written notice to the Contractor, terminate the right to proceed with the work (or the separable part of the work) that has been delayed. In this event, the Government may take over the work and complete it by contract or otherwise, and may take possession of and use any materials, appliances, and plant on the work site necessary for completing the work. The Contractor and its sureties shall be liable for any damage to the Government resulting from the Contractor’s refusal or failure to complete the work within the specified time, whether or not the Contractor’s right to proceed with the work is terminated. This liability includes any increased costs incurred by the Government in completing the work….[13]
This contract language provides very strong rights to the government in order to protect the public interest when a contractor fails to meet the obligations of the contract. The contractor loses any further right to proceed and, together with the surety, is liable for all excess costs that the government may incur in completing the contract. Default termination language in other contracts contains similar provisions.
The consequences of default terminations are so severe that this step should be taken only in extreme situations. The attitude of our federal courts on this point is clear in the following citations from typical case law:
Termination for default is a drastic action which should only be imposed on the basis of solid evidence.[14]
It should be observed that terminations for default are a harsh measure and being a species of forfeiture, they are strictly construed.[15]
Convenience Terminations
The federal fixed-priced contract termination-for-convenience clause reads as follows:
TERMINATION FOR CONVENIENCE OF THE GOVERNMENT (FIXED-PRICE)
1. The Government may terminate performance of work under this contract in whole or, from time to time, in part if the Contracting Officer determines that a termination is in the Government’s interest. The Contracting Officer shall terminate by delivering to the Contractor a Notice of Termination specifying the extent of termination and the effective date.
2. After receipt of a Notice of Termination, and except as directed by the Contracting Officer, the Contractor shall immediately proceed with the following obligations, regardless of any delay in determining or adjusting any amounts due under this clause:
1. Stop work as specified in the notice.
2. Place no further subcontracts or orders (referred to as subcontracts in this clause) for materials, services, or facilities, except as necessary to complete the continued portion of the contract.
3. Terminate all subcontracts to the extent they relate to the work terminated.
4. Assign to the Government, as directed by the Contracting Officer, all right, title, and interest of the Contractor under the subcontracts terminated, in which case the Government shall have the right to settle or to pay any termination settlement proposal arising out of those terminations.
5. With approval or ratification to the extent required by the Contracting Officer, settle all outstanding liabilities and termination settlement proposals arising from the termination of subcontracts; the approval or ratification will be final for purposes of this clause.
6. As directed by the Contracting Officer, transfer title and deliver to the Government (i) the fabricated or unfabricated parts, work in process, completed work, supplies, and other material produced or acquired for the work terminated, and (ii) the completed or partially completed plans, drawings, information, and other property that, if the contract had been completed, would be required to be furnished to the Government.
7. Complete performance of the work not terminated.
8. Take any action that may be necessary, or that the Contracting Officer may direct, for the protection and preservation of the property related to this contract that is in the possession of the Contractor and in which the Government has or may acquire an interest.
9. Use its best efforts to sell, as directed or authorized by the Contracting Officer, any property of the types referred to in subparagraph (b)(6) of this clause; provided, however, that the Contractor (i) is not required to extend credit to any purchaser and (ii) may acquire the property under the conditions prescribed by, and at prices approved by, the Contracting Officer. The proceeds of any transfer or disposition will be applied to reduce any payments to be made by the Government under this contract, credited to the price or cost of the work or paid in any other manner directed by the Contracting Officer….[16]
The genesis of the termination-for-convenience clause dates back to the end of the Civil War, when the cessation of hostilities left the government contractually bound to pay for supplies and equipment that were no longer needed. At that time, the general purpose of the clause was to permit the government to stop contract performance when a major change in circumstances obviated the need for further performance. Since the clause first appeared in federal contracts, a number of federal court and board decisions have broadened its use to the point of permitting the government to terminate a contract for practically any reason, provided that the government acts in good faith. More recently, several federal court and board decisions have been more restrictive to prevent the contracting officer’s abuse of discretion when invoking the clause. In most instances today, the clause is invoked for legitimate reasons, and abuse-of-discretion cases are relatively rare.
Further actions that the contractor should take when the clause is invoked are fully spelled out in succeeding paragraphs of the federal clause (not cited here). A procedure is established for the contractor to make a monetary claim to the government for an equitable adjustment to settle the contract fairly. Generally speaking, such termination settlements reimburse the contractor for all costs incurred, including settlement costs with subcontractors and suppliers plus a reasonable profit thereon. Anticipated profit on the unperformed terminated work is not allowed.
Termination-for-convenience clauses in other contracts may or may not follow the line of the federal clause. A prudent contractor should be particularly interested in the provisions in these clauses governing how the contract will be settled in a termination-for-convenience situation.
Conclusion
Closely related to delays, suspensions of work, and terminations are the subjects of liquidated damages, force majeure, and time extensions. These topics are discussed in the next chapter.
Questions and Problems
1. Discuss the popular view and the judicial view of the meaning of the words “time is of the essence” in a construction contract, subcontract, or purchase order.
2. What is the prudent view of time-is-of-the-essence language in a contract with respect to the following:
1. Your contractual commitments to others; and
2. Others’ contractual commitments to you.
3. Explain two differences between a “delay” and a “suspension” of work.
4. In what two ways does delay to a construction contract increase costs for both the contractor and owner?
5. Explain the difference between an excusable delay and a compensable delay. State some common examples of excusable delay.
6. Explain the principal provisions of the federal contract suspension of work clause.
7. Explain the principal provisions of a typical no-damages-for-delay clause.
8. Are no-damages-for-delay clauses universally enforceable?
9. What is the reasoning of courts that
1. Refuse to enforce no-damages-for-delay clauses?
2. Do enforce no-damages-for-delay clauses?
10. When the contract is silent on the subject of damages for delay, what course of action must a contractor follow to recover time and money lost caused by an owner’s delay when the owner refuses to grant additional time and money?
11. Explain why a contractor is entitled to be paid for extra costs suffered because of an owner-caused delay when, in spite of the delay, the contractor finishes the contract on or before the contractually stipulated date. Under what circumstances is the contractor not entitled to be paid such costs when the contract is finished on or before the specified date?
12. What can a contractor do in advance to enhance the chances of recovering costs incurred because of owner-caused delays when the contractor plans to finish before the contractually specified date?
13. What are the four general root causes of delay discussed in this chapter?
14. Explain two reasons for the importance of prompt written notice to the owner when the contractor has been delayed. Does the necessity for prompt written notice occur only when the owner is delaying the contractor or for excusable delays as well?
15. Explain constructive notice. Does the fact that constructive notice may exist mean that the contractor should not also give prompt written notice?
16. Explain how a termination differs from a delay or suspension of work.
17. Name and explain the two types of terminations discussed in this chapter. Do they each require an enabling clause in the contract? Does the federal contract contain an enabling clause for each?
18. What was the original reason behind the termination-for-convenience clause in the federal contract?
19. Does current federal contract law allow the government the completely unfettered right to invoke the termination-for-convenience clause?
20. What should be the primary concern of the contractor concerning termination-for-convenience clauses in contracts other than the federal contract?
1. F.A.R. 52.242-14 48 C.F.R. 52.242-14 (Nov. 1996).
2. Kalisch-Jarcho, Inc. v. City of New York, 448 N.E.2d 413 (N.Y. 1983).
3. M. A. Lombard & Son Co. v. Public Building Commission of Chicago, 428 N.E.2d 889 (Ill. App. 1981).
4. Dickinson Co., Inc. v. Iowa State Dept. of Trans., 300 N.W.2d 112 (Iowa 1981).
5. United States Steel Corp. v. Missouri Pacific Railroad Co., 668 F.2d 435 (8th Cir. 1982).
6. Newberry Square Development Corp. v. Southern Landmark, Inc., 578 So.2d 750 (Fla. App. 1991).
7. Appeal of CWC, Inc., ASBCA No. 26432 (June 29, 1982).
8. Weaver-Bailey Contractors, Inc. v. United States, 19 Cl. Ct. 474 (1990).
9. MWRA Contract CP-151, General Conditions Article 11.12.1.
10. Galien Corp. v. MCI Telecommunications Corp., 12 F.3d 465 (5th Cir. 1994).
11. Appeal of Lamar Construction Co., Inc., ASBCA No. 39593 (Feb. 6, 1992).
12. Roger J. Au & Son, Inc. v. Northeast Ohio Region Sewer District, 504 N.E.2d 1209 (Ohio App. 1986).
13. F.A.R. 52.249-10 48 C.F.R. 52.249-10 (Nov. 1996).
14. Mega Construction Co., Inc. v. United States, 29 Fed. Cl. 396, 414 (1993).
15. Composite Laminates v. United States, 27 Fed. Cl. 310 (1992).
16. F.A.R. 52.249-2 48 C.F.R. 52.249-2 (Nov. 1996).
Key Words and Concepts
• Liquidated damages provisions
• Conceptual basis of liquidated damages
• Liquidated damages provisions are a contract remedy
• Judicial attitude to liquidated damages provisions
• Bonus/penalty clauses
• Force majeure
• Common conditions of force majeure
• Contract relief for force majeure
• Time extensions
• Importance of notice of claim
• Contractor responsibility to prove entitlement
• Owner’s responsibility to act
• No time extension until owner grants it
Suppose that you had hired a contractor to build a home for your family that was to be completed and ready for occupancy by a certain agreed date. Based on this expectation, you sold your previous home and were required to vacate by the agreed date that the new home was to be ready. If the contractor failed to complete the new home by that date, you would not only be greatly inconvenienced but might have to rent temporary accommodations until the new home was ready at totally unexpected additional expense. If an ongoing business property or product manufacturing facility had been involved instead, the inconvenience and monetary loss would be increased enormously.
When something like this occurs, who pays, and how much do they pay? If the delay was not the contractor’s fault, what then? How is this situation handled contractually? These and related questions are answered in this chapter.
Liquidated Damages
Today, most large construction contracts contain liquidated damages provisions stating explicitly that, for each calendar day the contract work remains uncompleted after the final completion date stated in the contract, the contractor shall pay the owner a certain dollar amount stated in the contract. Sometimes a series of dollar amounts are stated, each applying if interim completion dates for separate parts of the contract work called milestone completion dates are not met, in addition to the provision applying to the final completion date. These specified payments are intended as reimbursement for the monetary loss suffered by the owner that was caused by the delay in completion. They are called liquidated damages because they are stated as fixed dollar amounts per day.
Conceptual Basis of Liquidated Damages
The conceptual basis of liquidated damages provisions is that, in many cases, the actual damages that the owner will suffer in the event of late completion are very difficult (if not impossible) to determine at the time the contract is signed. The owner and the contractor therefore agree on a fixed daily dollar amount or, if milestone completion dates are specified, fixed daily dollar amounts that are considered a reasonable measure of the extent to which the owner could be damaged by late completion. In practice, the contractor usually has no input into the determination of the daily dollar amount(s). In the public sector and in much private work, the determination is unilaterally made by the owner, and the contract documents are advertised for bids on a “take-it-or-leave-it” basis.
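The mechanics of such a clause reduce to simple arithmetic: for each contract date, final or milestone, every unexcused calendar day past the required date accrues the stipulated daily amount, and early completion earns nothing unless the contract contains a bonus/penalty clause. A minimal sketch, with purely hypothetical dates and daily rates:

```python
from datetime import date

def liquidated_damages(actual: date, required: date, daily_rate: float) -> float:
    """Damages owed for unexcused late completion of one contract date.

    Each calendar day past the required date accrues the fixed daily rate;
    early or on-time completion owes nothing, since the typical clause
    does not run in reverse.
    """
    days_late = (actual - required).days
    return max(days_late, 0) * daily_rate

# Hypothetical contract: one milestone date plus the final completion date,
# each with its own stipulated daily amount (all figures are illustrative).
schedule = [
    # (required date,    actual date,        daily rate)
    (date(2024, 6, 1),   date(2024, 6, 11),  500.0),    # milestone: 10 days late
    (date(2024, 12, 1),  date(2024, 11, 20), 1500.0),   # final: finished early
]

total = sum(liquidated_damages(actual, required, rate)
            for required, actual, rate in schedule)
print(total)  # 10 days x $500 for the late milestone; nothing for the early finish
```

The assessment is purely mechanical once the rates and dates are fixed, which is exactly why the reasonableness of the daily rate itself becomes the battleground in disputed cases.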
Liquidated Damages Provisions Are a Contract Remedy
Under common law contract principles, a breach of the contract by one party that damages the other entitles the nonbreaching party to the actual monetary damages suffered. Unexcused failure of the contractor to meet the contractually specified completion date clearly is a breach of contract. In the absence of liquidated damages provisions in the contract, the owner would have to itemize the actual damages and present them to the contractor in order to be made whole. If the contractor failed to pay, the owner’s recourse would be to either withhold the amount from money otherwise payable to the contractor or sue, prove the extent of the actual damages, and obtain a judgment compelling the contractor to pay. The liquidated damages provisions relieve the owner from this burden. Their effect is to substitute a contract remedy for a common law breach remedy.
Liquidated Damages Are Not a Penalty
It is important that both parties to the contract realize that liquidated damages are a contractually specified remedy to make the owner whole in the event of late completion. They cannot be properly assessed as a penalty to punish the contractor for some act that displeases the owner or, when not properly due, as pressure to coerce the contractor into a course of action favorable to the owner. In cases of disputed liquidated damages assessments, the courts will not support punitive or coercive motives on the part of the owner.
For instance, the Iowa Supreme Court ruled in 1991 that liquidated damages clauses in three highway contracts were unenforceable penalties. The contractor had entered into three simultaneous contracts for highway resurfacing that each required completion within 40 days and called for liquidated damages of \$400 per day. The contractor finished two of the contracts behind schedule. A county department had withheld a total of \$32,400 in liquidated damages. The contractor filed suit to recover the withheld money, alleging that the imposition of liquidated damages was punitive in nature. The court held that a project owner must be able to show how the daily rate was determined in order to enforce a liquidated damages clause. In this case, the county disregarded its own construction manual, which called for a sliding daily rate based on the total contract price. The three contracts ranged from a contract price of \$37,957 to \$251,696, yet the county assessed a daily rate of \$400 in each case. Further, at the trial, one of the county’s engineers testified, “We wanted the liquidated damages amount to be sufficient to make the contractor aware that we need that project completed.” In the court’s view, this testimony added to the impression that the liquidated damages assessment was intended as a penalty rather than reflecting the level of damages that conceivably could have been suffered from late completion. In ruling for the contractor, the Iowa Supreme Court said:
No witness was called to justify the suggested liquidated damages amount contained in the DOT manual schedule. The county engineer did not conduct studies or present any other data suggesting that the defendants anticipated that the government entities and the public could sustain damages equivalent to the \$400 per day liquidated damages amount contained in each of the three contracts. … Therefore, we conclude that the \$400 per day liquidated damages clause is an unreasonable amount and therefore a penalty that should not be enforced.[1]
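The sliding-rate schedule the county ignored can be pictured as a simple lookup table keyed on contract price. The tiers below are purely illustrative, since the actual DOT manual figures are not reproduced in the text, but they show how such a schedule ties the daily rate to the size of the contract:

```python
# Hypothetical sliding schedule: (contract price ceiling, daily rate).
# These tiers are illustrative only; the actual Iowa DOT manual figures
# are not given in the case discussion.
SLIDING_SCHEDULE = [
    (50_000, 100.0),
    (100_000, 200.0),
    (500_000, 400.0),
    (float("inf"), 800.0),
]

def daily_rate(contract_price: float) -> float:
    """Look up the liquidated-damages daily rate for a given contract price."""
    for ceiling, rate in SLIDING_SCHEDULE:
        if contract_price <= ceiling:
            return rate
    raise ValueError("schedule must end with an open-ended tier")

# The three Iowa contracts ranged from $37,957 to $251,696, yet the county
# assessed a flat $400/day on each; under a sliding schedule the smallest
# contract would have carried a much lower rate.
print(daily_rate(37_957))   # lowest tier under this illustrative schedule
print(daily_rate(251_696))  # a higher tier for the largest contract
```

A schedule of this kind gives the owner exactly what the court demanded: a documented, price-proportioned basis for the daily amount, rather than a flat figure chosen for its deterrent effect.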
Judicial Attitude Toward Liquidated Damages Provisions
The contractor in the previously related case was fortunate. The current judicial attitude toward liquidated damages is to enforce such provisions in the event of unexcused late completion. The owner does not have to prove the amount of damages or that any damages resulted as a consequence of late completion. For every unexcused day of late completion, the owner is generally due the liquidated amount stated in the contract. However, there are exceptions. In addition to overturning improper assessments made for punitive or coercive purposes, courts may overturn liquidated damages assessments if the daily amount stated in the contract does not bear a reasonable relationship to the amount that the owner could be thought to be damaged. The standard of reasonableness is based on whether the daily amount is a reasonable estimate of the extent to which the owner might be damaged by late completion, in light of the level of knowledge possessed by the owner and contractor when the contract was signed.
A typical judicial holding along these lines is exemplified by a 1985 decision of the Corps of Engineers Board of Contract Appeals. A contract for the construction of rest area facilities allowed 90 days for performance and called for liquidated damages of \$143 per day for late completion. The contractor was very late in completing the work, and the government assessed liquidated damages of \$18,447 against a total contract amount of \$29,189. In supporting the contractor’s appeal of this relatively large liquidated damages assessment, the Board of Contract Appeals said:
The Board concludes that the liquidated damages provision in this contract was not based on any reasonable forecast of probable damages that might follow a breach, and therefore that the liquidated damages provision will not be enforced.[2]
Even though payment is sometimes evaded as just explained, contractors are well advised in planning the performance of their contract work to believe that liquidated damages provisions will be enforced.
Bonus/Penalty Clauses
Can the liquidated damages provisions be applied in reverse if the contractor finishes early? If the owner is damaged for every day’s late completion, is there a like benefit from every day’s early completion? Not necessarily. The owner may not have planned on the use of the completed facility until the specified contract completion date and may be unprepared to occupy and use it in the event of early delivery. There are other reasons as well why early completion might not benefit the owner. Therefore, the typical liquidated damages clause cannot be applied in reverse for early completion. However, contracts sometimes contain a bonus/penalty clause that does provide the contractor a monetary benefit for early completion as well as providing for payment to the owner in the event of late completion. Usually, the daily rate for early completion will be less than the rate for late completion. Bonus/penalty clauses are relatively rare in the public sector although quite common in private work.
Force Majeure
In a contractual sense, force majeure means a condition beyond a party’s control. An owner-caused delay would be a condition of force majeure from the standpoint of the contractor, even though the delay was within the control of the owner. On the other hand, inclement weather or a flood are conditions of force majeure from the standpoint of both parties.
Common Conditions of Force Majeure
In addition to owner-caused delays, acts of God, war, riots, labor strikes, inability to obtain critical materials when all proper procurement actions have been taken, and other similar situations are common conditions of force majeure. It should be noted that mere failure of a prime contractor’s subcontractors or material suppliers to perform in a manner that meets the time requirements of the prime contract seldom constitutes a condition of force majeure, since both are under the control of the prime contractor. For a subcontractor or material supplier delay to be considered force majeure, it is necessary for the prime contractor to prove that the inability of the subcontractor or material supplier to perform is caused by conditions not only beyond their control but beyond the control of the prime contractor as well. This is usually a heavy burden of proof.
Contract Relief for Conditions of Force Majeure
Since such conditions are not the contractor’s fault, the contract relief for conditions of force majeure normally is an extension of contract time to avoid the unfair assessment of liquidated damages. The resulting delay is contractually considered excusable. If the contract does not contain an enforceable no-damages-for-delay clause and the condition was caused by the owner, the delay is also compensable, entitling the contractor to both a time extension and additional payment.
Time Extensions
Even though extensions of contract time for conditions of force majeure are promised by the contract, they are far from automatic. The contractor must follow prescribed contract procedures and must prove entitlement to assure that contractually justified time extensions will be forthcoming.
Importance of Notice of Claim
Most contracts contain a provision that the contractor claiming entitlement to a time extension must file notice of claim within a stated number of calendar days after the event giving rise to the claim or waive the right to relief. Although sometimes the owner has constructive notice of the cause of the delay for which the contractor is entitled to a time extension, the importance of the contractor filing time extension claims within the contractually prescribed time cannot be overemphasized.
The owner has no duty to grant a time extension if the contractor has not requested one. Therefore, when the contractor has been delayed, an immediate request for a time extension should be submitted in writing. The initial notice should be followed by a written claim for the number of days that the completion of the contract has been delayed. The claim should be filed as soon as the total extent of the delay can be determined, which is normally shortly after the conclusion of the delay.
The importance of notice is dramatically illustrated by a 1991 decision in which the Supreme Court of Alabama reversed a trial court ruling in favor of a contractor that had been assessed \$85,500 in liquidated damages on a contract for site preparation and road construction for a residential subdivision. The contract called for liquidated damages of \$300 per day for late completion and provided that time extensions for delay beyond the control and without the fault of the contractor would be granted, provided that written requests for time extensions were submitted to the owner’s engineer within 20 days of commencement of the delay. The contract also contained a no-damages-for-delay clause, so the only remedy for delay was an extension of time. The contractor completed the contract 285 days behind schedule but contested the liquidated damages assessment on the grounds that the delay had been caused by the interference of the owner’s separate utility contractor and therefore was excusable. A trial court agreed and remitted the liquidated damages to the contractor.
On appeal, the Supreme Court of Alabama reversed, stating that the contractor was aware of the requirement to submit a written request within 20 days of the event giving rise to a request for an extension of time and had, in fact, complied with that procedure on one occasion when receiving a 45-day extension of time for a separate cause. Holding that the contractor had waived any right to extensions of time, the Supreme Court stated:
The Cove Creek contract provided that time was of the essence and then went on to specify liquidated damages for delay. The contract also contained a provision for extensions, and APAC availed itself of that provision on at least one occasion and received a 45 day extension because of delays caused by R & M. We hold that APAC’s delays were not excusable and that it is bound by the contract and subject to the liquidated damages provisions.[3]
Contractor Responsibility to Prove Entitlement
In any type of claim situation, whether for time, additional contract payment, or both, the contractor-claimant bears the legal burden of proving entitlement under the terms of the contract to whatever is being claimed. For this reason, the contractor must support a time extension claim by showing that delaying events beyond his or her control have consumed part of the time allotted by the contract for performance of the contract work; in other words, by showing that the delaying events have extended contract completion. This is usually done by supporting the claim with a critical path method (CPM) schedule analysis indicating the extent of the overall delay to the project.
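A CPM schedule analysis of this kind boils down to a forward pass through the activity network: compute the earliest project finish with and without the delaying event, and the difference is the number of days of extension the schedule supports. A minimal sketch, using a hypothetical activity network:

```python
# activities: name -> (duration in days, list of predecessor names)
def completion_days(activities: dict[str, tuple[int, list[str]]]) -> int:
    """Forward-pass CPM: earliest project finish in days from the start date."""
    finish: dict[str, int] = {}

    def early_finish(name: str) -> int:
        if name not in finish:
            dur, preds = activities[name]
            start = max((early_finish(p) for p in preds), default=0)
            finish[name] = start + dur
        return finish[name]

    return max(early_finish(a) for a in activities)

# Hypothetical as-planned network (durations and logic are illustrative).
as_planned = {
    "mobilize":   (10, []),
    "excavate":   (30, ["mobilize"]),
    "foundation": (25, ["excavate"]),
    "structure":  (60, ["foundation"]),
}

# As-impacted network: a 20-day owner-caused delay inserted on the critical
# path, ahead of excavation.
as_impacted = dict(as_planned)
as_impacted["delay"] = (20, ["mobilize"])
as_impacted["excavate"] = (30, ["delay"])

extension_due = completion_days(as_impacted) - completion_days(as_planned)
print(extension_due)  # days of extension supported by the schedule analysis
```

Note that only delays on the critical path extend completion; a delaying event absorbed by float in a non-critical chain would produce a difference of zero, which is exactly why a bare list of delaying events, without the network analysis, does not carry the contractor’s burden of proof.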
Owner’s Responsibility and Contractor Time-Extension Requests
Contractors bear the heavy contractual burden of performing the required contract work within the contract time period. They are also contractually entitled to have the contract time extended for an excusable or compensable delay within a reasonable time after the delay ends, so that they can properly and efficiently plan the remaining contract work. To secure this right, it is incumbent upon the contractor at the conclusion of each significant delay to make a CPM schedule analysis and to request a discrete number of days’ time extension in accordance with the result of the analysis, which should also be submitted to the owner in support of the contractor’s claim. The majority of courts today support this approach and also hold that if an owner receives a time extension claim properly supported in this manner, the owner has a duty to grant the time extension in a timely manner rather than waiting until the end of contract performance to grant it. Failure of the owner to grant a properly supported request for an extension of contract time, or failure to grant it in a timely manner, is a breach of the contract.
Granting of Time Extensions
The contractor may not safely assume that a time extension will be granted simply because it has been properly claimed and the contractor believes it to be due. A time extension can only be granted by a formal change to the contract executed by the owner. Regardless of the merits of the contractor’s claim, the contract date is not extended until and unless the owner has formally notified the contractor by a written change to the contract that the contract has been extended by a stated number of calendar days to the new date stated. An oral intimation that the contract will be extended or may be extended if the contractor “needs it later” to avoid being assessed liquidated damages is not sufficient or contractually proper. Once the owner has had sufficient time to act following receipt of the contractor’s claim for a time extension and a change to the contract has not been initiated, the contractor must necessarily assume that the claim has been denied.
If the owner fails to act on a properly supported contractor time extension claim or denies it directly, the contractor’s proper course of action is clear. First, a notice should be filed in writing protesting the denial or lack of timely action as the case may be, and, second, the contractor should take all possible and reasonable action to meet the unextended contract completion date. By failing to prosecute the work in a manner that assures project completion by the then-existing completion date, the contractor is exposed to a breach of contract determination by the owner that could result in the contract being terminated for default. Although a court may eventually declare the default termination to be improper, the long legal battle that would ensue is an expensive burden. The contractor’s proper contractual position after protesting in writing is to attempt to meet the unextended contract completion date and pursue a separate remedy under the doctrine of constructive acceleration, which is discussed in Chapter 19.
Conclusion
This chapter discussed the general concepts of liquidated damages, force majeure, and contract time extensions. Contractor time extension claims, owner entitlement to liquidated damages, and contractor claims for monetary damages for compensable delays are closely related issues and must normally be determined by a structured CPM schedule analysis. The following chapter explains how the owner’s and contractor’s respective liabilities and entitlements are sorted out in practice by this method.
Questions and Problems
1. Is an owner’s right to assess and collect liquidated damages an express or an implied right under a construction contract? Who ordinarily determines the daily amount? What is the general attitude of courts concerning the determination of this amount?
2. Can liquidated damages provisions be properly used as a means of intimidation or coercion? Can they be properly assessed as a penalty?
3. Under typical contract provisions, can liquidated damages be applied in reverse for early completion? What are the provisions of a bonus/penalty clause?
4. What is a condition of force majeure? From the contractor’s standpoint, can such a condition be caused by the owner? Why is the mere failure of a material supplier or a subcontractor to perform often not a condition of force majeure? What relief does a contractor usually receive for costs incurred because of delays caused by an act of God? Same question when the delay is caused by an act of the owner?
5. When a delay has occurred because of an act of God or some other cause beyond the contractor’s control, does the owner have the duty automatically to grant an extension of time? If not, what triggers the owner’s duty to act?
6. When an owner does grant an extension of time, how is this act accomplished contractually?
7. What are two separate aspects of an owner’s duty when the contractor presents a properly supported and justified claim for an extension of time during contract performance? Is the owner’s duty met by granting the extension of time after the contract work is completed if the contractor “needs” the extension of time to avoid the assessment of liquidated damages? Why is timeliness in the owner’s granting a time extension an important factor from the contractor’s standpoint?
8. What is the contractor’s proper course of action when justified extensions of time have been properly claimed and the owner refuses to grant them and/or simply fails to act on them? Why? What is the contractor’s proper contractual position in this situation?
1. Rohlin Construction Co., Inc. v. City of Hinton, 476 N.W.2d 78 (Iowa 1991).
2. Appeal of Great Western Utility Corp., ENGBCA No. 4934 (Apr. 5, 1985).
3. Cove Creek Development Corp. v. APAC-Alabama, Inc., 588 So.2d 458 (Ala. 1991). | textbooks/biz/Business/Advanced_Business/Construction_Contracting_-_Business_and_Legal_Principles/1.17%3A_Liquidated_Damages_Force_Majeure_and_Time_Extensions.txt |
Key Words and Concepts
• Work activities
• Dependency ties
• As-built network
• As-planned network
• Schedule update network
• Owner responsibility delays
• Contractor responsibility delays
• Discrete events
• Burden of performance
• Excusable delays
• Incorporation of delays into network
• Forward-looking analysis
• Retrospective analysis
• Intermediate impact analysis
• Concurrent delay
• Four principles governing delay impact analysis
• Delay analysis for single-path projects
• Delay analysis for multi-path (concurrent path) projects
• Delay impact analysis for complex projects with several interconnected concurrent paths
• Float time
• Owner liability for delay damages
• Contractor-caused delay
• Contractor liability for liquidated damages
• Contractor entitlement to an extension of time
• Damages offset not necessarily day-for-day
This chapter explains the principles and procedure by which contractor liability for liquidated or actual damages and owner liability for monetary damages for owner-caused delay are determined in practice. Delays usually occur during performance of the typical contract, some within the control of the contractor, some caused by the owner or for which the owner is otherwise liable, and some that are beyond the control of either party, for which neither party is liable. All three commonly occur at various times, generally affecting only one part of the project, although some may affect the entire project.
When the owner and contractor disagree on questions of extensions of time, liquidated damages, or owner-caused delay damages, the first problem facing courts and other dispute resolution bodies is determining delay responsibility. Once the individual delays and the party responsible for them have been identified, the second problem is determining the consequences of the delay(s). This latter problem is usually solved by performance of a critical path method (CPM) schedule delay impact analysis. The principles and procedure for performing such an analysis follow.
Preliminary Points and Definitions
Construction contracts today commonly provide that the performance of the work be planned and monitored by the use of the critical path method. The contractor is required to submit a CPM network schedule that depicts the construction work activity sequence and the beginning and ending dates of all work activities as the contractor intends to perform them. Such an initial schedule is often referred to as the baseline schedule. As the work progresses, the contractor is usually required to update the baseline schedule periodically (monthly or quarterly), reflecting the actual beginning and ending dates for all completed work activities, the actual start and estimated completion dates for all activities in progress, and the anticipated start and end dates for all remaining work activities. Such schedules provide a convenient method by which the impact of delays can be determined. (If you are not familiar with the CPM scheduling method, refer to any of the numerous available texts on that subject for details.) In the following discussion, only general concepts will be presented to illustrate the methods of delay impact analysis.
As-Planned, As-Built, and Schedule Update Networks
A CPM network is a graphic depiction in which the various physical work items of the project are represented as a sequential arrangement of work activities joined together by dependency ties. The dependency ties indicate the sequence in which the activities must be performed as well as the requirement that immediately preceding activities must be completed prior to the start of following activities on the same path of work or to the start of following activities on parallel paths of work.
For instance, consider the simplified generic CPM network schedule illustrated by Figure 18-1 in which contract work activities are represented as time-scaled solid arrows A, B, C, D, E, F, G, and H. The time duration of each in months is indicated directly above each work activity arrow.
Such a CPM network schedule implies that the following logic discipline be maintained:
• Along each line of consecutive work activity arrows, each predecessor work activity must be completed before the successor work activity can commence.
• The dependency ties that exist between separated work activity arrows are established by the dotted lines between the work activity arrows, either those included in a line of separated arrows such as between work activities C and D and between work activities G and H, or between work activity arrows on parallel lines such as between work activities A and E, F and D, and between work activities D and H.
• The direction of the arrowhead included in dependency ties determines which of the connected work activities is the precedent activity and which is the successor activity that cannot start until the completion of its predecessor. In Figure 18-1, work activity E cannot start until work activity A has been completed, work activity D cannot start until work activities C and F have been completed, and work activity H cannot start until work activities D and G have been completed.
• The length of time that completion of a work activity could be delayed without delaying the completion of the project is termed float time. In Figure 18-1, work activities B and C together have 2 months of float and work activity G has 3 months of float.
• Early start and early finish dates are the earliest possible points in time that a work activity can start and finish, while late start and late finish dates are the latest points in time that a work activity can start and finish without delaying the completion of the project. For instance, the early and late start dates for work activity B would be 3 months and 5 months after NTP, and its early and late finish dates 8 months and 10 months after NTP, respectively. The early and late start dates for work activity C would be 8 months and 10 months after NTP, and its early and late finish dates 10 months and 12 months after NTP. Similarly, the early and late start dates for work activity G would be 12 and 15 months after NTP, and its early and late finish dates 16 and 19 months after NTP.
• The critical path is a path of work activities that contains no float and thus controls the completion date of the project. In Figure 18-1, the sequence of work activities A, E, F, D, and H is the critical path. Some networks may contain more than one critical path.
• Early and late start dates and early and late finish dates are the same points in time for work activities on the critical path (i.e., activities that contain no float).
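The logic discipline above can be sketched as a small forward/backward CPM pass. In the sketch below, the durations and dependency ties are reconstructed from the dates stated in the text for Figure 18-1; the individual durations of activities E and F (4 and 5 months here) are an assumption, since only their combined 9-month duration can be derived from the stated dates.

```python
# Hypothetical reconstruction of the Figure 18-1 network.
# Durations in months; the E/F split is an assumption (see lead-in).
durations = {"A": 3, "B": 5, "C": 2, "D": 7, "E": 4, "F": 5, "G": 4, "H": 6}
predecessors = {                      # dependency ties described in the text
    "A": [], "B": ["A"], "C": ["B"], "D": ["C", "F"],
    "E": ["A"], "F": ["E"], "G": ["F"], "H": ["D", "G"],
}

def cpm(durations, predecessors):
    # Forward pass: earliest finish of each activity via memoized recursion.
    early_finish = {}
    def ef(act):
        if act not in early_finish:
            es = max((ef(p) for p in predecessors[act]), default=0)
            early_finish[act] = es + durations[act]
        return early_finish[act]
    completion = max(ef(a) for a in durations)

    # Backward pass: latest start of each activity without delaying completion.
    successors = {a: [] for a in durations}
    for act, preds in predecessors.items():
        for p in preds:
            successors[p].append(act)
    late_start = {}
    def ls(act):
        if act not in late_start:
            lf = min((ls(s) for s in successors[act]), default=completion)
            late_start[act] = lf - durations[act]
        return late_start[act]
    for a in durations:
        ls(a)

    # Float = late start minus early start; zero-float activities are critical.
    float_time = {a: late_start[a] - (early_finish[a] - durations[a])
                  for a in durations}
    critical = [a for a in durations if float_time[a] == 0]
    return completion, float_time, critical

completion, float_time, critical = cpm(durations, predecessors)
print(completion)   # 25, the baseline completion noted in the text
print(critical)     # ['A', 'D', 'E', 'F', 'H'], the critical path activities
```

Running the pass reproduces the values stated above: 2 months of float for B and C, 3 months for G, and the critical path A, E, F, D, H.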
When a network schedule such as Figure 18-1 is created before the commencement of project work as a planning tool, or is submitted to the owner by the contractor before contract work in compliance with project specification scheduling requirements, it is referred to as an as-planned or baseline schedule network reflecting the contractor’s plan for the accomplishment of the work. As the work progresses, such a schedule is monitored by periodically updating it to reflect the completion status of the project at the points in time of the periodic updates. Such a network would be referred to as a schedule update network.
For instance, if the generic schedule network shown in Figure 18-1 had been submitted by a contractor as the as-planned or baseline schedule, the schedule update after seven months of contract performance might look as indicated in Figure 18-2. Such an update schedule reflects actual as-built performance up to the date of the update documented by project records and as-planned performance for the balance of the work to project completion. At the time of the update in this hypothetical illustration, activity A had been completed and activities B and E partially completed. The completion dates for activities B and E necessarily would have to be estimated based on experience to date while the balance of the work is shown in accordance with the baseline schedule indicating the contractor’s belief that, in spite of a slow start, the balance of the work will be accomplished in accordance with the original plan. The updated schedule network shown in Figure 18-2 indicates that at the time of the update, the contractor was two months behind the baseline schedule. Subsequent periodic schedule updates as the work progressed would normally be carried out until the project was completed, each update regularly reflecting as-built performance up to the date of the update and as-planned performance thereafter to project completion.
A final update made at the point in time of project completion would reflect total as-built performance as documented in project records. Such an as-built network for the project represented by Figures 18-1 and 18-2 could well turn out as indicated in Figure 18-3. By improving performance in the completion of activity B and for all of activities D, F, and H, the contractor in this hypothetical example regained time and completed the project one month earlier than the 25-month completion time of the baseline schedule.
It should be noted that the hypothetical schedule networks shown in Figures 18-1, 18-2, and 18-3 are simple generic networks utilized here for purposes of illustration only. Actual project schedule networks are normally far more complex and contain many more work activities and dependency ties as well as numerous paths to project completion.
Owner Responsibility Delays
Delays or extensions of work activity performance exclusively caused by the owner or otherwise the contractual responsibility of the owner are compensable delays. These kinds of delays entitle the contractor to additional payment for any costs incurred as a result of the delays. The contractor is also entitled to an extension of contract performance time if the delays cause the overall project performance duration to be extended. For purposes of identification of this class of delay, the symbol “ORD” (owner responsibility delay) is used in the schedule networks that follow in this chapter.
Contractor Responsibility Delays
Delays or extensions of work activity performance that are exclusively the fault of the contractor or otherwise the contractual responsibility of the contractor do not entitle the contractor to extra compensation or an extension of contract performance time and, if the total time allowed for contract completion is exceeded, may result in the contractor becoming liable for the payment of liquidated damages. For purposes of the analysis in this chapter, this class of delay must be subdivided into two subclasses:
• The first subclass consists of delays that can be easily recognized as discrete events or happenings. Examples would be clearly identified contract breaches such as failure to make required submittals by contractually stipulated dates or any other identified delay caused by or the contractual responsibility of the contractor including subcontractors and suppliers. For purposes of identification in the networks that follow, the symbol “CRD” (contractor responsibility delay) will be used for this subclass of contractor responsibility delay.
• The second subclass arises from the contractor’s failure to perform the actual work items of the contract at a rate sufficient to complete the contract on time. This subclass of contractor delay is not directly identified by a symbol on the network schedules that follow. The contractor’s burden of performance is to complete all of the contract work in accordance with the requirements of the technical specifications within the time limits specified in the contract. To meet this burden, the contractor is free to choose the means, methods, techniques, sequences, and procedures of accomplishing the work. Contractor failure to meet the burden of performance, unless interfered with or delayed by others, may entitle the owner to liquidated damages, or actual provable damages in contracts that do not contain a liquidated damages provision. However, the contractor’s failure to accomplish a given item of work (i.e., a work activity) within the time limits of an as-planned or intended project schedule does not in and of itself mean that the contractor has failed to meet the burden of performance or has incurred liability to pay damages. Unless the contract contains explicit language to the contrary, the contractor is free to make up for such lost time by performing later work activities at a rate faster than the as-planned or originally intended rate that may have been depicted on the as-planned schedule or on any other form of contractor-produced project schedule. Only when the required work of the entire contract has been completed, or the required work for a given milestone has been completed in the case of milestone completion date contracts, is it possible to determine whether the contractor has satisfactorily met the burden of performance.
Excusable Delays
Delays that are not the fault of either the owner or contractor are excusable delays (ED). If they result in the project duration being extended, EDs entitle the contractor to an extension of time only. The most common examples are force majeure conditions such as acts of God, war, riot, and so on.
Incorporation of Delays into the CPM Network for Delay Impact Analysis
The previously explained classes of delays or extension of work activity performance include:
• Owner responsibility delays (ORDs)
• Discrete event contractor delays (CRDs)
• Slow-work-performance contractor delays (not directly identified by a symbol)
• Excusable delays (EDs)
If the starting dates of particular project work activities shown on a schedule network have been delayed by ORDs, CRDs, or EDs, each delaying event can be identified, both as to the point in time at which it occurred (just before the start of the affected work activity) and as to its duration (the time duration of the delay). Also, some ORDs do not delay the start of a contractor work activity but rather extend the activity to a longer duration than would have been the case without the delay. (An example of this kind of ORD would be the owner’s engineer improperly imposing a requirement, not included in the project specifications, that increased the time required to perform a work activity.) In these cases, an estimate can be made of the additional time the work activity was improperly extended. This time duration can be identified as an ORD and extracted from the duration of the affected work activity. Such a delay would be arbitrarily shown in the network to have occurred at the end of the shortened work activity (just before the start of the following work activity), even though in actuality the delaying effect was uniformly experienced throughout the work activity.
By these procedures, all of the above described delay types that may have affected the contractor work performance activities represented in a CPM schedule network can be identified and depicted in the network as discrete delay activities, each with its own duration positioned in the network at the point in time at which it is considered to have occurred.
Identifying and depicting each of the various kinds of delays as discrete activities in the network is the first step in CPM schedule delay impact analysis.
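The extraction described above for an ORD that lengthened a work activity can be sketched in code. The helper below shortens the affected activity's as-built duration and re-depicts the extracted time as a discrete ORD activity placed immediately after it; the activity names and durations are hypothetical illustrations, not taken from any figure.

```python
# Sketch: extracting an owner-caused extension from a work activity and
# re-depicting it as a discrete ORD activity placed just after the
# shortened activity. Activities are (name, duration_months, class) tuples.

def extract_extension_delay(path, activity, months, delay_class="ORD"):
    """Shorten `activity` by `months` and insert a discrete delay activity
    of `delay_class` immediately after it on the same path."""
    out = []
    for name, dur, cls in path:
        if name == activity:
            out.append((name, dur - months, cls))                   # shortened work activity
            out.append((delay_class + "-1", months, delay_class))   # discrete delay activity
        else:
            out.append((name, dur, cls))
    return out

# Hypothetical as-built path: 2 of Foundations' 8 as-built months are
# attributed to an improperly imposed owner requirement.
as_built = [("Excavate", 4, "WORK"), ("Foundations", 8, "WORK"), ("Frame", 6, "WORK")]
with_delay = extract_extension_delay(as_built, "Foundations", 2)
print(with_delay)
```

Note that the total path duration is unchanged; the extraction only makes the delay visible as its own activity so it can later be removed in a collapse analysis.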
Forward-Looking and Retrospective Impact Analysis
If known or anticipated delays are inserted as discrete activities in an as-planned schedule and a delay impact analysis is performed to determine their impact on job completion, the analysis is referred to as a forward-looking delay impact analysis. This method of analysis is similar to forward-pricing a monetary claim.
On the other hand, if all project delays are inserted into an as-built network after all project work has been completed, the analysis is a retrospective delay impact analysis (akin to retrospectively pricing a monetary claim).
Some contract delay analysts believe that the impacts of delays should be analyzed when they occur and appropriate time and cost adjustments made to the contract progressively as the project work proceeds. If this is done, each delay is inserted into the updated schedule in effect when the delay occurs and a contemporaneous impact analysis is performed on a part as-built, part as-planned schedule to determine the impact of that delay.
There are arguments for and against all three approaches to delay impact analysis that are beyond the intended scope of this book. However, the principles underlying all of them are essentially the same. Once the delays have been inserted into the CPM schedule, whichever one of the three approaches is used, the analytical procedure is similar. Therefore the principles and procedures that follow, which are illustrated by application to an as-built schedule, are equally applicable to any CPM delay impact analysis, whenever performed.
Consecutive and Concurrent Events
A series of events, whether work activities or delays inserted into a network, are said to be consecutive if they follow along the same path of the network one after the other. For instance, work activities A, B, C, and D, as well as work activities E, F, G, and H, in Figure 18-1 are consecutive activities. When the events occur on separate parallel paths of the network, they are said to be concurrent events. Concurrent events may or may not occur within the same time frame. The fact that they occur on separate paths to project completion is what makes them concurrent. In Figure 18-1, work activities B, C, and D are concurrent with work activities E, F, and G.
Four Principles Governing Delay Impact Analysis
As stated previously, the starting point in determining either the number of calendar days of liquidated damages due the owner, or the number of days of compensatory delay damages due the contractor in a complex delay analysis situation, is identifying each delay as an ORD, CRD, or ED delay and inserting each into the CPM network as a discrete activity where each occurred. Once this step has been accomplished, the concept of concurrent delay must be considered.
Concurrent delay means delay to project completion that results from a number of individual delays, one or more of them occurring on a particular path to project completion and the others occurring on concurrent or parallel paths. In this situation, each of the delays may or may not have a contributory effect on project completion. Because of this, consideration of the following four principles is necessary to allocate liability between owner and contractor:
• First Principle: An owner cannot properly assess liquidated damages (LD) for periods of time during which the owner was concurrently delaying the project. In other words, the contractor can properly be assessed liquidated damages for only that part of any delay to completion of the project that was exclusively caused by or was exclusively the contractual responsibility of the contractor.
• Second Principle: A contractor cannot properly be paid delay damages (DD) for periods of time when the contractor and/or excusable delays were concurrently delaying completion of the project. In other words, the contractor can be properly paid delay damages for only that part of the total project completion delay that was exclusively caused by, or was exclusively the contractual responsibility of, the owner.
The following cases illustrate the application of the first and second principles. In one case, the government delayed completion of a floodgate rehabilitation project by delivering faulty government-furnished equipment to the contractor. However, during the final months of work, the contractor fell behind schedule with the project’s electrical work. The Engineer Board of Contractor Appeals found the two problems to be intertwined, each exacerbating the other, and refused to allow the government to withhold liquidated damages—but the board also denied the contractor recovery of delay damages.[1]
In another decision, the Veterans Administration Board of Contract Appeals similarly ruled that where government-cost and contractor-cost delays are so intertwined that they cannot be segregated, the government may not recover liquidated damages, and the contractor may not recover delay damages. In a contract for the demolition of various structures and construction of new buildings and facilities, the contractor’s slow progress and need to perform rework activities delayed completion of a boiler plant. On the other hand, the Veterans Administration’s slow response to a differing site condition and failure to coordinate the work of a separate contractor delayed completion of the paving and utility work. In total, the contract was completed 241 days behind schedule.
When the Veterans Administration withheld \$282,452 in liquidated damages, the contractor filed a claim for a time extension and more than \$1.6 million in delay damages. Both the government and contractor alleged that the other’s delay had been on the CPM schedule’s critical path, thus causing the late completion of the overall project. In rejecting both parties’ as-built CPM schedules, the board stated:
We find both parties have failed with regard to their attempts to establish that one or the other’s delay was solely on some mythical critical path and, therefore, was the sole cause of the delay to the contract. The project did not consist solely of the boiler plant, nor did it consist solely of final paving and electrical services…. Given the intertwined causes of delay to the project, we leave the parties where we find them. Accordingly, the government is not entitled to liquidated damages, nor is the contractor entitled to compensation for delay damages.[2]
However, case law has clearly established that when delays attributable to the owner and contractor can be separately identified and quantified, costs of compensable delay and liquidated damages can be recovered. In a 1986 decision, the Interior Board of Contract Appeals ruled that the contractor renovating a building at the U.S. Merchant Marine Academy could recover costs for delays that were identified as caused by the government. The project was plagued by a variety of delays, but the board determined that the contractor had properly segregated periods of contractor-caused delay and concurrent delay from periods of government-caused delay.[3]
Similarly, in a 1990 decision, the Postal Service Board of Contract Appeals held that when contractor-caused delays can be segregated from owner-caused delays, the contractor is entitled to recovery for the compensable portion of the delay. The contract called for substantial completion of a post office building and associated site improvements within 300 days of notice to proceed. The contracting officer complained repeatedly to the contractor about inadequate worker supply and slow progress. However, municipal officials then altered the required grades for paving and curb work, necessitating a change order from the Postal Service. The Postal Service was slow to issue the change order, bringing the contract work to a complete standstill. The project was completed well behind schedule. When both parties claimed that delay by the other had contributed to the late completion, the board agreed but said that fact did not preclude the contractor’s recovery for owner-caused delay. When periods of owner-caused delays can be segregated from periods of contractor-caused delays and the resulting costs can be separately identified and documented, the contractor can recover for periods of delay solely caused by the owner.[4]
The next principle to be considered in dealing with concurrent delay establishes the basis for segregating the exclusive effect of owner-caused delays on overall project completion from the exclusive effect of contractor-caused delays.
• Third Principle: To determine the exclusive effect on overall project completion of any one class of delays that have been identified on an as-built schedule, or on the as-built portion of an updated schedule, it is necessary to collapse the schedule on which that class of delays has been identified by removing that class of delays and reconstituting the schedule as a collapsed schedule. The effect of the removed class of delays is the difference in the completion date of the collapsed schedule from the completion date of the original schedule.
A corollary application of this principle in reverse can be stated as follows when a forward-looking delay impact analysis is made on an as-planned schedule or on the as-planned portion of an updated schedule:
• To determine the exclusive effect on overall project completion of any one class of delays, it is necessary to insert that class of delays into an as-planned schedule or into the as-planned portion of an updated schedule to reconstitute the schedule as an expanded schedule. The effect of the inserted class of delays is the difference in completion date of the expanded schedule from the completion date of the original schedule.
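For a single path of activities, the collapse described by the third principle reduces to removing one class of delay activities and comparing completion dates. The sketch below uses illustrative, assumed activities and durations; on a real multi-path network the same removal would be followed by re-running the CPM forward pass to find the new completion date.

```python
# Sketch of the third principle on a single path: collapse the schedule by
# removing one class of delays and measure the shift in completion.
# Activities are (name, duration_months, class) tuples; values are assumed.

def completion(path):
    return sum(dur for _, dur, _ in path)

def collapse(path, *delay_classes):
    """Remove every activity belonging to the given delay classes."""
    return [a for a in path if a[2] not in delay_classes]

as_built = [("Work-1", 6, "WORK"), ("ORD-1", 3, "ORD"),
            ("Work-2", 5, "WORK"), ("ED-1", 2, "ED"),
            ("Work-3", 7, "WORK")]

# Exclusive effect of the ORD class = as-built completion minus collapsed completion.
ord_impact = completion(as_built) - completion(collapse(as_built, "ORD"))
print(ord_impact)   # 3 months of owner-caused delay to completion
```

The corollary expansion works the same way in reverse: inserting a class of delays into an as-planned path lengthens completion by that class's exclusive effect.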
The fourth and final principle governs the determination of the proper time period that the contract performance time should be extended.
• Fourth Principle: The original contractually stipulated completion time plus the extension of time to which the contractor is entitled plus the contractor’s liability for liquidated damages equals the as-built project completion time.
Delay Impact Analysis for Single-Path Projects
Actual construction projects rarely if ever occur with just one consecutive path of activities from notice to proceed (NTP) to completion. However, in order to start discussion of delay impact analysis with the simplest possible case, the single-path project situation will be considered first. Figure 18-4 represents a single-path, as-built schedule.
The typical questions to be answered by the analysis are:
• What is the owner’s liability for delay damages (if any)?
• What is the contractor’s liability for liquidated damages (if any)?
• What is the contractor’s entitlement to an extension of contract time (if any)?
By inspection, the project completion date is shortened by four months if the schedule is collapsed by removing the ORD. Thus, by application of the third principle stated earlier, the effect of the ORD is to extend project completion by four months, for which the owner has delay damage liability to the contractor.
Similarly, if the schedule is collapsed by removing both the ORD and the ED (leaving only contractor-controlled activities in the schedule), project completion is shortened by seven months, and the project would have been completed 23 months after NTP, meeting contract completion requirements. Thus, the contractor has met the burden of performance and has no liability for liquidated (or actual) damages. By application of the fourth principle, the contractor’s entitlement to an extension of time is determined as follows:
1. Original completion time + extension of time + liability for liquidated damages = total as-built completion time.
2. 24 months + extension of time + 0 = 30 months.
3. Extension of time = 30 months – 24 months = 6 months.
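The fourth-principle arithmetic above can be sketched as a small helper. This is a minimal illustration, not from the text; the function name is hypothetical:

```python
def extension_of_time(original, as_built, liquidated_damages_liability):
    """Fourth principle rearranged:
    original completion + extension + LD liability = as-built completion,
    so extension = as-built - original - LD liability.
    All quantities in the same time unit (months here)."""
    return as_built - original - liquidated_damages_liability

# Single-path example from the text: 24-month contract, 30-month
# as-built completion, no liquidated-damages liability.
print(extension_of_time(24, 30, 0))  # -> 6
```

The same helper reproduces the later 20-month scenario: `extension_of_time(20, 30, 1)` gives 9 months.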
Now, assume that the contractually stipulated completion date is 20 months (instead of 24 months), resulting in the situation depicted by Figure 18-5.
The owner’s liability for delay damages does not change, remaining at four months, determined as before by the shortening of the schedule when the ORD is removed. However, when both the ORD and the ED are removed, leaving only contractor-controlled activities, the schedule will collapse to a total of 23 months from NTP. If neither the ORD nor the ED had occurred, the contractor still would not have completed the work within the contractually stipulated performance period. Completion would have been three months late. Therefore, the contractor is subject to payment of liquidated or actual damages for the three months that actual performance extended beyond the time allowed.
Note that both subclasses of contractor delays were present in the preceding example. Even if the two-month CRD (a discrete identifiable delay) had not occurred, the contractor would still have been one month late—that is, would have failed to perform the three work activities at progress rates sufficient to meet contract time requirements.
Although perhaps overly simplistic, this example illustrates the application of the proper principles and thought processes to arrive at correct conclusions. Discussion of more complex situations follows.
Delay Impact Analysis for Multi-Path (Concurrent Path) Projects
In the preceding discussion for the second single-path project, when all ORD and ED activities were removed and the schedule collapsed, the remaining project duration consisted of activities that were entirely within the control of the contractor. In that case, since this duration exceeded the contractually stipulated project completion period, contractor-controlled delays existed, and the contractor had liquidated or actual damages liability.
In the general case, contractor-controlled delay can consist of discrete identifiable delays (CRDs), failures to complete physical work activities at a pace sufficient to meet contract time requirements, or a combination of both. Almost all construction projects are multi-path projects in which a number of separate concurrent paths of consecutive activities extend from NTP to project completion. When all ORD and ED activities are removed from a network schedule for such a project, the collapsed schedule will represent what the project duration “would have been” if the ORD and ED activities had not occurred. All remaining activities, whether CRD activities or work activities, are entirely under the control of the contractor. Under these circumstances, it might seem as if any remaining time overrun past the contractually stipulated completion date will result in liquidated or actual damages liability. This may not be true, however, because in accordance with the first principle stated earlier, the project overrun must also have been caused exclusively by the contractor—that is, there must not have been any concurrent delay to project completion that was caused by or was the responsibility of the owner or was the result of an excusable event (ED). To ensure that this condition is met prior to determining that the contractor has liability, an additional step in delay analysis is required.
Consider the multi-path project with an as-built schedule shown in Figure 18-6.
By removing the ORD activities, the as-built schedule will collapse as shown in Figure 18-7. Based on the second principle, the owner’s liability for payment of delay damages to the contractor is four months. This delay to the project completion was exclusively caused by or was otherwise the contractual responsibility of the owner.
The next step in analysis is to remove the ED activities, collapsing the network further as shown in Figure 18-8. The collapsed schedule now represents a “would have been” schedule, consisting entirely of contractor-controlled activities. This schedule would have been achieved if none of the ORD or ED events had occurred. Since, by this schedule, the project is completed in 23 months (one month earlier than the contractually stipulated time), the contractor has no liability for liquidated or actual damages.
If the contractually stipulated completion period for the as-built performance shown by Figure 18-6 was 20 months rather than 24 months, all analytical steps previously taken through production of the collapsed network shown in Figure 18-8 would be the same. It is now clear that the contractor would not have met the burden of performance required by the time requirements of the contract. As Figure 18-8 shows, on the upper path, the contractor was three months late (consisting of a two-month discrete identifiable delay and one-month combined slippage in the required time performance of the four work activities on the path). On the lower path, the contractor met the burden of performance exactly by completing all activities on that path in 20 months. Even though three months late on the upper path, the contractor did not incur three months’ liability because of the first principle, stated earlier. For the contractor to incur liability for delay, the delay to the project must be exclusively caused by or the contractual responsibility of the contractor.
The three-month delay on the upper path of Figure 18-8, although caused by the contractor, did not exclusively delay the project by three months. The delay to the completion of the project that was exclusively caused by the contractor was one month only. The reasoning is as follows:
1. By inspection, as-built completion on the upper path occurred 30 months after NTP.
2. If the three months of contractor delay on the upper path had not occurred, the as-built completion on the path would shorten to 27 months after NTP.
3. There are no contractor-controlled delays on the lower path.
4. If the upper path had been shortened to 27 months after NTP, total project completion would have been determined by the length of the lower path and would have occurred 29 months after NTP.
5. Therefore, the impact of all contractor-controlled delays on total project completion was 30 months less 29 months, or one month. Project completion was exclusively extended by contractor-controlled delays by only one month.
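The five-step reasoning above reduces to a comparison between the as-built completion and a "would have been" completion with all contractor-controlled delays removed. A minimal sketch, using the Figure 18-8 numbers (durations in months after NTP; the path structure here is assumed for illustration):

```python
# As-built path completions and the contractor-caused delay on each path.
paths = {
    "upper": {"as_built": 30, "contractor_delay": 3},
    "lower": {"as_built": 29, "contractor_delay": 0},
}

# Actual project completion is governed by the longest path.
as_built_completion = max(p["as_built"] for p in paths.values())

# "Would have been" completion if no contractor-controlled delays had
# occurred: remove each path's contractor delay, then take the longest path.
would_have_been = max(
    p["as_built"] - p["contractor_delay"] for p in paths.values()
)

# Delay to the project exclusively caused by the contractor.
exclusive_contractor_delay = as_built_completion - would_have_been
print(exclusive_contractor_delay)  # -> 1
```

The lower path (29 months, no contractor delay) controls the collapsed schedule, so the contractor's three-month delay on the upper path extends the project by only one month.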
The final step in the analysis is determining the extension of time due the contractor. The reasoning follows:
1. Original contractually stipulated completion + extension of time to which contractor is entitled + contractor’s liability for liquidated damages = as-built completion time.
2. For the case of original contractually stipulated completion time of 24 months:
24 months + extension of time + (0) = 30 months
Extension of time = 30 months – 24 months – (0) = 6 months
3. For the case of original contractually stipulated completion time of 20 months:
20 months + extension of time + 1 month = 30 months
Extension of time = 30 months – 20 months – 1 month = 9 months
Delay Impact Analysis for Complex Projects with Several Interconnected Concurrent Paths
The typical construction project consists of a number of concurrent paths of activities where dependency ties exist between activities on separate paths. Such projects can be analyzed by similar procedures to those previously explained. Refer to Figures 18-9, 18-10, and 18-11.
The starting point is the representation of the as-built performance of the project constructed from project records, which results in Figure 18-9. All delays (ORD, CRD, and ED delays) are identified as activities on Figure 18-9. The dotted line durations of 110 CD, 155 CD, 70 CD, and 65 CD, respectively, are float time, the length of time that the completion of the preceding activity can be delayed without delaying the completion of the project. The as-built completion time was 1385 calendar days (CD), whereas the contractually stipulated completion time was 1100 CD.
Owner Liability for Delay Damages
The first step in analysis is to remove all ORD activities from Figure 18-9, collapsing the schedule to the network shown in Figure 18-10. Note that all interpath dependency ties are maintained. Also, when collapsing the schedule, all activities are shown with the earliest possible start times (the “early” start times). Total project completion time reduces from 1385 CD to 1220 CD, establishing the owner’s liability for delay damages equal to 1385 CD – 1220 CD, or 165 CD.
Has the Contractor Met the Burden of Performance?
The second step is to remove all ED activities from Figure 18-10, collapsing the schedule to the network shown in Figure 18-11. (Again note that all interpath dependency ties are maintained.) The completion time of Figure 18-11, which contains only contractor-controlled activities, reduces to 1215 CD.
The third step is determining whether the contractor has performed the actual work activities at a pace sufficient to meet the contract time requirements (the burden of performance by the contractor). This must be done separately for all of the possible paths leading to completion of the project. Thus
Path ABDEFG:
30 + 150 + 30 + 300 + 460 + 40 + 175 = 1185 CD
1185 – 1100 = 85 CD longer than required time
Path CDEFG:
150 + 30 + 300 + 460 + 40 + 175 = 1155 CD
1155 – 1100 = 55 CD longer than required time
Path CHIG:
150 + 30 + 765 + 40 + 175 = 1160 CD
1160 – 1100 = 60 CD longer than required time
Path JKHIG:
30 + 175 + 30 + 765 + 40 + 175 = 1215 CD
1215 – 1100 = 115 CD longer than required time
Path JKLM:
30 + 175 + 550 + 305 = 1060 CD
1100 – 1060 = 40 CD shorter than required time
Note that the durations of any float are not considered in making these determinations.
Clearly, the contractor did not meet the burden of performance on any path except path JKLM. The extent of contractor-caused delay on each individual path was
Path ABDEFG: 85 CD
Path CDEFG: 55 CD
Path JKHIG: 115 CD
Path JKLM: -0-
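The burden-of-performance check above can be sketched as a short computation. The path durations are those read from the collapsed, contractor-only schedule of Figure 18-11; float is ignored, as the text notes:

```python
STIPULATED = 1100  # contractually stipulated completion, CD

# Activity durations (CD) along each path of the collapsed schedule.
paths = {
    "ABDEFG": [30, 150, 30, 300, 460, 40, 175],
    "CDEFG":  [150, 30, 300, 460, 40, 175],
    "CHIG":   [150, 30, 765, 40, 175],
    "JKHIG":  [30, 175, 30, 765, 40, 175],
    "JKLM":   [30, 175, 550, 305],
}

# Contractor-caused delay on each path: any time past the stipulated
# completion; a path finishing early (JKLM) contributes zero, not a credit.
overrun = {name: max(sum(d) - STIPULATED, 0) for name, d in paths.items()}
print(overrun)
# -> {'ABDEFG': 85, 'CDEFG': 55, 'CHIG': 60, 'JKHIG': 115, 'JKLM': 0}
```

Note the `max(..., 0)` clamp: meeting the burden of performance on one path does not offset failures on the others.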
Contractor-Caused Delay to Project
The fourth step is determining the extent to which the contractor exclusively extended the completion date of the project, if any, by failing to meet the burden of performance. It is first necessary to determine to what extent the contractor-caused delay extended each of the paths to completion of the as-built schedule. This is done in accordance with the third principle by removing the exclusive contractor-caused delay from each individual path and noting the extent to which the path shortens. Working with Figure 18-9,
Path ABDEFG:
30 + 80 + 150 + 30 + 300 + 120 + 460 + 40 + 175 – 85
= 1300 CD (without contractor delays)
Path CDEFG:
150 + 30 + 300 + 120 + 460 + 40 + 175 – 55
= 1220 CD (without contractor delays)
Path CHIG:
150 + 30 + 765 + 40 + 175 – 60
= 1100 CD (without contractor delays)
Path JKHIG:
30 + 100 + 175 + 30 + 765 + 40 + 175 – 115
= 1200 CD (without contractor delays)
Path JKLM:
30 + 100 + 175 + 550 + 160 + 305 – 0
= 1320 CD (without contractor delays)
Again, note that the duration of any float is not considered in making these determinations.
Contractor Liability for Liquidated Damages
As just illustrated, if there had been no contractor delays, the project would have been completed in 1320 CD, the longest of the five possible paths from which all contractor delays have been removed. Therefore, the extent of delay exclusively caused by the contractor is the actual project duration minus 1320 CD, or 1385 CD – 1320 CD = 65 CD. The contractor’s liability for liquidated damages equals this number of calendar days.
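The liability computation just illustrated can be sketched as follows, using the as-built path durations of Figure 18-9 together with the contractor-caused delay found for each path:

```python
AS_BUILT = 1385  # as-built project completion, CD

# (as-built path duration in CD, contractor-caused delay on that path in CD)
as_built_paths = {
    "ABDEFG": (sum([30, 80, 150, 30, 300, 120, 460, 40, 175]), 85),
    "CDEFG":  (sum([150, 30, 300, 120, 460, 40, 175]), 55),
    "CHIG":   (sum([150, 30, 765, 40, 175]), 60),
    "JKHIG":  (sum([30, 100, 175, 30, 765, 40, 175]), 115),
    "JKLM":   (sum([30, 100, 175, 550, 160, 305]), 0),
}

# Project duration had there been no contractor delays = longest path
# after removing each path's contractor-caused delay.
would_have_been = max(total - delay for total, delay in as_built_paths.values())
liability = AS_BUILT - would_have_been

print(would_have_been)  # -> 1320
print(liability)        # -> 65 (CD of liquidated-damages liability)
```

Path JKLM, which contains no contractor-caused delay, controls at 1320 CD, so only 65 CD of the total overrun is exclusively the contractor's responsibility.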
Contractor Entitlement to Extension of Time
The final step is determining the contractor’s entitlement to an extension of time that is consistent with the liability for liquidated damages. By application of the fourth principle,
1. 1100 CD + extension of time + 65 CD = 1385 CD
2. Extension of time = 1385 CD – 1100 CD – 65 CD = 220 CD
Summary of Delay Impact Analysis
Referring to Figure 18-9, the following is a summary of the delay impact analysis:
Owner’s liability for delay damages = 165 CD.
Contractually stipulated completion date should be extended by 220 CD.
Contractor’s remaining liability for liquidated or actual damages = 65 CD.
Determining Damages Offset
In situations such as the preceding, the principle of offsetting the monetary value of damages applies. However, the monetary consequences of one day of delay caused to the other party may be considerably different for the owner and for the contractor. In other words, the contractor’s actual provable costs caused by each calendar day that the project completion is extended may be very different from the contractually stated liquidated damages figure due the owner per day of delay or, in cases in which the owner is due actual damages, from the provable actual costs to the owner for each day completion is extended. Therefore, the number of days delay may not be offset directly; only the monetary consequences of the respective delays may be offset.
Conclusion
Determining which party to the contract is responsible for delays, and the relief, if any, to which each is entitled, can be difficult in complex multiple-delay situations. However, if the various delays can be isolated and properly identified as either owner-caused, contractor-caused, or excusable, the proper allocation of rights to relief and liabilities can be determined by application of the principles outlined in this chapter. In complex cases, the use of computers greatly reduces the analytical labor required.
An additional topic closely related to delay and extension of time is the concept of constructive acceleration, the subject of the next chapter.
Questions and Problems
1. Explain the difference in as-planned, as-built, and intermediate CPM network schedules.
2. What are the three distinct classes of delays to construction that were discussed in this chapter? Which of the three could be said to be a neutral delay or a delay that is no one’s fault?
3. What are the two subclasses of contractor-caused delays? Does the occurrence of either necessarily mean that the contractor will be liable for liquidated damages? Explain your answer.
4. What are forward-looking, retrospective, and contemporaneous delay impact analyses?
5. Explain concurrent and consecutive events and the difference between them. What is concurrent delay?
6. Explain the four principles discussed in this chapter that are useful for properly allocating liabilities and damages for delay between contractor and owner.
7. Explain why the contractor’s liquidated (or actual) damages liability cannot be offset against the owner’s liability for delay damages on a day-for-day basis when both are present.
8. Redraw to a convenient scale the as-built network shown in Figure 18-9, changing the CD durations of the various delays and work activities as follows. Maintain all dependency ties shown. Note that the location and duration of float intervals will change.
[table id=2 /]
The contractually specified completion date remains 1100 CD.
1. Determine the actual completion date.
2. Determine the owner’s liability for delay damages in CD.
3. Determine the contractor’s liability for liquidated damages in CD.
4. If the contract completion date should have been extended by the owner, determine the number of CD after NTP to which it should have been extended. If the contract should not have been extended, indicate this.
9. Refer to the as-built network (see figure below) constructed on the basis of job records. All delays have been identified and contractual responsibility for each determined as noted. Note that delay H is fixed in time and occurred between 130 CD and 138 CD after NTP. Delay H will therefore not shift to an earlier time frame when networks are collapsed. The contractually stipulated project completion date was 120 CD after notice to proceed.
1. Determine the owner’s liability for delay damages in CD.
2. Determine the contractor’s liability for delay damages in CD.
3. If the contract completion date should have been extended by the owner, determine the number of CD after NTP to which it should have been extended. If the contract should not have been extended, indicate this.
1. Appeal of Gulf Construction Group, Inc., ENGBCA No. 5961 (Oct. 13, 1993).
2. Appeal of Coffey Construction Co., Inc., VABCA No. 3361 (Feb. 11, 1993).
3. Appeal of Wickham Contracting Co., Inc., IBCA No. 1301-8-79 (Mar. 31, 1986).
4. Appeal of H. A. Kaufman Co., PSBCA No. 2616 (July 31, 1990).
Key Words and Concepts
• Acceleration
• Voluntary acceleration
• Directed acceleration
• Delay with time extension
• Delay without time extension
• Accelerated performance without delay
• Constructive acceleration
• Effect of owner’s directive to accelerate
• Contractor’s proper contractual procedure
As discussed in Chapter 17, a contractor cannot assume with impunity that the contract completion date will necessarily be extended simply because a properly supported claim for a time extension has been filed with the owner. This is true even when the owner’s project representatives have informally indicated their personal belief that an extension will be granted if it is later determined to be necessary for the contractor to avoid being assessed liquidated damages. If the owner denies or simply fails to act on a properly supported claim for a time extension, the proper and prudent course for the contractor is to protest the denial or lack of action in writing and then advise the owner that it has compelled the contractor to attempt to complete the contract by the original required date. Completion by this date, when significant delays have been experienced or additional work has been added to the contract, usually entails incurring additional costs for overtime, shift work, and mobilization of additional crews, equipment, and material. A contractor who has incurred such costs in meeting or attempting to meet the original completion date, when an extension of time should have been granted by the owner, is entitled to recover these costs under the doctrine of constructive acceleration.
Voluntary and Directed Acceleration
An understanding of the terms acceleration, voluntary acceleration, and directed acceleration in a contractual sense is necessary to understand the concept of constructive acceleration.
Acceleration and Voluntary Acceleration
Acceleration means completion of the contract work or part of the contract work at a more rapid rate than required by the contract. Ordinarily, the contractor has the right to work at a faster pace than the minimum needed to meet the contract completion date. This is an implied right under the contract. The contractor is usually the party bearing the financial risk of performance, and unless the contract expressly prohibits completion of the work before the contractually specified date, the contractor is free to speed up the work. Exceptions might include, for instance, a contract involving embankment construction where it was necessary to control the vertical soil load on substrata by requiring that the rate of embankment placement not exceed a stated maximum rate. Except in such situations, contractors sometimes voluntarily accelerate their performance to reduce time-related costs by finishing the project early or because they believe a faster pace is more cost effective for other reasons. Also, when unexcused delays have put the contractor behind schedule, voluntary acceleration of remaining work is the only way to regain schedule and avoid being declared in default, being assessed liquidated damages, or both. In any of these situations, voluntary acceleration on the part of the contractor has occurred.
Directed Acceleration
Acceleration may also be directed by the owner, if the contract so provides. The right of the owner to direct acceleration is not an implied right; it must be explicitly provided in the contract. The changes clause usually does provide that the owner may direct acceleration of all or part of the work. Completion of contract work at a pace that is faster than would ordinarily be required pursuant to a directive from the owner is called directed acceleration. As with any other change, an owner who directs acceleration in order to complete the project earlier than contractually required must pay the extra costs incurred by the contractor in complying with the directive.
Constructive Acceleration
Constructive acceleration is a forced completion of the contract work in a shorter period than should have been allowed by the issue of proper contract time extensions. The normal scenario that triggers constructive acceleration is that either compensable or excusable delays are encountered by the contractor, for which properly supported claims for appropriate extensions of time are made to the owner. If the owner either denies the claims or simply fails to act, the contractor must conclude that the contract time remains unchanged and therefore make every reasonable effort to complete the work by the unextended date, usually incurring additional costs. Failure to make this effort places the contractor in a position in which the owner, shielded by a superior economic position, could contend, however improperly, that the contractor is behind schedule and is breaching the contract, and could declare the contractor in default. A contractor cannot afford to passively accept being placed in this position.
The absence of a change order granting a time extension within a reasonable time after submittal of a properly supported claim creates the constructive acceleration situation. It makes no difference if the owner eventually grants a time extension after the acceleration effort and extra costs have been expended. Constructive acceleration still will have occurred. In other words, even though time extensions may eventually be given, the failure of the owner to issue time extensions in a timely manner also triggers constructive acceleration.
Timely manner means within a reasonable period of time after submittal of the contractor’s properly supported claim for a time extension. Reasonable period of time means sufficient time for the owner to evaluate the contractor’s claim and determine if it has merit.
It makes no difference to a claim of constructive acceleration whether causal events giving rise to the claim are excusable or compensable, as long as they are not the contractor’s fault and cause more time to be needed to complete the contract. Thus, the causal events could include a strike or inclement weather (excusable events) or an owner-caused delay or change in the work (compensable events).
A Constructive Acceleration Example
Figure 19-1 illustrates a constructive acceleration situation. In the original contract, the contractor had 24 months to complete the work. Three separate cases are discussed as follows.
Case 1—Delay with Time Extension
Case 1 in Figure 19-1 illustrates a delay with time extension. After 12 months of contract performance at a normal pace (sufficient to finish all work within 24 months), the contractor is delayed for six months and is contractually entitled to an extension of time. The owner promptly issues a six-month time extension, extending the original contract completion date from 24 months to 30 months. Having been granted this time extension, the contractor continues work at the normal pace for 12 additional months and completes the contract in 30 months, meeting the contract completion requirement. No acceleration has occurred. If the six-month delay was compensable, as opposed to being merely excusable, the contractor would be entitled to monetary damages equal to the extra time-related costs incurred as a result of the delay.
Case 2—Delay with No Time Extension
The second case in Figure 19-1 illustrates a delay without a time extension. It is the same as Case 1 until the end of the delay when, in contrast to Case 1, the owner denies or refuses to act on the contractor’s claim for a six-month extension of time. Since the contractor does not have a time extension, the contractor works at an accelerated pace for six months at added expense and finishes the job in 24 months, the original completion date. In this situation, the contractor finishes the work in only 18 months of actual working time. The contractor also finishes the project six months earlier than the date to which completion should have been extended. This is a classic case of constructive acceleration. Because an extension of time to which the contractor was entitled was not granted by the owner, the contractor was forced to accelerate performance by six months even though the owner did not issue a formal directive to do so.
Case 3—Accelerated Performance Without Delay
In Case 3 in Figure 19-1, accelerated performance without a delay is illustrated. After 12 months of performance at the normal pace, even though there is no delay, the owner issues an acceleration directive to the contractor, who then works six months at an accelerated pace, completing the project in 18 months in accordance with the directive, six months earlier than the original completion date.
Insofar as the acceleration aspects are concerned, there is no practical difference between Case 2 and Case 3. Case 3 illustrates directed acceleration; Case 2, constructive acceleration. In both cases, the contractor is entitled to be paid for the costs of the acceleration effort. If the delay in Case 2 was compensable, the contractor is also entitled to recover the extra time-related costs incurred due to the delay.
It would make no difference if the owner in Case 2, instead of failing to issue a time extension at all, waited until the completion of the project before issuing a six-month time extension. The contractor still has been constructively required to accelerate and is entitled to recover the extra costs incurred.
It is not necessary in the constructive acceleration situation to finish the contract by the original completion date as depicted by Case 2. Now consider Figure 19-2. In this case, although the contract work is not finished until three months after the original completion date, the contractor finishes three months earlier than the date to which the contract should have been extended. This is also a valid constructive acceleration situation, and the contractor is entitled to the acceleration costs exactly as in Cases 2 and 3.
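The Figure 19-2 situation reduces to a simple test: acceleration is "constructive" when the contractor finishes before the date to which the contract should have been extended, even if after the original completion date. A minimal sketch with the numbers implied by the text (24-month contract, six-month entitled extension, completion three months after the original date):

```python
ORIGINAL = 24            # months, original contract completion
ENTITLED_EXTENSION = 6   # months of extension wrongly withheld by the owner
ACTUAL = 27              # months, actual completion (3 past the original date)

# Date to which completion should have been extended.
should_have_been_extended_to = ORIGINAL + ENTITLED_EXTENSION  # 30 months

# Constructive acceleration occurred if the contractor beat that date.
accelerated = ACTUAL < should_have_been_extended_to
months_of_acceleration = should_have_been_extended_to - ACTUAL

print(accelerated, months_of_acceleration)  # -> True 3
```

Even though the work finished three months late against the original date, it finished three months early against the properly extended date, so the acceleration effort is compensable.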
Proving Constructive Acceleration
Constructive acceleration normally results in extra costs incurred in trying to meet the unextended contract completion date. If the elements required for a valid constructive acceleration claim can be established, the contractor is entitled to recover these costs. Four elements must be proved.
Entitlement to Time Extension
To prevail in a constructive acceleration claim, the contractor must first establish entitlement to a time extension or time extensions by proving that performance was delayed by some event or condition for which the contract promises that an extension of time will be granted. A properly documented claim for a time extension must have been promptly submitted to the owner after the event or condition giving rise to the claim in accordance with the notice provisions of the contract. For instance, the Department of Agriculture Board of Contract Appeals denied a contractor’s claim for constructive acceleration because the contractor failed to establish entitlement to an extension of time.
On a contract to construct an earthen dam, the contractor had fallen behind schedule and was directed by the contracting officer to bring the work back into compliance with the progress schedule. After adding two scrapers to the equipment spread and commencing to work ten-hour days, the contractor demanded compensation for the increased costs, alleging that the contracting officer had constructively accelerated the schedule. In denying the constructive acceleration claim, the board held that
Acceleration is defined as a directive to increase efforts in order to complete performance on time, despite excusable delay. To prevail on an acceleration claim, a contractor must show excusable delay; notice to the government of the excusable delay, with a request for a contract extension; and the contractor must also prove that the costs claimed were actually incurred as a result of action specifically taken to accelerate performance.[1]
Similarly, the National Aeronautics and Space Administration (NASA) Board of Contract Appeals denied a contractor’s claim for constructive acceleration because the contractor failed to furnish timely notice of the delay claimed and failed to submit evidence supporting its entitlement to an extension of time.[2]
In a California case, the General Services Board of Contract Appeals granted a contractor’s claim for constructive acceleration costs because the government failed to grant a legitimate request for an extension of contract time in a timely manner. Heavy rains at the start of construction of a federal building made it impossible for the excavation subcontractor to proceed. The contracting officer did not grant an extension of time for this delay until 16 months after the completion of the excavation. The subcontractor was required to switch to a more expensive method of excavation to comply with the original schedule. In the words of the Board:
As a defense to a claim of constructive acceleration, a belated time extension is worthless. … It had to have become clear to anyone who did not sleep through the entire two days that the soil at the site was saturated with moisture and could not be compacted as required. The Government could not, by continually insisting on documentation of what was already known, justify its refusal to grant a time extension.[3]
Failure of Owner to Issue Extension of Time
The owner must not have issued a time extension, or if one was issued, the owner must have failed to issue it within a reasonable period of time after receiving the contractor’s properly documented claim.
Proof of Extra Costs
The contractor must prove that extra costs were incurred in attempting to finish the project by the unextended completion date.
Completion Before Date to Which Contract Should Have Been Extended
The contractor must complete the project earlier than the date to which project completion would have been extended if the owner had issued the time extension in a timely manner. As noted previously, the actual completion date need not be as early as the original completion date as long as it is earlier than the date to which the contract should have been extended. For instance, the U.S. Court of Claims (now the United States Court of Federal Claims) determined that a contractor could recover acceleration costs on a contract that was completed 524 days after the original completion date.[4]
Effect of an Owner’s Directive to Accelerate
Although not one of the four elements necessary to prove entitlement to damages, a fifth point is that the contractor does not have to have been explicitly directed by the owner to meet the original date in order to prove a valid case of constructive acceleration. However, if the contractor can show such explicit direction or pressure, the case becomes much stronger. What has occurred when an owner either refuses to act or denies a meritorious request for an extension of time is that the owner has breached a contractual duty. The breach is further compounded if the owner then improperly orders or otherwise pressures the contractor to finish the work by the original date. In the last case cited, the court concluded that an acceleration order need not be couched in mandatory terms. The contracting officer had issued extensive correspondence citing the original completion date and pressuring the contractor to step up progress. The contractor did not know until the end of the project how long an extension of time would be granted, and the court felt that, with the threat of liquidated damages hanging over the contractor’s head, the contracting officer’s letters improperly pressured the contractor to accelerate the work beyond the rate required if the contract had been properly extended.[5]
Contractor’s Proper Contractual Procedure
The proper contractual procedure for the contractor in a constructive acceleration situation includes the following:
• First, promptly file a properly supported claim for an extension of time for a definite number of days, as soon as possible after each excusable or compensable causal event causing delay. When appropriate, such requests should be supported by the type of as-built, forward-looking CPM network analysis discussed in Chapter 18.
• Second, if a change order granting the claimed time extension is not received within a reasonable period of time, the contractor should protest in writing and advise the owner in writing that operations are being accelerated in an effort to meet the original unextended contract completion date.
• Third, as a follow-up to the preceding point, the owner should be advised in writing, as soon as practical, of the details of the acceleration effort, the estimated additional daily costs, and the contractor’s expectation of payment for these costs. The contractor must then ensure that the project will be completed before the date to which the contract should have been extended.
• Finally, the contractor must carefully document all acceleration costs actually incurred to be able to prove conclusively the expenditures in an eventual constructive acceleration claim to the owner.
Conclusion
This chapter on constructive acceleration concludes a related group of chapters dealing with problems associated with the time allowed for contract performance and the impact of delay on performance.
The final four chapters focus on the generalized rules by which contracts are interpreted, the importance of job documentation and records in the construction industry, construction contract claims, and a discussion of the means by which contract disputes are settled.
Questions and Problems
1. What are acceleration, voluntary acceleration, and directed acceleration? Why might a contractor want to accelerate voluntarily? Does the contractor normally have the contractual right to do so? Is the owner’s right to direct acceleration of contract work an implied right of a construction contract?
2. What is constructive acceleration? How does it come about? When contractors are contractually due an extension of time and their claim is either ignored or not acted on by the owner, what motivates them to attempt to complete the project by the original specified completion date?
3. Is it necessary in the constructive acceleration situation for the contractor to complete the project by the original contractually specified completion date? What is necessary?
4. After the contractor has accelerated construction because the owner refused to grant an extension of time, is the contractor’s claim for the costs of acceleration defeated if the owner eventually relents and issues an extension of time such that, had it been issued in a timely manner, there would have been no need for the acceleration?
5. What are the four elements that a contractor must prove to establish a valid claim for constructive acceleration?
6. What are the four procedural steps that a prudent contractor should take in the constructive acceleration situation?
7. Refer to Figure 18-11 in Chapter 18. Assume that this network schedule was the as-planned schedule submitted by the contractor to the owner at the beginning of the project. Assume further that activity J represented a contractually stipulated period after NTP within which the owner was required to turn over the area for one of the tunnel portals to the contractor. Assume further that the 40-calendar-day CRD was not present in the network and that the network completion date was thereby 1175 calendar days after NTP. Finally, assume that the contractually specified completion date was also 1175 CD after NTP.
The owner did not turn over the tunnel portal until 130 calendar days after NTP. At the end of this delay, the contractor filed a properly documented written claim for an extension of time. At that point, none of the durations of the other activities in the network were expected to change, and the contractor’s extension of time request documented the number of days claimed on the basis of a forward-looking analysis starting from the as-planned schedule. The owner refused to grant an extension of time, so the contractor accelerated the performance at considerable extra cost of activities K and I.
By what number of calendar days after NTP must the contractor finish the contract to be entitled to the acceleration costs under the doctrine of constructive acceleration?
8. Under the circumstances of question 7, the contractor was able to prove that the time-related costs of each day’s delay in project completion were \$2,375. To what figure in dollars (if any) is the contractor entitled, in addition to the costs of constructive acceleration?
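Questions 7 and 8 reduce to simple date arithmetic once the extension owed is known. Because Figure 18-11 is not reproduced here, the sketch below uses placeholder numbers for the owner-caused critical-path delay and the actual finish date; only the 1175-CD completion date and the \$2,375 daily rate come from the problem statements, and the printed results are not the answers to the questions.

```python
# Illustration of the timing test for constructive acceleration entitlement.
# The 1175-CD completion date and $2,375 daily rate come from questions 7-8;
# the 90-CD delay and 1230-CD finish below are hypothetical placeholders,
# since the float values in Figure 18-11 are not reproduced here.

ORIGINAL_COMPLETION_CD = 1175   # specified completion, calendar days after NTP
DAILY_TIME_RELATED_COST = 2375  # $ per day of compensable delay (question 8)

def acceleration_entitlement(critical_path_delay_cd, actual_finish_cd):
    """The contractor need not meet the original date; it must only finish
    earlier than the date to which the contract should have been extended."""
    extended_deadline = ORIGINAL_COMPLETION_CD + critical_path_delay_cd
    return extended_deadline, actual_finish_cd < extended_deadline

deadline, entitled = acceleration_entitlement(90, 1230)
print(deadline, entitled)  # 1265 True (under these assumed figures)

# Question 8 pattern: compensable delay damages, in addition to acceleration
# costs, accrue at the daily rate for each day of completion beyond the
# original date that is attributable to the owner's delay.
days_late = 1230 - ORIGINAL_COMPLETION_CD
print(days_late * DAILY_TIME_RELATED_COST)  # 130625
```

With the real delay figure developed from the forward-looking analysis of Figure 18-11, the same two lines of arithmetic yield the answers to questions 7 and 8.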
1. Appeal of Donald R. Stewart & Associates, AGBCA No. 89-222-1 (Jan. 16, 1992).
2. Appeal of Carney General Contractors, Inc., NASABCA No. 375-4 (Sept. 1980).
3. Appeal of Continental Heller Corp., GSBCA No. 7140 (Mar. 23, 1984).
4. Norair Engineering Corp v. United States, U.S. Claims Court No. 259-80C (Dec. 2, 1981).
5. Ibid.
Key Words and Concepts
• Contract must not be redrafted
• Determination of the intent of the parties
• Manifestations of intent
• Express contract terms
• Course of performance
• Course of dealing
• Separately negotiated terms
• Customs and trade practices
• Contract must be read as a whole
• Interpretation giving lawful and reasonable meaning to all other provisions preferred
• Express terms govern over all else
• Relative importance of the various manifestations of intent
• Parol evidence rule
• Doctrine of contra proferentem
Previous chapters dealt with various important provisions of construction-related contracts and how courts interpreted and applied these provisions. The interpretation of a contract is a legal matter that lies in the province of judges and arbitrators, not the parties to the contracts themselves. However, it is important that these parties possess at least a rudimentary understanding of the rules of contract interpretation. This chapter explains and discusses some of the more common rules.
The resolution of many construction contract disputes turns on what the terms and provisions of the contract really mean. When disputes arise, courts, arbitrators, or other dispute resolution bodies determine the correct meaning of the contract and apply it to the situation of each particular case. They approach their task with the following mindset:
1. The contract must be interpreted as it is. The contract must not be redrafted to reflect what the reviewing body believes “it should have said.”
2. The reviewing body tries to determine the intent of the parties when they entered into the contract. In other words, they try to find what the parties were trying to accomplish when they wrote the language of the contract.
3. In the search for that intent, the reviewers look for “tracks,” or manifestations of intent, that may lead them to an understanding of what the parties were thinking when they entered into the contract.
Manifestations of Intent
Some common manifestations of intent include the following:
Express Contract Terms
Perhaps nothing can express the intent of a party to be bound by a contract provision more clearly than signing a contract prominently containing that express provision. By signing a contract, the parties indicate their intention to be bound by each provision in the contract. If the provision is express and clear, there is no need to look further. However, if the contract is silent or the expressly stated provisions are badly drafted and unclear, it is necessary to search for other manifestations of intent.
Course of Performance
Course of performance means the sequence of events during the contract from its beginning up to some particular point in time. The actions and attitudes of the parties during the period prior to the occurrence of a dispute reveal how each party to the contract understood the contract’s meaning and how each responded to the causal events leading to the dispute. For example, an owner’s practice of previously paying for changes in the work based on oral direction indicates that the owner intended the contract to operate that way as opposed to a situation in which the owner paid for changed work only if a signed change order had been issued. Similarly, a contractor who does not put the owner on notice at the time of a breach of contract by the owner is sending a clear message that the contractor did not think a breach had occurred or, if it had, that it was not an important breach.
Course of Dealing
A third manifestation of intent is course of dealing. This means how the parties have previously dealt with each other, prior to entering into the current contract. Past actions and attitudes indicate what the parties are likely to have intended in a new contract that on its face is unclear.
Separately Negotiated Terms
If a contract contains separately negotiated terms or provisions as opposed to standard “boilerplate” language, those terms are taken as a very strong manifestation of intent. Separately negotiated terms mean those that were obviously drafted for that particular contract. The inclusion of such provisions clearly shows that the parties intended them to apply. Otherwise, why would they have taken the trouble to draft the special language? On the other hand, boilerplate language could be present and frequently is—simply because it was lifted or carried over from other contracts used in the past. One or both of the parties may not have realized that the language was there and thus failed to insist that it be altered or deleted. This can happen when preprinted contract forms are used or when a previously used specification or other provision is carelessly included in the new contract documents without careful scrutiny.
Customs and Trade Practices
A fifth manifestation of intent is an express or implied reference to the customs and trade practices of the industry. Customs and trade practices are sometimes determinative in resolving unclear contract meaning, particularly when the contract expressly provides that normal trade practices are intended to apply. Even when normal trade practices are not specifically mentioned, there is an implied presumption that they were meant to apply. Parties to the contract are generally expected to interact according to the customs and trade practices of the industry in the absence of express indications to the contrary.
Generalized Rules of Contract Interpretation
The preceding manifestations of intent are involved in the following generalized rules of contract interpretation.
The Contract Must Be Read As a Whole
First, and most important, the contract must be read as a whole, not as a series of isolated parts. It must also be read with an attempt to give reasonable meaning to each provision. No provision in the contract can be arbitrarily regarded as meaningless. Otherwise, why would the parties have included that provision in the contract?
An excellent example of the application of this rule is afforded by the action of the Supreme Court of Montana in reversing a trial court decision arising from the construction of a housing project for the Montana Housing Authority. A conflict developed between the utility subcontractor and the mechanical subcontractor over who was responsible for installation and hookup work within five feet of the building lines. A lower court had ruled that the utility subcontractor must perform this work based on the court’s interpretation of the technical specifications that required the utility contractor to complete its work “in every respect complete and ready for immediate and continued use.”
The Supreme Court of Montana reversed the lower court, based on its review of all of the contract documents. Another clause stated that the work of the utility subcontractor terminated at a point five feet from building foundations where the utility lines were required to be plugged or capped. A further clause indicated that the plumbing subcontractor’s duties continued to a point five feet outside the building foundation. Reading the contract as a whole, the court concluded that it was not the utility contractor’s obligation to bring the lines within the five-foot limit of the buildings. The court said:
When read together, these contractual provisions indicate Palm Tree (utility subcontractor) was not required to make the water and sewer service lines connections. The whole of the contract is to be taken together so as to give effect to every part if reasonably practicable, each clause helping to interpret the other.[1]
This case illustrates the principle that an interpretation that gives lawful and reasonable meaning to all the other provisions of the contract will prevail over an interpretation that does not. In other words, each provision will be read so that it will not conflict or be inconsistent with other provisions when this is reasonably possible.
Similarly, the Supreme Court of Arkansas settled an argument over whether undercutting was required in certain work areas of a shopping center project but not required in other areas of the same project by considering all of the contract documents, including an engineer’s soil report deemed to be incorporated into the contract by reference. The court concluded that undercutting was required in all areas of the project. That interpretation was the only one that permitted harmonizing the various parts of the contract documents. In the court’s words:
In seeking to harmonize different clauses of a contract, we should not give effect to one to the exclusion of another, even though they seem conflicting or contradictory, nor adopt an interpretation which neutralizes a provision if the various clauses can be reconciled. The object is to ascertain the intention of the parties, not from particular words or phrases, but from the entire context of the agreement.[2]
Determine the Relative Importance of the Manifestations of Intent
If irreconcilable conflicts or ambiguities remain after reading the contract as a whole, the various manifestations of intent should then be examined to see if they shed light on what the parties intended. In doing that, the relative importance of the various manifestations is usually weighted as follows:
• Express contract terms are more important than course of performance, course of dealing, or the customs and trade practices of the industry.
• Course of performance is more important than course of dealing or the customs and trade practices of the industry.
• Course of dealing will take precedence over the customs and trade practices of the industry.
• Separately negotiated or added terms will take precedence over boilerplate language.
Although the Uniform Commercial Code applies primarily to purchase order agreements, a clear articulation of the preceding principles is set forth in subparagraph (2) of Article 2-208, which states:
The expressed terms of the agreement and any such course of performance, as well as any course of dealing and usage of trade, shall be construed whenever reasonable as consistent with each other; but when such construction is unreasonable, expressed terms shall control course of performance and course of performance shall control both course of dealing and usage of trade.
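The ordering in Article 2-208(2) amounts to a simple precedence ranking among the manifestations of intent. The sketch below is only an illustration of that ranking, using the category names from the text above; it deliberately ignores the Code's first instruction, which is to construe the manifestations as consistent with one another whenever reasonable.

```python
# Precedence ranking of the manifestations of intent discussed above;
# a lower number means the manifestation governs when they conflict.
PRECEDENCE = {
    "express terms": 0,
    "course of performance": 1,
    "course of dealing": 2,
    "usage of trade": 3,  # customs and trade practices of the industry
}

def controlling(conflicting_manifestations):
    """Return the manifestation that governs an irreconcilable conflict."""
    return min(conflicting_manifestations, key=PRECEDENCE.__getitem__)

print(controlling(["usage of trade", "course of dealing"]))     # course of dealing
print(controlling(["course of performance", "express terms"]))  # express terms
```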
Even though customs and trade practices of the industry are at the bottom in the previous list of precedences, they are not unimportant. They cannot be used to override clear express language in the contract, but when a contract is ambiguous, consideration of customs and trade practices often removes the ambiguity. Words and terms will be given their ordinary and customary meaning. In particular, technical terms and usages will be given meaning according to the customs and trade practices of the industry.
For instance, a New York general contractor was held to have breached the contract when they withheld money retained from a subcontractor until final approval and acceptance of the prime contract, when the subcontract contained express language stating, “Any balance due the subcontractor shall be paid within 30 days … after his work is finally approved and accepted by the Architect and/or Engineer.” The general contractor argued, “By trade custom and usage, the general contractor always withholds money retained from the subcontractor pending final approval and acceptance of the total job.”
Unimpressed by this argument, a New York court concluded, “There is no reason to resort to trade practices or evidence of custom for an interpretation when the contract is unambiguous” and that under such circumstances, the subcontract clause “may not be changed by an attempt to invoke trade custom.”[3]
However, in a case in which a contract, on its face, was clearly ambiguous, another court relied on customs and trade practices of the industry to help determine the probable meaning of the ambiguous provision.[4]
Parol Evidence Rule
Construction practitioners should be familiar with the parol evidence rule. Parol, or extrinsic, evidence is evidence of the intent of the parties other than the express provisions of the contract itself. Specific examples include the following:
• Previous oral or written understandings or agreements between the parties, such as records of the negotiations leading to contract formation. This category also includes letters and other written forms of communications.
• Course of performance and course of dealing.
• Customs and trade practices of the industry.
If an express contract provision is clear and prominent, it matters not how it got that way—courts will give it full force and effect and will not consider parol evidence.
Only when the contract is not clear does parol evidence become important. Then, courts will apply the tests just discussed to attempt to resolve the ambiguity by determining the intent of the parties.
The following cases illustrate these points. In the first, the Supreme Court of South Carolina would not permit the introduction of parol evidence consisting of an oral agreement that contradicted the terms of an unambiguous written contract. A general contractor on a HUD housing project had issued subcontracts for interior plumbing to a plumbing subcontractor and for utility work to a second subcontractor. The utility subcontract clearly stated that the subcontractor was to perform its work in conformity with the plans and specifications. The court found that the plans and specifications clearly placed the obligation to pay water and sewer tap fees on the utility subcontractor. The general contractor paid these fees and withheld that amount from monies otherwise due the utility subcontractor. A trial court permitted the introduction of parol evidence to the effect that an independent oral agreement between the general contractor and the utility subcontractor provided that the subcontractor would not be required to pay the fees. When the trial court found in favor of the subcontractor, the general contractor appealed, asserting that the lower court had violated the parol evidence rule in allowing the introduction of evidence relating to the independent agreement.
Agreeing with the general contractor, the Supreme Court reversed the trial court, stating:
Where the terms of a written agreement are unambiguous, extrinsic evidence of statements made contemporaneously with or prior to its execution are inadmissible to contradict or vary the terms….
Under the written subcontractor agreement, Ward (utility subcontractor) was responsible for the tap fees. We hold the terms of the written contract were contradicted in direct violation of the parol evidence rule.[5]
On the other hand, the Supreme Court of Nevada permitted the introduction of parol evidence to determine the true intent of the parties when it found that the contract was ambiguous. The subcontract for the installation of a roof on a new warehouse resulted in a dispute when the owner withheld payment from the general contractor, alleging that the installed roof did not comply with the contract specifications. A trial court ruled for the general contractor, who had sued to recover the withheld payments. The owner appealed, claiming that Johns-Manville roofing specifications were required by the contract, but that the contractor had installed the roofing in accordance with Bird specifications, in violation of the contract. The owner conceded that both specifications were considered prior to execution of the contract but that only the Johns-Manville specifications were integrated into the final agreement and that evidence submitted by the contractor relating to the Bird specifications violated the parol evidence rule.
After review of the trial record, the Supreme Court found that, although the contract referenced the roofing specifications, nowhere in the document was it stated which set of specifications was intended. The contract was, therefore, ambiguous. For this reason, the court held that the lower court properly admitted parol evidence to determine the intent of the parties. The lower court decision in favor of the general contractor was affirmed.[6]
Doctrine of Contra Proferentem
When a contract provision is ambiguous and all of the preceding steps, including consideration of parole evidence, fail to resolve the ambiguity, the doctrine of contra proferentem will control. This rule requires that the meaning of an ambiguous contract provision be construed against the drafter. The drafter is the party that had the opportunity to make the provision clear, and the drafter bears the burden of failure to do so.
The rule cannot be successfully invoked simply because one party does not agree with the other party’s interpretation of a particular contract provision. The provision in question must be determined to be ambiguous—that is, the provision must be susceptible to more than one reasonable meaning before it can be construed against the party that drafted it.
As long as the claimant’s interpretation is reasonable and does not conflict with other provisions of the contract, it does not matter that the drafter also has a reasonable interpretation of the provision. After all, the word ambiguous means “subject to more than one reasonable meaning.” Therefore, the meaning of the provision will be construed against the drafter by acceptance of the claimant’s interpretation, even though the drafter’s interpretation is also reasonable.
In arguments concerning contract ambiguity, owners frequently take the position that they wrote the specifications, know what they meant to say, and therefore their interpretation of the specifications controls. Courts have little sympathy for this argument. In one case, the U.S. Court of Claims (now the United States Court of Federal Claims) said:
A government contractor cannot properly be required to exercise clairvoyance in determining its contractual responsibilities. The crucial question is “What plaintiff (non-drafting party) would have understood as a reasonable construction contractor,” not what the drafter of the contract terms subjectively intended.[7]
Similarly, the U.S. Court of Appeals said that when dealing with the question of contract ambiguity, a court should
… place itself into the shoes of a reasonable and prudent contractor and decide how a contractor would act in claimant’s situation.[8]
When the court finds that the contract is ambiguous, the following cases illustrate the usual outcome.
In resolving an argument over payment for reinforcing steel accessories, the Engineer Board of Contract Appeals found the contract drafted by a mass transit district to be ambiguous in a way that was too subtle to create a duty for bidders to inquire. The transit district contended that payment should be made only for the weight of reinforcing steel detailed on the drawings, whereas the contractor argued that the specifications required that payment be made for accessories and welding rods as well. In finding for the contractor, the board concluded that the contract was ambiguous, but not so obvious that it imposed a duty on the contractor to inquire into its intended meaning at the time of bid. The contractor was paid for the weight of the accessories and welding rods.[9]
Similarly, the Court of Appeal of Louisiana found that a payment provision drafted by the State of Louisiana Department of Transportation and Development for the laying of drain conduit was ambiguous on whether the unit bid price for drain conduit included the work of placing and compacting backfill around the conduit. The contractor contended that backfill was to be separately paid, whereas the state’s chief engineer contended that backfilling the conduit was included in the unit price for laying the conduit. The court found that the contract was ambiguous and that the ambiguity was latent, thus excusing the contractor from inquiring at the time of bid. The contractor received separate payment for the backfill.[10]
The controlling principle is well defined by the words of the U.S. Court of Claims (now the United States Court of Federal Claims) in one of the leading cases on this point:
When the Government draws specifications which are fairly susceptible of a certain construction and a contractor actually and reasonably so construes them, justice and equity require that construction be adopted. Where one of the parties to a contract draws a document and uses therein language which is susceptible to more than one meaning, and the intention of the parties does not otherwise appear, that meaning will be given to the document which is more favorable to the party who did not draw it. This rule is specially applicable to Government contracts when the contractor has nothing to say as to its provisions.[11]
In each of these cases, it is important to note that the court found the contract to be ambiguous and that the ambiguity was latent, excusing the contractor from the duty to inquire about the intended meaning at the time of bid. If the court found that the contract was not ambiguous or that, even though ambiguous, the ambiguity could be cleared up by consideration of parole evidence, the disputed language would be given the meaning that the court determined to have been intended. If the court found that the disputed language was ambiguous but that the ambiguity was so obvious that the contractor should have inquired as to the intended meaning at the time of bid, the contractor’s claim that the disputed language be construed in its favor would fail.
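The preceding paragraph describes a sequential test. The sketch below encodes that sequence as a simple decision function; the returned strings are shorthand for the outcomes discussed in this chapter, not authoritative statements of law.

```python
def construe_disputed_provision(ambiguous, parol_evidence_resolves,
                                ambiguity_obvious_at_bid):
    """Decision path for a disputed provision, per the discussion above."""
    if not ambiguous:
        return "plain meaning governs"
    if parol_evidence_resolves:
        return "meaning established by parol evidence governs"
    if ambiguity_obvious_at_bid:  # patent ambiguity: duty to inquire at bid
        return "claim fails for failure to inquire"
    return "construed against the drafter"  # latent ambiguity

print(construe_disputed_provision(True, False, False))
# construed against the drafter
```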
Conclusion
The interpretation of a contract is a legal matter that in cases of dispute is not decided by laypersons. This chapter gave a brief overview of how judges and arbitrators approach the difficult problem of interpreting the meaning of a contract. Understanding these highlights makes the conduct of proper contractual relations easier for all participants in the construction process.
Questions and Problems
1. What are the three things explained in the introduction to this chapter that courts and others do to determine the meaning of disputed contract provisions?
2. What are five manifestations of intent discussed in this chapter? What do the terms “course of performance” and “course of dealing” mean?
3. What is meant by “reading the contract as a whole”? Can some provisions of the contract be regarded as meaningless?
4. Explain the order of importance of the following four manifestations of intent:
1. Course of dealing
2. Customs and trade practices
3. Express terms
4. Course of performance
5. What is the relative importance of separately negotiated terms and boilerplate language?
6. What is parol evidence? What is the parol evidence rule?
7. What is the doctrine of contra proferentem? Under what narrow circumstances will it be applied by a court? Is it negated when the drafter’s interpretation of a disputed contract provision is just as reasonable as the claimant’s? Why not?
8. The preprinted standard terms and conditions on the back of a purchase order for the supply of transit mix concrete to a project provided that payment for materials delivered would be made within ten days of the buyer’s receipt of payment from the project owner and that there would be no pay until the buyer had received payment from the owner. The face of the purchase order in one of the blank spaces under the section entitled “Additional Provisions” contained a typed-in statement that read: “Payment for all concrete delivered in the month will be made to Seller by the end of the following month.” Work started, and the contractor-buyer refused to pay the supplier-seller until the tenth day after receiving payment from the owner, which usually occurred 30 to 45 days later than the end of the month following delivery. The supplier protested each payment and, on completion of the work, sued the contractor for interest on the late payments, alleging breach of contract and citing the typed-in payment statement. The contractor contended that the typed-in statement was never intended to supersede the standard terms and conditions and the contractor only agreed to it because it was expected that the owner would pay early enough to permit payment to the supplier by the end of the following month.
What would the court’s likely decision be? Explain your answer in terms of the rules for contract interpretation discussed in this chapter.
9. A certain contract contained the following clause: Contractor shall execute and return the contract along with all required insurance policies and contract bonds within 10 calendar days of its delivery to contractor by owner. Notice to proceed shall be issued by owner within 15 calendar days of receipt of the executed contract, insurance policies, and bonds from the contractor. The work of the contract including punch list work and final cleanup of the site shall be completed within 210 calendar days from the date of notice to proceed. A separate clause provided for the assessment of \$1,000 per calendar day in liquidated damages for each day that the contract work remained uncompleted beyond 210 calendar days from the notice to proceed (NTP). The contractor completed the contract 295 calendar days after NTP, and \$85,000 in liquidated damages was withheld from the final contract payment. The contractor sued for the \$85,000 alleged to have been wrongfully withheld. The contractor claimed that they had been advised by an owner’s representative prior to the bid opening that the schedule was flexible and that “it would be all right” if the contractor did not make the required completion date and that there would be no liquidated damages assessed.
1. Would the contractor’s suit be successful? Why or why not?
2. If the court conducted a trial, would it be likely that the contractor would be allowed to testify about the claimed pre-bid understanding? Why or why not?
3. With respect to question b., if the contractor had a written note from the owner’s representative confirming what the contractor was told pre-bid, would the chances of success be enhanced? Would the contractor be allowed to introduce the note as evidence at the trial? Why or why not?
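The withheld amount in question 9 is straight arithmetic and can be checked directly (all figures taken from the problem statement):

```python
ACTUAL_COMPLETION_CD = 295  # calendar days after NTP
ALLOWED_CD = 210            # contractual performance period
LD_RATE = 1000              # liquidated damages, $ per calendar day

overrun = ACTUAL_COMPLETION_CD - ALLOWED_CD
print(overrun, overrun * LD_RATE)  # 85 85000, matching the $85,000 withheld
```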
10. A clause in the technical specifications of a contract for the construction of a 3,500,000 CY embankment reads as follows: Fill material shall be spread in six-inch lifts and compacted by a maximum of four passes of a Caterpillar 825C compactor. The minimum compacted density shall be 95% modified Proctor density. When compaction tests were taken during contract performance, it was found that six to nine passes of the 825C compactor were required to obtain 95% modified Proctor density. The engineer directed the contractor to compact the embankment to 95% modified Proctor density. After filing a letter of protest and notice of claim for the additional compaction costs, the contractor complied and made the additional passes. Following project completion, the contractor sued for the extra costs, alleging a constructive change. The contractor testified in court that they thought the specification provision meant that no more than four passes would be required and that the sentence about the required density being 95% modified Proctor was included because it was thought that this density would be achieved with less than four passes. An engineer, testifying on behalf of the owner, said that he had written the specification and knew what it meant, which was that the 95% modified Proctor density must be met and that the sentence about the four passes was included because it was expected that the 95% density would be achieved with no more than four passes. What would the court’s likely decision be? Explain your answer in terms of the rules for contract interpretation discussed in this chapter.
1. Bender v. Rookhuizen, 685 P.2d 343 (Mont. 1984).
2. Rad-Razorback Limited Partnership v. Coney, 713 S.W.2d 462 (Ark. 1986).
3. Cable-Wiedemer, Inc. v. A. Friederich & Sons Co., 336 N.Y.S.2d 139 (Cnty. Ct. 1972).
4. Hardware Specialties, Inc. v. Mishara Constr. Co., Inc., 311 N.E.2d 564 (Mass. App. 1974).
5. Southern States Supply Co., Inc. v. Commercial Industrial Contractors, Inc., 329 S.E.2d 738 (S.C. 1985).
6. Trans Western Leasing Corp. v. Corrao Constr. Co., Inc., 652 P.2d 1181 (Nev. 1982).
7. Corvetta Constr. Co. v. United States, 461 F.2d 1330 (Ct. Cl. 1972).
8. P. J. Maffei Bldg. Wrecking Corp. v. United States, 732 F.2d 913 (Fed. Cir. 1984).
9. Appeal of George Hyman Construction Co., ENGBCA No. 4506 (Sept. 29, 1981).
10. Johnson Brothers Corp. v. State of Louisiana, 556 So.2d 154 (La. App. 1990).
11. Peter Kiewit Sons, et al. v. United States, 109 Ct. Cl. 390 (1947).
Key Words and Concepts
• The “put-it-in-writing” rule
• Definition of documentation
• The value of good documentation
• Hearsay
• Job records exception to hearsay rule
• Conditions for introduction of job records
• Letters of transmittal/submittal
• Letters of dispute or protest
• Confirmations and meeting minutes
• Routine job records
• Contractual notices, orders, or directives
• Personal diaries
• Job document matrix
Previous chapters have been replete with references to the importance of well-kept job records in preserving the contractual rights of all parties to the construction process. Another name for well-kept job records is “good documentation,” the subject of this chapter.
Documentation
Good documentation on a construction project does not just happen. It is the result of careful preplanning and a concerted effort at all levels of the field organization. It also requires constant application of the “put-it-in-writing” rule.
“Put-It-in-Writing” Rule
The “put-it-in-writing” rule is one of the cardinal rules of good contract administration, if not the cardinal rule. It is much easier to state than to implement. Self-discipline and strong work habits are required to detail in writing the thousands of daily occurrences on an active construction job, even though you may know that the potential value of such writings far outweighs the effort required to produce them.
Events should be recorded as they occur or shortly afterward, not at some later time. Anyone with construction experience knows how intense daily activity can become and how difficult it is to take the time to make a written record of something that has just occurred. Often, this is just not possible at the moment, but it ordinarily can be done at the end of the day or at least by the end of the following day. Even records prepared within a week of the event are more valuable than no records at all. One useful technique is to dictate into a hand-held recorder kept constantly nearby, replacing the tape at the start of each new day. The information from the previous day’s tape can be transcribed by an office associate or stenographer into a daily job diary, a permanent written record. Once transcribed, the tape can be reused on the third day. Such daily records are detailed and extremely valuable for later reference. Writings prepared a week or more after the event have little or no value as a job record. By this time, they are more “recollections” than records.
The writer vividly recalls the usefulness of this type of record keeping on a tunnel project executed by his company in the mid-1970s. The project consisted of two parallel soft-ground, shield-driven tunnels under compressed air for a subway project in Baltimore. The schedule required two headings to be driven simultaneously, three shifts per day, five days per week. Each of the three shifts was supervised by a “walker”—a tunnel superintendent—who reported to the general tunnel superintendent. The general tunnel superintendent’s home was in southeastern Washington, D.C., a 75-minute drive from the jobsite. His practice was to arrive at the job early in the morning prior to the end of the graveyard shift so that he could visit each heading during that shift and talk to the graveyard walker. He remained on the job throughout the day shift and stayed long enough into the swing shift to observe conditions in both headings and to talk to the swing shift walker. For this reason, the general tunnel superintendent was intimately familiar with the details of the work in each of the two headings for each of the three shifts of the day. He then dictated the events of the day into a hand-held recorder while waiting in traffic between Baltimore and Washington on his trip home, completing the dictation on the reverse trip from Washington to Baltimore early the next morning. On reaching the jobsite, the cassette for the previous day’s activities was given to the project secretary in exchange for a clean cassette. The secretary typed the dictation each day and returned the copy to the general tunnel superintendent who edited the typed record, making any necessary corrections.
A major differing site condition was encountered during the project, which resulted in a claim for additional compensation and contract time that was litigated before the Maryland Board of Contract Appeals. During the three-week hearing, both the Transit Authority and the contractor almost totally relied on the contractor’s job records, including the daily reports resulting from the general tunnel superintendent’s dictation. Although the language in these reports was sometimes quite colorful, the reports proved invaluable in securing a successful board ruling.
What Is Documentation?
Written work products that are mere recitations or summaries written long after events occur are often incorrectly represented as documentation. Such written work products may be useful as effective tools of persuasion in a dispute resolution proceeding, but they are not documentation. Contemporaneous written records of the facts themselves are documentation, but the recitations and summaries are not.
Written opinions of persons who were not present at the events in question also do not constitute documentation, no matter how experienced and knowledgeable the persons may be. Such expert opinions are important and useful in successfully resolving disputes and may be heavily relied upon by courts and arbitrators, but they are not documentation.
Documentation consists of the writings or records of persons who were present at events, written at the time or shortly after the time of the event. In many instances, it may be the only evidence in existence that reveals what actually occurred.
Value of Good Documentation
Good documentation is invaluable in resolving misunderstandings before they escalate into disputes. One party to a misunderstanding may have an incomplete or incorrect picture of the facts of an event or occurrence on the project. Good documentation of the true facts in the possession of the other party is very effective in clearing up the misunderstanding, thus avoiding a potential dispute before it starts.
If a dispute does arise that cannot be resolved short of litigation or arbitration, the party that can produce carefully prepared authentic job records supporting its position usually will prevail. The litigation or arbitration usually occurs sometime after the completion of the project involved. The actual participants in events, such as the engineers, foremen, and superintendents who were assigned to the project are often not available to testify because they have been transferred to other work, have left the employ of a party to the contract, or even, in some instances, have died. The existing job records, properly prepared by these persons, usually may be introduced and accepted as valid evidence of what actually occurred on the project without the necessity of the person who created the records appearing in court and personally testifying.
The home office principals of the parties involved, such as owners and company officers, usually are more readily available to testify, and they may be knowledgeable about what occurred on the job because their subordinates orally reported events to them at the time. However, they are not permitted to testify about what occurred or did not occur on the job because they were not there; and oral statements made to them by their subordinates are hearsay. Hearsay is a communication that is secondhand. The person “knows” some fact only because someone else told it to them, not because the person was present at events and knows the fact to be true on the basis of firsthand knowledge. Since these persons are not allowed to testify, the presentation in court of good documentation of events may be the only way to prove what actually occurred.
Exceptions to the Hearsay Rule
Although hearsay generally may not be admitted as evidence in court, there are certain exceptions. One such exception important to the construction industry is that, subject to certain rules, construction job records (which are hearsay in written form) are usually permitted to be introduced and accepted as evidence. The federal rules for acceptance of job records as evidence are quite broad, with the result that the records will be admitted if they can be authenticated as genuine. Some state jurisdictions are more restrictive, but properly authenticated job records will generally be admitted.
Conditions for Introduction of Job Records
In most cases, satisfaction of the following conditions permits the introduction of job records as evidence in court:
• It must be established that the persons who prepared or originated the records were actually present at the events covered and were in a position to have accurate knowledge. For instance, no one could reasonably argue that a crew foreman’s signed and dated time card was not prepared by a person who was present on the job and who had accurate knowledge.
• The records must have been prepared in the normal course of business—that is, it must be shown that the records are of a type that would normally be prepared under the circumstances existing at the time of preparation. For instance, foreman’s time cards, project daily progress reports, and accident reports are all clearly the type of documents routinely prepared in the normal course of the business of construction companies. Other examples are daily diaries, weekly and monthly cost reports, force account records, tax returns, material delivery tickets, records of work quantities measured for payment, and so on.
• The records must have been prepared at the time of events, or reasonably soon thereafter.
• There must be no suggestion or intimation that the records were prepared for the specific purpose of use in litigation. Such a suggestion impugns the objectivity and believability of the records.
Typical Job Records
By way of example, the following is a discussion of 20 typical construction job record documents. Each document is intended to serve specific purposes. To be certain that these purposes are served, each must be carefully drafted and must contain certain necessary elements. The specific job records are:
1. Letters of transmittal
2. Letters of submittal
3. Notice of claim for constructive change
4. Notice of claim for constructive suspension
5. Notice of claimed delay
6. Request for time extension
7. Notice of acceleration
8. Notice of differing site conditions
9. Letter requesting information/interpretations
10. Letter disputing instructions/interpretations
11. Letter advising proceeding under protest
12. Confirmations of instructions or agreements
13. Minutes of meetings
14. Project daily reports
15. Force account time and materials records
16. Cross-sections and other records of work performed
17. Foremen’s daily time cards
18. Material delivery tickets
19. Contractual notices—that is, NTPs, notice to correct deficiencies, notices of suspension, termination, and so on
20. Personal diaries
For discussion purposes, it is useful to consider these types of documents in a series of six closely related groups.
Letters of Transmittal and Submittal
The first group consists of letters of transmittal and letters of submittal (documents 1 and 2). Both are similar in that each is a cover document for some other document of importance, such as a contract, purchase order, subcontract, drawings, schedules, and the like. Each of these documents has two aims: to establish a record of precisely what was transmitted or submitted and a record of the date that the transmittal or submittal was made. It is not difficult to understand the importance of both of these pieces of information with regard to the liability question if, for example, a series of concrete footings, poured according to superseded construction drawings, had to be demolished and repoured. Were the footings wrongly poured because the owner’s engineer failed to transmit the revised drawings to the contractor? Or was it because of poor drawing control by the contractor, who left the revised drawings rolled up in the corner of the job trailer and poured the footings according to the original drawings? The letter of transmittal of the revised drawings, if properly drafted, will settle this question.
Letters of submittal differ from letters of transmittal in one important way. Letters of transmittal do not imply or state that an approval is required or sought, whereas letters of submittal do indicate a request for approval. Both usually require an acknowledgement of receipt. Letters of transmittal are typically used to send drawings, specifications, prime contracts, purchase orders, subcontracts, change orders, certificates of insurance and similar documents, whereas letters of submittal are used to send material samples, shop drawings, proposed CPM schedules, proposed methods or procedures for carrying out the work, and the like. Preprinted forms for both letters of transmittal and letters of submittal are in common use today.
Letters of Notice
The second group consists of the typical contractor notices required by the “red flag” clauses of most construction contracts. All of these (documents 3 through 8) contain the same two basic elements as the first group—that is, they describe or identify an event or subject to which the notice pertains, and they establish a date of record that the notice was given. In addition, in each case, the contractor is taking a position. Therefore, each document should contain an additional element, stating the contractor’s position and the basis for believing that the position is correct. In addition to the three preceding elements, the notices in this group should contain other elements, depending on the specific notice. For instance, the notice of acceleration (document 7) should make clear that the contractor is accelerating construction operations and expects to be paid the extra costs of the acceleration. Similarly, the claim for constructive change (document 3), the claim for constructive suspension (document 4), the claimed delay (document 5), an independent request for a time extension (document 6), and the notice of differing site conditions (document 8) should all make the contractor’s position clear and that additional time and money are being requested.
Letters Requesting or Disputing Instructions or Letters of Protest
The third group consists of a letter requesting information or instructions (document 9), a letter disputing or taking exception to instructions previously furnished by owner or engineer (document 10), and a letter advising that the contractor is proceeding under protest (document 11). The two elements of identification and establishment of a date of record are required as for all the other documents. In addition, document 10, which disputes instructions or interpretations, should explain that a dispute exists and the reason that the instructions or interpretations have been disputed. The letter advising proceeding under protest (document 11) must make clear that a dispute exists, that the contractor is proceeding under protest, and that additional time and money are expected.
Confirmations and Meeting Minutes
The fourth group includes confirmation of instructions or agreements (document 12) and minutes of meetings (document 13). Both possess the two elements of identification and establishment of a date of record and, in addition, contain an element that confirms an understanding of a conversation, meeting, or instructions received. Such letters can relieve the recipient from the necessity of replying by indicating that if no advice to the contrary is received, the understandings stated in the letter or meeting minutes will be regarded as correct.
Routine Job Records
The fifth group—daily reports (document 14), force account records (document 15), cross-section data and other measurements of work performed (document 16), foremen’s daily time cards (document 17), and material delivery tickets (document 18)—all share a common attribute. They are all forms of routine job records required to operate the project. Their purpose is to record facts about what has occurred. There are only two elements: recording facts and establishing the date that the facts were recorded.
Contractual Notices, Orders, or Directives
This class of project documents includes the more formal type of notice, order, or directive, required by the contract to be given by the owner or construction manager to the prime contractor, or by the prime contractor to subcontractors. Such things as notice of award of contract or subcontract, notices to proceed, stop orders, cure notices (order to remedy defaults), suspension of work or acceleration directives, and termination notices (document 19) are all included in this category. Although less frequent than other job documents, their importance is obvious. They should be drafted with great care and must contain some mechanism to establish the fact and date of delivery.
Personal Diaries
Many construction executives and managers maintain personal diaries (document 20) on a routine basis, entering facts about important meetings or events shortly after they occur when recollection is fresh. Such diaries are highly regarded as probative evidence in construction disputes, provided the entries are factual and not unduly editorialized.
The writer maintained this type of daily diary throughout his contracting career. These diaries repeatedly were effectively used in dispute resolution, including use as trial exhibits in court and in hearings before administrative boards. However, such diaries must be factual, inasmuch as they are subject to discovery during litigation. For this reason, some in the industry do not keep diaries because they regard them to be a two-edged sword. However, the writer’s experience has been that the benefits to be gained in maintaining a detailed diary far outweigh the drawbacks. On one occasion, the writer’s original diaries were subpoenaed by the federal government as evidence in a criminal trial involving other parties and were not returned for a number of years. These experiences should make clear the importance that courts, arbitrators, and other dispute resolution bodies place on this type of record.
Job Document Matrix
The relation of the various necessary elements just discussed to the documents themselves is represented diagrammatically by the job document matrix shown in Figure 21-1.
[Figure 21-1: Job document matrix (table not reproduced here)]
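One way to put the matrix to work is to encode the document-to-element relationships as a lookup table and check draft documents against it. The sketch below is illustrative only; it is not a reproduction of Figure 21-1, and the group and element names are paraphrases of the discussion above.

```python
# Illustrative encoding of the document-group / required-element
# relationships described in this chapter (names are paraphrases,
# not the actual Figure 21-1 labels).

REQUIRED_ELEMENTS = {
    "letter of transmittal": {"identify item sent", "date of record"},
    "letter of submittal":   {"identify item sent", "date of record",
                              "request for approval"},
    "letter of notice":      {"identify event", "date of record",
                              "contractor's position and basis"},
    "letter of protest":     {"identify event", "date of record",
                              "statement that a dispute exists",
                              "proceeding under protest"},
    "confirmation/minutes":  {"identify subject", "date of record",
                              "confirm understanding",
                              "silence-equals-agreement statement"},
    "routine job record":    {"record facts", "date facts recorded"},
}

def missing_elements(doc_type, elements_present):
    """Return the required elements a draft document still lacks."""
    return REQUIRED_ELEMENTS[doc_type] - set(elements_present)

# Example: a submittal letter drafted without an approval request
draft = {"identify item sent", "date of record"}
print(missing_elements("letter of submittal", draft))
# -> {'request for approval'}
```

A checklist of this kind can be applied before any project correspondence leaves the office, which is far cheaper than discovering a missing element during dispute resolution.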
Conclusion
Most construction documentation, particularly correspondence, is generated during the “heat of battle” on active construction projects. There are usually two sides to every issue, and each person’s view of the situation will be highly influenced by “where he or she sits in the stadium.” The purpose in writing a letter to an opposite number should not be to vent one’s spleen, but by being factual and professional, to convince the other of the correctness of one’s position. Unfortunately, much actual construction correspondence overlooks this simple truth.
Questions and Problems
1. What is the cardinal rule of good contract administration? At what point should the rule be exercised to result in good job documents?
2. Does the term documentation include recitations or summaries of events written after the fact? Are later written opinions of qualified construction experts considered to be documentation?
3. Why are good job records useful in construction litigation?
4. Why are home office principals often not permitted to testify in court about events that occurred on the project?
5. What is hearsay? Does the hearsay rule usually apply to construction project records?
6. What are the four requirements that must be met before project records may be presented as evidence in court?
7. What is the difference between a letter of transmittal and a letter of submittal?
8. What ten separate elements of various project documents are discussed in this chapter?
9. What is the general purpose of construction correspondence dealing with disputed matters that is so often overlooked in practice? | textbooks/biz/Business/Advanced_Business/Construction_Contracting_-_Business_and_Legal_Principles/1.21%3A_Documentation_and_Records.txt |
Key Words and Concepts
• Claim definition
• Change in contract time
• Written claim notice
• Causal event
• “Proximate” costs
• Detailed claim submittal
• Impact costs
• Entitlement element
• Extended indirect costs
• Quantum element
• Escalation costs
• Failure to give notice
• Severe weather costs
• Waiver
• Decreased efficiency of work performance
• Constructive notice
• Claim processing procedure
• Industry published inefficiency factors
• Time limits for owner’s consideration
• Excessive overtime
• Change in contract price
• Comparison with bid estimate
• “Cost of the work”
• “Measured mile” analysis
• Contractor’s fee
• Comparison with other contracts
To some construction owners, A/Es, and CMs, “claim” is an unsavory word. They regard claims as hostile assaults on their management of the contract and contractors who file them as devious and unscrupulous. On the other hand, some contractors indiscriminately file claims whether they are contractually justified or not. In reality, filing a claim is the only contractually provided procedure by which either party to the contract can openly and fairly assert their position regarding contract time or money when disputes arise. Filing a claim is the first step in the contractually provided dispute resolution process.
This chapter highlights some of the more important aspects of this complicated subject.
Threshold Matters
Nearly all claims originate with the contractor following the occurrence of a contract dispute. The Massachusetts Water Resources Authority contract for the construction of the Inter-Island Tunnel in Boston Harbor defined a claim as follows:
A claim means a written demand or assertion by the Contractor seeking an adjustment in Contract Price and payment of monies so due, an extension or shortening in Contract Time, the adjustment or interpretation of Contract terms, or other relief arising under or relating to the Contract following denial of a submittal for change under Article 10….
By the above definition, a contractor’s claim is triggered by the owner’s denial of a contractor’s proposal for a change in contract price or time. The claim is a response to the owner’s denial, which now takes the form of a demand for a stated amount of money or time, which demand becomes subject to the dispute resolution provisions of the contract. The original contractor’s proposal for an adjustment in contract price or time that triggers the owner’s denial typically originates for one of the following reasons:
• The owner issues a formal change order or change notice adding contract work or making changes in original contract work, originally specified working conditions, or originally permitted construction methods. The occasion for the owner’s issuing of the change order or change notice could be a desired scope change in the finished project or because acknowledged differing site conditions had been encountered by the contractor. In these situations, the contractor is required to propose a change in contract price, time, or both in response to the change order or change notice.
• The contractor, believing they have encountered differing site conditions, so notifies the owner and requests (proposes) that the owner issue a change order for an appropriate increase in contract price, time, or both.
• The contractor, believing that instructions received from the owner constitute a constructive change to the contract, so notifies the owner and requests (proposes) that the owner issue a change order for an appropriate increase in contract price, time, or both.
• Some causal event that the contractor believes is compensable or excusable has occurred and the contractor, after so notifying the owner, requests (proposes) that the owner issue a change order increasing the contract price, time, or both.
In order for a contractor’s claim to be valid, it usually must be established by written notice submitted to the owner within a stated number of days after the occurrence of the event giving rise to the claim. For instance, the Massachusetts Water Resources Authority contract provides that
For any claim under this article to be valid, it shall be based upon written notice delivered by the Contractor to the Authority promptly, but in no event later than twenty-one (21) days, after the occurrence of the event giving rise to the claim and stating the general nature of the claim (underline added for emphasis).
In the context of this type of contract provision, the “event giving rise to the claim” (causal event) could be the owner’s denial and refusal to issue a change order in accordance with a contractor’s cost/time proposal, or the occurrence of some event that the contractor believes is either compensable or excusable under the terms of the contract.
Once the written notice of claim has been delivered to the owner by the contractor, most contracts provide that the contractor’s detailed claim submittal, supported by a CPM schedule analysis in cases involving a demand for additional contract time, be submitted to the owner within a stated number of days following the notice. Some contracts require the detailed claim to be submitted within a stated number of days following the occurrence of the event that gave rise to the claim notice. For instance, the federal government contract prescribes that the contractor’s detailed claim proposal be submitted within 30 days after the furnishing of a written claim notice, while the Massachusetts Water Resources Authority contract prescribes that the detailed claim submittal be submitted within 60 days after the occurrence giving rise to the claim notice.
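The day counts in these clauses are easy to track mechanically. The following Python sketch uses the 21-, 60-, and 30-day figures from the examples above; the dates in the usage example are hypothetical, the deadlines are computed as plain calendar days, and a given contract’s own definition of “days” always governs.

```python
from datetime import date, timedelta

def claim_deadlines(event_date, notice_date=None):
    """Sketch of the deadline arithmetic described in the text.

    MWRA-style: written notice no later than 21 days after the causal
    event; detailed submittal within 60 days after the event.
    Federal-style: detailed submittal within 30 days after the notice.
    """
    deadlines = {
        "notice (MWRA, 21 days after event)": event_date + timedelta(days=21),
        "detail (MWRA, 60 days after event)": event_date + timedelta(days=60),
    }
    if notice_date is not None:
        deadlines["detail (federal, 30 days after notice)"] = (
            notice_date + timedelta(days=30)
        )
    return deadlines

# Example: causal event on March 1, written notice delivered March 15
d = claim_deadlines(date(2024, 3, 1), notice_date=date(2024, 3, 15))
print(d["notice (MWRA, 21 days after event)"])  # 2024-03-22
```

Tracking these dates in a project log, rather than relying on memory, guards against the waiver consequences discussed in the next section.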
The contractor’s detailed claim submittal must clearly (1) establish that the contractor is entitled by the terms of the contract to an adjustment in contract price, time, or both (the entitlement element), and (2) establish the amount of the claimed dollar change in contract price or the claimed number of calendar days of contract time extension (the quantum element).
“Red Flag” Contract Provisions
The above discussed matters pertain to contract claims in general. Some of the more specific provisions that contractors should be particularly alerted to include the following.
Notice Requirements
Perhaps the most important provision regarding claims is that requiring the giving of notice by the contractor that they are filing a claim. Most contracts provide explicitly that failure to give notice within the number of days specified in the contract after the event giving rise to the claim results in waiver of the contractor’s right to file a claim. Contractors sometimes avoid waiver of their claim rights by showing that the owner was aware of the event giving rise to the claim and had constructive notice of the contractor’s intention to seek monetary or contract time relief in respect to that event. This is particularly true for claims for an extension of contract time following an obvious excusable event such as a labor strike or a flood shutting down the entire project for a finite period of time. However, “dodging the bullet” in this manner is risky and should be avoided by strictly complying with the notice requirements stated in the contract.
The contractor should always follow up the initial notice of claim with a submittal explaining their detailed claim position, citing relevant contract language supporting entitlement to monetary relief, time relief, or both, as well as detailed calculations establishing the dollar amount claimed and, in cases where an extension of contract time is claimed, a CPM analysis supporting the number of calendar days of contract time extension claimed. The entitlement explanation can always be submitted promptly but, in many cases, costs and contract time associated with the claimed event may be ongoing and cannot be finalized until the total impact of the event has been experienced, sometimes many months after the onset of the event. In these situations, it is common for the contractor to submit best estimates of the monetary and time quantum, subject to later correction when final actual figures are available.
Claim Processing Procedure
The contract usually prescribes the procedure for processing the claim once the contractor has properly submitted it. In some contracts, these procedures are relatively straightforward, resulting in reasonably prompt consideration of the claim by the owner’s engineer or construction manager. The owner usually awaits the recommendation of their engineer or construction manager before communicating their position on the claim back to the contractor, either accepting it, denying it, or accepting in part and denying in part. The contractor then must either accept the owner’s decision or dispute it and invoke the dispute resolution procedures of the contract, usually within a stated number of days after receiving the decision. Other contracts have extremely complicated and time-consuming procedures for consideration of contractor claims, often resulting in no serious consideration of the claim until the end of the contract work. Since the contractor often cannot invoke the dispute resolution procedures of the contract until they have received the owner’s final decision with respect to the claim, complicated and time-consuming claim consideration procedures unfairly penalize contractors who may have legitimate claim positions and seriously impact their fiscal liquidity. For this reason, many contracts contain provisions setting strict time limits for the owner’s consideration of contractor claims. Failure of the owner to furnish a final decision on the claim within these time limits is considered tantamount to a denial of the claim, which frees the contractor to immediately invoke the dispute resolution provisions of the contract. Contracts containing such time limits are far preferable from the contractor’s standpoint.
Prescribed Procedure for Determination of Adjustments of Contract Price and Time
If the claimed entitlement issue is resolved in the contractor’s favor, the monetary adjustment to the contract price or the number of calendar days of contract time extension with respect to the claim must each be determined. With regard to the first determination, the change in contract price, most contracts provide the following methods, listed in order of preference:
• By use of lump sum prices or unit prices in the contract bid schedule that were applicable to the original contract work.
• By mutual acceptance of new lump sum prices or unit prices to be applied to the claim work.
• If the owner and contractor do not agree to one of the above methods, on the basis of actual costs of the claim work determined from mutually accepted job records, plus a fee to cover contractor’s indirect claim costs and contractor’s profit—the so-called “cost of the work” method.
Since by their nature claims are contentious, the owner and contractor seldom agree on one of the first two methods and contract price changes are usually determined on the basis of the third method—that is, on the basis of the “cost of the work” plus a contractor’s fee.
The contract provisions defining the “cost of the work” are usually explicit and detailed. For instance, the Oakland County Drain Commission contract for the construction of a sewage retention treatment basin in Michigan included the following pertinent language:
25. BASIS FOR DETERMINING COST OF CHANGES IN THE WORK (CONTINUED)
“COST” as herein used shall be the actual and necessary costs incurred by the Contractor by reason of the change in the work for—
1. labor
2. materials
3. equipment rental
4. insurance premiums
1. Labor costs shall be the amount shown on the Contractor’s payrolls with payroll taxes added when such taxes can be shown to have been incurred. In no case shall the rates charged for labor exceed the rate paid by the Contractor for the same class of labor employed by him to perform work under the regular items of the Contract.
2. Material costs shall be the net price paid for material delivered to the site of the work. If any material previously required is omitted by the written order of the Owner after it had been delivered to or partially worked on by the Contractor and consequently will not retain its full value for other uses, the Contractor shall be allowed the actual costs of the omitted material less a fair market value of the material as determined by the Owner.
3. Equipment rental shall be the actual additional costs incurred for necessary equipment. Costs shall not be allowed in excess of usual rentals charged in the area for similar equipment of like size and condition; including the cost of necessary supplies and repairs for operating the equipment. No costs, however, shall be allowed for the use of the equipment on the site in connection with other work. If equipment not on the site is required for the change in the work only, the cost of transporting such equipment to and from the site shall be allowed. The rental rate established for each piece of Contractor-owned equipment, including appendices and attachments to equipment used, will be determined by the Rental Rate Blue Book for Construction Equipment Volume 1, 2, or 3 as applicable; the edition which is current at the time the work was started will apply. The established hourly rental rate will be equal to the “Monthly” rate divided by 176, modified by the applicable rate adjustment factor and the map adjustment factor, plus the “Estimated Operating Costs per Hour.” For equipment not listed in the Rental Rate Blue Book, Volume 1, 2, or 3, the rental rate will be determined by using the rate listed for a similar piece of equipment or by proportioning a rate listed so that the capacity, size, horsepower, and age are properly considered. In the event the machinery and equipment actually on the project site is idle for reasons beyond the control of the Contractor, the rental rate of the Contractor-owned equipment will be the “Monthly” rate divided by 176, modified by the applicable rate adjustment factor and the map adjustment factor, and then multiplied by 50%. No payment will be allowed for operating costs. This section applies to only machinery and equipment necessary for performance of the work in question.
4. Insurance premiums shall be limited to those based on labor payroll and to the types of insurance required by the Contract. The amount allowed shall be limited to the net costs incurred as determined from the labor payroll covering the work. The Contractor shall, upon request of the Owner, submit verification of the applicable insurance rates and the premium computations.
5. “Plus” as herein used is defined as a percentage to be added to the items of “Cost” to cover superintendents, use of ordinary tools, bonds, overhead expense, and profit. The percentage shall not exceed 15% on work done entirely by the Contractor and shall not exceed an aggregate total of 25% on work done by a Subcontractor.
6. “SPECIFIED MAXIMUM LIMIT OF COSTS” is the amount stated in the written order of the Owner authorizing the change in the work. The amount to be allowed the Contractor shall be the “cost” plus the “plus” percentage, or the specified maximum, whichever is the lesser amount.
This contract goes on to provide:
B. The Contractor shall keep complete, active, daily records of the net actual cost of changes in the work and shall present such information at the end of each working day as verified by the inspector, in such form and at such times as the Owner may direct.
C. If the Owner and Contractor can not reach mutual agreement in establishing the cost of changed work, the method of establishing said costs shall be on a cost plus basis.
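The hourly rental-rate arithmetic in the Blue Book clause quoted above (the “Monthly” rate divided by 176 hours, modified by the rate and map adjustment factors, with a 50% standby rate and no operating cost for idle equipment) can be sketched as follows. The rates and factors in the example are hypothetical illustrations, not figures from any actual Blue Book edition.

```python
def hourly_rental_rate(monthly_rate, rate_adj, map_adj, operating_cost_per_hr, idle=False):
    """Hourly rate per the quoted contract clause (all inputs hypothetical).

    Working equipment: (monthly / 176) * adjustment factors + operating cost.
    Idle equipment:    (monthly / 176) * adjustment factors * 50%, no operating cost.
    """
    base = monthly_rate / 176 * rate_adj * map_adj
    if idle:
        return base * 0.50
    return base + operating_cost_per_hr

# Hypothetical example: an $8,800/month machine, neutral adjustment factors,
# and a $35/hr estimated operating cost.
working = hourly_rental_rate(8800, 1.0, 1.0, 35.0)             # 8800/176 = $50 + $35 = $85/hr
standby = hourly_rental_rate(8800, 1.0, 1.0, 35.0, idle=True)  # $50 * 50% = $25/hr
print(working, standby)
```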
The Oakland County Drain Commission contract provision cited above pertains to determining contract price changes for claims that do not involve an extension of contract time. This contract also provides for a change of contract time in a separate article stating in pertinent part:
26. CHANGE OF CONTRACT TIME (CONTINUED)
B. The Contract Time may be extended in an amount equal to time lost due to substantial delay of a type or of a cause that could not reasonably have been foreseen or anticipated by the Contractor, and that is beyond the control of the Contractor or its Subcontractor, if the Contractor timely and properly asserts a claim pursuant to this section. Delays that may give rise to an extension of time, if such delays are substantial and otherwise come within the preceding sentence, include those caused by negligent acts or omissions by Owner or others excluding Contractor or its agents or its Subcontractors performing additional work as contemplated by Section 9, or caused by fires, floods, labor disputes not involving a dispute between Contractor or its subcontractors and their own employees, epidemics, or other “Act of God,” as that term is commonly understood.
The Massachusetts Water Resource Authority (MWRA) Inter-Island Tunnel contract contains language to a similar effect with regard to contract claims that do not involve an extension of contract time, but in its provisions regarding changes in contract time, the MWRA contract distinguishes between time extensions due to causes which are excusable only and those that are compensable as far as extended contract costs are concerned.
With regard to extensions of contract time for excusable causes, this contract provides:
11.12 CRITERIA FOR DETERMINING ADJUSTMENTS IN CONTRACT TIME
The Criteria to be used to determine an adjustment in Contract Time necessitated by changes ordered or negotiated pursuant to these General Conditions, or work covered by a submittal or a claim, are limited to the following:
11.12.1. An adjustment in Contract Time will be based solely upon net increases in the time required for the performance or completion of parts of the Work controlling achievement of the corresponding Contract Time(s) (Critical Path). However, even if the time required for the performance or completion of the controlling parts of the Work is extended, an extension in Contract Time will not be granted until all of the available Total Float is consumed and performance or completion of the controlling work necessarily extends beyond the Contract Time.[1]
11.12.2. The Authority may elect, at its sole discretion, to grant an extension in Contract Time, without the Contractor’s request, because of delays meeting the requirements set forth below.
11.12.3. An extension in Contract Time will not be granted unless the Contractor can demonstrate through an analysis of the Progress Schedule that the increases in the time to perform or to complete the Work, or specified part of the Work, beyond the corresponding Contract Time(s) arise from unforeseeable causes beyond the control and without the fault or negligence of both the Contractor and his Subcontractors, suppliers, or other persons or organizations, and if such causes in fact lead to performance or completion of the Work, or specified part in question, beyond the corresponding Contract Time, despite the Contractor’s reasonable and diligent actions to guard against those effects. Examples of such causes include: (1) Acts of God or of the public enemy; (2) Acts of the Government or of another Public Entity in its sovereign capacity; (3) Acts of another contractor in performance of a contract with the Authority; (4) Fires, floods, epidemics, quarantine restrictions; (5) sinkholes, archeological finds; (6) freight embargoes; (7) unusually severe weather; (8) a case of an emergency; (9) delays as itemized in this paragraph, to Subcontractors or Suppliers or other persons or organizations at any tier arising from unforeseeable causes beyond the control and without fault or negligence of either the Contractor or any such Subcontractors, Suppliers or other persons or organizations.
11.12.4. It is the intent of the Contract Documents that an extension in Contract Time, if any granted, shall be the Contractor’s sole and exclusive remedy for any delay, disruption, interference, or hindrance and associated costs, however caused, resulting from causes contemplated in this paragraph but not included under paragraph 11.13.
11.12.5. The provisions of this paragraph 11.12. shall govern and are applicable to Contractor requests, submittals or claims for acceleration in lieu of the alternate extension in Contract Time.
The MWRA contract then continues with article 11.13, dealing with changes in contract time due to compensable causes as follows:
11.13 CHANGES IN CONTRACT TIME MAY BE COMBINED WITH CHANGES IN CONTRACT PRICE:
It is the intent of the Contract Documents that an extension in Contract Time shall be combined with an appropriate increase in Contract Price to provide the Contractor with full remedy for any delay, disruption, interference, extension or hindrance caused by: Acts of the Authority in its contractual capacity in connection with changes in the Work, differing physical conditions or differing reference points; a case of an emergency, of uncovering work, or a suspension of work not excluded by another provision of the Contract Documents. However, no adjustment in Contract Price under this paragraph shall be provided: (1) to the extent that performance would have been so extended by any other cause, including fault or negligence of the Contractor, or his Subcontractors, Suppliers, or other persons or organizations; (2) for which an adjustment is provided or excluded under any other provision of the Contract Documents; (3) for acceleration costs in lieu of extension costs to the extent that the acceleration costs exceed those of the alternate extension in Contract Time; or (4) if delays merely prevent the Contractor’s achievement of completion of the Work, or part in question, ahead of the corresponding Contract Time(s). The Contractor shall be entitled to a Contract Price increase due to these delays, disruptions, extensions, interferences or hindrances only when delays extend the Work or specified part of the Work, beyond the applicable Contract Time(s) including any authorized adjustments.
Finally, the MWRA contract continues with article 11.14, placing a limitation on the costs allowed due to an extension in contract time:
11.14 COST OF THE WORK INVOLVED—EXTENSION IN CONTRACT TIME:
When determining the cost of the work involved to complement an extension in Contract Time, amounts shall be allowed only if related solely to the extension in Contract Time. …
As can be seen from the above, contract provisions governing changes to contract price and time are apt to be complicated. In the following sections of this chapter dealing with methods of proving the price and time quantum elements, the contract provisions of the type illustrated above must be strictly observed.
Methods of Proving Price and Time Quantum
Two separate general claim scenarios can each generate cost and time impacts. In the first scenario, some new element of work not present in the original contract is required to be accomplished either as a result of an owner directive or due to the contractor encountering differing site conditions. If the contract price and time change is to be forward priced by agreement between the owner and contractor, the quantum analysis is nothing more than making a cost and time estimate for performance of the added work by the same general methods as used for the original total project cost and time estimate. Alternately, if the price and time changes are to be determined retrospectively, it is only necessary to maintain mutually agreed accurate job records detailing the elements of contractually allowable cost and time adjustments previously discussed in this chapter.
In this first claim scenario, costs may consist of the “proximate” costs directly associated with the added work itself, incurred at the time and location where the work was added and the consequential or “impact costs,” which consist of (1) time-related costs such as increased indirect costs due to the extension of the contract period, escalation costs on unchanged work that is pushed into a period of increased labor rates or material prices, excess costs resulting from pushing original work into periods of severe weather that otherwise would have been avoided; and (2) the adverse effect of the added work on the efficiency of performance of related original contract work.[2]
The second general claim scenario is one where the original contract work is made more difficult as the result of (1) an owner directive changing the details of the original work itself, changing the conditions under which the original work must be constructed, or restricting the construction methods or equipment that is permitted to be employed for the original work; and (2) situations where the original work is made intrinsically more difficult for the above reasons due to the contractor encountering differing site conditions.
In this second claim scenario, proving time and price quantum always involves evaluation of a decrease in efficiency of performing work originally included in the contract. Several different methods utilized by contractors to prove decreased efficiency of original work performance are discussed and illustrated in the following sections.
Use of Industry Published Factors
The National Electrical Contractors Association and others have conducted studies and published factors claimed to quantify the loss of efficiency in performing unchanged work due to stacking of trades, frequent crew movements with associated starts and stops, frequent requirements for prolonged overtime, the necessity of going through a learning curve more times than would otherwise be necessary, and the general effect on morale due to continual changes and delays. Similarly, the Business Roundtable has published data illustrating the loss of efficiency when excessive overtime is worked on an extended basis.[3] Generally speaking, this method of proving decreased efficiency of work performance is met with skepticism by owners, courts, and arbitrators because of a lack of proof that the same conditions applying to the underlying studies were present in the claim situation involved.
Comparison with Contractor’s and Engineer’s Bid Estimates
Typically, the contract bid price is based on the contractor’s cost and time estimate prepared at the time of bid. Also, the owner’s engineer usually makes a parallel estimate indicating their assessment of a reasonable bid price under the competitive conditions existing at the time of bid. Either or both of these estimates provide a useful standard for measuring the adverse impact of causal events not contemplated at time of bid on the performance of unchanged original contract work.
In one case in the writer’s experience, both the contractor’s bid estimate and the engineer’s estimate for a TBM-excavated tunnel in rock, which had been impacted by excess water inflows not expected at the time of bid, were available to establish a reasonably expected rate of advance under the water inflow conditions indicated by the contract documents. In this case, the advance rate was also heavily dependent upon the tunnel excavation temporary support assumptions in each estimate. When the engineer’s estimate was adjusted to reflect the same temporary support assumptions as the contractor’s estimate, the adjusted engineer’s estimate advance rate was 121.8 ft. per day. The actual advance rate in the contractor’s estimate was 123.9 ft. per day. Alternately, when the contractor’s bid estimate was adjusted to reflect the same temporary support assumptions as the engineer’s estimate, the adjusted contractor’s estimate advance rate was 95.5 ft. per day. The actual advance rate in the engineer’s estimate was 95.1 ft. per day. It would be difficult to argue in this case that these estimates did not provide a reasonable figure for the tunnel advance rate that could be expected under as-bid water inflow conditions.
Measured Mile Analysis
The most reliable methodology to prove impact of an adverse causal event on the efficiency of original unchanged work performance is to compare work performance in an impacted area of the project with the performance of identical work in an area of the project that was not impacted by the causal event—that is, the so-called “measured mile” analysis. The following two examples, taken from the writer’s claim evaluation experience, illustrate the use of this method.
During pile-driving work for steel bearing piling supporting an outfall sewer, the contractor alleged a loss of productivity caused by excessive owner directed changes, extra work, and owner-caused delays experienced during pile-driving operations. The contractor had based their claim on a productivity of 11 piles per day, which they stated was their “as-built” production for pile-driving work where the alleged interferences were not encountered. However, a study of as-built driving performance unimpacted by the claimed causal events revealed a production of 159 piles in 16.33 days for an average of 9.74 piles per day. The contractor’s actual productivity in driving 434 piles that was impacted by the claimed causal events was 434 piles in 63.0 workdays for an average of 6.89 piles per day. The extra pile-driving time on account of the claimed interferences therefore was:
Actual time required = 63.0 workdays
Time required at 9.74 piles per day = 44.6 workdays
Additional time required = 18.4 workdays
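The measured-mile arithmetic above reduces to a baseline productivity taken from the unimpacted work, an expected duration for the impacted work at that baseline rate, and the difference from the actual duration. A minimal sketch using the figures from the example:

```python
# Measured-mile productivity comparison from the pile-driving example above.
unimpacted_piles, unimpacted_days = 159, 16.33  # driving free of the claimed causal events
impacted_piles, impacted_days = 434, 63.0       # driving affected by the claimed causal events

baseline = round(unimpacted_piles / unimpacted_days, 2)  # 9.74 piles per workday
expected_days = round(impacted_piles / baseline, 1)      # 44.6 workdays at the baseline rate
extra_days = round(impacted_days - expected_days, 1)     # 18.4 additional workdays
print(baseline, expected_days, extra_days)
```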
In another example from the same project, the contractor claimed that productivity losses in the Stage 2 phase in the construction of a bypass conduit from the productivity achieved during the Stage 1 phase were incurred due to delays caused by owner-directed strengthening of the Stage 2 conduit, differing site conditions caused by leakage from the existing outfall sewer, and numerous other directed changes in the work. The contractor also claimed that owner-caused delays pushed the Stage 2 work into severe winter weather.
According to the contractor’s claim, the following actual work quantities and man-hours (mh) required to accomplish them represented the as-built bypass conduit Stage 2 construction for which reimbursement for lost productivity was sought:
[table id=6 /]
The as-built Stage 1 figures taken as the “measured mile” were:
[table id=7 /]
Two adjustments to the Stage 2 as-built mh were found to be necessary to reflect replacing a failed sheet pile wall during excavation due to contractor error and reflecting an estimated 15% increase in general complexity of Stage 2 work over Stage 1 work. The adjustments to the actual Stage 2 mh are shown in Figure 22-1. Based on Figure 22-1, the productivity losses for Stage 2 construction, including the effect of performing Stage 2 in the winter relative to Stage 1 construction were calculated as shown in Figure 22-2.
[table id=8 /]
[table id=9 /]
In both the above examples, the productivity loss can easily be converted into dollars and cents. In carrying out this step, the contractual provisions governing labor and equipment hourly costs and the application of percentage markups for indirect costs and profit discussed in the previous sections of this chapter must be strictly observed. For instance, in the first of the above examples the parties had agreed that the daily labor costs including all applicable fringes and taxes for the pile-driving crew were \$2,443 per day. They further agreed that the crew equipment costs totaled \$1,800 per day according to the equipment hourly rates prescribed by the contract provisions. The pile-driving work was performed by a subcontractor and the contract prescribed the following percentage markups for the subcontractor and prime contractor:
Subcontract markup on labor @ 15% and on equipment @ 10% for indirect costs and profit.
Additional allowance for subcontractor small tools and supplies @ 2% of the labor total.
Prime contractor markup @ 5% of subcontractor labor and equipment total for prime contractor indirect cost and profit.
Additional prime contractor allowance for bond @ 0.68% Additional prime contractor allowance for insurance @ 1.2%
On this basis, the claim quantum was computed as follows:
Subcontractor labor 18.4 days @ \$2,443/day = \$44,951
Subcontractor equipment 18.4 days @ \$1,800/day = \$33,120
Subcontractor subtotal = \$78,071
Subcontractor markup on labor @ 15% = \$6,743
Subcontractor markup on equipment @ 10% = \$3,312
Subcontractor subtotal = \$88,126
Subcontractor small tools and supplies @ 2% of labor = \$899
Subcontractor total = \$89,025
Prime contractor’s markup @ 5% of \$78,071 = \$3,904
Prime contractor subtotal = \$92,929
Prime contractor allowance for bond @ 0.68% = \$632
Prime contractor allowance for insurance @ 1.2% = \$1,115
Total claim amount = \$94,676
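The markup cascade above can be checked mechanically: each markup is applied to the subtotal the contract provisions specify, with each line item rounded to whole dollars. A sketch reproducing the computation:

```python
def dollars(x):
    """Round a line item to whole dollars, as in the claim computation above."""
    return int(x + 0.5)

labor = dollars(18.4 * 2443)        # subcontractor labor: $44,951
equipment = dollars(18.4 * 1800)    # subcontractor equipment: $33,120
direct = labor + equipment          # direct subtotal: $78,071

total = direct
total += dollars(0.15 * labor)      # subcontractor markup on labor: $6,743
total += dollars(0.10 * equipment)  # subcontractor markup on equipment: $3,312
total += dollars(0.02 * labor)      # small tools and supplies: $899
total += dollars(0.05 * direct)     # prime contractor markup on $78,071: $3,904
prime_subtotal = total              # $92,929 before bond and insurance
total += dollars(0.0068 * prime_subtotal)  # bond allowance: $632
total += dollars(0.012 * prime_subtotal)   # insurance allowance: $1,115
print(total)                        # total claim amount: $94,676
```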
Comparison with Similar Cost Experience on Other Contracts
Sometimes the entire contract work is adversely impacted by a series of intertwined causal events that significantly affect the efficiency of performance of original unchanged contract work. Such was the writer’s experience in the early 1980s during the performance of structural concrete work for the underground Peachtree Station constructed in downtown Atlanta for the Metropolitan Atlanta Rapid Transit Authority (MARTA). In this case, the entire surface and underground structural concrete operation was adversely impacted due to the following causations:
• Excessive number of changes both by formal change notice and constructive changes.
• Changes directed at the “eleventh hour” preventing their incorporation into the work in a timely and organized manner.
• Numerous errors, omissions, and conflicts found in the drawings and specifications as the work was performed.
• Lack of engineering information and/or direction and eleventh-hour provision of same.
• Directed acceleration of the work requiring stacking of crews and overtime work.
• Change in sequence of performance of the underground cavern structural work.
• Impacts associated with interference of other MARTA contractors who were allowed into the main cavern work area concurrently with the structural concrete operations.
Since the entire surface and underground work area was adversely impacted, there was no “measured mile” that could be used to compare work item productivities to prove the claimed decrease in efficiency of performance. However, our heavy engineering construction division performing the concrete work had completed eight other rapid transit projects involving similar work operations and was concurrently completing a similar deep-mined station for the Washington Metropolitan Area Transit Authority (WMATA) in Bethesda, Maryland. All of these projects had been set up and operated in a virtually identical manner, and the labor productivity cost records for each of them were structured in an identical format, making it possible to directly compare actual construction performance for similar work items. Great similarity existed between many of the MARTA work items and comparable work items on six of the other projects for which our computer bank contained the work-item productivity records. None of the previously constructed projects departed from the norm in factors adverse to contractor work performance.
Figure 22-3 shows the comparison made for 27 separate MARTA concrete work items to our average “normal” performance for similar work items from the other company projects. For each work item, the work-item description is listed as well as the as-built MARTA quantity and unit of measure, the MARTA mh factor per unit of work accomplished, the MARTA mh total, the normal project mh factor, and the normal project mh total. The MARTA project required 309,100 mh for the 27 separate work items, which would have required only 254,960 mh according to our experience on the other projects. The excess MARTA mh equal to 309,100 – 254,960 = 54,140 exceeded our experience on comparable projects by 21.23%. Following an extensive review of our records, MARTA accepted our offer of proof and issued a change order that compensated us for the claimed inefficiency.
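The man-hour comparison described above reduces to a simple overrun percentage: actual man-hours against the man-hours predicted by the company’s “normal” productivity factors. A minimal sketch using the totals from the text:

```python
marta_mh = 309_100   # actual MARTA man-hours for the 27 work items
normal_mh = 254_960  # man-hours predicted by the company's "normal" factors

excess_mh = marta_mh - normal_mh                     # 54,140 excess man-hours
overrun_pct = round(excess_mh / normal_mh * 100, 2)  # 21.23% above normal experience
print(excess_mh, overrun_pct)
```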
[table id=10 /]
Conclusion
This chapter has reviewed the conceptual basis of construction contract claims, typical contract provisions delineating procedural requirements for their filing and processing, and the methods of proof traditionally offered by contractors seeking to establish the associated contract price and time quantum.
If the owner rejects a claim, the contractor must abandon it or contest the owner’s rejection through the dispute resolution provisions in the contract. The following final chapter in this book reviews the methods of contract dispute resolution practiced in the United States today.
Questions and Problems
1. What triggers the filing of a contractor’s claim?
2. What are the four basic reasons why contractors submit proposals for a change in contract price or time before the filing of a claim that were discussed in this chapter?
3. What two examples of “the event giving rise to the claim” were given in this chapter?
4. What two separate elements of a claim must be established by the contractor’s detailed claim submittal?
5. What is the most important contract provision regarding contractor claims discussed in this chapter?
6. What is the usual consequence of a contractor’s failure to conform to the claim notice provisions in construction contracts? What is “constructive notice”? Should contractors rely on the legal sufficiency of constructive notice?
7. Discuss the importance of the claim processing procedure provisions in construction contracts from the contractor’s standpoint.
8. In the case of construction contracts that prescribe strict time limits for the issue of the owner’s decision on a properly submitted contract claim, what is the usual consequence of an owner’s failure to issue a decision within the prescribed time limits?
9. What general methods for the determination of changes in contract price, when entitlement has been recognized on a contract claim, are provided by most construction contracts? Of the three methods, which is usually employed?
10. According to the Oakland County Drain Commission contract, what four elements make up the “cost of the work”? What is the maximum fee to be paid to a prime contractor for overhead and profit in addition to the cost of the work when their own forces perform the work of the claim? What is the maximum fee in the aggregate for the prime contractor and subcontractor when the work of the claim is performed by a subcontractor?
11. How does the MWRA contract differ from the Oakland County Drain Commission Contract with respect to extensions in contract time?
12. Discuss the two general claim scenarios presented in this chapter, pointing out how they differ and the kinds of claim costs generated by each.
13. Discuss the four methods for proving the level of decreased efficiency of original contract work performance due to claim causations presented in this chapter. Which is generally the most persuasive? Which is the least persuasive?
1. In the context of this contract language, "Total Float" has the meaning of "float" as defined in Chapter 18.
2. These two general types of impact costs are the same as those discussed under price and time adjustments for contract changes in Chapter 14.
3. The effect of this published data is discussed in connection with the pricing of contract changes in Chapter 14.
Key Words and Concepts
• Lawsuits involving important federal questions
• Diversity cases
• Federal district courts
• United States Court of Federal Claims
• United States Court of Appeals for the Federal Circuit
• United States Supreme Court
• State trial courts
• State courts of appeal
• State supreme courts
• Venue
• Bench trials / Jury trials
• Discovery
• Depositions
• Fact witness/expert witness
• Transcript
• Plaintiff/respondent
• Cross-examination
• Findings-of-fact
• Conclusions-of-law
• Right of appeal
• Hearings before boards of contract appeals
• Arbitration / AAA arbitration / Party arbitration / Single arbitration
• Alternative dispute resolution
• Mediation
• Mini-trials
• Disputes review boards
When contracting parties cannot settle disputes themselves, the disagreement must be resolved by other means. The farther a dispute travels from the job level, the more likely it is to become highly adversarial, time consuming, and expensive. For instance, settling disputes by the decision of a court following a lawsuit can lead to costs comparable in magnitude to the most favorable judgment that can be obtained. These costs include legal representation, various consultants, expert witnesses, and so on, as well as the drain on a company’s organization. Key personnel are often tied up for extended periods preparing for trial and for actual court appearances, keeping them away from their normal revenue-producing duties.
For these reasons, alternative dispute resolution (ADR) procedures are common today. When the parties in the dispute are genuinely committed to the process, these methods can be very effective, as well as far less time consuming and expensive.
The disputes resolution clause in a contract determines the particular method of settlement to be used for that particular contract. If a particular method is mandated, it must be used unless the parties mutually agree to change it.
Courts of Law
If a dispute resolution method has not been mandated by the disputes resolution clause of the contract, a dissatisfied contractor is free to file a lawsuit in a court-of-law in the state or federal system.
Lawsuits in the Federal Court System
Unless the contract provides otherwise, lawsuits involving an important federal question will be tried in the federal district court for the geographical area in which the dispute arose. Diversity cases—those in which the parties are residents of different states—are also tried in federal district court. Appeals from a contracting officer’s decision on a federal contract may be heard by one of the government administrative boards of contract appeals or, at the contractor’s option, may be heard by the United States Court of Federal Claims (formerly the U.S. Court of Claims), a federal court established for the purpose of trying cases involving claims against the federal government. Decisions of the federal district courts can be appealed to the United States Court of Appeals for the geographical circuit in which the district lies, while decisions of the United States Court of Federal Claims are appealed to the United States Court of Appeals for the Federal Circuit.
Decisions of the federal Courts of Appeals can be appealed to the United States Supreme Court. If the Supreme Court agrees to take the case, the appeal will be heard. The decision of the Supreme Court is final and binding.
Lawsuits in the State Court System
Lawsuits other than those involving important federal questions or diversity are tried in the first level of the state court system of the various states. The name of the first-level court, or state trial court, varies depending on the particular state. As in the federal system, there are a number of first-level trial courts based on geographical area.
Decisions of the trial courts are appealable to the state courts of appeal, and decisions of the courts of appeal are appealable to the state supreme court, whose decisions are appealable to the United States Supreme Court.
Determination of Venue
Venue means the court in which the lawsuit is tried. Its determination is a legal matter, often itself requiring the decision of a court. However, in some cases, there may be a choice. In these instances, the choice of venue will be made by the attorneys representing the party filing the lawsuit.
Features of Court Trials of Lawsuits
The trial of a construction case lawsuit in a court of law is a civil proceeding as opposed to a criminal trial. The features of such trials include the following:
• The trial may be conducted by a judge sitting without a jury—a bench trial—or, on demand of either party, the trial may be held before a jury. In both cases, the purpose of the trial is to determine the facts, to which the law is then applied, resulting in the decision of the court. In a bench trial, the judge first determines the facts and then applies the law to arrive at the decision. In the case of a jury trial, the jury’s function is to determine the facts. Then the judge carefully “instructs” the jurors what the law is in that particular case and how they must apply the law to the facts to arrive at a correct decision. In both cases, the judge conducts the entire proceeding and maintains the order and decorum of the court.
• Court trials are very formal. The judge maintains complete control, and his or her procedural decisions (rulings) are final insofar as the trial is concerned, although they may be appealed. Every word that is spoken is recorded verbatim by a court reporter.
• The judge controls the quantity and type of exhibits and testimony that go into the trial record as evidence. There are strict rules defining what is admissible and what is not. See, for instance, the parol evidence rule and the hearsay rule, discussed in Chapters 20 and 21, respectively.
• The process of discovery prior to the trial will be afforded both sides. Discovery gives each side the right to examine and make copies of all pertinent files and documents possessed by the other side. Certain types of documents claimed to be privileged may be excluded by the judge from the discovery process. Included among these are communications between the parties and their attorneys and all attorney work-products.
• As part of the discovery process, each side also has the right to take the deposition of employees of the other side and of whomever the other side intends to call as a witness at the trial, either as a fact witness, a person who has been involved in the project and has firsthand factual knowledge, or as an expert witness, an expert in the field of the lawsuit who offers an opinion based on his or her knowledge and experience. At the deposition, the witness must truthfully answer all questions asked. A fact witness will speak from his or her own knowledge of the facts in the case, whereas an expert witness must reveal all opinions held and the basis for them. A verbatim record of the questions and answers is made by a court reporter, who prepares a transcript that may be used by attorneys for either side when questioning that witness on the stand at the trial.
• At the trial itself, each side is allowed to present its case, starting with the plaintiff, the party who instituted the lawsuit, followed by the respondent, the party being sued, who presents a rebuttal. The plaintiff then responds to the rebuttal with a surrebuttal, at which point the trial usually ends, although the judge may permit another round of presentations. During each presentation, each side introduces trial exhibits in the form of various documents, explanatory charts, and so on, and each side’s witnesses offer oral testimony under oath.
• At the end of each witness’s testimony, the opposing attorney has the right to conduct a cross-examination of the witness. The purpose of cross-examination is to give the opposing attorney every reasonable opportunity to discredit or impugn the testimony of the witness. Cross-examination of an opponent’s witnesses is one of the fundamental rights of a litigant in our legal system.
• In the case of bench trials, the judge may issue written findings-of-fact and conclusions-of-law along with the decision of the court, although this is not common in the lower courts, where the trial initially occurs. The appellate courts usually issue findings-of-fact and conclusions-of-law. These writings state the facts that the court found to be true and the principles of law that the court applied to these facts to arrive at the decision. The collective body of these writings constitutes what has previously been described as case law, which will then be cited by judges and lawyers in future cases. No findings-of-fact or conclusions-of-law are issued in jury trials; only the jury’s decision is issued, announced immediately following the jury’s deliberation at the conclusion of the trial. In bench trials, many months, sometimes even years, may elapse between the conclusion of the trial and the decision.
• Finally, a most important feature of court trials is the right of appeal—that is, the decisions of the trial court can be appealed by either party to an appellate court. If the appellate court agrees to hear the appeal, it reviews the trial court’s decision, either affirming it, overturning it, or affirming in part and overturning in part. The appellate court’s decision can then be appealed to the state or federal supreme court, as the case may be.
Hearings Before the Federal Boards of Contract Appeals
The federal boards of contract appeals have been established by the various agencies of the federal government to hear and render decisions on contract disputes arising from construction contracts administered by the particular agency. Typical federal boards include the Armed Forces Board of Contract Appeals, the Corps of Engineers Board of Contract Appeals, the Department of Transportation Contract Appeals Board, and many others. At least one state, Maryland, has established a state board of contract appeals. These boards consist of judges experienced in construction contract law who are appointed by the agency concerned. A contractor dissatisfied with the final decision of the contracting officer of a federal agency on a matter arising from a federal contract may appeal that decision either directly to the United States Court of Federal Claims or to the administrative board of the agency involved.
Hearings before the federal boards of contract appeals are conducted in a manner similar to court trials, except that the proceedings will always be conducted by a sitting judge. There is no jury. Following the hearing, the board will issue a written decision supported by findings-of-fact and conclusions-of-law. As with bench trials in courts of law, many months or even years may elapse before the board issues its decision. The decisions of the federal boards of contract appeals may be appealed to the United States Court of Appeals for the Federal Circuit.
Arbitration
Arbitration is a third method of dispute resolution. It is generally faster and less expensive than court trials or hearings before administrative boards. Even so, arbitration of large, complicated cases can still be time consuming and expensive. The arbitrators, who are usually working professionals, cannot sit continuously for complicated cases, so the hearings are often fragmented, extending the time required. One arbitration in which the writer appeared as an expert witness was conducted intermittently over a period of 18 months. Most arbitration proceedings are not that long. Occasionally, however, they can last even longer.
Arbitration of a contract dispute cannot be compelled unless the contract expressly requires it. The right to arbitration is not an implied right. However, if the contract does require it, courts compel arbitration of the dispute on the demand of either party. The following cases are typical of the extensive case law on this point.
In a federal case involving a contract for construction of a sewer, the United States Court of Appeals required a city to arbitrate a differing site condition claim in spite of the city’s argument that the contract provided that the engineer’s decision would be final. The contract stated:
All claims of the Owner or the Contractor shall be presented to the Engineer for his decision, which shall be final except in cases where time and/or financial considerations are involved, and in such cases shall be submitted to arbitration if not solved by mutual agreement between the Owner and the Contractor.
When the contractor submitted a differing site condition claim for ground water that had not been anticipated, the city refused to arbitrate, alleging that the engineer had final authority in these matters and the arbitration clause did not extend to disputes of this nature. Although the court criticized the language used in the arbitration clause, it applied federal law requiring that arbitration clauses be generously construed and resolved in favor of arbitration.
In the words of the court:
Obviously, financial considerations are the heart of the instant contractor’s claim. Though we entertain some doubt whether the agreement was intended to cover the instant claim, we must enforce federal policy and come down in favor of arbitration.[1]
In another case, a project owner opposed arbitration, arguing that the demand for arbitration was not a dispute “arising out of, or relating to, the Contract Documents,” because the disputed issue involved work not authorized by the contract or by written change order. The original contract was for a stipulated sum of \$592,000 and stated that, although change orders would be necessary, the total contract price was in no event to exceed \$700,000. However, the contract also contained a broad form arbitration clause.
The court found that since the contract documents provided that the contract included change orders pertaining to “all items necessary for the proper execution and completion of the Work,” a dispute involving a claim for extra work necessary for completion of the project was subject to the arbitration clause. The court stressed that, in ordering arbitration, it was not establishing owner liability in excess of \$700,000. Rather, the court said, the contractor’s entitlement, if any, as well as the effect of such entitlement on the contract price ceiling, would have to be determined by the arbitrators.[2]
The following three principal systems of arbitration are commonly used today for construction cases. Normally, the contract states which of these systems is to be used. If the contract does not state this, the parties must agree on one of the systems.
AAA Arbitration Under Construction Industry Rules
One system is arbitration under the auspices of the American Arbitration Association (AAA) in accordance with the construction industry rules. In this system, each party reviews a list of potential arbitrators furnished by the AAA. Persons on the list are knowledgeable professionals who have been screened and prequalified by the AAA and who have agreed to serve as arbitrators. Each party may strike from the list anyone who is not satisfactory to them. Three persons who are acceptable to both parties—that is, persons remaining on the list who have not been struck by either party—are then selected by the AAA to form a panel to hear the case, one of whom is usually an attorney who serves as chairperson. Arbitrators must disclose any material facts about themselves that could be perceived as affecting their ability to render an impartial decision, such as prior acquaintance or business dealings with any of the parties. Based on such disclosure, a party may demand replacement of an arbitrator who they feel may not render a fair decision. For smaller cases, the procedure is the same except that the board consists of a single person, who usually is an attorney.
Party Arbitration System
A second system is the party arbitration system. Each party unilaterally selects a knowledgeable professional to serve as an arbitrator on the board. These two persons then select a third member of the board who functions as chairperson. In this system, the first two members sometimes act as party advocates as well as arbitrators, whereas the third member must always be strictly impartial. If the first two members are unable to agree on the third member, a court can be petitioned to appoint the third member.
Single Arbitrator System
The third system is one in which the parties agree on a single arbitrator to constitute the board. Such a person often is a retired judge experienced in construction cases who agrees to serve as arbitrator and hear the case.
Features of Arbitration Proceedings
The following features distinguish arbitration proceedings from court trials and hearings before administrative boards:
• Arbitration generally is far less formal. Arbitrators have broad powers to set the rules on discovery and other procedural matters for the conduct of the hearing. Arbitrators usually allow discovery, but they are not compelled to do so.
• Arbitration panels are far more flexible than courts on the rules of evidence. Generally, these rules are considerably relaxed in arbitration.
• Following the hearing, which is generally conducted in a manner similar to a court trial, the panel will issue its decision. The time period between the conclusion of the hearing and the decision is generally fairly short, far less than in a court trial. Usually, only a conclusory decision is issued, with no supporting findings-of-fact or conclusions-of-law.
• Finally, and most importantly, there is generally no viable appeal from an arbitration decision. A successful appeal to a court is possible only where it can be proved that the arbitrators exercised bad faith, refused to permit the introduction of relevant evidence, or failed to disclose information bearing on their ability to render an impartial decision.
Examples of case law on this subject include vacation of the award due to the arbitrator’s refusal to hear relevant testimony[3] and vacation of the award because one of the arbitrators failed to disclose ongoing business dealings with one of the parties to the arbitration.[4]
Alternative Dispute Resolution
In recent years, alternative dispute resolution (ADR) procedures have increasingly been used. These methods include the following:
Mediation
In mediation, the parties engage a respected, knowledgeable neutral person to serve as a mediator. This person investigates the facts of the dispute, meets with the parties jointly and separately, and listens to their arguments. The mediator then proposes a settlement, sometimes as a written report and sometimes orally. The mediator’s recommendation is not binding. Ordinarily, the mediator’s recommendations are not admissible as evidence in a later court trial if either party pursues the matter in a lawsuit.
Mini-Trials
Another procedure, called a mini-trial, is also used. In this case, the parties arrange for a hearing to be conducted somewhat like a court trial. There is no judge or jury. Instead, two senior persons with settlement authority hear the evidence, one from each party. They do not participate in the presentation of the respective cases other than to ask questions. Following the conclusion of the hearing, these two individuals have each become personally knowledgeable about the strengths and weaknesses of each side’s arguments. They then confer privately and attempt to arrive at a settlement through negotiation. This system has the advantage of speed and a less adversarial atmosphere. A number of major disputes have been settled in this manner.
Disputes Review Boards
Another form of alternative dispute resolution that is increasingly used is a contractually provided disputes review board (DRB). In this instance, the construction contract between the parties expressly provides for the creation of a three-member board. As soon as the contract has been signed, each party selects a knowledgeable person to serve as a member of the board. These two persons then select a third member, who normally acts as chairperson. Once each party has selected a member of the board, the two selected members have no further contact with those parties, instead becoming fully independent. All contact between the board and the parties to the contract is conducted through the chairperson.
Board members are required to act impartially and are subject to the same type of conflict-of-interest disclosure requirements as arbitrators are. The board members are furnished copies of the project plans and specifications and periodically visit the project jobsite to become familiar with the project as it progresses. If the parties are unable to resolve contract disputes as they occur, either party may refer the dispute to the board. The board then holds a hearing, listens to the arguments of both parties, and promptly furnishes a written recommendation for the resolution of the dispute that contains a detailed explanation of the reasoning supporting the recommendation. The recommendation is not binding, but along with the supporting reasoning, it is usually admissible as evidence in any later court trial. Such boards are now widely used, and in most cases the parties have been able to resolve the dispute promptly with the aid of the board’s recommendations. This form of dispute resolution has the obvious advantage of great speed. Disputes are resolved quickly and inexpensively once they have been presented to the board. The continuous availability of the board, which has been kept in close touch with the project as it progresses, is a unique feature that is not present in any of the other methods of dispute resolution available to the industry today.
Model specifications to be included in the contract for the appointment and operation of disputes review boards first appeared in a publication of the American Society of Civil Engineers.[5] This was followed by a second ASCE publication[6] and, more recently, by Construction Dispute Review Board Manual.[7]
Conclusion
The merits of dispute avoidance far outweigh those of any of the dispute resolution methods discussed in this chapter. Disputes will be greatly minimized, or will not occur at all, if each party to the contract fully understands both their responsibilities and their rights under the contract and truly endeavors to honor the contract. Only when one or both parties fail to do this does dispute resolution become necessary.
Questions and Problems
1. What two general kinds of cases will be litigated in the federal district courts as opposed to the state courts of the state where the work was performed? Which two avenues for dispute resolution are available to a contractor for resolution of disputes arising from a federal contract?
2. What is the difference between a bench trial and a jury trial? In a bench trial, who determines the facts? Who applies the law? In a jury trial, who determines the facts? Who applies the law? Do the judge’s instructions to the jury deal with the facts or the law?
3. In court trials, who controls the procedure? Who rules on the admissibility of evidence? Are court trials subject to appeal?
4. What is discovery? What are privileged documents? What is a deposition, and what is its purpose? What does the term fact witness mean? What is the difference between a fact witness and an expert witness?
5. What do the terms plaintiff and respondent mean? Which presents its case first? What is the function or purpose of trial exhibits, oral testimony, and cross-examination?
6. What are findings-of-fact and conclusions-of-law? Who issues them? With what type of court proceeding are they usually associated? What is their relationship to case law?
7. Do hearings before the various administrative boards of contract appeals differ materially from court trials? Is there a jury? Will there be findings-of-fact and conclusions-of-law issued with the decisions of such boards? Are the decisions of the federal boards of contract appeals themselves appealable? To whom?
8. What is arbitration? Under what circumstances can arbitration be compelled in construction cases? Is the choice of arbitration an implied right of either party to the contract? What are the three different systems of arbitration used for construction cases discussed in this chapter?
9. Explain the party arbitrator system. How many panel members are there? How are they selected? Is a party arbitrator necessarily impartial and neutral? How is the chairperson selected?
10. What are the four features of arbitration proceedings that were discussed in this chapter?
11. Under what limited circumstances may an arbitration decision be appealed or overturned?
12. What three ADR procedures were discussed in this chapter? Is mediation binding? In a mini-trial, who makes the final decision for settlement of the dispute? What are two principal advantages of the mini-trial?
13. What principal feature of the disputes review board approach to dispute resolution is not present in mediation, court trials, or administrative hearings?
1. Ruby-Collins, Inc. v. City of Huntsville, 748 F.2d 573 (11th Cir. 1984).
2. Sisters of St. John the Baptist v. Phillips R. Geraghty Constructor, Inc., 494 N.E.2d 102 (N.Y. 1986).
3. Manchester Township Board of Education v. Thomas P. Carney, Inc., 489 A.2d 682 (N.J. Super. A.D. 1985).
4. Barcon Associates, Inc. v. Tri-County Asphalt Corp., 411 A.2d 709 (N.J. App. Div. 1980).
5. Avoiding and Resolving Disputes in Underground Construction (New York: American Society of Civil Engineers, 1989).
6. Avoiding and Resolving Disputes During Construction (New York: American Society of Civil Engineers, 1991).
7. Construction Dispute Review Board Manual (New York: McGraw-Hill Companies, Inc., 1996).
This unit is aimed at getting you familiar with the materials and how to make the most of this learning experience.
01: Getting Started
This unit is an orientation and provides the information you need to successfully learn from the instruction provided here. Though we do not talk about digital accessibility in this unit, it is still important to read through it.
Key Point: Here the main focus is on digital accessibility. We may use the word “accessibility” on its own, which in the context of the discussion here should be interpreted as meaning “digital accessibility.”
1.02: This Resource Will Be Helpful to...
Managers
This resource is aimed primarily at those who are responsible for implementing accessibility at an organizational level. These people tend to be managers, but may also be accessibility specialists, whose role it is to oversee the implementation of accessibility strategies and awareness throughout an organization.
Web Developers
Web developers may also wish to read the materials here to expand their understanding of the organizational aspects of implementing accessibility, thereby extending their role to that of an IT accessibility specialist, who is often the person to lead the implementation of accessibility culture in an organization.
Everyone Else
While managers and web developers are the primary audience here, anyone who has an interest in the aspects of implementing accessibility culture in an organization will find the materials informative.
1.03: Choosing Your Learning Path
A variety of elements have been added throughout the materials here to aid your learning. They are described below.
Your Accessibility Toolkit
Throughout the content, we’ve identified elements that should be added to the Accessibility Toolkit you will be assembling as you keep reading. These elements will include links to resource documents and online tools, as well as software or browser plugins that you may need to install or introduce to your staff. These will be identified in a green Toolkit box like the following:
Toolkit: Provides useful tools and resources for your future reference.
Technical Details
Though the instruction here has been developed without much of the technical details of accessibility, there are places throughout the content where important technical information has been included. These details are contained in the blue Technical boxes. It’s a good idea for those managing web accessibility efforts to be aware of some key technical elements of implementing digital accessibility, so they understand what technical staff should know.
Technical: Aimed more at technical staff, typically containing HTML code samples.
Key Points
Important or notable information will be highlighted and labelled in Key Point boxes such as the one that follows. These will include “must know” information.
Key Point: “Must know” information and interesting points.
Try This
Try This boxes contain activities designed to get you thinking or to give you first-hand experience with something you’ve just read about.
Try This: Typically a short interactive exercise.
Readings & References
Readings & References: These boxes provide links to various web resources for optional reading on the topics being discussed.
Self Tests
These short tests are included throughout the units to help you reinforce what you are learning.
Try This: Skip ahead to the end and read through the Content Recap for a high-level summary of the topics covered here.
1.04: Final Project
The Final Project is writing a digital accessibility policy. Copy the template of topics listed below and paste them into the policy document you will develop. As you progress through the materials here, the readings and activities will provide information that you can use to help write the content for the document.
Project Details
A digital accessibility policy should be written as a guide or set of instructions that management and staff can refer to when they need to understand what they should be doing to meet the organization’s accessibility requirements.
The following is a list of potential sections for the policy document. You can start with these and make the following changes and additions: add or remove sections or subsections; provide text for each section explaining what it covers, how it is to be carried out, and who it applies to; and organize the document in a coherent way.
Key Point:
Template of Topics for Your Digital Accessibility Policy
• Background
• Company commitment
• Accessibility committee
• Scope and responsibilities
• Authority and enforcement
• Support
• Guidelines and standards
• Website development
• Web content
• Documents and communications
• Multimedia
• Third-party content
• Hiring equity and employment accommodation
• Training and awareness
• Digital accessibility resources
• Procurement
• Accessibility auditing and quality assurance
• Monitoring and periodic reviews
• Reporting
• Policy review
1.05: Accessibility Statement
Though we attempt to make all elements of this resource conform with international accessibility guidelines, we must acknowledge a few accessibility issues that are out of our control.
• Some external resources may not conform with accessibility guidelines.
• Third-party video content may not be captioned or may be captioned poorly.
• PDFs included in the web-based version of this resource have been tested with Acrobat Pro for accessibility, though they will be inaccessible to those without the Acrobat Reader application installed on their computer.
Accessibility Tips for Web-Based Version
• Search for the “Skip to content” link at the start of each page when navigating by keyboard, and follow it to jump directly to the main content of the page.
• Links to external sites will always open in a new window.
• Use your screen reader’s list headings feature to navigate through the headings within the content of a page.
• Use the “Previous Section” and “Next Section” links found at the bottom of each page to navigate through the sequence of pages. To access these links most easily, use your screen reader’s landmarks list to jump to the navigation region, then press Tab and Shift-Tab to move between the next and previous links.
• Depending on the operating system and browser being used, font size can be adjusted by pressing a key combination including the plus (+) and minus (-) keys. On Windows systems, this is typically “CTRL+” and on Mac “Command+”.
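The “Skip to content” link described in the first tip above is typically implemented as the first focusable element on the page, pointing at the main content region. The following is a minimal, illustrative sketch only; the `skip-link` class and `main-content` id are assumed names, not taken from this resource:

```html
<!-- First focusable element on the page: keyboard users reach it on
     the first Tab press and can jump past the banner and navigation
     directly to the main content. It is usually hidden with CSS until
     it receives keyboard focus. -->
<a class="skip-link" href="#main-content">Skip to content</a>

<header>Site banner and navigation</header>

<!-- Target of the skip link. The <main> element also exposes the
     "main" landmark that screen readers list alongside "navigation",
     and the nested headings support "list headings" navigation. -->
<main id="main-content">
  <h1>Page title</h1>
  <h2>A section within the page</h2>
</main>
```

In practice the link target must exist on every page, which is why resources like this one place the skip link in a shared page template.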
In this unit, you learned the following points about understanding accessibility:
• When a business addresses digital accessibility, it actually saves money; accessibility is not a non-recoverable cost to the business.
• There is a strong potential for a significant increase in customers for businesses that address digital accessibility.
• Digital accessibility should not be an afterthought but rather it needs to be part of the business strategy and the daily operations of the business.
• Addressing accessibility is a mark of quality in a business and improves its profile.
02: Understanding the Big Picture
This unit provides an overview of elements in digital accessibility culture, as well as background information to provide context for what you will learn about in the units that follow. You will develop a big picture of digital accessibility culture. This knowledge will act as a framework in which you will assemble key materials and resources as you progress through the content.
What does accessibility mean to your business?
In the following video from the Whitby Chamber of Commerce, four local Durham region and Toronto business owners tell us what accessibility means to their businesses. (Note: The captions for this video no longer seem to be working.)
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=630
© TheWhitbychamber. Released under the terms of a Standard YouTube License. All rights reserved.
2.02: Objectives and Activities
Objectives
By the end of this unit, you should be able to:
• Describe a variety of business cases for digital accessibility.
• Identify different types of disability and potential digital barriers experienced by people with each.
• Compare how accessibility laws are being introduced around the world.
Activities
• Write a convincing elevator pitch, lasting no longer than one minute.
2.03: The Sharp Clothing Company
The narrative here revolves around the story of “The Sharp Clothing Company,” which recently received a complaint about the accessibility of its online store, including a threat to take legal action if the company does not address the issue in a reasonable amount of time. The complaint came as a surprise to the company, which thought it was compliant with local accessibility laws, having recently retrofitted several of its retail locations to accommodate wheelchair access. However, it had not considered digital accessibility.
The company currently has twelve stores across Ontario and Quebec located primarily in shopping malls, and a distribution centre where clothing imported from around the world is distributed to physical stores and out to customers purchasing online. The head office is located in central Toronto.
The company has been growing rapidly, opening about two new stores per year since going public in 2012, with 222 people currently employed, across a broad range of roles. The company is making plans to expand into international markets in the coming year.
Other Company Details
• Business: Sales and distribution of economical clothing
• Established: 2002 and publicly traded since 2012
• Union status: Non-unionized
• Annual revenue in 2016: \$46.5 million
• Marketing channels: Social media, website, television, newspapers, billboards, and print catalog
Employees
Total number of employees: 222
• 8 senior managers
• 6 middle managers (at head office and distribution centre)
• 16 office staff
• 4 cleaning and maintenance staff
• 12 store managers
• 12 assistant store managers
• 100 in-store sales staff
• 8 communications and marketing staff
• 5 web developers
• 2 mobile app developers
• 2 user experience designers
• 5 web content authors
• 4 purchasers/buyers
• 8 staff providing 24-hour telephone and online help
• 6 media support staff (videographer, photographer, and graphic artists, etc.)
• 24 distribution centre staff
Your Role
The complaint that was filed ended up with the company’s CEO. She has asked you to handle the issue and tasked you with ensuring that this type of complaint does not happen again. You already have a little background in accessibility, but it is primarily in customer service and the design of physical spaces to accommodate people with disabilities. You gained this experience as the project manager during the company’s efforts to make its stores accessible to people with disabilities. However, you have little experience with “digital accessibility” and a limited technical background.
Your goal is to educate yourself about digital accessibility and implement a plan to address the complaint to ensure no other similar complaints occur. You have a budget which might cover hiring one or two additional staff members, training staff, updating technology, and launching promotional activities to raise awareness of digital accessibility throughout the company.
You will be working closely with other managers and specific staff in order to bring the company into compliance with digital accessibility laws, both locally and in the jurisdictions where the company is planning to do business.
As you progress through the reading and activities, you will be introduced to the various elements that need to be addressed in order to accomplish the company’s compliance goals. In the final unit, you will assemble what you have learned into a Digital Accessibility Policy for the Sharp Clothing Company, a document that you can take away and ultimately use as a guide to implementing an accessibility plan for your own organization.
In 2011 and 2012, Karl Groves wrote an interesting series of articles that looked at the reality of business arguments for web accessibility. He points out that any argument needs to answer affirmatively to at least one of the following questions:
1. Will it make us money?
2. Will it save us money?
3. Will it reduce risk?
He outlines a range of potential arguments for accessibility:
• Improved search engine optimization: Customers will be able to find your site more easily because search engines can index it more effectively.
• Improved usability: Customers will have a more satisfying experience, thus spend more or return to your site more often.
• Reduced website costs: Developing to standard reduces bugs and interoperability issues, reducing development costs and problems integrating with other systems.
• People with disabilities have buying power: They won’t spend if they have difficulty accessing your site; they will go to the competition that does place importance on accessibility.
• Reduced resource utilization: Building to standard reduces use of resources.
• Support for low bandwidth: If your site takes too long to load, people will go elsewhere.
• Social responsibility: Customers will come if they see you doing good for the world, and you are thinking of people with disabilities as full citizens.
• Support for aging populations: Aging populations also have money to spend and will come to your site over the less accessible, less usable competition.
• Reduced legal risk: You may be sued if you prevent equal access for citizens/customers or discriminate against people with disabilities.
What accessibility really boils down to is “quality of work,” as Groves states. So, in approaching web accessibility, you may be better off not thinking so much in terms of reducing the risk of being sued, or losing customers because your site takes too long to load. Rather, the work you do is quality work, and the website you present to your potential customers is a quality website.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=639
Readings & References: If you’d like to learn more about business cases, here are a few references:
2.05: AODA Background
Video: AODA Background
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=641
For readers from Ontario, Canada, we’ll provide occasional references to the Accessibility for Ontarians with Disabilities Act (AODA). If you’re studying here to work with accessibility outside Ontario, you may compare AODA’s web accessibility requirements with those in your local area. They will be similar in many cases and likely based on the W3C WCAG 2.0 guidelines. The goal in Ontario is for all obligated organizations to meet the Level AA accessibility requirements of WCAG 2.0 by 2021, which, ultimately, is the goal of most international jurisdictions.
The AODA provided the motivation to create this resource. All businesses and organizations in Ontario with more than 50 employees (and all public sector organizations) are now required by law to make their websites accessible to people with disabilities (currently Level A). Many businesses still don’t know what needs to be done in order to comply with the new rules, and this resource hopes to fill some of that need.
The AODA was passed as law in 2005, and, in July of 2011, the Integrated Accessibility Standards Regulation (IASR) brought together the five standards of the AODA, covering information and communication, employment, transportation, and design of public spaces, in addition to the original customer service standard.
The AODA sets out to make Ontario fully accessible by 2025, with an incremental roll-out of accessibility requirements over a period of 20 years. These requirements span a whole range of accessibility considerations, including physical spaces, customer service, the web, and much more.
Our focus here is on access to information, information technology (IT), and the web. The timeline set out in the AODA requires government and large organizations to remove all barriers in web content between 2012 and 2021. The timeline for these requirements is outlined in the table below. Any new or significantly updated information posted to the web must comply with the given level of accessibility by the given date. This includes both internet and intranet sites. Any content developed prior to January 1, 2012 is exempt.
• Government — Level A: January 1, 2012 (except live captions and audio description). Level AA: January 1, 2016 (except live captions and audio description); January 1, 2020 (including live captions and audio description).
• Designated Organizations* — Level A: beginning January 1, 2014, new websites and significantly refreshed websites must meet Level A (except live captions and audio description). Level AA: January 1, 2021 (except live captions and audio description).
* Designated organizations means every municipality and every person or organization as outlined in the Public Service of Ontario Act 2006 Reg. 146/10, or private companies or organizations with 50 or more employees, in Ontario.
Key Point: The next key date for AODA designated organizations is January 1, 2021, when all web content must meet Level AA accessibility compliance.
Toolkit: Download and review the AODA Compliance Timelines [PDF].
Readings & References: For more about the AODA you can review the following references:
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=641
© Melanie Belletrutti. Released under the terms of a Standard YouTube License. All rights reserved. | textbooks/biz/Business/Advanced_Business/Digital_Accessibility_as_a_Business_Practice/02%3A_Understanding_the_Big_Picture/2.04%3A_The_Business_Case_for_Accessibility.txt |
The Sharp Clothing Company recently completed upgrading all of its premises to include wheelchair access ramps, accessible washrooms, wheelchair-height customer service desks, and so on. The management was confident everything was done to meet accessibility standards, so they were surprised to receive a customer complaint about the company’s online store being completely inaccessible by keyboard.
You have been asked to investigate the issue. After looking into the complaint, you are surprised to find that there are many different types of disabilities, each with its own accessibility challenges. You decide you need to learn more about how people with different disabilities use the web and digital information, since you now see there is more to accessibility than providing access for wheelchair users.
To understand where accessibility issues can arise, it is helpful to have a basic understanding of a range of disabilities and the related barriers found in digital content. These include:
• people who are blind
• people with low vision
• people who are deaf or hard of hearing
• people with mobility-related disabilities
• people with learning or cognitive disabilities
Not all people with disabilities encounter barriers in digital content, and those with different types of disabilities encounter different types of barriers. For instance, a person in a wheelchair may encounter no barriers at all in digital content, while a person who is blind will experience different barriers than a person with limited vision. Many of the barriers that people with disabilities encounter on the web also appear in electronic documents and multimedia. Different types of disabilities and some of their commonly associated barriers are described here.
Watch the following video to see how students with disabilities experience the Internet.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=643
© Jared Smith. Released under the terms of a Standard YouTube License. All rights reserved.
In this video, David Berman talks about types of disabilities and their associated barriers.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=643
© davidbermancom. Released under the terms of a Standard YouTube License. All rights reserved.
People who Are Blind
People who are blind tend to face the most barriers in digital content, given the visual nature of much digital content. They will often use a screen reader to access their computer or device, and may use a refreshable Braille display to convert text to Braille.
Common barriers for this group include:
• Visual content that has no text alternative
• Functional elements that cannot be controlled with a keyboard
• Overly complex or excessive amounts of content
• Inability to navigate within a page of content
• Content that is not structured (i.e., missing proper headings)
• Inconsistent navigation
• Time limits (insufficient time to complete tasks)
• Unexpected actions (e.g., redirect when an element receives focus)
• Multimedia without audio description
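The first barrier in this list, visual content with no text alternative, is one of the few that can be flagged automatically. As a minimal illustrative sketch (the script and its sample markup are our own, not part of WCAG or the AODA), the following Python code uses the standard library's HTML parser to report `img` elements that lack an `alt` attribute:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags with no text alternative (WCAG 2.0 SC 1.1.1)."""

    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images with no alt attribute

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # alt="" is valid for purely decorative images; a *missing*
            # attribute leaves screen reader users with no information.
            if "alt" not in attrs:
                self.missing.append(attrs.get("src", "(no src)"))

# Hypothetical page fragment, for illustration only.
page = '<img src="logo.png" alt="Sharp Clothing logo"><img src="sale.png">'
checker = MissingAltChecker()
checker.feed(page)
print(checker.missing)  # → ['sale.png']
```

Automated checks like this catch only the mechanical half of the problem; whether the alt text is actually meaningful still requires human review.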
For a quick look at how a person who is blind might use a screen reader like JAWS to navigate the web, watch the following video.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=643
© rscnescotland. Released under the terms of a Standard YouTube License. All rights reserved.
People with Low Vision
People with low vision are often able to see digital content if it is magnified. They may use a screen magnification program to increase the size and contrast of the content to make it more visible. They are less likely to use a screen reader than a person who is blind, though in some cases they will. People with low vision may rely on the magnification or text customization features in their web browser or word processor, or they may install other magnification or text reading software.
Common barriers for this group include:
• Content sized with non-resizable absolute measures
• Inconsistent navigation
• Images of text that degrade or pixelate when magnified
• Low contrast (inability to distinguish text from background)
• Time limits (insufficient time to complete tasks)
• Unexpected actions (e.g., redirect when an element receives focus)
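The low-contrast barrier above is measurable: WCAG 2.0 defines a numeric contrast ratio between text and background colours, and its Level AA criterion requires at least 4.5:1 for normal-size text. The following Python sketch implements the WCAG relative-luminance and contrast-ratio formulas; the colour values in the example are our own illustration:

```python
def _linearize(channel):
    """Convert an 8-bit sRGB channel to its linear value (per WCAG 2.0)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour1, colour2):
    """WCAG 2.0 contrast ratio: (L1 + 0.05) / (L2 + 0.05), with L1 >= L2."""
    l1, l2 = sorted(
        (relative_luminance(colour1), relative_luminance(colour2)),
        reverse=True,
    )
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white gives the maximum possible ratio of 21:1 ...
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
# ... while light grey on white fails the 4.5:1 Level AA threshold.
print(contrast_ratio((200, 200, 200), (255, 255, 255)) >= 4.5)  # → False
```

A check like this is easy to fold into a design review, so low-contrast colour pairs are caught before content is published.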
See the following video for a description of some of the common barriers for people with low vision.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=643
© Media Access Australia. Released under the terms of a Standard YouTube License. All rights reserved.
People who Are Deaf or Hard of Hearing
Most people who are deaf tend to face barriers where audio content is presented without text-based alternatives, and otherwise encounter relatively few barriers in digital content. Those who are deaf and blind will face many more barriers, including those described for people who are blind. For those who communicate with American Sign Language (ASL) or other sign languages (e.g., langue des signes québécoise, or LSQ), the written language of a website may produce barriers similar to those faced when reading in a second language.
Common barriers for this group include:
• Audio without a transcript
• Multimedia without captions or transcript
• Lack of ASL interpretation (for ASL/Deaf community)
People with Mobility-Related Disabilities
Mobility-related disabilities are quite varied. As mentioned earlier, a person who uses a wheelchair to get around may face no significant barriers in digital content. However, those who have limited use of their hands, or who have fine-motor impairments that limit their ability to target and click elements with a mouse pointer, may not use a mouse at all. Instead, they might rely on a keyboard, or on their voice (i.e., speech recognition), to move through digital content, along with switches to control mouse clicks.
Common barriers for this group include:
• Clickable areas that are too small
• Functional elements that cannot be controlled with a keyboard
• Time limits (insufficient time to complete tasks)
People with Learning or Cognitive Disabilities
Learning and cognitive-related disabilities can be as varied as mobility-related disabilities, perhaps more so. They range from mild reading-related disabilities to severe cognitive impairments that may result in limited use of language and difficulty processing complex information. Some barriers are common across most disabilities in this range, while others affect only those with more severe cognitive disabilities.
Common barriers for this group include:
• Use of overly complex/advanced language
• Inconsistent navigation
• Overly complex or excessive amounts of content
• Time limits (insufficient time to complete tasks)
• Unstructured content (no visible headings, sections, topics, etc.)
• Unexpected actions (e.g., redirect when an element receives focus)
More specific disability-related issues include:
• Reading: Text justification (inconsistent spacing between words)
• Reading: Images of text (not readable with a text reader)
• Visual: Visual content with no text description
• Math: Images of math equations (not readable with a math reader)
Everyone
While we generally think of barriers in terms of access for people with disabilities, some barriers impact all types of users, though these are often thought of in terms of usability. Usability and accessibility go hand in hand, and adding accessibility features improves usability for everyone. Many people who do not consider themselves to have a specific disability (such as those over the age of 50) may experience typical age-related loss of sight, hearing, or cognitive ability. Those with varying levels of colour blindness may also fall into this group.
Some of these usability issues include:
• Link text that does not describe the destination or function of the link
• Overly complex content
• Inconsistent navigation
• Low contrast
• Unstructured content
Try This: Experience colour blindness: Corbis Colour Blindness Simulation
Readings & References: To learn more about disabilities and associated barriers, read the following: How People with Disabilities Use the Web | textbooks/biz/Business/Advanced_Business/Digital_Accessibility_as_a_Business_Practice/02%3A_Understanding_the_Big_Picture/2.06%3A_Types_of_Disabilities_and_Associated_Barriers.txt |
Knowing the lengths your company recently went to in making its storefront locations physically accessible, you are eager to understand how accessibility legislation may extend into the digital realm. The added risk of potential legal action, and the complaint’s reference to a human rights violation, have drawn concern from the company’s leadership. They have asked you to investigate what legislation might already exist with reference to digital accessibility.
You discover that there is, in fact, legislation in place in Ontario as part of the Accessibility for Ontarians with Disabilities Act (AODA): Sections 12 and 14 speak to digital accessibility, covering accessible formats and web content, respectively. You see that accessible websites are specifically addressed in Section 14(4).
While you are reading about the AODA Information and Communications Standard, you remember the discussion at the last manager’s meeting, about the plan coming together that will see several new stores open over the next year, located in the United States, the European Union, and Australia. It occurs to you that these countries may have their own digital accessibility standards, and that you should look into those while learning about the local accessibility requirements.
Web Content Accessibility Guidelines (WCAG 2.0)
The W3C Web Content Accessibility Guidelines (WCAG 2.0) have become broadly accepted as the definitive source for web accessibility rules around the world, with many jurisdictions adopting them verbatim, or with minor adjustments, as the basis for accessibility laws that remove discrimination against people with disabilities on the web.
While you do not need to read the whole WCAG 2.0 document, it is good to have a basic understanding of what it covers.
Toolkit: WCAG 2.0 can be dry and time consuming to read through and understand. We have created the 10 Key Guidelines, a summary that helps familiarize you with the more common web accessibility issues.
After reviewing the 10 Key Guidelines, start by learning about the Canadian and U.S. web accessibility regulations, then take the Challenge Test to check your knowledge.
Canada
Accessibility for Ontarians with Disabilities Act (AODA)
The materials here have been written in the context of the AODA, which came into effect in 2005 with the goal of making Ontario the most inclusive jurisdiction in the world by 2025. Part of this twenty-year rollout involved educating businesses in Ontario, many of which are now obligated by the Act to make their websites accessible, first at Level A between 2012 and 2014, and then at Level AA between 2016 and 2021.
Key Point: AODA adopts WCAG 2.0 for its Web accessibility requirements, with the exception of two guidelines:
1. Ontario businesses and organizations are not required to provide captioning for live web-based broadcasts (WCAG 2.0 Guideline 1.2.4, Level A)
2. Ontario businesses and organizations are not required to provide audio description for pre-recorded web-based video (WCAG 2.0 Guideline 1.2.5, Level AA)
Otherwise, AODA adopts WCAG 2.0 verbatim.
Toolkit: For key information on the adoption of WCAG 2.0 in the context of the AODA, refer to the Integrated Accessibility Standards (of the AODA).
Canadian Government Standard on Web Accessibility
In 2011, the Government of Canada (GOC) introduced its most recent set of web accessibility standards, made up of four sub-standards that replace the previous Common Look and Feel 2.0 standards. The Standard on Web Accessibility adopts WCAG 2.0 as its web accessibility requirement, with the exception of Guideline 1.4.5, Images of Text (Level AA), in cases where “essential images of text” are used, in cases where “demonstrably justified” exclusions are required, and for any archived web content. The standard applies only to Government of Canada websites.
Toolkit: For full details of the Government of Canada accessibility requirements, read the Standard on Web Accessibility.
Accessibility 2024
In 2014 the British Columbia government released Accessibility 2024, a ten-year action plan designed around twelve building blocks intended to make the province the most progressive in Canada for people with disabilities. Accessible Internet is one of those building blocks. The aim is to have all B.C. government websites meet WCAG 2.0 AA requirements by the end of 2016.
Canadians with Disabilities Act
Currently a work in progress, this act intends to produce national accessibility regulations for Canada. Visit the Barrier-Free Canada website for more about the developing Canadians with Disabilities Act, and the Government of Canada on the consultation process.
United States
Americans with Disabilities Act (ADA)
The ADA does not set out specific technical requirements for website accessibility; however, there have been a number of cases in which organizations considered to be “places of public accommodation” were sued over the inaccessibility of their websites (e.g., Southwest Airlines and AOL), and the defendant organization was required to conform with WCAG 2.0 Level A and Level AA guidelines.
There is a proposed revision to Title III of the ADA (Federal Register Volume 75, Issue 142, July 26, 2010) that would, if passed, require WCAG 2.0 Level A and AA conformance to make Web content accessible under ADA.
Section 508 (of the Rehabilitation Act, U.S.)
Section 508 is part of the U.S. Rehabilitation Act and its purpose is to eliminate barriers in information technology, applying to all Federal Agencies that develop, procure, maintain, or use electronic and information technology. Any company that sells to the U.S. Government must also provide products and services that comply with the eleven accessibility guidelines Section 508 describes in Section 1194.22 of the Act.
These guidelines were originally based on a subset of the WCAG 1.0 guidelines, and were recently updated to include the WCAG 2.0 Level A and AA guidelines as new requirements for those obligated through Section 508. Though the updated regulation took effect on March 20, 2017, those affected were given until January 18, 2018, to comply.
2.08: Self-Test 1
1. In Ontario, which section of the AODA Information and Communication Standards addresses website and web content accessibility?
1. Section 6
2. Section 12
3. Section 13
4. Section 14
5. Section 18
2. In the United States, when are obligated organizations required to comply with the recent changes to Section 508 of the Rehabilitation Act?
1. January 1, 2019
2. January 1, 2018
3. January 18, 2018
4. March 17, 2017
5. January 1, 2017
An interactive or media element has been excluded from this version of the text. You can view it online here:
http://pressbooks.library.ryerson.ca/dabp/?p=654 | textbooks/biz/Business/Advanced_Business/Digital_Accessibility_as_a_Business_Practice/02%3A_Understanding_the_Big_Picture/2.07%3A_North_American_Digital_Accessibility_Laws_and_Regulations.txt |
United Kingdom
Equality Act 2010
The Equality Act in the United Kingdom does not specifically address how web accessibility should be implemented, but Section 29(1) requires that those who sell or provide services to the public must not discriminate against any person requiring the service. Effectively, preventing a person with a disability from accessing a service on the web constitutes discrimination.
Sections 20 and 29(7) of the Act make it an ongoing duty of service providers to make “reasonable adjustments” to accommodate people with disabilities. To this end, the British Standards Institution (BSI) provides a code of practice (BS 8878) on web accessibility, based on WCAG 1.0.
For more about BSI efforts, watch the following video:
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=656
© BSI Group. Released under the terms of a Standard YouTube License. All rights reserved.
Readings & References:
Europe
Throughout Europe, a number of countries have their own accessibility laws, each based on WCAG 2.0. In 2010, the European Union itself introduced web accessibility guidelines based on WCAG 2.0 Level AA requirements. The EU Parliament passed a law in 2014 that requires all public sector websites, and private sector websites that provide key public services, to conform with WCAG 2.0 Level AA requirements, with new content conforming within one year, existing content conforming within three years, and multimedia content conforming within five years.
This does not mean, however, that all countries in the EU must now conform. The law now goes before the EU Council, where heads of state will debate it, which promises to draw out adoption for many years into the future, if it gets adopted at all.
Readings & References:
Italy
In Italy, the Stanca Act 2004 (Disposizioni per favorire l’accesso dei soggetti disabili agli strumenti informatici) governs web accessibility requirements for all levels of government, private firms that are licensees of public services, public assistance and rehabilitation agencies, transport and telecommunications companies, as well as ICT service contractors.
The Stanca Act has 22 technical accessibility requirements originally based on WCAG 1.0 Level A guidelines, updated in 2013 to reflect changes in WCAG 2.0.
Readings & References:
• Stanca 2013 Requirements (Italian)
Germany
In Germany, BITV 2.0 (Barrierefreie Informationstechnik-Verordnung), which adopts WCAG 2.0 with a few modifications, requires accessibility for all government websites at Level AA (i.e., BITV Priority 1).
Readings & References:
• BITV (Appendix 1)
France
Accessibility requirements in France are specified in Law No 2005-102, Article 47, and its associated technical requirements are defined in RGAA 3 (based on WCAG 2.0). It is mandatory for all public online communication services, public institutions, and the State, to conform with RGAA (WCAG 2.0).
Readings & References:
• Law No 2005-102, Article 47 (French)
• Référentiel Général d’Accessibilité pour les Administrations (RGAA) (French)
Spain
The web accessibility laws in Spain are Law 34/2002 and Law 51/2003, which require all government websites to conform with WCAG 1.0 Priority 2 guidelines. More recently, UNE 139803:2012 adopted WCAG 2.0 requirements and mandates that the following types of organizations comply with WCAG Level AA requirements: government and government-funded organizations; organizations with more than 100 employees; organizations with a trading volume greater than 6 million euros; and organizations providing financial, utility, travel/passenger, or retail services online.
(See: Legislation in Spain )
Australia
Though not specifically referencing the web, section 24 of the Disability Discrimination Act of 1992 makes it unlawful for a person who provides goods, facilities, or services to discriminate on the grounds of disability. This law was tested in 2000, when a blind man successfully sued the Sydney Organizing Committee for the Olympic Games (SOCOG) when its website prevented him from purchasing event tickets.
Shortly afterward, the Australian Human Rights and Equal Opportunity Commission (HREOC) released the World Wide Web Access: Disability Discrimination Act Advisory Notes. These were last updated in 2014 and, while they do not have direct legal force, they provide web accessibility guidance for Australians on how to avoid discriminatory practices when developing web content, based on WCAG 2.0.
Readings & References:
Readings & References: For more about international web accessibility laws, see the following resources:
2.10: Activity- One-Minute Elevator Pitch
Establishing a culture of accessibility in an organization requires buy-in from senior management. These managers may not always understand the implications of accessibility barriers on the company. Using the knowledge you have gained to this point, write an elevator pitch to convince a senior manager that accessibility is important to the company.
If you are not familiar with elevator pitches, they often unfold when you, the speaker, step onto an elevator and happen to run into a key senior person in the company, someone who typically spends her day running from meeting to meeting. You have her as a captive audience for one minute while the elevator ascends. This is the only opportunity you will have to pitch your idea to this person, and if you succeed in convincing her, she will support you in your effort.
Your task in this activity is to convince one of the following people that digital accessibility is very important and that you have a good idea that is sure to benefit the company. You may want to consider different arguments for different people.
• President/Chief Executive Officer
• Director of Marketing
• IT Manager/Chief Information Officer
• General Manager
• Human Resource Director
For help with creating your elevator pitch, read Mindtools’s How to Create an Elevator Pitch.
Suggested Viewing: If you would like to see examples of an elevator pitch, have a look through the following video resources.
1. Video: 6 Elevator Pitches for the 21st Century
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=658
© THNKR. Released under the terms of a Standard YouTube License. All rights reserved.
2. Video: Elevator Pitch Winner (Utah State)
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=658
© UtahStateCES. Released under the terms of a Standard YouTube License. All rights reserved.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=658
© PEATworks. Released under the terms of a Standard YouTube License. All rights reserved.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=658
© PEATworks. Released under the terms of a Standard YouTube License. All rights reserved.
2.11: Understanding the Big Picture- Takeaways
In this unit, you learned that:
• Addressing digital accessibility actually saves a business money; it is not an unrecoverable cost.
• There is a strong potential for a significant increase in customers for businesses that address digital accessibility.
• Digital accessibility should not be an afterthought; rather, it needs to be part of the business strategy and the daily operations of the business.
• Addressing accessibility is a quality attribute of a business and improves its profile. | textbooks/biz/Business/Advanced_Business/Digital_Accessibility_as_a_Business_Practice/02%3A_Understanding_the_Big_Picture/2.09%3A_International_Digital_Accessibility_Regulations.txt |
In this unit, you learned the following about establishing a digital accessibility committee:
• Disability sensitivity training, a good understanding of accessibility and standards such as WCAG, and awareness of accessibility barriers are all key knowledge areas required across different company roles.
• Accessibility committee members should be chosen strategically and should represent a good cross-section of the business.
03: The Committee and the Champion
If your organization has more than a handful of employees, or has multiple groups or departments that serve different purposes within the organization, it is helpful to recruit staff to represent and speak for each group on an Accessibility Committee (or whatever you choose to call it). The committee will typically be made up of people in senior positions, people with influence, and employees with disabilities. These committee members must be willing to sell the ideas put forth by the committee to raise awareness and shape the culture within the groups they represent.
You will probably also want to assign a person to be in charge of the whole committee: a person we will refer to as the Accessibility Champion. This person should have expertise in accessibility and should be able to lead, manage change, and oversee the organization’s accessibility efforts as a whole.
3.02: Objectives and Activities
Objectives
By the end of this unit, you should be able to:
• Distinguish accessibility skills and knowledge required across a variety of roles.
• Outline the makeup of an Accessibility Committee.
• Describe important characteristics of a good Accessibility Champion.
Activities
• Assess your own ability to be an Accessibility Champion.
3.03: Identifying Key Areas and People
Having learned about how people with disabilities use digital content and the Web, and knowing about the local and international digital accessibility regulations, you now want to determine what led to the customer’s complaint in the first place.
The person who submitted the complaint has identified that he is blind and uses a technology called a screen reader to access content on the Web. From your research on how people with disabilities use the Web, you know that screen readers read out the text content from web pages.
The complaint mentions two particular issues. First, many images in the shopping area of the company’s website are announced as file names, such as “rt-004.jpg”, rather than something meaningful like “add to shopping cart.” You discover the problem is the result of images in the shopping cart application not having a text description. You know that “alt text” is the way to provide a text description for images on the Web.
Second, the buttons in the shopping cart cannot be activated with a key press; rather, they require a person to click the buttons with a mouse. Since people who are blind typically cannot use a mouse (not being able to see a mouse pointer), you have learned that they usually rely on their keyboard to navigate through content and to press buttons or activate links. When these website elements cannot be accessed or operated with a key press, they are inaccessible to anyone who relies on a keyboard to navigate.
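Both issues from the complaint can often be caught with a simple static scan of the page markup. The following Python sketch is purely illustrative (a real audit would combine a full checker with manual testing); it flags images that have no alt attribute and click-only div/span controls that keyboard users cannot reach. The sample markup and the `addToCart` handler name are hypothetical.

```python
from html.parser import HTMLParser

class ComplaintScanner(HTMLParser):
    """Flag the two barriers from the complaint: images with no alt
    attribute (screen readers fall back to the file name) and
    click-only controls that keyboard users cannot reach."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.issues.append(f"no alt text: {attrs.get('src', '?')}")
        # A div or span with a click handler but no tabindex can only
        # be operated with a mouse, not a keyboard.
        if tag in ("div", "span") and "onclick" in attrs and "tabindex" not in attrs:
            self.issues.append(f"<{tag}> is clickable but not keyboard-focusable")

scanner = ComplaintScanner()
scanner.feed('<img src="rt-004.jpg"><div onclick="addToCart()">Add to cart</div>')
print(scanner.issues)
```

A scan like this can find missing alt attributes, but only a person can judge whether alt text that is present actually describes the image.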
You first approach the content developer who set up the products in the shopping cart application, and ask that she go through the product list and add the missing text descriptions. But, she tells you the shopping cart editor does not have a way to add text descriptions, or alt text, for product images.
You then approach your company’s web developer to see if it is possible to add an alt text field to the editor used to add product images. As it turns out, the shopping cart application is a third-party proprietary application, and, apart from simple changes to brand the shopping cart, there is little that can be done to make changes to the editor without going back to the vendor. You also ask your web developer about the keyboard access problem, and he tells you this cannot be modified either without going back to the vendor.
You wonder how the company ended up purchasing this shopping cart application given its limited accessibility support. The next stop in your investigation is your purchasing department. You ask about the accessibility requirements that were included with the request for proposals (RFP), and discover that no accessibility requirements were outlined in the RFP. The purchasing department did not know about the requirement to purchase accessible technologies when they are available.
Through this investigation, you begin to realize that accessibility knowledge needs to be woven through many roles in the company. The next area you focus on is understanding what types of digital accessibility knowledge are needed for various roles in the company, and the potential training that might be needed to ensure each role understands its accessibility responsibilities.
The roles you identify include:
• Retail store staff
• Retail store managers and assistant managers
• Web developers
• Web content editors
• Communications and marketing staff
• Procurement and purchasing staff
• Telephone support staff
• Video support staff
• Graphic artists
• Senior managers and directors
• Human resource staff
• Distribution centre staff
• Office support staff
Depending on a person’s role in a company, different types of accessibility knowledge may be needed. The following is an example of the different knowledge various roles may need, though depending on the size of a company and the nature of the business, this knowledge could be adapted across roles. For instance, if a company does not have a human resource (HR) department, then knowledge of accessible hiring practices and accessibility knowledge requirements for various roles may shift to senior managers responsible for hiring new staff.
Retail store staff: Since retail staff often do not use digital tools or content beyond perhaps a web-based checkout, the main focus of their knowledge should be disability sensitivity, so they are able to interact comfortably and appropriately when people with disabilities are shopping in the retail stores.
People who have little experience interacting with a person with a disability are often unsure how to do so. They may feel uncomfortable and wary of saying the wrong thing. In general, people with disabilities should be treated like anyone else, though this may be difficult for some, for instance, those who have never met a person who is blind, is deaf, or uses a wheelchair.
Retail store managers and assistant managers: Like other retail store staff, store managers should also receive disability sensitivity training, and they should be able to provide training to other store staff.
Managers should also have a general overview of the business’s accessibility requirements as a whole, so they are able to identify and potentially resolve any accessibility issues they may encounter through the day-to-day retail store operation.
Web developers: The company’s web developers play a key role in ensuring that the company’s public website, in particular, meets accessibility requirements. They should have a good understanding of the W3C Web Content Accessibility Guidelines (WCAG 2.0), in addition to having expert knowledge of HTML, CSS, and JavaScript. WCAG is the international guideline for developing accessible web content and is the basis for many international web accessibility regulations.
A web developer may also be a good person to oversee the company’s digital accessibility efforts, as they have a good understanding of the technologies involved and the ability to evaluate and remediate accessibility issues. An accessibility lead should have both an understanding of accessibility technology and an understanding of disability and the related accessibility barriers. This combination of expertise can be difficult to find, so it is often more effective to educate a web developer on disability and accessibility issues than to train a disability expert on the technical aspects of implementing digital accessibility.
Web content editors: Those who develop the content for a website should have basic understanding of WCAG 2.0, though they typically do not need the level of understanding that web developers need. Among the many potential accessibility issues in digital content, web content editors should be aware of things like including text descriptions for images, structuring content with the proper use of headings, and creating links in content that describe in a meaningful way where the link leads.
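Some of what content editors are responsible for, such as proper heading structure, can be spot-checked automatically. The following Python sketch is an illustration only, not a production tool; it uses the standard library to report skipped heading levels (for example, an h4 appearing directly after an h2), and the sample markup is hypothetical.

```python
from html.parser import HTMLParser
import re

class HeadingChecker(HTMLParser):
    """Report skipped heading levels, e.g. an h4 directly after an h2."""

    def __init__(self):
        super().__init__()
        self.last_level = 0
        self.problems = []

    def handle_starttag(self, tag, attrs):
        match = re.fullmatch(r"h([1-6])", tag)
        if match:
            level = int(match.group(1))
            # Jumping more than one level down breaks the outline that
            # screen reader users rely on to navigate a page.
            if self.last_level and level > self.last_level + 1:
                self.problems.append(f"h{self.last_level} followed by h{level}")
            self.last_level = level

checker = HeadingChecker()
checker.feed("<h1>Products</h1><h2>Shoes</h2><h4>Sizes</h4>")
print(checker.problems)  # ['h2 followed by h4']
```

Judging whether link text is meaningful, by contrast, still requires a human reviewer.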
Communications and marketing staff: Marketing staff should also have the basic understanding of WCAG that content editors have, though there are some other guidelines that may be relevant, such as effective use of colour when developing promotional materials (e.g., having sufficient contrast between foreground and background and ensuring that colour alone is not used to represent meaning).
Marketing staff may also produce documents that are distributed both internally and externally to the public. They should also have an understanding of how to use accessibility features in various document authoring tools such as Microsoft Word or Adobe Acrobat Pro, among others. Most current document authoring tools should have features for testing and authoring accessible documents. Upgrades may be necessary to take advantage of those features, if older software is still being used.
Procurement and purchasing staff: Those who buy products and resources for a company need to have a good understanding of WCAG 2.0, or at a minimum understand that when purchasing software in particular, and choosing between comparable products, the more accessible one should be purchased. Purchasing agents may make use of third-party accessibility evaluation services to report on the accessibility of potential purchases. The company’s web developers may also be a good source of evaluators, assuming they have acquired the necessary expertise with WCAG.
Procurement staff also need to know how to ask vendors about accessibility features and how to critically evaluate the responses to those requests, ensuring vendors are being honest about the accessibility of their products. Some vendors may tell you what you want to hear, which may not necessarily be the whole truth, while others may not know about accessibility at all, which is a good indication that their products are not accessible.
Telephone support staff: These staff should have similar disability sensitivity training, though typically, unless a caller identifies themselves as having a disability, support staff may not be aware of it. Nonetheless, when they know they are interacting with a person with a disability, they need to know how to interact in an appropriate way.
Telephone support staff should know how to use a TTY (Teletype or Teletypewriter), used by people who are deaf to communicate with hearing individuals by phone. If your support services do not include TTY access, telecommunications providers can typically provide the service.
Video support staff: Video production editors need to know about captioning and audio description. Captioning provides access to the audio track in multimedia content for those who are deaf, and audio description provides access to meaningful visual elements or activity in a video that are not obvious by listening to the audio track, for those who are blind.
Graphic artists: Similar to marketing staff, graphic artists need to be aware of the basic WCAG guidelines and issues around the use of colour.
Senior managers and directors: The senior people in a company need to have a basic understanding of digital accessibility as a whole, as well as a good understanding of the accessibility regulations that govern a business’s accessibility requirements. They also need to be open to change and to understand the business arguments for creating an inclusive business. Ultimately, it is the senior management in a company that make or break a company’s accessibility efforts.
Human resource staff: HR staff need to have a good understanding of the local accessibility laws, and related accessible hiring practices. They also need to know about the required accessibility knowledge for the roles described here, as well as other potential roles. HR staff also need to be able to ask the right questions to determine, for instance, if a web developer has expertise with WCAG, or to perhaps assess the marketing department or office personnel’s understanding of accessibility features in the authoring tools they use.
HR staff may also be responsible for training efforts. While having accessibility knowledge for a given role should give applicants an advantage over others, in reality it is often difficult to find candidates with both expert understanding of the job they are being hired for, and knowledge of accessibility elements for that role. Fortunately, for many roles, accessibility training is often quick, like training office staff to use PDF accessibility features. With a few hours of training, staff can acquire all the skills they need to get started creating accessible content. However, for other roles, like web developers, it can take a significant amount of training and time to develop their expertise.
Distribution centre staff: These staff members, who may include inventory control staff, shippers/receivers, truck drivers, and warehouse managers, among others, may need little accessibility training. They may have no interaction with the public and may not be involved in activities that produce digital content, but they should still be aware of their company’s accessibility obligations.
Office support staff: These staff members are likely to use various document authoring tools, and should be aware of, and use, the accessibility features that tools such as Microsoft Word and Adobe Acrobat Pro provide.
Understanding the diversity of skills and knowledge in the company’s workforce, you decide that it will be more effective if each department manages its own digital accessibility efforts. You decide to create an accessibility committee, made up of senior or influential people from each of the major groups in the company. Your goal is to take a proactive approach to accessibility, addressing issues before they become complaints, rather than a reactive approach, where issues are addressed only after a complaint has been received.
By implementing a proactive approach, you are aiming to address potential barriers before they result in lost customers. While the company did receive a complaint, you understand that many people who encounter accessibility barriers do not complain. They just leave, perhaps going to the competition. You are convinced that if they come to your website and have a pleasant, accessible experience, they will likely return and make additional purchases.
You plan to have the accessibility committee initially meet several times over a two-month period to get accessibility efforts underway, then reduce the frequency of meetings to once per quarter to receive updates from each group and discuss any new issues that arise.
Who Should be on the Committee?
Ideally the Accessibility Committee (AC) should be made up of influential and knowledgeable representatives from different areas of the company, starting with a senior executive who can affect the company’s accessibility policy. In Figure 2.1, the CEO at the top of the organizational structure would be that person, though it does not necessarily need to be the company’s top officer.
Figure 2.1: A possible accessibility committee structure
Source: Web Accessibility: Web Standards and Regulatory Compliance, Chapter 3: Implementing Accessibility in the Enterprise. (Urban Burks, 2006)
Below the senior executive is the person who will oversee the committee, in this case referred to as the “Accessibility Champion.” This person should hold a relatively senior position, or have substantial influence in the company, and should have a good understanding of accessibility, disability, and the technical aspects of implementing accessibility. This person should also find accessibility interesting and challenging, and should not be forced into the role. Depending on the size of the company, this may be a person dedicated specifically to overseeing the company’s accessibility efforts, or it could be someone in another role who manages accessibility efforts on a part-time basis.
You debate whether you are the best person to take on the Accessibility Champion role. While you are becoming more familiar with digital accessibility, and you find it very interesting, you are not sure if you have the technical knowledge to fully understand the possibilities or options for developing and implementing digital accessibility. For now, you take on the role yourself, but leave the option open to assign the role to another member of the accessibility committee once it has been established, or even look outside the company for a person with the right balance of technical background and disability/accessibility knowledge to understand the technologies involved.
You decide to approach the CEO, who originally asked you to investigate the complaint, and ask her if she would be a member of the committee. She agrees, but suggests that after initially establishing the committee, she will pass that role to the senior VP. You also approach the director of retail sales, who oversees the retail managers and visits retail stores regularly. You also ask the IT manager to participate, as well as one of the senior web developers who reports to him and who has some web accessibility experience.
Accessibility Committee Members
• CEO (president)
• You (project manager)
• Director of retail sales
• IT manager
• HR manager
• Senior web developer
• The senior VP (oversees operations at head office)
• Director of marketing (oversees the video editors and graphic artists)
Other members of the accessibility committee are strategically selected from throughout the company, with the aim of including representatives across all areas of the company where digital accessibility is a concern, as well as those known to be knowledgeable on the subject of digital accessibility, which may include people within the company who have disabilities.
Accessibility Committee Goals and Responsibilities
Clear goals for the accessibility committee should be defined and promoted throughout an organization so that everyone understands the committee’s function.
The accessibility committee should be responsible for:
• Raising accessibility awareness
• Encouraging feedback to share problems and solutions
• Implementing quality assurance procedures
• Consulting on legal matters related to accessibility
• Providing web and digital accessibility support
• Developing internal accessibility standards
• Representing the organization in accessibility-related public affairs
Source: Implementing Accessibility in the Enterprise, pp. 73-74
Readings & References: | textbooks/biz/Business/Advanced_Business/Digital_Accessibility_as_a_Business_Practice/03%3A_The_Committee_and_the_Champion/3.04%3A_Establishing_an_Accessibility_Committee.txt |
1. Of the following roles, which roles need a good understanding of WCAG 2.0, as opposed to a basic understanding? Choose all that apply.
1. Graphic artists
2. Web developers
3. Web content editors
4. Video support staff
5. Procurement and purchasing staff
6. Retail store staff
2. Which of the following should be goals and responsibilities of an accessibility committee? Choose all that apply.
1. Planning the annual company golf tournament
2. Raising accessibility awareness
3. Representing the organization in public affairs related to accessibility
4. Encouraging feedback to share problems and solutions
5. Developing internal accessibility standards
6. Implementing accessibility in quality assurance procedures
7. Consulting on legal matters related to accessibility
8. Providing web and digital accessibility support
An interactive or media element has been excluded from this version of the text. You can view it online here:
http://pressbooks.library.ryerson.ca/dabp/?p=677
3.06: Characteristics of a Digital Accessibility Champion
The Accessibility Champion will be the person who leads an organization’s accessibility efforts. The title may vary from organization to organization, though the role will be the same.
The Accessibility Champion should be able to relate to others at their level. For instance, when working with developers to promote accessibility practices, being able to talk to them with appropriate technical language will help get the message across convincingly. Similarly, when working with designers, this person should be able to talk in terms of Universal Design and Inclusive Design practices.
Though an Accessibility Champion does not necessarily need a formal computer science or design background, knowledge in these areas is important to be effective in the role. At a minimum, the Champion needs to be comfortable working with technology and have a good understanding of how people with disabilities access information and digital content.
The Accessibility Champion should have a particular set of necessary and desirable characteristics, described here:
Necessary Characteristics
• Has the ability to lead
• Can influence people at all levels of authority, and across all roles
• Has strong communication skills (verbal and written) and the ability to motivate people
• Is creative when faced with challenging situations
• Can talk about disability issues in an informed way
• Is familiar with a range of assistive technologies
• Understands the societal effects of inclusion
• Is passionate about inclusive design
• Has a strong technical background
Desirable Characteristics
• Is able to teach
• Is able to present convincingly to small or large audiences
• Has a disability or is closely related to someone with a disability
• Is a software engineer, or programmer
• Is a web developer
• Is a user interface or interaction designer
3.07: Activity- To Be or Not to Be the Accessibility Champion
Playing the role of the Digital Accessibility Project Manager, one of your decisions will be who takes on the role of the Accessibility Champion. You could very well be the Champion, but do you have the characteristics to take on this role?
Using the “Characteristics of a Digital Accessibility Champion,” how many of the characteristics do you possess? How many are you lacking? What skills could you learn to make you better suited for the role? Are there other characteristics of a champion you have that are not mentioned?
3.08: The Committee and the Champion- Takeaways
In this unit, you learned that:
• Disability sensitivity training, a good understanding of accessibility and standards such as WCAG, and awareness of accessibility barriers are all key knowledge areas required in different company roles.
• Accessibility committee members should be chosen strategically and should represent a good cross-section of the business.
In this unit, you learned the following about creating a digital accessibility culture:
• Accessibility auditing is an important step. Choosing a reputable service involves careful consideration focusing on key reputability factors.
• Two approaches to accessible websites are retrofitting and starting over. Choosing the right approach for your situation involves several factors, including whether to outsource the work to external vendors.
• Building a company-wide strategy about accessibility includes building awareness, hiring people with disabilities, focused presentations, and training.
• Web development accessibility guidelines focus on user interaction with a website, whereas web content accessibility guidelines focus more on standards compliance. Both are important.
• Several approaches should be used to monitor adherence to accessibility guidelines including unbiased quality assurance reviews and the use of automated tools.
• Implementing accessibility will include managing change. Kotter’s Eight-Step Model for Leading Change and Lewin’s Three-Step Model are two common models that can help when planning and facilitating the implementation.
• Resistance by staff may be the most challenging element in implementing change and overcoming the five main reasons people resist change needs to be part of your change management strategy.
04: Creating Digital Accessibility Culture
Digital Accessibility Culture Defined
“Culture,” in the context of digital accessibility, refers to an overarching consciousness or awareness throughout an organization of potential barriers — barriers that may prevent people with disabilities from participating in digital activities or consuming digital information at the same level as their fully able peers.
It means that attention to accessibility is woven into an organization’s processes and integrated with quality assurance, so that when products and services are evaluated, accessibility is part of that evaluation. Everyone who is involved in producing products or delivering services has accessibility in mind while carrying out their duties. They know how to address accessibility in their work, and if they encounter potential barriers they are not sure about, they ask questions, perhaps consulting accessibility experts on staff, searching the Web, or turning to third-party experts for answers. In short, they persevere until they find a solution.
Developing digital accessibility culture requires buy-in from the most senior executive in an organization. That buy-in trickles down through an organization, influencing senior managers, who influence junior managers, who influence the staff reporting to them, and so on, flowing all the way down the organizational hierarchy.
This culture, by way of its adoption throughout an organization, becomes a practice that guides business activities in designing and developing new products, to production, to service delivery, to marketing and communications, to procurement, to hiring, and more. All aspects of the organization are influenced by attention to digital accessibility inclusion.
4.02: Objectives and Activities
Objectives
By the end of this unit, you should be able to:
• Assess an organization’s current level of digital accessibility.
• Describe strategies for building awareness of digital accessibility.
• Plan staff-training strategies for different areas of the organization.
• Explain strategies for managing resistance and change.
Activities
• Handling Resistance
4.03: Assessing an Organization's Current Digital Accessibility Status
One of the accessibility committee’s first tasks is to determine where the company stands in terms of its compliance with accessibility guidelines and to identify gaps where improvements are needed. Since you do not have an accessibility expert on staff, you decide to look into firms that provide accessibility auditing services.
Choosing an Accessibility Auditor
Unless there is already an accessibility expert on staff, organizations will likely want to hire a third-party auditor or recruit a person to fill that role.
It is typically easier to find an auditing service than to find an accessibility expert to hire. Depending on the size of the company, and the amount of digital information it produces, you may or may not want to hire a permanent accessibility person.
Finding a reputable auditing service can also be a challenge. With the growing public awareness of accessibility issues, a growing number of so-called accessibility experts are popping up, taking advantage of the market for these services that has emerged with this awareness. You will want to evaluate these auditing services to ensure their reputability.
Factors you might take into consideration when selecting an accessibility service could include:
• The number of years the service has been in business (the longer the better)
• The clients they have served (potential references)
• The accessibility of the service’s website (dead giveaway to search elsewhere, if their site is not accessible)
• Whether their process aligns with accessibility auditing best practices (not required, but advisable)
• Whether they provide self-service tools (most good firms provide automated test tools)
• Whether accessibility auditing is their primary business (be wary of firms offering accessibility auditing as an add-on service for an unrelated primary business)
• Whether they offer training (the ability to train your staff is a good sign)
There are a number of reputable international services, so companies do not necessarily need to choose a local auditor. Since most accessibility regulations are based on WCAG, most international and local services audit based on those guidelines (or they should).
Self-Assessing
While searching for an accessibility-auditing firm on the Web, you come across a few accessibility-testing tools. Before you decide whether you need a third-party accessibility auditing service, you want to understand what you can do yourself with these tools to get a general sense of your company’s accessibility status. You decide to do a little research on tools that can be used to test/check accessibility. You find there are a wide variety of free and commercial accessibility-testing tools.
Automated Accessibility Checkers
There are quite a number of automated tools for testing web accessibility with varying degrees of coverage and accuracy. Much like choosing an accessibility auditor, choosing a good testing tool also requires a little critical evaluation. You may refer to existing evaluations of these tools and choose based on your needs. You may also want to adopt a number of accessibility checkers that complement each other.
Try This: Copy the URL of your organization’s website and paste it into the AChecker Web Accessibility Checker to see how accessible your website is.
Keep in mind that no automated accessibility checker can identify all potential barriers. In most cases, some human decision-making is required, particularly where meaning is being assessed. For example, does an image’s text alternative describe an image accurately, or does link text effectively describe the destination of the link? Both require a human decision on the meaningfulness of the text.
Key Point: Automated accessibility-testing tools cannot identify all potential barriers. A human must also be involved in testing, making decisions where meaning is being assessed.
There are a number of different types of web accessibility checkers. Depending on your needs, one type may serve your organization better than others.
API-Based Accessibility Checkers
API stands for application programming interface. An API allows developers to integrate an accessibility checker into other web-based applications, such as web-content editors, to provide accessibility testing from within the application. For instance, with a content editor, the checker assesses the contents of the editor while content is being created and edited. Tenon and AChecker provide APIs that can be used for integrations with other applications. To take advantage of an API, a developer would in many cases need to create the integration, though some applications may already include an integration with an accessibility checker.
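As a rough sketch of what such an integration might look like from the developer’s side, the following Python code builds and sends a request to a checker API. The endpoint, parameter names, and response shape here are hypothetical placeholders; a real integration must follow the chosen service’s API documentation.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint; substitute your chosen service's documented API.
ENDPOINT = "https://checker.example.com/api/check"

def build_check_request(page_url: str, api_key: str) -> str:
    """Build the request URL for a (hypothetical) checker API."""
    query = urllib.parse.urlencode({"url": page_url, "key": api_key})
    return f"{ENDPOINT}?{query}"

def check_page(page_url: str, api_key: str) -> dict:
    """Submit the page URL and return the service's JSON report."""
    with urllib.request.urlopen(build_check_request(page_url, api_key)) as resp:
        return json.load(resp)

print(build_check_request("https://www.example.com/shop", "secret"))
```

An editor integration would call something like `check_page` whenever content is saved and surface the returned issues to the author.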
Text-Based Accessibility Checkers
Text-based checkers typically output a list of accessibility issues they have identified and, in some cases, provide recommendations to correct those issues. Some checkers may also categorize issues based on their importance, or on whether an issue is a definite barrier or only a potential barrier that needs to be confirmed by a person. Some checkers evaluate single pages, while others spider through a site and produce a site-wide accessibility report. These checkers often ingest one of the following: a URL of a web page, a user-uploaded HTML file, or copied-and-pasted HTML code; they then produce a report based on the input provided. AChecker is a good example of a text-based accessibility checker, and it also provides an API for integrations.
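The distinction between definite barriers and potential barriers that need human confirmation can be sketched in a few lines. The Python example below is a simplified illustration, not a real checker: missing alt text is a known problem, while alt text that is present still has to be reviewed by a person for meaningfulness. The sample markup is hypothetical.

```python
from html.parser import HTMLParser

class TwoTierChecker(HTMLParser):
    """Classify image findings the way many text-based checkers do:
    'known' problems versus 'potential' problems a human must confirm
    (e.g. whether existing alt text is actually meaningful)."""

    def __init__(self):
        super().__init__()
        self.known = []
        self.potential = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img":
            if "alt" not in attrs:
                # Definitely a barrier: no text alternative at all.
                self.known.append(f"missing alt: {attrs.get('src', '?')}")
            else:
                # Alt text exists, but only a person can judge it.
                self.potential.append(
                    f"confirm alt text is meaningful: {attrs['alt']!r}")

report = TwoTierChecker()
report.feed('<img src="a.jpg"><img src="b.jpg" alt="rt-004">')
print("Known:", report.known)
print("Potential:", report.potential)
```

A real report would add severity levels and remediation advice, but the two-tier split is the core idea.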
Visual Accessibility Checkers
A third type of accessibility checker provides a visual presentation of a web page, pointing out where the issues appear on a page. The WAVE accessibility checker is a good example of a visual accessibility checker.
Browser Plugin Accessibility Checkers
Some accessibility checkers are available as a browser plugin, making it easy, while viewing a web page, to click a button and get an accessibility report. The WAVE Chrome Extension is a good example of a browser-based plugin.
Toolkit: Add an automated web accessibility checker such as AChecker or WAVE accessibility checker to your toolkit. They can be used to get a sense of the accessibility of your organization’s website.
Other Accessibility Checkers
The accessibility checkers mentioned above are just a tiny sample of the tools available. A well-crafted Google search using terms like "accessibility checker" turns up many more, or you can browse the list of accessibility checkers compiled by the W3C.
Other Types of Automated Testing Tools
While using automated web-accessibility checkers is a good start for assessing the accessibility of an organization's web resources, other tools will likely be needed to assess different types of content, such as PDF documents and multimedia.
Manual Testing
As mentioned above, automated testing cannot identify all potential accessibility barriers. There are a few easy manual tests that can be used to identify issues automated checkers may not pick up.
• Tab Key Test: Determines whether all functional elements in web content are keyboard accessible. Place the cursor in the location field (address bar) of your web browser, then repeatedly press the Tab key and follow the focus as it moves through the web page. Any functional elements, like links, buttons, tabs, or forms, among others, should all receive focus while tabbing through the page and, while in focus, should operate by pressing the Enter key or spacebar.
• Select-All Test: Determines keyboard accessibility also. While your mouse cursor is focused on a web page, press CTRL+A (CMD+A on Macs) to select all the content on the page. Any items that are not selected are potentially not keyboard accessible.
• Screen Reader Testing: A little more involved than the two tests above. Install a screen reader and navigate through a web page using the Tab key or arrow keys (in most cases). While listening to the output of the screen reader, determine whether the output makes sense; if not, there could be accessibility and usability problems. We recommend that web developers have a screen reader installed to test web content before it goes public. The ChromeVox screen reader works well for this purpose, installing as an extension for the Chrome web browser.
• User Testing: Though not always required, having actual users with disabilities navigate your company’s web content can turn up a variety of usability issues that an accessibility expert may not identify. When recruiting people with disabilities for testing, ensure they are comfortable navigating the Web and are able to use their assistive technologies effectively to get meaningful feedback on usability issues. Novice assistive-technology users may encounter problems related to their inexperience, rather than problems with the accessibility of the content.
Toolkit: For full coverage of web accessibility auditing practices, consider enrolling your web developers in the Professional Web Accessibility Auditing Made Easy course.
Toolkit: The Accessible Information and Communication: A Guide for Small Business is another useful tool for assessing an organization's digital accessibility status.
Understanding now that there are gaps in the company’s compliance with accessibility guidelines, you start to think about the approaches you might take to implement solutions to fill those gaps.
Having spent some time learning about accessibility testing and trying the tools and strategies you came across, you discover there are many potential accessibility problems with the company's website. You share the results of your testing, along with the tools and strategies, with your senior web developer, whom you ask to review the issues and estimate the time it would take to fix them.
The web developer reports back to you after a few days with a plan that will take longer than you expected. But, he also suggests, having reviewed the details of WCAG and the local accessibility regulations, perhaps he could prioritize the issues by first addressing the critical Level A issues described in WCAG, as well as addressing some of the easier Level AA issues.
He also suggests that you go back to the shopping cart vendor and see whether they are open to making some changes to their system to improve accessibility, reviewing the relevant business arguments if necessary in order to convince them the work will be good for their business.
Retrofitting versus Starting Over
Retrofitting an inaccessible website can be time consuming and expensive, particularly when the changes need to be made by someone other than the website's original developer. Adding accessibility to a new development project requires much less effort and expense, assuming the developers keep accessibility at the forefront of their minds while development is taking place.
Sometimes retrofitting is the only solution available, for instance, when a company is not prepared to replace its website with a new one. In such cases, it may be necessary to prioritize what gets fixed first and what can be resolved later. WCAG can help with this prioritizing. It categorizes accessibility guidelines by their relative impact on users with disabilities, ranging from Level A (serious problems) to Level AAA (relatively minor usability problems). These levels are described below.
• Level A: These issues must be resolved, or they will produce barriers that prevent some groups of people from accessing content.
• Level AA: These issues should be resolved, or they will create barriers that are difficult to get around for some groups of users.
• Level AAA: These issues could be resolved to improve usability for a wide range of people, including those without disabilities.
Level AA is the generally agreed-upon level most organizations should aim to meet, while also addressing any Level AAA requirements that can be resolved with minimal effort. Organizations that directly serve people with disabilities may want to address more Level AAA issues, though it should be noted that full compliance with Level AAA requirements is generally unattainable, and in some cases undesirable. For instance, WCAG's requirement that content be readable at a lower secondary school level is a Level AAA requirement. For a site that caters to lawyers, or perhaps engineers, such language may be inappropriate, or even impossible, so it would be undesirable to meet this guideline in that case.
Key Point: Level AA is the generally agreed upon level of accessibility most organizations should strive to meet. Where feasible, some Level AAA issues could also be addressed.
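This prioritization can be reflected directly in an issue backlog. The sketch below is a hypothetical Python example; the three success criterion numbers (1.1.1, 1.4.3, 1.4.6) are real WCAG criteria at Levels A, AA, and AAA respectively, but the issue records themselves are invented for illustration.

```python
# Order remediation work by WCAG conformance level:
# Level A first (must fix), then AA (should fix), then AAA (could fix).
LEVEL_PRIORITY = {"A": 0, "AA": 1, "AAA": 2}

issues = [
    {"id": "1.4.6", "level": "AAA", "desc": "Enhanced contrast below 7:1"},
    {"id": "1.1.1", "level": "A",   "desc": "Image missing text alternative"},
    {"id": "1.4.3", "level": "AA",  "desc": "Text contrast below 4.5:1"},
]

# Stable sort: within a level, the original report order is preserved.
backlog = sorted(issues, key=lambda issue: LEVEL_PRIORITY[issue["level"]])
for item in backlog:
    print(item["level"], item["id"], item["desc"])
```

Whatever issue tracker an organization uses, the same idea applies: tag every reported issue with its WCAG level, and work the queue from Level A down.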
Working with Vendors and Developers
It is not uncommon for vendors, particularly those from jurisdictions that have minimal or no accessibility requirements, to resist an organization’s requests to improve accessibility of their products. But, there are also vendors who will jump at an opportunity to take advantage of an organization’s accessibility expertise to improve their product. The latter mentality is becoming more and more common as accessibility awareness grows around the world.
You realize that replacing the shopping cart on the company’s website, which was the subject of the complaint the company received, is not currently an option. The company has just renewed a three-year contract with the shopping cart provider and breaking that contract would be very costly. You understand that local regulations allow “undue hardship” as a legitimate argument for not implementing accessibility, and the company’s lawyer agrees.
Regardless, you also understand that your company is losing a potentially large number of customers, who leave the website and shop elsewhere when they encounter accessibility barriers. You want to demonstrate that the company takes accessibility complaints seriously, so you approach the shopping cart vendor with a proposal to help them improve the accessibility of their product.
Depending on the vendor, various approaches may be taken to either guide the vendor through the process of addressing accessibility in their products, or convince a vendor that this is something they need to do.
Ideally, you want the collaboration with a vendor to be a collegial one, where both your company and the vendor benefit. You could have an accessibility audit performed on the software being (or having been) purchased, which should have been done anyway as part of the procurement process, and offer the results to the vendor. Contributing to a vendor in this way may create a sense of "owing your company," making them more receptive to working together to address accessibility issues. When purchasing a new product, it is often possible to have the vendor cover the cost of an accessibility audit performed by an auditor of the company's choice. After the fact, though, a vendor is unlikely to want to take on that expense; in that case, an offered audit is more of a goodwill gesture to keep the vendor on side when asking for work that will likely be unpaid.
On the other hand, if a vendor is resistant, and not interested in your offer, as a last resort you may need to apply more cunning tactics, for instance, by threatening to publish the accessibility audit.
Of course, these scenarios describe only a couple of potential vendor/client relationships. Approaching a vendor with work they will likely not be paid for requires a clear understanding of the vendor's position beforehand. Often the business arguments introduced earlier work well to convince vendors that accessibility is something they can benefit from.
The bottom line is that some vendors will be more approachable than others and different strategies may be needed to have your accessibility requirements met by what may be considerable effort on the part of the vendor.
Suggested Viewing:
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=692
© MaRS Entrepreneurship Programs. Released under the terms of a Standard YouTube License. All rights reserved.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=692
© Tinmouse Animation Studio. Released under the terms of a Standard YouTube License. All rights reserved.
The initial goals reached by the accessibility committee through its first few meetings are to create an "accessibility culture," in which the whole company is aware of the importance of accessibility, and to create a policy that guides how digital accessibility is addressed throughout the company.
With a number of gaps identified, the committee suggests several initial areas to focus on, which together will provide the basis for the company’s accessibility policy.
1. Build awareness
2. Provide training
3. Communicate accessibility guidelines
4. Monitor adherence to guidelines (quality assurance)
5. Ensure accessible procurement practices
Building Awareness
One of the main reasons barriers arise is a lack of awareness. Most people have never met a person who is blind, let alone gotten to know one. As a result, they have little reason to think about accessibility and the potential barriers that may prevent a blind person from accessing digital content.
Hiring People with Disabilities
One sure way to raise awareness of accessibility is to hire people with disabilities. Having people with disabilities in a company's workforce helps build diversity, spread tolerance, and raise awareness of the inequalities that are created when people have little or no experience with disability.
Hiring a person who is blind, for instance, will help expose your workforce to the challenges a blind person faces in everyday life and at work. This person could be a member of the accessibility committee, providing valuable input based on firsthand experience. This person could also provide screen reader testing of the company’s digital resources and quickly identify accessibility issues before they become complaints.
People who are blind can be just as skilled at many activities as people who can see. There are blind programmers, accountants, teachers, lawyers, even hairstylists, to name just a few occupations. Many are highly educated with advanced degrees and doctorates.
Blindness is used here as an example because this group tends to face the most barriers in digital content. However, many people with disabilities are skilled workers. They are often overlooked as a result of systemic misconceptions about what people with disabilities are capable of.
Run an Accessibility Awareness Campaign
Accessibility awareness campaigns can take a variety of forms and can involve publicity, training opportunities, presentations, an archive of resource materials, and an initiative for more company staff. Depending on the size and type of business, some of the following strategies could be used to implement an awareness campaign:
Posters
Posters can be placed in prominent places where staff are likely to encounter them. Some of these locations may include elevators, printer and copier rooms, lunch rooms, reception areas, and bathrooms. Posters could also be made available through an archive linked from the company’s website, where accessibility resources are gathered.
Here is a sample of the types of posters that might be used in an accessibility awareness campaign: Accessible PDFs [PDF]
How-To Instructions
Instructional materials can also be created or gathered and added to the accessibility resources. Here are a few examples of the types of instructional materials that could be distributed:
Instructional Videos
Instructional videos can also be created, or gathered, and made available to staff. There are a great many videos from sources like YouTube that can be gathered into a single, easily accessible location, then publicized throughout the organization. Here are some examples of accessibility-related instructional videos. Search YouTube for more.
Email Campaigns
Email campaigns can be another effective way to raise awareness, perhaps by including an "Accessibility Awareness" section in the company newsletter. This section might include links to video tutorials, updates on accessibility efforts ongoing throughout the company, links to various resources, or announcements about upcoming accessibility workshops. The possibilities are many, and, because a newsletter is distributed regularly, staff are consistently reminded of their accessibility responsibilities.
Alternatively, you could set up a company mailing list that anyone with an accessibility question can post to, and that can also carry accessibility-related information, much like a newsletter. A person or two from the company's accessibility committee could be assigned to monitor the mailing list and provide responses when others have not replied. All employees can be added to the mailing list, so everyone becomes aware of ongoing accessibility efforts and receives regular "reminders" through the dialogue occurring there.
Workshops and Presentations
Educating staff and teaching them new accessibility-related skills can help raise awareness throughout an organization. You may make workshops mandatory for particular staff, like web developers, or optional with a little bribery, like a pizza lunch, to get staff in the same room for a presentation and a question-and-answer session.
Accessibility Knowledge Base
Providing an on-demand collection of resources related to accessibility and encouraging employees to use them can also help raise awareness. A knowledge base can be created with various types of educational materials, such as printable how-to tutorials, video, and examples of good practices. Employees should be encouraged to use these resources and contribute their own accessibility solutions. New additions to the knowledge base, or simply reminders to use what’s there, can be encouraged through the company newsletter, or perhaps with a prominent link on the company’s employee portal.
Your accessibility resources are beginning to accumulate. You’ve decided to put up a few posters and add an accessibility awareness section to your company’s monthly newsletter.
One of the things you’d like to do is develop a number of short workshops for staff in specific roles. Since your company distributes many PDF documents, you think this would be a good starting place for developing the workshops. However, you are not an expert in creating accessible PDF documents, so you will need to educate yourself first. You check with the web accessibility auditing firm you communicated with before doing your own informal audit of the company’s website, and it turns out they offer an accessible documents workshop.
You also realize that the company’s web developers need to be trained as well. While you could give your developers access to the resources you gathered on developing accessible websites, having an expert coach your developers will be a more effective way to get them trained quickly, and it will also give them the opportunity to ask questions, including ones specific to the company’s website and the development processes in place at the company.
You plan to attend the workshops yourself, keeping an eye open for participants with a particular talent or enthusiasm for the topics being taught, thinking ahead to potential staff who could be recruited to the accessibility committee or to lead future workshops or presentations.
Training Efforts Can also Help Develop Awareness
There are a variety of topics related to accessibility that make good one- to three-hour workshops, which teach specific skills and knowledge or raise awareness of accessibility issues. During the early stages of developing digital accessibility business practices in an organization, it may be necessary to bring in an external service to provide training; however, over time, particular staff within the organization may be able to take on the role of instructor. The opportunity to teach topics further helps the trainer build expertise in the topic.
Here are a few suggestions that could be developed into workshops or lunchtime presentations:
Accessible Document Authoring
Audience: Office staff and others
Topics: Creating accessible Microsoft Word documents, converting Word documents to PDF, and using Adobe Acrobat Pro to make PDFs accessible
How People with Disabilities Use the Web
Audience: Everyone
Topics: Meet a person who is blind, gain disability awareness, navigate the Web with a screen reader, review assistive technologies, and experience barriers firsthand
Basic Web Accessibility
Audience: Web content authors and developers
Topics: Introducing the Web Content Accessibility Guidelines (WCAG), common accessibility barriers and their solutions, accessibility principles, and success criteria and techniques
Advanced Web Accessibility with WAI-ARIA
Audience: Web developers
Topics: Static vs. dynamic WAI-ARIA, JavaScript libraries, landmarks and roles, WAI-ARIA best practices
Toolkit: For more advanced web interaction training for your organization’s web developers have them review the ARIA Workshop.
Web Accessibility Auditing
Audience: Web developers and web content authors
Topics: Automated testing, manual testing, screen reader testing, user testing, and types of audits and reports
Toolkit: For more advanced training for your organization’s web developers on accessibility auditing practices, enrol them in Professional Web Accessibility Auditing Made Easy.
Multimedia Captioning
Audience: Web content developers, video production staff, everyone
Topics: Live versus asynchronous captions, open versus closed captions, Amara caption editor, YouTube captioning tools, captioning tools in other media authoring tools, captioning standards, captioning services, and described video
4.07: Self-Test 3
1. Which of the following factors might you take into consideration when selecting a service to audit the accessibility of your organization’s website? Choose all that apply.
1. How long the firm has been in business
2. Whether its auditing processes align with the W3C accessibility auditing best practices
3. Whether it provides automated self-assessment tools for accessibility checking
4. Whether it offers training for your staff
5. Whether its auditing staff have a university accessibility degree
2. When self-assessing web accessibility, which of the following are strategies that might be used? Choose all that apply.
1. Do a Tab key test.
2. Use automated accessibility checkers.
3. Have people with disabilities do testing.
4. Use a screen reader to navigate through a website.
5. Do colour contrast testing using an online tool.
An interactive or media element has been excluded from this version of the text. You can view it online here:
http://pressbooks.library.ryerson.ca/dabp/?p=702
4.08: Accessibility Workshop Resources
Scan through the following workshop slides for an overview of the topics that would be included in training for web developers.
View on Prezi: Accessibility for Web Developers
An interactive or media element has been excluded from this version of the text. You can view it online here:
http://pressbooks.library.ryerson.ca/dabp/?p=705
Toolkit: Add the Web Accessibility Workshop to your toolkit, and share it with your web developers.
For more advanced training for web developers that focuses on auditing practices and the use of WAI-ARIA for making web interactivity accessible, review the topics of the following workshop.
View on Prezi: Accessibility Hands-on
An interactive or media element has been excluded from this version of the text. You can view it online here:
http://pressbooks.library.ryerson.ca/dabp/?p=705
Toolkit: Add the Accessibility Hands-On Workshop to your toolkit, and share it with your web developers.
To ensure consistency in the accessibility of the company’s digital content, the accessibility committee has decided there needs to be a set of simple guidelines staff can follow. You start by creating three guidelines:
• Web Development Accessibility Guidelines
• Web Content Accessibility Guidelines
• PDF Accessibility Guidelines
Web Development Accessibility Guidelines
Though you could simply have your developers review WCAG to understand what needs to be done to ensure the company's presence on the Web is accessible to everyone, that would require a great deal of "interpretation," and many aspects of WCAG may not be relevant. Instead, you may want to identify the specific issues that need to be addressed in the company's websites and develop a set of guidelines specific to those issues.
Use the 10 Key Guidelines [PDF] as your starting point. These guidelines can be distilled into a brief list of issues developers must keep in mind when developing for the Web. The guidelines might be as follows, though, depending on the organization, they may require minor adaptation to fit its needs. They should be easy to learn and memorize.
1. All images must have an alt attribute that describes the meaningful elements of the image, or the alt attribute is left empty if the image is decorative or otherwise meaningless.
2. All multimedia content with meaningful dialogue must have captions (video) or a transcript (audio).
3. Content with meaningful sections or topics must be structured using properly sequenced HTML headings.
4. If a series of items looks like a list, use proper list markup to format those items.
5. Where data is being presented in a table, the first row (and sometimes the first column) should be formatted as proper table header cells (i.e., <th>).
6. Ensure that when navigating through web content using the Tab key, the cursor moves from left to right, top to bottom, and does not veer from that pattern.
7. Ensure that the cursor’s focus is easily visible when navigating through elements of the page with the Tab key.
8. When using colour to represent meaning, ensure that something other than colour also represents that meaning.
9. Ensure that foreground (text) colours and background colours provide sufficient contrast: a ratio of at least 4.5:1 for smaller text and 3:1 for larger text.
10. Ensure that all features that operate with a mouse, also operate with a key press.
11. Provide ways to skip past repetitive content using bypass links and ARIA landmarks.
12. Ensure that link text describes what would be found if the link were followed.
13. Ensure that accessible error or success feedback is provided after completing an action such as submitting a form or logging in, typically using the ARIA alert role with the element containing the feedback.
14. Ensure that all form fields are explicitly labelled using the HTML Label element.
15. Ensure that all functional elements whose operation is not immediately obvious include instructions on how to use them.
16. Describe the accessibility features of an application or website, and position a link to those details where assistive technology users can easily find it in the interface.
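Guideline 9's contrast thresholds come from WCAG's relative luminance formula, which can be computed directly. Below is a minimal Python sketch assuming 8-bit sRGB colour values; black on white yields the maximum possible ratio of 21:1.

```python
def channel(c: int) -> float:
    # Linearize one 8-bit sRGB channel, per the WCAG relative-luminance formula.
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    # Relative luminance: weighted sum of the linearized R, G, B channels.
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    # (L_lighter + 0.05) / (L_darker + 0.05); symmetric in its arguments.
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

black, white = (0, 0, 0), (255, 255, 255)
ratio = contrast_ratio(black, white)
print(round(ratio, 1))   # 21.0, the maximum possible contrast
print(ratio >= 4.5)      # True: passes the threshold for smaller text
```

Online contrast checkers implement exactly this calculation; computing it in-house is useful when auditing a large colour palette at once.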
Web Content Accessibility Guidelines
The Web content accessibility guidelines will be very similar to those for web development listed above, though differing in that content typically tends to be less interactive than a user interface. As such, these guidelines focus more on the accessibility aspects of content.
1. All images must have an alt attribute that describes the meaningful elements of the image, or the alt attribute is left empty if the image is decorative or otherwise meaningless.
2. If images cannot be effectively described in alt text of 125 characters or less, provide a full description in the surrounding text, with a short description and a reference to the full description in the alt text.
3. Avoid using images of text, and where unavoidable, reproduce the meaningful text of the image as actual text in the image’s surroundings.
4. All multimedia with meaningful dialogue must have captions (video) or a transcript (audio).
5. Content with meaningful sections or topics must be structured using properly sequenced HTML headings.
6. If a series of items looks like a list, use proper list markup to format those items.
7. When using colour to represent meaning, ensure that something other than colour also represents that meaning.
8. Ensure that foreground (text) colours and background colours provide sufficient contrast: a ratio of at least 4.5:1 for smaller text and 3:1 for larger text.
9. Where data is being presented in a table, the first row (and sometimes the first column) should be formatted as proper table header cells (i.e., <th>).
10. Ensure that all form fields are explicitly labelled using the HTML Label element.
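Guideline 10 above (explicit form labels) is also machine-checkable. The following is a sketch using Python's html.parser; it detects only explicit label/for associations and ignores implicit labelling (a field wrapped inside its label element), and all class and function names are illustrative rather than taken from a real tool.

```python
from html.parser import HTMLParser

class LabelChecker(HTMLParser):
    """Collects form field ids and <label for="..."> targets so that
    fields without an explicitly associated label can be reported."""
    def __init__(self):
        super().__init__()
        self.field_ids, self.label_fors = [], []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("input", "select", "textarea") and a.get("type") != "hidden":
            # A field with no id at all can never have an explicit label.
            self.field_ids.append(a.get("id", "(no id)"))
        elif tag == "label" and "for" in a:
            self.label_fors.append(a["for"])

def unlabelled_fields(html: str) -> list:
    checker = LabelChecker()
    checker.feed(html)
    return [i for i in checker.field_ids if i not in checker.label_fors]

form = ('<label for="email">Email</label><input id="email" type="text">'
        '<input id="phone" type="text">')
print(unlabelled_fields(form))  # only the phone field is unlabelled
```

As with alt text, the automated part ends where judgment begins: the checker confirms a label exists, but only a person can confirm the label text is meaningful.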
PDF Accessibility Guidelines
PDF accessibility guidelines are similar to web content accessibility guidelines, though there are PDF specific requirements such as reading order, reading language, and document tagging that also need to be addressed to create an accessible PDF. Documents are typically created with Microsoft Word, saved as a PDF, then opened in Adobe Acrobat Pro to adjust the document’s accessibility.
1. When creating a Word document to be converted to PDF, use proper headings to organize sections/topics in the document.
2. When creating a Word document to be converted to PDF, ensure all images have alt text describing the meaningful elements of the image.
3. When creating a Word document to be converted to PDF, ensure that any document with data tables have the “Repeat as header row at the top of each page” option checked in the table properties.
4. Use the Windows version of Word 2010+ to save Word documents as PDF with the “Document structure tags for accessibility” option checked when saving. (This is not supported on Macs.)
5. Use the Make Accessible tools of the Action Wizard in Adobe Acrobat Pro 11+ or Acrobat Pro DC to assess the accessibility of the converted Word document, and make accessibility adjustments.
6. Use the Reading Order tool in the Acrobat Pro toolbar on the left to arrange the elements in the PDF in a logical reading order.
The company’s web developers are working hard to educate themselves about web accessibility, and they are actively attending workshops and investigating accessible web development practices. You see improved accessibility in the company’s websites, though there is still uncertainty about whether the sites comply with local accessibility requirements.
You return to the accessibility auditing firm you are currently in contact with while you have been developing the company’s accessibility plan, and you ask for their assistance in auditing a number of new features that have been added to the main website. Ultimately though, you want to have an expert accessibility person on staff, who can provide accessibility audits on demand.
Web Content Quality Assurance
In an ideal situation, an organization’s web developers would provide web accessibility audits. However, even with accessibility experience, it is wise to have a second pair of eyes review the work of the implementer. This task can be assigned to another developer or perhaps the Accessibility Champion. This is a typical practice in many development activities, and should be no different when accessibility is the subject. In the early stages of building accessibility knowledge into a company’s culture, however, the expertise may not exist in-house to provide effective quality assurance, so third parties may need to be brought in.
Your senior web developer does have some experience with web accessibility, though he does not consider himself an expert. The other developers on staff are still new to the subject. As a result, there is not a sufficiently knowledgeable developer on staff to review the accessibility work of the senior member of the team. You decide that while your developers are building their accessibility expertise, you will bring in a third-party auditor, both to review and to train your developers.
Key Point: Third-party auditing services can be a good source of expertise for managing accessibility quality assurance and training staff.
Accessibility reviews from an expert third-party auditor can act as a form of training. Typically, web accessibility reports will identify barriers, explain why particular barriers are a problem, and provide potential solutions to correct problems. They are typically written for a developer audience, so they are effective tools to educate developers. There will often be questions and feedback between the developer and the auditor, much like a student to teacher relationship. It may only take a few audit scenarios to bring developers to a point where they can do their own audits.
Examples of accessibility auditing services:
Automated Tools to Monitor Web Accessibility
Another option to help ensure the accessibility of an organization’s web content is to implement an accessibility-monitoring system that will send alerts when potential problems are detected. A number of these systems are available, of varying cost and coverage; some include other quality assurance tests, such as spell checking, finding broken links, testing text readability, and even PDF testing. Though these tools can be helpful in catching issues, they should not be relied upon exclusively to identify every accessibility issue that may arise in an organization’s web content. Reviews by a human being should also be conducted on a regular basis, with the resulting reports shared through a knowledge base where accessibility-related information can be stored.
Here are a few examples of accessibility monitoring applications that might be used to supplement accessibility quality assurance efforts of an organization.
Commercial systems:
Free open source:
• Vamola (Italy) – now inactive, but a good base to build your own.
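As a rough illustration of the kind of rule-based check these monitoring tools automate, the sketch below flags images with no alt text using only Python's standard library. The sample HTML and the class name are invented for the example; real services run many such rules across every page of a site.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flag <img> tags without an alt attribute, one simple rule of the
    kind automated accessibility monitors apply across a whole site."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the current tag.
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(self.getpos())  # (line, column) of the offending tag

checker = AltTextChecker()
checker.feed('<p><img src="logo.png" alt="Sharp Clothing"><img src="banner.png"></p>')
print(len(checker.missing))  # the second image has no alt text -> 1
```

Even a comprehensive rule set can only catch machine-detectable problems (it can tell that alt text is missing, but not whether existing alt text is meaningful), which is why periodic human review remains necessary.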
Document Quality Assurance
Document-accessibility auditing is relatively simple when compared with web accessibility auditing. Typically, a third-party review of documents is not necessary, though there may be cases where complex documents, like forms, do require more than basic skills to make them accessible.
For most documents, running the authoring software’s accessibility review tool is sufficient to pick up any potential problems that may exist in a document. Training to use these tools can often be completed in a few hours, followed by opportunities to use these new skills to develop expertise. It may be helpful to have other document-authoring staff quickly examine a document’s accessibility as a second review, to ensure the document is as accessible as it can be. Having staff review others’ work for accessibility can help strengthen accessibility awareness and maintain accessibility skills across a broader range of staff.
If you have a copy editor or production editor on staff who reviews grammar, word usage, spelling, and so on, this person may be a good candidate to develop expertise in document accessibility testing, combining accessibility testing with copy editing.
4.11: Self-Test 4
1. When recommending accessibility requirements for web developers, it is best to send them directly to WCAG, on the W3C website.
1. True
2. False
2. When recommending guidelines for staff that produce PDF documents, WCAG should be suggested.
1. True
2. False
An interactive or media element has been excluded from this version of the text. You can view it online here:
http://pressbooks.library.ryerson.ca/dabp/?p=711
Most of the digital accessibility concerns within your company revolve around the Web, electronic documents, and multimedia. In your research, you discover there are a few other areas where digital accessibility should be considered. These include branding, coding practice, and communication.
Branding
When considering branding elements for an organization, there are a few accessibility considerations to keep in mind.
Use of colour: Using the WCAG guidelines for colour use, ensure sufficient contrast between text colours and the background colours they may appear over. At a minimum, use standard 10- to 12-point fonts and provide a contrast ratio of 4.5:1 or greater.
Fonts: There are font characteristics that make fonts more or less legible, and thus more or less readable. “Fancy” fonts, like Comic Sans for instance, can take longer to recognize, and this affects reading speed. This effect can be magnified for those with a print impairment.
Readings & References: Article on Font Legibility
Images with Text: While sometimes it is unavoidable, images of text should be used sparingly or avoided altogether. Text in images tends to degrade when magnified by those with low vision, making it difficult to read. For those who are blind and using a screen reader, text in images cannot be read at all. The text of a logo is an exception, but if you also include a company motto as part of the logo, for instance, consider adding it as actual text next to the logo rather than making it part of the logo image itself.
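The 4.5:1 contrast ratio mentioned under “Use of colour” comes from WCAG’s relative-luminance formula, which can be computed directly. A minimal sketch in Python; the sample colours are chosen only to illustrate:

```python
def relative_luminance(rgb):
    """Relative luminance per WCAG 2.0, from 0-255 sRGB channel values."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio; normal-size text needs 4.5:1 or greater."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))     # black on white -> 21.0
print(contrast_ratio((119, 119, 119), (255, 255, 255)) >= 4.5)  # mid-grey #777 on white just fails -> False
```

Checks like this are exactly what branding tools and automated monitors apply when they evaluate a colour palette against the guidelines.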
Coding
You might be surprised to know that there are quite a few blind or low-vision computer programmers. The way they code is much like that of any other coder; code itself is not typically accessible or inaccessible. Good coding practices, such as effective use of space and effective commenting, can make code more usable for both sighted and blind coders.
Readings & References: How blind coders code
Communication
Many of the guidelines for creating accessible web content and documents also apply to communication. Where paper documents are distributed, be sure an electronic version is also available, that headings are properly used to structure topics, and that any visual elements in the communication are described in text form.
Email can also be a major form of communication, both for promotional purposes and for personal communications. Emails can be created as plain text, rich text, or HTML. While plain text will generally be accessible, it can lack structural elements, which may be important for longer emails. Rich text and HTML can be marked up with headings, lists, alt text for images, and so on, to make them more accessible. There may, however, be readers that display emails as plain text, in which case these formatted emails can be difficult to understand. For a typical personal email communication, plain text is usually fine. Where formatted text is used, it is advisable to also provide a plain text version as a fallback.
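The plain text fallback recommended above maps onto the standard multipart/alternative email structure, in which a reader displays the richest part it supports and falls back to plain text otherwise. A sketch using Python’s standard email library (the subject, addresses, and content are invented for the example):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Quarterly accessibility update"
msg["From"] = "news@example.com"
msg["To"] = "staff@example.com"

# Plain-text part first: the fallback for readers that do not render HTML.
msg.set_content(
    "Accessibility wins this quarter:\n"
    "- New alt-text guidelines\n"
    "- Captioned promo videos\n"
)

# Structured HTML part: headings and lists that assistive technology can navigate.
msg.add_alternative(
    "<h1>Accessibility wins this quarter</h1>"
    "<ul><li>New alt-text guidelines</li>"
    "<li>Captioned promo videos</li></ul>",
    subtype="html",
)

print(msg.get_content_type())  # multipart/alternative
```

Mail clients that render HTML will show the structured part with its heading and list; plain text readers will show the fallback.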
For more about accessible email, see the following optional readings.
Readings & References:
Text chat is also becoming a more common form of communication, often used by customers to contact support services through a real-time chat application linked from a website. These applications can often be inaccessible to blind users if they have not been developed with consideration for accessibility. The primary considerations when choosing a synchronous web-based chat application include:
• Properly implemented WAI-ARIA Live Region support
• Ability to pause new messages
• Access to a log of chat messages
• Keyboard access throughout the application
• An audio indicator when new messages are posted
• A visual indicator when new messages are posted
• Easy shortcut navigation between the message input field, the chat stream, and the connected users list (if applicable)
For more information on what makes a chat application accessible, as well as ratings for many popular chat applications (as of 2013), see the following Readings & References. Though the focus in the article is on chats used in education, it is relevant for other chat usage scenarios.
Service Equality versus Compliance
Though all services should ideally be accessible to anyone who attempts to access them, there may be occasions when it is just not possible to provide full access for everyone. Chats are one example where there may be unavoidable barriers, mainly because most of the available chat applications have room for accessibility improvements. That said, chat accessibility is improving.
Another good example of a technology that remains a challenge to access for some people with disabilities is videoconferencing systems. Though there have been efforts made by developers to improve the accessibility of these technologies, the currently available systems are generally difficult to use, are only partially usable, or are not usable at all with assistive technologies.
In cases such as these, all efforts should be made to procure the most accessible technology you can find, with the understanding that it may not be accessible to everyone given the state of the art for these technologies. This is not to say organizations should not use such technologies; rather, where they are used, the limited access available should be acknowledged and an alternative provided where possible.
Similarly, primary service-delivery methods should be made accessible first, rather than resorting to providing alternative means of accessing these services for those with disabilities. For instance, do not create a website that is inaccessible because your organization wants to make use of some compelling, inaccessible technology, then create an alternative site for those accessing with assistive technology as a means of complying with regulated accessibility requirements. Despite best intentions, maintenance and upkeep of alternative sites are likely to fall behind that of the primary site. In general, the practice of providing alternative websites is frowned upon, except in cases where it is unavoidable because the technology being used is not yet available in an accessible form.
You understand that there will be a significant change in the way the company does business, implementing what you see as a change in the culture of the company. Your employees already understand aspects of accessible customer service within the company’s retail locations, such as ramps connecting floor levels within stores, elevators where stairs are used to move between levels, and checkout counters that accommodate wheelchair users, among other adjustments to “physical spaces.” But there is little knowledge within the company around issues of “digital accessibility.”
Accommodation for accessibility of physical spaces is less likely to require change in the company’s processes, but digital accessibility quite likely will. It will also involve changes in employee behaviour.
To counter any resistance to the changes that will be needed, you decide to educate yourself on change management. You gather all the information you need for forming arguments that you can use to convince your colleagues, leveraging the business arguments for implementing digital accessibility: these changes are something the company “wants to do” rather than “has to do,” and these changes are good for the company.
So far you have experienced the “business case” for digital accessibility, and you have also been exposed to the legislative reasons behind it. Research suggests that companies that embrace a culture of accessibility are more successful and profitable. However, acceptance of this culture isn’t necessarily easy. One key issue you might experience in fostering a culture of accessibility is “resistance to change” from some of your employees and colleagues. They may wonder why dedicating resources (people, time, money) to implementing digital accessibility is important and how it will affect them.
Readings & References: Why Diversity Matters
People resist change for a number of reasons, most notably, due to fear of the unknown. Not knowing how the change will affect them directly (and to a smaller extent how it affects the company) will cause a number of employees to not readily or willingly accept (at least initially) the proposed changes. Linked with this is the fear of breaking routines, both in how people do their jobs and how it affects their life in general (hours, travel, and technology, etc.).
Resistance to change can occur when workers and management do not agree with the reasons for the change and the advantages and disadvantages of the change process.
Some reasons for resisting change include:
• Self-interest, which can occur when people are more concerned with the implications of the change for themselves than with its effects on the company’s success.
• Misunderstanding, because the purpose of the change has not been communicated effectively or has been interpreted differently.
• Low tolerance to change, because workers prefer having security and stability in their work.
Experience (and research) suggests that the best strategies for minimizing resistance to change are to communicate more effectively, to help people develop the skills and knowledge to handle the proposed changes, and to involve them in designing the changes to be implemented.
In this section of the unit, you will gain an understanding of some of the reasons people may resist change in your organization and how to overcome that resistance.
Change is difficult. The more prepared you are to deal with resistance, the better your chances for success in implementing the changes required. Watch this video outlining seven key strategies for overcoming resistance:
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=715
© Forward Focus. Released under the terms of a Standard YouTube License. All rights reserved.
The seven strategies for overcoming resistance to change in the workplace follow below:
1. Structure the team to maximize its potential.
2. Set challenging, achievable, and engaging targets.
3. Resolve conflicts quickly and effectively.
4. Show passion.
5. Be persuasive.
6. Empower innovation and creativity.
7. Remain positive and supportive. | textbooks/biz/Business/Advanced_Business/Digital_Accessibility_as_a_Business_Practice/04%3A_Creating_Digital_Accessibility_Culture/4.13%3A_Managing_the_Impact_of_Change.txt |
Source: Andrews McMeel Syndication
Instilling digital accessibility culture throughout an organization is likely going to involve change, change that may meet with some resistance. Change can be uncomfortable, and for processes and practices that are ingrained in an organization over many years, it can be very difficult to upset this “status quo.”
Depending on the scope of changes that must occur, preparing for change may be critical to successfully implementing a digital accessibility plan. It is helpful to have a framework from which to manage the changes that will occur as digital accessibility is being implemented throughout an organization.
But which model or framework should you use to help implement a successful accessibility plan? Change management books will introduce you to many models that may fit with your company culture and work processes. To give you a couple of samples of proven change models, we will use Kotter’s eight-step model in this section and Lewin’s three-step change model in the next. Both have many loyal followers and can help you think about how to start moving towards your new digital accessibility plan. Whichever model works best for you, it is important to remember that all of the steps must be followed in order for the model to be effective.
Kotter’s Eight-Step Model for Leading Change
Dr. John P. Kotter at the Harvard Business School, devised the “Eight-Step Process for Leading Change.” It consists of eight stages:
1. Create Urgency
2. Form a Powerful Coalition
3. Create a Vision for Change
4. Communicate the Vision
5. Remove Obstacles
6. Create Short-Term Wins
7. Build on the Change
8. Anchor the Changes in Corporate Culture
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=718
© MindToolsVideos. Released under the terms of a Standard YouTube License. All rights reserved.
Using Kotter’s process, you imagine the pieces of your growing plan fitting into his framework as a way to optimize the strategies and ensure that your hard work pays off for the company.
1. Create Urgency
The obvious element of urgency in your company’s case is the complaint and the suggestion by the customer that legal action may be taken if the company does not show active movement toward resolving the accessibility issues with the online store.
In addition, you may also argue that the market of people with disabilities is a large one, and the company is currently missing out on a good portion of this market, even sending potential customers to the competition. There is an opportunity to capture a growing market of older people with disabilities, many of whom are baby boomers who have reached a stage in their lives where they are losing their sight or hearing, and may not be as mobile as they used to be.
Most of all you need buy-in from senior people in the company. The business arguments discussed in Unit 2, carefully crafted to highlight the benefits to the company, can go a long way to convincing those who will ultimately determine whether a shift in the business culture has a chance of success or not. In the case of the Sharp Clothing Company, the threat of a lawsuit is a strong motivator for senior management, though ideally other business arguments should help lead to change before it reaches the point of legal action.
2. Form a Powerful Coalition
The accessibility committee you have established fills this step of the process, gathering leaders and knowledgeable staff, including those who may need accessibility accommodations, from across the company. This group of people will help define acceptable practices for the company by its actions.
3. Create a Vision for Change
Many people will resist change; they want and need to understand where the company is heading. Articulating a clear vision as to how the company wants to be seen and recognized with respect to accessibility is key here. The accessibility committee’s plan — including steps to build awareness, develop training, communicate guidelines, monitor accessibility quality assurance, adjust procurement processes, review hiring practice, and consolidate these in a digital accessibility policy for the company — meets the objectives of this step.
4. Communicate the Vision
Through the newsletter campaign, strategically placed posters, training opportunities, a series of guidelines tailored to particular roles, and the involvement of people from across the company, you will communicate the company’s move towards creating an inclusive business. This does require a highly coordinated and planned communication strategy that must be consistently applied by the change team.
5. Remove Obstacles
You decide to make the accessibility committee meetings open, so anyone who wants to attend may do so. You also decide to set up a virtual “suggestion box,” positioned prominently on the company’s employee web portal. There, employees are encouraged to suggest improvements or identify where accessibility issues occur. Since most employees like to have a say in how things get done, take time to give them the opportunity to try ideas that are in alignment with the vision and strategies.
6. Create Short-Term Wins
The accessibility committee has come up with the idea of highlighting accessibility accomplishments in the quarterly newsletter and on the company’s main websites. Once per year, all the accessibility-related projects or suggestions would be gathered for the whole company to vote upon, with the winner receiving a weekend for two at a local hotel and spa. Short-term wins are important because they not only make participants feel good about accomplishing something but also give them the momentum to move on to the next step or phase. With one large, undefined project, employees may give up if they can’t see the finish line ahead of them.
7. Build on the Change
Through the wins that have been gathered, the submitters or implementers of more significant ones are given an opportunity to show off their accomplishments. Presentations are recorded and posted to the employee portal for all to see. Links to the videos are included in the quarterly newsletter. Acknowledging the accomplishments and those responsible for them acts as an important feedback mechanism and form of appreciation from the change team.
8. Anchor the Changes in Corporate Culture
With much of the accessibility plan in place, your plan is to formalize all of these elements into a company Digital Accessibility Policy. Culture defines what is acceptable behaviour or not, and the goal here is to make sure that your policy becomes part of your company culture and what your company values.
Readings & References: For more about the Kotter change management strategies, visit the following resources:
1. Lewin’s change model includes eight key steps for managing change.
1. True
2. False
2. Of the following, which one is not a stage of the Kotter Model?
1. Communicate the Vision
2. Create Urgency
3. Misunderstanding
4. Create Short-Term Wins
An interactive or media element has been excluded from this version of the text. You can view it online here:
http://pressbooks.library.ryerson.ca/dabp/?p=720
4.16: Managing Change: Lewin’s Model
Kurt Lewin developed a change model involving three steps: unfreezing, changing, and refreezing. It is a simple, practical model for understanding the change process. For Lewin, the process of change entails creating the perception that a change is needed, moving toward the new, desired level of behaviour and, finally, solidifying that new behaviour as the norm. The model is still widely used and serves as the basis for many modern change models.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=722
© MindToolsVideos. Released under the terms of a Standard YouTube License. All rights reserved.
Unfreezing
In the initial phase of investigating digital accessibility, you have been building awareness through the creation of the accessibility committee, and you have been investigating what aspects of the company’s processes and human resources need to be adjusted. Hanging posters in the lunch room, elevators, and bathrooms, and running the accessibility section in the monthly newsletter, are raising awareness across the company, ahead of the retraining that is being planned.
You have also been practicing strategies for convincing staff at various levels that accessibility is a good thing for everyone, particularly those in senior positions, so that they understand the business, social, and economic aspects of accessibility. You have prepared yourself for resistance to the changes coming as part of the company’s move toward becoming an inclusive organization.
Changing
Based on your knowledge of the Sharp Clothing Company’s workforce, you have a series of short workshops planned that will help staff in various positions learn about their responsibilities to produce accessible products and deliver accessible customer service and introduce them to the tools to help them accomplish these.
To help standardize the processes, the accessibility committee has developed the guidelines for web developers, web content developers, and document authors and producers, so it is clear what steps must be taken in order to ensure they are producing products and services that will be accessible to everyone. The training being planned uses these guidelines as a framework for instruction: staff receive hands-on experience with the tools and processes associated with their jobs, and they have a reference they can continue to consult until they have mastered the tasks and strategies they were taught.
Refreezing
To ensure that attention to accessibility remains high, the company newsletter will continue to highlight particular accessibility accomplishments by staff, and present various accessibility tips and interesting bits of knowledge to keep awareness high.
The yearly contest for the best accessibility implementation will also help keep awareness high, publicizing ongoing efforts, and giving people throughout the company the opportunity to vote on who should receive the “Spa Weekend for Two.”
The plan is to hire a screen reader user to help with accessibility testing, to be a member of the accessibility committee, to work day-to-day with the staff at head office, and to help keep awareness high.
Having an employee who is blind will also help other staff members become accustomed to people with disabilities, and become more aware of barriers that may prevent some people from participating fully.
Try This: Knowing that most people resist change for a variety of reasons, watch a brief clip from the popular TV show The Big Bang Theory, which demonstrates how some people react to change.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=722
© fahss. Released under the terms of a Standard YouTube License. All rights reserved.
Questions to think about:
• Have you ever encountered a situation similar to this? If so, how did you handle it?
• Do you often find yourself as the waiter, trying to offer suggestions to move things along? Were your ideas well received or implemented?
• What if you had to manage someone like Sheldon?
Identifying Forces For and Against Change
Reflecting on the “Dumpling Paradox” video clip, here are some of their reasons for wanting to order dinner:
• They were hungry.
• They were familiar with this restaurant.
• They had past experience knowing what to order.
Yet, there were equally compelling arguments presented for not ordering their regular items:
• Needing to now order for three persons rather than four.
• Their regular menu choices would now lead to too much food to split three ways.
• Too much food overall for them to enjoy.
What we have just done is to conduct a Force Field analysis, a key process of the Kurt Lewin Change Model. It helps you identify the compelling reasons for change and those “forces” which will oppose change. This is a common first step many change leaders use to assess a situation before introducing something new. You can learn more about the process from Change Management Coach.
Force Field Exercise
As a personal exercise to understand a force field analysis, complete the columns below to identify the driving forces and restraining forces as to why you might consider joining a local gym. An example of how to identify the force field will be provided later in this section.
Force Field Example – Should I join a local gym?
Driving Forces Restraining Forces
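A common way to complete a force field analysis is to weight each force from 1 (weak) to 5 (strong) and compare the totals. The sketch below uses hypothetical weights for the gym example; your own entries and weights would differ:

```python
# Hypothetical weights for the "Should I join a local gym?" example (1 = weak, 5 = strong).
driving = {"improve my health": 5, "gym is close to home": 3, "friends already attend": 2}
restraining = {"membership cost": 4, "lack of free time": 3}

# If the driving forces outweigh the restraining ones, the change is favoured.
balance = sum(driving.values()) - sum(restraining.values())
verdict = "change favoured" if balance > 0 else "change resisted"
print(verdict, balance)  # change favoured 3
```

The same tallying approach applies to organizational changes like a digital accessibility plan: listing and weighting forces makes explicit which sources of resistance most need to be addressed.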
4.17: Activity: Responding to Resistance
When implementing organizational change, there will inevitably be those who resist or even outwardly oppose the need for change. From senior executives with considerable power and influence to those “working in the trenches,” each person approaches change from their own perspective and will have different reasons for being concerned.
Readings & References: In the article Overcome The 5 Main Reasons People Resist Change, the author provides five main reasons people resist change:
• Fear of the unknown/surprise
• Mistrust
• Loss of job security/control
• Bad timing
• An individual’s predisposition toward change
Change can be uncomfortable, and, in hopes of avoiding this discomfort, people will often present arguments against it. In this activity, you will consider what arguments your colleagues at Sharp Clothing might make in an attempt to stop or hinder your efforts to introduce accessibility compliance.
Write three possible arguments a resistant employee may give in opposition to implementing digital accessibility, and write a convincing counterargument for each that would help reduce resistance. In each, indicate who the target employee is and/or their role within the organization.
Hints: You might search the Web for statistics or other evidence that demonstrates the benefits of an accessible organization.
Here are some example arguments against change. You may use these or come up with others.
• This has been the way we have always done it.
• We have no people with disabilities as clients.
• It would cost too much to make our website accessible.
• A blind person would not be able to access our website.
• We don’t have the time or resources to implement your accessibility plan.
• We can just use the auto-captioning on YouTube for our promo videos.
• The laws do not apply to us; we only have 25 employees.
4.18: Creating Digital Accessibility Culture: Takeaways
In this unit, you learned that:
• Accessibility auditing is an important step. Choosing a reputable service involves careful consideration focusing on key reputability factors.
• Two approaches to accessible websites are retrofitting and starting over. The right approach for your situation will depend on several factors, including whether to outsource the work to external vendors.
• Building a company-wide strategy about accessibility includes building awareness, hiring people with disabilities, focused presentations, and training.
• Web development accessibility guidelines focus on user interaction with a website, whereas web content accessibility guidelines focus more on standards compliance. Both are important.
• Several approaches should be used to monitor adherence to accessibility guidelines including unbiased quality assurance reviews and the use of automated tools.
• Implementing accessibility will include managing change. Kotter’s Eight-Step Model for Leading Change and Lewin’s Three-Step Model are two common models that can help plan and facilitate the implementation.
• Resistance by staff may be the most challenging element in implementing change. Overcoming the five main reasons people resist change needs to be part of your change management strategy. | textbooks/biz/Business/Advanced_Business/Digital_Accessibility_as_a_Business_Practice/04%3A_Creating_Digital_Accessibility_Culture/4.15%3A_Self-Test_5.txt |
In this unit, you learned the following about procurement and accessibility policy:
• To be successful, an effective web accessibility policy should be rooted within the business culture following the WebAIM eight-step process.
• A web accessibility policy should include procurement practices for both IT and non-IT related goods and services.
• Vendors should be able to verify and validate the accessibility compliance of their products and services.
05: Procurement and Accessibility Policy
What brought about the decision to develop a culture of accessibility throughout the Sharp Clothing Company was the purchase of a shopping cart application for the company’s website without considering and properly evaluating it for accessibility. You decide to investigate strategies for procuring accessible products and services, and you start looking at how accessibility procurement practices will fit into the digital accessibility policy you have been piecing together.
This unit will provide you with guidance on how to create a context for accessible procurement practices through a broader accessibility policy. It will also provide information on how to document your company’s accessibility requirements when communicating with external vendors, and how to assess and work with vendors to support accessibility for all users.
5.02: Objectives and Activities
Objectives
By the end of this unit, you should be able to:
• Explain the elements that make up an accessibility policy.
• Explain key differences in procuring and contracting for accessibility.
• Create an accessibility statement focusing on inclusion.
• Describe strategies for assessing a vendor’s accessibility knowledge.
• Critique and validate vendor accessibility claims against recognized standards.
Activities
• Critique Accessibility Claims
5.03: Digital Accessibility Policy
The pieces of the Digital Accessibility Policy are coming together. You understand that procurement also needs to be included as part of the policy. Before proceeding, you decide to take a look at what others have done in creating procedures for developing such a policy. You also want to look at policies others have published. You discover a wide range of policies, from simple statements outlining an organization’s commitment to web accessibility, to complex documents that describe in great detail each aspect of an organization’s digital accessibility requirements.
Some policies focus specifically on the Web, ensuring web content is accessible to everyone. Others are more general, covering a wide range of accessibility matters including customer service standards as well as accessibility of the built environment.
You decide that your focus will remain on developing a policy that encompasses digital accessibility, which includes web content, documents, multimedia, and information technology (IT).
Creating a Web Accessibility Policy
WebAIM, at the Center for Persons with Disabilities at Utah State University, undertook a project to develop an Eight-Step Implementation Model for creating an organization’s web accessibility policy, recognizing that implementing and maintaining a policy is a cultural, systemic issue within the organization. To succeed and persist as business practice, the policy needs to be ingrained at all levels, with all employees committed to an inclusive presence on the Web.
WebAIM describes the steps as follows:
1. Gather Baseline Information: This step is essentially an audit of the current accessibility status of the organization’s website(s).
2. Gain Top-Level Support: In order for an accessibility policy to work, it needs buy-in from the top levels of the organization.
3. Organize a Web Accessibility Committee: Assemble stakeholders from various groups in the organization, including respected individuals from these groups, web development staff, and where possible, people with disabilities.
4. Define a Standard: Create an organizational web accessibility standard, which could be based on WCAG 2.0 with adjustments to match organizational needs.
5. Create an Implementation Plan: Set timelines and priorities, delegate responsibilities, and monitor progress.
6. Provide Training and Technical Support: Identify those who publish to the Web, assess their skills, plan training for different groups, create lists of resources, tools, code samples, and manuals that provide guidance on producing accessible web content.
7. Monitor Conformance: Schedule annual/semi-annual website reviews, define “monitoring” in someone’s job description, and ensure that person is well versed in HTML authoring and web accessibility.
8. Remain Flexible: Plan for change, such as changes in staff, standards, and technologies.
Readings & References: For a full description of each of these steps, see the Eight-Step Implementation Model.
Examples of Web Accessibility Policies
The following are a few good examples of web accessibility policies from different types of companies and organizations around the world. Scan through them to get a sense of the variability that exists in the types of information included in these policies.
Readings & References:
Food Drink Entertainment
Starbucks Customer Service Policy (AODA)
Government
Massachusetts State Treasury
Cultural Organizations
Musée d’art contemporain de Montréal Web Accessibility Policy
Banks and Commerce
Ontario Securities Commission
While the focus of this unit is on procuring accessible information technology (IT), accessible procurement in general should be part of a larger policy that addresses accessibility at the organizational level. A digital accessibility policy can also fall within a larger accessibility policy which addresses other aspects such as access to physical spaces and access to customer service.
Digital accessibility as a policy on its own will be introduced here to set the context for our discussion of procurement in this unit and the discussion of hiring practices in the next unit.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=736
Procurement in Accessibility Policy
Procuring accessible IT will affect numerous elements of an accessibility policy.
In the early stages of developing an accessibility policy, assessing the baseline accessibility level will include taking stock of third-party software used by the organization. These might include a content management system (CMS), a learning management system (LMS), point-of-sale systems, human resource management systems, and a variety of other types of systems for administering day-to-day operations. Each of these at some point would have gone through a procurement process.
Here are some examples of how procurement fits into an organization’s overall digital accessibility policy:
• In order to implement accessible procurement requirements, agreement is needed from the organization’s top level.
• When providing training and support, staff need to be taught how to use accessibility features within the systems procured.
• Ongoing monitoring is needed to ensure that upgrades to systems don’t compromise accessibility. With software as a service (SaaS) in particular, behind-the-scenes system updates often go unnoticed by typical users of the system.
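One lightweight way to monitor for such regressions is to keep a baseline of automated-checker results and compare the counts after each vendor update. The sketch below is illustrative only: the issue categories and counts are invented, and a real process would pull these numbers from an actual checker run.

```python
# Issue categories and counts are invented for illustration; a real
# process would pull these from an automated checker run.
baseline = {"missing alt text": 0, "contrast failures": 2, "unlabelled fields": 0}
after_update = {"missing alt text": 3, "contrast failures": 2, "unlabelled fields": 1}

# Keep only the categories where the count got worse after the update
regressions = {
    issue: (baseline.get(issue, 0), count)
    for issue, count in after_update.items()
    if count > baseline.get(issue, 0)
}
for issue, (before, now) in regressions.items():
    print(f"Regression: {issue} went from {before} to {now}")
```

A report like this, run on a schedule, gives the monitoring role described above something concrete to act on when a SaaS update slips through unnoticed.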
Readings & References: For more about procurement and accessibility policy, visit these resources:
Ontario Integrated Accessibility Standards Regulation:
• A guide to developing an accessibility policy for organizations with 1-49 employees [PDF]
• Developing accessibility policies and a multi-year accessibility plan: A guide for organizations with 50 or more employees [PDF]
Additional Resources:
5.05: Procuring Accessible Information Technology
Many organizations will have in-house developers who create and maintain web content and who can be trained to implement web accessibility in their development practices. Most organizations, however, will purchase or license third-party software or web applications for particular purposes rather than having their in-house developers create them. It is generally more economical to license than to build complex systems like a content management system (CMS) or a customer relationship management system (CRMS).
The accessibility of these third-party tools should always be assessed before committing to a particular system. This often starts by asking vendors to describe the accessibility features of their product in a request for proposals (RFP). This request might include:
• A checklist of desirable accessibility features that the vendor completes
• A request for a Voluntary Product Accessibility Template (VPAT) if the company is in the U.S.
• A request for a third party evaluation of the product being procured
Here are some tips to follow once you receive accessibility information from vendors:
1. Do some research and take the time to carefully assess the vendor’s claims for accuracy: Because vendors compete to win your business, they may exaggerate claims, word claims to work around known issues in their products, or, in the worst cases, make false claims about the accessibility of their products. It can be helpful to look at the vendor’s own website, test it with an automated checker, and look for accessibility information there; this can tell you a lot about the vendor’s accessibility knowledge. Failing to assess a vendor’s claims may have significant legal consequences if accessibility claims are made against your organization and it can be shown that due diligence was not employed when acquiring the product.
2. Think beyond “yes or no”: It is not uncommon to shortlist a product for purchase or licensing that is not fully compliant but has a good variety of accessibility features. It can be counterproductive to make absolute compliance a requirement; for some complex systems there may be no fully compliant offerings. A system may not be compliant yet still be the most accessible in its class. Aim to procure the system that provides the best mix of required features and accessibility.
Once your organization has decided on procuring a system, it is important to build accessibility requirements into the contract, so that if issues are later found that present significant barriers, vendors must take responsibility and provide solutions.
Some vendors will be receptive to accessibility requests, particularly those in regions where accessibility laws affect their ability to sell their products. It is not uncommon for vendors to have little knowledge of accessibility, having never received such a request; once educated, though, they may be happy to accommodate it. To set yourself up for success, plan to work with vendors that understand the importance of accessibility.
Readings & References: For more about strategies for procuring accessible IT, review the following resources.
While looking at accessibility policies developed by other organizations, you come across several websites that include an “accessibility statement.” You decide such a statement would be a good addition to your company website. You spend some time reviewing statements others have posted to their sites, then you gather a list of elements to include in the statement you will create for your company.
An accessibility statement can be used on a website and in various documentation to let visitors and stakeholders know about the company’s commitment to accessibility. Though an accessibility statement will help inform others of an organization’s efforts and commitment, it is not a requirement for compliance.
Outlined below are several elements you may consider including in an accessibility statement. Once a statement has been prepared, it should be linked prominently on a website, preferably near the top of the site, where it will be easier to find for those who are navigating with assistive technology.
Statement of Commitment
A standard statement of commitment for both the website and the policy document can be created to guide an organization’s accessibility efforts. The following is an example of what a statement of commitment might look like:
At XYZ Company, we are committed to ensuring our goods and services are accessible to everyone and to removing barriers that may prevent some people from accessing these goods and services.
Statement of Compliance
If your website has been reviewed, either by an external auditor or an internal one, you might choose to include a statement that describes the level of compliance the website meets. A statement of compliance must include the date the site was judged to be compliant. Because websites tend to change over time, compliance can only be claimed for the date on which an audit was completed. A copy of the audit report might also be linked from the statement, but this is not required. A second element that must be included in the compliance statement is the specification or standard the site claims compliance with. In most cases, this will be WCAG 2.0. The final required element is the level of compliance: Level A, Level AA, or Level AAA (if WCAG is the specification being used).
A simple statement of compliance might look like the following:
On January 20, 2017, this website conformed with the Web Content Accessibility Guidelines 2.0 at Level AA.
A compliance statement may also include additional information about the scope of the claim. For example, the claim may only refer to a particular area of a website, in which case that portion should be described in the claim, such as “the publicly accessible areas of the site.” Or, a statement may only apply to parts of the website the organization has control over, and not apply to third-party web applications or services that may be used. A statement, then, might include omissions, such as “not including the shopping cart application.”
Known Accessibility Issues
It is not uncommon for an accessibility statement to acknowledge potential barriers that an organization is aware of and is perhaps working to resolve, or to refer to third-party tools or technology that may not be available in an accessible form. Though this statement should not be an excuse for using less-than-accessible tools or applications on a website, it can help alleviate complaints when an organization demonstrates its awareness and its plans to remedy barriers over time, or makes public its use of technology that has no accessible alternative available. One such example may be videoconferencing systems: though these are often required tools for communication, there is no current videoconferencing system that would comply with accessibility requirements.
An example of a known-issues statement might look like the following:
We are aware of a number of potential barriers in the Shopping Cart application that may prevent some users from purchasing products from our website. We are working with the vendor to address these issues, and we are looking at potential alternatives that may be implemented in the future. If you are experiencing difficulties using the shopping cart to make purchases, please contact our online support team at (111) 555-2134, who will be able to assist you with your purchase.
You may also include general contact information in the statement to allow site visitors to report any accessibility problems they may encounter.
Website Accessibility Features
Another element that might be included in an accessibility statement is a description of the accessibility features that have been implemented on a website. This can be helpful for users who need those features: they do not have to discover the features on their own, which reduces the effort of learning to use the site with assistive technology.
Some of these features may include:
• Keystrokes for direct keyboard access to features
• Use of captions and/or transcripts with multimedia
• Use of WAI-ARIA to create interactive elements
• Use of navigation elements such as landmarks, bypass links, and headings
• Instructions for using complex features, like a photo gallery or shopping cart application
• Descriptions of a site layout
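To illustrate how some of these listed features can be spotted programmatically, the sketch below scans a fragment of markup for a main landmark, headings, and a “skip”-style bypass link, using only Python’s standard library. The `LandmarkScan` class and sample markup are hypothetical; a real audit would test far more than this.

```python
from html.parser import HTMLParser

class LandmarkScan(HTMLParser):
    """Rough scan for a few navigation aids an accessibility statement
    might list: a <main> landmark, headings, and a bypass ("skip") link.
    Illustrative only."""
    def __init__(self):
        super().__init__()
        self.found = set()
        self._in_a = False
        self._a_text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "main":
            self.found.add("main landmark")
        elif tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.found.add("headings")
        elif tag == "a":
            self._in_a = True
            self._a_text = ""

    def handle_data(self, data):
        if self._in_a:
            self._a_text += data

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_a = False
            # A link whose text contains "skip" is treated as a bypass link
            if "skip" in self._a_text.lower():
                self.found.add("bypass link")

scan = LandmarkScan()
scan.feed('<a href="#content">Skip to main content</a>'
          '<main id="content"><h1>Products</h1></main>')
print(sorted(scan.found))  # → ['bypass link', 'headings', 'main landmark']
```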
Readings & References: Examples of accessibility statements:
5.07: Self-Test 6
1. How many steps does the WebAIM accessibility policy implementation model have?
1. 5
2. 6
3. 7
4. 8
5. 9
6. 10
2. When creating an accessibility statement, which of the following were mentioned as elements that might be included in the statement? Choose all that apply.
1. Known accessibility issues
2. Website accessibility features
3. Statement of commitment
4. Statement of compliance
5. The name of the website’s developer
An interactive or media element has been excluded from this version of the text. You can view it online here:
http://pressbooks.library.ryerson.ca/dabp/?p=744
There are a variety of circumstances where accessibility requirements may need to be explicitly stated, for example:
• Requests for proposals (RFP)
• Purchase contracts
• Purchasing procedures
• Design specifications
There may be other documents in an organization that also require accessibility statements or requirements, such as process documents or literature about the organization or its products. An organization’s accessibility committee members may be asked to gather relevant documents from their respective areas within the organization in order to produce a full list of relevant document-accessibility statements. Here, we will look at RFPs, and the request for accessibility information they should contain.
Request for Proposals (RFP)
General Accessibility Statement
The National Center on Disability and Access to Education (NCDAE) provides a number of examples of wording that could be included in RFPs for organizations in the U.S. (see Sample 1 below). We have provided an AODA-adapted version (see Sample 2 below). While statements such as these make the requirements relatively clear to a person knowledgeable in accessibility, they may not be explicit enough to elicit a good description of the product’s accessibility features. Such wording also uses absolute language such as “Applicants must state their level of compliance….” Given that many products may not fully comply, language such as this can be used as a starting point, but it should be supplemented with more specific requirements.
Sample 1: Contained in a Request for Proposal (Section 508)
NOTICE – All electronic and information technology (EIT) procured through this RFP must meet the applicable accessibility standards of 36 CFR 1194. 36 CFR 1194 implements Section 508 of the Rehabilitation Act of 1973, as amended, and is viewable at the following URL: www.section508.gov. The following Section 508 technical standards are applicable to this RFP, as a minimum: “Software Applications and Operating Systems (1194.21)”, “Web-based Intranet and Internet Information and Applications (1194.22)”, “Video or Multimedia Products (1194.24) C.4”. Applicants must state their level of compliance to applicable sections to be considered for purchase under this RFP.
Sample 2: Contained in a Request for Proposal (AODA)
NOTICE – All information and communication technology (ICT) procured through this RFP must meet the accessibility standards of the Integrated Accessibility Standards Regulation (IASR), O. Reg. 191/11, s. 14 and O. Reg. 191/11, s. 15. Regulation 191/11, s. 14 implements the Information and Communications Standards of the Accessibility for Ontarians with Disabilities Act (AODA) with regard to accessible websites and web content, and Regulation 191/11, s. 15 with regard to educational and training resources and materials, viewable at the following URL: http://www.ontario.ca/laws/regulation/110191#BK15. The associated technical standards for these regulations are specified in the W3C’s Web Content Accessibility Guidelines 2.0, viewable at the following URL: http://www.w3.org/TR/WCAG20/. Applicants must state their level of compliance to applicable sections to be considered for purchase under this RFP.
Specific Accessibility Requirements
Consider developing a specific list of accessibility requirements for vendors, using the WCAG 2.0 10 Key Guidelines introduced in Unit 2 as a starting point, as well as any other requirements your organization may deem necessary.
With this strategy in mind, we have created a sample checklist for vendors. The checklist is structured so that vendors can easily indicate their level of compliance with various requirements, and also provide explanations for the indicated state of compliance. These explanations are particularly important for items identified as “partially compliant.”
Toolkit: Download the Sample Vendor Accessibility Compliance Checklist [PDF] and add it to your Toolkit. Depending on the type of product being procured, not all of the 10 Key Guidelines may be relevant. Adjust the list accordingly to specify only those features relevant to the product type being procured.
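To show how checklist responses might be tallied once vendors return them, here is a small sketch. The guideline names, status labels, and responses are invented for illustration and are not taken from the sample checklist itself.

```python
from collections import Counter

# Hypothetical vendor responses to a 10 Key Guidelines-style checklist.
# Guideline names and status labels are illustrative only.
responses = {
    "Text alternatives for images": "compliant",
    "Captions for multimedia": "partially compliant",
    "Keyboard operability": "compliant",
    "Sufficient colour contrast": "non-compliant",
    "Descriptive link text": "partially compliant",
}

summary = Counter(responses.values())
print(dict(summary))
# → {'compliant': 2, 'partially compliant': 2, 'non-compliant': 1}

# Anything short of "compliant" should come with a written explanation
needs_explanation = [g for g, s in responses.items() if s != "compliant"]
for g in needs_explanation:
    print("Ask the vendor for details:", g)
```

A tally like this makes it easy to compare several vendors at a glance, while the `needs_explanation` list flags exactly which items require the written explanations the checklist asks for.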
5.09: Assessing Vendor Knowledge of Accessibility
Assume that a vendor’s proposal is accepted for review and the vendor is asked to provide a demonstration. At this point you can move into more particular, perhaps technical, questions about specific accessibility features of their product.
Assessing a Vendor’s Website
It is often easy to get a sense of a vendor’s knowledge and commitment to accessibility by simply looking through their website.
• Sample a few pages and run them through an automated accessibility checker and HTML validator. How well do they do? The results will be a good indication of the quality and accessibility of work the company does.
• Does the vendor’s website have a prominent accessibility statement? Though not a requirement, if they do have one, it’s a good indication the company cares about accessibility.
• Does that statement, if there is one, have a compliance claim and are accessibility features on the site listed? If the accessibility features are listed, it likely means they are thinking about people with disabilities who are visiting their site, which is one step above thinking about accessibility in general.
• If there is a demo of the software you intend to procure, sample a few screens for checker and validator testing. How well do they do? This may help you decide whether the software has the potential to meet your organization’s accessibility requirements.
• If an RFP is being issued, you may want to mention to vendors that their website may be reviewed for accessibility. Or, you may want to review websites prior to an RFP, to aid with shortlisting vendors to approach.
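As a taste of what an automated checker actually does, the sketch below flags `<img>` elements with no `alt` attribute, one of the simplest WCAG checks, using only Python’s standard library. Real checkers test far more than this; the `AltTextChecker` class and the sample markup are illustrative.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute entirely -- one
    simple, automatable WCAG check (1.1.1 Non-text Content). An empty
    alt="" is allowed, since it marks an image as decorative."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.missing_alt.append(attr_map.get("src", "(no src)"))

# Sample markup standing in for a fetched vendor page
page = """
<main>
  <img src="logo.png" alt="Vendor logo">
  <img src="banner.jpg">
  <img src="chart.svg" alt="">
</main>
"""

checker = AltTextChecker()
checker.feed(page)
print(checker.missing_alt)  # → ['banner.jpg']
```

Running a handful of such checks across a few sampled pages gives a quick, repeatable signal to weigh alongside the vendor’s written claims.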
5.10: Self-Test 7
1. When vendors are describing the level of accessibility their product complies with, they should mention Level AAA.
1. True
2. False
2. If a vendor mentions a few known accessibility issues in their accessibility statement for a product, purchasing or licensing the product should be avoided.
1. True
2. False
An interactive or media element has been excluded from this version of the text. You can view it online here:
http://pressbooks.library.ryerson.ca/dabp/?p=753
During a product demonstration, specific and technical questions should be asked. It is a good idea to have the responses from the RFP accessibility requirements in hand so they can be clarified.
In addition to clarifications in the proposal’s accessibility responses, here are some additional questions that can be asked to assess a vendor’s level of accessibility understanding and their willingness to address accessibility issues in their software to meet your organization’s requirements.
Toolkit: See Accessibility Questions for Product Demonstrations for a collection of questions that can be asked while interviewing vendors.
Here is the list of 10 possible questions and expected answers:
1. Which accessibility standards does your product comply with? At which Level (if WCAG)?
Depending on the jurisdiction of the vendor, a local accessibility standard should be mentioned (e.g., Section 508 in the U.S.; AODA in Ontario), or mention WCAG. Added points if the vendor is from a different jurisdiction, and mentions the requirements of your jurisdiction, if different from their own.
At a minimum, where WCAG is the vendor’s guideline of choice, Level A should be mentioned, or talk about what remains to be addressed to meet Level A. If Level AA is mentioned, added points could be given. If Level AAA is mentioned, that is a warning sign, since very few if any complex systems will meet Level AAA requirements. Ask what AAA features have been implemented in the system.
2. Is your product accessible without a mouse? Please demonstrate.
The answer should always be “Yes.” The vendor should be able to demonstrate by using the Tab key to navigate through the user interface (UI). All functional elements (menus, links, buttons, forms, and so on) should be able to receive focus, and users should be able to navigate through menus and open elements using only the keyboard. Watch out for elements that can receive focus but do not operate with a keypress.
3. Has your product been tested with assistive technologies? If so, with which ones?
Should answer “Yes.”
Should mention a screen reader at a minimum. Should be able to identify which screen readers, such as ChromeVox, JAWS, NVDA, etc. Added points if they also mention VoiceOver and/or TalkBack for mobile devices.
4. Who did the testing?
The developers of the software should be mentioned; screen reader testing should be part of the development process. Or, a particular accessibility person within the organization may be the tester. Added points if people with disabilities or a third-party accessibility expert took part in the testing. If a third-party tester was used, is it reputable?
5. What was the testing methodology? What were the results?
Should mention a combination of automated and manual testing strategies, and screen reader testing. Added points if testing with people with disabilities is also part of their testing process.
Ideally mention the level of conformance reached, as well as acknowledging any issues that may remain, and what they plan to do to address those issues. Answering “fully accessible” is a warning sign. Very few systems will be fully accessible to everyone.
6. How is accessibility built into your company’s quality assurance (QA) process?
Should talk about the development process at a minimum, and where accessibility design and testing tasks fit into the process. Added points if the vendor goes into detail about the Web accessibility policy implemented at the company.
7. If you roll out upgrades after we purchase the product, how can you assure us the upgrades will not break accessibility?
Should refer back to the QA process, with local upgrade testing before pushing updates to production environments. Added points if a third-party accessibility expert is involved.
8. Does your product make use of WAI-ARIA? If it does, how so?
Where a product has a fairly complex, interactive UI, the answer should be “Yes.” This indicates the vendor understands the complex accessibility issues associated with custom-built web interactivity.
May mention using ARIA landmarks. Added points if specific ARIA attributes are mentioned for particular types of interactions, for example, menu-related ARIA for complex menus, tab panel–related ARIA for tab panel presentations, and so on. The vendor might also mention libraries used to implement ARIA, like jQuery or MooTools, or perhaps a custom ARIA library of their own.
9. Does your product adapt responsively to different screen sizes? Please demonstrate.
Should answer “Yes.” Should be able to grab the corner of a browser window and drag it inward to reduce the window size, and the content should adapt cleanly as the window size increases and decreases. Should also be able to demonstrate the product on a mobile device like a smartphone or tablet, and have the UI adapt to the device’s screen size.
10. Does your product magnify cleanly using just a browser’s zoom feature? Please demonstrate.
Should answer “Yes.” Should be able to use the browser’s zoom function to increase the size of the content to at least 200% without the content flowing off the side of the screen or overlapping with adjacent content. Added points for zoom sizes greater than 200%. Good zoom adaptation indicates relative measures (em, %) have been used to size elements rather than absolute measures (px, pt), which is also a requirement for good responsive designs. | textbooks/biz/Business/Advanced_Business/Digital_Accessibility_as_a_Business_Practice/05%3A_Procurement_and_Accessibility_Policy/5.11%3A_Accessibility_Questions_During_Product_Demonstrations.txt |
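The point in question 10 about relative versus absolute measures can even be spot-checked in a stylesheet. The rough sketch below flags `font-size` declarations that use `px` or `pt`; the CSS sample is invented, and real testing means zooming the product in an actual browser.

```python
import re

# Very rough sketch: flag CSS font-size declarations using absolute
# units (px, pt), which tend to zoom and reflow poorly, versus relative
# units (em, rem, %). The stylesheet below is invented for illustration.
css = """
body { font-size: 16px; }
h1 { font-size: 2em; }
.sidebar { width: 25%; }
.note { font-size: 10pt; }
"""

absolute = re.findall(r"font-size:\s*[\d.]+(?:px|pt)", css)
print(absolute)  # → ['font-size: 16px', 'font-size: 10pt']
```

A scan like this is only a heuristic, but asking a vendor about the hits it produces can quickly reveal how seriously they have thought about zoom and responsive adaptation.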
A Voluntary Product Accessibility Template, or VPAT as it is commonly known, is a checklist that companies who supply the U.S. government can fill out to document the accessibility of their products. Because a VPAT is completed by the vendor, it tends to present the product in a better light than may actually be the case. As a result, any time you are presented with a VPAT, it is important that a person knowledgeable in web accessibility critically reviews the document.
Toolkit: Download the VPAT 2.1 [DOC] form and add a bookmark/favourite to your Tool Kit.
Readings & References: VPAT 2.1 is now available.
The VPAT 2.1 Web and software accessibility requirements are based on WCAG 2.0. VPAT 2.1 also encompasses the European Union’s EN 301 549 accessibility requirements, and the Revised Section 508 requirements of January 2017.
5.13: Getting a Second Opinion
Even expertly created audits or assessments can benefit from a second opinion. Not all auditors approach accessibility requirements the same way. Some take a stricter approach, following guidelines and techniques stringently. Others take a more practical approach, considering variables such as budget, human resources, and adaptive technology support for particular techniques. Those with a practical approach come up with solutions that best fit the circumstances, perhaps forgoing some strict compliance rules in favour of feasibility.
Web Accessibility Auditing Services
There are a growing number of companies that provide professional web accessibility auditing services. Not all of these companies are reputable, and some may lack an expert understanding of web accessibility. Some of the same strategies used to evaluate the accessibility knowledge of a vendor can also be used to evaluate the knowledge of a potential auditor.
If you would like a third-party accessibility auditing service to evaluate a product your organization is intending to procure, here are a few things you should look for:
1. Are the auditors web developers? Many issues that present barriers are the result of using HTML, WAI-ARIA, or JavaScript incorrectly. A web developer (or a person with a strong understanding of these technologies) can accurately identify the origins of more complex issues and offer effective solutions.
2. Does the service provide tools? The more reputable services develop their own tools, such as automated checkers, contrast evaluators, and browser plugins, that the average user can use to test accessibility for themselves. A good collection of tools is a strong indicator that the service knows what it is doing. Tools may also indicate that developers are on staff.
3. How long has the service been in business? If you can’t find any indication of how long the service has been in business, be wary. If a service has been around for a while (over five years), that’s a good sign.
4. Is there a sample audit you can examine? You can tell a lot about the skills of auditors by the reports they produce. Ask for one if you can’t find one on the Web. If you are unable to get a sample, be wary.
5. Is the audit methodology posted publicly? Auditing services that know what they are doing will post their methods for everyone to see.
6. Are there a variety of services to choose from? The more reputable services will include a variety of audit options, training for different audiences, website accessibility monitoring, and other services that approach web accessibility from many angles.
7. Is the service’s website accessible? Reputable services will lead by example. Their websites will be spotless from an accessibility perspective and the HTML of the site should validate.
Third-Party Web Accessibility Auditing
It may be beneficial for both the vendor and the procuring organization to have a third-party accessibility auditing service brought in to provide an unbiased review of the software being acquired. This requirement may be part of a contractual agreement that requires confirmation of compliance with a given standard from an expert working at arm’s length from the two parties. It provides a level of protection for both parties, providing an objective account of a software’s state of accessibility that both parties can refer to if disagreement should arise.
A Few Reputable Accessibility Auditing Services
Here are a few web accessibility auditing services known to be reputable:
Assuming you have received proposals from vendors, tested their understanding of web accessibility, and are ready to move into a contractual arrangement to license or purchase the software or web application from one of them, it is important to lay out web accessibility requirements in the issued contract.
The following are two approaches to defining accessibility requirements: the first is a more general approach, relying on a specific standard to which vendors can refer, and the second is a more detailed approach, which relies on a standard but also supplies specific requirements that must be met.
Section 508 Wording
The National Center on Disability and Access to Education (NCDAE) provides some standard text for contracts that makes accessibility a requirement of the agreement. This text will be relevant for those procuring under the Section 508 regulations:
Sample 1: Purchasing contracts of specific products
Vendors must ensure that the course management system contained in the proposal fully conforms with Section 508 of the Rehabilitation Act of 1973, as amended in 1998. (For information on Section 508, see www.section508.gov.) This includes both the student and instructor views and also includes all interaction tools (e.g., chats, discussion forums), and add-ons (e.g., grade functions). Vendors must declare if any portion of the version under consideration does not fully conform to Section 508, and the ways in which the proposed product is out of compliance.
While this approach may be sufficient in some cases, where it has been confirmed that the vendor understands the requirements of Section 508, there will be cases when vendors don’t know if their product conforms or not. To reduce that likelihood, details of the Section 508 requirements could be specified, something like the requirements outlined in the “WCAG 2.0 Wording” section below.
WCAG 2.0 Wording
Like the wording for RFPs introduced earlier in this unit, it is important for contracts to provide specific details in order to avoid potential confusion for vendors who may not be fully aware of WCAG 2.0 requirements. The following is an adapted version of the Section 508 wording above, with suggested wording that specifies an agreed upon course of action should accessibility issues be discovered after the contract is implemented, along with the procuring organization’s specific accessibility requirements. Though WCAG 2.0 is specified, this language would be appropriate for those drafting AODA-related web accessibility requirements.
Sample 2: Purchasing contracts of specific products

Vendors must ensure that the course management system contained in the proposal fully conforms with the Web Content Accessibility Guidelines (WCAG 2.0), Level AA, as published by the Web Accessibility Initiative (WAI) of the W3C, summarized in the list below (see WCAG 2.0 for more information). This includes both the student and instructor views and also includes all interaction tools (e.g., chats, discussion forums), and add-ons (e.g., grade functions). Vendors must declare if any portion of the version under consideration does not fully conform to WCAG 2.0 Level AA, and describe the ways in which the proposed product is out of compliance.

Vendors agree that their product will continue to conform with these requirements, and in the event potential violations are discovered, will arrange to have such issues resolved unless otherwise agreed upon exceptions are stated in writing.
G 1.1.1 (Level A)
All meaningful images in the User Interface (UI) include a text alternative, in the form of alt text, that accurately describes the meaning or function associated with the image.
G 1.2.2 (Level A)
Where video is provided, either in the product itself, or in the associated documentation, meaningful spoken dialog in the video includes closed captions produced by a person rather than an automated service.
G 1.3.1 (Level A)
Content is structured using proper HTML section headings (e.g., h1, h2) and proper lists (i.e., ol, ul, li) rather than formatting that just creates the appearance of a heading or list.
G 1.3.2 (Level A)
When navigating through elements of the UI and content using only the Tab key, the cursor takes a logical path, from left to right and from top to bottom.
G 1.4.1 (Level A)
When colour is used in a meaningful way, some method other than colour is used to represent that same meaning.
G 1.4.3 (Level AA)
When text is presented over a coloured background, a minimum contrast between the two of 4.5:1 for standard sized text, and 3:1 for larger text, is provided.
G 2.1.1 (Level A)
All elements in the UI or content that function with a mouse click, also function using only a keyboard.
G 2.4.1 (Level A)
Means are provided that allow assistive technology users, or keyboard only users, to skip past repetitive elements such as menus and navigation bars, using either bypass links, or WAI-ARIA landmarks.
G 2.4.4 (Level A)
All link text is meaningful either on its own, or within the context of other adjacent links, accurately describing the destination or function of the link.
G 3.3.1 (Level A)
Error messages are presented in a way that can be consumed by assistive technology without requiring the user to search through the content to find them, either presented consistently in one place on the page or using an ARIA alert role.
G 3.3.2 (Level A)
Forms are formatted in a way that explicitly associates labels with input fields, and sufficient instructions are provided to describe the expected input in each field.
G 2.4.7 (Level AA)
When navigating through the UI using the tab key only, the focus position is easily followed visually through elements on the screen.
G 4.1.1 (Level A)
The markup of the UI complies with HTML5 specifications. (Exception: Meets this requirement to the extent that markup violations do not introduce barriers that affect access for assistive technologies.)
Vendor Declaration of Non-Compliance

[Vendor declares here any aspects of the product that are known to be non-compliant with WCAG 2.0 Level AA requirements, to be acknowledged by the procuring organization and either agreed upon as exceptions, or to be resolved within a particular time period following the implementation of the contract.]
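The 4.5:1 and 3:1 thresholds named in G 1.4.3 above are derived from the WCAG 2.0 relative-luminance formula, so a procuring organization can verify a vendor’s colour choices directly rather than taking a compliance claim at face value. The following is a minimal sketch in Python; the colour values tested are illustrative only:

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB colour given as 0-255 ints, per WCAG 2.0."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours, ranging from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg, bg, large_text=False):
    """True if the pair meets G 1.4.3: 4.5:1 normal text, 3:1 large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0: black on white
print(passes_aa((119, 119, 119), (255, 255, 255)))  # False: mid-grey on white falls just short
print(passes_aa((119, 119, 119), (255, 255, 255), large_text=True))  # True: passes the 3:1 large-text bar
```

The 0.05 offset in the ratio keeps the result bounded at 21:1 for pure black on pure white; sorting the two luminances means the function works regardless of which colour is passed as foreground.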
Now that you have a contractual agreement in place with a particular vendor, you will want to maintain a working relationship with them through which any accessibility issues that may be introduced into the product, either through software updates or through issues discovered after the contract is in place, can be resolved with minimal aggravation.
Fortunately, with the introduction of accessibility laws in many jurisdictions, vendors and suppliers are becoming more aware of accessibility issues and recognize that improving the accessibility of their products is good for business. As such, vendors are often receptive to accessibility improvements to their products. Your organization can potentially provide free user testing through feedback from your own employees or clients.
Talking About Web Accessibility with Vendors
How you approach your discussion of accessibility with vendors will vary depending on the level of awareness the vendor already has. You should have an idea of the vendor’s understanding through the assessments you have already done, such as reviewing the company’s website and, if you are at the proposal stage, the responses to the requirements laid out in the RFP.
You will generally want to start the accessibility discussion right at the beginning of the procurement process. If there is work to be done to improve accessibility, the best approach is to address it from the start, rather than having to go back and retrofit after other details of a potential contract have been worked through.
There are several approaches to the accessibility discussion with vendors, including:
• Putting the onus on your company, the procuring organization, and its responsibility to provide accessible products and service (We can’t buy from you if you are not accessible)
• Stating the business case, selling the idea that accessibility is good for the vendor through increased sales and improved efficiency (Your revenues will increase, and/or your costs will decrease)
• Stating the legal case, reducing the likelihood of legal action for discriminating against people with disabilities (You are less likely to be sued)
• Making the social-conscience case or positioning the vendor as a company that should demonstrate corporate responsibility. (It’s the right thing to do)
The above may all sound familiar if you think back to the business cases introduced earlier. There are a number of other talking points you can use to build a vendor’s accessibility awareness:
• Accessibility is practice, not a touch-up. Addressing accessibility from the start requires little extra effort once you know what you’re doing. Retrofitting can be difficult, expensive, or even impossible.
• Standards are clear, and adopted all over the world. Companies are adopting standards such as WCAG as a business advantage. Many buyers have accessibility at the top of their procurement requirements, and will often skip over suppliers that are not addressing it.
• Show your accessibility. For those in the U.S., not having a VPAT will likely affect your bottom line, as purchasers may bypass your company if they can’t find one. In other areas of the world, providing an “accessibility statement” demonstrates to purchasers that accessibility is a priority, and a well-thought-out statement is more likely to cause purchasers to look at your company more closely.
• We can help you become accessible. If your organization knows what it needs in terms of accessibility, share that knowledge with potential partner vendors and educate them as part of the process of improving the accessibility of their products so they match the needs of your organization.
5.16: Activity- Critique Accessibility Claims
Though specific to the U.S., the Voluntary Product Accessibility Template (VPAT) is a checklist that companies who supply the U.S. government can fill out to document the accessibility of their products. Because a VPAT is completed by the vendor, or a representative of the vendor, they have a tendency to present products in a better light than what may actually be the case. As a result, any time you are presented with a VPAT, or similar accessibility claim, it is important that a person knowledgeable in digital accessibility critically review the document.
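Because VPAT claims are self-reported, even a rough automated spot check can surface contradictions before a full human review. As one minimal sketch, the script below uses Python’s standard-library HTML parser to flag images that lack alt text, one of the most commonly overstated criteria; the sample markup is hypothetical, and a real audit would cover far more than this single check:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collects <img> tags that have no alt attribute at all.

    Note: alt="" is deliberately allowed, since an empty alt attribute is
    the correct way to mark a purely decorative image.
    """
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "(no src)"))

# Hypothetical markup standing in for a page from a vendor's product
sample = """
<img src="logo.png" alt="Acme Corp home">
<img src="divider.gif" alt="">
<img src="chart.png">
"""

auditor = AltTextAuditor()
auditor.feed(sample)
print(auditor.missing_alt)  # ['chart.png']
```

A passing result here says nothing about whether the alt text that is present is meaningful; that judgment still requires a knowledgeable reviewer, which is exactly the point of this activity.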
Toolkit: Download the VPAT 2.1 form [DOC] and add it to your Toolkit.
Review the Canvas and Blackboard VPATs provided below and write up a critique of each. Note that these are the older VPAT forms, which are based on the requirements of the old Section 508.
Questions you may want to answer include, but are not limited to:
• Is there missing information or additional information that could have been provided?
• Is there overly complex language used that might confuse a reader with limited accessibility knowledge?
• Are there statements that would cause you to question the validity of the remarks?
• Are there any statements acknowledging known issues?
• Are there explanations that do not appear relevant to the criteria?
• On a scale of 1 to 10, how well do you think the remarks address the given criteria?
Comparing the two VPATs, which would you be more likely to believe is accurate, and why do you think this?
5.17: Procurement and Accessibility Policy- Takeaways
In this unit, you learned that:
• To be successful, an effective web accessibility policy should be rooted within the business culture following the WebAIM eight-step process.
• A web accessibility policy should include procurement practices for both IT and non-IT related goods and services.
• Vendors should be able to verify and validate the accessibility compliance of their products and services.
In this unit, you learned the following points about hiring accessibility staff:
• Companies are missing out on a significant talent pool of highly educated and skilled workers when they exclude people with disabilities in their hiring practices.
• Few formal technical training programs focus on developing accessible web content, so most specialists in the field are self-taught. These specialists will share some common knowledge as well as informal, personal skill sets related to accessible content.
06: Hiring Accessibility Staff
This final unit will expand on the discussion of policy introduced in Unit 5. One of the key elements in implementing a web accessibility policy is hiring staff who possess accessibility knowledge and ensuring they know how accessibility fits into their roles.
Hiring staff explicitly to implement and manage accessibility efforts is perhaps the most important indicator that an organization is committed to developing and maintaining products and services that are inclusive. There are two particular roles that we will examine here: the Web Developer and the Web/IT Accessibility Specialist. These two roles are generally responsible for the bulk of compliance efforts undertaken in an organization.
Despite our focus on these two roles in particular, it is important to remember that accessibility can and should be a part of a company’s regular business practices, with staff at all levels contributing to the maintenance of an organization’s accessibility status.
A YouTube element has been excluded from this version of the text. You can view it online here: http://pressbooks.library.ryerson.ca/dabp/?p=770
© TEDx Talks. Released under the terms of a Standard YouTube License. All rights reserved.
6.02: Objectives and Activities
Objectives
By the end of this unit, you should be able to:
• Identify accessibility knowledge and skills needed across organizational roles.
• Develop an organizational rationale for hiring people with disabilities.
• State accessibility skills required by web developers.
• Identify job descriptions for hiring accessibility professionals.
Activities
• Find an accessibility professional job description.
• Complete the Sharp Clothing Company digital accessibility policy.
6.03: Hiring Knowledgeable Staff
It is clear to you that web developers need to have a good understanding of accessibility, as they will be responsible for much of the company’s digital accessibility. But, you also want to understand what knowledge and skills other staff should have, so you can ask the appropriate accessibility questions during job interviews with potential candidates.
Roles and Accessibility Responsibilities
For most positions an organization may be hiring for, digital accessibility knowledge and skills need not be a requirement, though having them should add points to a candidate’s overall score. For most roles, a little training will provide the needed details of their particular accessibility requirements.
Of the various roles that could be found in an organization, it is the web developers who will need to be most familiar with accessibility requirements and the strategies to meet those requirements. Knowledge and skills for web developers will be covered separately later in this unit.
The following lists in general terms the skills and knowledge each role in an organization should possess or be trained in, starting with knowledge everyone should possess and followed by additional specific skills for particular roles:
Everyone
• Disability sensitivity
• Organization requirements (high level, legislated obligations)
Senior managers
• Organization requirements (details of legislated obligations)
• Experience with change-management projects
Store managers
• Customer-service accessibility
Sales staff
• Customer-service accessibility
Office staff
• Document accessibility
• Basic web accessibility
Human resource staff
• Role-based accessibility knowledge and skills
• Accessible employment practices and local accessibility regulations
• Document accessibility
• Knowledge of training, change management
• Knowledge of accommodations for people with disabilities
• Knowledge of the organization’s accessibility efforts
Communication and marketing
• Document accessibility
• Multimedia accessibility
• Basic web accessibility
Purchasers
• Organization requirements (procurement)
• Basic web accessibility
Telephone support staff
• Customer service accessibility
UI designer
• Universal design principles
• Basic web accessibility
Web content authors
• Basic web accessibility
Media support staff
• Basic web accessibility
• Multimedia accessibility
Distribution centre staff
• Minimal
Cleaning and maintenance
• Minimal
Readings & Resources: Accessibility Job Descriptions