Assignment 2


book attached

4-5 pages



Three-part assignment:


The owner of a fast-food franchise has exclusive rights to operate in a medium-sized metropolitan area. The owner currently has a single outlet open, which has proved to be very popular, and there are often waiting lines of customers. The owner is therefore considering opening one or more outlets in the area. 

1) What are the key factors that the owner should investigate before making a final decision? 

2) What trade-offs would there be in opening one additional site versus opening several additional sites?

(Stevenson, 2018, p. 367)

This section should be approximately 2 pages in length.


1) Briefly explain the purpose of each of these control charts:


     B. Range



2) Classify each of the following as either a Type I error or a Type II error.

     A – Putting an innocent person in jail 

     B – Releasing a guilty person from jail

     C – Eating (or not eating) a cookie that fell on the floor

     D – Not seeing a doctor as soon as possible after ingesting poison

(Stevenson, 2018, p. 454)

This section should be approximately 1 page in length.


1) Reflect on your life, personal or professional, and provide an example or examples of independent and dependent demand. Be sure to compare and contrast independent and dependent demand.

2) Briefly describe MRP and ERP.

This section should be approximately 1–2 pages in length.

*Ensure you follow writing standards. (Make sure to include a cover page. A running head and abstract are NOT required.)

**A total of three references is required, including your textbook.

Please name your file: last name_first name_MGT5203.E1_#2

Example: Reagan_Matthew_MGT5203.E1_#2



Published by McGraw-Hill Education, 2 Penn Plaza, New York, NY 10121. Copyright © 2021 by McGraw-Hill Education. All rights reserved. Printed in the United States of America. No part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written consent of McGraw-Hill Education, including, but not limited to, in any network or other electronic storage or transmission, or broadcast for distance learning.

Some ancillaries, including electronic and print components, may not be available to customers outside the United States.

This book is printed on acid-free paper.

1 2 3 4 5 6 7 8 9 LWI 24 23 22 21 20

ISBN 978-1260-57571-2

MHID 1-260-57571-3

Cover Image:
Daniel Prudek/Shutterstock










All credits appearing on page or at the end of the book are considered to be an extension of the copyright page.

The internet addresses listed in the text were accurate at the time of publication. The inclusion of a website does not indicate an endorsement by the authors or McGraw-Hill Education, and McGraw-Hill Education does not guarantee the accuracy of the information presented at these sites.


Supply Chain Management


Purchasing and Supply Chain Management

Third Edition

Bowersox, Closs, Cooper, and Bowersox

Supply Chain Logistics Management

Fifth Edition

Burt, Petcavage, and Pinkerton

Supply Management

Eighth Edition


Purchasing and Supply Management

Sixteenth Edition

Simchi-Levi, Kaminsky, and Simchi-Levi

Designing and Managing the Supply Chain: Concepts, Strategies, Case Studies

Third Edition

Stock and Manrodt

Supply Chain Management

Project Management

Brown and Hyer

Managing Projects: A Team-Based Approach


Project Management: The Managerial Process

Eighth Edition

Service Operations Management

Bordoloi, Fitzsimmons, and Fitzsimmons

Service Management: Operations, Strategy, Information Technology

Ninth Edition

Management Science

Hillier and Hillier

Introduction to Management Science: A Modeling and Case Studies Approach with Spreadsheets

Sixth Edition

Business Research Methods


Business Research Methods

Thirteenth Edition

Business Forecasting

Keating and Wilson

Forecasting and Predictive Analytics

Seventh Edition

Business Systems Dynamics


Business Dynamics: Systems Thinking and Modeling for a Complex World

Operations Management

Cachon and Terwiesch

Operations Management

Second Edition

Cachon and Terwiesch

Matching Supply with Demand: An Introduction to Operations Management

Fourth Edition

Jacobs and Chase

Operations and Supply Management: The Core

Fifth Edition

Jacobs and Chase

Operations and Supply


Sixteenth Edition

Schroeder and Goldstein

Operations Management: Contemporary Concepts and Cases

Eighth Edition


Operations Management

Fourteenth Edition

Swink, Melnyk, and Hartley

Managing Operations Across the Supply Chain

Fourth Edition

Business Statistics

Bowerman, Drougas, Duckworth, Froelich, Hummel, Moninger, and Schur

Business Statistics and Analytics in Practice

Ninth Edition

Doane and Seward

Applied Statistics in Business and Economics

Sixth Edition

Doane and Seward

Essential Statistics in Business and Economics

Third Edition

Lind, Marchal, and Wathen

Basic Statistics for Business and Economics

Ninth Edition

Lind, Marchal, and Wathen

Statistical Techniques in Business and Economics

Eighteenth Edition

Jaggia and Kelly

Business Statistics: Communicating with Numbers

Third Edition

Jaggia and Kelly

Essentials of Business Statistics: Communicating with Numbers

Second Edition


Connect Master: Business Statistics

Business Analytics

Jaggia, Kelly, Lertwachara, and Chen

Business Analytics: Communicating with Numbers



The material in this book is intended as an introduction to the field of operations management. The topics covered include both strategic issues and practical applications. Among the topics are forecasting, product and service design, capacity planning, management of quality and quality control, inventory management, scheduling, supply chain management, and project management.

My purpose in revising this book continues to be to provide a clear presentation of the concepts, tools, and applications of the field of operations management. Operations management is evolving and growing, and I have found updating and integrating new material to be both rewarding and challenging, given the plethora of new developments in the field and the practical limits on the length of the book.

This text offers comprehensive and flexible content that can be selected as appropriate for different courses and formats, including undergraduate, graduate, and executive education.

This allows instructors to select the chapters, or portions of chapters, that are most relevant for their purposes. That flexibility also extends to the choice of relative weighting of the qualitative or quantitative aspects of the material, and the order in which chapters are covered, because chapters do not depend on sequence. For example, some instructors cover project management early, others cover quality or lean early, and so on.

As in previous editions, there are major pedagogical features designed to help students learn and understand the material. This section describes the key features of the book, the chapter elements, the supplements that are available for teaching the course, highlights of the fourteenth edition, and suggested applications for classroom instruction. By providing this support, it is our hope that instructors and students will have the tools to make this learning experience a rewarding one.

What’s New in This Edition

In many places, content has been rewritten or added to improve clarity, shorten wording, or update information. New material has been added on supply chains and other topics. Some problems are new, and others have been revised. Many new readings and new photos have been added.

Some of the class preparation exercises have been revised. The purpose of these exercises is to introduce students to the subject matter before class in order to enhance classroom learning. They have proved to be very popular with students, both as an introduction to new material and for study purposes. These exercises are available in the Instructor’s Resource Manual. Special thanks to Linda Brooks for her help in developing the exercises.


I want to thank the many contributors to this edition. Reviewers and adopters of the text have provided a “continuously improving” wealth of ideas and suggestions. It is encouraging to me as an author. I hope all reviewers and readers will know their suggestions were valuable, were carefully considered, and are sincerely appreciated. The list includes post-publication reviewers.

Jenyi Chen

Cleveland State University

Eric Cosnoski

Lehigh University

Mark Gershon

Temple University

Narges Kasiri

Ithaca College

Nancy Lambe

University of South Alabama

Anita Lee-Post

University of Kentucky

Behnam Nakhai

Millersville University of Pennsylvania

Rosa Oppenheim

Rutgers Business School

Marilyn Preston

Indiana University Southeast

Avanti Sethi

University of Texas at Dallas

John T. Simon

Governors State University

Lisa Spencer

California State University, Fresno

Nabil Tamimi

University of Scranton

Oya Tukel

Cleveland State University

Theresa Wells

University of Wisconsin-Eau Claire

Heath Wilken

University of Northern Iowa

Additional thanks to the instructors who have contributed extra material for this edition, including accuracy checkers: Ronny Richardson, Kennesaw State University and Gary Black, University of Southern Indiana; Solutions and SmartBook: Tracie Lee, Idaho State University; PowerPoint Presentations: Avanti Sethi, University of Texas-Dallas; Test Bank: Leslie Sukup, Ferris State University.

Special thanks goes out to Lisa Spencer, California State University-Fresno, for her help with additional readings and examples.


Finally, I would like to thank all the people at McGraw-Hill for their efforts and support. It is always a pleasure to work with such a professional and competent group of people. Special thanks go to Noelle Bathurst, Portfolio Manager; Michele Janicek, Lead Product Developer; Fran Simon and Katie Ward, Product Developers; Jamie Koch, Assessment Content Project Manager; Sandy Ludovissy, Buyer; Matt Diamond, Designer; Jacob Sullivan, Content Licensing Specialist; Harper Christopher, Executive Marketing Manager; and many others who worked behind the scenes.

I would also like to thank the many reviewers of previous editions for their contributions: Vikas Agrawal, Fayetteville State University; Bahram Alidaee, University of Mississippi; Ardavan Asef-Faziri, California State University at Northridge; Prabir Bagchi, George Washington State University; Gordon F. Bagot, California State University at Los Angeles; Ravi Behara, Florida Atlantic University; Michael Bendixen, Nova Southeastern; Ednilson Bernardes, Georgia Southern University; Prashanth N. Bharadwaj, Indiana University of Pennsylvania; Greg Bier, University of Missouri at Columbia; Joseph Biggs, Cal Poly State University; Kimball Bullington, Middle Tennessee State University; Alan Cannon, University of Texas at Arlington; Injazz Chen, Cleveland State University; Alan Chow, University of Southern Alabama at Mobile; Chrwan-Jyh, Oklahoma State University; Chen Chung, University of Kentucky; Robert Clark, Stony Brook University; Loretta Cochran, Arkansas Tech University; Lewis Coopersmith, Rider University; Richard Crandall, Appalachian State University; Dinesh Dave, Appalachian State University; Scott Dellana, East Carolina University; Kathy Dhanda, DePaul University; Xin Ding, University of Utah; Ellen Dumond, California State University at Fullerton; Richard Ehrhardt, University of North Carolina at Greensboro; Kurt Engemann, Iona College; Diane Ervin, DeVry University; Farzaneh Fazel, Illinois State University; Wanda Fennell, University of Mississippi at Hattiesburg; Joy Field, Boston College; Warren Fisher, Stephen F. Austin State University; Lillian Fok, University of New Orleans; Charles Foley, Columbus State Community College; Matthew W. Ford, Northern Kentucky University; Phillip C. Fry, Boise State University; Charles A. 
Gates Jr., Aurora University; Tom Gattiker, Boise State University; Damodar Golhar, Western Michigan University; Robert Graham, Jacksonville State University; Angappa Gunasekaran, University of Massachusetts at Dartmouth; Haresh Gurnani, University of Miami; Terry Harrison, Penn State University; Vishwanath Hegde, California State University at East Bay; Craig Hill, Georgia State University; Jim Ho, University of Illinois at Chicago; Seong Hyun Nam, University of North Dakota; Jonatan Jelen, Mercy College; Prafulla Joglekar, LaSalle University; Vijay Kannan, Utah State University; Sunder Kekre, Carnegie-Mellon University; Jim Keyes, University of Wisconsin at Stout; Seung-Lae Kim, Drexel University; Beate Klingenberg, Marist College; John Kros, East Carolina University; Vinod Lall, Minnesota State University at Moorhead; Kenneth Lawrence, New Jersey Institute of Technology; Jooh Lee, Rowan University; Anita Lee-Post, University of Kentucky; Karen Lewis, University of Mississippi; Bingguang Li, Albany State University; Cheng Li, California State University at Los Angeles; Maureen P. Lojo, California State University at Sacramento; F. Victor Lu, St. John’s University; Janet Lyons, Utah State University; James Maddox, Friends University; Gita Mathur, San Jose State University; Mark McComb, Mississippi College; George Mechling, Western Carolina University; Scott Metlen, University of Idaho; Douglas Micklich, Illinois State University; Ajay Mishra, SUNY at Binghamton; Scott S. Morris, Southern Nazarene University; Philip F. Musa, University of Alabama at Birmingham; Roy Nersesian, Monmouth University; Jeffrey Ohlmann, University of Iowa at Iowa City; John Olson, University of St. 
Thomas; Ozgur Ozluk, San Francisco State University; Kenneth Paetsch, Cleveland State University; Taeho Park, San Jose State University; Allison Pearson, Mississippi State University; Patrick Penfield, Syracuse University; Steve Peng, California State University at Hayward; Richard Peschke, Minnesota State University at Moorhead; Andru Peters, San Jose State University; Charles Phillips, Mississippi State University; Frank Pianki, Anderson University; Sharma Pillutla, Towson University; Zinovy Radovilsky, California State University at Hayward; Stephen A. Raper, University of Missouri at Rolla; Pedro Reyes, Baylor University; Buddhadev Roychoudhury, Minnesota State University at Mankato; Narendra Rustagi, Howard University; Herb Schiller, Stony Brook University; Dean T. Scott, DeVry University; Scott J. Seipel, Middle Tennessee State University; Raj Selladurai, Indiana University; Kaushic Sengupta, Hofstra University; Kenneth Shaw, Oregon State University; Dooyoung Shin, Minnesota State University at Mankato; Michael Shurden, Lander University; Raymond E. Simko, Myers University; John Simon, Governors State University; Jake Simons, Georgia Southern University; Charles Smith, Virginia Commonwealth University; Kenneth Solheim, DeVry University; Young Son, Bernard M. Baruch College; Victor Sower, Sam Houston State University; Jeremy Stafford, University of North Alabama; Donna Stewart, University of Wisconsin at Stout; Dothang Truong, Fayetteville State University; Mike Umble, Baylor University; Javad Varzandeh, California State University at San Bernardino; Timothy Vaughan, University of Wisconsin at Eau Claire; Emre Veral,
Baruch College; Mark Vroblefski, University of Arizona; Gustavo Vulcano, New York University; Walter Wallace, Georgia State University; James Walters, Ball State University; John Wang, Montclair State University; Tekle Wanorie, Northwest Missouri State University; Jerry Wei, University of Notre Dame; Michael Whittenberg, University of Texas; Geoff Willis, University of Central Oklahoma; Pamela Zelbst, Sam Houston State University; Jiawei Zhang, NYU; Zhenying Zhao, University of Maryland; Yong-Pin Zhou, University of Washington.

William J. Stevenson



You’re in the driver’s seat.

Want to build your own course? No problem. Prefer to use our turnkey, prebuilt course? Easy. Want to make changes throughout the semester? Sure. And you’ll save time with Connect’s auto-grading too.

They’ll thank you for it.

Adaptive study resources like SmartBook® 2.0 help your students be better prepared in less time. You can transform your class time from dull definitions to dynamic debates. Find out more about the powerful personalized learning experience available in SmartBook 2.0.



Effective, efficient studying.

Connect helps you be more productive with your study time and get better grades using tools like SmartBook 2.0, which highlights key concepts and creates a personalized study plan. Connect sets you up for success, so you walk into class with confidence and walk out with better grades.

Study anytime, anywhere.

Download the free ReadAnywhere app and access your online eBook or SmartBook 2.0 assignments when it’s convenient, even if you’re offline. And since the app automatically syncs with your eBook and SmartBook 2.0 assignments in Connect, all of your work is available every time you open it.

No surprises.

The Connect Calendar and Reports tools keep you on track with the work you need to get done and your assignment scores. Life gets busy; Connect tools help you keep learning through it all.

Learning for everyone.

McGraw-Hill works directly with Accessibility Services Departments and faculty to meet the learning needs of all students. Please contact your Accessibility Services office and ask them to email [email protected] for more information.


Note to Students

The material in this text is part of the core knowledge in your education. Consequently, you will derive considerable benefit from your study of operations management, regardless of your major. Practically speaking, operations is a course in

This book describes principles and concepts of operations management. You should be aware that many of these principles and concepts are applicable to other aspects of your professional and personal life. You can expect the benefits of your study of operations management to serve you in those other areas as well.

Some students approach this course with apprehension, and perhaps even some negative feelings. It may be that they have heard that the course contains a certain amount of quantitative material that they feel uncomfortable with, or that the subject matter is dreary, or that the course is about “factory management.” This is unfortunate, because the subject matter of this book is interesting and vital for all business students. While it is true that some of the material is quantitative, numerous examples, solved problems, and answers at the back of the book help with the quantitative material. As for “factory management,” there is material on manufacturing, as well as on services. Manufacturing is important, and something that you should know about for a number of reasons. Look around you. Most of the “things” you see were manufactured: cars, trucks, planes, clothing, shoes, computers, books, pens and pencils, desks, and cell phones. And these are just the tip of the iceberg. So it makes sense to know something about how these things are produced. Beyond all that is the fact that manufacturing is largely responsible for the high standard of living people have in industrialized countries.

After reading each chapter or supplement in the text, attending related classroom lectures, and completing assigned questions and problems, you should be able to do each of the following:

  1. Identify the key features of that material.

  2. Define and use terminology.

  3. Solve typical problems.

  4. Recognize applications of the concepts and techniques covered.

  5. Discuss the subject matter in some depth, including its relevance, managerial considerations, and advantages and limitations.

You will encounter a number of chapter supplements. Check with your course syllabus to determine which ones are included.

This book places an emphasis on problem solving. There are many examples throughout the text illustrating solutions. In addition, at the end of most chapters and supplements you will find a group of solved problems. The examples within the chapter itself serve to illustrate concepts and techniques. Too much detail at those points would be counterproductive. Yet, later on, when you begin to solve the end-of-chapter problems, you will find the solved problems quite helpful. Moreover, those solved problems usually illustrate more and different details than the problems within the chapter.

I suggest the following approach to increase your chances of getting a good grade in the course:

  1. Do the class preparation exercises for each chapter if they are available from your instructor.

  2. Look over the chapter outline and learning objectives.

  3. Read the chapter summary, and then skim the chapter.

  4. Read the chapter and take notes.

  5. Look over and try to answer some of the discussion and review questions.

  6. Work the assigned problems, referring to the solved problems and chapter examples as needed.

Note that the answers to many problems are given at the end of the book. Try to solve each problem before turning to the answer. Remember—tests don’t come with answers.

And here is one final thought: Homework is on the Highway to Success, whether it relates to your courses, the workplace, or life! So do your homework, so you can have a successful journey!



Brief Contents


1  Introduction to Operations Management


2  Competitiveness, Strategy, and Productivity


3  Forecasting


4  Product and Service Design




5  Strategic Capacity Planning for Products and Services




6  Process Selection and Facility Layout


7  Work Design and Measurement




8  Location Planning and Analysis


9  Management of Quality


10  Quality Control


11  Aggregate Planning and Master Scheduling


12  Inventory Management


13  MRP and ERP


14  JIT and Lean Operations




15  Supply Chain Management


16  Scheduling


17  Project Management


18  Management of Waiting Lines


19  Linear Programming


Appendix A: Answers to Selected Problems

Appendix B: Tables

Appendix C: Working with the Normal Distribution

Appendix D: Ten Things to Remember Beyond the Final Exam

Company Index

Subject Index




1  Introduction to Operations Management



Production of Goods Versus Providing Services

Why Learn About Operations Management?

Career Opportunities and Professional Societies

Process Management

The Scope of Operations Management


Why Manufacturing Matters

Operations Management and Decision Making



The Historical Evolution of Operations Management

Operations Today


Agility Creates a Competitive Edge

Key Issues for Today’s Business Operations


Sustainable Kisses

Diet and the Environment: Vegetarian vs. Nonvegetarian

Operations Tour:

Wegmans Food Markets


Key Points

Key Terms

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Selected Bibliography and Further Readings

Problem-Solving Guide

2  Competitiveness, Strategy, and Productivity




Mission and Strategies


Amazon Ranks High in Customer Service

Low Inventory Can Increase Agility

Operations Strategy

Implications of Organization Strategy for Operations Management

Transforming Strategy into Action: The Balanced Scorecard



Why Productivity Matters

Dutch Tomato Growers’ Productivity Advantage

Productivity Improvement


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Home-Style Cookies

Hazel Revisited

“Your Garden Gloves”

Girlfriend Collective

Operations Tour:

The U.S. Postal Service

Selected Bibliography and Further Readings

3  Forecasting



Features Common to All Forecasts

Elements of a Good Forecast


Forecasting and the Supply Chain

Steps in the Forecasting Process

Approaches to Forecasting

Qualitative Forecasts

Forecasts Based on Time-Series Data

Associative Forecasting Techniques



Forecast Accuracy


High Forecasts Can Be Bad News

Monitoring Forecast Error

Choosing a Forecasting Technique

Using Forecast Information

Computer Software in Forecasting

Operations Strategy


Gazing at the Crystal Ball


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



M&L Manufacturing

Highline Financial Services, Ltd.

Selected Bibliography and Further Readings

4  Product and Service Design



Design as a Business Strategy



Dutch Boy Brushes Up Its Paints

Idea Generation


Vlasic’s Big Pickle Slices

Legal and Ethical Considerations

Human Factors

Cultural Factors


Green Tea Ice Cream? Kale Soup?

Global Product and Service Design

Environmental Factors: Sustainability


Kraft Foods’ Recipe for Sustainability

China Clamps Down on Recyclables

Recycle City: Maria’s Market

Other Design Considerations


Lego A/S in the Pink

Fast-Food Chains Adopt Mass Customization

Phases in Product Design and Development

Designing for Production

Service Design


The Challenges of Managing Services

Operations Strategy


Key Points

Key Terms

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises


Operations Tour:

High Acres Landfill

Selected Bibliography and Further Readings



5  Strategic Capacity Planning for Products and Services




Excess Capacity Can Be Bad News!

Capacity Decisions Are Strategic


Defining and Measuring Capacity

Determinants of Effective Capacity

Strategy Formulation

Forecasting Capacity Requirements

Additional Challenges of Planning Service Capacity

Do It In-House or Outsource It?


My Compliments to the Chef, Er, Buyer

Developing Capacity Strategies

Constraint Management

Evaluating Alternatives

Operations Strategy


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Outsourcing of Hospital Services

Selected Bibliography and Further Readings



6  Process Selection and Facility Layout



Process Selection

Operations Tour:

Morton Salt



Foxconn Shifts Its Focus to Automation

Zipline Drones Save Lives in Rwanda

Self-Driving Vehicles

Process Strategy

Strategic Resource Organization: Facilities Layout


A Safe Hospital Room of the Future

Designing Product Layouts: Line Balancing


BMW’s Strategy: Flexibility

Designing Process Layouts


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises


Selected Bibliography and Further Readings

7  Work Design and Measurement



Job Design

Quality of Work Life

Methods Analysis


Taylor’s Techniques Help UPS

Motion Study

Work Measurement

Operations Strategy


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises


Selected Bibliography and Further Readings



8  Location Planning and Analysis


The Need for Location Decisions

The Nature of Location Decisions

Global Locations



General Procedure for Making Location Decisions

Identifying a Country, Region, Community, and Site

Service and Retail Locations

Evaluating Location Alternatives


Key Points


Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Hello, Walmart?

Selected Bibliography and Further Readings

9  Management of Quality



The Evolution of Quality Management

The Foundations of Modern Quality Management: The Gurus

Insights on Quality Management


American Fast-Food Restaurants Are Having Success in China

Hyundai: Exceeding Expectations

Quality and Performance Excellence Awards

Quality Certification

Quality and the Supply Chain

Total Quality Management

Problem Solving and Process Improvement

Quality Tools

Operations Strategy


Key Points

Key Terms

Solved Problem

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Chick-n-Gravy Dinner Line

Tip Top Markets

Selected Bibliography and Further Readings

10  Quality Control





Falsified Inspection Reports Create Major Risks and Job Losses

Statistical Process Control

Process Capability


RFID Chips Might Cut Drug Errors in Hospitals

Operations Strategy


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Toys, Inc.

Tiger Tools

Selected Bibliography and Further Readings

11  Aggregate Planning and Master Scheduling




Duplicate Orders Can Lead to Excess Capacity

Basic Strategies for Meeting Uneven Demand

Techniques for Aggregate Planning

Aggregate Planning in Services

Disaggregating the Aggregate Plan

Master Scheduling

The Master Scheduling Process


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Eight Glasses a Day (EGAD)

Selected Bibliography and Further Readings

12  Inventory Management





The Nature and Importance of Inventories

Requirements for Effective Inventory Management



Radio Frequency Identification (RFID) Tags

Catch Them Before They Steal! Reducing Inventory Loss With an Assist From AI

Drones Can Help With Inventory Management in Warehouses

Inventory Ordering Policies

How Much to Order: Economic Order Quantity Models

Reorder Point Ordering

How Much to Order: Fixed-Order-Interval Model

The Single-Period Model

Operations Strategy


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



UPD Manufacturing

Grill Rite

Farmers Restaurant

Operations Tours:

Bruegger’s Bagel Bakery


Selected Bibliography and Further Readings

13  MRP and ERP



An Overview of MRP

MRP Inputs

MRP Processing

MRP Outputs

Other Considerations

MRP in Services

Benefits and Requirements of MRP


Capacity Requirements Planning




11 Common ERP Mistakes and How to Avoid Them

Operations Strategy


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Promotional Novelties

DMD Enterprises

Operations Tour:

Stickley Furniture

Selected Bibliography and Further Readings

14  JIT and Lean Operations




Toyota Recalls

Supporting Goals

Building Blocks


General Mills Studied NASCAR Pit Crew to Reduce Changeover Time

Lean Tools


Gemba Walks

Transitioning to a Lean System

Lean Services


Operations Strategy


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Level Operations

Operations Tour:


Selected Bibliography and Further Readings




15  Supply Chain Management



Trends in Supply Chain Management


Walmart Focuses on Its Supply Chain

Supply Chain Transparency

At 3M, a Long Road Became a Shorter Road

Global Supply Chains

ERP and Supply Chain Management

Ethics and the Supply Chain

Small Businesses

Management Responsibilities



Supplier Management

Inventory Management

Order Fulfillment


Operations Tour:

Wegmans’ Shipping System


UPS Sets the Pace for Deliveries and Safe Driving

Springdale Farm

Active, Semi-Passive, and Passive RFID Tags

Creating an Effective Supply Chain


Clicks or Bricks, or Both?

Easy Returns



Key Points

Key Terms

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises




Selected Bibliography and Further Readings

16 Scheduling


Scheduling Operations

Scheduling in Low-Volume Systems

Scheduling Services

Operations Strategy


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Hi-Ho, Yo-Yo, Inc.

Selected Bibliography and Further Readings

17 Project Management



Project Life Cycle

Behavioral Aspects of Project Management


Artificial Intelligence Will Help Project Managers

Work Breakdown Structure

Planning and Scheduling with Gantt Charts


Deterministic Time Estimates

A Computing Algorithm

Probabilistic Time Estimates

Determining Path Probabilities


Budget Control

Time–Cost Trade-Offs: Crashing

Advantages of Using PERT and Potential Sources of Error

Critical Chain Project Management

Other Topics in Project Management

Project Management Software

Operations Strategy

Risk Management


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Time, Please

Selected Bibliography and Further Readings


18  Management of Waiting Lines


Why Is There Waiting?


New Yorkers Do Not Like Waiting in Line

Managerial Implications of Waiting Lines

Goal of Waiting-Line Management

Characteristics of Waiting Lines

Measures of Waiting-Line Performance

Queuing Models: Infinite-Source

Queuing Model: Finite-Source

Constraint Management

The Psychology of Waiting


David H. Maister on the Psychology of Waiting

Operations Strategy


Managing Waiting Lines at Disney World


Key Points

Key Terms

Solved Problems

Discussion and Review Questions

Taking Stock

Critical Thinking Exercises



Big Bank

Selected Bibliography and Further Readings

19 Linear Programming



Linear Programming Models

Graphical Linear Programming

The Simplex Method

Computer Solutions

Sensitivity Analysis


Key Points

Key Terms

Solved Problems

Discussion and Review Questions



Son, Ltd.

Custom Cabinets, Inc.

Selected Bibliography and Further Readings

APPENDIX A Answers to Selected Problems




APPENDIX C Working with the Normal Distribution


APPENDIX D Ten Things to Remember Beyond the Final Exam



Company Index

Subject Index


Recalls of automobiles, foods, toys, and other products; major oil spills; and even dysfunctional state and federal legislatures are all examples of operations failures. They underscore the need for effective operations management. Examples of operations successes include the many electronic devices we all use, medical breakthroughs in diagnosing and treating ailments, and high-quality goods and services that are widely available.



Operations is that part of a business organization that is responsible for producing goods and/or services.

Goods are physical items that include raw materials, parts, subassemblies such as motherboards that go into computers, and final products such as cell phones and automobiles.

Services are activities that provide some combination of time, location, form, or psychological value. Examples of goods and services are found all around you. Every book you read, every video you watch, every e-mail or text message you send, every telephone conversation you have, and every medical treatment you receive involves the operations function of one or more organizations. So does everything you wear, eat, travel in, sit on, and access through the internet. The operations function in business can also be viewed from a more far-reaching perspective: The collective success or failure of companies’ operations functions has an impact on the ability of a nation to compete with other nations, and on the nation’s economy.

The ideal situation for a business organization is to achieve an economic match of supply and demand. Having excess supply or excess capacity is wasteful and costly; having too little means lost opportunity and possible customer dissatisfaction. The key functions on the supply side are operations and supply chains, and sales and marketing on the demand side.

While the operations function is responsible for producing products and/or delivering services, it needs the support and input from other areas of the organization. Business organizations have three basic functional areas, as depicted in
Figure 1.1: finance, marketing, and operations. It doesn’t matter whether the business is a retail store, a hospital, a manufacturing firm, a car wash, or some other type of business; all business organizations have these three basic functions.

Finance is responsible for securing financial resources at favorable prices and allocating those resources throughout the organization, as well as budgeting, analyzing investment proposals, and providing funds for operations. Marketing is responsible for assessing consumer wants and needs, and selling and promoting the organization’s goods or services. Operations is responsible for producing the goods or providing the services offered by the organization. To put this into perspective, if a business organization were a car, operations would be its engine. And just as the engine is the core of what a car does, in a business organization, operations is the core of what the organization does. Operations management is responsible for managing that core. Hence,

operations management
is the management of systems or processes that create goods and/or provide services.

Operations and supply chains are intrinsically linked, and no business organization could exist without both. A

supply chain
is the sequence of organizations—their facilities, functions, and activities—that are involved in producing and delivering a product or service. The sequence begins with basic suppliers of raw materials and extends all the way to the final customer. See
Figure 1.2. Facilities might include warehouses, factories, processing centers, offices, distribution centers, and retail outlets. Functions and activities include forecasting, purchasing, inventory management, information management, quality assurance, scheduling, production, distribution, delivery, and customer service.

Figure 1.3a provides another illustration of a supply chain: a chain that extends from wheat growing on a farm and ends with a customer buying a loaf of bread in a supermarket. The value of the product increases as it moves through the supply chain.


One way to think of a supply chain is that it is like a chain, as its name implies. This is shown in
Figure 1.2. The links of the chain would represent various production and/or service operations, such as factories, storage facilities, activities, and modes of transportation (trains, railroads, ships, planes, cars, and people). The chain illustrates both the
sequential nature of a supply chain and the interconnectedness of the elements of the supply chain. Each link is a customer of the previous link and a supplier to the following link. It also helps to understand that if any one of the links fails for any reason (quality or delivery issues, weather problems, or some other problem [there are numerous possibilities]), that can interrupt the flow in the supply chain for the following portion of the chain.
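The customer/supplier linkage just described can be sketched as a tiny model. This is an illustrative sketch, not from the text; the stage names follow the bread example in Figure 1.3a, and the function name is hypothetical.

```python
def downstream_of_failure(chain, failed_index):
    """Return the links whose inbound flow is interrupted when the
    link at failed_index fails: everything after it in the chain,
    since each link supplies the one that follows it."""
    return chain[failed_index + 1:]

# A hypothetical bread supply chain, mirroring Figure 1.3a.
chain = ["farm", "mill", "bakery", "supermarket", "customer"]

# If the mill (index 1) fails, the bakery, supermarket, and
# customer all lose their supply.
affected = downstream_of_failure(chain, 1)
```

The one-line slice captures the point of the chain metaphor: a failure anywhere interrupts flow for every following portion of the chain, but not for the links upstream of it.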

Another way to think of a supply chain is as a tree with many branches, as shown in
Figure 1.3b. The main branches of the tree represent key suppliers and transporters (e.g., trucking companies). That view is helpful in grasping the size and complexity that often exists in supply chains. Notice that the main branches of the tree have side branches (their own key suppliers), and those side branches also have their own side branches (their own key suppliers). In fact, an extension of the tree view of a supply chain is that each supplier (branch) has its own supply tree. Referring to
Figure 1.3a, the farm, the mill, the bakery, and the trucking companies would each have their own “tree” of suppliers.

Supply chains are both external and internal to the organization. The external parts of a supply chain provide raw materials, parts, equipment, supplies, and/or other inputs to the organization, and they deliver the organization’s outputs (goods and services) to its customers. The internal parts of a supply chain are part of the operations function itself, supplying operations with parts and materials, performing work on products, and/or performing services.

The creation of goods or services involves transforming or converting inputs into outputs. Various inputs such as capital, labor, and information are used to create goods or services using one or more transformation processes (e.g., storing, transporting, repairing). To ensure that the desired outputs are obtained, an organization takes measurements at various points in the transformation process (feedback) and then compares them with previously established standards to determine whether corrective action is needed (control). Figure 1.4 depicts the conversion process.
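The feedback-and-control cycle described above (measure, compare against an established standard, correct if needed) can be sketched as a simple check. The fill-weight standard and tolerance below are hypothetical numbers chosen for illustration.

```python
def needs_correction(measurement, standard, tolerance):
    """Feedback/control step: compare a measurement taken during the
    transformation process against a previously established standard,
    and decide whether corrective action is needed."""
    return abs(measurement - standard) > tolerance

# Hypothetical example: a fill-weight standard of 500 g with a
# tolerance of +/- 5 g.
needs_correction(507.2, standard=500.0, tolerance=5.0)  # outside tolerance
needs_correction(498.1, standard=500.0, tolerance=5.0)  # within tolerance
```

In practice the comparison is usually statistical rather than a single fixed tolerance (control charts, covered later in the book, formalize this), but the measure-compare-act loop is the same.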

Table 1.1 provides some examples of inputs, transformation processes, and outputs. Although goods and services are listed separately in
Table 1.1, it is important to note that goods and services often occur jointly. For example, having the oil changed in your car is a service, but the oil that is delivered is a good. Similarly, house painting is a service, but the paint is a good. The goods–service combination is a continuum. It can range from primarily goods, with little service, to primarily service, with few goods.
Figure 1.5 illustrates this continuum. Because there are relatively few pure goods or pure services, companies usually sell
product packages, which are a combination of goods and services. There are elements of both goods production and service delivery in these product packages. This makes managing operations more interesting, and also more challenging.


Examples of inputs, transformation, and outputs

Inputs include physical labor, intellectual labor, raw materials, and constraints such as legal constraints and government regulations. Transformation processes include cutting and drilling. Outputs range from a high goods percentage (e.g., food products, cell phones) to a high service percentage (e.g., health care, vehicle repair, retail stores).

Table 1.2 provides some specific illustrations of the transformation process.


Illustrations of the transformation process

Food processor: inputs such as raw vegetables and metal sheets are transformed (e.g., by making cans and canning the vegetables) into canned vegetables.

Hospital: inputs such as doctors, nurses, and medical supplies are transformed through examination and treatment into treated patients.






The essence of the operations function is to
add value during the transformation process.

Value-added is the term used to describe the difference between the cost of inputs and the value or price of outputs. In nonprofit organizations, the value of outputs (e.g., highway construction, police and fire protection) is their value to society; the greater the value-added, the greater the effectiveness of these operations. In for-profit organizations, the value of outputs is measured by the prices that customers are willing to pay for those goods or services. Firms use the money generated by value-added for research and development, investment in new facilities and equipment, worker salaries, and profits. Consequently, the greater the value-added, the greater the amount of funds available for these purposes. Value can also be psychological, as in branding.
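The definition above reduces to simple arithmetic: value-added equals the value (price) of the output minus the cost of the inputs. A minimal sketch with hypothetical figures:

```python
def value_added(price_of_output, cost_of_inputs):
    """Value-added = value (price) of outputs minus cost of inputs."""
    return price_of_output - cost_of_inputs

# Hypothetical loaf of bread moving through the supply chain:
# the bakery buys flour and other inputs for $0.80 and sells
# the finished loaf to the supermarket for $1.50.
va = value_added(1.50, 0.80)  # roughly $0.70 of value added at the bakery
```

Summing this quantity across every stage (farm, mill, bakery, supermarket) gives the total value added along the chain, which is why the value of the product increases as it moves through the supply chain.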

Many factors affect the design and management of operations systems. Among them are the degree of involvement of customers in the process and the degree to which technology is used to produce and/or deliver a product or service. The greater the degree of customer involvement, the more challenging it can be to design and manage the operation. Technology choices can have a major impact on productivity, costs, flexibility, and quality and customer satisfaction.



Although goods and services often go hand in hand, there are some very basic differences between the two, differences that impact the management of the goods portion versus management of the service portion. There are also many similarities between the two.

Production of goods results in a
tangible output, such as an automobile, eyeglasses, a golf ball, a refrigerator—anything that we can see or touch. It may take place in a factory, but it can occur elsewhere. For example, farming and restaurants produce
nonmanufactured goods. Delivery of service, on the other hand, generally implies an
act. A physician’s examination, TV and auto repair, lawn care, and the projection of a film in a theater are examples of services. The majority of service jobs fall into these categories:

Professional services (e.g., financial, health care, legal)

Mass services (e.g., utilities, internet, communications)

Service shops (e.g., tailoring, appliance repair, car wash, auto repair/maintenance)

Personal care (e.g., beauty salon, spa, barbershop)

Government (e.g., Medicare, mail, social services, police, fire)

Education (e.g., schools, universities)

Food service (e.g., catering)

Services within organizations (e.g., payroll, accounting, maintenance, IT, HR, janitorial)

Retailing and wholesaling

Shipping and delivery (e.g., truck, railroad, boat, air)

Residential services (e.g., lawn care, painting, general repair, remodeling, interior design)

Transportation (e.g., mass transit, taxi, airlines, ambulance)

Travel and hospitality (e.g., travel bureaus, hotels, resorts)

Miscellaneous services (e.g., copy service, temporary help)

Manufacturing and service are often different in terms of
what is done, but quite similar in terms of
how it is done.


Consider these points of comparison:

Degree of customer contact. Many services involve a high degree of customer contact, although services such as internet providers, utilities, and mail service do not. When there is a high degree of contact, the interaction between server and customer becomes a “moment of truth” that will be judged by the customer every time the service occurs.

Labor content of jobs. Services often have a higher degree of labor content than manufacturing jobs do, although automated services are an exception.

Uniformity of inputs. Service operations are often subject to a higher degree of variability of inputs. Each client, patient, customer, repair job, and so on presents a somewhat unique situation that requires assessment and flexibility. Conversely, manufacturing operations often have a greater ability to control the variability of inputs, which leads to more-uniform job requirements.

Measurement of productivity. Measurement of productivity can be more difficult for service jobs due largely to the high variations of inputs. Thus, one doctor might have a higher level of routine cases to deal with, while another might have more difficult cases. Unless a careful analysis is conducted, it may appear that the doctor with the difficult cases has a much lower productivity than the one with the routine cases.

Quality assurance. Quality assurance is usually more challenging for services due to the higher variation in input, and because delivery and consumption occur at the same time. Unlike manufacturing, which typically occurs away from the customer and allows mistakes that are identified to be corrected, services have less opportunity to avoid exposing the customer to mistakes.

Inventory. Many services tend to involve less use of inventory than manufacturing operations, so the costs of having inventory on hand are lower than they are for manufacturing. However, unlike manufactured goods, services cannot be stored. Instead, they must be provided “on demand.”

Wages. Manufacturing jobs are often well paid, and have less wage variation than service jobs, which can range from highly paid professional services to minimum-wage workers.

Ability to patent. Product designs are often easier to patent than service designs, and some services cannot be patented, making them easier for competitors to copy.

There are also many
similarities between managing the production of products and managing services. In fact, most of the topics in this book pertain to both. When there are important service considerations, these are highlighted in separate sections. Here are some of the primary factors for both:

  1. Forecasting and capacity planning to match supply and demand

  2. Process management

  3. Managing variations

  4. Monitoring and controlling costs and productivity

  5. Supply chain management

  6. Location planning, inventory management, quality control, and scheduling

Note that many service activities are essential in goods-producing companies. These include training, human resource management, customer service, equipment repair, procurement, and administrative services.

Table 1.3 provides an overview of the differences between the production of goods and service operations. Remember, though, that most systems involve a blend of goods and services.



Typical differences between production of goods and provision of services

Characteristic                                       Goods          Services
Customer contact                                     Low            High
Labor content                                        Low            High
Uniformity of input                                  High           Low
Measurement of productivity                          Easy           Difficult
Opportunity to correct problems before delivery      High           Low
Wages                                                Narrow range   Wide range
Ability to patent                                    Usually        Not usually


Whether operations management is your major or not, the skill set you gain studying operations management will serve you well in your career.

There are many career-related reasons for wanting to learn about operations management, whether you plan to work in the field of operations or not. This is because every aspect of business affects or is affected by operations. Operations and sales are the two line functions in a business organization. All other functions—accounting, finance, marketing, IT, and so on—support the two line functions. Among the service jobs that are closely related to operations are financial services (e.g., stock market analyst, broker, investment banker, and loan officer), marketing services (e.g., market analyst, marketing researcher, advertising manager, and product manager), accounting services (e.g., corporate accountant, public accountant, and budget analyst), and information services (e.g., corporate intelligence, library services, management information systems design services).

A common complaint from employers is that college graduates come to them too narrowly focused, when employers would prefer new hires to have a more general knowledge of how business organizations operate. This book provides some of the breadth that employers are looking for in their new hires. Apart from the career-related reasons, there is a not-so-obvious one: Through learning about operations and supply chains, you will gain a much better understanding of the world you live in, the global dependencies of companies and nations, some of the reasons that companies succeed or fail, and the importance of working with others.

Working together successfully means that all members of the organization understand not only their own roles, but also the roles of others. In practice, there is significant interfacing and collaboration among the various functional areas, involving exchange of information and cooperative decision making. For example, although the three primary functions in business organizations perform different activities, many of their decisions impact the other areas of the organization. Consequently, these functions have numerous interactions, as depicted by the overlapping circles shown in
Figure 1.6.

Finance and operations management personnel cooperate by exchanging information and expertise in such activities as the following:

  1. Budgeting. Budgets must be periodically prepared to plan financial requirements. Budgets must sometimes be adjusted, and performance relative to a budget must be evaluated.

  2. Economic analysis of investment proposals. Evaluation of alternative investments in plant and equipment requires inputs from both operations and finance people.

  3. Provision of funds. The necessary funding of operations and the amount and timing of funding can be important and even critical when funds are tight. Careful planning can help avoid cash-flow problems.


Marketing’s focus is on selling and/or promoting the goods or services of an organization. Marketing is also responsible for assessing customer wants and needs, and for communicating those to operations people (short term) and to design people (long term). That is, operations needs information about demand over the short to intermediate term so that it can plan accordingly (e.g., purchase materials or schedule work), while design people need information that relates to improving current products and services and designing new ones. Marketing, design, and production must work closely together to successfully implement design changes and to develop and produce new products. Marketing can provide valuable insight on what competitors are doing. Marketing also can supply information on consumer preferences so that design will know the kinds of products and features needed; operations can supply information about capacities and judge the
manufacturability of designs. Operations will also have advance warning if new equipment or skills will be needed for new products or services. Finance people should be included in these exchanges in order to provide information on what funds might be available (short term) and to learn what funds might be needed for new products or services (intermediate to long term). One important piece of information marketing needs from operations is the manufacturing or service

lead time
in order to give customers realistic estimates of how long it will take to fill their orders.

Thus, marketing, operations, and finance must interface on product and process design, forecasting, setting realistic schedules, quality and quantity decisions, and keeping each other informed on the other’s strengths and weaknesses.

People in every area of business need to appreciate the importance of managing and coordinating operations decisions that affect the supply chain and the matching of supply and demand, and how those decisions impact other functions in an organization.

Operations also interacts with other functional areas of the organization, including legal, management information systems (MIS), accounting, personnel/human resources, and public relations, as depicted in
Figure 1.7.


The legal department must be consulted on contracts with employees, customers, suppliers, and transporters, as well as on liability and environmental issues.

Accounting supplies information to management on costs of labor, materials, and overhead, and may provide reports on items such as scrap, downtime, and inventories.

Management information systems (MIS) is concerned with providing management with the information it needs to effectively manage. This occurs mainly through designing systems to capture relevant information and designing reports. MIS is also important for managing the control and decision-making tools used in operations management.

The personnel or human resources department is concerned with the recruitment and training of personnel, labor relations, contract negotiations, wage and salary administration, assisting in manpower projections, and ensuring the health and safety of employees.

Public relations is responsible for building and maintaining a positive public image of the organization. Good public relations provides many potential benefits. An obvious one is in the marketplace. Other potential benefits include public awareness of the organization as a good place to work (labor supply), improved chances of approval of zoning change requests, community acceptance of expansion plans, and instilling a positive attitude among employees.


There are many career opportunities in the operations management and supply chain fields. Among the numerous job titles are operations manager, production analyst, production manager, inventory manager, purchasing manager, schedule coordinator, distribution manager, supply chain manager, quality analyst, and quality manager. Other titles include office manager, store manager, and service manager.

People who work in the operations field should have a skill set that includes both people skills and knowledge skills. People skills include political awareness; mentoring ability; and collaboration, negotiation, and communication skills. Knowledge skills, necessary for credibility and good decision making, include product and/or service knowledge, process knowledge, industry and global knowledge, financial and accounting skills, and project management skills. See
Table 1.4.


Sample operations management job descriptions

Production Supervisor

Supply Chain Manager

Social Media Product Manager

  • Manage a production staff of 10–20.

  • Ensure the department meets daily goals through the management of productivity.

  • Enforce safety policies.

  • Coordinate work between departments.

  • Have strong problem-solving skills, and strong written and oral communication skills.

  • Have a general knowledge of materials management, information systems, and basic statistics.

  • Direct, monitor, evaluate, and motivate employee performance.

  • Be knowledgeable about shipping regulations.

  • Manage budgetary accounts.

  • Manage projects.

  • Identify ways to increase consumer engagement.

  • Analyze the key performance indicators and recommend improvements.

  • Lead cross-functional teams to define product specifications.

  • Collaborate with design and technical to create key product improvements.

  • Develop requirements for new website enhancements.

  • Monitor the competition to identify need for changes.

If you are thinking of a career in operations management, you can benefit by joining one or more of the following professional societies.

APICS, the Association for Operations Management 8430 West Bryn Mawr Avenue, Suite 1000, Chicago, Illinois 60631

American Society for Quality (ASQ) 230 West Wells Street, Milwaukee, Wisconsin 53203


Institute for Supply Management (ISM) 2055 East Centennial Circle, Tempe, Arizona 85284

Institute for Operations Research and the Management Sciences (INFORMS) 901 Elkridge Landing Road, Linthicum, Maryland 21090-2909

The Production and Operations Management Society (POMS) College of Engineering, Florida International University, EAS 2460, 10555 West Flagler Street, Miami, Florida 33174

The Project Management Institute (PMI) 4 Campus Boulevard, Newtown Square, Pennsylvania 19073-3299

Council of Supply Chain Management Professionals (CSCMP) 333 East Butterfield Road, Suite 140, Lombard, Illinois 60148

APICS, ASQ, ISM, and other professional societies offer a practitioner certification examination that can enhance your qualifications. Information about job opportunities can be obtained from all of these societies, as well as from other sources, such as the Decision Sciences Institute (University Plaza, Atlanta, Georgia 30303) and the Institute of Industrial Engineers (25 Technology Park, Norcross, Georgia 30092).


A key aspect of operations management is process management. A

process consists of one or more actions that transform inputs into outputs. In essence, the central role of all management is process management.

Businesses are composed of many interrelated processes. Generally speaking, there are three categories of business processes:

  1. Upper-management processes. These govern the operation of the entire organization. Examples include organizational governance and organizational strategy.

  2. Operational processes. These are the core processes that make up the value stream. Examples include purchasing, production and/or service, marketing, and sales.

  3. Supporting processes. These support the core processes. Examples include accounting, human resources, and IT (information technology).

Business processes, large and small, are composed of a series of supplier–customer relationships, where every business organization, every department, and every individual operation is both a customer of the previous step in the process and a supplier to the next step in the process.
Figure 1.8 illustrates this concept.

A major process can consist of many subprocesses, each having its own goals that contribute to the goals of the overall process. Business organizations and supply chains have many such processes and subprocesses, and they benefit greatly when management is using a process perspective. Business process management (BPM) activities include process design, process execution, and process monitoring. Two basic aspects of this for operations and supply chain management are managing processes to meet demand and dealing with process variability.

Managing a Process to Meet Demand

Ideally, the capacity of a process will be such that its output just matches demand. Excess capacity is wasteful and costly; too little capacity means dissatisfied customers and lost revenue. Having the right capacity requires having accurate forecasts of demand, the ability to translate forecasts into capacity requirements, and a process in place capable of meeting expected demand. Even so, process variation and demand variability can make the achievement of a match between process output and demand difficult. Therefore, to be effective, it is also necessary for managers to be able to deal with variation.
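The trade-off described above (excess capacity is wasteful; too little capacity costs sales) can be illustrated with a quick comparison. The per-unit costs below are hypothetical, and shortage costs are typically harder to pin down in practice because they include lost goodwill.

```python
def mismatch_cost(capacity, demand, excess_cost_per_unit, shortage_cost_per_unit):
    """Cost of a capacity/demand mismatch for one period:
    idle capacity is charged at excess_cost_per_unit,
    unmet demand at shortage_cost_per_unit."""
    if capacity >= demand:
        return (capacity - demand) * excess_cost_per_unit
    return (demand - capacity) * shortage_cost_per_unit

# Hypothetical figures: each idle unit of capacity costs $2 to carry,
# each unit of unmet demand costs $5 in lost revenue and goodwill.
mismatch_cost(120, 100, 2, 5)   # 20 idle units of capacity
mismatch_cost(100, 120, 2, 5)   # 20 units of unmet demand
```

Because the two per-unit costs usually differ, the cheapest capacity level is generally not the one that matches average demand exactly; later chapters on capacity planning and the single-period inventory model make this balancing explicit.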

Process Variation

Variation occurs in all business processes. It can be due to variety or variability. For example, random variability is inherent in every process; it is always present. In addition, variation can occur as the result of deliberate management choices to offer customers variety.

There are four basic sources of variation:

  1. The variety of goods or services being offered. The greater the variety of goods and services, the greater the variation in production or service requirements.

  2. Structural variation in demand. These variations, which include trends and seasonal variations, are generally predictable. They are particularly important for capacity planning.

  3. Random variation. This natural variability is present to some extent in all processes, as well as in demand for services and products, and it cannot generally be influenced by managers.

  4. Assignable variation. These variations are caused by defective inputs, incorrect work methods, out-of-adjustment equipment, and so on. This type of variation can be reduced or eliminated by analysis and corrective action.

Variations can be disruptive to operations and supply chain processes, interfering with optimal functioning. Variations result in additional cost, delays and shortages, poor quality, and inefficient work systems. Poor quality and product shortages or service delays can lead to dissatisfied customers and can damage an organization’s reputation and image. It is not surprising, then, that the ability to deal with variability is absolutely necessary for managers.

Throughout this book, you will learn about some of the tools managers use to deal with variation. An important aspect of being able to deal with variation is to use metrics to describe it. Two widely used metrics are the
mean (average) and the
standard deviation. The standard deviation quantifies variation around the mean. The mean and standard deviation are used throughout this book in conjunction with variation. So, too, is the normal distribution. Because you will come across many examples of how the normal distribution is used, you may find the overview on working with the normal distribution in the appendix at the end of the book helpful.
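The two metrics named above can be computed directly with Python’s standard library. The daily demand figures below are made up for illustration.

```python
import statistics

# Hypothetical daily demand observations for a product.
demand = [102, 98, 110, 95, 105, 99, 101]

mean = statistics.mean(demand)      # the center (average) of the data
std_dev = statistics.stdev(demand)  # sample standard deviation:
                                    # quantifies variation around the mean
```

A process facing this demand stream would plan around the mean while holding enough slack (e.g., safety stock or spare capacity) scaled to the standard deviation, which is exactly how the two metrics are used in the forecasting and inventory chapters.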


The scope of operations management ranges across the organization. Operations management people are involved in product and service design, process selection, selection and management of technology, design of work systems, location planning, facilities planning, and quality improvement of the organization’s products or services.

The operations function includes many interrelated activities, such as forecasting, capacity planning, scheduling, managing inventories, assuring quality, motivating employees, deciding where to locate facilities, and more.

We can use an airline company to illustrate a service organization’s operations system. The system consists of the airplanes, airport facilities, and maintenance facilities, sometimes spread out over a wide territory. The activities include:

Forecasting such things as weather and landing conditions, seat demand for flights, and the growth in air travel.

Capacity planning, essential for the airline to maintain cash flow and make a reasonable profit. (Too few or too many planes, or even the right number of planes but in the wrong places, will hurt profits.)


Locating facilities according to managers’ decisions on which cities to provide service for, where to locate maintenance facilities, and where to locate major and minor hubs.

Facilities and layout, important in achieving effective use of workers and equipment.

Scheduling of planes for flights and for routine maintenance; scheduling of pilots and flight attendants; and scheduling of ground crews, counter staff, and baggage handlers.

Managing inventories of such items as foods and beverages, first-aid equipment, in-flight magazines, pillows and blankets, and life preservers.

Assuring quality, essential in flying and maintenance operations, where the emphasis is on safety. This is important in dealing with customers at ticket counters, check-in, telephone and electronic reservations, and curb service, where the emphasis is on efficiency and courtesy.

Motivating and training employees in all phases of operations.

Managing the Supply Chain to Achieve Schedule, Cost, and Quality Goals

Consider a bicycle factory. This might be primarily an
assembly operation: buying components such as frames, tires, wheels, gears, and other items from suppliers, and then assembling bicycles. The factory also might do some of the
fabrication work itself, forming frames and making the gears and chains, and it might buy mainly raw materials and a few parts and materials such as paint, nuts and bolts, and tires. Among the key management tasks in either case are scheduling production, deciding which components to make and which to buy, ordering parts and materials, deciding on the style of bicycle to produce and how many, purchasing new equipment to replace old or worn-out equipment, maintaining equipment, motivating workers, and ensuring that quality standards are met.

Obviously, an airline company and a bicycle factory are completely different types of operations. One is primarily a service operation, the other a producer of goods. Nonetheless, these two operations have much in common. Both involve scheduling activities, motivating employees, ordering and managing supplies, selecting and maintaining equipment, satisfying quality standards, and—above all—satisfying customers. Also, in both businesses, the success of the business depends on short- and long-term planning.


A primary function of an operations manager is to guide the system by decision making. Certain decisions affect the design of the system, and others affect the operation of the system.

System design involves decisions that relate to system capacity, the geographic location of facilities, the arrangement of departments and the placement of equipment within physical structures, product and service planning, and the acquisition of equipment. These decisions usually, but not always, require long-term commitments. Moreover, they are typically strategic decisions.

System operation involves management of personnel, inventory planning and control, scheduling, project management, and quality assurance. These are generally tactical and operational decisions. Feedback on these decisions involves measurement and control. In many instances, the operations manager is more involved in day-to-day operating decisions than with decisions relating to system design. However, the operations manager has a vital stake in system design because system design essentially determines many of the parameters of system operation. For example, costs, space, capacities, and quality are directly affected by design decisions. Even though the operations manager is not responsible for making all design decisions, he or she can provide those decision makers with a wide range of information that will have a bearing on their decisions.

A number of other areas are part of, or support, the operations function. They include purchasing, industrial engineering, distribution, and maintenance.

Purchasing is responsible for the procurement of materials, supplies, and equipment. Close contact with operations is necessary to ensure correct quantities and timing of purchases. The purchasing department is often called on to evaluate vendors for quality, reliability, service, price, and ability to adjust to changing demand. Purchasing is also involved in receiving and inspecting the purchased goods.

Industrial engineering is often concerned with scheduling, performance standards, work methods, quality control, and material handling.

Distribution involves the shipping of goods to warehouses, retail outlets, or final customers.

Maintenance is responsible for general upkeep and the repair of equipment, the buildings and grounds, heating and air-conditioning, parking, removing toxic wastes, and perhaps security.

The operations manager is the key figure in the system. He or she has the ultimate responsibility for the creation of goods or provision of services.


The kinds of jobs that operations managers oversee vary tremendously from organization to organization, largely because of the different products or services involved. Thus, managing a banking operation obviously requires a different kind of expertise than managing a steelmaking operation. However, in a very important respect, the jobs are the same: They are both essentially managerial. The same thing can be said for the job of any operations manager regardless of the kinds of goods or services being created.

The service sector and the manufacturing sector are both important to the economy. The service sector now accounts for more than 70 percent of jobs in the United States, and it is growing in other countries as well. Moreover, the number of people working in services is increasing, while the number of people working in manufacturing is not. The reason for the decline in manufacturing jobs is twofold: As the operations function in manufacturing companies finds more productive ways of producing goods, the companies are able to maintain or even increase their output using fewer workers. Furthermore, some manufacturing work has been
outsourced to more productive companies, many in other countries, that are able to produce goods at lower costs. Outsourcing and productivity will be discussed in more detail in this and other chapters.

Many of the concepts presented in this book apply equally to manufacturing and service. Consequently, whether your interest at this time is on manufacturing or on service, these concepts will be important, regardless of whether a manufacturing example or service example is used to illustrate the concept.

The Why Manufacturing Matters reading gives another reason for the importance of manufacturing jobs.



The chief role of an operations manager is that of planner and decision maker. In this capacity, the operations manager exerts considerable influence over the degree to which the goals and objectives of the organization are realized. Most decisions involve many possible alternatives that can have quite different impacts on costs or profits. Consequently, it is important to make
informed decisions.

Operations management professionals make a number of key decisions that affect the entire organization. These include the following:

What: What resources will be needed, and in what amounts?

When: When will each resource be needed? When should the work be scheduled? When should materials and other supplies be ordered? When is corrective action needed?

Where: Where will the work be done?

How: How will the product or service be designed? How will the work be done (organization, methods, equipment)? How will resources be allocated?

Who: Who will do the work?

An operations manager’s daily concerns include costs (budget), quality, and schedules (time).

Throughout this book, you will encounter the broad range of decisions that operations managers must make, and you will be introduced to the tools necessary to handle those decisions. This section describes general approaches to decision making, including the use of models, quantitative methods, analysis of trade-offs, establishing priorities, ethics, and the systems approach. Models are often a key tool used by all decision makers.



A model is an abstraction of reality, a simplified representation of something. For example, a toy car is a model of a real automobile. It has many of the same visual features (shape, relative proportions, wheels) that make it suitable for the child’s learning and playing. But the toy does not have a real engine, it cannot transport people, and it does not weigh 3,000 pounds.

Other examples of models include automobile test tracks and crash tests; formulas, graphs, and charts; balance sheets and income statements; and financial ratios. Common statistical models include descriptive statistics such as the mean, median, mode, range, and standard deviation, as well as random sampling, the normal distribution, and regression equations.

Models are sometimes classified as physical, schematic, or mathematical.

Physical models look like their real-life counterparts. Examples include miniature cars, trucks, airplanes, toy animals and trains, and scale-model buildings. The advantage of these models is their visual correspondence with reality. 3-D printers (explained in
Chapter 6) are often used to prepare scale models.

Schematic models are more abstract than their physical counterparts; that is, they have less resemblance to the physical reality. Examples include graphs and charts, blueprints, pictures, and drawings. The advantage of schematic models is that they are often relatively simple to construct and change. Moreover, they have some degree of visual correspondence.

Mathematical models are the most abstract: They do not look at all like their real-life counterparts. Examples include numbers, formulas, and symbols. These models are usually the easiest to manipulate, and they are important forms of inputs for computers and calculators.
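To make the idea of a mathematical model concrete, here is a minimal sketch of one of the simplest models used in operations: a cost-volume (break-even) model. The function names and the numbers are illustrative assumptions, not figures from the text.

```python
def total_cost(fixed_cost: float, unit_variable_cost: float, quantity: int) -> float:
    """Cost-volume model: TC = FC + v * Q."""
    return fixed_cost + unit_variable_cost * quantity

def break_even_quantity(fixed_cost: float, price: float, unit_variable_cost: float) -> float:
    """Quantity at which revenue equals total cost: Q_BE = FC / (p - v)."""
    return fixed_cost / (price - unit_variable_cost)

# Illustrative inputs: $10,000 fixed cost, $4 variable cost per unit, $8 selling price.
print(total_cost(10_000, 4.0, 5_000))        # total cost of producing 5,000 units
print(break_even_quantity(10_000, 8.0, 4.0)) # units needed to break even
```

Like the toy car, the model deliberately omits detail (taxes, capacity limits, price changes) so attention stays on the cost-volume relationship.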

The variety of models in use is enormous. Nonetheless, all have certain common features: They are all decision-making aids and simplifications of more complex real-life phenomena. Real life involves an overwhelming amount of detail, much of which is irrelevant for any particular problem. Models omit unimportant details so that attention can be concentrated on the most important aspects of a situation.


Because models play a significant role in operations management decision making, they are heavily integrated into the material of this text. For each model, try to learn (1) its purpose, (2) how it is used to generate results, (3) how these results are interpreted and used, and (4) what assumptions and limitations apply.

The last point is particularly important because virtually every model has an associated set of assumptions or conditions under which the model is valid. Failure to satisfy all of the assumptions will make the results suspect. Attempts to apply the results to a problem under such circumstances can lead to disastrous consequences.

Managers use models in a variety of ways and for a variety of reasons. Models are beneficial because they:

  1. Are generally easy to use and less expensive than dealing directly with the actual situation.

  2. Require users to organize and sometimes quantify information and, in the process, often indicate areas where additional information is needed.

  3. Increase understanding of the problem.

  4. Enable managers to analyze what-if questions.

  5. Serve as a consistent tool for evaluation and provide a standardized format for analyzing a problem.

  6. Enable users to bring the power of mathematics to bear on a problem.

This impressive list of benefits notwithstanding, models have certain limitations of which you should be aware. The following are three of the more important limitations.

  1. Quantitative information may be emphasized at the expense of qualitative information.

  2. Models may be incorrectly applied and the results misinterpreted. The widespread use of computerized models adds to this risk because highly sophisticated models may be placed in the hands of users who are not sufficiently knowledgeable to appreciate the subtleties of a particular model; thus, they are unable to fully comprehend the circumstances under which the model can be successfully employed.

  3. The use of models does not guarantee good decisions.

Quantitative Approaches

Quantitative approaches to problem solving often embody an attempt to obtain mathematically optimal solutions to managerial problems.
Quantitative approaches to decision making in operations management (and in other functional business areas) have been widely accepted because calculators and computers are capable of handling the required calculations. Computers have had a major impact on operations management. Moreover, the growing availability of software packages for quantitative techniques has greatly increased management’s use of those techniques.

Although quantitative approaches are widely used in operations management decision making, it is important to note that managers typically use a combination of qualitative and quantitative approaches, and many important decisions are based on qualitative approaches.

Performance Metrics

Managers use metrics to manage and control operations. There are many metrics in use, including those related to profits, costs, quality, productivity, flexibility, assets, inventories, schedules, and forecast accuracy. As you read each chapter, note the metrics being used and how they are applied to manage operations.

Analysis of Trade-Offs

Operations personnel frequently encounter decisions that can be described as
trade-off decisions. For example, in deciding on the amount of inventory to stock, the decision maker must take into account the trade-off between the increased level of customer service that the additional inventory would yield and the increased costs required to stock that inventory.

Decision makers sometimes deal with these decisions by listing the advantages and disadvantages—the pros and cons—of a course of action to better understand the consequences of the decisions they must make. In some instances, decision makers add weights to the items on their list that reflect the relative importance of various factors. This can help them “net out” the potential impacts of the trade-offs on their decision.
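The weighted pros-and-cons approach can be sketched in a few lines of Python. The decision, factors, weights, and 1–10 scores below are hypothetical, purely to show the mechanics of "netting out" a trade-off.

```python
# Hypothetical trade-off: carry more inventory (option A) vs. less inventory (option B).
# Factors, weights, and 1-10 scores are illustrative assumptions, not from the text.
factors = {
    # factor: (weight, score for option A, score for option B)
    "customer service level": (0.40, 9, 6),
    "holding cost":           (0.35, 3, 8),
    "risk of obsolescence":   (0.25, 4, 7),
}

def weighted_score(option: int) -> float:
    """Sum of weight * score for the chosen option (0 = A, 1 = B)."""
    return sum(w * scores[option] for w, *scores in factors.values())

print(f"Option A (more inventory): {weighted_score(0):.2f}")
print(f"Option B (less inventory): {weighted_score(1):.2f}")
```

The option with the higher weighted score "nets out" ahead; changing a weight shows immediately how sensitive the decision is to that factor's importance.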

Degree of Customization

A major influence on the entire organization is the degree of customization of products or services being offered to its customers. Providing highly customized products or services such as home remodeling, plastic surgery, and legal counseling tends to be more labor intensive than providing standardized products such as those you would buy “off the shelf” at a mall store or a supermarket or standardized services such as public utilities and internet services. Furthermore, production of customized products or provision of customized services is generally more time consuming, requires more highly skilled people, and involves more flexible equipment than what is needed for standardized products or services. Customized processes tend to have a much lower volume of output than standardized processes, and customized output carries a higher price tag. The degree of customization has important implications for process selection and job requirements. The impact goes beyond operations and supply chains. It affects marketing, sales, accounting, finance, and information systems.

A Systems Perspective

A systems perspective is almost always beneficial in decision making. Think of it as a “big picture” view. A system can be defined as a set of interrelated parts that must work together. In a business organization, the organization can be thought of as a system composed of subsystems (e.g., marketing subsystem, operations subsystem, finance subsystem), which in turn are composed of lower subsystems. The systems approach emphasizes interrelationships among subsystems, but its main theme is that the whole is greater than the sum of its individual parts. Hence, from a systems viewpoint, the output and objectives of the organization as a whole take precedence over those of any one subsystem.

A systems perspective is essential whenever something is being designed, redesigned, implemented, improved, or otherwise changed. It is important to take into account the impact on all parts of the system. For example, if the upcoming model of an automobile will add forward collision braking, a designer must take into account how customers will view the change, the cost of producing the new system, installation procedures, and repair procedures. In addition, workers will need training to make and/or assemble the new system, production scheduling may change, inventory procedures may have to change, quality standards will have to be established, advertising must be informed of the new features, and parts suppliers must be selected.

Establishing Priorities

In virtually every situation, managers discover that certain issues or items are more important than others. Recognizing this enables the managers to direct their efforts to where they will do the most good.

Typically, a relatively few issues or items are very important, so that dealing with those factors will generally have a disproportionately large impact on the results achieved. This well-known effect is referred to as the Pareto phenomenon. This is one of the most important and pervasive concepts in operations management. In fact, this concept can be applied at all levels of management and to every aspect of decision making, both professional and personal.
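A simple way to see the Pareto phenomenon is to rank items by frequency and track their cumulative share. The defect counts below are made up for illustration; the point is the pattern of a few categories dominating the total.

```python
# Hypothetical defect counts by category (illustrative data only).
defects = {"scratches": 120, "misalignment": 45, "cracks": 18,
           "discoloration": 10, "other": 7}

total = sum(defects.values())
cumulative = 0
# Rank categories from most to least frequent and report cumulative share.
for category, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:14s} {count:4d}   cumulative share: {100 * cumulative / total:5.1f}%")
```

In this made-up data, the top two of five categories account for over 80 percent of all defects, which is where improvement effort would do the most good.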



Systems for production have existed since ancient times. For example, the construction of pyramids and Roman aqueducts involved operations management skills. The production of goods for sale, at least in the modern sense, and the modern factory system had their roots in the Industrial Revolution.

The Industrial Revolution

The Industrial Revolution began in the 1770s in England and spread to the rest of Europe and to the United States during the 19th century. Prior to that time, goods were produced in small shops by craftsmen and their apprentices. Under that system, it was common for one person to be responsible for making a product, such as a horse-drawn wagon or a piece of furniture, from start to finish. Only simple tools were available; the machines in use today had not been invented.

Then, a number of innovations in the 18th century changed the face of production forever by substituting machine power for human power. Perhaps the most significant of these was the steam engine, because it provided a source of power to operate machines in factories. Ample supplies of coal and iron ore provided materials for generating power and making machinery. The new machines, made of iron, were much stronger and more durable than the simple wooden machines they replaced.

In the earliest days of manufacturing, goods were produced using craft production: Highly skilled workers using simple, flexible tools produced goods according to customer specifications.

Craft production had major shortcomings. Because products were made by skilled craftsmen who custom-fitted parts, production was slow and costly. And when parts failed, the replacements also had to be custom made, which was also slow and costly. Another shortcoming was that production costs did not decrease as volume increased; there were no
economies of scale, which would have provided a major incentive for companies to expand. Instead, many small companies emerged, each with its own set of standards.

A major change occurred that gave the Industrial Revolution a boost: the development of standard gauging systems. This greatly reduced the need for custom-made goods. Factories began to spring up and grow rapidly, providing jobs for countless people who were attracted in large numbers from rural areas.

Despite the major changes that were taking place, management theory and practice had not progressed much from early days. What was needed was an enlightened and more systematic approach to management.

Scientific Management

The scientific management era brought widespread changes to the management of factories. The movement was spearheaded by the efficiency engineer and inventor Frederick Winslow Taylor, who is often referred to as the father of scientific management. Taylor believed in a “science of management” based on observation, measurement, analysis and improvement of work methods, and economic incentives. He studied work methods in great detail to identify the best method for doing each job. Taylor also believed that management should be responsible for planning, carefully selecting and training workers, finding the best way to perform each job, achieving cooperation between management and workers, and separating management activities from work activities.

Taylor’s methods emphasized maximizing output. They were not always popular with workers, who sometimes thought the methods were used to unfairly increase output without a corresponding increase in compensation. Certainly, some companies did abuse workers in their quest for efficiency. Eventually, the public outcry reached the halls of Congress, and hearings were held on the matter. Taylor himself was called to testify in 1911, the same year in which his classic book,
The Principles of Scientific Management, was published. The publicity from those hearings actually helped scientific management principles to achieve wide acceptance in industry.


A number of other pioneers also contributed heavily to this movement, including the following:

Frank Gilbreth was an industrial engineer who is often referred to as the father of motion study. He developed principles of motion economy that could be applied to incredibly small portions of a task.

Henry Gantt recognized the value of nonmonetary rewards to motivate workers, and developed a widely used system for scheduling, called Gantt charts.

Harrington Emerson applied Taylor’s ideas to organization structure and encouraged the use of experts to improve organizational efficiency. He testified in a congressional hearing that railroads could save a million dollars a day by applying principles of scientific management.

Henry Ford, the great industrialist, employed scientific management techniques in his factories.

During the early part of the 20th century, automobiles were just coming into vogue in the United States. Ford’s Model T was such a success that the company had trouble keeping up with orders for the cars. In an effort to improve the efficiency of operations, Ford adopted the scientific management principles espoused by Frederick Winslow Taylor. He also introduced the
moving assembly line, which had a tremendous impact on production methods in many industries.

Among Ford’s many contributions was the introduction of mass production to the automotive industry, a system of production in which large volumes of standardized goods are produced by low-skilled or semiskilled workers using highly specialized, and often costly, equipment. Ford was able to do this by taking advantage of a number of important concepts. Perhaps the key concept that launched mass production was interchangeable parts, sometimes attributed to Eli Whitney, an American inventor who applied the concept to assembling muskets in the late 1700s. The basis for interchangeable parts was to standardize parts so that any part in a batch of parts would fit any automobile coming down the assembly line. This meant that parts did not have to be custom fitted, as they were in craft production. The standardized parts could also be used for replacement parts. The result was a tremendous decrease in assembly time and cost. Ford accomplished this by standardizing the gauges used to measure parts during production and by using newly developed processes to produce uniform parts.


A second concept used by Ford was the division of labor, which Adam Smith wrote about in The Wealth of Nations (1776). Division of labor means that an operation, such as assembling an automobile, is divided into a series of many small tasks, and individual workers are assigned to one of those tasks. Unlike craft production, where each worker was responsible for doing many tasks and thus required skill, with division of labor the tasks were so narrow that virtually no skill was required.

Together, these concepts enabled Ford to tremendously increase the production rate at his factories using readily available inexpensive labor. Both Taylor and Ford were despised by many workers, because they held workers in such low regard, expecting them to perform like robots. This paved the way for the human relations movement.

The Human Relations Movement

Whereas the scientific management movement heavily emphasized the technical aspects of work design, the human relations movement emphasized the importance of the human element in job design. Lillian Gilbreth, a psychologist and the wife of Frank Gilbreth, worked with her husband, focusing on the human factor in work. (The Gilbreths were the subject of a classic film,
Cheaper by the Dozen.) Many of her studies dealt with worker fatigue. In the following decades, there was much emphasis on motivation. Elton Mayo conducted studies at the Hawthorne division of Western Electric. His studies revealed that in addition to the physical and technical aspects of work, worker motivation is critical for improving productivity. Abraham Maslow developed motivational theories, which Frederick Herzberg refined. Douglas McGregor added Theory X and Theory Y. These theories represented the two ends of the spectrum of how employees view work. Theory X, on the negative end, assumed that workers do not like to work, and have to be controlled—rewarded and punished—to get them to do good work. This attitude was quite common in the automobile industry and in some other industries, until the threat of global competition forced them to rethink that approach. Theory Y, on the other end of the spectrum, assumed that workers enjoy the physical and mental aspects of work and become committed to work. The Theory X approach resulted in an adversarial environment, whereas the Theory Y approach resulted in empowered workers and a more cooperative spirit. William Ouchi added Theory Z, which combined the Japanese approach with such features as lifetime employment, employee problem solving, and consensus building, and the traditional Western approach that features short-term employment, specialists, and individual decision making and responsibility.

Decision Models and Management Science

The factory movement was accompanied by the development of several quantitative techniques. F. W. Harris developed one of the first models in 1915: a mathematical model for inventory order size. In the 1930s, three coworkers at Bell Telephone Labs—H. F. Dodge, H. G. Romig, and W. Shewhart—developed statistical procedures for sampling and quality control. In 1935, L.H.C. Tippett conducted studies that provided the groundwork for statistical sampling theory.
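Harris’s 1915 order-size model survives today as the economic order quantity (EOQ) formula, Q* = sqrt(2DS/H), where D is annual demand, S is the cost of placing an order, and H is the annual holding cost per unit. The sketch below uses assumed figures purely for illustration.

```python
from math import sqrt

def eoq(annual_demand: float, order_cost: float, holding_cost: float) -> float:
    """Economic order quantity: Q* = sqrt(2DS / H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Illustrative inputs (assumed): D = 9,600 units/year, S = $75 per order,
# H = $16 per unit per year.
print(eoq(9_600, 75, 16))  # 300.0
```

At that order size, annual ordering cost and annual holding cost are balanced, which is the trade-off the model captures.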

At first, these quantitative models were not widely used in industry. However, the onset of World War II changed that. The war generated tremendous pressures on manufacturing output, and specialists from many disciplines combined efforts to achieve advancements in the military and in manufacturing. After the war, efforts to develop and refine quantitative tools for decision making continued, resulting in decision models for forecasting, inventory management, project management, and other areas of operations management.

During the 1960s and 1970s, management science techniques were highly regarded; in the 1980s, they lost some favor. However, the widespread use of personal computers and user-friendly software in the workplace contributed to a resurgence in the popularity of these techniques.

The Influence of Japanese Manufacturers

A number of Japanese manufacturers developed or refined management practices that increased the productivity of their operations and the quality of their products, due in part to the influence of Americans W. Edwards Deming and Joseph Juran. This made them very competitive, sparking interest in their approaches by companies outside Japan. Their approaches emphasized quality and continual improvement, worker teams and empowerment, and achieving customer satisfaction. The Japanese can be credited with spawning the “quality revolution” that occurred in industrialized countries, and with generating widespread interest in lean production.

The influence of the Japanese on U.S. manufacturing and service companies has been enormous and promises to continue for the foreseeable future. Because of that influence, this book will provide considerable information about Japanese methods and successes.

Table 1.5 provides a chronological summary of some of the key developments in the evolution of operations management.


Historical summary of operations management

Division of labor: Adam Smith
Interchangeable parts: Eli Whitney
Principles of scientific management: Frederick W. Taylor
Motion study, use of industrial psychology: Frank and Lillian Gilbreth
Chart for scheduling activities: Henry Gantt
Moving assembly line: Henry Ford
Mathematical model for inventory ordering: F. W. Harris
Hawthorne studies on worker motivation: Elton Mayo
Statistical procedures for sampling and quality control: H. F. Dodge, H. G. Romig, W. Shewhart, L.H.C. Tippett
Operations research applications in warfare: Operations research groups
Linear programming: George Dantzig
Commercial digital computers: Sperry Univac, IBM
Extensive development of quantitative tools
Industrial dynamics: Jay Forrester
Emphasis on manufacturing strategy: W. Skinner
Emphasis on flexibility, time-based competition, lean production: T. Ohno, S. Shingo, Toyota
Emphasis on quality: W. Edwards Deming, J. Juran, K. Ishikawa
Internet, supply chain management
Applications service providers and outsourcing
Social media, YouTube, and others

Advances in information technology and global competition have had a major influence on operations management. While the
internet offers great potential for business organizations, the potential, as well as the risks, must be clearly understood in order to determine if and how to exploit this potential. In many cases, the internet has altered the way companies compete in the marketplace.

Electronic business, or e-business, involves the use of the internet to transact business. E-business is changing the way business organizations interact with their customers and their suppliers. Most familiar to the general public is e-commerce, consumer–business transactions such as buying online or requesting information. However, business-to-business transactions such as e-procurement represent an increasing share of e-business. E-business is receiving increased attention from business owners and managers in developing strategies, planning, and decision making.

The word technology has several definitions, depending on the context. Generally, technology refers to the application of scientific knowledge to the development and improvement of goods and services. It can involve knowledge, materials, methods, and equipment. The term high technology refers to the most advanced and developed machines and methods. Operations management is primarily concerned with three kinds of technology: product and service technology, process technology, and information technology (IT). All three can have a major impact on costs, productivity, and competitiveness.

Product and service technology
refers to the discovery and development of new products and services. This is done mainly by researchers and engineers, who use the scientific approach to develop new knowledge and translate that into commercial applications.

Process technology
refers to methods, procedures, and equipment used to produce goods and provide services. They include not only processes within an organization but also supply chain processes.

Information technology (IT)
refers to the science and use of computers and other electronic equipment to store, process, and send information. Information technology is heavily ingrained in today’s business operations. This includes electronic data processing, the use of bar codes to identify and track goods, obtaining point-of-sale information, data transmission, the internet, e-commerce, e-mail, and more.

Management of technology is high on the list of major trends, and it promises to remain so well into the future. For example, computers have had a tremendous impact on businesses in many ways, including new product and service features, process management, medical diagnosis, production planning and scheduling, data processing, and communication. Advances in materials, methods, and equipment also have had an impact on competition and productivity. Advances in information technology also have had a major impact on businesses. Obviously, there have been—and will continue to be—many benefits from technological advances. However, technological advance also places a burden on management. For example, management must keep abreast of changes and quickly assess both their benefits and risks. Predicting advances can be tricky at best, and new technologies often carry a high price tag and usually a high cost to operate or repair. And in the case of computer operating systems, as new systems are introduced, support for older versions is discontinued, making periodic upgrades necessary. Conflicting technologies can exist that make technological choices even more difficult. Technological innovations in both products and processes will continue to change the way businesses operate, and hence require continuing attention.

The General Agreement on Tariffs and Trade (GATT) of 1994 reduced tariffs and subsidies in many countries, expanding world trade. However, new tariffs in 2018 and 2019, some temporary, have had an impact on the strategies and operations of businesses large and small around the world. One effect is the importance business organizations are giving to management of their supply chains.

Globalization and the need for global supply chains have broadened the scope of supply chain management. However, tightened border security in certain instances and new tariffs have added challenges and uncertainties to managing supply chain operations. In some instances, organizations are reassessing their use of offshore outsourcing.

Competitive pressures and changing economic conditions have caused business organizations to put more emphasis on operations strategy, working with fewer resources, revenue management, process analysis and improvement, quality improvement, agility, and lean production.

During the latter part of the 1900s, many companies neglected to include operations strategy in their corporate strategy. Some of them paid dearly for that neglect. Now, more and more companies are recognizing the importance of operations strategy to the overall success of their business, as well as the necessity of relating it to their overall business strategy.

Working with fewer resources due to layoffs, corporate downsizing, and general cost cutting is forcing managers to make trade-off decisions on resource allocation, and to place increased emphasis on cost control and productivity improvement.

Revenue management is a method used by some companies to maximize the revenue they receive from fixed operating capacity by influencing demand through price manipulation. Also known as yield management, it has been successfully used in the travel and tourism industries by airlines, cruise lines, hotels, amusement parks, and rental car companies, and in other industries such as trucking and public utilities.
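To make the yield-management idea concrete, here is a minimal sketch, not drawn from the text: the demand curve and all numbers are hypothetical. With capacity fixed, the only lever is price, so the firm searches for the price that maximizes revenue.

```python
# Hypothetical revenue-management sketch: with capacity fixed, search for
# the price that maximizes revenue under an assumed linear demand curve.
# All figures are made up for illustration.

CAPACITY = 180  # fixed operating capacity (seats, rooms, etc.)

def demand(price):
    """Assumed linear demand: fewer buyers at higher prices."""
    return max(0, int(300 - 1.5 * price))

def revenue(price):
    # Units sold can never exceed the fixed capacity.
    return price * min(CAPACITY, demand(price))

best_price = max(range(1, 201), key=revenue)
print(best_price, revenue(best_price))
```

In practice, firms segment demand (advance-purchase versus walk-up fares, for example) rather than charging a single price, but the core idea is the same: use price to shape demand so that fixed capacity earns the most revenue.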

Process analysis and improvement includes cost and time reduction, productivity improvement, process yield improvement, and quality improvement and increased customer satisfaction. This is sometimes referred to as a Six Sigma process.
Given a boost by the “quality revolution” of the 1980s and 1990s,
quality is now ingrained in business. Some businesses use the term
total quality management (TQM) to describe their quality efforts. A quality focus emphasizes
customer satisfaction and often involves
teamwork. Process improvement can result in improved quality, cost reduction, and
time reduction. Time relates to costs and to competitive advantage, and businesses seek ways to reduce the time to bring new products and services to the marketplace to gain a competitive edge. If two companies can provide the same product at the same price and quality, but one can deliver it four weeks earlier than the other, the quicker company will invariably get the sale. Time reductions are being achieved in many companies now. Union Carbide was able to cut $400 million of fixed expenses, and Bell Atlantic was able to cut the time needed to hook up long-distance carriers from 15 days to less than 1, at a savings of $82 million.

Agility refers to the ability of an organization to respond quickly to demands or opportunities. It is a strategy that involves maintaining a flexible system that can quickly respond to changes in either the volume of demand or changes in product/service offerings. This is particularly important as organizations scramble to remain competitive, cope with increasingly shorter product life cycles, and strive to achieve shorter development times for new or improved products and services.

Lean production, a new approach to production, emerged in the 1990s. It incorporates a number of the recent trends listed here, with an emphasis on quality, flexibility, time reduction, and teamwork. This has led to a
flattening of the organizational structure, with fewer levels of management.

Lean systems
are so named because they use much less of certain resources than typical mass production systems use—space, inventory, and workers—to produce a comparable amount of output. Lean systems use a highly skilled workforce and flexible equipment. In effect, they incorporate advantages of both mass production (high volume, low unit cost) and craft production (variety and flexibility). Quality is also higher than in mass production. This approach has now spread to services, including health care, offices, and shipping and delivery.

The skilled workers in lean production systems are more involved in maintaining and improving the system than their mass production counterparts. They are taught to stop an operation if they discover a defect, and to work with other employees to find and correct the cause of the defect so that it won't recur. This results in an increasing level of quality over time and eliminates the need to inspect and rework at the end of the line.

Because lean production systems operate with lower amounts of inventory, additional emphasis is placed on anticipating problems before they arise and avoiding them through planning. Even so, problems can still occur, and quick resolution is important. Workers participate in both the planning and correction stages.

Compared to workers in traditional systems, much more is expected of workers in lean production systems. They must be able to function in teams, playing active roles in operating and improving the system. Individual creativity is much less important than team success. Responsibilities also are much greater, which can lead to pressure and anxiety not present in traditional systems. Moreover, a flatter organizational structure means career paths are not as steep in lean production organizations. Workers tend to become generalists rather than specialists, another contrast to more traditional organizations.


A number of issues are high priorities for many business organizations. Although not every business faces all of these issues, many do. Chief among them are the following.

Economic conditions. Trade disputes and tariffs have created uncertainties for decision makers.

Innovating. New or improved products or services are only two of the many possibilities that can provide value to an organization. Innovations can be made in processes, the use of the internet, or the supply chain that reduce costs, increase productivity, expand markets, or improve customer service.

Quality problems. The numerous operations failures mentioned at the beginning of the chapter underscore the need to improve the way operations are managed. That relates to product design and testing, oversight of suppliers, risk assessment, and timely response to potential problems.

Risk management. The need for managing risk is underscored by recent events that include financial crises, product recalls, accidents, natural and man-made disasters, and economic ups and downs. Managing risks starts with identifying risks, assessing vulnerability and potential damage (liability costs, reputation, demand), and taking steps to reduce or share risks.

Cyber-security. Guarding against intrusions from hackers whose goal is to steal the personal information of employees and customers is becoming increasingly necessary. Moreover, interconnected systems increase intrusion risks in the form of industrial espionage.

Competing in a global economy. Low labor costs in third-world countries have increased pressure to reduce labor costs. Companies must carefully weigh their options, which include outsourcing some or all of their operations to low-wage areas, reducing costs internally, changing designs, and working to improve productivity.

Three other key areas require more in-depth discussion: environmental concerns, ethical conduct, and managing the supply chain.

Environmental Concerns

Concern about global warming and pollution has had an increasing effect on how businesses operate.

Stricter environmental regulations, particularly in developed nations, are being imposed. Furthermore, business organizations are coming under increasing pressure to reduce their carbon footprint (the amount of carbon dioxide generated by their operations and their supply chains) and to generally operate sustainable processes.

Sustainability refers to service and production processes that use resources in ways that do not harm ecological systems that support both current and future human existence. Sustainability measures often go beyond traditional environmental and economic measures to include measures that incorporate social criteria in decision making.

All areas of business will be affected by this. Areas that will be most affected include product and service design, consumer education programs, disaster preparation and response, supply chain waste management, and outsourcing decisions. Note that outsourcing of goods production increases not only transportation costs, but also fuel consumption and carbon released into the atmosphere. Consequently, sustainability thinking may have implications for outsourcing decisions.

Because they all fall within the realm of operations, operations management is central to dealing with these issues. Sometimes referred to as “green initiatives,” the possibilities include reducing packaging, materials, water and energy use, and the environmental impact of the supply chain, including buying locally. Other possibilities include reconditioning used equipment (e.g., printers and copiers) for resale, and recycling.

The reading above suggests that even our choice of diet can affect the environment.

Ethical Conduct

The need for ethical conduct in business is becoming increasingly obvious, given numerous examples of questionable actions in recent history. In making decisions, managers must consider how their decisions will affect shareholders, management, employees, customers, the community at large, and the environment. Finding solutions that will be in the best interests of all of these stakeholders is not always easy, but it is a goal that all managers should strive to achieve. Furthermore, even managers with the best intentions will sometimes make mistakes. If mistakes do occur, managers should act responsibly to correct those mistakes as quickly as possible, and to address any negative consequences.

Many organizations have developed codes of ethics to guide employees' or members' conduct. Ethics is a standard of behavior that guides how one should act in various situations. The Markula Center for Applied Ethics at Santa Clara University identifies five principles for thinking ethically:

  • The
    Utilitarian Principle: The good done by an action or inaction should outweigh any harm it causes or might cause. An example is not allowing a person who has had too much to drink to drive.

  • The
    Rights Principle: Actions should respect and protect the moral rights of others. An example is not taking advantage of a vulnerable person.

  • The
    Fairness Principle: Equals should be held to, or evaluated by, the same standards. An example is equal pay for equal work.

  • The
    Common Good Principle: Actions should contribute to the common good of the community. An example is an ordinance on noise abatement.

  • The
    Virtue Principle: Actions should be consistent with certain ideal virtues. Examples include honesty, compassion, generosity, tolerance, fidelity, integrity, and self-control.

The center expands these principles to create a framework for ethical conduct. An ethical framework is a sequence of steps intended to guide thinking and subsequent decisions or actions. Here is the one developed by the Markula Center for Applied Ethics:

  1. Recognize an ethical issue by asking if an action could be damaging to a group or an individual. Is there more to it than just what is legal?

  2. Make sure the pertinent facts are known, such as who will be impacted, and what options are available.

  3. Evaluate the options by referring to the appropriate preceding ethical principle.

  4. Identify the “best” option and then further examine it by asking how someone you respect would view it.

  5. In retrospect, consider the effect your decision had and what you can learn from it.

More detail is available at the Center’s website:

Operations managers, like all managers, have the responsibility to make ethical decisions. Ethical issues arise in many aspects of operations management, including:

  • Financial statements: accurately representing the organization’s financial condition.

  • Worker safety: providing adequate training, maintaining equipment in good working condition, maintaining a safe working environment.

  • Product safety: providing products that minimize the risk of injury to users or damage to property or the environment.

  • Quality: honoring warranties, avoiding hidden defects.

  • The environment: not doing things that will harm the environment.

  • The community: being a good neighbor.

  • Hiring and firing workers: avoiding false pretenses (e.g., promising a long-term job when that is not what is intended).

  • Closing facilities: taking into account the impact on a community, and honoring commitments that have been made.

  • Workers’ rights: respecting workers’ rights, dealing with workers’ problems quickly and fairly.

The Ethisphere Institute recognizes companies worldwide for their ethical leadership. Here are some samples from their list:

Apparel: Gap

Automotive: Ford Motor Company

Business services: Paychex

Café: Starbucks

Computer hardware: Intel

Computer software: Adobe Systems, Microsoft

Consumer electronics: Texas Instruments, Xerox


E-commerce: eBay

General retail: Costco, Target

Groceries: Safeway, Wegmans, Whole Foods

Health and beauty: L’Oreal

Logistics: UPS

You can see a complete list of recent recipients and the selection criteria at

The Need to Manage the Supply Chain

Supply chain management is being given increasing attention as business organizations face mounting pressure to improve management of their supply chains. In the past, most organizations did little to manage their supply chains. Instead, they tended to concentrate on their own operations and on their immediate suppliers. Moreover, the planning, marketing, production and inventory management functions in organizations in supply chains have often operated independently of each other. As a result, supply chains experienced a range of problems that were seemingly beyond the control of individual organizations. The problems included large oscillations of inventories, inventory stockouts, late deliveries, and quality problems. These and other issues now make it clear that management of supply chains is essential to business success. The other issues include the following:

  1. The need to improve operations. Efforts on cost and time reduction, and productivity and quality improvement, have expanded in recent years to include the supply chain. Opportunity now lies largely with procurement, distribution, and logistics—the supply chain.

  2. Increasing levels of outsourcing. Organizations are increasing their levels of outsourcing, buying goods or services instead of producing or providing them themselves. As outsourcing increases, some organizations are spending increasing amounts on supply-related activities (wrapping, packaging, moving, loading and unloading, and sorting). A significant amount of the cost and time spent on these and other related activities may be unnecessary. Issues with imported products, including tainted food products, toothpaste, and pet foods, as well as unsafe tires and toys, have led to questions of liability and the need for companies to take responsibility for monitoring the safety of outsourced goods.

  3. Increasing transportation costs. Transportation costs are increasing, and they need to be more carefully managed.

  4. Competitive pressures. Competitive pressures have led to an increasing number of new products, shorter product development cycles, and increased demand for customization. And in some industries, most notably consumer electronics, product life cycles are relatively short. Added to this are the adoption of quick-response strategies and efforts to reduce lead times.

  5. Increasing globalization. Increasing globalization has expanded the physical length of supply chains. A global supply chain increases the challenges of managing a supply chain. Having far-flung customers and/or suppliers means longer lead times and greater opportunities for disruption of deliveries. Often, currency differences and monetary fluctuations are factors, as well as language and cultural differences. Also, tightened border security in some instances has slowed shipments of goods.

  6. Increasing importance of e-business. The increasing importance of e-business has added new dimensions to business buying and selling and has presented new challenges.

  7. The complexity of supply chains. Supply chains are complex; they are dynamic, and they have many inherent uncertainties that can adversely affect them, such as inaccurate forecasts, late deliveries, substandard quality, equipment breakdowns, and canceled or changed orders.

  8. The need to manage inventories. Inventories play a major role in the success or failure of a supply chain, so it is important to coordinate inventory levels throughout a supply chain. Shortages can severely disrupt the timely flow of work and have far-reaching impacts, while excess inventories add unnecessary costs. It would not be unusual to find inventory shortages in some parts of a supply chain and excess inventories in other parts of the same supply chain.

  9. The need to deal with trade wars. Trade wars can occur if a country objects to its trade imbalance with another country. This can result in tariffs and retaliatory tariffs, causing changes in cost structures. Uncertainty about how long and to what degree tariffs will be in place can greatly increase pressure on companies that have global supply chains.

Elements of Supply Chain Management

Supply chain management involves coordinating activities across the supply chain. Central to this is taking customer demand and translating it into corresponding activities at each level of the supply chain.

The key elements of supply chain management are listed in
Table 1.6. The first element, customers, is the driving element. Typically, marketing is responsible for determining what customers want, as well as forecasting the quantities and timing of customer demand. Product and service design must match customer wants with operations capabilities.


Table 1.6  Elements of supply chain management

Element             Typical Issues
Customers           Determining what products and/or services customers want
Forecasting         Predicting the quantity and timing of customer demand
Design              Incorporating customer wants, manufacturability, and time to market
Capacity planning   Matching supply and demand
Processing          Controlling quality, scheduling work
Inventory           Meeting demand requirements while managing the costs of holding inventory
Purchasing          Evaluating potential suppliers, supporting the needs of operations on purchased goods and services
Suppliers           Monitoring supplier quality, on-time delivery, and flexibility; maintaining supplier relations
Location            Determining the location of facilities
Logistics           Deciding how to best move information and materials


Processing occurs in each component of the supply chain; it is the core of each organization. The major portion of processing occurs in the organization that produces the product or service for the final customer (the organization that assembles the computer, services the car, etc.). A major aspect of this for both the internal and external portions of a supply chain is scheduling.

Inventory is a staple in most supply chains. Balance is the main objective; too little causes delays and disrupts schedules, but too much adds unnecessary costs and limits flexibility.
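One classic way to quantify this balance is the economic order quantity (EOQ) model attributed to F. W. Harris, mentioned in the historical summary earlier. The sketch below uses made-up cost figures purely for illustration:

```python
# Economic order quantity (EOQ): the order size that balances annual
# ordering cost against annual holding cost. Inputs are hypothetical.
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Q* = sqrt(2DS/H), the classic EOQ formula."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

q = eoq(annual_demand=12_000, order_cost=50, holding_cost_per_unit=3)
print(round(q))  # order size in units
```

Ordering in larger lots spreads the fixed ordering cost over more units but raises average inventory; the EOQ is the point where the two costs trade off evenly.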


Purchasing is the link between an organization and its suppliers. It is responsible for obtaining goods and/or services that will be used to produce products or provide services for the organization’s customers. Purchasing selects suppliers, negotiates contracts, establishes alliances, and acts as a liaison between suppliers and various internal departments.

The supply portion of a value chain is made up of one or more suppliers, all of them links in the chain, and each one capable of having an impact on the effectiveness—or the ineffectiveness—of the supply chain. Moreover, it is essential that planning and execution be carefully coordinated between suppliers and all members of the demand portion of their chains.

Location can be a factor in a number of ways. Where suppliers are located can be important, as can the location of processing facilities. Nearness to market, nearness to sources of supply, or nearness to both may be critical. Also, delivery time and cost are usually affected by location.

Two types of decisions are relevant to supply chain management—strategic and operational. The strategic decisions are the design and policy decisions. The operational decisions relate to day-to-day activities: managing the flow of material and product and other aspects of the supply chain in accordance with strategic decisions.

The major decision areas in supply chain management are location, production, distribution, and inventory. The
location decision relates to the choice of locations for both production and distribution facilities. Production and transportation costs and delivery lead times are important.
Production and
distribution decisions focus on what customers want, when they want it, and how much is needed. Outsourcing can be a consideration. Distribution decisions are strongly influenced by transportation cost and delivery times, because transportation costs often represent a significant portion of total cost. Moreover, shipping alternatives are closely tied to production and inventory decisions. For example, using air transport means higher costs but faster deliveries and less inventory in transit than sea, rail, or trucking options. Distribution decisions must also take into account capacity and quality issues. Operational decisions focus on scheduling, maintaining equipment, and meeting customer demand. Quality control and workload balancing are also important considerations.
Inventory decisions relate to determining inventory needs and coordinating production and stocking decisions throughout the supply chain. Logistics management plays the key role in inventory decisions.

Enterprise Resource Planning (ERP) is being increasingly used to provide information sharing in real time among organizations and their major supply chain partners. This important topic is discussed in more detail in
Chapter 13.

Operations Tours

Throughout the book you will discover operations tours that describe operations in all sorts of companies. The tour below is of Wegmans Food Markets, a major regional supermarket chain. Wegmans has been consistently ranked high on
Fortune magazine’s list of the 100 Best Companies to Work For since the inception of the survey a decade ago.



The name of the game is competition. The playing field is global. Those who understand how to play the game will succeed; those who don’t are doomed to failure. And don’t think the game is just companies competing with each other. In companies that have multiple factories or divisions producing the same good or service, factories or divisions sometimes find themselves competing with each other. When a competitor—another company or a sister factory or division in the same company—can turn out products better, cheaper, and faster, that spells real trouble for the factory or division that is performing at a lower level. The trouble can be layoffs or even a shutdown if the managers can’t turn things around. The bottom line? Better quality, higher productivity, lower costs, and the ability to quickly respond to customer needs are more important than ever, and the bar is getting higher. Business organizations need to develop solid strategies for dealing with these issues.



In this chapter, you will learn about the different ways companies compete and why some firms do a very good job of competing. You will learn how effective strategies can lead to competitive organizations, as well as what productivity is, why it is important, and what organizations can do to improve it.


Companies must be competitive to sell their goods and services in the marketplace. Competitiveness is an important factor in determining whether a company prospers, barely gets by, or fails. Business organizations compete through some combination of price, delivery time, and product or service differentiation.

Marketing influences competitiveness in several ways, including identifying consumer wants and needs, pricing, and advertising and promotion.

  1. Identifying consumer wants and/or needs is a basic input in an organization’s decision-making process, and central to competitiveness. The ideal is to achieve a perfect match between those wants and needs and the organization’s goods and/or services.

  2. Price and quality are key factors in consumer buying decisions. It is important to understand the trade-off decision consumers make between price and quality.

  3. Advertising and promotion are ways organizations can inform potential customers about features of their products or services, and attract buyers.

Operations has a major influence on competitiveness through product and service design, cost, location, quality, response time, flexibility, inventory and supply chain management, and service. Many of these are interrelated.

  1. Product and service design should reflect joint efforts of many areas of the firm to achieve a match between financial resources, operations capabilities, supply chain capabilities, and consumer wants and needs. Special characteristics or features of a product or service can be a key factor in consumer buying decisions. Other key factors include
    innovation and the
    time-to-market for new products and services.

  2. Cost of an organization’s output is a key variable that affects pricing decisions and profits. Cost-reduction efforts are generally ongoing in business organizations.
    Productivity (discussed later in the chapter) is an important determinant of cost. Organizations with higher productivity rates than their competitors have a competitive cost advantage. A company may outsource a portion of its operation to achieve lower costs, higher productivity, or better quality.

  3. Location can be important in terms of cost and convenience for customers. Location near inputs can result in lower input costs. Location near markets can result in lower transportation costs and quicker delivery times. Convenient location is particularly important in the retail sector.

  4. Quality refers to materials, workmanship, design, and service. Consumers judge quality in terms of how well they think a product or service will satisfy its intended purpose. Customers are generally willing to pay more for a product or service if they perceive the product or service has a higher quality than that of a competitor.

  5. Quick response can be a competitive advantage. One way is quickly bringing new or improved products or services to the market. Another is being able to quickly deliver existing products and services to a customer after they are ordered, and still another is quickly handling customer complaints.

  6. Flexibility is the ability to respond to changes. Changes might relate to alterations in design features of a product or service, or to the volume demanded by customers, or the mix of products or services offered by an organization. High flexibility can be a competitive advantage in a changeable environment.

  7. Inventory management can be a competitive advantage by effectively matching supplies of goods with demand.

  8. Supply chain management involves coordinating internal and external operations (buyers and suppliers) to achieve timely and cost-effective delivery of goods throughout the system.

  9. Service might involve after-sale activities customers perceive as value-added, such as delivery, setup, warranty work, and technical support. Or it might involve extra attention while work is in progress, such as courtesy, keeping the customer informed, and attention to details. Service quality can be a key differentiator; and it is one that is often sustainable. Moreover, businesses rated highly by their customers for service quality tend to be more profitable, and grow faster, than businesses that are not rated highly.

  10. Managers and workers are the people at the heart and soul of an organization, and if they are competent and motivated, they can provide a distinct competitive edge via their skills and the ideas they create. One often overlooked skill is answering the telephone. How complaint calls or requests for information are handled can be a positive or a negative. If a person answering is rude or not helpful, that can produce a negative image. Conversely, if calls are handled promptly and cheerfully, that can produce a positive image and, potentially, a competitive advantage.
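Item 2 in the list above notes that productivity is an important determinant of cost. As a quick illustration, with entirely made-up figures, a multifactor productivity ratio divides the value of output by the combined value of several inputs:

```python
# Multifactor productivity sketch: output value per dollar of combined
# labor, material, and overhead input. All figures are hypothetical.

def multifactor_productivity(output_value, labor, material, overhead):
    return output_value / (labor + material + overhead)

mp = multifactor_productivity(output_value=10_500, labor=3_000,
                              material=2_500, overhead=1_500)
print(round(mp, 2))  # output value per dollar of input
```

A firm whose ratio rises over time is getting more output from the same inputs, which translates into the competitive cost advantage described above.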

Why Some Organizations Fail

Organizations fail, or perform poorly, for a variety of reasons. Being aware of those reasons can help managers avoid making similar mistakes. Among the chief reasons are the following:

  1. Neglecting operations strategy.

  2. Failing to take advantage of strengths and opportunities, and/or failing to recognize competitive threats.

  3. Putting too much emphasis on short-term financial performance at the expense of research and development.

  4. Placing too much emphasis on product and service design and not enough on process design and improvement.


  5. Neglecting investments in capital and human resources.

  6. Failing to establish good internal communications and cooperation among different functional areas.

  7. Failing to consider customer wants and needs.

The key to successfully competing is to determine what customers want and then direct efforts toward meeting (or even exceeding) customer expectations. Two basic issues must be addressed. First: What do the customers want? (Which items on the preceding list of the ways business organizations compete are important to customers?) Second: What is the best way to satisfy those wants?

Operations must work with marketing to obtain information on the relative importance of the various items to each major customer or target market.

Understanding competitive issues can help managers develop successful strategies.


An organization's mission is the reason for its existence. It is expressed in its mission statement. For a business organization, the mission statement should answer the question "What business are we in?" Missions vary from organization to organization, depending on the nature of their business.
Table 2.1 provides several examples of mission statements.


Selected portions of company mission statements


Microsoft: To help people and businesses throughout the world to realize their full potential.


To help people and businesses communicate with each other.


Starbucks: To inspire and nurture the human spirit—one cup and one neighborhood at a time.

U.S. Dept. of Education: To promote student achievement and preparation for global competitiveness by fostering educational excellence and ensuring equal access.

A mission statement serves as the basis for organizational goals, which provide more detail and describe the scope of the mission. The mission and goals often relate to how an organization wants to be perceived by the general public, and by its employees, suppliers, and customers. Goals serve as a foundation for the development of organizational strategies. These, in turn, provide the basis for strategies and tactics of the functional units of the organization.

Organizational strategy is important because it guides the organization by providing direction for, and alignment of, the goals and strategies of the functional units. Moreover, strategies can be the main reason for the success or failure of an organization.

There are three basic business strategies:

  • Low cost

  • Responsiveness

  • Differentiation from competitors

Responsiveness relates to the ability to respond to changing demands. Differentiation can relate to product or service features, quality, reputation, or customer service. Some organizations focus on a single strategy, while others employ a combination of strategies. Amazon is one company that has multiple strategies: not only does it offer low cost and quick, reliable deliveries, it also excels in customer service.


Strategies and Tactics

If you think of goals as destinations, then strategies are the roadmaps for reaching those destinations. Strategies provide
focus for decision making. Generally speaking, organizations have overall strategies called
organizational strategies, which relate to the entire organization. They also have
functional strategies, which relate to each of the functional areas of the organization. The functional strategies should support the overall strategies of the organization, just as the organizational strategies should support the goals and mission of the organization.

Tactics are the methods and actions used to accomplish strategies. They are more specific than strategies, and they provide guidance and direction for carrying out actual
operations, which need the most specific and detailed plans and decision making in an organization. You might think of tactics as the “how to” part of the process (e.g., how to reach the destination, following the strategy roadmap), and operations as the actual “doing” part of the process. Much of this book deals with tactical operations.

It should be apparent that the overall relationship that exists from the mission down to actual operations is
hierarchical. This is illustrated in
Figure 2.1.

A simple example may help to put this hierarchy into perspective.


Here are some examples of different strategies an organization might choose from:

Low cost. Outsource operations to third-world countries that have low labor costs.

Scale-based strategies. Use capital-intensive methods to achieve high output volume and low unit costs.

Specialization. Focus on narrow product lines or limited service to achieve higher quality.

Newness. Focus on innovation to create new products or services.

Flexible operations. Focus on quick response and/or customization.

High quality. Focus on achieving higher quality than competitors.

Service. Focus on various aspects of service (e.g., helpful, courteous, reliable, etc.).

Sustainability. Focus on environmentally friendly and energy-efficient operations.

A wide range of business organizations are beginning to recognize the strategic advantages of sustainability, not only in economic terms, but also through promotional benefits by publicizing their sustainability efforts and achievements.

Sometimes, organizations will combine two or more of these, or other approaches, into their strategy. However, unless they are careful, they risk losing focus and not achieving advantage in any category. Generally speaking, strategy formulation takes into account the way organizations compete and a particular organization’s assessment of its own strengths and weaknesses in order to take advantage of its

core competencies
—those special attributes or abilities possessed by an organization that give it a
competitive edge.

The most effective organizations use an approach that develops core competencies based on customer needs, as well as on what the competition is doing. Marketing and operations work closely to match customer needs with operations capabilities. Competitor competencies are important for several reasons. For example, if a competitor is able to supply high-quality products, it may be necessary to meet that high quality as a baseline. However, merely
matching a competitor is usually not sufficient to gain market share. It may be necessary to exceed the quality level of the competitor or gain an edge by excelling in one or more other dimensions, such as rapid delivery or service after the sale. Walmart, for example, has been very successful in managing its supply chain, which has contributed to its competitive advantage.


To be effective, strategies and core competencies need to be aligned.
Table 2.2 lists examples of strategies and companies that have successfully employed those strategies.


Examples of operations strategies

Low price
  Operations strategy: Low cost
  Examples: U.S. first-class postage; Southwest Airlines

Responsiveness
  Operations strategies: Short processing time; On-time delivery
  Examples: McDonald's restaurants; Express Mail, UPS, FedEx; Uber, Lyft, Grubhub; Domino's Pizza

Differentiation: High quality
  Operations strategies: High-performance design and/or high-quality processing; Consistent quality
  Examples: TV: Sony, Samsung, LG; Five-star restaurants or hotels; Coca-Cola, PepsiCo; Electrical power

Differentiation: Newness
  Operations strategy: Innovation
  Examples: 3M, Apple

Differentiation: Variety
  Operations strategy: Flexibility
  Examples: Burger King ("Have it your way"); Hospital emergency room; McDonald's ("Buses welcome"); Supermarkets (additional checkouts)

Differentiation: Service
  Operations strategy: Superior customer service
  Examples: Nordstrom, Von Maur

Differentiation: Location
  Operations strategy: Convenient location
  Examples: Supermarkets, dry cleaners; Mall stores; Service stations; Banks, ATMs

Strategy Formulation

Strategy formulation is almost always critical to the success of a strategy. Walmart discovered this when it opened stores in Japan. Although Walmart thrived in many countries on its reputation for low-cost items, Japanese consumers associated low cost with low quality, causing Walmart to rethink its strategy in the Japanese market. And many felt that Hewlett-Packard (HP) committed a strategic error when it acquired Compaq Computers at a cost of $19 billion. HP's share of the computer market was less after the merger than the sum of the shares of the separate companies before the merger. In another example, U.S. automakers adopted a strategy in the early 2000s of offering discounts and rebates on a range of cars and SUVs, many of them low-margin vehicles. The strategy put a strain on profits, but customers began to expect those incentives, and the companies maintained them to keep from losing additional market share.

On the other hand, Coach, the maker of leather handbags and purses, successfully changed its longtime strategy to grow its market by creating new products. Long known for its highly durable leather goods in a market where women typically owned few handbags, Coach created a new market for itself by changing women’s view of handbags by promoting “different handbags for different occasions” such as party bags, totes, clutches, wristlets, overnight bags, purses, and day bags. And Coach introduced many fashion styles and colors.

To formulate an effective strategy, senior managers must take into account the core competencies of the organization, and they must
scan the environment. They must determine
what competitors are doing, or planning to do, and take that into account. They must critically examine other factors that could have either positive or negative effects. This is sometimes referred to as the

SWOT analysis (strengths, weaknesses, opportunities, and threats). Strengths and weaknesses have an internal focus and are typically evaluated by operations people. Threats and opportunities have an external focus and are typically evaluated by marketing people. SWOT is often regarded as the link between organizational strategy and operations strategy.

An alternative to SWOT analysis is Michael Porter’s five forces model,

which takes into account the threat of new competition, the threat of substitute products or services, the bargaining power of customers, the bargaining power of suppliers, and the intensity of competition.

In formulating a successful strategy, organizations must take into account both order qualifiers and order winners.

Order qualifiers are those characteristics that potential customers perceive as minimum standards of acceptability for a product to be considered for purchase. However, meeting them may not be sufficient to get a potential customer to purchase from the organization.

Order winners are those characteristics of an organization's goods or services that cause them to be perceived as better than the competition.

Characteristics such as price, delivery reliability, delivery speed, and quality can be order qualifiers or order winners. Thus, quality may be an order winner in some situations, but in others only an order qualifier. Over time, a characteristic that was once an order winner may become an order qualifier.

Obviously, it is important to determine the set of order qualifier characteristics and the set of order winner characteristics. It is also necessary to decide on the relative importance of each characteristic so that appropriate attention can be given to the various characteristics. Marketing must make that determination and communicate it to operations.
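The qualifier-then-winner logic described above can be sketched in a short program: customers first screen out offerings that fail any order qualifier, then choose among the survivors by scoring the order winners. This is an illustrative sketch only; the characteristic names, minimum levels, and weights below are invented, not taken from the text.

```python
# Hypothetical sketch of order qualifiers vs. order winners.
# Step 1: screen out offerings that fail any qualifier (minimum standard).
# Step 2: rank the survivors by a weighted score of the order winners.
# All characteristic names, minimums, and weights are invented for illustration.

def meets_qualifiers(offering, qualifiers):
    """True only if every qualifier's minimum acceptable level is met."""
    return all(offering[name] >= minimum for name, minimum in qualifiers.items())

def winner_score(offering, weights):
    """Weighted sum of the order-winner characteristics."""
    return sum(offering[name] * w for name, w in weights.items())

offerings = {
    "A": {"quality": 7, "delivery_reliability": 9, "service": 8},
    "B": {"quality": 9, "delivery_reliability": 8, "service": 6},
    "C": {"quality": 5, "delivery_reliability": 9, "service": 9},
}
qualifiers = {"quality": 6}  # here quality acts only as a qualifier
winner_weights = {"delivery_reliability": 0.6, "service": 0.4}

# Offering C fails the quality qualifier, so its strong service never matters.
candidates = {name: o for name, o in offerings.items()
              if meets_qualifiers(o, qualifiers)}
best = max(candidates, key=lambda name: winner_score(candidates[name], winner_weights))
```

Note how the structure mirrors the text: a characteristic such as quality can act purely as a qualifier in one market (as here) and as an order winner in another, simply by moving it from the minimums to the weights.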

Environmental scanning is the monitoring of events and trends that present either threats or opportunities for the organization. Generally, these include competitors' activities; changing consumer needs; legal, economic, political, and environmental issues; the potential for new markets; and the like.

Another key factor to consider when developing strategies is technological change, which can present real opportunities and threats to an organization. Technological changes occur in products (high-definition TV, improved computer chips, improved cellular telephone systems, and improved designs for earthquake-proof structures); in services (faster order processing, faster delivery); and in processes (robotics, automation, computer-assisted processing, point-of-sale scanners, and flexible manufacturing systems). The obvious benefit is a competitive edge; the risk is that incorrect choices, poor execution, and higher-than-expected operating costs will create a competitive disadvantage.

Important factors may be internal or external. The following are key external factors:

  1. Economic conditions. These include the general health and direction of the economy, inflation and deflation, interest rates, tax laws, and tariffs.

  2. Political conditions. These include favorable or unfavorable attitudes toward business, political stability or instability, and wars.

  3. Legal environment. This includes antitrust laws, government regulations, trade restrictions, minimum wage laws, product liability laws and recent court experience, labor laws, and patents.

  4. Technology. This can include the rate at which product innovations are occurring, current and future process technology (equipment, materials handling), and design technology.

  5. Competition. This includes the number and strength of competitors, the basis of competition (price, quality, special features), and the ease of market entry.

  6. Customers. Loyalty, existing relationships, and understanding of wants and needs are important.


  7. Suppliers. Supplier relationships, dependability of suppliers, quality, flexibility, and service are typical considerations.

  8. Markets. This includes size, location, brand loyalties, ease of entry, potential for growth, long-term stability, and demographics.

The organization also must take into account various
internal factors that relate to possible strengths or weaknesses. Among the key internal factors are the following:

  1. Human resources. These include the skills and abilities of managers and workers, special talents (creativity, designing, problem solving), loyalty to the organization, expertise, dedication, and experience.

  2. Facilities and equipment. Capacities, location, age, and cost to maintain or replace can have a significant impact on operations.

  3. Financial resources. Cash flow, access to additional funding, existing debt burden, and cost of capital are important considerations.

  4. Products and services. These include existing products and services, and the potential for new products and services.

  5. Technology. This includes existing technology, the ability to integrate new technology, and the probable impact of technology on current and future operations.

  6. Other. Other factors include patents, labor relations, company or product image, distribution channels, relationships with distributors, maintenance of facilities and equipment, access to resources, and access to markets.

After assessing internal and external factors and an organization’s distinctive competence, a strategy or strategies must be formulated that will give the organization the best chance of success. Among the types of questions that may need to be addressed are the following:

What role, if any, will the internet play?

Will the organization have a global presence?

To what extent will outsourcing be used?

What will the supply chain management strategy be?

To what extent will new products or services be introduced?

What rate of growth is desirable and sustainable?
What emphasis, if any, should be placed on lean production?

How will the organization differentiate its products and/or services from competitors’?

The organization may decide to have a single, dominant strategy (e.g., be the price leader) or have multiple strategies. A single strategy would allow the organization to concentrate on one particular strength or market condition. On the other hand, multiple strategies may be needed to address a particular set of conditions.

Many companies are increasing their use of outsourcing to reduce overhead, gain flexibility, and take advantage of suppliers’ expertise. Amazon provides a great example of some of the potential benefits of outsourcing as part of a business strategy.

Growth is often a component of strategy, especially for new companies. A key aspect of this strategy is the need to seek a growth rate that is sustainable. In the 1990s, fast-food company Boston Market dazzled investors and fast-food consumers alike. Fueled by its success, it undertook rapid expansion. By the end of the decade, the company was nearly bankrupt; it had overexpanded. In 2000, it was absorbed by fast-food giant McDonald’s.

Companies increase their risk of failure not only by missing or incomplete strategies; they also fail due to poor execution of strategies. And sometimes they fail due to factors beyond their control, such as natural or man-made disasters, major political or economic changes, or competitors that have an overwhelming advantage (e.g., deep pockets, very low labor costs, less rigorous environmental requirements).

A useful resource on successful business strategies is the Profit Impact of Market Strategy (PIMS) database. The database contains profiles of over 3,000 businesses located primarily in the United States, Canada, and western Europe. It is used by companies and academic institutions to guide strategic thinking. It allows subscribers to answer strategy questions about their business. Moreover, they can use it to generate benchmarks and develop successful strategies.


According to the PIMS website, the PIMS database is a collection of statistically documented experiences drawn from thousands of businesses, designed to help understand what kinds of strategies (e.g., quality, pricing, vertical integration, innovation, advertising) work best in what kinds of business environments. The data constitute a key resource for such critical management tasks as evaluating business performance, analyzing new business opportunities, evaluating and reality testing new strategies, and screening business portfolios.
The primary role of the PIMS Program of the Strategic Planning Institute is to help managers understand and react to their business environment. PIMS does this by assisting managers as they develop and test strategies that will achieve an acceptable level of winning as defined by various strategic and financial measures.


Supply Chain Strategy

A supply chain strategy specifies how the supply chain should function to achieve supply chain goals. The supply chain strategy should be aligned with the business strategy. If it is well executed, it can create value for the organization. It establishes how the organization should work with suppliers and policies relating to customer relationships and sustainability. Supply chain strategy is covered in more detail in a later chapter.

Sustainability Strategy

Society is placing increasing emphasis on corporate sustainability practices in the form of governmental regulations and interest groups. For these and other reasons, business organizations are or should be devoting attention to sustainability goals. To be successful, they will need a sustainability strategy. That requires elevating sustainability to the level of organizational
governance; formulating goals for products and services, for processes, and for the entire supply chain; measuring achievements and striving for improvements; and possibly linking executive compensation to the achievement of sustainability goals.

Global Strategy

Global strategies have two different aspects. One relates to where parts or products are made, or where services such as customer support are performed. The other relates to where products or services are sold. With wages and standards of living increasing in countries such as China and India, new market opportunities present themselves, requiring well-thought-out strategies to take advantage of those potential opportunities while minimizing any associated risks.

As globalization increased, many companies realized that strategic decisions with respect to globalization had to be made. One issue companies face today is that what works in one country or region does not necessarily work in another, and strategies must be carefully crafted to take these variabilities into account. Another issue is the threat of political or social upheaval. Still another issue is the difficulty of coordinating and managing far-flung operations. Indeed, “In today’s global markets, you don’t have to go abroad to experience international competition. Sooner or later the world comes to you.”



The organization strategy provides the overall direction for the organization. It is broad in scope, covering the entire organization.

Operations strategy is narrower in scope, dealing primarily with the operations aspect of the organization. Operations strategy relates to products, processes, methods, operating resources, quality, costs, lead times, and scheduling.
Table 2.3 provides a comparison of an organization’s mission, its overall strategy, and its operations strategy, tactics, and operations.


Comparison of mission, organization strategy, and operations strategy

In order for operations strategy to be truly effective, it is important to link it to organization strategy; that is, the two should not be formulated independently. Rather, formulation of organization strategy should take into account the realities of operations’ strengths and weaknesses,
capitalizing on strengths and dealing with weaknesses. Similarly, operations strategy must be consistent with the overall strategy of the organization, and with the other functional units of the organization. This requires that senior managers work with functional units to formulate strategies that will support, rather than conflict with, each other and the overall strategy of the organization. As obvious as this may seem, it doesn't always happen in practice. Instead, we may find power struggles between various functional units. These struggles are detrimental to the organization because they pit functional units against each other rather than focusing their energy on making the organization more competitive and better able to serve the customer. Some of the latest approaches in organizations, involving teams of managers and workers, may reflect a growing awareness of the synergistic effects of working together rather than competing internally.

In the 1970s and early 1980s, operations strategy in the United States was often neglected in favor of marketing and financial strategies. That may have occurred because many chief executive officers did not come from operations backgrounds and perhaps did not fully appreciate the importance of the operations function. Mergers and acquisitions were common; leveraged buyouts were used, and conglomerates were formed that joined dissimilar operations. These did little to add value to the organization; they were purely financial in nature. Decisions were often made by individuals who were unfamiliar with the business, frequently to the detriment of that business. Meanwhile, foreign competitors began to fill the resulting vacuum with a careful focus on operations strategy.

In the late 1980s and early 1990s, many companies began to realize this approach was not working. They recognized that they were less competitive than other companies. This caused them to focus attention on operations strategy. A key element of both organization strategy and operations strategy is strategy formulation.

Operations strategy can have a major influence on the competitiveness of an organization. If it is well designed and well executed, there is a good chance the organization will be successful; if it is not well designed or executed, it is far less likely that the organization will be successful.

Strategic Operations Management Decision Areas

Operations management people play a key role in many strategic decisions in a business organization.
Table 2.4 highlights some key decision areas. Notice that most of the decision areas have cost implications.


Strategic operations management decisions

Decision Area: What the Decisions Affect

  1. Product and service design: Costs, quality, liability, and environmental issues

  2. Capacity: Cost structure, flexibility

  3. Process selection and layout: Costs, flexibility, skill level needed, capacity

  4. Work design: Quality of work life, employee safety, productivity

  5. Location: Costs, visibility

  6. Quality: Ability to meet or exceed customer expectations

  7. Inventory: Costs, shortages

  8. Maintenance: Costs, equipment reliability, productivity

  9. Scheduling: Flexibility, efficiency

  10. Supply chains: Costs, quality, agility, shortages, vendor relations

  11. Projects: Costs, new products, services, or operating systems

Two factors that tend to have universal strategic operations importance relate to quality and time. The following section discusses quality and time strategies.

Quality and Time Strategies

Traditional strategies of business organizations have tended to emphasize cost minimization or product differentiation. While not abandoning those strategies, many organizations have embraced strategies based on quality and/or time.
Quality-based strategies focus on maintaining or improving the quality of an organization's products or services. Quality is generally a factor in both attracting and retaining customers.
Quality-based strategies may be motivated by a variety of factors. They may reflect an effort to overcome an image of poor quality, a desire to catch up with the competition, a desire to maintain an existing image of high quality, or some combination of these and other factors. Interestingly enough, quality-based strategies can be part of another strategy, such as cost reduction, increased productivity, or time, all of which benefit from higher quality.

Time-based strategies focus on reducing the time required to accomplish various activities (e.g., develop new products or services and market them, respond to a change in customer demand, or deliver a product or perform a service). By doing so, organizations seek to improve service to the customer and to gain a competitive advantage over rivals who take more time to accomplish the same tasks.

Time-based strategies focus on reducing the time needed to conduct the various activities in a process. The rationale is that by reducing time, costs are generally less, productivity is higher, quality tends to be higher, product innovations appear on the market sooner, and customer service is improved.

Organizations have achieved time reduction in some of the following:

Planning time: The time needed to react to a competitive threat, to develop strategies and select tactics, to approve proposed changes to facilities, to adopt new technologies, and so on.

Product/service design time: The time needed to develop and market new or redesigned products or services.

Processing time: The time needed to produce goods or provide services. This can involve scheduling, repairing equipment, methods used, inventories, quality, training, and the like.

Changeover time: The time needed to change from producing one type of product or service to another. This may involve new equipment settings and attachments, different methods, equipment, schedules, or materials.

Delivery time: The time needed to fill orders.

Response time for complaints: These might be customer complaints about quality, timing of deliveries, and incorrect shipments. These might also be complaints from employees about working conditions (e.g., safety, lighting, heat or cold), equipment problems, or quality problems.

It is essential for marketing and operations personnel to collaborate on strategy formulation in order to ensure that the buying criteria of the most important customers in each market segment are addressed.

Agile operations is a strategic approach for competitive advantage that emphasizes the use of flexibility to adapt and prosper in an environment of change. Agility involves a blending of several distinct competencies such as cost, quality, and reliability along with flexibility. Processing aspects of flexibility include quick equipment changeovers, scheduling, and innovation. Product or service aspects include varying output volumes and product mix.

Successful agile operations requires careful planning to achieve a system that includes people, flexible equipment, and information technology. Reducing the time needed to perform work is one of the ways an organization can improve a key metric: productivity.


Organization strategy has a major impact on operations and supply chain management strategies. For example, organizations that use a low-cost, high-volume strategy limit the amount of variety offered to customers. As a result, variations for operations and the supply chain are minimal, so they are easier to deal with. Conversely, a strategy to offer a wide variety of products or services, or to perform customized work, creates substantial operational and supply chain variations and, hence, more challenges in achieving a smooth flow of goods and services throughout the supply chain, thus making the matching of supply to demand more difficult. Similarly, increasing service reduces the ability to compete on price.
Table 2.5 provides a brief overview of variety and some other key implications.


Organization strategies and their implications for operations management

Organization Strategy: Implications for Operations Management

Low price: Requires low variation in products/services and a high-volume, steady flow of goods through the system, resulting in maximum use of resources. Standardized work, material, and inventory requirements.

High quality: Entails higher initial cost for product and service design, and process design, and more emphasis on assuring supplier quality.

Quick response: Requires flexibility, extra capacity, and higher levels of some inventory items.

Newness: Entails large investment in research and development for new or improved products and services, plus the need to adapt operations and supply processes to suit new products or services.

Product or service variety: Requires high variation in resources and more emphasis on product and service design; higher worker skills needed; cost estimation more difficult; scheduling more complex; quality assurance more involved; inventory management more complex; and matching supply to demand more difficult.

Sustainability: Affects location planning, product and service design, process design, outsourcing decisions, returns policies, and waste management.


The Balanced Scorecard (BSC) is a top-down management system that organizations can use to clarify their vision and strategy and transform them into action. It was introduced in the early 1990s by Robert Kaplan and David Norton, and it has been revised and improved since then. The idea was to move away from a purely financial perspective of the organization and integrate other perspectives such as customers, internal business processes, and learning and growth. Using this approach, managers develop objectives, metrics, and targets for each objective and initiatives to achieve objectives, and they identify links among the various perspectives. Results are monitored and used to improve strategic performance results.
Figure 2.2 illustrates the conceptual framework of this approach. Many organizations employ this or a similar approach.
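One way to make the objectives/metrics/targets structure concrete is a small data sketch. The four perspectives follow Kaplan and Norton, but every objective, metric, target, initiative, and actual value below is invented for illustration.

```python
# Hypothetical Balanced Scorecard: one objective, metric, target, and
# initiative per perspective. All specific values are invented.
scorecard = {
    "Financial": {"objective": "Improve profitability",
                  "metric": "operating margin (%)", "target": 12.0,
                  "initiative": "Reduce process waste"},
    "Customer": {"objective": "Improve delivery performance",
                 "metric": "on-time delivery (%)", "target": 95.0,
                 "initiative": "Revise scheduling rules"},
    "Internal Business Processes": {"objective": "Shorten changeovers",
                                    "metric": "changeover time (min)", "target": 30.0,
                                    "initiative": "Standardize setups"},
    "Learning and Growth": {"objective": "Raise employee skills",
                            "metric": "training hours per employee", "target": 40.0,
                            "initiative": "Cross-training program"},
}

def gaps(card, actuals):
    """Monitor results against targets (actual minus target per perspective)."""
    return {p: actuals[p] - d["target"] for p, d in card.items()}

# Invented period results; comparing them to targets closes the monitoring loop.
actuals = {"Financial": 10.0, "Customer": 97.0,
           "Internal Business Processes": 45.0, "Learning and Growth": 40.0}
performance_gaps = gaps(scorecard, actuals)
```

For metrics where lower is better (such as changeover time), a positive gap is the warning sign rather than a negative one; a fuller implementation would record each metric's direction alongside its target.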


As seen in
Figure 2.2, the four perspectives are intended to balance not only financial and nonfinancial performance, but also internal and external performance, as well as past and future performance. This approach can also help organizations focus on how they differ from the competition in each of the four areas if their vision is realized.
Table 2.6 has some examples of factors for key focal points.


Balanced scorecard factors examples

Examples of factors for key focal points include delivery performance, quality performance, number of suppliers, supplier locations, duplicate activities, automation potential (internal processes), job satisfaction, learning opportunities, and retention rate.

Although the Balanced Scorecard helps focus managers’ attention on strategic issues and the implementation of strategy, it is important to note that it has no role in strategy formulation.


Moreover, this approach pays little attention to suppliers and government regulations, and community, environmental, and sustainability issues are missing. These are closely linked, and business organizations need to be aware of the impact they are having in these areas and respond accordingly. Otherwise, organizations may be subject to attack by pressure groups and risk damage to their reputation.


One of the primary responsibilities of a manager is to achieve productive use of an organization's resources. The term productivity is used to describe this. Productivity is an index that measures output (goods and services) relative to the input (labor, materials, energy, and other resources) used to produce it. It is usually expressed as the ratio of output to input:

    Productivity = Output / Input

Although productivity is important for all business organizations, it is particularly important for organizations that use a strategy of low cost, because the higher the productivity, the lower the cost of the output.

A productivity ratio can be computed for a single operation, a department, an organization, or an entire country. In business organizations, productivity ratios are used for planning workforce requirements, scheduling equipment, financial analysis, and other important tasks.

Productivity has important implications for business organizations and for entire nations. For nonprofit organizations, higher productivity means lower costs; for profit-based organizations, productivity is an important factor in determining how competitive a company is. For a nation, the rate of productivity growth is of great importance. Productivity growth is the increase in productivity from one period to the next relative to the productivity in the preceding period. Thus,

    Productivity growth = (Current productivity − Previous productivity) / Previous productivity × 100


For example, if productivity increased from 80 to 84, the growth rate would be (84 − 80) / 80 × 100 = 5%.

Productivity growth is a key factor in a country’s rate of inflation and the standard of living of its people. Productivity increases add value to the economy while keeping inflation in check. Productivity growth was a major factor in the long period of sustained economic growth in the United States in the 1990s.
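
These two ratios are simple enough to compute directly. Here is a minimal sketch (the function names are illustrative; the 80-to-84 example is the one from the text):

```python
def productivity(output, input_used):
    """Productivity = output / input."""
    return output / input_used

def productivity_growth(current, previous):
    """Percentage increase in productivity from one period to the next."""
    return 100 * (current - previous) / previous

# The example from the text: productivity rising from 80 to 84
print(productivity_growth(84, 80))  # 5.0, i.e., a 5% growth rate
```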

Computing Productivity

Productivity measures can be based on a single input (partial productivity), on more than one input (multifactor productivity), or on all inputs (total productivity).
Table 2.7 lists some examples of productivity measures. The choice of productivity measure depends primarily on the purpose of the measurement. If the purpose is to track improvements in labor productivity, then labor becomes the obvious input measure.


Some examples of different types of productivity measures

Partial measures are often of greatest use in operations management.
Table 2.8 provides some examples of partial productivity measures.


Some examples of partial productivity measures

Labor productivity: units of output per labor hour; units of output per shift; value-added per labor hour; dollar value of output per labor hour

Machine productivity: units of output per machine hour; dollar value of output per machine hour

Capital productivity: units of output per dollar input; dollar value of output per dollar input

Energy productivity: units of output per kilowatt-hour; dollar value of output per kilowatt-hour

The units of output used in productivity measures depend on the type of job performed. Labor productivity, for example, might be measured in units of output (or dollar value of output) per labor hour. Similar examples can be listed for machine productivity (e.g., the number of pieces per hour turned out by a machine).


Calculations of multifactor productivity measure inputs and outputs using a common unit of measurement, such as cost. For instance, the measure might use cost of inputs and units of the output:

    Multifactor productivity = Quantity of output / (Labor cost + Material cost + Overhead)

Note: The unit of measure must be the same for all factors in the denominator.
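
A multifactor calculation can be sketched in a few lines of code; the output and cost figures below are hypothetical, not from the text:

```python
def multifactor_productivity(units_output, labor_cost, material_cost, overhead):
    """Units of output per dollar of combined input cost.
    All terms in the denominator must share the same unit of measure (dollars here)."""
    return units_output / (labor_cost + material_cost + overhead)

# Hypothetical week: 7,040 units produced using $1,000 labor, $520 materials, $2,000 overhead
print(multifactor_productivity(7040, 1000, 520, 2000))  # 2.0 units per dollar of input
```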


Productivity measures are useful on a number of levels. For an individual department or organization, productivity measures can be used to track performance
over time. This allows managers to judge performance and to decide where improvements are needed. For example, if productivity has slipped in a certain area, operations staff can examine the factors used to compute productivity to determine what has changed and then devise a means of improving productivity in subsequent periods.

Productivity measures also can be used to judge the performance of an entire industry or the productivity of a country as a whole. These productivity measures are
aggregate measures.

In essence, productivity measurements serve as scorecards of the effective use of resources. Business leaders are concerned with productivity as it relates to
competitiveness: If two firms both have the same level of output but one requires less input because of higher productivity, that one will be able to charge a lower price and consequently increase its share of the market. Or that firm might elect to charge the same price, thereby reaping a greater profit. Government leaders are concerned with national productivity because of the close relationship between productivity and a nation’s standard of living. High levels of productivity are largely responsible for the relatively high standards of living enjoyed by people in industrial nations. Furthermore, wage and price increases not accompanied by productivity increases tend to create inflationary pressures on a nation’s economy.

Advantages of domestic-based operations for domestic markets often include higher worker productivity, better control of quality, avoidance of intellectual property losses, lower shipping costs, political stability, low inflation, and faster delivery.

Productivity in the Service Sector

Service productivity is more problematic than manufacturing productivity. In many situations, it is more difficult to measure, and thus to manage, because it involves intellectual activities and a high degree of variability. Think about medical diagnoses, surgery, consulting, legal services, customer service, and computer repair work. This makes productivity improvements more difficult to achieve. Nonetheless, because service is becoming an increasingly large portion of our economy, the issues related to service productivity will have to be dealt with. It is interesting to note that government statistics normally do not include service firms.


A useful measure closely related to productivity is
process yield. Where products are involved, process yield is defined as the ratio of output of good product (i.e., defective product is not included) to the quantity of raw material input. Where services are involved, process yield measurement is often dependent on the particular process. For example, in a car rental agency, a measure of yield is the ratio of cars rented to cars available for a given day. In education, a measure for college and university admission yield is the ratio of student acceptances to the total number of students approved for admission. For subscription services, yield is the ratio of new subscriptions to the number of calls made or the number of letters mailed. However, not all services lend themselves to a simple yield measurement. For example, services such as automotive, appliance, and computer repair don’t readily lend themselves to such measures.
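
Each of the yield measures above is the same ratio with different inputs; a small helper function (illustrative, with made-up numbers) shows the pattern:

```python
def process_yield(good_output, total_input):
    """Ratio of good output (or successful outcomes) to input quantity."""
    return good_output / total_input

# Manufacturing: 950 good units from 1,000 units of raw material input
print(process_yield(950, 1000))   # 0.95
# Car rental: 80 of 100 available cars rented on a given day
print(process_yield(80, 100))     # 0.8
# Admissions: 300 acceptances out of 1,200 students approved for admission
print(process_yield(300, 1200))   # 0.25
```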

Factors that Affect Productivity

Numerous factors affect productivity. Generally, they are methods, capital, quality, technology, and management.

A commonly held misconception is that workers are the main determinant of productivity. According to that theory, the route to productivity gains involves getting employees to work harder. However, the fact is that many productivity gains in the past have come from technological improvements. Familiar examples include:

  • GPS devices

  • Copiers and scanners

  • The internet and search engines

  • 3D printers

  • Radio frequency ID tags

  • Medical imaging

However, technology alone won’t guarantee productivity gains; it must be used wisely and thoughtfully. Without careful planning, technology can actually
reduce productivity, especially if it leads to inflexibility, high costs, or mismatched operations. Another current productivity pitfall results from employees’ use of computers or smartphones for nonwork-related activities (playing games or checking stock prices or sports scores on the internet or smartphones, and texting friends and relatives). Beyond all of these is the dip in productivity that results while employees learn to use new equipment or procedures that will eventually lead to productivity gains after the learning phase ends.


Other factors that affect productivity include the following:

Standardizing processes and procedures wherever possible to reduce variability can have a significant benefit for both productivity and quality.

Quality differences may distort productivity measurements. One way this can happen is when comparisons are made over time, such as comparing the productivity of a factory now with one 30 years ago. Quality is now much higher than it was then, but there is no simple way to incorporate quality improvements into productivity measurements.

Use of the internet can lower costs of a wide range of transactions, thereby increasing productivity. It is likely that this effect will continue to increase productivity in the foreseeable future.

Computer viruses can have an immense negative impact on productivity.

Searching for lost or misplaced items wastes time, hence negatively affecting productivity.

Scrap rates have an adverse effect on productivity, signaling inefficient use of resources.

New workers tend to have lower productivity than seasoned workers. Thus, growing companies may experience a productivity lag.

Safety should be addressed. Accidents can take a toll on productivity.

A shortage of technology-savvy workers hampers the ability of companies to update computing resources, generate and sustain growth, and take advantage of new opportunities.

Layoffs often affect productivity. The effect can be positive or negative. Initially, productivity may increase after a layoff, because the workload remains the same but fewer workers do the work—although they have to work harder and longer to do it. However, as time goes by, the remaining workers may experience an increased risk of burnout, and they may fear additional job cuts. The most capable workers may decide to leave.

Labor turnover has a negative effect on productivity; replacements need time to get up to speed.

Design of the workspace can impact productivity. For example, having tools and other work items within easy reach can positively impact productivity.

Incentive plans that reward productivity increases can boost productivity.

And there are still other factors that affect productivity, such as
equipment breakdowns and
shortages of parts or materials. The education level and training of workers and their health can greatly affect productivity. The opportunity to obtain lower costs due to higher productivity elsewhere is a key reason many organizations turn to
outsourcing. Hence, an alternative to outsourcing can be improved productivity. Moreover, as a part of their strategy for quality, the best organizations strive for
continuous improvement. Productivity improvements can be an important aspect of that approach.

Improving Productivity

A company or a department can take a number of key steps toward improving productivity:

  1. Develop productivity measures for all operations. Measurement is the first step in managing and controlling an operation.

  2. Look at the system as a whole in deciding which operations are most critical. It is overall productivity that is important. Managers need to reflect on the value of potential productivity improvements
    before okaying improvement efforts. The issue is
    effectiveness. There are several aspects of this. One is to make sure the result will be something customers want. For example, if a company is able to increase its output through productivity improvements, but then is unable to sell the increased output, the increase in productivity isn’t effective. Second, it is important to adopt a systems viewpoint: A productivity increase in one part of an operation that doesn’t increase the productivity of the system would not be effective. For example, suppose a system consists of a sequence of two operations, where the output of the first operation is the input to the second operation, and each operation can complete its part of the process at a rate of 20 units per hour. If the productivity of the first operation is increased, but the productivity of the second operation is not, the output of the system will still be 20 units per hour.

  3. Develop methods for achieving productivity improvements, such as soliciting ideas from workers (perhaps organizing teams of workers, engineers, and managers), studying how other firms have increased productivity, and reexamining the way work is done.

  4. Establish reasonable goals for improvement.

  5. Make it clear that management supports and encourages productivity improvement. Consider incentives to reward workers for contributions.

  6. Measure improvements and publicize them.
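
The systems point in step 2 can be made concrete: for operations in sequence, the system's output rate is set by the slowest operation, so improving only a non-bottleneck operation does not raise system output. A sketch:

```python
def system_rate(operation_rates):
    """Output rate (units/hour) of a sequence of operations is limited by the slowest one."""
    return min(operation_rates)

print(system_rate([20, 20]))  # 20 units per hour
# Speeding up only the first operation leaves the system rate unchanged:
print(system_rate([25, 20]))  # still 20 units per hour
```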


Don’t confuse productivity with
efficiency. Efficiency is a narrower concept that pertains to getting the most out of a
fixed set of resources; productivity is a broader concept that pertains to effective use of overall resources. For example, an efficiency perspective on mowing a lawn with a hand mower would focus on the best way to use the hand mower; a productivity perspective would include the possibility of using a power mower.

Fracking productivity improvement is another example. Drilling methods have become more effective. Drillers are now adopting a hydraulic fracturing method pioneered by companies such as Liberty Resources and EOG Resources that uses larger amounts of water and minerals. Although it is a more costly process, it has increased production rates in the first year of a well’s life, after which output tends to drop off dramatically. Processes such as these have reduced the break-even cost of producing a barrel of oil and kept profitable some acreage that drillers might otherwise have left idle.

Michael E. Porter, “The Five Competitive Forces that Shape Strategy,” Harvard Business Review 86, no. 1 (January 2008), pp. 78–93, 137.

Christopher A. Bartlett and Sumantra Ghoshal, “Going Global: Lessons from Late Movers,” Harvard Business Review, March–April 2000, p. 139.

Robert S. Kaplan and David P. Norton, Balanced Scorecard: Translating Strategy into Action (Cambridge, MA: Harvard Business School Press, 1996).




Forecasts are a basic input in the decision processes of operations management because they provide information on future demand. The importance of forecasting to operations management cannot be overstated. The primary goal of operations management is to match supply to demand. Having a forecast of demand is essential for determining how much capacity or supply will be needed to meet demand. For instance, operations needs to know what capacity will be needed to make staffing and equipment decisions, budgets must be prepared, purchasing needs information for ordering from suppliers, and supply chain partners need to make their plans.

Businesses make plans for future operations based on anticipated future demand. Anticipated demand is derived from two possible sources: actual customer orders and forecasts. For businesses where customer orders make up most or all of anticipated demand, planning is straightforward, and little or no forecasting is needed. However, for many businesses, most or all of anticipated demand is derived from forecasts.

Two aspects of forecasts are important. One is the expected level of demand; the other is the degree of accuracy that can be assigned to a forecast (i.e., the potential size of forecast error). The expected level of demand can be a function of some structural variation, such as a trend or seasonal variation. Forecast accuracy is a function of the ability of forecasters to correctly model demand, random variation, and sometimes unforeseen events.

Forecasts are made with reference to a specific time horizon. The time horizon may be fairly short (e.g., an hour, day, week, or month), or somewhat longer (e.g., the next six months, the next year, the next five years, or the life of a product or service). Short-term forecasts pertain to ongoing operations. Long-range forecasts can be an important strategic planning tool. Long-term forecasts pertain to new products or services, new equipment, new facilities, or something else that will require a somewhat long lead time to develop, construct, or otherwise implement.

Forecasts are the basis for budgeting, planning capacity, sales, production and inventory, personnel, purchasing, and more. Forecasts play an important role in the planning process because they enable managers to anticipate the future so they can plan accordingly.

Forecasts affect decisions and activities throughout an organization, in accounting, finance, human resources, marketing, and management information systems (MIS), as well as in operations and other parts of an organization. Here are some examples of uses of forecasts in business organizations:

Accounting. New product/process cost estimates, profit projections, cash management.

Finance. Equipment and equipment-replacement needs, timing and amount of funding/borrowing needs.

Human resources. Hiring activities, including recruitment, interviewing, and training; layoff planning, including outplacement counseling.


Marketing. Pricing and promotion, e-business strategies, global competition strategies.

MIS. New/revised information systems, internet services.

Operations. Schedules, capacity planning, work assignments and workloads, inventory planning, make-or-buy decisions, outsourcing, project management.

Product/service design. Revision of current features, design of new products or services.

In most of these uses of forecasts, decisions in one area have consequences in other areas. Therefore, it is very important for all affected areas to agree on a common forecast. However, this may not be easy to accomplish. Different departments often have very different perspectives on a forecast, making a consensus forecast difficult to achieve. For example, salespeople, by their very nature, may be overly optimistic with their forecasts, and may want to “reserve” capacity for their customers. This can result in excess costs for operations and inventory storage. Conversely, if demand exceeds forecasts, operations and the supply chain may not be able to meet demand, which would mean lost business and dissatisfied customers.

Forecasting is also an important component of
yield management, which relates to the percentage of capacity being used. Accurate forecasts can help managers plan tactics (e.g., offer discounts, don’t offer discounts) to match capacity with demand, thereby achieving high-yield levels.

There are two uses for forecasts. One is to help managers
plan the system, and the other is to help them
plan the use of the system. Planning the system generally involves long-range plans about the types of products and services to offer, what facilities and equipment to have, where to locate, and so on. Planning the use of the system refers to short-range and intermediate-range planning, which involve tasks such as planning inventory and workforce levels, planning purchasing and production, budgeting, and scheduling.

Business forecasting pertains to more than predicting demand. Forecasts are also used to predict profits, revenues, costs, productivity changes, prices and availability of energy and raw materials, interest rates, movements of key economic indicators (e.g., gross domestic product, inflation, government borrowing), and prices of stocks and bonds. For the sake of simplicity, this chapter will focus on the forecasting of demand. Keep in mind, however, that the concepts and techniques apply equally well to the other variables.


Despite its use of computers and sophisticated mathematical models, forecasting is not an exact science. Instead, successful forecasting often requires a skillful blending of science and intuition. Experience, judgment, and technical expertise all play a role in developing useful forecasts. Along with these, a certain amount of luck and a dash of humility can be helpful, because the worst forecasters occasionally produce a very good forecast, and even the best forecasters sometimes miss completely. Current forecasting techniques range from the mundane to the exotic. Some work better than others, but no single technique works all the time.


A wide variety of forecasting techniques are in use. In many respects, they are quite different from each other, as you shall soon discover. Nonetheless, certain features are common to all, and it is important to recognize them.

  • Forecasting techniques generally assume that the same underlying causal system that existed in the past will continue to exist in the future.

Comment: A manager cannot simply delegate forecasting to models or computers and then forget about it, because unplanned occurrences can wreak havoc with forecasts. For instance, weather-related events, tax increases or decreases, and changes in features or prices of competing products or services can have a major impact on demand. Consequently, a manager must be alert to such occurrences and be ready to override forecasts, which assume a stable causal system.

  • Forecasts are not perfect; actual results usually differ from predicted values; the presence of randomness precludes a perfect forecast. Allowances should be made for forecast errors.

  • Forecasts for groups of items tend to be more accurate than forecasts for individual items because forecasting errors among items in a group usually have a canceling effect. Opportunities for grouping may arise if parts or raw materials are used for multiple products or if a product or service is demanded by a number of independent sources.

  • Forecast accuracy decreases as the time period covered by the forecast—the
    time horizon—increases. Generally speaking, short-range forecasts must contend with fewer uncertainties than longer-range forecasts, so they tend to be more accurate.

An important consequence of the last point is that flexible business organizations—those that can respond quickly to changes in demand—require a shorter forecasting horizon and, hence, benefit from more accurate short-range forecasts than competitors who are less flexible and who must therefore use longer forecast horizons.
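
The canceling effect behind the grouping point can be demonstrated with a quick simulation (all numbers are illustrative): independent item-level errors partly offset one another when the items are forecast as a group.

```python
import random

def average_errors(n_items=25, periods=1000, sd=10.0, seed=1):
    """Compare the average absolute forecast error per item with the per-item
    error that remains after the items are grouped (errors summed) each period."""
    rng = random.Random(seed)
    item_err = group_err = 0.0
    for _ in range(periods):
        errors = [rng.gauss(0, sd) for _ in range(n_items)]  # independent item errors
        item_err += sum(abs(e) for e in errors) / n_items    # average item-level error
        group_err += abs(sum(errors)) / n_items              # group error, spread per item
    return item_err / periods, group_err / periods

item_err, group_err = average_errors()
print(group_err < item_err)  # True: grouping reduces per-item forecast error
```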


A properly prepared forecast should fulfill certain requirements:

  • The forecast should be
    timely. Usually, a certain amount of time is needed to respond to the information contained in a forecast. For example, capacity cannot be expanded overnight, nor can inventory levels be changed immediately. Hence, the forecasting horizon must cover the time necessary to implement possible changes.

  • The forecast should be
    accurate, and the degree of accuracy should be stated. This will enable users to plan for possible errors and will provide a basis for comparing alternative forecasts.

  • The forecast should be
    reliable; it should work consistently. A technique that sometimes provides a good forecast and sometimes a poor one will leave users with the uneasy feeling that they may get burned every time a new forecast is issued.


  • The forecast should be expressed in
    meaningful units. Financial planners need to know how many
    dollars will be needed, production planners need to know how many
    units will be needed, and schedulers need to know what
    machines and
    skills will be required. The choice of units depends on user needs.

  • The forecast should be
    in writing. Although this will not guarantee that all concerned are using the same information, it will at least increase the likelihood of it. In addition, a written forecast will permit an objective basis for evaluating the forecast once actual results are in.

  • The forecasting technique should be
    simple to understand and use. Users often lack confidence in forecasts based on sophisticated techniques; they do not understand either the circumstances in which the techniques are appropriate or the limitations of the techniques. Misuse of techniques is an obvious consequence. Not surprisingly, fairly simple forecasting techniques enjoy widespread popularity because users are more comfortable working with them.

  • The forecast should be
    cost-effective: The benefits should outweigh the costs.


Accurate forecasts are very important for the supply chain. Inaccurate forecasts can lead to shortages and excesses throughout the supply chain. Shortages of materials, parts, and services can lead to missed deliveries, work disruption, and poor customer service. Conversely, overly optimistic forecasts can lead to excesses of materials and/or capacity, which increase costs. Both shortages and excesses in the supply chain have a negative impact not only on customer service but also on profits. Furthermore, inaccurate forecasts can result in temporary increases and decreases in orders to the supply chain, which can be misinterpreted by the supply chain.

Organizations can reduce the likelihood of such occurrences in a number of ways. One, obviously, is by striving to develop the best possible forecasts. Another is through collaborative planning and forecasting with major supply chain partners. Yet another way is through information sharing among partners and perhaps increasing supply chain visibility by allowing supply chain partners to have real-time access to sales and inventory information. Also important is rapid communication about poor forecasts, as well as about unplanned events that disrupt operations (e.g., flooding, work stoppages), and changes in plans.


There are six basic steps in the forecasting process:

  1. Determine the purpose of the forecast. How will it be used and when will it be needed? This step will provide an indication of the level of detail required in the forecast, the amount of resources (personnel, computer time, dollars) that can be justified, and the level of accuracy necessary.

  2. Establish a time horizon. The forecast must indicate a time interval, keeping in mind that accuracy decreases as the time horizon increases.

  3. Obtain, clean, and analyze appropriate data. Obtaining the data can involve significant effort. Once obtained, the data may need to be “cleaned” to get rid of outliers and obviously incorrect data before analysis.

  4. Select a forecasting technique.

  5. Make the forecast.

  6. Monitor the forecast errors. The forecast errors should be monitored to determine if the forecast is performing in a satisfactory manner. If it is not, reexamine the method, assumptions, the validity of data, and so on; modify as needed; and prepare a revised forecast.


Once the process has been set up, it may only be necessary to repeat steps 3 and 6 as new data become available.

Note, too, that additional action may be necessary. For example, if demand was much less than the forecast, an action such as a price reduction or a promotion may be needed. Conversely, if demand was much more than predicted, increased output may be advantageous. That may involve working overtime, outsourcing, or taking other measures.
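
Step 6, monitoring forecast errors, is often implemented with a running error measure such as the mean absolute deviation (MAD); a minimal sketch with made-up demand numbers:

```python
def mad(actuals, forecasts):
    """Mean absolute deviation between actual demand and the forecasts for it."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# Three periods of (hypothetical) actual demand vs. forecast
print(mad([100, 96, 103], [98, 100, 99]))  # (2 + 4 + 4) / 3 ≈ 3.33
```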


There are two general approaches to forecasting: qualitative and quantitative. Qualitative methods consist mainly of subjective inputs, which often defy precise numerical description. Quantitative methods involve either the projection of historical data or the development of associative models that attempt to utilize
causal (explanatory) variables to make a forecast.

Qualitative techniques permit inclusion of
soft information (e.g., human factors, personal opinions, hunches) in the forecasting process. Those factors are often omitted or downplayed when quantitative techniques are used because they are difficult or impossible to quantify. Quantitative techniques consist mainly of analyzing objective, or
hard, data. They usually avoid personal biases that sometimes contaminate qualitative methods. In practice, either approach, or a combination of both approaches, might be used to develop a forecast.

The following pages present a variety of forecasting techniques that are classified as judgmental, time-series, or associative.

Judgmental forecasts
rely on analysis of subjective inputs obtained from various sources, such as consumer surveys, the sales staff, managers and executives, and panels of experts. Quite frequently, these sources provide insights that are not otherwise available.

Time-series forecasts
simply attempt to project past experience into the future. These techniques use historical data with the assumption that the future will be like the past. Some models merely attempt to smooth out random variations in historical data; others attempt to identify specific patterns in the data and project or extrapolate those patterns into the future, without trying to identify causes of the patterns.
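
As one concrete instance of a smoothing model, exponential smoothing updates each forecast by a fraction (alpha, chosen here arbitrarily) of the most recent forecast error. This sketch is illustrative rather than a recommended method:

```python
def exponential_smoothing(demand, alpha=0.2):
    """One-step-ahead forecasts for a demand series; the first forecast is
    seeded with the first actual value."""
    forecast = demand[0]
    forecasts = [forecast]
    for actual in demand[1:]:
        forecast = forecast + alpha * (actual - forecast)  # adjust by part of the error
        forecasts.append(forecast)
    return forecasts

history = [42, 40, 43, 40, 41]  # hypothetical demand history
print(exponential_smoothing(history))
```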

Associative models
use equations that consist of one or more
explanatory variables that can be used to predict demand. For example, demand for paint might be related to variables such as the price per gallon and the amount spent on advertising, as well as to specific characteristics of the paint (e.g., drying time, ease of cleanup).


In some situations, forecasters rely solely on judgment and opinion to make forecasts. If management must have a forecast quickly, there may not be enough time to gather and analyze quantitative data. At other times, especially when political and economic conditions are changing, available data may be obsolete, and more up-to-date information might not yet be available. Similarly, the introduction of new products and the redesign of existing products or packaging suffer from the absence of historical data that would be useful in forecasting. In such instances, forecasts are based on executive opinions, consumer surveys, opinions of the sales staff, and opinions of experts.

Executive Opinions

A small group of upper-level managers (e.g., in marketing, operations, and finance) may meet and collectively develop a forecast. This approach is often used as a part of long-range planning and new product development. It has the advantage of bringing together the considerable knowledge and talents of various managers. However, there is the risk that the view of one person will prevail, and the possibility that diffusing responsibility for the forecast over the entire group may result in less pressure to produce a good forecast.


Salesforce Opinions

Members of the sales staff or the customer service staff are often good sources of information because of their direct contact with consumers. They are often aware of any plans the customers may be considering for the future. There are, however, several drawbacks to using salesforce opinions. One is that staff members may be unable to distinguish between what customers would like to do and what they actually will do. Another is that these people are sometimes overly influenced by recent experiences. Thus, after several periods of low sales, their estimates may tend to become pessimistic. After several periods of good sales, they may tend to be too optimistic. In addition, if forecasts are used to establish sales quotas, there will be a conflict of interest because it is to the salesperson's advantage to provide low sales estimates.

Consumer Surveys

Because it is the consumers who ultimately determine demand, it seems natural to solicit input from them. In some instances, every customer or potential customer can be contacted. However, usually there are too many customers or there is no way to identify all potential customers. Therefore, organizations seeking consumer input usually resort to consumer surveys, which enable them to sample consumer opinions. The obvious advantage of consumer surveys is that they can tap information that might not be available elsewhere. On the other hand, a considerable amount of knowledge and skill is required to construct a survey, administer it, and correctly interpret the results for valid information. Surveys can be expensive and time-consuming. In addition, even under the best conditions, surveys of the general public must contend with the possibility of irrational behavior patterns. For example, much of the consumer's thoughtful information gathering before purchasing a new car is often undermined by the glitter of a new car showroom or a high-pressure sales pitch. Along the same lines, low response rates to a mail survey should—but often don't—make the results suspect.

If these and similar pitfalls can be avoided, surveys can produce useful information.

Other Approaches

A manager may solicit opinions from a number of other managers and staff people. Occasionally, outside experts are needed to help with a forecast. Advice may be needed on political or economic conditions in the United States or a foreign country, or some other aspect of importance with which an organization lacks familiarity.

Another approach is the Delphi method, an iterative process intended to achieve a consensus forecast. This method involves circulating a series of questionnaires among individuals who possess the knowledge and ability to contribute meaningfully. Responses are kept anonymous, which tends to encourage honest responses and reduces the risk that one person's opinion will prevail. Each new questionnaire is developed using the information extracted from the previous one, thus enlarging the scope of information on which participants can base their judgments.

The Delphi method has been applied to a variety of situations, not all of which involve forecasting. The discussion here is limited to its use as a forecasting tool.

As a forecasting tool, the Delphi method is useful for technological forecasting; that is, for assessing changes in technology and their impact on an organization. Often, the goal is to predict when a certain event will occur. For instance, the goal of a Delphi forecast might be to predict when video telephones might be installed in at least 50 percent of residential homes or when a vaccine for a disease might be developed and ready for mass distribution. For the most part, these are long-term, single-time forecasts, which usually have very little hard information to go by or data that are costly to obtain, so the problem does not lend itself to analytical techniques. Rather, judgments of experts or others who possess sufficient knowledge to make predictions are used.




A time series is a time-ordered sequence of observations taken at regular intervals (e.g., hourly, daily, weekly, monthly, quarterly, annually). The data may be measurements of demand, sales, earnings, profits, shipments, accidents, output, precipitation, productivity, or the consumer price index. Note that forecasts based on sales will understate demand when demand exceeds sales, causing shortages (stockouts) to occur. Forecasting techniques based on time-series data are made on the assumption that future values of the series can be estimated from past values. Although no attempt is made to identify variables that influence the series, these methods are widely used, often with quite satisfactory results.

Analysis of time-series data requires the analyst to identify the underlying behavior of the series. This can often be accomplished by merely plotting the data and visually examining the plot. One or more patterns might appear: trends, seasonal variations, cycles, or variations around an average. In addition, there will be random and perhaps irregular variations. These behaviors can be described as follows:

  1. Trend refers to a long-term upward or downward movement in the data. Population shifts, changing incomes, and cultural changes often account for such movements.

  2. Seasonality refers to short-term, fairly regular variations generally related to factors such as the calendar or time of day. Restaurants, supermarkets, and theaters experience weekly and even daily "seasonal" variations.

  3. Cycles are wavelike variations of more than one year's duration. These are often related to a variety of economic, political, and even agricultural conditions.

  4. Irregular variations are due to unusual circumstances such as severe weather conditions, strikes, or a major change in a product or service. They do not reflect typical behavior, and their inclusion in the series can distort the overall picture. Whenever possible, these should be identified and removed from the data.

  5. Random variations are residual variations that remain after all other behaviors have been accounted for.

These behaviors are illustrated in Figure 3.1. The small "bumps" in the plots represent random variability.

The remainder of this section describes the various approaches to the analysis of time-series data. Before turning to those discussions, one point should be emphasized: A demand forecast should be based on a time series of past demand rather than unit sales. Sales would not truly reflect demand if one or more stockouts occurred.

Naive Methods

A simple but widely used approach to forecasting is the naive approach. A naive forecast uses a single previous value of a time series as the basis of a forecast. The naive approach can be used with a stable series (variations around an average), with seasonal variations, or with trend. With a stable series, the last data point becomes the forecast for the next period. Thus, if demand for a product last week was 20 cases, the forecast for this week is 20 cases. With seasonal variations, the forecast for this "season" is equal to the value of the series last "season." For example, the forecast for demand for turkeys this Thanksgiving season is equal to demand for turkeys last Thanksgiving; the forecast of the number of checks cashed at a bank on the first day of the month next month is equal to the number of checks cashed on the first day of this month; and the forecast for highway traffic volume this Friday is equal to the highway traffic volume last Friday. For data with trend, the forecast is equal to the last value of the series plus or minus the difference between the last two
values of the series. For example, suppose the last two values were 50 and 53. The change from the previous value is 53 − 50 = 3, so the next forecast would be 53 + 3 = 56.

Although at first glance the naive approach may appear too simplistic, it is nonetheless a legitimate forecasting tool. Consider the advantages: It has virtually no cost, it is quick and easy to prepare because data analysis is nonexistent, and it is easily understandable. The main objection to this method is its inability to provide highly accurate forecasts. However, if resulting accuracy is acceptable, this approach deserves serious consideration. Moreover, even if other forecasting techniques offer better accuracy, they will almost always involve a greater cost. The accuracy of a naive forecast can serve as a standard of comparison against which to judge the cost and accuracy of other techniques. Thus, managers must answer the question: Is the increased accuracy of another method worth the additional resources required to achieve that accuracy?
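The three naive variants described above can be sketched in a few lines of Python. The demand numbers here are made up for illustration.

```python
def naive_stable(series):
    """Stable series: forecast = last observed value."""
    return series[-1]

def naive_seasonal(series, season_length):
    """Seasonal series: forecast = value one full season ago."""
    return series[-season_length]

def naive_trend(series):
    """Trended series: last value plus the last observed change."""
    return series[-1] + (series[-1] - series[-2])

demand = [47, 50, 53]
print(naive_stable(demand))        # 53
print(naive_trend(demand))         # 53 + (53 - 50) = 56

# Two weeks of daily demand; forecast from the same weekday last week.
weekly = [20, 22, 25, 21, 24, 26, 30, 19, 23, 24, 22, 25, 27, 31]
print(naive_seasonal(weekly, 7))   # 19
```

Note how the trended variant reproduces the 53 + 3 = 56 example from the text.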


Techniques for Averaging

Historical data typically contain a certain amount of random variation, or white noise, that tends to obscure systematic movements in the data. This randomness arises from the combined influence of many—perhaps a great many—relatively unimportant factors, and it cannot be reliably predicted. Averaging techniques smooth variations in the data. Ideally, it would be desirable to completely remove any randomness from the data and leave only "real" variations, such as changes in demand. As a practical matter, however, it is usually impossible to distinguish between these two kinds of variations, so the best one can hope for is that the small variations are random and the large variations are "real."

Averaging techniques smooth fluctuations in a time series because the individual highs and lows in the data offset each other when they are combined into an average. A forecast based on an average thus tends to exhibit less variability than the original data (see Figure 3.2). This can be advantageous because many of these movements merely reflect random variability rather than a true change in the series. Moreover, because responding to changes in expected demand often entails considerable cost (e.g., changes in production rate, changes in the size of a workforce, inventory changes), it is desirable to avoid reacting to minor variations. Thus, minor variations are treated as random variations, whereas larger variations are viewed as more likely to reflect "real" changes, although these, too, are smoothed to a certain degree.

Averaging techniques generate forecasts that reflect recent values of a time series (e.g., the average value over the last several periods). These techniques work best when a series tends to vary around an average, although they also can handle step changes or gradual changes in the level of the series. Three techniques for averaging are described in this section:

  1. Moving average

  2. Weighted moving average

  3. Exponential smoothing

Moving Average One weakness of the naive method is that the forecast just traces the actual data, with a lag of one period; it does not smooth at all. But by expanding the amount of historical data a forecast is based on, this difficulty can be overcome. A moving average forecast uses a number of the most recent actual data values in generating a forecast. The moving average forecast can be computed using the following equation:

    F_t = MA_n = (A_{t−n} + … + A_{t−2} + A_{t−1}) / n

where
    F_t = Forecast for time period t
    MA_n = n-period moving average
    A_{t−i} = Actual value in period t − i
    n = Number of periods (data points) in the moving average
For example, MA_3 would refer to a three-period moving average forecast, and MA_5 would refer to a five-period moving average forecast.


Note that in a moving average, as each new actual value becomes available, the forecast is updated by adding the newest value and dropping the oldest and then recomputing the average. Consequently, the forecast “moves” by reflecting only the most recent values.

In computing a moving average, including a moving total column—which gives the sum of the n most current values from which the average will be computed—aids computations. To update the moving total: Subtract the oldest value from the newest value and add that difference to the moving total for each update.
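A minimal Python sketch of this procedure, maintaining the moving total as described above (the demand values are illustrative):

```python
def moving_average_forecasts(series, n):
    """Return MA_n forecasts; the first forecast is for period n + 1."""
    forecasts = []
    moving_total = sum(series[:n])          # sum of the first n actuals
    forecasts.append(moving_total / n)
    for t in range(n, len(series)):
        # Update the moving total: add the newest value, drop the oldest.
        moving_total += series[t] - series[t - n]
        forecasts.append(moving_total / n)
    return forecasts

demand = [42, 40, 43, 40, 41]
print(moving_average_forecasts(demand, 3))
# first forecast: (42 + 40 + 43)/3 = 41.67; then 41.0, then 41.33
```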

Figure 3.3 illustrates a three-period moving average forecast plotted against actual demand over 31 periods. Note how the moving average forecast lags the actual values and how smooth the forecasted values are compared with the actual values.

The moving average can incorporate as many data points as desired. In selecting the number of periods to include, the decision maker must take into account that the number of data points in the average determines its sensitivity to each new data point: The fewer the data points in an average, the more sensitive (responsive) the average tends to be. (See Figure 3.4A.)

If responsiveness is important, a moving average with relatively few data points should be used. This will permit quick adjustment to, say, a step change in the data, but it also will cause the forecast to be somewhat responsive even to random variations. Conversely, moving averages based on more data points will smooth more but be less responsive to "real" changes. Hence, the decision maker must weigh the cost of responding more slowly to changes in the data against the cost of responding to what might simply be random variations. A review of forecast errors can help in this decision.

The advantages of a moving average forecast are that it is easy to compute and easy to understand. A possible disadvantage is that all values in the average are weighted equally. For instance, in a 10-period moving average, each value has a weight of 1/10. Hence, the oldest value has the same weight as the most recent value. If a change occurs in the series, a moving average forecast can be slow to react, especially if there are a large number of values in the average. Decreasing the number of values in the average increases the weight of more recent values, but it does so at the expense of losing potential information from less recent values.

Weighted Moving Average A weighted average is similar to a moving average, except that it typically assigns more weight to the most recent values in a time series. For instance, the most recent value might be assigned a weight of .40, the next most recent value a weight of .30, the next after that a weight of .20, and the next after that a weight of .10. Note that the weights must sum to 1.00, and that the heaviest weights are assigned to the most recent values.

    F_t = w_1 A_{t−1} + w_2 A_{t−2} + … + w_n A_{t−n}

where w_1 is the weight applied to the most recent value and the weights sum to 1.00.

Note that if four weights are used, only the four most recent demands are used to prepare the forecast.

The advantage of a weighted average over a simple moving average is that the weighted average is more reflective of the most recent occurrences. However, the choice of weights is somewhat arbitrary and generally involves the use of trial and error to find a suitable weighting scheme.
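The .40/.30/.20/.10 weighting scheme mentioned above can be sketched as follows (the demand values are illustrative):

```python
def weighted_ma(series, weights):
    """weights[0] applies to the most recent value; weights must sum to 1.00."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1.00"
    recent = series[-len(weights):][::-1]   # newest value first
    return sum(w * a for w, a in zip(weights, recent))

demand = [40, 41, 43, 40]
print(weighted_ma(demand, [.40, .30, .20, .10]))
# .40(40) + .30(43) + .20(41) + .10(40) = 41.1
```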

Exponential Smoothing

Exponential smoothing is a sophisticated weighted averaging method that is still relatively easy to use and understand. Each new forecast is based on the previous forecast plus a percentage of the difference between that forecast and the actual value of the series at that point. That is:

    Next forecast = Previous forecast + α(Actual − Previous forecast)

where (Actual − Previous forecast) represents the forecast error and α is a percentage of the error. More concisely,

    F_t = F_{t−1} + α(A_{t−1} − F_{t−1})     (3–3a)

where
    F_t = Forecast for period t
    F_{t−1} = Forecast for the previous period
    α = Smoothing constant
    A_{t−1} = Actual demand or sales for the previous period
The smoothing constant α represents a percentage of the forecast error. Each new forecast is equal to the previous forecast plus a percentage of the previous error. For example, suppose the previous forecast was 42 units, actual demand was 40 units, and α = .10. The new forecast would be computed as follows:

    F_t = 42 + .10(40 − 42) = 41.8

Then, if the actual demand turns out to be 43, the next forecast would be

    F_t = 41.8 + .10(43 − 41.8) = 41.92

An alternate form of Formula 3–3a reveals the weighting of the previous forecast and the latest actual demand:

    F_t = (1 − α)F_{t−1} + α A_{t−1}

For example, if α = .10, this would be

    F_t = .90 F_{t−1} + .10 A_{t−1}
The quickness of forecast adjustment to error is determined by the smoothing constant, α. The closer its value is to zero, the slower the forecast will be to adjust to forecast errors (i.e., the greater the smoothing). Conversely, the closer the value of α is to 1.00, the greater the responsiveness and the less the smoothing. This is illustrated in Figure 3.4B.


Selecting a smoothing constant is basically a matter of judgment or trial and error, using forecast errors to guide the decision. The goal is to select a smoothing constant that balances the benefits of smoothing random variations with the benefits of responding to real changes if and when they occur. Commonly used values of α range from .05 to .50. Low values of α are used when the underlying average tends to be stable; higher values are used when the underlying average is susceptible to change.

Some computer packages include a feature that permits automatic modification of the smoothing constant if the forecast errors become unacceptably large.

Exponential smoothing is one of the most widely used techniques in forecasting, partly because of its ease of calculation and partly because of the ease with which the weighting scheme can be altered—simply by changing the value of α.
Exponential smoothing should begin several periods back to enable forecasts to adjust to the data, instead of starting one period back. A number of different approaches can be used to obtain a starting forecast, such as the average of the first several periods, a subjective estimate, or the first actual value as the forecast for period 2 (i.e., the naive approach). For simplicity, the naive approach is used in this book. In practice, using an average of, say, the first three values as a forecast for period 4 would provide a better starting forecast because that would tend to be more representative.
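A short Python sketch of the smoothing recursion, using the naive approach for the starting forecast as this section does. The demand values echo the 42/40/43 worked example above.

```python
def exp_smoothing(series, alpha):
    """Return forecasts; the first is the naive start (forecast for period 2)."""
    forecasts = [series[0]]                 # naive starting forecast
    for actual in series[1:]:
        prev = forecasts[-1]
        forecasts.append(prev + alpha * (actual - prev))
    return forecasts

demand = [42, 40, 43]
print(exp_smoothing(demand, 0.10))
# [42, 41.8, 41.92] -- matches the worked numbers in the text
```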

Other Forecasting Methods

You may find two other approaches to forecasting interesting. They are briefly described in this section.

Focus Forecasting Some companies use forecasts based on a "best recent performance" basis. This approach, called focus forecasting, was developed by Bernard T. Smith, and is described in several of his books. It involves the use of several forecasting methods (e.g., moving average, weighted average, and exponential smoothing), all being applied to the last few months of historical data after any irregular variations have been removed. The method that has the highest accuracy is then used to make the forecast for the next month. This process is used for each product or service, and is repeated monthly.
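A rough sketch of the focus-forecasting idea: try several simple methods on recent history and use whichever would have predicted the most recent period best. The candidate methods, the absolute-error scoring, and the demand data are all assumptions for illustration, not Smith's exact procedure.

```python
def focus_forecast(series):
    """Pick the method with the best recent performance, then forecast with it."""
    methods = {
        "naive": lambda s: s[-1],
        "ma3": lambda s: sum(s[-3:]) / 3,
        "ma5": lambda s: sum(s[-5:]) / 5,
    }
    history, last_actual = series[:-1], series[-1]
    # Score each method on how well it would have predicted the last value.
    errors = {name: abs(f(history) - last_actual) for name, f in methods.items()}
    best = min(errors, key=errors.get)
    return best, methods[best](series)

demand = [50, 52, 51, 53, 55, 54, 56]
print(focus_forecast(demand))
```

In practice the scoring window and the set of candidate methods would be refit each month, per product.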

Diffusion Models When new products or services are introduced, historical data are not generally available on which to base forecasts. Instead, predictions are based on rates of product adoption and usage spread from other established products, using mathematical diffusion models. These models take into account such factors as market potential, attention from mass media, and word of mouth. Although the details are beyond the scope of this text, it is important to point out that diffusion models are widely used in marketing and to assess the merits of investing in new technologies.

Techniques for Trend

Analysis of trend involves developing an equation that will suitably describe trend (assuming that trend is present in the data). The trend component may be linear, or it may not. Some commonly encountered nonlinear trend types are illustrated in Figure 3.5. A simple plot of the data often can reveal the existence and nature of a trend. The discussion here focuses exclusively on linear trends because these are fairly common.

There are two important techniques that can be used to develop forecasts when trend is present. One involves use of a trend equation; the other is an extension of exponential smoothing.

Trend Equation A linear trend equation has the form

    F_t = a + bt

where
    F_t = Forecast for period t
    a = Value of F_t at t = 0
    b = Slope of the line
    t = Number of time periods from t = 0

For example, consider the trend equation F_t = 45 + 5t. The value of F_t at t = 0 is 45, and the slope of the line is 5, which means that, on average, the value of F_t will increase by five units for each time period. If t = 10, the forecast, F_t, is 45 + 5(10) = 95 units. The equation can be plotted by finding two points on the line. One can be found by substituting some value of t into the equation (e.g., t = 10) and then solving for F_t. The other point is a (i.e., F_t at t = 0). Plotting those two points and drawing a line through them yields a graph of the linear trend line.

The coefficients of the line, a and b, are based on the following two equations:

    b = (n Σty − Σt Σy) / (n Σt² − (Σt)²)

    a = (Σy − b Σt) / n

where n = Number of periods and y = Value of the time series.

Note that these two equations are identical to those used for computing a linear regression line, except that t replaces x in the equations. Values for the trend equation can be obtained easily by using the Excel template.
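The two coefficient equations above can be computed directly, as in this Python sketch. The series used here is a perfectly linear one built from F_t = 45 + 5t, so the fit recovers those coefficients exactly.

```python
def linear_trend(y, first_period=1):
    """Least-squares trend coefficients a and b for a series y."""
    n = len(y)
    t = list(range(first_period, first_period + n))
    st, sy = sum(t), sum(y)
    sty = sum(ti * yi for ti, yi in zip(t, y))
    stt = sum(ti * ti for ti in t)
    b = (n * sty - st * sy) / (n * stt - st ** 2)   # slope
    a = (sy - b * st) / n                           # intercept at t = 0
    return a, b

y = [45 + 5 * t for t in range(1, 6)]   # periods 1..5 of Ft = 45 + 5t
a, b = linear_trend(y)
print(a, b)            # 45.0 5.0
print(a + b * 10)      # forecast for t = 10: 95.0
```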


Trend-Adjusted Exponential Smoothing

A variation of simple exponential smoothing can be used when a time series exhibits a linear trend. It is called trend-adjusted exponential smoothing, or sometimes double smoothing, to differentiate it from simple exponential smoothing, which is appropriate only when data vary around an average or have step or gradual changes. If a series exhibits a trend, and simple smoothing is used on it, the forecasts will all lag the trend: If the data are increasing, each forecast will be too low; if decreasing, each forecast will be too high.

The trend-adjusted forecast (TAF) is composed of two elements—a smoothed error and a trend factor:

    TAF_{t+1} = S_t + T_t

where
    S_t = Previous forecast plus smoothed error = TAF_t + α(A_t − TAF_t)
    T_t = Current trend estimate = T_{t−1} + β(TAF_t − TAF_{t−1} − T_{t−1})
In order to use this method, one must select values of α and β (usually through trial and error) and make a starting forecast and an estimate of trend.

Using the cell phone data from the previous example (where it was concluded that the data exhibited a linear trend), use trend-adjusted exponential smoothing to obtain forecasts for periods 6 through 11, with α = .40 and β = .30.


The initial estimate of trend is based on the net change of 28 for the three changes from period 1 to period 4, for an average of 9.33. The Excel spreadsheet is shown in Table 3.2. Notice that an initial estimate of trend is estimated from the first four values and that the starting forecast (period 5) is developed using the previous (period 4) value of 728 plus the initial trend estimate:

    TAF_5 = 728 + 9.33 = 737.33
Using the Excel template for trend-adjusted smoothing

Source: Microsoft

Unlike a linear trend line, trend-adjusted smoothing has the ability to adjust to changes in trend. Of course, trend projections are much simpler with a trend line than with trend-adjusted forecasts, so a manager must decide which benefits are most important when choosing between these two techniques for trend.
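The TAF recursion can be sketched in Python as follows. The demand series here is a made-up, perfectly linear one (not the cell phone data), chosen so the behavior is easy to check: when actuals follow the trend exactly, the forecasts stay on trend.

```python
def trend_adjusted(series, alpha, beta, start_forecast, start_trend):
    """TAF(t+1) = S(t) + T(t), with S(t) = TAF(t) + alpha*(A(t) - TAF(t))
    and T(t) = T(t-1) + beta*(TAF(t) - TAF(t-1) - T(t-1))."""
    taf, trend = start_forecast, start_trend
    prev_taf = None
    forecasts = [taf]
    for actual in series:
        s = taf + alpha * (actual - taf)                  # smoothed element
        if prev_taf is not None:                          # no TAF(t-1) yet on step 1
            trend = trend + beta * (taf - prev_taf - trend)
        prev_taf = taf
        taf = s + trend                                   # next-period TAF
        forecasts.append(taf)
    return forecasts

actuals = [110, 120, 130]                     # follows the trend exactly
print(trend_adjusted(actuals, 0.40, 0.30, 110.0, 10.0))
# [110.0, 120.0, 130.0, 140.0] -- on-trend data keeps forecasts on trend
```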

Techniques for Seasonality

Seasonal variations in time-series data are regularly repeating upward or downward movements in series values that can be tied to recurring events. Seasonality may refer to regular annual variations. Familiar examples of seasonality are weather variations (e.g., sales of winter and summer sports equipment) and vacations or holidays (e.g., airline travel, greeting card sales, visitors at tourist and resort centers). The term seasonal variation is also applied to daily, weekly, monthly, and other regularly recurring patterns in data. For example, rush hour traffic occurs twice a day—incoming in the morning and outgoing in the late afternoon. Theaters and restaurants often experience weekly demand patterns, with demand higher later in the week. Banks may experience daily seasonal variations (heavier traffic during the noon hour and just before closing), weekly variations (heavier toward the end of the week), and monthly variations (heaviest around the beginning of the month because of Social Security, payroll, and welfare checks being cashed or deposited). Mail volume; sales of toys, beer, automobiles, and turkeys; highway usage; hotel registrations; and gardening also exhibit seasonal variations.


Seasonality in a time series is expressed in terms of the amount that actual values deviate from the average value of a series. If the series tends to vary around an average value, then seasonality is expressed in terms of that average (or a moving average); if trend is present, seasonality is expressed in terms of the trend value.

There are two different models of seasonality: additive and multiplicative. In the additive model, seasonality is expressed as a quantity (e.g., 20 units), which is added to or subtracted from the series average in order to incorporate seasonality. In the multiplicative model, seasonality is expressed as a percentage of the average (or trend) amount (e.g., 1.10), which is then used to multiply the value of a series to incorporate seasonality. Figure 3.6 illustrates the two models for a linear trend line. In practice, businesses use the multiplicative model much more widely than the additive model, because it tends to be more representative of actual experience, so we will focus exclusively on the multiplicative model.

The seasonal percentages in the multiplicative model are referred to as seasonal relatives, or seasonal indexes. Suppose that the seasonal relative for the quantity of toys sold in May at a store is 1.20. This indicates that toy sales for that month are 20 percent above the monthly average. A seasonal relative of .90 for July indicates that July sales are 90 percent of the monthly average.

Knowledge of seasonal variations is an important factor in retail planning and scheduling. Moreover, seasonality can be an important factor in capacity planning for systems that must be designed to handle peak loads (e.g., public transportation, electric power plants, highways, and bridges). Knowledge of the extent of seasonality in a time series can enable one to remove seasonality from the data (i.e., to seasonally adjust data) in order to discern other patterns or the lack of patterns in the series. Thus, one frequently reads or hears about "seasonally adjusted unemployment" and "seasonally adjusted personal income."

The next section briefly describes how seasonal relatives are used.

Using Seasonal Relatives Seasonal relatives are used in two different ways in forecasting. One way is to deseasonalize data; the other way is to incorporate seasonality in a forecast.

To deseasonalize data is to remove the seasonal component from the data in order to get a clearer picture of the nonseasonal (e.g., trend) components. Deseasonalizing data is accomplished by dividing each data point by its corresponding seasonal relative (e.g., divide November demand by the November relative, divide December demand by the December relative, and so on).

Incorporating seasonality in a forecast is useful when demand has both trend (or average) and seasonal components. Incorporating seasonality can be accomplished in this way:

  1. Obtain trend estimates for desired periods using a trend equation.

  2. Add seasonality to the trend estimates by multiplying (assuming a multiplicative model is appropriate) these trend estimates by the corresponding seasonal relative (e.g., multiply the November trend estimate by the November seasonal relative, multiply the December trend estimate by the December seasonal relative, and so on).

Example 4 illustrates these two techniques.
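The two uses of seasonal relatives can be sketched as follows. The monthly relatives and the trend equation (F_t = 500 + 10t) are made-up numbers for illustration.

```python
relatives = {"Nov": 1.20, "Dec": 1.50, "Jan": 0.80}   # hypothetical relatives

def deseasonalize(demand, relative):
    """Divide a data point by its seasonal relative."""
    return demand / relative

def seasonalized_forecast(t, relative, a=500.0, b=10.0):
    trend_estimate = a + b * t            # step 1: trend value for period t
    return trend_estimate * relative      # step 2: multiply by the relative

print(deseasonalize(600, relatives["Nov"]))         # November restated at the average level (~500)
print(seasonalized_forecast(12, relatives["Dec"]))  # (500 + 120) * 1.50 = 930.0
```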


Computing Seasonal Relatives A widely used method for computing seasonal relatives involves the use of a centered moving average. This approach effectively accounts for any trend (linear or curvilinear) that might be present in the data. For example, Figure 3.7 illustrates how a three-period centered moving average closely tracks the data originally shown in Figure 3.3.

Manual computation of seasonal relatives using the centered moving average method is a bit cumbersome, so the use of software is recommended. Manual computation is illustrated in Solved Problem 4 at the end of the chapter. The Excel template (on the website) is a simple and convenient way to obtain values of seasonal relatives (indexes). Example 5 illustrates this approach.

For practical purposes, you can round the relatives to two decimal places to obtain the seasonal (standard) index values.
Computing Seasonal Relatives Using the Simple Average Method The simple average (SA) method is an alternative way to compute seasonal relatives. Each seasonal relative is the average for that season divided by the average of all seasons. This method is illustrated in Example 5, where the seasons are days. Note that there is no need to standardize the relatives when using the SA method.
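A minimal sketch of the SA method: each season's average divided by the average of all seasons. The daily demand data below are illustrative (two weeks of demand for three "seasons").

```python
def sa_relatives(rows):
    """rows: one inner list of observations per season.
    Returns each season's average divided by the overall seasonal average."""
    season_avgs = [sum(r) / len(r) for r in rows]
    overall = sum(season_avgs) / len(season_avgs)
    return [avg / overall for avg in season_avgs]

daily = [[40, 44], [60, 56], [50, 50]]   # averages 42, 58, 50; overall 50
print(sa_relatives(daily))               # [0.84, 1.16, 1.0]
```

Note that the relatives sum to the number of seasons, so no standardization step is needed.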

page 97 

page 98 

The obvious advantage of the SA method compared to the centered MA method is the simplicity of computations. When the data have a stationary mean (i.e., variation around an average), the SA method works quite well, providing values of relatives that are quite close to those obtained using the centered MA method, which is generally accepted as accurate. Conventional wisdom is that the SA method should not be used when linear trend is present in the data. However, it can be used to obtain fairly good values of seasonal relatives as long as the ratio of the intercept to the slope is large, or when variations are large relative to the slope, shown as follows. Also, the larger the ratio, the smaller the error. The general relationship is illustrated in the following figure.

Techniques for Cycles

Cycles are up-and-down movements similar to seasonal variations but of longer duration—say, two to six years between peaks. When cycles occur in time-series data, their frequent irregularity makes it difficult or impossible to project them from past data because turning points are difficult to identify. A short moving average or a naive approach may be of some value, although both will produce forecasts that lag cyclical movements by one or several periods.

The most commonly used approach is explanatory: Search for another variable that relates to, and leads, the variable of interest. For example, the number of housing starts (i.e., permits to build houses) in a given month often is an indicator of demand a few months later for products and services directly tied to construction of new homes (landscaping; sales of washers and dryers, carpeting, and furniture; new demands for shopping, transportation, schools). Thus, if an organization is able to establish a high correlation with such a leading variable (i.e., changes in the variable precede changes in the variable of interest), it can develop an equation that describes the relationship, enabling forecasts to be made. It is important that a persistent relationship exists between the two variables. Moreover, the higher the correlation, the better the chances that the forecast will be on target.


Associative techniques rely on identification of related variables that can be used to predict values of the variable of interest. For example, sales of beef may be related to the price per pound charged for beef and the prices of substitutes such as chicken, pork, and lamb; real estate prices are usually related to property location and square footage; and crop yields are related to soil conditions and the amounts and timing of water and fertilizer applications.

The essence of associative techniques is the development of an equation that summarizes the effects of predictor variables. The primary method of analysis is known as regression. A brief overview of regression should suffice to place this approach into perspective relative to the other forecasting approaches described in this chapter.

Simple Linear Regression

The simplest and most widely used form of regression involves a linear relationship between two variables. A plot of the values might appear like that in Figure 3.8. The object in linear regression is to obtain an equation of a straight line that minimizes the sum of squared vertical deviations of data points from the line (i.e., the least squares criterion). This least squares line has the equation

    y_c = a + bx

where
    y_c = Predicted (dependent) variable
    x = Predictor (independent) variable
    b = Slope of the line
    a = Value of y_c when x = 0

(Note: It is conventional to represent values of the predicted variable on the y axis and values of the predictor variable on the x axis.) Figure 3.9 is a general graph of a linear regression line.

The coefficients a and b of the line are based on the following two equations:

    b = (n Σxy − Σx Σy) / (n Σx² − (Σx)²)

    a = (Σy − b Σx) / n

where n = Number of paired observations.



Using the Excel template for linear regression

Source: Microsoft

One indication of how accurate a prediction might be for a linear regression line is the amount of scatter of the data points around the line. If the data points tend to be relatively close to the line, predictions using the linear equation will tend to be more accurate than if the data points are widely scattered. The scatter can be summarized using the standard error of estimate. It can be computed by finding the vertical difference between each data point and the computed value of the regression equation for that value of x, squaring each difference, adding the squared differences, dividing by n − 2, and then finding the square root of that value.



For the data given in Table 3.3, the error column shows the y − y_c differences. Squaring each error and summing the squares yields .01659. Hence, the standard error of estimate is
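The computation just described can be sketched in Python (the helper name is illustrative; y_pred holds the computed y_c values from the regression equation):

```python
from math import sqrt

def std_error_of_estimate(y, y_pred):
    """S_e = sqrt( Sum((y - y_c)^2) / (n - 2) )."""
    n = len(y)
    squared_errors = sum((yi - yc) ** 2 for yi, yc in zip(y, y_pred))
    return sqrt(squared_errors / (n - 2))

# Small deviations around a fitted line
se = std_error_of_estimate([5.1, 7.9, 11.0, 14.0], [5.0, 8.0, 11.0, 14.0])
```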

One application of regression in forecasting relates to the use of indicators. These are uncontrollable variables that tend to lead or precede changes in a variable of interest. For example, changes in the Federal Reserve Board’s discount rate may influence certain business activities. Similarly, an increase in energy costs can lead to price increases for a wide range of products and services. Careful identification and analysis of indicators may yield insight into possible future demand in some situations. There are numerous published indexes and websites from which to choose.

These include:

Net change in inventories on hand and on order

Interest rates for commercial loans

Industrial output

Consumer price index (CPI)

The wholesale price index

Stock market prices

page 102 

Other potential indicators are population shifts, local political climates, and activities of other firms (e.g., the opening of a shopping center may result in increased sales for nearby businesses). Three conditions are required for an indicator to be valid:

  1. The relationship between movements of an indicator and movements of the variable should have a logical explanation.

  2. Movements of the indicator must precede movements of the dependent variable by enough time so that the forecast isn’t outdated before it can be acted upon.

  3. A fairly high correlation should exist between the two variables.

Correlation measures the strength and direction of relationship between two variables. Correlation can range from −1.00 to +1.00. A correlation of +1.00 indicates that changes in one variable are always matched by changes in the other; a correlation of −1.00 indicates that increases in one variable are matched by decreases in the other; and a correlation close to zero indicates little linear relationship between two variables. The correlation between two variables can be computed using the equation

r = (nΣxy − (Σx)(Σy)) / ( √(nΣx² − (Σx)²) × √(nΣy² − (Σy)²) )


The square of the correlation coefficient, r², provides a measure of the percentage of variability in the values of y that is “explained” by the independent variable. The possible values of r² range from 0 to 1.00. The closer r² is to 1.00, the greater the percentage of explained variation. A high value of r², say .80 or more, would indicate that the independent variable is a good predictor of values of the dependent variable. A low value, say .25 or less, would indicate a poor predictor, and a value between .25 and .80 would indicate a moderate predictor.
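The correlation formula, and hence r², can be computed the same way; the sketch below is illustrative (the function name is my own):

```python
from math import sqrt

def correlation(x, y):
    """r = (n*Sxy - Sx*Sy) / (sqrt(n*Sxx - Sx^2) * sqrt(n*Syy - Sy^2))."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    return (n * sxy - sx * sy) / (sqrt(n * sxx - sx ** 2) * sqrt(n * syy - sy ** 2))

# Perfectly linear data: r = +1.00, so r**2 = 1.00 (all variation "explained")
r = correlation([1, 2, 3, 4], [5, 8, 11, 14])
r_squared = r ** 2
```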

Comments on the Use of Linear Regression Analysis

Use of simple regression analysis implies that certain assumptions have been satisfied. Basically, these are as follows:

  • Variations around the line are random. If they are random, no patterns such as cycles or trends should be apparent when the line and data are plotted.

  • Deviations around the average value (i.e., the line) should be normally distributed. A concentration of values close to the line with a small proportion of larger deviations supports the assumption of normality.

  • Predictions are being made only within the range of observed values.

If the assumptions are satisfied, regression analysis can be a powerful tool. To obtain the best results, observe the following:

  • Always plot the data to verify that a linear relationship is appropriate.

  • The data may be time-dependent. Check this by plotting the dependent variable versus time; if patterns appear, use analysis of time series instead of regression, or use time as an independent variable as part of a
    multiple regression analysis.

  • A small correlation may imply that other variables are important.

In addition, note these weaknesses of regression:

  • Simple linear regression applies only to linear relationships with
    one independent variable.

  • One needs a considerable amount of data to establish the relationship—in practice, 20 or more observations.

  • All observations are weighted equally.

page 103 

page 104 

Nonlinear and Multiple Regression Analysis

Simple linear regression may prove inadequate to handle certain problems because a linear model is inappropriate or because more than one predictor variable is involved. When nonlinear relationships are present, you should employ nonlinear regression; models that involve more than one predictor require the use of multiple regression analysis. While these analyses are beyond the scope of this text, you should be aware that they are often used. Multiple regression forecasting substantially increases data requirements.


Accuracy and control of forecasts are vital aspects of forecasting, so forecasters want to minimize forecast errors. However, the complex nature of most real-world variables makes it almost impossible to correctly predict future values of those variables on a regular basis. Moreover, because random variation is always present, there will always be some residual
page 105 error, even if all other factors have been accounted for. Consequently, it is important to include an indication of the extent to which the forecast might deviate from the value of the variable that actually occurs. This will provide the forecast user with a better perspective on how far off a forecast might be.

Decision makers will want to include accuracy as a factor when choosing among different techniques, along with cost. Accurate forecasts are necessary for the success of daily activities of every business organization. Forecasts are the basis for an organization’s schedules, and unless the forecasts are accurate, schedules will be generated that may provide for too few or too many resources, too little or too much output, the wrong output, or the wrong timing of output, all of which can lead to additional costs, dissatisfied customers, and headaches for managers.

Some forecasting applications involve a series of forecasts (e.g., weekly revenues), whereas others involve a single forecast that will be used for a one-time decision (e.g., the size of a power plant). When making periodic forecasts, it is important to monitor forecast errors to determine if the errors are within reasonable bounds. If they are not, it will be necessary to take corrective action.


Forecast error is the difference between the value that occurs and the value that was predicted for a given time period. Hence, Error = Actual − Forecast:

e_t = A_t − F_t

where t = Any given time period.

Positive errors result when the forecast is too low, while negative errors occur when the forecast is too high. For example, if actual demand for a week is 100 units, and forecast demand was 90 units, the forecast was too low. The error is 100 − 90 = +10.

Forecast errors influence decisions in two somewhat different ways. One is in making a choice between various forecasting alternatives, and the other is in evaluating the success or failure of a technique in use. We shall begin by examining ways to summarize forecast error over time, and see how that information can be applied to compare forecasting alternatives.

page 106 

Summarizing Forecast Accuracy

Forecast accuracy is a significant factor when deciding among forecasting alternatives. Accuracy is based on the historical error performance of a forecast.

Three commonly used measures for summarizing historical errors are the mean absolute deviation (MAD), the mean squared error (MSE), and the mean absolute percent error (MAPE). MAD is the average absolute error, MSE is the average of squared errors, and MAPE is the average absolute percent error. The formulas used to compute MAD, MSE, and MAPE are as follows:

MAD = Σ|Actual_t − Forecast_t| / n

MSE = Σ(Actual_t − Forecast_t)² / (n − 1)

MAPE = [ Σ( |Actual_t − Forecast_t| / Actual_t × 100 ) ] / n

Example 9 illustrates the computation of MAD, MSE, and MAPE.
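The three error measures can be sketched as follows (a hypothetical helper; note that MSE divides by n − 1, matching the formula above):

```python
def error_measures(actual, forecast):
    """Return (MAD, MSE, MAPE) for paired actual and forecast values."""
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    mad = sum(abs(e) for e in errors) / n        # average absolute error
    mse = sum(e ** 2 for e in errors) / (n - 1)  # average squared error
    # average absolute percent error, relative to each actual value
    mape = sum(abs(e) / a * 100 for e, a in zip(errors, actual)) / n
    return mad, mse, mape

# Errors are +10, -5, and 0
mad, mse, mape = error_measures([100, 110, 120], [90, 115, 120])
```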

page 107 

From a computational standpoint, the difference between these measures is that MAD weights all errors evenly, MSE weights errors according to their
squared values, and MAPE weights according to
relative error.

One use for these measures is to compare the accuracy of alternative forecasting methods. For instance, a manager could compare the results to determine which one yields the
lowest MAD, MSE, or MAPE for a given set of data. Another use is to track error performance over time to decide if attention is needed. Is error performance getting better or worse, or is it staying about the same?

In some instances, historical error performance is secondary to the ability of a forecast to respond to changes in data patterns. Choice among alternative methods would then focus on the cost of not responding quickly to a change relative to the cost of responding to changes that are not really there (i.e., random fluctuations).

Overall, the operations manager must settle on the relative importance of historical performance versus responsiveness and whether to use MAD, MSE, or MAPE to measure historical performance. MAD is the easiest to compute, but weights errors linearly. MSE squares errors, thereby giving more weight to larger errors, which typically cause more problems. MAPE should be used when there is a need to put errors in perspective. For example, an error of 10 in a forecast of 15 is huge. Conversely, an error of 10 in a forecast of 10,000 is insignificant. Hence, to put large errors in perspective, MAPE would be used. Another use of MAPE is when there is a need to compare forecast errors for
different products or services. One example would be forecasts for store brands versus national brands.


Many forecasts are made at regular intervals (e.g., weekly, monthly, quarterly). Because forecast errors are the rule rather than the exception, there will be a succession of forecast errors. Tracking the forecast errors and analyzing them can provide useful insight on whether forecasts are performing satisfactorily.

There are a variety of possible sources of forecast errors, including the following:

  1. The model may be inadequate due to (
    a) the omission of an important variable, (
    b) a change or shift in the variable that the model cannot deal with (e.g., the sudden appearance of a trend or cycle), or (
    c) the appearance of a new variable (e.g., new competitor).

  2. Irregular variations may occur due to severe weather or other natural phenomena, temporary shortages or breakdowns, catastrophes, or similar events.

  3. Random variations. Randomness is the inherent variation that remains in the data after all causes of variation have been accounted for. There are always random variations.

A forecast is generally deemed to perform adequately when the errors exhibit only random variations. Hence, the key to judging when to reexamine the validity of a particular forecasting technique is whether forecast errors are random. If they are not random, it is necessary to investigate to determine which of the other sources is present and how to correct the problem.

A very useful tool for detecting nonrandomness in errors is a control chart. Errors are plotted on a control chart in the order that they occur, such as the one depicted in Figure 3.11. The center line of the chart represents an error of zero. Note the two other lines, one above page 108 and one below the center line. They are called the upper and lower control limits because they represent the upper and lower ends of the range of acceptable variation for the errors.

In order for the forecast errors to be judged “in control” (i.e., random), two things are necessary. One is that all errors are within the control limits. The other is that no patterns (e.g., trends, cycles, noncentered data) are present. Both can be accomplished by inspection.
Figure 3.12 illustrates some examples of nonrandom errors.

Technically speaking, one could determine if any values exceeded either control limit without actually plotting the errors, but the visual detection of patterns generally requires plotting the errors, so it is best to construct a control chart and plot the errors on the chart.

To construct a control chart, first compute the MSE. The square root of MSE is used in practice as an estimate of the standard deviation of the distribution of errors. That is,

s = √MSE

Control charts are based on the assumption that when errors are random, they will be distributed according to a normal distribution around a mean of zero. Recall that for a normal distribution, approximately 95.5 percent of the values (errors in this case) can be expected to fall within limits of 0 ± 2s (i.e., 0 ± 2 standard deviations), and approximately 99.7 percent of the values can be expected to fall within ± 3s of zero. With that in mind, the following formulas can be used to obtain the upper control limit (UCL) and the lower control limit (LCL):

UCL = 0 + z√MSE
LCL = 0 − z√MSE

where z = Number of standard deviations from the mean.

Combining these two formulas, we obtain the following expression for the control limits:

Control limits = 0 ± z√MSE

page 109 
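The control limit calculation can be sketched in Python (illustrative names; z = 2 corresponds to the approximately 95.5 percent limits):

```python
from math import sqrt

def control_limits(errors, z=2.0):
    """Return (LCL, UCL) = 0 -/+ z*sqrt(MSE), with MSE = Sum(e^2)/(n - 1)."""
    n = len(errors)
    mse = sum(e ** 2 for e in errors) / (n - 1)
    s = sqrt(mse)  # estimated standard deviation of the errors
    return -z * s, z * s

def errors_in_control(errors, z=2.0):
    """True when every error falls within the control limits."""
    lcl, ucl = control_limits(errors, z)
    return all(lcl <= e <= ucl for e in errors)

lcl, ucl = control_limits([2, -1, 0, 1, -2])
```

Remember that falling within the limits is necessary but not sufficient: the plotted errors must also show no patterns.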

Another method is the tracking signal. It relates the cumulative forecast error to the average absolute error (i.e., MAD). The intent is to detect any bias in errors over time (i.e., a tendency for a sequence of errors to be positive or negative). The tracking signal is computed period by period using the following formula:

Tracking signal_t = Σ(Actual_t − Forecast_t) / MAD_t

Values can be positive or negative. A value of zero would be ideal; limits of ± 4 or ± 5 are often used for a range of acceptable values of the tracking signal. If a value outside the acceptable range occurs, that would be taken as a signal that there is bias in the forecast, and that corrective action is needed.

After an initial value of MAD has been determined, the value of MAD can be updated and smoothed (SMAD) using exponential smoothing:

MAD_t = MAD_(t−1) + α(|e_t| − MAD_(t−1))

page 111 
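The period-by-period tracking signal, with MAD updated by exponential smoothing, can be sketched as follows (the names, the alpha default, and seeding MAD with the first absolute error are illustrative assumptions):

```python
def tracking_signals(actual, forecast, alpha=0.1):
    """Tracking signal_t = (cumulative error) / MAD_t.

    MAD is seeded with the first absolute error, then smoothed:
    MAD_t = MAD_(t-1) + alpha * (|e_t| - MAD_(t-1))
    """
    signals = []
    cumulative_error = 0.0
    mad = None
    for a, f in zip(actual, forecast):
        e = a - f
        cumulative_error += e
        mad = abs(e) if mad is None else mad + alpha * (abs(e) - mad)
        signals.append(cumulative_error / mad)
    return signals

# Offsetting errors (+10, then -10) drive the signal back toward zero
ts = tracking_signals([100, 100], [90, 110])
```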

A plot helps you to visualize the process and enables you to check for possible patterns (i.e., nonrandomness) within the limits that suggest an improved forecast is possible.


Like the tracking signal, a control chart focuses attention on deviations that lie outside predetermined limits. With either approach, however, it is desirable to check for possible patterns in the errors, even if all errors are within the limits.

If nonrandomness is found, corrective action is needed. That will result in less variability in forecast errors, and, thus, in narrower control limits. (Revised control limits must be computed using the resulting forecast errors.)
Figure 3.13 illustrates the impact on control limits due to decreased error variability.

Comment The control chart approach is generally superior to the tracking signal approach. A major weakness of the tracking signal approach is its use of cumulative errors: Individual errors can be obscured so that large positive and negative values cancel each other. Conversely, with control charts, every error is judged individually. Thus, it can be misleading to rely on a tracking signal approach to monitor errors. In fact, the historical roots of the tracking signal approach date from before the first use of computers in business. At that time, it was much more difficult to compute standard deviations than to compute average deviations; for that reason, the concept of a tracking signal was developed. Now computers and calculators can easily provide standard deviations. Nonetheless, the use of tracking signals has persisted, probably because users are unaware of the superiority of the control chart approach.


Many different kinds of forecasting techniques are available, and no single technique works best in every situation. When selecting a technique, the manager or analyst must take a number of factors into consideration.

The two most important factors are
cost and
accuracy. How much money is budgeted for generating the forecast? What are the possible costs of errors, and what are the benefits that might accrue from an accurate forecast? Generally speaking, the higher the accuracy, the higher the cost, so it is important to weigh cost–accuracy trade-offs carefully. The best forecast is not necessarily the most accurate or the least costly; rather, it is some combination of accuracy and cost deemed best by management.

Other factors to consider in selecting a forecasting technique include the availability of historical data; the availability of computer software; and the time needed to gather and analyze data and to prepare the forecast. The forecast horizon is important because some techniques are more suited to long-range forecasts while others work best for the short range. For example, moving averages and exponential smoothing are essentially short-range techniques, because they produce forecasts for the
next period. Trend equations can be used to project over much longer time periods. When using time-series data,
plotting the data can be very helpful in choosing an appropriate method. Several of the qualitative techniques are well-suited to long-range forecasts because they do not require historical data. The Delphi method and executive opinion methods are often used for long-range planning. New products and services lack historical data, so forecasts for them must be based on subjective estimates. In many cases,
page 112 experience with similar items is relevant.
Table 3.4 provides a guide for selecting a forecasting method.
Table 3.5 provides additional perspectives on forecasts in terms of the time horizon.


A guide to selecting an appropriate forecasting method

Source: Adapted from J. Holton Wilson and Deborah Allison-Koerber, “Combining Subjective and Objective Forecasts Improves Results,”
Journal of Business Forecasting, Fall 1992, p. 4. Institute of Business Forecasting.


Forecast factors, by range of forecast

Factor                              | Short Range                       | Intermediate Range                | Long Range
1. Frequency                        |                                   |                                   |
2. Level of aggregation             | Product family                    | Total output                      | Type of product/service
3. Type of model                    | Smoothing, Projection, Regression | Projection, Seasonal, Regression  | Managerial judgment
4. Degree of management involvement |                                   |                                   |
5. Cost per forecast                |                                   |                                   |

In some instances, a manager might use more than one forecasting technique to obtain independent forecasts. If the different techniques produced approximately the same predictions, that would give increased confidence in the results; disagreement among the forecasts would indicate that additional analysis may be needed. Another possibility is combining the results of two techniques. Still another possibility is to apply several techniques to recent data, use the one with the least error to make the actual forecast, then continue tracking all of the techniques and again use the one with the least error to make the next forecast, and so on. Then, if one technique consistently performs better than the others, that technique would emerge as the favorite.


A manager can take a
reactive or a
proactive approach to a forecast. A reactive approach views forecasts as probable future demand, and a manager reacts to meet that demand (e.g., adjusts production rates, inventories, the workforce). Conversely, a proactive approach seeks to actively influence demand (e.g., by means of advertising, pricing, or product/service changes).

page 113 

Generally speaking, a proactive approach requires either an explanatory model (e.g., regression) or a subjective assessment of the influence on demand. A manager might make two forecasts: one to predict what will happen under the status quo, and a second one based on a “what if” approach, if the results of the status quo forecast are unacceptable.


Computers play an important role in preparing forecasts based on quantitative data. Their use allows managers to develop and revise forecasts quickly, and without the burden of manual computations. There is a wide range of software packages available for forecasting. The Excel templates on the text website are an example of a spreadsheet approach. There are templates for moving averages, exponential smoothing, linear trend equation, trend-adjusted exponential smoothing, and simple linear regression. Some templates are illustrated in the Solved Problems section at the end of the chapter.


Forecasts are the basis for many decisions and an essential input for matching supply and demand. Clearly, the more accurate an organization’s forecasts, the better prepared it will be to take advantage of future opportunities and reduce potential risks. A worthwhile strategy can be to work to improve short-term forecasts. Better short-term forecasts will not only enhance profits through lower inventory levels, fewer shortages, and improved customer service, they also will enhance forecasting
credibility throughout the organization: If short-term forecasts are inaccurate, why should other areas of the organization put faith in long-term forecasts? Also, the sense of confidence that accurate short-term forecasts would generate would allow allocating more resources to strategic and medium- to longer-term planning and less on short-term, tactical activities.

Maintaining accurate, up-to-date information on prices, demand, and other variables can have a significant impact on forecast accuracy. An organization also can do other things to improve forecasts. These do not involve searching for improved techniques but relate to the inverse relation of accuracy to the forecast horizon: Forecasts that cover shorter time frames tend to be more accurate than longer-term forecasts. Recognizing this, management might choose to devote efforts to
shortening the time horizon that forecasts must cover. Essentially, this means shortening the
lead time needed to respond to a forecast. This might involve building
flexibility into operations to permit rapid response to changing demands for products and services, or to changing volumes in quantities demanded; shortening the lead time required to obtain supplies, equipment, and raw materials, or the time needed to train or retrain employees; or shortening the time needed to
develop new products and services.

Lean systems are demand driven; goods are produced to fulfill orders rather than to hold in inventory until demand arises. Consequently, they are far less dependent on short-term forecasts than more traditional systems.

In certain situations, forecasting can be very difficult when orders have to be placed far in advance. This is the case, for example, when demand is sensitive to weather conditions, such as the arrival of spring, and there is a narrow window for demand. Orders for products or services that relate to this (e.g., garden materials, advertising space) often have to be placed many months in advance—far beyond the ability of forecasters to accurately predict weather conditions and, hence, the timing of demand. In such cases, there may be pressure for conservative forecasts from salespeople, who want low quotas, and from financial people, who don’t want to have to deal with the cost of excess inventory. Conversely, operations people may want more optimistic forecasts to reduce the risk of being blamed for possible shortages.

Sharing forecasts or demand data throughout the supply chain can improve forecast quality in the supply chain, resulting in lower costs and shorter lead times. For example, both Hewlett-Packard and IBM require resellers to include such information in their contracts.

The following reading provides additional insights on forecasting and supply chains.

page 114 

See, for example, Bernard T. Smith and Virginia Brice,
Focus Forecasting: Computer Techniques for Inventory Control Revised for the Twenty-First Century (Essex Junction, VT: Oliver Wight, 1984).

See, for example,
The National Bureau of Economic Research, The Survey of Current Business, The Monthly Labor Review, and
Business Conditions Digest.

The absolute value, represented by the two vertical lines in Formula 3–2, ignores minus signs; all data are treated as positive values. For example, −2 becomes +2.

The actual value could be computed as

The theory and application of control charts and the various methods for detecting patterns in the data are covered in more detail in Chapter 10, on quality control.

page 138 

page 139 


This section discusses what product and service designers do, the reasons for design (or redesign), and key questions that management must address.

What Does Product and Service Design Do?

The primary focus of product or service design should be on customer satisfaction. The various activities and responsibilities of product and service design include the following (functional interactions are shown in parentheses):

  1. Translate customer wants and needs into product and service requirements (marketing, operations)

  2. Refine existing products and services (marketing)

  3. Develop new products and/or services (marketing, operations)

  4. Formulate quality goals (marketing, operations)

  5. Formulate cost targets (accounting, finance, operations)

  6. Construct and test prototypes (operations, marketing, engineering)

  7. Document specifications

  8. Translate product and service specifications into
    process specifications (engineering, operations)

Product and service design involves or affects nearly every functional area of an organization. However, marketing and operations have major involvement.

page 141 

Objectives of Product and Service Design

Primary consideration: Customer satisfaction.

Secondary considerations: Cost or profit, quality, ability to produce a product or provide a service, ethics/safety, and sustainability.

Key Questions

From a buyer’s standpoint, most purchasing decisions entail two fundamental considerations; one is cost and the other is quality or performance. From the organization’s standpoint, the key questions are:

  1. Is there demand for it? What is the potential size of the market, and what is the expected demand profile (will demand be long term or short term, will it grow slowly or quickly)?

  2. Can we do it? Do we have the necessary knowledge, skills, equipment, capacity, and supply chain capability? For products, this is known as manufacturability; for services, this is known as serviceability. Also, is outsourcing some or all of the work an option?

  3. What level of quality is appropriate? What do customers expect? What level of quality do competitors provide for similar items? How would it fit with our current offerings?

  4. Does it make sense from an economic standpoint? What are the potential liability issues, ethical considerations, sustainability issues, costs, and profits? For nonprofits, is the cost within budget?

Reasons for Product and Service Design or Redesign

Product and service design typically has
strategic implications for the success and prosperity of an organization. Consequently, decisions in this area are some of the most fundamental that managers must make. Product and service design or redesign should be closely tied to an organization’s strategy.

Organizations become involved in product and service design or redesign for a variety of reasons. The main forces that initiate design or redesign are market opportunities and threats. The factors that give rise to market opportunities and threats can be one or more of the following:

  • Economic (e.g., low demand, excessive warranty claims, the need to reduce costs)

  • Social and demographic (e.g., aging baby boomers, population shifts)

  • Political, liability, or legal (e.g., government changes, safety issues, new regulations)

  • Competitive (e.g., new or changed products or services, new advertising/promotions)

  • Cost or availability (e.g., of raw materials, components, labor, water, energy)

  • Technological (e.g., in product components, processes)

While each of these factors may seem obvious, let’s reflect a bit on technological changes, which can create a need for product or service design changes in several different ways. An obvious way is new technology that can be used directly in a product or service (e.g., a faster, smaller microprocessor that spawns a new generation of smartphones). Technology also can indirectly affect product and service design: Advances in processing technology may require altering an existing design to make it compatible with the new processing technology. Still another way that technology can impact product design is illustrated by digital recording technology that allows television viewers to skip commercials when they view a recorded program. This means that advertisers (who support a television program) can’t get their message to viewers. To overcome this, some advertisers have adopted a strategy of making their products an integral part of a television program, say by having their products prominently displayed and/or mentioned by the actors as a way to call viewers’ attention to their products without the need for commercials.

The following reading suggests another potential benefit of product redesign.

page 142 


Ideas for new or redesigned products or services can come from a variety of sources, including customers, the supply chain, competitors, employees, and research. Customer input can come from surveys, focus groups, complaints, and unsolicited suggestions for improvement. Input from suppliers, distributors, and employees can be obtained from interviews, direct or indirect suggestions, and complaints.

One of the strongest motivators for new and improved products or services is competitors’ products and services. By studying a competitor’s products or services and how the competitor operates (pricing policies, return policies, warranties, location strategies, etc.), an organization can glean many ideas. Beyond that, some companies purchase a competitor’s product and then carefully dismantle and inspect it, searching for ways to improve their own product. This is called reverse engineering. Automotive companies use this tactic in developing new models. They examine competitors’ vehicles, searching for best-in-class components (e.g., best hood release, best dashboard display, best door handle). Sometimes, reverse engineering can enable a company to leapfrog the competition by developing an even better product. However, some forms of reverse engineering are illegal under the Digital Millennium Copyright Act.

page 143 

With increased emphasis on supply chains and supplier partnerships, suppliers are becoming a still more important source of ideas.

Research is another source of ideas for new or improved products or services.

Research and development (R&D)
refers to organized efforts that are directed toward increasing scientific knowledge and product or process innovation. Most of the advances in semiconductors, medicine, communications, and space technology can be attributed to R&D efforts at colleges and universities, research foundations, government agencies, and private enterprises.

R&D efforts may involve basic research, applied research, or development.

Basic research has the objective of advancing the state of knowledge about a subject, without any near-term expectation of commercial applications.

Applied research has the objective of achieving commercial applications.

Development converts the results of applied research into useful commercial applications.

Basic research, because it does not lead to near-term commercial applications, is generally underwritten by the government and large corporations. Conversely, applied research and development, because of the potential for commercial applications, appeals to a wide spectrum of business organizations.

The benefits of successful R&D can be tremendous. Some research leads to patents, with the potential of licensing and royalties. However, many discoveries are not patentable, or companies don’t wish to divulge details of their ideas so they avoid the patent route. Even so, the first organization to bring a new product or service to the market generally stands to profit from it before the others can catch up. Early products may be priced higher because a temporary monopoly exists until competitors bring their versions out.

The costs of R&D can be high. Some companies spend more than $1 million a day on R&D. Large companies in the automotive, computer, communications, and pharmaceutical industries spend even more. For example, IBM spends about $6 billion a year, and Hewlett Packard Enterprise about $2 billion a year. Even so, critics say that many U.S. companies spend too little on R&D, a factor often cited in the loss of competitive advantage.

It is interesting to note that some companies are now shifting from a focus primarily on products to a more balanced approach that explores both product and process R&D. Also, there is increasing recognition that technologies often go through life cycles, the same way that many products do. This can impact R&D efforts on two fronts. Sustained economic growth requires constant attention to competitive factors over a life cycle, and it also requires planning to be able to participate in the next-generation technology.

In certain instances, however, research may not be the best approach. The preceding reading illustrates a research success.


Designers must be careful to take into account a wide array of legal and ethical considerations. Generally, they are mandatory. Moreover, if there is a potential to harm the environment, then those issues also become important. Most organizations are subject to numerous government agencies that regulate them. Among the more familiar federal agencies are the Food and Drug Administration, the Occupational Health and Safety Administration, the Environmental Protection Agency, and various state and local agencies. Bans on cyclamates, red food dye, phosphates, and asbestos have sent designers scurrying back to their drawing boards to find alternative designs that were acceptable to both government regulators and customers. Similarly, automobile pollution standards and safety features, such as seat belts, air bags, safety glass, and energy-absorbing bumpers and frames, have had a substantial impact on automotive design. Much attention also has been directed toward toy design to remove sharp edges, small pieces that can cause choking, and toxic materials. The government further regulates construction, requiring the use of lead-free paint, safety glass in entranceways, access to public buildings for individuals with disabilities, and standards for insulation, electrical wiring, and plumbing.

Product liability can be a strong incentive for design improvements.

Product liability is the responsibility of a manufacturer for any injuries or damages caused by a faulty product because of poor workmanship or design. Many business firms have faced lawsuits related to their products, including Firestone Tire & Rubber, Ford Motor Company, General Motors, tobacco companies, and toy manufacturers. Manufacturers also are faced with the implied warranties created by state laws under the Uniform Commercial Code, which says that products carry an implication of merchantability and fitness; that is, a product must be usable for its intended purposes.

The suits and potential suits have led to increased legal and insurance costs, expensive settlements with injured parties, and costly recalls. Moreover, increasing customer awareness of product safety can adversely affect product image and subsequent demand for a product.

Thus, it is extremely important to design products that are reasonably free of hazards. When hazards do exist, it is necessary to install safety guards or other devices for reducing accident potential, and to provide adequate warning notices of risks. Consumer groups, business firms, and various government agencies often work together to develop industrywide standards that help avoid some of the hazards.

Ethical issues often arise in the design of products and services; it is important for managers to be aware of these issues and for designers to adhere to ethical standards. Designers are often under pressure to speed up the design process and to cut costs. These pressures often require them to make trade-off decisions, many of which involve ethical considerations. One example of what can happen is “vaporware,” when a software company doesn’t issue a release of software as scheduled because it is struggling with production problems or bugs in the software. The company faces the dilemma of releasing the software right away or waiting until most of the bugs have been removed—knowing that the longer it waits, the more time will be needed before it receives revenues and the greater the risk of damage to its reputation.

Organizations generally want designers to adhere to guidelines such as the following:

  • Produce designs that are consistent with the goals of the organization. For instance, if the company has a goal of high quality, don’t cut corners to save on costs, even in areas where it won’t be apparent to the customer.


  • Give customers the value they expect.

  • Make health and safety a primary concern. At risk are employees who will produce goods or deliver services, workers who will transport the products, customers who will use the products or receive the services, and the general public, which might be endangered by the products or services.


Human factor issues often arise in the design of consumer products. Safety and liability are two critical issues in many instances, and they must be carefully considered. For example, the crashworthiness of vehicles is of much interest to consumers, insurance companies, automobile producers, and the government.

Another issue for designers to take into account is adding new features to their products or services. Companies in certain businesses may seek a competitive edge by adding new features. Although this can have obvious benefits, it can sometimes be “too much of a good thing,” and be a source of customer dissatisfaction. This “creeping featurism” is particularly evident in electronic products such as handheld devices that continue to offer new features, and more complexity, even while they are shrinking in size. This may result in low consumer ratings in terms of “ease of use.”


Product designers in companies that operate globally also must take into account any cultural differences of different countries or regions related to the product. This can result in different designs for different countries or regions, as illustrated by the following reading.



Traditionally, product design has been conducted by members of the design team who are located in one facility or a few nearby facilities. However, organizations that operate globally are discovering advantages in global product design, which uses the combined efforts of a team of designers who work in different countries and even on different continents. Such virtual teams can provide a range of comparative advantages over traditional teams, such as engaging the best human resources from around the world without the need to assemble them all in one place, and operating on a 24-hour basis, thereby decreasing the time-to-market. The use of global teams also allows customer needs assessment to be done in more than one country, with local resources, opportunities, and constraints taken into account. Global product design can provide design outcomes that increase the marketability and utility of a product. The diversity of an international team may yield different points of view and ideas and information that enrich the design process. However, care must be taken in managing the diversity, because if it is mismanaged, it can lead to conflicts and miscommunications.

Advances in information technology have played a key role in the viability of global product design teams by enabling team members to maintain continual contact with each other and to instantaneously share designs and progress, and to transmit engineering changes and other necessary information.


Product and service design is a focal point in the quest for sustainability. Key aspects include cradle-to-grave assessment, end-of-life programs, reduction of costs and materials used, reuse of parts of returned products, and recycling.

Cradle-to-Grave Assessment

Cradle-to-grave assessment, also known as life cycle analysis, is the assessment of the environmental impact of a product or service throughout its useful life, focusing on such factors as global warming (the amount of carbon dioxide released into the atmosphere), smog formation, oxygen depletion, and solid waste generation. For products, cradle-to-grave analysis takes into account impacts in every phase of a product’s life cycle, from raw material extraction from the earth, or the growing and harvesting of plant materials, through fabrication of parts and assembly operations, or other processes used to create products, as well as the use or consumption of the product, and final disposal at the end of a product’s useful life. It also considers energy consumption, pollution and waste, and transportation in all phases. Although services generally involve less use of materials, cradle-to-grave assessment of services is nonetheless important, because services consume energy and involve many of the same or similar processes that products involve.

The goal of cradle-to-grave assessment is to choose products and services that have the least environmental impact, while still taking into account economic considerations. The procedures of cradle-to-grave assessment are part of the ISO 14000 environmental management standards, which are discussed in
Chapter 9.

End-of-Life Programs

End-of-life (EOL) programs deal with products that have reached the end of their useful lives. The products include both consumer products and business equipment. The purpose of these programs is to reduce the dumping of products, particularly electronic equipment, in landfills or third-world countries, as has been the common practice, or incineration, which converts materials into hazardous air and water emissions and generates toxic ash. Although the programs are not limited to electronic equipment, that equipment poses problems because it typically contains toxic materials such as lead, cadmium, chromium, and other heavy metals. IBM provides a good example of the potential of EOL programs. Over the last 15 years, it has collected about 2 billion pounds of product and product waste.

The Three Rs: Reduce, Reuse, and Recycle

Designers often reflect on three particular aspects of potential cost savings and reducing environmental impact: reducing the use of materials through value analysis; refurbishing and then reselling returned goods that are deemed to have additional useful life, which is referred to as remanufacturing; and reclaiming parts of unusable products for recycling.

Reduce: Value Analysis

Value analysis refers to an examination of the function of parts and materials in an effort to reduce the cost and/or improve the performance of a product. Typical questions that would be asked as part of the analysis include: Could a cheaper part or material be used? Is the function necessary? Can the function of two or more parts or components be performed by a single part for a lower cost? Can a part be simplified? Could product specifications be relaxed, and would this result in a lower price? Could standard parts be substituted for nonstandard parts? Table 4.1 provides a checklist of questions that can guide a value analysis.


Overview of value analysis

  1. Select an item that has a high annual dollar volume. This can be material, a purchased item, or a service.

  2. Identify the function of the item.

  3. Obtain answers to these kinds of questions:

    1. Is the item necessary, and does it have value, or can it be eliminated?

    2. Are there alternative sources for the item?

    3. Can the item be provided internally?

    4. What are the advantages of the present arrangement?

    5. What are the disadvantages of the present arrangement?

    6. Could another material, part, or service be used instead?

    7. Can specifications be less stringent to save cost or time?

    8. Can two or more parts be combined?

    9. Can more/less processing be done on the item to save cost or time?

    10. Do suppliers/providers have suggestions for improvements?

    11. Do employees have suggestions for improvements?

    12. Can packaging be improved or made less costly?

  4. Analyze the answers obtained above, as well as the answers to other questions that arise, and then make recommendations.

The following reading describes how Kraft Foods is working to reduce water and energy use, CO2 and plant waste, and packaging.


Reuse: Remanufacturing

An emerging concept in manufacturing is the remanufacturing of products. Remanufacturing refers to refurbishing used products by replacing worn-out or defective components, and reselling the products. This can be done by the original manufacturer or another company. Among the products that have remanufactured components are automobiles, printers, copiers, cameras, computers, and telephones.

There are a number of important reasons for doing this. One is that a remanufactured product can be sold for about 50 percent of the cost of a new product. Another is that the process requires mostly unskilled and semiskilled workers. Also, in the global market, European lawmakers are increasingly requiring manufacturers to take back used products, because this means fewer products end up in landfills and there is less depletion of natural resources, such as raw materials and fuel.


Designing products so they can be more easily taken apart has given rise to yet another design consideration: design for disassembly (DFD).


Recycling is sometimes an important consideration for designers. Recycling means recovering materials for future use. This applies not only to manufactured parts but also to materials used during production, such as lubricants and solvents. Reclaimed metal or plastic parts may be melted down and used to make different products. (See readings above and on next page.)

Companies recycle for a variety of reasons, including

  1. Cost savings

  2. Environmental concerns

  3. Environmental regulations

An interesting note: Companies that want to do business in the European Union must show that a specified proportion of their products are recyclable.

The pressure to recycle has given rise to the term design for recycling (DFR), referring to product design that takes into account the ability to disassemble a used product to recover the recyclable parts.



Aside from legal, ethical, environmental, and human considerations, designers must also take into account product or service life cycles, how much standardization to incorporate, product or service reliability, and the range of operating conditions under which a product or service must function. These topics are discussed in this section. We begin with life cycles.

Strategies for Product or Service Life Stages

Most, but not all, products and services go through a series of stages over their useful life, sometimes referred to as their life cycle, as shown in Figure 4.1. Demand typically varies by phase. Different phases call for different strategies. In every phase, forecasts of demand and cash flow are key inputs for strategy.

When a product or service is introduced, it may be treated as a curiosity item. Many potential buyers may suspect that all the bugs haven’t been worked out and that the price may drop after the introductory period. Strategically, companies must carefully weigh the trade-offs in getting all the bugs out versus getting a leap on the competition, as well as getting to market at an advantageous time. For example, introducing new high-tech products or features during peak back-to-school buying periods or holiday buying periods can be highly desirable.

It is important to have a reasonable forecast of initial demand so an adequate supply of product or an adequate service capacity is in place.

Over time, design improvements and increasing demand yield higher reliability and lower costs, leading to growth in demand. In the growth phase, it is important to obtain accurate projections of the demand growth rate and how long it will persist, and then to ensure that capacity increases coincide with increasing demand.

In the next phase, the product or service reaches maturity, and demand levels off. Few, if any, design changes are needed. Generally, costs are low and productivity is high. New uses for products or services can extend their life and increase the market size. Examples include baking soda, duct tape, and vinegar. The maker of LEGOs has found a way to grow its market, as described in the following reading.

In the decline phase, decisions must be made about whether to discontinue a product or service and replace it with new ones or abandon the market, or to attempt to find new uses or new users for the existing product or service. For example, duct tape and baking soda are two products that have been employed well beyond their original uses of taping heating and cooling ducts and cooking. The advantages of keeping existing products or services can be tremendous. The same workers can produce the product or provide the service using much of the same equipment, the same supply chain, and perhaps the same distribution channels. Consequently, costs tend to be very low, and additional resource needs and training needs are low.


Some products do not exhibit life cycles: wooden pencils; paper clips; nails; knives, forks, and spoons; drinking glasses; and similar items. However, most new products do.

Some service life cycles are related to the life cycles of products. For example, as older products are phased out, services such as installation and repair of the older products also phase out.

Wide variations exist in the amount of time a particular product or service takes to pass through a given phase of its life cycle: Some pass through various stages in a relatively short period; others take considerably longer. Often, it is a matter of the basic need for the item and the rate of technological change. Some toys, novelty items, and style items have a life cycle of less than one year, whereas other, more useful items, such as clothes washers and dryers, may last for many years before yielding to technological change.

Product Life Cycle Management

Product life cycle management (PLM) is a systematic approach to managing the series of changes a product goes through, from its conception, design, and development, through production and any redesign, to its end of life. PLM incorporates everything related to a particular product. That includes data pertaining to production processes, business processes, people, and anything else related to the product.

PLM software can be used to automate the management of product-related data and integrate the data with other business processes, such as enterprise resource planning (discussed in Chapter 12). A goal of PLM is to eliminate waste and improve efficiency. For example, PLM is considered to be an integral part of lean production (discussed in Chapter 14).

There are three phases of PLM application:

  • Beginning of life, which involves design and development;

  • Middle of life, which involves working with suppliers, managing product information and warranties; and

  • End of life, which involves strategies for product discontinuance, disposal, or recycling.

Although PLM is generally associated with manufacturing, the same management structure can be applied to software development and services.

Degree of Standardization

An important issue that often arises in both product/service design and process design is the degree of standardization. Standardization refers to the extent to which there is an absence of variety in a product, service, or process. Standardized products are made in large quantities of identical items; calculators, computers, and 2 percent milk are examples. Standardized service implies that every customer or item processed receives essentially the same service. An automatic car wash is a good example: Each car, regardless of how clean or dirty it is, receives the same service. Standardized processes deliver standardized service or produce standardized goods.

Standardization carries a number of important benefits, as well as certain disadvantages. Standardized products are immediately available to customers. Standardized products mean interchangeable parts, which greatly lower the cost of production while increasing productivity and making replacement or repair relatively easy compared with that of customized parts. Design costs are generally lower. For example, automobile producers standardize key components of automobiles across product lines; components such as brakes, electrical systems, and other “under-the-skin” parts would be the same for all car models. By reducing variety, companies save time and money while increasing the quality and reliability of their products.

Another benefit of standardization is reduced time and cost to train employees and reduced time to design jobs. Similarly, the scheduling of work, inventory handling, and purchasing and accounting activities become much more routine, and quality is more consistent.

Lack of standardization can at times lead to serious difficulties and competitive struggles. For example, the use of the English system of measurement by U.S. manufacturers, while most of the rest of the world’s manufacturers use the metric system, has led to problems in selling U.S. goods in foreign countries and in buying foreign machines for use in the United States. This may make it more difficult for U.S. firms to compete in the European Union.

Standardization also has disadvantages. A major one relates to the reduction in variety. This can limit the range of customers to whom a product or service appeals. And that creates a risk that a competitor will introduce a better product or greater variety and realize a competitive advantage. Another disadvantage is that a manufacturer may freeze (standardize) a design prematurely and, once the design is frozen, find compelling reasons to resist modification.

Obviously, designers must consider important issues related to standardization when making choices. The major advantages and disadvantages of standardization are summarized in Table 4.2.


Advantages and disadvantages of standardization

Advantages:

  1. Fewer parts to deal with in inventory and in manufacturing.

  2. Reduced training costs and time.

  3. More routine purchasing, handling, and inspection procedures.

  4. Orders fillable from inventory.

  5. Opportunities for long production runs and automation.

  6. Need for fewer parts justifies increased expenditures on perfecting designs and improving quality control procedures.

Disadvantages:

  1. Designs may be frozen with too many imperfections remaining.

  2. High cost of design changes increases resistance to improvements.

  3. Decreased variety results in less consumer appeal.

Designing for Mass Customization

Companies like standardization because it enables them to produce high volumes of relatively low-cost products, albeit products with little variety. Customers, on the other hand, typically prefer more variety, although they like the low cost. The question for producers is how to resolve these issues without (1) losing the benefits of standardization, and (2) incurring a host of problems that are often linked to variety. These include increasing the resources needed to achieve design variety; increasing variety in the production process, which would add to the skills necessary to produce products, causing a decrease in productivity; creating an additional inventory burden during and after production, by having to carry replacement parts for the increased variety of parts; and adding to the difficulty of diagnosing and repairing product failures. The answer, at least for some companies, is mass customization, a strategy of producing standardized goods or services, but incorporating some degree of customization in the final product or service. Several tactics make this possible. One is delayed differentiation, and another is modular design. (See reading on following page.)

Delayed differentiation is a postponement tactic: the process of producing, but not quite completing, a product or service, postponing completion until customer preferences or specifications are known. There are a number of variations of this. In the case of goods, almost-finished units might be held in inventory until customer orders are received, at which time customized features are incorporated, according to customer requests. For example, furniture makers can produce dining room sets, but not apply stain, allowing customers a choice of stains. Once the choice is made, the stain can be applied in a relatively short time, thus eliminating a long wait for customers and giving the seller a competitive advantage. Similarly, various e-mail or internet services can be delivered to customers as standardized packages, which can then be modified according to the customer’s preferences. HP printers that are made in the United States but intended for foreign markets are mostly completed in domestic assembly plants and then finalized closer to the country of use. The result of delayed differentiation is a product or service with customized features that can be quickly produced, appealing to the customers’ desire for variety and speed of delivery, and yet one that for the most part is standardized, enabling the producer to realize the benefits of standardized production. This technique is not new. Manufacturers of men’s clothing, for example, produce suits with pants whose legs are unfinished, allowing customers a choice of exact length and whether to have cuffs. What is new is the extent to which business organizations are finding ways to incorporate this concept into a broad range of products and services.


Modular design is a form of standardization. Modules represent groupings of component parts into subassemblies, usually to the point where the individual parts lose their separate identity. One familiar example of modular design is computers, which have modular parts that can be replaced if they become defective. By arranging modules in different configurations, different computer capabilities can be obtained. For mass customization, modular design enables producers to quickly assemble products with modules to achieve a customized configuration for an individual customer, avoiding the long customer wait that would occur if individual parts had to be assembled. Dell Computers has successfully used this concept to become a dominant force in the PC industry by offering consumers the opportunity to configure modules according to their own specifications. Many other computer manufacturers now use a similar approach. Modular design also is found in the construction industry. One firm in Rochester, New York, makes prefabricated motel rooms complete with wiring, plumbing, and even room decorations in its factory and then moves the complete rooms by rail to the construction site, where they are integrated into the structure.

One advantage of modular design of equipment compared with nonmodular design is that failures are often easier to diagnose and remedy because there are fewer pieces to investigate. Similar advantages are found in the ease of repair and replacement; the faulty module is conveniently removed and replaced with a good one. The manufacture and assembly of modules generally involve simplifications: Fewer parts are involved, so purchasing and inventory control become more routine, fabrication and assembly operations become more standardized, and training costs often are relatively low.

The main disadvantages of modular design stem from the decrease in variety: The number of possible configurations of modules is much less than the number of possible configurations based on individual components. Another disadvantage that is sometimes encountered is the inability to disassemble a module in order to replace a faulty part; the entire module must be scrapped—usually at a higher cost.
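The gap in variety between module-level and part-level choice can be made concrete with a little arithmetic. The numbers below (three module slots, four variants per module, five parts inside each module) are purely hypothetical, chosen only to illustrate the comparison:

```python
# Hypothetical product: 3 module slots, each module offered in 4 variants,
# and each module internally built from 5 parts (also 4 variants apiece).
module_variants = 4
slots = 3
parts_per_module = 5

# Choosing at the module level: one pick per slot.
modular_configs = module_variants ** slots                          # 64

# Choosing at the individual-part level: one pick per part.
component_configs = module_variants ** (slots * parts_per_module)   # 4**15

print(modular_configs, component_configs)  # 64 vs. 1073741824
```

Even with generous module variety, the customer sees 64 possible products rather than the billion-plus that unrestricted part-level mixing would allow; that reduction is exactly the trade the producer makes for simpler assembly, inventory, and repair.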


Reliability is a measure of the ability of a product, a part, a service, or an entire system to perform its intended function under a prescribed set of conditions. The importance of reliability is underscored by its use by prospective buyers in comparing alternatives, and by sellers as one determinant of price. Reliability also can have an impact on repeat sales, reflect on the product’s image, and, if it is too low, create legal implications. Reliability is also a consideration for sustainability: The higher the reliability of a product, the fewer the resources that will be needed to maintain it, and the less frequently it will involve the three Rs.

The term failure is used to describe a situation in which an item does not perform as intended. This includes not only instances in which the item does not function at all, but also instances in which the item’s performance is substandard or it functions in a way not intended. For example, a smoke alarm might fail to respond to the presence of smoke (not operate at all), it might sound an alarm that is too faint to provide an adequate warning (substandard performance), or it might sound an alarm even though no smoke is present (unintended response).

Reliabilities are always specified with respect to certain conditions, called normal operating conditions. These can include load, temperature, and humidity ranges, as well as operating procedures and maintenance schedules. Failure of users to heed these conditions often results in premature failure of parts or complete systems. For example, using a passenger car to tow heavy loads will cause excess wear and tear on the drive train; driving over potholes or curbs often results in untimely tire failure; and using a calculator to drive nails might have a marked impact on its usefulness for performing mathematical operations.

Improving Reliability Reliability can be improved in a number of ways, some of which are listed in Table 4.3.


Potential ways to improve reliability

  1. Improve component design.

  2. Improve production and/or assembly techniques.

  3. Improve testing.

  4. Use backups.

  5. Improve preventive maintenance procedures.

  6. Improve user education.

  7. Improve system design.

Because overall system reliability is a function of the reliability of individual components, improvements in their reliability can increase system reliability. Unfortunately, inadequate production or assembly procedures can negate even the best of designs, and this is often a source of failures. System reliability can be increased by the use of backup components. Failures in actual use often can be reduced by upgrading user education and refining maintenance recommendations or procedures. Finally, it may be possible to increase the overall reliability of the system by simplifying the system (thereby reducing the number of components that could cause the system to fail) or altering component relationships (e.g., increasing the reliability of interfaces).
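For a system whose components must all work (a series arrangement with independent failures), overall reliability is the product of the component reliabilities, and a backup raises a component's effective reliability to p + p_backup(1 − p). A minimal sketch with illustrative numbers, showing why a backup on even one component lifts the whole system:

```python
from math import prod

def series_reliability(reliabilities):
    """System reliability when every component must work and
    failures are independent: the probabilities multiply."""
    return prod(reliabilities)

def with_backup(primary, backup):
    """Effective reliability of a component whose backup takes
    over only if the primary fails."""
    return primary + backup * (1 - primary)

# Three components in series, each 95% reliable:
base = series_reliability([0.95, 0.95, 0.95])        # 0.857375

# Adding a backup to one component lifts it to 0.9975:
improved = series_reliability([with_backup(0.95, 0.95), 0.95, 0.95])

print(round(base, 4), round(improved, 4))  # 0.8574 0.9002
```

Note how three individually reliable components yield a noticeably less reliable system; this multiplication effect is why simplifying a system (fewer components) is itself a reliability improvement.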

A fundamental question concerning improving reliability is: How much reliability is needed? Obviously, the reliability needed for a household light bulb isn’t in the same category as the reliability needed for an airplane. So the answer to the question depends on the potential benefits of improvements and on the cost of those improvements. Generally speaking, reliability improvements become increasingly costly. Thus, although benefits initially may increase at a much faster rate than costs, the opposite eventually becomes true. The optimal level of reliability is the point where the incremental benefit received equals the incremental cost of obtaining it. In the short term, this trade-off is made in the context of relatively fixed parameters (e.g., costs). However, in the longer term, efforts to improve reliability and reduce costs can lead to higher optimal levels of reliability.

Robust Design

Some products or services will function as designed only within a narrow range of conditions, while others will perform as designed over a much broader range of conditions. The latter have robust design. Consider a pair of fine leather boots—obviously not made for trekking through mud or snow. Now consider a pair of heavy rubber boots—just the thing for mud or snow. The rubber boots have a design that is more robust than that of the fine leather boots.

The more robust a product or service, the less likely it will fail due to a change in the environment in which it is used or in which it is performed. Hence, the more designers can build robustness into the product or service, the better it should hold up, resulting in a higher level of customer satisfaction.

A similar argument can be made for robust design as it pertains to the production process. Environmental factors can have a negative effect on the quality of a product or service. The more resistant a design is to those influences, the less likely is a negative effect. For example, many products go through a heating process: food products, ceramics, steel, petroleum products, and pharmaceutical products. Furnaces often do not heat uniformly; heat may vary either by position in an oven or over an extended period of production. One approach to this problem might be to develop a superior oven; another might be to design a system that moves the product during heating to achieve uniformity. A robust-design approach would develop a product that is unaffected by minor variations in temperature during processing.

Taguchi’s Approach Japanese engineer Genichi Taguchi’s approach is based on the concept of robust design. His premise is that it is often easier to design a product that is insensitive to environmental factors, either in manufacturing or in use, than to control the environmental factors.

The central feature of Taguchi’s approach—and the feature used most often by U.S. companies—is parameter design. This involves determining the specification settings for both the product and the process that will result in robust design in terms of manufacturing variations, product deterioration, and conditions during use.

The Taguchi approach modifies the conventional statistical methods of experimental design. Consider this example. Suppose a company will use 12 chemicals in a new product it intends to produce. There are two suppliers for these chemicals, but the chemical concentrations vary slightly between the two suppliers. Classical design of experiments would require 2^12 = 4,096 test runs to determine which combination of chemicals would be optimum. Taguchi’s approach would involve testing only a portion of the possible combinations. By relying on experts to identify the variables most likely to affect important performance, the number of combinations can be dramatically reduced, perhaps to, say, 32. The best combination identified in the smaller sample may be near-optimal rather than optimal. The value of this approach is its ability to achieve major advances in product or process design fairly quickly, using a relatively small number of experiments.
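The combinatorial saving behind this argument can be sketched as follows. Note that Taguchi’s actual method selects the subset using orthogonal arrays; random sampling here is only a stand-in for illustration:

```python
import itertools
import random

# 12 chemicals, 2 supplier choices each: the full factorial design
# tests every combination of supplier choices.
factors = 12
levels = 2
full_design = list(itertools.product(range(levels), repeat=factors))
print(len(full_design))  # 4096 runs for the full factorial

# A fractional design tests only a subset of combinations.
# (Taguchi would choose these via orthogonal arrays, not at random.)
random.seed(42)
fractional = random.sample(full_design, 32)
print(len(fractional))  # 32 runs, as in the chapter's example
```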

Critics charge that Taguchi’s methods are inefficient and incorrect, and often lead to non-optimal solutions. Nonetheless, his methods are widely used and have been credited with helping to achieve major improvements in U.S. products and manufacturing processes.


Degree of Newness

Product or service design change can range from the modification of an existing product or service to an entirely new product or service:

  1. Modification of an existing product or service

  2. Expansion of an existing product line or service offering

  3. Clone of a competitor’s product or service

  4. New product or service

The degree of change affects the newness to the organization and the newness to the market. For the organization, a low level of newness can mean a fairly quick and easy transition to producing the new product, while a high level of newness would likely mean a slower and more difficult, and therefore more costly, transition. For the market, a low level of newness would mean little difficulty with market acceptance, but possibly low profit potential. Even in instances of low profit potential, organizations might use this strategy to maintain market share. A high level of newness, on the other hand, might mean more difficulty with acceptance, or it might mean a rapid gain in market share with a high potential for profits. Unfortunately, there is no way around these issues. It is important to carefully assess the risks and potential benefits of any design change, taking into account clearly identified customer wants.

Quality Function Deployment

Obtaining input from customers is essential to assure that they will want what is offered for sale. Although obtaining input can be informal through discussions with customers, there is a formal way to document customer wants.

Quality function deployment (QFD) is a structured approach for integrating the “voice of the customer” into both the product and service development process. The purpose is to ensure that customer requirements are factored into every aspect of the process. Listening to and understanding the customer is the central feature of QFD. Requirements often take the form of a general statement such as, “It should be easy to adjust the cutting height of the lawn mower.” Once the requirements are known, they must be translated into technical terms related to the product or service. For example, a statement about changing the height of the lawn mower may relate to the mechanism used to accomplish that, its position, instructions for use, tightness of the spring that controls the mechanism, or materials needed. For manufacturing purposes, these must be related to the materials, dimensions, and equipment used for processing.

The structure of QFD is based on a set of matrices. The main matrix relates customer requirements (what) and their corresponding technical requirements (how). This matrix is illustrated in
Figure 4.2. The matrix provides a structure for data collection.

Source: Ernst and Young Consulting Group,
Total Quality (Homewood, IL: Dow-Jones Irwin, 1991), p. 121.


Additional features are usually added to the basic matrix to broaden the scope of analysis. Typical additional features include importance weightings and competitive evaluations. A correlational matrix is usually constructed for technical requirements; this can reveal conflicting technical requirements. With these additional features, the set of matrices has the form illustrated in
Figure 4.3. It is often referred to as the
house of quality because of its house-like appearance.

An analysis using this format is shown in
Figure 4.4. The data relate to a commercial printer (customer) and the company that supplies the paper. At first glance, the display appears complex. It contains a considerable amount of information for product and process planning. Therefore, let’s break it up into separate parts and consider them one at a time. To start, a key part is the list of customer requirements on the left side of the figure. Next, note the technical requirements, listed vertically near the top. The key relationships and their degree of importance are shown in the center of the figure. The circle with a dot inside indicates the strongest positive relationship; that is, it denotes the most important technical requirements for satisfying customer requirements. Now look at the “importance to customer” numbers that are shown next to each customer requirement (3 is the most important). Designers will take into account the importance values and the strength of correlation in determining where to focus the greatest effort.

Next, consider the correlation matrix at the top of the “house.” Of special interest is the strong negative correlation between “paper thickness” and “roll roundness.” Designers will have to find some way to overcome that or make a trade-off decision.

On the right side of the figure is a competitive evaluation comparing the supplier’s performance on the customer requirements with each of the two key competitors (A and B). For example, the supplier (X) is worst on the first customer requirement and best on the third customer requirement. The line connects the X performances. Ideally, design will cause all of the Xs to be in the highest positions.

Across the bottom of
Figure 4.4 are importance weightings, target values, and technical evaluations. The technical evaluations can be interpreted in a manner similar to that of the competitive evaluations (note the line connecting the Xs). The target values typically contain technical specifications, which we will not discuss. The importance weightings are the sums of values assigned to the relationships (see the lower right-hand key for relationship weights). The 3 in the first column is the product of the importance to the customer, 3, and the small (Δ) weight, 1. The importance weightings and target evaluations help designers focus on desired results. In this example, the first technical requirement has the lowest importance weighting, while the next four technical requirements all have relatively high importance weightings.
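The importance-weighting arithmetic just described can be sketched in a few lines. The requirement names, importance values, and matrix entries below are invented for illustration (they are not the Figure 4.4 data); 9/3/1 is a commonly used strong/medium/weak relationship-weighting scheme:

```python
# Importance to the customer of each customer requirement (3 = most important).
customer_importance = {"no tearing": 3, "consistent finish": 2}

# relationship_matrix[customer_req][technical_req] = relationship weight
# (9 = strong, 3 = medium, 1 = weak/small triangle, 0 = none).
relationship_matrix = {
    "no tearing":        {"paper thickness": 1, "tensile strength": 9},
    "consistent finish": {"paper thickness": 3, "tensile strength": 0},
}

def importance_weighting(tech_req):
    """Sum of (importance to customer) x (relationship weight) over all
    customer requirements, as in the bottom row of the house of quality."""
    return sum(customer_importance[cr] * weights.get(tech_req, 0)
               for cr, weights in relationship_matrix.items())

print(importance_weighting("paper thickness"))   # 3*1 + 2*3 = 9
print(importance_weighting("tensile strength"))  # 3*9 + 2*0 = 27
```

Designers would then focus effort on the technical requirements with the highest weightings.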


The house of quality approach involves a sequence of “houses,” beginning with design characteristics, which leads to specific components, then production processes, and finally, a quality plan. The sequence is illustrated in
Figure 4.5. Although the details of each house are beyond the scope of this text,
Figure 4.5 provides a conceptual understanding of the progression involved.

The Kano Model

The Kano model is a theory of product and service design developed by Dr. Noriaki Kano, a Japanese professor, who offered a perspective on customer perceptions of quality different from the traditional view that “more is better.” Instead, he proposed different categories of quality and posited that understanding them would better position designers to assess and address quality needs. His model provides insights into the attributes that are perceived to be important to customers. The model employs three definitions of quality: basic, performance, and excitement.

Basic quality refers to customer requirements that have only a limited effect on customer satisfaction if present, but lead to dissatisfaction if not present. For example, putting a very short cord on an electrical appliance will likely result in customer dissatisfaction, but beyond a certain length (e.g., 4 feet), adding more cord will not lead to increased levels of customer satisfaction. Performance quality refers to customer requirements that generate satisfaction or dissatisfaction in proportion to their level of functionality and appeal. For example, increasing the tread life of a tire or the amount of time house paint will last will add to customer satisfaction. Excitement quality refers to a feature or attribute that was unexpected by the customer and causes excitement (the “wow” factor), such as a voucher for dinner for two at the hotel restaurant when checking in.
Figure 4.6A portrays how the three definitions of quality influence customer satisfaction or dissatisfaction relative to the degree of implementation. Note that features that are perceived by customers as basic quality result in dissatisfaction if they are missing or at low levels, but do not result in customer satisfaction if they are present, even at high levels. Performance factors can result in satisfaction or dissatisfaction, depending on the degree to which they are present. Excitement factors, because they are unexpected, do not result in dissatisfaction when they are absent or at low levels, but have the potential for disproportionate levels of satisfaction if they are present.
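The three curves can be sketched with stylized functions. The exact functional forms below are illustrative only (implementation level scaled to [0, 1], satisfaction to [-1, 1]); the chapter presents the relationships graphically, not as formulas:

```python
def satisfaction(category, level):
    """Stylized Kano curves: level in [0, 1] -> satisfaction in [-1, 1]."""
    if category == "basic":
        # Absence causes dissatisfaction; presence never exceeds neutral.
        return min(0.0, -1.0 + 2.0 * level)
    if category == "performance":
        # Satisfaction proportional to the degree of implementation.
        return 2.0 * level - 1.0
    if category == "excitement":
        # Absence is neutral; presence gives disproportionate delight.
        return max(0.0, 2.0 * level - 1.0)
    raise ValueError(category)

print(satisfaction("basic", 0.0))       # -1.0 (missing basic feature hurts)
print(satisfaction("basic", 1.0))       # 0.0 (present, but no added delight)
print(satisfaction("excitement", 0.0))  # 0.0 (unexpected, so no penalty)
print(satisfaction("excitement", 1.0))  # 1.0 (the "wow" factor)
```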

Over time, features that once excited customers become performance features, and performance features eventually become basic quality features, as illustrated in Figure 4.6B. The rate at which various design elements migrate is an important input from marketing that enables designers to continue to satisfy and delight customers and not waste effort on improving what have become basic quality features.

The lesson of the Kano model is that design elements that fall into each aspect of quality must first be determined. Once basic needs have been met, additional efforts in those areas should not be pursued. For performance features, cost–benefit analysis comes into play, and these features should be included as long as the benefit exceeds the cost. Excitement features pose somewhat of a challenge. Customers are not likely to indicate excitement factors in surveys because they don’t know that they want them. However, small increases in such factors produce disproportional increases in customer satisfaction and generally increase brand loyalty, so it is important for companies to strive to identify and include these features when economically feasible.

The Kano model can be used in conjunction with QFD, as well as in Six Sigma projects (see
Chapter 9 for a discussion of Six Sigma).



Product design and development generally proceeds in a series of phases (see
Table 4.4).


Phases in the product development process

  1. Feasibility analysis

  2. Product specifications

  3. Process specifications

  4. Prototype development

  5. Design review

  6. Market test

  7. Product introduction

  8. Follow-up evaluation

Feasibility analysis. Feasibility analysis entails market analysis (demand), economic analysis (development cost and production cost, profit potential), and technical analysis (capacity requirements and availability, and the skills needed). Also, it is necessary to answer the question: Does it fit with the mission? It requires collaboration among marketing, finance, accounting, engineering, and operations.

Product specifications. This involves detailed descriptions of what is needed to meet (or exceed) customer wants, and requires collaboration between legal, marketing, and operations.


Process specifications. Once product specifications have been set, attention turns to specifications for the process that will be needed to produce the product. Alternatives must be weighed in terms of cost, availability of resources, profit potential, and quality. This involves collaboration between accounting and operations.

Prototype development. With product and process specifications complete, one (or a few) units are made to see if there are any problems with the product or process specifications.

Design review. At this stage, any necessary changes are made or the project is abandoned. Marketing, finance, engineering, design, and operations collaborate to determine whether to proceed or abandon.

Market test. A market test is used to determine the extent of consumer acceptance. If unsuccessful, the product returns to the design review phase. This phase is handled by marketing.

Product introduction. The new product is promoted. This phase is handled by marketing.

Follow-up evaluation. Based on user feedback, changes may be made or forecasts refined. This phase is handled by marketing.


In this section, you will learn about design techniques that have greater applicability for the design of products than the design of services. Even so, you will see that they do have some relevance for service design. The topics include concurrent engineering, computer-aided design, designing for assembly and disassembly, and the use of common components across similar products.

Concurrent Engineering

To achieve a smoother transition from product design to production, and to decrease product development time, many companies are using simultaneous development, or concurrent engineering. In its narrowest sense, concurrent engineering means bringing design and manufacturing engineering people together early in the design phase to simultaneously develop the product and the processes for creating the product. More recently, this concept has been enlarged to include manufacturing personnel (e.g., materials specialists) and marketing and purchasing personnel in loosely integrated, cross-functional teams. In addition, the views of suppliers and customers are frequently sought. The purpose, of course, is to achieve product designs that reflect customer wants, as well as manufacturing capabilities.

Traditionally, designers developed a new product without any input from manufacturing, and then turned over the design to manufacturing, which would then have to develop a process for making the new product. This “over-the-wall” approach created tremendous challenges for manufacturing, generating numerous conflicts and greatly increasing the time needed to successfully produce a new product. It also contributed to an “us versus them” mentality.

For these and similar reasons, the simultaneous development approach has great appeal. Among the key advantages of this approach are the following:

  1. Manufacturing personnel are able to identify production capabilities and capacities. Very often, they have some latitude in design in terms of selecting suitable materials and processes. Knowledge of production capabilities can help in the selection process. In addition, cost and quality considerations can be greatly influenced by design, and conflicts during production can be greatly reduced.

  2. Design or procurement of critical tooling, some of which might have long lead times, can occur early in the process. This can result in a major shortening of the product development process, which could be a key competitive advantage.

  3. The technical feasibility of a particular design or a portion of a design can be assessed early on. Again, this can avoid serious problems during production.

  4. The emphasis can be on
    problem resolution instead of
    conflict resolution.


However, despite the advantages of concurrent engineering, a number of potential difficulties exist in this co-development approach. Two key ones are the following:

  • Long-standing boundaries between design and manufacturing can be difficult to overcome. Simply bringing a group of people together and thinking they will be able to work together effectively is probably naive.

  • There must be extra communication and flexibility if the process is to work, and these can be difficult to achieve.

Hence, managers should plan to devote special attention if this approach is to work.

Computer-Aided Design (CAD)

Computers are increasingly used for product design. Computer-aided design (CAD) uses computer graphics for product design. The designer can modify an existing design or create a new one on a monitor by means of a light pen, a keyboard, a joystick, or a similar device. Once the design is entered into the computer, the designer can maneuver it on the screen: It can be rotated to provide the designer with different perspectives, it can be split apart to give the designer a view of the inside, and a portion of it can be enlarged for closer examination. The designer can obtain a printed version of the completed design and file it electronically, making it accessible to people in the firm who need this information (e.g., marketing, operations).

A growing number of products are being designed in this way, including transformers, automobile parts, aircraft parts, integrated circuits, and electric motors.

A major benefit of CAD is the increased productivity of designers. No longer is it necessary to laboriously prepare mechanical drawings of products or parts and revise them repeatedly to correct errors or incorporate revisions. A rough estimate is that CAD increases the productivity of designers from 3 to 10 times. A second major benefit of CAD is the creation of a database for manufacturing that can supply needed information on product geometry and dimensions, tolerances, material specifications, and so on. It should be noted, however, that CAD needs this database to function and that this entails a considerable amount of effort.


Some CAD systems allow the designer to perform engineering and cost analyses on proposed designs. For instance, the computer can determine the weight and volume of a part and do stress analysis as well. When there are a number of alternative designs, the computer can quickly go through the possibilities and identify the best one, given the designer’s criteria. CAD that includes finite element analysis (FEA) capability can greatly shorten the time to market of new products. It enables developers to perform simulations that aid in the design, analysis, and commercialization of new products. Designers in industries such as aeronautics, biomechanics, and automotives use FEA.

Production Requirements

As noted earlier in the chapter, designers must take into account
production capabilities. Design needs to clearly understand the capabilities of production (e.g., equipment, skills, types of materials, schedules, technologies, special abilities). This helps in choosing designs that match capabilities. When opportunities and capabilities do not match, management must consider the potential for expanding or changing capabilities to take advantage of those opportunities.

Forecasts of future demand can be very useful, supplying information on the timing and volume of demand, and information on demands for new products and services.

Manufacturability is a key concern for manufactured goods: Ease of fabrication and/or assembly is important for cost, productivity, and quality. With services, ease of providing the service, cost, productivity, and quality are of great concern.

The term design for manufacturing (DFM) is used to indicate the designing of products that are compatible with an organization’s capabilities. A related concept in manufacturing is design for assembly (DFA). A good design must take into account not only how a product will be fabricated, but also how it will be assembled. Design for assembly focuses on reducing the number of parts in an assembly, as well as on the assembly methods and sequence that will be employed. Another, more general term, manufacturability, is sometimes used when referring to the ease with which products can be fabricated and/or assembled.

Component Commonality

Companies often have multiple products or services to offer customers. Often, these products or services have a high degree of similarity of features and components. This is particularly true of
product families, but it is also true of many services. Companies can realize significant benefits when a part can be used in multiple products. For example, car manufacturers employ this tactic by using internal components such as water pumps, engines, and transmissions on several automobile nameplates. In addition to the savings in design time, companies reap benefits through standard training for assembly and installation, increased opportunities for savings by buying in bulk from suppliers, and commonality of parts for repair, which reduces the inventory that dealers and auto parts stores must carry. Similar benefits accrue in services. For example, in automobile repair, component commonality means less training is needed because the variety of jobs is reduced. The same applies to appliance repair, where commonality and
substitutability of parts are typical. Multiple-use forms in financial and medical services are other examples. Computer software often comprises a number of modules that are commonly used for similar applications, thereby saving the time and cost to write the code for major portions of the software. Tool manufacturers use a design that allows tool users to attach different power tools to a common power source. Similarly, HP has a universal power source that can be used with a variety of computer hardware.


There are many similarities between product and service design. However, there are some important differences as well, owing to the nature of services. One major difference is that unlike manufacturing, where production and delivery are usually separated in time, services are usually created and delivered simultaneously.

A service refers to an act, something that is done to or for a customer (client, patient, etc.). It is provided by a service delivery system, which includes the facilities, processes, and skills needed to provide the service. Many services are not pure services, but part of a product bundle—the combination of goods and services provided to a customer. The service component in products is increasing. The ability to create and deliver reliable customer-oriented service is often a key competitive differentiator. Successful companies combine customer-oriented service with their products.

System design involves development or refinement of the overall service package, which includes the following:

  1. The physical resources needed.

  2. The accompanying goods that are purchased or consumed by the customer, or provided with the service.

  3. Explicit services (the essential/core features of a service, such as tax preparation).

  4. Implicit services (ancillary/extra features, such as friendliness, courtesy).

Overview of Service Design

Service design begins with the choice of a service strategy, which determines the nature and focus of the service, and the target market. This requires an assessment by top management of the potential market and profitability (or need, in the case of a nonprofit organization) of a particular service, and an assessment of the organization’s ability to provide the service. Once decisions on the focus of the service and the target market have been made, the customer requirements and expectations of the target market must be determined.

Two key issues in service design are the degree of variation in service requirements and the degree of customer contact and customer involvement in the delivery system. These have an impact on the degree to which service can be standardized or must be customized. The lower the degree of customer contact and service requirement variability, the more standardized the service can be. Service design with no contact and little or no processing variability is very much like product design. Conversely, high variability and high customer contact generally mean the service must be highly customized. A related consideration in service design is the opportunity for selling: The greater the degree of customer contact, the greater the opportunities for selling.

Differences between Service Design and Product Design

Service operations managers must contend with issues that may be insignificant or nonexistent for managers in a production setting. These include the following:

  1. Products are generally tangible; services are generally intangible. Consequently, service design often focuses more on intangible factors (e.g., peace of mind, ambiance) than does product design.

  2. In many instances, services are created and delivered at the same time (e.g., a haircut, a car wash). In such instances, there is less latitude in finding and correcting errors
    before the customer has a chance to discover them. Consequently, training, process design, and customer relations are particularly important.

  3. Services cannot be inventoried. This poses restrictions on flexibility and makes capacity issues very important.

  4. Services are often highly visible to consumers and must be designed with that in mind; this adds an extra dimension to process design, one that usually is not present in product design.

  5. Some services have low barriers to entry and exit. This places additional pressures on service design to be innovative and cost-effective.

  6. Location is often important to service design, with convenience as a major factor. Hence, design of services and choice of location are often closely linked.


  7. Service systems range from those with little or no customer contact to those that have a very high degree of customer contact. Here are some examples of those different types:

    Insulated technical core; little or no customer contact (e.g., software development)

    Production line; little or no customer contact (e.g., automatic car wash)

    Personalized service (e.g., haircut, medical service)

    Consumer participation (e.g., diet program, dance lessons)

    Self-service (e.g., supermarket)

    If there is little or no customer contact, service system design is like product system design.

  8. Demand variability alternately creates customer waiting lines (which can sometimes lead to lost sales) and idle service resources.

When demand variability is a factor, designers may approach service design from one of two perspectives. One is a cost and efficiency perspective, and the other is a customer perspective. Waiting line analysis (see
Chapter 18) can be especially useful in this regard.

Basing design objectives on cost and efficiency is essentially a “product design approach” to service design. Because customer participation makes both quality and demand variability more difficult to manage, designers may opt to limit customer participation in the process where possible. Alternatively, designers may use staff flexibility as a means of dealing with demand variability.

In services, a significant aspect of perceived quality relates to the intangibles that are part of the service package. Designers must proceed with caution because attempts to achieve a high level of efficiency tend to depersonalize service and to create the risk of negatively altering the customer’s perception of quality. Such attempts may involve the following:

  • Reducing consumer choices makes service more efficient, but it can be both frustrating and irritating for the customer. An example would be a cable company that bundles channels, rather than allowing customers to pick only the channels they want.

  • Standardizing or simplifying certain elements of service can reduce the cost of providing a service, but it risks eliminating features that some customers value, such as personal attention.

  • Incorporating flexibility in capacity management by employing part-time or temporary staff may involve the use of less-skilled or less-interested people, and service quality may suffer.

Design objectives based on customer perspective require understanding the customer experience, and focusing on how to maintain control over service delivery to achieve customer satisfaction. The customer-oriented approach involves determining consumer wants and needs in order to understand relationships between service delivery and perceived quality. This enables designers to make enlightened choices in designing the delivery system.

Of course, designers must keep in mind that while depersonalizing service delivery for the sake of efficiency can negatively impact perceived quality, customers may not want or be willing to pay for highly personalized service either, so trade-offs may have to be made.

Phases in the Service Design Process

Table 4.5 lists the phases in the service design process. As you can see, they are quite similar to the phases of product design, except that the delivery system also must be designed.


Phases in service design process

1. Conceptualize.

Idea generation

Assessment of customer wants/needs (marketing)

Assessment of demand potential (marketing)

2. Identify service package components needed (operations and marketing).

3. Determine performance specifications (operations and marketing).

4. Translate performance specifications into design specifications.

5. Translate design specifications into delivery specifications.


Service Blueprinting

A useful tool for conceptualizing a service delivery system is the service blueprint, which is a method for describing and analyzing a service process. A service blueprint is much like an architectural drawing, but instead of showing building dimensions and other construction features, a service blueprint shows the basic customer and service actions involved in a service operation.
Figure 4.7 illustrates a simple service blueprint for a restaurant. At the top of the figure are the customer actions, and just below are the related actions of the direct contact service people. Next are what are sometimes referred to as “backstage contacts”—in this example, the kitchen staff—and below those are the support, or “backroom,” operations. In this example, support operations include the reservation system, ordering of food and supplies, cashier, and the outsourcing of laundry service.
Figure 4.7 is a simplified illustration—typically, time estimates for actions and operations would be included.

The major steps in service blueprinting are as follows:

  1. Establish boundaries for the service and decide on the level of detail needed.

  2. Identify and determine the sequence of customer and service actions and interactions. A flowchart can be a useful tool for this.

  3. Develop time estimates for each phase of the process, as well as time variability.

  4. Identify potential failure points and develop a plan to prevent or minimize them, as well as a plan to respond to service errors.

Characteristics of Well-Designed Service Systems

There are a number of characteristics of well-designed service systems. They can serve as guidelines in developing a service system. They include the following:

  • Being consistent with the organization’s mission.

  • Being user-friendly.

  • Being robust if variability is a factor.

  • Being easy to sustain.

  • Being cost-effective.

  • Having value that is obvious to customers.

  • Having effective linkages between back-of-the-house operations (i.e., no contact with the customer) and front-of-the-house operations (i.e., direct contact with customers). Front operations should focus on customer service, while back operations should focus on speed and efficiency.

  • Having a single, unifying theme, such as convenience or speed.

  • Having design features and checks that will ensure service that is reliable and of high quality.

Challenges of Service Design

Variability is a major concern in most aspects of business operations, and it is particularly so in the design of service systems. Requirements tend to be variable, both in terms of differences in what customers want or need, and in terms of the timing of customer requests. Because services generally cannot be stored, there is the additional challenge of balancing supply and demand. This is less of a problem for systems in which the timing of services can be scheduled (e.g., doctor’s appointment), but not so in others (e.g., emergency room visit).

Another challenge is that services can be difficult to describe precisely and are dynamic in nature, especially when there is a direct encounter with the customer (e.g., personal services), due to the large number of variables.

Guidelines for Successful Service Design

  1. Define the service package in detail. A service blueprint may be helpful for this.

  2. Focus on the operation from the customer’s perspective. Consider how customer expectations and perceptions are managed during and after the service.

  3. Consider the image that the service package will present both to customers and to prospective customers.

  4. Recognize that designers’ familiarity with the system may give them quite a different perspective than that of the customer, and take steps to overcome this.

  5. Make sure that managers are involved and will support the design once it is implemented.

  6. Define quality for both tangibles and intangibles. Intangible standards are more difficult to define, but they must be addressed.

  7. Make sure that recruitment, training, and reward policies are consistent with service expectations.

  8. Establish procedures to handle both predictable and unpredictable events.

  9. Establish systems to monitor, maintain, and improve service.


Product and service design is a fertile area for achieving competitive advantage and/or increasing customer satisfaction. Potential sources of such benefits include the following:

  • Packaging products and ancillary services to increase sales. Examples include selling laptops at a reduced cost with a two-year internet access sign-up agreement, offering extended warranties on products, offering installation and service, and offering training with computer software.

  • Using multiple-use platforms. Auto manufacturers use the same platform (basic chassis, say) for several nameplates (e.g., Jaguar S type, Lincoln LS, and Ford Thunderbird have shared the same platform). There are two basic computer platforms, PC and Mac, with many variations of computers using a particular platform.

  • Implementing tactics that will achieve the benefits of high volume while satisfying customer needs for variety, such as mass customization.

  • Continually monitoring products and services for small improvements rather than the “big bang” approach. Often, the “little” things can have a positive, long-lasting effect on consumer attitudes and buying behavior.

  • Shortening the time it takes to get new or redesigned goods and services to market.

A key competitive advantage of some companies is their ability to bring new products to market more quickly than their competitors. Companies using this “first-to-market” approach are able to enter markets ahead of their competitors, allowing them to set higher selling prices than otherwise due to absence of competition. Such a strategy is also a defense against competition from cheaper “clones” because the competitors always have to play “catch up.”

From a design standpoint, reducing the time to market involves:

  • Using standardized components to create new but reliable products.

  • Using technology such as computer-aided design (CAD) equipment to rapidly design new or modified products.

  • Concurrent engineering to shorten engineering time.

Reliability is a measure of the ability of a product, service, part, or system to perform its intended function under a prescribed set of conditions, and often over a designated time interval or life span. In effect, reliability is a probability.

Suppose that an item has a reliability of .90. This means it has a 90 percent probability of functioning as intended, either when needed (e.g., a security warning system) or over its life span (e.g., a vehicle). The probability it will fail is 1 − .90 = .10, or 10 percent. Hence, it is expected that, on average, 1 in every 10 such items will fail or, equivalently, that the item will fail, on average, once in every 10 trials. Similarly, a reliability of .985 implies 15 failures per 1,000 parts or trials.


Engineers and designers have a number of techniques at their disposal for assessing reliability. A discussion of those techniques is not within the scope of this text. Instead, let us turn to the issue of quantifying overall product or system reliability. Probability is used in two ways:

  1. The probability that the product or system will function when activated.

  2. The probability that the product or system will function for a given length of time.

The first of these focuses on one point in time and is often used when a system must operate for one time or a relatively small number of times. The second focuses on the length of service. The distinction will become more apparent as each of these approaches is described in more detail.

Finding the Probability of Functioning When Activated

The probability that a system or a product will operate as planned is an important concept in system and product design. Determining that probability when the product or system consists of a number of independent components requires the use of the rules of probability for independent events. Independent events have no relation to the occurrence or nonoccurrence of each other. What follows are three examples illustrating the use of probability rules to determine whether a given system will operate successfully.

Rule 1. If two or more events are independent and
success is defined as the probability that all of the events occur, then the probability of success is equal to the product of the probabilities of the events.

Suppose a room has two lamps, but to have adequate light both lamps must work (success) when turned on. One lamp has a probability of working of .90, and the other has a probability of working of .80. The probability that both will work is .90 × .80 = .72. Note that the order of multiplication is unimportant: .80 × .90 = .72. Also note that if the room had three lamps, three probabilities would have been multiplied.

This system can be represented by the following diagram:

Even though the individual components of a system might have high reliabilities, the system as a whole can have considerably less reliability because all components that are in series (as are the ones in the preceding example) must function. As the number of components in a series increases, the system reliability decreases. For example, a system that has eight components in a series, each with a reliability of .99, has a reliability of only .99^8 = .923.
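The series reliabilities above can be checked with a short Python sketch (the function name is ours; the example values come from the lamp illustration and the eight-component example):

```python
from math import prod

def series_reliability(component_reliabilities):
    """Rule 1: a series system works only if every component works,
    so system reliability is the product of component reliabilities."""
    return prod(component_reliabilities)

# Two lamps in series: both must light for success.
print(round(series_reliability([0.90, 0.80]), 2))   # 0.72

# Eight components in series, each with reliability .99.
print(round(series_reliability([0.99] * 8), 3))     # 0.923
```

Adding components to the list shows how quickly series reliability erodes even when each component is individually very reliable.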

Obviously, many products and systems have a large number of component parts that must all operate, and some way to increase overall reliability is needed. One approach is to use redundancy in the design. This involves providing backup parts for some items.

Rule 2. If two events are independent and success is defined as the probability that at least one of the events will occur, the probability of success is equal to the probability of either event plus (1.00 minus that probability) multiplied by the probability of the other event.

There are two lamps in a room. When turned on, one has a probability of working of .90 and the other has a probability of working of .80. Only a single lamp is needed to light for success. If one fails to light when turned on, the other lamp is turned on. Hence, one of the lamps is a backup in case the other one fails. Either lamp can be treated as the backup; the probability of success will be the same. The probability of success is .90 + (1 − .90) × .80 = .98. If the .80 light is first, the computation would be .80 + (1 − .80) × .90 = .98.

This system can be represented by the following diagram:

Rule 3. If two or more events are involved and success is defined as the probability that at least one of them occurs, the probability of success is 1 − P(all fail).

Three lamps have probabilities of .90, .80, and .70 of lighting when turned on. Only one lighted lamp is needed for success; hence, two of the lamps are considered to be backups. The probability of success is

1 − [(1 − .90) × (1 − .80) × (1 − .70)] = .994

Note: It is assumed that the switch that activates each lamp has a reliability of 100%. To see how to incorporate a switch with less than 100% reliability, consider that the second “lamp” is actually a switch with a probability of operating equal to .80, and the third lamp is the only backup (i.e., the second lamp). Thus, the problem would be solved in exactly the same way.

This system can be represented by the following diagram:
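Rules 2 and 3 can be verified the same way. A minimal Python sketch (function name ours) computes reliability with redundancy as 1 minus the probability that every component fails:

```python
def redundant_reliability(component_reliabilities):
    """Rule 3: with backups, the system fails only if every
    component fails, so reliability is 1 - P(all fail)."""
    p_all_fail = 1.0
    for r in component_reliabilities:
        p_all_fail *= (1.0 - r)
    return 1.0 - p_all_fail

# Rule 2 as the two-component special case: .90 + (1 - .90) * .80
print(round(redundant_reliability([0.90, 0.80]), 2))        # 0.98

# Rule 3 with three lamps: 1 - (.10)(.20)(.30)
print(round(redundant_reliability([0.90, 0.80, 0.70]), 3))  # 0.994
```

Note that the two-lamp Rule 2 computation and the 1 − P(all fail) form give the same answer, which is why Rule 3 generalizes Rule 2.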

Finding the Probability of Functioning for a Specified Length of Time

The second way of looking at reliability considers the incorporation of a time dimension: Probabilities are determined relative to a specified length of time. This approach is commonly used in product warranties, which pertain to a given period of time after purchase of a product.

A typical profile of product failure rate over time is illustrated in
Figure 4S.1. Because of its shape, it is sometimes referred to as a bathtub curve. Frequently, a number of products fail shortly after they are put into service, not because they wear out, but because they are defective to begin with. The rate of failures decreases rapidly once the truly defective items are weeded out. During the second phase, there are fewer failures because most of the defective items have been eliminated, and it is too soon to encounter items that fail because they have worn out. In some cases, this phase covers a relatively long time. In the third phase, failures occur because the products are worn out, and the failure rate increases.

Information on the distribution and length of each phase requires the collection of historical data and analysis of those data. It often turns out that the mean time between failures (MTBF) in the infant mortality phase can be modeled by a negative exponential distribution, such as that depicted in Figure 4S.2. Equipment failures, as well as product failures, may occur in this pattern. In such cases, the exponential distribution can be used to determine various probabilities of interest. The probability that equipment or a product put into service at time 0 will fail before some specified time, T, is equal to the area under the curve between 0 and T. Reliability is specified as the probability that a product will last at least until time T; reliability is equal to the area under the curve beyond T. (Note that the total area under the curve in each phase is treated as 100 percent for computational purposes.) Observe that, as the specified length of service increases, the area under the curve to the right of that point (i.e., the reliability) decreases.

Determining values for the area under a curve to the right of a given point, T, becomes a relatively simple matter using a table of exponential values. An exponential distribution is completely described using a single parameter, the distribution mean, which reliability engineers often refer to as the mean time between failures. Using the symbol T to represent length of service, the probability that failure will not occur before time T (i.e., the area in the right tail) is easily determined:

P(no failure before T) = e^(−T/MTBF)

where
e ≈ 2.7183
T = Length of service before failure
MTBF = Mean time between failures

The probability that failure will occur before time T is:

P(failure before T) = 1 − e^(−T/MTBF)

Selected values of e^(−T/MTBF) are listed in Table 4S.1.
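Rather than looking values up in Table 4S.1, e^(−T/MTBF) can be computed directly. A minimal Python sketch (the MTBF and T values here are hypothetical, chosen only to illustrate the calculation):

```python
from math import exp

def exponential_reliability(T, mtbf):
    """P(no failure before time T) when time-to-failure follows a
    negative exponential distribution with mean MTBF."""
    return exp(-T / mtbf)

# Hypothetical: MTBF = 6 years, specified service length T = 6 years.
reliability = exponential_reliability(6.0, 6.0)   # e^(-1) ≈ 0.3679
prob_failure = 1 - reliability                    # ≈ 0.6321
print(round(reliability, 4), round(prob_failure, 4))
```

As the text notes, increasing T shrinks the right-tail area, so reliability falls as the specified length of service grows.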

Values of e^(−T/MTBF)

Product failure due to wear-out can sometimes be modeled by a normal distribution. Obtaining probabilities involves the use of a table (refer to Appendix Table B.2). The table provides areas under a normal curve from (essentially) the left end of the curve to a specified point z, where z is a standardized value computed using the formula

z = (T − Mean wear-out time) / (Standard deviation of wear-out time)

Thus, to work with the normal distribution, it is necessary to know the mean of the distribution and its standard deviation. A normal distribution is illustrated in Figure 4S.3. Appendix Table B.2 contains normal probabilities (i.e., the area that lies to the left of z). To obtain a probability that service life will not exceed some value T, compute z and refer to the table. To find the reliability for time T, subtract this probability from 100 percent. To obtain the value of T that will provide a given probability, locate the nearest probability under the curve to the left in Appendix Table B.2. Then, use the corresponding z in the preceding formula and solve for T.
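The same calculation can be done without Appendix Table B.2 by evaluating the normal CDF directly. A Python sketch using the standard library's NormalDist (the mean wear-out life and standard deviation here are hypothetical):

```python
from statistics import NormalDist

def wearout_reliability(T, mean_life, std_dev):
    """P(service life exceeds T) when wear-out time is normally
    distributed: 1 minus the standard normal CDF at
    z = (T - mean) / std_dev."""
    z = (T - mean_life) / std_dev
    return 1 - NormalDist().cdf(z)

# Hypothetical: mean wear-out life 6 years, standard deviation 1 year.
print(round(wearout_reliability(7, 6, 1), 4))  # P(life > 7 years) ≈ 0.1587
print(round(wearout_reliability(5, 6, 1), 4))  # P(life > 5 years) ≈ 0.8413
```

NormalDist also provides inv_cdf, which handles the reverse problem of finding the T that yields a target probability.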


A related measure of importance to customers, and hence to designers, is availability. It measures the fraction of time a piece of equipment is expected to be operational (as opposed to being down for repairs). Availability can range from zero (never available) to 1.00 (always available). Companies that can offer equipment with a high availability factor have a competitive advantage over companies that offer equipment with lower availability values. Availability is a function of both the mean time between failures and the mean time to repair. The availability factor can be computed using the following formula:

Availability = MTBF / (MTBF + MTR)

where
MTBF = Mean time between failures
MTR = Mean time to repair, including waiting time
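The availability formula is easy to verify numerically. A small Python sketch (the MTBF and repair-time figures are hypothetical):

```python
def availability(mtbf, mtr):
    """Fraction of time equipment is expected to be operational:
    MTBF / (MTBF + MTR)."""
    return mtbf / (mtbf + mtr)

# Hypothetical: a failure every 100 hours on average, 5 hours to repair.
print(round(availability(100, 5), 3))   # ≈ 0.952
```

The formula makes the design trade-off explicit: availability improves either by lengthening MTBF (more reliable equipment) or by shortening MTR (faster repairs).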

Hospitals that not too long ago had what could be described as “facility oversupply” are now experiencing what might be called a “capacity crisis” in some areas. The way hospitals plan for capacity is critical to their future success. The same applies to all sorts of organizations, at all levels of these organizations.

Capacity refers to an upper limit or ceiling on the load that an operating unit can handle. The load might be in terms of the number of physical units produced (e.g., bicycles assembled per hour) or the number of services performed (e.g., computers upgraded per hour). The operating unit might be a plant, department, machine, store, or worker. Capacity needs include equipment, space, and employee skills.

The goal of strategic capacity planning is to achieve a match between the long-term supply capabilities of an organization and the predicted level of long-term demand. Organizations become involved in capacity planning for various reasons. Among the chief reasons are changes in demand, changes in technology, changes in the environment, and perceived threats or opportunities. A gap between current and desired capacity will result in capacity that is out of balance. Overcapacity (i.e.,
excess capacity) causes operating costs that are too high, while undercapacity (i.e., not enough capacity to meet demand) causes strained resources and a possible loss of customers.

The key questions in capacity planning are the following:

  1. What kind of capacity is needed?

  2. How much is needed to match demand?

  3. When is it needed?

The question of what kind of capacity is needed depends on the products and services that management intends to produce or provide. Hence, in a very real sense, capacity planning is governed by those choices.

Forecasts are key inputs used to answer the questions of how much capacity is needed and when it is needed.

Related questions include:

  1. How much will it cost, how will it be funded, and what is the expected return?

  2. What are the potential benefits and risks? These involve the degree of uncertainty related to forecasts of the amount of demand and the rate of change in demand, as well as costs, profits, and the time to implement capacity changes. The degree of accuracy that can be attached to forecasts is an important consideration. The likelihood and impact of wrong decisions also need to be assessed.

  3. Are there sustainability issues that need to be addressed?

  4. Should capacity be changed all at once, or through several (or more) small changes?

  5. Can the supply chain handle the necessary changes? Before an organization commits to ramping up its input, it is essential to confirm that its
    supply chain will be able to handle related requirements. And different issues occur for the supply chain when output decreases.

Because of uncertainties, some organizations prefer to delay capacity investment until demand materializes. However, such strategies often inhibit growth because adding capacity takes time and customers won’t usually wait. Conversely, organizations that add capacity in anticipation of growth often discover that the new capacity actually attracts growth. Some organizations “hedge their bets” by making a series of small changes and then evaluating the results before committing to the next change.

In some instances, capacity choices are made very infrequently; in others, they are made regularly, as part of an ongoing process. Generally, the factors that influence this frequency are the stability of demand, the rate of technological change in equipment and product design, and competitive factors. Other factors relate to the type of product or service and whether style changes are important (e.g., automobiles and clothing). In any case, management must review product and service choices periodically to ensure that the company makes capacity changes when they are needed for cost, competitive effectiveness, or other reasons.


For a number of reasons, capacity decisions are among the most fundamental of all the design decisions that managers must make. In fact, capacity decisions can be
critical for an organization.

  1. Capacity decisions have a real impact on the ability of the organization to meet future demands for products and services; capacity essentially limits the rate of output possible. Having capacity to satisfy demand can often allow a company to take advantage of tremendous benefits. When Microsoft introduced its new Xbox, there were insufficient supplies, resulting in lost sales and unhappy customers. Similarly, shortages of flu vaccine in some years due to production problems affected capacity, limiting the availability of the vaccine.

  2. Capacity decisions affect operating costs. Ideally, capacity and demand requirements will be matched, which will tend to minimize operating costs. In practice, this is not always achieved because actual demand differs from expected demand or tends to vary (e.g., cyclically). In such cases, a decision might be made to attempt to balance the
    costs of over- and undercapacity.

  3. Capacity is usually a major determinant of initial cost. Typically, the greater the capacity of a productive unit, the greater its cost. This does not necessarily imply a one-for-one relationship; larger units tend to cost
    proportionately less than smaller units.

  4. Capacity decisions often involve a long-term commitment of resources, and once they are implemented, those decisions may be difficult or impossible to modify without incurring major costs.

  5. Capacity decisions can affect competitiveness. If a firm has excess capacity, or can quickly add capacity, that fact may serve as a barrier to entry by other firms. Then, too, capacity can affect
    delivery speed, which can be a competitive advantage.

  6. Capacity affects the ease of management; having appropriate capacity makes management easier than when capacity is mismatched.

  7. Globalization has increased the importance and the complexity of capacity decisions. Far-flung supply chains and distant markets add to the uncertainty about capacity needs.

  8. Because capacity decisions often involve substantial financial and other resources, it is necessary to plan for them far in advance. For example, it may take years for a new power-generating plant to be constructed and become operational. However, this increases the risk that the designated amount of capacity will not match actual demand or reserve requirements when the capacity becomes available.


Capacity often refers to an upper limit on the
rate of output. Even though this seems simple enough, there are subtle difficulties in actually measuring capacity in certain cases. These difficulties arise because of different interpretations of the term
capacity and problems with identifying suitable measures for a specific situation.

In selecting a measure of capacity, it is important to choose one that does not require updating. For example, dollar amounts are often a poor measure of capacity (e.g., a capacity of $30 million a year), because price changes necessitate updating of that measure.

Where only one product or service is involved, the capacity of the productive unit may be expressed in terms of that item. However, when multiple products or services are involved, as is often the case, using a simple measure of capacity based on units of output can be misleading. An appliance manufacturer may produce both refrigerators and freezers. If the output rates for these two products are different, it would not make sense to simply state capacity in units without reference to either refrigerators or freezers. The problem is compounded if the firm has other products. One possible solution is to state capacities in terms of each product. Thus, the firm may be able to produce 100 refrigerators per day
or 80 freezers per day. Sometimes this approach is helpful, sometimes not. For instance, if an organization has many different products or services, it may not be practical to list all of the relevant capacities. This is especially true if there are frequent changes in the mix of output, because this would necessitate a frequently changing composite index of capacity. The preferred alternative in such cases is to use a measure of capacity that refers to
availability of inputs. Thus, a hospital has a certain number of beds, a factory has a certain number of machine hours available, and a bus has a certain number of seats and a certain amount of standing room.

No single measure of capacity will be appropriate in every situation. Rather, the measure of capacity must be tailored to the situation.
Table 5.1 provides some examples of commonly used measures of capacity.


Measures of capacity

Business           | Inputs                              | Outputs
Auto manufacturing | Labor hours, machine hours          | Number of cars per shift
Steel mill         | Furnace size                        | Tons of steel per day
Oil refinery       | Refinery size                       | Gallons of fuel per day
Farming            | Number of acres, number of cows     | Bushels of grain per acre per year, gallons of milk per day
Restaurant         | Number of tables, seating capacity  | Number of meals served per day
Theater            | Number of seats                     | Number of tickets sold per performance
Retail sales       | Square feet of floor space          | Revenue generated per day

Up to this point, we have been using a general definition of capacity. Although it is functional, it can be refined into two useful definitions of capacity:

  1. Design capacity: The maximum output rate or service capacity an operation, process, or facility is designed for.

  2. Effective capacity: Design capacity minus allowances such as personal time and preventive maintenance.

Design capacity is the maximum rate of output achieved under ideal conditions. Effective capacity is always less than design capacity, owing to realities of changing product mix, the need for periodic maintenance of equipment, lunch breaks, coffee breaks, problems in scheduling and balancing operations, and similar circumstances.
Actual output cannot exceed effective capacity and is often less because of machine breakdowns, absenteeism, shortages of materials, and quality problems, as well as factors that are outside the control of the operations managers.

These different measures of capacity are useful in defining two measures of system effectiveness: efficiency and utilization. Efficiency is the ratio of actual output to effective capacity. Capacity utilization is the ratio of actual output to design capacity.

Efficiency = Actual output / Effective capacity

Utilization = Actual output / Design capacity

Both measures are expressed as percentages.

It is not unusual for managers to focus exclusively on efficiency, but in many instances this emphasis can be misleading. This happens when effective capacity is low compared to design capacity. In those cases, high efficiency would seem to indicate an effective use of resources, when in fact it does not. The following example illustrates this point.

Suppose a department has a design capacity of 50 units per day, an effective capacity of 40 units per day, and an actual output of 36 units per day. Then efficiency = 36/40 = 90 percent, while utilization = 36/50 = 72 percent. Compared to the effective capacity of 40 units per day, 36 units per day looks pretty good. However, compared to the design capacity of 50 units per day, 36 units per day is much less impressive, although probably more meaningful.
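The efficiency and utilization calculations above can be sketched in a few lines of Python (the function names are ours):

```python
def efficiency(actual_output, effective_capacity):
    """Ratio of actual output to effective capacity."""
    return actual_output / effective_capacity

def utilization(actual_output, design_capacity):
    """Ratio of actual output to design capacity."""
    return actual_output / design_capacity

# Figures from the example: design 50/day, effective 40/day, actual 36/day.
print(f"Efficiency:  {efficiency(36, 40):.0%}")   # 90%
print(f"Utilization: {utilization(36, 50):.0%}")  # 72%
```

The gap between the two percentages is exactly the point of the example: high efficiency can mask low utilization when effective capacity sits well below design capacity.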

Because effective capacity acts as a lid on actual output, the real key to improving capacity utilization is to increase effective capacity by correcting quality problems, maintaining equipment in good operating condition, fully training employees, and improving bottleneck operations that constrain output. Eliminating waste, which is a key aspect of lean operation (discussed in
Chapter 14), can also help to improve effective capacity.

Hence, increasing utilization depends on being able to increase effective capacity, and this requires a knowledge of what is constraining effective capacity.

The following section explores some of the main determinants of effective capacity. It is important to recognize that the benefits of high utilization are realized only in instances where there is demand for the output. When demand is not there, focusing exclusively on utilization can be counterproductive, because the excess output not only results in additional variable costs but also generates the costs of having to carry the output as inventory. Another disadvantage of high utilization is that operating costs may increase because of increasing waiting time due to bottleneck conditions.

Many decisions about system design have an impact on capacity. The same is true for many operating decisions. This section briefly describes some of these factors, which are then elaborated on elsewhere in the book. The main factors relate to facilities, products or services, processes, human considerations, operational factors, the supply chain, and external forces.

Facilities The design of facilities, including size and provision for expansion, is key. Locational factors, such as transportation costs, distance to market, labor supply, energy sources, and room for expansion, are also important. Likewise, layout of the work area often determines how smoothly work can be performed, and environmental factors such as heating, lighting, and ventilation also play a significant role in determining whether personnel can perform effectively or whether they must struggle to overcome poor design characteristics.

Product and Service Factors Product or service design can have a tremendous influence on capacity. For example, when items are similar, the ability of the system to produce those items is generally much greater than when successive items differ. Thus, a restaurant that offers a limited menu can usually prepare and serve meals at a faster rate than a restaurant with an extensive menu. Generally speaking, the more uniform the output, the more opportunities there are for standardization of methods and materials, which leads to greater capacity. The particular mix of products or services rendered must also be considered, because different items will have different rates of output.

Process Factors The quantity capability of a process is an obvious determinant of capacity. A more subtle determinant is the influence of output
quality. For instance, if quality of output does not meet standards, the rate of output will be slowed by the need for inspection and rework activities. Productivity also affects capacity. Process improvements that increase quality and productivity can result in increased capacity. Also, if multiple products or multiple services are processed in batches, the time to change equipment settings must be taken into account.

Human Factors The tasks that make up a job, the variety of activities involved, and the training, skill, and experience required to perform a job all have an impact on the potential and actual output. In addition, employee motivation has a very basic relationship to capacity, as do absenteeism and labor turnover.

Policy Factors Management policy can affect capacity by allowing or not allowing capacity options such as overtime or second or third shifts.

Operational Factors Scheduling problems may occur when an organization has differences in equipment capabilities among alternative pieces of equipment or differences in job requirements. Inventory stocking decisions, late deliveries, purchasing requirements, acceptability of purchased materials and parts, and quality inspection and control procedures also can have an impact on effective capacity.

Inventory shortages of even one component of an assembled item (e.g., computers, refrigerators, automobiles) can cause a temporary halt to assembly operations until the components become available. This can have a major impact on effective capacity. Thus, insufficient capacity in one area can affect overall capacity.

Supply Chain Factors Supply chain factors must be taken into account in capacity planning if substantial capacity changes are involved. Key questions include: What impact will the changes have on suppliers, warehousing, transportation, and distributors? If capacity will be increased, will these elements of the supply chain be able to handle the increase? Conversely, if capacity is to be decreased, what impact will the loss of business have on these elements of the supply chain?

External Factors Product standards, especially minimum quality and performance standards, can restrict management's options for increasing and using capacity. Thus, pollution standards on products and equipment often reduce effective capacity, as does paperwork required by government regulatory agencies, which engages employees in nonproductive activities. A similar effect occurs when a union contract limits the number of hours and type of work an employee may do.

Table 5.2 summarizes these factors. In addition,
inadequate planning can be a major limiting determinant of effective capacity.


Table 5.2 Factors that determine effective capacity

  1. Facilities

    1. Design

    2. Location

    3. Layout

    4. Environment

  2. Product/service

    1. Design

    2. Product or service mix

  3. Process

    1. Quantity capabilities

    2. Quality capabilities

  4. Human factors

    1. Job content

    2. Job design

    3. Training and experience

    4. Motivation

    5. Compensation

    6. Learning rates

    7. Absenteeism and labor turnover

  5. Policy

  6. Operational

    1. Scheduling

    2. Materials management

    3. Quality assurance

    4. Maintenance policies

    5. Equipment breakdowns

  7. Supply chain

  8. External factors

    1. Product standards

    2. Safety regulations

    3. Unions

    4. Pollution control standards


The three primary strategies are leading, following, and tracking. A leading capacity strategy builds capacity in anticipation of future demand increases. If capacity increases involve a long lead time, this strategy may be the best option. A following strategy builds capacity when demand exceeds current capacity. A tracking strategy is similar to a following strategy, but it adds capacity in relatively small increments to keep pace with increasing demand.

An organization typically bases its capacity strategy on assumptions and predictions about long-term demand patterns, technological changes, and the behavior of its competitors. These typically involve (1) the growth rate and variability of demand, (2) the costs of building and operating facilities of various sizes, (3) the rate and direction of technological innovation, (4) the likely behavior of competitors, and (5) availability of capital and other inputs.

In some instances, a decision may be made to incorporate a capacity cushion, which is an amount of capacity in excess of expected demand when there is some uncertainty about demand: Capacity cushion = Capacity − Expected demand. Typically, the greater the degree of demand uncertainty, the greater the amount of cushion used. Organizations that have standard products or services generally have smaller capacity cushions. Cost and competitive priorities are also key factors.
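The cushion arithmetic is simple enough to sketch in code; the capacity and demand figures below are invented for illustration:

```python
def capacity_cushion(capacity, expected_demand):
    """Capacity cushion = capacity - expected demand."""
    return capacity - expected_demand

# Hypothetical plant: a rated capacity of 120 units/day against expected
# demand of 100 units/day leaves a cushion of 20 units/day.
print(capacity_cushion(120, 100))  # 20
```

A negative result would signal that expected demand already exceeds capacity, i.e., no cushion at all.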

Steps in the Capacity Planning Process

  1. Estimate future capacity requirements.

  2. Evaluate existing capacity and facilities and identify gaps.

  3. Identify alternatives for meeting requirements.

  4. Conduct financial analyses of each alternative.

  5. Assess key qualitative issues for each alternative.

  6. Select the alternative to pursue that will be best in the long term.

  7. Implement the selected alternative.

  8. Monitor results.

Capacity planning can be difficult at times due to the complex influence of market forces and technology.


Capacity planning decisions involve both long-term and short-term considerations. Long-term considerations relate to overall
level of capacity, such as facility size, whereas short-term considerations relate to probable
variations in capacity requirements created by such things as seasonal, random, and irregular fluctuations in demand. Because the time intervals covered by each of these categories can vary significantly from industry to industry, it would be misleading to put times on the intervals. However, the distinction will serve as a framework within which to discuss capacity planning.

Long-term capacity needs require forecasting demand over a time horizon and then converting those forecasts into capacity requirements.
Figure 5.1 illustrates some basic demand patterns that might be identified by a forecast. In addition to basic patterns, there are more complex patterns, such as a combination of cycles and trends.

When trends are identified, the fundamental issues are (1) how long the trend might persist, because few things last forever, and (2) the slope of the trend. If cycles are identified, interest focuses on (1) the approximate length of the cycles and (2) the amplitude of the cycles (i.e., deviation from average).

Short-term capacity needs are less concerned with cycles or trends than with seasonal variations and other variations from average. These deviations are particularly important because they can place a severe strain on a system’s ability to satisfy demand at some times and yet result in idle capacity at other times.

An organization can identify seasonal patterns using standard forecasting techniques. Although commonly thought of as annual fluctuations, seasonal variations are also reflected in monthly, weekly, and even daily capacity requirements.
Table 5.3 provides some examples of items that tend to exhibit seasonal demand patterns.


Table 5.3 Examples of seasonal demand patterns

Annual: Beer sales, toy sales, airline traffic, clothing, vacations, tourism, power usage, gasoline consumption, sports and recreation, education

Monthly: Welfare and Social Security payments, bank transactions

Weekly: Retail sales, restaurant meals, automobile traffic, automotive rentals, hotel registrations

Daily: Power usage, automotive traffic, public transportation, classroom use, retail sales, restaurant meals


When time intervals are too short to have seasonal variations in demand, the analysis can often describe the variations by probability distributions such as a normal, uniform, or Poisson distribution. For example, we might describe the amount of coffee served during the midday meal at a luncheonette by a normal distribution with a certain mean and standard deviation. The number of customers who enter a bank branch on Monday mornings might be described by a Poisson distribution with a certain mean. It does not follow, however, that
every instance of random variability will lend itself to description by a standard statistical distribution. Service systems, in particular, may experience a considerable amount of variability in capacity requirements unless requests for service can be scheduled. Manufacturing systems, because of their typical isolation from customers and the more uniform nature of production, are likely to experience fewer variations. Waiting-line models and simulation models can be useful when analyzing service systems. These models are described in
Chapter 18.
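As a rough sketch of how such a distribution can guide capacity sizing: if per-period demand is approximately normal, the quantile at a chosen coverage probability gives the capacity needed to meet demand that fraction of the time. The luncheonette figures below are invented for illustration:

```python
from statistics import NormalDist

def capacity_for_service_level(mean, std_dev, service_level):
    """Capacity needed to cover normally distributed demand with the
    given probability: the service_level quantile of the distribution."""
    return NormalDist(mean, std_dev).inv_cdf(service_level)

# Hypothetical luncheonette: midday coffee demand ~ Normal(200 cups, sd 30).
# Capacity sized to cover demand on 95% of days:
print(round(capacity_for_service_level(200, 30, 0.95)))  # 249
```

The same idea applies to a Poisson arrival count, with the Poisson quantile in place of the normal one.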

Irregular variations are perhaps the most troublesome, because they are difficult or impossible to predict. They are created by such diverse forces as major equipment breakdowns, freak storms that disrupt normal routines, foreign political turmoil that causes oil shortages, discovery of health hazards (nuclear accidents, unsafe chemical dumping grounds, carcinogens in food and drink), and so on.

The link between marketing and operations is crucial to a realistic determination of capacity requirements. Through customer contracts, demographic analyses, and forecasts, marketing can supply vital information to operations for ascertaining capacity needs for both the long term and the short term.

Calculating Processing Requirements

A necessary piece of information is the capacity requirements of products that will be processed. To get this information, one must have reasonably accurate demand forecasts for each product and know the standard processing time per unit for each product, the number of workdays per year, and the number of shifts that will be used.
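A minimal sketch of that calculation, assuming the standard approach of summing (demand × standard processing time) across products and dividing by the annual processing hours one machine provides; all figures are invented:

```python
import math

def machines_required(products, hours_per_machine_per_year):
    """Machines needed to meet forecast demand.
    products: list of (annual_demand_units, std_processing_hours_per_unit)."""
    total_hours = sum(demand * hours for demand, hours in products)
    return math.ceil(total_hours / hours_per_machine_per_year)

# Hypothetical case: one 8-hour shift, 250 workdays/year per machine.
annual_hours = 8 * 250  # 2,000 processing hours per machine per year
demand = [(400, 5.0), (300, 8.0), (700, 2.0)]  # 5,800 hours in total
print(machines_required(demand, annual_hours))  # 3
```

Rounding up reflects the fact that machines come in whole units; the fractional remainder shows up as unused capacity.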


The task of determining capacity requirements should not be taken lightly. Substantial losses can occur when there are misjudgments on capacity needs. One key reason for those misjudgments can be overly optimistic projections of demand and growth. Marketing personnel are generally optimistic in their outlook, which isn’t necessarily a bad thing. But care must be taken so that that optimism doesn’t lead to overcapacity, because the resulting underutilized capacity will create an additional cost burden. Another key reason for misjudgments may be focusing exclusively on sales and revenue potential, and not taking into account the
product mix that will be needed to generate those sales and revenues. To avoid that, marketing and operations personnel must work closely to determine the optimal product mix needed and the resulting cost and profit.

A reasonable approach to determining capacity requirements is to obtain a forecast of future demand, translate demand into both the
quantity and the timing of capacity requirements, and then decide what capacity changes (increased, decreased, or no changes) are needed.

Long-term capacity alternatives include the expansion or contraction of an existing facility, opening or closing branch facilities, and the relocation of existing operations. At this point, a decision must be made about whether to make or buy a good, or provide or buy a service.


While the foregoing discussion relates generally to capacity planning for both goods and services, it is important to note that capacity planning for services can present special challenges due to the nature of services. Three very important factors in planning service capacity are (1) there may be a need to be near customers, (2) the inability to store services, and (3) the degree of volatility of demand.

Convenience for customers is often an important aspect of service. Generally, a service must be located near customers. For example, hotel rooms must be where customers want to stay; having a vacant room in another city won’t help. Thus, capacity and location are closely tied.

Capacity also must be matched with the
timing of demand. Unlike goods, services cannot be produced in one period and stored for use in a later period. Thus, an unsold seat on an airplane, train, or bus cannot be stored for use on a later trip. Similarly, inventories of goods allow customers to immediately satisfy wants, whereas a customer who wants a service may have to wait. This can result in a variety of negatives for an organization that provides the service. Thus, speed of delivery, or customer waiting time, becomes a major concern in service capacity planning. For example, deciding on the number of police officers and fire trucks to have on duty at any given time affects the speed of response and brings into issue the
cost of maintaining that capacity. Some of these issues are addressed in the chapter on waiting lines.

Demand volatility presents problems for capacity planners. It tends to be higher for services than for goods, not only in the timing of demand, but also in the amount of time required to service individual customers. For example, banks tend to experience higher volumes of demand on certain days of the week, and the number and nature of transactions tend to vary substantially for different individuals. Then, too, a wide range of social, cultural, and even weather factors can cause major peaks and valleys in demand. The fact that services can’t be stored means service systems cannot turn to inventory to smooth demand requirements on the system the way goods-producing systems are able to. Instead, service planners have to devise other methods of coping with demand volatility and cyclical demand. For example, to cope with peak demand periods, planners might consider hiring extra workers, hiring temporary workers, outsourcing some or all of a service, or using pricing and promotion to shift some demand to slower periods.

In some instances,
demand management strategies can be used to offset capacity limitations. Pricing, promotions, discounts, and similar tactics can help to shift some demand away from peak periods and into slow periods, allowing organizations to achieve a closer match in supply and demand.


Once capacity requirements have been determined, the organization must decide whether to produce a good or provide a service itself, or to outsource from another organization. Many organizations buy parts or contract out services, for a variety of reasons. Among those factors are:

  • Available capacity. If an organization has available the equipment, necessary skills, and
    time, it often makes sense to produce an item or perform a service in-house. The additional costs would be relatively small compared with those required to buy items or subcontract services. On the other hand, outsourcing can increase capacity and flexibility.

  • Expertise. If a firm lacks the expertise to do a job satisfactorily, buying might be a reasonable alternative.

  • Quality considerations. Firms that specialize can usually offer higher quality than an organization can attain itself. Conversely, unique quality requirements or the desire to closely monitor quality may cause an organization to perform a job itself.

  • The nature of demand. When demand for an item is high and steady, the organization is often better off doing the work itself. However, wide fluctuations in demand or small orders are usually better handled by specialists who are able to combine orders from multiple sources, which results in higher volume and tends to offset individual buyer fluctuations.

  • Cost. Any cost savings achieved from buying or making must be weighed against the preceding factors. Cost savings might come from the item itself or from transportation cost savings. If there are fixed costs associated with making an item that cannot be reallocated if the service or product is outsourced, that has to be recognized in the analysis. Conversely, outsourcing may help a firm avoid incurring fixed costs.

  • Risks. Buying goods or services may entail considerable risks. Loss of direct control over operations, knowledge sharing, and the possible need to disclose proprietary information are three risks. Liability can also be a tremendous risk if the products or services of other companies cause harm to customers or the environment, as well as damage to an organization’s reputation. Reputation can also be damaged if the public discovers that a supplier operates with substandard working conditions.


In some cases, a firm might choose to perform part of the work itself and let others handle the rest in order to maintain flexibility and to hedge against loss of a subcontractor. If part or all of the work will be done in-house, capacity alternatives will need to be developed.

Outsourcing brings with it a host of supply chain considerations. These are described in
Chapter 15.



There are a number of ways to enhance development of capacity strategies:

Design flexibility into systems. The long-term nature of many capacity decisions and the risks inherent in long-term forecasts suggest potential benefits from designing flexible systems. For example, provision for future expansion in the original design of a structure frequently can be obtained at a small price compared to what it would cost to remodel an existing structure that did not have such a provision. Hence, if future expansion of a restaurant seems likely, water lines, power hookups, and waste disposal lines can be put in place initially so that if expansion becomes a reality, modification to the existing structure can be minimized. Similarly, a new golf course may start as a 9-hole operation, but if provision is made for future expansion by obtaining options on adjacent land, it may progress to a larger (18-hole) course. Other considerations in flexible design involve the layout of equipment, location, equipment selection, production planning, scheduling, and inventory policies, which will be discussed in later chapters.

Take stage of life cycle into account. Capacity requirements are often closely linked to the stage of the life cycle that a product or service is in. At the
introduction phase, it can be difficult to determine both the size of the market and the organization’s eventual share of that market. Therefore, organizations should be cautious in making large and/or inflexible capacity investments.

In the
growth phase, the overall market may experience rapid growth. However, the real issue is the rate at which the
organization’s market share grows, which may be more or less than the market rate, depending on the success of the organization’s strategies. Organizations generally regard growth as a good thing. They want growth in the overall market for their products or services, and in their share of the market, because they see this as a way of increasing volume, and thus, increasing profits. However, there can also be a downside to this because increasing output levels will require increasing capacity, and that means increasing investment and increasing complexity. In addition, decision makers should take into account
possible similar moves by competitors, which would increase the risk of overcapacity in the market, and result in higher unit costs of the output. Another strategy would be to compete on some nonprice attribute of the product by investing in technology and process improvements to make differentiation a competitive advantage.

In the
maturity phase, the size of the market levels off, and organizations tend to have stable market shares. Organizations may still be able to increase profitability by reducing costs and making full use of capacity. However, some organizations may still try to increase profitability by increasing capacity if they believe this stage will be fairly long, or the cost to increase capacity is relatively small.

In the
decline phase, an organization is faced with underutilization of capacity due to declining demand. Organizations may eliminate the excess capacity by selling it, or by introducing new products or services. An option that is sometimes used in manufacturing is to transfer capacity to a location that has lower labor costs, which allows the organization to continue to make a profit on the product for a while longer.

Take a “big-picture” (i.e., systems) approach to capacity changes. When developing capacity alternatives, it is important to consider how parts of the system interrelate. For example, when making a decision to increase the number of rooms in a motel, one should also take into account probable increased demands for parking, entertainment and food, and housekeeping. Also, will suppliers be able to handle the increased volume?

Capacity changes inevitably affect an organization’s supply chain. Suppliers may need time to adjust their capacity, so collaborating with supply chain partners on plans for capacity increases is essential. That includes not only suppliers, but also distributors and transporters.

The risk in not taking a big-picture approach is that the system will be unbalanced. Evidence of an unbalanced system is the existence of a bottleneck operation. A bottleneck operation is an operation in a sequence of operations whose capacity is lower than the capacities of other operations in the sequence. As a consequence, the capacity of the bottleneck operation limits the system capacity; the capacity of the system is reduced to the capacity of the bottleneck operation. Figure 5.2 illustrates this concept: Four operations generate work that must then be processed by a fifth operation. The four operations each have a capacity of 10 units per hour, for a total capacity of 40 units per hour. However, the fifth operation can process only 30 units per hour. Consequently, the output of the system will be only 30 units per hour. If the other operations operate at capacity, a line of units waiting to be processed by the bottleneck operation will build up at the rate of 10 per hour.
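The Figure 5.2 arithmetic can be expressed directly: system output is the smaller of the feeders' combined rate and the downstream operation's rate.

```python
def system_capacity(feeder_capacities, downstream_capacity):
    """Output of a feed-then-process system is limited by the bottleneck:
    the lesser of total feeder output and the downstream operation's rate."""
    return min(sum(feeder_capacities), downstream_capacity)

# Figure 5.2 scenario: four 10-unit/hr operations feed a 30-unit/hr operation.
feeders = [10, 10, 10, 10]           # combined 40 units/hr
print(system_capacity(feeders, 30))  # 30; a queue grows at 10 units/hr upstream
```

For a simple serial line, the same idea reduces to `min()` over the step capacities.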


Here is another perspective. The following diagram illustrates a three-step process, with capacities of each step shown. However, the middle process, because its capacity is lower than that of the others, constrains the system to its capacity of 10 units per hour. Hence, it is a bottleneck. In order to increase the capacity of the entire process, it would be necessary to increase the capacity of this bottleneck operation. Note, though, that the potential for increasing the capacity of the process is only 5 units, to 15 units per hour. Beyond that, Operation 3’s capacity would limit process capacity to 15 units per hour.

Prepare to deal with capacity “chunks.” Capacity increases are often acquired in fairly large chunks rather than smooth increments, making it difficult to achieve a match between desired capacity and feasible capacity. For instance, the desired capacity of a certain operation may be 55 units per hour, but suppose that machines used for this operation are able to produce 40 units per hour each. One machine by itself would cause capacity to be 15 units per hour short of what is needed, but two machines would result in an excess capacity of 25 units per hour. The illustration becomes even more extreme if we shift the topic—to open-hearth furnaces or to the number of airplanes needed to provide a desired level of capacity.
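The chunk arithmetic from the example can be sketched as follows:

```python
import math

def machine_plan(desired_rate, machine_rate):
    """Whole machines come in 'chunks': return the count needed and the
    resulting excess capacity relative to the desired rate."""
    n = math.ceil(desired_rate / machine_rate)
    return n, n * machine_rate - desired_rate

# Text's example: need 55 units/hr; each machine produces 40 units/hr.
n, excess = machine_plan(55, 40)
print(n, excess)  # 2 machines, 25 units/hr of excess capacity
```

One machine would fall 15 units/hr short; two overshoot by 25, which is exactly the mismatch the text describes.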

Attempt to smooth out capacity requirements. Unevenness in capacity requirements also can create certain problems. For instance, during periods of inclement weather, public transportation ridership tends to increase substantially relative to periods of pleasant weather. Consequently, the system tends to alternate between underutilization and overutilization. Increasing the number of buses or subway cars will reduce the burden during periods of heavy demand, but this will aggravate the problem of overcapacity at other times and certainly add to the cost of operating the system.

We can trace the unevenness in demand for products and services to a variety of sources. The bus ridership problem is weather related to a certain extent, but demand could be considered to be partly random (i.e., varying because of chance factors). Still another source of varying demand is seasonality. Seasonal variations are generally easier to cope with than random variations because they are
predictable. Consequently, management can make allowances in planning and scheduling activities and inventories. However, seasonal variations can still pose problems because of their uneven demands on the system: At certain times the
system will tend to be overloaded, while at other times it will tend to be underloaded. One possible approach to this problem is to identify products or services that have complementary demand patterns—that is, patterns that tend to offset each other. For instance, demand for snow skis and demand for water skis might complement each other: Demand for water skis is greater in the spring and summer months, and demand for snow skis is greater in the fall and winter months. The same might apply to heating and air-conditioning equipment. The ideal case is one in which products or services with complementary demand patterns involve the use of the same resources but at different times, so that overall capacity requirements remain fairly stable and inventory levels are minimized.
Figure 5.3 illustrates complementary demand patterns.

Variability in demand can pose a problem for managers. Simply adding capacity by increasing the size of the operation (e.g., increasing the size of the facility, the workforce, or the amount of processing equipment) is not always the best approach, because that reduces flexibility and adds to fixed costs. Consequently, managers often choose to respond to higher than normal demand in other ways. One way is through the use of overtime work. Another way is to subcontract some of the work. A third way is to draw down finished goods inventories during periods of high demand and replenish them during periods of slow demand. These options and others are discussed in detail in the chapter on aggregate planning.

Identify the optimal operating level. Production units typically have an ideal or optimal level of operation in terms of unit cost of output. At the ideal level, cost per unit is the lowest for that production unit. If the output rate is less than the optimal level, increasing the output rate will result in decreasing average unit costs. This is known as economies of scale. However, if output is increased beyond the optimal level, average unit costs will become increasingly larger. This is known as diseconomies of scale. Figure 5.4 illustrates these concepts.
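A stylized sketch of the U-shaped unit-cost curve: fixed cost spread over output drives the economies side, and an invented congestion term stands in for the sources of diseconomies (all numbers are illustrative only).

```python
def unit_cost(q, fixed_cost, variable_cost, congestion_coeff):
    """Average cost per unit: fixed cost spread over q units (economies
    of scale) plus a congestion penalty growing with q (diseconomies)."""
    return fixed_cost / q + variable_cost + congestion_coeff * q

# Invented figures, purely to show the U shape around an optimal rate:
curve = {q: round(unit_cost(q, 1000, 2.0, 0.01), 2) for q in (100, 200, 400, 800)}
print(curve)  # {100: 13.0, 200: 9.0, 400: 10.5, 800: 11.25}
```

Here unit cost bottoms out near 200 units, then rises again, mirroring Figure 5.4's shape.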


Reasons for economies of scale include the following:

  • Fixed costs are spread over more units, reducing the fixed cost per unit.

  • Construction costs increase at a decreasing rate with respect to the size of the facility to be built.

  • Processing costs decrease as output rates increase because operations become more standardized, which reduces unit costs.

Reasons for diseconomies of scale include the following:

  • Distribution costs increase due to traffic congestion and shipping from one large centralized facility instead of several smaller, decentralized facilities.

  • Complexity increases costs; control and communication become more problematic.

  • Inflexibility can be an issue.

  • Additional levels of bureaucracy exist, slowing decision making and approvals for changes.

The explanation for the shape of the cost curve is that at low levels of output, the costs of facilities and equipment must be absorbed (paid for) by very few units. Hence, the cost per unit is high. As output is increased, there are more units to absorb the “fixed” cost of facilities and equipment, so unit costs decrease. However, beyond a certain point, unit costs will start to rise. To be sure, the fixed costs are spread over even more units, so that does not account for the increase, but other factors now become important: worker fatigue; equipment breakdowns; the loss of flexibility, which leaves less of a margin for error; and, generally, greater difficulty in coordinating operations.

Both optimal operating rate and the amount of the minimum cost tend to be a function of the general capacity of the operating unit. For example, as the general capacity of a plant increases, the optimal output rate increases and the minimum cost for the optimal rate decreases. Thus, larger plants tend to have higher optimal output rates and lower minimum costs than smaller plants.
Figure 5.5 illustrates these points.

In choosing the capacity of an operating unit, management must take these relationships into account along with the availability of financial and other resources and forecasts of expected demand. To do this, it is necessary to determine enough points for each size facility to be able to make a comparison among different sizes. In some instances, facility sizes are givens, whereas in others, facility size is a continuous variable (i.e., any size can be selected). In the latter case, an ideal facility size can be selected. Usually, management must make a choice from given sizes, and none may have a minimum at the desired rate of output.

Choose a strategy if expansion is involved. Consider whether incremental expansion or single step is more appropriate. Factors include competitive pressures, market opportunities, costs and availability of funds, disruption of operations, and training requirements. Also, decide whether to lead or follow competitors. Leading is more risky, but it may have greater potential for rewards.




A constraint is something that limits the performance of a process or system in achieving its goals. Constraint management is often based on the work of Eli Goldratt (
The Theory of Constraints), and Eli Schragenheim and H. William Dettmer (
Manufacturing at Warp Speed). There are seven categories of constraints:

Market: Insufficient demand

Resource: Too little of one or more resources (e.g., workers, equipment, and space), as illustrated in
Figure 5.2

Material: Too little of one or more materials

Financial: Insufficient funds

Supplier: Unreliable, long lead time, substandard quality

Knowledge or competency: Needed knowledge or skills missing or incomplete

Policy: Laws or regulations interfere

There may only be a few constraints, or there may be more than a few. Constraint issues can be resolved by using the following five steps:


  1. Identify the most pressing constraint. If it can easily be overcome, do so, and return to Step 1 for the next constraint. Otherwise, proceed to Step 2.

  2. Change the operation to achieve the maximum benefit, given the constraint. This may be a short-term solution.

  3. Make sure other portions of the process are supportive of the constraint (e.g., bottleneck operation).

  4. Explore and evaluate ways to overcome the constraint. This will depend on the type of constraint. For example, if demand is too low, advertising or price change may be an option. If capacity is the issue, working overtime, purchasing new equipment, and outsourcing are possible options. If additional funds are needed, working to improve cash flow, borrowing, and issuing stocks or bonds may be options. If suppliers are a problem, work with them, find more desirable suppliers, or do things in-house. If knowledge or skills are needed, seek training or consultants, or outsource. If laws or regulations are the issue, working with lawmakers or regulators may be an option.

  5. Repeat the process until the level of constraints is acceptable.


An organization needs to examine alternatives for future capacity from a number of different perspectives. Most obvious are economic considerations: Will an alternative be economically feasible? How much will it cost? How soon can we have it? What will operating and maintenance costs be? What will its useful life be? Will it be compatible with present personnel and present operations?

Less obvious, but nonetheless important, is possible negative public opinion. For instance, the decision to build a new power plant is almost sure to stir up reaction, whether the plant is gas-fired, hydroelectric, or nuclear. Any option that could disrupt lives and property is bound to generate hostile reactions. Construction of new facilities may necessitate moving personnel to a new location. Embracing a new technology may mean retraining some people and terminating some jobs. Relocation can cause unfavorable reactions, particularly if a town is about to lose a major employer. Conversely, community pressure in a new location may arise if the presence of the company is viewed unfavorably (noise, traffic, pollution).


A number of techniques are useful for evaluating capacity alternatives from an economic standpoint. Some of the more common are cost–volume analysis, financial analysis, decision theory, and waiting-line analysis. Cost–volume analysis is described in this section. Financial analysis is mentioned briefly, decision analysis is described in the chapter supplement, and waiting-line analysis is described in
Chapter 18.

Cost–Volume Analysis

Cost–volume analysis focuses on relationships between cost, revenue, and volume of output. The purpose of cost–volume analysis is to estimate the income of an organization under different operating conditions. It is particularly useful as a tool for comparing capacity alternatives.

Use of the technique requires identification of all costs related to the production of a given product. These costs are then designated as fixed costs or variable costs.
Fixed costs tend to remain constant regardless of volume of output. Examples include rental costs, property taxes, equipment costs, heating and cooling expenses, and certain administrative costs.
Variable costs vary directly with volume of output. The major components of variable costs are generally materials and labor costs. We will assume that variable cost per unit remains the same regardless of volume of output, and that all output can be sold.

Table 5.4 summarizes the symbols used in the cost–volume formulas.


TABLE 5.4 Cost–volume symbols

FC = Fixed cost
VC = Total variable cost
v = Variable cost per unit
TC = Total cost
TR = Total revenue
R = Revenue per unit
Q = Quantity or volume of output
BEP = Break-even quantity
P = Profit

The total cost associated with a given volume of output is equal to the sum of the fixed cost and the variable cost per unit times volume:

TC = FC + VC = FC + Q × v

where v = variable cost per unit.
Figure 5.6A shows the relationship between volume of output and fixed costs, total variable costs, and total (fixed plus variable) costs.

Revenue per unit, like variable cost per unit, is assumed to be the same regardless of quantity of output. Total revenue will have a linear relationship to output, as illustrated in Figure 5.6B. The total revenue associated with a given quantity of output, Q, is

TR = R × Q
Figure 5.6C describes the relationship between profit—which is the difference between total revenue and total (i.e., fixed plus variable) cost—and volume of output. The volume at which total cost and total revenue are equal is referred to as the

break-even point (BEP)
. When volume is less than the break-even point, there is a loss; when volume is greater than the break-even point, there is a profit. The greater the deviation from this point, the greater the profit or loss.
Figure 5.6D shows total profit or loss relative to the break-even point.
Figure 5.6D can be obtained from
Figure 5.6C by drawing a horizontal line through the point where the total cost and total revenue lines intersect. Total profit can be computed using the formula

P = TR - TC = R × Q - (FC + v × Q)

Rearranging terms, we have

P = Q(R - v) - FC
The difference between revenue per unit, R, and variable cost per unit, v, is known as the contribution margin.

The required volume, Q, needed to generate a specified profit, P, is

Q = (P + FC) / (R - v)

A special case of this is the volume of output needed for total revenue to equal total cost. This is the break-even point, computed using the formula

Q(BEP) = FC / (R - v)
Different alternatives can be compared by plotting the profit lines for the alternatives, as shown in
Figure 5.6E.

Figure 5.6E illustrates the concept of an

indifference point
: the quantity at which a decision maker would be indifferent between two competing alternatives. In this illustration, a quantity less than the point of indifference would favor choosing alternative B because its profit is higher in that range, while a quantity greater than the point of indifference would favor choosing alternative A.
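The formulas above translate directly into code. The sketch below implements total cost, profit, break-even, and required volume, along with the indifference-point calculation implied by Figure 5.6E. All of the numbers in the example are hypothetical, not taken from the text:

```python
def total_cost(fc, v, q):
    """TC = FC + v * Q"""
    return fc + v * q

def profit(fc, v, r, q):
    """P = Q(R - v) - FC"""
    return q * (r - v) - fc

def break_even(fc, v, r):
    """Q_BEP = FC / (R - v); requires R > v."""
    return fc / (r - v)

def volume_for_profit(fc, v, r, target_p):
    """Q = (P + FC) / (R - v)"""
    return (target_p + fc) / (r - v)

def indifference_point(fc_a, cm_a, fc_b, cm_b):
    """Quantity at which two alternatives earn equal profit.
    cm_* is each alternative's contribution margin (R - v)."""
    return (fc_a - fc_b) / (cm_a - cm_b)

# Example: FC = $6,000, v = $2/unit, R = $7/unit (illustrative numbers)
q_bep = break_even(6000, 2, 7)                  # 1,200 units to break even
q_needed = volume_for_profit(6000, 2, 7, 4000)  # 2,000 units for $4,000 profit
```

Note that the indifference point is found the same way the break-even point is: set the two profit equations equal and solve for Q.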

Capacity alternatives may involve
step costs, which are costs that increase stepwise as potential volume increases. For example, a firm may have the option of purchasing one, two, or three machines, with each additional machine increasing the fixed cost, although perhaps not linearly. (See
Figure 5.7A.) Then, fixed costs and potential volume would depend on the number of machines purchased. The implication is that
multiple break-even quantities may occur, possibly one for each range. Note, however, that the total revenue line might not intersect the fixed-cost line in a particular range, meaning that there would be no break-even point in that range. This possibility is illustrated in
Figure 5.7B, where there is no break-even point in the first range. In order to decide how many machines to purchase, a manager must consider projected annual demand (volume) relative to the multiple break-even points and choose the most appropriate number of machines, as Example 4 shows.
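The multiple-break-even logic can be sketched as follows: compute the break-even quantity for each machine range, then check whether it falls inside that range's volume limits (if not, there is no break-even point in that range). The machine cost, capacity, and price figures here are assumed for illustration:

```python
def break_even_points(ranges, r, v):
    """ranges: list of (fixed_cost, q_min, q_max), one tuple per number of
    machines. Returns each range's break-even quantity and whether it is
    feasible, i.e., falls inside that range's volume limits."""
    results = []
    for fc, q_min, q_max in ranges:
        q_bep = fc / (r - v)
        results.append((q_bep, q_min <= q_bep <= q_max))
    return results

# Assumed: each machine adds $9,600 of annual fixed cost and 300 units of
# capacity; R = $50/unit, v = $10/unit.
ranges = [(9600, 0, 300), (19200, 301, 600), (28800, 601, 900)]
beps = break_even_points(ranges, 50, 10)
# Break-even quantities of 240, 480, and 720 units, each feasible in its range
```

A manager would then compare projected demand against these break-even points: demand of 500 units, for example, falls in the two-machine range, well above that range's break-even point of 480.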

Cost–volume analysis can be a valuable tool for comparing capacity alternatives if certain assumptions are satisfied:

  • One product is involved.

  • Everything produced can be sold.

  • The variable cost per unit is the same regardless of the volume.

  • Fixed costs do not change with volume changes, or they are step changes.

  • The revenue per unit is the same regardless of volume.

  • Revenue per unit exceeds variable cost per unit.

As with any quantitative tool, it is important to verify that the assumptions on which the technique is based are reasonably satisfied for a particular situation. For example, revenue per unit or variable cost per unit is not always constant. In addition, fixed costs may not be constant over the range of possible output. If demand is subject to random variations, one must take that into account in the analysis. Also, cost–volume analysis requires that fixed and variable costs can be separated, and this is sometimes exceedingly difficult to accomplish. Cost–volume analysis works best with one product or a few products that have the same cost characteristics.

A notable benefit of cost–volume considerations is the conceptual framework it provides for integrating cost, revenue, and profit estimates into capacity decisions. If a proposal looks attractive using cost–volume analysis, the next step would be to develop cash flow models to see how it fares with the addition of time and more flexible cost functions.

Financial Analysis

Operations personnel need to have the ability to do
financial analysis. A problem that is universally encountered by managers is how to allocate scarce funds. A common approach is to use financial analysis to rank investment proposals, taking into account the
time value of money.

Two important terms in financial analysis are
cash flow and
present value:

Cash flow
refers to the difference between the cash received from sales (of goods or services) and other sources (e.g., sale of old equipment) and the cash outflow for labor, materials, overhead, and taxes.

Present value
expresses in current value the sum of all future cash flows of an investment proposal.

The three most commonly used methods of financial analysis are payback, present value, and internal rate of return.


Payback is a crude but widely used method that focuses on the length of time it will take for an investment to return its original cost. For example, an investment with an original cost of $6,000 and a monthly net cash flow of $1,000 has a payback period of six months.

Payback doesn’t take into account the
time value of money. Its use is easier to rationalize for short-term paybacks than for long-term paybacks. The
present value (PV) method does take the time value of money into account. It summarizes the initial cost of an investment, its estimated annual cash flows, and any expected salvage value in a single value called the
equivalent current value, taking into account the time value of money (i.e., interest rates).

The internal rate of return (IRR) summarizes the initial cost, expected annual cash flows, and estimated future salvage value of an investment proposal in an equivalent interest rate. In other words, this method identifies the rate of return that equates the estimated future returns and the initial cost.
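The three methods can be sketched in a few lines of code. The cash flows below are hypothetical, and the IRR is found here by simple bisection, which is an implementation choice rather than anything prescribed by the text:

```python
def payback_period(initial_cost, cash_flow_per_period):
    """Periods needed to recover the original cost (ignores time value of money)."""
    return initial_cost / cash_flow_per_period

def net_present_value(initial_cost, cash_flows, rate):
    """Discount each future cash flow back to the present and net out the cost."""
    return -initial_cost + sum(
        cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1)
    )

def internal_rate_of_return(initial_cost, cash_flows, lo=0.0, hi=1.0, tol=1e-7):
    """Rate at which NPV = 0, found by bisection. Assumes NPV is positive at
    rate `lo` and negative at rate `hi`."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_present_value(initial_cost, cash_flows, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

payback_period(6000, 1000)  # 6 months, matching the text's example
```

For instance, an investment of $100 that returns $110 one period later has an IRR of 10 percent, the rate at which its NPV is zero.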

These techniques are appropriate when there is a high degree of
certainty associated with estimates of future cash flows. In many instances, however, operations managers and other managers must deal with situations better described as risky or uncertain. When conditions of risk or uncertainty are present, decision theory is often applied.

Decision Theory

Decision theory is a helpful tool for financial comparison of alternatives under conditions of risk or uncertainty. It is suited to capacity decisions and to a wide range of other decisions managers must make. It involves identifying a set of possible future conditions that could influence results, listing alternative courses of action, and developing a financial outcome for each alternative–future condition combination. Decision theory is described in the supplement to this chapter.

Waiting-Line Analysis

Analysis of lines is often useful for designing or modifying service systems. Waiting lines have a tendency to form in a wide variety of service systems (e.g., airport ticket counters, telephone calls to a cable television company, hospital emergency rooms). The lines are symptoms of bottleneck operations. Analysis is useful in helping managers choose a capacity level that will be cost-effective through balancing the cost of having customers wait with the cost of providing additional capacity. It can aid in the determination of expected costs for various levels of service capacity.

This topic is described in
Chapter 18.


Simulation can be a useful tool in evaluating what-if scenarios, and is described on this book’s website.



The strategic implications of capacity decisions can be enormous, impacting all areas of the organization. From an operations management standpoint, capacity decisions establish a set of conditions within which operations will be required to function. Hence, it is extremely important to include input from operations management people in making capacity decisions.

Flexibility can be a key issue in capacity decisions, although flexibility is not always an option, particularly in capital-intensive industries. However, where possible, flexibility allows an organization to be agile—that is, responsive to changes in the marketplace. Also, it reduces to a certain extent the dependence on long-range forecasts to accurately predict demand. And flexibility makes it easier for organizations to take advantage of technological and other innovations. Maintaining excess capacity (a capacity cushion) may provide a degree of flexibility, albeit at added cost.

Some organizations use a strategy of maintaining a capacity cushion for the purpose of blocking entry into the market by new competitors. The excess capacity enables them to produce at costs lower than what new competitors can. However, such a strategy means higher-than-necessary unit costs, and it makes it more difficult to cut back if demand slows, or to shift to new product or service offerings.

Efficiency improvements and utilization improvements can provide capacity increases. Such improvements can be achieved by streamlining operations and reducing waste. The chapter on lean operations describes ways for achieving those improvements.

Bottleneck management can be a way to increase effective capacity, by scheduling non-bottleneck operations to achieve maximum utilization of bottleneck operations.

In cases where capacity expansion will be undertaken, there are two strategies for determining the timing and degree of capacity expansion. One is the
expand-early strategy (i.e., before demand materializes). The intent might be to achieve economies of scale, to expand market share, or to preempt competitors from expanding. The risks of this strategy include an oversupply that would drive prices down, and underutilized equipment that would result in higher unit costs.

The other approach is the
wait-and-see strategy (i.e., to expand capacity only after demand materializes, perhaps incrementally). Its advantages include a lower chance of oversupply due to more accurate matching of supply and demand, and higher capacity utilization. The key risks are loss of market share and the inability to meet demand if expansion requires a long lead time.

In cases where capacity contraction will be undertaken,
capacity disposal strategies become important. This can be the result of the need to replace aging equipment with newer equipment. It can also be the result of outsourcing and downsizing operations. The cost or benefit of asset disposal should be taken into account when contemplating these actions.



Decision theory represents a general approach to decision making. It is suitable for a wide range of operations management decisions. Among them are capacity planning, product and service design, equipment selection, and location planning. Decisions that lend themselves to a decision theory approach tend to be characterized by the following elements:

  • A set of possible future conditions that will have a bearing on the results of the decision.

  • A list of alternatives for the manager to choose from.

  • A known payoff for each alternative under each possible future condition.

To use this approach, a decision maker would employ this process:

  1. Identify the possible future conditions (e.g., demand will be low, medium, or high; the competitor will or will not introduce a new product). These are called
    states of nature.

  2. Develop a list of possible
    alternatives, one of which may be to do nothing.

  3. Determine or estimate the
    payoff associated with each alternative for every possible future condition.

  4. If possible, estimate the
    likelihood of each possible future condition.

  5. Evaluate alternatives according to some
    decision criterion (e.g., maximize expected profit), and select the best alternative.


The information for a decision is often summarized in a

payoff table
, which shows the expected payoffs for each alternative under the various possible states of nature. These tables are helpful in choosing among alternatives because they facilitate comparison of alternatives. Consider the following payoff table, which illustrates a capacity planning problem.






                      Possible Future Demand
Alternatives          Low      Moderate     High
Small facility        $10*       $10         $10
Medium facility         7         12          12
Large facility         (4)         2          16

*Present value in $ millions.

The payoffs are shown in the body of the table. In this instance, the payoffs are in terms of present values, which represent equivalent current dollar values of expected future income less costs. This is a convenient measure because it places all alternatives on a comparable basis. If a small facility is built, the payoff will be the same for all three possible states of nature. For a medium facility, low demand will have a present value of $7 million, whereas both moderate and high demand will have present values of $12 million. A large facility will have a loss of $4 million if demand is low, a present value of $2 million if demand is moderate, and a present value of $16 million if demand is high.

The problem for the decision maker is to select one of the alternatives, taking the present value into account.

Evaluation of the alternatives differs according to the degree of certainty associated with the possible future conditions.


Despite the best efforts of a manager, a decision occasionally turns out poorly due to unforeseeable circumstances. Luckily, such occurrences are not common. Often, failures can be traced to a combination of mistakes in the decision process, to bounded rationality, or to suboptimization.

The decision process consists of these steps:

  1. Identify the problem.

  2. Specify objectives and criteria for a solution.

  3. Develop suitable alternatives.

  4. Analyze and compare alternatives.

  5. Select the best alternative.

  6. Implement the solution.

  7. Monitor to see that the desired result is achieved.

In many cases, managers fail to appreciate the importance of each step in the decision-making process. They may skip a step or not devote enough effort to completing it before jumping to the next step. Sometimes this happens owing to a manager’s style of making quick decisions or a failure to recognize the consequences of a poor decision. The manager’s ego can be a factor. This sometimes happens when the manager has experienced a series of successes—important decisions that turned out right. Some managers then get the impression that they can do no wrong. But they soon run into trouble, which is usually enough to bring them back down to earth. Other managers seem oblivious to negative results and continue the process they associate with their previous successes, not recognizing that some of that success may have been due more to luck than to any special abilities of their own. A part of the problem may be the manager’s unwillingness to admit a mistake. Yet other managers demonstrate an inability to make a decision; they stall long past the time when the decision should have been rendered.

Of course, not all managers fall into these traps—it seems safe to say that the majority do not. Even so, this does not necessarily mean that every decision works out as expected. Another factor with which managers must contend is

bounded rationality
, or the limits imposed on decision making by costs, human abilities, time, technology, and the availability of information. Because of these limitations, managers cannot always expect to reach decisions that are optimal in the sense of providing the best possible outcome (e.g., highest profit, least cost). Instead, they must often resort to achieving a
satisfactory solution.

Still another cause of poor decisions is that organizations typically departmentalize decisions. Naturally, there is a great deal of justification for the use of departments in terms of overcoming span-of-control problems and human limitations. However, suboptimization can occur. This is a result of different departments’ attempts to reach a solution that is optimum for each. Unfortunately, what is optimal for one department may not be optimal for the organization as a whole. If you are familiar with the theory of constraints (see


Operations management decision environments are classified according to the degree of certainty present. There are three basic categories: certainty, risk, and uncertainty.

Certainty means that relevant parameters—such as costs, capacity, and demand—have known values.

Risk means that certain parameters have probabilistic outcomes.

Uncertainty means that it is impossible to assess the likelihood of various possible future events.

Consider these situations:

  1. Profit per unit is $5. You have an order for 200 units. How much profit will you make? (This is an example of
    certainty because unit profits and total demand are known.)

  2. Profit is $5 per unit. Based on previous experience, there is a 50 percent chance of an order for 100 units and a 50 percent chance of an order for 200 units. What is expected profit? (This is an example of
    risk because demand outcomes are probabilistic.)

  3. Profit is $5 per unit. The probabilities of potential demands are unknown. (This is an example of uncertainty because the demand probabilities cannot be assessed.)
The importance of these different decision environments is that they require different analysis techniques. Some techniques are better suited for one category than for others.
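The risk case (situation 2 above) can be computed directly by weighting each possible profit by its probability:

```python
# Situation 2: $5 profit per unit; 50% chance of 100 units, 50% chance of 200.
expected_profit = 0.5 * (100 * 5) + 0.5 * (200 * 5)  # $750
```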


When it is known for certain which of the possible future conditions will actually happen, the decision is usually relatively straightforward: Simply choose the alternative that has the best payoff under that state of nature. Example 5S–1 illustrates this.
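Using the payoff table shown earlier, a decision under certainty reduces to picking the best entry in the known state's column. The small-facility payoff is assumed here to be $10 million in every state, consistent with the text's statement that it is the same for all three:

```python
# Payoffs in $ millions; the small-facility value of 10 is assumed.
payoffs = {
    "small":  [10, 10, 10],
    "medium": [7, 12, 12],
    "large":  [-4, 2, 16],
}
states = ["low", "moderate", "high"]

def best_under_certainty(payoffs, states, known_state):
    """With the future state known, choose the alternative with the best
    payoff in that state's column."""
    j = states.index(known_state)
    return max(payoffs, key=lambda a: payoffs[a][j])

best_under_certainty(payoffs, states, "high")  # "large"
best_under_certainty(payoffs, states, "low")   # "small"
```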


At the opposite extreme is complete uncertainty: No information is available on how likely the various states of nature are. Under those conditions, four possible decision criteria are
maximin, maximax, Laplace, and
minimax regret. These approaches can be defined as follows:

Maximin—Determine the worst possible payoff for each alternative, and choose the alternative that has the “best worst.” The maximin approach is essentially a pessimistic one because it takes into account only the worst possible outcome for each alternative. The actual outcome may not be as bad as that, but this approach establishes a “guaranteed minimum.”

Maximax—Determine the best possible payoff, and choose the alternative with that payoff. The maximax approach is an optimistic, “go for it” strategy; it does not take into account any payoff other than the best.

Laplace—Determine the average payoff for each alternative, and choose the alternative with the best average. The Laplace approach treats the states of nature as equally likely.

Minimax regret—Determine the worst regret for each alternative, and choose the alternative with the “best worst.” This approach seeks to minimize the difference between the payoff that is realized and the best payoff for each state of nature.

The next two examples illustrate these decision criteria.

Solved Problem 6 at the end of this supplement illustrates decision making under uncertainty when the payoffs represent costs.

The main weakness of these approaches (except for Laplace) is that they do not take into account
all of the payoffs. Instead, they focus on the worst or best, and so they lose some information. Still, for a given set of circumstances, each has certain merits that can be helpful to a decision maker.
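The four criteria can be sketched against the payoff table shown earlier (the small-facility payoff is again assumed to be $10 million in every state):

```python
# Payoffs in $ millions, rows = alternatives, columns = low/moderate/high demand.
payoffs = {
    "small":  [10, 10, 10],   # small-facility value assumed
    "medium": [7, 12, 12],
    "large":  [-4, 2, 16],
}

def maximin(p):
    return max(p, key=lambda a: min(p[a]))              # best worst payoff

def maximax(p):
    return max(p, key=lambda a: max(p[a]))              # best best payoff

def laplace(p):
    return max(p, key=lambda a: sum(p[a]) / len(p[a]))  # best average payoff

def minimax_regret(p):
    best = [max(col) for col in zip(*p.values())]       # best payoff per state
    worst_regret = {
        a: max(b - x for b, x in zip(best, row)) for a, row in p.items()
    }
    return min(worst_regret, key=worst_regret.get)      # smallest worst regret

# maximin -> "small", maximax -> "large",
# laplace -> "medium", minimax_regret -> "medium"
```

Note how the pessimistic, optimistic, and regret-based criteria can each point to a different alternative for the same table.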



Between the two extremes of certainty and uncertainty lies the case of risk: The probability of occurrence for each state of nature is known. (Note that because the states are mutually exclusive and collectively exhaustive, these probabilities must add to 1.00.) A widely used approach under such circumstances is the
expected monetary value criterion. The expected value is computed for each alternative, and the one with the best expected value is selected. The expected value is the sum of the payoffs for an alternative where each payoff is
weighted by the probability for the relevant state of nature. Thus, the approach is:

Expected monetary value (EMV) criterion—
Determine the expected payoff of each alternative, and choose the alternative that has the best expected payoff.

The expected monetary value approach is most appropriate when a decision maker is neither risk averse nor risk seeking, but is risk neutral. Typically, well-established organizations with numerous decisions of this nature tend to use expected value because it provides an indication of the long-run, average payoff. That is, the expected-value amount (e.g., $10.5 million in the last example) is not an actual payoff but an expected or average amount that would be approximated if a large number of identical decisions were to be made. Hence, if a decision maker applies this criterion to a large number of similar decisions, the expected payoff for the total will approximate the sum of the individual expected payoffs.
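A sketch of the EMV criterion applied to the payoff table shown earlier. The state probabilities of .30, .50, and .20 are assumed here; they reproduce the $10.5 million expected value mentioned above for the medium facility:

```python
payoffs = {
    "small":  [10, 10, 10],   # $ millions; small-facility value assumed
    "medium": [7, 12, 12],
    "large":  [-4, 2, 16],
}
probs = [0.30, 0.50, 0.20]    # assumed P(low), P(moderate), P(high); sum to 1

def emv(payoffs, probs):
    """Probability-weighted payoff for each alternative."""
    return {a: sum(p * x for p, x in zip(probs, row))
            for a, row in payoffs.items()}

values = emv(payoffs, probs)
best = max(values, key=values.get)
# "medium" has the highest expected value, about $10.5 million
```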


In health care, the array of treatment options and medical costs makes tools such as decision trees particularly valuable in diagnosing and prescribing treatment plans. For example, if a 20-year-old and a 50-year-old both are brought into an emergency room complaining of chest pains, the attending physician, after asking each some questions on family history, patient history, general health, and recent events and activities, will use a
decision tree to sort through the options to arrive at the appropriate decision for each patient.

Decision trees are tools that have many practical applications, not only in health care but also in legal cases and a wide array of management decision making, including credit card fraud; loan, credit, and insurance risk analysis; decisions on new product or service development; and location analysis.


A decision tree is a schematic representation of the alternatives available to a decision maker and their possible consequences. The term gets its name from the treelike appearance of the diagram (see Figure 5S.1). Although tree diagrams can be used in place of a payoff table, they are particularly useful for analyzing situations that involve sequential decisions. For instance, a manager may initially decide to build a small facility only to discover that demand is much higher than anticipated. In this case, the manager may then be called upon to make a subsequent decision on whether to expand or build an additional facility.

A decision tree is composed of a number of
nodes that have
branches emanating from them (see
Figure 5S.1). Square nodes denote decision points, and circular nodes denote chance events. Read the tree from left to right. Branches leaving square nodes represent alternatives; branches leaving circular nodes represent chance events (i.e., the possible states of nature).

After the tree has been drawn, it is analyzed from
right to left; that is, starting with the last decision that might be made. For each decision, choose the alternative that will yield the greatest return (or the lowest cost). If chance events follow a decision, choose the alternative that has the highest expected monetary value (or lowest expected cost).
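The right-to-left evaluation can be sketched as a recursive rollback. The tree below is hypothetical: an initial build-small/build-large decision, with a follow-up expand-or-not decision if demand on the small facility turns out high:

```python
# Nodes are dicts: leaves carry a payoff; decision nodes pick the best branch;
# chance nodes take the probability-weighted average of their branches.

def rollback(node):
    if "payoff" in node:
        return node["payoff"]
    if node["kind"] == "decision":
        return max(rollback(child) for child in node["branches"])
    # chance node: branches are (probability, child) pairs
    return sum(p * rollback(child) for p, child in node["branches"])

tree = {"kind": "decision", "branches": [
    {"kind": "chance", "branches": [                 # build small
        (0.4, {"payoff": 40}),                       # demand low
        (0.6, {"kind": "decision", "branches": [     # demand high:
            {"payoff": 55},                          #   expand
            {"payoff": 50},                          #   do nothing
        ]}),
    ]},
    {"kind": "chance", "branches": [                 # build large
        (0.4, {"payoff": 20}),
        (0.6, {"payoff": 70}),
    ]},
]}

rollback(tree)  # expected value of the best initial decision
```

Here the small-facility branch rolls back to 0.4(40) + 0.6(55) = 49, the large-facility branch to 0.4(20) + 0.6(70) = 50, so the initial decision favors building large.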


In certain situations, it is possible to ascertain which state of nature will actually occur in the future. For instance, the choice of location for a restaurant may weigh heavily on whether a new highway will be constructed or whether a zoning permit will be issued. A decision maker may have probabilities for these states of nature; however, it may be possible to delay a decision until it is clear which state of nature will occur. This might involve taking an option to buy the land. If the state of nature is favorable, the option can be exercised; if it is unfavorable, the option can be allowed to expire. The question to consider is whether the cost of the option will be less than the expected gain due to delaying the decision (i.e., the expected payoff
above the expected value). The expected gain is the expected value of perfect information (EVPI).

Other possible ways of obtaining perfect information depend somewhat on the nature of the decision being made. Information about consumer preferences might come from market research, additional information about a product could come from product testing, or legal experts might be called on.

There are two ways to determine the EVPI. One is to compute the expected payoff under certainty and subtract the expected payoff under risk. That is,

EVPI = Expected payoff under certainty - Expected payoff under risk

A second approach is to use the regret table to compute the EVPI. To do this, find the expected regret for each alternative. The minimum expected regret is equal to the EVPI.
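Both ways of computing the EVPI can be sketched against the earlier payoff table, again assuming state probabilities of .30, .50, and .20 and a small-facility payoff of $10 million in every state:

```python
payoffs = {
    "small":  [10, 10, 10],   # $ millions; small-facility value assumed
    "medium": [7, 12, 12],
    "large":  [-4, 2, 16],
}
probs = [0.30, 0.50, 0.20]    # assumed state probabilities

def evpi_direct(payoffs, probs):
    """Expected payoff under certainty minus best expected payoff under risk."""
    under_certainty = sum(p * max(col)
                          for p, col in zip(probs, zip(*payoffs.values())))
    under_risk = max(sum(p * x for p, x in zip(probs, row))
                     for row in payoffs.values())
    return under_certainty - under_risk

def evpi_from_regret(payoffs, probs):
    """The minimum expected regret equals the EVPI."""
    best = [max(col) for col in zip(*payoffs.values())]
    expected_regret = {
        a: sum(p * (b - x) for p, b, x in zip(probs, best, row))
        for a, row in payoffs.items()
    }
    return min(expected_regret.values())

# Both approaches agree: EVPI = 12.2 - 10.5 = 1.7 ($ millions)
```

The $1.7 million EVPI is an upper bound on what it would be worth to pay for perfect information (e.g., the cost of an option to buy the land).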


Generally speaking, both the payoffs and the probabilities in this kind of a decision problem are estimated values. Consequently, it can be useful for the decision maker to have some indication of how sensitive the choice of an alternative is to changes in one or more of these values. Unfortunately, it is impossible to consider all possible combinations of every variable in a typical problem. Nevertheless, there are certain things a decision maker can do to judge the sensitivity of probability estimates.

Sensitivity analysis
provides a range of probability over which the choice of alternatives would remain the same. The approach illustrated here is useful when there are two states of nature. It involves constructing a graph and then using algebra to determine a range of probabilities for which a given solution is best. In effect, the graph provides a visual indication of the range of probability over which the various alternatives are optimal, and the algebra provides exact values of the endpoints of the ranges. Example 5S–8 illustrates the procedure.

The graph shows the range of values of
P(2) over which each alternative is optimal. Thus, for low values of
P(2) [and thus high values of
P(1), since
P(1) +
P(2) = 1.0], alternative B will have the highest expected value; for intermediate values of
P(2), alternative C is best; and for higher values of
P(2), alternative A is best.

To find exact values of the ranges, determine where the upper parts of the lines intersect. Note that at the intersections, the two alternatives represented by the lines would be equivalent in terms of expected value. Hence, the decision maker would be indifferent between the two at that point. To determine the intersections, you must obtain the equation of each line. This is relatively simple to do. Because these are straight lines, they have the form
y =
a +
bx, where
a is the
y-intercept value at the left axis,
b is the slope of the line, and
x is
P(2). Slope is defined as the change in
y for a one-unit change in
x. In this type of problem, the distance between the two vertical axes is 1.0. Consequently, the slope of each line is equal to the right-hand value minus the left-hand value. The slopes and equations are as follows:

A: 4 + 8P(2)
B: 16 - 14P(2)
C: 12 - 4P(2)
From the graph, we can see that alternative B is best from
P(2) = 0 to the point where that straight line intersects the straight line of alternative C, and that begins the region where C is better. To find that point, solve for the value of
P(2) at their intersection. This requires setting the two equations equal to each other and solving for
P(2). Thus,

16 - 14P(2) = 12 - 4P(2)

Rearranging terms yields

10P(2) = 4
Solving yields
P(2) = .40. Thus, alternative B is best from
P(2) = 0 up to
P(2) = .40. B and C are equivalent at
P(2) = .40.

Alternative C is best from that point until its line intersects alternative A’s line. To find that intersection, set those two equations equal and solve for
P(2). Thus,

12 - 4P(2) = 4 + 8P(2)

Rearranging terms results in

12P(2) = 8
Solving yields
P(2) = .67. Thus, alternative C is best from
P(2) > .40 up to
P(2) = .67, where A and C are equivalent. For values of
P(2) greater than .67 up to
P(2) = 1.0, A is best.

Note: If a problem calls for ranges with respect to
P(1), find the
P(2) ranges as above, and then subtract each
P(2) from 1.00 (e.g., .40 becomes .60, and .67 becomes .33).



Process selection refers to deciding on the way production of goods or services will be organized. It has major implications for capacity planning, layout of facilities, equipment, and design of work systems. Process selection occurs as a matter of course when new products or services are being planned. However, it also occurs periodically due to technological changes in products or equipment, as well as competitive pressures.
Figure 6.1 provides an overview of where process selection and capacity planning fit into system design. Forecasts, product and service design, and technological considerations all influence capacity planning and process selection. Moreover, capacity and process selection are interrelated, and are often done in concert. They, in turn, affect facility and equipment choices, layout, and work design.

How an organization approaches process selection is determined by the organization’s
process strategy. Key aspects include:

  • Capital intensity: The mix of equipment and labor that will be used by the organization.

  • Process flexibility: The degree to which the system can be adjusted to changes in processing requirements due to such factors as changes in product or service design, changes in volume processed, and changes in technology.


Process choice is demand-driven. The two key questions in process selection are:

  1. How much variety will the process need to be able to handle?

  2. How much volume will the process need to be able to handle?

Answers to these questions will serve as a guide to selecting an appropriate process. Usually, volume and variety are
inversely related; a higher level of one means a lower level of the other. However, the need for flexibility of personnel and equipment is
directly related to the level of variety the process will need to handle: The lower the variety, the less the need for flexibility, while the higher the variety, the greater the need for flexibility. For example, if a worker’s job in a bakery is to make cakes, both the equipment and the worker will do the same thing day after day, with little need for flexibility. But if the worker has to make cakes, pies, cookies, brownies, and croissants, both the worker and the equipment must have the flexibility to be able to handle the different requirements of each type of product.

There is another aspect of variety that is important. Variety means either having dedicated operations for each different product or service, or if not, having to get equipment ready every time there is the need to change the product being produced or the service being provided.


Process Types

There are five basic process types: job shop, batch, repetitive, continuous, and project.

Job Shop. A job shop usually operates on a relatively small scale. It is used when a low volume of high-variety goods or services will be needed. Processing is
intermittent; work includes small jobs, each with somewhat different processing requirements. High flexibility using general-purpose equipment and skilled workers are important characteristics of a job shop. A manufacturing example of a job shop is a tool and die shop that is able to produce one-of-a-kind tools. A service example is a veterinarian’s office, which is able to process many types of animals and a variety of injuries and diseases.

Batch. Batch processing is used when a moderate volume of goods or services is desired, and it can handle a moderate variety in products or services. The equipment need not be as flexible as in a job shop, but processing is still intermittent. The skill level of workers doesn’t need to be as high as in a job shop because there is less variety in the jobs being processed. Examples of batch systems include bakeries, which make bread, cakes, or cookies in batches; movie theaters, which show movies to groups (batches) of people; and airlines, which carry planeloads (batches) of people from airport to airport. Other examples of products that lend themselves to batch production are paint, ice cream, soft drinks, beer, magazines, and books. Other examples of services include plays, concerts, music videos, radio and television programs, and public address announcements.


Repetitive. When higher volumes of more standardized goods or services are needed, repetitive processing is used. The standardized output means only slight flexibility of equipment is needed. Skill of workers is generally low. Examples of this type of system include production lines and assembly lines. Sometimes these terms are used interchangeably, although assembly lines generally involve the last stages of an assembled product. Familiar products made by these systems include automobiles, television sets, smartphones, and computers. An example of a service system is an automatic carwash. Other examples of service include cafeteria lines and ticket collectors at sports events and concerts. Also,
mass customization is an option.

Continuous. When a very high volume of nondiscrete, highly standardized output is desired, a continuous system is used. These systems have almost no variety in output and, hence, no need for equipment flexibility. Workers’ skill requirements can range from low to high, depending on the complexity of the system and the expertise that workers need. Generally, if equipment is highly specialized, worker skills can be lower. Examples of nondiscrete products made in continuous systems include petroleum products, steel, sugar, flour, and salt. Continuous services include air monitoring, supplying electricity to homes and businesses, and the internet.

These process types are found in a wide range of manufacturing and service settings. The ideal is to have process capabilities match product or service requirements. Failure to do so can result in inefficiencies and higher costs than are necessary, perhaps creating a competitive disadvantage.
Table 6.1 provides a brief description of each process type, along with the advantages and disadvantages of each.


Types of processing

Figure 6.2 provides an overview of these four process types in the form of a matrix, with an example for each process type. Note that job variety, process flexibility, and unit cost are highest for a job shop and get progressively lower moving from job shop to continuous processing. Conversely, volume of output is lowest for a job shop and gets progressively higher moving from job shop to continuous processing. Note, too, that the examples fall along the diagonal. The implication is that the diagonal represents the ideal choice of processing system for a given set of circumstances. For example, if the goal is to be able to process a small volume of jobs that will involve high variety, job shop processing is most appropriate. For less variety and a higher volume, a batch system would be most appropriate, and so on. Note that combinations far from the diagonal would not even be considered, such as using a job shop for high-volume, low-variety jobs, or continuous processing for low-volume, high-variety jobs, because that would result in either higher than necessary costs or lost opportunities.
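The diagonal logic of the matrix can be sketched as a simple rule of thumb. The numeric thresholds below are purely illustrative assumptions, not figures from the text; real cutoffs depend on the industry and the organization.

```python
# Illustrative sketch of the product-process matrix diagonal.
# The volume and variety thresholds are invented for demonstration only.

def choose_process(volume, variety):
    """Suggest a process type from annual volume (units) and
    variety (number of distinct job types)."""
    if volume < 100 and variety > 50:
        return "job shop"       # low volume, high variety
    if volume < 10_000 and variety > 5:
        return "batch"          # moderate volume and variety
    if variety > 1:
        return "repetitive"     # high volume, low variety
    return "continuous"         # very high volume, essentially no variety

print(choose_process(50, 100))    # a tool and die shop -> job shop
print(choose_process(5_000, 12))  # a bakery -> batch
```

A request that sits far off the diagonal (say, very high volume combined with high variety) has no good match here, which mirrors the matrix's warning that such combinations lead to higher-than-necessary costs or lost opportunities.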

Another consideration is that products and services often go through
life cycles that begin with low volume, which increases as products or services become better known. When that happens, a manager must know when to shift from one type of process (e.g., job shop) to the next (e.g., batch). Of course, some operations remain at a certain level (e.g., magazine publishing), while others increase (or decrease as markets become saturated) over time. Again, it is important for a manager to assess his or her products and services and make a judgment on whether to plan for changes in processing over time.

All of these process types (job shop, batch, repetitive, and continuous) are typically ongoing operations. However, some situations are not ongoing but instead are of limited duration. In such instances, the work is often organized as a project.


Project. A project is used for work that is nonroutine, with a unique set of objectives to be accomplished in a limited time frame. Examples range from simple to complicated, including such things as putting on a play, consulting, making a motion picture, launching a new product or service, publishing a book, building a dam, and building a bridge. Equipment flexibility and worker skills can range from low to high.

The type of process or processes used by an organization influences a great many activities of the organization.
Table 6.2 briefly describes some of those influences.


Process choice affects numerous activities/functions

Process type also impacts supply chain requirements. Repetitive and continuous processes require steady inputs of high-volume goods and services. Delivery reliability in terms of quality and timing is essential. Job shop and batch processing may mean that suppliers have to be able to deal with varying order quantities and timing of orders. In some instances, seasonality is a factor, so suppliers must be able to handle periodic large demand.

The processes discussed do not always exist in their “pure” forms. It is not unusual to find hybrid processes—processes that have elements of other process types embedded in them. For instance, companies that operate primarily in a repetitive mode, or a continuous mode, will often have repair shops (i.e., job shops) to fix or make new parts for equipment that fails. Also, if volume increases for some items, an operation that began, say, in a job shop or as a batch mode may evolve into a batch or repetitive operation. This may result in having some operations in a job shop or batch mode, and others in a repetitive mode.


Product and Service Profiling

Process selection can involve substantial investment in equipment and have a very specific influence on the layout of facilities, which also require heavy investment. Moreover, mismatches between operations capabilities and market demand and pricing or cost strategies can have a significant negative impact on the ability of the organization to compete or, in government agencies, to effectively service clients. Therefore, it is highly desirable to assess the degree of correlation between various process choices and market conditions before making process choices in order to achieve an appropriate matching.

Product or service profiling
can be used to avoid any inconsistencies by identifying key product or service dimensions and then selecting appropriate processes. Key dimensions often relate to the range of products or services that will be processed, expected order sizes, pricing strategies, expected frequency of schedule changes, and order-winning requirements.

Sustainable Production of Goods and Services

Business organizations are facing increasing pressure from a variety of sources to operate sustainable production processes. According to the Lowell Center for Sustainable Production, “Sustainable Production is the creation of goods and services using processes and systems that are: non-polluting; conserving of energy and natural resources; economically efficient; safe and healthful for workers, communities, and consumers; and socially and creatively rewarding for all working people.” To achieve this, the Lowell Center advocates designing and operating processes in ways that:

  • “wastes and ecologically incompatible byproducts are reduced, eliminated or recycled on-site;

  • chemical substances or physical agents and conditions that present hazards to human health or the environment are eliminated;

  • energy and materials are conserved, and the forms of energy and materials used are most appropriate for the desired ends; and

  • work spaces are designed to minimize or eliminate chemical, ergonomic and physical hazard.”

To achieve these goals, business organizations must focus on a number of factors that include energy use and efficiency, CO2 (carbon footprint) and toxic emissions, waste generation, lighting, heating, cooling, ventilation, noise and vibration, and worker health and safety.

Lean Process Design

Lean process design is guided by general principles that are discussed more fully in a later chapter. One principle of particular interest here is waste reduction, which relates to sustainability objectives. Lean design also focuses on variance reduction in workload over the entire process to achieve level production and thereby improve process flow. Successful lean design results in reduced inventory and floor space; quicker response times and shorter lead times; reduced defects, rework, and scrap; and increased productivity. Lean design is often translated into practice using cellular layouts, which are discussed later in this chapter.

Lean process design has broad applications in seemingly diverse areas such as health care delivery systems, manufacturing, construction projects, and process reengineering.


Technology and technological innovation often have a major influence on business processes.

Technological innovation
refers to the discovery and development of new or improved products, services, or processes for producing or providing them.

Technology refers to applications of scientific knowledge to the development and improvement of goods and services and/or the processes that produce or provide them. The term high technology refers to the most advanced and developed equipment and/or methods.

Process technology and information technology can have a major impact on costs, productivity, and competitiveness.
Process technology includes methods, procedures, and equipment used to produce goods and provide services. This not only involves processes within an organization, it also extends to supply chain processes.
Information technology (IT) is the science and use of computers and other electronic equipment to store, process, and send information. IT is heavily ingrained in today’s business operations. This includes electronic data processing, the use of bar codes and radio frequency tags to identify and track goods, devices used to obtain point-of-sale information, data transmission, the internet, e-commerce, e-mail, and more.

With radio frequency identification (RFID) tags, items can be tracked during production and in inventory. For outbound goods, readers at a packing station can verify that the proper items and quantities were picked before shipping the goods to a customer or a distribution center. In a hospital setting, RFID tags can be used in several ways. One is to facilitate keeping accurate track of hospital garments, automating the process by which clean garments are inventoried and disbursed. An RFID tag can be worn by each hospital employee. The tag contains a unique ID number that is associated with each wearer. When an employee comes to the counter to pick up garments, the employee’s tag is scanned and software generates data regarding garment type, size, location on racks, and availability for that employee. The garments are then picked from the specified racks, their RFID tags are read by a nearby scanner and processed, and the database is automatically updated.
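The garment-dispensing workflow just described can be sketched as a simple lookup. The tag IDs, record fields, and messages below are hypothetical, invented only to illustrate the scan-then-pick logic.

```python
# Hypothetical garment database keyed by employee RFID tag ID.
garment_db = {
    "TAG-0412": {"type": "scrub top", "size": "M", "rack": "B-7", "available": True},
    "TAG-0977": {"type": "lab coat", "size": "L", "rack": "C-2", "available": False},
}

def scan_employee_tag(tag_id):
    """Return picking instructions for the garment assigned to this tag."""
    record = garment_db.get(tag_id)
    if record is None:
        return "unknown tag"
    if not record["available"]:
        return record["type"] + " not available"
    # str.format ignores the unused "available" key.
    return "pick {size} {type} from rack {rack}".format(**record)

print(scan_employee_tag("TAG-0412"))  # pick M scrub top from rack B-7
```

After the picked garments pass the scanner, a real system would also flip the `available` flag and log the disbursement, updating the database automatically as the text notes.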

Technological innovation in processing technology can produce tremendous benefits for organizations by increasing quality, lowering costs, increasing productivity, and expanding processing capabilities. Among the examples are laser technology used in surgery and laser measuring devices, advances in medical diagnostic equipment, high-speed internet connections, high-definition television, online banking, information retrieval systems, and high-speed search engines. Processing technologies often come through acquisition rather than through internal efforts of an organization.

While process technology can have enormous benefits, it also carries substantial risk unless a significant effort is made to fully understand both the downside and the upside of a particular technology. It is essential to understand what the technology will and won’t do. Also, there are economic considerations (initial cost, space, cash flow, maintenance, consultants), integration considerations (cost, time, resources), and human considerations (training, safety, job loss).


An increasingly asked question in process design is whether to automate.

Automation is machinery that has sensing and control devices that enable it to operate automatically. If a company decides to automate, the next question is how much. Automation can range from factories that are completely automated to a single automated operation.

Automated services are becoming increasingly important. Examples range from automated teller machines (ATMs) to automated heating and air conditioning and include automated inspection, automated storage and retrieval systems, package sorting, mail processing, e-mail, online banking, and E-Z pass.

Automation offers a number of advantages over human labor. It has low variability, whereas it is difficult for a human to perform a task in exactly the same way, in the same amount of time, and on a repetitive basis. In a production setting, variability is detrimental to quality and to meeting schedules. Moreover, machines do not get bored or distracted, nor do they go on strike, ask for higher wages, or file labor grievances. Still another advantage of automation is the reduction of variable costs. In order for automated processing to be an option, job-processing requirements must be
standardized (i.e., have very little or no variety).

Both manufacturing and service organizations are increasing their use of automation as a way to reduce costs, increase productivity, and improve quality and consistency.

Automation is frequently touted as a strategy necessary for competitiveness. However, automation also has certain disadvantages and limitations compared to human labor. To begin with, it can be costly. Technology is expensive; usually it requires high volumes of output to offset high costs. In addition, automation is much less flexible than human labor. Once a process has been automated, there are substantial reasons for not changing it. Moreover, workers sometimes fear automation because it might cause them to lose their jobs. This can have an adverse effect on morale and productivity.
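The volume needed to offset automation's high fixed cost can be found with a standard break-even calculation. The dollar figures below are made-up assumptions for illustration; the formula itself is just the familiar fixed-cost versus variable-cost trade-off.

```python
def breakeven_volume(fixed_auto, var_auto, fixed_manual, var_manual):
    """Output volume at which automated and manual processing cost the same.

    Assumes automation has the higher fixed cost but the lower
    variable cost per unit (var_manual > var_auto).
    """
    return (fixed_auto - fixed_manual) / (var_manual - var_auto)

# Hypothetical numbers: a $500,000 automated line at $2/unit versus
# $50,000 of manual tooling at $8/unit.
v = breakeven_volume(500_000, 2.0, 50_000, 8.0)
print(round(v))  # 75000 units; below this volume, manual labor is cheaper
```

The calculation makes the text's point concrete: unless expected demand comfortably exceeds the break-even volume, the less flexible automated option is hard to justify on cost alone.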

Decision makers must carefully examine the issue of whether to automate, or the degree to which to automate, so they clearly understand all the ramifications. Also, much thought and careful planning are necessary to successfully integrate automation into a production system. Otherwise, it can lead to major problems. Automation has important implications not only for cost and flexibility, but also for the fit with overall strategic priorities. If the decision is made to automate, care must be taken to remove waste from the system prior to automating, to avoid building the waste into the automated system.
Table 6.3 has a list of questions for organizations that are considering automation.


Automation questions

  1. What level of automation is appropriate? (Some operations are more suited to being automated than others, so partial automation can be an option.)

  2. How would automation affect the flexibility of an operation system?

  3. How can automation projects be justified?

  4. How should changes be managed?

  5. What are the risks of automating?

  6. What are some of the likely effects of implementing automation on market share, costs, quality, customer satisfaction, labor relations, and ongoing operations?

Generally speaking, there are three kinds of automation: fixed, programmable, and flexible.

Fixed automation is the least flexible. It uses high-cost, specialized equipment for a fixed sequence of operations. Low cost and high volume are its primary advantages; minimal variety and the high cost of making major changes in either product or process are its primary limitations.

Programmable automation involves the use of high-cost, general-purpose equipment controlled by a computer program that provides both the sequence of operations and specific details about each operation. This type of automation has the capability of economically producing a fairly wide variety of low-volume products in small batches. Numerically controlled (N/C) machines and some robots are applications of programmable automation.

Computer-aided manufacturing (CAM)
refers to the use of computers in process control, ranging from robots to automated quality control.

Numerically controlled (N/C) machines
are programmed to follow a set of processing instructions based on mathematical relationships that tell the machine the details of the operations to be performed. The instructions are stored on a device such as a microprocessor. Although N/C machines have been used for many years, they are an important part of new approaches to manufacturing. Individual machines often have their own computer; this is referred to as
computerized numerical control (CNC). Or one computer may control a number of N/C machines, which is referred to as
direct numerical control (DNC).

N/C machines are best used in cases where parts are processed frequently and in small batches, where part geometry is complex, close tolerances are required, mistakes are costly, and there is the possibility of frequent changes in design. The main limitations of N/C machines are the higher skill levels needed to program the machines and their inability to detect tool wear and material variation.

The use of robots in manufacturing is sometimes an option. Robots can handle a wide variety of tasks, including welding, assembly, loading and unloading of machines, painting, and testing. They relieve humans from heavy or dirty work and often eliminate drudgery tasks.

Some uses of robots are fairly simple; others are much more complex. At the lowest level are robots that follow a fixed set of instructions. Next are programmable robots, which can repeat a set of movements after being led through the sequence. These robots “play back” a mechanical sequence much as a video recorder plays back a visual sequence. At the next level up are robots that follow instructions from a computer. Beyond these are robots that can recognize objects and make certain simple decisions.

Still another form of robots are collaborative robots (also known as cobots) that are designed to work collaboratively with humans. The collaborative application of robotics enables humans and robots to work together safely and effectively, augmenting the capabilities of their human counterparts, achieving results neither could do alone. Cobots are designed with multiple advanced sensors, software, and end of arm tooling that help them quickly and easily sense and adapt to anything that comes into their work space. They also have the ability to detect any abnormal force applied to their joints while in motion. These robots can be programmed to respond immediately by stopping or reversing positions when they come into contact with a human.

Flexible automation evolved from programmable automation. It uses equipment that is more customized than that of programmable automation. A key difference between the two is that flexible automation requires significantly less changeover time. This permits almost continuous operation of equipment
and product variety without the need to produce in batches.

In practice, flexible automation is used in several different formats.


A flexible manufacturing system (FMS) is a group of machines that include supervisory computer control, automatic material handling, and robots or other automated processing equipment. Reprogrammable controllers enable these systems to produce a variety of similar products. Systems may range from three or four machines to more than a dozen. They are designed to handle intermittent processing requirements with some of the benefits of automation and some of the flexibility of individual, or stand-alone, machines (e.g., N/C machines). Flexible manufacturing systems offer reduced labor costs and more consistent quality when compared with more traditional manufacturing methods, lower capital investment and higher flexibility than “hard” automation, and relatively quick changeover time. Flexible manufacturing systems often appeal to managers who hope to achieve both the flexibility of job shop processing and the productivity of repetitive processing systems.

Although these are important benefits, an FMS also has certain limitations. One is that this type of system can handle a relatively narrow range of part variety, so it must be used for a family of similar parts, which all require similar machining. Also, an FMS requires longer planning and development times than more conventional processing equipment because of its increased complexity and cost. Furthermore, companies sometimes prefer a gradual approach to automation, and FMS represents a sizable chunk of technology.

Computer-integrated manufacturing (CIM)
is a system that uses an integrating computer system to link a broad range of manufacturing activities, including engineering design, flexible manufacturing systems, purchasing, order processing, and production planning and control. Not all elements are absolutely necessary. For instance, CIM might be as simple as linking two or more FMSs by a host computer. More encompassing systems can link scheduling, purchasing, inventory control, shop control, and distribution. In effect, a CIM system integrates information from other areas of an organization with manufacturing.

The overall goal of using CIM is to link various parts of an organization to achieve rapid response to customer orders and/or product changes, to allow rapid production, and to reduce
indirect labor costs.

A shining example of how process choices can lead to competitive advantages can be found at Allen-Bradley’s computer-integrated manufacturing process in Milwaukee, Wisconsin. The company converted a portion of its factory to a fully automated “factory within a factory” to assemble contacts and relays for electrical motors. A handful of humans operate the factory, although once an order has been entered into the system, the machines do virtually all the work, including packaging and shipping, and quality control. Any defective items are removed from the line, and replacement parts are automatically ordered and scheduled to compensate for the defective items. The humans program the machines, monitor operations, and attend to any problems signaled by a system of warning lights.


As orders come into the plant, computers determine production requirements and schedules and order the necessary parts. Bar-coded labels that contain processing instructions are automatically placed on individual parts. As the parts approach a machine, a sensing device reads the bar code and communicates the processing instructions to the machine. The factory can produce 600 units an hour.

The company has realized substantial competitive advantages from the system. Orders can be completed and shipped within 24 hours of entry into the system, indirect labor costs and inventory costs have been greatly reduced, and quality is very high.

The Internet of Things (IoT). The internet of things is the extension of internet connectivity into devices such as cell phones, vehicles, audio and video devices, and much more, some of which you are probably familiar with. These devices can send and receive information with other devices over the internet. Industrial use of the IoT will have a major impact on manufacturing and the global economy, with intelligence that augments human capabilities. Applications involve AI (artificial intelligence) and machine learning, quality and productivity improvement, and predictive maintenance.

3D Printing

A 3D printer is a type of industrial robot that is controlled using computer-aided design (CAD).

3D printing, also known as additive manufacturing, involves processes that create three-dimensional objects by applying successive layers of materials. The objects can be of almost any size or shape. These processes are different from many familiar processes that use subtractive manufacturing, in which material is removed by methods such as cutting, grinding, sanding, drilling, and milling. Also, producing an object using 3D printing is generally much slower than using more conventional techniques in a factory setting.

In early applications, material was deposited onto a powder bed using inkjet printer heads—hence, the name
3D printing. Today, the term 3D printing refers to a wide range of techniques such as
extrusion (the deformation of either metal or plastic forced under pressure through a die to create a shape) and
sintering (using heat or pressure or both to form a solid material from powder without causing it to liquefy).


3D printers come in a wide variety of sizes and shapes. Some printers look very much like a microwave oven, while others look completely different.

The use of
3D scanning technologies allows the replication of objects without the use of molds. That can be beneficial in cases where molding techniques are difficult or costly, or where contact with substances used in molding processes could harm the original item. 3D objects can also be created from photographs of an existing object. That involves taking a series of photographs of the object (usually about 20) from various angles in order to capture adequate detail of the object for reproduction.

It is possible that in the long term, 3D printing technologies could have a significant impact on where and how production occurs and on supply chains.

Applications. Commercial applications of 3D printing are occurring in a wide array of businesses, and there are also a few consumer applications, some of which are shown in Table 6.4.


Some examples of applications of 3D technology

Industrial Applications

Mass customization: Customers can create unique designs for standard goods (e.g., cell phone cases)

Distributed manufacturing: Local 3D printing centers that can produce goods on demand for pickup

Computers: Computers, motherboards, other parts

Robots: Robots and robot parts

Rapid prototyping: Rapid fabrication of a scale model of a physical part or assembly

Rapid manufacturing: Inexpensive production of one or a small nu