DEVELOPMENT OF THE ARCHITECTURE COST EFFECTIVENESS FRAMEWORK AND APPLICATION TO OPEN SYSTEMS ARCHITECTURES

THESIS

Donald A. Barrett, Captain, USAF

AFIT-ENV-MS-17-M-171

DEPARTMENT OF THE AIR FORCE
AIR UNIVERSITY
AIR FORCE INSTITUTE OF TECHNOLOGY
Wright-Patterson Air Force Base, Ohio

DISTRIBUTION STATEMENT A. APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.

The views expressed in this thesis are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the United States Government. This material is declared a work of the United States Government and is not subject to copyright protection in the United States.

Abstract

The Modular Open System Approach (MOSA) is an initiative to, among other things, reduce cost and schedule for acquisition programs. While programs have experienced savings using MOSA, the majority of programs have not, in part due to the lack of a logical method for evaluating architecture alternatives. This research develops the Architecture Cost Effectiveness Framework (ACEF), which combines multi-attribute utility analysis with cost effectiveness analysis throughout the program lifecycle. Step 1 is the establishment of a business strategy that includes an Attribute Hierarchy of selected operational requirements. The business strategy also defines Lifecycle Utility Reference Profiles (LURPs) that document changing requirements for each attribute over time. Step 2 develops a reference architecture for all alternatives. Step 3 creates architecture alternatives using MOSA, including planned design increments. Step 4 compares the alternatives by evaluating the Component Acquisition Cost (CAC) and Component Integration Cost (CIC) for each component. Step 5 conducts a sensitivity analysis to understand the effect of decision-maker preference on the results. Finally, Step 6 examines the results and updates the business strategy and reference architecture. The ACEF will help program managers establish traceability between architectural decisions and operational utility, and thus experience the benefits of MOSA more consistently.

Acknowledgments

This thesis would not have been possible without the support of a great many people. I would like to extend thanks to my committee for providing feedback and guidance, as well as for their patience as I completed my thesis around many job-related delays. I would also like to extend thanks to my co-workers, at AFRL, SAF/AQ and throughout the OMS community, who helped me better understand acquisition, MOSA and open architectures in general. I would also like to extend thanks to my lovely wife for putting up with my late nights and time locked away in the office. I couldn't have succeeded without her patience and support.

Captain Donald A. Barrett

Table of Contents

Abstract
Acknowledgments
Table of Contents
List of Figures
List of Tables

I. Introduction
  General Issue
    The Architecture Effect
    Unfulfilled Potential
  Problem Statement
  Research
    Research Question
  Investigative Questions
    Investigation Question 1
    Investigation Question 2
    Investigation Question 3
    Investigation Question 4
    Investigation Question 5
  Methodology
  Assumptions
  Limitations
  Implications
  Preview

II. Literature Review
  Chapter Overview
  Acquisition Review
    Capability Requirements
    System Requirements Definition
  Modular Open System Approach
    What is MOSA?
    MOSA Evaluation Tools
    Results of MOSA Application
  Cost Effectiveness
  Multi-Attribute Utility Analysis

List of Figures

Figure 1: Trends for Acquisition Timelines in Software Intensive Systems
Figure 2: The Relationship between Capability Documents in the Acquisition Process (DoD, 2015: 5)
Figure 3: Modular Open Systems Approach (OSJTF, 2004: 3)
Figure 4: Decomposition of a System into its Modular Design (DAU, 2015)
Figure 5: Decomposition of a System Showing Multiple Interface Types (DAU, 2015)
Figure 6: Types of Standards as related to Openness and Market Acceptance (DAU, 2015)
Figure 7: Decomposition of a System showing Open Interfaces (DAU, 2015)
Figure 8: Open Architecture Assessment Matrix Resulting from OAAM Analysis (Naval OA Enterprise Team, July 2009)
Figure 9: KOSS Tool Overview (Naval OA Enterprise Team, August 2009: 5)
Figure 10: KOSS Tool Example (Naval OA Enterprise Team, August 2009: 8)
Figure 11: KOSS Cost of Change (Naval OA Enterprise Team, August 2009: 13)
Figure 12: Evolutionary Acquisition Model with 3 Phases (Ford and Dillard, 2008: 14)
Figure 13: Architecture Tradeoff Analysis Method Conceptual Flow (SEI, 2016)
Figure 14: Architecture Management Relationships (Cloutier, et al 2008)
Figure 15: Reference Architecture Objectives (Cloutier, et al 2008)
Figure 16: UAS Open Architectures (GAO, 2013: 12)
Figure 17: Scatter Plot for a Notional Cost Effectiveness Analysis (DAU, 2013)
Figure 18: Objective Hierarchy Applied to a Car Purchase
Figure 19: Functional Value Hierarchy Applied to a Car Purchase
Figure 20: System Characteristic Attribute Hierarchy Applied to a Car Purchase
Figure 21: Attribute Hierarchy with Weights
Figure 22: Example Utility Curves
Figure 23: Utility Risk Preference (NSA, 2000: 25)
Figure 24: Example Utility Profile
Figure 25: Example LURP
Figure 26: Notional Weapon System Design Increment Diagram
Figure 27: Notional Design Increment
Figure 28: Instantaneous Utility
Figure 29: The Component Breakdown
Figure 30: Interface Changes
Figure 31: Notional Design Increment
Figure 32: ICER Comparison
Figure 33: Program X Attribute Hierarchy
Figure 34: EO Video Resolution LURP
Figure 35: IR Video Resolution LURP
Figure 36: MTI Detection Coverage LURP
Figure 37: Subsystem A Geolocation Accuracy LURP
Figure 38: Program X Reference Architecture Diagram
Figure 39: Prime Alternative Architecture Implementation
Figure 40: Alternative 2 Architecture Implementation
Figure 41: Alternative 3 Architecture Implementation
Figure 42: Comparison of Operational Utility over Time
Figure 43: Instantaneous Cost Effectiveness Ratio Comparison
Figure 44: EO Video Resolution LURP Scenarios
Figure 45: Prime Alternative KPP Utility Calculations
Figure 46: Prime Alternative KSA Utility Calculations
Figure 47: Alternative 2 KPP Utility Calculations
Figure 48: Alternative 2 KSA Utility Calculations
Figure 49: Alternative 3 KPP Utility Calculations
Figure 50: Alternative 3 KSA Utility Calculations

I. Introduction

General Issue

    Congress and the DOD have long sought to improve the acquisition of major weapon systems, yet many DOD programs are still falling short of cost, schedule, and performance expectations. The results are unanticipated cost overruns, reduced buying power, and in some cases delays or reductions in the capability ultimately delivered to the warfighter.
    Government Accountability Office, 2015 (GAO, 2015: 20)

The Modular Open System Approach (MOSA) was developed to help reverse the trend of United States Department of Defense (DoD) weapons systems procurement programs being delivered over budget and behind schedule. MOSA goes back as far as 1994, when the original Open Systems Joint Task Force was created to "Sponsor and accelerate the adoption of open systems in electronics included in weapons systems acquisitions," in order to "reduce life-cycle costs and facilitate effective weapon system intra- and interoperability." (Kaminski, 1994: 1) Combining modular design and open architectures with robust business practices, MOSA is intended to help deliver capability faster and cheaper.
The DoD has required the use of MOSA within the Defense Acquisition System, in one form or another, for decades. However, not all programs have experienced the promised effects of MOSA. Instead, costs have continued to rise and multiple major weapons systems have seen their fielding delayed. Figure 1 highlights this trend by showing the historical fielding times of various DoD systems, compared with other technology intensive systems.

Figure 1: Trends for Acquisition Timelines in Software Intensive Systems

The Architecture Effect

Acquisition programs are chartered to deliver operationally useful systems to satisfy a validated military need. To do this, programs balance many requirements to design and deliver a weapon system. MOSA is an approach that allows programs to satisfy operational requirements using business practices to create an open system implementation tailored to the program's needs. An Open System is

    A system that employs modular design tenets, uses widely supported and consensus based standards for its key interfaces, and is subject to validation and verification tests to ensure the openness of its key interfaces. (OSJTF, 2004: A3)

While not an exhaustive list, MOSA is supposed to:

• accelerate and simplify the delivery of advanced capability into systems without replacing entire systems (Kendall, 2015: 14)
• provide valuable mechanisms for continuing competition and incremental upgrades (DoD, 2015: 86)
• facilitate reuse across the joint force. (DoD, 2015: 86)

Despite this intent, DoD programs using MOSA have experienced varying levels of success. Some have seen outstanding results, with cost reduced and fielding accelerated. For example, the US Navy's Acoustic Rapid COTS Insertion (ARCI) program showed lifecycle cost improvements of 5 to 1. (Boudreau, 2009: xvi) However, these success stories have proven to be the exception rather than the rule. Multiple major programs have been born in the MOSA era, but the trend of over-budget and late-to-need programs has continued largely unchanged.

Unfulfilled Potential

Better Buying Power (BBP) 3.0 is an initiative written by the Under Secretary of Defense for Acquisition, Technology, and Logistics to "bend the cost curve." (Kendall, 2015) It contains a series of guidelines for improving acquisition, one of which is MOSA. BBP 3.0 outlines a primary challenge MOSA faces:

    Often, this design feature has been either traded away because of competing requirements or lost because the government has failed to secure technical control and ownership of all the needed interfaces. (Kendall, 2015: 14)

The results of interviews conducted by the MOSA Technical Standards Working Group (MOSA TSWG, 2016) indicate that program managers are not making a conscious effort to acquire closed solutions. Instead, they are using MOSA, but making MOSA implementation decisions that have negative consequences on their ability to cost effectively modify their system. Through conversations with a variety of programs, it has become clear that this is at least partially due to unclear or conflicting perceptions of how these implementation decisions relate to the system's operational utility, cost and schedule. Programs believe that a completely closed solution is to be avoided because it

[...]

cost at any point in the program. (2) The lifecycle cost effectiveness ratio compares the operational utility to the total cost for the entire lifecycle.
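To illustrate how these two ratios could be computed, the following minimal Python sketch (not part of the thesis) assumes notional annual utility scores and annual costs for a single alternative; the values and names are hypothetical.

# Illustrative sketch of the two ACEF cost effectiveness ratios.
# Assumes notional annual utility values (0-1) and annual costs ($M);
# the numbers below are hypothetical, not drawn from the thesis.

annual_utility = [0.55, 0.62, 0.70, 0.78, 0.80]   # operational utility per year
annual_cost    = [120.0, 45.0, 30.0, 25.0, 20.0]  # cost incurred each year, $M

def instantaneous_cer(year_index):
    """Current annual utility divided by cumulative cost to date."""
    cost_to_date = sum(annual_cost[:year_index + 1])
    return annual_utility[year_index] / cost_to_date

def lifecycle_cer():
    """Average annual utility divided by total lifecycle cost."""
    avg_utility = sum(annual_utility) / len(annual_utility)
    return avg_utility / sum(annual_cost)

print(instantaneous_cer(2))  # ratio after the third program year
print(lifecycle_cer())       # ratio over the whole (notional) lifecycle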
Operational utility is measured against the validated Key Performance Parameters (KPPs) and Key System Attributes (KSAs) identified by the program and Additional Performance Attributes (APAs) that the program office identifies. Multi-Attribute Utility Analysis (MAUA), as primarily described by Keeney and Raiffa (Keeney and Raiffa, 1993), is used to aggregate the individual utilities associated with each KPP, KSA or APA into a single operational utility value. To accomplish this, the program office develops an Attribute Hierarchy based on the program's required capabilities, with the leaf level attributes corresponding to the program's KPPs, KSAs and APAs. This hierarchy includes weights for each leaf level attribute. Next, a lifecycle utility reference profile (LURP) is developed for each leaf level attribute. A LURP is a set of utility curves that represent the changing utility over time for that attribute. Each individual leaf level utility curve is based on the threshold and objectives of the related KPP, KSA or APA. The LURPs are documented in the program's business plan.

Once the Attribute Hierarchy and LURPs are established, the program office develops a reference architecture to communicate the desired architectural requirements and creates multiple alternatives that comply with the reference architecture. Each alternative includes a design implementation, as well as sufficient design increments to address the business strategy of the alternative.

Once all alternatives have been generated, the cost effectiveness ratio is found for each alternative. Operational utility is used as the effectiveness measure and is calculated by applying the specific design choices for each alternative against the LURPs and Attribute Hierarchy. To develop a measure of the utility over time, the utility is evaluated annually. The cost is found by estimating the acquisition and integration cost of each component. The Component Acquisition Cost is calculated by analogy to a known component cost. The Component Integration Cost is calculated by using an appropriate cost estimation tool to determine the changes required at each component interface. At any point, the instantaneous cost effectiveness ratio can be evaluated for each alternative by comparing the current annual utility and the cost to date. To evaluate the lifecycle cost effectiveness ratio, the average annual utility is compared to the predicted total lifecycle cost.

This analysis is performed multiple times to account for uncertainty both in the analysis (due to user preference) and in the future combat environment (due to uncertain predictions). The analyst should vary the weights provided in the Attribute Hierarchy to determine the sensitivity of the analysis to user preference. The analyst should also vary the LURPs throughout the lifecycle to understand how each alternative behaves in the face of changing requirements. Finally, the results should be analyzed to determine how the architectural decisions should change based on the cost effectiveness of the approach. The last step in the process is to update the reference architecture and business plan based on the predicted cost effectiveness shown by the analysis.
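As a minimal sketch of the additive roll-up of weighted leaf-level utilities and a simple weight-sensitivity check, the following Python fragment uses hypothetical attribute names, weights and single-year utilities; it is an illustration of the idea, not the ACEF implementation itself.

# Hypothetical leaf-level attributes (e.g., KPPs/KSAs) with hierarchy weights
# that sum to 1.0, and single-year utilities read off their utility curves.
weights   = {"eo_resolution": 0.4, "ir_resolution": 0.35, "geolocation": 0.25}
utilities = {"eo_resolution": 0.7, "ir_resolution": 0.5, "geolocation": 0.9}

def operational_utility(weights, utilities):
    """Additive aggregation of weighted leaf-level utilities."""
    return sum(weights[a] * utilities[a] for a in weights)

base = operational_utility(weights, utilities)

# Crude sensitivity check: perturb one weight, renormalize, and recompute.
def perturb(weights, attribute, delta):
    w = dict(weights)
    w[attribute] += delta
    total = sum(w.values())
    return {a: v / total for a, v in w.items()}

for attr in weights:
    shifted = operational_utility(perturb(weights, attr, 0.05), utilities)
    print(f"{attr}: base={base:.3f}, +0.05 weight -> {shifted:.3f}")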
Assumptions

The research contained in this paper contains a variety of assumptions that are necessary to scope the research. The first assumption is that the program office is able and willing to develop a program roadmap of sufficient fidelity to scope requirement changes throughout the program lifecycle. This includes developing LURPs that encompass future requirement changes. This schedule will be vital to assessing the effectiveness of architecture approaches. This assumption is challenging to meet, so additional sensitivity analysis is performed regarding future requirements changes.

This analysis also assumes that costs can be estimated by summing the component acquisition and integration costs. The component acquisition cost is based on existing systems, with a modification factor applied to a known cost for a current system. Ideally, components will be COTS, which tend to have predictable lifecycle costs. Also, standard techniques for estimating costs of incrementally improving IT components currently exist. Thus, cost estimates for evolutionary systems should be acceptably accurate, but revolutionary systems may be less accurately estimated using these techniques. Additionally, the cost estimating method does not include global programmatic or engineering costs. These costs are assumed to be equivalent between alternatives, which should be valid when alternatives are of roughly the same magnitude. Future versions of the ACEF will utilize stochastic methods to account for uncertainty in this cost estimation.

Limitations

The ACEF estimates cost by accounting for the component acquisition cost and the component integration cost. The ACEF does not account for other costs, such as maintenance of the architecture. Investigating how the maintenance cost of the architecture alternatives affects the program is an area of future research. Additionally, the component integration cost is calculated only for the software interfaces, and does not

[...]

fair and reasonable price." (DoD, 2007: 3) Phrased a different way, program managers must ensure that their solutions are cost effective (mission capability at a reasonable cost), and that the timing of the capability delivery has value. To accomplish this, the DAS prefers an incremental approach, meaning that programs are planned and executed to deliver capability in increments rather than in one step. This is intended to deliver capability faster to the warfighter and allow future increments to be tailored to lessons learned from earlier increments.

Capability Requirements

Before directing architectural requirements, the required weapon system capabilities must be understood. A capability requirement is a "capability which is required to meet an organization's roles, functions, and missions in current or future operations." (DoD, Feb 2015: GL-9) The capability requirements generation process is governed by the Joint Requirements Oversight Council (JROC) as part of the Joint Capabilities Integration and Development System (JCIDS), which validates the requirements necessary to fill a legitimate military capability need. This process is described by the Chairman of the Joint Chiefs of Staff Instruction 3170.01I (JCS, 2015) and the JCIDS Manual (DoD, Feb 2015). The capability requirements are described in three primary documents: the Initial Capabilities Document (ICD), the Capability Development Document (CDD), and the Capability Production Document (CPD). The relationship between capability requirement documents and the acquisition process is shown in Figure 2.

Figure 2: The Relationship between Capability Documents in the Acquisition Process (DoD, 2015: 5)

The ICD contains Measures of Effectiveness (MOEs) that describe mission level attributes that result in operational effectiveness.
The CDD decomposes these MOEs into task level attributes known as Measures of Performance (MOPs). These MOPs are primarily described as Key Performance Parameters (KPPs) and Key System Attributes (KSAs). KPPs are "performance attributes of a system considered critical or essential to the development of an effective military capability." (DoD, Feb 2015: D-A-1) KSAs are "Performance attributes of a system considered important to achieving a balanced solution/approach to a system, but not critical enough to be designated a KPP." (DoD, Feb 2015: D-A-1) Finally, the program can also have Additional Performance Attributes (APAs), which are not important enough to be KPPs or KSAs but are still important to the system.

MOPs include the description of the attribute as well as threshold and objective parameters for the attribute. The threshold is the lowest value that is operationally effective or suitable, while the objective value is the desired operational goal. Usually, performance above the objective value does not justify additional expense. (DoD, Feb 2015) The values between the threshold and objective represent the trade space. Another important point is that the requirements "are not expected to be static during the product life cycle. As knowledge and circumstances change, consideration of adjustments or changes may be requested by acquisition, budgeting, or requirements officials." (DoD, 2015: 5)

System Requirements Definition

While the capability requirements (and their associated documents) define the required behaviors of the weapons system, they do not explicitly define the design of the system. Instead, the capability requirements are used to inform the system requirements, which generally include performance requirements that are either identical to, or derived from, the capability requirement KPPs and KSAs. In addition, the program office adds further requirement attributes to drive "selection of a realistic and affordable solution to the warfighter's [capability requirements]." (DoD, 2010: 4) These additional attributes will include relevant system architecture or interface specification requirements identified through the systems engineering process. In the words of the OSA Contract Guidebook,

    because the System Specification defines the attributes of the overall system to be developed, this document must describe how the technical system characteristics will contribute to overall system openness (such as its modularity and how open standards will be incorporated). (OSA Data Rights Team, 2013: vi)

Additional discussion on architecture requirements will be provided later in this chapter.

Modular Open System Approach

    Acquisitions programs shall be managed through the application of a systems engineering approach that optimizes total system performance and minimizes total ownership costs. A modular, open-systems approach shall be employed, where feasible. (DoD, 2007: 9)

[...]

Table 1: MOSA Enabling Environment Supportive Practices

• Program and System Level requirements and system level functional performance specifications
• Configuration management processes that track key interfaces and standards throughout the life of the program
• Program staff with training or relevant experience in OSA
• Assignment of responsibilities for implementing MOSA
• Program management and acquisition planning efforts conducive to MOSA implementation
• Effective identification and mitigation of barriers or obstacles
• Application of standard reference models and architectural frameworks
• Continuing market research and analysis

MOSA Principle 2: Employ Modular Design

A key aspect of MOSA is the use of modular design. Modular design is recommended by the OSJTF (MOSA is explicitly Modular), the OSA Guidebook for PMs, and throughout the commercial sector. As described by the OSJTF, the modularity principle is "maximal cohesiveness of the functions and minimal coupling among elements." (OSJTF, 2004: 13) Fully coupled in this context means that elements are permanently and uniquely dependent on each other in the performance of an internal workflow. Modularity is a description of system partitioning that results in:

• Modules that are scalable, reusable by other modules within a system and/or by other systems, and consist of isolated, self-contained functional units
• Disciplined definition of interfaces between modules, which are usually open and published interfaces

Modular designs, by definition, permit substitution of units with similar components or products from alternate sources with minimum impact on existing units. Modularity is a highly desirable attribute; a highly modular system will support the ability to acquire, modify or repair smaller (and usually cheaper) components independently rather than as a larger system. On the other hand, tightly coupled processes are very efficient for their singular purpose. Balancing the efficiency of tight coupling with the flexibility of modular systems is part of the optimization inherent in the system engineering process. The number and variety of interchangeable modules, and the ease with which they can be interchanged, determines how open the system is. Additional measures of openness can include the time and cost required to change a module, or the number of independently provisioned modules.

Conceptually, the way to design a modular and open system is to first study critical work/mission threads associated with the system domain. Then, perform market analysis of relevant existing OTS components, and optimize a system by assembling the components. Finally, the program office must plan to close requirement gaps by either incrementally improving existing modules and/or developing new modules. Figure 4 shows a generic example of a modular design.

Figure 4: Decomposition of a System into its Modular Design (DAU, 2015)

The information technology sector provides additional guidance and tools for developing modular designs. The commercial sector tends to align modular information systems into what it calls "product lines." The Carnegie Mellon Software Engineering Institute (SEI) is among the pioneers of software product lines. SEI has provisioned a suite of engineering level training materials and tools to assist developers in designing modular systems. (SEI, 2017) Another commercial concept for modular design is the Service Oriented Architecture (SOA) paradigm. "Service Oriented Architecture is a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains." (Federal CIO Council, 2008: vi) There are a wide variety of tools for developing SOA, including the Practical Guide for Federal SOA, which was compiled as a government-industry collaboration. (Federal CIO Council, 2008)
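As a toy illustration of the substitution property described above (not drawn from the thesis or any DoD program), the Python sketch below defines one published interface and two interchangeable module implementations behind it; the vendor names and resolution figures are invented.

from abc import ABC, abstractmethod

class VideoProcessor(ABC):
    """A 'published interface': the only coupling point the rest of the system sees."""
    @abstractmethod
    def resolution(self) -> int: ...

class VendorAProcessor(VideoProcessor):
    def resolution(self) -> int:
        return 720   # notional capability of the incumbent module

class VendorBProcessor(VideoProcessor):
    def resolution(self) -> int:
        return 1080  # notional capability of a replacement module

def mission_system(processor: VideoProcessor) -> str:
    # The rest of the system depends only on the interface, so either
    # module can be swapped in without changing this code.
    return f"delivering {processor.resolution()}p video"

print(mission_system(VendorAProcessor()))
print(mission_system(VendorBProcessor()))

The point of the sketch is only that the cost of a module swap is bounded by the interface, which is the behavior MOSA seeks from open, published interfaces.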
The Model Driven Architecture (MDA) is another suite of tools and practices, maintained by the Object Management Group (OMG), for aligning concepts like MOSA and SOA with intended mission and business outcomes. (OMG, 2017) These tools, among others, are highly useful to programs as they work to develop modular and open systems.

MOSA Principle 3: Designate Key Interfaces

The OSJTF defines an interface as "the functional and physical characteristics required to exist at a common boundary or intersection between systems or items." (OSJTF, 2004: A2) Interfaces can be software (messaging, streaming data, etc.), hardware (Ethernet cables, computer chassis, etc.), electrical (28VDC power), and so on. Use of modular design requires careful treatment of multiple levels of interfaces. The choice of modules will in many cases drive the options for and selection of interfaces. In other cases, the choice of interface will constrain the selection of modules. It should be

[...]

interfaces for major system components resulting in limited supplier support. (OSJTF, 2004: 14) Phrased a different way, not acquiring sufficient data rights (defining the level too high) may result in vendor lock, while insisting on too many data rights (defining the level too low) may preclude access to the best commercial solutions. These twin risks make the OTB analysis of required intellectual property rights one of the most important activities for a program employing MOSA.

MOSA Principle 4: Select Open Standards

Once key interfaces are identified, the program team selects standards for those key interfaces. According to the OSJTF, key interfaces are strongly desired to be open (OSJTF, 2004: 14), but there is significant flexibility provided to the program to make decisions about the implementation of interfaces to meet program risk management objectives. The OSJTF and DAU frame the standard selection in the context of multiple types of standards that can be utilized for each interface. They define these types based on two primary criteria: market acceptance and degree of openness. Market acceptance reflects how widely the standards are used within the commercial market. Widely accepted standards offer the maximum possibility for a program to select solutions that natively conform to the standard, whereas narrowly accepted standards are more likely to require modification of the solution. The DAU describes the degree of openness by how much information is freely available about the standard. The spectrum of standard types defined by DAU is shown in Figure 6, while Figure 7 extends the previous DAU provided system diagram to include standard interfaces.

Figure 6: Types of Standards as related to Openness and Market Acceptance (DAU, 2015)

Figure 7: Decomposition of a System showing Open Interfaces (DAU, 2015)

There are many other definitions of openness. In particular, consider the World Wide Web Consortium's (W3C) requirements for an open standard (W3C, 2017):

• transparency (due process is public, and all technical discussions, meeting minutes, are archived and referencable in decision making)
• relevance (new standardization is started upon due analysis of the market needs, including requirements phase, e.g. accessibility, multi-linguism)
• openness (anybody can participate, and everybody does: industry, individual, public, government bodies, academia, on a worldwide scale)
• impartiality and consensus (guaranteed fairness by the process and the neutral hosting of the W3C organization, with equal weight for each participant)
• availability (free access to the standard text, both during development, at final stage, and for translations, and assurance that core Web and Internet technologies can be implemented Royalty-Free)
• maintenance (ongoing process for testing, errata, revision, permanent access, validation, etc.)

The W3C definition is particularly useful because it is broad in scope, and also provides reasonably objective, observable criteria across the scope of the definition. The OSJTF further states that key interfaces should be "examined very carefully to insure that the use of an open standard is both feasible and appropriate based on performance and business objectives." (OSJTF, 2004: 16) It is interesting that this guidance by the OSJTF is not further developed within the MOSA literature.

The concept of open versus closed is meant to be an indicator of the program's ability to meet the objectives of MOSA. For example, under the W3C definition, it is plain to see why more open interfaces tend to lead to modules that are replaced/modified faster and cheaper. Conversely, interfaces that are not commonly used or have restricted data rights (i.e. closed) tend to take longer and are more costly to replace or modify. The formal definition of "proprietary" conveys that the owner has exclusive rights to control use and access. In that sense, many proprietary standards, e.g. the ones owned by standards bodies, are also open. However, in government-facing MOSA literature, "proprietary" implies that access and use is relatively restricted. These messy semantics are exacerbated by the fact that open and closed do not have universal, precise engineering definitions. Aids such as Figure 6 and Figure 7 are meant by DAU to assist programs in selecting interface standards, but only provide additional heuristic guidance rather than prediction of MOSA outcomes. The selection of modules and interfaces is dependent on the functional design that arises from business case analysis together with market analysis

[...]

• Minimization of modifications to open standards that limit flexibility
• Scope of unique development
• Limitation of impact of proprietary solutions on openness
• Requirements compliance with JCIDS
• Requirements compliance with Interoperability and Supportability references
• Spiral development
• Exportation of reusables, flexibility/openness
• Prime System Integrator competitive assignment

The technical areas include 30 questions, spread among the following topics:

• Interoperability (6 questions): How readily can the program's separate systems exchange information and appropriately utilize each other's functional capabilities?
• Maintainability (2): What architectural characteristics address obsolescence and provide for timely technology refresh, fixes, and upgrades?
• Extensibility (3): Does the program follow a well-defined System Engineering process for implementing capability extension?
• Composability (2): Are the program's systems capable of being highly modular and having minimal dependencies (loosely coupled) so they can be readily combined with other modules to provide new types of functionality?
• Reusability (4): Are the assemblies that are candidates for reuse readily available, certified for reliability and performance, and easily obtained for reuse?
• General Design Tenet (13): General design questions related to MOSA applications

After the questionnaire is completed, the results are tabulated, mathematically combined, and graphically displayed on a chart based on the Open Architecture Assessment Model (OAAM) (Figure 8). This chart allows programs to be compared visually based on their expected level of openness.

Figure 8: Open Architecture Assessment Matrix Resulting from OAAM Analysis (Naval OA Enterprise Team, July 2009)

This tool has been utilized by weapon system programs, but also by several US Navy research efforts. While the OAAT is effective for evaluating programs across an enterprise, it provides little utility for a single program evaluating multiple architecture alternatives. The business areas provide little discrimination between approaches because they would likely be applied across all alternatives a single program would consider. The technical areas have potential to be effective, but the subjective choices have limited granularity. The following are examples of questions (Naval OA Enterprise Team, 2009):

• "To what extent are open standards selected for key interfaces?" The available answers are: "N/A, None, Little Extent, Moderate Extent, Large Extent."
• "To what extent can system components be substituted with similar components from a competitive source?" The available answers are: "N/A, None, Little Extent, Moderate Extent, Large Extent."
• "To what extent do the system level functional and performance specifications permit an open system design?" The available answers are: "N/A, None, Little Extent, Moderate Extent, Large Extent."

The OAAT provides no quantification of the answers and no consideration for how each selection impacts the end result of lower cost or faster delivery. For example, the program may use 50% open interfaces. Is this low, moderate or high? If the 50% covers all potential modifications, it may actually support the business objectives of the program even though the analyst may not consider the approach to have a large extent of open interfaces. It is also likely that multiple alternatives would score similarly well. If only one interface changes, does it change the answer from large to moderate? This lack of granularity makes the OAAT ineffective at estimating the cost effectiveness of different architectural approaches.

Key Open Sub System Tool

Another US Navy developed open architecture tool is the Key Open Sub System (KOSS) tool. KOSS was developed in 2008 by the Naval Open Architecture Enterprise Team to assist programs complying with Principle 3 of MOSA, enabling them to identify "volatile Subsystems/Components that would yield the greatest benefit to lifecycle affordability by applying MOSA principles." (Naval OA Enterprise Team, 2009: 4) KOSS enables programs to evaluate the effect of different design choices. The program team decomposes the system into components in four categories: hardware, software, middleware and operating system. Once this is done, the team creates a capability map, which determines when components are expected to change, and a rate of obsolescence. Additionally, the team determines a relative cost of change and relative weapon system capability improvement. Using the rate of obsolescence and the capability map, KOSS determines a relative rate of change for each component. Using the relative rate of change and relative cost of change, the KOSS tool calculates an open architecture

[...]
Using the relative rate of change and relative cost of change, the KOSS tool calculates an open architecture 31 and Dillard, 2008), which uses control theory to model the dynamic behavior of the Defense Acquisition System for a given program. Originally designed to model evolutionary acquisition processes, Ford and Dillard expanded their model in 2008 to include the effects of open architectures on the system acquisition process (Ford and Dillard, 2008). In 2009 they performed a case study using the US Navy’s Acoustic Rapid Capability Insertion (ARCI) program (Ford and Dillard, 2009). Their research mathematically demonstrated that evolutionary approaches and open architectures can be combined to deliver capability to the warfighter faster and cheaper. While useful research, the model proposed by Ford and Dillard focused on modeling the behavior of the evolutionary programmatic approach and not modeling the technical design of the system. When applied to development projects, system dynamics focuses on how performance evolves in response to interactions among development strategy (e.g., Evolutionary Acquisition versus traditional acquisition), managerial decision-making (e.g., the allocation of resources), and development processes (e.g., concurrence).” (Ford and Dillard, 2009: 213) These performance impacts are evaluated within the context of a program lifecycle. Figure 12 shows a notional evolutionary lifecycle with 3 phases. 32 Figure 12: Evolutionary Acquisition Model with 3 Phases (Ford and Dillard, 2008: 14) Instead of using detailed technical parameters, the authors modeled the effect of open architectures in relation to their previously developed evolutionary acquisition model. Table 2 shows these some of these effects. Table 2: Open Architecture Effects on Evolutionary Acquisition (Ford and Dillard, 2008: 17) This approach is useful for investigating the use of evolutionary approaches for a system, but has limited ability to model the technical differences between architecture alternatives. The effects shown in Table 2 are high level programmatic changes rather than detailed technical effects. It would be challenging to relate the key interface definition choices, or the selection of specific standards, to the effects chosen by Ford and Dillard’s model. Additionally, their model does not take into account the effect to 33 delivered warfighter capability or the cost of implementing open. Consequently, other methods would be required to evaluate the cost effectiveness of architecture alternatives. Architecture Tradeoff Analysis Method Another analysis model is the Software Engineering Institute’s (SEI) trademarked “Architecture Tradeoff Analysis Method,” or ATAM, which is designed to facilitate the analysis of software architecture approaches. In SEI’s words: “Having a structured method helps ensure that the right questions regarding an architecture will be asked early, during the requirements and design stages when discovered problems can be solved cheaply.” (Kazman et al, 1999: 1) SEI developed an approach that utilizes quality attribute goals to establish levels of utility for architecture approaches. The ATAM uses the following process (SEI, 2016): 1. Present the ATAM. The evaluation leader describes the evaluation method to the assembled participants, tries to set their expectations, and answers questions they may have. 2. Present business drivers. 
A project spokesperson (ideally the project manager or system customer) describes what business goals are motivating the development effort and hence what will be the primary architectural drivers (e.g., high availability or time to market or high security). 3. Present architecture. The architect will describe the architecture, focusing on how it addresses the business drivers. 4. Identify architectural approaches. Architectural approaches are identified by the architect, but are not analyzed. 5. Generate quality attribute utility tree. The quality factors that comprise system "utility" (performance, availability, security, modifiability, usability, etc.) are elicited, specified down to the level of scenarios, annotated with stimuli and responses, and prioritized. 6. Analyze architectural approaches. Based on the high-priority factors identified in Step 5, the architectural approaches that address those factors 36 Figure 14: Architecture Management Relationships (Cloutier, et al 2008) While the concept of Reference Architectures has been matured primarily through the software engineering field, its use within the systems engineering field has expanded greatly. Cloutier, et al, (2008) provide an excellent overview of the purposes and challenges of using references architectures, and Figure 15 shows their description of the objectives of reference architectures. Figure 15: Reference Architecture Objectives (Cloutier, et al 2008) Within the DoD, reference architectures have been used in a variety of ways. 37 Proprietary versions are commonly used internally by DoD contractors to reduce cost and schedule and permit them to rapidly respond to new solicitations. An exemplar for DoD government use is the Joint STARS Recapitalization program, which developed a reference architecture in collaboration with industry prior to starting the program competition. Creation of the reference architecture allowed the government and industry partners to define normative architecture behavior, and establish the appropriate level of architectural requirements. There are a variety of tools utilized to develop reference architectures. These include tool sets such as the MDA described above, as well as descriptive tools such as OMG’s Systems Modeling Language (SysML) and the DoD Architecture Framework (DoDAF). SysML is: A general-purpose modeling language that supports the specification, design, analysis and verification of systems that may include hardware, software, data, personnel, procedures, and facilities. SysML is a graphical modeling language with a semantic foundation for representing requirements, behavior, structure, and properties of the system and its components. (Friedenthal et al, 2012: xvii) DoDAF builds on SysML to create an “overarching, comprehensive framework and conceptual model enabling the development of architectures.” (DoD CIO, 2017) DoDAF facilitates visualization of architectures through a variety of interconnected models that contain important architectural information. Results of MOSA Application Studies on the effects of MOSA vary in both focus and quality. One study was conducted by Dr. Rene Rendon of the Naval Postgraduate School in 2008 (Rendon, 2008). This study utilizes the OAAT to evaluate a variety of US Navy acquisition programs to determine their openness. Rendon noted the structure of the program in its 38 early phases led to a greater degree of openness. This includes contracting types as well as industry involvement in the development of requirements. 
These results, while interesting, are not directly relevant to the technical design choices of a modular and open system. When using the OAAT, Rendon found that half (16 out of 32) of the programs solicited were found to be highly open, 14 were rated as medium, and only 2 were rated as low. However, this study did not correlate the OAAT rating with measurable effects of the MOSA approach. Said differently, it is unclear if programs rated highly by the OAAT actually achieved cost reductions or schedule compression. Consequently, it is still unclear whether the OAAT actually predicts improved behaviors.

One program that has produced significant MOSA literature is the US Navy's Acoustic Rapid COTS Insertion (ARCI) program. An in depth study of this program was performed (Boudreau, 2006). The ARCI program was developed to permit rapid upgrades of US Navy acoustic detection capability on attack submarines. The program determined that a MOSA approach would facilitate the use of COTS hardware technology and evolutionary software acquisition to reduce the cost and time to fielding. Boudreau found that "the lifecycle cost of A-RCI/APB has improved by nearly 5:1 over its predecessor system", along with significant schedule compression. (Boudreau, 2009: xvi)

A focus of the ARCI program was the development of a well-defined business strategy. The ARCI program planned for annual software updates (called Advanced Processing Builds) coupled with bi-annual hardware updates and a detailed development and integration schedule. This business plan was communicated to stakeholders and served as the guiding principle for the ARCI MOSA implementation. The ARCI program also identified the APB interfaces, and practiced rigid control over the data rights to

[...]

and that programs plan for an open systems approach prior to the start of development" (GAO, 2013: 19)

• "Contracts and requests for proposal should include appropriate language that describes the open systems architecture, defines open standards, and establishes requirements for control documents to ensure the government retains rights for the identified key interfaces." (GAO, 2013: 19)

The GAO findings show that the implementation of MOSA requires more than just having the contractor develop a MOSA compliant implementation. Rather, the program must develop the approach prior to any contract, and this approach must include specific architecture requirements based on the program's business case.

A more recent study was conducted in 2016 by the MOSA Technical Standards Working Group (TSWG) to explore the use of MOSA in Defense Acquisition Programs (MOSA TSWG, 2016). The TSWG interviewed a large group of government program offices, industry and academia, for a total of over 82 responses. The report covered a range of topics; a few of the findings are summarized here.

• "All but four of the MDAPs and MAIS reported themselves employing a MOSA approach." (MOSA TSWG, 2016: 34)

This finding shows that programs are complying with the MOSA approach, or are at least claiming to. However, merely claiming MOSA compliance is not enough. So the TSWG investigated whether programs were measuring MOSA effects. They found:

• "MOSA measurement tools were cited as unknown or inadequate. Inability to effectively measure results will hamper any efforts to move toward MOSA benefits." (MOSA TSWG, 2016: 8)
• "According to the senior leaders, with one or two exceptions, quantitative measurement did not exist in programs implementing MOSA. Measurement of modularity, openness, and overall success with MOSA is based largely on the subjective judgment of subject matter experts." (MOSA TSWG, 2016: 9)

This study was continued by the MOSA Study Team, who prepared a report for the Under Secretary of Defense for Acquisition, Technology and Logistics to support BBP 3.0 (MOSA Study Team, 2016). Without measurement tools or any way to quantitatively measure progress, the Study Team was unable to determine how effective programs were in implementing MOSA. It was unclear to the investigators what level of cost reduction or schedule compression was achieved by MOSA compliant programs. As they began investigating other reasons why programs are not experiencing the effects of MOSA, they found:

• "Another common theme is the government's lack of consideration of lifecycle costs when making design decisions." (MOSA Study Team, 2016: 9)
• "The Study Team recommends programs review the business case and system design prior to every major contracting action to determine opportunities for modular design." (MOSA Study Team, 2016: 16)

Both studies were also interesting for what they did not find. Lack of standards was found to be a barrier in less than 2% of responses. In addition, over 40% of respondents found that there were no gaps in standards for their programs. In contrast, the largest barriers to using MOSA were lack of resources and conflicting or unclear MOSA requirements. Both of these concerns can be addressed with a better understanding of how MOSA requirements relate to both cost and capability.

Cost Effectiveness

Cost Effectiveness Analysis (CEA) is a key component of an Analysis of Alternatives, as described in the Defense Acquisition Guidebook. (DAU, 2013) It is important to understand the distinction between cost effectiveness and cost benefit. "Cost-effectiveness analysis is a technique that relates the costs of a program to its key outcomes or benefits. Cost-benefit analysis takes that process one step further, attempting to compare costs with the dollar value of all (or most) of a program's many benefits." (Cellini and Kee, 2010: 493) In many fields, it is difficult or undesirable to put a monetary value on the benefits of a system. This is especially true in military systems, where the various measures of effectiveness are challenging to monetize. Consequently, cost effectiveness is traditionally used more than cost benefit within the DoD.

In a CEA, both cost and effectiveness are calculated independently. Traditional CEA performs multiple sub-analyses to account for each Measure of Effectiveness (MOE). Once the cost and effectiveness are calculated for all alternatives for a given MOE, there are multiple ways to compare the alternatives' performance. One method is the scatter plot (Figure 17). In this case, there is a single plot for each MOE, and the alternative is plotted against both cost and effectiveness.

[...]

generation of an attribute hierarchy. Traditionally, the analyst begins at the highest, most abstract attribute or decision. Parnell recommends beginning with a statement of the primary decision objective. He uses the example "Purchase the best car" as his purpose. Keeney and Raiffa concur with this approach, while the NSA handbook recommends starting at the definition of the system being procured (continuing the example: "Sports Car"). After the top attribute has been defined, the analyst decomposes that attribute into smaller attributes. Many techniques exist for the decomposition process.
Parnell's objective hierarchy (Figure 18) decomposes the attributes into desired objectives, such as "minimize the environmental impact," and then into value measures. His functional value hierarchy (Figure 19) decomposes the attributes into system functions, then into objectives, and then into value measures.

Figure 18: Objective Hierarchy Applied to a Car Purchase

Figure 19: Functional Value Hierarchy Applied to a Car Purchase

The NSA handbook recommends decomposing the hierarchy according to system characteristics.

Figure 20: System Characteristic Attribute Hierarchy Applied to a Car Purchase

The lowest level attributes have many names, including bottom-level, leaf-level, and terminal branches. The widely accepted methodology is to continue to decompose until the analyst reaches a leaf-level attribute that is easily measurable and understood. Most methods do not explicitly describe where to stop; rather, the analyst decides. In the words of Keeney and Raiffa, "judgment must be used to decide where to stop the formalization by considering the advantages and disadvantages of further specification." (Keeney and Raiffa, 1993: 43) A bottom-up approach can be utilized either in addition to or in place of the top-down approach. The bottom-up approach begins with known value measures and then builds up logical groupings at higher levels until a single attribute is found.

The Additive Value Function

Along with the attribute hierarchy, the analyst must develop a method for assessing the value of incremental changes in each attribute. Usually, this method is in the form of a function that measures the change in utility as a function of incremental utility and weights. One of the most commonly used is the additive value function, which is used by Keeney and Raiffa, Parnell, Melese, and the NSA Handbook (among others). Parnell describes the additive value function with the following equation (Parnell, et al., 2013: 195):

$v(x) = \sum_{i=1}^{n} w_i v_i(x_i)$

Where:
v(x) = the alternative's value of x
i = 1 to n is the index of the value measure
xi = the alternative's score on the ith value measure
vi(xi) = the single-dimensional y-axis value of an x-axis score of xi
wi = the weight of the ith value measure

The additive value function assumes mutual preferential independence, which means that the result of the value function on one value measure does not depend on the

… risk levels. The risk-averse curve in Figure 24 provides more utility for a given attribute value, while the risk-preferring curve provides less utility for the same value, driving additional risk to the program by preferring harder-to-achieve attribute values.

Figure 23: Utility Risk Preference (NSA, 2000: 25)

The extant literature includes many methods to determine decision maker preference and develop utility curves. For example, Melese et al. note a method for eliciting decision maker feedback using marginal information to develop cumulative value functions.

Cost Estimation

In addition to measuring effectiveness through MAUA, a CEA must also evaluate cost. There are many methods for estimating costs, usually in specific domain areas. An interesting example is the generic life cycle cost estimation shown by Blanchard and Fabrycky (Blanchard and Fabrycky, 2011: 567-630). Almost all costs break down into two primary components: parts and labor. Estimating the cost of parts is a traditional exercise, often completed by analogy to an existing related part.
This is described in depth in various guides, including the NASA Cost Estimating Handbook (NASA, 2008). There are a wide variety of cost estimation methods for software, including COCOMO II, Function Point methods, SEER-SEM, and Putnam's Model, among others. COCOMO II is a software cost model originally developed by Barry Boehm as COCOMO 81 in the 1980s and developed into COCOMO II in the late 1990s. This model is based on a large statistical population of software development programs, resulting in a fairly simple general form equation (Center for Software Engineering, 2000: 1):

$PM = A \times (\text{Size})^{E} \times \prod EM$

Where:
PM: Person-months
A: Calibration factor, which is 2.94 for COCOMO II
Size: Measure of functional size of a software module that has an additive effect on software development effort
E: Scale factor exponent that has an exponential or nonlinear effect on software development effort; it has a value of $0.91 + 0.01 \times \sum SF$
SF: Scale factors
EM: Effort multipliers that influence software development effort

Utilizing the COCOMO II equations requires the use of a wide variety of factors and multipliers. These are found in the COCOMO II Model Definition Manual (Center for Software Engineering, 2000). A larger discussion of COCOMO II is provided in Appendix B.

Summary

As this chapter showed, the application of MOSA requires a broad systems engineering approach drawing on knowledge in multiple subject areas. This chapter addressed the investigation questions, beginning with a discussion of the Defense Acquisition System, including JCIDS and the CDD and SRD. MOSA was summarized, along with a discussion of several MOSA-related tools and an overview of several MOSA implementation studies. The MOSA research showed that both tools and metrics for MOSA application are lacking, resulting in many programs not knowing whether they have a successful MOSA implementation. CEA theory was developed, including MAUA, attribute hierarchies, the additive value function and utility curves. Finally, the chapter concluded with a brief discussion of software cost estimation, focusing on COCOMO II.

… attribute over the program's lifecycle.

Develop Attribute Hierarchy and Cumulative Weights

While the ACEF does not require a specific approach to develop the Attribute Hierarchy, the approach selected will result in a hierarchy of leaf-level attributes that will primarily be KPPs and KSAs from the approved capability documents. Some attributes, such as the Net Ready KPP, will be split to create multiple value measures. Other attributes may be omitted because they are not relevant to the comparison of architecture approaches. For example, the system architecture choices (such as interface definition) will not likely affect the system's ability to withstand direct enemy weapons fire, and thus will not affect the Force Protection KPP. In addition, APAs may be added to the Attribute Hierarchy at the discretion of the program team. For example, some programs choose to add sensor accuracy requirements, or computing latency requirements, to the SRD to ensure appropriate capability delivery. The analyst should take care using non-JCIDS-validated attributes for two reasons: (1) the analyst must ensure that they directly add operational utility, and (2) the analyst must ensure that the capability is not being "double counted" in a KPP or KSA. For example, if the sensor accuracy APA is derived from a target identification KPP, the target identification KPP should take precedence.
However, the analyst can make the sensor accuracy APA a leaf-level attribute under a target identification KPP branch.

There are multiple ways an ACEF analyst may design an Attribute Hierarchy. The hierarchy itself may be composed of functions, objectives, characteristics, or other means. For ACEF, it is expected that the analyst will use one of the following methods:

• Functional. The Attribute Hierarchy documents operational activities that result in KPP or KSA leaf-level attributes. If this approach is followed, it is highly recommended to align with the DoD Architecture Framework Operational View 5 (OV-5) Operational Activity Decomposition Tree. If the OV-5 is used, branches that do not result in relevant leaf-level attributes will have to be trimmed.
• Characteristic. The Attribute Hierarchy is broken down into system characteristics that have associated KPPs, KSAs, or APAs.
• Logical. The Attribute Hierarchy is broken down into whatever organization is required to appropriately capture the preference between KPPs, KSAs and APAs.

Both the functional and characteristic hierarchies will likely be accomplished by a top-down approach. In the case of the functional hierarchy, the analyst would begin with the top-level activity, decompose it into branches, and then into leaves. If using an OV-5, this activity is largely accomplished. After the leaf-level activities are identified, the analyst will associate KPPs, KSAs or APAs as required. Leaf-level attributes may not have associated value measures, so the analyst should then follow with a bottom-up approach to trim the tree of branches that do not have associated value measures. A similar approach is taken with the characteristic method. For the logical method, the analyst utilizes a bottom-up approach beginning with the KPPs, KSAs and APAs, building a hierarchy that captures the preference between attributes. For example, there may be a branch for all KPPs, a branch for all KSAs, and a branch for all APAs. This would support a tiered preference where KPPs are always preferred over KSAs, regardless of what activity or characteristic they would logically belong to. A simple logical hierarchy could also be completely flat.

After the Attribute Hierarchy is developed, weights should be assigned using methods such as the ones described by Blanchard and Fabrycky. Each branch will have weights that sum to 1. This effort begins at the top level, and then each lower branch is weighted, moving down until all leaf-level attributes have been assigned weights. At this point, the cumulative weight of each leaf-level attribute is found by multiplying the leaf-level attribute weight by every branch-level weight above it. These weights will be used for each alternative throughout the ACEF analysis.

Define Lifecycle Utility Reference Profiles

Once all of the leaf-level attributes have been defined and weighted, Lifecycle Utility Reference Profiles (LURPs) must be generated for them. A LURP is a reference model for an attribute that describes how both requirements and their associated utility change throughout the lifecycle of the program. The LURP does this by defining a utility curve that varies throughout the life of the program, usually by increasing or decreasing threshold and objective values over time.
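As a concrete illustration of a LURP, consider the following minimal sketch. It is hypothetical: the attribute, the breakpoint year, and the resolution values are invented, and a simple piecewise-linear utility curve between threshold and objective is assumed.

```python
# Illustrative sketch of a Lifecycle Utility Reference Profile (LURP).
# The attribute (video resolution in lines), years, and values below are
# hypothetical examples, not requirements from the thesis.

def lurp_bounds(year):
    """Return (threshold, objective) for the attribute at a given program year."""
    if year < 2025:          # initial increment: SD threshold, HD objective
        return 480, 1080
    else:                    # later years: requirement grows to HD threshold, UHD objective
        return 1080, 2160

def utility(value, year):
    """Piecewise-linear single-attribute utility: 0 at/below threshold, 1 at/above objective."""
    threshold, objective = lurp_bounds(year)
    if value <= threshold:
        return 0.0
    if value >= objective:
        return 1.0
    return (value - threshold) / (objective - threshold)

# A 1080-line sensor meets the objective early in the lifecycle, but after the
# requirement shifts it only meets the threshold, so its utility contribution drops.
print(utility(1080, 2023))   # 1.0
print(utility(1080, 2027))   # 0.0
```

In practice the curve shape and the timing of requirement changes would come from the validated capability documents and the decision-maker preference elicitation described in Chapter II.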
The LURPs are similar in concept to design mission profiles used in aeronautical engineering to calculate fuel consumption or structural life, in that both are used as a common reference for estimating performance rather than for accurately predicting the behavior of the system. In ACEF, the LURPs are used to estimate utility across alternatives that deploy different design implementations. To accommodate these design variations, the LURPs represent requirements change over time rather than define specific modifications or upgrades. The set of LURPs is applied to all alternatives, with each alternative's design implementations evaluated against the changing requirements described in the LURPs. For example, a LURP for video resolution would define a requirement change from Standard Definition video to High Definition video, rather than define a requirement for a new Full Motion Video (FMV) system. This would allow for flexibility in developing solutions that can take advantage of architecture alternatives. Using the video example,

… reasons.

The ACEF requires that the program either develop or tailor a reference architecture for every acquisition. The reference architecture is an efficient way of communicating the program's desired approach, including key interfaces and the use of standards. In ACEF, the reference architecture also serves as a baseline between each analyzed alternative. At a minimum, the reference architecture must:

• Define all external interfaces
• Define all architectural patterns that will apply across the alternatives
• Define minimal acceptable key interfaces within the system
• Propose minimal acceptable standards for key interfaces and components
• Propose any restrictions that the reference architecture will enforce; for example, restricting a component from having any proprietary interfaces

The program should develop the initial reference architecture by conducting market surveys of existing product lines and COTS/GOTS solutions. Any system boundaries, interfaces and standards defined in the reference architecture are required to be used for each of the ACEF alternatives. If the key interfaces are defined at a low level of decomposition, they may restrict the product lines available for use in the system. The ACEF analyst should begin with a less constrained reference architecture to avoid overly constraining the alternatives, and then update the reference architecture with additional constraints based on the results of the analysis. There are many tools available to support the generation of the architecture, including the MDA and SysML tools described in Chapter II.

Step 3: Create Alternatives

The primary purpose of the ACEF is to allow for analysis of multiple architecture alternatives. These alternatives are made up of at least one design increment, but will likely include multiple design increments to address changing requirements over the lifecycle. Each design increment will be designed in accordance with MOSA principles, and will include:

• The system architecture implementation, updated as required for the design increment. The system architecture is required to conform to the reference architecture.
• The components required to meet the system requirements. These may be known (for off-the-shelf components) or notional (for new-design or modified components). It should be noted that the term component refers to a coherent logical grouping of system hardware and software that will be replaced or modified as a group. The component is the lowest level of decomposition in ACEF.
• Identification and definition of all interfaces between all identified components. These interfaces must at a minimum include the key interfaces defined in the reference architecture. The alternative may propose any type of interface, including key or proprietary. The alternative should also clearly identify any standards that are fundamental to the design.
• Estimated system-level capability. The alternative must provide an estimated system-level capability for each attribute in the business strategy Attribute Hierarchy. This will be updated for each design increment.

A notional weapon system design increment is shown in Figure 26. Note that each interface has three elements: the interface definition (which describes the interface form, fit, and function) and a 'side' for each of the connected components. The component side of the interface is the aspect of the component that is responsible for executing the interface. This figure will be used in the following sections to describe the development of alternatives.

Figure 26: Notional Weapon System Design Increment Diagram

Each alternative develops its design increments independently. Alternatives are not required to share design increment schedules unless specified in the common business strategy.

Design Prime Alternative

A primary tenet of ACEF is to start with an alternative that conforms to the reference architecture and prefers off-the-shelf (OTS) or non-developmental items (NDI) in all cases. This alternative is called the Prime Alternative in ACEF. This is not only consistent with the fundamental MOSA "plug-and-play" value proposition, but also specifically in accordance with both DoD policy and federal acquisition legislation, which require the use of OTS components whenever possible. In addition to this guidance, the Prime Alternative should also make use of existing DoD product lines where available. To create the Prime Alternative, the program begins to functionally decompose the system in accordance with the reference architecture. At each step of the functional decomposition, the program performs a market survey to determine if an OTS component exists that addresses some or all of the system requirements. At the completion of the

• Developmental components. Based on the expected level of development of a component, it may make business sense to delay the delivery of the initial capability. This is particularly beneficial when the OTS components will not meet the threshold requirements.
• Further decomposition of components. Existing OTS components may not provide the optimal level of openness for the long-term business objectives of the program. The program should investigate breaking down components into smaller logical groups.
• Alternative interface definition. Depending on the expected design increment delivery, there may be other options for defining interfaces. For example, rather than a commercial standard, a government-led or even proprietary standard may be more beneficial.
• Design increment delivery. The program should investigate options for scheduled design increments, including the timing as well as the packaging of the capability for each increment.

A program has wide flexibility to create alternatives, and is not limited to these areas of investigation. The program should apply both engineering and business judgment, balancing the flexibility of ACEF against programmatic realism, to choose a set of alternatives.
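One way to see what a design increment must capture is to write it down as data. The sketch below is hypothetical; ACEF does not prescribe a format, and the component names, interface names, and fields are invented for illustration.

```python
# Illustrative sketch of the data an ACEF alternative might record per design
# increment: components, interfaces, and the source of each interface definition.
# Names and fields are hypothetical; the thesis does not prescribe a data format.
from dataclasses import dataclass, field

@dataclass
class Interface:
    name: str
    defined_by: str            # "standard", or the name of the defining component
    ends: tuple                # the two components the interface connects

@dataclass
class DesignIncrement:
    year: int
    components: list
    interfaces: list = field(default_factory=list)
    estimated_capability: dict = field(default_factory=dict)   # attribute -> estimated value

increment_1 = DesignIncrement(
    year=2024,
    components=["EO Sensor", "Mission Computer", "Datalink"],
    interfaces=[
        Interface("video bus", defined_by="standard", ends=("EO Sensor", "Mission Computer")),
        Interface("downlink", defined_by="Datalink", ends=("Mission Computer", "Datalink")),
    ],
    estimated_capability={"EO Video Resolution": 1080},
)
```

Recording which side defines each interface is what later allows Step 4 to determine which component ends must be adapted when a component is replaced.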
Step 4: Compare Alternatives

The ACEF is designed to facilitate analysis of the cost effectiveness of different architecture alternatives. As part of this process, the analyst will compare utility versus cost, not just at a single point, but across the life cycle of the system.

Operational Utility

Operational utility is found by using MAUA and the additive value function. The equation discussed in Chapter II is used here, in a modified form to align with ACEF terminology:

$U_T(A) = \sum_{i=1}^{n} w_i V_{T,i}(A_{T,i})$

Where:
UT(A) = the instantaneous utility at time T for the capability provided by alternative A
i = 1 to n is the index of the leaf-level attribute
AT,i = capability associated with the ith leaf-level attribute for alternative A at time T
VT,i(AT,i) = incremental utility associated with AT,i, given by the LURP at time T
wi = the weight of the ith leaf-level attribute, taken from the Attribute Hierarchy

The operational utility is calculated annually, to align with the government Planning, Programming, Budgeting, and Execution (PPBE) process. When no more than one modification occurs during a given year, the utility is calculated once to create the annual utility. If needed, the year can be broken down into monthly segments to account for multiple modifications that occur during the year, and thus multiple monthly utility values. These monthly utility values are then added together and divided by 12 to create the effective annual utility. Figure 28 shows a graph of instantaneous utility versus time, with a single instantaneous time highlighted.

Figure 28: Instantaneous Utility

The instantaneous utility is useful for evaluating the operational utility of the system at any given moment. However, using a single instantaneous utility calculation to compare alternatives may mask overall utility trends. As Figure 28 shows, the instantaneous utility of the system at any single point does not indicate trends in the operational utility of the system throughout its life.

Cost

In addition to the operational utility described above, cost models must be developed to establish the 'cost' component of the 'cost effectiveness' analysis. The ACEF does not attempt to accurately model all aspects of costs, but focuses on the areas that will permit differentiation between alternatives. Consequently, this cost estimation should not be used for other purposes without careful consideration. The ACEF evaluates two types of costs: the cost to acquire a component, and the cost to integrate the component. To generate the cost of the system, ACEF evaluates these costs for each component, and then sums over all components. The ACEF governing cost equation is:

$C = \sum_{i=1}^{\text{Components}} (CAC_i + CIC_i)$

Where:
C = the total cost
CACi = the Component Acquisition Cost for the ith component
CICi = the Component Integration Cost for the ith component

This is shown in Figure 29, where the component is broken down into the component core and the component's interfaces.

Component Integration Cost

As shown in Figure 29, each interface has an end associated with each component being connected. The Component Integration Cost (CIC) is found for every component by evaluating the ends of all interfaces associated with the component, as described by the following equation:

$CIC = \sum_{j=1}^{\text{Interfaces}} IC_j$

Where:
CIC = the Component Integration Cost
ICj = the Interface Cost at the jth interface of the component

The first evaluation is to determine whether or not the interface requires modification.
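That evaluation, and the CIC roll-up above, can be sketched in a few lines of code. This is an illustrative reading of the equations, not a tool defined by ACEF; the flat per-end adaptation cost is a hypothetical placeholder for whatever estimation method the analyst selects, and the data mirror the notional components A, B, and D introduced below.

```python
# Illustrative sketch of the CIC roll-up: an interface end needs adaptation when
# the component does not natively support the interface definition in use.
# The flat adaptation cost is a hypothetical placeholder.

def needs_adaptation(component, interface):
    """True if this component's end of the interface must be adapted."""
    return interface["definition"] not in component["native_definitions"]

def component_integration_cost(component, interfaces, adaptation_cost=100_000):
    """CIC = sum of interface costs over the component's interfaces (zero if no change)."""
    cic = 0
    for interface in interfaces:
        if component["name"] in interface["ends"] and needs_adaptation(component, interface):
            cic += adaptation_cost
    return cic

# Notional component A from the example below: it defines interfaces 1 and 5 and
# natively supports the standard used on interface 4, so only 2 and 3 drive cost.
component_a = {"name": "A", "native_definitions": {"A-defined", "standard-4"}}
interfaces = [
    {"name": 1, "definition": "A-defined", "ends": {"A", "D"}},
    {"name": 2, "definition": "standard-2", "ends": {"A", "D"}},
    {"name": 3, "definition": "D-defined", "ends": {"A", "D"}},
    {"name": 4, "definition": "standard-4", "ends": {"A", "B"}},
    {"name": 5, "definition": "A-defined", "ends": {"A", "B"}},
]
print(component_integration_cost(component_a, interfaces))  # 200000: interfaces 2 and 3
```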
Figure 30 shows the initial integration of a notional system, consisting of components A, B, and D.

Figure 30: Interface Changes

Table 3 shows a matrix of how the interface definition affects each component. For example, for component A, interfaces 2 and 3 will require adaptation, and thus additional cost, because component A does not natively support their interface definitions, while interfaces 1 and 5 will not because the system uses component A's interface definition. Interface 4 does not require adaptation because component A natively supports the interface standard.

Table 3: Interface Matrix
Interface | Defined By | Component A | Component B | Component D
1 | Component A | No Change | N/A | Adapted
2 | Standard | Adapted | N/A | Adapted
3 | Component D | Adapted | N/A | No Change
4 | Standard | No Change | Adapted | N/A
5 | Component A | No Change | Adapted | N/A

Next, Figure 31 shows the same scenario, but with component X replacing component A.

Figure 31: Notional Design Increment

Note that component X natively supports standards for both interface 2 and interface 4. This design increment chooses to continue to use component A's interface definition for interface 1, accepting the additional adaptation to limit the changes required to component D. In contrast, interface 5 is redefined to match component X's interface definition. Table 4 shows a matrix of how the interface definition changes affect each component.

Table 4: Notional Design Increment Interface Matrix
Interface | Defined By | Component X | Component B | Component D
1 | Component A | Adapted | N/A | No Change
2 | Standard | No Change | N/A | No Change
3 | Component D | Adapted | N/A | No Change
4 | Standard | No Change | No Change | N/A
5 | Component X | No Change | Adapted | N/A

This table shows a couple of important results. First, component D is completely unchanged. The use of modular design and defined interfaces successfully isolated component D from the replacement of component A. Component B had a single interface change, because the original interface 5 was defined by component A. Note that component D had interfaces that did not change under both standard and non-standard interface definitions. The use of standards (and definition of key interfaces) does not guarantee an unchanging interface. The more widely accepted an interface is, the more likely it is that a component will natively utilize it and not require adaptation during integration. Also, the more information that is available to the program team about the interface, the easier it is for the team to modify it. Using more open standards increases the amount of information that is freely available. For each interface that changes, the analyst selects a cost estimation method to account for the level of effort to adapt the component to the interface definition.
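For software-dominated interfaces, one option is the COCOMO II general form introduced in Chapter II. The sketch below is only illustrative: the adapter size, scale-factor sum, effort multipliers, and labor rate are hypothetical, and a program could substitute any estimation method it prefers, provided it is applied consistently across alternatives.

```python
# Illustrative sketch: estimating the effort to adapt one component's interface
# side using the COCOMO II general form, PM = A * Size^E * prod(EM).
# The adapter size, scale-factor sum, effort multipliers, and labor rate are
# hypothetical values chosen only to show the arithmetic.

def cocomo_ii_person_months(ksloc, scale_factor_sum, effort_multipliers):
    A = 2.94                                  # COCOMO II calibration constant
    E = 0.91 + 0.01 * scale_factor_sum        # exponent from the scale factors
    pm = A * (ksloc ** E)
    for em in effort_multipliers:
        pm *= em
    return pm

# Hypothetical adapter for one interface end: 2 KSLOC of new glue code,
# scale factors summing to 18.97, and two non-nominal cost drivers.
effort = cocomo_ii_person_months(ksloc=2.0, scale_factor_sum=18.97,
                                 effort_multipliers=[1.10, 0.95])
labor_rate_per_pm = 20_000                    # hypothetical fully burdened $/person-month
interface_cost = effort * labor_rate_per_pm
print(f"{effort:.1f} person-months, about ${interface_cost:,.0f}")
```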
Sensitivity analysis is described in many places in the extant literature (see NSA, 2000 and Parnell, 2013) and the general purpose is to “assess the impact on value of changing each uncertain input across its range of uncertainty” (Parnell, 2013: 189). The general technique is to identify variables with uncertainty, identify low, nominal and high values for the variables, and then evaluate the system for each variable. This evaluation is done by performing the calculation using the off-nominal value for a single variable, and keeping the remaining variables nominal. This process is repeated for each uncertain variable. The behavior of the total calculated value as each variable is changes is investigated to determine if the total value is sensitive to that variable. Cumulative Weight Sensitivity Analysis The first area of uncertainty to address is decision maker capability requirement preference. This uncertainty occurs when decision makers do not explicitly apply their priorities when assigning different attributes; or, if they do prioritize, if they don’t assign quantitative values for their preference. Therefore differences between the values they 77 assign to the Attribute Hierarchy and their actual priorities for delivered system capabilities arise. To account for this uncertainty, the weights in the Attribute Hierarchy should be varied, the analysis re-run, and the LCER values compared to determine whether the alternatives are sensitive to these changes. In particular, the analyst should look for instances where the alternative preference order changes Lifecycle Utility Reference Profile Sensitivity Analysis In addition to the weights, the analyst should determine the alternatives’ sensitivity to the LURPs. The LURPs are designed to account for changing requirements throughout the program’s lifecycle. However, while attributes such as KPPs, KSAs, and APAs are explicitly defined and validated for the initial capabilities, rarely are future threshold and objective values explicitly defined. Rather, future upgrades are traditionally treated as separate acquisition activities, with new requirements created separately. In contrast, the LURPs are not intended to accurately predict future requirements changes; rather, they are intended to acknowledge the requirement that system utility should improve in general across the lifecycle, and provide a common reference for evaluating the alternatives. Despite this, it is important to vary the threshold and objective values of the LURPs to ensure that the ACEF results are not sensitive to a particular LURP configuration. To accomplish this, the threshold and objective values are varied for a single LURP at a time, and the analysis is re-run. Step 6: Examine Results and Update Business Strategy and Reference Architecture After the analysis has been run, the business strategy and reference architecture should be updated to reflect the lessons learned. For the business strategy, the program 78 should ensure that all selected attributes support the discrimination between alternatives and ensure that the LURPs are appropriate for the analysis. If there are attributes that do not add value to the analysis, they should be removed and the analysis re-run. If any particular LURP drives the analysis, the analyst should consult SMEs in that area to verify that the LURP represents realistic changes in the attribute requirement. 
While the reference architecture should be updated based on the results obtained through the ACEF, the exact areas to update will depend on how the reference architecture was constructed. Some areas to consider are:

• Breakdown of components. The analyst should focus on specific design increments that stress particular design patterns. If they show localized cost effectiveness, it may be productive to investigate combining these patterns with other alternative characteristics.
• Selection of standards. Use of standards is generally preferred. However, the analyst should investigate how use of a particular standard affects other aspects of the design. If a standard is selected because a given component type widely utilizes it, but other related components do not, it may not be cost effective overall. Another similar standard may be more cost effective.

The ACEF can be executed in multiple iterations, based on feedback from stakeholders. The result of ACEF should be a stable and mature business strategy and reference architecture as input to the contracts generation process.

Summary

This chapter started with a discussion of the case for a cost effectiveness framework. Next, the ACEF was developed in six steps, beginning with establishing the business strategy. This included developing an Attribute Hierarchy and a set of LURPs. Next, a reference architecture was developed, followed by the creation of a set of

Table 5: Key Performance Parameters for Program X
Attribute | Requirement | Threshold / Objective
EO Video Resolution | System shall collect and provide to analysts EO video. | Threshold: 480p; Objective: 1080p
IR Video Resolution | System shall collect and provide to analysts IR video. | Threshold: 480p; Objective: 1080p
Radar MTI Detection Coverage | System shall collect and provide to analysts MTI detections. | Threshold: 625 square kilometers; Objective: 1225 square kilometers
Disseminate FMV Data | System shall disseminate FMV video through SATCOM. | Threshold = objective

The Electro-Optical (EO) video will provide primarily day FMV, while the Infrared (IR) video will provide night capability. The Radar MTI will provide increased ability to track targets through visual occlusion, including clouds. Finally, the dissemination of the FMV data will allow for situational awareness for decision makers worldwide, and support for reach-back intelligence exploitation.

In addition to the KPPs, the CDD also includes a variety of KSAs (Table 6). The radar tracker will take the detections from the MTI system and stitch them together to form contiguous tracks, greatly increasing the analyst's ability to continuously monitor their targets. Subsystem A will be a notional subsystem for identifying and geolocating targets. Additional information about Subsystem A is not required for this analysis. Disseminating all other data will further support both situational awareness and reach-back intelligence exploitation.

Table 6: Key System Attributes for Program X
Attribute | Requirement | Threshold / Objective
Radar Tracker | System shall utilize MTI data to create tracks. | Threshold = objective
Subsystem A Geolocation Accuracy | System shall identify and geolocate targets using Subsystem A. | Threshold: 25 meter allowable error; Objective: 10 meter allowable error
Disseminate MTI Detections | System shall disseminate MTI detections through SATCOM. | Threshold = objective
Disseminate Subsystem A Data | System shall disseminate Subsystem A data through SATCOM. | Threshold = objective
Disseminate Radar Tracks | System shall disseminate radar tracks through SATCOM. | Threshold = objective
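Before weighting, it can be convenient to capture the Program X attributes in machine-readable form for use in the Attribute Hierarchy and LURPs. The structure below is a hypothetical convenience, not an ACEF artifact; the values are taken from Tables 5 and 6, with "Threshold = objective" attributes recorded simply as required capabilities.

```python
# Program X attributes captured as data for later use in the Attribute Hierarchy
# and LURP definitions. Values follow Tables 5 and 6; the structure itself is a
# hypothetical convenience, not a format defined by ACEF.

PROGRAM_X_KPPS = {
    "EO Video Resolution":          {"threshold": "480p", "objective": "1080p"},
    "IR Video Resolution":          {"threshold": "480p", "objective": "1080p"},
    "Radar MTI Detection Coverage": {"threshold_km2": 625, "objective_km2": 1225},
    "Disseminate FMV Data":         {"threshold": "required", "objective": "required"},
}

PROGRAM_X_KSAS = {
    "Radar Tracker":                    {"threshold": "required", "objective": "required"},
    "Subsystem A Geolocation Accuracy": {"threshold_m": 25, "objective_m": 10},
    "Disseminate MTI Detections":       {"threshold": "required", "objective": "required"},
    "Disseminate Subsystem A Data":     {"threshold": "required", "objective": "required"},
    "Disseminate Radar Tracks":         {"threshold": "required", "objective": "required"},
}
```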
In addition to the initial requirements, multiple blocks of modifications are anticipated to deliver capability in the face of changing requirements; in particular, sensor requirements are expected to continue to progress. Because of this, an effective MOSA strategy is considered vital to Program X.

Step 1: Establish Business Strategy

Attribute Hierarchy

When generating the Attribute Hierarchy, Program X determined that KPPs are inherently more important than KSAs, by a ratio of 60/40. This breakdown was used to develop the Attribute Hierarchy into two separate branches, with the KPPs and KSAs arranged as leaves in their respective branches (Figure 33). Next, the relative values for the leaf-level attributes in each branch were found by using the method described by Blanchard and Fabrycky. The Program X team assigned an importance rating of 100 to the most important attribute in the branch. Each of the other attributes was assigned an importance rating (from 0 to 100) that corresponded to the operational value that attribute would provide. Using this 0-100 scale, the additive weight was found by normalizing the importance ratings using the following equation:

$AW_a = \frac{IR_a}{\sum_a IR_a}$

Where:
AWa = Additive Weight of attribute a
IRa = Importance Rating of attribute a

The results of the survey and normalization are shown below (Table 7 and Table 8).

Table 7: Program X KPP Attribute Weights
Attribute | Importance Rating | Additive Weight
EO Video Resolution | 100 | 0.3125
IR Video Resolution | 80 | 0.2500
MTI Detection Coverage | 50 | 0.1563
Disseminate Video Data | 90 | 0.2813

Table 8: Program X KSA Attribute Weights
Attribute | Importance Rating | Additive Weight
Radar Tracks | 80 | 0.2192
Disseminate MTI Detections | 60 | 0.1644
Disseminate Radar Tracks | 50 | 0.1370
Disseminate Subsystem A Data | 75 | 0.2055
Subsystem A Geolocation Accuracy | 100 | 0.2740

Using these results, the Attribute Hierarchy was developed: