Bat Chooser Upgrade: Performance Metrics for Parameters, Exams of Engineering

This document presents the performance figures of merit for the Bat Chooser Upgrade, including lower, upper, baseline, and scoring parameters. It also covers weighting criteria, the utilization of resources requirement, and the technology and buildability requirements.

Typology: Exams

Uploaded on 08/26/2009

The Bat Chooser™ Upgrade Requirements Document

Prepared for Systems and Industrial Engineering 554b, Spring 2003
by ACDC Systems Engineering Uninc.

T. Baker
Z. Musaka
E. Smith
J. Ussery

Table of Contents

1 The System Requirement
2 Input/Output and Functional Requirement
  2.1 Time Scale
  2.2 Inputs
  2.3 Input Trajectories
  2.4 Outputs
  2.5 Output Trajectories
  2.6 Matching Function
3 Technology Requirement
  3.1 Available Components
  3.2 Implementation Techniques
  3.3 Required Interfaces
  3.4 Standards and Specifications
4 Input/Output Performance Requirement
  4.1 Definition of Performance Figures of Merit
  4.2 Lower, Upper, Baseline, and Scoring Parameters
  4.3 Weighting Criteria
5 Utilization of Resources Requirement
  5.1 Definition of Resource Figures of Merit
  5.2 Lower, Upper, Baseline, and Scoring Parameters
  5.3 Weighting Criteria
6 Trade-Off Requirement
  6.1 Combining Function
  6.2 Weights
7 System Test Requirement
  7.1 Test Plan
  7.2 Input/Output Performance Tests
  7.3 Utilization of Resource Tests
8 Rationale for Operational Need

1 The System Requirement

The system Design Requirement involves the following components:
• Input/Output and Functional Requirement,
• Technology Requirement,
• Input/Output Performance Requirement,
• Trade-Off Requirement, and
• System Test Requirement.
2 Input/Output and Functional Requirement

The Input/Output and Functional Requirements for the system under design are specified by the sextuple IORB0, defined as:

IORB0 = (TSB0, IRB0, ITRB0, ORB0, OTRB0, ERB0)

where:
TSB0 specifies the system time scale
IRB0 specifies the system input requirement
ITRB0 specifies the system input trajectories
ORB0 specifies the system outputs
OTRB0 specifies the system output trajectories
ERB0 specifies the matching/eligibility function

These elements are described in detail below.

2.1 Time Scale

The time scale of the Bat Choosing process is denoted TSRB0 and defined as

TSRB0 = (TS1B0 × TS2B0 × TS3B0), where:

[…]

Z2 ∈ BUILDABLE_SYSTEM_MODELS; Z2 implements Z1}, and

ALLOCATED_SYSTEM_DESIGNS = {(Z, DSZ, TSZ, Z@, SCR): (Z, DSZ, TSZ) ∈ CTL(IOR); (Z@, SCR) ∈ BSR; Z@ implements Z}.

Then:

FUNCTIONAL_SYSTEM_MODELS = RNG(PJN(CTL(IOR), 1)),
BUILDABLE_SYSTEM_MODELS = RNG(PJN(BSR, 1)),
IMPLEMENTABLE_SYSTEM_MODELS = RNG(PJN(CTL(IOR, TYR), (1,4))),
ALLOCATED_SYSTEM_DESIGNS = RNG(PJN(CTL(IOR, TYR), (1,2,3,4,5))),

IMPLEMENTABLE_SYSTEM_MODELS is a subset of FUNCTIONAL_SYSTEM_MODELS × BUILDABLE_SYSTEM_MODELS, and ALLOCATED_SYSTEM_DESIGNS is equivalent to a subset of CTL(IOR) × BSR.

The Technology and Buildability Requirements for the system under design are specified as TYRB0 and defined as:

TYRB0 = (TYRTB0, TYRBB0, TYRSB0)

where:
TYRTB0 specifies the technology components for a COTS system
TYRBB0 specifies the implementation of a buildable system based on cost and time
TYRSB0 specifies the safety specification for the system

3.1 Available Components

The technology components are specified as TYRTB0 and defined below.
TYRTB0 = (TYRT1B0 × TYRT2B0 × TYRT3B0 × TYRT4B0 × TYRT5B0 × TYRT6B0 × TYRT7B0 × TYRT8B0 × TYRT9B0) ∪ (NIL)

where:
TYRT1B0 = Computer (IBM PC with 80386 processor) and human-machine interfaces (keyboard, mouse, monitor, printer)
TYRT2B0 = Executable software applications
TYRT3B0 = Threshold measurement system (lasers and photodetectors)
TYRT4B0 = Control switch for the measurement system
TYRT5B0 = Hardware to mount the measurement system
TYRT6B0 = Interface mediums (data cables and power cables)
TYRT7B0 = Technology design constraints of the bat design and development
TYRT8B0 = Physical constraints of the system
TYRT9B0 = Safety constraints

3.2 Implementation Techniques

The implementation is specified in terms of cost and time as TYRBB0 and defined below:

TYRBB0 = (TYRB1B0 × TYRB2B0) ∪ (NIL)

where:
TYRB1B0 = Cost of the implementation of the system
TYRB2B0 = Implementation within the time allowable

3.3 Required Interfaces

There are three primary interfaces in the Bat Chooser Upgrade: the human-machine interface at the operations terminal, the data and power cables, and the internal components interface. The human-machine interface and the cables are part of the technology specification. The cables facilitate the internal interface between external components and the personal computer, and are assumed to be part of the computer assembly.

3.4 Standards and Specifications

The Bat Chooser Upgrade will comply with the safety specification in TYRT9B0.

4 Input/Output Performance Requirement

In the graphs of scoring functions included in this section, the Units cell has two special interpretations:
• Judgment is a subjective scale based on user responses.
• Ratio is the quotient of two measurements or estimates with the same units, and hence is dimensionless.
4.1 Definition of Performance Figures of Merit

The overall performance figure of merit is denoted IF0B0 and is computed as follows:

IF0B0 = ISF1B0 × IW1B0 + ISF2B0 × IW2B0 + … + ISFnB0 × IWnB0,

where n is the total number of I/O Performance Figures of Merit, and ISFiB0 = ISiB0(IFiB0(FSD)) for i = 1, 2, …, n, as explained in the following section.

4.2 Lower, Upper, Baseline, and Scoring Parameters

In this section, the following naming convention is used. The initial letter I indicates that the name is for an Input/Output Performance Requirement. The terminal B0 indicates that the name involves the initial iteration of the Bat Chooser Upgrade.

IFiB0 = the ith figure of merit measured per the test plan
IBiB0 = the baseline value for the ith figure of merit
IFXiB0 = the measured value for the ith figure of merit
ILTHiB0 = the lower threshold for the ith figure of merit
IRiB0 = the ranking of importance, from 1 to 10
ISFiB0 = the score for the ith figure of merit
ISiB0 = the scoring function for the ith figure of merit
ISLiB0 = the slope for the ith figure of merit
IUTHiB0 = the upper threshold for the ith figure of merit

[…]

FoM 5. Operator Reproducibility
Score: IS5B0 = SSF(ILTH5B0, IB5B0, IUTH5B0, ISL5B0)
Units: none
Lower Threshold: 0
Baseline: 0.98
Upper Threshold: 1.00
Slope: 10
The Bat Chooser Upgrade shall be resistant to common operator errors as recognized by software engineering.

FoM 6. Time Required for Measurement
Score: IS6B0 = SSF(ILTH6B0, IB6B0, IUTH6B0, ISL6B0)
Units: Minutes
Lower Threshold: 0
Baseline: 30
Upper Threshold: 120
Slope: -0.09
The time required for bat selection should not be more than 60 minutes.
[Figure: scoring function for Time Required for Measurement; x-axis: time in minutes, y-axis: score]

FoM 7. Physical Damage to Bats
Score: IS7B0 = SSF(ILTH7B0, IB7B0, IUTH7B0, ISL7B0)
Units: swings
Lower Threshold: 0
Baseline: 1
Upper Threshold: 10,000
Slope: -1
As bats do not contact balls during bat selection, and barring erratic swinger behavior, there should not be any physical damage to bats.

FoM 8. Availability
Score: IS8B0 = SSF(ILTH8B0, IB8B0, IUTH8B0, ISL8B0)
Units: Ratio
Lower Threshold: 0.75
Baseline: 0.95
Upper Threshold: 1.00
Slope: 10
The Bat Chooser Upgrade will be available per A(t) = e^(-λt), with λ = 0.02/year. This is 0.95 availability.

FoM 9. Reliability
Score: IS9B0 = SSF(ILTH9B0, IB9B0, IUTH9B0, ISL9B0)
Units: Ratio
Lower Threshold: 0.75
Baseline: 0.86
Upper Threshold: 1.00
Slope: 10
The Bat Chooser Upgrade will have a reliability of at least R(t) = e^(-λt), with λ = 0.05/year. This is 0.86 reliability at 3 years.

FoM 10. Safety
Score: IS10B0 = SSF(ILTH10B0, IB10B0, IUTH10B0, ISL10B0)
Units: Ratio
Lower Threshold: 5
Baseline: 10
Upper Threshold: 10
Slope: 10
The Bat Chooser Upgrade shall conform to all Federal regulations concerning consumer product safety.

[…]

5 Utilization of Resources Requirement

5.1 Definition of Resource Figures of Merit

The overall utilization figure of merit is denoted UF0B0 and is computed as follows:

UF0B0 = USF1B0 × UW1B0 + USF2B0 × UW2B0 + … + USFnB0 × UWnB0,

where n is the total number of utilization of resources figures of merit, and USFiB0 = USiB0(UFiB0(FSD)) for i = 1, 2, …, n, as shown below.

5.2 Lower, Upper, Baseline, and Scoring Parameters

The following naming convention is used in this section. The initial letter "U" indicates that the name is for a Utilization of Resources Requirement. The terminal "B0" indicates that the name involves the initial iteration of the Bat Chooser Upgrade system.
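The overall figures of merit defined in sections 4.1 and 5.1 are plain weighted sums of scored figures of merit. As a minimal illustration, the sketch below computes UF0B0 using the UWiB0 weights listed in section 7.3; the score values are hypothetical placeholders for outputs of the scoring functions, not values from the source.

```python
# UF0B0 = USF1B0*UW1B0 + ... + USFnB0*UWnB0 (section 5.1).
# Weights are the UWiB0 values from section 7.3; the scores are
# hypothetical stand-ins for values produced by the scoring functions.
weights = {
    "cost_of_system_and_production": 0.25,
    "system_design": 0.25,
    "operating_cost": 0.15,
    "selling_price": 0.20,
    "training_time": 0.10,
    "component_reuse": 0.05,
}
scores = {  # hypothetical USFiB0 values, each in [0, 1]
    "cost_of_system_and_production": 0.80,
    "system_design": 0.50,
    "operating_cost": 0.90,
    "selling_price": 0.60,
    "training_time": 0.70,
    "component_reuse": 1.00,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights sum to 1
uf0 = sum(scores[k] * weights[k] for k in weights)
print(round(uf0, 3))  # 0.7
```

Because the weights sum to 1 and each score lies in [0, 1], the overall figure of merit is itself a score in [0, 1]; IF0B0 in section 4.1 is computed the same way from the ISFiB0 scores and IWiB0 weights.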
UFiB0 = the ith figure of merit measured per the test plan
UBiB0 = the baseline value for the ith figure of merit
UFXiB0 = the measured value for the ith figure of merit
ULTHiB0 = the lower threshold for the ith figure of merit
URiB0 = the ranking of importance, from 1 to 10
USFiB0 = the score for the ith figure of merit
USiB0 = the scoring function for the ith figure of merit
USLiB0 = the slope for the ith figure of merit
UUTHiB0 = the upper threshold for the ith figure of merit
UWiB0 = the weight of the ith figure of merit
SSF = the standard scoring function

The following are the parameters necessary to evaluate the figures of merit using the scoring functions.

FoM 1. Cost of System and Production
Score: US1B0 = SSF(ULTH1B0, UB1B0, UUTH1B0, USL1B0)
Units: Dollars
Lower Threshold: 0
Baseline: 375
Upper Threshold: 750
Slope: -0.005
This is the amount of money spent on the design and production of the system.
[Figure: scoring function for Production Cost; x-axis: cost in US dollars, y-axis: score]

FoM 2. System Design Time
Score: US2B0 = SSF(ULTH2B0, UB2B0, UUTH2B0, USL2B0)
Units: Calendar Weeks
Lower Threshold: 0
Baseline: 13
Upper Threshold: 14.5
Slope: -0.68
This is the amount of time required to design the system. There is no benefit to finishing early; however, there are penalties for finishing late.

FoM 3. Operating Cost
Score: US3B0 = SSF(ULTH3B0, UB3B0, UUTH3B0, USL3B0)
Units: Dollars
Lower Threshold: 0
Baseline: 570
Upper Threshold: 1140
Slope: -0.005
This is the cost of performing the measurement process on one bat. The cost should not exceed $1,000 a year.
[Figure: scoring function for Operating Cost; x-axis: cost in US dollars, y-axis: score]

FoM 4. Selling Price
Score: US4B0 = SSF(ULTH4B0, UB4B0, UUTH4B0, USL4B0)
Units: Dollars
Lower Threshold: 375
Baseline: 1125
Upper Threshold: 2250
Slope: 0.001
This is the selling price of the system.
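Every figure of merit above is scored by the standard scoring function SSF(lower threshold, baseline, upper threshold, slope). The document does not reproduce SSF's closed form, so the sketch below substitutes an assumed logistic S-curve — scoring 0.5 at the baseline, clamped to the threshold range, with a negative slope meaning "smaller is better" (cost, time) — purely as an illustration, not as the document's actual SSF.

```python
import math

def ssf(v, lower, baseline, upper, slope):
    """Assumed stand-in for the standard scoring function SSF: a logistic
    curve that scores 0.5 at the baseline, clamps v to the threshold
    range, and treats a negative slope as "smaller is better"."""
    lo, hi = min(lower, upper), max(lower, upper)
    v = min(max(v, lo), hi)  # clamp to the threshold range
    return 1.0 / (1.0 + math.exp(-slope * (v - baseline)))

# FoM 6 parameters (time required for measurement, in minutes):
print(round(ssf(30, 0, 30, 120, -0.09), 3))   # baseline scores 0.5
print(round(ssf(120, 0, 30, 120, -0.09), 3))  # upper threshold scores near 0
```

A larger |slope| sharpens the curve around the baseline. The source's real SSF (a Wymore-style standard scoring function) may differ in shape, but the threshold/baseline/slope parameterization is the same.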
[Figure: scoring function for Selling Price; x-axis: price in US dollars, y-axis: score]

[…]

7 System Test Requirement

7.1 Test Plan

In addition to the Bat Chooser Upgrade's current built-in tests, Bat Check and Bat Test, tests must be conducted to verify satisfaction of requirements. The test plan implements a combined developmental and operational test process. This approach combines the standard test measurements given below with Reliability, Availability, and Maintainability (RAM) as determined during trade-off analysis using the I/O Figures of Merit.

A school of thought proposes that by measuring a phenomenon you, by default, corrupt the results. That line of reasoning will be set aside for this application. However, it is important to note that by measuring a subject's swing in a controlled environment, where the subject is aware of the measurement, the data is in fact affected by the measurement tool and process. It is possible to measure the interface and internal hardware and software data in a nonintrusive manner, but not the subject.

The system test, for the purposes of this class, is based on whether or not the system meets the customer's requirement. The original requirement was to increase hitting capability. The test measurements for this requirement are:

1) The number of singles, doubles, triples, and home runs (HITS)
2) The team batting average (TBA)
3) The team on-base percentage (OBP)
4) The number of runs scored per game (RUNS)

The complete set of system measurement techniques includes Test, Demonstration, Analysis, and Inspection (TDAI). For this design we will use only Test and Analysis. The test will include the measurement of each member of the team before spring training to determine batting specifics. The test will measure bat speed and performance through three measurement volumes. The test scenario will include a complete professional baseball team, where each member will go through one complete test sequence of all bats in the Bat Chooser Upgrade.
The testing will occur at the end of the 2002 season. The results will be provided to the hitting coach and implemented during spring training of the 2003 season.

7.1.1 Explanation of Test Plan

The test metrics to be examined at the Bat Chooser Upgrade II system level include:

Measurement | Measurement Units | Performance Specification
Singles hit by the team in Spring Training | ratio: Hits in 2003 ST / Hits after 2002 | Increase over Fall numbers
Doubles hit by the team in Spring Training | ratio: Hits in 2003 ST / Hits after 2002 | Increase over Fall numbers
Triples hit by the team in Spring Training | ratio: Hits in 2003 ST / Hits after 2002 | Increase over Fall numbers
HR hit by the team in Spring Training | ratio: Hits in 2003 ST / Hits after 2002 | Increase over Fall numbers
Team Batting Average | %: TBA Fall / TBA Spring | Increase over Fall numbers
Team On Base Batting Percentage | %: OBP Fall / OBP Spring | Increase over Fall numbers
Runs Scored per Game | %: Runs Fall / Runs Spring | Increase over Fall numbers

7.1.2 Analysis

The analysis will be accomplished post-test to determine the best bat weight, swing acceleration, and position of highest velocity during the swing. These performance indices will enable the customer to improve the overall hitting performance of the team.

7.1.3 Test Trajectory

A test trajectory (1, 2, …, n) will be generated for the subjects participating in the Bat Chooser Upgrade test program. Each subject will swing each bat 3 times, in a random order selected by the program based on a random variable seed at the time of test initiation. The minimum time between swings will be 15 seconds.
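The randomized swing ordering described in 7.1.3 can be sketched as follows. The bat names are illustrative, and the seed argument stands in for the "random variable seed at the time of test initiation"; the 15-second minimum between swings is enforced by the test procedure, not by this ordering.

```python
import random

def swing_order(bats, swings_per_bat=3, seed=None):
    """Build one subject's test trajectory: each bat appears
    swings_per_bat times, shuffled into a random order drawn from a
    seed fixed at test initiation (section 7.1.3)."""
    rng = random.Random(seed)
    order = [bat for bat in bats for _ in range(swings_per_bat)]
    rng.shuffle(order)
    return order

# Illustrative trajectory for one subject and three hypothetical bats.
trajectory = swing_order(["bat_A", "bat_B", "bat_C"], seed=2003)
print(len(trajectory))  # 9 swings: 3 bats x 3 swings each
```

Fixing the seed at test initiation makes the trajectory reproducible, so the same swing order can be replayed when a test sequence must be repeated for a subject.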
7.2 Input/Output Performance Tests

The set of implementable system test items generated by the input/output requirement IOR and the technology requirement is denoted ISTISR and is defined as follows:

ISTISR = {ISTI: ISTI = (ISD, ZREAL, SCRREAL); ISD ∈ ISR; ZREAL ∈ DSYSTEMS; if ISD = (Z, DSZ, TSZ, Z@, SCR, ZS, HI, HS, HO), then ZREAL is a component test representative of Z@ with respect to SCR and SCRREAL}.

A sample table follows.

System Test Trajectory | IR1B0 | IR2B0 | IR3B0 | IR4B0 | IR5B0 | IR6B0 | IR7B0
1 | | | | | | |
2 | | | | | | |
3 | | | | | | |
4 | | | | | | |

7.3 Utilization of Resource Tests

Utilization of resources will be evaluated using a simple cost-benefit analysis, where the values and weights listed below are applied to each implementable system in the final solution space.

Figure of Merit | Value | UWiB0
Cost of System and Production | 10 | 0.25
System Design | 10 | 0.25
Operating Cost | 6 | 0.15
Selling Price | 8 | 0.20
Training Time | 4 | 0.10
Component Reuse | 2 | 0.05

8 Rationale for Operational Need

The content of this document is an elaboration and refinement of material developed in the Operational Need document. With further reflection on the Bat Chooser Upgrade mission statement and the product demonstration conducted 29 Jan 03, the staff at ACDC Systems Engineering has concluded that:

• There are genuine performance issues that would interfere with successful commercial use of the Bat Chooser Upgrade. In particular, the repeatability and stability of the LED source and detector alignment will lead to frustration for both owners and end users if not corrected in the upgrade.

• While no formal trade study was done on whether the existing Bat Chooser Upgrade software running on 16-bit Windows will remain viable in the long term, it is adequate as long as "home" editions of Windows are available. These editions are capable of running legacy applications that "professional" editions will not run.

—END—