Model-based software design: complete notes (Software Engineering notes)

Complete notes for the entire MBSD course taught by Prof. M. Violante in the academic year 2018/2019.

Type: lecture notes, academic year 2019/2020
Uploaded on 06/01/2020 by francescomalacarne



2018-2019 MBSD notes
Prof. Massimo Violante, notes by Francesco Malacarne

Contents

Hardware basics
Model-based software design
    Model of a system
    Design flow
    Code generation: algorithm export or full executable?
    Design guidelines
    Simulink solver
    Solver for code generation
Auto-code generation
    Model construction and configuration
    Code generation
    Analysis of the generated code
    How to deal with different solvers
    How to create a code generation?
    Some C language reviews (Pointer, Functions)
Stateflow
    Why can't we use Simulink?
    Code generation of this system
    Led blinking model
    Hazard indicator development
AUTOSAR
    AUTOSAR introduction
    Architecture description
    AUTOSAR operating system (main features of the operating system)
ISO 26262
Testing
    Hardware testing
    Software testing

Hardware basics

The CPU can do only two operations: reading and writing. A generic read or write cycle always starts with an address phase: it is necessary to specify where the operation (whatever it is) is going to be performed. The second part of the cycle carries the actual operation that is wanted (read or write). CPUs also contain memory elements inside. These memories, called registers, are very fast but also very expensive, which is why they are very limited in number. Each register has a specific role in the overall execution loop.

Memories inside a PC are of two types only: volatile or non-volatile. The former is much faster than the latter, but it is also more expensive; that is why it is used to store data that must be easily accessible, and it is flushed every time the PC is powered off. The software of an embedded system must reside in a non-volatile memory, since it must be present every time the system is powered up.

What is an embedded system? It is a controller, programmed and controlled by a real-time operating system, with a dedicated function within a larger mechanical or electrical system. It is embedded as part of a complete device, often including hardware and mechanical components.

Why are there two different types of architecture? Because it depends on the application. A control device for a fixed and specific system needs only a little memory and very little computational power; the processor has to execute specific tasks only, therefore it can be more efficient to choose an MCU. By contrast, when dealing with possible upgrades due to the increasing complexity of the system, it could be necessary to add memory, and then a SoC shall be used. What is the best solution for an embedded system? An MCU, in general. Why? Because it is cheaper, more compact and designed to do a precise task. What is the drawback? Since the processor is not very powerful and is designed for a specific task only, the control algorithm must be as simple as possible. There are special MCUs that guarantee operation even in the presence of failures.
These architecture implementations, such as the Infineon AURIX family, are used when safety is the most important parameter (for example when dealing with airbags).

The first operation that a CPU does is to associate an address with every device present in the system.

Model-based software design

What is the problem with the standard coding procedure? It requires too much time. The hand-written coding part can easily be avoided by means of a tool that automatically produces the code.

Model of a system

The key point of the modelling phase is that a model does not have to be a precise copy of the original system: it must be able to behave like the original one, but in a simplified version, namely without all the features that are not relevant for our purposes. Obviously, by omitting information we approximate the original system. Whether this approximation is good or not depends on the precision level we need to achieve; there is not a unique solution. Moreover, depending on the situation, we could obtain two different models starting from the same system, because the relevant features could be different!

Models are created to test different alternatives for the solution. Furthermore, they are the starting point for code generation: depending on the model, a certain code will be produced.

During the modelling phase we do not care about the implementation; we concentrate on what the system should do, not on how it does it. Basically, we do not care what the code language is or which platform it will be deployed on; the same model could be developed in different languages. There is a complete decoupling between the model development and the platform used. By contrast, when producing the code, it is necessary to specify the platform where this software will go.

We are going to develop models of control strategies by means of a specific modelling language (Simulink plus Stateflow, for instance). Our control strategies will only be modelled; we will not hand-write the code that executes them. After the modelling phase we have a transformation tool: this tool takes the model and transforms it automatically into something else (an artefact, e.g. C code automatically generated by the Simulink Embedded Coder). The transformation tool uses different transformation rules depending on the artefact we must produce.

Design flow

System requirements and software design are included in the bottom part of the schematic. Basically, in this part we will develop a model with a domain-specific language (Simulink, for example) to create a platform-independent model (namely one that can be used for any platform). During this part of the process we will model two different components: the plant to be controlled and the controller used to control the plant. Basically, we will have two different models, not only one. If we model only the control function, we cannot test (simulate) the C code, which is dangerous because we cannot know whether it is reliable or not, whether it works or not. Before testing the controller on the real system, we have to be reasonably sure that it will behave correctly; that is why we also model the plant. Why test on the real system at all? Because a model is an abstraction, so we are neglecting something. The model of the plant and the model of the controller do not have to be written in the same language or with the same tool.

Model-in-the-loop testing: what does it mean? We have a model of the plant, a model of the controller, and we test them.
If something is not working, we have to make changes. In this scenario the working environment is a development PC. We define test scenarios (different inputs) and then we evaluate the outputs, checking whether the system is able to follow the reference we placed as input. All the components are developed in Simulink or Stateflow and tested on our development PC.

Code generation: code generation comes after model-in-the-loop testing. Once we are 100% sure about the model and the controller, we continue the process with automatic code generation. Code generation is based on a transformation tool that produces an artefact (a code), given all the specifications we provided. Obviously, this code depends on the platform we are going to use for our system; however, the starting model is always the same.

[Figure: Overview of the design flow, highlighting processor-in-the-loop testing and software/HW-SW integration; the implementation is deployed on the target hardware (e.g., EVB or ECU), with continuous verification and validation.]

Hardware-in-the-loop testing: hardware-in-the-loop testing is used to evaluate the real-time performance of the controller. All the time delays are removed: the plant can be either the physical plant or dedicated hardware (emulation hardware). The controller sees a real system, so it is possible to perform the last evaluation of the performance of the system. Why use emulation hardware that costs a lot? Because it is safe. Once tested in this way, it is possible to proceed with tests in real-life scenarios.

[Figure: Overview of the design flow, highlighting hardware-in-the-loop testing and calibration; the implementation runs on the target hardware while the plant model runs on rapid-prototyping hardware, with continuous verification and validation.]

Code generation: algorithm export or full executable?

A full executable also contains the basic software that allows the controller to work properly. The model contains everything: not only the controller, but also the drivers associated with the platform. When we generate the code, it covers the whole dotted box, so we do not have to perform any kind of manual merging. This is the case when we do rapid control prototyping: the hardware is not the final one, it is much more powerful and expensive, but it makes our life easier when dealing with control design. When we need to test a controller many times, we must do it quickly; we cannot afford to integrate the different pieces of code manually every time. What is the drawback? That it is not well optimized: the efficiency is surely lower than in the algorithm-export case.

What will we use? Panthera: rapid control prototyping (whole system), only one code, we do not have to merge anything. NXP: algorithm export, we will manually merge the different codes.

▪ Analysis: consistency of the units of measurement is one of the most important factors to be considered when dealing with systems.
▪ Transformation tool: this is a requirement to stay in the market nowadays; no code knowledge is needed, and it makes it possible to be more aligned with the ICT world.
▪ MATLAB/Simulink are the best tools as far as the automotive industry is concerned; that is why we are studying and developing models on this platform.
▪ AUTOSAR is the standard when dealing with static models.
For our purposes, we will use Simulink when dealing with control algorithms, namely for dynamic behaviour, whereas Stateflow will be used for supervisory logic (to check whether the system is working correctly or not).

Design guidelines

The model is effectively documentation; therefore, it must be readable, clear and unambiguous. There must be a common language across the whole industry for developing models, so that they will be understood in many different companies (easy interaction). The first aspect to be considered is the usage of layered models. This is very important, as it guarantees good readability and logic flow. A general approach consists in dividing the overall model into the controller and the plant, and then dividing each of them into subcategories. This layered process is also very useful when we want to generate code for a single part only.

Naming conventions (from the course slides):
▪ File and directory names: no leading digits, no blanks; underscores can be used to separate parts; no more than one consecutive underscore; names cannot start or end with an underscore; the file extension should not use underscores.
▪ Model elements (Inports, Outports, Subsystems) shall adopt a company-defined naming convention, following the same rules as file names. For Inports/Outports the name shall be <Signal name>_<unit>, such as EngineSpeed_RPM, BatteryLevel_V or CarAcceleration_ms2. For Subsystems the name shall be <Subsystem name>, such as RequestedTorqueComputation.

Simulink usage: in a Simulink model, ports must comply with the following rules: place Inport blocks on the left side of the diagram and Outport blocks on the right side; you may move them to prevent signal crossings.

Always use a cascading (top-down) structure: all the derivations must be performed going toward the bottom part of the model. The only exception is the feedback path, which is obviously opposite to the standard direction.

Simulink solver

...the current time (as a consequence there will be a new state, a new input and a new output). The procedure is iterated until the end of the simulation.

There are two types of solvers: fixed-step and variable-step. Each point shown in the figure represents a point where the solver creates an output. A fixed-step solver is a solver where Δ is chosen a priori, before the simulation, and remains the same throughout the simulation, irrespective of the different frequencies possibly present in the signals. It is clear that with this procedure it is not necessary to update Δ during the solver process. A variable-step solver chooses Δ at every time instant. This is performed by Simulink automatically: when the system changes rapidly, the value of the integration time is reduced, whereas when the system varies slowly, a higher value of Δ is used.

The main problem of the fixed-step solver is that if the value of the integration time is not small enough, we may not be able to represent the behaviour of the system correctly. On the one hand, choosing a small value of Δ is convenient to characterize the system behaviour; on the other hand, it is not effective in terms of computational effort. The higher the integration time, the smaller the required effort.

Another important concept is the type of "time" chosen. In fact, it is possible to have a continuous solver or a discrete one.
A continuous solver tries to emulate continuous time (in practice this is not possible due to the limited memory, as there cannot be infinite points between two time instants); therefore, between two measurements (t and t + Δt) the solver works at every minor step, and it is called at every minor step. A discrete solver works only at every major time step: the Simulink solver is called at every major step, and we basically do not care about what happens between two time instants. For this reason, a discrete solver is much simpler.

High accuracy means a small integration time and therefore a very high CPU time. On the contrary, low accuracy means a large integration time and therefore a small CPU time.

Sampling time and integration time are two different things. The sampling time selects the measurement instants. It is used for the control strategy; in particular, it defines the timing sequence for the controller, i.e. when the control law must be applied. The integration time is the time at which Simulink must evaluate the model; it has nothing to do with the sampling time and the control strategy. The integration time, in some sense, represents the speed of the model dynamics, whereas the sampling time represents the speed of the controller (it affects the control strategy only). The integration time tells Simulink when to solve the model and perform the whole loop. Example: sampling time 100 ms, integration time 100 µs.

Solver for code generation

Is it possible to generate code for any solver? No, we will be able to generate code for a discrete fixed-step solver only. Why? Let's try to understand it. Firstly, what is a task? A task is a sequence of operations that our PC must perform. This set of operations requires a certain time to be accomplished (the blue rectangle in the slide). The available time between one task and the next one must be long enough to guarantee the execution of the task. A generic piece of software can have three different kinds of tasks:
▪ Periodic task: it happens at every fixed step; the time step is known as the inter-arrival time and, in general, it is known. The task time must be smaller than the inter-arrival time.
▪ Aperiodic task: we know the minimum inter-arrival time. The task time must be smaller than the minimum inter-arrival time. An example of an aperiodic task is measuring the position of a point on a revolving tyre: when the speed is higher the inter-arrival time is shorter; however, knowing the maximum speed, we know the minimum inter-arrival time, and then we can choose the task time according to this value.

After the code generation we must be able to guarantee that the code is executed within the assigned time. This could be a problem if the model we considered is too complex.

Auto-code generation

Lecture advice: refer to "C:\Users\Francesco\Google Drive\Documenti\Università\4 anno\Model-based software design\Lectures\Exercises\Code_generation".

Model construction and configuration

This is the top-level model, namely the surface of the whole project. The control logic is the part that will be translated into code by means of auto-code generation. In this particular case, the code is a simple gain that performs the conversion from m/s to km/h, but in general the control logic is much more complicated. In this phase we are doing the model-in-the-loop test, therefore we only have to check the result (by means of the scope). We obtain the correct result, so the model is correct. Now we want to perform the auto-code generation.
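Before running the generator, it helps to picture roughly what the output will contain. The following is only an illustrative sketch (all names are invented here; the real Embedded Coder output, with its own data structures, is analysed later in "Analysis of the generated code"): for a pure-gain model, the step function reduces to a single multiplication.

    /* Illustrative sketch only: NOT the actual Embedded Coder output.
       A pure-gain model converting m/s to km/h reduces to a step function like this. */
    #include <stdio.h>

    static double ControlLogic_U_Speed_ms;   /* root Inport  (name invented) */
    static double ControlLogic_Y_Speed_kmh;  /* root Outport (name invented) */

    static void ControlLogic_step(void)
    {
        /* Gain block: 3.6 converts metres per second to kilometres per hour */
        ControlLogic_Y_Speed_kmh = 3.6 * ControlLogic_U_Speed_ms;
    }

    int main(void)
    {
        ControlLogic_U_Speed_ms = 10.0;                     /* test input */
        ControlLogic_step();
        printf("%.1f km/h\n", ControlLogic_Y_Speed_kmh);    /* prints 36.0 km/h */
        return 0;
    }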
Model-in-the-loop testing: what does it mean? We have a model of the plant, a model of the controller, and we test them. If something is not working, we have to make changes. In this scenario the working environment is a development PC. We define test scenarios (different inputs) and then we evaluate the outputs, checking whether the system is able to follow the reference we placed as input. All the components are developed in Simulink or Stateflow on our development PC.

Code generation: code generation comes after model-in-the-loop testing. Once we are 100% sure about the model and the controller, we continue the process with automatic code generation. Code generation produces an artefact (a code). Obviously, this code depends on the platform we are going to use for our system.

Software-in-the-loop testing: the process of testing the C code generated for the controller is called software-in-the-loop testing. The idea is to keep working on the development PC (still running simulations), but in this case we test the model of the plant against the real C code produced by the code generation. We want to evaluate whether the software we produced behaves like the model we created as a starting point. Why should we do this? The automatic code generation adds details (implementation details, since the code depends on the platform used), so errors or timing-related problems could appear.

Processor-in-the-loop testing: we are no longer working only with a development PC; we put the real code on the real platform (the embedded system to be used in the final application) and we test it. The embedded system has a limited amount of computing power compared with the development PC, since it only has to provide the required performance and it must be cheap. In this case we are working with the real platform and with the Simulink plant model on a development PC. However, real time is still missing: the Simulink model cannot behave exactly like the plant, since there are delays introduced by the PC. Therefore, we can validate whether the software is working properly or not, but we cannot evaluate the real performance (real-time behaviour).

Hardware-in-the-loop testing: hardware-in-the-loop testing is used to evaluate the real-time performance of the controller. All the time delays are removed: the plant can be either the physical plant or dedicated hardware (emulation hardware). The controller sees a real system, so it is possible to perform the last evaluation of the performance of the system. Why use emulation hardware that costs a lot? Because it is safe.

Firstly, it is necessary to select the proper solver configuration in order to allow code generation. As we said, the correct way to allow code generation is by means of a fixed-step discrete solver. How do we select the fixed-step size (integration step)? Let's assume that our model will be translated into software as a periodic task, which means that it will be called periodically after a certain time (say 100 ms). Our software will be started every 100 ms: every 100 ms it samples the input and computes the output and the next state. From the simulation point of view, the fact that the software is executed every 100 ms means that there is a fixed-step solver with a time step equal to the period of the execution (100 ms). The fixed-step size is the period of the periodic task that will encapsulate the code corresponding to our model.
The model-in-the-loop test is very useful to check the quality of the output you are providing. What happens if the output is not equal to the expected one? There are two options:
1. Look for bugs in the equations.
2. If there are no bugs, it means that we need a different integration step: maybe the problem must be solved in less time. Clearly, the platform must be able to solve the problem in that new, smaller amount of time. There must be consistency between the simulation and the actual implementation.

We want to have a compact number of files. In the report pane it is possible to set all the characteristics of the report.

The hardware implementation is important, since we must be able to implement the model and the generated code on a real platform. The output code depends on the implementation platform; this is an important step.

Once everything is set up it is possible to generate the code: select the subsystem corresponding to the control logic and then press "build the selected subsystem".

Analysis of the generated code

When the code generation is done, a report is displayed. Moreover, there is a new folder in our directory containing all the code produced during code generation. The two important files are the following:
• ControlLogic.c
• ControlLogic.h

ControlLogic.h contains the declaration of the data types needed to implement the code of our model (these types depend on the platform). Moreover, it contains the prototypes of the three functions produced by the code generator:

ControlLogic_initialize: responsible for the preparation of the model. It is executed only once, at the beginning; it must be invoked once before time 0. It is an operation performed by the operating system before the execution of the software.

ControlLogic_step: this is the core of the generated code. It is a function that takes care of the operations to be performed in one step. It contains the operations that the Simulink solver would do when evaluating our model: it takes the input and the state of the model, then it computes the output and the next state.

In general, for a model M:
• M_initialize(): it contains the code for preparing the execution of the model step M_step(). The function shall be executed once, before the first execution of M_step(). It allocates memory.
• M_step(): it contains the code implementing the functionality of the model M. M_step() is executed at every integration time.
• M_terminate(): it contains the code for freeing the memory allocated by M_initialize(). It can be executed once, after the very last execution of the code.
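A minimal host-side harness helps to see how the three generated functions are meant to be called. This is only a sketch under the assumptions stated above (M_initialize() once before time 0, M_step() once per period, M_terminate() after the last step, with a 100 ms period as in the example); the definitions of the M_* functions come from the generated .c file, and on a real target the periodic call is made by the operating system or a timer interrupt rather than by a sleep loop.

    /* Sketch of a harness calling the generated functions; compile and link
       together with the generated model sources. */
    #include <unistd.h>   /* usleep(), POSIX; only for this desktop sketch */

    /* Prototypes as found in the generated header */
    void M_initialize(void);
    void M_step(void);
    void M_terminate(void);

    int main(void)
    {
        M_initialize();                 /* executed once, before time 0 */

        for (int k = 0; k < 1000; ++k) {
            M_step();                   /* sample input, compute output and next state */
            usleep(100 * 1000);         /* 100 ms period = fixed-step size of the solver */
        }

        M_terminate();                  /* free whatever M_initialize() allocated */
        return 0;
    }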
How to deal with different solvers

In general, we will have two different subsystems in our schematic (ControlLogic and Plant) that must interact with each other. Is there any problem? Yes, because for example we are trying to implement a continuous system with discrete-time software (a fixed-step discrete solver, which is the Simulink configuration necessary to perform code generation). What is the solution? It is possible to use the so-called "referenced subsystems". These are sub-models with their own solver (which can be different from the one used for code generation); it is then up to Simulink to create a coherent solution. Basically, we could have a subsystem with a continuous variable-step solver, but when creating the code, Simulink will create a coherent solution.

How to create a code generation?

New code generation: select the directory and wait for the creation of some folders. The folders created by default are:
• Controller: it contains the controller schematic (Simulink model).
• Plant: it contains the plant model (Simulink).
• Harness: it contains the top level of our model; this is the default project (our starting point). Although not complete, it has the key points needed to start working.

The structure of the harness model contains a top-level controller block and a top-level plant block cast in a feedback loop. The controller block does not contain the controller file itself; that lives in another directory (the one we called Controller). By doing this we can specify a different solver: this is the reason why the model is kept in another place. Moreover, there are rate-transition blocks that are needed to synchronize the different elements (for example, there could be models working with a variable-step solver in the continuous domain, whereas we know that the software needs a fixed-step discrete solver to allow code generation). The key point is that the controller can have a different solver from the required one (as can the plant); the rate-transition blocks give rise to a coherent solution.

Let's say that we want to generate code for the controller only. We can open the controller model and perform the conversion. In this case we have set the inputs and the outputs as part of the data structure (not as individual components as before). By doing this, the step function will not have the two arguments associated with the input and the output, since they are inside the data structure (feedback_control_M).

What have we done? An algorithm export, namely a procedure that creates the code of a single part; this piece of code will then be integrated on the platform. By contrast, when performing rapid prototyping we create the whole code, not a single part of it. The procedure is always the same; what changes is the configuration we are using. The transformation rules change depending on what we are doing (algorithm export or rapid prototyping) and on the platform we are going to use, but the overall process is the same. The artefact is different depending on the usage, even though the procedure is the same.

Some C language reviews

Pointer

A pointer is a variable that stores (points to) the address of another variable. A pointer in C can also be used to allocate memory dynamically (namely at run time). In our case we use pointers to update the values of the input and the output (since they may change).

Functions

A function must be declared following this structure:

type_of_the_return_value name_of_the_function(type_of_the_input input)

It is called simply by its name (name_of_the_function), passing all the declared inputs. An illustrative example of a C function declaration and call, standing in for the figure from the original notes, is given below.
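The following stand-in example (purely illustrative, names invented) shows a function declaration, a function call, and a pointer used to update a caller's variable in place, as described above.

    #include <stdio.h>

    /* Declaration: return type, name, typed input */
    double ms_to_kmh(double speed_ms)
    {
        return 3.6 * speed_ms;
    }

    /* A pointer parameter lets the callee update the caller's variable in place */
    void update_output(double *output, double input)
    {
        *output = ms_to_kmh(input);     /* call: the result is written through the pointer */
    }

    int main(void)
    {
        double y = 0.0;
        update_output(&y, 10.0);        /* call: pass the address of y */
        printf("%.1f km/h\n", y);       /* prints 36.0 km/h */
        return 0;
    }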
Stateflow

Stateflow is a specific tool for dealing with machines that have a finite number of different working conditions. For example, the ABS system of a car has three different working conditions:
1. Every sensor is working correctly → it behaves in a certain way.
2. Only some sensors are working fine → it behaves in a different way.
3. No sensor is working → it is not engaged.
With Stateflow it is possible to design these three working conditions so that the system works according to the current situation. Even though the ABS is a complex system, it can be decomposed into three different working conditions. Each state is a working condition.

Why can't we use Simulink?

A Simulink model is a data-flow model: input data are processed to produce a certain output. This is a possible approach, but it is not the only one available; moreover, sometimes it is not the most effective one. Let's imagine that the input is coming from a vehicle: many users could take advantage of this data (ABS, board display, etc.). Sometimes it is convenient (or necessary, according to ISO 26262) to implement a supervising or monitoring logic, for example to check whether the input is received or not (diagnostic purposes), for safety reasons.

How do we implement an additional functionality that checks whether the input remains unchanged for 5 minutes? If that happens, there is a problem and a warning light must turn on. We add a subsystem to check for this problem: we ask the subsystem to monitor the input for 5 minutes and verify whether it is still the same, i.e. to track the input signal within a time window of 5 minutes. In pure Simulink we would need about 3000 time-delay blocks. Can we use such a number of components? No, this solution is not feasible, also because in case of changes it would be a mess to retune the system.

Each state is a representation of a working condition (the on state of the lamp, or the off state of the lamp, for example). Additionally, it is possible to perform actions when passing between two states, simplifying the overall model. These actions can be defined between curly brackets: when the transition is performed, the action is computed.

Then, we can add a loop transition: we exit from the state and we re-enter the same one. Why is this useful? Because we are actually exiting from the state, therefore we can reset any counter or perform the exit task. Inside a state there is always a (hidden) time counter that is automatically reset whenever we exit from it. By adding a second inner loop that fires when the state has been active for more than 300 seconds, it is possible to accomplish the task with a single state. Why does it work? Because if there is a variation, the first inner loop is activated, so we exit from the state and flush its internal timer. By contrast, if we remain in the state (basically there is no variation, the derivative stays at zero, otherwise we would exit through the first arrow), after 300 seconds the error flag is changed. Whenever we exit the state, we reset the counter.

Code generation of this system

Generation settings:
• ARM Cortex processor
• Embedded real-time target
• Code generation only
• Report requested
• Reusable functions
• Compact files

What happens if we change the time step from 0.1 s to 0.15 s? The generated code reflects the settings we changed. Obviously, the real platform cannot change this time at run time: our hardware platform must be designed to call the step function according to the initial design; it cannot change whenever we want.

Led blinking model

Stateflow is much more powerful than this: it also allows states to be nested inside states. Our new task is to model the blinking of a car turn indicator. We basically have two working states: blinking or not blinking (on or off). The default state is the off state; to pass to the on state a variable must be different from 0; in this case the variable is turn (it represents the driver's turn request). When turn is off, wherever we are in the model, we have to go back to the initial condition. Fortunately, Stateflow allows us to simply add a condition to the top-level state that stops whatever is going on in the inner states.
Basically, if turn becomes 0 while L is on, the light is stopped, since turn == 0 (placed at the top level of the model) is hierarchically higher than the inner conditions. Based on the value of turn, we take one direction or the other.

If we set turn to 1 we have the blinking of the left indicator; if we set turn to 2 we have the blinking of the right indicator. What happens if we set turn to 2 starting from 1? Nothing, since there is nothing that tells the system to change from the first selector to the second one if the value is changed while operating: the system keeps blinking, ignoring that condition. This is obviously a problem, because setting the wrong indicator and correcting it right after is a realistic scenario. The solution is to encapsulate the blinking models in another state and go back to the selector in case the value changes.

When we change the selector, we want to switch immediately to the other side, without any delay. How do we implement this feature? It is possible to create another path from ON to OFF, namely: when the turn value changes to the opposite one (from 1 to 2 or vice versa), the transition must be immediate.

Furthermore, when one of the two indicators is on, it is possible to set the hazard indicator (turn = 3). In this case we want a synchronized blinking of the two indicators. To accomplish this task, it is possible to add a self-loop on the parallel structure that activates when two conditions are met: turn == 3 and a flag equal to 0. This second condition is necessary because, if it were not present, the self-loop would repeat forever. It is also necessary to place an action that changes the value of the flag whenever the self-loop is performed, otherwise it would never end. The only way to bring the flag back to 0 is to set turn = 0.

[Figure: Stateflow chart of the indicator model, with transitions guarded by turn == 0, turn == 1 and turn == 2 and timed with after(1 sec) events.]

AUTOSAR

AUTOSAR introduction

The goal of these lectures is to understand the idea behind AUTOSAR and why the automotive industry is pushing in this direction. AUTOSAR is an open architecture for the software deployed inside embedded control units (suspension control, powertrain control unit and so on). The philosophy behind AUTOSAR is shown in the figure, where the main players are reported:
• Generic OEM: car manufacturers, the ones that ask for a certain product.
• Generic Tier 1: ECU suppliers.
Since OEMs want to deploy electronic components but do not have the time or enough knowledge, they rely on ECU suppliers (Tier 1). Tier 1 suppliers provide sub-assemblies with mechatronic components and software. Behind a single ECU there are a lot of parts, both on the software side and on the hardware side. The main purpose of AUTOSAR is to spread the idea of using standard components for both categories. In addition to hardware and software, there are tools and services industries that provide tools and solutions to the problems that arise. Lastly, since the code is divided into many different layers and AUTOSAR is the basic one, there must be a connection between the hardware and the software (drivers); the semiconductor companies are the ones best suited to this task. Basically, Tier 1 suppliers receive support from the three other categories of companies in order to provide the best solution to the OEM.

Architecture description

When modelling an ECU, we have two pieces: the software components (top boxes) and the hardware components (bottom boxes, the ECUs). Such components are developed separately and concurrently.
Application developer: he receives a certain request from the OEM. He neglects the details of the hardware platform and deals with abstract components: the AUTOSAR services that will interface with the hardware. Each component of the top boxes (software component) describes a software operation (actuation of some actuator, controllers and so on) and it can be developed in different ways (hand-written, auto-generated, bought from other companies, ...). Each software component is able to interact with the others by means of protocols. Such communication will of course be performed on the hardware, but at the application layer there is only a virtual functional bus, which specifies only that a connection exists (not how it is practically deployed).

Hardware developer: he needs to establish the computing power needed to accomplish the tasks. According to AUTOSAR, there can be many different ECUs running the application, not only one; moreover, they can come from many different suppliers. The hardware developer can simply combine different specific ECUs.

Once the two parts are prepared (application and platform), it is necessary to merge them together. AUTOSAR provides the tool chain to establish which software components (top boxes) must be mapped onto which of the ECUs present in the platform, knowing the capabilities of the single ECUs and the requirements of the software.

Let's imagine buying some ECUs and developing only one from scratch. While developing the application we do not take this information into account: we do not care about the fact that different ECUs will have to cooperate, we only know that there is a communication. Such communication is specified in the tool chain: AUTOSAR will take care of the software that allows the cooperation between ECUs. When software component 3 asks for a service from software component 1, the implementation of software component 3 asks the run-time environment of ECU 2 to generate a bus transfer on a connection; this is received by the run-time environment of ECU 1, which forwards the request to software component 1.

Another important advantage is that the application is generic and can be used in different scenarios: say we have one vehicle with one ECU dedicated to each software component and another vehicle with only one big ECU. The application is the same, because only the mapping changes, and such mapping is not inside the application: it is handled by the central software. The advantage of this approach is that it provides very high portability: we can focus our resources on the application and not on the portability.

To recap: when modelling with the virtual functional bus, the elements of our application are called software components. Such components interact through abstract interfaces (protocols). The transport mechanism is not defined at the virtual functional bus level (only the fact that a communication is present is defined). When we move to the platform, we define which ECU each software component is allocated to. At that time the implementation of the communication is decided; such implementation depends on the mapping: if the communication stays inside a single ECU we do not need to access the network, we can use a global variable (shared memory).

Software components are defined as atomic, since they cannot be partitioned across many ECUs: each must stay on a single ECU (one ECU can manage multiple software components, though).
VFB: the logical interconnections between software components (we do not know how the connection works, we only know that a connection is present).
RTE: code generated once the mapping between software components and ECUs is known. It implements the mechanisms that the VFB uses to communicate.
BSW: it provides the services that enable the usage of the software components.

AUTOSAR Classic: microcontrollers, powertrain, body and comfort, etc.
AUTOSAR Adaptive: specification intended to enable AUTOSAR on virtual machines.

These functions are referred to with the generic term "driver". They program and operate hardware components (for instance ADC0 only reads, whereas SPI reads and writes). Each component must have a driver: ADC0 and SPI are peripherals inside the microcontroller, so the drivers that take care of programming those resources are placed in the microcontroller abstraction layer.

The ADC0 and ADC1 information must be available at the top level. The top level does not care whether ADC1 and ADC0 are both on the microcontroller or somewhere else: it just needs to access the device. The method for reading ADC1 is the same as for ADC0, despite their not being on the same chip. Nevertheless, the actual function implementation is different, since reading ADC1 requires SPIread and SPIwrite, as ADC1 is not on the microcontroller. The ECU abstraction layer provides a set of functions and services that abstract the path: we read ADC1 and ADC0 without knowing where they are physically located; we only have to call ECU_ADC1read or ECU_ADC0read, and the function deals with the actual communication. The RTE (application layer) will expose only two functions: getTemperature and getPressure. Hidden inside these software functions there is the connection to ECU_ADC0read and ECU_ADC1read, which handle the path. The key point is that at the top level the path is not known! The mapping is specific to the platform and is done later.

Now suppose the architecture is changed: ADC0 is no longer inside the microcontroller, but in a secondary ECU connected to the microcontroller through a CAN interface (a network communication method). Although the structure underneath is different, getTemperature is always the same, whereas the bottom functions are different. The key point here is that the application does not change when the hardware changes: what changes is the RTE. The other advantage of this approach is that it limits the modifications needed whenever components in the platform change: the application does not change, only the bottom functions do.

The MCAL is responsible for abstracting where the hardware is located with respect to the ECU. It provides a set of functions that collect data from different parts of the ECU and send them to the ECUAL without specifying the native location. At this level we only know how the actual hardware that provides the information is connected to the microcontroller; we do not know where it is.

The ECUAL is responsible for abstracting the path needed to reach a certain piece of information, which means that its users are unaware of whether that information comes from the microcontroller or from other places. At the level of the ECUAL the method used to get a piece of information is the same for all of them; obviously the path is different, but at this level we do not know it, we only know that the information is coming from an ADC chain (for instance). The RTE is the implementation of the communication defined at the software (VFB) level.
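As a purely illustrative sketch of the layering just described (getTemperature, getPressure, ECU_ADC0read and ECU_ADC1read are named in the notes; all other names, return values and scalings are invented, and the pairing of temperature/pressure with ADC0/ADC1 is arbitrary), plain C wrappers can be used to picture it: the application sees only physical quantities, the ECU abstraction layer hides the path, and only the MCAL knows whether the data comes from the on-chip ADC0 or from an external converter behind the SPI bus.

    #include <stdio.h>

    /* MCAL: placeholder drivers standing in for register-level access */
    static unsigned int MCAL_ADC0_Read(void)           { return 512u; } /* pretend on-chip ADC0 sample   */
    static unsigned int MCAL_SPI_ReadExternalAdc(void) { return 300u; } /* pretend SPI transfer to ADC1  */

    /* ECU abstraction layer: uniform access method, path hidden from callers */
    static unsigned int ECU_ADC0read(void) { return MCAL_ADC0_Read(); }
    static unsigned int ECU_ADC1read(void) { return MCAL_SPI_ReadExternalAdc(); }

    /* Application/RTE level: software components only ask for physical quantities */
    static double getTemperature(void) { return 0.1  * (double)ECU_ADC0read(); } /* scaling invented */
    static double getPressure(void)    { return 0.01 * (double)ECU_ADC1read(); } /* scaling invented */

    int main(void)
    {
        printf("T = %.1f  P = %.2f\n", getTemperature(), getPressure());
        return 0;
    }

If ADC0 later moves to a secondary ECU behind a CAN interface, only the two lower layers in this sketch would change; getTemperature() stays the same, which is the portability argument made above.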
AUTOSAR operating system

Part of the AUTOSAR code is the operating system: it allows the application to be decomposed into tasks (fragments of code) that can evolve concurrently. The operating system is responsible for optimizing the usage of the components in the entire system. The operating system of a normal PC does not know in advance which programs will be used, which is why it must be versatile. On the contrary, when dealing with AUTOSAR platforms the tasks are known in advance and the system is configured statically, namely we predefine how many tasks will run. The operating system can therefore guarantee deterministic behaviour: the reaction will be given within a certain maximum amount of time.

Main features of the operating system

Periodic behaviours can be run precisely. But this is not a normal operating system: it is not possible to run new programs; to do so, it is necessary to rebuild the operating system.

ISO 26262

What is ISO 26262? ISO 26262, titled "Road vehicles – Functional safety", is an international standard for the functional safety of electrical and/or electronic systems in production automobiles, defined by the International Organization for Standardization (ISO) in 2011. The main goals of ISO 26262 are:
• It provides an automotive safety lifecycle (management, development, production, operation, service, decommissioning) and supports tailoring the necessary activities during these lifecycle phases.
• It covers the functional safety aspects of the entire development process (including activities such as requirements specification, design, implementation, integration, verification, validation and configuration).
• It provides an automotive-specific risk-based approach for determining risk classes (Automotive Safety Integrity Levels, ASILs).
• It uses ASILs for specifying the item's necessary safety requirements for achieving an acceptable residual risk.
• It provides requirements for validation and confirmation measures to ensure that a sufficient and acceptable level of safety is being achieved.

Basically, ISO 26262 is a structured process. But why do we need a structured process for the automotive industry?

The diffusion of hardware goes hand in hand with the diffusion of software; both components can have problems, in terms of failures or bugs. Software is necessary to use the hardware: it is instrumental in making advanced features possible. Moreover, it is flexible; it can be changed frequently to correct bugs or improve performance. The problems are:
• Software is complex by definition. It is responsible for running multiple processes concurrently. Furthermore, parallel features for the same process must be carried on at the same time (for instance noise cancellation and echo cancellation). All the possible interactions must be considered.
• Software is not perfect: it can contain bugs, i.e. scenarios not considered during the development phase.
Such problems can lead to two important critical situations: safety-critical (whenever human lives are involved) or mission-critical (whenever large amounts of money are involved).

U2: main processor, responsible for the ignition of the fuel. U1: secondary processor that watches what the main processor is doing (watchdog). If the watchdog sees possible errors, it starts "barking".

What to do when problems arise? The two standard ways of acting (shifting to neutral or turning off the vehicle) could be applied. The driver's perception, however, was of a completely uncontrollable system.
Investigations were started to analyse the whole development process used by Toyota. In every company it is necessary to act following this 3-step workflow:
1. Describe what your goals are.
2. Describe the process you are using to reach those goals.
3. Check that you have achieved all the declared goals.

One of Toyota's declared rules was to use more than one sensor to measure the same quantity, so that in case of significant discrepancies the safety procedure would be initiated. For instance, to measure the acceleration requested by the driver we should measure the accelerator pedal position. How? We could use a potentiometer. What if it fails? We should use two different sensors. Independent acquisition channels storing separate variables is one of the main topics when dealing with safety systems. Toyota specified something (to use two variables) but they did not follow what they declared. Toyota also did not use the coding guidelines they had promised to use. The MISRA-C guidelines are the state of the art for automotive software (good guidelines, worth using, since the law refers to such rules). If something goes wrong, you must be able to demonstrate why you did not follow the general rules; you must prove that your alternative solution is also valid.

The main problem was found to be in the watchdog algorithm; in particular, a memory problem was found: the maximum amount of dynamic memory cannot go beyond a limit, but in Toyota's application it exceeded that limit. Why did it happen? Because they did not follow the state of the art; as a consequence there were too many data and a stack overflow occurred. Recursion, in computer science, is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem (as opposed to iteration). Recursion is forbidden because it cannot guarantee that the maximum amount of available memory is respected.

The complexity of embedded systems is growing a lot and it is starting to have possible impacts on human life, so it is necessary to introduce a rigorous process to reduce as much as possible the risk of unexpected behaviour in real working conditions. This process is all contained in ISO 26262. All companies ask for components and subcomponents to be developed following these guidelines, since they are the only way to reduce risks and to avoid problems in case of trouble. Only electronic and electrical parts are included in ISO 26262: any mechanical or hydraulic part is outside the scope of this standard. ISO 26262 is a sort of recipe containing all the ingredients necessary to accomplish a project and the process with which to assemble all these elements together. In automotive we have two contrasting parameters: safety and price.

ASIL is a way of quantifying how risky a certain component is for the human user. There are 4 different levels, classified on the basis of the consequences of a failure. For instance, the electrical steering lock is a system intended to work only when the car is stopped, but what if it starts working while driving? This is the reason why it is classified at the maximum level of risk. The first task when dealing with a project is always to find the ASIL level. Once the ASIL level of a component is determined, ISO 26262 provides the approach to be used to design and work with that component.

What is the difference between safety and functional safety? A system is safe when it is intrinsically safe, which means that a failure condition can never take place.
A system is functionally safe if it is not safe by design, but a component is added to make it safe. The traffic light is an example of a functional-safety component: two crossing roads are not intrinsically safe, but the presence of such an element guarantees a layer of safety. Functional-safety logic must be implemented to guarantee redundancy and avoid unexpected behaviours; of course, this is an addition of details (not functionally needed), hence an addition of cost. Such an investment is justified only when the system is really critical (air conditioning does not need any of the additional costs, since human life is not in danger).

ASIL levels: from D (most critical) to A (least critical). The steering lock is classified as D, whereas the air conditioning as A.

How does ISO 26262 support the designer? By means of the process. The process describes the recipe and has the V-shaped form. How to read the tables:
• o means the method is optional.
• + means the method is recommended.
• ++ means the method is mandatory.
The same method can require a different level of implementation depending on the ASIL level.

Testing

...the approach for software faults is to avoid bugs, whereas the approach for hardware faults is to manage the unexpected condition via software.

Hardware testing

The standard prescribes a list of mechanisms for hardware error detection at the software architectural level. Basically, we want the software to be able to detect and manage hardware problems. How is that possible? By adding redundant functionalities that can cope with the hardware problems. Example: a pressure sensor with a certain expected output range. We must make sure that what we are reading is correct; we cannot use a wrong value. One of the prescriptions is that the range must always be verified and, if the value is out of range, an error must be raised. The added code is a redundant task that checks whether the input is inside the expected range. Can this test detect offset errors? With this code, no; but we can implement a different check so that this error is also prevented. Another possible strategy consists in acquiring two signals coming from two sensors (in case of disagreement an error must be raised).

[Slide: "Countermeasures against hw faults are implemented in the item sw". Table 4, Mechanisms for error detection at the software architectural level: 1a range checks of input and output data; 1b plausibility check; 1c detection of data errors (e.g. error-detecting codes, multiple data storage); external monitoring facility (e.g. an ASIC or another software element performing a watchdog function). Examples from the slide:
    Specification: input x shall belong to [0.5, 4.5]. Range check:        if (x < 0.5 || x > 4.5) error();
    Specification: whenever x is 1 then y shall be 0.  Plausibility check: if (x == 1 && y == 1) error(); ]

Those were all hardware problems managed and detected by software. But what about software errors? Software faults are addressed through software quality measures. How do we prevent such errors?
1. Process control: documentation and declaration of what you are doing.
2. Methods: use of a subset of the programming language (MISRA), and model-based software design.
3. Test: extensively check whether the software we developed does what we expect and what we declared.

Software testing

We have a certain piece of software and we know what it should do; let's check whether it actually does what it promises.

V development process: after the definition of all the requirements we design the software, intended as the set of all the needed modules; then we implement the code for each software brick (unit). The developer is responsible for delivering software units. Why split into units? Because splitting into multiple units keeps the complexity low; moreover, we can assign each unit to a different developer group, so that the system can be developed in parallel. After the implementation we start with unit testing: each unit must be tested individually. At this stage we do not care about the integration between different units; they are considered independent elements. If all the units tested individually work properly, then we perform integration testing, which means testing that the cooperation between different entities does not create problems. If the integration is correct, the system must be tested in a "real" environment, which accounts for temperature variations, noise, etc. The horizontal arrows are there because, in the presence of problems, that is the point from which the process must be restarted.

Unit testing

Each software unit must be tested singly, irrespective of the others. For each unit we must define a set of inputs to be used to test the module. The guidelines provide a procedure to follow. Software testing is also used to check the quality of the code. In particular, three parameters/coefficients are used:
1. Statement coverage: the number of statements that have been covered by the test out of the overall number of statements.
2. Branch coverage: the percentage of branches that have been executed over the total number of branches.
3. MC/DC (Modified Condition/Decision Coverage): the percentage of Boolean conditions that have been evaluated out of all the possibilities.

[Slides: Table 11, Methods for deriving test cases for software unit testing: 1a analysis of requirements; 1b generation and analysis of equivalence classes (a representative test value is selected for each class of inputs/outputs); 1c analysis of boundary values (values approaching and crossing the boundaries, and out-of-range values); 1d error guessing (based on data collected through a "lessons learned" process and expert judgment). Table 12, Structural coverage metrics at the software unit level: statement coverage, branch coverage, MC/DC. Table 10, Methods for software unit testing: requirements-based test; interface test; fault-injection test (e.g. corrupting values of variables, introducing code mutations, or corrupting values of CPU registers); resource-usage test (properly evaluated only on the target hardware or on an emulator supporting it); back-to-back comparison test between model and code, if applicable (model and code are stimulated in the same way and the results compared). Each method is recommended or highly recommended depending on the ASIL level.]

Requirements-based testing is composed of:
• Equivalence class partitioning
• Boundary value analysis
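Going back to the three coverage metrics listed above, a small invented unit makes them concrete: full statement coverage requires executing every statement at least once, full branch coverage requires taking both outcomes of the decision, and MC/DC additionally requires showing that each Boolean condition can independently change the outcome of the decision.

    #include <stdio.h>

    /* Invented example unit, only to illustrate the coverage metrics. */
    int select_mode(int sensor_ok, int speed, int limit)
    {
        int mode;                           /* statement coverage: every statement executed at least once */

        if (sensor_ok && (speed > limit)) { /* branch coverage: take both the true and the false branch   */
            mode = 2;                       /* MC/DC: each condition (sensor_ok, speed > limit) must be    */
        } else {                            /* shown to flip the decision while the other is held fixed    */
            mode = 1;
        }
        return mode;
    }

    int main(void)
    {
        /* A possible MC/DC-style test set for the decision (limit fixed at 50): */
        printf("%d\n", select_mode(1, 60, 50)); /* decision true            -> 2 */
        printf("%d\n", select_mode(0, 60, 50)); /* sensor_ok flips it       -> 1 */
        printf("%d\n", select_mode(1, 40, 50)); /* speed > limit flips it   -> 1 */
        return 0;
    }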