
Control Theory: Understanding the Science of Modifying Complex Systems, Study notes of Advanced Control Systems

Control Systems Engineering, Systems Engineering, Automatic Control Systems, Mathematical Modeling

Control theory is a branch of system theory concerned with changing the behavior of complex systems through external actions. This mathematically oriented scientific discipline offers principles applicable to fields ranging from engineering and physics to economics and the social sciences. These notes survey the history and applications of control theory, including feedback control, adaptive control, and optimization.

What you will learn

  • How does feedback control differ from adaptive control in Control Theory?
  • What is Control Theory and what are its key applications?
  • What role does optimization play in Control Theory?

Typology: Study notes

2021/2022, uploaded on 03/31/2022 by dyanabel
Partial preview of the text

The Place of Control Systems in Attachment Theory

One of Bowlby's key insights was that many of Freud's best ideas about close relationships and the importance of early experience were logically independent of the drive-reduction theory of motivation that Freud used to explain them. In order to preserve these insights, Bowlby looked for a scientifically defensible alternative to Freud's drive-reduction theory. Freud viewed infants as clingy and dependent, interested in drive reduction rather than in the environment. Ethological observations present a very different view.

The notion that human infants are competent, inquisitive, and actively engaged in mastering their environments was also familiar to Bowlby from Piaget's detailed observations of his own three children, presented in The Origin of Intelligence in Infants. One of Bowlby's key insights was that the newly emerging field of control systems theory offered a way of explaining infants' exploration, monitoring of access to attachment figures, and awareness of the environment. It was a scientifically defensible alternative to Freud's drive-reduction theory of motivation: it placed the emphasis on adaptation to the real world rather than to drive states, and it emphasized actual experience rather than intra-psychic events as influences on development and individual differences. Note that the first step toward this alternative motivation model was reformulating the infant-mother (and, implicitly, adult-adult) bond in terms of the secure base phenomenon. Without the secure base concept, we have no control-systems alternative to Freud's drive theory. Thus it is logically necessary, at every turn, to keep the secure base formulation at the center of attachment theory. See Waters & Cummings, Child Development, June 2000, for elaboration on this point.

The following material was compiled from articles on control theory and optimization available online at Britannica.com. E.W.

Control Theory

General Background

As long as human culture has existed, control has always meant some kind of power over man's environment. Cuneiform fragments suggest that the control of irrigation systems in Mesopotamia was a well-developed art at least by the 20th century BC. There were some ingenious control devices in the Greco-Roman culture, the details of which have been preserved. Methods for the automatic operation of windmills go back at least to the Middle Ages. Large-scale implementation of the idea of control, however, was impossible without a high level of technological sophistication, and it is probably no accident that the principles of modern control started evolving only in the 19th century, concurrently with the Industrial Revolution. A serious scientific study of this field began only after World War II and is now a major aspect of what has come to be called the second industrial revolution.

Although control is sometimes equated with the notion of feedback control (which involves the transmission and return of information) -- an isolated engineering invention, not a scientific discipline -- modern usage tends to favour a rather wide meaning for the term: for instance, control and regulation of machines, muscular coordination and metabolism in biological organisms, and prosthetic devices, as well as broad aspects of coordinated activity in the social sphere, such as optimization of business operations, control of economic activity by government policies, and even control of political decisions by democratic processes. Scientifically speaking, modern control should be viewed as that branch of system theory concerned with changing the behaviour of a given complex system by external actions.
(For aspects of system theory related to information, see below.) If physics is the science of understanding the physical environment, then control should be viewed as the science of modifying that environment, in the physical, biological, or even social sense. Much more than even physics, control is a mathematically oriented science. Control principles are always expressed in mathematical form and are potentially applicable to any concrete situation. At the same time, it must be emphasized that success in the use of the abstract principles of control depends in roughly equal measure on the status of basic scientific knowledge in the specific field of application, be it engineering, physics, astronomy, biology, medicine, econometrics, or any of the social sciences. This fact should be kept in mind to avoid confusion between the basic ideas of control (for instance, controllability) and certain spectacular applications of the moment in a narrow area (for instance, manned lunar travel).

Examples of modern control systems

To clarify the critical distinction between control principles and their embodiment in a real machine or system, the following common examples of control may be helpful. There are several broad classes of control systems, of which some are mentioned below.

Machines that cannot function without (feedback) control

Many of the basic devices of contemporary technology must be manufactured in such a way that they cannot be used for the intended task without modification by means of control external to the device. In other words, control is introduced after the device has been built; the same effect cannot be brought about (in practice, and sometimes even in theory) by an intrinsic modification of the characteristics of the device. The best-known examples are the vacuum-tube or transistor amplifiers used in high-fidelity sound systems.
Vacuum tubes or transistors, when used alone, introduce intolerable distortion, but when they are placed inside a feedback control system any desired degree of fidelity can be achieved. A famous classical case is that of powered flight: early pioneers failed not because of their ignorance of the laws of aerodynamics but because they did not realize the need for control and were unaware of the basic principles of stabilizing an inherently unstable device by means of control. Jet aircraft cannot be operated without automatic control to aid the pilot, and control is equally critical for helicopters. The accuracy of inertial navigation equipment (the modern space compass) cannot be improved indefinitely because of basic mechanical limitations, but these limitations can be reduced by several orders of magnitude by computer-directed statistical filtering, which is a variant of feedback control.

Robots

On the most advanced level, the task of control science is the creation of robots. This is a collective term for devices exhibiting animal-like purposeful behaviour under the general command of (but without direct help from) man. Industrial manufacturing robots are already fairly common, but real breakthroughs in this field cannot be anticipated until there are fundamental scientific advances on problems related to pattern recognition and the mathematical structuring of brain processes.

Control Systems

A control system is a means by which a variable quantity, or set of variable quantities, is made to conform to a prescribed norm. It either holds the values of the controlled quantities constant or causes them to vary in a prescribed way. A control system may be operated by electricity, by mechanical means, by fluid pressure (liquid or gas), or by a combination of means.
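The amplifier example above can be made quantitative with the classical negative-feedback gain formula. A minimal sketch; the specific gain figures and feedback fraction below are illustrative assumptions, not values from the text:

```python
def closed_loop_gain(a, beta):
    """Closed-loop gain of an amplifier with open-loop gain `a` wrapped in
    negative feedback returning a fraction `beta` of the output:
    G = a / (1 + a*beta). For large a, G is close to 1/beta, nearly
    independent of the raw device gain."""
    return a / (1.0 + a * beta)

# Halving the raw tube/transistor gain barely changes the feedback amplifier:
g_nominal = closed_loop_gain(100_000.0, 0.01)   # open-loop gain 100,000
g_degraded = closed_loop_gain(50_000.0, 0.01)   # open-loop gain down 50%
relative_change = abs(g_nominal - g_degraded) / g_nominal
```

A 50 percent drop in the device's own (nonlinear, drifting) gain moves the closed-loop gain by only about a tenth of a percent, which is why feedback can extract high-fidelity behaviour from distorting components.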
When a computer is involved in the control circuit, it is usually more [...]

Information of type A means that the effect of any potential control action applied to the system is precisely known under all possible environmental circumstances. The choice of one or a few appropriate control actions, among the many possibilities that may be available, is then based on information of type B; and this choice, as stated before, is called optimization. The task of control theory is to study the mathematical quantification of these two basic problems and then to deduce applied-mathematical methods whereby a concrete answer to optimization can be obtained.

Control theory does not deal with physical reality but only with its mathematical description (mathematical models). The knowledge embodied in control theory is always expressed with respect to certain classes of models -- for instance, linear systems with constant coefficients, which will be treated in detail below. Thus control theory is applicable to any concrete situation (e.g., physics, biology, economics) whenever that situation can be described, with high precision, by a model that belongs to a class for which the theory has already been developed. The limitations of the theory are not logical but depend only on the agreement between available models and the actual behaviour of the system to be controlled. Similar comments can be made about the mathematical representation of the criteria and disturbances.

Once the appropriate control action has been deduced by mathematical methods from the information mentioned above, the implementation of control becomes a technological task, best treated under the various specialized fields of engineering. The detailed manner in which a chemical plant is controlled may be quite different from that of an automobile factory, but the essential principles will be the same. Hence further discussion of the solution of the control problem will be limited here to the mathematical level.
To obtain a solution in this sense, it is convenient (but not absolutely necessary) to describe the system to be controlled, which is called the plant, in terms of its internal dynamical state. By this is meant a list of numbers (called the state vector) that expresses in quantitative form the effect of all external influences on the plant before the present moment, so that the future evolution of the plant can be exactly given from knowledge of the present state and the future inputs. This situation implies, in an intuitively obvious way, that the control action at a given time can be specified as some function of the state at that time. Such a function of the state, which determines the control action that is to be taken at any instant, is called a control law. This is a more general concept than the earlier idea of feedback; in fact, a control law can incorporate both the feedback and feed-forward methods of control.

In developing models to represent the control problem, it is unrealistic to assume that every component of the state vector can be measured exactly and instantaneously. Consequently, in most cases the control problem has to be broadened to include the further problem of state determination, which may be viewed as the central task in statistical prediction and filtering theory. Thus, in principle, any control problem can be solved in two steps: (1) build an optimal filter (the so-called Kalman filter) to determine the best estimate of the present state vector; (2) determine an optimal control law and mechanize it by substituting into it the estimate of the state vector obtained in step 1. In practice, the two steps are implemented by a single unit of hardware, called the controller, which may be viewed as a special-purpose computer. The theoretical formulation given here can be shown to include all other previous methods as special cases; the only difference is in the engineering details of the controller.
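The two-step scheme can be sketched for the simplest possible plant, a single scalar state. Everything here -- the model x[k+1] = a*x[k] + b*u[k] + noise, the noise variances, and the fixed proportional gain standing in for an optimal control law -- is an illustrative assumption, not a design from the text:

```python
import random

def run_controller(steps=200, a=1.0, b=1.0, q=0.01, r=0.1, gain=0.8, seed=0):
    """Estimate the state with a scalar Kalman filter (step 1), then apply a
    control law to the estimate, not to the unmeasurable true state (step 2)."""
    rng = random.Random(seed)
    x = 5.0                # true state, hidden from the controller
    x_hat, p = 0.0, 1.0    # state estimate and its error variance
    u = 0.0
    for _ in range(steps):
        # Step 1: Kalman filter. Predict from the model and last control...
        x_hat = a * x_hat + b * u
        p = a * a * p + q
        # ...then correct with a noisy measurement y = x + v.
        y = x + rng.gauss(0.0, r ** 0.5)
        k = p / (p + r)                  # Kalman gain
        x_hat = x_hat + k * (y - x_hat)
        p = (1.0 - k) * p
        # Step 2: control law, a function of the estimated state.
        u = -gain * x_hat
        # Plant evolves under that control plus process noise.
        x = a * x + b * u + rng.gauss(0.0, q ** 0.5)
    return x

final_state = run_controller()
```

Starting five units from the origin, the controller regulates the true state to a small neighbourhood of zero even though it only ever sees noisy measurements; in hardware the filter and the law would live in one controller unit, as the text notes.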
The mathematical solution of a control problem may not always exist. The determination of rigorous existence conditions, beginning in the late 1950s, has had an important effect on the evolution of modern control, from both the theoretical and the applied point of view. Most important is controllability, which expresses the fact that some kind of control is possible. If this condition is satisfied, methods of optimization can pick out the right kind of control using information of type B.

Optimization

Optimization is a field of applied mathematics consisting of a collection of principles and methods used for the solution of quantitative problems in many disciplines: physics, biology, engineering, economics, business, and others. This mathematical area grew from the recognition that problems under consideration in manifestly different fields could be posed theoretically in such a way that a central store of ideas and methods could be used in obtaining solutions for all of them.

A typical optimization problem may be described in the following way. There is a system -- such as a physical machine, a set of biological organisms, or a business organization -- whose behaviour is determined by several specified factors. The operator of the system has as a goal the optimization of the system's performance. That performance is determined at least in part by the levels of the factors over which the operator has control; it may also be affected, however, by other factors over which there is no control. The operator seeks the levels of the controllable factors that will optimize, as far as possible, the performance of the system. For example, in the case of a banking system, the operator is the governing body of the central bank; the inputs over which there is control are interest rates and the money supply; and the performance of the system is described by economic indicators of the economic and political unit in which the banking system operates.
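The controllability condition mentioned above has a standard concrete test, the Kalman rank condition: a linear system dx/dt = Ax + Bu with n states is controllable exactly when the matrix [B, AB, ..., A^(n-1)B] has rank n. A minimal sketch for n = 2 with a single input; the double-integrator example matrices are illustrative assumptions:

```python
def controllable_2x2(A, B):
    """Kalman rank condition for a 2-state, single-input system: the pair
    (A, B) is controllable iff the 2x2 matrix [B, A@B] has rank 2,
    i.e. a nonzero determinant."""
    ab = [A[0][0] * B[0] + A[0][1] * B[1],    # first component of A@B
          A[1][0] * B[0] + A[1][1] * B[1]]    # second component of A@B
    det = B[0] * ab[1] - B[1] * ab[0]
    return abs(det) > 1e-12

# Double integrator: state = (position, velocity), dx/dt = (velocity, input).
A = [[0.0, 1.0],
     [0.0, 0.0]]
force_input = [0.0, 1.0]      # input drives the velocity -> controllable
position_input = [1.0, 0.0]   # input never reaches the velocity -> not
```

Forcing only the velocity still steers both states (acceleration integrates into position), while forcing only the position leaves the velocity forever beyond reach; the rank test detects both cases without simulating anything.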
The first step in the application of optimization theory to a practical problem is the identification of the relevant theoretical components. This is often the most difficult part of the analysis, requiring a thorough understanding of the operation of the system and the ability to describe that operation in precise mathematical terms. The main theoretical components are the system, its inputs and outputs, and its rules of operation. The system has a set of possible states; at each moment in its life it is in one of these states, and it changes from state to state according to certain rules determined by inputs and outputs. There is a numerical quantity called the performance measure, which the operator seeks to maximize or minimize. It is a mathematical function whose value is determined by the history of the system. The operator is able to influence the value of the performance measure through a schedule of inputs. Finally, the constraints of the system must be identified; these are the restrictions on the inputs that are beyond the control of the operator.

The simplest type of optimization problem may be analyzed using elementary differential calculus. Suppose that the system has a single input, represented by a numerical variable x, and suppose that the performance measure can be expressed as a function y = f(x). The constraints are expressed as restrictions on the values assumed by the input x; for example, the nature of the problem under consideration may require that x be positive. The optimization problem then takes the following precise mathematical form: for which value of x, satisfying the indicated constraints, is the function y = f(x) at its maximum (or minimum) value?
From calculus, the extreme values of a function y = f(x) with a sufficiently smooth graph can be located only at points of two kinds: (1) points where the tangent to the curve is horizontal (critical points), or (2) endpoints of an interval, if x is restricted by the constraints to such an interval. Thus the problem of finding the largest or smallest value of a function over the indicated interval is reduced to the simpler problem of finding the largest and smallest value among a finite set of candidates, and this can be done by direct computation of the value of f(x) at those points.

The theory of linear programming was developed for the purpose of solving optimization problems involving two or more input variables. This theory uses only the elementary ideas of linear algebra, and it can be applied only to problems for which the performance measure is a linear function of the inputs. Nevertheless, this is sufficiently general to include applications to important problems in economics and engineering.

Control of machines

In many cases the operation of a machine to perform a task can be directed by a human (manual control), but it may be much more convenient to connect the machine directly to a measuring instrument (automatic control); e.g., a thermostat (a temperature-operated switch) may be used to turn a refrigerator, oven, air-conditioning unit, or heating system on or off. The dimming of automobile headlights, the setting of a camera's diaphragm, and the correct exposure for colour prints may be accomplished automatically by connecting a photocell or other light-responsive device directly to the machine in question. Related examples are the remote control of position (servomechanisms) and the speed control of motors (governors). It is emphasized that in such cases the machine could function by itself, but a more useful system is obtained by letting the measuring device communicate with the machine in either a feed-forward or feed-back fashion.
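The finite candidate-set procedure for single-variable optimization is easy to mechanize. A minimal sketch, where the function f(x) = x^3 - 3x and the interval [0, 2] are illustrative assumptions; its critical points, where f'(x) = 3x^2 - 3 = 0, are x = 1 and x = -1:

```python
def extremes_on_interval(f, critical_points, a, b):
    """Largest and smallest values of f on [a, b], given the critical points
    where f'(x) = 0: compare f over the finite candidate set consisting of
    the endpoints and the critical points inside the interval."""
    candidates = [a, b] + [x for x in critical_points if a <= x <= b]
    values = {x: f(x) for x in candidates}
    x_max = max(values, key=values.get)
    x_min = min(values, key=values.get)
    return (x_min, values[x_min]), (x_max, values[x_max])

f = lambda x: x**3 - 3*x                # f'(x) = 3x^2 - 3
minimum, maximum = extremes_on_interval(f, [-1.0, 1.0], 0.0, 2.0)
```

Only three candidates survive (x = 0 and x = 2 as endpoints, x = 1 as the interior critical point; x = -1 lies outside the interval), so the continuous problem collapses to three function evaluations.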
Control of large systems

More advanced and more critical applications of control concern large and complex systems whose very existence depends on coordinated operation using numerous individual control devices (usually directed by a computer). The launch of a spaceship, the 24-hour operation of a power plant, oil refinery, or chemical factory, and the control of air traffic near a large airport are well-known manifestations of this technological trend. An essential aspect of these systems is that human participation in the control task, although theoretically possible, would be wholly impractical; it is the feasibility of applying automatic control that has given birth to these systems.

Biocontrol

The advancement of technology (artificial biology) and the deeper understanding of the processes of biology (natural technology) have given reason to hope that the two can be combined; man-made devices should be substituted for some natural functions. Examples are the artificial heart or kidney, nerve-controlled prosthetics, and the control of brain functions by external electrical stimuli. Although definitely no longer in the science-fiction stage, progress in solving such problems has been slow, not only because of the need for highly advanced technology but also because of the lack of fundamental knowledge about the details of the control principles employed in the biological world.

Mathematical Formulation of Control Theory

Control theory is a field of applied mathematics that is relevant to the control of certain physical processes and systems. Although control theory has deep connections with classical areas of mathematics, such as the calculus of variations and the theory of differential equations, it did not become a field in its own right until the late 1950s and early 1960s.
After World War II, problems arising in engineering and economics were recognized as variants of problems in differential equations and in the calculus of variations, though they were not covered by existing theories. At first, special modifications of classical techniques and theories were devised to solve individual problems. It was then recognized that these seemingly diverse problems all had the same mathematical structure, and control theory emerged.

The systems, or processes, to which control theory is applied have the following structure. The state of the system at each instant of time t can be described by n quantities, labeled x1(t), x2(t), ..., xn(t). For example, the system may be a mixture of n chemical substances undergoing a reaction; the quantities x1(t), ..., xn(t) would then represent the concentrations of the n substances at time t. At each instant of time t, the rates of change of the quantities x1(t), ..., xn(t) depend upon the quantities themselves and upon the values of k so-called control variables, u1(t), ..., uk(t), according to a known law. The values of the control variables are chosen to achieve some objective. The nature of the physical system usually imposes limitations on the allowable values of the control variables. In the chemical-reaction example, the kinetic equations furnish the law governing the rate of change of the concentrations, and the control variables could be pressure and temperature, which must lie between fixed maximum and minimum values at each time t.

Systems such as those just described are called control systems. The principal problems associated with control systems are those of controllability, observability, stabilizability, and optimal control. The problem of controllability is the following: given that the system is initially in state a1, a2, ..., an, can the controls u1(t), ..., uk(t) be chosen so that the system will reach a preassigned state b1, ..., bn in finite time? The observability problem is to obtain information about the state of the system at some time t when one cannot measure the state itself but only a function of the state. The stabilizability problem is to choose the control variables u1(t), ..., uk(t) at each instant of time t so that the state x1(t), ..., xn(t) of the system gets closer and closer to a preassigned state as the time of operation of the system increases.
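The stabilizability problem can be illustrated with the simplest control system of the kind just formulated: one state (n = 1), one control variable (k = 1), and dynamics dx/dt = a*x + b*u. The Euler time step, the proportional control law, and the target value below are illustrative assumptions:

```python
def simulate(x0, control_law, a=0.0, b=1.0, dt=0.01, steps=2000):
    """Euler integration of a one-state control system dx/dt = a*x + b*u,
    where the control u is chosen at each instant by the given control law."""
    x = x0
    for _ in range(steps):
        u = control_law(x)           # control depends only on the current state
        x += dt * (a * x + b * u)    # plant evolves under the chosen control
    return x

# Proportional law pushing the state toward a preassigned target state:
# the deviation x - target decays exponentially toward zero.
target = 2.0
x_final = simulate(x0=10.0, control_law=lambda x: -(x - target))
```

Because the control law is a function of the state, the same code stabilizes the system from any initial state; here the state, started at 10, ends within numerical noise of the preassigned value 2 after twenty time units.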