Georgia Institute of Technology
School of Aerospace Engineering

Homework Project 8
Dr. W. M. Haddad
AE 6580: Aerospace Nonlinear Control

Adaptive Control for Nonlinear Uncertain Systems

• Work all problems
• Show all work and give the computer code you developed
• You may consult books but NOT people
• Neatness counts for points
• Due December 15, 12:00 pm
• The Georgia Tech honor code applies

1. Introduction

Unavoidable discrepancies between system models and real-world systems can result in degradation of control-system performance, including instability. Thus, it is not surprising that one of the fundamental problems in feedback control design is the ability of the control system to guarantee robustness with respect to system uncertainties in the design model. To this end, adaptive control and robust control theory have been developed to address the problem of system uncertainty in control-system design. The fundamental differences between adaptive control design and robust control theory can be traced to the modeling and treatment of system uncertainties as well as to the controller architectures. In particular, adaptive control is based on constant, linearly parameterized system uncertainty models of known structure but unknown variation, while robust control is predicated on structured and/or unstructured linear or nonlinear (possibly time-varying) operator uncertainty models of bounded variation. Hence, for systems with constant real parameter uncertainty, robust controllers unnecessarily sacrifice performance, whereas adaptive feedback controllers can tolerate far greater levels of system uncertainty while improving system performance. Furthermore, in contrast to fixed-gain robust controllers, which maintain specified constants within the feedback control law to sustain robust performance, adaptive controllers directly or indirectly adjust feedback gains to maintain closed-loop stability and improve performance in the face of system uncertainties. Specifically, indirect adaptive controllers utilize parameter update laws to identify unknown system parameters and adjust feedback gains to account for system variation, while direct adaptive controllers directly adjust the controller gains in response to plant variations.

In this project you are asked to develop a direct adaptive control framework for adaptive stabilization, disturbance rejection, and command following of multivariable nonlinear uncertain systems with exogenous disturbances. In particular, you will develop a Lyapunov-based direct adaptive control framework that requires a matching condition on the system

[...]

... $\hat J : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^{m \times m}$ and $t \ge 0$, in place of $G(x)\hat J(x)\Psi = J(x)$, it follows that the adaptive feedback control law
\[
u(t) = \hat G(t, x(t)) K(t) F(t, x(t)) + \hat J(t, x(t)) \Phi(t) w(t), \tag{8}
\]
with the update laws
\[
\dot K(t) = -\tfrac{1}{2} Q_1 \hat G^{\mathrm T}(t, x(t)) G^{\mathrm T}(t, x(t)) V_s'^{\mathrm T}(x(t)) F^{\mathrm T}(t, x(t)) Y, \quad K(0) = K_0, \tag{9}
\]
\[
\dot \Phi(t) = -\tfrac{1}{2} Q_2 \hat J^{\mathrm T}(t, x(t)) G^{\mathrm T}(t, x(t)) V_s'^{\mathrm T}(x(t)) w^{\mathrm T}(t) Z, \quad \Phi(0) = \Phi_0, \tag{10}
\]
where $V_s'(x)$ satisfies (3) with $f_c(x) = f(t, x) + G(t, x)\hat G(t, x) K_g F(t, x)$, guarantees that the solution $(x(t), K(t), \Phi(t)) \equiv (0, K_g, -\Psi)$ of the closed-loop system (7)–(10) is Lyapunov stable and $x(t) \to 0$ as $t \to \infty$ for all $x_0 \in \mathbb{R}^n$.
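The update laws (9) and (10) are matrix differential equations in the adaptive gains themselves, so in simulation they are simply integrated alongside the plant state. The sketch below is a minimal Python illustration of that idea, using a forward-Euler step and assuming, for simplicity, that $\hat G$, $\hat J$, and $F$ depend on $x$ only; the function names, integrator, and calling convention are assumptions for illustration, not part of the problem statement or the theorem.

```python
import numpy as np

def simulate_adaptive_loop(f, G, J, F, G_hat, J_hat, Vs_prime, w,
                           x0, K0, Phi0, Q1, Q2, Y, Z, dt=1e-3, T=10.0):
    """Forward-Euler propagation of the adaptive closed loop (8)-(10).

    f, G, J      : plant drift, input matrix, and disturbance matrix, as functions of x
    F            : regressor vector F(x) used in the control law (8)
    G_hat, J_hat : designer's estimates of the input and disturbance channels
    Vs_prime     : gradient row vector V_s'(x) of the Lyapunov function, shape (1, n)
    w            : disturbance signal w(t), returning a vector of length d
    """
    x, K, Phi = np.array(x0, float), np.array(K0, float), np.array(Phi0, float)
    hist = {"t": [], "x": [], "u": []}
    for i in range(int(T / dt)):
        t = i * dt
        Fx, wt = F(x), w(t)
        u = G_hat(x) @ K @ Fx + J_hat(x) @ Phi @ wt                       # control law (8)
        core = G(x).T @ Vs_prime(x).T                                     # G^T(x) V_s'^T(x), shape (m, 1)
        dK   = -0.5 * Q1 @ G_hat(x).T @ core @ Fx.reshape(1, -1) @ Y      # update law (9)
        dPhi = -0.5 * Q2 @ J_hat(x).T @ core @ wt.reshape(1, -1) @ Z      # update law (10)
        dx = f(x) + G(x) @ u + J(x) @ wt                                  # plant dynamics (1)
        x, K, Phi = x + dt * dx, K + dt * dK, Phi + dt * dPhi
        hist["t"].append(t); hist["x"].append(x.copy()); hist["u"].append(u.copy())
    return hist
```

A fixed-step Euler integrator is used only to keep the sketch short; in practice a variable-step or stiff ODE solver would be a more reliable choice.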
Observation 2.3. It follows from Observation 2.2 that Theorem 2.1 can also be used to construct adaptive tracking controllers for nonlinear uncertain systems. Specifically, let $r_d(t) \in \mathbb{R}^n$, $t \ge 0$, denote a command input and define the error state $e(t) \triangleq x(t) - r_d(t)$. In this case, the error dynamics are given by
\[
\dot e(t) = f_t(t, e(t)) + G(t, e(t) + r_d(t)) u(t) + J_t(t, e(t)) w_t(t), \quad e(0) = e_0, \quad t \ge 0, \tag{11}
\]
where $f_t(t, e(t)) = f(e(t) + r_d(t)) - n(t)$, with $f(r_d(t)) = n(t)$, and $J_t(t, e(t) + r_d(t)) w_t(t) = n(t) - \dot r_d(t) + J(t, e(t) + r_d(t)) w(t)$. Now, the adaptive tracking control law (8)–(10), with $x(t)$ replaced by $e(t)$, guarantees that $e(t) \to 0$ as $t \to \infty$ for all $e_0 \in \mathbb{R}^n$.

It is important to note that the adaptive control law (4)–(6) does not require explicit knowledge of the gain matrix $K_g$, the disturbance matching matrix $\Psi$, or the disturbance weighting matrix function $J(x)$, even though Theorem 2.1 requires the existence of $K_g$, $F(x)$, $\hat G(x)$, $\hat J(x)$, and $\Psi$ such that the zero solution $x(t) \equiv 0$ to (2) is globally asymptotically stable and the matching condition $G(x)\hat J(x)\Psi = J(x)$ holds. Furthermore, no specific structure on the nonlinear dynamics $f(x)$ is required to apply Theorem 2.1; all that is required is the existence of $F(x)$ such that the zero solution $x(t) \equiv 0$ to (2) is asymptotically stable so that (3) holds. However, if (1) is in normal form, then we can always construct a function $F : \mathbb{R}^n \to \mathbb{R}^s$, with $F(0) = 0$, such that the zero solution $x(t) \equiv 0$ to (2) is globally asymptotically stable without requiring knowledge of the system dynamics. These facts are exploited below to construct nonlinear adaptive feedback controllers for nonlinear uncertain systems.

For simplicity of exposition in the ensuing discussion, assume that $J(x) = D$, where $D \in \mathbb{R}^{n \times d}$ is a disturbance weighting matrix with unknown entries. Assume the nonlinear system (1) is in normal form, that is,
\[
f(x) = \tilde A x + \tilde f_u(x), \quad G(x) = \begin{bmatrix} 0_{(n-m)\times m} \\ G_s(x) \end{bmatrix}, \quad J(x) = D = \begin{bmatrix} 0_{(n-m)\times d} \\ \hat D \end{bmatrix}, \tag{12}
\]
where $\tilde A = \begin{bmatrix} A_0 \\ 0_{m \times n} \end{bmatrix}$, $\tilde f_u(x) = \begin{bmatrix} 0_{(n-m)\times 1} \\ f_u(x) \end{bmatrix}$, $A_0 \in \mathbb{R}^{(n-m)\times n}$ is a known matrix of zeros and ones capturing the multivariable controllable canonical form representation, $f_u : \mathbb{R}^n \to \mathbb{R}^m$ is an unknown function satisfying $f_u(0) = 0$, $G_s : \mathbb{R}^n \to \mathbb{R}^{m \times m}$, and $\hat D \in \mathbb{R}^{m \times d}$. Here, assume that $f_u(x)$ is unknown and is parameterized as $f_u(x) = \Theta f_n(x)$, where $f_n : \mathbb{R}^n \to \mathbb{R}^q$ satisfies $f_n(0) = 0$ and $\Theta \in \mathbb{R}^{m \times q}$ is a matrix of uncertain constant parameters.

Note that $\hat J(x)$ and $\Psi$ in Theorem 2.1 can be taken as $\hat J(x) = G_s^{-1}(x)$ and $\Psi = \hat D$, so that $G(x)\hat J(x)\Psi = J(x) = D$ is satisfied. Next, to apply Theorem 2.1 to the uncertain system (1) with $f(x)$, $G(x)$, and $J(x)$ given by (12), let $K_g \in \mathbb{R}^{m \times s}$, where $s = q + r$, be given by
\[
K_g = [\,\Theta_n - \Theta, \ \Phi_n\,], \tag{13}
\]
where $\Theta_n \in \mathbb{R}^{m \times q}$ and $\Phi_n \in \mathbb{R}^{m \times r}$ are known matrices, and let
\[
F(x) = \begin{bmatrix} f_n(x) \\ \hat f_n(x) \end{bmatrix}, \tag{14}
\]
where $\hat f_n : \mathbb{R}^n \to \mathbb{R}^r$ is an arbitrary function satisfying $\hat f_n(0) = 0$. In this case, it follows that, with $\hat G(x) = G_s^{-1}(x)$,
\[
\begin{aligned}
f_c(x) &= f(x) + G(x)\hat G(x) K_g F(x) \\
&= \tilde A x + \tilde f_u(x) + \begin{bmatrix} 0_{(n-m)\times m} \\ G_s(x) \end{bmatrix} G_s^{-1}(x) \big[\Theta_n f_n(x) - \Theta f_n(x) + \Phi_n \hat f_n(x)\big] \\
&= \tilde A x + \begin{bmatrix} 0_{(n-m)\times 1} \\ \Theta_n f_n(x) + \Phi_n \hat f_n(x) \end{bmatrix}.
\end{aligned} \tag{15}
\]
Now, since $\Theta_n \in \mathbb{R}^{m \times q}$ and $\Phi_n \in \mathbb{R}^{m \times r}$ are arbitrary constant matrices and $\hat f_n : \mathbb{R}^n \to \mathbb{R}^r$ is an arbitrary function, we can always construct $K_g$ and $F(x)$, without knowledge of $f(x)$, such that the zero solution $x(t) \equiv 0$ to (2) can be made globally asymptotically stable. In particular, choosing $\Theta_n f_n(x) + \Phi_n \hat f_n(x) = \hat A x$, where $\hat A \in \mathbb{R}^{m \times n}$, it follows that (15) has the form $f_c(x) = A_c x$, where $A_c = [A_0^{\mathrm T}, \hat A^{\mathrm T}]^{\mathrm T}$ is in multivariable controllable canonical form.
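As a concrete instance of the construction (13)–(15), the sketch below assumes a single-input double-integrator normal form with regressor $f_n(x) = [x_1,\ x_1^3]^{\mathrm T}$, takes $\hat f_n(x) = x$, $\Theta_n = 0$, and $\Phi_n = \hat A$, and checks that the resulting $A_c = [A_0^{\mathrm T}, \hat A^{\mathrm T}]^{\mathrm T}$ is Hurwitz. All numerical choices (dimensions, pole locations, the particular $f_n$) are illustrative assumptions, not part of the assignment.

```python
import numpy as np

# Illustrative construction of F(x) and a stabilizing Ac for a single-input system
# (n = 2, m = 1) in normal form: x1' = x2, x2' = fu(x) + Gs(x) u, with the unknown
# part parameterized as fu(x) = Theta @ fn(x), fn(x) = [x1, x1**3] (q = 2).

A0 = np.array([[0.0, 1.0]])                 # known (n - m) x n block of zeros and ones

# Choose A_hat so that Ac = [A0; A_hat] is Hurwitz. In controllable canonical form the
# bottom row directly sets the characteristic polynomial s^2 + 2*zeta*wn*s + wn^2.
wn, zeta = 2.0, 0.7
A_hat = np.array([[-wn**2, -2.0 * zeta * wn]])
Ac = np.vstack([A0, A_hat])
assert np.all(np.linalg.eigvals(Ac).real < 0), "Ac must be asymptotically stable"

def fn(x):
    """Known regressor of the uncertain term: fu(x) = Theta @ fn(x), Theta unknown."""
    return np.array([x[0], x[0] ** 3])

def fhat_n(x):
    """Arbitrary extra regressor; taking the full state makes Phi_n @ fhat_n(x) = A_hat @ x."""
    return np.array([x[0], x[1]])

def F(x):
    """Stacked regressor F(x) of eq. (14); here s = q + r = 4."""
    return np.concatenate([fn(x), fhat_n(x)])

# With Theta_n = 0 and Phi_n = A_hat, eq. (15) reduces to fc(x) = Ac @ x. The ideal gain
# Kg = [Theta_n - Theta, Phi_n] depends on the unknown Theta, but the adaptive controller
# only needs Kg to exist, never its numerical value.
```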
Hence, choosing $\hat A$ such that $A_c$ is asymptotically stable, it follows from the converse Lyapunov theorem that there exists a positive-definite matrix $P$ satisfying the Lyapunov equation
\[
0 = A_c^{\mathrm T} P + P A_c + R, \tag{16}
\]
where $R$ is positive definite. In this case, with Lyapunov function $V_s(x) = x^{\mathrm T} P x$, the adaptive feedback controller (4) with update laws (5) and (6) or, equivalently,
\[
\dot K(t) = -Q_1 \hat G^{\mathrm T}(x(t)) G^{\mathrm T}(x(t)) P x(t) F^{\mathrm T}(x(t)) Y, \quad K(0) = K_0, \tag{17}
\]
\[
\dot \Phi(t) = -Q_2 \hat J^{\mathrm T}(x(t)) G^{\mathrm T}(x(t)) P x(t) w^{\mathrm T}(t) Z, \quad \Phi(0) = \Phi_0, \tag{18}
\]
guarantees global asymptotic stability of the nonlinear uncertain dynamical system (1), where $f(x)$, $G(x)$, and $J(x)$ are given by (12). As mentioned above, it is important to note that it is not necessary to utilize a feedback linearizing function $F(x)$ to produce a linear $f_c(x)$. However, when the system is in normal form, a feedback linearizing function $F(x)$ provides considerable simplification in constructing the $V_s'(x)$ needed to compute the update laws (5) and (6).

Next, consider the case where both $f(x)$ and $G(x)$ are uncertain. Specifically, assume that $G_s(x)$ is unknown and is parameterized as $G_s(x) = B_s G_n(x)$, where $G_n : \mathbb{R}^n \to \mathbb{R}^{m \times m}$ is known and satisfies $\det G_n(x) \neq 0$, $x \in \mathbb{R}^n$, and $B_s \in \mathbb{R}^{m \times m}$, with $\det B_s \neq 0$, is an unknown symmetric sign-definite matrix whose sign definiteness is known, that is, $B_s > 0$ or $B_s < 0$.
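Before turning to the problems, the following sketch shows how the specialization (16)–(18) might be set up numerically: $P$ is obtained from the Lyapunov equation (16) with SciPy, and the right-hand sides of the update laws (17) and (18) are evaluated for user-supplied plant and design functions. The helper names and calling convention are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_P(Ac, R=None):
    """Solve the Lyapunov equation (16), 0 = Ac^T P + P Ac + R, for P > 0 (Ac Hurwitz)."""
    R = np.eye(Ac.shape[0]) if R is None else R
    return solve_continuous_lyapunov(Ac.T, -R)

def gain_update_rhs(x, w_t, P, G, G_hat, J_hat, F, Q1, Q2, Y, Z):
    """Right-hand sides of the update laws (17) and (18) with Vs(x) = x^T P x."""
    core = G(x).T @ (P @ x)                              # m-vector G^T(x) P x
    dK   = -Q1 @ G_hat(x).T @ np.outer(core, F(x)) @ Y   # eq. (17)
    dPhi = -Q2 @ J_hat(x).T @ np.outer(core, w_t) @ Z    # eq. (18)
    return dK, dPhi
```

With $P$ in hand, these right-hand sides can replace (9) and (10) in the integration loop sketched after the theorem statement above.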
[...]

... $K(0) = [0, 0, 0]$, and $\Phi(0) = 0$, plot the phase portrait of the controlled and uncontrolled system, the state trajectories versus time, the control signal versus time, and the adaptive gain history versus time.

Problem 5. Consider the nonlinear dynamical system representing a controlled rigid spacecraft given by
\[
\dot x(t) = -I_b^{-1} X I_b x(t) + I_b^{-1} u(t), \quad x(0) = x_0, \quad t \ge 0, \tag{27}
\]
where $x = [x_1, x_2, x_3]^{\mathrm T}$ represents the angular velocities of the spacecraft with respect to the body-fixed frame, $I_b \in \mathbb{R}^{3 \times 3}$ is an unknown positive-definite inertia matrix of the spacecraft, $u = [u_1, u_2, u_3]^{\mathrm T}$ is a control vector with control inputs providing body-fixed torques about three mutually perpendicular axes defining the body-fixed frame of the spacecraft, and $X$ denotes the skew-symmetric matrix
\[
X \triangleq \begin{bmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{bmatrix}.
\]
Use Theorem 2.2 to design an adaptive feedback controller that guarantees $x(t) \to 0$ as $t \to \infty$. With
\[
I_b = \begin{bmatrix} 20 & 0 & 0.9 \\ 0 & 17 & 0 \\ 0.9 & 0 & 15 \end{bmatrix}
\]
and initial conditions $x(0) = [0.4, 0.2, -0.2]^{\mathrm T}$, plot the angular velocities versus time and the control signals versus time.

Problem 6. Consider the uncertain controlled Mathieu system given by
\[
\ddot z(t) + \mu(1 + 2\varepsilon \cos 2t) z(t) = b u(t), \quad z(0) = z_0, \quad \dot z(0) = \dot z_0, \quad t \ge 0, \tag{28}
\]
where $\mu$, $\varepsilon$, $b \in \mathbb{R}$ are unknown and $\mathrm{sign}\, b$ is known. Use Theorem 2.2 in conjunction with Observation 2.2 to design an adaptive feedback controller that guarantees $x(t) \to 0$ as $t \to \infty$, where $x = [z, \dot z]^{\mathrm T}$. With $\mu = 1$, $\varepsilon = 0.4$, $b = 3$, and initial conditions $x(0) = [1, 1]^{\mathrm T}$ and $K(0) = [0, 0, 0]$, plot the phase portrait of the controlled and uncontrolled system, the state trajectories versus time, the control signal versus time, and the adaptive gain history versus time.

Problem 7. Consider the spring-mass-damper uncertain system with nonlinear stiffness given by
\[
m \ddot x(t) + c \dot x(t) + k_1 x(t) + k_2 x^3(t) = b u(t) + \hat d w(t), \quad x(0) = x_0, \quad \dot x(0) = \dot x_0, \quad t \ge 0, \tag{29}
\]
where $m$, $c$, $k_1$, $k_2 \in \mathbb{R}$ are positive unknown constants, and $b$ is unknown but $\mathrm{sign}\, b$ is known. Let $r_d(t)$, $t \ge 0$, be a desired command signal and define the error state $\tilde e(t) \triangleq x(t) - r_d(t)$, so that the error dynamics are given by
\[
\begin{aligned}
m \ddot{\tilde e}(t) + c \dot{\tilde e}(t) + \big(k_1 + k_2(\tilde e^2(t) + 3 r_d(t)\tilde e(t) + 3 r_d^2(t))\big)\tilde e(t) ={}& b u(t) + \hat d w(t) - \big(m \ddot r_d(t) + c \dot r_d(t) \\
& + k_1 r_d(t) + k_2 r_d^3(t)\big), \qquad \tilde e(0) = \tilde e_0, \quad \dot{\tilde e}(0) = \dot{\tilde e}_0, \quad t \ge 0.
\end{aligned} \tag{30}
\]
Assume that the disturbance signal $w(t)$ is a sinusoid with unknown amplitude and phase, that is, $\hat d w(t) = \sqrt{A_1^2 + A_2^2}\,\sin(\omega t + \phi) = A_1 \sin \omega t + A_2 \cos \omega t$, where $\phi = \tan^{-1}(A_2/A_1)$ and $A_1$ and $A_2$ are unknown constants. Furthermore, let the desired trajectory be given by $r_d(t) = \tanh\!\big(\tfrac{t - 20}{5}\big)$, so that the position of the mass is moved from $-1$ to $1$ at $t = 20$ sec. Use Theorem 2.2 in conjunction with Observation 2.3 to design an adaptive tracking controller that guarantees $e(t) \to 0$ as $t \to \infty$, where $e = [e_1, e_2]^{\mathrm T}$. With $m = 1$, $c = 1$, $k_1 = 2$, $k_2 = 0.5$, $\hat d w(t) = 2\sin(\omega t + 1)$, $\omega = 2$, $b = 3$, and initial conditions $e(0) = [0, 0]^{\mathrm T}$, $K(0) = 0_{1 \times 5}$, and $\Phi(0) = 0_{1 \times 6}$, plot the actual position and the reference signal versus time, the control signal versus time, and the adaptive gain history versus time. (A numerical consistency check of the error dynamics (30) is sketched after Problem 9 below.)

Problem 8. Consider the nonlinear dynamic equations for a single-link manipulator with flexible joints and negligible damping, coupled through a gear train to a DC motor, given by
\[
I_1 \ddot q_1(t) + MgL \sin q_1(t) + k(q_1(t) - q_2(t)) = 0, \quad q_1(0) = q_{10}, \quad \dot q_1(0) = \dot q_{10}, \quad t \ge 0, \tag{31}
\]
\[
I_2 \ddot q_2(t) - k(q_1(t) - q_2(t)) = u(t), \quad q_2(0) = q_{20}, \quad \dot q_2(0) = \dot q_{20}, \tag{32}
\]
where $q_1$ and $q_2$ are angular positions, $I_1$ and $I_2$ are the mass moments of inertia of the link and the motor, respectively, $k$ is a spring constant, $M$ is the total mass of the link, $L$ is the distance from the joint axis to the link center of mass, $g$ is the gravitational constant, and $u$ is a control torque input. Defining the state variables
\[
x_1(t) \triangleq q_1(t), \quad x_2(t) \triangleq \dot q_1(t), \quad x_3(t) \triangleq -\frac{MgL}{I_1}\sin q_1(t) - \frac{k}{I_1}(q_1(t) - q_2(t)), \quad x_4(t) \triangleq -\frac{MgL}{I_1}\dot q_1(t)\cos q_1(t) - \frac{k}{I_1}(\dot q_1(t) - \dot q_2(t)),
\]
(31) and (32) can be written in the form of (1) with $x = [x_1, x_2, x_3, x_4]^{\mathrm T}$, $G(x) = [0_{1 \times 3}, \ \beta\delta]^{\mathrm T}$, $w(t) \equiv 0$, and
\[
f(x) = \begin{bmatrix} x_2 \\ x_3 \\ x_4 \\ -(\alpha \cos x_1 + \beta + \gamma)x_3 + \alpha(x_2^2 - \gamma)\sin x_1 \end{bmatrix},
\]
where $\alpha \triangleq \frac{MgL}{I_1}$, $\beta \triangleq \frac{k}{I_1}$, $\gamma \triangleq \frac{k}{I_2}$, $\delta \triangleq \frac{1}{I_2}$. Assume that $\alpha$, $\beta$, $\gamma$, and $\delta$ are unknown positive constants. Furthermore, the angular position $q_1(t)$ is required to track the angle $r_d(t) = \sin t$. Use Theorem 2.2 in conjunction with Observation 2.3 to design an adaptive tracking controller that guarantees $e_i(t) \to 0$ as $t \to \infty$, where $e_i(t) \triangleq \frac{d^{i-1}}{dt^{i-1}}(x_1(t) - r_d(t))$, $i = 1, \ldots, 4$. With $\alpha = 10$, $\beta = 2$, $\gamma = 4$, $\delta = 1$, and initial conditions $e(0) = 0_{4 \times 1}$, $K(0) = 0_{1 \times 8}$, and $\Phi(0) = 0_{1 \times 5}$, plot the actual position $q_1(t)$ and the reference signal versus time, the control signal versus time, and the adaptive gain history versus time.

The last set of problems does not involve Theorems 2.1 and 2.2.

Problem 9. Consider the controlled nonlinear uncertain second-order dynamical system given by
\[
M \ddot q(t) + C(q(t))\dot q(t) + K(q(t)) q(t) = u(t), \quad q(0) = q_0, \quad \dot q(0) = \dot q_0, \quad t \ge 0, \tag{33}
\]

[...]
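Returning to Problem 7, the sketch below is an optional numerical consistency check of the error dynamics (30): since $\tilde e = x - r_d$, the error acceleration predicted by (30) must equal $\ddot x - \ddot r_d$ computed from the original dynamics (29). The check uses the nominal parameter values given in Problem 7; the helper names and the check itself are illustrative assumptions, not part of the assignment.

```python
import numpy as np

# Nominal values from Problem 7.
m, c, k1, k2, b = 1.0, 1.0, 2.0, 0.5, 3.0
dhat_w = lambda t: 2.0 * np.sin(2.0 * t + 1.0)                      # d_hat * w(t)
rd     = lambda t: np.tanh((t - 20.0) / 5.0)                        # reference r_d(t)
rd_d   = lambda t: (1.0 - np.tanh((t - 20.0) / 5.0) ** 2) / 5.0     # dr_d/dt
rd_dd  = lambda t: -2.0 * np.tanh((t - 20.0) / 5.0) * (1.0 - np.tanh((t - 20.0) / 5.0) ** 2) / 25.0

def xddot(t, x, xdot, u):
    """Acceleration of the mass from the original dynamics (29)."""
    return (b * u + dhat_w(t) - c * xdot - k1 * x - k2 * x ** 3) / m

def eddot(t, e, edot, u):
    """Acceleration of the tracking error from the error dynamics (30)."""
    stiffness = k1 + k2 * (e ** 2 + 3.0 * rd(t) * e + 3.0 * rd(t) ** 2)
    forcing = m * rd_dd(t) + c * rd_d(t) + k1 * rd(t) + k2 * rd(t) ** 3
    return (b * u + dhat_w(t) - c * edot - stiffness * e - forcing) / m

# Consistency check: with e = x - r_d, equation (30) must reproduce xddot - rd_dd.
rng = np.random.default_rng(0)
for _ in range(5):
    t, x, xdot, u = rng.uniform(0.0, 40.0), rng.normal(), rng.normal(), rng.normal()
    e, edot = x - rd(t), xdot - rd_d(t)
    assert np.isclose(eddot(t, e, edot, u), xddot(t, x, xdot, u) - rd_dd(t))
```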