Adaptive control has been extensively investigated and developed in both theory and application during the past few decades, and it remains a very active research area. The parameters of a process may be unknown or may change slowly over time; this document discusses how one can control such a process.



This was achieved through physical measurements by configuring the servo driver, the computer system, and the DC motor using the Simulink toolbox and the Microsoft Windows Software Development Kit (SDK) 7. The reference input to the system was a step function. The purpose is to determine the time response of the developed system, the stability of the system, and its ability to reach one stationary state when starting from another. Results from physical measurements show that the MRAC system was able to accommodate the nonlinearities associated with the DC motor and still maintain good control of the motor without voltage overshoot, in contrast to the PID controller.

Recent developments in magnetic materials, microprocessors, semiconductor technology, mechatronics, and related fields have broadened the range of high-performance drive applications. For high-performance drives, DC servomotors are mostly the choice; they are widely deployed in these applications due to their reliability and ease of control, which stems from the decoupled nature of the field and armature magnetomotive forces (Sheel, Chandkishor, and Gupta). DC servomotor systems have two outputs that can be controlled: angular speed and angular position. For some applications, such as disk drives and robotics, position control is more important than speed control (Makableh). One of the parameters that negatively affects efficient control of a DC servomotor is overshoot. In the context of control theory, overshoot can be regarded as an output exceeding its final steady-state value.


Overshoot can be seen as a form of distortion that affects the rise time, settling time, and related performance measures. Reviews show that conventional controllers such as the Proportional-Integral-Derivative (PID) controller are not well suited to handling the nonlinearities associated with DC motors while simultaneously mitigating the effects of overshoot.

Thus, one of the drawbacks of conventional tracking controllers for electric drives is that they are unable to capture unknown load characteristics over a widely ranging operating point, which makes tuning of the controller parameters very difficult. There are many ways to overcome these difficulties, but generally four basic approaches are common to adaptive control: (1) model reference adaptive control (MRAC), (2) self-tuning control, (3) dual control, and (4) gain scheduling.

Usually the load torque is a nonlinear function of a combination of variables such as the speed and position of the rotor. Therefore, identifying the overall nonlinear system through a model linearized around a widely varying or changing operating point, under fast switching frequencies, can introduce errors which can lead to unstable or inaccurate performance of the system (Astrom and Wittenmark). The DC motor model has both an electrical and a mechanical representation.

The torque developed by the motor can be derived as follows. Assume a current-carrying conductor is established in a magnetic field with flux f, and that the conductor is located at a distance r from the center of rotation. The relationship among the developed torque Tm, the flux f, and the armature current ia is

Tm = Km * f * ia   (1a)

where Km is the motor torque constant. In addition to the torque developed, when the conductor moves in the magnetic field, a voltage is generated across its terminals.

This voltage is known as the back emf; it is proportional to the shaft velocity and tends to oppose the current flow. The relationship between the back emf eb and the shaft velocity wm is

eb = Kb * wm   (1b)

where Kb is the back-emf constant. Equations (1a) and (1b) form the fundamentals of DC motor operation. Generally speaking, MRAC is composed of four parts: the plant containing unknown parameters, a reference model for compactly specifying the desired output of the control system, a feedback control law containing adjustable parameters, and an adaptation mechanism for updating those adjustable parameters.

The adaptation law of an MRAC system extracts parameter information from the tracking errors. However, unlike NARMA-L2, the model reference architecture requires that a separate neural network controller be trained offline, in addition to the neural network plant model. The controller training is computationally expensive because it requires the use of dynamic backpropagation (Beale, Hagan, and Demuth). It has also been emphasized that complete controllability and observability of the process must be assumed for successful neural network modeling and control (Saerens and Soquet). Moreover, Narendra and Parthasarathy indicated that considerable progress in nonlinear control theory is still needed to obtain rigorous solutions to identification and control problems using neural networks.

A PID control system works by computing the error signal between a measured output value and a reference (input) value; the controller works to drive this error to a minimum, so that the measured output follows the input reference as closely as possible. The mathematical model of the PID controller has been proposed by many authors and is represented by

u(t) = Kp*e(t) + Ki*Integral[e(t) dt] + Kd*de(t)/dt   (2)

where u(t) is the controller output signal, e(t) is the error signal, Kp is the proportional gain, Ki is the integral gain, and Kd is the derivative gain.
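As a concrete illustration of Equation (2), a minimal discrete-time PID controller can be sketched as follows. The gains and the simple first-order test plant are illustrative assumptions only, not values from the paper.

```python
class PID:
    """Minimal discrete-time PID controller implementing Equation (2)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative use: drive an assumed first-order plant dy/dt = -y + u to a step reference.
pid = PID(kp=5.0, ki=2.0, kd=0.0)
y, dt, ref = 0.0, 0.01, 1.0
for _ in range(2000):            # 20 s of simulated time
    u = pid.update(ref - y, dt)
    y += dt * (-y + u)           # forward-Euler integration of the plant
print(round(y, 3))               # settles close to the reference of 1.0
```

The integral term is what removes the steady-state error here; a pure proportional controller would leave a constant offset.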

The DC motor takes a single input, an applied voltage, and generates a single output, the motor speed. It is therefore a single-input, single-output (SISO) system.

Figure 2 is the electromechanical representation of a DC motor; the diagram is used to develop the system-level transfer function that characterizes the operation or behavior of the motor.

Fig. 2: Electrical model of a DC motor.

The armature is modeled as a circuit with resistance Ra connected in series with an inductance La and a voltage source ea, with eb representing the back electromotive force (emf) in the armature when the rotor rotates.

Looking at the diagram of Fig. 2, it can be seen that control of the DC motor is applied at the armature terminals in the form of the applied voltage ea(t). It can be deduced that the torque developed in the motor is proportional to the air-gap flux and the armature current.

From Equations (3) through (6), the applied voltage ea(t) is considered the cause, and Equation (5) gives the immediate effect due to the applied voltage. In Equation (3), the armature current ia(t) produces the motor torque, while Equation (6) defines the back emf. It can also be seen from Equation (7) that the motor torque produces the angular velocity and displacement qm(t) of the rotor. The transfer function between the motor displacement and the input voltage is obtained as

qm(s)/Ea(s) = Km / { s * [ (Ra + La*s)(Jm*s + Bm) + Km*Kb ] }   (9)

Note that the load torque TL has been set to zero in Equation (9).

Fig. 3 shows a block diagram of the DC motor system for speed control. From the diagram, one can see clearly how the transfer function is related to each block. It can be seen from Equation (9) that s can be factored out of the denominator; the significance of this is that the DC motor acts as an integrating device between the input voltage and the displacement. From Fig. 3 it can also be seen that the motor has a built-in feedback loop caused by the back emf eb.
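To make the model above concrete, the electromechanical equations can be simulated directly. The sketch below uses illustrative parameter values (Ra, La, Km, Kb, Jm, Bm are assumptions, not values from the paper) and forward-Euler integration; it shows the speed settling to a steady state under a step voltage while the position keeps integrating, consistent with the integrator interpretation of Equation (9).

```python
# Hypothetical DC motor parameters (illustrative assumptions only).
Ra, La = 1.0, 0.5        # armature resistance (ohm) and inductance (H)
Km, Kb = 0.01, 0.01      # torque and back-emf constants
Jm, Bm = 0.01, 0.1       # rotor inertia and viscous-friction coefficient
TL = 0.0                 # load torque set to zero, as in Equation (9)

ia = omega = theta = 0.0 # armature current, angular speed, angular position
ea, dt = 1.0, 0.001      # step input voltage and integration step

for _ in range(10000):   # simulate 10 s
    dia = (ea - Ra * ia - Kb * omega) / La      # electrical equation
    domega = (Km * ia - Bm * omega - TL) / Jm   # mechanical equation
    ia += dt * dia
    omega += dt * domega
    theta += dt * omega                          # position is the integral of speed

# At steady state, omega approaches Km*ea / (Ra*Bm + Km*Kb).
print(round(omega, 4), round(theta, 2))
```

The back-emf term Kb*omega in the electrical equation is the built-in feedback loop mentioned in the text: as the speed rises it reduces the effective armature voltage.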

Fig. 3: Simulink model of a DC servomotor in terms of speed.

The back emf physically represents the feedback of a signal that is proportional to the negative of the speed of the motor. From Equation (9), it can be noted that the back-emf constant Kb appears in the product Km*Kb, which adds to the term involving the resistance Ra and the viscous-friction coefficient Bm in the denominator.

Effectively, the back-emf effect is equivalent to an electrical friction, which tends to improve the stability of the motor and hence of the overall system. Before simulation of DC motor control using an ANN model can be performed, an equivalent discrete-time model of the DC motor must be constructed.

Note that the choice of load torque here is arbitrary; a speed-dependent load torque is a common characteristic of most propeller-driven loads.

Fig. 4: Simulink model of a DC servomotor in terms of speed and position.

Alternatively, direct substitution can be used, substituting for the position in Equations (4), (5), and (10).

The equations then yield the required relations. Substituting, using Equations (11), into the following equation from the work of Weerasooriya and El-Sharkawi, to determine the function governing the speed control of a DC motor, gives Equation (22). Assume that the term in Equation (22) is replaced with the desired reference motor position at the next instant; the control voltage (the input voltage) can then be computed from the resulting equation. Since some of these approaches are relatively recent and research is still going on, we will not discuss them further in the rest of the book.

Transitions between different operating points that lead to significant parameter changes may be handled by interpolation or by increasing the number of operating points. The two elements that are essential in implementing this approach are a lookup table to store the values of Kj and the plant measurements that correlate well with the changes in the operating points.

The approach is called gain scheduling and is illustrated in Figure 1. With this approach, plant parameter variations can be compensated by changing the controller gains as functions of the input, output, and auxiliary measurements. The advantage of gain scheduling is that the controller gains can be changed as quickly as the auxiliary measurements respond to parameter changes.
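The lookup-table mechanism described above can be sketched as follows. This is a minimal illustration under assumed numbers: the operating-point measurements and the scheduled gains Kj are hypothetical, and the gain for an intermediate operating point is obtained by linear interpolation, as the text suggests.

```python
# Hypothetical schedule: auxiliary measurement (operating point) -> controller gain Kj.
operating_points = [0.0, 10.0, 20.0, 30.0]   # assumed measurement values
gains = [2.0, 3.5, 5.0, 8.0]                  # assumed precomputed gains Kj

def scheduled_gain(measurement):
    """Look up the controller gain, linearly interpolating between table entries."""
    if measurement <= operating_points[0]:
        return gains[0]
    if measurement >= operating_points[-1]:
        return gains[-1]
    for i in range(len(operating_points) - 1):
        lo, hi = operating_points[i], operating_points[i + 1]
        if lo <= measurement <= hi:
            frac = (measurement - lo) / (hi - lo)
            return gains[i] + frac * (gains[i + 1] - gains[i])

# The controller gain changes as quickly as the auxiliary measurement does.
print(scheduled_gain(5.0))    # halfway between the first two table entries
print(scheduled_gain(25.0))   # halfway between the last two table entries
```

Note that the table is computed offline: if the plant drifts away from the conditions under which the table was built, nothing in this mechanism corrects the schedule, which is exactly the drawback discussed next.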

Frequent and rapid changes of the controller gains, however, may lead to instability.

Figure 1. Multiple models adaptive control with switching.

One of the disadvantages of gain scheduling is that the adjustment mechanism of the controller gains is precomputed offline and therefore provides no feedback to compensate for incorrect schedules. Large unpredictable changes in the plant parameters, due to failures or other effects, may lead to deterioration of performance or even to complete failure.

Despite its limitations, gain scheduling is a popular method for handling parameter variations in flight control [3,6] and other systems [7, , ]. While gain scheduling falls into the generic definition of adaptive control, we do not classify it as adaptive control in this book due to the lack of online parameter estimation which could track unpredictable changes in the plant parameters.

These schemes are based on search methods in the controller parameter space [8] until the stabilizing controller is found or the search method is restricted to a finite set of controllers, one of which is assumed to be stabilizing [22, 23].

In some approaches, after a satisfactory controller is found it can be tuned locally using online parameter estimation for better performance []. Since the plant parameters are unknown, the parameter space is parameterized with respect to a set of plant models which is used to design a finite set of controllers so that each plant model from the set can be stabilized by at least one controller from the controller set.

Without going into specific details, the general structure of this multiple model adaptive control with switching, as it is often called, is shown in Figure 1. Why Adaptive Control 7 In Figure 1. This by itself could be a difficult task in some practical situations where the plant parameters are unknown or change in an unpredictable manner. Furthermore, since there is an infinite number of plants within any given bound of parametric uncertainty, finding controllers to cover all possible parametric uncertainties may also be challenging.

In other approaches [22, 23], it is assumed that the set of controllers with the property that at least one of them is stabilizing is available. This is achieved by the use of a switching logic that differs in detail from one approach to another.

While these methods provide another set of tools for dealing with plants with unknown parameters, they cannot replace the identifier-based adaptive control schemes where no assumptions are made about the location of the plant parameters. One advantage, however, is that once the switching is over, the closed-loop system is LTI, and it is much easier to analyze its robustness and performance properties.

This LTI nature of the closed-loop system, at least between switches, allows the use of the well-established and powerful robust control tools for LTI systems [29] for controller design. These approaches are still in their infancy, and it is not clear how they affect performance, as switching may generate bad transients with adverse effects on performance. Switching may also increase the controller bandwidth and lead to instability in the presence of high-frequency unmodeled dynamics.

Guided by data that do not carry sufficient information about the plant model, the wrong controllers could be switched on over periods of time, leading to internal excitation and bad transients before the switching process settles to the right controller. Some of these issues may also exist in classes of identifier-based adaptive control, as such phenomena are independent of the approach used.

The following simple examples illustrate situations where adaptive control is superior to linear control. Consider the scalar plant dx/dt = a*x + u, where u is the control input and x is the scalar state of the plant.

The parameter a is unknown. We want to choose the input u so that the state x is bounded and driven to zero with time. If a is a known parameter, then the linear control law u = -k*x, with k > a, can meet the control objective. If only an upper bound on a were known, any fixed gain k exceeding that bound would also work; the conclusion is that in the absence of an upper bound for the plant parameter, no linear controller could stabilize the plant and drive the state to zero. The switching schemes described in Section 1.

As we will establish in later chapters, the adaptive control law guarantees that all signals are bounded and x converges to zero no matter what the value of the parameter a is. This simple example demonstrates that adaptive control is a potential approach to use in situations where linear controllers cannot handle the parametric uncertainty.
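A minimal simulation sketch of this example follows. The adaptive law used here, u = -k(t)*x with adaptation k_dot = gamma*x^2 (gamma > 0), is the standard one for this scalar problem; the numerical values of a, gamma, and the initial conditions are illustrative assumptions. Note that the controller never uses knowledge of a.

```python
a = 2.0                 # "unknown" unstable plant parameter; used only to simulate the plant
x, k = 1.0, 0.0         # initial state and initial adaptive gain
gamma, dt = 2.0, 0.001  # adaptation gain and integration step

for _ in range(20000):          # 20 s of simulated time
    u = -k * x                  # adaptive control law: the gain k is adjusted online
    x += dt * (a * x + u)       # plant: dx/dt = a*x + u
    k += dt * gamma * x * x     # adaptation law: k_dot = gamma * x^2

print(round(x, 6), round(k, 3))  # x is driven to zero; k settles above a
```

The gain k grows only while x is nonzero, so it stops increasing once the state has been regulated; no a priori bound on a is needed.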

Another example where an adaptive control law may have properties superior to those of the traditional linear schemes is the following.

It is clear that by increasing the value of the controller gain k, we can make the steady-state value of x as small as we like. This will lead to a high-gain controller, however, which is undesirable, especially in the presence of high-frequency unmodeled dynamics. In principle, moreover, we cannot guarantee that x will be driven to zero for any finite control gain in the presence of a nonzero disturbance d.

The adaptive control approach is to estimate online the disturbance d and cancel its effect via feedback. Therefore, in addition to stability, adaptive control techniques could be used to improve performance in a wide variety of situations where linear techniques would fail to meet the performance characteristics.
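A minimal sketch of this idea, under assumed specifics: take the plant dx/dt = u + d with an unknown constant disturbance d, and compare the fixed-gain law u = -k*x (which leaves a steady-state offset d/k) with an adaptive law u = -k*x - dhat, where the estimate is updated as dhat_dot = gamma*x. The plant form, gains, and disturbance value are illustrative, not from the text.

```python
d = 0.5                  # unknown constant disturbance (used only inside the plant)
k, gamma, dt = 2.0, 1.0, 0.001

# Fixed-gain controller: u = -k*x leaves a steady-state offset x = d/k.
x = 0.0
for _ in range(20000):                 # 20 s of simulated time
    x += dt * (-k * x + d)
x_fixed = x                            # approaches d/k = 0.25

# Adaptive controller: estimate d online and cancel it via feedback.
x, dhat = 0.0, 0.0
for _ in range(20000):
    u = -k * x - dhat                  # cancel the current disturbance estimate
    x += dt * (u + d)                  # plant: dx/dt = u + d
    dhat += dt * gamma * x             # estimator: dhat_dot = gamma * x

print(round(x_fixed, 3), round(x, 4), round(dhat, 3))
```

Increasing k in the fixed-gain loop only shrinks the offset d/k, whereas the adaptive loop drives x to zero with the same finite gain, because the estimate dhat converges to d.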

This by no means implies that adaptive control is the most appropriate approach to use in every control problem. The purpose of this book is to teach the reader not only the advantages of adaptive control but also its limitations.

Adaptive control involves learning, and learning requires data which carry sufficient information about the unknown parameters. For such information to be available in the measured data, the plant has to be excited, and this may lead to transients which, depending on the problem under consideration, may not be desirable. Furthermore, in many applications there is sufficient information about the parameters, and online learning is not required. In such cases, linear robust control techniques may be more appropriate.

The adaptive control tools studied in this book complement the numerous control tools already available in the area of control systems, and it is up to the knowledge and intuition of the practicing engineer to determine which tool to use for which application. The theory, analysis, and design approaches presented in this book will help the practicing engineer to decide whether adaptive control is the approach to use for the problem under consideration.

Starting in the early s, the design of autopilots for high-performance aircraft motivated intense research activity in adaptive control. High-performance aircraft undergo drastic changes in their dynamics when they move from one operating point to another, which cannot be handled by constant-gain feedback control. A sophisticated controller, such as an adaptive controller, that could learn and accommodate changes in the aircraft dynamics was needed.

Model reference adaptive control was suggested by Whitaker and coworkers in [30, 31] to solve the autopilot control problem.

Sensitivity methods and the MIT rule were used to design the online estimators or adaptive laws of the various proposed adaptive control schemes. An adaptive pole placement scheme based on the optimal linear quadratic problem was suggested by Kalman in [32].

The work on adaptive flight control was characterized by a "lot of enthusiasm, bad hardware and nonexisting theory" [33]. The lack of stability proofs and the lack of understanding of the properties of the proposed adaptive control schemes coupled with a disaster in a flight test [34] caused the interest in adaptive control to diminish. The s became the most important period for the development of control theory and adaptive control in particular.

State-space techniques and stability theory based on Lyapunov were introduced. Developments in dynamic programming [35, 36], dual control [37] and stochastic control in general, and system identification and parameter estimation [38, 39] played a crucial role in the reformulation and redesign of adaptive control.

By , Parks [40] and others found a way of redesigning the MIT rule-based adaptive laws used in the model reference adaptive control MRAC schemes of the s by applying the Lyapunov design approach. Their work, even though applicable to a special class of LTI plants, set the stage for further rigorous stability proofs in adaptive control for more general classes of plant models. The advances in stability theory and the progress in control theory in the s improved the understanding of adaptive control and contributed to a strong renewed interest in the field in the s.

On the other hand, the simultaneous development and progress in computers and electronics, which made the implementation of complex controllers such as the adaptive ones feasible, contributed to an increased interest in applications of adaptive control.

The s witnessed several breakthrough results in the design of adaptive control.

MRAC schemes using the Lyapunov design approach were designed and analyzed in []. The concepts of positivity and hyperstability were used in [45] to develop a wide class of MRAC schemes with well-established stability properties.

At the same time, parallel efforts for discrete-time plants in deterministic and stochastic environments produced several classes of adaptive control schemes with rigorous stability proofs [44, 46]. The excitement of the s and the development of a wide class of adaptive control schemes with well-established stability properties were accompanied by several successful applications [47-49].

The successes of the s, however, were soon followed by controversies over the practicality of adaptive control. It was pointed out early on by Egardt [41] that the adaptive schemes of the s could easily go unstable in the presence of small disturbances. The nonrobust behavior of adaptive control became very controversial in the early s, when more examples of instabilities were published by Ioannou et al.

Rohrs's example of instability stimulated a lot of interest, and many researchers directed their efforts towards understanding the mechanisms of instability and finding ways to counteract them. By the mid- s, several new redesigns and modifications had been proposed and analyzed, leading to a body of work known as robust adaptive control. An adaptive controller is defined to be robust if it guarantees signal boundedness in the presence of "reasonable" classes of unmodeled dynamics and bounded disturbances, as well as performance error bounds that are of the order of the modeling error.

The work on robust adaptive control continued throughout the s and involved the understanding of the various robustness modifications and their unification under a more general framework [41, ]. In discrete time, Praly [57, 58] was the first to establish global stability in the presence of unmodeled dynamics, using various fixes together with a dynamic normalizing signal of the kind used in Egardt's work to deal with bounded disturbances.

The use of the normalizing signal together with the switching sigma-modification led to the proof of global stability in the presence of unmodeled dynamics for continuous-time plants in [59]. The solution of the robustness problem in adaptive control led to the solution of the long-standing problem of controlling a linear plant whose parameters are unknown and changing with time.

By the end of the s, several breakthrough results were published in the area of adaptive control for linear time-varying plants [5, ]. The focus of adaptive control research in the late s to early s was on performance properties and on extending the results of the s to certain classes of nonlinear plants with unknown parameters. These efforts led to new classes of adaptive schemes, motivated from nonlinear system theory [], as well as to adaptive control schemes with improved transient and steady-state performance [].

New concepts such as adaptive backstepping, nonlinear damping, and tuning functions are used to address the more complex problem of dealing with parametric uncertainty in classes of nonlinear systems [66]. In the late s to early s, the use of neural networks as universal approximators of unknown nonlinear functions led to the use of online parameter estimators to "train" or update the weights of the neural networks.

Difficulties in establishing global convergence results soon arose, since in multilayer neural networks the weights appear in a nonlinear fashion, leading to "nonlinear in the parameters" parameterizations for which globally convergent online parameter estimators could not be developed.

This led to the consideration of single-layer neural networks, where the weights can be expressed in parameterizations convenient for estimation.

These approaches are described briefly in Chapter 8, where numerous references are also provided for further reading. In the mids to early s, several groups of researchers started looking at alternative methods of controlling plants with unknown parameters [].

These methods avoid the use of online parameter estimators in general and use search methods, multiple models to characterize parametric uncertainty, switching logic to find the stabilizing controller, etc.

Research in these non-identifier-based adaptive control techniques is still going on, and issues such as robustness and performance are still to be resolved. Adaptive control has a rich literature full of different techniques for design, analysis, performance, and applications. Several survey papers [74, 75] and books and monographs [5,39,41,,49,50,66,] have already been published. Despite the vast literature on the subject, there is still a general feeling that adaptive control is a collection of unrelated technical tools and tricks.

The purpose of this book is to present the basic design and analysis tools in a tutorial manner, making adaptive control accessible as a subject to less mathematically oriented readers while at the same time preserving much of the mathematical rigor required for stability and robustness analysis.

Some of the significant contributions of the book, in addition to its relative simplicity, include the presentation of different approaches and algorithms in a unified, structured manner which helps abolish much of the mystery that existed in adaptive control. Furthermore, up to now continuous-time adaptive control approaches have been viewed as different from their discrete-time counterparts. In this book we show for the first time that the continuous-time adaptive control schemes can be converted to discrete time by using a simple approximation of the time derivative.

Chapter 2: Parametric Models

Let us consider the first-order system dx/dt = a*x + b*u, where x and u are the scalar state and input, respectively, and a, b are the unknown constants we want to identify online using measurements of x and u. The first step in the design of online parameter identification (PI) algorithms is to lump the unknown parameters in a vector and separate them from known signals, transfer functions, and other known parameters in an equation that is convenient for parameter estimation.

We refer to this form as the static parametric model (SPM). The SPM may represent a dynamic, static, linear, or nonlinear system. The importance of the SPM is that the unknown parameter vector appears linearly; for this reason, we refer to the model as linear in the parameters. As we will show later, this property is significant in designing online PI algorithms whose global convergence properties can be established analytically.

We can derive this model by filtering both sides of the system equations to obtain a relation that is linear in the unknown parameters. In some cases, the unknown parameters cannot be expressed in the form of the linear-in-the-parameters models; in such cases, the PI algorithms based on such models cannot be shown to converge globally. The transfer function W(q) is a known stable transfer function. In some applications of parameter identification or adaptive control of plants of the form dx/dt = A*x + B*u, whose state x is available for measurement, the following parametric model may be used:

dx/dt = Am*x + (A - Am)*x + B*u,

where Am is a stable design matrix; A and B are the unknown matrices; and x, u are signal vectors available for measurement.
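As an illustration of how an SPM is used, the sketch below identifies the parameters a and b of the scalar system dx/dt = a*x + b*u online with a normalized gradient algorithm. Filtering both sides by 1/(s + lam) gives z = theta*^T phi with z = (s/(s+lam))x, phi = [x/(s+lam), u/(s+lam)], and theta* = [a, b]. The filter pole lam, the adaptation gain, the "true" values of a and b, and the sinusoidal input are illustrative assumptions.

```python
import math

a_true, b_true = -2.0, 1.0   # "unknown" parameters, used only to simulate the plant
lam, gamma, dt = 1.0, 20.0, 0.001

x = xf = uf = 0.0            # plant state and filtered signals x/(s+lam), u/(s+lam)
theta = [0.0, 0.0]           # online estimates of [a, b]

for i in range(100000):                       # 100 s of simulated time
    t = i * dt
    u = math.sin(t)                           # one sinusoid is rich enough for 2 parameters
    z = x - lam * xf                          # z = (s/(s+lam)) x, without differentiating x
    phi = (xf, uf)
    m2 = 1.0 + phi[0]**2 + phi[1]**2          # normalization signal
    eps = (z - theta[0]*phi[0] - theta[1]*phi[1]) / m2
    theta[0] += dt * gamma * eps * phi[0]     # gradient update: theta_dot = gamma*eps*phi
    theta[1] += dt * gamma * eps * phi[1]
    x += dt * (a_true * x + b_true * u)       # plant
    xf += dt * (-lam * xf + x)                # state filters
    uf += dt * (-lam * uf + u)

print(round(theta[0], 2), round(theta[1], 2))  # estimates approach a_true, b_true
```

Because the model is linear in theta, the estimation error enters the update linearly, which is what makes the global convergence analysis mentioned above tractable.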