A. Proof of Theorem 4.1
In this appendix, we prove Theorem 4.1. First, note that with , , given by (4.15), it follows from (4.1), (4.4), and (4.14) that
Now, defining and , using (4.5)–(4.7), (4.9), and , it follows from (4.2) and (A.1) that
where , , , , , and . Furthermore, since is nonnegative and asymptotically stable, it follows from Theorem 2.3 that there exist a positive diagonal matrix and a positive-definite matrix such that (4.18) holds.
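The matrix result invoked here admits a simple numerical illustration. The sketch below is hedged: the names `A`, `P`, and `R` are placeholders (not the paper's notation), and the Perron-vector construction is one standard way to exhibit a diagonal solution, not necessarily the construction used in Theorem 2.3.

```python
import numpy as np

# Hedged numerical sketch of the cited type of result: a nonnegative,
# asymptotically stable (Schur) matrix A admits a positive diagonal P and a
# positive-definite R with  A^T P A - P = -R.  A, P, R are placeholder names.
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])                       # nonnegative, irreducible
assert np.max(np.abs(np.linalg.eigvals(A))) < 1  # Schur: spectral radius < 1

# Positive right and left Perron vectors v and u of A.
lam_r, V = np.linalg.eig(A)
v = np.abs(V[:, np.argmax(np.abs(lam_r))].real)
lam_l, U = np.linalg.eig(A.T)
u = np.abs(U[:, np.argmax(np.abs(lam_l))].real)

# Candidate diagonal Lyapunov matrix: P = diag(u_i / v_i).
P = np.diag(u / v)
R = P - A.T @ P @ A
print(np.linalg.eigvalsh(R))   # both eigenvalues positive, so R > 0
```

For this choice of P, a Cauchy-Schwarz argument gives the stronger bound A^T P A ≤ ρ(A)² P, so R is positive definite whenever ρ(A) < 1 and the Perron vectors are strictly positive (e.g., A irreducible).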
Next, to show that the closed-loop system given by (4.17), (A.2), and (A.3) is ultimately bounded with respect to , consider the Lyapunov-like function
where . Note that (A.4) satisfies (3.3) with , , , where . Furthermore, is a class function. Now, using (4.17) and (A.2), it follows that the difference of along the closed-loop system trajectories is given by
Next, using
it follows that
Furthermore, note that since, by assumption, and , , it follows that
Hence,
Now, for
it follows that for all , that is, for all and , where
Furthermore, it follows from (A.12) that
Hence, it follows from (A.4) and (A.15) that
where . Thus, it follows from Theorem 3.2 that the closed-loop system given by (4.17), (A.2), and (A.3) is globally bounded with respect to uniformly in , and for every , , , where
, and . Furthermore, to show that , , suppose there exists such that for all . In this case, , , which implies , . Alternatively, suppose there does not exist such that for all . In this case, there exists an infinite set . Now, with (A.13) satisfied, it follows that for all , that is, for all and , where is given by (A.14). Furthermore, note that , , and (A.16) holds. Hence, it follows from Theorem 3.3 that the closed-loop system given by (4.17), (A.2), and (A.3) is globally ultimately bounded with respect to uniformly in with ultimate bound given by , where .
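The boundedness conclusion above follows the standard comparison-function machinery, even though the specific symbols did not survive reproduction. In generic placeholder notation (not the paper's symbols), a Lyapunov-like function for such an argument satisfies

```latex
% Placeholder notation, not the paper's symbols:
\alpha(\|x\|) \;\le\; V(x) \;\le\; \beta(\|x\|),
\qquad
\Delta V(x(k)) \;\le\; 0 \ \ \text{whenever}\ \ \|x(k)\| \ge \mu ,
```

with class-$\mathcal{K}$ bounds $\alpha$, $\beta$, and the resulting ultimate bound is of the form $\alpha^{-1}(\beta(\mu))$, up to the one-step overshoot correction that Theorems 3.2 and 3.3 account for in discrete time.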
Next, to show ultimate boundedness of the error dynamics, consider the Lyapunov-like function
Note that (A.18) satisfies
with , , , , and , where . Furthermore, is a class function. Now, using (4.18), (A.10), and the definition of , it follows that the difference of along the closed-loop system trajectories is given by
where in (A.20) we used and for and . Now, noting and , using the inequalities
and rearranging terms in (A.20) yields
Now, for
it follows that for all , where
or, equivalently, for all , , where (see Figure 3)
Next, we show that , . Since for all , it follows that, for , ,
Now, let and assume . If , , then it follows that , . Alternatively, if there exists such that , then, since , it follows that there exists , such that and , where . Hence, it follows that
which implies that . Next, let , where and assume and . Now, for every such that , , it follows that
which implies that , . Now, if there exists such that , then it follows as in the earlier case shown above that , . Hence, if , then
Finally, repeating the above arguments with , , replaced by , , it can be shown that , , where .
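The containment argument above is the discrete-time level-set trapping step: unlike the continuous-time case, a trajectory can jump across a level set in a single step, so the one-step overshoot must be bounded explicitly. In placeholder notation (not the paper's symbols), if $V(e(k)) \le \eta$ for some level $\eta$, then

```latex
% Placeholder notation: one-step overshoot bound for a level set of V
V(e(k+1)) \;=\; V(e(k)) + \Delta V(e(k))
          \;\le\; \eta \;+\; \sup_{V(e) \le \eta} \Delta V(e),
```

while $\Delta V(e(k)) \le 0$ whenever $V(e(k)) \ge \eta$; together these trap the trajectory in the enlarged sublevel set, which is the role played by the set inclusions established in the preceding paragraphs.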
Figure 3. Visualization of sets used in the proof of Theorem 4.1.
Next, define
where is the maximum value such that , and define
where is given by (A.30). Assume that (see Figure 3) (this assumption is standard in the neural network literature and ensures that in the error space there exists at least one Lyapunov level set . In the case where the neural network approximation holds in , this assumption is automatically satisfied. See Remark A.1 for further details). Now, for all , . Alternatively, for all , . Hence, it follows that is positively invariant. In addition, since (A.3) is input-to-state stable with viewed as the input, it follows from Proposition 3.4 that the solution , , to (A.3) is ultimately bounded. Furthermore, it follows from [21, Theorem 1] that there exist a continuous, radially unbounded, positive-definite function , a class function , and a class function such that
Since the upper bound for is given by , it follows that the set given by
is also positively invariant as long as (see Remark A.1). Now, since and are positively invariant, it follows that
is also positively invariant. In addition, since (4.1), (4.2), (4.15), and (4.17) are ultimately bounded with respect to and since (4.2) is input-to-state stable at with viewed as the input, it follows from Proposition 3.4 that the solution , , of the closed-loop system (4.1), (4.2), (4.15), and (4.17) is ultimately bounded for all .
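The converse input-to-state stability result cited from [21, Theorem 1] in the paragraph above is of the standard discrete-time ISS-Lyapunov form. In placeholder notation (not the paper's symbols), for a system $x(k+1) = f(x(k), u(k))$, there exists $V$ with

```latex
% Placeholder notation for a discrete-time ISS-Lyapunov characterization:
\alpha_1(\|x\|) \;\le\; V(x) \;\le\; \alpha_2(\|x\|),
\qquad
V\big(f(x,u)\big) - V(x) \;\le\; -\alpha_3(\|x\|) + \sigma(\|u\|),
```

with $\alpha_1, \alpha_2, \alpha_3$ class-$\mathcal{K}_\infty$ and $\sigma$ class-$\mathcal{K}$; boundedness of the input signal then yields ultimate boundedness of the state, which is how Proposition 3.4 is being used here.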
Finally, to show that and , , for all , note that the closed-loop system (4.1), (4.15), and (4.17) is given by
where
Note that and are nonnegative and, since whenever , , , . Hence, since is nonnegative with respect to pointwise-in-time, is nonnegative with respect to , and , it follows from Proposition 2.9 that , , and , , for all .
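The nonnegativity argument rests on the standard invariance property of nonnegative discrete-time systems: if the right-hand side maps the nonnegative orthant into itself, the orthant is positively invariant. In placeholder notation (not the paper's symbols),

```latex
% Placeholder notation: orthant invariance for a nonnegative discrete-time system
x(k+1) = f\big(x(k)\big), \qquad
f(x) \ge\ge 0 \ \ \text{for all } x \ge\ge 0
\;\;\Longrightarrow\;\;
x(0) \ge\ge 0 \ \Rightarrow\ x(k) \ge\ge 0, \quad k \in \overline{\mathbb{Z}}_+,
```

which is the content of the Proposition 2.9 invocation above.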
Remark A.1.
In the case where the neural network approximation holds in , the assumptions and invoked in the proof of Theorem 4.1 are automatically satisfied. Furthermore, in this case the control law (4.15) ensures global ultimate boundedness of the error signals. However, the existence of a global neural network approximator for an uncertain nonlinear map cannot in general be established. Hence, as is common in the neural network literature, for a given arbitrarily large compact set , we assume that there exists an approximator for the unknown nonlinear map up to a desired accuracy. Furthermore, we assume that in the error space there exists at least one Lyapunov level set such that . In the case where is continuous on , it follows from the Stone-Weierstrass theorem that can be approximated over an arbitrarily large compact set . In this case, our neuroadaptive controller guarantees semiglobal ultimate boundedness. An identical assumption is made in the proof of Theorem 5.1.
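The approximation assumption discussed in this remark is the usual compact-set universal approximation property. In placeholder notation (the symbols $\Delta$, $W$, $\sigma$, $\mathcal{D}_c$, $\varepsilon^{*}$ below are illustrative, not the paper's), it reads

```latex
% Placeholder notation for the standard neural network approximation property:
\sup_{x \in \mathcal{D}_c}
  \big\| \Delta(x) - W^{\mathrm{T}} \sigma(x) \big\| \;\le\; \varepsilon^{*},
```

for some ideal weight matrix $W$, basis (activation) functions $\sigma$, and prescribed tolerance $\varepsilon^{*}$ on the compact set $\mathcal{D}_c$; for continuous maps on compact sets this is guaranteed by the Stone-Weierstrass theorem, as noted in the remark.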
B. Proof of Theorem 5.1
In this appendix, we prove Theorem 5.1. First, define , where
Next, note that with , , given by (5.2), it follows from (4.1), (4.4), and (4.14) that
Now, defining and and using (4.6), (4.7), and (4.9), it follows from (4.2) and (B.2) that
where , and . Furthermore, since is nonnegative and asymptotically stable, it follows from Theorem 2.3 that there exist a positive diagonal matrix and a positive-definite matrix such that (5.5) holds.
Next, to show ultimate boundedness of the closed-loop system (5.4), (B.3), and (B.4), consider the Lyapunov-like function
where and with . Note that (B.5) satisfies (3.3) with , , , where . Furthermore, is a class function. Now, using (5.4) and (B.3), it follows that the difference of along the closed-loop system trajectories is given by
Now, for each and for the two cases given in (B.1), the right-hand side of (B.6) gives the following:

(1)
if , then . Now, using (A.8), (A.9), and the inequalities
it follows that

(2)
otherwise, , and hence, using (A.8), (A.9), (B.7), (B.9), and (B.10), it follows that
Hence, it follows from (B.6) that in either case
Furthermore, note that since, by assumption, and , , , it follows that
Hence,
Now, it follows using similar arguments as in the proof of Theorem 4.1 that the closed-loop system (5.4), (B.3), and (B.4) is globally bounded with respect to uniformly in . If there does not exist such that for all , it follows using similar arguments as in the proof of Theorem 4.1 that the closed-loop system (5.4), (B.3), and (B.4) is globally ultimately bounded with respect to uniformly in with ultimate bound given by , where . Alternatively, if there exists such that for all , then for all .
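What makes the Theorem 4.1 arguments carry over is that, in either of the two cases, the Lyapunov difference estimate collapses to a common bound. In placeholder notation (not the paper's symbols), such merged estimates typically take the form

```latex
% Placeholder notation: merged difference bound common to both cases
\Delta V(x(k)) \;\le\; -c_1\,\|x(k)\|^{2} \;+\; c_2,
\qquad c_1 > 0,\ \ c_2 \ge 0,
```

so that $\Delta V(x(k)) < 0$ whenever $\|x(k)\| > \sqrt{c_2/c_1}$, and this ball generates the quoted ultimate bound.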
Next, to show ultimate boundedness of the error dynamics, consider the Lyapunov-like function
Note that (B.16) satisfies (A.19) with , , , , and , where . Furthermore, is a class function. Now, using (5.5), (B.13), and the definition of , it follows that the forward difference of along the closed-loop system trajectories is given by
where once again in (B.17) we used and for and .
Next, using (A.21) and (B.17) yields
Now, using similar arguments as in the proof of Theorem 4.1, it follows that the solution , of the closed-loop system (5.4), (B.3), and (B.4) is ultimately bounded for all given by (A.35) and for .
Finally, , is a restatement of (5.2). Now, since , and , it follows from Proposition 2.8 that and , for all .