
Theory and Modern Applications

Switched hyperbolic balance laws and differential algebraic equations

Abstract

Motivated by several applications, we investigate the well-posedness of a switched system composed of a system of linear hyperbolic balance laws and a system of linear differential algebraic equations. This setting includes networks and looped systems of hyperbolic balance laws. The results obtained are global in time, provided that the inputs have finite (but not necessarily small) total variation.

1 Introduction

In this paper, we investigate the well-posedness of switched systems consisting of linear hyperbolic balance laws coupled with differential algebraic equations, of the form

$$\begin{aligned} & \partial _{t} \mathbf{u}(t,x) + \mathbf{A}_{\sigma }(t,x) \partial _{x} \mathbf{u}(t,x) = \mathbf{s}_{\sigma}\bigl(t,x, \mathbf{u}(t,x)\bigr), \end{aligned}$$
(1a)
$$\begin{aligned} & \mathbf{B}_{\sigma }(t) \begin{pmatrix} \mathbf{u}(t,0) \\ \mathbf{u}(t,1) \end{pmatrix} = \mathbf{B}_{\mathbf{w},\sigma}(t) \mathbf{w}(t) + \mathbf{b}_{\sigma}(t), \end{aligned}$$
(1b)
$$\begin{aligned} &\mathbf{E}_{\sigma }\dot{\mathbf{w}} = \mathbf{H}_{\sigma } \mathbf{w}+ \mathbf{K}_{0,\sigma}(t) \mathbf{u}\bigl(t,0^{+}\bigr) + \mathbf{K}_{1,\sigma}(t) \mathbf{u}\bigl(t,1^{-}\bigr) + \mathbf{f}(t). \end{aligned}$$
(1c)

Here the unknown u, defined for \(t>0\) and \(x \in [0,1]\), satisfies the system of linear hyperbolic partial differential equations (1a), briefly PDEs, and w, defined for \(t > 0\), is the solution to (1c), a linear differential algebraic equation (DAE) with index one. The functions u and w are linked together through the boundary conditions (1b) of the PDE and the vector field of the DAE (1c). The complete system (1a)–(1c) is subject to some external switching governed by the parameter σ. For various examples of coupled PDE–DAE systems, see [7]. Systems like (1a)–(1c) occur in many real-world applications such as networks for water supply, electrical power distribution [3, 20], or gas transport [3, 15, 16]. Similar systems, but with nonlinear PDEs, are also used for modeling the human circulatory system [25–27] or controlling traffic flow [13, 17] with autonomous vehicles.

In the literature the coupling between hyperbolic PDEs and ODEs at the boundary has been studied in different settings; see [5, 6, 10–12, 18, 19] and the references therein. In the case of nonlinear systems of hyperbolic balance laws, only results local in time and for small total variation have been obtained [4, 5]. Instead, the present setting allows us to prove the existence of a global in time solution without any restrictions on the total variation of the initial datum. This is in accordance with the results obtained in the Ph.D. thesis by Hante [21] about the well-posedness of switched linear balance laws on bounded domains. We remark that the results by Hante do not cover the case of the present paper. This is due to the fact that (1a)–(1c) is a so-called loop system, i.e., the boundary condition (1b) at one side can depend on the trace of the solution at the other side.

Here we treat only the particular case of DAEs of index one. This is due to the fact that solutions to DAEs of index greater than one are, in general, distributions, in particular, Dirac distributions and their derivatives; see [28]. Such solutions lack the regularity we need for the boundary terms of the hyperbolic PDEs. Coupled systems with linear transport equations and linear switched DAEs of arbitrary index are investigated in [7].

In the present paper, we prove the well-posedness of (1a)–(1c) by a convergent iterative procedure based on the solutions to both the PDEs and the DAEs. As regards the hyperbolic balance laws (1a)–(1b), we use the well-known definition of broad solutions (see, e.g., [8]) based on the concept of characteristic curves. Using the Banach fixed point theorem, we extend the results on bounded intervals, contained in [21], to the case of looped systems. Moreover, we obtain suitable bounds on the total variation, which allow us to consider the traces of the solution at the boundaries. Regarding the DAEs, we use well-known results and estimates; see [24].

The paper is organized as follows. In Sect. 2, we summarize several results about the well-posedness of linear hyperbolic balance laws and about the solutions to algebraic differential equations. In Sect. 3, we investigate the coupled problem (1a)–(1c). The supplementary technical details are collected in Sect. 4.

2 Separate systems

In this section, we briefly recall the theory for both linear hyperbolic PDEs with two boundaries and linear DAEs. For the PDEs, the existing results are extended to include looped systems. These are the basic steps to produce solutions to the complete switching system (1a)–(1c).

2.1 Hyperbolic PDEs

Consider the following semilinear initial boundary value problem (IBVP):

$$\begin{aligned} &\partial _{t} \mathbf{u}(t,x) + \mathbf{A}(t,x) \partial _{x} \mathbf{u}(t,x) = \mathbf{s}\bigl(t,x,\mathbf{u}(t,x)\bigr), \end{aligned}$$
(2a)
$$\begin{aligned} & \begin{pmatrix} \mathbf{B}^{0}_{0}(t)&\mathbf{B}^{1}_{0}(t) \\ \mathbf{B}^{0}_{1}(t)&\mathbf{B}^{1}_{1}(t) \end{pmatrix} \begin{pmatrix} \mathbf{u}(t,0) \\ \mathbf{u}(t,1) \end{pmatrix} = \mathbf{b}(t), \end{aligned}$$
(2b)
$$\begin{aligned} &\mathbf{u}(0,x)= \bar{\mathbf{u}}(x), \end{aligned}$$
(2c)

where \(t \in \mathbb{R}^{+}\) and \(x \in [0, 1]\). We underline that the boundary conditions (2b) are not intended in the classical sense (see, e.g., [2, 14]); that is, we do not prescribe that the traces of the solution at \(x=0\) and \(x=1\) strictly satisfy (2b). Roughly speaking, condition (2b) prescribes the value of the solution only on the incoming components; see, for example, [23, Sect. 2]. For this reason, hypotheses (H-4) and (H-5) below introduce noncharacteristic conditions.

We introduce the following assumptions:

  1. (H-1)

    The map \(\mathbf{A}: \mathbb{R}^{+} \times [0,1] \to \mathbb{R}^{n \times n}\) is a \(\mathbf{C^{2}}\) function.

  2. (H-2)

    The source term \(\mathbf{s}: \mathbb{R}^{+} \times [0,1] \times \mathbb{R}^{n} \to \mathbb{R}^{n}\) is bounded, measurable with respect to t, and Lipschitz continuous with respect to x and u. In particular, there exists \(L_{\mathbf{s}} > 0\) such that

    $$ \bigl\vert \mathbf{s} (t, x, \mathbf{u} ) \bigr\vert \le L_{ \mathbf{s}}, \bigl\vert \mathbf{s} (t, x_{1}, \mathbf{u}_{1} ) - \mathbf{s} (t, x_{2}, \mathbf{u}_{2} ) \bigr\vert \le L_{\mathbf{s}} \vert x_{1} - x_{2} \vert + L_{\mathbf{s}} \vert \mathbf{u}_{1} - \mathbf{u}_{2} \vert $$

    for all \(t \ge 0\), \(x, x_{1}, x_{2} \in [0, 1]\), and \(\mathbf{u}, \mathbf{u}_{1}, \mathbf{u}_{2} \in \mathbb{R}^{n}\).

  3. (H-3)

    The system is strictly hyperbolic, i.e., the matrix \(\mathbf{A}(t,x)\) has n real and distinct eigenvalues \(\lambda _{1} (t, x) < \cdots < \lambda _{n}(t, x)\) for all \(t \in \mathbb{R}^{+}\) and \(x \in [0, 1]\). We denote by \(\mathbf{l}_{i}(t,x)\) and \(\mathbf{r}_{i}(t,x)\), \(i \in \{1, \dots,n \}\), the left and right eigenvectors of the matrix A, respectively. Without loss of generality, we assume that

    $$\begin{aligned} \vert \mathbf{r}_{i} \vert = 1,\qquad \mathbf{l}_{j} \cdot \mathbf{r}_{i} = \textstyle\begin{cases} 1 & \text{if } i=j, \\ 0 & \text{if } i \neq j. \end{cases}\displaystyle \end{aligned}$$
  4. (H-4)

    There exist \(c > 0\) and \(\ell \in \{1,2, \ldots, n-1 \}\) such that \(\lambda _{\ell} (t, x) < -c\) and \(\lambda _{\ell + 1} (t, x) > c \) for every \((t,x) \in \mathbb{R}^{+} \times [0,1]\).

  5. (H-5)

    \(\mathbf{B}^{0}_{0}, \mathbf{B}^{1}_{0} \in \mathbf{C^{0}} (\mathbb{R}; \mathbb{R}^{(n-\ell) \times n} )\) and \(\mathbf{B}^{0}_{1}, \mathbf{B}^{1}_{1} \in \mathbf{C^{0}} (\mathbb{R}; \mathbb{R}^{\ell \times n} )\) are locally Lipschitz continuous and satisfy

    $$ \det \begin{pmatrix} \mathbf{B}^{0}_{0}(t) [\mathbf{r}_{\ell + 1}(t, 0) \cdots \mathbf{r}_{n}(t, 0) ] & \mathbf{B}^{1}_{0}(t) [ \mathbf{r}_{1}(t, 1) \cdots \mathbf{r}_{\ell}(t, 1) ] \\ \mathbf{B}^{0}_{1}(t) [\mathbf{r}_{\ell + 1}(t, 0) \cdots \mathbf{r}_{n}(t, 0) ] & \mathbf{B}^{1}_{1}(t) [ \mathbf{r}_{1}(t, 1) \cdots \mathbf{r}_{\ell}(t, 1) ] \end{pmatrix} \ne 0 $$

    for every \(t \in [0, T]\).

Remark 1

Under the previous assumptions, system (2a)–(2c) can be rewritten in a diagonal form. Indeed, define the \(n \times n\) matrices

$$ \mathbf{L}(t,x) = \bigl[\mathbf{l}_{1}(t, x) \cdots \mathbf{l}_{n}(t,x) \bigr]^{\top } \quad\text{and}\quad \mathbf{R}(t, x) = \bigl[ \mathbf{r}_{1}(t, x) \cdots \mathbf{r}_{n}(t, x) \bigr], $$

whose rows and columns are, respectively, the normalized left and right eigenvectors of the matrix \(\mathbf{A}(t,x)\), and the \(n \times n\) diagonal matrix \(\boldsymbol{\Lambda}(t,x)\) whose entries are the eigenvalues of \(\mathbf{A}(t,x)\). Note that (H-3) and (H-4) imply that the matrices L, R, and Λ are nonsingular. Defining the characteristic variables

$$\begin{aligned} & \mathbf{v}(t, x) = \bigl[v_{1}(t, x) \cdots v_{n}(t,x) \bigr]^{\top }:= \mathbf{L}(t,x) \mathbf{u}(t, x), \\ &\mathbf{v}^{-}(t, x) = \bigl[v_{1}(t, x) \cdots v_{\ell}(t,x)\bigr]^{\top},\qquad \mathbf{v}^{+}(t, x) = \bigl[v_{\ell + 1}(t, x) \cdots v_{n}(t,x)\bigr]^{ \top}, \end{aligned}$$

equation (2a) takes the diagonal form

$$ \mathbf{v}_{t} (t,x) + \boldsymbol{\Lambda}(t,x) \mathbf{v}_{x}(t,x) = \mathbf{h}\bigl(t,x,\mathbf{v}(t, x)\bigr), $$
(3)

where

$$ \begin{aligned} \mathbf{h}(t,x,\mathbf{v}) :={}& \mathbf{L}(t,x) \mathbf{s}\bigl(t,x, \mathbf{R}(t,x)\mathbf{v}\bigr) \\ &{} + \bigl[ \mathbf{L}_{t} (t,x) + \boldsymbol{\Lambda}(t,x) \mathbf{L}_{x} (t,x) \bigr] \mathbf{R}(t,x) \mathbf{v}. \end{aligned} $$
(4)

Finally, defining

$$ \mathbf{R}^{-}(t,x) = \bigl[\mathbf{r}_{1}(t, x) \cdots \mathbf{r}_{\ell}(t,x) \bigr]\quad \text{and}\quad \mathbf{R}^{+}(t, x) = \bigl[\mathbf{r}_{\ell + 1}(t, x) \cdots \mathbf{r}_{n}(t, x) \bigr], $$

we rewrite the boundary condition (2b) in the form

$$ \begin{pmatrix} \mathbf{N}_{0}(t) & \mathbf{M}_{0}(t) \\ \mathbf{M}_{1}(t) & \mathbf{N}_{1}(t) \end{pmatrix} \begin{pmatrix} \mathbf{v}^{+}(t,0) \\ \mathbf{v}^{-}(t,1) \end{pmatrix} =\mathbf{b}(t) - \hat{\mathbf{N}}(t) \begin{pmatrix} \mathbf{v}^{-}(t,0) \\ \mathbf{v}^{+}(t,1) \end{pmatrix} $$
(5)

with

$$\begin{aligned} &\mathbf{N}_{0}(t) = \mathbf{B}_{0}^{0}(t) \mathbf{R}^{+}(t,0),\qquad \mathbf{M}_{0}(t) = \mathbf{B}_{0}^{1}(t) \mathbf{R}^{-}(t,1),\qquad \mathbf{M}_{1}(t) = \mathbf{B}_{1}^{0}(t) \mathbf{R}^{+}(t,0), \\ &\mathbf{N}_{1}(t) = \mathbf{B}_{1}^{1}(t) \mathbf{R}^{-}(t,1) \quad\text{and}\quad \hat{\mathbf{N}}(t) = \begin{pmatrix} \mathbf{B}_{0}^{0}\mathbf{R}^{-} (t, 0) & \mathbf{B}_{0}^{1} \mathbf{R}^{+}(t, 1) \\ \mathbf{B}_{1}^{0}\mathbf{R}^{-}(t, 0) & \mathbf{B}_{1}^{1}\mathbf{R}^{+}(t, 1) \end{pmatrix}. \end{aligned}$$

Due to (H-5), the \(n \times n\) matrix

$$ \hat {\mathbf{M}}(t):= \begin{pmatrix} \mathbf{N}_{0}(t) & \mathbf{M}_{0}(t) \\ \mathbf{M}_{1}(t) & \mathbf{N}_{1}(t) \end{pmatrix} $$

is invertible, and so (5) can be rewritten as

$$ \begin{pmatrix} \mathbf{v}^{+}(t,0) \\ \mathbf{v}^{-}(t,1) \end{pmatrix} = \bigl(\hat { \mathbf{M}}(t) \bigr)^{-1} \mathbf{b}(t) - \bigl( \hat {\mathbf{M}}(t) \bigr)^{-1} \hat {\mathbf{N}}(t) \begin{pmatrix} \mathbf{v}^{-}(t,0) \\ \mathbf{v}^{+}(t,1) \end{pmatrix} , $$
(6)

that is,

$$ \textstyle\begin{cases} \mathbf{v}^{+}(t, 0) = \mathbf{b}^{+}(t) + \mathbf{N}^{+}(t) \begin{pmatrix} \mathbf{v}^{-}(t, 0) \\ \mathbf{v}^{+}(t, 1) \end{pmatrix}, \\ \mathbf{v}^{-}(t, 1) = \mathbf{b}^{-}(t) + \mathbf{N}^{-}(t) \begin{pmatrix} \mathbf{v}^{-}(t, 0) \\ \mathbf{v}^{+}(t, 1) \end{pmatrix}, \end{cases} $$
(7)

with appropriate choices of \(\mathbf{b}^{-}(t) \in \mathbb{R}^{\ell}\), \(\mathbf{b}^{+}(t) \in \mathbb{R}^{n-\ell}\), \(\mathbf{N}^{-}(t) \in \mathbb{R}^{\ell \times n}\), and \(\mathbf{N}^{+}(t) \in \mathbb{R}^{(n-\ell) \times n}\). Expressions (6) and (7) have the same form as the general boundary conditions considered in [23, Sect. 2]. The right-hand side represents the boundary datum, which is given since \(\mathbf{v}^{-}(t,0)\) and \(\mathbf{v}^{+}(t,1)\) are the exiting components of the solution. On the left-hand side of (6) and (7), the values of the entering components \(\mathbf{v}^{-}(t,1)\) and \(\mathbf{v}^{+}(t,0)\) of the solution are prescribed.
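
The diagonalization of Remark 1 and the recasting (6) of the boundary conditions can be illustrated numerically. The following sketch uses a toy \(2 \times 2\) system with constant coefficients and one negative and one positive speed (\(n = 2\), \(\ell = 1\)); all matrix entries are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy 2x2 strictly hyperbolic system (n = 2, ell = 1); illustrative values.
A = np.array([[0.5, -1.5],
              [-1.5, 0.5]])            # eigenvalues -1 and 2

lam, R = np.linalg.eig(A)
order = np.argsort(lam)                # enforce lambda_1 < ... < lambda_n (H-3)
lam, R = lam[order], R[:, order]
R = R / np.linalg.norm(R, axis=0)      # |r_i| = 1
L = np.linalg.inv(R)                   # rows l_j satisfy l_j . r_i = delta_ij

assert np.allclose(L @ A @ R, np.diag(lam))   # diagonal form (3)

# Boundary blocks B_0^0, B_0^1 (1 x 2 each, since n - ell = 1) and
# B_1^0, B_1^1 (1 x 2 each, since ell = 1); again illustrative values.
B00 = np.array([[1.0, 0.0]]); B01 = np.array([[0.0, 0.2]])
B10 = np.array([[0.3, 0.0]]); B11 = np.array([[0.0, 1.0]])

Rm, Rp = R[:, :1], R[:, 1:]            # R^- and R^+
Mhat = np.block([[B00 @ Rp, B01 @ Rm],
                 [B10 @ Rp, B11 @ Rm]])
Nhat = np.block([[B00 @ Rm, B01 @ Rp],
                 [B10 @ Rm, B11 @ Rp]])
assert abs(np.linalg.det(Mhat)) > 1e-12        # (H-5): M-hat is invertible

# Given the exiting components (v^-(t,0), v^+(t,1)) and the datum b(t),
# recover the entering components (v^+(t,0), v^-(t,1)) as in (6).
b = np.array([1.0, -0.5])
v_out = np.array([0.2, 0.7])
v_in = np.linalg.solve(Mhat, b - Nhat @ v_out)
```

The normalization \(\mathbf{l}_{j} \cdot \mathbf{r}_{i} = \delta _{ij}\) is obtained for free by taking L as the inverse of R.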

Remark 2

Since the map A is of class \(\mathbf{C^{2}}\), we deduce that the eigenvalues and eigenvectors have the same regularity. In particular, the source term h defined in (4) for the diagonal equation (3) satisfies the following estimates. For every \(T > 0\), there exists a constant \(L_{\mathbf{h}} > 0\) such that

$$ \begin{aligned} &\bigl\vert \mathbf{h} (t, x, \mathbf{v} ) \bigr\vert \le L_{\mathbf{h}} \bigl(1 + \vert \mathbf{v} \vert \bigr), \\ &\bigl\vert \mathbf{h} (t, x_{1}, \mathbf{v}_{1} ) - \mathbf{h} (t, x_{2}, \mathbf{v}_{2} ) \bigr\vert \le L_{ \mathbf{h}} \vert \mathbf{v}_{1} \vert \vert x_{1} - x_{2} \vert + L_{\mathbf{h}} \vert \mathbf{v}_{1} - \mathbf{v}_{2} \vert \end{aligned} $$

for a.e. \(t \in [0, T]\) and all \(x, x_{1}, x_{2} \in [0, 1]\) and \(\mathbf{v}, \mathbf{v}_{1}, \mathbf{v}_{2} \in \mathbb{R}^{n}\).

Solutions to (2a)–(2c) are to be intended in the sense of broad solutions, which are based on the concept of characteristic curves.

Definition 3

Given \(\tau \in \mathbb{R}^{+}\), \(\sigma \in [0, 1]\), and \(i \in \{1, \ldots, n \}\), an absolutely continuous function \(t \mapsto X_{i}(t; \tau, \sigma )\), defined in a possibly one-sided neighborhood of τ, is called the ith characteristic curve if it satisfies

$$ \frac{\mathrm{d}}{\mathrm{d}t} X_{i}(t; \tau, \sigma ) = \lambda _{i} \bigl(t, X_{i}(t;\tau, \sigma )\bigr) $$

for a.e. t where \(X_{i}(t;\tau,\sigma )\) is defined, and \(X_{i}(\tau; \tau, \sigma )=\sigma \).

Remark 4

By assumption (H-4) the function \(t \mapsto X_{i}(t; \tau, \sigma )\) is invertible. We denote the inverse function by \(x \mapsto T_{i}(x; \tau, \sigma )\).
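
Numerically, a characteristic curve can be traced by integrating the ODE of Definition 3. The following minimal sketch uses a hypothetical scalar speed \(\lambda (t,x) = 1 + 0.25 \sin (2\pi x) \ge 0.75\) (our choice, consistent with (H-4)) and the forward Euler method.

```python
import numpy as np

# Illustrative scalar speed bounded away from zero, as required by (H-4):
lam = lambda t, x: 1.0 + 0.25 * np.sin(2.0 * np.pi * x)

def characteristic(tau, sigma, t_end, n_steps=1000):
    """Forward-Euler approximation of t -> X(t; tau, sigma) solving
    dX/dt = lambda(t, X) with X(tau; tau, sigma) = sigma."""
    ts = np.linspace(tau, t_end, n_steps + 1)
    X = np.empty_like(ts)
    X[0] = sigma
    for k in range(n_steps):
        X[k + 1] = X[k] + (ts[k + 1] - ts[k]) * lam(ts[k], X[k])
    return ts, X

ts, X = characteristic(0.0, 0.2, 0.5)
# Since lambda >= 0.75 > 0, X is strictly increasing, hence invertible
# in t (cf. Remark 4):
assert np.all(np.diff(X) > 0)
```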

Definition 5

Fix \(T > 0\). A function \(\mathbf{u} \in \mathbf{C^{0}} ([0,T]; \mathbf{L^{1}} ((0,1); \mathbb{R}^{n} ) )\) is a broad solution to (2a)–(2c) if, defining for every \(i \in \{1, \ldots, n \}\) the ith component \(v_{i}\) of u as in Remark 1 and, consequently, writing u as

$$ \mathbf{u}(t,x) = \sum_{i=1}^{n} v_{i}(t,x) \mathbf{r}_{i}(t,x) = \mathbf{R}(t, x) \mathbf{v}(t,x) \quad\text{on } [0,T] \times [0,1], $$
(8)

the following conditions hold.

  1. 1.

    For all \(i \in \{1, \ldots, n \}\) and \(\tau \in [0, T]\) and for a.e. \(\sigma \in [0, 1]\), the equation

    $$ \frac{\mathrm{d}}{\mathrm{d}t} v_{i} \bigl(t, X_{i}(t; \tau,\sigma ) \bigr) = h_{i} \bigl(t, X_{i}( t;\tau,\sigma ),\mathbf{v} \bigl(t, X_{i}(t; \tau,\sigma )\bigr) \bigr) $$

    is satisfied for a.e. t where the characteristic curve \(X_{i}(t; \tau, \sigma )\) (see Definition 3) exists.

  2. 2.

    The boundary condition (2b), in the sense of formulation (6), is satisfied for a.e. \(t \in [0, T]\).

  3. 3.

    For every \(i \in \{1, \ldots, n \}\), the initial condition

    $$ v_{i} (0, x ) = \mathbf{l}_{i}(0, x) \cdot \bar { \mathbf{u}}(x) $$

    is satisfied for a.e. \(x \in [0, 1]\).

We have the following well-posedness result for (2a)–(2c).

Theorem 6

Fix \(T > 0\) and let hypotheses (H-1)(H-5) hold. For every \(t_{o} \in [0, T]\), there exists a process

$$ \mathcal{P}_{t_{o}}: [t_{o}, T] \times \mathcal{D}_{t_{o}} \longrightarrow \mathbf{L^{1}} \bigl( (0,1); \mathbb{R}^{n} \bigr), $$

where

$$ \mathcal{D}_{t_{o}} = \bigl\{ (\bar{\mathbf{u}}, \mathbf{b} ) \in \mathbf{L^{1}} \bigl( (0,1); \mathbb{R}^{n} \bigr) \times \mathbf{L^{1}} \bigl( (t_{o}, T); \mathbb{R}^{n} \bigr) : \mathrm{TV}(\bar{\mathbf{u}}) + \mathrm{TV}(\mathbf{b}) < +\infty \bigr\} $$

satisfying:

  1. 1.

    \(\mathbf{u}(t, \cdot ) = \mathcal {P}_{0} (t, \bar{\mathbf{u}}, \mathbf{b} )\) is the solution to (2a)(2c) in the sense of Definition 5.

  2. 2.

    \(\mathcal {P}_{t_{o}}(t_{o}, \bar {\mathbf{u}}, \mathbf{b}) = \bar {\mathbf{u}}\) for every \((\bar {\mathbf{u}}, \mathbf{b} ) \in \mathcal {D}_{t_{o}}\).

  3. 3.

    For all \(t_{o} \le t_{1} \le t_{2} \le T\) and \((\bar {\mathbf{u}}, \mathbf{b} ) \in \mathcal {D}_{t_{o}}\), we have:

    $$ \mathcal {P}_{t_{o}} (t_{2}, \bar {\mathbf{u}}, \mathbf{b} ) = \mathcal {P}_{t_{1}} \bigl(t_{2}, \mathcal {P}_{t_{o}} (t_{1}, \bar {\mathbf{u}}, \mathbf{b} ), \mathbf{b}_{|(t_{1}, T)} \bigr). $$
  4. 4.

    There exists \(L > 0\) such that

    $$ \bigl\Vert \mathcal {P}_{0} (t, \bar{\mathbf{u}}, \mathbf{b} ) - \mathcal {P}_{0} (t, \bar {\mathbf{u}}_{0}, \tilde {\mathbf{b}} ) \bigr\Vert _{\mathbf{L^{1}} (0, 1 )} \le L \bigl[ \Vert \bar{ \mathbf{u}} - \bar {\mathbf{u}}_{0} \Vert _{\mathbf{L^{1}}(0,1)} + \Vert \mathbf{b} - \tilde {\mathbf{b}} \Vert _{\mathbf{L^{1}}(0, T)} \bigr] $$
    (9)

    for a.e. \(t \in [0, T]\) and for all \(\bar{\mathbf{u}}, \bar{\mathbf{u}}_{0} \in \mathbf{L^{1}} (0,1 )\) and \(\mathbf{b}, \tilde {\mathbf{b}}\in \mathbf{L^{1}} (0, T )\).

  5. 5.

    There exists \(L > 0\) such that for a.e. \(t \in [0, T]\),

    $$ \begin{aligned} \mathbf{TV}_{[0,1]} \bigl( \mathcal {P}_{0} (t, \bar {\mathbf{u}}, \mathbf{b} ) \bigr) \le{}& L e^{L t} \bigl[1 + \mathbf{TV}_{[0, 1]} (\bar {\mathbf{u}} ) + \mathbf{TV}_{[0, t]} (\mathbf{b} ) \bigr] \\ &{} + L e^{L t} \bigl[ \Vert \bar {\mathbf{u}} \Vert _{ \mathbf{L^{\infty }} (0,1 )} + \Vert \mathbf{b} \Vert _{\mathbf{L^{\infty }} (0, t )} \bigr]. \end{aligned} $$
    (10)
  6. 6.

    There exists \(L > 0\) such that for a.e. \(t \in [0, T]\),

    $$ \begin{aligned} \bigl\Vert \mathcal {P}_{0} ( \cdot, \bar{\mathbf{u}}, \mathbf{b} ) \bigl(0^{+}\bigr) - \mathcal {P}_{0} (\cdot, \bar {\mathbf{u}}_{0}, \tilde {\mathbf{b}} ) \bigl(0^{+}\bigr) \bigr\Vert _{ \mathbf{L^{1}} (0, t )} \le{}& L \Vert \bar{ \mathbf{u}} - \bar {\mathbf{u}}_{0} \Vert _{\mathbf{L^{1}}(0,1)} \\ &{} + L \Vert \mathbf{b}- \tilde {\mathbf{b}} \Vert _{ \mathbf{L^{1}}(0, T)}. \end{aligned} $$
    (11)
  7. 7.

    There exists \(L > 0\) such that for a.e. \(t \in [0, T]\),

    $$ \begin{aligned} \bigl\Vert \mathcal {P}_{0} ( \cdot, \bar{\mathbf{u}}, \mathbf{b} ) \bigl(1^{-}\bigr) - \mathcal {P}_{0} (\cdot, \bar {\mathbf{u}}_{0}, \tilde {\mathbf{b}} ) \bigl(1^{-}\bigr) \bigr\Vert _{ \mathbf{L^{1}} (0, t )} \le{}& L \Vert \bar{ \mathbf{u}} - \bar {\mathbf{u}}_{0} \Vert _{\mathbf{L^{1}}(0,1)} \\ &{} + L \Vert \mathbf{b}- \tilde {\mathbf{b}} \Vert _{ \mathbf{L^{1}}(0, T)}. \end{aligned} $$
    (12)
  8. 8.

    There exists \(L > 0\) such that for a.e. \(t \in [0, T]\),

    $$ \bigl\Vert \mathcal {P}_{0}(t, \bar {\mathbf{u}}, \mathbf{b}) \bigr\Vert _{ \mathbf{L^{\infty }}(0,1)} \le L \bigl[ \Vert \bar {\mathbf{u}} \Vert _{\mathbf{L^{\infty }}} + 2 \Vert \mathbf{b} \Vert _{ \mathbf{L^{\infty }}(0, t)} + T \bigr]. $$
    (13)

Theorem 6 is in the same spirit as [8, Theorem 3.2], where the result is proved in the case of no boundaries. The proof in the case of two separate boundaries, contained in [21], does not cover the situation in this paper. The proof of Theorem 6 is given in Sect. 4.3.

2.2 Linear DAE

Consider, for \(T > 0\), the linear differential algebraic equation

$$ \begin{aligned} &\mathbf{E}\dot{\mathbf{w}}= \mathbf{H} \mathbf{w}+ \hat {\mathbf{f}}(t), \\ &\mathbf{w}(0)= \bar{\mathbf{w}}, \end{aligned} $$
(14)

where \(\mathbf{w}: [0,T] \to \mathbb{R}^{m}\) is the unknown, \(\mathbf{E}, \mathbf{H} \in \mathbb{R}^{m \times m}\) are given coefficients, \(\hat{\mathbf{f}}: [0,T] \to \mathbb{R}^{m}\) is the nonhomogeneous term, and \(\bar{\mathbf{w}} \in \mathbb{R}^{m}\) is the initial condition. If E is an invertible matrix, then (14) is clearly a classical system of ordinary differential equations; see, for example, [22] for the basic theory. The case of a singular matrix E is more delicate. Following [24], we introduce the following assumptions on the matrices \(\mathbf{E},\mathbf{H}\).

  1. (D-1)

    The matrix pair \((\mathbf{E},\mathbf{H})\) is regular, i.e., the map \(s \mapsto \mathrm{det}(s\mathbf{E}-\mathbf{H})\) is a nonzero polynomial.

  2. (D-2)

    The matrices E and H commute, i.e., \(\mathbf{E}\mathbf{H}= \mathbf{H}\mathbf{E}\).

Remark 7

Assumption (D-2) can be omitted at the cost of a manipulation of (14). Under assumption (D-1), there exists \(\tilde{s} \in \mathbb{R}\) such that \((\tilde{s} \mathbf{E}- \mathbf{H} )\) is nonsingular. Multiplying equation (14) from the left by \((\tilde{s}\mathbf{E}-\mathbf{H})^{-1}\), we obtain

$$ \widetilde {\mathbf{E}}\dot {\mathbf{w}}= \widetilde {\mathbf{H}} \mathbf{w}+ ( \tilde{s} \mathbf{E}- \mathbf{H} )^{-1} \hat {\mathbf{f}}(t), $$

where \(\widetilde {\mathbf{E}}= (\tilde{s} \mathbf{E}- \mathbf{H} )^{-1} \mathbf{E}\) and \(\widetilde {\mathbf{H}}= (\tilde{s} \mathbf{E}- \mathbf{H} )^{-1} \mathbf{H}\). We note that \(\tilde{s} \widetilde {\mathbf{E}}- \widetilde {\mathbf{H}}\) is the identity matrix, and hence the matrices \(\widetilde {\mathbf{E}}\) and \(\widetilde {\mathbf{H}}\) commute.
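
The manipulation of Remark 7 is easy to check numerically. In the following sketch the pair \((\mathbf{E}, \mathbf{H})\) is a toy example of our own with E singular but \((\mathbf{E}, \mathbf{H})\) regular.

```python
import numpy as np

# Toy regular pair (our choice): E is singular, yet det(sE - H) = -1 != 0.
E = np.array([[1.0, 0.0],
              [0.0, 0.0]])
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])

s = 1.0                                # any s with sE - H nonsingular works
M = s * E - H
assert abs(np.linalg.det(M)) > 1e-12   # (D-1) guarantees such an s exists

Minv = np.linalg.inv(M)
Et, Ht = Minv @ E, Minv @ H            # transformed pair of Remark 7

assert np.allclose(s * Et - Ht, np.eye(2))   # s*Et - Ht = I ...
assert np.allclose(Et @ Ht, Ht @ Et)         # ... hence Et and Ht commute
```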

If (D-1) holds, then according to [24, Theorem 2.7], we can transform E and H into their Weierstraß canonical form, i.e., there exist invertible transformations \(\mathbf{S}, \mathbf{T} \in \mathbb{R}^{m \times m}\) such that

$$\begin{aligned} (\mathbf{S}\mathbf{E}\mathbf{T},\mathbf{S}\mathbf{H}\mathbf{T} ) = \left ( \begin{pmatrix} \mathbf{I}_{1} &\mathbf{0} \\ \mathbf{0}& \mathbf{N} \end{pmatrix}, \begin{pmatrix} \mathbf{J}&\mathbf{0} \\ \mathbf{0}& \mathbf{I}_{2} \end{pmatrix} \right ), \end{aligned}$$
(15)

where \(\mathbf{I}_{1} \in \mathbb{R}^{m_{1} \times m_{1}}\) and \(\mathbf{I}_{2} \in \mathbb{R}^{m_{2} \times m_{2}}\) are identity matrices, \(\mathbf{J} \in \mathbb{R}^{m_{1} \times m_{1}}\) is a matrix in Jordan canonical form, and \(\mathbf{N} \in \mathbb{R}^{m_{2} \times m_{2}}\) is a nilpotent matrix, i.e., \(\mathbf{N}^{\nu }= 0\) for some \(\nu \in \mathbb{N} \setminus \{0\}\). The integers \(m_{1}\) and \(m_{2}\) satisfy \(m_{1} + m_{2} = m\). For later use, we decompose S into \(\mathbf{S}_{1} \in \mathbb{R}^{m_{1} \times m}\) and \(\mathbf{S}_{2} \in \mathbb{R}^{m_{2} \times m}\) and define the variables \(\mathbf{y} \in \mathbb{R}^{m_{1}}\) and \(\mathbf{z} \in \mathbb{R}^{m_{2}}\) such that

$$\begin{aligned} \begin{pmatrix} \mathbf{S}_{1} \\ \mathbf{S}_{2} \end{pmatrix} = \mathbf{S}, \qquad \begin{pmatrix} \mathbf{y} \\ \mathbf{z} \end{pmatrix} = \mathbf{T}^{-1}\mathbf{w}. \end{aligned}$$
(16)

Thus we can write (14) in the form

$$ \begin{aligned}& \dot{\mathbf{y}} = \mathbf{J} \mathbf{y}+ \mathbf{f}_{ \mathbf{y}}(t), \\ &\mathbf{N}\dot{\mathbf{z}}= \mathbf{z}+ \mathbf{f}_{\mathbf{z}}(t) ,\qquad \end{aligned} \begin{pmatrix} \mathbf{y}(0) \\ \mathbf{z}(0) \end{pmatrix} = \mathbf{T}^{-1} \bar {\mathbf{w}}, $$
(17)

where \(\mathbf{S}\hat {\mathbf{f}}= (\mathbf{f}_{\mathbf{y}}, \mathbf{f}_{\mathbf{z}} )^{\top}\).

Following [24, Chap. 2.2], we can give an explicit formula for the solution of (14):

$$ \begin{aligned} \mathbf{w}(t) ={}& e^{\mathbf{E}^{D} \mathbf{H} t} \mathbf{E}^{D} \mathbf{E} \bar{\mathbf{w}}_{0} + \int _{0}^{t} e^{ \mathbf{E}^{D} \mathbf{H}(t-s)} \mathbf{E}^{D} \hat {\mathbf{f}}(s) \,\mathrm{d} s \\ & {}- \bigl(\mathbf{I}-\mathbf{E}^{D} \mathbf{E} \bigr) \sum _{i=0}^{ \nu -1} \bigl(\mathbf{E}\mathbf{H}^{D} \bigr)^{i} \mathbf{H}^{D} \hat {\mathbf{f}}^{(i)}(t), \end{aligned} $$
(18)

where \(\bar{\mathbf{w}}_{0}\) solves

$$ \bar{\mathbf{w}} = \mathbf{E}^{D} \mathbf{E}\bar{ \mathbf{w}}_{0}- \bigl(\mathbf{I}-\mathbf{E}^{D} \mathbf{E} \bigr) \sum_{i=0}^{\nu -1} \bigl(\mathbf{E} \mathbf{H}^{D} \bigr)^{i} \mathbf{H}^{D} \hat { \mathbf{f}}^{(i)}(0). $$
(19)

Here the matrices \(\mathbf{E}^{D}\) and \(\mathbf{H}^{D}\) are the so-called Drazin inverses of E and H, respectively; see [24, Chap. 2].

Definition 8

A function \(\mathbf{w} \in \mathbf{C^{0}} ([0,T]; \mathbb{R}^{m} )\) is a solution to (14) if equations (18) and (19) hold for every \(t \in [0, T]\).

We have the following result on the existence and uniqueness of solutions to (14).

Theorem 9

([24, Theorem 2.29 and Corollary 2.30])

Assume that hypotheses (D-1) and (D-2) hold. Let \(\hat{\mathbf{f}} \in \mathbf{C}^{\nu - 1} ([0,T]; \mathbb{R}^{m} )\), where ν is the smallest natural number such that \(\mathbf{N}^{\nu }= 0\). Then there exists a unique solution to (14) in the sense of Definition 8.

Remark 10

In the case \(\nu = 1\), Theorem 9 remains valid also when \(\hat {\mathbf{f}}\) is a function of bounded variation. In this setting, we need to relax the regularity of w to the class of bounded-variation functions, and the expression of the solution to (14) becomes, for a.e. \(t \in [0, T]\),

$$ \mathbf{w}(t) = e^{\mathbf{E}^{D} \mathbf{H} t} \mathbf{E}^{D} \mathbf{E} \bar{ \mathbf{w}}_{0} + \int _{0}^{t} e^{\mathbf{E}^{D} \mathbf{H}(t-s)} \mathbf{E}^{D} \hat {\mathbf{f}}(s) \,\mathrm{d} s - \bigl(\mathbf{I}-\mathbf{E}^{D} \mathbf{E} \bigr) \mathbf{H}^{D} \hat {\mathbf{f}}(t), $$

where \(\bar{\mathbf{w}} = \mathbf{E}^{D} \mathbf{E}\bar{\mathbf{w}}_{0}- (\mathbf{I}-\mathbf{E}^{D} \mathbf{E} ) \mathbf{H}^{D} \hat {\mathbf{f}}(0^{+})\).
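
The index-one formula of Remark 10 can be checked on a small example. The pair below is our own choice, already in Weierstraß form (15) with \(m_{1} = m_{2} = 1\), \(\mathbf{J} = -1\), \(\mathbf{N} = 0\), so the Drazin inverses can be read off directly.

```python
import numpy as np

# Toy index-1 pair (our choice), already in Weierstrass form.
E  = np.diag([1.0, 0.0]);  H  = np.diag([-1.0, 1.0])
ED = np.diag([1.0, 0.0]);  HD = np.diag([-1.0, 1.0])   # Drazin inverses
I2 = np.eye(2)
fhat = lambda t: np.array([1.0, t])    # inhomogeneity (smooth, hence BV)

def w(t, w0, n=4000):
    """Evaluate the index-1 solution formula of Remark 10
    (trapezoid rule for the convolution integral)."""
    s = np.linspace(0.0, t, n + 1)
    ds = s[1] - s[0]
    # e^{E^D H (t-s)} E^D fhat(s): only the first component is nonzero here
    g = np.exp(-(t - s))
    integral = np.array([ds * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2), 0.0])
    flow = np.diag([np.exp(-t), 1.0])  # e^{E^D H t}
    return flow @ (ED @ E @ w0) + integral - (I2 - ED @ E) @ (HD @ fhat(t))

t, w0 = 0.7, np.array([2.0, 0.0])
wt = w(t, w0)
# Exact solution of E w' = H w + fhat here: w1 = 1 + exp(-t), w2 = -t.
assert np.allclose(wt, [1.0 + np.exp(-t), -t], atol=1e-6)
```

The second component is purely algebraic (\(0 \cdot \dot{z} = z + t\), so \(z = -t\)), which is exactly the term \(-(\mathbf{I}-\mathbf{E}^{D} \mathbf{E} ) \mathbf{H}^{D} \hat {\mathbf{f}}(t)\) in the formula.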

3 The coupled problem

Now we consider the coupled problem of switched hyperbolic PDE and switched DAE (swDAE). The complete system is

$$\begin{aligned} &\partial _{t} \mathbf{u}(t,x) + \mathbf{A}_{\sigma }(t,x) \partial _{x} \mathbf{u}(t,x)= \mathbf{s}_{\sigma}\bigl(t,x, \mathbf{u}(t,x)\bigr), \end{aligned}$$
(20a)
$$\begin{aligned} &\mathbf{B}_{\sigma }(t) \begin{pmatrix} \mathbf{u}(t,0) \\ \mathbf{u}(t,1) \end{pmatrix}= \mathbf{B}_{\mathbf{w},\sigma}(t) \mathbf{w}(t) + \mathbf{b}_{\sigma}(t), \end{aligned}$$
(20b)
$$\begin{aligned} &\mathbf{u}(0,x)= \bar{\mathbf{u}}(x), \\ &\mathbf{E}_{\sigma }\dot{\mathbf{w}} = \mathbf{H}_{\sigma } \mathbf{w}+ \mathbf{K}_{0,\sigma}(t) \mathbf{u}\bigl(t,0^{+}\bigr) + \mathbf{K}_{1,\sigma}(t) \mathbf{u}\bigl(t,1^{-}\bigr) + \mathbf{f}(t), \\ &\mathbf{w}(0)=\overline{\mathbf{w}}, \end{aligned}$$
(20c)

where \(x\in [0,1]\) and \(t \in [0,T]\) for \(T>0\), \(\mathbf{u}: [0,T] \times [0,1] \to \mathbb{R}^{n}\) is the solution of the PDE (20a), \(\mathbf{A}_{\sigma}: [0,T] \times [0,1] \to \mathbb{R}^{n \times n}\), \(\mathbf{s}_{\sigma}: [0,T] \times [0,1] \times \mathbb{R}^{n} \to \mathbb{R}^{n}\) is a source term, \(\mathbf{B}_{\sigma}: [0,T] \to \mathbb{R}^{n \times 2n}\), \(\mathbf{B}_{\mathbf{w},\sigma}: [0,T] \to \mathbb{R}^{n \times m}\), and \(\mathbf{b}_{\sigma}: [0,T] \to \mathbb{R}^{n}\) constitute the boundary or coupling conditions, \(\bar{\mathbf{u}}: [0,1] \to \mathbb{R}^{n}\) is the initial condition for system (20a), \(\mathbf{w}: [0,T] \to \mathbb{R}^{m}\) is the solution of the swDAE (20c), \(\sigma: \mathbb{R} \to \mathbb{N}\) is a switching signal with finitely many switching times, \(\mathbf{E}_{\sigma}, \mathbf{H}_{\sigma} \in \mathbb{R}^{m \times m}\), \(\mathbf{K}_{0,\sigma}, \mathbf{K}_{1,\sigma}: [0,T] \to \mathbb{R}^{m \times n}\), and \(\mathbf{f}: [0,T] \to \mathbb{R}^{m}\) form the DAE, and \(\bar{\mathbf{w}} \in \mathbb{R}^{m}\) is the initial condition for system (20c). In the following, we restrict ourselves to the case of an swDAE system with index \(\nu = 1\).

Note that (20b) is an algebraic equation and (20c) contains algebraic equations. Therefore the coupled problem cannot be addressed simply as a combination of the two separate subsystems. Equations (20b) and (20c) have to be chosen so that the PDE provides information only via the outgoing characteristics and sufficient data are given as boundary conditions, as the following trivial example illustrates.

Example 11

Consider the system

$$\begin{aligned} \textstyle\begin{cases} \partial _{t} u + \partial _{x} u = 0, & t > 0, x \in [0,1], \\ u(t,0)= w, & t > 0, \\ 0\cdot \dot{w} = w -u(t,0), & t > 0. \end{cases}\displaystyle \end{aligned}$$

The PDE is a simple transport equation with characteristic speed 1; hence its solution is completely determined by the initial and left boundary data. In this example, the differential algebraic equation is unable to select the boundary datum, since the DAE and the boundary condition coincide. In other words, the boundary condition does not carry any information; thus the transport equation has infinitely many solutions.

To avoid settings like those of Example 11, we rewrite the PDE into characteristic variables and decompose the DAE into algebraic equations and ODEs. The resulting system has the form

$$\begin{aligned} & \partial _{t} \mathbf{v}+ \boldsymbol{\Lambda}(t,x) \partial _{x} \mathbf {v}= \mathbf{h}(t,x,\mathbf{v}), \\ & \begin{pmatrix} \mathbf{v}^{+}(t,0^{+}) \\ \mathbf{v}^{-}(t,1^{-}) \end{pmatrix} = \begin{pmatrix} \mathbf{B}_{\mathbf{y},0}(t)& \mathbf{B}_{\mathbf{z},0}(t) \\ \mathbf{B}_{\mathbf{y},1}(t)& \mathbf{B}_{\mathbf{z},1}(t) \end{pmatrix} \begin{pmatrix} \mathbf{y}(t) \\ \mathbf{z}(t) \end{pmatrix} + \begin{pmatrix} \mathbf{b}_{0}(t) \\ \mathbf{b}_{1}(t) \end{pmatrix} \\ &\phantom{ \begin{pmatrix} \mathbf{v}^{+}(t,0^{+}) \\ \mathbf{v}^{-}(t,1^{-}) \end{pmatrix} =} {}+ \begin{pmatrix} \mathbf{N}^{-}(t) \\ \mathbf{N}^{+}(t) \end{pmatrix} \begin{pmatrix} \mathbf{v}^{-}(t,0) \\ \mathbf{v}^{+}(t,1) \end{pmatrix}, \\ &\mathbf{v}(0,x) = \bar{\mathbf{v}} (x), \\ &\dot{\mathbf{y}}(t) = \mathbf{J}\mathbf{y}(t) + \mathbf{S}_{1} \mathbf{K}_{0}(t)\mathbf{R}(t,0) \begin{pmatrix} \mathbf{v}^{-}(t,0^{+}) \\ \mathbf{v}^{+}(t,0^{+}) \end{pmatrix} \\ &\phantom{\dot{\mathbf{y}}(t) = }{} + \mathbf{S}_{1}\mathbf{K}_{1}(t)\mathbf{R}(t,1) \begin{pmatrix} \mathbf{v}^{-}(t,1^{-}) \\ \mathbf{v}^{+}(t,1^{-}) \end{pmatrix} + \mathbf{S}_{1}\mathbf{f}(t), \\ &\mathbf{z}(t) = - \mathbf{S}_{2}\mathbf{K}_{0}(t) \mathbf{R}(t,0) \begin{pmatrix} \mathbf{v}^{-}(t,0^{+}) \\ \mathbf{v}^{+}(t,0^{+}) \end{pmatrix} \\ &\phantom{\mathbf{z}(t) =}{} - \mathbf{S}_{2}\mathbf{K}_{1}(t)\mathbf{R}(t,1) \begin{pmatrix} \mathbf{v}^{-}(t,1^{-}) \\ \mathbf{v}^{+}(t,1^{-}) \end{pmatrix} - \mathbf{S}_{2}\mathbf{f}(t), \\ &\mathbf{y}(0) =\bar{\mathbf{y}}. \end{aligned}$$
(21)

The algebraic conditions do not conflict with the boundary conditions, provided that

  1. (C-1)

    For the coupled system (21),

    $$\begin{aligned} \mathbf{S}_{2}\mathbf{K}_{0}(t)\mathbf{R}^{+}(t,0)= \mathbf{0}\quad \text{and}\quad \mathbf{S}_{2}\mathbf{K}_{1}(t) \mathbf{R}^{-}(t,1)= \mathbf{0}, \end{aligned}$$

    where \(\mathbf{S}_{2}\) is chosen as in (16). We further assume that \(\mathbf{S}_{1}\mathbf{K}_{0}(t)\), \(\mathbf{S}_{1}\mathbf{K}_{1}(t)\), and \(\mathbf{f}(t)\) are measurable in time and bounded.

Remark 12

Note that if this assumption is not satisfied, then it might be possible to transfer these algebraic relations into the formulation of the coupling conditions.

With assumption (C-1), we can decouple the algebraic equations and replace z in the boundary conditions so that the new system reads

$$ \begin{aligned}& \partial _{t} \mathbf{v}+ \boldsymbol{\Lambda}(t,x) \partial _{x} \mathbf {v}= \mathbf{h}(t,x, \mathbf{v}), \\ & \begin{pmatrix} \mathbf{v}^{+}(t,0) \\ \mathbf{v}^{-}(t,1) \end{pmatrix}= \begin{pmatrix} \mathbf{B}_{\mathbf{y},0}(t) \\ \mathbf{B}_{\mathbf{y},1}(t) \end{pmatrix} \mathbf{y}(t) + \begin{pmatrix} \tilde{\mathbf{b}}_{0}(t) \\ \tilde{\mathbf{b}}_{1}(t) \end{pmatrix} + \begin{pmatrix} \tilde{\mathbf{N}}^{-}(t) \\ \tilde{\mathbf{N}}^{+}(t) \end{pmatrix} \begin{pmatrix} \mathbf{v}^{-}(t,0^{+}) \\ \mathbf{v}^{+}(t,1^{-}) \end{pmatrix} \\ &\mathbf{v}(0,x)= \bar{\mathbf{v}} (x), \\ &\dot{\mathbf{y}}(t)= \mathbf{J}\mathbf{y}(t) + \mathbf{S}_{1} \mathbf{K}_{0}(t)\mathbf{R}(t,0) \begin{pmatrix} \mathbf{v}^{-}(t,0^{+}) \\ \mathbf{v}^{+}(t,0^{+}) \end{pmatrix} \\ &\phantom{\dot{\mathbf{y}}(t)=}{} + \mathbf{S}_{1}\mathbf{K}_{1}(t)\mathbf{R}(t,1) \begin{pmatrix} \mathbf{v}^{-}(t,1^{-}) \\ \mathbf{v}^{+}(t,1^{-}) \end{pmatrix} + \mathbf{S}_{1}\mathbf{f}(t). \end{aligned} $$
(22)

Note that the terms \(\tilde {\mathbf{N}}^{-}\) and \(\tilde {\mathbf{N}}^{+}\) in (22) can be different from zero even if \(\mathbf{N}^{-}=\mathbf{0}\) and \(\mathbf{N}^{+}=\mathbf{0}\) in (21). Moreover, the dependencies on \(\mathbf{v}^{+}(t,0^{+})\) and \(\mathbf{v}^{-}(t,1^{-})\) in the ODE can be replaced by means of the boundary conditions.

We finally rewrite system (22) in the more compact form

$$ \begin{aligned} &\partial _{t} \mathbf{u}(t,x) + \mathbf{A}(t,x) \partial _{x} \mathbf{u}(t,x)= \mathbf{s} \bigl(t,x,\mathbf{u}(t,x)\bigr), \\ &\mathbf{P}(t) \begin{pmatrix} \mathbf{u}(t,0) \\ \mathbf{u}(t,1) \end{pmatrix} = \mathbf{P}_{\mathbf{y}}(t) \mathbf{y}(t) + \mathbf{p}(t), \\ &\mathbf{u}(0,x) = \bar{\mathbf{u}}(x), \\ &\dot{\mathbf{y}}(t)= \mathbf{J}\mathbf{y}(t) + \begin{pmatrix} \mathbf{G}_{0} & \mathbf{G}_{1} \end{pmatrix} \begin{pmatrix} \mathbf{u}(t,0^{+}) \\ \mathbf{u}(t,1^{-}) \end{pmatrix} + \mathbf{g}(t), \\ &\mathbf{y}(0) = \bar {\mathbf{y}}, \end{aligned} $$
(23)

with

$$\begin{aligned} \mathbf{P}(t) = \begin{pmatrix} -\tilde {\mathbf{N}}^{-}_{0}&\mathbf{I}&\mathbf{0}&- \tilde {\mathbf{N}}_{1}^{-} \\ -\tilde {\mathbf{N}}^{+}_{0}&\mathbf{0}&\mathbf{I}&- \tilde {\mathbf{N}}_{1}^{+} \end{pmatrix},\qquad \mathbf{P}_{\mathbf{y}} = \begin{pmatrix} \mathbf{B}_{\mathbf{y},0}(t) \\ \mathbf{B}_{\mathbf{y},1}(t) \end{pmatrix},\qquad \mathbf{p}= \begin{pmatrix} \tilde{\mathbf{b}}_{0}(t) \\ \tilde{\mathbf{b}}_{1}(t) \end{pmatrix}, \end{aligned}$$

and \(\mathbf{G}_{0} = \mathbf{S}_{1} \mathbf{K}_{0}\), \(\mathbf{G}_{1} = \mathbf{S}_{1} \mathbf{K}_{1}\), \(\mathbf{g}= \mathbf{S}_{1} \mathbf{f}\). System (23) is equivalent to (20a)–(20c) thanks to (C-1). For this system, we provide analytical results.
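
The well-posedness proof alternates between solving the PDE with a given ODE state and solving the ODE with the resulting boundary traces. The following sketch illustrates such an iteration on a toy coupling of our own, not taken from the paper: a single transport equation with unit speed, inflow \(u(t,0) = y(t)\), and the scalar ODE \(\dot{y} = -y + u(t,1)\), discretized by upwind (PDE) and explicit Euler (ODE).

```python
import numpy as np

# Toy coupled system (our construction): u_t + u_x = 0 on [0,1],
# u(t,0) = y(t), y' = -y + u(t,1); alternate PDE and ODE solves
# until the coupling variable y stabilizes.
nx, T = 100, 2.0
dx = 1.0 / nx
nt = 400
dt = T / nt                            # dt/dx = 0.5 <= 1: CFL condition
x = np.linspace(0.0, 1.0, nx + 1)
u0 = np.sin(np.pi * x)                 # initial datum u(0,x)
y0 = 0.0                               # initial datum y(0)

def solve_pde(y):
    """Upwind transport with inflow u(t,0) = y(t); returns the trace u(t,1)."""
    u = u0.copy()
    trace = np.empty(nt + 1)
    trace[0] = u[-1]
    for k in range(nt):
        u[1:] = u[1:] - dt / dx * (u[1:] - u[:-1])
        u[0] = y[k + 1]
        trace[k + 1] = u[-1]
    return trace

def solve_ode(trace):
    """Explicit Euler for y' = -y + u(t,1), given the boundary trace."""
    y = np.empty(nt + 1)
    y[0] = y0
    for k in range(nt):
        y[k + 1] = y[k] + dt * (-y[k] + trace[k])
    return y

y = np.zeros(nt + 1)                   # initial guess for the iteration
for it in range(20):
    y_new = solve_ode(solve_pde(y))
    err = np.max(np.abs(y_new - y))
    y = y_new
    if err < 1e-12:
        break
# Finite propagation speed makes the iteration stabilize after a few sweeps.
```

Because the boundary influence needs a fixed number of time steps to reach \(x = 1\), each sweep fixes the solution on a further portion of the time horizon, mirroring the contraction argument behind the fixed-point construction.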

Definition 13

Fix \(T > 0\). A pair \((\mathbf{u}, \mathbf{y})\) is a solution to (23) on the time interval \([0, T]\) if the following conditions hold.

  1. 1.

    u is a broad solution on \([0, T]\) to

    $$ \textstyle\begin{cases} \partial _{t} \mathbf{u}+ \mathbf{A}(t,x) \partial _{x} \mathbf{u}= \mathbf{s}(t,x,\mathbf{u}), \\ \mathbf{P}(t) \begin{pmatrix} \mathbf{u}(t,0) \\ \mathbf{u}(t,1) \end{pmatrix} = \mathbf{P}_{\mathbf{y}}(t) \mathbf{y}(t) + \mathbf{p}(t), \\ \mathbf{u}(0,x) = \bar{\mathbf{u}}, \end{cases} $$

    in the sense of Definition 5.

  2.

    \(\mathbf{y}\in \mathbf{C^{0}} ([0,T]; \mathbb{R}^{m_{1}} )\) satisfies

    $$ \mathbf{y}(t) = \bar {\mathbf{y}}+ \int _{0}^{t} \bigl(\mathbf{J} \mathbf{y}(s) + \mathbf{G}(s) \bigr) \,\mathrm{d}s $$

    for every \(t \in [0, T]\), where

    $$ \mathbf{G}(t) = \mathbf{G}_{0}(t) \mathbf{u}\bigl(t, 0^{+} \bigr) + \mathbf{G}_{1}(t) \mathbf{u}\bigl(t,1^{-}\bigr) + \mathbf{g}(t) $$

    for a.e. \(t \in [0, T]\).

We have the following existence result.

Theorem 14

Assume that (C-1), (D-1), (D-2), and (H-1)–(H-5) hold. Then, for every \(T > 0\), there exists a semigroup

$$ \mathcal {S}: [0, T] \times \mathcal {D} \longrightarrow \mathcal {D}, $$

where

$$ \mathcal {D}= \bigl\{ (\bar {\mathbf{u}}, \bar {\mathbf{y}} ) \in \mathbf{L^{1}} \bigl( (0, 1 ); \mathbb{R}^{n} \bigr) \times \mathbb{R}^{m_{1}} : \mathbf{TV} (\bar {\mathbf{u}} ) < +\infty \bigr\} $$

satisfying:

  1.

    For every \((\bar {\mathbf{u}}, \bar {\mathbf{y}} ) \in \mathcal {D}\), the function \((\mathbf{u}(t, x), \mathbf{y}(t) ) = \mathcal {S} (t, \bar{\mathbf{u}}, \bar {\mathbf{y}} )(x)\) is a solution to the coupled system (20a)–(20c) (or to the alternative form (23)) in the sense of Definition 13.

  2.

    \(\mathcal {S}(0, \bar {\mathbf{u}}, \bar {\mathbf{y}}) = ( \bar {\mathbf{u}}, \bar {\mathbf{y}} )\) for every \((\bar {\mathbf{u}}, \bar {\mathbf{y}} ) \in \mathcal {D}\).

  3.

    For all \(0 \le t_{1} \le t_{2} \le T\) and \((\bar {\mathbf{u}}, \bar {\mathbf{y}} ) \in \mathcal {D}\), we have

    $$ \mathcal {S} (t_{2}, \bar {\mathbf{u}}, \bar {\mathbf{y}} ) = \mathcal {S} \bigl(t_{2} - t_{1}, \mathcal {S} (t_{1}, \bar { \mathbf{u}}, \bar {\mathbf{y}} ) \bigr). $$
  4.

    There exists \(L > 0\) such that

    $$ \bigl\Vert \mathcal {S} (t, \bar{\mathbf{u}}, \bar {\mathbf{y}} ) - \mathcal {S} (t, \tilde {\mathbf{u}}, \tilde {\mathbf{y}} ) \bigr\Vert _{\mathbf{L^{1}} (0, 1 )} \le L \bigl[ \Vert \bar{\mathbf{u}} - \tilde {\mathbf{u}} \Vert _{\mathbf{L^{1}}(0,1)} + \Vert \bar {\mathbf{y}} - \tilde {\mathbf{y}} \Vert _{\mathbf{L^{1}}(0, t)} \bigr] $$
    (24)

    for a.e. \(t \in [0, T]\) and for all \((\bar {\mathbf{u}}, \bar {\mathbf{y}} ) \in \mathcal {D}\) and \((\tilde {\mathbf{u}}, \tilde {\mathbf{y}} ) \in \mathcal {D}\).

Proof

First, introduce the sets

$$ \begin{aligned} &\mathcal {D}_{\mathbf{u}} = \Bigl\{ \mathbf{u}\in \mathbf{C^{0}} \bigl([0, T]; \mathbf{L^{1}} \bigl((0, 1 ); \mathbb{R}^{n} \bigr) \bigr) : \sup_{t \in [0, T]} \mathbf{TV} \bigl(\mathbf{u}(t) \bigr) + \Vert \mathbf{u} \Vert _{\mathbf{L^{\infty }}} < + \infty \Bigr\} , \\ &\mathcal {D}_{\mathbf{y}} = \bigl\{ \mathbf{y}\in \mathbf{C^{0}} \bigl([0, T]; \mathbb{R}^{m_{1}} \bigr) : \mathbf{TV} (\mathbf{y} ) < +\infty \bigr\} . \end{aligned} $$

We construct the solution to system (23) as the limit of an approximating sequence of solutions. The proof is divided into several steps.

Construction of approximate solutions.

Set \(\mathbf{u}_{0}(t,x) \equiv \bar{\mathbf{u}}(x)\) and \(\mathbf{y}_{0}(t) \equiv \bar{\mathbf{y}}\). For every \(k \geq 1\), given \(\mathbf{u}_{k-1}\in \mathcal {D}_{\mathbf{u}}\) and \(\mathbf{y}_{k-1}\in \mathcal {D}_{\mathbf{y}}\), recursively define \(\mathbf{u}_{k}\) as the solution to

$$\begin{aligned} \textstyle\begin{cases} \partial _{t} \mathbf{u}_{k}(t,x) + \mathbf{A}(t,x) \partial _{x} \mathbf{u}_{k}(t,x) = \mathbf{s}(t,x,\mathbf{u}_{k}), \\ \mathbf{P}(t) \begin{pmatrix} \mathbf{u}_{k}(t,0) \\ \mathbf{u}_{k}(t,1) \end{pmatrix} = \mathbf{P}_{\mathbf{y}}(t) \mathbf{y}_{k-1}(t) + \mathbf{p}(t), \\ \mathbf{u}_{k}(0,x) = \bar{\mathbf{u}}. \end{cases}\displaystyle \end{aligned}$$
(25)

Note that Theorem 6 applies to system (25), and hence the solution \(\mathbf{u}_{k}\) exists, is unique, and belongs to \(\mathcal {D}_{\mathbf{u}}\). Moreover, define \(\mathbf{y}_{k} \in \mathbf{C^{0}} ([0,T]; \mathbb{R}^{m_{1}} )\) as the solution to the linear nonhomogeneous system

$$ \textstyle\begin{cases} \dot{\mathbf{y}}_{k}(t) = \mathbf{J}\mathbf{y}_{k} (t)+ \mathbf{G}_{0}(t) \mathbf{u}_{k-1}(t,0^{+}) + \mathbf{G}_{1}(t) \mathbf{u}_{k-1} (t,1^{-}) + \mathbf{g}(t), \\ \mathbf{y}_{k}(0) = \bar{\mathbf{y}}. \end{cases} $$
(26)

Classic theory of ODEs implies that the previous system admits a unique solution, since by Theorem 6 and (C-1) the function

$$ t \longmapsto \mathbf{G}_{0}(t) \mathbf{u}_{k-1} \bigl(t,0^{+}\bigr) + \mathbf{G}_{1}(t) \mathbf{u}_{k-1} \bigl(t,1^{-}\bigr) + \mathbf{g}(t) $$

is measurable; see [9, Theorem 3.1]. The same function is also bounded by (C-1) and the definition of \(\mathcal {D}_{\mathbf{u}}\). Hence \(\mathbf{y}_{k}\) belongs to \(\mathcal {D}_{\mathbf{y}}\).
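The alternating scheme (25)–(26) can be illustrated on a toy model. In the sketch below all choices are ours for illustration only (a single transport equation with unit speed on \([0,1]\), inflow boundary \(u(t,0) = y(t)\), and the scalar ODE \(\dot{y} = -y + u(t,1)\); this is not the system of the paper): the PDE is solved exactly by characteristics, the ODE by the explicit Euler method, and the successive iterates stabilize.

```python
import numpy as np

T, N = 2.0, 400
t = np.linspace(0.0, T, N + 1)
dt = T / N
u_bar = lambda x: np.sin(2.0 * np.pi * x)   # initial datum for u (arbitrary)
y_bar = 1.0                                 # initial datum for y (arbitrary)

def pde_trace(y_prev):
    """Trace u_k(t, 1) of u_t + u_x = 0 with u(t, 0) = y_{k-1}(t):
    exact solution by characteristics (unit speed, unit interval)."""
    out = np.empty(N + 1)
    for i, ti in enumerate(t):
        out[i] = u_bar(1.0 - ti) if ti < 1.0 else np.interp(ti - 1.0, t, y_prev)
    return out

def ode_solve(trace_prev):
    """Explicit Euler for y_k' = -y_k + u_{k-1}(t, 1), y_k(0) = y_bar."""
    y = np.empty(N + 1)
    y[0] = y_bar
    for i in range(N):
        y[i + 1] = y[i] + dt * (-y[i] + trace_prev[i])
    return y

# start the iteration from the frozen initial data, as in the proof
y_k = np.full(N + 1, y_bar)
trace_k = np.full(N + 1, u_bar(1.0))
diffs = []
for _ in range(6):
    trace_next = pde_trace(y_k)      # u_k computed from y_{k-1}, cf. (25)
    y_next = ode_solve(trace_k)      # y_k computed from u_{k-1}, cf. (26)
    diffs.append(float(np.max(np.abs(y_next - y_k))))
    y_k, trace_k = y_next, trace_next
```

In this toy case the boundary input reaches the opposite trace only after the crossing time of the characteristics, which is why the increments between iterates die out after finitely many steps on a bounded time horizon.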

\(\mathbf{y}_{k}\) is a Cauchy sequence.

For \(k \ge 2\) and \(t \in [0, T]\), using (26), we obtain

$$\begin{aligned} \bigl\vert \mathbf{y}_{k}(t) - \mathbf{y}_{k-1}(t) \bigr\vert \le{}& \int _{0}^{t} \bigl\vert \mathbf{J} \bigl( \mathbf{y}_{k}(s) - \mathbf{y}_{k-1}(s) \bigr) \bigr\vert \,\mathrm{d}s \\ &{}+ \int _{0}^{t} \bigl\vert \mathbf{G}_{0}(s) \bigl( \mathbf{u}_{k-1}(s, 0) - \mathbf{u}_{k-2}(s, 0) \bigr) \bigr\vert \,\mathrm{d}s \\ &{}+ \int _{0}^{t} \bigl\vert \mathbf{G}_{1}(s) \bigl( \mathbf{u}_{k-1}(s, 1) - \mathbf{u}_{k-2}(s, 1) \bigr) \bigr\vert \,\mathrm{d}s \\ \leq{}& \Vert \mathbf{J} \Vert \int _{0}^{t} \bigl\vert \mathbf{y}_{k}(s) - \mathbf{y}_{k-1}(s) \bigr\vert \,\mathrm{d}s \\ &{}+ L_{G} \int _{0}^{t} \bigl\vert \mathbf{u}_{k-1}(s, 0) - \mathbf{u}_{k-2}(s, 0) \bigr\vert \,\mathrm{d}s \\ &{}+ L_{G} \int _{0}^{t} \bigl\vert \mathbf{u}_{k-1}(s, 1) - \mathbf{u}_{k-2}(s, 1) \bigr\vert \,\mathrm{d}s, \end{aligned}$$

where \(L_{G}:= \max \{ \sup_{t \in [0, T]} \Vert \mathbf{G}_{0}(t) \Vert , \sup_{t \in [0, T]} \Vert \mathbf{G}_{1}(t) \Vert \}\). By the Gronwall lemma, for \(k \ge 2\) and \(t \in [0, T]\), we deduce that

$$ \begin{aligned} \bigl\vert \mathbf{y}_{k}(t) - \mathbf{y}_{k-1}(t) \bigr\vert \le{}& \mathrm{e}^{ \Vert \mathbf{J} \Vert t} L_{G} \bigl\Vert \mathbf{u}_{k-1}(\cdot, 0) - \mathbf{u}_{k-2}(\cdot, 0) \bigr\Vert _{ \mathbf{L^{1}}(0,t)} \\ &{}+ \mathrm{e}^{ \Vert \mathbf{J} \Vert t} L_{G} \bigl\Vert \mathbf{u}_{k-1}(\cdot, 1) - \mathbf{u}_{k-2}(\cdot, 1) \bigr\Vert _{ \mathbf{L^{1}}(0,t)}. \end{aligned} $$
(27)

By (11) and (12) we obtain that for \(k \ge 3\),

$$\begin{aligned} \bigl\vert \mathbf{y}_{k}(t) - \mathbf{y}_{k-1}(t) \bigr\vert & \leq \mathrm{e}^{ \Vert \mathbf{J} \Vert t} L_{G} L \bigl\Vert \mathbf{P}_{\mathbf{y}} (\mathbf{y}_{k-2} - \mathbf{y}_{k-3} ) \bigr\Vert _{L^{1}(0,t)} \\ & \le \mathrm{e}^{ \Vert \mathbf{J} \Vert t} L_{G} L \Vert \mathbf{P}_{\mathbf{y}} \Vert \int _{0}^{t} \bigl\vert \mathbf{y}_{k-2}(s) - \mathbf{y}_{k-3}(s) \bigr\vert \,\mathrm{d}s. \end{aligned}$$

We apply [5, Lemma 4.2], i.e., Lemma 16 with \(\alpha = 0\), \(\beta = \mathrm{e}^{ \Vert \mathbf{J} \Vert T} L_{G} L \Vert \mathbf{P}_{\mathbf{y}} \Vert \), and \(\mathbf{h}_{k}(t)= \vert \mathbf{y}_{k}(t)-\mathbf{y}_{k-1}(t) \vert \), to the inequality

$$\begin{aligned} \mathbf{h}_{n}(t) \leq \alpha + \beta \int _{0}^{t} \mathbf{h}_{n-2} ( \tau ) \, \mathrm{d} \tau \end{aligned}$$

and obtain that for all \(n\geq 1\),

$$\begin{aligned} \max \bigl\{ \mathbf{h}_{2n}(t), \mathbf{h}_{2n+1}(t) \bigr\} \leq \alpha \sum_{i=0}^{n-1} \frac{\beta ^{i} t^{i}}{i!} + Y \frac{\beta ^{n} t^{n}}{n!}, \end{aligned}$$

where \(Y \ge \max \{ \Vert \mathbf{h}_{0} \Vert , \Vert \mathbf{h}_{1} \Vert \}\).

Thus there exists a positive constant \(C_{1}\) such that

$$ \Vert \mathbf{y}_{k} - \mathbf{y}_{k-1} \Vert _{\mathbf{C^{0}} ([0, T] )} \leq C_{1} \frac{ ( \mathrm{e}^{ \Vert \mathbf{J} \Vert T} L_{G} L \Vert \mathbf{P}_{\mathbf{y}} \Vert )^{k} T^{k} }{k!} $$

for every \(k \ge 3\). Therefore, for every \(k > j \ge 3\),

$$\begin{aligned} \Vert \mathbf{y}_{k} - \mathbf{y}_{j} \Vert _{\mathbf{C^{0}} ([0, T] )} & \leq \sum_{i=j + 1}^{k} \Vert \mathbf{y}_{i} - \mathbf{y}_{i-1} \Vert _{\mathbf{C^{0}} ([0, T] )} \\ & \leq C_{1} \sum_{i=j+1}^{k} \frac{ ( \mathrm{e}^{ \Vert \mathbf{J} \Vert T} L_{G} L \Vert \mathbf{P}_{\mathbf{y}} \Vert )^{i} T^{i} }{i!}, \end{aligned}$$

proving that \(\mathbf{y}_{k}\) is a Cauchy sequence in \(\mathbf{C^{0}} ([0, T] )\). Thus there exists \(\mathbf{y}^{*} \in \mathbf{C^{0}}([0,T])\) such that \(\mathbf{y}_{k}\) converges to \(\mathbf{y}^{*}\) in \(\mathbf{C^{0}} ([0, T] )\) as \(k \to +\infty \).
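The factorial in the bound is what makes the argument global in time: the increments are summable for every fixed \(T\), however large the constant \(\beta = \mathrm{e}^{\Vert \mathbf{J} \Vert T} L_{G} L \Vert \mathbf{P}_{\mathbf{y}} \Vert \) may be. A quick numerical illustration (the value \(\beta T = 25\) is an arbitrary choice, not from the paper):

```python
import math

# Terms (beta*T)**k / k! of the increment bound: they grow at first,
# but the factorial eventually dominates, so the series converges and
# no smallness assumption on T or on the data is needed.
beta_T = 25.0
terms = [beta_T ** k / math.factorial(k) for k in range(120)]
peak = max(terms)        # the terms become very large ...
tail = sum(terms[80:])   # ... yet the tail of the series is negligible
```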

\(\mathbf{u}_{k}\) is a Cauchy sequence.

Using (9), we deduce the existence of a constant \(C > 0\) such that for all k and \(k'\), we have the estimate

$$\begin{aligned} \bigl\Vert \mathbf{u}_{k}(t, \cdot ) - \mathbf{u}_{k'}(t, \cdot ) \bigr\Vert _{\mathbf{L^{1}} (0,1 )} & \leq C \Vert \mathbf{y}_{k-1} - \mathbf{y}_{k'-1} \Vert _{\mathbf{L^{1}} (0, T )} \\ & \le CT \Vert \mathbf{y}_{k-1} - \mathbf{y}_{k'-1} \Vert _{ \mathbf{C^{0}} ([0, T] )} \end{aligned}$$

for every \(t \in [0, T]\). Thus \(\mathbf{u}_{k}\) is a Cauchy sequence in \(\mathbf{C^{0}} ( [0,T]; \mathbf{L^{1}}(0,1) )\), proving the existence of \(\mathbf{u}^{*} \in \mathbf{C^{0}} ( [0,T]; \mathbf{L^{1}}(0,1) )\) such that \(\mathbf{u}_{k}\) converges to \(\mathbf{u}^{*}\) in \(\mathbf{C^{0}} ( [0,T]; \mathbf{L^{1}}(0,1) )\) as \(k \to +\infty \).

The pair \((\mathbf{u}^{*}, \mathbf{y}^{*} )\) is a solution to (23).

First, we show that \(\mathbf{y}^{*}\) is a solution to the ODE with the input from \(\mathbf{u}^{*}\). Due to (26), for every \(t \in [0, T]\), we have

$$\begin{aligned} \mathbf{y}_{k}(t) = \bar{\mathbf{y}} + \int _{0}^{t} \mathbf{J} \mathbf{y}_{k}(s) \,\mathrm{d}s + \int _{0}^{t} \bigl[\mathbf{G}_{0}(s) \mathbf{u}_{k-1}\bigl(s,0^{+}\bigr) + \mathbf{G}_{1}(s) \mathbf{u}_{k-1}\bigl(s,1^{-}\bigr) + \mathbf{g}(s) \bigr] \, \mathrm{d}s. \end{aligned}$$

Using again (11) and (12), we deduce that both sequences \(\mathbf{u}_{k} (\cdot, 0^{+} )\) and \(\mathbf{u}_{k} (\cdot, 1^{-} )\) are Cauchy sequences in \(\mathbf{L^{1}} (0, T )\) and the limits are respectively \(\mathbf{u}^{*} (\cdot, 0^{+} )\) and \(\mathbf{u}^{*} (\cdot, 1^{-} )\), since the noncharacteristic condition (H-4) holds; see [1]. Passing to the limit as \(k\to \infty \), we thus obtain

$$\begin{aligned} \mathbf{y}^{*}(t) = \bar{\mathbf{y}} + \int _{0}^{t} \mathbf{J} \mathbf{y}^{*}(s) \,\mathrm{d}s + \int _{0}^{t} \bigl[\mathbf{G}_{0}(s) \mathbf{u}^{*}\bigl(s,0^{+}\bigr) + \mathbf{G}_{1}(s) \mathbf{u}^{*}\bigl(s,1^{-}\bigr) + \mathbf{g}(s) \bigr] \, \mathrm{d}s, \end{aligned}$$

proving that \(\mathbf{y}^{*}\) satisfies condition 2 of Definition 13. Moreover, note that the last integral in the previous equation is uniformly bounded because of (13) and (C-1). Hence the previous equation implies that \(\mathbf{y}^{*}\) has finite total variation.

Next, we define \(\tilde{\mathbf{u}}\) as the solution to the hyperbolic system

$$\begin{aligned} \textstyle\begin{cases} \partial _{t} \tilde {\mathbf{u}}(t,x) + \mathbf{A}(t,x) \partial _{x} \tilde {\mathbf{u}}(t,x) = \mathbf{s}(t,x, \tilde {\mathbf{u}}), \\ \mathbf{P}(t) \begin{pmatrix} \tilde {\mathbf{u}}(t, 0) \\ \tilde {\mathbf{u}}(t, 1) \end{pmatrix} = \mathbf{P}_{\mathbf{y}}(t) \mathbf{y}^{*} (t) + \mathbf{p}(t), \\ \tilde {\mathbf{u}}(0,x) = \bar{\mathbf{u}}, \end{cases}\displaystyle \end{aligned}$$

which exists and is unique by Theorem 6. Due to (9), for \(t \in [0, T]\) and \(k \ge 1\), we have that

$$\begin{aligned} \bigl\Vert \tilde{\mathbf{u}}(t)-\mathbf{u}_{k}(t) \bigr\Vert _{ \mathbf{L^{1}} (0,1 )} \leq L \bigl\Vert \mathbf{y}^{*}- \mathbf{y}_{k-1} \bigr\Vert _{\mathbf{L^{1}}(0,t)} \end{aligned}$$

for some positive constant L. Since \(\mathbf{y}_{k}\) is a Cauchy sequence and \(\mathbf{u}_{k}\) converges to \(\mathbf{u}^{*}\) in \(\mathbf{C^{0}} ([0, T]; \mathbf{L^{1}} (0,1 ) )\), we deduce that \(\tilde {\mathbf{u}}= \mathbf{u}^{*}\) in \(\mathbf{C^{0}} ([0, T]; \mathbf{L^{1}} (0,1 ) )\), proving that \(\mathbf{u}^{*}\) satisfies condition 1 of Definition 13.

Well-posedness estimate. Consider two initial conditions \((\bar {\mathbf{u}}, \bar {\mathbf{y}} )\) and \((\tilde {\mathbf{u}}, \tilde {\mathbf{y}} )\) with \(\mathbf{TV} (\bar {\mathbf{u}} ) + \mathbf{TV} ( \tilde {\mathbf{u}} ) < +\infty \). Denote by \((\bar {\mathbf{u}}_{k}, \bar {\mathbf{y}}_{k} )\) and \((\tilde {\mathbf{u}}_{k}, \tilde {\mathbf{y}}_{k} )\) the sequences constructed as in the first part of the proof for the initial conditions given by \((\bar {\mathbf{u}}, \bar {\mathbf{y}} )\) and \((\tilde {\mathbf{u}}, \tilde {\mathbf{y}} )\), respectively. By (9) there exists a constant \(C_{1} > 0\) such that

$$ \bigl\Vert \bar {\mathbf{u}}_{k}(t) - \tilde { \mathbf{u}}_{k}(t) \bigr\Vert _{\mathbf{L^{1}} (0, 1 )} \le C_{1} \Vert \bar {\mathbf{u}}- \tilde {\mathbf{u}} \Vert _{\mathbf{L^{1}} (0, 1 )} + C_{1} \int _{0}^{t} \bigl\vert \bar { \mathbf{y}}_{k}(s) - \tilde {\mathbf{y}}_{k}(s) \bigr\vert \,\mathrm{d}s $$
(28)

for a.e. \(t \in [0, T]\). Moreover, there exists \(C_{2} > 0\) such that for every \(t \in [0, T]\),

$$ \begin{aligned} \bigl\vert \bar {\mathbf{y}}_{k}(t) - \tilde {\mathbf{y}}_{k}(t) \bigr\vert \le{}& \vert \bar { \mathbf{y}}- \tilde {\mathbf{y}} \vert + C_{2} \int _{0}^{t} \bigl\vert \bar { \mathbf{y}}_{k}(s) - \tilde {\mathbf{y}}_{k}(s) \bigr\vert \,\mathrm{d}s \\ &{}+ C_{2} \int _{0}^{t} \bigl\vert \bar { \mathbf{u}}_{k}(s, 0) - \tilde {\mathbf{u}}_{k}(s, 0) \bigr\vert \,\mathrm{d}s \\ &{}+ C_{2} \int _{0}^{t} \bigl\vert \bar { \mathbf{u}}_{k}(s, 1) - \tilde {\mathbf{u}}_{k}(s, 1) \bigr\vert \,\mathrm{d}s. \end{aligned} $$
(29)

Using (11) and (12) in (29), we deduce that there exists \(C_{3} > 0\) such that

$$ \bigl\vert \bar {\mathbf{y}}_{k}(t) - \tilde { \mathbf{y}}_{k}(t) \bigr\vert \le \vert \bar {\mathbf{y}}- \tilde { \mathbf{y}} \vert + C_{2} \int _{0}^{t} \bigl\vert \bar { \mathbf{y}}_{k}(s) - \tilde {\mathbf{y}}_{k}(s) \bigr\vert \,\mathrm{d}s + C_{3} \Vert \bar {\mathbf{u}}- \tilde {\mathbf{u}} \Vert _{\mathbf{L^{1}} (0, 1 )} $$
(30)

for every \(t \in [0, T]\), and so by the Gronwall lemma

$$ \begin{aligned} \bigl\vert \bar {\mathbf{y}}_{k}(t) - \tilde {\mathbf{y}}_{k}(t) \bigr\vert & \le \bigl[ \vert \bar { \mathbf{y}}- \tilde {\mathbf{y}} \vert + C_{3} \Vert \bar { \mathbf{u}}- \tilde {\mathbf{u}} \Vert _{\mathbf{L^{1}} (0, 1 )} \bigr] e^{C_{2} t} \\ & \le \bigl[ \vert \bar {\mathbf{y}}- \tilde {\mathbf{y}} \vert + C_{3} \Vert \bar {\mathbf{u}}- \tilde {\mathbf{u}} \Vert _{ \mathbf{L^{1}} (0, 1 )} \bigr] e^{C_{2} T} \end{aligned} $$
(31)

for every \(t \in [0, T]\). Inserting (31) into (28), we deduce that for a.e. \(t \in [0, T]\),

$$ \begin{aligned} \bigl\Vert \bar {\mathbf{u}}_{k}(t) - \tilde {\mathbf{u}}_{k}(t) \bigr\Vert _{\mathbf{L^{1}} (0, 1 )} \le{}& \biggl(C_{1} + \frac{C_{3}}{C_{2}} \bigl(e^{C_{2} T} - 1 \bigr) \biggr) \Vert \bar {\mathbf{u}}- \tilde {\mathbf{u}} \Vert _{\mathbf{L^{1}} (0, 1 )} \\ &{}+ \frac{C_{1}}{C_{2}} \vert \bar {\mathbf{y}}- \tilde {\mathbf{y}} \vert \bigl(e^{C_{2} T} - 1 \bigr). \end{aligned} $$
(32)

Passing to the limit as \(k \to + \infty \) in (31) and (32), we obtain (24). □

Corollary 15

Let \(T>0\), and let \(\sigma : [0, T] \to \mathbb{N}\) be a given switching signal with finitely many switching points. Then, under the above hypotheses, system (20a)–(20c) has a unique solution \((\mathbf{u},\mathbf{w})\) on \([0,T]\).

A proof can be obtained by applying Theorem 14 iteratively on each interval between consecutive switching points, taking the state reached at each switching time as the initial datum for the next interval.
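The construction behind the corollary can be sketched abstractly: between consecutive switching points the signal is constant, Theorem 14 provides the flow of the active mode, and the flows are composed, each final state serving as the next initial datum. In the sketch below the modes are scalar ODE flows \(\dot{x} = a_{\sigma} x\), chosen only as stand-ins for the PDE-DAE semigroups of the paper:

```python
import math

def flow(mode, tau, x):
    """Exact flow map of x' = a_mode * x over a time span tau
    (two illustrative modes with arbitrary coefficients)."""
    a = {0: -1.0, 1: 0.5}[mode]
    return x * math.exp(a * tau)

def switched_solution(modes, switch_times, T, x0):
    """modes[i] is active on [t_i, t_{i+1}), with
    0 = t_0 < t_1 < ... < T the switching partition."""
    times = [0.0] + list(switch_times) + [T]
    x = x0
    for mode, t0, t1 in zip(modes, times, times[1:]):
        x = flow(mode, t1 - t0, x)   # apply the active mode's semigroup
    return x

# mode 0 on [0,1), mode 1 on [1,2), mode 0 on [2,3]
x_T = switched_solution([0, 1, 0], [1.0, 2.0], 3.0, 1.0)
```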

4 Technical details

4.1 Lemma 4.2

Here we restate Lemma 4.2 from [5] for the reader's convenience.

Lemma 16

Assume that the sequence \(h_{n} \in \mathbf{C^{0}} ([0,T]; \mathbb{R}^{+} )\) satisfies

$$\begin{aligned} h_{n}(t) \leq \alpha + \beta \int _{0}^{t} h_{n-2}(\tau )\,\mathrm{d} \tau \quad\textit{with } h_{0}(t)\in [0,H] \textit{ and } h_{1}(t) \in [0,H] \end{aligned}$$

for positive numbers \(\alpha,\beta \), and H. Then for all \(n\geq 1\),

$$\begin{aligned} \max \bigl\{ h_{2n}(t),h_{2n+1}(t) \bigr\} \leq \alpha \sum _{i=0}^{n-1} \frac{\beta ^{i}t^{i}}{i!}+H \frac{\beta ^{n}t^{n}}{n!}. \end{aligned}$$
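Since the recursion only involves integration of polynomials, the equality case of the lemma (constant \(h_{0} = h_{1} = H\) and equality in the integral inequality) can be verified exactly in rational arithmetic; the parameter values below are arbitrary choices:

```python
import math
from fractions import Fraction

alpha, beta, H = Fraction(3, 10), Fraction(2), Fraction(5)

def integrate(p):
    """Coefficients of int_0^t p(s) ds, for p given by coefficients in t."""
    return [Fraction(0)] + [c / (i + 1) for i, c in enumerate(p)]

def step(p):
    """Equality case of the recursion: h_n = alpha + beta * int_0^t h_{n-2}."""
    q = [beta * c for c in integrate(p)]
    q[0] += alpha
    return q

h = [[H], [H]]                      # h_0 = h_1 = H as constant polynomials
for n in range(2, 12):
    h.append(step(h[n - 2]))

def bound(n):
    """Coefficients of alpha * sum_{i<n} (beta t)^i / i! + H (beta t)^n / n!."""
    c = [alpha * beta ** i / math.factorial(i) for i in range(n)]
    c.append(H * beta ** n / math.factorial(n))
    return c
```

In the equality case the iterate \(h_{2n}\) coincides with the right-hand side of the lemma, so the bound is attained and hence sharp.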

4.2 A priori estimates

Lemma 17

Assume hypotheses (H-1)–(H-5) hold. Define \(\lambda _{\max}\) as in (52). Let v be a broad solution to (3) with initial condition \(\bar {\mathbf{v}}\) and boundary conditions (7). Then there exists a constant \(C > 0\), depending on \(\lambda _{\max}\), h, \(\mathbf{N}^{+}\), and \(\mathbf{N}^{-}\), such that, for every \(0 < t \le \frac{1}{\lambda _{\max}}\),

$$ \bigl\Vert \mathbf{v}(t) \bigr\Vert _{\mathbf{L^{\infty }}} \le C \bigl[ \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + t \bigr] $$
(33)

and

$$ \begin{aligned} \mathbf{TV} \bigl(\mathbf{v}(t) \bigr) \le{}& C \bigl(1 + \mathbf{TV}(\bar{\mathbf{v}}) + \mathbf{TV}\bigl( \mathbf{b}^{+}\bigr) + \mathbf{TV}\bigl(\mathbf{b}^{-}\bigr) \bigr) \exp (C t ) \\ &{}+ C \bigl( \Vert \mathbf{v} \Vert _{ \mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{ \mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{ \mathbf{L^{\infty }}} \bigr) \exp (C t ). \end{aligned} $$
(34)

Proof

First note that the choice \(t\le \frac{1}{\lambda _{\max}}\) implies that the characteristic curves starting from one boundary do not reach the other boundary within time \(\frac{1}{\lambda _{\max}}\). Denote by L a uniform bound and a Lipschitz constant for h in \([0, \frac{1}{\lambda _{\max }} ] \times [0,1] \times \mathbb{R}^{n}\); see Remark 2. Since v is a broad solution to (3), for all \(i \in \{1, \ldots, \ell \}\) and \(0 \le t \le \frac{1}{\lambda _{\max}}\),

$$ v_{i} (t,x) = \textstyle\begin{cases} \bar{v}_{i}(X_{i}(0;t,x)) + \int _{0}^{t} h_{i}( \tau, X_{i}(\tau;t,x),\mathbf{v}(\tau, X_{i}(\tau;t,x))) \,\mathrm{d}\tau \\ \quad\text{if } x < X_{i} (t; 0, 1 ), \\ m_{i}^{1}(T_{i}(1;t,x)) + \int _{T_{i}(1;t,x)}^{t} h_{i}(\tau, X_{i}(\tau;t,x), \mathbf{v}(\tau, X_{i}(\tau;t,x))) \,\mathrm{d}\tau \\ \quad\text{if } x > X_{i} (t; 0, 1 ), \end{cases} $$
(35)

whereas for all \(i \in \{\ell + 1, \ldots, n \}\) and \(0 \le t \le \frac{1}{\lambda _{\max}}\),

$$ v_{i} (t,x) = \textstyle\begin{cases} m_{i}^{0}(T_{i}(0;t,x)) + \int _{T_{i}(0;t,x)}^{t} h_{i}(\tau, X_{i}(\tau;t,x), \mathbf{v}(\tau, X_{i}(\tau;t,x))) \,\mathrm{d}\tau \\ \quad\text{if } x < X_{i} (t; 0, 0 ), \\ \bar{v}_{i}(X_{i}(0;t,x)) + \int _{0}^{t} h_{i}( \tau, X_{i}(\tau;t,x),\mathbf{v}(\tau, X_{i}(\tau;t,x))) \,\mathrm{d}\tau \\ \quad\text{if } x > X_{i} (t; 0, 0 ), \end{cases} $$
(36)

where \(T_{i}\) denotes the inverse of the ith characteristic curve (see Remark 4), and

$$ \begin{aligned} &m_{i}^{0}(t) = b_{i}^{+}(t) + \left [\mathbf{N}^{+}(t) \begin{pmatrix} \mathbf{v}^{-}(t, 0) \\ \mathbf{v}^{+}(t, 1) \end{pmatrix} \right ]_{i}, \\ & m_{i}^{1}(t) = b_{i}^{-}(t) + \left [\mathbf{N}^{-}(t) \begin{pmatrix} \mathbf{v}^{-}(t, 0) \\ \mathbf{v}^{+}(t, 1) \end{pmatrix} \right ]_{i}; \end{aligned} $$
(37)

see (7).
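The two branches of (35) admit a simple closed form when the speed is constant and the source vanishes. The sketch below (speed \(\lambda _{i} = -1\), source \(h_{i} = 0\), and arbitrary data \(\bar{v}_{i}\), \(m_{i}^{1}\) of our own choosing) traces a point back either to the initial line or to the boundary \(x = 1\):

```python
lam = -1.0                         # constant characteristic speed, lam < 0
v_bar = lambda x: x ** 2           # initial datum on [0, 1] (arbitrary)
m1 = lambda s: 1.0 + s             # boundary value m_i^1 at x = 1 (arbitrary)

def v(t, x):
    """Broad solution of v_t + lam * v_x = 0 via formula (35) with h_i = 0."""
    x0 = x - lam * t               # X_i(0; t, x): foot of the characteristic
    if x0 <= 1.0:                  # backward characteristic reaches t = 0
        return v_bar(x0)
    s = t + (1.0 - x) / lam        # T_i(1; t, x): exit time through x = 1
    return m1(s)
```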

First consider the \(\mathbf{L^{\infty }}\) estimates. For \(i \in \{1, \ldots, \ell \}\) and \(0 < t \le \frac{1}{\lambda _{\max }}\), we have

$$\begin{aligned} \bigl\vert v_{i} (t, 0 ) \bigr\vert & \le \bigl\vert \bar{v}_{i} \bigl(X_{i} (0;t, 0 ) \bigr) \bigr\vert + \int _{0}^{t} \bigl\vert h_{i}\bigl( \tau, X_{i}(\tau;t,x),\mathbf{v}\bigl(\tau, X_{i}(\tau;t,x) \bigr)\bigr) \bigr\vert \,\mathrm{d}\tau \\ & \le \sqrt{n} \Vert \bar {\mathbf{v}} \Vert _{ \mathbf{L^{\infty }}} + Lt + L \int _{0}^{t} \bigl\Vert \mathbf{v}( \tau ) \bigr\Vert _{\mathbf{L^{\infty }}} \,\mathrm{d}\tau, \end{aligned}$$

and so

$$ \bigl\vert \mathbf{v}^{-} (t, 0 ) \bigr\vert \le n \sqrt{n} \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + nLt + nL \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau ) \bigr\Vert _{ \mathbf{L^{\infty }}} \,\mathrm{d}\tau. $$
(38)

An analogous computation yields

$$ \bigl\vert \mathbf{v}^{+} (t, 1 ) \bigr\vert \le n \sqrt{n} \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + nLt + nL \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau ) \bigr\Vert _{ \mathbf{L^{\infty }}} \,\mathrm{d}\tau. $$
(39)

For \(i \in \{1, \ldots, \ell \}\), \(0 < t \le \frac{1}{\lambda _{\max }}\), and \(x \in (0, X_{i} (t; 0, 1 ))\), we have

$$\begin{aligned} \bigl\vert v_{i} (t, x ) \bigr\vert & \le \sqrt{n} \Vert \bar { \mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + Lt + L \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau ) \bigr\Vert _{\mathbf{L^{\infty }}} \,\mathrm{d}\tau, \end{aligned}$$

whereas for \(x \in (X_{i} (t; 0, 1 ), 1)\), using (38) and (39), we have

$$\begin{aligned} \bigl\vert v_{i} (t, x ) \bigr\vert \le{}& \bigl\vert m_{i}^{1} \bigl(T_{i} (1;t, x ) \bigr) \bigr\vert + \int _{T_{i}(1; t, x)}^{t} \bigl\vert h_{i}\bigl( \tau, X_{i}(\tau;t,x),\mathbf{v}\bigl(\tau, X_{i}( \tau;t,x) \bigr)\bigr) \bigr\vert \,\mathrm{d}\tau \\ \le{}& \bigl\vert b_{i}^{-}\bigl(T_{i}(1; t, x) \bigr) \bigr\vert + L \bigl\vert \mathbf{v}^{-}\bigl(T_{i}(1; t, x),0\bigr) \bigr\vert + L \bigl\vert \mathbf{v}^{+} \bigl(T_{i}(1; t, x),1\bigr) \bigr\vert \\ &{}+ L\bigl(t- T_{i}(1; t, x)\bigr) + L \int _{T_{i}(1; t, x)}^{t} \bigl\Vert \mathbf{v}(\tau ) \bigr\Vert _{\mathbf{L^{\infty }}} \,\mathrm{d}\tau \\ \le{}& \sqrt{n} \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + 2 n \sqrt{n} L \Vert \bar {\mathbf{v}} \Vert _{ \mathbf{L^{\infty }}} \\ &{}+ 2nL^{2} T_{i}(1; t, x) + 2nL^{2} \int _{0}^{T_{i}(1; t, x)} \bigl\Vert \mathbf{v}(\tau ) \bigr\Vert _{\mathbf{L^{\infty }}} \,\mathrm{d}\tau \\ &{}+ L\bigl(t- T_{i}(1; t, x)\bigr) + L \int _{T_{i}(1; t, x)}^{t} \bigl\Vert \mathbf{v}(\tau ) \bigr\Vert _{\mathbf{L^{\infty }}} \,\mathrm{d}\tau \\ \le{}& \sqrt{n} \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + 2 n \sqrt{n} L \Vert \bar {\mathbf{v}} \Vert _{ \mathbf{L^{\infty }}} + 2nL^{2} t + 2nL^{2} \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau ) \bigr\Vert _{\mathbf{L^{\infty }}} \,\mathrm{d}\tau. \end{aligned}$$

A similar computation holds in the case \(i \in \{\ell +1, \ldots, n \}\). Hence

$$\begin{aligned} \bigl\Vert \mathbf{v}(t) \bigr\Vert _{\mathbf{L^{\infty }}} \le{}& (n \sqrt{n} + 4n \sqrt{n}L ) \Vert \bar {\mathbf{v}} \Vert _{ \mathbf{L^{\infty }}} + n \sqrt{n} \bigl( \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} \bigr) \\ &{}+ \bigl(nL + 4nL^{2} \bigr)t + \bigl(nL + 4nL^{2} \bigr) \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau ) \bigr\Vert _{ \mathbf{L^{\infty }}} \,\mathrm{d}\tau \\ \le{}& 5n \sqrt{n}L \Vert \bar {\mathbf{v}} \Vert _{ \mathbf{L^{\infty }}} + n \sqrt{n} \bigl( \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} \bigr) \\ &{}+ 5nL^{2}t + 5nL^{2} \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau ) \bigr\Vert _{\mathbf{L^{\infty }}} \,\mathrm{d}\tau. \end{aligned}$$

The Gronwall inequality implies that

$$\begin{aligned} \bigl\Vert \mathbf{v}(t) \bigr\Vert _{\mathbf{L^{\infty }}} & \le e^{5n L^{2} t} \bigl[ 5n \sqrt{n}L \Vert \bar {\mathbf{v}} \Vert _{ \mathbf{L^{\infty }}} + n \sqrt{n} \bigl( \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} \bigr) + 5nL^{2}t \bigr] \\ & \le 5n\sqrt{n} L^{2} e^{5n L^{2} t} \bigl[ \Vert \bar { \mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + t \bigr], \end{aligned}$$

so that (33) holds.

Consider now the total-variation estimate (34). For \(i \in \{1, \ldots, \ell \}\) and \(0 < t \le \frac{1}{\lambda _{\max}}\), we have

$$ \begin{aligned} \mathbf{TV} \bigl(v_{i}(t, \cdot ) \bigr) ={}& \mathbf{TV} \bigl(v_{i}(t, \cdot ); \bigl[0, X_{i} (t; 0, 1 )\bigr) \bigr) \\ &{}+ \mathbf{TV} \bigl(v_{i}(t, \cdot ); \bigl(X_{i} (t; 0, 1 ), 1 \bigr] \bigr) \\ &{}+ \bigl\vert v_{i} \bigl(t, X_{i} (t; 0, 1 )^{+} \bigr) - v_{i} \bigl(t, X_{i} (t; 0, 1 )^{-} \bigr) \bigr\vert , \end{aligned} $$
(40)

whereas for \(i \in \{\ell + 1, \ldots, n \}\) and \(0 < t \le \frac{1}{\lambda _{\max}}\),

$$ \begin{aligned} \mathbf{TV} \bigl(v_{i}(t, \cdot ) \bigr) ={}& \mathbf{TV} \bigl(v_{i}(t, \cdot ); \bigl[0, X_{i} (t; 0, 0 ) \bigr)\bigr) \\ &{}+ \mathbf{TV} \bigl(v_{i}(t, \cdot ); \bigl(X_{i} (t; 0, 0 ), 1 \bigr] \bigr) \\ &{}+ \bigl\vert v_{i} \bigl(t, X_{i} (t; 0, 0 )^{+} \bigr) - v_{i} \bigl(t, X_{i} (t; 0, 0 )^{-} \bigr) \bigr\vert . \end{aligned} $$
(41)

Consider the first term in the right-hand side of (40) and points \(0 \le x_{0} \le \cdots \le x_{N} < X_{i} (t; 0, 1 )\). Using (35), we deduce that

$$\begin{aligned} & \sum_{j=1}^{N} \bigl\vert v_{i} (t, x_{j} ) - v_{i} (t, x_{j-1} ) \bigr\vert \\ &\quad\le \mathbf{TV} (\bar{v}_{i} )+ \sum_{j=1}^{N} \int _{0}^{t} \bigl| h_{i} \bigl(\tau, X_{i}(\tau; t, x_{j}), \mathbf{v}\bigl(\tau, X_{i}(\tau; t, x_{j})\bigr) \bigr) \\ &\qquad{}- h_{i} \bigl(\tau, X_{i}(\tau; t, x_{j - 1}), \mathbf{v}\bigl(\tau, X_{i}(\tau; t, x_{j-1})\bigr) \bigr)\bigr|\,\mathrm{d} \tau \\ &\quad\le \mathbf{TV}(\bar{v}_{i}) + L \int _{0}^{t} \bigl\Vert \mathbf{v} (\tau ) \bigr\Vert _{\infty } \,\mathrm{d}\tau + L \int _{0}^{t} \mathbf{TV}\bigl(\mathbf{v}(\tau,\cdot )\bigr)\,\mathrm{d}\tau, \end{aligned}$$

and so by (33) we have

$$ \begin{aligned} & \mathbf{TV} \bigl(v_{i}(t, \cdot ); \bigl[0, X_{i} (t; 0, 1 )\bigr) \bigr) \\ &\quad\le \mathbf{TV}( \bar{v}_{i}) + L \int _{0}^{t} \mathbf{TV}\bigl(\mathbf{v}(\tau,\cdot )\bigr)\,\mathrm{d}\tau \\ &\qquad{}+ \mathcal{O} (1 ) \biggl( \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \frac{1}{\lambda _{\max }} \biggr) t. \end{aligned} $$
(42)

Here and in the following part of the proof, the Landau symbol \(\mathcal{O} (1 )\) denotes a constant. Similarly the second term in the right-hand side of (41) can be estimated by

$$ \begin{aligned} & \mathbf{TV} \bigl(v_{i}(t, \cdot ); \bigl(X_{i} (t; 0, 0 ), 1 \bigr] \bigr) \\ &\quad\le \mathbf{TV}( \bar{v}_{i}) + L \int _{0}^{t} \mathbf{TV}\bigl(\mathbf{v}(\tau,\cdot )\bigr)\,\mathrm{d}\tau \\ &\qquad{}+ \mathcal{O} (1 ) \biggl( \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \frac{1}{\lambda _{\max }} \biggr) t. \end{aligned} $$
(43)

Consider now the second term in the right-hand side of (40) and points \(X_{i} (t; 0, 1 ) < x_{0} \le \cdots \le x_{N} \le 1\). Using (35), we get

$$\begin{aligned} & \sum_{j=1}^{N} \bigl\vert v_{i} (t, x_{j} ) - v_{i} (t, x_{j-1} ) \bigr\vert \\ &\quad\le \sum_{j=1}^{N} \bigl\vert m_{i}^{1} \bigl(T_{i}(1; t, x_{j}) \bigr) - m_{i}^{1} \bigl(T_{i}(1; t, x_{j-1}) \bigr) \bigr\vert \\ &\qquad{}+ \sum_{j=1}^{N} \biggl\vert \int _{T_{i}(1; t, x_{j})}^{t} h_{i}\bigl( \tau, X_{i}(\tau; t, x_{j}), \mathbf{v}\bigl(\tau, X_{i}(\tau; t, x_{j})\bigr)\bigr) \,\mathrm{d}\tau \\ &\qquad{}- \int _{T_{i}(1; t, x_{j-1})}^{t} h_{i}\bigl( \tau, X_{i}(\tau; t, x_{j-1}), \mathbf{v}\bigl(\tau, X_{i}(\tau; t, x_{j-1})\bigr)\bigr) \,\mathrm{d}\tau \biggr\vert . \end{aligned}$$

Defining \(K= \sup_{t \in [0, \frac{1}{\lambda _{\max }} ]} \max \bigl\{ \sup_{\xi \in \mathbb{R}^{n} \setminus \{\mathbf{0}\}} \frac{ \vert \mathbf{N}^{-}(t) \xi \vert }{ \vert \xi \vert }, \sup_{\xi \in \mathbb{R}^{n} \setminus \{\mathbf{0}\}} \frac{ \vert \mathbf{N}^{+}(t) \xi \vert }{ \vert \xi \vert } \bigr\} \) and using (35), (36), and (37), we deduce that

$$\begin{aligned} & \sum_{j=1}^{N} \bigl\vert m_{i}^{1} \bigl(T_{i}(1; t, x_{j}) \bigr) - m_{i}^{1} \bigl(T_{i}(1; t, x_{j-1}) \bigr) \bigr\vert \\ &\quad\le \mathbf{TV} \bigl( \mathbf{b}^{-} \bigr) + Kn \mathbf{TV} (\bar {\mathbf{v}} ) + 2KnLt \\ &\qquad{}+ KnL \int _{0}^{t} \mathbf{TV} \bigl(\mathbf{v} (\tau; \cdot ) \bigr) \,\mathrm{d}\tau, \end{aligned}$$

whereas, using the assumptions on h and triangle inequalities, we have

$$\begin{aligned} & \sum_{j=1}^{N} \biggl\vert \int _{T_{i}(1; t, x_{j})}^{t} h_{i}\bigl(\tau, X_{i}( \tau; t, x_{j}), \mathbf{v}\bigl(\tau, X_{i}(\tau; t, x_{j})\bigr)\bigr) \,\mathrm{d}\tau \\ &\qquad{}- \int _{T_{i}(1; t, x_{j-1})}^{t} h_{i}\bigl(\tau, X_{i}( \tau; t, x_{j-1}), \mathbf{v}\bigl(\tau, X_{i}(\tau; t, x_{j-1})\bigr)\bigr) \,\mathrm{d}\tau \biggr\vert \\ &\quad\le 2Lt + L \int _{0}^{t} \mathbf{TV} \bigl(\mathbf{v} (\tau; \cdot ) \bigr) \,\mathrm{d}\tau \\ &\qquad{}+ L \mathcal{O} (1 ) \biggl( \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{ \mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{ \mathbf{L^{\infty }}(0, t)} + \frac{1}{\lambda _{\max }} \biggr)t. \end{aligned}$$

Therefore the second term in the right-hand side of (40) can be estimated by

$$ \begin{aligned} & \mathbf{TV} \bigl(v_{i}(t, \cdot ); \bigl(X_{i} (t; 0, 1 ), 1 \bigr]\bigr) \\ &\quad\le \mathbf{TV} \bigl( \mathbf{b}^{-} \bigr) + Kn \mathbf{TV} (\bar {\mathbf{v}} ) + 2 (Kn + 1 )Lt \\ &\qquad{}+ (Kn + 1 ) L \int _{0}^{t} \mathbf{TV} \bigl( \mathbf{v} (\tau; \cdot ) \bigr) \,\mathrm{d}\tau \\ &\qquad{}+ L \mathcal{O} (1 ) \biggl( \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \frac{1}{\lambda _{\max }} \biggr)t. \end{aligned} $$
(44)

Similarly, the first term in the right-hand side of (41) can be estimated by

$$ \begin{aligned} & \mathbf{TV} \bigl(v_{i}(t, \cdot ); \bigl[0, X_{i} (t; 0, 0 ) \bigr) \bigr)\\ &\quad\le \mathbf{TV} \bigl( \mathbf{b}^{+} \bigr) + Kn \mathbf{TV} (\bar {\mathbf{v}} ) + 2 (Kn + 1 )Lt \\ &\qquad{}+ (Kn + 1 ) L \int _{0}^{t} \mathbf{TV} \bigl( \mathbf{v} (\tau; \cdot ) \bigr) \,\mathrm{d}\tau \\ &\qquad{}+ L \mathcal{O} (1 ) \biggl( \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \frac{1}{\lambda _{\max }} \biggr)t. \end{aligned} $$
(45)

Consider now the third term in the right-hand side of (40). Using (35), (36), (37), and the assumptions on h, we obtain

$$\begin{aligned} & \bigl\vert v_{i} \bigl(t, X_{i} (t; 0, 1 )^{+} \bigr) - v_{i} \bigl(t, X_{i} (t; 0, 1 )^{-} \bigr) \bigr\vert \\ &\quad\le \Bigl\vert \lim_{\tau \to 0^{+}} m_{i}^{1} (\tau ) \Bigr\vert + \bigl\vert \bar{v}_{i}\bigl(1^{-}\bigr) \bigr\vert \\ &\begin{aligned} &\qquad{}+ \biggl\vert \int _{0}^{t} h_{i} \bigl(\tau, X_{i} \bigl(\tau; t, X_{i} (t; 0, 1 ) \bigr), \mathbf{v} \bigl(\tau, X_{i} \bigl( \tau; t, X_{i} (t; 0, 1 ) \bigr)^{+} \bigr) \bigr) \,\mathrm{d}\tau \\ &\qquad{}- \int _{0}^{t} h_{i} \bigl(\tau, X_{i} \bigl(\tau; t, X_{i} (t; 0, 1 ) \bigr), \mathbf{v} \bigl(\tau, X_{i} \bigl(\tau; t, X_{i} (t; 0, 1 ) \bigr)^{-} \bigr) \bigr) \,\mathrm{d}\tau \biggr\vert \end{aligned} \\ &\quad\le \bigl\vert \mathbf{b}^{-}\bigl(0^{+}\bigr) \bigr\vert + (2K + 1 ) \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + L \int _{0}^{t} \mathbf{TV} \bigl(v (\tau, \cdot ) \bigr) \,\mathrm{d} \tau \\ &\qquad{}+ L \mathcal{O} (1 ) \biggl( \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{ \mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{ \mathbf{L^{\infty }}(0, t)} + \frac{1}{\lambda _{\max }} \biggr)t. \end{aligned}$$
(46)

Similarly, the third term in the right-hand side of (41) can be estimated by

$$ \begin{aligned} & \bigl\vert v_{i} \bigl(t, X_{i} (t; 0, 0 )^{+} \bigr) - v_{i} \bigl(t, X_{i} (t; 0, 0 )^{-} \bigr) \bigr\vert \\ &\quad\le \bigl\vert \mathbf{b}^{+}\bigl(0^{+}\bigr) \bigr\vert + (2K + 1 ) \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + L \int _{0}^{t} \mathbf{TV} \bigl(v (\tau, \cdot ) \bigr) \,\mathrm{d} \tau \\ &\qquad{}+ L \mathcal{O} (1 ) \biggl( \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{ \mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{ \mathbf{L^{\infty }}(0, t)} + \frac{1}{\lambda _{\max }} \biggr)t. \end{aligned} $$
(47)

Inserting (42), (44), and (46) into (40), we get

$$ \begin{aligned} \mathbf{TV} \bigl(v_{i}(t, \cdot ) \bigr) \le{}& \mathbf{TV}(\bar{v}_{i}) + \mathbf{TV} \bigl( \mathbf{b}^{-} \bigr) + Kn \mathbf{TV} (\bar {\mathbf{v}} ) + (2Kn + 3 )Lt \\ &{}+ (Kn + 3 ) L \int _{0}^{t} \mathbf{TV} \bigl( \mathbf{v} (\tau; \cdot ) \bigr) \,\mathrm{d}\tau \\ &{}+ \mathcal{O} (1 ) \biggl( \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \frac{1}{\lambda _{\max }} \biggr)t. \end{aligned} $$
(48)

A similar estimate holds for (41). Consequently,

$$\begin{aligned} \mathbf{TV}\bigl(\mathbf{v}(t,\cdot )\bigr) \le{}& \bigl(1 + K n^{2} \bigr) \mathbf{TV}(\bar{\mathbf{v}}) + \ell \mathbf{TV}\bigl(\mathbf{b}^{-} \bigr) + (n - \ell ) \mathbf{TV}\bigl(\mathbf{b}^{+}\bigr) \\ &{}+ (2Kn + 3 ) n Lt + (2 + Kn )nL \int _{0}^{t} \mathbf{TV}\bigl(\mathbf{v}(\tau,\cdot )\bigr)\,\mathrm{d}\tau \\ &{}+ \mathcal{O} (1 ) \biggl( \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + \bigl\Vert \mathbf{b}^{-} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \bigl\Vert \mathbf{b}^{+} \bigr\Vert _{\mathbf{L^{\infty }}(0, t)} + \frac{1}{\lambda _{\max }} \biggr)t. \end{aligned}$$

An application of the Gronwall lemma implies (34). □
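For the reader's convenience, we recall the integral form of the Gronwall lemma invoked here (a standard formulation; the symbols ϕ, C, D are generic and not part of the paper's notation):

```latex
% If \phi \ge 0 satisfies \phi(t) \le C(t) + D \int_0^t \phi(\tau)\,\mathrm{d}\tau
% on [0, T], with C nonnegative and nondecreasing and D \ge 0, then
\phi (t) \le C(t)\, e^{D t} \quad \text{for all } t \in [0, T].
```

In the proof above, it is applied with \(\phi (t) = \mathbf{TV} (\mathbf{v}(t, \cdot ) )\), the terms growing at most linearly in t playing the role of \(C(t)\).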

4.3 Proof of Theorem 6

This subsection contains the proof of Theorem 6, which is based on the Banach fixed point theorem.
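As a purely schematic illustration (not part of the proof), the mechanism of the Banach fixed point theorem can be mimicked numerically on a toy scalar map with Lipschitz constant 1/2, the same contraction constant obtained for the operator M in (55) below. The function name `picard_iteration`, the map, and the tolerance are illustrative choices:

```python
import math

def picard_iteration(T, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = T(x_k) until successive iterates differ by < tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# T(x) = 0.5*cos(x) is a contraction on R with constant 1/2, so the
# iteration converges to the unique fixed point solving x = 0.5*cos(x).
fixed_point = picard_iteration(lambda x: 0.5 * math.cos(x), x0=0.0)
```

Since the contraction constant is 1/2, the distance between successive iterates is at least halved at every step, mirroring the bound \(\Vert \mathbf{M}(\mathbf{v}) - \mathbf{M}(\mathbf{v}^{*}) \Vert _{X} \le \frac{1}{2} \Vert \mathbf{v}- \mathbf{v}^{*} \Vert _{X}\) established at the end of Step 1.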

Proof of Theorem 6

By Remark 1 the proof is focused on the diagonal version of system (2a)–(2c) and is divided into four steps.

Step 1. Local existence and uniqueness of the solution. Fix an initial condition \(\bar{\mathbf{u}} \in \mathbf{L^{1}} ((0,1); \mathbb{R}^{n} )\) with finite total variation and a boundary condition \(\mathbf{b} \in \mathbf{L^{1}} ((0,T); \mathbb{R}^{n} )\) with finite total variation. Denote by \(\bar {\mathbf{v}}(x) = \mathbf{L} (0, x ) \bar {\mathbf{u}}(x)\) the corresponding initial condition for the diagonal system (3) with the corresponding boundary conditions \(\mathbf{b}^{-}\) and \(\mathbf{b}^{+}\); see (7). Define

$$\begin{aligned} & K = \sup_{t \in [0, T]} \max \biggl\{ \sup_{\xi \in \mathbb{R}^{n} \setminus \{0 \}} \frac{ \vert \mathbf{N}^{-}(t) (\xi ) \vert }{ \vert \xi \vert }, \sup_{\xi \in \mathbb{R}^{n} \setminus \{0 \}} \frac{ \vert \mathbf{N}^{+}(t) (\xi ) \vert }{ \vert \xi \vert } \biggr\} , \end{aligned}$$
(49)
$$\begin{aligned} & M = 2n (2K + 1 ) \mathbf{TV} (\bar {\mathbf{u}} ) + 2 n \mathbf{TV} \bigl(\mathbf{b}^{-} \bigr) + (2 + K ) n \Vert \bar { \mathbf{v}} \Vert _{\infty } \end{aligned}$$
(50)
$$\begin{aligned} &\phantom{M = }{} + 2 n \Vert \mathbf{b} \Vert _{\infty }+ 1, \\ & M_{1} = (1 + K) \Vert \bar{\mathbf{v}} \Vert _{\infty }+ \Vert \mathbf{b} \Vert _{\infty }+ 1, \end{aligned}$$
(51)
$$\begin{aligned} & \lambda _{\max} = \max \bigl\{ \Vert \lambda _{i} \Vert _{ \mathbf{C^{0}} ([0, T] \times [0,1] )}: i \in \{1, \ldots, n \} \bigr\} , \end{aligned}$$
(52)
$$\begin{aligned} & \Lambda = \max \bigl\{ \Vert \lambda _{i} \Vert _{ \mathbf{C^{1}} ([0, T] \times [0, 1] )}: i \in \{1, \ldots, n \} \bigr\} . \end{aligned}$$
(53)

Note that both \(\lambda _{\max}\) and Λ are finite because of (H-1) and (H-3). Choose \(\bar{t} \in (0, T]\) such that

$$ \bar{t} < \min \biggl\{ \frac{1}{\lambda _{\max }}, \frac{1}{n L (5K + 4 ) (1 + 2M_{1} + M )} \biggr\} $$
(54)

and

$$ n (2 + n K ) e^{\Lambda \bar{t}} L \bar{t} \le \frac{1}{2}, $$
(55)

where L is a uniform bound and a Lipschitz constant for h on \([0, T] \times [0, 1] \times \mathbb{R}^{n}\); see Remark 2.
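A small numerical sketch may clarify how a valid \(\bar{t}\) can be produced from the constants in (49)–(53): start strictly below the bound in (54) and halve until (55) holds. The helper `choose_t_bar` and all constant values below are hypothetical placeholders, not taken from the paper:

```python
import math

def choose_t_bar(n, K, L, M, M1, lam_max, Lam):
    """Return a t_bar satisfying the strict bound (54) and inequality (55)."""
    bound_54 = min(1.0 / lam_max,
                   1.0 / (n * L * (5 * K + 4) * (1 + 2 * M1 + M)))
    t_bar = 0.5 * bound_54  # strictly below the minimum required by (54)
    # Shrink until (55) holds: n*(2 + n*K)*exp(Lam*t)*L*t <= 1/2.
    # The loop terminates since the left-hand side tends to 0 as t_bar -> 0.
    while n * (2 + n * K) * math.exp(Lam * t_bar) * L * t_bar > 0.5:
        t_bar *= 0.5
    return t_bar

t_bar = choose_t_bar(n=3, K=2.0, L=1.5, M=10.0, M1=4.0, lam_max=2.0, Lam=3.0)
```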

Note that the choice of \(\bar{t}\) implies that no characteristic curve starting from one boundary reaches the other boundary within time \(\bar{t}\). We now aim to construct a map whose fixed points are solutions to the diagonal IBVP and hence to (2a)–(2c). First, introduce the space

$$ X = \biggl\{ \mathbf{v} \in \mathbf{C^{0}} \bigl([0, \bar{t} ]; \mathbf{L^{1}} \bigl([0, 1]; \mathbb{R}^{n} \bigr) \bigr) : \begin{array}{l} \sup_{i \in \{1, \ldots, n \}} \sup_{t \in [0, \bar{t} ]} \mathbf{TV} \bigl(v_{i}(t) \bigr) \le M, \\ \mathbf{v}(0) = \bar{\mathbf{v}}, \\ \Vert \mathbf{v} \Vert _{\mathbf{L^{\infty }} ([0, \bar{t} ] \times [0, 1] )} \le M_{1} \end{array} \biggr\} $$
(56)

equipped with the norm

$$ \Vert \mathbf{v} \Vert _{X} := \sum_{i = 1}^{n} \Vert v_{i} \Vert _{\mathbf{C^{0}} ([0, \bar{t} ]; \mathbf{L^{1}} ([0, 1]; \mathbb{R} ) )} = \sum_{i = 1}^{n} \sup_{t \in [0, \bar{t} ]} \int _{0}^{1} \bigl\vert v_{i}(t, x) \bigr\vert \,\mathrm{d}x, $$
(57)

so that X is a complete metric space. Now define the operator

$$ \begin{aligned} \mathbf{M}\colon & X \longrightarrow X \\ & \mathbf{v} \longmapsto \mathbf{M}(\mathbf{v}) = \bigl(M_{1}( \mathbf{v}), \ldots, M_{n}(\mathbf{v}) \bigr), \end{aligned} $$

according to the following four cases.

  1. (c1)

    For all \(i \in \{1, \ldots, \ell \}\), \(0 < t \le \bar{t}\), and \(x \in [0, X_{i} (t; 0, 1 )]\), we define

    $$ M_{i}(\mathbf{v}) (t,x) = \bar{v}_{i} \bigl(X_{i}(0;t,x)\bigr) + \int _{0}^{t} h_{i}\bigl(\tau, X_{i}(\tau;t,x),\mathbf{v}\bigl(\tau, X_{i}(\tau;t,x)\bigr) \bigr) \,\mathrm{d}\tau. $$
    (58)
  2. (c2)

    For all \(i \in \{\ell + 1, \ldots, n \}\), \(0 < t \le \bar{t}\), and \(x \in [X_{i} (t; 0, 0 ), 1]\), we define

    $$ M_{i}(\mathbf{v}) (t,x) = \bar{v}_{i} \bigl(X_{i}(0;t,x)\bigr) + \int _{0}^{t} h_{i}\bigl( \tau,X_{i}(\tau;t,x),\mathbf{v}\bigl(\tau,X_{i}(\tau;t,x) \bigr)\bigr) \,\mathrm{d}\tau. $$
    (59)
  3. (c3)

    For all \(i \in \{1, \ldots, \ell \}\), \(0 < t \le \bar{t}\), and \(x \in (X_{i} (t; 0, 1 ), 1]\), we define

    $$ M_{i}(\mathbf{v}) (t,x) = m_{i}^{1} \bigl(T_{i}(1;t,x)\bigr) + \int _{T_{i}(1;t,x)}^{t} h_{i}\bigl(\tau, X_{i}(\tau;t,x), \mathbf{v}\bigl(\tau, X_{i}(\tau;t,x) \bigr)\bigr) \,\mathrm{d}\tau, $$
    (60)

    where \(T_{i}\) denotes the inverse of the ith characteristic curve (see Remark 4), and

    $$ m_{i}^{1}(t) = \mathbf{b}^{-}(t) + \mathbf{N}^{-}(t) \begin{pmatrix} \mathbf{M}_{b, 0}(\mathbf{v})(t) \\ \mathbf{M}_{b, 1}(\mathbf{v})(t) \end{pmatrix}; $$
    (61)

    see (7), (67), and (70).

  4. (c4)

    For all \(i \in \{\ell + 1, \ldots, n \}\), \(0 < t \le \bar{t}\), and \(x \in [0, X_{i} (t; 0, 0 ))\), we define

    $$ M_{i}(\mathbf{v}) (t,x) = m_{i}^{0} \bigl(T_{i}(0; t,x)\bigr) + \int _{T_{i}(0; t,x)}^{t} h_{i}\bigl(\tau, X_{i}(\tau;t,x), \mathbf{v}\bigl(\tau, X_{i}(\tau;t,x) \bigr)\bigr) \,\mathrm{d}\tau, $$
    (62)

    where

    $$ m_{i}^{0}(t) = \mathbf{b}^{+}(t) + \mathbf{N}^{+}(t) \begin{pmatrix} \mathbf{M}_{b, 0}(\mathbf{v})(t) \\ \mathbf{M}_{b, 1}(\mathbf{v})(t) \end{pmatrix}; $$
    (63)

    see (7).
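The four cases above implement the classical representation of solutions along characteristics: a point \((t, x)\) is traced backwards either to the initial datum (cases (c1)–(c2)) or to a boundary datum (cases (c3)–(c4)), and the source term is integrated along the way. A minimal numerical sketch for a single scalar equation \(v_{t} + \lambda v_{x} = h(v)\) with constant \(\lambda > 0\) and inflow boundary at \(x = 0\); the function `characteristic_solution` and all data are illustrative, not the operator M itself:

```python
def characteristic_solution(v_bar, b, h, lam, t, x, steps=1000):
    """Evaluate v(t, x) by integrating h along the backward characteristic."""
    foot = x - lam * t  # foot of the characteristic through (t, x) at time 0
    if foot >= 0.0:
        t0, v0 = 0.0, v_bar(foot)  # traced back to the initial datum
    else:
        t0 = t - x / lam           # entry time at the boundary x = 0
        v0 = b(t0)                 # boundary datum, playing the role of m_i^0
    # Explicit Euler integration of dv/dtau = h(v) along the characteristic.
    dt = (t - t0) / steps
    v = v0
    for _ in range(steps):
        v += dt * h(v)
    return v

# Pure transport (h = 0) just shifts the initial profile:
val = characteristic_solution(v_bar=lambda x: x, b=lambda t: 0.0,
                              h=lambda v: 0.0, lam=1.0, t=0.25, x=0.75)
```

For \(i \le \ell \) the characteristics in the paper enter from \(x = 1\) instead, which only mirrors the case distinction above.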

We now estimate the \(\mathbf{L^{\infty }}\) norm and the total variation of \(\mathbf{M}(\mathbf{v})\) in each of the four cases.

Case (c1). By Remark 2 we easily deduce that

$$ \bigl\Vert M_{i}(\mathbf{v}) \bigr\Vert _{\mathbf{L^{\infty }}} \le \Vert \bar{v}_{i} \Vert _{\mathbf{L^{\infty }}} + L (1 + M_{1}) \bar{t}. $$
(64)

We claim that for every \(0 \le t \le \bar{t}\),

$$ \mathbf{TV} \bigl(M_{i}(\mathbf{v}) (t, \cdot ); \bigl[0, X_{i} (t; 0, 1 )\bigr) \bigr) \le \mathbf{TV} (\bar{v}_{i} ) + L (M_{1} + M ) \bar{t} $$
(65)

and that

$$ \mathbf{TV} \bigl(M_{i}(\mathbf{v}) (\cdot, 0+); [0, \bar{t} ] \bigr) \le \mathbf{TV} (\bar{v}_{i} ) + L (1 + 2M_{1} + M )\bar{t}. $$
(66)

For later use, for \(0 \le t \le \bar{t}\), we denote

$$ \mathbf{M}_{b, 0}(\mathbf{v}) (t) = \begin{pmatrix} M_{1}(\mathbf{v})(t, 0+) \\ \vdots \\ M_{\ell} (\mathbf{v}) (t, 0+) \end{pmatrix}, $$
(67)

which is well defined by (58) and has a finite total variation by (66).

To prove (65), fix \(N \in \mathbb{N} \setminus \{0 \}\), a time \(0 \le t \le \bar{t}\), and points \(0 \le x_{0} < \cdots < x_{N} \le X_{i} (t; 0, 1 )\). Using the notation \(\tilde{x}_{j}(\tau ) = X_{i} (\tau; t, x_{j} )\), we have that

$$\begin{aligned} & \sum_{j = 1}^{N} \bigl\vert M_{i}(\mathbf{v}) (t, x_{j}) - M_{i}( \mathbf{v}) (t, x_{j-1}) \bigr\vert \\ &\quad\le \underbrace{\sum _{j = 1}^{N} \bigl\vert \bar{v}_{i} \bigl( \tilde{x}_{j}(0) \bigr) - \bar{v}_{i} \bigl(\tilde{x}_{j-1}(0) \bigr) \bigr\vert }_{I_{1}} \\ &\qquad{}+ \underbrace{\sum_{j = 1}^{N} \biggl\vert \int _{0}^{t} h_{i}\bigl(\tau, \tilde{x}_{j}(\tau ), \mathbf{v}\bigl(\tau, \tilde{x}_{j}(\tau ) \bigr)\bigr) - h_{i}\bigl(\tau, \tilde{x}_{j-1}(\tau ), \mathbf{v}\bigl(\tau, \tilde{x}_{j-1}(\tau )\bigr)\bigr) \,\mathrm{d}\tau \biggr\vert }_{I_{2}}. \end{aligned}$$

Clearly, the term \(I_{1}\) is estimated by \(\mathbf{TV} (\bar{v}_{i} )\). For the term \(I_{2}\), we have

$$\begin{aligned} I_{2} \le{}& \sum_{j = 1}^{N} \int _{0}^{t} \bigl\vert h_{i}\bigl( \tau, \tilde{x}_{j}(\tau ),\mathbf{v}\bigl(\tau, \tilde{x}_{j}(\tau )\bigr)\bigr) - h_{i}\bigl( \tau, \tilde{x}_{j-1}(\tau ),\mathbf{v}\bigl(\tau, \tilde{x}_{j}(\tau ) \bigr)\bigr) \bigr\vert \,\mathrm{d}\tau \\ &{}+ \sum_{j = 1}^{N} \int _{0}^{t} \bigl\vert h_{i}\bigl( \tau, \tilde{x}_{j-1}(\tau ),\mathbf{v}\bigl(\tau, \tilde{x}_{j}(\tau )\bigr)\bigr) - h_{i}\bigl( \tau, \tilde{x}_{j-1}(\tau ), \mathbf{v}\bigl(\tau, \tilde{x}_{j-1}(\tau ) \bigr)\bigr) \bigr\vert \,\mathrm{d}\tau \\ \le{}& L \sum_{j = 1}^{N} \int _{0}^{t} \bigl( \bigl\vert \tilde{x}_{j}( \tau ) - \tilde{x}_{j-1}(\tau ) \bigr\vert M_{1} + \bigl\vert \mathbf{v}\bigl( \tau, \tilde{x}_{j}( \tau )\bigr) - \mathbf{v}\bigl(\tau, \tilde{x}_{j-1}(\tau )\bigr) \bigr\vert \bigr) \,\mathrm{d}\tau \\ \le{}& LM_{1}t + LMt, \end{aligned}$$

and so we deduce (65).

To prove (66), fix \(N \in \mathbb{N} \setminus \{0 \}\) and times \(0 \le t_{0} < \cdots < t_{N} \le \bar{t}\). Using the notation \(\hat{x}_{j}(\tau ) = X_{i} (\tau; t_{j}, 0 )\), we have that

$$\begin{aligned} & \sum_{j = 1}^{N} \bigl\vert M_{i}(\mathbf{v}) (t_{j}, 0) - M_{i}( \mathbf{v}) (t_{j-1}, 0) \bigr\vert \\ &\quad\le \underbrace{\sum _{j = 1}^{N} \bigl\vert \bar{v}_{i} \bigl( \hat{x}_{j}(0) \bigr) - \bar{v}_{i} \bigl(\hat{x}_{j-1}(0) \bigr) \bigr\vert }_{I_{3}} \\ &\qquad{}+ \underbrace{\sum_{j = 1}^{N} \biggl\vert \int _{0}^{t_{j-1}} \bigl(h_{i}\bigl(\tau, \hat{x}_{j}(\tau ), \mathbf{v}\bigl(\tau, \hat{x}_{j}(\tau )\bigr) \bigr) - h_{i}\bigl(\tau, \hat{x}_{j-1}(\tau ), \mathbf{v} \bigl(\tau, \hat{x}_{j}(\tau )\bigr)\bigr) \bigr)\,\mathrm{d}\tau \biggr\vert }_{I_{4}} \\ &\qquad{}+ \underbrace{\sum_{j = 1}^{N} \biggl\vert \int _{0}^{t_{j-1}} \bigl(h_{i}\bigl(\tau, \hat{x}_{j-1}(\tau ), \mathbf{v}\bigl(\tau, \hat{x}_{j}(\tau ) \bigr)\bigr) - h_{i}\bigl(\tau, \hat{x}_{j-1}(\tau ), \mathbf{v}\bigl(\tau, \hat{x}_{j-1}(\tau )\bigr)\bigr) \bigr) \, \mathrm{d}\tau \biggr\vert }_{I_{5}} \\ &\qquad{}+ \underbrace{\sum_{j = 1}^{N} \biggl\vert \int _{t_{j-1}}^{t_{j}} h_{i}\bigl(\tau, \hat{x}_{j}(\tau ), \mathbf{v}\bigl(\tau, \hat{x}_{j}(\tau )\bigr) \bigr) \,\mathrm{d}\tau \biggr\vert }_{I_{6}}. \end{aligned}$$

Clearly, the term \(I_{3}\) is estimated by \(\mathbf{TV} (\bar{v}_{i} )\). For the remaining terms \(I_{4}\), \(I_{5}\), and \(I_{6}\), we have

$$\begin{aligned} &I_{4} \le L \sum_{j = 1}^{N} \int _{0}^{t_{j-1}} \bigl\vert \hat{x}_{j}( \tau ) - \hat{x}_{j-1}(\tau ) \bigr\vert M_{1}\,\mathrm{d} \tau \le L M_{1} \bar{t}, \\ &I_{5} \le L \sum_{j = 1}^{N} \int _{0}^{t_{j-1}} \bigl\vert \mathbf{v}\bigl(\tau, X_{i}(\tau;t_{j}, 0)\bigr) - \mathbf{v}\bigl(\tau, X_{i}( \tau;t_{j-1}, 0)\bigr) \bigr\vert \,\mathrm{d}\tau \le L M \bar{t}, \\ &I_{6} \le L (1+M_{1})\bar{t}; \end{aligned}$$

so (66) is proved.

Case (c2). Similarly to Case (c1), we deduce that for every \(0 \le t \le \bar{t}\), (64) holds,

$$ \mathbf{TV} \bigl(M_{i}(\mathbf{v}) (t, \cdot ); \bigl(X_{i} (t; 0, 0 ), 1 \bigr] \bigr) \le \mathbf{TV} (\bar{v}_{i} ) + L (M_{1} + M ) \bar{t}, $$
(68)

and

$$ \mathbf{TV} \bigl(M_{i}(\mathbf{v}) (\cdot, 1-); [0, \bar{t} ] \bigr) \le \mathbf{TV} (\bar{v}_{i} ) + L (1 + 2M_{1} + M ) \bar{t}. $$
(69)

For \(0 \le t \le \bar{t}\), we denote

$$ \mathbf{M}_{b, 1}(\mathbf{v}) (t) = \begin{pmatrix} M_{\ell + 1}(\mathbf{v})(t, 1-) \\ \vdots \\ M_{n} (\mathbf{v}) (t, 1-) \end{pmatrix}, $$
(70)

which is well defined by (59) and has a finite total variation by (69).

Case (c3). By Remark 2 we easily deduce that

$$ \bigl\Vert M_{i}(\mathbf{v}) \bigr\Vert _{\mathbf{L^{\infty }}} \le \bigl\Vert m_{i}^{1} \bigr\Vert _{\mathbf{L^{\infty }}} + L (1 + M_{1}) \bar{t}. $$
(71)

We claim that for every \(0 \le t \le \bar{t}\),

$$ \begin{aligned} \mathbf{TV} \bigl(M_{i}( \mathbf{v}) (t, \cdot ); \bigl(X_{i} (t; 0, 1 ), 1 \bigr] \bigr) \le{}& \mathbf{TV} \bigl( \mathbf{b}^{-} \bigr) + 2K \mathbf{TV} (\bar{v}_{i} ) \\ &{}+ L (2K + 1 ) (1+M + 2M_{1}) \bar{t}. \end{aligned} $$
(72)

To prove (72), fix \(N \in \mathbb{N} \setminus \{0 \}\), a time \(0 \le t \le \bar{t}\), and points \(X_{i} (t; 0, 1 ) \le x_{0} < \cdots < x_{N} \le 1\). Using the notations \(\tilde{x}_{j}(\tau ) = X_{i} (\tau; t, x_{j} )\) and \(\tilde{t}_{j} = T_{i} (1; t, x_{j} )\), we have that \(\tilde{t}_{0} < \cdots < \tilde{t}_{N}\) and

$$\begin{aligned} & \sum_{j = 1}^{N} \bigl\vert M_{i}(\mathbf{v}) (t, x_{j}) - M_{i}( \mathbf{v}) (t, x_{j-1}) \bigr\vert \\ &\quad\le \underbrace{\sum _{j = 1}^{N} \bigl\vert m_{i}^{1} (\tilde{t}_{j} ) - m_{i}^{1} (\tilde{t}_{j-1} ) \bigr\vert }_{I_{7}} \\ &\qquad{}+ \underbrace{\sum_{j = 1}^{N} \biggl\vert \int _{\tilde{t}_{j}}^{t} \bigl(h_{i}\bigl(\tau, \tilde{x}_{j}(\tau ),\mathbf{v}\bigl(\tau, \tilde{x}_{j}(\tau )\bigr)\bigr) - h_{i}\bigl(\tau, \tilde{x}_{j}(\tau ), \mathbf{v}\bigl(\tau, \tilde{x}_{j-1}(\tau )\bigr)\bigr) \bigr)\, \mathrm{d}\tau \biggr\vert }_{I_{8}} \\ &\qquad{}+ \underbrace{\sum_{j = 1}^{N} \biggl\vert \int _{\tilde{t}_{j}}^{t} \bigl(h_{i}\bigl(\tau, \tilde{x}_{j}(\tau ),\mathbf{v}\bigl(\tau, \tilde{x}_{j-1}(\tau )\bigr)\bigr) - h_{i}\bigl(\tau, \tilde{x}_{j-1}(\tau ), \mathbf{v}\bigl(\tau, \tilde{x}_{j-1}(\tau )\bigr)\bigr) \bigr) \, \mathrm{d}\tau \biggr\vert }_{I_{9}} \\ &\qquad{}+ \underbrace{\sum_{j = 1}^{N} \biggl\vert \int ^{\tilde{t}_{j}} _{\tilde{t}_{j-1}} h_{i}\bigl(\tau, \tilde{x}_{j-1}(\tau ),\mathbf{v}\bigl(\tau, \tilde{x}_{j-1}(\tau ) \bigr)\bigr) \,\mathrm{d}\tau \biggr\vert }_{I_{10}}. \end{aligned}$$

Using (49), (66), (69), and (61), we get

$$\begin{aligned} I_{7} & \le \mathbf{TV} \bigl(\mathbf{b}^{-} \bigr) + K \mathbf{TV} \bigl(\mathbf{M}_{b, 0}(\mathbf{v}) (\cdot ) \bigr) + K \mathbf{TV} \bigl(\mathbf{M}_{b, 1}(\mathbf{v}) (\cdot ) \bigr) \\ & \le \mathbf{TV} \bigl(\mathbf{b}^{-} \bigr)+ 2 K \bigl[ \mathbf{TV} (\bar{v}_{i} ) + L (1 + 2M_{1} + M )\bar{t} \bigr]. \end{aligned}$$

For the remaining terms \(I_{8}\), \(I_{9}\), and \(I_{10}\), we have

$$\begin{aligned} &I_{8} \le L \sum_{j = 1}^{N} \int _{\tilde{t}_{j}}^{t} \bigl\vert \mathbf{v}\bigl(\tau, \tilde{x}_{j}(\tau )\bigr) - \mathbf{v}\bigl(\tau, \tilde{x}_{j-1}( \tau )\bigr) \bigr\vert \,\mathrm{d}\tau \le L M \bar{t}, \\ &I_{9} \le L \sum_{j = 1}^{N} \int _{\tilde{t}_{j}}^{t} \bigl\vert \tilde{x}_{j}( \tau ) - \tilde{x}_{j-1}(\tau ) \bigr\vert M_{1} \,\mathrm{d} \tau \le L M_{1} \bar{t}, \\ &I_{10} \le \sum_{j = 1}^{N} \int ^{\tilde{t}_{j}}_{\tilde{t}_{j-1}} \bigl\vert h_{i}\bigl( \tau, \tilde{x}_{j-1}(\tau ),\mathbf{v}\bigl(\tau, \tilde{x}_{j-1}( \tau )\bigr)\bigr) \bigr\vert \,\mathrm{d}\tau \le L (1+M_{1})\bar{t}, \end{aligned}$$

proving (72).

Case (c4). Similarly to Case (c3), we deduce that for every \(0 \le t \le \bar{t}\), (71) holds, and

$$ \begin{aligned} \mathbf{TV} \bigl(M_{i}( \mathbf{v}) (t, \cdot ); \bigl[0, X_{i} (t; 0, 0 ) \bigr) \bigr) \le{}& \mathbf{TV} \bigl( \mathbf{b}^{+} \bigr) + 2K \mathbf{TV} (\bar{v}_{i} ) \\ &{}+ L (2K + 1 ) (1+M + 2M_{1}) \bar{t}. \end{aligned} $$
(73)

Moreover, using (58) and (60), note also that for all \(i \in \{1, \ldots, \ell \}\) and \(0 < t \le \bar{t}\),

$$ \begin{aligned} & \Bigl\vert \lim _{x \to X_{i} (t; 0, 1 )^{-}}M_{i}( \mathbf{v}) (t, x) - \lim _{x \to X_{i} (t; 0, 1 )^{+}}M_{i}( \mathbf{v}) (t, x) \Bigr\vert \\ &\quad \le 2 \Vert \bar {\mathbf{v}} \Vert _{\mathbf{L^{\infty }}} + 2 \Vert \mathbf{b} \Vert _{\infty }+ K \bigl( \Vert \bar {\mathbf{v}} \Vert _{\infty } + L (1 + M_{1} ) \bar{t} \bigr) + 2L (1 + M_{1}) \bar{t}. \end{aligned} $$
(74)

The same inequality holds in the case \(i \in \{\ell + 1, \ldots, n \}\).

Using (65), (68), (72), (73), and (74), we deduce that for all \(0 \le t \le \bar{t}\) and \(i \in \{1, \ldots, n \}\),

$$ \begin{aligned} \mathbf{TV} \bigl(M_{i}( \mathbf{v}) (t, \cdot ) \bigr) \le{}& 2 (2K + 1 ) \mathbf{TV} (\bar{v}_{i} ) + 2 \mathbf{TV} \bigl(\mathbf{b}^{-} \bigr) + (2 + K ) \Vert \bar {\mathbf{v}} \Vert _{\infty } \\ &{}+ 2 \Vert \mathbf{b} \Vert _{\infty }+ L K (1 + M_{1} ) \bar{t} \\ &{}+ 4L (K + 1 ) (1+M + 2M_{1}) \bar{t}, \end{aligned} $$
(75)

and so, by the choice of \(\bar{t}\) in (54),

$$ \mathbf{TV} \bigl(\mathbf{M}(\mathbf{v}) (t, \cdot ) \bigr) \le M, $$
(76)

which implies that the operator \(\mathbf{M}\) is well defined. The proof that \(t \mapsto \mathbf{M}(\mathbf{v})(t)\) is continuous from \([0, \bar{t}]\) to \(\mathbf{L^{1}} ((0,1); \mathbb{R}^{n} )\) is straightforward and therefore omitted.

Fix \(\mathbf{v}, \mathbf{v}^{*} \in X\). For all \(t \in [0, \bar{t} ]\) and \(i \in \{1, \ldots, \ell \}\), we have

$$\begin{aligned} & \bigl\Vert M_{i}(\mathbf{v}) (t, \cdot ) - M_{i}\bigl( \mathbf{v}^{*}\bigr) (t, \cdot ) \bigr\Vert _{\mathbf{L^{1}}}\\ &\quad= \int _{0}^{1} \bigl\vert M_{i}( \mathbf{v}) (t, x) - M_{i}\bigl(\mathbf{v}^{*}\bigr) (t, x) \bigr\vert \,\mathrm{d}x \\ &\quad\le \int _{0}^{X_{i} (t; 0, 1 )} \bigl\vert M_{i}( \mathbf{v}) (t, x) - M_{i}\bigl(\mathbf{v}^{*}\bigr) (t, x) \bigr\vert \,\mathrm{d}x \\ &\qquad{}+ \int _{X_{i} (t; 0, 1 )}^{1} \bigl\vert M_{i}( \mathbf{v}) (t, x) - M_{i}\bigl(\mathbf{v}^{*}\bigr) (t, x) \bigr\vert \,\mathrm{d}x. \end{aligned}$$

Using (58) and the change of variable \(\xi = X_{i} (\tau; t, x )\), we deduce that

$$\begin{aligned} & \int _{0}^{X_{i} (t; 0, 1 )} \bigl\vert M_{i}( \mathbf{v}) (t, x) - M_{i}\bigl(\mathbf{v}^{*}\bigr) (t, x) \bigr\vert \,\mathrm{d}x \\ &\quad\le \int _{0}^{X_{i} (t; 0, 1 )} \int _{0}^{t} \bigl\vert h_{i}\bigl( \tau, X_{i}(\tau;t,x),\mathbf{v}\bigl(\tau, X_{i}(\tau;t,x) \bigr)\bigr) \\ &\qquad{}- h_{i}\bigl(\tau, X_{i}(\tau;t,x), \mathbf{v}^{*}\bigl(\tau, X_{i}(\tau;t,x)\bigr)\bigr) \bigr\vert \,\mathrm{d}\tau \,\mathrm{d}x \\ &\quad\le L \int _{0}^{X_{i} (t; 0, 1 )} \int _{0}^{t} \bigl\vert \mathbf{v}\bigl(\tau, X_{i}(\tau;t,x)\bigr) - \mathbf{v}^{*}\bigl(\tau, X_{i}( \tau;t,x)\bigr) \bigr\vert \,\mathrm{d}\tau \,\mathrm{d}x \\ &\quad\le e^{\Lambda \bar{t}} L \int _{0}^{t} \int _{0}^{1} \bigl\vert \mathbf{v}(\tau, \xi ) - \mathbf{v}^{*}(\tau, \xi ) \bigr\vert \,\mathrm{d}\xi \,\mathrm{d} \tau \le e^{\Lambda \bar{t}} L \bar{t} \bigl\Vert \mathbf{v}- \mathbf{v}^{*} \bigr\Vert _{X}. \end{aligned}$$

Using (60), we obtain that

$$\begin{aligned} & \int _{X_{i} (t; 0, 1 )}^{1} \bigl\vert M_{i}( \mathbf{v}) (t, x) - M_{i}\bigl(\mathbf{v}^{*}\bigr) (t, x) \bigr\vert \,\mathrm{d}x \\ &\quad\le K \underbrace{ \int _{X_{i} (t; 0, 1 )}^{1} \bigl\vert \mathbf{M}_{b,0}( \mathbf{v}) \bigl(T_{i} (1; t, x ) \bigr) - \mathbf{M}_{b,0} \bigl(\mathbf{v}^{*}\bigr) \bigl(T_{i} (1; t, x ) \bigr) \bigr\vert \,\mathrm{d}x}_{I_{11}} \\ &\qquad{}+ K \underbrace{ \int _{X_{i} (t; 0, 1 )}^{1} \bigl\vert \mathbf{M}_{b,1}( \mathbf{v}) \bigl(T_{i} (1; t, x ) \bigr) - \mathbf{M}_{b,1} \bigl(\mathbf{v}^{*}\bigr) \bigl(T_{i} (1; t, x ) \bigr) \bigr\vert \,\mathrm{d}x}_{I_{12}} + I_{13}, \end{aligned}$$

where

$$\begin{aligned} I_{13} ={}& \int _{X_{i} (t; 0, 1 )}^{1} \int _{T_{i} (1; t, x )}^{t} \bigl\vert h_{i}\bigl( \tau, X_{i}(\tau;t,x),\mathbf{v}\bigl( \tau, X_{i}(\tau;t,x) \bigr)\bigr) \\ &{}- h_{i}\bigl(\tau, X_{i}(\tau;t,x), \mathbf{v}^{*}\bigl(\tau, X_{i}(\tau;t,x)\bigr)\bigr) \bigr\vert \,\mathrm{d}\tau \,\mathrm{d}x. \end{aligned}$$

For the term \(I_{11}\), using (58) and (67), we have that

$$\begin{aligned} I_{11} \le{}& \sum_{j = 1}^{\ell} \int _{X_{i} (t; 0, 1 )}^{1} \bigl\vert M_{j} ( \mathbf{v} ) \bigl(T_{i} (1; t, x ), 0 \bigr) - M_{j} \bigl( \mathbf{v}^{*} \bigr) \bigl(T_{i} (1; t, x ), 0 \bigr) \bigr\vert \,\mathrm{d}x \\ \le{}& \sum_{j = 1}^{\ell} \int _{X_{i} (t; 0, 1 )}^{1} \biggl\vert \int _{0}^{T_{i} (1; t, x )} h_{j} \bigl(\tau, X_{j} \bigl(\tau; T_{i} (1; t, x ), 0 \bigr), \mathbf{v} \bigl(\tau, X_{j} \bigl(\tau; T_{i} (1; t, x ), 0 \bigr) \bigr) \bigr) \\ &{}- h_{j} \bigl(\tau, X_{j} \bigl( \tau; T_{i} (1; t, x ), 0 \bigr), \mathbf{v}^{*} \bigl(\tau, X_{j} \bigl(\tau; T_{i} (1; t, x ), 0 \bigr) \bigr) \bigr) \biggr\vert \,\mathrm{d}\tau \,\mathrm{d}x \\ \le{}& L \sum_{j = 1}^{\ell} \int _{X_{i} (t; 0, 1 )}^{1} \int _{0}^{T_{i} (1; t, x )} \bigl\vert \mathbf{v} \bigl( \tau, X_{j} \bigl(\tau; T_{i} (1; t, x ), 0 \bigr) \bigr) \\ &{}-\mathbf{v}^{*} \bigl(\tau, X_{j} \bigl(\tau; T_{i} (1; t, x ), 0 \bigr) \bigr) \bigr\vert \,\mathrm{d}\tau \, \mathrm{d}x \\ \le{}& L \ell e^{\Lambda \bar{t}} \bar{t} \bigl\Vert \mathbf{v}- \mathbf{v}^{*} \bigr\Vert _{X}. \end{aligned}$$

Similarly, we deduce that

$$ I_{12} \le L (n - \ell ) e^{\Lambda \bar{t}} \bar{t} \bigl\Vert \mathbf{v}- \mathbf{v}^{*} \bigr\Vert _{X}. $$

For the remaining term \(I_{13}\), using the change of variable \(\xi = X_{i} (\tau; t, x )\), we get

$$\begin{aligned} I_{13} & \le L \int _{X_{i} (t; 0, 1 )}^{1} \int _{T_{i} (1; t, x )}^{t} \bigl\vert \mathbf{v}\bigl(\tau, X_{i}(\tau;t,x)\bigr) - \mathbf{v}^{*}\bigl(\tau, X_{i}(\tau;t,x)\bigr) \bigr\vert \,\mathrm{d}\tau \,\mathrm{d}x \\ & \le e^{\Lambda \bar{t}} L \int _{0}^{t} \int _{0}^{1} \bigl\vert \mathbf{v}(\tau, \xi ) - \mathbf{v}^{*}(\tau, \xi ) \bigr\vert \,\mathrm{d}\tau \,\mathrm{d} \xi \le e^{\Lambda \bar{t}} L \bar{t} \bigl\Vert \mathbf{v}- \mathbf{v}^{*} \bigr\Vert _{X}. \end{aligned}$$

Therefore for all \(t \in [0, \bar{t} ]\) and \(i \in \{1, \ldots, \ell \}\), we obtain

$$ \bigl\Vert M_{i}(\mathbf{v}) (t, \cdot ) - M_{i}\bigl(\mathbf{v}^{*}\bigr) (t, \cdot ) \bigr\Vert _{\mathbf{L^{1}}} \le (2 + K n ) e^{ \Lambda \bar{t}} L \bar{t} \bigl\Vert \mathbf{v}- \mathbf{v}^{*} \bigr\Vert _{X}. $$
(77)

Analogous calculations allow us to prove that for all \(i \in \{\ell + 1, \ldots, n \}\) and \(t \in [0, \bar{t} ]\),

$$ \bigl\Vert M_{i}(\mathbf{v}) (t, \cdot ) - M_{i}\bigl(\mathbf{v}^{*}\bigr) (t, \cdot ) \bigr\Vert _{\mathbf{L^{1}}} \le (2 + K n ) e^{ \Lambda \bar{t}} L \bar{t} \bigl\Vert \mathbf{v}- \mathbf{v}^{*} \bigr\Vert _{X}. $$
(78)

Hence, using (55), (57), (77), and (78), for every \(t \in [0, \bar{t} ]\), we have

$$\begin{aligned} \bigl\Vert \mathbf{M}(\mathbf{v}) - \mathbf{M}\bigl(\mathbf{v}^{*}\bigr) \bigr\Vert _{X} & \le \sum_{i = 1}^{n} \sup_{t \in [0, \bar{t} ]} \bigl\Vert M_{i}(\mathbf{v}) (t, \cdot ) - M_{i}\bigl(\mathbf{v}^{*}\bigr) (t, \cdot ) \bigr\Vert _{\mathbf{L^{1}} ([0, 1]; \mathbb{R} )} \\ & \le n (2 + K n ) e^{\Lambda \bar{t}} L \bar{t} \bigl\Vert \mathbf{v}- \mathbf{v}^{*} \bigr\Vert _{X} \le \frac{1}{2} \bigl\Vert \mathbf{v}- \mathbf{v}^{*} \bigr\Vert _{X}, \end{aligned}$$

proving that M is a contraction. Hence a unique solution exists in the time interval \([0, \bar{t} ]\).

Step 2. Global existence in \([0, T]\). Assume by contradiction that the solution v does not exist on the whole time interval \([0, T]\) and define

$$ \widehat{T} = \sup \bigl\{ t \in [0, T]: \mathbf{v} \text{ is defined in } [0, t] \bigr\} . $$
(79)

Then \(\widehat{T} < T\). Moreover,

$$ \lim_{t \to \widehat{T}^{-}} \mathbf{TV} \bigl(\mathbf{v}(t, \cdot ) \bigr) = + \infty; $$
(80)

otherwise, the construction in the first part of the proof could be applied beyond \(\widehat{T}\), violating the maximality of \(\widehat{T}\).

If \(\widehat{T} \le \frac{1}{\lambda _{\max }}\), then Lemma 17 implies that \(\mathbf{TV} (\mathbf{v}(t, \cdot ) )\) is bounded in the time interval \([0, \widehat{T} ]\), contradicting (80).

If \(\widehat{T} > \frac{1}{\lambda _{\max }}\), then we can apply the previous considerations on time intervals of length \(\frac{1}{\lambda _{\max }}\), obtaining a contradiction with the definition of \(\widehat{T}\).
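Schematically, Step 2 amounts to covering \([0, T]\) by intervals of length at most \(1/\lambda _{\max }\) and restarting the local construction on each piece. In the sketch below, the interface `global_solve`/`local_solve` is hypothetical and only stands in for the fixed-point construction of Step 1:

```python
import math

def global_solve(local_solve, v0, T, lam_max):
    """Reach time T by repeatedly applying a local-in-time solver."""
    dt = 1.0 / lam_max  # maximal length of each local step
    t, v = 0.0, v0
    while t < T:
        step = min(dt, T - t)
        v = local_solve(v, t, step)  # state advanced from t to t + step
        t += step
    return v

# Toy local solver: exact flow of dv/dt = -v on each subinterval, so the
# composed steps reproduce the exact global solution v(T) = v0 * exp(-T).
v_T = global_solve(lambda v, t, s: v * math.exp(-s), v0=1.0, T=2.0, lam_max=4.0)
```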

Step 3. Stability estimates in \([0, T]\). Here we briefly sketch the proofs of the \(\mathbf{L^{1}}\)-estimates (9), (11), and (12). We consider only the case \(t\leq \bar{t}\); the final estimates follow by an iterative procedure. We go through the four cases in the construction of \(\mathbf{M}\). Let v and \(\mathbf{v}^{*}\) be the solutions to the diagonal system (3) with the initial and boundary conditions \(\bar{\mathbf{v}}\), b and \(\bar{\mathbf{v}}^{*}\), \(\mathbf{b}^{*}\), respectively.

  1. 1.

    For \(i\in \{1,\ldots,\ell \}\), \(t \le \bar{t}\), and \(x\in [0,\bar{x}_{i}]\), where \(\bar{x}_{i} = X_{i}(t;0,1)\), we obtain

    $$\begin{aligned} & \int _{0}^{\bar{x}_{i}} \bigl\vert M_{i}( \mathbf{v}) (t,x)-M_{i}\bigl( \mathbf{v}^{*}\bigr) (t,x) \bigr\vert \,\mathrm{d}x \\ &\quad\leq \bigl\Vert \bar{\mathbf{v}}-\bar{\mathbf{v}}^{*} \bigr\Vert _{ \mathbf{L^{1}}(0,1)} + \int _{0}^{\bar{x}_{i}} \int _{0}^{t} \bigl\vert h_{i}\bigl( \tau,X_{i}(\tau;t,x),\mathbf{v}\bigl(\tau,X_{i}(\tau;t,x) \bigr)\bigr) \\ &\qquad{}-h_{i}\bigl(\tau,X_{i}(\tau;t,x), \mathbf{v}^{*}\bigl( \tau,X_{i}(\tau;t,x)\bigr)\bigr) \bigr\vert \,\mathrm{d}\tau \,\mathrm{d}x \\ &\quad\leq \bigl\Vert \bar{\mathbf{v}}-\bar{\mathbf{v}}^{*} \bigr\Vert _{ \mathbf{L^{1}}(0,1)} +L \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau,\cdot )- \mathbf{v}^{*}(\tau,\cdot ) \bigr\Vert _{\mathbf{L^{1}}(0,1)}\,\mathrm{d} \tau. \end{aligned}$$

    Similarly, for \(\tilde{t}\in (0,t)\), we deduce the estimate for the trace:

    $$ \begin{aligned} & \int _{\tilde{t}}^{t} \bigl\vert M_{i}( \mathbf{v}) (\tau,0+)-M_{i}\bigl( \mathbf{v}^{*}\bigr) ( \tau,0+) \bigr\vert \,\mathrm{d}\tau \\ &\quad\le \int _{\tilde{t}}^{t} \bigl\vert \bar{v}_{i} \bigl(X_{i}(0;\tau,0)\bigr)- \bar{v}^{*}_{i} \bigl(X_{i}(0;\tau,0)\bigr) \bigr\vert \,\mathrm{d}\tau \\ &\qquad{}+ \int _{\tilde{t}}^{t} \int _{0}^{\tau} \bigl\vert h_{i}\bigl( \theta,X_{i}( \theta;\tau,0), \mathbf{v}\bigl(\theta,X_{i}( \theta;\tau,0)\bigr)\bigr) \\ &\qquad{}-h_{i}\bigl(\theta,X_{i}(\theta;\tau,0), \mathbf{v}^{*}\bigl(\theta,X_{i}( \theta;\tau,0)\bigr)\bigr) \bigr\vert \,\mathrm{d}\theta \,\mathrm{d}\tau \\ &\quad\le \bigl\Vert \bar{\mathbf{v}}-\bar{\mathbf{v}}^{*} \bigr\Vert _{ \mathbf{L^{1}}(0,1)} +L \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau,\cdot )- \mathbf{v}^{*}(\tau,\cdot ) \bigr\Vert _{\mathbf{L^{1}}(0,1)} \,\mathrm{d} \tau. \end{aligned} $$
    (81)
  2. 2.

    In the same way, for \(i\in \{\ell +1,\ldots,n \}\), \(t \le \bar{t}\), and \(x \in [\bar{x}_{i}, 1]\), where \(\bar{x}_{i} = X_{i} (t; 0, 0 )\),

    $$\begin{aligned} \int _{\bar{x}_{i}}^{1} \bigl\vert M_{i}( \mathbf{v}) (t,x)-M_{i}\bigl(\mathbf{v}^{*}\bigr) (t,x) \bigr\vert \,\mathrm{d}x \leq{}& \bigl\Vert \bar{\mathbf{v}}- \bar{ \mathbf{v}}^{*} \bigr\Vert _{\mathbf{L^{1}}(0,1)} \\ &{} +L \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau,\cdot )- \mathbf{v}^{*}( \tau,\cdot ) \bigr\Vert _{\mathbf{L^{1}}(0,1)}\,\mathrm{d}\tau, \end{aligned}$$

    and, for \(\tilde{t} \in (0, t)\),

    $$ \begin{aligned} & \int _{\tilde{t}}^{t} \bigl\vert M_{i}( \mathbf{v}) (\tau,1-)-M_{i}\bigl( \mathbf{v}^{*}\bigr) ( \tau,1-) \bigr\vert \,\mathrm{d}\tau \\ &\quad\leq \bigl\Vert \bar{\mathbf{v}}-\bar{\mathbf{v}}^{*} \bigr\Vert _{ \mathbf{L^{1}}(0,1)} +L \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau,\cdot )- \mathbf{v}^{*}(\tau,\cdot ) \bigr\Vert _{\mathbf{L^{1}}(0,1)} \,\mathrm{d} \tau. \end{aligned} $$
    (82)
  3. 3.

    For \(i\in \{1,\ldots,\ell \}\), \(t \le \bar{t}\), and \(x \in [\bar{x}_{i},1]\), where \(\bar{x}_{i} = X_{i} (t; 0, 1 )\), using (81) and (82), we deduce that

    $$\begin{aligned} & \int _{\bar{x}_{i}}^{1} \bigl\vert M_{i}( \mathbf{v}) (t,x)-M_{i}\bigl( \mathbf{v}^{*}\bigr) (t,x) \bigr\vert \,\mathrm{d}x \\ &\quad\le \int _{\bar{x}_{i}}^{1} \bigl\vert m_{i} \bigl(T_{i}(1;t,x)\bigr)-m_{i}^{*} \bigl(T_{i}(1;t,x)\bigr) \bigr\vert \,\mathrm{d}x \\ &\qquad{}+ \int _{\bar{x}_{i}}^{1} \int _{T_{i}(1;t,x)}^{t} \bigl\vert h_{i}\bigl( \tau,X_{i}(\tau;t,x),\mathbf{v}\bigl(\tau,X_{i}(\tau;t,x) \bigr)\bigr) \\ &\qquad{}-h_{i}\bigl(\tau,X_{i}(\tau;t,x), \mathbf{v}^{*}\bigl( \tau,X_{i}(\tau;t,x)\bigr)\bigr) \bigr\vert \,\mathrm{d}\tau \,\mathrm{d}x \\ &\quad\le \bigl\Vert \mathbf{b}-\mathbf{b}^{*} \bigr\Vert _{\mathbf{L^{1}}(0,T)} + K\sum_{j = 1}^{\ell } \int _{T_{j}(1;t,\bar{x}_{i})}^{t} \bigl\vert M_{j}( \mathbf{v}) (\tau,0+)-M_{j}\bigl(\mathbf{v}^{*}\bigr) ( \tau,0+) \bigr\vert \,\mathrm{d}\tau \\ &\qquad{}+ K\sum_{j = \ell +1}^{n} \int _{T_{j}(1;t,\bar{x}_{i})}^{t} \bigl\vert M_{j}( \mathbf{v}) (\tau,1-)-M_{j}\bigl(\mathbf{v}^{*}\bigr) ( \tau,1-) \bigr\vert \,\mathrm{d}\tau \\ &\qquad{}+ L \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau,\cdot )- \mathbf{v}^{*}(\tau, \cdot ) \bigr\Vert _{\mathbf{L^{1}}(0,1)}\,\mathrm{d}\tau \\ &\quad\le \bigl\Vert \mathbf{b}-\mathbf{b}^{*} \bigr\Vert _{\mathbf{L^{1}}(0,T)} + nK \bigl\Vert \bar{\mathbf{v}}-\bar{\mathbf{v}}^{*} \bigr\Vert _{ \mathbf{L^{1}}(0,1)} \\ &\qquad{}+nKL \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau,\cdot )- \mathbf{v}^{*}( \tau,\cdot ) \bigr\Vert _{\mathbf{L^{1}}(0,1)}\,\mathrm{d}\tau. \end{aligned}$$
  4. 4.

    Analogous calculations imply that for \(i\in \{\ell +1,\ldots,n \}\), \(t \le \bar{t}\), and \(x\in [0,\bar{x}_{i}]\), with \(\bar{x}_{i} = X_{i} (t; 0, 0 )\),

    $$\begin{aligned} & \int _{0}^{\bar{x}_{i}} \bigl\vert M_{i}( \mathbf{v}) (t,x) - M_{i}\bigl( \mathbf{v}^{*}\bigr) (t,x) \bigr\vert \,\mathrm{d}x \\ &\quad\le \bigl\Vert \mathbf{b}- \mathbf{b}^{*} \bigr\Vert _{\mathbf{L^{1}}(0,T)} + nK \bigl\Vert \bar{\mathbf{v}}-\bar{ \mathbf{v}}^{*} \bigr\Vert _{\mathbf{L^{1}}(0,1)} \\ &\qquad{}+ nKL \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau,\cdot )- \mathbf{v}^{*}(\tau,\cdot ) \bigr\Vert _{\mathbf{L^{1}}(0,1)}\,\mathrm{d} \tau. \end{aligned}$$

Combining the estimates obtained in the previous four cases, we have

$$\begin{aligned} \bigl\Vert \mathbf{v}(t,\cdot )-\mathbf{v}^{*}(t,\cdot ) \bigr\Vert _{ \mathbf{L^{1}}} \leq{}& 2 \bigl\Vert \mathbf{b}-\mathbf{b}^{*} \bigr\Vert _{ \mathbf{L^{1}}(0,T)} + (2nK+2) \bigl\Vert \bar{\mathbf{v}}- \bar{ \mathbf{v}}^{*} \bigr\Vert _{\mathbf{L^{1}}(0,1)} \\ &{}+(2nKL+2) \int _{0}^{t} \bigl\Vert \mathbf{v}(\tau,\cdot )- \mathbf{v}^{*}(\tau,\cdot ) \bigr\Vert _{\mathbf{L^{1}}(0,1)} \,\mathrm{d} \tau \end{aligned}$$

for every \(t \le \bar{t}\). Using the Gronwall lemma, we obtain (9). Moreover, estimates (11) and (12) follow from (81), (82), and (9).

Step 4. Total variation and \(\mathbf{L^{\infty }}\) estimates. The total variation (10) and the \(\mathbf{L^{\infty }}\) estimates (13) follow from Lemma 17. □

5 Conclusions

We proved the well-posedness of a switched system composed of a system of linear hyperbolic balance laws and a system of linear algebraic differential equations. The results are global in time for initial data with finite total variation; no smallness assumption on the total variation is required.

The present setting includes networks and looped systems of hyperbolic balance laws. Moreover, it describes many real applications, such as networks for water supply, electrical power distribution, or gas transport. Similar systems, with nonlinear PDEs, are also used for modeling the human circulatory system or for controlling traffic flow through autonomous vehicles.

Availability of data and materials

Not applicable.

Code availability

Not applicable.

References

  1. Amadori, D.: Initial-boundary value problems for nonlinear systems of conservation laws. Nonlinear Differ. Equ. Appl. 4(1), 1–42 (1997)


  2. Bardos, C., le Roux, A.Y., Nédélec, J.-C.: First order quasilinear equations with boundary conditions. Commun. Partial Differ. Equ. 4(9), 1017–1034 (1979)


  3. Bastin, G., Coron, J.-M.: Stability and Boundary Stabilization of 1-D Hyperbolic Systems. Progress in Nonlinear Differential Equations and Their Applications, vol. 88. Springer, Cham (2016)


  4. Borsche, R., Colombo, R.M., Garavello, M.: On the coupling of systems of hyperbolic conservation laws with ordinary differential equations. Nonlinearity 23(11), 2749–2770 (2010)


  5. Borsche, R., Colombo, R.M., Garavello, M.: Mixed systems: ODEs—balance laws. J. Differ. Equ. 252(3), 2311–2338 (2012)


  6. Borsche, R., Colombo, R.M., Garavello, M., Meurer, A.: Differential equations modeling crowd interactions. J. Nonlinear Sci. 25(4), 827–859 (2015)


  7. Borsche, R., Kocoglu, D., Trenn, S.: A distributional solution framework for linear hyperbolic PDEs coupled to switched DAEs. Math. Control Signals Syst. 32(4), 455–487 (2020)


8. Bressan, A.: Hyperbolic Systems of Conservation Laws. Oxford Lecture Series in Mathematics and Its Applications, vol. 20. Oxford University Press, Oxford (2000)


  9. Bressan, A., Piccoli, B.: Introduction to the Mathematical Theory of Control. AIMS Series on Applied Mathematics, vol. 2. American Institute of Mathematical Sciences (AIMS), Springfield (2007)


  10. Chalons, C., Delle Monache, M.L., Goatin, P.: A conservative scheme for non-classical solutions to a strongly coupled PDE-ODE problem. Interfaces Free Bound. 19(4), 553–570 (2017)


  11. Colombo, R.M., Marcellini, F.: A mixed ODE-PDE model for vehicular traffic. Math. Methods Appl. Sci. 38(7), 1292–1302 (2015)


  12. Colombo, R.M., Rossi, E.: On the micro-macro limit in traffic flow. Rend. Semin. Mat. Univ. Padova 131, 217–235 (2014)

  13. Delle Monache, M.L., Goatin, P.: Scalar conservation laws with moving constraints arising in traffic flow modeling: an existence result. J. Differ. Equ. 257(11), 4015–4029 (2014)

  14. Dubois, F., LeFloch, P.: Boundary conditions for nonlinear hyperbolic systems of conservation laws. J. Differ. Equ. 71(1), 93–122 (1988)

  15. Egger, H., Kugler, T.: Damped wave systems on networks: exponential stability and uniform approximations. Numer. Math. 138(4), 839–867 (2018)

  16. Egger, H., Kugler, T., Strogies, N.: Parameter identification in a semilinear hyperbolic system. Inverse Probl. 33(5), 055022 (2017)

  17. Garavello, M., Goatin, P., Liard, T., Piccoli, B.: A multiscale model for traffic regulation via autonomous vehicles. J. Differ. Equ. 269(7), 6088–6124 (2020)

  18. Garavello, M., Han, K., Piccoli, B.: Models for Vehicular Traffic on Networks. AIMS Series on Applied Mathematics, vol. 9. American Institute of Mathematical Sciences (AIMS), Springfield (2016)

  19. Garavello, M., Piccoli, B.: Boundary coupling of microscopic and first order macroscopic traffic models. Nonlinear Differ. Equ. Appl. 24(4), Article ID 43 (2017)

  20. Göttlich, S., Herty, M., Schillen, P.: Electric transmission lines: control and numerical discretization. Optim. Control Appl. Methods 37(5), 980–995 (2016)

  21. Hante, F.: Hybrid Dynamics Comprising Modes Governed by Partial Differential Equations: Modeling, Analysis and Control for Semilinear Hyperbolic Systems in One Space Dimension (2010)

  22. Hartman, P.: Ordinary Differential Equations, 2nd edn. Birkhäuser, Boston (1982)

  23. Higdon, R.L.: Initial-boundary value problems for linear hyperbolic systems. SIAM Rev. 28(2), 177–217 (1986)

  24. Kunkel, P., Mehrmann, V.: Differential-Algebraic Equations. Analysis and Numerical Solution. EMS Textbooks in Mathematics. European Mathematical Society, Zürich (2006)

  25. Quarteroni, A., Formaggia, L., Veneziani, A.: Complex Systems in Biomedicine. Springer, Berlin (2006)

  26. Quarteroni, A., Ragni, S., Veneziani, A.: Coupling between lumped and distributed models for blood flow problems. Comput. Vis. Sci. 4(2), 111–124 (2001)

  27. Quarteroni, A., Veneziani, A.: Analysis of a geometrical multiscale model based on the coupling of ODEs and PDEs for blood flow simulations. Multiscale Model. Simul. 1(2), 173–195 (2003)

  28. Trenn, S.: Switched differential algebraic equations. In: Vasca, F., Iannelli, L. (eds.) Dynamics and Control of Switched Electronic Systems—Advanced Perspectives for Modeling, Simulation and Control of Power Converters, pp. 189–216. Springer, London (2012)

Acknowledgements

The authors were partially supported by the GNAMPA 2020 project “From Wellposedness to Game Theory in Conservation Laws”.

Funding

Not applicable.

Author information

Contributions

All authors contributed equally to the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mauro Garavello.

Ethics declarations

Competing interests

The authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Borsche, R., Garavello, M. & Kocoglu, D. Switched hyperbolic balance laws and differential algebraic equations. Adv Cont Discr Mod 2023, 19 (2023). https://doi.org/10.1186/s13662-023-03764-6
