

Oscillation of Runge-Kutta methods for advanced impulsive differential equations with piecewise constant arguments

Abstract

The purpose of this paper is to study oscillation of Runge-Kutta methods for linear advanced impulsive differential equations with piecewise constant arguments. We obtain conditions of oscillation and nonoscillation for Runge-Kutta methods. Moreover, we prove that the oscillation of the exact solution is preserved by the θ-methods. It turns out that the zeros of the piecewise linear interpolation functions of the numerical solution converge to the zeros of the exact solution. We give some numerical examples to confirm the theoretical results.

1 Introduction

In the past three decades, the theory of differential equations with piecewise constant arguments has been studied intensively. In 1984, Cooke and Wiener [1] studied differential equations with piecewise constant arguments without impulses and noted that such equations are closely related to impulsive and difference equations. Later, the oscillation of discontinuous solutions of differential equations with piecewise constant arguments was posed by Wiener as an open problem ([2], p.380). In recent years, the Euler method for impulsive and stochastic delay differential equations has been studied in [3] and [4], and impulsive delay difference equations have been studied in [5]. In particular, oscillation of advanced impulsive differential equations with piecewise constant arguments was studied in [6]. Furthermore, asymptotic stability of Runge-Kutta methods for the advanced linear impulsive differential equation with piecewise constant arguments was studied in [7]. In the present paper, we study oscillation of Runge-Kutta methods for the following equation:

$$ \textstyle\begin{cases} x'(t)+ax(t)+bx([t])+cx([t+1])=0,& t\geq0, t\neq k,k=1,2,\ldots, \\ \Delta x(k)=dx(k),& k=1,2,\ldots, \\ x(0)=x_{0}, \end{cases} $$
(1.1)

where a, b, c, d, and \(x_{0}\) are real constants, \(\Delta x(k)=x(k)-x(k^{-})\), and \([\cdot]\) denotes the greatest integer function.

The rest of the paper is organized as follows. In Section 2, the results about oscillation of the exact solutions of (1.1) in [6] are introduced. In Section 3, the conditions of oscillation and nonoscillation for Runge-Kutta methods are obtained. In Section 4, the conditions of oscillation and nonoscillation for θ-methods are obtained. Moreover, it is proved that the oscillation of the exact solution is preserved by the θ-methods. It turns out that the zeros of the piecewise linear interpolation functions of the numerical solution converge to the zeros of the exact solution with the order of accuracy 1 (\(\theta\neq\frac{1}{2}\)) and 2 (\(\theta=\frac{1}{2}\)). In the last section, two simple numerical examples are given to confirm the theoretical results.

2 Preliminaries

Definition 2.1

See [6, 7]

A function \(x(t)\) defined on \([0,\infty)\) is said to be a solution of (1.1) if it satisfies the following conditions:

  1. (1)

    \(x:[0,\infty)\rightarrow\mathbb{R}\) is continuous for \(t\in [0,+\infty)\) with the possible exception of the points \([t]\in[ 0, \infty)\),

  2. (2)

    \(x(t)\) is right continuous and has left-hand limit at the points \([t]\in[ 0, \infty)\),

  3. (3)

    \(x(t)\) is differentiable and satisfies \(x'(t)+ax(t)+bx([t])+cx([t+1])=0\) for any \(t\in\mathbb{R}^{+}\) with the possible exception of the points \([t]\in[ 0, \infty)\) where one-sided derivatives exist,

  4. (4)

    \(x(n)\) satisfies \(\Delta x(n)=d x(n)\) for \(n\in\mathbb{N}\) and \(x(0)=x_{0}\).

Theorem 2.2

See [6, 7]

When \(a \neq0\), Eq. (1.1) has a unique solution \(x(t)\) on each interval \([n,n+1)\), \(n=0,1,2,\ldots\) , given by

$$\begin{aligned}& x(t)=m_{0}\bigl({\{t\}}\bigr)x(n)+m_{1}\bigl(\{t\} \bigr)x(n+1), \end{aligned}$$
(2.1)
$$\begin{aligned}& (1-d)x(n+1)-m_{0}(1)x(n)-m_{1}(1)x(n+1)=0, \end{aligned}$$
(2.2)

where \(\{t\}=t-[t]\), \(m_{0}(t)=\mathrm{e}^{-at}+\frac{b}{a}(\mathrm {e}^{-at}-1)\), and \(m_{1}(t)=\frac{c}{a} (\mathrm{e}^{-at}-1)\).
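
To make Theorem 2.2 concrete, the following Python sketch advances the node values \(x(n)\) through (2.2) (assuming \(1-d-m_{1}(1)\neq0\), so that (2.2) can be solved for \(x(n+1)\)) and then evaluates (2.1) inside each unit interval. The parameter values are illustrative (they are the coefficients of Example 5.2 below), and the function names are ours, not part of the paper.

```python
import math

def exact_solution(a, b, c, d, x0, t_max=5.0, dt=0.01):
    """Evaluate the exact solution of (1.1) through (2.1)-(2.2); a != 0 is assumed."""

    def m0(s):
        return math.exp(-a * s) + (b / a) * (math.exp(-a * s) - 1.0)

    def m1(s):
        return (c / a) * (math.exp(-a * s) - 1.0)

    # advance the node values x(n) through (2.2): (1 - d) x(n+1) = m0(1) x(n) + m1(1) x(n+1)
    nodes = [x0]
    for _ in range(int(t_max) + 1):
        nodes.append(m0(1.0) * nodes[-1] / (1.0 - d - m1(1.0)))

    # evaluate (2.1) on each interval [n, n+1)
    ts, xs = [], []
    t = 0.0
    while t < t_max:
        n = int(t)
        ts.append(t)
        xs.append(m0(t - n) * nodes[n] + m1(t - n) * nodes[n + 1])
        t += dt
    return ts, xs

# illustrative data (a != 0, c > 0, 1 - d > 0); these are the coefficients of Example 5.2
ts, xs = exact_solution(a=1.0, b=2.0 / (math.e - 1.0), c=1.0, d=0.5, x0=1.0)
sign_changes = sum(1 for i in range(len(xs) - 1) if xs[i] * xs[i + 1] < 0)
print("sign changes of x(t) on [0, 5]:", sign_changes)
```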

Definition 2.3

See [6]

The solution \(x(t)\) of (1.1) is said to be oscillatory if there exist two real-valued sequences \((t_{n})_{n\geq0}, (t'_{n})_{n\geq 0} \subseteq[0,\infty)\) such that \(t_{n}\rightarrow\infty\), \(t'_{n}\rightarrow\infty\) as \(n\rightarrow\infty\) and \(x(t_{n})\leq0\leq x(t'_{n})\) for \(n\geq N\) with N sufficiently large. Otherwise, the solution is called nonoscillatory.

Remark 2.4

When \(1-d>0\), Definition 2.3 is equivalent to the following one: The solution \(x(t)\) of (1.1) is said to be oscillatory if \(x(t)\) has arbitrarily large zeros, that is, for every \(T>0\), there exists a point \(\hat{t}>T\) such that \(x(\hat{t})=0\).

Theorem 2.5

See [6]

Let \(a\neq0\), \(c>0\), and \(1-d>0\). Then, all solutions of (1.1) are oscillatory if and only if

$$ b\geq\frac{a}{\mathrm{e}^{a}-1}. $$
(2.3)

3 Runge-Kutta methods

3.1 Oscillation of Runge-Kutta methods

Consider the Runge-Kutta methods for (1.1):

$$ \textstyle\begin{cases} x_{k, l+1}=x_{k,l}-h\sum_{i=1}^{v}b_{i}( aY^{i}_{k, l}+b x_{k,0}+c x_{k+1,0}),&k=0,1,\ldots, \\ Y^{i}_{k, l}=x_{k,l}-h\sum_{j=1}^{v}a_{ij}( aY^{j}_{k, l}+b x_{k, 0}+c x_{k+1,0}),&l=0,\ldots, m-1, \\ (1-d)x_{k+1, 0}=x_{k,m}, \\ x_{0,0}=x_{0}, \end{cases} $$
(3.1)

where \(h=\frac{1}{m}\) with integer \(m\geq1\), and v is referred to as the number of stages. The weights \(b_{i}\), the abscissae \(c_{i}=\sum_{j=1}^{v}a_{ij}\), and the matrix \(A=[a_{ij}]_{i,j=1}^{v}\) will be denoted by \((A,b,c)\). Define

$$ x_{n}=x_{km+l}=x_{k,l},\quad k=0,1,2, \ldots, l=0,1,\ldots,m-1, $$
(3.2)

where \(n=km+l\), and \(x_{n}\) is an approximation of the solution \(x(nh)\) of (1.1), \(n=0, 1,\ldots \) .

For the above-mentioned Runge-Kutta methods, in the following, we always assume that there exist \(\delta_{1}<0\) and \(\delta_{2}>0\) such that \(0< R(z)<1\) for \(\delta_{1}< z<0\) and \(R(z)>1\) for \(0< z<\delta_{2}\), which means that

$$ \frac{R(z)-1}{z}>0 \quad \mbox{for } \delta_{1}< z< \delta_{2}, $$
(3.3)

where \(z=-ah\), \(R(z)=1+zb^{T}(I-zA)^{-1}e\), and \(e=(1, 1, \ldots, 1)^{T}\) is a vector of dimension v.
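
For a concrete tableau, the stability function \(R(z)=1+zb^{T}(I-zA)^{-1}e\) can be evaluated directly. The sketch below uses the classical four-stage explicit Runge-Kutta method as an illustrative choice (the paper does not single out a particular tableau) and checks assumption (3.3) on a few sample values of z.

```python
import numpy as np

# classical four-stage Runge-Kutta method (an illustrative tableau, not prescribed by the paper)
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
weights = np.array([1 / 6, 1 / 3, 1 / 3, 1 / 6])   # the weights b_i of the tableau
e = np.ones(4)

def stability_function(z):
    """R(z) = 1 + z b^T (I - zA)^{-1} e."""
    return 1.0 + z * weights @ np.linalg.solve(np.eye(4) - z * A, e)

# check assumption (3.3), (R(z) - 1)/z > 0, on a few sample values of z
for z in np.linspace(-1.0, 1.0, 9):
    if abs(z) > 1e-12:
        R = stability_function(z)
        print(f"z = {z:+.2f}   R(z) = {R:.6f}   (R(z) - 1)/z = {(R - 1.0) / z:.6f}")
```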

Definition 3.1

A nontrivial solution \(x_{n}\) of (3.1)-(3.2) is said to be oscillatory if there exists a sequence \(n_{k}\) such that \(n_{k}\rightarrow \infty\) as \(k\rightarrow\infty\) and \(x_{n_{k}}x_{n_{k+1}}\leq0\); otherwise, it is called nonoscillatory. We say that the Runge-Kutta method (3.1)-(3.2) for (1.1) is oscillatory if all the nontrivial solutions of (3.1)-(3.2) are oscillatory; we say that the Runge-Kutta method (3.1)-(3.2) for (1.1) is non-oscillatory if all the nontrivial solutions of (3.1)-(3.2) are nonoscillatory.

Definition 3.2

We say that the Runge-Kutta method preserves oscillations of (1.1) if (1.1) oscillates and there is \(h_{0}\) such that (3.1)-(3.2) oscillates for \(h< h_{0}\).

Theorem 3.3

Let \(a\neq0\), \(c>0\), and \(1-d>0\). Then the Runge-Kutta method (3.1)-(3.2) for (1.1) is oscillatory if \(I-z A\) is invertible and

$$ b\geq\frac{aR(z)^{m}}{1-R(z)^{m}} $$
(3.4)

for \(\delta_{1}< z<\delta_{2}\).

Proof

Assume that \(I-z A\) is invertible. For \(k=0,1,\ldots\) and \(l=1,2,\ldots,m\), we obtain

$$\begin{aligned} x_{k,l}&=R(z)x_{k,l-1}+\frac{b}{a} \bigl(R(z)-1 \bigr)x_{k,0}+\frac{c}{a} \bigl(R(z)-1\bigr)x_{k+1,0} \\ &=\biggl(R(z)^{l}+\frac{b}{a} \bigl(R(z)^{l}-1\bigr) \biggr)x_{k,0}+\frac{c}{a} \bigl(R(z)^{l}-1 \bigr)x_{k+1,0}, \end{aligned}$$

which implies

$$ x_{k+1,0}=\frac{R(z)^{m}+\frac{b}{a} (R(z)^{m}-1)}{1-d-\frac{c}{a} (R(z)^{m}-1)}\cdot x_{k,0}. $$
(3.5)

For \(\delta_{1}< z<\delta_{2}\), equation (3.4) implies \(R(z)^{m}+\frac{b}{a}(R(z)^{m}-1)\leq0\) and \(-\frac{c}{a}(R(z)^{m}-1)> 0\). So \(a\neq0\), \(c>0\), \(1-d>0\), and (3.5) imply \(x_{k,0}x_{k+1,0}\leq0\), that is, \(x_{km}x_{(k+1)m}\leq0\). Hence, (3.1)-(3.2) is oscillatory. □
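
The proof reduces oscillation to the sign of the amplification factor in (3.5). The following sketch, which assumes for illustration the one-stage backward Euler method (stability function \(R(z)=\frac{1}{1-z}\)) and parameter values for which (3.4) holds at the chosen stepsize, iterates (3.5) and prints the sign-alternating node values \(x_{k,0}\).

```python
import math

# illustrative data with a > 0, c > 0, 1 - d > 0, and b chosen large enough for (3.4)
a, c, d = 1.0, 1.0, 0.5
b = 2.0 / (math.e - 1.0)

m = 10
h = 1.0 / m
z = -a * h
R = 1.0 / (1.0 - z)        # stability function of the one-stage backward Euler method
Rm = R ** m
assert b >= a * Rm / (1.0 - Rm), "condition (3.4) should hold for this data"

# iterate (3.5); the factor multiplying x_{k,0} is nonpositive, so the node values alternate in sign
factor = (Rm + (b / a) * (Rm - 1.0)) / (1.0 - d - (c / a) * (Rm - 1.0))
x = 1.0
for k in range(6):
    print(f"x_{{{k},0}} = {x:+.6f}")
    x *= factor
```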

Lemma 3.4

See [8-11]

The \((j,k)\)-Padé approximation to \(\mathrm{e}^{z}\) is given by

$$ R(z)=\frac{P_{j}(z)}{Q_{k}(z)}, $$
(3.6)

where

$$\begin{aligned}& P_{j}(z)=1+\frac{j}{j+k}\cdot z+\frac{j(j-1)}{(j+k)(j+k-1)}\cdot \frac {z^{2}}{2!}+\cdots+\frac{j!k!}{(j+k)!}\cdot\frac{z^{j}}{j!}, \\& Q_{k}(z)=1-\frac{k}{j+k}\cdot z+\frac{k(k-1)}{(j+k)(j+k-1)}\cdot \frac {z^{2}}{2!}+\cdots+(-1)^{k}\cdot\frac{k!j!}{(j+k)!}\cdot \frac{z^{k}}{k!}, \end{aligned}$$

with error

$$\mathrm{e}^{z}-R(z)=(-1)^{k}\cdot\frac{j!k!}{(j+k)!(j+k+1)!}\cdot z^{j+k+1}+O\bigl(z^{j+k+2}\bigr). $$

It is the unique rational approximation to \(\mathrm{e}^{z}\) of order \(j+k\) such that the degrees of numerator and denominator are j and k, respectively.
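
The coefficient formulas of Lemma 3.4 are easy to check numerically. The sketch below builds \(P_{j}\) and \(Q_{k}\) for the illustrative case \(j=k=1\) (the \((1,1)\)-Padé approximation \(\frac{1+z/2}{1-z/2}\)) and observes that the error \(\mathrm{e}^{z}-R(z)\) scales like \(z^{j+k+1}=z^{3}\).

```python
import math

def pade_exp(j, k):
    """Coefficients (lowest degree first) of P_j and Q_k from Lemma 3.4."""
    P = [math.factorial(j + k - i) * math.factorial(j)
         / (math.factorial(j + k) * math.factorial(i) * math.factorial(j - i))
         for i in range(j + 1)]
    Q = [(-1) ** i * math.factorial(j + k - i) * math.factorial(k)
         / (math.factorial(j + k) * math.factorial(i) * math.factorial(k - i))
         for i in range(k + 1)]
    return P, Q

def poly(coeffs, z):
    return sum(cf * z ** i for i, cf in enumerate(coeffs))

# the (1,1)-Padé approximation is (1 + z/2)/(1 - z/2); its error should scale like z^3
P, Q = pade_exp(1, 1)
for z in (0.4, 0.2, 0.1, 0.05):
    err = math.exp(z) - poly(P, z) / poly(Q, z)
    print(f"z = {z:4.2f}   e^z - R(z) = {err: .3e}   (e^z - R(z))/z^3 = {err / z ** 3: .4f}")
```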

Lemma 3.5

See [12, 13]

Let \(R(z)\) be the \((j,k)\)-Padé approximation to \(\mathrm{e}^{z}\). Then, for \(z>0\) (\(a<0\), \(z=-ah\)),

  1. (i)

    \(R(z)<\mathrm{e}^{z}\) for all \(z>0\) if and only if k is even,

  2. (ii)

    \(R(z)>\mathrm{e}^{z}\) for \(0< z<\eta\) if and only if k is odd;

and, for \(z<0\) (\(a>0\), \(z=-ah\)),

  1. (i)

    \(R(z)> \mathrm{e}^{z}\) for all \(z<0\) if and only if j is even,

  2. (ii)

    \(R(z)<\mathrm{e}^{z}\) for \(\varsigma< z<0\) if and only if j is odd,

where η is a real zero of \(Q_{k}(z)\), and ς is a real zero of \(P_{j}(z)\).

Theorem 3.6

If \(a\neq0\), \(c>0\), \(1-d>0\), and \(b\geq\frac{a}{\mathrm{e}^{a}-1}\), then the Runge-Kutta method (3.1)-(3.2) for (1.1) is oscillatory if any one of the following conditions holds:

  1. (1)

    \(a<0\), and k is odd for \(h=\frac{1}{m}<\min\{\frac{\eta }{-a},\frac{\delta_{2}}{-a}\}\);

  2. (2)

\(a>0\), and j is odd for \(h=\frac{1}{m}<\min\{\frac{\varsigma}{-a},\frac{\delta_{1}}{-a}\}\).

Proof

For brevity, we prove only part (1); the proof of part (2) is similar. When \(a<0\) and k is odd, by Lemma 3.5 we obtain that

$$ R(z)>\mathrm{e}^{z} \quad \mbox{for } 0< z=-ah< \eta, $$

which implies that

$$ \frac{\mathrm{e}^{-a}}{1-\mathrm{e}^{-a}}< \frac{R(z)^{m}}{1-R(z)^{m}}< 0\quad \mbox{for } 0< z=-ah< \min\{\eta, \delta_{2}\} \mbox{ and } h=\frac{1}{m}, $$

which also implies that

$$ \frac{a R(z)^{m}}{1-R(z)^{m}}< \frac{a \mathrm{e}^{-a}}{1-\mathrm {e}^{-a}}=\frac{a}{\mathrm{e}^{a}-1}\leq b. $$

Hence, (3.1)-(3.2) is oscillatory by Theorem 3.3. □

3.2 Nonoscillation of Runge-Kutta methods

Lemma 3.7

If \(a\neq0\), \(c>0\), \(1-d>0\), and \(b<\frac{a}{\mathrm{e}^{a}-1}\), then \(b<\frac{a R(z)^{l}}{1-R(z)^{l}}\), that is, \(R(z)^{l}+\frac{b}{a} (R(z)^{l}-1)>0\) for \(l=1,2,\ldots, m\), provided that any one of the following conditions holds:

  1. (1)

    \(a<0\), and k is even for \(h=\frac{1}{m}<-\frac{\delta_{2}}{a}\);

  2. (2)

    \(a>0\), and j is even for \(h=\frac{1}{m}<-\frac{\delta_{1}}{a}\).

Proof

For brevity, we prove only part (1) of the lemma. When \(a<0\) and k is even, by Lemma 3.5 we obtain that

$$ 1< R(z)< \mathrm{e}^{z}\quad \mbox{for } 0< z=-ah< \delta_{2}, $$

which implies that, for \(l=1,2,\ldots,m-1\) and \(h=\frac{1}{m}\),

$$ \frac{R(z)^{l}}{1-R(z)^{l}}< \frac{R(z)^{m}}{1-R(z)^{m}}< \frac{\mathrm {e}^{-a}}{1-\mathrm{e}^{-a}}< 0, $$

which also implies that

$$ \frac{a \mathrm{e}^{-a}}{1-\mathrm{e}^{-a}}< \frac{a R(z)^{m}}{1-R(z)^{m}}< \frac{a R(z)^{l}}{1-R(z)^{l}}. $$

Hence, we obtain that

$$ b< \frac{a}{\mathrm{e}^{a}-1}=\frac{a \mathrm{e}^{-a}}{1-\mathrm {e}^{-a}}< \frac{a R(z)^{m}}{1-R(z)^{m}}< \frac{a R(z)^{l}}{1-R(z)^{l}}, \quad l=1,2,\ldots, m-1. $$

 □

Lemma 3.8

If \(c>0\), \(x+\frac{b}{a}(x-1)>0\), and \(f(x)=\frac{-\frac{c}{a} (x-1)}{x+\frac{b}{a} (x-1)}\), then

  1. (1)

\(f(x)\) is increasing if \(a<0\);

  2. (2)

\(f(x)\) is decreasing if \(a>0\).

Proof

It follows from \(c>0\), \(x+\frac{b}{a}(x-1)>0\), and

$$ f'(x)=\frac{-\frac{c}{a}(x+\frac{b}{a} (x-1))+\frac{c}{a} (x-1)(1+\frac{b}{a})}{(x+\frac{b}{a} (x-1))^{2}}=\frac{-\frac{c}{a}}{(x+\frac{b}{a} (x-1))^{2}} $$

that \(f'(x)>0\) if \(a<0\) and \(f'(x)<0\) if \(a>0\). Hence,

  1. (1)

\(f(x)\) is increasing if \(a<0\),

  2. (2)

\(f(x)\) is decreasing if \(a>0\).

 □
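
As a quick numerical sanity check of the sign of \(f'\), one can difference f on a grid; the values of a, b, c below are illustrative, and the grid is restricted to \(x>1\), where the denominator \(x+\frac{b}{a}(x-1)\) stays positive for both choices of a.

```python
def f(x, a, b, c):
    return (-(c / a) * (x - 1.0)) / (x + (b / a) * (x - 1.0))

b, c = 0.5, 1.0
xs = [1.0 + 0.1 * i for i in range(1, 11)]        # a grid with x > 1, where the denominator stays positive
for a in (-1.0, 1.0):
    diffs = [f(xs[i + 1], a, b, c) - f(xs[i], a, b, c) for i in range(len(xs) - 1)]
    if all(dv > 0 for dv in diffs):
        trend = "increasing"
    elif all(dv < 0 for dv in diffs):
        trend = "decreasing"
    else:
        trend = "not monotone"
    print(f"a = {a:+.0f}: f is {trend} on the sampled grid")
```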

Theorem 3.9

If \(a\neq0\), \(c>0\), \(1-d>0\), and \(b<\frac{a}{\mathrm{e}^{a}-1}\), then the Runge-Kutta method (3.1)-(3.2) for (1.1) is nonoscillatory if any one of the following conditions holds:

  1. (1)

    \(a<0\), and k is even for \(h=\frac{1}{m}<-\frac{\delta_{2}}{a}\);

  2. (2)

    \(a>0\), and j is even for \(h=\frac{1}{m}<-\frac{\delta_{1}}{a}\).

Proof

For brevity, we prove only part (1) of the theorem. Without loss of generality, assume that \(x_{0}>0\). Obviously, the conditions of Lemma 3.7 are fulfilled, and hence \(R(z)^{m}+\frac{b}{a}(R(z)^{m}-1)>0\). It follows from (3.3) and \(c>0\) that \(-\frac{c}{a} (R(z)^{m}-1)>0\). Therefore, we can obtain that

$$ x_{k,0}=\biggl(\frac{R(z)^{m}+\frac{b}{a} (R(z)^{m}-1)}{1-d-\frac{c}{a} (R(z)^{m}-1)}\biggr)^{k} x_{0}>0,\quad k=1,2,\ldots, $$

which implies

$$x_{k,m}=(1-d)x_{k+1,0}>0,\quad k=0,1,2,\ldots, $$

which also implies

$$ x_{k,m}=\biggl(R(z)^{m}+\frac{b}{a} \bigl(R(z)^{m}-1\bigr)\biggr)x_{k,0}+\frac{c}{a} \bigl(R(z)^{m}-1\bigr)x_{k+1,0}>0. $$

Consequently, since \(1-d>0\), we have

$$ \frac{x_{k,0}}{x_{k+1,0}}>\frac{-\frac{c}{a} (R(z)^{m}-1)}{R(z)^{m}+\frac{b}{a} (R(z)^{m}-1)}>0. $$

Since \(R(z)>1\) in case (1), we have \(R(z)^{l}\leq R(z)^{m}\) for \(l=1,2,\ldots,m\), so by Lemma 3.8 we obtain that, for \(l=1,2,\ldots,m\), \(k=0,1,2,\ldots \) ,

$$ \frac{x_{k,0}}{x_{k+1,0}}>\frac{-\frac{c}{a} (R(z)^{m}-1)}{R(z)^{m}+\frac{b}{a} (R(z)^{m}-1)}\geq\frac{-\frac{c}{a} (R(z)^{l}-1)}{R(z)^{l}+\frac{b}{a} (R(z)^{l}-1)}>0, $$

which means that

$$ x_{k,l}=\biggl(R(z)^{l}+\frac{b}{a} \bigl(R(z)^{l}-1\bigr)\biggr)x_{k,0}+\frac{c}{a} \bigl(R(z)^{l}-1\bigr)x_{k+1,0}>0. $$

Hence, the Runge-Kutta method (3.1)-(3.2) is nonoscillatory. □
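
As an illustration of Theorem 3.9(1), the sketch below runs the scalar form of (3.1)-(3.2) derived in the proof of Theorem 3.3, using the stability function of the classical four-stage explicit Runge-Kutta method (its \((4,0)\)-Padé approximation has \(k=0\), which is even) and illustrative data with \(a<0\), \(c>0\), \(1-d>0\), and \(b<\frac{a}{\mathrm{e}^{a}-1}\); all computed values stay positive, so no oscillation occurs.

```python
import math

def run_scalar_rk(R, a, b, c, d, x0, m, intervals):
    """Scalar form of (3.1)-(3.2) from the proof of Theorem 3.3:
    x_{k,l+1} = R x_{k,l} + (b/a)(R - 1) x_{k,0} + (c/a)(R - 1) x_{k+1,0}."""
    values, xk0 = [], x0
    for _ in range(intervals):
        Rm = R ** m
        xk1 = (Rm + (b / a) * (Rm - 1.0)) / (1.0 - d - (c / a) * (Rm - 1.0)) * xk0
        values.extend((R ** l + (b / a) * (R ** l - 1.0)) * xk0
                      + (c / a) * (R ** l - 1.0) * xk1 for l in range(m))
        xk0 = xk1
    return values

# illustrative data with a < 0, c > 0, 1 - d > 0, and b < a/(e^a - 1)
a, b, c, d, x0 = -1.0, 1.0, 1.0, 0.5, 1.0
assert b < a / (math.exp(a) - 1.0)

m = 10
z = -a / m
R = 1.0 + z + z ** 2 / 2 + z ** 3 / 6 + z ** 4 / 24   # stability function of the classical RK4 method
vals = run_scalar_rk(R, a, b, c, d, x0, m, intervals=10)
print("minimum computed value:", min(vals))           # positive: no sign change, i.e. nonoscillation
```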

4 Piecewise linear interpolation of θ-methods

4.1 Oscillation of θ-methods

Consider the following θ-methods for (1.1):

$$ \textstyle\begin{cases} x_{k, l+1}=x_{k,l}-ha(1-\theta)x_{k,l}-ha\theta x_{k,l+1}-hbx_{k,0}-hcx_{k+1,0},&l=0,\ldots, m-1, \\ (1-d)x_{k+1, 0}=x_{k,m},&k=0,\ldots, \\ x_{0, 0}=x_{0}, \end{cases} $$
(4.1)

where \(h=\frac{1}{m}\) with integer \(m\geq1\). Define

$$ x_{n}=x_{km+l}=x_{k,l},\quad l=0,1, \ldots,m-1, k=0,1,2,\ldots, $$
(4.2)

which is an approximation of the solution \(x(nh)\) of (1.1), \(n=0, 1,\ldots \) .
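
Scheme (4.1) contains the advanced value \(x_{k+1,0}\) on its right-hand side, so on each unit interval this value has to be resolved together with the interior values. One way to do this, sketched below with illustrative data (the coefficients of Example 5.2 and \(\theta=\frac{1}{2}\)), is to exploit the linearity of the recursion: two auxiliary sweeps give the coefficients of \(x_{k,0}\) and \(x_{k+1,0}\) in \(x_{k,m}\), and the impulse relation \((1-d)x_{k+1,0}=x_{k,m}\) is then solved for \(x_{k+1,0}\). The helper names are ours.

```python
import math

def theta_interval(a, b, c, d, theta, m, xk0):
    """Advance the θ-method (4.1) over one unit interval [k, k+1].

    The advanced value x_{k+1,0} appears on the right-hand side, so linearity is used:
    two sweeps give alpha, beta with x_{k,m} = alpha*x_{k,0} + beta*x_{k+1,0}, and the
    impulse relation (1 - d) x_{k+1,0} = x_{k,m} is then solved for x_{k+1,0}.
    """
    h = 1.0 / m

    def step(x, x0, x1):
        # (1 + h a θ) x_{k,l+1} = (1 - h a (1 - θ)) x_{k,l} - h b x_{k,0} - h c x_{k+1,0}
        return ((1.0 - h * a * (1.0 - theta)) * x - h * b * x0 - h * c * x1) / (1.0 + h * a * theta)

    def sweep(x0, x1):
        x = x0
        for _ in range(m):
            x = step(x, x0, x1)
        return x

    alpha, beta = sweep(1.0, 0.0), sweep(0.0, 1.0)
    xk1 = alpha * xk0 / (1.0 - d - beta)
    values, x = [], xk0
    for _ in range(m):
        values.append(x)                 # interior values x_{k,l}, l = 0, ..., m-1
        x = step(x, xk0, xk1)
    return values, xk1

# illustrative run with the coefficients of Example 5.2 and θ = 1/2
out, xk0 = [], 1.0
for _ in range(3):
    vals, xk0 = theta_interval(a=1.0, b=2.0 / (math.e - 1.0), c=1.0, d=0.5, theta=0.5, m=20, xk0=xk0)
    out.extend(vals)
print("sign changes of the numerical solution:",
      sum(1 for i in range(len(out) - 1) if out[i] * out[i + 1] < 0))
```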

Lemma 4.1

For all \(m>|a|\), we have

  1. (i)

    for \(a>0\),

    $$\begin{aligned}& \biggl(1+\frac{z}{1-z\theta}\biggr)^{m}\geq\mathrm{e}^{-a} \quad \textit{if and only if}\quad \varphi(-1)\leq\theta\leq1, \\& \biggl(1+\frac{z}{1-z\theta}\biggr)^{m}\leq\mathrm{e}^{-a} \quad \textit{if and only if}\quad 0\leq\theta\leq\frac{1}{2}; \end{aligned}$$
  2. (ii)

    for \(a<0\),

    $$\begin{aligned}& \biggl(1+\frac{z}{1-z\theta}\biggr)^{m}\geq\mathrm{e}^{-a} \quad \textit{if and only if}\quad \frac{1}{2} \leq\theta\leq1, \\& \biggl(1+\frac{z}{1-z\theta}\biggr)^{m}\leq\mathrm{e}^{-a} \quad \textit{if and only if}\quad 0\leq\theta\leq\varphi(1), \end{aligned}$$

    where \(\varphi(x)=\frac{1}{x} -\frac{1}{\mathrm{e}^{x}-1}\).

Applying Lemma 4.1, we can obtain the following two results.

Theorem 4.2

If \(a\neq0\), \(c>0\), \(1-d>0\), and \(b\geq\frac{a}{\mathrm{e}^{a}-1}\), then the θ-method (4.1)-(4.2) preserves the oscillation of (1.1) if any of the following conditions is satisfied:

  1. (1)

    \(\frac{1}{2} \leq\theta\leq1\) and \(a<0\) for \(h=\frac{1}{m}\), \(m>-a\);

  2. (2)

    \(0 \leq\theta\leq\frac{1}{2}\) and \(a>0\) for \(h=\frac{1}{m}\), \(m>a\).

Theorem 4.3

If \(a\neq0\), \(c>0\), \(1-d>0\), and \(b< \frac{a}{\mathrm{e}^{a}-1}\), then the θ-method (4.1)-(4.2) preserves the nonoscillation of (1.1) if any of the following conditions is satisfied:

  1. (1)

    \(0 \leq\theta\leq\varphi(1)\) and \(a<0\) for \(h=\frac{1}{m}\), \(m>-a\);

  2. (2)

    \(\varphi(-1) \leq\theta\leq1\) and \(a>0\) for \(h=\frac{1}{m}\), \(m>a\).

4.2 Piecewise linear interpolation of θ-methods

For convenience, define the functions \(y_{k}(t)\) on the closed intervals \([k,k+1]\), \(k=0,1,2,\ldots \) , as follows:

$$ y_{k}(t) = \textstyle\begin{cases} x(t), &t\in[k,k+1), \\ \lim_{t\rightarrow k+1^{-}}x(t), &t=k+1, \end{cases} $$

where \(x(t)\) is the exact solution of (1.1). Obviously,

$$ x(k+1)=\frac{1}{1-d} \cdot x\bigl(k+1^{-}\bigr)=\frac{1}{1-d} \cdot y_{k}(k+1). $$

Theorem 4.4

Let \(a\neq0\), \(c>0\), \(1-d>0\), and \(b>\frac{a}{\mathrm{e}^{a}-1}\). For any integer k, we have

  1. (1)

    \(x(k)x(k+1)<0\);

  2. (2)

\(x(t)\) has at most one zero in \([k,k+1]\).

Proof

(1) If \(a\neq0\), \(c>0\), \(1-d>0\), and \(b>\frac{a}{\mathrm {e}^{a}-1}\), then

$$ \frac{\mathrm{e}^{-a}+\frac{b}{a} (\mathrm{e}^{-a}-1)}{1-d-\frac{c}{a} (\mathrm{e}^{-a}-1)}< 0. $$

By (2.2), that is,

$$ x(k+1)=\frac{\mathrm{e}^{-a}+\frac{b}{a} (\mathrm{e}^{-a}-1)}{1-d-\frac{c}{a} (\mathrm{e}^{-a}-1)} \cdot x(k), $$

we have

$$ x(k+1) \cdot x(k)< 0. $$

(2) Let \(t=k+\alpha\), \(\alpha\in[0,1]\). For convenience, define \(A_{k}=x(k)\). It follows from Theorem 2.2 that

$$ y_{k}(k+\alpha)=A_{k}m_{0}( \alpha)+A_{k+1}m_{1}(\alpha). $$
(4.3)

Suppose that \(y_{k}(t)\) has two zeros \(k+\alpha_{1}\) and \(k+\alpha_{2}\) with \(\alpha_{1}\neq\alpha_{2}\), \(\alpha_{1}, \alpha_{2}\in[0,1]\). Then we have

$$\begin{aligned}& y_{k}(k+\alpha_{1})=A_{k}m_{0}( \alpha_{1})+A_{k+1}m_{1}(\alpha_{1})=0, \\& y_{k}(k+\alpha_{2})=A_{k}m_{0}( \alpha_{2})+A_{k+1}m_{1}(\alpha_{2})=0. \end{aligned}$$

It follows from

$$ \det\left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} m_{0}(\alpha_{1}) & m_{1}(\alpha_{1}) \\ m_{0}(\alpha_{2}) & m_{1}(\alpha_{2}) \end{array}\displaystyle \right )=\frac{c}{a}\bigl(\mathrm{e}^{-a\alpha_{2}}-\mathrm{e}^{-a\alpha_{1}}\bigr)\neq0 $$

that \(A_{k}=A_{k+1}=0\), which contradicts part (1), since \(x(k)x(k+1)<0\). Hence, \(y_{k}(t)\) has at most one zero in \([k,k+1]\), which implies that \(x(t)\) has at most one zero in \([k,k+1]\). □

Let \(\bar{x}_{k}(t)\), \(t\in[k,k+1]\), be the linear interpolation of \((x_{k,l})_{l=0,1,\ldots,m}\) given by

$$ \bar{x}_{k}(t_{k,l}+\xi h)=\xi x_{k,l+1}+(1-\xi)x_{k,l}, $$
(4.4)

where \(t_{k,l}=k+lh\), \(l=0,1,\ldots,m-1\), and \(\xi\in[0,1]\). Define

$$ \bar{x}(t)=\bar{x}_{k}(t)\quad \mbox{for } t \in[k,k+1),k=0,1,2,\ldots, $$
(4.5)

which is a piecewise continuous numerical solution of (1.1). The following theorems give the properties of \(\bar{x}(t)\), \(t\geq0\).
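
For concreteness, \(\bar{x}(t)\) can be evaluated directly from the θ-method values; the sketch below does this using the one-step form \(x_{k,l+1}=R(z)x_{k,l}+\frac{b}{a}(R(z)-1)x_{k,0}+\frac{c}{a}(R(z)-1)x_{k+1,0}\) with \(R(z)=1+\frac{z}{1-z\theta}\) (this is how (4.1) is rewritten in (4.6) below). The data are illustrative, and the helper names are ours.

```python
import math

def theta_values(a, b, c, d, x0, theta, m, K):
    """x_{k,l}, k = 0,...,K-1, l = 0,...,m, from the one-step form of (4.1) (see (4.6))."""
    z = -a / m
    R = 1.0 + z / (1.0 - z * theta)
    rows, xk0 = [], x0
    for _ in range(K):
        Rm = R ** m
        xk1 = (Rm + (b / a) * (Rm - 1.0)) / (1.0 - d - (c / a) * (Rm - 1.0)) * xk0
        rows.append([(R ** l + (b / a) * (R ** l - 1.0)) * xk0
                     + (c / a) * (R ** l - 1.0) * xk1 for l in range(m + 1)])
        xk0 = xk1           # note: rows[k][m] = x_{k,m} = (1 - d) x_{k+1,0}
    return rows

def xbar(t, rows, m):
    """Piecewise linear interpolant (4.4)-(4.5): on [k, k+1) it interpolates x_{k,0}, ..., x_{k,m}."""
    k = min(int(t), len(rows) - 1)
    s = (t - k) * m                       # position inside [k, k+1] measured in steps of h = 1/m
    l = min(int(s), m - 1)
    xi = s - l
    return xi * rows[k][l + 1] + (1.0 - xi) * rows[k][l]

# illustrative data (the coefficients of Example 5.2) with θ = 1/2 and m = 20
rows = theta_values(a=1.0, b=2.0 / (math.e - 1.0), c=1.0, d=0.5, x0=1.0, theta=0.5, m=20, K=3)
for t in (0.0, 0.5, 0.999, 1.0, 1.5, 2.25):
    print(f"xbar({t}) = {xbar(t, rows, 20):+.6f}")
```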

Theorem 4.5

Under the conditions of Theorem  4.4, the piecewise linear interpolation function \(\bar{x}(t)\) defined by (4.4)-(4.5) satisfies

  1. (1)

    \(\bar{x}(t)-x(t)=O(h)\) (\(\theta\neq\frac{1}{2}\)), \(\bar {x}(t)-x(t)=O(h^{2})\) (\(\theta=\frac{1}{2}\));

  2. (2)

    \(\bar{x}(t)\) has at most one zero in \([k,k+1]\) for any integer k.

Proof

(1) Obviously, by mathematical induction we can prove that, for any nonnegative integer k,

$$ \bar{x}_{k}(t)-y_{k}(t)=O(h) \quad \biggl(\theta\neq \frac{1}{2}\biggr), \qquad \bar {x}_{k}(t)-y_{k}(t)=O \bigl(h^{2}\bigr)\quad \biggl(\theta=\frac{1}{2}\biggr), $$

which implies

$$ \bar{x}(t)-x(t)=O(h)\quad \biggl(\theta\neq\frac{1}{2}\biggr), \qquad \bar{x}(t)-x(t)=O\bigl(h^{2}\bigr) \quad \biggl(\theta=\frac{1}{2} \biggr). $$

(2) Suppose that \(\bar{x}_{k}(t)\) has two distinct zeros \(t_{1}=t_{k,l_{1}}+\eta_{1}h\) and \(t_{2}=t_{k,l_{2}}+\eta_{2}h\). Then by (4.4) we have

$$\begin{aligned}& m_{11}x_{k,0}+m_{12}x_{k+1,0}=0, \\& m_{21}x_{k,0}+m_{22}x_{k+1,0}=0, \end{aligned}$$

where

$$\begin{aligned}& A_{i}=\biggl(1+\frac{z}{1-z\theta}\biggr)^{l_{i}}\biggl(1-\eta_{i}+\eta_{i}\biggl(1+\frac{z}{1-z\theta}\biggr)\biggr),\quad i=1,2, \\& m_{11}=A_{1}+\frac{b}{a}(A_{1}-1), \qquad m_{12}=\frac{c}{a}(A_{1}-1), \\& m_{21}=A_{2}+\frac{b}{a}(A_{2}-1), \qquad m_{22}=\frac{c}{a}(A_{2}-1). \end{aligned}$$

Hence,

$$ M=\det \left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} m_{11} & m_{12} \\ m_{21} & m_{22} \end{array}\displaystyle \right )=m_{11}m_{22}-m_{12}m_{21}=\frac{c}{a}(A_{2}-A_{1}). $$

In the following, we will prove that \(M\neq0\).

• If \(l_{1}=l_{2}\), \(\eta_{1}\neq\eta_{2}\), then

$$ M=\frac{c}{a}(A_{2}-A_{1})=\frac{c}{a}\biggl(1+\frac{z}{1-z\theta}\biggr)^{l_{1}}(\eta_{2}-\eta_{1})\frac{z}{1-z\theta}\neq0. $$

• Else, if \(l_{1}\neq l_{2}\), \(\eta_{1}=\eta_{2}\), then

$$ M=\frac{c}{a}\biggl(1-\eta_{1}+\eta_{1}\biggl(1+\frac{z}{1-z\theta}\biggr)\biggr) \biggl(\biggl(1+\frac{z}{1-z\theta}\biggr)^{l_{2}}-\biggl(1+\frac{z}{1-z\theta}\biggr)^{l_{1}}\biggr)\neq0. $$

• Else, if \(l_{1}\neq l_{2}\), \(\eta_{1}\neq\eta_{2}\), then, without loss of generality, let \(l_{2}>l_{1}\) and \(M=0\). Then

$$\begin{aligned}& 1+\frac{z}{1-z\theta}< \biggl(1+\frac{z}{1-z\theta}\biggr)^{l_{2}-l_{1}}= \frac{1-\eta _{1}+\eta_{1}(1+\frac{z}{1-z\theta})}{1-\eta_{2}+\eta_{2}(1+\frac{z}{1-z\theta })} \\& \hphantom{1+\frac{z}{1-z\theta}}< 1-\eta_{1}+\eta_{1}\biggl(1+ \frac{z}{1-z\theta}\biggr)< 1+\frac{z}{1-z\theta}\quad (z>0), \\& 1+\frac{z}{1-z\theta}>\biggl(1+\frac{z}{1-z\theta}\biggr)^{l_{2}-l_{1}}= \frac{1-\eta _{1}+\eta_{1}(1+\frac{z}{1-z\theta})}{1-\eta_{2}+\eta_{2}(1+\frac{z}{1-z\theta })} \\& \hphantom{1+\frac{z}{1-z\theta}}>1-\eta_{1}+\eta_{1}\biggl(1+ \frac{z}{1-z\theta}\biggr)>1+\frac{z}{1-z\theta}\quad (z< 0), \end{aligned}$$

and these contradictions lead to \(M\neq0\).

Consequently, \(x_{k,0}=x_{k+1,0}=0\), which implies \(\bar{x}_{k}(t)\equiv0\); by part (1) of the theorem and Theorem 4.4(1), this is impossible for sufficiently small h. Hence, \(\bar {x}_{k}(t)\) has at most one zero in \([k,k+1]\), which implies that \(\bar{x}(t)\) has at most one zero in \([k,k+1]\) for any integer k. □

Theorem 4.6

Assume that the conditions of Theorem 4.4 hold and that \(x(t)=0\) for some \(t\in(k,k+1)\). Then there is \(h_{0}>0\) such that, for \(h<h_{0}\), there is a unique \(\bar{t}\in(k,k+1)\) such that

  1. (1)

    \(\bar{x}(\bar{t})=0\); moreover, \(t-\bar{t}=O(h)\) (\(\theta\neq \frac{1}{2}\)), \(t-\bar{t}=O(h^{2})\) (\(\theta=\frac{1}{2}\));

  2. (2)

\(\bar{x}(t)\) intersects the axis of abscissas at \(\bar{t}\), that is, \(\bar{x}(\bar{t})=0\), and

    1. (i)

      \(\bar{x}(k+(l+1)h)\bar{x}(k+(l-1)h)<0\) for \(\bar{t}=k+lh\),

    2. (ii)

      \(\bar{x}(k+(l+1)h)\bar{x}(k+lh)<0\) for \(\bar{t}=k+(l+\mu)h\) (\(0<\mu<1\)).

Proof

(1) Assume that \(x(t)=0\), \(t\in(k,k+1)\). Then it follows from Theorem 4.4 that

$$ x(k)\cdot x(k+1)< 0\quad \mbox{for } t=k+\alpha,0< \alpha< 1. $$

Hence, by Theorem 4.5 there is \(h_{0}\) such that, for \(h< h_{0}\),

$$ \bar{x}(k)\cdot\bar{x}(k+1)< 0\quad \mbox{for } t=k+\alpha,0< \alpha< 1. $$

It is easy to see from Theorem 4.5 that there is a unique \(\bar {t}\in(k,k+1)\) such that \(\bar{x}(\bar{t})=0\).

Assume that \(t\neq\bar{t}\). By Theorem 4.5 we obtain that

$$\begin{aligned}& x(t)-x(\bar{t})=x(t)-\bar{x}(\bar{t})+\bigl(\bar{x}(\bar{t})-x(\bar{t})\bigr) =x(t)-\bar{x}(\bar{t})+O(h) \\& \hphantom{x(t)-x(\bar{t})}= O(h)\quad \biggl(\theta\neq\frac{1}{2}\biggr), \\& x(t)-x(\bar{t})=x(t)-\bar{x}(\bar{t})+\bigl(\bar{x}(\bar{t})-x(\bar{t})\bigr) =x(t)-\bar{x}(\bar{t})+O\bigl(h^{2}\bigr) \\& \hphantom{x(t)-x(\bar{t})}= O\bigl(h^{2}\bigr)\quad \biggl(\theta= \frac{1}{2}\biggr). \end{aligned}$$

On the other hand,

$$ x(t)-x(\bar{t})=x'(\xi) (t-\bar{t}), $$

where ξ lies between t and \(\bar{t}\). Moreover, \(x'(\xi)\neq0\); otherwise, \(x(\bar{t})=x(t)=0\), so that \(x(t)\) would have two zeros in \([k,k+1]\), which contradicts Theorem 4.4(2). Hence,

$$\begin{aligned}& |t-\bar{t}|=O(h)\quad \biggl(\theta\neq\frac{1}{2}\biggr), \\& |t-\bar{t}|=O\bigl(h^{2}\bigr)\quad \biggl(\theta=\frac{1}{2} \biggr). \end{aligned}$$

(2) • Assume that \(\bar{x}(k+lh)=0\), \(1\leq l\leq m-1\). Obviously, the first equation in (4.1) can be rewritten as

$$\begin{aligned} x_{k,l+1} =&R(z)x_{k,l}+\frac{b}{a} \bigl(R(z)-1\bigr)x_{k,0}+\frac{c}{a} \bigl(R(z)-1 \bigr)x_{k+1,0} \\ =&\frac{b}{a}\bigl(R(z)-1\bigr)x_{k,0}+\frac{c}{a} \bigl(R(z)-1\bigr)x_{k+1,0}, \end{aligned}$$
(4.6)

where \(R(z)=1+\frac{z}{1-z\theta}\), \(z=-ah\). Similarly, we can obtain

$$ R(z)x_{k,l-1}=-\biggl(\frac{b}{a}\bigl(R(z)-1 \bigr)x_{k,0}+\frac{c}{a}\bigl(R(z)-1\bigr)x_{k+1,0} \biggr). $$
(4.7)

It follows from (4.6) and (4.7) that

$$ R(z)x_{k,l-1}x_{k,l+1}=-\biggl(\frac{b}{a}\bigl(R(z)-1 \bigr)x_{k,0}+\frac{c}{a} \bigl(R(z)-1\bigr)x_{k+1,0} \biggr)^{2}\leq0. $$

Hence, since \(R(z)>0\) (which holds for \(m>|a|\)), by Theorem 4.5(2) we immediately have \(x_{k,l-1}x_{k,l+1}<0\), that is, \(\bar{x}(k+(l+1)h)\bar{x}(k+(l-1)h)<0\).

• Assume that \(\bar{x}(k+(l+\mu)h)=0\), \(0<\mu<1\), \(0\leq l< m\). It follows from Theorem 4.5(2) and \(\bar{x}(k+(l+\mu)h)=\mu x_{k,l+1}+(1-\mu)x_{k,l}=0\) that

$$\bar{x}\bigl(k+(l+1)h\bigr)\bar{x}(k+lh)< 0. $$

 □

5 Numerical experiments

Example 5.1

Consider the following equation:

$$ \textstyle\begin{cases} x'(t)+x(t)+\frac{1}{\mathrm{e}-1}\cdot x([t])+c x([t+1])=0,& t\geq0, t\neq k, k\in\mathbb{Z}^{+}, \\ \Delta x(t)=d\cdot x(t),& t=k, k\in\mathbb{Z}^{+}, \\ x(0)=x_{0}, \end{cases} $$
(5.1)

where \(x_{0}>0\), \(c>0\), and \(1-d>0\). Solving this equation, we get

$$x(t) = \textstyle\begin{cases} (\mathrm{e}^{-t}+\frac{1}{\mathrm{e}-1}\cdot(\mathrm{e}^{-t}-1))x_{0}, & t\in[0,1), \\ 0, & t\geq1. \end{cases} $$

Obviously, the exact solution \(x(t)\) of (5.1) is oscillatory, which can also be deduced from Theorem 2.5. By Theorem 3.6(2), the Runge-Kutta method (3.1)-(3.2) for (5.1) is oscillatory since j is odd for \(h=\frac{1}{m}<\min\{-{\varsigma},-{\delta_{1}}\}\) with integer m. All the numerical methods in the last line of Table 1 preserve the oscillation of (5.1), and by Theorem 4.2(2), the θ-method (4.1)-(4.2) for (5.1) is oscillatory for \(0 \leq\theta\leq\frac{1}{2}\) and \(h=\frac{1}{m}\) with integer \(m>1\).
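
A minimal run for (5.1), assuming the explicit Euler method (\(\theta=0\); its stability function \(1+z\) is the \((1,0)\)-Padé approximation, so \(j=1\) is odd) and the illustrative choices \(c=1\), \(d=\frac{1}{2}\), \(x_{0}=1\), shows the sign alternation of the node values \(x_{k,0}\):

```python
import math

# coefficients of (5.1) with the illustrative choices c = 1, d = 1/2, x0 = 1
a, c, d, x0 = 1.0, 1.0, 0.5, 1.0
b = 1.0 / (math.e - 1.0)

m = 10
h = 1.0 / m
R = 1.0 - a * h                 # explicit Euler stability function R(z) = 1 + z with z = -a h
Rm = R ** m

# node values x_{k,0} follow from (3.5); for this data the factor below is negative, so they alternate in sign
factor = (Rm + (b / a) * (Rm - 1.0)) / (1.0 - d - (c / a) * (Rm - 1.0))
x = x0
for k in range(6):
    print(f"x_{{{k},0}} = {x:+.6e}")
    x *= factor
```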

Table 1 Preservation of oscillation for six methods

On the other hand, the Runge-Kutta method (3.1)-(3.2) for (5.1) is nonoscillatory whenever j is an even positive integer, no matter how small the stepsize h is. In fact, all the numerical solutions of the Runge-Kutta method (3.1)-(3.2) for (5.1) are positive when j is even and \(h=\frac{1}{m}<-\delta_{1}\) with integer m, which can be proved similarly to the proof of Theorem 3.9. Hence, all the numerical methods in the last line of Table 2 for (5.1) are nonoscillatory for \(h=\frac{1}{m} <-\delta_{1}\) with integer m.

Table 2 Preservation of nonoscillation for six methods

Example 5.2

Consider the following example from [6]:

$$ \textstyle\begin{cases} x'(t)+x(t)+\frac{2}{\mathrm{e}-1}\cdot x([t])+x([t+1])=0,& t\geq 0, t\neq k, k\in\mathbb{Z}^{+}, \\ \Delta x(t)=\frac{1}{2} \cdot x(t),& t=k, k\in\mathbb{Z}^{+}, \\ x(0)=x_{0}. \end{cases} $$
(5.2)

By Theorem 2.5 we obtain that the exact solution \(x(t)\) of (5.2) is oscillatory. By Theorem 3.6(2) the Runge-Kutta method (3.1)-(3.2) for (5.2) is oscillatory since j is odd for \(h=\frac{1}{m}<\min\{-{\varsigma},-{\delta_{1}}\}\) with integer m. All the numerical methods in the last line of Table 1 preserve the oscillation of (5.2).

By Theorem 4.2(2) the θ-method (4.1)-(4.2) for (5.2) is oscillatory for \(0 \leq\theta\leq\frac{1}{2}\) and \(h=\frac{1}{m}\) with integer \(m>1\). Tables 3 and 4 roughly illustrate that the zeros of the piecewise linear interpolation of the θ-methods converge to the corresponding zeros of the exact solution with order of accuracy 1 (\(\theta\neq\frac{1}{2}\)) and 2 (\(\theta=\frac{1}{2}\)), which is in agreement with Theorem 4.6.

Table 3 The errors of the first zero between numerical solutions and exact solution of (5.2)
Table 4 The errors of the third zero between numerical solutions and exact solution of (5.2)
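
The behaviour reported in Tables 3 and 4 can be reproduced in outline: compute the first zero of the exact solution of (5.2) in closed form from Theorem 2.2, compute the first zero of the piecewise linear interpolant of the θ-method, and watch how the error decreases as h is halved. The sketch below takes \(x_{0}=1\) as an illustrative initial value; successive error ratios close to 2 and 4 correspond to orders 1 and 2.

```python
import math

a, b, c, d, x0 = 1.0, 2.0 / (math.e - 1.0), 1.0, 0.5, 1.0   # coefficients of (5.2); x0 = 1 is illustrative

# first zero of the exact solution on [0, 1), from x(t) = m0(t) x(0) + m1(t) x(1) (Theorem 2.2)
x1 = (math.exp(-a) + (b / a) * (math.exp(-a) - 1.0)) / (1.0 - d - (c / a) * (math.exp(-a) - 1.0)) * x0
t_exact = -math.log(((b / a) * x0 + (c / a) * x1) / ((1.0 + b / a) * x0 + (c / a) * x1)) / a

def numerical_first_zero(theta, m):
    """First zero of the piecewise linear interpolant (4.4)-(4.5) of the θ-method on [0, 1]."""
    z = -a / m
    R = 1.0 + z / (1.0 - z * theta)
    Rm = R ** m
    xk1 = (Rm + (b / a) * (Rm - 1.0)) / (1.0 - d - (c / a) * (Rm - 1.0)) * x0
    xs = [(R ** l + (b / a) * (R ** l - 1.0)) * x0 + (c / a) * (R ** l - 1.0) * xk1
          for l in range(m + 1)]                  # xs[m] = x_{0,m} = (1 - d) x_{1,0}
    for n in range(m):
        if xs[n] * xs[n + 1] <= 0.0:
            xi = xs[n] / (xs[n] - xs[n + 1])      # zero of the linear piece (4.4)
            return (n + xi) / m
    return None

for theta in (0.0, 0.5):
    errs = [abs(numerical_first_zero(theta, m) - t_exact) for m in (8, 16, 32, 64)]
    err_str = ", ".join(f"{e:.2e}" for e in errs)
    ratio_str = ", ".join(f"{errs[i] / errs[i + 1]:.2f}" for i in range(len(errs) - 1))
    print(f"theta = {theta}: errors = [{err_str}]; successive ratios = [{ratio_str}]")
```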

References

  1. Cooke, KL, Wiener, J: Retarded differential equations with piecewise constant delays. J. Math. Anal. Appl. 99, 265-297 (1984)


  2. Wiener, J: Generalized Solutions of Functional Differential Equations. World Scientific, Singapore (1994)


  3. Ding, XH, Wu, KN, Liu, MZ: The Euler scheme and its convergence for impulsive delay differential equations. Appl. Math. Comput. 216, 1566-1570 (2010)


  4. Wu, KN, Ding, XH: Convergence and stability of Euler method for impulsive stochastic delay differential equations. Appl. Math. Comput. 229, 151-158 (2014)


  5. Wu, KN, Ding, XH: Impulsive stabilization of delay difference equations and its application in Nicholson’s blowflies model. Adv. Differ. Equ. 2012, 88 (2012). doi:10.1186/1687-1847-2012-88


  6. Bereketoglu, H, Seyhan, G, Ogun, A: Advanced impulsive differential equations with piecewise constant arguments. Math. Model. Anal. 15, 175-187 (2010)


  7. Zhang, GL, Song, MH: Asymptotical stability of Runge-Kutta methods for advanced linear impulsive differential equations with piecewise constant arguments. Appl. Math. Comput. 259, 831-837 (2015)


  8. Butcher, JC: The Numerical Analysis of Ordinary Differential Equations: Runge-Kutta and General Linear Methods. Wiley, New York (1987)


  9. Dekker, K, Verwer, JG: Stability of Runge-Kutta Methods for Stiff Nonlinear Differential Equations. North-Holland, Amsterdam (1984)


  10. Hairer, E, Nørsett, SP, Wanner, G: Solving Ordinary Differential Equations II: Stiff and Differential Algebraic Problems. Springer, New York (1993)


  11. Wanner, G, Hairer, E, Nørsett, SP: Order stars and stability theorems. BIT Numer. Math. 18, 475-489 (1978)


  12. Song, MH, Yang, ZW, Liu, MZ: Stability of θ-methods for advanced differential equations with piecewise continuous arguments. Comput. Math. Appl. 49, 1295-1301 (2005)


  13. Liu, MZ, Song, MH, Yang, ZW: Stability of Runge-Kutta methods in the numerical solution of equation \(u'(t) = au(t) + a_{0} u([t])\). J. Comput. Appl. Math. 166, 361-370 (2004)



Acknowledgements

I would like to thank the referees for their helpful comments and suggestions. This work is supported by the Natural Science Foundation of Hebei Province (A2015501130), the Research Project of Higher School Science and Technology in Hebei Province (ZD2015211), the Fundamental Research Funds for the Central Universities (N152304007), and the Youth Science Foundation of Heilongjiang Province (QC2016001).

Author information


Correspondence to Gui-Lai Zhang.


Competing interests

The author has declared that no competing interests exist.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
