
Theory and Modern Applications

Flocking of discrete Cucker–Smale model with packet loss


In this paper, we consider the flocking problem for a discrete Cucker–Smale model with packet loss. To describe the packet-loss process, a more general condition covering four different cases is adopted. A sufficient condition for achieving flocking under this condition is then obtained.

1 Introduction

Flocking of birds, schooling of fish, and swarming of bacteria are common phenomena observed in nature. The mechanism behind flocking behavior is fascinating and has attracted many scientists from different research communities such as mathematics, physics, and computer science. In a flocking group, many agents organize into a coordinated motion by using information from their neighbors together with simple rules [1]. The study of flocking behavior has motivated many applications in engineering, such as the design of sensor networks, the formation flight of unmanned aerial vehicles (UAVs) [2], and the team control of multi-robot systems [3].

Research on flocking has continued for decades. In the 1980s, Reynolds proposed the classical flocking model [4]. Then, in 1995, T. Vicsek et al. [5] made a breakthrough and established the famous Vicsek model to study the flocking behavior of self-driven particles by computer simulation. A theoretical analysis of the Vicsek model for achieving flocking was given by A. Jadbabaie, J. Lin, and A. S. Morse [6]. In 2007, F. Cucker and S. Smale established a new kind of model, the so-called Cucker–Smale (C–S, for short) model [7, 8], on the basis of the Vicsek model. This model reflects many more realistic factors. In this model, each agent determines its next velocity by calculating a weighted average of the velocity differences with its neighbors, where the weights are determined by the distances between the agent and its neighbors. The study of the C–S model has received much attention, and the model has been generalized by considering more factors such as time delay [9, 10] and noise [11, 12]. Besides these factors, the connection structure among the agents is yet another important factor affecting the dynamics of the C–S system. For example, in [13, 14] the authors considered the C–S model with switching topology; in [15, 16], the C–S model with hierarchies; and in [17, 18], C–S models with or without a leader.

As is well known, the information structure is important in the study of the collective dynamics of a group of agents. In the original C–S model, it is assumed that each agent can sense the distance between itself and every other agent; based on this assumption, the weights can be determined. However, in a practical environment this assumption may fail. For example, flying birds in a flock may encounter a natural enemy, in which case the information transmission between them is broken. We call this phenomenon information packet loss. In [16, 19], the authors investigated the flocking behavior when the connections fail to some degree. A binary-valued random process satisfying some additional condition is usually applied to describe the uncertainty in the transmission of information; for example, a Bernoulli random process was used in [19], while a random graph model was used in [16]. Thus studying the information structure in the Cucker–Smale model is an important and valuable topic from both the theoretical and the practical point of view.

In a recent paper by A. Cetinkaya, H. Ishii, and T. Hayakawa [20], an interesting method for describing packet loss was proposed and used to study cyber-security issues in the networked control of linear dynamical systems. This new description of packet loss includes many of the aforementioned models as special cases; the details are given in Sect. 2. Motivated by their work, we investigate the flocking behavior of the discrete Cucker–Smale model with a packet-loss process described by a condition similar to that of A. Cetinkaya et al.

The contribution of this paper is a sufficient condition that ensures flocking for the Cucker–Smale model under this more general information packet loss. An outline of the paper is as follows. Section 2 introduces the discrete Cucker–Smale model with packet loss, while Sect. 3 proposes a condition describing the information packet-loss process and gives the main results. Finally, we give some simulation examples to verify the conclusions.

2 Problem formulation

We consider the following discrete-time Cucker–Smale model:

$$\begin{aligned} \textstyle\begin{cases} x_{i}(k+1)= x_{i}(k)+hv_{i}(k), \\ v_{i}(k+1)= v_{i}(k)+h\sum_{j=1}^{n}a_{ij}(k) ( v_{j}(k)-v _{i}(k) ). \end{cases}\displaystyle \end{aligned} \tag{1}$$

Here \(x_{i}(k)\in \mathbb{R}^{3}\) and \(v_{i}(k)\in \mathbb{R}^{3}\), with \(i=1,\ldots,n\) and \(k\in \mathbb{N}_{0}\), are the position and velocity of agent i at time kh, respectively; \(h>0\) is the time step, and \(a_{ij}(k)\) is the weight function defined as follows:

$$ a_{ij}(k)=\xi _{ij}(k)\cdot \frac{K}{ (1+ \Vert x_{i}(k)-x_{j}(k) \Vert )^{\alpha }}, \tag{2} $$

where α is the decay rate and K is the coupling strength between agents i and j. For each k, \(\xi _{ij}(k)\) is a random variable taking values 0 and 1; it characterizes the result of the information transmission attempt at each step. When \(\xi _{ij}(k)=1\), the information transmission attempt between agents i and j is successful at time kh, while \(\xi _{ij}(k)=0\) indicates that agent i fails to receive the information from agent j at time kh; in other words, a packet loss takes place. Since the topology of the system is an undirected complete graph, we have \(\xi _{ij}(k)=\xi _{ji}(k)\).
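To make the update concrete, here is a minimal sketch of one step of system (1) with weights (2) in Python with NumPy. The function name `cs_step` and the array layout (one row per agent) are illustrative choices, not part of the paper:

```python
import numpy as np

def cs_step(x, v, h, K, alpha, xi):
    """One step of the discrete Cucker-Smale model with packet loss.

    x, v : (n, 3) arrays, row i holding x_i(k) and v_i(k)
    xi   : (n, n) symmetric 0/1 array; xi[i, j] = 1 iff the
           transmission between agents i and j succeeds at step k
    """
    # pairwise distances ||x_i(k) - x_j(k)||
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    a = xi * K / (1.0 + d) ** alpha          # weights a_ij(k)
    np.fill_diagonal(a, 0.0)
    # v_i(k+1) = v_i(k) + h * sum_j a_ij(k) * (v_j(k) - v_i(k))
    v_next = v + h * (a @ v - a.sum(axis=1)[:, None] * v)
    return x + h * v, v_next
```

Because the weights are symmetric, the mean velocity is preserved by the update, and when all channels succeed the disagreement among the velocities contracts for a sufficiently small step h.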

When \(\xi _{ij}(k)=1\), in other words, when no packet loss occurs, according to [7, 8], we have the following definition.

Definition 1

System (1) with connectivity coefficients (2) achieves flocking when all agents’ velocities converge to a common one and the distance between each pair of agents converges to a constant, i.e., for all \(i,j=1,\ldots,n\), we have \(\|v_{i}(k)-v_{j}(k)\|\rightarrow 0\) and \(\|x_{i}(k)-x_{j}(k)\|\rightarrow x_{\infty }\) for some constant \(x_{\infty }\in \mathbb{R}\) as \(k\rightarrow \infty \).

Let the adjacency matrix be \(A_{k}= (a_{ij}(k) )_{n\times n}\). The Laplacian \(L_{k}\) of \(A_{k}\) is defined as \(L_{k}=D_{k}-A_{k}\), where \(D_{k}=\operatorname{diag}(d_{1},d_{2},\ldots,d_{n})\) and \(d_{i}=\sum_{j=1}^{n}a_{ij}\). Let \(\lambda _{i}\) (\(i=1,\ldots,n\)) denote the eigenvalues of \(L_{k}\), ordered so that \(0=\lambda _{1}\leq \lambda _{2}\leq \cdots \leq \lambda _{n}\). The second eigenvalue \(\lambda _{2}\) is called the Fiedler number of \(A_{k}\) and is denoted by \(\phi _{k}\).
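The Fiedler number is easy to compute directly from the weight matrix; a small sketch (the helper name `fiedler_number` is ours) that also illustrates the eigenvalue ordering on the complete graph:

```python
import numpy as np

def fiedler_number(A):
    """Second-smallest eigenvalue of the Laplacian L = D - A of a
    nonnegative symmetric weight matrix A (its Fiedler number)."""
    L = np.diag(A.sum(axis=1)) - A
    eig = np.sort(np.linalg.eigvalsh(L))   # ascending: 0 = l1 <= l2 <= ...
    return eig[1]

# complete graph on n = 4 nodes with unit weights:
# the Laplacian spectrum is {0, n, n, n}, so the Fiedler number is n = 4
A = np.ones((4, 4)) - np.eye(4)
```

On this example the lower bound of Lemma 3 below, \(\phi \geq na^{*}\), holds with equality.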

In the light of [7], let Δ be the diagonal of \(\mathbb{R}^{3n}\), i.e., \(\Delta = \{(e,e,\ldots,e)|e\in \mathbb{R}^{3} \}\), and let \(\Delta ^{\perp }\) be the orthogonal complement of Δ in \(\mathbb{R}^{3n}\). Then we consider positions and velocities in the quotient spaces \(X\triangleq \mathbb{R}^{3n}/ \Delta \backsimeq \Delta ^{\perp }\) and \(V\triangleq \mathbb{R}^{3n}/ \Delta \backsimeq \Delta ^{\perp }\), respectively, and system (1) is rewritten in the following compact form:

$$ \textstyle\begin{cases} x(k+1)=x(k)+hv(k), \\ v(k+1)=(Id-hL_{k})v(k), \end{cases}\displaystyle \tag{3} $$

where \(x=(x_{1},\dots, x_{n})^{T}\), \(v=(v_{1},\dots, v_{n})^{T}\), and Id is the identity matrix of appropriate dimension.

Taking the randomness into consideration, we say that system (3) achieves flocking almost surely when the following two conditions are fulfilled almost surely: \(v(k)\rightarrow 0\) and \(x(k)\rightarrow x_{\infty }\) as \(k\rightarrow \infty \) for some \(x_{\infty } \in X\).

3 Main results

In recent works concerning flocking problems with packet losses, a random process \(\{\xi _{ij}(k)\}\) is used to describe the result of information transmission attempts. The random process \(\{\xi _{ij}(k) \}\) is commonly assumed to be a collection of independent and identically distributed Bernoulli random variables [16, 19]. More recently, in a study of cyber-security issues in the networked control of linear dynamical systems [20], A. Cetinkaya et al. proposed a new assumption on the random process \(\{\xi (k)\}\) describing packet loss:

$$ \sum_{k=1}^{\infty }\mathbb{P} \Biggl[\sum_{\theta =0}^{k-1} \bigl(1- \xi (\theta) \bigr)>\rho ^{*}k \Biggr]< \infty, \tag{4} $$

for some constant \(\rho ^{*}\in [0,1]\). This condition on \(\{\xi (k)\}\) generalizes the earlier, simpler assumptions on \(\{\xi (k)\}\); it includes the Bernoulli random process as a special case. Moreover, it covers further cases such as random or malicious information packet loss, and even a combination of the two, which may be encountered in reality. It is worth noting that in [20] this condition concerns only a single channel. Motivated by this, we propose the following assumption for a complex network composed of multiple agents.

Assumption 1

There exists a scalar \(\rho \in [0,1)\) such that

$$ \sum_{k=1}^{\infty }k \mathbb{P} \Biggl[\sum_{\theta =0}^{k-1} \biggl(1- \prod _{i< j}\xi _{ij}(\theta) \biggr)>\rho k \Biggr]< \infty. \tag{5} $$

As in [20], Assumption 1 proposed here reveals the probabilistic characterization of the packet-loss process in our setting; it includes (i) random packet loss, (ii) malicious packet loss, (iii) a dependent combination of random and malicious packet loss, and (iv) an independent combination of random and malicious packet loss as four special cases. A detailed proof will be given in the Appendix.

Remark 1

Inequality (4) characterizes packet losses probabilistically by counting the number of observed packet losses. It indicates that the probability that the packet-loss rate exceeds \(\rho ^{*}\) tends to zero. In other words, when the time horizon is long enough, the rate of packet loss has an upper bound with probability one.
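For intuition, a hedged Monte Carlo sketch: with i.i.d. Bernoulli losses of probability p, the event in (4) with \(\rho ^{*}>p\) becomes exponentially unlikely as k grows, which is why the series in (4) converges in the Bernoulli case. The parameter values p = 0.3 and \(\rho ^{*}=0.5\) below are illustrative only:

```python
import numpy as np

def loss_rate_exceedance(p, rho_star, k, trials, rng):
    """Monte Carlo estimate of P[ sum_{theta<k} (1 - xi(theta)) > rho* k ]
    for i.i.d. Bernoulli transmissions with loss probability p."""
    losses = rng.random((trials, k)) < p          # True = packet lost
    return np.mean(losses.sum(axis=1) > rho_star * k)

rng = np.random.default_rng(1)
# with p = 0.3 < rho* = 0.5 the estimated probability decays rapidly in k
probs = [loss_rate_exceedance(0.3, 0.5, k, 20000, rng) for k in (10, 50, 200)]
```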

Remark 2

Similarly to inequality (4), inequality (5) implies that, with probability one, the fraction of time steps at which all channels are simultaneously free of packet loss is eventually at least \(1-\rho \). This will be used to ensure that the system achieves flocking.

Next, we introduce four lemmas, which are useful for verifying the main theorem.

Lemma 1


For all \(x\in X\), we have \(\max_{i\neq j}\|x_{i}-x_{j}\|\leq \sqrt{2}\|x\|\).

Lemma 2


For all \(x\in X\), we have \(\|L_{k}\|\leq 2(n-1)\sqrt{3n}K\). Particularly, when \(h<\frac{1}{2(n-1)\sqrt{3n}K}\), we have \(h\|L_{k}\|\in [0,1)\).

Lemma 3


Let A be an \(n\times n\) nonnegative, symmetric matrix, let \(L=D-A\) be the Laplacian of A and ϕ the Fiedler number of L. Then \(\phi \geq na^{*}\), where \(a^{*}=\min_{i\neq j}a_{ij}\).

Lemma 4


Assume \(c_{1},c_{2}>0\) and \(s>q>0\). Then the equation \(f(z)=z^{s}-c_{1}z ^{q}-c_{2}=0\) has a unique positive root \(z_{*}\). In addition, \(z_{*}\leq \max \{(2c_{1})^{\frac{1}{s-q}},(2c_{2})^{\frac{1}{s}} \}\) and \(f(z)\leq 0\) for \(0\leq z\leq z_{*}\).
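Lemma 4 is easy to check numerically; the following sketch finds the positive root by bisection for illustrative values of \(c_{1}, c_{2}, s, q\) (the values themselves are arbitrary, not from the paper):

```python
def positive_root(c1, c2, s, q, tol=1e-12):
    """Bisection for the unique positive root z* of
    f(z) = z**s - c1*z**q - c2, with c1, c2 > 0 and s > q > 0."""
    f = lambda z: z**s - c1 * z**q - c2
    lo, hi = 0.0, 1.0
    while f(hi) < 0:                 # f(z) -> +inf, so a sign change exists
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

c1, c2, s, q = 0.8, 2.0, 1.0, 0.5       # illustrative constants
z_star = positive_root(c1, c2, s, q)
bound = max((2 * c1) ** (1 / (s - q)), (2 * c2) ** (1 / s))
```

The computed root satisfies the bound of Lemma 4, and f is nonpositive on \([0,z_{*}]\).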

Then, the main theorem of the paper is given below.

Theorem 1

Assume that the random processes \(\{\xi _{ij}(k) \}_{k\in \mathbb{N}_{0}}\) take values 0 and 1 and satisfy Assumption 1. If \(h<\frac{1}{2(n-1)\sqrt{3n}K}\) and one of the following three conditions holds:

  1. (i)

    \(\alpha <1\);

  2. (ii)

    \(\alpha =1\) and \(\sqrt{2}\|v(0)\|< K(1-\rho)\);

  3. (iii)

    \(\alpha >1\) and

$$ \biggl(\frac{1}{c_{1}} \biggr)^{\frac{1}{\alpha -1}} \biggl[ \biggl( \frac{1}{ \alpha } \biggr)^{\frac{1}{\alpha -1}}- \biggl(\frac{1}{\alpha } \biggr)^{\frac{ \alpha }{\alpha -1}} \biggr]-c_{2}>\sqrt{2}h \bigl\Vert v(0) \bigr\Vert , \tag{6} $$

where \(c_{1}=\frac{\sqrt{2}\|v(0)\|}{nK(1-\rho)}\) and \(c_{2}=1+ \sqrt{2}\|x(0)\|+\sqrt{2}h\theta _{1}\|v(0)\|\),

then system (3) with weight functions (2) achieves flocking almost surely. More precisely, \(v(k)\rightarrow 0\) a.s. and there exists \(x_{\infty }\in X\) such that \(x(k)\rightarrow x_{\infty }\) a.s. when \(k\rightarrow \infty \).

Before proving Theorem 1, we present two propositions.

Proposition 1

Assume \(\rho \in [0,1)\) and let the random process \(\{\xi _{ij}(k, \omega) \}\), taking values 0 and 1, satisfy Assumption 1. Then there exists an integer-valued random variable \(\theta _{0}(\omega)\), independent of k, such that for all \(k\geq \theta _{0}\), we have \(\sum_{\theta =0}^{k-1}\prod_{i< j}\xi _{ij}(\theta)\geq (1-\rho)k\) with probability one. Moreover, \(\mathbb{E}\theta _{0}<+\infty \).


Proof

From Assumption 1, there exists \(\rho \in [0,1)\) such that \(\sum_{k=1}^{\infty }k\mathbb{P} [\sum_{\theta =0} ^{k-1} (1-\prod_{i< j}\xi _{ij}(\theta) )>\rho k ]< \infty\). Then we have

$$ \sum_{k=1}^{\infty }\mathbb{P} \Biggl[\sum _{\theta =0}^{k-1} \biggl(1-\prod _{i< j}\xi _{ij}(\theta) \biggr)>\rho k \Biggr] \leqslant \sum_{k=1}^{\infty }k\mathbb{P} \Biggl[ \sum_{\theta =0} ^{k-1} \biggl(1-\prod _{i< j}\xi _{ij}(\theta) \biggr)>\rho k \Biggr]< \infty. $$

It follows from the Borel–Cantelli lemma that

$$ \mathbb{P} \Biggl[\limsup_{k\rightarrow \infty } \Biggl[\sum _{\theta =0}^{k-1} \biggl(1-\prod _{i< j}\xi _{ij}(\theta) \biggr)>\rho k \Biggr] \Biggr]=\mathbb{P} \Biggl[\bigcap_{n=1}^{\infty } \bigcup_{k=n}^{\infty }A_{k}^{C} \Biggr]=0, $$

where \(A_{k}(\omega)= \{\omega:\sum_{\theta =0}^{k-1} \prod_{i< j}\xi _{ij}(\theta)\geq (1-\rho)k \}\). Then, we can deduce that

$$\begin{aligned} \mathbb{P} \Bigl[\liminf_{k\rightarrow \infty } A_{k} \Bigr] =& \mathbb{P} \Biggl[\bigcup_{n=1}^{\infty } \bigcap_{k=n}^{ \infty } A_{k} \Biggr] = \mathbb{P} \Biggl[\bigcup_{n=1}^{\infty } \Biggl( \bigcup_{k=n}^{\infty } A_{k}^{C} \Biggr)^{C} \Biggr] \\ =&\mathbb{P} \Biggl[ \Biggl(\bigcap_{n=1}^{\infty } \bigcup_{k=n}^{\infty } A_{k}^{C} \Biggr)^{C} \Biggr] =1-\mathbb{P} \Biggl[\bigcap _{n=1}^{\infty }\bigcup_{k=n}^{\infty } A_{k}^{C} \Biggr] =1. \end{aligned}$$

Let \(B_{n}=\bigcap_{k=n}^{\infty }A_{k}\), \(\varOmega _{0}= \bigcup_{n=1}^{\infty }B_{n}\), and define \(\theta _{0}(\omega)=n \Longleftrightarrow \omega \in B_{n}-B_{n-1}\), with \(B_{0}= \emptyset \). Obviously, for all \(\omega \in \varOmega _{0}\), we have \(\omega \in A_{k}\) whenever \(k\geq \theta _{0}(\omega)\). Next, let us prove \(\mathbb{E}\theta _{0}<+\infty \). It follows from the definition of \(\theta _{0}(\omega)\) that

$$\begin{aligned} \{\omega:\theta _{0}>n \} =&\bigcup _{k=n}^{\infty } \{\omega:\theta _{0}=k+1 \}= \bigcup_{k=n} ^{\infty } \{B_{k+1}-B_{k} \}=\varOmega _{0}-B_{n} \\ =&\varOmega _{0}\cap B_{n}^{C} \subset B_{n}^{C}= \bigcup _{k=n}^{\infty }A_{k}^{C} =\bigcup_{k=n}^{\infty } \Biggl\{ \omega:\sum _{\theta =0} ^{k-1}\prod _{i< j}\xi _{ij} (\theta)< (1-\rho)k \Biggr\} . \end{aligned}$$

Thus we have

$$ \mathbb{P} [\theta _{0}>n ]\leq \sum_{k=n}^{\infty } \mathbb{P} \Biggl[\sum_{\theta =0}^{k-1}\prod _{i< j}\xi _{ij}(\theta)< (1- \rho)k \Biggr]. $$

Summing both sides of the above inequality over n from 1 to infinity, we obtain

$$\begin{aligned} \sum_{n=1}^{\infty }\mathbb{P} [\theta _{0}>n ] \leq& \sum_{n=1}^{\infty }\sum _{k=n}^{\infty }\mathbb{P} \Biggl[\sum _{\theta =0}^{k-1}\prod_{i< j}\xi _{ij}(\theta)< (1- \rho)k \Biggr] \\ =&\sum_{k=1}^{\infty }k\cdot \mathbb{P} \Biggl[ \sum_{ \theta =0}^{k-1}\prod _{i< j}\xi _{ij}(\theta)< (1-\rho)k \Biggr] \\ =&\sum_{k=1}^{\infty }k\cdot \mathbb{P} \Biggl[ \sum_{ \theta =0}^{k-1} \biggl(1-\prod _{i< j}\xi _{ij}(\theta) \biggr)> \rho k \Biggr] < + \infty. \end{aligned}$$

Thus \(\mathbb{E}\theta _{0}\leq \sum_{n=1}^{\infty }\mathbb{P} [\theta _{0}>n ]<+\infty \). □

Proposition 2

For system (3) with weight functions (2), if \(h<\frac{1}{2(n-1)\sqrt{3n}K}\), \(\rho \in [0,1)\) and the random process \(\{\xi _{ij}(k)\}_{k\in \mathbb{N}_{0}}\) satisfies Assumption 1, then there exists a random variable \(F(k)\) such that \(F(k)\in (0,1)\) and

$$ \bigl\Vert v(k) \bigr\Vert \leq F(k)^{\sum _{\theta =0}^{k-1}\prod _{i< j}\xi _{ij}(\theta)} \bigl\Vert v(0) \bigr\Vert \quad \textit{a.s.} \tag{11} $$

Furthermore, denoting \(\|x(k_{*})\|=\max_{0\leq \theta \leq k} \|x(\theta)\|\), there exists a positive random variable \(\theta _{1}\), independent of k, such that

$$ \bigl\Vert x(k_{*}) \bigr\Vert \leq \bigl\Vert x(0) \bigr\Vert +h \bigl\Vert v(0) \bigr\Vert \biggl(\theta _{1}+\frac{(1+ \sqrt{2} \Vert x(k_{*}) \Vert ) ^{\alpha }}{hnK(1-\rho)} \biggr) \quad \textit{a.s.} \tag{12} $$


Proof

We have \(\|L_{k}\|\leq 2(n-1)\sqrt{3n}K\) by Lemma 2, and therefore \(h\phi _{k}\in [0,1)\); the corresponding network graph is not connected when \(\phi _{k}=0\). Since the linear map \(Id-hL_{k}\) is self-adjoint, its eigenvalues lie in the interval \((0,1]\), and on the space V its largest eigenvalue is \(1-h\phi _{k}\). It follows from system (3) that

$$\begin{aligned} \bigl\Vert v(k) \bigr\Vert \leq & \Vert Id-hL_{k-1} \Vert \cdot \bigl\Vert v(k-1) \bigr\Vert \leq (1-h\phi _{k-1}) \bigl\Vert v(k-1) \bigr\Vert \\ \leq& \bigl(1-hna^{*}(k-1) \bigr)^{\prod _{i< j}\xi _{ij}(k-1)} \cdot \bigl\Vert v(k-1) \bigr\Vert \\ =& \biggl(1-\min_{i\neq j}\frac{hnK}{(1+ \Vert x_{i}(k-1)-x_{j}(k-1) \Vert )^{ \alpha }} \biggr)^{\prod _{i< j}\xi _{ij}(k-1)}\cdot \bigl\Vert v(k-1) \bigr\Vert \\ \leq& \biggl(1-\frac{hnK}{ (1+\sqrt{2} \Vert x((k-1)_{*}) \Vert )^{ \alpha }} \biggr)^{\prod _{i< j}\xi _{ij}(k-1)}\cdot \bigl\Vert v(k-1) \bigr\Vert \\ =& F(k-1)^{\prod _{i< j}\xi _{ij}(k-1)}\cdot \bigl\Vert v(k-1) \bigr\Vert \\ \leq& F(k)^{\prod _{i< j}\xi _{ij}(k-1)}\cdot \bigl\Vert v(k-1) \bigr\Vert \\ \leq &F(k)^{\sum _{\theta =0}^{k-1}\prod _{i< j}\xi _{ij}( \theta)}\cdot \bigl\Vert v(0) \bigr\Vert \quad \text{a.s.}, \end{aligned}$$

where we denote \(F(k)=1-\frac{hnK}{(1+\sqrt{2}\|x(k_{*})\|)^{\alpha }}\). Here, the second step uses the following two facts: (i) when \({\prod_{i< j}\xi _{ij}(k-1)}=1\), Lemma 3 gives \(1-h\phi _{k-1}\leq 1-hna^{*}(k-1)\); (ii) when \({\prod_{i< j}\xi _{ij}(k-1)}=0\), obviously \(1-h\phi _{k-1}\leq 1\). The fourth step follows from Lemma 1.

Next, we prove inequality (12). By (11), the definition of \(\|x(k_{*})\|\), and Proposition 1, we obtain

$$\begin{aligned} \bigl\Vert x(k) \bigr\Vert \leq& \bigl\Vert x(0) \bigr\Vert +\sum_{\theta =0}^{k-1} \bigl\Vert x(\theta +1)-x( \theta) \bigr\Vert = \bigl\Vert x(0) \bigr\Vert +h\sum _{\theta =0}^{k-1} \bigl\Vert v(\theta) \bigr\Vert \\ \leq & \bigl\Vert x(0) \bigr\Vert +h \bigl\Vert v(0) \bigr\Vert \Biggl(1+\sum_{\theta =1}^{\theta _{1}-1}F(k)^{ \sum _{t=0}^{\theta -1}\prod _{i< j}\xi _{ij}(t)}+ \sum_{\theta =\theta _{1}}^{k-1}F(k)^{\sum _{t=0}^{\theta -1} \prod _{i< j}\xi _{ij}(t)} \Biggr) \\ \leq& \bigl\Vert x(0) \bigr\Vert +h \bigl\Vert v(0) \bigr\Vert \Biggl( \theta _{1}+\sum_{\theta =\theta _{1}}^{\infty }F(k)^{(1-\rho)\theta } \Biggr) \\ =& \bigl\Vert x(0) \bigr\Vert +h \bigl\Vert v(0) \bigr\Vert \biggl( \theta _{1}+\frac{F(k)^{(1-\rho)\theta _{1}}}{1-F(k)^{1- \rho }} \biggr) \\ \leq& \bigl\Vert x(0) \bigr\Vert +h \bigl\Vert v(0) \bigr\Vert \biggl( \theta _{1}+\frac{1}{1-F(k)^{1-\rho }} \biggr) \\ \leq& \bigl\Vert x(0) \bigr\Vert +h \bigl\Vert v(0) \bigr\Vert \biggl( \theta _{1}+\frac{ (1+\sqrt{2} \Vert x(k _{*}) \Vert ) ^{\alpha }}{hnK(1-\rho)} \biggr) \quad \text{a.s.} \end{aligned}$$

Here, the last inequality can be deduced from the following inequality:

$$ F(k)^{1-\rho }= \biggl(1-\frac{hnK}{ (1+\sqrt{2} \Vert x(k_{*}) \Vert )^{\alpha }} \biggr)^{1-\rho } \leq 1-\frac{hnK(1-\rho)}{ (1+ \sqrt{2} \Vert x(k_{*}) \Vert )^{\alpha }}. $$
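This estimate is the Bernoulli-type inequality \((1-t)^{1-\rho }\leq 1-(1-\rho)t\) for \(t\in [0,1]\) and \(\rho \in [0,1)\); a quick numerical sweep (purely illustrative) confirms it on a grid:

```python
import numpy as np

# check (1 - t)**(1 - rho) <= 1 - (1 - rho)*t on a grid of
# t in [0, 1] and rho in [0, 1); equality holds at t = 0 and at rho = 0
ts = np.linspace(0.0, 1.0, 201)
rhos = np.linspace(0.0, 0.95, 20)
gap = min((1 - (1 - r) * t) - (1 - t) ** (1 - r) for t in ts for r in rhos)
```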

Then, for \(k=k_{*}\), we obtain

$$ \bigl\Vert x(k_{*}) \bigr\Vert \leq \bigl\Vert x(0) \bigr\Vert +h \bigl\Vert v(0) \bigr\Vert \biggl(\theta _{1}+ \frac{(1+ \sqrt{2} \Vert x(k_{*}) \Vert ) ^{\alpha }}{hnK(1-\rho)} \biggr) \quad \text{a.s.} $$

 □


Let us now prove Theorem 1.

Proof (of Theorem 1)

Denote \(z=1+\sqrt{2}\|x(k_{*})\|\); then inequality (12) in Proposition 2 can be rewritten as \(f(z)=z-c_{1}z^{\alpha }-c_{2}\leq 0\).

Case (i). \(\alpha <1\). From Lemma 4, we obtain \(z\leq U_{0}=\max \{(2c_{1})^{\frac{1}{1-\alpha }}, 2c_{2}\}\). By the definition of \(F(k)\), we have \(F(k)=1-\frac{hnK}{z^{\alpha }}\leq 1-\frac{hnK}{U _{0}^{\alpha }}\triangleq F^{*}\).

Then, for all \(k>\theta _{1}\), it follows from (11) and Proposition 1 that

$$\begin{aligned} \bigl\Vert v(k) \bigr\Vert \leq& F(k)^{\sum _{\theta =0}^{k-1}\prod _{i< j} \xi _{ij}(\theta)}\cdot \bigl\Vert v(0) \bigr\Vert \leq \biggl(1-\frac{hnK}{U_{0}^{\alpha }} \biggr)^{\sum _{\theta =0}^{k-1}\prod _{i< j} \xi _{ij}( \theta)} \cdot \bigl\Vert v(0) \bigr\Vert \\ \leq & \biggl(1-\frac{hnK}{U_{0}^{\alpha }} \biggr)^{(1-\rho)k} \bigl\Vert v(0) \bigr\Vert \rightarrow 0,\quad k\rightarrow \infty, \text{a.s.} \end{aligned}$$

Moreover, for all \(k_{2}>k_{1}>\theta _{1}\), we have

$$\begin{aligned} \bigl\Vert x(k_{2})-x(k_{1}) \bigr\Vert \leq& \sum_{k=k_{1}}^{k_{2}-1} \bigl\Vert x(k+1)-x(k) \bigr\Vert \leq h\sum_{k=k_{1}}^{k_{2}-1} \bigl\Vert v(k) \bigr\Vert \\ \leq& h \bigl\Vert v(0) \bigr\Vert \sum_{k=k_{1}}^{k_{2}-1} {\bigl(F^{*}\bigr)}^{\sum _{\theta =0}^{k-1}\prod _{i< j}\xi _{ij}(\theta)} \\ \leq& h \bigl\Vert v(0) \bigr\Vert \sum_{k=k_{1}}^{\infty }{ \bigl(F^{*}\bigr)}^{(1-\rho)k} \\ =&h \bigl\Vert v(0) \bigr\Vert \frac{{(F^{*})}^{(1-\rho){k_{1}}}}{1-{(F^{*})}^{1-\rho }} \rightarrow 0, \quad k_{1}\rightarrow \infty, \text{a.s.} \end{aligned}$$

According to the Cauchy convergence criterion, there exists an \(x_{\infty }\in X\) such that \(x(k)\rightarrow x_{\infty }\) a.s.

Case (ii). \(\alpha =1\). We have \(f(z)=(1-c_{1})z-c_{2} \leq 0\), and the assumptions imply \(1-c_{1}>0\). Thus \(z\leq \frac{c_{2}}{1-c_{1}}\), and we proceed as in Case (i).

Case (iii). \(\alpha >1\). We have \(f(z)=z-c_{1}z^{\alpha }-c _{2}\leq 0\). The derivative \(f'(z)=1-c_{1}\alpha z^{\alpha -1}\) has a unique zero at \(z_{*}= (\frac{1}{c_{1}\alpha } )^{\frac{1}{ \alpha -1}}\), and \(f(z_{*})= (\frac{1}{c_{1}\alpha } )^{\frac{1}{ \alpha -1}}- c_{1} (\frac{1}{c_{1}\alpha } )^{\frac{\alpha }{ \alpha -1}}-c_{2}>0\) by hypothesis (6). Since \(f(0)=-c_{2}<0\) and \(f(z)\rightarrow -\infty \) as \(z\rightarrow +\infty \), the function f has the shape shown in Fig. 1.

Figure 1. The shape of f

For \(k=0\), we have \(k_{*}=0\) and \(z(0)=1+\sqrt{2}\|x(0)\|\leq c_{2} \leq z_{*}\). It follows from \(f(z)\leq 0\) and the shape of f that \(z(0)\leq z_{l}\), where \(z_{l}<z_{u}\) denote the two positive roots of f (see Fig. 1).

Next, we prove that \(z(k)\leq z_{*}\) holds for all \(k\in \mathbb{N}\). Assume, to the contrary, that there exists a \(k\in \mathbb{N}\) such that \(z(k)\geq z _{u}\), and let T be the first such k. Since \(f(z(k))\leq 0\) forces \(z(k)\leq z_{l}\) or \(z(k)\geq z_{u}\), for all \(k< T\) we have \(z(k)\leq z_{l}\), i.e., \(\|x(k)\|\leq \frac{z_{l}-1}{\sqrt{2}}\); in particular, \(\|x(T-1)\|\leq \frac{z_{l}-1}{\sqrt{2}}\). For \(k=T\), we have \(z(T)\geq z_{u}\geq z_{*}\), i.e., \(\|x(T)\|\geq \frac{z_{*}-1}{\sqrt{2}}\). Thus

$$ \bigl\Vert x(T)-x(T-1) \bigr\Vert \geq \bigl\Vert x(T) \bigr\Vert - \bigl\Vert x(T-1) \bigr\Vert \geq \frac{z_{*}-z_{l}}{ \sqrt{2}}\geq \frac{f(z_{*})}{\sqrt{2}}. \tag{19} $$

Here, in the last inequality, we used the mean value theorem, which assures the existence of \(\xi \in [z_{l},z_{*}]\) such that \(f(z_{*})-f(z_{l})=f'(\xi)(z_{*}-z_{l})\) with \(f'(\xi)\leq 1\), together with \(f(z_{l})=0\). From system (3), we obtain \(\|x(T)-x(T-1)\|= h\|v(T-1)\|\leq h\|v(0) \|\). Combining this inequality with (19) shows that \(f(z_{*}) \leq \sqrt{2}h\|v(0)\|\), which contradicts hypothesis (6). Therefore, for all \(k\in \mathbb{N}\), we have \(z(k)\leq z_{*}\). Again we proceed as in Case (i). □

4 An example

We consider a system consisting of four agents with coupling strength \(K=30\). Let the time step be \(h=0.001\), which satisfies \(h<\frac{1}{2(n-1) \sqrt{3n}K}\). According to the value of α, the following three sets of initial conditions are given. For \(\alpha <1\), let \(\alpha =0.5\) and take initial conditions

$$ X(0)=\begin{pmatrix} 25 & 25 & 20 \\ 1 & 0 & 0 \\ 5 & 6 & 10 \\ 30 & 33 & 30 \end{pmatrix},\qquad V(0)=\begin{pmatrix} 13 & 4 & 20 \\ 7 & 10 & 8 \\ 2 & 6 & 16 \\ 25 & 5 & 9 \end{pmatrix}. $$

For \(\alpha =1\), the initial conditions are

$$ X(0)=\begin{pmatrix} 2 & 1 & 1 \\ 1 & 3 & 1 \\ 1 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix},\qquad V(0)=\begin{pmatrix} 0.3 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1.5 & 0 \\ 0 & 0 & 0.5 \end{pmatrix}, $$

which make condition \(\sqrt{2}\|v(0)\|< K(1-\rho)\) true for all \(\rho \in [0,0.9]\). Then, for \(\alpha >1\), let \(\alpha =1.2\) and assume initial conditions

$$ X(0)=\begin{pmatrix} 0 & 0 & 0.5 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},\qquad V(0)=\begin{pmatrix} 0 & 0 & 0.4 \\ 0.5 & 0.5 & 0 \\ 0 & 0 & 0 \\ 0.3 & 0 & 0 \end{pmatrix}, $$

which make condition (6) hold with \(\theta _{1}=1000\). Next, we give four example cases of packet loss to illustrate Theorem 1.
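Before presenting the four cases, here is a hedged reimplementation sketch of the experiment (ours, not the authors' code): it runs system (1)–(2) on the \(\alpha =0.5\) data with i.i.d. Bernoulli channel failures standing in for the packet-loss processes below, and the velocity disagreement decays as Theorem 1 predicts. The loss probability 0.1 and the step count are illustrative choices:

```python
import numpy as np

def simulate(X0, V0, h, K, alpha, loss_prob, steps, rng):
    """Discrete Cucker-Smale model (1)-(2) with i.i.d. Bernoulli
    packet losses on each undirected channel (xi symmetric, zero diagonal)."""
    x, v = X0.astype(float), V0.astype(float)
    n = x.shape[0]
    for _ in range(steps):
        up = np.triu((rng.random((n, n)) >= loss_prob).astype(float), 1)
        xi = up + up.T                       # xi_ij = xi_ji
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
        a = xi * K / (1.0 + d) ** alpha      # weights a_ij(k)
        # simultaneous update: x uses the old v, v uses the old x
        x, v = x + h * v, v + h * (a @ v - a.sum(axis=1)[:, None] * v)
    return x, v

# alpha = 0.5 initial data from this example
X0 = np.array([[25, 25, 20], [1, 0, 0], [5, 6, 10], [30, 33, 30]])
V0 = np.array([[13, 4, 20], [7, 10, 8], [2, 6, 16], [25, 5, 9]])
```

Running a few thousand steps with \(h=0.001\), \(K=30\), and \(\alpha =0.5\) drives the velocity spread essentially to zero while the mean velocity stays constant.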

Case 1. Random packet losses. The random packet losses are assumed to be characterized by the Markov chain \(\{1-\xi ^{R}_{ij}(k) \}_{k\in \mathbb{N}_{0}}\) with transition probabilities and initial distributions as follows: \(P_{0,1}(i)\triangleq 0.1+0.03\sin ^{2}(0.1i)\), \(P _{1,1}\triangleq 0.1+0.03\cos ^{2}(0.1i)\), \(P_{q,0}=1-P_{q,1}\), \(\nu _{0}=0\), \(\nu _{1}=1\). Note that the upper bounds of \(P_{q,1}\) and \(P_{q,0}\) are \(p_{1}^{*}=0.13\) and \(p_{0}^{*}=0.88\), respectively. Thus, for the Markov chain \(\{1-\prod_{i< j}\xi ^{R}_{ij}(k) \}_{k \in \mathbb{N}_{0}}\), the upper bounds are \(p_{1}=0.6\) and \(p_{0}=0.5\). According to Proposition 4, Assumption 1 holds with \(\rho =0.7\in (0.6,1)\). Figures 2–4 show the norms of the position and velocity trajectories under random packet losses with \(\alpha =0.5\), \(\alpha =1\), and \(\alpha =1.2\), respectively.

Figure 2. The norms of position and velocity trajectories with \(\alpha =0.5\) in Case 1

Figure 3. The norms of position and velocity trajectories with \(\alpha =1\) in Case 1

Figure 4. The norms of position and velocity trajectories with \(\alpha =1.2\) in Case 1

Case 2. Malicious packet losses. Assume the system is subject to jamming attacks and satisfies (31) with \(\kappa =0\), \(\lambda =6\). Then let \(\rho _{M}=0.17\) so that \(\rho _{M}>\frac{1}{\lambda }\). It follows from Proposition 5 that Assumption 1 holds with \(\rho =0.17\). Figures 5–7 show the norms of the position and velocity trajectories under malicious packet losses with \(\alpha =0.5\), \(\alpha =1\), and \(\alpha =1.2\), respectively.

Figure 5. The norms of position and velocity trajectories with \(\alpha =0.5\) in Case 2

Figure 6. The norms of position and velocity trajectories with \(\alpha =1\) in Case 2

Figure 7. The norms of position and velocity trajectories with \(\alpha =1.2\) in Case 2

Case 3. Combination of random and malicious packet losses (independent case). We consider the random packet losses in Case 1 and the malicious packet losses in Case 2 as independent. Noting that \(p_{1}+\rho _{M}+p_{1}\rho _{M}<0.9\), Assumption 1 holds with \(\rho =0.9\) by Proposition 6. Figures 8–10 show the norms of the position and velocity trajectories under random and malicious packet losses (independent case) with \(\alpha =0.5\), \(\alpha =1\), and \(\alpha =1.2\), respectively.

Figure 8. The norms of position and velocity trajectories with \(\alpha =0.5\) in Case 3

Figure 9. The norms of position and velocity trajectories with \(\alpha =1\) in Case 3

Figure 10. The norms of position and velocity trajectories with \(\alpha =1.2\) in Case 3

Case 4. Combination of random and malicious packet losses (dependent case). We consider the above random and malicious packet losses as dependent. Since \(p_{1}+\rho _{M}<0.8\), Assumption 1 holds with \(\rho =0.8\) by Proposition 7. Figures 11–13 show the norms of the position and velocity trajectories under random and malicious packet losses (dependent case) with \(\alpha =0.5\), \(\alpha =1\), and \(\alpha =1.2\), respectively.

Figure 11. The norms of position and velocity trajectories with \(\alpha =0.5\) in Case 4

Figure 12. The norms of position and velocity trajectories with \(\alpha =1\) in Case 4

Figure 13. The norms of position and velocity trajectories with \(\alpha =1.2\) in Case 4

5 Concluding remarks

In the present paper, we first proposed a more general condition describing the information packet-loss process in the famous Cucker–Smale model. Then we obtained a sufficient condition ensuring flocking behavior for the discrete Cucker–Smale model.

In this paper, we assumed that the networks of the agents are symmetric. In the future, we will investigate the topic under more complex topological structures, such as networks with hierarchical leadership or networks with one or more rooted leaders. Moreover, we plan to investigate the flocking behavior of continuous Cucker–Smale models under information packet loss, especially models described by stochastic difference or differential equations.


  1. Toner, J., Tu, Y.: Flocks, herds and schools: a quantitative theory of flocking. Phys. Rev. E 58(4), 4828–4858 (1998)

    Article  MathSciNet  Google Scholar 

  2. Crowther, B.: Flocking of autonomous unmanned air vehicles. Aeronaut. J. 107(1068), 99–109 (2003)

    Google Scholar 

  3. Balch, T., Arkin, R.C.: Behavior-based formation control for multirobot teams. IEEE Trans. Robot. Autom. 14(6), 926–939 (1998)

    Article  Google Scholar 

  4. Reynolds, C.: Flocks, herds and schools: a distributed behavioral model. Comput. Graph. 21(4), 25–34 (1987)

    Article  Google Scholar 

  5. Vicsek, T., Czirók, A., Ben-Jacob, E., Cohen, I., Shochet, O.: Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 75(6), 1226–1229 (1995)

    Article  MathSciNet  Google Scholar 

  6. Jadbabaie, A., Lin, J., Morse, A.S.: Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Trans. Autom. Control 48(6), 988–1001 (2003)

    Article  MathSciNet  Google Scholar 

  7. Cucker, F., Smale, S.: Emergent behavior in flocks. IEEE Trans. Autom. Control 52(5), 852–862 (2007)

    Article  MathSciNet  Google Scholar 

  8. Cucker, F., Smale, S.: On the mathematics of emergence. Jpn. J. Math. 2(1), 197–227 (2007)

    Article  MathSciNet  Google Scholar 

  9. Yang, Z., Zhang, Q., Jiang, Z., Chen, Z.: Flocking of multi-agents with time delay. Int. J. Syst. Sci. 43(11), 2125–2134 (2012)

    Article  MathSciNet  Google Scholar 

  10. Zhang, Q., Xu, X., Yang, Z., Chen, Z.: Distance constrained flocking of multi-agents with time delay. Control Conference, 1114–1117 (2013)

  11. Cucker, F., Mordecki, E.: Flocking in noisy environments. J. Math. Pures Appl. 89(3), 278–296 (2008)

    Article  MathSciNet  Google Scholar 

  12. La, H.M., Sheng, W.: Flocking control of multiple agents in noisy environments. In: Proc. IEEE Inter. Conf. on Robotics and Automation, pp. 4964–4969 (2010)

    Google Scholar 

  13. Li, Z., Jia, Y., Du, J., Yuan, S.: Flocking for multi-agent systems with switching topology in a noisy environment. In: Proc. American Control Conference, pp. 111–116 (2008)

    Google Scholar 

  14. Li, Z., Xue, X.: Cuker–Smale flocking under rooted leadership with fixed and switching topologies. SIAM J. Appl. Math. 70(8), 3156–3174 (2010)

    Article  MathSciNet  Google Scholar 

  15. Shen, J.: Cucker–Smale Flocking under Hierarchical Leadership. SIAM J. Appl. Math. 68(3), 694–719 (2007)

    Article  MathSciNet  Google Scholar 

  16. Dalmao, F., Mordecki, E.: Cucker–Smale flocking under hierarchical leadership and random interactions. SIAM J. Appl. Math. 71(4), 1307–1316 (2009)

    Article  MathSciNet  Google Scholar 

  17. Dong, J.G.: Flocking under hierarchical leadership with a free-will leader. Int. J. Robust Nonlinear Control 23(16), 1891–1898 (2013)

    MathSciNet  MATH  Google Scholar 

  18. Li, Z., Ha, S.-Y.: On the Cucker–Smale flocking with alternating leaders. Q. Appl. Math. 73(4), 693–709 (2015)

    Article  MathSciNet  Google Scholar 

  19. Ru, L., Li, Z., Xue, X.: Cucker–Smale flocking with randomly failed interactions. J. Franklin Inst. 352(3), 1099–1118 (2015)

    Article  MathSciNet  Google Scholar 

  20. Cetinkaya, A., Ishii, H., Hayakawa, T.: Networked control under random and malicious packet losses. IEEE Trans. Autom. Control 62(5), 2434–2449 (2017)

    Article  MathSciNet  Google Scholar 

  21. De Persis, C., Tesi, P.: On resilient control of nonlinear systems under denial-of-service. In: Proc. of the IEEE Conference on Decision and Control, pp. 5254–5259 (2014)


  22. Gu, D., Wang, Z.: Leader–follower flocking: algorithms and experiments. IEEE Trans. Control Syst. Technol. 17(5), 1211–1219 (2009)


  23. Foroush, H.S., Martínez, S.: On single-input controllable linear systems under periodic DoS jamming attacks. In: Proc. SIAM Conf. Contr. Appl (2013)




Acknowledgements

The authors thank the anonymous referees for their valuable comments and suggestions.

Availability of data and materials

Not applicable.


Funding

This work has been partially supported by the National Natural Science Foundation of China (No. 11001108), the Natural Science Foundation of Jiangsu Province of China (No. BK20170171), and the Fundamental Research Funds for the Central Universities (JUSRP51317B).

Author information

Authors and Affiliations



Contributions

RW carried out the main results of this paper and drafted the manuscript. ZF directed the study, and LL helped to inspect the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Zheng Fang.

Ethics declarations

Competing interests

No potential conflict of interest was reported by the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Appendix
Assumption 1 says that there exists a \(\rho \in [0,1)\) such that

$$ \sum_{k=1}^{\infty }k\mathbb{P} \Biggl[\sum_{\theta =0}^{k-1} \biggl(1- \prod _{i< j}\xi _{ij}(\theta) \biggr)>\rho k \Biggr]< \infty. $$

It provides a probabilistic characterization of the packet loss process and includes several common packet loss processes as special cases. In the following, we show that the probabilistic characteristics of random packet losses, malicious packet losses, and their independent or dependent combinations all satisfy condition (23) in Assumption 1.
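Before turning to the four cases, a minimal Python sketch can make condition (23) concrete. The i.i.d. Bernoulli channel below, with made-up failure rate `q` and threshold `rho`, is only an illustrative assumption, not the paper's model; it estimates the tail probability \(\mathbb{P}[\sum_{\theta=0}^{k-1}X(\theta)>\rho k]\) by Monte Carlo and shows it shrinking in k, which is what makes the weighted series in (23) summable.

```python
import random

def failure_tail_prob(q, rho, k, trials=2000, seed=0):
    """Monte Carlo estimate of P[sum_{theta<k} X(theta) > rho*k] for
    i.i.d. Bernoulli(q) failure indicators X(theta).
    The values of q and rho are illustrative, not from the paper."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        failures = sum(1 for _ in range(k) if rng.random() < q)
        if failures > rho * k:
            hits += 1
    return hits / trials

# when the failure rate q is well below rho, the tail probability decays
# rapidly in k, so k * P[...] is summable, as Assumption 1 requires
estimates = {k: failure_tail_prob(0.2, 0.5, k) for k in (10, 50, 200)}
```

By a standard large-deviation argument the decay is in fact exponential in k whenever the long-run failure rate lies strictly below ρ, which is the mechanism behind all four cases below.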

  1. Random packet losses.

During information transmission among agents, random packet losses caused by nonmalicious factors, such as network congestion or communication errors, are described by a time-inhomogeneous Markov chain. For each interaction channel \((i,j)\), let \(\{1-\xi ^{R}_{ij}(k) \}_{k\in \mathbb{N}_{0}}\) be a time-inhomogeneous Markov chain taking values 0 and 1, which is characterized by an initial distribution \(\nu _{q}\), \(q\in \{0,1\}\), and by transition probabilities \(P_{q,1}\leq p_{1}^{*}\), \(P_{q,0}\leq p_{0} ^{*}\), where \(p_{1}^{*}, p_{0}^{*}\in [0,1]\) are constants.

Then the initial distributions of the time-inhomogeneous Markov chains \(\{1- \prod_{i< j}\xi ^{R}_{ij}(k) \}_{k\in \mathbb{N} _{0}}\) are \(\mathbb{P} [1-\prod_{i< j}\xi ^{R}_{ij}(0)=1 ]=\min \{\sum_{i=1}^{\varepsilon }C_{\varepsilon } ^{i} \nu _{1}^{i}\nu _{0}^{\varepsilon -i},1 \}\), \(\mathbb{P} [1- \prod_{i< j}\xi ^{R}_{ij}(0)=0 ]=\nu _{0}^{\varepsilon }\), the transition probabilities are \(P_{q,1}\leq \min \{\sum_{i=1} ^{\varepsilon }C_{\varepsilon }^{i}p_{1}^{*i}p_{0}^{*\varepsilon -i},1 \}\triangleq p_{1}\), \(P_{q,0}\leq p_{0}^{*\varepsilon }\triangleq p_{0}\), where \(\varepsilon =\frac{n(n-1)}{2}\).
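As a rough illustration of such a channel, the sketch below samples the failure indicator \(1-\xi^{R}_{ij}(k)\) of a single channel as a binary process whose per-step failure probability varies with time but never exceeds a cap; the cap value and the time-varying profile are invented for the example.

```python
import random

def simulate_channel_failures(k_steps, p1_star=0.15, seed=1):
    """Sample the failure indicator 1 - xi_ij^R(k) of one channel.
    The per-step failure probability varies with time (the inhomogeneity)
    but is always capped by p1_star; the profile below is illustrative."""
    rng = random.Random(seed)
    path = []
    for k in range(k_steps):
        p_fail = p1_star * (0.5 + 0.5 * ((k % 10) / 10.0))  # always <= p1_star
        path.append(1 if rng.random() < p_fail else 0)
    return path

path = simulate_channel_failures(5000)
rate = sum(path) / len(path)  # long-run failure frequency, below p1_star
```

The cap on the transition probability into the failure state is exactly what Proposition 4 below exploits.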

Before proving that Assumption 1 includes random packet losses as a special case, we establish two auxiliary results.

Proposition 3


Let \(\{\xi (k) \}_{k\in \mathbb{N}_{0}}\), taking values 0 and 1, be a time-inhomogeneous Markov chain with transition probabilities \(P_{q,r}:\mathbb{N}_{0}\rightarrow [0,1]\), \(q,r\in \{0,1\}\), and let \(\{\chi (k) \}_{k\in \mathbb{N}_{0}}\) be a binary-valued process independent of \(\{\xi (k) \}_{k\in \mathbb{N}_{0}}\). Assume

$$\begin{aligned}& P_{q,1}\leq \tilde{p}, \\ \end{aligned}$$
$$\begin{aligned}& \sum_{k=1}^{\infty }\mathbb{P} \Biggl[\sum_{\theta =0}^{k-1}\chi (\theta)> \tilde{ \omega }k \Biggr]< \infty, \end{aligned}$$

where \(\tilde{p}\in (0,1)\), \(\tilde{\omega }\in (0,1)\). Then for \(\rho \in (\tilde{p}\tilde{\omega },\tilde{\omega })\), we have

$$ \mathbb{P} \Biggl[\sum_{\theta =0}^{k-1}\xi (\theta)\chi (\theta)>\rho k \Biggr]< \tilde{\sigma }_{k}+\psi _{k}, $$

where
$$\begin{aligned}& \begin{aligned}&\tilde{\sigma }_{k}=\mathbb{P} \Biggl[\sum _{\theta =0}^{k-1}\chi (\theta)> \tilde{\omega }k \Biggr],\qquad \psi _{k}=\phi ^{-\rho k+1}\frac{ ((\phi -1) \tilde{p}+1 )^{\tilde{\omega }k}-1}{(\phi -1)\tilde{p}}, \\ &\phi =\frac{\frac{ \rho }{\tilde{\omega }}(1-\tilde{p})}{\tilde{p}(1-\frac{\rho }{ \tilde{\omega }})}. \end{aligned} \end{aligned}$$

Furthermore, we can deduce the following corollary from this proposition.

Corollary 1

In the setting of Proposition 3, we have \(\sum_{k=1} ^{\infty }k\psi _{k}<\infty \).

Proof
Function \(\psi _{k}\) can be rewritten as

$$ \psi _{k}=\frac{\phi }{(\phi -1)\tilde{p}} \bigl[ \bigl( \phi ^{-\frac{\rho }{\tilde{\omega }}} \bigl((\phi -1)\tilde{p}+1 \bigr) \bigr)^{\tilde{\omega }k}-\phi ^{-\rho k} \bigr]. $$

From the proof of Proposition 3, we have \(\phi ^{-\frac{\rho }{\tilde{\omega }}} ((\phi -1)\tilde{p}+1 )<1\) and \(\phi >1\). Thus let \(N\geq \frac{\phi }{(\phi -1)\tilde{p}}\), \(0< m\leq -\tilde{\omega }\log _{\phi }\phi ^{-\frac{\rho }{ \tilde{\omega }}} ((\phi -1)\tilde{p}+1 )\) and \(\mu =\phi \). Then we have \(\psi _{k}\leq N\mu ^{-mk}\). Hence, \(\sum_{k=1}^{\infty }k\psi _{k}\leq \sum_{k=1}^{\infty }kN\mu ^{-mk}<\infty \), which completes the proof. □
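As a numerical sanity check of Corollary 1, one can evaluate \(\psi_k\) directly and watch the partial sums of \(k\psi_k\) stabilize. The parameter values \(\tilde p=0.2\), \(\tilde\omega=0.8\), \(\rho=0.4\in(\tilde p\tilde\omega ,\tilde\omega)\) below are made up for illustration only.

```python
def psi(k, p=0.2, w=0.8, rho=0.4):
    """psi_k from Proposition 3, with illustrative parameters
    p~ = 0.2, w~ = 0.8 and rho = 0.4, so that rho lies in (p~*w~, w~)."""
    phi = (rho / w) * (1 - p) / (p * (1 - rho / w))  # here phi = 4
    return phi ** (1 - rho * k) * (((phi - 1) * p + 1) ** (w * k) - 1) / ((phi - 1) * p)

# the partial sums of k * psi_k settle down, consistent with Corollary 1
partials = [sum(k * psi(k) for k in range(1, n + 1)) for n in (50, 100, 200)]
```

With these parameters \(\phi^{-\rho/\tilde\omega}((\phi-1)\tilde p+1)=0.8<1\), so \(\psi_k\) decays geometrically and the weighted series converges, as the proof above asserts.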

Proposition 4

Consider the time-inhomogeneous Markov chains \(\{1-\prod_{i< j}\xi ^{R}_{ij}(k) \}_{k\in \mathbb{N}_{0}}\), taking values 0 and 1, with transition probability upper bound \(P_{q,1}\leq p _{1}\). Then for all \(\rho _{R}\in (p_{1},1)\), we have

$$ \sum_{k=1}^{\infty }k\mathbb{P} \Biggl[\sum _{\theta =0}^{k-1} \biggl(1- \prod _{i< j}\xi _{ij}^{R}(\theta) \biggr)>\rho _{R} k \Biggr]< \infty. $$

Proof
Let \(\tilde{p}=p_{1}\), \(\tilde{\omega }=1\), and take the processes \(\{\xi (k) \}= \{1-\prod_{i< j}\xi ^{R}_{ij}(k) \}\) and \(\{\chi (k) \}=\{1\}\), where \(1-\prod_{i< j}\xi ^{R}_{ij}(k)\) takes value 0 or 1. Since conditions (24) and (25) are satisfied, the conclusion follows from Proposition 3 and Corollary 1. □

  2. Malicious packet losses.

The process of information transmission may also be interrupted by malicious activities such as external attacks. For example, jamming attacks on interaction channels may cause packet losses.

According to the model of attack strategy proposed in [21], let \(\xi ^{M}_{ij}(k)\), taking values 0 and 1, denote the state of attacks. The state \(\xi ^{M}_{ij}(k)=0\) means that the channel \((i,j)\) faces an attack at time kh. Assume that the number of packet transmission attempts that face attacks is upper-bounded almost surely by a certain ratio of the total number of packet transmission attempts, i.e.,

$$ \mathbb{P} \Biggl[\sum_{\theta =0}^{k-1} \bigl(1- \xi ^{M}_{ij}(\theta) \bigr)\leq \kappa +\frac{k}{\lambda } \Biggr]=1,\quad k\in \mathbb{N}_{0}, $$

where \(\kappa \geq 0\), \(\lambda >1\). It indicates that among k packet transmission attempts, at most \(\kappa +\frac{k}{\lambda }\) of them face attacks. Specifically, the parameter κ ensures that there are almost surely no attacks during the first few packet transmission attempts, and the ratio \(\frac{1}{\lambda }\) expresses the jamming rate, i.e., the fraction of packet transmission attempts that meet attacks; both are discussed in [21].
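A simple way to picture an attack strategy satisfying this bound is a greedy jammer that attacks whenever the running attack count stays within \(\kappa+k/\lambda\). The sketch below, with arbitrary values of κ and λ, only illustrates the constraint; it is not any specific attacker model from [21].

```python
def greedy_jammer(k_steps, kappa=2.0, lam=4.0):
    """Greedy attack schedule: jam at step k whenever doing so keeps the
    cumulative attack count within kappa + (number of attempts)/lam.
    kappa and lam are illustrative values."""
    attacks = []
    total = 0
    for k in range(k_steps):
        if total + 1 <= kappa + (k + 1) / lam:
            attacks.append(1)  # 1 - xi^M(k) = 1: channel jammed at step k
            total += 1
        else:
            attacks.append(0)
    return attacks

def bound_holds(attacks, kappa=2.0, lam=4.0):
    """Check the almost-sure bound: after k+1 attempts, at most
    kappa + (k+1)/lam of them were attacked."""
    total = 0
    for k, a in enumerate(attacks):
        total += a
        if total > kappa + (k + 1) / lam:
            return False
    return True
```

After an initial burst permitted by κ, the greedy schedule settles into attacking roughly once every λ attempts, matching the jamming rate \(1/\lambda\).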

Now, consider all of the channels \((i,j)\) with \(i< j\). If we use the following condition to characterize malicious packet losses under attack,

$$ \mathbb{P} \Biggl[\sum_{\theta =0}^{k-1} \biggl(1-\prod_{i< j}\xi ^{M}_{ij}( \theta)\biggr)\leq \kappa +\frac{k}{\lambda } \Biggr]=1,\quad k\in \mathbb{N}_{0}, $$

where \(\kappa \geq 0\), \(\lambda >1\), then we can say that Assumption 1 includes malicious packet losses with the attack strategy given by (31) as a special case.

Proposition 5

Consider a binary-valued process \(\{\xi ^{M}_{ij}(k) \}_{k \in \mathbb{N}_{0}}\) satisfying equation (31). Then for all \(\rho _{M}\in (\frac{1}{\lambda },1)\), we have

$$ \sum_{k=1}^{\infty }k\mathbb{P} \Biggl[\sum _{\theta =0}^{k-1} \biggl(1- \prod _{i< j}\xi ^{M}_{ij}(\theta) \biggr)>\rho _{M} k \Biggr]< \infty. $$

Proof
$$ \begin{aligned}[b] \mathbb{P} \Biggl[\sum _{\theta =0}^{k-1} \biggl(1-\prod _{i< j}\xi ^{M}_{ij}( \theta) \biggr)> \rho _{M} k \Biggr] &\leq \mathbb{P} \Biggl[\sum _{\theta =0} ^{k-1} \biggl(1-\prod _{i< j}\xi ^{M}_{ij}(\theta) \biggr) \geq \rho _{M} k \Biggr] \\ &=\mathbb{P} \bigl[e^{\sum _{\theta =0}^{k-1} (1- \prod _{i< j}\xi ^{M}_{ij}(\theta) )} \geq e^{\rho _{M} k} \bigr] \\ &\leq e^{-\rho _{M} k}\cdot \mathbb{E} \bigl[e^{\sum _{\theta =0}^{k-1} (1-\prod _{i< j} \xi ^{M}_{ij}( \theta) )} \bigr] \\ &\leq e^{-\rho _{M} k}\cdot e^{\kappa +\frac{k}{ \lambda }}=e^{\kappa -(\rho _{M}-\frac{1}{\lambda })k}. \end{aligned} $$

Here, the second inequality follows from Markov's inequality. It follows from equation (31) that

$$ \mathbb{E} \bigl[e^{\sum _{\theta =0}^{k-1} (1-\prod _{i< j}\xi ^{M}_{ij}(\theta) )} \bigr]\leq \mathbb{E} \bigl[e^{\kappa +\frac{k}{ \lambda }} \bigr]=e^{\kappa +\frac{k}{\lambda }}, $$

and then the third inequality is obtained. Thus,

$$ \sum_{k=1}^{\infty }k\mathbb{P} \Biggl[\sum _{\theta =0}^{k-1} \biggl(1- \prod _{i< j}\xi ^{M}_{ij}(\theta) \biggr)>\rho _{M} k \Biggr]\leq \sum_{k=1} ^{\infty }ke^{\kappa -(\rho _{M}-\frac{1}{\lambda })k}< \infty. $$

This completes the proof. □
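The geometric decay at the end of this argument can also be checked numerically. With sample values \(\rho_M=0.6\), \(\lambda=4\), \(\kappa=1\) (chosen only for illustration, with \(\rho_M>1/\lambda\)), the partial sums of \(k e^{\kappa -(\rho_M-1/\lambda)k}\) agree with the closed form \(e^{\kappa }x/(1-x)^2\), where \(x=e^{-(\rho_M-1/\lambda)}\):

```python
import math

def tail_series(rho_M=0.6, lam=4.0, kappa=1.0, n_terms=2000):
    """Partial sum of k * exp(kappa - (rho_M - 1/lam) * k), the bound in
    Proposition 5, together with its geometric-series closed form
    e^kappa * x / (1 - x)^2 with x = exp(-(rho_M - 1/lam)).
    Parameter values are illustrative."""
    x = math.exp(-(rho_M - 1.0 / lam))
    partial = sum(k * math.exp(kappa) * x ** k for k in range(1, n_terms + 1))
    closed = math.exp(kappa) * x / (1.0 - x) ** 2
    return partial, closed
```

The closed form follows from \(\sum_{k\geq 1}kx^{k}=x/(1-x)^{2}\) for \(|x|<1\), which is all that convergence in Proposition 5 requires.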


  3. A combination of random and malicious packet losses (independent case).

Consider the situation where random and malicious packet losses occur together but independently. In this case, the packet transmission at time kh succeeds if and only if both processes succeed, i.e.,

$$ \prod_{i< j}\xi _{ij}(k)= \textstyle\begin{cases} 1, & \prod_{i< j}\xi ^{R}_{ij}(k)=1 \mbox{ and } \prod_{i< j}\xi ^{M}_{ij}(k)=1, \\ 0 ,& \mbox{otherwise}. \end{cases} $$
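Since the indicators are binary, the definition above, together with the inclusion–exclusion identity \(1-rm=(1-r)+(1-m)-(1-r)(1-m)\) used in the proof of Proposition 6, can be verified exhaustively; a minimal check:

```python
import itertools

def combined_failure(r_ok, m_ok):
    """Failure indicator 1 - prod xi(k): transmission succeeds only when both
    the random-loss indicator r_ok and the attack indicator m_ok equal 1."""
    return 1 - r_ok * m_ok

# for binary r, m:  1 - rm = (1 - r) + (1 - m) - (1 - r)(1 - m)
for r, m in itertools.product((0, 1), repeat=2):
    assert combined_failure(r, m) == (1 - r) + (1 - m) - (1 - r) * (1 - m)
```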

The next proposition illustrates that Assumption 1 also includes this type of packet loss as a special case.

Proposition 6

Consider the state indicator process of packet transmission \(\{\prod_{i< j}\xi _{ij}(k) \}_{k\in \mathbb{N}_{0}}\) given by (36), where \(\{\prod_{i< j}\xi ^{R}_{ij}(k) \}_{k\in \mathbb{N}_{0}}\) and \(\{\prod_{i< j}\xi ^{M} _{ij}(k) \}_{k\in \mathbb{N}_{0}}\) are mutually independent. If

$$ p_{1}+\rho _{M}+p_{1}\rho _{M}< 1, $$

then inequality (5) holds for all \(\rho \in (p_{1}+\rho _{M}+p _{1}\rho _{M},1)\).

Proof
The function in (36) can be rewritten as \(\prod_{i< j} \xi _{ij}(k)=\prod_{i< j}\xi ^{R}_{ij}(k)\cdot \prod_{i< j} \xi ^{M}_{ij}(k)\), and then we have

$$\begin{aligned} 1-\prod_{i< j}\xi _{ij}(k) =& \biggl(1-\prod _{i< j}\xi ^{R}_{ij}(k) \biggr)+ \biggl( 1-\prod_{i< j}\xi ^{M}_{ij}(k) \biggr) \\ &{}- \biggl(1-\prod_{i< j}\xi ^{R} _{ij}(k) \biggr) \biggl(1-\prod_{i< j}\xi ^{M}_{ij}(k) \biggr). \end{aligned}$$

Summing both sides over θ from 0 to \(k-1\) and denoting \(L(k)=\sum_{\theta =0}^{k-1} (1-\prod_{i< j}\xi _{ij}( \theta) )\), \(L_{1}(k)=\sum_{\theta =0}^{k-1} (1- \prod_{i< j}\xi ^{R}_{ij}(\theta) )\), \(L_{2}(k)=\sum_{\theta =0}^{k-1} (1-\prod_{i< j}\xi ^{M}_{ij}(\theta) )\), \(L_{3}(k)=\sum_{\theta =0}^{k-1} (1-\prod_{i< j}\xi ^{R}_{ij}(\theta) ) (1-\prod_{i< j}\xi ^{M}_{ij}( \theta) )\), we have \(L(k)=L_{1}(k)+L_{2}(k)-L_{3}(k)\). Let \(\rho _{1}=p_{1}+\eta _{1}\), \(\rho _{2}=\rho _{M}\), \(\rho _{3}=p_{1}\rho _{M}+\eta _{2}\), \(\eta =\eta _{1}+\eta _{2}=\rho -p_{1}-\rho _{M}-p_{1}\rho _{M}\), \(\eta _{2}=\min \{\frac{\eta }{2},\frac{\rho _{M}-p_{1}\rho _{M}}{2}\}\). Then, since \(L(k)\leq L_{1}(k)+L_{2}(k)+L_{3}(k)\) and \(\rho =\rho _{1}+\rho _{2}+\rho _{3}\),

$$\begin{aligned} \sum_{k=1}^{\infty }k\mathbb{P} \bigl[L(k)>\rho k \bigr] \leq& \sum_{k=1}^{ \infty }k \mathbb{P} \bigl[L_{1}(k)+L_{2}(k)+L_{3}(k)>\rho _{1}k+\rho _{2}k+ \rho _{3}k \bigr] \\ \leq& \sum_{k=1}^{\infty }k \mathbb{P} \bigl[L_{1}(k)> \rho _{1}k \bigr]+\sum _{k=1}^{\infty }k\mathbb{P} \bigl[L_{2}(k)>\rho _{2}k \bigr] \\ &{} +\sum_{k=1}^{\infty }k\mathbb{P} \bigl[L_{3}(k)>\rho _{3}k \bigr]. \end{aligned}$$

First, we estimate the term involving \(L_{1}(k)\). Note that
$$\begin{aligned} \rho _{1} =&p_{1}+\eta -\eta _{2}= \max \biggl\{ p_{1}+\frac{\eta }{2},p _{1}+\eta - \frac{\rho _{M}-p_{1}\rho _{M}}{2} \biggr\} \\ =&\max \biggl\{ \frac{p _{1}+\rho -\rho _{M}-p_{1}\rho _{M}}{2}, \frac{2\rho -\rho _{M}(p_{1}+3)}{2} \biggr\} . \end{aligned}$$

It is easy to see that \(\frac{2\rho -\rho _{M}(p_{1}+3)}{2}<1\), and it follows from the assumption that \(\frac{p_{1}+\rho -\rho _{M}-p_{1}\rho _{M}}{2}<1\). Thus we have \(\rho _{1}\in (p_{1},1)\). Therefore, it can be deduced from Proposition 4 that \(\sum_{k=1}^{\infty }k \mathbb{P} [L_{1}(k)>\rho _{1}k ]<\infty\). Second, let \(\rho _{2}=\rho _{M}\). According to Proposition 5, we have \(\sum_{k=1}^{\infty }k\mathbb{P} [L_{2}(k)>\rho _{2}k ]< \infty\). Finally, since \(\rho _{3}=p_{1}\rho _{M}+\eta _{2}\leq p_{1} \rho _{M}+\frac{\rho _{M}-p_{1}\rho _{M}}{2}< p_{1}\rho _{M}+\rho _{M}-p _{1}\rho _{M}=\rho _{M}\), we obtain \(\rho _{3}\in (p_{1}\rho _{M},\rho _{M})\). Let \(\{\xi (k) \}= \{1-\prod_{i< j}\xi ^{R} _{ij}(k) \}\) and \(\{\chi (k) \}= \{1-\prod_{i< j}\xi ^{M}_{ij}(k) \}\) with \(\tilde{p}=p_{1}\) and \(\tilde{\omega }=\rho _{M}\) which make (24) and (25) hold. By Proposition 3, Corollary 1 and Proposition 5, we obtain \(\sum_{k=1}^{\infty }k\mathbb{P} [L _{3}(k)>\rho _{3}k ]<\infty\). Therefore, the conclusion is proved. □

  4. A combination of random and malicious packet losses (dependent case).

In this case, the attacker may decide whether to attack based on information about the random packet losses in the channel. Clearly, the number of transmission failures then does not exceed the sum of the numbers of random and malicious failures, i.e.,

$$ \sum_{\theta =0}^{k-1} \biggl(1- \prod_{i< j}\xi _{ij}(\theta) \biggr)\leq \sum_{\theta =0}^{k-1} \biggl(1-\prod _{i< j}\xi ^{R}_{ij}(\theta) \biggr) + \sum _{\theta =0}^{k-1} \biggl(1-\prod _{i< j}\xi ^{M}_{ij}(\theta) \biggr). $$

For simplicity, we denote (41) as \(L\leq L_{1}+L_{2}\). Proposition 7 below shows that this case is also included in Assumption 1 as a special case.
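Because \(1-rm\leq (1-r)+(1-m)\) for binary r and m, inequality (41) holds pathwise no matter how the attacker correlates with the random losses. A small randomized check, where the dependence rule is invented purely for illustration:

```python
import random

def check_failure_count_bound(n_steps=500, seed=3):
    """For (possibly dependent) binary success indicators r(k) and m(k),
    the failure count of the product process never exceeds the sum of the
    individual failure counts: L(k) <= L1(k) + L2(k) at every step."""
    rng = random.Random(seed)
    L = L1 = L2 = 0
    for _ in range(n_steps):
        r = rng.randint(0, 1)
        # the attacker may base its decision on r (dependent case);
        # this particular rule is a made-up example
        m = r if rng.random() < 0.5 else rng.randint(0, 1)
        L += 1 - r * m
        L1 += 1 - r
        L2 += 1 - m
        assert L <= L1 + L2
    return L, L1, L2
```

The pathwise bound is exactly why the dependent case needs only the weaker condition \(p_{1}+\rho _{M}<1\) of Proposition 7 rather than independence.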

Proposition 7

Consider the state indicator process of packet transmission \(\{\prod_{i< j}\xi _{ij}(k) \}_{k\in \mathbb{N}_{0}}\) given by (41). Assume

$$ p_{1}+\rho _{M}< 1. $$

Then, for all \(\rho \in (p_{1}+\rho _{M},1)\), we have

$$ \sum_{k=1}^{\infty }k\mathbb{P} \Biggl[\sum _{\theta =0}^{k-1} \biggl(1- \prod _{i< j}\xi _{ij}(\theta) \biggr)>\rho k \Biggr]< \infty. $$

Proof
Let \(\eta =\rho -p_{1}-\rho _{M}>0\), \(\rho _{1}=p_{1}+\frac{\eta }{2}\), and \(\rho _{2}=\rho _{M}+ \frac{\eta }{2}\). Then we have

$$\begin{aligned} \sum_{k=1}^{\infty }k\mathbb{P} \bigl[L(k)>\rho k \bigr] =&\sum_{k=1}^{ \infty }k \mathbb{P} \bigl[L_{1}(k)+L_{2}(k)>\rho _{1}k+ \rho _{2}k \bigr] \\ \leq& \sum_{k=1}^{\infty }k\mathbb{P} \bigl[L_{1}(k)>\rho _{1}k \bigr]+ \sum _{k=1}^{\infty }k\mathbb{P} \bigl[L_{2}(k)>\rho _{2}k \bigr]. \end{aligned}$$

It follows from \(\rho _{1}=p_{1}+\frac{\rho -p_{1}-\rho _{M}}{2}=\frac{ \rho +p_{1}-\rho _{M}}{2}<1\) that \(\rho _{1}\in (p_{1},1)\). From Proposition 4, we have

$$ \sum_{k=1}^{\infty }k\mathbb{P} \bigl[L_{1}(k)>\rho _{1}k \bigr]< \infty. $$

Due to \(\rho _{2}=\rho _{M}+\frac{\rho -p_{1}-\rho _{M}}{2}=\frac{\rho -p _{1}+\rho _{M}}{2}<1\), we get \(\rho _{2}\in (\rho _{M},1)\). From Proposition 5, we obtain

$$ \sum_{k=1}^{\infty }k\mathbb{P} \bigl[L_{2}(k)>\rho _{2}k \bigr]< \infty. $$

Thus, the proof is complete. □

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Wang, R., Fang, Z. & Li, L. Flocking of discrete Cucker–Smale model with packet loss. Adv Differ Equ 2019, 71 (2019).

