
A general quantum difference calculus

Abstract

In this paper, we consider a strictly increasing continuous function β and present a general quantum difference operator $$D_{\beta}$$ defined by $${D}_{\beta}f(t)= ({f(\beta(t))-f(t)} )/ ({\beta(t)-t} )$$. This operator yields the Hahn difference operator when $$\beta(t)=qt+\omega$$, where $$q\in(0,1)$$ and $$\omega>0$$ are fixed real numbers; the Jackson q-difference operator when $$\beta (t)=qt$$, $$q\in(0,1)$$; and the forward difference operator when $$\beta(t)=t+\omega$$, $$\omega>{0}$$. A calculus based on the operator $$D_{\beta}$$ and its inverse is established.

1 Introduction

Quantum calculus is known as calculus without limits. It replaces the classical derivative by a quantum difference operator, which makes it possible to deal with sets of nondifferentiable functions. Quantum difference operators play an important role due to their applications in several mathematical areas such as orthogonal polynomials, basic hypergeometric functions, combinatorics, the calculus of variations and the theory of relativity. New results in quantum calculus can be found in [1–8] and the references cited therein. One type of quantum calculus is the Hahn quantum calculus. In [9], Hahn introduced his difference operator as a tool for constructing families of orthogonal polynomials; it is defined by

$${D}_{q,\omega}f(t)=\frac{f(qt+\omega)-f(t)}{t(q-1)+\omega}, \quad t\neq { \omega_{0}},$$
(1.1)

where $$q\in(0,1)$$, $$\omega>0$$ are fixed and $$\omega_{0}= {\frac{\omega}{1-q}}$$. The derivative at $$t=\omega_{0}$$ is defined to be the usual derivative $$f'(\omega_{0})$$ whenever it exists. In [2, 10], the inverse operator was constructed and a rigorous analysis of the calculus associated with $${D}_{q,\omega}$$ was given. Hamza and Ahmed studied the existence and uniqueness of solutions of Hahn difference equations in [4], and established the theory of linear Hahn difference equations in [5]. The Hahn difference operator unifies two important difference operators. The first is the Jackson q-difference operator, which is defined by

$${D}_{q}f(t)=\frac{f(qt)-f(t)}{t(q-1)},\quad t\neq{0},$$
(1.2)

and $${D}_{q}f(0)=f'(0)$$, where q is a fixed number, $$q\in(0,1)$$. The function f is defined on a q-geometric set $$\mathbb{A}\subseteq \mathbb{R}$$ (or $$\mathbb{C}$$) such that whenever $$t\in{\mathbb{A}}$$, $$qt\in{\mathbb{A}}$$. See [3, 11]. The second is the forward difference operator $$D_{\omega}$$ which is defined by

$$D_{\omega}f(t)=\frac{f(t+\omega)-f(t)}{\omega},\quad t\in{\mathbb{R}},$$
(1.3)

where ω is a fixed positive number. We also refer the reader to the interesting book [12] by Kac and Cheung, who presented in detail the q-calculus and the ω-calculus, associated with the difference operators $$D_{q}$$ and $$D_{\omega }$$, respectively.

In his 2013 PhD thesis [13], supervised by Lynn Erbe and Allan Peterson, Auch introduced the forward difference operator

$${\Delta}_{a,b}f(t)=\frac{f(\sigma(t))-f(t)}{\sigma(t)-t},$$
(1.4)

where $$\sigma(t)=at+b$$ with $$a\ge1$$, $$b\ge0$$ and $$a+b>1$$, and its inverse $$\rho(t)=\frac{t-b}{a}$$. He defined f on a mixed time scale $$\mathbb{T}_{\alpha}:=\{\ldots,\rho^{2}(\alpha),\rho(\alpha),\alpha,\sigma (\alpha),\sigma^{2}(\alpha),\ldots\}$$, $$\alpha>\frac{b}{1-a}$$, which is a discrete subset of $$\mathbb{R}$$.

In this paper, we introduce a general quantum difference operator defined by

$${D}_{\beta}f(t)=\frac{f(\beta(t))-f(t)}{\beta(t)-t}$$
(1.5)

for every t with $$\beta(t)\neq{t}$$ and $${D}_{\beta}f(t)=f'(t)$$ when $$\beta(t)=t$$ provided that $$f'(t)$$ exists in the usual sense. Here, $$\beta:{I}\longrightarrow{I}$$ is a strictly increasing continuous function, and f is an arbitrary function defined, in general, on a subset $$I\subseteq\mathbb{R}$$ with $$\beta(t)\in I$$ for any $$t\in I$$.

Throughout this paper $$\mathbb{X}$$ is a Banach space with norm $$\|\cdot\|$$, and we denote by

$$\beta^{k}(t):=\underbrace{\beta\circ\beta\circ\cdots\circ \beta}_{k\text{ times}}(t)\quad \mbox{and}\quad \beta^{-k}(t):= \underbrace{\beta^{-1}\circ\beta^{-1}\circ\cdots\circ \beta^{-1}}_{k\text{ times}}(t),$$

$$k\in{\mathbb{N}_{0}}={{\mathbb{N}}\cup\{0\}}$$, where $$\mathbb{N}$$ is the set of natural numbers. For convenience $$\beta^{0}(t)=t$$ for all $$t\in{I}$$.

The function β may be linear or nonlinear, and it can be classified according to the number of its fixed points in I. We consider two classes. The first class is the family of all functions β that have a unique fixed point $$s_{0}\in I$$ and satisfy the following inequality:

$${(t-s_{0}) \bigl(\beta(t)-t\bigr)}\leq{0} \quad \text{for all } t \in{I}.$$

The second class is the family of all functions β that have a unique fixed point $$s_{0}\in I$$ and satisfy the following inequality:

$${(t-s_{0}) \bigl(\beta(t)-t\bigr)}\geq{0}\quad \text{for all } t \in{I}.$$

The Hahn and Jackson difference operators are special linear cases of the general difference operator $$D_{\beta}$$, obtained when $$\beta(t)=qt+\omega$$ and $$\beta(t)=qt$$, $$q\in(0,1)$$, $$\omega>0$$, respectively. These functions belong to the first class. Furthermore, the function $$\beta(t)=qt+\omega$$ with $$q>1$$, $$\omega>0$$ belongs to the second class. The forward difference operator $$D_{\omega}$$ corresponds to $$\beta(t)=t+\omega$$, which has no fixed points. Also, $$\beta(t)=at+b$$ with $$a>1$$, $$b\geq{0}$$ belongs to the second class.

Throughout the paper, we consider functions β that belong to the first class and give a rigorous analysis of the calculus based on $$D_{\beta}$$. In this class, the sequence $$\{\beta^{k}(t)\}_{k\in{\mathbb {N}_{0}}}$$ moves towards $$s_{0}$$. Every choice of the function β gives a new difference operator, so we obtain a wide class of quantum difference operators with their corresponding quantum calculi.

The advantage of this study is that it allows us to avoid proving results separately for the Jackson q-difference operator, the Hahn difference operator, and every other difference operator of the form $$D_{\beta}$$ with β in this class.

We organize this paper as follows. In Section 2, we introduce the definition of β-derivative and prove its main properties. For instance, we deduce the chain rule, Leibniz’ formula and the mean value theorem. In Section 3, we introduce the β-integral and we establish the fundamental theorem of β-calculus.

2 β-differentiation

Assume that the function β has only one fixed point $$s_{0}\in{I}$$ and satisfies the following condition:

$${(t-s_{0}) \bigl(\beta(t)-t\bigr)}\leq{0}\quad \text{for all } t\in{I},$$
(2.1)

where the equality holds only if $$t=s_{0}$$. Here, I is supposed to be an interval of the real line.

In the following, we introduce two lemmas that are important in proving our main results.

Lemma 2.1

The following statements are true.

(i)

The sequence of functions $$\{{\beta}^{k}(t)\}_{k\in{{\mathbb {N}_{0}}}}$$ converges uniformly to the constant function $${\hat{\beta} (t)}:=s_{0}$$ on every compact interval $$J\subseteq{I}$$ containing $$s_{0}$$.

(ii)

The series $${\sum}^{\infty}_{k=0}|{\beta}^{k}(t)-{\beta }^{k+1}(t)|$$ is uniformly convergent to $$|t-s_{0}|$$ on every compact interval $$J\subseteq{I}$$ containing $$s_{0}$$.

Proof

(i) Let $$J=[a,b]$$ with $$s_{0}\in{J}$$. If $$t\in[s_{0},b]$$, then condition (2.1) implies $${\beta}^{k+1}(t)\leq{\beta}^{k}(t)$$ for all $$k\in{{\mathbb{N}_{0}}}$$. Thus the sequence $$\{{\beta}^{k}(t)\}_{k\in{{\mathbb{N}_{0}}}}$$ is nonincreasing and bounded below by $$s_{0}$$; by the continuity of β, its pointwise limit is a fixed point of β and hence equals $$s_{0}$$. By Dini’s theorem, $$\{{\beta}^{k}(t)\}_{k\in{{\mathbb{N}_{0}}}}$$ converges uniformly to the constant function $${\hat{\beta}(t)}=s_{0}$$ on the interval $$[s_{0},b]$$. Similarly, we can prove uniform convergence on $$[a,s_{0}]$$. Consequently, the sequence $$\{{\beta}^{k}(t)\}_{k\in{{\mathbb{N}_{0}}}}$$ converges uniformly on the interval $$J=[a,b]$$.

(ii) We apply Dini’s theorem to $$S_{n}(t)={\sum}^{n}_{k=0}({\beta}^{k}(t)-{\beta}^{k+1}(t))$$, $$n=1,2,\ldots$$ on both $$[s_{0},b]$$ and $$[a,s_{0}]$$ to get the desired result. □
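Both parts of Lemma 2.1 can be illustrated numerically. The sketch below uses an assumed Hahn-type choice $$\beta(t)=qt+\omega$$ with q = 1/2, ω = 1 (fixed point s0 = 2); these parameter values are illustrative, not taken from the paper.

```python
# Numerical sketch of Lemma 2.1 for the illustrative choice beta(t) = q*t + w,
# q in (0,1), whose unique fixed point is s0 = w/(1-q).
q, w = 0.5, 1.0
beta = lambda t: q * t + w
s0 = w / (1 - q)                      # s0 = 2.0

def beta_k(t, k):
    """compute beta^k(t) by direct iteration."""
    for _ in range(k):
        t = beta(t)
    return t

t = 5.0
# (i) the iterates beta^k(t) approach the fixed point s0
assert abs(beta_k(t, 60) - s0) < 1e-12

# (ii) the telescoping series sums to |t - s0|
series = sum(abs(beta_k(t, k) - beta_k(t, k + 1)) for k in range(60))
assert abs(series - abs(t - s0)) < 1e-12
```

The second assertion reflects the telescoping structure used in the proof of (ii): the partial sums equal $$|t-\beta^{n+1}(t)|$$ on $$[s_{0},b]$$.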

The proof of the following lemma is straightforward and will be omitted.

Lemma 2.2

If $$f: I\longrightarrow\mathbb{X}$$ is continuous at $$s_{0}$$, then the sequence $$\{f(\beta^{k}(t))\}_{k\in{{\mathbb{N}_{0}}}}$$ converges uniformly to $$f(s_{0})$$ on every compact interval $$J\subseteq {I}$$ containing $$s_{0}$$.

Theorem 2.3

If $$f:I\longrightarrow\mathbb{X}$$ is continuous at $$s_{0}$$, then the series $${\sum}^{\infty}_{k=0}\|({\beta}^{k}(t)- {\beta }^{k+1}(t))f(\beta^{k}(t))\|$$ is uniformly convergent on every compact interval $$J\subseteq{I}$$ containing $$s_{0}$$.

Proof

Let $$J\subseteq{I}$$ be a compact interval containing $$s_{0}$$. By Lemma 2.2, there exists $$k_{0}\in{{\mathbb{N}}}$$ such that

$$\bigl\Vert f\bigl(\beta^{k}(t)\bigr)-f(s_{0})\bigr\Vert < 1\quad \forall t\in J, k\geq{k_{0}}.$$

Then $$\|f(\beta^{k}(t))\|<1+\|f(s_{0})\|$$ for $$k\geq{k_{0}}$$ and $$t\in{J}$$, which in turn implies that

$$\bigl\vert \bigl(\beta^{k}(t)-\beta^{k+1}(t) \bigr)\bigr\vert \bigl\Vert f\bigl(\beta^{k}(t)\bigr)\bigr\Vert < \bigl\vert \bigl(\beta^{k}(t)-\beta ^{k+1}(t)\bigr)\bigr\vert \bigl(1+\bigl\Vert f(s_{0})\bigr\Vert \bigr) \quad \forall t\in{J}, k\geq{k_{0}}.$$
(2.2)

Consider the two sequences

$$D_{n}(t)=\sum^{n}_{k=0} \bigl\Vert \bigl(\beta^{k}(t)-\beta^{k+1}(t)\bigr)f\bigl( \beta^{k}(t)\bigr)\bigr\Vert$$
(2.3)

and

$$C_{n}(t)=\sum^{n}_{k=0}\bigl\vert \bigl(\beta^{k}(t)-\beta^{k+1}(t)\bigr)\bigr\vert \bigl(1+\bigl\Vert f(s_{0})\bigr\Vert \bigr).$$
(2.4)

By Lemma 2.1(ii), $$C_{n}(t)$$ is uniformly convergent to $$|t-s_{0}|(1+\| f(s_{0})\|)$$ on J.

By the Cauchy criterion, given $$\epsilon>0$$, there exists $$n_{0}\in\mathbb {N}$$ such that

$$\bigl\vert C_{n}(t)-C_{m}(t)\bigr\vert < \epsilon \quad \forall t\in{J}, n\geq{m}\geq{n_{0}}.$$
(2.5)

By using (2.2) and (2.5), we have

$$\bigl\vert D_{n}(t)-D_{m}(t)\bigr\vert \leq \bigl\vert C_{n}(t)-C_{m}(t)\bigr\vert < \epsilon \quad \forall t\in{J}, n \geq{m}\geq\max\{n_{0},k_{0}\}.$$

Therefore, $$\sum^{\infty}_{k=0}\|(\beta^{k}(t)-\beta^{k+1}(t))f(\beta^{k}(t))\|$$ is uniformly convergent on J. □

In the following, we present some examples of special forms of β which have one fixed point $$s_{0}\in{I}$$ and satisfy condition (2.1).

Examples 2.4

1. $$\beta(t):=qt\mp\omega$$ for fixed $$\omega\geq0$$ and $$q\in(0,1)$$ is defined on $$I=\mathbb{R}$$. In this case, $$s_{0}=\frac{\mp\omega}{1-q}$$,

$$\beta^{k}(t)=q^{k}t\mp\omega[k]_{q} \quad \mbox{and}\quad \beta^{-k}(t)=\frac{t\pm\omega[k]_{q}}{q^{k}},$$

where $$[k]_{q}=\frac{1-q^{k}}{1-q}$$. We have

$$\lim_{k\rightarrow\infty}\beta^{k}(t)=s_{0} \quad \mbox{and}\quad \lim_{k\rightarrow\infty}\beta^{-k}(t)=\left \{ \textstyle\begin{array}{l@{\quad}l} \infty, &t>s_{0}, \\ {-}\infty, &t<s_{0}. \end{array}\displaystyle \right .$$

For the iteration of $$\beta(t)=qt+\omega$$, see Figure 1.

This case represents the forward and backward Hahn difference operators. It also yields the Jackson q-difference operator when $$\omega=0$$; see [2–4, 11, 12].

2. $$\beta(t):=qt^{n}$$ for fixed $$q\in(0,1)$$ and fixed $$n\in2\mathbb{N}+1$$, defined on $$I= (-q^{\frac{1}{1-n}},q^{\frac {1}{1-n}} )$$. Then β is a strictly increasing function from I onto I with unique fixed point $$s_{0}=0$$, and $$\beta^{-1} (t) =\sqrt[n]{\frac{t}{q}}$$. Moreover,

$$\beta^{k} (t)= q^{[k]_{n}} t^{n^{k}},\qquad \beta^{-k}(t)= q^{-n^{-k}[k]_{n}}t^{n^{-k}},$$

where $$[k]_{n}= {\frac{1-n^{k}}{1-n}}$$,

and for $$t\in I$$,

\begin{aligned}& \lim_{k\rightarrow\infty} \beta^{k} (t)=0, \\& \lim _{k\rightarrow\infty} \beta^{-k}(t) = \left \{ \textstyle\begin{array}{l@{\quad}l} q^{\frac{1}{1-n}},&t>0, \\ 0, &t=0, \\ - q^{\frac{1}{1-n}},&t<0. \end{array}\displaystyle \right . \end{aligned}

In Figure 2, we illustrate the behavior of $$\beta^{k}(t)$$ for $$t\in I$$. This case yields the power quantum difference operator

$$D_{n,q} f(t) := \left \{ \textstyle\begin{array}{l@{\quad}l} \frac{f(qt^{n})-f(t)}{qt^{n} -t}, &t\ne0, \\ f^{\prime}(0), &t=0, \end{array}\displaystyle \right .$$

which was introduced by Aldwoah et al. in [1].

3. Fix $$n\in2\mathbb{N}+1$$ and let $$\beta(t):=t^{n}$$ for $$t\in I=(-1,1)$$. Then $$\beta: I \longrightarrow I$$ is strictly increasing, $$\beta^{-1} (t) =\sqrt[n]{t}$$, the unique fixed point is $$s_{0}=0$$, $$\beta^{k} (t)= t^{n^{k}}$$, $$\beta^{-k}(t)= t^{n^{-k}}$$, $$\lim_{k\rightarrow\infty} \beta^{k} (t)=0$$ for $$t\in I$$, and

$$\lim_{k\rightarrow\infty} \beta^{-k}(t) = \left \{ \textstyle\begin{array}{l@{\quad}l} 1,& t\in(0,1), \\ 0, & t=0, \\ -1,&t\in(-1,0). \end{array}\displaystyle \right .$$

This case represents the n-power difference operator [10]

$$D_{n} f(t) := \left \{ \textstyle\begin{array}{l@{\quad}l} \frac{f(t^{n})-f(t)}{t^{n} -t}, & t\ne0, \\ f^{\prime}(0), & t=0. \end{array}\displaystyle \right .$$
(2.6)

4. $$\beta(t):=\ln{t}+1$$ which is a strictly increasing and continuous nonlinear function defined on $$I=[1,\infty)$$. The only fixed point is $$s_{0}=1$$. We can see that

$$\beta^{k}(t)=\ln{\beta^{k-1}(t)}+1,\qquad \beta^{-1}(t)=e^{t-1},$$

and for $$t\in I$$,

$$\lim_{k\rightarrow\infty}\beta^{k}(t)=1 ,\qquad \lim _{k\rightarrow\infty }\beta^{-k}(t)=\infty.$$
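The closed-form iterates stated in items 1 and 2 of Examples 2.4 can be sanity-checked numerically; the parameter values below (q = 1/2, ω = 1, n = 3) are illustrative choices of ours.

```python
def iterate(beta, t, k):
    """compute beta^k(t) by direct composition."""
    for _ in range(k):
        t = beta(t)
    return t

# item 1: beta(t) = q*t + w, with beta^k(t) = q^k t + w [k]_q
q, w, t, k = 0.5, 1.0, 3.0, 7
kq = (1 - q**k) / (1 - q)                     # [k]_q
hahn = iterate(lambda u: q * u + w, t, k)
assert abs(hahn - (q**k * t + w * kq)) < 1e-12

# item 2: beta(t) = q*t**n (n odd), with beta^k(t) = q^{[k]_n} t^{n^k}
n, k2, t2 = 3, 3, 0.9
kn = (n**k2 - 1) // (n - 1)                   # [k2]_n = 1 + n + n^2 = 13
power = iterate(lambda u: q * u**n, t2, k2)
assert abs(power - q**kn * t2**(n**k2)) < 1e-12
```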

Now, we introduce the β-difference operator as follows.

Definition 2.5

For a function $$f:I\longrightarrow\mathbb{X}$$, we define the β-difference operator of f as

$${D}_{\beta}f(t)=\left \{ \textstyle\begin{array}{l@{\quad}l} \frac{f(\beta(t))-f(t)}{\beta(t)-t},&t\neq{s_{0}}, \\ {f'(s_{0})},&t={s_{0}}, \end{array}\displaystyle \right .$$

provided that the ordinary derivative $$f'$$ exists at $$t=s_{0}$$. In this case, we say that $${D}_{\beta}f(t)$$ is the β-derivative of f at t. We say that f is β-differentiable on I if $${f'(s_{0})}$$ exists.
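Definition 2.5 translates directly into code. The sketch below is a minimal implementation, assuming the user supplies β, its fixed point s0, and the value of $$f'(s_{0})$$; the sanity check uses our illustrative choice $$\beta(t)=t/2+1$$ with s0 = 2.

```python
def D_beta(f, beta, s0, df_s0=None):
    """Return the beta-derivative t -> D_beta f(t); df_s0 is f'(s0), used at the fixed point."""
    def Df(t):
        if t == s0:
            return df_s0                       # ordinary derivative at s0
        return (f(beta(t)) - f(t)) / (beta(t) - t)
    return Df

# sanity check: for f(t) = t^2, D_beta f(t) = beta(t) + t, and at s0 it is f'(2) = 4
beta = lambda t: 0.5 * t + 1.0                 # fixed point s0 = 2
Df = D_beta(lambda t: t**2, beta, s0=2.0, df_s0=4.0)
assert abs(Df(3.0) - (beta(3.0) + 3.0)) < 1e-12
assert Df(2.0) == 4.0
```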

In the following, we state some immediate properties of the β-difference operator.

(i)

$$D_{\beta}$$ is a linear operator.

(ii)

If f is β-differentiable at t, then $$f({\beta }(t))=f(t)+({\beta}(t)-t){{D}_{\beta}f(t)}$$.

(iii)

If f is β-differentiable, then f is continuous at $$s_{0}$$.

Simple calculations show that the following theorem is true, so its proof is omitted.

Theorem 2.6

Assume that $$f:{I}\longrightarrow\mathbb{X}$$ and $$g:{I}\longrightarrow \mathbb{R}$$ are β-differentiable functions at $$t\in{I}$$. Then:

(i)

The product $$fg:I\longrightarrow\mathbb{X}$$ is β-differentiable at t and

\begin{aligned} {D}_{\beta}(fg) (t) =&\bigl({D}_{\beta}f(t)\bigr)g(t)+f\bigl( \beta(t)\bigr){D}_{\beta}g(t) \\ =&\bigl({D}_{\beta}f(t)\bigr)g\bigl(\beta(t)\bigr)+f(t){D}_{\beta}g(t). \end{aligned}
(ii)

$$f/g$$ is β-differentiable at t and

$${D}_{\beta} ({f}/{g} ) (t)=\frac{({D}_{\beta }f(t))g(t)-f(t){D}_{\beta}g(t)}{g(t)g(\beta(t))},\quad g(t)g\bigl(\beta(t) \bigr)\neq{0}.$$
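Both statements of Theorem 2.6 are easy to verify numerically; the functions f, g, the point t = 4 and $$\beta(t)=t/2+1$$ below are illustrative choices, not taken from the paper.

```python
beta = lambda t: 0.5 * t + 1.0       # fixed point s0 = 2, avoided below
def D(h):                            # beta-derivative of a callable, away from s0
    return lambda t: (h(beta(t)) - h(t)) / (beta(t) - t)

f = lambda t: t**3 + 1.0
g = lambda t: 2.0 * t - 5.0
t = 4.0                              # any point with t != s0 and g(t)*g(beta(t)) != 0

fg = lambda u: f(u) * g(u)
# product rule, both stated forms of Theorem 2.6(i)
assert abs(D(fg)(t) - (D(f)(t) * g(t) + f(beta(t)) * D(g)(t))) < 1e-9
assert abs(D(fg)(t) - (D(f)(t) * g(beta(t)) + f(t) * D(g)(t))) < 1e-9
# quotient rule, Theorem 2.6(ii)
quot = lambda u: f(u) / g(u)
assert abs(D(quot)(t) - (D(f)(t) * g(t) - f(t) * D(g)(t)) / (g(t) * g(beta(t)))) < 1e-9
```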

Examples 2.7

1.

$$D_{\beta}{t^{n}}=\sum^{n-1}_{k=0}(\beta(t))^{n-k-1}{t}^{k}$$, $$t\in{I}$$, $$n\geq{1}$$.

2.

For $$t\neq0$$, $${D}_{\beta}\frac{1}{t}=-\frac{1}{t\beta(t)}$$, $$t\in{I}$$, $$\beta(t)\neq{0}$$.

3.

If $$f:{I}\longrightarrow\mathbb{R}^{2}$$ is defined by $$f(t)=(t^{2},2t)$$ and $$\beta(t)=\frac{1}{2}t+1$$, then

$$D_{\beta}f(t) =\frac{(-\frac{3}{4}t^{2}+t+1, 2-t)}{1-\frac{1}{2}t}.$$
4.

If $$\beta(t)=\frac{1}{4}t$$ and $$f:{I}\longrightarrow\mathbb{M}_{2\times2}$$ is defined by $$f(t)=\bigl[ {\scriptsize\begin{matrix}t^{3}&1 \cr t& t^{2}\end{matrix}} \bigr]$$, then one can see that $$D_{\beta}f(t)=\bigl[ {\scriptsize\begin{matrix}{\frac{21}{16}t^{2}}&0 \cr 1&\frac{5}{4} t\end{matrix}} \bigr]$$, where $$\mathbb{M}_{2\times2}$$ is the space of all $$2\times {2}$$ matrices.
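For vector-valued f, the operator $$D_{\beta}$$ acts componentwise. The sketch below checks item 3 above at the sample point t = 3 (an arbitrary choice of ours).

```python
# check of Examples 2.7(3): f(t) = (t^2, 2t) with beta(t) = t/2 + 1
beta = lambda t: 0.5 * t + 1.0
f = lambda t: (t**2, 2.0 * t)

def D_beta_vec(t):
    """componentwise beta-difference quotient of f at t (t != s0 = 2)."""
    b = beta(t)
    return tuple((fb - ft) / (b - t) for fb, ft in zip(f(b), f(t)))

t = 3.0
expected = ((-0.75 * t**2 + t + 1.0) / (1.0 - 0.5 * t),
            (2.0 - t) / (1.0 - 0.5 * t))
assert all(abs(a - e) < 1e-12 for a, e in zip(D_beta_vec(t), expected))
```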

Lemma 2.8

Let $$f:{I}\longrightarrow\mathbb{X}$$ be β-differentiable with $${D}_{\beta}f(t)=0$$ for all $$t\in{I}$$. Then $$f(t)=f(s_{0})$$ for all $$t\in I$$.

Proof

Since $${D}_{\beta}f(t)=0$$ for all $$t\in{I}$$, we have $$f(t)=f(\beta(t))$$, $$t\in{I}$$. Consequently, $$f(t)=f({\beta}^{k}(t))$$ for $$t\in{I}$$ and $$k\in\mathbb{N}_{0}$$. Letting $${k}\rightarrow{\infty}$$ and using the continuity of f at $$s_{0}$$, we obtain $$f(t)=f(s_{0})$$ for $$t\in{I}$$. □

As a direct consequence we obtain the following corollary.

Corollary 2.9

Suppose that $$f,g:I\longrightarrow\mathbb{X}$$ are β-differentiable on I. If $$D_{\beta}{f(t)}=D_{\beta}{g(t)}$$ for all $$t\in{I}$$, then $$f(t)-g(t)=f(s_{0})-g(s_{0})$$ for all $$t\in{I}$$.

Definition 2.10

Let $${s_{0}\in[a,b]\subseteq{I}}$$. We define the β-interval by

$${[a,b]}_{\beta}=\bigl\{ {\beta}^{k}(a);{k}\in{ \mathbb{N}_{0}}\bigr\} \cup\bigl\{ {\beta }^{k}(b);{k}\in{ \mathbb{N}_{0}}\bigr\} \cup\{s_{0}\},$$

and the class $$[c]_{\beta}$$ for any point $$c\in{I}$$ by

$$[c]_{\beta}=\bigl\{ \beta^{k}(c); k\in{{\mathbb{N}_{0}}} \bigr\} \cup\{s_{0}\}.$$

Finally, for any set $$A\subset\mathbb{R}$$, we define

$$A^{*} =A\setminus\{s_{0}\}.$$

In the following lemma, $$[a,b]$$ is a compact subinterval of I and $$s_{0}\in[a,b]$$.

Lemma 2.11

Let $$f:[a,b]\longrightarrow\mathbb{R}$$ be continuous at $$s_{0}$$. The following statements are true:

(i)

$$D_{\beta}f(t)>{0}$$ for all $$t\in{[a,b]^{*}_{\beta}}$$ if and only if f is strictly increasing on $$[a,b]_{\beta}$$.

(ii)

$$D_{\beta}f(t)<0$$ for all $$t\in{[a,b]^{*}_{\beta}}$$ if and only if f is strictly decreasing on $$[a,b]_{\beta}$$.

Proof

We prove only the first part; the second can be shown similarly. For the proof of (i), suppose $$D_{\beta}f(t)>0$$ for all $$t\in {[a,b]^{*}_{\beta}}$$. We may assume that $$s_{0}\notin\{a,b\}$$. We have $$a<\beta(a)<\beta^{2}(a)<\cdots<\beta^{k}(a)<\cdots< s_{0}<\cdots<\beta ^{m}(b)<\cdots<\beta(b)<b$$. Then, using the continuity of f at $$s_{0}$$, we conclude that $$f(a)< f(\beta(a))< f(\beta^{2}(a))<\cdots<f(\beta ^{k}(a))<\cdots<f(s_{0})<\cdots<f(\beta^{m}(b))<\cdots<f(\beta(b))<f(b)$$. This implies that f is strictly increasing on $$[a,b]_{\beta}$$. Conversely, suppose that f is strictly increasing on $$[a,b]_{\beta}$$. For any $$k\in{\mathbb{N}_{0}}$$, if $$\beta^{k+1}(t)>\beta ^{k}(t)$$, then $$f(\beta^{k+1}(t))>f(\beta^{k}(t))$$, and if $$\beta^{k+1}(t)<\beta^{k}(t)$$, then $$f(\beta^{k+1}(t))< f(\beta^{k}(t))$$. Therefore, $$D_{\beta}{f(t)}>0$$ for all $$t\in{[a,b]^{*}_{\beta}}$$. □

The following example shows that the previous lemma may not hold on $$[a,b]\setminus[a,b]_{\beta}$$.

Example 2.12

Let $$f:[1,\frac{3}{2}]\longrightarrow{\mathbb{R}}$$ be defined by $$f(t)= 4t^{2}-9t$$ and let $$\beta(t)= \frac{1}{2}t+\frac{3}{4}$$, so that $$s_{0}=\frac{3}{2}$$. A direct computation gives $$D_{\beta}{f(t)}=6(t-1)$$, so $$D_{\beta}{f(t)}>0$$ for $$t\in(1,\frac{3}{2})$$. Nevertheless, f is not strictly increasing on the interval $$[1,\frac{3}{2}]$$: for $$t_{1}=1.05< t_{2}=1.1$$ we have $$f(t_{1})=-5.04>f(t_{2})=-5.06$$. Note that $$t_{1},t_{2}\notin [1,\frac{3}{2}]_{\beta}$$.
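The data of Example 2.12 can be evaluated directly: for this f and β the difference quotient defining $$D_{\beta}f$$ simplifies algebraically to $$6(t-1)$$, while f fails to be monotone on the whole interval. The sample points below are ours.

```python
# check of Example 2.12: f(t) = 4t^2 - 9t with beta(t) = t/2 + 3/4 on [1, 3/2]
beta = lambda t: 0.5 * t + 0.75        # fixed point s0 = 3/2
f = lambda t: 4.0 * t**2 - 9.0 * t

def Df(t):
    """beta-difference quotient, valid for t != s0."""
    return (f(beta(t)) - f(t)) / (beta(t) - t)

# the beta-derivative simplifies to 6(t - 1), positive on (1, 3/2) ...
for t in (1.1, 1.2, 1.3, 1.4):
    assert abs(Df(t) - 6.0 * (t - 1.0)) < 1e-9
# ... yet f is not monotone on [1, 3/2]: it decreases between 1.05 and 1.1
assert f(1.05) > f(1.1)
```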

Simple calculations, using induction on m, show that the following theorem is true, so its proof is omitted.

Theorem 2.13

Let α be a constant and $$m\in\mathbb{N}$$.

(i)

If $$f(t)=(t-\alpha)^{m}$$, then

$$D_{\beta}f(t)=\sum^{m-1}_{r=0} \bigl(\beta(t)-\alpha\bigr)^{r}(t-\alpha)^{m-1-r}.$$
(2.7)
(ii)

If $$g(t)={1}/{(t-\alpha)^{m}}$$, then

$$D_{\beta}{g(t)}=-{\sum}^{m-1}_{r=0} \frac{1}{(\beta(t)-\alpha )^{m-r}(t-\alpha)^{r+1}},$$
(2.8)

provided that $${(\beta(t)-\alpha)^{m-r}(t-\alpha)^{r+1}}\neq0$$, $$r=0,1,\ldots,m-1$$.
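Both formulas of Theorem 2.13 reduce to the factorization of $$b^{m}-a^{m}$$ with $$a=t-\alpha$$ and $$b=\beta(t)-\alpha$$, which the sketch below confirms numerically (α, m, t and β are illustrative choices of ours).

```python
beta = lambda t: 0.5 * t + 1.0        # any strictly increasing beta; s0 = 2
alpha, m, t = 1.5, 4, 5.0

# formula (2.7) for f(t) = (t - alpha)^m
f = lambda u: (u - alpha)**m
lhs = (f(beta(t)) - f(t)) / (beta(t) - t)
rhs = sum((beta(t) - alpha)**r * (t - alpha)**(m - 1 - r) for r in range(m))
assert abs(lhs - rhs) < 1e-9

# formula (2.8) for g(t) = 1 / (t - alpha)^m
g = lambda u: 1.0 / (u - alpha)**m
lhs_g = (g(beta(t)) - g(t)) / (beta(t) - t)
rhs_g = -sum(1.0 / ((beta(t) - alpha)**(m - r) * (t - alpha)**(r + 1))
             for r in range(m))
assert abs(lhs_g - rhs_g) < 1e-9
```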

The following example shows that the ordinary chain rule does not hold in the β-calculus.

Example 2.14

Consider the functions $$f(t)=t^{2}$$ and $$g(t)=3t$$. Then

$$D_{\beta}(f\circ{g}) (t) =9\bigl(\beta(t)+t\bigr),$$

while

$$D_{\beta}{f}\bigl(g(t)\bigr)D_{\beta}{g(t)}=3 \bigl(\beta(3t)+3t\bigr).$$
(2.9)

That is,

$$D_{\beta}(f\circ{g}) (t)\neq D_{\beta}{f\bigl(g(t) \bigr)}D_{\beta}{g(t)}.$$
(2.10)

The next theorem gives an analogue of the chain rule for the β-calculus.

Theorem 2.15

Let $$g:{I}\longrightarrow\mathbb{R}$$ be a continuous and β-differentiable function and $$f:\mathbb{R}\longrightarrow\mathbb{X}$$ be continuously differentiable. Then there exists a point c between $$\beta(t)$$ and t such that

$$D_{\beta}(f\circ{g}) (t)=f'\bigl(g(c) \bigr)D_{\beta}{g(t)}.$$
(2.11)

Proof

The case $$t=s_{0}$$ is the usual chain rule. The case $$t\neq{s_{0}}$$ with $$g(\beta(t))=g(t)$$ is evident since both sides of (2.11) are zero. For $$t\neq{s_{0}}$$ with $$g(\beta(t))\neq{g(t)}$$, we have

\begin{aligned} D_{\beta}(f\circ{g}) (t) &=\frac{(f\circ{g})(\beta(t))-(f\circ{g})(t)}{\beta(t)-t} \\ &=\frac{f(g(\beta(t)))-f(g(t))}{g(\beta(t))-g(t)}\cdot\frac{g(\beta (t))-g(t)}{\beta(t)-t}. \end{aligned}

By the mean value theorem, there exists a real number η between $$g(\beta(t))$$ and $$g(t)$$ such that

$$\frac{f(g(\beta(t)))-f(g(t))}{g(\beta(t))-g(t)}=f'(\eta).$$

Since g is a continuous function, then there exists c between $$\beta(t)$$ and t such that $$g(c)=\eta$$. Hence

$$D_{\beta}(f\circ{g}) (t)=f'\bigl(g(c)\bigr)D_{\beta}\bigl(g(t)\bigr).$$

□

In the following theorem, we derive the formula for the nth β-derivative of the product fg, where one of them is a real-valued function and the other is a vector-valued function.

For $$n\in\mathbb{N}$$, let $$S^{(n)}_{k}$$ be the set of all possible strings of length n containing k times β and $$n-k$$ times $$D_{\beta}$$. We denote $$f^{ {D_{\beta}}{ \beta}}(t)=(D_{\beta}{f})(\beta(t))$$ and $$f^{ {\beta} {D_{\beta}}}(t)=D_{\beta}(f(\beta(t)))$$, and $$f^{\Gamma}$$ is defined accordingly for $$\Gamma\in{S^{(n)}_{k}}$$.

If f is β-differentiable n times over I, then the higher order derivatives of f are defined by

$$D^{n}_{\beta}f=D_{\beta}\bigl({D^{n-1}_{\beta}}f \bigr),\quad n\in{\mathbb{N}},\mbox{where } D^{0}_{\beta}f=f.$$

Finally, one can see that

$$\biggl(\sum_{ \Gamma\in{S^{(n)}_{k}}}f^{{\Gamma}D_{\beta}} \biggr) (t) + \biggl(\sum_{ \Gamma\in{S^{(n)}_{k-1}}}f^{{\Gamma}\beta} \biggr) (t) = \biggl(\sum_{ \Gamma\in{S^{(n+1)}_{k}}}f^{\Gamma} \biggr) (t).$$

Theorem 2.16

(Leibniz’ formula)

If f and g are n times β-differentiable functions, then we have

$$D^{n}_{\beta}(fg) (t)=\sum ^{n}_{k=0} \biggl(\sum_{ \Gamma\in {S^{(n)}_{k}}}f^{\Gamma} \biggr) (t) D^{k}_{\beta}g(t),\quad t\neq{s_{0}}.$$
(2.12)

Proof

We prove by induction on n. By Theorem 2.6(i), the statement is true for $$n=1$$. Suppose that (2.12) is true for $$n=m$$. Now, we prove that it is true for $$n=m+1$$. We have

\begin{aligned} D^{m+1}_{\beta}(fg) (t) =&D_{\beta}\Biggl[\sum ^{m}_{k=0} \biggl(\sum _{ \Gamma\in{S^{(m)}_{k}}}f^{\Gamma} \biggr) (t) D^{k}_{\beta}g(t) \Biggr] \\ =&\sum^{m}_{k=0} \biggl(\sum _{ \Gamma\in{S^{(m)}_{k}}}D_{\beta}{f}^{\Gamma} \biggr) (t) D^{k}_{\beta}g(t)+ \sum^{m}_{k=0} \biggl(\sum_{ \Gamma\in{S^{(m)}_{k}}}f^{\Gamma} \biggr) \bigl( \beta(t)\bigr) D^{k+1}_{\beta}g(t) \\ =&\sum^{m}_{k=0} \biggl(\sum _{ \Gamma\in{S^{(m)}_{k}}}D_{\beta}{f}^{\Gamma} \biggr) (t) D^{k}_{\beta}g(t)+ \sum^{m+1}_{k=1} \biggl(\sum_{ \Gamma\in{S^{(m)}_{k-1}}}f^{\Gamma} \biggr) \bigl( \beta(t)\bigr) D^{k}_{\beta}g(t) \\ =& \biggl(\sum_{ \Gamma\in{S^{(m)}_{0}}}D_{\beta}{f}^{\Gamma} \biggr) (t) g(t)+ \sum^{m}_{k=1} \biggl( \sum_{\Gamma\in{S^{(m)}_{k}}}D_{\beta}{f}^{\Gamma} \biggr) (t) D^{k}_{\beta}g(t) \\ &{} + \biggl(\sum_{ \Gamma\in{S^{(m)}_{m}}}f^{\Gamma} \biggr) \bigl(\beta(t)\bigr) D^{m+1}_{\beta}g(t)+\sum ^{m}_{k=1} \biggl(\sum_{ \Gamma\in{S^{(m)}_{k-1}}}f^{\Gamma} \biggr) \bigl(\beta(t)\bigr) D^{k}_{\beta}g(t) \\ =& \biggl(\sum_{ \Gamma\in{S^{(m)}_{0}}}D_{\beta}{f}^{\Gamma} \biggr) (t) g(t)+ \biggl(\sum_{ \Gamma\in{S^{(m)}_{m}}}f^{\Gamma} \biggr) \bigl(\beta(t)\bigr) D^{m+1}_{\beta}g(t) \\ &{}+\sum^{m}_{k=1} \biggl[ \biggl(\sum _{ \Gamma\in{S^{(m)}_{k}}}{f}^{{\Gamma}D_{\beta}} \biggr) (t)+ \biggl(\sum _{ \Gamma\in{S^{(m)}_{k-1}}}f^{{\Gamma}\beta} \biggr) \biggr] D^{k}_{\beta}g(t) \\ =& \biggl(\sum_{ \Gamma\in{S^{(m+1)}_{0}}}{f}^{\Gamma} \biggr) (t) g(t)+ \biggl(\sum_{ \Gamma\in{S^{(m+1)}_{m+1}}}{f}^{\Gamma} \biggr) (t) D^{m+1}_{\beta}g(t) \\ &{} +\sum^{m}_{k=1} \biggl( \sum_{ \Gamma\in{S^{(m+1)}_{k}}}f^{\Gamma} \biggr) (t)D^{k}_{\beta}g(t) \\ =&\sum^{m+1}_{k=0} \biggl(\sum _{ \Gamma\in{S^{(m+1)}_{k}}}{f}^{\Gamma} \biggr) (t)D^{k}_{\beta}g(t). \end{aligned}

Hence (2.12) holds for all $$n\in\mathbb{N}$$. □
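The string formalism in Theorem 2.16 can be checked mechanically. The sketch below enumerates $$S^{(n)}_{k}$$ for n = 2 and compares both sides of (2.12) at a sample point, using an assumed Hahn-type $$\beta(t)=t/2+1$$ (fixed point s0 = 2, avoided by the evaluation points); f and g are arbitrary polynomials of ours.

```python
from itertools import combinations

beta = lambda t: 0.5 * t + 1.0

def D(h):
    """beta-derivative of a callable, valid away from s0."""
    return lambda t: (h(beta(t)) - h(t)) / (beta(t) - t)

def B(h):
    """composition with beta."""
    return lambda t: h(beta(t))

def f_Gamma(h, gamma):
    """apply a string of operations left to right, e.g. ('D','B') gives (D_beta h)(beta(t))."""
    for op in gamma:
        h = D(h) if op == 'D' else B(h)
    return h

def strings(n, k):
    """S^(n)_k: strings of length n with k betas and n-k beta-derivatives."""
    return [tuple('B' if i in pos else 'D' for i in range(n))
            for pos in combinations(range(n), k)]

f = lambda t: t**3
g = lambda t: t**2 + 1.0
n, t = 2, 3.0

lhs = f_Gamma(lambda u: f(u) * g(u), ('D',) * n)(t)   # D^2_beta (fg)(t)
Dg = [g, D(g), D(D(g))]                               # D^k_beta g for k = 0, 1, 2
rhs = sum(sum(f_Gamma(f, G)(t) for G in strings(n, k)) * Dg[k](t)
          for k in range(n + 1))
assert abs(lhs - rhs) < 1e-9                          # Leibniz formula (2.12), n = 2
```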

The following example shows that a function f may be discontinuous and yet β-differentiable.

Example 2.17

Let $$f:[-1,1]\longrightarrow\mathbb{R}$$ be such that

$$f(t)=\left \{ \textstyle\begin{array}{l@{\quad}l} t ,& t\in(-1,0), \\ -t ,& t\in(0,1), \\ 1 ,&t=0, \\ 0 ,& t=1,-1, \end{array}\displaystyle \right .$$

and let

$$\beta(t)=\frac{1}{4}t+ \frac{1}{4}.$$

We see that the function f is discontinuous at $$0,1,-1$$, yet it is β-differentiable, with

$$D_{\beta}f(t)=\left \{ \textstyle\begin{array}{l@{\quad}l} \frac{5t+1}{3t-1},& {t}\in(-1,0), \\ -1,& {t}\in(0,1), \\ -5,&t=0, \\ 1,&t=1,-1. \end{array}\displaystyle \right .$$

Rolle’s theorem, in general, is not true with respect to the β-derivative. This can be shown by the following example.

Example 2.18

The function $$f(t)= 4t^{2}-9t$$, defined in Example 2.12, is differentiable in the ordinary sense, and hence β-differentiable, over $$\mathbb{R}$$ with respect to $$\beta(t)= \frac{1}{2}t+\frac{3}{4}$$. Clearly, $$f(1)={f(\beta(1))}$$, but $$f(t)\neq{f(\beta(t))}$$ inside the interval $$[1,\beta(1)]$$, i.e., there is no point between 1 and $$\beta(1)$$ such that $${D}_{\beta}f(t)=0$$. In fact, $$f(t)=f(\beta(t))$$ only at 1 and $$\frac{3}{2}$$. This shows the failure of Rolle’s theorem with respect to the β-derivative.

In the following theorem, we obtain an analogue of the classical mean value theorem. We postpone the proof of this theorem to Section 3.

Theorem 2.19

(Mean value theorem)

Suppose $$f:I\longrightarrow\mathbb{X}$$ is β-differentiable on I. Then

$$\bigl\Vert f(y)-f(x)\bigr\Vert \leq{\sup_{t\in{I}} \bigl\Vert D_{\beta}{f(t)}\bigr\Vert (y-x)}$$
(2.13)

for every $$x,y\in[a,b]_{\beta}$$, $$x< s_{0}< y$$, where $$a,b\in{I}$$, $$a\leq{b}$$.

The following example shows that inequality (2.13) need not hold when $$x,y\notin{[a,b]_{\beta}}$$, where $$a,b\in{I}$$, $$a\leq{b}$$.

Example 2.20

Let $$f,g:I=[{-}\frac{5}{3},2]\longrightarrow\mathbb{R}$$ be defined by $$f(t)=t^{2}$$ and $$g(t)=4t$$, and let $$\beta(t)=\frac{1}{2}t+\frac {1}{2}$$. Then $$s_{0}=1$$, and one can see that $$|D_{\beta}{f(t)}|<{D_{\beta}}g(t)$$ for all $$t\in{I}$$. If we take $$a=b=-1$$, then

$$[-1]_{\beta}= \biggl\{ 1,\frac{2^{n}-1}{2^{n}}: n=-1,0,1,\ldots \biggr\} .$$

Let $$x,y\in{[-1]_{\beta}}$$ with $$x< y$$. By Theorem 2.19, $$|y^{2}-x^{2}|\leq\frac{7}{2}(y-x)$$ for every such pair, where $$\sup_{t\in{I}}|D_{\beta}{f(t)}|=\frac{7}{2}$$. Now take $$x,y\notin{[-1]_{\beta}}$$ with $$x=\frac{15}{9}$$ and $$y=\frac{17}{9}$$; one can check that $$|y^{2}-x^{2}|>\frac{7}{2}(y-x)$$.

3 β-integration

We say that F is a β-antiderivative of the function $$f:I\longrightarrow\mathbb{X}$$ if $$D_{\beta}{F(t)}=f(t)$$ for $$t\in{I}$$.

Definition 3.1

We denote by Ω the vector space of all functions $$g:I\to\mathbb{X}$$ which are continuous at $$s_{0}$$ and vanish at $$s_{0}$$. Define the operator $$T_{\beta}:\Omega\longrightarrow\Omega$$ by

$$T_{\beta}(g) (t)=g\bigl(\beta(t)\bigr), \quad t\in I.$$

Let Y be the range of $$\mathcal{I}-T_{\beta}$$, where $$\mathcal{I}$$ is the identity operator. One can check that for $$g\in Y$$ the series $$\sum_{k=0}^{\infty}g(\beta^{k}(t))$$ is uniformly convergent on I. Clearly, the operator $$\mathcal{I}-T_{\beta}$$ is one-to-one.

We need the following lemma in proving the next theorem. Its proof is straightforward, so it will be omitted.

Lemma 3.2

The operator $$A:Y\longrightarrow\Omega$$ defined by

$$A(g) (t)=\sum_{k=0}^{\infty}g \bigl(\beta^{k}(t)\bigr)$$
(3.1)

is the inverse of the operator $$\mathcal{I}-T_{\beta}$$.

Theorem 3.3

Assume $$f:{I}\to\mathbb{X}$$ is continuous at $$s_{0}$$. Then the function F defined by

$$F(t)=\sum_{k=0}^{\infty} \bigl( \beta^{k}(t)-\beta^{k+1}(t) \bigr)f\bigl(\beta ^{k}(t)\bigr),\quad t\in{I}$$
(3.2)

is a β-antiderivative of f with $$F(s_{0})=0$$. Conversely, a β-antiderivative F of f vanishing at $$s_{0}$$ is given by formula (3.2).

Proof

For all $$t\in{I}$$ and $$t\neq{s_{0}}$$, we have

\begin{aligned} D_{\beta}{F(t)} &=\frac{F(\beta(t))-F(t)}{\beta(t)-t} \\ &=\frac{\sum^{\infty}_{k=0} (\beta^{k+1}(t)-\beta^{k+2}(t) )f (\beta^{k+1}(t) )-\sum^{\infty}_{k=0} (\beta^{k}(t)-\beta ^{k+1}(t) )f (\beta^{k}(t) )}{\beta(t)-t} \\ &=f(t). \end{aligned}

To show that $$D_{\beta}{F(s_{0})}=f(s_{0})$$, let $$\epsilon>0$$. By the continuity of $$f(t)$$ at $$t=s_{0}$$, there is $$\delta>0$$ such that

$$\bigl\Vert f \bigl(\beta^{k}(s_{0}+h) \bigr)-f(s_{0})\bigr\Vert < \epsilon,\quad k\geq0, 0< h< \delta.$$

This implies

\begin{aligned} \biggl\Vert \frac{1}{h}F(s_{0}+h)-f(s_{0})\biggr\Vert &\leq\sum^{\infty }_{k=0} \frac{1}{h} \bigl(\beta^{k}(s_{0}+h)- \beta^{k+1}(s_{0}+h) \bigr) \bigl\Vert f \bigl(\beta ^{k}(s_{0}+h) \bigr)-f(s_{0})\bigr\Vert \\ &< \epsilon,\quad 0< h< \delta. \end{aligned}

Conversely, assume that F is a β-antiderivative of f vanishing at $$s_{0}$$. This implies that

\begin{aligned} f(t)&=D_{\beta}{F(t)} =\frac{F(\beta(t))-F(t)}{\beta(t)-t} \\ &=\frac{T_{\beta}(F(t))-F(t)}{\beta(t)-t} \\ &=\frac{(\mathcal{I}-T_{\beta})F(t)}{t-\beta(t)}. \end{aligned}

Then $$f(t)(t-\beta(t))=(\mathcal{I}-T_{\beta})F(t)$$, which implies that $$F(t)=(\mathcal{I}-T_{\beta})^{-1} (t-\beta(t) )f(t)$$. The function $$G(t)=(t-\beta(t))f(t)$$ belongs to Ω and $$F(t)=(\mathcal{I}-T_{\beta})^{-1}G(t)$$. By Lemma 3.2, we have

$$F(t)=\sum_{k=0}^{\infty}G\bigl( \beta^{k}(t)\bigr)=\sum_{k=0}^{\infty} \bigl(\beta ^{k}(t)-\beta^{k+1}(t) \bigr)f\bigl( \beta^{k}(t)\bigr).$$
(3.3)

□

Definition 3.4

Let $$f:{I}\longrightarrow{\mathbb{X}}$$ and $$a,b\in{I}$$. We define the β-integral of f from a to b by

$$\int^{b}_{a}f(t)\, d_{\beta}{t}=\int^{b}_{s_{0}}f(t)\, d_{\beta}{t}-\int^{a}_{s_{0}}f(t)\, d_{\beta}{t},$$
(3.4)

where

$$\int^{x}_{s_{0}}f(t)\, d_{\beta}{t}=\sum^{\infty}_{k=0} \bigl( \beta ^{k}(x)-\beta^{k+1}(x) \bigr)f\bigl( \beta^{k}(x)\bigr),\quad x\in{I},$$
(3.5)

provided that the series converges at $$x=a$$ and $$x=b$$. The function f is called β-integrable on I if the series converges at every $$x\in{I}$$. Clearly, if f is continuous at $$s_{0}\in{I}$$, then f is β-integrable on I.
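Formulas (3.4) and (3.5) can be sketched by truncating the series; the implementation below assumes f is continuous at s0 (so the series converges) and uses an illustrative Hahn-type β of ours.

```python
# Truncated-series sketch of the beta-integral (3.4)-(3.5).
q, w = 0.5, 1.0
beta = lambda t: q * t + w            # illustrative Hahn-type beta, s0 = 2

def int_from_s0(f, x, terms=200):
    """approximate the series (3.5) for the integral from s0 to x."""
    total, t = 0.0, x
    for _ in range(terms):
        total += (t - beta(t)) * f(t) # (beta^k(x) - beta^{k+1}(x)) f(beta^k(x))
        t = beta(t)
    return total

def beta_integral(f, a, b, terms=200):
    """formula (3.4): integral from a to b."""
    return int_from_s0(f, b, terms) - int_from_s0(f, a, terms)

# for f = 1 the series telescopes to (b - s0) - (a - s0) = b - a
assert abs(beta_integral(lambda t: 1.0, 0.0, 5.0) - 5.0) < 1e-12
```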

In the integral formulas (3.4) and (3.5), when $$\beta (t)=qt$$, $$q\in(0,1)$$, we obtain the Jackson q-integral

$$\int_{a}^{b} f(t)\, d_{q} t := \int_{0}^{b} f(t) \, d_{q} t - \int_{0}^{a} f(t)\, d_{q} t,$$
(3.6)

where

$$\int_{0}^{x} f(t) \, d_{q} t := x(1-q) \sum_{k=0}^{\infty} q^{k} f \bigl(xq^{k}\bigr),\quad x\in I,$$
(3.7)

see [3, 10–12]. If $$\beta(t)=qt+\omega$$, $$q\in(0,1)$$, $$\omega>0$$, then (3.4) and (3.5) reduce to the Hahn integral

$$\int_{a} ^{b} f(t)\, d_{q,\omega} t:= \int_{\omega_{0}}^{b} f(t)\, d_{q,\omega} t- \int_{\omega_{0}}^{a} f(t) \, d_{q,\omega} t,$$
(3.8)

where

$$\int_{\omega_{0}} ^{x} f(t)\, d_{q,\omega} t:= \bigl( x (1-q) - \omega \bigr) \sum _{k=0}^{\infty} q^{k} f\bigl(xq^{k}+ \omega[k]_{q}\bigr),\quad x\in I,$$
(3.9)

where $$\omega_{0}= {\frac{\omega}{1-q}}$$ and $$[k]_{q}= {\frac {1-q^{k}}{1-q}}$$; see [2, 4–6, 10, 12].

Lemma 3.5

Let $$f:I\longrightarrow\mathbb{X}$$ be β-integrable on I and let $$a,b,c\in{I}$$. Then the following statements are true:

(i)

The β-integral is a linear operator.

(ii)

$$\int^{a}_{a}f(t)\, d_{\beta}{t}=0$$.

(iii)

$$\int^{b}_{a}{f}(t)\, d_{\beta}{t}=-\int^{a}_{b}{f}(t)\, d_{\beta}{t}$$.

(iv)

$$\int^{b}_{a}f(t)\, d_{\beta}{t}=\int^{c}_{a}f(t)\, d_{\beta }{t}+\int^{b}_{c}f(t)\, d_{\beta}{t}$$.

Proof

The proof is straightforward. □

By Theorem 3.3, we obtain the first fundamental theorem of β-calculus which is stated as follows.

Theorem 3.6

Let $$f:I\longrightarrow\mathbb{X}$$ be continuous at $$s_{0}$$. Define the function

$$F(x)=\int^{x}_{s_{0}}f(t) \, d_{\beta}{t},\quad x\in{I}.$$
(3.10)

Then F is continuous at $$s_{0}$$, $$D_{\beta}{F(x)}$$ exists for all $$x\in{I}$$ and $$D_{\beta}{F(x)}=f(x)$$.

Corollary 3.7

If $$f:{I}\longrightarrow\mathbb{X}$$ is continuous at $$s_{0}$$, then

$$\int_{t}^{\beta(t)}f(\tau)\, d_{\beta}{\tau}=\bigl(\beta(t)-t\bigr)f(t), \quad t\in{I}.$$
(3.11)

Proof

Let $$F(t)=\int_{s_{0}}^{t}f(\tau)\, d_{\beta}{\tau}$$, $$t\in{I}$$. By Theorem 3.6, $$F(t)$$ is continuous at $$s_{0}$$ and $$D_{\beta}{F(t)}=f(t)$$ for all $$t\in{I}$$. Then

\begin{aligned} \int_{t}^{\beta(t)}f(\tau)\, d_{\beta}{\tau}&= \int_{s_{0}}^{\beta(t)}f(\tau )\, d_{\beta}{\tau}- \int_{s_{0}}^{t}f(\tau)\, d_{\beta}{\tau} \\ &=F\bigl(\beta(t)\bigr)-F(t). \end{aligned}

Since $$F(\beta(t))=F(t)+(\beta(t)-t)D_{\beta}{F(t)}$$, it follows that

$$\int_{t}^{\beta(t)}f(\tau)\, d_{\beta}{\tau}= \bigl(\beta(t)-t\bigr)f(t), \quad t\in{I}.$$

□

Now, we state and prove the second fundamental theorem of β-calculus.

Theorem 3.8

If $$f:I\longrightarrow\mathbb{X}$$ is β-differentiable on I, then

$$\int^{b}_{a}D_{\beta}{f}(t) \, d_{\beta}t=f(b)-f(a) \quad \textit{for all } {a,b}\in{I}.$$
(3.12)

Proof

We have

\begin{aligned} \int^{b}_{s_{0}}D_{\beta}{f}(t)\, d_{\beta}{t} &=\sum^{\infty}_{k=0} \bigl( \beta^{k}(b)-\beta^{k+1}(b) \bigr) (D_{\beta}{f} ) \bigl(\beta^{k}(b)\bigr) \\ &=\sum^{\infty}_{k=0}\bigl(f\bigl( \beta^{k}(b)\bigr)-f\bigl(\beta^{k+1}(b)\bigr)\bigr) \\ &=\lim_{n\rightarrow{\infty}}\sum^{n}_{k=0} \bigl(f\bigl(\beta^{k}(b)\bigr)-f\bigl(\beta ^{k+1}(b)\bigr) \bigr) \\ &=f(b)-f(s_{0}). \end{aligned}

Similarly,

$$\int^{a}_{s_{0}}D_{\beta}{f}(t)\, d_{\beta}{t}=f(a)-f(s_{0}).$$

Therefore,

$$\int^{b}_{a}D_{\beta}{f}(t)\, d_{\beta}{t}=f(b)-f(a)\quad \text{for all } {a,b}\in{I}.$$

□
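The theorem lends itself to a quick numerical check (our own sketch; β, f, the interval, and the truncation are assumptions, with $$s_{0}=0$$).

```python
import math

def beta(t):
    return t / 2 + math.sin(t) / 4   # assumed beta, fixed point s0 = 0

def beta_integral(f, a, b, terms=300):
    def from_s0(x):                  # truncated series (3.5)
        total, bk = 0.0, x
        for _ in range(terms):
            bk1 = beta(bk)
            total += (bk - bk1) * f(bk)
            bk = bk1
        return total
    return from_s0(b) - from_s0(a)   # definition (3.4)

def D_beta(f, t):
    return (f(beta(t)) - f(t)) / (beta(t) - t)

f = math.exp
a, b = -0.5, 1.2
# both sides of (3.12)
print(beta_integral(lambda t: D_beta(f, t), a, b), f(b) - f(a))
```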

As a direct consequence of Theorem 3.6, the following theorem holds.

Theorem 3.9

If $$f:I\longrightarrow\mathbb{X}$$ is continuous at $$s_{0}$$ and $$\Phi :I\longrightarrow\mathbb{X}$$ is a β-antiderivative of f on I, then for $$a,b\in{I}$$, we have

$$\int^{b}_{a}f(t)\, d_{\beta}{t}=\Phi(b)- \Phi(a).$$

The following theorem establishes the β-integration by parts formula. The proof is straightforward, so it is omitted.

Theorem 3.10

Assume f, g are β-differentiable functions on I and that $$D_{\beta }f$$, $$D_{\beta}g$$ are both continuous at $$s_{0}$$. Then

$$\int^{b}_{a}f(t)D_{\beta} {g(t)}\, d_{\beta}{t}=f(b)g(b)-f(a)g(a)-\int^{b}_{a} \bigl(D_{\beta}{f(t)}\bigr)g\bigl(\beta(t)\bigr)\, d_{\beta}{t}, \quad a,b\in{I}.$$

Here, at least one of the functions f and g is real-valued.
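A numerical sanity check of the formula (our own sketch; β, the real-valued test functions, and the truncation level are assumptions, with $$s_{0}=0$$):

```python
import math

def beta(t):
    return t / 2 + math.sin(t) / 4   # assumed beta, fixed point s0 = 0

def from_s0(f, x, terms=300):
    total, bk = 0.0, x               # truncated series (3.5)
    for _ in range(terms):
        bk1 = beta(bk)
        total += (bk - bk1) * f(bk)
        bk = bk1
    return total

def beta_integral(f, a, b):
    return from_s0(f, b) - from_s0(f, a)

def D_beta(f, t):
    return (f(beta(t)) - f(t)) / (beta(t) - t)

f, g = (lambda t: t), math.cos
a, b = -0.4, 1.1
lhs = beta_integral(lambda t: f(t) * D_beta(g, t), a, b)
rhs = (f(b) * g(b) - f(a) * g(a)
       - beta_integral(lambda t: D_beta(f, t) * g(beta(t)), a, b))
print(lhs, rhs)  # the two sides agree
```

The check works because $$D_{\beta}(fg)(t)=f(t)D_{\beta}g(t)+D_{\beta}f(t)\, g(\beta(t))$$ holds pointwise, and Theorem 3.8 then yields the boundary terms.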

The following two lemmas and Definition 3.12 are fundamental in the study of the calculus of variations. The first is based originally on [14], Lemma 12.1 and the second on [15], Lemma 2.2. Both are adapted in [6] for the case of Hahn’s function $$\beta (t)= qt+\omega$$, $$q\in(0,1)$$, $$\omega>0$$. Here, following [6], we show that both lemmas are valid for the case of our general function $$\beta(t)$$.

Let D denote the set of all real-valued functions defined on $${[c,d]}_{\beta}$$ and continuous at $${s_{0}}$$, where $$c,d\in{I}$$ and $$c< d$$.

Lemma 3.11

Let $${f}\in{D}$$. Then $${\int}^{d}_{c}f(t)h(\beta(t))\, {d}_{\beta}t=0$$ for all functions $${h}\in{D}$$ with $$h(c)=h(d)=0$$ if and only if $$f(t)=0$$ for all $$t\in{[c,d]}_{\beta}$$.

Proof

It is obvious from the definition of β-integration that if $$f(t)=0$$ for all $$t\in{[c,d]}_{\beta}$$, then $${\int}^{d}_{c}f(t)h (\beta(t))\, {d}_{\beta}t=0$$. To prove the other implication, assume on the contrary that there is some $$l\in{[c,d]}_{\beta}$$ such that $$f(l)\neq{0}$$. We have the following two cases.

Case I: $${l}\neq{s_{0}}$$. Then either $${l}= {\beta}^{k}(c)$$ or $${l}= {\beta}^{k}(d)$$ for some $${k}\in\mathbb{N}_{0}$$. First, assume that $${l}= {\beta}^{k}(c)$$ for some $${k}\in\mathbb{N}_{0}$$. Define

$$h(t)=\left \{ \textstyle\begin{array}{l@{\quad}l} f(l),&t=\beta{(l)}, \\ 0,&\text{otherwise}. \end{array}\displaystyle \right .$$

Clearly, $$h\in{D}$$ with $$h(c)=h(d)=0$$. Then

\begin{aligned} \int^{d}_{c}f(t)h\bigl(\beta(t)\bigr)\, {d}_{\beta}t &=\int^{d}_{s_{0}}f(t)h \bigl(\beta(t)\bigr)\, {d}_{\beta}t-\int^{c}_{s_{0}}f(t) h \bigl(\beta(t)\bigr)\, {d}_{\beta}t \\ &=-\bigl(l-{\beta}(l)\bigr)f^{2}(l)\neq0. \end{aligned}

The case $${l}= {\beta}^{k}(d)$$ can be treated similarly.

Case II: $${l}={s_{0}}$$. Then $$f(s_{0})\neq0$$; without loss of generality, assume $$f(s_{0})>0$$.

The continuity of f at $$s_{0}$$ implies $$\lim_{k\rightarrow{\infty}}f({\beta}^{k}(c))=\lim_{k\rightarrow{\infty }}f({\beta}^{k}(d))=f(s_{0})$$.

Consequently, there exists $${k_{0}}\in\mathbb{N}$$ such that $$f({\beta}^{k}(c))>0$$ and $$f({\beta}^{k}(d))>0$$ for all $${k}>{k_{0}}$$.

If $$s_{0}\notin\{c,d\}$$, we define h by

$$h(t)=\left \{ \textstyle\begin{array}{l@{\quad}l} f({\beta}^{k}(c)),& t={\beta}^{k+1}(c)\text{ for all } k>k_{0}, \\ f({\beta}^{k}(d)),& t={\beta}^{k+1}(d) \text{ for all } k>k_{0}, \\ 0,& \text{otherwise}. \end{array}\displaystyle \right .$$

Hence,

\begin{aligned} \int^{d}_{c}f(t)h\bigl(\beta(t)\bigr)\, {d}_{\beta}t =&\sum^{\infty}_{k=k_{0}+1} \bigl({\beta}^{k}(d)-{\beta}^{k+1}(d) \bigr)f^{2}\bigl({\beta}^{k}(d)\bigr) \\ &{}-\sum^{\infty}_{k=k_{0}+1} \bigl({\beta}^{k}(c)-{\beta}^{k+1}(c) \bigr)f^{2}\bigl({\beta}^{k}(c)\bigr)\neq0. \end{aligned}

For $$s_{0}=c$$, we define h by

$$h(t)=\left \{ \textstyle\begin{array}{l@{\quad}l} f({\beta}^{k}(d)), &t={\beta}^{k+1}(d)\text{ for all } k>k_{0}, \\ 0,&\text{otherwise}. \end{array}\displaystyle \right .$$

Therefore,

$$\int^{d}_{c}f(t)h\bigl(\beta(t)\bigr)\, {d}_{\beta}t=\sum^{\infty}_{k=k_{0}+1} \bigl({\beta}^{k}(d)-{\beta}^{k+1}(d) \bigr)f^{2}\bigl({\beta}^{k}(d)\bigr)\neq0.$$

The case $$s_{0}=d$$ can be treated similarly. □

Definition 3.12

([15])

Let $$g:[r]_{\beta}\times\, ]-\tilde{\theta},\tilde{\theta}[\, \longrightarrow \mathbb{R}$$. We say that $$g(t,\cdot)$$ is continuous in $$\theta_{0}$$ uniformly in t iff for every $$\epsilon>0$$, there exists $$\delta>0$$ such that $$\vert \theta-\theta_{0}\vert<\delta$$ implies $$\vert{g(t,\theta)-g(t,\theta _{0})}\vert<\epsilon$$ for all $$t\in[r]_{\beta}$$. Furthermore, we say that $$g(t,\cdot)$$ is differentiable at $$\theta_{0}$$ uniformly in t iff for every $$\epsilon>0$$ there exists $$\delta>0$$ such that $$0<\vert\theta-\theta _{0}\vert<\delta$$ implies $$|{\frac{g(t,\theta)-g(t,\theta _{0})}{(\theta-\theta_{0})}}-g_{\theta}(t,{\theta_{0}}) |<\epsilon$$ for all $$t\in[r]_{\beta}$$.

Lemma 3.13

Assume $$g(t,\cdot)$$ is differentiable at $$\theta_{0}$$, uniformly in t for all $$t\in[r]_{\beta}$$ and that $$G(\theta)=\int^{r}_{s_{0}}g(t,\theta )\, d_{\beta}t$$ for θ in a neighborhood of $$\theta_{0}$$ and $$\int^{r}_{s_{0}}g_{\theta}(t,{\theta_{0}})\, d_{\beta}t$$ exists. Then $$G(\theta )$$ is differentiable at $$\theta_{0}$$ with $$G'(\theta_{0})=\int^{r}_{s_{0}}g_{\theta}(t,{\theta_{0}})\, d_{\beta}t$$.

Proof

Since $$g(t,\cdot)$$ is differentiable at $$\theta_{0}$$ uniformly in t, then for every $$\epsilon>0$$, there exists $$\delta>0$$ such that for all $$t\in [r]_{\beta}$$ and for $$0<\vert\theta-\theta_{0}\vert<\delta$$, the following inequalities hold:

\begin{aligned}& \biggl\vert {\frac{g(t,\theta)-g(t,\theta_{0})}{\theta-\theta_{0}}}-g_{\theta }(t,{\theta_{0}}) \biggr\vert < \frac{\epsilon}{r-s_{0}}, \\& \biggl\vert {\frac{G(\theta)-G(\theta_{0})}{\theta-\theta_{0}}}- \int^{r}_{s_{0}}g_{\theta}(t,{\theta_{0}})\, d_{\beta}t \biggr\vert \leq \int^{r}_{s_{0}}\biggl\vert \frac{g(t,\theta)- g(t,\theta_{0})}{\theta -\theta_{0}}-g_{\theta}(t,{ \theta_{0}}) \biggr\vert \, d_{\beta}t \\& \quad < \int^{r}_{s_{0}}\frac{\epsilon}{r-s_{0}} \, d_{\beta}t =\epsilon. \end{aligned}

Hence, $$G(\cdot)$$ is differentiable at $$\theta_{0}$$ and $$G'(\theta_{0})=\int^{r}_{s_{0}}g_{\theta}(t,{\theta_{0}})\, d_{\beta}t$$. □
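The lemma can be illustrated numerically (our own sketch; $$g(t,\theta)=\sin(\theta t)$$, β, r, the step size, and the truncation are assumptions): a central difference quotient for $$G'(\theta_{0})$$ should match the β-integral of $$g_{\theta}(t,\theta_{0})=t\cos(\theta_{0}t)$$.

```python
import math

def beta(t):
    return t / 2 + math.sin(t) / 4   # assumed beta, fixed point s0 = 0

def from_s0(f, x, terms=300):
    total, bk = 0.0, x               # truncated series (3.5)
    for _ in range(terms):
        bk1 = beta(bk)
        total += (bk - bk1) * f(bk)
        bk = bk1
    return total

r, theta0, h = 1.0, 0.7, 1e-6
G = lambda th: from_s0(lambda t: math.sin(th * t), r)
central_diff = (G(theta0 + h) - G(theta0 - h)) / (2 * h)
integral_of_derivative = from_s0(lambda t: t * math.cos(theta0 * t), r)
print(central_diff, integral_of_derivative)  # the two values agree closely
```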

Following [2], Lemma 4.3, we show that their results hold for our general function $$\beta(t)$$.

Lemma 3.14

Let $$f:{I}\longrightarrow\mathbb{X}$$, $$g:{I}\longrightarrow\mathbb{R}$$ be β-integrable functions on I. If

$$\bigl\Vert f(t)\bigr\Vert \leq{g(t)} \quad \textit{for all } t \in[a,b]_{\beta}, a,b\in {I} \textit{ and }a\leq{b},$$

then for $$x,y\in[a,b]_{\beta}$$, $$x< s_{0}< y$$, we have

\begin{aligned}& \biggl\Vert \int^{y}_{s_{0}}f(t)\, d_{\beta}t\biggr\Vert \leq\int^{y}_{s_{0}}g(t) \, d_{\beta}t, \end{aligned}
(3.13)
\begin{aligned}& \biggl\Vert \int^{x}_{s_{0}}f(t)\, d_{\beta}t\biggr\Vert \leq{-}\int^{x}_{s_{0}}g(t) \, d_{\beta}t \end{aligned}
(3.14)

and

$$\biggl\Vert \int^{y}_{x}f(t)\, d_{\beta}t\biggr\Vert \leq\int^{y}_{x}g(t) \, d_{\beta}t.$$
(3.15)

Consequently, if $$g(t)\geq{0}$$ for all $$t\in[a,b]_{\beta}$$, then the inequalities $$\int^{y}_{s_{0}}g(t)\, d_{\beta}t\geq{0}$$ and $$\int^{y}_{x}g(t)\, d_{\beta }t\geq{0}$$ hold for all $$x,y\in[a,b]_{\beta}$$, $$x< s_{0}< y$$.

Proof

Since $$y>s_{0}$$, we have $$\beta^{k+1}(y)<\beta^{k}(y)$$ for all $$k\in\mathbb{N}_{0}$$, $$y\in[a,b]_{\beta}$$. Hence

\begin{aligned} \biggl\Vert \int^{y}_{s_{0}}f(t)\, d_{\beta}t\biggr\Vert \leq&\sum^{\infty}_{k=0} \bigl(\beta^{k}(y)-\beta^{k+1}(y) \bigr)\bigl\Vert f\bigl( \beta^{k}(y)\bigr)\bigr\Vert \\ \leq&\sum^{\infty}_{k=0} \bigl( \beta^{k}(y)-\beta^{k+1}(y) \bigr)g\bigl(\beta ^{k}(y)\bigr) \\ =&\int^{y}_{s_{0}}g(t)\, d_{\beta}t. \end{aligned}

Similarly, we can prove inequality (3.14). Also, if $$x,y\in[a,b]_{\beta}$$ and $$x< s_{0}< y$$, then there exist $$k_{1},k_{2}\in{\mathbb{N}}$$ such that $$x=\beta^{k_{2}}(a)$$ and $$y=\beta^{k_{1}}(b)$$. We conclude that

\begin{aligned} \biggl\Vert \int^{y}_{x}f(t)\, d_{\beta}t\biggr\Vert =&\Biggl\Vert \sum ^{\infty}_{k=k_{1}} \bigl[ \bigl(\beta^{k}(y)- \beta^{k+1}(y) \bigr)f\bigl(\beta^{k}(y)\bigr) \bigr] \\ &{}-\sum^{\infty}_{k=k_{2}} \bigl[ \bigl( \beta^{k}(x)-\beta^{k+1}(x) \bigr)f\bigl(\beta ^{k}(x)\bigr) \bigr]\Biggr\Vert \\ \leq&\sum^{\infty}_{k=0} \bigl( \beta^{k+k_{1}}(y)-\beta^{k+k_{1}+1}(y) \bigr)\bigl\Vert f\bigl( \beta^{k+k_{1}}(y)\bigr)\bigr\Vert \\ &{}+\sum^{\infty}_{k=0} \bigl( \beta^{k+k_{2}+1}(x)-\beta^{k+k_{2}}(x) \bigr) \bigl\Vert f\bigl( \beta^{k+k_{2}}(x)\bigr)\bigr\Vert \\ \leq&\sum^{\infty}_{k=0} \bigl( \beta^{k+k_{1}}(y)-\beta^{k+k_{1}+1}(y) \bigr)g\bigl(\beta^{k+k_{1}}(y) \bigr) \\ &{}-\sum^{\infty}_{k=0} \bigl( \beta^{k+k_{2}}(x)-\beta^{k+k_{2}+1}(x) \bigr)g\bigl(\beta^{k+k_{2}}(x) \bigr) \\ =&\int^{y}_{s_{0}}g(t)\, d_{\beta}t-\int ^{x}_{s_{0}}g(t)\, d_{\beta}t=\int ^{y}_{x}g(t)\, d_{\beta}t. \end{aligned}

Putting $$f(t)=0$$ in (3.13) and (3.15), we get $$\int^{y}_{s_{0}}g(t)\, d_{\beta}t\geq{0}$$ and $$\int^{y}_{x}g(t)\, d_{\beta}t\geq {0}$$. □

Lemma 3.15

Let $$f:{I}\longrightarrow\mathbb{X}$$ and $$g:{I}\longrightarrow\mathbb {R}$$ be β-differentiable on I. If

$$\bigl\Vert D_{\beta}{f(t)}\bigr\Vert \leq{D_{\beta}}g(t), \quad t\in[a,b]_{\beta}, a,b\in{I} \textit{ and } a\leq{b},$$

then

$$\bigl\Vert f(y)-f(x)\bigr\Vert \leq{g(y)-g(x)}$$
(3.16)

for every $$x,y\in[a,b]_{\beta}$$, $$x< s_{0}< y$$.

Proof

Assume $$\|D_{\beta}{f(t)}\|\leq{D_{\beta}}g(t)$$, $$t\in[a,b]_{\beta}$$, $$a,b\in {I}$$, $$a\leq{b}$$. By Theorem 3.8 and Lemma 3.14 we obtain

$$\biggl\Vert \int^{y}_{x}D_{\beta}{f(t)} \, d_{\beta}{t}\biggr\Vert \leq\int^{y}_{x}D_{\beta}{g(t)}\, d_{\beta}{t},$$

and hence

$$\bigl\Vert f(y)-f(x)\bigr\Vert \leq{g(y)-g(x)}.$$

□

We are now in a position to prove Theorem 2.19.

Proof of Theorem 2.19

Define the function g by $$g(t)={\sup_{\tau\in{I}}\|D_{\beta}{f(\tau)}\|(t-x)}$$. We have $$D_{\beta}{g(t)}=\sup_{\tau\in{I}}\|D_{\beta}{f(\tau)}\|\geq\sup_{\tau\in {[a,b]_{\beta}}}\|D_{\beta}{f(\tau)}\|\geq{\|D_{\beta}{f(t)}\|}$$, $$t\in [a,b]_{\beta}$$. Then, by Lemma 3.15,

$$\bigl\Vert f(y)-f(x)\bigr\Vert \leq{g(y)-g(x)}=\sup_{t\in{I}} \bigl\Vert D_{\beta}{f(t)}\bigr\Vert (y-x).$$

□
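The resulting mean value inequality $$\Vert f(y)-f(x)\Vert \leq\sup_{t\in I}\Vert D_{\beta}f(t)\Vert (y-x)$$ can be checked numerically. In the sketch below (ours; β and f are assumptions), the supremum is approximated over the orbit points of x and y, a subset of $$[x,y]_{\beta}$$, which only makes the tested bound tighter.

```python
import math

def beta(t):
    return t / 2 + math.sin(t) / 4   # assumed beta, fixed point s0 = 0

def D_beta(f, t):
    return (f(beta(t)) - f(t)) / (beta(t) - t)

f = math.sin
x, y = -0.5, 0.7                     # x < s0 = 0 < y, as in Lemma 3.15
orbit = []
for start in (x, y):
    t = start
    for _ in range(100):             # orbit points beta^k(x), beta^k(y)
        orbit.append(t)
        t = beta(t)
sup_D = max(abs(D_beta(f, t)) for t in orbit)
print(abs(f(y) - f(x)) <= sup_D * (y - x))  # True
```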

4 Conclusion and perspectives

In this paper, we presented a general quantum difference operator $${D}_{\beta}f(t)= \frac{f(\beta(t))-f(t)}{\beta(t)-t}$$, where β is a strictly increasing continuous function defined on $$I\subseteq {\mathbb{R}}$$ with exactly one fixed point $$s_{0}\in{I}$$. This operator yields the Hahn difference operator when $$\beta(t)=qt+\omega$$, $$\omega>0$$, $$q\in(0,1)$$ are fixed real numbers, and the Jackson q-difference operator when $$\beta(t)=qt$$, $$q\in(0,1)$$. A calculus based on this operator and its inverse was established, including, for instance, the chain rule, Leibniz' formula and the mean value theorem.

There is still a lot of work ahead of us. In one direction, we should establish existence and uniqueness results for solutions of difference equations based on $$D_{\beta}$$ (β-difference equations). Another direction is to establish the theory of linear quantum difference equations associated with $$D_{\beta}$$. Finally, we should ask about the stability of such equations. The theory of β-difference equations allows us to avoid proving results more than once: once for q-difference equations, once for Hahn difference equations, and once for each particular choice of β.

References

1. Aldwoah, KA, Malinowska, AB, Torres, DFM: The power quantum calculus and variational problems. Dyn. Contin. Discrete Impuls. Syst., Ser. B, Appl. Algorithms 19, 93-116 (2012)

2. Annaby, MH, Hamza, AE, Aldwoah, KA: Hahn difference operator and associated Jackson-Nörlund integrals. J. Optim. Theory Appl. 154, 133-153 (2012)

3. Annaby, MH, Mansour, ZS: q-Fractional Calculus and Equations. Springer, Berlin (2012)

4. Hamza, AE, Ahmed, SM: Existence and uniqueness of solutions of Hahn difference equations. Adv. Differ. Equ. 2013, 316 (2013)

5. Hamza, AE, Ahmed, SM: Theory of linear Hahn difference equations. J. Adv. Math. 4(2), 441-461 (2013)

6. Malinowska, AB, Torres, DFM: The Hahn quantum variational calculus. J. Optim. Theory Appl. 147, 419-442 (2010)

7. Brito da Cruz, AMC: Symmetric Quantum Calculus. PhD thesis, Aveiro University (2012)

8. Tariboon, J, Ntouyas, SK: Quantum calculus on finite intervals and applications to impulsive difference equations. Adv. Differ. Equ. 2013, 282 (2013)

9. Hahn, W: Über Orthogonalpolynome, die q-Differenzengleichungen genügen. Math. Nachr. 2, 4-34 (1949)

10. Aldwoah, KA: Generalized Time Scales and the Associated Difference Equations. PhD thesis, Cairo University (2009)

11. Bangerezako, G: An Introduction to q-Difference Equations, Bujumbura (2008)

12. Kac, V, Cheung, P: Quantum Calculus. Springer, New York (2002)

13. Auch, TJ: Development and Application of Difference and Fractional Calculus on Discrete Time Scales. PhD thesis, University of Nebraska-Lincoln (2013)

14. Curtain, RF, Pritchard, AJ: Functional Analysis in Modern Applied Mathematics. Elsevier, Amsterdam (1977)

15. Bohner, M: Calculus of variations on time scales. Dyn. Syst. Appl. 13, 339-349 (2004)

Acknowledgements

The authors sincerely thank the referees for their valuable suggestions and comments.

Author information

Corresponding author

Correspondence to Alaa E Hamza.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.


Hamza, A.E., Sarhan, AS.M., Shehata, E.M. et al. A general quantum difference calculus. Adv Differ Equ 2015, 182 (2015). https://doi.org/10.1186/s13662-015-0518-3