- Research Article
- Open Access
Solution to a Function Equation and Divergence Measures
Advances in Difference Equations volume 2011, Article number: 617564 (2011)
Abstract
We investigate the solution to the following functional equation, which arises from the theory of divergence measures. Moreover, new results on divergence measures are given.
1. Introduction
As early as 1952, Chernoff [1] used a divergence measure to evaluate classification errors. Since then, the study of various divergence measures has attracted many researchers. It is now known that the Csiszár f-divergence is the unique class of divergences having information monotonicity, from which the dual geometrical structure with the Fisher metric is derived, and that the Bregman divergence is another class of divergences, one that gives a dually flat geometrical structure differing in general from the former. Indeed, a divergence measure between two probability distributions or positive measures has proved to be a useful tool for solving problems in optimization, signal processing, machine learning, and statistical inference. For more information on the theory of divergence measures, see, for example, [2–5] and the references therein.
Motivated by these studies, we investigate in this paper the solution to the following functional equation

which arises from the theory of divergence measures, and show that if the functions involved satisfy

then the unknown function is a solution of a linear homogeneous differential equation with constant coefficients. Moreover, new results on divergence measures are given.
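For orientation, recall the standard form of such solutions (a textbook fact, stated in notation of our own choosing rather than the authors'): if a function y satisfies a linear homogeneous differential equation with constant coefficients whose characteristic polynomial has distinct roots λ_1, …, λ_k with multiplicities m_1, …, m_k, then

$$
y(t) \;=\; \sum_{j=1}^{k} p_j(t)\, e^{\lambda_j t}, \qquad \deg p_j \le m_j - 1,
$$

so the solutions described in Theorem 2.1 below are exponential polynomials (with real or complex exponents).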
Throughout this paper, ℝ denotes the set of real numbers, and the underlying domain is assumed to be a convex set.
Basic notations:
- is strictly convex and twice differentiable;
- is a differentiable injective map;
- is the general vector Bregman divergence;
- is a strictly convex, twice continuously differentiable function satisfying ;
- is the vector -divergence.
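For orientation, here is a minimal sketch of the standard forms of these two divergence classes, written in notation of our own choosing and possibly differing in detail from the definitions used here:

$$
B_\Phi(x, y) \;=\; \Phi(x) - \Phi(y) - \bigl\langle \nabla \Phi(y),\, x - y \bigr\rangle,
\qquad
D_f(x \,\|\, y) \;=\; \sum_{i} y_i\, f\!\left(\frac{x_i}{y_i}\right),
$$

where $\Phi$ is strictly convex and differentiable and $f$ is convex with $f(1) = 0$.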
If for every ,

then we say the or
is in the intersection of
-divergence and general Bregman divergence.
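As an illustration of this intersection, the following minimal sketch (ours, not taken from the paper) numerically checks that the generalized Kullback–Leibler divergence on positive vectors arises both as a Bregman divergence, generated by Φ(x) = Σ_i (x_i log x_i − x_i), and as a Csiszár-type divergence with generator f(t) = t log t − t + 1; the function names and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def bregman_kl(x, y):
    # Bregman divergence generated by Phi(x) = sum(x*log(x) - x);
    # since grad Phi(y) = log(y), the general formula
    # Phi(x) - Phi(y) - <grad Phi(y), x - y> simplifies to the sum below.
    return np.sum(x * np.log(x / y) - x + y)

def f_div_kl(x, y):
    # Csiszar-type divergence sum_i y_i * f(x_i / y_i) with
    # f(t) = t*log(t) - t + 1, which is convex and vanishes at t = 1.
    t = x / y
    return np.sum(y * (t * np.log(t) - t + 1))

x = np.array([0.2, 0.5, 1.3])
y = np.array([0.4, 0.9, 0.7])
print(bregman_kl(x, y), f_div_kl(x, y))  # the two values agree
```

Both expressions reduce to Σ_i (x_i log(x_i / y_i) − x_i + y_i), so the generalized KL divergence is the classical example lying in both classes.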
For more information on some basic concepts of divergence measures, we refer the reader to, for example, [2–5] and references therein.
2. Main Results
Theorem 2.1.
Assume that there are differentiable functions

and such that

Then and

for some .
Proof.
Since the functions above are differentiable, it is clear that

Let

Then is a finite-dimensional space. So we can find differentiable functions

as an orthonormal basis of , where
. Observing that

where

we have

Clearly,

Next we prove that

It is easy to see that we only need to prove the following fact:

Actually, if this is not true, that is,

then there exists such that

Therefore

Because

we get

that is,

Since is linearly independent, we see that

So

This is a contradiction. Hence (2.12) holds, and so does (2.11). Thus, there are
such that

Therefore,

So we have

Define

Then

Let , and

Then

Since is a symmetric matrix, we have

for an orthogonal matrix , and a diagonal matrix

Write

Then

So, for all ,

Without loss of generality, we may assume that

Thus, for all ,

By arguments similar to those above, we can prove

So there is a matrix satisfying

Thus,

By mathematical induction we obtain

So .
Let

be the annihilating polynomial of . Then

Since , we can find
such that

The proof is then complete.
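To illustrate the type of conclusion reached above, here is a small worked example of our own (not taken from the paper): the function y(x) = x e^{2x} satisfies y' = (1 + 2x) e^{2x} and y'' = (4 + 4x) e^{2x}, so

$$
y'' - 4y' + 4y \;=\; e^{2x}\bigl[(4 + 4x) - 4(1 + 2x) + 4x\bigr] \;=\; 0,
$$

that is, y solves a linear homogeneous differential equation with constant coefficients, with the characteristic polynomial (λ − 2)^2 playing the role of the annihilating polynomial above.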
Theorem 2.2.
Let the -divergence
be in the intersection of
-divergence and general Bregman divergence. Then
satisfies

for some .
Proof.
If are in the intersection of
-divergence and general Bregman divergence, then we have

where

Let

Then

Hence

Let

Then

Thus, a modification of Theorem 2.1 implies the conclusion.
Moreover, it is not hard to deduce the following theorem.
Theorem 2.3.
Let a vector -divergence be in the intersection of the vector
-divergence and the general Bregman divergence, and let
satisfy

where is a strictly monotone, twice continuously differentiable function. Then the
divergence is
-divergence or vector
-divergence times a positive constant
.
References
1. Chernoff H: A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Annals of Mathematical Statistics 1952, 23: 493–507. doi:10.1214/aoms/1177729330
2. Amari S: Information geometry and its applications: convex function and dually flat manifold. In Emerging Trends in Visual Computing, Lecture Notes in Computer Science. Volume 5416. Edited by: Nielsen F. Springer, Berlin, Germany; 2009: 75–102. doi:10.1007/978-3-642-00826-9_4
3. Brègman LM: The relaxation method of finding a common point of convex sets and its application to the solution of problems in convex programming. Computational Mathematics and Mathematical Physics 1967, 7: 200–217.
4. Cichocki A, Zdunek R, Phan AH, Amari S: Non-Negative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation. Wiley, New York, NY, USA; 2009.
5. Csiszár I: Why least squares and maximum entropy? An axiomatic approach to inference for linear inverse problems. The Annals of Statistics 1991, 19(4): 2032–2066. doi:10.1214/aos/1176348385
Acknowledgments
This work was supported partially by the NSF of China and the Specialized Research Fund for the Doctoral Program of Higher Education of China.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article
Dong, CL., Liang, J. Solution to a Function Equation and Divergence Measures. Adv Differ Equ 2011, 617564 (2011). https://doi.org/10.1155/2011/617564
Keywords
- Machine Learning
- Functional Equation
- Orthonormal Basis
- Divergence Measure
- Statistical Inference