The inhomogeneous p-Laplacian equation with Neumann boundary conditions in the limit p → ∞

We investigate the limiting behavior of solutions to the inhomogeneous p-Laplacian equation −∆_p u = μ_p subject to Neumann boundary conditions. For right-hand sides which are arbitrary signed measures, we show that solutions converge to a Kantorovich potential associated with the geodesic Wasserstein-1 distance. In the regular case with continuous right-hand sides, we characterize the limit as a viscosity solution of an infinity Laplacian / eikonal type equation.


Introduction
The purpose of this paper is to study the behavior of solutions of the inhomogeneous p-Laplacian equation with Neumann boundary conditions as p → ∞. The precise equation we consider is

−∆_p u = μ_p in Ω,  |∇u|^{p−2} ∂u/∂ν = 0 on ∂Ω,  ∫_Ω |u|^{p−2} u dx = 0, (1.1)

where Ω ⊂ R^d is a Lipschitz domain and the right-hand side μ_p ∈ M(Ω̄) is a signed Radon measure which satisfies the compatibility condition μ_p(Ω̄) = 0. We index the right-hand side by p to include the case that it varies with p. In the rest of the paper we refer to (1.1) as the p-Poisson equation since for p = 2 it coincides with the standard Poisson equation.

We prove two convergence results, stated in Sect. 2.4 below. The first one is purely variational and states that, if the right-hand sides μ_p converge weak-star to a measure μ ∈ M(Ω̄) as p → ∞, then weak solutions u_p of (1.1) converge (up to a subsequence) to a Kantorovich potential u_∞, which realizes the supremum in the following version of the Wasserstein-1 distance between the positive part μ⁺ and the negative part μ⁻ of μ:

sup { ∫_Ω̄ u dμ⁺ − ∫_Ω̄ u dμ⁻ : u ∈ C(Ω̄), ess sup_Ω |∇u| ≤ 1 }. (1.2)

The second result uses techniques from viscosity solutions to prove that for continuous data μ_p ∈ C(Ω̄), converging uniformly to μ ∈ C(Ω̄), solutions u_p converge to a viscosity solution of the following infinity Laplacian / eikonal type partial differential equation (PDE):

min{|∇u| − 1, −∆_∞ u} = 0 in {μ > 0},  −∆_∞ u = 0 in int{μ = 0},  max{1 − |∇u|, −∆_∞ u} = 0 in {μ < 0}. (1.3)

Consequently, the only information on μ which "survives" the limit p → ∞ in the p-Poisson problem (1.1) is the support of its positive and negative part.

Similar results have already been established for several related problems associated with the p-Laplace operator. In [1], the limit of p-Poisson equations with nonnegative right-hand side and Dirichlet boundary conditions was related to a PDE similar to (1.3).
In [2], the asymptotics of the homogeneous p-Laplacian equation with nonhomogeneous Neumann boundary conditions was investigated and related to an optimal transport problem and a viscosity PDE of infinity Laplacian type. Furthermore, in [3] a vector-valued version of (1.1) with right-hand side independent of p was studied; solutions were shown to converge to a Kantorovich potential and to solve a PDE in divergence form with measure coefficients. Similar results were established in [4,5], however, under stricter regularity conditions on the right-hand side in (1.1). Furthermore, in [6] the case of mixed boundary conditions and regular fixed right-hand sides was related to optimal transport through a window on the boundary. Infinity Laplacian eigenvalue problems, their approximation with p-Laplacian problems, and their relation to optimal transport were investigated in [7-11].
Apart from the theoretical interest in understanding the limiting behavior of solutions to (1.1), our investigations are also driven by recent developments in data science. In [12] it was proposed to utilize the p-Poisson equation to solve semi-supervised learning tasks. To this end, one assumes access to labels g : O → R on a closed subset O ⊂ Ω̄ of the domain; in particular, O could be a finite collection of points. For extending these labels from a discrete set O = {x_i : i = 1, …, m} with m ∈ N to the whole domain Ω, it was suggested in [12] to solve (1.1) with the right-hand side given by

μ_p = Σ_{i=1}^m (g(x_i) − ḡ) δ_{x_i},  ḡ := (1/m) Σ_{i=1}^m g(x_i),

where δ_x ∈ M(Ω̄) denotes the Dirac measure located at x ∈ Ω̄. While this method, termed "Poisson learning", performs very well in practice, a full analysis is still pending. In particular, a rigorous convergence proof of the finite-dimensional approximation of Poisson learning on weighted graphs, which is used in applications, would be desirable.
The results of the present article apply to the continuum description of Poisson learning and, in particular, address the asymptotics as p → ∞. For the balanced case of two labelled classes of equal size, i.e., g : O → {±1} and ḡ = 0, our main results can be interpreted as follows: the labelling function u arising as limit of solutions to Poisson learning as p → ∞ is directly connected to the solution of the optimal transport problem which transports the empirical measure Σ_{i : g(x_i) = +1} δ_{x_i} of the points with label +1 to the empirical measure Σ_{i : g(x_i) = −1} δ_{x_i} of the points with label −1.

The plan of this paper is the following: Sect. 2 reviews some important mathematical background and states our main results, which are proved in Sect. 3. In more detail, Sect. 3.1 proves compactness of solutions of (1.1) as p → ∞, Sect. 3.2 is devoted to the optimal transport characterization of cluster points, and Sect. 3.3 relates them to the limiting PDE (1.3).
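To make this optimal transport interpretation concrete, the following minimal sketch (the point locations and labels are made-up illustrative data) computes the Wasserstein-1 distance between the two empirical label measures in one dimension, where pairing the sorted atoms is the optimal coupling:

```python
import numpy as np

def w1_1d(xs, ys):
    # Wasserstein-1 distance between two 1-D empirical measures with the
    # same number of unit-mass atoms: sorting both samples and pairing
    # them in order gives the optimal transport plan in one dimension.
    xs, ys = np.sort(xs), np.sort(ys)
    return np.mean(np.abs(xs - ys))

plus_pts = np.array([0.1, 0.2])    # points with label g = +1
minus_pts = np.array([0.8, 0.9])   # points with label g = -1
print(w1_1d(plus_pts, minus_pts))  # 0.7
```

For this example, the 1-Lipschitz potential u(x) = −x attains the supremum in (1.2): ∫ u dμ⁺ − ∫ u dμ⁻ = (−0.15) − (−0.85) = 0.7.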

Weak solution to the p-Laplacian equation
The p-Laplacian for p ∈ [1, ∞) is defined as

∆_p u := div(|∇u|^{p−2} ∇u). (2.1)

For C²-functions u, it admits the decomposition formula

∆_p u = |∇u|^{p−2} ( ∆u + (p − 2) |∇u|^{−2} ∆_∞ u ), (2.2)

where ∆u = div(∇u) denotes the Laplacian and ∆_∞ u := ⟨∇u, D²u ∇u⟩ is called the infinity Laplacian.
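As a quick sanity check of this decomposition in one dimension (a sketch; the choice u(x) = sin x and the test point are arbitrary): there ∆_p u = (|u′|^{p−2}u′)′ = (p − 1)|u′|^{p−2}u″ and ∆_∞ u = (u′)² u″, and the two expressions for ∆_p u agree exactly.

```python
import numpy as np

# Verify the decomposition Δ_p u = |∇u|^{p-2}(Δu + (p-2)|∇u|^{-2} Δ_∞ u)
# in 1-D for u(x) = sin(x), where the expanded form of Δ_p u is
# (p-1)|u'|^{p-2} u'' and Δ_∞ u = <u', u'' u'> = (u')^2 u''.
def check(x, p):
    du, d2u = np.cos(x), -np.sin(x)
    direct = (p - 1) * abs(du) ** (p - 2) * d2u          # Δ_p u, expanded by hand
    inf_lap = du ** 2 * d2u                               # Δ_∞ u
    decomp = abs(du) ** (p - 2) * (d2u + (p - 2) * inf_lap / du ** 2)
    return direct, decomp

d, c = check(0.3, 4.0)
print(abs(d - c))  # ~0: both expressions agree
```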
Since we are interested in the case p → ∞ anyway, we assume in the whole article that p > d, in which case the Sobolev embedding W^{1,p}(Ω) ↪ C^{0,1−d/p}(Ω̄) makes sure that the following concept of weak solutions to (1.1) is well defined.

Definition 2.1 Let p > d. A function u ∈ W^{1,p}(Ω) is called a weak solution to (1.1) if it satisfies ∫_Ω |u|^{p−2} u dx = 0 and

∫_Ω |∇u|^{p−2} ∇u · ∇φ dx = ∫_Ω̄ φ dμ_p for all φ ∈ W^{1,p}(Ω). (2.3)

It is obvious that weak solutions in the sense of Definition 2.1 coincide with solutions of the variational problem

min { (1/p) ∫_Ω |∇u|^p dx − ∫_Ω̄ u dμ_p : u ∈ W^{1,p}(Ω), ∫_Ω |u|^{p−2} u dx = 0 }, (2.4)

since the Euler-Lagrange equations of this problem precisely coincide with (2.3), cf. [13]. Using standard arguments from the calculus of variations, it can be shown that this problem admits a unique solution for every p > 1. Apart from guaranteeing existence and uniqueness, this variational characterization will be essential for deriving the optimal transport characterization of the limit lim_{p→∞} u_p of weak solutions u_p ∈ W^{1,p}(Ω). For higher regularity statements for solutions of the p-Poisson equation, we refer the interested reader to [14].

Geodesic geometry
As it turns out, the correct metric on Ω̄ when working with (1.1) (or (2.4)) and its limit as p → ∞ is not the Euclidean one but the geodesic distance. It is defined as

d_Ω(x, y) := inf { len(γ) : γ : [0, 1] → Ω̄ is a curve with γ(0) = x, γ(1) = y }, x, y ∈ Ω̄, (2.5)

and turns (Ω̄, d_Ω) into a length space. The geodesic distance measures the length of the shortest curve in Ω̄ connecting two points. If Ω is convex, then the curve γ(t) = (1 − t)x + ty shows d_Ω(x, y) = |x − y|, but in general it only holds that d_Ω(x, y) ≥ |x − y|. A derived quantity, which appears naturally in the context of the Neumann problem (1.1), is the geodesic diameter of Ω, defined as

diam_Ω(Ω) := sup { d_Ω(x, y) : x, y ∈ Ω̄ }. (2.6)

The geodesic diameter appears in the optimal constant in the inequality

ess sup_Ω |u| ≤ (diam_Ω(Ω)/2) Lip_Ω(u) for all u ∈ C(Ω̄) with ess sup_Ω u + ess inf_Ω u = 0, (2.7)

where Lip_Ω denotes the geodesic Lipschitz constant defined in (2.9) below, and in the first nontrivial Neumann eigenvalue of the infinity Laplacian [8,15], given by

Λ_∞ := 2 / diam_Ω(Ω). (2.8)

One can use the geodesic distance to define the geodesic Lipschitz constant of u ∈ C(Ω̄) as

Lip_Ω(u) := sup { |u(x) − u(y)| / d_Ω(x, y) : x, y ∈ Ω̄, x ≠ y }. (2.9)

With this at hand, one can introduce a geodesic version of the Wasserstein-1 distance:

W_1^Ω(μ⁺, μ⁻) := sup { ∫_Ω̄ u dμ⁺ − ∫_Ω̄ u dμ⁻ : u ∈ C(Ω̄), Lip_Ω(u) ≤ 1 }. (2.10)

Note that, as stated in [16, page 269], any function u ∈ W^{1,∞}(Ω) has a continuous representative (denoted by the same symbol), and it holds

|u(x) − u(y)| ≤ (ess sup_Ω |∇u|) d_Ω(x, y), x, y ∈ Ω̄. (2.11)

This shows that Lip_Ω(u) ≤ ess sup_Ω |∇u|. Furthermore, since for points x, y that lie in a ball that is fully contained in Ω it holds d_Ω(x, y) = |x − y|, it is easily seen (see [1, page 23]) that in fact

Lip_Ω(u) = ess sup_Ω |∇u|. (2.12)
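The gap between the Euclidean and the geodesic distance can be observed numerically. The following sketch (a slit square as a stand-in for a nonconvex Ω, with Dijkstra on an 8-neighbor grid graph as a rough discrete approximation of d_Ω; all parameters are illustrative) shows that the shortest path around an obstacle is much longer than the straight-line distance:

```python
import heapq
import math

# Ω = (0,1)^2 minus the wall {x = 0.5, 0 <= y <= 0.8}, discretized by an
# n x n grid. Grid shortest paths (8-neighbor moves) approximate d_Ω.
n = 81
h = 1.0 / (n - 1)

def inside(i, j):
    x, y = i * h, j * h
    return not (abs(x - 0.5) < h / 2 and y <= 0.8)  # wall column removed

def dijkstra(src, dst):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == dst:
            return d
        if d > dist[(i, j)]:
            continue
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= ni < n and 0 <= nj < n and inside(ni, nj):
                    nd = d + h * math.hypot(di, dj)
                    if nd < dist.get((ni, nj), float("inf")):
                        dist[(ni, nj)] = nd
                        heapq.heappush(pq, (nd, (ni, nj)))
    return float("inf")

a, b = (8, 8), (72, 8)              # roughly (0.1, 0.1) and (0.9, 0.1)
geo = dijkstra(a, b)                 # path must detour above y = 0.8
euc = (72 - 8) * h                   # Euclidean distance 0.8
print(geo, euc)                      # geodesic detour is much longer
```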

Weak-star convergence of measures
For measuring the convergence of the right-hand side measures μ p in (1.1) as p → ∞, we utilize weak-star convergence of measures.

Definition 2.2 (Weak-star convergence of measures) A sequence of Radon measures (μ_n)_{n∈N} ⊂ M(Ω̄) is said to converge weak-star to μ ∈ M(Ω̄), written μ_n ⇀* μ, if ∫_Ω̄ u dμ_n → ∫_Ω̄ u dμ as n → ∞ for all u ∈ C(Ω̄).
It is easy to see that any Radon measure can be approximated in the weak-star topology by convolving it with a mollifier.
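A minimal numerical illustration of this mollification (a sketch: a Dirac measure, and an Epanechnikov kernel in place of a C^∞ bump to keep the quadrature simple; all numbers are illustrative). For μ = δ_{x0} and μ_ε(x) = ε^{−1}φ((x − x0)/ε), the pairing ∫ u dμ_ε tends to ∫ u dμ = u(x0) for every continuous u:

```python
import numpy as np

phi = lambda s: 0.75 * np.clip(1 - s ** 2, 0, None)  # kernel, supp ⊂ [-1,1], ∫φ = 1
x0 = 0.3
u = lambda x: x ** 2                                  # test function, u(x0) = 0.09

def pairing(eps, m=4001):
    # Trapezoidal quadrature of ∫ u(x) μ_ε(x) dx over supp μ_ε = [x0-ε, x0+ε].
    x = np.linspace(x0 - eps, x0 + eps, m)
    f = u(x) * phi((x - x0) / eps) / eps
    return float(np.sum((f[:-1] + f[1:]) / 2) * (x[1] - x[0]))

for eps in (0.5, 0.1, 0.01):
    print(eps, pairing(eps))  # approaches u(x0) = 0.09 as eps shrinks
```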

Main results
The following are our main results.

Theorem 1 Let the data μ_p ∈ M(Ω̄) satisfy μ_p(Ω̄) = 0 and μ_p ⇀* μ ∈ M(Ω̄) as p → ∞, and let u_p ∈ W^{1,p}(Ω) be weak solutions of (1.1). Then, up to a subsequence, u_p converges uniformly to a function u_∞ ∈ W^{1,∞}(Ω) which realizes the supremum in the geodesic Wasserstein-1 distance (2.10) of μ⁺ and μ⁻, i.e., u_∞ is a Kantorovich potential.

Theorem 2 Let, in addition, the data μ_p ∈ C(Ω̄) converge uniformly to μ ∈ C(Ω̄). Then every uniform limit u_∞ of the solutions u_p is a viscosity solution of the limiting PDE (1.3).

The proof of Theorem 1 can be found in Sect. 3.2 and the one of Theorem 2, along with precise definitions of the notion of viscosity solutions and some corollaries, in Sect. 3.3.

Convergence of solutions
In this section we show that if the sequence of right-hand sides μ_p in (1.1) has uniformly bounded mass, then the sequence of solutions (u_p)_{p>d} admits a convergent subsequence. To this end, we first derive an upper bound for the p-Dirichlet energy ∫_Ω |∇u_p|^p dx in terms of the data, which will then allow us to deduce convergence.
Proposition 3.1 Let u_p ∈ W^{1,p}(Ω) be a weak solution of (1.1) with data μ_p ∈ M(Ω̄). Then it holds

∫_Ω |∇u_p|^p dx ≤ ( σ_p^{−1/p} |μ_p|(Ω̄) )^{p/(p−1)}. (3.1)

Proof Choosing φ = u_p in (2.3) and using Hölder's and Morrey's inequalities yields

∫_Ω |∇u_p|^p dx = ∫_Ω̄ u_p dμ_p ≤ ess sup_Ω |u_p| |μ_p|(Ω̄) ≤ σ_p^{−1/p} ( ∫_Ω |∇u_p|^p dx )^{1/p} |μ_p|(Ω̄),

where the optimal constant for the Morrey inequality is defined through

σ_p := inf { ∫_Ω |∇u|^p dx / (ess sup_Ω |u|)^p : u ∈ W^{1,p}(Ω) \ {0}, ∫_Ω |u|^{p−2} u dx = 0 }.

Rearranging, and using that p ↦ σ_p^{1/p} converges to the value 2/diam_Ω(Ω) ∈ (0, ∞), which is the first nontrivial Neumann eigenvalue Λ_∞ of the infinity Laplacian [8,15], concludes the proof.
Before proving the convergence theorem, we need the following important lemma.
Lemma 3.1 Let k ≥ 0 and let (u_p)_{p>d} ⊂ C(Ω̄) converge uniformly to u_∞ ∈ C(Ω̄) as p → ∞. Then it holds

lim_{p→∞} ‖u_p‖_{L^{p−k}(Ω)} = ess sup_Ω |u_∞|.

Proof Let ε > 0 be given. Then, for p sufficiently large, it holds ess sup_Ω |u_p − u_∞| ≤ ε. Consequently, using Minkowski's inequality,

‖u_p‖_{L^{p−k}(Ω)} ≤ ‖u_p − u_∞‖_{L^{p−k}(Ω)} + ‖u_∞‖_{L^{p−k}(Ω)} ≤ ε |Ω|^{1/(p−k)} + ‖u_∞‖_{L^{p−k}(Ω)}.

Using the reverse triangle inequality, one analogously obtains

‖u_p‖_{L^{p−k}(Ω)} ≥ ‖u_∞‖_{L^{p−k}(Ω)} − ε |Ω|^{1/(p−k)}.

Combining these two inequalities with the facts that |Ω|^{1/(p−k)} → 1 and ‖u_∞‖_{L^q(Ω)} → ess sup_Ω |u_∞| as q → ∞, and using that ε > 0 was arbitrary, concludes the proof.

Now we can prove that the sequence of solutions of (1.1) has a convergent subsequence.

Proposition 3.2 Let u_p ∈ W^{1,p}(Ω) be weak solutions of (1.1) with data μ_p ∈ M(Ω̄) and assume that the data satisfy

lim sup_{p→∞} |μ_p|(Ω̄) < ∞.

Then there exists a function u_∞ ∈ W^{1,∞}(Ω) such that as p → ∞ (up to a subsequence) the functions u_p converge to u_∞ uniformly and weakly in W^{1,m}(Ω) for any m > 1. Furthermore, it holds

ess sup_Ω |∇u_∞| ≤ 1 and ess sup_Ω |u_∞| ≤ diam_Ω(Ω)/2. (3.2)

Proof We follow the strategy from [1]. For p > m, Hölder's inequality yields

( ∫_Ω |∇u_p|^m dx )^{1/m} ≤ |Ω|^{1/m − 1/p} ( ∫_Ω |∇u_p|^p dx )^{1/p}.

Consequently, using Proposition 3.1, it follows that

( ∫_Ω |∇u_p|^m dx )^{1/m} ≤ |Ω|^{1/m − 1/p} ( σ_p^{−1/p} |μ_p|(Ω̄) )^{1/(p−1)}. (3.3)

Introducing the first nonzero eigenvalue of the p-Laplacian operator [8,15],

λ_p := inf { ∫_Ω |∇u|^p dx / ∫_Ω |u|^p dx : u ∈ W^{1,p}(Ω) \ {0}, ∫_Ω |u|^{p−2} u dx = 0 },

we can estimate

( ∫_Ω |u_p|^m dx )^{1/m} ≤ |Ω|^{1/m − 1/p} ( ∫_Ω |u_p|^p dx )^{1/p} ≤ |Ω|^{1/m − 1/p} λ_p^{−1/p} ( ∫_Ω |∇u_p|^p dx )^{1/p}.

Using Proposition 3.1 together with the fact that, according to [8], λ_p^{1/p} converges to Λ_∞ = 2/diam_Ω(Ω), we obtain

lim sup_{p→∞} ( ∫_Ω |u_p|^m dx )^{1/m} ≤ |Ω|^{1/m} diam_Ω(Ω)/2. (3.6)

Thanks to (3.3) and (3.6), the sequence (u_p) has uniformly bounded W^{1,m}-norms, and hence (up to a subsequence) converges weakly to a function u_∞ in W^{1,m}(Ω). Furthermore, for m > d, one has the compact embedding [17] of W^{1,m}(Ω) into C^{0,1−d/m}(Ω̄), which (after another round of subsequence refinement) proves the uniform convergence.

It remains to argue that u_∞ ∈ W^{1,∞}(Ω) and to prove (3.2). Using the weak lower semicontinuity of the L^m-norm, we obtain from (3.3) that

( ∫_Ω |∇u_∞|^m dx )^{1/m} ≤ lim inf_{p→∞} ( ∫_Ω |∇u_p|^m dx )^{1/m} ≤ |Ω|^{1/m},

since ( σ_p^{−1/p} |μ_p|(Ω̄) )^{1/(p−1)} → 1 as p → ∞. Letting m → ∞ yields ess sup_Ω |∇u_∞| ≤ 1, hence u_∞ ∈ W^{1,∞}(Ω). Similarly, weak lower semicontinuity and (3.6) yield ( ∫_Ω |u_∞|^m dx )^{1/m} ≤ |Ω|^{1/m} diam_Ω(Ω)/2. Applying Lemma 3.1 with p = m and k = 0 to the constant sequence u_m ≡ u_∞ and letting m → ∞ gives ess sup_Ω |u_∞| ≤ diam_Ω(Ω)/2. Hence, we have established all inequalities in (3.2).
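The norm limit used in Lemma 3.1, ‖u‖_{L^p(Ω)} → ess sup_Ω |u| as p → ∞, can be observed numerically; a minimal sketch on Ω = (0, 1) with u(x) = sin(πx), so that ess sup |u| = 1:

```python
import numpy as np

# Riemann-sum approximation of ||u||_{L^p(0,1)} for u(x) = sin(pi x).
# Since |Ω| = 1, the L^p norms are nondecreasing in p and tend to
# ess sup |u| = 1 as p → ∞ (slowly, like exp(-C log(p)/p)).
x = np.linspace(0, 1, 200001)
u = np.sin(np.pi * x)
dx = x[1] - x[0]

def lp_norm(p):
    return float((np.sum(np.abs(u) ** p) * dx) ** (1 / p))

for p in (2, 10, 100, 1000):
    print(p, lp_norm(p))  # increases toward ess sup |u| = 1
```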

Optimal transport characterization
The main theorem in this section characterizes the limit u_∞ as an optimal transport potential. We assume that the data measures μ_p converge in the weak-star sense of measures. This makes sure that one can pass to the limit in duality products where both factors converge, as the following lemma shows.

Lemma 3.2 Let (μ_n)_{n∈N} ⊂ M(Ω̄) converge weak-star to μ ∈ M(Ω̄) and let (u_n)_{n∈N} ⊂ C(Ω̄) converge uniformly to u ∈ C(Ω̄). Then it holds ∫_Ω̄ u_n dμ_n → ∫_Ω̄ u dμ as n → ∞.

Proof With the abbreviation ⟨μ, u⟩ := ∫_Ω̄ u dμ, we can compute

|⟨μ_n, u_n⟩ − ⟨μ, u⟩| ≤ |⟨μ_n, u_n − u⟩| + |⟨μ_n − μ, u⟩| ≤ |μ_n|(Ω̄) sup_Ω̄ |u_n − u| + |⟨μ_n − μ, u⟩|.

The Banach-Steinhaus theorem (or the uniform boundedness principle) [16, Sect. 2.2] makes sure that sup_n |μ_n|(Ω̄) < ∞. Together with the uniform convergence of u_n and the weak-star convergence of μ_n, this implies that the right-hand side tends to zero as n → ∞.
To set the scene for the optimal transport characterization, we remind the reader of the usual Wasserstein-1 distance W_1(μ⁺, μ⁻) of the two measures μ^±, defined as

W_1(μ⁺, μ⁻) := sup { ∫_Ω̄ u dμ⁺ − ∫_Ω̄ u dμ⁻ : u ∈ C(Ω̄), Lip(u) ≤ 1 }, (3.8)

where the Lipschitz constant Lip(u) in (3.8) is

Lip(u) := sup { |u(x) − u(y)| / |x − y| : x, y ∈ Ω̄, x ≠ y }. (3.9)

Functions u ∈ C(Ω̄) which attain the supremum in (3.8) are typically referred to as Kantorovich potentials. The Lipschitz constant, and hence also the Wasserstein-1 distance, is defined with respect to the Euclidean metric on R^d. This is, however, not the most natural metric to consider on the (possibly nonconvex) domain Ω. Indeed, it can happen that two points in Ω have a small Euclidean distance although transporting two measures concentrated on these points onto each other within Ω requires a long transportation path. To overcome this, one can use the geodesic distance on Ω̄, defined in (2.5). Correspondingly, one can also introduce the geodesic Lipschitz constant (2.9) and the geodesic Wasserstein-1 distance (2.10).
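The dual problem (3.8) can be approximated by a small linear program. The following sketch (assuming SciPy is available; the grid resolution and the measure locations are illustrative) maximizes ∫ u dμ⁺ − ∫ u dμ⁻ over discretized 1-Lipschitz functions and recovers the known value W_1(δ_{0.1}, δ_{0.8}) = 0.7 on Ω = (0, 1); in one dimension it suffices to enforce the Lipschitz constraint between neighboring grid points:

```python
import numpy as np
from scipy.optimize import linprog

n, h = 11, 0.1                                 # grid 0, 0.1, ..., 1.0
c = np.zeros(n); c[1] = -1.0; c[8] = 1.0       # minimize -(u(0.1) - u(0.8))
A, b = [], []
for i in range(n - 1):
    row = np.zeros(n); row[i + 1], row[i] = 1.0, -1.0
    A += [row, -row]; b += [h, h]              # |u_{i+1} - u_i| <= h
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=(None, None))
print(-res.fun)  # ≈ 0.7, the Wasserstein-1 distance
```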
As Theorem 1 states, this geodesic transport distance (2.10) arises naturally in the limiting problem of the p-Poisson equation (1.1). We now give the proof of this statement.
Proof of Theorem 1 First we note that the weak-star convergence of μ_p together with the Banach-Steinhaus theorem implies that lim sup_{p→∞} |μ_p|(Ω̄) < ∞, so that Proposition 3.2 ensures the existence of a subsequential uniform limit u_∞.
Let u ∈ W^{1,∞}(Ω) with ess sup_Ω |∇u| ≤ 1 be arbitrary. Without loss of generality, we can assume that ∫_Ω |u|^{p−2} u dx = 0. Since u_p in particular solves (2.4), it holds

(1/p) ∫_Ω |∇u_p|^p dx − ∫_Ω̄ u_p dμ_p ≤ (1/p) ∫_Ω |∇u|^p dx − ∫_Ω̄ u dμ_p.

We can rearrange this inequality to

∫_Ω̄ u dμ_p⁺ − ∫_Ω̄ u dμ_p⁻ ≤ (1/p) ∫_Ω |∇u|^p dx − (1/p) ∫_Ω |∇u_p|^p dx + ∫_Ω̄ u_p dμ_p,

where μ_p = μ_p⁺ − μ_p⁻, with nonnegative measures μ_p^± ∈ M(Ω̄), is the Jordan decomposition of μ_p. Obviously, it holds μ_p^± ⇀* μ^± as p → ∞ since the measures μ_p^± are mutually singular. Now we use Lemma 3.2 together with the fact that the term (1/p) ∫_Ω |∇u_p|^p dx is nonnegative and that |∇u| ≤ 1 a.e. in Ω to obtain

∫_Ω̄ u dμ⁺ − ∫_Ω̄ u dμ⁻ ≤ lim_{p→∞} ( |Ω|/p + ∫_Ω̄ u_p dμ_p ) = ∫_Ω̄ u_∞ dμ⁺ − ∫_Ω̄ u_∞ dμ⁻.

Since by (2.12) and (3.2) the function u_∞ is feasible for the optimization problem in (2.10), taking the supremum over u shows the assertion.
Since according to Proposition 3.2 the limit u_∞ also satisfies ess sup_Ω |u_∞| ≤ diam_Ω(Ω)/2, one could also have the idea to include a boundedness condition in the optimization problem in (2.10). This is motivated by the so-called Kantorovich-Rubinstein (KR) norm of the measure μ = μ⁺ − μ⁻ on the length space (Ω̄, d_Ω), which is defined as

‖μ‖_{KR(Ω)} := sup { ∫_Ω̄ u dμ : u ∈ C(Ω̄), Lip_Ω(u) ≤ 1, ess sup_Ω |u| ≤ 1 }. (3.10)

The reason why the KR norm does not appear naturally in our context is that for measures with zero mass it is equivalent (and for suitably scaled domains even equal) to the so-called dual Lipschitz norm. This norm coincides with the geodesic Wasserstein distance of the positive and negative part of the measure and is defined as

‖μ‖_{Lip*(Ω)} := sup { ∫_Ω̄ u dμ : u ∈ C(Ω̄), Lip_Ω(u) ≤ 1 }. (3.11)

For completeness, the equivalence is stated in the following proposition.
Proof The proof works just as in [9, Proposition 4.3], see also [18, Lemma 2.1]. By omitting the constraint ess sup_Ω |u| ≤ 1, we obtain the first inequality ‖μ‖_{KR(Ω)} ≤ ‖μ‖_{Lip*(Ω)}. For the other inequality, we argue as follows: since μ has zero mass, we can without loss of generality assume that the supremum in (3.11) is taken over functions that satisfy ess sup_Ω u + ess inf_Ω u = 0, by replacing u with u − c for a suitable constant c. Then, using (2.7), we get for all u ∈ C(Ω̄) with Lip_Ω(u) ≤ 1 that ess sup_Ω |u| ≤ diam_Ω(Ω)/2. Letting t := max{1, diam_Ω(Ω)/2}, the function u/t is admissible in (3.10), and hence ∫_Ω̄ u dμ ≤ t ‖μ‖_{KR(Ω)}. Taking the supremum over u yields ‖μ‖_{Lip*(Ω)} ≤ max{1, diam_Ω(Ω)/2} ‖μ‖_{KR(Ω)}, which proves the claimed equivalence.
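The gap between the two norms on a domain with large diameter can be observed in a small linear-programming experiment (SciPy assumed; the domain (0, 3) and the measure μ = δ_{0.2} − δ_{2.9} are illustrative): the extra constraint |u| ≤ 1 in the KR norm caps the value at 2, while the dual Lipschitz norm equals the full distance 2.7.

```python
import numpy as np
from scipy.optimize import linprog

# Discretize Ω = (0, 3); maximize u(0.2) - u(2.9) over 1-Lipschitz grid
# functions, once without bounds (dual Lipschitz norm) and once with
# |u| <= 1 (KR norm).
n, h = 31, 0.1
c = np.zeros(n); c[2] = -1.0; c[29] = 1.0
A, b = [], []
for i in range(n - 1):
    row = np.zeros(n); row[i + 1], row[i] = 1.0, -1.0
    A += [row, -row]; b += [h, h]              # |u_{i+1} - u_i| <= h
A, b = np.array(A), np.array(b)
lip_star = -linprog(c, A_ub=A, b_ub=b, bounds=(None, None)).fun
kr = -linprog(c, A_ub=A, b_ub=b, bounds=(-1, 1)).fun
print(lip_star, kr)  # 2.7 and 2.0, so kr <= lip_star
```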

PDE characterization
Now we also give a PDE characterization of the limit u_∞, which we have shown to be a Kantorovich potential in the previous section. Note that Kantorovich potentials are typically not unique, which is why it is interesting to verify that the limiting procedure p → ∞ selects a more regular potential. Since u_∞ turns out to solve an infinity Laplacian type PDE in the viscosity sense, we also have to work with viscosity solutions for finite p. However, for that we have to assume that the data μ_p are continuous and converge uniformly.

Let us first define what it means to be a viscosity solution to the p-Poisson equation (1.1). In particular, one has to interpret the Neumann boundary conditions in the viscosity sense, see also [8,19]. As explained in [20], the proper way to understand the boundary conditions in such boundary value problems is to relax them: at a boundary point, either the boundary condition or the PDE itself has to hold, with the inequality appropriate for sub- or supersolutions.

Definition 3.1 (Viscosity solutions of the p-Poisson equation) Let μ_p ∈ C(Ω̄). An upper semicontinuous function u : Ω̄ → R is called a viscosity subsolution of (1.1) if

• for all x_0 ∈ Ω and φ ∈ C²(Ω̄) such that u − φ has a local maximum at x_0, it holds −∆_p φ(x_0) ≤ μ_p(x_0);
• for all x_0 ∈ ∂Ω and φ ∈ C²(Ω̄) such that u − φ has a local maximum at x_0, it holds min{ |∇φ(x_0)|^{p−2} ∂φ/∂ν(x_0), −∆_p φ(x_0) − μ_p(x_0) } ≤ 0;
• it holds ∫_Ω |u|^{p−2} u dx ≤ 0.

A lower semicontinuous function u : Ω̄ → R is called a viscosity supersolution of (1.1) if

• for all x_0 ∈ Ω and φ ∈ C²(Ω̄) such that u − φ has a local minimum at x_0, it holds −∆_p φ(x_0) ≥ μ_p(x_0);
• for all x_0 ∈ ∂Ω and φ ∈ C²(Ω̄) such that u − φ has a local minimum at x_0, it holds max{ |∇φ(x_0)|^{p−2} ∂φ/∂ν(x_0), −∆_p φ(x_0) − μ_p(x_0) } ≥ 0;
• it holds ∫_Ω |u|^{p−2} u dx ≥ 0.

A function u ∈ C(Ω̄) is called a viscosity solution of (1.1) if it is both a sub- and a supersolution.
We need the following well-known statement, which asserts that weak solutions to the p-Poisson equation are also viscosity solutions. Before we turn to the limiting PDE, we recall that the statement of Proposition 3.2, which states that |∇u_∞| ≤ 1 almost everywhere in Ω, can be converted into the viscosity framework.
It is important to remark that in the viscosity sense the inequality |∇u| -1 ≤ 0 is not equivalent to 1 -|∇u| ≥ 0, which is why we make the distinction explicit.
Let us now turn to the limiting PDE (1.3) satisfied by u_∞, for which we assume that the limiting data μ ∈ C(Ω̄) are continuous. We prove that every limit u_∞ of solutions to the p-Poisson equation (1.1) as p → ∞ is a viscosity solution of (1.3), which we restate here for convenience:

min{|∇u| − 1, −∆_∞ u} = 0 in {μ > 0},  −∆_∞ u = 0 in int{μ = 0},  max{1 − |∇u|, −∆_∞ u} = 0 in {μ < 0}. (3.13)

Note that this PDE does not contain any boundary conditions, and it also does not specify the behavior on the closed set Ω̄ \ ({μ > 0} ∪ {μ < 0} ∪ int{μ = 0}). Note that even the weak boundary conditions in the viscosity sense, introduced before Definition 3.2, do not carry over to the limiting problem, which is consistent with the findings in [1,19]. Regarding the behavior outside the three sets that occur in (3.13), one should remark that the PDE is discontinuous there. Using lower and upper semicontinuous envelopes of this discontinuous function, one can make sense of a weaker form of the PDE on the whole of Ω̄, see [22, Remark 4.3] for a similar problem and [20, Remark 6.3] for a general statement. In contrast, for Neumann eigenvalue problems of the p-Laplacian, it is possible to formulate boundary conditions and obtain a limiting PDE on the whole of Ω̄, see [8].
Let us now define what precisely we mean by viscosity solutions to equation (3.13).

Definition 3.2 (Viscosity solutions of the limiting equation) Let μ ∈ C(Ω̄). An upper semicontinuous function u : Ω̄ → R is called a viscosity subsolution of (3.13) if

• for all x_0 ∈ Ω and φ ∈ C²(Ω̄) such that u − φ has a local maximum at x_0, it holds min{|∇φ(x_0)| − 1, −∆_∞ φ(x_0)} ≤ 0 if x_0 ∈ {μ > 0}, −∆_∞ φ(x_0) ≤ 0 if x_0 ∈ int{μ = 0}, and max{1 − |∇φ(x_0)|, −∆_∞ φ(x_0)} ≤ 0 if x_0 ∈ {μ < 0};
• it holds max_Ω̄ u + ess inf_Ω u ≤ 0.

A lower semicontinuous function u : Ω̄ → R is called a viscosity supersolution if

• for all x_0 ∈ Ω and φ ∈ C²(Ω̄) such that u − φ has a local minimum at x_0, it holds min{|∇φ(x_0)| − 1, −∆_∞ φ(x_0)} ≥ 0 if x_0 ∈ {μ > 0}, −∆_∞ φ(x_0) ≥ 0 if x_0 ∈ int{μ = 0}, and max{1 − |∇φ(x_0)|, −∆_∞ φ(x_0)} ≥ 0 if x_0 ∈ {μ < 0};
• it holds ess sup_Ω u + min_Ω̄ u ≥ 0.

A function u ∈ C(Ω̄) is called a viscosity solution of (3.13) if it is both a sub- and a supersolution. Now we can prove the main theorem of this section.
Proof of Theorem 2 The conditions of Proposition 3.2 are trivially fulfilled, which guarantees the existence of a (subsequential) uniform limit u_∞ ∈ C(Ω̄). We only show the subsolution property; the supersolution property can be shown analogously.
Let x_0 ∈ Ω and φ ∈ C²(Ω̄) be such that u_∞ − φ has a local maximum at x_0. Choose a sequence (p_i)_{i∈N} ⊂ (d, ∞) converging to ∞ such that u_{p_i} → u_∞ uniformly. Then there exists a sequence of points (x_i)_{i∈N} ⊂ Ω converging to x_0 such that u_{p_i} − φ has a local maximum at x_i for all i ∈ N. Since u_{p_i} is a viscosity solution of (1.1), by (2.2) it holds

−|∇φ(x_i)|^{p_i−2} ( ∆φ(x_i) + (p_i − 2) |∇φ(x_i)|^{−2} ∆_∞ φ(x_i) ) ≤ μ_{p_i}(x_i). (3.14)

Case 1, x_0 ∈ {μ > 0}: We have to show that

min{|∇φ(x_0)| − 1, −∆_∞ φ(x_0)} ≤ 0. (3.15)

In fact, for showing this, we will not even have to use that μ(x_0) > 0; rather, (3.15) is true for all x_0 ∈ Ω. The condition μ(x_0) > 0 will only be relevant for showing the converse inequality for supersolutions.
It is important to remark that the limiting PDE (3.13) does not admit unique solutions. This is illustrated by the following example on Ω = (−2, 2): for a whole family of parameters t, a piecewise linear function u_t with u_t(x) = x for x ∈ (−0.5, 0.5) and corners at the two points ±t is a viscosity solution of (3.13). Indeed, it is trivial to see that u_t is even a classical solution of (3.13) on (−2, 2) \ {±t}, so we just have to check the two corner points ±t. For x_0 = −t and φ ∈ C²(Ω̄) touching u_t from above at x_0, it is obvious that |φ′(x_0)| ≤ 1, and hence min{|φ′(x_0)| − 1, −∆_∞ φ(x_0)} ≤ 0. Furthermore, there is no φ ∈ C²(Ω̄) touching u_t from below at x_0. Arguing similarly for x_0 = t, one obtains that u_t is a viscosity solution of (3.13). Note that none of the functions u_t satisfies homogeneous Neumann boundary conditions.
Since the concept of viscosity solutions heavily relies on continuity and is not compatible with discontinuous or even measure-valued data μ, we have to use approximation techniques if we want to make sense of (3.13) when μ is a measure. In particular, it seems natural to replace the open set int{μ = 0} with the open set Ω \ supp μ. However, one cannot simply replace the sets {μ > 0} and {μ < 0} with supp μ⁺ and supp μ⁻ since the latter sets are not open and might even have empty interior. For arbitrary measure data μ ∈ M(Ω̄), which we extend by zero outside Ω̄, we consider the mollifications

μ_ε(x) := ∫ φ_ε(x − y) dμ(y), x ∈ R^d, (3.20)

where φ ∈ C_c^∞(R^d) is a smooth kernel with supp φ ⊂ B_1(0) and φ_ε(x) := ε^{−d} φ(x/ε). It is obvious from the definition of μ_ε that if x ∈ Ω \ supp μ, then x ∈ Ω \ supp μ_ε for all ε > 0 small enough. Furthermore, μ_ε ⇀* μ as ε ↓ 0. Using the techniques from the proof of Theorem 2, we immediately get the following result.

Corollary 3.2 Let μ ∈ M(Ω̄) and μ_p := μ_{ε_p} ∈ C(Ω̄), where lim_{p→∞} ε_p = 0 and μ_{ε_p} is defined as in (3.20). Let, furthermore, u_p ∈ W^{1,p}(Ω) be viscosity solutions of (1.1) with data μ_p ∈ C(Ω̄). Then the function u_∞ ∈ W^{1,∞}(Ω) is a viscosity solution of

−∆_∞ u = 0 in Ω \ supp μ.

Proof Let x_0 ∈ Ω \ supp μ and φ ∈ C²(Ω̄) such that u_∞ − φ has a local maximum at x_0. Choose a sequence (p_i)_{i∈N} ⊂ (d, ∞) converging to ∞ such that u_{p_i} → u_∞ uniformly. As always, there exists a sequence of points (x_i)_{i∈N} ⊂ Ω converging to x_0 such that u_{p_i} − φ has a local maximum at x_i for all i ∈ N. For all sufficiently large i ∈ N, a neighborhood of x_0 is contained in Ω \ supp μ_{p_i}, and hence μ_{p_i}(x_i) = 0. As in Case 2 of the proof of Theorem 2, we can conclude that −∆_∞ φ(x_0) ≤ 0. The supersolution property is shown analogously.

Conclusion
In this article we have investigated limits of the p-Laplace equation with measure-valued right-hand side as p → ∞. We proved the existence of (subsequential) limits and characterized them as Kantorovich potentials for the optimal transport problem of transporting the positive part of the right-hand side onto the negative one. For continuous data, we also proved that such limits are viscosity solutions of a degenerate PDE, involving the infinity Laplacian and the eikonal equation. It will be interesting to investigate in which sense the limiting PDE can be interpreted for measure-valued data, which have a support with empty interior. Here, lower / upper semicontinuous relaxations as in [20,22] might be promising tools.