Indefinite sum
In the calculus of finite differences, the indefinite sum operator (also known as the antidifference operator), denoted by $\sum_x$ or $\Delta^{-1}$,[1][2] is the linear operator inverse of the forward difference operator $\Delta$. It relates to the forward difference operator as the indefinite integral relates to the derivative. Thus,[3]
$$\Delta \sum_x f(x) = f(x).$$
If $F(x) = \sum_x f(x)$, then $\Delta F(x) = F(x+1) - F(x) = f(x)$.
The solution is not unique; it is determined only up to an additive periodic function with period 1. Therefore, each indefinite sum represents a family of functions.
The Nørlund principal solution represents the analytic solution without any such non-constant periodic terms. Two conventions exist, one for the forward difference, $\Delta$, and one for the backward difference, $\nabla$. The inverse forward difference, denoted $\Delta^{-1}$, naturally extends the summation up to $x-1$. The inverse backward difference, denoted $\nabla^{-1}$, naturally extends the summation up to $x$.
Fundamental theorem of the calculus of finite differences
Indefinite sums can be used to calculate definite sums with the formula:[4]
$$\sum_{k=a}^{b} f(k) = \Delta^{-1} f(b+1) - \Delta^{-1} f(a)$$
Alternatively, using the inverse backward difference operator, the relation is:
$$\sum_{k=a}^{b} f(k) = \nabla^{-1} f(b) - \nabla^{-1} f(a-1)$$
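The fundamental theorem can be verified numerically. A minimal sketch in Python, assuming the reconstruction $\sum_{k=a}^{b} f(k) = \Delta^{-1}f(b+1) - \Delta^{-1}f(a)$ and using $f(x) = x$ with antidifference $F(x) = x(x-1)/2$:

```python
# Check of the fundamental theorem of finite-difference calculus:
# sum_{k=a}^{b} f(k) = F(b+1) - F(a), where Delta F = f.
# Here f(x) = x and F(x) = x*(x-1)/2, since F(x+1) - F(x) = x.

def f(x):
    return x

def F(x):
    return x * (x - 1) // 2

a, b = 3, 10
direct = sum(f(k) for k in range(a, b + 1))
via_antidifference = F(b + 1) - F(a)
assert direct == via_antidifference == 52
```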
Examples
The following basic indefinite sums follow from the fundamental properties of the difference operator:[5]
- Constant: $\sum_x c = cx + C(x)$
- Falling factorial: $\sum_x x^{\underline{n}} = \dfrac{x^{\underline{n+1}}}{n+1} + C(x)$, for $n \neq -1$
- Exponential: $\sum_x a^x = \dfrac{a^x}{a-1} + C(x)$, for $a \neq 1$
- Logarithm: $\sum_x \log_b x = \log_b \Gamma(x) + C(x)$
In the above identities, $C(x)$ represents an arbitrary 1-periodic function (or a constant if the Nørlund principal solution is assumed).
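Each entry in the table can be spot-checked by applying the forward difference $\Delta F(x) = F(x+1) - F(x)$ to the claimed antidifference; a minimal numerical sketch:

```python
from math import lgamma, log, isclose

# Spot-check that each claimed antidifference F satisfies
# Delta F(x) = F(x+1) - F(x) = f(x).

def ff(x, n):
    # falling factorial x^(n) = x(x-1)...(x-n+1)
    p = 1.0
    for i in range(n):
        p *= x - i
    return p

x = 5.0

# Falling factorial: Delta[ x^(n+1) / (n+1) ] = x^(n)
n = 3
assert isclose(ff(x + 1, n + 1) / (n + 1) - ff(x, n + 1) / (n + 1), ff(x, n))

# Exponential: Delta[ a^x / (a-1) ] = a^x  (a != 1)
a = 3.0
assert isclose(a**(x + 1) / (a - 1) - a**x / (a - 1), a**x)

# Logarithm (natural base): Delta[ log Gamma(x) ] = log x
assert isclose(lgamma(x + 1) - lgamma(x), log(x))
```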
Summation by parts
Indefinite summation by parts is the discrete analog of integration by parts. It is used to find the indefinite sum of a product of two functions:[6][5]
$$\sum_x f(x)\,\Delta g(x) = f(x)\,g(x) - \sum_x \big(g(x) + \Delta g(x)\big)\,\Delta f(x)$$
Using the identity $g(x+1) = g(x) + \Delta g(x)$, this is often written more compactly as:
$$\sum_x f(x)\,\Delta g(x) = f(x)\,g(x) - \sum_x g(x+1)\,\Delta f(x)$$
A symmetrical form derived from the discrete product rule $\Delta\big(f(x)g(x)\big) = f(x)\,\Delta g(x) + g(x)\,\Delta f(x) + \Delta f(x)\,\Delta g(x)$ is:
$$\sum_x f(x)\,\Delta g(x) + \sum_x g(x)\,\Delta f(x) = f(x)\,g(x) - \sum_x \Delta f(x)\,\Delta g(x)$$
Definite summation by parts is defined as:
$$\sum_{k=a}^{b} f(k)\,\Delta g(k) = f(b+1)\,g(b+1) - f(a)\,g(a) - \sum_{k=a}^{b} g(k+1)\,\Delta f(k)$$
- Example product of a polynomial and exponential
Summation by parts is effective for products of a polynomial and an exponential, such as $x\,2^x$. To find the indefinite sum, let $f(x) = x$ and $\Delta g(x) = 2^x$:[7]
- Find the components: $\Delta f(x) = 1$ and $g(x) = 2^x$ (since $\Delta\, 2^x = 2^x$).
- Apply the summation by parts formula: $\sum_x x\,2^x = x\,2^x - \sum_x 2^{x+1} \cdot 1$.
- Evaluate the remaining sum: $\sum_x 2^{x+1} = 2^{x+1} + C(x)$.
- Result: $\sum_x x\,2^x = (x-2)\,2^x + C(x)$.
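The worked example can be checked by differencing the result; a minimal sketch, assuming the reconstructed example $f(x) = x$, $\Delta g(x) = 2^x$ with antidifference $(x-2)\,2^x$:

```python
# Check (assumed reconstruction of the worked example): summation by parts
# on x * 2^x gives the antidifference F(x) = (x - 2) * 2**x,
# i.e. F(x+1) - F(x) = x * 2**x for every x.

def F(x):
    return (x - 2) * 2**x

for x in range(0, 10):
    assert F(x + 1) - F(x) == x * 2**x
```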
Discrete analogs and alternative usage

The inverse forward difference operator, $\Delta^{-1}$, extends the summation up to $x - 1$, typically starting the iterator at $k = 0$:
$$\Delta^{-1} f(x) = \sum_{k=0}^{x-1} f(k) + C(x)$$
Some authors analytically extend the summation so that the upper limit is the argument $x$ without a shift, typically starting the iterator at $k = 1$:[8][9][10]
$$\nabla^{-1} f(x) = \sum_{k=1}^{x} f(k) + C(x)$$
In this case, the analytic continuation, $S(x)$, for the sum is a solution of $\nabla S(x) = f(x)$. Stated explicitly, that is:
$$S(x) - S(x-1) = f(x)$$
Some authors use the equivalent form called the telescoping equation:[11]
$$S(x) = S(x-1) + f(x)$$
The lower bounds of the discrete analog for both the inverse forward difference and the inverse backward difference can be an arbitrary constant other than those listed here, as the change is absorbed into the height of the 1-periodic or constant term $C(x)$.
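The discrete analog can be illustrated directly: the partial sum $S(x) = \sum_{k=0}^{x-1} f(k)$ satisfies the forward difference equation, and the empty sum fixes $S(0) = 0$. A minimal sketch:

```python
# The inverse forward difference naturally extends S(x) = sum_{k=0}^{x-1} f(k),
# which satisfies the forward difference equation S(x+1) - S(x) = f(x).

def S(x, f):
    return sum(f(k) for k in range(x))

f = lambda k: k * k
for x in range(0, 8):
    assert S(x + 1, f) - S(x, f) == f(x)
assert S(0, f) == 0  # empty sum
```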
Uniqueness of the principal solution
The functional equation $\Delta F(x) = f(x)$ does not have a unique solution. If $F(x)$ is a particular solution, then for any function $C(x)$ satisfying $C(x+1) = C(x)$ (i.e., any 1-periodic function), the function $F(x) + C(x)$ is also a solution. Therefore, the indefinite sum operator defines a family of functions differing by an arbitrary 1-periodic component, $C(x)$.
To select the unique principal solution (German: Hauptlösung)[12] up to an additive constant (instead of up to the additive 1-periodic function $C(x)$) one must impose additional constraints.
Complex analysis (exponential type)
Following the theory developed by Niels Erik Nørlund,[12] the indefinite sum can be uniquely determined for analytic functions by imposing a restriction on their growth in the complex plane. Specifically, by imposing minimal growth, the non-constant periodic terms can be filtered out.
Suppose $f$ is analytic in a vertical strip containing the real axis, and let $F$ be an analytic solution of $\Delta F = f$ in that strip. To ensure uniqueness, require $F$ to be of minimal growth, specifically to be of exponential type less than $2\pi$ in the imaginary direction. That is, there exist constants $c > 0$ and $\tau < 2\pi$ such that $|F(x + iy)| \le c\,e^{\tau |y|}$ as $|y| \to \infty$.[13][14]
Let $F_1$ and $F_2$ be two analytic solutions satisfying this growth condition. Their difference $P = F_1 - F_2$ is then analytic, 1-periodic (i.e., $P(x+1) = P(x)$), and inherits the same exponential type less than $2\pi$.
Nørlund uses a fundamental result in complex analysis (related to Carlson's theorem and the Paley–Wiener theorem) which states that a non-constant periodic entire function must have exponential type at least $2\pi$.[12] This follows from its Fourier series expansion: if $P$ is non-constant, its Fourier series contains a term $c_n e^{2\pi i n x}$ with $n \neq 0$, which has type $2\pi |n| \ge 2\pi$. Since $P$ has type strictly less than $2\pi$, it cannot contain any such term and therefore must be constant.
Real analysis (higher‑order convexity)
In real analysis, the uniqueness condition can be given using higher-order convexity, generalizing the Bohr–Mollerup theorem. For an integer $p \ge 0$, a function is called $p$-convex if its divided differences of order $p+1$ are non-negative, and $p$-concave if those divided differences are non-positive. A function is called eventually $p$-convex (resp. eventually $p$-concave) if there exists $x_0$ such that it is $p$-convex (resp. $p$-concave) on the interval $[x_0, \infty)$.
Marichal and Zenaïdi proved the following uniqueness theorem, their method requiring the solution to be eventually $p$-convex or eventually $p$-concave.[15][16]
Theorem. Let $p \ge 0$ be an integer and let $f\colon (0,\infty) \to \mathbb{R}$ satisfy $\Delta^{p} f(x) \to 0$ as $x \to \infty$. If $F$ is an eventually $p$-convex or eventually $p$-concave solution of $\Delta F = f$, then $F$ is uniquely determined up to an additive constant. Moreover, for any $x > 0$,
$$F(x) = F(1) + \lim_{n \to \infty}\left(\sum_{k=1}^{n-1} f(k) \;-\; \sum_{k=0}^{n-1} f(x+k) \;+\; \sum_{j=1}^{p} \binom{x}{j}\,\Delta^{j-1} f(n)\right),$$
and the convergence is uniform on bounded subsets of $(0, \infty)$.
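The classical $p = 1$ instance of this characterization is the gamma function: with $f = \log$ (so $\Delta f(x) = \log(1 + 1/x) \to 0$), the eventually convex solution of $\Delta F = \log$ is $F = \log\Gamma$, and the limit formula specializes to a Gauss-type product. A minimal numerical sketch of that special case:

```python
from math import lgamma, log

# Gauss-type limit for the p = 1 case (f = log, F = log Gamma):
#   log Gamma(x) = lim_n [ x log n + sum_{k=1}^{n-1} log k - sum_{k=0}^{n-1} log(x + k) ]

def log_gamma_limit(x, n=100000):
    return x * log(n) + sum(log(k) for k in range(1, n)) - sum(log(x + k) for k in range(n))

assert abs(log_gamma_limit(0.5) - lgamma(0.5)) < 1e-4
assert abs(log_gamma_limit(3.0) - lgamma(3.0)) < 1e-4
```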
Müller–Schleicher axiomatic method
In their paper How to Add a Noninteger Number of Terms,[8] Müller and Schleicher introduced an axiomatic approach to fractional summation with a real or complex number of terms. Their method extends the classical discrete sum
$$\sum_{k=1}^{n} f(k)$$
to non-integer and complex upper limits $n$. The definition is built upon six natural axioms that uniquely determine the extension of fractional sums for functions that grow polynomially. Axioms S1 through S6 are as follows:
- Continued Summation: $\sum_{k=a}^{b} f(k) + \sum_{k=b+1}^{c} f(k) = \sum_{k=a}^{c} f(k)$.
- Translation Invariance: $\sum_{k=a+s}^{b+s} f(k) = \sum_{k=a}^{b} f(k+s)$.
- Linearity: $\sum_{k=a}^{b} \big(\lambda f(k) + \mu g(k)\big) = \lambda \sum_{k=a}^{b} f(k) + \mu \sum_{k=a}^{b} g(k)$.
- Empty Sum Condition: $\sum_{k=1}^{0} f(k) = 0$ (equivalent to the empty sum condition).
- Holomorphy for Monomials: for each $p \in \mathbb{N}_0$, the map $x \mapsto \sum_{k=1}^{x} k^{p}$ is holomorphic in $x$.
- Right-Shift Continuity: if $f(x+n) \to 0$ pointwise as $n \to \infty$, then $\sum_{k=1}^{x} f(k+n) \to 0$; more generally, if $f(x+n)$ can be approximated by polynomials $p_n$ of fixed degree with $f(x+n) - p_n(x) \to 0$, then:
$$\sum_{k=1}^{x} \big(f(k+n) - p_n(k)\big) \to 0.$$
This axiomatic framework assumes that for sufficiently large inputs, the indefinite sum behaves like a polynomial (S5); this asymptotic behavior is then "stepped back" to the rest of the complex plane using translation invariance (S2) and continuity (S6) to uniquely determine the analytic continuation.
A function $f$ is called fractionally summable of degree $d$ if, for large $n$, the shifted values $f(x+n)$ can be approximated by a sequence of polynomials $p_n$ of fixed degree $d$, with the error tending to zero. For such functions, the fractional sum is uniquely given by the limit:
$$\sum_{k=1}^{x} f(k) = \lim_{n \to \infty}\left(\sum_{k=1}^{n} \big(f(k) - f(k+x)\big) + \sum_{k=n+1}^{n+x} p_n(k)\right),$$
where the sum over the polynomials $p_n$ is evaluated via the polynomial antidifference formula.
In the simplest case, when $f(x) \to 0$ as $x \to \infty$ (i.e., the approximating polynomials are zero), this reduces to:
$$\sum_{k=1}^{x} f(k) = \sum_{n=1}^{\infty} \big(f(n) - f(n+x)\big).$$
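This simplest case can be tried numerically. A minimal sketch, assuming the series form $\sum_{k=1}^{x} f(k) = \sum_{n \ge 1}(f(n) - f(n+x))$ with $f(k) = 1/k$, which produces the harmonic numbers extended to non-integer arguments:

```python
from math import log

# Fractional harmonic number via the decaying-summand case of fractional summation:
#   H(x) = sum_{n=1}^inf ( 1/n - 1/(n + x) )
# (truncated; the tail is of order x / terms)

def H(x, terms=200000):
    return sum(1.0 / n - 1.0 / (n + x) for n in range(1, terms))

# integer upper limits reproduce ordinary partial sums
assert abs(H(1) - 1.0) < 1e-4
assert abs(H(3) - (1 + 1/2 + 1/3)) < 1e-4
# a non-integer number of terms: H(1/2) = 2 - 2 ln 2
assert abs(H(0.5) - (2 - 2 * log(2))) < 1e-4
```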
Symmetry of the principal solution
Following directly from uniqueness, if $f$ is a meromorphic function, one can define a unique analytic solution $F = \nabla^{-1} f$ of the backward difference sum, by imposing the conditions that:
- Difference Equation: $F(x) - F(x-1) = f(x)$.
- Normalization: $F(0) = 0$ (empty sum boundary condition).
- Growth constraint: $F$ has exponential type less than $2\pi$ in the imaginary direction.
Under these conditions, $\nabla^{-1} f$ satisfies a reflection formula (referred to by Nørlund as the Ergänzungssatz, a complement theorem to the uniqueness of the principal solution [Hauptlösung]), stated for a general span $\omega$, the step of the difference.[17]

Odd functions
If $f$ is an odd function ($f(-x) = -f(x)$), the unique analytic solution satisfies:[17]
This represents a point symmetry about .
Even functions
If $f$ is an even function ($f(-x) = f(x)$), the unique analytic solution satisfies:[17]
.
Relationship to indefinite products
In the symbolic method developed by L. M. Milne-Thomson, the indefinite product operator $\prod_x$ serves as the multiplicative analog to the indefinite sum. It is defined by the first-order homogeneous equation
$$\frac{F(x+1)}{F(x)} = f(x).$$
By taking the logarithm of the product formula, one obtains the telescoping identity $\ln F(x+1) - \ln F(x) = \ln f(x)$.[18] This allows any indefinite product to be expressed through an indefinite sum:
$$\prod_x f(x) = \exp\!\left(\sum_x \ln f(x)\right) P(x),$$
where $P(x)$ is an arbitrary periodic function of period 1.[19] Conversely, an indefinite sum may be represented as the logarithm of an indefinite product:
$$\sum_x f(x) = \ln \prod_x e^{f(x)}.$$
Expansions and definitions
Newton series
For an entire function of exponential type less than $\ln 2$,[20] the inverse forward difference operator, $\Delta^{-1}$, can be expressed by its Newton series expansion:[21][22]
$$\Delta^{-1} f(x) = \sum_{k=0}^{\infty} \binom{x}{k+1}\,\Delta^{k} f(0) + C(x),$$
- where $\binom{x}{k+1} = \dfrac{x^{\underline{k+1}}}{(k+1)!}$, and $x^{\underline{k}}$ is the falling factorial.
Bernoulli‑operator series expansion
Formally, the inverse forward difference operator can be expressed in terms of the derivative operator $D$ using the exponential generating function of the Bernoulli numbers:[23][24][25]
$$\Delta^{-1} = \frac{1}{e^{D} - 1} = \frac{1}{D}\cdot\frac{D}{e^{D} - 1} = \sum_{n=0}^{\infty} \frac{B_n}{n!}\,D^{\,n-1},$$
where $D^{-1}$ denotes the antiderivative and $B_n$ are the Bernoulli numbers defined by the generating function $\dfrac{t}{e^{t} - 1} = \sum_{n=0}^{\infty} \dfrac{B_n}{n!}\,t^{n}$. Under this convention $B_1 = -\tfrac{1}{2}$.
If $f$ is a polynomial, only finitely many terms of the series are non-zero, since repeated differentiation of a polynomial eventually yields zero. For $f(x) = x^{n}$ one obtains the antidifference:[24]
$$\Delta^{-1} x^{n} = \frac{B_{n+1}(x)}{n+1} + C(x),$$
where $B_n(x)$ are the Bernoulli polynomials of the first order.[24]
If $f$ admits a Maclaurin series expansion $f(x) = \sum_{n=0}^{\infty} a_n x^{n}$, the antidifference of the monomials in the series expansion yields the formal series:[25]
$$\Delta^{-1} f(x) = \sum_{n=0}^{\infty} a_n\,\frac{B_{n+1}(x)}{n+1} + C(x).$$
For non‑polynomials this expansion is generally asymptotic.
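The polynomial antidifference $\Delta^{-1} x^{n} = B_{n+1}(x)/(n+1)$ can be checked computationally. A minimal sketch using exact rational arithmetic, with the convention $B_1 = -\tfrac12$ and the standard recurrence for Bernoulli numbers:

```python
from fractions import Fraction
from math import comb

# Bernoulli numbers via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0 (B_0 = 1),
# which yields the convention B_1 = -1/2.
def bernoulli(m):
    B = [Fraction(1)]
    for k in range(1, m + 1):
        B.append(-sum(Fraction(comb(k + 1, j)) * B[j] for j in range(k)) / (k + 1))
    return B

# Bernoulli polynomial B_n(x) = sum_k C(n, k) B_k x^{n-k}
def bernoulli_poly(n, x):
    B = bernoulli(n)
    return sum(Fraction(comb(n, k)) * B[k] * x**(n - k) for k in range(n + 1))

# Antidifference of x^n: Delta[ B_{n+1}(x) / (n+1) ] = x^n
def antidifference_power(n, x):
    return bernoulli_poly(n + 1, Fraction(x)) / (n + 1)

# Check the difference equation, and recover 0^2 + 1^2 + ... + 6^2:
x, n = 7, 2
assert antidifference_power(n, x + 1) - antidifference_power(n, x) == x**n
assert antidifference_power(2, 7) - antidifference_power(2, 0) == sum(k * k for k in range(7))
```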
- Relation to the inverse backward difference
If one instead expands the inverse backward difference operator, $\nabla^{-1} = \dfrac{1}{1 - e^{-D}}$ (which extends the summation up to $x$), it admits the same expansion, but with the convention $B_1 = +\tfrac{1}{2}$, i.e. with $B_n(1) = (-1)^{n} B_n$ in place of $B_n$.
Euler–Maclaurin formula
The Euler–Maclaurin formula extends the expansion:[9][13]
$$\sum_{k=a}^{b} f(k) = \int_a^b f(t)\,dt + \frac{f(a) + f(b)}{2} + \sum_{j=1}^{m} \frac{B_{2j}}{(2j)!}\left(f^{(2j-1)}(b) - f^{(2j-1)}(a)\right) + R_m,$$
where $B_{2j}$ are the even Bernoulli numbers, $m$ is an arbitrary positive integer, and $R_m$ is the remainder term given by:
$$R_m = \int_a^b \frac{B_{2m} - \tilde{B}_{2m}(t)}{(2m)!}\,f^{(2m)}(t)\,dt,$$
with $\tilde{B}_{2m}(t) = B_{2m}\big(t - \lfloor t \rfloor\big)$ being the periodized Bernoulli function related to the Bernoulli polynomials.
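For a polynomial summand the expansion terminates and is exact; a minimal worked check with $f(x) = x^{3}$ on $[0, n]$, where only the $B_2 = \tfrac16$ term survives:

```python
from fractions import Fraction

# Euler-Maclaurin is exact for f(x) = x^3:
# sum_{k=0}^{n} k^3 = int_0^n x^3 dx + (f(0) + f(n))/2 + (B_2 / 2!) (f'(n) - f'(0)),
# with B_2 = 1/6; all higher correction terms vanish.

n = 10
f = lambda k: k**3
lhs = sum(f(k) for k in range(n + 1))
# integral n^4/4, boundary term n^3/2, and (1/12) * (3 n^2 - 0)
rhs = Fraction(n**4, 4) + Fraction(n**3, 2) + Fraction(1, 12) * 3 * n**2
assert lhs == rhs == 3025
```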
Laplace summation (Gregory summation formula)
Laplace's summation formula, closely related to the Gregory summation formula, can be seen as the discrete counterpart to the Euler–Maclaurin formula. The inverse forward difference satisfies:[26][27][7][28]
$$\Delta^{-1} f(x) = \int_0^x f(t)\,dt - \sum_{n=1}^{\infty} \frac{c_n}{n!}\,\Delta^{n-1} f(x) + C(x),$$
- where $c_n = \displaystyle\int_0^1 t^{\underline{n}}\,dt$ are the Cauchy numbers of the first kind,
- and $t^{\underline{n}} = t(t-1)\cdots(t-n+1)$ is the falling factorial.
Truncating the series after finitely many terms leaves a remainder that can be expressed as an integral of a higher derivative of $f$ times a periodized Bernoulli polynomial.[7][28] In the notation of Charles Jordan, Gregory's formula is:[7]
$$\nabla^{-1} f(x) = \int_0^x f(t)\,dt + \sum_{n=1}^{\infty} (-1)^{n-1}\,b_n\,\nabla^{n-1} f(x) + C(x),$$
where the coefficients $b_n = \tfrac{c_n}{n!}$ are the Bernoulli numbers of the second kind. Note the argument is $x$ without a shift, aligning with the inverse backward difference.
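The Bernoulli numbers of the second kind (Gregory coefficients) $b_n$, generated by $t/\ln(1+t) = \sum_n b_n t^n$, can be computed by a convolution recurrence; a minimal sketch in exact arithmetic:

```python
from fractions import Fraction

# Bernoulli numbers of the second kind b_n (Gregory coefficients), from
# t / ln(1+t) = sum_n b_n t^n, via the convolution recurrence
#   b_n = -sum_{k=1}^{n} (-1)^k / (k+1) * b_{n-k},  b_0 = 1.

def gregory(n):
    b = [Fraction(1)]
    for m in range(1, n + 1):
        b.append(-sum(Fraction((-1)**k, k + 1) * b[m - k] for k in range(1, m + 1)))
    return b

b = gregory(4)
assert b[1] == Fraction(1, 2)
assert b[2] == Fraction(-1, 12)
assert b[3] == Fraction(1, 24)
assert b[4] == Fraction(-19, 720)
```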
Abel–Plana formula
The indefinite sum can be analytically continued by applying the standard Abel–Plana formula to the finite sum and then analytically continuing the integer limit to the variable $x$. This yields the formula:[10]
$$\Delta^{-1} f(x) = \int_0^x f(t)\,dt - \frac{f(x)}{2} - i\int_0^{\infty} \frac{f(x+iy) - f(x-iy)}{e^{2\pi y} - 1}\,dy + C$$
This analytic continuation is valid when the conditions for the original formula are met. The sufficient conditions are:[13][14]
- Analyticity: $f$ must be analytic in the closed vertical strip between the lines $\Re z = 0$ and $\Re z = \Re x$. The formula provides the analytic solution up to, but not beyond, the nearest singularities of $f$ to the line $\Re z = 0$.
- Growth: $f$ must be of exponential type less than $2\pi$ in this strip, satisfying $|f(z)| \le c\,e^{\tau |\Im z|}$ for some $c > 0$ and $\tau < 2\pi$, as $|\Im z| \to \infty$.
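The standard Abel–Plana formula underlying this continuation can be checked numerically. A minimal sketch for $f(z) = 1/(z+1)^3$ (analytic and decaying in the right half-plane), comparing the formula against direct evaluation of $\sum_{n \ge 0} f(n)$; the integration routine and truncation limits are illustrative choices:

```python
import cmath
import math

# Abel-Plana formula:
#   sum_{n=0}^inf f(n) = f(0)/2 + int_0^inf f(t) dt
#                        + i int_0^inf [f(it) - f(-it)] / (e^{2 pi t} - 1) dt

def f(z):
    return 1.0 / (z + 1)**3

def simpson(g, a, b, n):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2*i*h) for i in range(1, n // 2))
    return s * h / 3

# main terms: f(0)/2 plus the integral, truncated at t = 1000
main = f(0) / 2 + simpson(f, 0.0, 1000.0, 100000)
# correction integrand decays like e^{-2 pi t}, so [0, 15] suffices
corr = simpson(lambda t: (1j * (f(1j*t) - f(-1j*t)) / (cmath.exp(2*math.pi*t) - 1)).real,
               1e-9, 15.0, 20000)
reference = sum(f(n) for n in range(100000))  # direct partial sum (≈ zeta(3))
assert abs(main + corr - reference) < 1e-4
```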
Choice of the constant term
Analytic continuation of discrete sums

The constant term $C$, in the context of indefinite sums naturally extending the discrete summation, is often defined based on the respective empty sum.
For the inverse forward difference, $\Delta^{-1}$, the typical summation equivalent is $\sum_{k=0}^{x-1} f(k)$, so the empty sum is $0$ when $x = 0$, as it corresponds to the normalization $\Delta^{-1} f(0) = 0$.
For the inverse backward difference, $\nabla^{-1}$, the typical summation equivalent is $\sum_{k=1}^{x} f(k)$, so the empty sum is $0$ when $x = 0$, as it corresponds to the normalization $\nabla^{-1} f(0) = 0$.
Normalization
In older texts relating to Bernoulli polynomials (predating more modern analytic techniques), the constant was often fixed using integral conditions.
Let $F(x) = \Delta^{-1} f(x)$. Then, the constant is fixed from the condition $\int_0^1 F(t)\,dt = 0$ or $\int_x^{x+1} F(t)\,dt = \int_0^x f(t)\,dt$.
Let $F(x) = \nabla^{-1} f(x)$. Then, the constant is fixed from the condition $\int_{-1}^{0} F(t)\,dt = 0$ or $\int_{x-1}^{x} F(t)\,dt = \int_0^x f(t)\,dt$.
Alternatively, Ramanujan summation can be used to fix the constant term, with the normalization point taken at $0$ or at $1$, respectively.[29][30]
See also
References
- ^ Man, Yiu-Kwong (1993), "On computing closed forms for indefinite summations", Journal of Symbolic Computation, 16 (4): 355–376, doi:10.1006/jsco.1993.1053, MR 1263873
- ^ Goldberg, Samuel (1986) [1958]. Introduction to Difference Equations, with Illustrative Examples from Economics, Psychology, and Sociology. New York: Dover Publications. p. 41. ISBN 978-0-486-65084-5. MR 0094249.
If $F$ is a function whose first difference is the function $f$, then $F$ is called an indefinite sum of $f$ and denoted by $\Delta^{-1} f$.
- ^ Kelley, Walter G.; Peterson, Allan C. (2001). Difference Equations: An Introduction with Applications. Academic Press. p. 20. ISBN 0-12-403330-X.
- ^ "Handbook of discrete and combinatorial mathematics", Kenneth H. Rosen, John G. Michaels, CRC Press, 1999, ISBN 0-8493-0149-1
- ^ a b Jordan, Charles (1960). Calculus of Finite Differences (Second ed.). New York, NY: Chelsea Publishing Company. pp. 104–107.
- ^ Kelley, Walter G.; Peterson, Allan C. (2001). Difference Equations: An Introduction with Applications. Academic Press. p. 24. ISBN 0-12-403330-X.
- ^ a b c d Jordan, Charles (1960). Calculus of Finite Differences (Second ed.). New York, NY: Chelsea Publishing Company. pp. 284–285.
- ^ a b Markus Müller and Dierk Schleicher, How to Add a Noninteger Number of Terms: From Axioms to New Identities, Amer. Math. Mon. 118(2), 136-152 (2011).
- ^ a b Candelpergher, Bernard (2017). "Ramanujan Summation of Divergent Series" (PDF). HAL Archives Ouvertes. p. 3. Retrieved 2025-12-07.
- ^ a b Candelpergher, Bernard (2017). "Ramanujan Summation of Divergent Series" (PDF). HAL Archives Ouvertes. p. 23. Retrieved 2025-12-07.
- ^ Algorithms for Nonlinear Higher Order Difference Equations, Manuel Kauers
- ^ a b c Nörlund, Niels Erik. Vorlesungen über Differenzenrechnung. Springer. pp. 40–44. ISBN 978-3-642-50514-0.
- ^ a b c "§2.10 Sums and Sequences". NIST Digital Library of Mathematical Functions. National Institute of Standards and Technology. Retrieved 2025-11-20.
- ^ a b Olver, Frank W. J. (1997). Asymptotics and Special Functions. A K Peters Ltd. p. 290. ISBN 978-1-56881-069-0.
- ^ Marichal, Jean‑Luc; Zenaïdi, Naïm (2024). "A generalization of Bohr‑Mollerup's theorem for higher order convex functions: a tutorial". Aequationes Mathematicae. 98 (2): 455–481. arXiv:2207.12694. doi:10.1007/s00010-023-00968-9.
- ^ Marichal, Jean‑Luc; Zenaïdi, Naïm (2022). A Generalization of Bohr‑Mollerup's Theorem for Higher Order Convex Functions. Developments in Mathematics. Vol. 70. Springer. doi:10.1007/978-3-030-95088-0. ISBN 978-3-030-95087-3.
- ^ a b c Nörlund, Niels Erik. Vorlesungen über Differenzenrechnung. Springer. p. 74. ISBN 978-3-642-50514-0.
- ^ Nörlund, Niels Erik. Vorlesungen über Differenzenrechnung. Springer. p. 109. ISBN 978-3-642-50514-0.
- ^ Milne-Thomson, L. M. (1933). The Calculus of Finite Differences. Macmillan and Co. pp. 324–325.
- ^ Nörlund, Niels Erik. Vorlesungen über Differenzenrechnung. Springer. p. 237. ISBN 978-3-642-50514-0.
- ^ Newton, Isaac (1687). Principia, Book III, Lemma V, Case 1.
- ^ Iaroslav V. Blagouchine (2018). "Three notes on Ser's and Hasse's representations for the zeta-functions" (PDF). Integers (Electronic Journal of Combinatorial Number Theory). 18A: 1–45. arXiv:1606.02044. doi:10.5281/zenodo.10581385.
- ^ Steffensen, J. F. (1950). Interpolation (2nd ed.). New York, NY: Chelsea Publishing Company. p. 192.
- ^ a b c Milne-Thomson, L. M. (1933). The Calculus of Finite Differences. Macmillan and Co. pp. 139–140.
- ^ a b Nörlund, Niels Erik. Vorlesungen über Differenzenrechnung. Springer. pp. 142–143. ISBN 978-3-642-50514-0.
- ^ Bernoulli numbers of the second kind on Mathworld
- ^ Ferraro, Giovanni (2008). The Rise and Development of the Theory of Series up to the Early 1820s. Springer Science+Business Media, LLC. p. 248. ISBN 978-0-387-73468-2.
- ^ a b Milne-Thomson, L. M. (1933). The Calculus of Finite Differences. Macmillan and Co. pp. 180–181.
- ^ Bruce C. Berndt, Ramanujan's Notebooks Archived 2006-10-12 at the Wayback Machine, Ramanujan's Theory of Divergent Series, Chapter 6, Springer-Verlag (ed.), (1939), pp. 133–149.
- ^ Éric Delabaere, Ramanujan's Summation, Algorithms Seminar 2001–2002, F. Chyzak (ed.), INRIA, (2003), pp. 83–88.
Further reading
- "Difference Equations: An Introduction with Applications", Walter G. Kelley, Allan C. Peterson, Academic Press, 2001, ISBN 0-12-403330-X
- Markus Müller. How to Add a Non-Integer Number of Terms, and How to Produce Unusual Infinite Summations
- Markus Mueller, Dierk Schleicher. Fractional Sums and Euler-like Identities
- S. P. Polyakov. Indefinite summation of rational functions with additional minimization of the summable part. Programmirovanie, 2008, Vol. 34, No. 2.
- "Finite-Difference Equations And Simulations", Francis B. Hildebrand, Prentice-Hall, 1968
External links
- Brian Hamrick: Discrete Calculus (PDF, 70 kB)
- Interactive visualization of the Nörlund principal solution for inverse backward differences. Implements Candelpergher's analytic continuation (Abel-Plana formula with recurrence) for visualizing Nörlund's principal solution.