Open Access
Why Least Squares and Maximum Entropy? An Axiomatic Approach to Inference for Linear Inverse Problems
Imre Csiszár
Ann. Statist. 19(4): 2032-2066 (December, 1991). DOI: 10.1214/aos/1176348385


An attempt is made to determine the logically consistent rules for selecting a vector from any feasible set defined by linear constraints, when either all $n$-vectors or those with positive components or the probability vectors are permissible. Some basic postulates are satisfied if and only if the selection rule is to minimize a certain function which, if a "prior guess" is available, is a measure of distance from the prior guess. Two further natural postulates restrict the permissible distances to the author's $f$-divergences and Bregman's divergences, respectively. As corollaries, axiomatic characterizations of the methods of least squares and minimum discrimination information are arrived at. Alternatively, the latter are also characterized by a postulate of composition consistency. As a special case, a derivation of the method of maximum entropy from a small set of natural axioms is obtained.
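As an illustration of the selection rules the abstract describes (this example is not from the paper itself), here is a minimal sketch of the maximum-entropy rule for a single linear constraint: among all probability vectors with a prescribed mean, select the one of maximal entropy. By Lagrange duality the solution has exponential form, so the multiplier can be found by bisection. The function name and the dice example are illustrative choices, not the author's notation.

```python
import math

def maxent_mean(values, target_mean, tol=1e-10):
    """Maximum-entropy probability vector on `values` with a given mean.

    Among all probability vectors p satisfying
        sum_i p[i] * values[i] == target_mean,
    return the one maximizing the entropy -sum_i p[i] * log(p[i]).
    By Lagrange duality the solution has the exponential form
        p[i] proportional to exp(lam * values[i]),
    and the induced mean is increasing in lam, so we locate the
    multiplier lam by bisection.
    """
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(wi * v for wi, v in zip(w, values)) / z

    lo, hi = -50.0, 50.0  # bracket for the Lagrange multiplier
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Classic illustration: the maximum-entropy distribution of a die
# whose average roll is constrained to be 4.5.
p = maxent_mean([1, 2, 3, 4, 5, 6], 4.5)
```

With a uniform prior guess this is exactly maximum entropy; replacing the entropy by a distance to a nonuniform prior guess gives the minimum-discrimination-information rule the abstract characterizes.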




Published: December, 1991
First available in Project Euclid: 12 April 2007

zbMATH: 0753.62003
MathSciNet: MR1135163
Digital Object Identifier: 10.1214/aos/1176348385

Primary: 62A99
Secondary: 68T01 , 92C55 , 94A17

Keywords: image reconstruction , linear constraints , logically consistent inference , minimum discrimination information , nonlinear projection , nonsymmetric distance , selection rules

Rights: Copyright © 1991 Institute of Mathematical Statistics
