Stochastic Interpolation: A Probabilistic View
Document Type
Conference Proceeding
Publication Date
9-25-2008
Department
Mathematics
School
Mathematics and Natural Sciences
Abstract
Stochastic interpolation (SI) is a data regularization framework, based on a probabilistic method, that recovers well-behaved functional representations of input data. SI splits the interpolation operator into a discrete deconvolution followed by a discrete convolution of the data. At the heart of the process is a row-stochastic matrix that approximates the data by a probabilistic weighting of the data values, allowing statistical models to be incorporated directly into data regularization. We examine connections to radial basis functions and posit that SI is a general framework providing a unique mechanism for linking statistical data analysis with conventional interpolation and approximation methods built on nonnegative operators. SI can be implemented flexibly to yield data approximation, interpolation, peak sharpening, nonlinear smoothing, and all manner of hybrid schemes in a principled way through a deliberate choice of generators for the row space of the convolution matrix. © 2008 IEEE.
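The deconvolution-then-convolution structure described in the abstract can be sketched numerically. The following is an illustrative sketch only, not the authors' exact formulation: the Gaussian generator, the bandwidth parameter `sigma`, and the use of a direct linear solve for the deconvolution step are all assumptions made for demonstration. The key property illustrated is that when the same row-stochastic matrix is used for both steps, the scheme interpolates the data exactly.

```python
import numpy as np

def row_stochastic_matrix(x_rows, x_cols, sigma):
    # Gaussian kernel matrix (an assumed generator choice) with each row
    # normalized to sum to 1, making the matrix row stochastic.
    K = np.exp(-((x_rows[:, None] - x_cols[None, :]) ** 2) / (2 * sigma**2))
    return K / K.sum(axis=1, keepdims=True)

def stochastic_interpolation(x_data, y_data, x_eval, sigma):
    # Deconvolution step: solve B c = y on the data nodes to undo the
    # smoothing that the stochastic matrix would otherwise apply.
    B = row_stochastic_matrix(x_data, x_data, sigma)
    c = np.linalg.solve(B, y_data)
    # Convolution step: apply a row-stochastic matrix built at the
    # evaluation points to the deconvolved coefficients.
    A = row_stochastic_matrix(x_eval, x_data, sigma)
    return A @ c

# Data on a coarse grid, evaluated on a fine grid.
x = np.linspace(0.0, 1.0, 9)
y = np.sin(2 * np.pi * x)
xe = np.linspace(0.0, 1.0, 101)
ye = stochastic_interpolation(x, y, xe, sigma=0.1)
```

Because the evaluation matrix equals the data-node matrix when `x_eval == x_data`, the composition reproduces the data values exactly at the nodes; choosing a different (e.g. wider) generator for the convolution step instead yields smoothing or other hybrid behavior, in the spirit of the "deliberate choice of generators" the abstract mentions.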
Publication Title
Proceedings BLISS 2008 - 2008 ECSIS Symposium on Bio-inspired, Learning, and Intelligent Systems for Security
First Page
129
Last Page
135
Recommended Citation
Kolibal, J., & Howard, D. (2008). Stochastic Interpolation: A Probabilistic View. Proceedings BLISS 2008 - 2008 ECSIS Symposium on Bio-inspired, Learning, and Intelligent Systems for Security, 129-135.
Available at: https://aquila.usm.edu/fac_pubs/21230