Book Volume 1
Page: i-i (1)
Author: Michael Saunders
Page: ii-iii (2)
Author: Victor Pereyra, Mountain View and Godela Scherer
Page: iv-vi (3)
Author: V. Pereyra and G. Scherer
Page: 1-26 (26)
Author: Victor Pereyra and Godela Scherer
In this initial chapter we consider some of the basic methods used in fitting data by real and complex linear combinations of exponentials. We have selected the classes of methods most frequently used across many different fields: variable projections for solving this separable nonlinear least squares problem; derivatives and variants of Prony’s method, which rely on evenly sampled data and take special advantage of the particular form of the approximation; and finally the matrix-pencil method. We have also implemented some of these techniques and compared them on a few examples to support comments on their advantages and disadvantages and to exemplify their performance in terms of computing time and robustness, especially considering that this is, in many cases, a notoriously ill-conditioned problem.
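As a rough illustration of the classical method this chapter builds on, the sketch below implements the textbook three-step form of Prony’s method (linear prediction, polynomial root-finding, Vandermonde solve). It is not the book’s implementation; the function name and the two-exponential test signal are invented for illustration:

```python
import numpy as np

def prony(y, p):
    """Classical Prony's method: fit y[k] ~ sum_j c_j * z_j**k
    to N evenly spaced samples using p exponential terms."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # Step 1: linear prediction -- solve for the coefficients a of the
    # characteristic polynomial from the recurrence the samples satisfy:
    # y[k+p] + a[p-1]*y[k+p-1] + ... + a[0]*y[k] = 0.
    A = np.column_stack([y[i:N - p + i] for i in range(p)])
    b = -y[p:N]
    a = np.linalg.lstsq(A, b, rcond=None)[0]
    # Step 2: the exponential bases z_j are the roots of
    # z**p + a[p-1]*z**(p-1) + ... + a[0].
    z = np.roots(np.concatenate(([1.0], a[::-1])))
    # Step 3: solve the tall Vandermonde system V[k, j] = z_j**k
    # for the linear amplitudes c.
    V = np.vander(z, N, increasing=True).T
    c = np.linalg.lstsq(V, y.astype(V.dtype), rcond=None)[0]
    return z, c

k = np.arange(20)
y = 2.0 * 0.9**k + 1.0 * 0.5**k
z, c = prony(y, 2)          # recovers bases near {0.9, 0.5}, amplitudes near {2, 1}
```

On noise-free, evenly sampled data the recovery is essentially exact; the chapter’s remark about ill-conditioning shows up as soon as noise or closely spaced exponents enter.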
Page: 27-51 (25)
Author: Diana M. Sima, Jean-Baptiste Poullet and Sabine Van Huffel
Magnetic Resonance Spectroscopy (MRS) is one of the practical biomedical applications where exponential data fitting is an essential tool. An MRS signal is a complex-valued time-domain signal that in theory satisfies a model expressed as a sum of complex damped exponentials. The model parameters provide useful information about metabolites, i.e., about the chemical content of the sample or tissue under study. In vivo MRS signals are characterized by the presence of noise and by deviations from the theoretical model. Over the years many fitting methods have been employed for the so-called “metabolite quantification” problem. These include, to mention just a few, subspace-based methods such as HSVD and optimization-based methods such as VARPRO. In recent years more focus has been placed upon methods that incorporate prior knowledge into the model. Initially, prior knowledge was included in the form of simple relationships between parameters of the exponential model (for instance, linear relations between the parameters of individual exponential components when it is known that these components originate from the same metabolite). More advanced methods are able to include much more prior knowledge by combining whole metabolite profiles into the model; these metabolite profiles are quantum-mechanically simulated or in vitro measured MRS signals. Other interesting computational issues related to MRS quantification include preprocessing techniques (such as filtering out unwanted spectral components) aimed at modifying the measured data, without loss of essential information, so as to match the assumed exponential model. This chapter reviews time-domain MRS quantification methods and related preprocessing techniques, emphasizes computational aspects such as the use of variable projection in optimization-based methods, and points to some of the features of existing software packages for MRS.
Page: 52-70 (19)
Author: Marco Paluszny, Marianela Lentini, Miguel Martin-Landrove, Wuilian Torres and Rafael Martin
We consider synthetic magnetic resonance images of a brain slice generated with the BrainWeb resource. They correspond to measurements taken at various times and record the intensity of the response signal of the probed tissue to a magnetic pulse. The specific property considered in this chapter is transverse magnetization. The transverse magnetization decay technique can be used to obtain several images for a given axial slice of tissue. Namely, for each pixel the uniformly time-sampled sequence of transverse magnetization measurements yields information about the tissues at that pixel, and for a given time the responses of all the pixels form an image of the slice. In clinical studies this data is acquired using the magnetic resonance procedure. Mathematically this decay is described as a linear combination of decaying exponentials and it correlates strongly with the tissue type at each pixel. We consider several approaches to extract the exponents and estimates of the fractions of each tissue type for every pixel in a region of interest. The main thrust is on separation-of-variables techniques, looking at Prony’s method, some special Vandermonde systems and linear regression. We compare a Prony technique with the classical separable nonlinear least squares method.
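The separable structure exploited throughout these chapters can be sketched with variable projection: for fixed decay rates the amplitudes are an inner linear least squares solve, so the outer optimizer sees only the nonlinear variables. A minimal sketch assuming SciPy; the decay rates and the two-exponential test signal below are invented for illustration, not BrainWeb data:

```python
import numpy as np
from scipy.optimize import least_squares

def varpro_residual(alpha, t, y):
    """For fixed decay rates alpha, eliminate the linear amplitudes by an
    inner linear least squares solve and return the projected residual;
    the outer optimizer then works on alpha alone."""
    Phi = np.exp(-np.outer(t, alpha))            # Phi[i, j] = exp(-alpha_j * t_i)
    c = np.linalg.lstsq(Phi, y, rcond=None)[0]   # optimal amplitudes for this alpha
    return y - Phi @ c

t = np.linspace(0.0, 5.0, 100)
y = 3.0 * np.exp(-1.2 * t) + 1.0 * np.exp(-0.3 * t)   # noise-free test decay
fit = least_squares(varpro_residual, x0=[1.0, 0.5], args=(t, y))
# fit.x holds the recovered decay rates (near 1.2 and 0.3 here)
```

The design point is that the search space has only the nonlinear dimensions, which is why variable projection tends to converge faster and from cruder starting guesses than a joint fit of rates and amplitudes.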
Page: 71-93 (23)
Author: Saul D. Cohen, George T. Fleming and Huey-Wen Lin
Exponential time series analysis has been an integral part of lattice quantum field theory calculations for three decades. Until recently, the level of sophistication has been relatively modest, since the number of computable time samples was limited by available computational resources to at most a few dozen, enabling the reliable estimation of only a few exponentials. Recent algorithmic advances, coupled with continued growth in high performance computing following Moore’s Law, have enabled calculations of exponential time series with a hundred or more time samples and generated new interest in finding reliable analysis methods for estimating many exponentials. We review the methods currently used to analyze lattice quantum field theory calculations.
Page: 94-109 (16)
Author: Linda Kaufman
In 1978 Golub and LeVeque considered an exponential fitting problem with multiple datasets where the nonlinear variables, e.g., the decay rates, had to hold for all the datasets simultaneously, but the linear variables, e.g., the pre-exponentials, could vary from one dataset to the next. They showed that with the variable projection technique, one could reduce the problem to only the nonlinear variables. Golub and LeVeque also showed that the main matrix of the algorithm was block diagonal with the same matrix down the diagonal. This allowed them to compute a solution while storing only the main matrix associated with a single dataset, so that the memory requirements of the problem are independent of the number of datasets. Since then, papers using this observation have appeared in the biophysics literature, in the systems identification literature, in the medical literature for studying diseases of the retina, in the spectroscopy literature, and in the numerical analysis literature for determining the knots in a two-dimensional spline problem. In 2007 the TIMP package was created in the statistical language R by Mullen and van Stokkum to handle spectroscopy problems which might have as many as 1000 datasets. The TIMP package, which handles several models, uses finite differences to approximate derivatives. In this chapter we show that by using a tensor product of orthogonal matrices, the number of rows of the Jacobian for the multiple dataset problem can be significantly reduced.
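The Golub–LeVeque observation, a block-diagonal model matrix with the same block repeated down the diagonal, means one factorization of the single-dataset matrix serves every dataset. A minimal sketch of that reuse (the time grid, decay rates, and per-dataset amplitudes below are invented for illustration):

```python
import numpy as np

t = np.linspace(0.0, 5.0, 30)
rates = np.array([1.0, 0.25])               # nonlinear variables, shared by all datasets
Phi = np.exp(-np.outer(t, rates))           # single-dataset model matrix
Q, R = np.linalg.qr(Phi)                    # factor the shared block once ...

amps_true = [np.array([0.5, 1.5]),          # linear variables, one pair per dataset
             np.array([1.0, 1.0]),
             np.array([1.5, 0.5])]
datasets = [Phi @ a for a in amps_true]     # synthetic noise-free observations

# ... and reuse the factorization for every dataset; storage and
# factorization cost stay independent of the number of datasets.
amps_est = [np.linalg.solve(R, Q.T @ y) for y in datasets]
```

With thousands of datasets, as in TIMP-scale spectroscopy problems, this reuse is the difference between one small QR and thousands of them.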
Page: 110-127 (18)
Author: Katharine M. Mullen and Ivo H. M. van Stokkum
A model consisting of a sum of exponential functions is very useful for the description of time-resolved spectroscopy data. Each exponential term in the sum can often be associated with a given state of the physical system underlying the measurement, such as a protein excited by laser light. Then the exponential decay rate associated with each state describes the time profile of the contribution of the state to the observed data. The linear coefficients of the sum represent the relative amplitudes of the contributions of each state. When time-resolved spectroscopy data represent more than one wavelength, a linear coefficient is associated with each exponential decay term at each wavelength. In this chapter sum-of-exponentials models for time-resolved spectroscopy applications are reviewed. The parameter estimation problem of fitting the decay rates and linear coefficients of the sum under the least squares criterion is also reviewed, with attention to implementation of algorithms for model fitting. Case studies in fitting models to picosecond time-scale spectroscopic data illustrate the reviewed topics.
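With the decay rates fixed, the wavelength-dependent amplitudes described above reduce to a single linear least squares solve with one right-hand side per wavelength. A minimal sketch; the rates, time points, and amplitude matrix are invented for illustration:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 50)            # time points
rates = np.array([0.8, 0.2])              # decay rates, taken as known here
Phi = np.exp(-np.outer(t, rates))         # (n_times, n_states) time profiles

# One column of amplitudes per wavelength: C_true[j, w] is the
# contribution of state j at wavelength w.
C_true = np.array([[1.0, 0.5, 0.2],
                   [0.3, 0.7, 0.9]])
Y = Phi @ C_true                          # noise-free data matrix (times x wavelengths)

# All wavelengths at once: a multi-right-hand-side least squares solve.
C_est = np.linalg.lstsq(Phi, Y, rcond=None)[0]
```

In a full fit the rates themselves are unknown, and this amplitude solve becomes the inner step of a variable projection iteration.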
Page: 128-144 (17)
Author: Per Christian Hansen, Hans Bruun Nielsen, Christina Ankjærgaard and Mayank Jain
Optically Stimulated Luminescence (OSL) from quartz is used, e.g., for geological and archeological dating, and involves the measurement of light from a sample, followed by fitting a sum of exponentials to these data. We consider two different forms of exponential models for this purpose – one without weighting and another with a certain weighting of the data. In this work we compare the two models with regard to their ability to estimate the correct model parameters, including the unknown number of exponential components.
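A weighted exponential fit of the kind compared in this chapter can be reduced to an ordinary least squares problem by row scaling. A minimal sketch, with counting-statistics weights assumed purely for illustration (the chapter’s actual weighting scheme may differ), and with the decay rates held fixed so only the linear step is shown:

```python
import numpy as np

t = np.linspace(0.0, 20.0, 40)
rates = np.array([0.5, 0.05])             # fast and slow OSL-like components (invented)
Phi = np.exp(-np.outer(t, rates))
c_true = np.array([100.0, 20.0])
y = Phi @ c_true                          # noise-free decay curve

# Poisson-like counting noise has sigma_i ~ sqrt(y_i); dividing each row
# of the design matrix and data by sigma_i turns the weighted problem
# into an ordinary least squares solve.
sigma = np.sqrt(np.maximum(y, 1.0))
W = 1.0 / sigma
c_w = np.linalg.lstsq(Phi * W[:, None], y * W, rcond=None)[0]
```

On noise-free data the weighted and unweighted solutions coincide; on real counts the weighting changes which components survive model selection, which is the comparison the chapter carries out.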
Page: 145-164 (20)
Author: Bert W. Rust, Dianne P. O’Leary and Katharine M. Mullen
Type Ia supernova light curves are characterized by a rapid rise from zero luminosity to a peak value, followed by a slower quasi-exponential decline. The rise and peak last for a few days, while the decline persists for many months. It is widely believed that the decline is powered by the radioactive decay chain 56Ni → 56Co → 56Fe, but the rates of decline in luminosity do not exactly match the decay rates of Ni and Co. In 1976, Rust, Leventhal, and McCall presented evidence that the declining part of the light curve is well modelled by a linear combination of two exponentials whose decay rates were proportional to, but not exactly equal to, the decay rates for Ni and Co. The proposed reason for the lack of agreement between the rates was that the radioactive decays take place in the interior of a white dwarf star, at densities much higher than any encountered in a terrestrial environment, and that these higher densities accelerate the two decays by the same factor. This paper revisits this model, demonstrating that a variant of it provides excellent fits to observed luminosity data from six supernovae.
Accurate calculations of the high-frequency impedance matrix for VLSI interconnects and inductors above a multi-layer substrate
Page: 165-192 (28)
Author: Navin Srivastava, Roberto Suaya and Victor Pereyra
Impedance characterization of interconnects and intentional inductors in integrated circuits, over the broad frequency domain extending from near DC to hundreds of GHz, is full of unachieved goals. Existing computational methods are near the end of their usefulness, since accurate characterization of the impedance matrix Z(ω) at high frequencies with existing methods can only be applied to small structures. We present a computationally inexpensive approach that extends accurate characterization to problem sizes between one and two orders of magnitude larger, opening the door to the validation of high-frequency wireless circuits by real-time simulation, rather than the less desirable alternative of validation by manufacturing and testing. The starting point of our approach is an integral representation of the Green’s function for the magnetic vector potential in classical electromagnetic theory. The intermediate computation involves a least squares fit to reflection coefficients in terms of linear combinations of complex exponentials, so as to render the coordinate-space representation of the Green’s function integrable. The end result is an analytical description of derivative quantities, including the matrix elements of the serial impedance matrix of the interconnect configuration, for all frequencies of interest. We study the problem in two and three dimensions. Among the alternative least squares fits, we found that those utilizing VARPRO in the complex domain – an extension of VARPRO created specifically to attack this problem – give the best results. The levels of accuracy (errors less than 3%) and efficiency (better than an order of magnitude lower computational cost than existing methods) have a major impact on nano-electronic circuit design.
Page: 193-195 (3)
Author: V. Pereyra and G. Scherer
Real and complex exponential data fitting is an important activity in many different areas of science and engineering, ranging from Nuclear Magnetic Resonance Spectroscopy and Lattice Quantum Chromodynamics to Electrical and Chemical Engineering, Vision and Robotics. The most commonly used norm in approximation by linear combinations of exponentials is the l2 norm (sum of squares of residuals), in which case one obtains a separable nonlinear least squares problem. A number of different methods have been proposed through the years to solve these types of problems, and new applications appear daily. Guidance is provided on the care that must be taken when applying standard or simplified methods to this frequently ill-conditioned problem. The described methods, which have been quite successful, take into account the separability between the linear and nonlinear parameters. The availability of good, publicly available software, which has been very beneficial in many different fields, is also discussed. This Ebook covers the main solution methods (Variable Projections, Modified Prony) and also emphasizes applications to different fields. It is essential reading for researchers and students in this field.