I gave this talk as part of the degree requirements for my Ph.D. Presented at the Center for Quantum Information and Control, UNM.
Quantum state tomography seeks a “best” estimate of the quantum state, meaning one as close as possible to the true state (as measured, e.g., by fidelity). To achieve this, we seek a state that best fits the data. However, fitting incorporates noise (e.g., finite-sample fluctuations) into the estimate, and the total error incurred this way grows with each parameter that we vary. For tomography of continuous-variable systems, the number of parameters to be fit is infinite, necessitating some kind of regularization or statistical model selection to minimize the total error. Model selection criteria can advise us when a parameter (or a set of them) should not be fit to the data. Many such techniques rely on the log-likelihood ratio statistic (LLRS), which quantifies the relative plausibility of distinct models.
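As an illustrative sketch (not from the talk itself), here is the canonical classical behavior the LLRS is supposed to exhibit. For a binomial experiment, we compare a null model (p fixed at 1/2) against a one-parameter model (p free); when the null is true, the Wilks theorem predicts the LLRS is asymptotically χ² with 1 degree of freedom, so its average is ≈ 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1000, 5000
llrs = np.empty(trials)
for t in range(trials):
    k = rng.binomial(n, 0.5)   # data generated under the null model, p = 1/2

    def ll(p):
        # binomial log-likelihood (combinatorial prefactor cancels in the ratio)
        return k * np.log(p) + (n - k) * np.log(1 - p)

    p_hat = k / n              # MLE under the full one-parameter model
    # LLRS = 2 * (max log-likelihood of full model - log-likelihood of null)
    llrs[t] = 2 * (ll(p_hat) - ll(0.5))

# Wilks theorem: LLRS ~ chi^2 with 1 d.o.f., so the mean should be close to 1
print(np.mean(llrs))
```

The point of the talk is that this comfortable picture can fail in tomography, where the models are constrained.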
My research investigates whether the LLRS in quantum state tomography behaves in the way predicted by canonical results like the Wilks theorem. I will demonstrate, using Monte Carlo simulations of state tomography, that the average value of the LLRS can (and often does) disagree with the predictions of the Wilks theorem. Therefore, LLRS-based techniques like hypothesis testing and Akaike’s AIC should not be used without modification in tomography. This behavior appears to stem from boundaries (in the parameter space and between models). I will propose an improved “Wilks theorem” that takes boundaries into account and predicts the LLRS much more reliably than the original. However, even this improved model is imperfect, suggesting that reliable tomographic model selection may demand alternatives to LLRS-based techniques.
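The boundary effect can be previewed in a toy classical model (my illustration, not the tomographic simulations from the talk). Estimate a Gaussian mean under the constraint μ ≥ 0 when the truth sits exactly on the boundary, μ = 0 — loosely analogous to a density-matrix eigenvalue pinned at zero. Half the time the unconstrained fit would go negative and gets clipped to the boundary, so the mean LLRS comes out near 1/2 rather than the Wilks prediction of 1:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, sigma = 100, 20000, 1.0
llrs = np.empty(trials)
for t in range(trials):
    x = rng.normal(0.0, sigma, n)    # truth sits ON the boundary, mu = 0
    mu_hat = max(x.mean(), 0.0)      # MLE under the constraint mu >= 0
    # For Gaussian data, LLRS = 2*(ll(mu_hat) - ll(0)) = n * mu_hat^2 / sigma^2
    llrs[t] = n * mu_hat**2 / sigma**2

# Mean is ~0.5, not the Wilks prediction of 1: the LLRS follows the
# boundary mixture (1/2) chi^2_0 + (1/2) chi^2_1, not chi^2_1
print(np.mean(llrs))
```

This is the kind of discrepancy the improved, boundary-aware prediction is designed to capture.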