QUANTUM & AI -- NEXT GEN OF TECHNOLOGY

Integrating quantum information science, physics and engineering, and computer science, I am interested in research that provides novel, high-impact solutions for a broad research community in artificial intelligence, quantum computation, quantum communication and networking, and state-of-the-art quantum technologies. ⚛


Updates:

Recent arXiv

---------- x ----------

See you there! 👋

---------- x ----------

Past updates ⌛

Featured Projects*

...

Tulane is part of Navy/Army-funded research on improving communication, April 2021

Machine learning shows potential to enhance quantum information transfer, March 2021 


AI enables efficiencies in quantum information processing, July 2020


Tulane Scientists partner with U.S. Army on machine learning study, July 2020


Collaborators:

Abstracts:

Demonstration of machine-learning-enhanced Bayesian quantum state estimation

Lohani, S., Lukens, J., Davis, A., Khannejad, A., Regmi, S., Jones, D., Glasser, R., Searles, T. A., & Kirby, B.

New Journal of Physics (IOP Science)

We experimentally realize an approach for defining custom prior distributions that are automatically tuned using ML for use with Bayesian quantum state estimation methods. Previously, researchers have looked to Bayesian quantum state tomography due to its unique advantages like natural uncertainty quantification, the return of reliable estimates under any measurement condition, and minimal mean-squared error. However, practical challenges related to long computation times and conceptual issues concerning how to incorporate prior knowledge most suitably can overshadow these benefits. Using both simulated and experimental measurement results, we demonstrate that ML-defined prior distributions reduce net convergence times and provide a natural way to incorporate both implicit and explicit information directly into the prior distribution. These results constitute a promising path toward practical implementations of Bayesian quantum state tomography.
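
For a concrete picture of the Bayesian estimation step, the NumPy sketch below weights prior samples of single-qubit density matrices by the likelihood of Pauli-measurement counts and averages them into a Bayesian mean estimate. The `sample_prior` stub and the count values are placeholders for illustration only; the ML-tuned prior and experimental data described above are not reproduced here.

```python
import numpy as np

# Single-qubit Pauli operators used to define the measurements
PAULIS = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def projectors(basis):
    """Eigenprojectors of a Pauli operator, ordered (+1 outcome, -1 outcome)."""
    vals, vecs = np.linalg.eigh(PAULIS[basis])
    order = np.argsort(vals)[::-1]  # +1 eigenvector first
    return [np.outer(vecs[:, i], vecs[:, i].conj()) for i in order]

def sample_prior(n_samples, rng):
    """Stand-in prior: Hilbert-Schmidt-like random density matrices.
    In the paper this distribution would instead be tuned by machine learning."""
    states = []
    for _ in range(n_samples):
        G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        rho = G @ G.conj().T
        states.append(rho / np.trace(rho))
    return states

def log_likelihood(rho, counts):
    """Binomial log-likelihood of observed counts {basis: (n_plus, n_minus)}."""
    ll = 0.0
    for basis, (n_plus, n_minus) in counts.items():
        P_plus, _ = projectors(basis)
        p = np.real(np.trace(rho @ P_plus)).clip(1e-12, 1 - 1e-12)
        ll += n_plus * np.log(p) + n_minus * np.log(1 - p)
    return ll

def bayesian_mean_estimate(counts, n_samples=5000, seed=0):
    """Monte Carlo Bayesian mean: likelihood-weighted average over prior samples."""
    rng = np.random.default_rng(seed)
    samples = sample_prior(n_samples, rng)
    log_w = np.array([log_likelihood(rho, counts) for rho in samples])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return sum(wi * rho for wi, rho in zip(w, samples))

# Hypothetical counts from measuring X, Y, and Z on many copies of an unknown qubit
counts = {"X": (480, 520), "Y": (300, 700), "Z": (900, 100)}
print(np.round(bayesian_mean_estimate(counts), 3))
```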

Dimension-adaptive machine learning-based quantum state reconstruction

Lohani, S., Lukens, J., Jones, D., Searles, T. A., Glasser, R. T., & Kirby, B.

Quantum Machine Intelligence; arXiv

We introduce an approach that removes the need to exactly match the dimensionality of the system under consideration to the dimension of the model used for training. We demonstrate the technique by performing quantum state reconstruction on randomly sampled systems of one, two, and three qubits using machine-learning-based methods trained exclusively on systems containing at least one additional qubit.
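
The exact dimension-matching procedure from the paper is not reproduced here; the NumPy sketch below only illustrates the underlying bookkeeping under a simplifying assumption: a small state is padded with ancilla qubits in |0> so that it matches a larger model dimension, and is recovered afterwards by a partial trace.

```python
import numpy as np

def embed_with_ancillas(rho, n_extra):
    """Pad a density matrix with ancilla qubits in |0> so that a small system
    matches the Hilbert-space dimension of a larger model (hypothetical
    illustration of the dimension-matching problem, not the paper's method)."""
    anc = np.zeros((2 ** n_extra, 2 ** n_extra), dtype=complex)
    anc[0, 0] = 1.0
    return np.kron(rho, anc)

def partial_trace_last(rho, n_keep, n_drop):
    """Trace out the last n_drop qubits of an (n_keep + n_drop)-qubit state."""
    dk, dd = 2 ** n_keep, 2 ** n_drop
    return np.einsum("ijkj->ik", rho.reshape(dk, dd, dk, dd))

# Round trip: a random one-qubit state passed through a two-qubit model space
rng = np.random.default_rng(1)
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho1 = G @ G.conj().T
rho1 /= np.trace(rho1)

rho_embedded = embed_with_ancillas(rho1, n_extra=1)     # 4x4, what the larger model "sees"
rho_recovered = partial_trace_last(rho_embedded, 1, 1)  # trace the ancilla back out
print(np.allclose(rho1, rho_recovered))                 # True
```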

Data-centric machine learning in quantum information science

Lohani, S., Lukens, J., Jones, D., Searles, T. A., Glasser, R. T., & Kirby, B.

Machine Learning: Science and Technology (IOP Science)

We propose a series of data-centric heuristics for improving the performance of machine learning systems when applied to problems in quantum information science. In particular, we consider how systematic engineering of training sets can significantly enhance the accuracy of pre-trained neural networks used for quantum state reconstruction without altering the underlying architecture. We find that it is not always optimal to engineer training sets to exactly match the expected distribution of a target scenario; instead, performance can be further improved by biasing the training set to be slightly more mixed than the target.
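
As a toy illustration of biasing a training set toward more mixed states, the sketch below blends Haar-random pure states with the maximally mixed state. The `mix_bias` knob and the specific blend are hypothetical choices for illustration, not the paper's training-set engineering procedure.

```python
import numpy as np

def haar_random_pure_state(dim, rng):
    """Haar-random pure state, returned as a density matrix."""
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def biased_training_state(dim, mix_bias, rng):
    """Blend a random pure state with the maximally mixed state.
    `mix_bias` is a hypothetical knob: 0 reproduces a pure-state target
    distribution, while small positive values make the training set slightly
    more mixed than the target scenario."""
    rho = haar_random_pure_state(dim, rng)
    return (1 - mix_bias) * rho + mix_bias * np.eye(dim) / dim

def purity(rho):
    return np.real(np.trace(rho @ rho))

rng = np.random.default_rng(0)
for bias in (0.0, 0.1, 0.3):
    avg = np.mean([purity(biased_training_state(4, bias, rng)) for _ in range(2000)])
    print(f"mix_bias={bias:.1f}  average purity={avg:.3f}")
```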

Improving application performance with biased distributions of quantum states

Lohani, S., Lukens, J., Jones, D., Searles, T. A., Glasser, R. T., & Kirby, B.

Physical Review Research

We demonstrate that substituting Dirichlet-weighted Haar mixtures for the Bures and Hilbert-Schmidt distributions results in measurable performance advantages in machine-learning-based quantum state tomography systems and Bayesian quantum state reconstruction.
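
The NumPy sketch below contrasts Hilbert-Schmidt sampling with a Dirichlet-weighted mixture of Haar-random pure states; the number of mixture terms and the concentration values are illustrative assumptions, chosen only to show how the Dirichlet parameter tunes the average purity of the ensemble.

```python
import numpy as np

def hilbert_schmidt_state(dim, rng):
    """Hilbert-Schmidt-distributed random density matrix (Ginibre construction)."""
    G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

def dirichlet_haar_mixture(dim, n_terms, alpha, rng):
    """Convex mixture of Haar-random pure states with Dirichlet(alpha) weights.
    The concentration alpha acts as a tunable knob on the average mixedness."""
    weights = rng.dirichlet([alpha] * n_terms)
    rho = np.zeros((dim, dim), dtype=complex)
    for w in weights:
        psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        psi /= np.linalg.norm(psi)
        rho += w * np.outer(psi, psi.conj())
    return rho

purity = lambda rho: np.real(np.trace(rho @ rho))
rng = np.random.default_rng(0)
dim = 4  # two qubits

hs = np.mean([purity(hilbert_schmidt_state(dim, rng)) for _ in range(2000)])
print(f"Hilbert-Schmidt          average purity: {hs:.3f}")
for alpha in (0.1, 1.0, 10.0):
    dh = np.mean([purity(dirichlet_haar_mixture(dim, dim, alpha, rng)) for _ in range(2000)])
    print(f"Dirichlet-Haar (a={alpha:<4}) average purity: {dh:.3f}")
```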

On the experimental feasibility of quantum state reconstruction via machine learning

Lohani, S., Searles, T. A., Kirby, B. T., & Glasser, R. T.

IEEE Transactions on Quantum Engineering

We determine the resource scaling of machine-learning-based quantum state reconstruction methods, in terms of both inference and training, for systems of up to four qubits. Further, we examine system performance in the low-count regime, likely to be encountered in the tomography of high-dimensional systems. Finally, we implement our quantum state reconstruction method on an IBM Q quantum computer and confirm our results.
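
To make the low-count regime concrete, the NumPy sketch below simulates finite-shot Pauli measurement frequencies for a single qubit. The example state and shot counts are arbitrary, and neither the neural networks nor the IBM Q implementation from the paper is reproduced here.

```python
import numpy as np

PAULIS = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def measurement_frequencies(rho, shots, rng):
    """Finite-shot Pauli measurement frequencies for one qubit.
    Small values of `shots` mimic the low-count regime discussed above."""
    freqs = {}
    for name, P in PAULIS.items():
        p_plus = 0.5 * (1 + np.real(np.trace(rho @ P)))  # Prob(+1 outcome)
        freqs[name] = rng.binomial(shots, p_plus) / shots
    return freqs

rng = np.random.default_rng(2)
rho = np.array([[0.9, 0.3], [0.3, 0.1]], dtype=complex)  # an example qubit state
for shots in (10, 100, 10_000):
    print(shots, measurement_frequencies(rho, shots, rng))
```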

Machine Learning assisted quantum state estimation

Lohani, S., Kirby, B. T., Brodsky, M., Danaci, O., & Glasser, R. T.

Machine Learning: Science and Technology (IOP Science)

We build a general quantum state tomography framework that makes use of machine learning techniques to reconstruct quantum states from a given set of coincidence measurements. For a wide range of pure and mixed input states we demonstrate via simulations that our method produces reconstructed states functionally equivalent to those of traditional methods, with the added benefit that expensive computations are front-loaded in our system. Further, by training our system with measurement results that include simulated noise sources, we are able to demonstrate a significantly enhanced average fidelity when compared to typical reconstruction methods. These enhancements in average fidelity are also shown to persist when we consider state reconstruction from partial tomography data where several measurements are missing.
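
A common way in this line of work to force a network's output to be a physical state is a Cholesky-style parameterization. The NumPy sketch below shows that construction with a random vector standing in for the trained network's output layer; the network itself, and the paper's specific architecture, are omitted.

```python
import numpy as np

def params_to_density_matrix(t, dim):
    """Map a real vector of length dim**2 to a physical density matrix via a
    Cholesky-style construction: rho = T^dag T / Tr(T^dag T). Hermiticity,
    positivity, and unit trace are guaranteed, so any real-valued network
    output corresponds to a valid quantum state."""
    T = np.zeros((dim, dim), dtype=complex)
    diag, off = t[:dim], t[dim:]
    T[np.diag_indices(dim)] = diag
    rows, cols = np.tril_indices(dim, k=-1)
    T[rows, cols] = off[: len(rows)] + 1j * off[len(rows):]
    rho = T.conj().T @ T
    return rho / np.trace(rho)

# A random vector standing in for a trained network's output layer
dim = 4  # two qubits
rng = np.random.default_rng(0)
rho = params_to_density_matrix(rng.normal(size=dim ** 2), dim)
print(np.allclose(rho, rho.conj().T))           # Hermitian
print(np.isclose(np.trace(rho).real, 1.0))      # unit trace
print(np.linalg.eigvalsh(rho).min() >= -1e-12)  # positive semidefinite
```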

Generative Machine Learning for Robust Free-Space Communication

Lohani, S., Knutson, E. M., & Glasser, R. T.

Communications Physics (Nature)

We develop a system combining a state-of-the-art generative neural network (GNN) and a convolutional neural network (CNN), and demonstrate its efficacy in simulated and experimental communications settings. Experimentally, the GNN system corrects for distortion and reduces detector noise, resulting in mode profiles at the receiver that are nearly identical to the desired ones, with no feedback or adaptive optics required. Classification accuracy is significantly improved when these generated modes are demodulated using a CNN pre-trained with undistorted modes. Using the GNN and CNN system pre-trained exclusively on simulated optical profiles, we show a reduction in cross-talk between experimentally detected noisy and distorted modes at the receiver. This scalable scheme may provide a concrete and effective demodulation technique for establishing long-range classical and quantum communication links.
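
As a rough structural sketch of the two-stage idea (a generative denoiser followed by a classifier pre-trained on clean modes), here is a minimal PyTorch example. The layer choices, image size, and mode count are placeholders and do not reflect the paper's architecture.

```python
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Stand-in for the generative stage: maps a noisy or distorted mode image
    to a cleaned-up one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class ModeClassifier(nn.Module):
    """Stand-in for the CNN demodulator pre-trained on undistorted modes."""
    def __init__(self, n_modes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, n_modes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Two-stage pipeline on a batch of 64x64 intensity images
denoiser, classifier = Denoiser(), ModeClassifier(n_modes=4)
noisy_modes = torch.rand(8, 1, 64, 64)   # placeholder for detected mode images
logits = classifier(denoiser(noisy_modes))
print(logits.shape)                      # torch.Size([8, 4])
```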

Coherent Optical Communications Enhanced by Machine Intelligence

Lohani, S., & Glasser, R. T. 

Machine Learning: Science and Technology (IOP Science)

Accuracy in discriminating between different received coherent signals is integral to the operation of many free-space communications protocols, and is often difficult to achieve when the receiver measures a weak signal. Here we design an optical communication scheme that uses balanced homodyne detection in combination with an unsupervised generative machine learning and convolutional neural network (CNN) system, and demonstrate its efficacy in a realistic simulated coherent quadrature phase shift keyed (QPSK) communications system. Additionally, we design the neural network system such that it autonomously learns to correct for the noise associated with a weak QPSK signal, which is distributed to the receiver prior to the implementation of the communications protocol. We find that the scheme significantly reduces the overall error probability of the communications system, achieving the classical optimal limit.
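
For intuition about the quantities involved, the toy NumPy simulation below generates noisy QPSK quadrature measurements and decodes them with a simple nearest-symbol rule. The noise model and units are simplified assumptions, and the neural-network receiver described above is not implemented here.

```python
import numpy as np

def simulate_qpsk_homodyne(n_symbols, mean_photons, rng):
    """Toy QPSK link: transmit one of four coherent-state phases, measure both
    quadratures with Gaussian (vacuum-level, in these units) noise, and decide
    by nearest symbol. A baseline decision rule, not a neural-network receiver."""
    phases = np.array([np.pi / 4, 3 * np.pi / 4, 5 * np.pi / 4, 7 * np.pi / 4])
    tx = rng.integers(0, 4, size=n_symbols)
    amp = np.sqrt(mean_photons)
    x = amp * np.cos(phases[tx]) + rng.normal(scale=0.5, size=n_symbols)
    p = amp * np.sin(phases[tx]) + rng.normal(scale=0.5, size=n_symbols)
    rx = np.argmin(
        (x[:, None] - amp * np.cos(phases)) ** 2
        + (p[:, None] - amp * np.sin(phases)) ** 2,
        axis=1,
    )
    return np.mean(rx != tx)  # symbol error rate

rng = np.random.default_rng(0)
for n_bar in (0.5, 1.0, 2.0, 5.0):
    ser = simulate_qpsk_homodyne(200_000, n_bar, rng)
    print(f"mean photon number {n_bar}: symbol error rate {ser:.4f}")
```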

Turbulence correction with artificial neural networks

Lohani, S., & Glasser, R. T. (2018)

Optics Letters; arXiv

We design an optical feedback network making use of machine learning techniques and demonstrate via simulations its ability to correct for the effects of turbulent propagation on optical modes. This artificial neural network scheme only relies on measuring the intensity profile of the distorted modes, making the approach simple and robust. The network results in the generation of various mode profiles at the transmitter that, after propagation through turbulence, closely resemble the desired target mode. The corrected optical mode profiles at the receiver are found to be nearly identical to the desired profiles, with near-zero mean square error indices. 

Spatial Mode Correction of Single Photons Using Machine Learning

Bhusal, N., Lohani, S., You, C., Fabre, J., Zhao, P., Knutson, E. M., ... & Magana-Loaiza, O. S. (2021)

Advanced Quantum Technologies

In this article, the self-learning and self-evolving features of artificial neural networks are exploited to correct the complex spatial profile of distorted Laguerre–Gaussian modes at the single-photon level. Furthermore, the potential of this technique is used to improve the channel capacity of an optical communication protocol that relies on structured single photons. The results have important implications for real-time turbulence correction of structured photons and single-photon images.

Dispersion characterization and pulse prediction with machine learning

Lohani, S., Knutson, E. M., Zhang, W., & Glasser, R. T. (2019)

OSA Continuum

In this work we demonstrate the efficacy of neural networks in the characterization of dispersive media. We also develop a neural network to make predictions for input probe pulses which propagate through a nonlinear dispersive medium, which may be applied to predicting optimal pulse shapes for a desired output. The setup requires only a single pulse for the probe, providing considerable simplification of the current method of dispersion characterization that requires frequency scanning across the entirety of the gain and absorption features. We show that the trained networks are able to predict pulse profiles as well as dispersive features that are nearly identical to their experimental counterparts. 
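
As a minimal forward model of the physics being learned, the sketch below propagates a Gaussian probe pulse through pure group-velocity dispersion by applying a quadratic spectral phase in the Fourier domain. The parameter values are arbitrary, and the gain, absorption, and nonlinearity of the real medium discussed above are ignored.

```python
import numpy as np

def propagate_through_dispersion(pulse_t, dt, beta2_L):
    """Apply a quadratic spectral phase exp(-i * beta2_L * omega**2 / 2) to a
    time-domain pulse: the textbook linear model of group-velocity dispersion."""
    omega = 2 * np.pi * np.fft.fftfreq(len(pulse_t), d=dt)
    return np.fft.ifft(np.fft.fft(pulse_t) * np.exp(-0.5j * beta2_L * omega ** 2))

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile."""
    above = x[y >= y.max() / 2]
    return above[-1] - above[0]

# A Gaussian probe pulse broadened by dispersion (arbitrary units throughout)
t = np.linspace(-50, 50, 2048)
dt = t[1] - t[0]
pulse_in = np.exp(-t ** 2 / (2 * 4.0 ** 2))
pulse_out = propagate_through_dispersion(pulse_in, dt, beta2_L=40.0)
print(f"input FWHM  {fwhm(t, np.abs(pulse_in)):.2f}")
print(f"output FWHM {fwhm(t, np.abs(pulse_out)):.2f}")
```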

On the use of deep neural networks in optical communications

Lohani, S., Knutson, E. M., O’Donnell, M., Huver, S. D., & Glasser, R. T. (2018)

Applied Optics; arXiv

We demonstrate the ability of deep neural networks to classify numerically generated, noisy Laguerre–Gauss modes of up to 100 quanta of orbital angular momentum with near-unity fidelity. The scheme relies only on the intensity profile of the detected modes, allowing for considerable simplification of the current measurement schemes required to sort states containing increasing degrees of orbital angular momentum. We also present results that show the strength of deep neural networks in the classification of experimental superpositions of Laguerre–Gauss modes when the networks are trained solely using simulated images.
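
The NumPy sketch below generates the kind of simulated training images such a classifier could use: noisy intensity profiles of Laguerre-Gauss modes whose orbital-angular-momentum order sets the ring radius. The grid size, beam waist, noise level, and mode orders are illustrative choices, not the parameters used in the paper.

```python
import numpy as np

def lg_intensity(l, grid_size=64, waist=0.25, noise=0.05, rng=None):
    """Noisy intensity image of a Laguerre-Gauss LG_{p=0}^{l} mode on a square
    grid. For p = 0 the intensity is proportional to (2 r^2 / w^2)^|l| *
    exp(-2 r^2 / w^2), a ring whose radius encodes the OAM order l. Additive
    Gaussian noise stands in for detector noise in the training images."""
    rng = rng or np.random.default_rng()
    x = np.linspace(-1, 1, grid_size)
    xx, yy = np.meshgrid(x, x)
    r2 = (xx ** 2 + yy ** 2) / waist ** 2
    intensity = (2 * r2) ** abs(l) * np.exp(-2 * r2)
    intensity /= intensity.max()
    return np.clip(intensity + noise * rng.normal(size=intensity.shape), 0, 1)

# A small labeled set of simulated images, as a classifier's training data might look
rng = np.random.default_rng(0)
oam_orders = [1, 5, 10, 20]
images = np.stack([lg_intensity(l, rng=rng) for l in oam_orders])
print(images.shape)  # (4, 64, 64)
```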