Quantum machine learning and variational quantum algorithms were once red-hot topics, but the barren plateau phenomenon has dampened the initial excitement. In a barren plateau, the loss landscape of a quantum learning architecture concentrates exponentially around its mean value as the system size grows, and this effect has attracted more and more attention. Because escaping such a landscape requires exponential training resources, variational quantum algorithms do not scale in these settings.
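To make the concentration effect concrete, here is a minimal NumPy sketch (our illustration, not code from the paper): Haar-random states stand in for the output of a deep, unstructured ansatz, and the variance of a simple loss, the expectation of Z on the first qubit, shrinks roughly as 1/2^n with the qubit count n.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_random_state(n_qubits):
    """Sample a Haar-random pure state by normalizing a complex Gaussian vector."""
    dim = 2 ** n_qubits
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

def z0_expectation(psi, n_qubits):
    """<psi| Z_0 |psi>: +1 weight on the top half of the basis, -1 on the bottom."""
    half = 2 ** (n_qubits - 1)
    probs = np.abs(psi) ** 2
    return probs[:half].sum() - probs[half:].sum()

# A deep, unstructured ansatz explores states almost uniformly (Haar-like),
# so the loss <Z_0> concentrates on its mean 0 with variance ~ 1 / 2^n.
for n in range(2, 11, 2):
    samples = [z0_expectation(haar_random_state(n), n) for _ in range(2000)]
    print(f"{n} qubits: loss variance = {np.var(samples):.2e}  (1/2^n = {2.0**-n:.2e})")
```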
Consequently, there has been considerable interest in training approaches and architectures that provably avoid barren plateaus. However, each of these approaches exploits some underlying structure of the problem.
It turns out that loss landscapes which provably lack barren plateaus can be simulated by a classical algorithm in polynomial time. Such a simulation needs neither a parameterized quantum circuit nor a hybrid quantum-classical optimization loop running on a quantum device, although a quantum computer may still be required for an initial data-collection phase. One reading of these arguments is that they dequantize the information-processing capabilities of variational quantum circuits operating in barren-plateau-free landscapes.
A new analysis of popular strategies supports the premise that every known method for avoiding barren plateaus can be faithfully reproduced with classical techniques. The very absence of barren plateaus allowed the authors to identify the polynomially-sized subspaces that contain the relevant part of the computation. With this information, one can determine the set of expectation values that must be estimated (either classically or on a quantum device) to enable a classical simulation.
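In the spirit of these expectation-value-based simulations, below is a self-contained toy sketch (our illustration, with assumed generators, not code from the paper). We assume every gate is generated by the collective-spin operators Jx, Jy, Jz, whose Lie algebra is only three-dimensional; the loss can then be computed from just three initial expectation values, at a cost independent of the number of qubits.

```python
import numpy as np

# Toy Lie-algebraic classical simulation: if every gate is generated by the
# collective-spin operators Jx = sum_i X_i / 2 (and likewise Jy, Jz), the
# dynamical Lie algebra is 3-dimensional, so Heisenberg evolution of the
# observable Jz is a 3x3 rotation of its coefficient vector.

def adjoint_rotation(axis, theta):
    """3x3 adjoint action of exp(-i * theta * J_axis) on span{Jx, Jy, Jz}."""
    c, s = np.cos(theta), np.sin(theta)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def simulate_loss(circuit, initial_expectations):
    """Loss = <Jz> after the circuit, from only 3 initial expectation values."""
    coeffs = np.array([0.0, 0.0, 1.0])        # observable starts as Jz
    for axis, theta in reversed(circuit):     # Heisenberg picture: last gate first
        coeffs = adjoint_rotation(axis, theta).T @ coeffs
    return coeffs @ initial_expectations

n_qubits = 50                                 # cost does not grow with this
init = np.array([0.0, 0.0, n_qubits / 2])     # <Jx>, <Jy>, <Jz> in |0...0>
circuit = [("x", 0.3), ("y", 1.1), ("z", 0.7), ("x", -0.4)]
print("loss =", simulate_loss(circuit, init))
```

The three initial expectation values here are known analytically, but in general they are exactly the quantities that might still have to be estimated on a quantum computer, which connects back to the data-collection caveat above.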
This study was conducted by a group of researchers from Los Alamos National Laboratory, Quantum Science Center, California Institute of Technology, Chulalongkorn University, Vector Institute, University of Waterloo, Donostia International Physics Center, École Polytechnique Fédérale de Lausanne (EPFL), Universidad Nacional de La Plata, and University of Strathclyde.
Because the researchers' claims could be misunderstood, they clarify them in their paper as follows:
Their argument covers widely used models and methods whose loss function is defined as the expectation value of an observable measured on a state prepared by a parametrized quantum circuit, as well as more general variants that combine such measurements with classical post-processing. Many popular quantum architectures fall within this category, including several quantum machine learning models, the most common variational quantum algorithms, and families of quantum generative schemes. The category does not, however, exhaust all possible quantum learning mechanisms.
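For concreteness, a loss of this standard form can be written in a few lines; the single-qubit circuit and observable below are illustrative choices of ours, not the paper's.

```python
import numpy as np

# Variational loss: L(theta) = <0| U(theta)^dag  O  U(theta) |0>,
# here for one qubit with U(theta) = Rz(theta_2) Ry(theta_1) and observable O = Z.

Z = np.diag([1.0, -1.0])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def loss(theta):
    psi = rz(theta[1]) @ ry(theta[0]) @ np.array([1.0, 0.0])  # U(theta)|0>
    return np.real(np.conj(psi) @ Z @ psi)                    # expectation value

print(loss(np.array([0.0, 0.0])))    # |0> gives <Z> = +1
print(loss(np.array([np.pi, 0.0])))  # |1> gives <Z> = -1
```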
Even though it proved feasible for all of their case studies, the team has not shown that the components needed for simulation can always be identified reliably. As they note in their paper, there might in principle be models whose landscapes are free of barren plateaus but which they do not know how to simulate. This could happen when the small relevant subspace is otherwise unknown, or when the problem is highly structured yet the computation still lives in the full exponentially large space, as in sub-regions of a landscape reachable through smart initialization strategies.
With these caveats noted, the team presents new opportunities and potential avenues for further research based on their results. They highlight, in particular, the possibilities offered by warm starts. Even polynomial-time classical simulation can be too computationally expensive in practice, which could translate into polynomial advantages for running a variational quantum computing scheme on an actual quantum computer. Drawing on the structure of conventional fault-tolerant quantum algorithms, the researchers also suggest that more exotic, highly structured variational architectures with superpolynomial quantum advantages remain possible.
Check out the Paper. All credit for this research goes to the researchers of this project.