# QuICS Frontiers Workshop

## A Review

**Date:**

Last week I had the opportunity to attend the “Frontiers in Quantum Information and Computer Science” workshop hosted by the Joint Center for Quantum Information and Computer Science at the University of Maryland. With the workshop running from 0930 to 1700 each day, there was quite a bit of information to take in! (Thankfully, an hour-and-a-half lunch and two coffee breaks helped break things up.)

Some speakers whose talks I found particularly helpful included:

**Seth Lloyd** (MIT)

Seth talked about “Universal Deep Quantum Learning”, touching on how we might think about
machine learning (ML) in quantum information. Some research has already investigated how quantum computers
could do ML. Specific ML topics where this might be useful include data fitting,
support vector machines, principal component analysis, and, yes, quantum finance. (The latter uses path integrals
to carry out some calculations in financial engineering; it *does not* involve creating a “quantum stock market”!)

Seth discussed one paradigm for ML, the Boltzmann machine (BM), which might benefit from quantum information. In order to “train” a BM, and thereby actually do some interesting ML task, one has to adjust some parameters within the BM. It turns out one can use quantum computers to help efficiently tune those parameters to their optimal value.
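To make the training step concrete, here is a minimal classical sketch of one parameter update for a restricted Boltzmann machine using contrastive divergence. This is my own illustration of what “adjusting the parameters” means, not the quantum algorithm Seth described; the sampling step inside it is the part a quantum subroutine would aim to accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.1):
    """One contrastive-divergence (CD-1) update for an RBM.

    The weights W are the parameters being "trained"; the update
    nudges them toward making the data v0 more probable."""
    # Sample hidden units given the visible data
    h0 = (sigmoid(v0 @ W) > rng.random(W.shape[1])).astype(float)
    # Reconstruct visible units from the hidden sample
    v1 = (sigmoid(h0 @ W.T) > rng.random(W.shape[0])).astype(float)
    # Hidden activations for the reconstruction
    h1 = sigmoid(v1 @ W)
    # Gradient estimate: data correlations minus model correlations
    return W + lr * (np.outer(v0, h0) - np.outer(v1, h1))

W = rng.normal(scale=0.1, size=(6, 3))   # 6 visible, 3 hidden units
v = np.array([1., 0., 1., 1., 0., 0.])   # one (made-up) training example
W = cd1_step(v, W)
```

The expensive part, classically, is drawing good samples from the model distribution; that is where the hoped-for quantum speedup would enter.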

One intriguing idea would be to see if one could use a BM to learn a quantum computation. Such an idea is possible, according to Seth, because BMs can be used to simulate universal classical computation. So, maybe with a bit of work and some generalization, we could devise a BM to do universal quantum computation.

Unfortunately, whether quantum machine learning provides any actual speedup over classical machine learning is an open question. What is more, there does not seem to be as systematic a study of the computational complexity of machine learning as one would hope. That is, we apparently do not yet know how hard machine learning is in the complexity sense. Thus, even if we have a new quantum algorithm for doing machine learning, the only way we can certify it as “better” is by comparing it to existing machine learning techniques, and not by proving that it has such and such computational complexity.

**Scott Aaronson** (MIT)

The title of Scott’s talk certainly provided food for thought - “Black Holes, Firewalls, and the Complexity of States and Unitaries”. One major question at the intersection of quantum information and general relativity concerns the nature of black holes and how/whether they preserve information. For instance, imagine you take your favorite book and throw it into a black hole. What happens to the information contained in the book? Where does it go? Nobody seems to have a particularly satisfactory answer right now, though we have made progress.

An initial answer came when Stephen Hawking proposed that black holes actually emit radiation. Over a long enough timescale, the information contained in the book would eventually be re-radiated back out. All we would have to do is collect all the radiation and then (somehow!) find the information in the book.

Unfortunately, Hawking radiation has its own problems. These problems are beyond the scope of this post, so I will defer to the writings of John Preskill on the “monogamy of quantum entanglement” and the black hole firewall controversy. Suffice it to say that, as far as Hawking radiation is concerned, it seemed to be possible to clone the quantum information inside the black hole, which violates the no-cloning theorem.

Scott discussed one resolution to the problem of cloning. In order for the information inside the black hole to be cloned, we have to unscramble the Hawking radiation. Turning this into a formal computational problem, the unscrambling might be so hard that even a quantum computer could not do it efficiently. Essentially, by the time we unscrambled the information, the black hole might have evaporated! Some initial work in this direction was done by Daniel Harlow and Patrick Hayden; Scott showed us how one can extend and formalize their results.

**Jeongwan Haah** (MIT)

Jeongwan talked about something related to my own work through a discussion of “Sample-Optimal Tomography of Quantum States”.
In order to do quantum state tomography - that is, estimating an unknown density matrix - how many initial copies of the state are necessary to achieve a desired accuracy under fidelity or trace distance? Previously, it was shown that, in order to achieve error \(\epsilon\) in trace distance, \(\mathcal{O}(dr^{2}/\epsilon^{2})\) copies were required, assuming the state had dimension \(d\) and rank \(r\). Jeongwan discussed a measurement scheme in which the *minimum* number of copies goes as \(\mathcal{O}\big((dr/\epsilon^{2})\ln(d/\epsilon)\big)\). I find this result nice in that it helps eliminate assumptions regarding the rank of the state.
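Ignoring constant factors, the two bounds are easy to compare numerically. This little Python snippet (my own illustration, not from the talk) shows that the newer bound wins once the rank \(r\) exceeds roughly \(\ln(d/\epsilon)\):

```python
import math

def old_bound(d, r, eps):
    # O(d r^2 / eps^2), constants suppressed
    return d * r**2 / eps**2

def new_bound(d, r, eps):
    # O((d r / eps^2) * ln(d / eps)), constants suppressed
    return (d * r / eps**2) * math.log(d / eps)

d, eps = 2**10, 0.01   # a 10-qubit state, 1% trace-distance error
for r in (1, 16, 256):
    print(r, old_bound(d, r, eps), new_bound(d, r, eps))
```

For these parameters \(\ln(d/\epsilon) \approx 11.5\), so the new bound is smaller for ranks 16 and 256 but not for rank 1 - exactly the regime where rank assumptions used to matter.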

However, one immediate question arises - what happens as \(d\rightarrow \infty\)? Formally, the above bound implies you need an infinite number of copies to achieve any kind of accuracy! This ties directly back into why model selection is important.

**Martin Roettler** (Microsoft)

Martin’s talk on “Reversible Circuit Compilation with Space Constraints” had a great advertisement for LIQUi|⟩, a programming language designed by Microsoft for quantum computation. LIQUi|⟩ can be used to program classical simulations of quantum circuits, and to help researchers compile quantum circuits for execution on small-scale quantum information processors. (See this arXiv paper for more information.)

The talk also touched upon one perennial problem in circuit compilation - the trade-off between time and space requirements. These trade-offs are especially important for reversible computing, a computing paradigm in which it is possible to “undo” the computation by running it in reverse. Martin discussed a new tool, REVS, which allows one to “compile high-level programs into lower-level reversible circuits”. In this way, one could write a quantum simulation algorithm, then convert it into a reversible format, then compile it. One neat trick was to think of ways to do “quantum garbage collection” - recycling ancilla qubits for use in later steps of the computation.
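The “garbage collection” idea is essentially Bennett’s compute-copy-uncompute pattern. Here is a toy classical simulation of it (my own sketch, not REVS output): an AND is computed into an ancilla bit, copied to the output, and then uncomputed so the ancilla returns to 0 and can be recycled.

```python
def toffoli(bits, a, b, t):
    # Reversible controlled-controlled-NOT on a classical bit list
    if bits[a] and bits[b]:
        bits[t] ^= 1

def cnot(bits, c, t):
    # Reversible controlled-NOT
    if bits[c]:
        bits[t] ^= 1

def and_with_cleanup(bits, a, b, anc, out):
    """Compute AND(a, b) into `out`, using ancilla `anc`, which is
    returned to 0 afterward (compute-copy-uncompute)."""
    toffoli(bits, a, b, anc)   # compute: anc = a AND b
    cnot(bits, anc, out)       # copy the result to the output bit
    toffoli(bits, a, b, anc)   # uncompute: anc goes back to 0

bits = [1, 1, 0, 0]            # [a, b, ancilla, out]
and_with_cleanup(bits, 0, 1, 2, 3)
# bits is now [1, 1, 0, 1]: the ancilla ended in 0, ready for reuse,
# and the output bit holds a AND b
```

The space-time trade-off Martin discussed is visible even here: skipping the final uncompute step saves a gate but leaves the ancilla dirty, so it cannot be reused later in the circuit.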

## Conclusion

Overall, QuICS 2015 was a great workshop. It was a bit hard to keep up with everything at times, which usually happens at these sorts of things. If you are so inclined, some attendees were tweeting during the workshop; check out the hashtag #quics!

Finally, I wanted to give a shout out to Daniel Nagaj, who gave a talk on “Very Entangled Spin Chains”. I didn’t follow most of what he said, but I did notice his slides were really well done! Check out his talks page for examples of great slide design.