Quantum states can exhibit bizarre but powerful properties, such as being in a superposition or containing correlations not possible in classical physics. If these properties can be controlled, then they can be exploited in quantum technologies to dramatically transform computing, enable secure cryptography, and unlock new ways of observing the universe. Quantum optics is a particularly fertile field for testing and developing these technologies – but how exactly can we design a quantum optics experiment to produce useful quantum states of light that can be put to good use? The usual methods involve painstaking calculations, clever insights, and utilising knowledge built up from years of experience and careful reading of previous researchers’ work. But the counter-intuitive nature of the quantum world, whilst enabling disruptive new technologies, can make it particularly challenging to design quantum experiments that can engineer useful states – our usual intuitions can fail us here. Indeed, while the current techniques used by researchers have led to a host of impressive and exciting results, we are far from finding the optimal methods to manipulate and control quantum states.

To overcome this I developed a new technique that instead employs *computer algorithms to design quantum optics experiments* for us^{1}. While computers are not yet creative, and in many tasks can be outsmarted by children, they do have the unique ability to perform millions of calculations per second, and it is this powerful feature that I exploit. Specifically, my algorithm shuffles through different combinations of experimental equipment – such as beam splitters, phase shifters, and non-linear crystals (which “squeeze” the light) – to find arrangements that can produce quantum states of light with specific properties, tailored to a given task. As in a recent independent project using related techniques by Mario Krenn, Anton Zeilinger and colleagues^{2}, my computer algorithm found numerous solutions that surpass the previous results in the literature, whilst involving surprising experimental arrangements quite different from the human designs.
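To give a flavour of how such a search can work, here is a toy sketch in Python (my own illustration, not the actual Tachikoma code, and all function names are made up for this example): a two-mode Gaussian-optics simulation, where a toolbox of squeezers, phase shifters and beam splitters is represented by symplectic matrices acting on the covariance matrix of the light, and a blind random search over arrangements is scored by how strongly the output mode is squeezed below the vacuum noise level.

```python
import numpy as np

rng = np.random.default_rng(1)

# Symplectic matrices for a two-mode Gaussian toolbox, (x1, p1, x2, p2) ordering.
def squeezer(mode, r):
    S = np.eye(4)
    S[2 * mode, 2 * mode] = np.exp(-r)          # x quadrature shrinks
    S[2 * mode + 1, 2 * mode + 1] = np.exp(r)   # p quadrature grows
    return S

def phase_shifter(mode, theta):
    S = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    S[2 * mode:2 * mode + 2, 2 * mode:2 * mode + 2] = [[c, s], [-s, c]]
    return S

def beam_splitter(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.block([[c * np.eye(2), s * np.eye(2)],
                     [-s * np.eye(2), c * np.eye(2)]])

def random_setup():
    """A random arrangement of 4 elements drawn from the toolbox."""
    ops = []
    for _ in range(4):
        kind = rng.integers(3)
        if kind == 0:
            ops.append(squeezer(rng.integers(2), rng.uniform(-1, 1)))
        elif kind == 1:
            ops.append(phase_shifter(rng.integers(2), rng.uniform(0, 2 * np.pi)))
        else:
            ops.append(beam_splitter(rng.uniform(0, 2 * np.pi)))
    return ops

def score(ops):
    """Smallest quadrature variance of mode 1 (vacuum = 1): lower = more squeezing."""
    V = np.eye(4)                    # two-mode vacuum covariance matrix
    for S in ops:
        V = S @ V @ S.T              # each element updates the state
    return np.linalg.eigvalsh(V[:2, :2]).min()

best = min(score(random_setup()) for _ in range(300))
print(f"best variance found: {best:.3f} (vacuum limit = 1)")
```

The real algorithm is of course far more sophisticated in how it searches and what figure of merit it optimises, but the skeleton is the same: simulate each candidate arrangement, score the output state, and keep the best.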

The picture above is an artist’s impression of the algorithm, named “Tachikoma”. While my first work only found quantum states for making high-precision measurements, future work will find states for a wide range of tasks: highly entangled states, states with a large quantum Fisher information, and the preposterously named zombie cat states and three-headed cat states!

Artwork by Joseph Namara Hollis, __josephhollis.com__.

- P. A. Knott, New Journal of Physics 18, 073033 (2016) http://iopscience.iop.org/article/10.1088/1367-2630/18/7/073033/meta
- Krenn, Malik, Fickler, Lapkiewicz & Zeilinger, Phys. Rev. Lett. 116, 090405 (2016) https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.090405


*Happy 2017 from the Quanta Rei team! We close the current year with a guest post from our former member, long-time friend and collaborator Dr Rosario Lo Franco |RLF>*

From the dawn of agriculture until the industrial revolution, all over the world, human beings have faced the problem of food preservation. We are now quite familiar with many techniques to reach this goal, fundamental for our existence on Earth, most of them used in our own kitchens. Efficient and widely employed procedures include, for instance: drying, salting, smoking, cooling and freezing. Let us focus on the last one, which works well for a very wide variety of foods. We are aware that, in order to preserve foods for long periods by freezing, our freezers and refrigerators must be able to maintain temperatures well below zero degrees Celsius, typically −18° C or below (0° Fahrenheit or below). The air at the poles of our planet would be an extremely efficient freezer for food, although it is not a very pleasant environment to live in (the average temperatures at the North Pole and the South Pole are, respectively, 0° C (32° F) and −28.2° C (−18° F) during summer, and −40° C (−40° F) and −60° C (−76° F) during winter). There is therefore continuous technological development in engineering efficient and eco-friendly freezing machines to ensure reliable and lasting food preservation. If someone came and told us that it is possible to preserve food by freezing at room temperature, we would not believe them, unless we were in front of Marvel’s Iceman (see picture aside).

Nevertheless, as we highlighted in a recent experiment (http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.117.160402) following a theoretical prediction (http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.210401), there is something very precious at the most fundamental level of matter that can be frozen, in principle, even at room temperature: *quantum coherence*. Quantum coherence represents the wavelike nature of matter and the essence of quantum parallelism, that is, the possibility for a very small system (at the atomic scale, nanometres, and below) to be simultaneously in different states. In recent years, quantum coherence has been shown to be the primary ingredient enabling the development of quantum technologies, with commercial applications in secure networked communication, computing, imaging, sensing, and simulation. The performance of quantum devices at these tasks greatly surpasses that of the ordinary classical technology currently available. There is thus great interest in the practical use of quantum coherence, beyond its fundamental value. But, as often happens when one pursues an ambitious goal, there are challenging obstacles to be overcome. In particular, one of the biggest problems in the reliable exploitation of this quantum resource is the unavoidable adverse environmental condition. This is not to be understood as “bad weather”, but rather as the destruction of quantum coherence due to the interaction between the quantum system and its surrounding environment, a phenomenon known as *decoherence*.

This decoherence is one of the explanations scientists give for the fact that we do not observe quantum parallelism at the macroscopic level. Putting it in a Schrödinger-like fashion, the cat in the box is either alive or dead, not alive and dead: the interaction with the environment destroys the possibility of the simultaneous existence of the two “states” of the cat in extremely short, imperceptible times. This applies to all macroscopic “classical” objects. At the microscopic level, one can think of an assembly of quantum bits (qubits), which are two-state systems (e.g., atoms, photons, nuclear spins) that can live in their 0 and 1 states at the same time thanks to the laws of quantum mechanics. Although quantum coherence between the possible collective states is observable at this scale, it typically lasts for only a fraction of a second before decoherence destroys it. This means that if we want to use quantum coherence for next-generation inviolable communications and super-fast computing, we must design and engineer efficient strategies to maintain it for as long as possible. Researchers around the world have developed methods to slow down or correct the effects of decoherence, but these methods are generally very demanding, requiring precise external control of the quantum evolution, additional quantum devices, and specially structured environments. Unlike these approaches, our study shows a natural mechanism by which compound quantum systems counteract decoherence without any external intervention, such that, under suitable conditions, quantum coherence remains unaltered – frozen – during the evolution. Great Scott!, Dr. Emmett “Doc” Brown would say.

Let us now go a little deeper, though briefly, into the story which led us to this result. A story of a group of people meeting in different places at different times to “coherently” discuss the main aspects of the theory and devise the experiments. As usual in science, collaboration among people is essential: a sort of human coherence, linking people from Nottingham (UK), São Carlos and Rio de Janeiro (Brazil), Munich (Germany) and Palermo (Italy).

The core of the theoretical study and prediction of the phenomenon of frozen quantum coherence was established in Nottingham. Previous works on the universal freezing of quantum correlations in compound quantum systems, and the introduction of suitable quantifiers of coherence, opened the way to the discovery that quantum coherence in a system made of an even number of qubits can in principle be maintained constant, equal to its initial value, during a non-dissipative evolution. This is achievable, once a suitable initial state is prepared, whenever there is no energy exchange between the system and the environment, provided that the quantum coherence is measured along a direction perpendicular to the direction in which the decoherence naturally acts. In fact, the nature of the system–environment interaction singles out a direction along which decoherence acts, destroying any coherence it finds there – let us say along a *z* axis. To give a pictorial representation, one can see the system as made of qubits, where each qubit has two basis states represented by two arrows (spins) pointing up

or down along the *z* direction. The quantum coherence of the system is then observed along the *x* direction, by aligning detectors sensitive to arrows pointing along the *x* axis. And, very importantly, the coherence is predicted to be frozen independently of the measure employed to quantify it: in this sense, the phenomenon is universal. In the picture, you can see Gerardo (right) next to me, along a street in Nottingham, making a call to the labs with this successful “observation gimmick” in mind: our sly happy faces are quite evident!
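The mechanism just described can be illustrated in a few lines of code. The following is my own toy sketch (not the published analysis, and the conventions here are simplified): a two-qubit Bell-diagonal state, prepared with correlation coefficients satisfying a freezing-type condition (here c₂ = −c₁c₃), undergoes local dephasing acting along *z*; its l1-norm coherence measured along *z* decays, while the coherence measured along *x* stays frozen at its initial value.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # z <-> x basis change

def bell_diagonal(c1, c2, c3):
    """Two-qubit Bell-diagonal state (I + sum_j c_j sigma_j x sigma_j) / 4."""
    return (np.eye(4) + c1 * np.kron(X, X)
            + c2 * np.kron(Y, Y) + c3 * np.kron(Z, Z)) / 4

def dephase(rho, q):
    """Local z-dephasing (phase damping) of strength q acting on each qubit."""
    K0, K1 = np.sqrt(1 - q / 2) * np.eye(2), np.sqrt(q / 2) * Z
    kraus = [np.kron(A, B) for A in (K0, K1) for B in (K0, K1)]
    return sum(K @ rho @ K.conj().T for K in kraus)

def l1_coherence(rho):
    """Sum of absolute values of the off-diagonal elements (computational basis)."""
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

rho0 = bell_diagonal(1.0, -0.6, 0.6)   # suitable initial state: c2 = -c1*c3
for q in (0.0, 0.3, 0.6, 0.9):
    rho = dephase(rho0, q)
    coh_z = l1_coherence(rho)          # coherence along z: decays with q
    coh_x = l1_coherence(np.kron(H, H) @ rho @ np.kron(H, H))  # along x: frozen
    print(f"q={q:.1f}  coherence_z={coh_z:.3f}  coherence_x={coh_x:.3f}")
```

Running it shows `coherence_z` decaying from 1 towards 0 as the noise strength q grows, while `coherence_x` stays pinned at 0.6 for every q: the coherence read out perpendicular to the noise direction is frozen, with no external control whatsoever.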

Now comes our visit to the labs in Brazil and the planning of the experiment there. It was August 2014 when Gerardo and I went to the Institute of Physics in São Carlos (Universidade de São Paulo) to meet our colleagues there, who soon became great friends. In the picture below, you can see Diogo (right), Gerardo (centre) and me (left) smiling during a discussion about the main theoretical aspects of the nuclear magnetic resonance experiment, which would reproduce the predictions on freezing coherence, with the background company of the magical music of Vinícius de Moraes and Antônio Carlos Jobim. This picture represents, in my opinion, a very good synthesis of the lovely and fun atmosphere of that time, which certainly contributed to the success of the story. The experiment involved people from São Carlos but also from Rio de Janeiro, working at the Centro Brasileiro de Pesquisas Físicas (CBPF). Therefore, we spent an entire week in Rio to meet our colleagues at CBPF and follow the work in the lab.

Rio de Janeiro: what an amazing and unique city! I will never forget landing there by airplane. The pilots introduced Rio to us with a slow, spectacular flight above the city, allowing us to immediately appreciate the beauty of houses nested in lush green vegetation gently fading into the blue Atlantic ocean, from which small islands and green mountains emerge like sweet fingers greeting the people looking at them. The statue of Christ the Redeemer on top of Corcovado really gives the feeling of embracing the whole city, while keeping an eye on the Sugarloaf mountain (*Pão de Açúcar*). It was August, which means summer in Europe, winter in Brazil. But it is universally known that Rio has only two seasons: summer and hell! Therefore, we enjoyed summer at that time, moving from Copacabana beach to downtown samba events. I had the opportunity to go to the *Pedra do Sal* (Rock of Salt), a historical site where samba is played and sung by musicians around a table in the company of hundreds of people, Cariocas and tourists alike: amazing, moving and poetic. Thank you Rio for that night! Of course, with those days mainly dedicated to our scientific research, we had the luck to understand that nuclei of carbon and hydrogen also danced bossa nova, subject to nuclear magnetic resonance (NMR), to assemble themselves into the desired quantum states. During our visit to CBPF (you can see a picture aside with Roberto (right), Gerardo (left) and me (back)), the guys in the lab gave us the good news that they would be able to implement the theoretical method using a setup made of a simple chloroform sample labelled with carbon-13, encoding a two-qubit system in the ^{1}H (hydrogen) and ^{13}C (carbon) nuclear spins.
The systems are naturally affected by dephasing noise (the most common type of non-dissipative environmental noise), so that once the desired initial quantum states are prepared, the freezing of quantum coherence can be automatically observed, without external control.

The German contribution then entered the game, with the aim of going further and proving the occurrence of this counterintuitive phenomenon in larger quantum systems. Thanks to our “globetrotter” researcher Isabela, who travelled a lot to connect all the people involved through an effective “human coherence”, the lab at the Technische Universität München succeeded in increasing the number of qubits from two to four. This four-qubit system was a heteronuclear sample developed in Steffen Glaser’s group, manipulated by a prototype NMR probe. In particular, the four qubits were encoded in ^{1}H, ^{13}C, ^{19}F (fluorine) and ^{31}P (phosphorus) nuclear spins. Both the Brazilian and the German setups demonstrated long-lived quantum coherence in room-temperature liquid-state NMR quantum simulators. Thus, freezing coherence at room temperature is possible, opening up further research on exploiting this resource and on understanding its possible role in biological complexes! Thank you Isabela, Alexandre, Tom, Marco, Raimund, Roberto, Ivan, Steffen, Eduardo, Diogo and Gerardo for the great collaboration.

I would like to conclude this story by pointing out the following message: it is often by changing viewpoint that striking results surprisingly show up. They are there – Nature is kind of willing to bring them to human knowledge – but one needs to find the right perspective to see them! This concept may be illustrated by a picture I took during my visit to the wonderful Iguazú Falls, near the triple border of Brazil, Argentina and Paraguay: the beauty of Nature is just there, waiting to be seen and appreciated by properly sensitive eyes!

*This is a guest post by Dr Rosario Lo Franco |RLF>*


When young Tudor visited the West Indies for the first time, he was delighted by the warm Caribbean weather. He may have been sunbathing or walking on the beach when the idea struck him like lightning. It was a new business model, completely overlooked, and extremely profitable: Why not cut ice in Boston, ship it to the tropics and sell it to local restaurants? They could start selling chilled drinks, or even ice cream. Most people there had never seen ice before. *They would go crazy for it!*

I can imagine young Tudor considering with excitement the feasibility of his idea. Ice was free. At least in Boston. And there was plenty. One only needed to cut it in blocks. Ships were affordable at the time and, to keep the ice from melting, one could insulate it with sawdust, which was also essentially free. Believe it or not, nobody had thought about large scale commercial ice ventures before. Back then, ice was only used in small quantities wherever it was naturally available. But the idea of getting people to actually pay for ice was simply revolutionary.

And so, on the 10th of February 1806, the brig *Favourite* departed from Charleston, bound for Martinique with 130 tons of ice in her hold. Not surprisingly, the venture was an absolute disaster: the ice that didn’t melt during the three-week journey to St. Pierre melted soon after arrival, since there were no storage facilities on the island at the time. Although he did manage to sell some ice, poor Frederic lost more than $3,500 in this first frustrated attempt.

But he didn’t give up. His determination to succeed against all odds was almost heartbreaking. Tudor had “ice houses” built in Martinique and Havana at great expense, and started experimenting with different kinds of insulation to improve the efficiency of his business. In spite of his efforts, he kept accumulating debt, and even spent some time in debtors’ prison between 1812 and 1813. And yet, he kept trying.

As his sales started turning a profit, his ambition grew out of proportion: he harnessed horses with metal blades to cut ice in greater amounts, and teamed up with a business partner to start shipping ice to India.

*(Yes, my friends, you read well. This guy was seriously shipping ice from Boston to Calcutta)*

And guess what? He succeeded. By the 1850s, Frederic Tudor, a.k.a. *“the Ice King”*, had built an ice empire, shipping over 150 000 tons of ice per year to South America, India, Persia and the Caribbean. Tudor died a millionaire, in 1864, at the age of 80.

Crucially, besides making a fortune, the Ice King managed to change the everyday life of people around the globe by creating a new *need*. Larger amounts of ice were stored every winter and transported by boat or train to big cities, where its use for food preservation, medicine or manufacturing of chemicals became widespread. For instance, by the time Tudor died, *lager* had superseded all other types of beer in Germany.

Of course, ice had its detractors too. They say that in Vermont preachers warned in their sermons against “the abominations of sucking soda” as late as 1890, and even laws were passed in some towns prohibiting selling it on Sundays. Around that time, any accidentally frozen food was immediately trashed and the regulations on cold-storage warehouses were extremely restrictive, supposedly in the interest of public health.

Back in 1834, when Tudor was expanding his trade to India, a man called Jacob Perkins invented the *refrigerator*, or “artificial ice-making machine”. Perkins’s patent sketched a primitive hand-operated compression fridge with very limited usability. The idea of artificial refrigeration didn’t really take off until *much* later. In 1860, Ferdinand Carré patented the absorption refrigerator in the US. This ingenious invention, powered by heat instead of manpower, did find practical applications: during the Civil War, it supplied the Confederates with ice, which they could no longer get from the North. By the 1890s, steam-powered compression refrigerators and absorption cooling systems were being produced for industrial use, although these were still humongous machines weighing from tens to hundreds of tonnes.

Even then, some of the best-informed engineers of the time laughed at the idea of refrigerating machines replacing natural ice, let alone the concept of *domestic refrigerators*. But just like those who laughed at Frederic Tudor when his *Favourite* cleared customs for Martinique in 1806, they were simply wrong.

The history of refrigeration teaches us two important lessons. The first is that the biggest impediment to technological progress is often the opposition of *people* themselves. It takes a lot of time and determination to convince scientists, businessmen and the general public of the interest, profitability and practical uses of new technologies and ideas. The second lesson is that businessmen play a starring role in technological progress, side by side with scientists and engineers. Their support and initiative are vital to making technological breakthroughs happen. Sadly, this is very rarely acknowledged.

As you all know, artificial ice eventually won the war against the once-thriving ice-shipping empire of Frederic Tudor. Furthermore, small-scale domestic fridges did make their way into our households in the early 20th century.

The first domestic refrigerators were noisy and dangerous, though. All the refrigerants used at the time were poisonous, and accidents happened quite often. They say that when he read in the press the tragic news of the death of an entire family, poisoned by the gases of their own household refrigerator, Albert Einstein got very upset. This happened in the mid 1920s, when Einstein was visiting his good friend Leo Szilard. These two theoretical physicists, better known for their work on quantum mechanics, relativity, and nuclear energy, embarked on a seven-year collaboration to design safer refrigerators. They produced a total of 45 patents and even managed to *build* several working prototypes of the most ingenious cooling machines one could possibly imagine.

But that’s another story.

Maybe next time.


*“When two systems enter into temporary physical interaction due to known forces between them, and when after a time of mutual influence the systems separate again, then they can no longer be described in the same way as before. I would not call that one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought. By the interaction the two systems have become entangled.”*

*[E. Schroedinger, 1935]*

Being a physicist or a mathematician and living in the normal world is not a simple task. This is not only a matter of physical appearance, conventions, or the like.

It is also about the weird sensation you feel before answering the question “*What do you do in your life?*” Generally speaking, saying you studied maths or physics leads to the answer “*Wow! You’re very smart*”, which in general is how the conversation ends. However, some braver people go further and deeper into the discussion, asking what your research is about.

It has sometimes happened that I had to explain to my – non-physicist – friends what entanglement is. This is how it usually goes:

– *I’m studying a type of correlation between quantum systems called entanglement.*

– *Sounds interesting! Can you tell me more?* –

– *Well, basically you let two very small systems interact and it can happen that they end up being correlated in such a way that even if they separate again you can’t describe them separately anymore.*–

… and here comes the awkward part…

– *Oooh! So it’s like they fell in love!*–

Since I’m not a very romantic person, the first time I heard this reaction my face looked like this:

But in time I learned not only to take things less seriously, but also that sometimes concepts you would never relate can explain each other more easily (especially to people who are completely unfamiliar with one of the two subjects).

The first time I came into contact with the concept of “*monogamy of entanglement*” was while reading B. Terhal’s paper celebrating C. H. Bennett’s legacy in quantum information theory [B. Terhal, *Is entanglement monogamous?* IBM J. Res. Dev. 48, 71 (2004)]. Monogamy of entanglement: it makes sense, no? If entanglement can be compared to love, then we should also question its monogamy. So let me explain what monogamy of entanglement is.

Suppose that you have three systems. Let me call them *A*, *B* and *C*, for simplicity. Furthermore, suppose *A* and *B* are maximally entangled. It turns out that neither of these two systems – say *A* specifically – can share any entanglement with *C*. If we want to maintain the romantic vision of entanglement as love, it makes perfect sense to call this property *monogamy of entanglement*: it immediately becomes the story of a woman, Alice, and a man, Bob, who are so in love with each other that it is impossible for them to find room for anyone else.

Now we can take this concept further and ask more questions. For instance, Alice and Bob may not be so keen on spending all their time together and with no one else, and maybe they would like to meet other people. This is how they meet Charlie. Still, being so in love with each other, Alice and Bob cannot dedicate too much time, energy and attention to someone else – say, Charlie.

This is how we arrive at a more interesting concept of monogamy of entanglement. In 2000 it was proved that if two qubits (elementary quantum systems) are entangled, the entanglement that they can share with a third party is bounded [V. Coffman, J. Kundu, and W. K. Wootters, *Distributed entanglement*, Phys. Rev. A 61, 052306 (2000)]. In other words, the entanglement that *A* shares with *B*, summed with the entanglement that *A* shares with *C*, cannot exceed the entanglement that *A* shares with *B* and *C* seen as a whole system. The work of Coffman, Kundu and Wootters captured what seemed to be an essential feature of quantum entanglement, and since then many other works have followed, investigating this property more deeply in many directions. In particular, from the rough description that I gave you, there are at least two possible directions that are not so difficult to see:

- Study what happens as the number of subsystems grows;
- Study what happens as the dimension of the single subsystems grows.

But maybe the experts among you, readers, have noticed that I glossed over a crucial piece of information: what measure did they use to quantify the entanglement? Indeed, there are many mathematical functions that can serve as entanglement measures, and each of them captures different aspects of this quantum feature. In their paper, Coffman, Kundu and Wootters proved the inequality C²(A|B) + C²(A|C) ≤ C²(A|BC) – from now on, the CKW inequality – using as measure the (squared) *concurrence* C [S. Hill and W. K. Wootters, *Entanglement of a pair of quantum bits*, Phys. Rev. Lett. 78, 5022 (1997); W. K. Wootters, *Entanglement of formation of an arbitrary state of two qubits*, Phys. Rev. Lett. 80, 2245 (1998)]. This means that there is a third possible direction in which one can investigate further:

3. Study what happens when changing the entanglement measure employed.
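As a quick numerical aside (my own illustrative sketch, not taken from any of the papers): the CKW inequality with the squared concurrence can be checked directly for the three-qubit W state, which in fact saturates it.

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])
YY = np.kron(Y, Y)

def concurrence(rho):
    """Wootters' concurrence of a two-qubit density matrix."""
    rho_tilde = YY @ rho.conj() @ YY
    # square roots of the eigenvalues of rho * rho_tilde, in decreasing order
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Three-qubit W state |W> = (|001> + |010> + |100>) / sqrt(3)
psi = np.zeros(8, dtype=complex)
psi[[1, 2, 4]] = 1 / np.sqrt(3)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)  # axes (a,b,c,a',b',c')

rho_AB = np.trace(rho, axis1=2, axis2=5).reshape(4, 4)     # trace out C
rho_AC = np.trace(rho, axis1=1, axis2=4).reshape(4, 4)     # trace out B
rho_A = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # trace out B too

c_ab, c_ac = concurrence(rho_AB), concurrence(rho_AC)
tangle_A_BC = 4 * np.linalg.det(rho_A).real  # = C^2(A|BC) for a pure state
print(f"C(A|B)^2 + C(A|C)^2 = {c_ab**2 + c_ac**2:.4f}  <=  C(A|BC)^2 = {tangle_A_BC:.4f}")
```

For the W state both pairwise concurrences equal 2/3, so the left-hand side is 8/9, exactly matching the tangle of *A* with *BC*: Alice's love is fully used up by the pairwise bonds, with nothing left over.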

Here comes one of the main problems with the CKW inequality. Indeed, it can be shown that its validity is not universal, but rather depends on the specific choice of measure. In fact, useful entanglement monotones, such as the *entanglement of formation* or the *distillable entanglement*, do not directly obey the inequality.

Still, monogamy seems to be a key feature of entanglement, as hinted – quite apart from the nice parallel with love – by the *shareability problem*: an entangled state of two parties admits no symmetric extension to infinitely many parties.

So we are led to raise the question: **should any valid entanglement measure be monogamous** in a CKW-like sense? In particular, given an entanglement measure *E*, should it satisfy an inequality like

E(A|BC) ≥ f( E(A|B), E(A|C) ),

with *f* a suitable function that doesn’t make everything trivial? Clearly, we would like *f* to be as general as possible, meaning for instance that we would like it to be dimension-independent.

The first step in answering our question is to define the properties that we would like an entanglement measure to satisfy in order to be considered valid. Here I will skip over some technicalities, but let me just say that we would like our entanglement measure to be additive, in the sense that if you bring two copies of the same quantum state the total entanglement you get is twice that of a single copy, and to respect the geometry of quantum states, in the sense of assigning a high value to states which stay far away from the subset of unentangled states no matter their dimension (one such example is the so-called fully antisymmetric state in arbitrary dimension, which has a constant distance from unentangled states). This last requirement can be considered a kind of *faithfulness* of the measure. (Now you see where I’m going, don’t you?) But before going further, and before you start thinking that I’m a cheater, let me tell you that these properties have not been chosen in such a way that no possible *E* satisfies them. Indeed, important measures of entanglement such as the *regularization of the relative entropy of entanglement* and the *entanglement cost* satisfy them.

As you may be expecting, the interesting thing in this discussion is that it can be proved that a suitable *f* for an *E* satisfying the required properties, including dimension-independence (or faithfulness), can’t exist (what a surprise!). Indeed, what can be shown is that, by enlarging the dimension of the system enough, we can always find a state for which the entanglement of part *A* with *B* and *C* taken together is effectively the same as the entanglement that *A* shares with *B* and *C* individually.

This leads us to the starting title and directly to the end of the story. The moral is that even if monogamy and faithfulness are properties that we would like entanglement (maybe also love!) to be endowed with, they are not compatible with each other at all (is that also true of love?), at least within the weird quantum world.

…Ending titles…

[based on a real story:

C. Lancien, S. Di Martino, M. Huber, M. Piani, G. Adesso, A. Winter

Phys. Rev. Lett. 117, 060501 (2016)]

Ehm… If you’re wondering, the answer is yes: there is a closing scene after the ending titles.

There is a way to escape the monogamy vs faithfulness problem: you can give up the universality of *f* with respect to the dimension of the system, thus preserving both monogamy and faithfulness.

*Would you rather care for monogamy, faithfulness or dimensions?*


This week I am away attending a different type of event, at least compared to my usual scientific conferences. I have been honoured as a *Young Scientist* by the World Economic Forum (WEF), which means I have been selected within a group of 45 scientists under 40 years of age, including a delegation of European Research Council (ERC) grantees, to attend the 10th edition of the WEF Annual Meeting of the New Champions in Tianjin (China), also known as Summer Davos 2016. This event brings together our (relatively small) group of scientists, a group of Tech Pioneers (promising entrepreneurs at the initial stages of their ventures), a large number of companies, press and financial delegates, and world leaders including the Chinese Premier, the Canadian Minister of Innovation, and others, for a total of over 2000 participants. In a beautifully designed Convention Centre, we have all sorts of sessions from 7.30am to 6pm to learn and discuss around the main theme: *The Fourth Industrial Revolution and its Transformational Impact*.

So what is this all about?

Before coming here, I only knew about the First Industrial Revolution, the one triggered by steam engines in the late eighteenth century. Apparently, there have been two more, whose effects continue through today. The Second came about a century later, with the development of electricity and mass production. The Third started last century, with transistors, consumer electronics and computers. What more?

Professor Klaus Schwab, founder of the WEF, argues in his latest book (published in January 2016) that we are on the verge of a Fourth revolution, triggered not by a single breakthrough, but by a confluence of new technologies which are having a rapid transformational impact on society. Let me quote from the introduction of his book:

*We have yet to grasp fully the speed and breadth of this new revolution. Consider the unlimited possibilities of having billions of people connected by mobile devices, giving rise to unprecedented processing power, storage capabilities and knowledge access. Or think about the staggering confluence of emerging technology breakthroughs, covering wide-ranging fields such as artificial intelligence (AI), robotics, the internet of things (IoT), autonomous vehicles, 3D printing, nanotechnology, biotechnology, materials science, energy storage and quantum computing, to name a few. Many of these innovations are in their infancy, but they are already reaching an inflection point in their development as they build on and amplify each other in a fusion of technologies across the physical, digital and biological worlds.* (K. Schwab, “The fourth industrial revolution”, 2016)

Yes! **Quantum technologies** are part of this revolution. These are exciting times for doing quantum research, and while our current projects may not directly feed into the big technology-transfer exercise being carried out right now in the UK and in many other countries (China on top, but also Canada, Australia, the Netherlands, and more generally the European Union as a whole through a €1Bn Flagship Initiative, whether we will be part of it or not), they will most likely form the basis for a new generation of more efficient and more robust quantum technologies in future iterations. With an eye towards practical applications, we are in fact working on developing more concrete schemes for quantum parameter estimation and multiuser communication, and on understanding the resources which enable them to operate better than classical schemes even in the presence of noise, with suitable tailoring to the environmental conditions. We are also thinking in more detail about integrating and hybridizing some of the key emergent technologies that will form the core of the Fourth Industrial Revolution, such as quantum metrology and advanced manufacturing, in order to develop better sensors for industrial applications, for instance to monitor and control the 3D printing of complex structures. Hopefully, in a few years, we will see some of our ideas developed in practice, in collaboration with engineers and industrial partners.

This event was and is very useful in reminding myself, as the leader of a research team whose many junior members have different inclinations and aspirations, and as a young scientist who still thoroughly enjoys getting his hands on mathematical problems and thrills at discovering the most basic physical facts of our universe, that scientists at every stage of their career need to achieve ever greater impact outside the ivory tower of academia. I was exposed to stimulating discussions with the Editor-in-Chief of Nature, the President of the ERC, a community of scientists across a broad range of disciplines, and some tech pioneers who transitioned from academia into their own enterprises. I do not foresee ever taking such a step myself (I love my position, and the freedom attached to it, far too much), and my research is perhaps still too fundamental for such a possibility even to be considered, but I am now prepared to see one of my students branch out into a start-up company in the future, e.g. developing an original application of quantum information theory, in which I would be happy to maintain a collaborative involvement on the purely scientific side. I am already amazed at the variety of my students' interests, e.g. in data science, machine learning, and quantum algorithms, so this transition may happen sooner rather than later.

During the various meetings, forums, and IdeaLabs, I was also exposed to a variety of really cool science, such as how we can turbocharge our immune system to better defend ourselves against antibiotic-resistant bacteria, or how to develop a sat-nav for navigating the brain so as to remove brain tumor cells without damaging the healthy ones. I feel even prouder to be supported by the ERC, which also funds such interesting science, no doubt having a more concrete impact on humanity than my fixation on the quantification of quantum correlations … at least for now! In the current geopolitical climate (BREXIT was a huge topic of discussion here, with lots of uncertainty, as one can imagine), I hope our rulers and, most importantly, the public remain aware of the crucial role played by European funding in the development of British science and leadership. Perhaps it is partly our fault as scientists: we missed the opportunity to enthuse our local communities (speaking for myself, in particular in the Midlands, among the top pro-Leave regions according to the statistics) about the benefits of European integration for science, and the benefits of science for society at large. This is another reminder for the future.

**We have a responsibility as scientists to pursue truth, to create knowledge, and to nurture the talent of the next generations. But we also have an ethical duty of stewardship towards society**, the taxpayers, the public who entrust us to drive the world forward and address global challenges with creativity. We need to try harder to communicate (neither dumbing down nor hyping up) our intentions and outcomes.

In this respect, I believe the WEF is offering unique opportunities for bringing together policy makers, scientists and innovators, to develop the agenda for the future of the planet. I feel humbled and inspired to be part of this.

In a few hours, I will be interviewed in a briefing panel about what, in my view, can be the transformational impact of the Fourth Industrial Revolution. I hope (if I am not too tired, as I ended up not sleeping at all!) to convey some of the points and experiences I have shared in this post. As for answering the question itself, well, I think that by definition a revolution is an event whose impact cannot be predicted. Thinking back to what I have now learned to call the Third Industrial Revolution, it is a curious anecdote that when the transistor was developed and brought down to a reasonable size, the best application people could envision for the technology was hearing aids. We have all seen how history was made instead, with the advent of computers and all the modern consumer electronics based on transistors.

In language more familiar to my research field, these developments are often flagged as Quantum Technologies 1.0, since transistors are based on semiconductors, which are modelled according to the laws of quantum mechanics. However, these technologies do not fully exploit the potential of quantum physics, in particular entanglement, nonlocality, and superposition effects. Quantum Technologies 2.0, exemplified by the quantum computing mentioned by Professor Schwab in the quote above, are instead designed to exploit such resources, and are the ones fuelling (and perhaps soon driving) the Fourth Industrial Revolution. My short-term expectation is to witness a feasible proposal for some innovative application of Quantum Technologies 2.0 to healthcare, bioengineering, or advanced manufacturing within the next year, hopefully presented at the 11th WEF Annual Meeting of the New Champions, where I hope to be invited again.

[Update] For those interested: my issue briefing can be viewed here.

---

On the one hand, in our everyday experience, which deals with *macroscopic* objects and thus fits within the **classical realm**, there is no evidence that such objects lack a reality of their own. On the other hand, amazingly enough, some experiments have very recently demonstrated, beyond any major loophole, that at the *subatomic scale*, which lies comfortably within the **quantum realm**, there can be no reality-in-itself if the following very natural condition governs the quantum world: if two events are so close in time and so far apart in space that not even light can connect them (so-called **spatially separated** events), then they cannot stand in a causal relationship. The latter condition goes under the name of **locality**. So, overall, the crazy quantum wonderland is free of reality-in-itself, or of locality, or even of both!

In order to get an idea of how these experiments managed to prove that at the microscopic scale nature fails to be jointly local and realistic, let us first become a little more familiar with the concept of local realism by considering the following thought experiment, taken from Michel Le Bellac’s inspiring *Quantum Physics* book.

Two travellers *A* and *B*, each carrying a suitcase, depart in opposite directions from the same point *O*. The suitcases of the travellers satisfy the following properties, as also illustrated in the figure below:

- they are circles divided into infinitesimal angular sectors labelled by the corresponding orientations *a*, *b*, …, etc.;
- each angular sector contains either the result (+) or (-), in such a way that two angular sectors that do not belong to the same suitcase but are labelled by the same orientation contain opposite results.

The travellers have picked up their closed suitcases at random at the starting point *O* and do not know what results are inside.

Eventually, the two travellers are checked by Alice and Bob. Alice opens the angular sector of the suitcase of the traveller *A* labelled by the orientation *a* while Bob opens the angular sector of the suitcase of the traveller *B* labelled by the orientation *b* in such a way that their observations are *spatially separated* in the aforementioned sense.

Then they repeat this experiment several times. What happens is that both Alice and Bob observe a perfectly random series of (+) and (-), even though Alice always opens the angular sector labelled by the orientation *a* of traveller *A*'s suitcase, and Bob always opens the angular sector labelled by the orientation *b* of traveller *B*'s suitcase. This is because, at every repetition of the experiment, the two travellers pick up their closed suitcases at random.

After the above series of experiments, Alice and Bob meet, compare their results, and calculate the quantity *P(a=+,b=+)*, that is, the joint probability that both the angular sector *a* of the suitcase carried by *A* and the angular sector *b* of the suitcase carried by *B* contain a (+). They calculate this probability simply by dividing the number of times they both got (+) in the same repetition by the number of repetitions of the experiment. For example, if *a=b*, then when Alice and Bob meet and compare their results they find a perfect anti-correlation between the outcomes of their observations: any time Alice has found, e.g., the result (+), in the same repetition Bob has found (-), and vice versa, so that they get *P(a=+,a=+)=0*.
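The statistics Alice and Bob gather are easy to mimic in code. Below is a minimal sketch (my own illustration, not taken from Le Bellac): each repetition draws a fresh random suitcase for traveller *A*, with traveller *B*'s suitcase carrying the opposite signs.

```python
import random

def run_trials(n, alice_angle, bob_angle, orientations):
    """Estimate P(alice_angle = +, bob_angle = +) over n repetitions."""
    hits = 0  # repetitions in which both Alice and Bob find (+)
    for _ in range(n):
        # Traveller A's suitcase gets a random (+)/(-) in every sector;
        # traveller B's suitcase always contains the opposite results.
        suitcase_a = {o: random.choice([+1, -1]) for o in orientations}
        suitcase_b = {o: -s for o, s in suitcase_a.items()}
        if suitcase_a[alice_angle] == +1 and suitcase_b[bob_angle] == +1:
            hits += 1
    return hits / n

angles = ["a", "b", "c"]
random.seed(0)
# Same orientation: perfect anti-correlation, so P(a=+, a=+) = 0 exactly.
print(run_trials(10_000, "a", "a", angles))   # 0.0
# Different orientations: both find (+) in roughly a quarter of the runs.
print(run_trials(10_000, "a", "b", angles))
```

Each print line corresponds to the frequencies Alice and Bob would tally after many repetitions; the individual series each of them sees remains perfectly random.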

This experiment is local and realistic if the following two conditions jointly apply:

- *Locality*: the outcome of Alice’s observation, say (+), only reveals a piece of information already stored in the local region of space-time associated with the suitcase carried by *B*: the opposite result, (-), must be in the angular sector labelled by the orientation *a* of the latter suitcase. The correlations between the two suitcases were introduced at the time of departure and reappear as classical correlations between the results of the observations of Alice and Bob. In other words, within this classical experiment, the opening of the suitcase of traveller *A* by Alice does not disturb the suitcase of traveller *B* whatsoever, but it determines the result Bob would obtain if he opened the angular sector of the suitcase of traveller *B* labelled by the orientation *a*. The sign contained in the latter angular sector existed before the suitcase of *A* was opened by Alice. An analogous condition holds for Bob.
- *Reality*: even though, in every repetition of the experiment, Bob is allowed to open only the angular sector *b* of the suitcase carried by *B*, this suitcase still has a well-defined result in any other angular sector *b’* different from *b*, as illustrated in the figure above. In other words, the suitcase carried by *B* has simultaneous physical reality in all of its angular sectors, regardless of whether they are opened (*phenomenon*) or not (*noumenon*). An analogous condition holds for the suitcase carried by *A*.

By jointly making these two assumptions, we can easily obtain a quantitative theoretical prediction on the joint probability of Alice and Bob both getting two pluses as follows:

*P(a=+,b=+) ≤ P(a=+,c=+) + P(c=+,b=+),*

where *c* represents another possible orientation. If this theoretical prediction is found to be violated by Alice and Bob, then the joint local realism hypothesis underlying it is wrong. The above experimental test of local realism was discovered by the Northern Irish physicist John Stewart Bell in 1964, and marked the turning point from a philosophical to a concrete scientific debate regarding the local realistic nature of our universe. The inequality indeed goes under the name of a Bell inequality.

Now, using two suitcases, the above inequality can never be violated, so there is no refutation of the local realism hypothesis in the classical realm. On the other hand, what happens if we replace the two macroscopic suitcases with two microscopic electrons travelling towards Alice and Bob? First of all, what do Alice and Bob observe of the received electrons? They measure their spin, which is essentially a property of the electron that has a component along every possible direction, but can be measured along only one direction at a time. Moreover, the result of such a measurement is dichotomic, i.e., it can be either (+) or (-). Therefore, if the spins of the two electrons are in a so-called Einstein-Podolsky-Rosen (EPR) state, which is perfectly anti-correlated, it turns out that the local realistic way to model them is exactly the one we have just used in Le Bellac’s thought experiment for the two suitcases, so that exactly the same Bell inequality would apply to the electrons if they were governed by local realism. Well, experiments have now demonstrated that two electrons in an EPR state violate the above inequality, thus proving that local realism cannot govern the quantum realm!
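To see concretely why the suitcase model can never violate the inequality, while quantum spins can, one can check both cases numerically. The sketch below is my own illustration: it assumes the Wigner form of the Bell inequality, P(a+,b+) ≤ P(a+,c+) + P(c+,b+), and the textbook prediction for two spins in the anti-correlated EPR (singlet) state, P(x+,y+) = ½ sin²(θ_xy/2), with θ_xy the angle between the two measurement directions.

```python
import itertools
import math

# Local realism: each suitcase/electron pre-assigns +/-1 to orientations
# a, b, c, with perfect anti-correlation between the two sides. For every
# deterministic assignment the Wigner inequality holds, hence it holds for
# any probabilistic mixture of assignments as well.
for a, b, c in itertools.product([+1, -1], repeat=3):
    p_ab = (a == +1 and -b == +1)   # Alice opens a, Bob opens b
    p_ac = (a == +1 and -c == +1)   # Alice opens a, Bob opens c
    p_cb = (c == +1 and -b == +1)   # Alice opens c, Bob opens b
    assert p_ab <= p_ac + p_cb      # never violated classically

# Quantum mechanics: EPR/singlet prediction for both sides finding (+).
def p_plus_plus(theta):
    return 0.5 * math.sin(theta / 2) ** 2

theta = math.pi / 3                  # put a, c, b at 0, 60 and 120 degrees
lhs = p_plus_plus(2 * theta)         # P(a+, b+) = 0.375
rhs = p_plus_plus(theta) * 2         # P(a+, c+) + P(c+, b+) = 0.25
print(lhs > rhs)                     # True: the inequality is violated
```

The exhaustive loop over the eight deterministic assignments is the whole content of the "classical" proof; the quantum violation appears as soon as the three orientations are not too far apart.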

---

The word “coherence” has different meanings for different people. Most people may think of the notion of being logical and consistent, be it in speech or in action. Indeed, we all hope to deal with people, especially politicians(!), who exhibit coherence between what they say and what they do. And we all hope that the next major blockbuster movie is coherent, with no major plot holes that make you grind your teeth in your seat, unable to fully enjoy your popcorn.

To a physicist, however, coherence is also a notion associated with wave behaviour. More precisely, it is associated with the possibility of seeing the effects of superposition, which is the coherent(!) combination of different physical possibilities. For example, the superposition of sound waves is what allows people to listen to music in the background while pleasantly chatting.

Among the effects of superposition most affected by coherence (or the lack thereof) are phenomena of interference, be it constructive or destructive, like those you can experience with noise-cancelling headphones, for sound, or by looking at the colours of a soap bubble, for light. The recent detection of gravitational waves was made possible precisely by the fact that light is a wave, and as such can be used to detect tiny variations in length within an “interferometer”. Without coherence, neither constructive nor destructive interference would be possible: both kinds of interference would be “washed out” and nonexistent in practice.

The importance of coherence becomes enormous, both conceptually and practically, when we realize that in quantum mechanics *everything* is also a wave, including what would normally (or, rather, “classically”) be considered “particles”, like electrons and atoms. Mathematically speaking, we associate a wave, the wave-function, with any physical system or compound of physical systems, more precisely with the state of the system. The evolution in time of the state of the object is given by the evolution of this wave, described by the famous Schrödinger equation. Predictions of what one can observe, and with what probability, can then be computed from knowledge of the wave at a given time.

In the case of information, this wave-like property of objects leads to the consideration of the quantum bit, or *qubit*, in which one can have a superposition of the standard values assumed by a bit, 0 and 1. While in the classical realm the latter would be considered alternative, mutually exclusive options, in the quantum case they can coexist, in the sense of superposition. This is at the basis of the computational power of future quantum computers.
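As a minimal illustration (my own, not tied to any specific implementation), here is the arithmetic of a qubit in equal superposition: two amplitudes, with outcome probabilities given by the Born rule.

```python
import cmath
import math

# A qubit state is a pair of complex amplitudes (c0, c1) with |c0|^2 + |c1|^2 = 1.
# The equal superposition of the two classical bit values 0 and 1:
c0 = 1 / math.sqrt(2)
c1 = 1 / math.sqrt(2)

# Born rule: each outcome occurs with probability |amplitude|^2.
p0 = abs(c0) ** 2
p1 = abs(c1) ** 2
print(p0, p1)   # both ~0.5: the two alternatives coexist until measured

# A relative phase leaves these probabilities unchanged; it only shows up
# when amplitudes are later recombined and interfere.
c1_phase = cmath.exp(1j * math.pi / 3) / math.sqrt(2)
print(abs(c1_phase) ** 2)   # still ~0.5
```

The phase line is the quantum ingredient with no classical-bit counterpart: two states with identical outcome statistics can still behave differently once waves are made to interfere.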

In a more realistic setting, and taking into account issues like ignorance(!), unwanted interactions with an environment, and all kinds of “noise”, the state of an object is associated not with a wave, but rather with a so-called density matrix. The latter can be thought of as the *in*coherent combination of several waves, leading to the decrease and potentially the disappearance of interference. One could compare coherent and incoherent mixing to, respectively, expert cooking, where many flavours combine nicely, either reinforcing or contrasting each other, and blending everything in a mixer, which often makes a tasteless combination of even the most delicious ingredients.

In the density matrix formalism, (surviving) coherence is often equated with the presence of off-diagonal elements in the matrix representation of a quantum state. Such off-diagonal elements are the “fingerprint” of the quantum superposition of the (classically) mutually exclusive properties associated with the basis in which the matrix is written; the latter, although in principle arbitrary, is typically singled out by the physics, for example by consideration of the possible energy states of the system. Most importantly, interesting effects, like oscillations, can occur when, and only when, there are off-diagonal elements in the energy representation of a quantum state.
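A small numerical sketch (illustrative only) of this "fingerprint": the density matrix of an equal superposition has off-diagonal entries, a fully dephased 50/50 mixture does not, and only the former shows interference when the amplitudes are recombined.

```python
import numpy as np

s = 1 / np.sqrt(2)
plus = np.array([s, s])                      # coherent superposition of 0 and 1

rho_pure = np.outer(plus, plus.conj())       # density matrix of the pure state
print(rho_pure)        # off-diagonal ~0.5 entries: the fingerprint of coherence

# Full dephasing ("blending in the mixer"): kill the off-diagonal elements.
rho_mixed = np.diag(np.diag(rho_pure))
print(rho_mixed)       # diagonal 0.5 / 0.5: an incoherent 50/50 mixture

# Interference test: probability of outcome 0 after a Hadamard-like recombination.
H = np.array([[s, s], [s, -s]])
p_pure  = (H @ rho_pure  @ H.conj().T)[0, 0].real   # ~1.0: constructive interference
p_mixed = (H @ rho_mixed @ H.conj().T)[0, 0].real   # 0.5: interference washed out
print(p_pure, p_mixed)
```

The two states have identical diagonals, so no single measurement in the original basis can tell them apart; the off-diagonal elements only reveal themselves through interference.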

Somewhat surprisingly, the purposeful and focused study of coherence in this matrix formalism was initiated only recently, leading to an explosion of interest and of works on the topic. Researchers are now trying to develop a fully consistent theory of coherence, treated as a resource to be characterized, quantified, and manipulated.

In [C. Napoli et al., Phys. Rev. Lett. 116, 150502 (2016)], together with collaborators from the University of Nottingham in the UK and Mount Allison University in New Brunswick, Canada, I put forward a quantifier of coherence, the robustness of coherence, which has many appealing properties, including the possibility of calculating it efficiently when the density matrix is known, of measuring it directly in the lab, and of associating it with practical tasks. Indeed, we find that the robustness of coherence of a quantum state sets the ultimate limit on the usefulness of the physical system involved for metrological tasks.
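For a single qubit the robustness of coherence takes a simple closed form: it coincides with the ℓ1-norm of coherence, i.e. twice the magnitude of the off-diagonal element of the density matrix (for larger systems one instead solves a semidefinite program). A minimal sketch assuming that closed form, with a function name of my own choosing:

```python
import numpy as np

def robustness_of_coherence_qubit(rho):
    """Robustness of coherence of a single-qubit density matrix.

    Assumption: for qubits the defining minimization reduces to the
    l1-norm of coherence, 2*|rho_01|. Larger systems require an SDP.
    """
    return 2 * abs(rho[0, 1])

s = 1 / np.sqrt(2)
plus = np.array([s, s])
rho = np.outer(plus, plus)               # maximally coherent qubit state
print(robustness_of_coherence_qubit(rho))        # ~1.0

rho_dephased = np.diag(np.diag(rho))             # coherence destroyed
print(robustness_of_coherence_qubit(rho_dephased))   # 0.0
```

The value 1 for the equal superposition matches the intuition that it is the most coherent single-qubit state, while any diagonal (incoherent) state scores exactly zero.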

In the companion paper [M. Piani et al., Phys. Rev. A 93, 042107] we expand on these ideas, using the fact that coherence, despite being such a fundamental concept, can also be seen as “just” a special case of “asymmetry”, a word that may also mean different things to different people. Nonetheless, in this case, it is easy to grasp that the asymmetry of an object is associated with how different it looks when, let us say, we rotate it or flip it. It should be clear that a sphere is a very symmetric object; for example, it looks the same from whatever direction we look at it, even if we look at it while standing on our hands, rather than on our feet. On the other hand, say, a face, albeit typically symmetric with respect to a left-right flip, is not symmetric with respect to an upside-down flip. This means that we can realize that we are standing on our hands by noticing that the faces of the bystanders around us are upside-down themselves — this even disregarding the puzzlement or amusement that could transpire from the same faces.

In [M. Piani et al., Phys. Rev. A 93, 042107] we introduce the robustness of asymmetry as a quantifier of the asymmetry of a quantum state with respect to a set of transformations that form a group; that means, in particular, a set of transformations such that combining any two of them, one after the other, again yields a transformation in the group, and such that any transformation can be undone by another transformation in the group. Again, think of rotations of an object, and of how they can be combined and undone. We prove that the robustness of asymmetry of a quantum state can likewise be easily calculated, that it can be measured directly in experiments, and that it sets the ultimate limit on the usefulness of a system prepared in that state for the task of telling apart the transformations of the group, another metrological task.

You might still wonder where the name “robustness” comes from. Well, it comes from the fact that the property of interest, coherence or asymmetry, is quantified by the amount of noise that it takes to destroy it; that is, literally, by how robust it is. What our works point out is that this already operational interpretation of the quantifier is precisely associated with how useful the coherence or asymmetry present in the quantum system is. That is, whether you have a positive attitude (“what is the best use I can make of the resource?”) or you would rather prepare for the worst (“how much noise can our system tolerate?”), robustness is your answer.

---

I clearly remember playing with the seven wooden pieces, and the cheer of satisfaction when I finally figured out a way to reassemble them into the designated large square. More recently, with the advent of smartphones, plenty of Tangram puzzle apps have become available for download, all based on the same principle: reassembling a number of small polygonal pieces to reconstruct a more complex target shape. Sometimes there is only one way to achieve the goal, given the set pieces; at other times, a puzzle has multiple solutions.

Today I am going to discuss a variation of the Tangram puzzles with an important twist: *price*. Imagine that each of the set pieces to be used for assembling the final target shape comes at a cost (which we will indicate with the currency symbol ¥). For any given target shape, the goal of the game is then to find the solution which minimizes the overall cost. This is not necessarily the solution with the fewest pieces; it is the solution (or solutions) involving the cheapest possible combination of pieces that reconstructs the target.

For example, imagine we are allowed the following set with associated prices: squares with side lengths of 1 unit (available for free, 0¥), right-angled triangles with two side lengths of 1 unit (priced at 1¥), and right-angled triangles with two side lengths of 2 units (priced at 2¥). See below, a picture is worth a million words.

Notice that, counterintuitively, one may combine two small triangles (totalling 2¥) to form a unit square (which by itself costs 0¥): this is a bit like mixing two contrasting flavours which eventually annihilate each other. For the goals of the game, this simple observation implies that putting two triangles in place of a whole unit square (whenever the latter is available) generally leads to overpriced strategies, hence to non-optimal solutions.

Then, let’s try to analyze different solutions for the following target shape: a *trapezium*. We depict the target and a few solutions below, together with their price. One can easily convince oneself that, given an unlimited number of set pieces as in the collection above, the best possible combinations to achieve the target require a total spend of 2¥. Other solutions, even with less pieces, are more expensive.

Consider now the situation in which one restricts the available set pieces. Say we are only allowed to use at most one 0¥ unit square, two 1¥ smaller triangles, and three 2¥ bigger triangles. It is a nice exercise for the reader to figure out all the possible solutions in this case (some are shown below). Interestingly, it turns out all of them carry *the same* total price (6¥) in this case, whatever the arrangement. They are all equivalently optimal.

One may wonder whether this is a coincidence, or a more general consequence of the fact that, in particular, we limited our set to only one free piece. One may then go and investigate more complex shapes, and the answer to the latter question is probably not so important for the art of Tangram puzzles *per se*. However, the answer to a closely related question turns out to be crucial for the quantification of *entanglement* in multipartite quantum systems.

We have talked about entanglement in previous posts, and we will likely talk about it again. It is one of the most puzzling yet fundamental characteristics of quantum mechanics, which enables the implementation of disruptive technologies, and manifests as a form of correlation among the different parts of a composite quantum system stronger than anything classical physics allows. Determining precisely *how strong* is a very difficult problem in general. We have to adopt a measure to gauge the extent to which a group of many quantum particles is entangled. This is akin to the currency we adopted to gauge the value of Tangram shapes, except that the entanglement currency is usually expressed in *ebits* rather than ¥. In general, a system of many quantum particles can be prepared in infinitely many different states: a *state* is a collection of information about the system which allows us to predict the probabilities of obtaining one outcome rather than another when measuring properties of the system with an apparatus. If this information is complete, we say the system is in a *pure state*. If the information is incomplete (i.e., we have some ignorance about possible measurement results, due to noise or to a lack of control over the system itself), then we say the system is in a *mixed state*.

Pure states are akin to our Tangram set pieces, while a mixed state is like our Tangram target shape. Some pure states come at no cost, meaning that they have no entanglement (they are called separable states, as the properties of the various subsystems are independent of each other), much like the little red squares in our Tangram set. A mixed state can be prepared by combining various pure states, and there are many different (infinitely many!) combinations of pure states which allow us to construct a given mixed state. It can happen that mixing two or more pure entangled states yields an overall separable mixed state (analogously to combining two 1¥ triangles to obtain the 0¥ square): in this case, such a preparation is certainly not the most efficient, as one could have obtained the same result using only pure separable states, without using any entanglement whatsoever, that is, without paying any price.
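The triangle-square annihilation has a standard quantum counterpart: mixing the two maximally entangled Bell states Φ⁺ and Φ⁻ in equal proportions yields a separable state, namely an equal mixture of the product states |00⟩ and |11⟩. A quick numerical check (my own illustration):

```python
import numpy as np

# Two maximally entangled two-qubit (Bell) states, each "costing" 1 ebit:
phi_plus  = np.array([1, 0, 0,  1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
phi_minus = np.array([1, 0, 0, -1]) / np.sqrt(2)   # (|00> - |11>)/sqrt(2)

# Equal mixture of the two entangled pure states:
rho = 0.5 * np.outer(phi_plus, phi_plus) + 0.5 * np.outer(phi_minus, phi_minus)

# The cross terms cancel, leaving an equal mixture of the product states
# |00> and |11>: a separable state, preparable with zero entanglement.
separable = np.zeros((4, 4))
separable[0, 0] = separable[3, 3] = 0.5
print(np.allclose(rho, separable))   # True
```

Paying 1 ebit per ingredient here buys a state one could have had for free, which is exactly why cost-minimizing over preparations is a meaningful question.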

Like in our variation of the Tangram game, we can in fact assign an overall cost to each different preparation of a target mixed state from ensembles of pure states, with the costs now measured in ebits. Given an entanglement measure (technically, some polynomial function of the state elements), it is relatively easy to evaluate it on pure states, so we can assign prices to all the pure states in our quantum Tangram set *a priori*. The hard question then becomes: *given a target mixed state, which preparation using pure states carries the minimum overall cost in ebits?* This minimum cost is assigned as the entanglement value of the target mixed state. As you can see, the question is exactly the one we asked in the Tangram puzzle game. In quantum mechanics and convex analysis, this is known as the *convex roof* problem. It can be solved exactly in a few cases where the target mixed state has particular symmetries or a small dimension, but it is a nightmare to deal with in general.
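The convex roof optimization can be sketched numerically for a toy case. The snippet below is an illustration with helper names of my own (not the method of our paper): it takes a rank-2 two-qubit state, uses Wootters' concurrence as the pure-state "price", and brute-forces over the size-2 decompositions generated by mixing the subnormalized eigenvectors with a 2×2 unitary. Optimal decompositions can in general need more than two elements, so this search only bounds the convex roof from above.

```python
import numpy as np

def concurrence_pure(v):
    """Wootters' concurrence |2(ad - bc)| of a two-qubit vector (a, b, c, d).

    Applied to a subnormalized vector w, this quadratic expression directly
    gives the weight-times-concurrence term ||w||^2 * C(w / ||w||).
    """
    a, b, c, d = v
    return abs(2 * (a * d - b * c))

# A rank-2 target: mix a maximally entangled Bell state with a product state.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2), 1 ebit
prod = np.array([0, 1, 0, 0])                   # |01>, 0 ebits
rho = 0.5 * np.outer(bell, bell) + 0.5 * np.outer(prod, prod)

# Every size-2 decomposition of a rank-2 state is obtained by mixing the
# subnormalized eigenvectors with a 2x2 unitary.
evals, evecs = np.linalg.eigh(rho)
sub = evecs[:, -2:] * np.sqrt(evals[-2:])       # the two nonzero modes

best = np.inf
for theta in np.linspace(0, np.pi / 2, 181):
    for phi in np.linspace(0, 2 * np.pi, 361):
        u = np.array([[np.cos(theta), np.exp(1j * phi) * np.sin(theta)],
                      [-np.exp(-1j * phi) * np.sin(theta), np.cos(theta)]])
        w = sub @ u.T                           # columns of the new ensemble
        avg = concurrence_pure(w[:, 0]) + concurrence_pure(w[:, 1])
        best = min(best, avg)

print(best)   # cheapest size-2 preparation found, in "concurrence" units
```

For this particular state the cheapest preparation found coincides with the eigendecomposition itself (half a unit of concurrence), but for a generic rank-2 state the grid search genuinely has to hunt through the continuum of decompositions, which is the whole difficulty of the convex roof.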

In a recent article, my student Bartosz and I found a particularly interesting way to solve the convex roof problem for the quantification of entanglement in systems of many quantum particles, under two specific conditions: 1) we specialized to mixed states which have rank equal to 2 (stretching our analogy, we could say target Tangram shapes which are two-dimensional polygons, rather than more complicated three-dimensional figures); 2) we focused on the case where, among all possible pure states in our available set, only one is separable (i.e. with 0 ebits, like the restriction to only one free square in the second Tangram example above). We call this second crucial condition *one root*, shorthand for *“one root to rule them all”* (we even write this in the actual paper! Yes, we are such nerds). Then some cool things happen. First of all, we can represent our set of states on a *sphere*.

On this sphere, pure states (our set pieces) span the surface, while mixed states fill the interior. For instance, the point indicated by *ρ* is a particular mixed state, like the trapezium target shape in our earlier examples. In this depiction of our quantum Tangram version of the convex roof problem, finding a preparation for the mixed state *ρ* amounts to finding a set of points on the surface (pure states) whose barycenter (i.e., their weighted average) coincides with the point *ρ*. For instance, the target state *ρ* can be prepared by mixing the pure states Φ_{0} and Φ_{1} with suitable weights, or alternatively the pure states ψ_{0} and ψ_{1} with other suitable weights. But the crux of the problem remains: which of all these possible preparations of the mixed state *ρ* are the cheapest in units of entanglement? You will have noticed that the sphere is coloured: the colours reflect the entanglement value of the pure states on the surface, ranging from 0 (red) to some maximal value (blue). As per the one root requirement, there is only one pure state with exactly zero entanglement, and it is denoted by *z*. The state diametrically opposite to it on the sphere, labelled *z’*, is in turn the most entangled one.

Well, we are now ready to give the solution to the main problem: under the conditions stated above, the convex roof problem becomes *completely trivial*! More precisely, every possible preparation of *ρ* carries the same overall entanglement value, even if the individual pure states involved in each preparation may have varying levels of entanglement. This is a bit like the second example with the trapezium in our Tangram analogy, with the important difference that in the present case we have a theorem proving that all solutions are equivalent when the one root condition is in force. The theorem follows from nothing more than elementary Euclidean geometry, and builds on a classical result due to Apollonius of Perga (circa 200 BC). This finding implies that entanglement can be quantified easily in mixed states of many particles compliant with the one root constraint, and is specified exactly in geometric terms by the distance between the plane containing the given mixed state *ρ* and the separable state *z* embodying the one root (i.e., in the picture, the length of the segment connecting *z* to the centre *ρ_{c}* of the shaded plane containing *ρ*).

More details are available in the (pretty technical) research paper (free preprint available here), and in a more accessible coverage of our result which appeared on the Phys.Org website.

Finally, I am personally quite excited to realize that, somehow unconsciously, my longstanding passion for Tangram has finally proven useful to illustrate some of the complicated problems that I like to tackle at work nowadays. Come to think of it, it would not be a bad idea at all to implement a suitable variant of Tangram, which might provide new solutions for the convex roof entanglement problem in more general cases, where even a brute-force computer optimization becomes unfeasible. There are a number of interesting activities from various groups aimed at coding “games for quantum research”, i.e. exploiting human abilities in devising solutions to complex optimization problems of current relevance, under the guise of playing simple and addictive games. Foldit was a breakthrough in this respect in the field of biochemistry. For recent worthy developments in quantum theory through games, keep an eye on ScienceAtHome and the QuantumGameJam.

But this can perhaps be the subject of another post. For now, let me be excused, as I have a particularly challenging non-quantum Tangram to solve on my phone!


There’s no doubt that some may have liked thermodynamics better than others, but we all

In my case, I later started a physics degree and had to take yet another “Basic Thermodynamics” course. While electromagnetism and mechanics had become considerably scarier in the first year of the degree, thermodynamics still looked pretty much the same.

*(Nothing to worry about. Piece of cake)*

However, some things got me thinking. I started to ask myself about basic concepts that had never troubled me before. For instance, the ‘state variable’ *T* stands for ‘temperature’ and appears all over the place in thermodynamics but, did I really understand what *temperature* was?

“You must be kidding. *Everyone* knows what temperature is!”, you may say.

“Really? Then tell me what it is”, I would reply defiantly.

“*Temperature* is what you measure with a *thermometer*. It tells you how hot things are”, you could answer.

“So you’d say that temperature is a measure of the ‘hotness’ of things. What is that *hotness* then?”, I would continue impassively.

“I mean the amount of *heat* that things carry”, you could reply. And maybe you’d like to add here: “For God’s sake! And you say you finished that physics degree of yours?”

I concede that *heat* and *temperature* can easily be confused. Top scientists of the 18th century made that very mistake when they developed the now-obsolete ‘caloric theory’. *Everyone* knows that when a hot object is placed next to a cold object, they exchange heat until their temperatures level off. However, *temperature* is not a measure of the *heat* contained in either of the bodies. As a matter of fact, bodies carry no heat at all. They carry *energy*. Heat is more of a *currency* for trading energy.

“What is temperature then?”

It seems that the question is not as silly as it sounds. Is it?

This sense of perpetual wonder is quintessential to thermodynamics. We are so used to concepts like *heat*, *energy*, *power* or *temperature* that we rarely stop to think about them.

“Who cares how temperature is *rigorously* defined?”, you could object. “Other than a first-year physics student, that is… As I said, as long as I have a thermometer I can *measure* temperature. I don’t really need to know *what* I’m measuring. All I need to know is that 37 °C is a normal body temperature, or that it takes 30 minutes at 180 °C to bake a carrot cake”.

That’s not just a fair point. It’s actually brilliant: Historically, it was not the theory of thermodynamics that we know today (with its elegant axioms and all) which drove the Industrial Revolution during the 18th and 19th centuries. Instead, progress was possible thanks to the *intuition* of engineers, solidly built on the experiences of a lifetime dealing with temperature, heat, and work.

For instance, let’s talk about temperature. Its *existence* was promoted to the level of a postulate or Law of Thermodynamics (the Zeroth Law) only in the 1930s. The statement goes like this:

If a body A is in thermal equilibrium with two other bodies B and C, then B and C are in thermal equilibrium with each other. Hence, systems can be sorted into equivalence classes. We may label these classes with a parameter T, called *temperature*.

For A and B, being in thermal equilibrium simply means that nothing happens if they are brought into contact with one another. The Zeroth Law certainly *sounds like* a useless truism, with no bearing on practical *thermometry*. Ancient physicians definitely knew about fever, and felt the urge to measure the *temperature* of their patients (whatever that was) as early as the second century A.D. In the late 1590s Galileo and his students pioneered the thermometer-manufacturing business, and after a few centuries, people got pretty good at it: better designs were conceived, reference points were established, and temperature scales were defined, allowing for the *comparability* of measurements. By the 18th century, thermometers were customarily found in the streets and homes and had many different uses. However, *nobody* was aware of what they were measuring!
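The equivalence-class structure that the Zeroth Law postulates can be sketched in a few lines of code. This is purely illustrative (the function names and the toy "equilibrium observations" are my own): given observed pairs of bodies in mutual thermal equilibrium, transitivity sorts all bodies into classes, and each class can then be labelled by one value of *T*.

```python
# Toy illustration of the Zeroth Law: thermal equilibrium is an
# equivalence relation, so bodies fall into classes (one temperature each).

def find(parent, x):
    # Follow parent links to the class representative (with path compression).
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def equivalence_classes(bodies, equilibrium_pairs):
    # Merge bodies observed to be in mutual equilibrium into one class.
    parent = {b: b for b in bodies}
    for a, b in equilibrium_pairs:
        parent[find(parent, a)] = find(parent, b)
    classes = {}
    for b in bodies:
        classes.setdefault(find(parent, b), []).append(b)
    return list(classes.values())

# A in equilibrium with B and with C => B and C share one temperature class.
print(equivalence_classes(["A", "B", "C", "D"], [("A", "B"), ("A", "C")]))
# → [['A', 'B', 'C'], ['D']]
```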

Another good example of the value of intuition in scientific progress is the backstory of Carnot’s Great Theorem. In 1824 young Sadi Carnot published a scientific paper (or *“memoir”*, as they used to call them) on the *motive power* of fire. Nowadays Carnot’s memoir is acknowledged as the ‘birth certificate’ of thermodynamics and one of the biggest milestones in the history of physics. Let us quote here his main result:

The driving force of heat is independent of the agents chosen to produce it; its amount is determined solely by the temperatures of the bodies between which the ‘caloric’ is transferred.

“And what is that supposed to mean?”, you may ask.

“Well, it means that if you came up with a fancy design for a new heat engine tomorrow, no matter how many smart tweaks you had included in it, its energy-efficiency would always be less than a *universal* quantity (determined by the external temperatures). A more efficient heat engine would simply be *against the laws of nature*”, would be my answer.

Carnot’s statement is equivalent to the Second Law of thermodynamics, which was not established until thirty years later. He phrased it in very *practical* terms. Thanks to his theorem, engineers could benchmark their newest designs not against the best *existing* engines, but against the best engines possible.
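That universal benchmark has a very simple quantitative form: no engine operating between a hot reservoir at absolute temperature T_hot and a cold one at T_cold can exceed the efficiency 1 − T_cold/T_hot. A minimal sketch (the temperatures below are arbitrary illustrative numbers, not data from any real engine):

```python
# Carnot's universal bound on heat-engine efficiency:
# eta_max = 1 - T_cold / T_hot, with temperatures in kelvin.

def carnot_efficiency(t_hot_k, t_cold_k):
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("need 0 < T_cold < T_hot (in kelvin)")
    return 1.0 - t_cold_k / t_hot_k

# An engine running between 500 K and 300 K can never beat 40%,
# no matter how many smart tweaks its designer includes.
print(f"{carnot_efficiency(500.0, 300.0):.0%}")  # → 40%
```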

However, as shocking as it may sound, the line of reasoning that led Carnot to his Great Theorem was completely flawed. As a nostalgic supporter of the old-fashioned caloric theory, he thought that bodies carried *“caloric fluid”*, which was supposed to *flow downstream* from hotter bodies to colder bodies (the overall caloric being conserved). In Carnot’s view, that *stream of heat* could then be utilised to generate *motive power*, in the same way in which electricity is produced by the turbines of a hydroelectric power plant.

“Then, how could he get that theorem right?”, you’d probably wonder.

Undoubtedly it was not because he had a correct understanding of heat, temperature and work (he didn’t), but because his *intuitions* about them were right. We must bear in mind that in Carnot’s time, engineers had been designing and building engines for over a century. In all those years they were completely unaware of the Second Law or any other *legal requirement* that their designs had to comply with. They didn’t need to know such things. They just did their job.

*(And I can tell you that they were damn good at it)*

Carnot, who was an engineer himself, must have also developed some kind of *thermodynamic instinct*.

Sadly, poor Carnot’s extraordinary achievement went mostly unnoticed during his lifetime. He got his memoir rejected by the editor of *“Annalen der Physik”*, which was the leading physics journal of the time, and had to publish it as a standalone little book. By the time he died (of cholera, aged 36), nobody seemed to have read it. Nobody except the mining engineer Benoît Clapeyron. He liked it so much that he even reprinted a revised version of the memoir in the journal of the Polytechnical School of Paris in 1834, two years after Carnot’s death. Clapeyron’s revision attracted the attention of Johann Poggendorff, who published it in *“Annalen der Physik”* in 1843. Ironically, that was the same Poggendorff who had rejected Carnot’s original 19 years earlier! In this unlikely way, Carnot’s ideas made it to William Thomson (later promoted to ‘Lord Kelvin’), who wrote *“An Account of Carnot’s Theory of the Motive Power of Heat”* (1849). There, the word *“thermo-dynamic”* was used for the very first time. As you can see, thermodynamics works in mysterious ways.

In conclusion, it would be historically inaccurate to claim that the Industrial Revolution was fueled by advancements in thermodynamics. Instead, both the Industrial Revolution *and* thermodynamics were the result of the daring ingenuity of a legion of reckless engineers who insisted on pushing technology beyond the boundaries of physics. Such a feat was only possible because of their intuitive understanding of energy transformations and their everyday familiarity with the elements of thermodynamics.

*(And because rich fellows decided to spend quite a bit in science back in the day)*

Today, there’s another technological revolution going on: the *“Quantum Revolution”*. For instance, we have developed the technology to grab a single atom, manipulate it, move it around, put it close to other atoms, and make them interact. These technical capabilities bring all sorts of sci-fi gadgets a step closer to reality. Some prophesy that the Quantum Revolution will be as life-changing in the coming decades as the Industrial Revolution was in the 18th century.

*(Provided that rich fellows decide to put some good money on that too)*

For instance, how about building a heat engine made up of a single atom? Or even better, how about building an atom-sized refrigerator? Or hundreds of thousands of them? That definitely sounds like a lot of fun, but before we can start to play with our atoms, *quantum engineers* must solve a serious problem: our *classical* intuitions about temperature, heat and work are worth *nothing* in the *quantum world*. And that’s because classical and quantum physics are usually *very* different.

*(That makes everything much harder, doesn’t it?)*

This is why still today I ask myself things like:

“Can you measure *temperature* with a thermometer made up of a single atom?”

“What is the *heat* that two atoms can exchange?”

“What is the *work* done by a single atom?“

I wonder…


Just as any physical body has “mass” as an intrinsic property, and this mass couples to Earth’s gravitational force, some nuclei (like hydrogen and nitrogen) also have an intrinsic “nuclear spin”. This spin allows those particles to feel and respond to a magnetic field when one is present. Like a compass needle orienting itself along Earth’s magnetic field, the nuclear spins follow the local magnetic field around their positions.

So, imagine that you have a set of very small magnets spread on a table, like people on a dance floor. If you apply a magnetic field and slowly change it, you can make them move as you wish… as in a choreographed dance!

However, just as each person prefers certain kinds of music and will dance more enthusiastically to some beats than to others, different nuclear spin species have particular frequency preferences (called Larmor frequencies). In an NMR experiment, we first apply a strong static magnetic field (**B**_{0}), which defines a reference for our “spin-ballerinas”. Then, we apply a second, oscillatory magnetic field (a radiofrequency pulse) with a specific frequency that will only be heard and followed by spins with a preference for this frequency. This phenomenon is called resonance, and it is the basis of NMR experiments.

Now, you are probably wondering: “If this is all microscopic, how are you able to observe this spin dancing?” And: “why is it important for science?”

In 1831, the English scientist Michael Faraday discovered the principle of electromagnetic induction, according to which a magnet moved through a loop of wire makes an electric current flow in that wire. Applying this principle, we can transform the magnetic “spin dancing” (which is nothing other than a varying magnetic field) into a detectable signal: one that oscillates in time. At first glance this signal does not say much; however, if we apply a mathematical transformation that takes us from the time domain to the frequency domain (the Fourier transformation), we see peaks at certain special frequencies: the characteristic frequency of each nucleus! Therefore, when we see a peak at that frequency, we can guarantee that this nucleus is present in our sample.
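This signal chain – an oscillating, decaying induced voltage, Fourier-transformed to reveal a peak at the characteristic frequency – can be mimicked with a toy numerical example. The 100 Hz frequency, sampling rate, and decay rate below are arbitrary values chosen for illustration, not real Larmor parameters:

```python
# Toy NMR signal chain: a decaying oscillation recorded in time is
# Fourier-transformed, and a peak appears at the spin's frequency.
import numpy as np

fs = 1000.0                        # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)    # one second of signal
f_char = 100.0                     # toy "characteristic" frequency (Hz)
signal = np.cos(2 * np.pi * f_char * t) * np.exp(-3 * t)  # decaying oscillation

spectrum = np.abs(np.fft.rfft(signal))        # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
peak = freqs[np.argmax(spectrum)]             # frequency of the tallest peak
print(f"peak at {peak:.0f} Hz")               # sits at the 100 Hz oscillation
```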

Things can get much more interesting than that! At a ball you (hopefully!) do not need to dance alone all the time, and nuclear spins can likewise dance (interact) with others. Those interactions then change our signal and the shape of the peaks, allowing us to work out who is interacting with whom and in which way. This is why NMR is so often applied to determine molecular structures.

But this is a quantum information blog… what does all this “dancing” have to do with it? Classical computers (the ones we have in our houses) are built to process information based on bits, so everything that happens inside is described by a code made of combinations of 0s and 1s. In this case you only have two options: either your bit is 0 or it is 1, like a coin toss giving heads or tails. In an electrical circuit, we can translate this as “you have” (bit 1) or “you have not” (bit 0) an electrical signal. But… what if you could have both options at once? Would it change the way information is processed? Would it make computers faster? That is exactly what we are investigating nowadays.

Coming back to our dancing spins… Relative to our reference magnetic field, a spin can be in a parallel (associated with bit 0) or an anti-parallel (associated with bit 1) orientation; however, our spins can also lie in the perpendicular plane, which corresponds to a “superposition state” (a combination of 0 and 1 at the same time). In this sense, we can use NMR as a “quantum computer simulator”, where we encode bits as qubits (quantum bits), denoted |0> and |1>.
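As a minimal sketch of this correspondence (the vector representation is standard; the variable names are mine): parallel and anti-parallel spins map to the basis vectors |0> and |1>, and a spin lying in the perpendicular plane maps to their equal superposition.

```python
# Qubit states as vectors: spin parallel to B0 is |0>, anti-parallel
# is |1>, and a spin in the perpendicular plane is an equal superposition.
import numpy as np

ket0 = np.array([1.0, 0.0])           # |0>: spin parallel to B0
ket1 = np.array([0.0, 1.0])           # |1>: spin anti-parallel to B0
plus = (ket0 + ket1) / np.sqrt(2)     # superposition: perpendicular plane

# Born rule: the probability of each outcome is the squared amplitude.
p0, p1 = plus[0] ** 2, plus[1] ** 2
print(f"{p0:.2f} {p1:.2f}")           # → 0.50 0.50: both outcomes equally likely
```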

To exemplify how powerful quantum algorithms can be, I propose the following game. Suppose you have four balls numbered sequentially {1, 2, 3, 4}. I will cover the numbers and mix them cyclically. The possible outcomes are identified below and can be split into two groups, according to whether an even or an odd permutation – a function f_{i}(x) – was applied.

My question for you is: “which kind of permutation was performed (even or odd)?” You can guess, with a 50% chance of answering correctly, but to be sure you would need to uncover at least two balls. To answer such a question – “what is the parity of a certain function?” – a classical computer needs to evaluate the function at least twice. However, applying a quantum algorithm, we showed that it is possible to answer it by looking at a single result.

Our system was a liquid crystal containing a ^{23}Na nucleus. In the first step, we apply radiofrequency pulses to prepare our spin system in a pre-determined configuration |ψ>. Then, we apply three operations in sequence: a quantum Fourier transformation (U_{FT}), the f_{i}(x) chosen at random, and the inverse of the quantum Fourier transformation (U_{FT}^{†}). The result: if, after these three operations, our spin is in the same configuration as at the beginning, then the permutation f_{i}(x) was even; otherwise it was odd… and f_{i}(x) was evaluated only once! This makes our “quantum computer” twice as fast as a classical computer on this problem.
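This protocol can be mimicked with a small classical simulation of a 4-level system (^{23}Na has spin 3/2, i.e. four levels). What follows is my own sketch of the idea, not the authors’ actual pulse sequence; in particular, the initial superposition chosen below is an assumption that happens to make the even/odd outcomes perfectly distinguishable:

```python
# Single-query parity test on a 4-level system: apply the quantum Fourier
# transform, one cyclic permutation, then the inverse Fourier transform,
# and check whether the state returned to where it started.
import numpy as np

N = 4
idx = np.arange(N)
F = np.exp(2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)  # QFT matrix

def cyclic_shift(k):
    # Permutation matrix sending level |m> to |m + k mod 4>.
    P = np.zeros((N, N))
    P[(idx + k) % N, idx] = 1.0
    return P

psi = np.zeros(N, dtype=complex)
psi[0] = psi[2] = 1 / np.sqrt(2)   # chosen initial superposition (assumption)

for k in range(N):
    # Shifts by 0 and 2 are even permutations; shifts by 1 and 3 are odd.
    final = F.conj().T @ cyclic_shift(k) @ F @ psi
    overlap = abs(np.vdot(psi, final))
    parity = "even" if overlap > 0.5 else "odd"
    print(f"shift by {k}: overlap {overlap:.1f} -> {parity} permutation")
```

One run of the circuit suffices: the overlap with the initial state is 1 for even permutations and 0 for odd ones, so a single measurement distinguishes the two groups.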

Many other interesting quantum properties, like quantum correlations (mentioned by my colleagues before) – for which we need to explore the interactions between different spin nuclei – can also be simulated using NMR setups. But this is a topic for another time…
