There has been lots of excitement – and hype – surrounding the "first photograph of a black hole". Sensible people think beyond the mindless hype, of course, and they are really asking themselves: What has actually happened? Is it important or interesting? If so, in what respect? What kind of work was hard? What information has it brought us, and what can the method bring us in the future?
I think that despite the thousands of articles in the mainstream media, these basic questions aren't being answered well – or they're not being answered at all. Let me try to clarify some of the basic facts about the big picture.
Black holes follow from Einstein's general relativity, published in 1915–1916. But the fact that the theory predicts black holes wasn't clear for quite some time. Einstein himself believed – basically up to his death – that black holes didn't exist and that his theory broke down before such objects could be realized in the real world. He was wrong about his own theory, just as he originally was about the Big Bang and gravitational waves (Einstein was initially wrong about almost everything except for the form of the equations) – by locality, Einstein's general relativity really cannot break down in such large-distance situations.
The first black hole solution was written down in 1916, by Karl Schwarzschild. A member of the German Army, he was fighting on the Eastern front. He returned home, having contracted a disease, and died within months. OK, the first solution was the Schwarzschild black hole: a spherically symmetric, i.e. non-rotating, electrically neutral black hole. But that wording already involves some rewriting of history. Black holes had been called "frozen stars" up to the 1960s. This was somewhat correlated with Einstein's and other physicists' belief that black holes were too extreme – and that the stellar matter still "had to" look like a star.
The most relevant black hole solution now is much newer: the Kerr solution (1963), produced by New Zealand's Roy Kerr, who is still around and is being celebrated by the Kiwi press now, 56 years after he wrote down the solution for the electrically neutral but rotating black hole. The M87 galaxy's black hole is basically a Kerr black hole.
Things only began to change in the 1960s, when the likes of Kerr started to work hard on black holes. The modern name was invented by John Wheeler, a conservative guy who was also great at P.R. and marketing – a somewhat rare species today, too. Clearly, his "black hole" is exactly what is needed: it is a hole (you can fall into it) and its color is black because light cannot escape from it. "Black hole" was very different from all other words in astrophysics, thereby emphasizing that it's an object very different from all the stars and other spherical things we know in the Universe.
In the same decade of the 1960s, the evidence that black holes existed started to pile up. Maybe Wheeler's better terminology was extremely helpful for that progress in the "beef" of the research, not just for the popularization. (Weinberg's First Three Minutes was a "popular" book but, as my adviser Banks has stressed, it taught quite something to the experts, too.) At any rate, people began to treat general relativity "really seriously" and trust it. So they understood that when it is treated correctly, no "extra effects" may possibly be strong enough to prevent a heavy enough star from a fatal collapse all the way down to a black hole (which has both a singularity and an event horizon). Such "singularity theorems" were proven by Penrose and Hawking between 1965 and 1970.
You know, the real point initially misunderstood by Einstein is that certain regions of spacetime may be shown to be "inside the event horizon" even though the local conditions are completely unspectacular – the density of matter may be lower than the density of water and there's nothing visible around that would tell you "you have already crossed the point of no return". For large and heavy enough black holes, the collapse of a very low-density object is enough to "cross the critical point" where the black hole is born and cannot be removed anymore (except by Hawking evaporation, a slow quantum mechanical process).
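A quick sanity check of the low-density claim: the horizon radius grows linearly with the mass, so the mean density inside the horizon falls as 1/M². A minimal Python sketch, where the 6.5-billion-solar-mass figure for the M87 black hole is an illustrative assumption:

```python
import math

# Constants in SI units (rounded)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Horizon radius r_s = 2GM/c^2 of a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

def mean_density(mass_kg):
    """Mass divided by the Euclidean volume enclosed by the horizon."""
    r = schwarzschild_radius(mass_kg)
    return mass_kg / (4 / 3 * math.pi * r**3)

M = 6.5e9 * M_SUN                # assumed mass of the M87 black hole
print(schwarzschild_radius(M))   # ~1.9e13 m, roughly 130 AU
print(mean_density(M))           # ~0.4 kg/m^3 -- far below water's 1000 kg/m^3
```

For a black hole this heavy, the average density is below that of air, let alone water.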
Growing evidence for stellar-size black holes accumulated in the subsequent decades. Gravitational lensing and the orbiting of visible objects around invisible ones were a part of the evidence. And in the 1990s, the huge black holes at galactic centers were added to the list of black holes that astrophysicists were sort of certain about. All the predictions from the black hole models worked. That has included the LIGO discovery – the "sound" of mutually orbiting and merging black holes – a few years ago.
OK, we have the first "somewhat high resolution" photograph of a black hole now. Black holes tend to be geometrically small because they're really dense. A black hole of one solar mass would be just a few miles across. It's really hard to see, then. That's why the "direct photography" focused on huge black holes such as the one in the M87 galaxy. That one has a mass of over 6 billion solar masses, so the radius is over ten billion miles (masses and radii are proportional for black holes in four dimensions).
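The proportionality of mass and radius mentioned above follows from r_s = 2GM/c², which is strictly linear in the mass. A small sketch checking the size claims, where the 6.5-billion-solar-mass value for the M87 black hole is an assumption used for illustration:

```python
# Rough check of the size claims, in miles. SI constants, rounded.
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30
METERS_PER_MILE = 1609.34

def r_s_miles(mass_in_suns):
    """Schwarzschild radius in miles; strictly linear in the mass."""
    return 2 * G * (mass_in_suns * M_SUN) / c**2 / METERS_PER_MILE

print(r_s_miles(1))                      # ~1.8 miles: a solar-mass hole is a few miles across
print(r_s_miles(6.5e9))                  # ~1.2e10 miles for the M87 black hole
print(r_s_miles(6.5e9) / r_s_miles(1))   # the ratio equals the mass ratio, 6.5e9
```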
The M87 black hole was observed first, before Sgr A* in our galaxy, because the image is "more stable". They will hopefully add "our huge Milky Way black hole" but some extra tricks to "stabilize the image" may be needed. OK, I am slowly getting to the point – what was actually hard and impressive about that experiment.
Almost all the hard work is about the construction of a photograph from several "cameras".
In February, I discussed new Nokia phones. They include the Nokia 9 PureView, a phone that has 5 cameras. Much of the sophistication is in the software that takes the images from these 5 back cameras on the phone's body and combines them into a high-resolution photograph with some extra information about depth layers and other things, if needed.
Now, the Nokia 9 PureView phone is expected to take pictures of landscapes, people, and flowers, and those look very different from a "photograph of a black hole". But much of the work needed to get these photographs is completely analogous. It's about the manipulation of complementary information from several cameras, in order to get as fine a resulting image as possible.
The Nokia 9 PureView has 5 back cameras; the Event Horizon Telescope is composed of 8 "cameras" – telescopes positioned all over the Earth, at high altitudes. Although the 5 Nokia cameras are much closer to each other than the 8 Event Horizon Telescope dishes, these numbers 5 and 8 really are analogous, and what is being done with the images from these 5 or 8 cameras is also qualitatively analogous. This is the right way to think about it. Most of the steps that were done were analogous to the software operating in the Nokia. So the people – like the celebrated MIT-trained coder Katie Bouman – are actually not "physicists", let alone "astrophysicists", at all. They are engineers who work on imaging, completely analogous to those who work on the Nokia cameras.
OK, the 8 telescopes were collecting videos of the region in the 1-millimeter microwave range. And they accumulated lots of data in these movies – because even the phase of all the light had to be remembered. The total amount of data was huge: 5 petabytes. "Peta" is related to "penta", the Greek word for "five", just like "tera" is related to "tetra", "four". In the prefixes, they stand for the fifth and fourth powers of one thousand, respectively. So 5 petabytes is simply 5,000 terabytes (or 5,120, if you count in the binary units). A terabyte is a unit you may know because hard disks are often this large.
OK, they had 5 petabytes of moving pictures from the 8 cameras or so – the equivalent of about 5,000 one-terabyte hard drives. This amount of data is easier to transfer on hard disks on airplanes than through cables. The fastest connection you can have at "home" is below something like a gigabyte per second – at that rate you would still need to wait for 5 million seconds, about two months, and the airplane is just faster.
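The transfer-time arithmetic can be spelled out explicitly; the 1 GB/s link is the hypothetical fast "home" connection from the text:

```python
# Network transfer vs. flying the disks: a back-of-the-envelope comparison.
PETABYTE = 1e15            # bytes (decimal prefix)
data_bytes = 5 * PETABYTE  # the raw dataset
link_rate = 1e9            # bytes per second (an optimistic 1 GB/s)

seconds = data_bytes / link_rate
print(seconds)             # 5,000,000 seconds
print(seconds / 86400)     # ~58 days -- a courier with hard drives wins easily
```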
Now they needed to combine them. The idea is that the 8 feeds are assumed to be representative of the whole image on a huge, Earth-sized camera. You want to imagine that these signals are "reflected" from the 8 places on the Earth to a focus, just like in a regular telescope with mirrors, and you want to reconstruct the "image that you would get from the Earth-sized telescope" as accurately as possible.
So some correlations have to be found between the images from the 8 cameras. Once you find the right timing, delays etc. to "focus" on the same black hole, most of the raw data becomes useless and the dataset shrinks from 5 petabytes to 5 terabytes, about 1/1,000 of the original amount. Another fancy technical step, "fringe fitting", was applied. The images are shifted so that what should ideally be compared is compared. The dataset is reduced by another factor of 1/10,000, i.e. to 0.5 gigabytes. And then they're combined, more or less directly, in the "imaging" step, another reduction by 1/1,000. A half-megabyte picture file is produced. Well, I think you don't lose much if you compress it to a 20 kB JPG file...
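The chain of reductions described above can be tallied step by step; the stage names and factors are the approximate ones quoted in the text:

```python
# Follow the dataset from the raw 5 petabytes down to the final image.
size = 5e15  # bytes
pipeline = [
    ("correlation",    1 / 1_000),   # align timings/delays -> ~5 TB
    ("fringe fitting", 1 / 10_000),  # -> ~0.5 GB
    ("imaging",        1 / 1_000),   # -> ~0.5 MB picture file
]
for stage, factor in pipeline:
    size *= factor
    print(f"after {stage}: {size:.3g} bytes")
```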
As you can see, I do believe that the phase of the microwave radiation is actually being taken into account. Some interference is therefore done in the process of the photograph's creation – but the interference isn't done automatically, as by a double slit in a double-slit experiment, but by actual computers that add the amplitudes from the very raw data from the 8 telescopes. But all these steps are just meant to emulate the virtues of a "huge telescope", assuming that the images from the 8 "small" telescopes carry information that more or less replaces a "huge telescope".
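The interference done "by computers" can be illustrated with a toy model: two telescopes record the same monochromatic wave with a relative geometric delay, and multiplying one record by the complex conjugate of the other (and averaging) recovers the relative phase – the kind of quantity that VLBI imaging reconstructs the picture from. All numbers below are illustrative, not actual EHT parameters:

```python
import cmath

freq = 230e9         # observing frequency in Hz (the ~1.3 mm band)
delay = 2.0e-12      # assumed geometric delay between the two sites, seconds
n, dt = 1000, 1e-14  # number of samples and sampling step

# Complex signal as recorded at the two sites; site b sees the wave later.
a = [cmath.exp(2j * cmath.pi * freq * k * dt) for k in range(n)]
b = [cmath.exp(2j * cmath.pi * freq * (k * dt - delay)) for k in range(n)]

# Correlation: the average of a * conj(b). Its argument is the phase
# offset 2*pi*freq*delay (mod 2*pi) -- here ~2.89 radians.
vis = sum(x * y.conjugate() for x, y in zip(a, b)) / n
print(cmath.phase(vis))
print((2 * cmath.pi * freq * delay) % (2 * cmath.pi))
```

A real correlator does this over noisy, wide-band data and many baselines, but the principle is this multiplication of amplitudes.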
In principle, it's just some technicalities to obtain as sharp a photograph from some inputs as you can get. I would surely say that the picture of the black hole is a photograph (the orange color is a fabrication made for our eyes' comfort; the original "color" is 1-mm microwave radiation, which our eyes don't see). We are also unambiguous when we call the products of the Nokia 9 PureView "photographs", despite all the processing – they are really meant to "look the same" as the images from an old-fashioned, huge, high-resolution (maybe even analog) camera that takes great pictures by "brute force". So of course those are photographs and the situations are analogous!
To summarize, there are two very different portions of the "event" involving the event horizon:
* How it was done – and it was done by "engineers" who manipulate images from cameras.
* What it means for astronomy, astrophysics, and physics in general.
Most of the employees belong to the first cluster; they're "engineers in photography". They just chose a black hole instead of a flower as the object of the photograph. Is it important physically? Well, it's fun for physicists to see a photograph of a black hole. But all qualitative features of that photograph agree with the expectations. Of course physicists have known that the black holes are black holes, even without a direct photograph. They knew that the picture would vaguely resemble that of a solar eclipse: a black disk inside a brighter ring. (Although I was confused for a while – I thought that they could focus on a much smaller part of the event horizon and see more details there.) There has been a huge amount of indirect evidence to be near-certain that our theories that also predict the black holes simply work right.
The physicists' situation is analogous to that of a lover of flowers who often goes to a tropical forest and looks at some flower, smells it, observes it – but he never takes his camera with him. Now he has taken a camera for the first time and photographed the flower. It's the first photograph of the flower, but the flower lover has actually known what the flower looked like all along. How much can he learn? Maybe something, if there are some details in the picture... But the M87 black hole picture doesn't contain too many details you could look at for hours.
And if some parameters may be extracted from the picture, they're parameters that specifically refer to one particular object only. In particular, that black hole isn't precisely spherical, so the shape isn't a "precise circle", even if you make the image very sharp. It's a Kerr black hole, so the silhouette is roughly an ellipse. There are hundreds of billions of galaxies; one of them has a very large black hole inside, and that black hole has some mass and angular momentum that we may now extract from a photograph. Great. But that's clearly not enough information to teach us something about fundamental physics.
It's plausible that a huge number of such pictures, if they became easy to take, could tell us something – through the statistical distributions of the masses etc. Also, the methodology could be used to take microwave photographs of more mundane objects in the Universe that could turn out to be more useful or interesting for some reason. If your camera can take a picture of the flower in the tropical forest, it may also take a picture of a tiger. Maybe the tiger is more interesting than the flower.
So I think it's not right when the physics is conflated with the engineering too much. A photograph of an object that is really, really far away – 55 million light years – has been taken. The hard work behind that victory is mostly the work done by photography engineers. It has an obvious link to astrophysics and relativistic physics because the object of the photograph is a black hole. But just because someone takes a picture – using a new photographic technology – of an object from the discipline XY doesn't mean that the discipline XY has made huge progress. I think that there's been virtually no progress in our understanding of black holes and similar things. You don't become a top biologist if you take a beautiful picture of a tulip, either.
Now, one should think about the question of how much money should go to similar things. Again, one must realize what the actual motivation for such hypothetical future funding is. The first photograph of a black hole – or two, or a couple – is interesting just for fun, like men on the Moon. But should we pay billions of dollars every year to take photographs of random black holes or other objects? I doubt it's rational. We should have a reason. We clearly have the capability to take "this new kind of photographs". But that doesn't mean we have to spend billions of dollars on applying this capability – unless we have some real reasons, some idea of what it is good for.
I think that this is a great representative of a "possible investment competing with the particle accelerators". If we wished, the extra spending could increase to tens of billions per decade – and become as huge as the whole collider industry or larger. We could produce thousands of such photographs each year, but we would need to increase the spending, build new telescopes etc. Is it worth it? Without some detailed proposals of what could be observed, I think that the answer is clearly No. The diminishing returns kick in. We already have a huge excess of "assorted pictures from the Universe", I think.
The highest-energy particle colliders look at previously unexplored phenomena in Nature. A camera is just taking a picture of something in the Universe. It can be a finer picture or a picture of a more distant object. But these objects are still analogous to the ones we have seen before – and we were more or less satisfied with the resolution, too. Similar objects that are far away are likely to be analogous.
If you get my point, we should still distinguish things that are "really new" from those that are "just gradual improvements of an engineering type". Looking at phenomena or new particles that hadn't been produced before is "really new". A higher-resolution camera at slightly different wavelengths is a "gradual improvement of an engineering type", unless something genuinely new is seen. Science that makes real progress simply must do things that have a reasonable potential to be "really new". Ever finer pictures of rather well-known things are unlikely to lead to any paradigm shift.
The first black hole photograph is fun and is a testimony to the prowess of the imaging engineers. The advances that were needed were of the engineering type, more so than in the case of LIGO, which was a qualitatively new method to observe objects (mostly the same class of things – black holes). Even the very classification of this achievement as "physics" is rather misleading. The photograph manipulation isn't physics, at least not "fundamental physics", whether it is done in Nokia/HMD/Zeiss or in the EHT Collaboration.