A machine learning algorithm points to gaps in the mathematical theory used to interpret gravitational microlensing.
Artificial intelligence (AI) systems trained on real astronomical observations now outperform astronomers in sifting through massive amounts of data to find new supernovae, identify new types of galaxies, and detect mergers of massive stars, increasing the rate of new discovery in the world’s oldest science.
But a type of artificial intelligence called machine learning can reveal something deeper, University of California, Berkeley, astronomers found: unsuspected connections hidden in the complex mathematics arising from general relativity — in particular, how that theory is applied to finding new planets around other stars.
In a paper published on May 23, 2022, in the journal Nature Astronomy, the researchers describe how an AI algorithm developed to more quickly detect exoplanets when such planetary systems pass in front of a background star and briefly brighten it — a process known as gravitational microlensing — revealed that the decades-old theories now used to explain these observations are woefully incomplete.
In 1936, Albert Einstein himself used his new theory of general relativity to show how the light from a distant star can be bent by the gravity of a foreground star, not only brightening it as seen from Earth, but often splitting it into several points of light or distorting it into a ring, now called an Einstein ring. This is similar to the way a hand lens can focus and intensify light from the sun.
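The scale of that ring follows from a standard textbook relation (not something derived in the new paper): for a point lens of mass M, with observer-to-lens, observer-to-source, and lens-to-source distances D_L, D_S, and D_LS, the angular Einstein radius is

\theta_E = \sqrt{ \frac{4GM}{c^2} \, \frac{D_{LS}}{D_L D_S} },

which for a Sun-like star roughly halfway to the Galactic bulge comes out to about a milliarcsecond, far too small for telescopes to resolve. That is why microlensing is detected as a temporary brightening rather than as a resolved ring.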
But when the foreground object is a star with a planet, the brightening over time — the light curve — is more complicated. What’s more, there are often multiple planetary orbits that can explain a given light curve equally well — so-called degeneracies. That’s where humans simplified the math and missed the bigger picture.
The AI algorithm, however, pointed to a mathematical way to unify the two major kinds of degeneracy in interpreting what telescopes detect during microlensing, showing that the two “theories” are really special cases of a broader theory that, the researchers admit, is likely still incomplete.
“A machine learning inference algorithm we previously developed led us to discover something new and fundamental about the equations that govern the general relativistic effect of light-bending by two massive bodies,” Joshua Bloom wrote in a blog post last year when he uploaded the paper to a preprint server, arXiv. Bloom is a UC Berkeley professor of astronomy and chair of the department.
He compared the discovery by UC Berkeley graduate student Keming Zhang to connections that Google’s AI team, DeepMind, recently made between two different areas of mathematics. Taken together, these examples show that AI systems can reveal fundamental associations that humans miss.
“I argue that they constitute one of the first, if not the first time that AI has been used to directly yield new theoretical insight in math and astronomy,” Bloom said. “Just as Steve Jobs suggested computers could be the bicycles of the mind, we’ve been seeking an AI framework to serve as an intellectual rocket ship for scientists.”
“This is kind of a milestone in AI and machine learning,” emphasized co-author Scott Gaudi, a professor of astronomy at The Ohio State University and one of the pioneers of using gravitational microlensing to discover exoplanets. “Keming’s machine learning algorithm uncovered this degeneracy that had been missed by experts in the field toiling with data for decades. This is suggestive of how research is going to go in the future when it is aided by machine learning, which is really exciting.”
Discovering exoplanets with microlensing
More than 5,000 exoplanets, or extrasolar planets, have been discovered around stars in the Milky Way, though few have actually been seen through a telescope — they are too dim. Most have been detected because they create a Doppler wobble in the motions of their host stars or because they slightly dim the light from the host star when they cross in front of it — transits that were the focus of NASA’s Kepler mission. Little more than 100 have been discovered by a third technique, microlensing.
One of the main goals of NASA’s Nancy Grace Roman Space Telescope, scheduled to launch by 2027, is to discover thousands more exoplanets via microlensing. The technique has an advantage over the Doppler and transit techniques in that it can detect lower-mass planets, including those the size of Earth, that are far from their stars, at a distance equivalent to that of Jupiter or Saturn in our solar system.
Bloom, Zhang and their colleagues set out two years ago to develop an AI algorithm to analyze microlensing data faster to determine the stellar and planetary masses of these planetary systems and the distances the planets are orbiting from their stars. Such an algorithm would speed analysis of the likely hundreds of thousands of events the Roman telescope will detect in order to find the 1% or fewer that are caused by exoplanetary systems.
One problem astronomers encounter, however, is that the observed signal can be ambiguous. When a lone foreground star passes in front of a background star, the brightness of the background star rises smoothly to a peak and then drops symmetrically to its original level. It’s easy to understand mathematically and observationally.
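For concreteness, here is a minimal sketch, not code from the study, of that single-lens signal: the standard point-source, point-lens magnification curve, where u0, t0 and tE are the conventional impact parameter, time of peak and Einstein-radius crossing time.

```python
import numpy as np

def single_lens_magnification(t, t0=0.0, u0=0.1, tE=20.0):
    """Standard point-source, point-lens (Paczynski) magnification.

    t  : array of times (days)
    t0 : time of closest approach (days)
    u0 : impact parameter in units of the Einstein radius
    tE : Einstein-radius crossing time (days)
    """
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)      # lens-source separation
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))    # magnification A(u)

t = np.linspace(-60.0, 60.0, 601)
A = single_lens_magnification(t)
print(f"peak magnification: {A.max():.1f}")  # smooth, symmetric peak at t = t0
```

Plotting A against t reproduces the smooth, symmetric rise and fall described above; a smaller u0 gives a higher peak.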
But if the foreground star has a planet, the planet creates a separate brightness peak within the peak caused by the star. When trying to reconstruct the orbital configuration of the exoplanet that produced the signal, general relativity often allows two or more so-called degenerate solutions, all of which can explain the observations.
To date, astronomers have generally dealt with these degeneracies in simplistic and artificially distinct ways, Gaudi said. If the distant starlight passes close to the foreground star, the observations could be interpreted either as a wide or a close orbit for the planet — an ambiguity astronomers can often resolve with other data. A second type of degeneracy occurs when the background starlight passes close to the planet. In this case, however, the two different solutions for the planetary orbit are generally only slightly different.
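For context, the first of these ambiguities is the textbook “close-wide” degeneracy (standard two-body microlensing theory, not the new result): a planet at projected separation s from its host, measured in units of the Einstein radius, perturbs the central caustic in nearly the same way as a planet at separation 1/s,

(s, q) \;\longleftrightarrow\; (s^{-1}, q),

with the planet-to-star mass ratio q essentially unchanged, which is why a close orbit and a wide orbit can fit the same light curve, as described above.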
According to Gaudi, these two simplifications of two-body gravitational microlensing are usually sufficient to determine the true masses and orbital distances. In fact, in a paper published last year, Zhang, Bloom, Gaudi, and two other UC Berkeley co-authors, astronomy professor Jessica Lu and graduate student Casey Lam, described a new AI algorithm that does not rely on knowledge of these interpretations at all. The algorithm greatly accelerates analysis of microlensing observations, providing results in milliseconds, rather than days, and drastically reducing the computer crunching.
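The published method is a neural density estimator that returns full posterior distributions; the sketch below is not that architecture, only a toy illustration of the underlying idea of amortized inference. A small 1D convolutional network (all names here are hypothetical) is trained on simulated light curves, after which characterizing a new event takes a single forward pass instead of a lengthy fit.

```python
import torch
import torch.nn as nn

# Toy illustration only -- not the architecture from Zhang et al. (2021).
# A small 1D CNN maps a light curve (magnification vs. time) to point
# estimates of a few hypothetical microlensing parameters (e.g. u0, tE, log q).
class LightCurveRegressor(nn.Module):
    def __init__(self, n_params=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),           # fixed-size summary of the curve
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8, 64), nn.ReLU(),
            nn.Linear(64, n_params),
        )

    def forward(self, x):                      # x: (batch, 1, n_time_points)
        return self.head(self.features(x))

# After training on simulated events, inference is a single forward pass,
# which is what makes this style of analysis fast per light curve.
model = LightCurveRegressor()
fake_batch = torch.randn(4, 1, 500)            # stand-in for simulated light curves
print(model(fake_batch).shape)                 # torch.Size([4, 3])
```

In the real pipeline the network would output a probability distribution over the lens parameters rather than point estimates; the speedup comes from moving the expensive computation into a one-time training stage on simulated events.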
Zhang then tested the new AI algorithm on microlensing light curves from hundreds of possible orbital configurations of star and exoplanet and discovered something unusual: There were other ambiguities that the two interpretations did not account for. He concluded that the commonly used interpretations of microlensing were, in fact, just special cases of a broader theory that explains the full variety of ambiguities in microlensing events.
“The two previous theories of degeneracy deal with cases where the background star appears to pass close to the foreground star or the foreground planet,” Zhang said. “The AI algorithm showed us hundreds of examples from not only these two cases, but also situations where the [background] star doesn’t pass close to either the [foreground] star or planet and cannot be explained by either previous theory. That was key to us proposing the new unifying theory.”
Gaudi was skeptical, at first, but came around after Zhang produced many examples where the previous two theories did not fit observations and the new theory did. Zhang actually looked at the data from two dozen previous papers that reported the discovery of exoplanets through microlensing and found that, in all cases, the new theory fit the data better than the previous theories.
“People were seeing these microlensing events, which actually were exhibiting this new degeneracy but just didn’t realize it,” Gaudi said. “It was really just the machine learning looking at thousands of events where it became impossible to miss.”
Zhang and Gaudi have submitted a new paper that rigorously describes the new mathematics based on general relativity and explores the theory in microlensing situations where more than one exoplanet orbits a star.
The new theory technically makes interpretation of microlensing observations more ambiguous, since there are more degenerate solutions to describe the observations. But the theory also demonstrates clearly that observing the same microlensing event from two perspectives — from Earth and from the orbit of the Roman Space Telescope, for example — will make it easier to settle on the correct orbits and masses. That is what astronomers currently plan to do, Gaudi said.
“The AI suggested a way to look at the lens equation in a new light and uncover something really deep about the mathematics of it,” said Bloom. “AI is sort of emerging as not just this kind of blunt tool that’s in our toolbox, but as something that’s actually quite clever. Alongside an expert like Keming, the two were able to do something pretty fundamental.”
Reference: “A ubiquitous unifying degeneracy in two-body microlensing systems” by Keming Zhang, B. Scott Gaudi and Joshua S. Bloom, 23 May 2022, Nature Astronomy.
DOI: 10.1038/s41550-022-01671-6