Images From James Webb Space Telescope to Be Analyzed by AI
The unveiling of the first full-color image from the James Webb Space Telescope is already astounding and delighting humans around the globe. But humans aren’t the only audience for these images.
Data is also being soaked up by a new generation of GPU-accelerated AI created at the University of California, Santa Cruz. And Morpheus, as the team at UC Santa Cruz has dubbed the AI, won’t just be helping humans make sense of what we’re seeing. It will also use images from the $10 billion space telescope to better understand what it’s looking at.
The image released by the National Aeronautics and Space Administration represents the deepest and sharpest infrared image of the distant universe to date. Dubbed “Webb’s First Deep Field,” the image of galaxy cluster SMACS 0723 is overflowing with detail. NASA reported that thousands of galaxies, including the faintest objects ever observed in the infrared, have appeared in Webb’s view for the first time. And this image represents just a tiny piece of what’s out there, covering a patch of sky roughly the size of a grain of sand held at arm’s length by someone on the ground, explained NASA Administrator Bill Nelson.
The telescope’s array of 18 interlocking hexagonal mirrors, which spans a total of 21 feet 4 inches, is peering farther into the universe, and deeper into the universe’s past, than any tool to date. “We are going to be able to answer questions that we don’t even know what the questions are yet,” Nelson said. “The telescope won’t just see back further in time than any scientific instrument, almost to the beginning of the universe, it may also help us see if planets outside our solar system are habitable,” added Nelson.
Morpheus, which played a key role in helping scientists understand images taken by NASA’s Hubble Space Telescope, will help scientists ask and answer these questions. Eventually, Morpheus will be using the images to learn, too. It is trained on UC Santa Cruz’s Lux supercomputer, a machine that includes 28 GPU nodes with two NVIDIA V100 Tensor Core GPUs each. In other words, while we’re all feasting our eyes on these images for years to come, scientists will be feeding data to AI.