How Machine Vision Powers the James Webb Telescope

When the James Webb Space Telescope released its first images in mid-July 2022, the world was amazed by photographs of the cosmos in unprecedented detail. NASA shared images that showed us famous nebulae as we'd never seen them before, Jupiter in a shocking new light, and glimpses of the early universe. Like the Hubble Space Telescope before it, the James Webb is able to do this work in part thanks to an amazing technology at its heart: machine vision.

The telescope's mirror array is complex and impressive: eighteen interlocking hexagonal segments spanning 21 feet 4 inches across. But what makes those images possible is an even more impressive technology: machine vision. Machine vision refers to systems that take the visual data captured by machines, which can span the entire light spectrum rather than just the visible range humans are limited to, and make sense of that data by applying machine learning algorithms such as deep learning or sparse modeling. Both methods rely on complex statistical models to "train" the system, deep learning on large libraries of comparative sources and sparse modeling on much smaller ones.
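The idea of training a system against a library of comparative sources can be sketched in miniature. The example below is a hypothetical nearest-neighbor classifier written for illustration only; it is not the JWST pipeline, and the tiny four-pixel "images" are invented for the sketch:

```python
import numpy as np

def classify_nearest(library, labels, sample):
    """Label a new observation by its closest match in a reference library.

    library: (n, d) array of flattened reference images
    labels:  list of n class names
    sample:  (d,) array, the new observation to classify
    """
    # Euclidean distance from the sample to every reference image
    distances = np.linalg.norm(library - sample, axis=1)
    return labels[int(np.argmin(distances))]

# Toy "library" of 4-pixel reference images (purely illustrative)
library = np.array([
    [0.9, 0.8, 0.9, 0.8],   # a bright source
    [0.1, 0.2, 0.1, 0.1],   # empty background
])
labels = ["source", "background"]

print(classify_nearest(library, labels, np.array([0.85, 0.9, 0.8, 0.9])))
```

A larger library gives the system more comparative examples to match against, which is the intuition behind training deep learning models on big labeled datasets.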

Deep learning algorithms such as Morpheus, designed by Brant Robertson and Ryan Hausen of the University of California, Santa Cruz, take the raw data collected by the James Webb and apply a framework that classifies objects such as galaxies and nebulae. This technology helps us find more objects and better understand what we're seeing. For example, algorithms like Morpheus can help us recognize when we're looking at an exoplanet, and then help sort data from an advanced telescope like the James Webb to tell us about that planet's atmosphere. Never has so much of the universe been open to our study. Over the course of the James Webb project, scientists hope to study half a million galaxies with near-infrared imaging and thirty-two thousand galaxies with mid-infrared imaging, a measure of the scope and speed at which the JWST is expected to operate.
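Morpheus itself classifies imagery pixel by pixel with a deep neural network. As a loose, hypothetical illustration of the underlying idea of separating objects from empty sky, the sketch below flags pixels that stand out from an estimated background level; the real system is far more sophisticated, and the threshold rule here is an assumption made for the example:

```python
import numpy as np

def segment_sources(image, n_sigma=3.0):
    """Mark pixels that stand out from the sky background.

    A crude stand-in for pixel-level classification: estimate the
    background level and noise across the frame, then flag pixels
    more than n_sigma above the background as belonging to a source.
    """
    background = np.median(image)
    noise = np.std(image)
    return image > background + n_sigma * noise

# Toy 5x5 frame: faint sky with one bright "galaxy" in the center
image = np.full((5, 5), 0.1)
image[2, 2] = 5.0
mask = segment_sources(image)
print(int(mask.sum()))  # number of pixels flagged as source
```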

Finding objects is not the only use algorithms are put to, however. They also sharpen the images we capture. The heavily pixelated images the cameras produce as raw output are easy to find. Machine learning algorithms are then applied to these images to turn them into the exciting photos we see in the news, drawing on vast libraries of existing imagery and natural photography to sharpen and add depth to them automatically. These processes turn images that only a machine could love into snapshots of startling beauty.
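As a simple illustration of what "sharpening" means computationally, the sketch below applies a classic unsharp mask. This is a conventional image-processing technique chosen for the example, not the specific machine learning pipeline used on Webb imagery:

```python
import numpy as np

def unsharp_mask(image, amount=1.0):
    """Sharpen an image by boosting the detail lost to blurring.

    Blur the image with a simple 3x3 box filter, subtract the blur
    to isolate fine detail, then add that detail back, amplified.
    """
    padded = np.pad(image, 1, mode="edge")
    # 3x3 box blur built from shifted sums (no external dependencies)
    blurred = sum(
        padded[i:i + image.shape[0], j:j + image.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return image + amount * (image - blurred)
```

A flat region passes through unchanged, while a bright point against a dark field comes out brighter than it went in; that exaggeration of local contrast is what makes edges and fine structure pop.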

The James Webb Telescope is an extreme example of how machine vision enables us to see things that were previously invisible to us. But machine vision is not limited to projects of this scale.

At Phase One Technology, we’re proud to offer our expertise to anyone interested in using machine vision in their aerospace applications.