The human eye is an amazing biological camera. It’s equipped with a lens that focuses in an instant and a sensor that accommodates everything from the faintest to the most intense lighting. But an even more wondrous organ complements the eye: the human brain.
Engineers have created highly sophisticated, compact cameras. But building a computer that can make sense of an image, classify it, make predictions about it, and then decide on actions related to it has been a challenging feat, especially when packing that much capability into a compact form factor. In-sensor computing could change everything for machine vision.
What Is Machine Vision?
Before we talk about in-sensor computing, let’s discuss machine vision and what it means for automation systems. When robots were first deployed, simple programming allowed them to perform the same task over and over. This worked well for manufacturing identical components with little or no variation.
Newer applications for robotics and other automation called for systems that could adapt to the components or the environment. Engineers found that by adding a camera, computer hardware, and image recognition software, automation solutions could “see” what they were doing and then adapt as needed. The sketch below outlines that basic capture-classify-act loop.
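Here is a minimal Python sketch of that conventional machine vision pipeline, where the camera only captures and all the intelligence lives in separate processing hardware. The capture_frame, classify, and act functions are hypothetical stand-ins, not a real camera driver or trained model:

```python
import numpy as np

def capture_frame() -> np.ndarray:
    """Stand-in for a camera driver: returns an 8-bit grayscale frame."""
    return np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

def classify(frame: np.ndarray) -> str:
    """Stand-in for image-recognition software on a vision workstation.
    A real system would run a trained model; this just thresholds brightness."""
    return "part_present" if frame.mean() > 127 else "empty_fixture"

def act(label: str) -> None:
    """Stand-in for the robot controller adapting to what was seen."""
    print(f"Robot action for: {label}")

frame = capture_frame()   # 1. the camera captures the scene
label = classify(frame)   # 2. the full image is shipped off and classified
act(label)                # 3. the automation system adapts accordingly
```

Note that step 2 requires moving every frame, in full, from the camera to the processor. That transfer is exactly what in-sensor computing aims to eliminate.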
What Is In-Sensor Computing?
For many years, the camera and the processing in machine vision systems were separate. But as integrators found more applications for machine vision technology, they ran into additional challenges. One major challenge was space: not every location has room for a computer vision workstation. And when the computing power was moved elsewhere, connection speed became an issue.
Now, a research team at the Vienna University of Technology is developing a way to boost the speed of machine vision by adding computing capability to the sensor that captures the image. The technology works by mimicking the actions of neurons in the human brain.
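One widely cited approach to in-sensor computing encodes neural-network weights directly in the sensor itself, for example as tunable photodiode responsivities, so that the photocurrents produced during capture already carry the network’s output. The NumPy simulation below illustrates that general idea only; the sizes and weight values are illustrative assumptions, not the Vienna team’s actual design:

```python
import numpy as np

# Illustration: a tiny "intelligent" photodiode array in which each
# pixel's light responsivity acts as a neural-network weight. Summing
# the photocurrents per output channel computes a linear classifier at
# the moment the image is captured -- no separate transfer or digital
# processing step.

rng = np.random.default_rng(0)

PIXELS = 9    # a 3x3 sensor, flattened
CLASSES = 3   # number of output "neurons"

# Tunable responsivities play the role of trained weights (hypothetical values).
responsivity = rng.normal(size=(CLASSES, PIXELS))

# Incident light intensity on each pixel: this is the "image".
light = rng.uniform(size=PIXELS)

# Each output channel's photocurrent is a weighted sum of pixel intensities,
# so the sensor produces the network's outputs as a by-product of sensing.
photocurrents = responsivity @ light
print("Predicted class:", int(np.argmax(photocurrents)))
```

Because the classification emerges from the physics of sensing rather than from a downstream processor, the result is available essentially as fast as the light arrives.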
How In-Sensor Computing Is Changing Machine Vision
By adding in-sensor computing capabilities to the camera, the system skips the step of sending a complete image to the computer vision software for processing. Transferring high-resolution images often consumes the bulk of the bandwidth and time a machine vision system needs, as the rough numbers below illustrate.
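For a sense of scale, here is a back-of-the-envelope comparison. The figures are illustrative assumptions, not benchmarks:

```python
# Shipping full frames to a vision workstation vs. sending only a result.
width, height = 4000, 3000    # a 12-megapixel sensor (assumed)
bytes_per_pixel = 1           # 8-bit monochrome (assumed)
fps = 30                      # frames per second (assumed)

frame_bytes = width * height * bytes_per_pixel   # 12,000,000 bytes per frame
link_rate = frame_bytes * fps                    # 360,000,000 bytes per second
print(f"Raw image link: {link_rate / 1e6:.0f} MB/s")

result_bytes = 8              # e.g., a class label plus a confidence value
print(f"In-sensor result link: {result_bytes * fps} bytes/s")
```

Under these assumptions, streaming raw frames demands roughly 360 MB per second, while reporting only a classification result needs a few hundred bytes per second.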
If the camera’s sensor can figure out what it is seeing as it captures the image, decisions can be made without relying on cloud connectivity. Even when some information must be sent to the cloud for further processing, it is already in a compact digital form, saving time and computing power. Sensors that can process their own data could open new possibilities for machine vision in driverless vehicles and other wireless industrial applications.
Need a camera for your machine vision application? Let the experts at Phase 1 Technology help you pick the right smart camera for the job.