Facebook Open Sources Machine Vision AI Code

When it comes to advanced artificial intelligence research, certain large institutions have major advantages – think top research universities and Fortune 100 companies in the technology space.

But one of the most influential players in AI is taking steps to level the playing field: Facebook.

Facebook has been involved in advanced AI research for years through its AI lab, Facebook AI Research (FAIR). Some of its advances in machine learning have already been used to enhance security and user experience on the ubiquitous social media site.

Perhaps not surprisingly, machine vision is one of Facebook’s top research areas. When AI can identify and process the details of a photo, it becomes possible to provide new accessibility features, more effective ads, and enhanced safety across the suite of Facebook services.

To make it happen, Facebook has developed cutting-edge machine vision tools.

Now, those tools are available to everyone.

Facebook’s Release Means Next-Generation Machine Vision Is “Open Source”

In August, Facebook released code for some of its top machine vision and AI tools.

Much of the number-crunching going on behind the scenes at Facebook isn't focused solely on identifying specific objects within an image, but on mapping those objects and their details in relation to one another – a challenge that’s central to, for example, autonomous driving.

The cornerstones of Facebook’s most recent release include:

DeepMask – a segmentation framework that generates an initial mask for each object it finds in an image;

SharpMask – which refines DeepMask’s masks so their boundaries line up with the objects themselves;

MultiPathNet – which labels and classifies each object the masks delineate (sketched in the pipeline below).
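
Taken together, the three tools form a staged pipeline: propose a coarse mask for each object, sharpen those masks, then put a label on whatever each mask outlines. The sketch below illustrates that flow in Python. Everything in it – the function names, the data types, the structure – is a hypothetical placeholder for illustration only; Facebook’s actual released code is built on the Torch framework and uses none of these names.

# Hypothetical sketch of the DeepMask -> SharpMask -> MultiPathNet flow.
# All names below are illustrative placeholders, not Facebook's API;
# the released implementations are Torch code.

from dataclasses import dataclass
from typing import List

@dataclass
class Mask:
    pixels: List[List[bool]]  # binary per-pixel mask over the image
    score: float              # "objectness" confidence for the proposal

@dataclass
class LabeledObject:
    mask: Mask
    label: str                # e.g. "person", "sheep"
    confidence: float

def propose_masks(image) -> List[Mask]:
    """Stage 1 (DeepMask's job): propose a coarse binary mask for
    every candidate object in the image."""
    raise NotImplementedError  # stand-in for the segmentation model

def refine_mask(image, mask: Mask) -> Mask:
    """Stage 2 (SharpMask's job): sharpen a coarse mask so its
    boundary follows the actual edges of the object."""
    raise NotImplementedError  # stand-in for the refinement model

def classify_object(image, mask: Mask) -> LabeledObject:
    """Stage 3 (MultiPathNet's job): decide what the object
    delineated by a refined mask actually is."""
    raise NotImplementedError  # stand-in for the classifier

def segment_and_label(image) -> List[LabeledObject]:
    """Run the full pipeline: propose, refine, classify."""
    coarse = propose_masks(image)
    refined = [refine_mask(image, m) for m in coarse]
    return [classify_object(image, m) for m in refined]

The staged structure is the notable design choice: because each stage is a separate model, researchers can improve or swap out one piece – say, a better mask refiner – without rebuilding the entire system.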

This isn’t the first time Facebook has made potentially revolutionary tools public. Just last year, it open-sourced AI tools for “deep learning” that improve the training of neural networks.

The latest release may save machine vision experts years of coding work.

Looking for machine vision components? Talk to the experts at Phase1Vision.com.