In a lengthy blog post about Facebook’s ten-year plan to “accelerate innovation and power new experiences with AI,” chief technology officer Mike Schroepfer mentioned that one update coming to the company’s mobile apps will be a “style transfer” tool that turns normal photos and videos into works of art using “high efficiency neural networks,” all running directly on iOS and Android smartphones.
Facebook’s impending update will, according to Schroepfer, run entirely on the user’s smartphone rather than sending content to remote servers, an approach that avoids the long load times that frustrate users. The CTO described this as the most demanding and “technically difficult” hurdle to clear in adding the feature to the company’s mobile apps, but Schroepfer said the company has done just that, and the result is a deep learning platform called “Caffe2Go.”
Schroepfer wrote: “Just three months ago we set out to do something nobody else had done before: ship AI-based style transfer running live, in real time, on mobile devices. This was a major engineering challenge, as we needed to design software that could run high-powered computing operations on a device with unique resource constraints in areas like power, memory and compute capability. The result is Caffe2Go, a new deep learning platform that can capture, analyze and process pixels in real time on a mobile device.

“We found that by condensing the size of the AI model used to process images and videos by 100x, we’re able to run deep neural networks with high efficiency on both iOS and Android. This is all happening in the palm of your hand, so you can apply styles to videos as you’re taking them.”
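Facebook has not published Caffe2Go’s internals in this post, but real-time style transfer of this kind is typically done with a small feed-forward “transform” network applied to each camera frame. As a rough illustration only, the sketch below shows one conv → instance-norm → ReLU block of such a network in plain NumPy (all names and shapes here are hypothetical, not Facebook’s code); note how few parameters a single compact layer needs, which is why shrinking the model makes on-device inference feasible.

```python
import numpy as np

def conv2d(x, w):
    """Naive 'same'-padded 2D convolution: x is (H, W, C_in), w is (k, k, C_in, C_out)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, _ = x.shape
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]  # (k, k, C_in) window around pixel (i, j)
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2]))
    return out

def instance_norm(x, eps=1e-5):
    """Normalize each channel over its spatial dimensions (common in style-transfer nets)."""
    mean = x.mean(axis=(0, 1), keepdims=True)
    std = x.std(axis=(0, 1), keepdims=True)
    return (x - mean) / (std + eps)

def style_block(x, w):
    """One conv -> instance-norm -> ReLU block of a feed-forward transform network."""
    return np.maximum(instance_norm(conv2d(x, w)), 0.0)

rng = np.random.default_rng(0)
frame = rng.random((32, 32, 3))                    # tiny stand-in for a camera frame
weights = rng.standard_normal((3, 3, 3, 8)) * 0.1  # 3x3 kernel, 3 -> 8 channels
out = style_block(frame, weights)
print(out.shape)     # (32, 32, 8)
print(weights.size)  # 216 parameters in this single layer
```

A full network stacks several such blocks (plus downsampling and upsampling); the on-device engineering problem Schroepfer describes is keeping the total parameter count and per-frame compute small enough to run at camera frame rates within a phone’s power and memory budget.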
Schroepfer said that the alternative of sending the content to data centers to be analyzed and filtered was “not ideal for letting people share fun content in the moment.” Beyond basic image and video filtering, the new deep-learning platform could also enable gesture controls when taking a selfie: the blog post gives the example of a user swiping left and right between artistic filters (all running live, in real time), with the camera even snapping a picture when the user smiles.
The update sounds largely similar to Prisma, an app that launched over the summer and impressed many with its ability to turn photos, and eventually videos, into stylized images. Prisma originally applied its filters using server-side neural networks, but a later update introduced offline image processing, meaning users could apply some of the app’s filters to their images right on their smartphones.
For Facebook, the announcement follows a year of video-first announcements from the company, most recently embodied in CEO and co-founder Mark Zuckerberg’s plan to make the camera more prominent in the app. No specific timeline was given for a possible launch of the new features on Facebook’s mobile apps, but the company is clearly looking to lay the groundwork for its future, calling its new AI initiative, along with virtual reality, “new technologies that will shape the next decade.”