How Meta plans to leverage generative AI
In February, Meta chief Mark Zuckerberg announced that the company will create a new product group focused on generative artificial intelligence. He added that the unit will consist of teams from across the company and will be organised under Chris Cox, the chief product officer.
Meta joins companies like Google and Microsoft that are integrating generative AI into their products. Although still in the early stages, Meta has announced a few ways it would commercialise the technology.
AI personas
The new product group announced by Zuckerberg will use generative AI to build creative tools. To begin with, Meta plans to use the technology to build AI personas. In a Facebook post, Zuckerberg wrote, “We're exploring experiences with text (like chat in WhatsApp and Messenger), with images (like creative Instagram filters and ad formats), and with video and multi-modal experiences.”
Using generative AI to make ads
Facebook parent company Meta plans to commercialise its proprietary generative artificial intelligence by December. In an interview with Nikkei Asia, Meta chief technology officer Andrew Bosworth said that Meta’s AI would help advertisers by suggesting tools to build their advertisements and improve their effectiveness — for instance, by creating different images in an advertisement to suit different audiences.
Meta’s SAM
This week, Meta launched its new generative AI tool, the segment anything model (SAM). Given an input prompt, the tool can segment objects in an image without additional training — even objects it did not encounter during training.
Meta already uses technologies similar to SAM for tasks like photo tagging, content moderation, and determining which posts to recommend to Facebook and Instagram users. As reported by Reuters, the company said that SAM’s release would further broaden access to such technologies.
Make-A-Video
In September last year, Meta introduced Make-A-Video, which uses text prompts to create videos. The research behind this technology builds on text-to-image generation methods. It uses data such as images with descriptions and unlabelled videos to ‘imagine’ what an environment looks like and how it moves.
Meta said that this model is an improved version of Make-A-Scene, which was introduced in the first half of 2022 to create photorealistic illustrations. Meta hasn’t explicitly said which products currently use Make-A-Video.