What Is Meta AI? What You Need to Know About the Social Network's AI Tools


Barbara is a tech writer specializing in AI and emerging technologies. With a background as a systems librarian and in software development, she brings a unique perspective to her reporting. Having lived in the USA and Ireland, Barbara now resides in Croatia. She covers the latest in artificial intelligence and tech innovation, drawing on years of experience in tech and other fields and blending technical know-how with a passion for how technology shapes our world.

If you use Facebook, Instagram, WhatsApp or Messenger, you've no doubt come across something called Meta AI. It's woven into how you interact on these apps, from helping you write posts to editing your images.

The company's goal is for Meta AI to become your ultimate personal virtual assistant, with free, unlimited access to its AI models integrated across Meta's family of apps. Meta's whole shtick at late September's Connect 2024 event was to make AI tools more fun, accessible and user-friendly.

With the latest upgrades, Meta AI aims to go beyond basic chatbot functions and offer a multimodal, multilingual AI assistant that can handle complex tasks. Here's what to know about the social network giant's artificial intelligence tools.

Beyond its use in apps, Meta AI also refers to Meta's academic research laboratory, formerly known as Facebook Artificial Intelligence Research before the company (not the social media platform) rebranded to Meta in October 2021. The lab focuses on the metaverse -- hence the name Meta -- and develops AI technology to power everything from chatbots to virtual reality and augmented reality experiences.

Meta AI isn't the only player in the race to integrate artificial intelligence into everyday life. Google has its own AI tools, like Google Assistant and Gemini, its free chatbot, akin to ChatGPT.

While Google's AI focuses more on productivity, such as search results or managing schedules, Meta AI is embedded in your social interactions, offering assistance without you having to ask. With Meta AI, you can snap a photo and ask the assistant to identify what's in it, or edit the image with a prompt.

Similarly, Amazon's Alexa and Apple's Siri are task-oriented assistants, while ChatGPT and Snapchat's My AI focus on conversational experiences.

But Meta AI goes a step further, blending all those features to make AI an "everyday experience" rather than a standalone tool. So while those other tools feel like something you consciously use, Meta AI quietly shapes how you connect with others and create content.

It's almost sneaky in how seamlessly it integrates into the social platforms people use daily, making AI tools harder to avoid. By typing "@" followed by "Meta AI," you can summon the assistant in chats (even group chats) to offer suggestions, answer questions or edit images.

This AI integration also extends to the search functions within Meta's apps, making it easier and more intuitive to find content and explore topics based on what you see in your feed -- what Meta calls a "contextual experience."

Following ChatGPT's path, Meta AI now supports natural voice conversations. It's multilingual, speaking English, French, German, Hindi, Hindi-Romanized script, Italian, Portuguese and Spanish. Soon, you'll also be able to choose from various celebrity voices for the assistant, including John Cena's and Kristen Bell's.

Meta AI is currently available in 21 countries outside of the US: Argentina, Australia, Cameroon, Canada, Chile, Colombia, Ecuador, Ghana, India, Jamaica, Malawi, Mexico, New Zealand, Nigeria, Pakistan, Peru, Singapore, South Africa, Uganda, Zambia and Zimbabwe.

Though Meta AI isn't available in the EU, the company says it might later join the EU's AI Pact. The AI Act requires companies to provide "detailed summaries" of the data used to train their models -- a requirement Meta has been hesitant to meet, likely due to its history with data privacy lawsuits.

Meta CEO Mark Zuckerberg introduced new multimodal features, powered by its open-source Llama 3.2 models, during the Connect event in September, where Meta's team emphasized the future of computing and human connection. One of the biggest announcements from Connect 2024 was how Meta AI is integrating into everyday products like its Ray-Ban Meta glasses. These glasses can assist users in various ways, like remembering where you parked your car (woohoo!).

In December, Meta announced that the glasses can now handle live AI and real-time translation, meaning that if someone speaks to you in Spanish, French or Italian, you'll hear them in English in your ear. Another major breakthrough is video dubbing in Reels between Spanish and English, with automated lip-syncing.

The glasses can also take actions based on what you're looking at. For example, you can ask AI to make a call or scan a QR code for you.

Other products include the Meta Quest 3S, a version of the company's standalone virtual reality headset that, after the upgrades, is billed as a mixed-reality headset, and Orion, its prototype of holographic AR glasses, which has been in the making for over a decade.

Though Ray-Ban Meta glasses and Quest devices are available across 15 countries, including some European ones, Meta AI is currently available on those devices only in the US and Canada.

For now, another feature -- AI Studio, Meta's tool for building custom AI chatbots -- is available only in the US. It lets users and businesses create custom AI chatbots without needing extensive programming knowledge. These so-called AI characters serve as extensions of their creators or their brands, enabling more engaging interactions with followers or customers.

For transparency, all replies generated by AI will be marked as such.

Llama (Large Language Model Meta AI) is Meta's family of large language models, designed to understand and generate human-like text, answer questions, write and even hold conversations.

Llama 3.2 is the latest version of this LLM family and Meta's first open-source multimodal model, enabling applications that require visual understanding. Meta claims it is "its most advanced open-source model, with flexibility, control and state-of-the-art capabilities that rival the best closed-source models."

The new Llama 3.2 models come in two multimodal variants with 11B and 90B parameters, and lightweight text-only variants with 1B and 3B parameters. The "B" stands for billions of parameters -- the internal values a model adjusts during training that determine how it processes inputs, like words or images, and generates outputs.

Meta says the lightweight models are optimized to run directly on mobile devices and wearables like glasses.
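Because Llama's weights are openly downloadable, developers can try the models outside Meta's apps. Below is a minimal sketch of chatting with one of the lightweight text models through the Hugging Face transformers library; the exact model ID, gated-access approval on Hugging Face and a recent transformers release are assumptions here, not details from Meta's announcement.

```python
# Minimal sketch: a short chat with a Llama 3.2 text model via Hugging Face
# transformers. Assumes access to the gated meta-llama checkpoints has been
# granted and a recent transformers version is installed.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",  # assumed model ID
    device_map="auto",  # put the weights on a GPU if one is available
)

messages = [
    {"role": "user", "content": "In one sentence, what is a multimodal model?"}
]

# The pipeline returns the running conversation; the last entry is the reply.
result = chat(messages, max_new_tokens=80)
print(result[0]["generated_text"][-1]["content"])
```

The 3B variant is small enough to run on a single consumer GPU, which is the point of the lightweight line; the 11B and 90B multimodal variants accept images as well as text but need correspondingly more hardware.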

According to the company, Meta AI is set to become the world's most widely used AI assistant by the end of the year. Over 400 million people interact with Meta AI monthly, with 185 million using it across Meta's products each week.
