Investing.com -- Meta Platforms (NASDAQ:META) unveiled a suite of new artificial intelligence models that push the boundaries of machine perception and language understanding, signaling a leap forward in AI capabilities. Among the new models are the Perception Encoder, Perception Language Model (PLM), Meta Locate 3D, Dynamic Byte Latent Transformer, and Collaborative Reasoner, each designed to tackle complex challenges in their respective fields.
The Perception Encoder stands out for its ability to interpret visual information from images and videos, surpassing existing models in zero-shot classification and retrieval tasks. It has demonstrated proficiency in difficult tasks, such as identifying animals in their natural habitats, and has shown significant improvements in language tasks after integration with a large language model.
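"Zero-shot" classification means the model can be pointed at new categories described in plain text, with no label-specific training. The sketch below illustrates the general idea behind such systems, not Meta's actual code: an image embedding is compared against text-label embeddings and the nearest label wins. The embeddings here are toy stand-ins; a real encoder like the Perception Encoder would produce them.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(image_emb: np.ndarray,
                       label_embs: dict) -> str:
    """Return the label whose text embedding is closest to the image embedding.

    No label-specific training is involved: any string with an embedding
    can serve as a candidate class, which is what makes this zero-shot.
    """
    return max(label_embs,
               key=lambda lbl: cosine_similarity(image_emb, label_embs[lbl]))

# Toy stand-in embeddings (illustrative only; a real encoder produces these).
rng = np.random.default_rng(0)
fox_direction = rng.normal(size=64)
label_embs = {
    "arctic fox": fox_direction + 0.05 * rng.normal(size=64),
    "sea turtle": rng.normal(size=64),
}
image_emb = fox_direction + 0.05 * rng.normal(size=64)  # photo of a fox

print(zero_shot_classify(image_emb, label_embs))
```

The same similarity machinery drives retrieval: instead of taking the best label for one image, you rank many images against one text query.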
Meta’s PLM, on the other hand, is an open-source vision-language model trained on a combination of human-labeled and synthetic data. It is designed to handle challenging visual recognition tasks and comes in variants with up to 8 billion parameters. The PLM-VideoBench, a new benchmark released alongside the PLM, focuses on fine-grained activity understanding and spatiotemporally grounded reasoning.
In robotics, Meta Locate 3D represents an innovation in object localization, enabling robots to understand and interact with the 3D world using natural language prompts. This model can accurately localize objects within 3D environments, a crucial step towards more autonomous and intelligent robotic systems. Meta has also released a dataset to support the development of this technology, which includes 130,000 language annotations.
The Dynamic Byte Latent Transformer is another groundbreaking model from Meta, designed to enhance efficiency and robustness in language processing. This byte-level language model architecture matches the performance of traditional tokenization-based models and is now available for community use following its research publication in late 2024.
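The article does not include Meta's implementation, but the core idea of byte-level modeling is easy to show: instead of a learned subword tokenizer, the model consumes raw UTF-8 bytes, giving it a fixed 256-symbol vocabulary and making it robust to typos, rare words, and any script. This minimal sketch covers only the byte encoding step; the "dynamic latent" part of Meta's architecture, which groups bytes into variable-length patches, is not shown.

```python
def to_byte_ids(text: str) -> list:
    """Encode text as raw UTF-8 byte IDs (vocabulary size is fixed at 256)."""
    return list(text.encode("utf-8"))

def from_byte_ids(ids: list) -> str:
    """Decode byte IDs back to text; the round trip is lossless."""
    return bytes(ids).decode("utf-8")

sample = "naïve"
ids = to_byte_ids(sample)
print(ids)  # the accented 'ï' spans two bytes, yet decoding recovers it exactly
assert from_byte_ids(ids) == sample
assert all(0 <= i < 256 for i in ids)
```

Because every possible input maps into the same 256 IDs, a byte-level model never hits an out-of-vocabulary token, which is part of the robustness claim for this architecture.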
Finally, the Collaborative Reasoner framework aims to develop social AI agents capable of collaborating with humans or other AI agents. It includes a suite of goal-oriented tasks that require multi-step reasoning and multi-turn conversation. Meta’s evaluation shows that current models can benefit from collaborative reasoning, and the company has open-sourced its data generation and modeling pipeline to encourage further research.
As Meta integrates these advanced AI models into new applications, the potential for more capable AI systems across various domains is set to expand, marking significant progress in artificial intelligence research and development.
This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.