This week brings some significant AI news.

OpenAI

  • Advanced Voice Mode: OpenAI has rolled out advanced voice mode to all Plus and Team users, except in the EU, the UK, Switzerland, Iceland, Norway, and Liechtenstein. This mode lets users converse with ChatGPT by voice on their smartphone for a more natural, human-like interaction. Advanced voice mode is powered by GPT-4o, which processes text, vision, and audio inputs natively for faster responses. For more details, visit the Advanced Voice Mode FAQ.
  • Executives Departing: Shortly after the release of advanced voice mode, OpenAI CTO Mira Murati and two other executives announced their departure. Murati leaves after six and a half years with the company; Barret Zoph, Vice President of Research, resigned as well, followed by Chief Research Officer Bob McGrew, who praised the company's latest AI model in his farewell note.
  • Sam Altman's Vision: OpenAI CEO Sam Altman shared more of his vision for an AI-powered future in a blog post titled The Intelligence Age.

Alibaba

The Qwen Team from Alibaba Group has upgraded their large language models and large multimodal models to the Qwen2.5 Series, featuring:
  • Dense, easy-to-use, decoder-only language models available in sizes ranging from 0.5B to 72B in both base and instruct variants.
  • Pretrained on a large-scale dataset encompassing up to 18T tokens.
  • Significant improvements in instruction following, long text generation (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON.
  • Enhanced resilience to diverse system prompts, improving role-play implementation and condition-setting for chatbots.
  • Support for context lengths of up to 128K tokens and generation of up to 8K tokens.
  • Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
For more information, visit Qwenlm.

Google

  • Gemini Models Update: Google updated its production-ready Gemini models and introduced two new models: Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002. Updates include:
    • Over 50% reduced pricing on 1.5 Pro (for prompts < 128K)
    • 2x higher rate limits on 1.5 Flash and ~3x higher on 1.5 Pro
    • 2x faster output and 3x lower latency
    • Updated default filter settings
For more details, visit the Google Developers Blog.

Anthropic

  • Contextual Retrieval: Anthropic introduced Contextual Retrieval, significantly improving the retrieval step in Retrieval-Augmented Generation (RAG). It uses two sub-techniques: Contextual Embeddings and Contextual BM25, reducing failed retrievals by 49% and by 67% when combined with reranking. These improvements markedly enhance retrieval accuracy. Details can be found here: Introducing Contextual Retrieval.
  • Revenue Growth: Anthropic's revenue is reportedly set to jump to $1 billion this year, representing 1,000% year-over-year growth. More details can be found here: Anthropic Revenue.
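To make the Contextual Retrieval idea concrete, here is a minimal sketch of the BM25 half of the technique. In Anthropic's approach, Claude generates a short chunk-specific context that is prepended to each chunk before indexing; in this toy version, a static document-level summary stands in for that generated context, and plain BM25 stands in for the combined embedding + BM25 pipeline. All names (`contextualize`, the ACME example strings) are illustrative, not from Anthropic's implementation.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / N
    df = Counter()  # document frequency of each term
    for d in docs_tokens:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in docs_tokens:
        tf = Counter(d)  # term frequency within this document
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores

def contextualize(chunk, doc_context):
    # Anthropic generates chunk-specific context with Claude; here we
    # simply prepend a static document-level summary as a stand-in.
    return f"{doc_context} {chunk}"

# Illustrative data: without the prepended context, the chunks alone
# never mention which company or quarter they refer to.
doc_context = "ACME Corp Q2 2023 SEC filing:"
chunks = [
    "The company's revenue grew by 3% over the previous quarter.",
    "Operating expenses were flat quarter over quarter.",
]
indexed = [contextualize(c, doc_context).lower().split() for c in chunks]

query = "what was ACME revenue growth in Q2 2023".lower().split()
scores = bm25_scores(query, indexed)
best = chunks[max(range(len(scores)), key=scores.__getitem__)]
print(best)  # the revenue chunk wins on the "revenue" term
```

The key point is that the contextualized index lets queries mentioning "ACME" or "Q2 2023" match chunks that never contain those terms themselves; the full technique pairs this with contextualized embeddings and reranking for the reported 49% and 67% reductions in failed retrievals.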

Meta

Meta has announced several important updates:
 
This concludes this week's AI news roundup. Stay tuned for more updates!
 