Large Language Model (LLM) Talk

AI-Talk

Available episodes

5 of 59
  • vLLM
    vLLM is a high-throughput serving system for large language models. It addresses the inefficient KV-cache memory management of existing systems, where fragmentation and a lack of sharing limit batch size. vLLM uses PagedAttention, inspired by OS paging, to manage the KV cache in non-contiguous fixed-size blocks. This minimizes memory waste and enables flexible sharing across requests, letting vLLM batch significantly more of them. As a result, vLLM achieves 2-4x higher throughput than state-of-the-art systems such as FasterTransformer and Orca. (A toy sketch of the block-table idea follows the episode list.)
    Duration: 13:12
  • Qwen3: Thinking Deeper, Acting Faster
    The Qwen3 family includes both Mixture-of-Experts (MoE) and dense models. They offer hybrid thinking modes, letting users trade response speed against reasoning depth per task, controllable through a template parameter or in-prompt tags. Qwen3 is pre-trained on a significantly expanded dataset of roughly 36 trillion tokens spanning 119 languages, strengthening its multilingual support for global applications, and refined through a multi-stage post-training pipeline. The models also feature improved agentic capabilities, notably in tool calling, which increases their utility for complex, interactive tasks. (A hedged usage sketch of the thinking-mode switch follows the episode list.)
    Duration: 13:15
  • RAGEN: train and evaluate LLM agents using multi-turn RL
    RAGEN is a modular system for training and evaluating LLM agents with multi-turn reinforcement learning. Built on the StarPO framework, it implements the full training loop: rollout generation, reward assignment, and trajectory optimization. RAGEN serves as research infrastructure for analyzing LLM agent training dynamics, focusing on challenges such as stability, generalization, and the emergence of reasoning in interactive environments. (A schematic of the loop follows the episode list.)
    Duration: 11:56
  • DeepSeek-Prover-V2
    DeepSeek-Prover-V2 is an open-source large language model for formal theorem proving in Lean 4. Its training relies heavily on synthetic data: DeepSeek-V3 decomposes problems into subgoals, which a smaller 7B prover model then solves recursively. A two-stage training process, supervised fine-tuning followed by reinforcement learning (GRPO), bridges informal reasoning and formal proofs. The model achieves state-of-the-art performance, particularly in its high-precision Chain-of-Thought mode. (A toy Lean example of subgoal decomposition follows the episode list.)
    Duration: 11:04
  • DeepSeek-Prover
    The DeepSeek-Prover project advances large language model capabilities in formal theorem proving by addressing the scarcity of training data. It uses autoformalization to convert informal high-school and undergraduate competition problems into formal statements, producing a dataset of 8 million synthetic proofs. Quality filtering and formal verification in Lean 4 ensure data reliability, and an iterative train-generate-verify process improves the model, yielding state-of-the-art results on the miniF2F and FIMO benchmarks and outperforming models such as GPT-4. (A schematic of the pipeline follows the episode list.)
    Duration: 8:37
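
The PagedAttention idea from the vLLM episode is easy to see in code. Below is a minimal, hypothetical sketch of the block-table bookkeeping (the names are ours, not vLLM's API): each request maps logical KV-cache blocks to whatever physical blocks happen to be free, so nothing needs to be contiguous and at most one partially filled block per request is wasted.

```python
# Toy sketch of PagedAttention-style bookkeeping (hypothetical names, not
# the vLLM API): the KV cache is split into fixed-size blocks, and a
# per-request block table maps logical block indices to physical blocks.

BLOCK_SIZE = 16  # tokens per KV-cache block

class BlockAllocator:
    def __init__(self, num_physical_blocks):
        self.free = list(range(num_physical_blocks))

    def allocate(self):
        return self.free.pop()  # any free block works: no contiguity required

class Request:
    def __init__(self, allocator):
        self.allocator = allocator
        self.block_table = []  # logical block index -> physical block id
        self.num_tokens = 0

    def append_token(self):
        if self.num_tokens % BLOCK_SIZE == 0:  # current block full (or first token)
            self.block_table.append(self.allocator.allocate())
        self.num_tokens += 1

    def locate(self, token_idx):
        """Where an attention kernel would find this token's K/V entries."""
        return self.block_table[token_idx // BLOCK_SIZE], token_idx % BLOCK_SIZE

allocator = BlockAllocator(num_physical_blocks=1024)
req = Request(allocator)
for _ in range(40):          # a 40-token sequence needs ceil(40/16) = 3 blocks
    req.append_token()
print(req.block_table)       # three physical blocks, not necessarily adjacent
print(req.locate(37))        # (physical block id, offset within block)
```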
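For the Qwen3 episode, here is a hedged sketch of the thinking-mode switches, following the Qwen3 model card at release: a hard switch via an enable_thinking flag on the chat template, and a soft switch via /think and /no_think tags in the prompt. Treat both as documented-at-release behavior and check current docs before relying on them.

```python
# Hedged sketch of toggling Qwen3's hybrid thinking mode via Hugging Face
# transformers, based on the Qwen3 model card at release; the enable_thinking
# flag and the /no_think tag are Qwen's documented mechanism, not ours.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

messages = [{"role": "user", "content": "How many primes are below 30?"}]

# Hard switch: the chat template accepts an enable_thinking flag.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True,
    enable_thinking=True,  # set False for a direct, faster answer
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(out[0][inputs.input_ids.shape[-1]:]))

# Soft switch: a per-turn tag in the prompt overrides the flag, e.g.
# {"role": "user", "content": "How many primes are below 30? /no_think"}
```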
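For the RAGEN episode, this schematic shows the shape of a StarPO-style multi-turn training loop: rollout generation, trajectory-level reward assignment, then optimization. Every class and method name here is a hypothetical stand-in, not RAGEN's real interface.

```python
# Schematic multi-turn RL loop in the spirit of RAGEN's StarPO framework.
# DummyEnv/DummyAgent are placeholders for an interactive task and an LLM
# policy; the three numbered stages mirror the loop the episode describes.
import random

class DummyEnv:
    """Stand-in for an interactive task environment."""
    def reset(self):
        self.t = 0
        return "start"

    def step(self, action):
        self.t += 1
        return f"obs-{self.t}", random.random(), self.t >= 3  # obs, reward, done

class DummyAgent:
    """Stand-in for an LLM policy."""
    def act(self, obs):
        return f"action-for-{obs}"

    def optimize_trajectories(self, batch, returns):
        pass  # a real agent would run a PPO/GRPO-style update here

def collect_rollout(agent, env, max_turns=8):
    """Stage 1, rollout generation: one multi-turn episode as a trajectory."""
    trajectory, obs = [], env.reset()
    for _ in range(max_turns):
        action = agent.act(obs)
        obs, reward, done = env.step(action)
        trajectory.append((obs, action, reward))
        if done:
            break
    return trajectory

def train(agent, env, num_iters=10, batch_size=4):
    for _ in range(num_iters):
        batch = [collect_rollout(agent, env) for _ in range(batch_size)]
        # Stage 2, reward assignment: credit whole trajectories, not single turns.
        returns = [sum(r for _, _, r in traj) for traj in batch]
        # Stage 3, trajectory optimization.
        agent.optimize_trajectories(batch, returns)

train(DummyAgent(), DummyEnv())
```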
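For the DeepSeek-Prover-V2 episode, a toy Lean 4 example (ours, not from the paper) shows what subgoal decomposition looks like in the proof language itself: `have` steps isolate subgoals that are proved separately and then combined, mirroring how the pipeline splits a theorem before solving each piece.

```lean
-- Toy Lean 4 / Mathlib illustration of subgoal decomposition; the example
-- is ours, not taken from DeepSeek-Prover-V2's training data.
import Mathlib

theorem sum_of_squares_nonneg (a b : ℤ) : 0 ≤ a ^ 2 + b ^ 2 := by
  have h₁ : 0 ≤ a ^ 2 := sq_nonneg a  -- subgoal 1
  have h₂ : 0 ≤ b ^ 2 := sq_nonneg b  -- subgoal 2
  exact add_nonneg h₁ h₂              -- combine the subgoals
```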
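For the DeepSeek-Prover episode, this sketch captures the generate-filter-verify pipeline described above. Every callable passed in is a hypothetical placeholder; the concrete models and filters are specified in the paper.

```python
# Schematic of the DeepSeek-Prover data pipeline: autoformalize informal
# problems, filter weak statements, and keep only machine-checked proofs.
# All function arguments are hypothetical placeholders.

def build_dataset(informal_problems, autoformalize, quality_filter, prove, lean_verify):
    dataset = []
    for problem in informal_problems:
        statement = autoformalize(problem)      # LLM: informal text -> Lean 4 statement
        if not quality_filter(statement):       # drop malformed or trivial statements
            continue
        proof = prove(statement)                # prover model attempts a proof
        if proof is not None and lean_verify(statement, proof):
            dataset.append((statement, proof))  # keep only verified proofs
    return dataset
```

The verified pairs then fine-tune the prover, and the stronger prover is run again over the remaining problems: the iterative loop the episode mentions.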


About Large Language Model (LLM) Talk

AI Explained breaks down the world of AI in just 10 minutes. Get quick, clear insights into AI concepts and innovations, without any complicated math or jargon. Perfect for your commute or spare time, this podcast makes understanding AI easy, engaging, and fun—whether you're a beginner or tech enthusiast.