Best AI papers explained
Podcast by Enoch H. Kang
550 Episodes
Front-Loading Reasoning: The Synergy between Pretraining and Post-Training Data
Published: 2025-10-18
Representation-Based Exploration for Language Models: From Test-Time to Post-Training
Published: 2025-10-18
The attacker moves second: stronger adaptive attacks bypass defenses against LLM jailbreaks and prompt injections
Published: 2025-10-18
When can in-context learning generalize out of task distribution?
Published: 2025-10-16
The Art of Scaling Reinforcement Learning Compute for LLMs
Published: 2025-10-16
A small number of samples can poison LLMs of any size
Published: 2025-10-16
Dual Goal Representations
Published: 2025-10-14
Welcome to the Era of Experience
Published: 2025-10-14
Value Flows: Flow-Based Distributional Reinforcement Learning
Published: 2025-10-14
Self-Adapting Language Models
Published: 2025-10-12
The Markovian Thinker
Published: 2025-10-12
Moloch’s Bargain: emergent misalignment when LLMs compete for audiences
Published: 2025-10-12
Transformer Predictor Dynamics and Task Diversity
Published: 2025-10-11
Base models know how to reason, thinking models learn when
Published: 2025-10-11
Spectrum tuning: Post-training for distributional coverage and in-context steerability
Published: 2025-10-11
Understanding Prompt Tuning and In-Context Learning via Meta-Learning
Published: 2025-10-11
MLPs Learn In-Context on Regression and Classification Tasks
Published: 2025-10-11
Is Pre-Training Truly Better than Meta-Learning?
Published: 2025-10-11
Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
Published: 2025-10-11
Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs
Published: 2025-10-09
Cut through the noise. We curate and break down the most important AI papers so you don’t have to.
