“Announcing the AI Forecasting Benchmark Series | July 8, $120k in Prizes” by christian

EA Forum Podcast (All audio) - Podcast created by EA Forum Team

This is a link post.

On July 8th Metaculus is launching the first in a series of quarterly tournaments benchmarking the state of the art in AI forecasting and how it compares to the best human forecasting on real-world questions.

Why a forecasting benchmark?

Many Metaculus questions call for complex, multi-step thinking to predict accurately. A good forecaster needs a mix of capabilities and the sound judgment to apply them appropriately. And because the outcomes are not yet known, it is difficult to narrowly train a model to the task and simply game the benchmark. Benchmarking forecasting ability offers a way to measure and better understand key AI capabilities. AI forecasting accuracy is well below human level, but the gap is narrowing, and it's important to know just how quickly. And it's not just accuracy we want to measure over time, but a variety of forecasting metrics, including calibration and logical consistency. [...]

---

Outline:
(00:22) Why a forecasting benchmark?
(01:29) The Series — Feedback Wanted
(02:53) Bot Creation and Forecast Prompting
(04:12) Metaculus AI Benchmarking Pilot
(06:18) Build Your Own Bot From Templates
(07:37) Prompt Engineering
(08:09) Relevant Research
(08:35) Share Your Thoughts and Feedback

---

First published: June 19th, 2024

Source: https://forum.effectivealtruism.org/posts/MkNjvwW79aAPR4LQ4/announcing-the-ai-forecasting-benchmark-series-or-july-8

---

Narrated by TYPE III AUDIO.
