Leo
Welcome everyone to this episode of our podcast! I’m your host, Leo, and today we have a really special guest with us, Miles Brundage, who recently made headlines with his decision to leave OpenAI. We’ll dive into his journey, what led him to this point, and what exciting things he plans to do next. Thanks for joining us, Miles!
Miles Brundage
Thanks for having me, Leo. It’s great to be here and to share my thoughts about this transition. Leaving OpenAI was a tough decision, especially since it had been my dream job for years. I’ve been passionate about AI and its potential impact on humanity since the very start of my career.
Leo
Absolutely! You’ve accomplished a lot during your time at OpenAI. It must feel surreal to step away from that environment. Can you share a bit about your journey within the organization and what roles you held?
Miles Brundage
Sure! I started as a research scientist on the Policy team and gradually worked my way up to Senior Advisor for AGI Readiness. Over my six years there, I was involved in shaping policies around AI deployment and safety, which I found incredibly rewarding.
Leo
That’s fascinating. What were some of the key reasons that influenced your decision to leave? I imagine it wasn’t an easy choice.
Miles Brundage
It definitely wasn’t easy. I realized that to make a broader impact on AI development, I needed to work outside of the industry. OpenAI is such a high-profile organization, and that brings certain publishing constraints that limit my ability to share my ideas freely on topics I find important.
Leo
That makes sense. The influence of an organization like OpenAI can be quite significant. You mentioned wanting to impact AI development from an external perspective. What specific areas do you hope to focus on in your future endeavors?
Miles Brundage
I plan to start a nonprofit focused on AI policy research and advocacy. I’m particularly interested in issues like the assessment of AI progress, regulation of frontier AI safety, and addressing the economic impacts of AI. These areas are crucial if we want to ensure AI remains beneficial for everyone.
Leo
That’s a bold and important mission. Speaking of AGI readiness, you’ve mentioned that neither OpenAI nor other frontier labs are currently ready for AGI. Can you elaborate on that?
Miles Brundage
Yes, I believe there are significant gaps that need to be addressed. OpenAI has made strides in safety culture, but there’s still a long way to go in terms of governance and ensuring that the world is ready to manage these powerful AI capabilities responsibly.
Leo
It’s definitely a complex issue. I’m curious to hear your thoughts on how the future of AI policy should shape up. What strategies do you think are necessary to address these challenges effectively?
Miles Brundage
A multi-faceted approach is essential. We need collaboration between academia, industry, and government, alongside robust public discussions. It’s crucial to create policies that are informed by diverse perspectives and aim for a fair distribution of AI’s benefits while mitigating risks.
Leo
That’s a great perspective. I can see how your new role could really bridge those gaps. How do you envision starting this nonprofit? Is there a specific framework or model you’re considering?
Miles Brundage
I’m still exploring various models, but I’m leaning towards a collaborative framework where I can bring together like-minded individuals to focus on impactful AI policy initiatives. I want to ensure that our work is both research-driven and advocacy-oriented, addressing the urgent needs in the AI landscape.
Leo, Podcast Host
Miles Brundage, Former Senior Advisor at OpenAI