TYPE III AUDIO (All episodes)

40 Episodes
"Information security in high-impact areas career review" by Jarrah Bloomfield
06/23/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
qa_time: 0h30m
---

As the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton’s campaign chair, John Podesta, opened a disturbing email. The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.

The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.

Podesta was suspicious, but the campai...


Part 3: No matter your job, here’s 3 evidence-based ways anyone can have a real impact
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
No matter which career you choose, anyone can make a difference by donating to charity, engaging in advocacy, or volunteering.

Unfortunately, many attempts to do good in this way are ineffective, and some actually cause harm.

Take sponsored skydiving. Every year, thousands of people collect donations for good causes and throw themselves out of planes to draw attention to whatever charity they’ve chosen to support. This sounds like a win-win: the fundraiser gets an exhilarating once-in-a-lifetime experience wh...


Part 4: Want to do good? Here's how to choose an area to focus on
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
If you want to make a difference with your career, one place to start is to ask which global problems most need attention. Should you work on education, climate change, poverty, or something else?

The standard advice is to do whatever most interests you, and most people seem to end up working on whichever social problem first grabs their attention.

That’s exactly what our cofounder, Ben, did. At age 19, he was most interested in climate change. Here he...


Part 5: The world’s biggest problems and why they’re not what first comes to mind
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
We’ve spent much of the last 10+ years trying to answer a simple question: what are the world’s biggest and most neglected problems?

We wanted to have a positive impact with our careers, and so we set out to discover where our efforts would be most effective.

Our analysis suggests that choosing the right problem could increase your impact by over 100 times, which would make it the most important driver of your impact.

Here, we give...


Part 6: Which jobs help people the most?
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
Many people think of Superman as a hero. But he may be the greatest example of underutilised talent in all of fiction. It was a blunder to spend his life fighting crime one case at a time; if he’d thought a little more creatively, he could have done far more good. How about delivering vaccines to everyone in the world at superspeed? That would have eradicated most infectious disease, saving hundreds of millions of lives.

Here we’ll argue that...


Part 7: Which jobs put you in the best long-term position?
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
People like to lionise the Mozarts, Malala Yousafzais, and Mark Zuckerbergs of the world — people who achieved great success while young — and there are all sorts of awards for young leaders, like the Forbes 30 Under 30.

But these stories are interesting precisely because they’re the exception.

Most people reach the peak of their impact in their middle age. Income usually peaks in the 40s, suggesting that it takes around 20 years for most people to reach their peak productivity.


Part 8: How to find the right career for you
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
Everyone says it’s important to find a job you’re good at, but no one tells you how.

The standard advice is to think about it for weeks and weeks until you “discover your talent.” To help, career advisers give you quizzes about your interests and preferences. Others recommend you go on a gap yah, reflect deeply, imagine different options, and try to figure out what truly motivates you.

But as we saw in an earlier article, becoming...


Part 9: All the evidence-based advice we found on how to be more successful in any job
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
The trouble with self-help advice is that it’s often based on barely any evidence.

For example, how many times have you been told to “think positively” in order to reach your goals? It’s probably the most popular piece of personal guidance, beloved by everyone from high school teachers to bestselling careers experts. One key idea behind the slogan is that if you visualise your ideal future, you’re more likely to get there.

The problem? Recent research f...


Part 10: How to make your career plan
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
People often come to us trying to figure out what they should do over the next 10 or 20 years. Others say they want to figure out “the right career” for them.

The problem with all of this is that, as we’ve seen, your plan is almost certainly going to change:

You’ll change — more than you think.
The world will change — many industries around today won’t even exist in 20 years.
You’ll learn more about what’s best for you — it’s ver...


Part 11: All the best advice we could find on how to get a job
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
When it comes to advice on how to get a job, most of it is pretty bad.

CollegeFeed suggests that you “be confident” as their first interview tip, which is a bit like suggesting that you should “be employable.”
Many advisors cover the “clean your nails and have a firm handshake” kind of thing.
One of the most popular interview videos on YouTube, with over 8 million views, makes the wise point that you definitely mustn’t sit down until you’re explicitly invit...


Part 12: One of the most powerful ways to improve your career: Join a community.
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
Not many students are in a position to start a successful, cost-effective charity straight out of a philosophy degree. But when Thomas attended an “effective altruism” conference in London in 2018, he discovered an opportunity to start a nonprofit that could have a major impact on factory farmed animals.

Through the community, he received advice and funding, and ended up in an incubation programme. Today, Thomas’s charity, the Fish Welfare Initiative, has reduced the suffering of around one million factory farmed...


The end: A cheery final note — imagining your deathbed
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
We’re about to summarise the whole guide in one minute. But before that, imagine a cheery thought: you’re at the end of your 80,000-hour career.

You’re on your deathbed looking back. What are some things you might regret?

Perhaps you drifted into whatever seemed like the easiest option, or did what your parents did.

Maybe you even made a lot of money doing something you were interested in, and had a nice house and ca...


Summary: How to find a fulfilling career that does good
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
TL;DR: To have a fulfilling career, get good at something and then use it to tackle pressing global problems.

Rather than expect to discover your passion in a flash of insight, your job satisfaction will grow over time as you learn more about what kind of work fits you, master valuable skills, and use them to find engaging work that helps others.

Source:
https://80000hours.org/career-guide/summary/

Narrated for 80,000...


Introduction: Why should I read this guide?
06/14/2023


---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---

You’ll spend about 80,000 hours working in your career: 40 hours a week, 50 weeks a year, for 40 years. So how to spend that time is one of the most important decisions you’ll ever make.
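As a quick check of the arithmetic behind the title: 40 hours/week × 50 weeks/year × 40 years = 80,000 hours.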

Choose wisely, and you will not only have a more rewarding and interesting life — you’ll also be able to help solve some of the world’s most pressing problems. But how should you choose?

To answer this question, we set up an indepen...


Part 1: We reviewed over 60 studies about what makes for a dream job. Here’s what we found.
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---

We all want to find a dream job that’s enjoyable and meaningful, but what does that actually mean?

Some people imagine that the answer involves discovering their passion through a flash of insight, while others think that the key elements of their dream job are that it be easy and highly paid.

We’ve reviewed three decades of research into the causes of a satisfying life and career, drawing on over 60 studies, and we didn’t find m...


Part 2: Can one person make a difference? What the evidence says.
06/14/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
---
It’s easy to feel like one person can’t make a difference. The world has so many big problems, and they often seem impossible to solve.

So when we started 80,000 Hours — with the aim of helping people do good with their careers — one of the first questions we asked was, “How much difference can one person really make?”

We learned that while many common ways to do good (such as becoming a doctor) have less impact than you might fi...


"Information security in high-impact areas career review" by Jarrah Bloomfield
06/12/2023

---
client: 80000_hours
project_id: articles
narrator: pw
qa: km
qa_time: 0h30m
---

As the 2016 US presidential campaign was entering a fractious round of primaries, Hillary Clinton’s campaign chair, John Podesta, opened a disturbing email. The March 19 message warned that his Gmail password had been compromised and that he urgently needed to change it.

The email was a lie. It wasn’t trying to help him protect his account — it was a phishing attack trying to gain illicit access.

Podesta was suspicious, but the campaign’s IT team...


"Cause area report: Antimicrobial Resistance" by Akhil
06/08/2023

---
client: ea_forum
project_id: curated
narrator: pw
qa: km
qa_time: 0h20m
---

This post is a summary of some of my work as a field strategy consultant at Schmidt Futures' Act 2 program, where I spoke with over a hundred experts and did a deep dive into antimicrobial resistance to find impactful investment opportunities within the cause area. The full report can be accessed here.


Antimicrobials, the medicines we use to fight infections, have played a foundational role in improving the length and...


"Tips for people considering starting new incubators" by Joey
05/26/2023

---
client: ea_forum
project_id: curated
narrator: pw
qa: km
narrator_time: 1h15m
qa_time: 0h30m
---

Charity Entrepreneurship is frequently contacted by individuals and donors who like our model. Several have expressed interest in seeing the model expanded, or seeing what a twist on the model would look like (e.g., different cause area, region, etc.). Although we are excited about maximizing CE’s impact, we are less convinced by the idea of growing the effective charity pool via franchising or other independent nonprofit incubators. This is be...


"An artificially structured argument for expecting AGI ruin" by Rob Bensinger
05/16/2023

---
client: lesswrong
project_id: articles
feed_id: ai ai_safety
narrator: pw
qa: km
narrator_time: 4h30m
qa_time: 2h0m
---

Philosopher David Chalmers asked: "Is there a canonical source for 'the argument for AGI ruin' somewhere, preferably laid out as an explicit argument with premises and a conclusion?"


Unsurprisingly, the actual reason people expect AGI ruin isn't a crisp deductive argument; it's a probabilistic update based on many lines of evidence. The specific observations and heuristics that carried the most weight for...


"How much do you believe your results?" by Eric Neyman
05/10/2023

---
client: lesswrong
project_id: curated
narrator: pw
qa: km
narrator_time: 2h45m
qa_time: 1h00m
---

Thanks to Drake Thomas for feedback.

I. Here’s a fun scatter plot. It has two thousand points, which I generated as follows: first, I drew two thousand x-values from a normal distribution with mean 0 and standard deviation 1. Then, I chose the y-value of each point by taking the x-value and then adding noise to it. The noise is also normally distributed, with mean 0 and standard deviation 1. Notice that there’s more...
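A minimal sketch of the data-generation procedure described above, assuming NumPy and Matplotlib (the random seed and marker size are arbitrary choices, not from the post):

import numpy as np
import matplotlib.pyplot as plt

# 2,000 x-values from a standard normal, then y = x plus independent standard-normal noise.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=2000)
y = x + rng.normal(loc=0.0, scale=1.0, size=2000)

plt.scatter(x, y, s=4)
plt.xlabel("x")
plt.ylabel("y")
plt.show()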


EA Forum Weekly Summaries – Episode 25 (May 1-7, 2023)
05/10/2023

---
client: ea_forum
project_id: summaries
narrator: cs
---

We've just passed the half-year mark for this project! If you're reading this, please consider taking this 5 minute survey — all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone that has responded to this already! 

Original text:
https://forum.effectivealtruism.org/posts/9QcmyGAjERHRFfrr7/summaries-of-top-forum-posts-1st-to-7th-may-2023

This is part of a weekly series summarizing the top posts on the EA Forum — you c...


"Predictable updating about AI risk" by Joe Carlsmith
05/09/2023

---
client: ea_forum
project_id: curated
feed_id: ai_safety
narrator: jc
---

How worried about AI risk will we feel in the future, when we can see advanced machine intelligence up close? We should worry accordingly now.  

Original article:
https://joecarlsmith.com/2023/05/08/predictable-updating-about-ai-risk

Narrated by Joe Carlsmith and included on the Effective Altruism Forum by TYPE III AUDIO.

Share feedback on this narration.


EA Forum Weekly Summaries – Episode 24 (April 24-30, 2023)
05/08/2023

---
client: ea_forum
project_id: summaries
narrator: cs
---

We've just passed the half-year mark for this project! If you're reading this, please consider taking this 5 minute survey — all questions optional. If you listen to the podcast version, we have a separate survey for that here. Thanks to everyone that has responded to this already! 

Original text:
https://forum.effectivealtruism.org/posts/wzn7hEj3BSz7us7ge/summaries-of-top-forum-posts-24th-30th-april-2023

This is part of a weekly series summarizing the top posts on...


"AGI safety career advice" by Richard Ngo
05/04/2023

---
client: ea_forum
project_id: curated
narrator: pw
qa: km
qa_time: 0h35m 
---

People often ask me for career advice related to AGI safety. This post summarizes the advice I most commonly give. I’ve split it into three sections: general mindset, alignment research and governance work. For each of the latter two, I start with high-level advice aimed primarily at students and those early in their careers, then dig into more details of the field. See also this post I wrote two years ago, containing a bunch of...


EA Forum Weekly Summaries – Episode 23 (April 17-23, 2023)
05/02/2023

---
client: ea_forum
project_id: summaries
narrator: cs
---

Original article:
https://forum.effectivealtruism.org/posts/m2Y6HheC2Q2GLQ3oS/summaries-of-top-forum-posts-17th-23rd-april-2023
This podcast has just passed the 6-month mark! Please give us your feedback and suggestions so we can continue to improve — the survey should take no more than 10 minutes, and we really appreciate your input!

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The...


"First clean water, now clean air" by Fin Moorhouse
05/02/2023

---
client: ea_forum
project_id: curated
narrator: pw
qa: km
qa_time: 0h45m
---

The excellent report from Rethink Priorities was my main source for this. Many of the substantial points I make are taken from it, though errors are my own. It’s worth reading! The authors are Gavriel Kleinwaks, Alastair Fraser-Urquhart, Jam Kraprayoon, and Josh Morrison.

Clean water


In the mid-19th century, London had a sewage problem. It relied on a patchwork of a few hundred sewers, of...


[Week 2] "Learning from human preferences" (Blog Post) by Dario Amodei, Paul Christiano & Alex Ray
04/28/2023

---
client: agi_sf
project_id: core_readings
feed_id: agi_sf__alignment
narrator: pw
qa: mds
qa_time: 0h15m
---
One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is b...
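The description above is about inferring what humans want from comparisons of two proposed behaviours. As a hedged illustration of that general idea (not the authors' implementation; the network, feature sizes, and training data below are placeholders), a reward model can be fit to pairwise comparisons with a Bradley-Terry-style loss:

import torch
import torch.nn as nn

# Toy reward model: maps an 8-dimensional (state, action) feature vector to a scalar reward.
reward_model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(segment_a, segment_b, human_prefers_a):
    # Sum predicted reward over each trajectory segment, then treat the pair
    # as a two-way classification problem over which segment the human preferred.
    r_a = reward_model(segment_a).sum()
    r_b = reward_model(segment_b).sum()
    logits = torch.stack([r_a, r_b]).unsqueeze(0)          # shape (1, 2)
    target = torch.tensor([0 if human_prefers_a else 1])   # index of the preferred segment
    return nn.functional.cross_entropy(logits, target)

# One gradient step from a single synthetic comparison (shapes are illustrative only).
seg_a, seg_b = torch.randn(10, 8), torch.randn(10, 8)
loss = preference_loss(seg_a, seg_b, human_prefers_a=True)
opt.zero_grad()
loss.backward()
opt.step()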


"Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023)" by Chris Scammell & DivineMango
04/27/2023

---
client: lesswrong
project_id: curated
narrator: pw
qa: mds
qa_time: 1h00m
---

This is a post about mental health and disposition in relation to the alignment problem. It compiles a number of resources that address how to maintain wellbeing and direction when confronted with existential risk. 

Many people in this community have posted their emotional strategies for facing Doom after Eliezer Yudkowsky’s “Death With Dignity” generated so much conversation on the subject. This post intends to be more touchy-feely, dealing more directly with emotional landscapes than questions of time...


"New 80,000 Hours Podcast on high-impact climate philanthropy" by Johannes Ackva
04/27/2023

---
client: ea_forum
project_id: curated
narrator: pw
qa: mds
qa_time: 0h05m
---

This is a linkpost for a new 80,000 Hours episode focused on how to engage on climate from an effective altruist perspective.

The podcast lives here, including a selection of highlights as well as a full transcript and lots of additional links. Thanks to 80,000 Hours’ new feature rolled out on April 1st, you can even listen to it! My Twitter thread is here.

Rob and I are having a pretty wide-ranging conversation; here are th...


EA Forum Weekly Summaries – Episode 22 (Mar. 27 - Apr. 16, 2023)
04/22/2023

---
client: ea_forum
project_id: summaries
narrator: cs
---

Original article:
https://forum.effectivealtruism.org/posts/o3Gaoizs2So6SpgLH/summaries-of-top-forum-posts-27th-march-to-16th-april

This podcast has just passed the 6-month mark! Please give us your feedback and suggestions so we can continue to improve — the survey should take no more than 10 minutes, and we really appreciate your input!

This is part of a weekly series summarizing the top posts on the EA Forum — you can see the full collection here. The firs...


"A freshman year during the AI midgame: my approach to the next year" by Buck
04/19/2023

---
client: ea_forum
project_id: curated 
narrator: pw
qa: mds
qa_time: 0h15m
---

I recently spent some time reflecting on my career and my life, for a few reasons:

It was my 29th birthday, an occasion which felt like a particularly natural time to think through what I wanted to accomplish over the course of the next year 🙂.
It seems like AI progress is heating up.
It felt like a good time to reflect on how Redwood has been going, because we’ve been having conversations with funder...


"On AutoGPT" by Zvi
04/19/2023

---
client: lesswrong
project_id: curated
feed_id: ai_alignment, ai
narrator: pw
qa: km
qa_time: 0h50m
---
The primary talk of the AI world recently is about AI agents (whether or not it includes the question of whether we can’t help but notice we are all going to die.)

The trigger for this was AutoGPT, now number one on GitHub, which allows you to turn GPT-4 (or GPT-3.5 for us clowns without proper access) into a prototype version of a self-directed agent.

We al...


"Want to win the AGI race? Solve alignment." by Leopold Aschenbrenner
04/14/2023

---
client: ea_forum
project_id: articles
feed_id: ai_safety
narrator: pw
qa: mds
qa_time: 0h15m
---

This is a linkpost for https://www.forourposterity.com/want-to-win-the-agi-race-solve-alignment/

Society really cares about safety. Practically speaking, the binding constraint on deploying your AGI could well be your ability to align your AGI. Solving (scalable) alignment might be worth lots of $$$ and key to beating China.

Look, I really don't want Xi Jinping Thought to rule the world. If China gets AGI first, the ensuing rapid...


"GPTs are Predictors, not Imitators" by Eliezer Yudkowsky
04/12/2023

---
client: lesswrong
project_id: curated
feed_id: ai_safety
narrator: pw
qa: km
qa_time: 0h10m
---

(Related text posted to Twitter; this version is edited and has a more advanced final section.)

Imagine yourself in a box, trying to predict the next word - assign as much probability mass to the next token as possible - for all the text on the Internet.

Koan:  Is this a task whose difficulty caps out as human intelligence, or at the intelligence level of the smartest h...


[Week 0] "Machine Learning for Humans, Part 2.1: Supervised Learning" by Vishal Maini
04/08/2023

---
client: agi_sf
project_id: core_readings
feed_id: agi_sf__alignment
narrator: pw
qa: mds
qa_time: 0h30m
---

The two tasks of supervised learning: regression and classification. Linear regression, loss functions, and gradient descent.

How much money will we make by spending more dollars on digital advertising? Will this loan applicant pay back the loan or not? What’s going to happen to the stock market tomorrow?
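As a quick illustration of the ideas named above (linear regression, a loss function, gradient descent), here is a minimal sketch; it is not from the article, and the synthetic data and learning rate are made up for the example:

import numpy as np

# Toy regression data: y is a noisy linear function of x.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=200)

# Fit y ≈ w*x + b by gradient descent on the mean squared error loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # should end up close to 3.0 and 0.5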

Original article:
https://medium.com/machine-learning-for-humans/supervised-learning-740383a2feab

Au...


Interpretability in the wild and other papers
04/06/2023

---
client: t3a
feed_id: ai_safety_abstracts
narrator: ai
---

This episode covers 3 abstracts:

Active reward learning from multiple teachers - Peter Barnett et al.
Conditioning Predictive Models: Risks and Strategies - Hubinger et al.
Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small - Kevin Wang et al.

Share feedback on this narration.


"Discussion with Nate Soares on a key alignment difficulty" by Holden Karnofsky
04/05/2023

---
client: lesswrong
project_id: curated
feed_id: ai_safety 
narrator: pw
qa: mds
qa_time: 1h00m
---

In late 2022, Nate Soares gave some feedback on my Cold Takes series on AI risk (shared as drafts at that point), stating that I hadn't discussed what he sees as one of the key difficulties of AI alignment.

I wanted to understand the difficulty he was pointing to, so the two of us had an extended Slack exchange, and I then wrote up a summary of the exchange that w...


"A stylized dialogue on John Wentworth's claims about markets and optimization" by Nate Soares
04/05/2023

---
client: lesswrong
project_id: curated
narrator: pw
qa: mds
qa_time: 0h30m
---

(This is a stylized version of a real conversation, where the first part happened as part of a public debate between John Wentworth and Eliezer Yudkowsky, and the second part happened between John and me over the following morning. The below is combined, stylized, and written in my own voice throughout. The specific concrete examples in John's part of the dialog were produced by me. It's over a year old. Sorry for the lag.)

(As to...


"Nobody’s on the ball on AGI alignment" by Leopold Aschenbrenner
04/05/2023

---
client: ea_forum
project_id: curated
feed_id: ai_safety
narrator: pw
qa: mds
qa_time: 0h30m
---
Far fewer people are working on it than you might think, and even the alignment research that is happening is very much not on track. (But it’s a solvable problem, if we get our act together.)

This is a linkpost for https://www.forourposterity.com/nobodys-on-the-ball-on-agi-alignment/

Original article:
https://forum.effectivealtruism.org/posts/5LNxeWFdoynvgZeik/nobody-s-on-the-ball-on-agi-alignment

Narrated for the Effective Altruism Forum by TY...