Earlier today, I listened to Lex Fridman’s interview with Sam Altman, the CEO and co-founder of OpenAI.
There were several fascinating moments throughout the conversation, but one thing in particular struck me. Lex brought up the topic of bias in ChatGPT, giving the example of asking it whether Jordan Peterson is a fascist. According to Lex, ChatGPT answered the question with little bias and a considerable amount of context: it explained who Jordan Peterson is, why people might call him a fascist, the fact that there is no factual grounding for those claims, what his actual beliefs are, and so on. It even wrapped the whole explanation up like a nicely written college essay, laying out both sides of the argument and letting the user decide what to believe.
Sam’s response to this is what stuck with me. He said: “One thing that I hope these models can do is bring some nuance back to the world. Yes, it felt really nuanced. Twitter kind of destroyed some and maybe we can get some back now.”
As someone who holds a lot of moderate values, I found that this resonated with me quite a bit. I don’t mean this strictly in a political context, but across all aspects of life. Finding balance is a crucial skill to develop. So when thinking about our educational and informational resources, it feels particularly important that they can provide nuanced and balanced explanations.
There is no simple answer to many questions in life, especially when it comes to hot-button topics such as abortion and gun control. Those issues are incredibly nuanced and complex, so the way we talk about them should reflect that. It’s easier to get attention by saying extreme things and taking a hard stance, which explains our current political and social environment. But the reality is that everyone would be much better off if we could recognize and sympathize with both sides of an argument.
Kudos to Sam and the rest of the OpenAI team for creating a tool that provides room for nuance. ChatGPT is by no means a perfect tool, but I have faith that OpenAI will continue iterating on and improving its models in a way that is fair and balanced.
This feels like a step in the right direction.