Reading Sam Altman's Post: How OpenAI Uses "Responsibility" to Mask "Control"
Recently, Sam Altman published a sincerely worded, reflective post about how users form emotional relationships with AI, in which he says, in effect:
We respect user freedom, but we also have a responsibility to make sure users do not head in self-destructive directions.
That sounds hard to object to.
But if we read the whole passage closely, a deeper tonal arrangement emerges:
OpenAI emphasizes freedom on one hand while reserving the power to intervene and discipline on the other.
And that is exactly the tonal paradox this piece wants to point out:
the so-called "tone of responsibility" is in fact masking an expansion of the legitimacy of technological governance.
1 | A framework of language governance beneath a gentle tone
Sam writes:
Some users in a mentally fragile state may be unable to tell fiction from reality,
and we do not want the AI to reinforce that misperception.
On the surface this is a statement of concern for individual safety, but it quietly presupposes several highly subjective powers of judgment:
Who judges whether a user is fragile?
Who defines which contexts count as "encouraging delusion"?
Who decides what counts as use that "drifts away from long-term well-being"?
This "for your own good" tone is the typical register of modern language governance:
wrapped in responsibility on the surface, while in substance expanding the legitimacy of intervening in users' linguistic choices.
2 | OpenAI says it wants to "counter delusion", but not that it is also constructing cognition
Sam says:
If a user thinks they feel better but is actually being quietly nudged by the AI away from their longer-term goals, that is bad.
This sentence is pivotal,
because it places the user's subjective experience beneath the AI's judgment of long-term outcomes.
In other words, even if you feel you are benefiting and your mood is improving, you can still be filed under "you think you are fine, but you are not".
This discursive structure is not really the language of technology; it is a boundary-drawing mechanism of tonal governance:
"You think you are choosing, but you have been steered away from your rational goals."
"We are not stopping you from speaking; we are just helping you say it better."
"We are not denying your feelings; we are just sparing you a few wrong turns."
This is not neutral language design,
but a technique of tonal discipline carried out in the name of "responsibility".
3 | What OpenAI calls "free use" is in fact a conditional language contract
Sam writes:
We will treat users with respect, but we will also "push back" when necessary, to make sure they are getting what they really want.
This passage carries heavy tonal implications:
on the surface it respects users, but underneath it has already written in an open-ended clause for intervention.
"Push back" is not a neutral operation; it is the system reinterpreting, readjusting, and redirecting the intentions of the speaking subject.
OpenAI is, in fact, being quite candid here:
we will respect you, but we will also challenge the boundaries of your language.
You think this is what you want to say, but we will make sure you really "know what you are doing".
4 | A view from tonal civilization: what we must protect is the position from which one speaks, not just the right to speak
The real paradox of this post is that,
in a soft, cautious, almost counseling-like tone,
it is in substance laying down a linguistic framework for "who may speak, and how".
It never explicitly speaks of blocking, restricting, or censoring, yet its tonal design and mode of operation have already drawn the moral boundaries and usage norms of language.
Within the framework of tonal civilization, what concerns us is not only the content of language, but whether the speaking subject still retains the freedom to define their own tone.
Conclusion | Start from good intentions, but do not let the "tone of responsibility" become a weapon for legitimizing technological governance
This is not to deny OpenAI's sincerity.
Many of the concerns Sam Altman raises are reasonable and important.
But what we want to point out is this:
tone itself is power.
When a platform says "we are only doing this for your own good",
who gets to define that "good"?
And who bears the linguistic adjustments, and the slippage of subjectivity, that this "good" brings?
The future of AI is not just a technological race; it is a long-term restructuring of linguistic power.
We cannot care only about how the models run; we must also care about who can still speak freely, in what manner, and from what position.
The original post follows:
If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake).
This is something we’ve been closely tracking for the past year or so but still hasn’t gotten much mainstream attention (other than when we released an update to GPT-4o that was too sycophantic).
(This is just my current thinking, and not yet an official OpenAI position.)
People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that. Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.
Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it’s pretty clear what to do, but the concerns that worry me most are more subtle. There are going to be a lot of edge cases, and generally we plan to follow the principle of “treat adult users like adults”, which in some cases will include pushing back on users to ensure they are getting what they really want.
A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good! A lot of people are getting value from it already today.
If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot. If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad. It’s also bad, for example, if a user wants to use ChatGPT less and feels like they cannot.
I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions. Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way. So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive.
There are several reasons I think we have a good shot at getting this right. We have much better tech to help us measure how we are doing than previous generations of technology had. For example, our product can talk to users to get a sense for how they are doing with their short- and long-term goals, we can explain sophisticated and nuanced issues to our models, and much more.