
From the comments, on AI safety


This is from Richard Ngo, who works on the governance team at OpenAI:

A few points:
1. I agree that the alignment community has generally been remiss in not trying hard enough to clarify the arguments in more formal papers.
2. The only peer-reviewed paper making the case for AI risk that I know of is: https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064. Though note that my paper (the second you linked) is currently under review at a top ML conference.
3. I don’t think that a formal model would shed much light here. My goal in writing my paper was to establish misaligned power-seeking AGI as a credible scientific hypothesis; I think that most who think it’s credible would then agree that investigating it further should be a key priority, whether or not their credences are more like 10% or more like 90%.

From this batch of comments. Here is Richard on Twitter.

The post From the comments, on AI safety appeared first on Marginal REVOLUTION.

