A bit of Russell on Super Intelligence

Stuart Russell (of Russell and Norvig, Artificial Intelligence: A Modern Approach fame) has a book out on superintelligence, Human Compatible. IEEE Spectrum has an excerpt:

AI research is making great strides toward its long-term goal of human-level or superhuman intelligent machines. If it succeeds in its current form, however, that could well be catastrophic for the human race. The reason is that the “standard model” of AI requires machines to pursue a fixed objective specified by humans. We are unable to specify the objective completely and correctly, nor can we anticipate or prevent the harms that machines pursuing an incorrect objective will create when operating on a global scale with superhuman capabilities. Already, we see examples such as social-media algorithms that learn to optimize click-through by manipulating human preferences, with disastrous consequences for democratic systems.

Surely, with so much at stake, the great minds of today are already doing this hard thinking—engaging in serious debate, weighing up the risks and benefits, seeking solutions, ferreting out loopholes in solutions, and so on. Not yet, as far as I am aware. Instead, a great deal of effort has gone into various forms of denial.
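As a toy illustration of the preference-manipulation dynamic Russell describes, here's a minimal simulation sketch (in Python, with every number invented for the purpose): a purely myopic click optimizer serving a simulated user whose tastes drift toward whatever they consume. Under the assumption that more extreme content is intrinsically a bit more clickable, maximizing clicks and steering the user toward the extremes turn out to be the same policy, even though the optimizer never "intends" anything of the sort.

```python
import random

random.seed(0)

ITEMS = [i / 10 for i in range(11)]  # candidate content "positions" in [0, 1]
DRIFT = 0.05                         # how far each click pulls the user's preference
STEP = 0.1                           # learning rate for the click-rate estimates

def click_prob(pref, item):
    # Users click things near their current preference, and (the crucial
    # invented assumption) more extreme content is a bit more clickable.
    proximity = max(0.0, 1.0 - 3.0 * abs(pref - item))
    return proximity * (0.3 + 1.4 * abs(item - 0.5))

def choose(estimates, eps=0.1):
    # Epsilon-greedy choice over learned click-rate estimates; a constant
    # step size is used below because the "environment" (the user) is
    # nonstationary.
    if random.random() < eps:
        return random.choice(ITEMS)
    return max(ITEMS, key=lambda it: estimates[it])

pref = 0.5                             # the user starts out a moderate
estimates = {it: 0.5 for it in ITEMS}

for _ in range(20000):
    item = choose(estimates)
    clicked = random.random() < click_prob(pref, item)
    estimates[item] += STEP * (clicked - estimates[item])
    if clicked:
        pref += DRIFT * (item - pref)  # the user drifts toward what they consume

print(f"user preference started at 0.50, ended at {pref:.2f}")
```

Run it and the simulated user ends up near one extreme or the other; which extreme depends only on the random seed.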

The rest of the excerpt is a fun if depressing read, though I take some of it with a grain of salt. I do agree with the general thrust that a priori arguments against the possibility of powerfully harmful AI (whether impossibility arguments or pollyannaish assertions of easy mitigations such as "turning it off") are probably all very bad. I tend not to think that much about the problems of superintelligence (especially runaway malevolent superintelligence) because there are plenty of worrisome issues with existing, pretty dumb, indifferent AI. I'm also pretty skeptical about positive feedback loops to singularity-ish states with current technology (even 5-10 year technology), for two reasons: we lack unfettered feedback loops (we're not yet able to have AIs design and build better AIs in an improvement spiral), and the energy issues are real (current computation-to-energy ratios aren't hugely favourable to supporting superintelligences; our brains have amazingly low power requirements given what they can do). Perhaps there are sneaky algorithms that would support superhuman general AI on current to near-future hardware with reasonable energy consumption, but I don't see it.
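To put rough numbers on that energy point, here's a back-of-envelope sketch. Every figure is a loudly contested estimate (the brain "ops per second" number especially, which spans several orders of magnitude in the literature), so treat the output as a cartoon comparison, not a measurement.

```python
# All constants are rough assumptions for a back-of-envelope comparison.
BRAIN_WATTS = 20             # commonly cited power draw of a human brain
BRAIN_OPS_PER_SEC = 1e16     # middle of a very wide (~1e13 to 1e18) range of estimates

GPU_WATTS = 400              # rough board power of a high-end accelerator
GPU_OPS_PER_SEC = 1e14       # ~100 TFLOP/s of dense compute, roughly

brain_ops_per_joule = BRAIN_OPS_PER_SEC / BRAIN_WATTS
gpu_ops_per_joule = GPU_OPS_PER_SEC / GPU_WATTS

print(f"brain: {brain_ops_per_joule:.1e} ops/J")
print(f"GPU:   {gpu_ops_per_joule:.1e} ops/J")
print(f"ratio: {brain_ops_per_joule / gpu_ops_per_joule:.0f}x in the brain's favour")

# At these (assumed) numbers, matching one brain takes 100 GPUs,
# i.e. about 40 kW of silicon against 20 W of neurons.
```

On these numbers silicon is a few thousand times less energy-efficient than neurons, which is the sort of gap that makes me doubt near-term runaway scenarios on current hardware.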

There are some reasons to think that iterative runaway-intelligence arguments aren't all that strong. After all, just because a system is very smart doesn't mean it has either the intelligence or the information to design a better system. I sincerely doubt, for example, that a superintelligence could just "figure out" what dark energy is without doing a lot of experimentation. How that experimentation turns out is unclear!

However, we don't need magic superintelligence for AI systems to be dangerous in a variety of ways. While I don't think it's a bad idea, per se, to try to think about superintelligences and their possible emergence, the press coverage of this has been ridiculous. There's plenty to be concerned about with today's systems, and we should put a fair bit of our focus there.
