3/15/2023 Lifeboat for sale ebay

By Angela Chen - The Chronicle of Higher Education

When the world ends, it may not be by fire or ice or an evil robot overlord. Our demise may come at the hands of a superintelligence that just wants more paper clips. So says Nick Bostrom, a philosopher who founded and directs the Future of Humanity Institute, in the Oxford Martin School at the University of Oxford. He created the "paper-clip maximizer" thought experiment to expose flaws in how we conceive of superintelligence. We anthropomorphize such machines as particularly clever math nerds, says Bostrom, whose book Superintelligence: Paths, Dangers, Strategies was released in Britain in July and arrived stateside this month. Spurred by science fiction and pop culture, we assume that the main superintelligence-gone-wrong scenario features a hostile organization programming software to conquer the world. But those assumptions fundamentally misunderstand the nature of superintelligence: the dangers come not necessarily from evil motives, says Bostrom, but from a powerful, wholly nonhuman agent that lacks common sense.

Among transhumanists, Nick Bostrom is well known for promoting the idea of ‘existential risks’: potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, Bostrom claims, that it is rational to devote significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years. Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’.

This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science-fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: he focuses on preventing the worst consequences rather than considering the prospects opened up by whatever radical changes the superintelligence might inflict.