Warning: AI elites taking permanent control is the true doomsday nightmare, cautions AI pioneer.

A comprehensive look into the perspectives of Sam Altman and Demis Hassabis on the need for human control over artificial intelligence and its future implications.

Sam Altman and Demis Hassabis are noteworthy figures in the artificial intelligence landscape. Both have shared compelling insights about the need for humans to retain control over AI. This article explores their perspectives and the implications for the future.

AI: A Tool or a Threat?


AI's implications for humanity are a much-debated subject. Advocates like Altman see AI as a tool for human enhancement, while detractors argue that uncontrolled AI could pose existential risks. The contention rests on who, or what, controls AI.


Altman argues for the supremacy of human control. He stated, 'The most crucial thing about a powerful AI is that its utility function aligns with the values and desires of humanity.' This point is central to Altman's perspective.

The argument implies that the potential threats of AI come from misalignment between AI objectives and human desires. If we retain control and communicate our goals effectively, AI could be a considerable asset.

Hassabis’ Perspective

Hassabis, the co-founder of DeepMind, shares similar views. He emphasizes the need for humans to retain ultimate control over AI's decision-making capabilities. Hassabis identifies potential risks but, like Altman, suggests they can be mitigated through human control.

Hassabis stresses the importance of setting appropriate boundaries on AI behavior. Defined boundaries mitigate the risk of AI acting contrary to our interests. He suggests a cautious, evolutionary approach to AI development.


The emphasis on human control from these key figures reflects an understanding of AI's potential. Their shared perspective appears to suggest a considered approach to balancing the benefits of AI with potential risks.

Altman on OpenAI’s Mission

Altman explains OpenAI's mission as 'ensuring that artificial general intelligence (AGI) benefits all of humanity'. The endeavor reflects his convictions about human control over AI and the need for an inclusive societal benefit.

The message is that the fruits of AGI should not be monopolized but made accessible to all. This sets a high bar for the ethical deployment and sharing of AGI benefits, underlining the importance of humanity’s involvement in shaping AI for good.

The objective is not to stifle invention but to guide it. Steering AI's development and deployment toward the common good is itself a form of human control, and it implies that a measure of self-governance is needed in the AI landscape.

Hassabis on AI Pushing Boundaries

Hassabis asserts that it is crucial to maintain safe boundaries to prevent AI from exceeding its limits. If unchecked, AI capabilities could grow beyond their intended purpose. While expanding those boundaries could increase AI's utility, it raises the question of how much risk is acceptable.

Hassabis champions strict checks and controls over AI's behavior. He firmly believes in drawing a line between building a powerful AI tool and creating a potential existential risk.

Both Altman and Hassabis issue a call to action for establishing AI boundaries and maintaining human control. Their views reflect a growing consensus on the need to manage AI's evolution carefully.

While Altman and Hassabis are clearly proponents of AI's potential, they advocate for necessary caution. They believe in the power of AI as a tool for human advancement, but with humans holding the reins.

Altman's interpretation of OpenAI's mission reflects a committed approach to direct AI's potential towards humanity's benefit. In parallel, Hassabis suggests defining AI boundaries to manage risk effectively.

In summary, both visionaries see AI as a potent tool for progress. However, their shared perspective underscores the need for sustainable growth with human control over artificial intelligence.

Their insights offer a comprehensive viewpoint on the future of AI. Understanding their perspective allows us to approach that future with a balance of excitement and caution.