The Future of Security for AI (Labs and Superintelligence) — Situational Awareness’s Take on Security

Andre Camillo, CISSP


It takes a village… actually, a whole nation!

Of course, AI generated.

I recently came across a piece of news: a former OpenAI safety researcher released a paper discussing the state of the AI industry and his projections for it over the coming decade.

Reading through the document, I learned a lot about the intricacies and challenges of the next wave of technology, and how security should be handled to ensure its best use in the interests of nation-states...

Here is my breakdown of the paper:

Situational Awareness: The Decade Ahead

by Leopold Aschenbrenner

Background Check

The document is publicly accessible; check out the link below:

Introduction — SITUATIONAL AWARENESS: The Decade Ahead (situational-awareness.ai)

It’s 160+ pages long, in which the author lays out his thoughts on the future of AI and how much involvement the government should have in it, given the military importance the upcoming wave of AI will have.

Fascinating thoughts, and parallels drawn with last century’s biggest military project: the atomic bomb.

A must-read for everyone, but especially for techies such as ourselves.

Before diving into it, however, let’s have a look at the author himself, Leopold Aschenbrenner, and his background.

In his own words:

“Hi, I’m Leopold Aschenbrenner. I recently founded an investment firm focused on AGI, with anchor investments from Patrick Collison, John Collison, Nat Friedman, and Daniel Gross.

Before that, I worked on the Superalignment team at OpenAI.

In a past life, I did research on economic growth at Oxford’s Global Priorities Institute. I graduated as valedictorian from Columbia at age 19. I originally hail from Germany and now live in the great city of San Francisco, California.

My aspiration is to secure the blessings of liberty for our posterity. I’m interested in a pretty eclectic mix of things, from First Amendment law to German history to topology, though I’m pretty focused on AI these days.”

Source: For Our Posterity — by Leopold Aschenbrenner

As you can see, having worked on AI safety at a leading lab, Leopold is now in the business of backing the next wave of AI through this investment firm (which is very recent, by the way). The story below is behind a paywall, but it highlights how the firm launched alongside the release of his paper:

Briefing: Ex-OpenAI researcher Leopold Aschenbrenner Starts AGI-focused Investment Firm — The Information

He also has close ties to Ilya Sutskever, a co-founder of OpenAI who recently left to start his own company to develop a ‘safe superintelligence’…

With these massive caveats out of the way, let’s look at what “Situational Awareness: The Decade Ahead” says about the future of AI technologies and the role that security should play in it.

Part 0: Hear all of this in more detail

I’ve recorded a video going through it all in detail, with thoughts based on what I have been seeing at work for the past couple of years at Microsoft and Trend Micro.

I go over the document in detail in the video, so I will leave that out of this write-up (otherwise it would replicate a lot of the paper). Check out the video if you want to multi-task and hear my opinions on it.

Now back to the written commentary…

Part 1: Artificial Intelligence’s Evolution

According to the author’s projections, AI is on a path to rapidly reach the point of being able to improve itself. It will do so through significant improvements in three key areas:

If you want to learn more about the reasoning behind each, the document is detailed and a fascinating weekend read!

This graph represents his projections in these areas up until ~2028 when, according to Leopold, AI will reach the abilities of an AI researcher/engineer, which is the tipping point for it to learn and improve itself.

Source: Situational Awareness — the decade ahead, by Leopold Aschenbrenner

(Soon) AI will reach the abilities of an AI researcher/engineer, the tipping point for it to learn and improve itself.

Soon after this, AI will reach the “General” level… and once that happens, it is just one more step to what Leopold and Ilya Sutskever are calling “Superintelligence”. This is the crux of the “need for security” points that the author makes.

Source: Situational Awareness — the decade ahead, by Leopold Aschenbrenner

And the big analogy he makes is that Superintelligence will be a technology as powerful and as militarily important as the atomic (and later hydrogen) bombs were in the 20th century.

I highly recommend reading a book the author mentions multiple times throughout the paper: “The Making of the Atomic Bomb”. It’s certainly on my reading list; I hope it’s available on Audible…

For all his thoughts on the power of Superintelligence, check out page 66 onwards of the document:

Part 2: The Threat Model

In his paper, Leopold discusses the challenges first, but I think understanding the threat model of AGI/Superintelligence development is key before introducing those challenges, so let’s discuss the threat model.

According to Aschenbrenner, there are two key assets to protect in this scenario:

Source: Situational Awareness, page 93. Diagram by Andre Camillo

To secure such technology, Leopold proposes what he calls “The Project”: an effort to control the development of these technologies, similar to what was done with the atomic bomb back in the 1940s.

Part 3: Protecting Superintelligence

With the understanding that we’re all working towards this goal of reaching something as powerful as “Superintelligence”, Leopold kicks off chapter 3 of his paper by discussing the challenges of building it in a way that benefits, above all, the United States Government (USG, from here onwards).

The challenges mentioned draw both on parallels to highly secretive projects such as the atomic bomb and on insights from someone who worked closely within a leading AI lab, OpenAI. As such, they are important to understand, so let’s review them.

In no particular order, these are the challenges raised by the author:

And so, the solution presented by the author involves control by the USG. The thinking is pretty simple: although the USG is just as imperfect at protecting itself from cyber threats, only the USG has enough AUTHORITY to enforce the controls needed for such a high-profile technology.

Authority is needed for:

  • Vetting
  • Consequences for Data Leakage
  • Physical Security
  • Background checks

Only by applying this level of authority could one hope to achieve what is needed to protect such a crucial technology from nation-state actors. In the paper, the author calls out prominent nations that are often in the headlines for cyber operations, namely North Korea, China and Russia.

Note that this has generated some controversy online, with Reddit threads debating whether it amounts to negative discourse against these nations…

Back to the article: the author goes on to describe what nation-state-proof security would look like:

  • Fully airgapped datacenters
  • Confidential compute and hardware encryption
  • SCIF (Sensitive Compartmented Information Facility)
  • Extreme personnel vetting and monitoring
  • Strong internal controls
  • Strict limitations on external dependencies
  • Intense pen-testing by the NSA

Check out page 99 of the document for details on the thinking behind each of those.
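To make one of these ideas slightly more concrete, here is a minimal, purely illustrative sketch of my own (not from the paper): a deny-by-default egress allowlist, which is one tiny building block behind controls like “strict limitations on external dependencies”. The hostnames below are hypothetical placeholders.

```python
# Illustrative sketch only, not from the paper: a deny-by-default egress check.
# Hostnames are hypothetical placeholders for vetted, internally hosted mirrors.
from urllib.parse import urlparse

ALLOWED_EGRESS_HOSTS = {
    "internal-mirror.example.corp",   # hypothetical vetted package mirror
    "artifact-cache.example.corp",    # hypothetical internal artifact cache
}

def egress_allowed(url: str) -> bool:
    """Allow a request only if its destination host is explicitly allowlisted."""
    host = urlparse(url).hostname
    return host is not None and host.lower() in ALLOWED_EGRESS_HOSTS

if __name__ == "__main__":
    for url in ("https://internal-mirror.example.corp/pkg", "https://pypi.org/simple"):
        print(url, "->", "ALLOW" if egress_allowed(url) else "DENY")
```

The point is not the code itself but the posture it encodes: everything is denied unless explicitly approved, which is the opposite of how most lab and startup environments operate today.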

It is at this point that the author dismisses, and in fact presents counter-arguments to, a couple of potential negative aspects of these rather strict and severe controls, namely that applying them to AI labs could hinder or slow the development of these technologies. Check out page 100 for them.

The (pseudo) conclusion to this section of his paper goes on to explain how adopting stricter controls would actually favor the nation greatly, especially by avoiding future risks and allowing for better and safer growth in this area, with emphasis on the urgent need to start improving security at AI labs TODAY.

I will close this commentary with the author’s own paragraph, which I highlighted in my notes:

“We’re developing the most powerful weapon mankind has ever created. The algorithmic secrets we are developing, right now, are literally the nation’s most important national defense secrets — the secrets that will be at the foundation of the US and her allies’ economic and military predominance by the end of the decade, the secrets that will determine whether we have the requisite lead to get AI safety right, the secrets that will determine the outcome of WWIII, the secrets that will determine the future of the free world. And yet AI lab security is probably worse than a random defense contractor making bolts.”

Leopold Aschenbrenner, in “Situational Awareness: the decade ahead”

The author goes on at much greater length in the paper about challenges and concerns with AI safety (“Superalignment”), and then about how China is, from some perspectives, really well placed to house the datacenters that can fuel Superintelligence. He demonstrates how China’s energy output has double the capacity of the US:

Source: Situational Awareness, page 132

Later in the document, he explains what “The Project” is, and reiterates how important security will be to developing the next wave of AI safely.

Timeline of recent events accelerating the race towards “Superintelligence”

With this, we have a rough timeline of events pertaining to the topics I’ve discussed. This is important given the developments that followed.

~5th June: Leopold announces AGI-focused investment firm

6th June: Release of “Situational Awareness: The Decade Ahead”

13th June: OpenAI appoints retired U.S. Army General Paul M. Nakasone to Board of Directors

Source: OpenAI appoints Retired U.S. Army General Paul M. Nakasone to Board of Directors | OpenAI

21st June: OpenAI co-founder Sutskever sets up new AI company devoted to ‘safe superintelligence’

Source: OpenAI co-founder Sutskever sets up new AI company devoted to ‘safe superintelligence’ | AP News

Conclusion

Since the dawn of human societies, security and safety have been fundamental to the continuation of peoples and even of the species. Raising this question and projecting the next few years of AI evolution in such a concise way (despite the author’s potential bullishness) has an immense impact on those who need to grasp the importance of it all.

And judging by the timeline of AI events this month, I’d say it has kicked off some conversations at the state level.

It’s a positive result, for sure, since mere mortals like us will both benefit from and fear the consequences of the proposed evolution of AI.

Small Appendix

My notes on what’s really needed to protect AGI and Superintelligence:

If you understand the cross reference, kudos!

For more about cyber security and its future, consider subscribing to Medium (here) and following me on my other channels: https://linktr.ee/acamillo

I’ve started a Discord server to share knowledge, learnings and industry news. Please join and let’s make it relevant to you all! 👇

Join the community! https://discord.gg/9dARHambFj

Thank you for reading and leave your thoughts/comments!

References

Scattered throughout the document
