The bmannconsulting.com website

bsky comments

-5
_includes/fissiondiscourse.html
···
<h2>Comments from the Original Fission Forum Post</h2>

<div id='fission-talk-comments'></div>

<script type="text/javascript">
  DiscourseEmbed = { discourseUrl: 'https://talk.fission.codes/', discourseEmbedUrl: '{{ page.url }}' };
  (function() {
    var d = document.createElement('script');
    d.type = 'text/javascript';
    d.async = true;
    d.src = DiscourseEmbed.discourseUrl + 'javascripts/embed.js';
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(d);
  })();
</script>
···
+28
_includes/head.html
···
{% comment %}<!-- Littlefoot footnotes https://github.com/goblindegook/littlefoot -->{% endcomment %}
<link rel="stylesheet" href="https://unpkg.com/littlefoot/dist/littlefoot.css" />
{% if page.excerpt %}
<meta property="og:description" content="{{ page.excerpt | strip_html | strip_newlines | truncate: 160 }}"/>
{% else %}
···
{% comment %}<!-- Littlefoot footnotes https://github.com/goblindegook/littlefoot -->{% endcomment %}
<link rel="stylesheet" href="https://unpkg.com/littlefoot/dist/littlefoot.css" />
{% if page.comments == "on" %}
{% comment %}<!-- Bluesky comments https://github.com/czue/bluesky-comments -->{% endcomment %}
<link rel="stylesheet" href="https://unpkg.com/bluesky-comments@0.10.1/dist/bluesky-comments.css">

<script type="importmap">
{
  "imports": {
    "react": "https://esm.sh/react@18",
    "react-dom/client": "https://esm.sh/react-dom@18/client"
  }
}
</script>

<script type="module">
  import { createElement } from 'react';
  import { createRoot } from 'react-dom/client';
  import { BlueskyComments } from 'https://unpkg.com/bluesky-comments@<VERSION>/dist/bluesky-comments.es.js';

  const author = 'you.bsky.social';
  // Mounts into the #bluesky-comments container provided by the page layout
  const container = document.getElementById('bluesky-comments');
  const root = createRoot(container);
  root.render(
    createElement(BlueskyComments, {
      author: author,
    })
  );
</script>
{% endif %}

{% if page.excerpt %}
<meta property="og:description" content="{{ page.excerpt | strip_html | strip_newlines | truncate: 160 }}"/>
{% else %}
+18
_layouts/default.html
···
<nav>{% include nav.html %}</nav>
<div class="wrapper">
<main>{{ content }}</main>
<footer>{% include footer.html %}</footer>
</div>
···
<nav>{% include nav.html %}</nav>
<div class="wrapper">
<main>{{ content }}</main>
{% if page.comments == "on" %}
<!-- Container that the Bluesky comments script in head.html renders into -->
<div id="bluesky-comments"></div>
<BlueskyComments
  uri="https://bsky.app/profile/bmann.ca/post/3lmbir26qnc2k"
  author="bmann.ca"
  commentFilters={[
    BlueskyFilters.NoPins, // Hide pinned comments
    BlueskyFilters.MinCharacterCountFilter(3), // Hide comments with fewer than 3 characters
  ]}
  onEmpty={
    (details) => {
      console.error('Failed to load comments:', details);
      document.getElementById('bluesky-comments').innerHTML =
        'No comments on this post yet. Details: ' + details.message;
    }
  }
/>
{% endif %}

<footer>{% include footer.html %}</footer>
</div>
+141
_notes/AI 2027.md
···
···
---
link: https://ai-2027.com/
published: 2025-04-03
tags:
- article
- AI
---
Authors: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean

We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

We wrote a scenario that represents our best guess about what that might look like.[^1] It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.[^2]

[PDF](https://ai-2027.com/scenario.pdf)

## Home

### What is this?

The CEOs of [OpenAI](https://www.bloomberg.com/features/2025-sam-altman-interview), [Google DeepMind](https://www.bigtechnology.com/p/google-deepmind-ceo-demis-hassabis), and [Anthropic](https://www.wsj.com/livecoverage/stock-market-today-dow-sp500-nasdaq-live-01-21-2025/card/anthropic-ceo-says-ai-could-surpass-human-intelligence-by-2027-9tka9tjLKLalkXX8IgKA) have all predicted that AGI will arrive within the next 5 years. Sam Altman [has said](https://webcf.waybackmachine.org/web/20250106014723/https://blog.samaltman.com/reflections) OpenAI is setting its sights on “superintelligence in the true sense of the word” and the “glorious future.”[^3]

What might that look like? We wrote AI 2027 to answer that question. Claims about the future are often frustratingly vague, so we tried to be as concrete and quantitative as possible, even though this means depicting one of many possible futures.

We wrote two endings: a “slowdown” and a “race” ending. However, AI 2027 is not a recommendation or exhortation. Our goal is predictive accuracy.[^4]

We encourage you to debate and counter this scenario.[^5] We hope to spark a broad conversation about where we’re headed and how to steer toward positive futures. We’re [planning to give out thousands in prizes](https://ai-2027.com/about#bets-and-bounties) to the best alternative scenarios.

### How did we write it?

Our research on key questions (e.g. what goals will future AI agents have?) can be found [here](https://ai-2027.com/research).

The scenario itself was written iteratively: we wrote the first period (up to mid-2025), then the following period, etc. until we reached the ending. We then scrapped this and did it again.

We weren’t trying to reach any particular ending. After we finished the first ending—which is now colored red—we wrote a new alternative branch because we wanted to also depict a more hopeful way things could end, starting from roughly the same premises. This went through several iterations.[^6]

Our scenario was informed by approximately 25 [tabletop exercises](https://ai-2027.com/about?tab=tabletop-exercise#tab-box-tabletop-exercise) and feedback from over 100 people, including dozens of experts in each of AI governance and AI technical work.

### Why is it valuable?

_“I highly recommend reading this scenario-type prediction on how AI could transform the world in just a few years. Nobody has a crystal ball, but this type of content can help notice important questions and illustrate the potential impact of emerging risks.”_ —_Yoshua Bengio[^7]_

We have set ourselves an impossible task. Trying to predict how superhuman AI in 2027 would go is like trying to predict how World War 3 in 2027 would go, except that it’s an even larger departure from past case studies. Yet it is still valuable to attempt, just as it is valuable for the US military to game out Taiwan scenarios.

Painting the whole picture makes us notice important questions or connections we hadn’t considered or appreciated before, or realize that a possibility is more or less likely. Moreover, by sticking our necks out with concrete predictions, and encouraging others to publicly state their disagreements, we make it possible to evaluate years later who was right.

Also, one author wrote a lower-effort AI scenario [before, in August 2021](https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like). While it got many things wrong, overall it was surprisingly successful: he predicted the rise of chain-of-thought, inference scaling, sweeping AI chip export controls, and $100 million training runs—all more than a year before ChatGPT.

### Who are we?

[Daniel Kokotajlo](https://x.com/DKokotajlo) ([TIME100](https://time.com/7012881/daniel-kokotajlo/), [NYT piece](https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html)) is a former OpenAI researcher whose previous [AI predictions](https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like) have [held up well](https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far).

[Eli Lifland](https://www.linkedin.com/in/eli-lifland/) co-founded [AI Digest](https://theaidigest.org/), did [AI robustness research](https://scholar.google.com/citations?user=Q33DXbEAAAAJ&hl=en), and ranks #1 on the [RAND Forecasting Initiative](https://www.rand.org/global-and-emerging-risks/forecasting-initiative.html) all-time leaderboard.

[Thomas Larsen](https://www.linkedin.com/in/thomas-larsen/) founded the [Center for AI Policy](https://www.centeraipolicy.org/) and did AI safety research at the [Machine Intelligence Research Institute](https://intelligence.org/).

[Romeo Dean](https://www.linkedin.com/in/romeo-dean-789313200/) is completing a computer science concurrent bachelor’s and master’s degree at Harvard and previously was an [AI Policy Fellow](https://www.iaps.ai/romeo-dean) at the Institute for AI Policy and Strategy.

[Scott Alexander, blogger extraordinaire](https://en.wikipedia.org/wiki/Slate_Star_Codex), volunteered to rewrite our content in an engaging style; the fun parts of the story are his and the boring parts are ours.
## Mid 2025: Stumbling Agents

## Late 2025: The World’s Most Expensive AI

> Although models are improving on a wide range of skills, one stands out: <mark>OpenBrain focuses on AIs that can speed up AI research.</mark> They want to win the twin arms races against China (whose leading company we’ll call “DeepCent”)[16](https://ai-2027.com/footnotes#footnote-16) and their US competitors.

## Early 2026: Coding Automation

## Mid 2026: China Wakes Up

> Chip export controls and lack of government support have left China under-resourced compared to the West. By smuggling banned Taiwanese chips, buying older chips, and producing domestic chips about three years behind the US-Taiwanese frontier, China has managed to maintain about 12% of the world’s AI-relevant compute—but the older technology is harder to work with, and supply is a constant headache.[^35]

## Late 2026: AI Takes Some Jobs

> AI has started to take jobs, but has also created new ones. The stock market has gone up 30% in 2026, led by OpenBrain, Nvidia, and whichever companies have most successfully integrated AI assistants. <mark>The job market for junior software engineers is in turmoil: the AIs can do everything taught by a CS degree, but people who know how to manage and quality-control teams of AIs are making a killing.</mark> Business gurus tell job seekers that familiarity with AI is the most important skill to put on a resume. Many people fear that the next wave of AIs will come for their jobs; there is a 10,000 person anti-AI protest in DC.

## January 2027: Agent-2 Never Finishes Learning

## February 2027: China Steals Agent-2

## March 2027: Algorithmic Breakthroughs

## April 2027: Alignment for Agent-3

## May 2027: National Security

## June 2027: Self-improving AI

## August 2027: The Geopolitics of Superintelligence

## September 2027: Agent-4, the Superhuman AI Researcher

## October 2027: Government Oversight

# Slowdown

# Race

## November 2027: Superhuman Politicking

## December 2027: The Agent-5 Collective

## 2028: The AI Economy

## 2029: The Deal

> Humans realize that they are obsolete. A few niche industries still trade with the robot economy, supplying goods where the humans can still add value.[^30] Everyone else either performs a charade of doing their job—leaders still leading, managers still managing—or relaxes and collects an incredibly luxurious universal basic income. Everyone knows that if the AIs turned on humans, they would be completely overpowered. Not that most humans would even resist, the political institutions are too thoroughly captured. But it doesn’t seem like this is happening. Every week that goes by with no dramatic AI treachery, is another week that confidence and trust grow.

## 2030: Takeover

> For about three months, Consensus-1 expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.[^31]
---

## About

The AI 2027 scenario is the first major release from the [AI Futures Project](https://ai-futures.org/). We’re a new nonprofit forecasting the future of AI. We created this website in collaboration with [Lightcone Infrastructure](http://lightconeinfrastructure.com/).

---
[^1]: We disagree somewhat amongst ourselves about AI timelines; our median AGI arrival date is somewhat longer than what this scenario depicts. This scenario depicts something like our mode. See our [timelines forecast](https://ai-2027.com/research/timelines-forecast) for more details.

[^2]: One author, Daniel Kokotajlo, did a [lower-effort scenario exercise in 2021](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like) that got many things right including the rise of chatbots, chain of thought, inference scaling, sweeping AI chip export controls, and $100 million training runs. Another author, [Eli Lifland](https://www.vox.com/future-perfect/2024/2/13/24070864/samotsvety-forecasting-superforecasters-tetlock), ranks #1 on the [RAND Forecasting Initiative](https://www.rand.org/global-and-emerging-risks/forecasting-initiative.html) leaderboard.

[^3]: It’s tempting to dismiss this as just hype. We think this would be a grave mistake—it is not _just_ hype. We have no desire to hype AI ourselves, yet we also think it is [strikingly plausible](https://situational-awareness.ai/) that superintelligence could arrive by the end of the decade. See our [timelines forecast](https://ai-2027.com/research/timelines-forecast) for more.

[^4]: Sometimes people mix prediction and recommendation, hoping to create a self-fulfilling-prophecy effect. We emphatically are not doing this; we hope that what we depict does not come to pass!

[^5]: Feel free to [contact us](mailto:info@ai-futures.org) if you’re writing a critique or an alternative scenario.

[^6]: It was overall more difficult, because unlike with the first ending, we were trying to get it to reach a good outcome starting from a rather difficult situation.

[^7]: Full Professor, Université de Montréal and Founder and Scientific Advisor, Mila - Quebec AI Institute. Also the world’s [most-cited computer scientist](https://en.wikipedia.org/wiki/Yoshua_Bengio).

[^35]: There are now 3M H100e in China, up from the 1.5M they had one year ago in mid 2025. See the compute supplement’s [distribution section](https://ai-2027.com/supplements/compute-forecast#section-2-compute-distribution) for more details. We expect [smuggling efforts](https://www.cnas.org/publications/reports/preventing-ai-chip-smuggling-to-china) to secure around 60K [GB300s](https://www.trendforce.com/news/2025/03/10/news-nvidia-to-unveil-gb300-at-gtc-with-shipment-reportedly-to-begin-in-may-driving-cooling-demands/) (450K H100e), with another 2M [Huawei 910Cs](https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseek-research-suggests-huaweis-ascend-910c-delivers-60-percent-nvidia-h100-inference-performance) being [produced](https://x.com/ohlennart/status/1899488375574278336) (800K H100e) and a mixture of ~1M legally imported chips (such as Nvidia’s [H20s](https://www.reuters.com/technology/artificial-intelligence/nvidias-h20-chip-orders-jump-chinese-firms-adopt-deepseeks-ai-models-sources-say-2025-02-25/) or [B20s](https://www.reuters.com/technology/nvidia-preparing-version-new-flaghip-ai-chip-chinese-market-sources-say-2024-07-22/)) making up the last 250K H100e.

[^30]: E.g. by finding old and unused equipment and taking it to collection sites to sell for scrap.

[^31]: Arguably this means only a few people actually died. Arguably.
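
A quick tally of footnote 35's compute figures, in H100-equivalents (my arithmetic, not the authors'): the year's additions do sum to the stated total.

$$
\underbrace{1.5\text{M}}_{\text{mid-2025 base}} + \underbrace{0.45\text{M}}_{\text{60K GB300s}} + \underbrace{0.8\text{M}}_{\text{2M 910Cs}} + \underbrace{0.25\text{M}}_{\text{legal imports}} = 3\text{M}
$$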
+16
_notes/Bluesky Comments.md
···
···
---
comments: on
tags:
- Bluesky
- ATProtocol
- comments
---
I went and looked for static site compatible [[Bluesky]] comments:

<blockquote class="bluesky-embed" data-bluesky-uri="at://did:plc:2cxgdrgtsmrbqnjkwyplmp43/app.bsky.feed.post/3lmbdivdkfs2k" data-bluesky-cid="bafyreibnggwpuhtyy4ac4mh42attfpydcub27yfdpvfxqezbekffycgwu4" data-bluesky-embed-color-mode="system"><p lang="en">Yeah, Emily did the initial write up a long time ago, there are a handful of &quot;add this JS script from a CDN&quot;

Here&#x27;s a Github search that surfaced a couple github.com/search?q=blu...

I&#x27;ll cherry pick @coryzue.com https://github.com/czue/bluesky-comments and @tom.party https://github.com/tomcreighton/Bluesky-comments-for-Static-Sites</p>&mdash; Boris (<a href="https://bsky.app/profile/did:plc:2cxgdrgtsmrbqnjkwyplmp43?ref_src=embed">@bmann.ca</a>) <a href="https://bsky.app/profile/did:plc:2cxgdrgtsmrbqnjkwyplmp43/post/3lmbdivdkfs2k?ref_src=embed">April 7, 2025 at 6:35 PM</a></blockquote><script async src="https://embed.bsky.app/static/embed.js" charset="utf-8"></script>

I'm going to try including Cory Zue's Bluesky Comments <https://github.com/czue/bluesky-comments> here, which I can then enable for certain pages by turning `comments: on` on a per page basis.
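
The wiring is plain Jekyll: front matter toggles the feature, and the layout checks for it. A minimal sketch of the pattern, mirroring the `head.html` and `default.html` changes above (the layout supplies the `bluesky-comments` container that the head script mounts into):

```yaml
---
comments: on
---
```

```liquid
{% if page.comments == "on" %}
  <div id="bluesky-comments"></div>
{% endif %}
```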
+11
_notes/Keeks.md
···
···
---
twitter: https://x.com/Nogoodtwts
link: https://nevergreen-musings.ghost.io/
tags:
- person
- developer
- NYC
- crypto
- web3
---
Founder of [[Parabl]]
+5 -1
_notes/Parabl.md
···
- hardware
- energy
- NYC
---
Parabl is building portable energy & mesh networking hardware targeted at "leapfrog regions".
···
- hardware
- energy
- NYC
twitter: https://x.com/parabltech
ATProtocol:
---
Parabl is building portable energy & mesh networking hardware targeted at "leapfrog regions".

Founded by JJ aka [[techboiafrica]] and [[Keeks]]
+13
_notes/techboiafrica.md
···
···
---
ATProtocol: https://bsky.app/profile/doom.bsky.social
twitter: https://x.com/techboiafrica
tags:
- person
- developer
- crypto
- web3
- NYC
---
Founder of [[Parabl]]

> Not big on digital imperialism. I frequent sarcasm. Building [[Parabl]]. Techno-solutionism academic. Tigrayan-New Yorker. Multi-disciplinary artist.