PAS7 Studio

Sora 1 Ends. What Did It Leave Behind?

A clear look at Sora 1's sunset in 2026: what OpenAI actually shut down, how Sora evolved from the 2024 research preview to Sora 2, what it achieved, why the legacy version was retired, and why critics never fully stopped worrying about it.

25 Mar 2026 · 13 min read · Technology
Best for: Creative technologists · AI product watchers · Founders exploring generative video · Readers trying to understand what Sora actually changed
[Cover image: editorial illustration for the end of Sora 1 and the transition to Sora 2]

If you take only one idea from this article, let it be this: the end of Sora 1 is not the end of Sora. OpenAI removed the older layer because it wanted one newer product, one newer stack, and one clearer story for the user.

Officially, Sora 1 stopped being available in the United States on March 13, 2026, while Sora 2 became the default Sora experience. [1]
OpenAI says the reason is straightforward: Sora 1 depended on older models and infrastructure, and the company wanted a single, updated experience powered by Sora 2. [1]
That matters because Sora 1 was the first version people associated with the original 2024 breakthrough: minute-long video, object permanence, long-range coherence, and the feeling that video generation had stopped looking like a toy. [3][4]
Sora 2 kept the name, but changed the product shape. It moved toward a social app, synchronized audio, characters, styles, storyboards, stitching, and a more controllable creative workflow. [4][5]
Sora's legacy is mixed. It pushed AI video forward very fast, but it also sharpened every old fear around deepfakes, consent, copyright, and synthetic media at scale. [6][7][8][9]

The first thing to clarify is the headline itself. If we stay close to the official wording, OpenAI did not shut down Sora as a whole. The company shut down Sora 1 in the United States on March 13, 2026, and moved everyone to Sora 2. [1]

The help center says this fairly directly. Sora 1 is no longer available in the US, Sora now opens in Sora 2 by default, and image generation inside Sora is no longer available after that change. [1] That last detail shows how tightly the first product was tied to the older stack.

The official explanation is also low drama: Sora 1 relied on older models and infrastructure, and moving to one product reduces complexity while giving OpenAI room to keep developing Sora 2 on both web and mobile. [1] This reads more like product consolidation than a loud funeral for a failed tool.

Sora moved very fast. In a little over two years it went from a research preview that made the market stop and look, to a consumer app, and then to a legacy layer OpenAI was ready to remove.

01

Sora arrives as a research shock

In its research paper, OpenAI presented Sora as a model capable of generating up to a minute of high-quality video. But the bigger story was not just the length. It showed object permanence, longer scene coherence, camera motion, looping, video extension, interpolation, and a noticeably higher sense of consistency than the market was used to seeing. [3]

02

Sora becomes a real product surface

By the time Sora reached broader access, OpenAI was already presenting it as more than a model demo. AP wrote both about the release and the early limits around depicting people. That was a clear signal that the risk profile here was already very different from a normal image tool. [6]

03

Sora 2 changes the center of gravity

OpenAI described Sora 2 as more physically accurate, more realistic, more controllable, and able to generate synchronized dialogue and sound effects. It also launched through a new Sora app rather than just a web experiment. [4]

04

The app starts growing real product features

The release notes show how quickly the product grew: storyboards, cameos and later characters, stitching, styles, Android support, longer videos, image-to-video with people under stricter rules, and video extensions. [5]

05

Sora 1 becomes the legacy branch

OpenAI sunsets Sora 1 in the US, moves users to Sora 2 by default, and removes image generation from Sora itself. That is the moment when the original web-era Sora stops being the main product story. [1]

Summary

Sora did not have one life. First it was a research story, then a product story. Now only the second one remains.

People often reduce Sora to 'that AI video tool that looked better than the others.' That is too shallow. What made Sora important is that it compressed several stages of the market's progress into one short stretch of time.

It made long video feel plausible

Back in 2024, the point was not just pretty clips. OpenAI was showing that large-scale video training could hold together minute-long scenes, different aspect ratios, looping, extension, and more coherent multi-shot outputs than the market was used to. [3]

It pushed world-model talk into the mainstream

OpenAI did not present Sora as only a creative toy. The company kept connecting it to the idea of learning the physical world through video and called future world simulators strategically important. [3][4]

It turned video generation into a product, not just a demo

Sora 2 added what a pure model demo was missing: synchronized audio, an app, a feed, remixing, social reuse, characters, stitching, storyboards, styles, and later an editor. That changed expectations for what an AI video product should actually be. [4][5]

It moved faster than public trust did

That speed became part of the story too. The same features that made Sora compelling also made it a focal point for arguments about likeness, synthetic media, content provenance, and what happens when realistic video creation becomes normal consumer behavior. [7][8][9]

The core story is not only video quality. It is the shift from raw generation into a full creative workflow with remixing, styles, characters, storyboards, and editing. [3][4][5]


The easiest way to understand the sunset is a direct comparison between Sora 1 and Sora 2. The name stayed the same, but the product logic did not.

Comparison point: Sora 1 / original web era vs Sora 2 / current direction

Core identity
- Sora 1: Legacy web experience tied to older video and image generation infrastructure. [1]
- Sora 2: Single default Sora experience, centered on newer models and a broader app-plus-web product. [1][4]

Flagship claim
- Sora 1: Proof that high-fidelity, longer AI video could work at all. [3]
- Sora 2: A more physically accurate, controllable model with synchronized dialogue and sound effects. [4]

Creative workflow
- Sora 1: Prompting, image/video conditioning, looping, extension, interpolation, image generation. [1][3]
- Sora 2: Storyboards, styles, characters, stitching, editor, extensions, image-to-video with people under stricter rules. [5]

Surface
- Sora 1: Web-first, with the energy of a research preview and the status of a legacy branch. [1]
- Sora 2: App-first product with feed logic, social reuse, and rollout to mobile platforms. [4][5]

After March 13, 2026 in the US
- Sora 1: Removed. Users can no longer switch back. [1]
- Sora 2: Default Sora experience. [1]

The official explanation is short. OpenAI says Sora 1 depended on older models and infrastructure, and that moving to one Sora experience reduces complexity while letting the team keep developing Sora 2 on web and mobile. [1] For a product decision, that explanation already stands on its own.

But there is another layer too. By late 2025 and early 2026, Sora was no longer just a place for isolated generations. It had become an app with feed behavior, characters, styles, stitching, longer clips, mobile rollout, and a very different monetization logic. [4][5] Keeping a separate Sora 1 legacy branch next to that would have meant maintaining older image-and-video pathways while the main team was clearly building a different product.

There is also a trust and governance angle. One active product is easier to moderate, label, watermark, and update than two parallel surfaces with different assumptions and capabilities. OpenAI does not say that directly in the FAQ, but the overall product direction points there very clearly. [1][4][10]

OpenAI's official reason is operational, but the full picture is broader: one product, one stack, one app strategy, and less energy spent keeping the legacy branch alive. [1][4][5]


The easiest way to write a bad retrospective about Sora is to pretend it was either only a big success or only a reckless launch. In reality, both lines ran at the same time.

Deepfakes moved from abstract fear to consumer reality

TIME reported that Reality Defender was able to bypass Sora's anti-impersonation protections within the first 24 hours, and its CEO argued the safeguards offered users a false sense of security. [7] That is a serious criticism, because Sora's realism was exactly what made it so culturally visible.

Public-interest groups saw a release-speed problem

AP reported that Public Citizen accused OpenAI of a consistent and dangerous pattern of rushing products to market without enough guardrails when Sora 2 launched. [8] You can disagree with the wording, but it captures a real public-trust problem around OpenAI's release tempo.

OpenAI had to keep tightening safety while expanding features

OpenAI's own Sora 2 materials make clear how much of the product is really a safety stack problem: likeness controls, stricter rules for minors, visible watermarks, provenance metadata, content moderation, and different restrictions around realistic people. [4][10]
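One concrete piece of that safety stack is provenance metadata: Sora outputs carry C2PA-style manifests that downstream tools can inspect to tell AI-generated media apart from captured media. As a rough illustration of the idea (not the real `c2pa` library API; the manifest shape and field names below are a simplified toy modeled on C2PA concepts such as `c2pa.actions` assertions and IPTC digital source types), a checker might look for a generative-media source type inside the manifest's declared actions:

```python
# Illustrative sketch only: a toy provenance check inspired by C2PA-style
# manifests. The dict layout below loosely mirrors C2PA concepts
# ("assertions", "c2pa.actions", digitalSourceType) but is NOT the real
# c2pa SDK API or OpenAI's actual metadata format.

def looks_ai_generated(manifest: dict) -> bool:
    """Return True if a provenance manifest declares an AI-generation action."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            # The IPTC "trainedAlgorithmicMedia" digital source type is how
            # C2PA-style manifests flag fully generative output.
            if action.get("digitalSourceType", "").endswith("trainedAlgorithmicMedia"):
                return True
    return False

# Hypothetical manifest for a generated video clip.
sample = {
    "claim_generator": "example-video-app/2.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

print(looks_ai_generated(sample))  # True for this sample manifest
```

The point of the sketch is the design shape, not the exact fields: provenance travels with the file as signed metadata, so platforms and verifiers can make labeling decisions without trusting the uploader's claims.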

The stronger the product got, the harder it became to separate creativity from risk

That is the core tension in Sora's story. More realism, more control, and easier remixing are real creative upgrades, but they also multiply misuse if governance does not keep up. [4][7][8][10]

Sora 1 leaves behind something bigger than one interface. It changed the baseline for what people expect from AI video. Before Sora, many clips still looked like short-lived experiments. After Sora, the conversation shifted toward realism, scene continuity, editing workflows, audio, social reuse, and trust at scale. [3][4][5]

That is why the end of Sora 1 is not a minor housekeeping note. It closes the first phase of OpenAI's video story. The research-preview phase is over. The app-and-platform phase is now the real one.

If you want the simplest human reading of the whole story, it is this: Sora 1 showed that the market had already changed. Sora 2 is the version OpenAI believes is fit for continued public development. And the speed with which the old layer disappeared tells you how quickly the product, the infrastructure, and the surrounding risk profile all moved at once.

Did OpenAI shut down Sora completely?

No. Officially, OpenAI shut down Sora 1 in the United States on March 13, 2026. Sora 2 remains the default Sora experience there. [1]

Why did OpenAI remove Sora 1?

According to OpenAI, Sora 1 relied on older models and infrastructure. The company moved the product to a single Sora 2 experience in order to reduce complexity and keep developing the newer stack on web and mobile. [1]

What disappeared when Sora 1 was retired?

In the US, users lost access to the legacy Sora 1 experience; older generations and social activity became unreachable after the cutoff unless users exported their data in time; and image generation inside Sora was removed as well. [1]

What made Sora important in the first place?

The first Sora mattered because it made longer and much more coherent AI video look plausible at a level the market had rarely seen before. OpenAI's 2024 paper highlighted minute-long scenes, object permanence, multi-shot consistency, looping, extension, and stronger stability across aspect ratios. [3]

What did Sora 2 add over the older version?

OpenAI presents Sora 2 as a more physically accurate and more controllable model with synchronized dialogue and sound effects. On top of that, the product gained app-first features such as storyboards, characters, stitching, styles, an editor, and extensions. [4][5]

Why was Sora also controversial?

Because the same realism that made Sora impressive also intensified concerns around impersonation, consent, deepfakes, and copyright. External reporting from TIME, AP, and critics such as Public Citizen shows that these concerns remained central even as the product improved. [7][8]

This article is based on official OpenAI help and product pages, the original 2024 research paper, the Sora 2 launch materials from 2025, and outside reporting on safety, privacy, and public criticism around the product.

Reviewed: 25 Mar 2026
