
Claude Code Starts Working on Its Own: Anthropic Pushes AI Development Tools Toward the Era of 24/7 Autonomous Agents

AI News | Editor: Sandy

Anthropic’s latest launch, Claude Code Routines, pushes AI coding tools from a model that works only when a human is present toward one that can keep running in the cloud. According to Anthropic’s official blog post, “Introducing routines in Claude Code” (https://claude.com/blog/introducing-routines-in-claude-code), the research preview, released on April 14, 2026, allows developers to package prompts, code repositories and external connectors into reusable routines that can be triggered by schedules, API calls or GitHub events. More importantly, those tasks run on Claude Code’s web-based infrastructure rather than relying on a user’s laptop to remain open. That shifts Claude away from the role of a terminal-side assistant and closer to a genuine software agent that can keep working in the background.

From terminal helper to background engineering agent

It would be easy to dismiss this update as little more than another automation feature. That would miss the point. According to Claude Code documentation, “Automate work with routines” (https://code.claude.com/docs/en/routines), routines can be activated through three main mechanisms: scheduled runs, API triggers and GitHub event triggers, including pushes, pull requests, issues and workflow runs. That means Claude Code no longer needs to wait for an engineer to open a terminal before it begins to work. It can be called into action when the workflow itself changes, carrying out triage, review, repair or organization before a human even steps in. If the last wave of AI coding tools was about reducing the friction of writing code, this wave is about reducing the waiting built into engineering workflows.
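To make the three activation paths concrete, here is a minimal sketch of how such triggers might be modeled and dispatched. The documentation names the mechanisms (scheduled runs, API triggers, GitHub event triggers); the type names and dispatcher below are illustrative assumptions, not Anthropic's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    # Hypothetical model of the three documented mechanisms.
    kind: str    # "schedule" | "api" | "github_event"
    detail: str  # cron expression, caller id, or event name

def dispatch(trigger: Trigger, run_routine: Callable[[str], str]) -> str:
    """Route an incoming trigger to a routine run with a context string."""
    if trigger.kind == "schedule":
        return run_routine(f"scheduled run ({trigger.detail})")
    if trigger.kind == "api":
        return run_routine(f"API call from {trigger.detail}")
    if trigger.kind == "github_event":
        return run_routine(f"GitHub {trigger.detail} event")
    raise ValueError(f"unknown trigger kind: {trigger.kind}")

# Example: a pull-request event kicks off a routine without any human present.
result = dispatch(Trigger("github_event", "pull_request"),
                  lambda ctx: f"routine started: {ctx}")
print(result)
```

The point of the sketch is the inversion it captures: the routine is invoked by a change in the workflow itself, not by an engineer opening a terminal.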

As 9to5Mac noted in “Anthropic adds routines to redesigned Claude Code, here’s how it works” (https://9to5mac.com/2026/04/14/anthropic-adds-repeatable-routines-feature-to-claude-code-heres-how-it-works/), one of the most obvious advantages of the new system is that tasks can continue running even when a device is offline or the application is closed. That may sound like a matter of convenience. In practice, it makes Claude Code resemble a backend service rather than a personal productivity tool living on a workstation. Once an AI system can sort bugs overnight and prepare preliminary changes before the workday begins, the relationship between engineering teams and AI starts to look less like tool usage and more like the management of a persistent digital worker.

The real innovation is not another button, but turning judgment into a schedulable resource

The deeper technical significance of this release lies not in the addition of more triggers, but in Anthropic’s attempt to embed model-based reasoning into automation infrastructure. Traditional CI/CD pipelines, cron jobs and GitHub Actions can already run in the background, but they are fundamentally fixed scripts: the logic has to be specified in advance. Claude Code Routines differ because the model can read context and decide what to do next. According to Anthropic’s documentation, “Automate work with routines” (https://code.claude.com/docs/en/routines), each routine can also be configured with its own HTTPS endpoint and bearer token, allowing external systems to invoke it directly by API and retrieve a session URL for tracing execution.
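In practice, "an HTTPS endpoint with a bearer token" means any external system can trigger a routine with an ordinary authenticated POST request. The sketch below builds such a request with Python's standard library; the endpoint URL and payload fields are hypothetical placeholders, since the documentation describes the mechanism rather than a fixed schema.

```python
import json
import urllib.request

def build_routine_trigger(endpoint: str, token: str,
                          payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request for a routine's trigger endpoint.

    The URL and payload below are illustrative; the docs describe a
    per-routine HTTPS endpoint secured with a bearer token, from which
    the caller can retrieve a session URL for tracing execution.
    """
    return urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical endpoint and payload, shown without sending the request.
req = build_routine_trigger(
    "https://example.invalid/routines/nightly-triage/trigger",
    "YOUR_TOKEN",
    {"reason": "ci-failure"},
)
print(req.get_header("Authorization"))
```

Nothing about the request is exotic, and that is the architectural point: any CI system, chat bot or monitoring tool that can issue an HTTP call can now summon model-based judgment as a step in its own pipeline.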

That makes Claude Code look less like an interactive interface and more like an intelligent reasoning node that other systems can call. In the past, automation mainly executed predefined flows. Anthropic is now trying to let AI take over part of the judgment inside those flows. In software engineering, that matters. Much of the delay in development comes not from typing code, but from reading diffs, understanding issues, setting priorities and deciding what should happen next. If a model can take on part of those intermediate steps, the way human engineers allocate their attention will inevitably change.

Anthropic’s strategy is to turn Claude Code into a workflow platform

That also helps explain why Anthropic did not merely launch routines, but redesigned Claude Code’s desktop experience at the same time. According to Anthropic’s official article, “Redesigning Claude Code on desktop for parallel agents” (https://www.anthropic.com/news/redesigning-claude-code-on-desktop-for-parallel-agents), the new desktop app includes a new sidebar, parallel session management, adjustable layouts, an integrated terminal and faster diff inspection. The goal is to make it easier for users to supervise multiple agent tasks at once. Taken together, the desktop redesign and routines point in the same direction: Anthropic is moving agents into the cloud while turning the local interface into a control room for overseeing and taking over parallel AI work.

That product logic is quite different from simply making a model better at coding. Anthropic is not merely trying to prove that Claude writes better code than its rivals. It is trying to make Claude Code into a working environment that teams do not want to leave. Once scheduling, automation, review, handoff and version comparison all happen within one ecosystem, the platform becomes far stickier than a chat model that can be swapped out with relative ease. From a commercial perspective, that offers more durable value than a narrow advantage on coding benchmarks.

The American market is already moving decisively toward agentic programming

Seen in international context, Anthropic is not acting alone. It is moving in the same direction as several of the largest American players. According to GitHub Blog’s “GitHub Copilot: Meet the new coding agent” (https://github.blog/news-insights/product-news/github-copilot-meet-the-new-coding-agent/), GitHub has already pushed Copilot toward a model in which it can execute tasks in the background and submit draft pull requests. According to OpenAI’s official post, “Introducing Codex” (https://openai.com/index/introducing-codex/), OpenAI is likewise framing Codex as a software engineering agent that can handle multiple engineering tasks in parallel in the cloud. Google has adopted a similar line. In “Jules: Google’s autonomous AI coding agent” (https://blog.google/technology/developers/introducing-jules/), the company describes Jules as an autonomous coding agent that can asynchronously handle testing, debugging and feature building in a secure cloud virtual machine.

Taken together, these descriptions make the direction of the American market hard to miss. AI coding tools are no longer being defined primarily by autocomplete, rewriting or one-off answers. They are being recast as systems capable of delivering whole tasks. Anthropic’s launch of routines is therefore not an isolated product update, but a move to close a strategic gap. Claude Code has already been strong on usability and model quality. What Anthropic is now adding is the infrastructure required to keep an agent inside the workflow after the user steps away. Competition is shifting from who feels like the smartest assistant to who looks most like a manageable engineering system.

China and Europe are unlikely to adopt the model in exactly the same way

Yet although the race toward agentic programming is being driven largely by American firms, the way it lands in other markets is unlikely to be identical. Chinese technology companies have also moved quickly to bring AI programming capabilities into enterprise development, but the emphasis there has often been on integration with cloud platforms, workplace suites and industry-specific applications rather than on a standalone developer product. Capabilities similar to routines may therefore, in China, be absorbed into broader enterprise software stacks rather than marketed as a distinct AI coding agent category.

Europe presents another set of priorities. Once AI agents can read repositories, modify code and generate draft pull requests, companies will care not only about productivity but about data protection, permission boundaries, auditability and human oversight. In that environment, systems such as Claude Code Routines will need to do more than prove that they save time. They will also need to prove that they can be governed. That may be less glamorous than the launch narrative, but it is likely to become one of the most practical questions in enterprise adoption.

The broader significance is that AI is starting to take over workflows, not just fragments of work

The real industry significance of routines lies in shifting AI’s value from saving time on isolated actions to taking responsibility for longer-running workflows. If a system can scan new issues overnight, judge priority, generate an initial fix and produce a draft pull request for human review, managers will start measuring AI differently. The question will no longer be simply how much faster engineers can code. It will be how many previously supervised processes can now be done mostly by an agent before a human handles exceptions.

TechCrunch hinted at that trajectory earlier in “Anthropic hands Claude Code more control, but keeps it on a leash” (https://techcrunch.com/2026/03/24/anthropic-hands-claude-code-more-control-but-keeps-it-on-a-leash/), which described Anthropic’s earlier experiments with more autonomous operation while still preserving clear control boundaries. Routines can be seen as an extension of that strategy. Anthropic is not only making the model more autonomous; it is making that autonomy schedulable, triggerable and systematic.

Cost, risk and lines of responsibility remain the hard realities of the agent era

Still, the vision of agentic development tools remains some distance from frictionless reality. According to Anthropic’s documentation, “Automate work with routines” (https://code.claude.com/docs/en/routines), routines currently come with daily execution limits that vary by subscription tier. That suggests Anthropic itself still treats them as a high-value, metered compute resource rather than as a near-free background service. As tasks grow more complex, repositories become larger and triggers fire more often, inference costs, monitoring costs and human review costs may all rise quickly.

An even harder problem is responsibility. If AI can work in the background, it can also fail in the background. When a coding assistant makes a bad completion, a human often notices quickly. But if the error occurs in triage, prioritization, test selection or submission flow, it can slip into the development rhythm while still looking plausible. As TechCrunch wrote in “Anthropic hands Claude Code more control, but keeps it on a leash” (https://techcrunch.com/2026/03/24/anthropic-hands-claude-code-more-control-but-keeps-it-on-a-leash/), Anthropic is trying to balance greater autonomy with stronger guardrails. Over time, that balance may become not just a product design issue, but a matter of corporate governance and legal accountability.

The longer-term impact may lie in how engineering teams redefine human work

Over the longer run, routines may change not simply which company has the most impressive demo, but how software teams define everyday work. As AI agents become capable of handling repetitive, verifiable and reversible tasks, human engineers are likely to concentrate more on problem definition, architectural judgment, cross-system coordination and final review. That will shape not only tool procurement, but also training, role design and performance measurement. The scarce skill of the future may not be prompting alone. It may be the ability to break work down into the parts that agents can reliably handle and the parts that must remain human.

For that reason, Claude Code Routines matter not just because Anthropic has added an eye-catching feature. They matter because they signal that the next contest in AI coding will not be decided solely by how intelligent a model appears inside a chat window. It will be decided by which system can continue to work usefully, safely and reliably after the engineer has stepped away from the keyboard. From Anthropic’s “Introducing routines in Claude Code” (https://claude.com/blog/introducing-routines-in-claude-code) to GitHub’s “GitHub Copilot: Meet the new coding agent” (https://github.blog/news-insights/product-news/github-copilot-meet-the-new-coding-agent/), OpenAI’s “Introducing Codex” (https://openai.com/index/introducing-codex/) and Google’s “Jules: Google’s autonomous AI coding agent” (https://blog.google/technology/developers/introducing-jules/), the industry is moving in the same direction. The question is no longer whether AI can write code. It is when software companies will allow it to start taking shifts.
