The New Stack Podcast
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
Edge-forward: Akamai eyes sweet spot between centralized & decentralized AI inference
At KubeCon + CloudNativeCon Europe 2026, Lena Hall and Thorsten Hans of Akamai outlined how the company is evolving from a CDN provider into a developer-focused cloud platform for AI. Akamai's strategy centers on low-latency, distributed computing, combining managed Kubernetes, serverless functions, and a distributed AI inference platform to support modern workloads.
With a global footprint of core and "distributed reach" datacenters, Akamai aims to bring compute closer to users while still leveraging centralized infrastructure for heavier processing. This hybrid model enables the faster feedback loops critical for applications like fraud detection, robotics, and conversational AI.
To addres...
Kubernetes co-founder Brendan Burns: AI-generated code will become as invisible as assembly
In this episode of The New Stack Makers, Microsoft Corporate Vice President and Technical Fellow Brendan Burns discusses how AI is reshaping Kubernetes and modern infrastructure. Originally designed for stateless applications, Kubernetes is evolving to support AI workloads that require complex GPU scheduling, co-location, and failure sensitivity. Features like Dynamic Resource Allocation and projects such as KAITO introduce AI-specific capabilities while maintaining Kubernetes' core strength: vendor-neutral extensibility.
Burns highlights that AI also changes how systems are monitored. Success is no longer binary; it depends on answer quality, user feedback, and large-scale testing using thousands of prompts and eve...
AI can write your infrastructure code. There's a reason most teams won't let it.
In this episode of The New Stack Agents, Marcin Wyszynski, co-founder of Spacelift and OpenTofu, explains how AI is transforming infrastructure as code (IaC). Originally built for individual operators, tools like Terraform struggled to scale across teams, prompting Wyszynski to help launch OpenTofu after HashiCorp's 2023 license change. Now, the bigger shift is AI: engineers no longer write configuration languages like HCL manually, as AI tools generate it, dramatically lowering the barrier to entry.
However, this creates a dangerous gap between generating infrastructure and truly understanding it, like using a phrasebook to ask questions in a foreign language but...
OutSystems CEO on how enterprises can successfully adopt vibe coding
Woodson Martin, CEO of OutSystems, argues that successful enterprise AI deployments rarely rely on standalone agents. Instead, production systems combine AI agents with data, workflows, APIs, applications, and human oversight. While claims that "95% of agent pilots fail" are common, Martin suggests many of those pilots were simply low-commitment experiments made possible by the low cost of testing AI. Enterprises that succeed typically keep humans in the loop, at least initially, to review recommendations and maintain control over decisions.
Current enterprise use cases for agents include document processing, decision support, and personalized outputs. When integrated into broader systems, these appl...
Inception Labs says its diffusion LLM is 10x faster than Claude, ChatGPT, Gemini
On a recent episode of The New Stack Agents, Inception Labs CEO Stefano Ermon introduced Mercury 2, a large language model built on diffusion rather than the standard autoregressive approach. Traditional LLMs generate text token by token from left to right, which Ermon describes as "fancy autocomplete." In contrast, diffusion models begin with a rough draft and refine it in parallel, similar to image systems like Stable Diffusion.
This parallel process allows Mercury 2 to produce over 1,000 tokens per second, five to ten times faster than optimized models from labs such as OpenAI, Anthropic, and Google, according to compan...
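The contrast Ermon draws can be sketched with a toy call-count model (illustrative only, not Mercury 2's actual algorithm): an autoregressive decoder needs one sequential model call per token, while a diffusion-style decoder refines the entire draft in a fixed number of parallel passes.

```python
# Toy comparison of sequential vs. parallel decoding cost.
# Counts "model calls on the critical path" for each strategy.

def autoregressive_calls(n_tokens: int) -> int:
    """One model call per token; each token waits on the previous one."""
    calls = 0
    for _ in range(n_tokens):
        calls += 1  # strictly sequential, left to right
    return calls

def diffusion_calls(n_tokens: int, refinement_steps: int = 8) -> int:
    """Each refinement pass updates every position at once,
    so the sequential cost depends on steps, not sequence length."""
    calls = 0
    for _ in range(refinement_steps):
        calls += 1  # one parallel pass over the whole draft
    return calls

print(autoregressive_calls(1000))  # 1000 sequential calls for 1000 tokens
print(diffusion_calls(1000))       # 8 parallel passes, regardless of length
```

The speedup claim rests on exactly this shape: the sequential depth of diffusion decoding stays constant as output length grows.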
NanoClaw's answer to OpenClaw is minimal code, maximum isolation
On The New Stack Agents, Gavriel Cohen discusses why he built NanoClaw, a minimalist alternative to OpenClaw, after discovering security and architectural flaws in the rapidly growing agentic framework. Cohen, co-founder of AI marketing agency Qwibit, had been running agents across operations, sales, and research using Claude Code. When Clawdbot (later OpenClaw) launched, it initially seemed ideal. But Cohen grew concerned after noticing questionable dependencies (including his own outdated GitHub package), excessive WhatsApp data storage, a massive AI-generated codebase nearing 400,000 lines, and a lack of OS-level isolation between agents.
In response, he created NanoClaw with radical minimalism: only a few hundred core lines, mini...
The developer as conductor: Leading an orchestra of AI agents with the feature flag baton
A few weeks after Dynatrace acquired DevCycle, Michael Beemer and Andrew Norris discussed on The New Stack Makers podcast how feature flagging is becoming a critical safeguard in the AI era. By integrating DevCycle's feature flagging into the Dynatrace observability platform, the combined solution delivers a "360-degree view" of software performance at the feature level. This closes a key visibility gap, enabling teams to see exactly how individual features affect systems in production.
As "agentic development" accelerates, with AI agents rapidly generating code, feature flags act as a safety net. They allow teams to test, control, and roll back...
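The safety-net pattern can be sketched in a few lines (hypothetical names, not the DevCycle SDK): an AI-generated code path ships behind a flag, and flipping one value routes traffic back to the proven path with no redeploy.

```python
# Minimal feature-flag guard around an AI-generated code path.
# In production the flags dict would come from a flag service.

def legacy_checkout(cart):
    """Proven fallback path."""
    return sum(cart)

def ai_generated_checkout(cart):
    """Placeholder for the agent-written implementation under evaluation."""
    return sum(cart)

def checkout(cart, flags):
    # The flag decides which implementation serves the request.
    if flags.get("ai_checkout_enabled", False):
        return ai_generated_checkout(cart)  # new, unproven path
    return legacy_checkout(cart)            # instant rollback target

print(checkout([10, 20], {"ai_checkout_enabled": False}))  # 30, via legacy path
```

Rolling back AI-generated code then means toggling `ai_checkout_enabled` off, rather than reverting and redeploying.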
The reason AI agents shouldn't touch your source code, and what they should do instead
Dynatrace is at a pivotal point, expanding beyond traditional observability into a platform designed for autonomous operations and security powered by agentic AI. In an interview on The New Stack Makers, recorded at the Dynatrace Perform conference, Chief Technology Strategist Alois Reitbauer discussed his vision for AI-managed production environments. The conversation followed Dynatrace's acquisition of DevCycle, a feature-management platform. Reitbauer highlighted feature flags, long used in software development, as a critical safety mechanism in the age of agentic AI.
Rather than allowing AI agents to rewrite and deploy code, Dynatrace envisions them operating within guardrails by adjusti...
You can't fire a bot: The blunt truth about AI slop and your job
Matan-Paul Shetrit, Director of Product Management at Writer, argues that people must take responsibility for how they use AI. If someone produces poor-quality output, he says, the blame lies with the user, not the tool. He believes many misunderstand AI's role, confusing its ability to accelerate work with an abdication of accountability. Speaking on The New Stack Agents podcast, Shetrit emphasized that "we're all becoming editors," meaning professionals increasingly review and refine AI-generated content rather than create everything from scratch. However, ultimate responsibility remains human. If an AI-generated presentation contains errors, the presenter, not the AI, is accountable. ...
GitLab CEO on why AI isn't helping enterprises ship code faster
AI coding assistants are boosting developer productivity, but most enterprises aren't shipping software any faster. GitLab CEO Bill Staples says the reason is simple: coding was never the main bottleneck. After speaking with more than 60 customers, Staples found that developers spend only 10-20% of their time writing code. The remaining 80-90% is consumed by reviews, CI/CD pipelines, security scans, compliance checks, and deployment, areas that remain largely unautomated. Faster code generation only worsens downstream queues.
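Staples' bottleneck argument can be sanity-checked with Amdahl's law; the 15% coding share below is an assumed midpoint of his 10-20% figure, not a GitLab number.

```python
# Amdahl's law applied to the delivery cycle: only the coding
# fraction of total cycle time gets faster when AI speeds up coding.

def overall_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """End-to-end speedup when only `coding_fraction` of the cycle
    accelerates by `coding_speedup`."""
    return 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)

print(round(overall_speedup(0.15, 2.0), 3))     # 1.081: 2x faster coding -> ~8% overall
print(round(overall_speedup(0.15, 1000.0), 3))  # 1.176: near-instant coding caps below 18%
```

Even unboundedly fast code generation cannot lift delivery by more than about 1/(1 - 0.15), which is why the unautomated 80-90% dominates.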
GitLab's response is its newly GA'ed Duo Agent Platform, designed to automate the full software development lifecycle. The platform introduces "agent flows,"...
The enterprise is not ready for "the rise of the developer"
Sean O'Dell of Dynatrace argues that enterprises are unprepared for a major shift brought on by AI: the rise of the developer. Speaking at Dynatrace Perform in Las Vegas, O'Dell explains that AI-assisted and "vibe" coding are collapsing traditional boundaries in software development. Developers, once insulated from production by layers of operations and governance, are now regaining end-to-end ownership of the entire software lifecycle, from development and testing to deployment and security. This shift challenges long-standing enterprise structures built around separation of duties and risk mitigation.
At the same time, the definition of "developer" is expanding. With...
Meet Gravitino, a geo-distributed, federated metadata lake
In the era of agentic AI, attention has largely focused on data itself, while metadata has remained a neglected concern. Junping (JP) Du, founder and CEO of Datastrato, argues that this must change as AI fundamentally alters how data and metadata are consumed, governed, and understood. To address this gap, Datastrato created Apache Gravitino, an open source, high-performance, geo-distributed, federated metadata lake designed to act as a neutral control plane for metadata and governance across multi-modal, multi-engine AI workloads.
Gravitino achieved major milestones in 2025, including graduation as an Apache Top Level Project, a stable 1.1.0 release, and membership i...
CTO Chris Aniszczyk on the CNCF push for AI interoperability
Chris Aniszczyk, co-founder and CTO of the Cloud Native Computing Foundation (CNCF), argues that AI agents resemble microservices at a surface level, though they differ in how they are scaled and managed. In an interview ahead of KubeCon/CloudNativeCon Europe, he emphasized that being "AI native" requires being cloud native by default. Cloud-native technologies such as containers, microservices, Kubernetes, gRPC, Prometheus, and OpenTelemetry provide the scalability, resilience, and observability needed to support AI systems at scale. Aniszczyk noted that major AI platforms like ChatGPT and Claude already rely on Kubernetes and other CNCF projects.
To address growing comp...
Solving the Problems that Accompany API Sprawl with AI
API sprawl creates hidden security risks and missed revenue opportunities when organizations lose visibility into the APIs they build. According to IBM's Neeraj Nargund, APIs power the core business processes enterprises want to scale, making automated discovery, observability, and governance essential, especially when thousands of APIs exist across teams and environments. Strong governance helps identify endpoints, remediate shadow APIs, and manage risk at scale. At the same time, enterprises increasingly want to monetize the data APIs generate, packaging insights into products while pricing and segmenting usage, a need amplified by the rise of AI.
To address thes...
CloudBees CEO: Why Migration Is a Mirage Costing You Millions
A CloudBees survey reveals that enterprise migration projects often fail to deliver promised modernization benefits. In 2024, 57% of enterprises spent over $1 million on migrations, with average overruns costing $315,000 per project. In The New Stack Makers podcast, CloudBees CEO Anuj Kapur describes this pattern as "the migration mirage," where organizations chase modernization through costly migrations that push value further into the future. Findings from the CloudBees 2025 DevOps Migration Index show leaders routinely underestimate the longevity and resilience of existing systems. Kapur notes that applications often outlast CIOs, yet new leadership repeatedly mandates wholesale replacement.
The report argues modernization has been...
Human Cognition Can't Keep Up with Modern Networks. What's Next?
IBM's acquisitions of Red Hat and HashiCorp, along with its planned purchase of Confluent, reflect a deliberate strategy to build the infrastructure required for enterprise AI. According to IBM's Sanil Nambiar, AI depends on consistent hybrid cloud runtimes (Red Hat), programmable and automated infrastructure (HashiCorp), and real-time, trustworthy data (Confluent). Without these foundations, AI cannot function effectively.
Nambiar argues that modern, software-defined networks have become too complex for humans to manage alone, overwhelmed by fragmented data, escalating tool sophistication, and a widening skills gap that makes veteran "tribal knowledge" hard to transfer. Trust, he says, is the bigge...
From Group Science Project to Enterprise Service: Rethinking OpenTelemetry
Ari Zilka, founder of MyDecisive.ai and former Hortonworks CPO, argues that most observability vendors now offer essentially identical, reactive dashboards that highlight problems only after systems are already broken. After speaking with all 23 observability vendors at KubeCon + CloudNativeCon North America 2025, Zilka said these tools fail to meaningfully reduce mean time to resolution (MTTR), a long-standing demand he heard repeatedly from thousands of CIOs during his time at New Relic.
Zilka believes observability must shift from reactive monitoring to proactive operations, where systems automatically respond to telemetry in real time. MyDecisive.ai is his attempt to solve...
Why You Can't Build AI Without Progressive Delivery
Former GitHub CEO Thomas Dohmke's claim that AI-based development requires progressive delivery frames a conversation between analyst James Governor and The New Stack's Alex Williams about why modern release practices matter more than ever. Governor argues that AI systems behave unpredictably in production: models can hallucinate, outputs vary between versions, and changes are often non-deterministic. Because of this uncertainty, teams must rely on progressive delivery techniques such as feature flags, canary releases, observability, measurement, and rollback. These practices, originally developed to improve traditional software releases, now form the foundation for deploying AI safely. Concepts like evaluations, model vers...
How Nutanix Is Taming Operational Complexity
Most enterprises today run workloads across multiple IT infrastructures rather than a single platform, creating significant operational challenges. According to Nutanix CTO Deepak Goel, organizations face three major hurdles: managing operational complexity amid a shortage of cloud-native skills, migrating legacy virtual machine (VM) workloads to microservices-based cloud-native platforms, and running VM-based workloads alongside containerized applications. Many engineers have deep infrastructure experience but lack Kubernetes expertise, making the transition especially difficult and increasing the learning curve for IT administrators.
To address these issues, organizations are turning to platform engineering and internal developer platforms that abstract infrastructure complexity and p...
Do All Your AI Workloads Actually Require Expensive GPUs?
GPUs dominate today's AI landscape, but Google argues they are not necessary for every workload. As AI adoption has grown, customers have increasingly demanded compute options that deliver high performance with lower cost and power consumption. Drawing on its long history of custom silicon, Google introduced Axion CPUs in 2024 to meet needs for massive scale, flexibility, and general-purpose computing alongside AI workloads. The Axion-based C4A instance is generally available, while the newer N4A virtual machines promise up to 2x price performance.
In this episode, Andrei Gueletii, a technical solutions consultant for Google Cloud, joined Ga...
Breaking Data Team Silos Is the Key to Getting AI to Production
Enterprises are racing to deploy AI services, but the teams responsible for running them in production are seeing familiar problems reemerge, most notably silos between data scientists and operations teams, reminiscent of the old DevOps divide. In a discussion recorded at AWS re:Invent 2025, IBM's Thanos Matzanas and Martin Fuentes argue that the challenge isn't new technology but repeating organizational patterns. As data teams move from internal projects to revenue-critical, customer-facing applications, they face new pressures around reliability, observability, and accountability.
The speakers stress that many existing observability and governance practices still apply. Standard metrics, KPIs...
Why AI Parallelization Will Be One of the Biggest Challenges of 2026
Rob Whiteley, CEO of Coder, argues that the biggest winners in today's AI boom resemble the "picks and shovels" sellers of the California Gold Rush: companies that provide tools enabling others to build with AI. Speaking on The New Stack Makers at AWS re:Invent, Whiteley described the current AI moment as the fastest-moving shift he's seen in 25 years of tech. Developers are rapidly adopting AI tools, while platform teams face pressure to approve them, as saying "no" is no longer viable.
Whiteley warns of a widening gap between organizations that extract real value from AI and those that do...
How Nutanix Is Taming Operational Complexity
Many enterprises now run workloads across multiple IT infrastructures rather than a single environment. According to Nutanix, about 60% of organizations deploy this way, creating significant operational challenges. In an episode of The New Stack Makers, Deepak Goel, CTO for cloud native at Nutanix, outlined three major issues: operational complexity combined with a shortage of cloud native skills, the difficulty of migrating legacy VM-based workloads to microservices-oriented platforms, and the challenge of running virtual machines and containers side by side, often in silos.
To address these problems, organizations are adopting platform engineering, where specialized teams abstract infrastructure complexity and...
Kubernetes GPU Management Just Got a Major Upgrade
Nvidia Distinguished Engineer Kevin Klues noted that low-level systems work is invisible when done well and highly visible when it fails, a dynamic that frames current Kubernetes innovations for AI. At KubeCon + CloudNativeCon North America 2025, Klues and AWS product manager Jesse Butler discussed two emerging capabilities: dynamic resource allocation (DRA) and a new workload abstraction designed for sophisticated AI scheduling.
DRA, now generally available in Kubernetes 1.34, fixes long-standing limitations in GPU requests. Instead of simply asking for a number of GPUs, users can specify types and configurations. Modeled after persistent volumes, DRA allows any specialized hardware to be...
The Rise of the Cognitive Architect
At KubeCon North America 2025, GitLab's Emilio Salvador outlined how developers are shifting from individual coders to leaders of hybrid human-AI teams. He envisions developers evolving into "cognitive architects," responsible for breaking down large, complex problems and distributing work across both AI agents and humans. Complementing this is the emerging role of the "AI guardian," reflecting growing skepticism around AI-generated code. Even as AI produces more code, humans remain accountable for reviewing quality, security, and compliance.
Salvador also described GitLab's "AI paradox": developers may code faster with AI, but overall productivity stalls because testing, security, and compliance pro...
Why the CNCF's New Executive Director is Obsessed With Inference
Jonathan Bryce, the new CNCF executive director, argues that inference, not model training, will define the next decade of computing. Speaking at KubeCon North America 2025, he emphasized that while the industry obsesses over massive LLM training runs, the real opportunity lies in efficiently serving these models at scale. Cloud-native infrastructure, he says, is uniquely suited to this shift because inference requires real-time deployment, security, scaling, and observability, strengths of the CNCF ecosystem.
Bryce believes Kubernetes is already central to modern inference stacks, with projects like Ray, KServe, and emerging GPU-oriented tooling enabling teams to deploy and operationalize models...
Kubernetes Gets an AI Conformance Program, and VMware Is Already On Board
The Cloud Native Computing Foundation has introduced the Certified Kubernetes AI Conformance Program to bring consistency to an increasingly fragmented AI ecosystem. Announced at KubeCon + CloudNativeCon North America 2025, the program establishes open, community-driven standards to ensure AI applications run reliably and portably across different Kubernetes platforms. VMware by Broadcomâs vSphere Kubernetes Service (VKS) is among the first platforms to achieve certification.
In an interview with The New Stack, Broadcom leaders Dilpreet Bindra and Himanshu Singh explained that the program applies lessons from Kubernetes' early evolution, aiming to reduce the "muddiness" in AI tooling and improve cross-platform interope...
How etcd Solved Its Knowledge Drain with Deterministic Testing
The etcd project, a distributed key-value store older than Kubernetes, recently faced significant challenges due to maintainer turnover and the resulting loss of unwritten institutional knowledge. Lead maintainer Marek Siarkowicz explained that as longtime contributors left, crucial expertise about testing procedures and correctness guarantees disappeared. This gap led to a problematic release that introduced critical reliability issues, including potential data inconsistencies after crashes.
To rebuild confidence in etcd's correctness, the new maintainer team introduced "robustness testing," creating a framework inspired by Jepsen to validate both basic and distributed-system behavior. Their goal was to ensure linearizability, the "Holy Grail"...
Helm 4: What's New in the Open Source Kubernetes Package Manager?
Helm, originally a hackathon project called Kate's Place, turned 10 in 2025, marking the milestone with the release of Helm 4, its first major update in six years. Created by Matt Butcher and colleagues as a playful take on "K8s," the early project won a small prize but quickly grew into a serious effort when Deis leadership recognized the need for a Kubernetes package manager. Renamed Helm, it rapidly expanded with community contributors and became one of the first CNCF graduating projects.
Helm 4 reflects years of accumulated design debt and evolving use cases. After the rapid iterations of Helm 1, 2...
All About Cedar, an Open Source Solution for Fine-Tuning Kubernetes Authorization
Kubernetes has relied on role-based access control (RBAC) since 2017, but its simplicity limits what developers can express, said Micah Hausler, principal engineer at AWS, on The New Stack Makers. RBAC only allows actions; it can't enforce conditions, denials, or attribute-based rules. Seeking a more expressive authorization model for Kubernetes, Hausler explored Cedar, an authorization engine and policy language created at AWS in 2022 and later open-sourced. Although not designed specifically for Kubernetes, Cedar proved capable of modeling its authorization needs in a concise, readable way. Hausler highlighted Cedar's clarity (nontechnical users can often understand policies at a glance) as well...
Teaching a Billion People to Code: How JupyterLite Is Scaling the Impossible
JupyterLite, a fully browser-based distribution of JupyterLab, is enabling new levels of global scalability in technical education. Developed by Sylvain Corlay's QuantStack team, it allows math and programming lessons to run entirely in students' browsers, kernel included, without relying on Docker or cloud-scale infrastructure. Its most prominent success is Capytale, a French national deployment that supports half a million high school students and over 200,000 weekly sessions from essentially a single server, which hosts only teaching content while computation happens locally in each browser.
QuantStack, founded in 2016 as what Corlay calls an "accidental startup," has since grown into a 30-pe...
2026 Will Be the Year of Agentic Workloads in Production on Amazon EKS
AWS's approach to Elastic Kubernetes Service has evolved significantly since its 2018 launch. According to Mike Stefanik, Senior Manager of Product Management for EKS and ECR, today's users increasingly represent the late majority: teams that want Kubernetes without managing every component themselves. In a conversation on The New Stack Makers, Stefanik described how AI workloads are reshaping Kubernetes operations and why AWS open-sourced an MCP server for EKS. Early feedback showed that meaningful, task-oriented tool names, not simple API mirrors, made MCP servers more effective for LLMs, prompting AWS to design tools focused on troubleshooting, runbooks, and full application workflows...
From Cloud Native to AI Native: Where Are We Going?
At KubeCon + CloudNativeCon 2025 in Atlanta, a panel of experts (Kate Goldenring of Fermyon Technologies, Idit Levine of Solo.io, Shaun O'Meara of Mirantis, Sean O'Dell of Dynatrace, and James Harmison of Red Hat) explored whether the cloud native era has evolved into an AI native era, and what that shift means for infrastructure, security, and development practices. Jonathan Bryce of the CNCF argued that true AI-native systems depend on robust inference layers, which have been overshadowed by the hype around chatbots and agents. As organizations push AI to the edge and demand faster, more personalized experiences, Fermyon's Ka...
Amazon CTO Werner Vogels' Predictions for 2026
AWS re:Invent has long featured CTO Werner Vogels' closing keynote, but this year he signaled it may be his last, emphasizing it's time for "younger voices" at Amazon. After 21 years with the company, Vogels reflected on arriving as an academic and being stunned by Amazon's technical scale, an energy that still drives him today. He released his annual predictions ahead of re:Invent, with this year's five themes focused heavily on AI and broader societal impacts.
Vogels highlights technology's growing role in addressing loneliness, noting how devices like Alexa can offer comfort to those who fee...
How Can We Solve Observability's Data Capture and Spending Problem?
DevOps practitioners, whether developers, operators, SREs, or business stakeholders, increasingly rely on telemetry to guide decisions, yet face growing complexity, siloed teams, and rising observability costs. In a conversation at KubeCon + CloudNativeCon North America, IBM's Jacob Yackenovich emphasized the importance of collecting high-granularity, full-capture data to avoid missing critical performance signals across hybrid application stacks that blend legacy and cloud-native components. He argued that observability must evolve to serve both technical and nontechnical users, enabling teams to focus on issues based on real business impact rather than subjective judgment.
AI's rapid integration into applications introduces new obse...
How Kubernetes Became the New Linux
Major banks once built their own Linux kernels because no distributions existed, but today commercial distros (and Kubernetes) are universal. At KubeCon + CloudNativeCon North America, AWS's Jesse Butler noted that Kubernetes has reached the same maturity Linux once did: organizations no longer build bespoke control planes but rely on shared standards. That shift influences how AWS contributes to open source, emphasizing community-wide solutions rather than AWS-specific products.
Butler highlighted two AWS EKS projects donated to Kubernetes SIGs: KRO and Karpenter. KRO addresses the proliferation of custom controllers that emerged once CRDs made everything representable as Kubernetes resour...
Keeping GPUs Ticking Like Clockwork
Clockwork began with a narrow goal, keeping clocks synchronized across servers, but soon realized that its precise latency measurements could reveal deeper data center networking issues. This insight led the company to build a hardware-agnostic monitoring and remediation platform capable of automatically routing around faults. Today, Clockwork's technology is especially valuable for large GPU clusters used in training LLMs, where communication efficiency and reliability are critical. CEO Suresh Vasudevan explains that AI workloads are among the most demanding distributed applications ever, and Clockwork provides building blocks that improve visibility, performance, and fault tolerance. Its flagship feature, FleetIQ, can rerout...
Jupyter Deploy: the New Middle Ground between Laptops and Enterprise
At JupyterCon 2025, Jupyter Deploy was introduced as an open source command-line tool designed to make cloud-based Jupyter deployments quick and accessible for small teams, educators, and researchers who lack cloud engineering expertise. As described by AWS engineer Jonathan Guinegagne, these users often struggle in an "in-between" space: needing more computing power and collaboration features than a laptop offers, but without the resources for complex cloud setups.
Jupyter Deploy simplifies this by orchestrating an entire encrypted stack, using Docker, Terraform, OAuth2, and Let's Encrypt, with minimal setup, removing the need to manually manage 15-20 cloud components. While it offers an easy on...
From Physics to the Future: Brian Granger on Project Jupyter in the Age of AI
In an interview at JupyterCon, Brian Granger, co-creator of Project Jupyter and senior principal technologist at AWS, reflected on Jupyter's evolution and how AI is redefining open source sustainability. Originally inspired by physics' modular principles, Granger and co-founder Fernando Pérez designed Jupyter with flexible, extensible components like the notebook format and kernel message protocol. This architecture has endured as the ecosystem expanded from data science into AI and machine learning.
Now, AI is accelerating development itself: Granger described rewriting Jupyter Server in Go, complete with tests, in just 30 minutes using an AI coding agent, a task once co...
Jupyter AI v3: Could It Generate an "Ecosystem of AI Personas"?
Jupyter AI v3 marks a major step forward in integrating intelligent coding assistance directly into JupyterLab. Discussed by AWS engineers David Qiu and Piyush Jain at JupyterCon, the new release introduces AI personas: customizable, specialized assistants that users can configure to perform tasks such as coding help, debugging, or analysis. Unlike other AI tools, Jupyter AI allows multiple named agents, such as "Claude Code" or "OpenAI Codex," to coexist in one chat.
Developers can even build and share their own personas as local or pip-installable packages. This flexibility was enabled by splitting Jupyter AIâs previously large, complex codeb...