nbt observations: AI is changing the way we need to think about Software - The Rise of Software 3.0

September 2025

Authors:

Helmut Lodzik
Founder & CEO

Patrick Funcke
Founder & CTO

TL;DR - Too Long, Didn't Read

#1 Retrofitting existing software with AI is like strapping a jet engine to a horse-drawn carriage - the incremental value of adding AI support to existing software is very limited

#2 Using AI to create more software is like building more horse-drawn carriages faster, cheaper and in more colors - creating more software faster and more cheaply just gives you more mediocre solutions, but nothing new

#3 Re-thinking what we are trying to do with AI in mind is like looking at birds flying and imagining a supersonic jetliner - re-thinking the original challenge you are trying to solve in a world where AI exists allows you to create completely new and much better approaches

#4 The true disruption will no longer come from traditional standard software but from custom software. Software 3.0 will be built by AI and will continuously evolve, morph and improve directly through user interaction at run-time

Outlook: While the potential for a complete disruption of how we define "software" is obvious, the speed of change is hard to predict. Legacy solutions tend to live a lot longer than expected, while the speed of progress in AI is explosive.

Executive Summary

Less than 5% of AI’s potential is realized today: Despite tremendous advances in AI (like GPT-4 and other large language models), most businesses have only scratched the surface of what these technologies can do. A recent survey found that while nearly all companies are investing in AI, only about 1% consider their AI fully integrated and mature in operations[1]. This means 95–99% of AI’s economic value is still untapped, awaiting new ways of integration.

Current approach – “AI retrofits” – delivers only gimmicks: Many enterprises have reacted to the AI wave by bolting AI features onto legacy software, hoping to boost productivity. Think of adding a ChatGPT-based assistant into a 20-year-old CRM system. These additions can provide convenience (smarter search, auto-suggested text, etc.), but they rarely transform the core experience. It’s akin to attaching a jet engine to a horse-drawn carriage – a powerful tool applied to a fundamentally old design. Early results have been mixed: for example, users of Microsoft’s new AI Copilot in Office reported confusion and disappointing outputs, often giving up and going back to standalone ChatGPT[2]. The hard truth: simply appending AI to existing, static software yields diminishing returns.

AI demands a "greenfield" rethinking of software: To unlock AI's full value, we must redesign software from first principles with AI at the center, rather than as an afterthought. This means asking, "If we started from scratch, how would AI solve this problem?" instead of "Where can we plug AI into our app?" Companies need to return to the drawing board (channeling a design-thinking mindset) and imagine entirely new solutions. For example, a traditional used-car marketplace app organizes information by make, model, year, mileage, price, etc. An AI-first reimagination might instead offer a chat-based interface where a buyer simply describes their ideal car – "I need a reliable family SUV under $20k that feels fun to drive" – and the AI handles searching, filtering, and even negotiating, all through natural conversation. These kinds of AI-native, goal-driven designs break from the rigid forms and workflows of the past.

From custom vs. standard to “dynamic” software: Historically, enterprises faced a trade-off between bespoke software (tailored exactly to your needs but expensive and slow to build) and standardized products (one-size-fits-all solutions like SAP that are affordable but force you to adapt your processes)[3]. Cloud and SaaS made software more accessible, but didn’t solve the rigidity of pre-built features. AI now offers a way out: “Software 3.0” – applications that are highly customized and continuously adapting, without the traditional cost and effort. In this new paradigm, software isn’t a fixed product anymore; it becomes a living solution that evolves for each business and user. We get the best of both worlds: the exact fit of a custom app with the scalability of a cloud service.

Dynamic user interfaces (UIs) powered by AI: One immediate impact of AI-native design is the end of bloated, one-size-fits-all UIs. Today’s enterprise software is infamous for feature overload – massive menus and forms built to cover every conceivable use case, most of which a given user doesn’t need[4][5]. In contrast, an AI-driven system can generate a personalized interface in real time for each user, showing only what’s relevant for their role, context, and intent[6][7]. Complexity is hidden until needed. Novice users get guided, simplified flows, while power users can call up advanced functions on demand. In fact, design experts predict “generative UIs” soon will let every end user interact with a tailor-made interface that fits their needs and moment[8][6]. Software adapts to the user, instead of forcing the user to adapt to the software.

Software that morphs and improves continuously: The ultimate vision for Software 3.0 is systems that don’t remain static after deployment. Instead, AI makes software dynamic, able to “learn” from usage and update itself in production. In a fully AI-native environment, you might define your business processes and goals in natural language, and the AI will generate and refine the software to execute them. Over time, as conditions or user behaviors change, the application morphs – optimizing workflows, adding or tweaking features, and even interfacing with other systems autonomously. Early hints of this can be seen in AI DevOps tools: for instance, AI agents can now observe issues in user onboarding and automatically suggest (or implement) improvements in the app’s design. One forward-looking report describes a future where “AI agents continuously design, test, deploy, and adapt software based on real-time business goals and customer behavior,” turning software into a self-improving organism[9][10].

Challenges and the road ahead: Moving to this AI-first, dynamic paradigm won’t happen overnight. Enterprises still have decades of legacy systems that can’t just vanish. Practical concerns around data security, compliance, reliability, and change management are significant. Highly regulated industries will demand proof that AI-driven systems can be controlled and audited. Culturally, organizations must overcome understandable skepticism – both leaders and staff need to trust AI enough to let it take the driver’s seat in software generation. Moreover, developers and IT teams will need to acquire new skills (prompt engineering, AI orchestration, oversight of AI outputs) rather than traditional coding alone. Nevertheless, the direction is clear. AI will fundamentally reshape software development and usage. Businesses that begin evolving toward these AI-native approaches now will have a massive advantage, while those that stick to static software (or superficial AI add-ons) risk being left behind as the gap widens.

Introduction: AI’s Untapped Potential in Software

Artificial Intelligence has made incredible strides in the last few years. Large language models can now write code, draft documents, converse fluently, and answer complex questions. Image generators can create artwork or user interface designs from scratch. These breakthroughs suggest we are on the cusp of a transformative era for technology. Yet the reality inside most organizations feels very different. The vast majority of companies are still using software and business processes that look and operate much as they did a decade ago. The infusion of AI into day-to-day tools and workflows has been modest and uneven. In fact, by some estimates we have realized well under 5% of AI’s ultimate economic value so far[11]. In early 2025, McKinsey surveyed thousands of firms and found that although almost all are experimenting with AI, only 1% of leaders feel their company has fully adopted AI in a “mature” way[1]. In other words, despite all the hype, we are only at the very beginning of translating AI’s raw capabilities into broad productivity gains.

Why the slow progress? A core issue is that it takes time to integrate new technology into established workflows, software systems, and mindsets. Developing a powerful AI model in the lab is one thing; redesigning an entire business process around that AI is far more involved. Today’s enterprise software and IT architectures are products of decades of incremental evolution. They weren’t built with AI in mind, so plugging in AI often means forcing a square peg into a round hole. Companies can’t rip and replace everything overnight just because a new algorithm came along. The result: organizations tinker at the edges, adding a dash of AI here and there, without rethinking the underlying systems. The real impact of AI gets constrained by legacy platforms and cautious corporate adoption.

This mismatch is evident in the AI value chain. Consider that enormous investments have poured into AI chips and infrastructure (for example, NVIDIA’s GPUs and cloud platforms like AWS and Azure) and into AI model development (from OpenAI’s GPT series to Google’s and Meta’s models). These are the first three links of the chain – hardware, data centers, and models. By contrast, the fourth link – actual end-user applications – lags behind[12]. We’re essentially driving the world’s fastest race car (state-of-the-art AI) on old cobblestone roads (outdated software constructs). For AI to deliver its promised value, those roads need paving anew.

Renowned chip architect Jim Keller captured the magnitude of change ahead with a provocative prediction in late 2023: “Ten years from now, no current software will be in use anymore.”[13] In his view, the traditional software we rely on – with its static code, fixed interfaces, and predetermined workflows – could be made obsolete by AI-driven approaches within a decade. That may be an extreme take (and certainly debatable), but it underscores a growing realization: AI isn’t just another add-on feature – it’s a once-in-a-generation technology shift. To fully harness it, we can’t simply retrofit AI onto the software of yesterday. We need to fundamentally reinvent how software is envisioned, built, and operated.

The Pitfall of Retrofits:
New Tech, Old Thinking

Many companies’ first instinct has been to “add AI” to their existing products. It’s easy to see why – this approach is fast and doesn’t require re-engineering everything. We’ve seen a flurry of announcements along these lines: CRM platforms adding AI-written sales emails, graphic design tools adding AI image generation, office suites adding AI assistants, and so on. This strategy can yield quick wins. For example, an AI writing suggestion in your email client might save you a few minutes, or an AI analytics feature might surface a pattern you would have missed.

However, these retrofits often amount to incremental improvements or gimmicks, rather than transformative change. They’re like adding fancy new decorations to an old house whose basic floor plan remains unchanged. The core software is still doing what it always did – now it just has a couple of AI-powered conveniences on top. Clippy got replaced by an AI bot, but the application’s fundamental workflow is the same.

In some cases, bolting on AI can even highlight the limitations of the underlying system. A striking example emerged with Microsoft’s Office 365 Copilot, an AI assistant meant to help generate documents, emails, and analyses inside Office apps. Early users discovered that Copilot often struggled to execute tasks cleanly, because it was constrained by the old software’s structure. One IT expert who tested Copilot reported being “a little disappointed” – frequently Copilot would need excessive prodding or simply fail to deliver a useful result, leading the user to switch back to ChatGPT outside the Microsoft ecosystem[2]. On Microsoft’s own forums, users vented that Copilot felt “useless… [it] falls flat” when asked to actually do things like modify documents, often providing only vague suggestions rather than action[14][15]. The culprit is not that the AI model is too weak – it’s that the legacy software wasn’t designed for an AI to take actions on a user’s behalf. The AI ends up constrained by the same old menus and rules meant for human operators.

This pattern – powerful AI, trapped in outdated software – is the “retrofit problem.” As the saying goes, it’s like “using a quantum computer to solve your Sudoku puzzle.” In other words, it’s applying a revolutionary technology to do something trivial in the scheme of its capabilities. The underlying architecture (the Sudoku puzzle, or the legacy app) dictates what value the AI can provide, and if that architecture doesn’t fully utilize AI’s strengths, most of the AI’s potential goes to waste.

Why do retrofits fall short? In legacy enterprise software, the design assumptions are fundamentally static. These systems assume that all possible user needs can be predefined by developers, encoded as features and options. They assume that data must fit neatly into predetermined database schemas and forms. They assume user interfaces must be fixed, with navigation menus and screens that everyone must learn. AI, by contrast, is dynamic – it can generate new content or logic on the fly, and it excels at understanding unstructured goals (like a user's request in plain English). When we cram AI into a static mold, three things happen:

  1. Using AI for coding traditional software misses the point. The first AI wave to hit the software development community was predictably short-sighted. Laying off junior programmers and replacing them with AI certainly reduces cost and may increase speed (arguably), but it does nothing to rethink your application in a world where AI exists. It is like using a quantum computer to solve a Sudoku puzzle.

  2. We underutilize AI’s adaptability: The AI might be capable of handling a task in a flexible, conversational way, but the software only lets it fill in one text box at a time because that’s how the UI was built. It’s like asking a genius polyglot to communicate by picking phrases from a phrasebook – you don’t get the full fluidity of their talent.

  3. Legacy bloat and complexity remain: A bit of AI sugar on top doesn't change the fact that many enterprise applications are overly complex, with screens full of fields and buttons that most users don't need. In fact, adding AI can even make things more confusing if not done carefully ("Which of these 5 search boxes is the AI one?"). Users still face the "conundrum of min/max design" – software that tries to cater to the maximum needs of the maximum number of users ends up satisfying few. It's common to see enterprise tools where 5% of the features cover 95% of daily use, and the rest just add noise. Without rethinking, AI features become just another line item in an already crowded interface.

In short, slapping AI onto an old product is a bit like strapping a jet engine to a horse-drawn carriage. It might go a little faster, but it's still limited by the horses – it won't fly, it won't take corners any faster, and it lacks modern safety features. To truly benefit from the jet engine, you'd need to design a plane. Likewise, to reap the true benefits of AI, we'll need to design new kinds of software around AI's capabilities.

Rethinking from Scratch: If we started from scratch in a world where AI exists, how would it work?

It’s time to step back and ask a fundamental question: If we weren’t constrained by how software works today, how could we solve a given problem with AI? This is a classic first-principles or “greenfield” approach – imagine you had no legacy systems at all and could build a solution entirely optimized around AI. What would it look like?

Let’s take a concrete scenario to illustrate the difference. Consider an online marketplace for used cars:

  • Traditional design: In a pre-AI (or AI-retrofitted) world, you’d likely build a database with tables for car listings (make, model, year, mileage, price, seller info, etc.). You’d create a web interface with forms for sellers to input details and forms or filters for buyers to search by those criteria. The experience is defined by that structured data model; users have to think in terms of dropdowns and checkboxes that match the database fields. If a buyer wants to find a car, they have to manually apply filters like “SUV, 2015-2018, price under $20k, within 50 miles”. The software is essentially an information catalog.

  • AI-native “greenfield” design: Now imagine scrapping the notion of rigid forms. Instead, you start with the user’s outcome in mind – “help me find the right car.” The interface could be as simple as a chat box or voice assistant. A buyer might literally describe what they want in natural language: “I need a reliable, family-friendly SUV, not older than about 10 years, budget around $18k. I care more about low mileage than brand. Also I prefer something that doesn’t feel too sluggish to drive.” From this unstructured input, an AI system could parse the essential preferences and search the inventory intelligently. It might ask a few clarifying questions (“Do you have a preference for seating capacity or cargo space?”) just like a knowledgeable sales assistant would. Then it can present a personalized shortlist of options with explanations: “Here are two cars that match your needs well – a Honda CR-V and a Toyota Highlander. The CR-V is under budget and has slightly better mileage; the Highlander is a bit roomier with a smoother ride but 20% higher mileage.” The buyer can then converse further: “I think the Highlander might be too big. What else is like the CR-V?” – and the AI adjusts the recommendations accordingly, perhaps suggesting a Toyota RAV4. In the background, this AI-driven app might also negotiate with sellers (via another AI agent) or handle financing queries, all through the conversational interface.

This example highlights a few key differences of an AI-first approach:

  • No fixed schema that the user needs to understand. The user didn't have to manually input filters; they expressed their goal and preferences freely. The AI mapped that to the data. This is far more natural for users, and it captures nuance that a few filters might miss (e.g. "doesn't feel sluggish" implies a performance preference that isn't simply a single number).

  • Dynamic interaction, not static forms. The process is interactive and adaptive. The software learns more about what the user wants through conversation, just as a human salesperson would. Traditional software typically doesn't learn – it just executes predefined queries or transactions.

  • Outcome-oriented results. The interface focused on delivering a useful outcome (a good car match with reasoning), not just spitting out a raw list of 100 cars sorted by price. The AI can incorporate additional context (market pricing, maybe even the user's past behavior or emotional tone) to present the most relevant choices.
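To make the "no fixed schema" point concrete, here is a minimal Python sketch of that flow. The `parse_preferences` function stands in for a language-model call, and the canned JSON it returns, the listing fields, and the tiny inventory are all invented for illustration, not any vendor's API; everything downstream is ordinary deterministic filtering over the same listing data a traditional marketplace already holds.

```python
import json
from dataclasses import dataclass


@dataclass
class Listing:
    make: str
    model: str
    year: int
    mileage: int
    price: int
    body: str          # e.g. "SUV"
    horsepower: int


INVENTORY = [
    Listing("Honda", "CR-V", 2019, 42_000, 17_500, "SUV", 190),
    Listing("Toyota", "Highlander", 2017, 71_000, 19_900, "SUV", 295),
    Listing("Toyota", "RAV4", 2018, 55_000, 18_200, "SUV", 176),
]


def parse_preferences(utterance: str) -> dict:
    """Placeholder for an LLM call that turns free text into structured preferences.

    A real system would send `utterance` to a language model with a prompt that
    constrains the output to this JSON shape; the canned response below simply
    mirrors the buyer request from the example above ("around $18k" read loosely).
    """
    return json.loads(
        '{"body": "SUV", "max_price": 18500, "max_age_years": 10,'
        ' "prioritize": "low_mileage", "min_horsepower": 170}'
    )


def search(prefs: dict, current_year: int = 2025) -> list[Listing]:
    # Deterministic filtering: the AI maps intent to criteria, the data layer stays exact.
    hits = [
        l for l in INVENTORY
        if l.body == prefs["body"]
        and l.price <= prefs["max_price"]
        and current_year - l.year <= prefs["max_age_years"]
        and l.horsepower >= prefs["min_horsepower"]   # "doesn't feel sluggish"
    ]
    key = (lambda l: l.mileage) if prefs["prioritize"] == "low_mileage" else (lambda l: l.price)
    return sorted(hits, key=key)


if __name__ == "__main__":
    prefs = parse_preferences(
        "I need a reliable family SUV, not older than about 10 years, budget around $18k. "
        "Low mileage matters more than brand, and it shouldn't feel sluggish to drive."
    )
    for car in search(prefs):
        print(f"{car.year} {car.make} {car.model}: ${car.price:,}, {car.mileage:,} miles")
```

The division of labour is the point: the model only translates intent into criteria, while the search itself stays exact, testable, and auditable.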

This kind of design is only now becoming possible thanks to advanced AI. A few years ago, a computer couldn’t easily parse a sentence like “I want the car to make me feel excited to drive” and translate that into filtering for horsepower or handling characteristics. Now, with LLMs and large-scale recommendation models, it’s increasingly feasible.

Crucially, rethinking a solution with AI may lead us to completely different mechanisms than we’d get by just adding AI onto an existing product. It’s the difference between retrofit and reinvention. The retrofit approach to our used car marketplace might have been: “Let’s add a chatbot in case users want to ask questions, and maybe use AI to auto-fill some car details from a photo.” Nice features, but still the same workflow. The reinvented approach tossed out the notion of search filters and made the whole interface a conversation. That’s a paradigm shift.

This "clean slate" thinking can be applied in almost any domain:

  • In HR software: Instead of filling out forms to configure a new-hire onboarding workflow, a manager could simply tell an AI, "Set up everything for a new software engineer joining the Munich office next Monday," and the AI system would generate the accounts, send welcome emails, schedule trainings, and so on, asking questions only if needed.

  • In analytics and BI: Instead of manually dragging and dropping fields to create a chart, an analyst might ask, "Why did our North America sales dip in Q3 compared to Q2, and was it specific to any product line?" and get a coherent analysis narrative with charts included.

  • In enterprise resource planning (ERP): Instead of navigating dozens of modules (procurement, inventory, finance) and entering transactions, a user could state a goal: "Order 500 units of part X from our preferred supplier, but make sure it doesn't exceed last quarter's budget average," and the AI-driven ERP handles it, ensuring compliance and updating all records accordingly.

These scenarios may sound futuristic, but they highlight the direction of travel. The more we can express intent in human terms and have AI execute the mechanics, the more we free humans to focus on what really matters (defining goals, making judgment calls, building relationships, etc., rather than clicking buttons and copying data between fields).
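As a minimal sketch of that "intent in, mechanics out" pattern, the following snippet picks up the onboarding example above. The action catalog (`create_account`, `send_welcome_email`, `schedule_training`) and the canned plan returned by `plan_from_intent` are purely hypothetical stand-ins for real systems and a real model call; the essential constraint is that the AI may only select from actions the organization has explicitly exposed.

```python
import json

# Hypothetical action catalog: the only operations the AI is allowed to invoke.
def create_account(name: str, role: str) -> None:
    print(f"[IT] account created for {name} ({role})")

def send_welcome_email(name: str, start_date: str) -> None:
    print(f"[HR] welcome email sent to {name}, starting {start_date}")

def schedule_training(name: str, course: str) -> None:
    print(f"[L&D] {name} booked into '{course}'")

ACTIONS = {
    "create_account": create_account,
    "send_welcome_email": send_welcome_email,
    "schedule_training": schedule_training,
}

def plan_from_intent(goal: str) -> list[dict]:
    """Placeholder for a model call that turns a stated goal into a plan of catalog actions."""
    return json.loads("""[
      {"action": "create_account",     "args": {"name": "New Engineer", "role": "software engineer"}},
      {"action": "send_welcome_email", "args": {"name": "New Engineer", "start_date": "next Monday"}},
      {"action": "schedule_training",  "args": {"name": "New Engineer", "course": "Security basics"}}
    ]""")

def execute(plan: list[dict]) -> None:
    for step in plan:
        handler = ACTIONS.get(step["action"])
        if handler is None:                       # ignore anything outside the catalog
            print(f"skipped unknown action: {step['action']}")
            continue
        handler(**step["args"])

if __name__ == "__main__":
    execute(plan_from_intent(
        "Set up everything for a new software engineer joining the Munich office next Monday."
    ))
```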

In summary, greenfield AI-native design isn’t about building software the way we used to, only faster – it’s about questioning why the software exists and how it can achieve the user’s original goal in a radically simpler way. The payoff to this approach can be enormous: simpler user experiences, more intelligent automation under the hood, and systems that can continuously improve themselves by learning from each interaction.

From Software 1.0 to 3.0:
A Brief History of Evolution

To appreciate how significant this AI-driven shift is, it’s helpful to put it in the context of software’s evolution over the decades:

  • Software 1.0 – Custom Code for Every Need: In the early days of computing (think 1960s–1970s), software was almost entirely custom-crafted. If a business or government agency needed software, they often had to hire programmers (or vendors) to build it from scratch, tailored to their specific requirements. This was the era of big bespoke projects – expensive and time-consuming, but highly tuned to what the user needed. The advantage was a precise fit; the drawback was cost, time, and difficulty in maintenance. Only large entities (or those on cutting edge missions like NASA’s Moon program) could afford substantial software initiatives.

  • Software 2.0 – The Rise of Standard Software: Over time, it became clear that many organizations were reinventing the wheel. Smart entrepreneurs realized they could develop standardized software packages that address common needs across companies – for example, general ledger accounting, payroll, or inventory management – and sell the same solution to many customers. In the 1980s and 1990s, firms like SAP pioneered this model. (SAP’s very name underlines this concept: originally “Systeme, Anwendungen und Produkte” in German, meaning “Systems, Applications, and Products in Data Processing” – essentially a promise of ready-made business applications[3].) Instead of coding your own system to handle HR or finance, you could buy SAP or Oracle Applications and adapt your processes to the software’s best-practice templates. This era introduced the trade-off we mentioned earlier: you gain efficiency and reliability by using pre-built standard software, but you sacrifice some flexibility – your business processes now partly conform to the software, instead of the software conforming to you. The benefit was huge scale and cost reduction, spawning an entire industry of enterprise software.

  • Software 2.5 – Cloud and SaaS (Software as a Service): In the 2000s and 2010s, another jump occurred with the internet and cloud computing. The core idea of standard software remained, but deployment and accessibility changed. Instead of installing software on-premises with heavy customization, companies could use web-based applications hosted by providers (Salesforce, Workday, etc.). This SaaS model (often multi-tenant, meaning many customers share the same application instance in the cloud) further reduced cost and maintenance burdens. It also accelerated update cycles and made software more accessible to smaller organizations. However, even in SaaS, the fundamental architecture is still fairly static – one application serving many users with a common feature set. Clients can configure settings, but the feature scope is broad to meet many needs, and individual customization is limited. In other words, SaaS was Software 2.x – an improvement in delivery, but not a wholesale reimagining of what software is.

  • Software 2.6 – More Software: AI may allow us to create an almost infinite amount of software at close to zero cost. While this lowers the barrier to entry for new participants in the software space on the technical development side and may put competitive pressure on the pricing of conventional software, it is not a game changer. Anyone with experience in the enterprise software business knows that the purely technical part of software development – the coding – while challenging, is just the starting point. Product management, customer service, enterprise sales, customer relations, and certifications are all significant hurdles that any new entrant will have to overcome, limiting the impact of this development. The impact will mostly be felt in niches, where software that was previously uneconomical to develop may suddenly become available.

  • Software 3.0 – GAME CHANGER – Dynamic, AI-Generated, morphing Software: Now we stand at the brink of the next transformation. Software 3.0, as we’ll call it, combines the best of both previous eras while adding something unprecedented. It suggests a world where each organization (or even each user) can have software tailored to their unique needs without the traditional cost and delay of custom development. How? Through AI that can generate, configure, and adapt software automatically. In Software 3.0, the application you use today might not be exactly the same as what you use a month from now – it will have evolved, learning from your usage or adjusting to new requirements. It’s “standard” in the sense that everyone might be using AI-driven systems built on a common set of AI capabilities, but it’s “custom” in that what the AI does for you could be very different from what it does for another company. The software essentially writes itself (or significant parts of itself), guided by high-level specifications and real-time feedback.

Consider how radical that is. We’re talking about a shift from software as a fixed product (even if cloud-hosted) to software as a fluid, responsive service that co-creates itself with the user. In a way, we’re circling back to the idea of tailored solutions (like Software 1.0’s custom projects) but achieving it with the efficiency and scalability of a product (the hallmark of Software 2.0). This “have your cake and eat it too” scenario is what excites visionaries. It could spell the end of the frustrating compromises businesses make today: using a giant off-the-shelf system that does 100 things decently but not the 5 things you truly care about exceptionally well.

Why Now? The Ingredients of Software 3.0

Several technological advancements are converging to make Software 3.0 possible:

  • Natural Language Understanding: AI models can now parse complex human inputs and intents. This is key for turning user requirements (expressed in everyday language) directly into software behaviors. In earlier eras, a business user had to explain their needs to an analyst, who wrote specs for programmers, who wrote code – a long translation chain. Now, a savvy AI can skip the intermediaries and translate user intent straight into functioning logic or queries.

  • Code Generation & No-Code Platforms: We already see AI coding assistants (like GitHub Copilot) and no-code/low-code platforms that let people create apps with minimal code. These are early steps (sometimes dubbed Software 2.5) towards faster development. They hint at how AI could automate many programming tasks. However, in Software 3.0 the vision is not just faster coding, but autonomous coding – the AI system writes and rewrites itself as needed. A human might set high-level rules ("We need a process to approve expense reports under $1000 automatically") and the AI takes care of implementing that and later adjusting it if policy changes.

  • AI Orchestration & Agents: New paradigms like AI agents (autonomous bots that can perform multi-step tasks) show that AIs can do more than answer questions – they can take actions in software. For instance, an AI agent could read through a database, generate a report, send emails, and update records, all by chaining capabilities. Or an AI agent could monitor user interactions and propose a UX change on the fly. This begins to blur the line between "the software" and "the AI"; the AI is part of the software itself, deciding what actions to take. When multiple such agents work in concert, we get a system that's very dynamic and responsive[16][17].

  • Flexible Data Models (AI as Memory): Traditionally, a huge part of building software is designing the database – deciding what tables, fields, and relationships will structure the information. This is a rigid process; if your model is wrong, your app struggles. But AI, especially large language models with contextual memory, offers a tantalizing alternative: the AI can ingest and recall unstructured or semi-structured data as needed. Rather than painstakingly designing a schema upfront, you might dump a lot of raw data (documents, logs, emails) into storage and rely on the AI to fetch and interpret it on the fly. Some researchers even talk about replacing or augmenting conventional databases with AI "knowledge bases" that understand data in context[18]. We're not fully there yet (consistency, accuracy, and real-time updates remain challenges), but the implication is that future software might not require as much rigid data modeling. The AI can be the glue that interprets data, so developers can focus on what outcome they want rather than how to structure every byte. (A minimal sketch of this idea follows below.)
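The "Flexible Data Models (AI as Memory)" ingredient is easiest to see in miniature, as sketched below: raw snippets go into a plain list with no schema at all, and a relevance function pulls out whatever fits a question. The snippets are invented, and the crude word-overlap score is only a stand-in for the embedding model and vector index a production system would use.

```python
# Toy "AI as memory": no tables, no schema, just raw snippets plus a relevance score.
# A real system would replace score() with semantic embeddings and a vector index.

DOCUMENTS = [
    "Email: the Munich office moves to the new building on October 1st.",
    "Log: nightly export from System A to System B failed twice last week.",
    "Policy note: expense reports under 1000 EUR are approved automatically.",
    "Email: supplier Acme confirmed delivery of part X for calendar week 41.",
]

def score(question: str, document: str) -> int:
    """Crude relevance: count shared words (placeholder for semantic similarity)."""
    q_words = {w.strip(".,:?").lower() for w in question.split()}
    d_words = {w.strip(".,:?").lower() for w in document.split()}
    return len(q_words & d_words)

def recall(question: str, top_k: int = 2) -> list[str]:
    scored = sorted(((score(question, d), d) for d in DOCUMENTS), reverse=True)
    return [d for s, d in scored if s > 0][:top_k]

if __name__ == "__main__":
    for snippet in recall("When does the Munich office move?"):
        print("-", snippet)
    # In a full pipeline these snippets would be handed to the language model as
    # context, so its answer is grounded in the organization's own data.
```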

All these pieces suggest that software creation is moving to a higher level of abstraction. Instead of writing algorithms, future developers (or end-users) will describe goals, rules, and constraints, and the AI will assemble the pieces – UI, data, logic – in a coherent application. In fact, designers are already talking about “outcome-oriented design”, where one focuses on the user’s goal and leaves many design details for AI to figure out[19]. This flips the traditional design paradigm on its head. We used to meticulously design every screen and path (“interface-oriented design”). In an AI-driven world, you define the desired outcome, and the AI dynamically generates interfaces or processes to achieve it.

Dynamic Interfaces: Goodbye, One-Size-Fits-All

One of the clearest manifestations of Software 3.0 is how it will change user interfaces. As mentioned, enterprise software today often suffers from bloat and complexity. Why? Because a single application has to accommodate every role, every use case, every client’s special requests. The result is a monstrous UI with dozens of modules and hundreds of options, most of which are irrelevant to any given user at a given time[4][5]. Software vendors try to mitigate this by adding roles and permissions to hide parts of the UI, or by offering limited customization, but underneath it’s the same app for everyone. This is the “maximally capable but minimally usable” conundrum.

AI can finally break this logjam. With an AI front-end, we can have software that adapts to each user in real time:

  • The AI can infer, "What is this user trying to do right now, and what's their context?" and then present exactly the information and controls needed for that task – nothing more. It's as if the software shapeshifts to become a bespoke tool for that moment.

  • If the user's intent changes, the interface can morph accordingly. Perhaps in the morning you use a project management app to quickly update a task status (so it shows you a simple checklist interface), and later you need to do deep timeline planning (so it brings up a Gantt chart view). Instead of burying both features in menus, the AI could surface the right one at the right time based on your cues or even your calendar.

  • AI can also adjust the level of complexity shown based on the user's expertise. A new employee might get a streamlined interface with guided step-by-step wizards, while an expert gets more direct access to advanced functions. And as the new employee gains experience, the interface can gradually introduce more options. This is essentially a UI that learns about the user – something static software never did.

The concept of Generative UI (GenUI) encapsulates this idea. As defined by UX experts Kate Moran and Sarah Gibbons, “a generative UI is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context.”[6] Rather than designing one interface to rule them all, designers of the future will define a design space or set of components and constraints, and the AI will assemble an interface on the fly from those building blocks. This moves design from crafting fixed screens to crafting rules for screens. It’s a shift to what they call outcome-oriented design, where the focus is on enabling the user to achieve their goal, letting the AI figure out the intermediate interaction steps[19].

We're already seeing early steps toward dynamic UIs:

  • Websites that personalize content based on whether you're a new visitor or a returning customer.

  • Mobile apps that rearrange their home screen based on your usage patterns (for example, showing frequently used actions prominently).

  • Dashboard software that recommends which metrics or charts you likely care about most, instead of making you configure it all manually.

Generative UI takes this further by potentially generating entire new interface elements or flows on demand. Imagine telling your software, “I need to capture an approval from our legal team for this contract” and instantly the system generates a little approval form and route, even if none existed before – because the AI understands the pattern and creates the UI for you in that moment. This is not far-fetched; some AI tools can already generate UI code from text descriptions. The leap in Software 3.0 is that this generation happens in real time within the application and is tailored to one user’s needs.
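Here is a compressed sketch of how that on-demand generation could be wired, under invented assumptions: the component catalog and constraints belong to the design team, the spec would come from a model that has seen the user's request and context (a canned response stands in for that call), and none of the names refer to any particular framework.

```python
import json

# The design team defines the building blocks and the constraints...
COMPONENT_CATALOG = {
    "text_field":      {"required": ["label"]},
    "file_upload":     {"required": ["label"]},
    "approver_picker": {"required": ["label", "team"]},
    "submit_button":   {"required": ["label"]},
}
MAX_COMPONENTS = 6   # constraint: generated screens stay small and focused

def generate_ui_spec(user_request: str) -> list[dict]:
    """Placeholder for the model call that assembles a screen from the catalog."""
    return json.loads("""[
      {"type": "text_field",      "label": "Contract name"},
      {"type": "file_upload",     "label": "Contract document"},
      {"type": "approver_picker", "label": "Legal approver", "team": "Legal"},
      {"type": "submit_button",   "label": "Request approval"}
    ]""")

def validate(spec: list[dict]) -> list[dict]:
    """...and nothing outside the catalog or missing required fields ever renders."""
    if len(spec) > MAX_COMPONENTS:
        raise ValueError("generated screen is too large")
    for component in spec:
        rules = COMPONENT_CATALOG.get(component["type"])
        if rules is None:
            raise ValueError(f"unknown component: {component['type']}")
        for field in rules["required"]:
            if field not in component:
                raise ValueError(f"{component['type']} is missing '{field}'")
    return spec

def render(spec: list[dict]) -> None:
    for component in spec:
        print(f"<{component['type']}> {component['label']}")

if __name__ == "__main__":
    request = "I need to capture an approval from our legal team for this contract"
    render(validate(generate_ui_spec(request)))
```

The designer's job shifts from drawing the screen to curating the catalog and the constraints – the "rules for screens" described above.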

The benefits of AI-driven dynamic UIs are huge:

  • Greatly simplified experiences: Users no longer wade through irrelevant buttons and screens. Every interface is "the right interface" for the task at hand[8].

  • Reduced training and onboarding: If the software adapts to the user, you don't need to train users on every nook and cranny. The app itself can guide them ("Next, you might want to do X. Click here to proceed.").

  • Higher productivity and satisfaction: People can accomplish tasks faster when the tool is streamlined for them. It's the difference between a general-purpose toolbox and a custom jig or fixture made for the job – the latter is much faster when available.

  • Minimized error and confusion: By hiding functionality that a user shouldn't access or isn't ready for, dynamic UIs reduce the chance of mistakes. They can also enforce context-specific best practices (the AI can nudge: "You've filled in the budget field; users in sales typically also attach a justification document at this stage").

One might wonder, is this really practical? It sounds complex to implement. The truth is, it is challenging, but not impossible. The heavy lifting is done by the AI model which can generalize design patterns. Projects in big tech firms and startups are already exploring how an LLM can output interface definitions (like generating a form or a menu) based on the user’s dialogue. There are research prototypes of LLM-driven GUIs that change as you type your intent[20][21]. And the industry is clearly thinking in this direction – for example, Nielsen Norman Group predicts that although the timeline is uncertain, we can expect highly personalized, generative interfaces for each individual user in the future[8].

It’s worth noting that dynamic UIs also entail a new approach to software development. Developers and designers will need to provide the lego pieces (UI components, access to functions via APIs, etc.) and define rules/constraints for assembly. The AI then becomes the assembler based on context. We move from designing deterministic sequences to designing conditional, branching possibilities that the AI will choose from. This is a significant shift in mindset, but arguably it aligns better with how users actually think (“I need to get X done” as opposed to “I have to follow steps A, B, C in this software”). It’s software that meets the user where they are.

Morphing Applications: Software That Evolves

Perhaps the most groundbreaking aspect of an AI-first paradigm is the notion that software can continuously morph and improve itself at runtime. This goes beyond the UI – it’s about the entire application behavior and features adapting over time.

Traditionally, once a piece of software is deployed, it remains mostly static until the next update or version release, which could be weeks, months, or years later. Updates are written by developers, tested, and then pushed out. In between those releases, the software doesn’t fundamentally change by itself. If users have feedback or if the business environment shifts, there’s a lag until the software catches up (if it ever does).

Now imagine a different world: the software you use is never “final” – it’s always a work in progress, adjusting itself in response to how you use it and what it learns. In a sense, the application is like a living organism, responding to stimuli (user behavior, new goals, external changes).

Here's what that could look like:

  • Continuous Improvement from User Feedback: Today, if users find a workflow cumbersome, they might file a support ticket or complain in a meeting, and perhaps months later the product team redesigns it. In a dynamic AI-driven system, the software could notice patterns like "users are taking a long time to complete Step 3 of this process" or "many users are manually exporting data from System A to System B daily." The AI system, equipped with process mining or monitoring capabilities, could propose optimizations. It might quietly adjust the workflow – e.g., eliminating an unnecessary confirmation screen if it sees everyone always clicks "Yes," or automating that daily export. It could even run an A/B test by trying a slightly altered process for a subset of users and, if metrics improve, roll it out to everyone – all autonomously. In fact, a fully realized Software 3.0 organization might experience something like an AI agent noticing a 5% drop in some KPI, diagnosing a UX issue, generating a fix, testing it overnight, and deploying it the next morning[22][23]. Human teams would oversee this at a high level, but they wouldn't be manually coding each change. (A small sketch of this observe-propose-trial loop follows after this list.)

  • Adaptive Integration of New Functions: Suppose your company decides to start accepting payments in a new currency, to comply with a new regulation, or to integrate with a new CRM. Instead of launching a major IT project, you could inform your AI-driven system of the new requirement (maybe in plain language or through a high-level config), and the system itself figures out how to implement it. Perhaps it writes a new microservice, calls an external API, or creates a new data field, all following guidelines you've set. Essentially, the software adds features on the fly. We see glimpses of this with modular systems and plugins today, but those still require developers to create the plugin. In the future, the AI could generate the plugin code by itself, under supervision.

  • Personalization to Each User or Team: Beyond general improvements, morphing software could create forks for individual preferences without the development team explicitly building multiple versions. For example, one salesperson might prefer a visual pipeline dashboard, another a text summary. The AI could provide each with their preferred view, effectively morphing the app per person. Or if one department uses a slightly different approval chain for purchases, the software adapts for them. This is more than just configuration settings – it's the AI learning usage patterns and adjusting the logic. It ensures everyone feels like the software was "made just for them," because in a sense, it was.

  • Autonomous Problem Solving: As AI agents become more sophisticated, your software could go beyond just following the processes it was given – it might start to solve new problems proactively. For instance, if the system notices a supplier is late on deliveries and this risks production schedules, an AI agent within your ERP might alert you and also suggest alternative suppliers (having already fetched some options). It might even auto-contact those suppliers for quotes. We move from software being a passive tool to it being an active agent working alongside humans.
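The observe-propose-trial loop from the first bullet above needs surprisingly little machinery; the sketch below is a minimal illustration with invented telemetry, step names, thresholds, and completion rates. The point is that the system only proposes or trials changes inside limits humans have set, rather than editing itself freely.

```python
from statistics import mean

# Invented telemetry: seconds users spent on each step of a workflow.
STEP_TIMINGS = {
    "enter_details":   [40, 55, 48, 60],
    "confirm_summary": [4, 3, 5, 4],       # everyone clicks straight through
    "upload_evidence": [120, 140, 95, 130],
}
RUBBER_STAMP_THRESHOLD_S = 6   # guardrail: only trivially short steps may be proposed for removal

def propose_changes(timings: dict[str, list[float]]) -> list[dict]:
    """Turn usage patterns into change proposals (here: drop rubber-stamp steps)."""
    return [
        {"change": f"remove step '{step}'", "reason": "users always click straight through"}
        for step, samples in timings.items()
        if mean(samples) < RUBBER_STAMP_THRESHOLD_S
    ]

def ab_test_passed(control_completion: float, variant_completion: float, min_lift: float = 0.02) -> bool:
    """Roll out only if the trial cohort completes the workflow measurably more often."""
    return (variant_completion - control_completion) >= min_lift

if __name__ == "__main__":
    for proposal in propose_changes(STEP_TIMINGS):
        # Invented trial result: 84% completion without the step vs. 79% with it.
        decision = "roll out" if ab_test_passed(0.79, 0.84) else "discard"
        print(f"{proposal['change']} ({proposal['reason']}): {decision}")
```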

All this paints a picture of software that is highly dynamic, almost fluid. It’s important to stress that this doesn’t mean chaos or lack of control. On the contrary, organizations would set the goals, policies, and guardrails within which the AI can adapt the system. Human oversight remains crucial, especially to ensure changes are correct and compliant. But the heavy lifting of actually writing code for changes, or deploying updates, or tailoring to individual needs would be offloaded to AI.

Is any company doing this yet? We are in early days, but the concept is being actively discussed. One tech consulting firm calls reaching this state “Phase 5: AI Native” where “AI agents continuously design, test, deploy, and adapt software based on real-time business goals and customer behavior… Software becomes a self-improving system that learns, adapts, and evolves in production.”[9]. They emphasize that at this stage, humans shift to guiding strategy and setting boundaries, while AI takes care of execution details[24][25]. In other words, people decide what needs to happen; AI figures out how to make it happen in the software.

This vision aligns with trends in DevOps automation, A/B testing, and continuous deployment, but supercharged by AI. Currently, companies like Amazon or Netflix do deploy code changes extremely frequently (even thousands of deployments per day) based on experiments and metrics – but that’s still driven by human engineers automating their pipelines. With Software 3.0, the autonomy goes further: the system itself identifies opportunities for improvement and can execute changes within predefined limits.

Of course, this raises governance questions. How do we ensure the AI doesn’t introduce a change that has unintended side effects? How do we audit what changes were made and why? These are solvable with proper oversight frameworks – logs of AI decisions, fallbacks to human review for major alterations, and strict permission scopes (for example, maybe the AI can tweak a user interface layout freely, but needs approval to alter a financial calculation formula). Think of it like hiring a very fast, meticulous junior developer who can propose changes constantly – you still have senior architects (human) who review and guide the overall direction.

If done right, the implications of morphing software are profound:

  • Businesses could respond to market or operational changes in near real-time, with their software updating itself to meet new demands.

  • The concept of "version upgrades" might fade away; you'd no longer wait for Version 10.5 of a product, because your instance is continually upgrading itself. Software might be delivered more as a constantly evolving service.

  • Over-engineering and over-provisioning could decrease. Today, software vendors pack in extra features "just in case" some customer needs them. In an adaptive model, you only add features when a need is detected, possibly only for the specific customer who needs it.

  • Technical debt (the accumulation of outdated code) might be reduced, because the AI can refactor or clean up as it goes, rather than humans avoiding touching old code. The software could potentially self-optimize, removing parts that aren't used.

It’s both exciting and a bit daunting – we’re essentially talking about software that writes itself and rewrites itself, guided by humans. This has been a dream in computer science for a long time (automatic programming). AI is bringing it into reach.

Challenges on the Path to AI-Native Software

While the vision of Software 3.0 is thrilling, it’s important to be clear-eyed about the challenges. Transforming how software is built and used will not be trivial. Here are key considerations:

  1. Legacy Infrastructure: Companies can’t just throw away their existing systems overnight. Critical business operations run on legacy software (including some truly ancient code in banks or governments). Rebuilding everything with AI is a multi-year (or decade-long) journey. During the transition, hybrid models will prevail – where AI might sit on top of legacy systems as a smarter interface or where new AI-driven microservices are gradually introduced alongside old modules. Managing this coexistence (ensuring the AI layer doesn’t break the underlying transactions, etc.) will require careful architecture. In some cases, organizations might start with pilot projects or sandboxes – for example, use an AI approach for a new product line or department – before scaling up.

  2. Culture and Trust: Software 3.0 changes the roles of people, and that can be unsettling. Will developers embrace a world where the AI writes a lot of the code? Many will – as it frees them to focus on higher-level problems – but some might fear job security or loss of craftsmanship. End-users, on the other hand, need to trust dynamic systems. If the UI changes every day or the software behaves differently this week than last, users might feel a loss of control or predictability. Change management and user education will be vital. It may be that at first the changes are subtle or optional (the system might ask, “I can simplify your screen, would you like to try it?”). Building confidence that the AI’s adaptations are beneficial and not disruptive is key to user adoption. People are often comfortable with incremental improvements but uneasy with sudden shifts – so the morphing might need to happen in a smooth, transparent way (“We moved the ‘Submit’ button here for your convenience” tooltip, etc.).

  3. Governance, Risk, and Compliance: Regulated industries (finance, healthcare, aerospace, etc.) have strict requirements on software behavior, auditing, and validation. An AI that changes software on the fly could raise red flags for auditors. Companies will need to implement robust governance:

     • Audit logs of AI decisions and changes.

     • The ability to roll back any AI-made change instantly if a problem is detected.

     • Validation pipelines where AI-proposed changes that affect critical calculations or compliance-related processes are vetted through simulations or tests before going live.

     • Clear rules about what the AI is allowed to do autonomously versus what requires human sign-off. For instance, an AI might be allowed to rearrange the UI or automate data transfers, but not allowed to alter how financial figures are calculated without approval.

     • Security is another aspect: a dynamic system could be a target for new kinds of attacks (e.g., tricking the AI into making an insecure change). Ensuring that the AI's training data and feedback loops are protected from tampering will be crucial.

     (A minimal sketch of such guardrails follows after this list.)

  4. Performance and Reliability: Today’s AI models (like large language models) are powerful but also heavy in computation. Running an LLM continuously to decide what every user’s interface should show might be expensive or slow if not optimized. There’s active work on making AI decisions more efficient (through caching, smaller specialized models, etc.). Over time, these technical hurdles will diminish as hardware improves and algorithms get optimized. However, in the near term, developers of AI-native software will have to architect systems that fall back gracefully if the AI is unavailable or slow (maybe default to a basic UI if needed, etc.), to ensure reliability. We must avoid a scenario where the app fails because it can’t reach the AI service. Essentially, the software still needs a backbone of stability, with AI as the intelligence layer.

  5. Accuracy and Error-Handling: AI-driven systems can sometimes be too confident and make mistakes (the phenomenon of AI “hallucinations” where an LLM generates incorrect information). In high-stakes software, mistakes can be costly. Proper feedback loops are needed so that when the AI does err or a user corrects it, the system learns from it and prevents repetition. Additionally, combining deterministic code with AI judiciously can yield the best results – for example, let AI handle the parts it’s great at (like understanding intent or generating interface), but use traditional code for a calculation that must be exact or a compliance rule that must be strictly enforced. Knowing where to put AI vs. hard code will be part of the new design skill set.

  6. Skills and Talent: To implement Software 3.0 concepts, companies will need people who understand both domains – software engineering and AI/ML. There will be a learning curve for development teams to adopt new tools and paradigms (like prompt engineering, AI model integration, and supervising AI agents). Luckily, we see a lot of enthusiasm in the developer community for AI, so this may be more opportunity than obstacle. Still, retraining and hiring for these hybrid skills is something organizations should plan for. Early adopters might even collaborate closely with AI research groups or vendors due to the novelty of what they’re building.
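To make points 3 to 5 above a little more tangible, here is a minimal sketch of a guardrail layer for AI-proposed changes: every proposal is logged, only low-risk scopes are applied automatically, anything touching financial logic waits for human sign-off, and the system falls back to its current static behavior if the model is unreachable. The scope names, the `llm_propose` placeholder, and the log format are illustrative assumptions, not a prescribed design.

```python
import datetime
import json
from typing import Optional

AUDIT_LOG: list[dict] = []

# Governance policy: what the AI may change on its own vs. what needs sign-off.
AUTO_APPROVED_SCOPES = {"ui_layout", "help_text"}
HUMAN_APPROVAL_SCOPES = {"financial_calculation", "compliance_rule"}

def llm_propose(observation: str) -> Optional[dict]:
    """Placeholder for the model that proposes a change; returns None if unreachable."""
    return json.loads('{"scope": "ui_layout", "description": "move the Submit button above the fold"}')

def handle(observation: str) -> str:
    proposal = llm_propose(observation)
    if proposal is None:
        return "AI unavailable - keeping current static behavior"      # graceful fallback
    entry = {
        "time": datetime.datetime.now().isoformat(timespec="seconds"),
        "observation": observation,
        "proposal": proposal,
    }
    if proposal["scope"] in AUTO_APPROVED_SCOPES:
        entry["status"] = "applied automatically (rollback available)"
    elif proposal["scope"] in HUMAN_APPROVAL_SCOPES:
        entry["status"] = "queued for human sign-off"
    else:
        entry["status"] = "rejected - scope not covered by policy"
    AUDIT_LOG.append(entry)              # every decision leaves an audit trail
    return entry["status"]

if __name__ == "__main__":
    print(handle("Users scroll past the Submit button on the expense form"))
    print(json.dumps(AUDIT_LOG, indent=2))
```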

In short, the journey to AI-native software must be approached thoughtfully. It's not as simple as flipping a switch. Enterprises will likely gradually increase the "AI IQ" of their software over several stages:

  1. AI Assistants: AI features that help users but don't control core processes (where many companies are now).

  2. AI Augmentation: AI starts to take over executing certain tasks under human rules (e.g., an AI agent that automatically closes low-priority IT tickets).

  3. AI Orchestration: AI agents handle multi-step workflows and collaborate (with oversight) – e.g., one agent prepares data, another writes a draft report, etc.[26][27]

  4. AI Native Operation: The software is largely driven by AI, with humans monitoring the objectives and boundaries.

  5. AI-Defined Software: Eventually, new software or major enhancements are specified in high-level terms and the AI builds them almost entirely (the fully realized Software 3.0).

We’re somewhere between stages 1 and 2 in most cases. Each step requires proving value and trust at a small scale before moving forward. But with each step, the benefits compound, and the organization becomes more adept at handling the next.

Impact and Implications: Why This Matters

If we manage to overcome the challenges, the shift to AI-driven, dynamic software could unlock unprecedented value and fundamentally reshape the software industry (and by extension, every industry that relies on software).

Productivity Explosion: One obvious gain is speed and cost of software development. If an AI can handle creating interfaces, writing boilerplate code, or even optimizing databases, development teams can deliver solutions much faster. Businesses can get custom tools in days instead of months. This means companies can experiment more and tailor software closely to evolving needs without huge IT backlogs. The “last mile” of technology – applying it to real business workflows – suddenly shortens. A task that used to require a dozen developers and a six-month project might be done by a small team working with AI in a few weeks, or eventually by an AI system directly from a specification[28]. Lowering the cost and time barriers means more problems get solved via software that previously weren’t worth the effort. It democratizes innovation.

Personalized User Experiences at Scale: We’ve long known that personalized tools can boost effectiveness – think of a personal assistant who knows your work style. But scaling that was impossible; you can’t give every employee their own bespoke app experience when thousands use the same platform. AI changes that: it is possible to maintain unique configurations or behaviors per user when an AI handles the tailoring automatically. The result could be happier, more empowered users. Employees often get frustrated with clunky enterprise software (that “why is this so hard?!” feeling) – dynamic AI software can eliminate a lot of those pain points by simplifying the view and doing more for the user proactively[29][30]. In customer-facing apps, personalization can drive engagement and loyalty. For example, an e-commerce site with an AI-generated UI might adapt the shopping experience to each customer’s preferences in real time, making it far more engaging than a static page that’s the same for everyone.

New Business Models: Software 3.0 could blur the line between software vendor and client. Instead of selling a one-size product, a software provider might offer an AI-driven platform that co-develops solutions with the client. The value could shift to training the AI on a particular industry’s best practices or data. We might see “software templates” that an AI then morphs into a final product for each customer. This could upend licensing models – perhaps instead of paying per user or per module, companies pay for outcomes (e.g., a subscription based on successful tasks automated by the AI). It could also enable much more granular or short-term software use. Need a custom app just for a 3-month marketing campaign? An AI can spin it up; when the campaign is over, you archive it. Traditional software economics struggle with short-lived apps because development is too slow/expensive; AI could make “pop-up software” a reality.

Competitive Advantage and Adaptability: On a strategic level, companies that fully leverage AI in their software will likely outpace those that don’t. In a world where conditions change fast (just think of how businesses had to adapt in the 2020 pandemic), being able to reconfigure your digital processes quickly is a huge advantage. AI-native software means your systems become a source of agility instead of inertia. For instance, if a new regulatory requirement comes out, an AI-driven compliance system might adjust all relevant processes by reading the regulation and implementing changes in days – whereas a competitor might spend months on manual re-coding and testing. In effect, the business that has Software 3.0 operates like a living organism that senses and responds, whereas a traditional business is more like a machine that has to be manually recalibrated each time. Over a span of years, that adaptability gap can become the difference between leading a market and being disrupted.

Economic Implications: Some analysts have noted that if AI’s current capabilities were fully utilized, it could add trillions of dollars of value (in productivity, new services, cost savings) to the economy[31][1]. But that won’t happen by slapping chatbots on everything; it requires the deep integration we’ve discussed. The sooner enterprises re-architect to let AI into their core, the sooner we’ll see big efficiency leaps. Entire job categories might shift from routine data handling to more creative or strategic work because the software takes over grunt tasks. This could improve job satisfaction (people spend less time on tedious form-filling and more on interesting analysis or decision-making). There is also the question of whether AI-driven automation might reduce some jobs – undoubtedly, automation will change roles, but historically technology creates new roles even as it displaces others. Someone will need to oversee AI systems, curate data, and focus on uniquely human tasks (like relationship building, complex problem solving, and oversight). The net effect is likely a re-shuffling of work rather than pure elimination. Businesses that adapt roles proactively (reskilling employees to work alongside AI, not against it) will have an edge.

The Software Industry itself: If software becomes easier to create via AI, we might see a proliferation of niche software that previously didn’t exist. Right now, software vendors aim for big markets (to justify development cost). In the future, there could be viable software for a “market of one” – e.g., a custom app for a single company’s very specific need – because AI can build it cheaply on demand. This could fragment some markets and erode the dominance of a few big software suites. Or those suites themselves will evolve into AI-powered platforms that essentially generate sub-applications for each client. The role of a software vendor might become more about providing the best AI “brain” and industry templates, rather than delivering a static product. It’s a bit analogous to how consulting firms deliver bespoke solutions – we might see product companies and service companies blend together. Tech giants are already moving toward AI cloud services that let others build on their AI; this will accelerate.

In all of these implications, there’s a common theme: the way we think about software is shifting from product to partnership. Instead of software being a tool we use, it becomes more of a collaborator – one that is constantly learning and improving to better serve us. This is a fundamental change in the relationship between people and their digital systems.

Conclusion: Embrace the Paradigm Shift

We are at an inflection point. The emergence of powerful AI capabilities has opened the door to a new paradigm – one where software is no longer a static asset, but a dynamic, intelligent companion in work and life. Getting to Software 3.0 won’t happen overnight, and it won’t be without hurdles. But the trajectory is clear and the early signals are already here in the form of AI-assisted development, generative UIs, and autonomous agents.

For business leaders and software professionals, the message is: Don’t think too small. It’s tempting to see AI as just another efficiency tool – a way to automate a few tasks or reduce headcount in support functions. But that’s like using a supercomputer as a calculator. The real prize is in rethinking what your software and systems can do if AI is woven into their DNA. That means revisiting core processes and questioning decades-old assumptions:

- Do our customers really need to navigate these menus, or can they just ask for what they want?
- Do our employees really need to do this manual data reconciliation, or can the system learn to do it?
- Do we need to choose between a standard solution and a custom one, or can we have a solution that becomes custom via AI?
- Can our software adapt faster to business changes, and what would that do for our competitiveness?

The companies that start exploring these questions now, even in small pilot projects, will develop the organizational muscle for AI-native thinking. They’ll also signal to talent (and investors) that they are forward-looking. Much like the early adopters of cloud gained an edge, the early adopters of AI-native design will have a head start that others will later scramble to catch up with.

It’s also worth noting that embracing Software 3.0 is not just a tech initiative, but a strategic one. It requires collaboration between IT and business units, between designers and engineers, between compliance officers and AI experts. In many ways, it forces the breaking of silos – because if the software is adapting continuously, then your business processes, IT policies, and user feedback loops all intersect in that adaptation. It encourages a more agile, cross-functional mode of operating.

Jim Keller’s bold prediction that no current software will be in use ten years from now may or may not come true exactly in that timeframe, but it carries a truth: the software of the next decade will look and feel fundamentally different from the last decade’s. We’ll measure its success not by counting features or adherence to spec, but by its adaptability, intelligence, and outcomes. The winners in this new era will be those who embrace software’s evolution from fixed to fluid – those who allow AI to not just assist in the margins, but to become a core design principle of everything we build.

In conclusion, AI gives us the opportunity to break free from the limits of “how things have always been done” in software. Enterprises that seize this opportunity can unlock immense efficiency and innovation. Those that don’t may wake up to find their old software – and by extension, their business models – being left in the dust by more dynamic competitors. The writing is on the wall: static software is a relic of the past. The future belongs to software that can think, adapt, and grow. It’s time to start building that future today.

Outlook: Software 4.0?

So what may lie beyond Software 3.0? What else could happen?

Today’s AI/LLM systems have one major limitation: the memory problem.

While an LLM has a huge memory based on all the information it has been trained on, far beyond any human capability, it also suffers from complete amnesia when it is in execution mode.

Any question or prompt sent to an LLM is computed in its short-term memory and immediately forgotten afterwards. This is by design, and a good thing if you think about it: you would not want an LLM to retain or learn directly from interactions, otherwise interactions with malicious intent could create monstrous behavior for all users.

So when we interact with an LLM today, the interfaces we use create the illusion of memory by invisibly resending all previous prompts and responses in the background, together with your current follow-up, within a conversational thread.

Since the LLM can only accept a certain overall prompt length (the infamous token window), and since longer prompts take more computing power, smart algorithms are used to condense the older information. Beyond the individual thread, we also generate general information and preferences about the user and expand this “memory” with RAG (Retrieval-Augmented Generation) and other supporting infrastructure.
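As an illustration, the sketch below shows roughly how such a client-side memory illusion can be built: the full thread is resent with every prompt, and older turns are condensed once the token budget is exceeded. The `call_llm`, `count_tokens`, and `condense` helpers are hypothetical placeholders for whatever chat API and tokenizer are actually in use.

```python
# Minimal sketch of the "illusion of memory": the client resends prior turns with
# every new prompt and condenses older turns once the context budget is hit.

MAX_CONTEXT_TOKENS = 8_000   # assumed context ("token") window budget
history = []                 # list of {"role": ..., "content": ...} turns

def call_llm(messages):
    # Placeholder: swap in a real chat-completion API call here.
    return "(model reply)"

def count_tokens(messages):
    # Crude placeholder: a real implementation would use the model's tokenizer.
    return sum(len(m["content"]) // 4 for m in messages)

def condense(old_turns):
    # Ask the model to summarize older turns into one short system message.
    summary = call_llm([{
        "role": "user",
        "content": "Summarize this conversation briefly:\n"
                   + "\n".join(t["content"] for t in old_turns),
    }])
    return {"role": "system", "content": "Summary of earlier conversation: " + summary}

def ask(user_prompt):
    history.append({"role": "user", "content": user_prompt})
    # If the thread no longer fits the context window, condense the oldest turns.
    while count_tokens(history) > MAX_CONTEXT_TOKENS and len(history) > 4:
        history[:] = [condense(history[:-4])] + history[-4:]
    reply = call_llm(history)   # the model only ever sees this resent (condensed) thread
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("I need a reliable family SUV under $20k that feels fun to drive."))
```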

But in practice, complex user interactions can become frustrating when the user runs into the limits of the AI’s memory: “But I told you three sentences ago that…”

The current solution for more effective interactions, whether in personal, group, or enterprise productivity, is to make the AI build its own “exoskeleton”: writing and executing code to manifest processes and standards and to store data – Software 3.0.
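As a toy illustration of that exoskeleton idea, the sketch below persists facts outside the model (here in a small SQLite store) so they survive beyond a single prompt; the storage choice and all names are illustrative assumptions, not a prescribed design.

```python
import sqlite3

# Instead of relying on the model's context window, state the AI should "remember"
# is written to durable storage and re-injected into later prompts as needed.
conn = sqlite3.connect("exoskeleton.db")
conn.execute("CREATE TABLE IF NOT EXISTS facts (topic TEXT PRIMARY KEY, content TEXT)")

def remember(topic, content):
    # The agent persists a fact so it survives beyond the current prompt.
    conn.execute("INSERT OR REPLACE INTO facts VALUES (?, ?)", (topic, content))
    conn.commit()

def recall(topic):
    row = conn.execute("SELECT content FROM facts WHERE topic = ?", (topic,)).fetchone()
    return row[0] if row else None

# Example: a preference stated once is stored, not re-sent with every prompt.
remember("reporting_currency", "All figures should be reported in EUR.")
print(recall("reporting_currency"))
```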

If and when the memory problem is solved on a more fundamental level, a world of Software 4.0 might become feasible, where even complex collaborative tasks can be solved without software at all, purely in the memory of an AI.

Glimpses of that are already visible. If you can formulate a task with the corresponding information and data in a single prompt, LLMs can already solve it in a single, self-extinguishing computation. For example, “Please calculate the Return on Assets (ROA) of an investment with the following parameters: …” used to require at least a qualified human with a calculator or an Excel file; today it can be a simple prompt.
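For reference, here is the calculation such a prompt replaces (ROA is net income divided by total assets); the figures below are purely illustrative.

```python
def return_on_assets(net_income, total_assets):
    # ROA = net income / total assets
    return net_income / total_assets

roa = return_on_assets(net_income=150_000, total_assets=2_000_000)
print(f"ROA: {roa:.1%}")  # -> ROA: 7.5%
```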

Sources

Moran, K., & Gibbons, S. (2024). Generative UI and Outcome-Oriented Design. Nielsen Norman Group – “In the future, generative UI will dynamically create customized user interfaces in real-time… generative UI promises highly personalized interfaces — a move from designing for many to tailoring for the individual.”

McKinsey & Company (2025). Empowering people to unlock AI’s full potential at work – Only 1% of surveyed companies consider themselves AI mature (fully integrated into workflows), despite 92% planning to increase AI investments.

Eficode (2023). How to become an AI Native software organization – Description of “Phase 5 – Software that evolves itself”: AI agents continuously design, test, deploy, and adapt software based on real-time goals and behavior; software becomes a self-improving system

Andrej Karpathy (2017). Software 2.0 – Concept of neural networks and data-driven code as a new programming paradigm, hinting at the shift towards AI-created logic.

Microsoft Community Forum (2024). User feedback on early Copilot in M365 – Users report disappointment with AI retrofitted into Office, calling it “a frustrating flop” when it fails to execute tasks as expected[14], highlighting the limitations of bolting AI onto legacy UIs.

SAP SE – Origin of the name “SAP” (“Systems, Applications, and Products in Data Processing”) reflecting the advent of standard software products

Nielsen Norman Group (2024). Generative UI Report – Explains distinction between AI-assisted design vs. generative UI that builds interfaces for end users on the fly

National Centre for AI, Jisc (2024). Initial Thoughts on Microsoft 365 Copilot – Early enterprise user’s perspective: “a little disappointed” with Copilot, often reverting to ChatGPT for better results[2], exemplifying challenges of AI retrofits.

AI Whitepaper (User’s original content, 2023). AI’s Economic Impact and Future of Software – Discussion on the retrofit problem and the original objective of enterprise software (capture/process data to match business operations), as well as the concept of dynamic UIs where AI generates custom interfaces in real time.

What if AI could generate unique UI for every user, in real-time?

Could AI-driven chat interfaces fundamentally change the traditional ...