The world beyond apps – my thoughts on AI’s impact

How LLM-based generative AI systems are challenging the role of traditional business applications, as well as the job roles of the people building them.

This text was written by a human being, not AI – even if some of the images are AI generated.

Adding such a disclaimer would have sounded ridiculous only a few months ago. Today, many of us are rightfully sceptical towards any online content we come across that touches upon a current hot topic. Is it organic, or at least partially machine-produced?

LLMs, large language models, struck earth like a meteorite at the end of 2022 and made some of us question what we really understand about the current capabilities of computers. If you’ve logged in to ChatGPT or signed up for the new Bing, now running on top of OpenAI’s GPT-4 model, you will certainly have experienced a “WTF?!?” moment or a few when seeing the kind of answers they can come up with.

In this blog post I’ll reflect on my experiences with this latest wave of AI capabilities and how I think they may change the business I operate in: low-code application platforms (LCAP), and in my case the Microsoft stack of tools.

Why didn’t we see this coming?

When discussing the AI phenomenon over a pint of craft beer with a colleague of mine a couple of weeks ago, he asked a great question: “why didn’t anyone see this recent AI wave coming?” Indeed, how can people in the tech industry be so surprised about what took place in 2022 on the AI front? Where’s the catch? Is this real, or just momentary hype that will fade away once the next hot topic comes along and takes over our LinkedIn feeds – in the same vein as web3, crypto, the metaverse and so on?

One of my all-time favorite phrases is “first gradually, then suddenly”. It applies to pretty much any disruptive change that the world of technology and business encounters. Things like AI don’t just emerge one day and immediately start wreaking havoc. Instead, they remain in the “bubbling under” stage for quite some time – and then they suddenly erupt.

ChatGPT has been called the iPhone moment of AI. If we think about the technology that Apple put together to create the original iPhone as a product, key elements like touch screens or downloadable apps were all introduced many years before by the biggest player in the phone business (Nokia). It was not the existence of these technical capabilities that created the perfect storm. It was the way in which they were delivered to the market as a product that made people think: “wow!”

Creating a web service like chat.openai.com and just casually making it available to the whole world one day was completely different from publishing the GPT family of LLMs via traditional APIs and researcher/developer focused forums. Unlike the iPhone as a physical product, no one needed to purchase anything or make a switch or commitment to experience the potential of generative AI. That’s how you grab the attention of the world in these always-on, always-connected days.

Even alpha geeks like Bill Gates didn’t believe it was going to happen this fast. Tech giants like Google had been sitting on much of the technology needed for building what became ChatGPT, yet they lacked the financial incentive to proceed with it. As a result, in 2023 I’m now using “just Bing it” in a non-ironic way. One year ago I would have given below a 0.1% chance of this being the year of Bing – and today it clearly is. Google has access to all the tech necessary to grab the lead, yet the dynamics of its business model (targeting search-based ads) are a force very similar to the one that took Nokia under once phones became computers. Google, much like Nokia before it, became a victim of its own delusions of excellence.

So, about the original question. No one saw this coming because that’s how disruption always happens. Which leads us to the next topic:

OMG, they’re coming after us now!

In the IT consulting business we’re accustomed to the endless talk about change. Everyone wants to change the way people use technology, and many are surely fantasizing about disrupting existing business models. Very rarely, if ever, do we tech people see ourselves as the target of such disruption. Until now, that is.

What the recently discovered capabilities of LLM-based AI systems have already shown is that a very, very significant share of what we knowledge workers spend our days doing can be handled by computers. It should be a wake-up call to all of us: we have been wasting our wetware brain cycles on reading and writing things that generative AI should be processing instead.

Shouldn’t we be happy about AI coming and promising to handle the mundane parts of the consulting business that we didn’t really enjoy all that much anyway? While everyone sort of cheers for the virtual assistant that can process our emails, summarize data, generate reports and (very shortly) handle basic interactions with CRUD-based systems, some might notice that this also sounds like an existential threat. Even Bing knows it:

Ah. AI skills and bioweapons nicely coexisting in one answer from a service that used to be just a dumb internet search engine a few months ago. Nothing to see here, folks! Move along!

When I first started to explore this recent leap in AI’s capabilities, I was introduced to Moravec’s paradox. It states that high-level reasoning requires very little computation, while low-level sensorimotor skills require enormous computational resources. In short, this means: replacing the driver of a car requires a huge amount of computing resources, whereas replacing the office worker sitting next to a computer all day is almost trivial in comparison.

This is not what most of us have been taught about how technical advancements replace human workers with digital tools. Yet it makes perfect sense when you think about it. The natural selection of evolution has had billions of years to develop the skills we use for observing and operating inside a physical space. Email has existed for only half a century or so. We are all really just helpless babies when it comes to processing data, yet we are quite advanced in the abilities needed for navigating the busy streets of a city full of physical objects and actors.

The arrival of 2022-level AI skills in the form of LLMs was the first time many of us ever needed to take a look in the mirror and say:

We used to be the ones disrupting the working life of others. Now we are gradually joining the ranks of the disruptees. No, I still don’t regret one bit that I chose to pursue a career at the intersection of business and IT. It is a much better position to be in when thinking about what lies ahead and who gets the chance to participate in building something new and exciting.

Things have always evolved and we’ve seen technological leaps before. I just can’t help thinking that the world in which I grew up to understand what IT is about will now seem incomprehensible to the new generations who’ve yet to discover it.

Backward never

As a child, the first time I got the chance to physically interact with a computer was with the ABC 80 that my dad bought. Serious computers seemed very complex to use, yet at that age I could entertain myself simply by typing on the keyboard and pretending to use the computer. Fake it till you make it, right?

Photo by Frédéric BISSON: Pacman on Atari 800XL at RetroGaming Days IV

When I got my first computer of my own, the Atari 800XL, it supported loading games from a cartridge that you just plugged in. That was the “instant gratification path” of using a computer. I also started reading the magazines of the time and even typed in a few of the BASIC games, transforming code printed on paper into bits entered via the keyboard. It was interesting, yet it always felt too hard to achieve tangible results. Code didn’t provide me a way to express myself. It wasn’t until I started creating music on early PC tracker software that I saw computers as a tool for creativity.

My kid is now 3 years old. As he gets exposed to how technology is used in our daily lives, I’m pretty sure he’ll never see computers as “hard to use”. As natural language interaction presumably takes over most casual usage of smart devices quite quickly, the concepts of programming or code may remain very distant to him. He’s already happily giving Google Assistant voice commands via a smart speaker in our living room. Why wouldn’t that work for everything by the time he’s old enough to start creating things with computers on his own?

It goes far beyond the difference between punching keys vs. shouting commands. As of today, Google Assistant is a complete moron compared to ChatGPT. It won’t be that way for long, though. The idea of a computer that understands what you want to achieve without it having been programmed to fulfill that specific request is a shift so profound that such AI capabilities will be infused in every possible smart device. Because otherwise calling them “smart” with the 2021 level of skills would just get the vendors laughed out of our homes and businesses.

Soon, no one will be given the opportunity to claim that they don’t know how to use computers. There may not be a single visible computer around, but everything around you will be processing data and adjusting itself to sensor inputs. That’s likely the world our children will grow up in, regardless of what we as parents do. Life without ubiquitous AI may soon be like trying to live without electricity.

And now to Power Platform.

Everyone as a developer

The idea of how the low-code Power tools could eventually democratize the creation of applications is something very close to my heart. The products were introduced roughly a decade ago now. Around 2018 I started talking about the crazy idea that in the future, creating apps would be as commonplace as creating documents already was. I used it as a way to get my point across when talking with colleagues, claiming that “every PowerPoint should really be a PowerApp”. I wanted to challenge their traditional idea of apps as something big and expensive created by professionals, insisting that they could also be small, disposable things created by anyone with a computer.

Microsoft has of course been talking about infusing AI into its product portfolio for a long time already. We’ve seen some neat little demos and product features built on what NLP (natural language processing) has been able to do, but in reality it has had almost no impact on the everyday life of app/flow/report makers. Sure, AI Builder as a citizen-friendly entry point into the world of Azure Cognitive Services has seen some real-life usage, yet it has hardly been a mainstream tool for citizen developers.

The more powerful next-generation AI features have landed first on the pro-code side, under the GitHub Copilot product. These stats from the announcement of the latest GPT-4 based Copilot X version give some indication of the adoption rate. It doesn’t sound like just a marketing gimmick anymore. A fair share of developers out there are probably at least considering making it part of the everyday process through which they produce their work output.

Those of us who come from outside the world of programming and spend our days working with something other than raw programming code could soon be facing the same question: should I let AI do part of the work for me? As anyone with more extensive experience of Power Platform will probably agree, low-code does not mean low complexity. Working with data, business logic and UI can present quite a cognitive load, even if you are “just point’n’clicking” or writing Power Fx formulas instead of JavaScript or C#.

If anything, no-code/low-code should be the area in which safe usage of AI-generated components goes mainstream a lot faster than in code generated without any productized guardrails around it. In the end it’s still all made of code, and it all runs in the Microsoft cloud already (in the case of Power Platform). Training dedicated AI models to serve this well-defined playground should be a very doable task for the platform provider. Of course, the UX of how app makers interact with the generated results needs to be thought out, as direct manipulation of the generated code wouldn’t be quite ideal.

If business users learn to leverage Microsoft 365 Copilot to generate documents for them, how far can we be from the stage where they are also comfortable generating apps and automations in the same way? I believe we are definitely moving in the direction where questioning the abilities of non-programmers to design and develop their own tools isn’t a valid generalization to make anymore. I honestly did not believe we’d get this close to “hey Copilot, turn this PowerPoint into an interactive Power App” being a possible reality so soon.

This, in turn, leads us to a question worth thinking about: is this what the world really needs?

The NoApps future

In the end, people don’t need apps. Just as the record industry was formed around the concept of producing, promoting and selling physical items containing a representation of the music created by artists, our current business applications consulting industry is also focused on the intermediate output. The actual value delivery is something we must never lose sight of. Otherwise you may find yourself selling plastic discs when the world has decided to jump to streaming audio instead.

Quicker creation of apps will initially see plenty of demand, I’m sure. Besides, it really is a cool demo to draw a form design with pen & paper and then have AI generate a digital, app’lified version of it (both Microsoft and OpenAI have used this scenario). Yet it’s still just a static data capture form. How many forms can employees or consumers navigate through during their days before getting exhausted with the “there’s an app for everything” experience?

These UIs for standardized data capture and processing have been needed because our technology previously couldn’t work with anything fuzzier. Well, we’ve now seen through ChatGPT that it most certainly can. Not only can the AI figure out what we humans mean, regardless of the language we type our text in. It can also figure things out from what anyone else out there in the world has written.

There’s an interesting demo / research preview of “just some guy” applying GPT-4 to instruct the web browser on what to do – not just writing out the instructions but actually performing the steps. In the example provided in this TaxyAI project, while on the GitHub website, giving the prompt “protect the main branch” will trigger the AI to research what that means and where it should be clicking, and then complete the steps like an RPA-style bot:
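
To make the pattern concrete, here is a minimal sketch of what such an LLM-in-the-loop browser agent could look like – my own illustration, not TaxyAI’s actual code. It assumes Playwright for driving the browser, and ask_llm is a hypothetical placeholder for a call to a GPT-4 class model:

```python
# A minimal sketch of the LLM-driven browser agent pattern (an illustration,
# not TaxyAI's actual implementation). ask_llm() is a hypothetical placeholder
# for a call to a GPT-4 class model.
from playwright.sync_api import sync_playwright

def ask_llm(goal: str, page_text: str) -> dict:
    """Placeholder: send the user's goal plus the current page contents to an
    LLM and get back the next action, e.g.
    {"action": "click", "selector": "text=Branches"} or {"action": "done"}."""
    raise NotImplementedError

def run_agent(start_url: str, goal: str, max_steps: int = 10) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=False)
        page = browser.new_page()
        page.goto(start_url)
        for _ in range(max_steps):
            # Let the model observe the page and decide the next UI action.
            step = ask_llm(goal, page.inner_text("body"))
            if step["action"] == "done":
                break
            if step["action"] == "click":
                page.click(step["selector"])
            elif step["action"] == "type":
                page.fill(step["selector"], step["text"])
        browser.close()

# e.g. run_agent("https://github.com/<org>/<repo>/settings", "protect the main branch")
```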

I’m not saying this is a tool that will go mainstream. Rather, it gives an idea of what the tech giants out there will be creating with their 1000x budgets. The Copilots of tomorrow will not just return a box filled with text that they generated. They’ll probably do the actual work for you, rather than just providing instructions.

Think about all the process documentation and ad-hoc instructions that have been created inside a medium-sized enterprise. It will never be realistic to turn each of them into dedicated apps and automations. Yet if we get an AI service that can read these instructions written in human language, turn them into actions for the computer, and then complete the chain of activities from the original process input to the final process output – that would be something.

The concept of “working out loud” – proactively sharing our observations of the world and our accumulated knowledge with the community of colleagues meeting on a digital platform – has been a great productivity booster and a source of professional & personal growth for me. Today, with ChatGPT or Bing, we can gain further benefits by “thinking out loud” with the machine, providing it a sequence of dialogue-like prompts. The natural evolution from this could lead towards a world that actually supports “working by thinking”:

  • “Hey Copilot, I spent €50 on a cab ride.”
  • “I can see that you’re in a city where you had previously agreed to visit for a customer project. I’ve grabbed the Uber receipt from your email, filled all the details into our corporate systems and the reimbursement has been deposited back to your bank account.”

Instead of sounding like a scene from some flashy “World of Tomorrow” video created by IT companies to sell us the boring & expensive tech of today through a fictitious scenario from the year 2060 or so – it doesn’t sound too far off anymore. An AI assistant that understands what we say we want & what others say we should do in order to get it is, in theory, here already today, in the form of LLM-based chatbots.

We just need to plug it in.

Endless problems

Cue the soundtrack from The Terminator. This is exactly how you create Skynet, isn’t it?

“Defence network computers. New… powerful… hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence”.
“Skynet saw all humans as a threat; not just the ones on the other side” and “decided our fate in a microsecond: extermination”.

Michael Biehn as Kyle Reese in The Terminator (1984)

AI will introduce brand new problems for humanity. Some will be existential (“is there a place for us in the world after AGI arrives?”), others much more mundane. What I’m somewhat worried about is that, in the greater scheme of things, we’ll be faced with them pretty much all at once. Without the ability to identify what is a serious issue for the whole world and what is merely a speedbump on the road to innovation and business productivity, we’ll be mighty confused.

The number of things we could be worrying about when it comes to AI is overwhelming already today. Use this list as an example and pick one item if your bag of worries is looking empty:

  • Copyright issues of imitating/borrowing content from original makers without permission.
  • Tech monopolies growing even bigger and stronger than they are today.
  • LLM hallucinations making it impossible for us to know what’s right & wrong.
  • The internet & every media getting flooded with machine generated content.
  • Biased data in the training sets inflicting harm on how minorities get treated.
  • Next-generation surveillance society à la Minority Report.

The challenge I see ahead is that AI may be unlike anything we have ever encountered before. Sci-fi literature and movies have attempted to provide at least some context to the phenomenon, enabling us regular human beings to talk about what’s happening around us by using references from popular culture. Yet where are the AI consultants that will help organizations of all shapes and sizes make sense of this? (I know: just open LinkedIn and everyone is an AI expert there these days. But anyway.)

The range of reactions to AI that we regular technology consultants can expect to encounter when we start talking about it is probably going to be all over the place. Many organizations will have legitimate legal and compliance concerns that will cause them to choose the “better safe than sorry” approach and put things on hold. Elsewhere, the gains from initial experiments with LLM-powered information systems may turn out to be so compelling that AI becomes the next Digital Transformation that must be sprinkled on top of everything the company does.

I suspect the divide between individuals will be even greater. Just as low-code tools have enabled a new breed of app makers to stand out from the ranks of ordinary business people and create things few of their colleagues could ever have imagined, the same can happen with the adoption of AI tools. Neither one’s formal education nor one’s professional background may be the best criterion for identifying those who’ll be able to apply this new technology to solving real-life problems.

If anything, I suspect it will be even more likely that the domain experts, the citizens, will be able to set aside all those “this AI-generated code is garbage compared to what a true professional developer could write” complaints and just focus on the end results of using that code instead.

Good enough is close enough

We’ve been taught to believe that the world of computing is very binary by nature: it’s either 0 or 1. Computers either behave in a logical, repeatable manner and deliver the exact result they have been instructed to – or they deliver nothing but an error. People trust the computer to be right just as long as someone programmed it the right way.

LLMs have flipped these roles around. Now the computer is the creative one, generating an endless list of ideas and content variations based on the simple prompts given to it by humans. Prompts are not instructions like code, as there’s no way to guarantee that the neural network will produce the same output the next time you provide the exact same input. This new generative AI is whimsical and unpredictable, which makes it behave so much more like us humans do.
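
A toy example makes the difference concrete. Instead of following a fixed instruction path, a generative model samples the next token from a probability distribution, so the same prompt can yield a different answer on every run. The candidate words and scores below are made up purely for illustration:

```python
# Toy illustration of why LLM output isn't repeatable: generation samples
# from a probability distribution over next tokens rather than executing
# deterministic instructions. (Real models do this over a vocabulary of
# tens of thousands of tokens; the principle is the same.)
import math
import random

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Pretend the model scored four candidate continuations of "The meeting was":
tokens = ["productive", "cancelled", "a disaster", "rescheduled"]
logits = [2.1, 1.7, 0.9, 1.5]  # made-up scores

for run in range(3):
    probs = softmax(logits, temperature=1.0)
    choice = random.choices(tokens, weights=probs)[0]
    print(f"run {run + 1}: The meeting was {choice}")

# Same input, potentially different output on each run. Only when the
# temperature approaches zero (always pick the most likely token) does
# the behaviour become deterministic again.
```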

If you work in a profession where any part of your output can be presented as text, there’s a high probability you’ve given ChatGPT a go and tested how it performs in producing similar outputs. It’s equally likely that you’ve seen the answers given by the machine contain plenty of factual errors. “Hah! Nice try, AI, but no matter how much data they feed you, you’re still just a stochastic parrot.”

We should of course take pride in our craft. Many of us work in an environment where the professionals who can provide the most detailed answers to questions presented to them get the highest amount of respect. We gain trust from others, we build up our own confidence, we “level up” by being knowledgeable in our field and delivering high quality work outputs.

Are you always right, though? Of course not, we’re all only human.

Can you apply your expertise to any areas of business? Heck no, the deeper we go on topic Z the less time we have for studying topics A…Y.

Can you work 24/7/365? Not possible, you know that.

Communicate in any language? Why are you even asking these th…

WILL YOU WORK FOR FREE?!?!? Okay, I prefer not to continue this discussion anymore.

AI doesn’t have to be perfectly accurate to be of extremely high business value to companies. It only needs to be close enough, so that the superpowers it does possess over us living and breathing human beings can be put to use. Yes, it will very often need supervision and intervention from humans at some stage of the process. Yet the biggest financial gains will be achieved wherever the share of human work can be brought down to a minimum. Which means people will be very creative in finding novel ways to harness the capabilities of AI.

AI can always be there for you when needed, whereas a human professional cannot. In my line of business, today the customers are googling for answers and following videos & blog posts to manually repeat the steps on a computer to get their job done. When it gets too tricky or they don’t even know what to search for, Power Platform Advisors can step in and ensure the desired results are achieved. Customers can rely on us, but they need to sacrifice both their time (adjusting to whenever we are available) and money (sorry, we also have to make a living).

If you can take away all these nasty human constraints, AI will be a sweet enough deal to consider as an option for pretty much anything. People will try it because the barrier is almost non-existent. The tools for creating business solutions will have the Copilot capabilities baked into them, thus promoting them as the first resort. You’d be crazy not to use them.

What about the possible damages, though? Let’s say that generative AI is right 90% of the time and a human professional gets to 99% accuracy. When your AI built business solution causes big problems for the first time, won’t everyone go back to the good ol’ professionals?

There will undoubtedly be business models emerging that work a bit like insurance does. Since we can’t be sure that LLM-based answers are correct, and we also can’t sue the AI for giving us wrong advice, someone needs to step in and become that middleman. Yes, accidents will happen and someone needs to cover the damages. Now, if your AI-based service costs €20/month instead of €200/h, and you’ve got a policy that promises to fix whatever issues were caused by unsupervised AI-driven decisions – it can be quite a lucrative model for both parties.
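
A quick back-of-the-envelope sketch shows why the math could work out. Every number below is an assumption made up for illustration – only the €20/month and €200/h price points come from the paragraph above:

```python
# Back-of-the-envelope comparison of "human professional" vs. "AI service
# plus insurance". All figures are illustrative assumptions, not data from
# any real service.
tasks_per_month = 100
human_rate_eur = 200          # €/h, assuming one task takes about an hour
ai_subscription_eur = 20      # €/month, flat fee

human_error_rate = 0.01       # human right 99% of the time
ai_error_rate = 0.10          # AI right 90% of the time
cost_per_incident_eur = 500   # assumed average cleanup cost per mistake

human_total = (tasks_per_month * human_rate_eur
               + tasks_per_month * human_error_rate * cost_per_incident_eur)
ai_total = (ai_subscription_eur
            + tasks_per_month * ai_error_rate * cost_per_incident_eur)

print(f"Human professional: €{human_total:,.0f}/month")  # €20,050/month
print(f"AI + insurance:     €{ai_total:,.0f}/month")     # €5,020/month

# Even with ten times the error rate priced in, the AI route costs a
# fraction of the human one – that gap is the margin an insurance-style
# middleman could live on.
```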

Bicycle evolved

Steve Jobs called the computer “a bicycle for the mind”. It is a beautiful, powerful metaphor. In the early 80s, the democratization of computing through the rise of the personal computer, available for both work and personal tasks, was presented as a huge leap in the capabilities of an individual. Just as hopping on a bicycle raises a human’s efficiency in energy use per kilometre travelled beyond that of any animal in the world, our possibilities for cognitive work rose to a new level as computers became an everyday tool within our reach.

AI could eventually lead to something of similar magnitude. Is it just a faster bicycle, though? Did our computers become more powerful, or did the tools change in a way that requires a new metaphor? When talking about LLM-based tools like ChatGPT specifically, I quite like the analogy of a “calculator for words”. It underlines the way in which these new tools of 2023 should be approached: not as all-knowing sources of truth, but as wizards of words. They are extremely powerful in delivering combinations of words to represent most things we humans use text for (be it communication or code). However, assuming that they understand the world the way humans do is a mistake to avoid when making use of their wizard skills.

If the electronic calculator were brought into an office where everyone had previously been crunching the numbers with pen and paper only, what would its impact be? A thought exercise like this might help us understand why we can expect both enthusiasm and scepticism as AI capabilities begin to appear in the applications and platforms organizations use today. And just as happened with physical calculator devices, eventually we’ll get a next generation of machines where the ability to perform calculations is just one app among many.

One comment

  1. Great blog as always! In my daily use I try to be the adjudicator of everything that comes out because, at the end of the day, it is me writing and summarizing this information. But I keep asking myself two things. One, we have not seen the real issues with this technology yet (some, but not a lot), so an unanswered question is how corporations, governments and other users will act when that happens – say, ChatGPT getting hacked, more countries starting to ban it, privacy concerns. Two, adoption! I cannot count how many GPT posts I have seen this year alone, but in my day-to-day I still see a lot of people who have not heard of this technology. Heck, I still see large and small orgs that run their business on Excel and have never heard of the Power Platform. I feel like it will be a massive disruption with many gains, but what worries me about the open ChatGPT technology (not the embedded version) is that people are too drunk on the technology right now to ask the right questions, the ones that look at the long run. With great power comes great responsibility. I am curious to see what will happen in the next 5-10 years.
