AI Evolution: From Textbooks to Dinner Tables

"AI in its various deep learning and machine learning incarnations has been around for 20 years, but seeing generative AI become a common dinner table discussion topic... that was the surprise"

– Karl Wright, CIO/CISO, Datacom

This caught my eye. It was the first of several quotes in the Aotearoa’s Digital Priorities in 2024 report published by TUANZ.

I know Karl from my Spark days, and I have benefited from his wisdom in the past. He is right: AI from the textbooks has now reached the dinner table.

In the mid-90s I was an engineering student. Reading about Neural Networks and Fuzzy Logic blew my mind, for a simple reason: computers are digital and work on Boolean logic (0s and 1s), yet by applying the right mathematical models to machines running on binary logic, it was possible to use fuzzy logic to mimic human decision-making. Here is a simple example to explain this:

Scenario: An autonomous car has to slow down for the car in front of it, a situation drivers encounter every day. Using crisp “if-then” logic, one could write a rule such as “if the distance is less than x, then reduce speed by y%.” But covering every situation this way would require an endless list of rules, and that is not how the human brain works. It works on something closer to fuzzy logic: “if there is an object in front, slow down,” with the degree of slowing varying smoothly with how close the object is. That concept is challenging for a computer that only understands 1s and 0s, true and false.
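To make the contrast concrete, here is a minimal Python sketch of the fuzzy idea. The distance thresholds, the linear ramp, and the single rule are illustrative assumptions of mine, not how any real vehicle controller is built:

```python
# A minimal sketch of fuzzy-style braking, for illustration only.
# The 10 m and 50 m thresholds and the linear ramp are made-up assumptions.

def membership_near(distance_m: float) -> float:
    """Degree (0 to 1) to which the gap to the car in front counts as 'near'."""
    if distance_m <= 10:
        return 1.0
    if distance_m >= 50:
        return 0.0
    return (50 - distance_m) / 40  # smooth ramp between 10 m and 50 m


def braking_strength(distance_m: float) -> float:
    """Fuzzy rule: 'if the object in front is near, slow down', harder the nearer it is.
    The output varies continuously, unlike a crisp if-then threshold."""
    return membership_near(distance_m)


if __name__ == "__main__":
    for gap in (5, 20, 35, 60):
        print(f"gap {gap:>2} m -> braking {braking_strength(gap):.2f}")
```

The point is that the output is a degree of braking rather than a hard yes/no decision, which is what makes the behaviour feel closer to human judgement.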

So, how did AI end up as a dinner table conversation in recent years? Several factors contributed:

  • Advancement in compute technology: Moore’s Law became a thing of the past as compute power grew faster and cheaper than before. Where earlier efforts were limited to Small Language Models (SLMs), the compute power built for high-end graphics, along with emerging fields like quantum computing, made Large Language Models (LLMs) feasible.
  • Improved AI algorithms: Capabilities in language understanding and content generation have significantly advanced.
  • Data explosion: The immense growth of publicly available data, including archives, websites, and digitized texts, has vastly improved model training.
  • Massive investments: Large businesses have invested billions in AI development, making substantial bets on its future.

A couple of weeks ago I watched an interesting video from Dr. Michelle Dickinson, aka Nanogirl, in which ChatGPT (4o) insisted there are 2 r’s in “strawberry”. Watch it, if you haven’t:

I can understand that an LLM like ChatGPT works with tokens to construct words and sentences, and may not “see” the letter “r” as a character on its own. A conventional program, by contrast, can treat “r” as an ASCII character and count its occurrences in a word with a simple search-and-count algorithm. What troubled me was the “arrogance” of the model! That is disturbing.
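For contrast, here is the kind of trivial character-level count a conventional program performs, sketched in a few lines of Python:

```python
# A plain character-level count: the kind of exact operation binary logic
# handles trivially, but which a token-based model does not perform natively.
word = "strawberry"
count = word.count("r")  # scans the string character by character
print(f"'r' appears {count} times in '{word}'")  # prints 3
```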

Generative AI (GenAI) is different from many other hype waves the tech industry has seen. They all go through a hype cycle, and this one has only just started. Real-world business use cases with clear ROI are still emerging. CIOs and technology leaders must watch this space and set a clear direction for their businesses. Just as there was (and is) “shadow IT”, there will be “shadow AI”.

I’ll leave you with a quote from Roy Amara, a futurist and scientist, known as Amara’s Law:

"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."

-Roy Amara

If you are an optimist, you can see the pessimism. If you are a pessimist, you can see the optimism. If you are a bit of both, like me, you can see both perspectives.

The journey of AI from textbooks to dinner tables is just the beginning of its profound impact on our world.


