Why name it "Aye, Aye, AI"? Here's the answer...
Weekend at Bangalore Literature Festival 2025
This is the third in a series of articles from BLF 2025. Read the first one here and the second one here.
During the first day of the 14th edition of the Bangalore Literature Festival, the stage in the Open Cell likely saw the widest spectrum of vibes. What started as a nostalgic corner of Sufi ghazals and classical ragas evolved into a café of dialogues about the future of artificial intelligence.
The title of the event, Aye, Aye, AI, is a phrase well known to the AI community. It has been repurposed as the slogan for numerous articles, YouTube videos, LinkedIn posts, and more. There’s even a podcast of the same name! Pinpointing the exact origin of the phrase, and its original author, is one of those quests that has slipped out of human hands. This is where journalists like Anil Ananthaswamy and Karen Hao come to the rescue.
“I think I’ll try to start with a little bit of history to avoid making this sound like a math lesson,” said Anil when asked by the moderator, Indulekha Aravind, how generative AI differs from the broader superset of AI. He traced the history of AI in the following way: “The term itself was coined in 1955 by John McCarthy. He was a mathematician at Dartmouth (College), and that year, he, along with three other people, Claude Shannon, Nathaniel… forgot his last name (it was Rochester), and then Marvin Minsky. These four people basically decided that they’re going to organize a meeting to design machines, or to think about designing machines that will use language, will form abstractions and concepts, and will be able to reason and solve problems like humans, all the while improving themselves. So in a sense, ‘artificial intelligence’ refers very broadly to that idea, right? And it has remained in spirit that particular notion of machines: that we want them to do these things. But, you know, generative AI is just one small part of that thing.”
He added, “When we think of artificial intelligence, there were two very broad categories that were in vogue for a long time. One was something called ‘symbolic AI,’ which is the idea that we can build artificial intelligence systems that use symbols or representations to, you know, codify knowledge that is out there in the world and use some sort of rules of logic to make inferences, make deductions, and do things. And this was, kind of, the whole structure that was imposed top-down by taking what we knew about the world as humans and somehow imposing that on the system.” Anil mentioned that although symbolic AI was very popular in the 80s and 90s, it didn’t learn from new experiences and was very brittle: it broke readily when asked a question that didn’t fit its predefined set of rules.
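Anil’s description is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of a rule-based system and its brittleness; the KNOWLEDGE facts, the single inheritance rule, and the infer helper are all invented for this article, not drawn from any real system of that era.

```python
# A toy symbolic-AI system: hand-written facts plus one hard-coded logic rule.
# Everything here is illustrative; real systems were far larger, but failed
# in essentially this way when a query fell outside their rules.

KNOWLEDGE = {
    ("canary", "is_a"): "bird",
    ("bird", "can"): "fly",
}

def infer(subject, relation):
    """Answer a query by chaining the hand-written facts through 'is_a'."""
    if (subject, relation) in KNOWLEDGE:
        return KNOWLEDGE[(subject, relation)]
    parent = KNOWLEDGE.get((subject, "is_a"))  # properties inherit via "is_a"
    if parent is not None:
        return infer(parent, relation)
    return None  # no matching rule: the system simply breaks down

print(infer("canary", "can"))   # "fly"  -- covered by the predefined rules
print(infer("penguin", "can"))  # None   -- brittle: nothing was ever learned

```

Nothing in this sketch improves with new data; every capability must be typed in by hand, which is the top-down imposition, and the brittleness, that Anil points to.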
“By the 90s, we realized that this effort was going nowhere,” he continued. “But simultaneously, from the 50s onwards, there was a separate track that was also functioning, which was called machine learning. And that, fundamentally, is this idea that machines should be able to look at data, and there are patterns inherent in the data, and they should learn about these patterns and then use whatever they have learned to do something. And that ‘do something’ part is what distinguishes generative AI from another form of machine learning, which would be ‘discriminative AI.’”
Then Anil described discriminative AI to the audience, which had already become quite interested in the topic: “So most people would be very familiar with things like face recognition and voice recognition. Those are all what are called discriminative tasks, where, like, if I’m given a bunch of images of cats and dogs, what the discriminative machine learning AI will do is it will learn about what patterns constitute dogs and what patterns constitute cats. And then once it’s learned that, then when you give it a new image and ask it to say, ‘Oh, is this a cat or a dog?’ It’ll say, ‘Okay, this matches a dog,’ or ‘This matches a cat.’ So it’s discriminating between different types of data. So, much of what came about five years ago and before that was discriminative AI. So, all of that stuff was for image recognition, voice recognition. But then, once a machine has learned about patterns that exist in data, you can also ask it to generate new data that looks like the data that you trained it on. So if you had images of cats and dogs, you could ask it, ‘Okay, once you’ve figured out what patterns constitute dogs and cats, can you now produce an image that looks like another cat?’ It may not be exactly the cat that was there in the training data, but something that’s very similar.”
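Anil’s cats-and-dogs example can be made concrete with a toy sketch. In the hypothetical Python below, each “image” is reduced to a single made-up numeric feature and each class is modeled as a Gaussian; all names and numbers are assumptions for illustration, far simpler than the patterns real systems learn.

```python
# Toy version of Anil's example: learn the pattern of each class from data,
# then use that same pattern both to discriminate and to generate.
import numpy as np

rng = np.random.default_rng(0)
# Pretend each image boils down to one measured feature per animal.
cats = rng.normal(loc=3.0, scale=1.0, size=500)
dogs = rng.normal(loc=7.0, scale=1.0, size=500)

# "Learning" here is just estimating the mean and spread of each class.
cat_mu, cat_sd = cats.mean(), cats.std()
dog_mu, dog_sd = dogs.mean(), dogs.std()

def likelihood(x, mu, sd):
    """Gaussian density: how well x matches a learned class pattern."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Discriminative use: given new data, say which learned pattern it matches.
def classify(x):
    if likelihood(x, cat_mu, cat_sd) > likelihood(x, dog_mu, dog_sd):
        return "cat"
    return "dog"

print(classify(2.5))  # "cat"

# Generative use: sample fresh data that resembles the training data
# without being any particular example from it.
new_cats = rng.normal(cat_mu, cat_sd, size=3)
print(new_cats)
```

The point of the sketch is that the same learned statistics serve both uses: comparing likelihoods discriminates between classes, while sampling from them generates new, similar data.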
That, in essence, was Anil’s generic yet simple account of how generative AI works. He explained that as long as the statistical patterns hidden in data, be it images, audio, or language, can be figured out and trained on, similar kinds of data can be generated. For the audience, it was a profound insight, a class in motion. It left some beaming with newfound knowledge, and some, unabashedly, just left. Getting a feel for the room, the moderator didn’t skip a beat and immediately posed the question to Karen about the nebulous notion of artificial general intelligence.
Naturally, Karen responded with her definitive take on AGI…
“AGI, or artificial general intelligence, is typically defined as a theoretical AI system that will have parity with human capabilities over a wide variety of tasks. And the challenge with this term is that we don’t actually really have good ways of measuring human capabilities or understanding human intelligence,” she mentioned before referencing Anil’s remarks. “That’s part of the reason why Anil was talking about (how) in the history of AI development, there have been all of these different ways of trying to achieve the ultimate task of developing AIs; actually, all of it is rooted in philosophical differences about, ‘Are we smart because we know things? Are we smart because we can learn things? Are we smart because we can do math problems, or are we smart because we can read?’ You know, there’s just, like, a very wide spread of different understandings of why we have this seemingly special characteristic that makes us superior to other species. And so, in the context of corporations, AGI has largely been a marketing term because there isn’t actually any scientific consensus around where they are actually headed,” concluded Karen as she dove into her field of expertise—OpenAI.
“So in that vacuum of consensus, they can just define AGI as wherever they need to go. And so OpenAI is famous for this, in that, you know, they’ve used many, many different definitions of AGI over the last decade of their existence. Sam Altman, when he’s talking with consumers and wants you to buy the product, is like, ‘AGI is going to be this magical digital assistant that will do anything that you want.’ When he’s talking with the U.S. government to try and ward off regulation, he says, ‘AGI is going to cure cancer and solve climate change.’ When he’s talking with Microsoft to strike a deal, The Information reported that AGI is defined as a system that will generate $100 billion of revenue. And when he is on OpenAI’s website, the official definition is, ‘Highly autonomous systems that outperform humans in most economically valuable work.’ So that is the labor-automation definition, or they’re trying to automate away the labor that people get paid the most for,” said Karen, having laid out four different definitions of AGI. According to her, it was simply a matter of cherry-picking whichever definition best served the purpose at hand.
The moderator then asked about all the hype around AI, with CEOs making statements about AI winning awards and doing manual labor, and Anil interjected to clarify. Taking the lead from Karen’s last remark about companies using agendas to justify the development of AI, he characterized today’s AIs as narrow intelligence: they can perform as desired at only one thing, whereas humans can create abstractions, abstract ways of thinking about one problem, and apply them to a completely different domain. That might be regarded as a type of general intelligence, which is the goal of these AGI researchers. Anil confidently concluded, “We are nowhere close to that.”
It was an affirmation the audience was well acquainted with: the field would like to reach that point, but has not yet. At this point, the moderator changed tack to touch upon the books both panelists had published on AI, asking whether the hype had genuinely grown in the interval between their respective works going to print.
After failing to make an analogy between Bangalore and San Francisco traffic, Karen said, “Originally, AI started as a scientific discipline.” The audience leaned further back in their seats as she continued, “...and it was driven by a scientific curiosity of, ‘Can machines think? Can we actually get computational infrastructure to act as biological or neurological infrastructure?’ But what we’ve seen happen in the last decade-plus is the huge infusion of large amounts of capital into this pursuit and a lot of ideological drives that are motivating this pursuit as well. And so now, I think for me, there’s a much more fundamental question of, ‘Should we actually be trying to create so-called everything machines or generalized intelligence when there’s now this huge political and ideological apparatus and capital apparatus that is trying to pump it towards things that we actually do not want to see, like large-scale job automation, the consolidation of enormous amounts of power, the return of what I call the empires of AI, and a return back to a much more hierarchical global order rather than a more democratic and inclusive one?’ And to me, like, the scientific curiosity, which was a really nice philosophical pursuit, the thinking about, ‘Oh, can we actually, you know, understand more about our intelligence by trying to recreate this?’ We now have to set aside those interesting questions and recognize that those questions are now being codified by a political-economic order that is driving the pursuit of AGI towards very, very dark consequences.
“And so for me, actually, one of the things that I really advocate for is that we should not, in fact, be building AGI or trying to pursue building AGI; we should be focused, actually, on narrow intelligence. Because one of the challenges of that… I mean, there are many challenges for why we shouldn’t focus on AGI.”
Karen then explained why narrow intelligence can be made safe in a way an everything machine cannot, and why companies nevertheless prefer to chase the latter. According to her, “When companies say that they’re building an everything machine, and they tell consumers, ‘We have made this everything machine, or we are on the trajectory of creating this everything machine, and you can do anything you want with it and it will solve any of your problems,’ what happens then is that people start using it in all these different ways that the machine is not actually designed for. From the company’s perspective, they cannot make a technology like this safe, because they cannot anticipate all of the different ways that this technology will be abused, because they have communicated to the public that there is an infinite surface area of things that you can do with this technology. Whereas if you have narrow AI systems, the benefit is that you have these systems that are well-scoped, so that you can build, test, and deploy them on very well-scoped problems. And you can then, as a company developing this technology, anticipate all of the different ways that this technology should or should not be used, and then shore up the problems of its potential abuses in advance and make the technology safer once it’s diffused across the public.”
Then the moderator asked the other panelist for his views, along with a question about the direction of AI in India, to which he replied, “Maybe the India direction we can come back to later. But I do see an essential conflict between what might be happening in companies versus just human curiosity. Like, you know, one of the things that keeps getting pointed out by people who don’t think AI has done much is to keep saying something like, ‘Oh, you know, AIs don’t demonstrate creativity and curiosity, and we value that in humans.’ If we value that in humans, we are also not going to be able to stop this essential, curious… Almost everything that we have done in science and technology has come out of human curiosity of wanting to find out... So I see a tension between that, which is playing out in labs around the world. And not just companies. I mean, this, like we said, this effort at developing machine learning goes back to the 1940s and 50s, and it has been an incremental effort from then on.” Anil added that companies would keep going forward at their current pace, and that the scary part was people growing worried even as they condemned AI; he himself was concerned, nervous, and occasionally terrified about how the AI scene might develop.
Karen was then asked a question that only she, as an expert on OpenAI, could answer: what would OpenAI be without Sam Altman at the helm? “Altman is singular in his ability to fundraise when there is no viable business model,” she said after recalling OpenAI’s now-defunct original framework. “And this was something that one of his mentors, Paul Graham, identified very early on when he (Sam) was starting off as a founder at another company, before he started OpenAI, one that was ultimately a failure. Paul Graham said, ‘You know, typically, you need to show results in order to get people to believe you, unless you’re Sam Altman.’
“And so from that perspective, I think Altman was singular in getting OpenAI to where it is today and launching the race in this very aggressive, money-fueled dynamic. At the same time, I do not think that if we were to now swap Altman out for someone else, it would solve the problems that I articulate in my book about the power consolidation that’s happening and the sheer amount of capital that’s just being, you know, burned effectively for an unknowable destination. And the reason is because now there is a power structure that has been constructed where OpenAI and all of these other companies with tech leaders at their helm are able to make decisions as a tiny group of people with a very narrow representation of the world among their worldviews; they’re able to make, you know, 60 decisions in an hour that affect billions of people around the world, and that’s not actually going to change if Altman is fired again and, you know, another person steps in.” The audience responded with a round of applause, but the arc was not yet complete.
“And so to me, a lot of the problems that I see with OpenAI, with the AI industry, and with the way that it’s operating now need to be solved at the root cause, which is the way in which these companies are currently operating as empires, as extractive and exploitative entities that are accruing an enormous amount of economic and political power through the dispossession of the majority,” continued Karen as she encouraged the audience to figure out ways to challenge the companies through regulation, pushback, and public protest.
It was then time for the moderator to put the India question to Karen: what do the huge data centers, and further developments of that kind, mean for the country?
“Yeah, so maybe first I’ll take a step back before we get to the India context,” she responded and provided some more context on how AI data centers work.
“Within the machine learning branch, there’s also a range of different types of models. And really, what we’ve seen in the last few years is that we went from small machine learning models to absolutely massive-scale machine learning models, most colloquially now known as large language models, like ChatGPT. This scale is unlike anything that we’ve seen, like, in history,” added Karen, going on to explain what these models consume, the chips that make them work, their past and present, the supercomputer facilities, and the current rollout of AI infrastructure.
“And so India has become a prime target for Silicon Valley companies, because right now, if you look at the global distribution of data centers, the U.S. has the most data centers. And Virginia, as a state in particular, has 15% of the world’s data centers. They have run out of land, energy, and water in the U.S. for supporting more of this infrastructure. So you see these companies now scoping, scouting out these resources in the rest of the world.
“That’s why they’ve been striking lots of deals in the Middle East, for example. They’ve been trying to go into Latin America, which I talk about pretty extensively in my book. And now they’re trying to bring enormous amounts of capital into India to also build this infrastructure.”
According to Karen, the energy generation needed to run the proposed data centers does not even exist yet, and most of the water required to cool these facilities is fresh water. According to a recent investigation, two coal plants, owned by Tata and Adani respectively, have had their scheduled retirements put on hold because of the data centers’ anticipated energy demand. In that sense, the trajectory of AI in India may come to look much like the rest of the world’s in the coming years.