AI Manifesto

Posted on Feb 18, 2025

I’ve used a lot of AI in the last few years, and I’ve written a lot about it too. Here are some of my earlier posts on AI.

  1. Improvements in Artificial Intelligence, December 9, 2021
  2. I wonder how this AI thing is going to shape up, March 3, 2023
  3. How does GPT work? Understanding Generative AI Models, April 26, 2023
  4. Four AI Chatbots other than ChatGPT, November 27, 2023
  5. OpenAI’s GPT is a terrific idea, February 8, 2024

When ChatGPT (GPT-3.5) took the world by storm, I was actually offline in a Vipassana course. I only learned about it a while later, but it fascinated me that it could write a poem, albeit poorly. I could no longer make the joke, “oh, it can’t write a poem anyway”. Creative jobs were suddenly up for grabs.

However, the more I think about it, the more I find AI complementing my intelligence rather than replacing it. My own curiosities that once required a lot of effort to answer can now be answered quickly and reliably (thanks, Perplexity!). Obscure technical questions on topics that no one except me cares about (like learning the Kaithee script and the Hindustani language) are so much better with Claude and ChatGPT. I often find myself checking for updated information on Reddit Answers or Grok’s tweet search. Image generation by Grok is too good. Coding (especially debugging) is so much better today than it was without these LLMs. I have built projects with extensive help from LLMs, like Spotify Randomizer and jailbreaking my Kindle.

Some of the newer writings on this website may have been “proofread” with the help of an AI. So far, I haven’t found any AI that can write at my level of English or Hindustani. But it’s amazing for correcting spelling mistakes and reorganizing text for better flow, and I use it for that. Which tool specifically varies quite a bit over time and space.

But, as all authors say in their books, “all mistakes are mine alone”. If I tell you something because I had a faulty source, explaining that the source was faulty isn’t enough. If I present something to you, it carries my word of confirmation and verification. I might still make an honest mistake, and in that case, PLEASE correct me.

I think it goes back to what Douglas Adams wrote in 1999 about the internet:

So people complain that there’s a lot of rubbish online,… or that you can’t necessarily trust what you read on the web. Imagine trying to apply any of those criticisms to what you hear on the telephone. Of course you can’t ‘trust’ what people tell you on the web anymore than you can ‘trust’ what people tell you on megaphones, postcards or in restaurants. Working out the social politics of who you can trust and why is, quite literally, what a very large part of our brain has evolved to do. For some batty reason we turn off this natural scepticism when we see things in any medium which require a lot of work or resources to work in, or in which we can’t easily answer back – like newspapers, television or granite. Hence ‘carved in stone.’ What should concern us is not that we can’t take what we read on the internet on trust – of course you can’t, it’s just people talking – but that we ever got into the dangerous habit of believing what we read in the newspapers or saw on the TV – a mistake that no one who has met an actual journalist would ever make. One of the most important things you learn from the internet is that there is no ‘them’ out there. It’s just an awful lot of ‘us’.

Douglas Adams’ prescient observations about internet skepticism parallel our current relationship with AI in striking ways. Just as he noted that we shouldn’t inherently trust internet content simply because it exists in a medium that requires significant resources to operate, we shouldn’t automatically trust (or distrust) AI outputs merely because they come from sophisticated models trained on vast datasets. Just as Adams pointed out there is no “them” on the internet, just lots of “us,” we should remember that AI systems are ultimately tools created by and for humans, reflecting our collective knowledge, biases, and limitations.

Deciding what you can and cannot trust will be the job of institutions, as Yuval Noah Harari says in his interview with Kara Swisher. Thus, trust the institutions. To that effect, I have also created a Chrome extension with which you can select any text and ask Perplexity or Grok to fact-check it for you (for free). By making fact-checking more accessible, we’re not just helping individuals make better decisions; we’re contributing to a broader ecosystem of trust and verification in the AI age.

Another side note on AI text: these language models were trained on all¹ human text, and many people consider this an evil act in itself. I’m not so sure of my position. In one way, everything humans have done so far has been to advance humanity even further. No one gave Aristotle or Kalidasa money when their works were translated or copied into many languages. Today, the problem has arisen primarily because of the speed at which this transmission happens. Yesterday’s writings might end up in a training dataset today. But philosophically, nothing has changed.

Ancient authors often didn’t seek personal attribution or compensation. Rather, they were supported by patronage from royal courts, the way media agencies pay journalists today. Just as Patanjali’s Yoga Sutras advanced our knowledge of medicine and surgery without any compensation to their author, I am certain the systems being created now will result in even better creative works, which will in turn fuel the next systems, supercharging creativity.

Thus, our works should be considered a homage to future civilizations rather than mere intellectual property to be guarded.

I know it is easy for me to be philosophical; my bread and butter isn’t writing. Thus, I expect a lot of people, especially independent journalists and writers, to completely disagree with me.

But this is my AI manifesto for this website. For this website (and for this website alone), I give OpenAI, Anthropic, Google, DeepSeek, and everyone else who wants to use my work to create something new express permission to use it however they see fit. I believe their creations will help future humanity, and thus I will have fulfilled my duty to make the world a better place.


This was inspired by Damola Morenikeji’s AI Manifesto. I’ve also requested that it be added to the list of AI manifestos.


  1. Not all of it literally, but figuratively. We are running out of training data to scale on, and data curation (selecting the right set of data to fine-tune the model) is now more rewarding than just dumping in more data.↩︎