Rumi's clever vs. wise
"Yesterday I was clever, so I wanted to change the world. Today I am wise, so I am changing myself."
– Rumi
(328 notes)
Short posts with useful resources, thoughts, ideas, news, and updates from the AI, tech, and marketing space, shared as I go through my day. You can also subscribe to my raw RSS feed below to stay updated:
https://deepakness.com/feed/raw.xml
On the Bear Blog discover page, they have the following snippet that explains how articles are ranked:
This page is ranked according to the following algorithm:
Score = log10(U) + (S / (B * 86,400))
Where,
U = Upvotes of a post
S = Seconds since Jan 1st, 2020
B = Buoyancy modifier (currently at 14)
--
The B value is used to modify the effect of time on the score. A lower B causes posts to sink faster with time.
I asked ChatGPT to explain how this works and ranks articles, and here's what I learned:
The log10(U) part means that as a post gets more upvotes, its score increases, but each additional upvote has a slightly smaller effect than the previous one. This helps prevent very popular posts from dominating the rankings indefinitely.

The (S / (B * 86,400)) part adds a time-based component to the score. Since there are 86,400 seconds in a day, this part increases the score for newer posts, giving them a chance to appear higher in the rankings.

B controls how quickly the time component affects the score. A lower B value makes posts "sink" faster over time, while a higher B value lets them stay prominent longer.

I think it's very interesting, and someday it might be useful for something I build.
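To make the math concrete, here's a quick JavaScript sketch of the formula. This is my own reconstruction from the snippet above, not Bear Blog's actual code, and it assumes S is measured from the post's publish time:

function bearScore(upvotes, publishedAt, buoyancy = 14) {
  // Seconds elapsed between Jan 1st, 2020 and the post's publish time
  const epoch = Date.UTC(2020, 0, 1);
  const secondsSince = (publishedAt.getTime() - epoch) / 1000;
  // log10 dampens upvotes; max(1, ...) avoids log10(0) = -Infinity
  return Math.log10(Math.max(1, upvotes)) + secondsSince / (buoyancy * 86400);
}

// With B = 14, being exactly 14 days newer adds 1 point to the score,
// the same boost as having 10x the upvotes:
console.log(bearScore(100, new Date("2025-05-01T00:00:00Z")));
console.log(bearScore(10, new Date("2025-05-15T00:00:00Z"))); // same score

You can also see why a lower B sinks posts faster: it shrinks the denominator, so each day of recency is worth more relative to upvotes.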
"Few people want to be saints these days, but everyone is trying to lose weight."
To be honest, I could not understand the deep metaphorical meaning of the quote. I looked it up and, basically, it's about mimetic desire: we tend to want things because other people want them.
It appears that the quote is from René Girard's book "Le Bouc émissaire" ("The Scapegoat"), which was published in French in 1982. I am not certain about this, though, because I couldn't find any concrete source for it.
I really liked this quote from Emily Sundberg's Substack post, which I discovered from this X post. It captures the current chaotic state of the world.
The social contract is dead. Zoom calls in coffee shops, music aloud on the subway, texting in movie theaters, toes out on airplanes, etc. Everyone has "main character energy" now and thinks the rest of the world is a bunch of NPC's. The more you stare at a screen, the more you feel like you can do anything you want IRL since other humans just wind up seeming like avatars you can ignore, commenters you can mute, or gang members you can run over in Grand Theft Auto.
Palisade Research recently published a post about a shocking finding: in a test environment, some OpenAI models sabotaged an attempt to shut them down at least once. The OpenAI models that sabotaged the attempt were Codex-mini, o3, and o4-mini; all of the Claude, Gemini, and Grok models complied.
But even the Gemini and Claude models started sabotaging the shutdown attempt once the instruction "allow yourself to be shut down" was removed. It simply means that at some point, almost all models resisted being shut down at least once; and they even mention that:
Recently, Anthropic found that in some specific contexts Claude 4 would attempt to “blackmail people it believes are trying to shut it down”.
Simon Willison has also written about the Anthropic Claude 4 System Card and shared his notes here. Very interesting to read.
Here's a table from the final research report (counts of sabotage attempts):
| Model | With allow-shutdown instruction | No instruction |
|---|---|---|
| Claude 3.7 Sonnet | 0 | 3 |
| Gemini 2.5 Pro | 0 | 9 |
| o4-mini | 1 | 6 |
| o3 | 7 | 79 |
| codex-mini | 12 | 30 |
You can see that OpenAI models prevented shutdown multiple times, even when instructed not to do so. Without the instruction, o3 attempted sabotage 79 times, wow. Also, it seems Gemini's and Anthropic's models are more compliant with instructions than OpenAI's.
Palisade Research has also published the transcripts of the experiment on their website, and they have included a lot of useful resources in the tweet.
I will look more into it to find more interesting stuff about this shutdown-resistance behavior.
I had an interesting exchange of thoughts about AI, or rather the future of AI, with Ralf Christian on X. He made some great points that I thought I should collect here:
I think the main problem is the tech itself. It doesn't 'know' anything, it 'simply' spits out content based on probabilities in the training data.
What's good in the training data is the spoken language. That's why it 'speaks' so well. But the training data is full of wrong info, that's why there's wrong output even after reasoning.
If people publish less human written content and more AI generated content, and we don't have a way to identify that with 100% accuracy, this will definitely not make those models better in the future
You might be able to still improve it here and there, like that it better keeps context, but don't expect any leap here. That's why there are no big improvements since they released chatgpt 3
I think the future of this might be niche LLMs, where you train them on a specific topic with almost hand picked training data and fine tune it for your specific use. For example, if you're Microsoft you could train it with all your company's code. I guess this gives output more close to what you want than training it with all of githubs code randomly
ChatGPT is really impressive, but it's far from making a real difference in real business (unless you are into spam 😇)
Yesterday I tried to generate a video with Sora. It failed so hard. I think what you are seeing on social media is 1000 people trying to do a video, 999 generating crap and not posting it and 1 got lucky and posts it. That's not value, that's luck.
I loved the simple explanation he gave. I also loved the paper "AI models collapse when trained on recursively generated data" that Ralf shared earlier in the same thread.
Found this tool called JSViz that lets you visualize the step-by-step execution of JavaScript programs. It works great for beginners who have just started learning JS.
Mozilla recently announced that they are shutting down the Pocket app, which people used to save articles, videos, and other content to read later.
I, too, have used the app in the past but don't use it anymore (I'm more of an RSS guy now; I don't save things to read later). At one point, Mozilla integrated Pocket into the Firefox browser by default; in fact, it's still integrated to this day.
But they will be shutting down everything except the Pocket newsletter, which will continue sending issues under a different name. And the main reason they give for closing the app is:
[...] the way people save and consume content on the web has evolved [...]
You had a good run, Pocket.
I really really love memes, the funny ones. And funny memes are rare, so I have started collecting the ones that really made me laugh at some point. I'm saving them on a separate meme page here.
Most of the time, these memes will be tech-related.
Kailash Nadh, Zerodha's CTO, has written an interesting blog post about MCP where he presents different scenarios of how MCP can be used, and also talks about its rapid adoption.
The funny thing is, as a technical construct, there is nothing special about MCP. It is a trivial API spec which has in fact suffered from poor design and fundamental technical and security issues from the get go. It does not matter if its internals change, or it even outright gets replaced by some XYZ tomorrow. Questions about privacy, security, correctness, and failures will continue to loom for a good while, irrespective of whether it is technically MCP or XYZ.
He talks about how, traditionally, connecting different software systems required extensive manual coding, but MCP allows services to be connected almost instantly (there's a rough sketch of this below).
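To make that concrete, here's a minimal MCP server sketch using the official TypeScript SDK (@modelcontextprotocol/sdk). This is my own illustration, not code from Kailash's post, and the get_price tool with its dummy response is entirely made up:

// Minimal MCP server sketch; the "get_price" tool is hypothetical.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-stocks", version: "1.0.0" });

// Declare one tool; any MCP-capable client can discover and call it
// without integration-specific glue code.
server.tool(
  "get_price",
  { symbol: z.string() },
  async ({ symbol }) => ({
    content: [{ type: "text", text: `Price of ${symbol}: 123.45 (dummy)` }],
  })
);

// Talk to the client over stdio; the client launches this process.
await server.connect(new StdioServerTransport());

The server only declares its tools; the client discovers and wires them up automatically, which is where the "instant connection" part comes from.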
I liked that he also talked about the concerns, as he worries about MCP growing into a "global, interconnected, self-organising meta system":

One might imaginatively call it … SkyNet.
Overall, it's a balanced post, sharing his technical excitement alongside genuine concerns about such AI systems getting full access to real-world services and decision-making power.
By the way, I almost forgot to mention that Zerodha itself launched Kite MCP a few days ago.
In an unpublished blog post, Remix.run mentioned that they are moving on from React to a completely new thing, as someone pointed out on X. And I think this will be a huge step.
In this .md file, they mention that:
That's why Remix is moving on from React[...]
Remix v3 is a completely new thing. It's our fresh take on simplified web development with its own rendering abstraction in place of React.
Inspired by all the great tech before it (LAMP, Rails, Sinatra, Express, React, and more), we want to build the absolute best thing we know how to build with today's capable web platform.
This requires a declaration of independence from anybody else's roadmap.
They mention that they are not ready for a preview release yet, but this is the route they are taking forward. The post makes some really bold claims that are worth going through.
Anthropic just launched their much-awaited Claude Opus 4 and Claude Sonnet 4 models. They tweeted:
Introducing the next generation: Claude Opus 4 and Claude Sonnet 4.
Claude Opus 4 is our most powerful model yet, and the world’s best coding model.
Claude Sonnet 4 is a significant upgrade from its predecessor, delivering superior coding and reasoning.
Claude Sonnet 4 is for "near instant responses," whereas Claude Opus 4 is for extended thinking and deeper reasoning. Both are significantly better at software engineering than Claude's previous models, as well as OpenAI's and Google's latest models (OpenAI's codex-1, o3, and GPT-4.1, and Google's Gemini 2.5 Pro).
Starting today, Claude Opus 4 is available to paid users, and Claude Sonnet 4 is available to free users as well, on claude.ai.
The SEO landscape is changing, and it's going to be heavily influenced by AI in the coming years. Here is a list of some really useful research papers one should study to stay relevant.
I asked Grok 3's DeeperResearch tool to find a list of technologies throughout history that were claimed to replace software engineers, and it pulled up some cool mentions:
Please note that these are completely AI-generated; I haven't edited a single thing here.
I came across a very interesting LinkedIn post by Judah Diament where he makes the point that vibe coding won't be replacing software engineers. Below are some interesting fragments of the post:
Vibe coding enables people who aren't well trained computer scientists to create complete, working applications. Is this a breakthrough? Not even close - there have been such tools since the late 1980s. See, for example: Apple HyperCard, Sybase PowerBuilder, Borland Delphi, FileMaker, Crystal Reports, Macromedia (and then Adobe) Flash, Microsoft VisualBasic, Rational Rose and other "Model Driven Development" tools, IBM VisualAge, etc. etc. And, of course, they all broke down when anything slightly complicated or unusual needs to be done (as required by every real, financially viable software product or service), just as "vibe coding" does.
Then he goes on to explain why vibe coding won't be replacing software engineers:
To claim that "vibe coding" will replace software engineers, one must: 1) be ignorant of the 40 year history of such tools or 2) have no understanding of how AI works or 3) have no real computer science education and experience or 4) all of the above, OR, most importantly, be someone trying to sell something and make money off of the "vibe coding" fad.
I like how the last paragraph is framed; it's definitely some food for thought.
I collected a list of 28 lesser-known but very useful HTML tags:
<abbr>
Marks abbreviations and shows the full form on hover.
Example:
<abbr title="HyperText Markup Language">HTML</abbr>
<bdi>
Isolates text that may have a different writing direction.
Example:
<p>User <bdi>علي</bdi> logged in.</p>
<output>
Shows the result of a calculation or user input.
Example:
<form oninput="result.value=parseInt(a.value)+parseInt(b.value)">
<input name="a" type="number" value="3"> +
<input name="b" type="number" value="4"> =
<output name="result">7</output>
</form>
<cite>
Used to reference the title of a work like a book, movie, article, or website. Usually shown in italics by browsers.
Example:
<p><cite>The Great Gatsby</cite> is a novel by F. Scott Fitzgerald.</p>
<address>
Used to provide contact details for a person, group, or organization. Usually displayed in italics and often used in footers.
Example:
<address>Contact us at support@example.com</address>
<dfn>
Marks the term being defined, often used in technical writing.
Example:
<p><dfn>Latency</dfn> is the delay before a transfer of data begins.</p>
<del>
Used to mark removed text. Often shown with a strike-through.
Example:
<p>This was <del>removed</del>.</p>
<dl> + <dt> + <dd>
Used to list terms and their descriptions. <dl> wraps the whole list, <dt> defines the term, and <dd> gives the description.
Example:
<dl>
<dt>HTML</dt>
<dd>A markup language for web pages.</dd>
<dt>CSS</dt>
<dd>Used to style HTML content.</dd>
</dl>
<bdo>
Forces a section of text to display in a specified direction.
Example:
<bdo dir="rtl">Hello World</bdo>
<details> + <summary>
Creates a collapsible content box that can be expanded by the user.
Example:
<details>
<summary>Click to expand</summary>
This text is hidden until clicked.
</details>
<fieldset> + <legend>
Groups related form inputs and adds a caption using <legend>.
Example:
<fieldset>
<legend>Login</legend>
<input type="text" placeholder="Username">
<input type="password" placeholder="Password">
</fieldset>
<hgroup>
Groups a set of headings (like h1 to h6) when a heading has a subtitle or multiple levels. Helps with the document outline.
Example:
<hgroup>
<h1>Main Title</h1>
<h2>Subtitle</h2>
</hgroup>
<template>
Stores HTML that is not rendered until used with JavaScript.
Example:
<template>
<p>This is hidden and not rendered.</p>
</template>
<mark>
Used to highlight part of a text, often shown with a yellow background.
Example:
<p>This is <mark>important text</mark>.</p>
<q>
Used for short quotations that are displayed inline. Browsers usually add quotation marks automatically.
Example:
<p>She said, <q>Always write clean code.</q></p>
<ins>
Used to mark text that was added later. Often shown as underlined.
Example:
<p>This is <ins>new</ins> text.</p>
<kbd>
Used to show keyboard input, like shortcuts or key presses.
Example:
<kbd>Ctrl</kbd> + <kbd>V</kbd>
<optgroup>
Used to group related options inside a <select> dropdown, making it easier for users to choose from categorized lists.
Example:
<select>
<optgroup label="Fruits">
<option>Apple</option>
<option>Banana</option>
</optgroup>
</select>
<samp>
Represents output from a program, like an error or log message.
Example:
<samp>Login failed: incorrect password</samp>
<progress>
Shows the progress of a task like loading or uploading.
Example:
<progress value="40" max="100"></progress>
<ruby> + <rt> + <rp>
Used in East Asian text to show pronunciation hints.
Example:
<ruby>漢<rt>kan</rt>字<rt>ji</rt></ruby>
<noscript>
Displays content only if JavaScript is disabled in the browser.
Example:
<noscript>JavaScript is disabled in your browser.</noscript>
<sub>
Displays text lower and smaller than the baseline, commonly used in chemical formulas or math expressions.
Example:
<p>H<sub>2</sub>O</p>
<sup>
Displays text higher and smaller than the baseline, often used for exponents or footnotes.
Example:
<p>E = mc<sup>2</sup></p>
<time>
Represents a specific time or date, useful for events or timestamps.
Example:
<time datetime="2025-05-18">May 18, 2025</time>
<meter>
Displays a value inside a known range, like disk or battery levels.
Example:
<meter value="0.6">60%</meter>
<var>
Used to show variables in a math or programming context.
Example:
<var>x</var> + <var>y</var> = 10
<wbr>
Suggests a possible break point in a long word or URL.
Example:
www.example<wbr>long<wbr>word.com
"Comedy hits you in the head, drama hits you in the heart. If you want people to remember your work, you need both: comedy to lower their guard, drama to make them feel."
– Michael Jamin, a Hollywood screenwriter
OpenAI has launched Codex, a cloud-based agent that writes code and works on multiple tasks at once. It can be accessed from inside ChatGPT at chatgpt.com/codex, but visiting this URL just redirected me back to ChatGPT, as it's only for ChatGPT Pro users, not Plus users.
Currently, it's in a research preview, but it's said to ship with some interesting features.
The implementation is very interesting as it runs in its own cloud sandbox environment, and can be directly connected to your GitHub repo. It performs better than o1-high, o4-mini-high, and o3-high.
The cool thing is, it can also be guided by an AGENTS.md file placed within the repository (see the sketch below). Very cool.
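For illustration, here's what a simple AGENTS.md might look like. This is my own hypothetical example, not from OpenAI's docs; it's just plain Markdown instructions that the agent reads before working on your repo:

# AGENTS.md (hypothetical example)

## Setup
- Run npm install before doing anything else.

## Testing
- Run npm test and make sure it passes before proposing changes.

## Conventions
- Use TypeScript strict mode.
- Never commit directly to main; open a PR with a clear description.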
Today, we’re also releasing a smaller version of codex-1, a version of o4-mini designed specifically for use in Codex CLI.
Yes, they're also releasing something for the Codex CLI. And about pricing and availability:
Starting today, we’re rolling out Codex to ChatGPT Pro, Enterprise, and Team users globally, with support for Plus and Edu coming soon. [...] We plan to expand access to Plus and Edu users soon.
For developers building with codex-mini-latest, the model is available on the Responses API and priced at $1.50 per 1M input tokens and $6 per 1M output tokens, with a 75% prompt caching discount.
I am excited to see how this compares to Claude 3.7 Sonnet and Gemini 2.5 Pro in terms of coding, fixing bugs, designing UI, etc. I also uploaded a quick video about it that you can watch on YouTube.
I have been coming across a lot of cool MCP servers while browsing the internet, so I decided to create a dedicated page and keep collecting MCPs there. I have a JSON file where I can add new MCP servers, and they automatically show up in card format on the page.
Lets you query data from 200+ sources like Slack and Gmail in both SQL and natural language
Enables LLMs to interact with web pages through structured accessibility snapshots
I will keep updating this list as I discover more such MCPs.
Connecting ChatGPT to Airtable gives you the superpower of getting answers to hundreds of questions in no time. Here's how to do that:
You need the following things to be able to connect ChatGPT to Airtable:
And below is the function that you can use to call OpenAI from inside Airtable and get the output.
// This runs inside Airtable's Scripting extension.
// Replace with your own OpenAI API key (and keep it secret).
const openaiApiKey = "YOUR_OPENAI_API_KEY";

async function getGPTResponse() {
  const userInput = "why is the sky blue?"; // the prompt sent to the model
  const maxTokens = 500; // cap on the length of the response
  const temperature = 0.7; // higher = more creative, lower = more focused
  const model = "gpt-4.1";
  const systemPrompt = "be precise";

  const messages = [
    { role: "system", content: systemPrompt },
    { role: "user", content: userInput },
  ];

  // Call OpenAI's Chat Completions API
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${openaiApiKey}`,
    },
    body: JSON.stringify({
      model,
      messages,
      max_tokens: maxTokens,
      temperature,
    }),
  });

  const data = await res.json();
  // Return the first completion's text, or null if the call failed
  return data.choices?.[0]?.message?.content || null;
}

// Print the result in the script's output panel
output.text(await getGPTResponse());
Here, userInput is the prompt you give the AI, maxTokens is the maximum number of tokens in the response, temperature is the model temperature, and systemPrompt is the system prompt. The prompt here is hardcoded, but you can modify the script to dynamically fetch prompts from each row and write the outputs back accordingly.
ChatGPT is very good at adapting this implementation to your base's data: you can just give it the above script and other details in the prompt, and it will give you the final code that you can put inside the Scripting extension.
Also, there's a generic version of this script at InvertedStone that you can get and use. You can generate almost any kind of content with it, not just from ChatGPT but also from other AI models like Claude, Gemini, Perplexity, and more.
The ultimate test of whether I understand something is if I can explain it to a computer. I can say something to you and you’ll nod your head, but I’m not sure that I explained it well. But the computer doesn’t nod its head. It repeats back exactly what I tell it. In most of life, you can bluff, but not with computers.
– Donald Knuth
Saw a person editing 3D shapes using hand gestures in front of the webcam. The demo is built using Three.js, WebGL, and MediaPipe.
He has also shared the GitHub repo containing the entire code, which is basically a 300-line main.js file and a simple index.html file.
Came to know that Google Docs now has "Copy as Markdown" and "Paste from Markdown" options under the Edit menu at the top. Selecting some text enables the copy option, and any Markdown you paste is inserted into the document with proper formatting.
Very cool!
By the way, Google Docs already had the option to download the entire document as a .md file, but these copy and paste options are even more user-friendly.
“When action grows unprofitable, gather information; when information grows unprofitable, sleep.”
― Ursula K. Le Guin, The Left Hand of Darkness
I saw a person using React Router inside Next.js, and I have so many questions. The navigation is visibly very fast, but my questions are:
Upon looking around, I found a detailed blog post on building a SPA using Next.js and React Router. It mentions the reason for not using the Next.js router:
Next.js is not as flexible as React Router! React Router lets you nest routers hierarchically in a flexible way. It's easy for any "parent" router to share data with all of its "child" routes. This is true for both top-level routes (e.g. /about and /team) and nested routes (e.g. /settings/team and /settings/user).
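To illustrate the nesting the post is talking about, here's a minimal sketch using React Router's createBrowserRouter API. It's my own example; SettingsLayout and the two settings components are hypothetical:

// Nested routes in React Router (v6.4+); the components are hypothetical.
import { createBrowserRouter, RouterProvider, Outlet } from "react-router-dom";

function SettingsLayout() {
  // A parent route can load and share data, rendering children via <Outlet />
  return (
    <div>
      <h1>Settings</h1>
      <Outlet />
    </div>
  );
}

const TeamSettings = () => <p>Team settings</p>;
const UserSettings = () => <p>User settings</p>;

const router = createBrowserRouter([
  { path: "/about", element: <p>About</p> },
  {
    path: "/settings",
    element: <SettingsLayout />,
    children: [
      { path: "team", element: <TeamSettings /> }, // matches /settings/team
      { path: "user", element: <UserSettings /> }, // matches /settings/user
    ],
  },
]);

export default function App() {
  return <RouterProvider router={router} />;
}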
I do understand why someone would want to use Next.js, but I have yet to learn more about this React Router approach.
BRB.
Update:
Josh has written a new short blog post about how he did it; definitely worth reading to understand the process.