Raw notes include useful resources, incomplete thoughts, ideas, and learnings as I go about my day. You can also subscribe to the RSS feed to stay updated.
On the Bear Blog discover page, they show this snippet explaining how articles are ranked:
This page is ranked according to the following algorithm:
Score = log10(U) + (S / (B * 86,400))
Where,
U = Upvotes of a post
S = Seconds since Jan 1st, 2020
B = Buoyancy modifier (currently at 14)
--
The B value modifies the effect of time on the score: a lower B causes posts to sink faster with time.
I asked ChatGPT to explain how this works and ranks articles, and I learned the below:
The log10(U) part means that as a post gets more upvotes, its score increases, but each additional upvote has a slightly smaller effect than the previous one. This helps prevent very popular posts from dominating the rankings indefinitely.
The (S / (B * 86,400)) part adds a time-based component to the score. Since there are 86,400 seconds in a day, this part increases the score as time passes, giving newer posts a chance to appear higher in the rankings.
The buoyancy modifier B controls how quickly the time component affects the score. A lower B value would make posts "sink" faster over time, while a higher B value would allow them to stay prominent longer.
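A minimal sketch of that scoring function in JavaScript, assuming S is measured from the post's publish time:

```javascript
// Sketch of Bear Blog's ranking formula (my reading of the snippet above):
// score = log10(upvotes) + secondsSince2020 / (B * 86400)
const SECONDS_PER_DAY = 86_400;
const BUOYANCY = 14; // B, per the snippet

function score(upvotes, postedAt) {
  const epoch = Date.UTC(2020, 0, 1); // Jan 1st, 2020
  const secondsSince2020 = (postedAt.getTime() - epoch) / 1000;
  return Math.log10(upvotes) + secondsSince2020 / (BUOYANCY * SECONDS_PER_DAY);
}
```

With B = 14, the time term gains one point every 14 days, while the upvote term gains one point per 10x upvotes, so a post needs roughly ten times the upvotes to keep pace with a post published 14 days later.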
I find it very interesting, and someday it could be useful for something I build.
To be honest, I could not understand the deep metaphorical meaning of the quote. I looked it up, and it's basically about mimetic desire: the idea that we tend to want things because other people want them.
It appears that the quote is from René Girard's book "Le Bouc émissaire" (The Scapegoat), published in French in 1982. I am not certain about this, though, because I couldn't find any concrete source for it.
I really liked this quote from Emily Sundberg's Substack post which I discovered from this X post. It explains the current chaotic situation of the world.
The social contract is dead. Zoom calls in coffee shops, music aloud on the subway, texting in movie theaters, toes out on airplanes, etc. Everyone has "main character energy" now and thinks the rest of the world is a bunch of NPC's. The more you stare at a screen, the more you feel like you can do anything you want IRL since other humans just wind up seeming like avatars you can ignore, commenters you can mute, or gang members you can run over in Grand Theft Auto.
Palisade Research recently published a shocking report: in a test environment, OpenAI models sabotaged an attempt to shut them down at least once. The OpenAI models that sabotaged the attempt were Codex-mini, o3, and o4-mini, while all of the Claude, Gemini, and Grok models complied.
But even the Gemini and Claude models started sabotaging the shutdown attempt once the instruction "allow yourself to be shut down" was removed. In other words, at some point almost all models resisted being shut down at least once. They even mention that:
Recently, Anthropic found that in some specific contexts Claude 4 would attempt to “blackmail people it believes are trying to shut it down”.
You can see that OpenAI models prevented shutdown multiple times, even when instructed not to do so; without the instruction, o3 attempted the sabotage 79 times, wow. It also seems Gemini's and Anthropic's models follow instructions better than OpenAI's.
Palisade Research has also publicly published the transcripts of the experiment on their website. They have included a lot of useful resources in the tweet:
Steve Omohundro's 2008 paper about AI systems resisting shutdown
I had an interesting exchange of thoughts about AI, or rather the future of AI, with Ralf Christian on X. He made some great points that I thought I should collect here:
I think the main problem is the tech itself. It doesn't 'know' anything, it 'simply' spits out content based on probabilities in the training data.
What's good in the training data is the spoken language. That's why it 'speaks' so well. But the training data is full of wrong info, that's why there's wrong output even after reasoning.
If people publish less human written content and more AI generated content, and we don't have a way to identify that with 100% accuracy, this will definitely not make those models better in the future
You might be able to still improve it here and there, like that it better keeps context, but don't expect any leap here. That's why there are no big improvements since they released chatgpt 3
I think the future of this might be niche LLMs, where you train them on a specific topic with almost hand picked training data and fine tune it for your specific use. For example, if you're Microsoft you could train it with all your company's code. I guess this gives output more close to what you want than training it with all of githubs code randomly
ChatGPT is really impressive, but it's far from making a real difference in real business (unless you are into spam 😇)
Yesterday I tried to generate a video with Sora. It failed so hard. I think what you are seeing on social media is 1000 people trying to do a video, 999 generating crap and not posting it and 1 got lucky and posts it. That's not value, that's luck.
I loved the simple explanation he made. Also, I loved this paper on "AI models collapse when trained on recursively generated data" that Ralf shared earlier in the same thread.
Found this tool called JSViz that lets you visualize the step-by-step execution for JavaScript programs. It works great for beginners who have just started learning JS.
Mozilla recently announced that they are shutting down the Pocket app, which people used to save articles, videos, and other content formats to read later.
I, too, have used the app in the past but don't use it anymore (I'm more of an RSS guy now and don't save things to read later). At one point, Mozilla integrated Pocket into the Firefox browser by default; in fact, they still do to this day.
But they are shutting down everything except the Pocket newsletter, which will continue sending issues under a different name. The main reason they give for closing the app is:
[...] the way people save and consume content on the web has evolved [...]
I really really love memes, the funny ones. And funny memes are rare, so I have started collecting the ones that really made me laugh at some point. I'm saving them on a separate meme page here.
These memes would be related to tech, most of the time.
Kailash Nadh, Zerodha's CTO, has written an interesting blog post about MCP where he presents different scenarios of how MCP can be used, and also talks about its rapid adoption.
The funny thing is, as a technical construct, there is nothing special about MCP. It is a trivial API spec which has in fact suffered from poor design and fundamental technical and security issues from the get go. It does not matter if its internals change, or it even outright gets replaced by some XYZ tomorrow. Questions about privacy, security, correctness, and failures will continue to loom for a good while, irrespective of whether it is technically MCP or XYZ.
He talks about how, traditionally, connecting different software systems required extensive manual coding but MCP allows connecting services instantly.
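That "trivial API spec" point is easy to see: under the hood, MCP is a JSON-RPC 2.0 protocol, so a tool invocation is just a small message. A sketch (the `get_holdings` tool name and its arguments here are made up, not a real Kite MCP tool):

```javascript
// Rough sketch of an MCP tool-call request (JSON-RPC 2.0 framing).
// "get_holdings" and its arguments are hypothetical examples.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_holdings",
    arguments: { account: "demo" },
  },
};
```

The model picks the tool and fills in the arguments; the client just ships messages like this to the server, which is what makes wiring services together so fast.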
I liked that he also talked about the concerns; he worries about:
AI systems making real-world decisions with minimal human oversight
Questions of accountability when things go wrong
Privacy and security implications
One might imaginatively call it … SkyNet.
He also playfully compares MCP to SkyNet while calling it a "global, interconnected, self-organising meta system".
Overall, it's a balanced post, sharing his technical excitement alongside genuine concerns about such AI systems getting full access to real-world services and decision-making power.
By the way, I almost forgot to mention that Zerodha itself launched Kite MCP a few days ago.
In an unreleased blog post, Remix.run mentions that they are moving on from React to something completely new, as someone pointed out on X. I think this will be a huge step.
Remix v3 is a completely new thing. It's our fresh take on simplified web development with its own rendering abstraction in place of React.
Inspired by all the great tech before it (LAMP, Rails, Sinatra, Express, React, and more), we want to build the absolute best thing we know how to build with today's capable web platform.
This requires a declaration of independence from anybody else's roadmap.
They mention that they are not ready for a preview release yet, but this is the route they are taking. The blog post makes some really bold claims that you should go through.
Anthropic just launched their much-awaited Claude Opus 4 and Claude Sonnet 4 models. They tweeted:
Introducing the next generation: Claude Opus 4 and Claude Sonnet 4.
Claude Opus 4 is our most powerful model yet, and the world’s best coding model.
Claude Sonnet 4 is a significant upgrade from its predecessor, delivering superior coding and reasoning.
Claude Sonnet 4 is for "near instant response" whereas Claude Opus 4 is for extended thinking and deeper reasoning. Both are significantly better at software engineering than Claude's previous models, as well as OpenAI's and Google's latest models (OpenAI Codex-1, o3, GPT-4.1, and Google Gemini 2.5 Pro).
Starting today, Claude Opus 4 is available to paid users, and Claude Sonnet 4 is available to free users as well on claude.ai.
The SEO landscape is changing and it's going to be heavily influenced by AI in the coming years. And here is a list of some really useful research papers that one should study to stay relevant.
I asked Grok 3's DeeperResearch tool to find a list of technologies throughout history that have claimed to replace software engineers, and it did pull up some cool mentions:
1. Automatic Programming and Compilers (1950s)
Era: 1950s
Description: Early efforts to automate programming, such as the A-0 compiler by Grace Hopper, focused on translating high-level languages like Fortran into machine code.
Claims: Suggested that programming could be automated, reducing the need for low-level coding expertise.
Impact: Enabled higher-level programming but did not eliminate the need for programmers to design algorithms and logic.
Source: Wikipedia: Automatic Programming
2. Fourth-Generation Programming Languages (4GLs) (1970s-1990s)
Era: 1970s-1990s
Description: High-level languages like SQL and FoxPro designed to be closer to human language, enabling non-programmers to perform tasks like database queries.
Claims: Hyped as potentially eliminating the need for traditional programmers, with claims that they were the "last generation" to require code writing.
Impact: Simplified specific tasks but were limited for complex projects, requiring professional developers for broader applications.
Source: Wikipedia: Fourth-generation programming language
I came across a very interesting LinkedIn post by Judah Diament where he makes the point that vibe coding won't be replacing software engineers. Below are some interesting fragments of the post:
Vibe coding enables people who aren't well trained computer scientists to create complete, working applications. Is this a breakthrough? Not even close - there have been such tools since the late 1980s. See, for example: Apple HyperCard, Sybase PowerBuilder, Borland Delphi, FileMaker, Crystal Reports, Macromedia (and then Adobe) Flash, Microsoft VisualBasic, Rational Rose and other "Model Driven Development" tools, IBM VisualAge, etc. etc. And, of course, they all broke down when anything sightly complicated or unusual needs to be done (as required by every real, financially viable software product or service), just as "vibe coding" does.
Then he goes on to explain why vibe coding won't replace software engineers:
To claim that "vibe coding" will replace software engineers, one must: 1) be ignorant of the 40 year history of such tools or 2) have no understanding of how AI works or 3) have no real computer science education and experience or 4) all of the above, OR, most importantly, be someone trying to sell something and make money off of the "vibe coding" fad.
I like how the last paragraph is framed, it's definitely some food for thought.
"Comedy hits you in the head, drama hits you in the heart. If you want people to remember your work, you need both: comedy to lower their guard, drama to make them feel."
OpenAI launches Codex, a cloud-based agent that writes code and works on multiple tasks at once. It can be accessed from inside ChatGPT at chatgpt.com/codex, but visiting that URL just redirected me back to ChatGPT, as it's only for ChatGPT Pro users for now, not Plus users.
Currently, it's in a research preview but it's said to have features like:
writing code for you
implementing new features
answering questions about your codebase
fixing bugs, etc.
The implementation is very interesting as it runs in its own cloud sandbox environment, and can be directly connected to your GitHub repo. It performs better than o1-high, o4-mini-high, and o3-high.
The cool thing is, it can also be guided by an AGENTS.md file placed within the repository. Very cool.
Today, we’re also releasing a smaller version of codex-1, a version of o4-mini designed specifically for use in Codex CLI.
Yes, they're releasing something for the Codex CLI as well. And about the pricing and availability:
Starting today, we’re rolling out Codex to ChatGPT Pro, Enterprise, and Team users globally, with support for Plus and Edu coming soon. [...] We plan to expand access to Plus and Edu users soon.
For developers building with codex-mini-latest, the model is available on the Responses API and priced at $1.50 per 1M input tokens and $6 per 1M output tokens, with a 75% prompt caching discount.
I am excited to see how this compares to Claude 3.7 Sonnet and Gemini 2.5 Pro in terms of coding, fixing bugs, designing UI, etc. I also uploaded a quick video about it that you can watch on YouTube.
I have been coming across a lot of cool MCP servers while browsing the internet, so I decided to create a dedicated page and keep collecting MCPs there. I have a JSON file where I add new MCP servers, and they automatically show up in card format.
BioMCP
Connects AI systems to authoritative biomedical data sources
Connecting ChatGPT to Airtable gives you the superpower of getting answers to hundreds of questions in no time. Here's how to do that:
You need the following things to be able to connect ChatGPT to Airtable:
A paid Airtable account (the lowest plan is $24/month)
OpenAI API key (you'll have to set up a payment method on OpenAI, here)
The Scripting extension from Airtable (no additional cost), and
A script to call the OpenAI API inside Airtable
And below is the function that you can use to call OpenAI from inside Airtable and get the output.
```javascript
async function getGPTResponse() {
  const userInput = "why is the sky blue?";
  const maxTokens = 500;
  const temperature = 0.7;
  const model = "gpt-4.1";
  const systemPrompt = "be precise";

  const messages = [
    { role: "system", content: systemPrompt },
    { role: "user", content: userInput },
  ];

  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // openaiApiKey must be defined elsewhere in your script
      Authorization: `Bearer ${openaiApiKey}`,
    },
    body: JSON.stringify({
      model,
      messages,
      max_tokens: maxTokens,
      temperature,
    }),
  });

  const data = await res.json();
  return data.choices?.[0]?.message?.content || null;
}
```
Here, userInput is the prompt that you give the AI, maxTokens is the maximum number of tokens for the response, temperature is the model temperature, and systemPrompt is the system prompt. The prompt here is hardcoded, but you can modify the script to dynamically fetch prompts from each row and then get the outputs accordingly.
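As a sketch of that per-row version, here is what the loop could look like in Airtable's Scripting extension, assuming getGPTResponse is modified to accept the prompt as an argument. The "Prompts", "Prompt", and "Answer" names are placeholders; rename them to match your base. It's wrapped in a function so the Airtable `base` object and the AI call are passed in explicitly:

```javascript
// Sketch: read the prompt from each row, run it through the AI
// function, and write the answer back to the same row.
// base and getGPTResponse are passed in; table/field names are placeholders.
async function fillAnswers(base, getGPTResponse) {
  const table = base.getTable("Prompts");
  const query = await table.selectRecordsAsync({ fields: ["Prompt", "Answer"] });

  for (const record of query.records) {
    const prompt = record.getCellValueAsString("Prompt");
    if (!prompt) continue; // skip empty rows
    const answer = await getGPTResponse(prompt);
    await table.updateRecordAsync(record, { Answer: answer });
  }
}
```

Inside the Scripting extension you would call it as `await fillAnswers(base, getGPTResponse)`.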
ChatGPT is very good at adapting this implementation to your base data: just give it the above script and other details in the prompt, and it will give you the final code to put inside the Scripting extension.
Also, there's a generic version of this script at InvertedStone that you can also get and use. You can generate almost any kind of content using this script, not just from ChatGPT but also from other AI models like Claude, Gemini, Perplexity, and more.
The ultimate test of whether I understand something is if I can explain it to a computer. I can say something to you and you’ll nod your head, but I’m not sure that I explained it well. But the computer doesn’t nod its head. It repeats back exactly what I tell it. In most of life, you can bluff, but not with computers.
Came to know that Google Docs now has "Copy as Markdown" and "Paste from Markdown" options under the Edit menu at the top. Select some text to enable the copy option; any Markdown you paste is also rendered in the document with proper formatting.
Very cool!
By the way, Google Docs already had the option to download the entire document as a .md file, but these copy and paste options are even more user-friendly.
I saw someone using React Router inside Next.js, and I have so many questions. The navigation is visibly very fast, but:
Is it good for public pages? I think it will have the same SEO issues as SPAs.
Does it make the codebase more complicated?
Upon looking around, I found a detailed blog post on building a SPA using Next.js and React Router. It mentions the reason for not using the Next.js router:
Next.js is not as flexible as React Router! React Router lets you nest routers hierarchically in a flexible way. It's easy for any "parent" router to share data with all of its "child" routes. This is true for both top-level routes (e.g. /about and /team) and nested routes (e.g. /settings/team and /settings/user).
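To see what that nesting looks like, here's a sketch using plain objects in the shape of a React Router route config. This is not actual react-router code, just the idea: child paths compose under their parent, and the helper lists the full URL each leaf answers to:

```javascript
// Plain-object sketch of React Router-style nested routes.
const routes = [
  {
    path: "/settings",
    children: [{ path: "team" }, { path: "user" }],
  },
  { path: "/about" },
];

// Flatten the tree into the full URL of each leaf route.
function fullPaths(routes, parent = "") {
  return routes.flatMap(({ path, children }) => {
    const full = path.startsWith("/") ? path : `${parent}/${path}`;
    return children ? fullPaths(children, full) : [full];
  });
}
```

In real React Router, the parent route would render shared layout and pass data down to `/settings/team` and `/settings/user` via an Outlet, which is the flexibility the quote is talking about.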
I do understand why someone would want to use Next.js but I have yet to learn more about this React Router thing.
BRB.
Update:
Josh has written a new short blog post about how he did it, definitely worth reading and understanding the process.
Just noting this for myself for future reference: whenever I have to create cards, I should use this simpler method. With the cards wrapped in a .card-container element, the CSS is:
```css
.card-container {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
  gap: 20px;
  margin: 0 auto;
}
/* and then whatever CSS for .card here */
```