
Raw notes

Raw notes include useful resources, incomplete thoughts, ideas, micro thoughts, and learnings as I go about my day. Below, you can also subscribe to the RSS feed to stay updated:

https://deepakness.com/feed/raw.xml

Total Notes: 268


The recent Mintlify vulnerability

Found this interesting article about the recent critical Mintlify security vulnerability. For your info, Mintlify is used by companies like Discord and Vercel for hosting their docs.

I will add more articles as I discover them.


Uploading to Cloudinary from Google Sheets

If you want to do an unsigned image upload from Google Sheets to Cloudinary, here's the correct Apps Script snippet that does that:

function uploadImageToCloudinary() {
  var formData = {
    file: "<file_path>", // Replace with the file URL or base64 string
    upload_preset: "<upload_preset>" // Replace with your upload preset
  };

  var options = {
    method: "post",
    payload: formData
  };

  var response = UrlFetchApp.fetch(
    "https://api.cloudinary.com/v1_1/<cloud_name>/image/upload/", // Replace <cloud_name> with your Cloudinary cloud name
    options
  );

  Logger.log(response.getContentText()); // Logs the Cloudinary response
}

I have used this in the Multi-AI Script product that I have. Earlier, I was using the default unsigned upload script and it wasn't working as it should, but now it's seamless.


Can a tool detect AI generated text?

Google has created an amazing tool called @SynthID inside Gemini to detect whether an image is AI generated, and it works great most of the time. It works for images generated by different Google image models like nano-banana-pro.

But is there any tool that can correctly detect AI generated text?

I have come across a bunch of tools and most don't work as expected. For example, you might have seen people saying that some tools flag the Constitution as AI generated, and that says a lot about these tools. But sometimes, some tools do work as expected. For example, I used a tool called Undetectable that correctly detected my own writing as not AI generated and ChatGPT-generated text as AI generated. But it wasn't accurate all the time; it occasionally made mistakes.

Some AI detection tools that I like are:

  1. Undetectable
  2. Grammarly AI Detector
  3. ZeroGPT
  4. Originality AI

And consider the output from these tools in that order – from top to bottom, with the top being the best AI text detection tool.


How Cursor AI migrated away from Sanity CMS

I came across this post on X from Lee Rob where he talked about migrating the Cursor docs/blog away from a CMS to raw code and Markdown. He didn't directly mention Sanity, but the head of developer relations at Sanity himself posted a comment on this, and he also has a blog post about the same.

Lee Rob also wrote a detailed post on his personal website explaining the entire process and the thinking behind it. I liked the post as it touches on all the different questions and doubts you might have before making such a move. The blog post covers topics like:

  • the issue with the old system with Sanity CMS
  • the thinking behind migrating to raw code and Markdown
  • issues and accidents they encountered when migrating, and
  • how exactly it was done by using Cursor AI

As mentioned in the post, it took them only 3 days, $260.32, and 297.4M tokens to migrate the entire website. Also, they removed 322k lines of code and added 43k lines.

It's an interesting read.


Google Antigravity stage, commit, and sync

When you're not working on a very serious project and just vibe coding, imagine pressing a single keyboard shortcut to stage all changes, commit with an AI-generated commit message, and also sync with your git provider like GitHub. I set up this cmd + enter keybinding in my Google Antigravity IDE that immediately stages all changes, commits using an AI-generated message, and then syncs with my GitHub repo.

For this, you need to open your Antigravity IDE, press cmd + shift + p and search for "Preferences: Open Keyboard Shortcuts (JSON)" and then copy-paste the below keybinding JSON.

[
  {
    "key": "cmd+enter",
    "command": "runCommands",
    "args": {
      "commands": [
        { "command": "git.stageAll" },
        { "command": "antigravity.generateCommitMessage" },
        { "command": "git.commitAll" },
        { "command": "git.sync" }
      ]
    }
  }
]

I have another post about doing the same in Cursor and VS Code where you can set up the same keybindings. Most of the steps are the same; only the command for generating commit messages is different.


Best laptops for running Linux in India

Most of the cool laptop brands like Framework, System76, Bee-Link, Razer, etc. are not available in India. That led me to find some other cool laptops that are available in India and can run different Linux distributions without any issues. Here are a few laptops I would recommend for running Linux:

  1. Lenovo ThinkPad E14 Intel Core i7 13th Gen 14" (16GB RAM/1TB SSD): This comes with Windows 11 pre-installed, but you can later easily install any Linux distro like Omarchy or others (by the way, I have a resource website about Omarchy).
  2. Lenovo ThinkPad E14 AMD Ryzen 5 7530U 14" (16GB RAM/512GB SSD): A cheaper option with a bit better value for money; also comes with Windows 11 preinstalled.
  3. HP 15 Laptop, AMD Ryzen 7 7730U 15.6" (16GB DDR4/512GB SSD): Another good mid-range option that also comes with Windows 11 preinstalled, but Linux can be installed later.

Apart from this, you can also directly build your PC on Lenovo's website where you can select the processor, RAM and other configurations as per your requirements.


The best local speech-to-text app

I was using a speech-to-text app called WisprFlow for voice typing, and it's great. The text quality is amazing and rarely has spelling or grammar errors. But there's a problem with this app: it's not private and shares your usage information with its server. And while WisprFlow has a generous free tier, it's limited, and you need to pay to use it freely.

But then I found Handy, a completely local speech-to-text app that uses local models like Parakeet v3, Parakeet v2, Whisper Medium, Whisper Large, Whisper Small, Whisper Turbo, etc. Most of these models are between 400 MB and 1 GB in file size.

I am using the Parakeet v3 model as of now, and it's working great so far. And the best thing is, Handy is faster than WisprFlow and other API-based speech-to-text apps. Also, the audio I speak never leaves my device, so it's completely private.


How ChatGPT stores memories about you

I found a really interesting article by Manthan Gupta about how ChatGPT stores and seamlessly retrieves information about you when you chat with it. The article explains that ChatGPT receives the following context for every message you send:

[0] System Instructions
[1] Developer Instructions
[2] Session Metadata (ephemeral)
[3] User Memory (long-term facts)
[4] Recent Conversations Summary (past chats, titles + snippets)
[5] Current Session Messages (this chat)
[6] Your latest message

This stored info is passed to every future prompt as separate blocks, something like this:

- User's name is Manthan Gupta.
- Previously worked at Merkle Science and Qoohoo (YC W23).
- Prefers learning through a mix of videos, papers, and hands-on work.
- Built TigerDB, CricLang, Load Balancer, FitMe.
- Studying modern IR systems (LDA, BM25, hybrid, dense embeddings, FAISS, RRF, LLM reranking).

The blog post has much more information about how exactly this happens. He explains the process in great detail, and it's really a fun read.
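As a rough illustration (my own sketch, not OpenAI's actual code; buildContext is a hypothetical helper name), the numbered blocks above could be joined into a single prompt context like this:

```javascript
// Hypothetical sketch of how numbered context blocks might be joined
// into one prompt string. Not OpenAI's actual implementation.
function buildContext(blocks) {
  // Prefix each block with its index, e.g. "[3] User Memory ..."
  return blocks.map((text, i) => `[${i}] ${text}`).join("\n");
}

console.log(buildContext([
  "System Instructions",
  "Developer Instructions",
  "User Memory (long-term facts)",
]));
// [0] System Instructions
// [1] Developer Instructions
// [2] User Memory (long-term facts)
```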


Best air purifiers in India

I was researching the best air purifiers in India and came across this very helpful blog post from Amit Sarda listing some great air purifiers. The article has a lot of data points about each product, and that makes it easier to choose what you're looking for.

I went with the mid-range Airmega 150 as it's super value for money, but there are other options to look for as well.


Best LLM for UI

I am regularly trying a lot of AI models these days for creating websites, and I think the current best model for creating stunning user interfaces is Google's Gemini 3 Pro. Earlier, I was using Claude Sonnet 4.5 a lot, and sometimes Opus 4.5 too, but now Gemini 3 Pro is better than both.


T3.chat moved from Next.js to TanStack Start

I came across this announcement post from Theo that says he migrated t3.chat from Next.js to TanStack Start.

As of 2 minutes ago, T3 Chat is no longer on Next.js 🫡

But the migration process wasn't straightforward as they ran into multiple issues during the process.

You guys have NO idea how many random issues we hit along the way (primarily around bundling). Took a week to fix all of them and a lot of sleepless nights from the team.

And then the reason for moving off Next.js was:

We were barely using it and my React Router hacks on top of Next were hellish for debugging

And this is very interesting to see.

I am yet to try TanStack Start, though.


Rumble video embed options

One way to embed Rumble videos is using the iframe embed code you see below. This embed code can be obtained by clicking on the embed option below the video.

<iframe class="rumble" width="640" height="360" src="https://rumble.com/embed/v70bqqu/?pub=4nvf6q" frameborder="0" allowfullscreen></iframe>

But if you want to programmatically get the video details along with the embed code, there's an API:

https://wn0.rumble.com/api/Media/oembed.json?url={videoURL}

Just replace the {videoURL} with the URL of your Rumble video and it gives you the following JSON response:

{
  "type": "video",
  "version": "1.0",
  "title": "Generating Alt Texts for 45 Images using AI in Google Sheets in Minutes",
  "author_name": "DeepakNess",
  "author_url": "https://rumble.com/c/c-7818786",
  "provider_name": "Rumble.com",
  "provider_url": "https://rumble.com/",
  "html": "\u003Ciframe src=\"https://rumble.com/embed/u4nvf6q.v70bqqu/\" width=\"3840\" height=\"2156\" frameborder=\"0\" title=\"Generating Alt Texts for 45 Images using AI in Google Sheets in Minutes\" webkitallowfullscreen mozallowfullscreen allowfullscreen\u003E\u003C/iframe\u003E",
  "width": 3840,
  "height": 2156,
  "duration": 497,
  "thumbnail_url": "https://1a-1791.com/video/fww1/31/s8/6/U/K/T/E/UKTEz.SHFsc.1.jpg",
  "thumbnail_width": 3840,
  "thumbnail_height": 2156
}
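If you want to call this endpoint from code, here's a minimal sketch that just builds the request URL (buildOembedUrl is a hypothetical helper name; you'd pass the result to fetch or UrlFetchApp):

```javascript
// Build the Rumble oEmbed endpoint for a given video URL.
// The video URL must be percent-encoded before going into the query string.
function buildOembedUrl(videoUrl) {
  return (
    "https://wn0.rumble.com/api/Media/oembed.json?url=" +
    encodeURIComponent(videoUrl)
  );
}

console.log(buildOembedUrl("https://rumble.com/v70bqqu-example.html"));
// https://wn0.rumble.com/api/Media/oembed.json?url=https%3A%2F%2Frumble.com%2Fv70bqqu-example.html
```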

I think this can be very helpful in creating a tool that generates optimized embed code for Rumble videos.

I am still looking into this and will probably create a tool that does this.


Download all YouTube videos via Takeout

I recently got my YouTube channel deleted for some unknown reason, and then I realized how wrong I was in playing the game.

Basically, this wasn't my main channel, and I was mainly embedding the videos on my website inside my blog posts. I also wasn't keeping copies of the videos I uploaded to the channel, as it wasn't anything serious. But now that all the videos are deleted, I wish I had copies that I could upload to other platforms like Rumble and then embed on my website.

But it's too late for that.

However, if you also have a channel and don't keep copies of your uploaded videos either, it's still not too late. Download everything ever uploaded through the Google Takeout feature:

  1. Open the Google Takeout page
  2. Log in to the correct account, and switch to your brand account if you have one
  3. Select the YouTube and YouTube Music services to download
  4. Select 4GB files in ZIP format and start the process

You will receive an email after some time, depending on how large your YouTube channel is; then download each and every ZIP file from there and save them somewhere safe. When required, you can unzip these files and access all your videos.


How to remove PostgreSQL from macOS

I tried uninstalling PostgreSQL from my macOS Tahoe machine, and the process wasn't very simple; in fact, the uninstaller in the below folder wasn't working at all. Clicking and double-clicking did nothing.

/Library/PostgreSQL/18/uninstall-postgresql.app

Upon looking here and there, I finally found a solution that worked. I ran the following command in my terminal and then the uninstaller window appeared.

sudo /Library/PostgreSQL/18/uninstall-postgresql.app/Contents/MacOS/installbuilder.sh

I then selected the Entire Application option, clicked Next, and the uninstallation quickly completed. Then I had to run a few more commands to make sure no residue was left on my computer:

# Stop any running postgres processes
sudo pkill -u postgres

# Remove the installation directory
sudo rm -rf /Library/PostgreSQL

# Remove the PostgreSQL registry file
sudo rm /etc/postgres-reg.ini

# Delete the postgres system user
sudo dscl . -delete /Users/postgres

# Remove the launch daemon
sudo rm /Library/LaunchDaemons/com.edb.launchd.postgresql-18.plist

After this, I had to manually delete the Postgres 18 folder from the Applications folder; it only contained a Documentation folder.

And with that, the uninstallation was done.


Staying with 11ty for now

I frequently take notes on this blog, especially in the /raw section, and as the number of pages grows, the build time on Netlify is also growing. So, today I started thinking about moving to WordPress.

But... I have dropped the plan for now and staying with 11ty.

However, I will have to optimize the build process so that builds take less time. The first thing I can do is process images locally to optimize them and convert them to .jpg and .webp formats. Currently, I'm using the official @11ty/eleventy-img plugin for this, and all images are rebuilt each time I push (no, caching doesn't work because I'm co-locating images in different post folders).

So what I can do here is create a local script that uses the sharp image optimization library to process images before pushing them to GitHub. This way I will only have to process each image once, and the build time should be significantly reduced.

I am working on this already, and will update soon.


Export n8n workflows and credentials

I was moving my self-hosted n8n instance to another server and needed to export my workflows and credentials so that I could use them in the new instance. There is a page about this in the official n8n docs, but the commands don't directly work in the terminal. However, there's a workaround:

First, SSH into your server by running the ssh root@your_server_ip command and then cd into the n8n directory. For me, the folder was n8n-docker-caddy. After that, find the name of your Docker container by running the following command:

docker ps

Find the container name and then run the below command, replacing <container_name> with the actual name of the container. This outputs a workflows.json file in the same directory that you can later copy or download to your computer.

docker exec -u node -it <container_name> n8n export:workflow --all > workflows.json

Similarly, you can run the below command to output all credentials as a credentials.json file in the same directory.

docker exec -u node -it <container_name> n8n export:credentials --all --decrypted > credentials.json

Make sure to keep the credentials file safe, as it contains important API keys that you wouldn't want falling into the wrong hands.

Also, to download these .json files locally to your computer, you can run the below command. Replace your_ip_address with the server's IP and adjust other things accordingly.

scp root@your_ip_address:/root/n8n-docker-caddy/*.json ~/Downloads/

That's it.

Previously, I've also written about n8n password reset and upgrading n8n, which you might find helpful.

Hope this helps.


Marking AI images on a website

I have a website where I have used some AI images, and the website had been ranking fine for a few years, but a few days ago I received an angry email from a person saying that the AI images do not correctly represent what I am showing. So I decided to mark, or rather label, all AI images as "AI-Generated Image", and I have discussed this more in this post on X.

Right now, I don't know if this is going to affect the website in any way, but it will be an interesting thing to see.


Get YouTube thumbnail from URL

I discovered a quick way to grab a YouTube video's thumbnail. You just have to replace VIDEO_ID in the below URL with the YouTube video's ID and open the URL in your browser to see the thumbnail.

https://img.youtube.com/vi/VIDEO_ID/maxresdefault.jpg

I tested it on multiple videos and it does seem to be working correctly, so far.
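To automate this, here's a small sketch (thumbnailUrl is a hypothetical helper name) that extracts the video ID from the two common YouTube URL formats and builds the thumbnail URL:

```javascript
// Derive the maxresdefault thumbnail URL from a YouTube link.
function thumbnailUrl(videoUrl) {
  const u = new URL(videoUrl);
  // youtu.be/<id> short links keep the ID in the path;
  // youtube.com/watch?v=<id> keeps it in the query string.
  const id =
    u.hostname === "youtu.be" ? u.pathname.slice(1) : u.searchParams.get("v");
  return "https://img.youtube.com/vi/" + id + "/maxresdefault.jpg";
}

console.log(thumbnailUrl("https://www.youtube.com/watch?v=dQw4w9WgXcQ"));
// https://img.youtube.com/vi/dQw4w9WgXcQ/maxresdefault.jpg
```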


SiteGPT.ai tech-stack

Bhanu Teja P from SiteGPT.ai has shared his complete tech-stack for running his business, and it's very interesting to see how the project is being run.

LLM: @OpenAI
Website: @reactjs + @remix_run + @tailwindcss
Hosting: @Cloudflare Workers + @vercel + @modal
Chat: @Cloudflare Durable Objects + @partykit_io
Database: @prisma postgres + @pinecone
Storage: @Cloudflare R2
Redis + Queues + Workflows: @upstash
AI Observability: @PortkeyAI
AI Connectors: @SourceSyncAI + @firecrawl_dev
Email: @resend + @Bento
Payments: @PaddleHQ
Subscription Analytics: @ChartMogul
Analytics: @DataFast_ + @posthog
Blog: @feather_blogs
Communication: @SlackHQ
Support: @SiteGPT as website chat + Email Support
Feedback Portal: @FeaturebaseHQ
Docs: @mintlify

So many good tools here that I didn't know about before.


CLI tool to convert Next.js to TanStack

While browsing, I came across this post that features a CLI tool to convert a Next.js website to a TanStack website – here's the CLI tool. It's not perfect, as you still need to check a few things manually, but it could definitely be a good start for simpler projects.

Actually, I still haven't used TanStack for any project, so I don't know exactly how it works; I am just noting it down for future reference.

Apart from this, I also found this cool project called redomysite.com that claims to convert any website to Astro within minutes. It's a paid offering, but I loved the concept.


Fix the 'host key verification failed' issue

If you're getting a "host key verification failed" error when SSHing into a server, this is the issue:

This happens when the IP presents a different SSH host key than the one your Mac saved earlier – common if you rebuilt the server or your provider reused the IP. SSH blocks the login to protect you from a man-in-the-middle attack.

Something like the below:

ssh root@1.2.3.4

# output

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED255 key sent by the remote host is
SHA256:EsWKr/gOT/GHcFMYIUvhgaTss6+.
Please contact your system administrator.
Add correct host key in /Users/deepak/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/deepak/.ssh/known_hosts:9
Host key for 1.2.3.4 has changed and you have requested strict checking.
Host key verification failed.

And the fix is to update just that host’s key instead of deleting the whole file.

ssh-keygen -R 1.2.3.4
# If you also connect by a hostname, remove that too:
ssh-keygen -R your-hostname.example.com

And then connect again to the server, and it should work.


No em dashes anymore

I am not using em dashes anymore, as more and more people see content with em dashes as AI generated, even when it's not. And that's not good for your reputation in the long run. So now, I've started using en dashes everywhere when writing:

  • Em dash example: He knew what he had to do—leave.
  • En dash example: He knew what he had to do – leave.

On a macOS device, pressing option + minus types an en dash (for an em dash, you need to press option + shift + minus).