Structured LLM Prompting

TL;DR: Scroll down to the My Prompting Strategy section to copy-pasta my current workflow. Are you having a hard time getting what you expect out of an LLM? Are you giving it a bunch of files and it’s getting lost in the task? Are you giving it multiple tasks and it is only completing some…
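As a taste of the approach, here is a minimal sketch of a structured prompt in Python; the section tags and helper function are my own illustration, not necessarily the exact format from the post:

```python
# A minimal sketch of a structured prompt: each input gets its own
# delimited section so the model does not lose track of files or tasks.
# Section names here are illustrative, not the exact ones from the post.

def build_prompt(task: str, files: dict[str, str], constraints: list[str]) -> str:
    parts = [f"<task>\n{task}\n</task>"]
    for path, contents in files.items():
        parts.append(f'<file path="{path}">\n{contents}\n</file>')
    parts.append("<constraints>\n" + "\n".join(f"- {c}" for c in constraints) + "\n</constraints>")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Refactor the parser to handle empty input.",
    files={"parser.py": "def parse(s): return s.split(',')"},
    constraints=["Do not change the public API.", "Add a unit test."],
)
print(prompt)
```

Delimiting each file and constraint in its own tagged section makes it harder for the model to lose track of any one input.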

Post-transformer Architecture

New AI architectures are coming out that may become the next generation of foundation models. I’m going to keep a running list of the promising ones here. The Dragon Hatchling (BDH) https://www.rohan-paul.com/i/175334051/the-dragon-hatchling-the-missing-link-between-the-transformer-and-models-of-the-brain This architecture was published in September 2025 and claims to be a closer representation of the brain. One…

Feature Engineering with Geospatial Data: Binned Statistics in GEE

In quantitative research, generating novel features from alternative datasets is often a primary source of identifying variation and predictive performance. Geospatial data, such as weather patterns and agricultural metrics, provides a rich source for these signals. A common feature engineering task is to aggregate one spatial dataset by the discrete bins of another—for example, calculating…
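Here is a minimal sketch of that pattern in the GEE Python API, using a grouped reducer to compute one statistic per bin. The asset IDs, band names, and region are placeholders rather than real datasets, and it assumes an installed and authenticated earthengine-api:

```python
# Minimal sketch of a binned statistic in Google Earth Engine: the mean of
# one image's values within each discrete bin of another image, via a
# grouped reducer. Asset IDs, band names, and the region are placeholders.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([-94.0, 41.0, -93.0, 42.0])

values = ee.Image("users/example/weather").select("precip")   # values to aggregate
bins = ee.Image("users/example/crop_zones").select("zone")    # integer bin labels

# Band 0 feeds the mean; band 1 (groupField=1) defines the groups.
stats = values.addBands(bins).reduceRegion(
    reducer=ee.Reducer.mean().group(groupField=1, groupName="zone"),
    geometry=region,
    scale=1000,
    maxPixels=1e9,
)
print(stats.getInfo())  # {'groups': [{'zone': 1, 'mean': ...}, ...]}
```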

Retrieval Augmented Generation (RAG)

In short, RAG is a style of LLM usage where you give the LLM more information on top of your prompt. Between prompt engineering and RAG, you can dramatically increase the model’s ability to produce an accurate response. This can be in the form of internet searches the agent performs automatically (like Gemini…
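The core loop is simple enough to sketch in a few lines of Python: retrieve the most relevant document, then prepend it to the prompt. Word overlap stands in for the embedding similarity a real system would use, and the corpus is my own toy example:

```python
# Minimal RAG sketch: retrieve the most relevant document for a question,
# then prepend it to the prompt. Real systems use embedding similarity and
# a vector store; word overlap stands in for retrieval here.

DOCS = [
    "GEE's reduceRegion aggregates an image's pixel values over a geometry.",
    "RAG prepends retrieved context to a prompt before calling the model.",
    "Tokenization splits raw text into the units a transformer consumes.",
]

def retrieve(question: str, docs: list[str]) -> str:
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_rag_prompt(question: str) -> str:
    context = retrieve(question, DOCS)
    return f"Use the context to answer.\n\nContext: {context}\n\nQuestion: {question}"

print(build_rag_prompt("What does RAG add to a prompt?"))
# The result would then be sent to the model with whatever LLM API you use.
```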

Programming with LLMs (not just generating code with a chatbot)

This page is about using LLM APIs in a programming project, not to be confused with generating and editing code using a chatbot. Under construction: this is currently a place for me to dump links and quick thoughts. It might turn into a real post one day. Resources: Python to use LLM libraries. Python is…
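For a concrete starting point, here is a minimal sketch of calling an LLM API from Python; it assumes the openai package is installed and OPENAI_API_KEY is set, and the model name is illustrative:

```python
# Minimal sketch of calling an LLM from code rather than a chat UI.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a terse coding assistant."},
        {"role": "user", "content": "Summarize what a grouped reducer does in GEE."},
    ],
)
print(response.choices[0].message.content)
```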

Transformer Architecture

This is a page to store resources and thoughts about transformer architecture and its applications. Components of the Transformer Architecture: Tokenization. The tokenization step takes the raw input and partitions it into tokens, bite-sized chunks of the data. Tokens are the unit of analysis of the transformer model, and the universe of all possible tokens…
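You can see tokenization directly with a few lines of Python; this sketch assumes the tiktoken package is installed, and cl100k_base is one encoding used by several OpenAI models:

```python
# Minimal tokenization sketch: raw text in, token IDs out, and back again.
# Assumes the `tiktoken` package is installed.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Transformers operate on tokens, not characters.")
print(ids)                             # one integer ID per token
print([enc.decode([i]) for i in ids])  # the text chunk each token covers
print(enc.decode(ids))                 # round-trips to the original string
```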

LLM Tips and Tools

Large language models (LLMs) use a form of neural network architecture to learn the relationships between words and predict the next word in a sentence (multimodal models generalize beyond words, but words and sentences are a fine mental model to have). Below are some resources and tips I am collecting to better use…
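To make the next-word framing concrete, here is a toy bigram model in Python; real LLMs learn these relationships with a neural network over tokens, but the predict-the-next-unit idea is the same, and the corpus is my own example:

```python
# Toy illustration of "predict the next word": a bigram model built from
# counts. LLMs learn far richer relationships, but the framing matches.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

nexts: defaultdict[str, Counter] = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    nexts[prev][cur] += 1

def predict(word: str) -> str:
    return nexts[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' -- the most frequent follower of "the"
```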

LLM Prompts

Some of my recent prompting strategies… See my Structured LLM Prompting post for more recent and advanced prompting tips. General Tips: Keep in mind that LLMs are charting a way through a latent topic space. Prompts are the starting, pre-defined path on a longer journey, and you are asking the model to auto-complete that journey. Adding…
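One way to see the path metaphor: the same request with and without added constraints. Both prompts below are my own illustration; the constrained version fixes more of the path up front, narrowing the space of likely completions:

```python
# The same task phrased bare vs. constrained. More constraints early in
# the prompt pin down more of the "path" the model will auto-complete.
# Both prompt strings are illustrative examples, not from the post.
bare = "Write a function to parse dates."

constrained = (
    "You are writing production Python.\n"
    "Write a function to parse dates.\n"
    "- Input: strings like '2024-03-05'.\n"
    "- Return: datetime.date; raise ValueError on bad input.\n"
    "- Include a docstring and two doctests."
)
print(bare, constrained, sep="\n\n")
```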