X Bookmarks — 2025 KW03: Agent Recipes and prompting o1 the right way

January 16, 2025

bookmarks

by Florian Narr


@nutlope — Agent Recipes, a library of agent/workflow patterns with code

Announcing Agent Recipes!

A site to learn about agent/workflow recipes with code examples that you can easily copy & paste into your own AI apps.

I'm gonna make this the go-to resource for devs to learn about agents & how to implement them – more soon.

That's interesting because most agent documentation is either "here's a concept diagram" or a 2,000-line starter repo with twelve opinions baked in. A focused recipe format — one pattern, runnable code, grab what you need — is actually how I'd want to learn the edge cases. 4.6k bookmarks in a day is a signal that a lot of devs are hunting for the same thing.
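The recipe format the site promises can be illustrated with the simplest workflow pattern, prompt chaining, where each step's output becomes the next step's input. This is a minimal sketch, not code from Agent Recipes; `call_llm` is a stand-in stub where a real app would call its LLM provider.

```python
# A minimal "prompt chaining" workflow: each step's output feeds the next.
# call_llm is a stand-in stub; a real app would wrap its provider's API here.

def call_llm(prompt: str) -> str:
    # Stub: echo a canned transformation so the sketch runs offline.
    return f"[response to: {prompt}]"

def prompt_chain(task: str, steps: list[str]) -> str:
    """Run each step prompt against the previous step's output."""
    output = task
    for step in steps:
        output = call_llm(f"{step}\n\nInput:\n{output}")
    return output

result = prompt_chain(
    "Summarize the Q3 report",
    ["Extract the key figures.", "Draft a two-sentence summary."],
)
print(result)
```

The whole pattern is a dozen lines, which is exactly why a copy-paste recipe beats a framework for learning it.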

@daniel_mac8 — the right mental model for prompting o1

this is an amazing way to think about prompting o1

from @benhylak

21k bookmarks on a repost says this framing hit a nerve. The core idea from @benhylak is that o1 isn't a chat model you steer — it's more like a contractor you give a spec to and let run. Stop adding step-by-step instructions. Give it the goal, the constraints, and the acceptance criteria. The more you try to direct the reasoning, the worse it gets.
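The contrast between the two prompting styles can be made concrete. Below is a sketch of a spec-style prompt builder following the goal/constraints/acceptance-criteria framing; the field names and helper are my own, not from @benhylak's post.

```python
# Step-by-step steering, the habit that reportedly hurts o1:
chat_style = (
    "First parse the CSV, then group by region, then compute the mean, "
    "then format the result as a table."
)

def spec_prompt(goal: str, constraints: list[str], acceptance: list[str]) -> str:
    """Assemble a contractor-style brief: what done looks like, not how to get there."""
    lines = [f"Goal: {goal}", "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["", "Acceptance criteria:"]
    lines += [f"- {a}" for a in acceptance]
    return "\n".join(lines)

brief = spec_prompt(
    "Produce a per-region sales summary from the attached CSV.",
    ["Use only the provided data", "Output GitHub-flavored Markdown"],
    ["One row per region", "Means rounded to 2 decimals"],
)
print(brief)
```

The point of the structure is that everything in it describes the deliverable; none of it dictates the reasoning steps.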

@gdb — o1 requires a genuinely different usage pattern

o1 is a different kind of model. great performance requires using it in a new way relative to standard chat models.

Makes sense that Greg Brockman himself is saying this — it confirms what the @daniel_mac8 repost was getting at. The implied advice: if you're treating o1 like GPT-4 with longer answers, you're leaving most of the capability on the table. The instruction-following intuitions you've built up do not transfer cleanly.

@tom_doerr — document-to-Markdown converter with OCR and LLM pipeline

Converts images, PDFs, and Office documents to Markdown or JSON using OCR and LLM models, with features for caching, distributed processing, and PII removal

Smart, because PII removal and caching aren't afterthoughts here — they're listed as first-class features. A PDF-to-Markdown pipeline that can run distributed, strip personal data, and cache results across repeated ingestion is the kind of thing that shows up in enterprise data pipelines, not just toy demos. The JSON output option is useful when you want structured extraction rather than prose reconstruction.
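The pipeline shape described (OCR, then PII scrub, then Markdown, with caching across repeated ingestion) can be sketched like this. Every function body here is a stand-in I wrote for illustration, not the tool's actual API; real OCR and real PII removal are far more involved than these toys.

```python
# Sketch of the pipeline shape: OCR -> PII scrub -> Markdown, with a
# content-hash cache so repeated ingestion of the same bytes skips the work.
import hashlib
import re

_cache: dict[str, str] = {}

def ocr(doc_bytes: bytes) -> str:
    # Stand-in for a real OCR engine; just decodes the bytes.
    return doc_bytes.decode("utf-8", errors="replace")

def scrub_pii(text: str) -> str:
    # Toy PII pass: redact email addresses only. Real removal covers
    # names, phone numbers, addresses, and more.
    return re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[REDACTED]", text)

def to_markdown(doc_bytes: bytes) -> str:
    key = hashlib.sha256(doc_bytes).hexdigest()
    if key in _cache:  # cache hit: skip OCR and scrubbing entirely
        return _cache[key]
    md = "# Document\n\n" + scrub_pii(ocr(doc_bytes))
    _cache[key] = md
    return md

print(to_markdown(b"Contact alice@example.com for details."))
```

Keying the cache on a content hash rather than a filename is what makes it safe under re-uploads and renames, which is presumably why caching is worth advertising as a feature.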

@tom_doerr — sitefetch, crawl an entire site to a text file

sitefetch — fetch an entire site and save it as a text file for use with AI models.

Honestly just saved this for the next time I need to feed documentation into a context window without crawling page-by-page myself. 1.7k stars in a short window. The promise is simple: one CLI call, one output file, ready to paste. Whether it handles pagination, auth-walled docs, or JS-rendered content is the interesting question — the README is the place to check.
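For intuition, here is the crawl-and-concatenate logic such a tool implies: breadth-first within one site, strip markup, join everything into one text blob. This is my own rough sketch, not sitefetch's code; `fetch` reads from an in-memory fake site so it runs offline, where a real version would do HTTP requests and proper HTML-to-text conversion.

```python
# Rough sketch of "fetch an entire site into one text file": BFS over
# same-site links, concatenating the text of each page. fetch() is a
# stand-in over a fake in-memory site; swap in real HTTP for actual use.
import re
from collections import deque

FAKE_SITE = {
    "/": 'Home. See <a href="/docs">docs</a>.',
    "/docs": 'Docs index. Next: <a href="/docs/api">API</a>.',
    "/docs/api": "API reference page.",
}

def fetch(path: str) -> str:
    return FAKE_SITE.get(path, "")

def crawl(start: str = "/") -> str:
    seen, queue, pages = set(), deque([start]), []
    while queue:
        path = queue.popleft()
        if path in seen:
            continue
        seen.add(path)
        html = fetch(path)
        # Crude tag-stripping stands in for real HTML-to-text conversion.
        pages.append(f"== {path} ==\n{re.sub(r'<[^>]+>', '', html)}")
        for link in re.findall(r'href="(/[^"]*)"', html):
            queue.append(link)
    return "\n\n".join(pages)

print(crawl())
```

The `seen` set is the whole trick: without it, any pair of pages linking to each other loops forever. JS-rendered content and auth walls are exactly where this naive version breaks, which is why those are the README questions worth checking.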

@LottiSchmitt — tags and sentiment analysis live in Octolens

Just launched tags + sentiment analysis @Octolens.

  • Tags: We label the posts we find with buy intent, customer testimonial, own brand mention, competitor mention etc. based on their content.

  • Sentiment Analysis: We check if a post has positive, neutral, or negative sentiment.

That's cool because buy-intent detection on top of social monitoring is one of those features that sounds simple until you actually try to build it — the label space matters a lot, and the false-positive rate on "buy intent" in particular can wreck the signal. Curious how they handle sarcasm in the sentiment pass, which is where most sentiment classifiers visibly fall apart.
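To see why the label space is the hard part, here is a deliberately naive version of both passes using keyword rules. This is a toy I wrote for illustration, not how Octolens works (the cue phrases and the competitor name are made up); its failure modes are the point.

```python
# Toy versions of the two passes: rule-based labeling and lexicon sentiment.
# Cue phrases and "competitorx" are hypothetical, purely for illustration.
LABEL_RULES = {
    "buy intent": ["looking for", "any recommendations", "worth paying for"],
    "competitor mention": ["competitorx"],
    "own brand mention": ["octolens"],
}

def label_post(text: str) -> list[str]:
    t = text.lower()
    return [label for label, cues in LABEL_RULES.items()
            if any(cue in t for cue in cues)]

def sentiment(text: str) -> str:
    # Counting positive vs negative words: exactly the approach that
    # sarcasm ("oh great, it's broken again") defeats.
    t = text.lower()
    pos = sum(w in t for w in ("love", "great", "awesome"))
    neg = sum(w in t for w in ("hate", "broken", "terrible"))
    return "positive" if pos > neg else "negative" if neg > pos else "neutral"

post = "Looking for a social listening tool, Octolens looks great"
print(label_post(post), sentiment(post))
```

Even this toy shows the false-positive problem: "looking for a job at Octolens" would trip the buy-intent rule too, which is why the real feature presumably needs a model rather than keyword matching.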