Effective Prompt Engineering
Prompt engineering without the folklore. A practitioner's tour of what the evidence actually says about prompts — as specifications, as speech acts, as artifacts that fail under complexity and decay as models improve. Covers chain-of-thought, few-shot, persona, structured output, context engineering, systematic optimization (DSPy, MIPRO), empirical methodology, and the engineering judgment that holds it all together.
How this plan was made
Each plan on learnings is built by a hand-crafted agentic pipeline: research agents gather primary sources, a claim reviewer verifies facts against them, and a sequencer orders modules for how people actually learn. The curation — topic selection, framing, editorial standards — is Nicolas's. The research and writing are AI-assembled.