Building Your First LLM Chain: A Developer's Guide to Sequential Logic
2026-02-22

6 min read · Tutorials · AI Engineering · Core Concepts · LangChain · Python · LLM Development · LCEL

Stop making single API calls. Learn how to chain LLM steps together to build complex workflows. This tutorial covers the logic, the code, and the deployment of a sequential chain using modern patterns.

The Limit of a Single Prompt

If you have played with ChatGPT or the OpenAI API, you know the workflow: Input text, get text back. For 80% of casual use cases, that is enough.

But as an engineer building automation systems and micro-SaaS tools, a single prompt rarely cuts it. Real-world applications require orchestration. You might need to take a user topic, research it, outline a post, and then write the content. Or perhaps you need to extract data from an email, format it as JSON, and then decide whether to draft a reply or create a Jira ticket.

This is where Chains come in.

A chain is simply a sequence of operations where the output of one step becomes the input of the next. In this tutorial, we aren't just talking theory. We are going to build a functional sequential chain using Python and the modern syntax of LangChain (LCEL).


What We Are Building

We will build a simple but practical content engine. We won't just ask an LLM to "write a blog." That produces generic fluff. Instead, we will build a chain that:

  1. Step 1 (Ideator): Takes a broad topic and generates a specific, contrarian angle.
  2. Step 2 (Expander): Takes that angle and writes a concise LinkedIn post about it.

This two-step process mimics how a human creator thinks, resulting in significantly higher-quality output than a zero-shot prompt.

The Tech Stack

  • Python 3.10+
  • LangChain (Core & OpenAI): The orchestration layer.
  • OpenAI API Key: The intelligence layer.

Note: I strongly recommend using virtual environments for this.

pip install langchain langchain-openai python-dotenv

The Core Concept: LCEL

Before writing the code, you need to understand the syntax we will use. In the early days of LangChain, we instantiated classes like `LLMChain`. That is now legacy code.

Modern builders use LCEL (LangChain Expression Language). If you are familiar with Linux or Unix pipes (`|`), this will feel native to you.

The logic looks like this:

chain = prompt | model | output_parser

Data flows from left to right. The prompt formats the string, the model predicts the token, and the parser cleans the output into a usable string.
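To make the left-to-right flow concrete, here is a toy re-implementation of the pipe pattern in plain Python. This is not LangChain's actual internals, just a sketch of the composition idea using the same `|` operator:

```python
# Toy illustration of the LCEL pipe pattern (not LangChain's real implementation).
# Each "runnable" wraps a function; __or__ composes them left to right.
class Runnable:
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # Feed this runnable's output into the next one
        return Runnable(lambda x: other.func(self.func(x)))

    def invoke(self, value):
        return self.func(value)

# Stand-ins for prompt | model | output_parser
prompt = Runnable(lambda d: f"Summarize: {d['topic']}")
model = Runnable(lambda text: text.upper())   # pretend LLM
parser = Runnable(lambda text: text.strip())  # pretend output parser

chain = prompt | model | parser
print(chain.invoke({"topic": "ai agents"}))  # SUMMARIZE: AI AGENTS
```

Swap the stand-in lambdas for a real prompt template, model, and parser and you have exactly the LCEL pattern used below.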


Step 1: Setup and Basic Configuration

Create a file named `chain_builder.py`. First, let's set up our imports and model. I prefer using `gpt-3.5-turbo` for testing (cheap) and `gpt-4o` for production.

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Load environment variables
load_dotenv()

# Initialize the Model
# temperature=0.7 balances creativity with coherence
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)

Step 2: Building the First Link (The Ideator)

We need a prompt that takes a user topic and refines it. In a robust system, I usually engineer prompts that force the model to output JSON, but for this chain, text output is fine.

# Define the Prompt Template
ideation_prompt = ChatPromptTemplate.from_template(
    "You are a tech content strategist. Given the topic '{topic}', "
    "generate one specific, contrarian, and engaging angle for a LinkedIn post."
)

# Create the first chain
# This uses the pipe operator (|) to glue components
ideation_chain = ideation_prompt | model | StrOutputParser()

At this stage, if you ran `ideation_chain.invoke({"topic": "AI Agents"})`, you would get a single string back, something like: "Why AI Agents will replace managers, not developers."
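A side note on the JSON point above: if you did want structured output instead of free text, only the parsing step changes. Here is a minimal stdlib sketch with a hypothetical raw model response (in LangChain you could swap `StrOutputParser` for `JsonOutputParser`):

```python
import json

# Hypothetical raw output if the ideation prompt asked for JSON instead of text
raw_model_output = (
    '{"angle": "Why AI Agents will replace managers, not developers", '
    '"audience": "engineering leads"}'
)

# Parse the model's string into a dict so downstream steps can use named fields
parsed = json.loads(raw_model_output)
print(parsed["angle"])
```

Structured output like this is what makes later routing decisions (reply vs. Jira ticket) possible.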

Step 3: Building the Second Link (The Writer)

Now we need a component that takes that specific angle and writes the actual content. Notice that the input variable here, `{angle}`, will be filled by the output of the previous step.

writing_prompt = ChatPromptTemplate.from_template(
    "Write a short, punchy LinkedIn post based on this angle: {angle}. "
    "Use a professional but direct tone. No hashtags."
)

writing_chain = writing_prompt | model | StrOutputParser()

Step 4: Composing the Full Chain

Here is where the magic happens. We need to feed the output of the `ideation_chain` into the `writing_chain`.

We can use a `RunnableLambda`, or simply a dictionary, to map the output of step one to the input key required by step two (`{angle}`). Here we use the dictionary form.

# The Composition
# We take the input, pass it to ideation_chain, then map the result to 'angle'
full_chain = (
    {"angle": ideation_chain} 
    | writing_chain
)

# Execution
topic = "Remote Work for Developers"
result = full_chain.invoke({"topic": topic})

print(f"--- Generated Post for: {topic} ---\n")
print(result)

Wait, how did that work?

Let's break down the logic of `{"angle": ideation_chain} | writing_chain`:

  1. The `.invoke({"topic": "..."})` passes the dictionary to the first element.
  2. Inside the dictionary, `ideation_chain` receives the input. It expects `{topic}`, which is present.
  3. The `ideation_chain` runs and produces a string (the contrarian angle).
  4. That string is assigned to the key `angle`.
  5. The resulting dictionary `{"angle": "The generated angle string..."}` is passed via the pipe `|` to `writing_chain`.
  6. `writing_chain`'s prompt expects `{angle}`, finds it, and generates the final post.
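The steps above can be sketched in plain Python. This toy version (my own names, not LangChain's) shows how the dictionary fan-in behaves: each value in the dict is called with the same input, and the results are collected under the dict's keys before flowing to the next step.

```python
# Toy model of how a dict of runnables is coerced in LCEL
def run_mapping(mapping, value):
    # Call every runnable with the same input, collect results under its key
    return {key: func(value) for key, func in mapping.items()}

# Stand-ins for the two chains
ideation = lambda d: f"Contrarian angle on {d['topic']}"
writer = lambda d: f"POST: {d['angle']}"

# {"angle": ideation_chain} | writing_chain, by hand:
intermediate = run_mapping({"angle": ideation}, {"topic": "Remote Work"})
result = writer(intermediate)
print(result)  # POST: Contrarian angle on Remote Work
```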

Why This Matters for Automation

You might ask, "Avnish, why not just put both instructions in one giant prompt?"

This is the most common mistake beginners make. Giant prompts (Mega-prompts) are:

  • Hard to debug: If the output is bad, is it the angle or the writing style?
  • Expensive: If you need to tweak the writing style, you have to re-run the ideation (which consumes tokens).
  • Unstable: LLMs suffer from the "lost in the middle" phenomenon. Asking for too much logic in a single pass degrades performance.

By chaining, you create modular systems. In my own micro-SaaS backends, I often cache the result of Step 1. If the user wants to "rewrite" the post, I don't need to generate a new idea; I just rerun Step 2. That saves latency and API costs.
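A minimal sketch of that caching idea, using `functools.lru_cache` and a stand-in for the real Step 1 call (the stub function and its return value are hypothetical):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def generate_angle(topic: str) -> str:
    # In a real backend this would be ideation_chain.invoke({"topic": topic})
    return f"Contrarian angle on {topic}"

def write_post(angle: str, tone: str) -> str:
    # Step 2 can be rerun cheaply with new parameters; Step 1 stays cached
    return f"[{tone}] {angle}"

first = write_post(generate_angle("Remote Work"), tone="direct")
# A "rewrite" request reuses the cached angle instead of burning tokens again
rewrite = write_post(generate_angle("Remote Work"), tone="playful")
```

In production you would key the cache on more than the topic string (user ID, model version), but the shape is the same.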

Next Steps: From Chains to Agents

A chain is a hard-coded sequence of steps. Step A always leads to Step B.

Once you are comfortable with this, the next evolution is Agents. An Agent is essentially an LLM that has access to a "toolbox" of chains and decides which chain to run based on user input. But you cannot build good agents until you know how to build reliable chains.

Grab the code above, run it locally, and try adding a third step—maybe a "Critic" that reviews the post and suggests improvements.

That is how you move from a prompt engineer to an AI engineer. Keep building.
