How We Built an AI-Powered Business Plan Generator Using LangGraph & LangChain
Introduction
When building an AI-powered business plan generator, we started from the ground up using LangChain and LangGraph, as we needed an agentic framework capable of handling complex workflows. Unlike our previous project, Business Advisor — a chat-based agent that relied on OpenAI’s SDK and pipeline-based processing — this new project required a structured, multi-step AI workflow to dynamically generate and refine business plans.
Our product team defined the core functionality as follows:
- Users would go through a business interview where they answered structured questions.
- Each question-answer pair would map to specific sections in the business plan (a data-model sketch follows this list).
- Users could later update responses, triggering a regeneration of affected sections.
- The system was designed to support future domain-specific agents, such as financial modeling or market research specialists.
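As a rough illustration of that mapping, the interview data might be modeled like this (the type and field names are illustrative, not our production schema):
interface InterviewAnswer {
  questionId: string;      // which interview question was answered
  answer: string;          // the user's response
  sectionSlugs: string[];  // business plan sections this answer feeds
}

// When an answer changes, these are the sections that need regeneration.
function sectionsToRegenerate(updated: InterviewAnswer): string[] {
  return updated.sectionSlugs;
}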
Given the increasing trend of multi-agent AI workflows in business applications, we structured our system with modular, scalable AI components that could collaborate efficiently while ensuring accuracy, consistency, and adaptability.
Why We Chose LangChain & LangGraph
Building a complex AI-driven application required more than just simple LLM queries. We needed:
- A model-agnostic architecture: The ability to switch between OpenAI models, Claude, and even local LLMs without major rewrites.
- Graph-based execution: A way to structure workflows dynamically, avoiding rigid pipelines.
- Stateful memory: The ability for the system to retain intermediate results across steps.
- Scalability: The flexibility to add more specialized AI agents in the future.
LangChain and LangGraph provided these capabilities through graph-based workflows; in our case the workflow formed a directed acyclic graph (DAG) of processing nodes, enabling complex interactions between the stages.
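For example, LangChain's common chat-model interface lets the provider be swapped behind a single variable. A minimal sketch, assuming the standard @langchain/openai and @langchain/anthropic packages (the environment flag is illustrative):
import { ChatOpenAI } from '@langchain/openai';
import { ChatAnthropic } from '@langchain/anthropic';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';

// The rest of the workflow only ever sees BaseChatModel,
// so swapping providers requires no downstream changes.
const model: BaseChatModel = process.env.USE_CLAUDE
  ? new ChatAnthropic({ model: 'claude-3-5-sonnet-20240620' })
  : new ChatOpenAI({ model: 'gpt-4o' });

const reply = await model.invoke('Draft an executive summary outline.');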
While we did not immediately implement domain-specific agents, we designed the system to support future AI models specializing in financial projections, legal compliance, or market analysis, ensuring the framework remained flexible.
One major trend we identified was the increasing adoption of multi-agent AI workflows in complex business applications. Companies like LinkedIn and Uber have successfully deployed agent-based architectures to improve operations and decision-making. Inspired by this, we designed our system so that multiple AI nodes could collaborate dynamically, ensuring that each processing step could be optimized independently while maintaining a seamless integration.
Implementing the AI Workflow
The core architecture consisted of:
- User responses from the interview stored as structured data.
- LangGraph-powered workflow that dynamically routed tasks to relevant AI nodes.
- A hybrid model selection strategy, allowing different tasks to be handled by different OpenAI models (GPT-4o for detailed sections, GPT-4o-mini for general drafting); a routing sketch follows this list.
- A hybrid generation approach, where some sections were generated individually for accuracy, while others were processed in batches for efficiency.
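A minimal sketch of that model routing (the detailed flag is an illustrative stand-in for however section complexity is marked in practice):
import { ChatOpenAI } from '@langchain/openai';

// Detailed, high-stakes sections get the larger model;
// everything else drafts on the cheaper, faster one.
const detailedModel = new ChatOpenAI({ model: 'gpt-4o' });
const draftingModel = new ChatOpenAI({ model: 'gpt-4o-mini' });

function pickModel(section: { slug: string; detailed: boolean }): ChatOpenAI {
  return section.detailed ? detailedModel : draftingModel;
}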
Step-by-Step Breakdown of Operations
Our business plan generation workflow involved several sequential steps:
- Drafting Node — Generates an initial business plan draft based on user responses.
- Evaluation Node — Assesses the draft, identifying gaps and improvement areas.
- Post-Evaluation Refinement — Adjusts the draft based on evaluation feedback.
- Final Generation — Produces the final version, ensuring completeness and coherence.
This multi-step approach ensured progressive refinement of the business plan rather than relying on a single AI generation pass. However, performance constraints later pushed us to collapse it into a single-step generation process for the sake of usability, a change we will cover in a future article.
Graph-Based Processing Example
graph TD;
A[User Interview] --> B[Draft Generation];
B --> C[Evaluation];
C --> D[Post-Evaluation Refinement];
D --> E[Final Business Plan];
This structure illustrates how tasks flow through distinct stages, ensuring modular and scalable execution.
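In LangGraph's JS API, the same flow can be wired as a StateGraph. This is a simplified sketch with stubbed node bodies, not our production graph (state fields and node names are illustrative):
import { StateGraph, Annotation, START, END } from '@langchain/langgraph';

// Shared state handed from node to node.
const PlanState = Annotation.Root({
  answers: Annotation<string>(),
  draft: Annotation<string>(),
  feedback: Annotation<string>(),
  plan: Annotation<string>(),
});

const workflow = new StateGraph(PlanState)
  .addNode('drafting', async (s) => ({ draft: `Draft based on: ${s.answers}` }))
  .addNode('evaluation', async (s) => ({ feedback: `Gaps found in: ${s.draft}` }))
  .addNode('refinement', async (s) => ({ draft: `${s.draft} (revised per feedback)` }))
  .addNode('finalization', async (s) => ({ plan: s.draft }))
  .addEdge(START, 'drafting')
  .addEdge('drafting', 'evaluation')
  .addEdge('evaluation', 'refinement')
  .addEdge('refinement', 'finalization')
  .addEdge('finalization', END)
  .compile();

const result = await workflow.invoke({ answers: 'structured interview responses' });
Each node returns a partial state update, which LangGraph merges into the shared state before following the next edge.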
Example: Using Tool Calling for Structured Generation
One of our key decisions was to leverage OpenAI's strict mode for tool calling. This allowed the models to interact with structured functions while enforcing predictable, well-formatted responses, reducing hallucinations.
Tool Calling Example: Generating Business Plan Sections
import { StructuredToolWithStrict } from 'langchain/tools'; // our custom extension of StructuredTool
import { z } from 'zod';

// Schema for a batch of generated sections; .strict() rejects unexpected keys.
const sectionsSchema = z.object({
  sections: z.array(
    z.object({
      id: z.string().describe('The ID of the section'),
      slug: z.string().describe('The template slug for this section'),
      content: z.string().describe('The generated content for this section')
    })
  ).describe('A batch of business plan sections')
}).strict();

export class GenerateBusinessPlanSections extends StructuredToolWithStrict {
  name = 'generate_sections';
  description = 'Generate structured business plan sections based on user input.';
  schema = sectionsSchema;

  async _call(input: z.infer<typeof sectionsSchema>) {
    // The model supplies the sections as validated tool arguments.
    return { sections: input.sections };
  }
}
Using Zod for schema validation ensured that responses were always well-formed and type-safe, reducing parsing errors and allowing for automated validation of AI-generated content.
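For instance, any payload coming back from the model can be checked before it is persisted; Zod's safeParse returns a result object instead of throwing (rawModelOutput and saveSections are hypothetical names for illustration):
const parsed = sectionsSchema.safeParse(rawModelOutput);
if (!parsed.success) {
  // Malformed output never reaches storage or the UI.
  console.error(parsed.error.issues);
} else {
  await saveSections(parsed.data.sections); // hypothetical persistence helper
}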
Challenges & LangChain Modifications
Despite the advantages of LangChain and LangGraph, we encountered multiple roadblocks that required custom modifications:
- LangChain limitations → Missing strict mode for tool calling and lack of streaming support for tool outputs. We extended LangChain’s built-in functions to enforce stricter output constraints and modified LangGraph to allow incremental streaming of structured responses.
- Poor LangChain documentation → We frequently had to read and analyze LangChain’s source code due to insufficient documentation, which made implementation slower and debugging harder.
- Code quality issues → Certain areas of LangChain’s implementation lacked maintainability, requiring us to refactor and optimize key components.
- Performance bottlenecks with OpenAI’s Assistants API → We initially used Assistants API but found that thread creation introduced significant latency, and additional unstructured messages in tool calls slowed response times. We eventually transitioned to Chat API for structured responses and improved efficiency.
To address these, we developed a custom OpenAIAssistantRunnable, a specialized component based on LangChain’s existing implementation, but enhanced to support streaming, strict tool calls, and multi-step workflows.
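For reference, a strict tool definition on the Chat Completions side looks roughly like the following. This is a sketch using the raw OpenAI Node SDK, not our custom runnable; the JSON Schema mirrors the Zod tool above and is abbreviated here:
import OpenAI from 'openai';

const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Generate the business plan sections.' }],
  tools: [{
    type: 'function',
    function: {
      name: 'generate_sections',
      description: 'Generate structured business plan sections.',
      strict: true, // the model must match the JSON Schema exactly
      parameters: {
        type: 'object',
        properties: {
          sections: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                id: { type: 'string' },
                slug: { type: 'string' },
                content: { type: 'string' }
              },
              required: ['id', 'slug', 'content'],
              additionalProperties: false
            }
          }
        },
        required: ['sections'],
        additionalProperties: false
      }
    }
  }]
});
Strict mode requires additionalProperties: false and every property listed in required, which is exactly the discipline our Zod .strict() schemas already enforced.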
Final Architecture Adjustment
While the overall shape of the architecture remained intact, we reduced the pipeline from multiple iterative steps to single-step generation to improve speed and user experience in the final release.
Key Takeaways
- LangChain + LangGraph were essential but required deep customization for structured responses and tool calling.
- Hybrid generation approaches — mixing individual and batch processing — allowed us to balance accuracy and efficiency.
- Structured responses and schema validation significantly improved AI output quality and reliability.
- A multi-step processing approach was initially used, but due to performance constraints, a simplified single-step generation was implemented.
- Optimizing AI execution speed by shifting from the Assistants API to the Chat API drastically reduced generation time while preserving structured responses.
Try Our AI-Powered Business Suite
Experience the full capabilities of our AI-driven business tools, built and hosted on DreamHost. From business planning to content generation, our suite of AI tools is designed to help entrepreneurs and businesses streamline their operations.
Try our AI-powered business plan generator and explore other business tools: Business Planner
Powered by DreamHost.
Connect with me on LinkedIn: Krzysztof Miaskowski