
Dhenara vs. LangChain

This page compares Dhenara with LangChain, highlighting key differences and advantages to help you choose the right framework for your AI applications.

At a Glance: Dhenara vs. LangChain

| Feature | Dhenara | LangChain |
| --- | --- | --- |
| Architecture | Clean, direct architecture with minimal abstraction layers | Multiple layers of abstraction (chains, memory, callbacks) |
| Type Safety | Strong typing throughout with Pydantic validation | Limited type safety, particularly across providers |
| Cross-Provider Support | Seamless provider switching with unified API | Provider switching requires manual memory synchronization |
| Conversation Management | Direct, explicit control with ConversationNode | Complex memory systems with varying implementations |
| Streaming | Simplified streaming with automatic consolidation | Multiple callback systems for streaming |
| Usage Tracking | Built-in cost and token tracking across providers | Limited or manual cost tracking |
| Test Mode | Built-in test mode for rapid development | Requires manual mocking |
| Sync/Async | Unified sync/async interfaces | Mixed sync/async implementations |
| Boilerplate | Minimal setup code required | Significant boilerplate for complex scenarios |
| Learning Curve | Transparent design patterns | Steep learning curve with many abstractions |

Key Advantages of Dhenara

1. Simplified Architecture

Dhenara uses a more straightforward approach to managing conversation context. The ConversationNode structure directly captures all necessary information without the additional layers of abstraction that LangChain introduces with its chains, memory types, and callbacks.

# Dhenara's clean approach
node = ConversationNode(
    user_query=query,
    attached_files=[],
    response=response.chat_response,
    timestamp=datetime.datetime.now().isoformat(),
)
conversation_nodes.append(node)

2. Strong Typing and Validation

Dhenara leverages Pydantic models throughout the library, ensuring that data structures are properly validated at runtime. This helps catch mistakes early and provides better IDE support with type hints.

Every response follows a consistent pattern:

# Consistent response structure
chat_response = ChatResponse(
    model="gpt-4o",
    provider=AIModelProviderEnum.OPEN_AI,
    usage=ChatResponseUsage(...),
    usage_charge=UsageCharge(...),
    choices=[...],
    metadata={...},
)

3. Cross-Provider Flexibility

Dhenara's implementation allows seamless switching between providers (OpenAI, Anthropic, Google) while maintaining conversation context. The PromptFormatter automatically handles the conversion between different provider formats.

# Effortlessly switch models between turns
model_endpoint = random.choice(all_model_endpoints) # Can select from any provider
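
Because the history lives in your own conversation_nodes list rather than inside a provider-specific object, a follow-up turn can go to a different provider with no memory migration. The snippet below is an illustrative sketch, not verbatim library code: claude_endpoint follows the ResourceConfig example later on this page, and get_context is the helper shown under "Direct Control Flow" below.

# Earlier turns were answered by an OpenAI endpoint and recorded in conversation_nodes;
# the follow-up can go to Anthropic with no memory migration
claude_client = AIModelClient(claude_endpoint, AIModelCallConfig())
response = claude_client.generate(
    prompt="Continue the story but add a twist.",
    # PromptFormatter converts the stored history into Anthropic's message format
    context=get_context(conversation_nodes, claude_endpoint.ai_model),
)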

4. Built-in Usage and Cost Tracking

Dhenara provides automatic tracking of token usage and associated costs across all providers:

# Usage data automatically included in responses
response.chat_response.usage # ChatResponseUsage with token counts
response.chat_response.usage_charge # Cost information including price calculations
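
This makes per-conversation cost accounting straightforward. The sketch below is illustrative only: the exact attribute names on ChatResponseUsage and UsageCharge (total_tokens, cost) are assumptions, not confirmed field names.

# Hypothetical running total across a multi-turn conversation
total_tokens = 0
total_cost = 0.0

for node in conversation_nodes:
    usage = node.response.usage              # ChatResponseUsage
    charge = node.response.usage_charge      # UsageCharge
    total_tokens += usage.total_tokens       # field name assumed for illustration
    total_cost += charge.cost                # field name assumed for illustration

print(f"Conversation used {total_tokens} tokens, costing ~${total_cost:.4f}")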

5. Simplified Streaming

Streaming is handled through a unified interface that works consistently across providers:

# Streaming with Dhenara
config = AIModelCallConfig(streaming=True)
client = AIModelClient(model_endpoint, config)

async for chunk, final_response in client.generate_async(...):
    # Process each chunk as it arrives
    print(chunk.data.choice_deltas[0].content_deltas[0].text_delta)
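
A common pattern is to accumulate the text deltas while keeping the consolidated response for later bookkeeping. This sketch mirrors the arguments used with generate() elsewhere on this page and assumes intermediate yields carry a chunk while the final yield carries the consolidated response; the None checks are defensive rather than documented behavior.

full_text = ""
consolidated = None

async for chunk, final_response in client.generate_async(prompt=prompt):
    if chunk is not None:
        # Append each text delta as it streams in
        full_text += chunk.data.choice_deltas[0].content_deltas[0].text_delta
    if final_response is not None:
        # Consolidated response, usable like a non-streaming result
        consolidated = final_response

print(full_text)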

6. Test Mode for Rapid Development

Dhenara includes a built-in test mode that doesn't require API credentials:

# Enable test mode for rapid development without API calls
config = AIModelCallConfig(test_mode=True)
client = AIModelClient(model_endpoint, config)
response = client.generate(prompt=prompt)
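
This makes unit tests cheap to write. Below is a minimal pytest-style sketch, assuming test mode returns a placeholder response without contacting any provider; it reuses the classes shown above and is illustrative rather than verbatim library usage.

# test_conversation.py -- illustrative only
import datetime

def test_turn_appends_node():
    config = AIModelCallConfig(test_mode=True)   # no API credentials needed
    client = AIModelClient(model_endpoint, config)

    response = client.generate(prompt="Hello")
    node = ConversationNode(
        user_query="Hello",
        attached_files=[],
        response=response.chat_response,
        timestamp=datetime.datetime.now().isoformat(),
    )

    assert node.response is not None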

7. Less Boilerplate Code

The Dhenara implementation requires significantly less setup code compared to LangChain's equivalent functionality:

# LangChain equivalent would require:
# - Setting up a memory object
# - Configuring a chain
# - Creating provider-specific clients
# - Setting up callbacks for logging

8. Direct Control Flow

Dhenara gives developers explicit control over the conversation flow without hiding it behind abstractions:

# Direct access to get context and manage turns
context = get_context(conversation_nodes, endpoint.ai_model)
response = client.generate(prompt=prompt, context=context, instructions=instructions)
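
Because the history is a plain Python list, it can be shaped before each call -- for example, windowed to the most recent turns -- without touching framework internals. The sketch below reuses the prompt, client, endpoint, and instructions variables from the snippet above and is illustrative only.

# Keep only the last three turns in the prompt context (illustrative)
recent_nodes = conversation_nodes[-3:]

context = get_context(recent_nodes, endpoint.ai_model)
response = client.generate(prompt=prompt, context=context, instructions=instructions)

# Record the new turn so later calls can see it
conversation_nodes.append(
    ConversationNode(
        user_query=prompt,
        attached_files=[],
        response=response.chat_response,
        timestamp=datetime.datetime.now().isoformat(),
    )
)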

How LangChain Would Handle the Same Example

For comparison, here's how a similar multi-turn conversation might be implemented with LangChain:

import random

from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI

# Setup providers
openai_llm = ChatOpenAI(model_name="gpt-4o-mini")
anthropic_llm = ChatAnthropic(model="claude-3-5-haiku")
google_llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")

# Dictionary to track LLM chains for each provider
llm_chains = {
    "openai": ConversationChain(
        llm=openai_llm,
        memory=ConversationBufferMemory(),
        verbose=True
    ),
    "anthropic": ConversationChain(
        llm=anthropic_llm,
        memory=ConversationBufferMemory(),
        verbose=True
    ),
    "google": ConversationChain(
        llm=google_llm,
        memory=ConversationBufferMemory(),
        verbose=True
    )
}

# This is where LangChain gets complicated - cross-provider memory sharing
# requires manual handling of memory state
def sync_memories(from_chain, to_chain):
    # Need to extract the conversation from one memory and load it into another.
    # This is non-trivial in LangChain: ConversationBufferMemory.buffer is a
    # derived view, so the underlying chat_memory.messages must be copied directly,
    # which requires understanding internal structures.
    to_chain.memory.chat_memory.messages = list(from_chain.memory.chat_memory.messages)

# Execute conversation turns
queries = [
    "Tell me a short story about a robot learning to paint.",
    "Continue the story but add a twist where the robot discovers something unexpected.",
    "Conclude the story with an inspiring ending.",
]

instructions = [
    "Be creative and engaging.",
    "Build upon the previous story seamlessly.",
    "Bring the story to a satisfying conclusion.",
]

# Need to keep track of which provider was used last
last_provider = None
current_chain = None

for i, query in enumerate(queries):
    # Select provider (randomly or in sequence)
    providers = ["openai", "anthropic", "google"]
    current_provider = random.choice(providers)
    current_chain = llm_chains[current_provider]

    # Sync memory if switching providers
    if last_provider and last_provider != current_provider:
        sync_memories(llm_chains[last_provider], current_chain)

    # Need to inject the system prompt/instructions manually
    # LangChain has limited support for per-turn instructions
    enriched_query = f"{instructions[i]}\n\nUser: {query}"

    # Generate response
    response = current_chain.predict(input=enriched_query)

    print(f"User: {query}")
    print(f"Model: {current_provider}")
    print(f"Response: {response}")
    print("-" * 80)

    last_provider = current_provider

Resource Configuration

| Feature | Dhenara | LangChain |
| --- | --- | --- |
| Credential Management | Centralized YAML configuration with runtime loading | Environment variables or manual client setup |
| Model Organization | Structured model registry with provider metadata | Ad-hoc model instantiation |
| Provider Switching | Single config with dynamic model selection | Manual client reconfiguration |
| Endpoint Management | Automatic endpoint creation from models and APIs | Manual endpoint setup |
| Resource Querying | Rich query interface for resource retrieval | No centralized resource management |
| Multi-environment Support | Multiple resource configs for different environments | Manual environment handling |

Dhenara's ResourceConfig Advantage

Dhenara introduces a centralized resource management system that dramatically simplifies working with multiple AI models and providers:

# Load all credentials and initialize endpoints in one line
resource_config = ResourceConfig()
resource_config.load_from_file("credentials.yaml", init_endpoints=True)

# Get any model by name, regardless of provider
claude_endpoint = resource_config.get_model_endpoint("claude-3-5-haiku")
gpt4_endpoint = resource_config.get_model_endpoint("gpt-4o")

# Or use a more specific query when needed
gemini_endpoint = resource_config.get_resource(
    ResourceConfigItem(
        item_type=ResourceConfigItemTypeEnum.ai_model_endpoint,
        query={"model_name": "gemini-1.5-flash", "api_provider": "google_gemini_api"}
    )
)
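
Combined with the multi-turn pattern shown earlier, the per-turn endpoint can come straight from the registry. The sketch below is illustrative; the variable names follow the earlier snippets on this page rather than a fixed library convention.

# Build the pool of endpoints once, then switch freely between turns
import random

all_model_endpoints = [
    resource_config.get_model_endpoint("gpt-4o"),
    resource_config.get_model_endpoint("claude-3-5-haiku"),
    resource_config.get_model_endpoint("gemini-1.5-flash"),
]

model_endpoint = random.choice(all_model_endpoints)
client = AIModelClient(model_endpoint, AIModelCallConfig())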

In contrast, LangChain requires setting up each model client individually:

import os

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI

# Manual setup for each provider
openai_model = ChatOpenAI(api_key=os.environ["OPENAI_API_KEY"], model="gpt-4o")
anthropic_model = ChatAnthropic(api_key=os.environ["ANTHROPIC_API_KEY"], model="claude-3-haiku")
google_model = ChatGoogleGenerativeAI(api_key=os.environ["GOOGLE_API_KEY"], model="gemini-1.5-flash")

# No centralized way to retrieve models by name or query
# Must manually track which model is which

Dhenara's ResourceConfig provides a more maintainable, structured approach to managing AI resources, especially in applications that use multiple models across different providers.

Key Limitations of LangChain in this Use Case

  1. Complex Memory Synchronization: LangChain doesn't natively support sharing memory across different provider chains, requiring manual memory synchronization.

  2. Opaque Memory Structure: The internal representation of conversation history is less transparent and harder to manipulate directly.

  3. Provider Switching Complexity: Switching between providers requires creating separate chains and manually transferring context.

  4. Per-Turn Instructions: LangChain's design makes it difficult to vary system instructions on a per-turn basis.

  5. Verbose Configuration: Requires more boilerplate code to set up chains, memory, and callbacks.

  6. Limited Usage Tracking: Cost tracking is not built-in across providers and requires additional setup.

  7. Inconsistent Streaming: Streaming implementations vary across providers and require different callback setups.

When to Choose Dhenara Over LangChain

Dhenara is likely the better choice when:

  1. You need seamless multi-provider conversation support
  2. You want direct control over conversation state
  3. You prefer clean, strongly-typed interfaces
  4. Your application needs per-turn instruction customization
  5. You require built-in usage and cost tracking
  6. You value simplified streaming implementations
  7. You need both sync and async interfaces with consistent behavior
  8. You want a lower learning curve with more transparent design patterns

LangChain may still be preferable if you're using its extensive collection of tools, agents, and integrations beyond simple conversation management.

Conclusion

For multi-turn conversations specifically, Dhenara provides a more elegant, flexible, and developer-friendly approach compared to LangChain. The design prioritizes simplicity and direct control while still offering powerful features like cross-provider compatibility, usage tracking, and contextual awareness.

Rather than hiding complexity behind layers of abstraction, Dhenara gives developers clear patterns that are easy to understand, extend, and debug – making it particularly well-suited for production applications that need reliability and maintainability.