SilverLingua

A modular and extensible framework for working with Large Language Models (LLMs)

About

SilverLingua is a type-safe, lightweight framework that makes working with LLMs easier. It provides an intuitive, biologically inspired architecture (atoms, molecules, organisms) for LLMs and associated functionality such as token-aware memory management, prompt templating, and tool integration. It standardizes how LLMs are interacted with while allowing surgical modification and extension without disrupting core functionality. It was created as a more intuitive alternative to the often complex and hard-to-extend LangChain.

This design makes SilverLingua particularly suitable for production LLM applications where reliability, maintainability, and flexibility are crucial. Its hierarchical architecture built on atomic design patterns (atoms, molecules, organisms) provides a clear mental model for component relationships and responsibilities, keeping the codebase maintainable as it scales.
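To make the atomic layers concrete, here is a minimal, illustrative sketch of the hierarchy. These simplified classes are stand-ins for illustration only, not SilverLingua's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Atom: the smallest unit -- here, a bare tokenizer wrapper
@dataclass
class Tokenizer:
    encode: Callable[[str], List]

# Molecule: a single unit of conversational memory
@dataclass
class Notion:
    content: str
    persistent: bool = False

# Organism: composes atoms and molecules into token-aware memory
@dataclass
class Idearium:
    tokenizer: Tokenizer
    max_tokens: int
    notions: List[Notion] = field(default_factory=list)

    @property
    def total_tokens(self) -> int:
        return sum(len(self.tokenizer.encode(n.content)) for n in self.notions)

    def append(self, notion: Notion) -> None:
        self.notions.append(notion)
        self._trim()

    def _trim(self) -> None:
        # Default strategy: drop the oldest non-persistent notions
        while self.total_tokens > self.max_tokens:
            for i, n in enumerate(self.notions):
                if not n.persistent:
                    del self.notions[i]
                    break
            else:
                break
```

Each layer only knows about the layer below it, which is what makes strategies like the `_trim` override in the first snippet possible.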

Snippets

```python
from typing import Any, Optional

class SummarizingIdearium(Idearium):
    """
    An extension of Idearium that summarizes older messages instead of removing them.

    This implementation uses a summarization model to condense older messages
    while preserving their essential information.
    """

    def __init__(
        self,
        tokenizer: Tokenizer,
        max_tokens: int,
        summarization_model: Optional[Any] = None,
        summarization_threshold: int = 5,
        **kwargs,
    ):
        super().__init__(tokenizer=tokenizer, max_tokens=max_tokens, **kwargs)
        self.summarization_model = summarization_model
        self.summarization_threshold = summarization_threshold
        self.summary_indices = set()

    def _trim(self):
        """Trim strategy that summarizes chunks of messages instead of removing them."""
        if self.total_tokens <= self.max_tokens:
            return

        # Find chunks of consecutive non-persistent messages to summarize
        current_chunk = []
        chunks = []

        for i, notion in enumerate(self.notions):
            # Persistent notions and notions already in summaries are skipped;
            # either one ends the current chunk
            if notion.persistent or i in self.summary_indices:
                if len(current_chunk) >= self.summarization_threshold:
                    chunks.append(current_chunk)
                current_chunk = []
                continue

            current_chunk.append((i, notion))

        # Don't lose a qualifying chunk at the end of the list
        if len(current_chunk) >= self.summarization_threshold:
            chunks.append(current_chunk)

        # Replace oldest chunks with summaries until under token limit
        for chunk in chunks:
            if self.total_tokens <= self.max_tokens:
                break

            # Extract indices and notions
            indices = [i for i, _ in chunk]
            notions = [n for _, n in chunk]

            # Create summary
            summary = self._summarize_notions(notions)

            # Only apply the summary if it actually saves tokens
            summary_tokens = len(self.tokenizer.encode(summary.content))
            original_tokens = sum(len(self.tokenized_notions[i]) for i in indices)
            if summary_tokens < original_tokens:
                # Replace the first notion with the summary
                first_idx = indices[0]
                self.notions[first_idx] = summary
                self.tokenized_notions[first_idx] = self.tokenizer.encode(summary.content)

                # Remove the rest and update indices
                # ...
```
The SummarizingIdearium class extends Idearium's memory management by summarizing old messages instead of simply dropping them. When the token limit is exceeded, it identifies chunks of non-persistent messages, condenses them to preserve their essential information, and replaces the originals with the summaries. This demonstrates SilverLingua's extensibility: by overriding just the _trim method, we get an entirely new memory-management strategy that preserves context while reducing token usage. It shows how the architecture allows powerful customization with minimal code changes.
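The chunking pass inside _trim can be illustrated in isolation. The helper below is hypothetical, written only to show the grouping rule, and is not part of the library:

```python
from typing import List

def chunk_nonpersistent(persistent_flags: List[bool], threshold: int) -> List[List[int]]:
    """
    Group indices of consecutive non-persistent messages into chunks of at
    least `threshold` entries, mirroring the scan _trim performs before
    summarizing. A persistent message ends the current chunk.
    """
    chunks: List[List[int]] = []
    current: List[int] = []
    for i, persistent in enumerate(persistent_flags):
        if persistent:
            if len(current) >= threshold:
                chunks.append(current)
            current = []
            continue
        current.append(i)
    if len(current) >= threshold:  # don't drop a trailing chunk
        chunks.append(current)
    return chunks
```

Runs shorter than the threshold are left alone, so short recent exchanges are never summarized prematurely.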
```python
from typing import List, Optional

class OpenAIChatAgent(Agent):
    """
    An agent that uses the OpenAI chat completion API.
    """

    model: OpenAIModel

    @property
    def model(self) -> OpenAIModel:
        return self._model

    def _bind_tools(self) -> None:
        m_tools: List[ChatCompletionToolParam] = [
            {"type": "function", "function": tool.description} for tool in self.tools
        ]

        # Only set tools if there are any to bind
        if len(m_tools) > 0:
            self.model.completion_params.tools = m_tools

    def __init__(
        self,
        model_name: OpenAIChatModelName = "gpt-3.5-turbo",
        idearium: Optional[Idearium] = None,
        tools: Optional[List[Tool]] = None,
        api_key: Optional[str] = None,
        completion_params: Optional[CompletionParams] = None,
    ):
        """
        Initializes the OpenAI chat agent.
        """
        model = OpenAIModel(
            name=model_name, api_key=api_key, completion_params=completion_params
        )

        args = {"model": model}
        if idearium is not None:
            args["idearium"] = idearium
        if tools is not None:
            args["tools"] = tools

        super().__init__(**args)
```
The OpenAIChatAgent implementation shows how cleanly provider-specific functionality can be integrated. It only needs to implement the _bind_tools method, which transforms SilverLingua Tool objects into the format OpenAI expects for function calling. This minimal implementation (roughly 40 lines of code) reflects SilverLingua's streamlined design philosophy: because the abstract base class handles the complexities of agent behavior, concrete implementations can focus solely on provider-specific adaptations.
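The payload shape _bind_tools builds can be seen with a plain dictionary. The description dict below is a hypothetical stand-in for a Tool's description:

```python
from typing import Any, Dict, List

# Hypothetical stand-in for tool.description
description: Dict[str, Any] = {
    "name": "search_database",
    "description": "Search a database for information.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

# The same wrapping _bind_tools performs for OpenAI function calling:
# each tool is nested under {"type": "function", "function": ...}
m_tools: List[Dict[str, Any]] = [{"type": "function", "function": description}]
```

Because the Tool description already matches OpenAI's JSON-Schema-based function format, the adapter is a one-line wrap.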
```python
class AnthropicChatAgent(Agent):
    """
    An agent that uses the Anthropic chat completion API.
    """

    model: AnthropicModel

    @property
    def model(self) -> AnthropicModel:
        return self._model

    def _bind_tools(self) -> None:
        """Bind tools to the model."""
        if not self.tools:
            return

        tools = []
        for tool in self.tools:
            tools.append(
                {
                    "name": tool.description.name,
                    "description": tool.description.description,
                    "input_schema": {
                        "type": "object",
                        "properties": {
                            name: {
                                "type": param.type,
                                "description": param.description or "",
                            }
                            for name, param in tool.description.parameters.properties.items()
                        },
                        "required": tool.description.parameters.required or [],
                    },
                }
            )

        if tools:
            self.model.completion_params.tools = tools
```
Like the OpenAI implementation, AnthropicChatAgent demonstrates the framework's provider-agnostic design. It implements the same interface but adapts to Anthropic's tool format in _bind_tools. This consistency across providers is what makes SilverLingua powerful: developers can switch between LLM providers, or use several in the same application, with minimal code changes, while the shared abstract interface keeps behavior consistent regardless of the underlying provider.
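The reshaping this _bind_tools performs can likewise be shown with plain dictionaries. As before, the description dict is a hypothetical stand-in for a Tool's description:

```python
from typing import Any, Dict

# Hypothetical stand-in for tool.description
description: Dict[str, Any] = {
    "name": "search_database",
    "description": "Search a database for information.",
    "parameters": {
        "properties": {
            "query": {"type": "string", "description": "The search query"},
        },
        "required": ["query"],
    },
}

# Anthropic's tool-use API wants name/description at the top level
# with the parameter schema under "input_schema"
anthropic_tool: Dict[str, Any] = {
    "name": description["name"],
    "description": description["description"],
    "input_schema": {
        "type": "object",
        "properties": {
            name: {"type": p["type"], "description": p.get("description", "")}
            for name, p in description["parameters"]["properties"].items()
        },
        "required": description["parameters"]["required"] or [],
    },
}
```

Same information, different envelope, which is exactly the kind of difference the shared Agent interface hides from application code.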
```python
import os

# Create a simple tool that works with both providers
@tool
def search_database(query: str) -> str:
    """
    Search a database for information.

    Args:
        query: The search query

    Returns:
        The search results
    """
    # In a real implementation, this would query a database
    return f"Database results for '{query}': Found 3 matching entries."

# OpenAI implementation
openai_agent = OpenAIChatAgent(
    model_name="gpt-3.5-turbo",
    tools=[search_database],
    api_key=os.environ.get("OPENAI_API_KEY"),
)

# Anthropic implementation with the same tool
anthropic_agent = AnthropicChatAgent(
    model_name="claude-3-haiku",
    tools=[search_database],
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

# The same query works seamlessly across providers
openai_response = openai_agent.generate("Search for recent climate studies")
anthropic_response = anthropic_agent.generate("Search for recent climate studies")
```
One of SilverLingua's most powerful features is how easily you can switch between different LLM providers. The example creates one agent with OpenAI's GPT model and another with Anthropic's Claude model, both using the same tool. Despite significant differences in how these providers handle tool calling behind the scenes, SilverLingua abstracts the complexity away, letting developers focus on application logic rather than provider-specific plumbing.
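Because both agents expose the same interface, provider choice can collapse into a single configuration value. A hedged sketch of that pattern, where the lambda constructors are stand-ins for the real agent classes:

```python
from typing import Any, Callable, Dict

def make_agent(
    provider: str,
    constructors: Dict[str, Callable[..., Any]],
    **kwargs: Any,
) -> Any:
    """Build an agent for the named provider, passing shared kwargs through."""
    if provider not in constructors:
        raise ValueError(f"Unknown provider: {provider!r}")
    return constructors[provider](**kwargs)

# Stand-in constructors; in practice these would be
# OpenAIChatAgent and AnthropicChatAgent
constructors: Dict[str, Callable[..., Any]] = {
    "openai": lambda **kw: ("openai", kw),
    "anthropic": lambda **kw: ("anthropic", kw),
}
```

Swapping providers then means changing one string (or one environment variable), not rewriting agent code.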