Commit ff6d332: Update README.md
1 parent 6e2a208 commit ff6d332

1 file changed (README.md): 27 additions and 65 deletions
@@ -25,12 +25,11 @@ PromptStream.AI enables developers to **compose**, **validate**, **generate**, a

 ### ⚙️ Key Features

 - 🧩 **Token-aware prompt builder** with variable interpolation (`{{variable}}` syntax)
 - **Validation engine** for token limits, structure, and completeness
-- 🔢 **Integrated token counting** via `ITokenFlowProvider`
+- 💬 **Shared Core Models** from Flow.AI.Core (`PromptTemplate`, `PromptInstance`, `PromptMessage`, `PromptResponse`)
 - 🧠 **Context manager** with replay, merge, summarization, and JSON persistence
-- 💬 **Assistant reply tracking** and multi-turn chat history
 - 💾 **Persistent context storage** (`ToJson` / `LoadFromJson`)
 - 🧮 **Token budgeting tools** (`EstimateTokenUsage`, `TrimToTokenBudget`)
-- **CLI utility (`PromptStream.AI.CLI`)** for building, validating, and generating prompts
+- **CLI utility (`PromptStream.AI.CLI`)** for building, validating, analyzing, and generating prompts
 - 🔌 Seamless integration with **TokenFlow.AI** for model-aware tokenization
---
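As an aside for readers of this diff: the `{{variable}}` interpolation named in the feature list can be pictured with a tiny self-contained sketch. This is illustrative only, written for this note, and is not PromptStream.AI's actual implementation.

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Illustrative stand-in for {{variable}} interpolation; not the library's code.
// Unknown placeholders are left untouched rather than throwing.
static string Interpolate(string template, IReadOnlyDictionary<string, string> vars) =>
    Regex.Replace(template, @"\{\{(\w+)\}\}",
        m => vars.TryGetValue(m.Groups[1].Value, out var value) ? value : m.Value);

var text = Interpolate("Summarize {{topic}} in one sentence.",
    new Dictionary<string, string> { ["topic"] = "composable AI workflows" });
Console.WriteLine(text); // Summarize composable AI workflows in one sentence.
```

The real builder is token-aware and validates `RequiredVariables`; this sketch shows only the substitution step.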
@@ -43,8 +42,8 @@ dotnet add package PromptStream.AI

 Requires:

 - .NET 8.0 or higher
-- Flow.AI.Core
-- (optional) TokenFlow.AI for model-specific token counting
+- Flow.AI.Core v0.2.0+
+- (optional) TokenFlow.AI for advanced token metrics

 ---
@@ -53,30 +52,33 @@ Requires:

 ```csharp
 using System;
 using System.Collections.Generic;
+using Flow.AI.Core.Models;
+using Flow.AI.Core.Interfaces;
 using TokenFlow.AI.Integration;
-using PromptStream.AI.Models;
 using PromptStream.AI.Services;

-// Create a token provider (TokenFlow.AI adapter)
-var tokenProvider = new TokenFlowProvider("gpt-4o-mini");
+// Initialize the service with token tracking
+var tokenProvider = new BasicTokenFlowProvider();
+var modelClient = new TokenFlowModelClient("gpt-4o-mini");
+var context = new PromptContextManager();

-// Initialize PromptStream service (with context-aware memory)
-var service = new PromptStreamService(tokenProvider);
+var service = new PromptStreamService(tokenProvider, context, modelClient);

-// Define a prompt template
+// Define a shared Core template
 var template = new PromptTemplate
 {
     Id = "summarize-v1",
     Template = "Summarize the following:\n\n{{input}}\n\nBe concise.",
     RequiredVariables = new() { "input" }
 };

-// Build and validate
+// Variables to inject
 var variables = new Dictionary<string, string>
 {
     ["input"] = "Flow.AI enables composable AI workflows for .NET developers."
 };

+// Build and validate
 var (instance, validation) = service.BuildAndValidate(template, variables);

 if (validation.IsValid)

@@ -89,15 +91,15 @@ else

     Console.WriteLine($"❌ Invalid: {string.Join(", ", validation.Errors)}");
 }

-// Add an assistant reply for multi-turn context
-service.AddAssistantReply("Sure! Here's a short summary...");
+// Add a user message to the context
+context.AddMessage(new PromptMessage { Role = "user", Content = instance.RenderedText });
 ```
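Editor's note alongside this hunk: the persistence and budgeting helpers from the feature list could plausibly continue the quick start as below. The method signatures are assumptions inferred from the names `ToJson`, `LoadFromJson`, `EstimateTokenUsage`, and `TrimToTokenBudget`; check the library's API before relying on them.

```csharp
// Sketch only: signatures are assumed, not confirmed by this diff.
// Continues the quick start above (context, service, instance in scope);
// add `using System.IO;` for the file calls.
File.WriteAllText("context.json", context.ToJson());   // persist conversation context

var restored = new PromptContextManager();
restored.LoadFromJson(File.ReadAllText("context.json"));

// Keep a rendered prompt within a token budget (1024 is an arbitrary example).
var estimatedTokens = service.EstimateTokenUsage(instance.RenderedText);
var trimmedPrompt = service.TrimToTokenBudget(instance.RenderedText, 1024);
```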

 ---

 ### 💻 CLI Usage (`PromptStream.AI.CLI`)

-PromptStream.AI now includes a lightweight **command-line interface** for developers to build, validate, and generate prompts directly from the terminal.
+PromptStream.AI includes a full command-line interface for developers to build, validate, analyze, and generate prompts directly from the terminal.

 #### 🧩 Build a prompt
 ```bash
@@ -119,67 +121,27 @@ dotnet run --project src/PromptStream.AI.CLI/PromptStream.AI.CLI.csproj -- gener

 dotnet run --project src/PromptStream.AI.CLI/PromptStream.AI.CLI.csproj -- context --load context.json --summarize
 ```

+#### 📊 Analyze prompt usage
+```bash
+dotnet run --project src/PromptStream.AI.CLI/PromptStream.AI.CLI.csproj -- analyze --template "Summarize {{topic}}" --var topic="AI" --model gpt-4o-mini
+```
+
 **Available commands:**

 | Command | Description |
 |----------|--------------|
 | `build` | Render a prompt with variable substitution |
 | `validate` | Validate prompt completeness and token limits |
 | `generate` | Build, validate, and produce a model-like response |
 | `context` | Load, save, summarize, or clear conversation context |
+| `analyze` | Estimate token usage and cost for prompts |

 ---
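Editor's note: `validate` is the one command in the table above with no example visible in this hunk. By analogy with the `analyze` invocation, a call might look like the following; the flag names are assumed by analogy, not confirmed by this diff.

```bash
dotnet run --project src/PromptStream.AI.CLI/PromptStream.AI.CLI.csproj -- validate --template "Summarize {{topic}}" --var topic="AI"
```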

-### 🧩 Namespace Overview
-
-| Namespace | Description |
-|------------|-------------|
-| `PromptStream.AI.Models` | Core prompt, validation, and message models |
-| `PromptStream.AI.Builders` | Responsible for rendering prompt templates |
-| `PromptStream.AI.Validation` | Handles validation, token limits, and structure |
-| `PromptStream.AI.Services` | High-level orchestrator combining build, validate, and context |
-| `PromptStream.AI.Context` | Manages conversational memory and context tracking |
-| `PromptStream.AI.Extensions` | Utility helpers for interpolation and cleanup |
-| `PromptStream.AI.CLI` | Developer command-line interface for building and testing prompts |
-
----
-
-### 🧪 Testing & Coverage
-
-```bash
-dotnet test --collect:"XPlat Code Coverage"
-```
-
-All core logic is tested using **xUnit** and **coverlet**, with full integration into **Codecov** for continuous coverage tracking.
-
----
-
-### 🗺️ Roadmap
-
-| Feature | Status |
-|----------|---------|
-| Core Models (`PromptTemplate`, `PromptInstance`, `PromptValidationResult`) | ✅ |
-| PromptBuilder & Validator | ✅ |
-| Context Manager & Message Tracking | ✅ |
-| Context Enhancements (merge, summarize, token budgeting, JSON persistence) | ✅ |
-| PromptStreamService orchestration layer | ✅ |
-| Assistant reply integration | ✅ |
-| CLI tool for build, validate, generate, context | ✅ |
-| Token cost estimator (TokenFlow.AI integration) | 🔄 Planned |
-| OpenAI-style function call validation | 🔄 Planned |
-| Visual prompt editor in Flow.AI Studio | 🔄 Planned |
-
----
-
-### 🤝 Contributing
-Contributions are welcome!
-If you’d like to extend functionality or improve coverage, please fork the repo and submit a pull request.
-
-See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
-
----
-
-### 📜 License
-This project is licensed under the **MIT License** — see the [LICENSE](LICENSE) file for details.
+### 🌟 Supporting the Project
+
+If you find **PromptStream.AI** helpful, please consider
+**starring the repository** and ☕ [**supporting my work**](https://buymeacoffee.com/andrewclements84).
+Your support helps keep the Flow.AI ecosystem growing.

 ---