README.md (85 additions, 80 deletions)
@@ -1,52 +1,49 @@
# CodePapi AI ⚡

-> **Transform your code instantly with local AI power.** No cloud, no data leaks—just blazing-fast code translation, migration, and debugging on your machine.
+> **A local AI-powered code companion.** Keep your code on your machine while exploring code translation, reviews, and debugging with LLMs. A learning project exploring local AI integration in developer workflows.

-CodePapi AI is a professional, privacy-focused developer tool that brings the power of Large Language Models (LLMs) directly to your local development workflow. Whether you're translating code between languages, migrating frameworks, reviewing for security issues, or debugging complex logic—all your code stays on your machine.
+CodePapi AI is an experimental, open-source project that brings Large Language Models (LLMs) to your local development environment. Translate code between languages, get AI-powered code reviews, and explore debugging workflows—all without sending your code to external services.
+
+**Note:** This is a hobby/learning project. While functional, it's not optimized for production use. Performance depends heavily on your hardware and model choice. Expect AI responses to take 10-60+ seconds depending on code size and hardware.
### Why CodePapi AI?

-✅ **100% Private** — Your code never leaves your machine
-✅ **Lightning Fast** — Runs locally on your hardware
-✅ **Free** — MIT licensed, fully open-source
-✅ **Extensible** — Add languages, frameworks, and custom prompts easily
+✅ **Private** — Your code stays on your machine (no cloud uploads)
+✅ **Open-Source** — Inspect the full codebase
+✅ **Free** — MIT licensed, no subscriptions
+✅ **Learning Tool** — Explore local LLM integration in practice
---

-## ✨ Core Features
+## ✨ Features

-#### 🔄 Smart Code Translation
-Effortlessly convert code between 10+ languages including JavaScript, TypeScript, Python, Go, Rust, Java, and more. The system is flexible enough to support any language you add.
+#### 🔄 Code Translation
+Convert code between supported languages: JavaScript, TypeScript, Python, Go, Rust, Java, C++, PHP, Ruby, Swift, and C#. Quality depends on model accuracy and code complexity.
-#### 🚀 Framework Migration Engine
-Pre-built, expert-level migration presets for common transformations:
-- React Class Components → React Functional Components (with Hooks)
-- JavaScript → TypeScript
-- CSS → Tailwind CSS
-- React → Vue.js
+#### 🔍 Code Review
+Get AI-generated feedback on:
+- Performance optimization ideas
+- Potential security issues
+- Code quality observations
+- Best practice suggestions

-#### 🔍 Deep Code Review
-Get AI-driven analysis of your code covering:
-- Performance optimization opportunities
-- Security vulnerabilities
-- Best practice violations
-- Code quality improvements
+**Note:** AI suggestions should be reviewed carefully and aren't a substitute for human code review.
-#### 🐞 Interactive Bug Fixer
-Fix bugs with confidence. The **Diff View** shows exactly what the AI changed in a side-by-side comparison, so you understand every modification before accepting.
+#### 🐞 Bug Detection
+The **Diff View** shows AI-suggested fixes side-by-side with original code. Always test fixes before committing.

-#### 🔒 Air-Gapped Privacy
-Powered by **Qwen2.5-Coder** (1.5GB model) running locally through **Ollama**. Your code never touches the internet.
+#### 🔒 Local Privacy
+Code processing happens locally using Qwen2.5-Coder via Ollama—nothing leaves your machine.

---
@@ -71,18 +68,20 @@ docker-compose up -d
### First Launch Setup

-> ⚠️ **Important:** The first startup requires downloading AI models. Ensure you have a stable internet connection.
+> ⚠️ **First Run:** The first startup downloads the AI model (~1.5GB). Ensure stable internet and available disk space.

-After starting the containers, pull the required models:
+After starting the containers, pull the required model:

```bash
-# Pull Qwen2.5 Coder (primary model, ~1.5GB)
docker exec ollama ollama pull qwen2.5-coder:1.5b
-
-# Pull Phi-3 Mini (optional, ~2.3GB alternative model)
-docker exec ollama ollama pull phi3:mini
```

+**Initial Request Times:** Expect 10-90 seconds for initial responses depending on:
+- Your CPU/GPU specs
+- Code size
+- Available system memory
+- Background processes

Once the models are downloaded and containers are running:

- **🖥️ Frontend:** Open http://localhost in your browser
- **🔌 API:** Backend runs at http://localhost:3000
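To sanity-check the setup after the pull completes, you can ask the local Ollama instance which models it has installed. The sketch below is not part of the repository; it assumes docker-compose publishes Ollama's default port 11434 on the host (adjust the URL if your compose file maps it differently) and uses Ollama's standard `/api/tags` endpoint.

```ts
// verify-model.ts (illustrative sketch, not part of the repo)
// Assumes the Ollama container's default port 11434 is reachable on localhost.
const OLLAMA_URL = "http://localhost:11434"; // adjust if your compose mapping differs

async function listInstalledModels(): Promise<string[]> {
  // GET /api/tags lists the models Ollama has pulled locally.
  const res = await fetch(`${OLLAMA_URL}/api/tags`);
  if (!res.ok) throw new Error(`Ollama not reachable (HTTP ${res.status})`);
  const body = (await res.json()) as { models: { name: string }[] };
  return body.models.map((m) => m.name);
}

listInstalledModels()
  .then((models) => {
    console.log("Installed models:", models.join(", ") || "(none)");
    if (!models.some((name) => name.startsWith("qwen2.5-coder"))) {
      console.log("qwen2.5-coder not found; run the pull command above.");
    }
  })
  .catch((err) => console.error((err as Error).message));
```

Run it with any TypeScript runner (for example `npx tsx verify-model.ts` on Node 18+); `docker exec ollama ollama list` gives the same information from the CLI.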
@@ -92,14 +91,19 @@ Once the models are downloaded and containers are running:
## 💻 How to Use

-1. **Paste or type your code** into the left editor
-2. **Select a source language/framework** from the dropdown
-3. **Choose your action:**
+1. Paste or type code into the left editor
+2. Select a source language
+3. Choose an action:
- **Translate:** Pick a target language
-- **Review:** Get AI analysis (no target needed)
-- **Check Bugs:** See a diff view of fixes
-4. **Click "Run AI"** and watch the magic happen
-5. **Copy the result** or download your transformed code
+- **Review:** Get feedback on code
+- **Check Bugs:** See suggested fixes
+4. Click "Run AI" and wait for results
+5. Copy or review the output
+
+**Tips:**
+- Smaller code snippets get faster responses
+- Review AI suggestions before using them in production
+- Results vary based on code complexity and quality

---
@@ -120,13 +124,13 @@ Once the models are downloaded and containers are running:
### Adding New Languages

-Want to support more programming languages or migration presets? It's easy!
+Want to support more programming languages? It's easy!

See the **[Frontend Documentation](./frontend/README.md)** for detailed instructions on adding languages to `frontend/src/constants/languages.ts`.
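As a rough illustration of what a language entry can look like, here is a hypothetical sketch. The actual schema lives in the frontend docs, so the field names below (`id`, `label`, `monacoId`) and the `LANGUAGES` export are placeholders rather than the project's real API.

```ts
// frontend/src/constants/languages.ts (hypothetical shape, for illustration only)
// Check the frontend README for the real field names before copying this.
export interface LanguageOption {
  id: string;       // identifier sent to the backend
  label: string;    // name shown in the dropdown
  monacoId: string; // syntax-highlighting id for the editor
}

export const LANGUAGES: LanguageOption[] = [
  { id: "typescript", label: "TypeScript", monacoId: "typescript" },
  { id: "python", label: "Python", monacoId: "python" },
  // Adding a language would then be a one-line change:
  { id: "kotlin", label: "Kotlin", monacoId: "kotlin" },
];
```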
### Code Quality

-We use **Biome** for lightning-fast linting and formatting. Before submitting a PR, run:
+We use **Biome** for linting and formatting. Before submitting a PR, run:

```bash
npm run biome:lint # Check for issues
@@ -140,7 +144,7 @@ codepapi-ai/
├── backend/            # NestJS API server
│   └── src/converter/  # Code conversion logic
├── frontend/           # React UI application
-│   └── src/constants/  # Language & migration definitions
+│   └── src/constants/  # Language definitions
├── docker-compose.yml  # Full stack orchestration
└── README.md           # This file
```
@@ -162,7 +166,13 @@ Violations will not be tolerated and may result in removal from the project.
## 🤝 Contributing Guidelines

-We welcome contributions from the community! Whether it's bug fixes, features, documentation, or translations, your help makes CodePapi AI better.
+We welcome contributions! This is a learning/hobby project, so contributions can range from bug fixes and feature ideas to documentation and testing.
+
+### Important Notes
+
+- **This is experimental code.** Don't expect production-grade stability
+- **Limitations are intentional** — helps us learn and improve
+- **AI suggestions need review** — this tool augments, not replaces, human developers
### Getting Started
@@ -259,7 +269,7 @@ Before submitting a PR, ensure:
While formal unit tests are encouraged:

- **Manual testing** is acceptable for UI changes
- **Test in Docker** to ensure consistency across environments
-- **Test with the Qwen2.5-Coder model** (not a different LLM)
+- **Test with the Qwen2.5-Coder model**
- **Document test steps** in your PR

### Review Process
@@ -276,10 +286,10 @@ While formal unit tests are encouraged:
-- ✨ **Features:** Language support, migration presets, new modes
+- ✨ **Features:** Language support, new modes
- 🎨 **UI/UX:** Design improvements, accessibility

-See the [Issues page](https://github.com/yourusername/codepapi-ai/issues) for tasks labeled `good first issue` and `help wanted`.
+See the [Issues page](https://github.com/codepapi/codepapi-ai/issues) for tasks labeled `good first issue` and `help wanted`.

### Recognition
@@ -291,42 +301,37 @@ See the [Issues page](https://github.com/yourusername/codepapi-ai/issues) for ta
## 🤖 Responsible AI Ethics

-As an AI-powered tool, CodePapi AI follows these ethical principles:
+As an experimental AI project, CodePapi AI follows responsible practices:

-### Privacy First
-- **No telemetry:** We don't track usage or collect analytics
-- **Local processing:** All code processing happens on your machine
-- **No training data:** Your code is never used to train or improve models
-- **GDPR compliant:** Full control over your data
+### Privacy
+- **No telemetry:** We don't collect usage analytics
+- **Local processing:** All code stays on your machine
+- **No training:** Your code never trains models

### Transparency
-- **Open source:** Full code transparency — inspect everything
-- **Model disclosure:** We explicitly state which LLM is used (Phi-3 Mini)
-- **Limitations:** We're honest about what the AI can and cannot do
-- **Attribution:** AI improvements are documented and credited
-
-### Responsible Use Guidelines
-
-**Do:**
-- ✅ Use CodePapi AI for legitimate code improvement
-- ✅ Review AI suggestions before implementing
-- ✅ Report security issues responsibly
-- ✅ Contribute improvements back to the community
-
-**Don't:**
-- ❌ Use for malicious code generation
-- ❌ Bypass security reviews using AI
-- ❌ Rely solely on AI without code review
-- ❌ Claim AI-generated code as entirely your own without attribution
-
-### Security Considerations
-- Always review code generated by AI before committing
-- Run security scanners on translated/migrated code
-- Test thoroughly in safe environments first
-- Report any security concerns to [security@example.com]
+- **Open source:** Full code inspection available
+- **Clear limitations:** We're honest about what works and what doesn't
+- **No magic:** It's an AI assistant, not a replacement for human judgment
+
+### Use Responsibly
+- Review all AI suggestions before implementing
+- Don't rely solely on AI output for security-critical code
+- Test thoroughly in your environment
+- Report security issues privately
---
+## 🚨 Limitations & Known Issues
+
+This is an experimental project with real limitations:
+
+- **Speed:** Not fast. Responses take 10-90+ seconds per request
+- **Quality:** AI output varies. Some translations work well, others need manual fixes
+- **Hardware-dependent:** Performance heavily depends on your CPU/GPU and available RAM
+- **Model limitations:** Qwen2.5-Coder is a smaller model; results aren't comparable to larger proprietary models
+- **Error handling:** Limited error checking and validation
+- **Production use:** Not suitable for mission-critical workflows without thorough testing
## 🚨 Reporting Issues & Security
### Bug Reports
@@ -371,16 +376,16 @@ Distributed under the **MIT License**. See [LICENSE](./LICENSE) for details.
## 💬 Support

-- **Issues:** Report bugs on [GitHub Issues](https://github.com/yourusername/codepapi-ai/issues)
-- **Discussions:** Ask questions in [GitHub Discussions](https://github.com/yourusername/codepapi-ai/discussions)
+- **Issues:** Report bugs on [GitHub Issues](https://github.com/codepapi/codepapi-ai/issues)
+- **Discussions:** Ask questions in [GitHub Discussions](https://github.com/codepapi/codepapi-ai/discussions)
- **Docs:** Full documentation in [README files](./frontend/README.md)

---

<div align="center">

-**Made with ❤️ for developers who value privacy and speed**
+**A learning project exploring local LLMs in development workflows**

-[⭐ Star us on GitHub](https://github.com/yourusername/codepapi-ai) • [🐛 Report a Bug](https://github.com/yourusername/codepapi-ai/issues) • [💡 Suggest a Feature](https://github.com/yourusername/codepapi-ai/discussions)
+[⭐ Star us on GitHub](https://github.com/codepapi/codepapi-ai) • [🐛 Report a Bug](https://github.com/codepapi/codepapi-ai/issues) • [💡 Suggest a Feature](https://github.com/codepapi/codepapi-ai/discussions)