Sinapsis OCR provides powerful and flexible implementations for extracting text from images using different OCR engines. It enables users to easily configure and run OCR tasks with minimal setup.
This mono repo consists of different packages for OCR:

- `sinapsis-deepseek-ocr`
- `sinapsis-doctr`
- `sinapsis-easyocr`
- `sinapsis-glm-ocr`
## 🐍 Installation

Install using your package manager of choice. We encourage the use of `uv`.

Example with `uv`:

```bash
uv pip install sinapsis-doctr --extra-index-url https://pypi.sinapsis.tech
```

Or with raw `pip`:

```bash
pip install sinapsis-doctr --extra-index-url https://pypi.sinapsis.tech
```

Replace `sinapsis-doctr` with the name of the package you want to install.
> [!IMPORTANT]
> Templates in each package may require extra dependencies. For development, we recommend installing the package with all the optional dependencies:

With `uv`:

```bash
uv pip install sinapsis-doctr[all] --extra-index-url https://pypi.sinapsis.tech
```

Or with raw `pip`:

```bash
pip install sinapsis-doctr[all] --extra-index-url https://pypi.sinapsis.tech
```

> [!TIP]
> You can also install all the packages within this project:

```bash
uv pip install sinapsis-ocr[all] --extra-index-url https://pypi.sinapsis.tech
```

DeepSeek OCR and GLM OCR have different `transformers` version requirements. They cannot be used together in the same environment:
| Package | Transformers Version | Notes |
|---|---|---|
| `sinapsis-deepseek-ocr` | `==4.46.3` (pinned) | DeepSeek models require this exact version |
| `sinapsis-glm-ocr` | `>=4.46.3` (flexible) | GLM-OCR works with `>=5.1.0` |
When installing from PyPI:

```bash
# DeepSeek OCR - installs transformers==4.46.3
uv pip install sinapsis-deepseek-ocr[all] --extra-index-url https://pypi.sinapsis.tech

# GLM OCR - installs latest transformers (5.x)
uv pip install sinapsis-glm-ocr[all] --extra-index-url https://pypi.sinapsis.tech
```

> [!IMPORTANT]
> Installing both `sinapsis-deepseek-ocr` and `sinapsis-glm-ocr` in the same environment may force `transformers==4.46.3`, which will cause GLM OCR to fail. Use separate virtual environments if you need both.
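Because a single environment can silently end up with the wrong `transformers` version, a small runtime check can catch the mismatch before model loading fails. The sketch below is illustrative only (it is not part of any sinapsis package); the version bounds are taken from the table above.

```python
# Hypothetical helper (not part of sinapsis): verify that the installed
# transformers version matches the OCR engine you plan to use.
from importlib import metadata

DEEPSEEK_PIN = "4.46.3"  # sinapsis-deepseek-ocr pins transformers==4.46.3
GLM_MINIMUM = "5.1.0"    # GLM-OCR is reported to work with >=5.1.0

def version_tuple(version: str) -> tuple[int, ...]:
    """Turn a dotted version like '4.46.3' into (4, 46, 3) for comparison."""
    return tuple(int(part) for part in version.split("."))

def transformers_ok(engine: str, installed: str) -> bool:
    """Check an installed transformers version against an engine's needs."""
    if engine == "deepseek":
        return installed == DEEPSEEK_PIN  # exact pin required
    if engine == "glm":
        return version_tuple(installed) >= version_tuple(GLM_MINIMUM)
    raise ValueError(f"unknown engine: {engine!r}")

try:
    installed = metadata.version("transformers")
    print("deepseek ok:", transformers_ok("deepseek", installed))
    print("glm ok:", transformers_ok("glm", installed))
except metadata.PackageNotFoundError:
    print("transformers is not installed in this environment")
```

Since no single version satisfies both checks at once, the two engines genuinely need separate environments, as noted above.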
## Packages summary

- **Sinapsis DeepSeek OCR**
  - Uses the DeepSeek OCR model for high-quality OCR
  - Supports optional grounding for bounding box extraction
  - Multiple inference modes (tiny, small, base, large, gundam)
- **Sinapsis DocTR**
  - Uses the DocTR library for high-quality OCR with modern deep learning models
  - Supports multiple detection and recognition architectures
  - Provides detailed text extraction with bounding boxes and confidence scores
- **Sinapsis EasyOCR**
  - Leverages the EasyOCR library for simple yet effective OCR
  - Supports multiple languages
  - Extracts text with bounding boxes and confidence scores
- **Sinapsis GLM OCR**
  - Uses Zhipu AI's GLM-OCR model for high-quality OCR
  - Supports document parsing (text, formula, table) and structured information extraction via JSON schema
  - Batch inference support for faster processing of multiple images
> [!TIP]
> Use the CLI command `sinapsis info --all-template-names` to list all the Template names available with Sinapsis OCR.

> [!TIP]
> Use the CLI command `sinapsis info --example-template-config TEMPLATE_NAME` to produce an example Agent config for the Template specified in `TEMPLATE_NAME`. For example, for `DocTROCRPrediction`, use `sinapsis info --example-template-config DocTROCRPrediction` to produce an example config.
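In the example agent configs below, templates are chained through their `template_input` field: each template consumes the output of the template it names. As a rough mental model (an illustrative sketch, not sinapsis's actual runner), the execution order can be recovered by following those references:

```python
# Sketch (assumption): derive template execution order from an agent config
# by following each template's `template_input` reference to its parent.
def execution_order(templates: list[dict]) -> list[str]:
    """Return template names ordered so each runs after its input template."""
    by_name = {t["template_name"]: t for t in templates}
    ordered: list[str] = []

    def visit(name: str) -> None:
        if name in ordered:
            return
        parent = by_name[name].get("template_input")
        if parent:
            visit(parent)  # ensure the upstream template comes first
        ordered.append(name)

    for t in templates:
        visit(t["template_name"])
    return ordered

config = [
    {"template_name": "InputTemplate"},
    {"template_name": "FolderImageDatasetCV2", "template_input": "InputTemplate"},
    {"template_name": "EasyOCR", "template_input": "FolderImageDatasetCV2"},
]
print(execution_order(config))
# -> ['InputTemplate', 'FolderImageDatasetCV2', 'EasyOCR']
```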
## DocTR Example

```yaml
agent:
  name: doctr_prediction
  description: agent to run inference with DocTR; reads images, runs recognition, and saves the results
templates:
- template_name: InputTemplate
  class_name: InputTemplate
  attributes: {}
- template_name: FolderImageDatasetCV2
  class_name: FolderImageDatasetCV2
  template_input: InputTemplate
  attributes:
    data_dir: dataset/input
- template_name: DocTROCRPrediction
  class_name: DocTROCRPrediction
  template_input: FolderImageDatasetCV2
  attributes:
    recognized_characters_as_labels: True
- template_name: BBoxDrawer
  class_name: BBoxDrawer
  template_input: DocTROCRPrediction
  attributes:
    draw_confidence: True
    draw_extra_labels: True
- template_name: ImageSaver
  class_name: ImageSaver
  template_input: BBoxDrawer
  attributes:
    save_dir: output
    root_dir: dataset
```

## EasyOCR Example
```yaml
agent:
  name: easyocr_inference
  description: agent to run inference with EasyOCR; reads images, runs recognition, and saves the results
templates:
- template_name: InputTemplate
  class_name: InputTemplate
  attributes: {}
- template_name: FolderImageDatasetCV2
  class_name: FolderImageDatasetCV2
  template_input: InputTemplate
  attributes:
    data_dir: dataset/input
- template_name: EasyOCR
  class_name: EasyOCR
  template_input: FolderImageDatasetCV2
  attributes: {}
- template_name: BBoxDrawer
  class_name: BBoxDrawer
  template_input: EasyOCR
  attributes:
    draw_confidence: True
    draw_extra_labels: True
- template_name: ImageSaver
  class_name: ImageSaver
  template_input: BBoxDrawer
  attributes:
    save_dir: output
    root_dir: dataset
```

## DeepSeek OCR Example
```yaml
agent:
  name: deepseek_ocr_inference
  description: agent to run inference with DeepSeek OCR
templates:
- template_name: InputTemplate
  class_name: InputTemplate
  attributes: {}
- template_name: FolderImageDatasetCV2
  class_name: FolderImageDatasetCV2
  template_input: InputTemplate
  attributes:
    data_dir: dataset/input
- template_name: DeepSeekOCRInference
  class_name: DeepSeekOCRInference
  template_input: FolderImageDatasetCV2
  attributes:
    prompt: "Convert the document to markdown."
    enable_grounding: true
    mode: base
- template_name: BBoxDrawer
  class_name: BBoxDrawer
  template_input: DeepSeekOCRInference
  attributes:
    draw_confidence: True
    draw_extra_labels: True
- template_name: ImageSaver
  class_name: ImageSaver
  template_input: BBoxDrawer
  attributes:
    save_dir: output
    root_dir: dataset
```

## GLM OCR Example
```yaml
agent:
  name: glm_ocr_table_agent
  description: "Agent to read images and perform GLM OCR for table recognition."
templates:
- template_name: InputTemplate
  class_name: InputTemplate
  attributes: {}
- template_name: FolderImageDatasetCV2
  class_name: FolderImageDatasetCV2
  template_input: InputTemplate
  attributes:
    load_on_init: True
    root_dir: "."
    data_dir: "artifacts"
    pattern: "expense.jpg"
- template_name: GLMOCRInference
  class_name: GLMOCRInference
  template_input: FolderImageDatasetCV2
  attributes:
    prompt: "Table Recognition:"
    init_args:
      pretrained_model_name_or_path: zai-org/GLM-OCR
      torch_dtype: auto
      attn_implementation: kernels-community/flash-attn2
      device_map: auto
    generation_config:
      max_new_tokens: 8192
      do_sample: false
```

To run, simply use:

```bash
sinapsis run name_of_the_config.yml
```

## 🌐 Webapp

The webapp provides a simple interface to extract text from images using OCR. Upload your image, and the app will process it and display the detected text with bounding boxes.
> [!IMPORTANT]
> To run the app you first need to clone this repository:

```bash
git clone https://github.com/Sinapsis-ai/sinapsis-ocr.git
cd sinapsis-ocr
```

> [!NOTE]
> If you'd like to enable external app sharing in Gradio, export `GRADIO_SHARE_APP=True`.
> [!TIP]
> The agent configuration can be updated using the `AGENT_CONFIG_PATH` environment variable. By default, the app uses the EasyOCR config, but this can be changed with:

```bash
AGENT_CONFIG_PATH=/app/packages/sinapsis_doctr/src/sinapsis_doctr/configs/doctr_demo.yaml
```
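For illustration, the two environment variables above could be consumed like this. This is a sketch of plausible behavior only; the demo's actual default config path and internals may differ, and `DEFAULT_CONFIG` here is made up.

```python
# Sketch (assumptions): how a Gradio demo could read the two environment
# variables described above. The default config path is hypothetical.
import os

DEFAULT_CONFIG = "configs/easyocr_demo.yaml"  # hypothetical default

def resolve_app_settings(env: dict) -> dict:
    """Choose the agent config path and Gradio sharing flag from env vars."""
    return {
        "config_path": env.get("AGENT_CONFIG_PATH", DEFAULT_CONFIG),
        "share": env.get("GRADIO_SHARE_APP", "False").lower() == "true",
    }

# In the real app, these settings would be passed to the agent loader and
# to gradio's launch(share=...).
print(resolve_app_settings(dict(os.environ)))
```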
## 🐳 Docker

> [!IMPORTANT]
> This Docker image depends on the `sinapsis:base` image. Please refer to the official sinapsis instructions to Build with Docker.
- Build the `sinapsis-ocr` image:

```bash
docker compose -f docker/compose.yaml build
```

- Start the app container:

```bash
docker compose -f docker/compose_app.yaml up
```

- Check the status:

```bash
docker logs -f sinapsis-ocr-app
```

- The logs will display the URL to access the webapp, e.g.:

```
Running on local URL: http://127.0.0.1:7860
```

**NOTE**: The URL can be different; check the output of the logs.

- To stop the app:

```bash
docker compose -f docker/compose_app.yaml down
```

## 💻 UV
To run the webapp using the `uv` package manager:

- Create the virtual environment and sync the dependencies:

```bash
uv sync --frozen
```

- Install the packages:

```bash
uv pip install sinapsis-ocr[all] --extra-index-url https://pypi.sinapsis.tech
```

- Run the webapp:

```bash
uv run webapps/gradio_ocr.py
```

- The terminal will display the URL to access the webapp, e.g.:

```
Running on local URL: http://127.0.0.1:7860
```

**NOTE**: The URL can be different; check the output of the terminal.

- To stop the app, press `Ctrl + C` in the terminal.
Documentation for this and other Sinapsis packages is available on the sinapsis website.

Tutorials for different projects within Sinapsis are available on the sinapsis tutorials page.

This project is licensed under the AGPLv3 license, which encourages open collaboration and sharing. For more details, please refer to the LICENSE file.

For commercial use, please refer to our official Sinapsis website for information on obtaining a commercial license.