
Docling

Docling simplifies document processing, parsing diverse formats — including advanced PDF understanding — and providing seamless integrations with the gen AI ecosystem.

Features

  • 🗂️ Parsing of multiple document formats incl. PDF, DOCX, PPTX, XLSX, HTML, WAV, MP3, VTT, images (PNG, TIFF, JPEG, ...), and more
  • 📑 Advanced PDF understanding incl. page layout, reading order, table structure, code, formulas, image classification, and more
  • 🧬 Unified, expressive DoclingDocument representation format
  • ↪️ Various export formats and options, including Markdown, HTML, DocTags and lossless JSON
  • 🔒 Local execution capabilities for sensitive data and air-gapped environments
  • 🤖 Plug-and-play integrations incl. LangChain, LlamaIndex, Crew AI & Haystack for agentic AI
  • 🔍 Extensive OCR support for scanned PDFs and images (see the configuration sketch after this list)
  • 👓 Support for several Visual Language Models (GraniteDocling)
  • 🎙️ Audio support with Automatic Speech Recognition (ASR) models
  • 🔌 Connect to any agent using the MCP server
  • 💻 Simple and convenient CLI
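
For scanned inputs, OCR and table-structure recovery can be switched on through the PDF pipeline options. A minimal sketch, assuming the PdfPipelineOptions and PdfFormatOption names used by recent Docling releases (check the docs for the exact API of your version):

from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import PdfPipelineOptions
from docling.document_converter import DocumentConverter, PdfFormatOption

# Enable OCR and table-structure recovery for scanned PDFs.
pipeline_options = PdfPipelineOptions()
pipeline_options.do_ocr = True
pipeline_options.do_table_structure = True

converter = DocumentConverter(
    format_options={InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)}
)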

What's new

  • 📤 Structured information extraction [🧪 beta]
  • 📑 New layout model (Heron) by default, for faster PDF parsing
  • 🔌 MCP server for agentic applications
  • 💬 Parsing of Web Video Text Tracks (WebVTT) files

Coming soon

  • 📝 Metadata extraction, including title, authors, references & language
  • 📝 Chart understanding (Barchart, Piechart, LinePlot, etc.)
  • 📝 Complex chemistry understanding (Molecular structures)

Installation

To use Docling, simply install docling from your package manager, e.g. pip:

pip install docling

Docling runs on macOS, Linux, and Windows, on both x86_64 and arm64 architectures.
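
To confirm the package is importable after installation, a quick check using only the Python standard library (not a Docling-specific command):

from importlib.metadata import version

# Prints the installed docling version; raises PackageNotFoundError if the install failed.
print(version("docling"))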

More detailed installation instructions are available in the docs.

Getting started

To convert individual documents in Python, use convert(), for example:

from docling.document_converter import DocumentConverter

source = "https://arxiv.org/pdf/2408.09869"  # document as a local path or URL
converter = DocumentConverter()
result = converter.convert(source)
print(result.document.export_to_markdown())  # output: "## Docling Technical Report[...]"
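
Beyond Markdown, the same result can be serialized to the other targets listed under Features. A minimal sketch, assuming the export_to_html / export_to_dict methods of DoclingDocument (names hedged; consult the docs for the exact API of your version):

import json

doc = result.document
html = doc.export_to_html()   # HTML export
data = doc.export_to_dict()   # lossless, JSON-serializable representation
with open("output.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False)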

More advanced usage options are available in the docs.

CLI

Docling has a built-in CLI to run conversions.

docling https://arxiv.org/pdf/2206.01062

You can also use 🥚 GraniteDocling and other VLMs via the Docling CLI:

docling --pipeline vlm --vlm-model granite_docling https://arxiv.org/pdf/2206.01062

This will use MLX acceleration on supported Apple Silicon hardware.
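
The VLM pipeline can also be selected programmatically. A minimal sketch, assuming the VlmPipeline class and VlmPipelineOptions exposed by recent Docling releases (import paths hedged; see the docs for the exact names in your version):

from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import VlmPipelineOptions
from docling.document_converter import DocumentConverter, PdfFormatOption
from docling.pipeline.vlm_pipeline import VlmPipeline

# Route PDF conversion through the VLM pipeline instead of the default PDF pipeline.
converter = DocumentConverter(
    format_options={
        InputFormat.PDF: PdfFormatOption(
            pipeline_cls=VlmPipeline,
            pipeline_options=VlmPipelineOptions(),
        )
    }
)
result = converter.convert("https://arxiv.org/pdf/2206.01062")
print(result.document.export_to_markdown())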

Read more in the docs.

Documentation

Check out Docling's documentation for details on installation, usage, concepts, recipes, extensions, and more.

Examples

Go hands-on with our examples, demonstrating how to address different application use cases with Docling.

Integrations

To further accelerate your AI application development, check out Docling's native integrations with popular frameworks and tools.

Get help and support

Please feel free to connect with us using the discussion section.

Technical report

For more details on Docling's inner workings, check out the Docling Technical Report.

Contributing

Please read Contributing to Docling for details.

References

If you use Docling in your projects, please consider citing the following:

@techreport{Docling,
  author = {Deep Search Team},
  month = {8},
  title = {Docling Technical Report},
  url = {https://arxiv.org/abs/2408.09869},
  eprint = {2408.09869},
  doi = {10.48550/arXiv.2408.09869},
  version = {1.0.0},
  year = {2024}
}

License

The Docling codebase is under MIT license. For individual model usage, please refer to the model licenses found in the original packages.

LF AI & Data

Docling is hosted as a project in the LF AI & Data Foundation.

IBM ❤️ Open Source AI

The project was started by the AI for knowledge team at IBM Research Zurich.
