After reviewing the code you provided for the AsciiDoc, HTML, and MS Word backends, I have found a key inconsistency in how heading levels are calculated in `msword_backend.py` compared to the other two. This inconsistency is the likely cause of the limited heading depth you see when converting Word documents.

### Analysis of the Inconsistency

1. **`asciidoc_backend.py`**: In the `_parse_section_header` method, the heading level is calculated as `header_level - 1`, where `header_level` is the number of `=` characters. For example, `===` (3 characters) correctly becomes `level=2`.
2. **`html_backend.py`**: In the `handle_header` method, the level for tags like `<h2>`, `<h3>`, etc., is calculated as `hlevel - 1`. For example, an `<h4>` tag results in `level=3`. (Note: `<h1>` is correctly treated as a document title.)
3. **`msword_backend.py`**: In the `_add_header` method, the level is determined by the number in the style name (e.g., "Heading 4" gives `curr_level = 4`). However, the final level passed to the document model is set by `add_level = curr_level`, so a "Heading 4" style results in `level=4`.

This is the inconsistency: for a semantically equivalent heading (`<h4>`, `====`, or "Heading 4"), the MS Word backend produces a level one greater than the other backends. Downstream processing or rendering that does not expect a heading with `level=4` or higher from this parser can then drop or mis-render deeper headings, which makes the depth appear "cut off."

### The Fix

To resolve this and make the MS Word backend consistent with the others, adjust the level calculation. The fix is a one-line change in `docling/backend/msword_backend.py`: in the `_add_header` method, change the line that assigns `add_level`.
**File:** `docling/backend/msword_backend.py`
**Function:** `_add_header`

**Original Code (~line 1030):**

```python
current_level = curr_level
parent_level = curr_level - 1
add_level = curr_level
```

**Corrected Code:**

```python
current_level = curr_level
parent_level = curr_level - 1
add_level = curr_level - 1
```

By subtracting 1 from `curr_level`, you align the MS Word backend's behavior with the HTML and AsciiDoc backends. A "Heading 2" will now correctly be parsed as `level=1`, "Heading 3" as `level=2`, and so on, which should solve the depth problem you observed.

Validated by Gemini 2.5 Pro, o3, o3-pro, Claude 4.

Signed-off-by: Artus Krohn-Grimberghe <artuskg@users.noreply.github.com>
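The mismatch can be illustrated with a small self-contained sketch. The function names below are hypothetical stand-ins for the backend methods described above, not the real backend APIs; only the arithmetic mirrors the analysis:

```python
# Hypothetical sketch of how each backend maps a semantically equivalent
# heading to a document-model level. Function names are illustrative.

def asciidoc_level(marker: str) -> int:
    # "===" (3 chars) -> level 2, mirroring _parse_section_header
    return len(marker) - 1

def html_level(tag: str) -> int:
    # "<h4>" -> level 3, mirroring handle_header (<h1> is the title)
    return int(tag.strip("<h>")) - 1

def msword_level(style_name: str, fixed: bool = True) -> int:
    # "Heading 4" -> curr_level = 4; the proposed fix subtracts 1
    curr_level = int(style_name.rsplit(" ", 1)[1])
    return curr_level - 1 if fixed else curr_level

# Before the fix, "Heading 4" comes out one level deeper than its peers:
assert msword_level("Heading 4", fixed=False) == 4
# After the fix, all three backends agree on the same document level:
assert asciidoc_level("====") == html_level("<h4>") == msword_level("Heading 4") == 3
```

Running the sketch with `fixed=False` reproduces the off-by-one the analysis describes; with `fixed=True` all three mappings coincide.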
Docling
Docling simplifies document processing, parsing diverse formats — including advanced PDF understanding — and providing seamless integrations with the gen AI ecosystem.
Features
- 🗂️ Parsing of multiple document formats incl. PDF, DOCX, XLSX, HTML, images, and more
- 📑 Advanced PDF understanding incl. page layout, reading order, table structure, code, formulas, image classification, and more
- 🧬 Unified, expressive DoclingDocument representation format
- ↪️ Various export formats and options, including Markdown, HTML, and lossless JSON
- 🔒 Local execution capabilities for sensitive data and air-gapped environments
- 🤖 Plug-and-play integrations incl. LangChain, LlamaIndex, Crew AI & Haystack for agentic AI
- 🔍 Extensive OCR support for scanned PDFs and images
- 🥚 Support for several Visual Language Models (SmolDocling)
- 💻 Simple and convenient CLI
Coming soon
- 📝 Metadata extraction, including title, authors, references & language
- 📝 Chart understanding (bar charts, pie charts, line plots, etc.)
- 📝 Complex chemistry understanding (Molecular structures)
Installation
To use Docling, simply install `docling` from your package manager, e.g. pip:

```shell
pip install docling
```
Works on macOS, Linux, and Windows, on both x86_64 and arm64 architectures.
More detailed installation instructions are available in the docs.
Getting started
To convert individual documents with Python, use `convert()`, for example:
```python
from docling.document_converter import DocumentConverter

source = "https://arxiv.org/pdf/2408.09869"  # document per local path or URL
converter = DocumentConverter()
result = converter.convert(source)
print(result.document.export_to_markdown())  # output: "## Docling Technical Report[...]"
```
More advanced usage options are available in the docs.
CLI
Docling has a built-in CLI to run conversions.
```shell
docling https://arxiv.org/pdf/2206.01062
```
You can also use 🥚SmolDocling and other VLMs via Docling CLI:
```shell
docling --pipeline vlm --vlm-model smoldocling https://arxiv.org/pdf/2206.01062
```
This will use MLX acceleration on supported Apple Silicon hardware.
Read more here
Documentation
Check out Docling's documentation for details on installation, usage, concepts, recipes, extensions, and more.
Examples
Go hands-on with our examples, demonstrating how to address different application use cases with Docling.
Integrations
To further accelerate your AI application development, check out Docling's native integrations with popular frameworks and tools.
Get help and support
Please feel free to connect with us using the discussion section.
Technical report
For more details on Docling's inner workings, check out the Docling Technical Report.
Contributing
Please read Contributing to Docling for details.
References
If you use Docling in your projects, please consider citing the following:
```bibtex
@techreport{Docling,
  author = {Deep Search Team},
  month = {8},
  title = {Docling Technical Report},
  url = {https://arxiv.org/abs/2408.09869},
  eprint = {2408.09869},
  doi = {10.48550/arXiv.2408.09869},
  version = {1.0.0},
  year = {2024}
}
```
License
The Docling codebase is under MIT license. For individual model usage, please refer to the model licenses found in the original packages.
LF AI & Data
Docling is hosted as a project in the LF AI & Data Foundation.
IBM ❤️ Open Source AI
The project was started by the AI for knowledge team at IBM Research Zurich.