From 413ffd18bd5e87696e944223344b02ce473b0115 Mon Sep 17 00:00:00 2001
From: Michele Dolfi
Date: Wed, 19 Feb 2025 07:10:41 +0100
Subject: [PATCH] fix colab install, use granite and improve viz of description

Signed-off-by: Michele Dolfi
---
 docs/examples/pictures_description.ipynb | 61 ++++++++++++++++--------
 1 file changed, 41 insertions(+), 20 deletions(-)

diff --git a/docs/examples/pictures_description.ipynb b/docs/examples/pictures_description.ipynb
index f906a7aa..025eef0c 100644
--- a/docs/examples/pictures_description.ipynb
+++ b/docs/examples/pictures_description.ipynb
@@ -9,7 +9,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 1,
+   "execution_count": null,
    "metadata": {},
    "outputs": [
     {
@@ -21,18 +21,19 @@
     }
    ],
    "source": [
-    "%pip install -q docling ipython"
+    "%pip install -q docling[vlm] ipython"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": 1,
    "metadata": {},
    "outputs": [],
    "source": [
     "from docling.datamodel.base_models import InputFormat\n",
-    "from docling.datamodel.pipeline_options import (  # granite_picture_description,\n",
+    "from docling.datamodel.pipeline_options import (\n",
     "    PdfPipelineOptions,\n",
+    "    granite_picture_description,\n",
     "    smolvlm_picture_description,\n",
     ")\n",
     "from docling.document_converter import DocumentConverter, PdfFormatOption"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
+   "execution_count": 9,
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "data": {
+      "application/vnd.jupyter.widget-view+json": {
+       "model_id": "9d3bb7b3b4fd4640af40289dd7bf50d7",
+       "version_major": 2,
+       "version_minor": 0
+      },
+      "text/plain": [
+       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
-       "<h3>Picture <code>#/pictures/0</code></h3><img src=\"data:image/png;base64,...\" /><br /><h4>Caption</h4>Figure 1: Four examples of complex page layouts across different document categories<br /><h4>Annotations</h4>[PictureDescriptionData(kind='description', text='An advertisement with a blue background, an image of a building, and text about the 175 years of looking forward.', provenance='HuggingFaceTB/SmolVLM-256M-Instruct')]\n",
-       "<hr /><h3>Picture <code>#/pictures/1</code></h3><img src=\"data:image/png;base64,...\" /><br /><h4>Caption</h4>Figure 2: Distribution of DocLayNet pages across document categories.<br /><h4>Annotations</h4>[PictureDescriptionData(kind='description', text='The image is a pie chart that represents the distribution of various categories. The chart is divided into four sections, each representing a different category. The categories are: Financial, Tenders, Laws, and Manuals. \\n\\n### Description of the Pie Chart:\\n1. **Financial Categories:**\\n - **Financial:** 32%\\n - **Tenders:** 6%\\n - **Laws:** 16%\\n - **Manuals:** 21%\\n\\n2. **Tenders:**\\n - **Tenders:** 16%\\n - **Laws:** 16%\\n - **Manuals:** 16%\\n\\n3. **Laws:**\\n - **Laws:** 16%\\n - **Manuals:** 16%\\n\\n4. **Manuals:**\\n - **Manuals:** 21%\\n\\n### Analysis:\\nThe pie chart is a visual representation of the distribution of', provenance='HuggingFaceTB/SmolVLM-256M-Instruct')]\n",
-       "<hr /><h3>Picture <code>#/pictures/2</code></h3><img src=\"data:image/png;base64,...\" /><br /><h4>Caption</h4>Figure 3: Corpus Conversion Service annotation user interface. The PDF page is shown in the background, with overlaid text-cells (in darker shades). The annotation boxes can be drawn by dragging a rectangle over each segment with the respective label from the palette on the right.<br /><h4>Annotations</h4>[PictureDescriptionData(kind='description', text='The image is a table that contains field labels and a list of fields. The table is titled \"Field Labels.\" The table has five columns and five rows. The first column is labeled \"Clusters,\" the second column is labeled \"Clusters,\" the third column is labeled \"Clusters,\" the fourth column is labeled \"Clusters,\" and the fifth column is labeled \"Clusters.\"\\n\\nThe table is structured in a way that it is easy to understand. The first row of the table contains the following fields:\\n\\n- \"Clusters\"\\n- \"Clusters\"\\n- \"Clusters\"\\n- \"Clusters\"\\n- \"Clusters\"\\n- \"Clusters\"\\n\\nThe second row of the table contains the following fields:\\n\\n- \"Clusters\"\\n- \"Clusters\"\\n- \"Clusters\"\\n- \"Clusters\"\\n- \"Clusters\"\\n- \"Clusters\"\\n\\nThe third row of the', provenance='HuggingFaceTB/SmolVLM-256M-Instruct')]\n",
-       "<hr /><h3>Picture <code>#/pictures/3</code></h3><img src=\"data:image/png;base64,...\" /><br /><h4>Caption</h4>Figure 4: Examples of plausible annotation alternatives for the same page. Criteria in our annotation guideline can resolve cases A to C, while the case D remains ambiguous.<br /><h4>Annotations</h4>[PictureDescriptionData(kind='description', text='Figure 1.', provenance='HuggingFaceTB/SmolVLM-256M-Instruct')]\n",
-       "<hr /><h3>Picture <code>#/pictures/4</code></h3><img src=\"data:image/png;base64,...\" /><br /><h4>Caption</h4>Figure 5: Prediction performance (mAP@0.5-0.95) of a Mask R-CNNnetworkwithResNet50backbonetrainedonincreasing fractions of the DocLayNet dataset. The learning curve flattens around the 80% mark, indicating that increasing the size of the DocLayNet dataset with similar data will not yield significantly better predictions.<br /><h4>Annotations</h4>[PictureDescriptionData(kind='description', text='The image is a line graph that shows the percentage of DocLayNet training set as a percentage of the total training set. The x-axis represents the percentage of training set, ranging from 0 to 100. The y-axis represents the percentage of training set, ranging from 0 to 100. The graph shows a continuous trend of increasing training set percentage over time.\\n\\n### Description of the Graph:\\n1. **X-Axis (Percentage of Training Set):**\\n - The x-axis is labeled \"Percentage of DocLayNet training set.\"\\n - The range of the x-axis is from 0 to 100.\\n\\n2. **Y-Axis (Percentage of Training Set):**\\n - The y-axis is labeled \"MAP:0.500-0.95.\"\\n - The range of the y-axis is from 0 to 100.\\n\\n3.', provenance='HuggingFaceTB/SmolVLM-256M-Instruct')]\n"
+       "<h3>Picture <code>#/pictures/0</code></h3><img src=\"data:image/png;base64,...\" /><br /><h4>Caption</h4>Figure 1: Sketch of Docling's pipelines and usage model. Both PDF pipeline and simple pipeline build up a DoclingDocument representation, which can be further enriched. Downstream applications can utilize Docling's API to inspect, export, or chunk the document for various purposes.<br /><h4>Annotations</h4>In this image we can see a poster with some text and images.<br />\n",
+       "<hr /><h3>Picture <code>#/pictures/1</code></h3><img src=\"data:image/png;base64,...\" /><br /><h4>Caption</h4>Figure 2: Dataset categories and sample counts for documents and pages.<br /><h4>Annotations</h4>In this image we can see a pie chart. In the pie chart we can see the categories and the number of documents in each category.<br />\n",
+       "<hr /><h3>Picture <code>#/pictures/2</code></h3><img src=\"data:image/png;base64,...\" /><br /><h4>Caption</h4>Figure 3: Distribution of conversion times for all documents, ordered by number of pages in a document, on all system configurations. Every dot represents one document. Log/log scale is used to even the spacing, since both number of pages and conversion times have long-tail distributions.<br /><h4>Annotations</h4>In this image we can see a graph. On the x-axis we can see the number of pages. On the y-axis we can see the seconds.<br />\n",
+       "<hr /><h3>Picture <code>#/pictures/3</code></h3><img src=\"data:image/png;base64,...\" /><br /><h4>Caption</h4>Figure 4: Contributions of PDF backend and AI models to the conversion time of a page (in seconds per page). Lower is better. Left: Ranges of time contributions for each model to pages it was applied on (i.e., OCR was applied only on pages with bitmaps, table structure was applied only on pages with tables). Right: Average time contribution to a page in the benchmark dataset (factoring in zero-time contribution for OCR and table structure models on pages without bitmaps or tables) .<br /><h4>Annotations</h4>In this image we can see a bar chart and a line chart. In the bar chart we can see the values of Pdf Parse, OCR, Layout, Table Structure, Page Total and Page. In the line chart we can see the values of Pdf Parse, OCR, Layout, Table Structure, Page Total and Page.<br />\n",
+       "<hr /><h3>Picture <code>#/pictures/4</code></h3><img src=\"data:image/png;base64,...\" /><br /><h4>Caption</h4>Figure 5: Conversion time in seconds per page on our dataset in three scenarios, across all assets and system configurations. Lower bars are better. The configuration includes OCR and table structure recognition ( fast table option on Docling and MinerU, hi res in unstructured, as shown in table 1).<br /><h4>Annotations</h4>In this image we can see a bar chart. In the chart we can see the CPU, Max, GPU, and sec/page.<br />\n"
       ],
       "text/plain": [
        "<IPython.core.display.HTML object>"
       ]
      },
-     "execution_count": 4,
+     "execution_count": 10,
      "metadata": {},
      "output_type": "execute_result"
     }
    ],
    "source": [
+    "from docling_core.types.doc.document import PictureDescriptionData\n",
     "from IPython import display\n",
     "\n",
     "html_buffer = []\n",
     "# display the first 5 pictures and their captions and annotations:\n",
     "for pic in doc.pictures[:5]:\n",
-    "    html_buffer.append(\n",
+    "    html_item = (\n",
     "        f\"<h3>Picture <code>{pic.self_ref}</code></h3>\"\n",
     "        f'<img src=\"{pic.image.uri!s}\" /><br />'\n",
     "        f\"<h4>Caption</h4>{pic.caption_text(doc=doc)}<br />\"\n",
-    "        f\"<h4>Annotations</h4>{pic.annotations}\\n\"\n",
     "    )\n",
+    "    for annotation in pic.annotations:\n",
+    "        if not isinstance(annotation, PictureDescriptionData):\n",
+    "            continue\n",
+    "        html_item += f\"<h4>Annotations</h4>{annotation.text}<br />\\n\"\n",
+    "    html_buffer.append(html_item)\n",
     "display.HTML(\"<hr />\".join(html_buffer))"
    ]
   },
@@ -114,7 +135,7 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": ".venv",
+   "display_name": "docling-aMWN2FRM-py3.12",
    "language": "python",
    "name": "python3"
   },
@@ -128,7 +149,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.12.8"
+   "version": "3.12.7"
   }
  },
 "nbformat": 4,