docs: Describe examples (#2262)

* Update .py examples with clearer guidance,
update out-of-date imports and calls

Signed-off-by: Mingxuan Zhao <43148277+mingxzhao@users.noreply.github.com>

* Fix minimal.py string error, fix ruff format error

Signed-off-by: Mingxuan Zhao <43148277+mingxzhao@users.noreply.github.com>

* fix more CI issues

Signed-off-by: Mingxuan Zhao <43148277+mingxzhao@users.noreply.github.com>

---------

Signed-off-by: Mingxuan Zhao <43148277+mingxzhao@users.noreply.github.com>
This commit is contained in:
Mingxuan Zhao
2025-09-16 10:00:38 -04:00
committed by GitHub
parent 0e95171dd6
commit ff351fd40c
21 changed files with 608 additions and 85 deletions

@@ -1,3 +1,27 @@
# %% [markdown]
# Describe pictures using a remote VLM API (vLLM, LM Studio, or watsonx.ai).
#
# What this example does
# - Configures `PictureDescriptionApiOptions` for local or cloud providers.
# - Converts a PDF, then prints each picture's caption and annotations.
#
# Prerequisites
# - Install Docling and `python-dotenv` if loading env vars from a `.env` file.
# - For local providers: ensure vLLM or LM Studio is running.
# - For watsonx.ai: set `WX_API_KEY` and `WX_PROJECT_ID` in the environment.
#
# How to run
# - From the repo root: `python docs/examples/pictures_description_api.py`.
# - Uncomment exactly one provider config; `enable_remote_services=True` is already set.
#
# Notes
# - vLLM default endpoint: `http://localhost:8000/v1/chat/completions`.
# - LM Studio default endpoint: `http://localhost:1234/v1/chat/completions`.
# - Calling remote APIs sends page images/text to the provider; review privacy and
# costs. For local testing, LM Studio runs everything on your machine.
# %%
import logging
import os
from pathlib import Path
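The provider setup described in the header above can be sketched roughly as follows. This is a minimal sketch assuming the vLLM default endpoint from the notes; the model name and prompt are illustrative placeholders, not values mandated by the example:

```python
from docling.datamodel.base_models import InputFormat
from docling.datamodel.pipeline_options import (
    PdfPipelineOptions,
    PictureDescriptionApiOptions,
)
from docling.document_converter import DocumentConverter, PdfFormatOption

# Enable remote services so the pipeline is allowed to call an external API.
pipeline_options = PdfPipelineOptions(enable_remote_services=True)
pipeline_options.do_picture_description = True
pipeline_options.picture_description_options = PictureDescriptionApiOptions(
    url="http://localhost:8000/v1/chat/completions",  # vLLM default endpoint
    params={"model": "my-vision-model"},  # illustrative model name
    prompt="Describe the image in three sentences. Be concise and accurate.",
    timeout=90,
)

# Wire the options into a converter for PDF inputs.
converter = DocumentConverter(
    format_options={
        InputFormat.PDF: PdfFormatOption(pipeline_options=pipeline_options)
    }
)
```

For LM Studio or watsonx.ai, only the `url`, `params`, and any required auth headers would change; the rest of the pipeline wiring stays the same.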