order to compute the TED score. Inference timing results for all experiments were obtained on the same machine, on a single core of an AMD EPYC 7763 CPU @ 2.45 GHz.
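For reference, the TEDS metric introduced with PubTabNet is commonly computed as below; this is an assumed formulation (not quoted from this section), where $T_{\mathrm{pred}}$ and $T_{\mathrm{gt}}$ denote the predicted and ground-truth table trees and $|T|$ is the number of nodes in a tree:

```latex
% Tree-Edit-Distance-based Similarity (TEDS), assumed to follow the
% PubTabNet definition: 1 minus the normalized tree edit distance.
\mathrm{TEDS}(T_{\mathrm{pred}}, T_{\mathrm{gt}}) =
  1 - \frac{\mathrm{EditDist}(T_{\mathrm{pred}}, T_{\mathrm{gt}})}
           {\max\bigl(|T_{\mathrm{pred}}|,\, |T_{\mathrm{gt}}|\bigr)}
```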
5.1 Hyper Parameter Optimization
We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. We also report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table 1. It is evident that with OTSL our model achieves the same TED score and slightly better mAP scores compared to HTML. However, OTSL yields a 2x speed-up in inference runtime over HTML.
Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.
| # enc-layers | # dec-layers | Language | TEDs (simple) | TEDs (complex) | TEDs (all) | mAP (0.75) | Inference time (secs) |
|---|---|---|---|---|---|---|---|
| 6 | 6 | OTSL | 0.965 | 0.934 | 0.955 | 0.88 | 2.73 |
| 6 | 6 | HTML | 0.969 | 0.927 | 0.955 | 0.857 | 5.39 |
| 4 | 4 | OTSL | 0.938 | 0.904 | 0.927 | 0.853 | 1.97 |
| 4 | 4 | HTML | 0.952 | 0.909 | 0.938 | 0.843 | 3.77 |
| 2 | 4 | OTSL | 0.923 | 0.897 | 0.915 | 0.859 | 1.91 |
| 2 | 4 | HTML | 0.945 | 0.901 | 0.931 | 0.834 | 3.81 |
| 4 | 2 | OTSL | 0.952 | 0.92 | 0.942 | 0.857 | 1.22 |
| 4 | 2 | HTML | 0.944 | 0.903 | 0.931 | 0.824 | 2 |
5.2 Quantitative Results
We picked the model parameter configuration that produced the best prediction quality (enc=6, dec=6, heads=8) with PubTabNet alone, then independently trained and evaluated it on three publicly available data sets: PubTabNet (395k samples), FinTabNet (113k samples) and PubTables-1M (about 1M samples). Performance results are presented in Table 2. It is clearly evident that the model trained on OTSL outperforms HTML across the board, keeping high TEDs and mAP scores even on difficult financial tables (FinTabNet) that contain sparse and large tables.
Additionally, the results show that OTSL has an advantage over HTML when applied to a bigger data set like PubTables-1M and achieves significantly improved scores. Finally, OTSL achieves faster inference due to fewer decoding steps, a direct consequence of the reduced sequence representation.
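To make the sequence-length argument concrete, the following minimal Python sketch counts structure tokens for a hypothetical 2x3 table in both representations. The OTSL token names (C, L, U, X, NL) follow the vocabulary described in the paper, while the toy table and the particular HTML tokenization are illustrative assumptions rather than the paper's implementation:

```python
# Minimal illustration (not the paper's code): why OTSL needs fewer
# decoding steps than HTML for the same table structure.
# Assumed OTSL vocabulary: "C" (new cell), "L" (left-looking span),
# "U" (up-looking span), "X" (cross span), "NL" (new line).

# OTSL: one token per grid cell plus one NL per row (2x3 toy table,
# second row contains a 2-column horizontal span).
otsl_tokens = [
    "C", "C", "C", "NL",
    "C", "L", "C", "NL",
]

# HTML: structural tags for the same table (cell content omitted,
# as in structure-only decoding).
html_tokens = [
    "<tr>", "<td>", "</td>", "<td>", "</td>", "<td>", "</td>", "</tr>",
    "<tr>", "<td", " colspan=2>", "</td>", "<td>", "</td>", "</tr>",
]

print(f"OTSL sequence length: {len(otsl_tokens)}")   # 8
print(f"HTML sequence length: {len(html_tokens)}")   # 15

# An autoregressive decoder emits one token per step, so the shorter
# OTSL sequence roughly halves the number of decoding steps, which is
# consistent with the ~2x inference speed-up reported in Table 1.
```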