sync with docling main

João 2025-01-09 12:25:16 -03:00
commit 82441ed6d2
129 changed files with 89677 additions and 3053 deletions

View File

@ -1,3 +1,43 @@
## [v2.15.0](https://github.com/DS4SD/docling/releases/tag/v2.15.0) - 2025-01-08
### Feature
* Added http header support for document converter and cli ([#642](https://github.com/DS4SD/docling/issues/642)) ([`0ee849e`](https://github.com/DS4SD/docling/commit/0ee849e8bc8cf24d1c5597af3fe20a7fa19a29e0))
### Fix
* Correct scaling of debug visualizations, tune OCR ([#700](https://github.com/DS4SD/docling/issues/700)) ([`5cb4cf6`](https://github.com/DS4SD/docling/commit/5cb4cf6f19f91e6c87141e93400c4b54b93aa5d7))
* Let BeautifulSoup detect the HTML encoding ([#695](https://github.com/DS4SD/docling/issues/695)) ([`42856fd`](https://github.com/DS4SD/docling/commit/42856fdf79559188ec4617bc5d3a007286f114d2))
* **mspowerpoint:** Handle invalid images in PowerPoint slides ([#650](https://github.com/DS4SD/docling/issues/650)) ([`d49650c`](https://github.com/DS4SD/docling/commit/d49650c54ffa60bc6d6106970e104071689bc7b0))
### Documentation
* Specify docstring types ([#702](https://github.com/DS4SD/docling/issues/702)) ([`ead396a`](https://github.com/DS4SD/docling/commit/ead396ab407f6bbd43176abd6ed2bed7ed8c7c43))
* Add link to rag with granite ([#698](https://github.com/DS4SD/docling/issues/698)) ([`6701f34`](https://github.com/DS4SD/docling/commit/6701f34c855992c52918b210c65a2edb1c827c01))
* Add integrations, revamp docs ([#693](https://github.com/DS4SD/docling/issues/693)) ([`2d24fae`](https://github.com/DS4SD/docling/commit/2d24faecd96bfa656b2b8c80f25cdf251a50526a))
* Add OpenContracts as an integration ([#679](https://github.com/DS4SD/docling/issues/679)) ([`569038d`](https://github.com/DS4SD/docling/commit/569038df4205703f87517ea58da7902d143e7699))
* Add Weaviate RAG recipe notebook ([#451](https://github.com/DS4SD/docling/issues/451)) ([`2b591f9`](https://github.com/DS4SD/docling/commit/2b591f98726ed0d883236dd0550201b95203eebb))
* Document Haystack & Vectara support ([#628](https://github.com/DS4SD/docling/issues/628)) ([`fc645ea`](https://github.com/DS4SD/docling/commit/fc645ea531ddc67959640b428007851d641c923e))
## [v2.14.0](https://github.com/DS4SD/docling/releases/tag/v2.14.0) - 2024-12-18
### Feature
* Create a backend to transform PubMed XML files to DoclingDocument ([#557](https://github.com/DS4SD/docling/issues/557)) ([`fd03480`](https://github.com/DS4SD/docling/commit/fd034802b65a0e567531b8ecc9a283aaf030e050))
## [v2.13.0](https://github.com/DS4SD/docling/releases/tag/v2.13.0) - 2024-12-17
### Feature
* Updated Layout processing with forms and key-value areas ([#530](https://github.com/DS4SD/docling/issues/530)) ([`60dc852`](https://github.com/DS4SD/docling/commit/60dc852f16dc1adbb5e9284c81a146043a301ec1))
* Create a backend to parse USPTO patents into DoclingDocument ([#606](https://github.com/DS4SD/docling/issues/606)) ([`4e08750`](https://github.com/DS4SD/docling/commit/4e087504cc4b04210574e69f616badcddfa1f8e5))
* Add Easyocr parameter recog_network ([#613](https://github.com/DS4SD/docling/issues/613)) ([`3b53bd3`](https://github.com/DS4SD/docling/commit/3b53bd38c8efcc5ba54421fbfa90d047f1a61f82))
### Documentation
* Add Haystack RAG example ([#615](https://github.com/DS4SD/docling/issues/615)) ([`3e599c7`](https://github.com/DS4SD/docling/commit/3e599c7bbeef211dc346e9bc1d3a249113fcc4e4))
* Fix the path to the run_with_accelerator.py example ([#608](https://github.com/DS4SD/docling/issues/608)) ([`3bb3bf5`](https://github.com/DS4SD/docling/commit/3bb3bf57150c9705a055982e6fb0cc8d1408f161))
## [v2.12.0](https://github.com/DS4SD/docling/releases/tag/v2.12.0) - 2024-12-13
### Feature

View File

@ -29,7 +29,7 @@ Docling parses documents and exports them to the desired format with ease and sp
* 🗂️ Reads popular document formats (PDF, DOCX, PPTX, XLSX, Images, HTML, AsciiDoc & Markdown) and exports to HTML, Markdown and JSON (with embedded and referenced images)
* 📑 Advanced PDF document understanding including page layout, reading order & table structures
* 🧩 Unified, expressive [DoclingDocument](https://ds4sd.github.io/docling/concepts/docling_document/) representation format
- * 🤖 Easy integration with 🦙 LlamaIndex & 🦜🔗 LangChain for powerful RAG / QA applications
+ * 🤖 Plug-and-play [integrations](https://ds4sd.github.io/docling/integrations/) incl. LangChain, LlamaIndex, Crew AI & Haystack for agentic AI
* 🔍 OCR support for scanned PDFs
* 💻 Simple and convenient CLI
@ -39,7 +39,6 @@ Explore the [documentation](https://ds4sd.github.io/docling/) to discover plenty
* ♾️ Equation & code extraction
* 📝 Metadata extraction, including title, authors, references & language
- * 🦜🔗 Native LangChain extension
## Installation

View File

@ -37,10 +37,10 @@ class HTMLDocumentBackend(DeclarativeDocumentBackend):
        try:
            if isinstance(self.path_or_stream, BytesIO):
-               text_stream = self.path_or_stream.getvalue().decode("utf-8")
+               text_stream = self.path_or_stream.getvalue()
                self.soup = BeautifulSoup(text_stream, "html.parser")
            if isinstance(self.path_or_stream, Path):
-               with open(self.path_or_stream, "r", encoding="utf-8") as f:
+               with open(self.path_or_stream, "rb") as f:
                    html_content = f.read()
                self.soup = BeautifulSoup(html_content, "html.parser")
        except Exception as e:
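Passing raw bytes instead of pre-decoded UTF-8 lets BeautifulSoup's own encoding detection honor the charset a page declares. A minimal sketch, not docling code:

from bs4 import BeautifulSoup

# This page declares a non-UTF-8 charset; decoding the bytes as UTF-8 up
# front would fail on the accented byte, while BeautifulSoup sniffs the
# declared encoding itself when given raw bytes.
raw = b'<html><head><meta charset="iso-8859-1"></head><body>caf\xe9</body></html>'
soup = BeautifulSoup(raw, "html.parser")
print(soup.body.get_text())    # -> "café"
print(soup.original_encoding)  # -> "iso-8859-1"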

View File

@ -16,7 +16,7 @@ from docling_core.types.doc import (
    TableCell,
    TableData,
)
-from PIL import Image
+from PIL import Image, UnidentifiedImageError
from pptx import Presentation
from pptx.enum.shapes import MSO_SHAPE_TYPE, PP_PLACEHOLDER
@ -120,6 +120,7 @@ class MsPowerpointDocumentBackend(DeclarativeDocumentBackend, PaginatedDocumentB
        bullet_type = "None"
        list_text = ""
        list_label = GroupLabel.LIST
+       doc_label = DocItemLabel.LIST_ITEM
        prov = self.generate_prov(shape, slide_ind, shape.text.strip())

        # Identify if shape contains lists
@ -276,16 +277,19 @@ class MsPowerpointDocumentBackend(DeclarativeDocumentBackend, PaginatedDocumentB
            im_dpi, _ = image.dpi

            # Open it with PIL
-           pil_image = Image.open(BytesIO(image_bytes))
-           # shape has picture
-           prov = self.generate_prov(shape, slide_ind, "")
-           doc.add_picture(
-               parent=parent_slide,
-               image=ImageRef.from_pil(image=pil_image, dpi=im_dpi),
-               caption=None,
-               prov=prov,
-           )
+           try:
+               pil_image = Image.open(BytesIO(image_bytes))
+               # shape has picture
+               prov = self.generate_prov(shape, slide_ind, "")
+               doc.add_picture(
+                   parent=parent_slide,
+                   image=ImageRef.from_pil(image=pil_image, dpi=im_dpi),
+                   caption=None,
+                   prov=prov,
+               )
+           except (UnidentifiedImageError, OSError) as e:
+               _log.warning(f"Warning: image cannot be loaded by Pillow: {e}")
        return

    def handle_tables(self, shape, parent_slide, slide_ind, doc):
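The same guard works as a standalone pattern wherever embedded pictures may be undecodable (WMF/EMF blobs are a common case in slides). A hedged sketch; the helper name is made up:

from io import BytesIO

from PIL import Image, UnidentifiedImageError

def load_image_or_none(image_bytes: bytes):
    # One bad picture should not abort a whole document conversion.
    try:
        return Image.open(BytesIO(image_bytes))
    except (UnidentifiedImageError, OSError):
        return None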

View File

@ -0,0 +1,592 @@
import logging
from io import BytesIO
from pathlib import Path
from typing import Any, Set, Union
import lxml
from bs4 import BeautifulSoup
from docling_core.types.doc import (
DocItemLabel,
DoclingDocument,
DocumentOrigin,
GroupLabel,
TableCell,
TableData,
)
from lxml import etree
from typing_extensions import TypedDict, override
from docling.backend.abstract_backend import DeclarativeDocumentBackend
from docling.datamodel.base_models import InputFormat
from docling.datamodel.document import InputDocument
_log = logging.getLogger(__name__)
class Paragraph(TypedDict):
text: str
headers: list[str]
class Author(TypedDict):
name: str
affiliation_names: list[str]
class Table(TypedDict):
label: str
caption: str
content: str
class FigureCaption(TypedDict):
label: str
caption: str
class Reference(TypedDict):
author_names: str
title: str
journal: str
year: str
class XMLComponents(TypedDict):
title: str
authors: list[Author]
abstract: str
paragraphs: list[Paragraph]
tables: list[Table]
figure_captions: list[FigureCaption]
references: list[Reference]
class PubMedDocumentBackend(DeclarativeDocumentBackend):
"""
The code from this document backend has been developed by modifying parts of the PubMed Parser library (version 0.5.0, released on 12.08.2024):
Achakulvisut et al., (2020).
    Pubmed Parser: A Python Parser for PubMed Open-Access XML Subset and MEDLINE XML Dataset.
Journal of Open Source Software, 5(46), 1979,
https://doi.org/10.21105/joss.01979
"""
@override
def __init__(self, in_doc: "InputDocument", path_or_stream: Union[BytesIO, Path]):
super().__init__(in_doc, path_or_stream)
self.path_or_stream = path_or_stream
# Initialize parents for the document hierarchy
self.parents: dict = {}
self.valid = False
try:
if isinstance(self.path_or_stream, BytesIO):
self.path_or_stream.seek(0)
self.tree: lxml.etree._ElementTree = etree.parse(self.path_or_stream)
if "/NLM//DTD JATS" in self.tree.docinfo.public_id:
self.valid = True
except Exception as exc:
raise RuntimeError(
f"Could not initialize PubMed backend for file with hash {self.document_hash}."
) from exc
@override
def is_valid(self) -> bool:
return self.valid
@classmethod
@override
def supports_pagination(cls) -> bool:
return False
@override
def unload(self):
if isinstance(self.path_or_stream, BytesIO):
self.path_or_stream.close()
self.path_or_stream = None
@classmethod
@override
def supported_formats(cls) -> Set[InputFormat]:
return {InputFormat.XML_PUBMED}
@override
def convert(self) -> DoclingDocument:
# Create empty document
origin = DocumentOrigin(
filename=self.file.name or "file",
mimetype="application/xml",
binary_hash=self.document_hash,
)
doc = DoclingDocument(name=self.file.stem or "file", origin=origin)
_log.debug("Trying to convert PubMed XML document...")
# Get parsed XML components
xml_components: XMLComponents = self._parse()
# Add XML components to the document
doc = self._populate_document(doc, xml_components)
return doc
def _parse_title(self) -> str:
title: str = " ".join(
[
t.replace("\n", "")
for t in self.tree.xpath(".//title-group/article-title")[0].itertext()
]
)
return title
def _parse_authors(self) -> list[Author]:
# Get mapping between affiliation ids and names
affiliation_names = []
for affiliation_node in self.tree.xpath(".//aff[@id]"):
affiliation_names.append(
": ".join([t for t in affiliation_node.itertext() if t != "\n"])
)
affiliation_ids_names = {
id: name
for id, name in zip(self.tree.xpath(".//aff[@id]/@id"), affiliation_names)
}
# Get author names and affiliation names
authors: list[Author] = []
for author_node in self.tree.xpath(
'.//contrib-group/contrib[@contrib-type="author"]'
):
author: Author = {
"name": "",
"affiliation_names": [],
}
# Affiliation names
affiliation_ids = [
a.attrib["rid"] for a in author_node.xpath('xref[@ref-type="aff"]')
]
for id in affiliation_ids:
if id in affiliation_ids_names:
author["affiliation_names"].append(affiliation_ids_names[id])
# Name
author["name"] = (
author_node.xpath("name/surname")[0].text
+ " "
+ author_node.xpath("name/given-names")[0].text
)
authors.append(author)
return authors
def _parse_abstract(self) -> str:
texts = []
for abstract_node in self.tree.xpath(".//abstract"):
for text in abstract_node.itertext():
texts.append(text.replace("\n", ""))
abstract: str = "".join(texts)
return abstract
def _parse_main_text(self) -> list[Paragraph]:
paragraphs: list[Paragraph] = []
for paragraph_node in self.tree.xpath("//body//p"):
# Skip captions
if "/caption" in paragraph_node.getroottree().getpath(paragraph_node):
continue
paragraph: Paragraph = {"text": "", "headers": []}
# Text
paragraph["text"] = "".join(
[t.replace("\n", "") for t in paragraph_node.itertext()]
)
# Header
path = "../title"
while len(paragraph_node.xpath(path)) > 0:
paragraph["headers"].append(
"".join(
[
t.replace("\n", "")
for t in paragraph_node.xpath(path)[0].itertext()
]
)
)
path = "../" + path
paragraphs.append(paragraph)
return paragraphs
def _parse_tables(self) -> list[Table]:
tables: list[Table] = []
for table_node in self.tree.xpath(".//body//table-wrap"):
table: Table = {"label": "", "caption": "", "content": ""}
# Content
if len(table_node.xpath("table")) > 0:
table_content_node = table_node.xpath("table")[0]
elif len(table_node.xpath("alternatives/table")) > 0:
table_content_node = table_node.xpath("alternatives/table")[0]
else:
table_content_node = None
            if table_content_node is not None:
table["content"] = etree.tostring(table_content_node).decode("utf-8")
# Caption
if len(table_node.xpath("caption/p")) > 0:
caption_node = table_node.xpath("caption/p")[0]
elif len(table_node.xpath("caption/title")) > 0:
caption_node = table_node.xpath("caption/title")[0]
else:
caption_node = None
            if caption_node is not None:
table["caption"] = "".join(
[t.replace("\n", "") for t in caption_node.itertext()]
)
# Label
if len(table_node.xpath("label")) > 0:
table["label"] = table_node.xpath("label")[0].text
tables.append(table)
return tables
def _parse_figure_captions(self) -> list[FigureCaption]:
figure_captions: list[FigureCaption] = []
if not (self.tree.xpath(".//fig")):
return figure_captions
for figure_node in self.tree.xpath(".//fig"):
figure_caption: FigureCaption = {
"caption": "",
"label": "",
}
# Label
if figure_node.xpath("label"):
figure_caption["label"] = "".join(
[
t.replace("\n", "")
for t in figure_node.xpath("label")[0].itertext()
]
)
# Caption
if figure_node.xpath("caption"):
caption = ""
for caption_node in figure_node.xpath("caption")[0].getchildren():
caption += (
"".join([t.replace("\n", "") for t in caption_node.itertext()])
+ "\n"
)
figure_caption["caption"] = caption
figure_captions.append(figure_caption)
return figure_captions
def _parse_references(self) -> list[Reference]:
references: list[Reference] = []
for reference_node_abs in self.tree.xpath(".//ref-list/ref"):
reference: Reference = {
"author_names": "",
"title": "",
"journal": "",
"year": "",
}
reference_node: Any = None
for tag in ["mixed-citation", "element-citation", "citation"]:
if len(reference_node_abs.xpath(tag)) > 0:
reference_node = reference_node_abs.xpath(tag)[0]
break
if reference_node is None:
continue
if all(
not (ref_type in ["citation-type", "publication-type"])
for ref_type in reference_node.attrib.keys()
):
continue
# Author names
names = []
if len(reference_node.xpath("name")) > 0:
for name_node in reference_node.xpath("name"):
name_str = " ".join(
                    [t.text for t in name_node.getchildren() if t.text is not None]
)
names.append(name_str)
elif len(reference_node.xpath("person-group")) > 0:
for name_node in reference_node.xpath("person-group")[0]:
name_str = (
name_node.xpath("given-names")[0].text
+ " "
+ name_node.xpath("surname")[0].text
)
names.append(name_str)
reference["author_names"] = "; ".join(names)
# Title
if len(reference_node.xpath("article-title")) > 0:
reference["title"] = " ".join(
[
t.replace("\n", " ")
for t in reference_node.xpath("article-title")[0].itertext()
]
)
# Journal
if len(reference_node.xpath("source")) > 0:
reference["journal"] = reference_node.xpath("source")[0].text
# Year
if len(reference_node.xpath("year")) > 0:
reference["year"] = reference_node.xpath("year")[0].text
if (
not (reference_node.xpath("article-title"))
and not (reference_node.xpath("journal"))
and not (reference_node.xpath("year"))
):
reference["title"] = reference_node.text
references.append(reference)
return references
def _parse(self) -> XMLComponents:
"""Parsing PubMed document."""
xml_components: XMLComponents = {
"title": self._parse_title(),
"authors": self._parse_authors(),
"abstract": self._parse_abstract(),
"paragraphs": self._parse_main_text(),
"tables": self._parse_tables(),
"figure_captions": self._parse_figure_captions(),
"references": self._parse_references(),
}
return xml_components
def _populate_document(
self, doc: DoclingDocument, xml_components: XMLComponents
) -> DoclingDocument:
self._add_title(doc, xml_components)
self._add_authors(doc, xml_components)
self._add_abstract(doc, xml_components)
self._add_main_text(doc, xml_components)
if xml_components["tables"]:
self._add_tables(doc, xml_components)
if xml_components["figure_captions"]:
self._add_figure_captions(doc, xml_components)
self._add_references(doc, xml_components)
return doc
def _add_figure_captions(
self, doc: DoclingDocument, xml_components: XMLComponents
) -> None:
self.parents["Figures"] = doc.add_heading(
parent=self.parents["Title"], text="Figures"
)
for figure_caption_xml_component in xml_components["figure_captions"]:
figure_caption_text = (
figure_caption_xml_component["label"]
+ ": "
+ figure_caption_xml_component["caption"].strip()
)
fig_caption = doc.add_text(
label=DocItemLabel.CAPTION, text=figure_caption_text
)
doc.add_picture(
parent=self.parents["Figures"],
caption=fig_caption,
)
return
def _add_title(self, doc: DoclingDocument, xml_components: XMLComponents) -> None:
self.parents["Title"] = doc.add_text(
parent=None,
text=xml_components["title"],
label=DocItemLabel.TITLE,
)
return
def _add_authors(self, doc: DoclingDocument, xml_components: XMLComponents) -> None:
authors_affiliations: list = []
for author in xml_components["authors"]:
authors_affiliations.append(author["name"])
authors_affiliations.append(", ".join(author["affiliation_names"]))
authors_affiliations_str = "; ".join(authors_affiliations)
doc.add_text(
parent=self.parents["Title"],
text=authors_affiliations_str,
label=DocItemLabel.PARAGRAPH,
)
return
def _add_abstract(
self, doc: DoclingDocument, xml_components: XMLComponents
) -> None:
abstract_text: str = xml_components["abstract"]
self.parents["Abstract"] = doc.add_heading(
parent=self.parents["Title"], text="Abstract"
)
doc.add_text(
parent=self.parents["Abstract"],
text=abstract_text,
label=DocItemLabel.TEXT,
)
return
def _add_main_text(
self, doc: DoclingDocument, xml_components: XMLComponents
) -> None:
added_headers: list = []
for paragraph in xml_components["paragraphs"]:
if not (paragraph["headers"]):
continue
# Header
for i, header in enumerate(reversed(paragraph["headers"])):
if header in added_headers:
continue
added_headers.append(header)
if ((i - 1) >= 0) and list(reversed(paragraph["headers"]))[
i - 1
] in self.parents:
parent = self.parents[list(reversed(paragraph["headers"]))[i - 1]]
else:
parent = self.parents["Title"]
self.parents[header] = doc.add_heading(parent=parent, text=header)
# Paragraph text
if paragraph["headers"][0] in self.parents:
parent = self.parents[paragraph["headers"][0]]
else:
parent = self.parents["Title"]
doc.add_text(parent=parent, label=DocItemLabel.TEXT, text=paragraph["text"])
return
def _add_references(
self, doc: DoclingDocument, xml_components: XMLComponents
) -> None:
self.parents["References"] = doc.add_heading(
parent=self.parents["Title"], text="References"
)
current_list = doc.add_group(
parent=self.parents["References"], label=GroupLabel.LIST, name="list"
)
for reference in xml_components["references"]:
reference_text: str = ""
if reference["author_names"]:
reference_text += reference["author_names"] + ". "
if reference["title"]:
reference_text += reference["title"]
if reference["title"][-1] != ".":
reference_text += "."
reference_text += " "
if reference["journal"]:
reference_text += reference["journal"]
if reference["year"]:
reference_text += " (" + reference["year"] + ")"
if not (reference_text):
_log.debug(f"Skipping reference for: {str(self.file)}")
continue
doc.add_list_item(
text=reference_text, enumerated=False, parent=current_list
)
return
def _add_tables(self, doc: DoclingDocument, xml_components: XMLComponents) -> None:
self.parents["Tables"] = doc.add_heading(
parent=self.parents["Title"], text="Tables"
)
for table_xml_component in xml_components["tables"]:
try:
self._add_table(doc, table_xml_component)
except Exception as e:
_log.debug(f"Skipping unsupported table for: {str(self.file)}")
pass
return
def _add_table(self, doc: DoclingDocument, table_xml_component: Table) -> None:
soup = BeautifulSoup(table_xml_component["content"], "html.parser")
table_tag = soup.find("table")
nested_tables = table_tag.find("table")
if nested_tables:
_log.debug(f"Skipping nested table for: {str(self.file)}")
return
# Count the number of rows (number of <tr> elements)
num_rows = len(table_tag.find_all("tr"))
# Find the number of columns (taking into account colspan)
num_cols = 0
for row in table_tag.find_all("tr"):
col_count = 0
for cell in row.find_all(["td", "th"]):
colspan = int(cell.get("colspan", 1))
col_count += colspan
num_cols = max(num_cols, col_count)
grid = [[None for _ in range(num_cols)] for _ in range(num_rows)]
data = TableData(num_rows=num_rows, num_cols=num_cols, table_cells=[])
# Iterate over the rows in the table
for row_idx, row in enumerate(table_tag.find_all("tr")):
# For each row, find all the column cells (both <td> and <th>)
cells = row.find_all(["td", "th"])
# Check if each cell in the row is a header -> means it is a column header
col_header = True
for j, html_cell in enumerate(cells):
if html_cell.name == "td":
col_header = False
# Extract and print the text content of each cell
col_idx = 0
for _, html_cell in enumerate(cells):
text = html_cell.text
col_span = int(html_cell.get("colspan", 1))
row_span = int(html_cell.get("rowspan", 1))
                while grid[row_idx][col_idx] is not None:
col_idx += 1
for r in range(row_span):
for c in range(col_span):
grid[row_idx + r][col_idx + c] = text
cell = TableCell(
text=text,
row_span=row_span,
col_span=col_span,
start_row_offset_idx=row_idx,
end_row_offset_idx=row_idx + row_span,
start_col_offset_idx=col_idx,
end_col_offset_idx=col_idx + col_span,
col_header=col_header,
row_header=((not col_header) and html_cell.name == "th"),
)
data.table_cells.append(cell)
table_caption = doc.add_text(
label=DocItemLabel.CAPTION,
text=table_xml_component["label"] + ": " + table_xml_component["caption"],
)
doc.add_table(data=data, parent=self.parents["Tables"], caption=table_caption)
return
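A hedged usage sketch for the new backend, assuming the format auto-detection below routes the file correctly; the file name is a placeholder:

from docling.document_converter import DocumentConverter

# A JATS-flavoured PubMed .nxml file (doctype containing "/NLM//DTD JATS")
# is detected as InputFormat.XML_PUBMED and handled by PubMedDocumentBackend.
converter = DocumentConverter()
result = converter.convert("PMC1234567.nxml")
print(result.document.export_to_markdown())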

File diff suppressed because it is too large

View File

@ -164,6 +164,11 @@ def convert(
    to_formats: List[OutputFormat] = typer.Option(
        None, "--to", help="Specify output formats. Defaults to Markdown."
    ),
+   headers: str = typer.Option(
+       None,
+       "--headers",
+       help="Specify http request headers used when fetching url input sources in the form of a JSON string",
+   ),
    image_export_mode: Annotated[
        ImageRefMode,
        typer.Option(
@ -279,12 +284,19 @@ def convert(
    if from_formats is None:
        from_formats = [e for e in InputFormat]

+   parsed_headers: Optional[Dict[str, str]] = None
+   if headers is not None:
+       headers_t = TypeAdapter(Dict[str, str])
+       parsed_headers = headers_t.validate_json(headers)

    with tempfile.TemporaryDirectory() as tempdir:
        input_doc_paths: List[Path] = []
        for src in input_sources:
            try:
                # check if we can fetch some remote url
-               source = resolve_source_to_path(source=src, workdir=Path(tempdir))
+               source = resolve_source_to_path(
+                   source=src, headers=parsed_headers, workdir=Path(tempdir)
+               )
                input_doc_paths.append(source)
            except FileNotFoundError:
                err_console.print(
@ -390,7 +402,7 @@ def convert(
    start_time = time.time()

    conv_results = doc_converter.convert_all(
-       input_doc_paths, raises_on_error=abort_on_error
+       input_doc_paths, headers=parsed_headers, raises_on_error=abort_on_error
    )

    output.mkdir(parents=True, exist_ok=True)
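What the new --headers flag does with its value, reduced to a minimal sketch (the token is a placeholder):

from typing import Dict

from pydantic import TypeAdapter

# The CLI validates the JSON string into a str -> str mapping before
# handing it to the converter.
headers_json = '{"Authorization": "Bearer <token>", "User-Agent": "docling"}'
parsed = TypeAdapter(Dict[str, str]).validate_json(headers_json)
# parsed == {"Authorization": "Bearer <token>", "User-Agent": "docling"}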

View File

@ -1,4 +1,4 @@
-from enum import Enum, auto
+from enum import Enum
from typing import TYPE_CHECKING, Dict, List, Optional, Union

from docling_core.types.doc import (
@ -28,14 +28,18 @@ class ConversionStatus(str, Enum):
class InputFormat(str, Enum):
+    """A document format supported by document backend parsers."""

    DOCX = "docx"
    PPTX = "pptx"
    HTML = "html"
+    XML_PUBMED = "xml_pubmed"
    IMAGE = "image"
    PDF = "pdf"
    ASCIIDOC = "asciidoc"
    MD = "md"
    XLSX = "xlsx"
+    XML_USPTO = "xml_uspto"

class OutputFormat(str, Enum):
@ -52,9 +56,11 @@ FormatToExtensions: Dict[InputFormat, List[str]] = {
    InputFormat.PDF: ["pdf"],
    InputFormat.MD: ["md"],
    InputFormat.HTML: ["html", "htm", "xhtml"],
+    InputFormat.XML_PUBMED: ["xml", "nxml"],
    InputFormat.IMAGE: ["jpg", "jpeg", "png", "tif", "tiff", "bmp"],
    InputFormat.ASCIIDOC: ["adoc", "asciidoc", "asc"],
    InputFormat.XLSX: ["xlsx"],
+    InputFormat.XML_USPTO: ["xml", "txt"],
}

FormatToMimeType: Dict[InputFormat, List[str]] = {
@ -68,6 +74,7 @@ FormatToMimeType: Dict[InputFormat, List[str]] = {
        "application/vnd.openxmlformats-officedocument.presentationml.presentation",
    ],
    InputFormat.HTML: ["text/html", "application/xhtml+xml"],
+    InputFormat.XML_PUBMED: ["application/xml"],
    InputFormat.IMAGE: [
        "image/png",
        "image/jpeg",
@ -81,10 +88,13 @@ FormatToMimeType: Dict[InputFormat, List[str]] = {
    InputFormat.XLSX: [
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
    ],
+    InputFormat.XML_USPTO: ["application/xml", "text/plain"],
}

-MimeTypeToFormat = {
-    mime: fmt for fmt, mimes in FormatToMimeType.items() for mime in mimes
+MimeTypeToFormat: dict[str, list[InputFormat]] = {
+    mime: [fmt for fmt in FormatToMimeType if mime in FormatToMimeType[fmt]]
+    for value in FormatToMimeType.values()
+    for mime in value
}
@ -122,6 +132,7 @@ class Cluster(BaseModel):
    bbox: BoundingBox
    confidence: float = 1.0
    cells: List[Cell] = []
+    children: List["Cluster"] = []  # Add child cluster support

class BasePageElement(BaseModel):
@ -136,6 +147,12 @@ class LayoutPrediction(BaseModel):
    clusters: List[Cluster] = []

+class ContainerElement(
+    BasePageElement
+):  # Used for Form and Key-Value-Regions, only for typing.
+    pass

class Table(BasePageElement):
    otsl_seq: List[str]
    num_rows: int = 0
@ -175,7 +192,7 @@ class PagePredictions(BaseModel):
    equations_prediction: Optional[EquationPrediction] = None

-PageElement = Union[TextElement, Table, FigureElement]
+PageElement = Union[TextElement, Table, FigureElement, ContainerElement]

class AssembledUnit(BaseModel):
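One consequence of the new one-to-many mapping, as a small sketch (the list order assumes the dictionary order shown above):

from docling.datamodel.base_models import MimeTypeToFormat

# "application/xml" is claimed by both new XML formats; this is exactly the
# ambiguity the content-based guessing further down has to resolve.
print(MimeTypeToFormat["application/xml"])
# -> [InputFormat.XML_PUBMED, InputFormat.XML_USPTO]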

View File

@ -3,7 +3,17 @@ import re
from enum import Enum
from io import BytesIO
from pathlib import Path, PurePath
-from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Type, Union
+from typing import (
+    TYPE_CHECKING,
+    Dict,
+    Iterable,
+    List,
+    Literal,
+    Optional,
+    Set,
+    Type,
+    Union,
+)

import filetype
from docling_core.types.doc import (
@ -63,7 +73,7 @@ _log = logging.getLogger(__name__)
layout_label_to_ds_type = {
    DocItemLabel.TITLE: "title",
-    DocItemLabel.DOCUMENT_INDEX: "table-of-contents",
+    DocItemLabel.DOCUMENT_INDEX: "table",
    DocItemLabel.SECTION_HEADER: "subtitle-level-1",
    DocItemLabel.CHECKBOX_SELECTED: "checkbox-selected",
    DocItemLabel.CHECKBOX_UNSELECTED: "checkbox-unselected",
@ -78,6 +88,8 @@ layout_label_to_ds_type = {
    DocItemLabel.PICTURE: "figure",
    DocItemLabel.TEXT: "paragraph",
    DocItemLabel.PARAGRAPH: "paragraph",
+    DocItemLabel.FORM: DocItemLabel.FORM.value,
+    DocItemLabel.KEY_VALUE_REGION: DocItemLabel.KEY_VALUE_REGION.value,
}

_EMPTY_DOCLING_DOC = DoclingDocument(name="dummy")
@ -215,13 +227,18 @@ class _DummyBackend(AbstractDocumentBackend):
class _DocumentConversionInput(BaseModel):

    path_or_stream_iterator: Iterable[Union[Path, str, DocumentStream]]
+    headers: Optional[Dict[str, str]] = None
    limits: Optional[DocumentLimits] = DocumentLimits()

    def docs(
        self, format_options: Dict[InputFormat, "FormatOption"]
    ) -> Iterable[InputDocument]:
        for item in self.path_or_stream_iterator:
-            obj = resolve_source_to_stream(item) if isinstance(item, str) else item
+            obj = (
+                resolve_source_to_stream(item, self.headers)
+                if isinstance(item, str)
+                else item
+            )
            format = self._guess_format(obj)
            backend: Type[AbstractDocumentBackend]
            if format not in format_options.keys():
@ -235,7 +252,7 @@ class _DocumentConversionInput(BaseModel):
            if isinstance(obj, Path):
                yield InputDocument(
                    path_or_stream=obj,
-                    format=format,
+                    format=format,  # type: ignore[arg-type]
                    filename=obj.name,
                    limits=self.limits,
                    backend=backend,
@ -243,7 +260,7 @@ class _DocumentConversionInput(BaseModel):
            elif isinstance(obj, DocumentStream):
                yield InputDocument(
                    path_or_stream=obj.stream,
-                    format=format,
+                    format=format,  # type: ignore[arg-type]
                    filename=obj.name,
                    limits=self.limits,
                    backend=backend,
@ -251,15 +268,15 @@ class _DocumentConversionInput(BaseModel):
            else:
                raise RuntimeError(f"Unexpected obj type in iterator: {type(obj)}")

-    def _guess_format(self, obj: Union[Path, DocumentStream]):
+    def _guess_format(self, obj: Union[Path, DocumentStream]) -> Optional[InputFormat]:
        content = b""  # empty binary blob
-        format = None
+        formats: list[InputFormat] = []

        if isinstance(obj, Path):
            mime = filetype.guess_mime(str(obj))
            if mime is None:
                ext = obj.suffix[1:]
-                mime = self._mime_from_extension(ext)
+                mime = _DocumentConversionInput._mime_from_extension(ext)
            if mime is None:  # must guess from
                with obj.open("rb") as f:
                    content = f.read(1024)  # Read first 1KB
@ -274,15 +291,58 @@ class _DocumentConversionInput(BaseModel):
                if ("." in obj.name and not obj.name.startswith("."))
                else ""
            )
-            mime = self._mime_from_extension(ext)
+            mime = _DocumentConversionInput._mime_from_extension(ext)

-        mime = mime or self._detect_html_xhtml(content)
+        mime = mime or _DocumentConversionInput._detect_html_xhtml(content)
        mime = mime or "text/plain"
+        formats = MimeTypeToFormat.get(mime, [])
+        if formats:
+            if len(formats) == 1 and mime not in ("text/plain",):
+                return formats[0]
+            else:  # ambiguity in formats
+                return _DocumentConversionInput._guess_from_content(
+                    content, mime, formats
+                )
+        else:
+            return None

-        format = MimeTypeToFormat.get(mime)
-        return format
+    @staticmethod
+    def _guess_from_content(
+        content: bytes, mime: str, formats: list[InputFormat]
+    ) -> Optional[InputFormat]:
+        """Guess the input format of a document by checking part of its content."""
+        input_format: Optional[InputFormat] = None
+        content_str = content.decode("utf-8")

-    def _mime_from_extension(self, ext):
+        if mime == "application/xml":
+            match_doctype = re.search(r"<!DOCTYPE [^>]+>", content_str)
+            if match_doctype:
+                xml_doctype = match_doctype.group()
+                if InputFormat.XML_USPTO in formats and any(
+                    item in xml_doctype
+                    for item in (
+                        "us-patent-application-v4",
+                        "us-patent-grant-v4",
+                        "us-grant-025",
+                        "patent-application-publication",
+                    )
+                ):
+                    input_format = InputFormat.XML_USPTO
+                if (
+                    InputFormat.XML_PUBMED in formats
+                    and "/NLM//DTD JATS" in xml_doctype
+                ):
+                    input_format = InputFormat.XML_PUBMED
+        elif mime == "text/plain":
+            if InputFormat.XML_USPTO in formats and content_str.startswith("PATN\r\n"):
+                input_format = InputFormat.XML_USPTO
+
+        return input_format

+    @staticmethod
+    def _mime_from_extension(ext):
        mime = None
        if ext in FormatToExtensions[InputFormat.ASCIIDOC]:
            mime = FormatToMimeType[InputFormat.ASCIIDOC][0]
@ -290,10 +350,21 @@ class _DocumentConversionInput(BaseModel):
        elif ext in FormatToExtensions[InputFormat.HTML]:
            mime = FormatToMimeType[InputFormat.HTML][0]
        elif ext in FormatToExtensions[InputFormat.MD]:
            mime = FormatToMimeType[InputFormat.MD][0]

        return mime

-    def _detect_html_xhtml(self, content):
+    @staticmethod
+    def _detect_html_xhtml(
+        content: bytes,
+    ) -> Optional[Literal["application/xhtml+xml", "application/xml", "text/html"]]:
+        """Guess the mime type of an XHTML, HTML, or XML file from its content.
+
+        Args:
+            content: A short piece of a document from its beginning.
+
+        Returns:
+            The mime type of an XHTML, HTML, or XML file, or None if the content does
+            not match any of these formats.
+        """
        content_str = content.decode("ascii", errors="ignore").lower()
        # Remove XML comments
        content_str = re.sub(r"<!--(.*?)-->", "", content_str, flags=re.DOTALL)
@ -302,8 +373,16 @@ class _DocumentConversionInput(BaseModel):
        if re.match(r"<\?xml", content_str):
            if "xhtml" in content_str[:1000]:
                return "application/xhtml+xml"
+            else:
+                return "application/xml"

        if re.match(r"<!doctype\s+html|<html|<head|<body", content_str):
            return "text/html"

+        p = re.compile(
+            r"<!doctype\s+(?P<root>[a-zA-Z_:][a-zA-Z0-9_:.-]*)\s+.*>\s*<(?P=root)\b"
+        )
+        if p.search(content_str):
+            return "application/xml"

        return None
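A small self-check of the JATS sniffing rule above; the doctype string is a typical PubMed Central declaration, not taken from this diff:

import re

content_str = (
    '<?xml version="1.0"?><!DOCTYPE article PUBLIC "-//NLM//DTD JATS '
    '(Z39.96) Journal Archiving DTD v1.2//EN" "JATS-archivearticle1.dtd"><article/>'
)
# The same pattern used in _guess_from_content isolates the doctype...
match = re.search(r"<!DOCTYPE [^>]+>", content_str)
# ...and the JATS marker is found inside it.
assert match is not None and "/NLM//DTD JATS" in match.group()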

View File

@ -139,7 +139,10 @@ class EasyOcrOptions(OcrOptions):
    use_gpu: Optional[bool] = None
+    confidence_threshold: float = 0.65
    model_storage_directory: Optional[str] = None
+    recog_network: Optional[str] = "standard"
    download_enabled: bool = True

    model_config = ConfigDict(
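A sketch of how the two new options could be wired in, assuming the usual docling pattern of passing OCR options through PdfPipelineOptions; the values shown are just the defaults:

from docling.datamodel.pipeline_options import EasyOcrOptions, PdfPipelineOptions

# "standard" selects EasyOCR's default recognition network; cells below the
# confidence threshold are dropped (see the EasyOcrModel change further down).
ocr_options = EasyOcrOptions(recog_network="standard", confidence_threshold=0.65)
pipeline_options = PdfPipelineOptions(ocr_options=ocr_options)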

View File

@ -31,6 +31,7 @@ class DebugSettings(BaseModel):
    visualize_cells: bool = False
    visualize_ocr: bool = False
    visualize_layout: bool = False
+    visualize_raw_layout: bool = False
    visualize_tables: bool = False

    profile_pipeline_timings: bool = False
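Assumed usage of the new flag, a one-line sketch:

from docling.datamodel.settings import settings

# Flip before converting; the raw side-by-side layout images land under
# settings.debug.debug_output_path (see the layout model change below).
settings.debug.visualize_raw_layout = True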

View File

@ -15,6 +15,8 @@ from docling.backend.md_backend import MarkdownDocumentBackend
from docling.backend.msexcel_backend import MsExcelDocumentBackend
from docling.backend.mspowerpoint_backend import MsPowerpointDocumentBackend
from docling.backend.msword_backend import MsWordDocumentBackend
+from docling.backend.xml.pubmed_backend import PubMedDocumentBackend
+from docling.backend.xml.uspto_backend import PatentUsptoDocumentBackend
from docling.datamodel.base_models import (
    ConversionStatus,
    DoclingComponentType,
@ -82,12 +84,22 @@ class HTMLFormatOption(FormatOption):
    backend: Type[AbstractDocumentBackend] = HTMLDocumentBackend

-class PdfFormatOption(FormatOption):
+class PatentUsptoFormatOption(FormatOption):
+    pipeline_cls: Type = SimplePipeline
+    backend: Type[PatentUsptoDocumentBackend] = PatentUsptoDocumentBackend

+class XMLPubMedFormatOption(FormatOption):
+    pipeline_cls: Type = SimplePipeline
+    backend: Type[AbstractDocumentBackend] = PubMedDocumentBackend

+class ImageFormatOption(FormatOption):
    pipeline_cls: Type = StandardPdfPipeline
    backend: Type[AbstractDocumentBackend] = DoclingParseV2DocumentBackend

-class ImageFormatOption(FormatOption):
+class PdfFormatOption(FormatOption):
    pipeline_cls: Type = StandardPdfPipeline
    backend: Type[AbstractDocumentBackend] = DoclingParseV2DocumentBackend
@ -112,6 +124,12 @@ def _get_default_option(format: InputFormat) -> FormatOption:
    InputFormat.HTML: FormatOption(
        pipeline_cls=SimplePipeline, backend=HTMLDocumentBackend
    ),
+    InputFormat.XML_USPTO: FormatOption(
+        pipeline_cls=SimplePipeline, backend=PatentUsptoDocumentBackend
+    ),
+    InputFormat.XML_PUBMED: FormatOption(
+        pipeline_cls=SimplePipeline, backend=PubMedDocumentBackend
+    ),
    InputFormat.IMAGE: FormatOption(
        pipeline_cls=StandardPdfPipeline, backend=DoclingParseV2DocumentBackend
    ),
@ -158,16 +176,17 @@ class DocumentConverter:
    def convert(
        self,
        source: Union[Path, str, DocumentStream],  # TODO review naming
+        headers: Optional[Dict[str, str]] = None,
        raises_on_error: bool = True,
        max_num_pages: int = sys.maxsize,
        max_file_size: int = sys.maxsize,
    ) -> ConversionResult:
        all_res = self.convert_all(
            source=[source],
            raises_on_error=raises_on_error,
            max_num_pages=max_num_pages,
            max_file_size=max_file_size,
+            headers=headers,
        )
        return next(all_res)
@ -175,6 +194,7 @@ class DocumentConverter:
    def convert_all(
        self,
        source: Iterable[Union[Path, str, DocumentStream]],  # TODO review naming
+        headers: Optional[Dict[str, str]] = None,
        raises_on_error: bool = True,  # True: raises on first conversion error; False: does not raise on conv error
        max_num_pages: int = sys.maxsize,
        max_file_size: int = sys.maxsize,
@ -184,8 +204,7 @@ class DocumentConverter:
            max_file_size=max_file_size,
        )
        conv_input = _DocumentConversionInput(
-            path_or_stream_iterator=source,
-            limits=limits,
+            path_or_stream_iterator=source, limits=limits, headers=headers
        )
        conv_res_iter = self._convert(conv_input, raises_on_error=raises_on_error)
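A hedged usage sketch of the new headers argument; URL and token are placeholders:

from docling.document_converter import DocumentConverter

# Headers are forwarded when fetching the remote source, per the new
# keyword argument above.
converter = DocumentConverter()
result = converter.convert(
    "https://example.com/report.pdf",
    headers={"Authorization": "Bearer <token>"},
)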

View File

@ -138,18 +138,31 @@ class BaseOcrModel(BasePageModel):
    def draw_ocr_rects_and_cells(self, conv_res, page, ocr_rects, show: bool = False):
        image = copy.deepcopy(page.image)
+        scale_x = image.width / page.size.width
+        scale_y = image.height / page.size.height

        draw = ImageDraw.Draw(image, "RGBA")

        # Draw OCR rectangles as yellow filled rect
        for rect in ocr_rects:
            x0, y0, x1, y1 = rect.as_tuple()
+            y0 *= scale_y
+            y1 *= scale_y
+            x0 *= scale_x
+            x1 *= scale_x
            shade_color = (255, 255, 0, 40)  # transparent yellow
            draw.rectangle([(x0, y0), (x1, y1)], fill=shade_color, outline=None)

        # Draw OCR and programmatic cells
        for tc in page.cells:
            x0, y0, x1, y1 = tc.bbox.as_tuple()
+            y0 *= scale_y
+            y1 *= scale_y
+            x0 *= scale_x
+            x1 *= scale_x
-            color = "red"
+            color = "gray"
            if isinstance(tc, OcrCell):
                color = "magenta"
            draw.rectangle([(x0, y0), (x1, y1)], outline=color)

View File

@ -22,9 +22,15 @@ from docling_core.types.legacy_doc.document import (
from docling_core.types.legacy_doc.document import CCSFileInfoObject as DsFileInfoObject
from docling_core.types.legacy_doc.document import ExportedCCSDocument as DsDocument
from PIL import ImageDraw
-from pydantic import BaseModel, ConfigDict
+from pydantic import BaseModel, ConfigDict, TypeAdapter

-from docling.datamodel.base_models import Cluster, FigureElement, Table, TextElement
+from docling.datamodel.base_models import (
+    Cluster,
+    ContainerElement,
+    FigureElement,
+    Table,
+    TextElement,
+)
from docling.datamodel.document import ConversionResult, layout_label_to_ds_type
from docling.datamodel.settings import settings
from docling.utils.glm_utils import to_docling_document
@ -204,7 +210,31 @@ class GlmModel:
                        )
                    ],
                    obj_type=layout_label_to_ds_type.get(element.label),
-                    # data=[[]],
+                    payload={
+                        "children": TypeAdapter(List[Cluster]).dump_python(
+                            element.cluster.children
+                        )
+                    },  # hack to channel child clusters through GLM
                )
            )
+        elif isinstance(element, ContainerElement):
+            main_text.append(
+                BaseText(
+                    text="",
+                    payload={
+                        "children": TypeAdapter(List[Cluster]).dump_python(
+                            element.cluster.children
+                        )
+                    },  # hack to channel child clusters through GLM
+                    obj_type=layout_label_to_ds_type.get(element.label),
+                    name=element.label,
+                    prov=[
+                        Prov(
+                            bbox=target_bbox,
+                            page=element.page_no + 1,
+                            span=[0, 0],
+                        )
+                    ],
+                )
+            )
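What the TypeAdapter "hack" relies on, shown with a hypothetical stand-in model rather than the real Cluster:

from typing import List

from pydantic import BaseModel, TypeAdapter

# Toy stand-in just to show what dump_python does: pydantic models (with
# nesting) become plain dicts that can travel inside a JSON-like payload.
class Node(BaseModel):
    id: int
    children: List["Node"] = []

payload = TypeAdapter(List[Node]).dump_python([Node(id=1, children=[Node(id=2)])])
print(payload)  # -> [{'id': 1, 'children': [{'id': 2, 'children': []}]}]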

View File

@ -66,6 +66,7 @@ class EasyOcrModel(BaseOcrModel):
            lang_list=self.options.lang,
            gpu=use_gpu,
            model_storage_directory=self.options.model_storage_directory,
+            recog_network=self.options.recog_network,
            download_enabled=self.options.download_enabled,
            verbose=False,
        )
@ -117,6 +118,7 @@ class EasyOcrModel(BaseOcrModel):
                        ),
                    )
                    for ix, line in enumerate(result)
+                    if line[2] >= self.options.confidence_threshold
                ]
                all_ocr_cells.extend(cells)

View File

@ -7,9 +7,8 @@ from typing import Iterable, List
from docling_core.types.doc import CoordOrigin, DocItemLabel
from docling_ibm_models.layoutmodel.layout_predictor import LayoutPredictor
-from PIL import ImageDraw
+from PIL import Image, ImageDraw, ImageFont

-import docling.utils.layout_utils as lu
from docling.datamodel.base_models import (
    BoundingBox,
    Cell,
@ -22,6 +21,7 @@ from docling.datamodel.pipeline_options import AcceleratorDevice, AcceleratorOpt
from docling.datamodel.settings import settings
from docling.models.base_model import BasePageModel
from docling.utils.accelerator_utils import decide_device
+from docling.utils.layout_postprocessor import LayoutPostprocessor
from docling.utils.profiling import TimeRecorder

_log = logging.getLogger(__name__)
@ -44,9 +44,10 @@ class LayoutModel(BasePageModel):
    ]
    PAGE_HEADER_LABELS = [DocItemLabel.PAGE_HEADER, DocItemLabel.PAGE_FOOTER]

-    TABLE_LABEL = DocItemLabel.TABLE
+    TABLE_LABELS = [DocItemLabel.TABLE, DocItemLabel.DOCUMENT_INDEX]
    FIGURE_LABEL = DocItemLabel.PICTURE
    FORMULA_LABEL = DocItemLabel.FORMULA
+    CONTAINER_LABELS = [DocItemLabel.FORM, DocItemLabel.KEY_VALUE_REGION]

    def __init__(self, artifacts_path: Path, accelerator_options: AcceleratorOptions):
@ -60,230 +61,118 @@ class LayoutModel(BasePageModel):
blacklist_classes={"Form", "Key-Value Region", "Picture"}, # Use this to disable picture recognition (trying to force to identify only text) blacklist_classes={"Form", "Key-Value Region", "Picture"}, # Use this to disable picture recognition (trying to force to identify only text)
) )
def postprocess(self, clusters_in: List[Cluster], cells: List[Cell], page_height): def draw_clusters_and_cells_side_by_side(
MIN_INTERSECTION = 0.2 self, conv_res, page, clusters, mode_prefix: str, show: bool = False
CLASS_THRESHOLDS = { ):
DocItemLabel.CAPTION: 0.35, """
DocItemLabel.FOOTNOTE: 0.35, Draws a page image side by side with clusters filtered into two categories:
DocItemLabel.FORMULA: 0.35, - Left: Clusters excluding FORM, KEY_VALUE_REGION, and PICTURE.
DocItemLabel.LIST_ITEM: 0.35, - Right: Clusters including FORM, KEY_VALUE_REGION, and PICTURE.
DocItemLabel.PAGE_FOOTER: 0.35, Includes label names and confidence scores for each cluster.
DocItemLabel.PAGE_HEADER: 0.35, """
DocItemLabel.PICTURE: 0.2, # low threshold adjust to capture chemical structures for examples. scale_x = page.image.width / page.size.width
DocItemLabel.SECTION_HEADER: 0.45, scale_y = page.image.height / page.size.height
DocItemLabel.TABLE: 0.35,
DocItemLabel.TEXT: 0.45, # Filter clusters for left and right images
DocItemLabel.TITLE: 0.45, exclude_labels = {
DocItemLabel.DOCUMENT_INDEX: 0.45, DocItemLabel.FORM,
DocItemLabel.CODE: 0.45, DocItemLabel.KEY_VALUE_REGION,
DocItemLabel.CHECKBOX_SELECTED: 0.45, DocItemLabel.PICTURE,
DocItemLabel.CHECKBOX_UNSELECTED: 0.45,
DocItemLabel.FORM: 0.45,
DocItemLabel.KEY_VALUE_REGION: 0.45,
} }
left_clusters = [c for c in clusters if c.label not in exclude_labels]
right_clusters = [c for c in clusters if c.label in exclude_labels]
# Create a deep copy of the original image for both sides
left_image = copy.deepcopy(page.image)
right_image = copy.deepcopy(page.image)
CLASS_REMAPPINGS = { # Function to draw clusters on an image
DocItemLabel.DOCUMENT_INDEX: DocItemLabel.TABLE, def draw_clusters(image, clusters):
DocItemLabel.TITLE: DocItemLabel.SECTION_HEADER, draw = ImageDraw.Draw(image, "RGBA")
} # Create a smaller font for the labels
try:
font = ImageFont.truetype("arial.ttf", 12)
except OSError:
# Fallback to default font if arial is not available
font = ImageFont.load_default()
for c_tl in clusters:
all_clusters = [c_tl, *c_tl.children]
for c in all_clusters:
# Draw cells first (underneath)
cell_color = (0, 0, 0, 40) # Transparent black for cells
for tc in c.cells:
cx0, cy0, cx1, cy1 = tc.bbox.as_tuple()
cx0 *= scale_x
cx1 *= scale_x
cy0 *= scale_x
cy1 *= scale_y
_log.debug("================= Start postprocess function ====================") draw.rectangle(
start_time = time.time() [(cx0, cy0), (cx1, cy1)],
# Apply Confidence Threshold to cluster predictions outline=None,
# confidence = self.conf_threshold fill=cell_color,
clusters_mod = [] )
# Draw cluster rectangle
x0, y0, x1, y1 = c.bbox.as_tuple()
x0 *= scale_x
x1 *= scale_x
y0 *= scale_x
y1 *= scale_y
for cluster in clusters_in: cluster_fill_color = (*list(DocItemLabel.get_color(c.label)), 70)
confidence = CLASS_THRESHOLDS[cluster.label] cluster_outline_color = (
if cluster.confidence >= confidence: *list(DocItemLabel.get_color(c.label)),
# annotation["created_by"] = "high_conf_pred" 255,
)
draw.rectangle(
[(x0, y0), (x1, y1)],
outline=cluster_outline_color,
fill=cluster_fill_color,
)
# Add label name and confidence
label_text = f"{c.label.name} ({c.confidence:.2f})"
# Create semi-transparent background for text
text_bbox = draw.textbbox((x0, y0), label_text, font=font)
text_bg_padding = 2
draw.rectangle(
[
(
text_bbox[0] - text_bg_padding,
text_bbox[1] - text_bg_padding,
),
(
text_bbox[2] + text_bg_padding,
text_bbox[3] + text_bg_padding,
),
],
fill=(255, 255, 255, 180), # Semi-transparent white
)
# Draw text
draw.text(
(x0, y0),
label_text,
fill=(0, 0, 0, 255), # Solid black
font=font,
)
# Remap class labels where needed. # Draw clusters on both images
if cluster.label in CLASS_REMAPPINGS.keys(): draw_clusters(left_image, left_clusters)
cluster.label = CLASS_REMAPPINGS[cluster.label] draw_clusters(right_image, right_clusters)
clusters_mod.append(cluster) # Combine the images side by side
combined_width = left_image.width * 2
# map to dictionary clusters and cells, with bottom left origin combined_height = left_image.height
clusters_orig = [ combined_image = Image.new("RGB", (combined_width, combined_height))
{ combined_image.paste(left_image, (0, 0))
"id": c.id, combined_image.paste(right_image, (left_image.width, 0))
"bbox": list( if show:
c.bbox.to_bottom_left_origin(page_height).as_tuple() combined_image.show()
), # TODO else:
"confidence": c.confidence, out_path: Path = (
"cell_ids": [], Path(settings.debug.debug_output_path)
"type": c.label, / f"debug_{conv_res.input.file.stem}"
}
for c in clusters_in
]
clusters_out = [
{
"id": c.id,
"bbox": list(
c.bbox.to_bottom_left_origin(page_height).as_tuple()
), # TODO
"confidence": c.confidence,
"created_by": "high_conf_pred",
"cell_ids": [],
"type": c.label,
}
for c in clusters_mod
]
del clusters_mod
raw_cells = [
{
"id": c.id,
"bbox": list(
c.bbox.to_bottom_left_origin(page_height).as_tuple()
), # TODO
"text": c.text,
}
for c in cells
]
cell_count = len(raw_cells)
_log.debug("---- 0. Treat cluster overlaps ------")
clusters_out = lu.remove_cluster_duplicates_by_conf(clusters_out, 0.8)
_log.debug(
"---- 1. Initially assign cells to clusters based on minimum intersection ------"
)
## Check for cells included in or touched by clusters:
clusters_out = lu.assigning_cell_ids_to_clusters(
clusters_out, raw_cells, MIN_INTERSECTION
)
_log.debug("---- 2. Assign Orphans with Low Confidence Detections")
# Creates a map of cell_id->cluster_id
(
clusters_around_cells,
orphan_cell_indices,
ambiguous_cell_indices,
) = lu.cell_id_state_map(clusters_out, cell_count)
# Assign orphan cells with lower confidence predictions
clusters_out, orphan_cell_indices = lu.assign_orphans_with_low_conf_pred(
clusters_out, clusters_orig, raw_cells, orphan_cell_indices
)
# Refresh the cell_ids assignment, after creating new clusters using low conf predictions
clusters_out = lu.assigning_cell_ids_to_clusters(
clusters_out, raw_cells, MIN_INTERSECTION
)
_log.debug("---- 3. Settle Ambigous Cells")
# Creates an update map after assignment of cell_id->cluster_id
(
clusters_around_cells,
orphan_cell_indices,
ambiguous_cell_indices,
) = lu.cell_id_state_map(clusters_out, cell_count)
# Settle pdf cells that belong to multiple clusters
clusters_out, ambiguous_cell_indices = lu.remove_ambigous_pdf_cell_by_conf(
clusters_out, raw_cells, ambiguous_cell_indices
)
_log.debug("---- 4. Set Orphans as Text")
(
clusters_around_cells,
orphan_cell_indices,
ambiguous_cell_indices,
) = lu.cell_id_state_map(clusters_out, cell_count)
clusters_out, orphan_cell_indices = lu.set_orphan_as_text(
clusters_out, clusters_orig, raw_cells, orphan_cell_indices
)
_log.debug("---- 5. Merge Cells & and adapt the bounding boxes")
# Merge cells orphan cells
clusters_out = lu.merge_cells(clusters_out)
# Clean up clusters that remain from merged and unreasonable clusters
clusters_out = lu.clean_up_clusters(
clusters_out,
raw_cells,
merge_cells=True,
img_table=True,
one_cell_table=True,
)
new_clusters = lu.adapt_bboxes(raw_cells, clusters_out, orphan_cell_indices)
clusters_out = new_clusters
## We first rebuild where every cell is now:
## Now we write into a prediction cells list, not into the raw cells list.
## As we don't need previous labels, we best overwrite any old list, because that might
## have been sorted differently.
(
clusters_around_cells,
orphan_cell_indices,
ambiguous_cell_indices,
) = lu.cell_id_state_map(clusters_out, cell_count)
target_cells = []
for ix, cell in enumerate(raw_cells):
new_cell = {
"id": ix,
"rawcell_id": ix,
"label": "None",
"bbox": cell["bbox"],
"text": cell["text"],
}
for cluster_index in clusters_around_cells[
ix
]: # By previous analysis, this is always 1 cluster.
new_cell["label"] = clusters_out[cluster_index]["type"]
target_cells.append(new_cell)
# _log.debug("New label of cell " + str(ix) + " is " + str(new_cell["label"]))
cells_out = target_cells
## -------------------------------
## Sort clusters into reasonable reading order, and sort the cells inside each cluster
_log.debug("---- 5. Sort clusters in reading order ------")
sorted_clusters = lu.produce_reading_order(
clusters_out, "raw_cell_ids", "raw_cell_ids", True
)
clusters_out = sorted_clusters
# end_time = timer()
_log.debug("---- End of postprocessing function ------")
end_time = time.time() - start_time
_log.debug(f"Finished post processing in seconds={end_time:.3f}")
cells_out_new = [
Cell(
id=c["id"], # type: ignore
bbox=BoundingBox.from_tuple(
coord=c["bbox"], origin=CoordOrigin.BOTTOMLEFT # type: ignore
).to_top_left_origin(page_height),
text=c["text"], # type: ignore
) )
for c in cells_out out_path.mkdir(parents=True, exist_ok=True)
] out_file = out_path / f"{mode_prefix}_layout_page_{page.page_no:05}.png"
combined_image.save(str(out_file), format="png")
del cells_out
clusters_out_new = []
for c in clusters_out:
cluster_cells = [
ccell for ccell in cells_out_new if ccell.id in c["cell_ids"] # type: ignore
]
c_new = Cluster(
id=c["id"], # type: ignore
bbox=BoundingBox.from_tuple(
coord=c["bbox"], origin=CoordOrigin.BOTTOMLEFT # type: ignore
).to_top_left_origin(page_height),
confidence=c["confidence"], # type: ignore
label=DocItemLabel(c["type"]),
cells=cluster_cells,
)
clusters_out_new.append(c_new)
return clusters_out_new, cells_out_new
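# Minimal sketch of the origin flip that `to_top_left_origin(page_height)` performs
# above, assuming the usual convention that y grows upward in BOTTOMLEFT
# coordinates and downward in TOPLEFT ones (page_height value is hypothetical):
def _to_top_left_y(y: float, page_height: float) -> float:
    # e.g. _to_top_left_y(750.0, page_height=800.0) == 50.0
    return page_height - y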
    def __call__(
        self, conv_res: ConversionResult, page_batch: Iterable[Page]

@@ -316,66 +205,26 @@ class LayoutModel(BasePageModel):
                    )
                    clusters.append(cluster)

-                # Map cells to clusters
-                # TODO: Remove, postprocess should take care of it anyway.
-                for cell in page.cells:
-                    for cluster in clusters:
-                        if not cell.bbox.area() > 0:
-                            overlap_frac = 0.0
-                        else:
-                            overlap_frac = (
-                                cell.bbox.intersection_area_with(cluster.bbox)
-                                / cell.bbox.area()
-                            )
-                        if overlap_frac > 0.5:
-                            cluster.cells.append(cell)
-
-                # Pre-sort clusters
-                # clusters = self.sort_clusters_by_cell_order(clusters)
-
-                # DEBUG code:
-                def draw_clusters_and_cells(show: bool = False):
-                    image = copy.deepcopy(page.image)
-                    if image is not None:
-                        draw = ImageDraw.Draw(image)
-                        for c in clusters:
-                            x0, y0, x1, y1 = c.bbox.as_tuple()
-                            draw.rectangle([(x0, y0), (x1, y1)], outline="green")
-                            cell_color = (
-                                random.randint(30, 140),
-                                random.randint(30, 140),
-                                random.randint(30, 140),
-                            )
-                            for tc in c.cells:  # [:1]:
-                                x0, y0, x1, y1 = tc.bbox.as_tuple()
-                                draw.rectangle(
-                                    [(x0, y0), (x1, y1)], outline=cell_color
-                                )
-                        if show:
-                            image.show()
-                        else:
-                            out_path: Path = (
-                                Path(settings.debug.debug_output_path)
-                                / f"debug_{conv_res.input.file.stem}"
-                            )
-                            out_path.mkdir(parents=True, exist_ok=True)
-                            out_file = (
-                                out_path / f"layout_page_{page.page_no:05}.png"
-                            )
-                            image.save(str(out_file), format="png")
-
-                # draw_clusters_and_cells()
-
-                clusters, page.cells = self.postprocess(
-                    clusters, page.cells, page.size.height
-                )
-
-                page.predictions.layout = LayoutPrediction(clusters=clusters)
-
-                if settings.debug.visualize_layout:
-                    draw_clusters_and_cells()
+                if settings.debug.visualize_raw_layout:
+                    self.draw_clusters_and_cells_side_by_side(
+                        conv_res, page, clusters, mode_prefix="raw"
+                    )
+
+                # Apply postprocessing
+                processed_clusters, processed_cells = LayoutPostprocessor(
+                    page.cells, clusters, page.size
+                ).postprocess()
+                # processed_clusters, processed_cells = clusters, page.cells
+
+                page.cells = processed_cells
+                page.predictions.layout = LayoutPrediction(
+                    clusters=processed_clusters
+                )
+
+                if settings.debug.visualize_layout:
+                    self.draw_clusters_and_cells_side_by_side(
+                        conv_res, page, processed_clusters, mode_prefix="postprocessed"
+                    )

                yield page

View File

@@ -6,6 +6,7 @@ from pydantic import BaseModel
from docling.datamodel.base_models import (
    AssembledUnit,
+    ContainerElement,
    FigureElement,
    Page,
    PageElement,
@@ -94,7 +95,7 @@ class PageAssembleModel(BasePageModel):
                        headers.append(text_el)
                    else:
                        body.append(text_el)
-                elif cluster.label == LayoutModel.TABLE_LABEL:
+                elif cluster.label in LayoutModel.TABLE_LABELS:
                    tbl = None
                    if page.predictions.tablestructure:
                        tbl = page.predictions.tablestructure.table_map.get(
@@ -159,6 +160,15 @@ class PageAssembleModel(BasePageModel):
                        )
                        elements.append(equation)
                        body.append(equation)
+                elif cluster.label in LayoutModel.CONTAINER_LABELS:
+                    container_el = ContainerElement(
+                        label=cluster.label,
+                        id=cluster.id,
+                        page_no=page.page_no,
+                        cluster=cluster,
+                    )
+                    elements.append(container_el)
+                    body.append(container_el)

            page.assembled = AssembledUnit(
                elements=elements, headers=headers, body=body

View File

@@ -66,19 +66,43 @@ class TableStructureModel(BasePageModel):
        show: bool = False,
    ):
        assert page._backend is not None
+        assert page.size is not None

        image = (
            page._backend.get_page_image()
        )  # make new image to avoid drawing on the saved ones

+        scale_x = image.width / page.size.width
+        scale_y = image.height / page.size.height
+
        draw = ImageDraw.Draw(image)

        for table_element in tbl_list:
            x0, y0, x1, y1 = table_element.cluster.bbox.as_tuple()
+            y0 *= scale_y
+            y1 *= scale_y
+            x0 *= scale_x
+            x1 *= scale_x
            draw.rectangle([(x0, y0), (x1, y1)], outline="red")

+            for cell in table_element.cluster.cells:
+                x0, y0, x1, y1 = cell.bbox.as_tuple()
+                x0 *= scale_x
+                x1 *= scale_x
+                y0 *= scale_y
+                y1 *= scale_y
+                draw.rectangle([(x0, y0), (x1, y1)], outline="green")
+
            for tc in table_element.table_cells:
                if tc.bbox is not None:
                    x0, y0, x1, y1 = tc.bbox.as_tuple()
+                    x0 *= scale_x
+                    x1 *= scale_x
+                    y0 *= scale_y
+                    y1 *= scale_y
                    if tc.column_header:
                        width = 3
                    else:
@@ -89,7 +113,6 @@ class TableStructureModel(BasePageModel):
                        text=f"{tc.start_row_offset_idx}, {tc.start_col_offset_idx}",
                        fill="black",
                    )
-
        if show:
            image.show()
        else:
@@ -135,47 +158,40 @@ class TableStructureModel(BasePageModel):
                ],
            )
            for cluster in page.predictions.layout.clusters
-            if cluster.label == DocItemLabel.TABLE
+            if cluster.label
+            in [DocItemLabel.TABLE, DocItemLabel.DOCUMENT_INDEX]
        ]
        if not len(in_tables):
            yield page
            continue

-        tokens = []
-        for c in page.cells:
-            for cluster, _ in in_tables:
-                if c.bbox.area() > 0:
-                    if (
-                        c.bbox.intersection_area_with(cluster.bbox)
-                        / c.bbox.area()
-                        > 0.2
-                    ):
-                        # Only allow non-empty strings (spaces) into the cells of a table
-                        if len(c.text.strip()) > 0:
-                            new_cell = copy.deepcopy(c)
-                            new_cell.bbox = new_cell.bbox.scaled(
-                                scale=self.scale
-                            )
-                            tokens.append(new_cell.model_dump())
        page_input = {
-            "tokens": tokens,
            "width": page.size.width * self.scale,
            "height": page.size.height * self.scale,
+            "image": numpy.asarray(page.get_image(scale=self.scale)),
        }
-        page_input["image"] = numpy.asarray(
-            page.get_image(scale=self.scale)
-        )

        table_clusters, table_bboxes = zip(*in_tables)

        if len(table_bboxes):
-            tf_output = self.tf_predictor.multi_table_predict(
-                page_input, table_bboxes, do_matching=self.do_cell_matching
-            )
-
-            for table_cluster, table_out in zip(table_clusters, tf_output):
+            for table_cluster, tbl_box in in_tables:
+                tokens = []
+                for c in table_cluster.cells:
+                    # Only allow non-empty strings (spaces) into the cells of a table
+                    if len(c.text.strip()) > 0:
+                        new_cell = copy.deepcopy(c)
+                        new_cell.bbox = new_cell.bbox.scaled(
+                            scale=self.scale
+                        )
+                        tokens.append(new_cell.model_dump())
+                page_input["tokens"] = tokens
+
+                tf_output = self.tf_predictor.multi_table_predict(
+                    page_input, [tbl_box], do_matching=self.do_cell_matching
+                )
+                table_out = tf_output[0]

                table_cells = []
                for element in table_out["tf_responses"]:
@@ -208,7 +224,7 @@ class TableStructureModel(BasePageModel):
                        id=table_cluster.id,
                        page_no=page.page_no,
                        cluster=table_cluster,
-                        label=DocItemLabel.TABLE,
+                        label=table_cluster.label,
                    )
                    page.predictions.tablestructure.table_map[

View File

@@ -168,7 +168,9 @@ class PaginatedPipeline(BasePipeline):  # TODO this is a bad name.
        except Exception as e:
            conv_res.status = ConversionStatus.FAILURE
-            trace = "\n".join(traceback.format_exception(e))
+            trace = "\n".join(
+                traceback.format_exception(type(e), e, e.__traceback__)
+            )
            _log.warning(
                f"Encountered an error during conversion of document {conv_res.input.document_hash}:\n"
                f"{trace}"

View File

@@ -169,6 +169,8 @@ def to_docling_document(doc_glm, update_name_label=False) -> DoclingDocument:
            current_list = None
            text = ""
            caption_refs = []
+            item_label = DocItemLabel(pelem["name"])

            for caption in obj["captions"]:
                text += caption["text"]
@@ -254,12 +256,18 @@ def to_docling_document(doc_glm, update_name_label=False) -> DoclingDocument:
                ),
            )
-            tbl = doc.add_table(data=tbl_data, prov=prov)
+            tbl = doc.add_table(data=tbl_data, prov=prov, label=item_label)

            tbl.captions.extend(caption_refs)
-        elif ptype in ["form", "key_value_region"]:
+        elif ptype in [DocItemLabel.FORM.value, DocItemLabel.KEY_VALUE_REGION.value]:
            label = DocItemLabel(ptype)
-            container_el = doc.add_group(label=GroupLabel.UNSPECIFIED, name=label)
+            group_label = GroupLabel.UNSPECIFIED
+            if label == DocItemLabel.FORM:
+                group_label = GroupLabel.FORM_AREA
+            elif label == DocItemLabel.KEY_VALUE_REGION:
+                group_label = GroupLabel.KEY_VALUE_AREA
+            container_el = doc.add_group(label=group_label)

            _add_child_elements(container_el, doc, obj, pelem)

View File

@@ -0,0 +1,666 @@
import bisect
import logging
import sys
from collections import defaultdict
from typing import Dict, List, Set, Tuple
from docling_core.types.doc import DocItemLabel, Size
from rtree import index
from docling.datamodel.base_models import BoundingBox, Cell, Cluster, OcrCell
_log = logging.getLogger(__name__)
class UnionFind:
"""Efficient Union-Find data structure for grouping elements."""
def __init__(self, elements):
self.parent = {elem: elem for elem in elements}
self.rank = {elem: 0 for elem in elements}
def find(self, x):
if self.parent[x] != x:
self.parent[x] = self.find(self.parent[x]) # Path compression
return self.parent[x]
def union(self, x, y):
root_x, root_y = self.find(x), self.find(y)
if root_x == root_y:
return
if self.rank[root_x] > self.rank[root_y]:
self.parent[root_y] = root_x
elif self.rank[root_x] < self.rank[root_y]:
self.parent[root_x] = root_y
else:
self.parent[root_y] = root_x
self.rank[root_x] += 1
def get_groups(self) -> Dict[int, List[int]]:
"""Returns groups as {root: [elements]}."""
groups = defaultdict(list)
for elem in self.parent:
groups[self.find(elem)].append(elem)
return groups
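# Minimal usage sketch for UnionFind, with hypothetical element ids: merging
# the pairs (1, 2) and (2, 3) leaves two groups, {1, 2, 3} and {4}.
def _union_find_demo() -> None:
    uf = UnionFind([1, 2, 3, 4])
    uf.union(1, 2)
    uf.union(2, 3)
    groups = uf.get_groups()  # {root: [members]}, e.g. {1: [1, 2, 3], 4: [4]}
    assert sorted(len(g) for g in groups.values()) == [1, 3]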
class SpatialClusterIndex:
"""Efficient spatial indexing for clusters using R-tree and interval trees."""
def __init__(self, clusters: List[Cluster]):
p = index.Property()
p.dimension = 2
self.spatial_index = index.Index(properties=p)
self.x_intervals = IntervalTree()
self.y_intervals = IntervalTree()
self.clusters_by_id: Dict[int, Cluster] = {}
for cluster in clusters:
self.add_cluster(cluster)
def add_cluster(self, cluster: Cluster):
bbox = cluster.bbox
self.spatial_index.insert(cluster.id, bbox.as_tuple())
self.x_intervals.insert(bbox.l, bbox.r, cluster.id)
self.y_intervals.insert(bbox.t, bbox.b, cluster.id)
self.clusters_by_id[cluster.id] = cluster
def remove_cluster(self, cluster: Cluster):
self.spatial_index.delete(cluster.id, cluster.bbox.as_tuple())
del self.clusters_by_id[cluster.id]
def find_candidates(self, bbox: BoundingBox) -> Set[int]:
"""Find potential overlapping cluster IDs using all indexes."""
spatial = set(self.spatial_index.intersection(bbox.as_tuple()))
x_candidates = self.x_intervals.find_containing(
bbox.l
) | self.x_intervals.find_containing(bbox.r)
y_candidates = self.y_intervals.find_containing(
bbox.t
) | self.y_intervals.find_containing(bbox.b)
return spatial.union(x_candidates).union(y_candidates)
def check_overlap(
self,
bbox1: BoundingBox,
bbox2: BoundingBox,
overlap_threshold: float,
containment_threshold: float,
) -> bool:
"""Check if two bboxes overlap sufficiently."""
area1, area2 = bbox1.area(), bbox2.area()
if area1 <= 0 or area2 <= 0:
return False
overlap_area = bbox1.intersection_area_with(bbox2)
if overlap_area <= 0:
return False
iou = overlap_area / (area1 + area2 - overlap_area)
containment1 = overlap_area / area1
containment2 = overlap_area / area2
return (
iou > overlap_threshold
or containment1 > containment_threshold
or containment2 > containment_threshold
)
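# Worked example for check_overlap with made-up boxes: two 10x10 boxes shifted
# by 5 in x share a 5x10 = 50 intersection, so IoU = 50 / 150 ≈ 0.33 and the
# containment is 0.5 on both sides; with thresholds of 0.8 they do not count
# as overlapping.
def _check_overlap_demo() -> bool:
    idx = SpatialClusterIndex([])
    box_a = BoundingBox(l=0, t=0, r=10, b=10)
    box_b = BoundingBox(l=5, t=0, r=15, b=10)
    return idx.check_overlap(
        box_a, box_b, overlap_threshold=0.8, containment_threshold=0.8
    )  # False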
class Interval:
"""Helper class for sortable intervals."""
def __init__(self, min_val: float, max_val: float, id: int):
self.min_val = min_val
self.max_val = max_val
self.id = id
def __lt__(self, other):
if isinstance(other, Interval):
return self.min_val < other.min_val
return self.min_val < other
class IntervalTree:
"""Memory-efficient interval tree for 1D overlap queries."""
def __init__(self):
self.intervals: List[Interval] = [] # Sorted by min_val
def insert(self, min_val: float, max_val: float, id: int):
interval = Interval(min_val, max_val, id)
bisect.insort(self.intervals, interval)
def find_containing(self, point: float) -> Set[int]:
"""Find all intervals containing the point."""
pos = bisect.bisect_left(self.intervals, point)
result = set()
# Check intervals starting before point
for interval in reversed(self.intervals[:pos]):
if interval.min_val <= point <= interval.max_val:
result.add(interval.id)
else:
break
# Check intervals starting at/after point
for interval in self.intervals[pos:]:
if point <= interval.max_val:
if interval.min_val <= point:
result.add(interval.id)
else:
break
return result
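# Small sketch of the IntervalTree behaviour, using hypothetical ids: after
# inserting [0, 10] as id 1 and [5, 20] as id 2, a point query at 7 reports
# both intervals, while 15 only falls inside the second one.
def _interval_tree_demo() -> None:
    tree = IntervalTree()
    tree.insert(0.0, 10.0, 1)
    tree.insert(5.0, 20.0, 2)
    assert tree.find_containing(7.0) == {1, 2}
    assert tree.find_containing(15.0) == {2}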
class LayoutPostprocessor:
"""Postprocesses layout predictions by cleaning up clusters and mapping cells."""
# Cluster type-specific parameters for overlap resolution
OVERLAP_PARAMS = {
"regular": {"area_threshold": 1.3, "conf_threshold": 0.05},
"picture": {"area_threshold": 2.0, "conf_threshold": 0.3},
"wrapper": {"area_threshold": 2.0, "conf_threshold": 0.2},
}
WRAPPER_TYPES = {
DocItemLabel.FORM,
DocItemLabel.KEY_VALUE_REGION,
DocItemLabel.TABLE,
DocItemLabel.DOCUMENT_INDEX,
}
SPECIAL_TYPES = WRAPPER_TYPES.union({DocItemLabel.PICTURE})
CONFIDENCE_THRESHOLDS = {
DocItemLabel.CAPTION: 0.5,
DocItemLabel.FOOTNOTE: 0.5,
DocItemLabel.FORMULA: 0.5,
DocItemLabel.LIST_ITEM: 0.5,
DocItemLabel.PAGE_FOOTER: 0.5,
DocItemLabel.PAGE_HEADER: 0.5,
DocItemLabel.PICTURE: 0.5,
DocItemLabel.SECTION_HEADER: 0.45,
DocItemLabel.TABLE: 0.5,
DocItemLabel.TEXT: 0.5, # 0.45,
DocItemLabel.TITLE: 0.45,
DocItemLabel.CODE: 0.45,
DocItemLabel.CHECKBOX_SELECTED: 0.45,
DocItemLabel.CHECKBOX_UNSELECTED: 0.45,
DocItemLabel.FORM: 0.45,
DocItemLabel.KEY_VALUE_REGION: 0.45,
DocItemLabel.DOCUMENT_INDEX: 0.45,
}
LABEL_REMAPPING = {
# DocItemLabel.DOCUMENT_INDEX: DocItemLabel.TABLE,
DocItemLabel.TITLE: DocItemLabel.SECTION_HEADER,
}
def __init__(self, cells: List[Cell], clusters: List[Cluster], page_size: Size):
"""Initialize processor with cells and clusters."""
"""Initialize processor with cells and spatial indices."""
self.cells = cells
self.page_size = page_size
self.regular_clusters = [
c for c in clusters if c.label not in self.SPECIAL_TYPES
]
self.special_clusters = [c for c in clusters if c.label in self.SPECIAL_TYPES]
# Build spatial indices once
self.regular_index = SpatialClusterIndex(self.regular_clusters)
self.picture_index = SpatialClusterIndex(
[c for c in self.special_clusters if c.label == DocItemLabel.PICTURE]
)
self.wrapper_index = SpatialClusterIndex(
[c for c in self.special_clusters if c.label in self.WRAPPER_TYPES]
)
def postprocess(self) -> Tuple[List[Cluster], List[Cell]]:
"""Main processing pipeline."""
self.regular_clusters = self._process_regular_clusters()
self.special_clusters = self._process_special_clusters()
# Remove regular clusters that are included in wrappers
contained_ids = {
child.id
for wrapper in self.special_clusters
if wrapper.label in self.SPECIAL_TYPES
for child in wrapper.children
}
self.regular_clusters = [
c for c in self.regular_clusters if c.id not in contained_ids
]
# Combine and sort final clusters
final_clusters = self._sort_clusters(
self.regular_clusters + self.special_clusters, mode="id"
)
for cluster in final_clusters:
cluster.cells = self._sort_cells(cluster.cells)
# Also sort cells in children if any
for child in cluster.children:
child.cells = self._sort_cells(child.cells)
return final_clusters, self.cells
def _process_regular_clusters(self) -> List[Cluster]:
"""Process regular clusters with iterative refinement."""
clusters = [
c
for c in self.regular_clusters
if c.confidence >= self.CONFIDENCE_THRESHOLDS[c.label]
]
# Apply label remapping
for cluster in clusters:
if cluster.label in self.LABEL_REMAPPING:
cluster.label = self.LABEL_REMAPPING[cluster.label]
# Initial cell assignment
clusters = self._assign_cells_to_clusters(clusters)
# Remove clusters with no cells
clusters = [cluster for cluster in clusters if cluster.cells]
# Handle orphaned cells
unassigned = self._find_unassigned_cells(clusters)
if unassigned:
next_id = max((c.id for c in clusters), default=0) + 1
orphan_clusters = []
for i, cell in enumerate(unassigned):
conf = 1.0
if isinstance(cell, OcrCell):
conf = cell.confidence
orphan_clusters.append(
Cluster(
id=next_id + i,
label=DocItemLabel.TEXT,
bbox=cell.bbox,
confidence=conf,
cells=[cell],
)
)
clusters.extend(orphan_clusters)
# Iterative refinement
prev_count = len(clusters) + 1
for _ in range(3): # Maximum 3 iterations
if prev_count == len(clusters):
break
prev_count = len(clusters)
clusters = self._adjust_cluster_bboxes(clusters)
clusters = self._remove_overlapping_clusters(clusters, "regular")
return clusters
def _process_special_clusters(self) -> List[Cluster]:
special_clusters = [
c
for c in self.special_clusters
if c.confidence >= self.CONFIDENCE_THRESHOLDS[c.label]
]
special_clusters = self._handle_cross_type_overlaps(special_clusters)
# Calculate page area from known page size
page_area = self.page_size.width * self.page_size.height
if page_area > 0:
# Filter out full-page pictures
special_clusters = [
cluster
for cluster in special_clusters
if not (
cluster.label == DocItemLabel.PICTURE
and cluster.bbox.area() / page_area > 0.90
)
]
for special in special_clusters:
contained = []
for cluster in self.regular_clusters:
overlap = cluster.bbox.intersection_area_with(special.bbox)
if overlap > 0:
containment = overlap / cluster.bbox.area()
if containment > 0.8:
contained.append(cluster)
if contained:
# Sort contained clusters by minimum cell ID:
contained = self._sort_clusters(contained, mode="id")
special.children = contained
# Adjust bbox only for Form and Key-Value-Region, not Table or Picture
if special.label in [DocItemLabel.FORM, DocItemLabel.KEY_VALUE_REGION]:
special.bbox = BoundingBox(
l=min(c.bbox.l for c in contained),
t=min(c.bbox.t for c in contained),
r=max(c.bbox.r for c in contained),
b=max(c.bbox.b for c in contained),
)
# Collect all cells from children
all_cells = []
for child in contained:
all_cells.extend(child.cells)
special.cells = self._deduplicate_cells(all_cells)
special.cells = self._sort_cells(special.cells)
picture_clusters = [
c for c in special_clusters if c.label == DocItemLabel.PICTURE
]
picture_clusters = self._remove_overlapping_clusters(
picture_clusters, "picture"
)
wrapper_clusters = [
c for c in special_clusters if c.label in self.WRAPPER_TYPES
]
wrapper_clusters = self._remove_overlapping_clusters(
wrapper_clusters, "wrapper"
)
return picture_clusters + wrapper_clusters
def _handle_cross_type_overlaps(self, special_clusters) -> List[Cluster]:
"""Handle overlaps between regular and wrapper clusters before child assignment.
In particular, KEY_VALUE_REGION proposals that are almost identical to a TABLE
should be removed.
"""
wrappers_to_remove = set()
for wrapper in special_clusters:
if wrapper.label not in self.WRAPPER_TYPES:
            continue  # only wrapper types are considered here
for regular in self.regular_clusters:
if regular.label == DocItemLabel.TABLE:
# Calculate overlap
overlap = regular.bbox.intersection_area_with(wrapper.bbox)
wrapper_area = wrapper.bbox.area()
overlap_ratio = overlap / wrapper_area
conf_diff = wrapper.confidence - regular.confidence
# If wrapper is mostly overlapping with a TABLE, remove the wrapper
if (
overlap_ratio > 0.9 and conf_diff < 0.1
                ):  # wrapper overlaps the TABLE by >90% without being clearly more confident
wrappers_to_remove.add(wrapper.id)
break
# Filter out the identified wrappers
special_clusters = [
cluster
for cluster in special_clusters
if cluster.id not in wrappers_to_remove
]
return special_clusters
def _should_prefer_cluster(
self, candidate: Cluster, other: Cluster, params: dict
) -> bool:
"""Determine if candidate cluster should be preferred over other cluster based on rules.
Returns True if candidate should be preferred, False if not."""
# Rule 1: LIST_ITEM vs TEXT
if (
candidate.label == DocItemLabel.LIST_ITEM
and other.label == DocItemLabel.TEXT
):
# Check if areas are similar (within 20% of each other)
area_ratio = candidate.bbox.area() / other.bbox.area()
area_similarity = abs(1 - area_ratio) < 0.2
if area_similarity:
return True
# Rule 2: CODE vs others
if candidate.label == DocItemLabel.CODE:
# Calculate how much of the other cluster is contained within the CODE cluster
overlap = other.bbox.intersection_area_with(candidate.bbox)
containment = overlap / other.bbox.area()
if containment > 0.8: # other is 80% contained within CODE
return True
# If no label-based rules matched, fall back to area/confidence thresholds
area_ratio = candidate.bbox.area() / other.bbox.area()
conf_diff = other.confidence - candidate.confidence
if (
area_ratio <= params["area_threshold"]
and conf_diff > params["conf_threshold"]
):
return False
return True # Default to keeping candidate if no rules triggered rejection
def _select_best_cluster_from_group(
self,
group_clusters: List[Cluster],
params: dict,
) -> Cluster:
"""Select best cluster from a group of overlapping clusters based on all rules."""
current_best = None
for candidate in group_clusters:
should_select = True
for other in group_clusters:
if other == candidate:
continue
if not self._should_prefer_cluster(candidate, other, params):
should_select = False
break
if should_select:
if current_best is None:
current_best = candidate
else:
# If both clusters pass rules, prefer the larger one unless confidence differs significantly
if (
candidate.bbox.area() > current_best.bbox.area()
and current_best.confidence - candidate.confidence
<= params["conf_threshold"]
):
current_best = candidate
return current_best if current_best else group_clusters[0]
def _remove_overlapping_clusters(
self,
clusters: List[Cluster],
cluster_type: str,
overlap_threshold: float = 0.8,
containment_threshold: float = 0.8,
) -> List[Cluster]:
if not clusters:
return []
spatial_index = (
self.regular_index
if cluster_type == "regular"
else self.picture_index if cluster_type == "picture" else self.wrapper_index
)
# Map of currently valid clusters
valid_clusters = {c.id: c for c in clusters}
uf = UnionFind(valid_clusters.keys())
params = self.OVERLAP_PARAMS[cluster_type]
for cluster in clusters:
candidates = spatial_index.find_candidates(cluster.bbox)
candidates &= valid_clusters.keys() # Only keep existing candidates
candidates.discard(cluster.id)
for other_id in candidates:
if spatial_index.check_overlap(
cluster.bbox,
valid_clusters[other_id].bbox,
overlap_threshold,
containment_threshold,
):
uf.union(cluster.id, other_id)
result = []
for group in uf.get_groups().values():
if len(group) == 1:
result.append(valid_clusters[group[0]])
continue
group_clusters = [valid_clusters[cid] for cid in group]
best = self._select_best_cluster_from_group(group_clusters, params)
# Simple cell merging - no special cases
for cluster in group_clusters:
if cluster != best:
best.cells.extend(cluster.cells)
best.cells = self._deduplicate_cells(best.cells)
best.cells = self._sort_cells(best.cells)
result.append(best)
return result
def _select_best_cluster(
self,
clusters: List[Cluster],
area_threshold: float,
conf_threshold: float,
) -> Cluster:
"""Iteratively select best cluster based on area and confidence thresholds."""
current_best = None
for candidate in clusters:
should_select = True
for other in clusters:
if other == candidate:
continue
area_ratio = candidate.bbox.area() / other.bbox.area()
conf_diff = other.confidence - candidate.confidence
if area_ratio <= area_threshold and conf_diff > conf_threshold:
should_select = False
break
if should_select:
if current_best is None or (
candidate.bbox.area() > current_best.bbox.area()
and current_best.confidence - candidate.confidence <= conf_threshold
):
current_best = candidate
return current_best if current_best else clusters[0]
def _deduplicate_cells(self, cells: List[Cell]) -> List[Cell]:
"""Ensure each cell appears only once, maintaining order of first appearance."""
seen_ids = set()
unique_cells = []
for cell in cells:
if cell.id not in seen_ids:
seen_ids.add(cell.id)
unique_cells.append(cell)
return unique_cells
def _assign_cells_to_clusters(
self, clusters: List[Cluster], min_overlap: float = 0.2
) -> List[Cluster]:
"""Assign cells to best overlapping cluster."""
for cluster in clusters:
cluster.cells = []
for cell in self.cells:
if not cell.text.strip():
continue
best_overlap = min_overlap
best_cluster = None
for cluster in clusters:
if cell.bbox.area() <= 0:
continue
overlap = cell.bbox.intersection_area_with(cluster.bbox)
overlap_ratio = overlap / cell.bbox.area()
if overlap_ratio > best_overlap:
best_overlap = overlap_ratio
best_cluster = cluster
if best_cluster is not None:
best_cluster.cells.append(cell)
# Deduplicate cells in each cluster after assignment
for cluster in clusters:
cluster.cells = self._deduplicate_cells(cluster.cells)
return clusters
def _find_unassigned_cells(self, clusters: List[Cluster]) -> List[Cell]:
"""Find cells not assigned to any cluster."""
assigned = {cell.id for cluster in clusters for cell in cluster.cells}
return [
cell for cell in self.cells if cell.id not in assigned and cell.text.strip()
]
def _adjust_cluster_bboxes(self, clusters: List[Cluster]) -> List[Cluster]:
"""Adjust cluster bounding boxes to contain their cells."""
for cluster in clusters:
if not cluster.cells:
continue
cells_bbox = BoundingBox(
l=min(cell.bbox.l for cell in cluster.cells),
t=min(cell.bbox.t for cell in cluster.cells),
r=max(cell.bbox.r for cell in cluster.cells),
b=max(cell.bbox.b for cell in cluster.cells),
)
if cluster.label == DocItemLabel.TABLE:
# For tables, take union of current bbox and cells bbox
cluster.bbox = BoundingBox(
l=min(cluster.bbox.l, cells_bbox.l),
t=min(cluster.bbox.t, cells_bbox.t),
r=max(cluster.bbox.r, cells_bbox.r),
b=max(cluster.bbox.b, cells_bbox.b),
)
else:
cluster.bbox = cells_bbox
return clusters
def _sort_cells(self, cells: List[Cell]) -> List[Cell]:
"""Sort cells in native reading order."""
return sorted(cells, key=lambda c: (c.id))
def _sort_clusters(
self, clusters: List[Cluster], mode: str = "id"
) -> List[Cluster]:
"""Sort clusters in reading order (top-to-bottom, left-to-right)."""
if mode == "id": # sort in the order the cells are printed in the PDF.
return sorted(
clusters,
key=lambda cluster: (
(
min(cell.id for cell in cluster.cells)
if cluster.cells
else sys.maxsize
),
cluster.bbox.t,
cluster.bbox.l,
),
)
elif mode == "tblr": # Sort top-to-bottom, then left-to-right ("row first")
return sorted(
clusters, key=lambda cluster: (cluster.bbox.t, cluster.bbox.l)
)
elif mode == "lrtb": # Sort left-to-right, then top-to-bottom ("column first")
return sorted(
clusters, key=lambda cluster: (cluster.bbox.l, cluster.bbox.t)
)
else:
return clusters
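# Minimal usage sketch, mirroring how LayoutModel.__call__ invokes this class
# in the diff above (page cells, clusters, and size come from the pipeline):
def _postprocess_demo(cells, clusters, page_size):
    processed_clusters, processed_cells = LayoutPostprocessor(
        cells, clusters, page_size
    ).postprocess()
    return processed_clusters, processed_cells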

View File

@@ -1,812 +0,0 @@
import copy
import logging
import networkx as nx
from docling_core.types.doc import DocItemLabel
logger = logging.getLogger("layout_utils")
## -------------------------------
## Geometric helper functions
## The coordinates grow left to right, and bottom to top.
## The bounding box list elements 0 to 3 are x_left, y_bottom, x_right, y_top.
def area(bbox):
return (bbox[2] - bbox[0]) * (bbox[3] - bbox[1])
def contains(bbox_i, bbox_j):
## Returns True if bbox_i contains bbox_j, else False
return (
bbox_i[0] <= bbox_j[0]
and bbox_i[1] <= bbox_j[1]
and bbox_i[2] >= bbox_j[2]
and bbox_i[3] >= bbox_j[3]
)
def is_intersecting(bbox_i, bbox_j):
return not (
bbox_i[2] < bbox_j[0]
or bbox_i[0] > bbox_j[2]
or bbox_i[3] < bbox_j[1]
or bbox_i[1] > bbox_j[3]
)
def bb_iou(boxA, boxB):
# determine the (x, y)-coordinates of the intersection rectangle
xA = max(boxA[0], boxB[0])
yA = max(boxA[1], boxB[1])
xB = min(boxA[2], boxB[2])
yB = min(boxA[3], boxB[3])
# compute the area of intersection rectangle
interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1)
# compute the area of both the prediction and ground-truth
# rectangles
boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1)
boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1)
# compute the intersection over union by taking the intersection
# area and dividing it by the sum of prediction + ground-truth
    # areas minus the intersection area
iou = interArea / float(boxAArea + boxBArea - interArea)
# return the intersection over union value
return iou
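# Worked example with made-up boxes: A = [0, 0, 9, 9] and B = [5, 0, 14, 9]
# (the +1 terms above treat coordinates as inclusive pixel indices):
#   interArea = (9 - 5 + 1) * (9 - 0 + 1) = 50, each box area = 100,
#   iou = 50 / (100 + 100 - 50) = 1/3.
def _bb_iou_demo() -> float:
    return bb_iou([0, 0, 9, 9], [5, 0, 14, 9])  # ≈ 0.333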
def compute_intersection(bbox_i, bbox_j):
## Returns the size of the intersection area of the two boxes
if not is_intersecting(bbox_i, bbox_j):
return 0
## Determine the (x, y)-coordinates of the intersection rectangle:
xA = max(bbox_i[0], bbox_j[0])
yA = max(bbox_i[1], bbox_j[1])
xB = min(bbox_i[2], bbox_j[2])
yB = min(bbox_i[3], bbox_j[3])
## Compute the area of intersection rectangle:
interArea = (xB - xA) * (yB - yA)
if interArea < 0:
logger.debug("Warning: Negative intersection detected!")
return 0
return interArea
def surrounding(bbox_i, bbox_j):
## Computes minimal box that contains both input boxes
sbox = []
sbox.append(min(bbox_i[0], bbox_j[0]))
sbox.append(min(bbox_i[1], bbox_j[1]))
sbox.append(max(bbox_i[2], bbox_j[2]))
sbox.append(max(bbox_i[3], bbox_j[3]))
return sbox
def surrounding_list(bbox_list):
## Computes minimal box that contains all boxes in the input list
## The list should be non-empty, but just in case it's not:
if len(bbox_list) == 0:
sbox = [0, 0, 0, 0]
else:
sbox = []
sbox.append(min([bbox[0] for bbox in bbox_list]))
sbox.append(min([bbox[1] for bbox in bbox_list]))
sbox.append(max([bbox[2] for bbox in bbox_list]))
sbox.append(max([bbox[3] for bbox in bbox_list]))
return sbox
def vertical_overlap(bboxA, bboxB):
## bbox[1] is the lower bound, bbox[3] the upper bound (larger number)
if bboxB[3] < bboxA[1]: ## B below A
return False
elif bboxA[3] < bboxB[1]: ## A below B
return False
else:
return True
def vertical_overlap_fraction(bboxA, bboxB):
## Returns the vertical overlap as fraction of the lower bbox height.
## bbox[1] is the lower bound, bbox[3] the upper bound (larger number)
## Height 0 is permitted in the input.
heightA = bboxA[3] - bboxA[1]
heightB = bboxB[3] - bboxB[1]
min_height = min(heightA, heightB)
if bboxA[3] >= bboxB[3]: ## A starts higher or equal
if (
bboxA[1] <= bboxB[1]
): ## B is completely in A; this can include height of B = 0:
fraction = 1
else:
overlap = max(bboxB[3] - bboxA[1], 0)
fraction = overlap / max(min_height, 0.001)
else:
if (
bboxB[1] <= bboxA[1]
): ## A is completely in B; this can include height of A = 0:
fraction = 1
else:
overlap = max(bboxA[3] - bboxB[1], 0)
fraction = overlap / max(min_height, 0.001)
return fraction
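# Worked example with made-up boxes (bbox = [x0, y0, x1, y1], y grows upward):
# A spans y in [0, 10], B spans y in [5, 20]; the overlap is 10 - 5 = 5 and
# the smaller height is 10, so the fraction is 5 / 10 = 0.5.
def _vertical_overlap_demo() -> float:
    return vertical_overlap_fraction([0, 0, 1, 10], [0, 5, 1, 20])  # 0.5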
## -------------------------------
## Cluster-and-cell relations
def compute_enclosed_cells(
cluster_bbox, raw_cells, min_cell_intersection_with_cluster=0.2
):
cells_in_cluster = []
cells_in_cluster_int = []
for ix, cell in enumerate(raw_cells):
cell_bbox = cell["bbox"]
intersection = compute_intersection(cell_bbox, cluster_bbox)
frac_area = area(cell_bbox) * min_cell_intersection_with_cluster
if (
intersection > frac_area and frac_area > 0
): # intersect > certain fraction of cell
cells_in_cluster.append(ix)
cells_in_cluster_int.append(intersection)
elif contains(
cluster_bbox,
[cell_bbox[0] + 3, cell_bbox[1] + 3, cell_bbox[2] - 3, cell_bbox[3] - 3],
):
cells_in_cluster.append(ix)
return cells_in_cluster, cells_in_cluster_int
def find_clusters_around_cells(cell_count, clusters):
## Per raw cell, find to which clusters it belongs.
## Return list of these indices in the raw-cell order.
clusters_around_cells = [[] for _ in range(cell_count)]
for cl_ix, cluster in enumerate(clusters):
for ix in cluster["cell_ids"]:
clusters_around_cells[ix].append(cl_ix)
return clusters_around_cells
def find_cell_index(raw_ix, cell_array):
## "raw_ix" is a rawcell_id.
## "cell_array" has the structure of an (annotation) cells array.
## Returns index of cell in cell_array that has this rawcell_id.
for ix, cell in enumerate(cell_array):
if cell["rawcell_id"] == raw_ix:
return ix
def find_cell_indices(cluster, cell_array):
## "cluster" must have the structure as in a clusters array in a prediction,
## "cell_array" that of a cells array.
## Returns list of indices of cells in cell_array that have the rawcell_ids as in the cluster,
## in the order of the rawcell_ids.
result = []
for raw_ix in sorted(cluster["cell_ids"]):
## Find the cell with this rawcell_id (if any)
for ix, cell in enumerate(cell_array):
if cell["rawcell_id"] == raw_ix:
result.append(ix)
return result
def find_first_cell_index(cluster, cell_array):
## "cluster" must be a dict with key "cell_ids"; it can also be a line.
## "cell_array" has the structure of a cells array in an annotation.
## Returns index of cell in cell_array that has the lowest rawcell_id from the cluster.
result = [] ## We keep it a list as it can be empty (picture without text cells)
if len(cluster["cell_ids"]) == 0:
return result
raw_ix = min(cluster["cell_ids"])
## Find the cell with this rawcell_id (if any)
for ix, cell in enumerate(cell_array):
if cell["rawcell_id"] == raw_ix:
result.append(ix)
break ## One is enough; should be only one anyway.
if result == []:
logger.debug(
" Warning: Raw cell " + str(raw_ix) + " not found in annotation cells"
)
return result
## -------------------------------
## Cluster labels and text
def relabel_cluster(cluster, cl_ix, new_label, target_pred):
## "cluster" must have the structure as in a clusters array in a prediction,
## "cl_ix" is its index in target_pred,
## "new_label" is the intended new label,
## "target_pred" is the entire current target prediction.
## Sets label on the cluster itself, and on the cells in the target_pred.
## Returns new_label so that also the cl_label variable in the main code is easily set.
target_pred["clusters"][cl_ix]["type"] = new_label
cluster_target_cells = find_cell_indices(cluster, target_pred["cells"])
for ix in cluster_target_cells:
target_pred["cells"][ix]["label"] = new_label
return new_label
def find_cluster_text(cluster, raw_cells):
## "cluster" must be a dict with "cell_ids"; it can also be a line.
## "raw_cells" must have the format of item["raw"]["cells"]
## Returns the text of the cluster, with blanks between the cell contents
## (which seem to be words or phrases without starting or trailing blanks).
## Note that in formulas, this may give many more blanks than the original had.
cluster_text = ""
for raw_ix in sorted(cluster["cell_ids"]):
cluster_text = cluster_text + raw_cells[raw_ix]["text"] + " "
return cluster_text.rstrip()
def find_cluster_text_without_blanks(cluster, raw_cells):
## "cluster" must be a dict with "cell_ids"; it can also be a line.
## "raw_cells" must have the format of item["raw"]["cells"]
## Returns the text of the cluster, without blanks between the cell contents
## Interesting in formula analysis.
cluster_text = ""
for raw_ix in sorted(cluster["cell_ids"]):
cluster_text = cluster_text + raw_cells[raw_ix]["text"]
return cluster_text.rstrip()
## -------------------------------
## Clusters and lines
## (Most line-oriented functions are only needed in TextAnalysisGivenClusters,
## but this one is also needed in FormulaAnalysis)
def build_cluster_from_lines(lines, label, id):
## Lines must be a non-empty list of dicts (lines) with elements "cell_ids" and "bbox"
## (There is no condition that they are really geometrically lines)
## A cluster in standard format is returned with given label and id
local_lines = copy.deepcopy(
lines
) ## without this, it changes "lines" also outside this function
first_line = local_lines.pop(0)
cluster = {
"id": id,
"type": label,
"cell_ids": first_line["cell_ids"],
"bbox": first_line["bbox"],
"confidence": 0,
"created_by": "merged_cells",
}
confidence = 0
counter = 0
for line in local_lines:
new_cell_ids = cluster["cell_ids"] + line["cell_ids"]
cluster["cell_ids"] = new_cell_ids
cluster["bbox"] = surrounding(cluster["bbox"], line["bbox"])
counter += 1
confidence += line["confidence"]
confidence = confidence / counter
cluster["confidence"] = confidence
return cluster
## -------------------------------
## Reading order
def produce_reading_order(clusters, cluster_sort_type, cell_sort_type, sort_ids):
## In:
## Clusters: list as in predictions.
## cluster_sort_type: string, currently only "raw_cell_ids".
## cell_sort_type: string, currently only "raw_cell_ids".
## sort_ids: Boolean, whether the cluster ids should be adapted to their new position
## Out: Another clusters list, sorted according to the type.
logger.debug("---- Start cluster sorting ------")
if cell_sort_type == "raw_cell_ids":
for cl in clusters:
sorted_cell_ids = sorted(cl["cell_ids"])
cl["cell_ids"] = sorted_cell_ids
else:
logger.debug(
"Unknown cell_sort_type `"
+ cell_sort_type
+ "`, no cell sorting will happen."
)
if cluster_sort_type == "raw_cell_ids":
clusters_with_cells = [cl for cl in clusters if cl["cell_ids"] != []]
clusters_without_cells = [cl for cl in clusters if cl["cell_ids"] == []]
logger.debug(
"Clusters with cells: " + str([cl["id"] for cl in clusters_with_cells])
)
logger.debug(
" Their first cell ids: "
+ str([cl["cell_ids"][0] for cl in clusters_with_cells])
)
logger.debug(
"Clusters without cells: "
+ str([cl["id"] for cl in clusters_without_cells])
)
clusters_with_cells_sorted = sorted(
clusters_with_cells, key=lambda cluster: cluster["cell_ids"][0]
)
logger.debug(
" First cell ids after sorting: "
+ str([cl["cell_ids"][0] for cl in clusters_with_cells_sorted])
)
sorted_clusters = clusters_with_cells_sorted + clusters_without_cells
else:
logger.debug(
"Unknown cluster_sort_type: `"
+ cluster_sort_type
+ "`, no cluster sorting will happen."
)
if sort_ids:
for i, cl in enumerate(sorted_clusters):
cl["id"] = i
return sorted_clusters
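# Sketch of the sorting rule with two hypothetical clusters: the one whose
# first (lowest) raw cell id comes earlier in the PDF is read first, and the
# cluster ids are renumbered to match when sort_ids=True.
def _reading_order_demo():
    clusters = [
        {"id": 7, "cell_ids": [40, 41]},
        {"id": 3, "cell_ids": [2, 5]},
    ]
    out = produce_reading_order(clusters, "raw_cell_ids", "raw_cell_ids", True)
    return [cl["id"] for cl in out]  # [0, 1], with the [2, 5] cluster first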
## -------------------------------
## Line Splitting
def sort_cells_horizontal(line_cell_ids, raw_cells):
## "line_cells" should be a non-empty list of (raw) cell_ids
## "raw_cells" has the structure of item["raw"]["cells"].
## Sorts the cells in the line by x0 (left start).
new_line_cell_ids = sorted(
line_cell_ids, key=lambda cell_id: raw_cells[cell_id]["bbox"][0]
)
return new_line_cell_ids
def adapt_bboxes(raw_cells, clusters, orphan_cell_indices):
new_clusters = []
for ix, cluster in enumerate(clusters):
new_cluster = copy.deepcopy(cluster)
logger.debug(
"Treating cluster " + str(ix) + ", type " + str(new_cluster["type"])
)
logger.debug(" with cells: " + str(new_cluster["cell_ids"]))
if len(cluster["cell_ids"]) == 0 and cluster["type"] != DocItemLabel.PICTURE:
logger.debug(" Empty non-picture, removed")
continue ## Skip this former cluster, now without cells.
new_bbox = adapt_bbox(raw_cells, new_cluster, orphan_cell_indices)
new_cluster["bbox"] = new_bbox
new_clusters.append(new_cluster)
return new_clusters
def adapt_bbox(raw_cells, cluster, orphan_cell_indices):
if not (cluster["type"] in [DocItemLabel.TABLE, DocItemLabel.PICTURE]):
## A text-like cluster. The bbox only needs to be around the text cells:
logger.debug(" Initial bbox: " + str(cluster["bbox"]))
new_bbox = surrounding_list(
[raw_cells[cid]["bbox"] for cid in cluster["cell_ids"]]
)
logger.debug(" New bounding box:" + str(new_bbox))
if cluster["type"] == DocItemLabel.PICTURE:
## We only make the bbox completely comprise included text cells:
logger.debug(" Picture")
if len(cluster["cell_ids"]) != 0:
min_bbox = surrounding_list(
[raw_cells[cid]["bbox"] for cid in cluster["cell_ids"]]
)
logger.debug(" Minimum bbox: " + str(min_bbox))
logger.debug(" Initial bbox: " + str(cluster["bbox"]))
new_bbox = surrounding(min_bbox, cluster["bbox"])
logger.debug(" New bbox (initial and text cells): " + str(new_bbox))
else:
logger.debug(" without text cells, no change.")
new_bbox = cluster["bbox"]
else: ## A table
## At least we have to keep the included text cells, and we make the bbox completely comprise them
min_bbox = surrounding_list(
[raw_cells[cid]["bbox"] for cid in cluster["cell_ids"]]
)
logger.debug(" Minimum bbox: " + str(min_bbox))
logger.debug(" Initial bbox: " + str(cluster["bbox"]))
new_bbox = surrounding(min_bbox, cluster["bbox"])
logger.debug(" Possibly increased bbox: " + str(new_bbox))
## Now we look which non-belonging cells are covered.
## (To decrease dependencies, we don't make use of which cells we actually removed.)
## We don't worry about orphan cells, those could still be added to the table.
enclosed_cells = compute_enclosed_cells(
new_bbox, raw_cells, min_cell_intersection_with_cluster=0.3
)[0]
additional_cells = set(enclosed_cells) - set(cluster["cell_ids"])
logger.debug(
" Additional cells enclosed by Table bbox: " + str(additional_cells)
)
spurious_cells = additional_cells - set(orphan_cell_indices)
logger.debug(
" Spurious cells enclosed by Table bbox (additional minus orphans): "
+ str(spurious_cells)
)
if len(spurious_cells) == 0:
return new_bbox
## Else we want to keep as much as possible, e.g., grid lines, but not the spurious cells if we can.
## We initialize possible cuts with the current bbox.
left_cut = new_bbox[0]
right_cut = new_bbox[2]
upper_cut = new_bbox[3]
lower_cut = new_bbox[1]
for cell_ix in spurious_cells:
cell = raw_cells[cell_ix]
# logger.debug(" Spurious cell bbox: " + str(cell["bbox"]))
is_left = cell["bbox"][2] < min_bbox[0]
is_right = cell["bbox"][0] > min_bbox[2]
is_above = cell["bbox"][1] > min_bbox[3]
is_below = cell["bbox"][3] < min_bbox[1]
# logger.debug(" Left, right, above, below? " + str([is_left, is_right, is_above, is_below]))
if is_left:
if cell["bbox"][2] > left_cut:
## We move the left cut to exclude this cell:
left_cut = cell["bbox"][2]
if is_right:
if cell["bbox"][0] < right_cut:
## We move the right cut to exclude this cell:
right_cut = cell["bbox"][0]
if is_above:
if cell["bbox"][1] < upper_cut:
## We move the upper cut to exclude this cell:
upper_cut = cell["bbox"][1]
if is_below:
if cell["bbox"][3] > lower_cut:
## We move the lower cut to exclude this cell:
lower_cut = cell["bbox"][3]
# logger.debug(" Current bbox: " + str([left_cut, lower_cut, right_cut, upper_cut]))
new_bbox = [left_cut, lower_cut, right_cut, upper_cut]
logger.debug(" Final bbox: " + str(new_bbox))
return new_bbox
def remove_cluster_duplicates_by_conf(cluster_predictions, threshold=0.5):
DuplicateDeletedClusterIDs = []
for cluster_1 in cluster_predictions:
for cluster_2 in cluster_predictions:
if cluster_1["id"] != cluster_2["id"]:
            if cluster_1["confidence"] > cluster_2["confidence"]:
                if bb_iou(cluster_1["bbox"], cluster_2["bbox"]) > threshold:
DuplicateDeletedClusterIDs.append(cluster_2["id"])
elif contains(
cluster_1["bbox"],
[
cluster_2["bbox"][0] + 3,
cluster_2["bbox"][1] + 3,
cluster_2["bbox"][2] - 3,
cluster_2["bbox"][3] - 3,
],
):
DuplicateDeletedClusterIDs.append(cluster_2["id"])
DuplicateDeletedClusterIDs = list(set(DuplicateDeletedClusterIDs))
for cl_id in DuplicateDeletedClusterIDs:
for cluster in cluster_predictions:
if cl_id == cluster["id"]:
cluster_predictions.remove(cluster)
return cluster_predictions
# Assign orphan cells using low-confidence predictions (those below the regular assignment confidence)
def assign_orphans_with_low_conf_pred(
cluster_predictions, cluster_predictions_low, raw_cells, orphan_cell_indices
):
for orph_id in orphan_cell_indices:
cluster_chosen = {}
iou_thresh = 0.05
confidence = 0.05
# Loop over all predictions, and find the one with the highest IOU, and confidence
for cluster in cluster_predictions_low:
calc_iou = bb_iou(cluster["bbox"], raw_cells[orph_id]["bbox"])
cluster_area = (cluster["bbox"][3] - cluster["bbox"][1]) * (
cluster["bbox"][2] - cluster["bbox"][0]
)
cell_area = (
raw_cells[orph_id]["bbox"][3] - raw_cells[orph_id]["bbox"][1]
) * (raw_cells[orph_id]["bbox"][2] - raw_cells[orph_id]["bbox"][0])
if (
(iou_thresh < calc_iou)
and (cluster["confidence"] > confidence)
and (cell_area * 3 > cluster_area)
):
cluster_chosen = cluster
iou_thresh = calc_iou
confidence = cluster["confidence"]
# If a candidate is found, assign to it the PDF cell ids, and tag that it was created by this function for tracking
if iou_thresh != 0.05 and confidence != 0.05:
cluster_chosen["cell_ids"].append(orph_id)
cluster_chosen["created_by"] = "orph_low_conf"
cluster_predictions.append(cluster_chosen)
orphan_cell_indices.remove(orph_id)
return cluster_predictions, orphan_cell_indices
def remove_ambigous_pdf_cell_by_conf(cluster_predictions, raw_cells, amb_cell_idxs):
for amb_cell_id in amb_cell_idxs:
highest_conf = 0
highest_bbox_iou = 0
cluster_chosen = None
        problematic_clusters = []
        # Find clusters in question
        for cluster in cluster_predictions:
            if amb_cell_id in cluster["cell_ids"]:
                problematic_clusters.append(amb_cell_id)
# If the cell_id is in a cluster of high conf, and highest iou score, and smaller in area
bbox_iou_val = bb_iou(cluster["bbox"], raw_cells[amb_cell_id]["bbox"])
if (
cluster["confidence"] > highest_conf
and bbox_iou_val > highest_bbox_iou
):
cluster_chosen = cluster
highest_conf = cluster["confidence"]
highest_bbox_iou = bbox_iou_val
if cluster["id"] in problamatic_clusters:
problamatic_clusters.remove(cluster["id"])
# now remove the assigning of cell id from lower confidence, and threshold
for cluster in cluster_predictions:
for prob_amb_id in problamatic_clusters:
if prob_amb_id in cluster["cell_ids"]:
cluster["cell_ids"].remove(prob_amb_id)
amb_cell_idxs.remove(amb_cell_id)
return cluster_predictions, amb_cell_idxs
def ranges(nums):
# Find if consecutive numbers exist within pdf cells
# Used to remove line numbers for review manuscripts
nums = sorted(set(nums))
gaps = [[s, e] for s, e in zip(nums, nums[1:]) if s + 1 < e]
edges = iter(nums[:1] + sum(gaps, []) + nums[-1:])
return list(zip(edges, edges))
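# Example of the range detection used by set_orphan_as_text below, with
# made-up line numbers: consecutive runs collapse into (start, end) pairs.
def _ranges_demo():
    return ranges([2, 3, 4, 7, 8, 11])  # [(2, 4), (7, 8), (11, 11)]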
def set_orphan_as_text(
cluster_predictions, cluster_predictions_low, raw_cells, orphan_cell_indices
):
max_id = -1
figures = []
for cluster in cluster_predictions:
if cluster["type"] == DocItemLabel.PICTURE:
figures.append(cluster)
if cluster["id"] > max_id:
max_id = cluster["id"]
max_id += 1
lines_detector = False
content_of_orphans = []
for orph_id in orphan_cell_indices:
orph_cell = raw_cells[orph_id]
content_of_orphans.append(raw_cells[orph_id]["text"])
fil_content_of_orphans = []
for cell_content in content_of_orphans:
if cell_content.isnumeric():
try:
num = int(cell_content)
fil_content_of_orphans.append(num)
except ValueError: # ignore the cell
pass
# line_orphans = []
    # Check if there are more than 2 pdf orphan cells; if so, check whether the
    # orphan cells are numeric and form a consecutive series of numbers
    # (using the ranges function) to decide
if len(fil_content_of_orphans) > 2:
out_ranges = ranges(fil_content_of_orphans)
if len(out_ranges) > 1:
cnt_range = 0
for ranges_ in out_ranges:
if ranges_[0] != ranges_[1]:
                    # If a run covers more than 75 numbers (about half the line
                    # count of a review-manuscript page), decide that the page
                    # has line numbers to be ignored.
if len(list(range(ranges_[0], ranges_[1]))) > 75:
lines_detector = True
# line_orphans = line_orphans + list(range(ranges_[0], ranges_[1]))
for orph_id in orphan_cell_indices:
orph_cell = raw_cells[orph_id]
if bool(orph_cell["text"] and not orph_cell["text"].isspace()):
fig_flag = False
# Do not assign orphan cells if they are inside a figure
for fig in figures:
if contains(fig["bbox"], orph_cell["bbox"]):
fig_flag = True
# if fig_flag == False and raw_cells[orph_id]["text"] not in line_orphans:
if fig_flag == False and lines_detector == False:
# get class from low confidence detections if not set as text:
class_type = DocItemLabel.TEXT
for cluster in cluster_predictions_low:
intersection = compute_intersection(
orph_cell["bbox"], cluster["bbox"]
)
class_type = DocItemLabel.TEXT
if (
cluster["confidence"] > 0.1
and bb_iou(cluster["bbox"], orph_cell["bbox"]) > 0.4
):
class_type = cluster["type"]
elif contains(
cluster["bbox"],
[
orph_cell["bbox"][0] + 3,
orph_cell["bbox"][1] + 3,
orph_cell["bbox"][2] - 3,
orph_cell["bbox"][3] - 3,
],
):
class_type = cluster["type"]
elif intersection > area(orph_cell["bbox"]) * 0.2:
class_type = cluster["type"]
new_cluster = {
"id": max_id,
"bbox": orph_cell["bbox"],
"type": class_type,
"cell_ids": [orph_id],
"confidence": -1,
"created_by": "orphan_default",
}
max_id += 1
cluster_predictions.append(new_cluster)
return cluster_predictions, orphan_cell_indices
def merge_cells(cluster_predictions):
    # Using graph components, merge orphan clusters whose cells are touching or very close.
G = nx.Graph()
for cluster in cluster_predictions:
if cluster["created_by"] == "orphan_default":
G.add_node(cluster["id"])
for cluster_1 in cluster_predictions:
for cluster_2 in cluster_predictions:
if (
cluster_1["id"] != cluster_2["id"]
and cluster_2["created_by"] == "orphan_default"
and cluster_1["created_by"] == "orphan_default"
):
cl1 = copy.deepcopy(cluster_1["bbox"])
cl2 = copy.deepcopy(cluster_2["bbox"])
cl1[0] = cl1[0] - 2
cl1[1] = cl1[1] - 2
cl1[2] = cl1[2] + 2
cl1[3] = cl1[3] + 2
cl2[0] = cl2[0] - 2
cl2[1] = cl2[1] - 2
cl2[2] = cl2[2] + 2
cl2[3] = cl2[3] + 2
if is_intersecting(cl1, cl2):
G.add_edge(cluster_1["id"], cluster_2["id"])
component = sorted(map(sorted, nx.k_edge_components(G, k=1)))
max_id = -1
for cluster_1 in cluster_predictions:
if cluster_1["id"] > max_id:
max_id = cluster_1["id"]
for nodes in component:
if len(nodes) > 1:
max_id += 1
lines = []
for node in nodes:
for cluster in cluster_predictions:
if cluster["id"] == node:
lines.append(cluster)
cluster_predictions.remove(cluster)
new_merged_cluster = build_cluster_from_lines(
lines, DocItemLabel.TEXT, max_id
)
cluster_predictions.append(new_merged_cluster)
return cluster_predictions
def clean_up_clusters(
cluster_predictions,
raw_cells,
merge_cells=False,
img_table=False,
one_cell_table=False,
):
DuplicateDeletedClusterIDs = []
for cluster_1 in cluster_predictions:
for cluster_2 in cluster_predictions:
if cluster_1["id"] != cluster_2["id"]:
                # remove any artifacts created by merging clusters
if merge_cells == True:
if contains(
cluster_1["bbox"],
[
cluster_2["bbox"][0] + 3,
cluster_2["bbox"][1] + 3,
cluster_2["bbox"][2] - 3,
cluster_2["bbox"][3] - 3,
],
):
cluster_1["cell_ids"] = (
cluster_1["cell_ids"] + cluster_2["cell_ids"]
)
DuplicateDeletedClusterIDs.append(cluster_2["id"])
# remove clusters that might appear inside tables, or images (such as pdf cells in graphs)
elif img_table == True:
                    if cluster_1["type"] == DocItemLabel.TEXT and (
                        cluster_2["type"] == DocItemLabel.PICTURE
                        or cluster_2["type"] == DocItemLabel.TABLE
                    ):
if bb_iou(cluster_1["bbox"], cluster_2["bbox"]) > 0.5:
DuplicateDeletedClusterIDs.append(cluster_1["id"])
elif contains(
[
cluster_2["bbox"][0] - 3,
cluster_2["bbox"][1] - 3,
cluster_2["bbox"][2] + 3,
cluster_2["bbox"][3] + 3,
],
cluster_1["bbox"],
):
DuplicateDeletedClusterIDs.append(cluster_1["id"])
# remove tables that have one pdf cell
if one_cell_table == True:
if (
cluster_1["type"] == DocItemLabel.TABLE
and len(cluster_1["cell_ids"]) < 2
):
DuplicateDeletedClusterIDs.append(cluster_1["id"])
DuplicateDeletedClusterIDs = list(set(DuplicateDeletedClusterIDs))
for cl_id in DuplicateDeletedClusterIDs:
for cluster in cluster_predictions:
if cl_id == cluster["id"]:
cluster_predictions.remove(cluster)
return cluster_predictions
def assigning_cell_ids_to_clusters(clusters, raw_cells, threshold):
for cluster in clusters:
cells_in_cluster, _ = compute_enclosed_cells(
cluster["bbox"], raw_cells, min_cell_intersection_with_cluster=threshold
)
cluster["cell_ids"] = cells_in_cluster
## These cell_ids are ids of the raw cells.
## They are often, but not always, the same as the "id" or the index of the "cells" list in a prediction.
return clusters
# Creates a map of cell_id->cluster_id
def cell_id_state_map(clusters, cell_count):
clusters_around_cells = find_clusters_around_cells(cell_count, clusters)
orphan_cell_indices = [
ix for ix in range(cell_count) if len(clusters_around_cells[ix]) == 0
] # which cells are assigned no cluster?
ambiguous_cell_indices = [
ix for ix in range(cell_count) if len(clusters_around_cells[ix]) > 1
] # which cells are assigned > 1 clusters?
return clusters_around_cells, orphan_cell_indices, ambiguous_cell_indices
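# Minimal sketch of the three outputs, with two hypothetical clusters over
# four cells: cell 0 belongs to both clusters (ambiguous), cell 3 to none
# (orphan), and cells 1 and 2 are each assigned exactly once.
def _cell_id_state_map_demo():
    clusters = [
        {"cell_ids": [0, 1]},
        {"cell_ids": [0, 2]},
    ]
    around, orphans, ambiguous = cell_id_state_map(clusters, cell_count=4)
    assert orphans == [3] and ambiguous == [0]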

View File

@@ -4,7 +4,30 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "# Hybrid Chunking"
+    "# Hybrid chunking"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Hybrid chunking applies tokenization-aware refinements on top of document-based hierarchical chunking.\n",
"\n",
"For more details, see [here](../../concepts/chunking#hybrid-chunker)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
   ]
  },
  {
@@ -21,7 +44,7 @@
    }
   ],
   "source": [
-    "%pip install -qU 'docling-core[chunking]' sentence-transformers transformers lancedb"
+    "%pip install -qU docling transformers"
   ]
  },
  {
@@ -48,16 +71,12 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-    "## Chunking"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Notice how `tokenizer` and `embed_model` further below are single-sourced from `EMBED_MODEL_ID`.\n",
+    "## Chunking\n",
    "\n",
-    "This is important for making sure the chunker and the embedding model are using the same tokenizer."
+    "### Basic usage\n",
+    "\n",
+    "For a basic usage scenario, we can just instantiate a `HybridChunker`, which will use\n",
+    "the default parameters."
   ]
  },
  {
@@ -65,20 +84,102 @@
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
"source": [
"from docling.chunking import HybridChunker\n",
"\n",
"chunker = HybridChunker()\n",
"chunk_iter = chunker.chunk(dl_doc=doc)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that the text you would typically want to embed is the context-enriched one as\n",
"returned by the `serialize()` method:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"=== 0 ===\n",
"chunk.text:\n",
"'International Business Machines Corporation (using the trademark IBM), nicknamed Big Blue, is an American multinational technology company headquartered in Armonk, New York and present in over 175 countries.\\nIt is a publicly traded company and one of the 30 companies in the Dow Jones Industrial Aver…'\n",
"chunker.serialize(chunk):\n",
"'IBM\\nInternational Business Machines Corporation (using the trademark IBM), nicknamed Big Blue, is an American multinational technology company headquartered in Armonk, New York and present in over 175 countries.\\nIt is a publicly traded company and one of the 30 companies in the Dow Jones Industrial …'\n",
"\n",
"=== 1 ===\n",
"chunk.text:\n",
"'IBM originated with several technological innovations developed and commercialized in the late 19th century. Julius E. Pitrap patented the computing scale in 1885;[17] Alexander Dey invented the dial recorder (1888);[18] Herman Hollerith patented the Electric Tabulating Machine (1889);[19] and Willa…'\n",
"chunker.serialize(chunk):\n",
"'IBM\\n1910s1950s\\nIBM originated with several technological innovations developed and commercialized in the late 19th century. Julius E. Pitrap patented the computing scale in 1885;[17] Alexander Dey invented the dial recorder (1888);[18] Herman Hollerith patented the Electric Tabulating Machine (1889…'\n",
"\n",
"=== 2 ===\n",
"chunk.text:\n",
"'Collectively, the companies manufactured a wide array of machinery for sale and lease, ranging from commercial scales and industrial time recorders, meat and cheese slicers, to tabulators and punched cards. Thomas J. Watson, Sr., fired from the National Cash Register Company by John Henry Patterson,…'\n",
"chunker.serialize(chunk):\n",
"'IBM\\n1910s1950s\\nCollectively, the companies manufactured a wide array of machinery for sale and lease, ranging from commercial scales and industrial time recorders, meat and cheese slicers, to tabulators and punched cards. Thomas J. Watson, Sr., fired from the National Cash Register Company by John …'\n",
"\n",
"=== 3 ===\n",
"chunk.text:\n",
"'In 1961, IBM developed the SABRE reservation system for American Airlines and introduced the highly successful Selectric typewriter.…'\n",
"chunker.serialize(chunk):\n",
"'IBM\\n1960s1980s\\nIn 1961, IBM developed the SABRE reservation system for American Airlines and introduced the highly successful Selectric typewriter.…'\n",
"\n"
]
}
],
"source": [
"for i, chunk in enumerate(chunk_iter):\n",
" print(f\"=== {i} ===\")\n",
" print(f\"chunk.text:\\n{repr(f'{chunk.text[:300]}…')}\")\n",
"\n",
" enriched_text = chunker.serialize(chunk=chunk)\n",
" print(f\"chunker.serialize(chunk):\\n{repr(f'{enriched_text[:300]}…')}\")\n",
"\n",
" print()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Advanced usage\n",
"\n",
"For more control on the chunking, we can parametrize through the `HybridChunker`\n",
"arguments illustrated below.\n",
"\n",
"Notice how `tokenizer` and `embed_model` further below are single-sourced from\n",
"`EMBED_MODEL_ID`.\n",
"This is important for making sure the chunker and the embedding model are using the same\n",
"tokenizer."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [ "source": [
"from transformers import AutoTokenizer\n", "from transformers import AutoTokenizer\n",
"\n", "\n",
"from docling.chunking import HybridChunker\n", "from docling.chunking import HybridChunker\n",
"\n", "\n",
"EMBED_MODEL_ID = \"sentence-transformers/all-MiniLM-L6-v2\"\n", "EMBED_MODEL_ID = \"sentence-transformers/all-MiniLM-L6-v2\"\n",
"MAX_TOKENS = 64\n", "MAX_TOKENS = 64 # set to a small number for illustrative purposes\n",
"\n", "\n",
"tokenizer = AutoTokenizer.from_pretrained(EMBED_MODEL_ID)\n", "tokenizer = AutoTokenizer.from_pretrained(EMBED_MODEL_ID)\n",
"\n", "\n",
"chunker = HybridChunker(\n", "chunker = HybridChunker(\n",
" tokenizer=tokenizer, # can also just pass model name instead of tokenizer instance\n", " tokenizer=tokenizer, # instance or model name, defaults to \"sentence-transformers/all-MiniLM-L6-v2\"\n",
" max_tokens=MAX_TOKENS, # optional, by default derived from `tokenizer`\n", " max_tokens=MAX_TOKENS, # optional, by default derived from `tokenizer`\n",
" # merge_peers=True, # optional, defaults to True\n", " merge_peers=True, # optional, defaults to True\n",
")\n", ")\n",
"chunk_iter = chunker.chunk(dl_doc=doc)\n", "chunk_iter = chunker.chunk(dl_doc=doc)\n",
"chunks = list(chunk_iter)" "chunks = list(chunk_iter)"
@ -88,7 +189,7 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"Points to notice:\n", "Points to notice looking at the output chunks below:\n",
"- Where possible, we fit the limit of 64 tokens for the metadata-enriched serialization form (see chunk 2)\n", "- Where possible, we fit the limit of 64 tokens for the metadata-enriched serialization form (see chunk 2)\n",
"- Where neeeded, we stop before the limit, e.g. see cases of 63 as it would otherwise run into a comma (see chunk 6)\n", "- Where neeeded, we stop before the limit, e.g. see cases of 63 as it would otherwise run into a comma (see chunk 6)\n",
"- Where possible, we merge undersized peer chunks (see chunk 0)\n", "- Where possible, we merge undersized peer chunks (see chunk 0)\n",
@ -97,7 +198,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 4, "execution_count": 6,
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
@ -245,174 +346,6 @@
"\n", "\n",
" print()" " print()"
] ]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Vector Retrieval"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
"To disable this warning, you can either:\n",
"\t- Avoid using `tokenizers` before the fork if possible\n",
"\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
]
}
],
"source": [
"from sentence_transformers import SentenceTransformer\n",
"\n",
"embed_model = SentenceTransformer(EMBED_MODEL_ID)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>vector</th>\n",
" <th>text</th>\n",
" <th>headings</th>\n",
" <th>captions</th>\n",
" <th>_distance</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>[-0.1269039, -0.01948185, -0.07718097, -0.1116...</td>\n",
" <td>language, and the UPC barcode. The company has...</td>\n",
" <td>[IBM]</td>\n",
" <td>None</td>\n",
" <td>1.164613</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>[-0.10198064, 0.0055981805, -0.05095279, -0.13...</td>\n",
" <td>IBM originated with several technological inno...</td>\n",
" <td>[IBM, 1910s1950s]</td>\n",
" <td>None</td>\n",
" <td>1.245144</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>[-0.057121325, -0.034115084, -0.018113216, -0....</td>\n",
" <td>As one of the world's oldest and largest techn...</td>\n",
" <td>[IBM]</td>\n",
" <td>None</td>\n",
" <td>1.355586</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>[-0.04429054, -0.058111433, -0.009330196, -0.0...</td>\n",
" <td>IBM is the largest industrial research organiz...</td>\n",
" <td>[IBM]</td>\n",
" <td>None</td>\n",
" <td>1.398617</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>[-0.11920792, 0.053496413, -0.042391937, -0.03...</td>\n",
" <td>Awards.[16]</td>\n",
" <td>[IBM]</td>\n",
" <td>None</td>\n",
" <td>1.446295</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" vector \\\n",
"0 [-0.1269039, -0.01948185, -0.07718097, -0.1116... \n",
"1 [-0.10198064, 0.0055981805, -0.05095279, -0.13... \n",
"2 [-0.057121325, -0.034115084, -0.018113216, -0.... \n",
"3 [-0.04429054, -0.058111433, -0.009330196, -0.0... \n",
"4 [-0.11920792, 0.053496413, -0.042391937, -0.03... \n",
"\n",
" text headings \\\n",
"0 language, and the UPC barcode. The company has... [IBM] \n",
"1 IBM originated with several technological inno... [IBM, 1910s1950s] \n",
"2 As one of the world's oldest and largest techn... [IBM] \n",
"3 IBM is the largest industrial research organiz... [IBM] \n",
"4 Awards.[16] [IBM] \n",
"\n",
" captions _distance \n",
"0 None 1.164613 \n",
"1 None 1.245144 \n",
"2 None 1.355586 \n",
"3 None 1.398617 \n",
"4 None 1.446295 "
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from pathlib import Path\n",
"from tempfile import mkdtemp\n",
"\n",
"import lancedb\n",
"\n",
"\n",
"def make_lancedb_index(db_uri, index_name, chunks, embedding_model):\n",
" db = lancedb.connect(db_uri)\n",
" data = []\n",
" for chunk in chunks:\n",
" embeddings = embedding_model.encode(chunker.serialize(chunk=chunk))\n",
" data_item = {\n",
" \"vector\": embeddings,\n",
" \"text\": chunk.text,\n",
" \"headings\": chunk.meta.headings,\n",
" \"captions\": chunk.meta.captions,\n",
" }\n",
" data.append(data_item)\n",
" tbl = db.create_table(index_name, data=data, exist_ok=True)\n",
" return tbl\n",
"\n",
"\n",
"db_uri = str(Path(mkdtemp()) / \"docling.db\")\n",
"index = make_lancedb_index(db_uri, doc.name, chunks, embed_model)\n",
"\n",
"sample_query = \"invent\"\n",
"sample_embedding = embed_model.encode(sample_query)\n",
"results = index.search(sample_embedding).limit(5)\n",
"\n",
"results.to_pandas()"
]
} }
], ],
"metadata": { "metadata": {
@ -0,0 +1,399 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/DS4SD/docling/blob/main/docs/examples/rag_haystack.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RAG with Haystack"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"| Step | Tech | Execution | \n",
"| --- | --- | --- |\n",
"| Embedding | Hugging Face / Sentence Transformers | 💻 Local |\n",
"| Vector store | Milvus | 💻 Local |\n",
"| Gen AI | Hugging Face Inference API | 🌐 Remote | "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This example leverages the\n",
"[Haystack Docling extension](../../integrations/haystack/), along with\n",
"Milvus-based document store and retriever instances, as well as sentence-transformers\n",
"embeddings.\n",
"\n",
"The presented `DoclingConverter` component enables you to:\n",
"- use various document types in your LLM applications with ease and speed, and\n",
"- leverage Docling's rich format for advanced, document-native grounding.\n",
"\n",
"`DoclingConverter` supports two different export modes:\n",
"- `ExportType.MARKDOWN`: if you want to capture each input document as a separate\n",
" Haystack document, or\n",
"- `ExportType.DOC_CHUNKS` (default): if you want to have each input document chunked and\n",
" to then capture each individual chunk as a separate Haystack document downstream.\n",
"\n",
"The example allows to explore both modes via parameter `EXPORT_TYPE`; depending on the\n",
"value set, the ingestion and RAG pipelines are then set up accordingly."
]
},
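Switching modes only requires changing the `EXPORT_TYPE` constant that the converter receives; the indexing pipeline below branches on it. A minimal sketch of the alternative configuration, using only names already imported in this notebook:

```python
from docling_haystack.converter import DoclingConverter, ExportType

# Minimal sketch: emit one Markdown-serialized Haystack document per input,
# instead of chunk-level documents (ExportType.DOC_CHUNKS, the default).
converter = DoclingConverter(export_type=ExportType.MARKDOWN)
```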
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- 👉 For best conversion speed, use GPU acceleration whenever available; e.g. if running on Colab, use GPU-enabled runtime.\n",
"- Notebook uses HuggingFace's Inference API; for increased LLM quota, token can be provided via env var `HF_TOKEN`.\n",
"- Requirements can be installed as shown below (`--no-warn-conflicts` meant for Colab's pre-populated Python env; feel free to remove for stricter usage):"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -q --progress-bar off --no-warn-conflicts docling-haystack haystack-ai docling pymilvus milvus-haystack sentence-transformers python-dotenv"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from pathlib import Path\n",
"from tempfile import mkdtemp\n",
"\n",
"from docling_haystack.converter import ExportType\n",
"from dotenv import load_dotenv\n",
"\n",
"\n",
"def _get_env_from_colab_or_os(key):\n",
" try:\n",
" from google.colab import userdata\n",
"\n",
" try:\n",
" return userdata.get(key)\n",
" except userdata.SecretNotFoundError:\n",
" pass\n",
" except ImportError:\n",
" pass\n",
" return os.getenv(key)\n",
"\n",
"\n",
"load_dotenv()\n",
"HF_TOKEN = _get_env_from_colab_or_os(\"HF_TOKEN\")\n",
"PATHS = [\"https://arxiv.org/pdf/2408.09869\"] # Docling Technical Report\n",
"EMBED_MODEL_ID = \"sentence-transformers/all-MiniLM-L6-v2\"\n",
"GENERATION_MODEL_ID = \"mistralai/Mixtral-8x7B-Instruct-v0.1\"\n",
"EXPORT_TYPE = ExportType.DOC_CHUNKS\n",
"QUESTION = \"Which are the main AI models in Docling?\"\n",
"TOP_K = 3\n",
"MILVUS_URI = str(Path(mkdtemp()) / \"docling.db\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Indexing pipeline"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Token indices sequence length is longer than the specified maximum sequence length for this model (1041 > 512). Running this sequence through the model will result in indexing errors\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "80beca8762c34095a21467fb7f056059",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Batches: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"{'writer': {'documents_written': 54}}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from docling_haystack.converter import DoclingConverter\n",
"from haystack import Pipeline\n",
"from haystack.components.embedders import (\n",
" SentenceTransformersDocumentEmbedder,\n",
" SentenceTransformersTextEmbedder,\n",
")\n",
"from haystack.components.preprocessors import DocumentSplitter\n",
"from haystack.components.writers import DocumentWriter\n",
"from milvus_haystack import MilvusDocumentStore, MilvusEmbeddingRetriever\n",
"\n",
"from docling.chunking import HybridChunker\n",
"\n",
"document_store = MilvusDocumentStore(\n",
" connection_args={\"uri\": MILVUS_URI},\n",
" drop_old=True,\n",
" text_field=\"txt\", # set for preventing conflict with same-name metadata field\n",
")\n",
"\n",
"idx_pipe = Pipeline()\n",
"idx_pipe.add_component(\n",
" \"converter\",\n",
" DoclingConverter(\n",
" export_type=EXPORT_TYPE,\n",
" chunker=HybridChunker(tokenizer=EMBED_MODEL_ID),\n",
" ),\n",
")\n",
"idx_pipe.add_component(\n",
" \"embedder\",\n",
" SentenceTransformersDocumentEmbedder(model=EMBED_MODEL_ID),\n",
")\n",
"idx_pipe.add_component(\"writer\", DocumentWriter(document_store=document_store))\n",
"if EXPORT_TYPE == ExportType.DOC_CHUNKS:\n",
" idx_pipe.connect(\"converter\", \"embedder\")\n",
"elif EXPORT_TYPE == ExportType.MARKDOWN:\n",
" idx_pipe.add_component(\n",
" \"splitter\",\n",
" DocumentSplitter(split_by=\"sentence\", split_length=1),\n",
" )\n",
" idx_pipe.connect(\"converter.documents\", \"splitter.documents\")\n",
" idx_pipe.connect(\"splitter.documents\", \"embedder.documents\")\n",
"else:\n",
" raise ValueError(f\"Unexpected export type: {EXPORT_TYPE}\")\n",
"idx_pipe.connect(\"embedder\", \"writer\")\n",
"idx_pipe.run({\"converter\": {\"paths\": PATHS}})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## RAG pipeline"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d753748e2b624896ad2caf5e8368b041",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Batches: 0%| | 0/1 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/pva/work/github.com/DS4SD/docling/.venv/lib/python3.12/site-packages/huggingface_hub/inference/_client.py:2232: FutureWarning: `stop_sequences` is a deprecated argument for `text_generation` task and will be removed in version '0.28.0'. Use `stop` instead.\n",
" warnings.warn(\n"
]
}
],
"source": [
"from haystack.components.builders import AnswerBuilder\n",
"from haystack.components.builders.prompt_builder import PromptBuilder\n",
"from haystack.components.generators import HuggingFaceAPIGenerator\n",
"from haystack.utils import Secret\n",
"\n",
"prompt_template = \"\"\"\n",
" Given these documents, answer the question.\n",
" Documents:\n",
" {% for doc in documents %}\n",
" {{ doc.content }}\n",
" {% endfor %}\n",
" Question: {{query}}\n",
" Answer:\n",
" \"\"\"\n",
"\n",
"rag_pipe = Pipeline()\n",
"rag_pipe.add_component(\n",
" \"embedder\",\n",
" SentenceTransformersTextEmbedder(model=EMBED_MODEL_ID),\n",
")\n",
"rag_pipe.add_component(\n",
" \"retriever\",\n",
" MilvusEmbeddingRetriever(document_store=document_store, top_k=TOP_K),\n",
")\n",
"rag_pipe.add_component(\"prompt_builder\", PromptBuilder(template=prompt_template))\n",
"rag_pipe.add_component(\n",
" \"llm\",\n",
" HuggingFaceAPIGenerator(\n",
" api_type=\"serverless_inference_api\",\n",
" api_params={\"model\": GENERATION_MODEL_ID},\n",
" token=Secret.from_token(HF_TOKEN) if HF_TOKEN else None,\n",
" ),\n",
")\n",
"rag_pipe.add_component(\"answer_builder\", AnswerBuilder())\n",
"rag_pipe.connect(\"embedder.embedding\", \"retriever\")\n",
"rag_pipe.connect(\"retriever\", \"prompt_builder.documents\")\n",
"rag_pipe.connect(\"prompt_builder\", \"llm\")\n",
"rag_pipe.connect(\"llm.replies\", \"answer_builder.replies\")\n",
"rag_pipe.connect(\"llm.meta\", \"answer_builder.meta\")\n",
"rag_pipe.connect(\"retriever\", \"answer_builder.documents\")\n",
"rag_res = rag_pipe.run(\n",
" {\n",
" \"embedder\": {\"text\": QUESTION},\n",
" \"prompt_builder\": {\"query\": QUESTION},\n",
" \"answer_builder\": {\"query\": QUESTION},\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below we print out the RAG results. If you have used `ExportType.DOC_CHUNKS`, notice how\n",
"the sources contain document-level grounding (e.g. page number or bounding box\n",
"information):"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Question:\n",
"Which are the main AI models in Docling?\n",
"\n",
"Answer:\n",
"The main AI models in Docling are a layout analysis model and TableFormer. The layout analysis model is an accurate object-detector for page elements, while TableFormer is a state-of-the-art table structure recognition model. These models are provided with pre-trained weights and a separate package for the inference code as docling-ibm-models. They are also used in the open-access deepsearch-experience, a cloud-native service for knowledge exploration tasks. Additionally, Docling plans to extend its model library with a figure-classifier model, an equation-recognition model, a code-recognition model, and more in the future.\n",
"\n",
"Sources:\n",
"- text: 'As part of Docling, we initially release two highly capable AI models to the open-source community, which have been developed and published recently by our team. The first model is a layout analysis model, an accurate object-detector for page elements [13]. The second model is TableFormer [12, 9], a state-of-the-art table structure recognition model. We provide the pre-trained weights (hosted on huggingface) and a separate package for the inference code as docling-ibm-models . Both models are also powering the open-access deepsearch-experience, our cloud-native service for knowledge exploration tasks.'\n",
" file: 2408.09869v5.pdf\n",
" section: 3.2 AI models\n",
" page: 3, bounding box: [107, 406, 504, 330]\n",
"- text: 'Docling implements a linear pipeline of operations, which execute sequentially on each given document (see Fig. 1). Each document is first parsed by a PDF backend, which retrieves the programmatic text tokens, consisting of string content and its coordinates on the page, and also renders a bitmap image of each page to support downstream operations. Then, the standard model pipeline applies a sequence of AI models independently on every page in the document to extract features and content, such as layout and table structures. Finally, the results from all pages are aggregated and passed through a post-processing stage, which augments metadata, detects the document language, infers reading-order and eventually assembles a typed document object which can be serialized to JSON or Markdown.'\n",
" file: 2408.09869v5.pdf\n",
" section: 3 Processing pipeline\n",
" page: 2, bounding box: [107, 273, 504, 176]\n",
"- text: 'Docling is designed to allow easy extension of the model library and pipelines. In the future, we plan to extend Docling with several more models, such as a figure-classifier model, an equationrecognition model, a code-recognition model and more. This will help improve the quality of conversion for specific types of content, as well as augment extracted document metadata with additional information. Further investment into testing and optimizing GPU acceleration as well as improving the Docling-native PDF backend are on our roadmap, too.\\nWe encourage everyone to propose or implement additional features and models, and will gladly take your inputs and contributions under review . The codebase of Docling is open for use and contribution, under the MIT license agreement and in alignment with our contributing guidelines included in the Docling repository. If you use Docling in your projects, please consider citing this technical report.'\n",
" section: 6 Future work and contributions\n",
" page: 5, bounding box: [106, 323, 504, 258]\n"
]
}
],
"source": [
"from docling.chunking import DocChunk\n",
"\n",
"print(f\"Question:\\n{QUESTION}\\n\")\n",
"print(f\"Answer:\\n{rag_res['answer_builder']['answers'][0].data.strip()}\\n\")\n",
"print(\"Sources:\")\n",
"sources = rag_res[\"answer_builder\"][\"answers\"][0].documents\n",
"for source in sources:\n",
" if EXPORT_TYPE == ExportType.DOC_CHUNKS:\n",
" doc_chunk = DocChunk.model_validate(source.meta[\"dl_meta\"])\n",
" print(f\"- text: {repr(doc_chunk.text)}\")\n",
" if doc_chunk.meta.origin:\n",
" print(f\" file: {doc_chunk.meta.origin.filename}\")\n",
" if doc_chunk.meta.headings:\n",
" print(f\" section: {' / '.join(doc_chunk.meta.headings)}\")\n",
" bbox = doc_chunk.meta.doc_items[0].prov[0].bbox\n",
" print(\n",
" f\" page: {doc_chunk.meta.doc_items[0].prov[0].page_no}, \"\n",
" f\"bounding box: [{int(bbox.l)}, {int(bbox.t)}, {int(bbox.r)}, {int(bbox.b)}]\"\n",
" )\n",
" elif EXPORT_TYPE == ExportType.MARKDOWN:\n",
" print(repr(source.content))\n",
" else:\n",
" raise ValueError(f\"Unexpected export type: {EXPORT_TYPE}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@ -4,7 +4,63 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# RAG with LangChain 🦜🔗" "<a href=\"https://colab.research.google.com/github/DS4SD/docling/blob/main/docs/examples/rag_langchain.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RAG with LangChain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"| Step | Tech | Execution | \n",
"| --- | --- | --- |\n",
"| Embedding | Hugging Face / Sentence Transformers | 💻 Local |\n",
"| Vector store | Milvus | 💻 Local |\n",
"| Gen AI | Hugging Face Inference API | 🌐 Remote | "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This example leverages the\n",
"[LangChain Docling integration](../../integrations/langchain/), along with a Milvus\n",
"vector store, as well as sentence-transformers embeddings.\n",
"\n",
"The presented `DoclingLoader` component enables you to:\n",
"- use various document types in your LLM applications with ease and speed, and\n",
"- leverage Docling's rich format for advanced, document-native grounding.\n",
"\n",
"`DoclingLoader` supports two different export modes:\n",
"- `ExportType.MARKDOWN`: if you want to capture each input document as a separate\n",
" LangChain document, or\n",
"- `ExportType.DOC_CHUNKS` (default): if you want to have each input document chunked and\n",
" to then capture each individual chunk as a separate LangChain document downstream.\n",
"\n",
"The example allows exploring both modes via parameter `EXPORT_TYPE`; depending on the\n",
"value set, the example pipeline is then set up accordingly."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- 👉 For best conversion speed, use GPU acceleration whenever available; e.g. if running on Colab, use GPU-enabled runtime.\n",
"- Notebook uses HuggingFace's Inference API; for increased LLM quota, token can be provided via env var `HF_TOKEN`.\n",
"- Requirements can be installed as shown below (`--no-warn-conflicts` meant for Colab's pre-populated Python env; feel free to remove for stricter usage):"
] ]
}, },
{ {
@ -21,81 +77,105 @@
} }
], ],
"source": [ "source": [
"# requirements for this example:\n", "%pip install -q --progress-bar off --no-warn-conflicts langchain-docling langchain-core langchain-huggingface langchain_milvus langchain python-dotenv"
"%pip install -qq docling docling-core python-dotenv langchain-text-splitters langchain-huggingface langchain-milvus"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 2, "execution_count": 2,
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [],
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [ "source": [
"import os\n", "import os\n",
"from pathlib import Path\n",
"from tempfile import mkdtemp\n",
"\n", "\n",
"from dotenv import load_dotenv\n", "from dotenv import load_dotenv\n",
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_docling.loader import ExportType\n",
"\n", "\n",
"load_dotenv()" "\n",
"def _get_env_from_colab_or_os(key):\n",
" try:\n",
" from google.colab import userdata\n",
"\n",
" try:\n",
" return userdata.get(key)\n",
" except userdata.SecretNotFoundError:\n",
" pass\n",
" except ImportError:\n",
" pass\n",
" return os.getenv(key)\n",
"\n",
"\n",
"load_dotenv()\n",
"\n",
"# https://github.com/huggingface/transformers/issues/5486:\n",
"os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n",
"\n",
"HF_TOKEN = _get_env_from_colab_or_os(\"HF_TOKEN\")\n",
"FILE_PATH = [\"https://arxiv.org/pdf/2408.09869\"] # Docling Technical Report\n",
"EMBED_MODEL_ID = \"sentence-transformers/all-MiniLM-L6-v2\"\n",
"GEN_MODEL_ID = \"mistralai/Mixtral-8x7B-Instruct-v0.1\"\n",
"EXPORT_TYPE = ExportType.DOC_CHUNKS\n",
"QUESTION = \"Which are the main AI models in Docling?\"\n",
"PROMPT = PromptTemplate.from_template(\n",
" \"Context information is below.\\n---------------------\\n{context}\\n---------------------\\nGiven the context information and not prior knowledge, answer the query.\\nQuery: {input}\\nAnswer:\\n\",\n",
")\n",
"TOP_K = 3\n",
"MILVUS_URI = str(Path(mkdtemp()) / \"docling.db\")"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## Setup" "## Document loading\n",
] "\n",
}, "Now we can instantiate our loader and load documents."
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loader and splitter"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below we set up:\n",
"- a `Loader` which will be used to create LangChain documents, and\n",
"- a splitter, which will be used to split these documents"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 3, "execution_count": 3,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Token indices sequence length is longer than the specified maximum sequence length for this model (1041 > 512). Running this sequence through the model will result in indexing errors\n"
]
}
],
"source": [ "source": [
"from typing import Iterator\n", "from langchain_docling import DoclingLoader\n",
"\n", "\n",
"from langchain_core.document_loaders import BaseLoader\n", "from docling.chunking import HybridChunker\n",
"from langchain_core.documents import Document as LCDocument\n",
"\n", "\n",
"from docling.document_converter import DocumentConverter\n", "loader = DoclingLoader(\n",
" file_path=FILE_PATH,\n",
" export_type=EXPORT_TYPE,\n",
" chunker=HybridChunker(tokenizer=EMBED_MODEL_ID),\n",
")\n",
"\n", "\n",
"class DoclingPDFLoader(BaseLoader):\n", "docs = loader.load()"
"\n", ]
" def __init__(self, file_path: str | list[str]) -> None:\n", },
" self._file_paths = file_path if isinstance(file_path, list) else [file_path]\n", {
" self._converter = DocumentConverter()\n", "cell_type": "markdown",
"\n", "metadata": {},
" def lazy_load(self) -> Iterator[LCDocument]:\n", "source": [
" for source in self._file_paths:\n", "> Note: a message saying `\"Token indices sequence length is longer than the specified\n",
" dl_doc = self._converter.convert(source).document\n", "maximum sequence length...\"` can be ignored in this case — details\n",
" text = dl_doc.export_to_markdown()\n", "[here](https://github.com/DS4SD/docling-core/issues/119#issuecomment-2577418826)."
" yield LCDocument(page_content=text)" ]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Determining the splits:"
] ]
}, },
{ {
@ -104,29 +184,57 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"FILE_PATH = \"https://raw.githubusercontent.com/DS4SD/docling/main/tests/data/2206.01062.pdf\" # DocLayNet paper" "if EXPORT_TYPE == ExportType.DOC_CHUNKS:\n",
] " splits = docs\n",
}, "elif EXPORT_TYPE == ExportType.MARKDOWN:\n",
{ " from langchain_text_splitters import MarkdownHeaderTextSplitter\n",
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n", "\n",
"loader = DoclingPDFLoader(file_path=FILE_PATH)\n", " splitter = MarkdownHeaderTextSplitter(\n",
"text_splitter = RecursiveCharacterTextSplitter(\n", " headers_to_split_on=[\n",
" chunk_size=1000,\n", " (\"#\", \"Header_1\"),\n",
" chunk_overlap=200,\n", " (\"##\", \"Header_2\"),\n",
")" " (\"###\", \"Header_3\"),\n",
" ],\n",
" )\n",
" splits = [split for doc in docs for split in splitter.split_text(doc.page_content)]\n",
"else:\n",
" raise ValueError(f\"Unexpected export type: {EXPORT_TYPE}\")"
] ]
}, },
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"We now used the above-defined objects to get the document splits:" "Inspecting some sample splits:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"- d.page_content='arXiv:2408.09869v5 [cs.CL] 9 Dec 2024'\n",
"- d.page_content='Docling Technical Report\\nVersion 1.0\\nChristoph Auer Maksym Lysak Ahmed Nassar Michele Dolfi Nikolaos Livathinos Panos Vagenas Cesar Berrospi Ramis Matteo Omenetti Fabian Lindlbauer Kasper Dinkla Lokesh Mishra Yusik Kim Shubham Gupta Rafael Teixeira de Lima Valery Weber Lucas Morin Ingmar Meijer Viktor Kuropiatnyk Peter W. J. Staar\\nAI4K Group, IBM Research R¨uschlikon, Switzerland'\n",
"- d.page_content='Abstract\\nThis technical report introduces Docling , an easy to use, self-contained, MITlicensed open-source package for PDF document conversion. It is powered by state-of-the-art specialized AI models for layout analysis (DocLayNet) and table structure recognition (TableFormer), and runs efficiently on commodity hardware in a small resource budget. The code interface allows for easy extensibility and addition of new features and models.'\n",
"...\n"
]
}
],
"source": [
"for d in splits[:3]:\n",
" print(f\"- {d.page_content=}\")\n",
"print(\"...\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Ingestion"
] ]
}, },
{ {
@ -135,93 +243,27 @@
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
"docs = loader.load()\n", "import json\n",
"splits = text_splitter.split_documents(docs)" "from pathlib import Path\n",
] "from tempfile import mkdtemp\n",
}, "\n",
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Embeddings"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"from langchain_huggingface.embeddings import HuggingFaceEmbeddings\n", "from langchain_huggingface.embeddings import HuggingFaceEmbeddings\n",
"\n",
"HF_EMBED_MODEL_ID = \"BAAI/bge-small-en-v1.5\"\n",
"embeddings = HuggingFaceEmbeddings(model_name=HF_EMBED_MODEL_ID)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Vector store"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"from tempfile import TemporaryDirectory\n",
"\n",
"from langchain_milvus import Milvus\n", "from langchain_milvus import Milvus\n",
"\n", "\n",
"MILVUS_URI = os.environ.get(\n", "embedding = HuggingFaceEmbeddings(model_name=EMBED_MODEL_ID)\n",
" \"MILVUS_URI\", f\"{(tmp_dir := TemporaryDirectory()).name}/milvus_demo.db\"\n",
")\n",
"\n", "\n",
"\n",
"milvus_uri = str(Path(mkdtemp()) / \"docling.db\") # or set as needed\n",
"vectorstore = Milvus.from_documents(\n", "vectorstore = Milvus.from_documents(\n",
" splits,\n", " documents=splits,\n",
" embeddings,\n", " embedding=embedding,\n",
" connection_args={\"uri\": MILVUS_URI},\n", " collection_name=\"docling_demo\",\n",
" connection_args={\"uri\": milvus_uri},\n",
" index_params={\"index_type\": \"FLAT\"},\n",
" drop_old=True,\n", " drop_old=True,\n",
")" ")"
] ]
}, },
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### LLM"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The token has not been saved to the git credentials helper. Pass `add_to_git_credential=True` in this function directly or `--add-to-git-credential` if using via `huggingface-cli` if you want to set the git credential as well.\n",
"Token is valid (permission: write).\n",
"Your token has been saved to /Users/pva/.cache/huggingface/token\n",
"Login successful\n"
]
}
],
"source": [
"from langchain_huggingface import HuggingFaceEndpoint\n",
"\n",
"HF_API_KEY = os.environ.get(\"HF_API_KEY\")\n",
"HF_LLM_MODEL_ID = \"mistralai/Mistral-7B-Instruct-v0.3\"\n",
"\n",
"llm = HuggingFaceEndpoint(\n",
" repo_id=HF_LLM_MODEL_ID,\n",
" huggingfacehub_api_token=HF_API_KEY,\n",
")"
]
},
{ {
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
@ -231,55 +273,89 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 10, "execution_count": 7,
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Note: Environment variable`HF_TOKEN` is set and is the current active token independently from the token you've just configured.\n"
]
}
],
"source": [ "source": [
"from typing import Iterable\n", "from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_huggingface import HuggingFaceEndpoint\n",
"\n", "\n",
"from langchain_core.documents import Document as LCDocument\n", "retriever = vectorstore.as_retriever(search_kwargs={\"k\": TOP_K})\n",
"from langchain_core.output_parsers import StrOutputParser\n", "llm = HuggingFaceEndpoint(\n",
"from langchain_core.prompts import PromptTemplate\n", " repo_id=GEN_MODEL_ID,\n",
"from langchain_core.runnables import RunnablePassthrough\n", " huggingfacehub_api_token=HF_TOKEN,\n",
"\n",
"\n",
"def format_docs(docs: Iterable[LCDocument]):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"prompt = PromptTemplate.from_template(\n",
" \"Context information is below.\\n---------------------\\n{context}\\n---------------------\\nGiven the context information and not prior knowledge, answer the query.\\nQuery: {question}\\nAnswer:\\n\"\n",
")\n", ")\n",
"\n", "\n",
"rag_chain = (\n", "\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n", "def clip_text(text, threshold=100):\n",
" | prompt\n", " return f\"{text[:threshold]}...\" if len(text) > threshold else text"
" | llm\n",
" | StrOutputParser()\n",
")"
] ]
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": 11, "execution_count": 8,
"metadata": {}, "metadata": {},
"outputs": [ "outputs": [
{ {
"data": { "name": "stdout",
"text/plain": [ "output_type": "stream",
"'- 80,863 pages were human annotated for DocLayNet.'" "text": [
] "Question:\n",
}, "Which are the main AI models in Docling?\n",
"execution_count": 11, "\n",
"metadata": {}, "Answer:\n",
"output_type": "execute_result" "Docling initially releases two AI models, a layout analysis model and TableFormer. The layout analysis model is an accurate object-detector for page elements, and TableFormer is a state-of-the-art tab...\n",
"\n",
"Source 1:\n",
" text: \"3.2 AI models\\nAs part of Docling, we initially release two highly capable AI models to the open-source community, which have been developed and published recently by our team. The first model is a layout analysis model, an accurate object-detector for page elements [13]. The second model is TableFormer [12, 9], a state-of-the-art table structure re...\"\n",
" dl_meta: {'schema_name': 'docling_core.transforms.chunker.DocMeta', 'version': '1.0.0', 'doc_items': [{'self_ref': '#/texts/50', 'parent': {'$ref': '#/body'}, 'children': [], 'label': 'text', 'prov': [{'page_no': 3, 'bbox': {'l': 108.0, 't': 405.1419982910156, 'r': 504.00299072265625, 'b': 330.7799987792969, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 608]}]}], 'headings': ['3.2 AI models'], 'origin': {'mimetype': 'application/pdf', 'binary_hash': 11465328351749295394, 'filename': '2408.09869v5.pdf'}}\n",
" source: https://arxiv.org/pdf/2408.09869\n",
"\n",
"Source 2:\n",
" text: \"3 Processing pipeline\\nDocling implements a linear pipeline of operations, which execute sequentially on each given document (see Fig. 1). Each document is first parsed by a PDF backend, which retrieves the programmatic text tokens, consisting of string content and its coordinates on the page, and also renders a bitmap image of each page to support ...\"\n",
" dl_meta: {'schema_name': 'docling_core.transforms.chunker.DocMeta', 'version': '1.0.0', 'doc_items': [{'self_ref': '#/texts/26', 'parent': {'$ref': '#/body'}, 'children': [], 'label': 'text', 'prov': [{'page_no': 2, 'bbox': {'l': 108.0, 't': 273.01800537109375, 'r': 504.00299072265625, 'b': 176.83799743652344, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 796]}]}], 'headings': ['3 Processing pipeline'], 'origin': {'mimetype': 'application/pdf', 'binary_hash': 11465328351749295394, 'filename': '2408.09869v5.pdf'}}\n",
" source: https://arxiv.org/pdf/2408.09869\n",
"\n",
"Source 3:\n",
" text: \"6 Future work and contributions\\nDocling is designed to allow easy extension of the model library and pipelines. In the future, we plan to extend Docling with several more models, such as a figure-classifier model, an equationrecognition model, a code-recognition model and more. This will help improve the quality of conversion for specific types of ...\"\n",
" dl_meta: {'schema_name': 'docling_core.transforms.chunker.DocMeta', 'version': '1.0.0', 'doc_items': [{'self_ref': '#/texts/76', 'parent': {'$ref': '#/body'}, 'children': [], 'label': 'text', 'prov': [{'page_no': 5, 'bbox': {'l': 108.0, 't': 322.468994140625, 'r': 504.00299072265625, 'b': 259.0169982910156, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 543]}]}, {'self_ref': '#/texts/77', 'parent': {'$ref': '#/body'}, 'children': [], 'label': 'text', 'prov': [{'page_no': 5, 'bbox': {'l': 108.0, 't': 251.6540069580078, 'r': 504.00299072265625, 'b': 198.99200439453125, 'coord_origin': 'BOTTOMLEFT'}, 'charspan': [0, 402]}]}], 'headings': ['6 Future work and contributions'], 'origin': {'mimetype': 'application/pdf', 'binary_hash': 11465328351749295394, 'filename': '2408.09869v5.pdf'}}\n",
" source: https://arxiv.org/pdf/2408.09869\n"
]
} }
], ],
"source": [ "source": [
"rag_chain.invoke(\"How many pages were human annotated for DocLayNet?\")" "question_answer_chain = create_stuff_documents_chain(llm, PROMPT)\n",
"rag_chain = create_retrieval_chain(retriever, question_answer_chain)\n",
"resp_dict = rag_chain.invoke({\"input\": QUESTION})\n",
"\n",
"clipped_answer = clip_text(resp_dict[\"answer\"], threshold=200)\n",
"print(f\"Question:\\n{resp_dict['input']}\\n\\nAnswer:\\n{clipped_answer}\")\n",
"for i, doc in enumerate(resp_dict[\"context\"]):\n",
" print()\n",
" print(f\"Source {i+1}:\")\n",
" print(f\" text: {json.dumps(clip_text(doc.page_content, threshold=350))}\")\n",
" for key in doc.metadata:\n",
" if key != \"pk\":\n",
" val = doc.metadata.get(key)\n",
" clipped_val = clip_text(val) if isinstance(val, str) else val\n",
" print(f\" {key}: {clipped_val}\")"
] ]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
} }
], ],
"metadata": { "metadata": {
@ -298,7 +374,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.12.4" "version": "3.12.8"
} }
}, },
"nbformat": 4, "nbformat": 4,
@ -11,7 +11,18 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# RAG with LlamaIndex 🦙" "# RAG with LlamaIndex"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"| Step | Tech | Execution | \n",
"| --- | --- | --- |\n",
"| Embedding | Hugging Face / Sentence Transformers | 💻 Local |\n",
"| Vector store | Milvus | 💻 Local |\n",
"| Gen AI | Hugging Face Inference API | 🌐 Remote | "
] ]
}, },
{ {
@ -462,7 +473,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.12.4" "version": "3.12.7"
} }
}, },
"nbformat": 4, "nbformat": 4,
@ -0,0 +1,752 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DS4SD/docling/blob/main/docs/examples/rag_weaviate.ipynb)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ag9kcX2B_atc"
},
"source": [
"# RAG with Weaviate"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"| Step | Tech | Execution | \n",
"| --- | --- | --- |\n",
"| Embedding | Open AI | 🌐 Remote |\n",
"| Vector store | Weavieate | 💻 Local |\n",
"| Gen AI | Open AI | 🌐 Remote |\n",
"\n",
"## A recipe 🧑‍🍳 🐥 💚\n",
"\n",
"This is a code recipe that uses [Weaviate](https://weaviate.io/) to perform RAG over PDF documents parsed by [Docling](https://ds4sd.github.io/docling/).\n",
"\n",
"In this notebook, we accomplish the following:\n",
"* Parse the top machine learning papers on [arXiv](https://arxiv.org/) using Docling\n",
"* Perform hierarchical chunking of the documents using Docling\n",
"* Generate text embeddings with OpenAI\n",
"* Perform RAG using [Weaviate](https://weaviate.io/developers/weaviate/search/generative)\n",
"\n",
"To run this notebook, you'll need:\n",
"* An [OpenAI API key](https://platform.openai.com/docs/quickstart)\n",
"* Access to GPU/s\n",
"\n",
"Note: For best results, please use **GPU acceleration** to run this notebook. Here are two options for running this notebook:\n",
"1. **Locally on a MacBook with an Apple Silicon chip.** Converting all documents in the notebook takes ~2 minutes on a MacBook M2 due to Docling's usage of MPS accelerators.\n",
"2. **Run this notebook on Google Colab.** Converting all documents in the notebook takes ~8 mintutes on a Google Colab T4 GPU."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4YgT7tpXCUl0"
},
"source": [
"### Install Docling and Weaviate client\n",
"\n",
"Note: If Colab prompts you to restart the session after running the cell below, click \"restart\" and proceed with running the rest of the notebook."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true,
"id": "u076oUSF_YUG"
},
"outputs": [],
"source": [
"%%capture\n",
"%pip install docling~=\"2.7.0\"\n",
"%pip install -U weaviate-client~=\"4.9.4\"\n",
"%pip install rich\n",
"%pip install torch\n",
"\n",
"import warnings\n",
"\n",
"warnings.filterwarnings(\"ignore\")\n",
"\n",
"import logging\n",
"\n",
"# Suppress Weaviate client logs\n",
"logging.getLogger(\"weaviate\").setLevel(logging.ERROR)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2q2F9RUmR8Wj"
},
"source": [
"## 🐥 Part 1: Docling\n",
"\n",
"Part of what makes Docling so remarkable is the fact that it can run on commodity hardware. This means that this notebook can be run on a local machine with GPU acceleration. If you're using a MacBook with a silicon chip, Docling integrates seamlessly with Metal Performance Shaders (MPS). MPS provides out-of-the-box GPU acceleration for macOS, seamlessly integrating with PyTorch and TensorFlow, offering energy-efficient performance on Apple Silicon, and broad compatibility with all Metal-supported GPUs.\n",
"\n",
"The code below checks to see if a GPU is available, either via CUDA or MPS."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"MPS GPU is enabled.\n"
]
}
],
"source": [
"import torch\n",
"\n",
"# Check if GPU or MPS is available\n",
"if torch.cuda.is_available():\n",
" device = torch.device(\"cuda\")\n",
" print(f\"CUDA GPU is enabled: {torch.cuda.get_device_name(0)}\")\n",
"elif torch.backends.mps.is_available():\n",
" device = torch.device(\"mps\")\n",
" print(\"MPS GPU is enabled.\")\n",
"else:\n",
" raise EnvironmentError(\n",
" \"No GPU or MPS device found. Please check your environment and ensure GPU or MPS support is configured.\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wHTsy4a8JFPl"
},
"source": [
"Here, we've collected 10 influential machine learning papers published as PDFs on arXiv. Because Docling does not yet have title extraction for PDFs, we manually add the titles in a corresponding list.\n",
"\n",
"Note: Converting all 10 papers should take around 8 minutes with a T4 GPU."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "Vy5SMPiGDMy-"
},
"outputs": [],
"source": [
"# Influential machine learning papers\n",
"source_urls = [\n",
" \"https://arxiv.org/pdf/1706.03762\",\n",
" \"https://arxiv.org/pdf/1810.04805\",\n",
" \"https://arxiv.org/pdf/1406.2661\",\n",
" \"https://arxiv.org/pdf/1409.0473\",\n",
" \"https://arxiv.org/pdf/1412.6980\",\n",
" \"https://arxiv.org/pdf/1312.6114\",\n",
" \"https://arxiv.org/pdf/1312.5602\",\n",
" \"https://arxiv.org/pdf/1512.03385\",\n",
" \"https://arxiv.org/pdf/1409.3215\",\n",
" \"https://arxiv.org/pdf/1301.3781\",\n",
"]\n",
"\n",
"# And their corresponding titles (because Docling doesn't have title extraction yet!)\n",
"source_titles = [\n",
" \"Attention Is All You Need\",\n",
" \"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding\",\n",
" \"Generative Adversarial Nets\",\n",
" \"Neural Machine Translation by Jointly Learning to Align and Translate\",\n",
" \"Adam: A Method for Stochastic Optimization\",\n",
" \"Auto-Encoding Variational Bayes\",\n",
" \"Playing Atari with Deep Reinforcement Learning\",\n",
" \"Deep Residual Learning for Image Recognition\",\n",
" \"Sequence to Sequence Learning with Neural Networks\",\n",
" \"A Neural Probabilistic Language Model\",\n",
"]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5fi8wzHrCoLa"
},
"source": [
"### Convert PDFs to Docling documents\n",
"\n",
"Here we use Docling's `.convert_all()` to parse a batch of PDFs. The result is a list of Docling documents that we can use for text extraction.\n",
"\n",
"Note: Please ignore the `ERR#` message."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 67,
"referenced_widgets": [
"6d049f786a2f4ad7857a6cf2d95b5ba2",
"db2a7b9f549e4f0fb1ff3fce655d76a2",
"630967a2db4c4714b4c15d1358a0fcae",
"b3da9595ab7c4995a00e506e7b5202e3",
"243ecaf36ee24cafbd1c33d148f2ca78",
"5b7e22df1b464ca894126736e6f72207",
"02f6af5993bb4a6a9dbca77952f675d2",
"dea323b3de0e43118f338842c94ac065",
"bd198d2c0c4c4933a6e6544908d0d846",
"febd5c498e4f4f5dbde8dec3cd935502",
"ab4f282c0d37451092c60e6566e8e945"
]
},
"id": "Sr44xGR1PNSc",
"outputId": "b5cca9ee-d7c0-4c8f-c18a-0ac4787984e9"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Fetching 9 files: 100%|██████████| 9/9 [00:00<00:00, 84072.91it/s]\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"ERR#: COULD NOT CONVERT TO RS THIS TABLE TO COMPUTE SPANS\n"
]
}
],
"source": [
"from docling.datamodel.document import ConversionResult\n",
"from docling.document_converter import DocumentConverter\n",
"\n",
"# Instantiate the doc converter\n",
"doc_converter = DocumentConverter()\n",
"\n",
"# Directly pass list of files or streams to `convert_all`\n",
"conv_results_iter = doc_converter.convert_all(source_urls) # previously `convert`\n",
"\n",
"# Iterate over the generator to get a list of Docling documents\n",
"docs = [result.document for result in conv_results_iter]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xHun_P-OCtKd"
},
"source": [
"### Post-process extracted document data\n",
"#### Perform hierarchical chunking on documents\n",
"\n",
"We use Docling's `HierarchicalChunker()` to perform hierarchy-aware chunking of our list of documents. This is meant to preserve some of the structure and relationships within the document, which enables more accurate and relevant retrieval in our RAG pipeline."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "L17ju9xibuIo"
},
"outputs": [],
"source": [
"from docling_core.transforms.chunker import HierarchicalChunker\n",
"\n",
"# Initialize lists for text, and titles\n",
"texts, titles = [], []\n",
"\n",
"chunker = HierarchicalChunker()\n",
"\n",
"# Process each document in the list\n",
"for doc, title in zip(docs, source_titles): # Pair each document with its title\n",
" chunks = list(\n",
" chunker.chunk(doc)\n",
" ) # Perform hierarchical chunking and get text from chunks\n",
" for chunk in chunks:\n",
" texts.append(chunk.text)\n",
" titles.append(title)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "khbU9R1li2Kj"
},
"source": [
"Because we're splitting the documents into chunks, we'll concatenate the article title to the beginning of each chunk for additional context."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "HNwYV9P57OwF"
},
"outputs": [],
"source": [
"# Concatenate title and text\n",
"for i in range(len(texts)):\n",
" texts[i] = f\"{titles[i]} {texts[i]}\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uhLlCpQODaT3"
},
"source": [
"## 💚 Part 2: Weaviate\n",
"### Create and configure an embedded Weaviate collection"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ho7xYQTZK5Wk"
},
"source": [
"We'll be using the OpenAI API for both generating the text embeddings and for the generative model in our RAG pipeline. The code below dynamically fetches your API key based on whether you're running this notebook in Google Colab and running it as a regular Jupyter notebook. All you need to do is replace `openai_api_key_var` with the name of your environmental variable name or Colab secret name for the API key.\n",
"\n",
"If you're running this notebook in Google Colab, make sure you [add](https://medium.com/@parthdasawant/how-to-use-secrets-in-google-colab-450c38e3ec75) your API key as a secret."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "PD53jOT4roj2"
},
"outputs": [],
"source": [
"# OpenAI API key variable name\n",
"openai_api_key_var = \"OPENAI_API_KEY\" # Replace with the name of your secret/env var\n",
"\n",
"# Fetch OpenAI API key\n",
"try:\n",
" # If running in Colab, fetch API key from Secrets\n",
" import google.colab\n",
" from google.colab import userdata\n",
"\n",
" openai_api_key = userdata.get(openai_api_key_var)\n",
" if not openai_api_key:\n",
" raise ValueError(f\"Secret '{openai_api_key_var}' not found in Colab secrets.\")\n",
"except ImportError:\n",
" # If not running in Colab, fetch API key from environment variable\n",
" import os\n",
"\n",
" openai_api_key = os.getenv(openai_api_key_var)\n",
" if not openai_api_key:\n",
" raise EnvironmentError(\n",
" f\"Environment variable '{openai_api_key_var}' is not set. \"\n",
" \"Please define it before running this script.\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8G5jZSh6ti3e"
},
"source": [
"[Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded) allows you to spin up a Weaviate instance directly from your application code, without having to use a Docker container. If you're interested in other deployment methods, like using Docker-Compose or Kubernetes, check out this [page](https://weaviate.io/developers/weaviate/installation) in the Weaviate docs."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "hFUBEZiJUMic",
"outputId": "0b6534c9-66c9-4a47-9754-103bcc030019"
},
"outputs": [],
"source": [
"import weaviate\n",
"\n",
"# Connect to Weaviate embedded\n",
"client = weaviate.connect_to_embedded(headers={\"X-OpenAI-Api-Key\": openai_api_key})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4nu9qM75hrsd"
},
"outputs": [],
"source": [
"import weaviate.classes.config as wc\n",
"from weaviate.classes.config import DataType, Property\n",
"\n",
"# Define the collection name\n",
"collection_name = \"docling\"\n",
"\n",
"# Delete the collection if it already exists\n",
"if client.collections.exists(collection_name):\n",
" client.collections.delete(collection_name)\n",
"\n",
"# Create the collection\n",
"collection = client.collections.create(\n",
" name=collection_name,\n",
" vectorizer_config=wc.Configure.Vectorizer.text2vec_openai(\n",
" model=\"text-embedding-3-large\", # Specify your embedding model here\n",
" ),\n",
" # Enable generative model from Cohere\n",
" generative_config=wc.Configure.Generative.openai(\n",
" model=\"gpt-4o\" # Specify your generative model for RAG here\n",
" ),\n",
" # Define properties of metadata\n",
" properties=[\n",
" wc.Property(name=\"text\", data_type=wc.DataType.TEXT),\n",
" wc.Property(name=\"title\", data_type=wc.DataType.TEXT, skip_vectorization=True),\n",
" ],\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "RgMcZDB9Dzfs"
},
"source": [
"### Wrangle data into an acceptable format for Weaviate\n",
"\n",
"Transform our data from lists to a list of dictionaries for insertion into our Weaviate collection."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"id": "kttDgwZEsIJQ"
},
"outputs": [],
"source": [
"# Initialize the data object\n",
"data = []\n",
"\n",
"# Create a dictionary for each row by iterating through the corresponding lists\n",
"for text, title in zip(texts, titles):\n",
" data_point = {\n",
" \"text\": text,\n",
" \"title\": title,\n",
" }\n",
" data.append(data_point)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-4amqRaoD5g0"
},
"source": [
"### Insert data into Weaviate and generate embeddings\n",
"\n",
"Embeddings will be generated upon insertion to our Weaviate collection."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "g8VCYnhbaxcz",
"outputId": "cc900e56-9fb6-4d4e-ab18-ebd12b1f4201"
},
"outputs": [],
"source": [
"# Insert text chunks and metadata into vector DB collection\n",
"response = collection.data.insert_many(data)\n",
"\n",
"if response.has_errors:\n",
" print(response.errors)\n",
"else:\n",
" print(\"Insert complete.\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KI01PxjuD_XR"
},
"source": [
"### Query the data\n",
"\n",
"Here, we perform a simple similarity search to return the most similar embedded chunks to our search query."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "zbz6nWJc5CSj",
"outputId": "16aced21-4496-4c91-cc12-d5c9ac983351"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'text': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding A distinctive feature of BERT is its unified architecture across different tasks. There is mini-', 'title': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'}\n",
"0.6578550338745117\n",
"{'text': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding We introduce a new language representation model called BERT , which stands for B idirectional E ncoder R epresentations from T ransformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be finetuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial taskspecific architecture modifications.', 'title': 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding'}\n",
"0.6696287989616394\n"
]
}
],
"source": [
"from weaviate.classes.query import MetadataQuery\n",
"\n",
"response = collection.query.near_text(\n",
" query=\"bert\",\n",
" limit=2,\n",
" return_metadata=MetadataQuery(distance=True),\n",
" return_properties=[\"text\", \"title\"],\n",
")\n",
"\n",
"for o in response.objects:\n",
" print(o.properties)\n",
" print(o.metadata.distance)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "elo32iMnEC18"
},
"source": [
"### Perform RAG on parsed articles\n",
"\n",
"Weaviate's `generate` module allows you to perform RAG over your embedded data without having to use a separate framework.\n",
"\n",
"We specify a prompt that includes the field we want to search through in the database (in this case it's `text`), a query that includes our search term, and the number of retrieved results to use in the generation."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 233
},
"id": "7r2LMSX9bO4y",
"outputId": "84639adf-7783-4d43-94d9-711fb313a168"
},
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #800000; text-decoration-color: #800000; font-weight: bold\">╭──────────────────────────────────────────────────── Prompt ─────────────────────────────────────────────────────╮</span>\n",
"<span style=\"color: #800000; text-decoration-color: #800000; font-weight: bold\">│</span> Explain how bert works, using only the retrieved context. <span style=\"color: #800000; text-decoration-color: #800000; font-weight: bold\">│</span>\n",
"<span style=\"color: #800000; text-decoration-color: #800000; font-weight: bold\">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1;31m╭─\u001b[0m\u001b[1;31m───────────────────────────────────────────────────\u001b[0m\u001b[1;31m Prompt \u001b[0m\u001b[1;31m────────────────────────────────────────────────────\u001b[0m\u001b[1;31m─╮\u001b[0m\n",
"\u001b[1;31m│\u001b[0m Explain how bert works, using only the retrieved context. \u001b[1;31m│\u001b[0m\n",
"\u001b[1;31m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">╭─────────────────────────────────────────────── Generated Content ───────────────────────────────────────────────╮</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> BERT, which stands for Bidirectional Encoder Representations from Transformers, is a language representation <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> model designed to pretrain deep bidirectional representations from unlabeled text. It conditions on both left <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> and right context in all layers, unlike traditional left-to-right or right-to-left language models. This <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> pre-training involves two unsupervised tasks. The pre-trained BERT model can then be fine-tuned with just one <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> additional output layer to create state-of-the-art models for various tasks, such as question answering and <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> language inference, without needing substantial task-specific architecture modifications. A distinctive feature <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> of BERT is its unified architecture across different tasks. <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1;32m╭─\u001b[0m\u001b[1;32m──────────────────────────────────────────────\u001b[0m\u001b[1;32m Generated Content \u001b[0m\u001b[1;32m──────────────────────────────────────────────\u001b[0m\u001b[1;32m─╮\u001b[0m\n",
"\u001b[1;32m│\u001b[0m BERT, which stands for Bidirectional Encoder Representations from Transformers, is a language representation \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m model designed to pretrain deep bidirectional representations from unlabeled text. It conditions on both left \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m and right context in all layers, unlike traditional left-to-right or right-to-left language models. This \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m pre-training involves two unsupervised tasks. The pre-trained BERT model can then be fine-tuned with just one \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m additional output layer to create state-of-the-art models for various tasks, such as question answering and \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m language inference, without needing substantial task-specific architecture modifications. A distinctive feature \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m of BERT is its unified architecture across different tasks. \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from rich.console import Console\n",
"from rich.panel import Panel\n",
"\n",
"# Create a prompt where context from the Weaviate collection will be injected\n",
"prompt = \"Explain how {text} works, using only the retrieved context.\"\n",
"query = \"bert\"\n",
"\n",
"response = collection.generate.near_text(\n",
" query=query, limit=3, grouped_task=prompt, return_properties=[\"text\", \"title\"]\n",
")\n",
"\n",
"# Prettify the output using Rich\n",
"console = Console()\n",
"\n",
"console.print(\n",
" Panel(f\"{prompt}\".replace(\"{text}\", query), title=\"Prompt\", border_style=\"bold red\")\n",
")\n",
"console.print(\n",
" Panel(response.generated, title=\"Generated Content\", border_style=\"bold green\")\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 233
},
"id": "Dtju3oCiDOdD",
"outputId": "2f0f0cf8-0305-40cc-8409-07036c101938"
},
"outputs": [
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #800000; text-decoration-color: #800000; font-weight: bold\">╭──────────────────────────────────────────────────── Prompt ─────────────────────────────────────────────────────╮</span>\n",
"<span style=\"color: #800000; text-decoration-color: #800000; font-weight: bold\">│</span> Explain how a generative adversarial net works, using only the retrieved context. <span style=\"color: #800000; text-decoration-color: #800000; font-weight: bold\">│</span>\n",
"<span style=\"color: #800000; text-decoration-color: #800000; font-weight: bold\">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1;31m╭─\u001b[0m\u001b[1;31m───────────────────────────────────────────────────\u001b[0m\u001b[1;31m Prompt \u001b[0m\u001b[1;31m────────────────────────────────────────────────────\u001b[0m\u001b[1;31m─╮\u001b[0m\n",
"\u001b[1;31m│\u001b[0m Explain how a generative adversarial net works, using only the retrieved context. \u001b[1;31m│\u001b[0m\n",
"\u001b[1;31m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">╭─────────────────────────────────────────────── Generated Content ───────────────────────────────────────────────╮</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> Generative Adversarial Nets (GANs) operate within an adversarial framework where two models are trained <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> simultaneously: a generative model (G) and a discriminative model (D). The generative model aims to capture the <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> data distribution and generate samples that mimic real data, while the discriminative model's task is to <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> distinguish between samples from the real data and those generated by G. This setup is akin to a game where the <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> generative model acts like counterfeiters trying to produce indistinguishable fake currency, and the <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> discriminative model acts like the police trying to detect these counterfeits. <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> The training process involves a minimax two-player game where G tries to maximize the probability of D making a <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> mistake, while D tries to minimize it. When both models are defined by multilayer perceptrons, they can be <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> trained using backpropagation without the need for Markov chains or approximate inference networks. The <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> ultimate goal is for G to perfectly replicate the training data distribution, making D's output equal to 1/2 <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> everywhere, indicating it cannot distinguish between real and generated data. This framework allows for <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> specific training algorithms and optimization techniques, such as backpropagation and dropout, to be <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span> effectively utilized. <span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">│</span>\n",
"<span style=\"color: #008000; text-decoration-color: #008000; font-weight: bold\">╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯</span>\n",
"</pre>\n"
],
"text/plain": [
"\u001b[1;32m╭─\u001b[0m\u001b[1;32m──────────────────────────────────────────────\u001b[0m\u001b[1;32m Generated Content \u001b[0m\u001b[1;32m──────────────────────────────────────────────\u001b[0m\u001b[1;32m─╮\u001b[0m\n",
"\u001b[1;32m│\u001b[0m Generative Adversarial Nets (GANs) operate within an adversarial framework where two models are trained \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m simultaneously: a generative model (G) and a discriminative model (D). The generative model aims to capture the \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m data distribution and generate samples that mimic real data, while the discriminative model's task is to \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m distinguish between samples from the real data and those generated by G. This setup is akin to a game where the \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m generative model acts like counterfeiters trying to produce indistinguishable fake currency, and the \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m discriminative model acts like the police trying to detect these counterfeits. \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m The training process involves a minimax two-player game where G tries to maximize the probability of D making a \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m mistake, while D tries to minimize it. When both models are defined by multilayer perceptrons, they can be \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m trained using backpropagation without the need for Markov chains or approximate inference networks. The \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m ultimate goal is for G to perfectly replicate the training data distribution, making D's output equal to 1/2 \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m everywhere, indicating it cannot distinguish between real and generated data. This framework allows for \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m specific training algorithms and optimization techniques, such as backpropagation and dropout, to be \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m│\u001b[0m effectively utilized. \u001b[1;32m│\u001b[0m\n",
"\u001b[1;32m╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯\u001b[0m\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Create a prompt where context from the Weaviate collection will be injected\n",
"prompt = \"Explain how {text} works, using only the retrieved context.\"\n",
"query = \"a generative adversarial net\"\n",
"\n",
"response = collection.generate.near_text(\n",
" query=query, limit=3, grouped_task=prompt, return_properties=[\"text\", \"title\"]\n",
")\n",
"\n",
"# Prettify the output using Rich\n",
"console = Console()\n",
"\n",
"console.print(\n",
" Panel(f\"{prompt}\".replace(\"{text}\", query), title=\"Prompt\", border_style=\"bold red\")\n",
")\n",
"console.print(\n",
" Panel(response.generated, title=\"Generated Content\", border_style=\"bold green\")\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7tGz49nfUegG"
},
"source": [
"We can see that our RAG pipeline performs relatively well for simple queries, especially given the small size of the dataset. Scaling this method for converting a larger sample of PDFs would require more compute (GPUs) and a more advanced deployment of Weaviate (like Docker, Kubernetes, or Weaviate Cloud). For more information on available Weaviate configurations, check out the [documetation](https://weaviate.io/developers/weaviate/starter-guides/which-weaviate)."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
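An editor's aside on the notebook's closing remark: with a server-based Weaviate deployment, only the connection step changes. A minimal sketch, assuming a local server on the default ports and reusing the collection created above (the name "docling" here is a placeholder, not taken from the notebook):

```python
import weaviate

# Sketch: connect to a server deployment (Docker/Kubernetes/Weaviate Cloud)
# instead of the embedded instance used in the notebook above.
client = weaviate.connect_to_local()  # assumes a server on localhost:8080

try:
    # "docling" is a hypothetical collection name; reuse the one created above
    collection = client.collections.get("docling")
    print(collection.aggregate.over_all(total_count=True))
finally:
    client.close()  # always release the connection
```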

View File

@ -12,7 +12,17 @@
"cell_type": "markdown", "cell_type": "markdown",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# Hybrid RAG with Qdrant" "# Retrieval with Qdrant"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"| Step | Tech | Execution | \n",
"| --- | --- | --- |\n",
"| Embedding | FastEmbed | 💻 Local |\n",
"| Vector store | Qdrant | 💻 Local |"
] ]
}, },
{ {
@ -47,22 +57,19 @@
 },
 {
 "cell_type": "code",
- "execution_count": null,
+ "execution_count": 1,
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
- "\n",
- "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\n",
- "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
 "Note: you may need to restart the kernel to use updated packages.\n"
 ]
 }
 ],
 "source": [
- "%pip install --no-warn-conflicts -q qdrant-client docling docling-core fastembed"
+ "%pip install --no-warn-conflicts -q qdrant-client docling fastembed"
 ]
 },
 {
@ -74,13 +81,13 @@
 },
 {
 "cell_type": "code",
- "execution_count": 1,
+ "execution_count": 2,
 "metadata": {},
 "outputs": [],
 "source": [
- "from docling_core.transforms.chunker import HierarchicalChunker\n",
 "from qdrant_client import QdrantClient\n",
 "\n",
+ "from docling.chunking import HybridChunker\n",
 "from docling.datamodel.base_models import InputFormat\n",
 "from docling.document_converter import DocumentConverter"
 ]
@ -95,36 +102,16 @@
 },
 {
 "cell_type": "code",
- "execution_count": 2,
+ "execution_count": 3,
 "metadata": {},
 "outputs": [
 {
- "data": {
- "application/vnd.jupyter.widget-view+json": {
- "model_id": "c1077c6634d9434584c41cc12f9107c9",
- "version_major": 2,
- "version_minor": 0
- },
- "text/plain": [
- "Fetching 5 files: 0%| | 0/5 [00:00<?, ?it/s]"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- },
- {
- "data": {
- "application/vnd.jupyter.widget-view+json": {
- "model_id": "67069c07b73448d491944452159d10bc",
- "version_major": 2,
- "version_minor": 0
- },
- "text/plain": [
- "Fetching 29 files: 0%| | 0/29 [00:00<?, ?it/s]"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "/Users/pva/work/github.com/DS4SD/docling/.venv/lib/python3.12/site-packages/huggingface_hub/utils/tqdm.py:155: UserWarning: Cannot enable progress bars: environment variable `HF_HUB_DISABLE_PROGRESS_BARS=1` is set and has priority.\n",
+ " warnings.warn(\n"
+ ]
 }
 ],
 "source": [
@ -149,7 +136,7 @@
 },
 {
 "cell_type": "code",
- "execution_count": 3,
+ "execution_count": 4,
 "metadata": {},
 "outputs": [],
 "source": [
@ -157,7 +144,7 @@
" \"https://www.sagacify.com/news/a-guide-to-chunking-strategies-for-retrieval-augmented-generation-rag\"\n", " \"https://www.sagacify.com/news/a-guide-to-chunking-strategies-for-retrieval-augmented-generation-rag\"\n",
")\n", ")\n",
"documents, metadatas = [], []\n", "documents, metadatas = [], []\n",
"for chunk in HierarchicalChunker().chunk(result.document):\n", "for chunk in HybridChunker().chunk(result.document):\n",
" documents.append(chunk.text)\n", " documents.append(chunk.text)\n",
" metadatas.append(chunk.meta.export_json_dict())" " metadatas.append(chunk.meta.export_json_dict())"
] ]
@ -173,95 +160,119 @@
 },
 {
 "cell_type": "code",
- "execution_count": 4,
+ "execution_count": 5,
 "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "['e74ae15be5eb4805858307846318e784',\n",
- " 'f83f6125b0fa4a0595ae6a0777c9d90d',\n",
- " '9cf63c7f30764715bf3804a19db36d7d',\n",
- " '007dbe6d355b4b49af3b736cbd63a4d8',\n",
- " 'e5e31f21f2e84aa68beca0dfc532cbe9',\n",
- " '69c10816af204bb28630a1f957d8dd3e',\n",
- " 'b63546b9b1744063bdb076b234d883ca',\n",
- " '90ad15ba8fa6494489e1d3221e30bfcf',\n",
- " '13517debb483452ea40fc7aa04c08c50',\n",
- " '84ccab5cfab74e27a55acef1c63e3fad',\n",
- " 'e8aa2ef46d234c5a8a9da64b701d60b4',\n",
- " '190bea5ba43c45e792197c50898d1d90',\n",
- " 'a730319ea65645ca81e735ace0bcc72e',\n",
- " '415e7f6f15864e30b836e23ae8d71b43',\n",
- " '5569bce4e65541868c762d149c6f491e',\n",
- " '74d9b234e9c04ebeb8e4e1ca625789ac',\n",
- " '308b1c5006a94a679f4c8d6f2396993c',\n",
- " 'aaa5ec6d385a418388e660c425bf1dbe',\n",
- " '630be8e43e4e4472a9cdb9af9462a43a',\n",
- " '643b316224de4770a5349bf69cf93471',\n",
- " 'da9265e6f6c2485493d15223eefdf411',\n",
- " 'a916e447d52c4084b5ce81a0c5a65b07',\n",
- " '2883c620858e4e728b88e127155a4f2c',\n",
- " '2a998f0e9c124af99027060b94027874',\n",
- " 'be551fbd2b9e42f48ebae0cbf1f481bc',\n",
- " '95b7f7608e974ca6847097ee4590fba1',\n",
- " '309db4f3863b4e3aaf16d5f346c309f3',\n",
- " 'c818383267f64fd68b2237b024bd724e',\n",
- " '1f16e78338c94238892171b400051cd4',\n",
- " '25c680c3e064462cab071ea9bf1bad8c',\n",
- " 'f41ab7e480a248c6bb87019341c7ca74',\n",
- " 'd440128bed6d4dcb987152b48ecd9a8a',\n",
- " 'c110d5dfdc5849808851788c2404dd15']"
- ]
- },
- "execution_count": 4,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
+ "outputs": [],
 "source": [
- "client.add(COLLECTION_NAME, documents=documents, metadata=metadatas, batch_size=64)"
+ "_ = client.add(\n",
+ " collection_name=COLLECTION_NAME,\n",
+ " documents=documents,\n",
+ " metadata=metadatas,\n",
+ " batch_size=64,\n",
+ ")"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "## Query Documents"
+ "## Retrieval"
 ]
 },
 {
 "cell_type": "code",
- "execution_count": 5,
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "points = client.query(\n",
+ " collection_name=COLLECTION_NAME,\n",
+ " query_text=\"Can I split documents?\",\n",
+ " limit=10,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
 "metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
- "<=== Retrieved documents ===>\n",
- "Document Specific Chunking is a strategy that respects the document's structure. Rather than using a set number of characters or a recursive process, it creates chunks that align with the logical sections of the document, like paragraphs or subsections. This approach maintains the original author's organization of content and helps keep the text coherent. It makes the retrieved information more relevant and useful, particularly for structured documents with clearly defined sections.\n",
- "Document Specific Chunking can handle a variety of document formats, such as:\n",
- "Consequently, there are also splitters available for this purpose.\n",
+ "=== 0 ===\n",
+ "Have you ever wondered how we, humans, would chunk? Here's a breakdown of a possible way a human would process a new document:\n",
 "1. We start at the top of the document, treating the first part as a chunk.\n",
 "   2. We continue down the document, deciding if a new sentence or piece of information belongs with the first chunk or should start a new one.\n",
 "    3. We keep this up until we reach the end of the document.\n",
- "Have you ever wondered how we, humans, would chunk? Here's a breakdown of a possible way a human would process a new document:\n",
- "The goal of chunking is, as its name says, to chunk the information into multiple smaller pieces in order to store it in a more efficient and meaningful way. This allows the retrieval to capture pieces of information that are more related to the question at hand, and the generation to be more precise, but also less costly, as only a part of a document will be included in the LLM prompt, instead of the whole document.\n",
- "To put these strategies into action, there's a whole array of tools and libraries at your disposal. For example, llama_index is a fantastic tool that lets you create document indices and retrieve chunked documents. Let's not forget LangChain, another remarkable tool that makes implementing chunking strategies a breeze, particularly when dealing with multi-language data. Diving into these tools and understanding how they can work in harmony with the chunking strategies we've discussed is a crucial part of mastering Retrieval Augmented Generation.\n",
- "Semantic chunking involves taking the embeddings of every sentence in the document, comparing the similarity of all sentences with each other, and then grouping sentences with the most similar embeddings together.\n",
+ "The ultimate dream? Having an agent do this for you. But slow down! This approach is still being tested and isn't quite ready for the big leagues due to the time it takes to process multiple LLM calls and the cost of those calls. There's no implementation available in public libraries just yet. However, Greg Kamradt has his version available here.\n",
+ "\n",
+ "=== 1 ===\n",
+ "Document Specific Chunking is a strategy that respects the document's structure. Rather than using a set number of characters or a recursive process, it creates chunks that align with the logical sections of the document, like paragraphs or subsections. This approach maintains the original author's organization of content and helps keep the text coherent. It makes the retrieved information more relevant and useful, particularly for structured documents with clearly defined sections.\n",
+ "Document Specific Chunking can handle a variety of document formats, such as:\n",
+ "Markdown\n",
+ "HTML\n",
+ "Python\n",
+ "etc\n",
+ "Here well take Markdown as our example and use a modified version of our first sample text:\n",
+ "\n",
+ "The result is the following:\n",
 "You can see here that with a chunk size of 105, the Markdown structure of the document is taken into account, and the chunks thus preserve the semantics of the text!\n",
- "And there you have it! These chunking strategies are like a personal toolbox when it comes to implementing Retrieval Augmented Generation. They're a ton of ways to slice and dice text, each with its unique features and quirks. This variety gives you the freedom to pick the strategy that suits your project best, allowing you to tailor your approach to perfectly fit the unique needs of your work.\n"
+ "\n",
+ "=== 2 ===\n",
+ "And there you have it! These chunking strategies are like a personal toolbox when it comes to implementing Retrieval Augmented Generation. They're a ton of ways to slice and dice text, each with its unique features and quirks. This variety gives you the freedom to pick the strategy that suits your project best, allowing you to tailor your approach to perfectly fit the unique needs of your work.\n",
+ "To put these strategies into action, there's a whole array of tools and libraries at your disposal. For example, llama_index is a fantastic tool that lets you create document indices and retrieve chunked documents. Let's not forget LangChain, another remarkable tool that makes implementing chunking strategies a breeze, particularly when dealing with multi-language data. Diving into these tools and understanding how they can work in harmony with the chunking strategies we've discussed is a crucial part of mastering Retrieval Augmented Generation.\n",
+ "By the way, if you're eager to experiment with your own examples using the chunking visualisation tool featured in this blog, feel free to give it a try! You can access it right here. Enjoy, and happy chunking! 😉\n",
+ "\n",
+ "=== 3 ===\n",
+ "Retrieval Augmented Generation (RAG) has been a hot topic in understanding, interpreting, and generating text with AI for the last few months. It's like a wonderful union of retrieval-based and generative models, creating a playground for researchers, data scientists, and natural language processing enthusiasts, like you and me.\n",
+ "To truly control the results produced by our RAG, we need to understand chunking strategies and their role in the process of retrieving and generating text. Indeed, each chunking strategy enhances RAG's effectiveness in its unique way.\n",
+ "The goal of chunking is, as its name says, to chunk the information into multiple smaller pieces in order to store it in a more efficient and meaningful way. This allows the retrieval to capture pieces of information that are more related to the question at hand, and the generation to be more precise, but also less costly, as only a part of a document will be included in the LLM prompt, instead of the whole document.\n",
+ "Let's explore some chunking strategies together.\n",
+ "The methods mentioned in the article you're about to read usually make use of two key parameters. First, we have [chunk_size]— which controls the size of your text chunks. Then there's [chunk_overlap], which takes care of how much text overlaps between one chunk and the next.\n",
+ "\n",
+ "=== 4 ===\n",
+ "Semantic Chunking considers the relationships within the text. It divides the text into meaningful, semantically complete chunks. This approach ensures the information's integrity during retrieval, leading to a more accurate and contextually appropriate outcome.\n",
+ "Semantic chunking involves taking the embeddings of every sentence in the document, comparing the similarity of all sentences with each other, and then grouping sentences with the most similar embeddings together.\n",
+ "By focusing on the text's meaning and context, Semantic Chunking significantly enhances the quality of retrieval. It's a top-notch choice when maintaining the semantic integrity of the text is vital.\n",
+ "However, this method does require more effort and is notably slower than the previous ones.\n",
+ "On our example text, since it is quite short and does not expose varied subjects, this method would only generate a single chunk.\n",
+ "\n",
+ "=== 5 ===\n",
+ "Language models used in the rest of your possible RAG pipeline have a token limit, which should not be exceeded. When dividing your text into chunks, it's advisable to count the number of tokens. Plenty of tokenizers are available. To ensure accuracy, use the same tokenizer for counting tokens as the one used in the language model.\n",
+ "Consequently, there are also splitters available for this purpose.\n",
+ "For instance, by using the [SpacyTextSplitter] from LangChain, the following chunks are created:\n",
+ "\n",
+ "\n",
+ "=== 6 ===\n",
+ "First things first, we have Character Chunking. This strategy divides the text into chunks based on a fixed number of characters. Its simplicity makes it a great starting point, but it can sometimes disrupt the text's flow, breaking sentences or words in unexpected places. Despite its limitations, it's a great stepping stone towards more advanced methods.\n",
+ "Now lets see that in action with an example. Imagine a text that reads:\n",
+ "If we decide to set our chunk size to 100 and no chunk overlap, we'd end up with the following chunks. As you can see, Character Chunking can lead to some intriguing, albeit sometimes nonsensical, results, cutting some of the sentences in their middle.\n",
+ "By choosing a smaller chunk size,  we would obtain more chunks, and by setting a bigger chunk overlap, we could obtain something like this:\n",
+ "\n",
+ "Also, by default this method creates chunks character by character based on the empty character [ ]. But you can specify a different one in order to chunk on something else, even a complete word! For instance, by specifying [' '] as the separator, you can avoid cutting words in their middle.\n",
+ "\n",
+ "=== 7 ===\n",
+ "Next, let's take a look at Recursive Character Chunking. Based on the basic concept of Character Chunking, this advanced version takes it up a notch by dividing the text into chunks until a certain condition is met, such as reaching a minimum chunk size. This method ensures that the chunking process aligns with the text's structure, preserving more meaning. Its adaptability makes Recursive Character Chunking great for texts with varied structures.\n",
+ "Again, lets use the same example in order to illustrate this method. With a chunk size of 100, and the default settings for the other parameters, we obtain the following chunks:\n",
+ "\n"
 ]
 }
 ],
 "source": [
- "points = client.query(COLLECTION_NAME, query_text=\"Can I split documents?\", limit=10)\n",
- "\n",
- "print(\"<=== Retrieved documents ===>\")\n",
- "for point in points:\n",
- " print(point.document)"
+ "for i, point in enumerate(points):\n",
+ " print(f\"=== {i} ===\")\n",
+ " print(point.document)\n",
+ " print()"
 ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
 }
 ],
 "metadata": {
@ -280,7 +291,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.13.0" "version": "3.12.7"
} }
}, },
"nbformat": 4, "nbformat": 4,

View File

@ -21,7 +21,7 @@ Docling parses documents and exports them to the desired format with ease and sp
 * 🗂️ Reads popular document formats (PDF, DOCX, PPTX, XLSX, Images, HTML, AsciiDoc & Markdown) and exports to HTML, Markdown and JSON (with embedded and referenced images)
 * 📑 Advanced PDF document understanding incl. page layout, reading order & table structures
 * 🧩 Unified, expressive [DoclingDocument](./concepts/docling_document.md) representation format
-* 🤖 Easy integration with 🦙 LlamaIndex & 🦜🔗 LangChain for powerful RAG / QA applications
+* 🤖 Plug-and-play [integrations](https://ds4sd.github.io/docling/integrations/) incl. LangChain, LlamaIndex, Crew AI & Haystack for agentic AI
 * 🔍 OCR support for scanned PDFs
 * 💻 Simple and convenient CLI
@ -29,7 +29,15 @@ Docling parses documents and exports them to the desired format with ease and sp
 * ♾️ Equation & code extraction
 * 📝 Metadata extraction, including title, authors, references & language
-* 🦜🔗 Native LangChain extension
+## Get started
+<div class="grid">
+  <a href="concepts/" class="card"><b>Concepts</b><br />Learn Docling fundamentals</a>
+  <a href="examples/" class="card"><b>Examples</b><br />Try out recipes for various use cases, including conversion, RAG, and more</a>
+  <a href="integrations/" class="card"><b>Integrations</b><br />Check out integrations with popular frameworks and tools</a>
+  <a href="reference/document_converter/" class="card"><b>Reference</b><br />See more API details</a>
+</div>
 ## IBM ❤️ Open Source AI

View File

@ -0,0 +1,10 @@
Docling is available in [CrewAI](https://www.crewai.com/) as the `CrewDoclingSource`
knowledge source.
- 💻 [Crew AI GitHub][github]
- 📖 [Crew AI knowledge docs][docs]
- 📦 [Crew AI PyPI][package]
[github]: https://github.com/crewAIInc/crewAI/
[docs]: https://docs.crewai.com/concepts/knowledge
[package]: https://pypi.org/project/crewai/
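For orientation, a minimal sketch of wiring the knowledge source into a crew; the import path and constructor arguments follow the Crew AI knowledge docs but are assumptions here, not verbatim from this page:

```python
from crewai import Agent, Crew, Task
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource

# Docling parses the file/URL and exposes it as crew knowledge (assumed API)
source = CrewDoclingSource(file_paths=["https://arxiv.org/pdf/2408.09869"])

agent = Agent(
    role="Researcher",
    goal="Answer questions grounded in the attached documents",
    backstory="Careful analyst who cites sources",
)
task = Task(
    description="Summarize what Docling does.",
    expected_output="A short, grounded summary",
    agent=agent,
)
crew = Crew(agents=[agent], tasks=[task], knowledge_sources=[source])
result = crew.kickoff()
```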

View File

@ -0,0 +1,11 @@
Docling is available as a converter in [Haystack](https://haystack.deepset.ai/):
- 📖 [Docling Haystack integration docs][docs]
- 💻 [Docling Haystack integration GitHub][github]
- 🧑🏽‍🍳 [Docling Haystack integration example][example]
- 📦 [Docling Haystack integration PyPI][pypi]
[github]: https://github.com/DS4SD/docling-haystack
[docs]: https://haystack.deepset.ai/integrations/docling
[pypi]: https://pypi.org/project/docling-haystack
[example]: ../examples/rag_haystack.ipynb
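For orientation, a sketch of the converter inside an indexing pipeline; the socket names (`paths` in, `documents` out) are assumptions based on the linked integration example, so verify them against the docs:

```python
from docling_haystack.converter import DoclingConverter
from haystack import Pipeline

indexing = Pipeline()
# Assumed API: the converter takes document paths/URLs and emits
# Haystack Document objects built from Docling's parse.
indexing.add_component("converter", DoclingConverter())
result = indexing.run({"converter": {"paths": ["https://arxiv.org/pdf/2408.09869"]}})
print(result["converter"]["documents"][0])
```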

View File

@ -0,0 +1,9 @@
Docling is available as a [LangChain](https://www.langchain.com/) document loader:
- 💻 [LangChain Docling integration GitHub][github]
- 🧑🏽‍🍳 [LangChain Docling integration example][example]
- 📦 [LangChain Docling integration PyPI][pypi]
[github]: https://github.com/DS4SD/docling-langchain
[example]: ../examples/rag_langchain.ipynb
[pypi]: https://pypi.org/project/langchain-docling/
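A minimal sketch of the loader, assuming `langchain-docling` exposes a `DoclingLoader` taking a `file_path` and returning LangChain documents via `load()`:

```python
from langchain_docling import DoclingLoader

# Docling converts the source; the loader wraps the result as LangChain documents
loader = DoclingLoader(file_path="https://arxiv.org/pdf/2408.09869")
docs = loader.load()
print(docs[0].page_content[:200])
```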

View File

@ -0,0 +1,6 @@
Docling is powering the NVIDIA *PDF to Podcast* agentic AI blueprint:
- [🏠 PDF to Podcast home](https://build.nvidia.com/nvidia/pdf-to-podcast)
- [💻 PDF to Podcast GitHub](https://github.com/NVIDIA-AI-Blueprints/pdf-to-podcast)
- [📣 PDF to Podcast announcement](https://nvidianews.nvidia.com/news/nvidia-launches-ai-foundation-models-for-rtx-ai-pcs)
- [✍️ PDF to Podcast blog post](https://blogs.nvidia.com/blog/agentic-ai-blueprints/)

View File

@ -0,0 +1,5 @@
Docling is available as an ingestion engine for [OpenContracts](https://github.com/JSv4/OpenContracts), allowing you to use Docling's OCR engine(s), chunker(s), labels, etc. and load them into a platform supporting bulk data extraction, text annotation, and question-answering:
- 💻 [OpenContracts GitHub](https://github.com/JSv4/OpenContracts)
- 📖 [OpenContracts Docs](https://jsv4.github.io/OpenContracts/)
- ▶️ [OpenContracts x Docling PDF annotation screen capture](https://github.com/JSv4/OpenContracts/blob/main/docs/assets/images/gifs/PDF%20Annotation%20Flow.gif)

View File

@ -1,10 +1,8 @@
-Docling is powering document processing in [Red Hat Enterprise Linux AI][home] (RHEL AI),
+Docling is powering document processing in [Red Hat Enterprise Linux AI (RHEL AI)](https://rhel.ai),
 enabling users to unlock the knowledge hidden in documents and present it to
 InstructLab's fine-tuning for aligning AI models to the user's specific data.
-More details can be found in this [blog post][blog].
-- 🏠 [RHEL AI home][home]
-[home]: https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux/ai
-[blog]: https://www.redhat.com/en/blog/docling-missing-document-processing-companion-generative-ai
+- 📣 [RHEL AI 1.3 announcement](https://www.redhat.com/en/about/press-releases/red-hat-delivers-next-wave-gen-ai-innovation-new-red-hat-enterprise-linux-ai-capabilities)
+- ✍️ RHEL blog posts:
+  - [RHEL AI 1.3 Docling context aware chunking: What you need to know](https://www.redhat.com/en/blog/rhel-13-docling-context-aware-chunking-what-you-need-know)
+  - [Docling: The missing document processing companion for generative AI](https://www.redhat.com/en/blog/docling-missing-document-processing-companion-generative-ai)

View File

@ -0,0 +1,5 @@
Docling is available as a document parser in [Vectara](https://www.vectara.com/).
- 💻 [Vectara GitHub org](https://github.com/vectara)
- [vectara-ingest GitHub repo](https://github.com/vectara/vectara-ingest)
- 📖 [Vectara docs](https://docs.vectara.com/)

View File

@ -32,6 +32,7 @@ This is an automatic generated API reference of the DoclingDocument type.
 - CoordOrigin
 - ImageRefMode
 - Size
+docstring_style: sphinx
 show_if_no_docstring: true
 show_submodules: true
 docstring_section_style: list

View File

@ -65,7 +65,7 @@ nav:
 - Chunking: concepts/chunking.md
 - Examples:
 - Examples: examples/index.md
-- Conversion:
+- 🔀 Conversion:
 - "Simple conversion": examples/minimal.py
 - "Custom conversion": examples/custom_convert.py
 - "Batch conversion": examples/batch_convert.py
@ -75,27 +75,38 @@ nav:
- "Table export": examples/export_tables.py - "Table export": examples/export_tables.py
- "Multimodal export": examples/export_multimodal.py - "Multimodal export": examples/export_multimodal.py
- "Force full page OCR": examples/full_page_ocr.py - "Force full page OCR": examples/full_page_ocr.py
- "Accelerator options": examples/run_with_acclerators.py - "Accelerator options": examples/run_with_accelerator.py
- Chunking: - ✂️ Chunking:
- "Hybrid chunking": examples/hybrid_chunking.ipynb - "Hybrid chunking": examples/hybrid_chunking.ipynb
- RAG / QA: - 💬 RAG / QA:
- "RAG with LlamaIndex 🦙": examples/rag_llamaindex.ipynb - examples/rag_haystack.ipynb
- "RAG with LangChain 🦜🔗": examples/rag_langchain.ipynb - examples/rag_llamaindex.ipynb
- "Hybrid RAG with Qdrant": examples/hybrid_rag_qdrant.ipynb - examples/rag_langchain.ipynb
- examples/rag_weaviate.ipynb
- RAG with Granite [↗]: https://github.com/ibm-granite-community/granite-snack-cookbook/blob/main/recipes/RAG/Granite_Docling_RAG.ipynb
- examples/retrieval_qdrant.ipynb
- Integrations: - Integrations:
- Integrations: integrations/index.md - Integrations: integrations/index.md
- "🐝 Bee": integrations/bee.md - 🤖 Agentic / AI dev frameworks:
- "Cloudera": integrations/cloudera.md - "Bee Agent Framework": integrations/bee.md
- "Data Prep Kit": integrations/data_prep_kit.md - "Crew AI": integrations/crewai.md
- "DocETL": integrations/docetl.md - "Haystack": integrations/haystack.md
- "🐶 InstructLab": integrations/instructlab.md - "LangChain": integrations/langchain.md
- "Kotaemon": integrations/kotaemon.md - "LlamaIndex": integrations/llamaindex.md
- "🦙 LlamaIndex": integrations/llamaindex.md - "txtai": integrations/txtai.md
- "Prodigy": integrations/prodigy.md - ⭐️ Featured:
- "Red Hat Enterprise Linux AI": integrations/rhel_ai.md - "Data Prep Kit": integrations/data_prep_kit.md
- "spaCy": integrations/spacy.md - "InstructLab": integrations/instructlab.md
- "txtai": integrations/txtai.md - "NVIDIA": integrations/nvidia.md
# - "LangChain 🦜🔗": integrations/langchain.md - "Prodigy": integrations/prodigy.md
- "RHEL AI": integrations/rhel_ai.md
- "spaCy": integrations/spacy.md
- 🗂️ More integrations:
- "Cloudera": integrations/cloudera.md
- "DocETL": integrations/docetl.md
- "Kotaemon": integrations/kotaemon.md
- "OpenContracts": integrations/opencontracts.md
- "Vectara": integrations/vectara.md
- Reference: - Reference:
- Python API: - Python API:
- Document Converter: reference/document_converter.md - Document Converter: reference/document_converter.md

poetry.lock (generated, 1582 lines): file diff suppressed because it is too large

View File

@ -1,6 +1,6 @@
 [tool.poetry]
 name = "docling"
-version = "2.12.0" # DO NOT EDIT, updated automatically
+version = "2.15.0" # DO NOT EDIT, updated automatically
 description = "SDK and CLI for parsing PDF, DOCX, HTML, and more, to a unified document representation for powering downstream workflows such as gen AI applications."
 authors = ["Christoph Auer <cau@zurich.ibm.com>", "Michele Dolfi <dol@zurich.ibm.com>", "Maxim Lysak <mly@zurich.ibm.com>", "Nikos Livathinos <nli@zurich.ibm.com>", "Ahmed Nassar <ahn@zurich.ibm.com>", "Panos Vagenas <pva@zurich.ibm.com>", "Peter Staar <taa@zurich.ibm.com>"]
 license = "MIT"
@ -25,7 +25,7 @@ packages = [{include = "docling"}]
 # actual dependencies:
 ######################
 python = "^3.9"
-docling-core = { version = "^2.9.0", extras = ["chunking"] }
+docling-core = { version = "^2.13.1", extras = ["chunking"] }
 pydantic = "^2.0.0"
 docling-ibm-models = "^3.1.0"
 deepsearch-glm = "^1.0.0"
@ -34,7 +34,7 @@ filetype = "^1.2.0"
 pypdfium2 = "^4.30.0"
 pydantic-settings = "^2.3.0"
 huggingface_hub = ">=0.23,<1"
-requests = "^2.32.3"
+requests = "^2.32.2"
 easyocr = "^1.7"
 tesserocr = { version = "^2.7.1", optional = true }
 certifi = ">=2024.7.4"

View File

@ -1,23 +1,28 @@
 <document>
-<subtitle-level-1><location><page_1><loc_16><loc_85><loc_82><loc_87></location>TableFormer: Table Structure Understanding with Transformers.</subtitle-level-1>
+<subtitle-level-1><location><page_1><loc_16><loc_85><loc_82><loc_86></location>TableFormer: Table Structure Understanding with Transformers.</subtitle-level-1>
-<subtitle-level-1><location><page_1><loc_23><loc_78><loc_74><loc_82></location>Ahmed Nassar, Nikolaos Livathinos, Maksym Lysak, Peter Staar IBM Research</subtitle-level-1>
+<subtitle-level-1><location><page_1><loc_23><loc_78><loc_74><loc_81></location>Ahmed Nassar, Nikolaos Livathinos, Maksym Lysak, Peter Staar IBM Research</subtitle-level-1>
 <paragraph><location><page_1><loc_34><loc_77><loc_62><loc_78></location>{ ahn,nli,mly,taa } @zurich.ibm.com</paragraph>
 <subtitle-level-1><location><page_1><loc_24><loc_71><loc_31><loc_73></location>Abstract</subtitle-level-1>
-<subtitle-level-1><location><page_1><loc_52><loc_71><loc_67><loc_73></location>a. Picture of a table:</subtitle-level-1>
+<subtitle-level-1><location><page_1><loc_52><loc_71><loc_67><loc_72></location>a. Picture of a table:</subtitle-level-1>
 <subtitle-level-1><location><page_1><loc_8><loc_30><loc_21><loc_32></location>1. Introduction</subtitle-level-1>
 <paragraph><location><page_1><loc_8><loc_10><loc_47><loc_29></location>The occurrence of tables in documents is ubiquitous. They often summarise quantitative or factual data, which is cumbersome to describe in verbose text but nevertheless extremely valuable. Unfortunately, this compact representation is often not easy to parse by machines. There are many implicit conventions used to obtain a compact table representation. For example, tables often have complex columnand row-headers in order to reduce duplicated cell content. Lines of different shapes and sizes are leveraged to separate content or indicate a tree structure. Additionally, tables can also have empty/missing table-entries or multi-row textual table-entries. Fig. 1 shows a table which presents all these issues.</paragraph>
+<figure>
+<location><page_1><loc_52><loc_62><loc_88><loc_71></location>
+</figure>
 <caption><location><page_1><loc_8><loc_35><loc_47><loc_70></location>Tables organize valuable content in a concise and compact representation. This content is extremely valuable for systems such as search engines, Knowledge Graph's, etc, since they enhance their predictive capabilities. Unfortunately, tables come in a large variety of shapes and sizes. Furthermore, they can have complex column/row-header configurations, multiline rows, different variety of separation lines, missing entries, etc. As such, the correct identification of the table-structure from an image is a nontrivial task. In this paper, we present a new table-structure identification model. The latter improves the latest end-toend deep learning model (i.e. encoder-dual-decoder from PubTabNet) in two significant ways. First, we introduce a new object detection decoder for table-cells. In this way, we can obtain the content of the table-cells from programmatic PDF's directly from the PDF source and avoid the training of the custom OCR decoders. This architectural change leads to more accurate table-content extraction and allows us to tackle non-english tables. Second, we replace the LSTM decoders with transformer based decoders. This upgrade improves significantly the previous state-of-the-art tree-editing-distance-score (TEDS) from 91% to 98.5% on simple tables and from 88.7% to 95% on complex tables.</caption>
 <table>
 <location><page_1><loc_52><loc_62><loc_88><loc_71></location>
 <caption>Tables organize valuable content in a concise and compact representation. This content is extremely valuable for systems such as search engines, Knowledge Graph's, etc, since they enhance their predictive capabilities. Unfortunately, tables come in a large variety of shapes and sizes. Furthermore, they can have complex column/row-header configurations, multiline rows, different variety of separation lines, missing entries, etc. As such, the correct identification of the table-structure from an image is a nontrivial task. In this paper, we present a new table-structure identification model. The latter improves the latest end-toend deep learning model (i.e. encoder-dual-decoder from PubTabNet) in two significant ways. First, we introduce a new object detection decoder for table-cells. In this way, we can obtain the content of the table-cells from programmatic PDF's directly from the PDF source and avoid the training of the custom OCR decoders. This architectural change leads to more accurate table-content extraction and allows us to tackle non-english tables. Second, we replace the LSTM decoders with transformer based decoders. This upgrade improves significantly the previous state-of-the-art tree-editing-distance-score (TEDS) from 91% to 98.5% on simple tables and from 88.7% to 95% on complex tables.</caption>
 <row_0><col_0><col_header>3</col_0><col_1><col_header>1</col_1></row_0>
 </table>
-<paragraph><location><page_1><loc_52><loc_58><loc_79><loc_60></location>b. Red-annotation of bounding boxes, Blue-predictions by TableFormer</paragraph>
+<paragraph><location><page_1><loc_52><loc_58><loc_79><loc_60></location>- b. Red-annotation of bounding boxes, Blue-predictions by TableFormer</paragraph>
 <figure>
 <location><page_1><loc_51><loc_48><loc_88><loc_57></location>
 </figure>
-<paragraph><location><page_1><loc_52><loc_46><loc_53><loc_47></location>c.</paragraph>
-<paragraph><location><page_1><loc_54><loc_46><loc_80><loc_47></location>Structure predicted by TableFormer:</paragraph>
+<paragraph><location><page_1><loc_52><loc_46><loc_80><loc_47></location>- c. Structure predicted by TableFormer:</paragraph>
+<figure>
+<location><page_1><loc_52><loc_37><loc_88><loc_45></location>
+</figure>
 <caption><location><page_1><loc_50><loc_29><loc_89><loc_35></location>Figure 1: Picture of a table with subtle, complex features such as (1) multi-column headers, (2) cell with multi-row text and (3) cells with no content. Image from PubTabNet evaluation set, filename: 'PMC2944238 004 02'.</caption>
 <table>
 <location><page_1><loc_52><loc_37><loc_88><loc_45></location>
@@ -31,7 +36,7 @@
<paragraph><location><page_1><loc_50><loc_16><loc_89><loc_26></location>Recently, significant progress has been made with vision-based approaches to extract tables in documents. For the sake of completeness, the issue of table extraction from documents is typically decomposed into two separate challenges, i.e. (1) finding the location of the table(s) on a document-page and (2) finding the structure of a given table in the document.</paragraph>
<paragraph><location><page_1><loc_50><loc_10><loc_89><loc_16></location>The first problem is called table-location and has been previously addressed [30, 38, 19, 21, 23, 26, 8] with state-of-the-art object-detection networks (e.g. YOLO and later on Mask-RCNN [9]). For all practical purposes, it can be</paragraph>
<paragraph><location><page_2><loc_8><loc_88><loc_47><loc_91></location>considered a solved problem, given enough ground-truth data to train on.</paragraph>
<paragraph><location><page_2><loc_8><loc_71><loc_47><loc_87></location>The second problem is called table-structure decomposition. The latter is a long-standing problem in the community of document understanding [6, 4, 14]. Contrary to the table-location problem, there are no commonly used approaches that can easily be re-purposed to solve this problem. Lately, a set of new model-architectures has been proposed by the community to address table-structure decomposition [37, 36, 18, 20]. All these models have some weaknesses (see Sec. 2). The common denominator here is the reliance on textual features and/or the inability to provide the bounding box of each table-cell in the original image.</paragraph>
<paragraph><location><page_2><loc_8><loc_53><loc_47><loc_71></location>In this paper, we want to address these weaknesses and present a robust table-structure decomposition algorithm. The design criteria for our model are the following. First, we want our algorithm to be language agnostic. In this way, we can obtain the structure of any table, regardless of the language. Second, we want our algorithm to leverage as much data as possible from the original PDF document. For programmatic PDF documents, the text-cells can often be extracted much faster and with higher accuracy compared to OCR methods. Last but not least, we want to have a direct link between the table-cell and its bounding box in the image.</paragraph>
<paragraph><location><page_2><loc_8><loc_45><loc_47><loc_53></location>To meet the design criteria listed above, we developed a new model called TableFormer and a synthetically generated table structure dataset called SynthTabNet $^{1}$. In particular, our contributions in this work can be summarised as follows:</paragraph>
<paragraph><location><page_2><loc_10><loc_38><loc_47><loc_44></location>- · We propose TableFormer , a transformer-based model that predicts table structure and bounding boxes for the table content simultaneously in an end-to-end approach.</paragraph>
@@ -75,10 +80,10 @@
<row_5><col_0><row_header>Combined(**)</col_0><col_1><body>3</col_1><col_2><body>3</col_2><col_3><body>500k</col_3><col_4><body>PNG</col_4></row_5>
<row_6><col_0><row_header>SynthTabNet</col_0><col_1><body>3</col_1><col_2><body>3</col_2><col_3><body>600k</col_3><col_4><body>PNG</col_4></row_6>
</table>
<paragraph><location><page_4><loc_50><loc_63><loc_89><loc_68></location>one adopts a colorful appearance with high contrast and the last one contains tables with sparse content. Lastly, we have combined all synthetic datasets into one big unified synthetic dataset of 600k examples.</paragraph>
<paragraph><location><page_4><loc_52><loc_61><loc_89><loc_62></location>Tab. 1 summarizes the various attributes of the datasets.</paragraph>
<subtitle-level-1><location><page_4><loc_50><loc_58><loc_73><loc_59></location>4. The TableFormer model</subtitle-level-1>
<paragraph><location><page_4><loc_50><loc_44><loc_89><loc_57></location>Given the image of a table, TableFormer is able to predict: 1) a sequence of tokens that represent the structure of a table, and 2) a bounding box coupled to a subset of those tokens. The conversion of an image into a sequence of tokens is a well-known task [35, 16]. While attention is often used as an implicit method to associate each token of the sequence with a position in the original image, an explicit association between the individual table-cells and the image bounding boxes is also required.</paragraph>
<subtitle-level-1><location><page_4><loc_50><loc_41><loc_69><loc_42></location>4.1. Model architecture.</subtitle-level-1>
<paragraph><location><page_4><loc_50><loc_16><loc_89><loc_40></location>We now describe in detail the proposed method, which is composed of three main components, see Fig. 4. Our CNN Backbone Network encodes the input as a feature vector of predefined length. The input feature vector of the encoded image is passed to the Structure Decoder to produce a sequence of HTML tags that represent the structure of the table. With each prediction of an HTML standard data cell (' < td > ') the hidden state of that cell is passed to the Cell BBox Decoder. As for spanning cells, such as row or column span, the tag is broken down to ' < ', 'rowspan=' or 'colspan=', with the number of spanning cells (attribute), and ' > '. The hidden state attached to ' < ' is passed to the Cell BBox Decoder. A shared feed forward network (FFN) receives the hidden states from the Structure Decoder, to provide the final detection predictions of the bounding box coordinates and their classification.</paragraph>
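To make the tag handling above concrete, the following is a small illustrative sketch (not the authors' code) of how a row of cells could be serialized into structure tokens: standard cells stay as a single ' < td > ' token, while spanning cells are broken into ' < ', span attributes, and ' > ' so that the opening ' < ' token can carry the bbox hidden state. The function name and exact tag spelling are assumptions.

```python
def tokenize_row(cells):
    """cells: list of (rowspan, colspan) pairs for one table row."""
    tokens = ["<tr>"]
    for rowspan, colspan in cells:
        if rowspan == 1 and colspan == 1:
            tokens.append("<td>")  # hidden state goes to the Cell BBox Decoder
        else:
            tokens.append("<")     # hidden state goes to the Cell BBox Decoder
            if rowspan > 1:
                tokens.append(f'rowspan="{rowspan}"')
            if colspan > 1:
                tokens.append(f'colspan="{colspan}"')
            tokens.append(">")
        tokens.append("</td>")
    tokens.append("</tr>")
    return tokens

# tokenize_row([(1, 1), (2, 1)]) ->
# ['<tr>', '<td>', '</td>', '<', 'rowspan="2"', '>', '</td>', '</tr>']
```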
<paragraph><location><page_4><loc_50><loc_10><loc_89><loc_16></location>CNN Backbone Network. A ResNet-18 CNN is the backbone that receives the table image and encodes it as a vector of predefined length. The network has been modified by removing the linear and pooling layer, as we are not per-</paragraph>
@@ -92,22 +97,22 @@
<location><page_5><loc_9><loc_36><loc_47><loc_67></location>
<caption>Figure 4: Given an input image of a table, the Encoder produces fixed-length features that represent the input image. The features are then passed to both the Structure Decoder and Cell BBox Decoder . During training, the Structure Decoder receives 'tokenized tags' of the HTML code that represent the table structure. Afterwards, a transformer encoder and decoder architecture is employed to produce features that are received by a linear layer, and the Cell BBox Decoder. The linear layer is applied to the features to predict the tags. Simultaneously, the Cell BBox Decoder selects features referring to the data cells (' < td > ', ' < ') and passes them through an attention network, an MLP, and a linear layer to predict the bounding boxes.</caption>
</figure>
<paragraph><location><page_5><loc_50><loc_63><loc_89><loc_68></location>forming classification, and adding an adaptive pooling layer of size 28*28. ResNet by default downsamples the image resolution by 32 and then the encoded image is provided to both the Structure Decoder , and Cell BBox Decoder .</paragraph>
<paragraph><location><page_5><loc_50><loc_48><loc_89><loc_62></location>Structure Decoder. The transformer architecture of this component is based on the work proposed in [31]. After extensive experimentation, the Structure Decoder is modeled as a transformer encoder with two encoder layers and a transformer decoder made from a stack of 4 decoder layers that consist mainly of multi-head attention and feed forward layers. This configuration uses fewer layers and heads in comparison to networks applied to other problems (e.g. "Scene Understanding", "Image Captioning"), something which we relate to the simplicity of table images.</paragraph>
<paragraph><location><page_5><loc_50><loc_31><loc_89><loc_47></location>The transformer encoder receives an encoded image from the CNN Backbone Network and refines it through a multi-head dot-product attention layer, followed by a Feed Forward Network. During training, the transformer decoder receives as input the output feature produced by the transformer encoder, and the tokenized input of the HTML ground-truth tags. Using a stack of multi-head attention layers, different aspects of the tag sequence can be inferred. This is achieved by each attention head on a layer operating in a different subspace, and then combining their attention scores.</paragraph>
<paragraph><location><page_5><loc_50><loc_18><loc_89><loc_31></location>Cell BBox Decoder. Our architecture allows us to simultaneously predict HTML tags and bounding boxes for each table cell end-to-end, without the need for a separate object detector. This approach is inspired by DETR [1], which employs a Transformer Encoder and Decoder that looks for a specific number of object queries (potential object detections). As our model utilizes a transformer architecture, the hidden states of the ' < td > ' and ' < ' HTML structure tags become the object queries.</paragraph>
<paragraph><location><page_5><loc_50><loc_10><loc_89><loc_17></location>The encoding generated by the CNN Backbone Network along with the features acquired for every data cell from the Transformer Decoder are then passed to the attention network. The attention network takes both inputs and learns to provide an attention weighted encoding. This weighted at-</paragraph>
<paragraph><location><page_6><loc_8><loc_80><loc_47><loc_91></location>tention encoding is then multiplied to the encoded image to produce a feature for each table cell. Notice that this is different from the typical object detection problem where imbalances between the number of detections and the number of objects may exist. In our case, we know up front that the produced detections always match the table cells in number and correspondence.</paragraph>
<paragraph><location><page_6><loc_8><loc_70><loc_47><loc_80></location>The output features for each table cell are then fed into the feed-forward network (FFN). The FFN consists of a Multi-Layer Perceptron (3 layers with ReLU activation function) that predicts the normalized coordinates for the bounding box of each table cell. Finally, the predicted bounding boxes are classified based on whether they are empty or not using a linear layer.</paragraph>
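Putting the three components together, a minimal PyTorch sketch of this architecture could look as follows. This is an approximation under stated assumptions, not the published implementation: the attention network between the cell hidden states and the encoded image is collapsed into the MLP head, the ResNet blocks added to the decoder inputs (see Sec. 5.1) are omitted, and names such as `TableFormerSketch` and `cell_token_mask` are illustrative. Layer sizes follow the figures quoted in Sec. 5.1 (512-dim features, 2 encoder layers, 4 decoder layers, 4 heads, dropout 0.5).

```python
import torch
import torch.nn as nn
import torchvision

class TableFormerSketch(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=4, dim_ff=1024, dropout=0.5):
        super().__init__()
        # CNN Backbone: ResNet-18 without its final pooling and linear layers.
        # How the paper reaches a 28*28 feature map is not fully specified, so
        # we adaptively pool to that size as an assumption.
        resnet = torchvision.models.resnet18()
        self.backbone = nn.Sequential(*list(resnet.children())[:-2],
                                      nn.AdaptiveAvgPool2d(28))
        self.proj = nn.Conv2d(512, d_model, kernel_size=1)
        # Structure Decoder: 2 transformer encoder layers + 4 decoder layers.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, dim_ff, dropout), 2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, dim_ff, dropout), 4)
        self.tag_embed = nn.Embedding(vocab_size, d_model)
        self.tag_head = nn.Linear(d_model, vocab_size)  # linear layer -> tags
        # Cell BBox Decoder head: 3-layer MLP predicting normalized boxes,
        # plus a linear classifier for empty vs. non-empty cells.
        self.bbox_head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, 4), nn.Sigmoid())
        self.empty_head = nn.Linear(d_model, 2)

    def forward(self, image, tag_ids, cell_token_mask):
        # image: (B, 3, 448, 448); tag_ids, cell_token_mask: (T, B)
        feats = self.proj(self.backbone(image))       # (B, d_model, 28, 28)
        memory = self.encoder(feats.flatten(2).permute(2, 0, 1))  # (784, B, d)
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tag_ids.size(0))
        hidden = self.decoder(self.tag_embed(tag_ids), memory, tgt_mask=tgt_mask)
        cell_hidden = hidden[cell_token_mask]         # states of '<td>' / '<' tags
        return (self.tag_head(hidden),
                self.bbox_head(cell_hidden),
                self.empty_head(cell_hidden))
```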
<paragraph><location><page_6><loc_8><loc_44><loc_47><loc_69></location>Loss Functions. We formulate a multi-task loss Eq. 2 to train our network. The Cross-Entropy loss (denoted as l$_{s}$ ) is used to train the Structure Decoder which predicts the structure tokens. As for the Cell BBox Decoder it is trained with a combination of losses denoted as l$_{box}$ . l$_{box}$ consists of the generally used l$_{1}$ loss for object detection and the IoU loss ( l$_{iou}$ ) to be scale invariant as explained in [25]. In comparison to DETR, we do not use the Hungarian algorithm [15] to match the predicted bounding boxes with the ground-truth boxes, as we have already achieved a one-to-one match through two steps: 1) Our token input sequence is naturally ordered, therefore the hidden states of the table data cells are also in order when they are provided as input to the Cell BBox Decoder , and 2) Our bounding box generation mechanism (see Sec. 3) ensures a one-to-one mapping between the cell content and its bounding box for all post-processed datasets.</paragraph>
<paragraph><location><page_6><loc_8><loc_41><loc_47><loc_43></location>The loss used to train the TableFormer can be defined as follows:</paragraph>
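The equation itself (Eq. 2) is not captured in this export. From the definitions above, i.e. a structure loss l$_{s}$ weighted against a box loss l$_{box}$ that combines the l$_{1}$ and IoU terms, a plausible reconstruction is (with the λ's as the hyper-parameters named below):

```latex
\ell = \lambda \, \ell_{s} + (1 - \lambda)\, \ell_{box},
\qquad
\ell_{box} = \lambda_{iou}\, \ell_{iou} + \lambda_{l_{1}}\, \ell_{1}
```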
<paragraph><location><page_6><loc_8><loc_32><loc_46><loc_33></location>where λ ∈ [0, 1], and λ$_{iou}$, λ$_{l_1}$ ∈ ℝ are hyper-parameters.</paragraph>
<subtitle-level-1><location><page_6><loc_8><loc_28><loc_28><loc_30></location>5. Experimental Results</subtitle-level-1>
<subtitle-level-1><location><page_6><loc_8><loc_26><loc_29><loc_27></location>5.1. Implementation Details</subtitle-level-1>
<paragraph><location><page_6><loc_8><loc_19><loc_47><loc_25></location>TableFormer uses ResNet-18 as the CNN Backbone Network . The input images are resized to 448*448 pixels and the feature map has a dimension of 28*28. Additionally, we enforce the following input constraints:</paragraph>
<paragraph><location><page_6><loc_8><loc_10><loc_47><loc_13></location>Although input constraints are also used by other methods, such as EDD, ours are less restrictive due to the improved</paragraph>
<paragraph><location><page_6><loc_50><loc_86><loc_89><loc_91></location>runtime performance and lower memory footprint of TableFormer. This allows us to utilize input samples with longer sequences and images with larger dimensions.</paragraph>
<paragraph><location><page_6><loc_50><loc_59><loc_89><loc_85></location>The Transformer Encoder consists of two "Transformer Encoder Layers", with an input feature size of 512, a feed forward network of 1024, and 4 attention heads. As for the Transformer Decoder, it is composed of four "Transformer Decoder Layers" with similar input and output dimensions as the "Transformer Encoder Layers". Even though our model uses fewer layers and heads than the default implementation parameters, our extensive experimentation has proved this setup to be more suitable for table images. We attribute this finding to the inherent design of table images, which contain mostly lines and text, unlike the more elaborate content present in other scopes (e.g. the COCO dataset). Moreover, we have added ResNet blocks to the inputs of the Structure Decoder and Cell BBox Decoder. This prevents one decoder from having a stronger influence over the learned weights, which would damage the other prediction task (structure vs. bounding boxes), and encourages task-specific weights instead. Lastly, our dropout layers are set to 0.5.</paragraph>
<paragraph><location><page_6><loc_50><loc_46><loc_89><loc_58></location>For training, TableFormer is trained with three Adam optimizers, one each for the CNN Backbone Network , Structure Decoder , and Cell BBox Decoder . Taking PubTabNet as an example of our parameter setup, the initial learning rate is 0.001 for 12 epochs with a batch size of 24, and λ is set to 0.5. Afterwards, we reduce the learning rate to 0.0001, the batch size to 18 and train for 12 more epochs or until convergence.</paragraph>
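As a sketch of this two-stage schedule, assuming the `TableFormerSketch` module from the earlier snippet (the assignment of parameter groups to the three optimizers is our assumption; the paper only states one Adam optimizer per component):

```python
import torch

model = TableFormerSketch(vocab_size=64)  # illustrative vocabulary size
opt_backbone = torch.optim.Adam(
    list(model.backbone.parameters()) + list(model.proj.parameters()), lr=1e-3)
opt_structure = torch.optim.Adam(
    list(model.encoder.parameters()) + list(model.decoder.parameters())
    + list(model.tag_embed.parameters()) + list(model.tag_head.parameters()),
    lr=1e-3)
opt_bbox = torch.optim.Adam(
    list(model.bbox_head.parameters()) + list(model.empty_head.parameters()),
    lr=1e-3)

# Stage 1: 12 epochs at lr 0.001, batch size 24, lambda = 0.5 (loop omitted).
# Stage 2: reduce lr to 0.0001 and batch size to 18, then train 12 more
# epochs or until convergence.
for opt in (opt_backbone, opt_structure, opt_bbox):
    for group in opt.param_groups:
        group["lr"] = 1e-4
```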
<paragraph><location><page_6><loc_50><loc_30><loc_89><loc_45></location>TableFormer is implemented with the PyTorch and Torchvision libraries [22]. To speed up the inference, the image undergoes a single forward pass through the CNN Backbone Network and transformer encoder. This eliminates the overhead of generating the same features for each decoding step. Similarly, we employ a 'caching' technique to perform faster autoregressive decoding. This is achieved by storing the features of decoded tokens so we can reuse them for each time step. Therefore, we only compute the attention for each new tag.</paragraph>
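A sketch of this inference path, again against the hypothetical `TableFormerSketch` module: the image is encoded once and the resulting memory is reused across decoding steps. Note that `nn.TransformerDecoder` exposes no key/value cache, so this sketch recomputes self-attention over the (short) tag prefix at each step; the per-token feature cache described above is an additional optimization we do not reproduce.

```python
import torch

@torch.no_grad()
def decode_structure(model, image, bos_id, eos_id, max_len=512):
    model.eval()
    feats = model.proj(model.backbone(image))                  # single CNN pass
    memory = model.encoder(feats.flatten(2).permute(2, 0, 1))  # computed once
    tags = torch.full((1, 1), bos_id, dtype=torch.long)        # (T, B) = (1, 1)
    for _ in range(max_len):
        hidden = model.decoder(model.tag_embed(tags), memory)
        next_id = model.tag_head(hidden[-1]).argmax(dim=-1, keepdim=True)
        tags = torch.cat([tags, next_id], dim=0)               # greedy decoding
        if next_id.item() == eos_id:
            break
    return tags.squeeze(1)
```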
<subtitle-level-1><location><page_6><loc_50><loc_26><loc_65><loc_27></location>5.2. Generalization</subtitle-level-1>
@@ -159,14 +164,18 @@
<row_5><col_0><row_header>EDD</col_0><col_1><body>91.2</col_1><col_2><body>85.4</col_2><col_3><body>88.3</col_3></row_5>
<row_6><col_0><row_header>TableFormer</col_0><col_1><body>95.4</col_1><col_2><body>90.1</col_2><col_3><body>93.6</col_3></row_6>
</table>
<paragraph><location><page_8><loc_9><loc_89><loc_10><loc_90></location>- a.</paragraph>
<paragraph><location><page_8><loc_11><loc_89><loc_82><loc_90></location>- Red - PDF cells, Green - predicted bounding boxes, Blue - post-processed predictions matched to PDF cells</paragraph>
<subtitle-level-1><location><page_8><loc_9><loc_87><loc_46><loc_88></location>Japanese language (previously unseen by TableFormer):</subtitle-level-1>
<subtitle-level-1><location><page_8><loc_50><loc_87><loc_70><loc_88></location>Example table from FinTabNet:</subtitle-level-1>
<figure>
<location><page_8><loc_8><loc_76><loc_49><loc_87></location>
</figure>
<caption><location><page_8><loc_9><loc_73><loc_63><loc_74></location>b. Structure predicted by TableFormer, with superimposed matched PDF cell text:</caption>
<figure>
<location><page_8><loc_50><loc_77><loc_91><loc_88></location>
<caption>b. Structure predicted by TableFormer, with superimposed matched PDF cell text:</caption>
</figure>
<table>
<location><page_8><loc_9><loc_63><loc_49><loc_72></location>
<row_0><col_0><body></col_0><col_1><body></col_1><col_2><col_header>論文ファイル</col_2><col_3><col_header>論文ファイル</col_3><col_4><col_header>参考文献</col_4><col_5><col_header>参考文献</col_5></row_0>
@@ -192,7 +201,7 @@
<row_5><col_0><row_header>Canceled or forfeited</col_0><col_1><body>(0. 1 )</col_1><col_2><body>-</col_2><col_3><body>102.01</col_3><col_4><body>92.18</col_4></row_5>
<row_6><col_0><row_header>Nonvested on December 31</col_0><col_1><body>1.0</col_1><col_2><body>0.3</col_2><col_3><body>104.85 $</col_3><col_4><body>$ 104.51</col_4></row_6>
</table>
<caption><location><page_8><loc_8><loc_54><loc_89><loc_59></location>Figure 5: One of the benefits of TableFormer is that it is language agnostic; as an example, the left part of the illustration demonstrates TableFormer predictions on a previously unseen language (Japanese). Additionally, we see that TableFormer is robust to variability in style and content; the right side of the illustration shows an example of the TableFormer prediction from the FinTabNet dataset.</caption>
<figure>
<location><page_8><loc_8><loc_44><loc_35><loc_52></location>
<caption>Figure 5: One of the benefits of TableFormer is that it is language agnostic; as an example, the left part of the illustration demonstrates TableFormer predictions on a previously unseen language (Japanese). Additionally, we see that TableFormer is robust to variability in style and content; the right side of the illustration shows an example of the TableFormer prediction from the FinTabNet dataset.</caption>
@@ -210,14 +219,11 @@
<subtitle-level-1><location><page_8><loc_50><loc_37><loc_75><loc_38></location>6. Future Work & Conclusion</subtitle-level-1>
<paragraph><location><page_8><loc_50><loc_18><loc_89><loc_35></location>In this paper, we presented TableFormer, an end-to-end transformer-based approach to predict table structures and bounding boxes of cells from an image. This approach enables us to recreate the table structure, and extract the cell content from PDF or OCR by using bounding boxes. Additionally, it provides the versatility required in real-world scenarios when dealing with various types of PDF documents and languages. Furthermore, our method outperforms all state-of-the-art methods by a wide margin. Finally, we introduce "SynthTabNet", a challenging synthetically generated dataset that reinforces missing characteristics from other datasets.</paragraph>
<subtitle-level-1><location><page_8><loc_50><loc_14><loc_60><loc_15></location>References</subtitle-level-1>
<paragraph><location><page_8><loc_51><loc_10><loc_89><loc_12></location>- [1] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-</paragraph>
<paragraph><location><page_9><loc_11><loc_85><loc_47><loc_90></location>- end object detection with transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 , pages 213-229, Cham, 2020. Springer International Publishing. 5</paragraph>
<paragraph><location><page_9><loc_9><loc_81><loc_47><loc_85></location>- [2] Zewen Chi, Heyan Huang, Heng-Da Xu, Houjin Yu, Wanxuan Yin, and Xian-Ling Mao. Complicated table structure recognition. arXiv preprint arXiv:1908.04729 , 2019. 3</paragraph>
<paragraph><location><page_9><loc_9><loc_77><loc_47><loc_81></location>- [3] Bertrand Couasnon and Aurelie Lemaitre. Recognition of Tables and Forms , pages 647-677. Springer London, London, 2014. 2</paragraph>
<paragraph><location><page_9><loc_9><loc_71><loc_47><loc_76></location>- [4] Hervé Déjean, Jean-Luc Meunier, Liangcai Gao, Yilun Huang, Yu Fang, Florian Kleber, and Eva-Maria Lang. ICDAR 2019 Competition on Table Detection and Recognition (cTDaR), Apr. 2019. http://sac.founderit.com/. 2</paragraph>
<paragraph><location><page_9><loc_9><loc_66><loc_47><loc_71></location>- [5] Basilios Gatos, Dimitrios Danatsas, Ioannis Pratikakis, and Stavros J Perantonis. Automatic table detection in document images. In International Conference on Pattern Recognition and Image Analysis , pages 609-618. Springer, 2005. 2</paragraph>
<paragraph><location><page_9><loc_9><loc_60><loc_47><loc_65></location>- [6] Max Gobel, Tamir Hassan, Ermelinda Oro, and Giorgio Orsi. Icdar 2013 table competition. In 2013 12th International Conference on Document Analysis and Recognition , pages 1449-1453, 2013. 2</paragraph>
<paragraph><location><page_9><loc_9><loc_56><loc_47><loc_60></location>- [7] EA Green and M Krishnamoorthy. Recognition of tables using table grammars. procs. In Symposium on Document Analysis and Recognition (SDAIR'95) , pages 261-277. 2</paragraph>
@@ -229,7 +235,7 @@
<paragraph><location><page_9><loc_8><loc_18><loc_47><loc_25></location>- [13] Thotreingam Kasar, Philippine Barlas, Sebastien Adam, Clément Chatelain, and Thierry Paquet. Learning to detect tables in scanned document images using line information. In 2013 12th International Conference on Document Analysis and Recognition , pages 1185-1189. IEEE, 2013. 2</paragraph>
<paragraph><location><page_9><loc_8><loc_14><loc_47><loc_18></location>- [14] Pratik Kayal, Mrinal Anand, Harsh Desai, and Mayank Singh. Icdar 2021 competition on scientific table image recognition to latex, 2021. 2</paragraph>
<paragraph><location><page_9><loc_8><loc_10><loc_47><loc_14></location>- [15] Harold W Kuhn. The hungarian method for the assignment problem. Naval research logistics quarterly , 2(1-2):83-97, 1955. 6</paragraph>
<paragraph><location><page_9><loc_50><loc_82><loc_89><loc_90></location>- [16] Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. Babytalk: Understanding and generating simple image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence , 35(12):2891-2903, 2013. 4</paragraph>
<paragraph><location><page_9><loc_50><loc_78><loc_89><loc_82></location>- [17] Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, and Zhoujun Li. Tablebank: A benchmark dataset for table detection and recognition, 2019. 2, 3</paragraph>
<paragraph><location><page_9><loc_50><loc_67><loc_89><loc_78></location>- [18] Yiren Li, Zheng Huang, Junchi Yan, Yi Zhou, Fan Ye, and Xianhui Liu. Gfte: Graph-based financial table extraction. In Alberto Del Bimbo, Rita Cucchiara, Stan Sclaroff, Giovanni Maria Farinella, Tao Mei, Marco Bertini, Hugo Jair Escalante, and Roberto Vezzani, editors, Pattern Recognition. ICPR International Workshops and Challenges , pages 644-658, Cham, 2021. Springer International Publishing. 2, 3</paragraph>
<paragraph><location><page_9><loc_50><loc_59><loc_89><loc_67></location>- [19] Nikolaos Livathinos, Cesar Berrospi, Maksym Lysak, Viktor Kuropiatnyk, Ahmed Nassar, Andre Carvalho, Michele Dolfi, Christoph Auer, Kasper Dinkla, and Peter Staar. Robust pdf document conversion using recurrent neural networks. Proceedings of the AAAI Conference on Artificial Intelligence , 35(17):15137-15145, May 2021. 1</paragraph>
@@ -239,7 +245,7 @@
<paragraph><location><page_9><loc_50><loc_21><loc_89><loc_29></location>- [23] Devashish Prasad, Ayan Gadpal, Kshitij Kapadni, Manish Visave, and Kavita Sultanpure. Cascadetabnet: An approach for end to end table detection and structure recognition from image-based documents. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops , pages 572-573, 2020. 1</paragraph>
<paragraph><location><page_9><loc_50><loc_16><loc_89><loc_21></location>- [24] Shah Rukh Qasim, Hassan Mahmood, and Faisal Shafait. Rethinking table recognition using graph neural networks. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 142-147. IEEE, 2019. 3</paragraph>
<paragraph><location><page_9><loc_50><loc_10><loc_89><loc_15></location>- [25] Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, and Silvio Savarese. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on</paragraph>
<paragraph><location><page_10><loc_11><loc_88><loc_47><loc_90></location>Computer Vision and Pattern Recognition , pages 658-666, 2019. 6</paragraph>
<paragraph><location><page_10><loc_8><loc_80><loc_47><loc_88></location>- [26] Sebastian Schreiber, Stefan Agne, Ivo Wolf, Andreas Dengel, and Sheraz Ahmed. Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR) , volume 01, pages 1162-1167, 2017. 1</paragraph>
<paragraph><location><page_10><loc_8><loc_71><loc_47><loc_79></location>- [27] Sebastian Schreiber, Stefan Agne, Ivo Wolf, Andreas Dengel, and Sheraz Ahmed. Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In 2017 14th IAPR international conference on document analysis and recognition (ICDAR) , volume 1, pages 1162-1167. IEEE, 2017. 3</paragraph>
<paragraph><location><page_10><loc_8><loc_66><loc_47><loc_71></location>- [28] Faisal Shafait and Ray Smith. Table detection in heterogeneous documents. In Proceedings of the 9th IAPR International Workshop on Document Analysis Systems , pages 65-72, 2010. 2</paragraph>
@@ -252,24 +258,24 @@
<paragraph><location><page_10><loc_8><loc_20><loc_47><loc_25></location>- [35] Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. Image captioning with semantic attention. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 4651-4659, 2016. 4</paragraph>
<paragraph><location><page_10><loc_8><loc_13><loc_47><loc_19></location>- [36] Xinyi Zheng, Doug Burdick, Lucian Popa, Peter Zhong, and Nancy Xin Ru Wang. Global table extractor (gte): A framework for joint table identification and cell structure recognition using visual context. Winter Conference for Applications in Computer Vision (WACV) , 2021. 2, 3</paragraph>
<paragraph><location><page_10><loc_8><loc_10><loc_47><loc_12></location>- [37] Xu Zhong, Elaheh ShafieiBavani, and Antonio Jimeno Yepes. Image-based table recognition: Data, model,</paragraph>
<paragraph><location><page_10><loc_54><loc_85><loc_89><loc_90></location>- and evaluation. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 , pages 564-580, Cham, 2020. Springer International Publishing. 2, 3, 7</paragraph>
<paragraph><location><page_10><loc_50><loc_80><loc_89><loc_85></location>- [38] Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. Publaynet: Largest dataset ever for document layout analysis. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 1015-1022, 2019. 1</paragraph>
<subtitle-level-1><location><page_11><loc_22><loc_83><loc_76><loc_86></location>TableFormer: Table Structure Understanding with Transformers Supplementary Material</subtitle-level-1>
<subtitle-level-1><location><page_11><loc_8><loc_78><loc_29><loc_80></location>1. Details on the datasets</subtitle-level-1>
<subtitle-level-1><location><page_11><loc_8><loc_76><loc_25><loc_77></location>1.1. Data preparation</subtitle-level-1>
<paragraph><location><page_11><loc_8><loc_51><loc_47><loc_75></location>As a first step of our data preparation process, we have calculated statistics over the datasets across the following dimensions: (1) table size measured in the number of rows and columns, (2) complexity of the table, (3) strictness of the provided HTML structure and (4) completeness (i.e. no omitted bounding boxes). A table is considered to be simple if it does not contain row spans or column spans. Additionally, a table has a strict HTML structure if every row has the same number of columns after taking into account any row or column spans. Therefore, a strict HTML structure always looks rectangular. However, HTML is a lenient encoding format, i.e. tables with rows of different sizes might still be regarded as correct due to implicit display rules. These implicit rules leave room for ambiguity, which we want to avoid. As such, we prefer to have "strict" tables, i.e. tables where every row has exactly the same length.</paragraph>
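The strictness property lends itself to a simple check; here is a hypothetical sketch (the representation of a row as (rowspan, colspan) pairs is our assumption):

```python
def is_strict(rows):
    """rows: one list per table row, each a list of (rowspan, colspan) pairs.

    A table is strict if, after expanding spans, every row covers the same
    number of grid columns.
    """
    widths = {}
    for r, row in enumerate(rows):
        for rowspan, colspan in row:
            for rr in range(r, r + rowspan):  # a rowspan occupies later rows too
                widths[rr] = widths.get(rr, 0) + colspan
    return len(set(widths.values())) == 1

# is_strict([[(2, 1), (1, 1)], [(1, 1)]]) -> True  (rowspan fills row 2)
# is_strict([[(1, 1)], [(1, 1), (1, 1)]]) -> False (rows of unequal width)
```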
<paragraph><location><page_11><loc_8><loc_21><loc_47><loc_51></location>We have developed a technique that tries to derive a missing bounding box from its neighbors. As a first step, we use the annotation data to generate the most fine-grained grid that covers the table structure. In the case of strict HTML tables, all grid squares are associated with some table cell and in the presence of table spans a cell extends across multiple grid squares. When enough bounding boxes are known for a rectangular table, it is possible to compute the geometrical border lines between the grid rows and columns. Eventually this information is used to generate the missing bounding boxes. Additionally, the existence of unused grid squares indicates that the table rows have an unequal number of columns and the overall structure is non-strict. The generation of missing bounding boxes for non-strict HTML tables is ambiguous and therefore quite challenging. Thus, we have decided to simply discard those tables. In the case of PubTabNet we have computed missing bounding boxes for 48% of the simple and 69% of the complex tables. Regarding FinTabNet, 68% of the simple and 98% of the complex tables require the generation of bounding boxes.</paragraph>
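For intuition, here is a much-simplified sketch of the completion step for a strict table without spans: known boxes vote for the grid border lines, and a missing box is the intersection of its row and column bands. All names are illustrative, and every border is assumed to receive at least one vote.

```python
from statistics import median

def complete_bboxes(grid, n_rows, n_cols):
    """grid: dict {(row, col): (x0, y0, x1, y1)} with some entries missing."""
    xs = [[] for _ in range(n_cols + 1)]   # votes for vertical border lines
    ys = [[] for _ in range(n_rows + 1)]   # votes for horizontal border lines
    for (r, c), (x0, y0, x1, y1) in grid.items():
        xs[c].append(x0); xs[c + 1].append(x1)
        ys[r].append(y0); ys[r + 1].append(y1)
    xb = [median(v) for v in xs]           # raises if a border got no votes
    yb = [median(v) for v in ys]
    for r in range(n_rows):
        for c in range(n_cols):
            grid.setdefault((r, c), (xb[c], yb[r], xb[c + 1], yb[r + 1]))
    return grid
```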
<paragraph><location><page_11><loc_8><loc_18><loc_47><loc_20></location>Figure 7 illustrates the distribution of the tables across different dimensions per dataset.</paragraph>
<subtitle-level-1><location><page_11><loc_8><loc_15><loc_25><loc_16></location>1.2. Synthetic datasets</subtitle-level-1>
<paragraph><location><page_11><loc_8><loc_10><loc_47><loc_14></location>Aiming to train and evaluate our models in a broader spectrum of table data we have synthesized four types of datasets. Each one contains tables with different appear-</paragraph>
<paragraph><location><page_11><loc_50><loc_74><loc_89><loc_79></location>ances in regard to their size, structure, style and content. Every synthetic dataset contains 150k examples, summing up to 600k synthetic examples. All datasets are divided into Train, Test and Val splits (80%, 10%, 10%).</paragraph>
<paragraph><location><page_11><loc_50><loc_71><loc_89><loc_73></location>The process of generating a synthetic dataset can be decomposed into the following steps:</paragraph>
<paragraph><location><page_11><loc_50><loc_60><loc_89><loc_70></location>- 1. Prepare styling and content templates: The styling templates have been manually designed and organized into groups of scope-specific appearances (e.g. financial data, marketing data, etc.). Additionally, we have prepared curated collections of content templates by extracting the most frequently used terms out of non-synthetic datasets (e.g. PubTabNet, FinTabNet, etc.).</paragraph>
<paragraph><location><page_11><loc_50><loc_43><loc_89><loc_60></location>- 2. Generate table structures: The structure of each synthetic dataset assumes a horizontal table header which potentially spans over multiple rows and a table body that may contain a combination of row spans and column spans. However, spans are not allowed to cross the header-body boundary. The table structure is described by the parameters: total number of table rows and columns, number of header rows, type of spans (header only spans, row only spans, column only spans, both row and column spans), maximum span size and the ratio of the table area covered by spans (a minimal sampling sketch follows this list).</paragraph>
<paragraph><location><page_11><loc_50><loc_37><loc_89><loc_43></location>- 3. Generate content: Based on the dataset theme, a set of suitable content templates is chosen first. Then, this content can be combined with purely random text to produce the synthetic content.</paragraph>
<paragraph><location><page_11><loc_50><loc_31><loc_89><loc_37></location>- 4. Apply styling templates: Depending on the domain of the synthetic dataset, a set of styling templates is first manually selected. Then, a style is randomly selected to format the appearance of the synthesized table.</paragraph>
<paragraph><location><page_11><loc_50><loc_23><loc_89><loc_31></location>- 5. Render the complete tables: The synthetic table is finally rendered by a web browser engine to generate the bounding boxes for each table cell. A batching technique is utilized to optimize the runtime overhead of the rendering process.</paragraph>
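The sketch referenced in step 2 above; the function name and defaults are illustrative assumptions, not the actual generator. It samples a structure from the listed parameters, clipping spans so they never overlap and never cross the header-body boundary.

```python
import random

def sample_structure(rows=8, cols=5, header_rows=2, max_span=3, span_ratio=0.2):
    """Sample a table structure as a list of (row, col, row_span, col_span)."""
    taken = [[False] * cols for _ in range(rows)]
    cells = []
    for r in range(rows):
        for c in range(cols):
            if taken[r][c]:
                continue                      # square already covered by a span
            rs = cs = 1
            if random.random() < span_ratio:
                # spans may not cross the header-body boundary
                row_limit = header_rows if r < header_rows else rows
                rs = random.randint(1, min(max_span, row_limit - r))
                cs = random.randint(1, min(max_span, cols - c))
                # shrink until the whole block is free of earlier spans
                while any(taken[rr][cc] for rr in range(r, r + rs)
                          for cc in range(c, c + cs)):
                    if cs > 1:
                        cs -= 1
                    else:
                        rs -= 1
            for rr in range(r, r + rs):
                for cc in range(c, c + cs):
                    taken[rr][cc] = True
            cells.append((r, c, rs, cs))
    return cells
```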
<subtitle-level-1><location><page_11><loc_50><loc_18><loc_89><loc_21></location>2. Prediction post-processing for PDF documents</subtitle-level-1>
<paragraph><location><page_11><loc_50><loc_10><loc_89><loc_17></location>Although TableFormer can predict the table structure and the bounding boxes for tables recognized inside PDF documents, this is not enough when a full reconstruction of the original table is required. This happens mainly due to the following reasons:</paragraph>
<caption><location><page_12><loc_8><loc_76><loc_89><loc_79></location>Figure 7: Distribution of the tables across different dimensions per dataset. Simple vs complex tables per dataset and split, strict vs non-strict HTML structures per dataset and table complexity, missing bboxes per dataset and table complexity.</caption>
<figure>
@@ -291,7 +297,7 @@
<paragraph><location><page_12><loc_50><loc_65><loc_89><loc_67></location>- 6. Snap all cells with bad IOU to their corresponding median x-coordinates and cell sizes.</paragraph>
<paragraph><location><page_12><loc_50><loc_51><loc_89><loc_64></location>- 7. Generate a new set of pair-wise matches between the corrected bounding boxes and PDF cells. This time use a modified version of the IOU metric, where the area of the intersection between the predicted and PDF cells is divided by the PDF cell area. In case there are multiple matches for the same PDF cell, the prediction with the higher score is preferred. This covers the cases where the PDF cells are smaller than the area of predicted or corrected prediction cells (a sketch of this metric appears after step 9f, below).</paragraph>
<paragraph><location><page_12><loc_50><loc_42><loc_89><loc_51></location>- 8. On some rare occasions, we have noticed that TableFormer can confuse a single column as two. When the post-processing steps are applied, this results in two predicted columns pointing to the same PDF column. In such a case we must de-duplicate the columns according to the highest total column intersection score.</paragraph>
<paragraph><location><page_12><loc_50><loc_28><loc_89><loc_41></location>- 9. Pick up the remaining orphan cells. There could be cases when, after applying all the previous post-processing steps, some PDF cells could still remain without any match to predicted cells. However, it is still possible to deduce the correct matching for an orphan PDF cell by mapping its bounding box on the geometry of the grid. This mapping decides if the content of the orphan cell will be appended to an already matched table cell, or a new table cell should be created to match with the orphan.</paragraph>
<paragraph><location><page_12><loc_50><loc_24><loc_89><loc_28></location>9a. Compute the top and bottom boundary of the horizontal band for each grid row (min/max y coordinates per row).</paragraph>
<paragraph><location><page_12><loc_50><loc_21><loc_89><loc_23></location>- 9b. Intersect the orphan's bounding box with the row bands, and map the cell to the closest grid row.</paragraph>
<paragraph><location><page_12><loc_50><loc_16><loc_89><loc_20></location>- 9c. Compute the left and right boundary of the vertical band for each grid column (min/max x coordinates per column).</paragraph>
@@ -300,56 +306,147 @@
<paragraph><location><page_13><loc_8><loc_89><loc_15><loc_91></location>phan cell.</paragraph>
<paragraph><location><page_13><loc_8><loc_86><loc_47><loc_89></location>9f. Otherwise create a new structural cell and match it with the orphan cell.</paragraph>
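As a rough illustration of steps 7 and 9a-9b, here is a hedged sketch (assuming an (x0, y0, x1, y1) box convention; not the authors' implementation): the modified overlap metric divides the intersection area by the PDF cell area, each PDF cell keeps only its best-scoring prediction, and an orphan is mapped to the closest row band.

```python
def pdf_cell_overlap(pred, pdf):
    """Modified IOU from step 7: intersection area / PDF cell area, so a small
    PDF cell fully inside a larger predicted cell scores 1.0."""
    ix = max(0.0, min(pred[2], pdf[2]) - max(pred[0], pdf[0]))
    iy = max(0.0, min(pred[3], pdf[3]) - max(pred[1], pdf[1]))
    area = (pdf[2] - pdf[0]) * (pdf[3] - pdf[1])
    return ix * iy / area if area > 0 else 0.0

def match_pdf_cells(pred_boxes, pdf_boxes):
    """Keep only the highest-scoring prediction per PDF cell (step 7)."""
    matches = {}
    for j, pdf in enumerate(pdf_boxes):
        scores = [pdf_cell_overlap(p, pdf) for p in pred_boxes]
        best = max(range(len(scores)), key=scores.__getitem__)
        if scores[best] > 0.0:
            matches[j] = best
    return matches

def nearest_row(orphan, row_bands):
    """Steps 9a-9b: row_bands holds one (y_min, y_max) band per grid row;
    map the orphan's vertical center to the closest band."""
    cy = 0.5 * (orphan[1] + orphan[3])
    def dist(band):
        lo, hi = band
        return 0.0 if lo <= cy <= hi else min(abs(cy - lo), abs(cy - hi))
    return min(range(len(row_bands)), key=lambda r: dist(row_bands[r]))
```

The analogous column mapping (steps 9c-9d) only swaps the x and y coordinates.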
<paragraph><location><page_13><loc_8><loc_83><loc_47><loc_86></location>Additional images with examples of TableFormer predictions and post-processing can be found below.</paragraph>
<table>
<location><page_13><loc_14><loc_73><loc_39><loc_80></location>
</table>
<table>
<location><page_13><loc_14><loc_63><loc_39><loc_70></location>
</table>
<table>
<location><page_13><loc_14><loc_54><loc_39><loc_61></location>
</table>
<caption><location><page_13><loc_10><loc_35><loc_45><loc_37></location>Figure 8: Example of a table with multi-line header.</caption>
<table>
<location><page_13><loc_14><loc_38><loc_41><loc_50></location>
<caption>Figure 8: Example of a table with multi-line header.</caption>
</table>
<table>
<location><page_13><loc_51><loc_83><loc_91><loc_87></location>
</table>
<table>
<location><page_13><loc_51><loc_77><loc_91><loc_80></location>
</table>
<table>
<location><page_13><loc_51><loc_71><loc_91><loc_75></location>
</table>
<figure>
<location><page_13><loc_51><loc_63><loc_70><loc_68></location>
</figure>
<caption><location><page_13><loc_50><loc_59><loc_89><loc_61></location>Figure 9: Example of a table with big empty distance between cells.</caption>
<table>
<location><page_13><loc_51><loc_63><loc_70><loc_68></location>
<caption>Figure 9: Example of a table with big empty distance between cells.</caption>
</table>
<table>
<location><page_13><loc_55><loc_45><loc_80><loc_51></location>
</table>
<table>
<location><page_13><loc_55><loc_37><loc_80><loc_43></location>
</table>
<table>
<location><page_13><loc_55><loc_28><loc_80><loc_34></location>
</table>
<figure>
<location><page_13><loc_55><loc_16><loc_85><loc_25></location>
</figure>
<caption><location><page_13><loc_51><loc_13><loc_89><loc_14></location>Figure 10: Example of a complex table with empty cells.</caption>
<table>
<location><page_13><loc_55><loc_16><loc_85><loc_25></location>
<caption>Figure 10: Example of a complex table with empty cells.</caption>
</table>
<table>
<location><page_14><loc_8><loc_57><loc_46><loc_65></location>
</table>
<caption><location><page_14><loc_8><loc_52><loc_47><loc_55></location>Figure 11: Simple table with different style and empty cells.</caption>
<figure>
<location><page_14><loc_8><loc_56><loc_46><loc_87></location>
<caption>Figure 11: Simple table with different style and empty cells.</caption>
</figure>
<table>
<location><page_14><loc_8><loc_38><loc_51><loc_43></location>
</table>
<table>
<location><page_14><loc_8><loc_32><loc_51><loc_36></location>
</table>
<table>
<location><page_14><loc_8><loc_25><loc_51><loc_30></location>
</table>
<caption><location><page_14><loc_9><loc_14><loc_46><loc_15></location>Figure 12: Simple table predictions and post processing.</caption>
<figure>
<location><page_14><loc_8><loc_17><loc_29><loc_23></location>
<caption>Figure 12: Simple table predictions and post processing.</caption>
</figure>
<table>
<location><page_14><loc_52><loc_73><loc_87><loc_80></location>
</table>
<table>
<location><page_14><loc_52><loc_65><loc_87><loc_71></location>
</table>
<table>
<location><page_14><loc_54><loc_55><loc_86><loc_64></location>
</table>
<caption><location><page_14><loc_52><loc_52><loc_88><loc_53></location>Figure 13: Table predictions example on colorful table.</caption>
<figure>
<location><page_14><loc_52><loc_55><loc_87><loc_89></location>
<caption>Figure 13: Table predictions example on colorful table.</caption>
</figure>
<table>
<location><page_14><loc_52><loc_40><loc_85><loc_46></location>
</table>
<table>
<location><page_14><loc_52><loc_32><loc_85><loc_38></location>
</table>
<table>
<location><page_14><loc_52><loc_25><loc_85><loc_31></location>
</table>
<caption><location><page_14><loc_56><loc_13><loc_83><loc_14></location>Figure 14: Example with multi-line text.</caption>
<table>
<location><page_14><loc_52><loc_16><loc_87><loc_23></location>
<caption>Figure 14: Example with multi-line text.</caption>
</table>
<figure>
<location><page_15><loc_9><loc_69><loc_46><loc_83></location>
</figure>
<table>
<location><page_15><loc_9><loc_69><loc_46><loc_83></location>
</table>
<figure>
<location><page_15><loc_9><loc_53><loc_46><loc_67></location>
</figure>
<table>
<location><page_15><loc_9><loc_53><loc_46><loc_67></location>
</table>
<figure>
<location><page_15><loc_9><loc_37><loc_46><loc_51></location>
</figure>
<figure>
<location><page_15><loc_8><loc_20><loc_52><loc_36></location>
</figure>
<caption><location><page_15><loc_14><loc_18><loc_41><loc_19></location>Figure 15: Example with triangular table.</caption>
<table>
<location><page_15><loc_8><loc_20><loc_52><loc_36></location>
<caption>Figure 15: Example with triangular table.</caption>
</table>
<table>
<location><page_15><loc_53><loc_72><loc_86><loc_85></location>
</table>
<table>
<location><page_15><loc_53><loc_57><loc_86><loc_69></location>
</table>
<figure>
<location><page_15><loc_53><loc_41><loc_86><loc_54></location>
</figure>
<table>
<location><page_15><loc_53><loc_41><loc_86><loc_54></location>
</table>
<figure>
<location><page_15><loc_58><loc_20><loc_81><loc_38></location>
</figure>
<caption><location><page_15><loc_50><loc_15><loc_89><loc_18></location>Figure 16: Example of how post-processing helps to restore mis-aligned bounding boxes prediction artifact.</caption>
<table>
<location><page_15><loc_58><loc_20><loc_81><loc_38></location>
<caption>Figure 16: Example of how post-processing helps to restore mis-aligned bounding boxes prediction artifact.</caption>
</table>
<caption><location><page_16><loc_8><loc_33><loc_89><loc_36></location>Figure 17: Example of long table. End-to-end example from initial PDF cells to prediction of bounding boxes, post processing and prediction of structure.</caption>
<figure>
<location><page_16><loc_11><loc_37><loc_86><loc_68></location>
File diff suppressed because one or more lines are too long
View File
@@ -12,18 +12,19 @@
The occurrence of tables in documents is ubiquitous. They often summarise quantitative or factual data, which is cumbersome to describe in verbose text but nevertheless extremely valuable. Unfortunately, this compact representation is often not easy to parse by machines. There are many implicit conventions used to obtain a compact table representation. For example, tables often have complex column- and row-headers in order to reduce duplicated cell content. Lines of different shapes and sizes are leveraged to separate content or indicate a tree structure. Additionally, tables can also have empty/missing table-entries or multi-row textual table-entries. Fig. 1 shows a table which presents all these issues.
<!-- image -->
Tables organize valuable content in a concise and compact representation. This content is extremely valuable for systems such as search engines, Knowledge Graphs, etc., since they enhance their predictive capabilities. Unfortunately, tables come in a large variety of shapes and sizes. Furthermore, they can have complex column/row-header configurations, multiline rows, different variety of separation lines, missing entries, etc. As such, the correct identification of the table-structure from an image is a nontrivial task. In this paper, we present a new table-structure identification model. The latter improves the latest end-to-end deep learning model (i.e. encoder-dual-decoder from PubTabNet) in two significant ways. First, we introduce a new object detection decoder for table-cells. In this way, we can obtain the content of the table-cells from programmatic PDFs directly from the PDF source and avoid the training of the custom OCR decoders. This architectural change leads to more accurate table-content extraction and allows us to tackle non-English tables. Second, we replace the LSTM decoders with transformer-based decoders. This upgrade significantly improves the previous state-of-the-art tree-editing-distance-score (TEDS) from 91% to 98.5% on simple tables and from 88.7% to 95% on complex tables.
- b. Red-annotation of bounding boxes, Blue-predictions by TableFormer
<!-- image -->
- c. Structure predicted by TableFormer:
<!-- image -->
Figure 1: Picture of a table with subtle, complex features such as (1) multi-column headers, (2) cell with multi-row text and (3) cells with no content. Image from PubTabNet evaluation set, filename: 'PMC2944238 004 02'.
@@ -222,20 +223,18 @@ Table 4: Results of structure with content retrieved using cell detection on Pub
| EDD | 91.2 | 85.4 | 88.3 |
| TableFormer | 95.4 | 90.1 | 93.6 |
- a.
- Red - PDF cells, Green - predicted bounding boxes, Blue - post-processed predictions matched to PDF cells
## Japanese language (previously unseen by TableFormer):
## Example table from FinTabNet:
<!-- image -->
b. Structure predicted by TableFormer, with superimposed matched PDF cell text:
<!-- image -->
| | | 論文ファイル | 論文ファイル | 参考文献 | 参考文献 |
|----------------------------------------------------|-------------|----------------|----------------|------------|------------|
@@ -263,7 +262,6 @@ Text is aligned to match original for ease of viewing
Figure 5: One of the benefits of TableFormer is that it is language agnostic; as an example, the left part of the illustration demonstrates TableFormer predictions on a previously unseen language (Japanese). Additionally, we see that TableFormer is robust to variability in style and content; the right side of the illustration shows the example of the TableFormer prediction from the FinTabNet dataset.
<!-- image -->
<!-- image -->
Figure 6: An example of TableFormer predictions (bounding boxes and structure) from generated SynthTabNet table.
@@ -281,9 +279,6 @@ In this paper, we presented TableFormer an end-to-end transformer based approach
- [1] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-
- end object detection with transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 , pages 213-229, Cham, 2020. Springer International Publishing. 5
- [2] Zewen Chi, Heyan Huang, Heng-Da Xu, Houjin Yu, Wanxuan Yin, and Xian-Ling Mao. Complicated table structure recognition. arXiv preprint arXiv:1908.04729 , 2019. 3
@@ -451,14 +446,19 @@ Aditional images with examples of TableFormer predictions and post-processing ca
Figure 8: Example of a table with multi-line header.
<!-- image -->
Figure 9: Example of a table with big empty distance between cells.
<!-- image -->
Figure 10: Example of a complex table with empty cells.
<!-- image -->
Figure 11: Simple table with different style and empty cells.
<!-- image -->
@@ -466,23 +466,32 @@ Figure 11: Simple table with different style and empty cells.
Figure 12: Simple table predictions and post processing.
<!-- image -->
<!-- image -->
<!-- image -->
Figure 13: Table predictions example on colorful table.
<!-- image -->
Figure 14: Example with multi-line text.
<!-- image -->
<!-- image -->
<!-- image -->
<!-- image -->
Figure 15: Example with triangular table.
<!-- image -->
<!-- image -->
Figure 16: Example of how post-processing helps to restore mis-aligned bounding boxes prediction artifact.
Figure 17: Example of long table. End-to-end example from initial PDF cells to prediction of bounding boxes, post processing and prediction of structure.
<!-- image -->
File diff suppressed because one or more lines are too long
View File
@@ -1,33 +1,24 @@
<document>
<subtitle-level-1><location><page_1><loc_18><loc_85><loc_83><loc_89></location>DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis</subtitle-level-1>
<paragraph><location><page_1><loc_15><loc_77><loc_32><loc_83></location>Birgit Pfitzmann IBM Research Rueschlikon, Switzerland bpf@zurich.ibm.com</paragraph>
<paragraph><location><page_1><loc_42><loc_77><loc_58><loc_83></location>Christoph Auer IBM Research Rueschlikon, Switzerland cau@zurich.ibm.com</paragraph>
<paragraph><location><page_1><loc_69><loc_77><loc_85><loc_83></location>Michele Dolfi IBM Research Rueschlikon, Switzerland dol@zurich.ibm.com</paragraph>
<paragraph><location><page_1><loc_28><loc_70><loc_45><loc_76></location>Ahmed S. Nassar IBM Research Rueschlikon, Switzerland ahn@zurich.ibm.com</paragraph>
<paragraph><location><page_1><loc_55><loc_70><loc_72><loc_76></location>Peter Staar IBM Research Rueschlikon, Switzerland taa@zurich.ibm.com</paragraph>
<subtitle-level-1><location><page_1><loc_9><loc_67><loc_18><loc_69></location>ABSTRACT</subtitle-level-1>
<paragraph><location><page_1><loc_9><loc_33><loc_48><loc_67></location>Accurate document layout analysis is a key requirement for high-quality PDF document conversion. With the recent availability of public, large ground-truth datasets such as PubLayNet and DocBank, deep-learning models have proven to be very effective at layout detection and segmentation. While these datasets are of adequate size to train such models, they severely lack in layout variability since they are sourced from scientific article repositories such as PubMed and arXiv only. Consequently, the accuracy of the layout segmentation drops significantly when these models are applied on more challenging and diverse layouts. In this paper, we present DocLayNet , a new, publicly available, document-layout annotation dataset in COCO format. It contains 80863 manually annotated pages from diverse data sources to represent a wide variability in layouts. For each PDF page, the layout annotations provide labelled bounding-boxes with a choice of 11 distinct classes. DocLayNet also provides a subset of double- and triple-annotated pages to determine the inter-annotator agreement. In multiple experiments, we provide baseline accuracy scores (in mAP) for a set of popular object detection models. We also demonstrate that these models fall approximately 10% behind the inter-annotator agreement. Furthermore, we provide evidence that DocLayNet is of sufficient size. Lastly, we compare models trained on PubLayNet, DocBank and DocLayNet, showing that layout predictions of the DocLayNet-trained models are more robust and thus the preferred choice for general-purpose document-layout analysis.</paragraph>
<subtitle-level-1><location><page_1><loc_9><loc_29><loc_22><loc_30></location>CCS CONCEPTS</subtitle-level-1>
<paragraph><location><page_1><loc_9><loc_25><loc_49><loc_29></location>· Information systems → Document structure ; · Applied computing → Document analysis ; · Computing methodologies → Machine learning ; Computer vision ; Object detection ;</paragraph>
<paragraph><location><page_1><loc_9><loc_15><loc_48><loc_20></location>Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).</paragraph>
<paragraph><location><page_1><loc_9><loc_14><loc_32><loc_15></location>KDD '22, August 14-18, 2022, Washington, DC, USA</paragraph>
<paragraph><location><page_1><loc_9><loc_13><loc_31><loc_14></location>© 2022 Copyright held by the owner/author(s).</paragraph>
<paragraph><location><page_1><loc_9><loc_12><loc_26><loc_13></location>ACM ISBN 978-1-4503-9385-0/22/08.</paragraph>
<paragraph><location><page_1><loc_9><loc_11><loc_27><loc_12></location>https://doi.org/10.1145/3534678.3539043</paragraph>
<caption><location><page_1><loc_52><loc_29><loc_91><loc_32></location>Figure 1: Four examples of complex page layouts across different document categories</caption>
<figure>
<location><page_1><loc_53><loc_34><loc_90><loc_68></location>
<caption>Figure 1: Four examples of complex page layouts across different document categories</caption>
</figure>
<figure>
<location><page_1><loc_65><loc_56><loc_75><loc_68></location>
</figure>
<paragraph><location><page_1><loc_74><loc_55><loc_75><loc_56></location>14</paragraph>
<figure>
<location><page_1><loc_77><loc_54><loc_90><loc_69></location>
</figure>
<paragraph><location><page_1><loc_73><loc_50><loc_90><loc_52></location>Circling Minimums There was a change to the TERPS criteria in that affects circling area dimension by expanding the areas to provide improved obstacle protection. To indicate that the new criteria had been applied to a given procedure, a is placed on the circling line of minimums. The new circling tables and explanatory information is located in the Legend of the TPP. The approaches using standard circling approach areas can be identified by the absence of the on the circling line of minima.</paragraph>
<paragraph><location><page_1><loc_82><loc_48><loc_90><loc_48></location>Apply Expanded Circling Approach Maneuvering Airspace Radius Table</paragraph>
<paragraph><location><page_1><loc_73><loc_37><loc_90><loc_48></location>Apply Standard Circling Approach Maneuvering Radius Table AIRPORT SKETCH The airport sketch is a depiction of the airport with emphasis on runway pattern and related information, positioned in either the lower left or lower right corner of the chart to aid pilot recognition of the airport from the air and to provide some information to aid on ground navigation of the airport. The runways are drawn to scale and oriented to true north. Runway dimensions (length and width) are shown for all active runways. Runway(s) are depicted based on what type and construction of the runway. Hard Surface Other Than Hard Surface Metal Surface Closed Runway Under Construction Stopways, Taxiways, Parking Areas Displaced Threshold Closed Pavement Water Runway Taxiways and aprons are shaded grey. Other runway features that may be shown are runway numbers, runway dimensions, runway slope, arresting gear, and displaced threshold. Other information concerning lighting, final approach bearings, airport beacon, obstacles, control tower, NAVAIDs, heli-pads may also be shown. Airport Elevation and Touchdown Zone Elevation The airport elevation is shown enclosed within a box in the upper left corner of the sketch box and the touchdown zone elevation (TDZE) is shown in the upper right corner of the sketch box. The airport elevation is the highest point of an airport's usable runways measured in feet from mean sea level. The TDZE is the highest elevation in the first feet of the landing surface. Circling only approaches will not show a TDZE. FAA Chart Users' Guide - Terminal Procedures Publication (TPP) - Terms</paragraph>
<paragraph><location><page_1><loc_82><loc_34><loc_82><loc_35></location>114</paragraph>
<subtitle-level-1><location><page_1><loc_52><loc_24><loc_62><loc_25></location>KEYWORDS</subtitle-level-1>
<paragraph><location><page_1><loc_52><loc_21><loc_91><loc_23></location>PDF document conversion, layout segmentation, object-detection, data set, Machine Learning</paragraph>
<subtitle-level-1><location><page_1><loc_52><loc_18><loc_66><loc_19></location>ACM Reference Format:</subtitle-level-1>
@@ -36,9 +27,9 @@
<paragraph><location><page_2><loc_9><loc_71><loc_50><loc_86></location>Despite the substantial improvements achieved with machine-learning (ML) approaches and deep neural networks in recent years, document conversion remains a challenging problem, as demonstrated by the numerous public competitions held on this topic [1-4]. The challenge originates from the huge variability in PDF documents regarding layout, language and formats (scanned, programmatic or a combination of both). Engineering a single ML model that can be applied on all types of documents and provides high-quality layout segmentation remains to this day extremely challenging [5]. To highlight the variability in document layouts, we show a few example documents from the DocLayNet dataset in Figure 1.</paragraph>
<paragraph><location><page_2><loc_9><loc_37><loc_48><loc_71></location>A key problem in the process of document conversion is to understand the structure of a single document page, i.e. which segments of text should be grouped together in a unit. To train models for this task, there are currently two large datasets available to the community, PubLayNet [6] and DocBank [7]. They were introduced in 2019 and 2020 respectively and significantly accelerated the implementation of layout detection and segmentation models due to their sizes of 300K and 500K ground-truth pages. These sizes were achieved by leveraging an automation approach. The benefit of automated ground-truth generation is obvious: one can generate large ground-truth datasets at virtually no cost. However, the automation introduces a constraint on the variability in the dataset, because corresponding structured source data must be available. PubLayNet and DocBank were both generated from scientific document repositories (PubMed and arXiv), which provide XML or LaTeX sources. Those scientific documents present a limited variability in their layouts, because they are typeset in uniform templates provided by the publishers. Obviously, documents such as technical manuals, annual company reports, legal text, government tenders, etc. have very different and partially unique layouts. As a consequence, the layout predictions obtained from models trained on PubLayNet or DocBank are very reasonable when applied on scientific documents. However, for more artistic or free-style layouts, we see sub-par prediction quality from these models, which we demonstrate in Section 5.</paragraph>
<paragraph><location><page_2><loc_9><loc_27><loc_48><loc_36></location>In this paper, we present the DocLayNet dataset. It provides page-by-page layout annotation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique document pages, of which a fraction carry double- or triple-annotations. DocLayNet is similar in spirit to PubLayNet and DocBank and will likewise be made available to the public 1 in order to stimulate the document-layout analysis community. It distinguishes itself in the following aspects:</paragraph>
<paragraph><location><page_2><loc_11><loc_22><loc_48><loc_26></location>- (1) Human Annotation : In contrast to PubLayNet and DocBank, we relied on human annotation instead of automation approaches to generate the data set.</paragraph>
<paragraph><location><page_2><loc_11><loc_20><loc_48><loc_22></location>- (2) Large Layout Variability : We include diverse and complex layouts from a large variety of public sources.</paragraph>
<paragraph><location><page_2><loc_11><loc_15><loc_48><loc_19></location>- (3) Detailed Label Set : We define 11 class labels to distinguish layout features in high detail. PubLayNet provides 5 labels; DocBank provides 13, although not a superset of ours.</paragraph>
<paragraph><location><page_2><loc_11><loc_13><loc_48><loc_15></location>- (4) Redundant Annotations : A fraction of the pages in the DocLayNet data set carry more than one human annotation.</paragraph>
<paragraph><location><page_2><loc_56><loc_87><loc_91><loc_89></location>This enables experimentation with annotation uncertainty and quality control analysis.</paragraph>
<paragraph><location><page_2><loc_54><loc_80><loc_91><loc_86></location>- (5) Pre-defined Train-, Test- & Validation-set : Like DocBank, we provide fixed train-, test- & validation-sets to ensure proportional representation of the class-labels. Further, we prevent leakage of unique layouts across sets, which has a large effect on model accuracy scores.</paragraph>
@ -48,7 +39,7 @@
<paragraph><location><page_2><loc_52><loc_41><loc_91><loc_56></location>While early approaches in document-layout analysis used rule-based algorithms and heuristics [8], the problem is lately addressed with deep learning methods. The most common approach is to leverage object detection models [9-15]. In the last decade, the accuracy and speed of these models have increased dramatically. Furthermore, most state-of-the-art object detection methods can be trained and applied with very little work, thanks to a standardisation effort of the ground-truth data format [16] and common deep-learning frameworks [17]. Reference data sets such as PubLayNet [6] and DocBank provide their data in the commonly accepted COCO format [16].</paragraph>
<paragraph><location><page_2><loc_52><loc_30><loc_91><loc_41></location>Lately, new types of ML models for document-layout analysis have emerged in the community [18-21]. These models do not approach the problem of layout analysis purely based on an image representation of the page, as computer vision methods do. Instead, they combine the text tokens and image representation of a page in order to obtain a segmentation. While the reported accuracies appear to be promising, a broadly accepted data format which links geometric and textual features has yet to be established.</paragraph>
<subtitle-level-1><location><page_2><loc_52><loc_27><loc_78><loc_29></location>3 THE DOCLAYNET DATASET</subtitle-level-1>
<paragraph><location><page_2><loc_52><loc_15><loc_91><loc_25></location>DocLayNet contains 80863 PDF pages. Among these, 7059 carry two instances of human annotations, and 1591 carry three. This amounts to 91104 total annotation instances. The annotations provide layout information in the shape of labeled, rectangular bounding-boxes. We define 11 distinct labels for layout features, namely Caption , Footnote , Formula , List-item , Page-footer , Page-header , Picture , Section-header , Table , Text , and Title . Our reasoning for picking this particular label set is detailed in Section 4.</paragraph>
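The 11 labels map naturally onto COCO-style categories; a minimal sketch for reference (the integer ids here are illustrative assumptions — the dataset's own COCO JSON is the authoritative mapping):

```python
# The 11 DocLayNet class labels, rendered as COCO-style categories.
# NOTE: the id assignment below is illustrative, not taken from the dataset.
DOCLAYNET_LABELS = [
    "Caption", "Footnote", "Formula", "List-item", "Page-footer",
    "Page-header", "Picture", "Section-header", "Table", "Text", "Title",
]
CATEGORIES = [{"id": i + 1, "name": n} for i, n in enumerate(DOCLAYNET_LABELS)]
```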
<paragraph><location><page_2><loc_52><loc_11><loc_91><loc_14></location>In addition to open intellectual property constraints for the source documents, we required that the documents in DocLayNet adhere to a few conditions. Firstly, we kept scanned documents</paragraph>
<caption><location><page_3><loc_9><loc_68><loc_48><loc_70></location>Figure 2: Distribution of DocLayNet pages across document categories.</caption>
<figure>
@ -57,11 +48,11 @@
</figure>
<paragraph><location><page_3><loc_9><loc_54><loc_48><loc_64></location>to a minimum, since they introduce difficulties in annotation (see Section 4). As a second condition, we focussed on medium to large documents ( > 10 pages) with technical content, dense in complex tables, figures, plots and captions. Such documents carry a lot of information value, but are often hard to analyse with high accuracy due to their challenging layouts. Counterexamples of documents not included in the dataset are receipts, invoices, hand-written documents or photographs showing "text in the wild".</paragraph>
<paragraph><location><page_3><loc_9><loc_36><loc_48><loc_53></location>The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports , Manuals , Scientific Articles , Laws & Regulations , Patents and Government Tenders . Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports 2 which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories ( Financial Reports and Manuals ) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.</paragraph>
<paragraph><location><page_3><loc_9><loc_23><loc_48><loc_35></location>We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in the English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.</paragraph>
<paragraph><location><page_3><loc_9><loc_14><loc_48><loc_23></location>To ensure that future benchmarks in the document-layout analysis community can be easily compared, we have split up DocLayNet into pre-defined train-, test- and validation-sets. In this way, we can avoid spurious variations in the evaluation scores due to random splitting in train-, test- and validation-sets. We also ensured that less frequent labels are represented in train and test sets in equal proportions.</paragraph>
<paragraph><location><page_3><loc_52><loc_80><loc_91><loc_89></location>Table 1 shows the overall frequency and distribution of the labels among the different sets. Importantly, we ensure that subsets are only split on full-document boundaries. This prevents pages of the same document from being spread over train, test and validation set, which can give an undesired evaluation advantage to models and lead to overestimation of their prediction accuracy. We will show the impact of this decision in Section 5.</paragraph>
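Splitting on full-document boundaries is straightforward to reproduce; a minimal sketch, assuming each COCO image record carries an identifier of its source document (the field name `doc_id` is hypothetical):

```python
import json
import random

def split_by_document(coco_path, train_frac=0.8, seed=0):
    """Split COCO image records on full-document boundaries, so pages of
    one document never leak across train/test/validation subsets."""
    with open(coco_path) as f:
        coco = json.load(f)
    # Group page images by their source document (field name assumed).
    doc_ids = sorted({img["doc_id"] for img in coco["images"]})
    random.Random(seed).shuffle(doc_ids)
    cut = int(train_frac * len(doc_ids))
    train_docs = set(doc_ids[:cut])
    train = [img for img in coco["images"] if img["doc_id"] in train_docs]
    held_out = [img for img in coco["images"] if img["doc_id"] not in train_docs]
    return train, held_out
```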
<paragraph><location><page_3><loc_52><loc_66><loc_91><loc_79></location>In order to accommodate the different types of models currently in use by the community, we provide DocLayNet in an augmented COCO format [16]. This entails the standard COCO ground-truth file (in JSON format) with the associated page images (in PNG format, 1025 × 1025 pixels). Furthermore, custom fields have been added to each COCO record to specify document category, original document filename and page number. In addition, we also provide the original PDF pages, as well as sidecar files containing parsed PDF text and text-cell coordinates (in JSON). All additional files are linked to the primary page images by their matching filenames.</paragraph>
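A short sketch of joining the COCO records with their sidecar files by matching filenames, as described above (the directory layout and JSON key names are assumptions, not taken from the dataset):

```python
import json
from pathlib import Path

root = Path("DocLayNet")  # hypothetical dataset root

# Standard COCO ground-truth file; page images ship as PNG alongside it.
coco = json.loads((root / "COCO" / "train.json").read_text())

for img in coco["images"][:3]:
    stem = Path(img["file_name"]).stem
    # Sidecar JSON with parsed PDF text and text-cell coordinates, linked
    # to the page image by its matching filename (key names assumed).
    sidecar = json.loads((root / "JSON" / f"{stem}.json").read_text())
    original_pdf = root / "PDF" / f"{stem}.pdf"
    print(img["file_name"], len(sidecar.get("cells", [])), original_pdf.exists())
```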
<paragraph><location><page_3><loc_52><loc_26><loc_91><loc_65></location>Despite being cost-intense and far less scalable than automation, human annotation has several benefits over automated ground-truth generation. The first and most obvious reason to leverage human annotations is the freedom to annotate any type of document without requiring a programmatic source. For most PDF documents, the original source document is not available. The latter is not a hard constraint with human annotation, but it is for automated methods. A second reason to use human annotations is that the latter usually provide a more natural interpretation of the page layout. The human-interpreted layout can significantly deviate from the programmatic layout used in typesetting. For example, "invisible" tables might be used solely for aligning text paragraphs on columns. Such typesetting tricks might be interpreted by automated methods incorrectly as an actual table, while the human annotation will interpret it correctly as Text or other styles. The same applies to multi-line text elements, when authors decided to space them as "invisible" list elements without bullet symbols. A third reason to gather ground-truth through human annotation is to estimate a "natural" upper bound on the segmentation accuracy. As we will show in Section 4, certain documents featuring complex layouts can have different but equally acceptable layout interpretations. This natural upper bound for segmentation accuracy can be found by annotating the same pages multiple times by different people and evaluating the inter-annotator agreement. Such a baseline consistency evaluation is very useful to define expectations for a good target accuracy in trained deep neural network models and avoid overfitting (see Table 1). On the flip side, achieving high annotation consistency proved to be a key challenge in human annotation, as we outline in Section 4.</paragraph>
<subtitle-level-1><location><page_3><loc_52><loc_22><loc_77><loc_23></location>4 ANNOTATION CAMPAIGN</subtitle-level-1>
<paragraph><location><page_3><loc_52><loc_11><loc_91><loc_20></location>The annotation campaign was carried out in four phases. In phase one, we identified and prepared the data sources for annotation. In phase two, we determined the class labels and how annotations should be done on the documents in order to obtain maximum consistency. The latter was guided by a detailed requirement analysis and exhaustive experiments. In phase three, we trained the annotation staff and performed exams for quality assurance. In phase four,</paragraph>
<caption><location><page_4><loc_9><loc_85><loc_91><loc_89></location>Table 1: DocLayNet dataset overview. Along with the frequency of each class label, we present the relative occurrence (as % of row "Total") in the train, test and validation sets. The inter-annotator agreement is computed as the mAP@0.5-0.95 metric between pairwise annotations from the triple-annotated pages, from which we obtain accuracy ranges.</caption>
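One way to reproduce such a pairwise agreement score is to treat one annotator's boxes as ground truth and the other annotator's boxes as detections with confidence 1.0; a sketch using pycocotools (file names are placeholders):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# One annotation pass acts as ground truth; the other pass is loaded as
# COCO "results", each box carrying score 1.0 (file names are placeholders).
coco_a = COCO("annotator_a.json")
coco_b = coco_a.loadRes("annotator_b_results.json")

ev = COCOeval(coco_a, coco_b, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()
map_50_95 = ev.stats[0]  # AP averaged over IoU thresholds 0.5:0.95, as in Table 1
```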
@ -93,14 +84,14 @@
<paragraph><location><page_4><loc_52><loc_53><loc_91><loc_61></location>include publication repositories such as arXiv$^{3}$, government offices, company websites as well as data directory services for financial reports and patents. Scanned documents were excluded wherever possible because they can be rotated or skewed. This would not allow us to perform annotation with rectangular bounding-boxes and therefore complicate the annotation process.</paragraph>
<paragraph><location><page_4><loc_52><loc_36><loc_91><loc_52></location>Preparation work included uploading and parsing the sourced PDF documents in the Corpus Conversion Service (CCS) [22], a cloud-native platform which provides a visual annotation interface and allows for dataset inspection and analysis. The annotation interface of CCS is shown in Figure 3. The desired balance of pages between the different document categories was achieved by selective subsampling of pages with certain desired properties. For example, we made sure to include the title page of each document and bias the remaining page selection to those with figures or tables. The latter was achieved by leveraging pre-trained object detection models from PubLayNet, which helped us estimate how many figures and tables a given page contains.</paragraph>
<paragraph><location><page_4><loc_52><loc_12><loc_91><loc_36></location>Phase 2: Label selection and guideline. We reviewed the collected documents and identified the most common structural features they exhibit. This was achieved by identifying recurrent layout elements, which led us to the definition of 11 distinct class labels. These 11 class labels are Caption , Footnote , Formula , List-item , Page-footer , Page-header , Picture , Section-header , Table , Text , and Title . Critical factors that were considered for the choice of these class labels were (1) the overall occurrence of the label, (2) the specificity of the label, (3) recognisability on a single page (i.e. no need for context from previous or next page) and (4) overall coverage of the page. Specificity ensures that the choice of label is not ambiguous, while coverage ensures that all meaningful items on a page can be annotated. We refrained from class labels that are very specific to a document category, such as Abstract in the Scientific Articles category. We also avoided class labels that are tightly linked to the semantics of the text. Labels such as Author and Affiliation , as seen in DocBank, are often only distinguishable by discriminating on</paragraph>
<paragraph><location><page_5><loc_9><loc_87><loc_48><loc_89></location>the textual content of an element, which goes beyond visual layout recognition, in particular outside the Scientific Articles category.</paragraph>
<paragraph><location><page_5><loc_9><loc_69><loc_48><loc_86></location>At first sight, the task of visual document-layout interpretation appears intuitive enough to obtain plausible annotations in most cases. However, during early trial-runs in the core team, we observed many cases in which annotators use different annotation styles, especially for documents with challenging layouts. For example, if a figure is presented with subfigures, one annotator might draw a single figure bounding-box, while another might annotate each subfigure separately. The same applies for lists, where one might annotate all list items in one block or each list item separately. In essence, we observed that challenging layouts would be annotated in different but plausible ways. To illustrate this, we show in Figure 4 multiple examples of plausible but inconsistent annotations on the same pages.</paragraph>
<paragraph><location><page_5><loc_9><loc_57><loc_48><loc_68></location>Obviously, this inconsistency in annotations is not desirable for datasets which are intended to be used for model training. To minimise these inconsistencies, we created a detailed annotation guideline. While perfect consistency across 40 annotation staff members is clearly not possible to achieve, we saw a huge improvement in annotation consistency after the introduction of our annotation guideline. A few selected, non-trivial highlights of the guideline are:</paragraph>
<paragraph><location><page_5><loc_11><loc_51><loc_48><loc_56></location>- (1) Every list-item is an individual object instance with class label List-item . This definition is different from PubLayNet and DocBank, where all list-items are grouped together into one List object.</paragraph>
<paragraph><location><page_5><loc_11><loc_45><loc_48><loc_50></location>- (2) A List-item is a paragraph with hanging indentation. Single-line elements can qualify as List-item if the neighbour elements expose hanging indentation. Bullet or enumeration symbols are not a requirement.</paragraph>
<paragraph><location><page_5><loc_11><loc_42><loc_48><loc_45></location>- (3) For every Caption , there must be exactly one corresponding Picture or Table .</paragraph>
<paragraph><location><page_5><loc_11><loc_40><loc_48><loc_42></location>- (4) Connected sub-pictures are grouped together in one Picture object.</paragraph>
<paragraph><location><page_5><loc_11><loc_38><loc_43><loc_39></location>- (5) Formula numbers are included in a Formula object.</paragraph>
<paragraph><location><page_5><loc_11><loc_34><loc_48><loc_38></location>- (6) Emphasised text (e.g. in italic or bold) at the beginning of a paragraph is not considered a Section-header , unless it appears exclusively on its own line.</paragraph>
<paragraph><location><page_5><loc_9><loc_27><loc_48><loc_33></location>The complete annotation guideline is over 100 pages long and a detailed description is obviously out of scope for this paper. Nevertheless, it will be made publicly available alongside DocLayNet for future reference.</paragraph>
<paragraph><location><page_5><loc_9><loc_11><loc_48><loc_27></location>Phase 3: Training. After a first trial with a small group of people, we realised that providing the annotation guideline and a set of random practice pages did not yield the desired quality level for layout annotation. Therefore we prepared a subset of pages with two different complexity levels, each with a practice and an exam part. 974 pages were reference-annotated by one proficient core team member. Annotation staff were then given the task to annotate the same subsets (blinded from the reference). By comparing the annotations of each staff member with the reference annotations, we could quantify how closely their annotations matched the reference. Only after passing two exam levels with high annotation quality, staff were admitted into the production phase. Practice iterations</paragraph>
@ -109,6 +100,7 @@
<location><page_5><loc_52><loc_42><loc_91><loc_89></location>
<caption>Figure 4: Examples of plausible annotation alternatives for the same page. Criteria in our annotation guideline can resolve cases A to C, while the case D remains ambiguous.</caption>
</figure>
<paragraph><location><page_5><loc_52><loc_31><loc_91><loc_34></location>were carried out over a timeframe of 12 weeks, after which 8 of the 40 initially allocated annotators did not pass the bar.</paragraph>
<paragraph><location><page_5><loc_52><loc_10><loc_91><loc_31></location>Phase 4: Production annotation. The previously selected 80K pages were annotated with the defined 11 class labels by 32 annotators. This production phase took around three months to complete. All annotations were created online through CCS, which visualises the programmatic PDF text-cells as an overlay on the page. The page annotations are obtained by drawing rectangular bounding-boxes, as shown in Figure 3. With regard to the annotation practices, we implemented a few constraints and capabilities on the tooling level. First, we only allow non-overlapping, vertically oriented, rectangular boxes. For the large majority of documents, this constraint was sufficient and sped up the annotation considerably in comparison with arbitrary segmentation shapes. Second, annotator staff were not able to see each other's annotations. This was enforced by design to avoid any bias in the annotation, which could skew the numbers of the inter-annotator agreement (see Table 1). We wanted</paragraph>
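At the tooling level, the non-overlap constraint reduces to a simple axis-aligned rectangle intersection test; a minimal sketch (not the actual CCS implementation):

```python
def overlaps(a, b):
    """Axis-aligned boxes as (x0, y0, x1, y1); True if interiors intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def can_place(new_box, existing_boxes):
    # Reject any box that would overlap an already drawn annotation.
    return all(not overlaps(new_box, b) for b in existing_boxes)

assert can_place((0, 0, 10, 10), [(20, 0, 30, 10)])       # disjoint boxes
assert not can_place((0, 0, 10, 10), [(5, 5, 15, 15)])    # intersecting boxes
```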
<caption><location><page_6><loc_9><loc_77><loc_48><loc_89></location>Table 2: Prediction performance (mAP@0.5-0.95) of object detection networks on DocLayNet test set. The MRCNN (Mask R-CNN) and FRCNN (Faster R-CNN) models with ResNet-50 or ResNet-101 backbone were trained based on the network architectures from the detectron2 model zoo (Mask R-CNN R50, R101-FPN 3x, Faster R-CNN R101-FPN 3x), with default configurations. The YOLO implementation utilized was YOLOv5x6 [13]. All models were initialised using pre-trained weights from the COCO 2017 dataset.</caption>
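Since DocLayNet ships in COCO format, reproducing such a baseline with detectron2 is mostly configuration; a sketch along the lines of the Table 2 setup (dataset paths are placeholders, hyperparameters left at the model-zoo defaults as the caption states):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register the DocLayNet COCO splits (paths are placeholders).
register_coco_instances("doclaynet_train", {}, "COCO/train.json", "PNG")
register_coco_instances("doclaynet_val", {}, "COCO/val.json", "PNG")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
# Initialise from COCO 2017 pre-trained weights, as stated in the caption.
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("doclaynet_train",)
cfg.DATASETS.TEST = ("doclaynet_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 11  # the 11 DocLayNet labels

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```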
@ -229,20 +221,20 @@
<paragraph><location><page_8><loc_52><loc_18><loc_91><loc_21></location>- [11] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence , 39(6):1137-1149, 2017.</paragraph>
<paragraph><location><page_8><loc_52><loc_15><loc_91><loc_18></location>- [12] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. Mask R-CNN. In IEEE International Conference on Computer Vision , ICCV, pages 2980-2988. IEEE Computer Society, Oct 2017.</paragraph>
<paragraph><location><page_8><loc_52><loc_11><loc_91><loc_15></location>- [13] Glenn Jocher, Alex Stoken, Ayush Chaurasia, Jirka Borovec, NanoCode012, TaoXie, Yonghye Kwon, Kalen Michael, Liu Changyu, Jiacong Fang, Abhiram V, Laughing, tkianai, yxNONG, Piotr Skalski, Adam Hogan, Jebastin Nadar, imyhxy, Lorenzo Mammana, Alex Wang, Cristi Fati, Diego Montes, Jan Hajek, Laurentiu</paragraph>
<caption><location><page_9><loc_10><loc_43><loc_52><loc_44></location>Text Caption List-Item Formula Table Section-Header Picture Page-Header Page-Footer Title</caption>
<figure>
<location><page_9><loc_9><loc_44><loc_91><loc_89></location>
<caption>Text Caption List-Item Formula Table Section-Header Picture Page-Header Page-Footer Title</caption>
</figure>
<paragraph><location><page_9><loc_9><loc_36><loc_91><loc_41></location>Figure 6: Example layout predictions on selected pages from the DocLayNet test-set. (A, D) exhibit favourable results on coloured backgrounds. (B, C) show accurate list-item and paragraph differentiation despite densely-spaced lines. (E) demonstrates good table and figure distinction. (F) shows predictions on a Chinese patent with multiple overlaps, label confusion and missing boxes.</paragraph>
<paragraph><location><page_9><loc_11><loc_31><loc_48><loc_33></location>Diaconu, Mai Thanh Minh, Marc, albinxavi, fatih, oleg, and wanghao yang. ultralytics/yolov5: v6.0 - yolov5n nano models, roboflow integration, tensorflow export, opencv dnn support, October 2021.</paragraph>
<paragraph><location><page_9><loc_9><loc_28><loc_48><loc_30></location>- [14] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. CoRR , abs/2005.12872, 2020.</paragraph>
<paragraph><location><page_9><loc_9><loc_26><loc_48><loc_27></location>- [15] Mingxing Tan, Ruoming Pang, and Quoc V. Le. Efficientdet: Scalable and efficient object detection. CoRR , abs/1911.09070, 2019.</paragraph>
<paragraph><location><page_9><loc_9><loc_23><loc_48><loc_25></location>- [16] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: common objects in context, 2014.</paragraph>
<paragraph><location><page_9><loc_9><loc_21><loc_48><loc_22></location>- [17] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2, 2019.</paragraph>
<paragraph><location><page_9><loc_9><loc_16><loc_48><loc_20></location>- [18] Nikolaos Livathinos, Cesar Berrospi, Maksym Lysak, Viktor Kuropiatnyk, Ahmed Nassar, Andre Carvalho, Michele Dolfi, Christoph Auer, Kasper Dinkla, and Peter W. J. Staar. Robust pdf document conversion using recurrent neural networks. In Proceedings of the 35th Conference on Artificial Intelligence , AAAI, pages 15137-15145, Feb 2021.</paragraph>
<paragraph><location><page_9><loc_9><loc_10><loc_48><loc_15></location>- [19] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. Layoutlm: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 1192-1200, New York, USA, 2020. Association for Computing Machinery.</paragraph>
<paragraph><location><page_9><loc_52><loc_32><loc_91><loc_33></location>- [20] Shoubin Li, Xuyan Ma, Shuaiqun Pan, Jun Hu, Lin Shi, and Qing Wang. Vtlayout: Fusion of visual and text features for document layout analysis, 2021.</paragraph>
<paragraph><location><page_9><loc_52><loc_29><loc_91><loc_31></location>- [21] Peng Zhang, Can Li, Liang Qiao, Zhanzhan Cheng, Shiliang Pu, Yi Niu, and Fei Wu. Vsr: A unified framework for document layout analysis combining vision, semantics and relations, 2021.</paragraph>
<paragraph><location><page_9><loc_52><loc_25><loc_91><loc_28></location>- [22] Peter W J Staar, Michele Dolfi, Christoph Auer, and Costas Bekas. Corpus conversion service: A machine learning platform to ingest documents at scale. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 774-782. ACM, 2018.</paragraph>
<paragraph><location><page_9><loc_52><loc_23><loc_91><loc_24></location>- [23] Connor Shorten and Taghi M. Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data , 6(1):60, 2019.</paragraph>

File diff suppressed because one or more lines are too long

View File

@ -20,29 +20,17 @@ Accurate document layout analysis is a key requirement for highquality PDF docum
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
KDD '22, August 14-18, 2022, Washington, DC, USA
© 2022 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-9385-0/22/08.
https://doi.org/10.1145/3534678.3539043
Figure 1: Four examples of complex page layouts across different document categories
<!-- image -->
<!-- image -->
<!-- image -->
## KEYWORDS
PDF document conversion, layout segmentation, object-detection, data set, Machine Learning
@ -164,6 +152,8 @@ Phase 3: Training. After a first trial with a small group of people, we realised
Figure 4: Examples of plausible annotation alternatives for the same page. Criteria in our annotation guideline can resolve cases A to C, while the case D remains ambiguous.
<!-- image -->
were carried out over a timeframe of 12 weeks, after which 8 of the 40 initially allocated annotators did not pass the bar.
Phase 4: Production annotation. The previously selected 80K pages were annotated with the defined 11 class labels by 32 annotators. This production phase took around three months to complete. All annotations were created online through CCS, which visualises the programmatic PDF text-cells as an overlay on the page. The page annotations are obtained by drawing rectangular bounding-boxes, as shown in Figure 3. With regard to the annotation practices, we implemented a few constraints and capabilities on the tooling level. First, we only allow non-overlapping, vertically oriented, rectangular boxes. For the large majority of documents, this constraint was sufficient and sped up the annotation considerably in comparison with arbitrary segmentation shapes. Second, annotator staff were not able to see each other's annotations. This was enforced by design to avoid any bias in the annotation, which could skew the numbers of the inter-annotator agreement (see Table 1). We wanted
@ -230,8 +220,6 @@ One of the fundamental questions related to any dataset is if it is "large enoug
The choice and number of labels can have a significant effect on the overall model performance. Since PubLayNet, DocBank and DocLayNet all have different label sets, it is of particular interest to understand and quantify this influence of the label set on the model performance. We investigate this by either down-mapping labels into more common ones (e.g. Caption → Text ) or excluding them from the annotations entirely. Furthermore, it must be stressed that all mappings and exclusions were performed on the data before model training. In Table 3, we present the mAP scores for a Mask R-CNN R50 network on different label sets. Where a label is down-mapped, we show its corresponding label, otherwise it was excluded. We present three different label sets, with 6, 5 and 4 different labels respectively. The set of 5 labels contains the same labels as PubLayNet. However, due to the different definition of
| Class-count | 11 | 11 | 5 | 5 |
|----------------|------|------|-----|------|
| Split | Doc | Page | Doc | Page |
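The down-mapping and exclusion step described above amounts to a simple pre-processing pass over the annotation records before training. A minimal Python sketch under assumed data structures; the mapping shown is illustrative, not the exact label sets behind Table 3:

```python
# Illustrative down-mapping: remap some labels to more common ones
# (e.g. Caption -> Text) and drop others before training. The record
# format {"label": ..., "bbox": ...} is an assumption for this sketch.

DOWN_MAP = {
    "Caption": "Text",
    "Footnote": "Text",
    "Page-header": None,   # None means: exclude from the annotations
    "Page-footer": None,
}

def remap_annotations(annotations):
    """Apply down-mapping/exclusion to a list of annotation dicts."""
    out = []
    for ann in annotations:
        label = DOWN_MAP.get(ann["label"], ann["label"])
        if label is not None:
            out.append({**ann, "label": label})
    return out

sample = [{"label": "Caption", "bbox": [0, 0, 10, 10]},
          {"label": "Page-footer", "bbox": [0, 90, 10, 100]}]
print(remap_annotations(sample))  # Caption becomes Text; the footer is dropped
```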

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -1,7 +1,9 @@
<document>
<subtitle-level-1><location><page_1><loc_22><loc_82><loc_79><loc_85></location>Optimized Table Tokenization for Table Structure Recognition</subtitle-level-1>
<paragraph><location><page_1><loc_23><loc_75><loc_78><loc_79></location>Maksym Lysak [0000 0002 3723 $^{6960]}$, Ahmed Nassar[0000 0002 9468 $^{0822]}$, Nikolaos Livathinos [0000 0001 8513 $^{3491]}$, Christoph Auer[0000 0001 5761 $^{0422]}$, [0000 0002 8088 0823]</paragraph>
<paragraph><location><page_1><loc_38><loc_74><loc_49><loc_75></location>and Peter Staar</paragraph>
<paragraph><location><page_1><loc_46><loc_72><loc_55><loc_73></location>IBM Research</paragraph>
<paragraph><location><page_1><loc_36><loc_70><loc_64><loc_71></location>{mly,ahn,nli,cau,taa}@zurich.ibm.com</paragraph>
<paragraph><location><page_1><loc_27><loc_41><loc_74><loc_66></location>Abstract. Extracting tables from documents is a crucial task in any document conversion pipeline. Recently, transformer-based models have demonstrated that table-structure can be recognized with impressive accuracy using Image-to-Markup-Sequence (Im2Seq) approaches. Taking only the image of a table, such models predict a sequence of tokens (e.g. in HTML, LaTeX) which represent the structure of the table. Since the token representation of the table structure has a significant impact on the accuracy and run-time performance of any Im2Seq model, we investigate in this paper how table-structure representation can be optimised. We propose a new, optimised table-structure language (OTSL) with a minimized vocabulary and specific rules. The benefits of OTSL are that it reduces the number of tokens to 5 (HTML needs 28+) and shortens the sequence length to half of HTML on average. Consequently, model accuracy improves significantly, inference time is halved compared to HTML-based models, and the predicted table structures are always syntactically correct. This in turn eliminates most post-processing needs. Popular table structure data-sets will be published in OTSL format to the community.</paragraph>
<paragraph><location><page_1><loc_27><loc_37><loc_74><loc_40></location>Keywords: Table Structure Recognition · Data Representation · Transformers · Optimization.</paragraph>
<subtitle-level-1><location><page_1><loc_22><loc_33><loc_37><loc_34></location>1 Introduction</subtitle-level-1>
@ -16,7 +18,7 @@
<paragraph><location><page_2><loc_22><loc_16><loc_79><loc_34></location>Recently emerging SOTA methods for table structure recognition employ transformer-based models, in which an image of the table is provided to the network in order to predict the structure of the table as a sequence of tokens. These image-to-sequence (Im2Seq) models are extremely powerful, since they allow for a purely data-driven solution. The tokens of the sequence typically belong to a markup language such as HTML, Latex or Markdown, which allow to describe table structure as rows, columns and spanning cells in various configurations. In Figure 1, we illustrate how HTML is used to represent the table-structure of a particular example table. Public table-structure data sets such as PubTabNet [22], and FinTabNet [21], which were created in a semi-automated way from paired PDF and HTML sources (e.g. PubMed Central), popularized primarily the use of HTML as ground-truth representation format for TSR.</paragraph>
<paragraph><location><page_3><loc_22><loc_73><loc_79><loc_85></location>While the majority of research in TSR is currently focused on the development and application of novel neural model architectures, the table structure representation language (e.g. HTML in PubTabNet and FinTabNet) is usually adopted as is for the sequence tokenization in Im2Seq models. In this paper, we aim for the opposite and investigate the impact of the table structure representation language with an otherwise unmodified Im2Seq transformer-based architecture. Since the current state-of-the-art Im2Seq model is TableFormer [9], we select this model to perform our experiments.</paragraph>
<paragraph><location><page_3><loc_22><loc_58><loc_79><loc_73></location>The main contribution of this paper is the introduction of a new optimised table structure language (OTSL), specifically designed to describe table-structure in a compact and structured way for Im2Seq models. OTSL has a number of key features, which make it very attractive to use in Im2Seq models. Specifically, compared to other languages such as HTML, OTSL has a minimized vocabulary which yields short sequence length, strong inherent structure (e.g. strict rectangular layout) and a strict syntax with rules that only look backwards. The latter allows for syntax validation during inference and ensures a syntactically correct table-structure. These OTSL features are illustrated in Figure 1, in comparison to HTML.</paragraph>
<paragraph><location><page_3><loc_22><loc_45><loc_79><loc_58></location>The paper is structured as follows. In section 2, we give an overview of the latest developments in table-structure reconstruction. In section 3 we review the current HTML table encoding (popularised by PubTabNet and FinTabNet) and discuss its flaws. Subsequently, we introduce OTSL in section 4, which includes the language definition, syntax rules and error-correction procedures. In section 5, we apply OTSL on the TableFormer architecture, compare it to TableFormer models trained on HTML and ultimately demonstrate the advantages of using OTSL. Finally, in section 6 we conclude our work and outline next potential steps.</paragraph>
<subtitle-level-1><location><page_3><loc_22><loc_40><loc_39><loc_42></location>2 Related Work</subtitle-level-1>
<paragraph><location><page_3><loc_22><loc_16><loc_79><loc_38></location>Approaches to formalize the logical structure and layout of tables in electronic documents date back more than two decades [16]. In the recent past, a wide variety of computer vision methods have been explored to tackle the problem of table structure recognition, i.e. the correct identification of columns, rows and spanning cells in a given table. Broadly speaking, the current deep-learning based approaches fall into three categories: object detection (OD) methods, Graph-Neural-Network (GNN) methods and Image-to-Markup-Sequence (Im2Seq) methods. Object-detection based methods [11,12,13,14,21] rely on table-structure annotation using (overlapping) bounding boxes for training, and produce bounding-box predictions to define table cells, rows, and columns on a table image. Graph Neural Network (GNN) based methods [3,6,17,18], as the name suggests, represent tables as graph structures. The graph nodes represent the content of each table cell, an embedding vector from the table image, or geometric coordinates of the table cell. The edges of the graph define the relationship between the nodes, e.g. if they belong to the same column, row, or table cell.</paragraph>
<paragraph><location><page_4><loc_22><loc_67><loc_79><loc_85></location>Other work [20] aims at predicting a grid for each table and deciding which cells must be merged using an attention network. Im2Seq methods cast the problem as a sequence generation task [4,5,9,22], and therefore need an internal table-structure representation language, which is often implemented with standard markup languages (e.g. HTML, LaTeX, Markdown). In theory, Im2Seq methods have a natural advantage over the OD and GNN methods by virtue of directly predicting the table-structure. As such, no post-processing or rules are needed in order to obtain the table-structure, which is necessary with OD and GNN approaches. In practice, this is not entirely true, because a predicted sequence of table-structure markup does not necessarily have to be syntactically correct. Hence, depending on the quality of the predicted sequence, some post-processing needs to be performed to ensure a syntactically valid (let alone correct) sequence.</paragraph>
@ -39,24 +41,24 @@
<paragraph><location><page_6><loc_22><loc_44><loc_79><loc_56></location>To mitigate the issues with HTML in Im2Seq-based TSR models laid out before, we propose here our Optimised Table Structure Language (OTSL). OTSL is designed to express table structure with a minimized vocabulary and a simple set of rules, which are both significantly reduced compared to HTML. At the same time, OTSL enables easy error detection and correction during sequence generation. We further demonstrate how the compact structure representation and minimized sequence length improves prediction accuracy and inference time in the TableFormer architecture.</paragraph>
<subtitle-level-1><location><page_6><loc_22><loc_40><loc_43><loc_41></location>4.1 Language Definition</subtitle-level-1>
<paragraph><location><page_6><loc_22><loc_34><loc_79><loc_38></location>In Figure 3, we illustrate how the OTSL is defined. In essence, the OTSL defines only 5 tokens that directly describe a tabular structure based on an atomic 2D grid.</paragraph>
<paragraph><location><page_6><loc_24><loc_33><loc_67><loc_34></location>The OTSL vocabulary is comprised of the following tokens:</paragraph>
<paragraph><location><page_6><loc_23><loc_30><loc_75><loc_31></location>- -"C" cell a new table cell that either has or does not have cell content</paragraph>
<paragraph><location><page_6><loc_23><loc_27><loc_79><loc_29></location>- -"L" cell left-looking cell , merging with the left neighbor cell to create a span</paragraph>
<paragraph><location><page_6><loc_23><loc_24><loc_79><loc_26></location>- -"U" cell up-looking cell , merging with the upper neighbor cell to create a span</paragraph>
<paragraph><location><page_6><loc_23><loc_22><loc_74><loc_23></location>- -"X" cell cross cell , to merge with both left and upper neighbor cells</paragraph>
<paragraph><location><page_6><loc_23><loc_20><loc_54><loc_21></location>- -"NL" new-line , switch to the next row.</paragraph>
<paragraph><location><page_6><loc_22><loc_16><loc_79><loc_19></location>A notable attribute of OTSL is that it has the capability of achieving lossless conversion to HTML.</paragraph>
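The lossless-conversion claim can be made concrete with a short sketch. The following Python is illustrative only, not the authors' implementation: it expands a grid of OTSL cell tokens into HTML table markup. For brevity it covers "C", "L" and "U"; "X" cells would apply the same span logic on both axes, and the "NL" terminators are implicit in the row-per-list representation assumed here.

```python
# Expand an OTSL token grid into HTML <table> markup (sketch).
def otsl_to_html(grid):
    rows, cols = len(grid), len(grid[0])
    html = ["<table>"]
    for r in range(rows):
        html.append("<tr>")
        for c in range(cols):
            if grid[r][c] != "C":
                continue  # merged cells are emitted by their top-left "C"
            colspan = 1
            while c + colspan < cols and grid[r][c + colspan] == "L":
                colspan += 1
            rowspan = 1
            while r + rowspan < rows and grid[r + rowspan][c] == "U":
                rowspan += 1
            attrs = ""
            if colspan > 1:
                attrs += f' colspan="{colspan}"'
            if rowspan > 1:
                attrs += f' rowspan="{rowspan}"'
            html.append(f"<td{attrs}></td>")
        html.append("</tr>")
    html.append("</table>")
    return "".join(html)

# 2x3 grid where the first row contains a cell spanning two columns.
print(otsl_to_html([["C", "L", "C"],
                    ["C", "C", "C"]]))
```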
<caption><location><page_7><loc_22><loc_80><loc_79><loc_84></location>Fig. 3. OTSL description of table structure: A - table example; B - graphical representation of table structure; C - mapping structure on a grid; D - OTSL structure encoding; E - explanation on cell encoding</caption>
<figure>
<location><page_7><loc_27><loc_65><loc_73><loc_79></location>
<caption>Fig. 3. OTSL description of table structure: A - table example; B - graphical representation of table structure; C - mapping structure on a grid; D - OTSL structure encoding; E - explanation on cell encoding</caption>
</figure>
<subtitle-level-1><location><page_7><loc_22><loc_60><loc_40><loc_61></location>4.2 Language Syntax</subtitle-level-1>
<paragraph><location><page_7><loc_22><loc_58><loc_59><loc_59></location>The OTSL representation follows these syntax rules:</paragraph>
<paragraph><location><page_7><loc_23><loc_54><loc_79><loc_56></location>- 1. Left-looking cell rule : The left neighbour of an "L" cell must be either another "L" cell or a "C" cell.</paragraph>
<paragraph><location><page_7><loc_23><loc_51><loc_79><loc_53></location>- 2. Up-looking cell rule : The upper neighbour of a "U" cell must be either another "U" cell or a "C" cell.</paragraph>
<subtitle-level-1><location><page_7><loc_23><loc_49><loc_37><loc_50></location>3. Cross cell rule :</subtitle-level-1>
<paragraph><location><page_7><loc_25><loc_44><loc_79><loc_49></location>- The left neighbour of an "X" cell must be either another "X" cell or a "U" cell, and the upper neighbour of an "X" cell must be either another "X" cell or an "L" cell.</paragraph>
<paragraph><location><page_7><loc_23><loc_43><loc_78><loc_44></location>- 4. First row rule : Only "L" cells and "C" cells are allowed in the first row.</paragraph>
<paragraph><location><page_7><loc_23><loc_40><loc_79><loc_43></location>- 5. First column rule : Only "U" cells and "C" cells are allowed in the first column.</paragraph>
<paragraph><location><page_7><loc_23><loc_37><loc_79><loc_40></location>- 6. Rectangular rule : The table representation is always rectangular - all rows must have an equal number of tokens, terminated with "NL" token.</paragraph>
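Since every rule above inspects only the left and upper neighbours, a candidate token can be validated the moment it is emitted. A hedged sketch of such a rule checker; the function name and grid representation are our own illustration, not code from the paper:

```python
def valid_next(grid, row, col, token):
    """Check one candidate token against OTSL rules 1-5. `grid` holds the
    tokens generated so far as a list of lists; the caller enforces the
    rectangular rule 6 by requiring "NL" at a fixed row length."""
    left = grid[row][col - 1] if col > 0 else None
    up = grid[row - 1][col] if row > 0 else None
    if token == "C":
        return True
    if token == "L":   # rule 1; col > 0 also covers the first-column rule 5
        return col > 0 and left in ("L", "C")
    if token == "U":   # rule 2; row > 0 also covers the first-row rule 4
        return row > 0 and up in ("U", "C")
    if token == "X":   # rule 3; the neighbour checks imply rules 4 and 5
        return row > 0 and col > 0 and left in ("X", "U") and up in ("X", "L")
    return False

# A "U" token is rejected in the first row (rule 4) but accepted under a "C".
assert not valid_next([["C"]], 0, 0, "U")
assert valid_next([["C"], [None]], 1, 0, "U")
```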
@ -65,7 +67,7 @@
<paragraph><location><page_8><loc_22><loc_82><loc_79><loc_85></location>reduces significantly the column drift seen in the HTML based models (see Figure 5).</paragraph>
<subtitle-level-1><location><page_8><loc_22><loc_78><loc_52><loc_80></location>4.3 Error-detection and -mitigation</subtitle-level-1>
<paragraph><location><page_8><loc_22><loc_62><loc_79><loc_77></location>The design of OTSL allows to validate a table structure easily on an unfinished sequence. The detection of an invalid sequence token is a clear indication of a prediction mistake, however a valid sequence by itself does not guarantee prediction correctness. Different heuristics can be used to correct token errors in an invalid sequence and thus increase the chances for accurate predictions. Such heuristics can be applied either after the prediction of each token, or at the end on the entire predicted sequence. For example a simple heuristic which can correct the predicted OTSL sequence on-the-fly is to verify if the token with the highest prediction confidence invalidates the predicted sequence, and replace it by the token with the next highest confidence until OTSL rules are satisfied.</paragraph>
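Read together with the rule checker sketched earlier, the on-the-fly heuristic described above amounts to a constrained decoding loop: try tokens in order of model confidence and keep the first one the rules accept. A hypothetical sketch; the confidence values below are made up for illustration and `valid_next` is the checker from the previous sketch:

```python
def decode_step(token_probs, grid, row, col):
    """token_probs maps token -> confidence at this decoding step; fall back
    through the ranking until valid_next accepts a token."""
    for token, _ in sorted(token_probs.items(), key=lambda kv: -kv[1]):
        if valid_next(grid, row, col, token):
            return token
    raise ValueError("no syntactically valid token available")

# The model prefers "U" in the first row, which rule 4 forbids, so the
# decoder falls back to the next most confident valid token, "C".
print(decode_step({"U": 0.55, "C": 0.40, "L": 0.05}, [[None]], 0, 0))
```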
<subtitle-level-1><location><page_8><loc_22><loc_58><loc_37><loc_59></location>5 Experiments</subtitle-level-1>
<paragraph><location><page_8><loc_22><loc_43><loc_79><loc_56></location>To evaluate the impact of OTSL on prediction accuracy and inference times, we conducted a series of experiments based on the TableFormer model (Figure 4) with two objectives: Firstly we evaluate the prediction quality and performance of OTSL vs. HTML after performing Hyper Parameter Optimization (HPO) on the canonical PubTabNet data set. Secondly we pick the best hyper-parameters found in the first step and evaluate how OTSL impacts the performance of TableFormer after training on other publicly available data sets (FinTabNet, PubTables-1M [14]). The ground truth (GT) from all data sets has been converted into OTSL format for this purpose, and will be made publicly available.</paragraph>
<caption><location><page_8><loc_22><loc_36><loc_79><loc_39></location>Fig. 4. Architecture sketch of the TableFormer model, which is a representative for the Im2Seq approach.</caption>
<figure>
@ -74,7 +76,7 @@
</figure>
<paragraph><location><page_8><loc_22><loc_16><loc_79><loc_22></location>We rely on standard metrics such as Tree Edit Distance score (TEDs) for table structure prediction, and Mean Average Precision (mAP) with 0.75 Intersection Over Union (IOU) threshold for the bounding-box predictions of table cells. The predicted OTSL structures were converted back to HTML format in</paragraph>
<paragraph><location><page_9><loc_22><loc_81><loc_79><loc_85></location>order to compute the TED score. Inference timing results for all experiments were obtained from the same machine on a single core with AMD EPYC 7763 CPU @2.45 GHz.</paragraph>
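The 0.75 IOU threshold above means a predicted cell box only counts as a match when its intersection area covers at least three quarters of the union of the two boxes. A minimal helper for illustration, assuming (x0, y0, x1, y1) corner coordinates:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Half-overlapping unit squares give IoU = 1/3, below the 0.75 threshold.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)) >= 0.75)  # False
```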
<subtitle-level-1><location><page_9><loc_22><loc_78><loc_52><loc_79></location>5.1 Hyper Parameter Optimization</subtitle-level-1>
<paragraph><location><page_9><loc_22><loc_68><loc_79><loc_77></location>We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. Also we report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table 1. It is evident that with OTSL, our model achieves the same TED score and slightly better mAP scores in comparison to HTML. However OTSL yields a 2x speed up in the inference runtime over HTML.</paragraph>
<caption><location><page_9><loc_22><loc_59><loc_79><loc_65></location>Table 1. HPO performed in OTSL and HTML representation on the same transformer-based TableFormer [9] architecture, trained only on PubTabNet [22]. Effects of reducing the # of layers in encoder and decoder stages of the model show that smaller models trained on OTSL perform better, especially in recognizing complex table structures, and maintain a much higher mAP score than the HTML counterpart.</caption>
<table>
@ -91,7 +93,7 @@
<subtitle-level-1><location><page_9><loc_22><loc_35><loc_43><loc_36></location>5.2 Quantitative Results</subtitle-level-1>
<paragraph><location><page_9><loc_22><loc_22><loc_79><loc_34></location>We picked the model parameter configuration that produced the best prediction quality (enc=6, dec=6, heads=8) with PubTabNet alone, then independently trained and evaluated it on three publicly available data sets: PubTabNet (395k samples), FinTabNet (113k samples) and PubTables-1M (about 1M samples). Performance results are presented in Table 2. It is clearly evident that the model trained on OTSL outperforms HTML across the board, keeping high TEDs and mAP scores even on difficult financial tables (FinTabNet) that contain sparse and large tables.</paragraph>
<paragraph><location><page_9><loc_22><loc_16><loc_79><loc_22></location>Additionally, the results show that OTSL has an advantage over HTML when applied on a bigger data set like PubTables-1M and achieves significantly improved scores. Finally, OTSL achieves faster inference due to fewer decoding steps which is a result of the reduced sequence representation.</paragraph>
<caption><location><page_10><loc_22><loc_82><loc_79><loc_85></location>Table 2. TSR and cell detection results compared between OTSL and HTML on the PubTabNet [22], FinTabNet [21] and PubTables-1M [14] data sets using TableFormer [9] (with enc=6, dec=6, heads=8).</caption>
<table>
<location><page_10><loc_23><loc_67><loc_77><loc_80></location>
<caption>Table 2. TSR and cell detection results compared between OTSL and HTML on the PubTabNet [22], FinTabNet [21] and PubTables-1M [14] data sets using TableFormer [9] (with enc=6, dec=6, heads=8).</caption>
@ -113,18 +115,18 @@
</figure>
<paragraph><location><page_10><loc_37><loc_15><loc_38><loc_16></location>μ</paragraph>
<paragraph><location><page_10><loc_49><loc_12><loc_49><loc_14></location>≥</paragraph>
<caption><location><page_11><loc_22><loc_78><loc_79><loc_84></location>Fig. 6. Visualization of predicted structure and detected bounding boxes on a complex table with many rows. The OTSL model (B) captured repeating pattern of horizontally merged cells from the GT (A), unlike the HTML model (C). The HTML model also didn't complete the HTML sequence correctly and displayed a lot more of drift and overlap of bounding boxes. "PMC5406406_003_01.png" PubTabNet.</caption>
<figure>
<location><page_11><loc_28><loc_20><loc_73><loc_77></location>
<caption>Fig. 6. Visualization of predicted structure and detected bounding boxes on a complex table with many rows. The OTSL model (B) captured repeating pattern of horizontally merged cells from the GT (A), unlike the HTML model (C). The HTML model also didn't complete the HTML sequence correctly and displayed a lot more of drift and overlap of bounding boxes. "PMC5406406_003_01.png" PubTabNet.</caption>
</figure>
<subtitle-level-1><location><page_12><loc_22><loc_84><loc_36><loc_85></location>6 Conclusion</subtitle-level-1>
<paragraph><location><page_12><loc_22><loc_74><loc_79><loc_81></location>We demonstrated that representing tables in HTML for the task of table structure recognition with Im2Seq models is ill-suited and has serious limitations. Furthermore, we presented in this paper an Optimized Table Structure Language (OTSL) which, when compared to commonly used general purpose languages, has several key benefits.</paragraph>
<paragraph><location><page_12><loc_22><loc_59><loc_79><loc_74></location>First and foremost, given the same network configuration, inference time for a table-structure prediction is about 2 times faster compared to the conventional HTML approach. This is primarily owed to the shorter sequence length of the OTSL representation. Additional performance benefits can be obtained with HPO (hyper parameter optimization). As we demonstrate in our experiments, models trained on OTSL can be significantly smaller, e.g. by reducing the number of encoder and decoder layers, while preserving comparatively good prediction quality. This can further improve inference performance, yielding 5-6 times faster inference speed in OTSL with prediction quality comparable to models trained on HTML (see Table 1).</paragraph>
<paragraph><location><page_12><loc_22><loc_41><loc_79><loc_59></location>Secondly, OTSL has more inherent structure and a significantly restricted vocabulary size. This allows autoregressive models to perform better in the TED metric, but especially with regards to prediction accuracy of the table-cell bounding boxes (see Table 2). As shown in Figure 5, we observe that the OTSL drastically reduces the drift for table cell bounding boxes at high row count and in sparse tables. This leads to more accurate predictions and a significant reduction in post-processing complexity, which is an undesired necessity in HTML-based Im2Seq models. Significant novelty lies in OTSL syntactical rules, which are few, simple and always backwards looking. Each new token can be validated only by analyzing the sequence of previous tokens, without requiring the entire sequence to detect mistakes. This in return allows to perform structural error detection and correction on-the-fly during sequence generation.</paragraph>
<subtitle-level-1><location><page_12><loc_22><loc_36><loc_32><loc_38></location>References</subtitle-level-1>
<paragraph><location><page_12><loc_23><loc_29><loc_79><loc_34></location>- 1. Auer, C., Dolfi, M., Carvalho, A., Ramis, C.B., Staar, P.W.J.: Delivering document conversion as a cloud service with high throughput and responsiveness. CoRR abs/2206.00785 (2022). https://doi.org/10.48550/arXiv.2206.00785 , https://doi.org/10.48550/arXiv.2206.00785</paragraph>
<paragraph><location><page_12><loc_23><loc_23><loc_79><loc_28></location>- 2. Chen, B., Peng, D., Zhang, J., Ren, Y., Jin, L.: Complex table structure recognition in the wild using transformer and identity matrix-based augmentation. In: Porwal, U., Fornés, A., Shafait, F. (eds.) Frontiers in Handwriting Recognition. pp. 545-561. Springer International Publishing, Cham (2022)</paragraph>
<paragraph><location><page_12><loc_23><loc_20><loc_79><loc_23></location>- 3. Chi, Z., Huang, H., Xu, H.D., Yu, H., Yin, W., Mao, X.L.: Complicated table structure recognition. arXiv preprint arXiv:1908.04729 (2019)</paragraph>
<paragraph><location><page_12><loc_23><loc_16><loc_79><loc_20></location>- 4. Deng, Y., Rosenberg, D., Mann, G.: Challenges in end-to-end neural scientific table recognition. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 894-901. IEEE (2019)</paragraph>
<paragraph><location><page_13><loc_23><loc_81><loc_79><loc_85></location>- 5. Kayal, P., Anand, M., Desai, H., Singh, M.: Tables to latex: structure and content extraction from scientific tables. International Journal on Document Analysis and Recognition (IJDAR) pp. 1-10 (2022)</paragraph>
@ -136,14 +138,14 @@
<paragraph><location><page_13><loc_22><loc_48><loc_79><loc_53></location>- 11. Prasad, D., Gadpal, A., Kapadni, K., Visave, M., Sultanpure, K.: Cascadetabnet: An approach for end to end table detection and structure recognition from image-based documents. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops. pp. 572-573 (2020)</paragraph>
<paragraph><location><page_13><loc_22><loc_42><loc_79><loc_48></location>- 12. Schreiber, S., Agne, S., Wolf, I., Dengel, A., Ahmed, S.: Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In: 2017 14th IAPR international conference on document analysis and recognition (ICDAR). vol. 1, pp. 1162-1167. IEEE (2017)</paragraph>
<paragraph><location><page_13><loc_22><loc_37><loc_79><loc_42></location>- 13. Siddiqui, S.A., Fateh, I.A., Rizvi, S.T.R., Dengel, A., Ahmed, S.: Deeptabstr: Deep learning based table structure recognition. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1403-1409 (2019). https://doi.org/10.1109/ICDAR.2019.00226</paragraph>
<paragraph><location><page_13><loc_22><loc_31><loc_79><loc_36></location>- 14. Smock, B., Pesala, R., Abraham, R.: PubTables-1M: Towards comprehensive table extraction from unstructured documents. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4634-4642 (June 2022)</paragraph>
<paragraph><location><page_13><loc_22><loc_23><loc_79><loc_31></location>- 15. Staar, P.W.J., Dolfi, M., Auer, C., Bekas, C.: Corpus conversion service: A machine learning platform to ingest documents at scale. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp. 774-782. KDD '18, Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3219819.3219834 , https://doi.org/10. 1145/3219819.3219834</paragraph>
<paragraph><location><page_13><loc_22><loc_20><loc_79><loc_23></location>- 16. Wang, X.: Tabular Abstraction, Editing, and Formatting. Ph.D. thesis, CAN (1996), aAINN09397</paragraph>
<paragraph><location><page_13><loc_22><loc_16><loc_79><loc_20></location>- 17. Xue, W., Li, Q., Tao, D.: Res2tim: Reconstruct syntactic structures from table images. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 749-755. IEEE (2019)</paragraph>
<paragraph><location><page_14><loc_22><loc_81><loc_79><loc_85></location>- 18. Xue, W., Yu, B., Wang, W., Tao, D., Li, Q.: Tgrnet: A table graph reconstruction network for table structure recognition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1295-1304 (2021)</paragraph>
<paragraph><location><page_14><loc_22><loc_76><loc_79><loc_81></location>- 19. Ye, J., Qi, X., He, Y., Chen, Y., Gu, D., Gao, P., Xiao, R.: Pingan-vcgroup's solution for icdar 2021 competition on scientific literature parsing task b: Table recognition to html (2021). https://doi.org/10.48550/ARXIV.2105.01848 , https://arxiv.org/abs/2105.01848</paragraph>
<paragraph><location><page_14><loc_22><loc_73><loc_79><loc_75></location>- 20. Zhang, Z., Zhang, J., Du, J., Wang, F.: Split, embed and merge: An accurate table structure recognizer. Pattern Recognition 126 , 108565 (2022)</paragraph>
<paragraph><location><page_14><loc_22><loc_66><loc_79><loc_72></location>- 21. Zheng, X., Burdick, D., Popa, L., Zhong, X., Wang, N.X.R.: Global table extractor (gte): A framework for joint table identification and cell structure recognition using visual context. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 697-706 (2021). https://doi.org/10.1109/WACV48630.2021. 00074</paragraph>
<paragraph><location><page_14><loc_22><loc_60><loc_79><loc_66></location>- 22. Zhong, X., ShafieiBavani, E., Jimeno Yepes, A.: Image-based table recognition: Data, model, and evaluation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision - ECCV 2020. pp. 564-580. Springer International Publishing, Cham (2020)</paragraph>
<paragraph><location><page_14><loc_22><loc_56><loc_79><loc_60></location>- 23. Zhong, X., Tang, J., Yepes, A.J.: Publaynet: largest dataset ever for document layout analysis. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1015-1022. IEEE (2019)</paragraph>
</document>

File diff suppressed because one or more lines are too long

View File

@@ -1,8 +1,12 @@
## Optimized Table Tokenization for Table Structure Recognition
Maksym Lysak [0000-0002-3723-6960], Ahmed Nassar [0000-0002-9468-0822], Nikolaos Livathinos [0000-0001-8513-3491], Christoph Auer [0000-0001-5761-0422], and Peter Staar [0000-0002-8088-0823]
IBM Research
{mly,ahn,nli,cau,taa}@zurich.ibm.com
Abstract. Extracting tables from documents is a crucial task in any document conversion pipeline. Recently, transformer-based models have demonstrated that table-structure can be recognized with impressive accuracy using Image-to-Markup-Sequence (Im2Seq) approaches. Taking only the image of a table, such models predict a sequence of tokens (e.g. in HTML, LaTeX) which represent the structure of the table. Since the token representation of the table structure has a significant impact on the accuracy and run-time performance of any Im2Seq model, we investigate in this paper how table-structure representation can be optimised. We propose a new, optimised table-structure language (OTSL) with a minimized vocabulary and specific rules. The benefits of OTSL are that it reduces the number of tokens to 5 (HTML needs 28+) and shortens the sequence length to half of HTML on average. Consequently, model accuracy improves significantly, inference time is halved compared to HTML-based models, and the predicted table structures are always syntactically correct. This in turn eliminates most post-processing needs. Popular table structure data-sets will be published in OTSL format to the community.

File diff suppressed because one or more lines are too long

View File

@@ -3,66 +3,21 @@
<figure>
<location><page_1><loc_84><loc_93><loc_96><loc_97></location>
</figure>
<subtitle-level-1><location><page_1><loc_6><loc_79><loc_96><loc_89></location>Row and Column Access Control Support in IBM DB2 for i</subtitle-level-1>
<figure>
<location><page_1><loc_5><loc_11><loc_96><loc_63></location>
</figure>
<figure>
<location><page_1><loc_52><loc_2><loc_95><loc_10></location>
</figure>
<subtitle-level-1><location><page_2><loc_11><loc_88><loc_28><loc_91></location>Contents</subtitle-level-1>
<table>
<location><page_2><loc_22><loc_10><loc_90><loc_83></location>
<row_0><col_0><body>Notices</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii</col_1></row_0>
<row_1><col_0><body>Trademarks</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii</col_1></row_1>
<row_2><col_0><body>DB2 for i Center of Excellence</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix</col_1></row_2>
<row_3><col_0><body>Preface</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi</col_1></row_3>
<row_4><col_0><body>Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi</col_0><col_1><body></col_1></row_4>
<row_5><col_0><body>Now you can become a published author, too!</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii</col_1></row_5>
<row_6><col_0><body>Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>xiii</col_1></row_6>
<row_7><col_0><body>Stay connected to IBM Redbooks</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv</col_1></row_7>
<row_8><col_0><body>Chapter 1. Securing and protecting IBM DB2 data . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>1</col_1></row_8>
<row_9><col_0><body>1.1 Security fundamentals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2</col_0><col_1><body></col_1></row_9>
<row_10><col_0><body>1.2 Current state of IBM i security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>2</col_1></row_10>
<row_11><col_0><body>1.3 DB2 for i security controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3</col_0><col_1><body></col_1></row_11>
<row_12><col_0><body>1.3.1 Existing row and column control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>4</col_1></row_12>
<row_13><col_0><body>1.3.2 New controls: Row and Column Access Control. . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>5</col_1></row_13>
<row_14><col_0><body>Chapter 2. Roles and separation of duties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>7</col_1></row_14>
<row_15><col_0><body>2.1 Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>8</col_1></row_15>
<row_16><col_0><body>2.1.1 DDM and DRDA application server access: QIBM_DB_DDMDRDA . . . . . . . . . . .</col_0><col_1><body>8</col_1></row_16>
<row_17><col_0><body>2.1.2 Toolbox application server access: QIBM_DB_ZDA. . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>8</col_1></row_17>
<row_18><col_0><body>2.1.3 Database Administrator function: QIBM_DB_SQLADM . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>9</col_1></row_18>
<row_19><col_0><body>2.1.4 Database Information function: QIBM_DB_SYSMON</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . 9</col_1></row_19>
<row_20><col_0><body>2.1.5 Security Administrator function: QIBM_DB_SECADM . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>9</col_1></row_20>
<row_21><col_0><body>2.1.6 Change Function Usage CL command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>10</col_1></row_21>
<row_22><col_0><body>2.1.7 Verifying function usage IDs for RCAC with the FUNCTION_USAGE view . . . . .</col_0><col_1><body>10</col_1></row_22>
<row_23><col_0><body>2.2 Separation of duties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10</col_0><col_1><body></col_1></row_23>
<row_24><col_0><body>Chapter 3. Row and Column Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>13</col_1></row_24>
<row_25><col_0><body>3.1 Explanation of RCAC and the concept of access control . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>14</col_1></row_25>
<row_26><col_0><body>3.1.1 Row permission and column mask definitions</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . 14</col_1></row_26>
<row_27><col_0><body>3.1.2 Enabling and activating RCAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>16</col_1></row_27>
<row_28><col_0><body>3.2 Special registers and built-in global variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>18</col_1></row_28>
<row_29><col_0><body>3.2.1 Special registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>18</col_1></row_29>
<row_30><col_0><body>3.2.2 Built-in global variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>19</col_1></row_30>
<row_31><col_0><body>3.3 VERIFY_GROUP_FOR_USER function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>20</col_1></row_31>
<row_32><col_0><body>3.4 Establishing and controlling accessibility by using the RCAC rule text . . . . . . . . . . . . .</col_0><col_1><body>21</col_1></row_32>
<row_33><col_0><body></col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . 22</col_1></row_33>
<row_34><col_0><body>3.5 SELECT, INSERT, and UPDATE behavior with RCAC</col_0><col_1><body></col_1></row_34>
<row_35><col_0><body>3.6.1 Assigning the QIBM_DB_SECADM function ID to the consultants. . . . . . . . . . . .</col_0><col_1><body>23</col_1></row_35>
<row_36><col_0><body>3.6.2 Creating group profiles for the users and their roles . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>23</col_1></row_36>
<row_37><col_0><body>3.6.3 Demonstrating data access without RCAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>24</col_1></row_37>
<row_38><col_0><body>3.6.4 Defining and creating row permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>25</col_1></row_38>
<row_39><col_0><body>3.6.5 Defining and creating column masks</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26</col_1></row_39>
<row_40><col_0><body>3.6.6 Activating RCAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>28</col_1></row_40>
<row_41><col_0><body>3.6.7 Demonstrating data access with RCAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>29</col_1></row_41>
<row_42><col_0><body>3.6.8 Demonstrating data access with a view and RCAC . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>32</col_1></row_42>
</table>
<paragraph><location><page_3><loc_11><loc_89><loc_39><loc_91></location>DB2 for i Center of Excellence</paragraph>
<paragraph><location><page_3><loc_15><loc_80><loc_38><loc_83></location>Solution Brief IBM Systems Lab Services and Training</paragraph>
<figure>
<location><page_3><loc_23><loc_64><loc_29><loc_66></location>
</figure>
<subtitle-level-1><location><page_3><loc_24><loc_57><loc_31><loc_59></location>Highlights</subtitle-level-1>
<paragraph><location><page_3><loc_24><loc_55><loc_40><loc_56></location>- Enhance the performance of your database operations</paragraph>
<paragraph><location><page_3><loc_24><loc_51><loc_42><loc_54></location>- Earn greater return on IT projects through modernization of database and applications</paragraph>
<paragraph><location><page_3><loc_24><loc_48><loc_41><loc_50></location>- Rely on IBM expert consulting, skills sharing and renown services</paragraph>
<paragraph><location><page_3><loc_24><loc_45><loc_38><loc_47></location>- Take advantage of access to a worldwide source of expertise</paragraph>
@@ -79,14 +34,14 @@
<subtitle-level-1><location><page_3><loc_46><loc_44><loc_71><loc_45></location>Who we are, some of what we do</subtitle-level-1>
<paragraph><location><page_3><loc_46><loc_42><loc_71><loc_43></location>Global CoE engagements cover topics including:</paragraph>
<paragraph><location><page_3><loc_46><loc_40><loc_66><loc_41></location>- Database performance and scalability</paragraph>
<paragraph><location><page_3><loc_46><loc_39><loc_69><loc_39></location>- Advanced SQL knowledge and skills transfer</paragraph>
<paragraph><location><page_3><loc_46><loc_37><loc_64><loc_38></location>- Business intelligence and analytics</paragraph>
<paragraph><location><page_3><loc_46><loc_36><loc_56><loc_37></location>- DB2 Web Query</paragraph>
<paragraph><location><page_3><loc_46><loc_35><loc_82><loc_36></location>- Query/400 modernization for better reporting and analysis capabilities</paragraph>
<paragraph><location><page_3><loc_46><loc_33><loc_69><loc_34></location>- Database modernization and re-engineering</paragraph>
<paragraph><location><page_3><loc_46><loc_32><loc_65><loc_33></location>- Data-centric architecture and design</paragraph>
<paragraph><location><page_3><loc_46><loc_31><loc_76><loc_32></location>- Extremely large database and overcoming limits to growth</paragraph>
<paragraph><location><page_3><loc_46><loc_30><loc_62><loc_30></location>- ISV education and enablement</paragraph>
<subtitle-level-1><location><page_4><loc_11><loc_88><loc_25><loc_91></location>Preface</subtitle-level-1>
<paragraph><location><page_4><loc_22><loc_75><loc_89><loc_83></location>This IBM® Redpaper™ publication provides information about the IBM i 7.2 feature of IBM DB2® for i Row and Column Access Control (RCAC). It offers a broad description of the function and advantages of controlling access to data in a comprehensive and transparent way. This publication helps you understand the capabilities of RCAC and provides examples of defining, creating, and implementing the row permissions and column masks in a relational database environment.</paragraph>
<paragraph><location><page_4><loc_22><loc_67><loc_89><loc_73></location>This paper is intended for database engineers, data-centric application developers, and security officers who want to design and implement RCAC as a part of their data control and governance policy. A solid background in IBM i object level security, DB2 for i relational database concepts, and SQL is assumed.</paragraph>
@@ -98,8 +53,8 @@
<location><page_4><loc_24><loc_20><loc_41><loc_33></location>
</figure>
<paragraph><location><page_4><loc_43><loc_35><loc_88><loc_53></location>Jim Bainbridge is a senior DB2 consultant on the DB2 for i Center of Excellence team in the IBM Lab Services and Training organization. His primary role is training and implementation services for IBM DB2 Web Query for i and business analytics. Jim began his career with IBM 30 years ago in the IBM Rochester Development Lab, where he developed cooperative processing products that paired IBM PCs with IBM S/36 and AS/400 systems. In the years since, Jim has held numerous technical roles, including independent software vendors technical support on a broad range of IBM technologies and products, and supporting customers in the IBM Executive Briefing Center and IBM Project Office.</paragraph>
<paragraph><location><page_4><loc_43><loc_14><loc_88><loc_33></location>Hernando Bedoya is a Senior IT Specialist at STG Lab Services and Training in Rochester, Minnesota. He writes extensively and teaches IBM classes worldwide in all areas of DB2 for i. Before joining STG Lab Services, he worked in the ITSO for nine years writing multiple IBM Redbooks® publications. He also worked for IBM Colombia as an IBM AS/400® IT Specialist doing presales support for the Andean countries. He has 28 years of experience in the computing field and has taught database classes in Colombian universities. He holds a Master's degree in Computer Science from EAFIT, Colombia. His areas of expertise are database technology, performance, and data warehousing. Hernando can be contacted at hbedoya@us.ibm.com.</paragraph>
<subtitle-level-1><location><page_4><loc_11><loc_62><loc_20><loc_64></location>Authors</subtitle-level-1>
<figure>
<location><page_5><loc_5><loc_70><loc_39><loc_91></location>
</figure>
@@ -117,7 +72,7 @@
<paragraph><location><page_6><loc_22><loc_77><loc_89><loc_83></location>- First, and most important, is the definition of a company's security policy. Without a security policy, there is no definition of what are acceptable practices for using, accessing, and storing information by who, what, when, where, and how. A security policy should minimally address three things: confidentiality, integrity, and availability.</paragraph>
<paragraph><location><page_6><loc_25><loc_66><loc_89><loc_76></location>- The monitoring and assessment of adherence to the security policy determines whether your security strategy is working. Often, IBM security consultants are asked to perform security assessments for companies without regard to the security policy. Although these assessments can be useful for observing how the system is defined and how data is being accessed, they cannot determine the level of security without a security policy. Without a security policy, it really is not an assessment as much as it is a baseline for monitoring the changes in the security settings that are captured.</paragraph>
<paragraph><location><page_6><loc_25><loc_64><loc_89><loc_65></location>A security policy is what defines whether the system and its settings are secure (or not).</paragraph>
<paragraph><location><page_6><loc_22><loc_53><loc_89><loc_63></location>- The second fundamental in securing data assets is the use of resource security. If implemented properly, resource security prevents data breaches from both internal and external intrusions. Resource security controls are closely tied to the part of the security policy that defines who should have access to what information resources. A hacker might be good enough to get through your company firewalls and sift his way through to your system, but if they do not have explicit access to your database, the hacker cannot compromise your information assets.</paragraph>
<paragraph><location><page_6><loc_22><loc_48><loc_87><loc_51></location>With your eyes now open to the importance of securing information assets, the rest of this chapter reviews the methods that are available for securing database resources on IBM i.</paragraph>
<subtitle-level-1><location><page_6><loc_11><loc_43><loc_53><loc_45></location>1.2 Current state of IBM i security</subtitle-level-1>
<paragraph><location><page_6><loc_22><loc_35><loc_89><loc_41></location>Because of the inherently secure nature of IBM i, many clients rely on the default system settings to protect their business data that is stored in DB2 for i. In most cases, this means no data protection because the default setting for the Create default public authority (QCRTAUT) system value is *CHANGE.</paragraph>
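A quick way to check this setting is the standard DSPSYSVAL CL command; a minimal sketch (the command and its SYSVAL parameter exist on IBM i, the exact output layout varies by release):
DSPSYSVAL SYSVAL(QCRTAUT)
A reported value of *CHANGE means that *PUBLIC can both read and modify data in objects whose public authority is derived from this system value.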
@@ -133,16 +88,16 @@
<location><page_7><loc_22><loc_13><loc_89><loc_53></location>
<caption>Figure 1-2 Existing row and column controls</caption>
</figure>
<subtitle-level-1><location><page_8><loc_11><loc_89><loc_55><loc_91></location>2.1.6 Change Function Usage CL command</subtitle-level-1>
<paragraph><location><page_8><loc_22><loc_87><loc_89><loc_88></location>The following CL commands can be used to work with, display, or change function usage IDs:</paragraph>
<paragraph><location><page_8><loc_22><loc_84><loc_49><loc_86></location>- Work Function Usage (WRKFCNUSG)</paragraph>
<paragraph><location><page_8><loc_22><loc_83><loc_51><loc_84></location>- Change Function Usage (CHGFCNUSG)</paragraph>
<paragraph><location><page_8><loc_22><loc_81><loc_51><loc_83></location>- Display Function Usage (DSPFCNUSG)</paragraph>
<paragraph><location><page_8><loc_22><loc_77><loc_84><loc_80></location>For example, the following CHGFCNUSG command shows granting authorization to user HBEDOYA to administer and manage RCAC rules:</paragraph>
<paragraph><location><page_8><loc_22><loc_75><loc_72><loc_76></location>CHGFCNUSG FCNID(QIBM_DB_SECADM) USER(HBEDOYA) USAGE(*ALLOWED)</paragraph>
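To verify the resulting setting, the display counterpart listed above can be used; a hedged sketch:
DSPFCNUSG FCNID(QIBM_DB_SECADM)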
<subtitle-level-1><location><page_8><loc_11><loc_71><loc_89><loc_72></location>2.1.7 Verifying function usage IDs for RCAC with the FUNCTION_USAGE view</subtitle-level-1>
<paragraph><location><page_8><loc_22><loc_66><loc_85><loc_69></location>The FUNCTION_USAGE view contains function usage configuration details. Table 2-1 describes the columns in the FUNCTION_USAGE view.</paragraph>
<caption><location><page_8><loc_22><loc_64><loc_46><loc_65></location>Table 2-1 FUNCTION_USAGE view</caption>
<table>
<location><page_8><loc_22><loc_44><loc_89><loc_63></location>
<caption>Table 2-1 FUNCTION_USAGE view</caption>
@@ -153,9 +108,19 @@
<row_4><col_0><body>USER_TYPE</col_0><col_1><body>VARCHAR(5)</col_1><col_2><body>Type of user profile: • USER: The user profile is a user. • GROUP: The user profile is a group.</col_2></row_4>
</table>
<paragraph><location><page_8><loc_22><loc_40><loc_89><loc_43></location>To discover who has authorization to define and manage RCAC, you can use the query that is shown in Example 2-1.</paragraph>
<paragraph><location><page_8><loc_22><loc_38><loc_76><loc_39></location>Example 2-1 Query to determine who has authority to define and manage RCAC</paragraph>
<paragraph><location><page_8><loc_22><loc_26><loc_54><loc_36></location>SELECT function_id, user_name, usage, user_type
FROM function_usage
WHERE function_id = 'QIBM_DB_SECADM'
ORDER BY user_name;</paragraph>
<subtitle-level-1><location><page_8><loc_11><loc_20><loc_41><loc_22></location>2.2 Separation of duties</subtitle-level-1>
<paragraph><location><page_8><loc_22><loc_10><loc_89><loc_18></location>Separation of duties helps businesses comply with industry regulations or organizational requirements and simplifies the management of authorities. Separation of duties is commonly used to prevent fraudulent activities or errors by a single person. It provides the ability for administrative functions to be divided across individuals without overlapping responsibilities, so that one user does not possess unlimited authority, such as with the *ALLOBJ authority.</paragraph>
<paragraph><location><page_9><loc_22><loc_82><loc_89><loc_91></location>For example, assume that a business has assigned the duty to manage security on IBM i to Theresa. Before release IBM i 7.2, to grant privileges, Theresa had to have the same privileges Theresa was granting to others. Therefore, to grant *USE privileges to the PAYROLL table, Theresa had to have *OBJMGT and *USE authority (or a higher level of authority, such as *ALLOBJ). This requirement allowed Theresa to access the data in the PAYROLL table even though Theresa's job description was only to manage its security.</paragraph>
<paragraph><location><page_9><loc_22><loc_75><loc_89><loc_81></location>In IBM i 7.2, the QIBM_DB_SECADM function usage grants authorities, revokes authorities, changes ownership, or changes the primary group without giving access to the object or, in the case of a database table, to the data that is in the table or allowing other operations on the table.</paragraph>
@@ -163,7 +128,7 @@
<paragraph><location><page_9><loc_22><loc_65><loc_89><loc_69></location>QIBM_DB_SECADM also is responsible for administering RCAC, which restricts which rows a user is allowed to access in a table and whether a user is allowed to see information in certain columns of a table.</paragraph>
<paragraph><location><page_9><loc_22><loc_57><loc_88><loc_63></location>A preferred practice is that the RCAC administrator has the QIBM_DB_SECADM function usage ID, but absolutely no other data privileges. The result is that the RCAC administrator can deploy and maintain the RCAC constructs, but cannot grant themselves unauthorized access to data itself.</paragraph>
<paragraph><location><page_9><loc_22><loc_53><loc_89><loc_56></location>Table 2-2 shows a comparison of the different function usage IDs and *JOBCTL authority to the different CL commands and DB2 for i tools.</paragraph>
<caption><location><page_9><loc_11><loc_51><loc_64><loc_52></location>Table 2-2 Comparison of the different function usage IDs and *JOBCTL authority</caption>
<table>
<location><page_9><loc_11><loc_9><loc_89><loc_50></location>
<caption>Table 2-2 Comparison of the different function usage IDs and *JOBCTL authority</caption>
@@ -187,7 +152,7 @@
<location><page_10><loc_22><loc_48><loc_89><loc_86></location>
<caption>The SQL CREATE PERMISSION statement that is shown in Figure 3-1 is used to define and initially enable or disable the row access rules. Figure 3-1 CREATE PERMISSION SQL statement</caption>
</figure>
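Because Figure 3-1 is an image, its statement text does not survive text extraction; the following is a hedged sketch of a row permission of that shape, with illustrative object and group names (HR_SCHEMA.EMPLOYEES and the 'HR' group are carried over from the examples later in this chapter):
CREATE PERMISSION HR_SCHEMA.PERMISSION1_ON_EMPLOYEES
ON HR_SCHEMA.EMPLOYEES AS EMPLOYEES
FOR ROWS
WHERE VERIFY_GROUP_FOR_USER(SESSION_USER, 'HR') = 1
ENFORCED FOR ALL ACCESS
ENABLE;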
<subtitle-level-1><location><page_10><loc_22><loc_43><loc_35><loc_44></location>Column mask</subtitle-level-1>
<paragraph><location><page_10><loc_22><loc_37><loc_89><loc_43></location>A column mask is a database object that manifests a column value access control rule for a specific column in a specific table. It uses a CASE expression that describes what you see when you access the column. For example, a teller can see only the last four digits of a tax identification number.</paragraph>
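A minimal sketch of that CASE shape for the teller example, with an illustrative CUSTOMERS.TAX_ID column and TELLER group profile (QSYS2.SUBSTR is used the same way in Example 3-9 later in this chapter):
CASE
WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'TELLER') = 1
THEN 'XXX-XX-' CONCAT QSYS2.SUBSTR(CUSTOMERS.TAX_ID, 8, 4)
ELSE CUSTOMERS.TAX_ID
END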
<paragraph><location><page_11><loc_22><loc_90><loc_67><loc_91></location>Table 3-1 summarizes these special registers and their values.</paragraph>
<caption><location><page_11><loc_22><loc_87><loc_61><loc_88></location>Table 3-1 Special registers and their corresponding values</caption>
@@ -210,9 +175,9 @@
<location><page_11><loc_22><loc_25><loc_49><loc_51></location>
<caption>Figure 3-5 Special registers and adopted authority</caption>
</figure>
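The registers can be inspected with an ordinary query; a hedged one-liner against the standard SYSIBM.SYSDUMMY1 dummy table:
SELECT USER, SESSION_USER, CURRENT_USER FROM SYSIBM.SYSDUMMY1;
Under adopted authority, CURRENT_USER is expected to reflect the adopting profile while SESSION_USER still identifies the job user, which is why the choice of register matters in RCAC rule text.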
<subtitle-level-1><location><page_11><loc_11><loc_20><loc_40><loc_21></location>3.2.2 Built-in global variables</subtitle-level-1>
<paragraph><location><page_11><loc_22><loc_15><loc_85><loc_18></location>Built-in global variables are provided with the database manager and are used in SQL statements to retrieve scalar values that are associated with the variables.</paragraph>
<paragraph><location><page_11><loc_22><loc_9><loc_87><loc_13></location>IBM DB2 for i supports nine different built-in global variables that are read only and maintained by the system. These global variables can be used to identify attributes of the database connection and used as part of the RCAC logic.</paragraph>
<paragraph><location><page_12><loc_22><loc_90><loc_56><loc_91></location>Table 3-2 lists the nine built-in global variables.</paragraph>
<caption><location><page_12><loc_11><loc_87><loc_33><loc_88></location>Table 3-2 Built-in global variables</caption>
<table>
@@ -229,37 +194,41 @@
<row_8><col_0><body>ROUTINE_SPECIFIC_NAME</col_0><col_1><body>VARCHAR(128)</col_1><col_2><body>Name of the currently running routine</col_2></row_8>
<row_9><col_0><body>ROUTINE_TYPE</col_0><col_1><body>CHAR(1)</col_1><col_2><body>Type of the currently running routine</col_2></row_9>
</table>
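Because they are ordinary SQL expressions, these variables can be queried directly or embedded in rule text; a hedged illustration (unqualified names are assumed to resolve through the default SQL path):
SELECT CLIENT_IPADDR, CLIENT_HOST FROM SYSIBM.SYSDUMMY1;
An RCAC predicate could combine such a variable with the group test, for example VERIFY_GROUP_FOR_USER(SESSION_USER, 'HR') = 1 AND CLIENT_IPADDR LIKE '10.%'.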
<subtitle-level-1><location><page_12><loc_11><loc_57><loc_63><loc_59></location>3.3 VERIFY_GROUP_FOR_USER function</subtitle-level-1>
<paragraph><location><page_12><loc_22><loc_45><loc_89><loc_55></location>The VERIFY_GROUP_FOR_USER function was added in IBM i 7.2. Although it is primarily intended for use with RCAC permissions and masks, it can be used in other SQL statements. The first parameter must be one of these three special registers: SESSION_USER, USER, or CURRENT_USER. The second and subsequent parameters are a list of user or group profiles. Each of these values must be 1 - 10 characters in length. These values are not validated for their existence, which means that you can specify the names of user profiles that do not exist without receiving any kind of error.</paragraph>
<paragraph><location><page_12><loc_22><loc_39><loc_89><loc_43></location>If a special register value is in the list of user profiles or it is a member of a group profile included in the list, the function returns a long integer value of 1. Otherwise, it returns a value of 0. It never returns the null value.</paragraph>
<paragraph><location><page_12><loc_22><loc_36><loc_75><loc_38></location>Here is an example of using the VERIFY_GROUP_FOR_USER function:</paragraph>
<paragraph><location><page_12><loc_22><loc_34><loc_66><loc_35></location>- 1. There are user profiles for MGR, JANE, JUDY, and TONY.</paragraph>
<paragraph><location><page_12><loc_22><loc_32><loc_65><loc_33></location>- 2. The user profile JANE specifies a group profile of MGR.</paragraph>
<paragraph><location><page_12><loc_22><loc_28><loc_88><loc_31></location>- 3. If a user is connected to the server using user profile JANE, all of the following function invocations return a value of 1:</paragraph>
<paragraph><location><page_12><loc_25><loc_19><loc_74><loc_27></location>VERIFY_GROUP_FOR_USER (CURRENT_USER, 'MGR')
VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR')
VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR', 'STEVE')
The following function invocation returns a value of 0:
VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JUDY', 'TONY')</paragraph>
<paragraph><location><page_13><loc_22><loc_67><loc_85><loc_91></location>RETURN CASE
WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'HR', 'EMP') = 1 THEN EMPLOYEES.DATE_OF_BIRTH
WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'MGR') = 1 AND SESSION_USER = EMPLOYEES.USER_ID THEN EMPLOYEES.DATE_OF_BIRTH
WHEN VERIFY_GROUP_FOR_USER(SESSION_USER, 'MGR') = 1 AND SESSION_USER <> EMPLOYEES.USER_ID THEN (9999 || '-' || MONTH(EMPLOYEES.DATE_OF_BIRTH) || '-' || DAY(EMPLOYEES.DATE_OF_BIRTH))
ELSE NULL
END ENABLE;</paragraph>
<paragraph><location><page_13><loc_22><loc_63><loc_89><loc_65></location>- 2. The other column to mask in this example is the TAX_ID information. In this example, the rules to enforce include the following ones:</paragraph>
<paragraph><location><page_13><loc_25><loc_60><loc_77><loc_62></location>- Human Resources can see the unmasked TAX_ID of the employees.</paragraph>
<paragraph><location><page_13><loc_25><loc_58><loc_66><loc_59></location>- Employees can see only their own unmasked TAX_ID.</paragraph>
<paragraph><location><page_13><loc_25><loc_55><loc_89><loc_57></location>- Managers see a masked version of TAX_ID with the first five characters replaced with the X character (for example, XXX-XX-1234).</paragraph>
<paragraph><location><page_13><loc_25><loc_52><loc_87><loc_54></location>- Any other person sees the entire TAX_ID as masked, for example, XXX-XX-XXXX.</paragraph>
<paragraph><location><page_13><loc_25><loc_50><loc_87><loc_51></location>To implement this column mask, run the SQL statement that is shown in Example 3-9.</paragraph>
<paragraph><location><page_13><loc_22><loc_48><loc_58><loc_49></location>Example 3-9 Creating a mask on the TAX_ID column</paragraph>
<paragraph><location><page_13><loc_22><loc_13><loc_88><loc_47></location>CREATE MASK HR_SCHEMA.MASK_TAX_ID_ON_EMPLOYEES ON HR_SCHEMA.EMPLOYEES AS EMPLOYEES FOR COLUMN TAX_ID RETURN CASE WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'HR' ) = 1 THEN EMPLOYEES . TAX_ID WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER = EMPLOYEES . USER_ID THEN EMPLOYEES . TAX_ID WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER <> EMPLOYEES . USER_ID THEN ( 'XXX-XX-' CONCAT QSYS2 . SUBSTR ( EMPLOYEES . TAX_ID , 8 , 4 ) ) WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'EMP' ) = 1 THEN EMPLOYEES . TAX_ID ELSE 'XXX-XX-XXXX' END ENABLE ;</paragraph> <paragraph><location><page_13><loc_22><loc_14><loc_86><loc_47></location>CREATE MASK HR_SCHEMA.MASK_TAX_ID_ON_EMPLOYEES ON HR_SCHEMA.EMPLOYEES AS EMPLOYEES FOR COLUMN TAX_ID RETURN CASE WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'HR' ) = 1 THEN EMPLOYEES . TAX_ID WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER = EMPLOYEES . USER_ID THEN EMPLOYEES . TAX_ID WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER <> EMPLOYEES . USER_ID THEN ( 'XXX-XX-' CONCAT QSYS2 . SUBSTR ( EMPLOYEES . TAX_ID , 8 , 4 ) ) WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'EMP' ) = 1 THEN EMPLOYEES . TAX_ID ELSE 'XXX-XX-XXXX' END ENABLE ;</paragraph>
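Read as ordinary control flow, the flattened CASE expression in Example 3-9 reduces to the following Python sketch (illustrative only; the real evaluation happens inside DB2 once the mask is activated, and the `groups` argument stands in for the VERIFY_GROUP_FOR_USER calls):

```python
def mask_tax_id(session_user: str, groups: set[str], row_user_id: str, tax_id: str) -> str:
    """Python rendering of the CASE logic in Example 3-9 (sketch only)."""
    if "HR" in groups:
        return tax_id                    # Human Resources: unmasked
    if "MGR" in groups:
        # SUBSTR(TAX_ID, 8, 4): the last four characters of an XXX-XX-nnnn value.
        return tax_id if session_user == row_user_id else "XXX-XX-" + tax_id[7:11]
    if "EMP" in groups:
        return tax_id                    # EMP group: unmasked (rows are limited by the row permission)
    return "XXX-XX-XXXX"                 # everyone else: fully masked

print(mask_tax_id("STEVE", {"MGR"}, "JANE", "123-45-6789"))  # XXX-XX-6789
```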
<paragraph><location><page_14><loc_22><loc_90><loc_74><loc_91></location>- 3. Figure 3-10 shows the masks that are created in the HR_SCHEMA.</paragraph> <paragraph><location><page_14><loc_22><loc_90><loc_74><loc_91></location>- 3. Figure 3-10 shows the masks that are created in the HR_SCHEMA.</paragraph>
<caption><location><page_14><loc_10><loc_77><loc_48><loc_78></location>Figure 3-10 Column masks shown in System i Navigator</caption> <caption><location><page_14><loc_11><loc_77><loc_48><loc_78></location>Figure 3-10 Column masks shown in System i Navigator</caption>
<figure> <figure>
<location><page_14><loc_10><loc_79><loc_89><loc_88></location> <location><page_14><loc_10><loc_79><loc_89><loc_88></location>
<caption>Figure 3-10 Column masks shown in System i Navigator</caption> <caption>Figure 3-10 Column masks shown in System i Navigator</caption>
</figure> </figure>
<subtitle-level-1><location><page_14><loc_11><loc_73><loc_33><loc_75></location>3.6.6 Activating RCAC</subtitle-level-1> <subtitle-level-1><location><page_14><loc_11><loc_73><loc_33><loc_74></location>3.6.6 Activating RCAC</subtitle-level-1>
<paragraph><location><page_14><loc_22><loc_67><loc_89><loc_71></location>Now that you have created the row permission and the two column masks, RCAC must be activated. The row permission and the two column masks are enabled (last clause in the scripts), but now you must activate RCAC on the table. To do so, complete the following steps:</paragraph> <paragraph><location><page_14><loc_22><loc_67><loc_89><loc_71></location>Now that you have created the row permission and the two column masks, RCAC must be activated. The row permission and the two column masks are enabled (last clause in the scripts), but now you must activate RCAC on the table. To do so, complete the following steps:</paragraph>
<paragraph><location><page_14><loc_22><loc_65><loc_67><loc_66></location>- 1. Run the SQL statements that are shown in Example 3-10.</paragraph> <paragraph><location><page_14><loc_22><loc_65><loc_67><loc_66></location>- 1. Run the SQL statements that are shown in Example 3-10.</paragraph>
<subtitle-level-1><location><page_14><loc_22><loc_62><loc_61><loc_63></location>Example 3-10 Activating RCAC on the EMPLOYEES table</subtitle-level-1> <subtitle-level-1><location><page_14><loc_22><loc_62><loc_61><loc_63></location>Example 3-10 Activating RCAC on the EMPLOYEES table</subtitle-level-1>
<paragraph><location><page_14><loc_22><loc_60><loc_62><loc_61></location>- /* Active Row Access Control (permissions) */</paragraph> <paragraph><location><page_14><loc_22><loc_60><loc_62><loc_61></location>- /* Active Row Access Control (permissions) */</paragraph>
<paragraph><location><page_14><loc_22><loc_54><loc_58><loc_60></location>/* Active Column Access Control (masks) ALTER TABLE HR_SCHEMA.EMPLOYEES ACTIVATE ROW ACCESS CONTROL ACTIVATE COLUMN ACCESS CONTROL;</paragraph> <paragraph><location><page_14><loc_22><loc_58><loc_58><loc_60></location>- /* Active Column Access Control (masks)</paragraph>
<paragraph><location><page_14><loc_60><loc_58><loc_62><loc_60></location>*/</paragraph> <paragraph><location><page_14><loc_60><loc_58><loc_62><loc_60></location>*/</paragraph>
<paragraph><location><page_14><loc_22><loc_57><loc_48><loc_58></location>ALTER TABLE HR_SCHEMA.EMPLOYEES</paragraph>
<paragraph><location><page_14><loc_22><loc_55><loc_44><loc_56></location>ACTIVATE ROW ACCESS CONTROL</paragraph>
<paragraph><location><page_14><loc_22><loc_54><loc_48><loc_55></location>ACTIVATE COLUMN ACCESS CONTROL;</paragraph>
<paragraph><location><page_14><loc_22><loc_48><loc_88><loc_52></location>- 2. Look at the definition of the EMPLOYEE table, as shown in Figure 3-11. To do this, from the main navigation pane of System i Navigator, click Schemas → HR_SCHEMA → Tables , right-click the EMPLOYEES table, and click Definition .</paragraph> <paragraph><location><page_14><loc_22><loc_48><loc_88><loc_52></location>- 2. Look at the definition of the EMPLOYEE table, as shown in Figure 3-11. To do this, from the main navigation pane of System i Navigator, click Schemas → HR_SCHEMA → Tables , right-click the EMPLOYEES table, and click Definition .</paragraph>
<caption><location><page_14><loc_11><loc_17><loc_57><loc_18></location>Figure 3-11 Selecting the EMPLOYEES table from System i Navigator</caption> <caption><location><page_14><loc_11><loc_17><loc_57><loc_18></location>Figure 3-11 Selecting the EMPLOYEES table from System i Navigator</caption>
<figure> <figure>
@@ -267,7 +236,7 @@
<caption>Figure 3-11 Selecting the EMPLOYEES table from System i Navigator</caption> <caption>Figure 3-11 Selecting the EMPLOYEES table from System i Navigator</caption>
</figure> </figure>
<paragraph><location><page_15><loc_22><loc_87><loc_84><loc_91></location>- 2. Figure 4-68 shows the Visual Explain of the same SQL statement, but with RCAC enabled. It is clear that the implementation of the SQL statement is more complex because the row permission rule becomes part of the WHERE clause.</paragraph> <paragraph><location><page_15><loc_22><loc_87><loc_84><loc_91></location>- 2. Figure 4-68 shows the Visual Explain of the same SQL statement, but with RCAC enabled. It is clear that the implementation of the SQL statement is more complex because the row permission rule becomes part of the WHERE clause.</paragraph>
<caption><location><page_15><loc_22><loc_38><loc_54><loc_39></location>Figure 4-68 Visual Explain with RCAC enabled</caption> <caption><location><page_15><loc_22><loc_38><loc_53><loc_39></location>Figure 4-68 Visual Explain with RCAC enabled</caption>
<figure> <figure>
<location><page_15><loc_22><loc_40><loc_89><loc_85></location> <location><page_15><loc_22><loc_40><loc_89><loc_85></location>
<caption>Figure 4-68 Visual Explain with RCAC enabled</caption> <caption>Figure 4-68 Visual Explain with RCAC enabled</caption>
@@ -278,10 +247,10 @@
<location><page_15><loc_11><loc_16><loc_83><loc_30></location> <location><page_15><loc_11><loc_16><loc_83><loc_30></location>
<caption>Figure 4-69 Index advice with no RCAC</caption> <caption>Figure 4-69 Index advice with no RCAC</caption>
</figure> </figure>
<paragraph><location><page_16><loc_10><loc_11><loc_82><loc_91></location>THEN C . CUSTOMER_TAX_ID WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'TELLER' ) = 1 THEN ( 'XXX-XX-' CONCAT QSYS2 . SUBSTR ( C . CUSTOMER_TAX_ID , 8 , 4 ) ) WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C . CUSTOMER_TAX_ID ELSE 'XXX-XX-XXXX' END ENABLE ; CREATE MASK BANK_SCHEMA.MASK_DRIVERS_LICENSE_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C FOR COLUMN CUSTOMER_DRIVERS_LICENSE_NUMBER RETURN CASE WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1 THEN C . CUSTOMER_DRIVERS_LICENSE_NUMBER WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'TELLER' ) = 1 THEN C . CUSTOMER_DRIVERS_LICENSE_NUMBER WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C . CUSTOMER_DRIVERS_LICENSE_NUMBER ELSE '*************' END ENABLE ; CREATE MASK BANK_SCHEMA.MASK_LOGIN_ID_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C FOR COLUMN CUSTOMER_LOGIN_ID RETURN CASE WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1 THEN C . CUSTOMER_LOGIN_ID WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C . CUSTOMER_LOGIN_ID ELSE '*****' END ENABLE ; CREATE MASK BANK_SCHEMA.MASK_SECURITY_QUESTION_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C FOR COLUMN CUSTOMER_SECURITY_QUESTION RETURN CASE WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1 THEN C . CUSTOMER_SECURITY_QUESTION WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C . CUSTOMER_SECURITY_QUESTION ELSE '*****' END ENABLE ; CREATE MASK BANK_SCHEMA.MASK_SECURITY_QUESTION_ANSWER_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C FOR COLUMN CUSTOMER_SECURITY_QUESTION_ANSWER RETURN CASE WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1 THEN C . CUSTOMER_SECURITY_QUESTION_ANSWER WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C . CUSTOMER_SECURITY_QUESTION_ANSWER ELSE '*****' END ENABLE ; ALTER TABLE BANK_SCHEMA.CUSTOMERS ACTIVATE ROW ACCESS CONTROL ACTIVATE COLUMN ACCESS CONTROL ;</paragraph> <paragraph><location><page_16><loc_11><loc_11><loc_82><loc_91></location>THEN C . CUSTOMER_TAX_ID WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'TELLER' ) = 1 THEN ( 'XXX-XX-' CONCAT QSYS2 . SUBSTR ( C . CUSTOMER_TAX_ID , 8 , 4 ) ) WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C . CUSTOMER_TAX_ID ELSE 'XXX-XX-XXXX' END ENABLE ; CREATE MASK BANK_SCHEMA.MASK_DRIVERS_LICENSE_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C FOR COLUMN CUSTOMER_DRIVERS_LICENSE_NUMBER RETURN CASE WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1 THEN C . CUSTOMER_DRIVERS_LICENSE_NUMBER WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'TELLER' ) = 1 THEN C . CUSTOMER_DRIVERS_LICENSE_NUMBER WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C . CUSTOMER_DRIVERS_LICENSE_NUMBER ELSE '*************' END ENABLE ; CREATE MASK BANK_SCHEMA.MASK_LOGIN_ID_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C FOR COLUMN CUSTOMER_LOGIN_ID RETURN CASE WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1 THEN C . CUSTOMER_LOGIN_ID WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C . CUSTOMER_LOGIN_ID ELSE '*****' END ENABLE ; CREATE MASK BANK_SCHEMA.MASK_SECURITY_QUESTION_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C FOR COLUMN CUSTOMER_SECURITY_QUESTION RETURN CASE WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1 THEN C . CUSTOMER_SECURITY_QUESTION WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C . CUSTOMER_SECURITY_QUESTION ELSE '*****' END ENABLE ; CREATE MASK BANK_SCHEMA.MASK_SECURITY_QUESTION_ANSWER_ON_CUSTOMERS ON BANK_SCHEMA.CUSTOMERS AS C FOR COLUMN CUSTOMER_SECURITY_QUESTION_ANSWER RETURN CASE WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1 THEN C . CUSTOMER_SECURITY_QUESTION_ANSWER WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1 THEN C . CUSTOMER_SECURITY_QUESTION_ANSWER ELSE '*****' END ENABLE ; ALTER TABLE BANK_SCHEMA.CUSTOMERS ACTIVATE ROW ACCESS CONTROL ACTIVATE COLUMN ACCESS CONTROL ;</paragraph>
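All five masks in this script share one shape: return the column unchanged for a short list of groups, otherwise return a fixed masked literal. A small generator makes the pattern explicit (a sketch reusing the example's schema names; the TAX_ID mask's partial-mask TELLER branch would need one extra case):

```python
def create_mask_ddl(mask: str, column: str, clear_groups: list[str], masked: str) -> str:
    """Emit CREATE MASK DDL for the simple 'clear for these groups, else literal' pattern."""
    whens = "\n".join(
        f"    WHEN QSYS2.VERIFY_GROUP_FOR_USER(SESSION_USER, '{g}') = 1 THEN C.{column}"
        for g in clear_groups
    )
    return (
        f"CREATE MASK BANK_SCHEMA.{mask}\n"
        f"ON BANK_SCHEMA.CUSTOMERS AS C FOR COLUMN {column} RETURN\n"
        f"  CASE\n{whens}\n    ELSE '{masked}'\n  END ENABLE"
    )

print(create_mask_ddl("MASK_LOGIN_ID_ON_CUSTOMERS", "CUSTOMER_LOGIN_ID",
                      ["ADMIN", "CUSTOMER"], "*****"))
```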
<paragraph><location><page_18><loc_47><loc_94><loc_68><loc_96></location>Back cover</paragraph> <paragraph><location><page_18><loc_47><loc_94><loc_68><loc_96></location>Back cover</paragraph>
<subtitle-level-1><location><page_18><loc_4><loc_82><loc_73><loc_91></location>Row and Column Access Control Support in IBM DB2 for i</subtitle-level-1> <subtitle-level-1><location><page_18><loc_4><loc_82><loc_73><loc_91></location>Row and Column Access Control Support in IBM DB2 for i</subtitle-level-1>
<paragraph><location><page_18><loc_4><loc_66><loc_21><loc_70></location>Implement roles and separation of duties</paragraph> <paragraph><location><page_18><loc_4><loc_66><loc_21><loc_69></location>Implement roles and separation of duties</paragraph>
<paragraph><location><page_18><loc_4><loc_59><loc_20><loc_64></location>Leverage row permissions on the database</paragraph> <paragraph><location><page_18><loc_4><loc_59><loc_20><loc_64></location>Leverage row permissions on the database</paragraph>
<paragraph><location><page_18><loc_4><loc_52><loc_20><loc_57></location>Protect columns by defining column masks</paragraph> <paragraph><location><page_18><loc_4><loc_52><loc_20><loc_57></location>Protect columns by defining column masks</paragraph>
<paragraph><location><page_18><loc_25><loc_59><loc_68><loc_69></location>This IBM Redpaper publication provides information about the IBM i 7.2 feature of IBM DB2 for i Row and Column Access Control (RCAC). It offers a broad description of the function and advantages of controlling access to data in a comprehensive and transparent way. This publication helps you understand the capabilities of RCAC and provides examples of defining, creating, and implementing the row permissions and column masks in a relational database environment.</paragraph> <paragraph><location><page_18><loc_25><loc_59><loc_68><loc_69></location>This IBM Redpaper publication provides information about the IBM i 7.2 feature of IBM DB2 for i Row and Column Access Control (RCAC). It offers a broad description of the function and advantages of controlling access to data in a comprehensive and transparent way. This publication helps you understand the capabilities of RCAC and provides examples of defining, creating, and implementing the row permissions and column masks in a relational database environment.</paragraph>

File diff suppressed because one or more lines are too long

View File

@@ -1,74 +1,19 @@
Front cover Front cover
<!-- image --> <!-- image -->
## Row and Column Access Control Support in IBM DB2 for i ## Row and Column Access Control Support in IBM DB2 for i
Implement roles and separation of duties <!-- image -->
Leverage row permissions on the database <!-- image -->
Protect columns by defining column masks
Jim Bainbridge Hernando Bedoya Rob Bestgen Mike Cain Dan Cruikshank Jim Denton Doug Mack Tom McKinley Kent Milligan
Redpaper
## Contents ## Contents
| Notices | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii |
|------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------|
| Trademarks | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii |
| DB2 for i Center of Excellence | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix |
| Preface | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi |
| Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi | |
| Now you can become a published author, too! | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii |
| Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | xiii |
| Stay connected to IBM Redbooks | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv |
| Chapter 1. Securing and protecting IBM DB2 data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 1 |
| 1.1 Security fundamentals. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 | |
| 1.2 Current state of IBM i security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 2 |
| 1.3 DB2 for i security controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 | |
| 1.3.1 Existing row and column control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 4 |
| 1.3.2 New controls: Row and Column Access Control. . . . . . . . . . . . . . . . . . . . . . . . . . . | 5 |
| Chapter 2. Roles and separation of duties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 7 |
| 2.1 Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 8 |
| 2.1.1 DDM and DRDA application server access: QIBM_DB_DDMDRDA . . . . . . . . . . . | 8 |
| 2.1.2 Toolbox application server access: QIBM_DB_ZDA. . . . . . . . . . . . . . . . . . . . . . . . | 8 |
| 2.1.3 Database Administrator function: QIBM_DB_SQLADM . . . . . . . . . . . . . . . . . . . . . | 9 |
| 2.1.4 Database Information function: QIBM_DB_SYSMON | . . . . . . . . . . . . . . . . . . . . . . 9 |
| 2.1.5 Security Administrator function: QIBM_DB_SECADM . . . . . . . . . . . . . . . . . . . . . . | 9 |
| 2.1.6 Change Function Usage CL command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 10 |
| 2.1.7 Verifying function usage IDs for RCAC with the FUNCTION_USAGE view . . . . . | 10 |
| 2.2 Separation of duties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 | |
| Chapter 3. Row and Column Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 13 |
| 3.1 Explanation of RCAC and the concept of access control . . . . . . . . . . . . . . . . . . . . . . . | 14 |
| 3.1.1 Row permission and column mask definitions | . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 |
| 3.1.2 Enabling and activating RCAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 16 |
| 3.2 Special registers and built-in global variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 18 |
| 3.2.1 Special registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 18 |
| 3.2.2 Built-in global variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 19 |
| 3.3 VERIFY_GROUP_FOR_USER function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 20 |
| 3.4 Establishing and controlling accessibility by using the RCAC rule text . . . . . . . . . . . . . | 21 |
| | . . . . . . . . . . . . . . . . . . . . . . . . 22 |
| 3.5 SELECT, INSERT, and UPDATE behavior with RCAC | |
| 3.6.1 Assigning the QIBM_DB_SECADM function ID to the consultants. . . . . . . . . . . . | 23 |
| 3.6.2 Creating group profiles for the users and their roles . . . . . . . . . . . . . . . . . . . . . . . | 23 |
| 3.6.3 Demonstrating data access without RCAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 24 |
| 3.6.4 Defining and creating row permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 25 |
| 3.6.5 Defining and creating column masks | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 |
| 3.6.6 Activating RCAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 28 |
| 3.6.7 Demonstrating data access with RCAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 29 |
| 3.6.8 Demonstrating data access with a view and RCAC . . . . . . . . . . . . . . . . . . . . . . . | 32 |
DB2 for i Center of Excellence DB2 for i Center of Excellence
Solution Brief IBM Systems Lab Services and Training Solution Brief IBM Systems Lab Services and Training
<!-- image --> <!-- image -->
## Highlights ## Highlights
@@ -81,7 +26,6 @@ Solution Brief IBM Systems Lab Services and Training
- GLYPH<g115>GLYPH<g3> GLYPH<g55> GLYPH<g68>GLYPH<g78>GLYPH<g72>GLYPH<g3> GLYPH<g68>GLYPH<g71>GLYPH<g89>GLYPH<g68>GLYPH<g81>GLYPH<g87>GLYPH<g68>GLYPH<g74>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g68>GLYPH<g70>GLYPH<g70>GLYPH<g72>GLYPH<g86>GLYPH<g86>GLYPH<g3> GLYPH<g87>GLYPH<g82>GLYPH<g3> GLYPH<g68> GLYPH<g3> GLYPH<g90>GLYPH<g82>GLYPH<g85>GLYPH<g79>GLYPH<g71>GLYPH<g90>GLYPH<g76>GLYPH<g71>GLYPH<g72>GLYPH<g3> GLYPH<g86>GLYPH<g82>GLYPH<g88>GLYPH<g85>GLYPH<g70>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g72>GLYPH<g91>GLYPH<g83>GLYPH<g72>GLYPH<g85>GLYPH<g87>GLYPH<g76>GLYPH<g86>GLYPH<g72> - GLYPH<g115>GLYPH<g3> GLYPH<g55> GLYPH<g68>GLYPH<g78>GLYPH<g72>GLYPH<g3> GLYPH<g68>GLYPH<g71>GLYPH<g89>GLYPH<g68>GLYPH<g81>GLYPH<g87>GLYPH<g68>GLYPH<g74>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g68>GLYPH<g70>GLYPH<g70>GLYPH<g72>GLYPH<g86>GLYPH<g86>GLYPH<g3> GLYPH<g87>GLYPH<g82>GLYPH<g3> GLYPH<g68> GLYPH<g3> GLYPH<g90>GLYPH<g82>GLYPH<g85>GLYPH<g79>GLYPH<g71>GLYPH<g90>GLYPH<g76>GLYPH<g71>GLYPH<g72>GLYPH<g3> GLYPH<g86>GLYPH<g82>GLYPH<g88>GLYPH<g85>GLYPH<g70>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g72>GLYPH<g91>GLYPH<g83>GLYPH<g72>GLYPH<g85>GLYPH<g87>GLYPH<g76>GLYPH<g86>GLYPH<g72>
<!-- image --> <!-- image -->
Power Services Power Services
@@ -128,10 +72,8 @@ This paper is intended for database engineers, data-centric application develope
This paper was produced by the IBM DB2 for i Center of Excellence team in partnership with the International Technical Support Organization (ITSO), Rochester, Minnesota US. This paper was produced by the IBM DB2 for i Center of Excellence team in partnership with the International Technical Support Organization (ITSO), Rochester, Minnesota US.
<!-- image --> <!-- image -->
<!-- image --> <!-- image -->
Jim Bainbridge is a senior DB2 consultant on the DB2 for i Center of Excellence team in the IBM Lab Services and Training organization. His primary role is training and implementation services for IBM DB2 Web Query for i and business analytics. Jim began his career with IBM 30 years ago in the IBM Rochester Development Lab, where he developed cooperative processing products that paired IBM PCs with IBM S/36 and AS/400 systems. In the years since, Jim has held numerous technical roles, including independent software vendors technical support on a broad range of IBM technologies and products, and supporting customers in the IBM Executive Briefing Center and IBM Project Office. Jim Bainbridge is a senior DB2 consultant on the DB2 for i Center of Excellence team in the IBM Lab Services and Training organization. His primary role is training and implementation services for IBM DB2 Web Query for i and business analytics. Jim began his career with IBM 30 years ago in the IBM Rochester Development Lab, where he developed cooperative processing products that paired IBM PCs with IBM S/36 and AS/400 systems. In the years since, Jim has held numerous technical roles, including independent software vendors technical support on a broad range of IBM technologies and products, and supporting customers in the IBM Executive Briefing Center and IBM Project Office.
@@ -140,7 +82,6 @@ Hernando Bedoya is a Senior IT Specialist at STG Lab Services and Training in Ro
## Authors ## Authors
<!-- image --> <!-- image -->
Chapter 1. Chapter 1.
@@ -227,7 +168,27 @@ To discover who has authorization to define and manage RCAC, you can use the que
Example 2-1 Query to determine who has authority to define and manage RCAC Example 2-1 Query to determine who has authority to define and manage RCAC
SELECT function_id, user_name, usage, user_type FROM function_usage WHERE function_id='QIBM_DB_SECADM' ORDER BY user_name; SELECT
function_id,
user_name,
usage,
user_type
FROM
function_usage
WHERE
function_id='QIBM_DB_SECADM'
ORDER BY
user_name;
## 2.2 Separation of duties ## 2.2 Separation of duties
@@ -336,7 +297,9 @@ Here is an example of using the VERIFY_GROUP_FOR_USER function:
VERIFY_GROUP_FOR_USER (CURRENT_USER, 'MGR') VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR') VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR', 'STEVE') The following function invocation returns a value of 0: VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JUDY', 'TONY') VERIFY_GROUP_FOR_USER (CURRENT_USER, 'MGR') VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR') VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR', 'STEVE') The following function invocation returns a value of 0: VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JUDY', 'TONY')
RETURN CASE RETURN
CASE
WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'HR', 'EMP' ) = 1 THEN EMPLOYEES . DATE_OF_BIRTH WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER = EMPLOYEES . USER_ID THEN EMPLOYEES . DATE_OF_BIRTH WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER <> EMPLOYEES . USER_ID THEN ( 9999 || '-' || MONTH ( EMPLOYEES . DATE_OF_BIRTH ) || '-' || DAY (EMPLOYEES.DATE_OF_BIRTH )) ELSE NULL END ENABLE ; WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'HR', 'EMP' ) = 1 THEN EMPLOYEES . DATE_OF_BIRTH WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER = EMPLOYEES . USER_ID THEN EMPLOYEES . DATE_OF_BIRTH WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER <> EMPLOYEES . USER_ID THEN ( 9999 || '-' || MONTH ( EMPLOYEES . DATE_OF_BIRTH ) || '-' || DAY (EMPLOYEES.DATE_OF_BIRTH )) ELSE NULL END ENABLE ;
@@ -371,10 +334,16 @@ Now that you have created the row permission and the two column masks, RCAC must
- /* Active Row Access Control (permissions) */ - /* Active Row Access Control (permissions) */
/* Active Column Access Control (masks) ALTER TABLE HR_SCHEMA.EMPLOYEES ACTIVATE ROW ACCESS CONTROL ACTIVATE COLUMN ACCESS CONTROL; - /* Active Column Access Control (masks)
*/ */
ALTER TABLE HR_SCHEMA.EMPLOYEES
ACTIVATE ROW ACCESS CONTROL
ACTIVATE COLUMN ACCESS CONTROL;
- 2. Look at the definition of the EMPLOYEE table, as shown in Figure 3-11. To do this, from the main navigation pane of System i Navigator, click Schemas → HR_SCHEMA → Tables , right-click the EMPLOYEES table, and click Definition . - 2. Look at the definition of the EMPLOYEE table, as shown in Figure 3-11. To do this, from the main navigation pane of System i Navigator, click Schemas → HR_SCHEMA → Tables , right-click the EMPLOYEES table, and click Definition .
Figure 3-11 Selecting the EMPLOYEES table from System i Navigator Figure 3-11 Selecting the EMPLOYEES table from System i Navigator
@@ -406,10 +375,8 @@ This IBM Redpaper publication provides information about the IBM i 7.2 feature o
This paper is intended for database engineers, data-centric application developers, and security officers who want to design and implement RCAC as a part of their data control and governance policy. A solid background in IBM i object level security, DB2 for i relational database concepts, and SQL is assumed. This paper is intended for database engineers, data-centric application developers, and security officers who want to design and implement RCAC as a part of their data control and governance policy. A solid background in IBM i object level security, DB2 for i relational database concepts, and SQL is assumed.
<!-- image --> <!-- image -->
<!-- image --> <!-- image -->
INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

File diff suppressed because one or more lines are too long

View File

@@ -1,22 +1,31 @@
<document> <document>
<section_header_level_1><location><page_1><loc_16><loc_85><loc_82><loc_87></location>TableFormer: Table Structure Understanding with Transformers.</section_header_level_1> <section_header_level_1><location><page_1><loc_16><loc_85><loc_82><loc_86></location>TableFormer: Table Structure Understanding with Transformers.</section_header_level_1>
<section_header_level_1><location><page_1><loc_23><loc_78><loc_74><loc_82></location>Ahmed Nassar, Nikolaos Livathinos, Maksym Lysak, Peter Staar IBM Research</section_header_level_1> <section_header_level_1><location><page_1><loc_23><loc_78><loc_74><loc_81></location>Ahmed Nassar, Nikolaos Livathinos, Maksym Lysak, Peter Staar IBM Research</section_header_level_1>
<text><location><page_1><loc_34><loc_77><loc_62><loc_78></location>{ ahn,nli,mly,taa } @zurich.ibm.com</text> <text><location><page_1><loc_34><loc_77><loc_62><loc_78></location>{ ahn,nli,mly,taa } @zurich.ibm.com</text>
<section_header_level_1><location><page_1><loc_24><loc_71><loc_31><loc_73></location>Abstract</section_header_level_1> <section_header_level_1><location><page_1><loc_24><loc_71><loc_31><loc_73></location>Abstract</section_header_level_1>
<section_header_level_1><location><page_1><loc_52><loc_71><loc_67><loc_73></location>a. Picture of a table:</section_header_level_1> <section_header_level_1><location><page_1><loc_52><loc_71><loc_67><loc_72></location>a. Picture of a table:</section_header_level_1>
<section_header_level_1><location><page_1><loc_8><loc_30><loc_21><loc_32></location>1. Introduction</section_header_level_1> <section_header_level_1><location><page_1><loc_8><loc_30><loc_21><loc_32></location>1. Introduction</section_header_level_1>
<text><location><page_1><loc_8><loc_10><loc_47><loc_29></location>The occurrence of tables in documents is ubiquitous. They often summarise quantitative or factual data, which is cumbersome to describe in verbose text but nevertheless extremely valuable. Unfortunately, this compact representation is often not easy to parse by machines. There are many implicit conventions used to obtain a compact table representation. For example, tables often have complex column- and row-headers in order to reduce duplicated cell content. Lines of different shapes and sizes are leveraged to separate content or indicate a tree structure. Additionally, tables can also have empty/missing table-entries or multi-row textual table-entries. Fig. 1 shows a table which presents all these issues.</text> <text><location><page_1><loc_8><loc_10><loc_47><loc_29></location>The occurrence of tables in documents is ubiquitous. They often summarise quantitative or factual data, which is cumbersome to describe in verbose text but nevertheless extremely valuable. Unfortunately, this compact representation is often not easy to parse by machines. There are many implicit conventions used to obtain a compact table representation. For example, tables often have complex column- and row-headers in order to reduce duplicated cell content. Lines of different shapes and sizes are leveraged to separate content or indicate a tree structure. Additionally, tables can also have empty/missing table-entries or multi-row textual table-entries. Fig. 1 shows a table which presents all these issues.</text>
<figure>
<location><page_1><loc_52><loc_62><loc_88><loc_71></location>
</figure>
<table> <table>
<location><page_1><loc_52><loc_62><loc_88><loc_71></location> <location><page_1><loc_52><loc_62><loc_88><loc_71></location>
<caption>Tables organize valuable content in a concise and compact representation. This content is extremely valuable for systems such as search engines, Knowledge Graph's, etc, since they enhance their predictive capabilities. Unfortunately, tables come in a large variety of shapes and sizes. Furthermore, they can have complex column/row-header configurations, multiline rows, different variety of separation lines, missing entries, etc. As such, the correct identification of the table-structure from an image is a nontrivial task. In this paper, we present a new table-structure identification model. The latter improves the latest end-toend deep learning model (i.e. encoder-dual-decoder from PubTabNet) in two significant ways. First, we introduce a new object detection decoder for table-cells. In this way, we can obtain the content of the table-cells from programmatic PDF's directly from the PDF source and avoid the training of the custom OCR decoders. This architectural change leads to more accurate table-content extraction and allows us to tackle non-english tables. Second, we replace the LSTM decoders with transformer based decoders. This upgrade improves significantly the previous state-of-the-art tree-editing-distance-score (TEDS) from 91% to 98.5% on simple tables and from 88.7% to 95% on complex tables.</caption> <caption>Tables organize valuable content in a concise and compact representation. This content is extremely valuable for systems such as search engines, Knowledge Graph's, etc, since they enhance their predictive capabilities. Unfortunately, tables come in a large variety of shapes and sizes. Furthermore, they can have complex column/row-header configurations, multiline rows, different variety of separation lines, missing entries, etc. As such, the correct identification of the table-structure from an image is a nontrivial task. In this paper, we present a new table-structure identification model. The latter improves the latest end-toend deep learning model (i.e. encoder-dual-decoder from PubTabNet) in two significant ways. First, we introduce a new object detection decoder for table-cells. In this way, we can obtain the content of the table-cells from programmatic PDF's directly from the PDF source and avoid the training of the custom OCR decoders. This architectural change leads to more accurate table-content extraction and allows us to tackle non-english tables. Second, we replace the LSTM decoders with transformer based decoders. This upgrade improves significantly the previous state-of-the-art tree-editing-distance-score (TEDS) from 91% to 98.5% on simple tables and from 88.7% to 95% on complex tables.</caption>
<row_0><col_0><col_header>3</col_0><col_1><col_header>1</col_1></row_0> <row_0><col_0><col_header>3</col_0><col_1><col_header>1</col_1></row_0>
</table> </table>
<text><location><page_1><loc_52><loc_58><loc_79><loc_60></location>b. Red-annotation of bounding boxes, Blue-predictions by TableFormer</text> <unordered_list>
<list_item><location><page_1><loc_52><loc_58><loc_79><loc_60></location>b. Red-annotation of bounding boxes, Blue-predictions by TableFormer</list_item>
</unordered_list>
<figure> <figure>
<location><page_1><loc_51><loc_48><loc_88><loc_57></location> <location><page_1><loc_51><loc_48><loc_88><loc_57></location>
</figure> </figure>
<text><location><page_1><loc_52><loc_46><loc_53><loc_47></location>c.</text> <unordered_list>
<text><location><page_1><loc_54><loc_46><loc_80><loc_47></location>Structure predicted by TableFormer:</text> <list_item><location><page_1><loc_52><loc_46><loc_80><loc_47></location>c. Structure predicted by TableFormer:</list_item>
</unordered_list>
<figure>
<location><page_1><loc_52><loc_37><loc_88><loc_45></location>
</figure>
<table> <table>
<location><page_1><loc_52><loc_37><loc_88><loc_45></location> <location><page_1><loc_52><loc_37><loc_88><loc_45></location>
<caption>Figure 1: Picture of a table with subtle, complex features such as (1) multi-column headers, (2) cell with multi-row text and (3) cells with no content. Image from PubTabNet evaluation set, filename: 'PMC2944238 004 02'.</caption> <caption>Figure 1: Picture of a table with subtle, complex features such as (1) multi-column headers, (2) cell with multi-row text and (3) cells with no content. Image from PubTabNet evaluation set, filename: 'PMC2944238 004 02'.</caption>
@@ -29,7 +38,7 @@
<text><location><page_1><loc_50><loc_16><loc_89><loc_26></location>Recently, significant progress has been made with vision based approaches to extract tables in documents. For the sake of completeness, the issue of table extraction from documents is typically decomposed into two separate challenges, i.e. (1) finding the location of the table(s) on a document-page and (2) finding the structure of a given table in the document.</text> <text><location><page_1><loc_50><loc_16><loc_89><loc_26></location>Recently, significant progress has been made with vision based approaches to extract tables in documents. For the sake of completeness, the issue of table extraction from documents is typically decomposed into two separate challenges, i.e. (1) finding the location of the table(s) on a document-page and (2) finding the structure of a given table in the document.</text>
<text><location><page_1><loc_50><loc_10><loc_89><loc_16></location>The first problem is called table-location and has been previously addressed [30, 38, 19, 21, 23, 26, 8] with stateof-the-art object-detection networks (e.g. YOLO and later on Mask-RCNN [9]). For all practical purposes, it can be</text> <text><location><page_1><loc_50><loc_10><loc_89><loc_16></location>The first problem is called table-location and has been previously addressed [30, 38, 19, 21, 23, 26, 8] with stateof-the-art object-detection networks (e.g. YOLO and later on Mask-RCNN [9]). For all practical purposes, it can be</text>
<text><location><page_2><loc_8><loc_88><loc_47><loc_91></location>considered as a solved problem, given enough ground-truth data to train on.</text> <text><location><page_2><loc_8><loc_88><loc_47><loc_91></location>considered as a solved problem, given enough ground-truth data to train on.</text>
<text><location><page_2><loc_8><loc_71><loc_47><loc_88></location>The second problem is called table-structure decomposition. The latter is a long standing problem in the community of document understanding [6, 4, 14]. Contrary to the table-location problem, there are no commonly used approaches that can easily be re-purposed to solve this problem. Lately, a set of new model-architectures has been proposed by the community to address table-structure decomposition [37, 36, 18, 20]. All these models have some weaknesses (see Sec. 2). The common denominator here is the reliance on textual features and/or the inability to provide the bounding box of each table-cell in the original image.</text> <text><location><page_2><loc_8><loc_71><loc_47><loc_87></location>The second problem is called table-structure decomposition. The latter is a long standing problem in the community of document understanding [6, 4, 14]. Contrary to the table-location problem, there are no commonly used approaches that can easily be re-purposed to solve this problem. Lately, a set of new model-architectures has been proposed by the community to address table-structure decomposition [37, 36, 18, 20]. All these models have some weaknesses (see Sec. 2). The common denominator here is the reliance on textual features and/or the inability to provide the bounding box of each table-cell in the original image.</text>
<text><location><page_2><loc_8><loc_53><loc_47><loc_71></location>In this paper, we want to address these weaknesses and present a robust table-structure decomposition algorithm. The design criteria for our model are the following. First, we want our algorithm to be language agnostic. In this way, we can obtain the structure of any table, irregardless of the language. Second, we want our algorithm to leverage as much data as possible from the original PDF document. For programmatic PDF documents, the text-cells can often be extracted much faster and with higher accuracy compared to OCR methods. Last but not least, we want to have a direct link between the table-cell and its bounding box in the image.</text> <text><location><page_2><loc_8><loc_53><loc_47><loc_71></location>In this paper, we want to address these weaknesses and present a robust table-structure decomposition algorithm. The design criteria for our model are the following. First, we want our algorithm to be language agnostic. In this way, we can obtain the structure of any table, irregardless of the language. Second, we want our algorithm to leverage as much data as possible from the original PDF document. For programmatic PDF documents, the text-cells can often be extracted much faster and with higher accuracy compared to OCR methods. Last but not least, we want to have a direct link between the table-cell and its bounding box in the image.</text>
<text><location><page_2><loc_8><loc_45><loc_47><loc_53></location>To meet the design criteria listed above, we developed a new model called TableFormer and a synthetically generated table structure dataset called SynthTabNet $^{1}$. In particular, our contributions in this work can be summarised as follows:</text> <text><location><page_2><loc_8><loc_45><loc_47><loc_53></location>To meet the design criteria listed above, we developed a new model called TableFormer and a synthetically generated table structure dataset called SynthTabNet $^{1}$. In particular, our contributions in this work can be summarised as follows:</text>
<unordered_list> <unordered_list>
@@ -73,10 +82,10 @@
<row_5><col_0><row_header>Combined(**)</col_0><col_1><body>3</col_1><col_2><body>3</col_2><col_3><body>500k</col_3><col_4><body>PNG</col_4></row_5> <row_5><col_0><row_header>Combined(**)</col_0><col_1><body>3</col_1><col_2><body>3</col_2><col_3><body>500k</col_3><col_4><body>PNG</col_4></row_5>
<row_6><col_0><row_header>SynthTabNet</col_0><col_1><body>3</col_1><col_2><body>3</col_2><col_3><body>600k</col_3><col_4><body>PNG</col_4></row_6> <row_6><col_0><row_header>SynthTabNet</col_0><col_1><body>3</col_1><col_2><body>3</col_2><col_3><body>600k</col_3><col_4><body>PNG</col_4></row_6>
</table> </table>
<text><location><page_4><loc_50><loc_63><loc_89><loc_69></location>one adopts a colorful appearance with high contrast and the last one contains tables with sparse content. Lastly, we have combined all synthetic datasets into one big unified synthetic dataset of 600k examples.</text> <text><location><page_4><loc_50><loc_63><loc_89><loc_68></location>one adopts a colorful appearance with high contrast and the last one contains tables with sparse content. Lastly, we have combined all synthetic datasets into one big unified synthetic dataset of 600k examples.</text>
<text><location><page_4><loc_52><loc_61><loc_89><loc_62></location>Tab. 1 summarizes the various attributes of the datasets.</text> <text><location><page_4><loc_52><loc_61><loc_89><loc_62></location>Tab. 1 summarizes the various attributes of the datasets.</text>
<section_header_level_1><location><page_4><loc_50><loc_58><loc_73><loc_60></location>4. The TableFormer model</section_header_level_1> <section_header_level_1><location><page_4><loc_50><loc_58><loc_73><loc_59></location>4. The TableFormer model</section_header_level_1>
<text><location><page_4><loc_50><loc_43><loc_89><loc_57></location>Given the image of a table, TableFormer is able to predict: 1) a sequence of tokens that represent the structure of a table, and 2) a bounding box coupled to a subset of those tokens. The conversion of an image into a sequence of tokens is a well-known task [35, 16]. While attention is often used as an implicit method to associate each token of the sequence with a position in the original image, an explicit association between the individual table-cells and the image bounding boxes is also required.</text> <text><location><page_4><loc_50><loc_44><loc_89><loc_57></location>Given the image of a table, TableFormer is able to predict: 1) a sequence of tokens that represent the structure of a table, and 2) a bounding box coupled to a subset of those tokens. The conversion of an image into a sequence of tokens is a well-known task [35, 16]. While attention is often used as an implicit method to associate each token of the sequence with a position in the original image, an explicit association between the individual table-cells and the image bounding boxes is also required.</text>
<section_header_level_1><location><page_4><loc_50><loc_41><loc_69><loc_42></location>4.1. Model architecture.</section_header_level_1> <section_header_level_1><location><page_4><loc_50><loc_41><loc_69><loc_42></location>4.1. Model architecture.</section_header_level_1>
<text><location><page_4><loc_50><loc_16><loc_89><loc_40></location>We now describe in detail the proposed method, which is composed of three main components, see Fig. 4. Our CNN Backbone Network encodes the input as a feature vector of predefined length. The input feature vector of the encoded image is passed to the Structure Decoder to produce a sequence of HTML tags that represent the structure of the table. With each prediction of an HTML standard data cell (' < td > ') the hidden state of that cell is passed to the Cell BBox Decoder. As for spanning cells, such as row or column span, the tag is broken down to ' < ', 'rowspan=' or 'colspan=', with the number of spanning cells (attribute), and ' > '. The hidden state attached to ' < ' is passed to the Cell BBox Decoder. A shared feed forward network (FFN) receives the hidden states from the Structure Decoder, to provide the final detection predictions of the bounding box coordinates and their classification.</text> <text><location><page_4><loc_50><loc_16><loc_89><loc_40></location>We now describe in detail the proposed method, which is composed of three main components, see Fig. 4. Our CNN Backbone Network encodes the input as a feature vector of predefined length. The input feature vector of the encoded image is passed to the Structure Decoder to produce a sequence of HTML tags that represent the structure of the table. With each prediction of an HTML standard data cell (' < td > ') the hidden state of that cell is passed to the Cell BBox Decoder. As for spanning cells, such as row or column span, the tag is broken down to ' < ', 'rowspan=' or 'colspan=', with the number of spanning cells (attribute), and ' > '. The hidden state attached to ' < ' is passed to the Cell BBox Decoder. A shared feed forward network (FFN) receives the hidden states from the Structure Decoder, to provide the final detection predictions of the bounding box coordinates and their classification.</text>
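A tiny sketch may help picture the routing this paragraph describes: hidden states of the Structure Decoder at ' < td > ' positions (and at ' < ' for spanning cells) are handed to the Cell BBox Decoder as its object queries. Token ids and sizes below are made up:

```python
import torch

TD, LT = 5, 6                          # hypothetical ids for '<td>' and '<'
tags = torch.tensor([2, 5, 6, 5, 3])   # one decoded structure-tag sequence
hidden = torch.randn(5, 512)           # per-tag Structure Decoder hidden states
cell_queries = hidden[(tags == TD) | (tags == LT)]  # shape (3, 512)
```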
<text><location><page_4><loc_50><loc_10><loc_89><loc_16></location>CNN Backbone Network. A ResNet-18 CNN is the backbone that receives the table image and encodes it as a vector of predefined length. The network has been modified by removing the linear and pooling layer, as we are not per-</text> <text><location><page_4><loc_50><loc_10><loc_89><loc_16></location>CNN Backbone Network. A ResNet-18 CNN is the backbone that receives the table image and encodes it as a vector of predefined length. The network has been modified by removing the linear and pooling layer, as we are not per-</text>
@@ -88,15 +97,15 @@
<location><page_5><loc_9><loc_36><loc_47><loc_67></location> <location><page_5><loc_9><loc_36><loc_47><loc_67></location>
<caption>Figure 4: Given an input image of a table, the Encoder produces fixed-length features that represent the input image. The features are then passed to both the Structure Decoder and Cell BBox Decoder . During training, the Structure Decoder receives 'tokenized tags' of the HTML code that represent the table structure. Afterwards, a transformer encoder and decoder architecture is employed to produce features that are received by a linear layer, and the Cell BBox Decoder. The linear layer is applied to the features to predict the tags. Simultaneously, the Cell BBox Decoder selects features referring to the data cells (' < td > ', ' < ') and passes them through an attention network, an MLP, and a linear layer to predict the bounding boxes.</caption> <caption>Figure 4: Given an input image of a table, the Encoder produces fixed-length features that represent the input image. The features are then passed to both the Structure Decoder and Cell BBox Decoder . During training, the Structure Decoder receives 'tokenized tags' of the HTML code that represent the table structure. Afterwards, a transformer encoder and decoder architecture is employed to produce features that are received by a linear layer, and the Cell BBox Decoder. The linear layer is applied to the features to predict the tags. Simultaneously, the Cell BBox Decoder selects features referring to the data cells (' < td > ', ' < ') and passes them through an attention network, an MLP, and a linear layer to predict the bounding boxes.</caption>
</figure> </figure>
<text><location><page_5><loc_50><loc_63><loc_89><loc_69></location>forming classification, and adding an adaptive pooling layer of size 28*28. ResNet by default downsamples the image resolution by 32 and then the encoded image is provided to both the Structure Decoder , and Cell BBox Decoder .</text> <text><location><page_5><loc_50><loc_63><loc_89><loc_68></location>forming classification, and adding an adaptive pooling layer of size 28*28. ResNet by default downsamples the image resolution by 32 and then the encoded image is provided to both the Structure Decoder , and Cell BBox Decoder .</text>
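The backbone modification just described is a few lines in PyTorch (an untrained sketch, assuming torchvision's `resnet18`):

```python
import torch
from torch import nn
from torchvision.models import resnet18

# ResNet-18 without its average-pooling and linear layers, followed by the
# 28x28 adaptive pooling layer described above.
trunk = nn.Sequential(*list(resnet18().children())[:-2],
                      nn.AdaptiveAvgPool2d((28, 28)))
feats = trunk(torch.randn(1, 3, 896, 896))  # 896/32 = 28 -> (1, 512, 28, 28)
```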
<text><location><page_5><loc_50><loc_48><loc_89><loc_63></location>Structure Decoder. The transformer architecture of this component is based on the work proposed in [31]. After extensive experimentation, the Structure Decoder is modeled as a transformer encoder with two encoder layers and a transformer decoder made from a stack of 4 decoder layers that comprise mainly of multi-head attention and feed forward layers. This configuration uses fewer layers and heads in comparison to networks applied to other problems (e.g. "Scene Understanding", "Image Captioning"), something which we relate to the simplicity of table images.</text> <text><location><page_5><loc_50><loc_48><loc_89><loc_62></location>Structure Decoder. The transformer architecture of this component is based on the work proposed in [31]. After extensive experimentation, the Structure Decoder is modeled as a transformer encoder with two encoder layers and a transformer decoder made from a stack of 4 decoder layers that comprise mainly of multi-head attention and feed forward layers. This configuration uses fewer layers and heads in comparison to networks applied to other problems (e.g. "Scene Understanding", "Image Captioning"), something which we relate to the simplicity of table images.</text>
<text><location><page_5><loc_50><loc_31><loc_89><loc_47></location>The transformer encoder receives an encoded image from the CNN Backbone Network and refines it through a multi-head dot-product attention layer, followed by a Feed Forward Network. During training, the transformer decoder receives as input the output feature produced by the transformer encoder, and the tokenized input of the HTML ground-truth tags. Using a stack of multi-head attention layers, different aspects of the tag sequence could be inferred. This is achieved by each attention head on a layer operating in a different subspace, and then combining altogether their attention score.</text> <text><location><page_5><loc_50><loc_31><loc_89><loc_47></location>The transformer encoder receives an encoded image from the CNN Backbone Network and refines it through a multi-head dot-product attention layer, followed by a Feed Forward Network. During training, the transformer decoder receives as input the output feature produced by the transformer encoder, and the tokenized input of the HTML ground-truth tags. Using a stack of multi-head attention layers, different aspects of the tag sequence could be inferred. This is achieved by each attention head on a layer operating in a different subspace, and then combining altogether their attention score.</text>
<text><location><page_5><loc_50><loc_17><loc_89><loc_31></location>Cell BBox Decoder. Our architecture allows to simultaneously predict HTML tags and bounding boxes for each table cell without the need of a separate object detector end to end. This approach is inspired by DETR [1] which employs a Transformer Encoder, and Decoder that looks for a specific number of object queries (potential object detections). As our model utilizes a transformer architecture, the hidden states of the ' < td > ' and ' < ' HTML structure tags become the object query.</text> <text><location><page_5><loc_50><loc_18><loc_89><loc_31></location>Cell BBox Decoder. Our architecture allows to simultaneously predict HTML tags and bounding boxes for each table cell without the need of a separate object detector end to end. This approach is inspired by DETR [1] which employs a Transformer Encoder, and Decoder that looks for a specific number of object queries (potential object detections). As our model utilizes a transformer architecture, the hidden states of the ' < td > ' and ' < ' HTML structure tags become the object query.</text>
<text><location><page_5><loc_50><loc_10><loc_89><loc_17></location>The encoding generated by the CNN Backbone Network along with the features acquired for every data cell from the Transformer Decoder are then passed to the attention network. The attention network takes both inputs and learns to provide an attention weighted encoding. This weighted at-</text> <text><location><page_5><loc_50><loc_10><loc_89><loc_17></location>The encoding generated by the CNN Backbone Network along with the features acquired for every data cell from the Transformer Decoder are then passed to the attention network. The attention network takes both inputs and learns to provide an attention weighted encoding. This weighted at-</text>
<text><location><page_6><loc_8><loc_80><loc_47><loc_91></location>tention encoding is then multiplied to the encoded image to produce a feature for each table cell. Notice that this is different than the typical object detection problem where imbalances between the number of detections and the amount of objects may exist. In our case, we know up front that the produced detections always match with the table cells in number and correspondence.</text> <text><location><page_6><loc_8><loc_80><loc_47><loc_91></location>tention encoding is then multiplied to the encoded image to produce a feature for each table cell. Notice that this is different than the typical object detection problem where imbalances between the number of detections and the amount of objects may exist. In our case, we know up front that the produced detections always match with the table cells in number and correspondence.</text>
<text><location><page_6><loc_8><loc_70><loc_47><loc_80></location>The output features for each table cell are then fed into the feed-forward network (FFN). The FFN consists of a Multi-Layer Perceptron (3 layers with ReLU activation function) that predicts the normalized coordinates for the bounding box of each table cell. Finally, the predicted bounding boxes are classified based on whether they are empty or not using a linear layer.</text> <text><location><page_6><loc_8><loc_70><loc_47><loc_80></location>The output features for each table cell are then fed into the feed-forward network (FFN). The FFN consists of a Multi-Layer Perceptron (3 layers with ReLU activation function) that predicts the normalized coordinates for the bounding box of each table cell. Finally, the predicted bounding boxes are classified based on whether they are empty or not using a linear layer.</text>
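A hedged sketch of the two heads just described (the paragraph states a 3-layer ReLU MLP and a linear classifier; the hidden width and the trailing sigmoid for keeping coordinates normalized are our assumptions):

```python
from torch import nn

d_model = 512
bbox_head = nn.Sequential(            # 3-layer MLP with ReLU activations
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, 4), nn.Sigmoid(),   # normalized box coordinates
)
empty_head = nn.Linear(d_model, 2)    # empty vs. non-empty cell
```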
<text><location><page_6><loc_8><loc_44><loc_47><loc_69></location>Loss Functions. We formulate a multi-task loss Eq. 2 to train our network. The Cross-Entropy loss (denoted as l$_{s}$ ) is used to train the Structure Decoder which predicts the structure tokens. As for the Cell BBox Decoder it is trained with a combination of losses denoted as l$_{box}$ . l$_{box}$ consists of the generally used l$_{1}$ loss for object detection and the IoU loss ( l$_{iou}$ ) to be scale invariant as explained in [25]. In comparison to DETR, we do not use the Hungarian algorithm [15] to match the predicted bounding boxes with the ground-truth boxes, as we have already achieved a one-toone match through two steps: 1) Our token input sequence is naturally ordered, therefore the hidden states of the table data cells are also in order when they are provided as input to the Cell BBox Decoder , and 2) Our bounding boxes generation mechanism (see Sec. 3) ensures a one-to-one mapping between the cell content and its bounding box for all post-processed datasets.</text> <text><location><page_6><loc_8><loc_44><loc_47><loc_69></location>Loss Functions. We formulate a multi-task loss Eq. 2 to train our network. The Cross-Entropy loss (denoted as l$_{s}$ ) is used to train the Structure Decoder which predicts the structure tokens. As for the Cell BBox Decoder it is trained with a combination of losses denoted as l$_{box}$ . l$_{box}$ consists of the generally used l$_{1}$ loss for object detection and the IoU loss ( l$_{iou}$ ) to be scale invariant as explained in [25]. In comparison to DETR, we do not use the Hungarian algorithm [15] to match the predicted bounding boxes with the ground-truth boxes, as we have already achieved a one-toone match through two steps: 1) Our token input sequence is naturally ordered, therefore the hidden states of the table data cells are also in order when they are provided as input to the Cell BBox Decoder , and 2) Our bounding boxes generation mechanism (see Sec. 3) ensures a one-to-one mapping between the cell content and its bounding box for all post-processed datasets.</text>
<text><location><page_6><loc_8><loc_41><loc_47><loc_44></location>The loss used to train the TableFormer can be defined as follows:</text> <text><location><page_6><loc_8><loc_41><loc_47><loc_43></location>The loss used to train the TableFormer can be defined as follows:</text>
<formula><location><page_6><loc_20><loc_35><loc_47><loc_38></location>l$_{box}$ = λ$_{iou}$l$_{iou}$ + λ$_{l1}$l$_{l1}$, l = λl$_{s}$ + (1 - λ)l$_{box}$ (1)</formula> <formula><location><page_6><loc_20><loc_35><loc_47><loc_38></location>l$_{box}$ = λ$_{iou}$l$_{iou}$ + λ$_{l1}$l$_{l1}$, l = λl$_{s}$ + (1 - λ)l$_{box}$ (1)</formula>
<text><location><page_6><loc_8><loc_32><loc_46><loc_33></location>where λ ∈ [0, 1], and λ$_{iou}$, λ$_{l1}$ ∈ ℝ are hyper-parameters.</text> <text><location><page_6><loc_8><loc_32><loc_46><loc_33></location>where λ ∈ [0, 1], and λ$_{iou}$, λ$_{l1}$ ∈ ℝ are hyper-parameters.</text>
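A sketch of this multi-task loss in PyTorch, using torchvision's generalized IoU loss to stand in for l$_{iou}$ as suggested by the reference to [25]; λ = 0.5 is quoted later in the text, while the λ$_{iou}$ and λ$_{l1}$ values here are placeholders:

```python
import torch.nn.functional as F
from torchvision.ops import generalized_box_iou_loss

def tableformer_loss(structure_logits, structure_targets,
                     pred_boxes, gt_boxes,
                     lam=0.5, lam_iou=0.5, lam_l1=0.5):
    # l_s: cross-entropy over the predicted structure tokens
    l_s = F.cross_entropy(structure_logits, structure_targets)
    # l_box: scale-invariant IoU term plus the usual l1 term;
    # boxes are assumed to be in (x1, y1, x2, y2) format here
    l_iou = generalized_box_iou_loss(pred_boxes, gt_boxes, reduction="mean")
    l_1 = F.l1_loss(pred_boxes, gt_boxes)
    l_box = lam_iou * l_iou + lam_l1 * l_1
    # Eq. (1): weighted combination of the two tasks
    return lam * l_s + (1 - lam) * l_box
```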
<section_header_level_1><location><page_6><loc_8><loc_28><loc_28><loc_30></location>5. Experimental Results</section_header_level_1> <section_header_level_1><location><page_6><loc_8><loc_28><loc_28><loc_30></location>5. Experimental Results</section_header_level_1>
@ -105,7 +114,7 @@
<formula><location><page_6><loc_15><loc_14><loc_47><loc_17></location>Image width and height ≤ 1024 pixels, Structural tags length ≤ 512 tokens. (2)</formula> <formula><location><page_6><loc_15><loc_14><loc_47><loc_17></location>Image width and height ≤ 1024 pixels, Structural tags length ≤ 512 tokens. (2)</formula>
<text><location><page_6><loc_8><loc_10><loc_47><loc_13></location>Although input constraints are also used by other methods, such as EDD, ours are less restrictive due to the improved</text> <text><location><page_6><loc_8><loc_10><loc_47><loc_13></location>Although input constraints are also used by other methods, such as EDD, ours are less restrictive due to the improved</text>
<text><location><page_6><loc_50><loc_86><loc_89><loc_91></location>runtime performance and lower memory footprint of TableFormer. This allows us to utilize input samples with longer sequences and images with larger dimensions.</text> <text><location><page_6><loc_50><loc_86><loc_89><loc_91></location>runtime performance and lower memory footprint of TableFormer. This allows us to utilize input samples with longer sequences and images with larger dimensions.</text>
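For reference, the constraints of Eq. (2) amount to a trivial admission check; the (channels, height, width) tensor layout is an assumption:

```python
def satisfies_input_constraints(image, structural_tags):
    """Eq. (2): both image dimensions at most 1024 pixels and the
    structural tag sequence at most 512 tokens."""
    _, height, width = image.shape  # (channels, height, width) assumed
    return max(height, width) <= 1024 and len(structural_tags) <= 512
```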
<text><location><page_6><loc_50><loc_59><loc_89><loc_86></location>The Transformer Encoder consists of two "Transformer Encoder Layers", with an input feature size of 512, feed forward network of 1024, and 4 attention heads. As for the Transformer Decoder, it is composed of four "Transformer Decoder Layers" with similar input and output dimensions as the "Transformer Encoder Layers". Even though our model uses fewer layers and heads than the default implementation parameters, our extensive experimentation has proved this setup to be more suitable for table images. We attribute this finding to the inherent design of table images, which contain mostly lines and text, unlike the more elaborate content present in other scopes (e.g. the COCO dataset). Moreover, we have added ResNet blocks to the inputs of the Structure Decoder and Cell BBox Decoder. This prevents one decoder from having a stronger influence over the learned weights, which would damage the other prediction task (structure vs. bounding boxes), and encourages each decoder to learn task-specific weights instead. Lastly, our dropout layers are set to 0.5.</text> <text><location><page_6><loc_50><loc_59><loc_89><loc_85></location>The Transformer Encoder consists of two "Transformer Encoder Layers", with an input feature size of 512, feed forward network of 1024, and 4 attention heads. As for the Transformer Decoder, it is composed of four "Transformer Decoder Layers" with similar input and output dimensions as the "Transformer Encoder Layers". Even though our model uses fewer layers and heads than the default implementation parameters, our extensive experimentation has proved this setup to be more suitable for table images. We attribute this finding to the inherent design of table images, which contain mostly lines and text, unlike the more elaborate content present in other scopes (e.g. the COCO dataset). Moreover, we have added ResNet blocks to the inputs of the Structure Decoder and Cell BBox Decoder. This prevents one decoder from having a stronger influence over the learned weights, which would damage the other prediction task (structure vs. bounding boxes), and encourages each decoder to learn task-specific weights instead. Lastly, our dropout layers are set to 0.5.</text>
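In PyTorch terms, the quoted configuration corresponds to roughly the following sketch; anything not quoted above is left at library defaults:

```python
import torch.nn as nn

# d_model=512, FFN width 1024, 4 heads, dropout 0.5, as stated above.
d_model, n_heads, ffn_dim, dropout = 512, 4, 1024, 0.5
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, n_heads, ffn_dim, dropout),
    num_layers=2,  # two "Transformer Encoder Layers"
)
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, n_heads, ffn_dim, dropout),
    num_layers=4,  # four "Transformer Decoder Layers"
)
```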
<text><location><page_6><loc_50><loc_46><loc_89><loc_58></location>TableFormer is trained with 3 Adam optimizers, one each for the CNN Backbone Network , Structure Decoder , and Cell BBox Decoder . Taking PubTabNet as an example of our parameter setup, the initial learning rate is 0.001 for 12 epochs with a batch size of 24, and λ is set to 0.5. Afterwards, we reduce the learning rate to 0.0001, the batch size to 18, and train for 12 more epochs or until convergence.</text> <text><location><page_6><loc_50><loc_46><loc_89><loc_58></location>TableFormer is trained with 3 Adam optimizers, one each for the CNN Backbone Network , Structure Decoder , and Cell BBox Decoder . Taking PubTabNet as an example of our parameter setup, the initial learning rate is 0.001 for 12 epochs with a batch size of 24, and λ is set to 0.5. Afterwards, we reduce the learning rate to 0.0001, the batch size to 18, and train for 12 more epochs or until convergence.</text>
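A sketch of this three-optimizer schedule; the placeholder modules merely stand in for the actual sub-networks:

```python
import torch
import torch.nn as nn

# Placeholders standing in for the CNN Backbone Network, Structure
# Decoder and Cell BBox Decoder (any nn.Module works for the sketch).
backbone = nn.Conv2d(3, 512, kernel_size=3)
structure_decoder = nn.Linear(512, 512)
cell_bbox_decoder = nn.Linear(512, 4)

# One Adam optimizer per sub-network, initial learning rate 0.001.
optimizers = [
    torch.optim.Adam(m.parameters(), lr=1e-3)
    for m in (backbone, structure_decoder, cell_bbox_decoder)
]

# After the first 12 epochs, drop the learning rate to 0.0001
# (the batch size is reduced from 24 to 18 at the same point).
for opt in optimizers:
    for group in opt.param_groups:
        group["lr"] = 1e-4
```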
<text><location><page_6><loc_50><loc_30><loc_89><loc_45></location>TableFormer is implemented with PyTorch and Torchvision libraries [22]. To speed up the inference, the image undergoes a single forward pass through the CNN Backbone Network and transformer encoder. This eliminates the overhead of generating the same features for each decoding step. Similarly, we employ a 'caching' technique to perform faster autoregressive decoding. This is achieved by storing the features of decoded tokens so we can reuse them for each time step. Therefore, we only compute the attention for each new tag.</text> <text><location><page_6><loc_50><loc_30><loc_89><loc_45></location>TableFormer is implemented with PyTorch and Torchvision libraries [22]. To speed up the inference, the image undergoes a single forward pass through the CNN Backbone Network and transformer encoder. This eliminates the overhead of generating the same features for each decoding step. Similarly, we employ a 'caching' technique to perform faster autoregressive decoding. This is achieved by storing the features of decoded tokens so we can reuse them for each time step. Therefore, we only compute the attention for each new tag.</text>
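The caching idea can be sketched as a generic key/value cache for one single-head attention layer; this illustrates the mechanism rather than the exact implementation:

```python
import torch
import torch.nn.functional as F

def cached_attention_step(q_new, k_new, v_new, k_cache, v_cache):
    """One autoregressive step with cached keys/values.

    Projections of previously decoded tags are reused from the cache,
    so only the newest tag's attention is computed at each step.
    """
    k_cache = torch.cat([k_cache, k_new], dim=0)  # (t, d)
    v_cache = torch.cat([v_cache, v_new], dim=0)
    scores = (q_new @ k_cache.T) / (k_cache.shape[-1] ** 0.5)  # (1, t)
    out = F.softmax(scores, dim=-1) @ v_cache                  # (1, d)
    return out, k_cache, v_cache

# Usage: the cache starts empty and grows by one entry per decoded tag.
d = 512
k_cache, v_cache = torch.empty(0, d), torch.empty(0, d)
q = k = v = torch.randn(1, d)
out, k_cache, v_cache = cached_attention_step(q, k, v, k_cache, v_cache)
```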
<section_header_level_1><location><page_6><loc_50><loc_26><loc_65><loc_27></location>5.2. Generalization</section_header_level_1> <section_header_level_1><location><page_6><loc_50><loc_26><loc_65><loc_27></location>5.2. Generalization</section_header_level_1>
@ -155,14 +164,19 @@
<row_5><col_0><row_header>EDD</col_0><col_1><body>91.2</col_1><col_2><body>85.4</col_2><col_3><body>88.3</col_3></row_5> <row_5><col_0><row_header>EDD</col_0><col_1><body>91.2</col_1><col_2><body>85.4</col_2><col_3><body>88.3</col_3></row_5>
<row_6><col_0><row_header>TableFormer</col_0><col_1><body>95.4</col_1><col_2><body>90.1</col_2><col_3><body>93.6</col_3></row_6> <row_6><col_0><row_header>TableFormer</col_0><col_1><body>95.4</col_1><col_2><body>90.1</col_2><col_3><body>93.6</col_3></row_6>
</table> </table>
<text><location><page_8><loc_9><loc_89><loc_10><loc_90></location>a.</text> <unordered_list>
<text><location><page_8><loc_11><loc_89><loc_82><loc_90></location>Red - PDF cells, Green - predicted bounding boxes, Blue - post-processed predictions matched to PDF cells</text> <list_item><location><page_8><loc_9><loc_89><loc_10><loc_90></location>a.</list_item>
<text><location><page_8><loc_9><loc_87><loc_46><loc_88></location>Japanese language (previously unseen by TableFormer):</text> <list_item><location><page_8><loc_11><loc_89><loc_82><loc_90></location>Red - PDF cells, Green - predicted bounding boxes, Blue - post-processed predictions matched to PDF cells</list_item>
</unordered_list>
<section_header_level_1><location><page_8><loc_9><loc_87><loc_46><loc_88></location>Japanese language (previously unseen by TableFormer):</section_header_level_1>
<section_header_level_1><location><page_8><loc_50><loc_87><loc_70><loc_88></location>Example table from FinTabNet:</section_header_level_1>
<figure> <figure>
<location><page_8><loc_8><loc_76><loc_49><loc_87></location> <location><page_8><loc_8><loc_76><loc_49><loc_87></location>
</figure> </figure>
<text><location><page_8><loc_9><loc_73><loc_10><loc_74></location>b.</text> <figure>
<text><location><page_8><loc_11><loc_73><loc_63><loc_74></location>Structure predicted by TableFormer, with superimposed matched PDF cell text:</text> <location><page_8><loc_50><loc_77><loc_91><loc_88></location>
<caption>b. Structure predicted by TableFormer, with superimposed matched PDF cell text:</caption>
</figure>
<table> <table>
<location><page_8><loc_9><loc_63><loc_49><loc_72></location> <location><page_8><loc_9><loc_63><loc_49><loc_72></location>
<row_0><col_0><body></col_0><col_1><body></col_1><col_2><col_header>論文ファイル</col_2><col_3><col_header>論文ファイル</col_3><col_4><col_header>参考文献</col_4><col_5><col_header>参考文献</col_5></row_0> <row_0><col_0><body></col_0><col_1><body></col_1><col_2><col_header>論文ファイル</col_2><col_3><col_header>論文ファイル</col_3><col_4><col_header>参考文献</col_4><col_5><col_header>参考文献</col_5></row_0>
@ -204,16 +218,13 @@
<text><location><page_8><loc_50><loc_18><loc_89><loc_35></location>In this paper, we presented TableFormer, an end-to-end transformer-based approach to predict table structures and bounding boxes of cells from an image. This approach enables us to recreate the table structure, and extract the cell content from PDF or OCR by using bounding boxes. Additionally, it provides the versatility required in real-world scenarios when dealing with various types of PDF documents, and languages. Furthermore, our method outperforms all state-of-the-art methods by a wide margin. Finally, we introduce "SynthTabNet", a challenging synthetically generated dataset that reinforces missing characteristics from other datasets.</text> <text><location><page_8><loc_50><loc_18><loc_89><loc_35></location>In this paper, we presented TableFormer, an end-to-end transformer-based approach to predict table structures and bounding boxes of cells from an image. This approach enables us to recreate the table structure, and extract the cell content from PDF or OCR by using bounding boxes. Additionally, it provides the versatility required in real-world scenarios when dealing with various types of PDF documents, and languages. Furthermore, our method outperforms all state-of-the-art methods by a wide margin. Finally, we introduce "SynthTabNet", a challenging synthetically generated dataset that reinforces missing characteristics from other datasets.</text>
<section_header_level_1><location><page_8><loc_50><loc_14><loc_60><loc_15></location>References</section_header_level_1> <section_header_level_1><location><page_8><loc_50><loc_14><loc_60><loc_15></location>References</section_header_level_1>
<unordered_list> <unordered_list>
<list_item><location><page_8><loc_51><loc_10><loc_89><loc_13></location>[1] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-</list_item> <list_item><location><page_8><loc_51><loc_10><loc_89><loc_12></location>[1] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-</list_item>
</unordered_list> </unordered_list>
<figure>
<location><page_8><loc_50><loc_77><loc_91><loc_88></location>
</figure>
<unordered_list> <unordered_list>
<list_item><location><page_9><loc_11><loc_85><loc_47><loc_91></location>end object detection with transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 , pages 213-229, Cham, 2020. Springer International Publishing. 5</list_item> <list_item><location><page_9><loc_11><loc_85><loc_47><loc_90></location>end object detection with transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 , pages 213-229, Cham, 2020. Springer International Publishing. 5</list_item>
<list_item><location><page_9><loc_9><loc_81><loc_47><loc_85></location>[2] Zewen Chi, Heyan Huang, Heng-Da Xu, Houjin Yu, Wanxuan Yin, and Xian-Ling Mao. Complicated table structure recognition. arXiv preprint arXiv:1908.04729 , 2019. 3</list_item> <list_item><location><page_9><loc_9><loc_81><loc_47><loc_85></location>[2] Zewen Chi, Heyan Huang, Heng-Da Xu, Houjin Yu, Wanxuan Yin, and Xian-Ling Mao. Complicated table structure recognition. arXiv preprint arXiv:1908.04729 , 2019. 3</list_item>
<list_item><location><page_9><loc_9><loc_77><loc_47><loc_81></location>[3] Bertrand Couasnon and Aurelie Lemaitre. Recognition of Tables and Forms , pages 647-677. Springer London, London, 2014. 2</list_item> <list_item><location><page_9><loc_9><loc_77><loc_47><loc_81></location>[3] Bertrand Couasnon and Aurelie Lemaitre. Recognition of Tables and Forms , pages 647-677. Springer London, London, 2014. 2</list_item>
<list_item><location><page_9><loc_9><loc_71><loc_47><loc_77></location>[4] Hervé Déjean, Jean-Luc Meunier, Liangcai Gao, Yilun Huang, Yu Fang, Florian Kleber, and Eva-Maria Lang. ICDAR 2019 Competition on Table Detection and Recognition (cTDaR), Apr. 2019. http://sac.founderit.com/. 2</list_item> <list_item><location><page_9><loc_9><loc_71><loc_47><loc_76></location>[4] Hervé Déjean, Jean-Luc Meunier, Liangcai Gao, Yilun Huang, Yu Fang, Florian Kleber, and Eva-Maria Lang. ICDAR 2019 Competition on Table Detection and Recognition (cTDaR), Apr. 2019. http://sac.founderit.com/. 2</list_item>
<list_item><location><page_9><loc_9><loc_66><loc_47><loc_71></location>[5] Basilios Gatos, Dimitrios Danatsas, Ioannis Pratikakis, and Stavros J Perantonis. Automatic table detection in document images. In International Conference on Pattern Recognition and Image Analysis , pages 609-618. Springer, 2005. 2</list_item> <list_item><location><page_9><loc_9><loc_66><loc_47><loc_71></location>[5] Basilios Gatos, Dimitrios Danatsas, Ioannis Pratikakis, and Stavros J Perantonis. Automatic table detection in document images. In International Conference on Pattern Recognition and Image Analysis , pages 609-618. Springer, 2005. 2</list_item>
<list_item><location><page_9><loc_9><loc_60><loc_47><loc_65></location>[6] Max Göbel, Tamir Hassan, Ermelinda Oro, and Giorgio Orsi. ICDAR 2013 table competition. In 2013 12th International Conference on Document Analysis and Recognition , pages 1449-1453, 2013. 2</list_item> <list_item><location><page_9><loc_9><loc_60><loc_47><loc_65></location>[6] Max Göbel, Tamir Hassan, Ermelinda Oro, and Giorgio Orsi. ICDAR 2013 table competition. In 2013 12th International Conference on Document Analysis and Recognition , pages 1449-1453, 2013. 2</list_item>
<list_item><location><page_9><loc_9><loc_56><loc_47><loc_60></location>[7] EA Green and M Krishnamoorthy. Recognition of tables using table grammars. procs. In Symposium on Document Analysis and Recognition (SDAIR'95) , pages 261-277. 2</list_item> <list_item><location><page_9><loc_9><loc_56><loc_47><loc_60></location>[7] EA Green and M Krishnamoorthy. Recognition of tables using table grammars. procs. In Symposium on Document Analysis and Recognition (SDAIR'95) , pages 261-277. 2</list_item>
@ -227,7 +238,7 @@
<list_item><location><page_9><loc_8><loc_10><loc_47><loc_14></location>[15] Harold W Kuhn. The hungarian method for the assignment problem. Naval research logistics quarterly , 2(1-2):83-97, 1955. 6</list_item> <list_item><location><page_9><loc_8><loc_10><loc_47><loc_14></location>[15] Harold W Kuhn. The hungarian method for the assignment problem. Naval research logistics quarterly , 2(1-2):83-97, 1955. 6</list_item>
</unordered_list> </unordered_list>
<unordered_list> <unordered_list>
<list_item><location><page_9><loc_50><loc_82><loc_89><loc_91></location>[16] Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. Babytalk: Understanding and generating simple image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence , 35(12):2891-2903, 2013. 4</list_item> <list_item><location><page_9><loc_50><loc_82><loc_89><loc_90></location>[16] Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. Babytalk: Understanding and generating simple image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence , 35(12):2891-2903, 2013. 4</list_item>
<list_item><location><page_9><loc_50><loc_78><loc_89><loc_82></location>[17] Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, and Zhoujun Li. Tablebank: A benchmark dataset for table detection and recognition, 2019. 2, 3</list_item> <list_item><location><page_9><loc_50><loc_78><loc_89><loc_82></location>[17] Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, Ming Zhou, and Zhoujun Li. Tablebank: A benchmark dataset for table detection and recognition, 2019. 2, 3</list_item>
<list_item><location><page_9><loc_50><loc_67><loc_89><loc_78></location>[18] Yiren Li, Zheng Huang, Junchi Yan, Yi Zhou, Fan Ye, and Xianhui Liu. Gfte: Graph-based financial table extraction. In Alberto Del Bimbo, Rita Cucchiara, Stan Sclaroff, Giovanni Maria Farinella, Tao Mei, Marco Bertini, Hugo Jair Escalante, and Roberto Vezzani, editors, Pattern Recognition. ICPR International Workshops and Challenges , pages 644-658, Cham, 2021. Springer International Publishing. 2, 3</list_item> <list_item><location><page_9><loc_50><loc_67><loc_89><loc_78></location>[18] Yiren Li, Zheng Huang, Junchi Yan, Yi Zhou, Fan Ye, and Xianhui Liu. Gfte: Graph-based financial table extraction. In Alberto Del Bimbo, Rita Cucchiara, Stan Sclaroff, Giovanni Maria Farinella, Tao Mei, Marco Bertini, Hugo Jair Escalante, and Roberto Vezzani, editors, Pattern Recognition. ICPR International Workshops and Challenges , pages 644-658, Cham, 2021. Springer International Publishing. 2, 3</list_item>
<list_item><location><page_9><loc_50><loc_59><loc_89><loc_67></location>[19] Nikolaos Livathinos, Cesar Berrospi, Maksym Lysak, Viktor Kuropiatnyk, Ahmed Nassar, Andre Carvalho, Michele Dolfi, Christoph Auer, Kasper Dinkla, and Peter Staar. Robust pdf document conversion using recurrent neural networks. Proceedings of the AAAI Conference on Artificial Intelligence , 35(17):15137-15145, May 2021. 1</list_item> <list_item><location><page_9><loc_50><loc_59><loc_89><loc_67></location>[19] Nikolaos Livathinos, Cesar Berrospi, Maksym Lysak, Viktor Kuropiatnyk, Ahmed Nassar, Andre Carvalho, Michele Dolfi, Christoph Auer, Kasper Dinkla, and Peter Staar. Robust pdf document conversion using recurrent neural networks. Proceedings of the AAAI Conference on Artificial Intelligence , 35(17):15137-15145, May 2021. 1</list_item>
@ -238,7 +249,7 @@
<list_item><location><page_9><loc_50><loc_16><loc_89><loc_21></location>[24] Shah Rukh Qasim, Hassan Mahmood, and Faisal Shafait. Rethinking table recognition using graph neural networks. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 142-147. IEEE, 2019. 3</list_item> <list_item><location><page_9><loc_50><loc_16><loc_89><loc_21></location>[24] Shah Rukh Qasim, Hassan Mahmood, and Faisal Shafait. Rethinking table recognition using graph neural networks. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 142-147. IEEE, 2019. 3</list_item>
<list_item><location><page_9><loc_50><loc_10><loc_89><loc_15></location>[25] Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, and Silvio Savarese. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on</list_item> <list_item><location><page_9><loc_50><loc_10><loc_89><loc_15></location>[25] Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, and Silvio Savarese. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on</list_item>
</unordered_list> </unordered_list>
<text><location><page_10><loc_11><loc_88><loc_47><loc_91></location>Computer Vision and Pattern Recognition , pages 658-666, 2019. 6</text> <text><location><page_10><loc_11><loc_88><loc_47><loc_90></location>Computer Vision and Pattern Recognition , pages 658-666, 2019. 6</text>
<unordered_list> <unordered_list>
<list_item><location><page_10><loc_8><loc_80><loc_47><loc_88></location>[26] Sebastian Schreiber, Stefan Agne, Ivo Wolf, Andreas Dengel, and Sheraz Ahmed. Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR) , volume 01, pages 1162-1167, 2017. 1</list_item> <list_item><location><page_10><loc_8><loc_80><loc_47><loc_88></location>[26] Sebastian Schreiber, Stefan Agne, Ivo Wolf, Andreas Dengel, and Sheraz Ahmed. Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR) , volume 01, pages 1162-1167, 2017. 1</list_item>
<list_item><location><page_10><loc_8><loc_71><loc_47><loc_79></location>[27] Sebastian Schreiber, Stefan Agne, Ivo Wolf, Andreas Dengel, and Sheraz Ahmed. Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In 2017 14th IAPR international conference on document analysis and recognition (ICDAR) , volume 1, pages 1162-1167. IEEE, 2017. 3</list_item> <list_item><location><page_10><loc_8><loc_71><loc_47><loc_79></location>[27] Sebastian Schreiber, Stefan Agne, Ivo Wolf, Andreas Dengel, and Sheraz Ahmed. Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In 2017 14th IAPR international conference on document analysis and recognition (ICDAR) , volume 1, pages 1162-1167. IEEE, 2017. 3</list_item>
@ -254,7 +265,7 @@
<list_item><location><page_10><loc_8><loc_10><loc_47><loc_12></location>[37] Xu Zhong, Elaheh ShafieiBavani, and Antonio Jimeno Yepes. Image-based table recognition: Data, model,</list_item> <list_item><location><page_10><loc_8><loc_10><loc_47><loc_12></location>[37] Xu Zhong, Elaheh ShafieiBavani, and Antonio Jimeno Yepes. Image-based table recognition: Data, model,</list_item>
</unordered_list> </unordered_list>
<unordered_list> <unordered_list>
<list_item><location><page_10><loc_54><loc_85><loc_89><loc_91></location>and evaluation. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 , pages 564-580, Cham, 2020. Springer International Publishing. 2, 3, 7</list_item> <list_item><location><page_10><loc_54><loc_85><loc_89><loc_90></location>and evaluation. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 , pages 564-580, Cham, 2020. Springer International Publishing. 2, 3, 7</list_item>
<list_item><location><page_10><loc_50><loc_80><loc_89><loc_85></location>[38] Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. Publaynet: Largest dataset ever for document layout analysis. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 1015-1022, 2019. 1</list_item> <list_item><location><page_10><loc_50><loc_80><loc_89><loc_85></location>[38] Xu Zhong, Jianbin Tang, and Antonio Jimeno Yepes. Publaynet: Largest dataset ever for document layout analysis. In 2019 International Conference on Document Analysis and Recognition (ICDAR) , pages 1015-1022, 2019. 1</list_item>
</unordered_list> </unordered_list>
<section_header_level_1><location><page_11><loc_22><loc_83><loc_76><loc_86></location>TableFormer: Table Structure Understanding with Transformers Supplementary Material</section_header_level_1> <section_header_level_1><location><page_11><loc_22><loc_83><loc_76><loc_86></location>TableFormer: Table Structure Understanding with Transformers Supplementary Material</section_header_level_1>
@ -262,10 +273,10 @@
<section_header_level_1><location><page_11><loc_8><loc_76><loc_25><loc_77></location>1.1. Data preparation</section_header_level_1> <section_header_level_1><location><page_11><loc_8><loc_76><loc_25><loc_77></location>1.1. Data preparation</section_header_level_1>
<text><location><page_11><loc_8><loc_51><loc_47><loc_75></location>As a first step of our data preparation process, we have calculated statistics over the datasets across the following dimensions: (1) table size measured in the number of rows and columns, (2) complexity of the table, (3) strictness of the provided HTML structure and (4) completeness (i.e. no omitted bounding boxes). A table is considered to be simple if it does not contain row spans or column spans. Additionally, a table has a strict HTML structure if every row has the same number of columns after taking into account any row or column spans. Therefore a strict HTML structure always looks rectangular. However, HTML is a lenient encoding format, i.e. tables with rows of different sizes might still be regarded as correct due to implicit display rules. These implicit rules leave room for ambiguity, which we want to avoid. As such, we prefer to have "strict" tables, i.e. tables where every row has exactly the same length.</text> <text><location><page_11><loc_8><loc_51><loc_47><loc_75></location>As a first step of our data preparation process, we have calculated statistics over the datasets across the following dimensions: (1) table size measured in the number of rows and columns, (2) complexity of the table, (3) strictness of the provided HTML structure and (4) completeness (i.e. no omitted bounding boxes). A table is considered to be simple if it does not contain row spans or column spans. Additionally, a table has a strict HTML structure if every row has the same number of columns after taking into account any row or column spans. Therefore a strict HTML structure always looks rectangular. However, HTML is a lenient encoding format, i.e. tables with rows of different sizes might still be regarded as correct due to implicit display rules. These implicit rules leave room for ambiguity, which we want to avoid. As such, we prefer to have "strict" tables, i.e. tables where every row has exactly the same length.</text>
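The strictness property can be sketched as a one-line check; treating rowspans as already expanded is a simplification of the real criterion:

```python
def is_strict(rows):
    """Check the 'strict HTML structure' property described above:
    after accounting for column spans, every row must cover the same
    number of grid columns. `rows` is a list of rows, each a list of
    per-cell colspan integers (rowspans assumed already expanded).
    """
    widths = {sum(colspans) for colspans in rows}
    return len(widths) <= 1

# A 2-column table whose second row uses a colspan of 2 is still strict:
assert is_strict([[1, 1], [2]])
assert not is_strict([[1, 1], [1]])
```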
<text><location><page_11><loc_8><loc_21><loc_47><loc_51></location>We have developed a technique that tries to derive a missing bounding box out of its neighbors. As a first step, we use the annotation data to generate the most fine-grained grid that covers the table structure. In case of strict HTML tables, all grid squares are associated with some table cell and in the presence of table spans a cell extends across multiple grid squares. When enough bounding boxes are known for a rectangular table, it is possible to compute the geometrical border lines between the grid rows and columns. Eventually this information is used to generate the missing bounding boxes. Additionally, the existence of unused grid squares indicates that the table rows have an unequal number of columns and the overall structure is non-strict. The generation of missing bounding boxes for non-strict HTML tables is ambiguous and therefore quite challenging. Thus, we have decided to simply discard those tables. In the case of PubTabNet we have computed missing bounding boxes for 48% of the simple and 69% of the complex tables. Regarding FinTabNet, 68% of the simple and 98% of the complex tables require the generation of bounding boxes.</text> <text><location><page_11><loc_8><loc_21><loc_47><loc_51></location>We have developed a technique that tries to derive a missing bounding box out of its neighbors. As a first step, we use the annotation data to generate the most fine-grained grid that covers the table structure. In case of strict HTML tables, all grid squares are associated with some table cell and in the presence of table spans a cell extends across multiple grid squares. When enough bounding boxes are known for a rectangular table, it is possible to compute the geometrical border lines between the grid rows and columns. Eventually this information is used to generate the missing bounding boxes. Additionally, the existence of unused grid squares indicates that the table rows have an unequal number of columns and the overall structure is non-strict. The generation of missing bounding boxes for non-strict HTML tables is ambiguous and therefore quite challenging. Thus, we have decided to simply discard those tables. In the case of PubTabNet we have computed missing bounding boxes for 48% of the simple and 69% of the complex tables. Regarding FinTabNet, 68% of the simple and 98% of the complex tables require the generation of bounding boxes.</text>
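A sketch of this neighbour-based recovery for a strict table without spans; estimating border lines with medians is our assumption about the exact estimator:

```python
from statistics import median

def fill_missing_box(boxes, row, col):
    """Derive a missing bounding box from its neighbours, as described
    above. `boxes[r][c]` holds an (x0, y0, x1, y1) tuple or None when
    the box is missing. Column borders are estimated from the known
    boxes in the same grid column, row borders from the same grid row;
    the missing box is their intersection. Simplification: no spans.
    """
    x0 = median(r[col][0] for r in boxes if r[col])
    x1 = median(r[col][2] for r in boxes if r[col])
    y0 = median(b[1] for b in boxes[row] if b)
    y1 = median(b[3] for b in boxes[row] if b)
    return (x0, y0, x1, y1)
```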
<text><location><page_11><loc_8><loc_18><loc_47><loc_21></location>Figure 7 illustrates the distribution of the tables across different dimensions per dataset.</text> <text><location><page_11><loc_8><loc_18><loc_47><loc_20></location>Figure 7 illustrates the distribution of the tables across different dimensions per dataset.</text>
<section_header_level_1><location><page_11><loc_8><loc_15><loc_25><loc_16></location>1.2. Synthetic datasets</section_header_level_1> <section_header_level_1><location><page_11><loc_8><loc_15><loc_25><loc_16></location>1.2. Synthetic datasets</section_header_level_1>
<text><location><page_11><loc_8><loc_10><loc_47><loc_14></location>Aiming to train and evaluate our models in a broader spectrum of table data we have synthesized four types of datasets. Each one contains tables with different appear-</text> <text><location><page_11><loc_8><loc_10><loc_47><loc_14></location>Aiming to train and evaluate our models in a broader spectrum of table data we have synthesized four types of datasets. Each one contains tables with different appear-</text>
<text><location><page_11><loc_50><loc_74><loc_89><loc_80></location>ances in regard to their size, structure, style and content. Every synthetic dataset contains 150k examples, summing up to 600k synthetic examples. All datasets are divided into Train, Test and Val splits (80%, 10%, 10%).</text> <text><location><page_11><loc_50><loc_74><loc_89><loc_79></location>ances in regard to their size, structure, style and content. Every synthetic dataset contains 150k examples, summing up to 600k synthetic examples. All datasets are divided into Train, Test and Val splits (80%, 10%, 10%).</text>
<text><location><page_11><loc_50><loc_71><loc_89><loc_73></location>The process of generating a synthetic dataset can be decomposed into the following steps:</text> <text><location><page_11><loc_50><loc_71><loc_89><loc_73></location>The process of generating a synthetic dataset can be decomposed into the following steps:</text>
<unordered_list> <unordered_list>
<list_item><location><page_11><loc_50><loc_60><loc_89><loc_70></location>1. Prepare styling and content templates: The styling templates have been manually designed and organized into groups of scope-specific appearances (e.g. financial data, marketing data, etc.). Additionally, we have prepared curated collections of content templates by extracting the most frequently used terms out of non-synthetic datasets (e.g. PubTabNet, FinTabNet, etc.).</list_item> <list_item><location><page_11><loc_50><loc_60><loc_89><loc_70></location>1. Prepare styling and content templates: The styling templates have been manually designed and organized into groups of scope-specific appearances (e.g. financial data, marketing data, etc.). Additionally, we have prepared curated collections of content templates by extracting the most frequently used terms out of non-synthetic datasets (e.g. PubTabNet, FinTabNet, etc.).</list_item>
@ -274,7 +285,7 @@
<list_item><location><page_11><loc_50><loc_31><loc_89><loc_37></location>4. Apply styling templates: Depending on the domain of the synthetic dataset, a set of styling templates is first manually selected. Then, a style is randomly selected to format the appearance of the synthesized table.</list_item> <list_item><location><page_11><loc_50><loc_31><loc_89><loc_37></location>4. Apply styling templates: Depending on the domain of the synthetic dataset, a set of styling templates is first manually selected. Then, a style is randomly selected to format the appearance of the synthesized table.</list_item>
<list_item><location><page_11><loc_50><loc_23><loc_89><loc_31></location>5. Render the complete tables: The synthetic table is finally rendered by a web browser engine to generate the bounding boxes for each table cell. A batching technique is utilized to optimize the runtime overhead of the rendering process.</list_item> <list_item><location><page_11><loc_50><loc_23><loc_89><loc_31></location>5. Render the complete tables: The synthetic table is finally rendered by a web browser engine to generate the bounding boxes for each table cell. A batching technique is utilized to optimize the runtime overhead of the rendering process.</list_item>
</unordered_list> </unordered_list>
<section_header_level_1><location><page_11><loc_50><loc_18><loc_89><loc_22></location>2. Prediction post-processing for PDF documents</section_header_level_1> <section_header_level_1><location><page_11><loc_50><loc_18><loc_89><loc_21></location>2. Prediction post-processing for PDF documents</section_header_level_1>
<text><location><page_11><loc_50><loc_10><loc_89><loc_17></location>Although TableFormer can predict the table structure and the bounding boxes for tables recognized inside PDF documents, this is not enough when a full reconstruction of the original table is required. This happens mainly due to the following reasons:</text> <text><location><page_11><loc_50><loc_10><loc_89><loc_17></location>Although TableFormer can predict the table structure and the bounding boxes for tables recognized inside PDF documents, this is not enough when a full reconstruction of the original table is required. This happens mainly due to the following reasons:</text>
<figure> <figure>
<location><page_12><loc_9><loc_81><loc_89><loc_91></location> <location><page_12><loc_9><loc_81><loc_89><loc_91></location>
@ -303,7 +314,7 @@
<list_item><location><page_12><loc_50><loc_65><loc_89><loc_67></location>6. Snap all cells with bad IOU to their corresponding median x -coordinates and cell sizes.</list_item> <list_item><location><page_12><loc_50><loc_65><loc_89><loc_67></location>6. Snap all cells with bad IOU to their corresponding median x -coordinates and cell sizes.</list_item>
<list_item><location><page_12><loc_50><loc_51><loc_89><loc_64></location>7. Generate a new set of pair-wise matches between the corrected bounding boxes and PDF cells. This time use a modified version of the IOU metric, where the area of the intersection between the predicted and PDF cells is divided by the PDF cell area. In case there are multiple matches for the same PDF cell, the prediction with the higher score is preferred. This covers the cases where the PDF cells are smaller than the area of predicted or corrected prediction cells.</list_item> <list_item><location><page_12><loc_50><loc_51><loc_89><loc_64></location>7. Generate a new set of pair-wise matches between the corrected bounding boxes and PDF cells. This time use a modified version of the IOU metric, where the area of the intersection between the predicted and PDF cells is divided by the PDF cell area. In case there are multiple matches for the same PDF cell, the prediction with the higher score is preferred. This covers the cases where the PDF cells are smaller than the area of predicted or corrected prediction cells.</list_item>
<list_item><location><page_12><loc_50><loc_42><loc_89><loc_51></location>8. On some rare occasions, we have noticed that TableFormer can confuse a single column as two. When the post-processing steps are applied, this results in two predicted columns pointing to the same PDF column. In such a case, we must de-duplicate the columns according to the highest total column intersection score.</list_item> <list_item><location><page_12><loc_50><loc_42><loc_89><loc_51></location>8. On some rare occasions, we have noticed that TableFormer can confuse a single column as two. When the post-processing steps are applied, this results in two predicted columns pointing to the same PDF column. In such a case, we must de-duplicate the columns according to the highest total column intersection score.</list_item>
<list_item><location><page_12><loc_50><loc_28><loc_89><loc_42></location>9. Pick up the remaining orphan cells. There could be cases when, after applying all the previous post-processing steps, some PDF cells still remain without any match to predicted cells. However, it is still possible to deduce the correct matching for an orphan PDF cell by mapping its bounding box on the geometry of the grid. This mapping decides if the content of the orphan cell will be appended to an already matched table cell, or a new table cell should be created to match with the orphan.</list_item> <list_item><location><page_12><loc_50><loc_28><loc_89><loc_41></location>9. Pick up the remaining orphan cells. There could be cases when, after applying all the previous post-processing steps, some PDF cells still remain without any match to predicted cells. However, it is still possible to deduce the correct matching for an orphan PDF cell by mapping its bounding box on the geometry of the grid. This mapping decides if the content of the orphan cell will be appended to an already matched table cell, or a new table cell should be created to match with the orphan.</list_item>
</unordered_list> </unordered_list>
<text><location><page_12><loc_50><loc_24><loc_89><loc_28></location>9a. Compute the top and bottom boundary of the horizontal band for each grid row (min/max y coordinates per row).</text> <text><location><page_12><loc_50><loc_24><loc_89><loc_28></location>9a. Compute the top and bottom boundary of the horizontal band for each grid row (min/max y coordinates per row).</text>
<unordered_list> <unordered_list>
@ -315,48 +326,138 @@
<text><location><page_13><loc_8><loc_89><loc_15><loc_91></location>phan cell.</text> <text><location><page_13><loc_8><loc_89><loc_15><loc_91></location>phan cell.</text>
<text><location><page_13><loc_8><loc_86><loc_47><loc_89></location>9f. Otherwise create a new structural cell and match it with the orphan cell.</text> <text><location><page_13><loc_8><loc_86><loc_47><loc_89></location>9f. Otherwise create a new structural cell and match it with the orphan cell.</text>
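Two of the helpers these post-processing steps imply, sketched with names of our own choosing: coverage_iou is the modified IOU metric of step 7, and find_row_band follows the band mapping started in step 9a.

```python
def coverage_iou(pred_box, pdf_box):
    """Modified IOU from step 7: intersection area divided by the PDF
    cell area (not the union), so a small PDF cell fully inside a
    larger predicted cell still scores 1.0. Boxes are (x0, y0, x1, y1).
    """
    ix0, iy0 = max(pred_box[0], pdf_box[0]), max(pred_box[1], pdf_box[1])
    ix1, iy1 = min(pred_box[2], pdf_box[2]), min(pred_box[3], pdf_box[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    pdf_area = (pdf_box[2] - pdf_box[0]) * (pdf_box[3] - pdf_box[1])
    return inter / pdf_area if pdf_area > 0 else 0.0

def find_row_band(orphan_box, row_bands):
    """Step 9a onwards: locate the horizontal band (top, bottom per grid
    row) that contains the vertical centre of an orphan PDF cell."""
    cy = (orphan_box[1] + orphan_box[3]) / 2
    for idx, (top, bottom) in enumerate(row_bands):
        if top <= cy <= bottom:
            return idx
    return None
```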
<text><location><page_13><loc_8><loc_83><loc_47><loc_86></location>Additional images with examples of TableFormer predictions and post-processing can be found below.</text> <text><location><page_13><loc_8><loc_83><loc_47><loc_86></location>Additional images with examples of TableFormer predictions and post-processing can be found below.</text>
<paragraph><location><page_13><loc_10><loc_35><loc_45><loc_37></location>Figure 8: Example of a table with multi-line header.</paragraph> <table>
<location><page_13><loc_14><loc_73><loc_39><loc_80></location>
</table>
<table>
<location><page_13><loc_14><loc_63><loc_39><loc_70></location>
</table>
<table>
<location><page_13><loc_14><loc_54><loc_39><loc_61></location>
</table>
<table>
<location><page_13><loc_14><loc_38><loc_41><loc_50></location>
<caption>Figure 8: Example of a table with multi-line header.</caption>
</table>
<table>
<location><page_13><loc_51><loc_83><loc_91><loc_87></location>
</table>
<table>
<location><page_13><loc_51><loc_77><loc_91><loc_80></location>
</table>
<table>
<location><page_13><loc_51><loc_71><loc_91><loc_75></location>
</table>
<figure> <figure>
<location><page_13><loc_51><loc_63><loc_70><loc_68></location> <location><page_13><loc_51><loc_63><loc_70><loc_68></location>
<caption>Figure 9: Example of a table with big empty distance between cells.</caption>
</figure> </figure>
<table>
<location><page_13><loc_51><loc_63><loc_70><loc_68></location>
<caption>Figure 9: Example of a table with big empty distance between cells.</caption>
</table>
<table>
<location><page_13><loc_55><loc_45><loc_80><loc_51></location>
</table>
<table>
<location><page_13><loc_55><loc_37><loc_80><loc_43></location>
</table>
<table>
<location><page_13><loc_55><loc_28><loc_80><loc_34></location>
</table>
<figure> <figure>
<location><page_13><loc_55><loc_16><loc_85><loc_25></location> <location><page_13><loc_55><loc_16><loc_85><loc_25></location>
</figure>
<table>
<location><page_13><loc_55><loc_16><loc_85><loc_25></location>
<caption>Figure 10: Example of a complex table with empty cells.</caption> <caption>Figure 10: Example of a complex table with empty cells.</caption>
</figure> </table>
<table>
<location><page_14><loc_8><loc_57><loc_46><loc_65></location>
</table>
<figure> <figure>
<location><page_14><loc_9><loc_81><loc_27><loc_86></location> <location><page_14><loc_8><loc_56><loc_46><loc_87></location>
<caption>Figure 14: Example with multi-line text.</caption>
</figure>
<figure>
<location><page_14><loc_9><loc_68><loc_27><loc_73></location>
<caption>Figure 11: Simple table with different style and empty cells.</caption> <caption>Figure 11: Simple table with different style and empty cells.</caption>
</figure> </figure>
<table>
<location><page_14><loc_8><loc_38><loc_51><loc_43></location>
</table>
<table>
<location><page_14><loc_8><loc_32><loc_51><loc_36></location>
</table>
<table>
<location><page_14><loc_8><loc_25><loc_51><loc_30></location>
</table>
<figure> <figure>
<location><page_14><loc_8><loc_17><loc_29><loc_23></location> <location><page_14><loc_8><loc_17><loc_29><loc_23></location>
<caption>Figure 12: Simple table predictions and post processing.</caption> <caption>Figure 12: Simple table predictions and post processing.</caption>
</figure> </figure>
<figure> <table>
<location><page_14><loc_52><loc_81><loc_87><loc_88></location> <location><page_14><loc_52><loc_73><loc_87><loc_80></location>
</figure> </table>
<figure> <table>
<location><page_14><loc_52><loc_65><loc_87><loc_71></location> <location><page_14><loc_52><loc_65><loc_87><loc_71></location>
</figure> </table>
<figure> <table>
<location><page_14><loc_54><loc_55><loc_86><loc_64></location> <location><page_14><loc_54><loc_55><loc_86><loc_64></location>
</table>
<figure>
<location><page_14><loc_52><loc_55><loc_87><loc_89></location>
<caption>Figure 13: Table predictions example on colorful table.</caption> <caption>Figure 13: Table predictions example on colorful table.</caption>
</figure> </figure>
<table>
<location><page_14><loc_52><loc_40><loc_85><loc_46></location>
</table>
<table>
<location><page_14><loc_52><loc_32><loc_85><loc_38></location>
</table>
<table>
<location><page_14><loc_52><loc_25><loc_85><loc_31></location>
</table>
<table>
<location><page_14><loc_52><loc_16><loc_87><loc_23></location>
<caption>Figure 14: Example with multi-line text.</caption>
</table>
<figure> <figure>
<location><page_15><loc_9><loc_69><loc_46><loc_83></location> <location><page_15><loc_9><loc_69><loc_46><loc_83></location>
<caption>Figure 16: Example of how post-processing helps to restore mis-aligned bounding boxes prediction artifact.</caption>
</figure> </figure>
<table>
<location><page_15><loc_9><loc_69><loc_46><loc_83></location>
</table>
<figure>
<location><page_15><loc_9><loc_53><loc_46><loc_67></location>
</figure>
<table>
<location><page_15><loc_9><loc_53><loc_46><loc_67></location>
</table>
<figure> <figure>
<location><page_15><loc_9><loc_37><loc_46><loc_51></location> <location><page_15><loc_9><loc_37><loc_46><loc_51></location>
</figure> </figure>
<figure> <figure>
<location><page_15><loc_8><loc_20><loc_52><loc_36></location> <location><page_15><loc_8><loc_20><loc_52><loc_36></location>
<caption>Figure 15: Example with triangular table.</caption>
</figure> </figure>
<table>
<location><page_15><loc_8><loc_20><loc_52><loc_36></location>
<caption>Figure 15: Example with triangular table.</caption>
</table>
<table>
<location><page_15><loc_53><loc_72><loc_86><loc_85></location>
</table>
<table>
<location><page_15><loc_53><loc_57><loc_86><loc_69></location>
</table>
<figure>
<location><page_15><loc_53><loc_41><loc_86><loc_54></location>
</figure>
<table>
<location><page_15><loc_53><loc_41><loc_86><loc_54></location>
</table>
<figure>
<location><page_15><loc_58><loc_20><loc_81><loc_38></location>
</figure>
<table>
<location><page_15><loc_58><loc_20><loc_81><loc_38></location>
<caption>Figure 16: Example of how post-processing helps to restore mis-aligned bounding boxes prediction artifact.</caption>
</table>
<figure> <figure>
<location><page_16><loc_11><loc_37><loc_86><loc_68></location> <location><page_16><loc_11><loc_37><loc_86><loc_68></location>
<caption>Figure 17: Example of long table. End-to-end example from initial PDF cells to prediction of bounding boxes, post processing and prediction of structure.</caption> <caption>Figure 17: Example of long table. End-to-end example from initial PDF cells to prediction of bounding boxes, post processing and prediction of structure.</caption>

File diff suppressed because one or more lines are too long

View File

@ -12,15 +12,17 @@
The occurrence of tables in documents is ubiquitous. They often summarise quantitative or factual data, which is cumbersome to describe in verbose text but nevertheless extremely valuable. Unfortunately, this compact representation is often not easy to parse by machines. There are many implicit conventions used to obtain a compact table representation. For example, tables often have complex column- and row-headers in order to reduce duplicated cell content. Lines of different shapes and sizes are leveraged to separate content or indicate a tree structure. Additionally, tables can also have empty/missing table-entries or multi-row textual table-entries. Fig. 1 shows a table which presents all these issues. The occurrence of tables in documents is ubiquitous. They often summarise quantitative or factual data, which is cumbersome to describe in verbose text but nevertheless extremely valuable. Unfortunately, this compact representation is often not easy to parse by machines. There are many implicit conventions used to obtain a compact table representation. For example, tables often have complex column- and row-headers in order to reduce duplicated cell content. Lines of different shapes and sizes are leveraged to separate content or indicate a tree structure. Additionally, tables can also have empty/missing table-entries or multi-row textual table-entries. Fig. 1 shows a table which presents all these issues.
<!-- image -->
Tables organize valuable content in a concise and compact representation. This content is extremely valuable for systems such as search engines, Knowledge Graphs, etc., since they enhance their predictive capabilities. Unfortunately, tables come in a large variety of shapes and sizes. Furthermore, they can have complex column/row-header configurations, multiline rows, a variety of separation lines, missing entries, etc. As such, the correct identification of the table-structure from an image is a nontrivial task. In this paper, we present a new table-structure identification model. The latter improves the latest end-to-end deep learning model (i.e. encoder-dual-decoder from PubTabNet) in two significant ways. First, we introduce a new object detection decoder for table-cells. In this way, we can obtain the content of the table-cells from programmatic PDFs directly from the PDF source and avoid the training of custom OCR decoders. This architectural change leads to more accurate table-content extraction and allows us to tackle non-English tables. Second, we replace the LSTM decoders with transformer-based decoders. This upgrade significantly improves the previous state-of-the-art tree-editing-distance score (TEDS) from 91% to 98.5% on simple tables and from 88.7% to 95% on complex tables. Tables organize valuable content in a concise and compact representation. This content is extremely valuable for systems such as search engines, Knowledge Graphs, etc., since they enhance their predictive capabilities. Unfortunately, tables come in a large variety of shapes and sizes. Furthermore, they can have complex column/row-header configurations, multiline rows, a variety of separation lines, missing entries, etc. As such, the correct identification of the table-structure from an image is a nontrivial task. In this paper, we present a new table-structure identification model. The latter improves the latest end-to-end deep learning model (i.e. encoder-dual-decoder from PubTabNet) in two significant ways. First, we introduce a new object detection decoder for table-cells. In this way, we can obtain the content of the table-cells from programmatic PDFs directly from the PDF source and avoid the training of custom OCR decoders. This architectural change leads to more accurate table-content extraction and allows us to tackle non-English tables. Second, we replace the LSTM decoders with transformer-based decoders. This upgrade significantly improves the previous state-of-the-art tree-editing-distance score (TEDS) from 91% to 98.5% on simple tables and from 88.7% to 95% on complex tables.
b. Red-annotation of bounding boxes, Blue-predictions by TableFormer - b. Red-annotation of bounding boxes, Blue-predictions by TableFormer
<!-- image --> <!-- image -->
c. - c. Structure predicted by TableFormer:
Structure predicted by TableFormer: <!-- image -->
Figure 1: Picture of a table with subtle, complex features such as (1) multi-column headers, (2) cell with multi-row text and (3) cells with no content. Image from PubTabNet evaluation set, filename: 'PMC2944238 004 02'. Figure 1: Picture of a table with subtle, complex features such as (1) multi-column headers, (2) cell with multi-row text and (3) cells with no content. Image from PubTabNet evaluation set, filename: 'PMC2944238 004 02'.
@ -225,17 +227,18 @@ Table 4: Results of structure with content retrieved using cell detection on Pub
| EDD | 91.2 | 85.4 | 88.3 | | EDD | 91.2 | 85.4 | 88.3 |
| TableFormer | 95.4 | 90.1 | 93.6 | | TableFormer | 95.4 | 90.1 | 93.6 |
a. - a.
- Red - PDF cells, Green - predicted bounding boxes, Blue - post-processed predictions matched to PDF cells
Red - PDF cells, Green - predicted bounding boxes, Blue - post-processed predictions matched to PDF cells ## Japanese language (previously unseen by TableFormer):
Japanese language (previously unseen by TableFormer): ## Example table from FinTabNet:
<!-- image --> <!-- image -->
b. b. Structure predicted by TableFormer, with superimposed matched PDF cell text:
Structure predicted by TableFormer, with superimposed matched PDF cell text: <!-- image -->
| | | 論文ファイル | 論文ファイル | 参考文献 | 参考文献 | | | | 論文ファイル | 論文ファイル | 参考文献 | 参考文献 |
|----------------------------------------------------|-------------|----------------|----------------|------------|------------| |----------------------------------------------------|-------------|----------------|----------------|------------|------------|
@ -282,8 +285,6 @@ In this paper, we presented TableFormer an end-to-end transformer based approach
- [1] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to- - [1] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-
<!-- image -->
- end object detection with transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 , pages 213-229, Cham, 2020. Springer International Publishing. 5 - end object detection with transformers. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision - ECCV 2020 , pages 213-229, Cham, 2020. Springer International Publishing. 5
- [2] Zewen Chi, Heyan Huang, Heng-Da Xu, Houjin Yu, Wanxuan Yin, and Xian-Ling Mao. Complicated table structure recognition. arXiv preprint arXiv:1908.04729 , 2019. 3 - [2] Zewen Chi, Heyan Huang, Heng-Da Xu, Houjin Yu, Wanxuan Yin, and Xian-Ling Mao. Complicated table structure recognition. arXiv preprint arXiv:1908.04729 , 2019. 3
- [3] Bertrand Couasnon and Aurelie Lemaitre. Recognition of Tables and Forms , pages 647-677. Springer London, London, 2014. 2 - [3] Bertrand Couasnon and Aurelie Lemaitre. Recognition of Tables and Forms , pages 647-677. Springer London, London, 2014. 2
@ -404,18 +405,14 @@ Aditional images with examples of TableFormer predictions and post-processing ca
Figure 8: Example of a table with multi-line header. Figure 8: Example of a table with multi-line header.
<!-- image -->
Figure 9: Example of a table with big empty distance between cells. Figure 9: Example of a table with big empty distance between cells.
<!-- image --> <!-- image -->
Figure 10: Example of a complex table with empty cells. Figure 10: Example of a complex table with empty cells.
<!-- image -->
Figure 14: Example with multi-line text.
<!-- image -->
Figure 11: Simple table with different style and empty cells. Figure 11: Simple table with different style and empty cells.
<!-- image --> <!-- image -->
@ -424,15 +421,15 @@ Figure 12: Simple table predictions and post processing.
<!-- image --> <!-- image -->
<!-- image -->
<!-- image -->
Figure 13: Table predictions example on colorful table. Figure 13: Table predictions example on colorful table.
<!-- image --> <!-- image -->
Figure 16: Example of how post-processing helps to restore mis-aligned bounding boxes prediction artifact. Figure 14: Example with multi-line text.
<!-- image -->
<!-- image -->
<!-- image --> <!-- image -->
@ -442,6 +439,10 @@ Figure 15: Example with triangular table.
<!-- image --> <!-- image -->
<!-- image -->
Figure 16: Example of how post-processing helps to restore mis-aligned bounding boxes prediction artifact.
Figure 17: Example of long table. End-to-end example from initial PDF cells to prediction of bounding boxes, post processing and prediction of structure. Figure 17: Example of long table. End-to-end example from initial PDF cells to prediction of bounding boxes, post processing and prediction of structure.
<!-- image --> <!-- image -->

File diff suppressed because one or more lines are too long

View File

@ -1,32 +1,23 @@
<document> <document>
<section_header_level_1><location><page_1><loc_18><loc_85><loc_83><loc_90></location>DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis</section_header_level_1> <section_header_level_1><location><page_1><loc_18><loc_85><loc_83><loc_89></location>DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis</section_header_level_1>
<text><location><page_1><loc_15><loc_77><loc_32><loc_83></location>Birgit Pfitzmann IBM Research Rueschlikon, Switzerland bpf@zurich.ibm.com</text> <text><location><page_1><loc_15><loc_77><loc_32><loc_83></location>Birgit Pfitzmann IBM Research Rueschlikon, Switzerland bpf@zurich.ibm.com</text>
<text><location><page_1><loc_42><loc_77><loc_58><loc_83></location>Christoph Auer IBM Research Rueschlikon, Switzerland cau@zurich.ibm.com</text> <text><location><page_1><loc_42><loc_77><loc_58><loc_83></location>Christoph Auer IBM Research Rueschlikon, Switzerland cau@zurich.ibm.com</text>
<text><location><page_1><loc_68><loc_77><loc_85><loc_83></location>Michele Dolfi IBM Research Rueschlikon, Switzerland dol@zurich.ibm.com</text> <text><location><page_1><loc_69><loc_77><loc_85><loc_83></location>Michele Dolfi IBM Research Rueschlikon, Switzerland dol@zurich.ibm.com</text>
<text><location><page_1><loc_28><loc_70><loc_45><loc_76></location>Ahmed S. Nassar IBM Research Rueschlikon, Switzerland ahn@zurich.ibm.com</text> <text><location><page_1><loc_28><loc_70><loc_45><loc_76></location>Ahmed S. Nassar IBM Research Rueschlikon, Switzerland ahn@zurich.ibm.com</text>
<text><location><page_1><loc_55><loc_70><loc_72><loc_76></location>Peter Staar IBM Research Rueschlikon, Switzerland taa@zurich.ibm.com</text> <text><location><page_1><loc_55><loc_70><loc_72><loc_76></location>Peter Staar IBM Research Rueschlikon, Switzerland taa@zurich.ibm.com</text>
<section_header_level_1><location><page_1><loc_9><loc_67><loc_18><loc_69></location>ABSTRACT</section_header_level_1> <section_header_level_1><location><page_1><loc_9><loc_67><loc_18><loc_69></location>ABSTRACT</section_header_level_1>
<text><location><page_1><loc_9><loc_32><loc_48><loc_67></location>Accurate document layout analysis is a key requirement for high-quality PDF document conversion. With the recent availability of public, large ground-truth datasets such as PubLayNet and DocBank, deep-learning models have proven to be very effective at layout detection and segmentation. While these datasets are of adequate size to train such models, they severely lack in layout variability since they are sourced from scientific article repositories such as PubMed and arXiv only. Consequently, the accuracy of the layout segmentation drops significantly when these models are applied on more challenging and diverse layouts. In this paper, we present DocLayNet , a new, publicly available, document-layout annotation dataset in COCO format. It contains 80863 manually annotated pages from diverse data sources to represent a wide variability in layouts. For each PDF page, the layout annotations provide labelled bounding-boxes with a choice of 11 distinct classes. DocLayNet also provides a subset of double- and triple-annotated pages to determine the inter-annotator agreement. In multiple experiments, we provide baseline accuracy scores (in mAP) for a set of popular object detection models. We also demonstrate that these models fall approximately 10% behind the inter-annotator agreement. Furthermore, we provide evidence that DocLayNet is of sufficient size. Lastly, we compare models trained on PubLayNet, DocBank and DocLayNet, showing that layout predictions of the DocLayNet-trained models are more robust and thus the preferred choice for general-purpose document-layout analysis.</text> <text><location><page_1><loc_9><loc_33><loc_48><loc_67></location>Accurate document layout analysis is a key requirement for high-quality PDF document conversion. With the recent availability of public, large ground-truth datasets such as PubLayNet and DocBank, deep-learning models have proven to be very effective at layout detection and segmentation. While these datasets are of adequate size to train such models, they severely lack in layout variability since they are sourced from scientific article repositories such as PubMed and arXiv only. Consequently, the accuracy of the layout segmentation drops significantly when these models are applied on more challenging and diverse layouts. In this paper, we present DocLayNet , a new, publicly available, document-layout annotation dataset in COCO format. It contains 80863 manually annotated pages from diverse data sources to represent a wide variability in layouts. For each PDF page, the layout annotations provide labelled bounding-boxes with a choice of 11 distinct classes. DocLayNet also provides a subset of double- and triple-annotated pages to determine the inter-annotator agreement. In multiple experiments, we provide baseline accuracy scores (in mAP) for a set of popular object detection models. We also demonstrate that these models fall approximately 10% behind the inter-annotator agreement. Furthermore, we provide evidence that DocLayNet is of sufficient size. Lastly, we compare models trained on PubLayNet, DocBank and DocLayNet, showing that layout predictions of the DocLayNet-trained models are more robust and thus the preferred choice for general-purpose document-layout analysis.</text>
<section_header_level_1><location><page_1><loc_9><loc_29><loc_22><loc_30></location>CCS CONCEPTS</section_header_level_1> <section_header_level_1><location><page_1><loc_9><loc_29><loc_22><loc_30></location>CCS CONCEPTS</section_header_level_1>
<text><location><page_1><loc_9><loc_25><loc_49><loc_29></location>· Information systems → Document structure ; · Applied computing → Document analysis ; · Computing methodologies → Machine learning ; Computer vision ; Object detection ;</text> <text><location><page_1><loc_9><loc_25><loc_49><loc_29></location>· Information systems → Document structure ; · Applied computing → Document analysis ; · Computing methodologies → Machine learning ; Computer vision ; Object detection ;</text>
<text><location><page_1><loc_9><loc_15><loc_48><loc_20></location>Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).</text> <text><location><page_1><loc_9><loc_15><loc_48><loc_20></location>Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).</text>
<text><location><page_1><loc_9><loc_11><loc_32><loc_15></location>KDD '22, August 14-18, 2022, Washington, DC, USA © 2022 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-9385-0/22/08. https://doi.org/10.1145/3534678.3539043</text> <text><location><page_1><loc_9><loc_14><loc_32><loc_15></location>KDD '22, August 14-18, 2022, Washington, DC, USA</text>
<text><location><page_1><loc_53><loc_55><loc_63><loc_68></location>13 USING THE VERTICAL TUBE MODELS AY11230/11234 1. The vertical tube can be used for instructional viewing or to photograph the image with a digital camera or a micro TV unit 2. Loosen the retention screw, then rotate the adjustment ring to change the length of the vertical tube. 3. Make sure that both the images in OPERATION ( cont. ) SELECTING OBJECTIVE MAGNIFICATION 1. There are two objectives. The lower magnification objective has a greater depth of field and view. 2. In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed. CHANGING THE INTERPUPILLARY DISTANCE 1. The distance between the observer's pupils is the interpupillary distance. 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece. FOCUSING 1. Remove the lens protective cover. 2. Place the specimen on the working stage. 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp. 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear. CHANGING THE BULB 1. Disconnect the power cord. 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap. 3. Replace with a new halogen bulb. 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator. FOCUSING 1. Turn the focusing knob away or toward you until a clear image is viewed. 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again. ZOOM MAGNIFICATION 1. Turn the zoom magnification knob to the desired magnification and field of view. 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary. 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment. DIOPTER RING ADJUSTMENT 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps: a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob. b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus. c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring. d.With more than one viewer, each viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting. CHANGING THE BULB 1. Disconnect the power cord from the electrical outlet. 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap. 3. Replace with a new halogen bulb. 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator. Model AY11230 Model AY11234</text> <text><location><page_1><loc_9><loc_13><loc_31><loc_14></location>© 2022 Copyright held by the owner/author(s).</text>
<text><location><page_1><loc_9><loc_12><loc_26><loc_13></location>ACM ISBN 978-1-4503-9385-0/22/08.</text>
<text><location><page_1><loc_9><loc_11><loc_27><loc_12></location>https://doi.org/10.1145/3534678.3539043</text>
<figure> <figure>
<location><page_1><loc_52><loc_33><loc_72><loc_53></location> <location><page_1><loc_53><loc_34><loc_90><loc_68></location>
<caption>Figure 1: Four examples of complex page layouts across different document categories</caption> <caption>Figure 1: Four examples of complex page layouts across different document categories</caption>
</figure> </figure>
<figure>
<location><page_1><loc_65><loc_56><loc_75><loc_68></location>
</figure>
<text><location><page_1><loc_74><loc_55><loc_75><loc_56></location>14</text>
<figure>
<location><page_1><loc_77><loc_54><loc_90><loc_69></location>
</figure>
<text><location><page_1><loc_73><loc_50><loc_90><loc_52></location>Circling Minimums There was a change to the TERPS criteria that affects circling area dimension by expanding the areas to provide improved obstacle protection. To indicate that the new criteria had been applied to a given procedure, a symbol is placed on the circling line of minimums. The new circling tables and explanatory information are located in the Legend of the TPP. The approaches using standard circling approach areas can be identified by the absence of the symbol on the circling line of minima.</text>
<text><location><page_1><loc_82><loc_48><loc_90><loc_48></location>Apply Expanded Circling Approach Maneuvering Airspace Radius Table</text>
<text><location><page_1><loc_73><loc_37><loc_90><loc_48></location>Apply Standard Circling Approach Maneuvering Radius Table AIRPORT SKETCH The airport sketch is a depiction of the airport with emphasis on runway pattern and related information, positioned in either the lower left or lower right corner of the chart to aid pilot recognition of the airport from the air and to provide some information to aid on ground navigation of the airport. The runways are drawn to scale and oriented to true north. Runway dimensions (length and width) are shown for all active runways. Runway(s) are depicted based on what type and construction of the runway. Hard Surface Other Than Hard Surface Metal Surface Closed Runway Under Construction Stopways, Taxiways, Parking Areas Displaced Threshold Closed Pavement Water Runway Taxiways and aprons are shaded grey. Other runway features that may be shown are runway numbers, runway dimensions, runway slope, arresting gear, and displaced threshold. Other information concerning lighting, final approach bearings, airport beacon, obstacles, control tower, NAVAIDs, heli-pads may also be shown. Airport Elevation and Touchdown Zone Elevation The airport elevation is shown enclosed within a box in the upper left corner of the sketch box and the touchdown zone elevation (TDZE) is shown in the upper right corner of the sketch box. The airport elevation is the highest point of an airport's usable runways measured in feet from mean sea level. The TDZE is the highest elevation in the first feet of the landing surface. Circling only approaches will not show a TDZE. FAA Chart Users' Guide - Terminal Procedures Publication (TPP) - Terms</text>
<text><location><page_1><loc_82><loc_34><loc_82><loc_35></location>114</text>
<section_header_level_1><location><page_1><loc_52><loc_24><loc_62><loc_25></location>KEYWORDS</section_header_level_1> <section_header_level_1><location><page_1><loc_52><loc_24><loc_62><loc_25></location>KEYWORDS</section_header_level_1>
<text><location><page_1><loc_52><loc_21><loc_91><loc_23></location>PDF document conversion, layout segmentation, object-detection, data set, Machine Learning</text> <text><location><page_1><loc_52><loc_21><loc_91><loc_23></location>PDF document conversion, layout segmentation, object-detection, data set, Machine Learning</text>
<section_header_level_1><location><page_1><loc_52><loc_18><loc_66><loc_19></location>ACM Reference Format:</section_header_level_1> <section_header_level_1><location><page_1><loc_52><loc_18><loc_66><loc_19></location>ACM Reference Format:</section_header_level_1>
@@ -36,9 +27,9 @@
<text><location><page_2><loc_9><loc_37><loc_48><loc_71></location>A key problem in the process of document conversion is to understand the structure of a single document page, i.e. which segments of text should be grouped together in a unit. To train models for this task, there are currently two large datasets available to the community, PubLayNet [6] and DocBank [7]. They were introduced in 2019 and 2020 respectively and significantly accelerated the implementation of layout detection and segmentation models due to their sizes of 300K and 500K ground-truth pages. These sizes were achieved by leveraging an automation approach. The benefit of automated ground-truth generation is obvious: one can generate large ground-truth datasets at virtually no cost. However, the automation introduces a constraint on the variability in the dataset, because corresponding structured source data must be available. PubLayNet and DocBank were both generated from scientific document repositories (PubMed and arXiv), which provide XML or L A T E X sources. Those scientific documents present a limited variability in their layouts, because they are typeset in uniform templates provided by the publishers. Obviously, documents such as technical manuals, annual company reports, legal text, government tenders, etc. have very different and partially unique layouts. As a consequence, the layout predictions obtained from models trained on PubLayNet or DocBank is very reasonable when applied on scientific documents. However, for more artistic or free-style layouts, we see sub-par prediction quality from these models, which we demonstrate in Section 5.</text> <text><location><page_2><loc_9><loc_37><loc_48><loc_71></location>A key problem in the process of document conversion is to understand the structure of a single document page, i.e. which segments of text should be grouped together in a unit. To train models for this task, there are currently two large datasets available to the community, PubLayNet [6] and DocBank [7]. They were introduced in 2019 and 2020 respectively and significantly accelerated the implementation of layout detection and segmentation models due to their sizes of 300K and 500K ground-truth pages. These sizes were achieved by leveraging an automation approach. The benefit of automated ground-truth generation is obvious: one can generate large ground-truth datasets at virtually no cost. However, the automation introduces a constraint on the variability in the dataset, because corresponding structured source data must be available. PubLayNet and DocBank were both generated from scientific document repositories (PubMed and arXiv), which provide XML or L A T E X sources. Those scientific documents present a limited variability in their layouts, because they are typeset in uniform templates provided by the publishers. Obviously, documents such as technical manuals, annual company reports, legal text, government tenders, etc. have very different and partially unique layouts. As a consequence, the layout predictions obtained from models trained on PubLayNet or DocBank is very reasonable when applied on scientific documents. However, for more artistic or free-style layouts, we see sub-par prediction quality from these models, which we demonstrate in Section 5.</text>
<text><location><page_2><loc_9><loc_27><loc_48><loc_36></location>In this paper, we present the DocLayNet dataset. It provides pageby-page layout annotation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique document pages, of which a fraction carry double- or triple-annotations. DocLayNet is similar in spirit to PubLayNet and DocBank and will likewise be made available to the public 1 in order to stimulate the document-layout analysis community. It distinguishes itself in the following aspects:</text> <text><location><page_2><loc_9><loc_27><loc_48><loc_36></location>In this paper, we present the DocLayNet dataset. It provides pageby-page layout annotation ground-truth using bounding-boxes for 11 distinct class labels on 80863 unique document pages, of which a fraction carry double- or triple-annotations. DocLayNet is similar in spirit to PubLayNet and DocBank and will likewise be made available to the public 1 in order to stimulate the document-layout analysis community. It distinguishes itself in the following aspects:</text>
<unordered_list> <unordered_list>
<list_item><location><page_2><loc_10><loc_22><loc_48><loc_26></location>(1) Human Annotation : In contrast to PubLayNet and DocBank, we relied on human annotation instead of automation approaches to generate the data set.</list_item> <list_item><location><page_2><loc_11><loc_22><loc_48><loc_26></location>(1) Human Annotation : In contrast to PubLayNet and DocBank, we relied on human annotation instead of automation approaches to generate the data set.</list_item>
<list_item><location><page_2><loc_10><loc_20><loc_48><loc_22></location>(2) Large Layout Variability : We include diverse and complex layouts from a large variety of public sources.</list_item> <list_item><location><page_2><loc_11><loc_20><loc_48><loc_22></location>(2) Large Layout Variability : We include diverse and complex layouts from a large variety of public sources.</list_item>
<list_item><location><page_2><loc_10><loc_15><loc_48><loc_19></location>(3) Detailed Label Set : We define 11 class labels to distinguish layout features in high detail. PubLayNet provides 5 labels; DocBank provides 13, although not a superset of ours.</list_item> <list_item><location><page_2><loc_11><loc_15><loc_48><loc_19></location>(3) Detailed Label Set : We define 11 class labels to distinguish layout features in high detail. PubLayNet provides 5 labels; DocBank provides 13, although not a superset of ours.</list_item>
<list_item><location><page_2><loc_11><loc_13><loc_48><loc_15></location>(4) Redundant Annotations : A fraction of the pages in the DocLayNet data set carry more than one human annotation.</list_item> <list_item><location><page_2><loc_11><loc_13><loc_48><loc_15></location>(4) Redundant Annotations : A fraction of the pages in the DocLayNet data set carry more than one human annotation.</list_item>
</unordered_list> </unordered_list>
<text><location><page_2><loc_56><loc_87><loc_91><loc_89></location>This enables experimentation with annotation uncertainty and quality control analysis.</text> <text><location><page_2><loc_56><loc_87><loc_91><loc_89></location>This enables experimentation with annotation uncertainty and quality control analysis.</text>
@@ -51,7 +42,7 @@
<text><location><page_2><loc_52><loc_41><loc_91><loc_56></location>While early approaches in document-layout analysis used rulebased algorithms and heuristics [8], the problem is lately addressed with deep learning methods. The most common approach is to leverage object detection models [9-15]. In the last decade, the accuracy and speed of these models has increased dramatically. Furthermore, most state-of-the-art object detection methods can be trained and applied with very little work, thanks to a standardisation effort of the ground-truth data format [16] and common deep-learning frameworks [17]. Reference data sets such as PubLayNet [6] and DocBank provide their data in the commonly accepted COCO format [16].</text> <text><location><page_2><loc_52><loc_41><loc_91><loc_56></location>While early approaches in document-layout analysis used rulebased algorithms and heuristics [8], the problem is lately addressed with deep learning methods. The most common approach is to leverage object detection models [9-15]. In the last decade, the accuracy and speed of these models has increased dramatically. Furthermore, most state-of-the-art object detection methods can be trained and applied with very little work, thanks to a standardisation effort of the ground-truth data format [16] and common deep-learning frameworks [17]. Reference data sets such as PubLayNet [6] and DocBank provide their data in the commonly accepted COCO format [16].</text>
<text><location><page_2><loc_52><loc_30><loc_91><loc_41></location>Lately, new types of ML models for document-layout analysis have emerged in the community [18-21]. These models do not approach the problem of layout analysis purely based on an image representation of the page, as computer vision methods do. Instead, they combine the text tokens and image representation of a page in order to obtain a segmentation. While the reported accuracies appear to be promising, a broadly accepted data format which links geometric and textual features has yet to establish.</text> <text><location><page_2><loc_52><loc_30><loc_91><loc_41></location>Lately, new types of ML models for document-layout analysis have emerged in the community [18-21]. These models do not approach the problem of layout analysis purely based on an image representation of the page, as computer vision methods do. Instead, they combine the text tokens and image representation of a page in order to obtain a segmentation. While the reported accuracies appear to be promising, a broadly accepted data format which links geometric and textual features has yet to establish.</text>
<section_header_level_1><location><page_2><loc_52><loc_27><loc_78><loc_29></location>3 THE DOCLAYNET DATASET</section_header_level_1> <section_header_level_1><location><page_2><loc_52><loc_27><loc_78><loc_29></location>3 THE DOCLAYNET DATASET</section_header_level_1>
<text><location><page_2><loc_52><loc_15><loc_91><loc_26></location>DocLayNet contains 80863 PDF pages. Among these, 7059 carry two instances of human annotations, and 1591 carry three. This amounts to 91104 total annotation instances. The annotations provide layout information in the shape of labeled, rectangular boundingboxes. We define 11 distinct labels for layout features, namely Caption , Footnote , Formula , List-item , Page-footer , Page-header , Picture , Section-header , Table , Text , and Title . Our reasoning for picking this particular label set is detailed in Section 4.</text> <text><location><page_2><loc_52><loc_15><loc_91><loc_25></location>DocLayNet contains 80863 PDF pages. Among these, 7059 carry two instances of human annotations, and 1591 carry three. This amounts to 91104 total annotation instances. The annotations provide layout information in the shape of labeled, rectangular boundingboxes. We define 11 distinct labels for layout features, namely Caption , Footnote , Formula , List-item , Page-footer , Page-header , Picture , Section-header , Table , Text , and Title . Our reasoning for picking this particular label set is detailed in Section 4.</text>
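The 11-class label set above maps naturally onto COCO categories. A minimal sketch in Python, assuming the alphabetical id ordering shown here; the exact ids are illustrative, and the released dataset should be consulted for the canonical mapping:

```python
# The 11 DocLayNet class labels, encoded as COCO-style categories.
# The id assignment follows the alphabetical order listed above and is
# illustrative; consult the released dataset for the canonical ids.
DOCLAYNET_LABELS = [
    "Caption", "Footnote", "Formula", "List-item", "Page-footer",
    "Page-header", "Picture", "Section-header", "Table", "Text", "Title",
]

COCO_CATEGORIES = [
    {"id": i + 1, "name": name, "supercategory": "layout"}
    for i, name in enumerate(DOCLAYNET_LABELS)
]
```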
<text><location><page_2><loc_52><loc_11><loc_91><loc_14></location>In addition to open intellectual property constraints for the source documents, we required that the documents in DocLayNet adhere to a few conditions. Firstly, we kept scanned documents</text> <text><location><page_2><loc_52><loc_11><loc_91><loc_14></location>In addition to open intellectual property constraints for the source documents, we required that the documents in DocLayNet adhere to a few conditions. Firstly, we kept scanned documents</text>
<figure> <figure>
<location><page_3><loc_14><loc_72><loc_43><loc_88></location> <location><page_3><loc_14><loc_72><loc_43><loc_88></location>
@@ -59,11 +50,11 @@
</figure> </figure>
<text><location><page_3><loc_9><loc_54><loc_48><loc_64></location>to a minimum, since they introduce difficulties in annotation (see Section 4). As a second condition, we focussed on medium to large documents ( > 10 pages) with technical content, dense in complex tables, figures, plots and captions. Such documents carry a lot of information value, but are often hard to analyse with high accuracy due to their challenging layouts. Counterexamples of documents not included in the dataset are receipts, invoices, hand-written documents or photographs showing "text in the wild".</text> <text><location><page_3><loc_9><loc_54><loc_48><loc_64></location>to a minimum, since they introduce difficulties in annotation (see Section 4). As a second condition, we focussed on medium to large documents ( > 10 pages) with technical content, dense in complex tables, figures, plots and captions. Such documents carry a lot of information value, but are often hard to analyse with high accuracy due to their challenging layouts. Counterexamples of documents not included in the dataset are receipts, invoices, hand-written documents or photographs showing "text in the wild".</text>
<text><location><page_3><loc_9><loc_36><loc_48><loc_53></location>The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports , Manuals , Scientific Articles , Laws & Regulations , Patents and Government Tenders . Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports 2 which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories ( Financial Reports and Manuals ) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.</text> <text><location><page_3><loc_9><loc_36><loc_48><loc_53></location>The pages in DocLayNet can be grouped into six distinct categories, namely Financial Reports , Manuals , Scientific Articles , Laws & Regulations , Patents and Government Tenders . Each document category was sourced from various repositories. For example, Financial Reports contain both free-style format annual reports 2 which expose company-specific, artistic layouts as well as the more formal SEC filings. The two largest categories ( Financial Reports and Manuals ) contain a large amount of free-style layouts in order to obtain maximum variability. In the other four categories, we boosted the variability by mixing documents from independent providers, such as different government websites or publishers. In Figure 2, we show the document categories contained in DocLayNet with their respective sizes.</text>
<text><location><page_3><loc_9><loc_23><loc_48><loc_36></location>We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.</text> <text><location><page_3><loc_9><loc_23><loc_48><loc_35></location>We did not control the document selection with regard to language. The vast majority of documents contained in DocLayNet (close to 95%) are published in English language. However, DocLayNet also contains a number of documents in other languages such as German (2.5%), French (1.0%) and Japanese (1.0%). While the document language has negligible impact on the performance of computer vision methods such as object detection and segmentation models, it might prove challenging for layout analysis methods which exploit textual features.</text>
<text><location><page_3><loc_9><loc_14><loc_48><loc_23></location>To ensure that future benchmarks in the document-layout analysis community can be easily compared, we have split up DocLayNet into pre-defined train-, test- and validation-sets. In this way, we can avoid spurious variations in the evaluation scores due to random splitting in train-, test- and validation-sets. We also ensured that less frequent labels are represented in train and test sets in equal proportions.</text> <text><location><page_3><loc_9><loc_14><loc_48><loc_23></location>To ensure that future benchmarks in the document-layout analysis community can be easily compared, we have split up DocLayNet into pre-defined train-, test- and validation-sets. In this way, we can avoid spurious variations in the evaluation scores due to random splitting in train-, test- and validation-sets. We also ensured that less frequent labels are represented in train and test sets in equal proportions.</text>
<text><location><page_3><loc_52><loc_80><loc_91><loc_89></location>Table 1 shows the overall frequency and distribution of the labels among the different sets. Importantly, we ensure that subsets are only split on full-document boundaries. This avoids that pages of the same document are spread over train, test and validation set, which can give an undesired evaluation advantage to models and lead to overestimation of their prediction accuracy. We will show the impact of this decision in Section 5.</text> <text><location><page_3><loc_52><loc_80><loc_91><loc_89></location>Table 1 shows the overall frequency and distribution of the labels among the different sets. Importantly, we ensure that subsets are only split on full-document boundaries. This avoids that pages of the same document are spread over train, test and validation set, which can give an undesired evaluation advantage to models and lead to overestimation of their prediction accuracy. We will show the impact of this decision in Section 5.</text>
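A split that respects full-document boundaries can be sketched as follows. The 'doc_name' key, the 80/10/10 ratios and the seed are assumptions for illustration, not the released split:

```python
import random
from collections import defaultdict

def split_on_document_boundaries(pages, ratios=(0.8, 0.1, 0.1), seed=42):
    """Assign whole documents, never individual pages, to train/test/val.

    Each page is assumed to be a dict with a 'doc_name' key; the key
    name, ratios and seed are illustrative, not the released split.
    """
    by_doc = defaultdict(list)
    for page in pages:
        by_doc[page["doc_name"]].append(page)

    docs = sorted(by_doc)
    random.Random(seed).shuffle(docs)

    cut1 = int(ratios[0] * len(docs))
    cut2 = cut1 + int(ratios[1] * len(docs))
    buckets = {"train": docs[:cut1], "test": docs[cut1:cut2], "val": docs[cut2:]}
    return {name: [p for d in names for p in by_doc[d]]
            for name, names in buckets.items()}
```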
<text><location><page_3><loc_52><loc_66><loc_91><loc_79></location>In order to accommodate the different types of models currently in use by the community, we provide DocLayNet in an augmented COCO format [16]. This entails the standard COCO ground-truth file (in JSON format) with the associated page images (in PNG format, 1025 × 1025 pixels). Furthermore, custom fields have been added to each COCO record to specify document category, original document filename and page number. In addition, we also provide the original PDF pages, as well as sidecar files containing parsed PDF text and text-cell coordinates (in JSON). All additional files are linked to the primary page images by their matching filenames.</text> <text><location><page_3><loc_52><loc_66><loc_91><loc_79></location>In order to accommodate the different types of models currently in use by the community, we provide DocLayNet in an augmented COCO format [16]. This entails the standard COCO ground-truth file (in JSON format) with the associated page images (in PNG format, 1025 × 1025 pixels). Furthermore, custom fields have been added to each COCO record to specify document category, original document filename and page number. In addition, we also provide the original PDF pages, as well as sidecar files containing parsed PDF text and text-cell coordinates (in JSON). All additional files are linked to the primary page images by their matching filenames.</text>
<text><location><page_3><loc_52><loc_26><loc_91><loc_66></location>Despite being cost-intense and far less scalable than automation, human annotation has several benefits over automated groundtruth generation. The first and most obvious reason to leverage human annotations is the freedom to annotate any type of document without requiring a programmatic source. For most PDF documents, the original source document is not available. The latter is not a hard constraint with human annotation, but it is for automated methods. A second reason to use human annotations is that the latter usually provide a more natural interpretation of the page layout. The human-interpreted layout can significantly deviate from the programmatic layout used in typesetting. For example, "invisible" tables might be used solely for aligning text paragraphs on columns. Such typesetting tricks might be interpreted by automated methods incorrectly as an actual table, while the human annotation will interpret it correctly as Text or other styles. The same applies to multi-line text elements, when authors decided to space them as "invisible" list elements without bullet symbols. A third reason to gather ground-truth through human annotation is to estimate a "natural" upper bound on the segmentation accuracy. As we will show in Section 4, certain documents featuring complex layouts can have different but equally acceptable layout interpretations. This natural upper bound for segmentation accuracy can be found by annotating the same pages multiple times by different people and evaluating the inter-annotator agreement. Such a baseline consistency evaluation is very useful to define expectations for a good target accuracy in trained deep neural network models and avoid overfitting (see Table 1). On the flip side, achieving high annotation consistency proved to be a key challenge in human annotation, as we outline in Section 4.</text> <text><location><page_3><loc_52><loc_26><loc_91><loc_65></location>Despite being cost-intense and far less scalable than automation, human annotation has several benefits over automated groundtruth generation. The first and most obvious reason to leverage human annotations is the freedom to annotate any type of document without requiring a programmatic source. For most PDF documents, the original source document is not available. The latter is not a hard constraint with human annotation, but it is for automated methods. A second reason to use human annotations is that the latter usually provide a more natural interpretation of the page layout. The human-interpreted layout can significantly deviate from the programmatic layout used in typesetting. For example, "invisible" tables might be used solely for aligning text paragraphs on columns. Such typesetting tricks might be interpreted by automated methods incorrectly as an actual table, while the human annotation will interpret it correctly as Text or other styles. The same applies to multi-line text elements, when authors decided to space them as "invisible" list elements without bullet symbols. A third reason to gather ground-truth through human annotation is to estimate a "natural" upper bound on the segmentation accuracy. As we will show in Section 4, certain documents featuring complex layouts can have different but equally acceptable layout interpretations. This natural upper bound for segmentation accuracy can be found by annotating the same pages multiple times by different people and evaluating the inter-annotator agreement. 
Such a baseline consistency evaluation is very useful to define expectations for a good target accuracy in trained deep neural network models and avoid overfitting (see Table 1). On the flip side, achieving high annotation consistency proved to be a key challenge in human annotation, as we outline in Section 4.</text>
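One way to quantify the inter-annotator agreement described here is to score one annotator's boxes against another's with the standard COCO evaluation, e.g. via pycocotools: the second annotator's boxes only need to be recast as "detections" with a fixed score of 1.0. A sketch, with placeholder file names:

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

def annotator_agreement_map(annotator_a_json, annotator_b_results):
    """mAP of annotator B against annotator A, used as an agreement proxy.

    B's boxes must first be converted to COCO result format, i.e. dicts
    of {image_id, category_id, bbox, score} with score fixed at 1.0.
    Both arguments here are placeholder file names.
    """
    coco_gt = COCO(annotator_a_json)
    coco_dt = coco_gt.loadRes(annotator_b_results)
    ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
    ev.evaluate()
    ev.accumulate()
    ev.summarize()
    return ev.stats[0]  # AP averaged over IoU 0.50:0.95
```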
<section_header_level_1><location><page_3><loc_52><loc_22><loc_77><loc_23></location>4 ANNOTATION CAMPAIGN</section_header_level_1> <section_header_level_1><location><page_3><loc_52><loc_22><loc_77><loc_23></location>4 ANNOTATION CAMPAIGN</section_header_level_1>
<text><location><page_3><loc_52><loc_11><loc_91><loc_20></location>The annotation campaign was carried out in four phases. In phase one, we identified and prepared the data sources for annotation. In phase two, we determined the class labels and how annotations should be done on the documents in order to obtain maximum consistency. The latter was guided by a detailed requirement analysis and exhaustive experiments. In phase three, we trained the annotation staff and performed exams for quality assurance. In phase four,</text> <text><location><page_3><loc_52><loc_11><loc_91><loc_20></location>The annotation campaign was carried out in four phases. In phase one, we identified and prepared the data sources for annotation. In phase two, we determined the class labels and how annotations should be done on the documents in order to obtain maximum consistency. The latter was guided by a detailed requirement analysis and exhaustive experiments. In phase three, we trained the annotation staff and performed exams for quality assurance. In phase four,</text>
<table> <table>
@@ -93,15 +84,15 @@
<text><location><page_4><loc_52><loc_53><loc_91><loc_61></location>include publication repositories such as arXiv$^{3}$, government offices, company websites as well as data directory services for financial reports and patents. Scanned documents were excluded wherever possible because they can be rotated or skewed. This would not allow us to perform annotation with rectangular bounding-boxes and therefore complicate the annotation process.</text> <text><location><page_4><loc_52><loc_53><loc_91><loc_61></location>include publication repositories such as arXiv$^{3}$, government offices, company websites as well as data directory services for financial reports and patents. Scanned documents were excluded wherever possible because they can be rotated or skewed. This would not allow us to perform annotation with rectangular bounding-boxes and therefore complicate the annotation process.</text>
<text><location><page_4><loc_52><loc_36><loc_91><loc_52></location>Preparation work included uploading and parsing the sourced PDF documents in the Corpus Conversion Service (CCS) [22], a cloud-native platform which provides a visual annotation interface and allows for dataset inspection and analysis. The annotation interface of CCS is shown in Figure 3. The desired balance of pages between the different document categories was achieved by selective subsampling of pages with certain desired properties. For example, we made sure to include the title page of each document and bias the remaining page selection to those with figures or tables. The latter was achieved by leveraging pre-trained object detection models from PubLayNet, which helped us estimate how many figures and tables a given page contains.</text> <text><location><page_4><loc_52><loc_36><loc_91><loc_52></location>Preparation work included uploading and parsing the sourced PDF documents in the Corpus Conversion Service (CCS) [22], a cloud-native platform which provides a visual annotation interface and allows for dataset inspection and analysis. The annotation interface of CCS is shown in Figure 3. The desired balance of pages between the different document categories was achieved by selective subsampling of pages with certain desired properties. For example, we made sure to include the title page of each document and bias the remaining page selection to those with figures or tables. The latter was achieved by leveraging pre-trained object detection models from PubLayNet, which helped us estimate how many figures and tables a given page contains.</text>
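The selective subsampling could look roughly like this: always keep the title page, then weight the remaining pages by the detector's predicted figure and table count. The 1 + count weighting is an illustrative choice; the paper does not specify the exact scheme:

```python
import random

def select_pages(doc_pages, predicted_counts, k, seed=0):
    """Keep the title page, then bias sampling toward figure/table-rich pages.

    `predicted_counts[page_id]` is the number of figures and tables a
    PubLayNet-pretrained detector found on that page; the weighting is
    an illustrative choice, not the paper's exact scheme.
    """
    title, rest = doc_pages[0], doc_pages[1:]
    if not rest or k <= 1:
        return [title]
    weights = [1 + predicted_counts.get(p, 0) for p in rest]
    sampled = random.Random(seed).choices(rest, weights=weights, k=k - 1)
    return [title] + list(dict.fromkeys(sampled))  # drop duplicate draws
```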
<text><location><page_4><loc_52><loc_12><loc_91><loc_36></location>Phase 2: Label selection and guideline. We reviewed the collected documents and identified the most common structural features they exhibit. This was achieved by identifying recurrent layout elements and lead us to the definition of 11 distinct class labels. These 11 class labels are Caption , Footnote , Formula , List-item , Pagefooter , Page-header , Picture , Section-header , Table , Text , and Title . Critical factors that were considered for the choice of these class labels were (1) the overall occurrence of the label, (2) the specificity of the label, (3) recognisability on a single page (i.e. no need for context from previous or next page) and (4) overall coverage of the page. Specificity ensures that the choice of label is not ambiguous, while coverage ensures that all meaningful items on a page can be annotated. We refrained from class labels that are very specific to a document category, such as Abstract in the Scientific Articles category. We also avoided class labels that are tightly linked to the semantics of the text. Labels such as Author and Affiliation , as seen in DocBank, are often only distinguishable by discriminating on</text> <text><location><page_4><loc_52><loc_12><loc_91><loc_36></location>Phase 2: Label selection and guideline. We reviewed the collected documents and identified the most common structural features they exhibit. This was achieved by identifying recurrent layout elements and lead us to the definition of 11 distinct class labels. These 11 class labels are Caption , Footnote , Formula , List-item , Pagefooter , Page-header , Picture , Section-header , Table , Text , and Title . Critical factors that were considered for the choice of these class labels were (1) the overall occurrence of the label, (2) the specificity of the label, (3) recognisability on a single page (i.e. no need for context from previous or next page) and (4) overall coverage of the page. Specificity ensures that the choice of label is not ambiguous, while coverage ensures that all meaningful items on a page can be annotated. We refrained from class labels that are very specific to a document category, such as Abstract in the Scientific Articles category. We also avoided class labels that are tightly linked to the semantics of the text. Labels such as Author and Affiliation , as seen in DocBank, are often only distinguishable by discriminating on</text>
<text><location><page_5><loc_9><loc_86><loc_48><loc_89></location>the textual content of an element, which goes beyond visual layout recognition, in particular outside the Scientific Articles category.</text> <text><location><page_5><loc_9><loc_87><loc_48><loc_89></location>the textual content of an element, which goes beyond visual layout recognition, in particular outside the Scientific Articles category.</text>
<text><location><page_5><loc_9><loc_68><loc_48><loc_86></location>At first sight, the task of visual document-layout interpretation appears intuitive enough to obtain plausible annotations in most cases. However, during early trial-runs in the core team, we observed many cases in which annotators use different annotation styles, especially for documents with challenging layouts. For example, if a figure is presented with subfigures, one annotator might draw a single figure bounding-box, while another might annotate each subfigure separately. The same applies for lists, where one might annotate all list items in one block or each list item separately. In essence, we observed that challenging layouts would be annotated in different but plausible ways. To illustrate this, we show in Figure 4 multiple examples of plausible but inconsistent annotations on the same pages.</text> <text><location><page_5><loc_9><loc_69><loc_48><loc_86></location>At first sight, the task of visual document-layout interpretation appears intuitive enough to obtain plausible annotations in most cases. However, during early trial-runs in the core team, we observed many cases in which annotators use different annotation styles, especially for documents with challenging layouts. For example, if a figure is presented with subfigures, one annotator might draw a single figure bounding-box, while another might annotate each subfigure separately. The same applies for lists, where one might annotate all list items in one block or each list item separately. In essence, we observed that challenging layouts would be annotated in different but plausible ways. To illustrate this, we show in Figure 4 multiple examples of plausible but inconsistent annotations on the same pages.</text>
<text><location><page_5><loc_9><loc_57><loc_48><loc_68></location>Obviously, this inconsistency in annotations is not desirable for datasets which are intended to be used for model training. To minimise these inconsistencies, we created a detailed annotation guideline. While perfect consistency across 40 annotation staff members is clearly not possible to achieve, we saw a huge improvement in annotation consistency after the introduction of our annotation guideline. A few selected, non-trivial highlights of the guideline are:</text> <text><location><page_5><loc_9><loc_57><loc_48><loc_68></location>Obviously, this inconsistency in annotations is not desirable for datasets which are intended to be used for model training. To minimise these inconsistencies, we created a detailed annotation guideline. While perfect consistency across 40 annotation staff members is clearly not possible to achieve, we saw a huge improvement in annotation consistency after the introduction of our annotation guideline. A few selected, non-trivial highlights of the guideline are:</text>
<unordered_list> <unordered_list>
<list_item><location><page_5><loc_11><loc_51><loc_48><loc_56></location>(1) Every list-item is an individual object instance with class label List-item . This definition is different from PubLayNet and DocBank, where all list-items are grouped together into one List object.</list_item> <list_item><location><page_5><loc_11><loc_51><loc_48><loc_56></location>(1) Every list-item is an individual object instance with class label List-item . This definition is different from PubLayNet and DocBank, where all list-items are grouped together into one List object.</list_item>
<list_item><location><page_5><loc_11><loc_45><loc_48><loc_51></location>(2) A List-item is a paragraph with hanging indentation. Singleline elements can qualify as List-item if the neighbour elements expose hanging indentation. Bullet or enumeration symbols are not a requirement.</list_item> <list_item><location><page_5><loc_11><loc_45><loc_48><loc_50></location>(2) A List-item is a paragraph with hanging indentation. Singleline elements can qualify as List-item if the neighbour elements expose hanging indentation. Bullet or enumeration symbols are not a requirement.</list_item>
<list_item><location><page_5><loc_10><loc_42><loc_48><loc_45></location>(3) For every Caption , there must be exactly one corresponding Picture or Table .</list_item> <list_item><location><page_5><loc_11><loc_42><loc_48><loc_45></location>(3) For every Caption , there must be exactly one corresponding Picture or Table .</list_item>
<list_item><location><page_5><loc_10><loc_40><loc_48><loc_42></location>(4) Connected sub-pictures are grouped together in one Picture object.</list_item> <list_item><location><page_5><loc_11><loc_40><loc_48><loc_42></location>(4) Connected sub-pictures are grouped together in one Picture object.</list_item>
<list_item><location><page_5><loc_10><loc_38><loc_43><loc_39></location>(5) Formula numbers are included in a Formula object.</list_item> <list_item><location><page_5><loc_11><loc_38><loc_43><loc_39></location>(5) Formula numbers are included in a Formula object.</list_item>
<list_item><location><page_5><loc_11><loc_34><loc_48><loc_38></location>(6) Emphasised text (e.g. in italic or bold) at the beginning of a paragraph is not considered a Section-header , unless it appears exclusively on its own line.</list_item> <list_item><location><page_5><loc_11><loc_34><loc_48><loc_38></location>(6) Emphasised text (e.g. in italic or bold) at the beginning of a paragraph is not considered a Section-header , unless it appears exclusively on its own line.</list_item>
</unordered_list> </unordered_list>
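Several of the guideline rules above are mechanically checkable. As an example, a sketch of a validator for rule (3), assuming each annotation carries an 'id', a 'label' and a 'linked_to' target id (all three field names are invented for this sketch):

```python
def check_caption_pairing(annotations):
    """Flag Captions that do not link to exactly one Picture or Table.

    Assumes each annotation is a dict with 'id', 'label' and a
    'linked_to' target id; all three field names are invented here.
    """
    by_id = {a["id"]: a for a in annotations}
    bad = []
    for ann in annotations:
        if ann["label"] != "Caption":
            continue
        target = by_id.get(ann.get("linked_to"))
        if target is None or target["label"] not in ("Picture", "Table"):
            bad.append(ann["id"])
    return bad
```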
<text><location><page_5><loc_9><loc_27><loc_48><loc_33></location>The complete annotation guideline is over 100 pages long and a detailed description is obviously out of scope for this paper. Nevertheless, it will be made publicly available alongside with DocLayNet for future reference.</text> <text><location><page_5><loc_9><loc_27><loc_48><loc_33></location>The complete annotation guideline is over 100 pages long and a detailed description is obviously out of scope for this paper. Nevertheless, it will be made publicly available alongside with DocLayNet for future reference.</text>
@@ -110,6 +101,7 @@
<location><page_5><loc_52><loc_42><loc_91><loc_89></location> <location><page_5><loc_52><loc_42><loc_91><loc_89></location>
<caption>Figure 4: Examples of plausible annotation alternatives for the same page. Criteria in our annotation guideline can resolve cases A to C, while the case D remains ambiguous.</caption> <caption>Figure 4: Examples of plausible annotation alternatives for the same page. Criteria in our annotation guideline can resolve cases A to C, while the case D remains ambiguous.</caption>
</figure> </figure>
<text><location><page_5><loc_65><loc_42><loc_78><loc_42></location>05237a14f2524e3f53c8454b074409d05078038a6a36b770fcc8ec7e540deae0</text>
<text><location><page_5><loc_52><loc_31><loc_91><loc_34></location>were carried out over a timeframe of 12 weeks, after which 8 of the 40 initially allocated annotators did not pass the bar.</text> <text><location><page_5><loc_52><loc_31><loc_91><loc_34></location>were carried out over a timeframe of 12 weeks, after which 8 of the 40 initially allocated annotators did not pass the bar.</text>
<text><location><page_5><loc_52><loc_10><loc_91><loc_31></location>Phase 4: Production annotation. The previously selected 80K pages were annotated with the defined 11 class labels by 32 annotators. This production phase took around three months to complete. All annotations were created online through CCS, which visualises the programmatic PDF text-cells as an overlay on the page. The page annotation are obtained by drawing rectangular bounding-boxes, as shown in Figure 3. With regard to the annotation practices, we implemented a few constraints and capabilities on the tooling level. First, we only allow non-overlapping, vertically oriented, rectangular boxes. For the large majority of documents, this constraint was sufficient and it speeds up the annotation considerably in comparison with arbitrary segmentation shapes. Second, annotator staff were not able to see each other's annotations. This was enforced by design to avoid any bias in the annotation, which could skew the numbers of the inter-annotator agreement (see Table 1). We wanted</text> <text><location><page_5><loc_52><loc_10><loc_91><loc_31></location>Phase 4: Production annotation. The previously selected 80K pages were annotated with the defined 11 class labels by 32 annotators. This production phase took around three months to complete. All annotations were created online through CCS, which visualises the programmatic PDF text-cells as an overlay on the page. The page annotation are obtained by drawing rectangular bounding-boxes, as shown in Figure 3. With regard to the annotation practices, we implemented a few constraints and capabilities on the tooling level. First, we only allow non-overlapping, vertically oriented, rectangular boxes. For the large majority of documents, this constraint was sufficient and it speeds up the annotation considerably in comparison with arbitrary segmentation shapes. Second, annotator staff were not able to see each other's annotations. This was enforced by design to avoid any bias in the annotation, which could skew the numbers of the inter-annotator agreement (see Table 1). We wanted</text>
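The non-overlap constraint for vertically oriented, rectangular boxes reduces to a simple interval test that annotation tooling can run on every newly drawn box. A sketch, with boxes as (x0, y0, x1, y1) tuples:

```python
def boxes_overlap(a, b):
    """True if two axis-aligned (x0, y0, x1, y1) boxes intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def violates_no_overlap(new_box, existing_boxes):
    """Tooling-level check for the non-overlap constraint described above."""
    return any(boxes_overlap(new_box, b) for b in existing_boxes)
```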
<table> <table>
@@ -233,15 +225,15 @@
<caption>Text Caption List-Item Formula Table Section-Header Picture Page-Header Page-Footer Title</caption> <caption>Text Caption List-Item Formula Table Section-Header Picture Page-Header Page-Footer Title</caption>
</figure> </figure>
<text><location><page_9><loc_9><loc_36><loc_91><loc_41></location>Figure 6: Example layout predictions on selected pages from the DocLayNet test-set. (A, D) exhibit favourable results on coloured backgrounds. (B, C) show accurate list-item and paragraph differentiation despite densely-spaced lines. (E) demonstrates good table and figure distinction. (F) shows predictions on a Chinese patent with multiple overlaps, label confusion and missing boxes.</text> <text><location><page_9><loc_9><loc_36><loc_91><loc_41></location>Figure 6: Example layout predictions on selected pages from the DocLayNet test-set. (A, D) exhibit favourable results on coloured backgrounds. (B, C) show accurate list-item and paragraph differentiation despite densely-spaced lines. (E) demonstrates good table and figure distinction. (F) shows predictions on a Chinese patent with multiple overlaps, label confusion and missing boxes.</text>
<text><location><page_9><loc_11><loc_31><loc_48><loc_34></location>Diaconu, Mai Thanh Minh, Marc, albinxavi, fatih, oleg, and wanghao yang. ultralytics/yolov5: v6.0 - yolov5n nano models, roboflow integration, tensorflow export, opencv dnn support, October 2021.</text> <text><location><page_9><loc_11><loc_31><loc_48><loc_33></location>Diaconu, Mai Thanh Minh, Marc, albinxavi, fatih, oleg, and wanghao yang. ultralytics/yolov5: v6.0 - yolov5n nano models, roboflow integration, tensorflow export, opencv dnn support, October 2021.</text>
<unordered_list> <unordered_list>
<list_item><location><page_9><loc_9><loc_28><loc_48><loc_30></location>[14] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. CoRR , abs/2005.12872, 2020.</list_item> <list_item><location><page_9><loc_9><loc_28><loc_48><loc_30></location>[14] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. CoRR , abs/2005.12872, 2020.</list_item>
<list_item><location><page_9><loc_9><loc_26><loc_48><loc_27></location>[15] Mingxing Tan, Ruoming Pang, and Quoc V. Le. Efficientdet: Scalable and efficient object detection. CoRR , abs/1911.09070, 2019.</list_item> <list_item><location><page_9><loc_9><loc_26><loc_48><loc_27></location>[15] Mingxing Tan, Ruoming Pang, and Quoc V. Le. Efficientdet: Scalable and efficient object detection. CoRR , abs/1911.09070, 2019.</list_item>
<list_item><location><page_9><loc_9><loc_23><loc_48><loc_25></location>[16] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: common objects in context, 2014.</list_item> <list_item><location><page_9><loc_9><loc_23><loc_48><loc_25></location>[16] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: common objects in context, 2014.</list_item>
<list_item><location><page_9><loc_9><loc_21><loc_48><loc_23></location>[17] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2, 2019.</list_item> <list_item><location><page_9><loc_9><loc_21><loc_48><loc_22></location>[17] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2, 2019.</list_item>
<list_item><location><page_9><loc_9><loc_16><loc_48><loc_20></location>[18] Nikolaos Livathinos, Cesar Berrospi, Maksym Lysak, Viktor Kuropiatnyk, Ahmed Nassar, Andre Carvalho, Michele Dolfi, Christoph Auer, Kasper Dinkla, and Peter W. J. Staar. Robust pdf document conversion using recurrent neural networks. In Proceedings of the 35th Conference on Artificial Intelligence , AAAI, pages 15137-15145, feb 2021.</list_item> <list_item><location><page_9><loc_9><loc_16><loc_48><loc_20></location>[18] Nikolaos Livathinos, Cesar Berrospi, Maksym Lysak, Viktor Kuropiatnyk, Ahmed Nassar, Andre Carvalho, Michele Dolfi, Christoph Auer, Kasper Dinkla, and Peter W. J. Staar. Robust pdf document conversion using recurrent neural networks. In Proceedings of the 35th Conference on Artificial Intelligence , AAAI, pages 15137-15145, feb 2021.</list_item>
<list_item><location><page_9><loc_9><loc_10><loc_48><loc_15></location>[19] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. Layoutlm: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 1192-1200, New York, USA, 2020. Association for Computing Machinery.</list_item> <list_item><location><page_9><loc_9><loc_10><loc_48><loc_15></location>[19] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. Layoutlm: Pre-training of text and layout for document image understanding. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 1192-1200, New York, USA, 2020. Association for Computing Machinery.</list_item>
<list_item><location><page_9><loc_52><loc_32><loc_91><loc_34></location>[20] Shoubin Li, Xuyan Ma, Shuaiqun Pan, Jun Hu, Lin Shi, and Qing Wang. Vtlayout: Fusion of visual and text features for document layout analysis, 2021.</list_item> <list_item><location><page_9><loc_52><loc_32><loc_91><loc_33></location>[20] Shoubin Li, Xuyan Ma, Shuaiqun Pan, Jun Hu, Lin Shi, and Qing Wang. Vtlayout: Fusion of visual and text features for document layout analysis, 2021.</list_item>
<list_item><location><page_9><loc_52><loc_29><loc_91><loc_31></location>[21] Peng Zhang, Can Li, Liang Qiao, Zhanzhan Cheng, Shiliang Pu, Yi Niu, and Fei Wu. Vsr: A unified framework for document layout analysis combining vision, semantics and relations, 2021.</list_item> <list_item><location><page_9><loc_52><loc_29><loc_91><loc_31></location>[21] Peng Zhang, Can Li, Liang Qiao, Zhanzhan Cheng, Shiliang Pu, Yi Niu, and Fei Wu. Vsr: A unified framework for document layout analysis combining vision, semantics and relations, 2021.</list_item>
<list_item><location><page_9><loc_52><loc_25><loc_91><loc_28></location>[22] Peter W J Staar, Michele Dolfi, Christoph Auer, and Costas Bekas. Corpus conversion service: A machine learning platform to ingest documents at scale. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 774-782. ACM, 2018.</list_item> <list_item><location><page_9><loc_52><loc_25><loc_91><loc_28></location>[22] Peter W J Staar, Michele Dolfi, Christoph Auer, and Costas Bekas. Corpus conversion service: A machine learning platform to ingest documents at scale. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , KDD, pages 774-782. ACM, 2018.</list_item>
<list_item><location><page_9><loc_52><loc_23><loc_91><loc_24></location>[23] Connor Shorten and Taghi M. Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data , 6(1):60, 2019.</list_item> <list_item><location><page_9><loc_52><loc_23><loc_91><loc_24></location>[23] Connor Shorten and Taghi M. Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data , 6(1):60, 2019.</list_item>

File diff suppressed because one or more lines are too long

View File

@@ -20,28 +20,18 @@ Accurate document layout analysis is a key requirement for highquality PDF docum
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
KDD '22, August 14-18, 2022, Washington, DC, USA © 2022 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-9385-0/22/08. https://doi.org/10.1145/3534678.3539043 KDD '22, August 14-18, 2022, Washington, DC, USA
13 USING THE VERTICAL TUBE MODELS AY11230/11234 1. The vertical tube can be used for instructional viewing or to photograph the image with a digital camera or a micro TV unit 2. Loosen the retention screw, then rotate the adjustment ring to change the length of the vertical tube. 3. Make sure that both the images in OPERATION ( cont. ) SELECTING OBJECTIVE MAGNIFICATION 1. There are two objectives. The lower magnification objective has a greater depth of field and view. 2. In order to observe the specimen easily use the lower magnification objective first. Then, by rotating the case, the magnification can be changed. CHANGING THE INTERPUPILLARY DISTANCE 1. The distance between the observer's pupils is the interpupillary distance. 2. To adjust the interpupillary distance rotate the prism caps until both eyes coincide with the image in the eyepiece. FOCUSING 1. Remove the lens protective cover. 2. Place the specimen on the working stage. 3. Focus the specimen with the left eye first while turning the focus knob until the image appears clear and sharp. 4. Rotate the right eyepiece ring until the images in each eyepiece coincide and are sharp and clear. CHANGING THE BULB 1. Disconnect the power cord. 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap. 3. Replace with a new halogen bulb. 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator. FOCUSING 1. Turn the focusing knob away or toward you until a clear image is viewed. 2. If the image is unclear, adjust the height of the elevator up or down, then turn the focusing knob again. ZOOM MAGNIFICATION 1. Turn the zoom magnification knob to the desired magnification and field of view. 2. In most situations, it is recommended that you focus at the lowest magnification, then move to a higher magnification and re-focus as necessary. 3. If the image is not clear to both eyes at the same time, the diopter ring may need adjustment. DIOPTER RING ADJUSTMENT 1. To adjust the eyepiece for viewing with or without eyeglasses and for differences in acuity between the right and left eyes, follow the following steps: a. Observe an image through the left eyepiece and bring a specific point into focus using the focus knob. b. By turning the diopter ring adjustment for the left eyepiece, bring the same point into sharp focus. c.Then bring the same point into focus through the right eyepiece by turning the right diopter ring. d.With more than one viewer, each viewer should note their own diopter ring position for the left and right eyepieces, then before viewing set the diopter ring adjustments to that setting. CHANGING THE BULB 1. Disconnect the power cord from the electrical outlet. 2. When the bulb is cool, remove the oblique illuminator cap and remove the halogen bulb with cap. 3. Replace with a new halogen bulb. 4. Open the window in the base plate and replace the halogen lamp or fluorescent lamp of transmitted illuminator. Model AY11230 Model AY11234 © 2022 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-9385-0/22/08.
https://doi.org/10.1145/3534678.3539043
Figure 1: Four examples of complex page layouts across different document categories
<!-- image -->
<!-- image -->
14
<!-- image -->
Circling Minimums There was a change to the TERPS criteria in that affects circling area dimension by expanding the areas to provide improved obstacle protection. To indicate that the new criteria had been applied to a given procedure, a is placed on the circling line of minimums. The new circling tables and explanatory information is located in the Legend of the TPP. The approaches using standard circling approach areas can be identified by the absence of the on the circling line of minima.
Apply Expanded Circling Approach Maneuvering Airspace Radius Table
Apply Standard Circling Approach Maneuvering Radius Table AIRPORT SKETCH The airport sketch is a depiction of the airport with emphasis on runway pattern and related information, positioned in either the lower left or lower right corner of the chart to aid pilot recognition of the airport from the air and to provide some information to aid on ground navigation of the airport. The runways are drawn to scale and oriented to true north. Runway dimensions (length and width) are shown for all active runways. Runway(s) are depicted based on what type and construction of the runway. Hard Surface Other Than Hard Surface Metal Surface Closed Runway Under Construction Stopways, Taxiways, Parking Areas Displaced Threshold Closed Pavement Water Runway Taxiways and aprons are shaded grey. Other runway features that may be shown are runway numbers, runway dimensions, runway slope, arresting gear, and displaced threshold. Other information concerning lighting, final approach bearings, airport beacon, obstacles, control tower, NAVAIDs, heli-pads may also be shown. Airport Elevation and Touchdown Zone Elevation The airport elevation is shown enclosed within a box in the upper left corner of the sketch box and the touchdown zone elevation (TDZE) is shown in the upper right corner of the sketch box. The airport elevation is the highest point of an airport's usable runways measured in feet from mean sea level. The TDZE is the highest elevation in the first feet of the landing surface. Circling only approaches will not show a TDZE. FAA Chart Users' Guide - Terminal Procedures Publication (TPP) - Terms
114
## KEYWORDS
PDF document conversion, layout segmentation, object-detection, data set, Machine Learning
@ -158,6 +148,8 @@ Figure 4: Examples of plausible annotation alternatives for the same page. Crite
<!-- image -->
05237a14f2524e3f53c8454b074409d05078038a6a36b770fcc8ec7e540deae0
were carried out over a timeframe of 12 weeks, after which 8 of the 40 initially allocated annotators did not pass the bar.
Phase 4: Production annotation. The previously selected 80K pages were annotated with the defined 11 class labels by 32 annotators. This production phase took around three months to complete. All annotations were created online through CCS, which visualises the programmatic PDF text-cells as an overlay on the page. The page annotation are obtained by drawing rectangular bounding-boxes, as shown in Figure 3. With regard to the annotation practices, we implemented a few constraints and capabilities on the tooling level. First, we only allow non-overlapping, vertically oriented, rectangular boxes. For the large majority of documents, this constraint was sufficient and it speeds up the annotation considerably in comparison with arbitrary segmentation shapes. Second, annotator staff were not able to see each other's annotations. This was enforced by design to avoid any bias in the annotation, which could skew the numbers of the inter-annotator agreement (see Table 1). We wanted

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -1,7 +1,9 @@
<document>
<section_header_level_1><location><page_1><loc_22><loc_81><loc_79><loc_86></location>Optimized Table Tokenization for Table Structure Recognition</section_header_level_1> <section_header_level_1><location><page_1><loc_22><loc_82><loc_79><loc_85></location>Optimized Table Tokenization for Table Structure Recognition</section_header_level_1>
<text><location><page_1><loc_23><loc_74><loc_78><loc_79></location>Maksym Lysak [0000 - 0002 - 3723 - $^{6960]}$, Ahmed Nassar[0000 - 0002 - 9468 - $^{0822]}$, Nikolaos Livathinos [0000 - 0001 - 8513 - $^{3491]}$, Christoph Auer[0000 - 0001 - 5761 - $^{0422]}$, and Peter Staar [0000 - 0002 - 8088 - 0823]</text> <text><location><page_1><loc_23><loc_75><loc_78><loc_79></location>Maksym Lysak [0000 0002 3723 $^{6960]}$, Ahmed Nassar[0000 0002 9468 $^{0822]}$, Nikolaos Livathinos [0000 0001 8513 $^{3491]}$, Christoph Auer[0000 0001 5761 $^{0422]}$, [0000 0002 8088 0823]</text>
<text><location><page_1><loc_36><loc_70><loc_64><loc_73></location>IBM Research {mly,ahn,nli,cau,taa}@zurich.ibm.com</text> <text><location><page_1><loc_38><loc_74><loc_49><loc_75></location>and Peter Staar</text>
<text><location><page_1><loc_46><loc_72><loc_55><loc_73></location>IBM Research</text>
<text><location><page_1><loc_36><loc_70><loc_64><loc_71></location>{mly,ahn,nli,cau,taa}@zurich.ibm.com</text>
<text><location><page_1><loc_27><loc_41><loc_74><loc_66></location>Abstract. Extracting tables from documents is a crucial task in any document conversion pipeline. Recently, transformer-based models have demonstrated that table-structure can be recognized with impressive accuracy using Image-to-Markup-Sequence (Im2Seq) approaches. Taking only the image of a table, such models predict a sequence of tokens (e.g. in HTML, LaTeX) which represent the structure of the table. Since the token representation of the table structure has a significant impact on the accuracy and run-time performance of any Im2Seq model, we investigate in this paper how table-structure representation can be optimised. We propose a new, optimised table-structure language (OTSL) with a minimized vocabulary and specific rules. The benefits of OTSL are that it reduces the number of tokens to 5 (HTML needs 28+) and shortens the sequence length to half of HTML on average. Consequently, model accuracy improves significantly, inference time is halved compared to HTML-based models, and the predicted table structures are always syntactically correct. This in turn eliminates most post-processing needs. Popular table structure data-sets will be published in OTSL format to the community.</text>
<text><location><page_1><loc_27><loc_37><loc_74><loc_40></location>Keywords: Table Structure Recognition · Data Representation · Transformers · Optimization.</text>
<section_header_level_1><location><page_1><loc_22><loc_33><loc_37><loc_34></location>1 Introduction</section_header_level_1>
@ -15,7 +17,7 @@
<text><location><page_2><loc_22><loc_16><loc_79><loc_34></location>Recently emerging SOTA methods for table structure recognition employ transformer-based models, in which an image of the table is provided to the network in order to predict the structure of the table as a sequence of tokens. These image-to-sequence (Im2Seq) models are extremely powerful, since they allow for a purely data-driven solution. The tokens of the sequence typically belong to a markup language such as HTML, Latex or Markdown, which allow to describe table structure as rows, columns and spanning cells in various configurations. In Figure 1, we illustrate how HTML is used to represent the table-structure of a particular example table. Public table-structure data sets such as PubTabNet [22], and FinTabNet [21], which were created in a semi-automated way from paired PDF and HTML sources (e.g. PubMed Central), popularized primarily the use of HTML as ground-truth representation format for TSR.</text>
<text><location><page_3><loc_22><loc_73><loc_79><loc_85></location>While the majority of research in TSR is currently focused on the development and application of novel neural model architectures, the table structure representation language (e.g. HTML in PubTabNet and FinTabNet) is usually adopted as is for the sequence tokenization in Im2Seq models. In this paper, we aim for the opposite and investigate the impact of the table structure representation language with an otherwise unmodified Im2Seq transformer-based architecture. Since the current state-of-the-art Im2Seq model is TableFormer [9], we select this model to perform our experiments.</text>
<text><location><page_3><loc_22><loc_58><loc_79><loc_73></location>The main contribution of this paper is the introduction of a new optimised table structure language (OTSL), specifically designed to describe table-structure in an compact and structured way for Im2Seq models. OTSL has a number of key features, which make it very attractive to use in Im2Seq models. Specifically, compared to other languages such as HTML, OTSL has a minimized vocabulary which yields short sequence length, strong inherent structure (e.g. strict rectangular layout) and a strict syntax with rules that only look backwards. The latter allows for syntax validation during inference and ensures a syntactically correct table-structure. These OTSL features are illustrated in Figure 1, in comparison to HTML.</text>
<text><location><page_3><loc_22><loc_44><loc_79><loc_58></location>The paper is structured as follows. In section 2, we give an overview of the latest developments in table-structure reconstruction. In section 3 we review the current HTML table encoding (popularised by PubTabNet and FinTabNet) and discuss its flaws. Subsequently, we introduce OTSL in section 4, which includes the language definition, syntax rules and error-correction procedures. In section 5, we apply OTSL on the TableFormer architecture, compare it to TableFormer models trained on HTML and ultimately demonstrate the advantages of using OTSL. Finally, in section 6 we conclude our work and outline next potential steps.</text> <text><location><page_3><loc_22><loc_45><loc_79><loc_58></location>The paper is structured as follows. In section 2, we give an overview of the latest developments in table-structure reconstruction. In section 3 we review the current HTML table encoding (popularised by PubTabNet and FinTabNet) and discuss its flaws. Subsequently, we introduce OTSL in section 4, which includes the language definition, syntax rules and error-correction procedures. In section 5, we apply OTSL on the TableFormer architecture, compare it to TableFormer models trained on HTML and ultimately demonstrate the advantages of using OTSL. Finally, in section 6 we conclude our work and outline next potential steps.</text>
<section_header_level_1><location><page_3><loc_22><loc_40><loc_39><loc_42></location>2 Related Work</section_header_level_1>
<text><location><page_3><loc_22><loc_16><loc_79><loc_38></location>Approaches to formalize the logical structure and layout of tables in electronic documents date back more than two decades [16]. In the recent past, a wide variety of computer vision methods have been explored to tackle the problem of table structure recognition, i.e. the correct identification of columns, rows and spanning cells in a given table. Broadly speaking, the current deeplearning based approaches fall into three categories: object detection (OD) methods, Graph-Neural-Network (GNN) methods and Image-to-Markup-Sequence (Im2Seq) methods. Object-detection based methods [11,12,13,14,21] rely on tablestructure annotation using (overlapping) bounding boxes for training, and produce bounding-box predictions to define table cells, rows, and columns on a table image. Graph Neural Network (GNN) based methods [3,6,17,18], as the name suggests, represent tables as graph structures. The graph nodes represent the content of each table cell, an embedding vector from the table image, or geometric coordinates of the table cell. The edges of the graph define the relationship between the nodes, e.g. if they belong to the same column, row, or table cell.</text>
<text><location><page_4><loc_22><loc_67><loc_79><loc_85></location>Other work [20] aims at predicting a grid for each table and deciding which cells must be merged using an attention network. Im2Seq methods cast the problem as a sequence generation task [4,5,9,22], and therefore need an internal tablestructure representation language, which is often implemented with standard markup languages (e.g. HTML, LaTeX, Markdown). In theory, Im2Seq methods have a natural advantage over the OD and GNN methods by virtue of directly predicting the table-structure. As such, no post-processing or rules are needed in order to obtain the table-structure, which is necessary with OD and GNN approaches. In practice, this is not entirely true, because a predicted sequence of table-structure markup does not necessarily have to be syntactically correct. Hence, depending on the quality of the predicted sequence, some post-processing needs to be performed to ensure a syntactically valid (let alone correct) sequence.</text>
@ -37,20 +39,20 @@
<text><location><page_6><loc_22><loc_44><loc_79><loc_56></location>To mitigate the issues with HTML in Im2Seq-based TSR models laid out before, we propose here our Optimised Table Structure Language (OTSL). OTSL is designed to express table structure with a minimized vocabulary and a simple set of rules, which are both significantly reduced compared to HTML. At the same time, OTSL enables easy error detection and correction during sequence generation. We further demonstrate how the compact structure representation and minimized sequence length improves prediction accuracy and inference time in the TableFormer architecture.</text>
<section_header_level_1><location><page_6><loc_22><loc_40><loc_43><loc_41></location>4.1 Language Definition</section_header_level_1>
<text><location><page_6><loc_22><loc_34><loc_79><loc_38></location>In Figure 3, we illustrate how the OTSL is defined. In essence, the OTSL defines only 5 tokens that directly describe a tabular structure based on an atomic 2D grid.</text>
<text><location><page_6><loc_24><loc_32><loc_67><loc_34></location>The OTSL vocabulary is comprised of the following tokens:</text> <text><location><page_6><loc_24><loc_33><loc_67><loc_34></location>The OTSL vocabulary is comprised of the following tokens:</text>
<unordered_list>
<list_item><location><page_6><loc_23><loc_30><loc_75><loc_31></location>-"C" cell a new table cell that either has or does not have cell content</list_item>
<list_item><location><page_6><loc_23><loc_27><loc_79><loc_29></location>-"L" cell left-looking cell , merging with the left neighbor cell to create a span</list_item>
<list_item><location><page_6><loc_23><loc_24><loc_79><loc_26></location>-"U" cell up-looking cell , merging with the upper neighbor cell to create a span</list_item>
<list_item><location><page_6><loc_23><loc_22><loc_74><loc_23></location>-"X" cell cross cell , to merge with both left and upper neighbor cells</list_item>
<list_item><location><page_6><loc_23><loc_20><loc_54><loc_22></location>-"NL" new-line , switch to the next row.</list_item> <list_item><location><page_6><loc_23><loc_20><loc_54><loc_21></location>-"NL" new-line , switch to the next row.</list_item>
</unordered_list>
<text><location><page_6><loc_22><loc_16><loc_79><loc_19></location>A notable attribute of OTSL is that it has the capability of achieving lossless conversion to HTML.</text>
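To make the lossless-conversion claim concrete, here is a minimal Python sketch that expands a rectangular OTSL cell-token grid into an HTML table skeleton using only the five tokens listed above. It is illustrative only: `otsl_to_html` and its grid input format are assumptions of this sketch, not TableFormer's or docling's actual code.

```python
def otsl_to_html(grid):
    """Expand a rectangular OTSL cell-token grid into an HTML table skeleton.

    `grid` is a list of rows, each a list of "C"/"L"/"U"/"X" tokens with the
    "NL" row terminators already stripped. Cell text is omitted; only the
    row/column/span structure is reconstructed.
    """
    html = ["<table>"]
    for r, row in enumerate(grid):
        html.append("<tr>")
        for c, tok in enumerate(row):
            if tok != "C":
                continue  # "L", "U" and "X" positions are covered by a span
            colspan = 1  # count left-looking cells merged into this one
            while c + colspan < len(row) and row[c + colspan] == "L":
                colspan += 1
            rowspan = 1  # count up-looking cells merged into this one
            while r + rowspan < len(grid) and grid[r + rowspan][c] == "U":
                rowspan += 1
            attrs = (f' colspan="{colspan}"' if colspan > 1 else "") + (
                f' rowspan="{rowspan}"' if rowspan > 1 else ""
            )
            html.append(f"<td{attrs}></td>")
        html.append("</tr>")
    html.append("</table>")
    return "".join(html)

# First row is one cell spanning all three columns:
print(otsl_to_html([["C", "L", "L"], ["C", "C", "C"]]))
# <table><tr><td colspan="3"></td></tr><tr><td></td><td></td><td></td></tr></table>
```

Because every span in OTSL is anchored at a "C" cell and extended only by "L"/"U"/"X" markers, the mapping to `colspan`/`rowspan` is unambiguous, which is what makes the conversion lossless.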
<figure>
<location><page_7><loc_27><loc_65><loc_73><loc_79></location>
<caption>Fig. 3. OTSL description of table structure: A - table example; B - graphical representation of table structure; C - mapping structure on a grid; D - OTSL structure encoding; E - explanation on cell encoding</caption>
</figure>
<section_header_level_1><location><page_7><loc_22><loc_60><loc_40><loc_62></location>4.2 Language Syntax</section_header_level_1> <section_header_level_1><location><page_7><loc_22><loc_60><loc_40><loc_61></location>4.2 Language Syntax</section_header_level_1>
<text><location><page_7><loc_22><loc_58><loc_59><loc_59></location>The OTSL representation follows these syntax rules:</text>
<unordered_list>
<list_item><location><page_7><loc_23><loc_54><loc_79><loc_56></location>1. Left-looking cell rule : The left neighbour of an "L" cell must be either another "L" cell or a "C" cell.</list_item>
@ -58,7 +60,7 @@
</unordered_list>
<section_header_level_1><location><page_7><loc_23><loc_49><loc_37><loc_50></location>3. Cross cell rule :</section_header_level_1>
<unordered_list>
<list_item><location><page_7><loc_24><loc_44><loc_79><loc_49></location>The left neighbour of an "X" cell must be either another "X" cell or a "U" cell, and the upper neighbour of an "X" cell must be either another "X" cell or an "L" cell.</list_item> <list_item><location><page_7><loc_25><loc_44><loc_79><loc_49></location>The left neighbour of an "X" cell must be either another "X" cell or a "U" cell, and the upper neighbour of an "X" cell must be either another "X" cell or an "L" cell.</list_item>
<list_item><location><page_7><loc_23><loc_43><loc_78><loc_44></location>4. First row rule : Only "L" cells and "C" cells are allowed in the first row.</list_item>
<list_item><location><page_7><loc_23><loc_40><loc_79><loc_43></location>5. First column rule : Only "U" cells and "C" cells are allowed in the first column.</list_item>
<list_item><location><page_7><loc_23><loc_37><loc_79><loc_40></location>6. Rectangular rule : The table representation is always rectangular - all rows must have an equal number of tokens, terminated with "NL" token.</list_item>
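Because every rule above only looks left or up, each candidate token of a prefix can be checked in constant time. The following hypothetical Python sketch encodes the quoted rules; rule 2 (the up-looking cell rule) falls outside this hunk, so by symmetry with rule 1 it is assumed here that the upper neighbour of a "U" cell must be another "U" cell or a "C" cell.

```python
def token_is_valid(grid, row, tok):
    """Check whether appending `tok` keeps a (possibly unfinished) OTSL
    sequence valid. `grid` holds the completed rows (cell tokens only,
    "NL" removed); `row` is the row currently being generated.
    """
    r, c = len(grid), len(row)
    left = row[c - 1] if c > 0 else None
    up = grid[r - 1][c] if r > 0 and c < len(grid[r - 1]) else None

    if tok == "C":
        return True
    if tok == "NL":  # rule 6: all rows must have an equal number of tokens
        return len(row) > 0 and (not grid or len(row) == len(grid[0]))
    if tok == "L":   # rule 1; `left is None` also enforces rule 5
        return left in ("L", "C")
    if tok == "U":   # assumed rule 2; `up is None` also enforces rule 4
        return up in ("U", "C")
    if tok == "X":   # rule 3: left must be "X"/"U" and upper must be "X"/"L"
        return left in ("X", "U") and up in ("X", "L")
    return False     # not part of the five-token vocabulary
```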
@ -68,7 +70,7 @@
<text><location><page_8><loc_22><loc_82><loc_79><loc_85></location>reduces significantly the column drift seen in the HTML based models (see Figure 5).</text>
<section_header_level_1><location><page_8><loc_22><loc_78><loc_52><loc_80></location>4.3 Error-detection and -mitigation</section_header_level_1>
<text><location><page_8><loc_22><loc_62><loc_79><loc_77></location>The design of OTSL allows to validate a table structure easily on an unfinished sequence. The detection of an invalid sequence token is a clear indication of a prediction mistake, however a valid sequence by itself does not guarantee prediction correctness. Different heuristics can be used to correct token errors in an invalid sequence and thus increase the chances for accurate predictions. Such heuristics can be applied either after the prediction of each token, or at the end on the entire predicted sequence. For example a simple heuristic which can correct the predicted OTSL sequence on-the-fly is to verify if the token with the highest prediction confidence invalidates the predicted sequence, and replace it by the token with the next highest confidence until OTSL rules are satisfied.</text>
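A sketch of that on-the-fly repair heuristic, reusing the hypothetical `token_is_valid` checker from the previous snippet (the paper describes the heuristic but does not publish this code):

```python
def decode_next_token(scores, grid, row):
    """One greedy decoding step with on-the-fly repair: if the most
    confident token would invalidate the sequence, fall back to the
    next most confident token until the OTSL rules are satisfied.

    `scores` maps each OTSL token to its predicted confidence.
    """
    for tok in sorted(scores, key=scores.get, reverse=True):
        if token_is_valid(grid, row, tok):
            return tok
    # Unreachable for the five-token vocabulary, since "C" is always valid.
    raise ValueError("no valid continuation found")
```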
<section_header_level_1><location><page_8><loc_22><loc_58><loc_37><loc_60></location>5 Experiments</section_header_level_1> <section_header_level_1><location><page_8><loc_22><loc_58><loc_37><loc_59></location>5 Experiments</section_header_level_1>
<text><location><page_8><loc_22><loc_43><loc_79><loc_56></location>To evaluate the impact of OTSL on prediction accuracy and inference times, we conducted a series of experiments based on the TableFormer model (Figure 4) with two objectives: Firstly we evaluate the prediction quality and performance of OTSL vs. HTML after performing Hyper Parameter Optimization (HPO) on the canonical PubTabNet data set. Secondly we pick the best hyper-parameters found in the first step and evaluate how OTSL impacts the performance of TableFormer after training on other publicly available data sets (FinTabNet, PubTables-1M [14]). The ground truth (GT) from all data sets has been converted into OTSL format for this purpose, and will be made publicly available.</text>
<figure>
<location><page_8><loc_23><loc_25><loc_77><loc_36></location>
@ -76,7 +78,7 @@
</figure>
<text><location><page_8><loc_22><loc_16><loc_79><loc_22></location>We rely on standard metrics such as Tree Edit Distance score (TEDs) for table structure prediction, and Mean Average Precision (mAP) with 0.75 Intersection Over Union (IOU) threshold for the bounding-box predictions of table cells. The predicted OTSL structures were converted back to HTML format in</text>
<text><location><page_9><loc_22><loc_81><loc_79><loc_85></location>order to compute the TED score. Inference timing results for all experiments were obtained from the same machine on a single core with AMD EPYC 7763 CPU @2.45 GHz.</text>
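For the bounding-box metric, a predicted cell counts as correct only when its IoU with the ground-truth box reaches the 0.75 threshold. A small, self-contained helper illustrating the criterion (an assumption of this sketch, not the evaluation code used in the paper):

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# This predicted cell box counts as a true positive at IOU >= 0.75:
assert iou((0, 0, 10, 10), (1, 1, 10, 10)) >= 0.75
```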
<section_header_level_1><location><page_9><loc_22><loc_77><loc_52><loc_79></location>5.1 Hyper Parameter Optimization</section_header_level_1> <section_header_level_1><location><page_9><loc_22><loc_78><loc_52><loc_79></location>5.1 Hyper Parameter Optimization</section_header_level_1>
<text><location><page_9><loc_22><loc_68><loc_79><loc_77></location>We have chosen the PubTabNet data set to perform HPO, since it includes a highly diverse set of tables. Also we report TED scores separately for simple and complex tables (tables with cell spans). Results are presented in Table. 1. It is evident that with OTSL, our model achieves the same TED score and slightly better mAP scores in comparison to HTML. However OTSL yields a 2x speed up in the inference runtime over HTML.</text>
<table>
<location><page_9><loc_23><loc_41><loc_78><loc_57></location>
@ -117,13 +119,13 @@
<caption>Fig. 6. Visualization of predicted structure and detected bounding boxes on a complex table with many rows. The OTSL model (B) captured repeating pattern of horizontally merged cells from the GT (A), unlike the HTML model (C). The HTML model also didn't complete the HTML sequence correctly and displayed a lot more of drift and overlap of bounding boxes. "PMC5406406_003_01.png" PubTabNet.</caption>
</figure>
<section_header_level_1><location><page_12><loc_22><loc_84><loc_36><loc_85></location>6 Conclusion</section_header_level_1>
<text><location><page_12><loc_22><loc_74><loc_79><loc_82></location>We demonstrated that representing tables in HTML for the task of table structure recognition with Im2Seq models is ill-suited and has serious limitations. Furthermore, we presented in this paper an Optimized Table Structure Language (OTSL) which, when compared to commonly used general purpose languages, has several key benefits.</text> <text><location><page_12><loc_22><loc_74><loc_79><loc_81></location>We demonstrated that representing tables in HTML for the task of table structure recognition with Im2Seq models is ill-suited and has serious limitations. Furthermore, we presented in this paper an Optimized Table Structure Language (OTSL) which, when compared to commonly used general purpose languages, has several key benefits.</text>
<text><location><page_12><loc_22><loc_59><loc_79><loc_74></location>First and foremost, given the same network configuration, inference time for a table-structure prediction is about 2 times faster compared to the conventional HTML approach. This is primarily owed to the shorter sequence length of the OTSL representation. Additional performance benefits can be obtained with HPO (hyper parameter optimization). As we demonstrate in our experiments, models trained on OTSL can be significantly smaller, e.g. by reducing the number of encoder and decoder layers, while preserving comparatively good prediction quality. This can further improve inference performance, yielding 5-6 times faster inference speed in OTSL with prediction quality comparable to models trained on HTML (see Table 1).</text>
<text><location><page_12><loc_22><loc_41><loc_79><loc_59></location>Secondly, OTSL has more inherent structure and a significantly restricted vocabulary size. This allows autoregressive models to perform better in the TED metric, but especially with regards to prediction accuracy of the table-cell bounding boxes (see Table 2). As shown in Figure 5, we observe that the OTSL drastically reduces the drift for table cell bounding boxes at high row count and in sparse tables. This leads to more accurate predictions and a significant reduction in post-processing complexity, which is an undesired necessity in HTML-based Im2Seq models. Significant novelty lies in OTSL syntactical rules, which are few, simple and always backwards looking. Each new token can be validated only by analyzing the sequence of previous tokens, without requiring the entire sequence to detect mistakes. This in return allows to perform structural error detection and correction on-the-fly during sequence generation.</text>
<section_header_level_1><location><page_12><loc_22><loc_36><loc_32><loc_38></location>References</section_header_level_1>
<unordered_list>
<list_item><location><page_12><loc_23><loc_29><loc_79><loc_34></location>1. Auer, C., Dolfi, M., Carvalho, A., Ramis, C.B., Staar, P.W.J.: Delivering document conversion as a cloud service with high throughput and responsiveness. CoRR abs/2206.00785 (2022). https://doi.org/10.48550/arXiv.2206.00785 , https://doi.org/10.48550/arXiv.2206.00785</list_item>
<list_item><location><page_12><loc_23><loc_23><loc_79><loc_29></location>2. Chen, B., Peng, D., Zhang, J., Ren, Y., Jin, L.: Complex table structure recognition in the wild using transformer and identity matrix-based augmentation. In: Porwal, U., Fornés, A., Shafait, F. (eds.) Frontiers in Handwriting Recognition. pp. 545561. Springer International Publishing, Cham (2022)</list_item> <list_item><location><page_12><loc_23><loc_23><loc_79><loc_28></location>2. Chen, B., Peng, D., Zhang, J., Ren, Y., Jin, L.: Complex table structure recognition in the wild using transformer and identity matrix-based augmentation. In: Porwal, U., Fornés, A., Shafait, F. (eds.) Frontiers in Handwriting Recognition. pp. 545561. Springer International Publishing, Cham (2022)</list_item>
<list_item><location><page_12><loc_23><loc_20><loc_79><loc_23></location>3. Chi, Z., Huang, H., Xu, H.D., Yu, H., Yin, W., Mao, X.L.: Complicated table structure recognition. arXiv preprint arXiv:1908.04729 (2019)</list_item>
<list_item><location><page_12><loc_23><loc_16><loc_79><loc_20></location>4. Deng, Y., Rosenberg, D., Mann, G.: Challenges in end-to-end neural scientific table recognition. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 894-901. IEEE (2019)</list_item>
</unordered_list>
@ -137,7 +139,7 @@
<list_item><location><page_13><loc_22><loc_48><loc_79><loc_53></location>11. Prasad, D., Gadpal, A., Kapadni, K., Visave, M., Sultanpure, K.: Cascadetabnet: An approach for end to end table detection and structure recognition from imagebased documents. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops. pp. 572-573 (2020)</list_item>
<list_item><location><page_13><loc_22><loc_42><loc_79><loc_48></location>12. Schreiber, S., Agne, S., Wolf, I., Dengel, A., Ahmed, S.: Deepdesrt: Deep learning for detection and structure recognition of tables in document images. In: 2017 14th IAPR international conference on document analysis and recognition (ICDAR). vol. 1, pp. 1162-1167. IEEE (2017)</list_item>
<list_item><location><page_13><loc_22><loc_37><loc_79><loc_42></location>13. Siddiqui, S.A., Fateh, I.A., Rizvi, S.T.R., Dengel, A., Ahmed, S.: Deeptabstr: Deep learning based table structure recognition. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1403-1409 (2019). https:// doi.org/10.1109/ICDAR.2019.00226</list_item>
<list_item><location><page_13><loc_22><loc_31><loc_79><loc_37></location>14. Smock, B., Pesala, R., Abraham, R.: PubTables-1M: Towards comprehensive table extraction from unstructured documents. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4634-4642 (June 2022)</list_item> <list_item><location><page_13><loc_22><loc_31><loc_79><loc_36></location>14. Smock, B., Pesala, R., Abraham, R.: PubTables-1M: Towards comprehensive table extraction from unstructured documents. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4634-4642 (June 2022)</list_item>
<list_item><location><page_13><loc_22><loc_23><loc_79><loc_31></location>15. Staar, P.W.J., Dolfi, M., Auer, C., Bekas, C.: Corpus conversion service: A machine learning platform to ingest documents at scale. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. pp. 774-782. KDD '18, Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3219819.3219834 , https://doi.org/10. 1145/3219819.3219834</list_item>
<list_item><location><page_13><loc_22><loc_20><loc_79><loc_23></location>16. Wang, X.: Tabular Abstraction, Editing, and Formatting. Ph.D. thesis, CAN (1996), aAINN09397</list_item>
<list_item><location><page_13><loc_22><loc_16><loc_79><loc_20></location>17. Xue, W., Li, Q., Tao, D.: Res2tim: Reconstruct syntactic structures from table images. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 749-755. IEEE (2019)</list_item>
@ -146,7 +148,7 @@
<list_item><location><page_14><loc_22><loc_81><loc_79><loc_85></location>18. Xue, W., Yu, B., Wang, W., Tao, D., Li, Q.: Tgrnet: A table graph reconstruction network for table structure recognition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1295-1304 (2021)</list_item>
<list_item><location><page_14><loc_22><loc_76><loc_79><loc_81></location>19. Ye, J., Qi, X., He, Y., Chen, Y., Gu, D., Gao, P., Xiao, R.: Pingan-vcgroup's solution for icdar 2021 competition on scientific literature parsing task b: Table recognition to html (2021). https://doi.org/10.48550/ARXIV.2105.01848 , https://arxiv.org/abs/2105.01848</list_item>
<list_item><location><page_14><loc_22><loc_73><loc_79><loc_75></location>20. Zhang, Z., Zhang, J., Du, J., Wang, F.: Split, embed and merge: An accurate table structure recognizer. Pattern Recognition 126 , 108565 (2022)</list_item>
<list_item><location><page_14><loc_22><loc_66><loc_79><loc_73></location>21. Zheng, X., Burdick, D., Popa, L., Zhong, X., Wang, N.X.R.: Global table extractor (gte): A framework for joint table identification and cell structure recognition using visual context. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 697-706 (2021). https://doi.org/10.1109/WACV48630.2021. 00074</list_item> <list_item><location><page_14><loc_22><loc_66><loc_79><loc_72></location>21. Zheng, X., Burdick, D., Popa, L., Zhong, X., Wang, N.X.R.: Global table extractor (gte): A framework for joint table identification and cell structure recognition using visual context. In: 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 697-706 (2021). https://doi.org/10.1109/WACV48630.2021. 00074</list_item>
<list_item><location><page_14><loc_22><loc_60><loc_79><loc_66></location>22. Zhong, X., ShafieiBavani, E., Jimeno Yepes, A.: Image-based table recognition: Data, model, and evaluation. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision - ECCV 2020. pp. 564-580. Springer International Publishing, Cham (2020)</list_item>
<list_item><location><page_14><loc_22><loc_56><loc_79><loc_60></location>23. Zhong, X., Tang, J., Yepes, A.J.: Publaynet: largest dataset ever for document layout analysis. In: 2019 International Conference on Document Analysis and Recognition (ICDAR). pp. 1015-1022. IEEE (2019)</list_item>
</document>

File diff suppressed because one or more lines are too long

View File

@ -1,8 +1,12 @@
## Optimized Table Tokenization for Table Structure Recognition
Maksym Lysak [0000-0002-3723-6960], Ahmed Nassar [0000-0002-9468-0822], Nikolaos Livathinos [0000-0001-8513-3491], Christoph Auer [0000-0001-5761-0422], and Peter Staar [0000-0002-8088-0823]
IBM Research
{mly,ahn,nli,cau,taa}@zurich.ibm.com
Abstract. Extracting tables from documents is a crucial task in any document conversion pipeline. Recently, transformer-based models have demonstrated that table-structure can be recognized with impressive accuracy using Image-to-Markup-Sequence (Im2Seq) approaches. Taking only the image of a table, such models predict a sequence of tokens (e.g. in HTML, LaTeX) which represent the structure of the table. Since the token representation of the table structure has a significant impact on the accuracy and run-time performance of any Im2Seq model, we investigate in this paper how table-structure representation can be optimised. We propose a new, optimised table-structure language (OTSL) with a minimized vocabulary and specific rules. The benefits of OTSL are that it reduces the number of tokens to 5 (HTML needs 28+) and shortens the sequence length to half of HTML on average. Consequently, model accuracy improves significantly, inference time is halved compared to HTML-based models, and the predicted table structures are always syntactically correct. This in turn eliminates most post-processing needs. Popular table structure data-sets will be published in OTSL format to the community.
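To make the token-economy claim concrete, the toy comparison below counts structure tokens for a two-column table whose first row is a single cell spanning both columns. The OTSL token names used here (C for a new cell, L for a cell merged into its left neighbour, NL for end of row) are recalled from the OTSL specification rather than stated in this abstract, so treat them as an assumption; the counting itself is only illustrative.

```python
# Structure tokens for a 2-column table: row 1 is one cell spanning both
# columns, row 2 holds two ordinary cells.
html_tokens = ["<tr>", '<td colspan="2">', "</td>", "</tr>",
               "<tr>", "<td>", "</td>", "<td>", "</td>", "</tr>"]
otsl_tokens = ["C", "L", "NL",   # spanning cell plus its left-merge marker
               "C", "C", "NL"]   # two plain cells

print(len(html_tokens), "HTML structure tokens vs.", len(otsl_tokens), "OTSL tokens")
```

Because every OTSL row carries exactly one token per grid column followed by NL, a malformed prediction is detectable immediately, which is what makes the predicted structures syntactically checkable.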

@ -0,0 +1,165 @@
item-0 at level 0: unspecified: group _root_
item-1 at level 1: title: KRAB-zinc finger protein gene ex ... retrotransposons in the murine lineage
item-2 at level 2: paragraph: Wolf Gernot; 1: The Eunice Kenne ... tes of Health: Bethesda: United States
item-3 at level 2: section_header: Abstract
item-4 at level 3: text: The Krüppel-associated box zinc ... edundant role restricting TE activity.
item-5 at level 2: section_header: Introduction
item-6 at level 3: text: Nearly half of the human and mou ... s are active beyond early development.
item-7 at level 3: text: TEs, especially long terminal re ... f evolutionarily young KRAB-ZFP genes.
item-8 at level 2: section_header: Results
item-9 at level 3: section_header: Mouse KRAB-ZFPs target retrotransposons
item-10 at level 4: text: We analyzed the RNA expression p ... duplications (Kauzlaric et al., 2017).
item-11 at level 4: text: To determine the binding sites o ... ctive in the early embryo (Figure 1A).
item-12 at level 4: text: We generally observed that KRAB- ... responsible for this silencing effect.
item-13 at level 4: text: To further test the hypothesis t ... t easily evade repression by mutation.
item-14 at level 4: text: Our KRAB-ZFP ChIP-seq dataset al ... ntirely shift the mode of DNA binding.
item-15 at level 3: section_header: Genetic deletion of KRAB-ZFP gen ... leads to retrotransposon reactivation
item-16 at level 4: text: The majority of KRAB-ZFP genes a ... ung et al., 2014; Deniz et al., 2018).
item-17 at level 3: section_header: KRAB-ZFP cluster deletions license TE-borne enhancers
item-18 at level 4: text: We next used our RNA-seq dataset ... vating effects of TEs on nearby genes.
item-19 at level 4: text: While we generally observed that ... he internal region and not on the LTR.
item-20 at level 3: section_header: ETn retrotransposition in Chr4-cl KO and WT mice
item-21 at level 4: text: IAP, ETn/ETnERV and MuLV/RLTR4 r ... s may contribute to reduced viability.
item-22 at level 4: text: We reasoned that retrotransposon ... Tn insertions at a high recovery rate.
item-23 at level 4: text: Using this dataset, we first con ... nsertions in our pedigree (Figure 4A).
item-24 at level 4: text: To validate some of the novel ET ... ess might have truncated this element.
item-25 at level 4: text: Besides novel ETn insertions tha ... tions (Figure 4—figure supplement 3D).
item-26 at level 4: text: Finally, we asked whether there ... s clearly also play an important role.
item-27 at level 2: section_header: Discussion
item-28 at level 3: text: C2H2 zinc finger proteins, about ... ) depending upon their insertion site.
item-29 at level 3: text: Despite a lack of widespread ETn ... ion of the majority of KRAB-ZFP genes.
item-30 at level 2: section_header: Materials and methods
item-31 at level 3: section_header: Cell lines and transgenic mice
item-32 at level 4: text: Mouse ES cells and F9 EC cells w ... KO/KO and KO/WT (B6/129 F2) offspring.
item-33 at level 3: section_header: Generation of KRAB-ZFP expressing cell lines
item-34 at level 4: text: KRAB-ZFP ORFs were PCR-amplified ... led and further expanded for ChIP-seq.
item-35 at level 3: section_header: CRISPR/Cas9 mediated deletion of KRAB-ZFP clusters and an MMETn insertion
item-36 at level 4: text: All gRNAs were expressed from th ... PCR genotyping (Supplementary file 3).
item-37 at level 3: section_header: ChIP-seq analysis
item-38 at level 4: text: For ChIP-seq analysis of KRAB-ZF ... 010 or Khil et al., 2012 respectively.
item-39 at level 4: text: ChIP-seq libraries were construc ... were re-mapped using Bowtie (--best).
item-40 at level 3: section_header: Luciferase reporter assays
item-41 at level 4: text: For KRAB-ZFP repression assays, ... after transfection as described above.
item-42 at level 3: section_header: RNA-seq analysis
item-43 at level 4: text: Whole RNA was purified using RNe ... lemented in the R function p.adjust().
item-44 at level 3: section_header: Reduced representation bisulfite sequencing (RRBS-seq)
item-45 at level 4: text: For RRBS-seq analysis, Chr4-cl W ... h sample were considered for analysis.
item-46 at level 3: section_header: Retrotransposition assay
item-47 at level 4: text: The retrotransposition vectors p ... were stained with Amido Black (Sigma).
item-48 at level 3: section_header: Capture-seq screen
item-49 at level 4: text: To identify novel retrotransposo ... assembly using the Unicycler software.
item-50 at level 2: section_header: Tables
item-51 at level 3: table with [9x5]
item-51 at level 4: caption: Table 1.: * Number of protein-coding KRAB-ZFP genes identified in a previously published screen (Imbeault et al., 2017) and the ChIP-seq data column indicates the number of KRAB-ZFPs for which ChIP-seq was performed in this study.
item-52 at level 3: table with [31x5]
item-52 at level 4: caption: Key resources table:
item-53 at level 2: section_header: Figures
item-54 at level 3: picture
item-54 at level 4: caption: Figure 1.: Genome-wide binding patterns of mouse KRAB-ZFPs.
(A) Probability heatmap of KRAB-ZFP binding to TEs. Blue color intensity (main field) corresponds to -log10 (adjusted p-value) enrichment of ChIP-seq peak overlap with TE groups (Fisher's exact test). The green/red color intensity (top panel) represents mean KAP1 (GEO accession: GSM1406445) and H3K9me3 (GEO accession: GSM1327148) enrichment (respectively) at peaks overlapping significantly targeted TEs (adjusted p-value<1e-5) in WT ES cells. (B) Summarized ChIP-seq signal for indicated KRAB-ZFPs and previously published KAP1 and H3K9me3 in WT ES cells across 127 intact ETn elements. (C) Heatmaps of KRAB-ZFP ChIP-seq signal at ChIP-seq peaks. For better comparison, peaks for all three KRAB-ZFPs were called with the same parameters (p<1e-10, peak enrichment >20). The top panel shows a schematic of the arrangement of the contact amino acid composition of each zinc finger. Zinc fingers are grouped and colored according to similarity, with amino acid differences relative to the five consensus fingers highlighted in white.
Figure 1—source data 1. KRAB-ZFP expression in 40 mouse tissues and cell lines (ENCODE). Mean values of replicates are shown as log2 transcripts per million.
Figure 1—source data 2. Probability heatmap of KRAB-ZFP binding to TEs. Values correspond to -log10 (adjusted p-value) enrichment of ChIP-seq peak overlap with TE groups (Fisher's exact test).
item-55 at level 3: picture
item-55 at level 4: caption: Figure 1—figure supplement 1.: ES cell-specific expression of KRAB-ZFP gene clusters.
(A) Heatmap showing expression patterns of mouse KRAB-ZFPs in 40 mouse tissues and cell lines (ENCODE). Heatmap colors indicate gene expression levels in log2 transcripts per million (TPM). The asterisk indicates a group of 30 KRAB-ZFPs that are exclusively expressed in ES cells. (B) Physical location of the genes encoding for the 30 KRAB-ZFPs that are exclusively expressed in ES cells. (C) Phylogenetic (Maximum likelihood) tree of the KRAB domains of mouse KRAB-ZFPs. KRAB-ZFPs encoded on the gene clusters on chromosome 2 and 4 are highlighted. The scale bar at the bottom indicates amino acid substitutions per site.
item-56 at level 3: picture
item-56 at level 4: caption: Figure 1—figure supplement 2.: KRAB-ZFP binding motifs and their repression activity.
(A) Comparison of computationally predicted (bottom) and experimentally determined (top) KRAB-ZFP binding motifs. Only significant pairs are shown (FDR < 0.1). (B) Luciferase reporter assays to confirm KRAB-ZFP repression of the identified target sites. Bars show the luciferase activity (normalized to Renilla luciferase) of reporter plasmids containing the indicated target sites cloned upstream of the SV40 promoter. Reporter plasmids were co-transfected into 293 T cells with a Renilla luciferase plasmid for normalization and plasmids expressing the targeting KRAB-ZFP. Normalized mean luciferase activity (from three replicates) is shown relative to luciferase activity of the reporter plasmid co-transfected with an empty pcDNA3.1 vector.
item-57 at level 3: picture
item-57 at level 4: caption: Figure 1—figure supplement 3.: KRAB-ZFP binding to ETn retrotransposons.
(A) Comparison of the PBSLys1,2 sequence with Zfp961 binding motifs in nonrepetitive peaks (Nonrep) and peaks at ETn elements. (B) Retrotransposition assays of original (ETnI1-neoTNF and MusD2-neoTNF Ribet et al., 2004) and modified reporter vectors where the Rex2 or Gm13051 binding motifs were removed. Schematics of the reporter vectors are displayed at the top. HeLa cells were transfected as described in the Materials and Methods section and neo-resistant colonies, indicating retrotransposition events, were selected and stained. (C) Stem-loop structure of the ETn RNA export signal, the Gm13051 motif on the corresponding DNA is marked with red circles, the part of the motif that was deleted is indicated with grey crosses (adapted from Legiewicz et al., 2010).
item-58 at level 3: picture
item-58 at level 4: caption: Figure 2.: Retrotransposon reactivation in KRAB-ZFP cluster KO ES cells.
(A) RNA-seq analysis of TE expression in five KRAB-ZFP cluster KO ES cells. Green and grey squares on top of the panel represent KRAB-ZFPs with or without ChIP-seq data, respectively, within each deleted gene cluster. Reactivated TEs that are bound by one or several KRAB-ZFPs are indicated by green squares in the panel. Significantly up- and downregulated elements (adjusted p-value<0.05) are highlighted in red and green, respectively. (B) Differential KAP1 binding and H3K9me3 enrichment at TE groups (summarized across all insertions) in Chr2-cl and Chr4-cl KO ES cells. TE groups targeted by one or several KRAB-ZFPs encoded within the deleted clusters are highlighted in blue (differential enrichment over the entire TE sequences) and red (differential enrichment at TE regions that overlap with KRAB-ZFP ChIP-seq peaks). (C) DNA methylation status of CpG sites at indicated TE groups in WT and Chr4-cl KO ES cells grown in serum containing media or in hypomethylation-inducing media (2i + Vitamin C). P-values were calculated using paired t-test.
Figure 2—source data 1. Differential H3K9me3 and KAP1 distribution in WT and KRAB-ZFP cluster KO ES cells at TE families and KRAB-ZFP bound TE insertions. Differential read counts and statistical testing were determined by DESeq2.
item-59 at level 3: picture
item-59 at level 4: caption: Figure 2—figure supplement 1.: Epigenetic changes at TEs and TE-borne enhancers in KRAB-ZFP cluster KO ES cells.
(A) Differential analysis of summative (all individual insertions combined) H3K9me3 enrichment at TE groups in Chr10-cl, Chr13.1-cl and Chr13.2-cl KO ES cells. TE groups targeted by one or several KRAB-ZFPs encoded within the deleted clusters are highlighted in orange (differential enrichment over the entire TE sequences) and red (differential enrichment at TE regions that overlap with KRAB-ZFP ChIP-seq peaks). (B) Top: Schematic view of the Cd59a/Cd59b locus with a 5′ truncated ETn insertion. ChIP-seq (Input subtracted from ChIP) data for overexpressed epitope-tagged Gm13051 (a Chr4-cl KRAB-ZFP) in F9 EC cells, and re-mapped KAP1 (GEO accession: GSM1406445) and H3K9me3 (GEO accession: GSM1327148) in WT ES cells are shown together with RNA-seq data from Chr4-cl WT and KO ES cells (mapped using Bowtie (-a -m 1 --strata -v 2) to exclude reads that cannot be uniquely mapped). Bottom: Transcriptional activity of a 5 kb fragment with or without fragments of the ETn insertion was tested by luciferase reporter assay in Chr4-cl WT and KO ES cells.
item-60 at level 3: picture
item-60 at level 4: caption: Figure 3.: TE-dependent gene activation in KRAB-ZFP cluster KO ES cells.
(A) Differential gene expression in Chr2-cl and Chr4-cl KO ES cells. Significantly up- and downregulated genes (adjusted p-value<0.05) are highlighted in red and green, respectively; KRAB-ZFP genes within the deleted clusters are shown in blue. (B) Correlation of TEs and gene deregulation. Plots show enrichment of TE groups within 100 kb of up- and downregulated genes relative to all genes. Significantly overrepresented LTR and LINE groups (adjusted p-value<0.1) are highlighted in blue and red, respectively. (C) Schematic view of the downstream region of Chst1 where a 5′ truncated ETn insertion is located. ChIP-seq (Input subtracted from ChIP) data for overexpressed epitope-tagged Gm13051 (a Chr4-cl KRAB-ZFP) in F9 EC cells, and re-mapped KAP1 (GEO accession: GSM1406445) and H3K9me3 (GEO accession: GSM1327148) in WT ES cells are shown together with RNA-seq data from Chr4-cl WT and KO ES cells (mapped using Bowtie (-a -m 1 --strata -v 2) to exclude reads that cannot be uniquely mapped). (D) RT-qPCR analysis of Chst1 mRNA expression in Chr4-cl WT and KO ES cells with or without the CRISPR/Cas9 deleted ETn insertion near Chst1. Values represent mean expression (normalized to Gapdh) from three biological replicates per sample (each performed in three technical replicates) in arbitrary units. Error bars represent standard deviation and asterisks indicate significance (p<0.01, Student's t-test). n.s.: not significant. (E) Mean coverage of ChIP-seq data (Input subtracted from ChIP) in Chr4-cl WT and KO ES cells over 127 full-length ETn insertions. The binding sites of the Chr4-cl KRAB-ZFPs Rex2 and Gm13051 are indicated by dashed lines.
item-61 at level 3: picture
item-61 at level 4: caption: Figure 4.: ETn retrotransposition in Chr4-cl KO mice.
(A) Pedigree of mice used for transposon insertion screening by capture-seq in mice of different strain backgrounds. The number of novel ETn insertions (only present in one animal) is indicated. For animals whose direct ancestors have not been screened, the ETn insertions are shown in parentheses since parental inheritance cannot be excluded in that case. Germ line insertions are indicated by asterisks. All DNA samples were prepared from tail tissues unless noted (-S: spleen, -E: ear, -B: blood). (B) Statistical analysis of ETn insertion frequency in tail tissue from 30 Chr4-cl KO, KO/WT and WT mice that were derived from one Chr4-cl KO x KO/WT and two Chr4-cl KO/WT x KO/WT matings. Only DNA samples that were collected from juvenile tails were considered for this analysis. P-values were calculated using the one-sided Wilcoxon rank sum test. In the last panel, KO, WT and KO/WT mice derived from all matings were combined for the statistical analysis.
Figure 4—source data 1. Coordinates of identified novel ETn insertions and supporting capture-seq read counts. Genomic regions indicate clusters of supporting reads.
Figure 4—source data 2. Sequences of capture-seq probes used to enrich genomic DNA for ETn and MuLV (RLTR4) insertions.
item-62 at level 3: picture
item-62 at level 4: caption: Figure 4—figure supplement 1.: Birth statistics of KRAB-ZFP cluster KO mice and TE reactivation in adult tissues.
(A) Birth statistics of Chr4- and Chr2-cl mice derived from KO/WT x KO/WT matings in different strain backgrounds. (B) RNA-seq analysis of TE expression in Chr2- (left) and Chr4-cl (right) KO tissues. TE groups with the highest reactivation phenotype in ES cells are shown separately. Significantly up- and downregulated elements (adjusted p-value<0.05) are highlighted in red and green, respectively. Experiments were performed in at least two biological replicates.
item-63 at level 3: picture
item-63 at level 4: caption: Figure 4—figure supplement 2.: Identification of polymorphic ETn and MuLV retrotransposon insertions in Chr4-cl KO and WT mice.
Heatmaps show normalized capture-seq read counts in RPM (Read Per Million) for identified polymorphic ETn (A) and MuLV (B) loci in different mouse strains. Only loci with strong support for germ line ETn or MuLV insertions (at least 100 or 3000 ETn or MuLV RPM, respectively) in at least two animals are shown. Non-polymorphic insertion loci with high read counts in all screened mice were excluded for better visibility. The sample information (sample name and cell type/tissue) is annotated at the bottom, with the strain information indicated by color at the top. The color gradient indicates log10(RPM+1).
item-64 at level 3: picture
item-64 at level 4: caption: Figure 4—figure supplement 3.: Confirmation of novel ETn insertions identified by capture-seq.
(A) PCR validation of novel ETn insertions in genomic DNA of three littermates (IDs: T09673, T09674 and T00436) and their parents (T3913 and T3921). Primer sequences are shown in Supplementary file 3. (B) ETn capture-seq read counts (RPM) at putative novel somatic (loci identified exclusively in one single animal), novel germ line (loci identified in several littermates) insertions, and at B6 reference ETn elements. (C) Heatmap shows capture-seq read counts (RPM) of a Chr4-cl KO mouse (ID: C6733) as determined in different tissues. Each row represents a novel ETn locus that was identified in at least one tissue. The color gradient indicates log10(RPM+1). (D) Heatmap shows the capture-seq RPM in technical replicates using the same Chr4-cl KO DNA sample (rep1/rep2) or replicates with DNA samples prepared from different sections of the tail from the same mouse at different ages (tail1/tail2). Each row represents a novel ETn locus that was identified in at least one of the displayed samples. The color gradient indicates log10(RPM+1).
item-65 at level 2: section_header: References
item-66 at level 3: list: group list
item-67 at level 4: list_item: TL Bailey; M Boden; FA Buske; M ... arching. Nucleic Acids Research (2009)
item-68 at level 4: list_item: C Baust; L Gagnier; GJ Baillie; ... the mouse. Journal of Virology (2003)
item-69 at level 4: list_item: K Blaschke; KT Ebata; MM Karimi; ... -like state in ES cells. Nature (2013)
item-70 at level 4: list_item: A Brodziak; E Ziółko; M Muc-Wier ... erimental and Clinical Research (2012)
item-71 at level 4: list_item: N Castro-Diaz; G Ecco; A Colucci ... stem cells. Genes & Development (2014)
item-72 at level 4: list_item: EB Chuong; NC Elde; C Feschotte. ... ndogenous retroviruses. Science (2016)
item-73 at level 4: list_item: J Dan; Y Liu; N Liu; M Chiourea; ... n silencing. Developmental Cell (2014)
item-74 at level 4: list_item: A De Iaco; E Planet; A Coluccio; ... cental mammals. Nature Genetics (2017)
item-75 at level 4: list_item: Ö Deniz; L de la Rica; KCL Cheng ... onic stem cells. Genome Biology (2018)
item-76 at level 4: list_item: M Dewannieux; T Heidmann. Endoge ... rs. Current Opinion in Virology (2013)
item-77 at level 4: list_item: G Ecco; M Cassano; A Kauzlaric; ... ult tissues. Developmental Cell (2016)
item-78 at level 4: list_item: G Ecco; M Imbeault; D Trono. KRAB zinc finger proteins. Development (2017)
item-79 at level 4: list_item: JA Frank; C Feschotte. Co-option ... on. Current Opinion in Virology (2017)
item-80 at level 4: list_item: L Gagnier; VP Belancio; DL Mager ... ansposon insertions. Mobile DNA (2019)
item-81 at level 4: list_item: AC Groner; S Meylan; A Ciuffi; N ... omatin spreading. PLOS Genetics (2010)
item-82 at level 4: list_item: DC Hancks; HH Kazazian. Roles fo ... ns in human disease. Mobile DNA (2016)
item-83 at level 4: list_item: M Imbeault; PY Helleboid; D Tron ... ene regulatory networks. Nature (2017)
item-84 at level 4: list_item: FM Jacobs; D Greenberg; N Nguyen ... SVA/L1 retrotransposons. Nature (2014)
item-85 at level 4: list_item: H Kano; H Kurahashi; T Toda. Gen ... e dactylaplasia phenotype. PNAS (2007)
item-86 at level 4: list_item: MM Karimi; P Goyal; IA Maksakova ... cripts in mESCs. Cell Stem Cell (2011)
item-87 at level 4: list_item: A Kauzlaric; G Ecco; M Cassano; ... related genetic units. PLOS ONE (2017)
item-88 at level 4: list_item: PP Khil; F Smagulova; KM Brick; ... ction of ssDNA. Genome Research (2012)
item-89 at level 4: list_item: F Krueger; SR Andrews. Bismark: ... eq applications. Bioinformatics (2011)
item-90 at level 4: list_item: B Langmead; SL Salzberg. Fast ga ... t with bowtie 2. Nature Methods (2012)
item-91 at level 4: list_item: M Legiewicz; AS Zolotukhin; GR P ... Journal of Biological Chemistry (2010)
item-92 at level 4: list_item: JA Lehoczky; PE Thomas; KM Patri ... n Polypodia mice. PLOS Genetics (2013)
item-93 at level 4: list_item: D Leung; T Du; U Wagner; W Xie; ... methyltransferase Setdb1. PNAS (2014)
item-94 at level 4: list_item: J Lilue; AG Doran; IT Fiddes; M ... unctional loci. Nature Genetics (2018)
item-95 at level 4: list_item: S Liu; J Brind'Amour; MM Karimi; ... germ cells. Genes & Development (2014)
item-96 at level 4: list_item: MI Love; W Huber; S Anders. Mode ... ata with DESeq2. Genome Biology (2014)
item-97 at level 4: list_item: F Lugani; R Arora; N Papeta; A P ... short tail mouse. PLOS Genetics (2013)
item-98 at level 4: list_item: TS Macfarlan; WD Gifford; S Dris ... ous retrovirus activity. Nature (2012)
item-99 at level 4: list_item: IA Maksakova; MT Romanish; L Gag ... mouse germ line. PLOS Genetics (2006)
item-100 at level 4: list_item: T Matsui; D Leung; H Miyashita; ... methyltransferase ESET. Nature (2010)
item-101 at level 4: list_item: HS Najafabadi; S Mnaimneh; FW Sc ... y lexicon. Nature Biotechnology (2015)
item-102 at level 4: list_item: C Nellåker; TM Keane; B Yalcin; ... 8 mouse strains. Genome Biology (2012)
item-103 at level 4: list_item: H O'Geen; S Frietze; PJ Farnham. ... s. Methods in Molecular Biology (2010)
item-104 at level 4: list_item: A Patel; P Yang; M Tinkham; M Pr ... ndem zinc finger proteins. Cell (2018)
item-105 at level 4: list_item: D Ribet; M Dewannieux; T Heidman ... s-mobilization. Genome Research (2004)
item-106 at level 4: list_item: SR Richardson; P Gerdes; DJ Gerh ... d early embryo. Genome Research (2017)
item-107 at level 4: list_item: HM Rowe; J Jakobsson; D Mesnard; ... in embryonic stem cells. Nature (2010)
item-108 at level 4: list_item: HM Rowe; A Kapopoulou; A Corsino ... nic stem cells. Genome Research (2013)
item-109 at level 4: list_item: SN Schauer; PE Carreira; R Shukl ... carcinogenesis. Genome Research (2018)
item-110 at level 4: list_item: DC Schultz; K Ayyanathan; D Nego ... r proteins. Genes & Development (2002)
item-111 at level 4: list_item: K Semba; K Araki; K Matsumoto; H ... short tail mice. PLOS Genetics (2013)
item-112 at level 4: list_item: SP Sripathy; J Stevens; DC Schul ... Molecular and Cellular Biology (2006)
item-113 at level 4: list_item: JH Thomas; S Schneider. Coevolut ... c finger genes. Genome Research (2011)
item-114 at level 4: list_item: PJ Thompson; TS Macfarlan; MC Lo ... tory repertoire. Molecular Cell (2016)
item-115 at level 4: list_item: RS Treger; SD Pope; Y Kong; M To ... irus expression SNERV. Immunity (2019)
item-116 at level 4: list_item: CN Vlangos; AN Siuniak; D Robins ... Ptf1a expression. PLOS Genetics (2013)
item-117 at level 4: list_item: J Wang; G Xie; M Singh; AT Ghanb ... s naive-like stem cells. Nature (2014)
item-118 at level 4: list_item: D Wolf; K Hug; SP Goff. TRIM28 m ... iruses in embryonic cells. PNAS (2008)
item-119 at level 4: list_item: G Wolf; D Greenberg; TS Macfarla ... ger protein family. Mobile DNA (2015a)
item-120 at level 4: list_item: G Wolf; P Yang; AC Füchtbauer; E ... roviruses. Genes & Development (2015b)
item-121 at level 4: list_item: M Yamauchi; B Freitag; C Khan; B ... silencers. Journal of Virology (1995)
item-122 at level 4: list_item: Y Zhang; T Liu; CA Meyer; J Eeck ... ChIP-Seq (MACS). Genome Biology (2008)
item-123 at level 1: caption: Table 1.: * Number of protein-co ... ChIP-seq was performed in this study.
item-124 at level 1: caption: Key resources table:
item-125 at level 1: caption: Figure 1.: Genome-wide binding p ... with TE groups (Fisher's exact test).
item-126 at level 1: caption: Figure 1—figure supplement 1.: E ... tes amino acid substitutions per site.
item-127 at level 1: caption: Figure 1—figure supplement 2.: K ... sfected with an empty pcDNA3.1 vector.
item-128 at level 1: caption: Figure 1—figure supplement 3.: K ... (adapted from Legiewicz et al., 2010).
item-129 at level 1: caption: Figure 2.: Retrotransposon react ... cal testing were determined by DESeq2.
item-130 at level 1: caption: Figure 2—figure supplement 1.: E ... r assay in Chr4-cl WT and KO ES cells.
item-131 at level 1: caption: Figure 3.: TE-dependent gene act ... Gm13051 are indicated by dashed lines.
item-132 at level 1: caption: Figure 4.: ETn retrotranspositio ... A for ETn and MuLV (RLTR4) insertions.
item-133 at level 1: caption: Figure 4—figure supplement 1.: B ... in at least two biological replicates.
item-134 at level 1: caption: Figure 4—figure supplement 2.: I ... color gradient indicates log10(RPM+1).
item-135 at level 1: caption: Figure 4—figure supplement 3.: C ... color gradient indicates log10(RPM+1).

@ -0,0 +1,268 @@
# KRAB-zinc finger protein gene expansion in response to active retrotransposons in the murine lineage
Wolf Gernot; 1: The Eunice Kennedy Shriver National Institute of Child Health and Human Development, The National Institutes of Health: Bethesda: United States; de Iaco Alberto; 2: School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL): Lausanne: Switzerland; Sun Ming-An; 1: The Eunice Kennedy Shriver National Institute of Child Health and Human Development, The National Institutes of Health: Bethesda: United States; Bruno Melania; 1: The Eunice Kennedy Shriver National Institute of Child Health and Human Development, The National Institutes of Health: Bethesda: United States; Tinkham Matthew; 1: The Eunice Kennedy Shriver National Institute of Child Health and Human Development, The National Institutes of Health: Bethesda: United States; Hoang Don; 1: The Eunice Kennedy Shriver National Institute of Child Health and Human Development, The National Institutes of Health: Bethesda: United States; Mitra Apratim; 1: The Eunice Kennedy Shriver National Institute of Child Health and Human Development, The National Institutes of Health: Bethesda: United States; Ralls Sherry; 1: The Eunice Kennedy Shriver National Institute of Child Health and Human Development, The National Institutes of Health: Bethesda: United States; Trono Didier; 2: School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL): Lausanne: Switzerland; Macfarlan Todd S; 1: The Eunice Kennedy Shriver National Institute of Child Health and Human Development, The National Institutes of Health: Bethesda: United States
## Abstract
The Krüppel-associated box zinc finger protein (KRAB-ZFP) family diversified in mammals. The majority of human KRAB-ZFPs bind transposable elements (TEs), however, since most TEs are inactive in humans it is unclear whether KRAB-ZFPs emerged to suppress TEs. We demonstrate that many recently emerged murine KRAB-ZFPs also bind to TEs, including the active ETn, IAP, and L1 families. Using a CRISPR/Cas9-based engineering approach, we genetically deleted five large clusters of KRAB-ZFPs and demonstrate that target TEs are de-repressed, unleashing TE-encoded enhancers. Homozygous knockout mice lacking one of two KRAB-ZFP gene clusters on chromosome 2 and chromosome 4 were nonetheless viable. In pedigrees of chromosome 4 cluster KRAB-ZFP mutants, we identified numerous novel ETn insertions with a modest increase in mutants. Our data strongly support the current model that recent waves of retrotransposon activity drove the expansion of KRAB-ZFP genes in mice and that many KRAB-ZFPs play a redundant role restricting TE activity.
## Introduction
Nearly half of the human and mouse genomes consist of transposable elements (TEs). TEs shape the evolution of species, serving as a source for genetic innovation (Chuong et al., 2016; Frank and Feschotte, 2017). However, TEs also potentially harm their hosts by insertional mutagenesis, gene deregulation and activation of innate immunity (Maksakova et al., 2006; Kano et al., 2007; Brodziak et al., 2012; Hancks and Kazazian, 2016). To protect themselves from TE activity, host organisms have developed a wide range of defense mechanisms targeting virtually all steps of the TE life cycle (Dewannieux and Heidmann, 2013). In tetrapods, KRAB zinc finger protein (KRAB-ZFP) genes have amplified and diversified, likely in response to TE colonization (Thomas and Schneider, 2011; Najafabadi et al., 2015; Wolf et al., 2015a; Wolf et al., 2015b; Imbeault et al., 2017). Conventional ZFPs bind DNA using tandem arrays of C2H2 zinc finger domains, each capable of specifically interacting with three nucleotides, whereas some zinc fingers can bind two or four nucleotides and include DNA backbone interactions depending on target DNA structure (Patel et al., 2018). This allows KRAB-ZFPs to flexibly bind to large stretches of DNA with high affinity. The KRAB domain binds the corepressor KAP1, which in turn recruits histone modifying enzymes including the NuRD histone deacetylase complex and the H3K9-specific methylase SETDB1 (Schultz et al., 2002; Sripathy et al., 2006), which induces persistent and heritable gene silencing (Groner et al., 2010). Deletion of KAP1 (Rowe et al., 2010) or SETDB1 (Matsui et al., 2010) in mouse embryonic stem (ES) cells induces TE reactivation and cell death, but only minor phenotypes in differentiated cells, suggesting KRAB-ZFPs are most important during early embryogenesis where they mark TEs for stable epigenetic silencing that persists through development. However, SETDB1-containing complexes are also required to repress TEs in primordial germ cells (Liu et al., 2014) and adult tissues (Ecco et al., 2016), indicating KRAB-ZFPs are active beyond early development.
TEs, especially long terminal repeat (LTR) retrotransposons, also known as endogenous retroviruses (ERVs), can affect expression of neighboring genes through their promoter and enhancer functions (Macfarlan et al., 2012; Wang et al., 2014; Thompson et al., 2016). KAP1 deletion in mouse ES cells causes rapid gene deregulation (Rowe et al., 2013), indicating that KRAB-ZFPs may regulate gene expression by recruiting KAP1 to TEs. Indeed, Zfp809 knock-out (KO) in mice resulted in transcriptional activation of a handful of genes in various tissues adjacent to ZFP809-targeted VL30-Pro elements (Wolf et al., 2015b). It has therefore been speculated that KRAB-ZFPs bind to TE sequences to domesticate them for gene regulatory innovation (Ecco et al., 2017). This idea is supported by the observation that many human KRAB-ZFPs target TE groups that have lost their coding potential millions of years ago and that KRAB-ZFP target sequences within TEs are in some cases under purifying selection (Imbeault et al., 2017). However, there are also clear signs of an evolutionary arms-race between human TEs and KRAB-ZFPs (Jacobs et al., 2014), indicating that some KRAB-ZFPs may limit TE mobility for stretches of evolutionary time, prior to their ultimate loss from the genome or adaptation for other regulatory functions. Here we use the laboratory mouse, which has undergone a recent expansion of the KRAB-ZFP family, to determine the in vivo requirement of the majority of evolutionarily young KRAB-ZFP genes.
## Results
### Mouse KRAB-ZFPs target retrotransposons
We analyzed the RNA expression profiles of mouse KRAB-ZFPs across a wide range of tissues to identify candidates active in early embryos/ES cells. While the majority of KRAB-ZFPs are expressed at low levels and uniformly across tissues, a group of KRAB-ZFPs are highly and almost exclusively expressed in ES cells (Figure 1—figure supplement 1A). About two thirds of these KRAB-ZFPs are physically linked in two clusters on chromosome 2 (Chr2-cl) and 4 (Chr4-cl) (Figure 1—figure supplement 1B). These two clusters encode 40 and 21 KRAB-ZFP annotated genes, respectively, which, with one exception on Chr4-cl, do not have orthologues in rat or any other sequenced mammals (Supplementary file 1). The KRAB-ZFPs within these two genomic clusters also group together phylogenetically (Figure 1—figure supplement 1C), indicating these gene clusters arose by a series of recent segmental gene duplications (Kauzlaric et al., 2017).
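As a rough illustration of this kind of tissue-specificity screen (a sketch, not the authors' pipeline), the snippet below flags genes that are high in ES cells but near-silent in every other tissue of a made-up log2-TPM matrix; the thresholds are arbitrary:

```python
import numpy as np

# Rows are tissues, columns are genes; values are log2 TPM (invented).
tissues = ["ES", "brain", "liver", "testis"]
tpm = np.array([[6.2, 5.8],
                [0.1, 0.3],
                [0.0, 0.2],
                [0.4, 4.1]])

es = tpm[tissues.index("ES")]
other_max = np.delete(tpm, tissues.index("ES"), axis=0).max(axis=0)
es_specific = (es > 2) & (other_max < 1)  # high in ES, low everywhere else
print(es_specific)                         # [ True False]: the 2nd gene is also high in testis
```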
To determine the binding sites of the KRAB-ZFPs within these and other gene clusters, we expressed epitope-tagged KRAB-ZFPs using stably integrating vectors in mouse embryonic carcinoma (EC) or ES cells (Table 1, Supplementary file 1) and performed chromatin immunoprecipitation followed by deep sequencing (ChIP-seq). We then determined whether the identified binding sites are significantly enriched over annotated TEs and used the non-repetitive peak fraction to identify binding motifs. We discarded 7 of 68 ChIP-seq datasets because we could not obtain a binding motif or a target TE and manual inspection confirmed low signal to noise ratio. Of the remaining 61 KRAB-ZFPs, 51 significantly overlapped at least one TE subfamily (adjusted p-value<1e-5). Altogether, 81 LTR retrotransposon, 18 LINE, 10 SINE and one DNA transposon subfamilies were targeted by at least one of the 51 KRAB-ZFPs (Figure 1A and Supplementary file 1). Chr2-cl KRAB-ZFPs preferably bound IAPEz retrotransposons and L1-type LINEs, while Chr4-cl KRAB-ZFPs targeted various retrotransposons, including the closely related MMETn (hereafter referred to as ETn) and ETnERV (also known as MusD) elements (Figure 1A). ETn elements are non-autonomous LTR retrotransposons that require trans-complementation by the fully coding ETnERV elements that contain Gag, Pro and Pol genes (Ribet et al., 2004). These elements have accumulated to ~240 and ~100 copies in the reference C57BL/6 genome, respectively, with ~550 solitary LTRs (Baust et al., 2003). Both ETn and ETnERVs are still active, generating polymorphisms and mutations in several mouse strains (Gagnier et al., 2019). The validity of our ChIP-seq screen was confirmed by the identification of binding motifs - which often resembled the computationally predicted motifs (Figure 1—figure supplement 2A) - for the majority of screened KRAB-ZFPs (Supplementary file 1). Moreover, predicted and experimentally determined motifs were found in targeted TEs in most cases (Supplementary file 1), and reporter repression assays confirmed KRAB-ZFP induced silencing for all the tested sequences (Figure 1—figure supplement 2B). Finally, we observed KAP1 and H3K9me3 enrichment at most of the targeted TEs in wild type ES cells, indicating that most of these KRAB-ZFPs are functionally active in the early embryo (Figure 1A).
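The per-subfamily overlap test can be sketched as a 2x2 contingency table fed to Fisher's exact test, which is the statistic named in Figure 1A. All counts below are invented, and a real analysis would first intersect peak calls with RepeatMasker coordinates, so this is only a minimal illustration:

```python
from scipy.stats import fisher_exact

peaks_in_te, peaks_outside_te = 120, 880      # ChIP-seq peaks overlapping ETn vs. not
bg_in_te, bg_outside_te = 300, 99_700         # matched random background regions

odds, p = fisher_exact([[peaks_in_te, peaks_outside_te],
                        [bg_in_te, bg_outside_te]],
                       alternative="greater")  # one-sided: enrichment only
print(f"odds ratio {odds:.1f}, p = {p:.2e}")   # called a target if adjusted p < 1e-5
```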
We generally observed that KRAB-ZFPs present exclusively in mouse target TEs that are restricted to the mouse genome, indicating KRAB-ZFPs and their targets emerged together. For example, several mouse-specific KRAB-ZFPs in Chr2-cl and Chr4-cl target IAP and ETn elements which are only found in the mouse genome and are highly active. This is the strongest data to date supporting that the recent KRAB-ZFP expansion in these young clusters is a response to recent TE activity. Likewise, ZFP599 and ZFP617, both conserved in Muroidea, bind to various ORR1-type LTRs which are present in the rat genome (Supplementary file 1). However, ZFP961, a KRAB-ZFP encoded on a small gene cluster on chromosome 8 that is conserved in Muroidea, targets TEs that are only found in the mouse genome (e.g. ETn), a paradox we have previously observed with ZFP809, which also targets TEs that are evolutionarily younger than itself (Wolf et al., 2015b). The ZFP961 binding site is located at the 5′ end of the internal region of ETn and ETnERV elements, a sequence that usually contains the primer binding site (PBS), which is required to prime retroviral reverse transcription. Indeed, the ZFP961 motif closely resembles the PBSLys1,2 (Figure 1—figure supplement 3A), which had been previously identified as a KAP1-dependent target of retroviral repression (Yamauchi et al., 1995; Wolf et al., 2008). Repression of the PBSLys1,2 by ZFP961 was also confirmed in reporter assays (Figure 1—figure supplement 2B), indicating that ZFP961 is likely responsible for this silencing effect.
To further test the hypothesis that KRAB-ZFPs target sites necessary for retrotransposition, we utilized previously generated ETn and ETnERV retrotransposition reporters in which we mutated KRAB-ZFP binding sites (Ribet et al., 2004). Whereas the ETnERV reporters are sufficient for retrotransposition, the ETn reporter requires ETnERV genes supplied in trans. We tested and confirmed that the REX2/ZFP600 and GM13051 binding sites within these TEs are required for efficient retrotransposition (Figure 1—figure supplement 3B). REX2 and ZFP600 both bind a target about 200 bp from the start of the internal region (Figure 1B), a region that often encodes the packaging signal. GM13051 binds a target coding for part of a highly structured mRNA export signal (Legiewicz et al., 2010) near the 3′ end of the internal region of ETn (Figure 1—figure supplement 3C). Both signals are characterized by stem-loop intramolecular base-pairing in which a single mutation can disrupt loop formation. This indicates that at least some KRAB-ZFPs evolved to bind functionally essential target sequences which cannot easily evade repression by mutation.
Our KRAB-ZFP ChIP-seq dataset also provided unique insights into the emergence of new KRAB-ZFPs and binding patterns. The Chr4-cl KRAB-ZFPs REX2 and ZFP600 bind to the same target within ETn but with varying affinity (Figure 1C). Comparison of the amino acids responsible for DNA contact revealed a high similarity between REX2 and ZFP600, with the main differences at the most C-terminal zinc fingers. Additionally, we found that GM30910, another KRAB-ZFP encoded in the Chr4-cl, also shows a strong similarity to both KRAB-ZFPs yet targets entirely different groups of TEs (Figure 1C and Supplementary file 1). Together with previously shown data (Ecco et al., 2016), this example highlights how addition of a few new zinc fingers to an existing array can entirely shift the mode of DNA binding.
### Genetic deletion of KRAB-ZFP gene clusters leads to retrotransposon reactivation
The majority of KRAB-ZFP genes are harbored in large, highly repetitive clusters that have formed by successive complex segmental duplications (Kauzlaric et al., 2017), rendering them inaccessible to conventional gene targeting. We therefore developed a strategy to delete entire KRAB-ZFP gene clusters in ES cells (including the Chr2-cl and Chr4-cl as well as two clusters on chromosome 13 and a cluster on chromosome 10) using two CRISPR/Cas9 gRNAs targeting unique regions flanking each cluster, and short single-stranded repair oligos with homologies to both sides of the projected cut sites. Using this approach, we generated five cluster KO ES cell lines in at least two biological replicates and performed RNA sequencing (RNA-seq) to determine TE expression levels. Strikingly, four of the five cluster KO ES cells exhibited distinct TE reactivation phenotypes (Figure 2A). Chr2-cl KO resulted in reactivation of several L1 subfamilies as well as RLTR10 (up to more than 100-fold as compared to WT) and IAPEz ERVs. In contrast, the most strongly upregulated TEs in Chr4-cl KO cells were ETn/ETnERV (up to 10-fold as compared to WT), with several other ERV groups modestly reactivated. ETn/ETnERV elements were also upregulated in Chr13.2-cl KO ES cells while the only upregulated ERVs in Chr13.1-cl KO ES cells were MMERVK10C elements (Figure 2A). Most reactivated retrotransposons were targeted by at least one KRAB-ZFP that was encoded in the deleted cluster (Figure 2A and Supplementary file 1), indicating a direct effect of these KRAB-ZFPs on TE expression levels. Furthermore, we observed a loss of KAP1 binding and H3K9me3 at several TE subfamilies that are targeted by at least one KRAB-ZFP within the deleted Chr2-cl and Chr4-cl (Figure 2B, Figure 2—figure supplement 1A), including L1, ETn and IAPEz elements. Using reduced representation bisulfite sequencing (RRBS-seq), we found that a subset of KRAB-ZFP bound TEs were partially hypomethylated in Chr4-cl KO ES cells, but only when grown in genome-wide hypomethylation-inducing conditions (Blaschke et al., 2013; Figure 2C and Supplementary file 2). These data are consistent with the hypothesis that KRAB-ZFPs/KAP1 are not required to establish DNA methylation, but under certain conditions they protect specific TEs and imprint control regions from genome-wide demethylation (Leung et al., 2014; Deniz et al., 2018).
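A toy version of the reactivation call on summed per-subfamily read counts might look as follows; the published analysis used DESeq2's negative-binomial model, so the t-test here is only a stand-in, and the counts are invented:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

wt = {"ETn": [210, 190, 205], "IAPEz": [800, 760, 790]}     # replicate counts, WT
ko = {"ETn": [2100, 1900, 2300], "IAPEz": [820, 770, 805]}  # replicate counts, cluster KO

names, lfcs, pvals = [], [], []
for te in wt:
    names.append(te)
    lfcs.append(np.log2(np.mean(ko[te]) / np.mean(wt[te])))
    pvals.append(stats.ttest_ind(ko[te], wt[te]).pvalue)    # stand-in for DESeq2

reject, padj, _, _ = multipletests(pvals, method="fdr_bh")  # adjusted p-value cutoff
for te, lfc, p, sig in zip(names, lfcs, padj, reject):
    print(f"{te}: log2FC={lfc:+.1f}, padj={p:.3g}, reactivated={sig}")
```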
### KRAB-ZFP cluster deletions license TE-borne enhancers
We next used our RNA-seq datasets to determine the effect of KRAB-ZFP cluster deletions on gene expression. We identified 195 significantly upregulated and 130 downregulated genes in Chr4-cl KO ES cells, and 108 upregulated and 59 downregulated genes in Chr2-cl KO ES cells (excluding genes on the deleted cluster) (Figure 3A). To address whether gene deregulation in Chr2-cl and Chr4-cl KO ES cells is caused by nearby TE reactivation, we determined whether genes near certain TE subfamilies are more frequently deregulated than random genes. We found a strong correlation of gene upregulation and TE proximity for several TE subfamilies, of which many became transcriptionally activated themselves (Figure 3B). For example, nearly 10% of genes that are located within 100 kb (up- or downstream of the TSS) of an ETn element are upregulated in Chr4-cl KO ES cells, as compared to 0.8% of all genes. In Chr2-cl KO ES cells, upregulated genes were significantly enriched near various LINE groups but also IAPEz-int and RLTR10-int elements, indicating that TE-binding KRAB-ZFPs in these clusters limit the potential activating effects of TEs on nearby genes.
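The proximity analysis boils down to asking whether upregulated genes have a TE within a fixed window of their TSS more often than genes overall. A minimal sketch with invented coordinates on a single chromosome:

```python
def near_te(tss, te_positions, window=100_000):
    """True if any TE lies within `window` bp of the TSS."""
    return any(abs(tss - p) <= window for p in te_positions)

etn = [150_000, 1_200_000, 3_400_000]         # ETn insertion sites (invented)
up_tss = [180_000, 1_150_000, 2_000_000]      # TSS of upregulated genes
all_tss = up_tss + [500_000, 800_000, 2_600_000, 3_900_000]

frac_up = sum(near_te(t, etn) for t in up_tss) / len(up_tss)
frac_all = sum(near_te(t, etn) for t in all_tss) / len(all_tss)
print(f"upregulated: {frac_up:.0%} near ETn; all genes: {frac_all:.0%}")
```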
While we generally observed that TE-associated gene reactivation is not caused by elongated or spliced transcription starting at the retrotransposons, we did observe that the effect of ETn elements on gene expression is stronger for genes in closer proximity. About 25% of genes located within 20 kb of an ETn element, but only 5% of genes located at a distance between 50 and 100 kb from the nearest ETn insertion, become upregulated in Chr4-cl KO ES cells. Importantly however, the correlation is still significant for genes that are located at distances between 50 and 100 kb from the nearest ETn insertion, indicating that ETn elements can act as long-range enhancers of gene expression in the absence of KRAB-ZFPs that target them. To confirm that Chr4-cl KRAB-ZFPs such as GM13051 block ETn-borne enhancers, we tested the ability of a putative ETn enhancer to activate transcription in a reporter assay. For this purpose, we cloned a 5 kb fragment spanning from the GM13051 binding site within the internal region of a truncated ETn insertion to the first exon of the Cd59a gene, which is strongly activated in Chr4-cl KO ES cells (Figure 2—figure supplement 1B). We observed strong transcriptional activity of this fragment which was significantly higher in Chr4-cl KO ES cells. Surprisingly, this activity was reduced to background when the internal segment of the ETn element was not included in the fragment, suggesting the internal segment of the ETn element, but not its LTR, contains a Chr4-cl KRAB-ZFP sensitive enhancer. To further corroborate these findings, we genetically deleted an ETn element that is located about 60 kb from the TSS of Chst1, one of the top-upregulated genes in Chr4-cl KO ES cells (Figure 3C). RT-qPCR analysis revealed that the Chst1 upregulation phenotype in Chr4-cl KO ES cells diminishes when the ETn insertion is absent, providing direct evidence that a KRAB-ZFP controlled ETn-borne enhancer regulates Chst1 expression (Figure 3D). Furthermore, ChIP-seq confirmed a general increase of H3K4me3, H3K4me1 and H3K27ac marks at ETn elements in Chr4-cl KO ES cells (Figure 3E). Notably, enhancer marks were most pronounced around the GM13051 binding site near the 3′ end of the internal region, confirming that the enhancer activity of ETn is located in the internal region and not in the LTR.
### ETn retrotransposition in Chr4-cl KO and WT mice
IAP, ETn/ETnERV and MuLV/RLTR4 retrotransposons are highly polymorphic in inbred mouse strains (Nellåker et al., 2012), indicating that these elements are able to mobilize in the germ line. Since these retrotransposons are upregulated in Chr2-cl and Chr4-cl KO ES cells, we speculated that these KRAB-ZFP clusters evolved to minimize the risks of insertional mutagenesis by retrotransposition. To test this, we generated Chr2-cl and Chr4-cl KO mice via ES cell injection into blastocysts, and after germ line transmission we genotyped the offspring of heterozygous breeding pairs. While the offspring of Chr4-cl KO/WT parents were born close to Mendelian ratios in pure C57BL/6 and mixed C57BL/6 x 129Sv matings, one Chr4-cl KO/WT breeding pair gave birth to significantly fewer KO mice than expected (p-value=0.022) (Figure 4—figure supplement 1A). Likewise, two out of four Chr2-cl KO breeding pairs on mixed C57BL/6 x 129Sv matings failed to give birth to a single KO offspring (p-value<0.01) while the two other mating pairs produced KO offspring at near Mendelian ratios (Figure 4—figure supplement 1A). Altogether, these data indicate that KRAB-ZFP clusters are not absolutely essential in mice, but that genetic and/or epigenetic factors may contribute to reduced viability.
We reasoned that retrotransposon activation could account for the reduced viability of Chr2-cl and Chr4-cl KO mice in some matings. However, since only rare matings produced non-viable KO embryos, we instead turned to the viable KO mice to assay for increased transposon activity. RNA-seq in blood, brain and testis revealed that, with a few exceptions, retrotransposons upregulated in Chr2 and Chr4 KRAB-ZFP cluster KO ES cells are not expressed at higher levels in adult tissues (Figure 4—figure supplement 1B). Likewise, no strong transcriptional TE reactivation phenotype was observed in liver and kidney of Chr4-cl KO mice (data not shown) and ChIP-seq with antibodies against H3K4me1, H3K4me3 and H3K27ac in testis of Chr4-cl WT and KO mice revealed no increase of active histone marks at ETn elements or other TEs (data not shown). This indicates that Chr2-cl and Chr4-cl KRAB-ZFPs are primarily required for TE repression during early development. This is consistent with the high expression of these KRAB-ZFPs uniquely in ES cells (Figure 1—figure supplement 1A). To determine whether retrotransposition occurs at a higher frequency in Chr4-cl KO mice during development, we screened for novel ETn (ETn/ETnERV) and MuLV (MuLV/RLTR4_MM) insertions in viable Chr4-cl KO mice. For this purpose, we developed a capture-sequencing approach to enrich for ETn/MuLV DNA and flanking sequences from genomic DNA using probes that hybridize with the 5′ and 3′ ends of ETn and MuLV LTRs prior to deep sequencing. We screened genomic DNA samples from a total of 76 mice, including 54 mice from ancestry-controlled Chr4-cl KO matings in various strain backgrounds, the two ES cell lines the Chr4-cl KO mice were generated from, and eight mice from a Chr2-cl KO mating which served as a control (since ETn and MuLVs are not activated in Chr2-cl KO ES cells) (Supplementary file 4). Using this approach, we were able to enrich reads mapping to ETn/MuLV LTRs about 2,000-fold compared to genome sequencing without capture. ETn/MuLV insertions were determined by counting uniquely mapped reads that were paired with reads mapping to ETn/MuLV elements (see Materials and methods for details). To assess the efficiency of the capture approach, we determined what proportion of a set of 309 largely intact (two LTRs flanking an internal sequence) reference ETn elements could be identified using our sequencing data. 95% of these insertions were called with high confidence in the majority of our samples (data not shown), indicating that we are able to identify ETn insertions at a high recovery rate.
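The insertion-calling idea can be sketched as follows: keep uniquely mapped anchor reads whose mate maps to an ETn/MuLV consensus, then cluster the anchor positions into candidate loci. The BAM file name, contig names, and thresholds are hypothetical placeholders, not the authors' parameters:

```python
from collections import defaultdict
import pysam

TE_CONTIGS = {"ETn_consensus", "MuLV_consensus"}  # assumed names in the reference index

anchors = defaultdict(list)                        # chromosome -> anchor read starts
with pysam.AlignmentFile("capture_seq.bam", "rb") as bam:
    for read in bam:
        if (read.is_paired and not read.is_unmapped and not read.mate_is_unmapped
                and read.mapping_quality >= 30                # uniquely mapped side
                and read.reference_name not in TE_CONTIGS
                and read.next_reference_name in TE_CONTIGS):  # mate sits on a TE
            anchors[read.reference_name].append(read.reference_start)

# Merge anchors within 500 bp and report clusters with enough read support.
for chrom, positions in anchors.items():
    positions.sort()
    cluster = [positions[0]]
    for pos in positions[1:] + [None]:             # None flushes the last cluster
        if pos is not None and pos - cluster[-1] <= 500:
            cluster.append(pos)
        else:
            if len(cluster) >= 5:                  # minimum support (arbitrary here)
                print(f"{chrom}:{cluster[0]}-{cluster[-1]}\t{len(cluster)} reads")
            cluster = [pos] if pos is not None else []
```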
Using this dataset, we first confirmed the polymorphic nature of both ETn and MuLV retrotransposons in laboratory mouse strains (Figure 4—figure supplement 2A), highlighting the potential of these elements to retrotranspose. To identify novel insertions, we filtered out insertions that were supported by ETn/MuLV-paired reads in more than one animal. While none of the 54 ancestry-controlled mice showed a single novel MuLV insertion, we observed greatly varying numbers of up to 80 novel ETn insertions in our pedigree (Figure 4A).
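The one-animal filter is plain set logic: a locus supported in several animals is treated as inherited or polymorphic rather than novel. A minimal sketch using hypothetical loci (the animal IDs are ones mentioned later in the text):

```python
# locus -> animals whose reads support it
calls = {
    "chr4:10500-11000": {"T09673"},                       # seen once: candidate novel insertion
    "chr7:220000-220600": {"T09673", "T09674", "T3913"},  # shared: polymorphic, drop
}
novel = {locus for locus, animals in calls.items() if len(animals) == 1}
print(novel)  # {'chr4:10500-11000'}
```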
To validate some of the novel ETn insertions, we designed specific PCR primers for five of the insertions and screened genomic DNA of the mice in which they were identified as well as their parents. For all tested insertions, we were able to amplify their flanking sequence and show that these insertions are absent in their parents (Figure 4—figure supplement 3A). To confirm their identity, we amplified and sequenced three of the novel full-length ETn insertions. Two of these elements (Genbank accession: MH449667-68) resembled typical ETnII elements with identical 5′ and 3′ LTRs and target site duplications (TSD) of 4 or 6 bp, respectively. The third sequenced element (MH449669) represented a hybrid element that contains both ETnI and MusD (ETnERV) sequences. Similar insertions can be found in the B6 reference genome; however, the identified novel insertion has a 2.5 kb deletion of the 5′ end of the internal region. Additionally, the 5′ and 3′ LTRs of this element differ in one nucleotide near the start site and contain an unusually large 248 bp TSD (containing a SINE repeat), indicating that an improper integration process might have truncated this element.
Besides novel ETn insertions that were only identified in one specific animal, we also observed three ETn insertions that could be detected in several siblings but not in their parents or any of the other screened mice. This strongly indicates that these retrotransposition events occurred in the germ line of the parents from which they were passed on to some of their offspring. One of these germ line insertions was evidently passed on from the offspring to the next generation (Figure 4A). As expected, the read numbers supporting these novel germ line insertions were comparable to the read numbers that were found in the flanking regions of annotated B6 ETn insertions (Figure 4—figure supplement 3B). In contrast, virtually all novel insertions that were only found in one animal were supported by significantly fewer reads (Figure 4—figure supplement 3B). This indicates that these elements resulted from retrotransposition events in the developing embryo and not in the zygote or parental germ cells. Indeed, we detected different sets of insertions in various tissues from the same animal (Figure 4—figure supplement 3C). Even between tail samples that were collected from the same animal at different ages, only a fraction of the new insertions were present in both samples, while technical replicates from the same genomic DNA samples showed a nearly complete overlap in insertions (Figure 4—figure supplement 3D).
Finally, we asked whether there were more novel ETn insertions in mice lacking the Chr4-cl relative to their wild type and heterozygous littermates in our pedigree. Interestingly, only one out of the eight Chr4-cl KO mice in a pure C57BL/6 strain background and none of the eight offspring from a Chr2-cl mating carried a single novel ETn insertion (Figure 4A). When crossing into a 129Sv background for a single generation before intercrossing heterozygous mice (F1), we observed 4 out of 8 Chr4-cl KO mice that contained at least one new ETn insertion, whereas none of 3 heterozygous mice contained any insertions. After crossing to the 129Sv background for a second generation (F2), we determined the number of novel ETn insertions in the offspring of one KO/WT x KO and two KO/WT x KO/WT matings, excluding all samples that were not derived from juvenile tail tissue. Only in the offspring of the KO/WT x KO mating did we observe a significantly higher average number of ETn insertions in KO vs. KO/WT animals (29.6 vs. 7.3, p=0.045, Figure 4B). Other than that, only a non-significant trend towards greater average numbers of ETn insertions in KO (27.8 vs. 11, p=0.192, Figure 4B) was apparent in one of the WT/KO x KO/WT matings, whereas no difference in ETn insertion numbers between WT and KO mice could be observed in the second WT/KO x KO/WT mating (26 vs. 31, p=0.668, Figure 4B). When comparing all KO with all WT and WT/KO mice from these three matings, a trend towards more ETn insertions in KO remained but was not supported by strong significance (26 vs. 13, p=0.057, Figure 4B). Altogether, we observed a high variability in the number of new ETn insertions in both KO and WT mice, but our data suggest that the Chr4-cl KRAB-ZFPs may have a modest effect on ETn retrotransposition rates in some mouse strains, while other genetic and epigenetic effects clearly also play an important role.
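The KO-versus-littermate comparison in Figure 4B is a one-sided Wilcoxon rank sum (Mann-Whitney U) test on per-mouse insertion counts; the counts below are made up for illustration:

```python
from scipy.stats import mannwhitneyu

ko_counts = [29, 41, 18, 35, 22]   # novel ETn insertions per KO mouse (invented)
het_counts = [7, 5, 12, 9]         # per KO/WT littermate (invented)

stat, p = mannwhitneyu(ko_counts, het_counts, alternative="greater")
print(f"U = {stat}, one-sided p = {p:.3f}")
```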
## Discussion
C2H2 zinc finger proteins, about half of which contain a KRAB repressor domain, represent the largest DNA-binding protein family in mammals. Nevertheless, most of these factors have not been investigated using loss-of-function studies. The most comprehensive characterization of human KRAB-ZFPs revealed a strong preference to bind TEs (Imbeault et al., 2017; Najafabadi et al., 2015), yet their function remains unknown. In humans, very few TEs are capable of retrotransposition, yet many of them, often tens of millions of years old, are bound by KRAB-ZFPs. While this suggests that human KRAB-ZFPs mainly serve to control TE-borne enhancers and may also have transcription-independent functions, we were interested in the biological significance of KRAB-ZFPs in restricting potentially active TEs. The mouse is an ideal model for such studies since the mouse genome contains several active TE families, including IAP, ETn and L1 elements. We found that many of the young KRAB-ZFPs present in the genomic clusters of KRAB-ZFPs on chromosomes 2 and 4, which are highly expressed in a restricted pattern in ES cells, bound redundantly to these three active TE families. In several cases, KRAB-ZFPs bound to functionally constrained sequence elements we and others have demonstrated to be necessary for retrotransposition, including PBS and viral packaging signals. Targeting such sequences may help the host defense system keep pace with rapidly evolving mouse transposons. This provides strong evidence that many young KRAB-ZFPs are indeed expanding in response to TE activity. But do these young KRAB-ZFP genes limit the mobilization of TEs? Despite the large number of polymorphic ETn elements in mouse strains (Nellåker et al., 2012) and several reports of phenotype-causing novel ETn germ line insertions, no new ETn insertions were reported in recent screens of C57BL/6 mouse genomes (Richardson et al., 2017; Gagnier et al., 2019), indicating that the overall rate of ETn germ line mobilization in inbred mice is rather low. We have demonstrated that Chr4-cl KRAB-ZFPs control ETn/ETnERV expression in ES cells, but this does not lead to widespread ETn mobility in viable C57BL/6 mice. In contrast, we found numerous novel, including several germ line, ETn insertions in both WT and Chr4-cl KO mice in a C57BL/6 x 129Sv mixed genetic background, with generally more insertions in KO mice and in mice with more 129Sv DNA. This is consistent with a report detecting ETn insertions in FVB.129 mice (Schauer et al., 2018). Notably, there was a large variation in the number of new insertions in these mice, possibly caused by hyperactive polymorphic ETn insertions that varied from individual to individual, epigenetic variation at ETn insertions between individuals and/or the general stochastic nature of ETn mobilization. Furthermore, recent reports have suggested that KRAB-ZFP gene content is distinct in different strains of laboratory mice (Lilue et al., 2018; Treger et al., 2019), and reduced KRAB-ZFP gene content could contribute to increased activity in individual mice. Although we have yet to find obvious phenotypes in the mice carrying new insertions, novel ETn germ line insertions have been shown to cause phenotypes from short tails (Lugani et al., 2013; Semba et al., 2013; Vlangos et al., 2013) to limb malformation (Kano et al., 2007) and severe morphogenetic defects including polypodia (Lehoczky et al., 2013), depending upon their insertion site.
Despite the lack of widespread ETn activation in Chr4-cl KO mice, it remains to be determined whether other TEs, such as L1, IAP or other LTR retrotransposons, are activated in any of the KRAB-ZFP cluster KO mice; answering this will require the development of additional capture-seq based assays. Notably, two of the heterozygous matings of Chr2-cl KO mice failed to produce viable knockout offspring, which could indicate a TE-reactivation phenotype. It may also be necessary to generate compound homozygous mutants of distinct KRAB-ZFP clusters to eliminate redundancy before TEs become unleashed. The KRAB-ZFP cluster knockouts produced here will be useful reagents to test such hypotheses. In sum, our data support the notion that recent retrotransposon insertions are a major driver of KRAB-ZFP gene expansion in mice, and that redundancy within the KRAB-ZFP gene family and with other TE restriction pathways provides protection against widespread TE mobility, explaining why the majority of KRAB-ZFP genes are non-essential.
## Materials and methods
### Cell lines and transgenic mice
Mouse ES cells and F9 EC cells were cultivated as described previously (Wolf et al., 2015b) unless stated otherwise. Chr4-cl KO ES cells originate from B6;129 Gt(ROSA)26Sortm1(cre/ERT)Nat/J mice (Jackson lab); all other KRAB-ZFP cluster KO ES cell lines originate from JM8A3.N1 C57BL/6N-Atm1Brd ES cells (KOMP Repository). Chr2-cl KO and WT ES cells were initially grown in serum-containing media (Wolf et al., 2015b) but were switched to 2i media (De Iaco et al., 2017) for several weeks before analysis. To generate Chr4-cl and Chr2-cl KO mice, the cluster deletions were repeated in B6 (KOMP Repository) or R1 (Nagy lab) ES cells, respectively, and heterozygous clones were injected into B6 albino blastocysts. Chr2-cl KO mice were therefore kept on a mixed B6 × 129/Sv-CP strain background, while Chr4-cl KO mice were initially derived on a pure C57BL/6 background. For the capture-seq screens, Chr4-cl KO mice were crossed with 129X1/SvJ mice (Jackson lab) to produce the founder mice for the Chr4-cl KO and WT (B6/129 F1) offspring. Chr4-cl KO/WT (B6/129 F1) mice were also crossed with 129X1/SvJ mice to obtain Chr4-cl KO/WT (B6/129 F2) mice, which were intercrossed to give rise to the parents of the Chr4-cl KO/KO and KO/WT (B6/129 F2) offspring.
### Generation of KRAB-ZFP expressing cell lines
KRAB-ZFP ORFs were PCR-amplified from cDNA or synthesized with codon-optimization (Supplementary file 1), and stably expressed with 3XFLAG or 3XHA tags in F9 EC or ES cells using Sleeping Beauty transposon-based (Wolf et al., 2015b) or lentiviral expression vectors (Imbeault et al., 2017; Supplementary file 1). Cells were selected with puromycin (1 µg/ml), and resistant clones were pooled and further expanded for ChIP-seq.
### CRISPR/Cas9 mediated deletion of KRAB-ZFP clusters and an MMETn insertion
All gRNAs were expressed from the pX330-U6-Chimeric\_BB-CBh-hSpCas9 vector (RRID:Addgene\_42230) and nucleofected into 10⁶ ES cells using Amaxa nucleofection in the following amounts: 5 µg of each pX330-gRNA plasmid, 1 µg pPGK-puro and 500 pmol of single-stranded repair oligo (Supplementary file 3). One day after nucleofection, cells were kept under puromycin selection (1 µg/ml) for 24 hr. Individual KO and WT clones were picked 7–8 days after nucleofection and expanded for PCR genotyping (Supplementary file 3).
### ChIP-seq analysis
For ChIP-seq analysis of KRAB-ZFP expressing cells, 5–10 × 10⁷ cells were crosslinked and immunoprecipitated with anti-FLAG (Sigma-Aldrich Cat# F1804, RRID:AB\_262044) or anti-HA (Abcam Cat# ab9110, RRID:AB\_307019 or Covance Cat# MMS-101P-200, RRID:AB\_10064068) antibody using one of two previously described protocols (O'Geen et al., 2010; Imbeault et al., 2017) as indicated in Supplementary file 1. H3K9me3 distribution in Chr4-cl, Chr10-cl, Chr13.1-cl and Chr13.2-cl KO ES cells was determined by native ChIP-seq with anti-H3K9me3 serum (Active Motif Cat# 39161, RRID:AB\_2532132) as described previously (Karimi et al., 2011). In Chr2-cl KO ES cells, H3K9me3 and KAP1 ChIP-seq was performed as previously described (Ecco et al., 2016). In Chr4-cl KO and WT ES cells, KAP1 binding was determined by endogenous tagging of KAP1 with a C-terminal GFP (Supplementary file 3), followed by FACS to enrich for GFP-positive cells and ChIP with anti-GFP (Thermo Fisher Scientific Cat# A-11122, RRID:AB\_221569) using a previously described protocol (O'Geen et al., 2010). For ChIP-seq analysis of active histone marks, cross-linked chromatin from ES cells or testis (from two-week-old mice) was immunoprecipitated with antibodies against H3K4me3 (Abcam Cat# ab8580, RRID:AB\_306649), H3K4me1 (Abcam Cat# ab8895, RRID:AB\_306847) and H3K27ac (Abcam Cat# ab4729, RRID:AB\_2118291) following the protocols developed by O'Geen et al., 2010 or Khil et al., 2012, respectively.
ChIP-seq libraries were constructed and sequenced as indicated in Supplementary file 4. Reads were mapped to the mm9 genome using Bowtie (RRID:SCR\_005476; settings: --best) or Bowtie2 (Langmead and Salzberg, 2012) as indicated in Supplementary file 4. Under these settings, reads that map to multiple genomic regions are assigned to the top-scored match and, if a set of equally good choices is encountered, a pseudo-random number is used to choose one location. Peaks were called using MACS14 (RRID:SCR\_013291) under high stringency settings (p<1e-10, peak enrichment >20) (Zhang et al., 2008). Peaks were called both over the Input control and over a FLAG or HA control ChIP (unless otherwise stated in Supplementary file 4), and only peaks called in both settings were kept for further analysis. In cases where the stringency settings did not yield at least 50 peaks, the settings were relaxed to medium (p<1e-10, peak enrichment >10) or low (p<1e-5, peak enrichment >10) stringency (Supplementary file 4). For further analysis, all peaks were scaled to 200 bp regions centered on the peak summits. The overlap of the scaled peaks with each repeat element in the UCSC Genome Browser (RRID:SCR\_005780) was calculated using the fisher function (settings: -f 0.25) from BEDTools (RRID:SCR\_006646). The right-tailed p-values from the pairwise comparisons of each ChIP-seq peak set and repeat element were extracted and then adjusted using the Benjamini-Hochberg approach implemented in the R function p.adjust(). Binding motifs were determined using only nonrepetitive (<10% repeat content) peaks with MEME (Bailey et al., 2009). MEME motifs were compared with in silico predicted motifs (Najafabadi et al., 2015) using Tomtom (Bailey et al., 2009) and considered significantly overlapping at a False Discovery Rate (FDR) below 0.1. To find MEME and predicted motifs in repetitive peaks, we used FIMO (Bailey et al., 2009). Differential H3K9me3 and KAP1 distribution at TEs in WT and Chr2-cl or Chr4-cl KO ES cells was determined by counting ChIP-seq reads overlapping annotated insertions of each TE group using BEDTools (MultiCovBed). Additionally, ChIP-seq reads were counted at the TE fraction bound by Chr2-cl or Chr4-cl KRAB-ZFPs (overlapping with the 200 bp peaks). Count tables were concatenated and analyzed using DESeq2 (Love et al., 2014). The previously published ChIP-seq datasets for KAP1 (Castro-Diaz et al., 2014) and H3K9me3 (Dan et al., 2014) were re-mapped using Bowtie (--best).
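To make the enrichment step concrete, the following is a minimal sketch (not the study's code, which used BEDTools and R's p.adjust()) of a right-tailed Fisher's exact test per TE group followed by Benjamini-Hochberg adjustment; all overlap counts are hypothetical:

```python
# Sketch: right-tailed Fisher's exact test for ChIP-seq peak / repeat overlap,
# followed by Benjamini-Hochberg adjustment. All counts are hypothetical.
import numpy as np
from scipy.stats import fisher_exact

# 2x2 tables: [[peaks overlapping TE, peaks not overlapping],
#              [background regions overlapping, background not overlapping]]
tables = [
    [[120, 880], [300, 98700]],   # TE family 1 (hypothetical)
    [[15, 985], [290, 98710]],    # TE family 2 (hypothetical)
    [[3, 997], [310, 98690]],     # TE family 3 (hypothetical)
]

pvals = np.array([fisher_exact(t, alternative="greater")[1] for t in tables])

# Benjamini-Hochberg, mirroring R's p.adjust(method="BH"):
# sort ascending, scale p_(i) by m/i, then enforce monotonicity from the top
order = np.argsort(pvals)
ranked = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
adj = np.minimum.accumulate(ranked[::-1])[::-1].clip(max=1.0)
padj = np.empty_like(adj)
padj[order] = adj
print(padj)
```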
### Luciferase reporter assays
For KRAB-ZFP repression assays, double-stranded DNA oligos containing KRAB-ZFP target sequences (Supplementary file 3) were cloned upstream of the SV40 promoter of the pGL3-Promoter vector (Promega) between the NheI and XhoI restriction sites. 33 ng of reporter vector was co-transfected (Lipofectamine 2000, ThermoFisher) with 33 ng pRL-SV40 (Promega) for normalization and 33 ng of transient KRAB-ZFP expression vector (in pcDNA3.1) or empty pcDNA3.1 into 293T cells seeded one day earlier in 96-well plates. Cells were lysed 48 hr after transfection and firefly/Renilla luciferase activity was measured using the Dual-Luciferase Reporter Assay System (Promega). To measure the transcriptional activity of the MMETn element upstream of the Cd59a gene, fragments of varying sizes (Supplementary file 3) were cloned into the promoter-less pGL3-basic vector (Promega) using the NheI and NcoI sites. 70 ng of reporter vector was cotransfected with 30 ng pRL-SV40 into feeder-depleted Chr4-cl WT and KO ES cells, seeded into a gelatinized 96-well plate 2 hr before transfection. Luciferase activity was measured 48 hr after transfection as described above.
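The normalization used here is a simple ratio computation; a brief sketch with hypothetical luminescence readings (the study used paired firefly/Renilla measurements from the Dual-Luciferase system):

```python
# Sketch: firefly/Renilla normalization for the reporter assays described
# above. All raw luminescence readings below are hypothetical placeholders.
from statistics import mean

def normalized(firefly, renilla):
    """Firefly signal normalized to the Renilla co-transfection control."""
    return firefly / renilla

# Triplicates: reporter + KRAB-ZFP expression vector vs. reporter + empty pcDNA3.1
zfp_wells = [(1200, 9800), (1100, 10100), (1300, 9500)]
empty_wells = [(9100, 9900), (8800, 10300), (9600, 9700)]

zfp = mean(normalized(f, r) for f, r in zfp_wells)
ctrl = mean(normalized(f, r) for f, r in empty_wells)

# Activity relative to the empty-vector control; values well below 1 indicate repression
print(f"relative luciferase activity = {zfp / ctrl:.2f}")
```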
### RNA-seq analysis
Whole RNA was purified using RNeasy columns (Qiagen) with on-column DNase treatment or the High Pure RNA Isolation Kit (Roche) (Supplementary file 4). Tissues were first lysed in TRIzol reagent (ThermoFisher) and RNA was purified after the isopropanol precipitation step using RNeasy columns (Qiagen) with on-column DNase treatment. Libraries were generated using the SureSelect Strand-Specific RNA Library Prep kit (Agilent) or Illumina's TruSeq RNA Library Prep Kit (with polyA selection) and sequenced as 50 or 100 bp paired-end reads on an Illumina HiSeq2500 (RRID:SCR\_016383) or HiSeq3000 (RRID:SCR\_016386) machine (Supplementary file 4). RNA-seq reads were mapped to the mouse genome (mm9) using Tophat (RRID:SCR\_013035; settings: -I 200000 -g 1) unless otherwise stated. These settings report each mappable read once; if a read maps to multiple locations equally well, one match is randomly chosen. For differential transposon expression, mapped reads overlapping TEs annotated in Repeatmasker (RRID:SCR\_012954) were counted using BEDTools MultiCovBed (setting: -split). Reads mapping to multiple fragments that belong to the same TE insertion (as indicated by the repeat ID) were summed. Only transposons with a total of at least 20 (for two biological replicates) or 30 (for three biological replicates) mapped reads across WT and KO samples were considered for differential expression analysis. Transposons within the deleted KRAB-ZFP cluster were excluded from the analysis. Read count tables were used for differential expression analysis with DESeq2 (RRID:SCR\_015687). For differential gene expression analysis, reads overlapping gene exons were counted using HTSeq-count and analyzed using DESeq2. To test whether KRAB-ZFP peaks are significantly enriched near up- or downregulated genes, a binomial test was performed. Briefly, the proportion of peaks located within a certain distance up- or downstream of the TSS of genes was determined using the windowBed function of BEDTools. The probability p in the binomial distribution was estimated as the fraction of all genes overlapping KRAB-ZFP peaks. Then, given n, the number of genes in a specific group, and x, the number of genes in that group overlapping peaks, the R function binom.test() was used to estimate the p-value based on a right-tailed binomial test. Finally, the adjusted p-values were determined separately for LTR and LINE retrotransposon groups using the Benjamini-Hochberg approach implemented in the R function p.adjust().
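As a sketch of the enrichment test described above, the right-tailed binomial test (the R call binom.test(x, n, p, alternative="greater")) can be reproduced as follows; all gene and peak counts are hypothetical:

```python
# Sketch: right-tailed binomial test for KRAB-ZFP peak enrichment near
# deregulated genes. All numbers below are hypothetical placeholders.
from scipy.stats import binom

n_all_genes = 20000         # all annotated genes
n_genes_with_peak = 800     # genes with a KRAB-ZFP peak within the window
p = n_genes_with_peak / n_all_genes   # background probability

n_group = 150               # e.g. upregulated genes (hypothetical)
x_group_with_peak = 18      # of those, genes overlapping a peak (hypothetical)

# P(X >= x) under Binomial(n, p); sf(x-1) gives the right tail including x
p_value = binom.sf(x_group_with_peak - 1, n_group, p)
print(f"right-tailed binomial p = {p_value:.3g}")
```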
### Reduced representation bisulfite sequencing (RRBS-seq)
For RRBS-seq analysis, Chr4-cl WT and KO ES cells were grown either in standard ES cell media containing FCS or for one week in 2i media containing vitamin C, as described previously (Blaschke et al., 2013). Genomic DNA was purified from WT and Chr4-cl KO ES cells using the Quick-gDNA purification kit (Zymo Research) and bisulfite-converted with the NEXTflex Bisulfite-Seq Kit (Bioo Scientific) using MspI digestion to fragment DNA. Libraries were sequenced as 50 bp paired-end reads on an Illumina HiSeq. The reads were processed using Trim Galore (--illumina --paired --rrbs) to trim poor-quality bases and adaptors. Additionally, the first 5 nt of R2 and the last 3 nt of R1 and R2 were trimmed. Reads were then mapped to the reference genome (mm9) using Bismark (Krueger and Andrews, 2011) to extract methylation calls. The CpG methylation pattern for each covered CpG dyad (two complementary CG dinucleotides) was calculated using a custom script (Source code 1: get\_CpG\_ML.pl). For the comparison of CpG methylation between WT and Chr4-cl KO ES cells (in serum or 2i + vitamin C conditions), only CpG sites with at least 10-fold coverage in each sample were considered.
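A minimal sketch of the per-dyad methylation computation and coverage filter described above (the study used the custom get_CpG_ML.pl script; the counts below are hypothetical):

```python
# Sketch: per-CpG-dyad methylation level with a minimum coverage filter,
# combining calls from both strands of the CG dinucleotide.
# Input tuples are hypothetical: (chrom, pos, methylated_reads, total_reads).
dyads = [
    ("chr4", 145_001_200, 18, 20),
    ("chr4", 145_001_850, 2, 25),
    ("chr4", 145_002_410, 5, 8),   # <10-fold coverage, filtered out below
]

MIN_COVERAGE = 10

for chrom, pos, meth, total in dyads:
    if total < MIN_COVERAGE:
        continue  # only dyads with >=10-fold coverage in each sample are kept
    level = meth / total
    print(f"{chrom}:{pos}\tcoverage={total}\tmethylation={level:.2f}")
```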
### Retrotransposition assay
The retrotransposition vectors pCMV-MusD2, pCMV-MusD2-neoTNF and pCMV-ETnI1-neoTNF (Ribet et al., 2004) were a kind gift from Dixie Mager. To partially delete the Gm13051 binding site within pCMV-MusD2-neoTNF, the vector was cut with KpnI and re-ligated using a repair oligo, leaving a 24 bp deletion within the Gm13051 binding site. The Rex2 binding site in pCMV-ETnI1-neoTNF was deleted by cutting the vector with EcoRI and XbaI followed by re-ligation using two overlapping PCR products, leaving a 45 bp deletion while maintaining the rest of the vector unchanged (see Supplementary file 3 for primer sequences). For MusD retrotransposition assays, 5 × 10⁴ HeLa cells (ATCC CCL-2) were transfected in a 24-well dish with 100 ng pCMV-MusD2-neoTNF or pCMV-MusD2-neoTNF (ΔGm13051-m) using Lipofectamine 2000. For ETn retrotransposition assays, 50 ng of pCMV-ETnI1-neoTNF or pCMV-ETnI1-neoTNF (ΔRex2) vectors were cotransfected with 50 ng pCMV-MusD2 to provide gag and pol proteins in trans. G418 (0.6 mg/ml) was added five days after transfection and cells were grown under selection until colonies were readily visible by eye. G418-resistant colonies were stained with Amido Black (Sigma).
### Capture-seq screen
To identify novel retrotransposon insertions, genomic DNA from various tissues (Supplementary file 4) was purified and used for library construction with target enrichment using the SureSelectQXT Target Enrichment kit (Agilent). Custom RNA capture probes were designed to hybridize with the 120 bp 5′ ends of the 5′ LTRs and the 120 bp 3′ ends of the 3′ LTRs of about 600 intact (internal region flanked by two LTRs) MMETn/RLTRETN retrotransposons or of 140 RLTR4\_MM/RLTR4 retrotransposons that were upregulated in Chr4-cl KO ES cells (Figure 4—source data 2). Enriched libraries were sequenced on an Illumina HiSeq as paired-end 50 bp reads. R1 and R2 reads were mapped to the mm9 genome separately, using settings that only allow non-duplicated, uniquely mappable reads (Bowtie -m 1 --best --strata; samtools rmdup -s) and under settings that allow multimapping and duplicated reads (Bowtie --best). Of the latter, only reads that overlap (min. 50% of the read) with RLTRETN, MMETn-int, ETnERV-int, ETnERV2-int or ETnERV3-int repeats (ETn) or RLTR4, RLTR4\_MM-int or MuLV-int repeats (RLTR4) were kept. Only uniquely mappable reads whose paired reads overlapped the repeats mentioned above were used for further analysis. All ETn- and RLTR4-paired reads were then clustered (as bed files) using BEDTools (bedtools merge -i -n -d 1000) to obtain a list of all potential annotated and non-annotated new ETn or RLTR4 insertion sites, and all overlapping ETn- or RLTR4-paired reads were counted for each sample at each locus. Finally, all regions located within 1 kb of an annotated RLTRETN, MMETn-int, ETnERV-int, ETnERV2-int or ETnERV3-int repeat, as well as regions overlapping previously identified polymorphic ETn elements (Nellåker et al., 2012), were removed. Genomic loci with at least 10 reads per million unique ETn- or RLTR4-paired reads were considered insertion sites. To qualify as a de-novo insertion, we required that no insertion was called at the locus in any of the other screened mice and that not a single read was present at the locus in the ancestors of the mouse. Insertions at the same locus in at least two siblings from the same offspring were considered germ line insertions if the insertion was absent in the parents and in mice that were not direct descendants of these siblings. Full-length sequencing of new ETn insertions was done by Sanger sequencing of short PCR products in combination with Illumina sequencing of a large PCR product (Supplementary file 3), followed by de-novo assembly using the Unicycler software.
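A condensed sketch of the insertion-calling logic described above (RPM threshold plus de-novo criteria); the sample names, library sizes and read counts are hypothetical placeholders:

```python
# Sketch: calling novel ETn insertions from clustered capture-seq read counts.
# counts[sample][locus] = ETn-paired reads at that locus; all values hypothetical.
counts = {
    "mouse_A": {"chr7:1100500": 85, "chr12:330200": 0},
    "mouse_B": {"chr7:1100500": 0, "chr12:330200": 40},
}
library_size = {"mouse_A": 2_500_000, "mouse_B": 3_100_000}  # unique ETn-paired reads
ancestors = {"mouse_A": [], "mouse_B": ["mouse_A"]}          # screened ancestors

def rpm(sample, locus):
    return counts[sample].get(locus, 0) / library_size[sample] * 1e6

def called(sample, locus, threshold=10):
    # >=10 reads per million unique ETn-paired reads -> insertion site
    return rpm(sample, locus) >= threshold

def is_de_novo(sample, locus):
    no_other_calls = not any(called(s, locus) for s in counts if s != sample)
    # not a single read at the locus in any screened ancestor of this mouse
    no_ancestor_reads = all(counts[a].get(locus, 0) == 0 for a in ancestors[sample])
    return called(sample, locus) and no_other_calls and no_ancestor_reads

for s in counts:
    for locus in counts[s]:
        if is_de_novo(s, locus):
            print(f"{s}: de novo ETn insertion at {locus}")
```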
## Tables
Table 1.: *Number of protein-coding KRAB-ZFP genes identified in a previously published screen (Imbeault et al., 2017); the ChIP-seq data column indicates the number of KRAB-ZFPs for which ChIP-seq was performed in this study.
| Cluster | Location | Size (Mb) | # of KRAB-ZFPs* | ChIP-seq data |
|-----------|------------|-------------|-------------------|-----------------|
| Chr2 | Chr2 qH4 | 3.1 | 40 | 17 |
| Chr4 | Chr4 qE1 | 2.3 | 21 | 19 |
| Chr10 | Chr10 qC1 | 0.6 | 6 | 1 |
| Chr13.1 | Chr13 qB3 | 1.2 | 6 | 2 |
| Chr13.2 | Chr13 qB3 | 0.8 | 26 | 12 |
| Chr8 | Chr8 qB3.3 | 0.1 | 4 | 4 |
| Chr9 | Chr9 qA3 | 0.1 | 4 | 2 |
| Other | - | - | 248 | 4 |
Key resources table:
| Reagent type (species) or resource | Designation | Source or reference | Identifiers | Additional information |
|------------------------------------------|----------------------------------------|-----------------------------------|-------------------------------------|------------------------------------------------------|
| Strain, strain background (Mus musculus) | 129X1/SvJ | The Jackson Laboratory | 000691 | Mice used to generate mixed strain Chr4-cl KO mice |
| Cell line (Homo-sapiens) | HeLa | ATCC | ATCC CCL-2 | |
| Cell line (Mus musculus) | JM8A3.N1 C57BL/6N-Atm1Brd | KOMP Repository | PL236745 | B6 ES cells used to generate KO cell lines and mice |
| Cell line (Mus musculus) | B6;129 Gt(ROSA)26Sortm1(cre/ERT)Nat/J | The Jackson Laboratory | 004847 | ES cells used to generate KO cell lines and mice |
| Cell line (Mus musculus) | R1 ES cells | Andras Nagy lab | R1 | 129 ES cells used to generate KO cell lines and mice |
| Cell line (Mus musculus) | F9 Embryonic carcinoma cells | ATCC | ATCC CRL-1720 | |
| Antibody | Mouse monoclonal ANTI-FLAG M2 antibody | Sigma-Aldrich | Cat# F1804, RRID:AB\_262044 | ChIP (1 µg/107 cells) |
| Antibody | Rabbit polyclonal anti-HA | Abcam | Cat# ab9110, RRID:AB\_307019 | ChIP (1 µg/107 cells) |
| Antibody | Mouse monoclonal anti-HA | Covance | Cat# MMS-101P-200, RRID:AB\_10064068 | |
| Antibody | Rabbit polyclonal anti-H3K9me3 | Active Motif | Cat# 39161, RRID:AB\_2532132 | ChIP (3 µl/107 cells) |
| Antibody | Rabbit polyclonal anti-GFP | Thermo Fisher Scientific | Cat# A-11122, RRID:AB\_221569 | ChIP (1 µg/107 cells) |
| Antibody | Rabbit polyclonal anti- H3K4me3 | Abcam | Cat# ab8580, RRID:AB\_306649 | ChIP (1 µg/107 cells) |
| Antibody | Rabbit polyclonal anti- H3K4me1 | Abcam | Cat# ab8895, RRID:AB\_306847 | ChIP (1 µg/107 cells) |
| Antibody | Rabbit polyclonal anti- H3K27ac | Abcam | Cat# ab4729, RRID:AB\_2118291 | ChIP (1 µg/107 cells) |
| Recombinant DNA reagent | pCW57.1 | Addgene | RRID:Addgene\_41393 | Inducible lentiviral expression vector |
| Recombinant DNA reagent | pX330-U6-Chimeric\_BB-CBh-hSpCas9 | Addgene | RRID:Addgene\_42230 | CRISPR/Cas9 expression construct |
| Sequence-based reagent | Chr2-cl KO gRNA.1 | This paper | Cas9 gRNA | GCCGTTGCTCAGTCCAAATG |
| Sequence-based reagent | Chr2-cl KO gRNA.2 | This paper | Cas9 gRNA | GATACCAGAGGTGGCCGCAAG |
| Sequence-based reagent | Chr4-cl KO gRNA.1 | This paper | Cas9 gRNA | GCAAAGGGGCTCCTCGATGGA |
| Sequence-based reagent | Chr4-cl KO gRNA.2 | This paper | Cas9 gRNA | GTTTATGGCCGTGCTAAGGTC |
| Sequence-based reagent | Chr10-cl KO gRNA.1 | This paper | Cas9 gRNA | GTTGCCTTCATCCCACCGTG |
| Sequence-based reagent | Chr10-cl KO gRNA.2 | This paper | Cas9 gRNA | GAAGTTCGACTTGGACGGGCT |
| Sequence-based reagent | Chr13.1-cl KO gRNA.1 | This paper | Cas9 gRNA | GTAACCCATCATGGGCCCTAC |
| Sequence-based reagent | Chr13.1-cl KO gRNA.2 | This paper | Cas9 gRNA | GGACAGGTTATAGGTTTGAT |
| Sequence-based reagent | Chr13.2-cl KO gRNA.1 | This paper | Cas9 gRNA | GGGTTTCTGAGAAACGTGTA |
| Sequence-based reagent | Chr13.2-cl KO gRNA.2 | This paper | Cas9 gRNA | GTGTAATGAGTTCTTATATC |
| Commercial assay or kit | SureSelectQXT Target Enrichment kit | Agilent | G9681-90000 | |
| Software, algorithm | Bowtie | http://bowtie-bio.sourceforge.net | RRID:SCR\_005476 | |
| Software, algorithm | MACS14 | https://bio.tools/macs | RRID:SCR\_013291 | |
| Software, algorithm | Tophat | https://ccb.jhu.edu | RRID:SCR\_013035 | |
## Figures
Figure 1.: Genome-wide binding patterns of mouse KRAB-ZFPs.
(A) Probability heatmap of KRAB-ZFP binding to TEs. Blue color intensity (main field) corresponds to -log10 (adjusted p-value) enrichment of ChIP-seq peak overlap with TE groups (Fisher's exact test). The green/red color intensity (top panel) represents mean KAP1 (GEO accession: GSM1406445) and H3K9me3 (GEO accession: GSM1327148) enrichment, respectively, at peaks overlapping significantly targeted TEs (adjusted p-value<1e-5) in WT ES cells. (B) Summarized ChIP-seq signal for indicated KRAB-ZFPs and previously published KAP1 and H3K9me3 in WT ES cells across 127 intact ETn elements. (C) Heatmaps of KRAB-ZFP ChIP-seq signal at ChIP-seq peaks. For better comparison, peaks for all three KRAB-ZFPs were called with the same parameters (p<1e-10, peak enrichment >20). The top panel shows a schematic of the arrangement of the contact amino acid composition of each zinc finger. Zinc fingers are grouped and colored according to similarity, with amino acid differences relative to the five consensus fingers highlighted in white.
Figure 1—source data 1. KRAB-ZFP expression in 40 mouse tissues and cell lines (ENCODE). Mean values of replicates are shown as log2 transcripts per million.
Figure 1—source data 2. Probability heatmap of KRAB-ZFP binding to TEs. Values correspond to -log10 (adjusted p-value) enrichment of ChIP-seq peak overlap with TE groups (Fisher's exact test).
<!-- image -->
Figure 1—figure supplement 1.: ES cell-specific expression of KRAB-ZFP gene clusters.
(A) Heatmap showing expression patterns of mouse KRAB-ZFPs in 40 mouse tissues and cell lines (ENCODE). Heatmap colors indicate gene expression levels in log2 transcripts per million (TPM). The asterisk indicates a group of 30 KRAB-ZFPs that are exclusively expressed in ES cells. (B) Physical location of the genes encoding for the 30 KRAB-ZFPs that are exclusively expressed in ES cells. (C) Phylogenetic (Maximum likelihood) tree of the KRAB domains of mouse KRAB-ZFPs. KRAB-ZFPs encoded on the gene clusters on chromosome 2 and 4 are highlighted. The scale bar at the bottom indicates amino acid substitutions per site.
<!-- image -->
Figure 1—figure supplement 2.: KRAB-ZFP binding motifs and their repression activity.
(A) Comparison of computationally predicted (bottom) and experimentally determined (top) KRAB-ZFP binding motifs. Only significant pairs are shown (FDR < 0.1). (B) Luciferase reporter assays to confirm KRAB-ZFP repression of the identified target sites. Bars show the luciferase activity (normalized to Renilla luciferase) of reporter plasmids containing the indicated target sites cloned upstream of the SV40 promoter. Reporter plasmids were co-transfected into 293T cells with a Renilla luciferase plasmid for normalization and plasmids expressing the targeting KRAB-ZFP. Normalized mean luciferase activity (from three replicates) is shown relative to the luciferase activity of the reporter plasmid co-transfected with an empty pcDNA3.1 vector.
<!-- image -->
Figure 1—figure supplement 3.: KRAB-ZFP binding to ETn retrotransposons.
(A) Comparison of the PBSLys1,2 sequence with Zfp961 binding motifs in nonrepetitive peaks (Nonrep) and peaks at ETn elements. (B) Retrotransposition assays of original (ETnI1-neoTNF and MusD2-neoTNF; Ribet et al., 2004) and modified reporter vectors in which the Rex2 or Gm13051 binding motifs were removed. Schematics of the reporter vectors are displayed at the top. HeLa cells were transfected as described in the Materials and methods section and neo-resistant colonies, indicating retrotransposition events, were selected and stained. (C) Stem-loop structure of the ETn RNA export signal; the Gm13051 motif on the corresponding DNA is marked with red circles, and the part of the motif that was deleted is indicated with grey crosses (adapted from Legiewicz et al., 2010).
<!-- image -->
Figure 2.: Retrotransposon reactivation in KRAB-ZFP cluster KO ES cells.
(A) RNA-seq analysis of TE expression in five KRAB-ZFP cluster KO ES cell lines. Green and grey squares on top of the panel represent KRAB-ZFPs with or without ChIP-seq data, respectively, within each deleted gene cluster. Reactivated TEs that are bound by one or several KRAB-ZFPs are indicated by green squares in the panel. Significantly up- and downregulated elements (adjusted p-value<0.05) are highlighted in red and green, respectively. (B) Differential KAP1 binding and H3K9me3 enrichment at TE groups (summarized across all insertions) in Chr2-cl and Chr4-cl KO ES cells. TE groups targeted by one or several KRAB-ZFPs encoded within the deleted clusters are highlighted in blue (differential enrichment over the entire TE sequences) and red (differential enrichment at TE regions that overlap with KRAB-ZFP ChIP-seq peaks). (C) DNA methylation status of CpG sites at indicated TE groups in WT and Chr4-cl KO ES cells grown in serum-containing media or in hypomethylation-inducing media (2i + vitamin C). P-values were calculated using a paired t-test.
Figure 2—source data 1. Differential H3K9me3 and KAP1 distribution in WT and KRAB-ZFP cluster KO ES cells at TE families and KRAB-ZFP bound TE insertions. Differential read counts and statistical testing were determined by DESeq2.
<!-- image -->
Figure 2—figure supplement 1.: Epigenetic changes at TEs and TE-borne enhancers in KRAB-ZFP cluster KO ES cells.
(A) Differential analysis of summative (all individual insertions combined) H3K9me3 enrichment at TE groups in Chr10-cl, Chr13.1-cl and Chr13.2-cl KO ES cells. TE groups targeted by one or several KRAB-ZFPs encoded within the deleted clusters are highlighted in orange (differential enrichment over the entire TE sequences) and red (differential enrichment at TE regions that overlap with KRAB-ZFP ChIP-seq peaks). (B) Top: Schematic view of the Cd59a/Cd59b locus with a 5′ truncated ETn insertion. ChIP-seq (Input subtracted from ChIP) data for overexpressed epitope-tagged Gm13051 (a Chr4-cl KRAB-ZFP) in F9 EC cells, and re-mapped KAP1 (GEO accession: GSM1406445) and H3K9me3 (GEO accession: GSM1327148) in WT ES cells are shown together with RNA-seq data from Chr4-cl WT and KO ES cells (mapped using Bowtie (-a -m 1 --strata -v 2) to exclude reads that cannot be uniquely mapped). Bottom: Transcriptional activity of a 5 kb fragment with or without fragments of the ETn insertion was tested by luciferase reporter assay in Chr4-cl WT and KO ES cells.
<!-- image -->
Figure 3.: TE-dependent gene activation in KRAB-ZFP cluster KO ES cells.
(A) Differential gene expression in Chr2-cl and Chr4-cl KO ES cells. Significantly up- and downregulated genes (adjusted p-value<0.05) are highlighted in red and green, respectively; KRAB-ZFP genes within the deleted clusters are shown in blue. (B) Correlation of TEs and gene deregulation. Plots show enrichment of TE groups within 100 kb of up- and downregulated genes relative to all genes. Significantly overrepresented LTR and LINE groups (adjusted p-value<0.1) are highlighted in blue and red, respectively. (C) Schematic view of the downstream region of Chst1 where a 5′ truncated ETn insertion is located. ChIP-seq (Input subtracted from ChIP) data for overexpressed epitope-tagged Gm13051 (a Chr4-cl KRAB-ZFP) in F9 EC cells, and re-mapped KAP1 (GEO accession: GSM1406445) and H3K9me3 (GEO accession: GSM1327148) in WT ES cells are shown together with RNA-seq data from Chr4-cl WT and KO ES cells (mapped using Bowtie (-a -m 1 --strata -v 2) to exclude reads that cannot be uniquely mapped). (D) RT-qPCR analysis of Chst1 mRNA expression in Chr4-cl WT and KO ES cells with or without the CRISPR/Cas9 deleted ETn insertion near Chst1. Values represent mean expression (normalized to Gapdh) from three biological replicates per sample (each performed in three technical replicates) in arbitrary units. Error bars represent standard deviation and asterisks indicate significance (p<0.01, Student's t-test). n.s.: not significant. (E) Mean coverage of ChIP-seq data (Input subtracted from ChIP) in Chr4-cl WT and KO ES cells over 127 full-length ETn insertions. The binding sites of the Chr4-cl KRAB-ZFPs Rex2 and Gm13051 are indicated by dashed lines.
<!-- image -->
Figure 4.: ETn retrotransposition in Chr4-cl KO mice.
(A) Pedigree of mice of different strain backgrounds used for transposon insertion screening by capture-seq. The numbers of novel ETn insertions (only present in one animal) are indicated. For animals whose direct ancestors have not been screened, the ETn insertions are shown in parentheses, since parental inheritance cannot be excluded in these cases. Germ line insertions are indicated by asterisks. All DNA samples were prepared from tail tissue unless noted otherwise (-S: spleen, -E: ear, -B: blood). (B) Statistical analysis of ETn insertion frequency in tail tissue from 30 Chr4-cl KO, KO/WT and WT mice derived from one Chr4-cl KO x KO/WT and two Chr4-cl KO/WT x KO/WT matings. Only DNA samples collected from juvenile tails were considered for this analysis. P-values were calculated using a one-sided Wilcoxon rank sum test. In the last panel, KO, WT and KO/WT mice derived from all matings were combined for the statistical analysis.
Figure 4—source data 1. Coordinates of identified novel ETn insertions and supporting capture-seq read counts. Genomic regions indicate clusters of supporting reads.
Figure 4—source data 2. Sequences of capture-seq probes used to enrich genomic DNA for ETn and MuLV (RLTR4) insertions.
<!-- image -->
Figure 4—figure supplement 1.: Birth statistics of KRAB-ZFP cluster KO mice and TE reactivation in adult tissues.
(A) Birth statistics of Chr4- and Chr2-cl mice derived from KO/WT x KO/WT matings in different strain backgrounds. (B) RNA-seq analysis of TE expression in Chr2- (left) and Chr4-cl (right) KO tissues. TE groups with the highest reactivation phenotype in ES cells are shown separately. Significantly up- and downregulated elements (adjusted p-value<0.05) are highlighted in red and green, respectively. Experiments were performed in at least two biological replicates.
<!-- image -->
Figure 4—figure supplement 2.: Identification of polymorphic ETn and MuLV retrotransposon insertions in Chr4-cl KO and WT mice.
Heatmaps show normalized capture-seq read counts in RPM (Read Per Million) for identified polymorphic ETn (A) and MuLV (B) loci in different mouse strains. Only loci with strong support for germ line ETn or MuLV insertions (at least 100 or 3000 ETn or MuLV RPM, respectively) in at least two animals are shown. Non-polymorphic insertion loci with high read counts in all screened mice were excluded for better visibility. The sample information (sample name and cell type/tissue) is annotated at the bottom, with the strain information indicated by color at the top. The color gradient indicates log10(RPM+1).
<!-- image -->
Figure 4—figure supplement 3.: Confirmation of novel ETn insertions identified by capture-seq.
(A) PCR validation of novel ETn insertions in genomic DNA of three littermates (IDs: T09673, T09674 and T00436) and their parents (T3913 and T3921). Primer sequences are shown in Supplementary file 3. (B) ETn capture-seq read counts (RPM) at putative novel somatic insertions (loci identified exclusively in a single animal), putative novel germ line insertions (loci identified in several littermates), and B6 reference ETn elements. (C) Heatmap showing capture-seq read counts (RPM) of a Chr4-cl KO mouse (ID: C6733) as determined in different tissues. Each row represents a novel ETn locus that was identified in at least one tissue. The color gradient indicates log10(RPM+1). (D) Heatmap showing the capture-seq RPM in technical replicates using the same Chr4-cl KO DNA sample (rep1/rep2) or replicates with DNA samples prepared from different sections of the tail of the same mouse at different ages (tail1/tail2). Each row represents a novel ETn locus that was identified in at least one of the displayed samples. The color gradient indicates log10(RPM+1).
<!-- image -->
## References
- TL Bailey; M Boden; FA Buske; M Frith; CE Grant; L Clementi; J Ren; WW Li; WS Noble. MEME SUITE: tools for motif discovery and searching. Nucleic Acids Research (2009)
- C Baust; L Gagnier; GJ Baillie; MJ Harris; DM Juriloff; DL Mager. Structure and expression of mobile ETnII retroelements and their coding-competent MusD relatives in the mouse. Journal of Virology (2003)
- K Blaschke; KT Ebata; MM Karimi; JA Zepeda-Martínez; P Goyal; S Mahapatra; A Tam; DJ Laird; M Hirst; A Rao; MC Lorincz; M Ramalho-Santos. Vitamin C induces Tet-dependent DNA demethylation and a blastocyst-like state in ES cells. Nature (2013)
- A Brodziak; E Ziółko; M Muc-Wierzgoń; E Nowakowska-Zajdel; T Kokot; K Klakla. The role of human endogenous retroviruses in the pathogenesis of autoimmune diseases. Medical Science Monitor : International Medical Journal of Experimental and Clinical Research (2012)
- N Castro-Diaz; G Ecco; A Coluccio; A Kapopoulou; B Yazdanpanah; M Friedli; J Duc; SM Jang; P Turelli; D Trono. Evolutionally dynamic L1 regulation in embryonic stem cells. Genes & Development (2014)
- EB Chuong; NC Elde; C Feschotte. Regulatory evolution of innate immunity through co-option of endogenous retroviruses. Science (2016)
- J Dan; Y Liu; N Liu; M Chiourea; M Okuka; T Wu; X Ye; C Mou; L Wang; L Wang; Y Yin; J Yuan; B Zuo; F Wang; Z Li; X Pan; Z Yin; L Chen; DL Keefe; S Gagos; A Xiao; L Liu. Rif1 maintains telomere length homeostasis of ESCs by mediating heterochromatin silencing. Developmental Cell (2014)
- A De Iaco; E Planet; A Coluccio; S Verp; J Duc; D Trono. DUX-family transcription factors regulate zygotic genome activation in placental mammals. Nature Genetics (2017)
- Ö Deniz; L de la Rica; KCL Cheng; D Spensberger; MR Branco. SETDB1 prevents TET2-dependent activation of IAP retroelements in naïve embryonic stem cells. Genome Biology (2018)
- M Dewannieux; T Heidmann. Endogenous retroviruses: acquisition, amplification and taming of genome invaders. Current Opinion in Virology (2013)
- G Ecco; M Cassano; A Kauzlaric; J Duc; A Coluccio; S Offner; M Imbeault; HM Rowe; P Turelli; D Trono. Transposable elements and their KRAB-ZFP controllers regulate gene expression in adult tissues. Developmental Cell (2016)
- G Ecco; M Imbeault; D Trono. KRAB zinc finger proteins. Development (2017)
- JA Frank; C Feschotte. Co-option of endogenous viral sequences for host cell function. Current Opinion in Virology (2017)
- L Gagnier; VP Belancio; DL Mager. Mouse germ line mutations due to retrotransposon insertions. Mobile DNA (2019)
- AC Groner; S Meylan; A Ciuffi; N Zangger; G Ambrosini; N Dénervaud; P Bucher; D Trono. KRAB-zinc finger proteins and KAP1 can mediate long-range transcriptional repression through heterochromatin spreading. PLOS Genetics (2010)
- DC Hancks; HH Kazazian. Roles for retrotransposon insertions in human disease. Mobile DNA (2016)
- M Imbeault; PY Helleboid; D Trono. KRAB zinc-finger proteins contribute to the evolution of gene regulatory networks. Nature (2017)
- FM Jacobs; D Greenberg; N Nguyen; M Haeussler; AD Ewing; S Katzman; B Paten; SR Salama; D Haussler. An evolutionary arms race between KRAB zinc-finger genes ZNF91/93 and SVA/L1 retrotransposons. Nature (2014)
- H Kano; H Kurahashi; T Toda. Genetically regulated epigenetic transcriptional activation of retrotransposon insertion confers mouse dactylaplasia phenotype. PNAS (2007)
- MM Karimi; P Goyal; IA Maksakova; M Bilenky; D Leung; JX Tang; Y Shinkai; DL Mager; S Jones; M Hirst; MC Lorincz. DNA methylation and SETDB1/H3K9me3 regulate predominantly distinct sets of genes, retroelements, and chimeric transcripts in mESCs. Cell Stem Cell (2011)
- A Kauzlaric; G Ecco; M Cassano; J Duc; M Imbeault; D Trono. The mouse genome displays highly dynamic populations of KRAB-zinc finger protein genes and related genetic units. PLOS ONE (2017)
- PP Khil; F Smagulova; KM Brick; RD Camerini-Otero; GV Petukhova. Sensitive mapping of recombination hotspots using sequencing-based detection of ssDNA. Genome Research (2012)
- F Krueger; SR Andrews. Bismark: a flexible aligner and methylation caller for Bisulfite-Seq applications. Bioinformatics (2011)
- B Langmead; SL Salzberg. Fast gapped-read alignment with Bowtie 2. Nature Methods (2012)
- M Legiewicz; AS Zolotukhin; GR Pilkington; KJ Purzycka; M Mitchell; H Uranishi; J Bear; GN Pavlakis; SF Le Grice; BK Felber. The RNA transport element of the murine musD retrotransposon requires long-range intramolecular interactions for function. Journal of Biological Chemistry (2010)
- JA Lehoczky; PE Thomas; KM Patrie; KM Owens; LM Villarreal; K Galbraith; J Washburn; CN Johnson; B Gavino; AD Borowsky; KJ Millen; P Wakenight; W Law; ML Van Keuren; G Gavrilina; ED Hughes; TL Saunders; L Brihn; JH Nadeau; JW Innis. A novel intergenic ETnII-β insertion mutation causes multiple malformations in Polypodia mice. PLOS Genetics (2013)
- D Leung; T Du; U Wagner; W Xie; AY Lee; P Goyal; Y Li; KE Szulwach; P Jin; MC Lorincz; B Ren. Regulation of DNA methylation turnover at LTR retrotransposons and imprinted loci by the histone methyltransferase Setdb1. PNAS (2014)
- J Lilue; AG Doran; IT Fiddes; M Abrudan; J Armstrong; R Bennett; W Chow; J Collins; S Collins; A Czechanski; P Danecek; M Diekhans; DD Dolle; M Dunn; R Durbin; D Earl; A Ferguson-Smith; P Flicek; J Flint; A Frankish; B Fu; M Gerstein; J Gilbert; L Goodstadt; J Harrow; K Howe; X Ibarra-Soria; M Kolmogorov; CJ Lelliott; DW Logan; J Loveland; CE Mathews; R Mott; P Muir; S Nachtweide; FCP Navarro; DT Odom; N Park; S Pelan; SK Pham; M Quail; L Reinholdt; L Romoth; L Shirley; C Sisu; M Sjoberg-Herrera; M Stanke; C Steward; M Thomas; G Threadgold; D Thybert; J Torrance; K Wong; J Wood; B Yalcin; F Yang; DJ Adams; B Paten; TM Keane. Sixteen diverse laboratory mouse reference genomes define strain-specific haplotypes and novel functional loci. Nature Genetics (2018)
- S Liu; J Brind'Amour; MM Karimi; K Shirane; A Bogutz; L Lefebvre; H Sasaki; Y Shinkai; MC Lorincz. Setdb1 is required for germline development and silencing of H3K9me3-marked endogenous retroviruses in primordial germ cells. Genes & Development (2014)
- MI Love; W Huber; S Anders. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biology (2014)
- F Lugani; R Arora; N Papeta; A Patel; Z Zheng; R Sterken; RA Singer; G Caridi; C Mendelsohn; L Sussel; VE Papaioannou; AG Gharavi. A retrotransposon insertion in the 5' regulatory domain of Ptf1a results in ectopic gene expression and multiple congenital defects in Danforth's short tail mouse. PLOS Genetics (2013)
- TS Macfarlan; WD Gifford; S Driscoll; K Lettieri; HM Rowe; D Bonanomi; A Firth; O Singer; D Trono; SL Pfaff. Embryonic stem cell potency fluctuates with endogenous retrovirus activity. Nature (2012)
- IA Maksakova; MT Romanish; L Gagnier; CA Dunn; LN van de Lagemaat; DL Mager. Retroviral elements and their hosts: insertional mutagenesis in the mouse germ line. PLOS Genetics (2006)
- T Matsui; D Leung; H Miyashita; IA Maksakova; H Miyachi; H Kimura; M Tachibana; MC Lorincz; Y Shinkai. Proviral silencing in embryonic stem cells requires the histone methyltransferase ESET. Nature (2010)
- HS Najafabadi; S Mnaimneh; FW Schmitges; M Garton; KN Lam; A Yang; M Albu; MT Weirauch; E Radovani; PM Kim; J Greenblatt; BJ Frey; TR Hughes. C2H2 zinc finger proteins greatly expand the human regulatory lexicon. Nature Biotechnology (2015)
- C Nellåker; TM Keane; B Yalcin; K Wong; A Agam; TG Belgard; J Flint; DJ Adams; WN Frankel; CP Ponting. The genomic landscape shaped by selection on transposable elements across 18 mouse strains. Genome Biology (2012)
- H O'Geen; S Frietze; PJ Farnham. Using ChIP-seq technology to identify targets of zinc finger transcription factors. Methods in Molecular Biology (2010)
- A Patel; P Yang; M Tinkham; M Pradhan; M-A Sun; Y Wang; D Hoang; G Wolf; JR Horton; X Zhang; T Macfarlan; X Cheng. DNA conformation induces adaptable binding by tandem zinc finger proteins. Cell (2018)
- D Ribet; M Dewannieux; T Heidmann. An active murine transposon family pair: retrotransposition of "master" MusD copies and ETn trans-mobilization. Genome Research (2004)
- SR Richardson; P Gerdes; DJ Gerhardt; FJ Sanchez-Luque; GO Bodea; M Muñoz-Lopez; JS Jesuadian; MHC Kempen; PE Carreira; JA Jeddeloh; JL Garcia-Perez; HH Kazazian; AD Ewing; GJ Faulkner. Heritable L1 retrotransposition in the mouse primordial germline and early embryo. Genome Research (2017)
- HM Rowe; J Jakobsson; D Mesnard; J Rougemont; S Reynard; T Aktas; PV Maillard; H Layard-Liesching; S Verp; J Marquis; F Spitz; DB Constam; D Trono. KAP1 controls endogenous retroviruses in embryonic stem cells. Nature (2010)
- HM Rowe; A Kapopoulou; A Corsinotti; L Fasching; TS Macfarlan; Y Tarabay; S Viville; J Jakobsson; SL Pfaff; D Trono. TRIM28 repression of retrotransposon-based enhancers is necessary to preserve transcriptional dynamics in embryonic stem cells. Genome Research (2013)
- SN Schauer; PE Carreira; R Shukla; DJ Gerhardt; P Gerdes; FJ Sanchez-Luque; P Nicoli; M Kindlova; S Ghisletti; AD Santos; D Rapoud; D Samuel; J Faivre; AD Ewing; SR Richardson; GJ Faulkner. L1 retrotransposition is a common feature of mammalian hepatocarcinogenesis. Genome Research (2018)
- DC Schultz; K Ayyanathan; D Negorev; GG Maul; FJ Rauscher. SETDB1: a novel KAP-1-associated histone H3, lysine 9-specific methyltransferase that contributes to HP1-mediated silencing of euchromatic genes by KRAB zinc-finger proteins. Genes & Development (2002)
- K Semba; K Araki; K Matsumoto; H Suda; T Ando; A Sei; H Mizuta; K Takagi; M Nakahara; M Muta; G Yamada; N Nakagata; A Iida; S Ikegawa; Y Nakamura; M Araki; K Abe; K Yamamura. Ectopic expression of Ptf1a induces spinal defects, urogenital defects, and anorectal malformations in Danforth's short tail mice. PLOS Genetics (2013)
- SP Sripathy; J Stevens; DC Schultz. The KAP1 corepressor functions to coordinate the assembly of de novo HP1-demarcated microenvironments of heterochromatin required for KRAB zinc finger protein-mediated transcriptional repression. Molecular and Cellular Biology (2006)
- JH Thomas; S Schneider. Coevolution of retroelements and tandem zinc finger genes. Genome Research (2011)
- PJ Thompson; TS Macfarlan; MC Lorincz. Long terminal repeats: from parasitic elements to building blocks of the transcriptional regulatory repertoire. Molecular Cell (2016)
- RS Treger; SD Pope; Y Kong; M Tokuyama; M Taura; A Iwasaki. The lupus susceptibility locus Sgp3 encodes the suppressor of endogenous retrovirus expression SNERV. Immunity (2019)
- CN Vlangos; AN Siuniak; D Robinson; AM Chinnaiyan; RH Lyons; JD Cavalcoli; CE Keegan. Next-generation sequencing identifies the Danforth's short tail mouse mutation as a retrotransposon insertion affecting Ptf1a expression. PLOS Genetics (2013)
- J Wang; G Xie; M Singh; AT Ghanbarian; T Raskó; A Szvetnik; H Cai; D Besser; A Prigione; NV Fuchs; GG Schumann; W Chen; MC Lorincz; Z Ivics; LD Hurst; Z Izsvák. Primate-specific endogenous retrovirus-driven transcription defines naive-like stem cells. Nature (2014)
- D Wolf; K Hug; SP Goff. TRIM28 mediates primer binding site-targeted silencing of Lys1,2 tRNA-utilizing retroviruses in embryonic cells. PNAS (2008)
- G Wolf; D Greenberg; TS Macfarlan. Spotting the enemy within: targeted silencing of foreign DNA in mammalian genomes by the Krüppel-associated box zinc finger protein family. Mobile DNA (2015a)
- G Wolf; P Yang; AC Füchtbauer; EM Füchtbauer; AM Silva; C Park; W Wu; AL Nielsen; FS Pedersen; TS Macfarlan. The KRAB zinc finger protein ZFP809 is required to initiate epigenetic silencing of endogenous retroviruses. Genes & Development (2015b)
- M Yamauchi; B Freitag; C Khan; B Berwin; E Barklis. Stem cell factor binding to retrovirus primer binding site silencers. Journal of Virology (1995)
- Y Zhang; T Liu; CA Meyer; J Eeckhoute; DS Johnson; BE Bernstein; C Nusbaum; RM Myers; M Brown; W Li; XS Liu. Model-based analysis of ChIP-Seq (MACS). Genome Biology (2008)

View File

@ -0,0 +1,185 @@
item-0 at level 0: unspecified: group _root_
item-1 at level 1: title: LIGHT EMITTING DEVICE AND PLANT CULTIVATION METHOD
item-2 at level 2: section_header: ABSTRACT
item-3 at level 3: paragraph: Provided is a light emitting device that includes a light emitting element having a light emission peak wavelength ranging from 380 nm to 490 nm, and a fluorescent material excited by light from the light emitting element and emitting light having a light emission peak wavelength ranging from 580 nm or more to less than 680 nm. The light emitting device emits light having a ratio R/B of a photon flux density R to a photon flux density B ranging from 2.0 to 4.0 and a ratio R/FR of the photon flux density R to a photon flux density FR ranging from 0.7 to 13.0, the photon flux density R being in a wavelength range of 620 nm or more and less than 700 nm, the photon flux density B being in a wavelength range of 380 nm or more and 490 nm or less, and the photon flux density FR being in a wavelength range of 700 nm or more and 780 nm or less.
item-4 at level 2: section_header: CROSS-REFERENCE TO RELATED APPLICATION
item-5 at level 3: paragraph: This application claims the benefit of Japanese Patent Application No. 2016-128835 filed on Jun. 29, 2016, the entire disclosure of which is hereby incorporated by reference in its entirety.
item-6 at level 2: section_header: BACKGROUND
item-7 at level 2: section_header: Technical Field
item-8 at level 3: paragraph: The present disclosure relates to a light emitting device and a plant cultivation method.
item-9 at level 2: section_header: Description of Related Art
item-10 at level 3: paragraph: With environmental changes due to climate change and other artificial disruptions, plant factories are expected to increase the production efficiency of vegetables and to allow production to be adjusted, so that vegetables can be supplied stably. Plant factories that are capable of artificial management can stably supply clean and safe vegetables to markets, and are therefore expected to be next-generation industries.
item-11 at level 3: paragraph: Plant factories that are completely isolated from external environment make it possible to artificially control and collect various data such as growth method, growth rate data, yield data, depending on classification of plants. Based on those data, plant factories are able to plan production according to the balance between supply and demand in markets, and supply plants such as vegetables without depending on surrounding conditions such as climatic environment. Particularly, an increase in food production is indispensable with world population growth. If plants can be systematically produced without the influence by surrounding conditions such as climatic environment, vegetables produced in plant factories can be stably supplied within a country, and additionally can be exported abroad as viable products.
item-12 at level 3: paragraph: In general, vegetables that are grown outdoors get sunlight, grow while conducting photosynthesis, and are gathered. On the other hand, vegetables that are grown in plant factories are required to be harvested in a short period of time, or are required to grow in larger than normal sizes even in an ordinary growth period.
item-13 at level 3: paragraph: In plant factories, the light source used in place of sunlight affects the growth period and growth of plants. LED lighting is being used in place of conventional fluorescent lamps from a standpoint of power consumption reduction.
item-14 at level 3: paragraph: For example, Japanese Unexamined Patent Publication No. 2009-125007 discloses a plant growth method. In this method, the plants are irradiated with light emitted from a first LED light emitting element and/or a second LED light emitting element at predetermined timings, using a lighting apparatus including the first LED light emitting element emitting light having a wavelength region of 625 to 690 nm and the second LED light emitting element emitting light having a wavelength region of 420 to 490 nm, in order to emit lights having sufficient intensities and different wavelengths from each other.
item-15 at level 2: section_header: SUMMARY
item-16 at level 3: paragraph: However, even though plants are merely irradiated with lights having different wavelengths as in the plant growth method disclosed in Japanese Unexamined Patent Publication No. 2009-125007, the effect of promoting plant growth is not sufficient. Further improvement is required in promotion of plant growth.
item-17 at level 3: paragraph: Accordingly, an object of the present disclosure is to provide a light emitting device capable of promoting growth of plants and a plant cultivation method.
item-18 at level 3: paragraph: Means for solving the above problems are as follows, and the present disclosure includes the following embodiments.
item-19 at level 3: paragraph: A first embodiment of the present disclosure is a light emitting device including a light emitting element having a light emission peak wavelength in a range of 380 nm or more and 490 nm or less, and a fluorescent material that is excited by light from the light emitting element and emits light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm. The light emitting device emits light having a ratio R/B of a photon flux density R to a photon flux density B within a range of 2.0 or more and 4.0 or less, and a ratio R/FR of a photon flux density R to a photon flux density FR within a range of 0.7 or more and 13.0 or less, where the photon flux density R is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 620 nm or more and less than 700 nm, the photon flux density B is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 380 nm or more and 490 nm or less, and the photon flux density FR is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 700 nm or more and 780 nm or less.
item-20 at level 3: paragraph: A second embodiment of the present disclosure is a plant cultivation method including irradiating plants with light from the light emitting device.
item-21 at level 3: paragraph: According to embodiments of the present disclosure, a light emitting device capable of promoting growth of plants and a plant cultivation method can be provided.
item-22 at level 2: section_header: BRIEF DESCRIPTION OF THE DRAWINGS
item-23 at level 3: paragraph: FIG. 1 is a schematic cross sectional view of a light emitting device according to an embodiment of the present disclosure.
item-24 at level 3: paragraph: FIG. 2 is a diagram showing spectra of wavelengths and relative photon flux densities of exemplary light emitting devices according to embodiments of the present disclosure and a comparative light emitting device.
item-25 at level 3: paragraph: FIG. 3 is a graph showing fresh weight (edible part) at the harvest time of each plant grown by irradiating the plant with light from exemplary light emitting devices according to embodiments of the present disclosure and a comparative light emitting device.
item-26 at level 3: paragraph: FIG. 4 is a graph showing nitrate nitrogen content in each plant grown by irradiating the plant with light from exemplary light emitting devices according to embodiments of the present disclosure and a comparative light emitting device.
item-27 at level 2: section_header: DETAILED DESCRIPTION
item-28 at level 3: paragraph: A light emitting device and a plant cultivation method according to the present invention will be described below based on an embodiment. However, the embodiment described below only exemplifies the technical concept of the present invention, and the present invention is not limited to the light emitting device and plant cultivation method described below. In the present specification, the relationship between the color name and the chromaticity coordinate, and the relationship between the wavelength range of light and the color name of monochromatic light, follow JIS Z8110.
item-29 at level 3: section_header: Light Emitting Device
item-30 at level 4: paragraph: An embodiment of the present disclosure is a light emitting device including a light emitting element having a light emission peak wavelength in a range of 380 nm or more and 490 nm or less (hereinafter sometimes referred to as a “region of from near ultraviolet to blue color”), and a first fluorescent material emitting light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm by being excited by light from the light emitting element. The light emitting device emits light having a ratio R/B of a photon flux density R to a photon flux density B within a range of 2.0 or more and 4.0 or less, and a ratio R/FR of the photon flux density R to a photon flux density FR within a range of 0.7 or more and 13.0 or less, where the photon flux density R is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 620 nm or more and less than 700 nm, the photon flux density B is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 380 nm or more and 490 nm or less, and the photon flux density FR is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 700 nm or more and 780 nm or less.
item-31 at level 4: paragraph: An example of the light emitting device according to one embodiment of the present disclosure is described below based on the drawings. FIG. 1 is a schematic cross sectional view showing a light emitting device 100 according to an embodiment of the present disclosure.
item-32 at level 4: paragraph: The light emitting device 100 includes a molded article 40, a light emitting element 10 and a fluorescent member 50, as shown in FIG. 1. The molded article 40 includes a first lead 20 and a second lead 30 that are integrally molded with a resin portion 42 containing a thermoplastic resin or a thermosetting resin. The molded article 40 forms a depression having a bottom and sides, and the light emitting element 10 is placed on the bottom of the depression. The light emitting element 10 has a pair of an anode and a cathode, and the anode and the cathode are electrically connected to the first lead 20 and the second lead 30, respectively, through the respective wires 60. The light emitting element 10 is covered with the fluorescent member 50. The fluorescent member 50 includes, for example, a fluorescent material 70 performing wavelength conversion of light from the light emitting element 10, and a resin. The fluorescent material 70 includes a first fluorescent material 71 and a second fluorescent material 72. A part of the first lead 20 and the second lead 30 that are connected to a pair of the anode and the cathode of the light emitting element 10 is exposed outside the package constituting the light emitting device 100. The light emitting device 100 can emit light by receiving electric power supply from the outside through the first lead 20 and the second lead 30.
item-33 at level 4: paragraph: The fluorescent member 50 not only performs wavelength conversion of light emitted from the light emitting element 10, but also functions as a member for protecting the light emitting element 10 from the external environment. In FIG. 1, the fluorescent material 70 is localized in the fluorescent member 50 in the state that the first fluorescent material 71 and the second fluorescent material 72 are mixed with each other, and is arranged adjacent to the light emitting element 10. This constitution can efficiently perform the wavelength conversion of light from the light emitting element 10 in the fluorescent material 70, and as a result, can provide a light emitting device having excellent light emission efficiency. The arrangement of the fluorescent member 50 containing the fluorescent material 70, and the light emitting element 10 is not limited to the embodiment in which the fluorescent material 70 is arranged adjacent to the light emitting element 10 as shown in FIG. 1; considering the influence of heat generated from the light emitting element 10, the fluorescent material 70 can be arranged separated from the light emitting element 10 in the fluorescent member 50. Furthermore, light having suppressed color unevenness can be emitted from the light emitting device 100 by arranging the fluorescent material 70 almost evenly in the fluorescent member 50. In FIG. 1, the fluorescent material 70 is arranged in the state that the first fluorescent material 71 and the second fluorescent material 72 are mixed with each other. However, for example, the first fluorescent material 71 may be arranged in a layer state and the second fluorescent material 72 may be arranged thereon in another layer state. Alternatively, the second fluorescent material 72 may be arranged in a layer state and the first fluorescent material 71 may be arranged thereon in another layer state.
item-34 at level 4: paragraph: The light emitting device 100 includes the first fluorescent material 71 having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm by being excited by light from the light emitting element 10, and preferably further includes the second fluorescent material 72 having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less by being excited by light from the light emitting element 10.
item-35 at level 4: paragraph: The first fluorescent material 71 and the second fluorescent material 72 are contained in, for example, the fluorescent member 50 covering the light emitting element 10. The light emitting device 100 in which the light emitting element 10 has been covered with the fluorescent member 50 containing the first fluorescent material 71 and the second fluorescent material 72 emits light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm by a part of light emission of the light emitting element 10 that is absorbed in the first fluorescent material 71. Furthermore, the light emitting device 100 emits light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less by a part of light emission of the light emitting element 10 that is absorbed in the second fluorescent material 72.
item-36 at level 4: paragraph: Plants grow when pigments (chlorophyll a and chlorophyll b) present in their chloroplasts absorb light, additionally take in carbon dioxide gas and water, and convert these to carbohydrates (saccharides) by photosynthesis. Chlorophyll a and chlorophyll b, which are used in growth promotion of plants, have absorption peaks particularly in a red region of 625 nm or more and 675 nm or less and in a blue region of 425 nm or more and 475 nm or less. The action of photosynthesis by the chlorophylls of plants mainly occurs in a wavelength range of 400 nm or more and 700 nm or less, but chlorophyll a and chlorophyll b further have local absorption peaks in a region of 700 nm or more and 800 nm or less.
item-37 at level 4: paragraph: For example, when plants are irradiated with light having a longer wavelength than an absorption peak (in the vicinity of 680 nm) in the red region of chlorophyll a, a phenomenon called red drop, in which the activity of photosynthesis rapidly decreases, occurs. However, it is known that when plants are irradiated with light in the near infrared region together with light in the red region, photosynthesis is accelerated by a synergistic effect of those two kinds of light. This phenomenon is called the Emerson effect.
item-38 at level 4: paragraph: Intensity of light with which plants are irradiated is represented by photon flux density. The photon flux density (μmol·m⁻²·s⁻¹) is the number of photons reaching a unit area per unit time. The amount of photosynthesis depends on the number of photons, and therefore does not depend on other optical characteristics if the photon flux density is the same. However, the wavelength dependency of the activation of photosynthesis differs depending on the photosynthetic pigment. The intensity of light necessary for photosynthesis of plants is sometimes represented by the Photosynthetic Photon Flux Density (PPFD).
item-39 at level 4: paragraph: The light emitting device 100 emits light having a ratio R/B of a photon flux density R to a photon flux density B within a range of 2.0 or more and 4.0 or less, and a ratio R/FR of the photon flux density R to a photon flux density FR within a range of 0.7 or more and 13.0 or less, where the photon flux density R is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 620 nm or more and less than 700 nm, the photon flux density B is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 380 nm or more and 490 nm or less, and the photon flux density FR is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 700 nm or more and 780 nm or less.
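The R, B, and FR bands and their ratios are defined purely numerically, so a short worked example may help. Below is a minimal Python sketch (not part of the patent) of how the two ratios could be computed from a measured spectral power distribution; the Gaussian test spectrum and the helper name are our own assumptions, and because both ratios divide one photon flux by another, the per-unit-area factor cancels.

```python
# Sketch: compute R/B and R/FR photon-flux ratios from a spectrum.
# The spectral power distribution below is a synthetic placeholder,
# not data from this disclosure.
import numpy as np

H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
N_A = 6.022e23  # Avogadro constant (1/mol)

def photon_flux(wl_nm, power_w_per_nm, lo, hi, closed_hi=True):
    """Photon flux (umol/s) carried by the spectrum in [lo, hi] or [lo, hi) nm."""
    sel = (wl_nm >= lo) & ((wl_nm <= hi) if closed_hi else (wl_nm < hi))
    wl_m = wl_nm[sel] * 1e-9
    photons_per_s = power_w_per_nm[sel] * wl_m / (H * C)  # E_photon = h*c/lambda
    return np.trapz(photons_per_s, wl_nm[sel]) / N_A * 1e6  # mol -> umol

wl = np.arange(380.0, 781.0)  # wavelength grid, nm
spd = np.exp(-((wl - 450) / 12) ** 2) + 0.8 * np.exp(-((wl - 650) / 40) ** 2)

B = photon_flux(wl, spd, 380, 490)                   # 380 nm <= wl <= 490 nm
R = photon_flux(wl, spd, 620, 700, closed_hi=False)  # 620 nm <= wl < 700 nm
FR = photon_flux(wl, spd, 700, 780)                  # 700 nm <= wl <= 780 nm
print(f"R/B = {R / B:.2f}, R/FR = {R / FR:.2f}")
```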
item-40 at level 4: paragraph: It is estimated that in plants irradiated with light containing the photon flux density FR from the light emitting device 100, photosynthesis is activated by the Emerson effect, and as a result, growth of the plants can be promoted. Furthermore, when plants are irradiated with light containing the photon flux density FR, growth of the plants can be promoted by a reversible reaction between red light irradiation and far red light irradiation in which chlorophyll, a chromoprotein contained in plants, participates.
item-41 at level 4: paragraph: Examples of nutrients necessary for growth of plants include nitrogen, phosphoric acid, and potassium. Of those nutrients, nitrogen is absorbed in plants as nitrate nitrogen (nitrate ion: NO₃⁻). The nitrate nitrogen changes into nitrite ion (NO₂⁻) by a reduction reaction, and when the nitrite ion further reacts with a fatty acid amine, a nitrosamine is formed. It is known that nitrite ion acts on hemoglobin in blood, and that nitroso compounds sometimes affect human health. The mechanism of converting nitrate nitrogen into nitrite ion in vivo is complicated, and the relationship between the amount of intake of nitrate nitrogen and its influence on human health has not been clarified. However, it is desirable that the content of nitrate nitrogen, which can possibly affect human health, be as small as possible.
item-42 at level 4: paragraph: For the above reasons, nitrogen is one of the nutrients necessary for growth of plants, but it is preferred that the content of nitrate nitrogen in food plants be reduced to a range that does not disturb the growth of the plants.
item-43 at level 4: paragraph: It is preferred that the light emitting device 100 further include the second fluorescent material 72 having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less by being excited by light from the light emitting element 10, wherein the R/FR ratio is within a range of 0.7 or more and 5.0 or less. The R/FR ratio is more preferably within a range of 0.7 or more and 2.0 or less.
item-44 at level 3: section_header: Light Emitting Element
item-45 at level 4: paragraph: The light emitting element 10 is used as an excitation light source, and is a light emitting element emitting light having a light emission peak wavelength in a range of 380 nm or more and 490 nm or less. With such an element, a stable light emitting device having high efficiency, high linearity of output to input, and high resistance to mechanical impact can be obtained.
item-46 at level 4: paragraph: The light emission peak wavelength of the light emitting element 10 is preferably in a range of 390 nm or more and 480 nm or less, more preferably in a range of 420 nm or more and 470 nm or less, still more preferably in a range of 440 nm or more and 460 nm or less, and particularly preferably in a range of 445 nm or more and 455 nm or less. A light emitting element including a nitride semiconductor (InₓAlyGa₁₋ₓ₋yN, where 0≦x, 0≦y, and x+y≦1) is preferably used as the light emitting element 10.
item-47 at level 4: paragraph: The half value width of emission spectrum of the light emitting element 10 can be, for example, 30 nm or less.
item-48 at level 3: section_header: Fluorescent Member
item-49 at level 4: paragraph: The fluorescent member 50 used in the light emitting device 100 preferably includes the first fluorescent material 71 and a sealing material, and more preferably further includes the second fluorescent material 72. A thermoplastic resin and a thermosetting resin can be used as the sealing material. The fluorescent member 50 may contain other components such as a filler, a light stabilizer and a colorant, in addition to the fluorescent material and the sealing material. Examples of the filler include silica, barium titanate, titanium oxide and aluminum oxide.
item-50 at level 4: paragraph: The content of components other than the fluorescent material 70 and the sealing material in the fluorescent member 50 is preferably in a range of 0.01 parts by mass or more and 20 parts by mass or less, per 100 parts by mass of the sealing material.
item-51 at level 4: paragraph: The total content of the fluorescent material 70 in the fluorescent member 50 can be, for example, 5 parts by mass or more and 300 parts by mass or less, per 100 parts by mass of the sealing material. The total content is preferably 10 parts by mass or more and 250 parts by mass or less, more preferably 15 parts by mass or more and 230 parts by mass or less, and still more preferably 15 parts by mass or more and 200 parts by mass or less. When the total content of the fluorescent material 70 in the fluorescent member 50 is within the above range, the light emitted from the light emitting element 10 can be efficiently subjected to wavelength conversion in the fluorescent material 70.
item-52 at level 3: section_header: First Fluorescent Material
item-53 at level 4: paragraph: The first fluorescent material 71 is a fluorescent material that is excited by light from the light emitting element 10 and emits light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm. Examples of the first fluorescent material 71 include an Mn⁴⁺-activated fluorogermanate fluorescent material, an Eu²⁺-activated nitride fluorescent material, an Eu²⁺-activated alkaline earth sulfide fluorescent material, and an Mn⁴⁺-activated halide fluorescent material. The first fluorescent material 71 may be one selected from those fluorescent materials or a combination of two or more thereof. The first fluorescent material preferably contains an Eu²⁺-activated nitride fluorescent material and an Mn⁴⁺-activated fluorogermanate fluorescent material.
item-54 at level 4: paragraph: The Eu²⁺-activated nitride fluorescent material is preferably a fluorescent material that has a composition including at least one element selected from Sr and Ca, and Al, and contains silicon nitride that is activated by Eu²⁺, or a fluorescent material that has a composition including at least one element selected from the group consisting of alkaline earth metal elements and at least one element selected from the group consisting of alkali metal elements, and contains aluminum nitride that is activated by Eu²⁺.
item-55 at level 4: paragraph: The halide fluorescent material that is activated by Mn⁴⁺ is preferably a fluorescent material that has a composition including at least one element or ion selected from the group consisting of alkali metal elements and an ammonium ion (NH₄⁺) and at least one element selected from the group consisting of Group 4 elements and Group 14 elements and contains a fluoride that is activated by Mn⁴⁺.
item-56 at level 4: paragraph: Examples of the first fluorescent material 71 specifically include fluorescent materials having any one composition of the following formulae (I) to (VI).
item-57 at level 4: paragraph: (i−j)MgO·(j/2)Sc₂O₃·kMgF₂·mCaF₂·(1−n)GeO₂·(n/2)Mt₂O₃:zMn⁴⁺ (I)
item-58 at level 4: paragraph: wherein Mt is at least one selected from the group consisting of Al, Ga, and In, and i, j, k, m, n, and z are numbers satisfying 2≦i≦4, 0≦j<0.5, 0<k<1.5, 0≦m<1.5, 0<n<0.5, and 0<z<0.05, respectively.
item-59 at level 4: paragraph: (Ca₁₋p₋qSrpEuq)AlSiN₃ (II)
item-60 at level 4: paragraph: wherein p and q are numbers satisfying 0≦p≦1.0, 0<q<1.0, and p+q<1.0.
item-61 at level 4: paragraph: MᵃvMᵇwMᶜfAl₃₋gSigNh (III)
item-62 at level 4: paragraph: wherein Mᵃ is at least one element selected from the group consisting of Ca, Sr, Ba, and Mg, Mᵇ is at least one element selected from the group consisting of Li, Na, and K, Mᶜ is at least one element selected from the group consisting of Eu, Ce, Tb, and Mn, and v, w, f, g, and h are numbers satisfying 0.80≦v≦1.05, 0.80≦w≦1.05, 0.001<f≦0.1, 0≦g≦0.5, and 3.0≦h≦5.0, respectively.
item-63 at level 4: paragraph: (Ca₁₋r₋s₋tSrrBasEut)₂Si₅N₈ (IV)
item-64 at level 4: paragraph: wherein r, s, and t are numbers satisfying 0≦r≦1.0, 0≦s≦1.0, 0<t<1.0, and r+s+t≦1.0.
item-65 at level 4: paragraph: (Ca,Sr)S:Eu (V)
item-66 at level 4: paragraph: A₂[M¹₁₋uMn⁴⁺uF₆] (VI)
item-67 at level 4: paragraph: wherein A is at least one selected from the group consisting of K, Li, Na, Rb, Cs, and NH₄⁺, M¹ is at least one element selected from the group consisting of Group 4 elements and Group 14 elements, and u is a number satisfying 0<u<0.2.
item-68 at level 4: paragraph: The content of the first fluorescent material 71 in the fluorescent member 50 is not particularly limited as long as the R/B ratio is within a range of 2.0 or more and 4.0 or less. The content of the first fluorescent material 71 in the fluorescent member 50 is, for example, 1 part by mass or more, preferably 5 parts by mass or more, and more preferably 8 parts by mass or more, per 100 parts by mass of the sealing material, and is preferably 200 parts by mass or less, more preferably 150 parts by mass or less, and still more preferably 100 parts by mass or less, per 100 parts by mass of the sealing material. When the content of the first fluorescent material 71 in the fluorescent member 50 is within the aforementioned range, the light emitted from the light emitting element 10 can be efficiently subjected to wavelength conversion, and light capable of promoting growth of plants can be emitted from the light emitting device 100.
item-69 at level 4: paragraph: The first fluorescent material 71 preferably contains at least two fluorescent materials, and in the case of containing at least two fluorescent materials, the first fluorescent material preferably contains a fluorogermanate fluorescent material that is activated by Mn⁴⁺ (hereinafter referred to as “MGF fluorescent material”), and a fluorescent material that has a composition including at least one element selected from Sr and Ca, and Al, and contains silicon nitride that is activated by Eu²⁺ (hereinafter referred to as “CASN fluorescent material”).
item-70 at level 4: paragraph: In the case where the first fluorescent material 71 contains at least two fluorescent materials and the two fluorescent materials are an MGF fluorescent material and a CASN fluorescent material, the compounding ratio thereof (MGF fluorescent material:CASN fluorescent material) is preferably in a range of 50:50 or more and 99:1 or less, more preferably in a range of 60:40 or more and 97:3 or less, and still more preferably in a range of 70:30 or more and 96:4 or less, in mass ratio. When the first fluorescent material contains these two fluorescent materials and the mass ratio thereof is within the aforementioned range, the light emitted from the light emitting element 10 can be efficiently subjected to wavelength conversion in the first fluorescent material 71. In addition, the R/B ratio can be adjusted to within a range of 2.0 or more and 4.0 or less, and the R/FR ratio can easily be adjusted to within a range of 0.7 or more and 13.0 or less.
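As a small illustration of this compounding arithmetic, the sketch below splits a chosen total loading of the first fluorescent material between the two components at a given mass ratio. The 60-part total is a hypothetical value of our own; 95:5 is the MGF:CASN ratio used later in the Examples.

```python
# Sketch: split a total first-fluorescent-material loading between the MGF
# and CASN fluorescent materials at a given mass ratio. The 60-part total
# per 100 parts sealing material is an assumed, illustrative value.
def split_first_fluorescent(total_parts: float, mgf: float, casn: float):
    """Return (MGF parts, CASN parts) per 100 parts by mass of sealing material."""
    if not 50 / 50 <= mgf / casn <= 99 / 1:
        raise ValueError("MGF:CASN outside the preferred 50:50 to 99:1 range")
    scale = total_parts / (mgf + casn)
    return mgf * scale, casn * scale

print(split_first_fluorescent(60.0, 95, 5))  # -> (57.0, 3.0)
```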
item-71 at level 3: section_header: Second Fluorescent Material
item-72 at level 4: paragraph: The second fluorescent material 72 is a fluorescent material that is excited by the light from the light emitting element 10 and emits light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less.
item-73 at level 4: paragraph: The second fluorescent material 72 used in the light emitting device according to one embodiment of the present disclosure is a fluorescent material that contains a first element Ln containing at least one element selected from the group consisting of rare earth elements excluding Ce, and a second element M containing at least one element selected from the group consisting of Al, Ga, and In, together with Ce and Cr, and that has a composition of an aluminate fluorescent material. When the molar ratio of the second element M is taken as 5, it is preferred that the molar ratio of Ce be the product of the value of a parameter x and 3, and the molar ratio of Cr be the product of the value of a parameter y and 3, wherein the value of the parameter x is in a range of more than 0.0002 and less than 0.50, and the value of the parameter y is in a range of more than 0.0001 and less than 0.05.
item-74 at level 4: paragraph: The second fluorescent material 72 is preferably a fluorescent material having the composition represented by the following formula (1):
item-75 at level 4: paragraph: (Ln₁₋ₓ₋yCeₓCry)₃M₅O₁₂ (1)
item-76 at level 4: paragraph: wherein Ln is at least one rare earth element selected from the group consisting of rare earth elements excluding Ce, M is at least one element selected from the group consisting of Al, Ga, and In, and x and y are numbers satisfying 0.0002<x<0.50 and 0.0001<y<0.05, respectively.
item-77 at level 4: paragraph: In this case, the second fluorescent material 72 has a composition constituting a garnet structure, and is therefore resistant to heat, light, and water. It has an absorption peak wavelength of its excitation absorption spectrum in a range of about 420 nm or more and 470 nm or less, and sufficiently absorbs the light from the light emitting element 10, thereby enhancing the light emission intensity of the second fluorescent material 72, which is preferred. Furthermore, the second fluorescent material 72 is excited by light having a light emission peak wavelength in a range of 380 nm or more and 490 nm or less and emits light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less.
item-78 at level 4: paragraph: In the second fluorescent material 72, from the standpoint of stability of a crystal structure, Ln is preferably at least one rare earth element selected from the group consisting of Y, Gd, Lu, La, Tb, and Pr, and M is preferably Al or Ga.
item-79 at level 4: paragraph: In the second fluorescent material 72, the value of the parameter x is more preferably in a range of 0.0005 or more and 0.400 or less (0.0005≦x≦0.400), and still more preferably in a range of 0.001 or more and 0.350 or less (0.001≦x≦0.350).
item-80 at level 4: paragraph: In the second fluorescent material 72, the value of the parameter y is preferably in a range of more than 0.0005 and less than 0.040 (0.0005<y<0.040), and more preferably in a range of 0.001 or more and 0.026 or less (0.001≦y≦0.026).
item-81 at level 4: paragraph: The parameter x is the activation amount of Ce, and its value is in a range of more than 0.0002 and less than 0.50 (0.0002<x<0.50); the parameter y is the activation amount of Cr, and its value is in a range of more than 0.0001 and less than 0.05 (0.0001<y<0.05). When the values of the parameters x and y are within these ranges, the activation amounts of Ce and Cr, which are the light emission centers contained in the crystal structure of the fluorescent material, are within optimum ranges; the decrease of light emission intensity due to too few light emission centers can be suppressed; the decrease of light emission intensity due to concentration quenching caused by an excessive activation amount can also be suppressed; and the light emission intensity can be enhanced.
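Since the ranges for x and y are stated several times with different degrees of preference, a compact range check may be useful. The sketch below is our own summary of the numbers given above (the function name and return layout are assumptions, not part of the disclosure); the example values x = 0.009, y = 0.014 correspond to the material produced in the Examples later in this document.

```python
# Sketch: range checks for the parameters x (Ce) and y (Cr) of formula (1),
# including the narrower preferred ranges stated above. Pure arithmetic on
# the numbers given in this section.
def check_formula_1(x: float, y: float) -> dict:
    """Classify (x, y) for (Ln(1-x-y) Ce(x) Cr(y))3 M5 O12."""
    return {
        "valid":          0.0002 < x < 0.50 and 0.0001 < y < 0.05,
        "x_preferred":    0.0005 <= x <= 0.400,
        "x_more_pref":    0.001 <= x <= 0.350,
        "y_preferred":    0.0005 < y < 0.040,
        "y_more_pref":    0.001 <= y <= 0.026,
        "ce_molar_ratio": 3 * x,  # molar ratio of Ce when M is taken as 5
        "cr_molar_ratio": 3 * y,  # molar ratio of Cr when M is taken as 5
    }

print(check_formula_1(x=0.009, y=0.014))  # values of the Example material
```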
item-82 at level 3: section_header: Production Method of Second Fluorescent Material
item-83 at level 4: paragraph: The second fluorescent material 72 can be produced, for example, by the following method.
item-84 at level 4: paragraph: A compound containing at least one rare earth element Ln selected from the group consisting of rare earth elements excluding Ce, a compound containing at least one element M selected from the group consisting of Al, Ga, and In, a compound containing Ce, and a compound containing Cr are mixed such that, when the total molar composition ratio of M is taken as 5 as the standard and the total molar composition ratio of Ln, Ce, and Cr is 3, the molar ratio of Ce is the product of 3 and the value of a parameter x, and the molar ratio of Cr is the product of 3 and the value of a parameter y, where the value of the parameter x is in a range of more than 0.0002 and less than 0.50 and the value of the parameter y is in a range of more than 0.0001 and less than 0.05, thereby obtaining a raw material mixture. The raw material mixture is heat-treated and then subjected to classification and the like, thereby obtaining the second fluorescent material.
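To make the molar bookkeeping concrete, here is a minimal sketch that translates the parameters (x, y) into an oxide weighing normalized to a 100 g batch. Ln = Y, the standard molar masses, and the exclusion of the flux from the normalization are our assumptions; with x = 0.009 and y = 0.014 it reproduces, to within rounding, the weighing used in Example 1 below.

```python
# Sketch: oxide weighing for (Y(1-x-y) Ce(x) Cr(y))3 Al5 O12, per 100 g batch.
# Molar masses (g/mol) are assumed standard values; flux (e.g. BaF2) excluded.
MOLAR_MASS = {"Y2O3": 225.81, "CeO2": 172.11, "Cr2O3": 151.99, "Al2O3": 101.96}

def oxide_weighing(x: float, y: float, batch_g: float = 100.0) -> dict:
    """Grams of each oxide raw material for one garnet formula unit."""
    moles = {                         # moles of oxide per formula unit
        "Y2O3": 3 * (1 - x - y) / 2,  # 2 Y atoms per Y2O3
        "CeO2": 3 * x,                # 1 Ce atom per CeO2
        "Cr2O3": 3 * y / 2,           # 2 Cr atoms per Cr2O3
        "Al2O3": 5 / 2,               # 2 Al atoms per Al2O3
    }
    grams = {k: n * MOLAR_MASS[k] for k, n in moles.items()}
    total = sum(grams.values())
    return {k: round(g * batch_g / total, 2) for k, g in grams.items()}

print(oxide_weighing(0.009, 0.014))
# ~ {'Y2O3': 55.74, 'CeO2': 0.78, 'Cr2O3': 0.54, 'Al2O3': 42.94}
```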
item-85 at level 3: section_header: Compound Containing Rare Earth Element Ln
item-86 at level 4: paragraph: Examples of the compound containing the rare earth element Ln include oxides, hydroxides, nitrides, oxynitrides, fluorides, and chlorides that contain at least one rare earth element Ln selected from the group consisting of rare earth elements excluding Ce. Those compounds may be hydrates. A metal simple substance or an alloy containing a rare earth element may be used in place of at least a part of the compound containing a rare earth element. The compound containing a rare earth element is preferably a compound containing at least one rare earth element Ln selected from the group consisting of Y, Gd, Lu, La, Tb, and Pr. The compound containing a rare earth element may be used alone or as a combination of at least two such compounds.
item-87 at level 4: paragraph: The compound containing a rare earth element is preferably an oxide because, compared with other materials, an oxide does not contain elements other than those of the target composition. Specific examples of the oxide include Y₂O₃, Gd₂O₃, Lu₂O₃, La₂O₃, Tb₄O₇, and Pr₆O₁₁.
item-88 at level 3: section_header: Compound Containing M
item-89 at level 4: paragraph: Examples of the compound containing at least one element M selected from the group consisting of Al, Ga, and In include oxides, hydroxides, nitrides, oxynitrides, fluorides, and chlorides that contain Al, Ga, or In. Those compounds may be hydrates. Furthermore, Al metal simple substance, Ga metal simple substance, In metal simple substance, an Al alloy, a Ga alloy, or an In alloy may be used in place of at least a part of the compound. The compound containing Al, Ga, or In may be used alone or as a combination of two or more thereof. The compound containing at least one element selected from the group consisting of Al, Ga, and In is preferably an oxide. The reason is that, compared with other materials, an oxide does not contain elements other than those of the target composition, and a fluorescent material having the target composition is easily obtained. When a compound containing elements other than those of the target composition is used, residual impurity elements are sometimes present in the obtained fluorescent material. The residual impurity elements become killer factors in light emission, possibly leading to a remarkable decrease of light emission intensity.
item-90 at level 4: paragraph: Examples of the compound containing Al, Ga, or In specifically include Al₂O₃, Ga₂O₃, and In₂O₃.
item-91 at level 3: section_header: Compound Containing Ce and Compound Containing Cr
item-92 at level 4: paragraph: Examples of the compound containing Ce or the compound containing Cr include oxides, hydroxides, nitrides, fluorides, and chlorides that contain cerium (Ce) or chromium (Cr). Those compounds may be hydrates. A Ce metal simple substance, a Ce alloy, a Cr metal simple substance, or a Cr alloy may be used in place of a part of the compound. The compound containing Ce or the compound containing Cr may be used alone or as a combination of two or more thereof. The compound containing Ce or the compound containing Cr is preferably an oxide. The reason is that, compared with other materials, an oxide does not contain elements other than those of the target composition, and a fluorescent material having the target composition is easily obtained. When a compound containing elements other than those of the target composition is used, residual impurity elements are sometimes present in the obtained fluorescent material. The residual impurity elements become killer factors in light emission, possibly leading to a remarkable decrease of light emission intensity.
item-93 at level 4: paragraph: A specific example of the compound containing Ce is CeO₂, and a specific example of the compound containing Cr is Cr₂O₃.
item-94 at level 4: paragraph: The raw material mixture may contain a flux such as a halide, as necessary. When a flux is contained in the raw material mixture, the reaction of the raw materials with each other is accelerated, and the solid phase reaction proceeds more uniformly. It is considered that the temperature for heat-treating the raw material mixture is almost the same as or higher than the formation temperature of a liquid phase of the halide used as the flux, and, as a result, the reaction is accelerated.
item-95 at level 4: paragraph: Examples of the halide include fluorides and chlorides of rare earth metals, alkaline earth metals, and alkali metals. When a halide of a rare earth metal is used as the flux, the flux can be added as a compound so as to achieve a target composition. Specific examples of the flux include BaF₂ and CaF₂. Of those, BaF₂ is preferably used. When barium fluoride is used as the flux, the garnet crystal structure is stabilized and a composition having a garnet crystal structure is easily formed.
item-96 at level 4: paragraph: When the raw material mixture contains a flux, the content of the flux is preferably 20 mass % or less, more preferably 10 mass % or less, and preferably 0.1 mass % or more, on the basis of the raw material mixture (100 mass %). When the flux content is within this range, both the difficulty in forming a garnet crystal structure due to insufficient particle growth caused by too small an amount of flux and the difficulty in forming a garnet crystal structure caused by too large an amount of flux are prevented.
item-97 at level 4: paragraph: The raw material mixture is prepared, for example, as follows. Each raw material is weighed so as to achieve the compounding ratio. Thereafter, the raw materials are subjected to mixed grinding using a dry grinding machine such as a ball mill, to mixed grinding using a mortar and a pestle, to mixing using a mixing machine such as a ribbon blender, or to mixed grinding using both a dry grinding machine and a mixing machine. As necessary, the raw material mixture may be classified using a wet separator such as a settling tank generally used industrially, or a dry classifier such as a cyclone. The mixing may be conducted by dry mixing, or by wet mixing with addition of a solvent. Dry mixing is preferred, because it can shorten the processing time as compared with wet mixing, which leads to improved productivity.
item-98 at level 4: paragraph: Alternatively, the mixed raw materials may be dissolved in an acid, the resulting solution co-precipitated with oxalic acid, and the co-precipitate baked to obtain an oxide, which may then be used as the raw material mixture.
item-99 at level 4: paragraph: The raw material mixture can be heat-treated by placing it in a crucible or a boat made of a carbon material (such as graphite), boron nitride (BN), aluminum oxide (alumina), tungsten (W), or molybdenum (Mo).
item-100 at level 4: paragraph: From the standpoint of stability of a crystal structure, the temperature for heat-treating the raw material mixture is preferably in a range of 1,000° C. or higher and 2,100° C. or lower, more preferably in a range of 1,100° C. or higher and 2,000° C. or lower, still more preferably in a range of 1,200° C. or higher and 1,900° C. or lower, and particularly preferably in a range of 1,300° C. or higher and 1,800° C. or lower. The heat treatment can use an electric furnace or a gas furnace.
item-101 at level 4: paragraph: The heat treatment time varies depending on, for example, the temperature rising rate and the heat treatment atmosphere. The heat treatment time after reaching the heat treatment temperature is preferably 1 hour or more, more preferably 2 hours or more, and still more preferably 3 hours or more, and is preferably 20 hours or less, more preferably 18 hours or less, and still more preferably 15 hours or less.
item-102 at level 4: paragraph: The atmosphere for heat-treating the raw material mixture is an inert atmosphere such as argon or nitrogen, a reducing atmosphere containing hydrogen, or an oxidizing atmosphere such as air. The raw material mixture may be subjected to a two-stage heat treatment: a first heat treatment in air or a weakly reducing atmosphere from the standpoint of, for example, preventing blackening, and a second heat treatment in a reducing atmosphere from the standpoint of enhancing the absorption efficiency of light having a specific light emission peak wavelength. For a fluorescent material constituting a garnet structure, the reactivity of the raw material mixture is improved in an atmosphere having high reducing power, such as a reducing atmosphere; therefore, the fluorescent material can be heat-treated under atmospheric pressure without pressurizing. For example, the heat treatment can be conducted by the method disclosed in Japanese Patent Application No. 2014-260421.
item-103 at level 4: paragraph: The obtained fluorescent material may be subjected to post-treatment steps such as solid-liquid separation by methods such as washing and filtration, drying by methods such as vacuum drying, and classification by dry sieving. After those post-treatment steps, a fluorescent material having a desired average particle diameter is obtained.
item-104 at level 3: section_header: Other Fluorescent Materials
item-105 at level 4: paragraph: The light emitting device 100 may contain other kinds of fluorescent materials, in addition to the first fluorescent material 71.
item-106 at level 4: paragraph: Examples of other kinds of fluorescent materials include a green fluorescent material emitting green light by absorbing a part of the light emitted from the light emitting element 10, a yellow fluorescent material emitting yellow light, and a fluorescent material having a light emission peak wavelength in a wavelength range exceeding 680 nm.
item-107 at level 4: paragraph: Examples of the green fluorescent material specifically include fluorescent materials having any one of compositions represented by the following formulae (i) to (iii).
item-108 at level 4: paragraph: M¹¹₈MgSi₄O₁₆X¹¹:Eu (i)
item-109 at level 4: paragraph: wherein M¹¹ is at least one selected from the group consisting of Ca, Sr, Ba, and Zn, and X¹¹ is at least one selected from the group consisting of F, Cl, Br, and I.
item-110 at level 4: paragraph: Si₆₋bAlbObN₈₋b:Eu (ii)
item-111 at level 4: paragraph: wherein b satisfies 0<b<4.2.
item-112 at level 4: paragraph: M¹³Ga₂S₄:Eu (iii)
item-113 at level 4: paragraph: wherein M¹³ is at least one selected from the group consisting of Mg, Ca, Sr, and Ba.
item-115 at level 4: paragraph: Examples of the yellow fluorescent material specifically include fluorescent materials having any one of compositions represented by the following formulae (iv) to (v).
item-116 at level 4: paragraph: M¹⁴c/dSi₁₂₋₍c₊d₎Al₍c₊d₎OdN₍₁₆₋d₎:Eu (iv)
item-117 at level 4: paragraph: wherein M¹⁴ is at least one selected from the group consisting of Sr, Ca, Li, and Y, the value of the parameter c is in a range of 0.5 to 5, the value of the parameter d is in a range of 0 to 2.5, and the parameter d is the electrical charge of M¹⁴.
item-118 at level 4: paragraph: M¹⁵₃Al₅O₁₂:Ce (v)
item-119 at level 4: paragraph: wherein M¹⁵ is at least one selected from the group consisting of Y and Lu.
item-120 at level 4: paragraph: Examples of the fluorescent material having light emission peak wavelength in a wavelength range exceeding 680 nm specifically include fluorescent materials having any one of compositions represented by the following formulae (vi) to (x).
item-121 at level 4: paragraph: Al₂O₃:Cr (vi)
item-122 at level 4: paragraph: CaYAlO₄:Mn (vii)
item-123 at level 4: paragraph: LiAlO₂:Fe (viii)
item-124 at level 4: paragraph: CdS:Ag (ix)
item-125 at level 4: paragraph: GdAlO₃:Cr (x)
item-126 at level 4: paragraph: The light emitting device 100 can be utilized as a light emitting device for plant cultivation that can activate photosynthesis of plants and promote growth of plants so as to have favorable form and weight.
item-127 at level 3: section_header: Plant Cultivation Method
item-128 at level 4: paragraph: The plant cultivation method of one embodiment of the present disclosure is a method for cultivating plants that includes irradiating plants with light emitted from the light emitting device 100. In the plant cultivation method, plants can be irradiated with light from the light emitting device 100 in plant factories that are completely isolated from the external environment and allow artificial control. The kind of plants is not particularly limited. However, the light emitting device 100 of one embodiment of the present disclosure can activate photosynthesis of plants and promote their growth such that stems, leaves, roots, and fruits have favorable form and weight, and therefore is preferably applied to the cultivation of vegetables and flowers that contain abundant chlorophyll for photosynthesis. Examples of the vegetables include lettuces such as garden lettuce, curl lettuce, Lamb's lettuce, Romaine lettuce, endive, Lollo Rosso, Rucola lettuce, and frill lettuce; Asteraceae vegetables such as “shungiku” (Chrysanthemum coronarium); morning glory vegetables such as spinach; Rosaceae vegetables such as strawberry; and flowers such as chrysanthemum, gerbera, rose, and tulip.
item-129 at level 2: section_header: EXAMPLES
item-130 at level 3: paragraph: The present invention is further specifically described below by Examples and Comparative Examples.
item-131 at level 2: section_header: Examples 1 to 5
item-132 at level 3: section_header: First Fluorescent Material
item-133 at level 4: paragraph: Two fluorescent materials, a fluorogermanate fluorescent material activated by Mn⁴⁺ having a light emission peak at 660 nm and a fluorescent material containing silicon nitride activated by Eu²⁺ having a light emission peak at 660 nm, were used as the first fluorescent material 71. In the first fluorescent material 71, the mass ratio of the MGF fluorescent material to the CASN fluorescent material (MGF:CASN) was 95:5.
item-134 at level 3: section_header: Second Fluorescent Material
item-135 at level 4: paragraph: A fluorescent material obtained by the following production method was used as the second fluorescent material 72.
item-136 at level 4: paragraph: 55.73 g of Y₂O₃ (Y₂O₃ content: 100 mass %), 0.78 g of CeO₂ (CeO₂ content: 100 mass %), 0.54 g of Cr₂O₃ (Cr₂O₃ content: 100 mass %), and 42.95 g of Al₂O₃ (Al₂O₃ content: 100 mass %) were weighed as raw materials, and 5.00 g of BaF₂ as a flux was added to the mixture. The resulting raw materials were dry mixed for 1 hour in a ball mill. Thus, a raw material mixture was obtained.
item-137 at level 4: paragraph: The raw material mixture obtained was placed in an alumina crucible, and a lid was put on the alumina crucible. The raw material mixture was heat-treated at 1,500° C. for 10 hours in a reducing atmosphere of H₂: 3 vol % and N₂: 97 vol %. Thus, a calcined product was obtained. The calcined product was passed through a dry sieve to obtain a second fluorescent material. The second fluorescent material obtained was subjected to composition analysis by ICP-AES emission spectrometry using an inductively coupled plasma emission analyzer (manufactured by Perkin Elmer). The composition of the second fluorescent material obtained was (Y₀.₉₇₇Ce₀.₀₀₉Cr₀.₀₁₄)₃Al₅O₁₂ (hereinafter referred to as “YAG: Ce, Cr”).
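As a cross-check on the batch arithmetic above, the following sketch back-calculates the Ln-site composition from the weighed oxide masses and compares it with the ICP-AES result; the standard molar masses are our assumption, and the BaF₂ flux does not enter the garnet stoichiometry.

```python
# Sketch: back-calculate (Y,Ce,Cr)3Al5O12 stoichiometry from the weighed
# oxide masses of this Example. Molar masses are assumed standard values.
MASS_G = {"Y2O3": 55.73, "CeO2": 0.78, "Cr2O3": 0.54, "Al2O3": 42.95}
MOLAR_MASS = {"Y2O3": 225.81, "CeO2": 172.11, "Cr2O3": 151.99, "Al2O3": 101.96}
CATIONS = {"Y2O3": 2, "CeO2": 1, "Cr2O3": 2, "Al2O3": 2}  # metal atoms per oxide

metal_mol = {k: MASS_G[k] / MOLAR_MASS[k] * CATIONS[k] for k in MASS_G}
ln_site = metal_mol["Y2O3"] + metal_mol["CeO2"] + metal_mol["Cr2O3"]

y, ce, cr = (metal_mol[k] / ln_site for k in ("Y2O3", "CeO2", "Cr2O3"))
al = 3 * metal_mol["Al2O3"] / ln_site  # Al atoms per 3 Ln-site atoms
print(f"(Y{y:.3f}Ce{ce:.3f}Cr{cr:.3f})3Al{al:.2f}O12")
# -> (Y0.977Ce0.009Cr0.014)3Al5.00O12, matching the analyzed composition
```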
item-138 at level 3: section_header: Light Emitting Device
item-139 at level 4: paragraph: A nitride semiconductor light emitting element having a light emission peak wavelength of 450 nm was used as the light emitting element 10 in the light emitting device 100.
item-140 at level 4: paragraph: Silicone resin was used as the sealing material constituting the fluorescent member 50. The first fluorescent material 71 and/or the second fluorescent material 72 was added to 100 parts by mass of the silicone resin in the compounding ratio (parts by mass) shown in Table 1, 15 parts by mass of a silica filler was further added, and the whole was mixed and dispersed. The resulting mixture was degassed to obtain a resin composition constituting the fluorescent member. In each of the resin compositions of Examples 1 to 5, the compounding ratio of the first fluorescent material 71 and the second fluorescent material 72 was adjusted as shown in Table 1, and those materials were compounded such that the R/B ratio is within a range of 2.0 or more and 2.4 or less and the R/FR ratio is within a range of 1.4 or more and 6.0 or less.
item-141 at level 4: paragraph: The resin composition was poured onto the light emitting element 10 in the depressed portion of the molded article 40 so as to fill the depressed portion, and was heated at 150° C. for 4 hours to cure the resin composition, thereby forming the fluorescent member 50. Thus, the light emitting device 100 as shown in FIG. 1 was produced in each of Examples 1 to 5.
item-142 at level 2: section_header: Comparative Example 1
item-143 at level 3: paragraph: A light emitting device X including a semiconductor light emitting element having a light emission peak wavelength of 450 nm and a light emitting device Y including a semiconductor light emitting element having a light emission peak wavelength of 660 nm were used, and the R/B ratio was adjusted to 2.5.
item-144 at level 3: section_header: Evaluation
item-145 at level 3: section_header: Photon Flux Density
item-146 at level 4: paragraph: Photon flux densities of the light emitted from the light emitting devices 100 used in Examples 1 to 5 and the light emitting devices X and Y used in Comparative Example 1 were measured using a photon measuring device (LI-250A, manufactured by LI-COR). The photon flux density B, the photon flux density R, and the photon flux density FR of the light emitted from the light emitting devices used in each of the Examples and the Comparative Example, the R/B ratio, and the R/FR ratio are shown in Table 1. FIG. 2 shows spectra showing the relationship between wavelength and relative photon flux density for the light emitting devices used in each Example and the Comparative Example.
item-147 at level 3: section_header: Plant Cultivation Test
item-148 at level 4: paragraph: The plant cultivation method includes a “growth period under an RGB light source (hereinafter referred to as the first growth period)” and a “growth period under a light source for plant growth (hereinafter referred to as the second growth period)”, using a light emitting device according to an embodiment of the present disclosure as the light source.
item-149 at level 4: paragraph: The first growth period uses an RGB light source, and a generally known RGB type LED can be used as the RGB light source. The reason for irradiating plants with an RGB type LED in the initial stage of plant growth is to equalize the stem length and the number and size of true leaves in the initial stage, thereby clarifying the influence of the difference in light quality during the second growth period.
item-150 at level 4: paragraph: The first growth period is preferably about 2 weeks. In the case where the first growth period is shorter than 2 weeks, it is necessary to confirm that two true leaves have developed and that the root has reached a length that can surely absorb water in the second growth period. In the case where the first growth period exceeds 2 weeks, variation in the second growth period tends to increase. The variation is easier to control with an RGB light source, under which stem extension is inhibited, than with a fluorescent lamp, under which stem extension readily occurs.
item-151 at level 4: paragraph: After completion of the first growth period, the second growth period immediately follows. In the second growth period, it is preferred that plants be irradiated with light emitted from a light emitting device according to an embodiment of the present disclosure. Photosynthesis of plants is activated by irradiating them with this light, and the growth of plants can be promoted so that they have favorable form and weight.
item-152 at level 4: paragraph: The total growth period of the first growth period and the second growth period is about 4 to 6 weeks, and it is preferred that shippable plants can be obtained within the period.
item-153 at level 4: paragraph: The cultivation test was specifically conducted by the following method.
item-154 at level 4: paragraph: Romaine lettuce (green romaine, produced by Nakahara Seed Co., Ltd.) was used as the cultivation plant.
item-155 at level 3: section_header: First Growth Period
item-156 at level 4: paragraph: Urethane sponges (salad urethane, manufactured by M Hydroponic Research Co., Ltd.) having Romaine lettuce seeded therein were placed side by side on a plastic tray and were irradiated with light from an RGB-LED light source (manufactured by Shibasaki Inc.) to cultivate the plants. The plants were cultivated for 16 days under the conditions of room temperature: 22 to 23° C., humidity: 50 to 60%, photon flux density from the light emitting device: 100 μmol·m⁻²·s⁻¹, and daytime hours: 16 hours/day. Only water was given until germination, and after germination (about 4 days later), a solution obtained by mixing Otsuka House #1 (manufactured by Otsuka Chemical Co., Ltd.) and Otsuka House #2 (manufactured by Otsuka Chemical Co., Ltd.) in a mass ratio of 3:2 and dissolving the mixture in water was used as the nutrient solution (Otsuka Formulation A). The conductivity of the nutrient solution was 1.5 mS·cm⁻¹.
item-157 at level 3: section_header: Second Growth Period
item-158 at level 4: paragraph: After the first growth period, the plants were irradiated with light from the light emitting devices of Examples 1 to 5 and Comparative Example 1, and were subjected to hydroponics.
item-159 at level 4: paragraph: The plants were cultivated for 19 days under the conditions of room temperature: 22 to 24° C., humidity: 60 to 70%, CO₂ concentration: 600 to 700 ppm, photon flux density from the light emitting device: 125 μmol·m⁻²·s⁻¹, and daytime hours: 16 hours/day. Otsuka Formulation A was used as the nutrient solution. The conductivity of the nutrient solution was 1.5 mS·cm⁻¹. The values of the R/B and R/FR ratios of the light used for plant irradiation from each light emitting device in the second growth period are shown in Table 1.
item-160 at level 3: section_header: Measurement of Fresh Weight (Edible Part)
item-161 at level 4: paragraph: The plants after cultivation were harvested, and the wet weights of the terrestrial part and the root were measured. The wet weight of the terrestrial part of each of the 6 cultivated plants grown hydroponically under light from the light emitting devices of Examples 1 to 5 and Comparative Example 1 was measured as the fresh weight (edible part) (g). The results obtained are shown in Table 1 and FIG. 3.
item-162 at level 3: section_header: Measurement of Nitrate Nitrogen Content
item-163 at level 4: paragraph: The edible part (about 20 g) of each of the cultivated plants, from which the basal portion of about 5 cm had been removed, was frozen with liquid nitrogen and crushed with a juice mixer (laboratory mixer LM-PLUS, manufactured by Osaka Chemical Co., Ltd.) for 1 minute. The resulting liquid was filtered with Miracloth (manufactured by Millipore), and the filtrate was centrifuged at 4° C. and 15,000 rpm for 5 minutes. The nitrate nitrogen content (mg/100 g) of the cultivated plant in the supernatant was measured using a portable reflection photometer system (product name: RQflex system, manufactured by Merck) and test paper (product name: Reflectoquant (registered trademark), manufactured by Kanto Chemical Co., Inc.). The results are shown in Table 1 and FIG. 4.
item-164 at level 4: table with [13x10]
item-165 at level 4: paragraph: As shown in Table 1, for the light emitting devices of Examples 1 to 5, the R/B ratios are within a range of 2.0 or more and 4.0 or less and the R/FR ratios are within a range of 0.7 or more and 13.0 or less. For Romaine lettuce cultivated under light from the light emitting devices of Examples 1 to 5, the fresh weight (edible part) was increased as compared with Romaine lettuce cultivated under light from the light emitting device used in Comparative Example 1. Therefore, the growth of the plants was promoted, as shown in Table 1 and FIG. 3.
item-166 at level 4: paragraph: As shown in FIG. 2, the light emitting device 100 in Example 1 had at least one maximum value of the relative photon flux density in a range of 380 nm or more and 490 nm or less and in a range of 580 nm or more and less than 680 nm. The light emitting devices 100 in Examples 2 to 5 had at least one maximum value of relative photon flux density in a range of 380 nm or more and 490 nm or less, in a range of 580 nm or more and less than 680 nm and in a range of 680 nm or more and 800 nm or less, respectively. The maximum value of the relative photon flux density in a range of 380 nm or more and 490 nm or less is due to the light emission of the light emitting element having light emission peak wavelength in a range of 380 nm or more and 490 nm or less, the maximum value of the relative photon flux density in a range of 580 nm or more and less than 680 nm is due to the first fluorescent material emitting the light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm, and the maximum value of the relative photon flux density in a range of 680 nm or more and 800 nm or less is due to the second fluorescent material emitting the light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less.
item-167 at level 4: paragraph: As shown in Table 1, for the light emitting devices 100 of Examples 4 and 5, the R/B ratios are 2.0 and 2.3, respectively, and the R/FR ratios are 1.6 and 1.4, respectively. The R/B ratios are within a range of 2.0 or more and 4.0 or less, and the R/FR ratios are within a range of 0.7 or more and 2.0 or less. For Romaine lettuces cultivated under light from these light emitting devices 100, the nitrate nitrogen content was decreased as compared with Comparative Example 1. As shown in Table 1 and FIG. 4, plants could be cultivated in which the content of nitrate nitrogen, which may adversely affect human health, had been reduced to a range that does not inhibit the growth of the plants.
item-168 at level 4: paragraph: The light emitting device according to an embodiment of the present disclosure can be utilized as a light emitting device for plant cultivation that can activate photosynthesis and is capable of promoting growth of plants. Furthermore, the plant cultivation method, in which plants are irradiated with the light emitted from the light emitting device according to an embodiment of the present disclosure, can cultivate plants that can be harvested in a relatively short period of time and can be used in a plant factory.
item-169 at level 4: paragraph: Although the present disclosure has been described with reference to several exemplary embodiments, it shall be understood that the words that have been used are words of description and illustration, rather than words of limitation. Changes may be made within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of the disclosure in its aspects. Although the disclosure has been described with reference to particular examples, means, and embodiments, the disclosure is not intended to be limited to the particulars disclosed; rather, the disclosure extends to all functionally equivalent structures, methods, and uses such as are within the scope of the appended claims.
item-170 at level 4: paragraph: One or more examples or embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “disclosure” merely for convenience and without intending to voluntarily limit the scope of this application to any particular disclosure or inventive concept. Moreover, although specific examples and embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific examples or embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various examples and embodiments. Combinations of the above examples and embodiments, and other examples and embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.
item-171 at level 4: paragraph: In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter.
item-172 at level 4: paragraph: The above disclosed subject matter shall be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
item-173 at level 2: section_header: CLAIMS
item-174 at level 3: paragraph: 1. A light emitting device comprising: a light emitting element having a light emission peak wavelength in a range of 380 nm or more and 490 nm or less; and a fluorescent material that is excited by light from the light emitting element and emits light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm, wherein the light emitting device emits light having a ratio R/B of a photon flux density R to a photon flux density B within a range of 2.0 or more and 4.0 or less, and a ratio R/FR of the photon flux density R to a photon flux density FR within a range of 0.7 or more and 13.0 or less, wherein the photon flux density R is in a wavelength range of 620 nm or more and less than 700 nm, the photon flux density B is in a wavelength range of 380 nm or more and 490 nm or less, and the photon flux density FR is in a wavelength range of 700 nm or more and 780 nm or less.
item-175 at level 3: paragraph: 2. The light emitting device according to claim 1, further comprising another fluorescent material that is excited by light from the light emitting element and emits light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less, wherein the ratio R/FR is within a range of 0.7 or more and 5.0 or less.
item-176 at level 3: paragraph: 3. The light emitting device according to claim 2, wherein the ratio R/FR is within a range of 0.7 or more and 2.0 or less.
item-177 at level 3: paragraph: 4. The light emitting device according to claim 2, wherein the another fluorescent material contains a first element Ln containing at least one element selected from the group consisting of rare earth elements excluding Ce, a second element M containing at least one element selected from the group consisting of Al, Ga and In, Ce, and Cr, and has a composition of an aluminate fluorescent material, and when a molar ratio of the second element M is taken as 5, a molar ratio of Ce is a product of a value of a parameter x and 3, and a molar ratio of Cr is a product of a value of a parameter y and 3, the value of the parameter x being in a range of exceeding 0.0002 and less than 0.50, and the value of the parameter y being in a range of exceeding 0.0001 and less than 0.05.
item-178 at level 3: paragraph: 5. The light emitting device according to claim 2, wherein the another fluorescent material has the composition represented by the following formula (I): (Ln₁₋ₓ₋yCeₓCry)₃M₅O₁₂ (I) wherein Ln is at least one rare earth element selected from the group consisting of rare earth elements excluding Ce, M is at least one element selected from the group consisting of Al, Ga, and In, and x and y are numbers satisfying 0.0002<x<0.50 and 0.0001<y<0.05.
item-179 at level 3: paragraph: 6. The light emitting device according to claim 2, the light emitting device being used in plant cultivation.
item-180 at level 3: paragraph: 7. The light emitting device according to claim 1, wherein the fluorescent material is at least one selected from the group consisting of: a fluorogermanate fluorescent material that is activated by Mn⁴⁺, a fluorescent material that has a composition containing at least one element selected from Sr and Ca, and Al, and contains silicon nitride that is activated by Eu²⁺, a fluorescent material that has a composition containing at least one element selected from the group consisting of alkaline earth metal elements and at least one element selected from the group consisting of alkali metal elements, and contains aluminum nitride that is activated by Eu²⁺, a fluorescent material containing a sulfide of Ca or Sr that is activated by Eu²⁺, and a fluorescent material that has a composition containing at least one element or ion selected from the group consisting of alkali metal elements, and an ammonium ion (NH₄⁺), and at least one element selected from the group consisting of Group 4 elements and Group 14 elements, and contains a fluoride that is activated by Mn⁴⁺.
item-181 at level 3: paragraph: 8. The light emitting device according to claim 1, wherein the fluorescent material contains: a fluorogermanate fluorescent material that is activated by Mn⁴⁺, and a fluorescent material that has a composition containing at least one element selected from Sr and Ca, and Al, and contains silicon nitride that is activated by Eu²⁺, wherein the compounding ratio between the fluorogermanate fluorescent material and the fluorescent material containing silicon nitride (fluorogermanate fluorescent material:fluorescent material containing silicon nitride) is in a range of 50:50 or more and 99:1 or less.
item-182 at level 3: paragraph: 9. The light emitting device according to claim 1, the light emitting device being used in plant cultivation.
item-183 at level 3: paragraph: 10. A plant cultivation method comprising irradiating plants with light emitted from the light emitting device according to claim 1.
item-184 at level 3: paragraph: 11. A plant cultivation method comprising irradiating plants with light emitted from the light emitting device according to claim 2.

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,380 @@
# LIGHT EMITTING DEVICE AND PLANT CULTIVATION METHOD
## ABSTRACT
Provided is a light emitting device that includes a light emitting element having a light emission peak wavelength ranging from 380 nm to 490 nm, and a fluorescent material excited by light from the light emitting element and emitting light having a light emission peak wavelength of 580 nm or more and less than 680 nm. The light emitting device emits light having a ratio R/B of a photon flux density R to a photon flux density B ranging from 2.0 to 4.0 and a ratio R/FR of the photon flux density R to a photon flux density FR ranging from 0.7 to 13.0, the photon flux density R being in a wavelength range of 620 nm or more and less than 700 nm, the photon flux density B being in a wavelength range of 380 nm or more and 490 nm or less, and the photon flux density FR being in a wavelength range of 700 nm or more and 780 nm or less.
## CROSS-REFERENCE TO RELATED APPLICATION
This application claims the benefit of Japanese Patent Application No. 2016-128835 filed on Jun. 29, 2016, the entire disclosure of which is hereby incorporated by reference.
## BACKGROUND
## Technical Field
The present disclosure relates to a light emitting device and a plant cultivation method.
## Description of Related Art
With environmental changes caused by climate change and other human disruptions, plant factories are expected to increase the production efficiency of vegetables and to adjust production so as to make a stable supply of vegetables possible. Plant factories capable of artificial management can stably supply clean and safe vegetables to markets, and are therefore expected to become a next-generation industry.
Plant factories that are completely isolated from the external environment make it possible to artificially control growing conditions and to collect various data, such as growth methods, growth rates, and yields, for each classification of plants. Based on those data, plant factories can plan production according to the balance between supply and demand in markets, and can supply plants such as vegetables without depending on surrounding conditions such as the climatic environment. In particular, an increase in food production is indispensable as the world population grows. If plants can be produced systematically without the influence of surrounding conditions such as the climatic environment, vegetables produced in plant factories can be supplied stably within a country and can additionally be exported abroad as viable products.
In general, vegetables grown outdoors receive sunlight, grow while conducting photosynthesis, and are then harvested. On the other hand, vegetables grown in plant factories are required to be harvested in a short period of time, or to grow to larger than normal sizes even within an ordinary growth period.
In plant factories, the light source used in place of sunlight affects the growth period and the growth of plants. LED lighting is being adopted in place of conventional fluorescent lamps from the standpoint of reducing power consumption.
For example, Japanese Unexamined Patent Publication No. 2009-125007 discloses a plant growth method. In this method, the plants are irradiated at predetermined timings with light emitted from a first LED light emitting element and/or a second LED light emitting element of a lighting apparatus, the first LED light emitting element emitting light in a wavelength region of 625 to 690 nm and the second LED light emitting element emitting light in a wavelength region of 420 to 490 nm, so that lights having sufficient intensities and mutually different wavelengths are emitted.
## SUMMARY
However, when plants are merely irradiated with lights having different wavelengths as in the plant growth method disclosed in Japanese Unexamined Patent Publication No. 2009-125007, the effect of promoting plant growth is not sufficient. Further improvement in the promotion of plant growth is required.
Accordingly, an object of the present disclosure is to provide a light emitting device capable of promoting growth of plants and a plant cultivation method.
Means for solving the above problems are as follows, and the present disclosure includes the following embodiments.
A first embodiment of the present disclosure is a light emitting device including a light emitting element having a light emission peak wavelength in a range of 380 nm or more and 490 nm or less, and a fluorescent material that is excited by light from the light emitting element and emits light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm. The light emitting device emits light having a ratio R/B of a photon flux density R to a photon flux density B within a range of 2.0 or more and 4.0 or less, and a ratio R/FR of the photon flux density R to a photon flux density FR within a range of 0.7 or more and 13.0 or less, where the photon flux density R is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 620 nm or more and less than 700 nm, the photon flux density B is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 380 nm or more and 490 nm or less, and the photon flux density FR is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 700 nm or more and 780 nm or less.
A second embodiment of the present disclosure is a plant cultivation method including irradiating plants with light from the light emitting device.
According to embodiments of the present disclosure, a light emitting device capable of promoting growth of plants and a plant cultivation method can be provided.
## BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic cross sectional view of a light emitting device according to an embodiment of the present disclosure.
FIG. 2 is a diagram showing spectra of wavelengths and relative photon flux densities of exemplary light emitting devices according to embodiments of the present disclosure and a comparative light emitting device.
FIG. 3 is a graph showing fresh weight (edible part) at the harvest time of each plant grown by irradiating the plant with light from exemplary light emitting devices according to embodiments of the present disclosure and a comparative light emitting device.
FIG. 4 is a graph showing nitrate nitrogen content in each plant grown by irradiating the plant with light from exemplary light emitting devices according to embodiments of the present disclosure and a comparative light emitting device.
## DETAILED DESCRIPTION
A light emitting device and a plant cultivation method according to the present invention will be described below based on an embodiment. However, the embodiment described below only exemplifies the technical concept of the present invention, and the present invention is not limited to the light emitting device and plant cultivation method described below. In the present specification, the relationship between color names and chromaticity coordinates and the relationship between wavelength ranges of light and color names of monochromatic light follow JIS Z8110.
### Light Emitting Device
An embodiment of the present disclosure is a light emitting device including a light emitting element having a light emission peak wavelength in a range of 380 nm or more and 490 nm or less (hereinafter sometimes referred to as a “region from near ultraviolet to blue color”), and a first fluorescent material that is excited by light from the light emitting element and emits light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm. The light emitting device emits light having a ratio R/B of a photon flux density R to a photon flux density B within a range of 2.0 or more and 4.0 or less, and a ratio R/FR of the photon flux density R to a photon flux density FR within a range of 0.7 or more and 13.0 or less, where the photon flux density R is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 620 nm or more and less than 700 nm, the photon flux density B is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 380 nm or more and 490 nm or less, and the photon flux density FR is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 700 nm or more and 780 nm or less.
An example of the light emitting device according to one embodiment of the present disclosure is described below based on the drawings. FIG. 1 is a schematic cross sectional view showing a light emitting device 100 according to an embodiment of the present disclosure.
The light emitting device 100 includes a molded article 40, a light emitting element 10, and a fluorescent member 50, as shown in FIG. 1. The molded article 40 includes a first lead 20 and a second lead 30 that are integrally molded with a resin portion 42 containing a thermoplastic resin or a thermosetting resin. The molded article 40 forms a depression having a bottom and sides, and the light emitting element 10 is placed on the bottom of the depression. The light emitting element 10 has a pair of electrodes, an anode and a cathode, which are electrically connected to the first lead 20 and the second lead 30, respectively, through wires 60. The light emitting element 10 is covered with the fluorescent member 50. The fluorescent member 50 includes, for example, a fluorescent material 70 performing wavelength conversion of light from the light emitting element 10, and a resin. The fluorescent material 70 includes a first fluorescent material 71 and a second fluorescent material 72. A part of each of the first lead 20 and the second lead 30 connected to the anode and the cathode of the light emitting element 10 is exposed outside the package constituting the light emitting device 100. The light emitting device 100 can emit light by receiving electric power supplied from the outside through the first lead 20 and the second lead 30.
The fluorescent member 50 not only performs wavelength conversion of light emitted from the light emitting element 10, but also functions as a member for protecting the light emitting element 10 from the external environment. In FIG. 1, the fluorescent material 70 is localized in the fluorescent member 50 in a state in which the first fluorescent material 71 and the second fluorescent material 72 are mixed with each other, and is arranged adjacent to the light emitting element 10. This constitution allows the fluorescent material 70 to efficiently perform the wavelength conversion of light from the light emitting element 10, and as a result can provide a light emitting device having excellent light emission efficiency. The arrangement of the fluorescent member 50 containing the fluorescent material 70 and the light emitting element 10 is not limited to the arrangement in which the fluorescent material 70 is adjacent to the light emitting element 10 as shown in FIG. 1; considering the influence of heat generated by the light emitting element 10, the fluorescent material 70 can be arranged apart from the light emitting element 10 within the fluorescent member 50. Furthermore, light with suppressed color unevenness can be emitted from the light emitting device 100 by arranging the fluorescent material 70 almost evenly in the fluorescent member 50. In FIG. 1, the fluorescent material 70 is arranged in a state in which the first fluorescent material 71 and the second fluorescent material 72 are mixed with each other. However, for example, the first fluorescent material 71 may be arranged in a layer and the second fluorescent material 72 may be arranged thereon in another layer, or vice versa.
The light emitting device 100 includes the first fluorescent material 71, which is excited by light from the light emitting element 10 and emits light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm, and preferably further includes the second fluorescent material 72, which is excited by light from the light emitting element 10 and emits light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less.
The first fluorescent material 71 and the second fluorescent material 72 are contained in, for example, the fluorescent member 50 covering the light emitting element 10. The light emitting device 100, in which the light emitting element 10 is covered with the fluorescent member 50 containing the first fluorescent material 71 and the second fluorescent material 72, emits light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm when a part of the light emitted from the light emitting element 10 is absorbed by the first fluorescent material 71. Furthermore, the light emitting device 100 emits light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less when a part of the light emitted from the light emitting element 10 is absorbed by the second fluorescent material 72.
Plants grow when the pigments chlorophyll a and chlorophyll b, present in their chloroplasts, absorb light while the plants take in carbon dioxide gas and water, which are converted into carbohydrates (saccharides) by photosynthesis. Chlorophyll a and chlorophyll b, which are used in the growth promotion of plants, have absorption peaks particularly in a red region of 625 nm or more and 675 nm or less and a blue region of 425 nm or more and 475 nm or less. The action of photosynthesis by the chlorophylls of plants mainly occurs in a wavelength range of 400 nm or more and 700 nm or less, but chlorophyll a and chlorophyll b further have local absorption peaks in a region of 700 nm or more and 800 nm or less.
For example, when plants are irradiated with light having a longer wavelength than the absorption peak (in the vicinity of 680 nm) in the red region of chlorophyll a, a phenomenon called red drop, in which the activity of photosynthesis rapidly decreases, occurs. However, it is known that when plants are irradiated with light in the near infrared region together with light in the red region, photosynthesis is accelerated by a synergistic effect of those two kinds of light. This phenomenon is called the Emerson effect.
The intensity of light with which plants are irradiated is represented by photon flux density. The photon flux density (μmol·m⁻²·s⁻¹) is the number of photons reaching a unit area per unit time. The amount of photosynthesis depends on the number of photons, and therefore does not depend on other optical characteristics if the photon flux density is the same. However, the wavelength dependency of photosynthetic activation differs depending on the photosynthetic pigment. The intensity of light necessary for the photosynthesis of plants is sometimes represented by the Photosynthetic Photon Flux Density (PPFD).
The light emitting device 100 emits light having a ratio R/B of a photon flux density R to a photon flux density B within a range of 2.0 or more and 4.0 or less, and a ratio R/FR of the photon flux density R to a photon flux density FR within a range of 0.7 or more and 13.0 or less, where the photon flux density R is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 620 nm or more and less than 700 nm, the photon flux density B is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 380 nm or more and 490 nm or less, and the photon flux density FR is the number of light quanta (μmol·m⁻²·s⁻¹) incident per unit time and unit area in a wavelength range of 700 nm or more and 780 nm or less.
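The two ratios defined above are straightforward to compute once a spectrum is known. The following is a minimal sketch, not part of the patent: it integrates a hypothetical spectral irradiance curve over the three wavelength bands to obtain photon flux densities and the R/B and R/FR ratios. The wavelength grid, the spectrum values, and the `photon_flux_density` helper are illustrative assumptions.

```python
# Illustrative only: band-integrated photon flux densities from a hypothetical
# spectral irradiance curve, and the R/B and R/FR ratios defined in the text.
import numpy as np

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
N_A = 6.022e23  # Avogadro constant, 1/mol

wavelengths = np.arange(380.0, 781.0)  # nm, 1 nm grid
# Hypothetical spectral irradiance in W*m^-2*nm^-1 (stand-in for a measurement).
E = np.interp(wavelengths, [380, 450, 550, 660, 700, 780],
              [0.01, 0.30, 0.05, 0.40, 0.08, 0.02])

def photon_flux_density(lo_nm, hi_nm):
    """Photon flux density in umol*m^-2*s^-1 over [lo_nm, hi_nm) nm."""
    band = (wavelengths >= lo_nm) & (wavelengths < hi_nm)
    lam_m = wavelengths[band] * 1e-9        # nm -> m
    photons = E[band] * lam_m / (H * C)     # photons per m^2, s, and nm
    return np.trapz(photons, wavelengths[band]) / N_A * 1e6

B = photon_flux_density(380, 491)   # 380 nm or more and 490 nm or less
R = photon_flux_density(620, 700)   # 620 nm or more and less than 700 nm
FR = photon_flux_density(700, 781)  # 700 nm or more and 780 nm or less
print(f"R/B = {R / B:.2f}, R/FR = {R / FR:.2f}")
```

In practice these ratios come directly from a photon meter, as in the Examples below; the sketch only makes the band definitions concrete.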
It is estimated that in plants irradiated with light containing the photon flux density FR from the light emitting device 100, photosynthesis is activated by the Emerson effect, and as a result the growth of the plants can be promoted. Furthermore, when plants are irradiated with light containing the photon flux density FR, their growth can be promoted through the reversible reaction between red light irradiation and far red light irradiation in which phytochrome, a chromoprotein contained in plants, participates.
Examples of nutrients necessary for the growth of plants include nitrogen, phosphoric acid, and potassium. Of those nutrients, nitrogen is absorbed by plants as nitrate nitrogen (nitrate ion: NO₃⁻). Nitrate nitrogen changes into nitrite ion (NO₂⁻) by a reduction reaction, and when the nitrite ion further reacts with a fatty acid amine, a nitrosamine is formed. It is known that nitrite ion acts on hemoglobin in blood, and that nitroso compounds sometimes affect human health. The mechanism by which nitrate nitrogen is converted into nitrite ion in vivo is complicated, and the relationship between the amount of nitrate nitrogen ingested and its influence on human health has not been clarified. However, a smaller content of nitrate nitrogen, which may possibly affect human health, is desirable.
For the above reasons, although nitrogen is one of the nutrients necessary for the growth of plants, it is preferred that the content of nitrate nitrogen in food plants be reduced to a range that does not disturb their growth.
It is preferred that the light emitting device 100 further include the second fluorescent material 72, which is excited by light from the light emitting element 10 and emits light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less, and that the R/FR ratio be within a range of 0.7 or more and 5.0 or less. The R/FR ratio is more preferably within a range of 0.7 or more and 2.0 or less.
### Light Emitting Element
The light emitting element 10 is used as an excitation light source and emits light having a light emission peak wavelength in a range of 380 nm or more and 490 nm or less. As a result, a stable light emitting device having high efficiency, high linearity of output to input, and high resistance to mechanical impact can be obtained.
The light emission peak wavelength of the light emitting element 10 is preferably in a range of 390 nm or more and 480 nm or less, more preferably in a range of 420 nm or more and 470 nm or less, still more preferably in a range of 440 nm or more and 460 nm or less, and particularly preferably in a range of 445 nm or more and 455 nm or less. A light emitting element including a nitride semiconductor (InₓAlyGa₁₋ₓ₋yN, where 0≦x, 0≦y, and x+y≦1) is preferably used as the light emitting element 10.
The half value width (full width at half maximum) of the emission spectrum of the light emitting element 10 can be, for example, 30 nm or less.
### Fluorescent Member
The fluorescent member 50 used in the light emitting device 100 preferably includes the first fluorescent material 71 and a sealing material, and more preferably further includes the second fluorescent material 72. A thermoplastic resin and a thermosetting resin can be used as the sealing material. The fluorescent member 50 may contain other components such as a filler, a light stabilizer and a colorant, in addition to the fluorescent material and the sealing material. Examples of the filler include silica, barium titanate, titanium oxide and aluminum oxide.
The content of components other than the fluorescent material 70 and the sealing material in the fluorescent member 50 is preferably in a range of 0.01 parts by mass or more and 20 parts by mass or less, per 100 parts by mass of the sealing material.
The total content of the fluorescent material 70 in the fluorescent member 50 can be, for example, 5 parts by mass or more and 300 parts by mass or less, per 100 parts by mass of the sealing material. The total content is preferably 10 parts by mass or more and 250 parts by mass or less, more preferably 15 parts by mass or more and 230 parts by mass or less, and still more preferably 15 parts by mass or more and 200 parts by mass or less. When the total content of the fluorescent material 70 in the fluorescent member 50 is within the above range, the light emitted from the light emitting element 10 can be efficiently subjected to wavelength conversion in the fluorescent material 70.
### First Fluorescent Material
The first fluorescent material 71 is a fluorescent material that is excited by light from the light emitting element 10 and emits light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm. Examples of the first fluorescent material 71 include an Mn⁴⁺-activated fluorogermanate fluorescent material, an Eu²⁺-activated nitride fluorescent material, an Eu²⁺-activated alkaline earth sulfide fluorescent material, and an Mn⁴⁺-activated halide fluorescent material. One of those fluorescent materials may be used alone, or two or more of them may be used in combination. The first fluorescent material preferably contains an Eu²⁺-activated nitride fluorescent material and an Mn⁴⁺-activated fluorogermanate fluorescent material.
The Eu²⁺-activated nitride fluorescent material is preferably a fluorescent material that has a composition including at least one element selected from Sr and Ca, and Al and contains silicon nitride that is activated by Eu²⁺, or a fluorescent material that has a composition including at least one element selected from the group consisting of alkaline earth metal elements and at least one element selected from the group consisting of alkali metal elements and contains aluminum nitride that is activated by Eu²⁺.
The halide fluorescent material that is activated by Mn⁴⁺ is preferably a fluorescent material that has a composition including at least one element or ion selected from the group consisting of alkali metal elements and an ammonium ion (NH₄⁺) and at least one element selected from the group consisting of Group 4 elements and Group 14 elements, and contains a fluoride that is activated by Mn⁴⁺.
Examples of the first fluorescent material 71 specifically include fluorescent materials having any one composition of the following formulae (I) to (VI).
(i−j)MgO·(j/2)Sc₂O₃·kMgF₂·mCaF₂·(1−n)GeO₂·(n/2)Mt₂O₃:zMn⁴⁺ (I)
wherein Mt is at least one selected from the group consisting of Al, Ga, and In, and i, j, k, m, n, and z are numbers satisfying 2≦i≦4, 0≦j<0.5, 0<k<1.5, 0≦m<1.5, 0<n<0.5, and 0<z<0.05, respectively.
(Ca₁₋p₋qSrpEuq)AlSiN₃ (II)
wherein p and q are numbers satisfying 0≦p≦1.0, 0<q<1.0, and p+q<1.0.
MᵃvMᵇwMᶜfAl₃₋gSigNh (III)
wherein Mᵃ is at least one element selected from the group consisting of Ca, Sr, Ba, and Mg, Mᵇ is at least one element selected from the group consisting of Li, Na, and K, Mᶜ is at least one element selected from the group consisting of Eu, Ce, Tb, and Mn, and v, w, f, g, and h are numbers satisfying 0.80≦v≦1.05, 0.80≦w≦1.05, 0.001<f≦0.1, 0≦g≦0.5, and 3.0≦h≦5.0, respectively.
(Ca₁₋r₋s₋tSrrBasEut)₂Si₅N₈ (IV)
wherein r, s, and t are numbers satisfying 0≦r≦1.0, 0≦s≦1.0, 0<t<1.0, and r+s+t≦1.0.
(Ca,Sr)S:Eu (V)
A₂[M¹₁₋uMn⁴⁺uF₆] (VI)
wherein A is at least one selected from the group consisting of K, Li, Na, Rb, Cs, and NH₄⁺, M¹ is at least one element selected from the group consisting of Group 4 elements and Group 14 elements, and u is a number satisfying 0<u<0.2.
The content of the first fluorescent material 71 in the fluorescent member 50 is not particularly limited as long as the R/B ratio is within a range of 2.0 or more and 4.0 or less. The content of the first fluorescent material 71 in the fluorescent member 50 is, for example, 1 part by mass or more, preferably 5 parts by mass or more, and more preferably 8 parts by mass or more, and is preferably 200 parts by mass or less, more preferably 150 parts by mass or less, and still more preferably 100 parts by mass or less, per 100 parts by mass of the sealing material. When the content of the first fluorescent material 71 in the fluorescent member 50 is within the aforementioned range, the light emitted from the light emitting element 10 can be efficiently subjected to wavelength conversion, and light capable of promoting the growth of plants can be emitted from the light emitting device 100.
The first fluorescent material 71 preferably contains at least two fluorescent materials, in which case it preferably contains a fluorogermanate fluorescent material that is activated by Mn⁴⁺ (hereinafter referred to as the “MGF fluorescent material”) and a fluorescent material that has a composition including at least one element selected from Sr and Ca, and Al, and contains silicon nitride that is activated by Eu²⁺ (hereinafter referred to as the “CASN fluorescent material”).
In the case where the first fluorescent material 71 contains at least two fluorescent materials and those two are an MGF fluorescent material and a CASN fluorescent material, the compounding ratio thereof (MGF fluorescent material:CASN fluorescent material) is preferably in a range of 50:50 or more and 99:1 or less, more preferably in a range of 60:40 or more and 97:3 or less, and still more preferably in a range of 70:30 or more and 96:4 or less, in mass ratio. When the first fluorescent material contains these two fluorescent materials in a mass ratio within the aforementioned range, the light emitted from the light emitting element 10 can be efficiently subjected to wavelength conversion in the first fluorescent material 71. In addition, the R/B ratio can be adjusted to within a range of 2.0 or more and 4.0 or less, and the R/FR ratio can easily be adjusted to within a range of 0.7 or more and 13.0 or less. A worked reading of these ratio ranges is sketched below.
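Read as mass fractions, “50:50 or more and 99:1 or less” means that the MGF share of the MGF + CASN pair lies between 0.50 and 0.99. A small illustrative check follows; the helper name is ours, and only the 95:5 ratio comes from the Examples below.

```python
# Illustrative reading of the MGF:CASN mass-ratio ranges as MGF mass fractions.
def mgf_fraction(mgf_parts, casn_parts):
    return mgf_parts / (mgf_parts + casn_parts)

BROAD = (0.50, 0.99)      # 50:50 to 99:1 (preferable)
MIDDLE = (0.60, 0.97)     # 60:40 to 97:3 (more preferable)
NARROW = (0.70, 0.96)     # 70:30 to 96:4 (still more preferable)

f = mgf_fraction(95, 5)   # the Examples below use MGF:CASN = 95:5
for lo, hi in (BROAD, MIDDLE, NARROW):
    print(f"MGF fraction {f:.2f} in [{lo}, {hi}]: {lo <= f <= hi}")
# The 95:5 ratio falls inside all three ranges.
```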
### Second Fluorescent Material
The second fluorescent material 72 is a fluorescent material that is excited by the light from the light emitting element 10 and emits light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less.
The second fluorescent material 72 used in the light emitting device according to one embodiment of the present disclosure is a fluorescent material that contains a first element Ln containing at least one element selected from the group consisting of rare earth elements excluding Ce, a second element M containing at least one element selected from the group consisting of Al, Ga, and In, as well as Ce and Cr, and has a composition of an aluminate fluorescent material. When the molar ratio of the second element M is taken as 5, it is preferred that the molar ratio of Ce be the product of the value of a parameter x and 3, and the molar ratio of Cr be the product of the value of a parameter y and 3, where the value of the parameter x is in a range of exceeding 0.0002 and less than 0.50, and the value of the parameter y is in a range of exceeding 0.0001 and less than 0.05.
The second fluorescent material 72 is preferably a fluorescent material having the composition represented by the following formula (1):
(Ln₁₋ₓ₋yCeₓCry)₃M₅O₁₂ (1)
wherein Ln is at least one rare earth element selected from the group consisting of rare earth elements excluding Ce, M is at least one element selected from the group consisting of Al, Ga, and In, and x and y are numbers satisfying 0.0002<x<0.50 and 0.0001<y<0.05, respectively.
In this case, the second fluorescent material 72 has a composition constituting a garnet structure, and is therefore resistant to heat, light, and water. It has an absorption peak of its excitation spectrum in the vicinity of 420 nm or more and 470 nm or less and sufficiently absorbs the light from the light emitting element 10, which enhances the light emitting intensity of the second fluorescent material 72 and is therefore preferred. Furthermore, the second fluorescent material 72 is excited by light having a light emission peak wavelength in a range of 380 nm or more and 490 nm or less and emits light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less.
In the second fluorescent material 72, from the standpoint of stability of a crystal structure, Ln is preferably at least one rare earth element selected from the group consisting of Y, Gd, Lu, La, Tb, and Pr, and M is preferably Al or Ga.
In the second fluorescent material 72, the value of the parameter x is more preferably in a range of 0.0005 or more and 0.400 or less (0.0005≦x≦0.400), and still more preferably in a range of 0.001 or more and 0.350 or less (0.001≦x≦0.350).
In the second fluorescent material 72, the value of the parameter y is preferably in a range of exceeding 0.0005 and less than 0.040 (0.0005<y<0.040), and more preferably in a range of 0.001 or more and 0.026 or less (0.001≦y≦0.026).
The parameter x is the activation amount of Ce, and its value is in a range of exceeding 0.0002 and less than 0.50 (0.0002<x<0.50); the parameter y is the activation amount of Cr, and its value is in a range of exceeding 0.0001 and less than 0.05 (0.0001<y<0.05). Within these ranges, the activation amounts of Ce and Cr, which are the light emission centers contained in the crystal structure of the fluorescent material, are within optimum ranges: the decrease of light emission intensity due to too few light emission centers can be suppressed, the decrease of light emission intensity due to concentration quenching caused by too large an activation amount can be suppressed, and the light emission intensity can be enhanced.
### Production Method of Second Fluorescent Material
The second fluorescent material 72 can be produced, for example, by the following method.
A compound containing at least one rare earth element Ln selected from the group consisting of rare earth elements excluding Ce, a compound containing at least one element M selected from the group consisting of Al, Ga, and In, a compound containing Ce, and a compound containing Cr are mixed to obtain a raw material mixture. The compounds are weighed such that, when the total molar composition ratio of M is taken as 5 as the standard and the total molar composition ratio of Ln, Ce, and Cr is 3, the molar ratio of Ce is the product of 3 and the value of a parameter x, and the molar ratio of Cr is the product of 3 and the value of a parameter y, where the value of the parameter x is in a range of exceeding 0.0002 and less than 0.50 and the value of the parameter y is in a range of exceeding 0.0001 and less than 0.05. The raw material mixture is then heat-treated and subjected to classification and the like, thereby obtaining the second fluorescent material. A worked weighing example is sketched below.
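As a worked illustration of this weighing step, the following sketch (ours, not the patent's; molar masses are standard reference values) converts target parameters x and y into oxide masses for a 100 g batch with Ln = Y and M = Al. With x = 0.009 and y = 0.014, chosen to match the composition reported in the Examples below, it reproduces the weighed amounts given there to within rounding.

```python
# Stoichiometry sketch for (Y_{1-x-y}Ce_xCr_y)3Al5O12: oxide masses per 100 g.
M_Y2O3, M_CEO2, M_CR2O3, M_AL2O3 = 225.81, 172.11, 151.99, 101.96  # g/mol

x, y = 0.009, 0.014      # target Ce and Cr parameters
mol_Y = 3 * (1 - x - y)  # the Ln site totals 3 per formula unit
mol_Ce = 3 * x
mol_Cr = 3 * y
mol_Al = 5.0             # the molar standard of the composition

# Y2O3, Cr2O3, and Al2O3 carry two metal atoms per formula unit; CeO2 one.
masses = {
    "Y2O3": mol_Y / 2 * M_Y2O3,
    "CeO2": mol_Ce * M_CEO2,
    "Cr2O3": mol_Cr / 2 * M_CR2O3,
    "Al2O3": mol_Al / 2 * M_AL2O3,
}
total = sum(masses.values())
for oxide, m in masses.items():
    print(f"{oxide}: {100 * m / total:.2f} g per 100 g batch")
# -> about 55.74, 0.78, 0.54, and 42.94 g: the amounts weighed in the Examples
#    below (the 5.00 g of BaF2 flux is added on top of this mixture).
```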
### Compound Containing Rare Earth Element Ln
Examples of the compound containing the rare earth element Ln include oxides, hydroxides, nitrides, oxynitrides, fluorides, and chlorides that contain at least one rare earth element Ln selected from the group consisting of rare earth elements excluding Ce. Those compounds may be hydrates. A metal simple substance or an alloy containing a rare earth element may be used in place of at least a part of the compound containing a rare earth element. The compound containing a rare earth element is preferably a compound containing at least one rare earth element Ln selected from the group consisting of Y, Gd, Lu, La, Tb, and Pr. The compound containing a rare earth element may be used alone or as a combination of at least two such compounds.
The compound containing a rare earth element is preferably an oxide, which, as compared with other materials, does not contain elements other than those of the target composition. Specific examples of the oxide include Y₂O₃, Gd₂O₃, Lu₂O₃, La₂O₃, Tb₄O₇, and Pr₆O₁₁.
### Compound Containing M
Examples of the compound containing at least one element M selected from the group consisting of Al, Ga, and In include oxides, hydroxides, nitrides, oxynitrides, fluorides, and chlorides that contain Al, Ga, or In. Those compounds may be hydrates. Furthermore, an Al, Ga, or In metal simple substance or an Al, Ga, or In alloy may be used in place of at least a part of the compound. The compound containing Al, Ga, or In may be used alone or as a combination of two or more thereof. The compound containing at least one element selected from the group consisting of Al, Ga, and In is preferably an oxide. This is because an oxide, as compared with other materials, does not contain elements other than those of the target composition, and a fluorescent material having the target composition is therefore easy to obtain. When a compound containing elements other than those of the target composition is used, residual impurity elements are sometimes present in the obtained fluorescent material. The residual impurity elements become killer factors in light emission, possibly leading to a remarkable decrease of light emission intensity.
Examples of the compound containing Al, Ga, or In specifically include Al₂O₃, Ga₂O₃, and In₂O₃.
### Compound Containing Ce and Compound Containing Cr
Examples of the compound containing Ce or the compound containing Cr include oxides, hydroxides, nitrides, fluorides, and chlorides that contain cerium (Ce) or chromium (Cr). Those compounds may be hydrates. A Ce metal simple substance, a Ce alloy, a Cr metal simple substance, or a Cr alloy may be used in place of a part of the compound. The compound containing Ce or the compound containing Cr may be used alone or as a combination of two or more thereof. The compound containing Ce or the compound containing Cr is preferably an oxide. This is because an oxide, as compared with other materials, does not contain elements other than those of the target composition, and a fluorescent material having the target composition is therefore easy to obtain. When a compound containing elements other than those of the target composition is used, residual impurity elements are sometimes present in the obtained fluorescent material. The residual impurity elements become killer factors in light emission, possibly leading to a remarkable decrease of light emission intensity.
A specific example of the compound containing Ce is CeO₂, and a specific example of the compound containing Cr is Cr₂O₃.
The raw material mixture may contain a flux such as a halide, as necessary. When a flux is contained in the raw material mixture, the reaction of the raw materials with each other is accelerated, and the solid phase reaction proceeds more uniformly. It is considered that the temperature for heat-treating the raw material mixture is almost the same as, or higher than, the formation temperature of a liquid phase of the halide used as the flux, and, as a result, the reaction is accelerated.
Examples of the halide include fluorides and chlorides of rare earth metals, alkaline earth metals, and alkali metals. When a halide of a rare earth metal is used as the flux, the flux can be added as a compound so as to achieve a target composition. Specific examples of the flux include BaF₂ and CaF₂. Of those, BaF₂ is preferably used. When barium fluoride is used as the flux, the garnet crystal structure is stabilized and a composition having a garnet crystal structure is easy to form.
When the raw material mixture contains a flux, the content of the flux is preferably 20 mass % or less, more preferably 10 mass % or less, and preferably 0.1 mass % or more, on the basis of the raw material mixture (100 mass %). When the flux content is within the aforementioned range, the problem that a garnet crystal structure is difficult to form because particle growth is insufficient with too small an amount of flux is prevented, and the problem that a garnet crystal structure is difficult to form because of too large an amount of flux is also prevented.
The raw material mixture is prepared, for example, as follows. Each raw material is weighed so as to achieve the compounding ratio. Thereafter, the raw materials are subjected to mixed grinding using a dry grinding machine such as a ball mill, to mixed grinding using a mortar and a pestle, to mixing using a mixing machine such as a ribbon blender, or to mixed grinding using both a dry grinding machine and a mixing machine. As necessary, the raw material mixture may be classified using a wet separator such as a settling tank generally used industrially, or a dry classifier such as a cyclone. The mixing may be conducted dry, or wet by adding a solvent. Dry mixing is preferred, because it can shorten the processing time as compared with wet mixing, which leads to improved productivity.
Alternatively, the mixed raw materials may be dissolved in an acid, the resulting solution co-precipitated with oxalic acid, and the product formed by the co-precipitation baked to obtain an oxide, which may be used as the raw material mixture.
The raw material mixture can be heat-treated by placing it in a crucible or a boat made of a carbon material (such as graphite), boron nitride (BN), aluminum oxide (alumina), tungsten (W), or molybdenum (Mo).
From the standpoint of stability of a crystal structure, the temperature for heat-treating the raw material mixture is preferably in a range of 1,000° C. or higher and 2,100° C. or lower, more preferably in a range of 1,100° C. or higher and 2,000° C. or lower, still more preferably in a range of 1,200° C. or higher and 1,900° C. or lower, and particularly preferably in a range of 1,300° C. or higher and 1,800° C. or lower. The heat treatment can use an electric furnace or a gas furnace.
The heat treatment time varies depending on, for example, the temperature rising rate and the heat treatment atmosphere. The heat treatment time after reaching the heat treatment temperature is preferably 1 hour or more, more preferably 2 hours or more, and still more preferably 3 hours or more, and is preferably 20 hours or less, more preferably 18 hours or less, and still more preferably 15 hours or less.
The atmosphere for heat-treating the raw material mixture is an inert atmosphere such as argon or nitrogen, a reducing atmosphere containing hydrogen, or an oxidizing atmosphere such as air. The raw material mixture may be subjected to a two-stage heat treatment: a first heat treatment in air or a weakly reducing atmosphere from the standpoint of, for example, preventing blackening, and a second heat treatment in a reducing atmosphere from the standpoint of enhancing the absorption efficiency of light having a specific light emission peak wavelength. For a fluorescent material constituting a garnet structure, the reactivity of the raw material mixture is improved in an atmosphere having high reducing power, such as a reducing atmosphere, so the heat treatment can be conducted under atmospheric pressure without pressurizing. For example, the heat treatment can be conducted by the method disclosed in Japanese Patent Application No. 2014-260421.
The fluorescent material obtained may be subjected to post-treatment steps such as a solid-liquid separation by a method such as cleaning or filtration, drying by a method such as vacuum drying, and classification by dry sieving. After those post-treatment steps, a fluorescent material having a desired average particle diameter is obtained.
### Other Fluorescent Materials
The light emitting device 100 may contain other kinds of fluorescent materials, in addition to the first fluorescent material 71.
Examples of other kinds of fluorescent materials include a green fluorescent material emitting green light by absorbing a part of the light emitted from the light emitting element 10, a yellow fluorescent material emitting yellow light, and a fluorescent material having a light emission peak wavelength in a wavelength range exceeding 680 nm.
Examples of the green fluorescent material specifically include fluorescent materials having any one of compositions represented by the following formulae (i) to (iii).
M¹¹₈MgSi₄O₁₆X¹¹₂:Eu (i)
wherein M¹¹ is at least one selected from the group consisting of Ca, Sr, Ba, and Zn, and X¹¹ is at least one selected from the group consisting of F, Cl, Br, and I.
Si₆₋bAlbObN₈₋b:Eu (ii)
wherein b satisfies 0<b<4.2.
M¹³Ga₂S₄:Eu (iii)
wherein M¹³ is at least one selected from the group consisting of Mg, Ca, Sr, and Ba.
Examples of the yellow fluorescent material specifically include fluorescent materials having any one of the compositions represented by the following formulae (iv) and (v).
M¹⁴c/dSi₁₂₋₍c₊d₎Al₍c₊d₎OdN₍₁₆₋d₎:Eu (iv)
wherein M¹⁴ is at least one selected from the group consisting of Sr, Ca, Li, and Y. A value of a parameter c is in a range of 0.5 to 5, a value of a parameter d is in a range of 0 to 2.5, and the parameter d is an electrical charge of M¹⁴.
M¹⁵₃Al₅O₁₂:Ce (v)
wherein M¹⁵ is at least one selected from the group consisting of Y and Lu.
Examples of the fluorescent material having a light emission peak wavelength in a wavelength range exceeding 680 nm specifically include fluorescent materials having any one of the compositions represented by the following formulae (vi) to (x).
Al₂O₃:Cr (vi)
CaYAlO₄:Mn (vii)
LiAlO₂:Fe (viii)
CdS:Ag (ix)
GdAlO₃:Cr (x)
The light emitting device 100 can be utilized as a light emitting device for plant cultivation that can activate the photosynthesis of plants and promote their growth so that they have favorable form and weight.
### Plant Cultivation Method
The plant cultivation method of one embodiment of the present disclosure is a method for cultivating plants that includes irradiating plants with light emitted from the light emitting device 100. In the plant cultivation method, plants can be irradiated with light from the light emitting device 100 in plant factories that are completely isolated from the external environment and allow artificial control. The kind of plants is not particularly limited. However, the light emitting device 100 of one embodiment of the present disclosure can activate photosynthesis and promote growth such that stems, leaves, roots, and fruits have favorable form and weight, and it is therefore preferably applied to the cultivation of vegetables and flowers that contain much chlorophyll performing photosynthesis. Examples of the vegetables include lettuces such as garden lettuce, curl lettuce, lamb's lettuce, Romaine lettuce, endive, Lollo Rosso, rucola lettuce, and frill lettuce; Asteraceae vegetables such as "shungiku" (Chrysanthemum coronarium); morning glory vegetables such as spinach; Rosaceae vegetables such as strawberry; and flowers such as chrysanthemum, gerbera, rose, and tulip.
## EXAMPLES
The present invention is described more specifically below by way of Examples and Comparative Examples.
## Examples 1 to 5
### First Fluorescent Material
Two fluorescent materials were used as the first fluorescent material 71: a fluorogermanate fluorescent material activated by Mn⁴⁺ having a light emission peak at 660 nm, and a fluorescent material containing silicon nitride activated by Eu²⁺ having a light emission peak at 660 nm. In the first fluorescent material 71, the mass ratio of the MGF fluorescent material to the CASN fluorescent material (MGF:CASN) was 95:5.
### Second Fluorescent Material
A fluorescent material obtained by the following production method was used as the second fluorescent material 72.
55.73 g of Y₂O₃ (Y₂O₃ content: 100 mass %), 0.78 g of CeO₂ (CeO₂ content: 100 mass %), 0.54 g of Cr₂O₃ (Cr₂O₃ content: 100 mass %), and 42.95 g of Al₂O₃ (Al₂O₃ content: 100 mass %) were weighed as raw materials, and 5.00 g of BaF₂ was added to the mixture as a flux. The resulting raw materials were dry mixed for 1 hour in a ball mill. Thus, a raw material mixture was obtained.
The raw material mixture obtained was placed in an alumina crucible, and a lid was put on the crucible. The raw material mixture was heat-treated at 1,500° C. for 10 hours in a reducing atmosphere of 3 vol % H₂ and 97 vol % N₂. Thus, a calcined product was obtained. The calcined product was passed through a dry sieve to obtain the second fluorescent material. The second fluorescent material obtained was subjected to composition analysis by inductively coupled plasma atomic emission spectrometry (ICP-AES) using an inductively coupled plasma emission analyzer (manufactured by Perkin Elmer). The composition of the second fluorescent material obtained was (Y₀.₉₇₇Ce₀.₀₀₉Cr₀.₀₁₄)₃Al₅O₁₂ (hereinafter referred to as "YAG: Ce, Cr").
### Light Emitting Device
A nitride semiconductor light emitting element having a light emission peak wavelength of 450 nm was used as the light emitting element 10 in the light emitting device 100.
A silicone resin was used as the sealing material constituting the fluorescent member 50. The first fluorescent material 71 and/or the second fluorescent material 72 was added to 100 parts by mass of the silicone resin in the compounding ratio (parts by mass) shown in Table 1, and 15 parts by mass of a silica filler was further added, followed by mixing and dispersing. The resulting mixture was degassed to obtain a resin composition constituting the fluorescent member. In each of the resin compositions of Examples 1 to 5, the compounding ratio of the first fluorescent material 71 and the second fluorescent material 72 was adjusted as shown in Table 1, and those materials were compounded such that the R/B ratio was within a range of 2.0 or more and 2.4 or less and the R/FR ratio was within a range of 1.4 or more and 6.0 or less.
The resin composition was poured onto the light emitting element 10 in the depression of the molded article 40 to fill the depression, and heated at 150° C. for 4 hours to cure the resin composition, thereby forming the fluorescent member 50. Thus, the light emitting device 100 shown in FIG. 1 was produced in each of Examples 1 to 5.
## Comparative Example 1
A light emitting device X including a semiconductor light emitting element having a light emission peak wavelength of 450 nm and a light emitting device Y including a semiconductor light emitting element having a light emission peak wavelength of 660 nm were used, and the R/B ratio was adjusted to 2.5.
### Evaluation
### Photon Flux Density
The photon flux densities of the light emitted from the light emitting device 100 used in Examples 1 to 5 and from the light emitting devices X and Y used in Comparative Example 1 were measured using a photon measuring device (LI-250A, manufactured by LI-COR). The photon flux density B, the photon flux density R, and the photon flux density FR of the light emitted from the light emitting devices used in each of the Examples and the Comparative Example, the R/B ratio, and the R/FR ratio are shown in Table 1. FIG. 2 shows spectra of the relationship between wavelength and relative photon flux density for the light emitting devices used in each Example and the Comparative Example.
### Plant Cultivation Test
The plant cultivation test was conducted in two stages: a "growth period using an RGB light source (hereinafter referred to as the first growth period)" and a "growth period using a light source for plant growth (hereinafter referred to as the second growth period)", the latter using a light emitting device according to an embodiment of the present disclosure as the light source.
The first growth period uses an RGB light source; a generally known RGB type LED can be used as the RGB light source. The reason for irradiating plants with an RGB type LED in the initial stage of growth is to equalize the stem length and the number and size of true leaves at that stage, thereby clarifying the influence of differences in light quality in the second growth period.
The first growth period is preferably about 2 weeks. In the case where the first growth period is shorter than 2 weeks, it is necessary to confirm that two true leaves have developed and that the root has reached a length that can reliably absorb water in the second growth period. In the case where the first growth period exceeds 2 weeks, variation in the second growth period tends to increase. The variation is easier to control with an RGB light source, under which stem extension is inhibited, than with a fluorescent lamp, under which stem extension readily occurs.
After completion of the first growth period, the second growth period immediately follows. In the second growth period, it is preferred that plants be irradiated with light emitted from a light emitting device according to an embodiment of the present disclosure. Photosynthesis of the plants is thereby activated, and their growth can be promoted so that they have favorable form and weight.
The total growth period, combining the first growth period and the second growth period, is about 4 to 6 weeks, and it is preferred that shippable plants be obtained within that period.
The cultivation test was specifically conducted by the following method.
Romaine lettuce (green romaine, produced by Nakahara Seed Co., Ltd.) was used as the cultivation plant.
### First Growth Period
Urethane sponges (salad urethane, manufactured by M Hydroponic Research Co., Ltd.) seeded with Romaine lettuce were placed side by side on a plastic tray and irradiated with light from an RGB-LED light source (manufactured by Shibasaki Inc.) to cultivate the plants. The plants were cultivated for 16 days under the conditions of room temperature: 22 to 23° C., humidity: 50 to 60%, photon flux density from the light emitting device: 100 μmol·m⁻²·s⁻¹, and daytime hours: 16 hours/day. Only water was given until germination, and after germination (about 4 days later), a solution obtained by mixing Otsuka House #1 (manufactured by Otsuka Chemical Co., Ltd.) and Otsuka House #2 (manufactured by Otsuka Chemical Co., Ltd.) in a mass ratio of 3:2 and dissolving the mixture in water was used as a nutrient solution (Otsuka Formulation A). The conductivity of the nutrient solution was 1.5 mS·cm⁻¹.
### Second Growth Period
After the first growth period, the plants were irradiated with light from the light emitting devices of Examples 1 to 5 and Comparative Example 1, and were subjected to hydroponics.
The plants were cultivated for 19 days under the conditions of room temperature: 22 to 24° C., humidity: 60 to 70%, CO₂ concentration: 600 to 700 ppm, photon flux density from the light emitting device: 125 μmol·m⁻²·s⁻¹, and daytime hours: 16 hours/day. Otsuka Formulation A was used as the nutrient solution. The conductivity of the nutrient solution was 1.5 mS·cm⁻¹. The R/B and R/FR ratios of the light used for plant irradiation from each light emitting device in the second growth period are shown in Table 1.
### Measurement of Fresh Weight (Edible Part)
The plants after cultivation were harvested, and the wet weights of the above-ground part and the root were measured. The wet weight of the above-ground part of each of the 6 cultivated plants grown hydroponically under light from the light emitting devices of Examples 1 to 5 and Comparative Example 1 was measured as the fresh weight (edible part) (g). The results obtained are shown in Table 1 and FIG. 3.
### Measurement of Nitrate Nitrogen Content
The edible part (about 20 g) of each cultivated plant, from which the basal portion of about 5 cm had been removed, was frozen with liquid nitrogen and crushed with a juice mixer (laboratory mixer LM-PLUS, manufactured by Osaka Chemical Co., Ltd.) for 1 minute. The resulting liquid was filtered through Miracloth (manufactured by Millipore), and the filtrate was centrifuged at 4° C. and 15,000 rpm for 5 minutes. The nitrate nitrogen content (mg/100 g) of the cultivated plant in the supernatant was measured using a portable reflection photometer system (product name: RQ flex system, manufactured by Merck) and test paper (product name: Reflectoquant (registered trademark), manufactured by Kanto Chemical Co., Inc.). The results are shown in Table 1 and FIG. 4.
TABLE 1

| | First fluorescent material (MGF:CASN = 95:5) (parts by mass) | Second fluorescent material (YAG: Ce, Cr) (parts by mass) | Photon flux density B (μmol·m⁻²·s⁻¹) | Photon flux density R (μmol·m⁻²·s⁻¹) | Photon flux density FR (μmol·m⁻²·s⁻¹) | Ratio of photon flux densities R/B | Ratio of photon flux densities R/FR | Fresh weight (edible part) (g) | Nitrate nitrogen content (mg/100 g) |
|---|---|---|---|---|---|---|---|---|---|
| Comparative Example 1 | — | — | 35.5 | 88.8 | 0.0 | 2.5 | — | 26.2 | 361.2 |
| Example 1 | 60 | — | 31.5 | 74.9 | 12.6 | 2.4 | 6.0 | 35.4 | 430.8 |
| Example 2 | 50 | 10 | 28.5 | 67.1 | 21.7 | 2.4 | 3.1 | 34.0 | 450.0 |
| Example 3 | 40 | 20 | 25.8 | 62.0 | 28.7 | 2.4 | 2.2 | 33.8 | 452.4 |
| Example 4 | 30 | 30 | 26.8 | 54.7 | 33.5 | 2.0 | 1.6 | 33.8 | 345.0 |
| Example 5 | 25 | 39 | 23.4 | 52.8 | 38.1 | 2.3 | 1.4 | 28.8 | 307.2 |
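As a quick arithmetic cross-check (ours, not the patent's), the ratio columns of Table 1 can be recomputed from the photon flux density columns. Every row agrees with the reported ratios to within the rounding of the tabulated values; for instance, Example 1 recomputes to R/FR = 5.9 against the tabulated 6.0, consistent with B, R, and FR themselves being rounded.

```python
# Recompute the R/B and R/FR columns of Table 1 from the B, R, and FR columns.
rows = {  # name: (B, R, FR, reported R/B, reported R/FR)
    "Comparative Example 1": (35.5, 88.8, 0.0, 2.5, None),
    "Example 1": (31.5, 74.9, 12.6, 2.4, 6.0),
    "Example 2": (28.5, 67.1, 21.7, 2.4, 3.1),
    "Example 3": (25.8, 62.0, 28.7, 2.4, 2.2),
    "Example 4": (26.8, 54.7, 33.5, 2.0, 1.6),
    "Example 5": (23.4, 52.8, 38.1, 2.3, 1.4),
}
for name, (b, r, fr, rb, rfr) in rows.items():
    rb_calc = round(r / b, 1)
    rfr_calc = round(r / fr, 1) if fr else None  # FR = 0.0: ratio undefined
    print(name, "R/B:", rb_calc, "vs", rb, "| R/FR:", rfr_calc, "vs", rfr)
```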
As shown in Table 1, for the light emitting devices in Examples 1 to 5, the R/B ratios are within a range of 2.0 or more and 4.0 or less and the R/FR ratios are within a range of 0.7 or more and 13.0 or less. The Romaine lettuce cultivated under light from the light emitting devices of Examples 1 to 5 showed increased fresh weight (edible part) as compared with the Romaine lettuce cultivated under light from the light emitting device used in Comparative Example 1. The growth of the plants was therefore promoted, as shown in Table 1 and FIG. 3.
As shown in FIG. 2, the light emitting device 100 in Example 1 had at least one maximum value of the relative photon flux density in a range of 380 nm or more and 490 nm or less and in a range of 580 nm or more and less than 680 nm. The light emitting devices 100 in Examples 2 to 5 had at least one maximum value of relative photon flux density in a range of 380 nm or more and 490 nm or less, in a range of 580 nm or more and less than 680 nm and in a range of 680 nm or more and 800 nm or less, respectively. The maximum value of the relative photon flux density in a range of 380 nm or more and 490 nm or less is due to the light emission of the light emitting element having light emission peak wavelength in a range of 380 nm or more and 490 nm or less, the maximum value of the relative photon flux density in a range of 580 nm or more and less than 680 nm is due to the first fluorescent material emitting the light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm, and the maximum value of the relative photon flux density in a range of 680 nm or more and 800 nm or less is due to the second fluorescent material emitting the light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less.
As shown in Table 1, for the light emitting devices 100 of Examples 4 and 5, the R/B ratios are 2.0 and 2.3, respectively, and the R/FR ratios are 1.6 and 1.4, respectively; that is, the R/B ratios are within a range of 2.0 or more and 4.0 or less, and the R/FR ratios are within a range of 0.7 or more and 2.0 or less. In the Romaine lettuces cultivated under light from these light emitting devices 100, the nitrate nitrogen content was decreased as compared with Comparative Example 1. Plants in which the nitrate nitrogen content, which may adversely affect human health, was reduced to a range that does not inhibit their growth could thus be cultivated, as shown in Table 1 and FIG. 4.
The light emitting device according to an embodiment of the present disclosure can be utilized as a light emitting device for plant cultivation that can activate photosynthesis and is capable of promoting growth of plants. Furthermore, the plant cultivation method, in which plants are irradiated with the light emitted from the light emitting device according to an embodiment of the present disclosure, can cultivate plants that can be harvested in a relatively short period of time and can be used in a plant factory.
Although the present disclosure has been described with reference to several exemplary embodiments, it shall be understood that the words that have been used are words of description and illustration, rather than words of limitation. Changes may be made within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of the disclosure in its aspects. Although the disclosure has been described with reference to particular examples, means, and embodiments, the disclosure is not intended to be limited to the particulars disclosed; rather, the disclosure extends to all functionally equivalent structures, methods, and uses that are within the scope of the appended claims.
One or more examples or embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “disclosure” merely for convenience and without intending to voluntarily limit the scope of this application to any particular disclosure or inventive concept. Moreover, although specific examples and embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific examples or embodiments shown. This disclosure may be intended to cover any and all subsequent adaptations or variations of various examples and embodiments. Combinations of the above examples and embodiments, and other examples and embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.
In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure may be not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter.
The above disclosed subject matter shall be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments which fall within the true spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure may be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
## CLAIMS
1. A light emitting device comprising: a light emitting element having a light emission peak wavelength in a range of 380 nm or more and 490 nm or less; and a fluorescent material that is excited by light from the light emitting element and emits light having at least one light emission peak wavelength in a range of 580 nm or more and less than 680 nm, wherein the light emitting device emits light having a ratio R/B of a photon flux density R to a photon flux density B within a range of 2.0 or more and 4.0 or less, and a ratio R/FR of the photon flux density R to a photon flux density FR within a range of 0.7 or more and 13.0 or less, wherein the photon flux density R is in a wavelength range of 620 nm or more and less than 700 nm, the photon flux density B is in a wavelength range of 380 nm or more and 490 nm or less, and the photon flux density FR is in a wavelength range of 700 nm or more and 780 nm or less.
2. The light emitting device according to claim 1, further comprising another fluorescent material that is excited by light from the light emitting element and emits light having at least one light emission peak wavelength in a range of 680 nm or more and 800 nm or less, wherein the ratio R/FR is within a range of 0.7 or more and 5.0 or less.
3. The light emitting device according to claim 2, wherein the ratio R/FR is within a range of 0.7 or more and 2.0 or less.
4. The light emitting device according to claim 2, wherein the another fluorescent material contains a first element Ln containing at least one element selected from the group consisting of rare earth elements excluding Ce, a second element M containing at least one element selected from the group consisting of Al, Ga and In, Ce, and Cr, and has a composition of an aluminate fluorescent material, and when a molar ratio of the second element M is taken as 5, a molar ratio of Ce is a product of a value of a parameter x and 3, and a molar ratio of Cr is a product of a value of a parameter y and 3, the value of the parameter x being in a range of exceeding 0.0002 and less than 0.50, and the value of the parameter y being in a range of exceeding 0.0001 and less than 0.05.
5. The light emitting device according to claim 2, wherein the another fluorescent material has the composition represented by the following formula (I): (Ln₁₋ₓ₋yCeₓCry)₃M₅O₁₂ (I) wherein Ln is at least one rare earth element selected from the group consisting of rare earth elements excluding Ce, M is at least one element selected from the group consisting of Al, Ga, and In, and x and y are numbers satisfying 0.0002<x<0.50 and 0.0001<y<0.05.
6. The light emitting device according to claim 2, the light emitting device being used in plant cultivation.
7. The light emitting device according to claim 1, wherein the fluorescent material is at least one selected from the group consisting of: a fluorogermanate fluorescent material that is activated by Mn⁴⁺, a fluorescent material that has a composition containing at least one element selected from Sr and Ca, and Al, and contains silicon nitride that is activated by Eu²⁺, a fluorescent material that has a composition containing at least one element selected from the group consisting of alkaline earth metal elements and at least one element selected from the group consisting of alkali metal elements, and contains aluminum nitride that is activated by Eu²⁺, a fluorescent material containing a sulfide of Ca or Sr that is activated by Eu²⁺, and a fluorescent material that has a composition containing at least one element or ion selected from the group consisting of alkali metal elements, and an ammonium ion (NH₄⁺), and at least one element selected from the group consisting of Group 4 elements and Group 14 elements, and contains a fluoride that is activated by Mn⁴⁺.
8. The light emitting device according to claim 1, wherein the fluorescent material contains: a fluorogermanate fluorescent material that is activated by Mn⁴⁺, and a fluorescent material that has a composition containing at least one element selected from Sr and Ca, and Al, and contains silicon nitride that is activated by Eu²⁺, wherein the compounding ratio between the fluorogermanate fluorescent material and the fluorescent material containing silicon nitride (fluorogermanate fluorescent material:fluorescent material containing silicon nitride) is in a range of 50:50 or more and 99:1 or less.
9. The light emitting device according to claim 1, the light emitting device being used in plant cultivation.
10. A plant cultivation method comprising irradiating plants with light emitted from the light emitting device according to claim 1.
11. A plant cultivation method comprising irradiating plants with light emitted from the light emitting device according to claim 2.

View File

@ -0,0 +1,79 @@
item-0 at level 0: unspecified: group _root_
item-1 at level 1: title: SYSTEM FOR CONTROLLING THE OPERATION OF AN ACTUATOR MOUNTED ON A SEED PLANTING IMPLEMENT
item-2 at level 2: section_header: ABSTRACT
item-3 at level 3: paragraph: In one aspect, a system for controlling an operation of an actuator mounted on a seed planting implement may include an actuator configured to adjust a position of a row unit of the seed planting implement relative to a toolbar of the seed planting implement. The system may also include a flow restrictor fluidly coupled to a fluid chamber of the actuator, with the flow restrictor being configured to reduce a rate at which fluid is permitted to exit the fluid chamber in a manner that provides damping to the row unit. Furthermore, the system may include a valve fluidly coupled to the flow restrictor in a parallel relationship such that the valve is configured to permit the fluid exiting the fluid chamber to flow through the flow restrictor and the fluid entering the fluid chamber to bypass the flow restrictor.
item-4 at level 2: section_header: FIELD
item-5 at level 3: paragraph: The present disclosure generally relates to seed planting implements and, more particularly, to systems for controlling the operation of an actuator mounted on a seed planting implement in a manner that provides damping to one or more components of the seed planting implement.
item-6 at level 2: section_header: BACKGROUND
item-7 at level 3: paragraph: Modern farming practices strive to increase yields of agricultural fields. In this respect, seed planting implements are towed behind a tractor or other work vehicle to deposit seeds in a field. For example, seed planting implements typically include one or more ground engaging tools or openers that form a furrow or trench in the soil. One or more dispensing devices of the seed planting implement may, in turn, deposit seeds into the furrow(s). After deposition of the seeds, a packer wheel may pack the soil on top of the deposited seeds.
item-8 at level 3: paragraph: In certain instances, the packer wheel may also control the penetration depth of the furrow. In this regard, the position of the packer wheel may be moved vertically relative to the associated opener(s) to adjust the depth of the furrow. Additionally, the seed planting implement includes an actuator configured to exert a downward force on the opener(s) to ensure that the opener(s) is able to penetrate the soil to the depth set by the packer wheel. However, the seed planting implement may bounce or chatter when traveling at high speeds and/or when the opener(s) encounters hard or compacted soil. As such, operators generally operate the seed planting implement with the actuator exerting more downward force on the opener(s) than is necessary in order to prevent such bouncing or chatter. Operation of the seed planting implement with excessive down pressure applied to the opener(s), however, reduces the overall stability of the seed planting implement.
item-9 at level 3: paragraph: Accordingly, an improved system for controlling the operation of an actuator mounted on a seed planting implement to enhance the overall operation of the implement would be welcomed in the technology.
item-10 at level 2: section_header: BRIEF DESCRIPTION
item-11 at level 3: paragraph: Aspects and advantages of the technology will be set forth in part in the following description, or may be obvious from the description, or may be learned through practice of the technology.
item-12 at level 3: paragraph: In one aspect, the present subject matter is directed to a system for controlling an operation of an actuator mounted on a seed planting implement. The system may include a toolbar and a row unit adjustably mounted on the toolbar. The system may also include a fluid-driven actuator configured to adjust a position of the row unit relative to the toolbar, with the fluid-driven actuator defining first and second fluid chambers. Furthermore, the system may include a flow restrictor fluidly coupled to the first fluid chamber, with the flow restrictor being configured to reduce a rate at which fluid is permitted to exit the first fluid chamber in a manner that provides viscous damping to the row unit. Additionally, the system may include a valve fluidly coupled to the first fluid chamber. The valve may further be fluidly coupled to the flow restrictor in a parallel relationship such that the valve is configured to permit the fluid exiting the first fluid chamber to flow through the flow restrictor and the fluid entering the first fluid chamber to bypass the flow restrictor.
item-13 at level 3: paragraph: In another aspect, the present subject matter is directed to a seed planting implement including a toolbar and a plurality of row units adjustably coupled to the toolbar. Each row unit may include a ground engaging tool configured to form a furrow in the soil. The seed planting implement may also include a plurality of fluid-driven actuators, with each fluid-driven actuator being coupled between the toolbar and a corresponding row unit of the plurality of row units. As such, each fluid-driven actuator may be configured to adjust a position of the corresponding row unit relative to the toolbar. Moreover, each fluid-driven actuator may define first and second fluid chambers. Furthermore, the seed planting implement may include a flow restrictor fluidly coupled to the first fluid chamber of a first fluid-driven actuator of the plurality of fluid-driven actuators. The flow restrictor may be configured to reduce a rate at which fluid is permitted to exit the first fluid chamber of the first fluid-driven actuator in a manner that provides viscous damping to the corresponding row unit. Additionally, the seed planting implement may include a valve fluidly coupled to the first fluid chamber of the first fluid-driven actuator. The valve further may be fluidly coupled to the flow restrictor in a parallel relationship such that the valve is configured to permit the fluid exiting the first fluid chamber to flow through the flow restrictor and the fluid entering the first fluid chamber to bypass the flow restrictor.
item-14 at level 3: paragraph: In a further aspect, the present subject matter is directed to a system for providing damping to a row unit of a seed planting implement. The system may include a toolbar, a row unit adjustably mounted on the toolbar, and a fluid-driven actuator configured to adjust a position of the row unit relative to the toolbar. As such, the fluid-driven actuator may define a fluid chamber. The system may also include a flow restrictor fluidly coupled to the fluid chamber. The flow restrictor may define an adjustable throat configured to reduce a rate at which fluid is permitted to exit the fluid chamber. In this regard, the throat may be adjustable between a first size configured to provide a first damping rate to the row unit and a second size configured to provide a second damping rate to the row unit, with the first and second damping rates being different.
item-15 at level 3: paragraph: These and other features, aspects and advantages of the present technology will become better understood with reference to the following description and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the technology and, together with the description, serve to explain the principles of the technology.
item-16 at level 2: section_header: BRIEF DESCRIPTION OF THE DRAWINGS
item-17 at level 3: paragraph: A full and enabling disclosure of the present technology, including the best mode thereof, directed to one of ordinary skill in the art, is set forth in the specification, which makes reference to the appended figures, in which:
item-18 at level 3: paragraph: FIG. 1 illustrates a perspective view of one embodiment of a seed planting implement in accordance with aspects of the present subject matter;
item-19 at level 3: paragraph: FIG. 2 illustrates a side view of one embodiment of a row unit suitable for use with a seed planting implement in accordance with aspects of the present subject matter;
item-20 at level 3: paragraph: FIG. 3 illustrates a schematic view of one embodiment of a system for controlling the operation of an actuator mounted on a seed planting implement in accordance with aspects of the present subject matter;
item-21 at level 3: paragraph: FIG. 4 illustrates a cross-sectional view of one embodiment of a flow restrictor suitable for use in the system shown in FIG. 3, particularly illustrating the flow restrictor defining a throat having a fixed size in accordance with aspects of the present subject matter;
item-22 at level 3: paragraph: FIG. 5 illustrates a cross-sectional view of another embodiment of a flow restrictor suitable for use in the system shown in FIG. 3, particularly illustrating the flow restrictor defining a throat having an adjustable size in accordance with aspects of the present subject matter;
item-23 at level 3: paragraph: FIG. 6 illustrates a simplified cross-sectional view of the flow restrictor shown in FIG. 5, particularly illustrating the throat having a first size configured to provide a first damping rate in accordance with aspects of the present subject matter;
item-24 at level 3: paragraph: FIG. 7 illustrates a simplified cross-sectional view of the flow restrictor shown in FIG. 5, particularly illustrating the throat having a second size configured to provide a second damping rate in accordance with aspects of the present subject matter;
item-25 at level 3: paragraph: FIG. 8 illustrates a cross-sectional view of another embodiment of a system for controlling the operation of an actuator mounted on a seed planting implement in accordance with aspects of the present subject matter, particularly illustrating the system including a fluidly actuated check valve; and
item-26 at level 3: paragraph: FIG. 9 illustrates a cross-sectional view of a further embodiment of a system for controlling the operation of an actuator mounted on a seed planting implement in accordance with aspects of the present subject matter, particularly illustrating the system including an electrically actuated check valve.
item-27 at level 3: paragraph: Repeat use of reference characters in the present specification and drawings is intended to represent the same or analogous features or elements of the present technology.
item-28 at level 2: section_header: DETAILED DESCRIPTION
item-29 at level 3: paragraph: Reference now will be made in detail to embodiments of the invention, one or more examples of which are illustrated in the drawings. Each example is provided by way of explanation of the invention, not limitation of the invention. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present invention covers such modifications and variations as come within the scope of the appended claims and their equivalents.
item-30 at level 3: paragraph: In general, the present subject matter is directed to systems for controlling the operation of an actuator mounted on a seed planting implement. Specifically, the disclosed systems may be configured to control the operation of the actuator in a manner that provides damping to one or more components of the seed planting implement. For example, in several embodiments, the seed planting implement may include a toolbar and one or more row units adjustably coupled to the toolbar. One or more fluid-driven actuators of the seed planting implement may be configured to control and/or adjust the position of the row unit(s) relative to the toolbar. Furthermore, a flow restrictor may be fluidly coupled to a fluid chamber of the actuator and configured to reduce the rate at which fluid is permitted to exit the fluid chamber so as to provide viscous damping to the row unit(s). In this regard, when the row unit(s) moves relative to the toolbar (e.g., when the row unit contacts a rock or other impediment in the soil), the flow restrictor may be configured to reduce the relative speed and/or displacement of such movement, thereby damping the movement of the row unit(s) relative to the toolbar.
item-31 at level 3: paragraph: In one embodiment, the flow restrictor may be configured to provide a variable damping rate to the component(s) of the seed planting implement. Specifically, in such embodiment, the flow restrictor may be configured as an adjustable valve having one or more components that may be adjusted to change the size of a fluid passage or throat defined by the valve. In this regard, changing the throat size of the valve varies the rate at which the fluid may exit the fluid chamber of the actuator, thereby adjusting the damping rate provided by the disclosed system. For example, adjusting the valve so as to increase the size of the throat may allow the fluid to exit the fluid chamber more quickly, thereby reducing the damping rate of the system. Conversely, adjusting the valve so as to decrease the size of the throat may allow the fluid to exit the fluid chamber more slowly, thereby increasing the damping rate of the system.
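The inverse relation between throat size and damping rate described above can be made concrete with the standard orifice-flow equation, Q = Cd · A · sqrt(2ΔP/ρ): at a given pressure drop, the exit flow rate scales with throat area, so halving the throat diameter quarters the flow and correspondingly raises the damping. Below is a brief sketch under assumed fluid properties; the coefficients and dimensions are illustrative, not values from this disclosure.

```python
import math

def orifice_flow_rate(area_m2: float, dp_pa: float, rho: float = 870.0, cd: float = 0.6) -> float:
    """Volumetric flow rate through an orifice: Q = Cd * A * sqrt(2 * dp / rho).

    rho ~ 870 kg/m^3 (typical hydraulic oil) and Cd ~ 0.6 are assumed values.
    """
    return cd * area_m2 * math.sqrt(2.0 * dp_pa / rho)

# Halving the throat diameter quarters the area and thus quarters the flow,
# so fluid exits the chamber more slowly and the damping rate increases.
for d_mm in (4.0, 2.0):
    area = math.pi * (d_mm * 1e-3 / 2.0) ** 2
    print(f"throat {d_mm} mm: Q = {orifice_flow_rate(area, 2e6):.6f} m^3/s")
```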
item-32 at level 3: paragraph: In accordance with aspects of the present subject matter, the system may further include a check valve fluidly coupled to the fluid chamber of the actuator. Specifically, in several embodiments, the check valve may also be fluidly coupled to the flow restrictor in a parallel relationship. As such, the check valve may be configured to direct the fluid exiting the fluid chamber of the actuator (e.g., when one of the row units hits a rock) to flow through the flow restrictor, thereby reducing the relative speed and/or displacement between the row unit(s) in the toolbar. Furthermore, the check valve may be configured to permit the fluid entering the fluid chamber to bypass the flow restrictor. For example, the fluid may return to the fluid chamber as the row unit(s) returns to its initial position following contact with the rock. In this regard, allowing the returning fluid to bypass the flow restrictor may increase the rate at which the fluid flows back into the fluid chamber, thereby further increasing the damping provided by the disclosed system.
item-33 at level 3: paragraph: Referring now to FIG. 1, a perspective view of one embodiment of a seed planting implement 10 is illustrated in accordance with aspects of the present subject matter. As shown in FIG. 1, the implement 10 may include a laterally extending toolbar or frame assembly 12 connected at its middle to a forwardly extending tow bar 14 to allow the implement 10 to be towed by a work vehicle (not shown), such as an agricultural tractor, in a direction of travel (e.g., as indicated by arrow 16). The toolbar 12 may generally be configured to support a plurality of tool frames 18. Each tool frame 18 may, in turn, be configured to support a plurality of row units 20. As will be described below, each row unit 20 may include one or more ground engaging tools configured to excavate a furrow or trench in the soil.
item-34 at level 3: paragraph: It should be appreciated that, for purposes of illustration, only a portion of the row units 20 of the implement 10 have been shown in FIG. 1. In general, the implement 10 may include any number of row units 20, such as six, eight, twelve, sixteen, twenty-four, thirty-two, or thirty-six row units. In addition, it should be appreciated that the lateral spacing between row units 20 may be selected based on the type of crop being planted. For example, the row units 20 may be spaced approximately thirty inches from one another for planting corn, and approximately fifteen inches from one another for planting soybeans.
item-35 at level 3: paragraph: It should also be appreciated that the configuration of the implement 10 described above and shown in FIG. 1 is provided only to place the present subject matter in an exemplary field of use. Thus, it should be appreciated that the present subject matter may be readily adaptable to any manner of implement configuration.
item-36 at level 3: paragraph: Referring now to FIG. 2, a side view of one embodiment of a row unit 20 is illustrated in accordance with aspects of the present subject matter. As shown, the row unit 20 is configured as a hoe opener row unit. However, it should be appreciated that, in alternative embodiments, the row unit 20 may be configured as a disc opener row unit or any other suitable type of seed planting unit. Furthermore, it should be appreciated that, although the row unit 20 will generally be described in the context of the implement 10 shown in FIG. 1, the row unit 20 may generally be configured to be installed on any suitable seed planting implement having any suitable implement configuration.
item-37 at level 3: paragraph: As shown, the row unit 20 may be adjustably coupled to one of the tool frames 18 of the implement 10 by a suitable linkage assembly 22. For example, in one embodiment, the linkage assembly 22 may include a mounting bracket 24 coupled to the tool frame 18. Furthermore, the linkage assembly 22 may include first and second linkage members 26, 28. One end of each linkage member 26, 28 may be pivotably coupled to the mounting bracket 24, while an opposed end of each linkage member 26, 28 may be pivotally coupled to a support member 30 of the row unit 20. In this regard, the linkage assembly 22 may form a four bar linkage with the support member 30 that permits relative pivotable movement between the row unit 20 and the associated tool frame 18. However, it should be appreciated that, in alternative embodiments, the row unit 20 may be adjustably coupled to the tool frame 18 or the toolbar 12 via any other suitable linkage assembly. Furthermore, it should be appreciated that, in further embodiments the linkage assembly 22 may couple the row unit 20 directly to the toolbar 12.
item-38 at level 3: paragraph: Furthermore, the support member 30 may be configured to support one or more components of the row unit 20. For example, in several embodiments, a ground engaging shank 32 may be mounted or otherwise supported on the support member 30. As shown, the shank 32 may include an opener 34 configured to excavate a furrow or trench in the soil as the implement 10 moves in the direction of travel 16 to facilitate deposition of a flowable granular or particulate-type agricultural product, such as seed, fertilizer, and/or the like. Moreover, the row unit 20 may include a packer wheel 36 configured to roll along the soil and close the furrow after deposition of the agricultural product. In one embodiment, the packer wheel 36 may be coupled to the support member 30 by an arm 38. It should be appreciated that, in alternative embodiments, any other suitable component(s) may be supported on or otherwise coupled to the support member 30. For example, the row unit 20 may include a ground engaging disc opener (not shown) in lieu of the ground engaging shank 32.
item-39 at level 3: paragraph: Additionally, in several embodiments, a fluid-driven actuator 102 of the implement 10 may be configured to adjust the position of one or more components of the row unit 20 relative to the tool frame 18. For example, in one embodiment, a rod 104 of the actuator 102 may be coupled to the shank 32 (e.g., the end of the shank 32 opposed from the opener 34), while a cylinder 106 of the actuator 102 may be coupled to the mounting bracket 24. As such, the rod 104 may be configured to extend and/or retract relative to the cylinder 106 to adjust the position of the shank 32 relative to the tool frame 18, which, in turn, adjusts the force being applied to the shank 32. However, it should be appreciated that, in alternative embodiments, the rod 104 may be coupled to the mounting bracket 24, while the cylinder 106 may be coupled to the shank 32. Furthermore, it should be appreciated that, in further embodiments, the actuator 102 may be coupled to any other suitable component of the row unit 20 and/or directly to the toolbar 12.
item-40 at level 3: paragraph: Moreover, it should be appreciated that the configuration of the row unit 20 described above and shown in FIG. 2 is provided only to place the present subject matter in an exemplary field of use. Thus, it should be appreciated that the present subject matter may be readily adaptable to any manner of seed planting unit configuration.
item-41 at level 3: paragraph: Referring now to FIG. 3, a schematic view of one embodiment of a system 100 for controlling the operation of an actuator mounted on a seed planting implement is illustrated in accordance with aspects of the present subject matter. In general, the system 100 will be described herein with reference to the seed planting implement 10 and the row unit 20 described above with reference to FIGS. 1 and 2. However, it should be appreciated by those of ordinary skill in the art that the disclosed system 100 may generally be utilized with seed planting implements having any other suitable implement configuration and/or seed planting units having any other suitable unit configuration.
item-42 at level 3: paragraph: As shown in FIG. 3, the system 100 may include a fluid-driven actuator, such as the actuator 102 of the row unit 20 described above with reference to FIG. 2. As shown, the actuator 102 may correspond to a hydraulic actuator. Thus, in several embodiments, the actuator 102 may include a piston 108 housed within the cylinder 106. One end of the rod 104 may be coupled to the piston 108, while an opposed end of the rod 104 may extend outwardly from the cylinder 106. Additionally, the actuator 102 may include a cap-side chamber 110 and a rod-side chamber 112 defined within the cylinder 106. As is generally understood, by regulating the pressure of the fluid supplied to one or both of the cylinder chambers 110, 112, the actuation of the rod 104 may be controlled. However, it should be appreciated that, in alternative embodiments, the actuator 102 may be configured as any other suitable type of actuator, such as a pneumatic actuator. Furthermore, it should be appreciated that, in further embodiments, the system 100 may include any other suitable number of fluid-driven actuators, such as additional actuators 102 mounted on the implement 10.
item-43 at level 3: paragraph: Furthermore, the system 100 may include various components configured to provide fluid (e.g., hydraulic oil) to the cylinder chambers 110, 112 of the actuator 102. For example, in several embodiments, the system 100 may include a fluid reservoir 114 and first and second fluid conduits 116, 118. As shown, a first fluid conduit 116 may extend between and fluidly couple the reservoir 114 and the rod-side chamber 112 of the actuator 102. Similarly, a second fluid conduit 118 may extend between and fluidly couple the reservoir 114 and the cap-side chamber 110 of the actuator 102. Additionally, a pump 115 and a remote switch 117 or other valve(s) may be configured to control the flow of the fluid between the reservoir 114 and the cylinder chambers 110, 112 of the actuator 102. In one embodiment, the reservoir 114, the pump 115, and the remote switch 117 may be mounted on the work vehicle (not shown) configured to tow the implement 10. However, it should be appreciated that, in alternative embodiments, the reservoir 114, the pump 115, and/or the remote switch 117 may be mounted on the implement 10. Furthermore, it should be appreciated that the system 100 may include any other suitable component(s) configured to control the flow of fluid between the reservoir 114 and the actuator 102.
item-44 at level 3: paragraph: In several embodiments, the system 100 may also include a flow restrictor 120 that is fluidly coupled to the cap-side chamber 110. As such, the flow restrictor 120 may be provided in series with the second fluid conduit 118. As will be described below, the flow restrictor 120 may be configured to reduce the flow rate of the fluid exiting the cap-side chamber 110 in a manner that provides damping to one or more components of the implement 10. However, it should be appreciated that, in alternative embodiments, the flow restrictor 120 may be fluidly coupled to the rod-side chamber 112 such that the flow restrictor 120 is provided in series with the first fluid conduit 116.
item-45 at level 3: paragraph: Additionally, in several embodiments, the system 100 may include a check valve 122 that is fluidly coupled to the cap-side chamber 110 and provided in series with the second fluid conduit 118. As shown, the check valve 122 may be fluidly coupled to the flow restrictor 120 in parallel. In this regard, the check valve 122 may be provided in series with a first branch 124 of the second fluid conduit 118, while the flow restrictor 120 may be provided in series with a second branch 126 of the second fluid conduit 118. As such, the check valve 122 may be configured to allow the fluid to flow through the first branch 124 of the second fluid conduit 118 from the reservoir 114 to the cap-side chamber 110. However, the check valve 122 may be configured to occlude or prevent the fluid from flowing through the first branch 124 of the second fluid conduit 118 from the cap-side chamber 110 to the reservoir 114. In this regard, the check valve 122 directs all of the fluid exiting the cap-side chamber 110 into the flow restrictor 120. Conversely, the check valve 122 permits the fluid flowing to the cap-side chamber 110 to bypass the flow restrictor 120. As will be described below, such configuration facilitates damping of one or more components of the implement 10. However, it should be appreciated that, in alternative embodiments, the check valve 122 may be fluidly coupled to the rod-side chamber 112 in combination with the flow restrictor 120 such that the check valve 122 is provided in series with the first fluid conduit 116.
item-46 at level 3: paragraph: As indicated above, the system 100 may generally be configured to provide viscous damping to one or more components of the implement 10. For example, when a ground engaging tool of the implement 10, such as the shank 32, contacts a rock or other impediment in the soil, the corresponding row unit 20 may pivot relative to the corresponding tool frame 18 and/or the toolbar 12 against the down pressure load applied to the row unit 20 by the corresponding actuator 102. In several embodiments, such movement may cause the rod 104 of the actuator 102 to retract into the cylinder 106, thereby moving the piston 108 in a manner that decreases the volume of the cap-side chamber 110. In such instances, some of the fluid present within the cap-side chamber 110 may exit and flow into the second fluid conduit 118 toward the reservoir 114. The check valve 122 may prevent the fluid exiting the cap-side chamber 110 from flowing through the first branch 124 of the second fluid conduit 118. As such, all fluid exiting the cap-side chamber 110 may be directed into the second branch 126 and through the flow restrictor 120. As indicated above, the flow restrictor 120 reduces or limits the rate at which the fluid may flow through the second fluid conduit 118 so as to reduce the rate at which the fluid may exit the cap-side chamber 110. In this regard, the speed at which and/or the amount that the rod 104 retracts into the cylinder 106 when the shank 32 contacts a soil impediment may be reduced (e.g., because of the reduced rate at which the fluid is discharged from the cap-side chamber 110), thereby damping the movement of the row unit 20 relative to the corresponding tool frame 18 and/or the toolbar 12. Furthermore, after the initial retraction of the rod 104 into the cylinder 106, the piston 108 may then move in a manner that increases the volume of the cap-side chamber 110, thereby extending the rod 104 from the cylinder 106. In such instances, fluid present within the reservoir 114 and the second fluid conduit 118 may be drawn back into the cap-side chamber 110. As indicated above, the check valve 122 may permit the fluid within the second fluid conduit 118 to bypass the flow restrictor 120 and flow unobstructed through the first branch 124, thereby maximizing the rate at which the fluid returns to the cap-side chamber 110. Increasing the rate at which the fluid returns to the cap-side chamber 110 may decrease the time that the row unit 20 is displaced relative to the tool frame 18, thereby further damping the movement of the row unit 20 relative to the corresponding tool frame 18 and/or the toolbar 12.
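The effect of this asymmetric circuit, restricted outflow while the row unit deflects and a nearly free return through the check-valve branch, can be illustrated with a toy one-degree-of-freedom model. All masses, rates, and coefficients below are assumptions chosen for illustration, not parameters from this disclosure.

```python
# Toy model of the row unit: spring (down-pressure) plus direction-dependent
# viscous damping from the hydraulic circuit. While the unit moves up (v > 0),
# fluid leaves the cap-side chamber through the restrictor -> heavy damping;
# on the way back down, the check-valve branch bypasses the restrictor ->
# light damping and a fast return. All numbers are illustrative assumptions.
k, m = 2_000.0, 50.0              # assumed spring rate (N/m), effective mass (kg)
c_exit, c_return = 900.0, 150.0   # assumed damping coefficients (N*s/m)

x, v, dt = 0.0, 0.5, 1e-3         # rock strike kicks the unit up at 0.5 m/s
peak = 0.0
for _ in range(5000):             # simulate 5 s with explicit Euler steps
    c = c_exit if v > 0 else c_return
    a = (-k * x - c * v) / m
    v += a * dt
    x += v * dt
    peak = max(peak, x)
print(f"peak deflection {peak * 100:.1f} cm, residual {x * 100:.2f} cm")
```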
item-47 at level 3: paragraph: Referring now to FIG. 4, a cross-sectional view of one embodiment of the flow restrictor 120 is illustrated in accordance with aspects of the present subject matter. For example, in the illustrated embodiment, the flow restrictor 120 may include a restrictor body 128 coupled to the second branch 126 of the second fluid conduit 118, with the restrictor body 128, in turn, defining a fluid passage 130 extending therethrough. Furthermore, the flow restrictor 120 may include an orifice plate 132 extending inward from the restrictor body 128 into the fluid passage 130. As shown, the orifice plate 132 may define a central aperture or throat 134 extending therethrough. In general, the size (e.g., the area, diameter, etc.) of the throat 134 may be smaller than the size of the fluid passage 130 so as to reduce the flow rate of the fluid through the flow restrictor 120. It should be appreciated that, in the illustrated embodiment, the throat 134 has a fixed size such that the throat 134 provides a fixed or constant backpressure for a given fluid flow rate. In this regard, in such embodiment, a fixed or constant damping rate is provided by the system 100. However, it should be appreciated that, in alternative embodiments, the flow restrictor 120 may have any other suitable configuration that reduces the flow rate of the fluid flowing therethrough.
item-48 at level 3: paragraph: Referring now to FIG. 5, a cross-sectional view of another embodiment of the flow restrictor 120 is illustrated in accordance with aspects of the present subject matter. As shown, the flow restrictor 120 may generally be configured the same as or similar to that described above with reference to FIG. 4. For instance, the flow restrictor 120 may define the throat 134, which is configured to reduce the flow rate of the fluid through the flow restrictor 120. However, as shown in FIG. 5, unlike the above-described embodiment, the size (e.g., the area, diameter, etc.) of the throat 134 is adjustable. For example, in such embodiment, the flow restrictor 120 may be configured as an adjustable valve 136. As shown, the valve 136 may include a valve body 138 coupled to the second branch 126 of the second fluid conduit 118, a shaft 140 rotatably coupled to the valve body 138, a disc 142 coupled to the shaft 140, and an actuator 144 (e.g., a suitable electric motor) coupled to the shaft 140. As such, the actuator 144 may be configured to rotate the shaft 140 and the disc 142 relative to the valve body 138 (e.g., as indicated by arrow 146 in FIG. 5) to change the size of the throat 134 defined between the disc 142 and the valve body 138. Although the valve 136 is configured as a butterfly valve in FIG. 5, it should be appreciated that, in alternative embodiments, the valve 136 may be configured as any other suitable type of valve or adjustable flow restrictor. For example, in one embodiment, the valve 136 may be configured as a suitable ball valve.
item-49 at level 3: paragraph: In accordance with aspects of the present disclosure, by adjusting the size of the throat 134, the system 100 may be able to provide variable damping rates. In general, the size of the throat 134 may be indicative of the amount of damping provided by the system 100. For example, in several embodiments, the disc 142 may be adjustable between a first position shown in FIG. 6 and a second position shown in FIG. 7. More specifically, when the disc 142 is at the first position, the throat 134 defines a first size (e.g., as indicated by arrow 148 in FIG. 6), thereby providing a first damping rate. Conversely, when the disc 142 is at the second position, the throat 134 defines a second size (e.g., as indicated by arrow 150 in FIG. 7), thereby providing a second damping rate. As shown in FIGS. 6 and 7, the first distance 148 is larger than the second distance 150. In such instance, the system 100 provides greater damping when the throat 134 is adjusted to the second size than when the throat 134 is adjusted to the first size. It should be appreciated that, in alternative embodiments, the disc 142 may be adjustable between any other suitable positions that provide any other suitable damping rates. For example, the disc 142 may be adjustable to a plurality of different positions defined between the fully opened and fully closed positions of the valve, thereby providing for a corresponding number of different damping rates. Furthermore, it should be appreciated that the disc 142 may be continuously adjustable or adjustable between various discrete positions.
item-50 at level 3: paragraph: Referring back to FIG. 5, a controller 152 of the system 100 may be configured to electronically control the operation of one or more components of the valve 136, such as the actuator 144. In general, the controller 152 may comprise any suitable processor-based device known in the art, such as a computing device or any suitable combination of computing devices. Thus, in several embodiments, the controller 152 may include one or more processor(s) 154 and associated memory device(s) 156 configured to perform a variety of computer-implemented functions. As used herein, the term “processor” refers not only to integrated circuits referred to in the art as being included in a computer, but also refers to a controller, a microcontroller, a microcomputer, a programmable logic controller (PLC), an application specific integrated circuit, and other programmable circuits. Additionally, the memory device(s) 156 of the controller 152 may generally comprise memory element(s) including, but not limited to, a computer readable medium (e.g., random access memory (RAM)), a computer readable non-volatile medium (e.g., a flash memory), a floppy disk, a compact disc-read only memory (CD-ROM), a magneto-optical disk (MOD), a digital versatile disc (DVD) and/or other suitable memory elements. Such memory device(s) 156 may generally be configured to store suitable computer-readable instructions that, when implemented by the processor(s) 154, configure the controller 152 to perform various computer-implemented functions. In addition, the controller 152 may also include various other suitable components, such as a communications circuit or module, one or more input/output channels, a data/control bus and/or the like.
item-51 at level 3: paragraph: It should be appreciated that the controller 152 may correspond to an existing controller of the implement 10 or associated work vehicle (not shown) or the controller 152 may correspond to a separate processing device. For instance, in one embodiment, the controller 152 may form all or part of a separate plug-in module that may be installed within the implement 10 or associated work vehicle to allow for the disclosed system and method to be implemented without requiring additional software to be uploaded onto existing control devices of the implement 10 or associated work vehicle.
item-52 at level 3: paragraph: Furthermore, in one embodiment, a user interface 158 of the system 100 may be communicatively coupled to the controller 152 via a wired or wireless connection to allow feedback signals (e.g., as indicated by dashed line 160 in FIG. 5) to be transmitted from the controller 152 to the user interface 158. More specifically, the user interface 158 may be configured to receive an input from an operator of the implement 10 or the associated work vehicle, such as an input associated with a desired damping characteristic(s) to be provided by the system 100. As such, the user interface 158 may include one or more input devices (not shown), such as touchscreens, keypads, touchpads, knobs, buttons, sliders, switches, mice, microphones, and/or the like. In addition, some embodiments of the user interface 158 may include one or more feedback devices (not shown), such as display screens, speakers, warning lights, and/or the like, which are configured to communicate such feedback from the controller 152 to the operator of the implement 10. However, in alternative embodiments, the user interface 158 may have any suitable configuration.
item-53 at level 3: paragraph: Moreover, in one embodiment, one or more sensors 162 of the system 100 may be communicatively coupled to the controller 152 via a wired or wireless connection to allow sensor data (e.g., as indicated by dashed line 164 in FIG. 5) to be transmitted from the sensor(s) 162 to the controller 152. For example, in one embodiment, the sensor(s) 162 may include a location sensor, such as a GNSS-based sensor, that is configured to detect a parameter associated with the location of the implement 10 or associated work vehicle within the field. In another embodiment, the sensor(s) 162 may include a speed sensor, such as a Hall Effect sensor, that is configured to detect a parameter associated with the speed at which the implement 10 is moved across the field. However, it should be appreciated that, in alternative embodiments, the sensor(s) 162 may include any suitable sensing device(s) configured to detect any suitable operating parameter of the implement 10 and/or the associated work vehicle.
item-54 at level 3: paragraph: In several embodiments, the controller 152 may be configured to control the operation of the valve 136 based on the feedback signals 160 received from the user interface 158 and/or the sensor data 164 received from the sensor(s) 162. Specifically, as shown in FIG. 5, the controller 152 may be communicatively coupled to the actuator 144 of the valve 136 via a wired or wireless connection to allow control signals (e.g., indicated by dashed lines 166 in FIG. 5) to be transmitted from the controller 152 to the actuator 144. Such control signals 166 may be configured to regulate the operation of the actuator 144 to adjust the position of the disc 142 relative to the valve body 138, such as by moving the disc 142 along the direction 146 between the first position (FIG. 6) and the second position (FIG. 7). For example, the feedback signals 160 received by the controller 152 may be indicative that the operator desires to adjust the damping provided by the system 100. Furthermore, upon receipt of the sensor data 164 (e.g., data indicative of the location and/or speed of the implement 10), the controller 152 may be configured to determine that the damping rate of the system 100 should be adjusted. In either instance, the controller 152 may be configured to transmit the control signals 166 to the actuator 144, with such control signals 166 being configured to control the operation of the actuator 144 to adjust the position of the disc 142 to provide the desired damping rate. However, it should be appreciated that, in alternative embodiments, the controller 152 may be configured to control the operation of the valve 136 based on any other suitable input(s) and/or parameter(s).
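As a hypothetical sketch of this control flow (the function, thresholds, and units below are illustrative assumptions; the disclosure does not specify a control law), the controller 152 might map an operator damping preference and the sensed ground speed to a commanded disc opening:

```python
# Hypothetical sketch of the FIG. 5 control loop: pick a throat opening from
# operator preference and ground speed, then command the valve actuator.

def select_throat_opening(operator_level: float, speed_kph: float) -> float:
    """Map operator preference and ground speed to a disc opening in [0, 1].

    operator_level: 0.0 (maximum damping) .. 1.0 (minimum damping).
    Higher speed tends to cause bounce/chatter, so the opening shrinks
    (more damping) as speed rises. The 20 kph saturation point is assumed.
    """
    speed_penalty = min(speed_kph / 20.0, 1.0)
    opening = operator_level * (1.0 - 0.6 * speed_penalty)
    return max(0.05, min(1.0, opening))  # never fully close the throat

# e.g. controller 152 commanding actuator 144 with the computed opening:
opening = select_throat_opening(operator_level=0.7, speed_kph=12.0)
print(f"command disc 142 to {opening:.0%} open")
```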
item-55 at level 3: paragraph: Referring now to FIG. 8, a schematic view of another embodiment of the system 100 is illustrated in accordance with aspects of the present subject matter. As shown, the system 100 may generally be configured the same as or similar to that described above with reference to FIG. 3. For instance, the system 100 may include the flow restrictor 120 and the check valve 122 fluidly coupled to the cap-side chamber 110 of the actuator 102 via the second fluid conduit 118. Furthermore, the flow restrictor 120 and the check valve 122 may be fluidly coupled together in parallel. However, as shown in FIG. 8, unlike the above-described embodiment, the check valve 122 may be configured as a pilot-operated or fluid actuated three-way valve that is fluidly coupled to the first fluid conduit 116 by a pilot conduit 168.
item-56 at level 3: paragraph: In general, when the row unit 20 is lifted from an operational position relative to the ground to a raised position relative to the ground, it may be desirable for fluid to exit the cap-side chamber 110 without its flow rate being limited by the flow restrictor 120. For example, permitting such fluid to bypass the flow restrictor 120 may reduce the time required to lift the row unit 20 from the operational position to the raised position. More specifically, when lifting the row unit 20 from the operational position to the raised position, a pump (not shown) may pump fluid through the first fluid conduit 116 from the reservoir 114 to the rod-side chamber 112 of the actuator 102, thereby retracting the rod 104 into the cylinder 106. This may, in turn, discharge fluid from the cap-side chamber 110 into the second fluid conduit 118. As described above, the check valve 122 may generally be configured to direct all fluid exiting the cap-side chamber 110 into the flow restrictor 120. However, in the configuration of the system 100 shown in FIG. 8, when lifting the row unit 20 to the raised position, the pilot conduit 168 supplies fluid flowing through the first fluid conduit 116 to the check valve 122. The fluid received from the pilot conduit 168 may, in turn, actuate suitable component(s) of the check valve 122 (e.g., a diaphragm(s), a spring(s), and/or the like) in a manner that causes the check valve 122 to open, thereby permitting the fluid exiting the cap-side chamber 110 to bypass the flow restrictor 120 and flow unobstructed through the check valve 122 toward the reservoir 114. Conversely, when the row unit 20 is at the operational position, the check valve 122 may be closed, thereby directing all fluid exiting the cap-side chamber 110 into the flow restrictor 120.
item-57 at level 3: paragraph: Referring now to FIG. 9, a schematic view of a further embodiment of the system 100 is illustrated in accordance with aspects of the present subject matter. As shown, the system 100 may generally be configured the same as or similar to that described above with reference to FIGS. 3 and 8. For instance, the system 100 may include the flow restrictor 120 and the check valve 122 fluidly coupled to the cap-side chamber 110 of the actuator 102 via the second fluid conduit 118. Furthermore, the flow restrictor 120 and the check valve 122 may be fluidly coupled together in parallel. However, as shown in FIG. 9, unlike the above-described embodiments, the check valve 122 may be configured as an electrically actuated valve. Specifically, as shown, the controller 152 may be communicatively coupled to the check valve 122 via a wired or wireless connection to allow control signals (e.g., indicated by dashed lines 170 in FIG. 9) to be transmitted from the controller 152 to the check valve 122. In this regard, when the row unit 20 is lifted from the operational position to the raised position, the control signals 170 may be configured to instruct the check valve 122 to open in a manner that permits the fluid exiting the cap-side chamber 110 to bypass the flow restrictor 120 and flow unobstructed through the check valve 122 toward the reservoir 114. Conversely, when the row unit 20 is at the operational position, the control signals 170 may be configured to instruct the check valve 122 to close, thereby directing all fluid exiting the cap-side chamber 110 into the flow restrictor 120.
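The FIG. 8 and FIG. 9 variants reduce to the same conditional: bypass the restrictor while the row unit is being raised, and route all exiting fluid through it otherwise. A minimal sketch of that logic follows (hypothetical names; nothing here is code from the disclosure):

```python
# Hypothetical sketch of the FIG. 9 behavior: the controller opens the
# electrically actuated check valve 122 only while the row unit is raised;
# during normal operation the valve stays closed so that all fluid exiting
# the cap-side chamber 110 passes through the flow restrictor 120.

def update_bypass(row_unit_raised: bool, set_valve_open) -> None:
    # Raised: fast, unrestricted lift. Operational: damped response.
    set_valve_open(row_unit_raised)

update_bypass(True, lambda is_open: print("valve 122", "open" if is_open else "closed"))
```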
item-58 at level 3: paragraph: This written description uses examples to disclose the technology, including the best mode, and also to enable any person skilled in the art to practice the technology, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the technology is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they include structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
item-59 at level 2: section_header: CLAIMS
item-60 at level 3: paragraph: 1. A system for controlling an operation of an actuator mounted on a seed planting implement, the system comprising: a toolbar; a row unit adjustably mounted on the toolbar; a fluid-driven actuator configured to adjust a position of the row unit relative to the toolbar, the fluid-driven actuator defining first and second fluid chambers; a flow restrictor fluidly coupled to the first fluid chamber, the flow restrictor being configured to reduce a rate at which fluid is permitted to exit the first fluid chamber in a manner that provides damping to the row unit; and a valve fluidly coupled to the first fluid chamber, the valve further being fluidly coupled to the flow restrictor in a parallel relationship such that the valve is configured to permit the fluid exiting the first fluid chamber to flow through the flow restrictor and the fluid entering the first fluid chamber to bypass the flow restrictor.
item-61 at level 3: paragraph: 2. The system of claim 1, wherein, when fluid is supplied to the second fluid chamber, the valve is configured to permit fluid exiting the first fluid chamber to bypass the flow restrictor.
item-62 at level 3: paragraph: 3. The system of claim 1, wherein the valve is fluidly actuated.
item-63 at level 3: paragraph: 4. The system of claim 3, further comprising: a fluid line configured to supply the fluid to the second fluid chamber, the fluid line being fluidly coupled to the valve such that, when the fluid flows through the fluid line to the second fluid chamber, the valve opens in a manner that permits the fluid exiting the first fluid chamber to bypass the flow restrictor.
item-64 at level 3: paragraph: 5. The system of claim 1, wherein the valve is electrically actuated.
item-65 at level 3: paragraph: 6. The system of claim 1, wherein the flow restrictor defines a throat having a fixed size.
item-66 at level 3: paragraph: 7. The system of claim 1, wherein the flow restrictor defines a throat having an adjustable size.
item-67 at level 3: paragraph: 8. A seed planting implement, comprising: a toolbar; a plurality of row units adjustably coupled to the toolbar, each row unit including a ground engaging tool configured to form a furrow in the soil; a plurality of fluid-driven actuators, each fluid-driven actuator being coupled between the toolbar and a corresponding row unit of the plurality of row units, each fluid-driven actuator being configured to adjust a position of the corresponding row unit relative to the toolbar, each fluid-driven actuator defining first and second fluid chambers; a flow restrictor fluidly coupled to the first fluid chamber of a first fluid-driven actuator of the plurality of fluid-driven actuators, the flow restrictor being configured to reduce a rate at which fluid is permitted to exit the first fluid chamber of the first fluid-driven actuator in a manner that provides damping to the corresponding row unit; and a valve fluidly coupled to the first fluid chamber of the first fluid-driven actuator, the valve further being fluidly coupled to the flow restrictor in a parallel relationship such that the valve is configured to permit the fluid exiting the first fluid chamber to flow through the flow restrictor and the fluid entering the first fluid chamber to bypass the flow restrictor.
item-68 at level 3: paragraph: 9. The seed planting implement of claim 8, wherein, when fluid is supplied to the second fluid chamber of the first fluid-driven actuator, the valve is configured to permit fluid exiting the first fluid chamber of the first fluid-driven actuator to bypass the flow restrictor.
item-69 at level 3: paragraph: 10. The seed planting implement of claim 8, wherein the valve is fluidly actuated.
item-70 at level 3: paragraph: 11. The seed planting implement of claim 10, further comprising: a fluid line configured to supply fluid to the second fluid chamber of the first fluid-driven actuator, the fluid line being fluidly coupled to the valve such that, when fluid flows through the fluid line to the second fluid chamber of the first fluid-driven actuator, the valve opens in a manner that permits the fluid exiting the first fluid chamber of the first fluid-driven actuator to bypass the flow restrictor.
item-71 at level 3: paragraph: 12. The seed planting implement of claim 8, wherein the valve is electrically actuated.
item-72 at level 3: paragraph: 13. The seed planting implement of claim 8, wherein the flow restrictor defines a throat having a fixed size.
item-73 at level 3: paragraph: 14. The seed planting implement of claim 8, wherein the flow restrictor defines a throat having an adjustable size.
item-74 at level 3: paragraph: 15. A system for providing damping to a row unit of a seed planting implement, the system comprising: a toolbar; a row unit adjustably mounted on the toolbar; a fluid-driven actuator configured to adjust a position of the row unit relative to the toolbar, the fluid-driven actuator defining a fluid chamber; and a flow restrictor fluidly coupled to the fluid chamber, the flow restrictor defining an adjustable throat configured to reduce a rate at which fluid is permitted to exit the fluid chamber, the throat being adjustable between a first size configured to provide a first damping rate to the row unit and a second size configured to provide a second damping rate to the row unit, the first and second damping rates being different.
item-75 at level 3: paragraph: 16. The system of claim 15, wherein the throat is adjustable between the first and second damping rates based on an operator input.
item-76 at level 3: paragraph: 17. The system of claim 15, wherein the throat is adjustable between the first and second damping rates based on data received from one or more sensors on the seed planting implement.
item-77 at level 3: paragraph: 18. The system of claim 15, further comprising: a valve fluidly coupled to the fluid chamber, the valve being configured to selectively occlude the flow of fluid such that fluid exiting the fluid chamber flows through the flow restrictor and fluid entering the fluid chamber bypasses the flow restrictor.
item-78 at level 3: paragraph: 19. The system of claim 18, wherein the flow restrictor and the valve are fluidly coupled in a parallel relationship.

File diff suppressed because it is too large


@@ -0,0 +1,155 @@
# SYSTEM FOR CONTROLLING THE OPERATION OF AN ACTUATOR MOUNTED ON A SEED PLANTING IMPLEMENT
## ABSTRACT
In one aspect, a system for controlling an operation of an actuator mounted on a seed planting implement may include an actuator configured to adjust a position of a row unit of the seed planting implement relative to a toolbar of the seed planting implement. The system may also include a flow restrictor fluidly coupled to a fluid chamber of the actuator, with the flow restrictor being configured to reduce a rate at which fluid is permitted to exit the fluid chamber in a manner that provides damping to the row unit. Furthermore, the system may include a valve fluidly coupled to the flow restrictor in a parallel relationship such that the valve is configured to permit the fluid exiting the fluid chamber to flow through the flow restrictor and the fluid entering the fluid chamber to bypass the flow restrictor.
## FIELD
The present disclosure generally relates to seed planting implements and, more particularly, to systems for controlling the operation of an actuator mounted on a seed planting implement in a manner that provides damping to one or more components of the seed planting implement.
## BACKGROUND
Modern farming practices strive to increase yields of agricultural fields. In this respect, seed planting implements are towed behind a tractor or other work vehicle to deposit seeds in a field. For example, seed planting implements typically include one or more ground engaging tools or openers that form a furrow or trench in the soil. One or more dispensing devices of the seed planting implement may, in turn, deposit seeds into the furrow(s). After deposition of the seeds, a packer wheel may pack the soil on top of the deposited seeds.
In certain instances, the packer wheel may also control the penetration depth of the furrow. In this regard, the position of the packer wheel may be moved vertically relative to the associated opener(s) to adjust the depth of the furrow. Additionally, the seed planting implement includes an actuator configured to exert a downward force on the opener(s) to ensure that the opener(s) is able to penetrate the soil to the depth set by the packer wheel. However, the seed planting implement may bounce or chatter when traveling at high speeds and/or when the opener(s) encounters hard or compacted soil. As such, operators generally operate the seed planting implement with the actuator exerting more downward force on the opener(s) than is necessary in order to prevent such bouncing or chatter. Operation of the seed planting implement with excessive down pressure applied to the opener(s), however, reduces the overall stability of the seed planting implement.
Accordingly, an improved system for controlling the operation of an actuator mounted on a seed planting implement to enhance the overall operation of the implement would be welcomed in the technology.
## BRIEF DESCRIPTION
Aspects and advantages of the technology will be set forth in part in the following description, or may be obvious from the description, or may be learned through practice of the technology.
In one aspect, the present subject matter is directed to a system for controlling an operation of an actuator mounted on a seed planting implement. The system may include a toolbar and a row unit adjustably mounted on the toolbar. The system may also include a fluid-driven actuator configured to adjust a position of the row unit relative to the toolbar, with the fluid-driven actuator defining first and second fluid chambers. Furthermore, the system may include a flow restrictor fluidly coupled to the first fluid chamber, with the flow restrictor being configured to reduce a rate at which fluid is permitted to exit the first fluid chamber in a manner that provides viscous damping to the row unit. Additionally, the system may include a valve fluidly coupled to the first fluid chamber. The valve may further be fluidly coupled to the flow restrictor in a parallel relationship such that the valve is configured to permit the fluid exiting the first fluid chamber to flow through the flow restrictor and the fluid entering the first fluid chamber to bypass the flow restrictor.
In another aspect, the present subject matter is directed to a seed planting implement including a toolbar and a plurality of row units adjustably coupled to the toolbar. Each row unit may include a ground engaging tool configured to form a furrow in the soil. The seed planting implement may also include a plurality of fluid-driven actuators, with each fluid-driven actuator being coupled between the toolbar and a corresponding row unit of the plurality of row units. As such, each fluid-driven actuator may be configured to adjust a position of the corresponding row unit relative to the toolbar. Moreover, each fluid-driven actuator may define first and second fluid chambers. Furthermore, the seed planting implement may include a flow restrictor fluidly coupled to the first fluid chamber of a first fluid-driven actuator of the plurality of fluid-driven actuators. The flow restrictor may be configured to reduce a rate at which fluid is permitted to exit the first fluid chamber of the first fluid-driven actuator in a manner that provides viscous damping to the corresponding row unit. Additionally, the seed planting implement may include a valve fluidly coupled to the first fluid chamber of the first fluid-driven actuator. The valve further may be fluidly coupled to the flow restrictor in a parallel relationship such that the valve is configured to permit the fluid exiting the first fluid chamber to flow through the flow restrictor and the fluid entering the first fluid chamber to bypass the flow restrictor.
In a further aspect, the present subject matter is directed to a system for providing damping to a row unit of a seed planting implement. The system may include a toolbar, a row unit adjustably mounted on the toolbar, and a fluid-driven actuator configured to adjust a position of the row unit relative to the toolbar. As such, the fluid-driven actuator may define a fluid chamber. The system may also include a flow restrictor fluidly coupled to the fluid chamber. The flow restrictor may define an adjustable throat configured to reduce a rate at which fluid is permitted to exit the fluid chamber. In this regard, the throat may be adjustable between a first size configured to provide a first damping rate to the row unit and a second size configured to provide a second damping rate to the row unit, with the first and second damping rates being different.
These and other features, aspects and advantages of the present technology will become better understood with reference to the following description and appended claims. The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the technology and, together with the description, serve to explain the principles of the technology.
## BRIEF DESCRIPTION OF THE DRAWINGS
A full and enabling disclosure of the present technology, including the best mode thereof, directed to one of ordinary skill in the art, is set forth in the specification, which makes reference to the appended figures, in which:
FIG. 1 illustrates a perspective view of one embodiment of a seed planting implement in accordance with aspects of the present subject matter;
FIG. 2 illustrates a side view of one embodiment of a row unit suitable for use with a seed planting implement in accordance with aspects of the present subject matter;
FIG. 3 illustrates a schematic view of one embodiment of a system for controlling the operation of an actuator mounted on a seed planting implement in accordance with aspects of the present subject matter;
FIG. 4 illustrates a cross-sectional view of one embodiment of a flow restrictor suitable for use in the system shown in FIG. 3, particularly illustrating the flow restrictor defining a throat having a fixed size in accordance with aspects of the present subject matter;
FIG. 5 illustrates a cross-sectional view of another embodiment of a flow restrictor suitable for use in the system shown in FIG. 3, particularly illustrating the flow restrictor defining a throat having an adjustable size in accordance with aspects of the present subject matter;
FIG. 6 illustrates a simplified cross-sectional view of the flow restrictor shown in FIG. 5, particularly illustrating the throat having a first size configured to provide a first damping rate in accordance with aspects of the present subject matter;
FIG. 7 illustrates a simplified cross-sectional view of the flow restrictor shown in FIG. 5, particularly illustrating the throat having a second size configured to provide a second damping rate in accordance with aspects of the present subject matter;
FIG. 8 illustrates a schematic view of another embodiment of a system for controlling the operation of an actuator mounted on a seed planting implement in accordance with aspects of the present subject matter, particularly illustrating the system including a fluidly actuated check valve; and
FIG. 9 illustrates a schematic view of a further embodiment of a system for controlling the operation of an actuator mounted on a seed planting implement in accordance with aspects of the present subject matter, particularly illustrating the system including an electrically actuated check valve.
Repeat use of reference characters in the present specification and drawings is intended to represent the same or analogous features or elements of the present technology.
## DETAILED DESCRIPTION
Reference now will be made in detail to embodiments of the invention, one or more examples of which are illustrated in the drawings. Each example is provided by way of explanation of the invention, not limitation of the invention. In fact, it will be apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the scope or spirit of the invention. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present invention covers such modifications and variations as come within the scope of the appended claims and their equivalents.
In general, the present subject matter is directed to systems for controlling the operation of an actuator mounted on a seed planting implement. Specifically, the disclosed systems may be configured to control the operation of the actuator in a manner that provides damping to one or more components of the seed planting implement. For example, in several embodiments, the seed planting implement may include a toolbar and one or more row units adjustably coupled to the toolbar. One or more fluid-driven actuators of the seed planting implement may be configured to control and/or adjust the position of the row unit(s) relative to the toolbar. Furthermore, a flow restrictor may be fluidly coupled to a fluid chamber of the actuator and configured to reduce the rate at which fluid is permitted to exit the fluid chamber so as to provide viscous damping to the row unit(s). In this regard, when the row unit(s) moves relative to the toolbar (e.g., when the row unit contacts a rock or other impediment in the soil), the flow restrictor may be configured to reduce the relative speed and/or displacement of such movement, thereby damping the movement of the row unit(s) relative to the toolbar.
In one embodiment, the flow restrictor may be configured to provide a variable damping rate to the component(s) of the seed planting implement. Specifically, in such embodiment, the flow restrictor may be configured as an adjustable valve having one or more components that may be adjusted to change the size of a fluid passage or throat defined by the valve. In this regard, changing the throat size of the valve varies the rate at which the fluid may exit the fluid chamber of the actuator, thereby adjusting the damping rate provided by the disclosed system. For example, adjusting the valve so as to increase the size of the throat may allow the fluid to exit the fluid chamber more quickly, thereby reducing the damping rate of the system. Conversely, adjusting the valve so as to decrease the size of the throat may allow the fluid to exit the fluid chamber more slowly, thereby increasing the damping rate of the system.
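To make the inverse relationship between throat size and damping rate concrete, the following minimal sketch models the flow restrictor as a sharp-edged orifice. The oil density, discharge coefficient, flow rate, and throat diameters are textbook-style assumptions chosen for illustration, not values taken from this disclosure.

```python
import math

# Assumed values: a typical hydraulic-oil density (kg/m^3) and a common
# discharge coefficient for a sharp-edged orifice.
RHO = 870.0
CD = 0.62

def orifice_pressure_drop(flow_rate_m3s: float, throat_area_m2: float) -> float:
    """Pressure drop (Pa) across an orifice at a given volumetric flow rate.

    Standard orifice equation Q = Cd * A * sqrt(2 * dP / rho), rearranged
    for dP. A smaller throat area produces a larger back-pressure, i.e.
    more resistance to fluid exiting the chamber and a higher damping rate.
    """
    return (RHO / 2.0) * (flow_rate_m3s / (CD * throat_area_m2)) ** 2

q = 1.0e-4                    # 0.1 L/s, an arbitrary example flow rate
small = math.pi * 0.002 ** 2  # throat of 4 mm diameter
large = math.pi * 0.004 ** 2  # throat of 8 mm diameter

# Doubling the throat diameter quadruples its area and cuts the
# back-pressure (and hence the damping) by a factor of sixteen.
print(orifice_pressure_drop(q, small) / orifice_pressure_drop(q, large))  # ~16.0
```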
In accordance with aspects of the present subject matter, the system may further include a check valve fluidly coupled to the fluid chamber of the actuator. Specifically, in several embodiments, the check valve may also be fluidly coupled to the flow restrictor in a parallel relationship. As such, the check valve may be configured to direct the fluid exiting the fluid chamber of the actuator (e.g., when one of the row units hits a rock) to flow through the flow restrictor, thereby reducing the relative speed and/or displacement between the row unit(s) and the toolbar. Furthermore, the check valve may be configured to permit the fluid entering the fluid chamber to bypass the flow restrictor. For example, the fluid may return to the fluid chamber as the row unit(s) returns to its initial position following contact with the rock. In this regard, allowing the returning fluid to bypass the flow restrictor may increase the rate at which the fluid flows back into the fluid chamber, thereby further increasing the damping provided by the disclosed system.
Referring now to FIG. 1, a perspective view of one embodiment of a seed planting implement 10 is illustrated in accordance with aspects of the present subject matter. As shown in FIG. 1, the implement 10 may include a laterally extending toolbar or frame assembly 12 connected at its middle to a forwardly extending tow bar 14 to allow the implement 10 to be towed by a work vehicle (not shown), such as an agricultural tractor, in a direction of travel (e.g., as indicated by arrow 16). The toolbar 12 may generally be configured to support a plurality of tool frames 18. Each tool frame 18 may, in turn, be configured to support a plurality of row units 20. As will be described below, each row unit 20 may include one or more ground engaging tools configured to excavate a furrow or trench in the soil.
It should be appreciated that, for purposes of illustration, only a portion of the row units 20 of the implement 10 have been shown in FIG. 1. In general, the implement 10 may include any number of row units 20, such as six, eight, twelve, sixteen, twenty-four, thirty-two, or thirty-six row units. In addition, it should be appreciated that the lateral spacing between row units 20 may be selected based on the type of crop being planted. For example, the row units 20 may be spaced approximately thirty inches from one another for planting corn, and approximately fifteen inches from one another for planting soybeans.
It should also be appreciated that the configuration of the implement 10 described above and shown in FIG. 1 is provided only to place the present subject matter in an exemplary field of use. Thus, it should be appreciated that the present subject matter may be readily adaptable to any manner of implement configuration.
Referring now to FIG. 2, a side view of one embodiment of a row unit 20 is illustrated in accordance with aspects of the present subject matter. As shown, the row unit 20 is configured as a hoe opener row unit. However, it should be appreciated that, in alternative embodiments, the row unit 20 may be configured as a disc opener row unit or any other suitable type of seed planting unit. Furthermore, it should be appreciated that, although the row unit 20 will generally be described in the context of the implement 10 shown in FIG. 1, the row unit 20 may generally be configured to be installed on any suitable seed planting implement having any suitable implement configuration.
As shown, the row unit 20 may be adjustably coupled to one of the tool frames 18 of the implement 10 by a suitable linkage assembly 22. For example, in one embodiment, the linkage assembly 22 may include a mounting bracket 24 coupled to the tool frame 18. Furthermore, the linkage assembly 22 may include first and second linkage members 26, 28. One end of each linkage member 26, 28 may be pivotably coupled to the mounting bracket 24, while an opposed end of each linkage member 26, 28 may be pivotally coupled to a support member 30 of the row unit 20. In this regard, the linkage assembly 22 may form a four-bar linkage with the support member 30 that permits relative pivotable movement between the row unit 20 and the associated tool frame 18. However, it should be appreciated that, in alternative embodiments, the row unit 20 may be adjustably coupled to the tool frame 18 or the toolbar 12 via any other suitable linkage assembly. Furthermore, it should be appreciated that, in further embodiments, the linkage assembly 22 may couple the row unit 20 directly to the toolbar 12.
Furthermore, the support member 30 may be configured to support one or more components of the row unit 20. For example, in several embodiments, a ground engaging shank 32 may be mounted or otherwise supported on the support member 30. As shown, the shank 32 may include an opener 34 configured to excavate a furrow or trench in the soil as the implement 10 moves in the direction of travel 16 to facilitate deposition of a flowable granular or particulate-type agricultural product, such as seed, fertilizer, and/or the like. Moreover, the row unit 20 may include a packer wheel 36 configured to roll along the soil and close the furrow after deposition of the agricultural product. In one embodiment, the packer wheel 36 may be coupled to the support member 30 by an arm 38. It should be appreciated that, in alternative embodiments, any other suitable component(s) may be supported on or otherwise coupled to the support member 30. For example, the row unit 20 may include a ground engaging disc opener (not shown) in lieu of the ground engaging shank 32.
Additionally, in several embodiments, a fluid-driven actuator 102 of the implement 10 may be configured to adjust the position of one or more components of the row unit 20 relative to the tool frame 18. For example, in one embodiment, a rod 104 of the actuator 102 may be coupled to the shank 32 (e.g., the end of the shank 32 opposed from the opener 34), while a cylinder 106 of the actuator 102 may be coupled to the mounting bracket 24. As such, the rod 104 may be configured to extend and/or retract relative to the cylinder 106 to adjust the position of the shank 32 relative to the tool frame 18, which, in turn, adjusts the force being applied to the shank 32. However, it should be appreciated that, in alternative embodiments, the rod 104 may be coupled to the mounting bracket 24, while the cylinder 106 may be coupled to the shank 32. Furthermore, it should be appreciated that, in further embodiments, the actuator 102 may be coupled to any other suitable component of the row unit 20 and/or directly to the toolbar 12.
Moreover, it should be appreciated that the configuration of the row unit 20 described above and shown in FIG. 2 is provided only to place the present subject matter in an exemplary field of use. Thus, it should be appreciated that the present subject matter may be readily adaptable to any manner of seed planting unit configuration.
Referring now to FIG. 3, a schematic view of one embodiment of a system 100 for controlling the operation of an actuator mounted on a seed planting implement is illustrated in accordance with aspects of the present subject matter. In general, the system 100 will be described herein with reference to the seed planting implement 10 and the row unit 20 described above with reference to FIGS. 1 and 2. However, it should be appreciated by those of ordinary skill in the art that the disclosed system 100 may generally be utilized with seed planting implements having any other suitable implement configuration and/or seed planting units having any other suitable unit configuration.
As shown in FIG. 3, the system 100 may include a fluid-driven actuator, such as the actuator 102 of the row unit 20 described above with reference to FIG. 2. As shown, the actuator 102 may correspond to a hydraulic actuator. Thus, in several embodiments, the actuator 102 may include a piston 108 housed within the cylinder 106. One end of the rod 104 may be coupled to the piston 108, while an opposed end of the rod 104 may extend outwardly from the cylinder 106. Additionally, the actuator 102 may include a cap-side chamber 110 and a rod-side chamber 112 defined within the cylinder 106. As is generally understood, by regulating the pressure of the fluid supplied to one or both of the cylinder chambers 110, 112, the actuation of the rod 104 may be controlled. However, it should be appreciated that, in alternative embodiments, the actuator 102 may be configured as any other suitable type of actuator, such as a pneumatic actuator. Furthermore, it should be appreciated that, in further embodiments, the system 100 may include any other suitable number of fluid-driven actuators, such as additional actuators 102 mounted on the implement 10.
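For readers who prefer a concrete data model, here is a minimal sketch of the double-acting actuator described above. The class, field names, and geometry are hypothetical illustrations; the disclosure itself specifies no such representation.

```python
from dataclasses import dataclass

@dataclass
class DoubleActingActuator:
    """Toy model of actuator 102: a piston (108) in a cylinder (106)
    dividing it into a cap-side chamber (110) and a rod-side chamber (112)."""
    bore_area: float       # piston face area on the cap side (m^2)
    rod_area: float        # cross-sectional area of the rod (m^2)
    stroke: float          # total piston travel (m)
    position: float = 0.0  # piston position; 0 = rod fully retracted (m)

    @property
    def cap_side_volume(self) -> float:
        # Extending the rod enlarges the cap-side chamber.
        return self.bore_area * self.position

    @property
    def rod_side_volume(self) -> float:
        # The rod occupies part of the rod-side cross-section.
        return (self.bore_area - self.rod_area) * (self.stroke - self.position)

actuator = DoubleActingActuator(bore_area=2.0e-3, rod_area=5.0e-4, stroke=0.3)
actuator.position = 0.1  # partially extended
print(actuator.cap_side_volume, actuator.rod_side_volume)
```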
Furthermore, the system 100 may include various components configured to provide fluid (e.g., hydraulic oil) to the cylinder chambers 110, 112 of the actuator 102. For example, in several embodiments, the system 100 may include a fluid reservoir 114 and first and second fluid conduits 116, 118. As shown, a first fluid conduit 116 may extend between and fluidly couple the reservoir 114 and the rod-side chamber 112 of the actuator 102. Similarly, a second fluid conduit 118 may extend between and fluidly couple the reservoir 114 and the cap-side chamber 110 of the actuator 102. Additionally, a pump 115 and a remote switch 117 or other valve(s) may be configured to control the flow of the fluid between the reservoir 114 and the cylinder chambers 110, 112 of the actuator 102. In one embodiment, the reservoir 114, the pump 115, and the remote switch 117 may be mounted on the work vehicle (not shown) configured to tow the implement 10. However, it should be appreciated that, in alternative embodiments, the reservoir 114, the pump 115, and/or the remote switch 117 may be mounted on the implement 10. Furthermore, it should be appreciated that the system 100 may include any other suitable component(s) configured to control the flow of fluid between the reservoir 114 and the actuator 102.
In several embodiments, the system 100 may also include a flow restrictor 120 that is fluidly coupled to the cap-side chamber 110. As such, the flow restrictor 120 may be provided in series with the second fluid conduit 118. As will be described below, the flow restrictor 120 may be configured to reduce the flow rate of the fluid exiting the cap-side chamber 110 in a manner that provides damping to one or more components of the implement 10. However, it should be appreciated that, in alternative embodiments, the flow restrictor 120 may be fluidly coupled to the rod-side chamber 112 such that the flow restrictor 120 is provided in series with the first fluid conduit 116.
Additionally, in several embodiments, the system 100 may include a check valve 122 that is fluidly coupled to the cap-side chamber 110 and provided in series with the second fluid conduit 118. As shown, the check valve 122 may be fluidly coupled to the flow restrictor 120 in parallel. In this regard, the check valve 122 may be provided in series with a first branch 124 of the second fluid conduit 118, while the flow restrictor 120 may be provided in series with a second branch 126 of the second fluid conduit 118. As such, the check valve 122 may be configured to allow the fluid to flow through the first branch 124 of the second fluid conduit 118 from the reservoir 114 to the cap-side chamber 110. However, the check valve 122 may be configured to occlude or prevent the fluid from flowing through the first branch 124 of the second fluid conduit 118 from the cap-side chamber 110 to the reservoir 114. In this regard, the check valve 122 directs all of the fluid exiting the cap-side chamber 110 into the flow restrictor 120. Conversely, the check valve 122 permits the fluid flowing to the cap-side chamber 110 to bypass the flow restrictor 120. As will be described below, such configuration facilitates damping of one or more components of the implement 10. However, it should be appreciated that, in alternative embodiments, the check valve 122 may be fluidly coupled to the rod-side chamber 112 in combination with the flow restrictor 120 such that the check valve 122 is provided in series with the first fluid conduit 116.
As indicated above, the system 100 may generally be configured to provide viscous damping to one or more components of the implement 10. For example, when a ground engaging tool of the implement 10, such as the shank 32, contacts a rock or other impediment in the soil, the corresponding row unit 20 may pivot relative to the corresponding tool frame 18 and/or the toolbar 12 against the down pressure load applied to the row unit 20 by the corresponding actuator 102. In several embodiments, such movement may cause the rod 104 of the actuator 102 to retract into the cylinder 106, thereby moving the piston 108 in a manner that decreases the volume of the cap-side chamber 110. In such instances, some of the fluid present within the cap-side chamber 110 may exit and flow into the second fluid conduit 118 toward the reservoir 114. The check valve 122 may prevent the fluid exiting the cap-side chamber 110 from flowing through the first branch 124 of the second fluid conduit 118. As such, all fluid exiting the cap-side chamber 110 may be directed into the second branch 126 and through the flow restrictor 120. As indicated above, the flow restrictor 120 reduces or limits the rate at which the fluid may flow through the second fluid conduit 118 so as to reduce the rate at which the fluid may exit the cap-side chamber 110. In this regard, the speed at which and/or the amount that the rod 104 retracts into the cylinder 106 when the shank 32 contacts a soil impediment may be reduced (e.g., because of the reduced rate at which the fluid is discharged from the cap-side chamber 110), thereby damping the movement of the row unit 20 relative to the corresponding tool frame 18 and/or the toolbar 12. Furthermore, after the initial retraction of the rod 104 into the cylinder 106, the piston 108 may then move in a manner that increases the volume of the cap-side chamber 110, thereby extending the rod 104 from the cylinder 106. In such instances, fluid present within the reservoir 114 and the second fluid conduit 118 may be drawn back into the cap-side chamber 110. As indicated above, the check valve 122 may permit the fluid within the second fluid conduit 118 to bypass the flow restrictor 120 and flow unobstructed through the first branch 124, thereby maximizing the rate at which the fluid returns to the cap-side chamber 110. Increasing the rate at which the fluid returns to the cap-side chamber 110 may decrease the time that the row unit 20 is displaced relative to the tool frame 18, thereby further damping the movement of the row unit 20 relative to the corresponding tool frame 18 and/or the toolbar 12.
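The direction-dependent behavior described above reduces to a simple routing rule: outflow sees the restrictor, return flow does not. The sketch below captures only that rule; the resistance values are arbitrary placeholders, not figures from this disclosure.

```python
def effective_resistance(flow_toward_chamber: bool,
                         restrictor_resistance: float,
                         bypass_resistance: float = 0.0) -> float:
    """Hydraulic resistance seen by fluid in the second conduit (118).

    The check valve (122) blocks its branch (124) against fluid leaving
    the cap-side chamber (110), forcing outflow through the restrictor
    branch (126); returning fluid opens the check valve and bypasses the
    restrictor almost unrestricted.
    """
    if flow_toward_chamber:
        return bypass_resistance      # return stroke: check valve open
    return restrictor_resistance      # outflow: forced through restrictor

# Outflow (row unit deflecting over an impediment) sees the restrictor...
assert effective_resistance(False, restrictor_resistance=5.0) == 5.0
# ...while the return stroke is effectively unrestricted.
assert effective_resistance(True, restrictor_resistance=5.0) == 0.0
```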
Referring now to FIG. 4, a cross-sectional view of one embodiment of the flow restrictor 120 is illustrated in accordance with aspects of the present subject matter. For example, in the illustrated embodiment, the flow restrictor 120 may include a restrictor body 128 coupled to the second branch 126 of the second fluid conduit 118, with the restrictor body 128, in turn, defining a fluid passage 130 extending therethrough. Furthermore, the flow restrictor 120 may include an orifice plate 132 extending inward from the restrictor body 128 into the fluid passage 130. As shown, the orifice plate 132 may define a central aperture or throat 134 extending therethrough. In general, the size (e.g., the area, diameter, etc.) of the throat 134 may be smaller than the size of the fluid passage 130 so as to reduce the flow rate of the fluid through the flow restrictor 120. It should be appreciated that, in the illustrated embodiment, the throat 134 has a fixed size such that the throat 134 provides a fixed or constant backpressure for a given fluid flow rate. In this regard, in such embodiment, a fixed or constant damping rate is provided by the system 100. However, it should be appreciated that, in alternative embodiments, the flow restrictor 120 may have any other suitable configuration that reduces the flow rate of the fluid flowing therethrough.
Referring now to FIG. 5, a cross-sectional view of another embodiment of the flow restrictor 120 is illustrated in accordance with aspects of the present subject matter. As shown, the flow restrictor 120 may generally be configured the same as or similar to that described above with reference to FIG. 4. For instance, the flow restrictor 120 may define the throat 134, which is configured to reduce the flow rate of the fluid through the flow restrictor 120. However, as shown in FIG. 5, unlike the above-described embodiment, the size (e.g., the area, diameter, etc.) of the throat 134 is adjustable. For example, in such embodiment, the flow restrictor 120 may be configured as an adjustable valve 136. As shown, the valve 136 may include a valve body 138 coupled to the second branch 126 of the second fluid conduit 118, a shaft 140 rotatably coupled to the valve body 138, a disc 142 coupled to the shaft 140, and an actuator 144 (e.g., a suitable electric motor) coupled to the shaft 140. As such, the actuator 144 may be configured to rotate the shaft 140 and the disc 142 relative to the valve body 138 (e.g., as indicated by arrow 146 in FIG. 5) to change the size of the throat 134 defined between the disc 142 and the valve body 138. Although the valve 136 is configured as a butterfly valve in FIG. 5, it should be appreciated that, in alternative embodiments, the valve 136 may be configured as any other suitable type of valve or adjustable flow restrictor. For example, in one embodiment, the valve 136 may be configured as a suitable ball valve.
In accordance with aspects of the present disclosure, by adjusting the size of the throat 134, the system 100 may be able to provide variable damping rates. In general, the size of the throat 134 may be indicative of the amount of damping provided by the system 100. For example, in several embodiments, the disc 142 may be adjustable between a first position shown in FIG. 6 and a second position shown in FIG. 7. More specifically, when the disc 142 is at the first position, the throat 134 defines a first size (e.g., as indicated by arrow 148 in FIG. 6), thereby providing a first damping rate. Conversely, when the disc 142 is at the second position, the throat 134 defines a second size (e.g., as indicated by arrow 150 in FIG. 7), thereby providing a second damping rate. As shown in FIGS. 6 and 7, the first distance 148 is larger than the second distance 150. In such instance, the system 100 provides less damping when the throat 134 is adjusted to the first size than when the throat 134 is adjusted to the second size. It should be appreciated that, in alternative embodiments, the disc 142 may be adjustable between any other suitable positions that provide any other suitable damping rates. For example, the disc 142 may be adjustable to a plurality of different positions defined between the fully opened and fully closed positions of the valve, thereby providing for a corresponding number of different damping rates. Furthermore, it should be appreciated that the disc 142 may be continuously adjustable or adjustable between various discrete positions.
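Assuming, consistent with the orifice sketch above, that damping scales roughly with the inverse square of throat area, the two disc positions can be compared numerically. The areas below are invented solely for illustration.

```python
from enum import Enum

class DiscPosition(Enum):
    FIRST = "first"    # larger throat (size 148): lower damping
    SECOND = "second"  # smaller throat (size 150): higher damping

# Hypothetical throat areas (m^2) for the two disc positions.
THROAT_AREA = {DiscPosition.FIRST: 6.0e-5, DiscPosition.SECOND: 1.5e-5}

def relative_damping(position: DiscPosition) -> float:
    """Damping rate relative to the first position (~ 1 / area^2)."""
    reference = THROAT_AREA[DiscPosition.FIRST]
    return (reference / THROAT_AREA[position]) ** 2

print(relative_damping(DiscPosition.SECOND))  # 16.0: smaller throat, more damping
```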
Referring back to FIG. 5, a controller 152 of the system 100 may be configured to electronically control the operation of one or more components of the valve 136, such as the actuator 144. In general, the controller 152 may comprise any suitable processor-based device known in the art, such as a computing device or any suitable combination of computing devices. Thus, in several embodiments, the controller 152 may include one or more processor(s) 154 and associated memory device(s) 156 configured to perform a variety of computer-implemented functions. As used herein, the term “processor” refers not only to integrated circuits referred to in the art as being included in a computer, but also refers to a controller, a microcontroller, a microcomputer, a programmable logic controller (PLC), an application specific integrated circuit, and other programmable circuits. Additionally, the memory device(s) 156 of the controller 152 may generally comprise memory element(s) including, but not limited to, a computer readable medium (e.g., random access memory (RAM)), a computer readable non-volatile medium (e.g., a flash memory), a floppy disk, a compact disc-read only memory (CD-ROM), a magneto-optical disk (MOD), a digital versatile disc (DVD) and/or other suitable memory elements. Such memory device(s) 156 may generally be configured to store suitable computer-readable instructions that, when implemented by the processor(s) 154, configure the controller 152 to perform various computer-implemented functions. In addition, the controller 152 may also include various other suitable components, such as a communications circuit or module, one or more input/output channels, a data/control bus and/or the like.
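As a sketch only, since the disclosure describes controller 152 generically in terms of processors and memory, one minimal way such a controller might be organized in code is shown below. All class and method names are hypothetical.

```python
class StubValveActuator:
    """Stand-in for actuator 144; a real implementation would drive a motor."""
    def move_to(self, position: float) -> None:
        print(f"rotating disc 142 to position {position:.2f}")

class ValveController:
    """Minimal stand-in for controller 152: holds the commanded state and
    transmits control signals (166) to the valve actuator (144)."""
    def __init__(self, actuator: StubValveActuator) -> None:
        self.actuator = actuator
        self.target_position = None  # last commanded disc position

    def command(self, position: float) -> None:
        self.target_position = position
        self.actuator.move_to(position)

controller = ValveController(StubValveActuator())
controller.command(0.25)  # request a partially closed disc
```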
It should be appreciated that the controller 152 may correspond to an existing controller of the implement 10 or associated work vehicle (not shown) or the controller 152 may correspond to a separate processing device. For instance, in one embodiment, the controller 152 may form all or part of a separate plug-in module that may be installed within the implement 10 or associated work vehicle to allow for the disclosed system and method to be implemented without requiring additional software to be uploaded onto existing control devices of the implement 10 or associated work vehicle.
Furthermore, in one embodiment, a user interface 158 of the system 100 may be communicatively coupled to the controller 152 via a wired or wireless connection to allow feedback signals (e.g., as indicated by dashed line 160 in FIG. 5) to be transmitted from the user interface 158 to the controller 152. More specifically, the user interface 158 may be configured to receive an input from an operator of the implement 10 or the associated work vehicle, such as an input associated with a desired damping characteristic(s) to be provided by the system 100. As such, the user interface 158 may include one or more input devices (not shown), such as touchscreens, keypads, touchpads, knobs, buttons, sliders, switches, mice, microphones, and/or the like. In addition, some embodiments of the user interface 158 may include one or more feedback devices (not shown), such as display screens, speakers, warning lights, and/or the like, which are configured to communicate such feedback from the controller 152 to the operator of the implement 10. However, in alternative embodiments, the user interface 158 may have any suitable configuration.
Moreover, in one embodiment, one or more sensors 162 of the system 100 may be communicatively coupled to the controller 152 via a wired or wireless connection to allow sensor data (e.g., as indicated by dashed line 164 in FIG. 5) to be transmitted from the sensor(s) 162 to the controller 152. For example, in one embodiment, the sensor(s) 162 may include a location sensor, such as a GNSS-based sensor, that is configured to detect a parameter associated with the location of the implement 10 or associated work vehicle within the field. In another embodiment, the sensor(s) 162 may include a speed sensor, such as a Hall Effect sensor, that is configured to detect a parameter associated with the speed at which the implement 10 is moved across the field. However, it should be appreciated that, in alternative embodiments, the sensor(s) 162 may include any suitable sensing device(s) configured to detect any suitable operating parameter of the implement 10 and/or the associated work vehicle.
In several embodiments, the controller 152 may be configured to control the operation of the valve 136 based on the feedback signals 160 received from the user interface 158 and/or the sensor data 164 received from the sensor(s) 162. Specifically, as shown in FIG. 5, the controller 152 may be communicatively coupled to the actuator 144 of the valve 136 via a wired or wireless connection to allow control signals (e.g., indicated by dashed lines 166 in FIG. 5) to be transmitted from the controller 152 to the actuator 144. Such control signals 166 may be configured to regulate the operation of the actuator 144 to adjust the position of the disc 142 relative to the valve body 138, such as by moving the disc 142 along the direction 146 between the first position (FIG. 6) and the second position (FIG. 7). For example, the feedback signals 160 received by the controller 152 may indicate that the operator desires to adjust the damping provided by the system 100. Furthermore, upon receipt of the sensor data 164 (e.g., data indicative of the location and/or speed of the implement 10), the controller 152 may be configured to determine that the damping rate of the system 100 should be adjusted. In either instance, the controller 152 may be configured to transmit the control signals 166 to the actuator 144, with such control signals 166 being configured to control the operation of the actuator 144 to adjust the position of the disc 142 to provide the desired damping rate. However, it should be appreciated that, in alternative embodiments, the controller 152 may be configured to control the operation of the valve 136 based on any other suitable input(s) and/or parameter(s).
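One plausible shape for that decision logic is sketched below, under assumed thresholds: the disclosure says only that the controller adjusts the disc 142 based on feedback signals 160 and sensor data 164, so the mapping from ground speed to throat size is an invention for illustration.

```python
from typing import Optional

def choose_throat_size(operator_request_m2: Optional[float],
                       ground_speed_kph: float,
                       fast_threshold_kph: float = 12.0,
                       small_area_m2: float = 1.5e-5,
                       large_area_m2: float = 6.0e-5) -> float:
    """Pick a throat size (m^2) from operator input and sensor data.

    Faster travel tends to increase bounce and chatter, so above the
    (assumed) speed threshold this sketch requests a smaller throat,
    i.e. more damping; an explicit operator request always wins.
    """
    if operator_request_m2 is not None:
        return operator_request_m2
    if ground_speed_kph > fast_threshold_kph:
        return small_area_m2
    return large_area_m2

print(choose_throat_size(None, 15.0))  # high speed -> smaller throat (1.5e-05)
```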
Referring now to FIG. 8, a schematic view of another embodiment of the system 100 is illustrated in accordance with aspects of the present subject matter. As shown, the system 100 may generally be configured the same as or similar to that described above with reference to FIG. 3. For instance, the system 100 may include the flow restrictor 120 and the check valve 122 fluidly coupled to the cap-side chamber 110 of the actuator 102 via the second fluid conduit 118. Furthermore, the flow restrictor 120 and the check valve 122 may be fluidly coupled together in parallel. However, as shown in FIG. 8, unlike the above-described embodiment, the check valve 122 may be configured as a pilot-operated or fluid-actuated three-way valve that is fluidly coupled to the first fluid conduit 116 by a pilot conduit 168.
In general, when the row unit 20 is lifted from an operational position relative to the ground to a raised position relative to the ground, it may be desirable for fluid to exit the cap-side chamber 110 without its flow rate being limited by the flow restrictor 120. For example, permitting such fluid to bypass the flow restrictor 120 may reduce the time required to lift the row unit 20 from the operational position to the raised position. More specifically, when lifting the row unit 20 from the operational position to the raised position, a pump (not shown) may pump fluid through the first fluid conduit 116 from the reservoir 114 to the rod-side chamber 112 of the actuator 102, thereby retracting the rod 104 into the cylinder 106. This may, in turn, discharge fluid from the cap-side chamber 110 into the second fluid conduit 118. As described above, the check valve 122 may generally be configured to direct all fluid exiting the cap-side chamber 110 into the flow restrictor 120. However, in the configuration of the system 100 shown in FIG. 8, when lifting the row unit 20 to the raised position, the pilot conduit 168 supplies fluid flowing through the first fluid conduit 116 to the check valve 122. The fluid received from the pilot conduit 168 may, in turn, actuate suitable component(s) of the check valve 122 (e.g., a diaphragm(s), a spring(s), and/or the like) in a manner that causes the check valve 122 to open, thereby permitting the fluid exiting the cap-side chamber 110 to bypass the flow restrictor 120 and flow unobstructed through the check valve 122 toward the reservoir 114. Conversely, when the row unit 20 is at the operational position, the check valve 122 may be closed, thereby directing all fluid exiting the cap-side chamber 110 into the flow restrictor 120.
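The pilot behavior amounts to a one-line rule: pressure in the lift line opens the bypass. A minimal sketch under that reading follows; the cracking pressure is an assumed figure, not one from the disclosure.

```python
def bypass_open(pilot_pressure_pa: float,
                cracking_pressure_pa: float = 2.0e5) -> bool:
    """Pilot-operated check valve (122): pumping fluid through the first
    conduit (116) to lift the row unit also pressurizes the pilot conduit
    (168), opening the valve so that cap-side outflow bypasses the flow
    restrictor (120)."""
    return pilot_pressure_pa >= cracking_pressure_pa

assert bypass_open(5.0e5)    # lifting: pilot line pressurized, bypass open
assert not bypass_open(0.0)  # operational position: bypass closed
```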
Referring now to FIG. 9, a schematic view of a further embodiment of the system 100 is illustrated in accordance with aspects of the present subject matter. As shown, the system 100 may generally be configured the same as or similar to that described above with reference to FIGS. 3 and 8. For instance, the system 100 may include the flow restrictor 120 and the check valve 122 fluidly coupled to the cap-side chamber 110 of the actuator 102 via the second fluid conduit 118. Furthermore, the flow restrictor 120 and the check valve 122 may be fluidly coupled together in parallel. However, as shown in FIG. 9, unlike the above-described embodiments, the check valve 122 may be configured as an electrically actuated valve. Specifically, as shown, the controller 152 may be communicatively coupled to the check valve 122 via a wired or wireless connection to allow control signals (e.g., indicated by dashed lines 170 in FIG. 9) to be transmitted from the controller 152 to the check valve 122. In this regard, when the row unit 20 is lifted from the operational position to the raised position, the control signals 170 may be configured to instruct the check valve 122 to open in a manner that permits the fluid exiting the cap-side chamber 110 to bypass the flow restrictor 120 and flow unobstructed through the check valve 122 toward the reservoir 114. Conversely, when the row unit 20 is at the operational position, the control signals 170 may be configured to instruct the check valve 122 to close, thereby directing all fluid exiting the cap-side chamber 110 into the flow restrictor 120.
This written description uses examples to disclose the technology, including the best mode, and also to enable any person skilled in the art to practice the technology, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the technology is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they include structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
## CLAIMS
1. A system for controlling an operation of an actuator mounted on a seed planting implement, the system comprising: a toolbar; a row unit adjustably mounted on the toolbar; a fluid-driven actuator configured to adjust a position of the row unit relative to the toolbar, the fluid-driven actuator defining first and second fluid chambers; a flow restrictor fluidly coupled to the first fluid chamber, the flow restrictor being configured to reduce a rate at which fluid is permitted to exit the first fluid chamber in a manner that provides damping to the row unit; and a valve fluidly coupled to the first fluid chamber, the valve further being fluidly coupled to the flow restrictor in a parallel relationship such that the valve is configured to permit the fluid exiting the first fluid chamber to flow through the flow restrictor and the fluid entering the first fluid chamber to bypass the flow restrictor.
2. The system of claim 1, wherein, when fluid is supplied to the second fluid chamber, the valve is configured to permit fluid exiting the first fluid chamber to bypass the flow restrictor.
3. The system of claim 1, wherein the valve is fluidly actuated.
4. The system of claim 3, further comprising: a fluid line configured to supply the fluid to the second fluid chamber, the fluid line being fluidly coupled to the valve such that, when the fluid flows through the fluid line to the second fluid chamber, the valve opens in a manner that permits the fluid exiting the first fluid chamber to bypass the flow restrictor.
5. The system of claim 1, wherein the valve is electrically actuated.
6. The system of claim 1, wherein the flow restrictor defines a throat having a fixed size.
7. The system of claim 1, wherein the flow restrictor defines a throat having an adjustable size.
8. A seed planting implement, comprising: a toolbar; a plurality of row units adjustably coupled to the toolbar, each row unit including a ground engaging tool configured to form a furrow in the soil; a plurality of fluid-driven actuators, each fluid-driven actuator being coupled between the toolbar and a corresponding row unit of the plurality of row units, each fluid-driven actuator being configured to adjust a position of the corresponding row unit relative to the toolbar, each fluid-driven actuator defining first and second fluid chambers; a flow restrictor fluidly coupled to the first fluid chamber of a first fluid-driven actuator of the plurality of fluid-driven actuators, the flow restrictor being configured to reduce a rate at which fluid is permitted to exit the first fluid chamber of the first fluid-driven actuator in a manner that provides damping to the corresponding row unit; and a valve fluidly coupled to the first fluid chamber of the first fluid-driven actuator, the valve further being fluidly coupled to the flow restrictor in a parallel relationship such that the valve is configured to permit the fluid exiting the first fluid chamber to flow through the flow restrictor and the fluid entering the first fluid chamber to bypass the flow restrictor.
9. The seed planting implement of claim 8, wherein, when fluid is supplied to the second fluid chamber of the first fluid-driven actuator, the valve is configured to permit fluid exiting the first fluid chamber of the first fluid-driven actuator to bypass the flow restrictor.
10. The seed planting implement of claim 8, wherein the valve is fluidly actuated.
11. The seed planting implement of claim 10, further comprising: a fluid line configured to supply fluid to the second fluid chamber of the first fluid-driven actuator, the fluid line being fluidly coupled to the valve such that, when fluid flows through the fluid line to the second fluid chamber of the first fluid-driven actuator, the valve opens in a manner that permits the fluid exiting the first fluid chamber of the first fluid-driven actuator to bypass the flow restrictor.
12. The seed planting implement of claim 8, wherein the valve is electrically actuated.
13. The seed planting implement of claim 8, wherein the flow restrictor defines a throat having a fixed size.
14. The seed planting implement of claim 8, wherein the flow restrictor defines a throat having an adjustable size.
15. A system for providing damping to a row unit of a seed planting implement, the system comprising: a toolbar; a row unit adjustably mounted on the toolbar; a fluid-driven actuator configured to adjust a position of the row unit relative to the toolbar, the fluid-driven actuator defining a fluid chamber; and a flow restrictor fluidly coupled to the fluid chamber, the flow restrictor defining an adjustable throat configured to reduce a rate at which fluid is permitted to exit the fluid chamber, the throat being adjustable between a first size configured to provide a first damping rate to the row unit and a second size configured to provide a second damping rate to the row unit, the first and second damping rates being different.
16. The system of claim 15, wherein the throat is adjustable between the first and second damping rates based on an operator input.
17. The system of claim 15, wherein the throat is adjustable between the first and second damping rates based on data received from one or more sensors on the seed planting implement.
18. The system of claim 15, further comprising: a valve fluidly coupled to the fluid chamber, the valve being configured to selectively occlude the flow of fluid such that fluid exiting the fluid chamber flows through the flow restrictor and fluid entering the fluid chamber bypasses the flow restrictor.
19. The system of claim 18, wherein the flow restrictor and the valve are fluidly coupled in a parallel relationship.


@@ -0,0 +1,105 @@
item-0 at level 0: unspecified: group _root_
item-1 at level 1: title: Assay reagent
item-2 at level 2: section_header: ABSTRACT
item-3 at level 3: paragraph: A cell-derived assay reagent prepared from cells which have been killed by treatment with an antibiotic selected from the bleomycin-phleomycin family of antibiotics but which retain a signal-generating metabolic activity such as bioluminescence.
item-4 at level 2: paragraph: This application is a continuation of PCT/GB99/01730, filed Jun. 1, 1999 designating the United States (the disclosure of which is incorporated herein by reference) and claiming priority from British application serial no. 9811845.8, filed Jun. 2, 1998.
item-5 at level 2: paragraph: The invention relates to a cell-derived assay reagent, in particular to an assay reagent prepared from cells which have been killed but which retain a signal-generating metabolic activity such as bioluminescence and also to assay methods using the cell-derived reagent such as, for example, toxicity testing methods.
item-6 at level 2: paragraph: The use of bacteria with a signal-generating metabolic activity as indicators of toxicity is well established. UK patent number GB 2005018 describes a method of assaying a liquid sample for toxic substances which involves contacting a suspension of bioluminescent microorganisms with a sample suspected of containing a toxic substance and observing the change in the light output of the bioluminescent organisms as a result of contact with the suspected toxic substance. Furthermore, a toxicity monitoring system embodying the same assay principle, which is manufactured and sold under the Trade Mark Microtox®, is in routine use in both environmental laboratories and for a variety of industrial applications. An improved toxicity assay method using bioluminescent bacteria, which can be used in a wider range of test conditions than the method of GB 2005018, is described in International patent application number WO 95/10767.
item-7 at level 2: paragraph: The assay methods known in the prior art may utilize naturally occurring bioluminescent organisms, including Photobacterium phosphoreum and Vibrio fischeri. However, recent interest has focused on the use of genetically modified microorganisms which have been engineered to express bioluminescence. These genetically modified bioluminescent microorganisms usually express lux genes, encoding the enzyme luciferase, which have been cloned from a naturally occurring bioluminescent microorganism (E. A. Meighen (1994) Genetics of Bacterial Bioluminescence. Ann. Rev. Genet. 28: 117-139; Stewart, G. S. A. B., Jassim, S. A. A. and Denyer, S. P. (1993), Engineering Microbial bioluminescence and biosensor applications. In Molecular Diagnosis. Eds R. Rapley and M. R. Walker Blackwell Scientific Pubs/Oxford). A process for producing genetically modified bioluminescent microorganisms expressing lux genes cloned from Vibrio harveyi is described in U.S. Pat. No. 4,581,335.
item-8 at level 2: paragraph: The use of genetically modified bioluminescent microorganisms in toxicity testing applications has several advantages over the use of naturally occurring microorganisms. For example, it is possible to engineer microorganisms with different sensitivities to a range of different toxic substances or to a single toxic substance. However, genetically modified microorganisms are subject to marketing restrictions as a result of government legislation and there is major concern relating to the deliberate release of genetically modified microorganisms into the environment as components of commercial products. This is particularly relevant with regard to toxicity testing which is often performed in the field rather than within the laboratory. The potential risk from release of potentially pathogenic genetically modified microorganisms into the environment where they may continue to grow in an uncontrollable manner has led to the introduction of legal restrictions on the use of genetically modified organisms in the field in many countries.
item-9 at level 2: paragraph: It has been suggested, to avoid the problems discussed above, to use genetically modified bioluminescent microorganisms which have been treated so that they retain the metabolic function of bioluminescence but can no longer reproduce. The use of radiation (gamma-radiation, X-rays or an electron beam) to kill bioluminescent cells whilst retaining the metabolic function of bioluminescence is demonstrated in International patent application number WO 95/07346. It is an object of the present invention to provide an alternative method of killing bioluminescent cells whilst retaining the metabolic function of bioluminescence which does not require the use of radiation and, as such, can be easily carried out without the need for specialized radiation equipment and containment facilities and without the risk to laboratory personnel associated with the use of radiation.
item-10 at level 2: paragraph: Accordingly, in a first aspect the invention provides a method of making a non-viable preparation of prokaryotic or eukaryotic cells, which preparation has a signal-generating metabolic activity, which method comprises contacting a viable culture of cells with signal-generating metabolic activity with a member of the bleomycin/phleomycin family of antibiotics.
item-11 at level 2: paragraph: Bleomycin and phleomycin are closely related glycopeptide antibiotics that are isolated in the form of copper chelates from cultures of Streptomyces verticillus. They represent a group of proteins with molecular weights ranging from 1000 to 1000 kda that are potent antibiotics and anti-tumour agents. So far more than 200 members of the bleomycin/phleomycin family have been isolated and characterised as complex basic glycopeptides. Family members resemble each other with respect to their physicochemical properties and their structure, indicating that functionally they all behave in the same manner. Furthermore, the chemical structure of the active moiety is conserved between family members and consists of 5 amino acids, L-glucose, 3-O-carbamoyl-D-mannose and a terminal cation. The various different bleomycin/phleomycin family members differ from each other in the nature of the terminal cation moiety, which is usually an amine. A preferred bleomycin/phleomycin antibiotic for use in the method of the invention is phleomycin D1, sold under the trade name Zeocin™.
item-12 at level 2: paragraph: Bleomycin and phleomycin are strong, selective inhibitors of DNA synthesis in intact bacteria and in mammalian cells. Bleomycin can be observed to attack purified DNA in vitro when incubated under appropriate conditions and analysis of the bleomycin damaged DNA shows that both single-stranded and double-stranded cleavages occur, the latter being the result of staggered single strand breaks formed approximately two base pairs apart in the complementary strands.
item-13 at level 2: paragraph: In in vivo systems, after being taken up by the cell, bleomycin enters the cell nucleus, binds to DNA (by virtue of the interaction between its positively charged terminal amine moiety and a negatively charged phosphate group of the DNA backbone) and causes strand scission. Bleomycin causes strand scission of DNA in viruses, bacteria and eukaryotic cell systems.
item-14 at level 2: paragraph: The present inventors have surprisingly found that treatment of a culture of cells with signal-generating metabolic activity with a bleomycin/phleomycin antibiotic renders the culture non-viable whilst retaining a level of signal-generating metabolic activity suitable for use in toxicity testing applications. In the context of this application the term non-viable is taken to mean that the cells are unable to reproduce. The process of rendering cells non-viable whilst retaining signal-generating metabolic activity may hereinafter be referred to as inactivation and cells which have been rendered non-viable according to the method of the invention may be referred to as inactivated.
item-15 at level 2: paragraph: Because of the broad spectrum of action of the bleomycin/phleomycin family of antibiotics the method of the invention is equally applicable to bacterial cells and to eukaryotic cells with signal generating metabolic activity. Preferably the signal-generating metabolic activity is bioluminescence but other signal-generating metabolic activities which are reporters of toxic damage could be used with equivalent effect.
item-16 at level 2: paragraph: The method of the invention is preferred for use with bacteria or eukaryotic cells that have been genetically modified to express a signal-generating metabolic activity. The examples given below relate to E. coli which have been engineered to express bioluminescence by transformation with a plasmid carrying lux genes. The eukaryotic equivalent would be cells transfected with a vector containing nucleic acid encoding a eukaryotic luciferase enzyme (abbreviated luc) such as, for example, luciferase from the firefly Photinus pyralis. A suitable plasmid vector containing cDNA encoding firefly luciferase under the control of an SV40 viral promoter is available from Promega Corporation, Madison, Wis., USA. However, in connection with the present invention it is advantageous to use recombinant cells containing the entire bacterial lux operon so as to avoid the need to add an exogenous substrate (e.g. luciferin) in order to generate light output.
item-17 at level 2: paragraph: The optimum concentration of bleomycin/phleomycin antibiotic and contact time required to render a culture of cells non-viable whilst retaining a useful level of signal-generating metabolic activity may vary according to the cell type but can be readily determined by routine experiment. In general, the lower the concentration of antibiotic used the longer the contact time required for cell inactivation. In connection with the production of assay reagents for use in toxicity testing applications, it is generally advantageous to keep the concentration of antibiotic low (e.g. around 1-1.5 mg/ml) and increase the contact time for inactivation. As will be shown in Example 1, treatment with Zeocin™ at a concentration of 1.5 mg/ml for 3 to 5 hours is sufficient to completely inactivate a culture of recombinant E. coli.
item-18 at level 2: paragraph: In the case of bacteria, the contact time required to inactivate a culture of bacterial cells is found to vary according to the stage of growth of the culture at the time the antibiotic is administered. Although the method of the invention can be used on bacteria at all stages of growth, it is generally preferable to perform the method on bacterial cells in an exponential growth phase, since the required antibiotic contact time has been observed to be shortest at that stage.
item-19 at level 2: paragraph: Following treatment with bleomycin/phleomycin antibiotic the non-viable preparation of cells is preferably stabilised for ease of storage or shipment. The cells can be stabilised using known techniques such as, for example, freeze drying (lyophilization) or other cell preservation techniques known in the art. Stabilization by freeze drying has the added advantage that the freeze drying procedure itself can render cells non-viable. Thus, any cells in the preparation which remain viable after treatment of the culture with bleomycin/phleomycin antibiotic will be rendered non-viable by freeze drying. It is thought that freeze drying inactivates any remaining viable cells by enhancing the effect of antibiotic, such that sub-lethally injured cells in the culture are more sensitive to the stresses applied during freeze drying.
item-20 at level 2: paragraph: Prior to use the stabilised cell preparation is reconstituted using a reconstitution buffer to form an assay reagent. This reconstituted assay reagent may then be used directly in assays for analytes, for example in toxicity testing applications. It is preferable that the stabilised (i.e. freeze dried) assay reagent be reconstituted immediately prior to use, but after reconstitution it is generally necessary to allow sufficient time prior to use for the reconstituted reagent to reach a stable, high level of signal-generating activity. Suitable reconstitution buffers preferably contain an osmotically potent non-salt compound such as sucrose, dextran or polyethylene glycol, although salt based stabilisers may also be used.
item-21 at level 2: paragraph: Whilst the assay reagent of the invention is particularly suitable for use in toxicity testing applications it is to be understood that the invention is not limited to assay reagents for use in toxicity testing. The cell inactivation method of the invention can be used to inactivate any recombinant cells (prokaryotic or eukaryotic) with a signal generating metabolic activity that is not dependent upon cell viability.
item-22 at level 2: paragraph: In a further aspect the invention provides a method of assaying a potentially toxic analyte comprising the steps of:
item-23 at level 2: paragraph: (a) contacting a sample to be assayed for the analyte with a sample of assay reagent comprising a non-viable preparation of cells with a signal-generating metabolic activity;
item-24 at level 2: paragraph: (b) measuring the level of signal generated; and
item-25 at level 2: paragraph: (c) using the measurement obtained as an indicator of the toxicity of the analyte.
item-26 at level 2: paragraph: In a still further aspect, the invention provides a kit for performing the above-stated assay comprising an assay reagent with signal generating metabolic activity and means for contacting the assay reagent with a sample to be assayed for an analyte.
item-27 at level 2: paragraph: The analytes tested using the assay of the invention are usually toxic substances, but it is to be understood that the precise nature of the analyte to be tested is not material to the invention.
item-28 at level 2: paragraph: Toxicity is a general term used to describe an adverse effect on a biological system, and the term toxic substances includes both toxicants (synthetic chemicals that are toxic) and toxins (natural poisons). Toxicity is usually expressed as an effective concentration (EC) or inhibitory concentration (IC) value. The EC/IC value is usually denoted as a percentage response (e.g. EC₅₀, EC₁₀), which indicates the concentration (dose) of a particular substance which affects the designated criterion for assessing toxicity (i.e. a behavioural trait or death) in the indicated proportion of the population tested. For example, an EC₅₀ of 10 ppm indicates that 50% of the population will be affected by a concentration of 10 ppm. In the case of a toxicity assay based on the use of a bioluminescent assay reagent, the EC₅₀ value is usually the concentration of sample substance causing a 50% change in light output.
item-29 at level 2: paragraph: The present invention will be further understood by way of the following Examples with reference to the accompanying Figures in which:
item-30 at level 2: paragraph: FIG. 1 is a graph to show the effect of Zeocin™ treatment on viable count and light output of recombinant bioluminescent E. coli cells.
item-31 at level 2: paragraph: FIG. 2 is a graph to show the light output from five separate vials of reconstituted assay reagent. The assay reagent was prepared from recombinant bioluminescent E. coli exposed to 1.5 mg/ml Zeocin™ for 300 minutes. Five vials were used to reduce discrepancies resulting from vial-to-vial variation.
item-32 at level 2: paragraph: FIGS. 3 to 8 are graphs to show the effect of Zeocin™ treatment on the sensitivity of bioluminescent assay reagent to toxicant (ZnSO₄):
item-33 at level 2: paragraph: FIG. 3: Control cells, lag phase.
item-34 at level 2: paragraph: FIG. 4: Zeocin™ treated cells, lag phase.
item-35 at level 2: paragraph: FIG. 5: Control cells, mid-exponential growth.
item-36 at level 2: paragraph: FIG. 6: Zeocin™ treated cells, mid-exponential growth.
item-37 at level 2: paragraph: FIG. 7: Control cells, stationary phase.
item-38 at level 2: paragraph: FIG. 8: Zeocin™ treated cells, stationary phase.
item-39 at level 2: section_header: EXAMPLE 1
item-40 at level 2: section_header: (A) Inactivation of Bioluminescent E. coli: Method
item-41 at level 3: paragraph: 1. Bioluminescent genetically modified E. coli strain HB101 (E. coli HB101 made bioluminescent by transformation with a plasmid carrying the lux operon of Vibrio fischeri, constructed by the method of Shaw and Kado as described in Biotechnology 4: 560-564) was grown from a frozen stock in 5 ml of low salt medium (LB (5 g/l NaCl)+glycerol+MgSO₄) for 24 hours.
item-42 at level 3: paragraph: 2. 1 ml of the 5 ml culture was then used to inoculate 200 ml of low salt medium in a shaker flask and the resultant culture grown to an OD₆₃₀ of 0.407 (exponential growth phase).
item-43 at level 3: paragraph: 3. 50 ml of this culture was removed to a fresh sterile shaker flask (control cells).
item-44 at level 3: paragraph: 4. Zeocin™ was added to the 150 ml of culture in the original shaker flask, to a final concentration of 1.5 mg/ml. At the same time, an equivalent volume of water was added to the 50 ml culture removed from the original flask (control cells).
item-45 at level 3: paragraph: 5. The time course of cell inactivation was monitored by removing samples from the culture at 5, 60, 120, 180, 240 and 300 minutes after the addition of Zeocin™ and taking measurements of both light output (measured using a Deltatox luminometer) and viable count (per ml, determined using the method given in Example 3 below) for each of the samples. Samples of the control cells were removed at 5 and 300 minutes after the addition of water and measurements of light output and viable count taken as for the Zeocin™ treated cells.
item-46 at level 3: paragraph: FIG. 1 shows the effect of Zeocin™ treatment on the light output and viable count (per ml) of recombinant bioluminescent E. coli. Zeocin™ was added to a final concentration of 1.5 mg/ml at time zero. The number of viable cells in the culture was observed to decrease with increasing contact time with Zeocin™, the culture being completely inactivated after 3 hours. The light output from the culture was observed to decrease gradually with increasing Zeocin™ contact time.
item-47 at level 2: section_header: (B) Production of Assay Reagent
item-48 at level 3: paragraph: Five hours after the addition of Zeocin™ or water, the remaining bacterial cells in the Zeocin™ treated and control cultures were harvested by centrifugation, washed (to remove traces of Zeocin™ from the Zeocin™ treated culture), re-centrifuged and resuspended in cryoprotectant to an OD₆₃₀ of 0.25. 200 μl aliquots of the cells in cryoprotectant were dispensed into single-shot vials and freeze dried. Freeze dried samples of the Zeocin™ treated cells and control cells were reconstituted in 0.2M sucrose to form assay reagents and the light output of the assay reagents measured at various times after reconstitution.
item-49 at level 3: paragraph: The light output from assay reagent prepared from cells exposed to 1.5 mg/ml Zeocin™ for 5 hours was not significantly different to the light output from assay reagent prepared from control (Zeocin™ untreated) cells, indicating that Zeocin™ treatment does not affect the light output of the reconstituted freeze dried assay reagent. Both Zeocin™ treated and Zeocin™ untreated assay reagents produced stable light output 15 minutes after reconstitution.
item-50 at level 3: paragraph: FIG. 2 shows the light output from five separate vials of reconstituted Zeocin™ treated assay reagent inactivated according to the method of Example 1(A) and processed into assay reagent as described in Example 1(B). Reconstitution solution was added at time zero and thereafter light output was observed to increase steadily before stabilising at around 15 minutes after reconstitution. All five vials were observed to give similar light profiles after reconstitution.
item-51 at level 2: section_header: EXAMPLE 2
item-52 at level 2: section_header: Sensitivity of Zeocin™ Treated Assay Reagent to Toxicant Method
item-53 at level 3: paragraph: 1. Bioluminescent genetically modified E. coli strain HB101 (E. coli HB101 made bioluminescent by transformation with a plasmid carrying the lux operon of Vibrio fischeri, constructed by the method of Shaw and Kado as described in Biotechnology 4: 560-564) was grown in a fermenter as a batch culture in low salt medium (LB (5 g/l NaCl)+glycerol+MgSO₄).
item-54 at level 3: paragraph: 2. Two aliquots of the culture were removed from the fermenter into separate sterile shaker flasks at each of three different stages of growth i.e. at OD₆₃₀ values of 0.038 (lag phase growth), 1.31 (mid-exponential phase growth) and 2.468 (stationary phase growth).
item-55 at level 3: paragraph: 3. One aliquot of culture for each of the three growth stages was inactivated by contact with Zeocin™ (1 mg Zeocin™ added per 2.5×10⁶ cells, i.e. the concentration of Zeocin™ per cell is kept constant) for 300 minutes and then processed into assay reagent by freeze drying and reconstitution, as described in part (B) of Example 1.
item-56 at level 3: paragraph: 4. An equal volume of water was added to the second aliquot of culture for each of the three growth stages and the cultures processed into assay reagent as described above.
item-57 at level 3: paragraph: 5. Samples of each of the three Zeocin™ treated and three control assay reagents were then evaluated for sensitivity to toxicant (ZnSO₄) according to the following assay protocol:
item-58 at level 3: paragraph: ZnSO₄ Sensitivity Assay
item-59 at level 3: paragraph: 1. ZnSO₄ solutions were prepared in pure water at 30, 10, 3, 1, 0.3 and 0.1 ppm. Pure water was also used as a control.
item-60 at level 3: paragraph: 2. Seven vials of each of the three Zeocin™ treated and each of the three control assay reagents (i.e. one for each of the six ZnSO₄ solutions and one for the pure water control) were reconstituted using 0.5 ml of reconstitution solution (e.g. 0.2M sucrose) and then left to stand at room temperature for 15 minutes to allow the light output to stabilize. Baseline (time zero) readings of light output were then measured for each of the reconstituted reagents.
item-61 at level 3: paragraph: 3. 0.5 ml aliquots of each of the six ZnSO₄ solutions and the pure water control were added to separate vials of reconstituted assay reagent. This was repeated for each of the different Zeocin™ treated and control assay reagents.
item-62 at level 3: paragraph: 4. The vials were incubated at room temperature and light output readings were taken 5, 10, 15, 20, 25 and 30 minutes after addition of ZnSO₄ solution.
item-63 at level 3: paragraph: 5. The % toxic effect for each sample was calculated as follows:
item-64 at level 3: paragraph: where: C₀ = light in control at time zero
item-65 at level 3: paragraph: Cₜ = light in control at reading time
item-66 at level 3: paragraph: S₀ = light in sample at time zero
item-67 at level 3: paragraph: Sₜ = light in sample at reading time
item-68 at level 3: paragraph: The results of toxicity assays for sensitivity to ZnSO₄ for all the Zeocin™ treated and control assay reagents are shown in FIGS. 3 to 8:
item-69 at level 3: paragraph: FIG. 3: Control cells, lag phase.
item-70 at level 3: paragraph: FIG. 4: Zeocin™ treated cells, lag phase.
item-71 at level 3: paragraph: FIG. 5: Control cells, mid-exponential growth.
item-72 at level 3: paragraph: FIG. 6: Zeocin™ treated cells, mid-exponential growth.
item-73 at level 3: paragraph: FIG. 7: Control cells, stationary phase.
item-74 at level 3: paragraph: FIG. 8: Zeocin™ treated cells, stationary phase.
item-75 at level 3: table with [6x3]
item-76 at level 3: paragraph: In each case, separate graphs of % toxic effect against log₁₀ concentration of ZnSO₄ were plotted on the same axes for each value of time (minutes) after addition of ZnSO₄ solution. The sensitivities of the various reagents, expressed as EC₅₀ values for 15 minutes' exposure to ZnSO₄, are summarised in Table 1.
item-77 at level 3: paragraph: Table 1: Sensitivity of the different assay reagents to ZnSO₄, expressed as EC₅₀ values for 15 minutes' exposure to ZnSO₄.
item-78 at level 3: paragraph: The results of the toxicity assays indicate that Zeocin™ treatment does not significantly affect the sensitivity of a recombinant bioluminescent E. coli derived assay reagent to ZnSO₄. Similar results could be expected with other toxic substances which have an effect on signal-generating metabolic activities.
item-79 at level 2: section_header: EXAMPLE 3
item-80 at level 2: section_header: Method to Determine Viable Count
item-81 at level 3: paragraph: 1. Samples of bacterial culture to be assayed for viable count were centrifuged at 10,000 rpm for 5 minutes to pellet the bacterial cells.
item-82 at level 3: paragraph: 2. Bacterial cells were washed by resuspending in 1 ml of M9 medium, re-centrifuged at 10,000 rpm for 5 minutes and finally re-suspended in 1 ml of M9 medium.
item-83 at level 3: paragraph: 3. Serial dilutions of the bacterial cell suspension from 10⁻¹ to 10⁻⁷ were prepared in M9 medium.
item-84 at level 3: paragraph: 4. Three separate 10 μl aliquots of each of the serial dilutions were plated out on standard agar plates and the plates incubated at 37° C.
item-85 at level 3: paragraph: 5. The number of bacterial colonies present for each of the three aliquots at each of the serial dilutions was counted and the values averaged. Viable count was calculated per ml of bacterial culture.
item-86 at level 2: section_header: CLAIMS
item-87 at level 3: paragraph: 1. A method of making a non-viable preparation of prokaryotic or eukaryotic cells, which preparation has a signal-generating metabolic activity, which method comprises contacting a viable culture of said cells having signal-generating metabolic activity with an antibiotic selected from the bleomycin/phleomycin family of antibiotics.
item-88 at level 3: paragraph: 2. The method as claimed in claim 1 wherein following contact with antibiotic, said cells are subjected to a stabilization step.
item-89 at level 3: paragraph: 3. The method as claimed in claim 2 wherein said stabilization step comprises freeze drying.
item-90 at level 3: paragraph: 4. The method as claimed in claim 1 wherein said antibiotic is phleomycin D1.
item-91 at level 3: paragraph: 5. The method as claimed in claim 1 wherein said signal-generating metabolic activity is bioluminescence.
item-92 at level 3: paragraph: 6. The method as claimed in claim 5 wherein said cells are bacteria.
item-93 at level 3: paragraph: 7. The method as claimed in claim 6 wherein said bacteria are in an exponential growth phase when contacted with said antibiotic.
item-94 at level 3: paragraph: 8. The method as claimed in claim 6 wherein said bacteria are genetically modified.
item-95 at level 3: paragraph: 9. The method as claimed in claim 8 wherein said genetically modified bacteria contain nucleic acid encoding luciferase.
item-96 at level 3: paragraph: 10. The method as claimed in claim 9 wherein said bacteria are E. coli.
item-97 at level 3: paragraph: 11. The method as claimed in claim 5 wherein said cells are eukaryotic cells.
item-98 at level 3: paragraph: 12. The method as claimed in claim 11 wherein said eukaryotic cells are genetically modified.
item-99 at level 3: paragraph: 13. The method as claimed in claim 12 wherein said genetically modified eukaryotic cells contain nucleic acid encoding luciferase.
item-100 at level 3: paragraph: 14. A method of making a non-viable preparation of prokaryotic cells, which preparation has a signal-generating metabolic activity, which method comprises contacting a viable culture of a genetically modified E. coli strain made bioluminescent by transformation with a plasmid carrying the lux operon of Vibrio fischeri with an antibiotic selected from the bleomycin/phleomycin family of antibiotics.
item-101 at level 3: paragraph: 15. The method as claimed in claim 14 wherein said cells are contacted with phleomycin D1 at a concentration of at least about 1.5 mg/ml.
item-102 at level 3: paragraph: 16. The method as claimed in claim 15 wherein said contact is maintained for at least about 3 hours.
item-103 at level 3: paragraph: 17. The method as claimed in claim 16 wherein said antibiotic-treated cells are harvested, washed and freeze-dried.
item-104 at level 1: section_header: Drawings

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,213 @@
# Assay reagent
## ABSTRACT
A cell-derived assay reagent prepared from cells which have been killed by treatment with an antibiotic selected from the bleomycin-phleomycin family of antibiotics but which retain a signal-generating metabolic activity such as bioluminescence.
This application is a continuation of PCT/GB99/01730, filed Jun. 1, 1999 designating the United States (the disclosure of which is incorporated herein by reference) and claiming priority from British application serial no. 9811845.8, filed Jun. 2, 1998.
The invention relates to a cell-derived assay reagent, in particular to an assay reagent prepared from cells which have been killed but which retain a signal-generating metabolic activity such as bioluminescence and also to assay methods using the cell-derived reagent such as, for example, toxicity testing methods.
The use of bacteria with a signal-generating metabolic activity as indicators of toxicity is well established. UK patent number GB 2005018 describes a method of assaying a liquid sample for toxic substances which involves contacting a suspension of bioluminescent microorganisms with a sample suspected of containing a toxic substance and observing the change in the light output of the bioluminescent organisms as a result of contact with the suspected toxic substance. Furthermore, a toxicity monitoring system embodying the same assay principle, which is manufactured and sold under the Trade Mark Microtox®, is in routine use in both environmental laboratories and for a variety of industrial applications. An improved toxicity assay method using bioluminescent bacteria, which can be used in a wider range of test conditions than the method of GB 2005018, is described in International patent application number WO 95/10767.
The assay methods known in the prior art may utilize naturally occurring bioluminescent organisms, including Photobacterium phosphoreum and Vibrio fischeri. However, recent interest has focused on the use of genetically modified microorganisms which have been engineered to express bioluminescence. These genetically modified bioluminescent microorganisms usually express lux genes, encoding the enzyme luciferase, which have been cloned from a naturally occurring bioluminescent microorganism (E. A. Meighen (1994), Genetics of bacterial bioluminescence, Ann. Rev. Genet. 28: 117-139; Stewart, G. S. A. B., Jassim, S. A. A. and Denyer, S. P. (1993), Engineering microbial bioluminescence and biosensor applications, in Molecular Diagnosis, eds R. Rapley and M. R. Walker, Blackwell Scientific Publications, Oxford). A process for producing genetically modified bioluminescent microorganisms expressing lux genes cloned from Vibrio harveyi is described in U.S. Pat. No. 4,581,335.
The use of genetically modified bioluminescent microorganisms in toxicity testing applications has several advantages over the use of naturally occurring microorganisms. For example, it is possible to engineer microorganisms with different sensitivities to a range of different toxic substances or to a single toxic substance. However, genetically modified microorganisms are subject to marketing restrictions as a result of government legislation and there is major concern relating to the deliberate release of genetically modified microorganisms into the environment as components of commercial products. This is particularly relevant with regard to toxicity testing which is often performed in the field rather than within the laboratory. The potential risk from release of potentially pathogenic genetically modified microorganisms into the environment where they may continue to grow in an uncontrollable manner has led to the introduction of legal restrictions on the use of genetically modified organisms in the field in many countries.
It has been suggested, to avoid the problems discussed above, to use genetically modified bioluminescent microorganisms which have been treated so that they retain the metabolic function of bioluminescence but can no longer reproduce. The use of radiation (gamma-radiation, X-rays or an electron beam) to kill bioluminescent cells whilst retaining the metabolic function of bioluminescence is demonstrated in International patent application number WO 95/07346. It is an object of the present invention to provide an alternative method of killing bioluminescent cells whilst retaining the metabolic function of bioluminescence which does not require the use of radiation and, as such, can be easily carried out without the need for specialized radiation equipment and containment facilities and without the risk to laboratory personnel associated with the use of radiation.
Accordingly, in a first aspect the invention provides a method of making a non-viable preparation of prokaryotic or eukaryotic cells, which preparation has a signal-generating metabolic activity, which method comprises contacting a viable culture of cells with signal-generating metabolic activity with a member of the bleomycin/phleomycin family of antibiotics.
Bleomycin and phleomycin are closely related glycopeptide antibiotics that are isolated in the form of copper chelates from cultures of Streptomyces verticillus. They represent a group of compounds with molecular weights ranging from approximately 1,000 to 1,500 Da that are potent antibiotics and anti-tumour agents. So far more than 200 members of the bleomycin/phleomycin family have been isolated and characterised as complex basic glycopeptides. Family members resemble each other with respect to their physicochemical properties and their structure, indicating that functionally they all behave in the same manner. Furthermore, the chemical structure of the active moiety is conserved between family members and consists of 5 amino acids, L-glucose, 3-O-carbamoyl-D-mannose and a terminal cation. The various bleomycin/phleomycin family members differ from each other in the nature of the terminal cation moiety, which is usually an amine. A preferred bleomycin/phleomycin antibiotic for use in the method of the invention is phleomycin D1, sold under the trade name Zeocin™.
Bleomycin and phleomycin are strong, selective inhibitors of DNA synthesis in intact bacteria and in mammalian cells. Bleomycin can be observed to attack purified DNA in vitro when incubated under appropriate conditions and analysis of the bleomycin damaged DNA shows that both single-stranded and double-stranded cleavages occur, the latter being the result of staggered single strand breaks formed approximately two base pairs apart in the complementary strands.
In in vivo systems, after being taken up by the cell, bleomycin enters the cell nucleus, binds to DNA (by virtue of the interaction between its positively charged terminal amine moiety and a negatively charged phosphate group of the DNA backbone) and causes strand scission. Bleomycin causes strand scission of DNA in viruses, bacteria and eukaryotic cell systems.
The present inventors have surprisingly found that treatment of a culture of cells with signal-generating metabolic activity with a bleomycin/phleomycin antibiotic renders the culture non-viable whilst retaining a level of signal-generating metabolic activity suitable for use in toxicity testing applications. In the context of this application the term non-viable is taken to mean that the cells are unable to reproduce. The process of rendering cells non-viable whilst retaining signal-generating metabolic activity may hereinafter be referred to as inactivation and cells which have been rendered non-viable according to the method of the invention may be referred to as inactivated.
Because of the broad spectrum of action of the bleomycin/phleomycin family of antibiotics the method of the invention is equally applicable to bacterial cells and to eukaryotic cells with signal generating metabolic activity. Preferably the signal-generating metabolic activity is bioluminescence but other signal-generating metabolic activities which are reporters of toxic damage could be used with equivalent effect.
The method of the invention is preferred for use with bacteria or eukaryotic cells that have been genetically modified to express a signal-generating metabolic activity. The examples given below relate to E. coli which have been engineered to express bioluminescence by transformation with a plasmid carrying lux genes. The eukaryotic equivalent would be cells transfected with a vector containing nucleic acid encoding a eukaryotic luciferase enzyme (abbreviated luc) such as, for example, luciferase from the firefly Photinus pyralis. A suitable plasmid vector containing cDNA encoding firefly luciferase under the control of an SV40 viral promoter is available from Promega Corporation, Madison, Wis., USA. However, in connection with the present invention it is advantageous to use recombinant cells containing the entire bacterial lux operon so as to avoid the need to add an exogenous substrate (e.g. luciferin) in order to generate light output.
The optimum concentration of bleomycin/phleomycin antibiotic and contact time required to render a culture of cells non-viable whilst retaining a useful level of signal-generating metabolic activity may vary according to the cell type but can be readily determined by routine experiment. In general, the lower the concentration of antibiotic used the longer the contact time required for cell inactivation. In connection with the production of assay reagents for use in toxicity testing applications, it is generally advantageous to keep the concentration of antibiotic low (e.g. around 1-1.5 mg/ml) and increase the contact time for inactivation. As will be shown in Example 1, treatment with Zeocin™ at a concentration of 1.5 mg/ml for 3 to 5 hours is sufficient to completely inactivate a culture of recombinant E. coli.
In the case of bacteria, the contact time required to inactivate a culture of bacterial cells is found to vary according to the stage of growth of the culture at the time the antibiotic is administered. Although the method of the invention can be used on bacteria at all stages of growth, it is generally preferable to perform the method on bacterial cells in an exponential growth phase, since the required antibiotic contact time has been observed to be shortest at that stage.
Following treatment with bleomycin/phleomycin antibiotic the non-viable preparation of cells is preferably stabilised for ease of storage or shipment. The cells can be stabilised using known techniques such as, for example, freeze drying (lyophilization) or other cell preservation techniques known in the art. Stabilization by freeze drying has the added advantage that the freeze drying procedure itself can render cells non-viable. Thus, any cells in the preparation which remain viable after treatment of the culture with bleomycin/phleomycin antibiotic will be rendered non-viable by freeze drying. It is thought that freeze drying inactivates any remaining viable cells by enhancing the effect of antibiotic, such that sub-lethally injured cells in the culture are more sensitive to the stresses applied during freeze drying.
Prior to use the stabilised cell preparation is reconstituted using a reconstitution buffer to form an assay reagent. This reconstituted assay reagent may then be used directly in assays for analytes, for example in toxicity testing applications. It is preferable that the stabilised (i.e. freeze dried) assay reagent be reconstituted immediately prior to use, but after reconstitution it is generally necessary to allow sufficient time prior to use for the reconstituted reagent to reach a stable, high level of signal-generating activity. Suitable reconstitution buffers preferably contain an osmotically potent non-salt compound such as sucrose, dextran or polyethylene glycol, although salt based stabilisers may also be used.
Whilst the assay reagent of the invention is particularly suitable for use in toxicity testing applications it is to be understood that the invention is not limited to assay reagents for use in toxicity testing. The cell inactivation method of the invention can be used to inactivate any recombinant cells (prokaryotic or eukaryotic) with a signal generating metabolic activity that is not dependent upon cell viability.
In a further aspect the invention provides a method of assaying a potentially toxic analyte comprising the steps of:
(a) contacting a sample to be assayed for the analyte with a sample of assay reagent comprising a non-viable preparation of cells with a signal-generating metabolic activity;
(b) measuring the level of signal generated; and
(c) using the measurement obtained as an indicator of the toxicity of the analyte.
In a still further aspect, the invention provides a kit for performing the above-stated assay comprising an assay reagent with signal generating metabolic activity and means for contacting the assay reagent with a sample to be assayed for an analyte.
The analytes tested using the assay of the invention are usually toxic substances, but it is to be understood that the precise nature of the analyte to be tested is not material to the invention.
Toxicity is a general term used to describe an adverse effect on a biological system, and the term toxic substances includes both toxicants (synthetic chemicals that are toxic) and toxins (natural poisons). Toxicity is usually expressed as an effective concentration (EC) or inhibitory concentration (IC) value. The EC/IC value is usually denoted as a percentage response (e.g. EC₅₀, EC₁₀), which indicates the concentration (dose) of a particular substance which affects the designated criterion for assessing toxicity (i.e. a behavioural trait or death) in the indicated proportion of the population tested. For example, an EC₅₀ of 10 ppm indicates that 50% of the population will be affected by a concentration of 10 ppm. In the case of a toxicity assay based on the use of a bioluminescent assay reagent, the EC₅₀ value is usually the concentration of sample substance causing a 50% change in light output.
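Since EC₅₀ values anchor all of the results that follow, a minimal sketch of the underlying arithmetic may help. The dose-response numbers and the log-linear interpolation below are illustrative assumptions, not data or methods taken from the patent:

```python
import numpy as np

def estimate_ec50(concentrations_ppm, percent_effect):
    """Estimate the concentration giving a 50% effect by interpolating
    % effect against log10(concentration).

    Assumes percent_effect rises monotonically with concentration."""
    log_conc = np.log10(concentrations_ppm)
    # np.interp expects increasing x values; here x is % effect
    return 10 ** np.interp(50.0, percent_effect, log_conc)

# Hypothetical dose-response readings for a toxicant
doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])        # ppm
effects = np.array([5.0, 18.0, 42.0, 61.0, 80.0, 93.0])   # % effect
print(f"EC50 ~ {estimate_ec50(doses, effects):.2f} ppm")  # ~1.59 ppm
```

With these made-up readings the 50% crossing falls between the 1 ppm and 3 ppm doses, giving an EC₅₀ of roughly 1.6 ppm.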
The present invention will be further understood by way of the following Examples with reference to the accompanying Figures in which:
FIG. 1 is a graph to show the effect of Zeocin™ treatment on viable count and light output of recombinant bioluminescent E. coli cells.
FIG. 2 is a graph to show the light output from five separate vials of reconstituted assay reagent. The assay reagent was prepared from recombinant bioluminescent E. coli exposed to 1.5 mg/ml Zeocin™ for 300 minutes. Five vials were used to reduce discrepancies resulting from vial-to-vial variation.
FIGS. 3 to 8 are graphs to show the effect of Zeocin™ treatment on the sensitivity of bioluminescent assay reagent to toxicant (ZnSO₄):
FIG. 3: Control cells, lag phase.
FIG. 4: Zeocin™ treated cells, lag phase.
FIG. 5: Control cells, mid-exponential growth.
FIG. 6: Zeocin™ treated cells, mid-exponential growth.
FIG. 7: Control cells, stationary phase.
FIG. 8: Zeocin™ treated cells, stationary phase.
## EXAMPLE 1
## (A) Inactivation of Bioluminescent E. coli: Method
1. Bioluminescent genetically modified E. coli strain HB101 (E. coli HB101 made bioluminescent by transformation with a plasmid carrying the lux operon of Vibrio fischeri, constructed by the method of Shaw and Kado as described in Biotechnology 4: 560-564) was grown from a frozen stock in 5 ml of low salt medium (LB (5 g/l NaCl)+glycerol+MgSO₄) for 24 hours.
2. 1 ml of the 5 ml culture was then used to inoculate 200 ml of low salt medium in a shaker flask and the resultant culture grown to an OD₆₃₀ of 0.407 (exponential growth phase).
3. 50 ml of this culture was removed to a fresh sterile shaker flask (control cells).
4. Zeocin™ was added to the 150 ml of culture in the original shaker flask, to a final concentration of 1.5 mg/ml. At the same time, an equivalent volume of water was added to the 50 ml culture removed from the original flask (control cells).
5. The time course of cell inactivation was monitored by removing samples from the culture at 5, 60, 120, 180, 240 and 300 minutes after the addition of Zeocin™ and taking measurements of both light output (measured using a Deltatox luminometer) and viable count (per ml, determined using the method given in Example 3 below) for each of the samples. Samples of the control cells were removed at 5 and 300 minutes after the addition of water and measurements of light output and viable count taken as for the Zeocin™ treated cells.
FIG. 1 shows the effect of Zeocin™ treatment on the light output and viable count (per ml) of recombinant bioluminescent E. coli. Zeocin™ was added to a final concentration of 1.5 mg/ml at time zero. The number of viable cells in the culture was observed to decrease with increasing contact time with Zeocin™, the culture being completely inactivated after 3 hours. The light output from the culture was observed to decrease gradually with increasing Zeocin™ contact time.
## (B) Production of Assay Reagent
Five hours after the addition of Zeocin™ or water, the remaining bacterial cells in the Zeocin™ treated and control cultures were harvested by centrifugation, washed (to remove traces of Zeocin™ from the Zeocin™ treated culture), re-centrifuged and resuspended in cryoprotectant to an OD₆₃₀ of 0.25. 200 μl aliquots of the cells in cryoprotectant were dispensed into single-shot vials and freeze dried. Freeze dried samples of the Zeocin™ treated cells and control cells were reconstituted in 0.2M sucrose to form assay reagents and the light output of the assay reagents measured at various times after reconstitution.
The light output from assay reagent prepared from cells exposed to 1.5 mg/ml Zeocin™ for 5 hours was not significantly different to the light output from assay reagent prepared from control (Zeocin™ untreated) cells, indicating that Zeocin™ treatment does not affect the light output of the reconstituted freeze dried assay reagent. Both Zeocin™ treated and Zeocin™ untreated assay reagents produced stable light output 15 minutes after reconstitution.
FIG. 2 shows the light output from five separate vials of reconstituted Zeocin™ treated assay reagent inactivated according to the method of Example 1(A) and processed into assay reagent as described in Example 1(B). Reconstitution solution was added at time zero and thereafter light output was observed to increase steadily before stabilising at around 15 minutes after reconstitution. All five vials were observed to give similar light profiles after reconstitution.
## EXAMPLE 2
## Sensitivity of Zeocin™ Treated Assay Reagent to Toxicant Method
1. Bioluminescent genetically modified E. coli strain HB101 (E. coli HB101 made bioluminescent by transformation with a plasmid carrying the lux operon of Vibrio fischeri, constructed by the method of Shaw and Kado as described in Biotechnology 4: 560-564) was grown in a fermenter as a batch culture in low salt medium (LB (5 g/l NaCl)+glycerol+MgSO₄).
2. Two aliquots of the culture were removed from the fermenter into separate sterile shaker flasks at each of three different stages of growth i.e. at OD₆₃₀ values of 0.038 (lag phase growth), 1.31 (mid-exponential phase growth) and 2.468 (stationary phase growth).
3. One aliquot of culture for each of the three growth stages was inactivated by contact with Zeocin™ (1 mg Zeocin™ added per 2.5×10⁶ cells, i.e. the concentration of Zeocin™ per cell is kept constant) for 300 minutes and then processed into assay reagent by freeze drying and reconstitution, as described in part (B) of Example 1.
4. An equal volume of water was added to the second aliquot of culture for each of the three growth stages and the cultures processed into assay reagent as described above.
5. Samples of each of the three Zeocin™ treated and three control assay reagents were then evaluated for sensitivity to toxicant (ZnSO₄) according to the following assay protocol:
ZnSO₄ Sensitivity Assay
1. ZnSO₄ solutions were prepared in pure water at 30, 10, 3, 1, 0.3 and 0.1 ppm. Pure water was also used as a control.
2. Seven vials of each of the three Zeocin™ treated and each of the three control assay reagents (i.e. one for each of the six ZnSO₄ solutions and one for the pure water control) were reconstituted using 0.5 ml of reconstitution solution (e.g. 0.2M sucrose) and then left to stand at room temperature for 15 minutes to allow the light output to stabilize. Baseline (time zero) readings of light output were then measured for each of the reconstituted reagents.
3. 0.5 ml aliquots of each of the six ZnSO₄ solutions and the pure water control were added to separate vials of reconstituted assay reagent. This was repeated for each of the different Zeocin™ treated and control assay reagents.
4. The vials were incubated at room temperature and light output readings were taken 5, 10, 15, 20, 25 and 30 minutes after addition of ZnSO₄ solution.
5. The % toxic effect for each sample was calculated as follows:
where: C₀ = light in control at time zero
Cₜ = light in control at reading time
S₀ = light in sample at time zero
Sₜ = light in sample at reading time
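The expression itself appears to have been a figure in the original document and did not survive conversion. A plausible reconstruction from the variable definitions above, following the normalisation commonly used in bioluminescence toxicity assays (an assumption, not necessarily the verbatim formula from the source), is:

$$\%\ \text{toxic effect} = \left(1 - \frac{S_t / S_0}{C_t / C_0}\right) \times 100$$

The control ratio Cₜ/C₀ corrects for any drift in light output over the reading period that is unrelated to the added toxicant.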
The results of toxicity assays for sensitivity to ZnSO₄ for all the Zeocin™ treated and control assay reagents are shown in FIGS. 3 to 8:
FIG. 3: Control cells, lag phase.
FIG. 4: Zeocin™ treated cells, lag phase.
FIG. 5: Control cells, mid-exponential growth.
FIG. 6: Zeocin™ treated cells, mid-exponential growth.
FIG. 7: Control cells, stationary phase.
FIG. 8: Zeocin™ treated cells, stationary phase.
| Growth stage of assay reagent | EC₅₀ (Zeocin™ treated) | EC₅₀ (control cells) |
|-------------------------------|------------------------|----------------------|
| Lag phase                     | 1.445 ppm ZnSO₄        | 1.580 ppm ZnSO₄      |
| Exponential phase             | 0.446 ppm ZnSO₄        | 0.446 ppm ZnSO₄      |
| Stationary phase              | 0.426 ppm ZnSO₄        | 0.457 ppm ZnSO₄      |
In each case, separate graphs of % toxic effect against log₁₀ concentration of ZnSO₄ were plotted on the same axes for each value of time (minutes) after addition of ZnSO₄ solution. The sensitivities of the various reagents, expressed as EC₅₀ values for 15 minutes' exposure to ZnSO₄, are summarised in Table 1.
Table 1: Sensitivity of the different assay reagents to ZnSO₄, expressed as EC₅₀ values for 15 minutes' exposure to ZnSO₄.
The results of the toxicity assays indicate that Zeocin™ treatment does not significantly affect the sensitivity of a recombinant bioluminescent E. coli derived assay reagent to ZnSO₄. Similar results could be expected with other toxic substances which have an effect on signal-generating metabolic activities.
## EXAMPLE 3
## Method to Determine Viable Count
1. Samples of bacterial culture to be assayed for viable count were centrifuged at 10,000 rpm for 5 minutes to pellet the bacterial cells.
2. Bacterial cells were washed by resuspending in 1 ml of M9 medium, re-centrifuged at 10,000 rpm for 5 minutes and finally re-suspended in 1 ml of M9 medium.
3. Serial dilutions of the bacterial cell suspension from 10⁻¹ to 10⁻⁷ were prepared in M9 medium.
4. Three separate 10 μl aliquots of each of the serial dilutions were plated out on standard agar plates and the plates incubated at 37° C.
5. The number of bacterial colonies present for each of the three aliquots at each of the serial dilutions was counted and the values averaged. Viable count was calculated per ml of bacterial culture.
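The plate-count arithmetic in steps 3 to 5 reduces to a single scaling. The sketch below uses made-up colony counts; the function name and the numbers are illustrative, not values from the patent:

```python
def viable_count_per_ml(colonies, dilution, plated_volume_ml):
    """Average replicate colony counts, then scale by the dilution factor
    and the plated volume to give CFU per ml of the original culture."""
    mean_colonies = sum(colonies) / len(colonies)
    return mean_colonies / (dilution * plated_volume_ml)

# Hypothetical: three 10 ul aliquots of the 10^-5 dilution gave these counts
print(viable_count_per_ml([23, 19, 21], dilution=1e-5, plated_volume_ml=0.010))
# -> 210000000.0 CFU/ml (2.1 x 10^8)
```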
## CLAIMS
1. A method of making a non-viable preparation of prokaryotic or eukaryotic cells, which preparation has a signal-generating metabolic activity, which method comprises contacting a viable culture of said cells having signal-generating metabolic activity with an antibiotic selected from the bleomycin/phleomycin family of antibiotics.
2. The method as claimed in claim 1 wherein following contact with antibiotic, said cells are subjected to a stabilization step.
3. The method as claimed in claim 2 wherein said stabilization step comprises freeze drying.
4. The method as claimed in claim 1 wherein said antibiotic is phleomycin D1.
5. The method as claimed in claim 1 wherein said signal-generating metabolic activity is bioluminescence.
6. The method as claimed in claim 5 wherein said cells are bacteria.
7. The method as claimed in claim 6 wherein said bacteria are in an exponential growth phase when contacted with said antibiotic.
8. The method as claimed in claim 6 wherein said bacteria are genetically modified.
9. The method as claimed in claim 8 wherein said genetically modified bacteria contain nucleic acid encoding luciferase.
10. The method as claimed in claim 9 wherein said bacteria are E. coli.
11. The method as claimed in claim 5 wherein said cells are eukaryotic cells.
12. The method as claimed in claim 11 wherein said eukaryotic cells are genetically modified.
13. The method as claimed in claim 12 wherein said genetically modified eukaryotic cells contain nucleic acid encoding luciferase.
14. A method of making a non-viable preparation of prokaryotic cells, which preparation has a signal-generating metabolic activity, which method comprises contacting a viable culture of a genetically modified E. coli strain made bioluminescent by transformation with a plasmid carrying the lux operon of Vibrio fischeri with an antibiotic selected from the bleomycin/phleomycin family of antibiotics.
15. The method as claimed in claim 14 wherein said cells are contacted with phleomycin D1 at a concentration of at least about 1.5 mg/ml.
16. The method as claimed in claim 15 wherein said contact is maintained for at least about 3 hours.
17. The method as claimed in claim 16 wherein said antibiotic-treated cells are harvested, washed and freeze-dried.
## Drawings

View File

@@ -0,0 +1,76 @@
item-0 at level 0: unspecified: group _root_
item-1 at level 1: title: Carbocation containing cyanine-type dye
item-2 at level 2: section_header: ABSTRACT
item-3 at level 3: paragraph: To provide a reagent with excellent stability under storage, which can detect a subject compound to be measured with higher specificity and sensitivity. Complexes of a compound represented by the general formula (IV):
item-4 at level 2: section_header: BACKGROUND OF THE INVENTION
item-5 at level 3: paragraph: 1. Field of the Invention
item-6 at level 3: paragraph: The present invention relates to a labeled complex for microassay using near-infrared radiation. More specifically, the present invention relates to a labeled complex capable of specifically detecting a certain particular component in a complex mixture with a higher sensitivity.
item-7 at level 3: paragraph: 2. Related Background Art
item-8 at level 3: paragraph: When a laser beam is irradiated onto a trace substance labeled with a dye or the like, information characteristic of the substance is generated, such as scattered light, absorbed light, fluorescence and photoacoustic signals. It is widely known in the field of laser-based analysis that detecting such information allows microassays to be performed rapidly and with high precision.
item-9 at level 3: paragraph: Gas lasers, represented by argon and helium lasers, have conventionally been used almost exclusively as laser sources. In recent years, however, the semiconductor laser has been developed, and its characteristic features, such as low cost, small size and easy output control, make it a desirable light source.
item-10 at level 3: paragraph: If diagnostically useful substances from living organisms are assayed at wavelengths in the ultraviolet and visible regions, as has conventionally been done, the background (blank) caused by the intrinsic fluorescence of naturally occurring products generally contained in samples, such as flavins, pyridine coenzymes and serum proteins, is likely to be high. If a light source in the near-infrared region can be used instead, such background from naturally occurring products can be eliminated, thereby enhancing the sensitivity to the substances to be measured.
item-11 at level 3: paragraph: However, the oscillation wavelength of a semiconductor laser is generally in the red and near-infrared regions (670 to 830 nm), where few dyes generate fluorescence via absorption or excitation. A representative example of such dyes is the polymethine-type dye having a longer conjugated chain. Examples of labeling substances from living organisms with a polymethine-type dye and using the labeled substances for microanalysis are reported by K. Sauda, T. Imasaka et al. in Anal. Chem., 58, 2649-2653 (1986), in which plasma proteins are labeled with a cyanine dye having a sulfonate group (for example, Indocyanine Green) for analysis by high-performance liquid chromatography.
item-12 at level 3: paragraph: Japanese Patent Application Laid-open No. 2-191674 discloses that various cyanine dyes having sulfonic acid groups or sulfonate groups are used for labeling substances from living organisms and for detecting the fluorescence.
item-13 at level 3: paragraph: However, these known cyanine dyes emitting fluorescence via absorption or excitation in the near-infrared region are generally not particularly stable under light or heat.
item-14 at level 3: paragraph: If the dyes are used as labeling agents and bonded to substances from living organisms, such as antibodies, to prepare complexes, the complexes are easily oxidized by environmental factors such as light, heat, moisture and atmospheric oxygen, or are subject to modifications such as cross-linking. Particularly in water, modifications such as hydrolysis are further accelerated. Therefore, the practical use of these complexes as detection reagents in microassays of the components of living organisms has encountered difficulties because of their poor stability under storage.
item-15 at level 2: section_header: SUMMARY OF THE INVENTION
item-16 at level 3: paragraph: The present inventors have made various investigations so as to solve the above problems, and have found that a dye of a particular structure, more specifically a particular polymethine dye, and among others a dye having an azulene skeleton, is extremely stable even after its immobilization as a labeling agent onto substances from living organisms. Thus, the inventors have achieved the present invention. It is an object of the present invention to provide a labeled complex with excellent storage stability which can overcome the above problems.
item-17 at level 3: paragraph: According to an aspect of the present invention, there is provided a labeled complex for detecting a subject compound to be analyzed by optical means using near-infrared radiation, which complex comprises a substance from a living organism and a labeling agent fixed onto the substance and is bonded to the subject compound to be analyzed, wherein the labeling agent comprises a compound represented by the general formula (I), (II) or (III): wherein R₁ through R₇ are independently selected from the group consisting of hydrogen atom, halogen atom, alkyl group, aryl group, aralkyl group, sulfonate group, amino group, styryl group, nitro group, hydroxyl group, carboxyl group, cyano group, or arylazo group; R₁ through R₇ may be bonded to each other to form a substituted or an unsubstituted condensed ring; R₁ represents a divalent organic residue; and X₁⊖ represents an anion; wherein R₈ through R₁₄ are independently selected from the group consisting of hydrogen atom, halogen atom, alkyl group, aryl group, aralkyl group, sulfonate group, amino group, styryl group, nitro group, hydroxyl group, carboxyl group, cyano group, or arylazo group; R₈ through R₁₄ may be bonded to each other to form a substituted or an unsubstituted condensed ring; and Rᴀ represents a divalent organic residue; wherein R₁₅ through R₂₁ are independently selected from the group consisting of hydrogen atom, halogen atom, alkyl group, aryl group, a substituted or an unsubstituted aralkyl group, a substituted or an unsubstituted amino group, a substituted or an unsubstituted styryl group, nitro group, sulfonate group, hydroxyl group, carboxyl group, cyano group, or arylazo group; R₁₅ through R₂₁ may or may not be bonded to each other to form a substituted or an unsubstituted condensed ring; Rʙ represents a divalent organic residue; and X₁⊖ represents an anion.
item-18 at level 3: paragraph: According to another aspect of the present invention, there is provided a labeled complex for detecting a subject compound to be analyzed by optical means using near-infrared radiation, which complex comprises a substance from a living organism and a labeling agent fixed onto the substance and is bonded to the subject compound to be analyzed, wherein the labeling agent comprises a compound represented by the general formula (IV): wherein A, B, D and E are independently selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group having two or more carbon atoms, alkenyl group, aralkyl group, aryl group, styryl group and heterocyclic group; r₁′ and r₂′ are individually selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group, cyclic alkyl group, alkenyl group, aralkyl group and aryl group; k is 0 or 1; l is 0, 1 or 2; and X₂⊖ represents an anion.
item-19 at level 3: paragraph: According to another aspect of the present invention, there is provided a method of detecting a subject compound to be analyzed by optical means, which method comprises using a labeled complex comprised of a substance from a living organism and a labeling agent fixed onto the substance and bonding the complex to the subject compound to be analyzed, wherein the labeling agent comprises a compound represented by the general formula (I), (II) or (III).
item-20 at level 3: paragraph: According to still another aspect of the present invention, there is provided a method of detecting a subject compound to be analyzed by optical means, which method comprises using a labeled complex comprised of a substance from a living organism and a labeling agent fixed onto the substance and bonding the complex to the subject compound to be analyzed, wherein the labeling agent comprises a compound represented by the general formula (IV).
item-21 at level 2: section_header: BRIEF DESCRIPTION OF THE DRAWINGS
item-22 at level 3: paragraph: FIG. 1 depicts one example of the fluorescence emission waveform of a labeling agent.
item-23 at level 2: section_header: DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
item-24 at level 3: paragraph: The present invention will now be explained in detail hereinbelow.
item-25 at level 3: paragraph: In accordance with the present invention, the compound of the general formula (I), (II) or (III) is employed as a labeling agent, wherein R₁ to R₂₁ individually represent hydrogen atom, halogen atom (chlorine atom, bromine atom or iodine atom) or a monovalent organic residue, such as the functional groups described above. The monovalent organic residue can be selected from a wide variety of such residues.
item-26 at level 3: paragraph: The alkyl group is preferably in straight chain or branched chain, having a carbon number of 1 to 12, such as for example methyl group, ethyl group, n-propyl group, iso-propyl group, n-butyl group, sec-butyl group, iso-butyl group, t-butyl group, n-amyl group, t-amyl group, n-hexyl group, n-octyl group, t-octyl group and the like.
item-27 at level 3: paragraph: The aryl group preferably has a carbon number of 6 to 20, such as for example phenyl group, naphthyl group, methoxyphenyl group, diethylaminophenyl group, dimethylaminophenyl group and the like.
item-28 at level 3: paragraph: The substituted aralkyl group preferably has a carbon number of 7 to 19, such as for example carboxybenzyl group, sulfobenzyl group, hydroxybenzyl group and the like.
item-29 at level 3: paragraph: The unsubstituted aralkyl group preferably has a carbon number of 7 to 19, such as for example benzyl group, phenethyl group, .alpha.-naphthylmethyl group, .beta.-naphthylmethyl group and the like.
item-30 at level 3: paragraph: The substituted or unsubstituted amino group preferably has a carbon number of 10 or less, such as for example amino group, dimethylamino group, diethylamino group, dipropylamino group, acetylamino group, benzoylamino group and the like.
item-31 at level 3: paragraph: The substituted or unsubstituted styryl group preferably has a carbon number of 8 to 14, such as for example styryl group, dimethylaminostyryl group, diethylaminostyryl group, dipropylaminostyryl group, methoxystyryl group, ethoxystyryl group, methylstyryl group and the like.
item-32 at level 3: paragraph: The aryl azo group preferably has a carbon number of 6 to 14, such as for example phenylazo group, .alpha.-naphthylazo group, .beta.-naphthylazo group, dimethylaminophenylazo group, chlorophenylazo group, nitrophenylazo group, methoxyphenylazo group and the like.
item-33 at level 3: paragraph: Of the combinations of R.sub.1 and R.sub.2, R.sub.2 and R.sub.3, R.sub.3 and R.sub.4, R.sub.4 and R.sub.5, R.sub.5 and R.sub.6, and R.sub.6 and R.sub.7 of the general formula (I), at least one combination may form a substituted or an unsubstituted condensed ring. The condensed ring may be five, six or seven membered, including aromatic ring (benzene, naphthalene, chlorobenzene, bromobenzene, methyl benzene, ethyl benzene, methoxybenzene, ethoxybenzene and the like); heterocyclic ring (furan ring, benzofuran ring, pyrrole ring, thiophene ring, pyridine ring, quinoline ring, thiazole ring and the like); and aliphatic ring (dimethylene, trimethylene, tetramethylene and the like). This is the case with the general formulas (II) and (III).
item-34 at level 3: paragraph: For the general formula (II), at least one combination among the combinations of R.sub.8 and R.sub.9, R.sub.9 and R.sub.10, R.sub.10 and R.sub.11, R.sub.11 and R.sub.12, R.sub.12 and R.sub.13, and R.sub.13 and R.sub.14, may form a substituted or an unsubstituted condensed ring.
item-35 at level 3: paragraph: Also for the general formula (III), at least one combination of the combinations of R.sub.15 and R.sub.16, R.sub.16 and R.sub.17, R.sub.17 and R.sub.18, R.sub.18 and R.sub.19, R.sub.19 and R.sub.20, and R.sub.20 and R.sub.21, may form a substituted or an unsubstituted condensed ring.
item-36 at level 3: paragraph: In the general formulas (I) to (IV) described above, the general formula (I) is specifically preferable; preference is also given individually to hydrogen atom, alkyl group and sulfonate group in the case of R.sub.1 to R.sub.7 ; hydrogen atom, alkyl group and sulfonate group in the case of R.sub.8 to R.sub.14 ; hydrogen atom, alkyl group and sulfonate group in the case of R.sub.15 to R.sub.21 ; alkyl group and aryl group in the case of A, B, D and E; hydrogen atom and alkyl group in the case of r.sub.1 ' and r.sub.2 '.
item-37 at level 3: paragraph: In the general formula (I), R represents a divalent organic residue bonded via a double bond. Specific examples of a compound containing such R to be used in the present invention include those represented by the following general formulas (1) to (12), wherein Q.sup..sym. represents the following azulenium salt nucleus and the right side excluding Q.sup..sym. represents R, wherein the relation between the azulenium salt nucleus represented by Q.sup..sym. and the azulene salt nucleus on the right side in the formula (3) may be symmetric or asymmetric. In the above formulas (1) to (12), as in the case of R.sub.1 to R.sub.7, R.sub.1 ' to R.sub.7 ' and R.sub.1 " to R.sub.7 " independently represent hydrogen atom, halogen atom, alkyl group, aryl group, aralkyl group, amino group, styryl group, nitro group, hydroxyl group, carboxyl group, cyano group or aryl azo group, while R.sub.1 ' to R.sub.7 ' and R.sub.1 " to R.sub.7 " independently may form a substituted or an unsubstituted condensed ring; n is 0, 1 or 2; r is an integer of 1 to 8; s represents 0 or 1; and t represents 1 or 2.
item-38 at level 3: paragraph: M.sub.2 represents a non-metallic atom group required for the completion of a nitrogen-containing heterocyclic ring.
item-39 at level 3: paragraph: Specific examples of M.sub.2 are atom groups required for the completion of a nitrogen-containing heterocyclic ring, including pyridine, thiazole, benzothiazole, naphthothiazole, oxazole, benzoxazole, naphthoxazole, imidazole, benzimidazole, naphthoimidazole, 2-quinoline, 4-quinoline, isoquinoline or indole, and may be substituted by halogen atom (chlorine atom, bromine atom, iodine atom and the like), alkyl group (methyl, ethyl, propyl, butyl and the like), aryl group (phenyl, tolyl, xylyl and the like), and aralkyl group (benzyl, p-methylbenzyl and the like).
item-40 at level 3: paragraph: R.sub.22 represents hydrogen atom, nitro group, sulfonate group, cyano group, alkyl group (methyl, ethyl, propyl, butyl and the like), or aryl group (phenyl, tolyl, xylyl and the like). R.sub.23 represents alkyl group (methyl, ethyl, propyl, butyl and the like), a substituted alkyl group (2-hydroxyethyl, 2-methoxyethyl, 2-ethoxyethyl, 3-hydroxypropyl, 3-methoxypropyl, 3-ethoxypropyl, 3-chloropropyl, 3-bromopropyl, 3-carboxylpropyl and the like), a cyclic alkyl group (cyclohexyl, cyclopropyl), aralkyl group (benzyl, 2-phenylethyl, 3-phenylpropyl, 3-phenylbutyl, 4-phenylbutyl, .alpha.-naphthylmethyl, .beta.-naphthylmethyl), a substituted aralkyl group (methylbenzyl, ethylbenzyl, dimethylbenzyl, trimethylbenzyl, chlorobenzyl, bromobenzyl and the like), aryl group (phenyl, tolyl, xylyl, .alpha.-naphthyl, .beta.-naphthyl) or a substituted aryl group (chlorophenyl, dichlorophenyl, trichlorophenyl, ethylphenyl, methoxyphenyl, dimethoxyphenyl, aminophenyl, sulfonate phenyl, nitrophenyl, hydroxyphenyl and the like).
item-41 at level 3: paragraph: R.sub.24 represents a substituted or an unsubstituted aryl group or the cation group thereof, specifically including a substituted or an unsubstituted aryl group (phenyl, tolyl, xylyl, biphenyl, aminophenyl, .alpha.-naphthyl, .beta.-naphthyl, anthranyl, pyrenyl, methoxyphenyl, dimethoxyphenyl, trimethoxyphenyl, ethoxyphenyl, diethoxyphenyl, chlorophenyl, dichlorophenyl, trichlorophenyl, bromophenyl, dibromophenyl, tribromophenyl, ethylphenyl, diethylphenyl, nitrophenyl, aminophenyl, dimethylaminophenyl, diethylaminophenyl, dibenzylaminophenyl, dipropylaminophenyl, morpholinophenyl, piperidinylphenyl, piperidinophenyl, diphenylaminophenyl, acetylaminophenyl, benzoylaminophenyl, acetylphenyl, benzoylphenyl, cyanophenyl, sulfonate phenyl, carboxylate phenyl and the like).
item-42 at level 3: paragraph: R.sub.25 represents a heterocyclic ring or the cation group thereof, specifically including a monovalent heterocyclic ring derived from cyclic rings, such as furan, thiophene, benzofuran, thionaphthene, dibenzofuran, carbazole, phenothiazine, phenoxazine, pyridine and the like.
item-43 at level 3: paragraph: R.sub.26 represents hydrogen atom, alkyl group (methyl, ethyl, propyl, butyl and the like), or a substituted or an unsubstituted aryl group (phenyl, tolyl, xylyl, biphenyl, ethylphenyl, chlorophenyl, methoxyphenyl, ethoxyphenyl, nitrophenyl, aminophenyl, dimethylaminophenyl, diethylaminophenyl, acetylaminophenyl, .alpha.-naphthyl, .beta.-naphthyl, anthranyl, pyrenyl, sulfonate phenyl, carboxylate phenyl and the like). In the formula, Z.sub.7 represents an atom group required for the completion of pyran, thiapyran, selenapyran, telluropyran, benzopyran, benzothiapyran, benzoselenapyran, benzotelluropyran, naphthopyran, naphthothiapyran, naphthoselenapyran or naphthotelluropyran.
item-44 at level 3: paragraph: L.sub.7 represents sulfur atom, oxygen atom, selenium atom or tellurium atom.
item-45 at level 3: paragraph: R.sub.27 and R.sub.28 individually represent hydrogen atom, alkoxy group, a substituted or an unsubstituted aryl group, alkenyl group or a heterocyclic group.
item-46 at level 3: paragraph: More specifically, R.sub.27 and R.sub.28 individually represent hydrogen atom, alkyl group (methyl, ethyl, propyl, butyl and the like), alkyl sulfonate group, alkoxyl group (methoxy, ethoxy, propoxy, ethoxyethyl, methoxyethyl and the like), aryl group (phenyl, tolyl, xylyl, sulfonate phenyl, chlorophenyl, biphenyl, methoxyphenyl and the like), a substituted or an unsubstituted styryl group (styryl, p-methylstyryl, o-chlorostyryl, p-methoxystyryl and the like), a substituted or an unsubstituted 4-phenyl-1,3-butadienyl group (4-phenyl-1,3-butadienyl, 4-(p-methylphenyl)-1,3-butadienyl and the like), or a substituted or an unsubstituted heterocyclic group (quinolyl, pyridyl, carbazoyl, furyl and the like).
item-47 at level 3: paragraph: As in the case of R, the same is true with R.sub.A and R.sub.B of the general formulas (II) and (III), respectively.
item-48 at level 3: paragraph: Then, in R.sub.A, the symbols R.sub.8 ' to R.sub.14 ' individually correspond to R.sub.1 ' to R.sub.7 '; R.sub.8 " to R.sub.14 " individually correspond to R.sub.1 " to R.sub.7 "; in R.sub.B, R.sub.15 ' to R.sub.21 ' individually correspond to R.sub.1 ' to R.sub.7 '; R.sub.15 " to R.sub.21 " individually correspond to R.sub.1 " to R.sub.7 ".
item-49 at level 3: paragraph: Among the azulenium nuclei of the formulas (1) to (12) described above, those represented by the formulas (3), (9) and (10) are more preferably used; particularly, the formula (3) is preferable.
item-50 at level 3: paragraph: R.sub.1 to R.sub.28, R.sub.1 ' to R.sub.21 ' and R.sub.1 " to R.sub.21 " preferably contain one or more well-known polar groups in order to impart water solubility to a compound (labeling agent) represented by the general formula (I), (II) or (III). The polar groups include, for example, hydroxyl group, alkylhydroxyl group, sulfonate group, alkylsulfonate group, carboxylate group, alkylcarboxylate group, tetra-ammonium base and the like. R.sub.1 to R.sub.28, R.sub.1 ' to R.sub.21 ', and R.sub.1 " to R.sub.21 " preferably contain one or more well-known reactive groups in order that the compound of the general formula (I) can form a covalent bond with a substance from a living organism.
item-51 at level 3: paragraph: The reactive groups include the reactive sites of isocyanate, isothiocyanate, succinimide ester, sulfosuccinimide ester, imide ester, hydrazine, nitroaryl halide, piperidine disulfide, maleimide, thiophthalimide, acid halide, sulfonyl halide, aziridine, azide nitrophenyl, azide amino, 3-(2-pyridyldithio) propionamide and the like. In these reactive sites, the following spacer groups (n=0, 1 to 6) may be interposed in order to prevent steric hindrance during the bonding of a labeling agent and a substance from a living organism.
item-52 at level 3: paragraph: Preferable reactive groups include isothiocyanate, sulfosuccinimide ester, succinimide ester, maleimide and the like. X.sub.1.sup..crclbar. represents an anion, including chloride ion, bromide ion, iodide ion, perchlorate ion, benzenesulfonate ion, p-toluene sulfonate ion, methylsulfate ion, ethylsulfate ion, propylsulfate ion, tetrafluoroborate ion, tetraphenylborate ion, hexafluorophosphate ion, benzenesulfinic acid salt ion, acetate ion, trifluoroacetate ion, propionate ion, benzoate ion, oxalate ion, succinate ion, malonate ion, oleate ion, stearate ion, citrate ion, monohydrogen diphosphate ion, dihydrogen monophosphate ion, pentachlorostannate ion, chlorosulfonate ion, fluorosulfonate ion, trifluoromethane sulfonate ion, hexafluoroantimonate ion, molybdate ion, tungstate ion, titanate ion, zirconate ion and the like.
item-53 at level 3: paragraph: Specific examples of these labeling agents are illustrated in Tables 1, 2 and 3, but are not limited thereto.
item-54 at level 3: paragraph: The synthetic method of these azulene dyes is described in U.S. Pat. No. 4,738,908.
item-55 at level 2: section_header: CLAIMS
item-56 at level 3: paragraph: 1. A labeled complex for detecting a subject compound to be analyzed by means of optical means using near-infrared radiation which complex comprises a substance from a living organism and a labeling agent fixed onto the substance, the substance capable of specifically binding to the subject compound, wherein the labeling agent comprises a compound represented by the general formula (IV): wherein A, B, D and E are independently selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group having two or more carbon atoms, alkenyl group, aralkyl group, aryl group, styryl group and heterocyclic group, and at least one of A and B is a substituted or unsubstituted aryl group, and at least one of D and E is a substituted or unsubstituted aryl group; r.sub.1 ' and r.sub.2 ' are individually selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group, cyclic alkyl group, alkenyl group, aralkyl group and aryl group; k is 0 or 1; l is 0, 1 or 2; and X.sub.2.sup..crclbar. represents an anion.
item-57 at level 3: paragraph: 2. The labeled complex according to claim 1, wherein the substance from a living organism is an antibody or an antigen.
item-58 at level 3: paragraph: 3. The labeled complex according to claim 1, wherein the substance from a living organism is a nucleic acid.
item-59 at level 3: paragraph: 4. The labeled complex according to claim 1, wherein the substituted aryl group constituting at least one of A and B is phenyl group substituted by dialkylamino group.
item-60 at level 3: paragraph: 5. The labeled complex according to claim 1, wherein the substituted aryl group constituting at least one of D and E is phenyl group substituted by dialkylamino group.
item-61 at level 3: paragraph: 6. The labeled complex according to claim 4 or 5, wherein the dialkylamino group is a diethylamino group.
item-62 at level 3: paragraph: 7. The labeled complex according to claim 1, wherein each of A, B and D is dimethylaminophenyl group, E is aminophenyl group, k is 0 and l is 1.
item-63 at level 3: paragraph: 8. The labeled complex according to claim 1, wherein each of A, B and D is diethylaminophenyl group, E is phenyl group substituted by carboxyl group, k is 0 and l is 1.
item-64 at level 3: paragraph: 9. The labeled complex according to claim 1, wherein each of A, B, D and E is diethylaminophenyl group, k is 1 and l is 0.
item-65 at level 3: paragraph: 10. The labeled complex according to claim 1, wherein each of A, B, and D is diethylaminophenyl group, E is aminophenyl group, k is 0 and l is 1.
item-66 at level 3: paragraph: 11. The labeled complex according to claim 1, wherein A is dimethylaminophenyl group, each of B and E is ethoxyphenyl group, k is 0, l is 1 and D is represented by the following formula:
item-67 at level 3: paragraph: 12. A method of detecting a subject compound to be analyzed in a sample comprising the steps of: providing a labeled complex comprising a substance from a living organism and a labeling agent fixed onto the substance, the substance being capable of specifically binding to the subject compound; binding the labeled complex to the subject compound; and detecting the labeled complex to which the subject compound is bonded by means of optical means, wherein the labeling agent comprises a compound represented by the general formula (IV): wherein A, B, D and E are independently selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group having two or more carbon atoms, alkenyl group, aralkyl group, aryl group, styryl group and heterocyclic group, and at least one of A and B is a substituted or unsubstituted aryl group, and at least one of D and E is a substituted or unsubstituted aryl group; r.sub.1 ' and r.sub.2 ' are individually selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group, cyclic alkyl group, alkenyl group, aralkyl group and aryl group; k is 0 or 1; l is 0, 1 or 2; and X.sub.2.sup..crclbar. represents an anion.
item-68 at level 3: paragraph: 13. The method according to claim 12, wherein the substance from a living organism is an antibody or an antigen.
item-69 at level 3: paragraph: 14. The method according to claim 12, wherein the substance from a living organism is a nucleic acid.
item-70 at level 3: paragraph: 15. The analyzing method according to any one of claims 12, 13 and 14, wherein the optical means is an optical means using near-infrared ray.
item-71 at level 3: paragraph: 16. The method according to claim 12, wherein each of A, B and D is dimethylaminophenyl group, E is aminophenyl group, k is 0 and l is 1.
item-72 at level 3: paragraph: 17. The method according to claim 12, wherein each of A, B and D is diethylaminophenyl group, E is phenyl group substituted by carboxyl group, k is 0 and l is 1.
item-73 at level 3: paragraph: 18. The method according to claim 12, wherein each of A, B, D and E is diethylaminophenyl group, k is 1 and l is 0.
item-74 at level 3: paragraph: 19. The method according to claim 12, wherein each of A, B and D is diethylaminophenyl group, E is aminophenyl group, k is 0 and l is 1.
item-75 at level 3: paragraph: 20. The method according to claim 12, wherein A is dimethylaminophenyl group, each of B and E is ethoxyphenyl group, k is 0, l is 1 and D is represented by the following formula: ##STR102##

File diff suppressed because it is too large

View File

@ -0,0 +1,149 @@
# Carbocation containing cyanine-type dye
## ABSTRACT
To provide a reagent with excellent stability under storage, which can detect a subject compound to be measured with higher specificity and sensitivity. Complexes of a compound represented by the general formula (IV):
## BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a labeled complex for microassay using near-infrared radiation. More specifically, the present invention relates to a labeled complex capable of specifically detecting a certain particular component in a complex mixture with a higher sensitivity.
2. Related Background Art
On irradiating a laser beam on a trace substance labeled with dyes and the like, information originating from the substance is generated, such as scattered light, absorbed light, fluorescent light and, furthermore, photoacoustic signals. It is widely known in the field of laser-based analysis that detecting such information allows microassays to be practiced rapidly and with a higher precision.
A gas laser, represented by the argon laser and the helium-neon laser, has conventionally been used exclusively as a laser source. In recent years, however, the semiconductor laser has been developed, and based on its characteristic features such as low cost, small size and easy output control, it is now desired to use the semiconductor laser as a light source.
If diagnostically useful substances from living organisms are assayed at wavelengths in the ultraviolet and visible regions, as has conventionally been done, the background (blank) due to the intrinsic fluorescence of naturally occurring products, such as flavin, pyridine coenzyme and serum proteins, which are generally contained in samples, is likely to increase. If a light source in the near-infrared region can be used instead, such background from naturally occurring products can be eliminated, so that the sensitivity to the substances to be measured can consequently be enhanced.
However, the oscillation wavelength of a semiconductor laser is generally in the red and near-infrared regions (670 to 830 nm), where few dyes generate fluorescence via absorption or excitation. A representative example of such dyes is the polymethine-type dye having a longer conjugated chain. Examples of labeling substances from living organisms with a polymethine-type dye and using the labeled substances for microanalysis are reported by K. Sauda, T. Imasaka, et al. in Anal. Chem., 58, 2649-2653 (1986), where plasma protein is labeled with a cyanine dye having a sulfonate group (for example, Indocyanine Green) for analysis by high-performance liquid chromatography.
Japanese Patent Application Laid-open No. 2-191674 discloses that various cyanine dyes having sulfonic acid groups or sulfonate groups are used for labeling substances from living organisms and for detecting the fluorescence.
However, these known cyanine dyes emitting fluorescence via absorption or excitation in the near-infrared region are generally not particularly stable under light or heat.
If the dyes are used as labeling agents and bonded to substances from living organisms such as antibodies for preparing complexes, the complexes are likely to be oxidized easily by environmental factors such as light, heat, moisture, atmospheric oxygen and the like or to be subjected to modification such as generating cross-links. Particularly in water, a modification such as hydrolysis is further accelerated, disadvantageously. Therefore, the practical use of these complexes as detecting reagents in carrying out the microassay of the components of living organisms has encountered difficulties because of their poor stability under storage.
## SUMMARY OF THE INVENTION
The present inventors have made various investigations so as to solve the above problems, and have found that a dye of a particular structure, more specifically a particular polymethine dye, and among others, a dye having an azulene skeleton, is extremely stable even after its immobilization as a labeling agent onto substances from living organisms. Thus, the inventors have achieved the present invention. It is an object of the present invention to provide a labeled complex with excellent storage stability which can overcome the above problems.
According to an aspect of the present invention, there is provided a labeled complex for detecting a subject compound to be analyzed by means of optical means using near-infrared radiation which complex comprises a substance from a living organism and a labeling agent fixed onto the substance and is bonded to the subject compound to be analyzed, wherein the labeling agent comprises a compound represented by the general formula (I), (II) or (III): wherein R.sub.1 through R.sub.7 are independently selected from the group consisting of hydrogen atom, halogen atom, alkyl group, aryl group, aralkyl group, sulfonate group, amino group, styryl group, nitro group, hydroxyl group, carboxyl group, cyano group, or arylazo group; R.sub.1 through R.sub.7 may be bonded to each other to form a substituted or an unsubstituted condensed ring; R represents a divalent organic residue; and X.sub.1.sup..crclbar. represents an anion; wherein R.sub.8 through R.sub.14 are independently selected from the group consisting of hydrogen atom, halogen atom, alkyl group, aryl group, aralkyl group, sulfonate group, amino group, styryl group, nitro group, hydroxyl group, carboxyl group, cyano group, or arylazo group; R.sub.8 through R.sub.14 may be bonded to each other to form a substituted or an unsubstituted condensed ring; and R.sub.A represents a divalent organic residue; wherein R.sub.15 through R.sub.21 are independently selected from the group consisting of hydrogen atom, halogen atom, alkyl group, aryl group, a substituted or an unsubstituted aralkyl group, a substituted or an unsubstituted amino group, a substituted or an unsubstituted styryl group, nitro group, sulfonate group, hydroxyl group, carboxyl group, cyano group, or arylazo group; R.sub.15 through R.sub.21 may or may not be bonded to each other to form a substituted or an unsubstituted condensed ring; R.sub.B represents a divalent organic residue; and X.sub.1.sup..crclbar. represents an anion.
According to another aspect of the present invention, there is provided a labeled complex for detecting a subject compound to be analyzed by means of optical means using near-infrared radiation which complex comprises a substance from a living organism and a labeling agent fixed onto the substance and is bonded to the subject compound to be analyzed, wherein the labeling agent comprises a compound represented by the general formula (IV): wherein A, B, D and E are independently selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group having two or more carbon atoms, alkenyl group, aralkyl group, aryl group, styryl group and heterocyclic group; r.sub.1 ' and r.sub.2 ' are individually selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group, cyclic alkyl group, alkenyl group, aralkyl group and aryl group; k is 0 or 1; l is 0, 1 or 2; and X.sub.2.sup..crclbar. represents an anion.
According to another aspect of the present invention, there is provided a method of detecting a subject compound to be analyzed by means of optical means which method comprises using a labeled complex comprised of a substance from a living organism and a labeling agent fixed onto the substance and bonding the complex to the subject compound to be analyzed, wherein the labeling agent comprises a compound represented by the general formula (I), (II) or (III).
According to still another aspect of the present invention, there is provided a method of detecting a subject compound to be analyzed by means of optical means which method comprises using a labeled complex comprised of a substance from a living organism and a labeling agent fixed onto the substance and bonding the complex to the subject compound to be analyzed, wherein the labeling agent comprises a compound represented by the general formula (IV).
## BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts one example of fluorescence emitting wave form of a labeling agent.
## DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention will now be explained in detail hereinbelow.
In accordance with the present invention, the compound of the general formula (I), (II) or (III) is employed as a labeling agent, wherein R.sub.1 to R.sub.21 individually represent hydrogen atom, halogen atom (chlorine atom, bromine atom, and iodine atom), a monovalent organic residue, or another such functional group described above. The monovalent organic residue can be selected from a wide variety of such residues.
The alkyl group is preferably in straight chain or branched chain, having a carbon number of 1 to 12, such as for example methyl group, ethyl group, n-propyl group, iso-propyl group, n-butyl group, sec-butyl group, iso-butyl group, t-butyl group, n-amyl group, t-amyl group, n-hexyl group, n-octyl group, t-octyl group and the like.
The aryl group preferably has a carbon number of 6 to 20, such as for example phenyl group, naphthyl group, methoxyphenyl group, diethylaminophenyl group, dimethylaminophenyl group and the like.
The substituted aralkyl group preferably has a carbon number of 7 to 19, such as for example carboxybenzyl group, sulfobenzyl group, hydroxybenzyl group and the like.
The unsubstituted aralkyl group preferably has a carbon number of 7 to 19, such as for example benzyl group, phenethyl group, .alpha.-naphthylmethyl group, .beta.-naphthylmethyl group and the like.
The substituted or unsubstituted amino group preferably has a carbon number of 10 or less, such as for example amino group, dimethylamino group, diethylamino group, dipropylamino group, acetylamino group, benzoylamino group and the like.
The substituted or unsubstituted styryl group preferably has a carbon number of 8 to 14, such as for example styryl group, dimethylaminostyryl group, diethylaminostyryl group, dipropylaminostyryl group, methoxystyryl group, ethoxystyryl group, methylstyryl group and the like.
The aryl azo group preferably has a carbon number of 6 to 14, such as for example phenylazo group, .alpha.-naphthylazo group, .beta.-naphthylazo group, dimethylaminophenylazo group, chlorophenylazo group, nitrophenylazo group, methoxyphenylazo group and the like.
Of the combinations of R.sub.1 and R.sub.2, R.sub.2 and R.sub.3, R.sub.3 and R.sub.4, R.sub.4 and R.sub.5, R.sub.5 and R.sub.6, and R.sub.6 and R.sub.7 of the general formula (I), at least one combination may form a substituted or an unsubstituted condensed ring. The condensed ring may be five, six or seven membered, including aromatic ring (benzene, naphthalene, chlorobenzene, bromobenzene, methyl benzene, ethyl benzene, methoxybenzene, ethoxybenzene and the like); heterocyclic ring (furan ring, benzofuran ring, pyrrole ring, thiophene ring, pyridine ring, quinoline ring, thiazole ring and the like); and aliphatic ring (dimethylene, trimethylene, tetramethylene and the like). This is the case with the general formulas (II) and (III).
For the general formula (II), at least one combination among the combinations of R.sub.8 and R.sub.9, R.sub.9 and R.sub.10, R.sub.10 and R.sub.11, R.sub.11 and R.sub.12, R.sub.12 and R.sub.13, and R.sub.13 and R.sub.14, may form a substituted or an unsubstituted condensed ring.
Also for the general formula (III), at least one combination of the combinations of R.sub.15 and R.sub.16, R.sub.16 and R.sub.17, R.sub.17 and R.sub.18, R.sub.18 and R.sub.19, R.sub.19 and R.sub.20, and R.sub.20 and R.sub.21, may form a substituted or an unsubstituted condensed ring.
In the general formulas (I) to (IV) described above, the general formula (I) is specifically preferable; preference is also given individually to hydrogen atom, alkyl group and sulfonate group in the case of R.sub.1 to R.sub.7 ; hydrogen atom, alkyl group and sulfonate group in the case of R.sub.8 to R.sub.14 ; hydrogen atom, alkyl group and sulfonate group in the case of R.sub.15 to R.sub.21 ; alkyl group and aryl group in the case of A, B, D and E; hydrogen atom and alkyl group in the case of r.sub.1 ' and r.sub.2 '.
In the general formula (I), R represents a divalent organic residue bonded via a double bond. Specific examples of a compound containing such R to be used in the present invention include those represented by the following general formulas (1) to (12), wherein Q.sup..sym. represents the following azulenium salt nucleus and the right side excluding Q.sup..sym. represents R, wherein the relation between the azulenium salt nucleus represented by Q.sup..sym. and the azulene salt nucleus on the right side in the formula (3) may be symmetric or asymmetric. In the above formulas (1) to (12), as in the case of R.sub.1 to R.sub.7, R.sub.1 ' to R.sub.7 ' and R.sub.1 " to R.sub.7 " independently represent hydrogen atom, halogen atom, alkyl group, aryl group, aralkyl group, amino group, styryl group, nitro group, hydroxyl group, carboxyl group, cyano group or aryl azo group, while R.sub.1 ' to R.sub.7 ' and R.sub.1 " to R.sub.7 " independently may form a substituted or an unsubstituted condensed ring; n is 0, 1 or 2; r is an integer of 1 to 8; s represents 0 or 1; and t represents 1 or 2.
M.sub.2 represents a non-metallic atom group required for the completion of a nitrogen-containing heterocyclic ring.
Specific examples of M.sub.2 are atom groups required for the completion of a nitrogen-containing heterocyclic ring, including pyridine, thiazole, benzothiazole, naphthothiazole, oxazole, benzoxazole, naphthoxazole, imidazole, benzimidazole, naphthoimidazole, 2-quinoline, 4-quinoline, isoquinoline or indole, and may be substituted by halogen atom (chlorine atom, bromine atom, iodine atom and the like), alkyl group (methyl, ethyl, propyl, butyl and the like), aryl group (phenyl, tolyl, xylyl and the like), and aralkyl group (benzyl, p-methylbenzyl and the like).
R.sub.22 represents hydrogen atom, nitro group, sulfonate group, cyano group, alkyl group (methyl, ethyl, propyl, butyl and the like), or aryl group (phenyl, tolyl, xylyl and the like). R.sub.23 represents alkyl group (methyl, ethyl, propyl, butyl and the like), a substituted alkyl group (2-hydroxyethyl, 2-methoxyethyl, 2-ethoxyethyl, 3-hydroxypropyl, 3-methoxypropyl, 3-ethoxypropyl, 3-chloropropyl, 3-bromopropyl, 3-carboxylpropyl and the like), a cyclic alkyl group (cyclohexyl, cyclopropyl), aralkyl group (benzyl, 2-phenylethyl, 3-phenylpropyl, 3-phenylbutyl, 4-phenylbutyl, .alpha.-naphthylmethyl, .beta.-naphthylmethyl), a substituted aralkyl group (methylbenzyl, ethylbenzyl, dimethylbenzyl, trimethylbenzyl, chlorobenzyl, bromobenzyl and the like), aryl group (phenyl, tolyl, xylyl, .alpha.-naphthyl, .beta.-naphthyl) or a substituted aryl group (chlorophenyl, dichlorophenyl, trichlorophenyl, ethylphenyl, methoxyphenyl, dimethoxyphenyl, aminophenyl, sulfonate phenyl, nitrophenyl, hydroxyphenyl and the like).
R.sub.24 represents a substituted or an unsubstituted aryl group or the cation group thereof, specifically including a substituted or an unsubstituted aryl group (phenyl, tolyl, xylyl, biphenyl, aminophenyl, .alpha.-naphthyl, .beta.-naphthyl, anthranyl, pyrenyl, methoxyphenyl, dimethoxyphenyl, trimethoxyphenyl, ethoxyphenyl, diethoxyphenyl, chlorophenyl, dichlorophenyl, trichlorophenyl, bromophenyl, dibromophenyl, tribromophenyl, ethylphenyl, diethylphenyl, nitrophenyl, aminophenyl, dimethylaminophenyl, diethylaminophenyl, dibenzylaminophenyl, dipropylaminophenyl, morpholinophenyl, piperidinylphenyl, piperidinophenyl, diphenylaminophenyl, acetylaminophenyl, benzoylaminophenyl, acetylphenyl, benzoylphenyl, cyanophenyl, sulfonate phenyl, carboxylate phenyl and the like).
R.sub.25 represents a heterocyclic ring or the cation group thereof, specifically including a monovalent heterocyclic ring derived from cyclic rings, such as furan, thiophene, benzofuran, thionaphthene, dibenzofuran, carbazole, phenothiazine, phenoxazine, pyridine and the like.
R.sub.26 represents hydrogen atom, alkyl group (methyl, ethyl, propyl, butyl and the like), or a substituted or an unsubstituted aryl group (phenyl, tolyl, xylyl, biphenyl, ethylphenyl, chlorophenyl, methoxyphenyl, ethoxyphenyl, nitrophenyl, aminophenyl, dimethylaminophenyl, diethylaminophenyl, acetylaminophenyl, .alpha.-naphthyl, .beta.-naphthyl, anthranyl, pyrenyl, sulfonate phenyl, carboxylate phenyl and the like). In the formula, Z.sub.7 represents an atom group required for the completion of pyran, thiapyran, selenapyran, telluropyran, benzopyran, benzothiapyran, benzoselenapyran, benzotelluropyran, naphthopyran, naphthothiapyran, naphthoselenapyran or naphthotelluropyran.
L.sub.7 represents sulfur atom, oxygen atom, selenium atom or tellurium atom.
R.sub.27 and R.sub.28 individually represent hydrogen atom, alkoxy group, a substituted or an unsubstituted aryl group, alkenyl group or a heterocyclic group.
More specifically, R.sub.27 and R.sub.28 individually represent hydrogen atom, alkyl group (methyl, ethyl, propyl, butyl and the like), alkyl sulfonate group, alkoxyl group (methoxy, ethoxy, propoxy, ethoxyethyl, methoxyethyl and the like), aryl group (phenyl, tolyl, xylyl, sulfonate phenyl, chlorophenyl, biphenyl, methoxyphenyl and the like), a substituted or an unsubstituted styryl group (styryl, p-methylstyryl, o-chlorostyryl, p-methoxystyryl and the like), a substituted or an unsubstituted 4-phenyl-1,3-butadienyl group (4-phenyl-1,3-butadienyl, 4-(p-methylphenyl)-1,3-butadienyl and the like), or a substituted or an unsubstituted heterocyclic group (quinolyl, pyridyl, carbazoyl, furyl and the like).
As in the case of R, the same is true with R.sub.A and R.sub.B of the general formulas (II) and (III), respectively.
Then, in R.sub.A, the symbols R.sub.8 ' to R.sub.14 ' individually correspond to R.sub.1 ' to R.sub.7 '; R.sub.8 " to R.sub.14 " individually correspond to R.sub.1 " to R.sub.7 "; in R.sub.B, R.sub.15 ' to R.sub.21 ' individually correspond to R.sub.1 ' to R.sub.7 '; R.sub.15 " to R.sub.21 " individually correspond to R.sub.1 " to R.sub.7 ".
Among the azulenium nuclei of the formulas (1) to (12) described above, those represented by the formulas (3), (9) and (10) are more preferably used; particularly, the formula (3) is preferable.
R.sub.1 to R.sub.28, R.sub.1 ' to R.sub.21 ' and R.sub.1 " to R.sub.21 " preferably contain one or more well-known polar groups in order to impart water solubility to a compound (labeling agent) represented by the general formula (I), (II) or (III). The polar groups include, for example, hydroxyl group, alkylhydroxyl group, sulfonate group, alkylsulfonate group, carboxylate group, alkylcarboxylate group, tetra-ammonium base and the like. R.sub.1 to R.sub.28, R.sub.1 ' to R.sub.21 ', and R.sub.1 " to R.sub.21 " preferably contain one or more well-known reactive groups in order that the compound of the general formula (I) can form a covalent bond with a substance from a living organism.
The reactive groups include the reactive sites of isocyanate, isothiocyanate, succinimide ester, sulfosuccinimide ester, imide ester, hydrazine, nitroaryl halide, piperidine disulfide, maleimide, thiophthalimide, acid halide, sulfonyl halide, aziridine, azide nitrophenyl, azide amino, 3-(2-pyridyldithio) propionamide and the like. In these reactive sites, the following spacer groups (n=0, 1 to 6) may be interposed in order to prevent steric hindrance during the bonding of a labeling agent and a substance from a living organism.
Preferable reactive groups include isothiocyanate, sulfosuccinimide ester, succinimide ester, maleimide and the like. X.sub.1.sup..crclbar. represents an anion, including chloride ion, bromide ion, iodide ion, perchlorate ion, benzenesulfonate ion, p-toluene sulfonate ion, methylsulfate ion, ethylsulfate ion, propylsulfate ion, tetrafluoroborate ion, tetraphenylborate ion, hexafluorophosphate ion, benzenesulfinic acid salt ion, acetate ion, trifluoroacetate ion, propionate ion, benzoate ion, oxalate ion, succinate ion, malonate ion, oleate ion, stearate ion, citrate ion, monohydrogen diphosphate ion, dihydrogen monophosphate ion, pentachlorostannate ion, chlorosulfonate ion, fluorosulfonate ion, trifluoromethane sulfonate ion, hexafluoroantimonate ion, molybdate ion, tungstate ion, titanate ion, zirconate ion and the like.
Specific examples of these labeling agents are illustrated in Tables 1, 2 and 3, but are not limited thereto.
The synthetic method of these azulene dyes is described in U.S. Pat. No. 4,738,908.
## CLAIMS
1. A labeled complex for detecting a subject compound to be analyzed by means of optical means using near-infrared radiation which complex comprises a substance from a living organism and a labeling agent fixed onto the substance, the substance capable of specifically binding to the subject compound, wherein the labeling agent comprises a compound represented by the general formula (IV): wherein A, B, D and E are independently selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group having two or more carbon atoms, alkenyl group, aralkyl group, aryl group, styryl group and heterocyclic group, and at least one of A and B is a substituted or unsubstituted aryl group, and at least one of D and E is a substituted or unsubstituted aryl group; r.sub.1 ' and r.sub.2 ' are individually selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group, cyclic alkyl group, alkenyl group, aralkyl group and aryl group; k is 0 or 1; l is 0, 1 or 2; and X.sub.2.sup..crclbar. represents an anion.
2. The labeled complex according to claim 1, wherein the substance from a living organism is an antibody or an antigen.
3. The labeled complex according to claim 1, wherein the substance from a living organism is a nucleic acid.
4. The labeled complex according to claim 1, wherein the substituted aryl group constituting at least one of A and B is phenyl group substituted by dialkylamino group.
5. The labeled complex according to claim 1, wherein the substituted aryl group constituting at least one of D and E is phenyl group substituted by dialkylamino group.
6. The labeled complex according to claim 4 or 5, wherein the dialkylamino group is a diethylamino group.
7. The labeled complex according to claim 1, wherein each of A, B and D is dimethylaminophenyl group, E is aminophenyl group, k is 0 and l is 1.
8. The labeled complex according to claim 1, wherein each of A, B and D is diethylaminophenyl group, E is phenyl group substituted by carboxyl group, k is 0 and l is 1.
9. The labeled complex according to claim 1, wherein each of A, B, D and E is diethylaminophenyl group, k is 1 and l is 0.
10. The labeled complex according to claim 1, wherein each of A, B, and D is diethylaminophenyl group, E is aminophenyl group, k is 0 and l is 1.
11. The labeled complex according to claim 1, wherein A is dimethylaminophenyl group, each of B and E is ethoxyphenyl group, k is 0, l is 1 and D is represented by the following formula:
12. A method of detecting a subject compound to be analyzed in a sample comprising the steps of: providing a labeled complex comprising a substance from a living organism and a labeling agent fixed onto the substance, the substance being capable of specifically binding to the subject compound; binding the labeled complex to the subject compound; and detecting the labeled complex to which the subject compound is bonded by means of optical means, wherein the labeling agent comprises a compound represented by the general formula (IV): wherein A, B, D and E are independently selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group having two or more carbon atoms, alkenyl group, aralkyl group, aryl group, styryl group and heterocyclic group, and at least one of A and B is a substituted or unsubstituted aryl group, and at least one of D and E is a substituted or unsubstituted aryl group; r.sub.1 ' and r.sub.2 ' are individually selected from the group consisting of hydrogen atom, a substituted or an unsubstituted alkyl group, cyclic alkyl group, alkenyl group, aralkyl group and aryl group; k is 0 or 1; l is 0, 1 or 2; and X.sub.2.sup..crclbar. represents an anion.
13. The method according to claim 12, wherein the substance from a living organism is an antibody or an antigen.
14. The method according to claim 12, wherein the substance from a living organism is a nucleic acid.
15. The analyzing method according to any one of claims 12, 13 and 14, wherein the optical means is an optical means using near-infrared ray.
16. The method according to claim 12, wherein each of A, B and D is dimethylaminophenyl group, E is aminophenyl group, k is 0 and l is 1.
17. The method according to claim 12, wherein each of A, B and D is diethylaminophenyl group, E is phenyl group substituted by carboxyl group, k is 0 and l is 1.
18. The method according to claim 12, wherein each of A, B, D and E is diethylaminophenyl group, k is 1 and l is 0.
19. The method according to claim 12, wherein each of A, B and D is diethylaminophenyl group, E is aminophenyl group, k is 0 and l is 1.
20. The method according to claim 12, wherein A is dimethylaminophenyl group, each of B and E is ethoxyphenyl group, k is 0, l is 1 and D is represented by the following formula: ##STR102##

View File

@ -0,0 +1,109 @@
item-0 at level 0: unspecified: group _root_
item-1 at level 1: title: Methods and apparatus for turbo code
item-2 at level 2: section_header: ABSTRACT
item-3 at level 3: paragraph: An interleaver receives incoming data frames of size N. The interleaver indexes the elements of the frame with an N₁×N₂ index array. The interleaver then effectively rearranges (permutes) the data by permuting the rows of the index array. The interleaver employs the equation I(j,k)=I(j,(αjk+βj)modP) to permute the columns (indexed by k) of each row (indexed by j). P is at least equal to N₂, βj is a constant which may be different for each row, and each αj is a relative prime number relative to P. After permuting, the interleaver outputs the data in a different order than received (e.g., data received sequentially row by row is output sequentially column by column).
item-4 at level 2: section_header: CROSS-REFERENCE TO RELATED APPLICATIONS
item-5 at level 3: paragraph: This application claims the benefit of U.S. Provisional Application No. 60/115,394 filed Jan. 11, 1999.
item-6 at level 2: section_header: FIELD OF THE INVENTION
item-7 at level 3: paragraph: This invention relates generally to communication systems and, more particularly, to interleavers for performing code modulation.
item-8 at level 2: section_header: BACKGROUND OF THE INVENTION
item-9 at level 3: paragraph: Techniques for encoding communication channels, known as coded modulation, have been found to improve the bit error rate (BER) of electronic communication systems such as modem and wireless communication systems. Turbo coded modulation has proven to be a practical, power-efficient, and bandwidth-efficient modulation method for “random-error” channels characterized by additive white Gaussian noise (AWGN) or fading. These random-error channels can be found, for example, in the code division multiple access (CDMA) environment. Since the capacity of a CDMA environment is dependent upon the operating signal to noise ratio, improved performance translates into higher capacity.
item-10 at level 3: paragraph: An aspect of turbo coders which makes them so effective is an interleaver which permutes the original received or transmitted data frame before it is input to a second encoder. The permuting is accomplished by randomizing portions of the signal based upon one or more randomizing algorithms. Combining the permuted data frames with the original data frames has been shown to achieve low BERs in AWGN and fading channels. The interleaving process increases the diversity in the data such that if the modulated symbol is distorted in transmission the error may be recoverable with the use of error correcting algorithms in the decoder.
item-11 at level 3: paragraph: A conventional interleaver collects, or frames, the signal points to be transmitted into an array, where the array is sequentially filled up row by row. After a predefined number of signal points have been framed, the interleaver is emptied by sequentially reading out the columns of the array for transmission. As a result, signal points in the same row of the array that were near each other in the original signal point flow are separated by a number of signal points equal to the number of rows in the array. Ideally, the number of columns and rows would be picked such that interdependent signal points, after transmission, would be separated by more than the expected length of an error burst for the channel.
item-12 at level 3: paragraph: Non-uniform interleaving achieves “maximum scattering” of data and “maximum disorder” of the output sequence. Thus the redundancy introduced by the two convolutional encoders is more equally spread in the output sequence of the turbo encoder. The minimum distance is increased to much higher values than for uniform interleaving. A persistent problem for non-uniform interleaving is how to practically implement the interleaving while achieving sufficient “non-uniformity,” and minimizing delay compensations which limit the use for applications with real-time requirements.
item-13 at level 3: paragraph: Finding an effective interleaver is a current topic in the third generation CDMA standard activities. It has been determined and generally agreed that, as the frame size approaches infinity, the most effective interleaver is the random interleaver. However, for finite frame sizes, the decision as to the most effective interleaver is still open for discussion.
item-14 at level 3: paragraph: Accordingly there exists a need for systems and methods of interleaving codes that improve non-uniformity for finite frame sizes.
item-15 at level 3: paragraph: There also exists a need for such systems and methods of interleaving codes which are relatively simple to implement.
item-16 at level 3: paragraph: It is thus an object of the present invention to provide systems and methods of interleaving codes that improve non-uniformity for finite frame sizes.
item-17 at level 3: paragraph: It is also an object of the present invention to provide systems and methods of interleaving codes which are relatively simple to implement.
item-18 at level 3: paragraph: These and other objects of the invention will become apparent to those skilled in the art from the following description thereof.
item-19 at level 2: section_header: SUMMARY OF THE INVENTION
item-20 at level 3: paragraph: The foregoing objects, and others, may be accomplished by the present invention, which interleaves a data frame, where the data frame has a predetermined size and is made up of portions. An embodiment of the invention includes an interleaver for interleaving these data frames. The interleaver includes an input memory configured to store a received data frame as an array organized into rows and columns, a processor connected to the input memory and configured to permute the received data frame in accordance with the equation D(j,k)=D(j,(αjk+βj)modP), and a working memory in electrical communication with the processor and configured to store a permuted version of the data frame. The elements of the equation are as follows: D is the data frame, j and k are indexes to the rows and columns, respectively, in the data frame, α and β are sets of constants selected according to the current row, and P and each αj are relative prime numbers. (“Relative prime numbers” connotes a set of numbers that have no common divisor other than 1. Members of a set of relative prime numbers, considered by themselves, need not be prime numbers.)
item-21 at level 3: paragraph: Another embodiment of the invention includes a method of storing a data frame and indexing it by an N₁×N₂ index array I, where the product of N₁ and N₂ is at least equal to N. The elements of the index array indicate positions of the elements of the data frame. The data frame elements may be stored in any convenient manner and need not be organized as an array. The method further includes permuting the index array according to I(j,k)=I(j,(αjk+βj)modP), wherein I is the index array, and as above j and k are indexes to the rows and columns, respectively, in the index array, α and β are sets of constants selected according to the current row, and P and each αj are relative prime numbers. The data frame, as indexed by the permuted index array I, is effectively permuted.
item-22 at level 3: paragraph: Still another embodiment of the invention includes an interleaver which includes a storage device for storing a data frame and for storing an N₁×N₂ index array I, where the product of N₁ and N₂ is at least equal to N. The elements of the index array indicate positions of the elements of the data frame. The data frame elements may be stored in any convenient manner and need not be organized as an array. The interleaver further includes a permuting device for permuting the index array according to I(j,k)=I(j,(αjk+βj)modP), wherein I is the index array, and as above j and k are indexes to the rows and columns, respectively, in the index array, α and β are sets of constants selected according to the current row, and P and each αj are relative prime numbers. The data frame, as indexed by the permuted index array I, is effectively permuted.
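As an illustration of the permutation these embodiments describe, the following is a minimal Python sketch of the row-wise permutation I(j,k)=I(j,(αjk+βj)modP) followed by a column-by-column read-out. The function name and parameters are illustrative assumptions, not the patent's code, and the sketch assumes P=N₂ (for P>N₂ the output would additionally be pruned, as the detailed description explains later).

```python
def interleave(frame, n1, n2, alpha, beta):
    """Fill an n1 x n2 array row by row, permute each row j by
    (alpha[j]*k + beta[j]) mod P, then read out column by column."""
    p = n2  # assumed here; the patent only requires P >= N2
    assert len(frame) == n1 * n2
    rows = [frame[j * n2:(j + 1) * n2] for j in range(n1)]
    permuted = [[rows[j][(alpha[j] * k + beta[j]) % p] for k in range(n2)]
                for j in range(n1)]
    return [permuted[j][k] for k in range(n2) for j in range(n1)]
```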
item-23 at level 3: paragraph: The invention will next be described in connection with certain illustrated embodiments and practices. However, it will be clear to those skilled in the art that various modifications, additions and subtractions can be made without departing from the spirit or scope of the claims.
item-24 at level 2: section_header: BRIEF DESCRIPTION OF THE DRAWINGS
item-25 at level 3: paragraph: The invention will be more clearly understood by reference to the following detailed description of an exemplary embodiment in conjunction with the accompanying drawings, in which:
item-26 at level 3: paragraph: FIG. 1 depicts a diagram of a conventional turbo encoder.
item-27 at level 3: paragraph: FIG. 2 depicts a block diagram of the interleaver illustrated in FIG. 1;
item-28 at level 3: paragraph: FIG. 3 depicts an array containing a data frame, and permutation of that array;
item-29 at level 3: paragraph: FIG. 4 depicts a data frame stored in consecutive storage locations;
item-30 at level 3: paragraph: FIG. 5 depicts an index array for indexing the data frame shown in FIG. 4, and permutation of the index array.
item-31 at level 2: section_header: DETAILED DESCRIPTION OF THE INVENTION
item-32 at level 3: paragraph: FIG. 1 illustrates a conventional turbo encoder. As illustrated, conventional turbo encoders include two encoders 20 and an interleaver 100. An interleaver 100 in accordance with the present invention receives incoming data frames 110 of size N, where N is the number of bits, bytes, or other portions into which the frame may be separated; these portions are regarded as frame elements. The interleaver 100 separates the N frame elements into sets of data, such as rows. The interleaver then rearranges (permutes) the data in each set (row) in a pseudo-random fashion. The interleaver 100 may employ different methods for rearranging the data of the different sets. However, those skilled in the art will recognize that one or more of the methods could be reused on one or more of the sets without departing from the scope of the invention. After permuting the data in each of the sets, the interleaver outputs the data in a different order than received.
item-33 at level 3: paragraph: The interleaver 100 may store the data frame 110 in an array of size N₁×N₂ such that N₁*N₂=N. An example depicted in FIG. 3 shows an array 350 having 3 rows (N₁=3) of 6 columns (N₂=6) for storing a data frame 110 having 18 elements, denoted Frame Element 00 (FE00) through FE17 (N=18). While this is the preferred method, the array may also be designed such that N₁*N₂ is a fraction of N such that one or more of the smaller arrays is/are operated on in accordance with the present invention and the results from each of the smaller arrays are later combined.
item-34 at level 3: paragraph: To permute array 350 according to the present invention, each row j of array 350 is individually operated on, to permute the columns k of each row according to the equation:
item-35 at level 3: paragraph: D₁(j,k)=D(j,(αjk+βj)modP)
item-36 at level 3: paragraph: where:
item-37 at level 3: paragraph: j and k are row and column indices, respectively, in array 350;
item-38 at level 3: paragraph: P is a number greater than or equal to N₂;
item-39 at level 3: paragraph: αj and P are relative prime numbers (one or both can be non-prime numbers, but the only divisor that they have in common is 1);
item-40 at level 3: paragraph: βj is a constant, one value associated with each row.
item-41 at level 3: paragraph: Once the data for all of the rows are permuted, the new array is read out column by column. Also, once the rows have been permuted, it is possible (but not required) to permute the data grouped by column before outputting the data. In the event that both the rows and columns are permuted, the rows, the columns or both may be permuted in accordance with the present invention. It is also possible to transpose rows of the array, for example by transposing bits in the binary representation of the row index j. (In a four-row array, for example, the second and third rows would be transposed under this scheme.) It is also possible that either the rows or the columns, but not both, may be permuted in accordance with a different method of permuting. Those skilled in the art will recognize that the system could be rearranged to store the data column by column, permute each set of data in a column and read out the results row by row without departing from the scope of the invention.
item-42 at level 3: paragraph: These methods of interleaving are based on number theory and may be implemented in software and/or hardware (e.g., application specific integrated circuits (ASIC), programmable logic arrays (PLA), or any other suitable logic devices). Further, a single pseudo random sequence generator (e.g., m-sequence, M-sequence, Gold sequence, Kasami sequence, etc.) can be employed as the interleaver.
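To illustrate the last point, here is a sketch of the kind of m-sequence generator that could drive such an interleaver, assuming a 4-bit LFSR with feedback polynomial x⁴+x+1; the taps, seed, and length are illustrative assumptions, since the patent does not specify a particular generator.

```python
def m_sequence(taps=(4, 1), state=0b0001, length=15):
    """Yield one period of a maximal-length LFSR (m-)sequence.
    taps=(4, 1) realizes x^4 + x + 1, giving period 2^4 - 1 = 15 (assumed taps)."""
    n = max(taps)
    for _ in range(length):
        yield state & 1                        # output the low bit
        fb = 0
        for t in taps:                         # XOR together the tapped bits
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (n - 1))

# list(m_sequence()) yields 15 bits that then repeat with period 15.
```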
item-43 at level 3: paragraph: In the example depicted in FIG. 3, the value selected for P is 6, the values of α are 5 for all three rows, and the values of β are 1, 2, and 3 respectively for the three rows. (These are merely exemplary. Other numbers may be chosen to achieve different permutation results.) Each value of α (5) is a relative prime number relative to the value of P (6), as stipulated above.
item-44 at level 3: paragraph: Calculating the specified equation with the specified values for permuting row 0 of array D 350 into row 0 of array D₁ 360 yields column indices (5k+1) mod 6 = 1, 0, 5, 4, 3, 2 for k = 0 . . . 5, so row 0 of D₁ holds FE01, FE00, FE05, FE04, FE03, FE02,
item-45 at level 3: paragraph: and the permuted data frame is contained in array D₁ 360 shown in FIG. 3. Outputting the array column by column outputs the frame elements in the order:
item-46 at level 3: paragraph: 1,8,15,0,7,14,5,6,13,4,11,12,3,10,17,2,9,16.
item-47 at level 3: paragraph: In an alternative practice of the invention, data frame 110 is stored in consecutive storage locations, not as an array or matrix, and a separate index array is stored to index the elements of the data frame, the index array is permuted according to the equations of the present invention, and the data frame is output as indexed by the permuted index array.
item-48 at level 3: paragraph: FIG. 4 depicts a block 400 of storage 32 elements in length (thus having offsets of 0 through 31 from a starting storage location). A data frame 110, taken in this example to be 22 elements long and thus to consist of elements FE00 through FE21, occupies offset locations 00 through 21 within block 400. Offset locations 22 through 31 of block 400 contain unknown contents. A frame length of 22 elements is merely exemplary, and other lengths could be chosen. Also, storage of the frame elements in consecutive locations is exemplary, and non-consecutive locations could be employed.
item-49 at level 3: paragraph: FIG. 5 depicts index array I 550 for indexing storage block 400. It is organized as 4 rows of 8 columns each (N₁=4, N₂=8, N=N₁*N₂=32). Initial contents are filled into array I 550 sequentially, as shown in FIG. 5. This sequential initialization yields the same effect as a row-by-row read-in of data frame 110.
item-50 at level 3: paragraph: The index array is permuted according to
item-51 at level 3: paragraph: I₁(j,k)=I(j,(αjk+βj)mod P)
item-52 at level 3: paragraph: where
item-53 at level 3: paragraph: α=1, 3, 5, 7
item-54 at level 3: paragraph: β=0, 0, 0, 0
item-55 at level 3: paragraph: P=8
item-56 at level 3: paragraph: These numbers are exemplary and other numbers could be chosen, as long as the stipulations are observed that P is at least equal to N₂ and that each value of α is a relative prime number relative to the chosen value of P.
item-57 at level 3: paragraph: If the equation is applied to the columns of row 2, for example, it yields column indices (5k) mod 8 = 0, 5, 2, 7, 4, 1, 6, 3 for k = 0 . . . 7, so row 2 of I₁ holds 16, 21, 18, 23, 20, 17, 22, 19.
item-58 at level 3: paragraph: Applying the equation comparably to rows 0, 1, and 3 produces the permuted index array I₁ 560 shown in FIG. 5.
item-59 at level 3: paragraph: The data frame 110 is read out of storage block 400 and output in the order specified in the permuted index array I₁ 560 taken column by column. This would output storage locations in offset order:
item-60 at level 3: paragraph: 0,8,16,24,1,11,21,31,2,14,18,30,3,9,23,29,4,12,20,28,5,15,17,27,6,10,22,26,7,13,19,25.
item-61 at level 3: paragraph: However, the example assumed a frame length of 22 elements, with offset locations 22-31 in block 400 not being part of the data frame. Accordingly, when outputting the data frame it would be punctured or pruned to a length of 22; i.e., offset locations greater than 21 are ignored. The data frame is thus output with an element order of 0,8,16,1,11,21,2,14,18,3,9,4,12,20,5,15,17,6,10,7,13,19.
item-62 at level 3: paragraph: In one aspect of the invention, rows of the array may be transposed prior to outputting, for example by reversing the bits in the binary representations of row index j.
item-63 at level 3: paragraph: There are a number of different ways to implement the interleavers 100 of the present invention. FIG. 2 illustrates an embodiment of the invention wherein the interleaver 100 includes an input memory 300 for receiving and storing the data frame 110. This memory 300 may include shift registers, RAM or the like. The interleaver 100 may also include a working memory 310 which may also include RAM, shift registers or the like. The interleaver includes a processor 320 (e.g., a microprocessor, ASIC, etc.) which may be configured to process I(j,k) in real time according to the above-identified equation or to access a table which includes the results of I(j,k) already stored therein. Those skilled in the art will recognize that memory 300 and memory 310 may be the same memory or they may be separate memories.
item-64 at level 3: paragraph: For real-time determinations of I(j,k), the first row of the index array is permuted and the bytes corresponding to the permuted index are stored in the working memory. Then the next row is permuted and stored, etc. until all rows have been permuted and stored. The permutation of rows may be done sequentially or in parallel.
item-65 at level 3: paragraph: Whether the permuted I(j,k) is determined in real time or by lookup, the data may be stored in the working memory in a number of different ways. It can be stored by selecting the data from the input memory in the same order as the I(j,k)s in the permuted index array (i.e., indexing the input memory with the permuting function) and placing them in the working memory in sequential available memory locations. It may also be stored by selecting the bytes in the sequence they were stored in the input memory (i.e., FIFO) and storing them in the working memory directly into the location determined by the permuted I(j,k)s (i.e., indexing the working memory with the permuting function). Once this is done, the data may be read out of the working memory column by column based upon the permuted index array. As stated above, the data could be subjected to another round of permuting after it is stored in the working memory based upon columns rather than on rows to achieve different results.
item-66 at level 3: paragraph: If the system is sufficiently fast, one of the memories could be eliminated and as a data element is received it could be placed into the working memory, in real time or by table lookup, in the order corresponding to the permuted index array.
item-67 at level 3: paragraph: The disclosed interleavers are compatible with existing turbo code structures. These interleavers offer superior performance without increasing system complexity.
item-68 at level 3: paragraph: In addition, those skilled in the art will realize that de-interleavers can be used to decode the interleaved data frames. The construction of de-interleavers used in decoding turbo codes is well known in the art. As such they are not further discussed herein. However, a de-interleaver corresponding to the embodiments can be constructed using the permuted sequences discussed above.
item-69 at level 3: paragraph: Although the embodiment described above is a turbo encoder such as is found in a CDMA system, those skilled in the art realize that the practice of the invention is not limited thereto and that the invention may be practiced for any type of interleaving and de-interleaving in any communication system.
item-70 at level 3: paragraph: It will thus be seen that the invention efficiently attains the objects set forth above, among those made apparent from the preceding description. In particular, the invention provides improved apparatus and methods of interleaving codes of finite length while minimizing the complexity of the implementation.
item-71 at level 3: paragraph: It will be understood that changes may be made in the above construction and in the foregoing sequences of operation without departing from the scope of the invention. It is accordingly intended that all matter contained in the above description or shown in the accompanying drawings be interpreted as illustrative rather than in a limiting sense.
item-72 at level 3: paragraph: It is also to be understood that the following claims are intended to cover all of the generic and specific features of the invention as described herein, and all statements of the scope of the invention which, as a matter of language, might be said to fall therebetween.
item-73 at level 2: section_header: CLAIMS
item-74 at level 3: paragraph: 1. A method of interleaving elements of frames of signal data in a communication channel, the method comprising: storing a frame of signal data comprising a plurality of elements as an array D having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1; and permuting array D into array D₁ according to D₁(j,k)=D(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays D and D₁; k is an index through the columns of arrays D and D₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P.
item-75 at level 3: paragraph: 2. The method according to claim 1 wherein said elements of array D are stored in accordance with a first order and wherein said elements of array D₁ are output in accordance with a second order.
item-76 at level 3: paragraph: 3. The method according to claim 2 wherein elements of array D are stored row by row and elements of array D₁ are output column by column.
item-77 at level 3: paragraph: 4. The method according to claim 1 further including outputting of array D₁ and wherein the product of N₁ and N₂ is greater than the number of elements in the frame and the frame is punctured during outputting to the number of elements in the frame.
item-78 at level 3: paragraph: 5. A method of interleaving elements of frames of signal data in a communication channel, the method comprising: creating and storing an index array I having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1; storing elements of a frame of signal data in each of a plurality of storage locations; storing in row-by-row sequential positions in array I values indicative of corresponding locations of frame elements; and permuting array I into array I₁ according to I₁(j,k)=I(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays I and I₁; k is an index through the columns of arrays I and I₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P, whereby the frame of signal data as indexed by array I₁ is effectively permuted.
item-79 at level 3: paragraph: 6. The method according to claim 5 further including permuting said stored elements according to said permuted index array I₁.
item-80 at level 3: paragraph: 7. The method according to claim 5 wherein said elements of the frame of data are output as indexed by entries of array I₁ taken other than row by row.
item-81 at level 3: paragraph: 8. The method according to claim 7 wherein elements of the frame of data are output as indexed by entries of array I₁ taken column by column.
item-82 at level 3: paragraph: 9. The method according to claim 5 including the step of transposing rows of array I prior to the step of permuting array I.
item-83 at level 3: paragraph: 10. The method according to claim 5 wherein N₁ is equal to 4, N₂ is equal to 8, P is equal to 8, and the values of αj are different for each row and are chosen from a group consisting of 1, 3, 5, and 7.
item-84 at level 3: paragraph: 11. The method according to claim 10 wherein the values of αj are 1, 3, 5, and 7 for j=0, 1, 2, and 3 respectively.
item-85 at level 3: paragraph: 12. The method according to claim 11 wherein all values of β are zero.
item-86 at level 3: paragraph: 13. The method according to claim 10 wherein the values of αj are 1, 5, 3, and 7 for j=0, 1, 2, and 3 respectively.
item-87 at level 3: paragraph: 14. The method according to claim 13 wherein all values of β are zero.
item-88 at level 3: paragraph: 15. The method according to claim 5 wherein all values of β are zero.
item-89 at level 3: paragraph: 16. The method according to claim 5 wherein at least two values of β are the same.
item-90 at level 3: paragraph: 17. The method according to claim 5 further including outputting of the frame of data and wherein the product of N₁ and N₂ is greater than the number of elements in the frame of data and the frame of data is punctured during outputting to the number of elements in the frame of data.
item-91 at level 3: paragraph: 18. An interleaver for interleaving elements of frames of data, the interleaver comprising: storage means for storing a frame of data comprising a plurality of elements as an array D having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1; and permuting means for permuting array D into array D₁ according to D₁(j,k)=D(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays D and D₁; k is an index through the columns of arrays D and D₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P.
item-92 at level 3: paragraph: 19. The interleaver according to claim 18 including means for storing said elements of array D in accordance with a first order and means for outputting said elements of array D₁ in accordance with a second order.
item-93 at level 3: paragraph: 20. The interleaver according to claim 19 wherein said means for storing said elements of array D stores row by row and said means for outputting elements of array D₁ outputs column by column.
item-94 at level 3: paragraph: 21. The interleaver according to claim 18 including means for outputting said array D₁ and for puncturing said array D₁ to the number of elements in the frame when the product of N₁ and N₂ is greater than the number of elements in the frame.
item-95 at level 3: paragraph: 22. An interleaver for interleaving elements of frames of data, the interleaver comprising: means for storing an index array I having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1; means for receiving a frame of data and storing elements of the frame of data in each of a plurality of storage locations; means for storing in row-by-row sequential positions in array I values indicative of corresponding locations of frame elements; and means for permuting array I into array I₁ according to: I₁(j,k)=I(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays I and I₁; k is an index through the columns of arrays I and I₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P, whereby the frame of data as indexed by array I₁ is effectively permuted.
item-96 at level 3: paragraph: 23. The interleaver according to claim 22 further including means for permuting said stored elements according to said permuted index array I₁.
item-97 at level 3: paragraph: 24. The interleaver according to claim 22 including means for outputting frame elements as indexed by entries of array I₁ taken other than row by row.
item-98 at level 3: paragraph: 25. The interleaver according to claim 24 including means for outputting frame elements as indexed by entries of array I₁ taken column by column.
item-99 at level 3: paragraph: 26. The interleaver according to claim 22 wherein the product of N₁ and N₂ is greater than the number of elements in the frame and the frame is punctured by the means for outputting to the number of elements in the frame.
item-100 at level 3: paragraph: 27. An interleaver for interleaving elements of frames of data, the interleaver comprising: an input memory for storing a received frame of data comprising a plurality of elements as an array D having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1; a processor coupled to said input memory for permuting array D into array D₁ according to D₁(j,k)=D(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays D and D₁; k is an index through the columns of arrays D and D₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P; and a working memory coupled to said processor and configured to store the permuted array D₁.
item-101 at level 3: paragraph: 28. The interleaver according to claim 27 wherein said input memory stores said elements of array D in accordance with a first order and said working memory outputs said elements of array D₁ in accordance with a second order.
item-102 at level 3: paragraph: 29. The interleaver according to claim 28 wherein said input memory stores elements of array D row by row and said working memory outputs elements of array D₁ column by column.
item-103 at level 3: paragraph: 30. The interleaver according to claim 27 wherein said working memory punctures said array D₁ to the number of elements in the frame when the product of N₁ and N₂ is greater than the number of elements in the frame.
item-104 at level 3: paragraph: 31. An interleaver for interleaving elements of frames of data, the interleaver comprising: a memory for storing an index array I having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1, said memory also for storing elements of a received frame of data in each of a plurality of storage locations; a processor coupled to said memory for storing in row-by-row sequential positions in array I values indicative of corresponding locations of frame elements; said processor also for permuting array I into array I₁ stored in said memory according to: I₁(j,k)=I(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays I and I₁; k is an index through the columns of arrays I and I₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P, whereby the frame of data as indexed by array I₁ is effectively permuted.
item-105 at level 3: paragraph: 32. The interleaver according to claim 31 wherein said processor permutes said stored elements according to said permuted index array I₁.
item-106 at level 3: paragraph: 33. The interleaver according to claim 31 wherein said memory outputs frame elements as indexed by entries of array I₁ taken other than row by row.
item-107 at level 3: paragraph: 34. The interleaver according to claim 33 wherein said memory outputs frame elements as indexed by entries of array I₁ taken column by column.
item-108 at level 3: paragraph: 35. The interleaver according to claim 31 wherein said memory punctures the frame of data to the number of elements in the frame of data when the product of N₁ and N₂ is greater than the number of elements in the frame of data.

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,215 @@
# Methods and apparatus for turbo code
## ABSTRACT
An interleaver receives incoming data frames of size N. The interleaver indexes the elements of the frame with an N₁×N₂ index array. The interleaver then effectively rearranges (permutes) the data by permuting the rows of the index array. The interleaver employs the equation I₁(j,k)=I(j,(αjk+βj)mod P) to permute the columns (indexed by k) of each row (indexed by j). P is at least equal to N₂, βj is a constant which may be different for each row, and each αj is a relative prime number relative to P. After permuting, the interleaver outputs the data in a different order than received (e.g., receives sequentially row by row, outputs column by column).
## CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application No. 60/115,394 filed Jan. 11, 1999.
## FIELD OF THE INVENTION
This invention relates generally to communication systems and, more particularly, to interleavers for performing code modulation.
## BACKGROUND OF THE INVENTION
Techniques for encoding communication channels, known as coded modulation, have been found to improve the bit error rate (BER) of electronic communication systems such as modem and wireless communication systems. Turbo coded modulation has proven to be a practical, power-efficient, and bandwidth-efficient modulation method for “random-error” channels characterized by additive white Gaussian noise (AWGN) or fading. These random-error channels can be found, for example, in the code division multiple access (CDMA) environment. Since the capacity of a CDMA environment is dependent upon the operating signal to noise ratio, improved performance translates into higher capacity.
An aspect of turbo coders which makes them so effective is an interleaver which permutes the original received or transmitted data frame before it is input to a second encoder. The permuting is accomplished by randomizing portions of the signal based upon one or more randomizing algorithms. Combining the permuted data frames with the original data frames has been shown to achieve low BERs in AWGN and fading channels. The interleaving process increases the diversity in the data such that if the modulated symbol is distorted in transmission the error may be recoverable with the use of error correcting algorithms in the decoder.
A conventional interleaver collects, or frames, the signal points to be transmitted into an array, where the array is sequentially filled up row by row. After a predefined number of signal points have been framed, the interleaver is emptied by sequentially reading out the columns of the array for transmission. As a result, signal points in the same row of the array that were near each other in the original signal point flow are separated by a number of signal points equal to the number of rows in the array. Ideally, the number of columns and rows would be picked such that interdependent signal points, after transmission, would be separated by more than the expected length of an error burst for the channel.
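As a rough illustration (our own sketch, not text from the patent), this conventional row-in/column-out behavior is a few lines of Python:

```python
# Conventional block interleaver: fill a rows x cols array row by row,
# then empty it by reading the columns sequentially.
def block_interleave(seq, rows, cols):
    assert len(seq) == rows * cols
    array = [seq[r * cols:(r + 1) * cols] for r in range(rows)]
    return [array[r][c] for c in range(cols) for r in range(rows)]

# Neighboring inputs end up separated by `rows` positions in the output.
print(block_interleave(list(range(6)), 2, 3))  # [0, 3, 1, 4, 2, 5]
```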
Non-uniform interleaving achieves “maximum scattering” of data and “maximum disorder” of the output sequence. Thus the redundancy introduced by the two convolutional encoders is more equally spread in the output sequence of the turbo encoder. The minimum distance is increased to much higher values than for uniform interleaving. A persistent problem for non-uniform interleaving is how to practically implement the interleaving while achieving sufficient “non-uniformity,” and minimizing delay compensations which limit the use for applications with real-time requirements.
Finding an effective interleaver is a current topic in the third generation CDMA standard activities. It has been determined and generally agreed that, as the frame size approaches infinity, the most effective interleaver is the random interleaver. However, for finite frame sizes, the decision as to the most effective interleaver is still open for discussion.
Accordingly there exists a need for systems and methods of interleaving codes that improve non-uniformity for finite frame sizes.
There also exists a need for such systems and methods of interleaving codes which are relatively simple to implement.
It is thus an object of the present invention to provide systems and methods of interleaving codes that improve non-uniformity for finite frame sizes.
It is also an object of the present invention to provide systems and methods of interleaving codes which are relatively simple to implement.
These and other objects of the invention will become apparent to those skilled in the art from the following description thereof.
## SUMMARY OF THE INVENTION
The foregoing objects, and others, may be accomplished by the present invention, which interleaves a data frame, where the data frame has a predetermined size and is made up of portions. An embodiment of the invention includes an interleaver for interleaving these data frames. The interleaver includes an input memory configured to store a received data frame as an array organized into rows and columns, a processor connected to the input memory and configured to permute the received data frame in accordance with the equation D₁(j,k)=D(j,(αjk+βj)mod P), and a working memory in electrical communication with the processor and configured to store a permuted version of the data frame. The elements of the equation are as follows: D is the data frame and D₁ the permuted data frame, j and k are indexes to the rows and columns, respectively, in the data frame, α and β are sets of constants selected according to the current row, and P and each αj are relative prime numbers. (“Relative prime numbers” connotes a set of numbers that have no common divisor other than 1. Members of a set of relative prime numbers, considered by themselves, need not be prime numbers.)
Another embodiment of the invention includes a method of storing a data frame and indexing it by an N₁×N₂ index array I, where the product of N₁ and N₂ is at least equal to N. The elements of the index array indicate positions of the elements of the data frame. The data frame elements may be stored in any convenient manner and need not be organized as an array. The method further includes permuting the index array according to I₁(j,k)=I(j,(αjk+βj)mod P), wherein I is the index array and I₁ the permuted index array, and as above j and k are indexes to the rows and columns, respectively, in the index array, α and β are sets of constants selected according to the current row, and P and each αj are relative prime numbers. The data frame, as indexed by the permuted index array I₁, is effectively permuted.
Still another embodiment of the invention includes an interleaver which includes a storage device for storing a data frame and for storing an N₁×N₂ index array I, where the product of N₁ and N₂ is at least equal to N. The elements of the index array indicate positions of the elements of the data frame. The data frame elements may be stored in any convenient manner and need not be organized as an array. The interleaver further includes a permuting device for permuting the index array according to I₁(j,k)=I(j,(αjk+βj)mod P), wherein I is the index array and I₁ the permuted index array, and as above j and k are indexes to the rows and columns, respectively, in the index array, α and β are sets of constants selected according to the current row, and P and each αj are relative prime numbers. The data frame, as indexed by the permuted index array I₁, is effectively permuted.
The invention will next be described in connection with certain illustrated embodiments and practices. However, it will be clear to those skilled in the art that various modifications, additions and subtractions can be made without departing from the spirit or scope of the claims.
## BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be more clearly understood by reference to the following detailed description of an exemplary embodiment in conjunction with the accompanying drawings, in which:
FIG. 1 depicts a diagram of a conventional turbo encoder.
FIG. 2 depicts a block diagram of the interleaver illustrated in FIG. 1;
FIG. 3 depicts an array containing a data frame, and permutation of that array;
FIG. 4 depicts a data frame stored in consecutive storage locations;
FIG. 5 depicts an index array for indexing the data frame shown in FIG. 4, and permutation of the index array.
## DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 illustrates a conventional turbo encoder. As illustrated, conventional turbo encoders include two encoders 20 and an interleaver 100. An interleaver 100 in accordance with the present invention receives incoming data frames 110 of size N, where N is the number of bits, bytes, or other portions into which the frame may be separated; these portions are regarded as frame elements. The interleaver 100 separates the N frame elements into sets of data, such as rows. The interleaver then rearranges (permutes) the data in each set (row) in a pseudo-random fashion. The interleaver 100 may employ different methods for rearranging the data of the different sets. However, those skilled in the art will recognize that one or more of the methods could be reused on one or more of the sets without departing from the scope of the invention. After permuting the data in each of the sets, the interleaver outputs the data in a different order than received.
The interleaver 100 may store the data frame 110 in an array of size N₁×N₂ such that N₁*N₂=N. An example depicted in FIG. 3 shows an array 350 having 3 rows (N₁=3) of 6 columns (N₂=6) for storing a data frame 110 having 18 elements, denoted Frame Element 00 (FE00) through FE17 (N=18). While this is the preferred method, the array may also be designed such that N₁*N₂ is a fraction of N, in which case one or more smaller arrays are operated on in accordance with the present invention and the results from the smaller arrays are later combined.
To permute array 350 according to the present invention, each row j of array 350 is individually operated on to permute the columns k of each row according to the equation:
D₁(j,k)=D(j,(αjk+βj)mod P)
where:
j and k are row and column indices, respectively, in array 350;
P is a number greater than or equal to N₂;
αj and P are relative prime numbers (one or both can be non-prime numbers, but the only divisor that they have in common is 1);
βj is a constant, one value associated with each row.
Once the data for all of the rows are permuted, the new array is read out column by column. Also, once the rows have been permuted, it is possible (but not required) to permute the data grouped by column before outputting the data. In the event that both the rows and columns are permuted, the rows, the columns or both may be permuted in accordance with the present invention. It is also possible to transpose rows of the array, for example by transposing bits in the binary representation of the row index j. (In a four-row array, for example, the second and third rows would be transposed under this scheme.) It is also possible that either the rows or the columns, but not both, may be permuted in accordance with a different method of permuting. Those skilled in the art will recognize that the system could be rearranged to store the data column by column, permute each set of data in a column and read out the results row by row without departing from the scope of the invention.
These methods of interleaving are based on number theory and may be implemented in software and/or hardware (e.g., application-specific integrated circuits (ASICs), programmable logic arrays (PLAs), or any other suitable logic devices). Further, a single pseudo-random sequence generator (e.g., m-sequence, M-sequence, Gold sequence, Kasami sequence) can be employed as the interleaver.
In the example depicted in FIG. 3, the value selected for P is 6, the values of α are 5 for all three rows, and the values of β are 1, 2, and 3 respectively for the three rows. (These are merely exemplary. Other numbers may be chosen to achieve different permutation results.) Each value of α (5) is a relative prime number relative to the value of P (6), as stipulated above.
Calculating the specified equation with the specified values for permuting row 0 of array D 350 into row 0 of array D₁ 360 proceeds as:
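Substituting the row-0 values (α=5, β=1, P=6) gives column indices (5k+1) mod 6 = 1, 0, 5, 4, 3, 2 for k = 0 . . . 5, that is:

D₁(0,0)=D(0,1)=FE01

D₁(0,1)=D(0,0)=FE00

D₁(0,2)=D(0,5)=FE05

D₁(0,3)=D(0,4)=FE04

D₁(0,4)=D(0,3)=FE03

D₁(0,5)=D(0,2)=FE02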
and the permuted data frame is contained in array D₁ 360 shown in FIG. 3. Outputting the array column by column outputs the frame elements in the order:
1,8,15,0,7,14,5,6,13,4,11,12,3,10,17,2,9,16.
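A minimal Python sketch (our illustration; the names `alpha`, `beta`, and `D` follow the text's symbols) reproduces this example end to end:

```python
# Row-permutation interleaver from the FIG. 3 example: N=18 elements
# stored row by row in a 3x6 array, permuted per row, read out by column.
N1, N2, P = 3, 6, 6
alpha = [5, 5, 5]   # one value per row; each relative prime to P
beta = [1, 2, 3]    # one constant per row

frame = list(range(N1 * N2))                          # FE00..FE17 as 0..17
D = [frame[j * N2:(j + 1) * N2] for j in range(N1)]   # row-by-row read-in

# D1(j,k) = D(j, (alpha[j]*k + beta[j]) mod P)
D1 = [[D[j][(alpha[j] * k + beta[j]) % P] for k in range(N2)]
      for j in range(N1)]

out = [D1[j][k] for k in range(N2) for j in range(N1)]  # column by column
print(out)  # [1, 8, 15, 0, 7, 14, 5, 6, 13, 4, 11, 12, 3, 10, 17, 2, 9, 16]
```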
In an alternative practice of the invention, data frame 110 is stored in consecutive storage locations, not as an array or matrix, and a separate index array is stored to index the elements of the data frame, the index array is permuted according to the equations of the present invention, and the data frame is output as indexed by the permuted index array.
FIG. 4 depicts a block 400 of storage 32 elements in length (thus having offsets of 0 through 31 from a starting storage location). A data frame 110, taken in this example to be 22 elements long and thus to consist of elements FE00 through FE21, occupies offset locations 00 through 21 within block 400. Offset locations 22 through 31 of block 400 contain unknown contents. A frame length of 22 elements is merely exemplary, and other lengths could be chosen. Also, storage of the frame elements in consecutive locations is exemplary, and non-consecutive locations could be employed.
FIG. 5 depicts index array I 550 for indexing storage block 400. It is organized as 4 rows of 8 columns each (N₁=4, N₂=8, N=N₁*N₂=32). Initial contents are filled into array I 550 sequentially, as shown in FIG. 5. This sequential initialization yields the same effect as a row-by-row read-in of data frame 110.
The index array is permuted according to
I₁(j,k)=I(j,(αjk+βj)mod P)
where
α=1, 3, 5, 7
β=0, 0, 0, 0
P=8
These numbers are exemplary and other numbers could be chosen, as long as the stipulations are observed that P is at least equal to N₂ and that each value of α is a relative prime number relative to the chosen value of P.
If the equation is applied to the columns of row 2, for example, it yields:
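Substituting the row-2 values (α=5, β=0, P=8) gives column indices (5k) mod 8 = 0, 5, 2, 7, 4, 1, 6, 3 for k = 0 . . . 7, so row 2 of I₁ holds 16, 21, 18, 23, 20, 17, 22, 19.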
Applying the equation comparably to rows 0, 1, and 3 produces the permuted index array I₁ 560 shown in FIG. 5.
The data frame 110 is read out of storage block 400 and output in the order specified in the permuted index array I₁ 560 taken column by column. This would output storage locations in offset order:
0,8,16,24,1,11,21,31,2,14,18,30,3,9,23,29,4,12,20,28,5,15,17,27,6,10,22,26,7,13,19,25.
However, the example assumed a frame length of 22 elements, with offset locations 22-31 in block 400 not being part of the data frame. Accordingly, when outputting the data frame it would be punctured or pruned to a length of 22; i.e., offset locations greater than 21 are ignored. The data frame is thus output with an element order of 0,8,16,1,11,21,2,14,18,3,9,4,12,20,5,15,17,6,10,7,13,19.
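The index-array variant, including the pruning step, can be sketched the same way (again our illustration, not the source's code):

```python
# Index-array interleaver from FIGS. 4 and 5: permute a 4x8 index array,
# read it out column by column, and prune offsets beyond the frame end.
N1, N2, P = 4, 8, 8
alpha = [1, 3, 5, 7]   # one value per row; each relative prime to P
beta = [0, 0, 0, 0]
frame_len = 22         # FE00..FE21 occupy offsets 0..21 of block 400

I = [[j * N2 + k for k in range(N2)] for j in range(N1)]  # sequential init

# I1(j,k) = I(j, (alpha[j]*k + beta[j]) mod P)
I1 = [[I[j][(alpha[j] * k + beta[j]) % P] for k in range(N2)]
      for j in range(N1)]

order = [I1[j][k] for k in range(N2) for j in range(N1)]  # column by column
pruned = [i for i in order if i < frame_len]              # puncture to 22
print(pruned)
# [0, 8, 16, 1, 11, 21, 2, 14, 18, 3, 9, 4, 12, 20, 5, 15, 17, 6, 10, 7, 13, 19]
```

Pruning is safe here because the discarded offsets (22 through 31) never held frame elements.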
In one aspect of the invention, rows of the array may be transposed prior to outputting, for example by reversing the bits in the binary representations of row index j.
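For a four-row array the reversal is over 2 index bits; a small sketch (ours) of the mapping:

```python
# Bit-reverse a row index j over a fixed number of bits. For 4 rows
# (2 bits), j = 0,1,2,3 maps to 0,2,1,3: the middle two rows swap.
def bit_reverse(j, bits):
    out = 0
    for _ in range(bits):
        out = (out << 1) | (j & 1)
        j >>= 1
    return out

print([bit_reverse(j, 2) for j in range(4)])  # [0, 2, 1, 3]
```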
There are a number of different ways to implement the interleavers 100 of the present invention. FIG. 2 illustrates an embodiment of the invention wherein the interleaver 100 includes an input memory 300 for receiving and storing the data frame 110. This memory 300 may include shift registers, RAM or the like. The interleaver 100 may also include a working memory 310 which may also include RAM, shift registers or the like. The interleaver includes a processor 320 (e.g., a microprocessor, ASIC, etc.) which may be configured to process I(j,k) in real time according to the above-identified equation or to access a table which includes the results of I(j,k) already stored therein. Those skilled in the art will recognize that memory 300 and memory 310 may be the same memory or they may be separate memories.
For real-time determinations of I(j,k), the first row of the index array is permuted and the bytes corresponding to the permuted index are stored in the working memory. Then the next row is permuted and stored, etc. until all rows have been permuted and stored. The permutation of rows may be done sequentially or in parallel.
Whether the permuted I(j,k) is determined in real time or by lookup, the data may be stored in the working memory in a number of different ways. It can be stored by selecting the data from the input memory in the same order as the I(j,k)s in the permuted index array (i.e., indexing the input memory with the permuting function) and placing them in the working memory in sequential available memory locations. It may also be stored by selecting the bytes in the sequence they were stored in the input memory (i.e., FIFO) and storing them in the working memory directly into the location determined by the permuted I(j,k)s (i.e., indexing the working memory with the permuting function). Once this is done, the data may be read out of the working memory column by column based upon the permuted index array. As stated above, the data could be subjected to another round of permuting after it is stored in the working memory based upon columns rather than on rows to achieve different results.
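The two write orders amount to gather versus scatter indexing; a short sketch under the same assumptions as above (`perm` is the permuted index sequence, here the pruned order from the earlier example):

```python
data = list(range(22))   # frame elements in input-memory (FIFO) order
perm = [0, 8, 16, 1, 11, 21, 2, 14, 18, 3, 9, 4,
        12, 20, 5, 15, 17, 6, 10, 7, 13, 19]

# Gather: index the input memory with the permuting function and fill
# the working memory sequentially.
working_gather = [data[i] for i in perm]

# Scatter: take input elements in FIFO order and index the working
# memory with the permuting function.
working_scatter = [None] * len(data)
for pos, i in enumerate(perm):
    working_scatter[i] = data[pos]

# The two layouts are inverse arrangements of one another, so the
# subsequent read-out order must match the write scheme used.
```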
If the system is sufficiently fast, one of the memories could be eliminated and as a data element is received it could be placed into the working memory, in real time or by table lookup, in the order corresponding to the permuted index array.
The disclosed interleavers are compatible with existing turbo code structures. These interleavers offer superior performance without increasing system complexity.
In addition, those skilled in the art will realize that de-interleavers can be used to decode the interleaved data frames. The construction of de-interleavers used in decoding turbo codes is well known in the art. As such they are not further discussed herein. However, a de-interleaver corresponding to the embodiments can be constructed using the permuted sequences discussed above.
Although the embodiment described above is a turbo encoder such as is found in a CDMA system, those skilled in the art realize that the practice of the invention is not limited thereto and that the invention may be practiced for any type of interleaving and de-interleaving in any communication system.
It will thus be seen that the invention efficiently attains the objects set forth above, among those made apparent from the preceding description. In particular, the invention provides improved apparatus and methods of interleaving codes of finite length while minimizing the complexity of the implementation.
It will be understood that changes may be made in the above construction and in the foregoing sequences of operation without departing from the scope of the invention. It is accordingly intended that all matter contained in the above description or shown in the accompanying drawings be interpreted as illustrative rather than in a limiting sense.
It is also to be understood that the following claims are intended to cover all of the generic and specific features of the invention as described herein, and all statements of the scope of the invention which, as a matter of language, might be said to fall therebetween.
## CLAIMS
1. A method of interleaving elements of frames of signal data in a communication channel, the method comprising: storing a frame of signal data comprising a plurality of elements as an array D having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1; and permuting array D into array D₁ according to D₁(j,k)=D(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays D and D₁; k is an index through the columns of arrays D and D₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P.
2. The method according to claim 1 wherein said elements of array D are stored in accordance with a first order and wherein said elements of array D₁ are output in accordance with a second order.
3. The method according to claim 2 wherein elements of array D are stored row by row and elements of array D₁ are output column by column.
4. The method according to claim 1 further including outputting of array D₁ and wherein the product of N₁ and N₂ is greater than the number of elements in the frame and the frame is punctured during outputting to the number of elements in the frame.
5. A method of interleaving elements of frames of signal data in a communication channel, the method comprising: creating and storing an index array I having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1; storing elements of a frame of signal data in each of a plurality of storage locations; storing in row-by-row sequential positions in array I values indicative of corresponding locations of frame elements; and permuting array I into array I₁ according to I₁(j,k)=I(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays I and I₁; k is an index through the columns of arrays I and I₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P, whereby the frame of signal data as indexed by array I₁ is effectively permuted.
6. The method according to claim 5 further including permuting said stored elements according to said permuted index array I₁.
7. The method according to claim 5 wherein said elements of the frame of data are output as indexed by entries of array I₁ taken other than row by row.
8. The method according to claim 7 wherein elements of the frame of data are output as indexed by entries of array I₁ taken column by column.
9. The method according to claim 5 including the step of transposing rows of array I prior to the step of permuting array I.
10. The method according to claim 5 wherein N₁ is equal to 4, N₂ is equal to 8, P is equal to 8, and the values of αj are different for each row and are chosen from a group consisting of 1, 3, 5, and 7.
11. The method according to claim 10 wherein the values of αj are 1, 3, 5, and 7 for j=0, 1, 2, and 3 respectively.
12. The method according to claim 11 wherein all values of β are zero.
13. The method according to claim 10 wherein the values of αj are 1, 5, 3, and 7 for j=0, 1, 2, and 3 respectively.
14. The method according to claim 13 wherein all values of β are zero.
15. The method according to claim 5 wherein all values of β are zero.
16. The method according to claim 5 wherein at least two values of β are the same.
17. The method according to claim 5 further including outputting of the frame of data and wherein the product of N₁ and N₂ is greater than the number of elements in the frame of data and the frame of data is punctured during outputting to the number of elements in the frame of data.
18. An interleaver for interleaving elements of frames of data, the interleaver comprising: storage means for storing a frame of data comprising a plurality of elements as an array D having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1; and permuting means for permuting array D into array D₁ according to D₁(j,k)=D(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays D and D₁; k is an index through the columns of arrays D and D₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P.
19. The interleaver according to claim 18 including means for storing said elements of array D in accordance with a first order and means for outputting said elements of array D₁ in accordance with a second order.
20. The interleaver according to claim 19 wherein said means for storing said elements of array D stores row by row and said means for outputting elements of array D₁ outputs column by column.
21. The interleaver according to claim 18 including means for outputting said array D₁ and for puncturing said array D₁ to the number of elements in the frame when the product of N₁ and N₂ is greater than the number of elements in the frame.
22. An interleaver for interleaving elements of frames of data, the interleaver comprising: means for storing an index array I having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1; means for receiving a frame of data and storing elements of the frame of data in each of a plurality of storage locations; means for storing in row-by-row sequential positions in array I values indicative of corresponding locations of frame elements; and means for permuting array I into array I₁ according to: I₁(j,k)=I(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays I and I₁; k is an index through the columns of arrays I and I₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P, whereby the frame of data as indexed by array I₁ is effectively permuted.
23. The interleaver according to claim 22 further including means for permuting said stored elements according to said permuted index array I₁.
24. The interleaver according to claim 22 including means for outputting frame elements as indexed by entries of array I₁ taken other than row by row.
25. The interleaver according to claim 24 including means for outputting frame elements as indexed by entries of array I₁ taken column by column.
26. The interleaver according to claim 22 wherein the product of N₁ and N₂ is greater than the number of elements in the frame and the frame is punctured by the means for outputting to the number of elements in the frame.
27. An interleaver for interleaving elements of frames of data, the interleaver comprising: an input memory for storing a received frame of data comprising a plurality of elements as an array D having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1; a processor coupled to said input memory for permuting array D into array D₁ according to D₁(j,k)=D(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays D and D₁; k is an index through the columns of arrays D and D₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P; and a working memory coupled to said processor and configured to store the permuted array D₁.
28. The interleaver according to claim 27 wherein said input memory stores said elements of array D in accordance with a first order and said working memory outputs said elements of array D₁ in accordance with a second order.
29. The interleaver according to claim 28 wherein said input memory stores elements of array D row by row and said working memory outputs elements of array D₁ column by column.
30. The interleaver according to claim 27 wherein said working memory punctures said array D₁ to the number of elements in the frame when the product of N₁ and N₂ is greater than the number of elements in the frame.
31. An interleaver for interleaving elements of frames of data, the interleaver comprising: a memory for storing an index array I having N₁ rows enumerated as 0, 1, . . . N₁−1; and N₂ columns enumerated as 0, 1, . . . N₂−1, wherein N₁ and N₂ are positive integers greater than 1, said memory also for storing elements of a received frame of data in each of a plurality of storage locations; a processor coupled to said memory for storing in row-by-row sequential positions in array I values indicative of corresponding locations of frame elements; said processor also for permuting array I into array I₁ stored in said memory according to: I₁(j,k)=I(j,(αjk+βj)mod P) wherein j is an index through the rows of arrays I and I₁; k is an index through the columns of arrays I and I₁; αj and βj are integers predetermined for each row j; P is an integer at least equal to N₂; and each αj is a relative prime number relative to P, whereby the frame of data as indexed by array I₁ is effectively permuted.
32. The interleaver according to claim 31 wherein said processor permutes said stored elements according to said permuted index array I₁.
33. The interleaver according to claim 31 wherein said memory outputs frame elements as indexed by entries of array I₁ taken other than row by row.
34. The interleaver according to claim 33 wherein said memory outputs frame elements as indexed by entries of array I₁ taken column by column.
35. The interleaver according to claim 31 wherein said memory punctures the frame of data to the number of elements in the frame of data when the product of N₁ and N₂ is greater than the number of elements in the frame of data.

View File

@ -0,0 +1,132 @@
item-0 at level 0: unspecified: group _root_
item-1 at level 1: title: Risk factors associated with fai ... s: Results of a multi-country analysis
item-2 at level 2: paragraph: Burgert-Brucker Clara R.; 1: Glo ... shington, DC, United States of America
item-3 at level 2: section_header: Abstract
item-4 at level 3: text: Achieving elimination of lymphat ... ine prevalence and/or lower elevation.
item-5 at level 2: section_header: Introduction
item-6 at level 3: text: Lymphatic filariasis (LF), a dis ... 8 countries remain endemic for LF [3].
item-7 at level 3: text: The road to elimination as a pub ... t elimination be officially validated.
item-8 at level 3: text: Pre-TAS include at least one sen ... me of day that blood can be taken [5].
item-9 at level 3: text: When a country fails to meet the ... o ensure rounds of MDA are not missed.
item-10 at level 3: text: This study aims to understand wh ... e of limited LF elimination resources.
item-11 at level 2: section_header: Methods
item-12 at level 3: text: This is a secondary data analysi ... rch; no ethical approval was required.
item-13 at level 3: text: Building on previous work, we de ... available global geospatial data sets.
item-14 at level 3: section_header: Data sources
item-15 at level 4: text: Information on baseline prevalen ... publicly available sources (Table 1).
item-16 at level 3: section_header: Outcome and covariate variables
item-17 at level 4: text: The outcome of interest for this ... r than or equal to 1% Mf or 2% Ag [4].
item-18 at level 4: text: Potential covariates were derive ... is and the final categorizations used.
item-19 at level 4: section_header: Baseline prevalence
item-20 at level 5: text: Baseline prevalence can be assum ... (2) using the cut-off of <10% or ≥10%.
item-21 at level 4: section_header: Agent
item-22 at level 5: text: In terms of differences in trans ... dazole (DEC-ALB)] from the MDA domain.
item-23 at level 4: section_header: Environment
item-24 at level 5: text: LF transmission intensity is inf ... dicates a higher level of “greenness.”
item-25 at level 5: text: We included the socio-economic v ... proxy for socio-economic status [33].
item-26 at level 5: text: Finally, all or parts of distric ... s were co-endemic with onchocerciasis.
item-27 at level 4: section_header: MDA
item-28 at level 5: text: Treatment effectiveness depends ... esent a threat to elimination [41,42].
item-29 at level 5: text: We considered three approaches w ... unds ever documented in that district.
item-30 at level 4: section_header: Pre-TAS implementation
item-31 at level 5: text: Pre-TAS results can be influence ... d throughout the time period of study.
item-32 at level 3: section_header: Data inclusion criteria
item-33 at level 4: text: The dataset, summarized at the d ... al analysis dataset had 554 districts.
item-34 at level 3: section_header: Statistical analysis and modeling
item-35 at level 4: text: Statistical analysis and modelin ... d the number of variables accordingly.
item-36 at level 4: text: Sensitivity analysis was perform ... ot have been truly LF-endemic [43,44].
item-37 at level 2: section_header: Results
item-38 at level 3: text: The overall pre-TAS pass rate fo ... ts had baseline prevalences below 20%.
item-39 at level 3: text: Fig 3 shows the unadjusted analy ... overage, and sufficient rounds of MDA.
item-40 at level 3: text: The final log-binomial model inc ... igh baseline and diagnostic test used.
item-41 at level 3: text: Fig 4 shows the risk ratio resul ... of failing pre-TAS (95% CI 1.95–4.83).
item-42 at level 3: text: Sensitivity analyses were conduc ... gnified by large confidence intervals.
item-43 at level 3: text: Overall 74 districts in the data ... or 51% of all the failures (38 of 74).
item-44 at level 2: section_header: Discussion
item-45 at level 3: text: This paper reports for the first ... ctors associated with TAS failure [7].
item-46 at level 3: text: Though diagnostic test used was ... FTS was more sensitive than ICT [45].
item-47 at level 3: text: Elevation was the only environme ... ich impact vector chances of survival.
item-48 at level 3: text: The small number of failures ove ... search has shown the opposite [15,16].
item-49 at level 3: text: All other variables included in ... are not necessary to lower prevalence.
item-50 at level 3: text: Limitations to this study includ ... reducing LF prevalence [41,48,51–53].
item-51 at level 3: text: Fourteen districts were excluded ... ta to extreme outliers in a district.
item-52 at level 3: text: As this analysis used data acros ... of individuals included in the survey.
item-53 at level 3: text: This paper provides evidence fro ... th high baseline and/or low elevation.
item-54 at level 2: section_header: Tables
item-55 at level 3: table with [18x8]
item-55 at level 4: caption: Table 1: Categorization of potential factors influencing pre-TAS results.
item-56 at level 3: table with [11x6]
item-56 at level 4: caption: Table 2: Adjusted risk ratios for pre-TAS failure from log-binomial model sensitivity analysis.
item-57 at level 2: section_header: Figures
item-58 at level 3: picture
item-58 at level 4: caption: Fig 1: Number of pre-TAS by country.
item-59 at level 3: picture
item-59 at level 4: caption: Fig 2: District-level baseline prevalence by country.
item-60 at level 3: picture
item-60 at level 4: caption: Fig 3: Percent pre-TAS failure by each characteristic (unadjusted).
item-61 at level 3: picture
item-61 at level 4: caption: Fig 4: Adjusted risk ratios for pre-TAS failure with 95% Confidence Interval from log-binomial model.
item-62 at level 3: picture
item-62 at level 4: caption: Fig 5: Analysis of failures by model combinations.
item-63 at level 2: section_header: References
item-64 at level 3: list: group list
item-65 at level 4: list_item: World Health Organization. Lymph ... rategic plan 2010–2020. Geneva; 2010.
item-66 at level 4: list_item: World Health Organization. Valid ... public health problem. Geneva; 2017.
item-67 at level 4: list_item: Global programme to eliminate ly ... eport, 2018. Wkly Epidemiol Rec (2019)
item-68 at level 4: list_item: World Health Organization. Globa ... ss drug administration. Geneva; 2011.
item-69 at level 4: list_item: World Health Organization. Stren ... isease-specific Indicators. 2016; 42.
item-70 at level 4: list_item: Kyelem D; Biswas G; Bockarie MJ; ... search needs. Am J Trop Med Hyg (2008)
item-71 at level 4: list_item: Goldberg EM; King JD; Mupfasoni ... c filariasis. Am J Trop Med Hyg (2019)
item-72 at level 4: list_item: Cano J; Rebollo MP; Golding N; P ... present. Parasites and Vectors (2014)
item-73 at level 4: list_item: CGIAR-CSI. CGIAR-CSI SRTM 90m DEM Digital Elevation Database. In: .
item-74 at level 4: list_item: USGS NASA. Vegetation indices 16 ... et]. [cited 1 May 2018]. Available: .
item-75 at level 4: list_item: Funk C; Peterson P; Landsfeld M; ... r monitoring extremes. Sci Data (2015)
item-76 at level 4: list_item: Lloyd CT; Sorichetta A; Tatem AJ ... in population studies. Sci Data (2017)
item-77 at level 4: list_item: Elvidge CD; Baugh KE; Zhizhin M; ... hts. Proc Asia-Pacific Adv Netw (2013)
item-78 at level 4: list_item: Jambulingam P; Subramanian S; De ... dicators. Parasites and Vectors (2016)
item-79 at level 4: list_item: Michael E; Malecela-Lazaro MN; S ... c filariasis. Lancet Infect Dis (2004)
item-80 at level 4: list_item: Stolk WA; Swaminathan S; van Oor ... simulation study. J Infect Dis (2003)
item-81 at level 4: list_item: Grady CA; De Rochars MB; Direny ... asis programs. Emerg Infect Dis (2007)
item-82 at level 4: list_item: Evans D; McFarland D; Adamani W; ... Nigeria. Ann Trop Med Parasitol (2011)
item-83 at level 4: list_item: Richards FO; Eigege A; Miri ES; ... in Nigeria. PLoS Negl Trop Dis (2011)
item-84 at level 4: list_item: Biritwum NK; Yikpotey P; Marfo B ... Ghana. Trans R Soc Trop Med Hyg (2016)
item-85 at level 4: list_item: Moraga P; Cano J; Baggaley RF; G ... odelling. Parasites and Vectors (2015)
item-86 at level 4: list_item: Irvine MA; Njenga SM; Gunawarden ... ction. Trans R Soc Trop Med Hyg (2016)
item-87 at level 4: list_item: Ottesen EA. Efficacy of diethylc ... ariae in humans. Rev Infect Dis (1985)
item-88 at level 4: list_item: Gambhir M; Bockarie M; Tisch D; ... lymphatic filariasis. BMC Biol (2010)
item-89 at level 4: list_item: World Health Organization. Globa ... al entomology handbook. Geneva; 2013.
item-90 at level 4: list_item: Slater H; Michael E. Predicting ... gical niche modelling. PLoS One (2012)
item-91 at level 4: list_item: Slater H; Michael E. Mapping, Ba ... prevalence in Africa. PLoS One (2013)
item-92 at level 4: list_item: Sabesan S; Raju KHK; Subramanian ... odel. Vector-Borne Zoonotic Dis (2013)
item-93 at level 4: list_item: Stanton MC; Molyneux DH; Kyelem ... in Burkina Faso. Geospat Health (2013)
item-94 at level 4: list_item: Manhenje I; Teresa Galán-Puchade ... hern Mozambique. Geospat Health (2013)
item-95 at level 4: list_item: Ngwira BM; Tambala P; Perez a M; ... infection in Malawi. Filaria J (2007)
item-96 at level 4: list_item: Simonsen PE; Mwakitalu ME. Urban ... hatic filariasis. Parasitol Res (2013)
item-97 at level 4: list_item: Proville J; Zavala-Araiza D; Wag ... socio-economic trends. PLoS One (2017)
item-98 at level 4: list_item: Endeshaw T; Taye A; Tadesse Z; K ... st Ethiopia. Pathog Glob Health (2015)
item-99 at level 4: list_item: Richards FO; Eigege A; Pam D; Ka ... eas of co-endemicity. Filaria J (2005)
item-100 at level 4: list_item: Kyelem D; Sanou S; Boatin B a; M ... cations. Ann Trop Med Parasitol (2003)
item-101 at level 4: list_item: Weil GJ; Lammie PJ; Richards FO; ... ne and ivermectin. J Infect Dis (1991)
item-102 at level 4: list_item: Kumar A; Sachan P. Measuring imp ... rug administration. Trop Biomed (2014)
item-103 at level 4: list_item: Njenga SM; Mwandawiro CS; Wamae ... control. Parasites and Vectors (2011)
item-104 at level 4: list_item: Boyd A; Won KY; McClintock SK; D ... gane, Haiti. PLoS Negl Trop Dis (2010)
item-105 at level 4: list_item: Irvine MA; Reimer LJ; Njenga SM; ... mination. Parasites and Vectors (2015)
item-106 at level 4: list_item: Irvine MA; Stolk WA; Smith ME; S ... elling study. Lancet Infect Dis (2017)
item-107 at level 4: list_item: Pion SD; Montavon C; Chesnais CB ... crofilaremia. Am J Trop Med Hyg (2016)
item-108 at level 4: list_item: Wanji S; Esum ME; Njouendou AJ; ... in Cameroon. PLoS Negl Trop Dis (2018)
item-109 at level 4: list_item: Chesnais CB; Awaca-Uvon NP; Bola ... a in Africa. PLoS Negl Trop Dis (2017)
item-110 at level 4: list_item: Silumbwe A; Zulu JM; Halwindi H; ... haran Africa. BMC Public Health (2017)
item-111 at level 4: list_item: Adams AM; Vuckovic M; Birch E; B ... nistration. Trop Med Infect Dis (2018)
item-112 at level 4: list_item: Rao RU; Samarasekera SD; Nagodav ... n Sri Lanka. PLoS Negl Trop Dis (2017)
item-113 at level 4: list_item: Xu Z; Graves PM; Lau CL; Clement ... is in American Samoa. Epidemics (2018)
item-114 at level 4: list_item: Id CM; Tettevi EJ; Mechan F; Idu ... rural Ghana. PLoS Negl Trop Dis (2019)
item-115 at level 4: list_item: Eigege A; Kal A; Miri E; Sallau ... in Nigeria. PLoS Negl Trop Dis (2013)
item-116 at level 4: list_item: Van den Berg H; Kelly-Hope LA; L ... r management. Lancet Infect Dis (2013)
item-117 at level 4: list_item: Webber R.. Eradication of Wucher ... ntrol. Trans R Soc Trop Med Hyg (1979)
item-118 at level 1: caption: Table 1: Categorization of potential factors influencing pre-TAS results.
item-119 at level 1: caption: Table 2: Adjusted risk ratios fo ... g-binomial model sensitivity analysis.
item-120 at level 1: caption: Fig 1: Number of pre-TAS by country.
item-121 at level 1: caption: Fig 2: District-level baseline prevalence by country.
item-122 at level 1: caption: Fig 3: Percent pre-TAS failure by each characteristic (unadjusted).
item-123 at level 1: caption: Fig 4: Adjusted risk ratios for ... ence Interval from log-binomial model.
item-124 at level 1: caption: Fig 5: Analysis of failures by model combinations.

File diff suppressed because it is too large

View File

@@ -0,0 +1,222 @@
# Risk factors associated with failing pre-transmission assessment surveys (pre-TAS) in lymphatic filariasis elimination programs: Results of a multi-country analysis
Burgert-Brucker Clara R.; 1: Global Health Division, RTI International, Washington, DC, United States of America; Zoerhoff Kathryn L.; 1: Global Health Division, RTI International, Washington, DC, United States of America; Headland Maureen; 1: Global Health Division, RTI International, Washington, DC, United States of America, 2: Global Health, Population, and Nutrition, FHI 360, Washington, DC, United States of America; Shoemaker Erica A.; 1: Global Health Division, RTI International, Washington, DC, United States of America; Stelmach Rachel; 1: Global Health Division, RTI International, Washington, DC, United States of America; Karim Mohammad Jahirul; 3: Department of Disease Control, Ministry of Health and Family Welfare, Dhaka, Bangladesh; Batcho Wilfrid; 4: National Control Program of Communicable Diseases, Ministry of Health, Cotonou, Benin; Bougouma Clarisse; 5: Lymphatic Filariasis Elimination Program, Ministère de la Santé, Ouagadougou, Burkina Faso; Bougma Roland; 5: Lymphatic Filariasis Elimination Program, Ministère de la Santé, Ouagadougou, Burkina Faso; Benjamin Didier Biholong; 6: National Onchocerciasis and Lymphatic Filariasis Control Program, Ministry of Health, Yaounde, Cameroon; Georges Nko'Ayissi; 6: National Onchocerciasis and Lymphatic Filariasis Control Program, Ministry of Health, Yaounde, Cameroon; Marfo Benjamin; 7: Neglected Tropical Diseases Programme, Ghana Health Service, Accra, Ghana; Lemoine Jean Frantz; 8: Ministry of Health, Port-au-Prince, Haiti; Pangaribuan Helena Ullyartha; 9: National Institute Health Research & Development, Ministry of Health, Jakarta, Indonesia; Wijayanti Eksi; 9: National Institute Health Research & Development, Ministry of Health, Jakarta, Indonesia; Coulibaly Yaya Ibrahim; 10: Filariasis Unit, International Center of Excellence in Research, Faculty of Medicine and Odontostomatology, Bamako, Mali; Doumbia Salif Seriba; 10: Filariasis Unit, International Center of Excellence in Research, Faculty of Medicine and Odontostomatology, Bamako, Mali; Rimal Pradip; 11: Epidemiology and Disease Control Division, Department of Health Service, Kathmandu, Nepal; Salissou Adamou Bacthiri; 12: Programme Onchocercose et Filariose Lymphatique, Ministère de la Santé, Niamey, Niger; Bah Yukaba; 13: National Neglected Tropical Disease Program, Ministry of Health and Sanitation, Freetown, Sierra Leone; Mwingira Upendo; 14: Neglected Tropical Disease Control Programme, National Institute for Medical Research, Dar es Salaam, Tanzania; Nshala Andreas; 15: IMA World Health/Tanzania NTD Control Programme, Uppsala University, & TIBA Fellow, Dar es Salaam, Tanzania; Muheki Edridah; 16: Programme to Eliminate Lymphatic Filariasis, Ministry of Health, Kampala, Uganda; Shott Joseph; 17: Division of Neglected Tropical Diseases, Office of Infectious Diseases, Bureau for Global Health, USAID, Washington, DC, United States of America; Yevstigneyeva Violetta; 17: Division of Neglected Tropical Diseases, Office of Infectious Diseases, Bureau for Global Health, USAID, Washington, DC, United States of America; Ndayishimye Egide; 2: Global Health, Population, and Nutrition, FHI 360, Washington, DC, United States of America; Baker Margaret; 1: Global Health Division, RTI International, Washington, DC, United States of America; Kraemer John; 1: Global Health Division, RTI International, Washington, DC, United States of America, 18: Georgetown University, Washington, DC, United States of America; Brady Molly; 1: Global Health Division, RTI International, Washington, DC, 
United States of America
## Abstract
Achieving elimination of lymphatic filariasis (LF) as a public health problem requires a minimum of five effective rounds of mass drug administration (MDA) and demonstrating low prevalence in subsequent assessments. The first assessments recommended by the World Health Organization (WHO) are sentinel and spot-check sites—referred to as pre-transmission assessment surveys (pre-TAS)—in each implementation unit after MDA. If pre-TAS shows that prevalence in each site has been lowered to less than 1% microfilaremia or less than 2% antigenemia, the implementation unit conducts a TAS to determine whether MDA can be stopped. Failure to pass pre-TAS means that further rounds of MDA are required. This study aims to understand factors influencing pre-TAS results using existing programmatic data from 554 implementation units, of which 74 (13%) failed, in 13 countries. Secondary data analysis was completed using existing data from Bangladesh, Benin, Burkina Faso, Cameroon, Ghana, Haiti, Indonesia, Mali, Nepal, Niger, Sierra Leone, Tanzania, and Uganda. Additional covariate data were obtained from spatial raster data sets. Bivariate analysis and multivariable regression were performed to establish potential relationships between variables and the pre-TAS result. Higher baseline prevalence and lower elevation were significant in the regression model. Variables statistically significantly associated with failure (p-value ≤0.05) in the bivariate analyses included baseline prevalence at or above 5% or 10%, use of Filariasis Test Strips (FTS), primary vector of Culex, treatment with diethylcarbamazine-albendazole, lower elevation, higher population density, higher enhanced vegetation index (EVI), higher annual rainfall, and 6 or more rounds of MDA. This paper reports for the first time factors associated with pre-TAS results from a multi-country analysis. This information can help countries more effectively forecast program activities, such as the potential need for more rounds of MDA, and prioritize resources to ensure adequate coverage of all persons in areas at highest risk of failing pre-TAS.
### Author summary
Achieving elimination of lymphatic filariasis (LF) as a public health problem requires a minimum of five rounds of mass drug administration (MDA) and being able to demonstrate low prevalence in several subsequent assessments. LF elimination programs implement sentinel and spot-check site assessments, called pre-TAS, to determine whether districts are eligible to implement more rigorous population-based surveys to determine whether MDA can be stopped or if further rounds are required. Reasons for failing pre-TAS are not well understood and have not previously been examined with data compiled from multiple countries. For this analysis, we analyzed data from routine USAID and WHO reports from Bangladesh, Benin, Burkina Faso, Cameroon, Ghana, Haiti, Indonesia, Mali, Nepal, Niger, Sierra Leone, Tanzania, and Uganda. In a model that included multiple variables, high baseline prevalence and lower elevation were significant. In models comparing only one variable to the outcome, the following were statistically significantly associated with failure: higher baseline prevalence at or above 5% or 10%, use of the FTS, primary vector of Culex, treatment with diethylcarbamazine-albendazole, lower elevation, higher population density, higher Enhanced Vegetation Index, higher annual rainfall, and six or more rounds of mass drug administration.
These results can help national programs plan MDA more effectively, e.g., by focusing resources on areas with higher baseline prevalence and/or lower elevation.
## Introduction
Lymphatic filariasis (LF), a disease caused by parasitic worms transmitted to humans by mosquito bite, manifests in disabling and stigmatizing chronic conditions including lymphedema and hydrocele. To eliminate LF as a public health problem, the World Health Organization (WHO) recommends two strategies: reducing transmission through annual mass drug administration (MDA) and reducing suffering through ensuring the availability of morbidity management and disability prevention services to all patients [1]. For the first strategy, eliminating LF as a public health problem is defined as a reduction in measurable prevalence of infection in endemic areas below a target threshold at which further transmission is considered unlikely even in the absence of MDA [2]. As of 2018, 14 countries have eliminated LF as a public health problem while 58 countries remain endemic for LF [3].
The road to elimination as a public health problem has several milestones. First, where LF prevalence at baseline has exceeded 1% as measured either through microfilaremia (Mf) or antigenemia (Ag), MDA is implemented and treatment coverage is measured in all implementation units, which usually correspond to districts. Implementation units must complete at least five rounds of effective treatment, i.e. treatment with a minimum coverage of 65% of the total population. Then, WHO recommends sentinel and spot-check site assessments—referred to as pre-transmission assessment surveys (pre-TAS)—in each implementation unit to determine whether prevalence in each site is less than 1% Mf or less than 2% Ag [4]. Next, if these thresholds are met, national programs can progress to the first transmission assessment survey (TAS). The TAS is a population-based cluster or systematic survey of six- and seven-year-old children to assess whether transmission has fallen below the threshold at which infection is believed to persist. TAS is conducted at least three times, with two years between each survey. TAS 1 results determine if it is appropriate to stop MDA or whether further rounds are required. Finally, when TAS 2 and 3 also fall below the set threshold in every endemic implementation unit nationwide and morbidity criteria have been fulfilled, the national program submits a dossier to WHO requesting that elimination be officially validated.
Pre-TAS include at least one sentinel and one spot-check site per one million population. Sentinel sites are established at the start of the program in villages where LF prevalence was believed to be relatively high. Spot-check sites are villages not previously tested but purposively selected as potentially high-risk areas due to original high prevalence, low coverage during MDA, high vector density, or other factors [4]. At least six months after MDA implementation, data are collected from a convenience sample of at least 300 people over five years old in each site. Originally, Mf was recommended as the indicator of choice for pre-TAS, assessed by blood smears taken at the time of peak parasite periodicity [4]. WHO later recommended the use of circulating filarial antigen rapid diagnostic tests, BinaxNow immunochromatographic card tests (ICTs), and after 2016, Alere Filariasis Test Strips (FTS), because they are more sensitive, easier to implement, and more flexible regarding the time of day at which blood can be taken [5].
When a country fails to meet the established thresholds in a pre-TAS, it must implement at least two more rounds of MDA. National programs need to forecast areas that might fail pre-TAS and need repeated MDA, so that they can inform the community and district decision-makers of the implications of pre-TAS failure, including the need for continued MDA to lower prevalence effectively. In addition, financial and human resources must be made available for ordering and distributing drugs and for supervision and monitoring of the further MDA rounds. Ordering drugs and providing MDA budgets often need to be completed before the pre-TAS are implemented, so contingency planning and funding are important to ensure rounds of MDA are not missed.
This study aims to understand which factors are associated with the need for additional rounds of MDA as identified by pre-TAS results using programmatic data from 13 countries. The factors associated with failing pre-TAS are not well understood and have not previously been examined at a multi-country scale in the literature. We examine the association between pre-TAS failure and baseline prevalence, parasites, environmental factors, MDA implementation, and pre-TAS implementation. Understanding determinants of pre-TAS failure will help countries identify where elimination may be most difficult and prioritize the use of limited LF elimination resources.
## Methods
This is a secondary data analysis using existing data, collected for programmatic purposes. Data for this analysis come from 568 districts in 13 countries whose LF elimination programs were supported by the United States Agency for International Development (USAID) through the ENVISION project, led by RTI International, and the END in Africa and END in Asia projects, led by FHI 360. These countries are Bangladesh, Benin, Burkina Faso, Cameroon, Ghana, Haiti, Indonesia, Mali, Nepal, Niger, Sierra Leone, Tanzania, and Uganda. The data represent all pre-TAS funded by USAID from 2012 to 2017 and, in some cases, surveys funded by host government or other non-United States government funders. Because pre-TAS data were collected as part of routine program activities in most countries, in general, ethical clearance was not sought for these surveys. Our secondary analysis only included the aggregated survey results and therefore did not constitute human subjects research; no ethical approval was required.
Building on previous work, we delineated five domains of variables that could influence pre-TAS outcomes: prevalence, agent, environment, MDA, and pre-TAS implementation (Table 1) [6–8]. We prioritized key concepts that could be measured through our data or captured through publicly available global geospatial data sets.
### Data sources
Information on baseline prevalence, MDA coverage, the number of MDA rounds, and pre-TAS information (month and year of survey, district, site name, and outcome) was gathered through regular reporting for the USAID-funded NTD programs (ENVISION, END in Africa, and END in Asia). These data were augmented by other reporting data such as the country's dossier data annexes, the WHO Preventive Chemotherapy and Transmission Control Databank, and WHO reporting forms. Data were then reviewed by country experts, including the Ministry of Health program staff and implementing program staff, and updated as necessary. Data on vectors were also obtained from country experts. The district geographic boundaries were matched to geospatial shapefiles from the ENVISION project geospatial data repository, while other geospatial data were obtained through publicly available sources (Table 1).
### Outcome and covariate variables
The outcome of interest for this analysis was whether a district passed or failed the pre-TAS. A district was considered to have failed if at least one sentinel or spot-check site had a prevalence greater than or equal to 1% Mf or 2% Ag [4].
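To make this derivation concrete, the following minimal sketch shows how the district-level pass/fail indicator could be computed from site-level results under these thresholds. This is not the authors' code; the data layout and column names are hypothetical.
```python
import pandas as pd

# Hypothetical site-level pre-TAS results (one row per sentinel/spot-check site).
sites = pd.DataFrame({
    "district": ["A", "A", "B", "B"],
    "indicator": ["mf", "ag", "ag", "mf"],   # diagnostic indicator used at the site
    "prevalence_pct": [0.4, 1.5, 2.3, 0.9],  # measured site prevalence, in percent
})

# Failure thresholds described above: >=1% Mf or >=2% Ag at any site.
thresholds = {"mf": 1.0, "ag": 2.0}
sites["site_failed"] = sites["prevalence_pct"] >= sites["indicator"].map(thresholds)

# A district fails pre-TAS if at least one of its sites is at or above threshold.
failed_pre_tas = sites.groupby("district")["site_failed"].any()
print(failed_pre_tas)  # district A passes, district B fails
```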
Potential covariates were derived from the available data for each factor in the domain groups listed in Table 1. For ease of interpretation in models and use in program decision-making, new dichotomous variables were created for all variables that had multiple categories or were continuous. Cut-points for continuous variables were derived either from a priori knowledge or through exploratory analysis considering the mean or median value of the dataset, aiming to create two groups of similar size with logical cut-points (e.g. rounding to whole numbers). All variables derived from publicly available global spatial raster datasets were summarized to the district level in ArcGIS Pro using the “zonal statistics” tool. For all variables except geographic area, the final output was the mean pixel value for the district. Categories for each variable were determined by selecting the mean or median dataset value or a cut-off used in other relevant literature [7]. The following section describes the variables included in the final analysis and the final categorizations used.
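The zonal summarization itself was done in ArcGIS Pro; for illustration only, a rough open-source equivalent of the same step using geopandas and rasterstats might look like the sketch below (file and column names are hypothetical).
```python
import geopandas as gpd
from rasterstats import zonal_stats

# Hypothetical inputs: district polygons and a covariate raster (e.g. elevation).
districts = gpd.read_file("district_boundaries.shp")
stats = zonal_stats(districts, "elevation.tif", stats=["mean"])

# Mean pixel value per district, the summary used for all raster covariates
# except geographic area.
districts["elevation_mean"] = [s["mean"] for s in stats]
```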
#### Baseline prevalence
Baseline prevalence can be considered a proxy for local transmission conditions [14] and correlates with prevalence after MDA [14–20]. Baseline prevalence for each district was measured by either blood smears to measure Mf or rapid diagnostic tests to measure Ag. Other studies have modeled Mf and Ag prevalence separately, due to lack of a standardized correlation between the two, especially at pre-MDA levels [21,22]. However, because WHO mapping guidance states that MDA is required if either Mf or Ag is ≥1% and there were not enough data to model each separately, we combined baseline prevalence values regardless of the diagnostic test used. We created two variables for use in the analysis: (1) using the cut-off of <5% or ≥5% (dataset median value of 5%) and (2) using the cut-off of <10% or ≥10%.
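As a sketch, the two dichotomous baseline-prevalence covariates could be derived as follows (data frame and column names are hypothetical):
```python
import pandas as pd

# Hypothetical district-level data with the combined baseline prevalence (%).
df = pd.DataFrame({"baseline_prev_pct": [3.2, 5.0, 12.7, 48.0]})

# Cut-offs at the dataset median (5%) and at 10%, as described above.
df["baseline_ge_5"] = (df["baseline_prev_pct"] >= 5).astype(int)
df["baseline_ge_10"] = (df["baseline_prev_pct"] >= 10).astype(int)
```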
#### Agent
In terms of differences in transmission dynamics by agent, research has shown that Brugia spp. are more susceptible to the anti-filarial drug regimens than Wuchereria bancrofti parasites [23]. Thus, we combined districts reporting B. malayi and B. timori and compared them to areas with W. bancrofti or mixed parasites. Two variables from other domains were identified in exploratory analyses to be highly collinear with the parasite, and thus we considered them in the same group of variables for the final regression models. These were variables delineating vectors (Anopheles or Mansonia compared to Culex) from the environmental domain and drug package [ivermectin-albendazole (IVM-ALB) compared to diethylcarbamazine-albendazole (DEC-ALB)] from the MDA domain.
#### Environment
LF transmission intensity is influenced by differing vector transmission dynamics, including vector biting rates and competence, and the number of individuals with microfilaria [21,24,25]. Since vector data are not always available, previous studies have explored whether environmental variables associated with vector density, such as elevation, rainfall, and temperature, can be used to predict LF prevalence [8,21,26–31]. We included the district area and elevation in meters as geographic variables potentially associated with transmission intensity. In addition, within the climate factor, we included Enhanced Vegetation Index (EVI) and rainfall variables. EVI measures vegetation levels, or “greenness,” where a higher index value indicates a higher level of “greenness.”
We included the socio-economic variable of population density, as it has been positively associated with LF prevalence in some studies [8,27,29], but no significant association has been found in others [30]. Population density could be correlated with vector, as in eastern African countries LF is mostly transmitted by Culex in urban areas and by Anopheles in rural areas [32]. Additionally, we included satellite imagery of nighttime lights as another proxy for socio-economic status [33].
Finally, all or parts of districts that are co-endemic with onchocerciasis may have received multiple rounds of MDA with ivermectin before LF MDA started, which may have lowered LF prevalence in an area [34–36]. Thus, we included a categorical variable to distinguish whether districts were co-endemic with onchocerciasis.
#### MDA
Treatment effectiveness depends upon both drug efficacy (ability to kill adult worms, ability to kill Mf, drug resistance, drug quality) and implementation of MDA (coverage, compliance, number of rounds) [14,16]. Ivermectin is less effective against adult worms than DEC, and therefore it is likely that Ag reduction is slower in areas using ivermectin instead of DEC in MDA [37]. Models also have shown that MDA coverage affects prevalence, although coverage has been defined in various ways, such as median coverage, number of rounds, or individual compliance [14–16,20,38–40]. Furthermore, systematic non-compliance, or population sub-groups which consistently refuse to take medicines, has been shown to represent a threat to elimination [41,42].
We considered three approaches when analyzing the MDA data: median MDA coverage in the most recent 5 rounds, number of rounds with sufficient coverage in the most recent 5 rounds, and count of the total number of rounds. MDA coverage is considered sufficient at or above 65% of the total population who were reported to have ingested the drugs; this was used as the cut point for MDA median coverage for the most recent 5 rounds. The rounds of sufficient coverage variable was categorized as having 2 or fewer rounds compared to 3 or more sufficient rounds. The total number of MDA rounds variable was categorized at 5 or fewer rounds compared to 6 or more rounds ever documented in that district.
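The three MDA covariates could be derived along the lines of the sketch below, assuming a hypothetical long-format table with one row per district and round (all names are illustrative, not the authors' code):
```python
import pandas as pd

# Hypothetical MDA history: one row per district and round, with reported
# epidemiological coverage in percent.
mda = pd.DataFrame({
    "district": ["A"] * 7,
    "round": range(1, 8),
    "coverage_pct": [60, 66, 70, 58, 72, 68, 75],
})

last5 = mda.sort_values("round").groupby("district").tail(5)
summary = pd.DataFrame({
    # Median coverage over the most recent 5 rounds (cut point: >= 65%).
    "median_cov_last5": last5.groupby("district")["coverage_pct"].median(),
    # Rounds at or above the 65% sufficiency threshold among the last 5.
    "suff_rounds_last5": last5.groupby("district")["coverage_pct"]
        .apply(lambda s: int((s >= 65).sum())),
    # Total number of rounds ever documented in the district.
    "total_rounds": mda.groupby("district")["round"].count(),
})

# Dichotomize as in the text: 3+ sufficient rounds, 6+ total rounds.
summary["suff_ge_3"] = summary["suff_rounds_last5"] >= 3
summary["rounds_ge_6"] = summary["total_rounds"] >= 6
print(summary)
```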
#### Pre-TAS implementation
Pre-TAS results can be influenced by the implementation of the survey itself, including the use of a particular diagnostic test, the selection of sites, the timing of the survey, and the appropriate application of methods for population recruitment and diagnostic test administration. We included two variables in the pre-TAS implementation domain: ‘type of diagnostic method used’ and ‘diagnostic test used’. The ‘type of diagnostic method used’ variable categorized districts as using either Mf or Ag. The ‘diagnostic test used’ variable compared Mf (the reference category) to ICT and to FTS (a categorical variable with three values), allowing each test to be compared with the others. Countries switched from ICT to FTS during 2016, while Mf testing continued to be used throughout the study period.
### Data inclusion criteria
The dataset, summarized at the district level, included information from 568 districts where a pre-TAS was being implemented for the first time. A total of 14 districts were removed from the final analysis due to missing data: geospatial boundaries (4), baseline prevalence (4), and MDA coverage (6). The final analysis dataset had 554 districts.
### Statistical analysis and modeling
Statistical analysis and modeling were done with Stata MP 15.1 (College Station, TX). Descriptive statistics comparing various variables to the principal outcome were performed. Significant differences were identified using a chi-square test. A generalized linear model (GLM) with a log link and binomial error distribution—which estimates relative risks—was developed using forward stepwise modeling methods (a log-binomial model). Models with higher pseudo-r-squared and lower Akaike information criterion (AIC) were retained at each step. Pseudo-r-squared is a value between 0 and 1; the higher the value, the better the model predicts the outcome of interest. AIC values are used to compare the relative quality of models; in general, a lower value indicates a better model. Variables were tested by factor group. Once a variable was selected from the group, no other variable in that same group was eligible to be included in the final model due to issues of collinearity and small sample sizes. Interaction between terms in the model was tested after model selection, and interaction terms that modified the original terms' significance were included in the final model. Overall, the number of potential variables able to be included in the model remained low due to the relatively small number of failure results (13%) in the dataset. Furthermore, models with more than 3 variables and one interaction term either were unstable (indicated by very large confidence interval widths) or did not improve the model by being significant predictors or by modifying other parameters already in the model. These models were at heightened risk of non-convergence; we limited the number of variables accordingly.
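The analysis was run in Stata; purely as an illustration, a log-binomial fit with the structure of the final model could look like the following Python/statsmodels sketch. The data here are simulated and all column names are hypothetical; real log-binomial fits of this kind can fail to converge, which is one reason the authors limited the number of variables.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in for the 554-district analysis data set.
rng = np.random.default_rng(0)
n = 554
df = pd.DataFrame({
    "failed": (rng.random(n) < 0.13).astype(int),  # ~13% failure rate
    "baseline_ge_10": rng.integers(0, 2, n),
    "elev_lt_350": rng.integers(0, 2, n),
    "test": rng.choice(["Mf", "ICT", "FTS"], n),
})

# GLM with binomial error and log link: exponentiated coefficients are risk
# ratios. Includes the baseline-by-test interaction, with Mf as the reference
# category for the diagnostic test.
model = smf.glm(
    "failed ~ baseline_ge_10 * C(test, Treatment('Mf')) + elev_lt_350",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Log()),
).fit()

print(np.exp(model.params))  # adjusted risk ratios
print(model.aic)             # compared across candidate models at each step
```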
Sensitivity analysis was performed for the final log-binomial model to test for the validity of results under different parameters by excluding some sub-sets of districts from the dataset and rerunning the model. This analysis was done to understand the robustness of the model when (1) excluding all districts in Cameroon, (2) including only districts in Africa, (3) including only districts with W. bancrofti parasite, and (4) including only districts with Anopheles as the primary vector. The sensitivity analysis excluding Cameroon was done for two reasons. First, Cameroon had the most pre-TAS results included, but no failures. Second, 70% of the Cameroon districts included in the analysis are co-endemic for loiasis. Given that diagnostic tests used in LF mapping have since been shown to cross-react with loiasis, there is some concern that these districts might not have been truly LF-endemic [43,44].
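A sketch of the four sensitivity re-fits, in the same simulated setting as the previous block (grouping columns and values again hypothetical):
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated district-level data with the grouping columns needed for the
# four sensitivity subsets; all names and values are illustrative.
rng = np.random.default_rng(1)
n = 554
df = pd.DataFrame({
    "failed": (rng.random(n) < 0.13).astype(int),
    "baseline_ge_10": rng.integers(0, 2, n),
    "elev_lt_350": rng.integers(0, 2, n),
    "test": rng.choice(["Mf", "ICT", "FTS"], n),
    "country": rng.choice(["Cameroon", "Ghana", "Nepal"], n),
    "parasite": rng.choice(["W. bancrofti", "Brugia spp."], n, p=[0.94, 0.06]),
    "vector": rng.choice(["Anopheles", "Culex"], n),
})
df["region"] = np.where(df["country"] == "Nepal", "Asia", "Africa")

formula = "failed ~ baseline_ge_10 * C(test, Treatment('Mf')) + elev_lt_350"
subsets = {
    "(1) without Cameroon": df[df["country"] != "Cameroon"],
    "(2) Africa only": df[df["region"] == "Africa"],
    "(3) W. bancrofti only": df[df["parasite"] == "W. bancrofti"],
    "(4) Anopheles only": df[df["vector"] == "Anopheles"],
}
# Re-fit the same model on each subset and report risk ratios.
for name, sub in subsets.items():
    fit = smf.glm(formula, data=sub,
                  family=sm.families.Binomial(link=sm.families.links.Log())).fit()
    print(name, np.exp(fit.params).round(2).to_dict())
```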
## Results
The overall pre-TAS pass rate for the districts included in this analysis was 87% (74 failures in 554 districts). Nearly 40% of the 554 districts were from Cameroon (134) and Tanzania (87) (Fig 1). No districts in Bangladesh, Cameroon, Mali, or Uganda failed a pre-TAS in this data set; over 25% of districts in Burkina Faso, Ghana, Haiti, Nepal, and Sierra Leone failed pre-TAS in this data set. Baseline prevalence varied widely within and between the 13 countries. Fig 2 shows the highest, lowest, and median baseline prevalence in the study districts by country. Burkina Faso had the highest median baseline prevalence at 52% and Burkina Faso, Tanzania, and Ghana all had at least one district with a very high baseline of over 70%. In Mali, Indonesia, Benin, and Bangladesh, all districts had baseline prevalences below 20%.
Fig 3 shows the unadjusted analysis for key variables by pre-TAS result. Variables statistically significantly associated with failure (p-value ≤0.05) included higher baseline prevalence at or above 5% or 10%, FTS diagnostic test, primary vector of Culex, treatment with DEC-ALB, lower elevation, higher population density, higher EVI, higher annual rainfall, and six or more rounds of MDA. Variables that were not significantly associated with pre-TAS failure included diagnostic method used (Ag or Mf), parasite, co-endemicity for onchocerciasis, median MDA coverage, and sufficient rounds of MDA.
The final log-binomial model included the variables of baseline prevalence ≥10%, the diagnostic test used (FTS and ICT), and elevation. The final model also included a significant interaction term between high baseline and diagnostic test used.
Fig 4 shows the risk ratio results with their corresponding confidence intervals. In the model with an interaction between baseline prevalence and diagnostic test, the baseline parameter was significant. Districts with high baseline prevalence had a statistically significant (p-value ≤0.05) 2.52 times higher risk of failure (95% CI 1.37–4.64) compared to those with low baseline prevalence. Neither the FTS nor the ICT diagnostic test alone was significant, nor was the interaction term. Additionally, districts with an elevation below 350 meters had a statistically significant (p-value ≤0.05) 3.07 times higher risk of failing pre-TAS (95% CI 1.95–4.83).
Sensitivity analyses were conducted using the same model with different subsets of the dataset, including (1) all districts except those in Cameroon (134 districts, with no failures), (2) only districts in Africa, (3) only districts with W. bancrofti, and (4) only districts with Anopheles as the primary vector. The results of the sensitivity models (Table 2) indicate an overall robust model. High baseline prevalence and lower elevation remained significant across all the models. The ICT diagnostic test remained non-significant across all models. The FTS diagnostic test was significant with a risk ratio above 1 in model 1 and below 1 in model 4. The interaction term between baseline prevalence and the FTS diagnostic test was significant in three models, though the estimate was unstable in the W. bancrofti-only and Anopheles-only models (models 3 and 4, respectively), as indicated by large confidence intervals.
Overall, 74 districts in the dataset failed pre-TAS. Fig 5 summarizes the likelihood of failure by the variable combinations identified in the log-binomial model. Of the districts with a baseline prevalence ≥10% that used an FTS diagnostic test and had an average elevation below 350 meters (combination C01), 87% of the 23 districts failed. Of districts with a high baseline that used an ICT diagnostic test and had a low average elevation (C02), 45% failed. Overall, the combinations with high baseline and low elevation (C01, C02, and C04) accounted for 51% of all failures (38 of 74).
## Discussion
This paper reports for the first time factors associated with pre-TAS results from a multi-country analysis. Variables significantly associated with failure were higher baseline prevalence and lower elevation. Districts with a baseline prevalence of 10% or more had a 2.52 times higher risk of failing pre-TAS in the final log-binomial model. In the bivariate analysis, districts with a baseline prevalence at or above 5% were also significantly more likely to fail compared to those with lower baselines, which indicates that the threshold for higher baseline prevalence may be as low as 5%, similar to what was found by Goldberg et al., who explored ecological and socioeconomic factors associated with TAS failure [7].
Though the diagnostic test used was selected for the final log-binomial model, neither category (FTS or ICT) was significant after interaction with high baseline. FTS alone was significant in the bivariate analysis compared to ICT or Mf. This result is not surprising given previous research, which found that FTS was more sensitive than ICT [45].
Elevation was the only environmental domain variable selected for the final log-binomial model during the model selection process, with areas of lower elevation (<350 m) found to have a 3.07 times higher risk of failing pre-TAS compared to districts with higher elevation. Similar results related to elevation were found in previous studies [8,31], including Goldberg et al. [7], who used a cutoff of 200 meters. Elevation likely also encompasses related environmental concepts, such as vector habitat, greenness (EVI), or rainfall, which affect vectors' chances of survival.
The small number of failures overall prevented the inclusion of a large number of variables in the final log-binomial model. However, other variables associated with failure in the bivariate analyses, such as Culex as the primary vector, higher population density, higher EVI, higher rainfall, and more rounds of MDA, should not be discounted when making programmatic decisions. Other models have shown that districts where Culex, rather than Anopheles, is the predominant vector require more intense interventions to reach elimination [24,41]. Higher population density, which was also found to predict TAS failure [7], could be related to different vector species transmission dynamics in urban areas, as well as the fact that MDAs are harder to conduct and to measure accurately in urban areas [46,47]. Both a higher enhanced vegetation index (>0.3) and higher rainfall (>700 mm per year) contribute to the expansion of vector habitats and populations. Additionally, having more than five rounds of MDA before pre-TAS was also statistically significantly associated with higher failure in the bivariate analysis. It is unclear why a higher number of rounds is associated with first pre-TAS failure, given that other research has shown the opposite [15,16].
No other variables included in this analysis were significantly associated with pre-TAS failure. Goldberg et al. found Brugia spp. to be significantly associated with failure, but our results did not show this association. This is likely due in part to the small number of districts with Brugia spp. in our dataset (6%) compared to 46% in the Goldberg et al. article [7]. MDA coverage levels were not significantly associated with pre-TAS failure, likely due to the lack of variance in the coverage data, since WHO guidance dictates a minimum of five rounds of MDA with ≥65% epidemiological coverage to be eligible to implement pre-TAS. This should not be interpreted as evidence that high MDA coverage levels are not necessary to lower prevalence.
Limitations to this study include data sources, excluded data, unreported data, misassigned data, and aggregation of results at the district level. The main data sources for this analysis were programmatic data, which may be less accurate than data collected specifically for research purposes. This is particularly true of the MDA coverage data, where some countries report data quality challenges in areas of instability or frequent population migration. Even though risk factors such as age, sex, compliance with MDA, and use of bednets have been shown to influence infection in individuals [40,48–50], we could not include factors from the human host domain in our analysis, as data sets were aggregated at site level and did not include individual information. In addition, vector control data were not universally available across the 13 countries and thus were not included in the analysis, despite studies showing that vector control has an impact on reducing LF prevalence [41,48,51–53].
Fourteen districts were excluded from the analysis because we were not able to obtain complete data for baseline prevalence, MDA coverage, or geographic boundaries. One of these districts had failed pre-TAS. It is likely that these exclusions had minimal impact on the conclusions, as they represented a small number of districts and were similar to other included districts in terms of key variables. Unreported data could have occurred if a country conducted a pre-TAS that failed and then chose not to report it or reported it as a mid-term survey instead. Anecdotally, we know this has occurred occasionally, but we do not believe the practice to be widespread. Another limitation in the analysis is the potential misassignment of key variable values to a district due to changes in the district over time. Redistricting (changes in district size or composition) was pervasive in many countries during the study period; however, we expect the impact on the study outcome to be minimal, as the historical prevalence and MDA data from the “mother” districts usually carry over to the new “daughter” districts. However, it is possible that a split created an area of higher prevalence or lower MDA coverage than would have been found on average in the overall larger original “mother” district. Finally, the aggregation or averaging of results to the district level may mask heterogeneity within districts. Though this impact could be substantial in districts with considerable heterogeneity, the use of median values and binary variables mitigated the likelihood of the data being skewed by extreme outliers in a district.
As this analysis used data across a variety of countries and epidemiological situations, the results are likely relevant for other districts in the countries examined and in countries with similar epidemiological backgrounds. In general, as more data become available at site level through the increased use of electronic data collection tools, further analysis of geospatial variables and associations will be possible. For example, with the availability of GPS coordinates, it may become possible to analyze outcomes by site and to link the geospatial environmental domain variables at a smaller scale. Future analyses also might seek to include information from coverage surveys or qualitative research studies on vector control interventions such as bed net usage, MDA compliance, population movement, and sub-populations that might be missed during MDA. Future pre-TAS using electronic data collection could include sex and age of individuals included in the survey.
This paper provides evidence from analysis of 554 districts and 13 countries on the factors associated with pre-TAS results. Baseline prevalence, elevation, vector, population density, EVI, rainfall, and number of MDA rounds were all significant in either bivariate or multivariate analyses. This information along with knowledge of local context can help countries more effectively plan pre-TAS and forecast program activities, such as the potential need for more than five rounds of MDA in areas with high baseline and/or low elevation.
## Tables
Table 1: Categorization of potential factors influencing pre-TAS results.
| Domain | Factor | Covariate | Description | Reference Group | Summary statistic | Temporal Resolution | Source |
|------------------------|-----------------------|-------------------------------|-----------------------------------------------------------------|----------------------|---------------------|-----------------------|--------------------|
| Prevalence | Baseline prevalence | 5% cut off | Maximum reported mapping or baseline sentinel site prevalence | <5% | Maximum | Varies | Programmatic data |
| Prevalence | Baseline prevalence | 10% cut off | Maximum reported mapping or baseline sentinel site prevalence | <10% | Maximum | Varies | Programmatic data |
| Agent | Parasite | Parasite | Predominant parasite in district | W. bancrofti & mixed | Binary value | 2018 | Programmatic data |
| Environment | Vector | Vector | Predominant vector in district | Anopheles & Mansonia | Binary value | 2018 | Country expert |
| Environment | Geography | Elevation | Elevation measured in meters | >350 | Mean | 2000 | CGIAR-CSI SRTM [9] |
| Environment | Geography | District area | Area measured in km2 | >2,500 | Maximum sum | Static | Programmatic data |
| Environment | Climate | EVI | Enhanced vegetation index | > 0.3 | Mean | 2015 | MODIS [10] |
| Environment | Climate | Rainfall | Annual rainfall measured in mm | ≤ 700 | Mean | 2015 | CHIRPS [11] |
| Environment | Socio-economic | Population density | Number of people per km2 | ≤ 100 | Mean | 2015 | WorldPop [12] |
| Environment | Socio-economic | Nighttime lights | Nighttime light index from 0 to 63 | >1.5 | Mean | 2015 | VIIRS [13] |
| Environment | Co-endemicity | Co-endemic for onchocerciasis | Part or all of district is also endemic for onchocerciasis | Non-endemic | Binary value | 2018 | Programmatic data |
| MDA | Drug efficacy | Drug package | DEC-ALB or IVM-ALB | DEC-ALB | Binary value | 2018 | Programmatic data |
| MDA | Implementation of MDA | Coverage | Median MDA coverage for last 5 rounds | ≥ 65% | Median | Varies | Programmatic data |
| MDA | Implementation of MDA | Sufficient rounds | Number of rounds with sufficient coverage (≥ 65%) in last 5 rounds | ≥ 3 | Count | Varies | Programmatic data |
| MDA | Implementation of MDA | Number of rounds | Maximum number of recorded rounds of MDA | ≥ 6 | Maximum | Varies | Programmatic data |
| Pre-TAS implementation | Quality of survey | Diagnostic method | Using Mf or Ag | Mf | Binary value | Varies | Programmatic data |
| Pre-TAS implementation | Quality of survey | Diagnostic test | Using Mf, ICT, or FTS | Mf | Categorical | Varies | Programmatic data |
Table 2: Adjusted risk ratios for pre-TAS failure from log-binomial model sensitivity analysis.
| | | (1) | (2) | (3) | (4) |
|---------------------------------------------|------------------|----------------------------|--------------------------|--------------------------------------|---------------------------------|
| | Full Model | Without Cameroon districts | Only districts in Africa | Only W. bancrofti parasite districts | Only Anopheles vector districts |
| Number of Failures | 74 | 74 | 44 | 72 | 46 |
| Number of total districts | (N = 554) | (N = 420) | (N = 407) | (N = 518) | (N = 414) |
| Covariate | RR (95% CI) | RR (95% CI) | RR (95% CI) | RR (95% CI) | RR (95% CI) |
| Baseline prevalence ≥ 10% & used FTS test | 2.38 (0.96–5.90) | 1.23 (0.52–2.92) | 14.52 (1.79–117.82) | 2.61 (1.03–6.61) | 15.80 (1.95–127.67) |
| Baseline prevalence ≥ 10% & used ICT test | 0.80 (0.20–3.24) | 0.42 (0.11–1.68) | 1.00 (0.00–0.00) | 0.88 (0.21–3.60) | 1.00 (0.00–0.00) |
| +Used FTS test | 1.16 (0.52–2.59) | 2.40 (1.12–5.11) | 0.15 (0.02–1.11) | 1.03 (0.45–2.36) | 0.13 (0.02–0.96) |
| +Used ICT test | 0.92 (0.32–2.67) | 1.47 (0.51–4.21) | 0.33 (0.04–2.54) | 0.82 (0.28–2.43) | 0.27 (0.03–2.04) |
| +Baseline prevalence ≥ 10% | 2.52 (1.37–4.64) | 2.42 (1.31–4.47) | 2.03 (1.06–3.90) | 2.30 (1.21–4.36) | 2.01 (1.07–3.77) |
| Elevation < 350m | 3.07 (1.95–4.83) | 2.21 (1.42–3.43) | 4.68 (2.22–9.87) | 3.04 (1.93–4.79) | 3.76 (1.92–7.37) |
## Figures
Fig 1: Number of pre-TAS by country.
<!-- image -->
Fig 2: District-level baseline prevalence by country.
<!-- image -->
Fig 3: Percent pre-TAS failure by each characteristic (unadjusted).
<!-- image -->
Fig 4: Adjusted risk ratios for pre-TAS failure with 95% Confidence Interval from log-binomial model.
<!-- image -->
Fig 5: Analysis of failures by model combinations.
<!-- image -->
## References
- World Health Organization. Lymphatic filariasis: progress report 2000–2009 and strategic plan 2010–2020. Geneva; 2010.
- World Health Organization. Validation of elimination of lymphatic filariasis as a public health problem. Geneva; 2017.
- Global programme to eliminate lymphatic filariasis: progress report, 2018. Wkly Epidemiol Rec (2019)
- World Health Organization. Global programme to eliminate lymphatic filariasis: monitoring and epidemiological assessment of mass drug administration. Geneva; 2011.
- World Health Organization. Strengthening the assessment of lymphatic filariasis transmission and documenting the achievement of elimination—Meeting of the Neglected Tropical Diseases Strategic and Technical Advisory Groups Monitoring and Evaluation Subgroup on Disease-specific Indicators. 2016; 42.
- Kyelem D; Biswas G; Bockarie MJ; Bradley MH; El-Setouhy M; Fischer PU. Determinants of success in national programs to eliminate lymphatic filariasis: a perspective identifying essential elements and research needs. Am J Trop Med Hyg (2008)
- Goldberg EM; King JD; Mupfasoni D; Kwong K; Hay SI; Pigott DM. Ecological and socioeconomic predictors of transmission assessment survey failure for lymphatic filariasis. Am J Trop Med Hyg (2019)
- Cano J; Rebollo MP; Golding N; Pullan RL; Crellen T; Soler A. The global distribution and transmission limits of lymphatic filariasis: past and present. Parasites and Vectors (2014)
- CGIAR-CSI. CGIAR-CSI SRTM 90m DEM Digital Elevation Database. In: .
- USGS NASA. Vegetation indices 16-Day L3 global 500 MOD13A1 dataset [Internet]. [cited 1 May 2018]. Available: .
- Funk C; Peterson P; Landsfeld M; Pedreros D; Verdin J; Shukla S. The climate hazards infrared precipitation with stations—A new environmental record for monitoring extremes. Sci Data (2015)
- Lloyd CT; Sorichetta A; Tatem AJ. High resolution global gridded data for use in population studies. Sci Data (2017)
- Elvidge CD; Baugh KE; Zhizhin M; Hsu F-C. Why VIIRS data are superior to DMSP for mapping nighttime lights. Proc Asia-Pacific Adv Netw (2013)
- Jambulingam P; Subramanian S; De Vlas SJ; Vinubala C; Stolk WA. Mathematical modelling of lymphatic filariasis elimination programmes in India: required duration of mass drug administration and post-treatment level of infection indicators. Parasites and Vectors (2016)
- Michael E; Malecela-Lazaro MN; Simonsen PE; Pedersen EM; Barker G; Kumar A. Mathematical modelling and the control of lymphatic filariasis. Lancet Infect Dis (2004)
- Stolk WA; Swaminathan S; van Oortmarssen GJ; Das PK; Habbema JDF. Prospects for elimination of bancroftian filariasis by mass drug treatment in Pondicherry, India: a simulation study. J Infect Dis (2003)
- Grady CA; De Rochars MB; Direny AN; Orelus JN; Wendt J; Radday J. Endpoints for lymphatic filariasis programs. Emerg Infect Dis (2007)
- Evans D; McFarland D; Adamani W; Eigege A; Miri E; Schulz J. Cost-effectiveness of triple drug administration (TDA) with praziquantel, ivermectin and albendazole for the prevention of neglected tropical diseases in Nigeria. Ann Trop Med Parasitol (2011)
- Richards FO; Eigege A; Miri ES; Kal A; Umaru J; Pam D. Epidemiological and entomological evaluations after six years or more of mass drug administration for lymphatic filariasis elimination in Nigeria. PLoS Negl Trop Dis (2011)
- Biritwum NK; Yikpotey P; Marfo BK; Odoom S; Mensah EO; Asiedu O. Persistent “hotspots” of lymphatic filariasis microfilaraemia despite 14 years of mass drug administration in Ghana. Trans R Soc Trop Med Hyg (2016)
- Moraga P; Cano J; Baggaley RF; Gyapong JO; Njenga SM; Nikolay B. Modelling the distribution and transmission intensity of lymphatic filariasis in sub-Saharan Africa prior to scaling up interventions: integrated use of geostatistical and mathematical modelling. Parasites and Vectors (2015)
- Irvine MA; Njenga SM; Gunawardena S; Wamae CN; Cano J; Brooker SJ. Understanding the relationship between prevalence of microfilariae and antigenaemia using a model of lymphatic filariasis infection. Trans R Soc Trop Med Hyg (2016)
- Ottesen EA. Efficacy of diethylcarbamazine in eradicating infection with lymphatic-dwelling filariae in humans. Rev Infect Dis (1985)
- Gambhir M; Bockarie M; Tisch D; Kazura J; Remais J; Spear R. Geographic and ecologic heterogeneity in elimination thresholds for the major vector-borne helminthic disease, lymphatic filariasis. BMC Biol (2010)
- World Health Organization. Global programme to eliminate lymphatic filariasis: practical entomology handbook. Geneva; 2013.
- Slater H; Michael E. Predicting the current and future potential distributions of lymphatic filariasis in Africa using maximum entropy ecological niche modelling. PLoS One (2012)
- Slater H; Michael E. Mapping, Bayesian geostatistical analysis and spatial prediction of lymphatic filariasis prevalence in Africa. PLoS One (2013)
- Sabesan S; Raju KHK; Subramanian S; Srivastava PK; Jambulingam P. Lymphatic filariasis transmission risk map of India, based on a geo-environmental risk model. Vector-Borne Zoonotic Dis (2013)
- Stanton MC; Molyneux DH; Kyelem D; Bougma RW; Koudou BG; Kelly-Hope LA. Baseline drivers of lymphatic filariasis in Burkina Faso. Geospat Health (2013)
- Manhenje I; Teresa Galán-Puchades M; Fuentes M V. Socio-environmental variables and transmission risk of lymphatic filariasis in central and northern Mozambique. Geospat Health (2013)
- Ngwira BM; Tambala P; Perez AM; Bowie C; Molyneux DH. The geographical distribution of lymphatic filariasis infection in Malawi. Filaria J (2007)
- Simonsen PE; Mwakitalu ME. Urban lymphatic filariasis. Parasitol Res (2013)
- Proville J; Zavala-Araiza D; Wagner G. Night-time lights: a global, long term look at links to socio-economic trends. PLoS One (2017)
- Endeshaw T; Taye A; Tadesse Z; Katabarwa MN; Shafi O; Seid T. Presence of Wuchereria bancrofti microfilaremia despite seven years of annual ivermectin monotherapy mass drug administration for onchocerciasis control: a study in north-west Ethiopia. Pathog Glob Health (2015)
- Richards FO; Eigege A; Pam D; Kal A; Lenhart A; Oneyka JOA. Mass ivermectin treatment for onchocerciasis: lack of evidence for collateral impact on transmission of Wuchereria bancrofti in areas of co-endemicity. Filaria J (2005)
- Kyelem D; Sanou S; Boatin BA; Medlock J; Couibaly S; Molyneux DH. Impact of long-term ivermectin (Mectizan) on Wuchereria bancrofti and Mansonella perstans infections in Burkina Faso: strategic and policy implications. Ann Trop Med Parasitol (2003)
- Weil GJ; Lammie PJ; Richards FO; Eberhard ML. Changes in circulating parasite antigen levels after treatment of bancroftian filariasis with diethylcarbamazine and ivermectin. J Infect Dis (1991)
- Kumar A; Sachan P. Measuring impact on filarial infection status in a community study: role of coverage of mass drug administration. Trop Biomed (2014)
- Njenga SM; Mwandawiro CS; Wamae CN; Mukoko DA; Omar AA; Shimada M. Sustained reduction in prevalence of lymphatic filariasis infection in spite of missed rounds of mass drug administration in an area under mosquito nets for malaria control. Parasites and Vectors (2011)
- Boyd A; Won KY; McClintock SK; Donovan C V; Laney SJ; Williams SA. A community-based study of factors associated with continuing transmission of lymphatic filariasis in Leogane, Haiti. PLoS Negl Trop Dis (2010)
- Irvine MA; Reimer LJ; Njenga SM; Gunawardena S; Kelly-Hope L; Bockarie M. Modelling strategies to break transmission of lymphatic filariasis—aggregation, adherence and vector competence greatly alter elimination. Parasites and Vectors (2015)
- Irvine MA; Stolk WA; Smith ME; Subramanian S; Singh BK; Weil GJ. Effectiveness of a triple-drug regimen for global elimination of lymphatic filariasis: a modelling study. Lancet Infect Dis (2017)
- Pion SD; Montavon C; Chesnais CB; Kamgno J; Wanji S; Klion AD. Positivity of antigen tests used for diagnosis of lymphatic filariasis in individuals without Wuchereria bancrofti infection but with high loa loa microfilaremia. Am J Trop Med Hyg (2016)
- Wanji S; Esum ME; Njouendou AJ; Mbeng AA; Chounna Ndongmo PW; Abong RA. Mapping of lymphatic filariasis in loiasis areas: a new strategy shows no evidence for Wuchereria bancrofti endemicity in Cameroon. PLoS Negl Trop Dis (2018)
- Chesnais CB; Awaca-Uvon NP; Bolay FK; Boussinesq M; Fischer PU; Gankpala L. A multi-center field study of two point-of-care tests for circulating Wuchereria bancrofti antigenemia in Africa. PLoS Negl Trop Dis (2017)
- Silumbwe A; Zulu JM; Halwindi H; Jacobs C; Zgambo J; Dambe R. A systematic review of factors that shape implementation of mass drug administration for lymphatic filariasis in sub-Saharan Africa. BMC Public Health (2017)
- Adams AM; Vuckovic M; Birch E; Brant TA; Bialek S; Yoon D. Eliminating neglected tropical diseases in urban areas: a review of challenges, strategies and research directions for successful mass drug administration. Trop Med Infect Dis (2018)
- Rao RU; Samarasekera SD; Nagodavithana KC; Dassanayaka TDM; Punchihewa MW; Ranasinghe USB. Reassessment of areas with persistent lymphatic filariasis nine years after cessation of mass drug administration in Sri Lanka. PLoS Negl Trop Dis (2017)
- Xu Z; Graves PM; Lau CL; Clements A; Geard N; Glass K. GEOFIL: a spatially-explicit agent-based modelling framework for predicting the long-term transmission dynamics of lymphatic filariasis in American Samoa. Epidemics (2018)
- Id CM; Tettevi EJ; Mechan F; Idun B; Biritwum N; Osei-atweneboana MY. Elimination within reach: a cross-sectional study highlighting the factors that contribute to persistent lymphatic filariasis in eight communities in rural Ghana. PLoS Negl Trop Dis (2019)
- Eigege A; Kal A; Miri E; Sallau A; Umaru J; Mafuyai H. Long-lasting insecticidal nets are synergistic with mass drug administration for interruption of lymphatic filariasis transmission in Nigeria. PLoS Negl Trop Dis (2013)
- Van den Berg H; Kelly-Hope LA; Lindsay SW. Malaria and lymphatic filariasis: The case for integrated vector management. Lancet Infect Dis (2013)
- Webber R. Eradication of Wuchereria bancrofti infection through vector control. Trans R Soc Trop Med Hyg (1979)

View File

@@ -0,0 +1,177 @@
item-0 at level 0: unspecified: group _root_
item-1 at level 1: title: Potential to reduce greenhouse g ... cattle systems in subtropical regions
item-2 at level 2: paragraph: Ribeiro-Filho Henrique M. N.; 1: ... , California, United States of America
item-3 at level 2: section_header: Abstract
item-4 at level 3: text: Carbon (C) footprint of dairy pr ... uce the C footprint to a small extent.
item-5 at level 2: section_header: Introduction
item-6 at level 3: text: Greenhouse gas (GHG) emissions f ... suitable for food crop production [4].
item-7 at level 3: text: Considering the key role of live ... anagement to mitigate the C footprint.
item-8 at level 3: text: In subtropical climate zones, co ... t in tropical pastures (e.g. [1719]).
item-9 at level 3: text: It has been shown that dairy cow ... sions from crop and reduced DM intake.
item-10 at level 3: text: The aim of this work was to quan ... uring lactation periods was evaluated.
item-11 at level 2: section_header: Materials and methods
item-12 at level 3: text: An LCA was developed according t ... 90816 - https://www.udesc.br/cav/ceua.
item-13 at level 3: section_header: System boundary
item-14 at level 4: text: The goal of the study was to ass ... n were outside of the system boundary.
item-15 at level 3: section_header: Functional unit
item-16 at level 4: text: The functional unit was one kilo ... tein according to NRC [20] as follows:
item-17 at level 4: text: ECM = Milk production × (0.0929 ... characteristics described in Table 1.
item-18 at level 3: section_header: Data sources and livestock system description
item-19 at level 4: text: The individual feed requirements ... ed to the ad libitum TMR intake group.
item-20 at level 4: text: Using experimental data, three s ... med during an entire lactation period.
item-21 at level 3: section_header: Impact assessment
item-22 at level 4: text: The CO2e emissions were calculat ... 65 for CO2, CH4 and N2O, respectively.
item-23 at level 3: section_header: Feed production
item-24 at level 4: section_header: Diets composition
item-25 at level 5: text: The DM intake of each ingredient ... collected throughout the experiments.
item-26 at level 4: section_header: GHG emissions from crop and pasture production
item-27 at level 5: text: GHG emission factors used for of ... onsume 70% of pastures during grazing.
item-28 at level 5: text: Emissions from on-farm feed prod ... factors described by Rotz et al. [42].
item-29 at level 3: section_header: Animal husbandry
item-30 at level 4: text: The CH4 emissions from enteric f ... 1) = 13.8 + 0.185 × NDF (% DM intake).
item-31 at level 3: section_header: Manure from confined cows and urine and dung from grazing animals
item-32 at level 4: text: The CH4 emission from manure (kg ... for dietary GE per kg of DM (MJ kg-1).
item-33 at level 4: text: The OM digestibility was estimat ... h were 31%, 26% and 46%, respectively.
item-34 at level 4: text: The N2O-N emissions from urine a ... using the IPCC [38] emission factors.
item-35 at level 3: section_header: Farm management
item-36 at level 4: text: Emissions due to farm management ... crop and pasture production section.
item-37 at level 4: text: The amount of fuel use for manur ... me that animals stayed on confinement.
item-38 at level 4: text: The emissions from fuel were est ... × kg CO2e (kg machinery mass)-1 [42].
item-39 at level 4: text: Emissions from electricity for m ... ws in naturally ventilated barns [47].
item-40 at level 4: text: The lower impact of emissions fr ... greater than 5% of total C footprint.
item-41 at level 4: text: Emissions from farm management d ... gas and hard coal, respectively [46].
item-42 at level 3: section_header: Co-product allocation
item-43 at level 4: text: The C footprint for milk produce ... directly assigned to milk production.
item-44 at level 3: section_header: Sensitivity analysis
item-45 at level 4: text: A sensitivity index was calculat ... ses a similar change in the footprint.
item-46 at level 2: section_header: Results and discussion
item-47 at level 3: text: The study has assessed the impac ... , feed production and electricity use.
item-48 at level 3: section_header: Greenhouse gas emissions
item-49 at level 4: text: Depending on emission factors us ... more than 5% of overall GHG emissions.
item-50 at level 4: text: Considering IPCC emission factor ... the C footprint of the dairy systems.
item-51 at level 4: text: The similarity of C footprint be ... of TMR was replaced by pasture access.
item-52 at level 4: text: The lower C footprint in scenari ... r, averaging 0.004 kg N2O-N kg-1 [37].
item-53 at level 3: section_header: Methane emissions
item-54 at level 4: text: The enteric CH4 intensity was si ... ], which did not happen in this study.
item-55 at level 4: text: The lack of difference in enteri ... same scenarios as in this study [26].
item-56 at level 3: section_header: Emissions from excreta and feed production
item-57 at level 4: text: Using IPCC emission factors for ... may not be captured by microbes [65].
item-58 at level 4: text: Using local emission factors for ... be revised for the subtropical region.
item-59 at level 4: text: Emissions for feed production de ... act, particularly in confinements [9].
item-60 at level 3: section_header: Assumptions and limitations
item-61 at level 4: text: The milk production and composit ... ions as a function of soil management.
item-62 at level 3: section_header: Further considerations
item-63 at level 4: text: The potential for using pasture ... g ECM)-1 in case of foot lesions [72].
item-64 at level 4: text: Grazing lands may also improve b ... hange of CO2 would be negligible [76].
item-65 at level 2: section_header: Conclusions
item-66 at level 3: text: This study assessed the C footpr ... on with or without access to pastures.
item-67 at level 2: section_header: Tables
item-68 at level 3: table with [13x3]
item-68 at level 4: caption: Table 1: Descriptive characteristics of the herd.
item-69 at level 3: table with [21x11]
item-69 at level 4: caption: Table 2: Dairy cows diets in different scenariosa.
item-70 at level 3: table with [9x5]
item-70 at level 4: caption: Table 3: GHG emission factors for Off- and On-farm feed production.
item-71 at level 3: table with [28x5]
item-71 at level 4: caption: Table 4: GHG emissions from On-farm feed production.
item-72 at level 3: table with [12x4]
item-72 at level 4: caption: Table 5: Factors for major resource inputs in farm management.
item-73 at level 2: section_header: Figures
item-74 at level 3: picture
item-74 at level 4: caption: Fig 1: Overview of the milk production system boundary considered in the study.
item-75 at level 3: picture
item-75 at level 4: caption: Fig 2: Overall greenhouse gas emissions in dairy cattle systems under various scenarios.
TMR = ad libitum TMR intake, 75TMR = 75% of ad libitum TMR intake with access to pasture, 50TMR = 50% of ad libitum TMR intake with access to pasture. (a) N2O emission factors for urine and dung from IPCC [38], feed production emission factors from Table 3 without accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.73 kg CO2e kWh-1 [41]. (b) N2O emission factors for urine and dung from IPCC [38], feed production emission factors from Table 3 without accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.205 kg CO2e kWh-1 [46]; (c) N2O emission factors for urine and dung from local data [37], feed production EF from Table 4 without accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.205 kg CO2e kWh-1 [46]. (d) N2O emission factors for urine and dung from local data [37], feed production emission factors from Table 4 accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.205 kg CO2e kWh-1 [46].
item-76 at level 3: picture
item-76 at level 4: caption: Fig 3: Sensitivity of the C footprint.
Sensitivity index = percentage change in C footprint for a 10% change in the given emission source divided by 10%. (a) N2O emission factors for urine and dung from IPCC [38], feed production emission factors from Table 3, production of electricity = 0.73 kg CO2e kWh-1 [41]. (b) N2O emission factors for urine and dung from IPCC [38], feed production emission factors from Table 3, production of electricity = 0.205 kg CO2e kWh-1 [46]; (c) N2O emission factors for urine and dung from local data [37], feed production EF from Table 4 without accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.205 kg CO2e kWh-1 [46]. (d) N2O emission factors for urine and dung from local data [37], feed production emission factors from Table 4 accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.205 kg CO2e kWh-1 [46].
item-77 at level 3: picture
item-77 at level 4: caption: Fig 4: Greenhouse gas emissions (GHG) from manure and feed production in dairy cattle systems.
TMR = ad libitum TMR intake, 75TMR = 75% of ad libitum TMR intake with access to pasture, 50TMR = 50% of ad libitum TMR intake with access to pasture. (a) N2O emission factors for urine and dung from IPCC [38]. (b) Feed production emission factors from Table 3. (c) N2O emission factors for urine and dung from local data [37]. (d) Feed production emission factors from Table 4 accounting for sequestered CO2-C from perennial pasture.
item-78 at level 2: section_header: References
item-79 at level 3: list: group list
item-80 at level 4: list_item: Climate Change and Land. Chapter 5: Food Security (2019)
item-81 at level 4: list_item: Herrero M; Henderson B; Havlík P ... ivestock sector. Nat Clim Chang (2016)
item-82 at level 4: list_item: Rivera-Ferre MG; López-i-Gelats ... iley Interdiscip Rev Clim Chang (2016)
item-83 at level 4: list_item: van Zanten HHE; Mollenhorst H; K ... ystems. Int J Life Cycle Assess (2016)
item-84 at level 4: list_item: Hristov AN; Oh J; Firkins L; Dij ... mitigation options. J Anim Sci (2013)
item-85 at level 4: list_item: Hristov AN; Ott T; Tricarico J; ... mitigation options. J Anim Sci (2013)
item-86 at level 4: list_item: Montes F; Meinen R; Dell C; Rotz ... mitigation options. J Anim Sci (2013)
item-87 at level 4: list_item: Ledgard SF; Wei S; Wang X; Falco ... mitigations. Agric Water Manag (2019)
item-88 at level 4: list_item: O'Brien D; Shalloo L; Patton J; ... inement dairy farms. Agric Syst (2012)
item-89 at level 4: list_item: Salou T; Le Mouël C; van der Wer ... nal unit matters!. J Clean Prod (2017)
item-90 at level 4: list_item: Lizarralde C; Picasso V; Rotz CA ... Case Studies. Sustain Agric Res (2014)
item-91 at level 4: list_item: Clark CEF; Kaur R; Millapan LO; ... ction and behavior. J Dairy Sci (2018)
item-92 at level 4: list_item: FAOSTAT. (2017)
item-93 at level 4: list_item: Vogeler I; Mackay A; Vibart R; R ... ms modelling. Sci Total Environ (2016)
item-94 at level 4: list_item: Wilkinson JM; Lee MRF; Rivero MJ ... ate pastures. Grass Forage Sci. (2020)
item-95 at level 4: list_item: Wales WJ; Marett LC; Greenwood J ... ons of Australia. Anim Prod Sci (2013)
item-96 at level 4: list_item: Bargo F; Muller LD; Delahoy JE; ... otal mixed rations. J Dairy Sci (2002)
item-97 at level 4: list_item: Vibart RE; Fellner V; Burns JC; ... ration and pasture. J Dairy Res (2008)
item-98 at level 4: list_item: Mendoza A; Cajarville C; Repetto ... total mixed ration. J Dairy Sci (2016)
item-99 at level 4: list_item: Nutrient Requirements of Dairy Cattle (2001)
item-100 at level 4: list_item: Nozière P; Sauvant D; Delaby L. (2018)
item-101 at level 4: list_item: Lorenz H; Reinsch T; Hess S; Tau ... roduction systems. J Clean Prod (2019)
item-102 at level 4: list_item: INTERNATIONAL STANDARD—Environme ... ent—Requirements and guidelines (2006)
item-103 at level 4: list_item: Environmental management—Life cy ... ciples and framework. Iso 14040 (2006)
item-104 at level 4: list_item: FAO. Environmental Performance o ... ains: Guidelines for assessment (2016)
item-105 at level 4: list_item: Civiero M; Ribeiro-Filho HMN; Sc ... ture Conference, Foz do Iguaçu (2019)
item-106 at level 4: list_item: IPCC—Intergovernmental Panel on ... d Version). 2014. Available: ttps://.
item-107 at level 4: list_item: INRA. Alimentation des bovins, o ... nra 2007. 4th ed. INRA, editor. 2007.
item-108 at level 4: list_item: Delagarde R; Faverdin P; Baratte ... ng management. Grass Forage Sci (2011)
item-109 at level 4: list_item: Ma BL; Liang BC; Biswas DK; Morr ... tions. Nutr Cycl Agroecosystems (2012)
item-110 at level 4: list_item: Rauccci GS; Moreira CS; Alves PS ... Mato Grosso State. J Clean Prod (2015)
item-111 at level 4: list_item: Camargo GGT; Ryan MR; Richard TL ... nergy Analysis Tool. Bioscience (2013)
item-112 at level 4: list_item: da Silva MSJ; Jobim CC; Poppi EC ... outhern Brazil. Rev Bras Zootec (2015)
item-113 at level 4: list_item: Duchini PG; Guzatti GC; Ribei ... monocultures. Crop Pasture Sci (2016)
item-114 at level 4: list_item: Scaravelli LFB; Pereira LET; Oli ... om vacas leiteiras. Cienc Rural (2007)
item-115 at level 4: list_item: Sbrissia AF; Duchini PG; Zanini ... ge of grazing heights. Crop Sci (2018)
item-116 at level 4: list_item: Almeida JGR; Dall-Orsoletta AC; ... grazing temperate grass. Animal (2020)
item-117 at level 4: list_item: Eggleston H.S.; Buendia L.; Miwa ... nal greenhouse gas inventories. (2006)
item-118 at level 4: list_item: Ramalho B; Dieckow J; Barth G; S ... mbric Ferralsol. Eur J Soil Sci (2020)
item-119 at level 4: list_item: Fernandes HC; da Silveira JCM; R ... nizadas. Cienc e Agrotecnologia (2008)
item-120 at level 4: list_item: Wang M Q. GREET 1.8a Spreadsheet Model. 2007. Available: .
item-121 at level 4: list_item: Rotz CAA; Montes F; Chianese DS; ... e cycle assessment. J Dairy Sci (2010)
item-122 at level 4: list_item: Niu M; Kebreab E; Hristov AN; Oh ... ental database. Glob Chang Biol (2018)
item-123 at level 4: list_item: Eugène M; Sauvant D; Nozière P; ... for ruminants. J Environ Manage (2019)
item-124 at level 4: list_item: Reed KF; Moraes LE; Casper DP; K ... retion from cattle. J Dairy Sci (2015)
item-125 at level 4: list_item: Barros MV; Piekarski CM; De Fran ... the 2016-2026 period. Energies (2018)
item-126 at level 4: list_item: Ludington D; Johnson E. Dairy Fa ... York State Energy Res Dev Auth (2003)
item-127 at level 4: list_item: Thoma G; Jolliet O; Wang Y. A bi ... ply chain analysis. Int Dairy J (2013)
item-128 at level 4: list_item: Naranjo A; Johnson A; Rossow H. ... dairy industry over 50 years. (2020)
item-129 at level 4: list_item: Jayasundara S; Worden D; Weersin ... roduction systems. J Clean Prod (2019)
item-130 at level 4: list_item: Williams SRO; Fisher PD; Berrisf ... ssions. Int J Life Cycle Assess (2014)
item-131 at level 4: list_item: Gollnow S; Lundie S; Moore AD; M ... cows in Australia. Int Dairy J (2014)
item-132 at level 4: list_item: O'Brien D; Capper JL; Garnsworth ... -based dairy farms. J Dairy Sci (2014)
item-133 at level 4: list_item: Chobtang J; McLaren SJ; Ledgard ... Region, New Zealand. J Ind Ecol (2017)
item-134 at level 4: list_item: Garg MR; Phondba BT; Sherasia PL ... cycle assessment. Anim Prod Sci (2016)
item-135 at level 4: list_item: de Léis CM; Cherubini E; Ruviaro ... study. Int J Life Cycle Assess (2015)
item-136 at level 4: list_item: O'Brien D; Geoghegan A; McNamara ... otprint of milk?. Anim Prod Sci (2016)
item-137 at level 4: list_item: O'Brien D; Brennan P; Humphreys ... dology. Int J Life Cycle Assess (2014)
item-138 at level 4: list_item: Baek CY; Lee KM; Park KH. Quanti ... dairy cow system. J Clean Prod (2014)
item-139 at level 4: list_item: Dall-Orsoletta AC; Almeida JGR; ... to late lactation. J Dairy Sci (2016)
item-140 at level 4: list_item: Dall-Orsoletta AC; Oziemblowski ... entation. Anim Feed Sci Technol (2019)
item-141 at level 4: list_item: Niu M; Appuhamy JADRN; Leytem AB ... s simultaneously. Anim Prod Sci (2016)
item-142 at level 4: list_item: Waghorn GC; Law N; Bryant M; Pac ... with fodder beet. Anim Prod Sci (2019)
item-143 at level 4: list_item: Dickhoefer U; Glowacki S; Gómez ... protein and starch. Livest Sci (2018)
item-144 at level 4: list_item: Schwab CG; Broderick GA. A 100-Y ... tion in dairy cows. J Dairy Sci (2017)
item-145 at level 4: list_item: Sordi A; Dieckow J; Bayer C; Alb ... tureland. Agric Ecosyst Environ (2014)
item-146 at level 4: list_item: Simon PL; Dieckow J; de Klein CA ... pastures. Agric Ecosyst Environ (2018)
item-147 at level 4: list_item: Wang X; Ledgard S; Luo J; Guo Y; ... e assessment. Sci Total Environ (2018)
item-148 at level 4: list_item: Pirlo G; Lolli S. Environmental ... Lombardy (Italy). J Clean Prod (2019)
item-149 at level 4: list_item: Herzog A; Winckler C; Zollitsch ... tigation. Agric Ecosyst Environ (2018)
item-150 at level 4: list_item: Mostert PF; van Middelaar CE; Bo ... f milk production. J Clean Prod (2018)
item-151 at level 4: list_item: Mostert PF; van Middelaar CE; de ... of milk production. Agric Syst (2018)
item-152 at level 4: list_item: Foley JA; Ramankutty N; Brauman ... for a cultivated planet. Nature (2011)
item-153 at level 4: list_item: Lal R.. Soil Carbon Sequestratio ... nd Food Security. Science (80-) (2004)
item-154 at level 4: list_item: Boddey RM; Jantalia CP; Conceiça ... al agriculture. Glob Chang Biol (2010)
item-155 at level 4: list_item: McConkey B; Angers D; Bentham M; ... he LULUCF sector for NIR 2014. (2014)
item-156 at level 1: caption: Table 1: Descriptive characteristics of the herd.
item-157 at level 1: caption: Table 2: Dairy cows diets in different scenariosa.
item-158 at level 1: caption: Table 3: GHG emission factors for Off- and On-farm feed production.
item-159 at level 1: caption: Table 4: GHG emissions from On-farm feed production.
item-160 at level 1: caption: Table 5: Factors for major resource inputs in farm management.
item-161 at level 1: caption: Fig 1: Overview of the milk prod ... stem boundary considered in the study.
item-162 at level 1: caption: Fig 2: Overall greenhouse gas em ... lectricity = 0.205 kg CO2e kWh-1 [46].
item-163 at level 1: caption: Fig 3: Sensitivity of the C foot ... lectricity = 0.205 kg CO2e kWh-1 [46].
item-164 at level 1: caption: Fig 4: Greenhouse gas emissions ... uestered CO2-C from perennial pasture.

File diff suppressed because it is too large

View File

@@ -0,0 +1,336 @@
# Potential to reduce greenhouse gas emissions through different dairy cattle systems in subtropical regions
Ribeiro-Filho Henrique M. N.; 1: Department of Animal Science, University of California, Davis, California, United States of America, 2: Programa de Pós-graduação em Ciência Animal, Universidade do Estado de Santa Catarina, Lages, Santa Catarina, Brazil; Civiero Maurício; 2: Programa de Pós-graduação em Ciência Animal, Universidade do Estado de Santa Catarina, Lages, Santa Catarina, Brazil; Kebreab Ermias; 1: Department of Animal Science, University of California, Davis, California, United States of America
## Abstract
Carbon (C) footprint of dairy production, expressed in kg carbon dioxide (CO2) equivalents (CO2e) (kg energy-corrected milk (ECM))-1, encompasses emissions from feed production, diet management and total product output. The proportion of pasture in diets may affect all these factors, mainly in subtropical climate zones, where cows may access tropical and temperate pastures during warm and cold seasons, respectively. The aim of the study was to assess the C footprint of a dairy system with annual tropical and temperate pastures in a subtropical region. The system boundary included all processes up to the animal farm gate. Feed requirement during the entire life of each cow was based on data recorded from Holstein × Jersey cow herds producing an average of 7,000 kg ECM lactation-1. The milk production response as a consequence of feed strategies (scenarios) was based on results from two experiments (warm and cold seasons) using lactating cows from the same herd. Three scenarios were evaluated: ad libitum total mixed ration (TMR) intake, and 75 and 50% of ad libitum TMR intake with access to grazing either a tropical or temperate pasture during lactation periods. Considering IPCC and international literature values to estimate emissions from urine/dung, feed production and electricity, the C footprint was similar between scenarios, averaging 1.06 kg CO2e (kg ECM)-1. Considering factors from studies conducted in subtropical conditions and actual inputs for on-farm feed production, the C footprint decreased by 0.04 kg CO2e (kg ECM)-1 in scenarios including pastures compared to ad libitum TMR. Regardless of the factors considered, emissions from feed production decreased as the proportion of pasture went up. In conclusion, decreasing TMR intake and including pastures in dairy cow diets in subtropical conditions has the potential to maintain or reduce the C footprint to a small extent.
## Introduction
Greenhouse gas (GHG) emissions from livestock activities represent 10-12% of global emissions [1], ranging from 5.5-7.5 Gt CO2 equivalents (CO2e) yr-1, with almost 30% coming from dairy cattle production systems [2]. However, the livestock sector supplies between 13 and 17% of calories and between 28 and 33% of human-edible protein consumed globally [3]. Additionally, livestock produce more human-edible protein per unit area than crops when land is unsuitable for food crop production [4].
Considering the key role of livestock systems in global food security, several technical and management interventions have been investigated to mitigate methane (CH4) emissions from enteric fermentation [5], animal management [6] and manure management [7]. CH4 emission from enteric fermentation represents around 34% of total emissions from the livestock sector, making it the largest source [2]. Increasing the proportion of concentrate and the digestibility of forages in the diet have been proposed as mitigation strategies [1,5]. In contrast, some life cycle assessment (LCA) studies of dairy systems in temperate regions [8-11] have identified that increasing the concentrate proportion may increase the carbon (C) footprint due to greater resource use and pollutants from the production of feed compared to forage. Thus, increasing the pasture proportion in dairy cattle systems may be an alternative management strategy to mitigate the C footprint.
In subtropical climate zones, cows may graze tropical pastures rather than temperate pastures during the warm season [12]. Some important dairy production areas, such as southern Brazil, central to northern Argentina, Uruguay, South Africa, New Zealand and Australia, are located in these climate zones, having more than 900 million ha of native, permanent or temporary pastures and producing almost 20% of global milk production [13]. However, due to considerable inter-annual variation in pasture growth rates [14,15], interest in mixed systems using total mixed ration (TMR) + pasture has been increasing [16]. Nevertheless, to the best of our knowledge, studies evaluating the milk production response of dairy cows receiving TMR and pasture have only been conducted with temperate pastures and not with tropical pastures (e.g. [17-19]).
It has been shown that dairy cows receiving TMR-based diets may not decrease milk production when supplemented with temperate pastures in a vegetative growth stage [18]. On the other hand, tropical pastures have lower organic matter digestibility, and cows experience reduced dry matter (DM) intake and milk yield compared to temperate pastures [20,21]. A lower milk yield increases the C footprint intensity [22], offsetting the advantage expected from lower crop GHG emissions and reduced DM intake.
The aim of this work was to quantify the C footprint and land use of dairy systems using cows with a medium milk production potential in a subtropical region. The effect of replacing total mixed ration (TMR) with pastures during lactation periods was evaluated.
## Materials and methods
An LCA was developed according to the ISO standards [23,24] and Food and Agriculture Organization of the United Nations (FAO) Livestock Environmental Assessment Protocol guidelines [25]. All procedures were approved by the Comissão de Ética no Uso de Animais (CEUA/UDESC) on September 15, 2016—Approval number 4373090816 - https://www.udesc.br/cav/ceua.
### System boundary
The goal of the study was to assess the C footprint of annual tropical and temperate pastures in lactating dairy cow diets. The production system was divided into four main processes: (i) animal husbandry, (ii) manure management and urine and dung deposited by grazing animals, (iii) production of feed ingredients and (iv) farm management (Fig 1). The study boundary included all processes up to the animal farm gate (cradle to gate), including secondary sources such as GHG emissions during the production of fuel, electricity, machinery, manufacturing of fertilizer, pesticides, seeds and plastic used in silage production. Fuel combustion and machinery (manufacture and repairs) for manure handling and electricity for milking and confinement were accounted as emissions from farm management. Emissions post milk production were assumed to be similar for all scenarios, therefore, activities including milk processing, distribution, retail or consumption were outside of the system boundary.
### Functional unit
The functional unit was one kilogram of energy-corrected milk (ECM) at the farm gate. All processes in the system were calculated based on one kilogram ECM. The ECM was calculated by multiplying milk production by the ratio of the energy content of the milk to the energy content of standard milk with 4% fat and 3.3% true protein according to NRC [20] as follows:
ECM = Milk production × (0.0929 × fat% + 0.0588 × true protein% + 0.192) / (0.0929 × (4%) + 0.0588 × (3.3%) + 0.192), where fat% and protein% are fat and protein percentages in milk, respectively. The average milk production and composition were recorded from the Santa Catarina State University (Brazil) herd, considering 165 lactations between 2009 and 2018. The herd is predominantly Holstein × Jersey cows, with key characteristics described in Table 1.
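As an illustration, the normalization above can be computed directly; a minimal sketch in Python, where the function name is ours and the inputs are the herd averages from Table 1:

```python
def ecm(milk_kg, fat_pct, protein_pct):
    """Energy-corrected milk (kg): milk scaled by its energy content
    relative to standard milk with 4% fat and 3.3% true protein."""
    energy = 0.0929 * fat_pct + 0.0588 * protein_pct + 0.192
    standard = 0.0929 * 4.0 + 0.0588 * 3.3 + 0.192  # = 0.75764
    return milk_kg * energy / standard

# Herd average (Table 1): 7,015 kg milk at 4.0% fat and 3.3% protein
# matches the standard composition, so ECM equals raw milk production.
print(ecm(7015, 4.0, 3.3))  # 7015.0
```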
### Data sources and livestock system description
The individual feed requirements, as well as the milk production responses to feed strategies, were based on data recorded from the herd described above and two experiments performed using lactating cows from the same herd. Due to the variation in herbage production throughout the year, feed requirements were estimated considering that the livestock systems have a calving period in April, which is the beginning of the fall season in the southern hemisphere. The experiments showed a 10% reduction in ECM production in dairy cows that received both 75 and 50% of ad libitum TMR intake with access to grazing a tropical pasture (pearl-millet, Pennisetum glaucum Campeiro), compared to cows receiving ad libitum TMR intake. Cows grazing a temperate pasture (ryegrass, Lolium multiflorum Maximus) showed no change in ECM production compared to the ad libitum TMR intake group.
Using experimental data, three scenarios were evaluated during the lactation period: ad libitum TMR intake, and 75 and 50% of ad libitum TMR intake with access to grazing either an annual tropical or temperate pasture as a function of month ([26], Civiero et al., in press). From April to October (210 days) cows accessed an annual temperate pasture (ryegrass), and from November to the beginning of February (95 days) cows grazed an annual tropical pasture (pearl-millet). The average annual reduction in ECM production in dairy cows with access to pastures is 3%. This value was assumed for the entire lactation period.
### Impact assessment
The CO2e emissions were calculated by multiplying the emissions of CO2, CH4 and N2O by their 100-year global warming potential (GWP100), based on IPCC assessment report 5 (AR5; [27]). The values of GWP100 are 1, 28 and 265 for CO2, CH4 and N2O, respectively.
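A one-line worked example of this aggregation, assuming the per-gas masses are already known (the input values below are arbitrary placeholders):

```python
GWP100 = {"CO2": 1, "CH4": 28, "N2O": 265}  # IPCC AR5 values cited above

def co2e(kg_by_gas):
    """Total CO2-equivalent (kg) for a dict of per-gas emissions (kg)."""
    return sum(kg * GWP100[gas] for gas, kg in kg_by_gas.items())

print(co2e({"CH4": 1.0, "N2O": 0.01}))  # 28 + 2.65 = 30.65 kg CO2e
```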
### Feed production
#### Diets composition
The DM intake of each ingredient throughout the entire life of animals during lactation periods was calculated for each scenario: cows receiving only TMR, cows receiving 75% of TMR with annual pastures and cows receiving 50% of TMR with annual pastures (Table 2). In each of the other phases of life (calf, heifer, dry cow), animals received the same diet, including a perennial tropical pasture (kikuyu grass, Pennisetum clandestinum). The DM intake of calves, heifers and dry cows was calculated assuming 2.8, 2.5 and 1.9% of body weight, respectively [20]. In each case, the actual DM intake of concentrate and corn silage was recorded, and pasture DM intake was estimated as the difference between the daily expected DM intake and the actual DM intake of concentrate and corn silage. For lactating heifers and cows, TMR was formulated to meet the net energy for lactation (NEL) and metabolizable protein (MP) requirements of experimental animals, according to [28]. The INRA system was used because it makes it possible to estimate pasture DM intake taking into account TMR intake, pasture management and the time of access to pasture using the GrazeIn model [29], which is integrated in the software INRAtion 4.07 (https://www.inration.educagri.fr/fr/forum.php). The nutrient intake was calculated as the product of TMR and pasture intake and the nutrient contents of TMR and pasture, respectively, which were determined in feed samples collected throughout the experiments.
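For the non-lactating phases, the intake rule above is simple arithmetic; a minimal sketch (the 553 kg body weight is the herd-average mature BW from Table 1 and applies only to the dry-cow case shown):

```python
def daily_dm_intake(body_weight_kg, pct_of_bw):
    """DM intake (kg d-1) as a fixed percentage of body weight [20]."""
    return body_weight_kg * pct_of_bw / 100

# Calves, heifers and dry cows at 2.8, 2.5 and 1.9% of BW, respectively:
print(daily_dm_intake(553, 1.9))  # ~10.5 kg DM d-1 for a 553 kg dry cow
```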
#### GHG emissions from crop and pasture production
GHG emission factors used for off- and on-farm feed production were based on literature values, and are presented in Table 3. The emission factor used for corn grain is the average of emission factors observed in different levels of synthetic N fertilization [30]. The emission factor used for soybean is based on Brazilian soybean production [31]. The emissions used for corn silage, including feed processing (cutting, crushing and mixing), and annual or perennial grass productions were 3300 and 1500 kg CO2e ha-1, respectively [32]. The DM production (kg ha-1) of corn silage and pastures were based on regional and locally recorded data [3336], assuming that animals are able to consume 70% of pastures during grazing.
Emissions from on-farm feed production (corn silage and pasture) were estimated using primary and secondary sources based on the actual amount of each input (Table 4). Primary sources were direct and indirect N2O-N emissions from organic and synthetic fertilizers and crop/pasture residues, CO2-C emissions from lime and urea applications, as well as fuel combustion. The direct N2O-N emission factor (kg (kg N input)-1) is based on a local study performed previously [37]. For indirect N2O-N emissions (kg N2O-N (kg NH3-N + NOx)-1), as well as CO2-C emissions from lime + urea, default values proposed by IPCC [38] were used. For perennial pastures, a C sequestration of 0.57 t ha-1 was used based on a 9-year study conducted in southern Brazil [39]. Due to the use of conventional tillage, no C sequestration was considered for annual pastures. The amount of fuel required was 8.9 (no-tillage) and 14.3 L ha-1 (disking) for annual tropical and temperate pastures, respectively [40]. The CO2 from fuel combustion was 2.7 kg CO2 L-1 [41]. Secondary sources of emissions during the production of fuel, machinery, fertilizer, pesticides, seeds and plastic for ensilage were estimated using emission factors described by Rotz et al. [42].
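The per-kilogram emission factors in Tables 3 and 4 follow from these per-hectare emissions and DM yields, with the 70% grazing utilization applied to pastures; a small arithmetic check (the function is ours):

```python
def feed_emission_factor(co2e_per_ha, dm_yield_kg_ha, utilization=1.0):
    """Feed emission factor, kg CO2e per kg of DM actually consumed.
    For grazed pastures the study assumes 70% of the DM grown is consumed."""
    return co2e_per_ha / (dm_yield_kg_ha * utilization)

# Corn silage: 3300 kg CO2e ha-1 over 16,000 kg DM ha-1 (Table 3)
print(round(feed_emission_factor(3300, 16000), 3))     # 0.206
# Annual temperate pasture, Table 4: 964 kg CO2e ha-1, 9,500 kg DM ha-1 at 70%
print(round(feed_emission_factor(964, 9500, 0.7), 3))  # 0.145
```

Both results reproduce the factors reported in Tables 3 and 4.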
### Animal husbandry
The enteric CH4 emission intensity (g (kg ECM)-1) was a function of the estimated CH4 yield (g (kg DM intake)-1), actual DM intake and ECM. The enteric CH4 yield was estimated as a function of the neutral detergent fiber (NDF) concentration in total DM intake, as proposed by Niu et al. [43], where: CH4 yield (g (kg DM intake)-1) = 13.8 + 0.185 × NDF (% DM intake).
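A sketch of the resulting intensity calculation, using TMR-scenario inputs from Table 2 and a daily ECM derived from the herd's 305-day lactation (Table 1):

```python
def enteric_ch4_intensity(ndf_pct_of_dmi, dmi_kg, ecm_kg):
    """Enteric CH4 intensity, g (kg ECM)-1, from the Niu et al. [43]
    yield equation: 13.8 + 0.185 x NDF (% of DM intake)."""
    ch4_yield = 13.8 + 0.185 * ndf_pct_of_dmi  # g CH4 (kg DM intake)-1
    return ch4_yield * dmi_kg / ecm_kg

# TMR scenario: 38.2% NDF and 18.7 kg DM intake (Table 2);
# ~23 kg ECM d-1 = 7,015 kg over a 305-day lactation (Table 1)
print(round(enteric_ch4_intensity(38.2, 18.7, 7015 / 305), 1))  # ~17.0
```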
### Manure from confined cows and urine and dung from grazing animals
The CH4 emission from manure (kg (kg ECM)-1) was a function of daily CH4 emission from manure (kg cow-1) and daily ECM (kg cow-1). The daily CH4 emission from manure was estimated according to IPCC [38], which considered daily volatile solid (VS) excreted (kg DM cow-1) in manure. The daily VS was estimated as proposed by Eugène et al. [44] as: VS = NDOMI + (UE × GE) × (OM/18.45), where: VS = volatile solid excretion on an organic matter (OM) basis (kg day-1), NDOMI = non-digestible OM intake (kg day-1): (1- OM digestibility) × OM intake, UE = urinary energy excretion as a fraction of GE (0.04), GE = gross energy intake (MJ day-1), OM = organic matter (g), 18.45 = conversion factor for dietary GE per kg of DM (MJ kg-1).
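A direct transcription of this VS equation; reading OM in the last term as the dietary OM fraction is our interpretation of the definitions above, and the example inputs assume the TMR diet of Table 2 with a GE of 18.45 MJ (kg DM)-1:

```python
def volatile_solids(om_digestibility, om_intake_kg, ge_mj, om_frac, ue=0.04):
    """Daily VS excretion (kg OM d-1): VS = NDOMI + (UE x GE) x (OM / 18.45),
    where 18.45 MJ kg-1 converts urinary energy to a mass equivalent."""
    ndomi = (1 - om_digestibility) * om_intake_kg  # non-digestible OM intake
    return ndomi + (ue * ge_mj) * (om_frac / 18.45)

# TMR scenario (Table 2): 72.4% OM digestibility, 958 g OM (kg DM)-1,
# 18.7 kg DM intake -> ~17.9 kg OM intake and ~345 MJ GE d-1 (assumed)
print(round(volatile_solids(0.724, 17.9, 345.0, 0.958), 2))  # ~5.66 kg d-1
```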
The OM digestibility was estimated as a function of chemical composition, using equations published by INRA [21], which take into account the effects of digestive interactions due to feeding level, the proportion of concentrate and rumen protein balance on OM digestibility. For scenarios where cows had access to grazing, the calculated VS was corrected as a function of the time at pasture. The manure biodegradability factor (0.13 for dairy cows in Latin America) and methane conversion factor (MCF) values were taken from IPCC [38]. The MCF values for pit storage below animal confinements (> 1 month) were used for the calculation, taking into account the annual average temperature (16.6°C) or the average temperatures during the growth period of temperate (14.4°C) or tropical (21°C) annual pastures, which were 31%, 26% and 46%, respectively.
The N2O-N emissions from urine and feces were estimated considering the proportion of N excreted as manure in storage or as urine and dung deposited by grazing animals. These proportions were calculated based on the proportion of daily time that animals stayed on pasture (7 h/24 h = 0.29) or in confinement (1 - 0.29 = 0.71). For lactating heifers and cows, the total amount of N excreted was calculated as the difference between N intake and milk N excretion. For heifers and non-lactating cows, urinary and fecal N excretion were estimated as proposed by Reed et al. [45] (Table 3: equations 10 and 12, respectively). The N2O emissions from stored manure as well as urine and dung during grazing were calculated by converting N2O-N emissions to N2O emissions, where N2O emissions = N2O-N emissions × 44/28. The emission factors were 0.002 kg N2O-N (kg N)-1 stored in a pit below animal confinements, and 0.02 kg N2O-N (kg N of urine and dung)-1 deposited on pasture [38]. The indirect N2O emissions from stored manure and urine and dung deposits on pasture were also estimated using the IPCC [38] emission factors.
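Combining the time split and the emission factors, the direct N2O from excreted N can be sketched as follows (the 100 kg N input is a placeholder, not a study value):

```python
def direct_n2o_from_excreta(n_excreted_kg, frac_time_on_pasture=0.29,
                            ef_stored=0.002, ef_pasture=0.02):
    """Direct N2O (kg) from excreted N split between manure stored in a pit
    (confinement) and urine/dung deposited on pasture, per IPCC [38]."""
    n2o_n = (n_excreted_kg * (1 - frac_time_on_pasture) * ef_stored
             + n_excreted_kg * frac_time_on_pasture * ef_pasture)
    return n2o_n * 44 / 28  # convert N2O-N to N2O

print(round(direct_n2o_from_excreta(100.0), 2))  # ~1.13 kg N2O per 100 kg N
```

Replacing the 0.02 pasture factor with the local 0.004 value discussed in the Results [37] cuts the pasture term five-fold, which is the lever behind the lower footprints in Fig 2C and 2D.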
### Farm management
Emissions due to farm management included those from fuel and machinery for manure handling and electricity for milking and confinement (Table 5). Emissions due to feed processing such as cutting, crushing, mixing and distributing, as well as secondary sources of emissions during the production of fuel, machinery, fertilizer, pesticides, seeds and plastic for ensilage were included in Emissions from crop and pasture production section.
The amount of fuel used for manure handling was estimated taking into consideration the amount of manure produced per cow and the amount of fuel required for manure handling (L diesel t-1) [42]. The amount of manure was estimated from OM excretions (kg cow-1), assuming that the manure has 8% ash on a DM basis and 60% DM content. The OM excretions were calculated as NDOMI × days in confinement × proportion of daily time that animals stayed in confinement.
The emissions from fuel were estimated considering the primary (emissions from fuel burned) and secondary (emissions for producing and transporting fuel) emissions. The primary emissions were calculated by the amount of fuel required for manure handling (L) × (kg CO2e L-1) [41]. The secondary emissions from fuel were calculated by the amount of fuel required for manure handling × emissions for production and transport of fuel (kg CO2e L-1) [41]. Emissions from manufacture and repair of machinery for manure handling were estimated by manure produced per cow (t) × (kg machinery mass (kg manure)-1 × 103) [42] × kg CO2e (kg machinery mass)-1 [42].
Emissions from electricity for milking and confinement were estimated using two emission factors (kg CO2 kWh-1). The first one is based on United States electricity matrix [41], and was used as a reference of an electricity matrix with less hydroelectric power than the region under study. The second is based on the Brazilian electricity matrix [46]. The electricity required for milking activities is 0.06 kWh (kg milk produced)-1 [47]. The annual electricity use for lighting was 75 kWh cow-1, which is the value considered for lactating cows in naturally ventilated barns [47].
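As a rough check on the electricity term, the two grid emission factors can be compared for a herd-average cow using the requirements from [47] (this worked example is ours):

```python
def electricity_co2e(milk_kg_per_year, grid_ef,
                     milking_kwh_per_kg=0.06, lighting_kwh_per_cow=75.0):
    """Annual electricity CO2e (kg cow-1) for milking and lighting [47]."""
    kwh = milk_kg_per_year * milking_kwh_per_kg + lighting_kwh_per_cow
    return kwh * grid_ef

# 7,015 kg milk (Table 1) under the US [41] vs. Brazilian [46] grid factors:
print(round(electricity_co2e(7015, 0.73)))   # ~362 kg CO2e cow-1 yr-1
print(round(electricity_co2e(7015, 0.205)))  # ~102 kg CO2e cow-1 yr-1
```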
The lower impact of emissions from farm management is in agreement with other studies conducted in Europe [9,62] and the USA [42,55], where the authors found that most emissions in dairy production systems come from enteric fermentation, feed production and excreta. As emissions from fuel for on-farm feed production were accounted for under crop and pasture production, total emissions from farm management did not exceed 5% of the total C footprint.
Emissions from farm management dropped when the emission factor for electricity generation was based on the Brazilian matrix. In this case, the emission factor for electricity generation (0.205 kg CO2e kWh-1 [46]) is much lower than that used in an LCA study conducted in the US (0.73 kg CO2e kWh-1 [42]). This apparent discrepancy is explained by the fact that in 2016 almost 66% of the electricity generated in Brazil came from hydropower, which has an emission factor of 0.074 kg CO2e kWh-1, compared with 0.382 and 0.926 kg CO2e kWh-1 for electricity produced from natural gas and hard coal, respectively [46].
### Co-product allocation
The C footprint for milk produced in the system was calculated using a biophysical allocation approach, as recommended by the International Dairy Federation [49] and described by Thoma et al. [48]. Briefly, ARmilk = 1 - 6.04 × BMR, where ARmilk is the allocation ratio for milk and BMR is cow BW at the time of slaughter (kg) plus calf BW sold (kg), divided by the total ECM produced during the cow's entire life (kg). The ARmilk values were 0.854 and 0.849 for the TMR scenario and both pasture scenarios, respectively. The ARmilk was applied to all emissions except the electricity consumed for milking (milking parlor) and refrigerant loss, which was directly assigned to milk production.
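A sketch of this allocation; the calf body weight sold is a hypothetical figure chosen to reproduce the reported ratio, not a value stated in the study:

```python
def milk_allocation_ratio(cow_bw_kg, calf_bw_sold_kg, lifetime_ecm_kg):
    """Biophysical allocation ratio for milk: AR = 1 - 6.04 x BMR, where
    BMR = (cow BW at slaughter + calf BW sold) / lifetime ECM [48,49]."""
    bmr = (cow_bw_kg + calf_bw_sold_kg) / lifetime_ecm_kg
    return 1 - 6.04 * bmr

# 553 kg cull cow (Table 1), ~125 kg of calves sold (hypothetical),
# 4 lactations of ~7,000 kg ECM each (Table 1)
print(round(milk_allocation_ratio(553, 125, 4 * 7000), 3))  # ~0.854
```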
### Sensitivity analysis
A sensitivity index was calculated as described by Rotz et al. [42]. The sensitivity index was defined for each emission source as the percentage change in the C footprint for a 10% change in the given emission source divided by 10%. Thus, a value near 0 indicates a low sensitivity, whereas an index near or greater than 1 indicates a high sensitivity because a change in this value causes a similar change in the footprint.
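A minimal sketch of this index for a purely additive footprint model (the source shares below are placeholders; under additivity the index simply equals a source's share of the total):

```python
def sensitivity_index(footprint_fn, sources, name, delta=0.10):
    """Percentage change in C footprint for a 10% change in one emission
    source, divided by 10%, per Rotz et al. [42]."""
    base = footprint_fn(sources)
    perturbed = dict(sources, **{name: sources[name] * (1 + delta)})
    return ((footprint_fn(perturbed) - base) / base) / delta

total = lambda s: sum(s.values())  # toy additive footprint, kg CO2e (kg ECM)-1
shares = {"enteric_ch4": 0.55, "manure": 0.20, "feed": 0.25}
print(round(sensitivity_index(total, shares, "enteric_ch4"), 2))  # 0.55
```

With this toy model, the 0.55 index for enteric CH4 sits inside the 0.53-0.62 range reported for it in the Results.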
## Results and discussion
The study assessed the impact of including tropical and temperate pastures in the diets of TMR-fed dairy cows on the C footprint of dairy production in the subtropics. Different factors were taken into consideration to estimate emissions from manure (or urine and dung of grazing animals), feed production and electricity use.
### Greenhouse gas emissions
Depending on emission factors used for calculating emissions from urine and dung (IPCC or local data) and feed production (Tables 3 or 4), the C footprint was similar (Fig 2A and 2B) or decreased by 0.04 kg CO2e (kg ECM)-1 (Fig 2C and 2D) in scenarios that included pastures compared to ad libitum TMR intake. Due to differences in emission factors, the overall GHG emission values ranged from 0.92 to 1.04 kg CO2e (kg ECM)-1 for dairy cows receiving TMR exclusively, and from 0.88 to 1.04 kg CO2e (kg ECM)-1 for cows with access to pasture. Using IPCC emission factors [38], manure emissions increased as TMR intake went down (Fig 2A and 2B). However, using local emission factors for estimating N2O-N emissions [37], manure emissions decreased as TMR intake went down (Fig 2C and 2D). Regardless of emission factors used (Tables 3 or 4), emissions from feed production decreased to a small extent as the proportion of TMR intake decreased. Emissions from farm management did not contribute more than 5% of overall GHG emissions.
Considering IPCC emission factors for N2O emissions from urine and dung [38] and those from Table 3, the C footprint ranged from 0.99 to 1.04 kg CO2e (kg ECM)-1, close to values reported for confinement-based systems in California [49], Canada [50], China [8], Ireland [9], different scenarios in Australia [51,52] and Uruguay [11], which ranged from 0.98 to 1.16 kg CO2e (kg ECM)-1. When local emission factors for N2O emissions from urine and dung [37] and those from Table 4 were taken into account, the C footprint for scenarios including pasture, without accounting for sequestered CO2-C from perennial pasture (0.91 kg CO2e (kg ECM)-1), was lower than the range of values described above. However, these values were still greater than those of high-performance confinement systems in the UK and USA [53] or grass-based dairy systems in Ireland [9,53] and New Zealand [8,54], which ranged from 0.52 to 0.89 kg CO2e (kg ECM)-1. Regardless of which emission factor was used, we found a lower C footprint in all conditions compared to scenarios with lower milk production per cow or poor manure management, which ranged from 1.4 to 2.3 kg CO2e (kg ECM)-1 [8,55]. Thus, even though differences between studies may be partially explained by various assumptions (e.g., emission factors, co-product allocation, methane emission estimation, sequestered CO2-C, etc.), herd productivity and manure management were systematically associated with the C footprint of the dairy systems.
The similarity of the C footprint between scenarios when using IPCC [38] factors for manure emissions and Table 3 factors for feed production was a consequence of the trade-off between greater manure emissions and lower feed production emissions as the proportion of pasture in diets increased. Additionally, the small negative effect of pasture on ECM production also contributed to the trade-off. The impact of milk production on the C footprint was reported in a meta-analysis comprising 30 studies from 15 different countries [22]. As observed in this study (Fig 2A and 2B), the authors reported no significant difference between the C footprint of pasture-based vs. confinement systems. However, they observed that an increase of 1,000 kg cow-1 (5,000 to 6,000 kg ECM) reduced the C footprint by 0.12 kg CO2e (kg ECM)-1, which may explain an apparent discrepancy between our study and an LCA performed under southern Brazilian conditions [56]. That study compared a confinement and a grazing-based dairy system with annual average milk production of 7,667 and 5,535 kg cow-1, respectively. In this study, the same herd was used in all systems, with an annual average milk production of around 7,000 kg cow-1. Experimental data showed a reduction of no more than 3% of ECM when 50% of TMR was replaced by pasture access.
The lower C footprint in scenarios with access to pasture, when local emission factors [37] were used for N2O emissions from urine and dung and for feed production (Table 4), may also be partially attributed to the small negative effect of pasture on ECM production. Nevertheless, local emission factors for urine and dung had a great impact on scenarios including pastures compared to ad libitum TMR intake. Whereas the IPCC [38] considers an emission of 0.02 kg N2O-N (kg N)-1 for urine and dung from grazing animals, experimental evidence shows that it may be up to five times lower, averaging 0.004 kg N2O-N kg-1 [37].
### Methane emissions
The enteric CH4 intensity was similar between scenarios (Fig 2) and showed the greatest sensitivity index, with values ranging from 0.53 to 0.62, which indicates that for a 10% change in this source, the C footprint changes between 5.3 and 6.2% (Fig 3). The large effect of enteric CH4 emissions on the whole C footprint was expected, because the impact of enteric CH4 on GHG emissions of milk production in different dairy systems has been estimated to range from 44 to 60% of the total CO2e [50,52,57,58]. However, emissions from feed production may be the most important source of GHG when emission factors for producing concentrate feeds are greater than 0.7 kg CO2e kg-1 [59], which was not the case in this study.
The lack of difference in enteric CH4 emissions between systems can be explained by the narrow range of NDF content in the diets (<4% difference). This lack of difference is due to the lower NDF content of annual temperate pastures (495 g (kg DM)-1) compared to corn silage (550 g (kg DM)-1). Hence, the expected increase in NDF content with decreasing concentrate was partially offset by an increase in the proportion of pasture, which is relatively low in NDF. This is in agreement with studies conducted in southern Brazil, which have shown that actual enteric CH4 emissions may decrease with the inclusion of temperate pastures for cows receiving corn silage and soybean meal [60], or increase when dairy cows grazing a temperate pasture were supplemented with corn silage [61]. Additionally, enteric CH4 emissions did not differ between dairy cows receiving TMR exclusively or grazing a tropical pasture in the same scenarios as in this study [26].
### Emissions from excreta and feed production
Using IPCC emission factors for N2O emissions from urine and dung [38] and those from Table 3, CH4 emissions from manure decreased by 0.07 kg CO2e (kg ECM)-1, but N2O emissions from manure increased by 0.09 kg CO2e (kg ECM)-1 as TMR intake was restricted to 50% of ad libitum (Fig 4A). Emissions for pastures increased by 0.06 kg CO2e (kg ECM)-1, whereas emissions for producing concentrate feeds and corn silage decreased by 0.09 kg CO2e (kg ECM)-1 as TMR intake decreased (Fig 4B). In this situation, the lack of difference in the calculated C footprints of the systems was due to greater emissions from manure being offset by lower emissions from feed production with the inclusion of pasture in lactating dairy cow diets. The greater N2O-N emissions from manure with pasture were a consequence of greater CP content and urinary N excretion as pasture intake increased. The effect of CP content on urine N excretion has been shown by several authors in lactating dairy cows [62-64]. For instance, by decreasing CP content from 185 to 152 g (kg DM)-1, N intake decreased by 20% and urine N excretion by 60% [62]. In this study, the CP content for lactating dairy cows ranged from 150 g (kg DM)-1 in the TMR system to 198 g (kg DM)-1 in the 50% TMR with pasture system. Additionally, greater urine N excretion is expected with greater use of pasture. This occurs because protein utilization from pastures is inefficient, as the protein in fresh forages is highly degradable in the rumen and may not be captured by microbes [65].
Using local emission factors for N2O emissions from urine and dung [37] and those from Table 4, the reductions in CH4 emissions from stored manure when pastures were included in diets were not offset by increases in N2O emissions from excreta (Fig 4C). In this case, total emissions from manure (Fig 4C) and feed production (Fig 4D) decreased with the inclusion of pasture. The impact of greater CP content and urinary N excretion with increased pasture intake was offset by the much lower emission factors used for N2O emissions from urine and dung. As suggested by other authors [66,67], these results indicate that the IPCC default values may need to be revised for subtropical regions.
Emissions for feed production decreased when pasture was included due to the greater emission factor for corn grain production compared to pastures. Emissions from concentrate and silage had at least twice the sensitivity index compared to emissions from pastures. The amount of grain required per cow in a lifetime decreased from 7,300 kg to 4,000 kg when 50% of TMR was replaced by pasture access. These results are in agreement with other studies which found lower C footprint, as concentrate use is reduced and/or pasture is included [9,68,69]. Moreover, it has been demonstrated that in intensive dairy systems, after enteric fermentation, feed production is the second main contributor to C footprint [50]. There is potential to decrease the environmental impact of dairy systems by reducing the use of concentrate ingredients with high environmental impact, particularly in confinements [9].
### Assumptions and limitations
The milk production and composition data are the average for a typical herd, which might have great animal-to-animal variability. Likewise, DM yield of crops and pastures were collected from experimental observations, and may change as a function of inter-annual variation, climatic conditions, soil type, fertilization level etc. The emission factors for direct and indirect N2O emissions from urine and dung were alternatively estimated using local data, but more experiments are necessary to reduce the uncertainty. The CO2 emitted from lime and urea application was estimated from IPCC default values, which may not represent emissions in subtropical conditions. This LCA may be improved by reducing the uncertainty of factors for estimating emissions from excreta and feed production, including the C sequestration or emissions as a function of soil management.
### Further considerations
The potential for using pasture to reduce the C footprint arises because milk production kept pace with that of animal confinement. However, if milk production were to decrease with lower TMR intake and the inclusion of pasture [19], the C footprint would be expected to increase. Lorenz et al. [22] showed that an increase in milk yield from 5,000 to 6,000 kg ECM reduced the C footprint by 0.12 kg CO2e (kg ECM)-1, whereas an increase from 10,000 to 11,000 kg ECM reduced the C footprint by only 0.06 kg CO2e (kg ECM)-1. Hence, the impact of increasing milk production on decreasing the C footprint is not linear, and mitigation measures such as breeding for increased genetic yield potential and increasing the concentrate ratio in the diet are potentially harmful to animal health and welfare [70]. For instance, increasing the concentrate ratio potentially increases the occurrence of subclinical ketosis and foot lesions, and the C footprint may increase by 0.03 kg CO2e (kg ECM)-1 in case of subclinical ketosis [71] and by 0.02 kg CO2e (kg ECM)-1 in case of foot lesions [72].
Grazing lands may also improve biodiversity [73]. Strategies such as zero tillage may increase stocks of soil C [74]. This study did not consider C sequestration during the growth of annual pastures, because it was assumed these grasses were planted with tillage, giving a balance between C sequestration and C emissions [38]. Considering the C sequestration from no-tillage perennial pasture, the amount of C sequestered more than compensates for the C emitted. These results are in agreement with other authors who have shown that a reduction or elimination of soil tillage increases annual soil C sequestration in subtropical areas by 0.5 to 1.5 t ha-1 [75]. If 50% of tilled areas were under perennial grasslands, 1.0 t C ha-1 would be sequestered, further reducing the C footprint by 0.015 and 0.025 kg CO2e (kg ECM)-1 for the scenarios using 75 and 50% TMR, respectively. With tillage eliminated, the reduction in total GHG emissions would be 0.03 and 0.05 kg CO2e (kg ECM)-1 for 75 and 50% TMR, respectively. However, this approach may be controversial because lands which have been consistently managed for decades have approached steady-state C storage, so that the net exchange of CO2 would be negligible [76].
## Conclusions
This study assessed the C footprint of dairy cattle systems with or without access to pastures. Including pastures showed potential to maintain or decrease to a small extent the C footprint, which may be attributable to the evidence of low N2O emissions from urine and dung in dairy systems in subtropical areas. Even though the enteric CH4 intensity was the largest source of CO2e emissions, it did not change between different scenarios due to the narrow range of NDF content in diets and maintaining the same milk production with or without access to pastures.
## Tables
Table 1: Descriptive characteristics of the herd.
| Item | Unit | Average |
|-------------------------------|-----------|-----------|
| Milking cows | # | 165 |
| Milk production | kg year-1 | 7,015 |
| Milk fat | % | 4.0 |
| Milk protein | % | 3.3 |
| Length of lactation | days | 305 |
| Body weight | kg | 553 |
| Lactations per cow | # | 4 |
| Replacement rate | % | 25 |
| Cull rate | % | 25 |
| First artificial insemination | months | 16 |
| Weaned | days | 60 |
| Mortality | % | 3.0 |
Table 2: Dairy cows diets in different scenariosa.
| | Calf | Calf | Pregnant/dry | Pregnant/dry | Lactation | Lactation | Lactation | Weighted average | Weighted average | Weighted average |
|-----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|-----------------------------------|
| | 0-12 mo | 12-AI mo | Heifer | Cow | TMR | TMR75 | TMR50 | TMR | TMR75 | TMR50 |
| Days | 360 | 120 | 270 | 180 | 1220 | 1220 | 1220 | | | |
| DM intake, kg d-1 | 3.35 | 6.90 | 10.4 | 11.0 | 18.7 | 17.2 | 17.0 | 13.8 | 12.9 | 12.8 |
| Ingredients, g (kg DM)-1 | Ingredients, g (kg DM)-1 | Ingredients, g (kg DM)-1 | Ingredients, g (kg DM)-1 | Ingredients, g (kg DM)-1 | Ingredients, g (kg DM)-1 | Ingredients, g (kg DM)-1 | Ingredients, g (kg DM)-1 | Ingredients, g (kg DM)-1 | Ingredients, g (kg DM)-1 | Ingredients, g (kg DM)-1 |
| Ground corn | 309 | 145 | 96.3 | - | 257 | 195 | 142 | 218 | 183 | 153 |
| Soybean meal | 138 | 22 | 26.7 | - | 143 | 105 | 76.1 | 109 | 88.0 | 71.0 |
| Corn silage | 149 | 290 | 85.6 | - | 601 | 451 | 326 | 393 | 308 | 237 |
| Ann temperate pasture | 184 | 326 | 257 | - | - | 185 | 337 | 81.3 | 186 | 273 |
| Ann tropical pasture | - | - | 107 | - | - | 63 | 119 | 13.4 | 49.1 | 81.0 |
| Perenn tropical pasture | 219 | 217 | 428 | 1000 | - | - | - | 186 | 186 | 186 |
| Chemical composition, g (kg DM)-1 | Chemical composition, g (kg DM)-1 | Chemical composition, g (kg DM)-1 | Chemical composition, g (kg DM)-1 | Chemical composition, g (kg DM)-1 | Chemical composition, g (kg DM)-1 | Chemical composition, g (kg DM)-1 | Chemical composition, g (kg DM)-1 | Chemical composition, g (kg DM)-1 | Chemical composition, g (kg DM)-1 | Chemical composition, g (kg DM)-1 |
| Organic matter | 935 | 924 | 913 | 916 | 958 | 939 | 924 | 943 | 932 | 924 |
| Crude protein | 216 | 183 | 213 | 200 | 150 | 170 | 198 | 175 | 186 | 202 |
| Neutral detergent fibre | 299 | 479 | 518 | 625 | 382 | 418 | 449 | 411 | 431 | 449 |
| Acid detergent fibre | 127 | 203 | 234 | 306 | 152 | 171 | 187 | 174 | 185 | 194 |
| Ether extract | 46.5 | 30.4 | 28.6 | 25.0 | 31.8 | 31.1 | 30.4 | 33.2 | 32.8 | 32.4 |
| Nutritive value | Nutritive value | Nutritive value | Nutritive value | Nutritive value | Nutritive value | Nutritive value | Nutritive value | Nutritive value | Nutritive value | Nutritive value |
| OM digestibility, % | 82.1 | 77.9 | 77.1 | 71.9 | 72.4 | 75.0 | 77.2 | 74.8 | 76.3 | 77.6 |
| NEL, Mcal (kg DM)-1 | 1.96 | 1.69 | 1.63 | 1.44 | 1.81 | 1.78 | 1.74 | 1.8 | 1.8 | 1.7 |
| MP, g (kg DM)-1 | 111 | 93.6 | 97.6 | 90.0 | 95.0 | 102 | 102 | 97.5 | 102 | 101 |
Table 3: GHG emission factors for Off- and On-farm feed production.
| Feed | DM yield (kg ha-1) | Emission factor | Unita | References |
|------------------|----------------------|-------------------|----------------------|--------------|
| Off-farm | | | | |
| Corn grain | 7,500 | 0.316 | kg CO2e (kg grain)-1 | [30] |
| Soybean | 2,200 | 0.186 | kg CO2e (kg grain)-1 | [31] |
| On-farm | | | | |
| Corn silageb | 16,000 | 0.206 | kg CO2e (kg DM)-1 | [32,33] |
| Annual ryegrassc | 9,500 | 0.226 | kg CO2e (kg DM)-1 | [32,34] |
| Pearl milletd | 11,000 | 0.195 | kg CO2e (kg DM)-1 | [32,35] |
| Kikuyu grasse | 9,500 | 0.226 | kg CO2e (kg DM)-1 | [32,36] |
Table 4: GHG emissions from On-farm feed production.
| Item | Corn silage | Annual temperate pasture | Annual tropical pasture | Perennial tropical pasture |
|-------------------------------------------|---------------|----------------------------|---------------------------|------------------------------|
| DM yield, kg ha-1 | 16000 | 9500 | 11000 | 9500 |
| Direct N2O emissions to air | | | | |
| N organic fertilizer, kg ha-1a | 150 | 180 | 225 | 225 |
| N synthetic fertilizer | - | 20 | 25 | 25 |
| N from residual DM, kg ha-1b | 70 | 112 | 129 | 112 |
| Emission factor, kg N2O-N (kg N)-1c | 0.002 | 0.002 | 0.002 | 0.002 |
| kg N2O ha-1 from direct emissions | 0.69 | 0.98 | 1.19 | 1.14 |
| Indirect N2O emissions to air | | | | |
| kg NH3-N+NOx-N (kg organic N)-1b | 0.2 | 0.2 | 0.2 | 0.2 |
| kg NH3-N+NOx-N (kg synthetic N)-1b | 0.1 | 0.1 | 0.1 | 0.1 |
| kg N2O-N (kg NH3-N+NOx-N)-1b | 0.01 | 0.01 | 0.01 | 0.01 |
| kg N2O ha-1 from NH3+NOx volatilized | 0.47 | 0.60 | 0.75 | 0.75 |
| Indirect N2O emissions to soil | | | | |
| kg N losses by leaching (kg N)-1b | 0.3 | 0.3 | 0.3 | 0.3 |
| kg N2O-N (kg N leaching)-1 | 0.0075 | 0.0075 | 0.0075 | 0.0075 |
| kg N2O ha-1 from N losses by leaching | 0.78 | 1.10 | 1.34 | 1.28 |
| kg N2O ha-1 (direct + indirect emissions) | 1.94 | 2.68 | 3.28 | 3.16 |
| kg CO2e ha-1 from N2O emissionsd | 514 | 710 | 869 | 838 |
| kg CO2 ha-1 from lime+ureab | 515 | 721 | 882 | 852 |
| kg CO2 ha-1 from diesel combustione | 802 | 38 | 23 | 12 |
| kg CO2e from secondary sourcesf | 516 | 205 | 225 | 284 |
| Total CO2e emitted, kg ha-1 | 1833 | 964 | 1130 | 1148 |
| Emission factor, kg CO2e (kg DM)-1g | 0.115 | 0.145 | 0.147 | 0.173 |
| Carbon sequestered, kg ha-1h | - | - | - | 570 |
| Sequestered CO2-C, kg ha-1 | - | - | - | 1393 |
| kg CO2e ha-1 (emitted - sequestered) | 1833 | 964 | 1130 | -245 |
| Emission factor, kg CO2e (kg DM)-1i | 0.115 | 0.145 | 0.147 | -0.037 |
Table 5: Factors for major resource inputs in farm management.
| Item | Factor | Unita | References |
|------------------------------------------|----------|-------------------|--------------|
| Production and transport of diesel | 0.374 | kg CO2e L-1 | [41] |
| Emissions from diesel fuel combustion | 2.637 | kg CO2e L-1 | [41] |
| Production of electricityb | 0.73 | kg CO2e kWh-1 | [41] |
| Production of electricity (alternative)c | 0.205 | kg CO2e kWh-1 | [46] |
| Production of machinery | 3.54 | kg CO2e (kg mm)-1 | [42] |
| Manure handling | | | |
| Fuel for manure handling | 0.600 | L diesel tonne-1 | [42] |
| Machinery for manure handling | 0.17 | kg mm kg-1 | [42] |
| Milking and confinement | | | |
| Electricity for milking | 0.06 | kWh (kg milk)-1 | [47] |
| Electricity for lightingd | 75 | kWh cow-1 | [47] |
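As a worked example of combining these factors, the snippet below estimates the electricity-related emissions of milking per kg of milk under the two grid factors; this is a sketch of the arithmetic, not a result reported in the paper.

```python
ELECTRICITY_MILKING = 0.06  # kWh (kg milk)-1 [47]

# Brazilian grid factor [41] and the alternative factor [46]
for grid_factor in (0.73, 0.205):
    kg_co2e = ELECTRICITY_MILKING * grid_factor
    print(f"{kg_co2e:.4f} kg CO2e per kg milk at {grid_factor} kg CO2e kWh-1")
# 0.0438 with the 0.73 factor, 0.0123 with the 0.205 factor
```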
## Figures
Fig 1: Overview of the milk production system boundary considered in the study.
<!-- image -->
Fig 2: Overall greenhouse gas emissions in dairy cattle systems under various scenarios.
TMR = ad libitum TMR intake, 75TMR = 75% of ad libitum TMR intake with access to pasture, 50TMR = 50% of ad libitum TMR intake with access to pasture. (a) N2O emission factors for urine and dung from IPCC [38], feed production emission factors from Table 3 without accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.73 kg CO2e kWh-1 [41]. (b) N2O emission factors for urine and dung from IPCC [38], feed production emission factors from Table 3 without accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.205 kg CO2e kWh-1 [46]. (c) N2O emission factors for urine and dung from local data [37], feed production emission factors from Table 4 without accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.205 kg CO2e kWh-1 [46]. (d) N2O emission factors for urine and dung from local data [37], feed production emission factors from Table 4 accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.205 kg CO2e kWh-1 [46].
<!-- image -->
Fig 3: Sensitivity of the C footprint.
Sensitivity index = percentage change in C footprint for a 10% change in the given emission source, divided by 10%. (a) N2O emission factors for urine and dung from IPCC [38], feed production emission factors from Table 3, production of electricity = 0.73 kg CO2e kWh-1 [41]. (b) N2O emission factors for urine and dung from IPCC [38], feed production emission factors from Table 3, production of electricity = 0.205 kg CO2e kWh-1 [46]. (c) N2O emission factors for urine and dung from local data [37], feed production emission factors from Table 4 without accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.205 kg CO2e kWh-1 [46]. (d) N2O emission factors for urine and dung from local data [37], feed production emission factors from Table 4 accounting for sequestered CO2-C from perennial pasture, production of electricity = 0.205 kg CO2e kWh-1 [46].
<!-- image -->
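Written out, the caption's definition of the sensitivity index is the standard ratio form (an assumption about notation, with $CF$ the carbon footprint and $E$ the emission source perturbed by $\Delta E / E = 0.10$):

$$ \mathrm{SI} = \frac{\Delta CF / CF}{\Delta E / E} $$

An index of 1 therefore means the footprint changes in direct proportion to that source.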
Fig 4: Greenhouse gas emissions (GHG) from manure and feed production in dairy cattle systems.
TMR = ad libitum TMR intake, 75TMR = 75% of ad libitum TMR intake with access to pasture, 50TMR = 50% of ad libitum TMR intake with access to pasture. (a) N2O emission factors for urine and dung from IPCC [38]. (b) Feed production emission factors from Table 3. (c) N2O emission factors for urine and dung from local data [37]. (d) Feed production emission factors from Table 4 accounting for sequestered CO2-C from perennial pasture.
<!-- image -->
## References
- Climate Change and Land. Chapter 5: Food Security (2019)
- Herrero M; Henderson B; Havlík P; Thornton PK; Conant RT; Smith P. Greenhouse gas mitigation potentials in the livestock sector. Nat Clim Chang (2016)
- Rivera-Ferre MG; López-i-Gelats F; Howden M; Smith P; Morton JF; Herrero M. Re-framing the climate change debate in the livestock sector: mitigation and adaptation options. Wiley Interdiscip Rev Clim Chang (2016)
- van Zanten HHE; Mollenhorst H; Klootwijk CW; van Middelaar CE; de Boer IJM. Global food supply: land use efficiency of livestock systems. Int J Life Cycle Assess (2016)
- Hristov AN; Oh J; Firkins L; Dijkstra J; Kebreab E; Waghorn G. SPECIAL TOPICS—Mitigation of methane and nitrous oxide emissions from animal operations: I. A review of enteric methane mitigation options. J Anim Sci (2013)
- Hristov AN; Ott T; Tricarico J; Rotz A; Waghorn G; Adesogan A. SPECIAL TOPICS—Mitigation of methane and nitrous oxide emissions from animal operations: III. A review of animal management mitigation options. J Anim Sci (2013)
- Montes F; Meinen R; Dell C; Rotz A; Hristov AN; Oh J. SPECIAL TOPICS—Mitigation of methane and nitrous oxide emissions from animal operations: II. A review of manure management mitigation options. J Anim Sci (2013)
- Ledgard SF; Wei S; Wang X; Falconer S; Zhang N; Zhang X. Nitrogen and carbon footprints of dairy farm systems in China and New Zealand, as influenced by productivity, feed sources and mitigations. Agric Water Manag (2019)
- O'Brien D; Shalloo L; Patton J; Buckley F; Grainger C; Wallace M. A life cycle assessment of seasonal grass-based and confinement dairy farms. Agric Syst (2012)
- Salou T; Le Mouël C; van der Werf HMG. Environmental impacts of dairy system intensification: the functional unit matters! J Clean Prod (2017)
- Lizarralde C; Picasso V; Rotz CA; Cadenazzi M; Astigarraga L. Practices to Reduce Milk Carbon Footprint on Grazing Dairy Farms in Southern Uruguay. Case Studies. Sustain Agric Res (2014)
- Clark CEF; Kaur R; Millapan LO; Golder HM; Thomson PC; Horadagoda A. The effect of temperate or tropical pasture grazing state and grain-based concentrate allocation on dairy cattle production and behavior. J Dairy Sci (2018)
- FAOSTAT. (2017)
- Vogeler I; Mackay A; Vibart R; Rendel J; Beautrais J; Dennis S. Effect of inter-annual variability in pasture growth and irrigation response on farm productivity and profitability based on biophysical and farm systems modelling. Sci Total Environ (2016)
- Wilkinson JM; Lee MRF; Rivero MJ; Chamberlain AT. Some challenges and opportunities for grazing dairy cows on temperate pastures. Grass Forage Sci. (2020)
- Wales WJ; Marett LC; Greenwood JS; Wright MM; Thornhill JB; Jacobs JL. Use of partial mixed rations in pasture-based dairying in temperate regions of Australia. Anim Prod Sci (2013)
- Bargo F; Muller LD; Delahoy JE; Cassidy TW. Performance of high producing dairy cows with three different feeding systems combining pasture and total mixed rations. J Dairy Sci (2002)
- Vibart RE; Fellner V; Burns JC; Huntington GB; Green JT. Performance of lactating dairy cows fed varying levels of total mixed ration and pasture. J Dairy Res (2008)
- Mendoza A; Cajarville C; Repetto JL. Short communication: Intake, milk production, and milk fatty acid profile of dairy cows fed diets combining fresh forage with a total mixed ration. J Dairy Sci (2016)
- Nutrient Requirements of Dairy Cattle (2001)
- Nozière P; Sauvant D; Delaby L. (2018)
- Lorenz H; Reinsch T; Hess S; Taube F. Is low-input dairy farming more climate friendly? A meta-analysis of the carbon footprints of different production systems. J Clean Prod (2019)
- INTERNATIONAL STANDARD—Environmental management—Life cycle assessment—Requirements and guidelines (2006)
- Environmental management—Life cycle assessment—Principles and framework. ISO 14040 (2006)
- FAO. Environmental Performance of Large Ruminant Supply Chains: Guidelines for assessment (2016)
- Civiero M; Ribeiro-Filho HMN; Schaitz LH. Pearl-millet grazing decreases daily methane emissions in dairy cows receiving total mixed ration. 7th Greenhouse Gas and Animal Agriculture Conference, Foz do Iguaçu (2019)
- IPCC—Intergovernmental Panel on Climate Change. Climate Change 2014 Synthesis Report (Unedited Version). 2014. Available: https://.
- INRA. Alimentation des bovins, ovins et caprins. Besoins des animaux—valeurs des aliments. Tables Inra 2007. 4th ed. INRA, editor. 2007.
- Delagarde R; Faverdin P; Baratte C; Peyraud JL. GrazeIn: a model of herbage intake and milk production for grazing dairy cows. 2. Prediction of intake under rotational and continuously stocked grazing management. Grass Forage Sci (2011)
- Ma BL; Liang BC; Biswas DK; Morrison MJ; McLaughlin NB. The carbon footprint of maize production as affected by nitrogen fertilizer and maize-legume rotations. Nutr Cycl Agroecosystems (2012)
- Raucci GS; Moreira CS; Alves PS; Mello FFC; Frazão LA; Cerri CEP. Greenhouse gas assessment of Brazilian soybean production: a case study of Mato Grosso State. J Clean Prod (2015)
- Camargo GGT; Ryan MR; Richard TL. Energy Use and Greenhouse Gas Emissions from Crop Production Using the Farm Energy Analysis Tool. Bioscience (2013)
- da Silva MSJ; Jobim CC; Poppi EC; Tres TT; Osmari MP. Production technology and quality of corn silage for feeding dairy cattle in Southern Brazil. Rev Bras Zootec (2015)
- Duchini PG; Guzatti GC; Ribeiro-Filho HMN; Sbrissia AF. Intercropping black oat (Avena strigosa) and annual ryegrass (Lolium multiflorum) can increase pasture leaf production compared with their monocultures. Crop Pasture Sci (2016)
- Scaravelli LFB; Pereira LET; Olivo CJ; Agnolin CA. Produção e qualidade de pastagens de Coastcross-1 e milheto utilizadas com vacas leiteiras. Cienc Rural (2007)
- Sbrissia AF; Duchini PG; Zanini GD; Santos GT; Padilha DA; Schmitt D. Defoliation strategies in pastures submitted to intermittent stocking method: Underlying mechanisms buffering forage accumulation over a range of grazing heights. Crop Sci (2018)
- Almeida JGR; Dall-Orsoletta AC; Oziemblowski MM; Michelon GM; Bayer C; Edouard N. Carbohydrate-rich supplements can improve nitrogen use efficiency and mitigate nitrogenous gas emissions from the excreta of dairy cows grazing temperate grass. Animal (2020)
- Eggleston HS; Buendia L; Miwa K. IPCC guidelines for national greenhouse gas inventories (2006)
- Ramalho B; Dieckow J; Barth G; Simon PL; Mangrich AS; Brevilieri RC. No-tillage and ryegrass grazing effects on stocks, stratification and lability of carbon and nitrogen in a subtropical Umbric Ferralsol. Eur J Soil Sci (2020)
- Fernandes HC; da Silveira JCM; Rinaldi PCN. Avaliação do custo energético de diferentes operações agrícolas mecanizadas. Cienc e Agrotecnologia (2008)
- Wang MQ. GREET 1.8a Spreadsheet Model (2007). Available: .
- Rotz CA; Montes F; Chianese DS. The carbon footprint of dairy production systems through partial life cycle assessment. J Dairy Sci (2010)
- Niu M; Kebreab E; Hristov AN; Oh J; Arndt C; Bannink A. Prediction of enteric methane production, yield, and intensity in dairy cattle using an intercontinental database. Glob Chang Biol (2018)
- Eugène M; Sauvant D; Nozière P; Viallard D; Oueslati K; Lherm M. A new Tier 3 method to calculate methane emission inventory for ruminants. J Environ Manage (2019)
- Reed KF; Moraes LE; Casper DP; Kebreab E. Predicting nitrogen excretion from cattle. J Dairy Sci (2015)
- Barros MV; Piekarski CM; De Francisco AC. Carbon footprint of electricity generation in Brazil: An analysis of the 2016-2026 period. Energies (2018)
- Ludington D; Johnson E. Dairy Farm Energy Audit Summary. New York State Energy Res Dev Auth (2003)
- Thoma G; Jolliet O; Wang Y. A biophysical approach to allocation of life cycle environmental burdens for fluid milk supply chain analysis. Int Dairy J (2013)
- Naranjo A; Johnson A; Rossow H. Greenhouse gas, water, and land footprint per unit of production of the California dairy industry over 50 years. (2020)
- Jayasundara S; Worden D; Weersink A; Wright T; VanderZaag A; Gordon R. Improving farm profitability also reduces the carbon footprint of milk production in intensive dairy production systems. J Clean Prod (2019)
- Williams SRO; Fisher PD; Berrisford T; Moate PJ; Reynard K. Reducing methane on-farm by feeding diets high in fat may not always reduce life cycle greenhouse gas emissions. Int J Life Cycle Assess (2014)
- Gollnow S; Lundie S; Moore AD; McLaren J; van Buuren N; Stahle P. Carbon footprint of milk production from dairy cows in Australia. Int Dairy J (2014)
- O'Brien D; Capper JL; Garnsworthy PC; Grainger C; Shalloo L. A case study of the carbon footprint of milk from high-performing confinement and grass-based dairy farms. J Dairy Sci (2014)
- Chobtang J; McLaren SJ; Ledgard SF; Donaghy DJ. Consequential Life Cycle Assessment of Pasture-based Milk Production: A Case Study in the Waikato Region, New Zealand. J Ind Ecol (2017)
- Garg MR; Phondba BT; Sherasia PL; Makkar HPS. Carbon footprint of milk production under smallholder dairying in Anand district of Western India: A cradle-to-farm gate life cycle assessment. Anim Prod Sci (2016)
- de Léis CM; Cherubini E; Ruviaro CF; Prudêncio da Silva V; do Nascimento Lampert V; Spies A. Carbon footprint of milk production in Brazil: a comparative case study. Int J Life Cycle Assess (2015)
- O'Brien D; Geoghegan A; McNamara K; Shalloo L. How can grass-based dairy farmers reduce the carbon footprint of milk? Anim Prod Sci (2016)
- O'Brien D; Brennan P; Humphreys J; Ruane E; Shalloo L. An appraisal of carbon footprint of milk from commercial grass-based dairy farms in Ireland according to a certified life cycle assessment methodology. Int J Life Cycle Assess (2014)
- Baek CY; Lee KM; Park KH. Quantification and control of the greenhouse gas emissions from a dairy cow system. J Clean Prod (2014)
- Dall-Orsoletta AC; Almeida JGR; Carvalho PCF; Savian JV; Ribeiro-Filho HMN. Ryegrass pasture combined with partial total mixed ration reduces enteric methane emissions and maintains the performance of dairy cows during mid to late lactation. J Dairy Sci (2016)
- Dall-Orsoletta AC; Oziemblowski MM; Berndt A; Ribeiro-Filho HMN. Enteric methane emission from grazing dairy cows receiving corn silage or ground corn supplementation. Anim Feed Sci Technol (2019)
- Niu M; Appuhamy JADRN; Leytem AB; Dungan RS; Kebreab E. Effect of dietary crude protein and forage contents on enteric methane emissions and nitrogen excretion from dairy cows simultaneously. Anim Prod Sci (2016)
- Waghorn GC; Law N; Bryant M; Pacheco D; Dalley D. Digestion and nitrogen excretion by Holstein-Friesian cows in late lactation offered ryegrass-based pasture supplemented with fodder beet. Anim Prod Sci (2019)
- Dickhoefer U; Glowacki S; Gómez CA; Castro-Montoya JM. Forage and protein use efficiency in dairy cows grazing a mixed grass-legume pasture and supplemented with different levels of protein and starch. Livest Sci (2018)
- Schwab CG; Broderick GA. A 100-Year Review: Protein and amino acid nutrition in dairy cows. J Dairy Sci (2017)
- Sordi A; Dieckow J; Bayer C; Alburquerque MA; Piva JT; Zanatta JA. Nitrous oxide emission factors for urine and dung patches in a subtropical Brazilian pastureland. Agric Ecosyst Environ (2014)
- Simon PL; Dieckow J; de Klein CAM; Zanatta JA; van der Weerden TJ; Ramalho B. Nitrous oxide emission factors from cattle urine and dung, and dicyandiamide (DCD) as a mitigation strategy in subtropical pastures. Agric Ecosyst Environ (2018)
- Wang X; Ledgard S; Luo J; Guo Y; Zhao Z; Guo L. Environmental impacts and resource use of milk production on the North China Plain, based on life cycle assessment. Sci Total Environ (2018)
- Pirlo G; Lolli S. Environmental impact of milk production from samples of organic and conventional farms in Lombardy (Italy). J Clean Prod (2019)
- Herzog A; Winckler C; Zollitsch W. In pursuit of sustainability in dairy farming: A review of interdependent effects of animal welfare improvement and environmental impact mitigation. Agric Ecosyst Environ (2018)
- Mostert PF; van Middelaar CE; Bokkers EAM; de Boer IJM. The impact of subclinical ketosis in dairy cows on greenhouse gas emissions of milk production. J Clean Prod (2018)
- Mostert PF; van Middelaar CE; de Boer IJM; Bokkers EAM. The impact of foot lesions in dairy cows on greenhouse gas emissions of milk production. Agric Syst (2018)
- Foley JA; Ramankutty N; Brauman KA; Cassidy ES; Gerber JS; Johnston M. Solutions for a cultivated planet. Nature (2011)
- Lal R. Soil Carbon Sequestration Impacts on Global Climate Change and Food Security. Science (2004)
- Boddey RM; Jantalia CP; Conceiçao PC; Zanatta JA; Bayer C; Mielniczuk J. Carbon accumulation at depth in Ferralsols under zero-till subtropical agriculture. Glob Chang Biol (2010)
- McConkey B; Angers D; Bentham M; Boehm M; Brierley T; Cerkowniak D. Canadian agricultural greenhouse gas monitoring accounting and reporting system: methodology and greenhouse gas estimates for agricultural land in the LULUCF sector for NIR 2014. (2014)

View File

@@ -3,15 +3,16 @@
<figure> <figure>
<location><page_1><loc_84><loc_93><loc_96><loc_97></location> <location><page_1><loc_84><loc_93><loc_96><loc_97></location>
</figure> </figure>
<section_header_level_1><location><page_1><loc_6><loc_79><loc_96><loc_90></location>Row and Column Access Control Support in IBM DB2 for i</section_header_level_1> <section_header_level_1><location><page_1><loc_6><loc_79><loc_96><loc_89></location>Row and Column Access Control Support in IBM DB2 for i</section_header_level_1>
<text><location><page_1><loc_6><loc_59><loc_35><loc_63></location>Implement roles and separation of duties</text> <figure>
<text><location><page_1><loc_6><loc_52><loc_33><loc_56></location>Leverage row permissions on the database</text> <location><page_1><loc_5><loc_11><loc_96><loc_63></location>
<text><location><page_1><loc_6><loc_45><loc_32><loc_49></location>Protect columns by defining column masks</text> </figure>
<text><location><page_1><loc_81><loc_12><loc_95><loc_28></location>Jim Bainbridge Hernando Bedoya Rob Bestgen Mike Cain Dan Cruikshank Jim Denton Doug Mack Tom McKinley Kent Milligan</text> <figure>
<text><location><page_1><loc_51><loc_2><loc_95><loc_10></location>Redpaper</text> <location><page_1><loc_52><loc_2><loc_95><loc_10></location>
</figure>
<section_header_level_1><location><page_2><loc_11><loc_88><loc_28><loc_91></location>Contents</section_header_level_1> <section_header_level_1><location><page_2><loc_11><loc_88><loc_28><loc_91></location>Contents</section_header_level_1>
<table> <table>
<location><page_2><loc_22><loc_10><loc_90><loc_83></location> <location><page_2><loc_22><loc_10><loc_89><loc_83></location>
<row_0><col_0><body>Notices</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii</col_1></row_0> <row_0><col_0><body>Notices</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii</col_1></row_0>
<row_1><col_0><body>Trademarks</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii</col_1></row_1> <row_1><col_0><body>Trademarks</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii</col_1></row_1>
<row_2><col_0><body>DB2 for i Center of Excellence</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix</col_1></row_2> <row_2><col_0><body>DB2 for i Center of Excellence</col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix</col_1></row_2>
@@ -45,8 +46,8 @@
<row_30><col_0><body>3.2.2 Built-in global variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>19</col_1></row_30> <row_30><col_0><body>3.2.2 Built-in global variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>19</col_1></row_30>
<row_31><col_0><body>3.3 VERIFY_GROUP_FOR_USER function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>20</col_1></row_31> <row_31><col_0><body>3.3 VERIFY_GROUP_FOR_USER function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>20</col_1></row_31>
<row_32><col_0><body>3.4 Establishing and controlling accessibility by using the RCAC rule text . . . . . . . . . . . . .</col_0><col_1><body>21</col_1></row_32> <row_32><col_0><body>3.4 Establishing and controlling accessibility by using the RCAC rule text . . . . . . . . . . . . .</col_0><col_1><body>21</col_1></row_32>
<row_33><col_0><body></col_0><col_1><body>. . . . . . . . . . . . . . . . . . . . . . . . 22</col_1></row_33> <row_33><col_0><body>. . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>22</col_1></row_33>
<row_34><col_0><body>3.5 SELECT, INSERT, and UPDATE behavior with RCAC</col_0><col_1><body></col_1></row_34> <row_34><col_0><body>3.5 SELECT, INSERT, and UPDATE behavior with RCAC 3.6 Human resources example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>22</col_1></row_34>
<row_35><col_0><body>3.6.1 Assigning the QIBM_DB_SECADM function ID to the consultants. . . . . . . . . . . .</col_0><col_1><body>23</col_1></row_35> <row_35><col_0><body>3.6.1 Assigning the QIBM_DB_SECADM function ID to the consultants. . . . . . . . . . . .</col_0><col_1><body>23</col_1></row_35>
<row_36><col_0><body>3.6.2 Creating group profiles for the users and their roles . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>23</col_1></row_36> <row_36><col_0><body>3.6.2 Creating group profiles for the users and their roles . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>23</col_1></row_36>
<row_37><col_0><body>3.6.3 Demonstrating data access without RCAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>24</col_1></row_37> <row_37><col_0><body>3.6.3 Demonstrating data access without RCAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .</col_0><col_1><body>24</col_1></row_37>
@@ -63,7 +64,7 @@
</figure> </figure>
<section_header_level_1><location><page_3><loc_24><loc_57><loc_31><loc_59></location>Highlights</section_header_level_1> <section_header_level_1><location><page_3><loc_24><loc_57><loc_31><loc_59></location>Highlights</section_header_level_1>
<unordered_list> <unordered_list>
<list_item><location><page_3><loc_24><loc_55><loc_40><loc_57></location>GLYPH<g115>GLYPH<g3> GLYPH<g40>GLYPH<g81>GLYPH<g75>GLYPH<g68>GLYPH<g81>GLYPH<g70>GLYPH<g72>GLYPH<g3> GLYPH<g87>GLYPH<g75>GLYPH<g72>GLYPH<g3> GLYPH<g83>GLYPH<g72>GLYPH<g85>GLYPH<g73>GLYPH<g82>GLYPH<g85>GLYPH<g80>GLYPH<g68>GLYPH<g81>GLYPH<g70>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g92>GLYPH<g82>GLYPH<g88>GLYPH<g85> GLYPH<g3> GLYPH<g71>GLYPH<g68>GLYPH<g87>GLYPH<g68>GLYPH<g69>GLYPH<g68>GLYPH<g86>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g83>GLYPH<g72>GLYPH<g85>GLYPH<g68>GLYPH<g87>GLYPH<g76>GLYPH<g82>GLYPH<g81>GLYPH<g86></list_item> <list_item><location><page_3><loc_24><loc_55><loc_40><loc_56></location>GLYPH<g115>GLYPH<g3> GLYPH<g40>GLYPH<g81>GLYPH<g75>GLYPH<g68>GLYPH<g81>GLYPH<g70>GLYPH<g72>GLYPH<g3> GLYPH<g87>GLYPH<g75>GLYPH<g72>GLYPH<g3> GLYPH<g83>GLYPH<g72>GLYPH<g85>GLYPH<g73>GLYPH<g82>GLYPH<g85>GLYPH<g80>GLYPH<g68>GLYPH<g81>GLYPH<g70>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g92>GLYPH<g82>GLYPH<g88>GLYPH<g85> GLYPH<g3> GLYPH<g71>GLYPH<g68>GLYPH<g87>GLYPH<g68>GLYPH<g69>GLYPH<g68>GLYPH<g86>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g83>GLYPH<g72>GLYPH<g85>GLYPH<g68>GLYPH<g87>GLYPH<g76>GLYPH<g82>GLYPH<g81>GLYPH<g86></list_item>
<list_item><location><page_3><loc_24><loc_51><loc_42><loc_54></location>GLYPH<g115>GLYPH<g3> GLYPH<g40>GLYPH<g68>GLYPH<g85> GLYPH<g81>GLYPH<g3> GLYPH<g74>GLYPH<g85>GLYPH<g72>GLYPH<g68>GLYPH<g87>GLYPH<g72>GLYPH<g85>GLYPH<g3> GLYPH<g85>GLYPH<g72>GLYPH<g87>GLYPH<g88>GLYPH<g85> GLYPH<g81>GLYPH<g3> GLYPH<g82>GLYPH<g81>GLYPH<g3> GLYPH<g44>GLYPH<g55>GLYPH<g3> GLYPH<g83>GLYPH<g85>GLYPH<g82>GLYPH<g77>GLYPH<g72>GLYPH<g70>GLYPH<g87>GLYPH<g86> GLYPH<g3> GLYPH<g87>GLYPH<g75>GLYPH<g85>GLYPH<g82>GLYPH<g88>GLYPH<g74>GLYPH<g75>GLYPH<g3> GLYPH<g80>GLYPH<g82>GLYPH<g71>GLYPH<g72>GLYPH<g85> GLYPH<g81>GLYPH<g76>GLYPH<g93>GLYPH<g68>GLYPH<g87>GLYPH<g76>GLYPH<g82>GLYPH<g81>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g71>GLYPH<g68>GLYPH<g87>GLYPH<g68>GLYPH<g69>GLYPH<g68>GLYPH<g86>GLYPH<g72>GLYPH<g3> GLYPH<g68>GLYPH<g81>GLYPH<g71> GLYPH<g3> GLYPH<g68>GLYPH<g83>GLYPH<g83>GLYPH<g79>GLYPH<g76>GLYPH<g70>GLYPH<g68>GLYPH<g87>GLYPH<g76>GLYPH<g82>GLYPH<g81>GLYPH<g86></list_item> <list_item><location><page_3><loc_24><loc_51><loc_42><loc_54></location>GLYPH<g115>GLYPH<g3> GLYPH<g40>GLYPH<g68>GLYPH<g85> GLYPH<g81>GLYPH<g3> GLYPH<g74>GLYPH<g85>GLYPH<g72>GLYPH<g68>GLYPH<g87>GLYPH<g72>GLYPH<g85>GLYPH<g3> GLYPH<g85>GLYPH<g72>GLYPH<g87>GLYPH<g88>GLYPH<g85> GLYPH<g81>GLYPH<g3> GLYPH<g82>GLYPH<g81>GLYPH<g3> GLYPH<g44>GLYPH<g55>GLYPH<g3> GLYPH<g83>GLYPH<g85>GLYPH<g82>GLYPH<g77>GLYPH<g72>GLYPH<g70>GLYPH<g87>GLYPH<g86> GLYPH<g3> GLYPH<g87>GLYPH<g75>GLYPH<g85>GLYPH<g82>GLYPH<g88>GLYPH<g74>GLYPH<g75>GLYPH<g3> GLYPH<g80>GLYPH<g82>GLYPH<g71>GLYPH<g72>GLYPH<g85> GLYPH<g81>GLYPH<g76>GLYPH<g93>GLYPH<g68>GLYPH<g87>GLYPH<g76>GLYPH<g82>GLYPH<g81>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g71>GLYPH<g68>GLYPH<g87>GLYPH<g68>GLYPH<g69>GLYPH<g68>GLYPH<g86>GLYPH<g72>GLYPH<g3> GLYPH<g68>GLYPH<g81>GLYPH<g71> GLYPH<g3> GLYPH<g68>GLYPH<g83>GLYPH<g83>GLYPH<g79>GLYPH<g76>GLYPH<g70>GLYPH<g68>GLYPH<g87>GLYPH<g76>GLYPH<g82>GLYPH<g81>GLYPH<g86></list_item>
<list_item><location><page_3><loc_24><loc_48><loc_41><loc_50></location>GLYPH<g115>GLYPH<g3> GLYPH<g53>GLYPH<g72>GLYPH<g79>GLYPH<g92>GLYPH<g3> GLYPH<g82>GLYPH<g81>GLYPH<g3> GLYPH<g44>GLYPH<g37>GLYPH<g48>GLYPH<g3> GLYPH<g72>GLYPH<g91>GLYPH<g83>GLYPH<g72>GLYPH<g85>GLYPH<g87>GLYPH<g3> GLYPH<g70>GLYPH<g82>GLYPH<g81>GLYPH<g86>GLYPH<g88>GLYPH<g79>GLYPH<g87>GLYPH<g76>GLYPH<g81>GLYPH<g74>GLYPH<g15>GLYPH<g3> GLYPH<g86>GLYPH<g78>GLYPH<g76>GLYPH<g79>GLYPH<g79>GLYPH<g86> GLYPH<g3> GLYPH<g86>GLYPH<g75>GLYPH<g68>GLYPH<g85>GLYPH<g76>GLYPH<g81>GLYPH<g74>GLYPH<g3> GLYPH<g68>GLYPH<g81>GLYPH<g71>GLYPH<g3> GLYPH<g85>GLYPH<g72>GLYPH<g81>GLYPH<g82>GLYPH<g90>GLYPH<g81>GLYPH<g3> GLYPH<g86>GLYPH<g72>GLYPH<g85>GLYPH<g89>GLYPH<g76>GLYPH<g70>GLYPH<g72>GLYPH<g86></list_item> <list_item><location><page_3><loc_24><loc_48><loc_41><loc_50></location>GLYPH<g115>GLYPH<g3> GLYPH<g53>GLYPH<g72>GLYPH<g79>GLYPH<g92>GLYPH<g3> GLYPH<g82>GLYPH<g81>GLYPH<g3> GLYPH<g44>GLYPH<g37>GLYPH<g48>GLYPH<g3> GLYPH<g72>GLYPH<g91>GLYPH<g83>GLYPH<g72>GLYPH<g85>GLYPH<g87>GLYPH<g3> GLYPH<g70>GLYPH<g82>GLYPH<g81>GLYPH<g86>GLYPH<g88>GLYPH<g79>GLYPH<g87>GLYPH<g76>GLYPH<g81>GLYPH<g74>GLYPH<g15>GLYPH<g3> GLYPH<g86>GLYPH<g78>GLYPH<g76>GLYPH<g79>GLYPH<g79>GLYPH<g86> GLYPH<g3> GLYPH<g86>GLYPH<g75>GLYPH<g68>GLYPH<g85>GLYPH<g76>GLYPH<g81>GLYPH<g74>GLYPH<g3> GLYPH<g68>GLYPH<g81>GLYPH<g71>GLYPH<g3> GLYPH<g85>GLYPH<g72>GLYPH<g81>GLYPH<g82>GLYPH<g90>GLYPH<g81>GLYPH<g3> GLYPH<g86>GLYPH<g72>GLYPH<g85>GLYPH<g89>GLYPH<g76>GLYPH<g70>GLYPH<g72>GLYPH<g86></list_item>
<list_item><location><page_3><loc_24><loc_45><loc_38><loc_47></location>GLYPH<g115>GLYPH<g3> GLYPH<g55> GLYPH<g68>GLYPH<g78>GLYPH<g72>GLYPH<g3> GLYPH<g68>GLYPH<g71>GLYPH<g89>GLYPH<g68>GLYPH<g81>GLYPH<g87>GLYPH<g68>GLYPH<g74>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g68>GLYPH<g70>GLYPH<g70>GLYPH<g72>GLYPH<g86>GLYPH<g86>GLYPH<g3> GLYPH<g87>GLYPH<g82>GLYPH<g3> GLYPH<g68> GLYPH<g3> GLYPH<g90>GLYPH<g82>GLYPH<g85>GLYPH<g79>GLYPH<g71>GLYPH<g90>GLYPH<g76>GLYPH<g71>GLYPH<g72>GLYPH<g3> GLYPH<g86>GLYPH<g82>GLYPH<g88>GLYPH<g85>GLYPH<g70>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g72>GLYPH<g91>GLYPH<g83>GLYPH<g72>GLYPH<g85>GLYPH<g87>GLYPH<g76>GLYPH<g86>GLYPH<g72></list_item> <list_item><location><page_3><loc_24><loc_45><loc_38><loc_47></location>GLYPH<g115>GLYPH<g3> GLYPH<g55> GLYPH<g68>GLYPH<g78>GLYPH<g72>GLYPH<g3> GLYPH<g68>GLYPH<g71>GLYPH<g89>GLYPH<g68>GLYPH<g81>GLYPH<g87>GLYPH<g68>GLYPH<g74>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g68>GLYPH<g70>GLYPH<g70>GLYPH<g72>GLYPH<g86>GLYPH<g86>GLYPH<g3> GLYPH<g87>GLYPH<g82>GLYPH<g3> GLYPH<g68> GLYPH<g3> GLYPH<g90>GLYPH<g82>GLYPH<g85>GLYPH<g79>GLYPH<g71>GLYPH<g90>GLYPH<g76>GLYPH<g71>GLYPH<g72>GLYPH<g3> GLYPH<g86>GLYPH<g82>GLYPH<g88>GLYPH<g85>GLYPH<g70>GLYPH<g72>GLYPH<g3> GLYPH<g82>GLYPH<g73>GLYPH<g3> GLYPH<g72>GLYPH<g91>GLYPH<g83>GLYPH<g72>GLYPH<g85>GLYPH<g87>GLYPH<g76>GLYPH<g86>GLYPH<g72></list_item>
@@ -82,14 +83,14 @@
<text><location><page_3><loc_46><loc_42><loc_71><loc_43></location>Global CoE engagements cover topics including:</text> <text><location><page_3><loc_46><loc_42><loc_71><loc_43></location>Global CoE engagements cover topics including:</text>
<unordered_list> <unordered_list>
<list_item><location><page_3><loc_46><loc_40><loc_66><loc_41></location>r Database performance and scalability</list_item> <list_item><location><page_3><loc_46><loc_40><loc_66><loc_41></location>r Database performance and scalability</list_item>
<list_item><location><page_3><loc_46><loc_39><loc_69><loc_40></location>r Advanced SQL knowledge and skills transfer</list_item> <list_item><location><page_3><loc_46><loc_39><loc_69><loc_39></location>r Advanced SQL knowledge and skills transfer</list_item>
<list_item><location><page_3><loc_46><loc_37><loc_64><loc_38></location>r Business intelligence and analytics</list_item> <list_item><location><page_3><loc_46><loc_37><loc_64><loc_38></location>r Business intelligence and analytics</list_item>
<list_item><location><page_3><loc_46><loc_36><loc_56><loc_37></location>r DB2 Web Query</list_item> <list_item><location><page_3><loc_46><loc_36><loc_56><loc_37></location>r DB2 Web Query</list_item>
<list_item><location><page_3><loc_46><loc_35><loc_82><loc_36></location>r Query/400 modernization for better reporting and analysis capabilities</list_item> <list_item><location><page_3><loc_46><loc_35><loc_82><loc_36></location>r Query/400 modernization for better reporting and analysis capabilities</list_item>
<list_item><location><page_3><loc_46><loc_33><loc_69><loc_34></location>r Database modernization and re-engineering</list_item> <list_item><location><page_3><loc_46><loc_33><loc_69><loc_34></location>r Database modernization and re-engineering</list_item>
<list_item><location><page_3><loc_46><loc_32><loc_65><loc_33></location>r Data-centric architecture and design</list_item> <list_item><location><page_3><loc_46><loc_32><loc_65><loc_33></location>r Data-centric architecture and design</list_item>
<list_item><location><page_3><loc_46><loc_31><loc_76><loc_32></location>r Extremely large database and overcoming limits to growth</list_item> <list_item><location><page_3><loc_46><loc_31><loc_76><loc_32></location>r Extremely large database and overcoming limits to growth</list_item>
<list_item><location><page_3><loc_46><loc_30><loc_62><loc_31></location>r ISV education and enablement</list_item> <list_item><location><page_3><loc_46><loc_30><loc_62><loc_30></location>r ISV education and enablement</list_item>
</unordered_list> </unordered_list>
<section_header_level_1><location><page_4><loc_11><loc_88><loc_25><loc_91></location>Preface</section_header_level_1> <section_header_level_1><location><page_4><loc_11><loc_88><loc_25><loc_91></location>Preface</section_header_level_1>
<text><location><page_4><loc_22><loc_75><loc_89><loc_83></location>This IBMfi Redpaper™ publication provides information about the IBM i 7.2 feature of IBM DB2fi for i Row and Column Access Control (RCAC). It offers a broad description of the function and advantages of controlling access to data in a comprehensive and transparent way. This publication helps you understand the capabilities of RCAC and provides examples of defining, creating, and implementing the row permissions and column masks in a relational database environment.</text> <text><location><page_4><loc_22><loc_75><loc_89><loc_83></location>This IBMfi Redpaper™ publication provides information about the IBM i 7.2 feature of IBM DB2fi for i Row and Column Access Control (RCAC). It offers a broad description of the function and advantages of controlling access to data in a comprehensive and transparent way. This publication helps you understand the capabilities of RCAC and provides examples of defining, creating, and implementing the row permissions and column masks in a relational database environment.</text>
@@ -102,8 +103,8 @@
<location><page_4><loc_24><loc_20><loc_41><loc_33></location> <location><page_4><loc_24><loc_20><loc_41><loc_33></location>
</figure> </figure>
<text><location><page_4><loc_43><loc_35><loc_88><loc_53></location>Jim Bainbridge is a senior DB2 consultant on the DB2 for i Center of Excellence team in the IBM Lab Services and Training organization. His primary role is training and implementation services for IBM DB2 Web Query for i and business analytics. Jim began his career with IBM 30 years ago in the IBM Rochester Development Lab, where he developed cooperative processing products that paired IBM PCs with IBM S/36 and AS/.400 systems. In the years since, Jim has held numerous technical roles, including independent software vendors technical support on a broad range of IBM technologies and products, and supporting customers in the IBM Executive Briefing Center and IBM Project Office.</text> <text><location><page_4><loc_43><loc_35><loc_88><loc_53></location>Jim Bainbridge is a senior DB2 consultant on the DB2 for i Center of Excellence team in the IBM Lab Services and Training organization. His primary role is training and implementation services for IBM DB2 Web Query for i and business analytics. Jim began his career with IBM 30 years ago in the IBM Rochester Development Lab, where he developed cooperative processing products that paired IBM PCs with IBM S/36 and AS/.400 systems. In the years since, Jim has held numerous technical roles, including independent software vendors technical support on a broad range of IBM technologies and products, and supporting customers in the IBM Executive Briefing Center and IBM Project Office.</text>
<text><location><page_4><loc_43><loc_14><loc_88><loc_34></location>Hernando Bedoya is a Senior IT Specialist at STG Lab Services and Training in Rochester, Minnesota. He writes extensively and teaches IBM classes worldwide in all areas of DB2 for i. Before joining STG Lab Services, he worked in the ITSO for nine years writing multiple IBM Redbooksfi publications. He also worked for IBM Colombia as an IBM AS/400fi IT Specialist doing presales support for the Andean countries. He has 28 years of experience in the computing field and has taught database classes in Colombian universities. He holds a Master's degree in Computer Science from EAFIT, Colombia. His areas of expertise are database technology, performance, and data warehousing. Hernando can be contacted at hbedoya@us.ibm.com .</text> <text><location><page_4><loc_43><loc_14><loc_88><loc_33></location>Hernando Bedoya is a Senior IT Specialist at STG Lab Services and Training in Rochester, Minnesota. He writes extensively and teaches IBM classes worldwide in all areas of DB2 for i. Before joining STG Lab Services, he worked in the ITSO for nine years writing multiple IBM Redbooksfi publications. He also worked for IBM Colombia as an IBM AS/400fi IT Specialist doing presales support for the Andean countries. He has 28 years of experience in the computing field and has taught database classes in Colombian universities. He holds a Master's degree in Computer Science from EAFIT, Colombia. His areas of expertise are database technology, performance, and data warehousing. Hernando can be contacted at hbedoya@us.ibm.com .</text>
<section_header_level_1><location><page_4><loc_10><loc_62><loc_20><loc_64></location>Authors</section_header_level_1> <section_header_level_1><location><page_4><loc_11><loc_62><loc_20><loc_64></location>Authors</section_header_level_1>
<figure> <figure>
<location><page_5><loc_5><loc_70><loc_39><loc_91></location> <location><page_5><loc_5><loc_70><loc_39><loc_91></location>
</figure> </figure>
@@ -126,7 +127,7 @@
</unordered_list> </unordered_list>
<text><location><page_6><loc_25><loc_64><loc_89><loc_65></location>A security policy is what defines whether the system and its settings are secure (or not).</text> <text><location><page_6><loc_25><loc_64><loc_89><loc_65></location>A security policy is what defines whether the system and its settings are secure (or not).</text>
<unordered_list> <unordered_list>
<list_item><location><page_6><loc_22><loc_52><loc_89><loc_63></location>GLYPH<SM590000> The second fundamental in securing data assets is the use of resource security . If implemented properly, resource security prevents data breaches from both internal and external intrusions. Resource security controls are closely tied to the part of the security policy that defines who should have access to what information resources. A hacker might be good enough to get through your company firewalls and sift his way through to your system, but if they do not have explicit access to your database, the hacker cannot compromise your information assets.</list_item> <list_item><location><page_6><loc_22><loc_53><loc_89><loc_63></location>GLYPH<SM590000> The second fundamental in securing data assets is the use of resource security . If implemented properly, resource security prevents data breaches from both internal and external intrusions. Resource security controls are closely tied to the part of the security policy that defines who should have access to what information resources. A hacker might be good enough to get through your company firewalls and sift his way through to your system, but if they do not have explicit access to your database, the hacker cannot compromise your information assets.</list_item>
</unordered_list> </unordered_list>
<text><location><page_6><loc_22><loc_48><loc_87><loc_51></location>With your eyes now open to the importance of securing information assets, the rest of this chapter reviews the methods that are available for securing database resources on IBM i.</text> <text><location><page_6><loc_22><loc_48><loc_87><loc_51></location>With your eyes now open to the importance of securing information assets, the rest of this chapter reviews the methods that are available for securing database resources on IBM i.</text>
<section_header_level_1><location><page_6><loc_11><loc_43><loc_53><loc_45></location>1.2 Current state of IBM i security</section_header_level_1> <section_header_level_1><location><page_6><loc_11><loc_43><loc_53><loc_45></location>1.2 Current state of IBM i security</section_header_level_1>
@@ -142,8 +143,8 @@
<location><page_7><loc_22><loc_13><loc_89><loc_53></location> <location><page_7><loc_22><loc_13><loc_89><loc_53></location>
<caption>Figure 1-2 Existing row and column controls</caption> <caption>Figure 1-2 Existing row and column controls</caption>
</figure> </figure>
<section_header_level_1><location><page_8><loc_10><loc_89><loc_55><loc_91></location>2.1.6 Change Function Usage CL command</section_header_level_1> <section_header_level_1><location><page_8><loc_11><loc_89><loc_55><loc_91></location>2.1.6 Change Function Usage CL command</section_header_level_1>
<text><location><page_8><loc_22><loc_86><loc_89><loc_88></location>The following CL commands can be used to work with, display, or change function usage IDs:</text> <text><location><page_8><loc_22><loc_87><loc_89><loc_88></location>The following CL commands can be used to work with, display, or change function usage IDs:</text>
<unordered_list> <unordered_list>
<list_item><location><page_8><loc_22><loc_84><loc_49><loc_86></location>GLYPH<SM590000> Work Function Usage ( WRKFCNUSG )</list_item> <list_item><location><page_8><loc_22><loc_84><loc_49><loc_86></location>GLYPH<SM590000> Work Function Usage ( WRKFCNUSG )</list_item>
<list_item><location><page_8><loc_22><loc_83><loc_51><loc_84></location>GLYPH<SM590000> Change Function Usage ( CHGFCNUSG )</list_item> <list_item><location><page_8><loc_22><loc_83><loc_51><loc_84></location>GLYPH<SM590000> Change Function Usage ( CHGFCNUSG )</list_item>
@@ -151,7 +152,7 @@
</unordered_list> </unordered_list>
<text><location><page_8><loc_22><loc_77><loc_84><loc_80></location>For example, the following CHGFCNUSG command shows granting authorization to user HBEDOYA to administer and manage RCAC rules:</text> <text><location><page_8><loc_22><loc_77><loc_84><loc_80></location>For example, the following CHGFCNUSG command shows granting authorization to user HBEDOYA to administer and manage RCAC rules:</text>
<text><location><page_8><loc_22><loc_75><loc_72><loc_76></location>CHGFCNUSG FCNID(QIBM_DB_SECADM) USER(HBEDOYA) USAGE(*ALLOWED)</text> <text><location><page_8><loc_22><loc_75><loc_72><loc_76></location>CHGFCNUSG FCNID(QIBM_DB_SECADM) USER(HBEDOYA) USAGE(*ALLOWED)</text>
<section_header_level_1><location><page_8><loc_10><loc_71><loc_89><loc_72></location>2.1.7 Verifying function usage IDs for RCAC with the FUNCTION_USAGE view</section_header_level_1> <section_header_level_1><location><page_8><loc_11><loc_71><loc_89><loc_72></location>2.1.7 Verifying function usage IDs for RCAC with the FUNCTION_USAGE view</section_header_level_1>
<text><location><page_8><loc_22><loc_66><loc_85><loc_69></location>The FUNCTION_USAGE view contains function usage configuration details. Table 2-1 describes the columns in the FUNCTION_USAGE view.</text> <text><location><page_8><loc_22><loc_66><loc_85><loc_69></location>The FUNCTION_USAGE view contains function usage configuration details. Table 2-1 describes the columns in the FUNCTION_USAGE view.</text>
<table> <table>
<location><page_8><loc_22><loc_44><loc_89><loc_63></location> <location><page_8><loc_22><loc_44><loc_89><loc_63></location>
@@ -163,9 +164,19 @@
<row_4><col_0><body>USER_TYPE</col_0><col_1><body>VARCHAR(5)</col_1><col_2><body>Type of user profile: GLYPH<SM590000> USER: The user profile is a user. GLYPH<SM590000> GROUP: The user profile is a group.</col_2></row_4> <row_4><col_0><body>USER_TYPE</col_0><col_1><body>VARCHAR(5)</col_1><col_2><body>Type of user profile: GLYPH<SM590000> USER: The user profile is a user. GLYPH<SM590000> GROUP: The user profile is a group.</col_2></row_4>
</table> </table>
<text><location><page_8><loc_22><loc_40><loc_89><loc_43></location>To discover who has authorization to define and manage RCAC, you can use the query that is shown in Example 2-1.</text> <text><location><page_8><loc_22><loc_40><loc_89><loc_43></location>To discover who has authorization to define and manage RCAC, you can use the query that is shown in Example 2-1.</text>
<paragraph><location><page_8><loc_22><loc_37><loc_76><loc_39></location>Example 2-1 Query to determine who has authority to define and manage RCAC</paragraph> <paragraph><location><page_8><loc_22><loc_38><loc_76><loc_39></location>Example 2-1 Query to determine who has authority to define and manage RCAC</paragraph>
<text><location><page_8><loc_22><loc_26><loc_54><loc_36></location>SELECT function_id, user_name, usage, user_type FROM function_usage WHERE function_id='QIBM_DB_SECADM' ORDER BY user_name;</text> <text><location><page_8><loc_22><loc_35><loc_28><loc_36></location>SELECT</text>
<section_header_level_1><location><page_8><loc_10><loc_20><loc_41><loc_22></location>2.2 Separation of duties</section_header_level_1> <text><location><page_8><loc_30><loc_35><loc_41><loc_36></location>function_id,</text>
<text><location><page_8><loc_27><loc_34><loc_39><loc_35></location>user_name,</text>
<text><location><page_8><loc_28><loc_32><loc_36><loc_33></location>usage,</text>
<text><location><page_8><loc_27><loc_31><loc_39><loc_32></location>user_type</text>
<text><location><page_8><loc_22><loc_29><loc_26><loc_30></location>FROM</text>
<text><location><page_8><loc_29><loc_29><loc_43><loc_30></location>function_usage</text>
<text><location><page_8><loc_22><loc_28><loc_27><loc_29></location>WHERE</text>
<text><location><page_8><loc_29><loc_28><loc_54><loc_29></location>function_id=QIBM_DB_SECADM</text>
<text><location><page_8><loc_22><loc_26><loc_29><loc_27></location>ORDER BY</text>
<text><location><page_8><loc_31><loc_26><loc_39><loc_27></location>user_name;</text>
<section_header_level_1><location><page_8><loc_11><loc_20><loc_41><loc_22></location>2.2 Separation of duties</section_header_level_1>
<text><location><page_8><loc_22><loc_10><loc_89><loc_18></location>Separation of duties helps businesses comply with industry regulations or organizational requirements and simplifies the management of authorities. Separation of duties is commonly used to prevent fraudulent activities or errors by a single person. It provides the ability for administrative functions to be divided across individuals without overlapping responsibilities, so that one user does not possess unlimited authority, such as with the *ALLOBJ authority.</text> <text><location><page_8><loc_22><loc_10><loc_89><loc_18></location>Separation of duties helps businesses comply with industry regulations or organizational requirements and simplifies the management of authorities. Separation of duties is commonly used to prevent fraudulent activities or errors by a single person. It provides the ability for administrative functions to be divided across individuals without overlapping responsibilities, so that one user does not possess unlimited authority, such as with the *ALLOBJ authority.</text>
<text><location><page_9><loc_22><loc_82><loc_89><loc_91></location>For example, assume that a business has assigned the duty to manage security on IBM i to Theresa. Before release IBM i 7.2, to grant privileges, Theresa had to have the same privileges Theresa was granting to others. Therefore, to grant *USE privileges to the PAYROLL table, Theresa had to have *OBJMGT and *USE authority (or a higher level of authority, such as *ALLOBJ). This requirement allowed Theresa to access the data in the PAYROLL table even though Theresa's job description was only to manage its security.</text> <text><location><page_9><loc_22><loc_82><loc_89><loc_91></location>For example, assume that a business has assigned the duty to manage security on IBM i to Theresa. Before release IBM i 7.2, to grant privileges, Theresa had to have the same privileges Theresa was granting to others. Therefore, to grant *USE privileges to the PAYROLL table, Theresa had to have *OBJMGT and *USE authority (or a higher level of authority, such as *ALLOBJ). This requirement allowed Theresa to access the data in the PAYROLL table even though Theresa's job description was only to manage its security.</text>
<text><location><page_9><loc_22><loc_75><loc_89><loc_81></location>In IBM i 7.2, the QIBM_DB_SECADM function usage grants authorities, revokes authorities, changes ownership, or changes the primary group without giving access to the object or, in the case of a database table, to the data that is in the table or allowing other operations on the table.</text> <text><location><page_9><loc_22><loc_75><loc_89><loc_81></location>In IBM i 7.2, the QIBM_DB_SECADM function usage grants authorities, revokes authorities, changes ownership, or changes the primary group without giving access to the object or, in the case of a database table, to the data that is in the table or allowing other operations on the table.</text>
@@ -194,7 +205,7 @@
<location><page_10><loc_22><loc_48><loc_89><loc_86></location> <location><page_10><loc_22><loc_48><loc_89><loc_86></location>
<caption>The SQL CREATE PERMISSION statement that is shown in Figure 3-1 is used to define and initially enable or disable the row access rules.Figure 3-1 CREATE PERMISSION SQL statement</caption> <caption>The SQL CREATE PERMISSION statement that is shown in Figure 3-1 is used to define and initially enable or disable the row access rules.Figure 3-1 CREATE PERMISSION SQL statement</caption>
</figure> </figure>
<section_header_level_1><location><page_10><loc_22><loc_43><loc_35><loc_45></location>Column mask</section_header_level_1> <section_header_level_1><location><page_10><loc_22><loc_43><loc_35><loc_44></location>Column mask</section_header_level_1>
<text><location><page_10><loc_22><loc_37><loc_89><loc_43></location>A column mask is a database object that manifests a column value access control rule for a specific column in a specific table. It uses a CASE expression that describes what you see when you access the column. For example, a teller can see only the last four digits of a tax identification number.</text> <text><location><page_10><loc_22><loc_37><loc_89><loc_43></location>A column mask is a database object that manifests a column value access control rule for a specific column in a specific table. It uses a CASE expression that describes what you see when you access the column. For example, a teller can see only the last four digits of a tax identification number.</text>
<paragraph><location><page_11><loc_22><loc_90><loc_67><loc_91></location>Table 3-1 summarizes these special registers and their values.</paragraph> <paragraph><location><page_11><loc_22><loc_90><loc_67><loc_91></location>Table 3-1 summarizes these special registers and their values.</paragraph>
<table> <table>
@@ -217,9 +228,9 @@
<location><page_11><loc_22><loc_25><loc_49><loc_51></location> <location><page_11><loc_22><loc_25><loc_49><loc_51></location>
<caption>Figure 3-5 Special registers and adopted authority</caption> <caption>Figure 3-5 Special registers and adopted authority</caption>
</figure> </figure>
<section_header_level_1><location><page_11><loc_10><loc_19><loc_40><loc_21></location>3.2.2 Built-in global variables</section_header_level_1> <section_header_level_1><location><page_11><loc_11><loc_20><loc_40><loc_21></location>3.2.2 Built-in global variables</section_header_level_1>
<text><location><page_11><loc_22><loc_15><loc_85><loc_18></location>Built-in global variables are provided with the database manager and are used in SQL statements to retrieve scalar values that are associated with the variables.</text> <text><location><page_11><loc_22><loc_15><loc_85><loc_18></location>Built-in global variables are provided with the database manager and are used in SQL statements to retrieve scalar values that are associated with the variables.</text>
<text><location><page_11><loc_22><loc_9><loc_87><loc_14></location>IBM DB2 for i supports nine different built-in global variables that are read only and maintained by the system. These global variables can be used to identify attributes of the database connection and used as part of the RCAC logic.</text> <text><location><page_11><loc_22><loc_9><loc_87><loc_13></location>IBM DB2 for i supports nine different built-in global variables that are read only and maintained by the system. These global variables can be used to identify attributes of the database connection and used as part of the RCAC logic.</text>
<text><location><page_12><loc_22><loc_90><loc_56><loc_91></location>Table 3-2 lists the nine built-in global variables.</text> <text><location><page_12><loc_22><loc_90><loc_56><loc_91></location>Table 3-2 lists the nine built-in global variables.</text>
<table> <table>
<location><page_12><loc_10><loc_63><loc_90><loc_87></location> <location><page_12><loc_10><loc_63><loc_90><loc_87></location>
@@ -235,28 +246,29 @@
<row_8><col_0><body>ROUTINE_SPECIFIC_NAME</col_0><col_1><body>VARCHAR(128)</col_1><col_2><body>Name of the currently running routine</col_2></row_8> <row_8><col_0><body>ROUTINE_SPECIFIC_NAME</col_0><col_1><body>VARCHAR(128)</col_1><col_2><body>Name of the currently running routine</col_2></row_8>
<row_9><col_0><body>ROUTINE_TYPE</col_0><col_1><body>CHAR(1)</col_1><col_2><body>Type of the currently running routine</col_2></row_9> <row_9><col_0><body>ROUTINE_TYPE</col_0><col_1><body>CHAR(1)</col_1><col_2><body>Type of the currently running routine</col_2></row_9>
</table> </table>
<section_header_level_1><location><page_12><loc_11><loc_57><loc_63><loc_60></location>3.3 VERIFY_GROUP_FOR_USER function</section_header_level_1> <section_header_level_1><location><page_12><loc_11><loc_57><loc_63><loc_59></location>3.3 VERIFY_GROUP_FOR_USER function</section_header_level_1>
<text><location><page_12><loc_22><loc_45><loc_89><loc_55></location>The VERIFY_GROUP_FOR_USER function was added in IBM i 7.2. Although it is primarily intended for use with RCAC permissions and masks, it can be used in other SQL statements. The first parameter must be one of these three special registers: SESSION_USER, USER, or CURRENT_USER. The second and subsequent parameters are a list of user or group profiles. Each of these values must be 1 - 10 characters in length. These values are not validated for their existence, which means that you can specify the names of user profiles that do not exist without receiving any kind of error.</text> <text><location><page_12><loc_22><loc_45><loc_89><loc_55></location>The VERIFY_GROUP_FOR_USER function was added in IBM i 7.2. Although it is primarily intended for use with RCAC permissions and masks, it can be used in other SQL statements. The first parameter must be one of these three special registers: SESSION_USER, USER, or CURRENT_USER. The second and subsequent parameters are a list of user or group profiles. Each of these values must be 1 - 10 characters in length. These values are not validated for their existence, which means that you can specify the names of user profiles that do not exist without receiving any kind of error.</text>
<text><location><page_12><loc_22><loc_39><loc_89><loc_44></location>If a special register value is in the list of user profiles or it is a member of a group profile included in the list, the function returns a long integer value of 1. Otherwise, it returns a value of 0. It never returns the null value.</text> <text><location><page_12><loc_22><loc_39><loc_89><loc_43></location>If a special register value is in the list of user profiles or it is a member of a group profile included in the list, the function returns a long integer value of 1. Otherwise, it returns a value of 0. It never returns the null value.</text>
<text><location><page_12><loc_22><loc_36><loc_75><loc_38></location>Here is an example of using the VERIFY_GROUP_FOR_USER function:</text> <text><location><page_12><loc_22><loc_36><loc_75><loc_38></location>Here is an example of using the VERIFY_GROUP_FOR_USER function:</text>
<unordered_list> <unordered_list>
<list_item><location><page_12><loc_22><loc_34><loc_66><loc_36></location>1. There are user profiles for MGR, JANE, JUDY, and TONY.</list_item> <list_item><location><page_12><loc_22><loc_34><loc_66><loc_35></location>1. There are user profiles for MGR, JANE, JUDY, and TONY.</list_item>
<list_item><location><page_12><loc_22><loc_32><loc_65><loc_33></location>2. The user profile JANE specifies a group profile of MGR.</list_item> <list_item><location><page_12><loc_22><loc_32><loc_65><loc_33></location>2. The user profile JANE specifies a group profile of MGR.</list_item>
<list_item><location><page_12><loc_22><loc_28><loc_88><loc_31></location>3. If a user is connected to the server using user profile JANE, all of the following function invocations return a value of 1:</list_item> <list_item><location><page_12><loc_22><loc_28><loc_88><loc_31></location>3. If a user is connected to the server using user profile JANE, all of the following function invocations return a value of 1:</list_item>
</unordered_list> </unordered_list>
<code><location><page_12><loc_24><loc_19><loc_74><loc_27></location>VERIFY_GROUP_FOR_USER (CURRENT_USER, 'MGR') VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR') VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR', 'STEVE') The following function invocation returns a value of 0: VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JUDY', 'TONY')</code> <code><location><page_12><loc_25><loc_19><loc_74><loc_27></location>VERIFY_GROUP_FOR_USER (CURRENT_USER, 'MGR') VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR') VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JANE', 'MGR', 'STEVE') The following function invocation returns a value of 0: VERIFY_GROUP_FOR_USER (CURRENT_USER, 'JUDY', 'TONY')</code>
<text><location><page_13><loc_22><loc_88><loc_27><loc_91></location>RETURN CASE</text> <text><location><page_13><loc_22><loc_90><loc_27><loc_91></location>RETURN</text>
<text><location><page_13><loc_22><loc_88><loc_26><loc_89></location>CASE</text>
<code><location><page_13><loc_22><loc_67><loc_85><loc_88></location>WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'HR', 'EMP' ) = 1 THEN EMPLOYEES . DATE_OF_BIRTH WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER = EMPLOYEES . USER_ID THEN EMPLOYEES . DATE_OF_BIRTH WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER <> EMPLOYEES . USER_ID THEN ( 9999 || '-' || MONTH ( EMPLOYEES . DATE_OF_BIRTH ) || '-' || DAY (EMPLOYEES.DATE_OF_BIRTH )) ELSE NULL END ENABLE ;</code> <code><location><page_13><loc_22><loc_67><loc_85><loc_88></location>WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'HR', 'EMP' ) = 1 THEN EMPLOYEES . DATE_OF_BIRTH WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER = EMPLOYEES . USER_ID THEN EMPLOYEES . DATE_OF_BIRTH WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER <> EMPLOYEES . USER_ID THEN ( 9999 || '-' || MONTH ( EMPLOYEES . DATE_OF_BIRTH ) || '-' || DAY (EMPLOYEES.DATE_OF_BIRTH )) ELSE NULL END ENABLE ;</code>
<unordered_list>
<list_item><location><page_13><loc_22><loc_63><loc_89><loc_65></location>2. The other column to mask in this example is the TAX_ID information. In this example, the rules to enforce include the following ones:</list_item>
<list_item><location><page_13><loc_25><loc_60><loc_77><loc_62></location>-Human Resources can see the unmasked TAX_ID of the employees.</list_item>
<list_item><location><page_13><loc_25><loc_58><loc_66><loc_59></location>-Employees can see only their own unmasked TAX_ID.</list_item>
<list_item><location><page_13><loc_25><loc_55><loc_89><loc_57></location>-Managers see a masked version of TAX_ID with the first five characters replaced with the X character (for example, XXX-XX-1234).</list_item>
<list_item><location><page_13><loc_25><loc_52><loc_87><loc_54></location>-Any other person sees the entire TAX_ID as masked, for example, XXX-XX-XXXX.</list_item>
<list_item><location><page_13><loc_25><loc_50><loc_87><loc_51></location>To implement this column mask, run the SQL statement that is shown in Example 3-9.</list_item>
</unordered_list>
<paragraph><location><page_13><loc_22><loc_48><loc_58><loc_49></location>Example 3-9 Creating a mask on the TAX_ID column</paragraph>
<code><location><page_13><loc_22><loc_14><loc_86><loc_47></location>CREATE MASK HR_SCHEMA.MASK_TAX_ID_ON_EMPLOYEES
ON HR_SCHEMA.EMPLOYEES AS EMPLOYEES
FOR COLUMN TAX_ID
RETURN CASE
WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'HR' ) = 1
   THEN EMPLOYEES . TAX_ID
WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER = EMPLOYEES . USER_ID
   THEN EMPLOYEES . TAX_ID
WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'MGR' ) = 1 AND SESSION_USER <> EMPLOYEES . USER_ID
   THEN ( 'XXX-XX-' CONCAT QSYS2 . SUBSTR ( EMPLOYEES . TAX_ID , 8 , 4 ) )
WHEN VERIFY_GROUP_FOR_USER ( SESSION_USER , 'EMP' ) = 1
   THEN EMPLOYEES . TAX_ID
ELSE 'XXX-XX-XXXX'
END ENABLE ;</code>
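<text>To make the mask's effect concrete, the sketch below shows what a simple query might return for each class of user. The EMPLOYEE_ID column and the literal TAX_ID value 123-45-6789 are invented for illustration; the row permission from section 3.6.4 is what limits employees to their own rows.</text>
<code>-- Hypothetical query; the result depends on the group profile of the session user.
SELECT EMPLOYEE_ID, TAX_ID FROM HR_SCHEMA.EMPLOYEES;
-- HR member:                           123-45-6789  (unmasked)
-- Manager looking at another's row:    XXX-XX-6789  (first five characters masked)
-- Employee looking at their own row:   123-45-6789  (unmasked)
-- Any other user:                      XXX-XX-XXXX  (fully masked)</code>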
<unordered_list>
<list_item><location><page_14><loc_22><loc_90><loc_74><loc_91></location>3. Figure 3-10 shows the masks that are created in the HR_SCHEMA.</list_item>
</unordered_list>
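<text>Besides System i Navigator, the same information is available through SQL. This sketch assumes the QSYS2.SYSCONTROLS catalog view that IBM i 7.2 documents for RCAC objects; the filter column names are assumptions as well.</text>
<code>-- List the row permission and column masks defined over EMPLOYEES
-- (view and column names assumed from the IBM i catalog documentation).
SELECT *
FROM QSYS2.SYSCONTROLS
WHERE TABLE_SCHEMA = 'HR_SCHEMA' AND TABLE_NAME = 'EMPLOYEES';</code>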
@ -264,7 +276,7 @@
<location><page_14><loc_10><loc_79><loc_89><loc_88></location>
<caption>Figure 3-10 Column masks shown in System i Navigator</caption>
</figure>
<section_header_level_1><location><page_14><loc_11><loc_73><loc_33><loc_74></location>3.6.6 Activating RCAC</section_header_level_1>
<text><location><page_14><loc_22><loc_67><loc_89><loc_71></location>Now that the row permission and the two column masks are created, RCAC must be activated. The definitions are enabled (the last clause in each script), but enforcement does not begin until RCAC is activated on the table. To do so, complete the following steps:</text>
<unordered_list>
<list_item><location><page_14><loc_22><loc_65><loc_67><loc_66></location>1. Run the SQL statements that are shown in Example 3-10.</list_item>
@ -272,9 +284,12 @@
<section_header_level_1><location><page_14><loc_22><loc_62><loc_61><loc_63></location>Example 3-10 Activating RCAC on the EMPLOYEES table</section_header_level_1>
<code><location><page_14><loc_22><loc_54><loc_62><loc_61></location>/* Active Row Access Control (permissions) */
/* Active Column Access Control (masks)    */
ALTER TABLE HR_SCHEMA.EMPLOYEES
ACTIVATE ROW ACCESS CONTROL
ACTIVATE COLUMN ACCESS CONTROL;</code>
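<text>For completeness: enforcement can be switched off again with the matching DEACTIVATE clauses. This is a sketch based on the standard ALTER TABLE syntax, not a listing from this paper; the permission and mask definitions stay cataloged and can be reactivated later.</text>
<code>-- Sketch: turn RCAC enforcement off without dropping the definitions.
ALTER TABLE HR_SCHEMA.EMPLOYEES
DEACTIVATE ROW ACCESS CONTROL
DEACTIVATE COLUMN ACCESS CONTROL;</code>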
<unordered_list>
<list_item><location><page_14><loc_22><loc_48><loc_88><loc_52></location>2. Look at the definition of the EMPLOYEES table, as shown in Figure 3-11. To do this, from the main navigation pane of System i Navigator, click Schemas → HR_SCHEMA → Tables, right-click the EMPLOYEES table, and click Definition.</list_item>
</unordered_list>
@ -296,10 +311,10 @@
<location><page_15><loc_11><loc_16><loc_83><loc_30></location>
<caption>Figure 4-69 Index advice with no RCAC</caption>
</figure>
<code><location><page_16><loc_11><loc_11><loc_82><loc_91></location>THEN C . CUSTOMER_TAX_ID
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'TELLER' ) = 1
   THEN ( 'XXX-XX-' CONCAT QSYS2 . SUBSTR ( C . CUSTOMER_TAX_ID , 8 , 4 ) )
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1
   THEN C . CUSTOMER_TAX_ID
ELSE 'XXX-XX-XXXX'
END ENABLE ;

CREATE MASK BANK_SCHEMA.MASK_DRIVERS_LICENSE_ON_CUSTOMERS
ON BANK_SCHEMA.CUSTOMERS AS C
FOR COLUMN CUSTOMER_DRIVERS_LICENSE_NUMBER
RETURN CASE
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1
   THEN C . CUSTOMER_DRIVERS_LICENSE_NUMBER
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'TELLER' ) = 1
   THEN C . CUSTOMER_DRIVERS_LICENSE_NUMBER
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1
   THEN C . CUSTOMER_DRIVERS_LICENSE_NUMBER
ELSE '*************'
END ENABLE ;

CREATE MASK BANK_SCHEMA.MASK_LOGIN_ID_ON_CUSTOMERS
ON BANK_SCHEMA.CUSTOMERS AS C
FOR COLUMN CUSTOMER_LOGIN_ID
RETURN CASE
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1
   THEN C . CUSTOMER_LOGIN_ID
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1
   THEN C . CUSTOMER_LOGIN_ID
ELSE '*****'
END ENABLE ;

CREATE MASK BANK_SCHEMA.MASK_SECURITY_QUESTION_ON_CUSTOMERS
ON BANK_SCHEMA.CUSTOMERS AS C
FOR COLUMN CUSTOMER_SECURITY_QUESTION
RETURN CASE
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1
   THEN C . CUSTOMER_SECURITY_QUESTION
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1
   THEN C . CUSTOMER_SECURITY_QUESTION
ELSE '*****'
END ENABLE ;

CREATE MASK BANK_SCHEMA.MASK_SECURITY_QUESTION_ANSWER_ON_CUSTOMERS
ON BANK_SCHEMA.CUSTOMERS AS C
FOR COLUMN CUSTOMER_SECURITY_QUESTION_ANSWER
RETURN CASE
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'ADMIN' ) = 1
   THEN C . CUSTOMER_SECURITY_QUESTION_ANSWER
WHEN QSYS2 . VERIFY_GROUP_FOR_USER ( SESSION_USER , 'CUSTOMER' ) = 1
   THEN C . CUSTOMER_SECURITY_QUESTION_ANSWER
ELSE '*****'
END ENABLE ;

ALTER TABLE BANK_SCHEMA.CUSTOMERS
ACTIVATE ROW ACCESS CONTROL
ACTIVATE COLUMN ACCESS CONTROL ;</code>
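<text>Note that the CASE branches in each mask are evaluated top to bottom, so the order of the group checks encodes precedence. As a hedged illustration of the login-ID mask above (the literal value is invented):</text>
<code>-- Hypothetical result of CUSTOMER_LOGIN_ID for each group profile:
SELECT CUSTOMER_LOGIN_ID FROM BANK_SCHEMA.CUSTOMERS;
-- ADMIN member:    JSMITH01  (unmasked; value invented)
-- CUSTOMER member: JSMITH01  (unmasked on the rows their row permission allows)
-- Any other user:  *****     (no branch matches, so the ELSE value is returned)</code>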
<text><location><page_18><loc_47><loc_94><loc_68><loc_96></location>Back cover</text>
<section_header_level_1><location><page_18><loc_4><loc_82><loc_73><loc_91></location>Row and Column Access Control Support in IBM DB2 for i</section_header_level_1>
<text><location><page_18><loc_4><loc_66><loc_21><loc_69></location>Implement roles and separation of duties</text>
<text><location><page_18><loc_4><loc_59><loc_20><loc_64></location>Leverage row permissions on the database</text>
<text><location><page_18><loc_4><loc_52><loc_20><loc_57></location>Protect columns by defining column masks</text>
<text><location><page_18><loc_25><loc_59><loc_68><loc_69></location>This IBM Redpaper publication provides information about the IBM i 7.2 feature of IBM DB2 for i Row and Column Access Control (RCAC). It offers a broad description of the function and advantages of controlling access to data in a comprehensive and transparent way. This publication helps you understand the capabilities of RCAC and provides examples of defining, creating, and implementing the row permissions and column masks in a relational database environment.</text>

@ -4,62 +4,56 @@ Front cover
## Row and Column Access Control Support in IBM DB2 for i
Implement roles and separation of duties
Leverage row permissions on the database
Protect columns by defining column masks
Jim Bainbridge Hernando Bedoya Rob Bestgen Mike Cain Dan Cruikshank Jim Denton Doug Mack Tom McKinley Kent Milligan
Redpaper
## Contents
| Notices | vii |
|-------------------------------------------------------------------------------|------|
| Trademarks | viii |
| DB2 for i Center of Excellence | ix |
| Preface | xi |
| Authors | xi |
| Now you can become a published author, too! | xiii |
| Comments welcome | xiii |
| Stay connected to IBM Redbooks | xiv |
| Chapter 1. Securing and protecting IBM DB2 data | 1 |
| 1.1 Security fundamentals | 2 |
| 1.2 Current state of IBM i security | 2 |
| 1.3 DB2 for i security controls | 3 |
| 1.3.1 Existing row and column control | 4 |
| 1.3.2 New controls: Row and Column Access Control | 5 |
| Chapter 2. Roles and separation of duties | 7 |
| 2.1 Roles | 8 |
| 2.1.1 DDM and DRDA application server access: QIBM\_DB\_DDMDRDA | 8 |
| 2.1.2 Toolbox application server access: QIBM\_DB\_ZDA | 8 |
| 2.1.3 Database Administrator function: QIBM\_DB\_SQLADM | 9 |
| 2.1.4 Database Information function: QIBM\_DB\_SYSMON | 9 |
| 2.1.5 Security Administrator function: QIBM\_DB\_SECADM | 9 |
| 2.1.6 Change Function Usage CL command | 10 |
| 2.1.7 Verifying function usage IDs for RCAC with the FUNCTION\_USAGE view | 10 |
| 2.2 Separation of duties | 10 |
| Chapter 3. Row and Column Access Control | 13 |
| 3.1 Explanation of RCAC and the concept of access control | 14 |
| 3.1.1 Row permission and column mask definitions | 14 |
| 3.1.2 Enabling and activating RCAC | 16 |
| 3.2 Special registers and built-in global variables | 18 |
| 3.2.1 Special registers | 18 |
| 3.2.2 Built-in global variables | 19 |
| 3.3 VERIFY\_GROUP\_FOR\_USER function | 20 |
| 3.4 Establishing and controlling accessibility by using the RCAC rule text | 21 |
| 3.5 SELECT, INSERT, and UPDATE behavior with RCAC | 22 |
| 3.6 Human resources example | 22 |
| 3.6.1 Assigning the QIBM\_DB\_SECADM function ID to the consultants | 23 |
| 3.6.2 Creating group profiles for the users and their roles | 23 |
| 3.6.3 Demonstrating data access without RCAC | 24 |
| 3.6.4 Defining and creating row permissions | 25 |
| 3.6.5 Defining and creating column masks | 26 |
| 3.6.6 Activating RCAC | 28 |
| 3.6.7 Demonstrating data access with RCAC | 29 |
| 3.6.8 Demonstrating data access with a view and RCAC | 32 |
DB2 for i Center of Excellence
@ -204,7 +198,27 @@ To discover who has authorization to define and manage RCAC, you can use the que
Example 2-1 Query to determine who has authority to define and manage RCAC
SELECT function\_id,
       user\_name,
       usage,
       user\_type
FROM function\_usage
WHERE function\_id = 'QIBM\_DB\_SECADM'
ORDER BY user\_name;
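If the query shows that a profile lacks the authority, function usage can be granted with the Change Function Usage CL command that section 2.1.6 describes. The sketch below drives it from SQL through the QSYS2.QCMDEXC procedure; the user profile name JSMITH is hypothetical.

```
-- Sketch: allow a (hypothetical) profile to use the Security Administrator function.
CALL QSYS2.QCMDEXC('CHGFCNUSG FCNID(QIBM_DB_SECADM) USER(JSMITH) USAGE(*ALLOWED)');
```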
## 2.2 Separation of duties
@ -307,7 +321,9 @@ Here is an example of using the VERIFY\_GROUP\_FOR\_USER function:
VERIFY\_GROUP\_FOR\_USER (CURRENT\_USER, 'MGR')
VERIFY\_GROUP\_FOR\_USER (CURRENT\_USER, 'JANE', 'MGR')
VERIFY\_GROUP\_FOR\_USER (CURRENT\_USER, 'JANE', 'MGR', 'STEVE')

The following function invocation returns a value of 0:

VERIFY\_GROUP\_FOR\_USER (CURRENT\_USER, 'JUDY', 'TONY')
```
RETURN
CASE
```
WHEN VERIFY\_GROUP\_FOR\_USER ( SESSION\_USER , 'HR', 'EMP' ) = 1 THEN EMPLOYEES . DATE\_OF\_BIRTH WHEN VERIFY\_GROUP\_FOR\_USER ( SESSION\_USER , 'MGR' ) = 1 AND SESSION\_USER = EMPLOYEES . USER\_ID THEN EMPLOYEES . DATE\_OF\_BIRTH WHEN VERIFY\_GROUP\_FOR\_USER ( SESSION\_USER , 'MGR' ) = 1 AND SESSION\_USER <> EMPLOYEES . USER\_ID THEN ( 9999 || '-' || MONTH ( EMPLOYEES . DATE\_OF\_BIRTH ) || '-' || DAY ( EMPLOYEES . DATE\_OF\_BIRTH )) ELSE NULL END ENABLE ;
@ -341,11 +357,16 @@ Now that you have created the row permission and the two column masks, RCAC must
## Example 3-10 Activating RCAC on the EMPLOYEES table
```
/* Active Row Access Control (permissions) */
/* Active Column Access Control (masks)    */
ALTER TABLE HR_SCHEMA.EMPLOYEES
ACTIVATE ROW ACCESS CONTROL
ACTIVATE COLUMN ACCESS CONTROL;
```
- 2. Look at the definition of the EMPLOYEES table, as shown in Figure 3-11. To do this, from the main navigation pane of System i Navigator, click Schemas → HR\_SCHEMA → Tables, right-click the EMPLOYEES table, and click Definition.
Figure 3-11 Selecting the EMPLOYEES table from System i Navigator
