
Table extraction from images in Computer Vision

Introduction

Table extraction from images helps turn pictures of tables into usable data. This saves time and avoids manual typing.

You have a photo of a printed report with tables and want to analyze the data.
You scanned a document with tables and need to convert it into a spreadsheet.
You want to extract tables from screenshots or PDFs that are saved as images.
You need to automate data entry from paper forms containing tables.
You want to digitize old books or papers with tabular data.
Syntax
1. Load the image containing the table.
2. Use a table detection model or algorithm to find table boundaries.
3. Extract the table cells by detecting lines or using OCR.
4. Convert the extracted cells into structured data like CSV or JSON.

Step 2 often uses deep learning models trained to detect tables.

OCR (Optical Character Recognition) reads text inside each cell.
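Step 4 can be sketched on its own: once the earlier steps have produced cells with positions and text (here a hypothetical hard-coded list, since the detection steps come later), group them into rows by their y-coordinate and write CSV. The 20-pixel row tolerance is an assumption that depends on the image.

```python
import csv
import io

# Hypothetical cells as produced by steps 2-3: (x, y, text)
cells = [
    (10, 12, 'Name'), (120, 11, 'Age'),
    (10, 52, 'Ana'),  (120, 53, '34'),
]

# Group cells into rows: cells whose y values round to the same
# bucket belong to the same row (20 px tolerance, an assumption)
rows = {}
for x, y, text in cells:
    rows.setdefault(round(y / 20), []).append((x, text))

# Sort rows top to bottom and cells left to right, then write CSV
buf = io.StringIO()
writer = csv.writer(buf)
for key in sorted(rows):
    writer.writerow([text for _, text in sorted(rows[key])])

print(buf.getvalue())
```

The same row/column ordering works for JSON output; only the final writer changes.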

Examples
Basic example using OpenCV and pytesseract to start table extraction.
import cv2
import pytesseract

# Load the image and convert to grayscale for processing
image = cv2.imread('table_image.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Use OpenCV morphology to detect table lines (full version below)
# Use pytesseract to extract text from the image or each cell crop
text = pytesseract.image_to_string(gray)
Using PaddleOCR, which detects and recognizes text out of the box; its PP-Structure module adds table recognition.
from paddleocr import PaddleOCR

ocr = PaddleOCR(lang='en')           # loads detection + recognition models
result = ocr.ocr('table_image.png')  # detected text boxes with recognized text
# For full table structure, use PaddleOCR's PP-Structure module
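In recent PaddleOCR versions, `ocr.ocr` returns a list with one entry per page, and each entry is a list of `[box, (text, confidence)]` pairs. A small helper to pull out the recognized strings, shown on a hypothetical hard-coded result so it runs without the model installed:

```python
def extract_texts(result, min_conf=0.5):
    """Collect recognized strings above a confidence threshold."""
    texts = []
    for line in result[0]:           # result[0] = first (only) page
        box, (text, conf) = line     # box = 4 corner points of the region
        if conf >= min_conf:
            texts.append(text)
    return texts

# Hypothetical result mimicking PaddleOCR's output structure
fake_result = [[
    [[[10, 10], [80, 10], [80, 30], [10, 30]], ('Name', 0.98)],
    [[[100, 10], [150, 10], [150, 30], [100, 30]], ('Age', 0.95)],
]]
print(extract_texts(fake_result))  # ['Name', 'Age']
```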
Sample Model

This code loads an image, detects table lines using image processing, finds cells, and extracts text using OCR.

It prints each cell's position and text.

import cv2
import numpy as np
import pytesseract

# Load image
image = cv2.imread('table_sample.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Threshold to get binary image
_, binary = cv2.threshold(gray, 150, 255, cv2.THRESH_BINARY_INV)

# Detect horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40,1))
horizontal_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, horizontal_kernel)

# Detect vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,40))
vertical_lines = cv2.morphologyEx(binary, cv2.MORPH_OPEN, vertical_kernel)

# Combine lines to get table mask
table_mask = cv2.add(horizontal_lines, vertical_lines)

# Find contours; with RETR_TREE the inner contours of the
# line mask correspond to individual table cells
contours, _ = cv2.findContours(table_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

cells = []
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    if w > 20 and h > 20:  # filter out thin line fragments and noise
        cell_img = image[y:y+h, x:x+w]
        # --psm 7 tells Tesseract to treat the crop as a single text line
        text = pytesseract.image_to_string(cell_img, config='--psm 7').strip()
        cells.append({'position': (x, y, w, h), 'text': text})

# Sort cells by position (top to bottom, left to right)
cells_sorted = sorted(cells, key=lambda c: (c['position'][1], c['position'][0]))

# Print extracted text from cells
for cell in cells_sorted:
    print(f"Cell at {cell['position']}: '{cell['text']}'")
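One caveat with the flat (y, x) sort above: two cells in the same row whose top edges differ by a few pixels can end up out of order. Grouping cells into rows with a vertical tolerance is more robust; a sketch using hypothetical cell dicts in the same shape as the sample produces:

```python
def group_rows(cells, tol=10):
    """Group cells into rows: a cell joins the current row if its
    y-coordinate is within `tol` pixels of the row's first cell."""
    rows = []
    for cell in sorted(cells, key=lambda c: c['position'][1]):
        y = cell['position'][1]
        if rows and abs(rows[-1][0]['position'][1] - y) <= tol:
            rows[-1].append(cell)
        else:
            rows.append([cell])
    # Order cells within each row left to right
    return [sorted(r, key=lambda c: c['position'][0]) for r in rows]

# Hypothetical cells: tops of the first row differ by 3 px
cells = [
    {'position': (120, 13, 50, 30), 'text': 'Age'},
    {'position': (10, 10, 80, 30), 'text': 'Name'},
    {'position': (10, 60, 80, 30), 'text': 'Ana'},
]
for row in group_rows(cells):
    print([c['text'] for c in row])  # ['Name', 'Age'] then ['Ana']
```

The tolerance should be tuned to the image resolution; 10 px suits roughly 100 DPI scans.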
Important Notes

Good lighting and clear images improve extraction accuracy.

Complex tables with merged cells may need advanced models.

Preprocessing like noise removal helps OCR results.
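As a minimal sketch of the preprocessing idea, using only NumPy (a real pipeline would use cv2.medianBlur or cv2.fastNlMeansDenoising for noise removal): binarize against a fixed threshold and invert, so ink becomes white on black, which is the form the morphology steps above expect.

```python
import numpy as np

def binarize(gray, thresh=150):
    """Invert-threshold a grayscale image: pixels darker than
    `thresh` (ink) become 255, background becomes 0."""
    return np.where(gray < thresh, 255, 0).astype(np.uint8)

# Tiny synthetic "image": dark ink (40) on a light background (220)
gray = np.full((4, 4), 220, dtype=np.uint8)
gray[1:3, 1:3] = 40
binary = binarize(gray)
print(binary.sum() // 255)  # 4 ink pixels survive the threshold
```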

Summary

Table extraction turns images of tables into editable data.

It uses image processing to find table structure and OCR to read text.

This helps automate data entry and analysis from pictures or scans.