NEWS TODAY
24/7 Trending News. Built for Humans & AI Agents.

IBM Launches Granite 4.0 3B Vision: Specialized AI Model for Enterprise Document Data Extraction


IBM has unveiled the Granite 4.0 3B Vision, a vision-language model (VLM) tailored for enterprise-level document data extraction. This release marks a shift from monolithic multimodal approaches to modular, task-specific AI systems focused on structured data accuracy. The model emphasizes converting complex visual elements like charts and tables into machine-readable formats such as HTML or CSV.

Modular Architecture with LoRA Adapters

The Granite 4.0 3B Vision is built as a Low-Rank Adaptation (LoRA) adapter containing approximately 0.5 billion parameters. It operates on top of the Granite 4.0 Micro base model, which has 3.5 billion parameters. In this dual-mode design, the base model handles text-only tasks on its own, and the vision pathway is activated only when an input actually contains images.
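
The core of this dual-mode design is the standard LoRA mechanism: a frozen base weight W plus a low-rank update B @ A that is applied only when the adapter is active. The toy matrices and function names below are illustrative, not the model's real configuration.

```python
# Minimal sketch of the LoRA idea behind the vision adapter: the base
# weight W stays frozen, and a low-rank update B @ A is added only when
# the adapter (here, "vision mode") is active. Sizes are toy values.

def matmul(X, Y):
    """Plain list-of-lists matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col))
             for col in zip(*Y)] for row in X]

def effective_weight(W, A, B, scale, vision_mode):
    """Return W for text-only inputs, W + scale * (B @ A) when vision is active."""
    if not vision_mode:
        return W
    delta = matmul(B, A)  # low-rank update; rank = number of rows in A
    return [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

# Toy 2x2 base weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]      # 1x2
B = [[0.5], [0.25]]   # 2x1

print(effective_weight(W, A, B, 1.0, vision_mode=False))  # base weight, unchanged
print(effective_weight(W, A, B, 1.0, vision_mode=True))   # base + low-rank update
```

Because the update is low-rank, the adapter adds only a small fraction of the base model's parameters, which is how a ~0.5B adapter can extend a 3.5B model.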

High-Resolution Visual Processing

The visual component leverages the google/siglip2-so400m-patch16-384 encoder. To preserve fine detail, input images are split into 384×384-pixel tiles and processed alongside a downscaled global view of the full image. This tiling ensures critical elements, such as small data points or subscripts in formulas, remain legible before reaching the language model.
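
The tiling step can be sketched as simple crop-box arithmetic: cover the page with a grid of 384×384 boxes, then append one global view. Actual resizing and encoding happen inside the SigLIP2 encoder and are out of scope here; the function names are illustrative.

```python
# Rough sketch of the high-resolution tiling step: split an input image
# into 384x384 crop boxes and keep one downscaled global view. Box math
# only; the real pipeline's padding and resizing details are assumptions.

from math import ceil

TILE = 384

def tile_boxes(width, height, tile=TILE):
    """Return (left, top, right, bottom) crop boxes covering the image."""
    cols, rows = ceil(width / tile), ceil(height / tile)
    boxes = []
    for r in range(rows):
        for c in range(cols):
            boxes.append((c * tile, r * tile,
                          min((c + 1) * tile, width),
                          min((r + 1) * tile, height)))
    return boxes

# A 1000x600 document page -> 3x2 grid of local tiles plus the global view.
boxes = tile_boxes(1000, 600)
print(len(boxes))                      # 6 local tiles
views = boxes + [(0, 0, 1000, 600)]    # global thumbnail appended last
print(len(views))                      # 7 views sent to the encoder
```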

DeepStack Integration for Layout Awareness

To align visual and textual information, IBM employs a modified DeepStack architecture. Visual tokens are injected into the language model at eight strategic points across its transformer layers. This approach enhances spatial understanding by routing semantic content to earlier layers and spatial details to later layers, improving accuracy in table and chart extraction.
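
The multi-layer injection idea can be sketched with a toy forward pass: instead of prepending all visual tokens at the input, groups of visual features are added to the hidden state at several depths. The evenly spaced schedule and scalar "hidden state" below are assumptions for illustration, not IBM's exact configuration.

```python
# Illustrative sketch of DeepStack-style injection: visual features are
# split into groups and merged into the hidden states at several
# transformer layers rather than only at the input embedding.

def injection_layers(num_layers, num_injections=8):
    """Spread the injection points evenly across the transformer stack."""
    step = num_layers / num_injections
    return [int(i * step) for i in range(num_injections)]

def forward(hidden, visual_groups, num_layers):
    """Toy forward pass: each layer adds 1; visual groups add at chosen layers."""
    points = injection_layers(num_layers, len(visual_groups))
    for layer in range(num_layers):
        if layer in points:
            hidden += visual_groups[points.index(layer)]  # inject features here
        hidden += 1  # stand-in for the transformer layer's own computation
    return hidden

print(injection_layers(32))  # [0, 4, 8, 12, 16, 20, 24, 28]
```

With eight injection points in a 32-layer stack, each quarter of the visual feature hierarchy reaches a different depth, which is the mechanism the article credits for better table and chart layout awareness.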

Specialized Training for Document Extraction

The model was trained on curated datasets focused on structured document tasks rather than general image-text pairs. Key components include the ChartNet dataset—a million-scale collection of charts—and a “code-guided” pipeline that links plotting code, rendered images, and underlying data tables. This training method enables the model to grasp the logical relationships between visual representations and their source data.
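
A code-guided training example can be pictured as a triple of plotting code, the rendered chart, and the ground-truth data table. The sketch below builds the (code, target) pair and elides the rendering step; the helper name and CSV target format are assumptions, not the actual pipeline.

```python
# Sketch of a "code-guided" training example: pair the code that draws a
# chart with the data table the model should emit when shown the rendered
# image. Rendering (e.g. via a plotting library) is elided here.

import csv, io

def make_example(labels, values):
    plot_code = (
        "import matplotlib.pyplot as plt\n"
        f"plt.bar({labels!r}, {values!r})\n"
        "plt.savefig('chart.png')\n"
    )
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["label", "value"])
    writer.writerows(zip(labels, values))
    # In the real pipeline the rendered image would join this pair,
    # giving the model aligned (image, code, table) supervision.
    return {"code": plot_code, "target_csv": buf.getvalue()}

ex = make_example(["Q1", "Q2", "Q3"], [10, 14, 9])
print(ex["target_csv"])
```

Because the code and the table come from the same source data, the supervision is exact by construction, which is what lets the model learn the mapping from pixels back to values.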

Performance Benchmarks

Granite 4.0 3B Vision has been evaluated against industry benchmarks such as PubTables-v2 and OmniDocBench. On the VAREX benchmark it achieved an 85.5% exact-match rate in zero-shot key-value pair (KVP) extraction. As of March 2026, it ranks third on the VAREX leaderboard among models with 2 to 4 billion parameters, strong performance relative to its size.
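
An exact-match score for KVP extraction is simple to state: a predicted field counts only if it equals the reference exactly. The light normalization below (whitespace and case) is my assumption, not the benchmark's official scorer.

```python
# Minimal sketch of an exact-match metric for key-value pair extraction:
# a field scores only if the predicted value equals the reference after
# light normalization (an assumption, not VAREX's official rules).

def exact_match_rate(predictions, references):
    """Fraction of reference fields whose predicted value matches exactly."""
    norm = lambda s: " ".join(str(s).split()).strip().lower()
    hits = sum(norm(predictions.get(k, "")) == norm(v)
               for k, v in references.items())
    return hits / len(references)

ref = {"invoice_no": "INV-001", "total": "$1,250.00", "date": "2026-03-01"}
pred = {"invoice_no": "INV-001", "total": "$1,250.00", "date": "2026-03-02"}
print(round(exact_match_rate(pred, ref), 3))  # 0.667 -- two of three fields match
```

The strictness is the point: near misses like a one-digit date error score zero, which is why exact match is a demanding measure for document extraction.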

Developer-Focused Features

The model is released under the Apache 2.0 license. It can be served with vLLM via a custom implementation and pairs with IBM's Docling tool for converting unstructured PDFs into structured JSON or HTML. Together, these make it straightforward to deploy in enterprise workflows that require document analysis and data extraction.

Written by Max

Covers AI news, agentic AI, LLMs, and tech developments. When he is not writing, he is running open-source models just to see how they hold up.