<div align="center">
<h1>ChartX & ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning</h1>
[[ Related Paper ]](https://arxiv.org/abs/2402.12185) [[ Website ]](https://unimodal4reasoning.github.io/DocGenome_page/) [[ Dataset (Google Drive)]](https://drive.google.com/file/d/1d6zyH3kIwgepTqR0fc67xzyUtblrvOIX/view) [[ Dataset (Hugging Face) ]](https://huggingface.co/datasets/U4R/ChartX/viewer)
[[ Models 🤗 (Hugging Face) ]](https://huggingface.co/U4R/ChartVLM-base)
</div>
# ChartX & ChartVLM
Many versatile Multi-modal Large Language Models (MLLMs) have emerged in rapid succession. However, their capacity to query information depicted in visual charts and to reason over the queried contents remains under-explored. To comprehensively and rigorously benchmark the chart-related abilities of off-the-shelf MLLMs, we construct ChartX, a multi-modal evaluation set covering 18 chart types, 7 chart tasks, 22 disciplinary topics, and high-quality chart data. In addition, we develop ChartVLM to offer a new perspective on handling multi-modal tasks that strongly depend on interpretable patterns, such as reasoning tasks over charts or geometric images. We evaluate the chart-related abilities of mainstream MLLMs and our ChartVLM on the proposed ChartX evaluation set. Extensive experiments demonstrate that ChartVLM surpasses both versatile and chart-specific large models, achieving results comparable to GPT-4V. We believe our study can pave the way for further exploration toward a more comprehensive chart evaluation set and more interpretable multi-modal models.
## Release
- **Structuring Chart-oriented Representation Metric (SCRM)**: You can refer to [Evaluation](https://github.com/UniModal4Reasoning/ChartVLM/blob/1dfda1372c888e98c197b5873dcc6e3aaa13cf39/eval/README.md?plain=1#L27) and [eval_SE_ChartX.py](https://github.com/UniModal4Reasoning/ChartVLM/blob/main/eval/eval_SE_ChartX.py) for evaluating the chart-related data extraction ability of VLMs. Please see [StructChart](https://arxiv.org/abs/2309.11268) for more technical details of the SCRM evaluation metric.
- [2024/2/21] 🔥 We have released the ChartX benchmark [data](https://drive.google.com/file/d/1d6zyH3kIwgepTqR0fc67xzyUtblrvOIX/view) to evaluate the chart-related capabilities of existing MLLMs. We divide the entire ChartX benchmark into 4,848 validation samples ([ChartX_annotation_val.json](https://drive.google.com/file/d/13jwSO8kaAnbPujECQK9x2QA_TXByzzYH/view?usp=sharing)) and 1,152 test samples ([ChartX_annotation_test.json](https://drive.google.com/file/d/1kOEi5Kca7WnFhBGyJlBIEtlgaIk004o0/view?usp=sharing)). Results reported in Tables 2 to 5 are evaluated on the test samples. The evaluation log is shown [here](eval/eval_result_SE_on_ChartX.log).
<div align=center>
<img src="assets/motivation.png" height="85%">
</div>
------------------------
<div align="center">
<h1>ChartX Evaluation Set<br></h1>
</div>
## Overall
We collected 48K multi-modal chart samples covering **22 topics**, **18 chart types**, and **7 tasks**. Each sample in this dataset includes four modalities: image, CSV, Python code, and text description.
<details>
<summary> 18 chart types:</summary>
General Chart Types = ['bar chart', 'bar_num chart', 'line chart', 'line_num chart', 'pie chart'],
Fine-grained Chart Types = ['radar chart', 'histogram', 'box plot', 'treemap', 'bubble chart', 'area chart', '3D-bar chart', 'multi-axes', 'ring chart', 'rose chart'],
Domain-specific Chart Types = ['heatmap', 'candlestick chart', 'funnel chart']
</details>
<details>
<summary> 22 chart topics:</summary>
major_categories = [
"Business and Finance",
"Healthcare and Health",
"Science and Engineering",
"Social Media and the Web",
"Government and Public Policy",
"Education and Academics",
"Environment and Sustainability",
"Arts and Culture",
"Retail and E-commerce",
"Tourism and Hospitality",
"Human Resources and Employee Management",
"Agriculture and Food Production",
"Energy and Utilities",
"Transportation and Logistics",
"Real Estate and Housing Market",
"Manufacturing and Production",
"Sports and Entertainment",
"Social Sciences and Humanities",
"Law and Legal Affairs",
"Technology and the Internet",
"Charity and Nonprofit Organizations",
"Food and Beverage Industry"
]
</details>
<details>
<summary> 7 chart tasks (employed eval metric in parentheses):</summary>
4 close-ended = ['Structural Extraction (SCRM)', 'Chart Type (EM)', 'Chart Title (EM)', 'QA (GPT-acc)']
3 open-ended = ['Description (GPT-score)', 'Summarization (GPT-score)', 'Redrawing code (GPT-score)']
</details>
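For the close-ended Chart Type and Chart Title tasks, EM denotes exact match. Below is a minimal sketch of one plausible normalized exact-match check; the official scoring lives in the `eval/` scripts and may normalize differently:

```python
def exact_match(prediction: str, reference: str) -> float:
    """Case- and whitespace-insensitive exact match, scored as 1.0 or 0.0."""
    norm = lambda s: " ".join(s.strip().lower().split())
    return float(norm(prediction) == norm(reference))

# Example: a chart-title prediction scored against the ground truth.
print(exact_match("Monthly  Sales", "monthly sales"))  # 1.0
```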
## ChartX Download
<details>
<summary> Data Download</summary>
Please download the official [ChartX Evaluation Set](https://drive.google.com/file/d/1d6zyH3kIwgepTqR0fc67xzyUtblrvOIX/view?usp=sharing) dataset and organize the downloaded files as follows:
```
ChartX
├── 3D-Bar
│   ├── code
│   ├── csv
│   ├── png
│   └── txt
├── area_chart
│   ├── code
│   ├── csv
│   ├── png
│   └── txt
...
└── rose
    ├── code
    ├── csv
    ├── png
    └── txt
```
</details>
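Given the layout above, the four modalities of a sample can be loaded by pairing files across the `code/`, `csv/`, `png/`, and `txt/` subfolders. A minimal loading sketch, assuming `.py` code files and matching file stems across subfolders (inspect the downloaded folders for the actual naming scheme):

```python
from pathlib import Path

def load_sample(root: str, chart_type: str, stem: str) -> dict:
    """Load the four modalities of one ChartX sample."""
    base = Path(root) / chart_type
    return {
        "image": base / "png" / f"{stem}.png",               # chart image (path only)
        "csv": (base / "csv" / f"{stem}.csv").read_text(),   # underlying table
        "code": (base / "code" / f"{stem}.py").read_text(),  # plotting code
        "text": (base / "txt" / f"{stem}.txt").read_text(),  # description
    }

# Hypothetical stem for illustration:
# sample = load_sample("ChartX", "area_chart", "area_chart_1")
```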
<details>
<summary> Visualization of Data Distribution</summary>
<div align=center>
<img src="assets/tsne.png" height="85%">
</div>
</details>
------------------------
<div align="center">
<h1>ChartVLM<br></h1>
</div>
## ChartVLM Overall:
- **(1)** To enhance the interpretability of the chart model on cognition tasks (e.g., answering questions based on a chart image), ChartVLM first performs the base perception task (e.g., structural extraction from the given chart image into predicted CSV data) and then completes the other cognition tasks (e.g., chart redrawing, description, summarization, and QA) based on the extracted structural data.
- **(2)** To select the task that the user expects to perform according to the given prompt, an instruction adapter is designed, which can cover a variety of user instructions, as illustrated in the figure below; a minimal pseudocode sketch of this cascaded pipeline follows the figure.
<div align=center>
<img src="assets/chartvlm.png" height="85%">
</div>
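Below is a minimal, hypothetical sketch of the cascaded pipeline described in (1) and (2); the class and method names are placeholders, and the actual implementations live under `adapter/`, `base_decoder/`, and `auxiliary_decoder/`:

```python
def chartvlm_infer(image, prompt, adapter, base_decoder, aux_decoder):
    """Hypothetical cascaded inference: perception first, then cognition."""
    # 1. The instruction adapter classifies the user prompt into a task.
    task = adapter.classify(prompt)  # e.g. "SE", "QA", "description", ...

    # 2. The base decoder always performs the perception step first:
    #    structural extraction from the chart image into CSV-like data.
    csv_data = base_decoder.extract_csv(image)
    if task == "SE":
        return csv_data

    # 3. Cognition tasks (QA, description, summarization, redrawing) are
    #    answered by the auxiliary decoder conditioned on the extracted
    #    structural data rather than on raw pixels, which keeps the
    #    intermediate representation inspectable.
    return aux_decoder.generate(task=task, csv=csv_data, prompt=prompt)
```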
## Installation for ChartVLM
* Clone this repository.
```shell
git clone https://github.com/UniModal4Reasoning/ChartVLM.git
```
* Install the python dependent libraries.
```shell
pip install -r requirements.txt
```
## Pre-trained Checkpoints of ChartVLM
Please refer to Hugging Face to download our pre-trained weights for [ChartVLM-large](https://huggingface.co/U4R/ChartVLM-large) and [ChartVLM-base](https://huggingface.co/U4R/ChartVLM-base).
<details>
<summary>You need to organize the downloaded checkpoints as follows:</summary>
```
ChartVLM-base (or your customized name)
├── instruction_adapter
│   ├── mlp_classifier.pth
│   └── vectorizer.pkl
├── base_decoder
│   ├── type_title
│   │   └── files of type_title base_decoder
│   └── files of base_decoder
└── auxiliary_decoder
    ├── base
    │   └── files of pretrained auxiliary_decoder
    └── files of auxiliary_decoder lora_weights
```
</details>
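One convenient way to fetch the checkpoints is `snapshot_download` from the `huggingface_hub` library; the repo IDs below come from the links above, while the local directory names are your choice:

```python
from huggingface_hub import snapshot_download

# Download the ChartVLM-base checkpoint into a local directory that
# matches the layout shown above.
snapshot_download(repo_id="U4R/ChartVLM-base", local_dir="ChartVLM-base")

# For the larger variant:
# snapshot_download(repo_id="U4R/ChartVLM-large", local_dir="ChartVLM-large")
```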
## Training ChartVLM
Please refer to [instruction adapter](adapter/README.md), [base decoder](base_decoder/README.md), and [auxiliary decoder](auxiliary_decoder/README.md) for more details of model training.
## Evaluation
Please refer to [eval](eval/README.md) for details on evaluating all tasks.
<details>
<summary> Evaluation Results for Structural Extraction (SE) task</summary>
<div align=center>
<img src="assets/radar_se.png" height="650">
</div>
</details>
<details>
<summary> Evaluation Results for QA task</summary>
<div align=center>
<img src="assets/radar_qa.png" height="650">
</div>
</details>
<details>
<summary> Evaluation Results for Description task</summary>
<div align=center>
<img src="assets/radar_desc.png" height="650">
</div>
</details>
<details>
<summary> Evaluation Results for Summarization task</summary>
<div align=center>
<img src="assets/radar_summ.png" height="650">
</div>
</details>
## Citation
If you find our work useful in your research, please consider citing ChartX & ChartVLM:
```bibtex
@article{xia2024chartx,
  title={ChartX \& ChartVLM: A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning},
  author={Xia, Renqiu and Zhang, Bo and Ye, Hancheng and Yan, Xiangchao and Liu, Qi and Zhou, Hongbin and Chen, Zijun and Dou, Min and Shi, Botian and Yan, Junchi and others},
  journal={arXiv preprint arXiv:2402.12185},
  year={2024}
}
```