
πŸ† 185-Hard: A Scenario-Driven Benchmark for VLM-Supervised Image-to-3D Generation


📖 Dataset Summary

185-Hard is a scenario-driven benchmark designed for industrial-grade Image-to-3D generation.

To address the gap between traditional metrics and human judgment in Image-to-3D generation, we introduce a VLM-based pipeline that functions as a 'Senior 3D Art Supervisor' to assess geometry, texture, and image adherence. We present 185-Hard, a hierarchical dataset covering five application scenarios and fourteen generative capabilities, designed to rigorously test real-world performance. By validating this automated methodology, we aim to establish a new standard for 3D generation benchmarking and invite community participation.

✨ Core Features

  • 🎯 Scenario-Driven: The dataset is logically mapped via a Sankey flow to 5 core application scenarios (Gaming, E-commerce, ArchViz, Education, Toys).
  • 🔥 185-Hard: Focused on "stress-testing" models with 14 fine-grained generative capabilities, including transparent materials, intricate hollow structures, and multi-component assemblies.
  • 🤖 Robust Automated Evaluation: Designed for evaluation powered by Gemini-3-Pro with a $3 \times 3$ sampling strategy, decoupling geometry from texture to provide low-variance, highly credible Mean Opinion Scores (MOS).

Dataset Metadata

| Field | Description |
| --- | --- |
| `file_name` | The filename (or relative path) of the source image. |
| Level-1 Categories | Organizes 3D objects into 5 primary physical categories. |
| Level-2 Tags | Specific sub-categories (over 20 distinct types) nested under Level-1 categories. |
| Application Scenarios | Stratifies the dataset across 5 distinct application scenarios. |
| Generative Capabilities | Deconstructs the 3D generation task into 14 fine-grained technical capabilities. |
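As a usage sketch, the metadata fields above can be used to slice the benchmark by scenario or capability. The field names below mirror the metadata table; the example records and their values are hypothetical placeholders, not actual dataset entries.

```python
# Hypothetical metadata records mirroring the fields documented above.
records = [
    {"file_name": "img_001.png",
     "level1_category": "Furniture",                      # hypothetical value
     "level2_tag": "Chair",                               # hypothetical value
     "application_scenario": "ArchViz & Interior Design",
     "generative_capability": "Hollow Structures"},       # hypothetical value
    {"file_name": "img_002.png",
     "level1_category": "Creature",
     "level2_tag": "Dragon",
     "application_scenario": "Game & Entertainment",
     "generative_capability": "Multi-Component Assembly"},
]

def select(rows, scenario=None, capability=None):
    """Return rows matching the given scenario and/or capability tags."""
    out = []
    for r in rows:
        if scenario and r["application_scenario"] != scenario:
            continue
        if capability and r["generative_capability"] != capability:
            continue
        out.append(r)
    return out

archviz = select(records, scenario="ArchViz & Interior Design")
print([r["file_name"] for r in archviz])  # -> ['img_001.png']
```

This kind of stratified selection is how the scenario- and capability-level breakdowns in the tables below can be computed per subset.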

Column Descriptions

1. Physically Grounded Taxonomy:

*(Figure: Physically Grounded Taxonomy.)*

2. Application Scenarios:

| Scenario Tag | Technical Specifications & Focus |
| --- | --- |
| 1. Game & Entertainment | **Rendering & Rigging:** Optimizes for stylized NPCs and fantasy props with clean topology suitable for animation. |
| 2. E-commerce & Advertising | **High-Fidelity PBR:** Prioritizes Physically Based Rendering materials and structural precision for commercial product presentation and brand visibility. |
| 3. ArchViz & Interior Design | **Geometric Regularity:** Emphasizes straight lines, hollow/lattice structures, and logical assembly for furniture and architectural elements. |
| 4. Education, Culture & Science | **Anatomical Correctness:** Demands strict fidelity for biological models and cultural artifacts. |
| 5. Toys & 3D Printing | **Watertight Geometry:** Ensures physical stability and printable geometry (watertight meshes) for collectibles and manufacturing. |

3. Fine-Grained Generative Capabilities:

*(Figure: Generative Capabilities.)*

πŸ† Leaderboard

Rankings are based on automated MOS (1-5) assigned by Gemini-3-Pro. Scoring scale: 1 = Unusable, 2 = Defective, 3 = Mediocre, 4 = Usable, 5 = Production-Ready.

| Rank | Model | Image Adherence | Geometry Quality | Texture Quality | Overall |
| --- | --- | --- | --- | --- | --- |
| 🥇 | Hunyuan3D 3.0 | 3.34 | 3.21 | 2.94 | 3.16 |
| 🥈 | Tripo v3.0 | 3.35 | 3.15 | 2.89 | 3.13 |
| 🥉 | Seed3D 1.0 | 3.26 | 3.00 | 2.84 | 3.03 |
| 4 | Meshy 6.0 preview | 3.10 | 2.93 | 2.67 | 2.90 |
| 5 | Rodin Gen-2 | 2.98 | 2.82 | 3.03 | 2.80 |
| 6 | Step1x-3d (Open) | 2.91 | 2.65 | 2.54 | 2.70 |
| 7 | TRELLIS (Open) | 2.61 | 2.67 | 2.30 | 2.53 |

🛠️ Evaluation Pipeline

Our evaluation framework consists of three critical steps:

  1. Standardized Rendering:
    • $360^\circ$ turntable videos at $20^\circ$ elevation.
    • Split-Screen Visualization: The left panel shows the full render (PBR for commercial, Shaded for open-source), while the right panel displays the Normal Map to isolate geometric quality from texture masking.
  2. VLM Judge:
    • Gemini-3-Pro serves as the automated critic.
  3. Robustness Strategy:
    • $3 \times 3$ Sampling: Each prompt is generated 3 times, and each generation is evaluated 3 times. The final score is the arithmetic mean, minimizing variance and hallucination.
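The $3 \times 3$ aggregation step above can be sketched in a few lines. This is a minimal illustration of the arithmetic, not the actual evaluation harness; the scores shown are hypothetical.

```python
# Sketch of the 3x3 robustness strategy: each prompt yields 3 generations,
# each generation is judged 3 times by the VLM, and the final MOS is the
# arithmetic mean of all nine scores (1-5 scale).

def mos_3x3(scores):
    """scores: 3 generations x 3 judgments, each on the 1-5 MOS scale."""
    assert len(scores) == 3 and all(len(row) == 3 for row in scores)
    flat = [s for row in scores for s in row]
    return sum(flat) / len(flat)

# Hypothetical judgments for one prompt.
example = [
    [3, 4, 3],   # judgments for generation 1
    [4, 4, 3],   # judgments for generation 2
    [3, 3, 4],   # judgments for generation 3
]
print(round(mos_3x3(example), 2))  # -> 3.44
```

Averaging over both generation and judgment repetitions is what damps the two noise sources named above: sampling variance in the generator and occasional hallucinated judgments from the VLM critic.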

📧 Call for Participation

This work marks the initial release of our benchmark dataset. We invite researchers and developers in the Image-to-3D community to use this dataset to evaluate their models. By standardizing assessment on a comprehensive, industry-relevant benchmark, we aim to accelerate progress in high-fidelity 3D generation. We plan to host future evaluation challenges and leaderboard rankings for the Image-to-3D generation task.

Please contact us at: molodata@molodata.cn

📚 Citation

@misc{185hard2026molodata,
  author = {molodata},
  title = {185-Hard: A Scenario-Driven Benchmark for VLM-Supervised Image-to-3D Generation},
  year = {2026},
  url = {https://huggingface.co/datasets/molodata/185-hard-image-to-3D-eval-dataset},
}