diff --git a/README.md b/README.md
index 47ac6e54..699d023c 100644
--- a/README.md
+++ b/README.md
@@ -1,170 +1,171 @@
-![Genesis](imgs/big_text.png)
-
-![Teaser](imgs/teaser.png)
-
-[![PyPI - Version](https://img.shields.io/pypi/v/genesis-world)](https://pypi.org/project/genesis-world/)
-[![PyPI - Downloads](https://img.shields.io/pypi/dm/genesis-world)](https://pypi.org/project/genesis-world/)
-[![GitHub Issues](https://img.shields.io/github/issues/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/issues)
-[![GitHub Discussions](https://img.shields.io/github/discussions/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/discussions)
-
-[![README in English](https://img.shields.io/badge/English-d9d9d9)](./README.md)
-[![简体中文版自述文件](https://img.shields.io/badge/简体中文-d9d9d9)](./README_CN.md)
-[![日本語版 README](https://img.shields.io/badge/日本語-d9d9d9)](./README_JA.md)
-
-# Genesis
-## 🔥 News
-- [2024-12-25] Added a [docker](#docker) including support for the ray-tracing renderer
-- [2024-12-24] Added guidelines for [contributing to Genesis](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/CONTRIBUTING.md)
-
-## Table of Contents
-
-1. [What is Genesis?](#what-is-genesis)
-2. [Key Features](#key-features)
-3. [Quick Installation](#quick-installation)
-4. [Docker](#docker)
-5. [Documentation](#documentation)
-6. [Contributing to Genesis](#contributing-to-genesis)
-7. [Support](#support)
-8. [License and Acknowledgments](#license-and-acknowledgments)
-9. [Associated Papers](#associated-papers)
-10. [Citation](#citation)
-
-## What is Genesis?
-
-Genesis is a physics platform designed for general-purpose *Robotics/Embodied AI/Physical AI* applications. It is simultaneously multiple things:
-
-1. A **universal physics engine** re-built from the ground up, capable of simulating a wide range of materials and physical phenomena.
-2. A **lightweight**, **ultra-fast**, **pythonic**, and **user-friendly** robotics simulation platform.
-3. A powerful and fast **photo-realistic rendering system**.
-4. A **generative data engine** that transforms user-prompted natural language description into various modalities of data.
-
-Genesis aims to:
-
-- **Lower the barrier** to using physics simulations, making robotics research accessible to everyone. See our [mission statement](https://genesis-world.readthedocs.io/en/latest/user_guide/overview/mission.html).
-- **Unify diverse physics solvers** into a single framework to recreate the physical world with the highest fidelity.
-- **Automate data generation**, reducing human effort and letting the data flywheel spin on its own.
-
-Project Page:
-
-## Key Features
-
-- **Speed**: Over 43 million FPS when simulating a Franka robotic arm with a single RTX 4090 (430,000 times faster than real-time).
-- **Cross-platform**: Runs on Linux, macOS, Windows, and supports multiple compute backends (CPU, Nvidia/AMD GPUs, Apple Metal).
-- **Integration of diverse physics solvers**: Rigid body, MPM, SPH, FEM, PBD, Stable Fluid.
-- **Wide range of material models**: Simulation and coupling of rigid bodies, liquids, gases, deformable objects, thin-shell objects, and granular materials.
-- **Compatibility with various robots**: Robotic arms, legged robots, drones, *soft robots*, and support for loading `MJCF (.xml)`, `URDF`, `.obj`, `.glb`, `.ply`, `.stl`, and more.
-- **Photo-realistic rendering**: Native ray-tracing-based rendering.
-- **Differentiability**: Genesis is designed to be fully differentiable. Currently, our MPM solver and Tool Solver support differentiability, with other solvers planned for future versions (starting with rigid & articulated body solver).
-- **Physics-based tactile simulation**: Differentiable [tactile sensor simulation](https://github.com/Genesis-Embodied-AI/DiffTactile) coming soon (expected in version 0.3.0).
-- **User-friendliness**: Designed for simplicity, with intuitive installation and APIs.
-
-## Quick Installation
-
-Genesis is available via PyPI:
-
-```bash
-pip install genesis-world # Requires Python >=3.9;
-```
-
-You also need to install **PyTorch** following the [official instructions](https://pytorch.org/get-started/locally/).
-
-For the latest version, clone the repository and install locally:
-
-```bash
-git clone https://github.com/Genesis-Embodied-AI/Genesis.git
-cd Genesis
-pip install -e .
-```
-
-## Docker
-
-If you want to use Genesis from Docker, you can first build the Docker image as:
-
-```bash
-docker build -t genesis -f docker/Dockerfile docker
-```
-
-Then you can run the examples inside the docker image (mounted to `/workspace/examples`):
-
-```bash
-xhost +local:root # Allow the container to access the display
-
-docker run --gpus all --rm -it \
--e DISPLAY=$DISPLAY \
--v /tmp/.X11-unix/:/tmp/.X11-unix \
--v $PWD:/workspace \
-genesis
-```
-
-## Documentation
-
-Comprehensive documentation is available in [English](https://genesis-world.readthedocs.io/en/latest/user_guide/index.html) and [Chinese](https://genesis-world.readthedocs.io/zh-cn/latest/user_guide/index.html). This includes detailed installation steps, tutorials, and API references.
-
-## Contributing to Genesis
-
-The Genesis project is an open and collaborative effort. We welcome all forms of contributions from the community, including:
-
-- **Pull requests** for new features or bug fixes.
-- **Bug reports** through GitHub Issues.
-- **Suggestions** to improve Genesis's usability.
-
-Refer to our [contribution guide](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/CONTRIBUTING.md) for more details.
-
-## Support
-
-- Report bugs or request features via GitHub [Issues](https://github.com/Genesis-Embodied-AI/Genesis/issues).
-- Join discussions or ask questions on GitHub [Discussions](https://github.com/Genesis-Embodied-AI/Genesis/discussions).
-
-## License and Acknowledgments
-
-The Genesis source code is licensed under Apache 2.0.
-
-Genesis's development has been made possible thanks to these open-source projects:
-
-- [Taichi](https://github.com/taichi-dev/taichi): High-performance cross-platform compute backend. Kudos to the Taichi team for their technical support!
-- [FluidLab](https://github.com/zhouxian/FluidLab): Reference MPM solver implementation.
-- [SPH_Taichi](https://github.com/erizmr/SPH_Taichi): Reference SPH solver implementation.
-- [Ten Minute Physics](https://matthias-research.github.io/pages/tenMinutePhysics/index.html) and [PBF3D](https://github.com/WASD4959/PBF3D): Reference PBD solver implementations.
-- [MuJoCo](https://github.com/google-deepmind/mujoco): Reference for rigid body dynamics.
-- [libccd](https://github.com/danfis/libccd): Reference for collision detection.
-- [PyRender](https://github.com/mmatl/pyrender): Rasterization-based renderer.
-- [LuisaCompute](https://github.com/LuisaGroup/LuisaCompute) and [LuisaRender](https://github.com/LuisaGroup/LuisaRender): Ray-tracing DSL.
-
-## Associated Papers
-
-Genesis is a large scale effort that integrates state-of-the-art technologies of various existing and on-going research work into a single system. Here we include a non-exhaustive list of all the papers that contributed to the Genesis project in one way or another:
-
-- Xian, Zhou, et al. "Fluidlab: A differentiable environment for benchmarking complex fluid manipulation." arXiv preprint arXiv:2303.02346 (2023).
-- Xu, Zhenjia, et al. "Roboninja: Learning an adaptive cutting policy for multi-material objects." arXiv preprint arXiv:2302.11553 (2023).
-- Wang, Yufei, et al. "Robogen: Towards unleashing infinite data for automated robot learning via generative simulation." arXiv preprint arXiv:2311.01455 (2023).
-- Wang, Tsun-Hsuan, et al. "Softzoo: A soft robot co-design benchmark for locomotion in diverse environments." arXiv preprint arXiv:2303.09555 (2023).
-- Wang, Tsun-Hsuan Johnson, et al. "Diffusebot: Breeding soft robots with physics-augmented generative diffusion models." Advances in Neural Information Processing Systems 36 (2023): 44398-44423.
-- Katara, Pushkal, Zhou Xian, and Katerina Fragkiadaki. "Gen2sim: Scaling up robot learning in simulation with generative models." 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024.
-- Si, Zilin, et al. "DiffTactile: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation." arXiv preprint arXiv:2403.08716 (2024).
-- Wang, Yian, et al. "Thin-Shell Object Manipulations With Differentiable Physics Simulations." arXiv preprint arXiv:2404.00451 (2024).
-- Lin, Chunru, et al. "UBSoft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments." arXiv preprint arXiv:2411.12711 (2024).
-- Zhou, Wenyang, et al. "EMDM: Efficient motion diffusion model for fast and high-quality motion generation." European Conference on Computer Vision. Springer, Cham, 2025.
-- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Scalable differentiable physics for learning and control." International Conference on Machine Learning. PMLR, 2020.
-- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Efficient differentiable simulation of articulated bodies." In International Conference on Machine Learning, PMLR, 2021.
-- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming Lin. "Differentiable simulation of soft multi-body systems." Advances in Neural Information Processing Systems 34 (2021).
-- Wan, Weilin, et al. "Tlcontrol: Trajectory and language control for human motion synthesis." arXiv preprint arXiv:2311.17135 (2023).
-- Wang, Yian, et al. "Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting." arXiv preprint arXiv:2411.09823 (2024).
-- Zheng, Shaokun, et al. "LuisaRender: A high-performance rendering framework with layered and unified interfaces on stream architectures." ACM Transactions on Graphics (TOG) 41.6 (2022): 1-19.
-- Fan, Yingruo, et al. "Faceformer: Speech-driven 3d facial animation with transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
-- Wu, Sichun, Kazi Injamamul Haque, and Zerrin Yumak. "ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE." Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games. 2024.
-- Dou, Zhiyang, et al. "C· ase: Learning conditional adversarial skill embeddings for physics-based characters." SIGGRAPH Asia 2023 Conference Papers. 2023.
-
-... and many more on-going work.
-
-## Citation
-
-If you use Genesis in your research, please consider citing:
-
-```bibtex
-@software{Genesis,
- author = {Genesis Authors},
- title = {Genesis: A Universal and Generative Physics Engine for Robotics and Beyond},
- month = {December},
- year = {2024},
- url = {https://github.com/Genesis-Embodied-AI/Genesis}
-}
+![Genesis](imgs/big_text.png)
+
+![Teaser](imgs/teaser.png)
+
+[![PyPI - Version](https://img.shields.io/pypi/v/genesis-world)](https://pypi.org/project/genesis-world/)
+[![PyPI - Downloads](https://img.shields.io/pypi/dm/genesis-world)](https://pypi.org/project/genesis-world/)
+[![GitHub Issues](https://img.shields.io/github/issues/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/issues)
+[![GitHub Discussions](https://img.shields.io/github/discussions/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/discussions)
+
+[![README in English](https://img.shields.io/badge/English-d9d9d9)](./README.md)
+[![README en Français](https://img.shields.io/badge/Francais-d9d9d9)](./README_FR.md)
+[![简体中文版自述文件](https://img.shields.io/badge/简体中文-d9d9d9)](./README_CN.md)
+[![日本語版 README](https://img.shields.io/badge/日本語-d9d9d9)](./README_JA.md)
+
+# Genesis
+## 🔥 News
+- [2024-12-25] Added a [Docker setup](#docker) with support for the ray-tracing renderer
+- [2024-12-24] Added guidelines for [contributing to Genesis](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/CONTRIBUTING.md)
+
+## Table of Contents
+
+1. [What is Genesis?](#what-is-genesis)
+2. [Key Features](#key-features)
+3. [Quick Installation](#quick-installation)
+4. [Docker](#docker)
+5. [Documentation](#documentation)
+6. [Contributing to Genesis](#contributing-to-genesis)
+7. [Support](#support)
+8. [License and Acknowledgments](#license-and-acknowledgments)
+9. [Associated Papers](#associated-papers)
+10. [Citation](#citation)
+
+## What is Genesis?
+
+Genesis is a physics platform designed for general-purpose *Robotics/Embodied AI/Physical AI* applications. It is simultaneously multiple things:
+
+1. A **universal physics engine** re-built from the ground up, capable of simulating a wide range of materials and physical phenomena.
+2. A **lightweight**, **ultra-fast**, **pythonic**, and **user-friendly** robotics simulation platform.
+3. A powerful and fast **photo-realistic rendering system**.
+4. A **generative data engine** that transforms user-prompted natural language descriptions into various modalities of data.
+
+Genesis aims to:
+
+- **Lower the barrier** to using physics simulations, making robotics research accessible to everyone. See our [mission statement](https://genesis-world.readthedocs.io/en/latest/user_guide/overview/mission.html).
+- **Unify diverse physics solvers** into a single framework to recreate the physical world with the highest fidelity.
+- **Automate data generation**, reducing human effort and letting the data flywheel spin on its own.
+
+Project Page:
+
+## Key Features
+
+- **Speed**: Over 43 million FPS when simulating a Franka robotic arm with a single RTX 4090 (430,000 times faster than real-time).
+- **Cross-platform**: Runs on Linux, macOS, Windows, and supports multiple compute backends (CPU, Nvidia/AMD GPUs, Apple Metal).
+- **Integration of diverse physics solvers**: Rigid body, MPM, SPH, FEM, PBD, Stable Fluid.
+- **Wide range of material models**: Simulation and coupling of rigid bodies, liquids, gases, deformable objects, thin-shell objects, and granular materials.
+- **Compatibility with various robots**: Robotic arms, legged robots, drones, *soft robots*, and support for loading `MJCF (.xml)`, `URDF`, `.obj`, `.glb`, `.ply`, `.stl`, and more.
+- **Photo-realistic rendering**: Native ray-tracing-based rendering.
+- **Differentiability**: Genesis is designed to be fully differentiable. Currently, our MPM solver and Tool Solver support differentiability, with other solvers planned for future versions (starting with rigid & articulated body solver).
+- **Physics-based tactile simulation**: Differentiable [tactile sensor simulation](https://github.com/Genesis-Embodied-AI/DiffTactile) coming soon (expected in version 0.3.0).
+- **User-friendliness**: Designed for simplicity, with intuitive installation and APIs.
+
+## Quick Installation
+
+Genesis is available via PyPI:
+
+```bash
+pip install genesis-world  # Requires Python >=3.9
+```
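+
+Before installing, you can confirm that your interpreter meets the version requirement (a quick sanity check; assumes `python3` is on your `PATH`):
+
+```bash
+# Fails with an AssertionError if the interpreter is older than Python 3.9
+python3 -c 'import sys; assert sys.version_info >= (3, 9), sys.version'
+```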
+
+You also need to install **PyTorch** following the [official instructions](https://pytorch.org/get-started/locally/).
+
+For the latest version, clone the repository and install locally:
+
+```bash
+git clone https://github.com/Genesis-Embodied-AI/Genesis.git
+cd Genesis
+pip install -e .
+```
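+
+As a quick smoke test after installation, a minimal simulation can be assembled in a few lines. This sketch follows the "hello world" pattern from the Genesis documentation; the exact morph names and the asset path are assumptions and may differ across versions:
+
+```python
+import genesis as gs
+
+gs.init(backend=gs.cpu)  # use gs.gpu on a CUDA-capable machine
+
+scene = gs.Scene(show_viewer=False)
+scene.add_entity(gs.morphs.Plane())
+# Asset path assumed from the documentation's Franka arm example
+scene.add_entity(gs.morphs.MJCF(file="xml/franka_emika_panda/panda.xml"))
+
+scene.build()
+for _ in range(100):
+    scene.step()
+```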
+
+## Docker
+
+To use Genesis from Docker, first build the Docker image:
+
+```bash
+docker build -t genesis -f docker/Dockerfile docker
+```
+
+You can then run the examples inside the container; the current directory is mounted at `/workspace`, so the examples are available under `/workspace/examples`:
+
+```bash
+xhost +local:root # Allow the container to access the display
+
+docker run --gpus all --rm -it \
+  -e DISPLAY="$DISPLAY" \
+  -v /tmp/.X11-unix/:/tmp/.X11-unix \
+  -v "$PWD":/workspace \
+  genesis
+```
+
+## Documentation
+
+Comprehensive documentation is available in [English](https://genesis-world.readthedocs.io/en/latest/user_guide/index.html) and [Chinese](https://genesis-world.readthedocs.io/zh-cn/latest/user_guide/index.html). This includes detailed installation steps, tutorials, and API references.
+
+## Contributing to Genesis
+
+The Genesis project is an open and collaborative effort. We welcome all forms of contributions from the community, including:
+
+- **Pull requests** for new features or bug fixes.
+- **Bug reports** through GitHub Issues.
+- **Suggestions** to improve Genesis's usability.
+
+Refer to our [contribution guide](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/CONTRIBUTING.md) for more details.
+
+## Support
+
+- Report bugs or request features via GitHub [Issues](https://github.com/Genesis-Embodied-AI/Genesis/issues).
+- Join discussions or ask questions on GitHub [Discussions](https://github.com/Genesis-Embodied-AI/Genesis/discussions).
+
+## License and Acknowledgments
+
+The Genesis source code is licensed under Apache 2.0.
+
+Genesis's development has been made possible thanks to these open-source projects:
+
+- [Taichi](https://github.com/taichi-dev/taichi): High-performance cross-platform compute backend. Kudos to the Taichi team for their technical support!
+- [FluidLab](https://github.com/zhouxian/FluidLab): Reference MPM solver implementation.
+- [SPH_Taichi](https://github.com/erizmr/SPH_Taichi): Reference SPH solver implementation.
+- [Ten Minute Physics](https://matthias-research.github.io/pages/tenMinutePhysics/index.html) and [PBF3D](https://github.com/WASD4959/PBF3D): Reference PBD solver implementations.
+- [MuJoCo](https://github.com/google-deepmind/mujoco): Reference for rigid body dynamics.
+- [libccd](https://github.com/danfis/libccd): Reference for collision detection.
+- [PyRender](https://github.com/mmatl/pyrender): Rasterization-based renderer.
+- [LuisaCompute](https://github.com/LuisaGroup/LuisaCompute) and [LuisaRender](https://github.com/LuisaGroup/LuisaRender): Ray-tracing DSL.
+
+## Associated Papers
+
+Genesis is a large-scale effort that integrates state-of-the-art technologies from various existing and ongoing research works into a single system. Below is a non-exhaustive list of papers that contributed to the Genesis project in one way or another:
+
+- Xian, Zhou, et al. "Fluidlab: A differentiable environment for benchmarking complex fluid manipulation." arXiv preprint arXiv:2303.02346 (2023).
+- Xu, Zhenjia, et al. "Roboninja: Learning an adaptive cutting policy for multi-material objects." arXiv preprint arXiv:2302.11553 (2023).
+- Wang, Yufei, et al. "Robogen: Towards unleashing infinite data for automated robot learning via generative simulation." arXiv preprint arXiv:2311.01455 (2023).
+- Wang, Tsun-Hsuan, et al. "Softzoo: A soft robot co-design benchmark for locomotion in diverse environments." arXiv preprint arXiv:2303.09555 (2023).
+- Wang, Tsun-Hsuan Johnson, et al. "Diffusebot: Breeding soft robots with physics-augmented generative diffusion models." Advances in Neural Information Processing Systems 36 (2023): 44398-44423.
+- Katara, Pushkal, Zhou Xian, and Katerina Fragkiadaki. "Gen2sim: Scaling up robot learning in simulation with generative models." 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024.
+- Si, Zilin, et al. "DiffTactile: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation." arXiv preprint arXiv:2403.08716 (2024).
+- Wang, Yian, et al. "Thin-Shell Object Manipulations With Differentiable Physics Simulations." arXiv preprint arXiv:2404.00451 (2024).
+- Lin, Chunru, et al. "UBSoft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments." arXiv preprint arXiv:2411.12711 (2024).
+- Zhou, Wenyang, et al. "EMDM: Efficient motion diffusion model for fast and high-quality motion generation." European Conference on Computer Vision. Springer, Cham, 2025.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Scalable differentiable physics for learning and control." International Conference on Machine Learning. PMLR, 2020.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Efficient differentiable simulation of articulated bodies." In International Conference on Machine Learning, PMLR, 2021.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming Lin. "Differentiable simulation of soft multi-body systems." Advances in Neural Information Processing Systems 34 (2021).
+- Wan, Weilin, et al. "Tlcontrol: Trajectory and language control for human motion synthesis." arXiv preprint arXiv:2311.17135 (2023).
+- Wang, Yian, et al. "Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting." arXiv preprint arXiv:2411.09823 (2024).
+- Zheng, Shaokun, et al. "LuisaRender: A high-performance rendering framework with layered and unified interfaces on stream architectures." ACM Transactions on Graphics (TOG) 41.6 (2022): 1-19.
+- Fan, Yingruo, et al. "Faceformer: Speech-driven 3d facial animation with transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
+- Wu, Sichun, Kazi Injamamul Haque, and Zerrin Yumak. "ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE." Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games. 2024.
+- Dou, Zhiyang, et al. "C·ASE: Learning conditional adversarial skill embeddings for physics-based characters." SIGGRAPH Asia 2023 Conference Papers. 2023.
+
+... and many more ongoing works.
+
+## Citation
+
+If you use Genesis in your research, please consider citing:
+
+```bibtex
+@software{Genesis,
+ author = {Genesis Authors},
+ title = {Genesis: A Universal and Generative Physics Engine for Robotics and Beyond},
+ month = {December},
+ year = {2024},
+ url = {https://github.com/Genesis-Embodied-AI/Genesis}
+}
+```
diff --git a/README_CN.md b/README_CN.md
index bb852175..c1d08a70 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -1,162 +1,163 @@
-
-
-![Genesis](imgs/big_text.png)
-
-![Teaser](imgs/teaser.png)
-
-[![PyPI - Version](https://img.shields.io/pypi/v/genesis-world)](https://pypi.org/project/genesis-world/)
-[![PyPI - Downloads](https://img.shields.io/pypi/dm/genesis-world)](https://pypi.org/project/genesis-world/)
-[![GitHub Issues](https://img.shields.io/github/issues/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/issues)
-[![GitHub Discussions](https://img.shields.io/github/discussions/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/discussions)
-
-[![README in English](https://img.shields.io/badge/English-d9d9d9)](./README.md)
-[![简体中文版自述文件](https://img.shields.io/badge/简体中文-d9d9d9)](./README_CN.md)
-[![日本語版 README](https://img.shields.io/badge/日本語-d9d9d9)](./README_JA.md)
-
-
-
-# Genesis 通用物理引擎
-
-## 目录
-
-1. [概述](#概述)
-2. [主要特点](#主要特点)
-3. [快速入门](#快速入门)
-4. [参与贡献](#参与贡献)
-5. [帮助支持](#帮助支持)
-6. [许可证与致谢](#许可证和致谢)
-7. [相关论文](#genesis-背后的论文)
-8. [引用](#引用)
-
-## 概述
-
-Genesis 是专为 *机器人/嵌入式 AI/物理 AI* 应用设计的通用物理平台,集成了以下核心功能:
-
-- **通用物理引擎**: 从底层重建,支持多种材料和物理现象模拟
-- **机器人模拟平台**: 轻量、高速、Python友好的开发环境
-- **真实感渲染**: 内置光线追踪渲染系统
-- **生成数据引擎**: 自然语言驱动的多模态数据生成
-
-我们的长期使命:
-
-- 降低物理模拟使用门槛
-- 统一各类物理求解器
-- 实现数据生成自动化
-
-项目主页:
-
-## 主要特点
-
-- **速度**:Genesis 提供了前所未有的模拟速度——在单个 RTX 4090 上模拟 Franka 机器人手臂时超过 4300 万 FPS(比实时快 430,000 倍)。
-- **跨平台**:Genesis 原生运行在不同系统(Linux、MacOS、Windows)和不同计算后端(CPU、Nvidia GPU、AMD GPU、Apple Metal)上。
-- **各种物理求解器的统一**:Genesis 开发了一个统一的模拟框架,集成了各种物理求解器:刚体、MPM、SPH、FEM、PBD、稳定流体。
-- **支持广泛的材料模型**:Genesis 支持刚体和关节体、各种液体、气体现象、可变形物体、薄壳物体和颗粒材料的模拟(及其耦合)。
-- **支持广泛的机器人**:机器人手臂、腿式机器人、无人机、*软体机器人*等,并广泛支持加载不同文件类型:`MJCF (.xml)`、`URDF`、`.obj`、`.glb`、`.ply`、`.stl` 等。
-- **照片级真实感和高性能光线追踪器**:Genesis 支持基于光线追踪的原生渲染。
-- **可微分性**:Genesis 设计为完全兼容可微分模拟。目前,我们的 MPM 求解器和工具求解器是可微分的,其他求解器的可微分性将很快添加(从刚体模拟开始)。
-- **基于物理的触觉传感器**:Genesis 包含一个基于物理的可微分 [触觉传感器模拟模块](https://github.com/Genesis-Embodied-AI/DiffTactile)。这将很快集成到公共版本中(预计在 0.3.0 版本中)。
-- **用户友好性**:Genesis 设计为尽可能简化模拟的使用。从安装到 API 设计,如果有任何您觉得不直观或难以使用的地方,请 [告诉我们](https://github.com/Genesis-Embodied-AI/Genesis/issues)。
-
-## 快速入门
-
-### 安装
-
-Genesis 可通过 PyPI 获取:
-
-```bash
-pip install genesis-world # 需要 Python >=3.9
-```
-
-同时需要按照[官方指南](https://pytorch.org/get-started/locally/)安装 PyTorch。
-
-### Docker 支持
-
-如果您想通过 Docker 使用 Genesis,您可以首先构建 Docker 镜像,命令如下:
-
-```bash
-docker build -t genesis -f docker/Dockerfile docker
-```
-
-然后,您可以在 Docker 镜像内运行示例代码(挂载到 `/workspace/examples`):
-
-```bash
-xhost +local:root # 允许容器访问显示器
-
-docker run --gpus all --rm -it \
--e DISPLAY=$DISPLAY \
--v /tmp/.X11-unix/:/tmp/.X11-unix \
--v $PWD:/workspace \
-genesis
-```
-
-### 文档
-
-- [英文文档](https://genesis-world.readthedocs.io/en/latest/user_guide/index.html)
-- [中文文档](https://genesis-world.readthedocs.io/zh-cn/latest/user_guide/index.html)
-
-## 参与贡献
-
-Genesis 项目的目标是构建一个完全透明、用户友好的生态系统,让来自机器人和计算机图形学的贡献者 **共同创建一个高效、真实(物理和视觉上)的虚拟世界,用于机器人研究及其他领域**。
-
-我们真诚地欢迎来自社区的 *任何形式的贡献*,以使世界对机器人更友好。从 **新功能的拉取请求**、**错误报告**,到甚至是使 Genesis API 更直观的微小 **建议**,我们都全心全意地感谢!
-
-## 帮助支持
-
-- 请使用 Github [Issues](https://github.com/Genesis-Embodied-AI/Genesis/issues) 报告错误和提出功能请求。
-
-- 请使用 GitHub [Discussions](https://github.com/Genesis-Embodied-AI/Genesis/discussions) 讨论想法和提问。
-
-## 许可证和致谢
-
-Genesis 源代码根据 Apache 2.0 许可证授权。
-没有这些令人惊叹的开源项目,Genesis 的开发是不可能的:
-
-- [Taichi](https://github.com/taichi-dev/taichi):提供高性能跨平台计算后端。感谢 taichi 的所有成员提供的技术支持!
-- [FluidLab](https://github.com/zhouxian/FluidLab) 提供参考 MPM 求解器实现
-- [SPH_Taichi](https://github.com/erizmr/SPH_Taichi) 提供参考 SPH 求解器实现
-- [Ten Minute Physics](https://matthias-research.github.io/pages/tenMinutePhysics/index.html) 和 [PBF3D](https://github.com/WASD4959/PBF3D) 提供参考 PBD 求解器实现
-- [MuJoCo](https://github.com/google-deepmind/mujoco) 和 [Brax](https://github.com/google/brax) 提供刚体动力学参考
-- [libccd](https://github.com/danfis/libccd) 提供碰撞检测参考
-- [PyRender](https://github.com/mmatl/pyrender) 提供基于光栅化的渲染器
-- [LuisaCompute](https://github.com/LuisaGroup/LuisaCompute) 和 [LuisaRender](https://github.com/LuisaGroup/LuisaRender) 提供其光线追踪 DSL
-- [trimesh](https://github.com/mikedh/trimesh)、[PyMeshLab](https://github.com/cnr-isti-vclab/PyMeshLab) 和 [CoACD](https://github.com/SarahWeiii/CoACD) 提供几何处理
-
-## Genesis 背后的论文
-
-Genesis 是一个大规模的努力,将各种现有和正在进行的研究工作的最先进技术集成到一个系统中。这里我们列出了一些对 Genesis 项目有贡献的论文(非详尽列表):
-
-- Xian, Zhou, et al. "Fluidlab: A differentiable environment for benchmarking complex fluid manipulation." arXiv preprint arXiv:2303.02346 (2023).
-- Xu, Zhenjia, et al. "Roboninja: Learning an adaptive cutting policy for multi-material objects." arXiv preprint arXiv:2302.11553 (2023).
-- Wang, Yufei, et al. "Robogen: Towards unleashing infinite data for automated robot learning via generative simulation." arXiv preprint arXiv:2311.01455 (2023).
-- Wang, Tsun-Hsuan, et al. "Softzoo: A soft robot co-design benchmark for locomotion in diverse environments." arXiv preprint arXiv:2303.09555 (2023).
-- Wang, Tsun-Hsuan Johnson, et al. "Diffusebot: Breeding soft robots with physics-augmented generative diffusion models." Advances in Neural Information Processing Systems 36 (2023): 44398-44423.
-- Katara, Pushkal, Zhou Xian, and Katerina Fragkiadaki. "Gen2sim: Scaling up robot learning in simulation with generative models." 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024.
-- Si, Zilin, et al. "DiffTactile: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation." arXiv preprint arXiv:2403.08716 (2024).
-- Wang, Yian, et al. "Thin-Shell Object Manipulations With Differentiable Physics Simulations." arXiv preprint arXiv:2404.00451 (2024).
-- Lin, Chunru, et al. "UBSoft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments." arXiv preprint arXiv:2411.12711 (2024).
-- Zhou, Wenyang, et al. "EMDM: Efficient motion diffusion model for fast and high-quality motion generation." European Conference on Computer Vision. Springer, Cham, 2025.
-- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Scalable differentiable physics for learning and control." International Conference on Machine Learning. PMLR, 2020.
-- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Efficient differentiable simulation of articulated bodies." In International Conference on Machine Learning, PMLR, 2021.
-- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming Lin. "Differentiable simulation of soft multi-body systems." Advances in Neural Information Processing Systems 34 (2021).
-- Wan, Weilin, et al. "Tlcontrol: Trajectory and language control for human motion synthesis." arXiv preprint arXiv:2311.17135 (2023).
-- Wang, Yian, et al. "Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting." arXiv preprint arXiv:2411.09823 (2024).
-- Zheng, Shaokun, et al. "LuisaRender: A high-performance rendering framework with layered and unified interfaces on stream architectures." ACM Transactions on Graphics (TOG) 41.6 (2022): 1-19.
-- Fan, Yingruo, et al. "Faceformer: Speech-driven 3d facial animation with transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
-- Wu, Sichun, Kazi Injamamul Haque, and Zerrin Yumak. "ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE." Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games. 2024.
-- Dou, Zhiyang, et al. "C· ase: Learning conditional adversarial skill embeddings for physics-based characters." SIGGRAPH Asia 2023 Conference Papers. 2023.
-
-... 以及许多正在进行的工作。
-
-## 引用
-
-如果您在研究中使用了 Genesis,我们将非常感谢您引用它。我们仍在撰写技术报告,在其公开之前,您可以考虑引用:
-
-```bibtex
-@software{Genesis,
- author = {Genesis Authors},
- title = {Genesis: A Universal and Generative Physics Engine for Robotics and Beyond},
- month = {December},
- year = {2024},
- url = {https://github.com/Genesis-Embodied-AI/Genesis}
-}
-```
+
+
+![Genesis](imgs/big_text.png)
+
+![Teaser](imgs/teaser.png)
+
+[![PyPI - Version](https://img.shields.io/pypi/v/genesis-world)](https://pypi.org/project/genesis-world/)
+[![PyPI - Downloads](https://img.shields.io/pypi/dm/genesis-world)](https://pypi.org/project/genesis-world/)
+[![GitHub Issues](https://img.shields.io/github/issues/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/issues)
+[![GitHub Discussions](https://img.shields.io/github/discussions/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/discussions)
+
+[![README in English](https://img.shields.io/badge/English-d9d9d9)](./README.md)
+[![README en Français](https://img.shields.io/badge/Francais-d9d9d9)](./README_FR.md)
+[![简体中文版自述文件](https://img.shields.io/badge/简体中文-d9d9d9)](./README_CN.md)
+[![日本語版 README](https://img.shields.io/badge/日本語-d9d9d9)](./README_JA.md)
+
+
+
+# Genesis 通用物理引擎
+
+## 目录
+
+1. [概述](#概述)
+2. [主要特点](#主要特点)
+3. [快速入门](#快速入门)
+4. [参与贡献](#参与贡献)
+5. [帮助支持](#帮助支持)
+6. [许可证与致谢](#许可证和致谢)
+7. [相关论文](#genesis-背后的论文)
+8. [引用](#引用)
+
+## 概述
+
+Genesis 是专为 *机器人/具身 AI/物理 AI* 应用设计的通用物理平台,集成了以下核心功能:
+
+- **通用物理引擎**: 从底层重建,支持多种材料和物理现象模拟
+- **机器人模拟平台**: 轻量、高速、Python友好的开发环境
+- **真实感渲染**: 内置光线追踪渲染系统
+- **生成数据引擎**: 自然语言驱动的多模态数据生成
+
+我们的长期使命:
+
+- 降低物理模拟使用门槛
+- 统一各类物理求解器
+- 实现数据生成自动化
+
+项目主页:
+
+## 主要特点
+
+- **速度**:Genesis 提供了前所未有的模拟速度——在单个 RTX 4090 上模拟 Franka 机器人手臂时超过 4300 万 FPS(比实时快 430,000 倍)。
+- **跨平台**:Genesis 原生运行在不同系统(Linux、macOS、Windows)和不同计算后端(CPU、Nvidia GPU、AMD GPU、Apple Metal)上。
+- **各种物理求解器的统一**:Genesis 开发了一个统一的模拟框架,集成了各种物理求解器:刚体、MPM、SPH、FEM、PBD、稳定流体。
+- **支持广泛的材料模型**:Genesis 支持刚体和关节体、各种液体、气体现象、可变形物体、薄壳物体和颗粒材料的模拟(及其耦合)。
+- **支持广泛的机器人**:机器人手臂、腿式机器人、无人机、*软体机器人*等,并广泛支持加载不同文件类型:`MJCF (.xml)`、`URDF`、`.obj`、`.glb`、`.ply`、`.stl` 等。
+- **照片级真实感和高性能光线追踪器**:Genesis 支持基于光线追踪的原生渲染。
+- **可微分性**:Genesis 设计为完全兼容可微分模拟。目前,我们的 MPM 求解器和工具求解器是可微分的,其他求解器的可微分性将很快添加(从刚体模拟开始)。
+- **基于物理的触觉传感器**:Genesis 包含一个基于物理的可微分 [触觉传感器模拟模块](https://github.com/Genesis-Embodied-AI/DiffTactile)。这将很快集成到公共版本中(预计在 0.3.0 版本中)。
+- **用户友好性**:Genesis 设计为尽可能简化模拟的使用。从安装到 API 设计,如果有任何您觉得不直观或难以使用的地方,请 [告诉我们](https://github.com/Genesis-Embodied-AI/Genesis/issues)。
+
+## 快速入门
+
+### 安装
+
+Genesis 可通过 PyPI 获取:
+
+```bash
+pip install genesis-world  # requires Python >= 3.9
+```
+
+You also need to install PyTorch following the [official instructions](https://pytorch.org/get-started/locally/).
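+After installing, a quick smoke test can confirm that the package imports and can step a simple scene. The sketch below follows the typical Genesis "hello world" pattern (`gs.init`, `gs.Scene`, `scene.step`); treat the exact calls as illustrative and check the documentation for the version you installed:
+
+```python
+# Minimal Genesis smoke test (illustrative; guarded so it degrades
+# gracefully when genesis-world or its compute backend is unavailable).
+try:
+    import genesis as gs
+
+    gs.init(backend=gs.cpu)              # use gs.gpu when CUDA is available
+    scene = gs.Scene(show_viewer=False)  # headless: no display required
+    scene.add_entity(gs.morphs.Plane())  # a ground plane to step against
+    scene.build()
+    for _ in range(10):
+        scene.step()
+    status = "ok"
+except Exception as exc:
+    status = f"genesis unavailable: {exc}"
+
+print(status)
+```
+
+`show_viewer=False` keeps the run headless, which matters inside Docker or on CI machines without a display.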
+
+### Docker Support
+
+If you want to use Genesis from Docker, first build the Docker image as follows:
+
+```bash
+docker build -t genesis -f docker/Dockerfile docker
+```
+
+Then you can run the example code inside the Docker image (mounted at `/workspace/examples`):
+
+```bash
+xhost +local:root  # allow the container to access the display
+
+docker run --gpus all --rm -it \
+-e DISPLAY=$DISPLAY \
+-v /tmp/.X11-unix/:/tmp/.X11-unix \
+-v $PWD:/workspace \
+genesis
+```
+
+### Documentation
+
+- [English documentation](https://genesis-world.readthedocs.io/en/latest/user_guide/index.html)
+- [Chinese documentation](https://genesis-world.readthedocs.io/zh-cn/latest/user_guide/index.html)
+
+## Contributing
+
+The goal of the Genesis project is to build a fully transparent, user-friendly ecosystem where contributors from robotics and computer graphics **jointly create a highly efficient, realistic (both physically and visually) virtual world for robotics research and beyond**.
+
+We sincerely welcome *any form of contribution* from the community to make the world friendlier for robots. From **pull requests** for new features and **bug reports** to even tiny **suggestions** that make the Genesis API more intuitive, we appreciate them all wholeheartedly!
+
+## Support
+
+- Please use GitHub [Issues](https://github.com/Genesis-Embodied-AI/Genesis/issues) to report bugs and request features.
+
+- Please use GitHub [Discussions](https://github.com/Genesis-Embodied-AI/Genesis/discussions) to discuss ideas and ask questions.
+
+## License and Acknowledgments
+
+The Genesis source code is licensed under Apache 2.0.
+Genesis's development would not have been possible without these amazing open-source projects:
+
+- [Taichi](https://github.com/taichi-dev/taichi): high-performance, cross-platform compute backend. Thanks to everyone at Taichi for the technical support!
+- [FluidLab](https://github.com/zhouxian/FluidLab) for the reference MPM solver implementation
+- [SPH_Taichi](https://github.com/erizmr/SPH_Taichi) for the reference SPH solver implementation
+- [Ten Minute Physics](https://matthias-research.github.io/pages/tenMinutePhysics/index.html) and [PBF3D](https://github.com/WASD4959/PBF3D) for the reference PBD solver implementations
+- [MuJoCo](https://github.com/google-deepmind/mujoco) and [Brax](https://github.com/google/brax) for rigid-body dynamics references
+- [libccd](https://github.com/danfis/libccd) for the collision detection reference
+- [PyRender](https://github.com/mmatl/pyrender) for the rasterization-based renderer
+- [LuisaCompute](https://github.com/LuisaGroup/LuisaCompute) and [LuisaRender](https://github.com/LuisaGroup/LuisaRender) for their ray-tracing DSL
+- [trimesh](https://github.com/mikedh/trimesh), [PyMeshLab](https://github.com/cnr-isti-vclab/PyMeshLab), and [CoACD](https://github.com/SarahWeiii/CoACD) for geometry processing
+
+## Papers behind Genesis
+
+Genesis is a large-scale effort that integrates state-of-the-art technologies from a range of existing and ongoing research into a single system. Here we list some of the papers that contributed to the Genesis project (non-exhaustive):
+
+- Xian, Zhou, et al. "Fluidlab: A differentiable environment for benchmarking complex fluid manipulation." arXiv preprint arXiv:2303.02346 (2023).
+- Xu, Zhenjia, et al. "Roboninja: Learning an adaptive cutting policy for multi-material objects." arXiv preprint arXiv:2302.11553 (2023).
+- Wang, Yufei, et al. "Robogen: Towards unleashing infinite data for automated robot learning via generative simulation." arXiv preprint arXiv:2311.01455 (2023).
+- Wang, Tsun-Hsuan, et al. "Softzoo: A soft robot co-design benchmark for locomotion in diverse environments." arXiv preprint arXiv:2303.09555 (2023).
+- Wang, Tsun-Hsuan Johnson, et al. "Diffusebot: Breeding soft robots with physics-augmented generative diffusion models." Advances in Neural Information Processing Systems 36 (2023): 44398-44423.
+- Katara, Pushkal, Zhou Xian, and Katerina Fragkiadaki. "Gen2sim: Scaling up robot learning in simulation with generative models." 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024.
+- Si, Zilin, et al. "DiffTactile: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation." arXiv preprint arXiv:2403.08716 (2024).
+- Wang, Yian, et al. "Thin-Shell Object Manipulations With Differentiable Physics Simulations." arXiv preprint arXiv:2404.00451 (2024).
+- Lin, Chunru, et al. "UBSoft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments." arXiv preprint arXiv:2411.12711 (2024).
+- Zhou, Wenyang, et al. "EMDM: Efficient motion diffusion model for fast and high-quality motion generation." European Conference on Computer Vision. Springer, Cham, 2025.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Scalable differentiable physics for learning and control." International Conference on Machine Learning. PMLR, 2020.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Efficient differentiable simulation of articulated bodies." In International Conference on Machine Learning, PMLR, 2021.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming Lin. "Differentiable simulation of soft multi-body systems." Advances in Neural Information Processing Systems 34 (2021).
+- Wan, Weilin, et al. "Tlcontrol: Trajectory and language control for human motion synthesis." arXiv preprint arXiv:2311.17135 (2023).
+- Wang, Yian, et al. "Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting." arXiv preprint arXiv:2411.09823 (2024).
+- Zheng, Shaokun, et al. "LuisaRender: A high-performance rendering framework with layered and unified interfaces on stream architectures." ACM Transactions on Graphics (TOG) 41.6 (2022): 1-19.
+- Fan, Yingruo, et al. "Faceformer: Speech-driven 3d facial animation with transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
+- Wu, Sichun, Kazi Injamamul Haque, and Zerrin Yumak. "ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE." Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games. 2024.
+- Dou, Zhiyang, et al. "C·ASE: Learning conditional adversarial skill embeddings for physics-based characters." SIGGRAPH Asia 2023 Conference Papers. 2023.
+
+... and many more ongoing works.
+
+## Citation
+
+If you use Genesis in your research, we would appreciate a citation. We are still working on a technical report; until it is publicly available, please consider citing:
+
+```bibtex
+@software{Genesis,
+ author = {Genesis Authors},
+ title = {Genesis: A Universal and Generative Physics Engine for Robotics and Beyond},
+ month = {December},
+ year = {2024},
+ url = {https://github.com/Genesis-Embodied-AI/Genesis}
+}
+```
diff --git a/README_FR.md b/README_FR.md
new file mode 100644
index 00000000..511c1e0b
--- /dev/null
+++ b/README_FR.md
@@ -0,0 +1,181 @@
+![Genesis](imgs/big_text.png)
+
+![Teaser](imgs/teaser.png)
+
+[![PyPI - Version](https://img.shields.io/pypi/v/genesis-world)](https://pypi.org/project/genesis-world/)
+[![PyPI - Downloads](https://img.shields.io/pypi/dm/genesis-world)](https://pypi.org/project/genesis-world/)
+[![GitHub Issues](https://img.shields.io/github/issues/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/issues)
+[![GitHub Discussions](https://img.shields.io/github/discussions/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/discussions)
+
+[![README in English](https://img.shields.io/badge/English-d9d9d9)](./README.md)
+[![README en Français](https://img.shields.io/badge/Francais-d9d9d9)](./README_FR.md)
+[![简体中文版自述文件](https://img.shields.io/badge/简体中文-d9d9d9)](./README_CN.md)
+[![日本語版 README](https://img.shields.io/badge/日本語-d9d9d9)](./README_JA.md)
+
+
+# Genesis
+## 🔥 News
+- [2024-12-25] Added a [docker](#docker) including support for the ray-tracing renderer.
+- [2024-12-24] Added guidelines for [contributing to Genesis](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/CONTRIBUTING.md).
+
+## Table of Contents
+
+1. [What is Genesis?](#what-is-genesis)
+2. [Key Features](#key-features)
+3. [Quick Installation](#quick-installation)
+4. [Docker](#docker)
+5. [Documentation](#documentation)
+6. [Contributing to Genesis](#contributing-to-genesis)
+7. [Support](#support)
+8. [License and Acknowledgments](#license-and-acknowledgments)
+9. [Associated Papers](#associated-papers)
+10. [Citation](#citation)
+
+
+## What is Genesis?
+
+Genesis is a physics platform designed for general-purpose *Robotics/Embodied AI/Physical AI* applications. It is simultaneously multiple things:
+
+1. A **universal physics engine**, rebuilt from the ground up, capable of simulating a wide range of materials and physical phenomena.
+2. A **lightweight**, **ultra-fast**, **Pythonic**, and **user-friendly** robotics simulation platform.
+3. A powerful and fast **photorealistic rendering system**.
+4. A **generative data engine** that transforms natural-language descriptions into various modalities of data.
+
+Genesis aims to:
+
+- **Lower the barrier** to using physics simulations, making robotics research accessible to everyone. See our [mission statement](https://genesis-world.readthedocs.io/en/latest/user_guide/overview/mission.html).
+- **Unify diverse physics solvers** within a single framework to recreate the physical world with the highest fidelity.
+- **Automate data generation**, reducing human effort and letting the data ecosystem run on its own.
+
+Project page:
+
+
+## Key Features
+
+- **Speed**: over 43 million FPS when simulating a Franka robot arm on a single RTX 4090 (430,000× faster than real time).
+- **Cross-platform**: runs on Linux, macOS, and Windows, and supports multiple compute backends (CPU, Nvidia/AMD GPUs, Apple Metal).
+- **Unification of diverse physics solvers**: rigid body, MPM, SPH, FEM, PBD, Stable Fluid.
+- **Wide range of material models**: simulation and coupling of rigid bodies, liquids, gases, deformable objects, thin-shell objects, and granular materials.
+- **Compatibility with various robots**: robot arms, legged robots, drones, *soft robots*, with support for loading `MJCF (.xml)`, `URDF`, `.obj`, `.glb`, `.ply`, `.stl`, and more.
+- **Photorealistic rendering**: native ray-tracing-based rendering.
+- **Differentiability**: Genesis is designed to be fully differentiable. Currently, our MPM solver and Tool Solver support differentiability, with other solvers planned in upcoming releases (starting with the rigid and articulated body solver).
+- **Physics-based tactile simulation**: a differentiable [tactile sensor simulation](https://github.com/Genesis-Embodied-AI/DiffTactile) is under development (planned for version 0.3.0).
+- **Ease of use**: designed to be simple, with an intuitive installation process and user-friendly APIs.
+
+## Quick Installation
+
+Genesis is available via PyPI:
+
+```bash
+pip install genesis-world  # requires Python >= 3.9
+```
+
+You also need to install **PyTorch** following the [official instructions](https://pytorch.org/get-started/locally/).
+
+For the latest version, clone the repository and install locally:
+
+```bash
+git clone https://github.com/Genesis-Embodied-AI/Genesis.git
+cd Genesis
+pip install -e .
+```
+
+## Docker
+
+If you want to use Genesis with Docker, you can first build the Docker image as follows:
+
+```bash
+docker build -t genesis -f docker/Dockerfile docker
+```
+
+Then you can run the examples inside the Docker image (mounted at `/workspace/examples`):
+
+```bash
+xhost +local:root  # allow the container to access the display
+
+docker run --gpus all --rm -it \
+-e DISPLAY=$DISPLAY \
+-v /tmp/.X11-unix/:/tmp/.X11-unix \
+-v $PWD:/workspace \
+genesis
+```
+
+## Documentation
+
+Comprehensive documentation is available in [English](https://genesis-world.readthedocs.io/en/latest/user_guide/index.html) and [Chinese](https://genesis-world.readthedocs.io/zh-cn/latest/user_guide/index.html). It includes detailed installation steps, tutorials, and API references.
+
+
+## Contributing to Genesis
+
+The Genesis project is an open and collaborative effort. We welcome all forms of contribution from the community, including:
+
+- **Pull requests** for new features or bug fixes.
+- **Bug reports** via GitHub Issues.
+- **Suggestions** to improve Genesis's usability.
+
+See our [contribution guide](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/CONTRIBUTING.md) for more details.
+
+## Support
+
+- Report bugs or request features via GitHub [Issues](https://github.com/Genesis-Embodied-AI/Genesis/issues).
+- Join discussions or ask questions on GitHub [Discussions](https://github.com/Genesis-Embodied-AI/Genesis/discussions).
+
+
+## License and Acknowledgments
+
+The Genesis source code is licensed under Apache 2.0.
+
+Genesis's development was made possible thanks to these open-source projects:
+
+- [Taichi](https://github.com/taichi-dev/taichi): high-performance, cross-platform compute backend. Thanks to the Taichi team for their technical support!
+- [FluidLab](https://github.com/zhouxian/FluidLab): reference MPM solver implementation.
+- [SPH_Taichi](https://github.com/erizmr/SPH_Taichi): reference SPH solver implementation.
+- [Ten Minute Physics](https://matthias-research.github.io/pages/tenMinutePhysics/index.html) and [PBF3D](https://github.com/WASD4959/PBF3D): reference PBD solver implementations.
+- [MuJoCo](https://github.com/google-deepmind/mujoco): reference for rigid-body dynamics.
+- [libccd](https://github.com/danfis/libccd): reference for collision detection.
+- [PyRender](https://github.com/mmatl/pyrender): rasterization-based rendering.
+- [LuisaCompute](https://github.com/LuisaGroup/LuisaCompute) and [LuisaRender](https://github.com/LuisaGroup/LuisaRender): ray-tracing DSL.
+
+
+## Associated Papers
+
+Genesis is a large-scale project that integrates state-of-the-art technologies from various existing and ongoing research efforts into a single system. Below is a non-exhaustive list of the papers that contributed to the Genesis project in one way or another:
+
+- Xian, Zhou, et al. "Fluidlab: A differentiable environment for benchmarking complex fluid manipulation." arXiv preprint arXiv:2303.02346 (2023).
+- Xu, Zhenjia, et al. "Roboninja: Learning an adaptive cutting policy for multi-material objects." arXiv preprint arXiv:2302.11553 (2023).
+- Wang, Yufei, et al. "Robogen: Towards unleashing infinite data for automated robot learning via generative simulation." arXiv preprint arXiv:2311.01455 (2023).
+- Wang, Tsun-Hsuan, et al. "Softzoo: A soft robot co-design benchmark for locomotion in diverse environments." arXiv preprint arXiv:2303.09555 (2023).
+- Wang, Tsun-Hsuan Johnson, et al. "Diffusebot: Breeding soft robots with physics-augmented generative diffusion models." Advances in Neural Information Processing Systems 36 (2023): 44398-44423.
+- Katara, Pushkal, Zhou Xian, and Katerina Fragkiadaki. "Gen2sim: Scaling up robot learning in simulation with generative models." 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024.
+- Si, Zilin, et al. "DiffTactile: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation." arXiv preprint arXiv:2403.08716 (2024).
+- Wang, Yian, et al. "Thin-Shell Object Manipulations With Differentiable Physics Simulations." arXiv preprint arXiv:2404.00451 (2024).
+- Lin, Chunru, et al. "UBSoft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments." arXiv preprint arXiv:2411.12711 (2024).
+- Zhou, Wenyang, et al. "EMDM: Efficient motion diffusion model for fast and high-quality motion generation." European Conference on Computer Vision. Springer, Cham, 2025.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Scalable differentiable physics for learning and control." International Conference on Machine Learning. PMLR, 2020.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Efficient differentiable simulation of articulated bodies." In International Conference on Machine Learning, PMLR, 2021.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming Lin. "Differentiable simulation of soft multi-body systems." Advances in Neural Information Processing Systems 34 (2021).
+- Wan, Weilin, et al. "Tlcontrol: Trajectory and language control for human motion synthesis." arXiv preprint arXiv:2311.17135 (2023).
+- Wang, Yian, et al. "Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting." arXiv preprint arXiv:2411.09823 (2024).
+- Zheng, Shaokun, et al. "LuisaRender: A high-performance rendering framework with layered and unified interfaces on stream architectures." ACM Transactions on Graphics (TOG) 41.6 (2022): 1-19.
+- Fan, Yingruo, et al. "Faceformer: Speech-driven 3d facial animation with transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
+- Wu, Sichun, Kazi Injamamul Haque, and Zerrin Yumak. "ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE." Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games. 2024.
+- Dou, Zhiyang, et al. "C·ASE: Learning conditional adversarial skill embeddings for physics-based characters." SIGGRAPH Asia 2023 Conference Papers. 2023.
+
+... and many more ongoing works.
+
+
+## Citation
+
+If you use Genesis in your research, please consider citing:
+
+```bibtex
+@software{Genesis,
+ author = {Genesis Authors},
+ title = {Genesis: A Universal and Generative Physics Engine for Robotics and Beyond},
+ month = {December},
+ year = {2024},
+ url = {https://github.com/Genesis-Embodied-AI/Genesis}
+}
+```
\ No newline at end of file
diff --git a/README_JA.md b/README_JA.md
index 254ec07c..77013b24 100644
--- a/README_JA.md
+++ b/README_JA.md
@@ -1,171 +1,172 @@
-![Genesis](imgs/big_text.png)
-
-![Teaser](imgs/teaser.png)
-
-[![PyPI - Version](https://img.shields.io/pypi/v/genesis-world)](https://pypi.org/project/genesis-world/)
-[![PyPI - Downloads](https://img.shields.io/pypi/dm/genesis-world)](https://pypi.org/project/genesis-world/)
-[![GitHub Issues](https://img.shields.io/github/issues/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/issues)
-[![GitHub Discussions](https://img.shields.io/github/discussions/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/discussions)
-
-[![README in English](https://img.shields.io/badge/English-d9d9d9)](./README.md)
-[![简体中文版自述文件](https://img.shields.io/badge/简体中文-d9d9d9)](./README_CN.md)
-[![日本語版 README](https://img.shields.io/badge/日本語-d9d9d9)](./README_JA.md)
-
-# Genesis
-## 🔥 最新情報
-- [2024-12-25] [レイトレーシングレンダラー](#docker)をサポートするDockerを追加しました。
-- [2024-12-24] [Genesisへの貢献方法](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/CONTRIBUTING.md)に関するガイドラインを追加しました。
-
-## 目次
-
-1. [Genesisとは?](#what-is-genesis)
-2. [主な機能](#key-features)
-3. [簡単インストール](#quick-installation)
-4. [Docker](#docker)
-5. [ドキュメント](#documentation)
-6. [Genesisへの貢献](#contributing-to-genesis)
-7. [サポート](#support)
-8. [ライセンスと謝辞](#license-and-acknowledgments)
-9. [関連論文](#associated-papers)
-10. [引用](#citation)
-
-## Genesisとは?
-
-Genesisは、汎用的な*ロボティクス/身体性を持ったAI*アプリケーション向けに設計された物理シミュレーションプラットフォームです。このプラットフォームは以下のような特徴があります:
-
-1. あらゆる種類の材料や物理現象をシミュレート可能な**汎用物理エンジン**。
-2. **軽量**、**超高速**、**Python的**、そして**ユーザーフレンドリー**なロボティクスシミュレーションプラットフォーム。
-3. 高速で強力な**フォトリアリスティックなレンダリングシステム**。
-4. ユーザーの自然言語による指示をもとに様々なデータモダリティを生成する**生成型データエンジン**。
-
-Genesisの目指すところ:
-
-- **物理シミュレーションのハードルを下げ**、ロボティクス研究を誰でもアクセス可能にすること。詳細は[ミッションステートメント](https://genesis-world.readthedocs.io/en/latest/user_guide/overview/mission.html)をご覧ください。
-- **多様な物理ソルバーを統合**し、最高の忠実度で物理世界を再現すること。
-- **データ生成を自動化**し、人間の労力を削減し、データ生成の効率を最大化すること。
-
-プロジェクトページ:
-
-## 主な機能
-
-- **速度**: RTX 4090単体でフランカロボットアームを4300万FPS(リアルタイムの43万倍速)でシミュレーション可能。
-- **クロスプラットフォーム**: Linux、macOS、Windowsで動作し、CPU、Nvidia/AMD GPU、Apple Metalをサポート。
-- **多様な物理ソルバーの統合**: 剛体、MPM、SPH、FEM、PBD、安定流体シミュレーション。
-- **幅広い材料モデル**: 剛体、液体、気体、変形体、薄膜オブジェクト、粒状材料などをシミュレーション可能。
-- **様々なロボットへの対応**: ロボットアーム、脚付きロボット、ドローン、*ソフトロボット*など。また、`MJCF (.xml)`、`URDF`、`.obj`、`.glb`、`.ply`、`.stl`などの形式をサポート。
-- **フォトリアルなレンダリング**: レイトレーシングベースのレンダリングをネイティブでサポート。
-- **微分可能性**: 完全な微分可能性を備えた設計。現時点では、MPMソルバーとツールソルバーが対応しており、将来的には他のソルバーも対応予定(まず剛体および連結体ソルバーから開始)。
-- **物理ベースの触覚シミュレーション**: 微分可能な[触覚センサーシミュレーション](https://github.com/Genesis-Embodied-AI/DiffTactile)が近日公開予定(バージョン0.3.0を予定)。
-- **ユーザーフレンドリー**: シンプルで直感的なインストールとAPI設計。
-
-## インストール
-
-GenesisはPyPIで利用可能です:
-
-```bash
-pip install genesis-world # Python >=3.9 が必要です;
-```
-
-また、**PyTorch**を[公式手順](https://pytorch.org/get-started/locally/)に従ってインストールする必要があります。
-
-最新バージョンを利用するには、リポジトリをクローンしてローカルにインストールしてください:
-
-```bash
-git clone https://github.com/Genesis-Embodied-AI/Genesis.git
-cd Genesis
-pip install -e .
-```
-
-## Docker
-
-DockerからGenesisを利用する場合は、まずDockerイメージをビルドします:
-
-```bash
-docker build -t genesis -f docker/Dockerfile docker
-```
-
-その後、Dockerイメージ内で例を実行できます(`/workspace/examples`にマウント):
-
-```bash
-xhost +local:root # コンテナがディスプレイにアクセスできるようにする
-
-docker run --gpus all --rm -it \
--e DISPLAY=$DISPLAY \
--v /tmp/.X11-unix/:/tmp/.X11-unix \
--v $PWD:/workspace \
-genesis
-```
-
-## ドキュメント
-
-包括的なドキュメントは現時点では[英語](https://genesis-world.readthedocs.io/en/latest/user_guide/index.html)および[中国語](https://genesis-world.readthedocs.io/zh-cn/latest/user_guide/index.html)で提供されています。詳細なインストール手順、チュートリアル、APIリファレンスが含まれています。
-
-## Genesisへの貢献
-
-Genesisプロジェクトはオープンで協力的な取り組みです。以下を含む、コミュニティからのあらゆる貢献を歓迎します:
-
-- 新機能やバグ修正のための**プルリクエスト**。
-- GitHub Issuesを通じた**バグ報告**。
-- Genesisの使いやすさを向上させるための**提案**。
-
-詳細は[貢献ガイド](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/CONTRIBUTING.md)をご参照ください。
-
-## サポート
-
-- バグ報告や機能リクエストはGitHubの[Issues](https://github.com/Genesis-Embodied-AI/Genesis/issues)をご利用ください。
-- 議論や質問はGitHubの[Discussions](https://github.com/Genesis-Embodied-AI/Genesis/discussions)で行えます。
-
-## ライセンスと謝辞
-
-GenesisのソースコードはApache 2.0ライセンスで提供されています。
-
-Genesisの開発は以下のオープンソースプロジェクトのおかげで可能になりました:
-
-- [Taichi](https://github.com/taichi-dev/taichi): 高性能でクロスプラットフォーム対応の計算バックエンド。Taichiチームの技術サポートに感謝します!
-- [FluidLab](https://github.com/zhouxian/FluidLab): 参照用のMPMソルバー実装。
-- [SPH_Taichi](https://github.com/erizmr/SPH_Taichi): 参照用のSPHソルバー実装。
-- [Ten Minute Physics](https://matthias-research.github.io/pages/tenMinutePhysics/index.html) と [PBF3D](https://github.com/WASD4959/PBF3D): 参照用のPBD(粒子ベースの物理)ソルバー実装。
-- [MuJoCo](https://github.com/google-deepmind/mujoco): 剛体ダイナミクスの参照用実装。
-- [libccd](https://github.com/danfis/libccd): 衝突検出の参照用実装。
-- [PyRender](https://github.com/mmatl/pyrender): ラスタライズベースのレンダラー。
-- [LuisaCompute](https://github.com/LuisaGroup/LuisaCompute) と [LuisaRender](https://github.com/LuisaGroup/LuisaRender): レイトレーシングDSL。
-
-## 関連論文
-
-Genesisプロジェクトに関与した主要な研究論文の一覧:
-
-- Xian, Zhou, et al. "Fluidlab: A differentiable environment for benchmarking complex fluid manipulation." arXiv preprint arXiv:2303.02346 (2023).
-- Xu, Zhenjia, et al. "Roboninja: Learning an adaptive cutting policy for multi-material objects." arXiv preprint arXiv:2302.11553 (2023).
-- Wang, Yufei, et al. "Robogen: Towards unleashing infinite data for automated robot learning via generative simulation." arXiv preprint arXiv:2311.01455 (2023).
-- Wang, Tsun-Hsuan, et al. "Softzoo: A soft robot co-design benchmark for locomotion in diverse environments." arXiv preprint arXiv:2303.09555 (2023).
-- Wang, Tsun-Hsuan Johnson, et al. "Diffusebot: Breeding soft robots with physics-augmented generative diffusion models." Advances in Neural Information Processing Systems 36 (2023): 44398-44423.
-- Katara, Pushkal, Zhou Xian, and Katerina Fragkiadaki. "Gen2sim: Scaling up robot learning in simulation with generative models." 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024.
-- Si, Zilin, et al. "DiffTactile: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation." arXiv preprint arXiv:2403.08716 (2024).
-- Wang, Yian, et al. "Thin-Shell Object Manipulations With Differentiable Physics Simulations." arXiv preprint arXiv:2404.00451 (2024).
-- Lin, Chunru, et al. "UBSoft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments." arXiv preprint arXiv:2411.12711 (2024).
-- Zhou, Wenyang, et al. "EMDM: Efficient motion diffusion model for fast and high-quality motion generation." European Conference on Computer Vision. Springer, Cham, 2025.
-- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Scalable differentiable physics for learning and control." International Conference on Machine Learning. PMLR, 2020.
-- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Efficient differentiable simulation of articulated bodies." In International Conference on Machine Learning, PMLR, 2021.
-- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming Lin. "Differentiable simulation of soft multi-body systems." Advances in Neural Information Processing Systems 34 (2021).
-- Wan, Weilin, et al. "Tlcontrol: Trajectory and language control for human motion synthesis." arXiv preprint arXiv:2311.17135 (2023).
-- Wang, Yian, et al. "Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting." arXiv preprint arXiv:2411.09823 (2024).
-- Zheng, Shaokun, et al. "LuisaRender: A high-performance rendering framework with layered and unified interfaces on stream architectures." ACM Transactions on Graphics (TOG) 41.6 (2022): 1-19.
-- Fan, Yingruo, et al. "Faceformer: Speech-driven 3d facial animation with transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
-- Wu, Sichun, Kazi Injamamul Haque, and Zerrin Yumak. "ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE." Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games. 2024.
-- Dou, Zhiyang, et al. "C· ase: Learning conditional adversarial skill embeddings for physics-based characters." SIGGRAPH Asia 2023 Conference Papers. 2023.
-
-さらに多数の現在進行形のプロジェクトがあります。
-
-## 引用
-
-研究でGenesisを使用する場合、以下を引用してください:
-
-```bibtex
-@software{Genesis,
- author = {Genesis Authors},
- title = {Genesis: A Universal and Generative Physics Engine for Robotics and Beyond},
- month = {December},
- year = {2024},
- url = {https://github.com/Genesis-Embodied-AI/Genesis}
-}
+![Genesis](imgs/big_text.png)
+
+![Teaser](imgs/teaser.png)
+
+[![PyPI - Version](https://img.shields.io/pypi/v/genesis-world)](https://pypi.org/project/genesis-world/)
+[![PyPI - Downloads](https://img.shields.io/pypi/dm/genesis-world)](https://pypi.org/project/genesis-world/)
+[![GitHub Issues](https://img.shields.io/github/issues/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/issues)
+[![GitHub Discussions](https://img.shields.io/github/discussions/Genesis-Embodied-AI/Genesis)](https://github.com/Genesis-Embodied-AI/Genesis/discussions)
+
+[![README in English](https://img.shields.io/badge/English-d9d9d9)](./README.md)
+[![README en Français](https://img.shields.io/badge/Francais-d9d9d9)](./README_FR.md)
+[![简体中文版自述文件](https://img.shields.io/badge/简体中文-d9d9d9)](./README_CN.md)
+[![日本語版 README](https://img.shields.io/badge/日本語-d9d9d9)](./README_JA.md)
+
+# Genesis
+## 🔥 News
+- [2024-12-25] Added a [docker](#docker) including support for the ray-tracing renderer.
+- [2024-12-24] Added guidelines for [contributing to Genesis](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/CONTRIBUTING.md).
+
+## Table of Contents
+
+1. [What is Genesis?](#what-is-genesis)
+2. [Key Features](#key-features)
+3. [Quick Installation](#quick-installation)
+4. [Docker](#docker)
+5. [Documentation](#documentation)
+6. [Contributing to Genesis](#contributing-to-genesis)
+7. [Support](#support)
+8. [License and Acknowledgments](#license-and-acknowledgments)
+9. [Associated Papers](#associated-papers)
+10. [Citation](#citation)
+
+## What is Genesis?
+
+Genesis is a physics simulation platform designed for general-purpose *Robotics/Embodied AI* applications. The platform offers:
+
+1. A **universal physics engine** capable of simulating all kinds of materials and physical phenomena.
+2. A **lightweight**, **ultra-fast**, **Pythonic**, and **user-friendly** robotics simulation platform.
+3. A fast and powerful **photorealistic rendering system**.
+4. A **generative data engine** that produces various data modalities from users' natural-language instructions.
+
+Genesis aims to:
+
+- **Lower the barrier** to physics simulation, making robotics research accessible to everyone. See the [mission statement](https://genesis-world.readthedocs.io/en/latest/user_guide/overview/mission.html) for details.
+- **Unify diverse physics solvers** to recreate the physical world with the highest fidelity.
+- **Automate data generation**, reducing human effort and maximizing the efficiency of data generation.
+
+Project page:
+
+## Key Features
+
+- **Speed**: simulates a Franka robot arm at 43 million FPS (430,000× faster than real time) on a single RTX 4090.
+- **Cross-platform**: runs on Linux, macOS, and Windows, supporting CPU, Nvidia/AMD GPU, and Apple Metal backends.
+- **Unification of diverse physics solvers**: rigid body, MPM, SPH, FEM, PBD, and Stable Fluid simulation.
+- **Wide range of material models**: rigid bodies, liquids, gases, deformable objects, thin-shell objects, granular materials, and more.
+- **Support for various robots**: robot arms, legged robots, drones, *soft robots*, and so on, with support for formats such as `MJCF (.xml)`, `URDF`, `.obj`, `.glb`, `.ply`, and `.stl`.
+- **Photorealistic rendering**: native support for ray-tracing-based rendering.
+- **Differentiability**: designed for full differentiability. Currently the MPM solver and Tool solver are supported, with other solvers to follow (starting with the rigid and articulated body solver).
+- **Physics-based tactile simulation**: a differentiable [tactile sensor simulation](https://github.com/Genesis-Embodied-AI/DiffTactile) is coming soon (planned for version 0.3.0).
+- **User-friendly**: simple, intuitive installation and API design.
+
+## Quick Installation
+
+Genesis is available on PyPI:
+
+```bash
+pip install genesis-world  # requires Python >= 3.9
+```
+
+You also need to install **PyTorch** following the [official instructions](https://pytorch.org/get-started/locally/).
+
+For the latest version, clone the repository and install locally:
+
+```bash
+git clone https://github.com/Genesis-Embodied-AI/Genesis.git
+cd Genesis
+pip install -e .
+```
+
+## Docker
+
+To use Genesis from Docker, first build the Docker image:
+
+```bash
+docker build -t genesis -f docker/Dockerfile docker
+```
+
+Then you can run the examples inside the Docker image (mounted at `/workspace/examples`):
+
+```bash
+xhost +local:root  # allow the container to access the display
+
+docker run --gpus all --rm -it \
+-e DISPLAY=$DISPLAY \
+-v /tmp/.X11-unix/:/tmp/.X11-unix \
+-v $PWD:/workspace \
+genesis
+```
+
+## Documentation
+
+Comprehensive documentation is currently available in [English](https://genesis-world.readthedocs.io/en/latest/user_guide/index.html) and [Chinese](https://genesis-world.readthedocs.io/zh-cn/latest/user_guide/index.html). It includes detailed installation steps, tutorials, and API references.
+
+## Contributing to Genesis
+
+The Genesis project is an open and collaborative effort. We welcome all contributions from the community, including:
+
+- **Pull requests** for new features or bug fixes.
+- **Bug reports** via GitHub Issues.
+- **Suggestions** to improve Genesis's usability.
+
+See the [contribution guide](https://github.com/Genesis-Embodied-AI/Genesis/blob/main/CONTRIBUTING.md) for details.
+
+## Support
+
+- Use GitHub [Issues](https://github.com/Genesis-Embodied-AI/Genesis/issues) for bug reports and feature requests.
+- Use GitHub [Discussions](https://github.com/Genesis-Embodied-AI/Genesis/discussions) for discussions and questions.
+
+## License and Acknowledgments
+
+The Genesis source code is licensed under Apache 2.0.
+
+Genesis's development was made possible by the following open-source projects:
+
+- [Taichi](https://github.com/taichi-dev/taichi): high-performance, cross-platform compute backend. Thanks to the Taichi team for their technical support!
+- [FluidLab](https://github.com/zhouxian/FluidLab): reference MPM solver implementation.
+- [SPH_Taichi](https://github.com/erizmr/SPH_Taichi): reference SPH solver implementation.
+- [Ten Minute Physics](https://matthias-research.github.io/pages/tenMinutePhysics/index.html) and [PBF3D](https://github.com/WASD4959/PBF3D): reference PBD (position-based dynamics) solver implementations.
+- [MuJoCo](https://github.com/google-deepmind/mujoco): reference for rigid-body dynamics.
+- [libccd](https://github.com/danfis/libccd): reference for collision detection.
+- [PyRender](https://github.com/mmatl/pyrender): rasterization-based renderer.
+- [LuisaCompute](https://github.com/LuisaGroup/LuisaCompute) and [LuisaRender](https://github.com/LuisaGroup/LuisaRender): ray-tracing DSL.
+
+## Associated Papers
+
+A non-exhaustive list of the main research papers involved in the Genesis project:
+
+- Xian, Zhou, et al. "Fluidlab: A differentiable environment for benchmarking complex fluid manipulation." arXiv preprint arXiv:2303.02346 (2023).
+- Xu, Zhenjia, et al. "Roboninja: Learning an adaptive cutting policy for multi-material objects." arXiv preprint arXiv:2302.11553 (2023).
+- Wang, Yufei, et al. "Robogen: Towards unleashing infinite data for automated robot learning via generative simulation." arXiv preprint arXiv:2311.01455 (2023).
+- Wang, Tsun-Hsuan, et al. "Softzoo: A soft robot co-design benchmark for locomotion in diverse environments." arXiv preprint arXiv:2303.09555 (2023).
+- Wang, Tsun-Hsuan Johnson, et al. "Diffusebot: Breeding soft robots with physics-augmented generative diffusion models." Advances in Neural Information Processing Systems 36 (2023): 44398-44423.
+- Katara, Pushkal, Zhou Xian, and Katerina Fragkiadaki. "Gen2sim: Scaling up robot learning in simulation with generative models." 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024.
+- Si, Zilin, et al. "DiffTactile: A Physics-based Differentiable Tactile Simulator for Contact-rich Robotic Manipulation." arXiv preprint arXiv:2403.08716 (2024).
+- Wang, Yian, et al. "Thin-Shell Object Manipulations With Differentiable Physics Simulations." arXiv preprint arXiv:2404.00451 (2024).
+- Lin, Chunru, et al. "UBSoft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments." arXiv preprint arXiv:2411.12711 (2024).
+- Zhou, Wenyang, et al. "EMDM: Efficient motion diffusion model for fast and high-quality motion generation." European Conference on Computer Vision. Springer, Cham, 2025.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Scalable differentiable physics for learning and control." International Conference on Machine Learning. PMLR, 2020.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming C. Lin. "Efficient differentiable simulation of articulated bodies." International Conference on Machine Learning. PMLR, 2021.
+- Qiao, Yi-Ling, Junbang Liang, Vladlen Koltun, and Ming Lin. "Differentiable simulation of soft multi-body systems." Advances in Neural Information Processing Systems 34 (2021).
+- Wan, Weilin, et al. "Tlcontrol: Trajectory and language control for human motion synthesis." arXiv preprint arXiv:2311.17135 (2023).
+- Wang, Yian, et al. "Architect: Generating Vivid and Interactive 3D Scenes with Hierarchical 2D Inpainting." arXiv preprint arXiv:2411.09823 (2024).
+- Zheng, Shaokun, et al. "LuisaRender: A high-performance rendering framework with layered and unified interfaces on stream architectures." ACM Transactions on Graphics (TOG) 41.6 (2022): 1-19.
+- Fan, Yingruo, et al. "Faceformer: Speech-driven 3d facial animation with transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
+- Wu, Sichun, Kazi Injamamul Haque, and Zerrin Yumak. "ProbTalk3D: Non-Deterministic Emotion Controllable Speech-Driven 3D Facial Animation Synthesis Using VQ-VAE." Proceedings of the 17th ACM SIGGRAPH Conference on Motion, Interaction, and Games. 2024.
+- Dou, Zhiyang, et al. "C·ASE: Learning conditional adversarial skill embeddings for physics-based characters." SIGGRAPH Asia 2023 Conference Papers. 2023.
+
+...and many more ongoing projects.
+
+## Citation
+
+If you use Genesis in your research, please cite:
+
+```bibtex
+@software{Genesis,
+  author = {Genesis Authors},
+  title = {Genesis: A Universal and Generative Physics Engine for Robotics and Beyond},
+  month = {December},
+  year = {2024},
+  url = {https://github.com/Genesis-Embodied-AI/Genesis}
+}
+```
\ No newline at end of file