A contact solver for physics-based simulations involving shells, solids, and rods. All made by ZOZO. Published in ACM Transactions on Graphics (TOG).
- Robust: Contact resolutions are penetration-free. No snagging intersections.
- Scalable: An extreme case involves more than 150 million contacts, not just one million.
- Cache Efficient: Everything runs on the GPU in single precision; no double precision is used.
- Inextensible: Cloth never extends beyond very strict upper bounds, such as 1%.
- Physically Accurate: Our deformable solver is driven by the Finite Element Method.
- Highly Stressed: Our GitHub Actions run stress tests 10 times in a row.
- Massively Parallel: Both contact and elasticity solvers run on the GPU.
- Docker Sealed: Everything is designed to work out of the box.
- JupyterLab Included: Open your browser and run examples right away [Video].
- Documented Python APIs: Our Python code is fully docstringed and lintable [Video].
- Cloud-Ready: Our solver can be seamlessly deployed on major cloud platforms.
- Stay Clean: You can remove all traces after use.
- Change History
- Technical Materials
- Requirements
- Getting Started
- How To Use
- Python APIs and Parameters
- Obtaining Logs
- Catalogue
- GitHub Actions
- Deploying on Cloud Services
- Acknowledgements
- Citation
- Setting Up Your Development Environment [Markdown]
- Bug Fixes and Updates [Markdown]
- (2025.1.8) Added a domino example [Video].
- (2025.1.5) Added a single twist example [Video].
- (2024.12.31) Added full documentation for Python APIs, parameters, and log files [GitHub Pages].
- (2024.12.27) Line search for strain limiting is improved [Markdown].
- (2024.12.23) Added [Bug Fixes and Updates].
- (2024.12.21) Added a house of cards example [Video].
- (2024.12.18) Added a frictional contact example: armadillo sliding on the slope [Video].
- (2024.12.18) Added a hindsight noting that the tilt angle was not $30^\circ$, but rather $26.57^\circ$.
- (2024.12.16) Removed thrust dependencies to fix runtime errors for driver version 560.94 [Issue Link].
- Main video [Video]
- Additional video examples [Directory]
- Presentation videos [Short][Long]
- Main paper [PDF][Hindsight]
- Supplementary PDF [PDF]
- Supplementary scripts [Directory]
- Singular-value eigenanalysis [Markdown]
- A modern NVIDIA GPU (Turing or newer)
- A Docker environment (see below)
Install an NVIDIA driver [Link] on your host system and follow the instructions below specific to your operating system to get Docker running:
| Linux | Windows |
|---|---|
| Install the Docker engine from here [Link]. Also, install the NVIDIA Container Toolkit [Link]. To make sure that the Container Toolkit is loaded, run `sudo service docker restart`. | Install Docker Desktop [Link]. You may need to log out or reboot after the installation. After logging back in, launch Docker Desktop to ensure that Docker is running. |
Next, run the following command to start the container:
Windows (PowerShell):

$MY_WEB_PORT = 8080 # Web port number for the web interface
$IMAGE_NAME = "ghcr.io/st-tech/ppf-contact-solver-compiled:latest"
docker run --rm --gpus all -p ${MY_WEB_PORT}:8080 $IMAGE_NAME

Linux (bash):

MY_WEB_PORT=8080 # Web port number for the web interface
IMAGE_NAME=ghcr.io/st-tech/ppf-contact-solver-compiled:latest
docker run --rm --gpus all -p ${MY_WEB_PORT}:8080 $IMAGE_NAME
Wait a while until the container reaches a steady state.
Next, open your browser and navigate to http://localhost:8080, where 8080 is the port number specified in the `MY_WEB_PORT` variable. Keep your terminal window open.

Now you are ready to go!

To shut down the container, just press `Ctrl+C` in the terminal. The container will be removed and all traces will be cleaned up.
If you wish to build the container from scratch, please refer to the cleaner installation guide [Markdown].
Our frontend is accessible through a browser using our built-in JupyterLab interface. Everything is set up when you open it for the first time. Results can be viewed interactively in the browser and exported as needed.
This allows you to interact with the simulator on your laptop while the actual simulation runs on a remote headless server over the internet. This means that you don't have to buy hardware; you can rent it at vast.ai or RunPod for less than $1 per hour. For example, this [Video] was recorded on a vast.ai instance. The experience is good!
Our Python interface is designed with the following principles in mind:
- Dynamic Tri/Tet Creation: Relying on non-integrated third-party tools for triangulation, tetrahedralization, and loading can make it difficult to dynamically adjust resolutions. Our built-in tri/tet creation tools eliminate this issue.
- No Mesh Data: Preparing mesh data using external tools can be cumbersome. Our frontend minimizes this effort by allowing meshes to be created on the fly or downloaded when needed.
- Method Chaining: We adopt the method-chaining style from JavaScript, making the API intuitive and easy to understand.
- Single Import for Everything: All frontend features are accessible by simply importing with `from frontend import App`.
Here's an example of draping five sheets over a sphere with two corners pinned. See the examples directory for more examples.
# import our frontend
from frontend import App
# make an app with the label "drape"
app = App("drape", renew=True)
# create a square mesh with resolution 128 spanning the xz plane
V, F = app.mesh.square(res=128, ex=[1,0,0], ey=[0,0,1])
# add to the asset and name it "sheet"
app.asset.add.tri("sheet", V, F)
# create an icosphere mesh with radius 0.5 and 5 subdivisions
V, F = app.mesh.icosphere(r=0.5, subdiv_count=5)
# add to the asset and name it "sphere"
app.asset.add.tri("sphere", V, F)
# create a scene "five-sheets"
scene = app.scene.create("five-sheets")
# define gap between sheets
gap = 0.01
for i in range(5):
# add a sheet to the scene
obj = scene.add("sheet")
# pick the two extreme vertices in directions [1,0,-1] and [-1,0,-1]
corner = obj.grab([1, 0, -1]) + obj.grab([-1, 0, -1])
# place it with a vertical offset and pin the corners
obj.at(0, gap * i, 0).pin(corner)
# set fiber directions required for the Baraff-Witkin model
obj.direction([1, 0, 0], [0, 0, 1])
# add a sphere mesh at a lower position and set it to a static collider
scene.add("sphere").at(0, -0.5 - gap, 0).pin()
# compile the scene and report stats
fixed = scene.build().report()
# interactively preview the built scene (image left)
fixed.preview()
# set simulation parameter(s)
param = app.session.param()
param.set("dt", 0.01)
# create a new session with the built scene
session = app.session.create(fixed)
# start the simulation and live-preview the results (image right)
session.start(param).preview()
# also show streaming logs
session.stream()
# or interactively view the animation sequences
session.animate()
# export all simulated frames and zip them
path = f"export/{scene.info.name}/{session.info.name}"
session.export.animation(path).zip()
- Full API documentation is available on our GitHub Pages. The major APIs are documented using docstrings and compiled with Sphinx. We have also included `jupyter-lsp` to provide interactive linting assistance and display docstrings as you type. See this video [Video] for an example. These behaviors can be changed through the settings.
- A list of parameters used in `param.set(key, value)` is documented here [GitHub Pages].
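For instance, here is a minimal sketch of setting parameters before starting a session. The `dt` key appears in the drape example above; `frames` is an assumed key, named only for illustration:

```python
# obtain the parameter object
param = app.session.param()

# "dt" is the time-step size used in the drape example above
param.set("dt", 0.001)

# "frames" is an assumed key here, shown only for illustration
param.set("frames", 120)

# create a session from the built scene and start it
session = app.session.create(fixed)
session.start(param)
```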
Note: Logs for the simulation can also be queried through the Python APIs. Here's an example of how to get a list of recorded logs, fetch them, and compute the average.
# get a list of log names
logs = session.get.log.names()
assert 'time-per-frame' in logs
assert 'newton-steps' in logs
# get a list of time per video frame
msec_per_video = session.get.log.numbers('time-per-frame')
# compute the average time per video frame
print('avg per frame:', sum([n for _,n in msec_per_video])/len(msec_per_video))
# get a list of newton steps
newton_steps = session.get.log.numbers('newton-steps')
# compute the average of consumed newton steps
print('avg newton steps:', sum([n for _,n in newton_steps])/len(newton_steps))
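If you work inside JupyterLab, such timings can also be plotted directly. A minimal sketch, assuming matplotlib is available in your environment (it is not necessarily bundled with the container):

```python
import matplotlib.pyplot as plt

# unpack the (vid_frame, ms) pairs fetched above
frames = [frame for frame, _ in msec_per_video]
times = [ms for _, ms in msec_per_video]

# plot the per-frame simulation cost
plt.plot(frames, times)
plt.xlabel("video frame")
plt.ylabel("msec per frame")
plt.show()
```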
Below are some representative entries. `vid_time` refers to the video time in seconds and is recorded as `float`. `ms` refers to the consumed simulation time in milliseconds, recorded as `int`. `vid_frame` is the video frame count, recorded as `int`.
| Name | Description | Format |
|---|---|---|
| time-per-frame | Time per video frame | `list[(vid_frame,ms)]` |
| matrix-assembly | Matrix assembly time | `list[(vid_time,ms)]` |
| pcg-linsolve | Linear system solve time | `list[(vid_time,ms)]` |
| line-search | Line search time | `list[(vid_time,ms)]` |
| time-per-step | Time per step | `list[(vid_time,ms)]` |
| newton-steps | Newton iterations per step | `list[(vid_time,count)]` |
| num-contact | Contact count | `list[(vid_time,count)]` |
| max-sigma | Max stretch | `list[(vid_time,float)]` |
The full list of log names and their descriptions is documented here: [GitHub Pages].
Note that some entries have multiple records at the same video time. This occurs because the same operation is executed multiple times within a single step during the inner Newton iterations. For example, the linear system solve is performed at each Newton step, so if multiple Newton steps are executed, multiple linear-system solve times appear in the record at the same video time.
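If you need a single value per video time, such duplicate records can be aggregated. A minimal sketch, assuming `pcg-linsolve` returns `(vid_time, ms)` pairs as listed in the table above:

```python
from collections import defaultdict

# fetch (vid_time, ms) pairs; several entries may share the same vid_time
records = session.get.log.numbers('pcg-linsolve')

# sum the solve times of all Newton steps within each video time
per_time = defaultdict(int)
for vid_time, ms in records:
    per_time[vid_time] += ms

# report the total linear system solve time per video time
for vid_time in sorted(per_time):
    print(f"{vid_time:.3f}s: {per_time[vid_time]} msec")
```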
If you would like to retrieve the raw log stream, you can do so as follows:
# Last 8 lines. Omit for everything.
for line in session.get.log.stream(n_lines=8):
print(line)
This will output something like:
* dt: 1.000e-03
* max_sigma: 1.045e+00
* avg_sigma: 1.030e+00
------ newton step 1 ------
====== contact_matrix_assembly ======
> dry_pass...0 msec
> rebuild...7 msec
> fillin_pass...0 msec
If you would like to read `stdout` and `stderr`, you can do so using `session.get.stdout()` and `session.get.stderr()` (if it exists). They return `list[str]`.
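As a minimal follow-up sketch, the captured output could be written to a file for later inspection (the file name here is arbitrary):

```python
# save captured stdout lines (a list[str]) to a file
lines = session.get.stdout()
with open("session-stdout.log", "w") as f:
    f.write("\n".join(lines))
```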
All the log files are available and can be fetched during the simulation.
| | | | |
|---|---|---|---|
| woven | stack [Video] | trampoline [Video] | needle [Video] |
| cards [Video] | codim | hang [Video] | trapped |
| domino [Video] | noodle | drape [Video] | twist [Video] |
| ribbon | curtain [Video] | fishingknot | friction [Video] |
At the moment, not all examples are ready yet, but they will be added/updated one by one. The author is actively working on it.
We implemented GitHub Actions that test all of our examples. We perform explicit intersection checks at the end of each step, raising an error if an intersection is detected. This ensures that all steps are confirmed to be penetration-free if the tests pass. The runner types are described as follows:
The tested runner of this action is the Ubuntu NVIDIA GPU-Optimized Image for AI and HPC with an NVIDIA Tesla T4 (16 GB VRAM) and driver version 550.127.05. This is not a self-hosted runner, meaning that each time the runner launches, all environments are fresh.
We use the GitHub-hosted runner, but the actual simulation runs on a provisioned vast.ai instance. We do this for performance and budget reasons. We choose an RTX 4090, which typically costs less than $0.50 per hour. Since we start with a fresh instance, the environment is clean every time. We take advantage of the ability to deploy on the cloud; this action is performed in parallel, which reduces the total action time.
We know that you can't judge the reliability of contact resolution simply by watching a success case in a single video. To ensure greater transparency, we run many of our examples via automated GitHub Actions, not just once, but 10 times in a row. This means that a single failure out of 10 tests is considered a failure of the entire test suite!
Also, we apply small jitters to the positions of objects in the scene, so each run simulates a slightly different scene.
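The jitter itself is applied by our test harness, but the idea can be sketched with the `.at(x, y, z)` placement API from the drape example above (the jitter magnitude here is arbitrary):

```python
import random

# illustration only: perturb each sheet's position by a small random
# jitter so that every test run simulates a slightly different scene
jitter = 1e-3
for i in range(5):
    obj = scene.add("sheet")
    dx = random.uniform(-jitter, jitter)
    dz = random.uniform(-jitter, jitter)
    obj.at(dx, gap * i, dz)
```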
Our long stress tests can fail for the following reasons:
- We are constantly updating our algorithms, which may introduce bugs. This stress test is indeed designed for this purpose.
- Failures can also be due to excessively difficult spots, which are unintended. An example is shown in the right inset.
- Occasionally, vast.ai instances shut down before simulations finish.
Our contact solver is designed for heavy use in cloud services, which enables:
- Cost-Effective Development: Quickly deploy testing environments and delete them when not in use, saving costs.
- Flexible Scalability: Scale as needed based on demand. For example, you can launch multiple instances before a specific deadline.
- High Accessibility: Allow anyone with an internet connection to try our solver, even on a smartphone or tablet.
- Easier Bug Tracking: Users and developers can easily share the same hardware, kernel, and driver environment, making it easier to track and fix bugs.
This is all made possible by our purely web-based frontend and scalable design. Our main target is the NVIDIA L4, a data-center GPU that offers reasonable pricing, delivering both practical performance and scalability without requiring investment in expensive hardware.
Below, we describe how to deploy our solver on major cloud services. These instructions are up to date as of late 2024 and are subject to change.
Important: For all the services below, don't forget to delete the instance after use, or you'll be charged for nothing.
Deploying on vast.ai
- Select our template [Link].
- Create an instance and click the `Open` button.
Deploying on RunPod
- Follow this link [Link] and deploy an instance using our template.
- Click the `Connect` button and open the `HTTP Services` link.
Deploying on Scaleway
- Set the zone to `fr-par-2`.
- Select type `L4-1-24G` or `GPU-3070-S`.
- Choose `Ubuntu Jammy GPU OS 12`.
- Do not skip the Docker container creation in the installation process; it is required.
- This setup costs approximately €0.76 per hour.
- CLI instructions are described in [Markdown].
Deploying on Amazon Web Services
- Amazon Machine Image (AMI): `Deep Learning Base OSS Nvidia Driver GPU AMI (Ubuntu 22.04)`
- Instance Type: `g6.2xlarge` (recommended)
- This setup costs around $1 per hour.
- Do not skip the Docker container creation in the installation process; it is required.
Deploying on Google Compute Engine
- Select `GPUs`. We recommend the GPU type `NVIDIA L4` because it's affordable and accessible, as it does not require a high quota. You may select `T4` instead for testing purposes.
- Do not check `Enable Virtual Workstation (NVIDIA GRID)`.
- We recommend the machine type `g2-standard-8`.
- Choose the OS type `Deep Learning VM with CUDA 11.8 M126` and set the disk size to `50GB`.
- As of late 2024, this configuration costs approximately $0.86 per hour in `us-central1 (Iowa)` and $1.00 per hour in `asia-east1 (Taiwan)`.
- Port number `8080` is reserved by the OS image. Set `$MY_WEB_PORT` to `8888`. When connecting via `gcloud`, use the following format: `gcloud compute ssh --zone "xxxx" "instance-name" -- -L 8080:localhost:8888`.
- Do not skip the Docker container creation in the installation process; it is required.
- CLI instructions are described in [Markdown].
The author would like to thank ZOZO, Inc. for allowing him to work on this topic as part of his main workload. The author also extends thanks to the teams in the IP department for permitting the publication of our technical work and the release of our code, as well as to many others for assisting with the internal paperwork required for publication.
@article{Ando2024CB,
author = {Ando, Ryoichi},
title = {A Cubic Barrier with Elasticity-Inclusive Dynamic Stiffness},
year = {2024},
issue_date = {December 2024},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {43},
number = {6},
issn = {0730-0301},
url = {https://doi.org/10.1145/3687908},
doi = {10.1145/3687908},
journal = {ACM Trans. Graph.},
month = nov,
articleno = {224},
numpages = {13},
keywords = {collision, contact}
}
It should be emphasized that this work was strongly inspired by the IPC. The author kindly encourages citing their original work as well.