Large data sets: Parallelism and streaming #151
DanielVandH
started this conversation in Ideas
It could be nice to have a feature for computing triangulations in parallel for very large data sets. I don't know much about this and haven't looked far into it beyond knowing that many algorithms exist, e.g. ones that use constrained triangulations to split the points into chunks, but at least this issue can track any developments and let anyone who sees it offer suggestions. Streaming methods are also interesting: https://www.cs.unc.edu/~isenburg/papers/ilss-scdt-06.pdf computes a triangulation of an impressive 1 billion points.
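To make the chunking idea a bit more concrete, here is a minimal sketch of just the divide step, using the package's exported `triangulate` and Julia's task-based threading. The slab split by x-coordinate is purely illustrative (not the constrained-triangulation scheme mentioned above), and the merge step, stitching the chunk triangulations back together along their shared boundaries, is the actual hard part and is not attempted here:

```julia
using DelaunayTriangulation # for triangulate
using Base.Threads: @spawn

# Split the points into nchunks spatial slabs by x-coordinate and
# triangulate each slab on its own task. Stitching the per-chunk
# triangulations together is the hard part and is left out entirely.
function chunked_triangulations(points, nchunks)
    sorted = sort(points; by = first) # naive slab decomposition by x
    chunk_size = cld(length(sorted), nchunks)
    chunks = Iterators.partition(sorted, chunk_size)
    tasks = [@spawn triangulate(collect(chunk)) for chunk in chunks]
    return fetch.(tasks) # one Triangulation per chunk
end

points = [(rand(), rand()) for _ in 1:10_000]
tris = chunked_triangulations(points, Threads.nthreads())
```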
Considering large data sets would also probably mean more thought needs to be put into how `Triangulation` is designed. The `Graph`, for example, probably shouldn't exist; instead, graph operations should be computed on the fly (e.g. `get_neighbours` can be exactly replaced by `get_adjacent` at the cost of some extra allocations; see the sketch below). Maybe we should do a `LazyTriangulation` which stores only what is needed, with all other operations computed on demand (`Lazy` sounds like the triangulation isn't computed at all though, which is of course wrong - it's the topology relations etc. that would be computed lazily).
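As a sketch of that on-the-fly idea: with only the adjacent map (directed edge `(u, v)` mapped to the vertex `w` of the positively oriented triangle `(u, v, w)`), the neighbours of a vertex can be recovered by rotating around it, which is what a lazy `get_neighbours` would do. A self-contained toy version, with a plain `Dict` standing in for the `Adjacent` structure (the walk and the single-triangle example are illustrative, not the package's implementation):

```julia
# Adjacent map for a single positively oriented triangle (1, 2, 3):
# each directed edge (u, v) maps to the third vertex w.
adjacent = Dict((1, 2) => 3, (2, 3) => 1, (3, 1) => 2)

# Recover the neighbours of u by rotating around it: starting from one
# known neighbour v0, the next neighbour is adjacent[(u, v)], stopping
# once we return to the start. A real implementation would also need to
# walk the other way when u is on the boundary.
function lazy_neighbours(adjacent, u, v0)
    neighbours = Int[]
    v = v0
    while true
        push!(neighbours, v)
        v = get(adjacent, (u, v), nothing)
        (v === nothing || v == v0) && break
    end
    return neighbours
end

lazy_neighbours(adjacent, 1, 2) # [2, 3]
```

This trades the memory of storing the `Graph` for a short walk per query, which seems like the right trade for billion-point inputs.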