3 comments:
Hi Dan,
Looks nice.
I've experimented a bit with a Qt/OpenGL front-end for cutting-simulation. This should compile on both Windows and Linux.
For surface reconstruction from the voxel data structure, are you using any of the fancier algorithms (extended marching cubes, or dual contouring)? I want to try these when I have time.
The problem with voxels is that the data structure grows in proportion to the volume. Can you simulate a 1000x1000x1000mm part at 1mm resolution without running out of memory? Octree- or dexel-based volume models consume memory in proportion to the part's surface area, which is much less than a naive voxel model.
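To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The one-byte-per-voxel figure is my assumption for a naive dense grid; the real cost per voxel depends on the data structure.

```python
def voxel_memory_bytes(nx, ny, nz, bytes_per_voxel=1):
    """Memory for a dense voxel grid grows with the volume of the part.
    bytes_per_voxel=1 is an assumption for a naive occupancy grid."""
    return nx * ny * nz * bytes_per_voxel

# A 1000x1000x1000mm part at 1mm resolution needs a 1000^3 grid:
print(voxel_memory_bytes(1000, 1000, 1000))  # 1000000000 bytes, i.e. ~1 GB
```

A surface-proportional structure (octree, dexel) would instead scale with the roughly 6x1000x1000 boundary voxels, a few MB rather than a GB at this resolution.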
Anders, I am using "voxlap" to do all the voxel storage, booleans, and display. It has 1024x1024x256 voxels. My Python code has to convert my world, in mm, to voxels.
Voxlap uses ray casting and RLE (run-length encoding) compression for the columns.
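I don't know Voxlap's internal format, but column RLE in general looks something like this sketch: long runs of empty or solid voxels in a vertical column collapse to a single (value, run-length) pair, which is why mostly-empty or mostly-solid stock compresses so well.

```python
def rle_encode(column):
    """Run-length encode one voxel column as [(value, run_length), ...].
    A sketch of the general technique, not Voxlap's actual format."""
    runs = []
    for v in column:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

# A 256-voxel column that is mostly air (0) with one solid band (1):
col = [0] * 100 + [1] * 20 + [0] * 136
print(rle_encode(col))  # [(0, 100), (1, 20), (0, 136)]
```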
Apart from that, I don't understand how it works (some of the source code is in assembly language!).
1000x1000x256mm would fit at 1mm resolution, but 1000x1000x1000mm would be limited to about 4mm resolution.
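The mm-to-voxel conversion could look roughly like the following sketch (the function name and layout are my guesses, not the actual code): each axis is scaled independently by grid size over part size, so the 256-deep axis is what forces the coarser resolution on a 1000mm-tall part.

```python
def mm_to_voxel(p_mm, part_size_mm, grid_size):
    """Map a point in mm to voxel indices.
    Resolution per axis = part size / grid size (a sketch, not Dan's code)."""
    return tuple(int(p * g / s) for p, s, g in zip(p_mm, part_size_mm, grid_size))

# A 1000x1000x1000mm part squeezed into the 1024x1024x256 grid:
print(mm_to_voxel((500.0, 500.0, 500.0), (1000, 1000, 1000), (1024, 1024, 256)))
# -> (512, 512, 128); z resolution is 1000/256, i.e. about 3.9mm per voxel
```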
It is not doing any surface reconstruction.
Is your simulation usable in real time?
The simulation engine itself is fast enough. Depending on the resolution, it takes 1ms to maybe 10ms to subtract a tool position from the stock. It then takes roughly the same amount of time to find out which triangles should be deleted and which new triangles need to be created.
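For what it's worth, the subtraction step amounts to a boolean difference between the stock and the cutter's volume at each tool position. A toy version with a spherical cutter and a sparse voxel set (my own sketch, not the actual engine, which works on a dense/RLE grid):

```python
def subtract_tool(stock, tool_center, radius):
    """Boolean difference: remove voxels inside a spherical cutter.
    stock is a set of (x, y, z) indices; returns (remaining, removed).
    A toy sketch of the technique, not the real simulation engine."""
    cx, cy, cz = tool_center
    r2 = radius * radius
    removed = {(x, y, z) for (x, y, z) in stock
               if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r2}
    return stock - removed, removed

# A 10x10x10 block of stock, cutter plunged into the top face:
stock = {(x, y, z) for x in range(10) for y in range(10) for z in range(10)}
remaining, cut = subtract_tool(stock, (5, 5, 9), 3)
print(len(cut), len(remaining))  # removed count plus remaining count is 1000
```

After each subtraction, only the voxels in `cut` (and their neighbors) can change the surface, which is why the triangle update takes time comparable to the subtraction itself rather than to the whole model.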
In my experiments it is the rendering part that has been very slow (1s or more per frame/move). I am trying to improve that by using OpenGL VBOs, but that is not done yet.