Everything is in the title, almost...
Do you think it would make sense to compute signed distances using the normal at each point for cloud/cloud comparison? I guess it won't be useful in all cases, but for roughly continuous or flat objects (or between two almost identical point clouds, for change/movement detection), it could be interesting.
What's your opinion?
Signed Cloud/Cloud distance using Normals
Re: Signed Cloud/Cloud distance using Normals
Ah, have you met Dimitri recently?! Just kidding, but Dimitri Lague (from Rennes University) has made several attempts at convincing me to add such an option to CloudCompare in the past. We had a lot of discussions about this (though I realize now that it was mostly by email, not on the forum).
It would be really hard to properly take the normal at each point into account (i.e. the fact that the normal gives us information about the local surface curvature). The actual distance computation algorithm just wouldn't work. But what we could do, in a much simpler approach, is to sign the distance with the scalar product of the normal and the vector linking the query point to its nearest neighbor in the other cloud. As you already realized, this is somewhat experimental and the result is only valid for certain types of clouds (flat ones, with a principal direction).
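To illustrate, here is a minimal sketch of that simple signing idea (plain numpy/scipy, not CloudCompare code; the function name is mine):

```python
# Minimal sketch of the "sign by scalar product" idea (not CloudCompare's
# implementation): keep the unsigned nearest-neighbour distance, and take its
# sign from the dot product between the query point's normal and the vector
# pointing to its nearest neighbour in the reference cloud.
import numpy as np
from scipy.spatial import cKDTree

def signed_c2c_distance(query_pts, query_normals, ref_pts):
    """query_pts, query_normals: (N, 3) arrays; ref_pts: (M, 3) array."""
    tree = cKDTree(ref_pts)
    dists, idx = tree.query(query_pts)      # unsigned NN distances and indices
    to_nn = ref_pts[idx] - query_pts        # vectors from each query point to its NN
    # > 0 when the neighbour lies on the side the normal points to, < 0 otherwise
    signs = np.sign(np.einsum('ij,ij->i', query_normals, to_nn))
    return signs * dists
```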
I believe Dimitri has advanced on this subject on his own (at least one of his messages suggests this: viewtopic.php?p=212). Maybe he could give us more insight into how it works for him?
Daniel, CloudCompare admin
Re: Signed Cloud/Cloud distance using Normals
:) Nope, I did not meet Dimitri recently... It would be great to have feedback on his development. Dimitri?
Re: Signed Cloud/Cloud distance using Normals
Hi all,
Indeed, we have a beta version of a code dedicated to cloud-to-cloud comparison that solves many of the problems discussed in this forum (and in general) when trying to compare point clouds without knowing the normals and without any post-processing of the data (meshing or gridding, which can be a nightmare for the kind of surfaces I'm working on: cliffs, vegetated surfaces, complex 3D river beds...). The software also caters for the precision I'm aiming at (i.e. sub-cm level of detection) and factors in the local roughness of the surface to determine whether a change is statistically significant or not. I'm currently writing the paper that presents the method we developed with a colleague (Nicolas Brodu), and I'll present it at the European Geophysical Union meeting in Vienna in April.
In a nutshell:
(i) Normal computation: we locally model the cloud by a plane and compute the normal to this plane (nothing new there). The subtlety is that this is done over a range of spatial scales around the considered point, and the most planar one is chosen as the best representative scale. You can obviously fix this scale for the whole calculation. In our application this is a critical aspect because we're looking at surfaces with significant roughness at small scales (say a few cm to 50 cm). Only at scales larger than 1 to 5 m do you get a normal orientation that is meaningful for our specific application. This is also very interesting for small-scale measurements, where you can define the scale slightly larger than the typical instrument noise on a flat surface and avoid normal orientation "flickering" due to scanner noise.
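To give an idea of the principle (a toy numpy version of my own, not our actual code; the neighbour-count threshold and planarity measure are arbitrary choices for the example):

```python
# Toy illustration of the multi-scale normal: fit a plane by PCA inside a
# sphere of each candidate radius and keep the scale where the neighbourhood
# is flattest (smallest out-of-plane variance relative to the total variance).
import numpy as np
from scipy.spatial import cKDTree

def best_scale_normal(tree, cloud, point, radii, min_pts=10):
    best_normal, best_radius, best_planarity = None, None, -1.0
    for r in radii:
        nbrs = cloud[tree.query_ball_point(point, r)]
        if len(nbrs) < min_pts:                   # not enough points for a stable fit
            continue
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
        planarity = 1.0 - evals[0] / evals.sum()  # close to 1 for a perfectly flat patch
        if planarity > best_planarity:
            best_normal, best_radius, best_planarity = evecs[:, 0], r, planarity
    return best_normal, best_radius               # normal = eigenvector of smallest eigenvalue
```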
(ii) Orienting the normals: we impose a series of reference points towards which the normals are oriented. Although this seems time consuming, it is actually easy to do. There are much more complex ways to compute and orient normals, but for our application this is good enough.
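Again as a toy sketch (the reference points could for instance be the scanner positions, but that is just an assumption for the example):

```python
# Toy sketch of the orientation step: flip each normal so that it points
# towards the nearest of the user-supplied reference points.
import numpy as np

def orient_normals(points, normals, ref_points):
    oriented = normals.copy()
    for i, (p, n) in enumerate(zip(points, normals)):
        nearest_ref = ref_points[np.argmin(np.linalg.norm(ref_points - p, axis=1))]
        if np.dot(n, nearest_ref - p) < 0:   # normal points away from the reference
            oriented[i] = -n
    return oriented
```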
(iii) Surface difference: here we introduce a second scale, generally smaller than the scale at which the normal is computed (but it can be the same). Along the normal we compute the average distance between the two point clouds, using that scale on each cloud. We also record the local roughness (and the number of points) in the two clouds, which is an important parameter to know whether a measured change is statistically significant or not. If you're comparing planar surfaces, this approach can significantly reduce the uncertainty related to instrument noise (which is normally distributed when the incidence angle is close to 90°). Note also that because we compute the difference along the normal (and don't just look at the closest point), no result is generated when there is no intercept with the 2nd cloud: this way you don't have to "cut" the clouds so that they occupy the same space. You can also have holes in one of the point clouds (visibility issues, vegetation...) that won't pollute the results with artificial distance measurements. This partially solves the visibility issue. There are also other advantages to a multi-scale approach for the normal and distance measurements, but it would take too long to explain them here...
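A very rough sketch of this differencing step for one core point (again toy code with arbitrary parameters, not the real implementation):

```python
# Rough sketch of the along-normal differencing for one core point: project the
# points of each cloud that fall inside a cylinder around the (oriented) normal
# onto that normal, then compare the mean positions and record the spread
# (local roughness) and the point counts of both clouds.
import numpy as np
from scipy.spatial import cKDTree

def along_normal_difference(core_pt, normal, cloud1, cloud2, proj_radius, max_depth=5.0):
    def project(cloud):
        tree = cKDTree(cloud)
        cand = cloud[tree.query_ball_point(core_pt, max_depth)]  # crude pre-selection
        rel = cand - core_pt
        along = rel @ normal                                     # signed position along the normal
        radial = np.linalg.norm(rel - np.outer(along, normal), axis=1)
        return along[radial <= proj_radius]                      # keep points inside the cylinder
    a1, a2 = project(cloud1), project(cloud2)
    if len(a1) == 0 or len(a2) == 0:
        return None                                              # no intercept: no distance reported
    diff = a2.mean() - a1.mean()                                 # signed distance along the normal
    return diff, a1.std(), a2.std(), len(a1), len(a2)            # plus roughness and point counts
```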
Note that we also use the notion of core points introduced in our recent paper on point cloud classification: http://www.sciencedirect.com/science/ar ... 1612000330 (you'll have to read it to find out what that means ;-) )
As with our point cloud classification software, this code will be released as free software once the paper is close to acceptance and the software has been further bug-proofed and optimized (you can easily imagine that computing normals at large scales takes a long, long time...). Note that I'm still a big fan of the C2C function of CloudCompare, which works well within the scope designed by Daniel (unsigned change detection on densely sampled surfaces with small-scale planarity). It's extremely fast for a quick check of the data, but I absolutely need signed differences as well as a proper treatment of local roughness for the level of change detection I'm after. And for data visualization, I've yet to find something better than CC.
Attached is an example of 3D point cloud comparison on a 500 m reach of a meandering river (the bed is under water), with vegetation classification (in green). The normal is computed at a 10 m scale, and the surface difference at 20 cm. Core points (where we actually compute the distance) are every 20 cm, while the raw data goes down to 1 cm point spacing (and obviously varies across the scene). I'm sure you'll have recognized your favorite visualization software... I've chosen a scale from -1 to 1 m, but we actually have a registration error between the two clouds of about 2.5 mm and can measure really fine surface change.
Attachment: Rangitikei river, surface change.jpg
Re: Signed Cloud/Cloud distance using Normals
Hi Dimitri,
Good job! One of my colleagues will be in Vienna to present work done at SNCF and INSA Strasbourg. I'll ask him to take a close look at your work! ;)
Your previous article (congrats on getting it into the journal) is really interesting and reminds me of a nice piece of work from this summer in Calgary (ISPRS Terrestrial Laser Scanning Workshop):
http://www.isprs.org/proceedings/XXXVII ... sion_9.pdf
I think the PCA used to classify points is really close to yours, at least on a first reading of your work. This paper was presented by Jerome Demantke from IGN.
Do you plan to release a classification plugin based on your work in a future release of CC?
I'm really interested in your future work, and will have a look at your Vienna paper.
Thanks for your reply.
PS: your images of point clouds look stunning, as always, with CC :)
Re: Signed Cloud/Cloud distance using Normals
Dimitri, this looks fantastic!
Any idea about the implementation? Plugin or standalone version?
Keep us updated, please! :)