Viewport Performance

jedfrechette
Posts: 46
Joined: Mon Jan 20, 2014 6:31 pm
Location: Albuquerque, NM

Viewport Performance

Post by jedfrechette »

I'm curious how much effort, if any, has gone into optimizing viewport performance. Are you just using Qt's default OpenGL wrappers? Any thoughts on how much room there is for further optimization, and where the effort would be needed to make that happen?

Today I was separately testing CloudCompare's performance with a GTX 580 and a Titan Black in the same machine. The performance with both cards was essentially identical at ~4 fps on a test data set of about 28 million points [1]. Performance seemed to be limited by the CPU (Xeon E5-2687W), with one core maxed out by CC and the GPU never going much above 50% usage. For comparison, Fabric Engine's lidar demo rendered the same data set at well over 60 fps on the Titan and could push a data set with nearly 150 million points at over 20 fps.

[1] http://sourceforge.net/projects/e57-3d- ... 7/download
Jed
daniel
Site Admin
Posts: 7717
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France

Re: Viewport Performance

Post by daniel »

Indeed, we only do pure OpenGL calls so as to work the same on all GPUs (even with very low/old versions of OpenGL). And we don't store the data in the graphics card's memory (for a long time there was never enough memory to do so - that's no longer true).
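
Concretely, drawing from CPU memory looks roughly like this (a minimal sketch of the legacy client-side-array technique, not our actual rendering code): the points stay in system RAM, so the driver re-reads all N x 12 bytes across the bus every frame.

```cpp
// Minimal sketch (not CloudCompare's actual code): legacy client-side
// vertex arrays. The data lives in system RAM and is pulled across the
// bus by the driver on every single frame.
#include <GL/gl.h>
#include <vector>

void drawCloudFromCpuMemory(const std::vector<float>& xyz) // x,y,z per point
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, xyz.data()); // pointer into CPU memory
    glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(xyz.size() / 3));
    glDisableClientState(GL_VERTEX_ARRAY);
}
```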

But I'm progressively changing my mind (for instance, we recently added the 'color ramp' shader, which doesn't work on low-end GPUs). And I bought a GTX 780 recently, so I'm gonna use it ;)

We also want to add "level of detail" based display and out-of-core support. But we'll need some time!

Daniel
jedfrechette
Posts: 46
Joined: Mon Jan 20, 2014 6:31 pm
Location: Albuquerque, NM

Re: Viewport Performance

Post by jedfrechette »

daniel wrote:And we don't store the data in the graphics card's memory (for a long time there was never enough memory to do so - that's no longer true).
I wonder how much of a performance impact this would have even on cards without enough VRAM. A few months ago I was testing a K5000 (4 GB) against a GTX 580 (1.5 GB) with some simple OpenGL code that did little more than load the points and draw them on screen. As soon as the data set couldn't fit in VRAM, the performance of the K5000 fell off a cliff. The 580, although it slowed down, was not affected nearly as badly and kept chugging along with performance comparable to CloudCompare's on big data sets.
Jed
daniel
Site Admin
Posts: 7717
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France

Re: Viewport Performance

Post by daniel »

My first tests show that on blank or colored point clouds we achieve a speed-up of about 2X (on my laptop's GT 640M) to 10X (on my desktop's GTX 780).

I'm quite disappointed, though: normals are still very slow to display. And I still have to find a proper way to display dynamic scalar field colors with VBOs (as those change a lot whenever the user plays with the color ramp parameters). Eventually I'll have to do the same for meshes... I hope the feature will be available for the next release.
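
One pattern I'm considering for those dynamic colors (a sketch of the general technique, not what's implemented) is to keep the color buffer GL_DYNAMIC_DRAW and re-upload just the 3 bytes per point whenever the ramp changes:

```cpp
// Sketch of a common pattern for dynamic per-point colors (the general
// technique, not necessarily what CloudCompare will do): allocate the
// color VBO as GL_DYNAMIC_DRAW and re-upload it when the ramp changes.
#include <GL/glew.h>
#include <cstdint>
#include <vector>

void updateColorVbo(GLuint colorVbo, const std::vector<uint8_t>& rgb)
{
    glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
    // "Orphan" the old storage so the GPU can keep drawing the previous
    // frame, then upload the recomputed 3-bytes-per-point colors.
    glBufferData(GL_ARRAY_BUFFER, rgb.size(), nullptr, GL_DYNAMIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, rgb.size(), rgb.data());
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```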

Currently, we use in VRAM:
  • 12 bytes per point
  • 3 bytes per color
  • 12 bytes per normal vector (given how slow they remain to display, I wonder if they're worth the price)
So theoretically, on 1 GB of VRAM, you could load up to about 40 M. points with colors and normals (27 bytes per point), and up to about 71 M. points with colors only (15 bytes per point). But strangely, on both computers I successfully loaded 170 M. points with colors AND normals in VBOs (I guess the driver does some kind of smart load balancing between CPU and GPU memory, or other equivalent optimizations). When disabling normals (which are still awfully slow to display, with or without VBOs) I achieved about 5 fps on my GT 640M (already twice as fast as before) and about 50 fps on my GTX 780! I guess the more memory on the GPU side, the more points stay permanently there...
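
For the curious, the upload itself looks roughly like this (a simplified sketch matching the byte budget above, not the actual CloudCompare code): one VBO per attribute, with 12-byte float positions, 3-byte colors and 12-byte float normals.

```cpp
// Rough sketch of the per-attribute memory budget above (hypothetical
// code, not CloudCompare's): one static VBO per attribute.
#include <GL/glew.h>
#include <cstdint>
#include <vector>

struct CloudVbos { GLuint pos = 0, col = 0, nrm = 0; };

CloudVbos uploadCloud(const std::vector<float>&   xyz,      // 3 floats = 12 bytes/point
                      const std::vector<uint8_t>& rgb,      // 3 bytes  =  3 bytes/point
                      const std::vector<float>&   normals)  // 3 floats = 12 bytes/point
{
    CloudVbos v;
    glGenBuffers(1, &v.pos);
    glBindBuffer(GL_ARRAY_BUFFER, v.pos);
    glBufferData(GL_ARRAY_BUFFER, xyz.size() * sizeof(float), xyz.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &v.col);
    glBindBuffer(GL_ARRAY_BUFFER, v.col);
    glBufferData(GL_ARRAY_BUFFER, rgb.size(), rgb.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &v.nrm);
    glBindBuffer(GL_ARRAY_BUFFER, v.nrm);
    glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(float), normals.data(), GL_STATIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return v;
}
```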
Daniel, CloudCompare admin
jedfrechette
Posts: 46
Joined: Mon Jan 20, 2014 6:31 pm
Location: Albuquerque, NM

Re: Viewport Performance

Post by jedfrechette »

Excellent, I'm looking forward to trying out these changes. It looks like you're making them in 'master'?

Normals do seem to be a somewhat general performance problem; other software I've used certainly seems to suffer from similar issues. On the other hand, the information they impart about point cloud geometry is extremely valuable and hard to replace. One solution might be to let the user bake the normal shading information out to static scalars that could be rendered as plain colors, without needing to calculate the shading on the fly. For example, given a data set with scalars:

R
G
B
dx
dy
dz

The user could bake out the shading to a new set of scalars:

R_shaded
G_shaded
B_shaded

that would contain point colors modified to include the shading added by a given scene lighting configuration. The new shaded colors could then be rendered with the same performance as the original shadeless colors. Depending on how long the baking takes, I suppose it could even happen automatically whenever the underlying data changed.
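
Concretely, the bake step could be as simple as this (a rough sketch with a fixed light direction and an arbitrary ambient floor; hypothetical code, not an existing CC feature):

```cpp
// Hypothetical sketch of the proposed "bake" step: combine each point's
// color with a Lambert term computed from its normal and a fixed scene
// light direction, storing the result in new shaded scalars.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Point {
    float   nx, ny, nz;  // the dx/dy/dz normal scalars
    uint8_t r, g, b;     // original colors
    uint8_t rs, gs, bs;  // baked R_shaded/G_shaded/B_shaded
};

void bakeShading(std::vector<Point>& cloud, float lx, float ly, float lz)
{
    const float len = std::sqrt(lx * lx + ly * ly + lz * lz);
    lx /= len; ly /= len; lz /= len; // normalize the light direction

    for (Point& p : cloud)
    {
        // Lambert term, clamped to an ambient floor so back-facing
        // points stay visible (0.2 is an arbitrary choice).
        float lambert = p.nx * lx + p.ny * ly + p.nz * lz;
        lambert = std::max(0.2f, lambert);
        p.rs = static_cast<uint8_t>(p.r * lambert);
        p.gs = static_cast<uint8_t>(p.g * lambert);
        p.bs = static_cast<uint8_t>(p.b * lambert);
    }
}
```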

One thing I really like about CC is that it has the concept of generic scalar fields, which allows doing this sort of thing; most other point cloud software lacks it. However, I think the interface would be more user friendly if there were a cleaner separation between data values and visualization. Rather than have the special "Color" attribute, which might be a data value from the original point cloud or a derived value computed from some other data property (e.g. Edit->Colors->Height Map), it seems like it would be better to store all data values as scalars and have separate user-controllable shaders that define how a given scalar (or scalars) is rendered in the viewport. This approach would be more consistent with both existing visualization software like ParaView and 3D content creation software like Maya. The real advantage, though, would be the added flexibility this would give users.
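
In code terms, the separation I'm imagining is roughly this (a hypothetical design sketch, not CC's current architecture):

```cpp
// Hypothetical design sketch: every per-point value, including what is
// now the special Color attribute, lives in a named scalar field;
// "shaders" are user-selectable mappings from scalar fields to colors.
#include <cstddef>
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <vector>

struct RGB { uint8_t r, g, b; };

struct PointCloud {
    std::map<std::string, std::vector<float>> scalarFields; // "R", "dx", ...
};

// A display shader turns one or more scalar fields into a color per point.
using DisplayShader = std::function<RGB(const PointCloud&, std::size_t)>;

// Example: render the raw R/G/B fields with no extra processing.
DisplayShader plainColors = [](const PointCloud& pc, std::size_t i) {
    return RGB{ static_cast<uint8_t>(pc.scalarFields.at("R")[i]),
                static_cast<uint8_t>(pc.scalarFields.at("G")[i]),
                static_cast<uint8_t>(pc.scalarFields.at("B")[i]) };
};
```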
Jed
daniel
Site Admin
Posts: 7717
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France

Re: Viewport Performance

Post by daniel »

About the first point: this could work if the light source is fixed relative to the object (since normal shading depends on the light position). In the current implementation the light source is attached to the camera (otherwise you'd always get dark areas "behind" the object). The custom light source (F7) is attached to the scene, but it's not very user friendly... When I see how complicated the EDL shader is and how fast it runs, there must be a way to display normals faster...
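
To illustrate why a camera-attached light can't be baked (a tiny hypothetical sketch, not CloudCompare's code): with the light at the camera, the light direction in eye space is a constant (0,0,1), so the shade of a point depends on the viewpoint, not just on the object.

```cpp
// With a "headlight", shading is computed against the eye-space normal,
// which changes every time the camera moves - so the same point gets a
// different shade from a different viewpoint (hypothetical sketch).
#include <algorithm>

float headlightLambert(const float normalEye[3]) // normal in eye coordinates
{
    return std::max(0.0f, normalEye[2]); // dot(normal, (0,0,1))
}
```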

The second point is a very interesting idea. It may take some time, but I'll try to figure something out. The very first step would indeed be to merge the 'colors' and 'scalar fields' visibility parameters. For a shader mixing 3 custom SFs, the user might need to set the display and saturation limits separately for each SF, so the interface might be a bit 'heavy'. But I recently added a new tool to change the dynamics of the color fields that we could reuse for scalar fields as well:
[attachment: cc_color_levels_dlg.jpg - the color levels dialog]
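
For reference, the kind of remapping such a levels tool applies boils down to something like this per value (a sketch by analogy with image editors' levels, not the dialog's actual code):

```cpp
// Photoshop-style "levels" remap, applied per scalar value:
// input range -> gamma -> output range (hypothetical sketch).
#include <algorithm>
#include <cmath>

float applyLevels(float v,
                  float inMin,  float inMax,   // input black/white points
                  float gamma,                 // midtone adjustment
                  float outMin, float outMax)  // output range
{
    float t = (v - inMin) / (inMax - inMin);
    t = std::clamp(t, 0.0f, 1.0f);
    t = std::pow(t, 1.0f / gamma);             // gamma > 1 brightens midtones
    return outMin + t * (outMax - outMin);
}
```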
It gave me a lot of new ideas... thanks!
Daniel, CloudCompare admin
jedfrechette
Posts: 46
Joined: Mon Jan 20, 2014 6:31 pm
Location: Albuquerque, NM

Re: Viewport Performance

Post by jedfrechette »

daniel wrote:the interface [for writing custom shaders] might be a bit 'heavy'.
Very true; after all, there are entire programming languages dedicated to doing this sort of thing [1]. How much complexity should be exposed to the user is definitely worth thinking about. Maybe you could have a few basic shaders that convert one or more scalar values to a color based on the type of data the scalars represent (e.g. RGB, categorical, continuous). Then there could be filters that modify the output of these shaders, for example a curve filter for adjusting contrast. Although they're not quite the same from a technical standpoint, from a UI standpoint it might make sense to treat the existing shaders, such as EDL and SSAO, as filters applied the same way as other filters that modify 'raw' color values. Do we need a node-based shader editor? ;-) Anyway, these sound like good long-term goals to think about.

[1] http://opensource.imageworks.com/?p=osl
Jed
fifi
Posts: 3
Joined: Tue Feb 03, 2015 7:07 pm

Re: Viewport Performance

Post by fifi »

Greetings,

I would like to have the points sorted in a way that gives each point its own address. I mean, is it possible to sort them, for instance:

- based on the time each point was captured/recorded
- based on a label for each point, like a point number
- or any other feature which could be useful

Is there any way to export the data from CloudCompare in such an organized format?

I would like to use such an address and organized/structured format to find the neighbourhood of each point.
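
For illustration, here is roughly the kind of post-processing I have in mind (a sketch in C++, assuming an ASCII export with x y z time columns - these column names are my assumption, not a CloudCompare format):

```cpp
// Hypothetical post-processing sketch (outside CloudCompare): sort an
// exported cloud by a per-point timestamp so that each point gets a
// stable index/address in acquisition order.
#include <algorithm>
#include <vector>

struct Pt { double x, y, z, time; };

void sortByCaptureTime(std::vector<Pt>& cloud)
{
    std::sort(cloud.begin(), cloud.end(),
              [](const Pt& a, const Pt& b) { return a.time < b.time; });
    // After sorting, a point's position in the vector is its "address";
    // points captured close in time sit at adjacent indices.
}
```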

Thanks,
fifi
Systemication
Posts: 1
Joined: Mon Apr 03, 2017 7:09 pm

Re: Viewport Performance

Post by Systemication »

Hi,

I tried to load several coloured point clouds of a building totalling 40,000 m² of floor space, scanned at 2 cm resolution. Each point cloud has several million points. I ran into performance issues (v2.8.1) and ended up here. I've just read through this thread and others, and understand that CC currently doesn't really take advantage of any GPU features. I also understand that the shading language is a difficult beast to wrap your head around.

I would be surprised if there weren't existing frameworks that have already implemented the shading language and which could be used to render the point clouds. I'm thinking of a scene graph or some scientific library. Have you considered this before? If so, any conclusions?

Kind regards,
Dirk.
daniel
Site Admin
Posts: 7717
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France

Re: Viewport Performance

Post by daniel »

I'm not sure which 'shading' language you are referring to, but the 2.8+ versions of CloudCompare both load the cloud into GPU memory (with 'Vertex Buffer Objects') and compute a LOD graph to display the cloud progressively if it's too big.

On your side:
- you'll need a good GPU if you expect good performance (with an NVidia GTX 780 or above, you'll easily work with 100 million points without even needing the LOD mechanism)
- if you load a lot of clouds that are each < 10 M. points, the LOD mechanism won't activate (a cloud needs more than 10 M. points to trigger it by default - see the sketch below). In this very particular case it may be a good idea to merge all the clouds :D. But once again, if you have a good graphics card, you shouldn't need to.
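
For the curious, the trigger itself boils down to something like this (a deliberately simplified sketch, not the actual code - the real mechanism builds a LOD graph and refines the display progressively):

```cpp
// Very simplified sketch of a LOD trigger: below the threshold draw
// everything; above it, draw a decimated subset while the camera moves.
#include <GL/gl.h>
#include <cstddef>

const std::size_t kLodThreshold = 10000000; // "more than 10 M. points"

void drawWithLod(std::size_t pointCount, bool cameraMoving)
{
    if (pointCount <= kLodThreshold || !cameraMoving)
    {
        // Small cloud (or still camera): draw everything.
        glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(pointCount));
    }
    else
    {
        // Big cloud while moving: draw only the first part of a
        // pre-shuffled VBO, approximating a uniform subsample.
        glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(kLodThreshold));
    }
}
```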

Can you give us more figures maybe?
Daniel, CloudCompare admin