Puzzled by GPU behaviour

jackkirk
Posts: 33
Joined: Wed May 20, 2020 2:02 am

Puzzled by GPU behaviour

Post by jackkirk »

Hi,

To enable me to do ongoing large jobs in CloudCompare I have recently purchased a top-of-the-line Alienware gaming PC:

Aurora R11
10th Gen Intel® Core™ i9 10900KF (10-Core, 20MB Cache, 3.7GHz to 5.3GHz)
128GB Dual Channel HyperX™ FURY DDR4 XMP at 3200MHz
NVIDIA® GeForce RTX™ 3090 24GB GDDR6X
2TB M.2 PCIe NVMe SSD (Boot) + 2TB 7200RPM SATA 6Gb/s

I'm using latest and greatest Windows 10.

Basically the jobs in their raw state are 50 to 100 scans of 60 million coloured points each, for a total of 3 to 6 billion points per job.

To make them usable I merge and subsample until I end up with single point clouds of about 1.5 billion points.

Exploring CC's performance on this rig with the aid of Task Manager > Performance...

If I launch CC from scratch and open a 1.5 billion point cloud it loads completely into the GPU's "dedicated memory" (i.e. GPU's onboard memory) taking up 22.6GB and leaving 1.4GB free.

Under these circumstances the screen interactivity is brilliant - it literally dances.

If I now load an additional 0.28 billion point cloud I see that the GPU's dedicated memory is filled (i.e. last 1.4GB is filled) and the excess (2.9GB) is stored in the GPU's "shared memory" which I assume is RAM.

And screen interactivity is now a complete dog.

All logical so far - keep everything in the GPU's dedicated memory and performance is brilliant; stray over and it's a dog.

BUT...

If I delete the 1.5 billion point cloud I see that the GPU's dedicated memory is now 1.4GB full and the GPU's shared memory is still 2.9GB full.

And screen interactivity is still a dog.

There is obviously some smarts missing somewhere - I would have thought the contents of the GPU's shared memory would be moved into the GPU's dedicated memory automatically.

I can find no way to do this aside from deleting the 0.28 billion point cloud and opening it again.

Any help and/or explanations appreciated...
daniel
Site Admin
Posts: 7479
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France
Contact:

Re: Puzzled by GPU behaviour

Post by daniel »

Interesting analysis.

Basically we load as many points as we can on the GPU to improve the display speed (and only the display). To do that we rely on OpenGL VBOs (Vertex Buffer Objects), and in the background we use QGLBuffer (from Qt), which itself probably relies heavily on the graphics card driver.

I guess the issue is that once you delete the biggest cloud, we release its VBOs, which in turn frees some GPU memory. But we don't revisit what was done for the small cloud (which keeps its current memory configuration, with half the data in the graphics card's memory and the rest in shared (CPU) memory).

But I agree that we could be smarter... It's just not easy since we don't have any information about the (available or total) GPU memory via OpenGL. We can only 'try' to load things on the GPU memory and see if it works or not... In these conditions it's hard to be super smart. And we don't want to reset the whole GPU memory every time a cloud is deleted, as it may be a good idea if you remove the biggest one, but it would be terrible if you remove the smallest...
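[Editor's note] The allocate-and-spill behaviour described above can be sketched as a toy model. This is not CloudCompare code; it is a hypothetical accounting of a driver that fills dedicated GPU memory first, spills the remainder of a buffer to shared (system) memory, and never migrates spilled data back when dedicated memory is later freed:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <map>
#include <string>

// Illustrative model only. Each loaded buffer is split between dedicated
// GPU memory (while it fits) and shared system memory (the overflow).
// Freeing a buffer releases each portion where it lives; nothing is
// rebalanced afterwards, which matches the behaviour reported above.
struct GpuModel {
    double dedicatedCapacityGB;
    double dedicatedUsedGB = 0.0;
    double sharedUsedGB = 0.0;

    struct Placement { double dedicatedGB, sharedGB; };
    std::map<std::string, Placement> buffers;

    void load(const std::string& name, double sizeGB) {
        // Fill what remains of dedicated memory first...
        double toDedicated = std::min(sizeGB, dedicatedCapacityGB - dedicatedUsedGB);
        // ...and spill the rest to shared memory.
        double toShared = sizeGB - toDedicated;
        dedicatedUsedGB += toDedicated;
        sharedUsedGB += toShared;
        buffers[name] = {toDedicated, toShared};
    }

    void remove(const std::string& name) {
        auto it = buffers.find(name);
        dedicatedUsedGB -= it->second.dedicatedGB;
        sharedUsedGB -= it->second.sharedGB;
        buffers.erase(it);
    }
};
```

With a 24 GB card, loading a 22.6 GB cloud and then a 4.3 GB cloud leaves 2.9 GB in shared memory; removing the big cloud frees 22.6 GB of dedicated memory but the 2.9 GB stays in shared memory, exactly as observed.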
Daniel, CloudCompare admin
WargodHernandez
Posts: 187
Joined: Tue Mar 05, 2019 3:59 pm

Re: Puzzled by GPU behaviour

Post by WargodHernandez »

Maybe a reasonable solution would be a "reload VBOs" feature, to make that decision available to the user. Reloading the buffers takes time, but if we give the option to the user, they wouldn't be forced to reload all buffers every time a single entity is deleted.
daniel
Site Admin
Posts: 7479
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France
Contact:

Re: Puzzled by GPU behaviour

Post by daniel »

Yes probably the best option indeed. Something in the 'View' menu.
Daniel, CloudCompare admin
jackkirk
Posts: 33
Joined: Wed May 20, 2020 2:02 am

Re: Puzzled by GPU behaviour

Post by jackkirk »

Daniel, WargodHernandez,

Thanks for the heads-up - would I be right in assuming the suggested solution would be a pretty minor one?

If so what would be the likely ETA?

Thanks again
jackkirk
Posts: 33
Joined: Wed May 20, 2020 2:02 am

Re: Puzzled by GPU behaviour

Post by jackkirk »

Daniel,

You say "It's just not easy since we don't have any information about the (available or total) GPU memory via OpenGL."

When CC is launched (and the option to use the GPU is checked), would it be possible to flood the GPU's dedicated memory (say, in small blocks) until allocation fails, then delete all the blocks? If you kept count, you would know the size of the GPU's dedicated memory, wouldn't you?

Or is it more complex than this?

Just a thought...
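[Editor's note] The probing idea suggested above can be sketched as follows. This is a hypothetical illustration, not CloudCompare code: the `tryAlloc` callback stands in for a real attempt to create a GPU buffer (e.g. `glBufferData` followed by a `glGetError` check for `GL_OUT_OF_MEMORY`), injected here so the counting logic can be shown and tested without a GPU:

```cpp
#include <cassert>
#include <functional>

// Illustrative sketch only. Probes available memory by repeatedly
// requesting fixed-size blocks until the allocator refuses, and
// returns the total successfully allocated, in MB.
int probeGpuMemoryMB(int blockMB, const std::function<bool(int)>& tryAlloc) {
    int totalMB = 0;
    while (tryAlloc(blockMB)) {  // keep allocating until the driver refuses
        totalMB += blockMB;
    }
    // A real implementation would now free every probe block.
    return totalMB;
}
```

One caveat (raised in the reply below this post): the GPU is shared with other applications, so a figure probed at startup is only a snapshot, and a coarse block size trades probing time against accuracy.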
daniel
Site Admin
Posts: 7479
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France
Contact:

Re: Puzzled by GPU behaviour

Post by daniel »

Well, we share the GPU memory with all kinds of applications, therefore we don't know whether the amount of available memory will stay constant...

But at least it's an interesting way to probe it!
Daniel, CloudCompare admin
jackkirk
Posts: 33
Joined: Wed May 20, 2020 2:02 am

Re: Puzzled by GPU behaviour

Post by jackkirk »

Daniel,

Would I be right in assuming the suggested solution in your second post in this thread would be a pretty minor one?

If so what would be the likely ETA?

Thanks again
daniel
Site Admin
Posts: 7479
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France
Contact:

Re: Puzzled by GPU behaviour

Post by daniel »

There's now a new option to do that: 'Display > Reset all VBOs'

(available in the latest online 2.12.alpha version on Windows)
Daniel, CloudCompare admin
jackkirk
Posts: 33
Joined: Wed May 20, 2020 2:02 am

Re: Puzzled by GPU behaviour

Post by jackkirk »

Wow, thanks...