Puzzled by GPU behaviour
Posted: Mon Jan 25, 2021 2:56 am
Hi,
To enable me to do ongoing large jobs in CloudCompare, I have recently purchased a top-of-the-line Alienware gaming PC:
Aurora R11
10th Gen Intel® Core™ i9 10900KF (10-Core, 20MB Cache, 3.7GHz to 5.3GHz)
128GB Dual Channel HyperX™ FURY DDR4 XMP at 3200MHz
NVIDIA® GeForce RTX™ 3090 24GB GDDR6X
2TB M.2 PCIe NVMe SSD (Boot) + 2TB 7200RPM SATA 6Gb/s
I'm using the latest and greatest Windows 10.
Basically the jobs in their raw state are 50 to 100 scans of 60 million coloured points each, for a total of 3 to 6 billion points per job.
To make them usable I merge and subsample until I end up with single point clouds of about 1.5 billion points.
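For reference, the merge/subsample step is the sort of thing that can be scripted with CloudCompare's command-line mode. The sketch below (Python, driving the CLI via subprocess) is only illustrative - the file names and the 5mm spacing are placeholders, and the real jobs chain 50 to 100 "-O" arguments rather than two:

# Rough sketch: merge two scans and spatially subsample them using
# CloudCompare's command-line mode. File names and the 0.005m spacing
# are placeholders; real jobs load 50-100 scans.
import subprocess

subprocess.run([
    "CloudCompare",             # assumes CloudCompare.exe is on the PATH
    "-SILENT",                  # run without the GUI
    "-O", "scan_001.e57",       # placeholder input scans
    "-O", "scan_002.e57",
    "-MERGE_CLOUDS",            # merge all loaded clouds into one
    "-SS", "SPATIAL", "0.005",  # spatial subsampling, 5mm minimum spacing
    "-C_EXPORT_FMT", "BIN",     # save in CloudCompare's native BIN format
    "-SAVE_CLOUDS",
], check=True)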
Exploring CC's performance on this rig with the aid of Task Manager > Performance...
If I launch CC from scratch and open a 1.5 billion point cloud, it loads completely into the GPU's "dedicated memory" (i.e. the GPU's onboard memory), taking up 22.6GB and leaving 1.4GB free.
Under these circumstances the screen interactivity is brilliant - it literally dances.
If I now load an additional 0.28 billion point cloud, I see that the GPU's dedicated memory is filled (i.e. the last 1.4GB is used up) and the excess (2.9GB) is stored in the GPU's "shared memory", which I assume is system RAM.
And screen interactivity is now a complete dog.
All logical so far - keep everything in the GPU's dedicated memory and performance is brilliant - stray over and it's a dog.
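As a rough sanity check on those numbers (the 16 bytes per point is just my guess at the on-GPU layout - 3 x 4-byte floats for XYZ plus 4 bytes for RGBA colour - I haven't checked what CC actually uploads):

# Back-of-the-envelope check of the Task Manager figures.
# Assumption: ~16 bytes/point on the GPU (3 x 4-byte float XYZ + 4 bytes RGBA),
# and Task Manager's "GB" treated as binary GB.
bytes_per_point = 3 * 4 + 4

big   = 1.50e9 * bytes_per_point / 2**30   # ~22.4 GB -> close to the 22.6GB I see
small = 0.28e9 * bytes_per_point / 2**30   # ~4.2 GB  -> close to the 1.4GB + 2.9GB split
print(f"1.5 billion point cloud:  {big:.1f} GB")
print(f"0.28 billion point cloud: {small:.1f} GB")

In other words, the 1.5 billion point cloud on its own essentially fills the 3090's 24GB.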
BUT...
If I delete the 1.5 billion point cloud, I see that the GPU's dedicated memory is now only 1.4GB full while the GPU's shared memory is still 2.9GB full.
And screen interactivity is still a dog.
There are obviously some smarts missing somewhere - I would have thought the contents of the GPU's shared memory would be moved into the GPU's dedicated memory automatically.
I can find no way to do this aside from deleting the 0.28 billion point cloud and opening it again.
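In case it helps anyone reproduce this, the dedicated-memory numbers can also be read outside Task Manager with NVIDIA's NVML bindings (pip install nvidia-ml-py). As far as I can tell NVML only reports the dedicated/onboard memory, not the WDDM "shared memory" figure, so that part still has to come from Task Manager:

# Quick check of the card's dedicated (onboard) memory before/after deleting a cloud.
# Note: this only covers dedicated VRAM; I don't know of an NVML call that
# exposes the WDDM "shared memory" figure Task Manager shows.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU (the RTX 3090 here)
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"dedicated used: {info.used / 2**30:.1f} GB")
print(f"dedicated free: {info.free / 2**30:.1f} GB")
pynvml.nvmlShutdown()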
Any help and/or explanations appreciated...