Longer run time for a cluster of WUs

Message boards : SETI@home Enhanced : Longer run time for a cluster of WUs

AMDave
Volunteer tester

Joined: 12 Jan 16
Posts: 38
Credit: 289,647
RAC: 0
United States
Message 57528 - Posted: 27 Mar 2016, 14:56:31 UTC

Just curious as to why the following WUs required roughly 2.5 - 4.25 times longer than usual to complete.

23408038, SETI@home v8 v8.09 (opencl_nvidia_SoG) windows_intelx86
23407792, SETI@home v8 v8.09 (opencl_nvidia_SoG) windows_intelx86
23408069, SETI@home v8 v8.09 (opencl_nvidia_sah) windows_intelx86
23407947, SETI@home v8 v8.09 (opencl_nvidia_SoG) windows_intelx86
23407954, SETI@home v8 v8.09 (opencl_nvidia_SoG) windows_intelx86
23407355, SETI@home v8 v8.09 (opencl_nvidia_SoG) windows_intelx86

I've had other WUs with long run times like these, but they were sporadic and singular in occurrence, not clustered.
ID: 57528
Richard Haselgrove
Volunteer tester

Joined: 3 Jan 07
Posts: 1451
Credit: 3,272,268
RAC: 0
United Kingdom
Message 57530 - Posted: 27 Mar 2016, 16:29:59 UTC - in response to Message 57528.  

Those are VLARs - "WU true angle range is : 0.010246"

They're not sent to nvidia GPUs on the main project because they, err, take 2.5 - 4.25 times more time to complete.

But they are sent to late-model GPUs like your GTX 950 for testing here, in the hope that one or more of nvidia's hardware designers, nvidia's driver developers, or our intrepid SETI developers, will find a solution to the slow running problem.
ID: 57530
AMDave
Volunteer tester

Joined: 12 Jan 16
Posts: 38
Credit: 289,647
RAC: 0
United States
Message 57532 - Posted: 27 Mar 2016, 22:23:41 UTC - in response to Message 57530.  

Thank you, Richard. I was simply curious; after reading certain threads, I thought I'd pay a bit more attention to WU run times. I don't have the ar cutoffs committed to memory yet, but I checked my notes and found:

ar above ~1.127: VHARs
ar below ~0.120: VLARs
ar around ~0.42: 'normies'
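In other words, roughly like this (my own rough sketch - the boundary values are approximate, from my notes, not from the application source):

    def classify_wu(ar):
        # rough angle-range cutoffs, approximate
        if ar > 1.127:
            return "VHAR"
        if ar < 0.12:
            return "VLAR"
        return "normal"

    print(classify_wu(0.010246))  # -> "VLAR", like the WUs above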

On another note, your opinion would be appreciated. My card is new (< 5 months old), with GeForce driver v359. Again, this is after reading certain threads, particularly this one:
Nvidia driver versions vs. performance

The consensus seems to be that newer drivers are basically tweaks for gaming. I was considering installing an older driver, around v337. What do you think?
ID: 57532
Richard Haselgrove
Volunteer tester

Joined: 3 Jan 07
Posts: 1451
Credit: 3,272,268
RAC: 0
United Kingdom
Message 57536 - Posted: 28 Mar 2016, 9:04:38 UTC - in response to Message 57532.  

The consensus seems to be that newer drivers are basically tweaks for gaming. I was considering installing an older driver, around v337. What do you think?

You won't be able to go as far back as that - according to nvidia.co.uk, the oldest driver compatible with your card is 355.69.

I suffered a couple of hardware failures in the storms at the new year, and re-equipped with a GTX 970. My hardware builder slip-streamed 359.00 as well, and it worked just fine. But before going into full production, I took a bit of time out to run an exhaustive test back to the first driver available for my cards, which was 345.20.

Frankly, it was a waste of time - although the runtimes did get longer, on average, with each succeeding driver, the differences were marginal. I ended up running 350.12, because Raistmer was testing a new application at the time that fails on most 34x drivers and earlier - I can't remember if he ever identified precisely why. But I'd probably have done more work overall if I'd stayed with 359.00 and just got stuck in. Depends if you want to spend time tweaking, or just let the computer do the work.
ID: 57536
AMDave
Volunteer tester

Joined: 12 Jan 16
Posts: 38
Credit: 289,647
RAC: 0
United States
Message 57539 - Posted: 28 Mar 2016, 16:40:51 UTC - in response to Message 57536.  
Last modified: 28 Mar 2016, 16:46:44 UTC

Depends if you want to spend time tweaking, or just let the computer do the work.

I'll pass. This is my everyday rig, which I just built in Nov '15. I let the spare computing power assist BOINC science.

Here's how I set up BOINC. Currently, ALL apps are stock.

These are GPU (GTX 950) apps only:
sah, resource share = 133 (66.50%)
sah beta, resource share = 17 (8.50%)

This is a CPU app and uses 6 out of 8 logical cores on the Skylake processor:
Rosetta, resource share = 50 (25.00%)
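(The percentages are just the shares normalised: 133 + 17 + 50 = 200, so 133/200 = 66.5%, 17/200 = 8.5% and 50/200 = 25%.)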

Recently, I've had instances where the CPU usage increased to 95% - 100% from the usual 78% - 83%. I checked Process Explorer and noticed that sah usage wasn't the usual 3.0% - 3.25%, but 12%+. I discovered this on this WU:

23429037 (ar = 0.010747)

It's been shown that VLARs have longer run times, and I've read that they also require more CPU power, even when only running the GPU app. But, is it normal for that CPU requirement to be 300% - 400% more? Overall, would it be better to limit Rosetta to 5 cores, not just to avoid maxing out the CPU, but also to lessen the run times on sah, & sah beta WUs?

I'd guess that Rosetta's performance would improve as well. (I've noticed that my production of these WUs has also decreased.)
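If limiting Rosetta is the way to go, I'm assuming (untested, going from memory of the client docs) that an app_config.xml in the Rosetta project folder would do it - something like:

    <app_config>
        <project_max_concurrent>5</project_max_concurrent>
    </app_config>

That should cap Rosetta at 5 running tasks (one core each), leaving a little headroom for the GPU apps' CPU needs - assuming the client is recent enough to honour project_max_concurrent.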
ID: 57539
Richard Haselgrove
Volunteer tester

Joined: 3 Jan 07
Posts: 1451
Credit: 3,272,268
RAC: 0
United Kingdom
Message 57542 - Posted: 28 Mar 2016, 18:04:23 UTC - in response to Message 57539.  

Recently, I've had instances where the CPU usage increased to 95% - 100% from the usual 78% - 83%. I checked Process Explorer and noticed that sah usage wasn't the usual 3.0% - 3.25%, but 12%+. I discovered this on this WU:

23429037 (ar = 0.010747)

It's been shown that VLARs have longer run times, and I've read that they also require more CPU power, even when only running the GPU app. But, is it normal for that CPU requirement to be 300% - 400% more? Overall, would it be better to limit Rosetta to 5 cores, not just to avoid maxing out the CPU, but also to lessen the run times on sah, & sah beta WUs?

Not only was that a VLAR, you also computed it with opencl_nvidia_SoG.

As has been noted already on these boards - yes, the SoG application really does stress the CPU that hard.
ID: 57542
