SETI@home v8 beta to begin on Tuesday

Wedge009
Volunteer tester

Joined: 2 Aug 12
Posts: 14
Credit: 4,451,331
RAC: 17,908
Australia
Message 58131 - Posted: 4 May 2016, 6:26:31 UTC

Thanks, Eric. Seem to be getting something now, and the 'unsent' figure on the status page is going down.

Since GBT is primarily VLAR, it makes sense to allow VLAR WUs there, I think. VLAR WUs are still slow - nearly twice as long as non-VLAR WUs on my AMD GPUs - and almost as long as on the CPU part of the APUs. But between that and having no tasks to work on, I'll take the VLAR WUs.

I have Ontario/Zacate APUs working on S@h beta, same generation as Raistmer's C-60, both faster and slower models.
ID: 58131
Richard Haselgrove
Volunteer tester

Joined: 3 Jan 07
Posts: 1444
Credit: 3,264,298
RAC: 0
United Kingdom
Message 58132 - Posted: 4 May 2016, 7:33:14 UTC - in response to Message 58127.  

There's definitely a problem with the scheduler that is preventing Intel GPUs from getting any work.

My host 72559 started picking up work for Intel GPU (Windows, under Anonymous Platform) at 4 May 2016, 3:45:52 UTC.
ID: 58132
[SETI.Germany] Sutaru Tsureku ...
Volunteer tester
Joined: 7 Jun 09
Posts: 285
Credit: 2,822,466
RAC: 0
Germany
Message 58142 - Posted: 5 May 2016, 0:32:20 UTC - in response to Message 58127.  
Last modified: 5 May 2016, 0:36:51 UTC

Eric J Korpela wrote:
It looks like it's going to be an all or nothing change, at least for now. GPUs on beta should now be getting GBT VLAR (but not Arecibo VLAR).
(...)

I don't understand why GBT .vlar tasks need to be sent to GPUs.

Very bad performance on the GPU, as I wrote in message 58001 earlier in this thread.

On one CPU core (of 12) of the Intel Xeon E5-2630v2 CPUs:
24ap10ad.18110.221515.4.31.31_0 / AR=0.391145 = CPU time = 1 hour 36 min 21 sec
blc0_2bit_guppi_57403_69832_HIP11048_0006.29397.831.21.44.252.vlar_1 / AR=0.012257 = CPU time = 52 min 17 sec

On the CPU the GBT .vlar task is shorter than a mid-AR task - roughly half the time of a mid-AR task.

On one AMD Radeon R9 Fury X VGA card:
24mr10ac.30923.19310.5.39.173_1 / AR=0.429040 = Run time = 5 min 59 sec
blc3_2bit_guppi_57451_20612_HIP62472_0007.22580.831.17.20.60.vlar_0 / AR=0.008175 = Run time = 17 min 22 sec

On the GPU the GBT .vlar task takes much longer than a mid-AR task - nearly 3x the time of a mid-AR task.
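To make the comparison explicit, here is a minimal sketch (purely illustrative, using only the run times quoted above) that computes the VLAR to mid-AR ratios:

```python
# Illustrative only: ratios computed from the run times quoted above.
def to_minutes(h=0, m=0, s=0):
    """Convert hours/minutes/seconds to minutes."""
    return h * 60 + m + s / 60

cpu_mid_ar = to_minutes(h=1, m=36, s=21)   # 24ap10ad... on one Xeon core
cpu_vlar   = to_minutes(m=52, s=17)        # guppi .vlar on one Xeon core
gpu_mid_ar = to_minutes(m=5, s=59)         # 24mr10ac... on the Fury X
gpu_vlar   = to_minutes(m=17, s=22)        # guppi .vlar on the Fury X

print(f"CPU: vlar / mid-AR = {cpu_vlar / cpu_mid_ar:.2f}")  # ~0.54, about half the time
print(f"GPU: vlar / mid-AR = {gpu_vlar / gpu_mid_ar:.2f}")  # ~2.90, nearly 3x the time
```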


With the current mix of GBT and Arecibo tasks my PC crunches happily 24/7 at Main (as an example of a currently very fast PC).
It is currently top host #4 at Main.
(Before the PC crunched SETI Beta and Einstein for a few days, its RAC was ~117,500 at SETI Main, and it had still not reached the maximum reachable RAC.)


Will the mix of available tasks at Main change in future - only GBT tasks and no longer Arecibo tasks?
If the current mix continues, I don't understand all this - why is it necessary to decrease the performance of the GPUs...

If this decision is made at Main, I don't know whether my PCs will continue to crunch SETI tasks (because it would be wasted GPU performance for SETI - and/or withheld performance for other projects).
For sure, I won't be the only one thinking about this...

Or members will try to move the GBT .vlar tasks from their GPUs to their CPUs... (the current tool can do this, or it will be upgraded - or a new tool will come, for sure) - and then there's CreditNew...
ID: 58142
Urs Echternacht
Volunteer tester
Joined: 18 Jan 06
Posts: 1038
Credit: 18,734,730
RAC: 0
Germany
Message 58145 - Posted: 5 May 2016, 2:41:34 UTC

Why not use GPUs for .vlar? The penalty is not always that bad (see previous post):

Macmini5,2 with AMD Radeon HD6630M (using default settings):
AR=0.429040 has a runtime of ca. 2 hours 25 minutes
AR=0.008175 has a runtime of ca. 2 hours 57 minutes

That's not much longer for the .vlar tasks. YMMV!
_\|/_
U r s
ID: 58145
Wedge009
Volunteer tester

Joined: 2 Aug 12
Posts: 14
Credit: 4,451,331
RAC: 17,908
Australia
Message 58147 - Posted: 5 May 2016, 8:12:32 UTC

Maybe the VLIW5-based Turks GPU doesn't suffer as badly as GCN-based GPUs do. But VLAR tasks have always performed more poorly than non-VLAR tasks, so I also tend to prefer VLAR tasks on the CPU - maximising use of resources and such.
ID: 58147
Mike
Volunteer tester
Joined: 16 Jun 05
Posts: 2511
Credit: 1,057,937
RAC: 334
Germany
Message 58148 - Posted: 5 May 2016, 8:20:05 UTC
Last modified: 5 May 2016, 8:54:30 UTC

From my point of view SETI is a scientific project, so all types of work should be sent to all devices.
Speed is no reason to avoid that.
AMD GPUs have no issues at all running VLAR tasks.
OTOH, as has been seen at Main during a VLAR storm, lots of GPUs will not get much work at all.

Maybe a switch to disable it manually in preferences would be an option.
SETI@home v7 could be renamed to GreenBanks.
Shouldn't be too much programming effort.
With each crime and every kindness we birth our future.
ID: 58148
Rob Smith
Volunteer tester

Joined: 21 Nov 12
Posts: 856
Credit: 4,144,523
RAC: 2,417
United Kingdom
Message 58149 - Posted: 5 May 2016, 9:36:37 UTC

I'm with you, Mike - so long as the processor (whatever breed) is returning correct results, there is no need to prevent it from running a particular type of work unit.
I think some are looking at the mess we had the other day, when the vast majority of "guppi" units were failing due to a data issue, and blaming GPUs for the problem - they were failing on CPUs as well...
ID: 58149
Richard Haselgrove
Volunteer tester

Joined: 3 Jan 07
Posts: 1444
Credit: 3,264,298
RAC: 0
United Kingdom
Message 58152 - Posted: 5 May 2016, 10:10:27 UTC - in response to Message 58149.  

Please remember that some people like to use their computers for other things foremost, and the scientific research is just the icing on the cake. The default settings, for both BOINC and SETI, have to be designed so they don't intrude on "normal use", whatever the hardware and whatever that 'normal' is for the user concerned.

From my point of view, unchanged since 15 Jan 2009, the main reason for not sending VLARs to NVidia GPUs (by default, at any rate) is the lagginess - poor screen usability while they are running. Until and unless the server scheduler can be programmed to recognise the characteristics of a 'laggy GPU', and avoid sending VLARs to those machines, I think we should proceed cautiously. Enabling user opt-in through preferences would be a good first step along the way.

The other problem I identified in that post seven years ago, and which still applies today, is the sheer inefficiency of the code NVidia supplied at VLAR. Nobody's been able to solve that completely, and I for one would choose to use my NVidia GPUs for the work they're efficient at. "Four times the time, but only an extra 15% FLOPs" may not be the exact efficiency measure now, but it's still a poor use of the hardware.
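To put that "four times the time, but only an extra 15% FLOPs" figure in perspective, here is a minimal worked example (purely illustrative; these are the old rule-of-thumb numbers quoted above, not a current measurement):

```python
# Illustrative only: relative GPU efficiency using the old rule-of-thumb figures.
mid_ar_time, mid_ar_flops = 1.0, 1.0    # normalise a mid-AR task to 1 unit of time and work
vlar_time, vlar_flops = 4.0, 1.15       # "four times the time, but only an extra 15% FLOPs"

mid_ar_throughput = mid_ar_flops / mid_ar_time
vlar_throughput = vlar_flops / vlar_time

# VLAR delivers only ~29% of the mid-AR throughput, i.e. roughly a 3.5x efficiency hit.
print(f"VLAR throughput vs mid-AR: {vlar_throughput / mid_ar_throughput:.0%}")
```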
ID: 58152
[SETI.Germany] Sutaru Tsureku ...
Volunteer tester
Joined: 7 Jun 09
Posts: 285
Credit: 2,822,466
RAC: 0
Germany
Message 58153 - Posted: 5 May 2016, 10:36:03 UTC

Urs, maybe on Mac OS the .vlar tasks run 'well', but I guess 90 or 95% of all SETI PCs run Windows.


I made a thread at SETI-Main for a larger discussion.
ID: 58153
TBar
Volunteer tester

Joined: 2 Jul 13
Posts: 505
Credit: 2,097,578
RAC: 33
United States
Message 58154 - Posted: 5 May 2016, 11:03:25 UTC - in response to Message 58152.  

...The other problem I identified in that post seven years ago, and which still applies today, is the sheer inefficiency of the code NVidia supplied at VLAR. Nobody's been able to solve that completely, and I for one would choose to use my NVidia GPUs for the work they're efficient at. "Four times the time, but only an extra 15% FLOPs" may not be the exact efficiency measure now, but it's still a poor use of the hardware.

On the newer nVidia cards, CC 3.2 and higher, Petri's code based on 'streams' is much better on the VLARs than the 'original' code - somewhere around twice as fast on an nVidia 750Ti. There doesn't seem to be any problem with screen lag either. Of course this doesn't help the compute capability 3.0 and lower cards, but it does offer hope for the newer cards. Based on my tests the CUDA 'Special' App is about as fast as the OpenCL SoG App on a 750Ti.
Something to think about anyway.
ID: 58154
Mike
Volunteer tester
Joined: 16 Jun 05
Posts: 2511
Credit: 1,057,937
RAC: 334
Germany
Message 58155 - Posted: 5 May 2016, 11:07:40 UTC - in response to Message 58152.  

Please remember that some people like to use their computers for other things foremost, and the scientific research is just the icing on the cake. The default settings, for both BOINC and SETI, have to be designed so they don't intrude on "normal use", whatever the hardware and whatever that 'normal' is for the user concerned.

From my point of view, unchanged since 15 Jan 2009, the main reason for not sending VLARs to NVidia GPUs (by default, at any rate) is the lagginess - poor screen usability while they are running. Until and unless the server scheduler can be programmed to recognise the characteristics of a 'laggy GPU', and avoid sending VLARs to those machines, I think we should proceed cautiously. Enabling user opt-in through preferences would be a good first step along the way.

The other problem I identified in that post seven years ago, and which still applies today, is the sheer inefficiency of the code NVidia supplied at VLAR. Nobody's been able to solve that completely, and I for one would choose to use my NVidia GPUs for the work they're efficient at. "Four times the time, but only an extra 15% FLOPs" may not be the exact efficiency measure now, but it's still a poor use of the hardware.


That's why BOINC has a setting 'don't use GPU while computer is in use'.

Like I said, I would prefer the switch so everyone can choose on their own.
With each crime and every kindness we birth our future.
ID: 58155
Raistmer
Volunteer tester
Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,738
RAC: 0
Russia
Message 58166 - Posted: 5 May 2016, 22:11:24 UTC - in response to Message 58142.  


blc3_2bit_guppi_57451_20612_HIP62472_0007.22580.831.17.20.60.vlar_0 / AR=0.008175 = Run time = 17 min 22 sec

Link to this result please.
News about SETI opt app releases: https://twitter.com/Raistmer
ID: 58166
Richard Haselgrove
Volunteer tester

Joined: 3 Jan 07
Posts: 1444
Credit: 3,264,298
RAC: 0
United Kingdom
Message 58167 - Posted: 5 May 2016, 22:32:20 UTC - in response to Message 58166.  

blc3_2bit_guppi_57451_20612_HIP62472_0007.22580.831.17.20.60.vlar_0 / AR=0.008175 = Run time = 17 min 22 sec

Link to this result please.

There's a search box...

Task 23610097
ID: 58167
HAL9000
Volunteer tester
Joined: 11 Dec 09
Posts: 74
Credit: 1,248,766
RAC: 0
United States
Message 58168 - Posted: 6 May 2016, 2:02:28 UTC - in response to Message 58153.  

Urs, maybe on Mac OS the .vlar tasks run 'well', but I guess 90 or 95% of all SETI PCs run Windows.


I made a thread at SETI-Main for a larger discussion.

I had no issues running VLAR tasks on my HD 6870 with Windows 7. I believe the reason sending VLAR to GPUs was disabled was that it caused some NVIDIA GPUs to either crash or become unresponsive, so not sending VLAR tasks to any GPUs solved the issue for the lowest common denominator. If VLAR tasks no longer cause issues on those systems, or the GBT VLARs don't have the same type of issue, then it makes sense to once again send those tasks to all devices.
Unless explicitly directed to do otherwise by one of the developers you should always be using the version provided by the server for Beta.
ID: 58168
Zalster
Volunteer tester

Joined: 30 Dec 13
Posts: 258
Credit: 12,167,465
RAC: 9
United States
Message 58169 - Posted: 6 May 2016, 3:12:27 UTC - in response to Message 58168.  

I would love the chance to crunch the GBT on my GPUs on Main.

Yes, I'm aware that they take longer than a non-VLAR work unit, but I don't think it's fair to compare a VLAR with a non-VLAR.

Better to compare CPU VLARs to GPU VLARs.

As such, the GPU takes half as long - 21 minutes compared to 43 minutes on my mega cruncher.

Yes, the credit aspect messes things up for everyone, but if it's about getting the data crunched, and doing it as fast as we can, I say let the GPUs run with them.
ID: 58169
Jimbocous
Volunteer tester
Joined: 9 Jan 16
Posts: 51
Credit: 1,038,205
RAC: 0
United States
Message 58170 - Posted: 6 May 2016, 3:51:17 UTC - in response to Message 58169.  

I would love the chance to crunch the GBT on my GPUs on Main.

Yes, I'm aware that they take longer than a non-VLAR work unit, but I don't think it's fair to compare a VLAR with a non-VLAR.

Better to compare CPU VLARs to GPU VLARs.

As such, the GPU takes half as long - 21 minutes compared to 43 minutes on my mega cruncher.

Yes, the credit aspect messes things up for everyone, but if it's about getting the data crunched, and doing it as fast as we can, I say let the GPUs run with them.

+1
But would also like to be able to control that, on or off.
If I can help out by testing something, please let me know.
Available hardware and software are listed in my profile here.
ID: 58170
Raistmer
Volunteer tester
Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,738
RAC: 0
Russia
Message 58171 - Posted: 6 May 2016, 5:01:37 UTC - in response to Message 58167.  

blc3_2bit_guppi_57451_20612_HIP62472_0007.22580.831.17.20.60.vlar_0 / AR=0.008175 = Run time = 17 min 22 sec

Link to this result please.

There's a search box...

Task 23610097

Thanks!

@Dirk.
Try adding the -sbs 512 option to the command line. How does it change the speed of VLAR processing?

A GPU with 64 CUs is not the same as a GPU with, let's say, ~20 CUs. With the default settings it will starve on many more PulseFind kernel invocations than a midrange one.
Hence such a big performance drop, which isn't observable on less capable devices.
So let's try to find out how the defaults can be scaled to reduce this performance drop while still conforming to the definition of "default stock" in terms of unattended, non-intrusive use.
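For anonymous-platform users wondering where such an option goes, one common place is the <cmdline> element of the matching <app_version> entry in app_info.xml. The fragment below is only a sketch - the file name, plan_class and version_num are placeholders, and Raistmer's builds may also read the option from their own mb_cmdline*.txt file, so check the ReadMe shipped with the app:

```xml
<!-- Illustrative fragment of an anonymous-platform app_info.xml entry.
     Executable name, plan_class and version_num are placeholders -
     adjust them to whatever the installed build actually uses. -->
<app_version>
    <app_name>setiathome_v8</app_name>
    <version_num>800</version_num>
    <plan_class>opencl_ati5_sah</plan_class>
    <cmdline>-sbs 512</cmdline>
    <coproc>
        <type>ATI</type>
        <count>1</count>
    </coproc>
    <file_ref>
        <file_name>MB8_win_x86_SSE3_OpenCL_ATi_HD5_r3430.exe</file_name>
        <main_program/>
    </file_ref>
</app_version>
```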
News about SETI opt app releases: https://twitter.com/Raistmer
ID: 58171
Raistmer
Volunteer tester
Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,738
RAC: 0
Russia
Message 58172 - Posted: 6 May 2016, 5:05:51 UTC - in response to Message 58153.  

Urs, maybe on Mac OS the .vlar tasks run 'well', but I guess 90 or 95% of all SETI PCs run Windows.


I made a thread at SETI-Main for a larger discussion.


1) It's not an OS issue but a device performance issue. Your card needs much more parallelized work to stay busy than the ones Urs listed.

2) It's not discussion but tuning that's needed. A "wider discussion" will not solve the performance drop.
News about SETI opt app releases: https://twitter.com/Raistmer
ID: 58172
Mike
Volunteer tester
Joined: 16 Jun 05
Posts: 2511
Credit: 1,057,937
RAC: 334
Germany
Message 58185 - Posted: 6 May 2016, 13:17:46 UTC - in response to Message 58172.  
Last modified: 6 May 2016, 13:18:22 UTC

Urs, maybe on Mac OS the .vlar tasks run 'well', but I guess 90 or 95% of all SETI PCs run Windows.


I made a thread at SETI-Main for a larger discussion.


1) It's not an OS issue but a device performance issue. Your card needs much more parallelized work to stay busy than the ones Urs listed.

2) It's not discussion but tuning that's needed. A "wider discussion" will not solve the performance drop.


I disagree in this case, but we should discuss this at Lunatics.
I have noticed a performance drop on VLARs for more than a year now on all GCN-based GPUs I have tested so far.
And that's quite a few.
With each crime and every kindness we birth our future.
ID: 58185
[SETI.Germany] Sutaru Tsureku ...
Volunteer tester
Joined: 7 Jun 09
Posts: 285
Credit: 2,822,466
RAC: 0
Germany
Message 58192 - Posted: 6 May 2016, 20:16:28 UTC
Last modified: 6 May 2016, 20:25:22 UTC

I'm sad, disappointed, a little upset... - because of a few things.

I don't understand why Eric (and/or the other admins) wants to send GBT .vlar tasks to GPUs.
With the current mix of Arecibo and GBT tasks at Main there is no problem feeding, e.g., my PC (an example of a currently very fast PC) 24/7.

Or will Arecibo tasks run out, with just GBT tasks coming in future?
What will the mix look like in future?


Also...
Until now I have not found anyone I have talked to who could say whether the tasks here at Beta go into the database for the science or not.
Do the admins worry that if they said the tasks here are just test tasks and don't go into the database for the science, no one would participate here at Beta?

No statement from the admins on these questions.

I'm a hardcore SETIzen: I build PCs just for SETI, pay the electricity bill, and my heart is in this...
I think the project lives and profits from this kind of member.
Is it then not possible to ask and expect answers?
If I feel misunderstood and ignored, isn't it understandable if members leave the project?

I remember the past - wasn't I the one who started the revolt at Main arguing that VLAR tasks shouldn't be sent to GPUs?
The related thread in the Main News forum was later hidden.
Then the admins decided to give the .vlar extension to the tasks and stopped sending them to the GPUs.

I want to give SETI the maximum performance of my PC; GBT .vlar tasks on GPUs decrease that performance.

If there is no checkbox in the project prefs to un-/check 'GBT .vlar tasks to GPU (?)', I guess - I worry - I'll leave SETI.
Not a big deal for SETI, of course, I'm a realist ;-) ... - but I guess, and worry, that I won't be the only one who does this...

Whether or not the current 'GPU -> CPU task send tool' works with SETI v8, an upgrade or a new tool will come for sure; but that would be hard work for me - I would need to run it every few hours because of the very fast Fury X VGA cards... That would be annoying... and it would screw up CreditNew for me and the wingmen... Not a solution for my PC, but I guess it would be for many others...


Please don't misunderstand my message - English isn't my first language, so maybe I didn't use the correct words; my heart was talking...

- - - - - - - - - -

Currently I use the r3430 ATI HD5 app at Main (with Raistmer's permission).
I have been using -sbs 512 for a long time; VHAR tasks didn't change, but mid-AR tasks went from around 6 mins down to around 5 mins 20 secs (no bench test, live).

I would like to make bench test runs (after nearly 1 year it's still not possible to run 2 WUs/GPU on my Fury X's with the currently available drivers, and this will not change in future, so now I could start making bench test runs to find optimal cmdline settings for AP and MB), but until now I haven't found the correct tools on the Lunatics site.

I used:
MBbench 2.10
PG WU set v8 (all tasks without the VLAR task)
WisGen WUs.7z (the _WisGenA.wu)

A long time ago, when I made AstroPulse bench test runs on my J1900 (iGPU + GT730) PC, I entered the app with a few different cmdline settings (in BenchCfg.txt); after executing the .cmd file the tool created the .wisdom file with the first cmdline line, the .wisdom-creation run was skipped for all the following cmdline lines, and then the 'real' bench test run started with the test tasks.

With the above-mentioned MBbench 2.10 for SETI tasks, it doesn't create a .wisdom file automatically. (I don't know what to do with the files of 'MBBench v2.13' - it looks like it will suspend 3 CPU tasks and run a GPU bench test?)

I used _WisGenA.wu, made a copy of it named _WisGenB.wu - and put these two tasks in the TestWUs folder.

Then these two tasks were used to create the .wisdom file and all the other files (*.bin_V7_*, .bin_*****VM).
Then the 3 'real' test tasks were calculated.
But the above-mentioned tasks last just around 30 secs (hard to see time differences).


Could someone from the Lunatics crew make 'new', 'longer' bench test tasks (for fast GPUs)?
Maybe 2-min VHAR, 4-min mid-AR and 6-min ('guppi') VLAR tasks?
That way they would be good (and have everything needed) for making bench test runs (on fast GPUs) to find the fastest cmdline settings. This would be very helpful and I would be very grateful.

Then I could make a bunch of bench test runs with (recommended/wanted) cmdline settings.
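Until such test tasks exist, a crude stand-in for those bench runs can be scripted by hand. The sketch below is not MBbench; the app name, input file name and WU names are placeholders, and unlike the real benchmark tools it only measures elapsed time and does not compare result files:

```python
import shutil
import subprocess
import time

# Crude, illustrative benchmark loop: run one app build against the same
# test WUs for each candidate command line and print the elapsed times.
# All names below are placeholders - substitute the real app and test files.
APP = "MB8_win_x86_SSE3_OpenCL_ATi_HD5_r3430.exe"
TEST_WUS = ["vhar_test.wu", "midar_test.wu", "guppi_vlar_test.wu"]
CMDLINES = ["", "-sbs 256", "-sbs 512"]

for cmdline in CMDLINES:
    for wu in TEST_WUS:
        shutil.copy(wu, "work_unit.sah")   # assumes the app reads this fixed input name
        start = time.perf_counter()
        subprocess.run([APP, *cmdline.split()], check=True)
        elapsed = time.perf_counter() - start
        print(f"{cmdline or '(defaults)':>12}  {wu:<22} {elapsed:7.1f} s")
```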


Thanks.
ID: 58192