SETI@home v8 beta to begin on Tuesday

Message boards : News : SETI@home v8 beta to begin on Tuesday
Profile Eric J Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 15 Mar 05
Posts: 1546
Credit: 26,244,542
RAC: 1,725
Message 58193 - Posted: 6 May 2016, 20:31:36 UTC - in response to Message 58142.  

Arecibo has recorded very little data in 2016. Eventually we will be at 90 or 100% Breakthrough tasks.
ID: 58193
Tutankhamon
Volunteer tester
Joined: 10 Mar 12
Posts: 1353
Credit: 6,749,709
RAC: 10,871
Message 58196 - Posted: 6 May 2016, 22:08:47 UTC
Last modified: 6 May 2016, 22:19:01 UTC

Well, since the RAC is dropping like a stone on main, I might as well spend some time 24/7 testing here with my main cruncher.

Let's see what it can achieve.

Edit: Running GPU tasks only with OpenCL settings from SoG on main:

-cpu_lock -sbs 256 -period_iterations_num 20 -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64 -instances_per_device 3

Same for both mb_cmdline-8.12_windows_intel__opencl_nvidia_sah.txt, and mb_cmdline-8.12_windows_intel__opencl_nvidia_SoG.txt
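Keeping the two cmdline files identical by hand is easy to get wrong; a minimal sketch of syncing them follows (file names are the ones mentioned above; the helper name and the shortened option string are illustrative, not a Lunatics tool):

```python
# Illustrative helper (not part of any SETI/Lunatics package): write one
# tuning string into both cmdline files so the sah and SoG apps pick up
# identical settings.
from pathlib import Path

# Shortened example options; the full string from the post would go here.
OPTIONS = ("-cpu_lock -sbs 256 -period_iterations_num 20 "
           "-spike_fft_thresh 4096 -instances_per_device 3")

CMDLINE_FILES = [
    "mb_cmdline-8.12_windows_intel__opencl_nvidia_sah.txt",
    "mb_cmdline-8.12_windows_intel__opencl_nvidia_SoG.txt",
]

def sync_cmdlines(project_dir: str, options: str = OPTIONS) -> None:
    """Write the same option string into every cmdline file."""
    for name in CMDLINE_FILES:
        Path(project_dir, name).write_text(options + "\n")
```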

Let the fun begin.
ID: 58196
Profile Raistmer
Volunteer tester
Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,679
RAC: 0
Message 58207 - Posted: 7 May 2016, 20:37:27 UTC - in response to Message 58192.  


I don't understand why Eric (and/or other admins) want to send GBT .vlar tasks to GPUs.
With the current mix of Arecibo and GBT tasks at Main, there is no problem feeding e.g. my PC (an example of a fast PC) 24/7.

Or will Arecibo tasks run out, with just GBT tasks coming in the future?
What will the mix look like in the future?


The answer is given: the choice is between no work and slow work, not between slow work and fast work.


But the above-mentioned tasks last just around 30 secs (hard to find time differences).

Indeed, the existing PGv8 set is too fast for high-end cards.
So for now I would recommend taking one GUPPI VLAR task from here or Main, putting it into the bench and using it for tuning.
If VLAR is "our new future" then it makes sense to do the optimization against a real VLAR task.
News about SETI opt app releases: https://twitter.com/Raistmer
ID: 58207
Tutankhamon
Volunteer tester
Joined: 10 Mar 12
Posts: 1353
Credit: 6,749,709
RAC: 10,871
Message 58210 - Posted: 7 May 2016, 22:34:30 UTC
Last modified: 7 May 2016, 22:37:45 UTC

No problems whatsoever running VLARs on my 980 with these command-line options:

-cpu_lock -sbs 256 -period_iterations_num 20 -spike_fft_thresh 4096 -tune 1 64 1 4 -oclfft_tune_gr 256 -oclfft_tune_lr 16 -oclfft_tune_wg 256 -oclfft_tune_ls 512 -oclfft_tune_bn 64 -oclfft_tune_cw 64 -instances_per_device 3

The server/scheduler has decided that SoG is the fastest app (no surprise there). No screen lag (no surprise there either, since I do not run the monitor on the 980, but on the Intel HD Graphics 4600).

No other kind of lag either, even if I run 3 VLARs at a time. Completion times of the VLARs are acceptable.

Valid results of the 980 computer: http://setiweb.ssl.berkeley.edu/beta/results.php?hostid=75292&offset=0&show_names=0&state=4&appid=

Release the Kraken on main :-)
ID: 58210
Profile Mike
Volunteer tester
Joined: 16 Jun 05
Posts: 2460
Credit: 1,039,253
RAC: 53
Message 58217 - Posted: 8 May 2016, 10:29:12 UTC

Are you running 1 or 2 instances on the 980, Sten?
With each crime and every kindness we birth our future.
ID: 58217
Tutankhamon
Volunteer tester
Joined: 10 Mar 12
Posts: 1353
Credit: 6,749,709
RAC: 10,871
Message 58218 - Posted: 8 May 2016, 10:40:35 UTC - in response to Message 58217.  
Last modified: 8 May 2016, 10:56:23 UTC

Are you running 1 or 2 instances on the 980, Sten?

As my command line states: -instances_per_device 3.

3 instances it is; anything less is a waste of resources. I've tested SoG on Main for a long time now, with over 25 thousand tasks finished.

4 is a little bit slower than 3, but it's really a toss-up. With lots of shorties, 4 instances is faster, but with a mix of ARs, 3 at a time is faster.
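The 3-versus-4 toss-up comes down to simple throughput arithmetic: more instances only win if the per-task time grows by less than the instance ratio. A toy calculation (the function and all numbers are invented for illustration):

```python
# Toy throughput model: tasks completed per hour when running several
# instances concurrently, given the resulting per-task wall time.
def tasks_per_hour(instances: int, minutes_per_task: float) -> float:
    return instances * 60.0 / minutes_per_task
```

For example, if 3-at-a-time takes 20 min per task (9 tasks/hour), then 4-at-a-time only wins if each task still finishes in under about 26.7 minutes.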

GTX980 temp is 62-65 °C. However, that is with the side panel off, and a table-top fan pointing at the inner workings of the computer :-)

With summer coming, that will not be enough though. This room will get to 30-32 °C, and I'll have to crunch mostly from late evening until maybe 9 in the morning. I have a portable AC, but it isn't worth the electricity cost to keep it running all day just to be able to run SETI.
ID: 58218
Richard Haselgrove
Volunteer tester

Joined: 3 Jan 07
Posts: 1444
Credit: 3,263,946
RAC: 0
Message 58219 - Posted: 8 May 2016, 10:48:45 UTC

Just encountered a "finish file present too long" error on Arecibo mid-AR task 23790517 - it was sharing the GTX 970 with a guppi VLAR, running SoG r3430.

Timetable was

11:03:17 (40088): called boinc_finish(0)
08/05/2016 11:03:30 | SETI@home Beta Test | [sched_op] Reason: Unrecoverable error for task 24mr10ac.7768.9486.6.40.124_1
08/05/2016 11:03:35 | SETI@home Beta Test | Computation for task 24mr10ac.7768.9486.6.40.124_1 finished

So it looks as if there was an 18-second gap between calling finish and the app quitting, with BOINC pulling the plug at 13 seconds.
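For reference, the check being described amounts to something like the sketch below. This is an approximation of the client-side logic, not the actual BOINC source; the 10-second grace period is an assumption that only roughly matches the "13 seconds" observed above.

```python
# Approximate sketch of BOINC's "finish file present too long" check
# (not the actual client source). The app writes boinc_finish_called
# when boinc_finish() is called; if the process is still alive after a
# grace period, the client aborts the task as an unrecoverable error.
import os
import time
from typing import Optional

FINISH_FILE_GRACE_SECS = 10  # assumed value; the real constant lives in the client

def finish_file_overdue(slot_dir: str, now: Optional[float] = None) -> bool:
    """True once the finish file has existed longer than the grace period."""
    path = os.path.join(slot_dir, "boinc_finish_called")
    if not os.path.exists(path):
        return False
    now = time.time() if now is None else now
    return now - os.path.getmtime(path) > FINISH_FILE_GRACE_SECS
```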
ID: 58219
Profile Jimbocous
Volunteer tester
Joined: 9 Jan 16
Posts: 51
Credit: 1,038,205
RAC: 224
Message 58220 - Posted: 8 May 2016, 10:54:52 UTC - in response to Message 58219.  

Just encountered a "finish file present too long" error on Arecibo mid-AR task 23790517 - it was sharing the GTX 970 with a guppi VLAR, running SoG r3430.

Timetable was

11:03:17 (40088): called boinc_finish(0)
08/05/2016 11:03:30 | SETI@home Beta Test | [sched_op] Reason: Unrecoverable error for task 24mr10ac.7768.9486.6.40.124_1
08/05/2016 11:03:35 | SETI@home Beta Test | Computation for task 24mr10ac.7768.9486.6.40.124_1 finished

So it looks as if there was an 18-second gap between calling finish and the app quitting, with BOINC pulling the plug at 13 seconds.


Hmm. Guess xj didn't totally drive a stake through the heart of that particular issue ... Personally, haven't had that since I loaded it ...
If I can help out by testing something, please let me know.
Available hardware and software is listed in my profile here.
ID: 58220
Richard Haselgrove
Volunteer tester

Joined: 3 Jan 07
Posts: 1444
Credit: 3,263,946
RAC: 0
Message 58221 - Posted: 8 May 2016, 11:04:56 UTC - in response to Message 58220.  

Hmm. Guess xj didn't totally drive a stake through the heart of that particular issue ... Personally, haven't had that since I loaded it ...

xj dates from around 21 April. Raistmer's SoG r3430 dates from 31 March, so it's not really fair to blame him for not including that fix in advance. But it is a reminder that we perhaps need to consider 'hardening' SoG against this problem, if that's going to become part of the armoury for handling VLAR on Main.
ID: 58221
Tutankhamon
Volunteer tester
Joined: 10 Mar 12
Posts: 1353
Credit: 6,749,709
RAC: 10,871
Message 58223 - Posted: 8 May 2016, 15:31:53 UTC

End of the GBT VLAR fun, it seems. Only Arecibo tasks downloading now.
ID: 58223
Profile Jimbocous
Volunteer tester
Joined: 9 Jan 16
Posts: 51
Credit: 1,038,205
RAC: 224
Message 58226 - Posted: 8 May 2016, 17:37:14 UTC - in response to Message 58221.  

Hmm. Guess xj didn't totally drive a stake through the heart of that particular issue ... Personally, haven't had that since I loaded it ...

xj dates from around 21 April. Raistmer's SoG r3430 dates from 31 March, so it's not really fair to blame him for not including that fix in advance. But it is a reminder that we perhaps need to consider 'hardening' SoG against this problem, if that's going to become part of the armoury for handling VLAR on Main.

Sorry, missed that it was a different app ... no intent to assign blame ...
If I can help out by testing something, please let me know.
Available hardware and software is listed in my profile here.
ID: 58226
Zalster
Volunteer tester

Joined: 30 Dec 13
Posts: 258
Credit: 12,152,357
RAC: 962
Message 58229 - Posted: 8 May 2016, 18:43:58 UTC - in response to Message 58226.  

Tut,

How much CPU of your 8 cores were those 3 work units using?
ID: 58229
Tutankhamon
Volunteer tester
Joined: 10 Mar 12
Posts: 1353
Credit: 6,749,709
RAC: 10,871
Message 58232 - Posted: 8 May 2016, 19:25:50 UTC - in response to Message 58229.  
Last modified: 8 May 2016, 19:26:33 UTC

Tut,

How much CPU of your 8 cores were those 3 work units using?

Well, if they are GBT VLARs, they use almost one core/thread each (95-99%). However, I only run 3 CPU WUs, so 2 cores/threads are always free.
Even "normal" ARs use lots of CPU with opencl_nvidia_SoG or opencl_nvidia_sah, some of them almost a full core/thread, but usually 40-80%. Only shorties use under 10% of a core/thread.
I don't want to use sleep, because it really hurts performance. I can live with the high CPU usage.
ID: 58232
Zalster
Volunteer tester

Joined: 30 Dec 13
Posts: 258
Credit: 12,152,357
RAC: 962
Message 58234 - Posted: 8 May 2016, 19:56:28 UTC - in response to Message 58232.  

That is what I was seeing as well.

Since I was running multiple GPUs, I was forced to use -use_sleep; it really only extended the time by a few minutes.

But like you, I want those few minutes, lol..

My problem: not enough CPU cores for all the GPU work units.
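The core-budget problem described here can be put as simple arithmetic. A hypothetical helper (the function is illustrative; the near-one-core-per-GPU-task figure is from the posts above, the example numbers are invented):

```python
# Back-of-the-envelope core budget for mixing CPU tasks with OpenCL GPU
# tasks that each consume most of a core (the 95-99% figure quoted above).
def cores_short(total_cores: int, cpu_tasks: int,
                gpu_instances: int, cpu_per_gpu_task: float) -> float:
    """Cores of overcommit (negative means there is headroom)."""
    demand = cpu_tasks + gpu_instances * cpu_per_gpu_task
    return demand - total_cores
```

On an 8-thread host running 3 CPU WUs plus 3 GPU instances at ~0.97 cores each, this gives about -2.1, i.e. roughly two threads of headroom; with more GPUs the value turns positive and something like -use_sleep becomes necessary.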
ID: 58234
Profile Raistmer
Volunteer tester
Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,679
RAC: 0
Message 58238 - Posted: 8 May 2016, 20:50:46 UTC - in response to Message 58219.  

Just encountered a "finish file present too long" error on Arecibo mid-AR task 23790517 - it was sharing the GTX 970 with a guppi VLAR, running SoG r3430.

Timetable was

11:03:17 (40088): called boinc_finish(0)
08/05/2016 11:03:30 | SETI@home Beta Test | [sched_op] Reason: Unrecoverable error for task 24mr10ac.7768.9486.6.40.124_1
08/05/2016 11:03:35 | SETI@home Beta Test | Computation for task 24mr10ac.7768.9486.6.40.124_1 finished

So it looks as if there was an 18-second gap between calling finish and the app quitting, with BOINC pulling the plug at 13 seconds.


GPU device sync requested... ...GPU device synched
11:03:17 (40088): called boinc_finish(0)


The app completely finished its own operations.
It's worth raising this case with the BOINC API writers.
News about SETI opt app releases: https://twitter.com/Raistmer
ID: 58238
Tutankhamon
Volunteer tester
Joined: 10 Mar 12
Posts: 1353
Credit: 6,749,709
RAC: 10,871
Message 58239 - Posted: 8 May 2016, 21:17:22 UTC

Hmm, I think we're about to run out of tasks....

Tasks ready to send: 1339.

http://setiweb.ssl.berkeley.edu/beta/server_status.php

Time to move back to main perhaps.
ID: 58239
Profile [SETI.Germany] Sutaru Tsureku (aka Dirk :-)
Volunteer tester

Joined: 7 Jun 09
Posts: 285
Credit: 2,822,466
RAC: 0
Message 58266 - Posted: 9 May 2016, 14:32:08 UTC - in response to Message 58207.  
Last modified: 9 May 2016, 14:40:50 UTC


I don't understand why Eric (and/or other admins) want to send GBT .vlar tasks to GPUs.
With the current mix of Arecibo and GBT tasks at Main, there is no problem feeding e.g. my PC (an example of a fast PC) 24/7.

Or will Arecibo tasks run out, with just GBT tasks coming in the future?
What will the mix look like in the future?


The answer is given: the choice is between no work and slow work, not between slow work and fast work.


But the above-mentioned tasks last just around 30 secs (hard to find time differences).

Indeed, the existing PGv8 set is too fast for high-end cards.
So for now I would recommend taking one GUPPI VLAR task from here or Main, putting it into the bench and using it for tuning.
If VLAR is "our new future" then it makes sense to do the optimization against a real VLAR task.


For a few days now I have had 34 °C ambient in the 'PC room',
and not very healthy CPU/GPU temps.
So I switched off my quad FuryX PC, at least for two weeks, because of construction work in front of the house (not possible to open the windows during the day), and to think about a way to cool the 'PC room' down further.
Either more extraction fans in the window, or installation of an A/C.


So, would it be equivalent to use a 'guppi' .vlar task or a regular .vlar task for bench test runs?
Or would it be better to use a 'guppi' .vlar task?
If they are equivalent, maybe I'll start using the VLAR task of the 'PG WU set v8'. ;-)
(Until now I have no idea how long this VLAR task will last.)


Because of...
postid=58192
...I would like to make bench test runs, but until now I haven't found the correct tools on the Lunatics site. (I guess that, since after nearly 1 year it is still not possible to run 2 WUs/GPU on my FuryXs with the currently available drivers, and this will not change in the future, I could now start making bench test runs to find optimal cmdline settings for AP and MB.)

I used:
MBbench 2.10
PG WU set v8 (all tasks without the VLAR task)
WisGen WUs.7z (the _WisGenA.wu)

A long time ago, when I made AstroPulse bench test runs on my J1900 (iGPU + GT730) PC, I put the app with a few different cmdline settings into BenchCfg.txt; after execution of the .cmd file, the tool created the .wisdom file during the first cmdline line, the wisdom-creation runs were skipped for all following cmdline lines, and then the 'real' bench test run started with the test tasks.

With the above-mentioned MBbench 2.10 for SETI tasks, it doesn't create a .wisdom file automatically. (I don't know what to do with the files of 'MBBench v2.13'; it looks like it will suspend 3 CPU tasks and run a GPU bench test?)

I used _WisGenA.wu, made a copy of it named _WisGenB.wu, and put these two tasks in the TestWUs folder.

Then both of these tasks were used to create the .wisdom file and all other files (*.bin_V7_*, .bin_*****VM).
Then the 3 'real' test tasks were calculated.
But the above-mentioned tasks last just around 30 secs (hard to find time differences).


Could someone from the Lunatics crew make new, longer bench test tasks (for fast GPUs)?
Maybe a 2-min VHAR, a 4-min mid-AR and a 6-min ('guppi') VLAR task?
So that they are well suited for bench test runs (on fast GPUs) to find the fastest cmdline settings. This would be very helpful and I would be very grateful.


As I described above, 'MBbench 2.10' doesn't work the way 'APbench211_minimal' (which includes the #ap_genwis.dat file) did in the past.

If I were to continue with the above workaround (tasks #1 and #2 being _WisGenA.wu, used to create the .wisdom file; after the 2nd run of a task, .wisdom file creation is finished), this would be a waste of time... or how would it work with 'MBbench 2.10'?
I copy the VHAR, 2x mid-AR and VLAR tasks into the TestWUs folder.
_WisGenA.wu and _WisGenB.wu (a copy of *A*) also go into the TestWUs folder.

If I have e.g. in the BenchCfg.txt file:
MB8_win_x86_SSE2_OpenCL_ATi_HD5_r3330.exe -bla bla -bla bla
MB8_win_x86_SSE2_OpenCL_ATi_HD5_r3330.exe -bla bla2 -bla bla2
MB8_win_x86_SSE2_OpenCL_ATi_HD5_r3330.exe -bla bla3 -bla bla3

...then the _WisGenA.wu and _WisGenB.wu runs for lines #2 and #3 are for nothing, needless.
Or how would it work?


Raistmer, could you (or the whole Lunatics crew, and/or advanced SETI members) write a description of how to do bench test runs (maybe on the Main or Lunatics forum)?
All possible commands, which values are possible, whether they are tuned together or alone, and the possible step sizes?
E.g. for AP:
-unroll N, 1 to 100, alone, +/- 1
-ffa_block N and -ffa_block_fetch N, 512 to 20480, connected, +/- 64

E.g. for AP and VGA card series with recommendations for testing:
Low end (NV GT***, AMD R5):
-unroll N, 1 to 10, alone, +/- 1
High end (NV GTX*80, AMD R9):
-unroll N, 10 to 20, alone, +/- 1

How many tasks/GPU (just for NV).

This would be very helpful and I (and I guess many others too) would be very grateful.

Thanks.
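The kind of parameter sweep asked for above can be sketched as generators of candidate cmdlines. The ranges and the connected/alone distinction are taken from the post; the helper names are hypothetical, and a real bench run would still need to time each generated cmdline against the same set of test WUs:

```python
# Hypothetical sweep generators for AP-style cmdline tuning runs.
def unroll_cmdlines(app, lo=1, hi=20):
    # -unroll is tuned alone, stepped by 1.
    for n in range(lo, hi + 1):
        yield f"{app} -unroll {n}"

def ffa_cmdlines(app, lo=512, hi=20480, step=64):
    # -ffa_block and -ffa_block_fetch are "connected": swept together, +/- 64.
    for n in range(lo, hi + 1, step):
        yield f"{app} -ffa_block {n} -ffa_block_fetch {n}"
```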
ID: 58266
boboviz
Volunteer tester

Joined: 13 Dec 14
Posts: 14
Credit: 155,885
RAC: 0
Message 58312 - Posted: 16 May 2016, 7:15:14 UTC - in response to Message 58079.  

No plan to pass 8.12 to production to Seti@Home??

yep, there are such plans. Next week or maybe after.


:-P
ID: 58312
Tutankhamon
Volunteer tester
Joined: 10 Mar 12
Posts: 1353
Credit: 6,749,709
RAC: 10,871
Message 58409 - Posted: 26 May 2016, 22:41:07 UTC

Are we going to have a VLAR / no-VLAR-to-GPU switch test on Beta soon?
ID: 58409
JLDun
Volunteer tester
Joined: 23 May 07
Posts: 106
Credit: 119,668
RAC: 36
Message 58425 - Posted: 30 May 2016, 2:16:08 UTC - in response to Message 58409.  

Are we going to have a VLAR / no-VLAR-to-GPU switch test on Beta soon?

I'd like to see something like this, too.
79482 -- BoincStats

All:
ID: 58425



 
©2018 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.