Result is no longer usable

Message boards : SETI@home Enhanced : Result is no longer usable
CElliott
Volunteer tester
Joined: 16 Aug 05
Posts: 79
Credit: 71,936,490
RAC: 0
United States
Message 45431 - Posted: 5 Apr 2013, 17:11:40 UTC

After successfully processing 344 tasks today and after 00:14:27 of processing the workunit whose name is given below, Boinc wrote:
"Result 21jl10aa.12093.4161.11.16.99_0 is no longer usable".

Yesterday Boinc flushed about 2600 tasks for this reason, but apparently has since successfully recovered. Every time Boinc communicates with the server, this message is output. Boinc and BoincTasks 1.45 say 21jl10aa.12093.4161.11.16.99_0 is being uploaded, but there is no result in the project directory to upload. The server will not offer any new CUDA workunits, saying the computer has reached a quota of 170 WUs.

I have tried to abort 21jl10aa.12093.4161.11.16.99_0 twice. Both times Boinc says it did so, but still it is listed in the Tasks tab as uploading. And still the server says it is no longer usable.

Does anyone know what is going on? Reset the project?
ID: 45431
CElliott
Volunteer tester
Joined: 16 Aug 05
Posts: 79
Credit: 71,936,490
RAC: 0
United States
Message 45432 - Posted: 5 Apr 2013, 17:27:28 UTC - in response to Message 45431.  

I edited all references to 21jl10aa.12093.4161.11.16.99 out of the client_state.xml file, and that removed it from the Tasks tab in Boinc, but the server still will not offer any WUs, now saying the computer has reached a quota of 177 work units.
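For anyone attempting the same surgery, the removal step can be sketched roughly like this (a hypothetical helper script, not a BOINC tool; stop the client and back up client_state.xml first, and note that a complete edit also has to remove the matching <workunit> and <file_info> entries, which this sketch does not do):

```python
import re

def strip_result(xml_text: str, result_name: str) -> str:
    """Remove every <result>...</result> block whose <name> matches result_name."""
    pattern = re.compile(r"<result>.*?</result>\s*", re.DOTALL)

    def keep(match):
        block = match.group(0)
        if f"<name>{result_name}</name>" in block:
            return ""  # drop the matching block entirely
        return block   # leave every other result untouched

    return pattern.sub(keep, xml_text)

# Tiny demonstration on a fragment shaped like client_state.xml
# (task names here are made up for illustration).
sample = (
    "<client_state>\n"
    "<result>\n<name>bad_task_0</name>\n<state>4</state>\n</result>\n"
    "<result>\n<name>good_task_0</name>\n<state>2</state>\n</result>\n"
    "</client_state>\n"
)
cleaned = strip_result(sample, "bad_task_0")
```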

Here is the result if anyone is interested:
<result>
<name>21jl10aa.12093.4161.11.16.99_0</name>
<final_cpu_time>120.906300</final_cpu_time>
<final_elapsed_time>867.682028</final_elapsed_time>
<exit_status>0</exit_status>
<state>4</state>
<platform>windows_intelx86</platform>
<version_num>697</version_num>
<plan_class>cuda_fermi</plan_class>
<fpops_cumulative>98914980000000.000000</fpops_cumulative>
<stderr_out>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 2 CUDA device(s):
Device 1: GeForce GTX 570, 1279 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 15
pciBusID = 2, pciSlotID = 0
clockRate = 1464 MHz
Device 2: GeForce GTX 570, 1279 MiB, regsPerBlock 32768
computeCap 2.0, multiProcs 15
pciBusID = 3, pciSlotID = 0
clockRate = 1464 MHz
In cudaAcc_initializeDevice(): Boinc passed DevPref 2
setiathome_CUDA: CUDA Device 2 specified, checking...
Device 2: GeForce GTX 570 is okay
SETI@home using CUDA accelerated device GeForce GTX 570
pulsefind: blocks per SM 4 (Fermi or newer default)
pulsefind: periods per launch 100 (default)
Priority of process set to BELOW_NORMAL (default) successfully
Priority of worker thread set successfully

setiathome enhanced x41zc, Cuda 4.20

Detected setiathome_enhanced_v7 task. Autocorrelations enabled, size 128k elements.
Work Unit Info:
...............
WU true angle range is : 0.437447
re-using dev_GaussFitResults array for dev_AutoCorrIn, 4194304 bytes
re-using dev_GaussFitResults+524288x8 array for dev_AutoCorrOut, 4194304 bytes
Thread call stack limit is: 1k
cudaAcc_free() called...
cudaAcc_free() running...
cudaAcc_free() PulseFind freed...
cudaAcc_free() Gaussfit freed...
cudaAcc_free() AutoCorrelation freed...
cudaAcc_free() DONE.
Cuda sync'd & freed.
Preemptively acknowledging a safe Exit. ->
SETI@Home Informational message -9 result_overflow
NOTE: The number of results detected equals the storage space allocated.

Flopcounter: 34743740063387.562000

Spike count: 23
Autocorr count: 0
Pulse count: 5
Triplet count: 0
Gaussian count: 2
Worker preemptively acknowledging an overflow exit.->
called boinc_finish
Exit Status: 0
boinc_exit(): requesting safe worker shutdown ->
boinc_exit(): received safe worker shutdown acknowledge ->
Cuda threadsafe ExitProcess() initiated, rval 0
</stderr_txt>
]]>
</stderr_out>
<wu_name>21jl10aa.12093.4161.11.16.99</wu_name>
<report_deadline>1367003737.000000</report_deadline>
<received_time>1363020666.024283</received_time>
<file_ref>
<file_name>21jl10aa.12093.4161.11.16.99_0_0</file_name>
<open_name>result.sah</open_name>
</file_ref>
</result>
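As an aside, the <report_deadline> and <received_time> fields in that dump are plain Unix epoch seconds, so they can be decoded with any standard time library; a minimal sketch:

```python
from datetime import datetime, timezone

def epoch_to_utc(ts: float) -> str:
    """Convert Unix epoch seconds to a human-readable UTC string."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")

# Values taken from the <result> block above:
print(epoch_to_utc(1363020666.024283))  # received_time    -> 2013-03-11 16:51:06 UTC
print(epoch_to_utc(1367003737.0))       # report_deadline  -> 2013-04-26 19:15:37 UTC
```

So the task was received on 11 March 2013 and was not due back until late April, consistent with the deadline not being the problem here.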
ID: 45432
Claggy
Volunteer tester

Joined: 29 May 06
Posts: 1037
Credit: 8,440,339
RAC: 0
United Kingdom
Message 45433 - Posted: 5 Apr 2013, 18:21:26 UTC - in response to Message 45431.  
Last modified: 5 Apr 2013, 18:21:48 UTC

After successfully processing 344 tasks today and after 00:14:27 of processing the workunit whose name is given below, Boinc wrote:
"Result 21jl10aa.12093.4161.11.16.99_0 is no longer usable".

Yesterday Boinc flushed about 2600 tasks for this reason, but apparently has since successfully recovered. Every time Boinc communicates with the server, this message is output. Boinc and BoincTasks 1.45 say 21jl10aa.12093.4161.11.16.99_0 is being uploaded, but there is no result in the project directory to upload. The server will not offer any new CUDA workunits, saying the computer has reached a quota of 170 WUs.

I have tried to abort 21jl10aa.12093.4161.11.16.99_0 twice. Both times Boinc says it did so, but still it is listed in the Tasks tab as uploading. And still the server says it is no longer usable.

Does anyone know what is going on? Reset the project?


Something to do with all those Abandoned tasks of yours. If you didn't detach and reattach yourself, you should post to this thread at Seti Main and add your log details:

Abandoned tasks - Ongoing issue

In the meantime, you could remove your app_info.xml and revert to Stock, and get WUs for the Stock plan_classes; I don't think there is a limit initially for them.

Claggy
ID: 45433
keputnam
Volunteer tester

Joined: 17 Jun 05
Posts: 12
Credit: 305,996
RAC: 0
United States
Message 45727 - Posted: 7 May 2013, 4:06:11 UTC
Last modified: 7 May 2013, 4:06:27 UTC

I got the same error today on about nine WUs. There were still eight days to the return limit, and I have made no changes to BOINC at all for over 6 months (except for T4T).

5/6/2013 8:46:41 PM | SETI@home Beta Test | Result 29my12ab.31380.16179.3.16.211.vlar_2 is no longer usable


What gives??
ID: 45727
Josef W. Segur
Volunteer tester

Joined: 14 Oct 05
Posts: 1137
Credit: 1,848,733
RAC: 0
United States
Message 45728 - Posted: 7 May 2013, 5:36:16 UTC - in response to Message 45727.  

I got the same error today on about nine WUs. There were still eight days to the return limit, and I have made no changes to BOINC at all for over 6 months (except for T4T).

5/6/2013 8:46:41 PM | SETI@home Beta Test | Result 29my12ab.31380.16179.3.16.211.vlar_2 is no longer usable


What gives??

It looks like Eric has cancelled all unresolved SaH v7 WUs and reset the host/app_version counts again in order to get a fresh start on checking out the Scheduler modifications. BETA testing is always interesting.

One side effect Eric may not have anticipated is that the cancellations have the effect of driving the quota to one, so we'll all be temporarily very limited in getting work. That should just be a delay in getting back to full testing, though.
                                                                   Joe
ID: 45728
Raistmer
Volunteer tester
Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,738
RAC: 0
Russia
Message 45729 - Posted: 7 May 2013, 6:57:43 UTC - in response to Message 45728.  

Because the quota rises fast, that's less of a delay than waiting for all the old tasks to go through the pipe. And AFAIK the old tasks distort the averages. We need to check whether the new BOINC code can select the best app for a host or not. That's crucial for any Brook+ AP usage, because without it, distributing Brook+ AP would be a de-optimization for OpenCL hosts.
ID: 45729
Raistmer
Volunteer tester
Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,738
RAC: 0
Russia
Message 45731 - Posted: 7 May 2013, 7:36:17 UTC - in response to Message 45728.  



One side effect Eric may not have anticipated is that the cancellations have the effect of driving the quota to one, so we'll all be temporarily very limited in getting work.
                                                                   Joe


That's not the case. I detached and re-attached my host to Beta and got a full load of MB7 tasks, not just 1 task.
ID: 45731
Josef W. Segur
Volunteer tester

Joined: 14 Oct 05
Posts: 1137
Credit: 1,848,733
RAC: 0
United States
Message 45735 - Posted: 7 May 2013, 13:43:57 UTC - in response to Message 45731.  



One side effect Eric may not have anticipated is that the cancellations have the effect of driving the quota to one, so we'll all be temporarily very limited in getting work.
                                                                   Joe


That's not the case. I detached and re-attached my host to Beta and got a full load of MB7 tasks, not just 1 task.

It is true that the max tasks per day was reduced to one, at least for CPU app versions. The Application details for host 39394, your Q9450, show that right now, though they also show that 139 tasks have been sent "today". That's because when the number "today" is temporarily zero, because the server figures it's a new day, the full amount of work asked for is delivered rather than being checked step by step against the quota. So recovery is quick, though the quota is very poor protection against a host that has gone bad.
                                                                  Joe
ID: 45735

©2021 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.