Anything relating to AstroPulse tasks

Message boards : Number crunching : Anything relating to AstroPulse tasks


Cosmic_Ocean
Joined: 23 Dec 00
Posts: 2936
Credit: 11,067,574
RAC: 569
United States
Message 1695866 - Posted: 26 Jun 2015, 7:02:38 UTC

Hm. I don't know then. As I said, I'm just making observations of vague trends in the chaos. As Jason described it, basically, this whole BOINC thing is essentially quantum physics: a world where anything and everything is possible...all at the same time.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
Lionel

Joined: 25 Mar 00
Posts: 672
Credit: 438,627,240
RAC: 113,088
Australia
Message 1695876 - Posted: 26 Jun 2015, 7:46:08 UTC - in response to Message 1695866.  

Neither do I. It's a mystery.

As an aside, I did notice that towards the end of the AP run, downloads seemed to fall back to only 2 at a time even though I have it set to 8 concurrent. The two that were downloading came down at breakneck speed, and then the rest would hang and not download. I had to suspend network activity and then restart it to get another two to download, then repeat until all APs were downloaded.

Just another mystery I suppose.
Profile jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1695894 - Posted: 26 Jun 2015, 8:26:13 UTC - in response to Message 1695866.  
Last modified: 26 Jun 2015, 8:28:30 UTC

I need another walkthrough of the scheduler logic at some point, to confirm the exact mechanism. I do recall, though, that there are links into the wonky time estimates early in the application selection part (kind of a simulation of what the host would do with tasks, to decide what to send).

Some, maybe most, will see the 'expected' behaviour more or less by chance, while others fall into some funk for no apparent reason. From a control systems theory point of view, there are several 'strange attractors' present in that mechanism.

One of the things inducing oscillations in estimates is the use of short-term sample averages over arbitrary natural (noisy) input with little (if any) filtering; another is the efficiency offset from using the wrong BOINC Whetstone figure for the applications concerned; and probably another is that the averages used evolve at too high a frequency for what they're controlling (a temporal mismatch).

Simplifying from the above, there's a slim chance that other workarounds might be feasible, apart from adjusting the cache. Disabling the benchmark and fixing the BOINC Whetstone value at a realistic figure (say, one obtained via SiSoft Sandra or similar, in single-threaded SSE2 form) should remove one contributor to instability in the mechanism. That figure should be around 2.5x the stock BOINC Whetstone. It'd take some time for estimates to readjust, but it should at least put CPU estimates into the right ballpark to be selected.

For the GPU side, setting app_info <flops> values might bring a similar stability or functional improvement. Again, I'd have to walk through it again, but this may override a swathe of dodgy estimate code used for the purposes of app selection (among other things). You could base that figure either on the high side of APR from the application details page, or just use 5-10% of the theoretical peak flops (it'll be non-critical), and some lower figure for multiple instances.
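As a rough sketch of that flops arithmetic: take a fraction of theoretical peak and divide by the number of instances per GPU. The shader count, clock, and helper function below are hypothetical illustrations, not measurements from any real card.

```python
# Rough arithmetic for picking an app_info.xml <flops> value along the
# lines suggested above: 5-10% of the GPU's theoretical peak, and a
# lower figure when running multiple instances per GPU.
# All device numbers here are hypothetical examples.

def flops_estimate(shaders, clock_hz, fraction=0.05, instances=1):
    """Theoretical peak = shaders * clock * 2 (one FMA counts as 2 flops)."""
    peak = shaders * clock_hz * 2
    return peak * fraction / instances

# Example: a hypothetical 1344-shader GPU at 1.0 GHz, one instance
value = flops_estimate(1344, 1.0e9, fraction=0.05)
print(f"<flops>{value:.3e}</flops>")
```

Since the figure is non-critical, erring low just makes estimates conservative rather than wrong.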

That won't really fix the way the BOINC servers do things, but it would at least be some sort of workaround for hosts that refuse to play well with BOINC's wonky estimates.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 11991
Credit: 118,541,885
RAC: 41,101
United Kingdom
Message 1695895 - Posted: 26 Jun 2015, 8:26:39 UTC

The first pre-requisite for receiving new tasks is that your computer is actively requesting tasks.

If you are actively processing work for the project, and completing tasks, then BOINC will keep requesting work. But if you're inactive for any reason, work requests will become fewer and further apart.
Lionel

Joined: 25 Mar 00
Posts: 672
Credit: 438,627,240
RAC: 113,088
Australia
Message 1695904 - Posted: 26 Jun 2015, 9:05:35 UTC - in response to Message 1695895.  

Not wishing to sound somewhat strange, but how does your comment relate to the above comments? (I just can't see the link, sorry about that) ...
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 2936
Credit: 11,067,574
RAC: 569
United States
Message 1695912 - Posted: 26 Jun 2015, 9:50:30 UTC

The first pre-requisite for receiving new tasks is that your computer is actively requesting tasks.

If you are actively processing work for the project, and completing tasks, then BOINC will keep requesting work. But if you're inactive for any reason, work requests will become fewer and further apart.


Not wishing to sound somewhat strange, but how does your comment relate to the above comments? (I just can't see the link, sorry about that) ...

I do notice, even though I'm using an older build of BOINC, that even if I am processing tasks, and requesting work, consecutive scheduler replies that do not assign any new tasks cause the requests to get further and further apart.

I thought there was supposed to be at least one scheduler contact every 24 hours, as well (I know for sure I read that here on the forums more than once over the years), but over the past few weeks of no APs, I was going 2-3 days without any scheduler contact until I manually clicked the update button.

And I know about the "project back-off" counter, in the case of failed transfers and the like, but when 6.10.58 decides to not even bother talking to the scheduler for 2-3 days, there is nothing under the 'status' column on the projects tab, nor in the message log indicating that it is going to wait 48+ hours because the last 25 contacts yielded no new work.


In that particular situation, if you are really doing set-and-forget and BOINC gets into that hidden 48-hour back-off, then I would imagine it would be difficult to get any tasks at all. And I should also note that once I hit the update button, it makes a contact, waits 303 seconds, makes another contact, the status column counts down to zero, and then there is nothing in the status column, and anywhere from 5 seconds to 5 minutes is when the next scheduler attempt is made, without indicating that there is any kind of back-off being enforced.

I'm sure that was fixed in newer builds, but that's what I think Richard meant by the first quote. (Side note: my single-core machine running 6.2.19 doesn't have the phantom back-off phenomenon. It clearly says how long it is waiting before communicating again; when it reaches zero, communication happens right away, and the increasing back-off maxes out at 3:59:59. That said, it will communicate once every ~24 hours. It isn't down to the second, usually +/- 2-3 hours, but there is at least one contact per calendar day (unless it goes from, like, 2350 Monday to 0015 Wednesday).)
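The growing gap between scheduler contacts behaves like a capped exponential back-off. A minimal sketch of that pattern, assuming a 60-second starting delay and a reset whenever work actually arrives (both assumptions for illustration; the 3:59:59 cap is the one the 6.2.19 build shows):

```python
# Sketch of a capped exponential back-off matching the pattern described
# above: each empty-handed scheduler reply roughly doubles the wait, up
# to a cap (3:59:59 on the 6.2.19 build mentioned). The starting delay
# and the reset-on-work rule are illustrative assumptions.

CAP_SECONDS = 3 * 3600 + 59 * 60 + 59  # 3:59:59

def next_backoff(current, got_work):
    if got_work:
        return 0          # reset once the server actually sends tasks
    if current == 0:
        return 60         # assumed initial retry delay
    return min(current * 2, CAP_SECONDS)

delay = 0
for reply in [False] * 10:    # ten empty-net scoops in a row
    delay = next_backoff(delay, reply)
print(delay)  # capped at 14399 seconds
```

After about eight empty replies the wait is already pinned at the cap, which is consistent with going days between contacts on builds where the cap is larger or the counter isn't displayed.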


What I mentioned in my observations is that if you have nothing in your cache at all, and you request a very large amount of work, you tend to not get anything at all, unless tasks are abundant on the server (either a pile of RTS, or creation rate is greater than 1.000/sec). Once you get some tasks on board, they start flowing in a bit smoother, but once you get a few empty-net scoops in a row, the requests start becoming less frequent.


The thing I think a small group of people are trying to figure out is: why does the server tend to not give you any work when you have an empty cache and you're requesting a large amount of work? And/Or.. why does one device seem to get all that it wants, whilst the other starves?


Anyway, that's probably enough rambling for now. I may not have solutions or answers, but maybe I can ask the right questions to dislodge a solution or answer out of someone else.
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 11991
Credit: 118,541,885
RAC: 41,101
United Kingdom
Message 1695918 - Posted: 26 Jun 2015, 9:55:26 UTC - in response to Message 1695904.  

Not wishing to sound somewhat strange, but how does your comment relate to the above comments? (I just can't see the link, sorry about that) ...

Many people over the years have made comments like "BOINC isn't sending me any [project] jobs" - it might be SETI, it might be other projects. I'm simply drawing the distinction between "server sending" - as if it was an action initiated by the server - and "client requesting" - an action initiated locally.

One reason for drawing the distinction is that the local action can be checked in the local Event Log, and I'd suggest that's the first place to look. When I see posts like Speedy's "I have not received any for a long time", my mind wonders "but did you actually ask for any during the time period when AP work was available?".

On a related note, since writing my comment, I've answered a question at the BOINC message boards, from someone who thought the SETI project was down because he hadn't received any work for over a week. From the other information he'd provided, it was clear that he had clicked the 'No new tasks' button...
Profile Brent Norman Special Project $250 donor
Volunteer tester

Joined: 1 Dec 99
Posts: 2129
Credit: 202,483,875
RAC: 509,537
Canada
Message 1695925 - Posted: 26 Jun 2015, 10:13:28 UTC

I have a feeling that the scheduler looks at it as: you are requesting 10 days, but your app history says 2 days, so you are thrown into the "I'll get to you later; I'm dealing with the people that have reasonable requests" bucket.

I seem to have better luck with a 1.5 or 2 day setting.

Which makes me think, Did I get greedy with a big cache that I missed the last AP feeding? IDK if I boosted my cache for that, I forget :(
Cavalary

Joined: 15 Jul 99
Posts: 71
Credit: 5,876,242
RAC: 2,422
Romania
Message 1696282 - Posted: 27 Jun 2015, 23:55:23 UTC

Wow, 2 AP WUs... Can't recall how long it's been since the last ones, months and months, definitely.
kittyman Special Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 49845
Credit: 914,115,880
RAC: 161,084
United States
Message 1696798 - Posted: 29 Jun 2015, 18:22:30 UTC

And OMG.....
Just before I have to run off to work, I see another '15 dataset loaded and splitting off fresh APs!!!
Dang, wish I could sit here and watch them come in.
The kitties already snagged about 22 of them.

Meowza!
What meowing lurks in the hearts of man? The kittyman knows....MEOWhahahahahahha!

Have made friends here.
Most were cats.
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 4433
Credit: 260,387,854
RAC: 12,067
United States
Message 1696822 - Posted: 29 Jun 2015, 21:24:36 UTC - in response to Message 1696798.  

Dang, you must have some rabbit's feet hanging around there somewhere, Mark.

I'm squeezing out a few. Just had 1 CPU AP finish in 4 seconds :( Stderr says 100% blanked.

Get them while you can....
Speedy
Volunteer tester
Joined: 26 Jun 04
Posts: 1008
Credit: 8,969,760
RAC: 2,222
New Zealand
Message 1696852 - Posted: 29 Jun 2015, 23:12:33 UTC - in response to Message 1696822.  
Last modified: 29 Jun 2015, 23:26:26 UTC

Get them while you can....

I'm trying but with no luck so far. Staying positive.
Positivity paid off, as I am currently downloading two tasks, and I'm sure a few more will come in.
Tutankhamon
Volunteer tester
Joined: 1 Nov 08
Posts: 7111
Credit: 44,217,383
RAC: 4,655
Sweden
Message 1696853 - Posted: 29 Jun 2015, 23:18:38 UTC

Got 74 AP's so far. Still incoming.
Too much hormone treated meat.
Too much Monsanto veggies.
Too old and outdated constitution.
A crazy problem, as you Yanks use to say......

There is no God, and God never existed.
Mark Stevenson Special Project $75 donor
Volunteer tester
Joined: 8 Sep 11
Posts: 1630
Credit: 151,850,036
RAC: 63,210
United Kingdom
Message 1696915 - Posted: 30 Jun 2015, 5:48:24 UTC - in response to Message 1696853.  

Got 74 AP's so far. Still incoming.


197 here and climbing, almost like how things were before the big crash.
kittyman Special Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 49845
Credit: 914,115,880
RAC: 161,084
United States
Message 1696946 - Posted: 30 Jun 2015, 7:52:28 UTC

Just got home from a 12 hour shift.
Whilst I was away, the kitties grabbed up 876 of them AP beasties.
On 9 rigs.

Good work, kitties.
Profile Brent Norman Special Project $250 donor
Volunteer tester

Joined: 1 Dec 99
Posts: 2129
Credit: 202,483,875
RAC: 509,537
Canada
Message 1697280 - Posted: 1 Jul 2015, 9:06:21 UTC

I'm curious if anyone knows this answer ...
Does AP processing of 50 GB of data take less time than MB?

It's just that AP and MB tasks never seem to equal out. Or is it just too many people refusing to crunch MB and doing AP only?
rob smith Special Project $250 donor
Volunteer tester

Joined: 7 Mar 03
Posts: 16144
Credit: 311,785,509
RAC: 256,389
United Kingdom
Message 1697305 - Posted: 1 Jul 2015, 12:23:00 UTC

If you mean the splitting into work units, then the answer is very firmly "YES". The AP splitters only have to chug through the data and produce a new work unit every x MB. The MB splitters have to chug through the data, produce a work unit, compress it, then scroll back a bit in the source file and do it all over again.

If you are talking about the processing we do, then the answer is, well, not so easy. Are we talking about sending the tasks generated only to GPU hosts? What is the angle range of the tasks (are they "shorties", normals, "longies", or VLARs, ..., ..., ...)? Do the tasks contain a significant amount of dross? And no doubt a few more considerations (such as optimised vs. stock applications). The answer is... I think the general consensus is that the APs generated from a given tape will take slightly less time than the MBs, but not by a lot... (I stand to be proven wrong)
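The two splitting styles described above can be sketched as a pair of chunking loops. The chunk and overlap sizes below are made-up illustrative numbers, not the real AP/MB workunit figures:

```python
# Sketch of the two splitting styles: AP-style sequential chunks versus
# MB-style chunks that scroll back to overlap. Sizes are arbitrary
# illustrative numbers, not real AP/MB workunit sizes.

def split_sequential(n, chunk):
    """AP-style: back-to-back chunks, no overlap."""
    return [(i, min(i + chunk, n)) for i in range(0, n, chunk)]

def split_overlapping(n, chunk, overlap):
    """MB-style: after each chunk, scroll back by `overlap`."""
    out, start = [], 0
    while start < n:
        out.append((start, min(start + chunk, n)))
        start += chunk - overlap
    return out

print(len(split_sequential(1000, 100)))       # 10 chunks
print(len(split_overlapping(1000, 100, 10)))  # 12 chunks: overlap costs extra
```

The overlap is what makes MB splitting (and MB processing) cover somewhat more total data than the source file contains.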
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6530
Credit: 185,506,650
RAC: 44,899
United States
Message 1697309 - Posted: 1 Jul 2015, 12:36:22 UTC - in response to Message 1697280.  
Last modified: 1 Jul 2015, 12:36:31 UTC

I'm curious if anyone knows this answer ...
Does AP processing of 50 GB of data take less time than MB?

It's just that AP and MB tasks never seem to equal out. Or is it just too many people refusing to crunch MB and doing AP only?

The MB tasks created are around 377 KB where an AP is around 8 MB, so AP tasks are roughly 21 times larger. As for processing times, my i5-4670Ks do MB tasks in ~1 hr and AP tasks in ~3.5 hours. So per unit of data, AP processing happens much quicker on the same hardware.
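A quick sanity check of that arithmetic, using the task sizes and runtimes quoted above:

```python
# Data throughput from the figures quoted above: MB tasks ~377 KB in
# ~1 hour, AP tasks ~8 MB in ~3.5 hours, on the same i5-4670K.

mb_rate = 377 / 1024 / 1.0   # MB of data per hour, MultiBeam
ap_rate = 8.0 / 3.5          # MB of data per hour, AstroPulse

print(f"MB: {mb_rate:.2f} MB/h, AP: {ap_rate:.2f} MB/h")
print(f"AP chews through data ~{ap_rate / mb_rate:.1f}x faster")
```

So even though an individual AP task takes ~3.5x longer, it covers ~21x the data, which is why AP clears a tape faster per byte.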
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the BP6/VP6 User Group today!
Profile Brent Norman Special Project $250 donor
Volunteer tester

Send message
Joined: 1 Dec 99
Posts: 2129
Credit: 202,483,875
RAC: 509,537
Canada
Message 1697360 - Posted: 1 Jul 2015, 15:55:53 UTC

Yeah, I guess you're right; there is nowhere near 20 times the processing time for AP files.

And for MB overlaps (I forget the overlap time), that could account for 5-10% more processing required.
Tutankhamon
Volunteer tester
Joined: 1 Nov 08
Posts: 7111
Credit: 44,217,383
RAC: 4,655
Sweden
Message 1697447 - Posted: 1 Jul 2015, 20:46:31 UTC

Geeze, they're adding AP files to the splitters as if there were no tomorrow :-)


©2018 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.