
Re: Project Update

Started by Tixx, May 07, 2009, 09:30:47 AM


JugNut

#60
@tazzduke:  Eeek... that's terrible, but at least they seem to have cancelled all the rest of them for now. So far I haven't received any of those, so they must be a work in progress.


@kashi:  You know, I must be getting slow in my old age. I'm not sure why it didn't dawn on me before, but the reason these tasks run much the same on my GTX 970s as they do on the pair of 1080s is that they're totally CPU bound. We had already come to this conclusion yesterday, but what I hadn't taken into account was the effect AVX plays in all this. Since the tasks use AVX, and by nature AVX tries to use a full core and gets no advantage from hyper-threading, the box with the pair of 1080s should have 8 full cores to feed the 8 GPU tasks, which it does not.
I should only be running at most 3 tasks concurrently on each GPU. Why? Because that box only has a 6-core/12-thread CPU in it, and not only have I been running 4 GPU tasks concurrently on each GPU, I've also been running the extra WCG tasks to boot. That can only mean lots of contention. In other words, there are 8 GPU AVX tasks fighting for 6 cores, and those cores already have WCG tasks on them, which only makes matters worse. I presume that's why the CPU is working so hard and getting so hot too.
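The core arithmetic above is easy to sketch. A rough Python illustration using the counts from this post (the function name and the numbers are just for illustration, not from any BOINC tool):

```python
# Rough core-budget arithmetic for AVX-bound GPU feeder tasks.
# AVX loads a full core and gains little from hyper-threading,
# so budget against physical cores, not hardware threads.

def tasks_per_core(physical_cores, gpus, tasks_per_gpu, cpu_tasks=0):
    """How many AVX-bound tasks compete for each physical core."""
    total_tasks = gpus * tasks_per_gpu + cpu_tasks
    return total_tasks / physical_cores

# 1080 box: 6-core/12-thread CPU feeding 2 GPUs x 4 tasks -> oversubscribed.
print(tasks_per_core(6, 2, 4))   # 8 tasks on 6 cores, ~1.33 per core
# 970 box: 8-core/16-thread CPU feeding the same load -> 1 task per core.
print(tasks_per_core(8, 2, 4))   # 8 tasks on 8 cores, 1.0 per core
```

Dropping to 3 tasks per GPU on the 6-core box brings it back to exactly one AVX task per core, which matches the "at most 3" conclusion above.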

As we've talked about before, technically the best way to run these GPU AVX apps would be to disable any other CPU-based WUs, reboot the PC, disable hyper-threading in the BIOS, and then run only as many tasks as you have actual cores, just like you would for any other AVX CPU app, such as LLR tasks from PrimeGrid.
Of course that's a pain, and the few times I have tried it there wasn't a huge advantage anyway, but theoretically it should work.

On the other side, my GTX 970s have an 8-core/16-thread CPU at their disposal, which means that even though the GPU is slower, the app can still run close to its full speed, whereas the 1080s can't (even after disabling WCG work).

I can't say for sure, but I doubt CreditNew is being used with these, as CreditNew seems to be a means to stabilise credit and even reduce it year on year as hardware power increases, so I find it hard to imagine it ever granting such huge credits. I'm sure there'd be a way of finding out, but I have no idea how.

Also, I think the "nt" switch is not as good a solution as it first sounds, because most people still only have a limited number of "actual" cores. If you run 4 GPU work units you'll need four real cores, so in that scenario setting anything other than nt 1 should not work well at all, especially on the most common CPUs of all, the 4-core Intel i5 or the 4-core/8-thread i7, as both only have 4 real cores. I suppose if you set nt 2 and only ran 2 GPU tasks concurrently, that might work out on a 4-core chip?
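For anyone wanting to try the "2 tasks at nt 2" combination, BOINC's standard app_config.xml mechanism can cap concurrent GPU tasks and reserve whole cores per task. A sketch only: the app name below is a placeholder (take the real one from client_state.xml), and passing nt via the cmdline element is an assumption about how this project's app reads it:

```xml
<!-- Sketch only: "drugdiscovery" is a placeholder app name; check
     client_state.xml for the real one before using this. -->
<app_config>
  <app>
    <name>drugdiscovery</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>   <!-- 1/0.5 = 2 concurrent tasks per GPU -->
      <cpu_usage>2.0</cpu_usage>   <!-- reserve 2 real cores per task -->
    </gpu_versions>
  </app>
  <app_version>
    <app_name>drugdiscovery</app_name>
    <cmdline>nt 2</cmdline>        <!-- assumed syntax; match the project's docs -->
  </app_version>
</app_config>
```

With 2 tasks per GPU at 2 reserved cores each, a 4-core chip is exactly fully booked, which is the scenario described above.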

Of course this is just theoretical and only actual testing with this in mind will find the best sweet spot.

Anyhoo, I thought I'd share my epiphany, even though now that I think of it, it's quite obvious.

Mmm, I wonder how your 970 would go in the Asus server box? For all intents and purposes you'd have plenty of cores to feed the GPU whatever it needed. Might be a hassle to implement, but I wonder just how far you could load it up? Just a thought...

Crunch-on.... 


PS: 28 mil in credit yesterday, which rocketed me into first place!! I've never been this lucky before; I'm not sure if I should be elated or embarrassed. I wonder how long it will be before they reverse the over-crediting? Oh well, easy come easy go...


kashi

Congratulations on first place and achieving the 50 million DNA helix badge.  :congrats  :congrats

Ah yes, it had occurred to me to run a single task using the nt 32 parameter for fun in the dual box, if it had a GPU installed. I think it may not work too well though, because of how the 2 separate CPUs interact with the memory and other resources: inter-socket contention issues, essentially. Sometimes you can partly get around the slowdown these cause in multithreaded CPU programs by running 2 program instances and allocating each instance's threads/cores to a particular CPU using Task Scheduler or Process Lasso. However, for GPU applications I think the lanes on one PCIe slot map to one CPU and another slot maps to the other CPU, so you'd perhaps need 2 GPUs to utilise both CPUs.

Didn't think of turning off hyperthreading though. I haven't wanted to use all 8 CPU "cores" on the Skylake box because it's running warm enough just using 6 (3 concurrent tasks with nt 2). But it may run a fair bit cooler running all 4 cores with hyperthreading off; might try it tomorrow.

However, as you said, when it comes to GPU applications rather than solely CPU applications, turning hyperthreading off sometimes gives little or no advantage. Plus, without the elapsed-time multiplier effect of running 4 concurrent tasks with nt 2 like you can with hyperthreading on, the box may end up being more efficient by completing more tasks per day but actually earn less daily credit. It's always irritating when a faulty credit scheme promotes inefficiency.

Gromacs is complaining about the inefficiency of not using -pin on (and -pinoffset for multiple jobs). I could try fixing threads to cores I suppose; haven't used Process Lasso for a while. Just tried 1 task with nt 7 and it completed in 71 minutes. I've now cleaned the CPU heatsink a little, plus the filters on the case, and am running 1 task at nt 8. GPU and CPU temps are warmish but acceptable, slightly less than when running 4 concurrent at nt 1. The Afterburner GPU usage % graph line is very smooth and steady at 64% to 66%. The Task Manager Details tab shows gmx.exe using 92% to 94% of CPU.
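Those -pin flags are standard GROMACS mdrun options. A hedged sketch of what pinning two concurrent standalone jobs to separate cores might look like (the filenames and core counts are made up for an 8-core/16-thread box; under BOINC the wrapper launches gmx for you, so this only applies to manual runs):

```shell
# Standalone GROMACS sketch (not via BOINC): pin each job's threads so two
# concurrent runs don't share cores. -pinoffset counts hardware threads, and
# with hyper-threading on, -pinstride 2 steps over the second thread of each
# core, so each job gets 4 whole physical cores.
gmx mdrun -deffnm job1 -nt 4 -pin on -pinoffset 0 -pinstride 2 &
gmx mdrun -deffnm job2 -nt 4 -pin on -pinoffset 8 -pinstride 2 &
wait
```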

It will probably only be a tiny bit faster than nt 7, if at all, but runtime will be multiplied by 8 instead of by 7. Think I'll leave it on that overnight as it seems stable. The dual box is crunching away on WCG without complaint.

Actually CreditNew has a history of wild and erratic spiking and dipping when being used with GPU applications. It has no effective mechanism to consistently compensate for the large differences in Runtime when used with inefficient GPU applications and the resulting concurrent type processing. Can't remember which projects, but quite a few had big trouble with CreditNew and had to hurriedly introduce fixed credit. Think that included either DNETC@HOME or Moo! Wrapper (or both!).

Even with CPU applications it can muck up. Aqua@Home had so much recurring strife with the incompatibility of CreditNew with their multithreaded CPU application causing massive credit spikes that they gave up and left BOINC completely. Plus CreditNew has a comparison feature that is supposed to compare with other projects and adjust the credit rate accordingly. When CreditNew was first used on WCG they were alarmed and dismayed that the credit rate was automatically increased to a reasonable rate and hurriedly disabled that feature as they are totally wedded to the unwholesome, inconsistent, illogical, unfair "reduce towards zero" credit philosophy.

So much so, that on WCG a recent-architecture CPU will often have a credit rate that's similar to or even less than a CPU architecture from years ago, even though the efficiency of more recent CPU architectures has increased greatly. They hate the whole idea of credit at WCG, which is why they invented their colourful Runtime badges, to ensure people with old, inefficient "boat anchor" CPUs could pretend they were doing a useful amount of work instead of basically just wasting power.

Gee I'm twitter and bisted sometimes. Never miss an opportunity to bag WCG's "war on credit" stinginess, even though I often turn my bedroom into a bath of fire running 3 CPUs/35 cores on their projects to help cure diseases.  Anyway, I have a few Emeralds pending and will probably go for Zika Sapphire next, seeing as I can't get many HS TB tasks at all. Maybe I too need to employ "old batchie" at the 3rd and 33rd minute every hour, haha.




JugNut

#62
Hey thanks kashi, 
I suppose it had to happen eventually, but my credit output is now down by well over half of what it was the day before. It hasn't happened across the board though, as I've noticed there are now others getting the same credit that I was receiving, and they are racing up the ranks behind me. The funny thing is that the one box that received the lion's share of the credits performed quite poorly in the first place.
I wish I knew which setting controlled the credit boost though. It would be awesome if everyone in the team could receive the same huge credits for a time; that way the team could get a massive head start on this project. But sadly I have no idea what caused the spike, as it was probably just a bit of luck. But you can bet your boots I looked into it anyway. LOL

Oh! And I'm sure you're right about CreditNew, kashi, but with my inbuilt bias against it I just couldn't imagine it ever giving something good to anyone at any time. Plus I had never done either of those projects you mentioned, so happily or sadly, depending on your point of view, I missed out on those particular rounds of credit craziness. Thanks for the info.

Anyway, time to prepare for another fun day at the doctor's. Oh well, c'est la vie.

Crunch em if you got em..  :crazy

kashi

#63
Yes, you're quite right. CreditNew as used in CPU projects almost always causes very low credit, because it applies the horrid, illogical "reduce towards zero" concept whereby newer, faster computers are automatically crippled credit-wise and awarded credits within the usual despised stingy range.

However admins who are uninformed or misguided enough to ignore repeated history and try and use CreditNew for GPU applications may often be the same type who panic when credit rate spikes ridiculously high. Then they may repeatedly manually adjust the task parameters relating to credit calculation to try and restore stability. The CreditNew "smarts" then "fights back" and the yoyo continues.

Wouldn't feel guilty over winning the occasional credit lottery. The oodles of years of processing time we've donated to projects where credit is unjustifiably low and a poor recompense for the many thousands of bucks we've spent on crunching computers, expensive GPUs, power and/or solar installations more than balances it out. You wouldn't have any colourful WCG badges at all if you lived by credit alone. But despite the holier than thou lamentations of the anti-credit whingers, avid Cruncherman does not live by badges alone either, so just enjoy it as a random bonus.

Don't think I got much advantage from those 3 projects I mentioned. Aqua@Home was a lucky dip as only a small number of contributors quickly gobbled up all the task batches where the credit was huge. Remember being a bit disappointed that I missed out. Can't remember for sure but I think they removed/reduced some credit also when it became too excessive. Think one or both of the GPU projects quickly and wisely moved to fixed credits, so again don't remember getting any bonanza there either. Also I was possibly focussing on MilkyWay@Home on GPU back then, so just observing for interest.

Back to DrugDiscovery, yes the team is building up a handy total. Not going to do any more testing of different task number and nt combinations. Getting very warm in this room today and GPU runs cooler running only a single task. Possibly could get more doing multiple concurrent but although rate continues to gradually drop, single task daily yield is currently still "quite generous", mwuhahaha.  
 

Dataman

Appears the battle over the rights to this "project" continues. There is now a new DrugDiscovery@HOME2 project (not to be confused with DrugDiscovery@Home). It has no work and the credits from the old project did not transfer so it is completely new. A bunch of stuff is broken on the webpage (e.g. you cannot join a team because it cannot find the team database). Does not give me much hope there is actual science being done there.
:banghead


chooka03

I see they FINALLY retired this project. About time really.

:oz:

Dingo

Quote from: chooka03 on April 13, 2019, 05:50:57 PM
I see they FINALLY retired this project. About time really.

:oz:

Put in the Retired BOINC board.  :bloodshot

