BOINC@AUSTRALIA FORUM
Recent Posts (March 27, 2017)

 1 
 on: Today at 03:36:59 AM 
Started by JugNut - Last post by JugNut
Thanks kashi  Thumbs Up

Well I have discovered something, and that is that researching this stuff is an excellent way to get a bloody good headache.  In the end I went the low-tech route and used PCPartPicker to check compatibilities.  When I checked some time last year they didn't seem to support server gear or dual-socket motherboard setups, but this time they did.
In the end, after putting in 2 x E5-2670s, there were only a handful of compatible motherboards.
This was the motherboard selection given...

Code:
Motherboard               Socket/CPU    Form factor   RAM slots   Max RAM   Price
__________________________________________________________________________________
ASRock EP2C602-4L/D16     LGA2011 x 2   SSI EEB       16          512GB     $305.99
Asus Z9PE-D8 WS           LGA2011 x 2   SSI EEB       8           64GB      $599.99
Asus Z9PA-D8              LGA2011 x 2   ATX           8           64GB      $338.93
Supermicro X9DRL-3F       LGA2011 x 2   ATX           8           64GB      $319.99
EVGA Classified SR-X      LGA2011 x 2   HPTX          12          96GB      ------.--

So there are only 2 ATX mobos; the rest are odd sizes, meant for rack systems, or don't accept a standard ATX power supply.  That makes it much easier.  So I'm down to the two ATX boards: the ASUS ATX mobo you have and the ATX Supermicro.  I was hoping for an EATX board so as to have more space between the slots and maybe some more PCI-E x16 slots, also because I have a spare case that would take an EATX mobo.  But oh well, I wasn't planning on making it a full-blown GPU rig anyway; I'm mainly just after CPU cores, though a bit of future proofing would have been a nice touch.

Yea, as you mentioned, the Noctua NH-U12DXi4 was on the compatible list for both those boards as well. Also, all the Supermicro X10*** mobos ended up being socket 2011-3. They're definitely worth remembering though, not a bad price for a socket 2011-3 board.  Especially if we ever come across some cheap Xeon E5-**** v3 or v4 chips. Although, as you say, the DDR4 prices might make it unworkable.

Mmm, so how to pick between the two mobos? I'll just pick one the most technical way I know of.  Mmm, now how's it go?  Oh yea!  Eeny, meeny, miny, moe, catch a.....  LOL

I'm in no hurry so I'll mull over your wisdom and let you know what I come up with. Although it might be best to get the same mobo as you have; that way we could help each other if we ever have any problems.

Anyway thanks again for all your help kashi.  It is very much appreciated  Thumbs Up + karma



PS: Actually, after a closer look at the Supermicro board, it has terrible PCI-E placement making even one full-sized GPU a problem, so it looks like there is only one choice?
PPS: Err, I may crunch Universe with the new rig???  Yea, we are way off topic.

@ Dingo: It might be best to move these posts to a better-suited thread when you have some spare time. Thanks..

 2 
 on: Yesterday at 10:22:12 PM 
Started by JugNut - Last post by kashi
Yes chooka that was also often my song when I only had one computer, the familiar BOINC refrain: "too many projects, not enough cores". Better now with many more cores but still sometimes like to do more projects at the same time. Just like Johnny Rocco in Key Largo when asked what he wanted: "Yeah. That's it. More. That's right! I want more!". Yep my first WCG Sapphire badge is only about a day away now. Sapphire should match my eyes beautifully making me look quite fetching, haha.

Know very little about SuperMicro, JugNut. Their LGA2011 boards were sometimes a proprietary size which meant they only fitted perfectly in their own brand server cases. Other brand server sized cases needed modification, ie holes drilled, extra standoffs and/or spacers fitted. Other than that I suppose they should be reliable as they're a well known server brand. Although in the OCAU E5-2670 thread, the Intel brand server boards are recommended by some as the most reliable with the best BIOS and driver support. Possibly one of the reasons why quite a few purchased those recycled dual socket Intel boards from Natex even though they were second hand. Which is probably little help, as haven't looked, but new Intel LGA2011-3 dual socket boards are probably very expensive.

Good availability and prices about for the new X10DRL-i you linked, compared to most of the previous generation dual socket boards. Being ATX, the X10DRL-i has very tight CPU spacing which means only a few HSFs will fit. The two Noctua NH-U12DXi4s I use with my Z9PA-D8 were relatively expensive even at the Amazon price.

What about registered DDR4, isn't it still super expensive? You possibly need to buy sticks on the board's Tested Memory List to ensure compatibility. A build of 20 or 24 total cores with a pair of 10 or 12 core E5-2xxx V3 Extra Spicy CPUs, that's 40 or 48 threads, so perhaps shouldn't really skimp much on capacity. My 64GB (8 x 8GB) with 32 threads is nice as I've not had to worry about running short so far, but registered DDR3 is much cheaper than registered DDR4. 32GB with 8 x 4GB DDR4 sticks should be cheaper than 64GB of DDR4, but I don't know if 32GB with 40 or more threads may be cutting it a bit fine nowadays. Xeon E5-2xxx V3 chips, like E5-2xxx and E5-2xxx V2, have 4-channel RAM, so with two CPUs you need to populate all 8 slots (one DIMM per channel per CPU) for full system performance.
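As a rough back-of-the-envelope comparison of memory per thread (just dividing total RAM by thread count, nothing more scientific than that):

Code:
64GB / 32 threads = 2.0 GB per thread    (my current duallie)
64GB / 48 threads = ~1.33 GB per thread
32GB / 40 threads = 0.8 GB per thread
32GB / 48 threads = ~0.67 GB per thread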

Hope you get a "duallie" going one day. I really like mine, even though the total build cost ended up much more expensive than the cheap CPUs and memory at first implied. Because after the purchase price is forgotten the crunching powah remains. It went great on the last AA and also 28 cores a crunchin' is excellent to get those WCG time based badges much more quickly, yeeha.

You can even use them on Universe@Home (feeble attempt to have a tiny bit on topic). Big Grin Jester

 3 
 on: Yesterday at 09:13:45 PM 
Started by WikiWill - Last post by jave200372
~ stream

 4 
 on: Yesterday at 02:28:00 PM 
Started by Dingo - Last post by kashi
Yay, my pending BOINC credit for the Challenge has appeared. Victory

It has been showing only about 7,000 pending since the end of the Challenge when it should have been over 300,000 and I was getting converted credit of only 100 per day. Thought I had misconfigured something or it had been missed when manually being assigned to BOINC. But it popped up today "Pending credit: 327,616".

The pending PSA credit is shown in the "Credit Needed for Next Badge" column on the Badges web page. Click any of your badge icons on the Your Account page to get to the Badges page. That column also shows the number of tasks in progress for BOINC subprojects.

I did 5 AP27 tasks and got my bronze badge so as to boost my RAC to get more PRPNet credit converted to BOINC credit per day. Took over 27 hours each; if my other computer was working they would have taken about 30 minutes each or less on GPU, haha. Might do some more PRPNet tasks and get my gold PSA badge as I'll be close to it now, mmm, badge tragic I've become.

 5 
 on: Yesterday at 09:39:23 AM 
Started by Dingo - Last post by Dingo
I started my miners back up today after a lot of reboots and reloading drivers etc.  I think they are all working now so credit should start going up tomorrow.   Fingers Crossed

@chooka   Holy Moly!  Yes, there are only 127 today with credit, so it is dropping all the time.  I am not going to invest any more money in miners as I cannot use them other than here.

 6 
 on: Yesterday at 08:03:17 AM 
Started by Dingo - Last post by chooka03
Dare I say.....but is Bitcoin Utopia stuffed these days since they stopped GPU work?
When looking at monthly credits, there are only 155 people contributing credits. 155 people throughout the WORLD.
It's no wonder projects are now crawling along towards completion. And imagine what percentage comes mostly from those in the top 10 positions.

Bit of a shame for the scientific community really, as it was a way of getting some funding.

 7 
 on: Yesterday at 07:48:53 AM 
Started by JugNut - Last post by chooka03
Seems like a few members are having a nice run on Universe. Just checked and I haven't crunched any for almost a year. Will give it another go after finishing some WCG badge hunting in a few days.

Think Linux may have been a bit faster than Windows but Universe applications have been updated since then. So will try some on both OSs and see for myself.

I'm like you Kashi. Only new to WCG so chasing my first Childhood Cancer badge :) Also new to LHC so I'm trying to boost my rank there also. So slooooow though.
I've got too many CPU projects these days and not enough cores to share around lol. I can see the interest in server duallies. Won't be for me though.

 8 
 on: Yesterday at 03:51:55 AM 
Started by JugNut - Last post by Dataman


Quote from: JugNut
@ Dataman: Yea, at the moment the Universe work units are a tad on the large side. They go up & down in size. Some weeks the tasks can take as little as 10-12 hrs, which is good for a Pi, but yea, right now they are taking 18-21+ hrs.

Mmm, for some reason I never thought of doing Collatz on the Pis, but that doesn't sound too bad.
On the RPi, Collatz is the best @ ~25 credits/hr, then Universe @ ~22 credits/hr, then a poor ~10 credits/hr for Asteroids and a ridiculous 2.6 credits/hr for SETI.  These are on Version 3 RPis, except SETI which is on a Version 2, which is about 3 times slower than the V3s.
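Roughly, per day, assuming they crunch around the clock, that works out to:

Code:
Collatz    ~25 credits/hr x 24 = ~600 credits/day
Universe   ~22 credits/hr x 24 = ~528 credits/day
Asteroids  ~10 credits/hr x 24 = ~240 credits/day
SETI       ~2.6 credits/hr x 24 = ~62 credits/day  (on the slower V2 Pi)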

 9 
 on: Yesterday at 03:28:17 AM 
Started by JugNut - Last post by JugNut
Thanks for the info kashi, it looks like some more reading is in store.  Buying server gear is not for the faint-hearted, but I'm in no hurry as the rigs I have now do OK. I was hoping to get enough grunt in one box that I could turn the others off for a week or so each month.
Even with a 10kW solar system my power bills are still ridiculous.

If I ever get this new box up & running I will have to resist the urge to just go flat out.  That would kinda defeat the purpose LOL

I've been looking at a few Supermicro boards like this: https://www.supermicro.com/products/motherboard/xeon/c600/x10drl-i.cfm
I've always wanted a Supermicro board. This one seems OK but only has one proper x16 slot; that's fine though, just one good GPU and a decent amount of cores would fit the bill nicely.  What do you think of Supermicro stuff?

@ Dataman: Yea, at the moment the Universe work units are a tad on the large side. They go up & down in size. Some weeks the tasks can take as little as 10-12 hrs, which is good for a Pi, but yea, right now they are taking 18-21+ hrs.

Mmm, for some reason I never thought of doing Collatz on the Pis, but that doesn't sound too bad.

 10 
 on: March 25, 2017, 11:25:34 PM 
Started by JugNut - Last post by kashi
Beauty Dataman, then I'll give Linux a go first. Thumbs Up

JugNut, I've been running BOINC in a Linux VM and my miners on Windows BOINC have still been working, although recently they have been more erratic in runtime than before. Some would run 15-20 minutes under the previously consistent 2-hour runtime and some would run 15-20 minutes longer.  I think tasks were swapping a bit from stick to stick, like happens when one has lost contact. Thought it was due to wonky memory, but maybe having a VM running was doing it and I didn't realise. That's with Gekkos though; you have some different miners as well, I remember.

They did muck up completely when I accidentally ran all 8 cores in Windows though. I forget the exact circumstances; it was something to do with forgetting I had an app_config.xml on a certain CPU project, then swapping to another CPU project and not putting in an app_config.xml to limit the number of tasks running. Had 8 BOINC cores active for some reason, which is rare as I usually never run more than 7 BOINC cores.
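For anyone who hasn't used one, an app_config.xml goes in the project's folder under the BOINC data directory, and something as small as this will cap the concurrent tasks. The app name below is just a placeholder; the real name for a project's application can be found in the task names or in client_state.xml:

Code:
<app_config>
   <app>
      <name>example_app_name</name>
      <max_concurrent>7</max_concurrent>
   </app>
</app_config>

Then get the client to re-read config files (or restart BOINC) for it to take effect.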

Re a dual E5-2670 build, I haven't checked but it may be worth looking for some affordable registered 1866 MHz RAM, if there is such a thing. You could downclock it to run at 1600 MHz with your E5-2670s. However, if you ever upgrade to a pair of Xeon E5-2XXX V2, they support 1866 RAM, so 1866 would be good to have then. Trouble is you would need to find someone who has run the same RAM and motherboard combination successfully if it is not on your motherboard's RAM QVL. Some of the posters in the OCAU E5-2670 thread had incompatibility problems with RAM not on the QVL of their motherboards.

Could all be academic if registered 1866 MHz RAM is still as expensive as it was last year. I know the 64GB of RAM I bought for AU$140.06 including shipping increased a fair whack in price after I bought it.

Many (most) E5-2670 rig builders in the OCAU thread chose 1600 MHz registered RAM (usually Samsung). My Hynix 64GB (8 x 8GB) DDR3 PC3L-10600R 1333 MHz has performed flawlessly so far, and the difference in performance between 1333 CAS 9 and 1600 CAS 11 would not be much. I chose the lower 1333 speed because the various listings of the higher 1600 speed that I had researched as Asus Z9PA-D8 compatible were not "L" low-voltage versions. With 8 sticks running BOINC constantly, I preferred the lower voltage RAM. Got sticks without heatspreaders as I thought there may be a lower chance of getting counterfeits. I know it's probably overcautious, but buying electronic items from Hong Kong/China does not inspire me with confidence in the quality, even when the eBay rating is 99%. Purchased RAM from eBay seller wwon_one.

As for motherboards, my Asus Z9PA-D8 has given no trouble, although some in the OCAU thread found it fussy with RAM and PSUs. I'm using an EVGA SuperNOVA 750 T2 from Amazon; the Titanium-rated model cost heaps but was the most efficient I could find.

One of the reasons I bought the Z9PA-D8 is that it is one of the few dual LGA2011 socket motherboards with a standard ATX form factor, which saved me having to buy a new server-size case. The other reason is I got a good price from Computeruniverse.de, although they mucked me around horridly with many months' delay in actually sourcing one. It also has proper PCI-E 3 x16 slots if I ever decide to fit a GPU, which those 2nd hand Intel server boards do not. Plus I preferred new rather than the recycled Intel server ones some purchased from the USA.

Some in the OCAU thread prefer dual socket motherboards with 16 rather than 8 RAM slots, in case a RAM slot ever stops working or they want to add more RAM later. However most of them are using multiple VMs in database style server applications, so their usage patterns are different to BOINC users.

The size and number of GPUs and/or other accessories you're going to fit mean the PCI-E slot layout needs to be considered carefully when choosing a dual socket motherboard. For example, if you wanted to use 2 GPUs, some of the Asus Z9PE models may be preferable to the Z9PA-D8. Although some motherboards have enough PCI-E 3 x16 or x8 (when shared) slots, the position of some of those slots means a full-size GPU cannot fit because it overlaps where the RAM slots are.

This is all based on what I discovered last year; availability and prices of certain motherboards/RAM have likely changed since then, so best to have a gander at forums where these dual motherboard build configurations are discussed (e.g. OCAU and https://forums.servethehome.com). For example, I don't know which, if any, of the dual LGA2011 socket boards support booting from an NVMe SSD, if that was important to you. We can't send hardware back for a refund easily if it is incompatible, so better to take a bit more care with our choices and hopefully minimise the chances of build disappointment, delay and extra expense.

Back on topic, just had a look at top computers on Universe@Home, yes not as high as some but blows away Crush Childhood Cancer credit for sure.

