BOINC@AUSTRALIA FORUM

Active BOINC projects => GPUGRID => Topic started by: yoda on April 01, 2009, 08:16:08 PM

Title: Windows or Linux
Post by: yoda on April 01, 2009, 08:16:08 PM
Quote from: Furlozza on March 31, 2009, 07:49:07 AM
As for the cards, my 9600GT is the same as yours, but the reason I dumped the 9500GT out of iGnatious, even though he did do the work, was that he drew so much CPU cycle time.

When Furlozza mentioned the above in another topic, it got me thinking.

I have a 9800GT, overclocked.  It does most GPUGrid WUs in under 12 hours each.  I've noticed, however, that it also uses a lot of CPU time under 64-bit Ubuntu (around 4-5 hours per WU).  In comparison, I've seen lower-rated cards (a 9600GT, for example) at lower clocks (per the stderr shown in the tasks) that used far less CPU time (under an hour), even though they took a lot longer for the whole work unit.

Seems to me that the Windows app uses very little CPU time when compared to the Linux app.  Anyone have any experience with running GPUGrid on identical (or similar) hardware under both Windows and Linux?  Does it make a big difference?

More specifically, if any of you have a 9800GT on a box running Windows, could you provide a link to some results?   And how about some GTX260s running Linux and Windows?

Here's mine: http://www.gpugrid.net/results.php?hostid=27648

One of my reasons for asking is that I'm looking at spending some money to improve my farm.  Depending on efficiency, I may get one GTX260 or a couple of 9800GT's (about the same amount of money). 
Title: Re: Windows or Linux
Post by: Wang Solutions on April 01, 2009, 08:52:44 PM
I have three 9800GT cards, two in Windows and one in Ubuntu. The closest comparison is between two otherwise identical machines, one with 64-bit Windows XP and one with 64-bit Ubuntu 8.10. The Windows machine runs at 3.7GHz and the Linux machine at 3.4GHz. Both computers process GPU Grid WUs in about the same time (no significant difference - if anything the Linux machine is slightly faster). The CPU time displayed on the Ubuntu machine is usually a lot higher than on the Windows machine, but I am not sure that is an accurate reflection of the actual CPU time used. I say this for several reasons:

1) The completion times for the WUs are almost identical
2) I have noticed NO difference in completion times for WUs running through the CPU at the same time on the Linux machine, suggesting there is little drain on the CPU
3) I have a cc_config.xml file on the Windows machine to add the extra "CPU" so that I can still maintain one WU per CPU core on that machine as well as the one on the GPU (a sketch of that file is below). On the Linux machine I have not added any such file, as it automatically did the right thing anyway. I have not tested whether adding such a cc_config file would alter the apparent CPU usage.

I think the answer lies more in how the Linux app reports CPU time than in it actually using the CPU more. Just an opinion based on my observations, FWIW.
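
For reference, the Windows cc_config.xml is only a few lines. This is just a sketch from memory, so check it against the BOINC docs; the ncpus value should be your real core count plus one (5 here assumes a quad core), the file lives in the BOINC data directory, and you need to restart the client (or use the manager's read-config option if your version has one) for it to take effect:

<cc_config>
  <options>
    <!-- tell BOINC there is one more "CPU" than the real core count -->
    <!-- 5 assumes a quad-core box; adjust to cores + 1 -->
    <ncpus>5</ncpus>
  </options>
</cc_config>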
Title: Re: Windows or Linux
Post by: Furlozza on April 01, 2009, 09:03:55 PM
Process Explorer in Windows XP 32-bit shows that the 9600GT on TheGnat uses anything from zero up to 6.00% of a core at any one time, but it does use it more often than is recorded within BOINC. Tasks take about 18-21 hrs to complete and usually come in with anything between 10 and 20 mins of "CPU" time.

This makes me think that it isn't recording the correct time in XP, but may be in Ubuntu.
Title: Re: Windows or Linux
Post by: dyeman on April 01, 2009, 10:51:30 PM
Hi Webmaster Yoda. I have a 9800 GT too, under Win XP.  Like Furlozza's, it seems to use 10-20 mins of CPU per WU.  Here is the host (http://www.gpugrid.net/show_host_detail.php?hostid=29936).  Times will be decreasing as I've overclocked more.  The host is a Pentium Dual Core E5200 gently OC'd to 3.33GHz.  For the last few days the 9800GT has been OC'd to the "Optimal" setting determined by the Nvidia Control Panel (Shader 1637, GPU 683, Mem 1145).  WUs seem to take 12-14 hours.

What is your card OC'd to?


Title: Re: Windows or Linux
Post by: yoda on April 01, 2009, 11:36:46 PM
Hmmm, all food for thought.  Hard to tell whether it really is taking time away from the CPU or whether it's just the way it is reported / recorded.
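
One thing I might try on the Ubuntu box is watching the science app directly instead of relying on BOINC's figure. A rough sketch (assuming the GPUGRID process shows up under a name like acemd - substitute whatever name top actually lists on your machine):

# elapsed wall time, accumulated CPU time, current %CPU and command name
# 'acemd' is a guess at the process name - check top or the slot directory first
ps -C acemd -o etime,time,pcpu,comm

If the accumulated CPU time there climbs at anything like an hour of CPU per few hours of wall time, the app really is using the core; if it stays low while BOINC reports hours, it's just a reporting quirk.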

@dyeman: here's what mine (a Galaxy 512MB 9800GT) is running at:

-- General info --
Card:            Unknown Nvidia card
Architecture:    G92 A2
PCI id:          0x614
GPU clock:       702.000 MHz
Bustype:         PCI-Express

-- Shader info --
Clock: 1728.000 MHz
Stream units: 112 (1b)
ROP units: 16 (1b)
-- Memory info --
Amount:    512 MB
Type:       128 bit DDR3
Clock:       1101.600 MHz

-- PCI-Express info --
Current Rate:    16X
Maximum rate:    16X

-- Sensor info --
Sensor: Analog Devices ADT7473
Board temperature: 51C
GPU temperature: 57C
Fanspeed: 1445 RPM
Fanspeed mode: auto
PWM duty cycle: 60.0%


Title: Re: Windows or Linux
Post by: Rob on April 02, 2009, 05:50:34 AM
I just finished a WU on my 8600GT under Win XP. It took about 10 minutes of CPU time and about 57 hours of GPU time, which works out to roughly 0.003 CPUs. Much better than a few months ago when I first tried GPUGrid and it took about 0.7 CPUs.
Title: Re: Windows or Linux
Post by: Wang Solutions on April 02, 2009, 08:50:55 AM
Hey Yoda,

What command did you run to get that output for the video card? I would like to try the same with my Ubuntu one.
Title: Re: Windows or Linux
Post by: dyeman on April 02, 2009, 09:59:11 AM
Quote from: Webmaster Yoda on April 01, 2009, 11:36:46 PM

@dyeman: here's what mine (a Galaxy 512MB 9800GT) is running at:



Thanks WM - sounds like my card might be the same as yours (Galaxy 512MB) but your shaders are more overclocked than mine.  I started getting artifacts in ATITool when I got the shaders much past 1700.  I ran at 1674 for a while until I did the Nvidia "optimal" thing.
Title: Re: Windows or Linux
Post by: yoda on April 02, 2009, 01:31:52 PM
Quote from: Wang Solutions on April 02, 2009, 08:50:55 AM
Hey Yoda,

What command did you run to get that output for the video card? I would like to try the same with my Ubuntu one.

nvclock -i  -f  (the -f is only because Ubuntu doesn't recognise the card, so I -force it)

You can also use nvclock -s but that will not show the shader speed:

Card:       Unknown Nvidia card
Card number:    1
Memory clock:    1101.600 MHz
GPU clock:    702.000 MHz


FWIW, the settings I'm using were the ones recommended by the NVidia config when autodetecting the optimal speed.  I may be able to tweak it a bit further but am happy with this speed.  Just wish it would not use so much CPU time.
Title: Re: Windows or Linux
Post by: Wang Solutions on April 02, 2009, 10:50:02 PM
Well, I tried overclocking. I made the mistake of doing it when there was less than an hour left on a WU, which promptly errored.  :cry2: The next two did the same though, so for now at least it is back to default settings till I can get the time to work out why.  :hbang: