
Windows or Linux

Started by yoda, April 01, 2009, 08:16:08 PM


yoda

Quote from: Furlozza on March 31, 2009, 07:49:07 AM
With the cards, my 9600GT is the same as yours, but I know the reason I dumped the 9500GT out of iGnatious, even though he did do the work, was because he did draw on so much CPU cycle time.

When Furlozza mentioned the above in another topic, it got me thinking.

I have a 9800GT, overclocked.  It does most GPUGrid WUs in less than 12 hours each.  I've noticed, however, that it too uses a lot of CPU time in 64-bit Ubuntu (around 4-5 hours for each WU).  In comparison, I've seen lower-rated cards (a 9600GT, for example) at lower speeds (per the stderr shown in the tasks) that used far less CPU time (under an hour), even though they took a lot longer for the whole work unit.

Seems to me that the Windows app uses very little CPU time when compared to the Linux app.  Anyone have any experience with running GPUGrid on identical (or similar) hardware under both Windows and Linux?  Does it make a big difference?

More specifically, if any of you have a 9800GT on a box running Windows, could you provide a link to some results?   And how about some GTX260s running Linux and Windows?

Here's mine: http://www.gpugrid.net/results.php?hostid=27648

One of my reasons for asking is that I'm looking at spending some money to improve my farm.  Depending on efficiency, I may get one GTX260 or a couple of 9800GTs (about the same amount of money).

Wang Solutions

I have three 9800GT cards, two in Windows, one in Ubuntu. The closest comparison is between two otherwise identical machines, one with 64-bit Windows XP and one with 64-bit Ubuntu 8.10. The Windows machine is at 3.7GHz and the Linux machine at 3.4GHz. Both computers process GPUGrid WUs in about the same time (no significant difference; if anything the Linux machine is slightly faster). I find that the CPU time displayed on the Ubuntu machine is usually a lot higher than that shown on the Windows machine, but I am not sure that this is an accurate reflection of the actual CPU time used. I say this for several reasons:

1) The completion times for the WUs are almost identical
2) I have noticed NO difference in completion times for the WUs running on the CPU at the same time on the Linux machine, suggesting the GPU task places little drain on the CPU
3) I have a cc_config.xml file on the Windows machine to add an extra "CPU" so that I can still run one WU per CPU core on that machine as well as the one on the GPU (see the sketch below this list). On the Linux machine I have not added any such file, as it automatically did the right thing anyway. I have not tested whether adding such a cc_config file would alter the apparent CPU usage.
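
For reference, a cc_config.xml along the lines Wang Solutions describes might look something like the sketch below. It is not taken from his machine; the <ncpus> value of 5 is an assumption for a quad-core box (four real cores plus one extra "CPU" to cover the GPU task).

<cc_config>
  <options>
    <!-- Tell BOINC to schedule one more CPU task than there are physical cores -->
    <ncpus>5</ncpus>
  </options>
</cc_config>

The file lives in the BOINC data directory and, if memory serves, is picked up on restart or via "Read config file" in the BOINC Manager Advanced menu.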

I think the answer lies more in the way the Linux app displays the CPU time than in it actually using the CPU more. Just an opinion based on my observations, FWIW.

Furlozza

Using Process Explorer in Windows XP 32-bit, it shows that the 9600GT on TheGnat uses anything from zero up to 6.00% of a core at any one time, but it does use the CPU more often than is recorded within BOINC. Tasks take about 18-21 hours to complete and usually come in with anything between 10 and 20 minutes of "CPU" time.

This makes me think that it isn't recording the correct time in XP, but may be doing so in Ubuntu.

dyeman

Hi Webmaster Yoda. I have a 9800GT too, under Win XP.  Like Furlozza's, it seems to use 10-20 minutes of CPU per WU.  Here is the host.  Times will be decreasing as I've overclocked more.  The host is a Pentium Dual Core E5200 gently OC'd to 3.33GHz.  For the last few days the 9800GT has been OC'd to the "Optimal" setting determined by the Nvidia Control Panel (Shader 1637, GPU 683, Mem 1145).  WUs seem to take 12-14 hours.

What is your card OC'd to?




yoda

Hmmm, all food for thought.  Hard to tell whether it really is taking time away from the CPU or whether it's in the way it is reported / recorded.

@dyeman: here's what mine (a Galaxy 512MB 9800GT) is running at:

-- General info --
Card:            Unknown Nvidia card
Architecture:    G92 A2
PCI id:          0x614
GPU clock:       702.000 MHz
Bustype:         PCI-Express

-- Shader info --
Clock: 1728.000 MHz
Stream units: 112 (1b)
ROP units: 16 (1b)
-- Memory info --
Amount:    512 MB
Type:       128 bit DDR3
Clock:       1101.600 MHz

-- PCI-Express info --
Current Rate:    16X
Maximum rate:    16X

-- Sensor info --
Sensor: Analog Devices ADT7473
Board temperature: 51C
GPU temperature: 57C
Fanspeed: 1445 RPM
Fanspeed mode: auto
PWM duty cycle: 60.0%



Rob

I just finished a WU on my 8600GT under Win XP. It took about 10 minutes of CPU time and about 57 hours of GPU time, which is about 0.01 CPUs. Much better than a few months ago when I first tried GPUGrid and it took about 0.7 CPUs.

Wang Solutions

Hey Yoda,

What command did you run to get that output for the video card? I would like to try the same with my Ubuntu one.

dyeman

Quote from: Webmaster Yoda on April 01, 2009, 11:36:46 PM

@dyeman: here's what mine (a Galaxy 512MB 9800GT) is running at:



Thanks WM - sounds like my card might be the same as yours (Galaxy 512MB) but your shaders are more overclocked than mine.  I started getting artifacts in ATITool when I got the shaders much past 1700.  I ran at 1674 for a while until I did the nvidia "optimal" thing.


yoda

Quote from: Wang Solutions on April 02, 2009, 08:50:55 AM
Hey Yoda,

What command did you run to get that output for the video card? I would like to try the same with my Ubuntu one.

nvclock -i  -f  (the -f is only because Ubuntu doesn't recognise the card, so I -force it)

You can also use nvclock -s but that will not show the shader speed:

Card:       Unknown Nvidia card
Card number:    1
Memory clock:    1101.600 MHz
GPU clock:    702.000 MHz


FWIW, the settings I'm using were the ones recommended by the NVidia config when autodetecting the optimal speed.  I may be able to tweak it a bit further but am happy with this speed.  Just wish it would not use so much CPU time.
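
For anyone wanting the same clock controls under Ubuntu, the usual route with the proprietary NVIDIA driver of that era was to enable Coolbits in xorg.conf, which makes nvidia-settings show a clock-frequency page with an auto-detect option. The snippet below is only a sketch: the Identifier is made up and must match the Device section already in your own xorg.conf.

Section "Device"
    Identifier  "Videocard0"        # must match the name of your existing Device section
    Driver      "nvidia"
    Option      "Coolbits" "1"      # "1" unlocks the clock controls in nvidia-settings
EndSection

After restarting X, nvidia-settings gains a page where the GPU and memory clocks can be adjusted or auto-detected.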

Wang Solutions

Well, I tried overclocking. Made the mistake of doing it when there was less than an hour left on a WU, which then promptly errored.  :cry2: The next two did the same though, so for now at least it is back to default settings till I can get the time to work out why.  :hbang: