BOINC@AUSTRALIA FORUM
Author Topic: Running multiple WU and GPU VRAM  (Read 1889 times)
zzrhardy
« on: July 07, 2018, 09:15:28 PM »

Very interesting post from "vseven" over on the MilkyWay forum regarding running multiple MilkyWay work units: https://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=3118&postid=67378#67378

The TL;DR is that you get computation errors in MilkyWay work units if you run out of VRAM on your GPU. He observed that each WU peaks at about 1.8GB of VRAM towards the end of a run, so if you are running multiple WUs at a time, divide your total VRAM by 1.8 (rounding down) to find the maximum you can run concurrently.

Examples:

- an 11GB 1080 Ti should run a max of 6 concurrent WUs (11 divided by 1.8 equals 6.11)
- an 8GB Vega 64 could run a max of 4 concurrent WUs (8 divided by 1.8 equals 4.44)
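The rule of thumb above can be sketched in a few lines of Python. The 1.8GB peak per WU is the figure vseven reported for MilkyWay at the time; it may not hold for other projects or future app versions, so treat it as an assumption:

```python
# Estimate max concurrent MilkyWay WUs per GPU, assuming each WU
# peaks at ~1.8 GB of VRAM (vseven's observed figure, not a constant).
PEAK_VRAM_PER_WU_GB = 1.8

def max_concurrent_wus(total_vram_gb: float) -> int:
    # Round down: a fraction of a WU's VRAM budget is unusable.
    return int(total_vram_gb // PEAK_VRAM_PER_WU_GB)

for name, vram_gb in [("GTX 1080 Ti", 11.0), ("Vega 64", 8.0)]:
    print(f"{name}: up to {max_concurrent_wus(vram_gb)} concurrent WUs")
```

Running more than that risks the computation errors described in the linked post, so when in doubt round down further and leave some headroom.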



Current builds: Kuroneko and Chibineko
JugNut
« Reply #1 on: July 08, 2018, 02:05:22 PM »

Yeah, you're right z, that is an interesting post. Although something must have changed in recent times: most of the credit I've earned at MilkyWay was done on 7970/280X cards running 4 or 5 WUs concurrently, and they only have 3GB of RAM. Using his math there's no way I could do that now, as there's just not enough RAM on those cards when the tasks now use well over 1GB per WU.

I still have some working AMD 280Xs, so it might be interesting to plug them back in and see what happens.
Participated in Challenge 1 and AA's 27 - 54. 

Crunching today for a better tomorrow...