Friday, January 29, 2010

G-Rig-3 (Stability testing, temperatures and power consumption)

5. Stability testing, temperatures and power consumption.
---------------------------------------------------------------------

After completing the build it's time for some stability testing. I wanted to make sure that all the components were working properly. I used the following software:

1. Prime95 v25.8 build 4 (64-bit Windows) for CPU stress testing.
2. FurMark 1.6.5 for GPU stress testing.
3. Memtest86+ (the one bundled with Ubuntu 8.10) for memory testing.
4. CPU-z 1.53.1 for monitoring the processor's voltages and speed.
5. GPU-z 0.3.8 to get the GPU specs and monitor GPU temperatures.
6. RealTemp 3.46 for monitoring CPU temperatures.

I fired up GPU-z to check whether the GPU I got is the 55nm SP216 part. It is an SP216, 55nm part, and the clocks are shown as core: 640 MHz, memory: 1000 MHz, shader: 1380 MHz.

Then it was time to test the CPU. After waiting a while for the temperatures to settle, I started the CPU stress test: four small-FFT threads in Prime95 for more than 40 minutes, with CPU-z and RealTemp open to monitor the CPU. The processor ran at 2.8 GHz, which is where it should be with all four cores loaded. The temperatures recorded were min 36°C and max 76°C (taking the maximum of the four core temperatures). Room temperature was around 23°C.

Time for some GPU testing. I ran the FurMark stability test with the burn-in option selected at 1680x1050, full screen, 16x MSAA, with "Monitor temps" enabled to track the GPU temperature. The temperatures were min 55°C and max 86°C. Note that at this temperature the GPU fan ramped up to a maximum of 74%.

Note: The case I am using now has inadequate cooling, with only one exhaust fan. With better airflow the temperatures may be lower. I will update when I get a better case.

To test the memory I ran the Memtest86+ that comes with Ubuntu 8.10 for 8.5 hours (through the night). It completed 13 passes with zero errors.

Once the GPU and CPU were tested individually, I wanted to stress the entire system. I ran four small-FFT Prime95 threads and FurMark in burn mode simultaneously and left the machine running for 30 minutes. The rig passed this test as well; temperatures were similar to those reported above.

Note that this is a stress test. At normal gaming loads (temperatures taken while running Far Cry 2 at 1680x1050, 4xAA/16xAF, Ultra settings), the CPU and GPU peak at 65°C and 79°C respectively. So the stock cooler is adequate for a stock CPU at normal load, for my purposes.

5(a) Max power draw from the UPS. APC 650VA sufficient for this rig?
----------------------------------------------------------------------------------
I wanted to check whether my choice of a 650VA UPS was correct. As I said before, I need it for basic protection and about 2 minutes of backup to shut down. I was actually planning to run only the CPU (the system unit) on the UPS. For this test I connected only the CPU to battery backup and the monitor to the surge-protection-only port.

To stress the machine to its maximum load I ran the same test I used to stress the entire system (four small-FFT Prime95 threads and FurMark in burn mode simultaneously) and used APC PowerChute to monitor the load. The maximum power draw from the UPS reported by PowerChute was 330-350W. At idle it consumes around 90W.

Then I wanted to test whether the battery backup could carry the rig at normal loads. I connected the monitor, speakers and the CPU to battery backup and fired up Far Cry 2, with speaker volume at 50%. While running Far Cry 2 the maximum draw of the entire system was 290-300W. At this load the UPS should give me up to 4 minutes of backup, more than I wanted. At idle the entire setup draws around 135-150W, and at normal desktop usage it never crosses 200W, so I should get up to 15 minutes of backup.

So a 650VA UPS is more than enough for this rig in practical usage scenarios. Even at maximum load it can carry the CPU alone with 1-2 minutes of backup.
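A quick sanity check on the headroom math above (a sketch, not the UPS's actual spec: the 0.6 power factor is an assumed typical value for small line-interactive APC units and should be verified against the datasheet):

```python
# Does a 650VA UPS have headroom for the measured loads?
# Assumption: real-power limit = VA rating * power factor; small
# line-interactive UPSes are often rated around PF 0.6.
VA_RATING = 650
POWER_FACTOR = 0.6                      # assumed typical value, check datasheet

watt_limit = VA_RATING * POWER_FACTOR   # 390 W

loads = {"full stress test": 350, "Far Cry 2": 300,
         "normal desktop": 200, "idle": 150}
for label, watts in loads.items():
    pct = 100 * watts / watt_limit
    print(f"{label}: {watts} W = {pct:.0f}% of the {watt_limit:.0f} W limit")
```

Even the worst-case 350W stress load sits at about 90% of the assumed 390W limit, which is consistent with the UPS holding up under full stress but giving only 1-2 minutes of backup.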

5(b) The Half speed PCI-e bus.
---------------------------------

The next day I noticed that CPU-z and GPU-z reported the motherboard's PCI-e link as capable of x16 v2.0 but running at x8 v2.0. For a moment I thought there might be a problem with the graphics card or the motherboard. I went through the BIOS and checked every option to see whether some PCI-e-related setting was causing the problem. Everything seemed fine in the BIOS (updated to version 1207), yet GPU-z still reported x8. I wondered whether some power-saving feature was dynamically reducing the link width, though I am not aware of any such feature. So I ran FurMark in burn mode and checked whether the link width changed. No luck: both GPU-z and CPU-z reported the same width, so power saving was ruled out.

Then I googled and found other people with various hardware setups facing the same problem. Some posts said that if the bandwidth is reported properly there is no need to worry. The bandwidth GPU-z reported for my card seemed correct, but I was still not convinced that x8 is OK. Searching further, I found a forum post saying that a graphics card that is not seated properly can cause this problem. So I opened the case and checked; the card looked properly seated, but to be sure I removed it and re-inserted it securely. Then I ran GPU-z and bang, this time it showed x16 v2.0 @ x16 v2.0, and CPU-z concurred. The card is very long (10.3 inches), so even though it looked properly inserted the first time, it may indeed have been a seating problem.
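For context on what x8 vs x16 actually means, the usable bandwidth can be worked out from the PCIe 2.0 signaling rate (this is general PCIe arithmetic, independent of this particular card):

```python
# PCIe 2.0 signals at 5 GT/s per lane with 8b/10b line coding,
# so each lane carries 5e9 * 8/10 = 4e9 bits/s = 500 MB/s per direction.
TRANSFER_RATE = 5e9          # transfers per second per lane (PCIe 2.0)
ENCODING = 8 / 10            # 8b/10b coding: 8 data bits per 10 line bits

def link_bandwidth_gbs(lanes):
    """Usable one-direction bandwidth in GB/s for a given link width."""
    return lanes * TRANSFER_RATE * ENCODING / 8 / 1e9

print(f"x8  link: {link_bandwidth_gbs(8):.1f} GB/s")
print(f"x16 link: {link_bandwidth_gbs(16):.1f} GB/s")
```

So an x8 link halves the available bus bandwidth from 8 GB/s to 4 GB/s per direction; whether that actually hurts depends on how bandwidth-bound the workload is.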

Update: The next day, while running 3DMark Vantage, I noticed that my GPU score was significantly lower than expected. The GPU score of a GTX 260 SP216 is supposed to be above 9000, and mine is an XXX edition with 640/1000/1380 clocks, so I was expecting an improvement, definitely not a decrease. I got a GPU score of 8050; the CPU score was fine. I opened GPU-z and the link width was again reported as x8 v2.0. I was about to open the case to check whether the graphics card was seated properly, but decided to just restart and check first. To my relief, after the restart both GPU-z and CPU-z reported x16 v2.0, touch wood. I am not sure why this happened again; I will keep a watch on it. I then ran 3DMark Vantage on the performance preset, but no luck: the GPU score was exactly the same. I spent some 6.5 hours on this, into the wee hours of the night (from 10:30 PM till about 4 AM). See the section "Lower than normal 3DMark Vantage scores" under Benchmarks below for what finally fixed it.

Benchmarks
----------------

After finishing the build and preliminary stability testing, I wanted to make sure the hardware was performing to its potential, so I turned to benchmarking. The first thing I wanted to test was gaming performance and the graphics subsystem, as the rig is primarily meant for gaming. I have Far Cry 2 installed, which came bundled with the graphics card, and I chose 3DMark Vantage to check whether the machine is performing at its potential. So I ran the following tests (my native resolution is 1680x1050 @ 60Hz):

The Tests:

1. 3DMark Vantage on the performance preset. Compared the scores with the VGA Charts December 2009 and the 3DMark ORB.
2. Far Cry 2, DX10, 1680x1050, No AA/16xAF, Ultra High settings, VSync off. Compared the results with AnandTech's "Far Cry 2 Dissected: Massive Amounts of Performance Data".

(Will probably do more benchmarks when I get the time.)

OS & Drivers:
Windows 7 64-bit.
Nvidia Forceware 195.62
Nvidia Forceware 186.16

I ran the benchmark that comes with Far Cry 2 at the above settings, using the Ranch Small test. The results tallied with those from the website. The drivers used were 195.62.

Lower than normal 3DMark Vantage scores:
------------------------------------------
Then I ran 3DMark Vantage. To my surprise, the GPU score was lower than where my card should be: I got 8050, whereas it should be above 9000. The CPU score seemed fine, but the overall score was again lower because of the weaker GPU score. I spent a lot of time trying to figure out what was causing the problem. I ran the test with PhysX support disabled in the NVIDIA drivers, but the score was still the same. While searching for solutions, I found posts pointing out that the 190-series drivers have issues with 3DMark Vantage and that the 186.16 Forceware driver is better overall. So it was time to get Forceware 186.16. I downloaded and installed this driver and ran the 3DMark Vantage P test again with GPU PhysX disabled in the NVIDIA drivers. Bingo: the GPU score shot up to 9850 and the total score is also where it should be. I am getting roughly a 10% boost in GPU score thanks to the 10% factory overclock. Now everything is fine, touch wood.
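The ~10% figure checks out against the clock ratios (the stock GTX 260 clocks used here, 576 MHz core / 1242 MHz shader, are the common reference values and are an assumption on my part, so verify them for your card):

```python
# Does the factory overclock explain the GPU score gain?
# Reference GTX 260 clocks assumed: 576 MHz core, 1242 MHz shader.
stock = {"core": 576, "shader": 1242}
xxx   = {"core": 640, "shader": 1380}    # this card's XXX edition clocks

for part in stock:
    gain = xxx[part] / stock[part] - 1
    print(f"{part}: +{gain:.1%}")        # both come out to about +11.1%

# Observed 3DMark Vantage GPU scores: 9850 here vs ~9000 for a stock SP216.
print(f"score: +{9850 / 9000 - 1:.1%}")  # about +9.4%, in the same ballpark
```

Both clock domains are overclocked by the same ~11%, and the ~9% score gain tracks that closely, which is what you'd expect from a GPU-bound benchmark.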
