Recently, I bought a new 4K monitor and a GTX 1070 video card. To
illustrate how rare that is, it’s been six years since I’ve had a new monitor,
and two years since my last graphics card.
Naturally, I couldn’t have planned it that way even if I had
tried. The truth is, I had fully intended on keeping my 27” monitor for a while
longer as I liked being able to run any current game at 1920x1080 with 60 FPS
(frames-per-second). However, earlier this year it developed an intermittent
flicker that eventually gave way to the monitor dying completely. I looked at a few
2560x1440 monitors, but I figured that it didn’t make sense to get one when the
4K monitors were nearly as inexpensive. So what exactly is 4K? Thankfully, it’s
not what’s left of most people’s post-recession 401K, nor is it some kinky
S&M bit from Fifty Shades of Grey. Numerically, 4K refers to the
resolution of 3840x2160, which doubles HD's 1920x1080 both horizontally and
vertically. This is significant because 4K possesses four times the pixels of HD,
and that translates directly into jaw-dropping clarity. Another advantage of 4K
monitors is the ability to display 10-bit color. HD displays are typically limited
to 8-bit color, which can reproduce only 256 shades of each of the three primary
colors: red, green and blue. With 10-bit color, there are 1,024 shades of each
primary, giving remarkable image reproduction. Lastly, 4K also introduces High
Dynamic Range (HDR), a standard that widens the range between deep, dark shades
and vivid, bright ones.
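To put some numbers behind that, here's a quick back-of-the-envelope sketch of the pixel counts and per-channel shades being compared:

# Back-of-the-envelope math: 4K vs. HD pixel counts and color depth.
hd_pixels = 1920 * 1080          # 2,073,600 pixels
uhd_pixels = 3840 * 2160         # 8,294,400 pixels
print(uhd_pixels / hd_pixels)    # 4.0 -- four times the pixels of HD

shades_8bit = 2 ** 8             # 256 shades per primary (red, green, blue)
shades_10bit = 2 ** 10           # 1,024 shades per primary
print(shades_8bit ** 3)          # ~16.7 million total colors
print(shades_10bit ** 3)         # ~1.07 billion total colors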
During the interim I was using an old 19” spare monitor that felt
as obstructive and rudimentary as staring through a plywood peephole. So I
settled on a 28” AOC gaming monitor and have really been impressed with
it. In my profession, I’ve dealt with a lot of monitors but the AOC is
the first I’ve seen that comes with a sturdy, brushed-aluminum base instead of
a flimsy plastic one. It's also the first monitor I've owned to use the
DisplayPort interface, since I wanted to be certain I could run at the native
resolution of 3840x2160 at 60 Hz. Older interfaces like HDMI 1.4 would have
limited it to just 30 Hz.
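For the curious, a quick back-of-the-envelope calculation shows why the older HDMI spec falls short; the effective link rates below are approximate:

# Rough bandwidth check: why 4K at 60 Hz wants DisplayPort rather than HDMI 1.4.
width, height, refresh, bits_per_pixel = 3840, 2160, 60, 24   # 8-bit RGB
payload_gbps = width * height * refresh * bits_per_pixel / 1e9
print(round(payload_gbps, 1))   # ~11.9 Gbps of raw pixel data, before blanking overhead

# Approximate effective link rates (after encoding overhead):
#   HDMI 1.4        ~8.2 Gbps  -> only enough for 3840x2160 at 30 Hz
#   DisplayPort 1.2 ~17.3 Gbps -> comfortable headroom for 4K at 60 Hz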
While it all sounds great on paper, the only caveat is that this
quad-fidelity comes at the expense of an enormous amount of video processing
power. Previously, it was nearly impossible for a single graphics card to
render a game smoothly at 3840x2160 but my new GTX 1070 finally puts that in
reach. To illustrate 4K’s massive overhead, I attempted to run an older game at
4K to see if it would be playable on my two year-old GTX 970 graphics card. I
chose Crysis 3 (2013) since I knew it would be old enough to hopefully run, yet
new enough to actually support the 4K resolution. Crysis proved it would run at
4K, but I needed to substantially turn down the level of detail so that it
could sustain acceptable frame rates of 30-40 FPS. Despite lowering overall
settings to “Medium” it still looked fantastic at 3840x2160 as the textures
were razor-sharp. It reminded me of the trade-off between Anti-Aliasing (AA)
and resolution, in which you can sacrifice AA if the resolution is high
enough.
Quite frankly,
with the difficulty involved in securing the 1070, I was a little underwhelmed
when I opened the box and the Heavens didn’t part and there were no singing
angels. That small detail aside, video cards have always included a few extras,
such as a 6- or 8-pin power adapter, a DVI-to-analog adapter, and, with my
970, a nice mouse pad. As the picture above illustrates, inside the austere box
there was only a small pamphlet and a compact disc, both of which were more
useless than an inflatable dartboard. For starters, I pity anyone who buys this
video card and then is such a novice that they actually have to read the
enclosed guide which looks to be printed in every foreign language except
English. Even the text on the disc, “Real Graphics, True Gaming” sounds like
something lost in translation from Asian to American. Secondly, I've ranted
about these bundled driver discs for as long as I can remember (probably going
all the way back to my first video card, a 4MB Diamond Monster 3D, in 1997).
The reason is that they are always hopelessly outdated, and this Gigabyte disc
is no different. The only graphics card drivers on it are from May 2016, and
they're just for Windows Vista, Windows 7 or Windows 8. Needless to say, that
isn't even remotely close to the version 372.54 driver for the Windows 10
64-bit Anniversary Update, released August 15th, that I needed. I purposely opted
for the premium “Windforce” model because I didn’t want the bland and boring
reference-style cooler that looks like it’s from a 2010 Radeon 5850. Despite
that, the onyx-hued 1070 still looks rather dull and depressing next to the
fiery red-trimmed 970. On the plus side, the 1070 only requires an 8-pin power
connector instead of the 8+6 pin of the 970. However, the 1070 doesn’t have a
light-up LED display like the 970 did, so make of that what you
will.
                 NVIDIA GTX 1070   NVIDIA GTX 970
CUDA CORES       1920              1664
TEXTURE UNITS    120               104
ROPs             64                56
CORE CLOCK       1500 MHz          1050 MHz
MEMORY CLOCK     8 Gbps GDDR5      7 Gbps GDDR5
BUS WIDTH        256-bit           256-bit
FRAME BUFFER     8000 MB           4000 MB
MAX TDP          150 watts         145 watts
MFG PROCESS      TSMC 16 nm        TSMC 28 nm
PRICE PAID       $399              $355 (2014)
Comparing the
specs, the 1070 (codenamed “Pascal”) looks only marginally better than the 970,
like more of a lateral move than a true upgrade. It has a meager 15% more CUDA
cores, just 15% higher memory clock and only 15% more texture units. Yet, it's
a testament to the new architecture that it basically improves the performance
of the 970 by an average of 50% while drawing only 5 watts more juice. However,
it should be noted that bumping the resolution from 2560x1440 to 3840x2160 does
come with an enormous 50% performance penalty. But not everyone loves Pascal,
namely the enthusiasts with more dollars than sense who shelled out over $1,000
last year for a Titan X graphics card. The new 1070 offers better performance
for less than half the price and is the reward for consumers such as myself who
believe patience is a virtue.
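If you want to sanity-check those percentages (and the 1440p-to-4K jump) yourself, a few lines of throwaway Python against the table above will do it:

# Percentage uplifts taken straight from the spec table above.
def uplift(new, old):
    return (new / old - 1) * 100

print(round(uplift(1920, 1664)))   # CUDA cores:    ~15%
print(round(uplift(120, 104)))     # Texture units: ~15%
print(round(uplift(1500, 1050)))   # Core clock:    ~43%
print(round(uplift(8, 7)))         # Memory clock:  ~14%

# The 2560x1440 -> 3840x2160 jump more than doubles the pixels pushed per
# frame, which lines up with the roughly 50% frame-rate penalty I observed.
print((3840 * 2160) / (2560 * 1440))   # 2.25x the pixels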
The GTX 1070 is built for demanding games such as the new Forza Motorsport 6:
Apex, which has stiff requirements. The Recommended (1080p at 60 FPS) hardware
calls for an Intel Core i7-3820 @ 3.6 GHz, a GTX 970 and 12 GB RAM. Fortunately,
I have 16 GB RAM, but I fall just short on the processor. Luckily, my slightly
older i7-3770 is overclocked to 4.6 GHz, so I feel that covers me even for the
Ideal (4K at 60 FPS) requirement of an i7-6700K @ 4 GHz. In terms of video at 4K, it calls for
a 980 Ti so my new 1070 is beyond even that. However, it’s likely that other
forthcoming games will have similar or stricter prerequisites so it’s probably
time to consider a move to a fresh i7-6800K. The encouraging aspect is that the
new Broadwell-E processors still support the X99 chipset (Socket LGA 2011-V3)
which presents a clear upgrade path for several years to come. Regardless, four
years on a Z77 platform is a pretty good track record for me. But if Zen’s
Summit Ridge benchmarks hold true, it might also be my first new AMD CPU in ten
years.
Good DirectX 12
benchmarks are hard to come by, as many require purchasing the full game to
use (I'm looking at you, Ashes of the Singularity, Hitman and Tomb
Raider). However, I did my best to gather the most current and varied ones,
with an emphasis on what we might see from future DX12 and VR games.
Unfortunately, none supported the 3840x2160 resolution so I used the maximum
fidelity available in each benchmark. Also, please note that there is no universal
standard for results; some display an average frame rate, while others use a
composite score generated from combining the results of the CPU and GPU tests.
A few may even state something such as “Low” “Medium” or “High” to help users
decipher the results.
3DMARK TIME SPY
The first DX12
benchmark from Futuremark is big news, and a bit of an homage, featuring a spy
creeping through a museum with recognizable items in it from previous 3DMarks.
Despite the seemingly prosaic premise, it’s absolutely gorgeous and the
stunning visuals belie the crushing workload exhibited on the processor and
graphics card. My heavily-overclocked i7 struggled with the CPU test and scored
just 14.1 FPS, or about half of what’s considered an acceptable frame rate (30
FPS). However, the big debate is over the 1070’s performance and whether its
Achilles heel is DirectX 12 Asynchronous Compute efficiency. AMD’s R9
generation of graphics cards, such as the Fury X, already supported the Async
Compute feature set, but it was widely known that Nvidia’s 900 series “Maxwell”
did not, at least at the transistor level. For this reason, AMD’s 290x is as
fast as Nvidia’s last-generation flagship 980 Ti under DX12! Pascal was
supposed to address this deficiency, but for whatever reason, continues to use
an (albeit newer) version of the Async Compute software emulation first seen in
Maxwell. Without getting too granular, Pascal seeks to use its pre-emption
capabilities to process Async Compute + Graphics tasks concurrently. Clearly,
this will never be as fast as if it were handled at the hardware level, but Nvidia's
clever software routines plus Pascal's raw processing power help mask the Async
absence. My 970 scored 23.2 FPS in Graphics Test 1 and
20.1 FPS in Graphics Test 2. Meanwhile, the 1070 was around 65%
faster posting 37.7 FPS in the first test and 33.4 FPS in the second one.
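To make that concurrency argument concrete, here's a deliberately simplified sketch with made-up per-frame costs; it's a toy model of overlapping work, not a description of how Nvidia's actual scheduler behaves:

# Toy model of Async Compute: per-frame costs in milliseconds (made-up numbers).
graphics_ms = 12.0   # time the graphics queue needs for one frame
compute_ms = 4.0     # extra async compute work (physics, post-processing, etc.)

serial = graphics_ms + compute_ms             # one queue after the other
ideal_overlap = max(graphics_ms, compute_ms)  # perfect hardware-level concurrency

print(1000 / serial)         # ~62 FPS if the work runs back to back
print(1000 / ideal_overlap)  # ~83 FPS if compute hides entirely behind graphics
# Pascal's preemption-based approach lands somewhere between these two bounds.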
DEUS EX: MANKIND DIVIDED
Although this AAA
title just dropped on August 23rd, I wanted to squeeze in a quick comparison
since it uses the new Dawn Engine. Based on a heavily modified version of IO
Interactive’s proprietary Glacier 2 game engine, it boasts more improvements
than you can shake an augmented limb at. For starters, it utilizes Tiled
Lighting, with Deferred Rendering for opaque surfaces and Forward Lighting for
transparent surfaces. Additionally, the Anti-Aliasing solution that the game
uses is based on a temporal algorithm, and a temporal solution is used for
Ambient Occlusion. Thankfully, it also includes a built-in benchmark, but the
bad news is that DX12 is not supported at launch. Perhaps that isn’t altogether
a bad thing, given how challenging it is under DX11. Eidos stated that their
“recommended spec” for Mankind Divided is essentially the same system I have,
an i7 3770 and a GTX 970. They claim that combination should yield 60 FPS at
1920x1080 on "High" quality. And it was pretty close, with my 970 scoring an
average of 51.5 FPS. Obviously, the game's sheer demands are disappointing, as
I fully believed my 1070 would be powerful enough to run it comfortably at 4K.
In fact, the sad truth is that even with a 1070 you're still pretty much limited
to 1920x1080 at "Ultra" quality; my 1070 averaged 50.2 FPS there while the 970
hit 34.1 FPS. So I settled on "High" quality at 2560x1440, which averaged 47.4 FPS.
Unbelievably, you’d need a new $1200 Pascal Titan X just to squeeze out 60 FPS
at 1920x1080 on "Ultra". Without a doubt, Mankind Divided is the most
graphics-intensive game to date.
FINAL FANTASY XIV: HEAVENSWARD
Having never
played it, I don’t pretend to know any of the Final Fantasy anthology.
However, the psychedelic Heavensward benchmark looks like you took some
LSD before attending a Japanese Manga convention. The kaleidoscopic scenes
unravel rapidly with no rhyme or reason and there are dragons, waterfalls and
lots of fighting and pretty explosions. The mind-altering visuals also make it a
serviceable benchmark, because at certain points they yanked my 970's frame rate
into the 20s. Overall, the 970 scored 11,978 on the 1920x1080 preset, a result
rated "Extremely High". However, my 1070 showed the least improvement here,
topping the 970 by only around 40% with a score of 17,086.
STEAM VR PERFORMANCE TEST
Based on Valve’s
Aperture Robot Repair VR demo, this program tries to determine whether a
system’s hardware is sufficient for displaying Virtual Reality games like you
might experience with an Oculus Rift or HTC Vive headset. Presently, there are
no concrete standards so predicting how well any given game will perform is
still largely hit-or-miss. Valve's test gauges this by measuring whether your
system can hold the content at a steady 90 FPS. VR is so demanding because, for
it to appear realistic, either two feeds are sent to one display, or there are
two displays with one per eye. This means each scene is being rendered
individually and in parallel. Many mainstream systems struggle to run one
instance of a game, so it's easy to imagine the workload of effectively
rendering everything twice. Perhaps because it's a relatively
new science, these VR results were the most detailed, including a bizarre
caveat that sounded like it had been drafted by one of Valve’s lawyers: “Please
note that while your system’s rendering power isn’t limited by your CPU, this
test doesn’t account for the varying CPU cost of positional tracking and
processing-intense applications.” In other words, don’t sue us or demand a
refund if you’re unable to acceptably run any of our VR products on your crappy
computer. Despite the disclaimer, my 970 was pronounced “VR Ready” with the
annotation that my system was “performing well enough for high quality VR”.
Thus, the 970 scored 7.3 (High), although such a result is clearly based on
their esoteric measurements. Conversely, the 1070 was ranked an 11 (Very
High) with a 57% improvement and judged to be “well above for what is
needed for high quality VR”.
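For a rough sense of why sustaining 90 FPS across two eyes is such a tall order, here's a quick pixel-throughput comparison; I'm assuming the commonly cited Rift/Vive figure of 1080x1200 per eye at 90 Hz, so treat the numbers as ballpark:

# Raw pixels-per-second pushed: traditional 1080p gaming vs. a VR headset.
flat_1080p = 1920 * 1080 * 60         # one display at 60 FPS
vr_headset = 2 * (1080 * 1200) * 90   # two eyes at 90 FPS (assumed Rift/Vive panels)

print(flat_1080p)                 # 124,416,000 pixels per second
print(vr_headset)                 # 233,280,000 pixels per second
print(vr_headset / flat_1080p)    # ~1.9x -- and that's before the extra
                                  # supersampling VR typically needs to look sharp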
UNIGINE VALLEY
Released in 2013,
this is the oldest benchmark here. But despite that and its modest DirectX 11
roots, cranking up the resolution and Anti-Aliasing still makes for a punishing
test. Panoramic vistas are breathtakingly rendered in this demo, but the
bucolic demeanor hides GPU-crippling effects like Ambient Occlusion, Depth
of Field, Dynamic Sky, Sun Shafts and Volumetric Clouds. The net result is a
tangibly beautiful Ansel Adams portrait come to life, but at 2560x1600 and 8X
AA, my 970 barely averaged 31.6 FPS. My 1070, while undeniably
better, still failed to hit the magical 60 FPS mark, averaging just 48.4
FPS. Unigine Valley is also unique in that it includes a GPU temperature
monitor so you can watch it heat up faster than a reactor at Fukushima. It peaked
around 65 degrees Celsius, which works out to a sizzling 149 degrees Fahrenheit.
I know the U.S. is the lone
holdout, but I hate metric and don’t want any foreign rulers.
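For the fellow metric holdouts, the conversion works out like this:

# Celsius to Fahrenheit: F = C * 9/5 + 32
peak_c = 65
peak_f = peak_c * 9 / 5 + 32
print(peak_f)   # 149.0 degrees Fahrenheit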
THE VERDICT
If the numbers
are so good, why is Pascal a letdown? My biggest gripe was availability. When
Nvidia released the previous generation, it was a “hard” launch with cards
available immediately after the announcement. I was able to order my 970 the
same day and have it within 48 hours. But the 1070 was announced on May 27th
and it didn’t even go on sale until June 10th in “limited
quantities”. In other words, you had better odds of hitting the Powerball
Lottery than finding one for sale. It was this holdup that delayed my article
for three months as I refused to pay $100 over MSRP for an early model.
Honestly, I wouldn’t have thought that many people would have been willing to
pay such a premium for a graphics card. Secondly, Nvidia has a
history of breaking promises. Take my GTX 970 for instance. It was supposed to
arrive on a smaller, faster 20 nm fab but it was still stuck on the same 28nm
process as the 770 and 670 before it. And in 2014 when Nvidia debuted the 980,
it promised that Pascal in 2016 would showcase, among other things, a
revolutionary process of Unified Memory. This, the engineers claimed, would
allow the GPU to “access the CPU’s memory, so developers don’t have to allocate
resources between the two". Nvidia also laughably advertised something called
“Pascal Module” which theoretically would make the graphics card “one-third the
size of the standard boards used today” and “put the power of GPUs into more
compact form factors than ever before”. That didn’t happen either as the 1070
measures right at 10.5" in length which is an inch longer than my 970.
Also, I'm disappointed that Nvidia decided to go with GDDR5X instead of the
HBM-style memory used by AMD's Fury line of video cards. Considering GDDR5X is
about 12% slower than even first-generation HBM, it seems Nvidia chose GDDR5X
purely for the cost savings over HBM2. Finally, the weak software Async Compute performance is
unforgivable, particularly since it was such a mess in Maxwell. So what did
they actually get right? They touted “3D Memory” that would allow “several
times greater bandwidth, more than twice the memory capacity and quadrupled
energy efficiency.” There’s no denying that Pascal does offer higher bandwidth,
more memory, and better efficiency than the previous generation, but it’s
hardly a ground-breaking technology. Remember that Intel has been using
“Tri-Gate” transistors in its CPUs since Ivy Bridge in 2012.
No doubt about
it, the new 1070 is the fastest single video card I’ve ever owned, but there’s
still a ways to go for upcoming technologies like Virtual Reality. While
consumers are clamoring for the pricey VR headsets, I’m taking a wait-and-see
approach. From what I've read, VR still needs a lot of maturing. At the
introduction of Pascal, Nvidia showcased a free collection of mini-games for
the HTC Vive VR headset. Called “Funhouse”, it lets users compete in virtual
games typically found at a carnival. Granted, the graphics are stunning but it
was later disclosed that powering the game was not one, but three
GTX 1080s, with one devoted solely to rendering physics. If that's the case, it
may well take the next generation of graphics cards before a single-card
solution is truly "VR Ready". And hot on the heels of the new Titan X
announcement comes word that Nvidia might be bumping up its roadmap to release
Pascal’s successor, Volta, next year instead of 2018. If true, this would
be both good and bad: on the positive side, we’d potentially see a much faster
platform likely utilizing HBM memory as well as addressing Pascal’s DX12
weakness. But to ramp up production a year early would mean that Nvidia would
have to abandon plans to debut Volta on a 10 nm FinFET node and instead
remain on Pascal’s older 16nm process. Am I sensing a pattern here?
Regardless, while
the 1070 isn’t quite powerful enough to game at 4K, I still won’t lose a
fortune on a Titan X if Nvidia decides to release a 1080 Ti later this year or
Volta next year.