GeForce 4 Ti4200 64MB 3D Card
Manufacturer: NVidia
Purchasing: NVidia website (for a list of distributors only)
Reviewed: 30th June 2002
Introduction
For a long time now NVidia have been the top dogs of the consumer 3D-card market. Their current line of GeForce cards (going back two years to the GeForce 1) has consistently beaten the competition and won rave reviews the world over.
For the last 18 months the 3D card market has looked like settling into a two-sided war between ATI and NVidia. However, recent announcements from Creative and Matrox indicate that the war is going to heat up once more. Also, with the Radeon 8500 being a fully featured DirectX8.1 graphics card, NVidia's GeForce series may be seeing its first big challenge...
It's all about programming
As mentioned, Creative and Matrox have signaled their intent to move back into the consumer market, but what they have to offer has yet to be seen. ATI and NVidia have both been pioneering and developing their latest graphics cards around a programmable core - shaders. The Radeon 8500 and the GeForce 4 are each company's second attempt at this approach (the Radeon 7500 and GeForce 3 both had programmable units built in), and it'll be interesting to see who comes out on top.
Whilst all the cards are getting more and more powerful with each revision, it's starting to become a battle of features - with programmable shading units providing graphics programmers endless possibilities for special effects. This probably won't bother end-users too much, but for developers, choosing the correct video card (and getting the most features) is paramount; for NVidia, ATI and the rest, the company whose cards are best supported by developers will eventually win more sales from end-users wanting to see top-quality graphics.
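To give a flavour of what 'programmable' means in practice, here is a minimal sketch using Direct3D 8 vertex shader assembly, assembled and bound through the DX8 API. The register layout and declaration are illustrative rather than taken from any shipping title; real code would also check the shader version caps first.

#include <string.h>
#include <d3d8.h>
#include <d3dx8.h>

// A trivial vs.1.1 vertex shader: transform the position by a matrix
// held (transposed) in constant registers c0-c3 and pass the diffuse
// colour straight through.
const char g_ShaderSrc[] =
    "vs.1.1              \n"
    "dp4 oPos.x, v0, c0  \n"
    "dp4 oPos.y, v0, c1  \n"
    "dp4 oPos.z, v0, c2  \n"
    "dp4 oPos.w, v0, c3  \n"
    "mov oD0, v5         \n";

// Vertex stream declaration: v0 = position, v5 = diffuse colour.
DWORD g_Decl[] =
{
    D3DVSD_STREAM(0),
    D3DVSD_REG(D3DVSDE_POSITION, D3DVSDT_FLOAT3),
    D3DVSD_REG(D3DVSDE_DIFFUSE,  D3DVSDT_D3DCOLOR),
    D3DVSD_END()
};

// Assemble the shader text and hand it to the device; pHandle receives
// the handle later passed to SetVertexShader().
HRESULT CreateSimpleShader(IDirect3DDevice8* pDevice, DWORD* pHandle)
{
    ID3DXBuffer* pCode = NULL;
    HRESULT hr = D3DXAssembleShader(g_ShaderSrc, (UINT)strlen(g_ShaderSrc),
                                    0, NULL, &pCode, NULL);
    if (FAILED(hr))
        return hr;

    hr = pDevice->CreateVertexShader(g_Decl,
                                     (DWORD*)pCode->GetBufferPointer(),
                                     pHandle, 0);
    pCode->Release();
    return hr;
}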
Direct3D8.1 is the current graphics API of choice for many, and there are currently only two families of graphics cards designed to be Direct3D8.1 compatible - the ATI Radeon 8500 and the GeForce 4 Ti series (Ti being the chemical symbol for titanium). The ATI Radeon 8500 was reviewed last month; you can read that review here.
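As a rough sketch of how a developer can tell these parts apart at runtime, the shader versions a card's driver exposes can be read straight from the device caps - pixel shader 1.4 being the headline Direct3D8.1 addition (the Radeon 8500 reports 1.4, while the GeForce 4 Ti tops out at 1.3):

#include <stdio.h>
#include <d3d8.h>

int main()
{
    IDirect3D8* pD3D = Direct3DCreate8(D3D_SDK_VERSION);
    if (!pD3D)
        return 1;

    // Ask the HAL (hardware) device what shader versions it supports.
    D3DCAPS8 caps;
    if (SUCCEEDED(pD3D->GetDeviceCaps(D3DADAPTER_DEFAULT,
                                      D3DDEVTYPE_HAL, &caps)))
    {
        printf("Vertex shader version: %d.%d\n",
               (int)D3DSHADER_VERSION_MAJOR(caps.VertexShaderVersion),
               (int)D3DSHADER_VERSION_MINOR(caps.VertexShaderVersion));
        printf("Pixel shader version:  %d.%d\n",
               (int)D3DSHADER_VERSION_MAJOR(caps.PixelShaderVersion),
               (int)D3DSHADER_VERSION_MINOR(caps.PixelShaderVersion));
    }
    pD3D->Release();
    return 0;
}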
Chipset Overview
The GeForce 4's core chipset is quite an impressively powerful piece of equipment on paper, with large numbers in no short supply. The majority of the core components are based on the GeForce 3's architecture - the first of the programmable consumer 3D cards (think of it as the GeForce 4's parent). The following list is a run-down of the main features and technologies:
• Latest-generation Graphics Processing Unit (GPU), incorporating the latest Transform & Lighting (T&L) engine. Hardware T&L was introduced with Direct3D7's API specification, so it dramatically boosts almost all 3D games from D3D7 onwards (a device-creation sketch follows this list).
• nFiniteFX II - NVidia's implementation of shaders for advanced pixel-level and vertex-level effects.
• Dual vertex shader pipelines, making vertex shader execution considerably faster and more efficient.
• Z-Correct bump mapping, which clears up anomalies where two bump-mapped surfaces intersect.
• Accuview - anti-aliasing implemented in hardware; this newer method is designed not to hurt frame rates as much as before.
• nView - for those with two or more monitors, this software/hardware combination can display windows on multiple screens. If you're an artist or a programmer it is absolutely amazing how useful this can be: for example, you can run your game on one monitor and output debugging information on a second (and see both at the same time).
• Up to 128MB of video memory - an almost criminal amount of space for storing textures and geometry.
• Extremely high-performance memory architecture, 'Lightspeed Memory Architecture II' (LMA II), which uses a crossbar system to improve the efficiency of the memory controllers.
• 4:1 lossless Z compression improves Z-buffer read/write speeds, and early Z-rejection/occlusion culling helps reduce overdraw (improving fill rate and reducing bandwidth usage).
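As promised above, here is a minimal sketch of putting the hardware T&L engine to work by creating a Direct3D 8 device with hardware vertex processing. The window handle and presentation settings are illustrative:

#include <windows.h>
#include <d3d8.h>

// Create a device that runs transform & lighting on the GPU, falling
// back to software vertex processing for cards without a T&L engine.
IDirect3DDevice8* CreateTnLDevice(IDirect3D8* pD3D, HWND hWnd)
{
    // Match the back buffer to the current desktop display format.
    D3DDISPLAYMODE mode;
    pD3D->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode);

    D3DPRESENT_PARAMETERS pp;
    ZeroMemory(&pp, sizeof(pp));
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = mode.Format;

    IDirect3DDevice8* pDevice = NULL;
    if (FAILED(pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                  hWnd, D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                  &pp, &pDevice)))
    {
        // No hardware T&L (or creation failed) - let the CPU do the work.
        pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                           hWnd, D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                           &pp, &pDevice);
    }
    return pDevice;
}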
The GeForce 4 comes in two main versions - the 'MX' budget range and the fully-featured 'Ti' range. The board on review here is from the Ti family (albeit the slowest member). You cannot directly compare the two types - whilst there are obvious similarities (beyond the name!), the 'MX' family misses out on several of the features that make the GeForce 4 Ti series great, and various commentators (including John Carmack of Doom/Quake fame) have criticized NVidia for giving the 'MX' family the GeForce 4 branding.
There are three variations of the GeForce 4 Ti card: the 4200 (reviewed here), the 4400 and the 4600. Essentially they are the same chip with the same features, but with different clock speeds and memory bandwidths - i.e., the 4600 is faster than the 4200.
Feature           | GeForce 4 Ti4200     | GeForce 4 Ti4400     | GeForce 4 Ti4600
------------------|----------------------|----------------------|---------------------
Vertices/second   | 113 million          | 125 million          | 136 million
Fill rate         | 4bn AA samples/sec   | 4.4bn AA samples/sec | 4.8bn AA samples/sec
Operations/second | 1.03 trillion        | 1.12 trillion        | 1.23 trillion
Memory bandwidth  | 8GB/sec              | 8.8GB/sec            | 10.4GB/sec
Speed increase    | --                   | 10%                  | 20%
The above values are quoted from the NVidia website, and whilst it is almost certain that real-world performance will follow the same trends, they shouldn't be taken as benchmark ratings (those come later in this review). For example, 136 million vertices/sec is an impressive number, but it doesn't say what type of vertices were used, or what supporting system the figure was obtained on (if it even comes from a real-world test). The bottom line is this: if you do buy a Ti4600, don't necessarily expect to render 136 million vertices every second!
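A quick back-of-the-envelope illustration of how to read such figures: divide the quoted throughput by a target frame rate to get a theoretical per-frame vertex budget. The frame rates here are illustrative:

#include <stdio.h>

// Turn NVidia's quoted 136 million vertices/sec (Ti4600) into a
// per-frame budget at a few illustrative frame rates.
int main()
{
    const double quotedVerticesPerSecond = 136e6;
    const double frameRates[] = { 30.0, 60.0, 100.0 };

    for (int i = 0; i < 3; ++i)
    {
        printf("%5.0f fps -> %.1f million vertices per frame\n",
               frameRates[i],
               quotedVerticesPerSecond / frameRates[i] / 1e6);
    }
    return 0;
}

Even at a modest 30fps that works out to roughly 4.5 million vertices per frame - a theoretical ceiling no 2002-era game comes close to in practice.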
Click here to go straight to the next page, or select a page from the list:
• Introduction
• Installation, Benchmarks and Programming
• NVidia's Developer Relations, Conclusion