GeForce 4 MX 460
nVidia announced a new line with which the company opened another round of aggressive competition for the various niches of the video card market.

New budget solutions

Contrary to its usual practice, the company presented full and cut-down versions of the new chips at once; moreover, the cheap versions of the GeForce4 were the first to reach stores. Aiming at the most diverse segments of the video card market, nVidia split the entire GeForce4 series in two: the high-performance Titanium and the cheap MX. Each half was in turn divided into three parts: Titanium 4200, Titanium 4400 and Titanium 4600, and MX 420, MX 440 and MX 460. The MX-series cards were the first to test the ground for the GeForce4. For the first time, nVidia sent the weaker player into the arena first.

Differences from the old MX line

First of all, while the GeForce2 Pro and GeForce2 MX differed mainly in TwinView support and performance, the GeForce4 Ti and GeForce4 MX are two different video chips that differ not only in architectural features but also in the approach to their development and intended use. The GeForce4 MX was designed from the start for users who are not really into games, so all the engineers' attention went to more "peaceful" goals, for example, improving the technology for displaying images on two monitors.

The GeForce4 MX had two completely independent display controllers. Signals could be fed to two monitors, two flat panels, a flat panel and a monitor, or even a TV; the main thing is that they did not depend on each other. The two displays could run at different resolutions and different refresh rates. Video card manufacturers also no longer had to solder RAMDAC chips onto boards, because the GeForce4 MX had two 350 MHz RAMDACs integrated into it. This mattered because the low-quality secondary output on the GeForce2 MX had nullified the entire TwinView technology; now it did not matter which output of the video card you connected your monitor to. The TV-out interface was integrated into the chip as well, although it supported only PAL and NTSC, not SECAM. Finally, a TMDS transmitter for the DVI interface was built into the chip.

Advantages of the new chip

Compared to the previous second and third generations of GeForce, the NV17 had improved anti-aliasing algorithms, now based on multisampling (the GeForce2 used supersampling); the Quincunx AA method was carried over from the GeForce3 and improved, and a new mode, 4xS, was added. Lossless compression, Z-buffer clearing and hidden-surface culling were also improved.

A hardware MPEG-2 decoder was added to the GeForce4 MX. This feature was unique to the MX: the GeForce4 Titanium had no MPEG-2 decoder.

Architectural flaws

As for the shortcomings of the NV17, there were quite a few. The GeForce4 MX did not support pixel shaders, nor did it support EMBM in its pure form, as the GeForce3 did. Such a "deprivation" of the NV17 looked like vandalism on nVidia's part, given that every GeForce3 and GeForce4 Ti supported these functions. But again, remember that the MX was no longer aimed at gamers. The GeForce4 MX had only two rendering pipelines, while the GeForce4 Ti and GeForce3 had four each. The memory controller of the GeForce4 MX was dual-channel, versus the four-channel controller of the GeForce4 Ti. Anisotropic filtering remained at the GeForce2 level. In essence, the NV17 was a successful tuning of the GeForce2 with the addition of 2D and video "gadgets": an MPEG-2 decoder, improved dual-monitor support, a second 350 MHz RAMDAC and TV-out.

 

Specifications: NVIDIA GeForce4 MX 460

Name                         GeForce4 MX 460
Core                         NV17
Process technology (µm)      0.15
Transistors (million)        29
Core frequency (MHz)         300
Memory frequency (MHz, DDR)  275 (550 effective)
Memory bus and type          128-bit DDR
Bandwidth (GB/s)             8.8
Pixel pipelines              2
TMUs per pipeline            2
Textures per clock           4
Textures per pass            2
Vertex pipelines             None
Pixel Shaders                0.5 (emulation)
Vertex Shaders               1.1 (emulation)
Fill rate (Mpix/s)           600
Fill rate (Mtex/s)           1200
DirectX                      7.1
Anti-aliasing (max)          MS 4x
Anisotropic filtering (max)  2x
Memory                       64 / 128 MB
Interface                    AGP 4x
RAMDAC                       2 x 350 MHz
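The headline figures in the table follow directly from the clocks and pipeline counts. A minimal sketch of the arithmetic (all input values taken from the table above; the variable names are ours, for illustration only):

```python
# Recomputing the GeForce4 MX 460's theoretical figures from its specs.

CORE_MHZ = 300        # core frequency
MEM_MHZ = 275         # physical memory clock (DDR, so 550 MHz effective)
BUS_BITS = 128        # memory bus width
PIXEL_PIPES = 2       # pixel pipelines
TMUS_PER_PIPE = 2     # texture units per pipeline

# One pixel per pipeline per clock, one texel per TMU per clock.
pixel_fill = CORE_MHZ * PIXEL_PIPES                  # Mpix/s
texel_fill = CORE_MHZ * PIXEL_PIPES * TMUS_PER_PIPE  # Mtex/s

# DDR transfers data twice per clock; 8 bits per byte, 1000 MB per GB.
bandwidth = MEM_MHZ * 2 * BUS_BITS / 8 / 1000        # GB/s

print(pixel_fill)   # 600
print(texel_fill)   # 1200
print(bandwidth)    # 8.8
```

These match the 600 Mpix/s, 1200 Mtex/s and 8.8 GB/s quoted in the specifications.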

The NVIDIA GeForce4 MX had the following features:

The GeForce4 MX had a higher core and memory clock speed than the previous NVIDIA GeForce2 models, as well as the ATI RADEON 7500 (except for the MX 420).
The GeForce4 MX had two full-fledged 350 MHz RAMDACs integrated into the processor, unlike the GeForce2 MX, where on dual-headed boards the second RAMDAC was external (at 270 MHz).
The internal architecture of the GeForce4 MX is organized like that of the GeForce2 MX: 2 rendering pipelines with two texture units each. The RADEON 7500 in fact had a similar architecture when using 2 texture units per pipeline (which covers almost all games of the time). The results of the pipelines' work could not be accumulated, so there was no way to combine up to 4 textures in one pass, as on the NV20/NV25.
The GeForce4 MX lacked support for pixel shaders from the outset; their implementation, according to NVIDIA, was too expensive in terms of die area.
The GeForce4 MX was capable of executing vertex shaders, albeit not as efficiently as the NV20/NV25. The T&L unit of the latter consisted of two identical fully programmable blocks; it is quite possible that one such block was built into the NV17 as its base T&L. Hardware T&L was expected to give a noticeable effect, especially on the youngest MX 420 model compared to the previous GeForce2 MX 400.
Multisampling was unchanged compared to the GeForce3: the same 2x to 4x sampling, something no ATI product of the time was capable of.
The implementation of anisotropic filtering in the GeForce4 MX and RADEON 7500 differed significantly. Here the Canadian company's product had an obvious advantage: support for a "deeper" degree of anisotropy and wider options for adjusting it. Unfortunately, in this respect the GeForce4 MX was no different from its predecessors in the GeForce2 family, and mere double sampling is clearly not enough.
The GeForce4 MX did not support 3D textures.

The GeForce4 MX 460 was initially the top model in the family. It had the highest frequencies: a 300 MHz graphics chip and 275x2 MHz memory. The cards were equipped with 64 MB of DDR SDRAM/SGRAM memory in new TBGA packages. At its launch price, the GeForce4 MX 460 had to compete with the GeForce3 Ti 200 and Radeon 8500, catching up with them in performance but losing in functionality.

(Screenshot: The Elder Scrolls III - Morrowind)