Every gamer dreams of owning the most powerful system: one that runs every game flawlessly at the highest settings and delivers the ultimate virtual experience. Achieving that goal is tough, because you can't win a race against technology. No matter how expensive or advanced your system is, a better, more high-tech system will always come along later. What you can do is build a system with enough hardware headroom to run most of the games available now and in the near future, and then upgrade to newer technology once the system grows old and outdated.
But how do you build a professional gaming machine like a PRO?
A professional gaming machine must have –
- A high-power processor – Single-core systems are truly outdated; even phones and tablets ship with dual-core processors these days. For a gaming PC, a quad-core processor is the minimum worth considering. If possible, go for more than four cores; 6- or 8-core processors are better. Also check whether the processor has generous L1, L2, and L3 cache. Multi-core processors generally come with more cache, but you should still pick the higher-cache alternative: the more cache memory, the better the processor.
- Big, fast RAM – Physical memory is crucial for fast processing. Check the RAM's clock speed; the faster it is, the faster your system will be. First find out how much RAM your processor and motherboard can utilize, then go for the maximum if possible. Pick the RAM with the fastest clock speed, after checking motherboard and processor compatibility.
- A high-speed motherboard – You need a motherboard that can exploit the full power of your processor and RAM. Also check whether it supports SLI; SLI-capable boards are much better at utilizing the total hardware resources and give the best results. Verify processor and RAM compatibility as well.
- Graphics card – This is the most important component for getting the best out of your games. A decent graphics card is a must, but for a system like this, decent simply is not enough: a dual-GPU card with a high-end GPU is needed to reach the system's potential output. And in this case, two are better than one, so get two of them. Running mismatched graphics cards can make your system unstable, so for the sake of stability and compatibility, get two identical dual-GPU high-end graphics cards.
- Cabinet – A high-end, high-power system like this needs better cooling, because it gets hot very quickly. Without an efficient cooling system it will lag from temperature issues and might even fry itself. Invest in good cooling and a cabinet with plenty of space.
- Power supply unit – A system like this draws a lot of power; it is a hungry beast. You must have a good power supply unit to get the expected results out of all that potential.
- Other components are optional. Of course, with a high-end system like this you won't buy cheap monitors, sound cards, or other gear; go for at least the decent ones. Get an HD or 3D monitor, an HD sound card with at least a 5.1 speaker system, and a huge hard disk to store all of your HD movies and games.
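When sizing the power supply for a build like this, a rough budget helps. The sketch below is a hypothetical estimator: the wattage figures are illustrative placeholders, not measurements for any specific part, so check your actual components' TDP ratings.

```python
# Hypothetical power-budget sketch for a dual-GPU build.
# All wattages below are assumed example values, not real part specs.

def required_psu_watts(component_watts, headroom=0.3):
    """Sum component draw and add headroom for load spikes and PSU aging."""
    total = sum(component_watts.values())
    return int(total * (1 + headroom))

build = {
    "cpu": 130,          # quad/6-core CPU TDP (assumed)
    "gpu_1": 220,        # high-end graphics card (assumed)
    "gpu_2": 220,        # second card for SLI (assumed)
    "motherboard": 50,
    "ram": 15,
    "drives_fans": 40,
}

print(required_psu_watts(build))  # recommended minimum PSU rating in watts
```

The 30% headroom is a common rule of thumb, not a hard requirement; higher-efficiency units can run closer to their rating.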
Can I run two graphics cards in one single PC?
On an SLI-supported system, you can. In fact, a high-end gaming PC should have two graphics cards, as that gives better output and much faster results.
What is SLI?
Scalable Link Interface (SLI) is a brand name for a multi-GPU solution developed by NVIDIA for linking two or more video cards together to produce a single output. SLI is an application of parallel processing for computer graphics, meant to increase the processing power available for graphics.
The name SLI was first used by 3dfx under the full name Scan-Line Interleave, which was introduced to the consumer market in 1998 and used in the Voodoo2 line of video cards. After buying out 3dfx, NVIDIA acquired the technology but did not use it. NVIDIA later reintroduced the SLI name in 2004 and intended for it to be used in modern computer systems based on the PCI Express (PCIe) bus; however, the technology behind the name SLI has changed dramatically.
SLI allows two, three or four graphics processing units (GPUs) to share the workload when rendering a frame. Ideally, two cards using identical GPUs are installed in a motherboard that contains two PCI-Express slots, set up in a master-slave configuration. Both cards are given the same part of the 3D scene to render, but effectively half of the work load is sent to the master card through a connector called the SLI Bridge. As an example, the master card works on the top half of the scene while the slave card works on the bottom half. When the slave card is done, it sends its output to the master card, which combines the two images to form one and then outputs the final render to the monitor.
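The master/slave division described above can be illustrated with a toy simulation. This is purely illustrative (the names and the fixed 50/50 split are simplifications; real drivers balance the split dynamically and do the compositing in hardware):

```python
# Toy illustration of SLI split-frame rendering: the frame's scanlines
# are divided between a master and a slave GPU, and the master
# composites the halves before sending the result to the monitor.

def render_rows(gpu_name, rows):
    # Stand-in for actual rendering work: tag each row with its GPU.
    return [(row, gpu_name) for row in rows]

def render_frame_sli(height=8):
    split = height // 2                        # naive 50/50 split
    top = render_rows("master", range(0, split))
    bottom = render_rows("slave", range(split, height))
    # The slave sends its half to the master over the "SLI bridge";
    # the master combines both halves into the final frame.
    return top + bottom

frame = render_frame_sli()
print(frame[0], frame[-1])   # first row by master, last row by slave
```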
In its early implementations, motherboards capable of SLI required a special card (“paddle card”) which came with the motherboard. This card would fit into a socket usually located between both of the PCI-Express x16 slots. Depending on which way the card was inserted, the motherboard would either channel all 16 lanes into the primary PCI-Express x16 slot, or split lanes equally to both PCI-Express x16 slots (i.e. 8 lanes per slot). This was necessary as no motherboard at that time had enough PCI-Express lanes for both to have 16 lanes each. With the increase in available PCI-Express lanes, most modern SLI-capable motherboards allow each video card to use all 16 lanes in both PCI-Express x16 slots.
The SLI bridge is used to reduce bandwidth constraints and send data between both graphics cards directly. It is possible to run SLI without using the bridge connector on a pair of low-end to mid-range graphics cards (e.g. 7100GS or 6600GT) with NVIDIA’s Forceware drivers 80.XX or later. Since these graphics cards do not use as much bandwidth, data can be relayed through just the chipsets on the motherboard. However, if no SLI bridge is used on two high-end graphics cards, the performance suffers severely as the chipset does not have enough bandwidth.
SLI offers two rendering and one anti-aliasing method for splitting the work between the video cards:
- Split Frame Rendering (SFR), the first rendering method. This analyzes the rendered image in order to split the workload 50/50 between the two GPUs. To do this, the frame is split horizontally in varying ratios depending on geometry. For example, in a scene where the top half of the frame is mostly empty sky, the dividing line will lower, balancing geometry workload between the two GPUs. This method does not scale geometry or work as well as AFR, however.
- Alternate Frame Rendering (AFR), the second rendering method. Here, each GPU renders entire frames in sequence – one GPU processes even frames, and the second processes odd frames, one after the other. When the slave card finishes work on a frame (or part of a frame) the results are sent via the SLI bridge to the master card, which then outputs the completed frames. Ideally, this would result in the rendering time being cut in half, and thus performance from the video cards would double. In their advertising, NVIDIA claims up to 1.9x the performance of one card with the dual-card setup. While AFR may produce higher overall framerates than SFR, it may result in increased input latency due to the next frame starting rendering in advance of the frame before it. This is identical to the issue that was first discovered in the ATI Rage Fury MAXX board in 1999. This makes SFR the preferred SLI method for fast paced action games.
- SLI Antialiasing. This is a standalone rendering mode that offers up to double the antialiasing performance by splitting the antialiasing workload between the two graphics cards, offering superior image quality. One GPU performs an antialiasing pattern which is slightly offset to the usual pattern (for example, slightly up and to the right), and the second GPU uses a pattern offset by an equal amount in the opposite direction (down and to the left). Compositing both the results gives higher image quality than is normally possible. This mode is not intended for higher frame rates, and can actually lower performance, but is instead intended for games which are not GPU-bound, offering a clearer image in place of better performance. When enabled, SLI Antialiasing offers advanced antialiasing options: SLI 8X, SLI 16X, and SLI 32x (only available on newer, higher-end models starting with the 8800 series). A Quad SLI system is capable of up to SLI 64X antialiasing.
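The two work-splitting schemes above can be contrasted in a short sketch. This is not driver code, just a schematic of which GPU handles which work item under each mode (the fixed split row stands in for the dynamic geometry-based balancing):

```python
# Schematic contrast of the two SLI rendering modes described above.

def afr_schedule(num_frames):
    """Alternate Frame Rendering: GPUs take whole frames in turn."""
    return {f: ("GPU0" if f % 2 == 0 else "GPU1") for f in range(num_frames)}

def sfr_schedule(frame_height, split_row):
    """Split Frame Rendering: one frame divided horizontally at a row
    that real drivers adjust dynamically based on geometry load."""
    return {row: ("GPU0" if row < split_row else "GPU1")
            for row in range(frame_height)}

print(afr_schedule(4))      # frames 0,2 on GPU0; frames 1,3 on GPU1
print(sfr_schedule(6, 4))   # rows 0-3 on GPU0; rows 4-5 on GPU1
```

AFR's latency penalty follows directly from this layout: frame N+1 begins on the other GPU before frame N is displayed, so more frames are in flight at once.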
NVIDIA has created a set of custom video game profiles in cooperation with video game publishers that will automatically enable SLI in the mode that gives the largest performance boost. It is also possible to create custom game profiles or modify pre-defined profiles using their Coolbits software.
Two GPUs on one PCI-E slot
In February 2005, Gigabyte Technology released the GV-3D1, a single video card that uses NVIDIA’s SLI technology to run two 6600-series GPUs. Due to technical issues with compatibility, at release the card was supported by only one of Gigabyte’s own motherboards, with which it was bundled. Later came the GV-3D1-68GT, functionally similar and possessing similarly-limited motherboard compatibility, but with 6800 GPUs in place of the GV-3D1’s 6600 units.
Around March 2006, ASUS released the N7800GT Dual. Similar to Gigabyte’s design, it had two 7800GT GPUs mounted on one video card. Again, this faced several issues, such as pricing (it retailed for around US$800, while two separate 7800GTs were cheaper at the time), limited release, and limited compatibility. It would only be supported on the nForce4 chipset and only a few nForce4 chipset-based motherboards could actually utilize it. It was also one of the first video cards with the option to use an external power supply if needed.
In January 2006, NVIDIA released the 7900 GX2, their own attempt at a dual-GPU card. Effectively, this product is a pair of slightly lower clocked 7900GTX cards “bridged” together into one discrete unit, with separate frame buffers for both GPUs (512MB of GDDR3 each). The GeForce 7900 GX2 is only available to OEM companies for inclusion in quad-GPU systems, and it cannot be bought in the consumer market. The Dell XPS, announced at the 2006 Consumer Electronics Show, used two 7900 GX2’s to build a quad-GPU system. Later, Alienware acquired the technology in March.
The official implementations of dual-GPU graphics cards work in the same fashion. Two GPUs are placed on two separate printed circuit boards (PCBs), with their own power circuitry and memory. Both boards have slim coolers, cooling the GPU and memory. The ‘primary’ GPU can be considered to be the one on the rear board, or ‘top’ board (being on top when in a standard ATX system). The primary board has a physical PCIe x16 connector, and the other has a round gap in it to provide cooling for the primary HSF. Both boards are connected to each other by two physical links; one for 16 PCI-Express lanes, and one for the 400 MHz SLI bridge. An onboard PCI-Express bridge chip, with 48 lanes in total, acts as the MCP does in SLI motherboards, connecting to both GPUs and the physical PCI-Express slot, removing the need for the motherboard to support SLI.
A newer version, the GeForce 7950 GX2, which addressed many issues in the 7900 GX2, was available to consumers for separate purchase.
The GeForce 9800 GX2 was NVIDIA's next attempt at a multi-GPU solution, released in March 2008, this time using separate PCBs facing each other and sharing one large double-wide cooling fan. This GX2 could expand to a total of four GPUs when paired in SLI. The 9800 GX2 launched alongside the single-GPU 65 nm 9800 GTX. Three months later, with the 9800 GX2 selling at $299 and the GTX 260 and the improved 55 nm 9800 GTX+ becoming available, NVIDIA found its product line competing with itself. NVIDIA elected to move on to the GTX 200 series and later lineups rather than expand the 55 nm G92 into a GX2 form factor, leaving mid-range audiences with the options of the 9800 GT and 9800 GTX+.
In January 2009, the new GTX 200 series based GeForce GTX 295 was released. It combines two underclocked 55 nm GeForce GTX 275 GPUs in a similar sandwich design, with two graphics PCBs facing each other and a large double-wide cooling fan in between, but with all the GDDR3 RAM modules on the same half of each board as the corresponding GPU; a feature that neither the initial GTX 200 boards nor the 9800 GX2 board had. It manages to maintain the same number of shaders as the GTX 280/285, bringing it to a total of 480 shader units. A second version of the GTX 295 has been produced, this time using a single PCB and a dual-slot cooler.
Nvidia introduced its own new flagship video card, the GeForce GTX 590 – Nvidia’s first dual-GPU video card since the GTX 295 in early 2009. The GTX 590 unites a pair of GF110 GPUs (similar to the ones used in the GTX 580, the fastest single-GPU card on the market) on a single card. This translates to 1,024 CUDA processing cores, 128 texture units, 96 ROP units, and 32 tessellation engines. The card’s graphics clock runs at 607 MHz, its processor clock at 1,215 MHz, and its memory clock at 3,414 MHz. It is loaded with 3,072MB of GDDR5 memory for the frame buffer, which operates over a 384-bit memory interface.
In early 2006, NVIDIA revealed its plans for Quad SLI. When the 7900GX2 was originally demonstrated, it was with two such cards in a SLI configuration. This is possible because each GX2 has two extra SLI connectors, separate from the bridges used to link the two GPUs in one unit – one on each PCB, two per GPU, for a total of two links per GPU. When two GX2 graphics cards are installed in a SLI motherboard, these SLI connectors are bridged using two separate SLI bridges. (In such a configuration, if the four PCBs were labeled A, B, C, D from top to bottom, A and C would be linked by an SLI bridge, as would B and D.) This way, four GPUs can contribute to performance. The 7950GX2, sold as an enthusiast-friendly card, omits the external SLI connector on one of its PCBs, meaning that only one SLI bridge is required to run two 7950GX2s in SLI.
Quad SLI did not show any massive improvements in gaming at the common resolutions of 1280×1024 and 1600×1200, but it has shown improvements by enabling 32x anti-aliasing in SLI-AA mode, and it supports 2560×1600 at much higher framerates than is possible with single- or dual-GPU systems at maximum settings in modern games. It was believed that high latencies severely marginalized the benefits of four GPUs; however, much of the blame for poor performance scaling lies with Windows XP's API, which only allows a maximum of 3 extra frames to be stored. Windows Vista and Windows 7 are not limited in this fashion and show promise for future multi-GPU configurations.
In March 2008, NVIDIA released the GeForce 9800 GX2 GPU. Targeted at high-end gaming, the 9800 GX2 is essentially two updated and slightly underclocked G92 8800GTS cores on a dual-PCB graphics card to compete with ATI’s HD 3870×2. Though NVIDIA did not release Quad SLI drivers for the 9800 GX2 at time of release, the telltale SLI connector on the top of the card leaves little doubt that users in the future will be able to equip themselves with two 9800GX2s, thus allowing for a total of 4 GPUs in one system via only 2 PCI Express x16 graphics slots, a feat impossible since the 7950GX2. Note that NVIDIA no longer supports Quad SLI on Windows XP (NVIDIA will automatically prevent you from using two 9800GX2s without Windows Vista.)
Currently, the only GPUs that can support 4-way SLI are the GTX 480 and GTX 580, and they must be used on an Intel X58 or 5520 chipset combined with an NVIDIA nForce 200 chip to provide extra PCIe lanes.
NVIDIA has revealed a triple SLI setup for the nForce 700 series motherboards, which only works on Windows Vista. The Intel X58 chipset also implements 3-way SLI using an additional NF200 component based on nForce 700. The setup can be achieved using three high-end video cards with two MIO ports and a specially wired connector (or three flexible connectors used in a specific arrangement). The technology was officially announced in December 2007, shortly after the revised G92-based 8800 GTS made its way out of the factory. In practical terms, it delivers up to a 2.8x performance increase over a single-GPU system.
3-way SLI is possible using all GeForce GTX cards except the GTX 295 and GTX 460 (i.e. the 580, 570, 480, 470, 465, 285, 280, 275, and 260). The GeForce GTX 295 does not support 3-way SLI but does support 4-way SLI and Multi-GPU. The GTX 460 was rumoured to support 3-way SLI, but on release it was found to carry only the single-tooth connector for a 2-way SLI bridge. This is because the 460 was not meant to replace the 260; it was intended to push up the rest of NVIDIA's product line and make the GTX 465 more appealing. Due to heat issues, however, the GTX 465 was considered a flop, and the mass of gamers stuck with 2-way SLI GTX 460 setups.
Unlike traditional SLI or CrossFireX, 3-way SLI was limited to the GeForce 8800 GTX, 8800 Ultra, and 9800 GTX, joined in June 2008 by the GTX 260, GTX 280, and 9800 GTX+, and later the GTX 275, 285, 465, 470, and 480, on the 680i, 780i, and 790i chipsets as well as certain P55 and X58 chipsets. CrossFireX, by contrast, can theoretically be used across multiple ATI (now AMD) Radeon cards (up to 4 GPUs, which must have the same core irrespective of product binning).
The NVIDIA Quadro Plex Visual Computing System is an external graphics processing unit designed for large-scale 3D visualizations. The system consists of a box containing a pair of high-end NVIDIA graphics cards featuring a variety of external video connectors. A special PCI Express card is installed in the host computer, and the two are connected by VHDCI cables.
The NVIDIA Quadro Plex system supports up to four GPUs per unit. It connects to the host PC via a small form factor PCI Express card connected to the host, and a 2 meter (6.5 foot) NVIDIA Quadro Plex Interconnect Cable. The system is housed in an external case that is approximately 9.5 inches high, 6 inches wide, and 20.6 inches in depth and weighs about 19 pounds. The system relies heavily on NVIDIA’s SLI technology.
In response to ATI offering a discrete physics calculation solution in a tri-GPU system, NVIDIA announced a partnership with physics middleware company Havok to incorporate a similar system using a similar approach. Although this would eventually become the Quantum Effects technology, many motherboard companies began producing boards with three PCI-Express x16 slots in anticipation of this implementation being used.
In February 2008, NVIDIA acquired physics hardware and software firm Ageia, with plans to increase the market penetration for PhysX beyond its fairly limited use in games; notably Unreal Engine 3. In July 2008, NVIDIA released a beta PhysX driver supporting GPU acceleration, followed by an official launch on August 12, 2008. This allows PhysX acceleration on the primary GPU, a different GPU, or on both GPUs in SLI.
In January 2009 Mirror’s Edge became the first major PC game title to add NVIDIA PhysX to enhance visual effects in-game and add gameplay elements.
Also in response to the PowerXpress technology from AMD, a configuration of similar concept named "Hybrid SLI" was announced on January 7, 2008. The setup consists of an IGP as well as a GPU on an MXM module. The IGP assists the GPU to boost performance when the laptop is plugged into a power socket, while the MXM module is shut down when the laptop is unplugged to lower overall graphics power consumption.
Hybrid SLI is also available on desktop Motherboards and PCs with PCI-E discrete video cards. NVIDIA claims that twice the performance can be achieved with a Hybrid SLI capable IGP motherboard and a GeForce 8400 GS video card.
On November 5, 2008 in Microsoft’s Guidelines for Graphics in Windows 7 document, Microsoft stated that Windows 7 will not offer native support for hybrid graphics systems. Microsoft added the reason for the decision saying that hybrid graphics systems ‘can be unstable and provide a poor user experience,’ and that it would ‘strongly discourage system manufacturers from shipping such systems.’ Microsoft also added that ‘such systems require a reboot to switch between GPUs.’
On desktop systems, the motherboard chipsets nForce 720a, 730a, 750a SLI, 780a SLI, and 980a SLI and the motherboard GPUs GeForce 8100, 8200, 8300, and 9300 support Hybrid SLI (GeForce Boost and HybridPower). The GPUs GeForce 8400 GS and 8500 GT support GeForce Boost; the GPUs 9800 GT, 9800 GTX, 9800 GTX+, 9800 GX2, GTX 260, and GTX 280 support HybridPower. Nevertheless, most cards commonly available today do not support HybridPower, as card manufacturers do not place the necessary PIC16F690 on the graphics PCB. Although all users can switch modes via the tray icon, only a handful of card types can have their power switched off completely.
There is kernel-level support in Linux as of 2.6.34, but the user tools needed to actually use the feature are fairly primitive at this time.
- In an SLI configuration, cards can be of mixed manufacturers, card model names, BIOS revisions or clock speeds. However, they must be of the same GPU series (e.g. 8600, 8800) and GPU model name (e.g. GT, GTS, GTX). There are rare exceptions for “mixed SLI” configurations on some cards that only have a matching core codename (e.g. G70, G73, G80, etc.), but this is otherwise not possible, and only happens when two matched cards differ only very slightly, an example being a differing amount of video memory, stream processors, or clockspeed. In this case, the slower/lesser card becomes dominant, and the other card matches. Another exception is the GTS 250, which can SLI with the 9800 GTX+, as the GTS 250 GPU is a rebadged 9800 GTX+ GPU.
- In cases where two cards are not identical, the fastest card – or the card with more memory – will run at the speed of the slower card or disable its additional memory. (Note that while the FAQ still claims different memory size support, the support has been removed since revision 100.xx of NVIDIA’s Forceware driver suite.)
- SLI doesn’t always give a performance benefit – in some extreme cases, it can lower the frame rate due to the particulars of an application’s coding. This is also true for ATI’s CrossFire, as the problem is inherent in multi-GPU systems. This is often witnessed when running an application at low resolutions.
- In order to use SLI, a motherboard with an nForce4, nForce 500, nForce 600, or nForce 700 SLI chipset must be used, although with the use of hacks one can make SLI work on motherboards with Intel, ATI, and ULi chipsets. NVIDIA has stated that only its own chipsets allow SLI to function optimally, and that it will not allow SLI to work on any other vendor's chipsets. Some early SLI systems used Intel's E7525 Xeon chipset, which caused problems when NVIDIA started locking out other vendors' chipsets, as it limited them to an outdated driver set. In 2007, Intel licensed NVIDIA's SLI technology for its SkullTrail platform, and select motherboards supporting the Intel X58 (Tylersburg) chipset have unlocked SLI capabilities. Not all X58 motherboards support this technology, as NVIDIA offered it to motherboard manufacturers at a cost of $5 per motherboard sold. NOTE: As of the release of the AMD 900 chipset series, SLI can also be run on an AMD mainboard provided that it has the 970, 990X, or 990FX chipset on the board.
- Vsync + Triple buffering is not supported in some cases in SLI AFR mode.
- Users with a Hybrid SLI setup must manually switch between HybridPower and GeForce Boost modes; automatic mode switching will not be available until future updates arrive. Hybrid SLI currently supports only single-link DVI at 1920×1200 screen resolution.
- When using SLI with AFR, the subjective framerate can often be lower than the framerate reported by benchmarking applications, and may even be poorer than the framerate of its single-GPU equivalent. This phenomenon is known as micro stuttering and also applies to CrossFire since it’s inherent to multi-GPU configurations.
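The pairing rules in the first two caveats above can be sketched as a small compatibility check. The `Card` type and the sample values are invented for illustration; the rules encoded are the ones the list states (same series and model name required; mismatched clocks or memory fall back to the lesser card's values):

```python
# Hedged sketch of the SLI pairing rules listed above.
from dataclasses import dataclass

@dataclass
class Card:
    series: str      # e.g. "8800"
    model: str       # e.g. "GT", "GTS", "GTX"
    clock_mhz: int
    memory_mb: int

def can_pair(a, b):
    """Manufacturer, BIOS, and clocks may differ, but series and model must match."""
    return a.series == b.series and a.model == b.model

def effective_pair(a, b):
    """The pair runs at the slower card's clock and the smaller memory size."""
    if not can_pair(a, b):
        raise ValueError("cards are not SLI-compatible")
    return Card(a.series, a.model,
                min(a.clock_mhz, b.clock_mhz),
                min(a.memory_mb, b.memory_mb))

fast = Card("8800", "GTS", 650, 640)   # sample values, not real specs
slow = Card("8800", "GTS", 600, 512)
print(effective_pair(fast, slow))      # pair runs at 600 MHz with 512 MB
```

Special cases like the GTS 250 / 9800 GTX+ rebadge would need an explicit exception table on top of this rule; the sketch covers only the general case.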