Computer parameters affecting operating speed. Impact of memory settings on system performance

The most basic factor affecting the speed of a computer is its hardware: how fast a PC runs depends on what hardware is installed in it.

CPU

The CPU can be called the heart of the computer. Many people are sure that the main parameter affecting the speed of a PC is the clock frequency, and this is correct, but only partly.

Of course, the number of gigahertz is important, but the number of cores also plays an important role. Without going into too much detail: the higher the frequency and the more cores, the faster your computer.

RAM

Again, the more gigabytes of this memory, the better. Random access memory (RAM) is temporary storage where program data is kept for quick access. When the PC is shut down, its contents are erased; this memory is volatile.

And here there are some nuances. Many people, chasing sheer capacity, install a pile of memory modules from different manufacturers with different parameters and do not get the expected effect. To maximize the performance gain, install modules with identical characteristics.

This memory also has a clock speed, and the higher it is, the better.

Video adapter

A video adapter can be discrete or integrated. An integrated adapter is built into the motherboard and its capabilities are modest; they are only enough for regular office work.

If you plan to play modern games or use graphics-processing programs, you need a discrete video card; it will raise the performance of your PC. This is a separate board that plugs into a dedicated slot on the motherboard.

Motherboard

It is the largest board in the case. The performance of the entire computer depends on it directly, since all components are either located on it or connected to it.

HDD

This is the storage device where all our files, installed games, and programs are kept. Drives come in two types: HDD and SSD. SSDs are much faster, consume less energy, and are silent. HDDs have their own parameters that affect PC performance: spindle speed and capacity. Again, the higher they are, the better.

Power supply

It must supply sufficient power to all PC components; otherwise, performance will drop significantly.

Software parameters

Also, the speed of your computer is affected by:

  • The state of the installed operating system.
  • The OS version.

The installed OS and software must be properly configured and free of viruses; then performance will be excellent.

Of course, from time to time it is worth reinstalling the system and all software to make the computer run faster. You should also keep software up to date, because old versions may run slowly due to the bugs they contain. Utilities that clean the system of junk can also help maintain performance.

You can download the presentation for the lecture.

Simplified processor model

Additional Information:

The prototype of this scheme is, in part, the von Neumann architecture, which is based on the following principles:

  1. The principle of binary coding
  2. Program control principle
  3. The principle of memory homogeneity
  4. The principle of memory addressability
  5. Sequential program control principle
  6. Conditional jump principle

To make it easier to understand what a modern computing system is, we should consider it in its development. Therefore, I start with the simplest diagram that comes to mind; in essence, a simplified model. Inside the processor we have a control unit, an arithmetic logic unit, and system registers, plus a system bus that allows communication between the processor, memory, and peripheral devices. The control unit receives instructions, decodes them, controls the arithmetic logic unit, and transfers data between the processor registers, memory, and peripheral devices.

Simplified processor model

  • control unit (Control Unit, CU)
  • arithmetic and logic unit (ALU)
  • system registers
  • system bus (Front Side Bus, FSB)
  • memory
  • peripherals

Control Unit (CU):

  • decodes instructions coming from the computer's memory.
  • controls the ALU.
  • transfers data between CPU registers, memory, and peripheral devices.

Arithmetic logic unit:

  • allows you to perform arithmetic and logical operations on system registers.

System registers:

  • a specific area of memory within the CPU used for intermediate storage of information processed by the processor.

System bus:

  • used to transfer data between the CPU and memory, and between the CPU and peripheral devices.

The arithmetic logic unit consists of various electronic components that perform operations on the system registers. System registers are areas of memory inside the CPU used to store intermediate results processed by the processor. The system bus is used to transfer data between the CPU and memory, and between the CPU and peripheral devices.

High performance of the MP (microprocessor) is one of the key factors in the competition between processor manufacturers.

The performance of a processor is directly related to the amount of work or calculations it can perform per unit of time.

Very roughly:

Performance = Number of instructions / Time
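
As a toy illustration of this comparative definition (all numbers here are invented for the example, not measurements), the same calculation in code:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical example: the same workload of 3.0e9 instructions
       executed on two processors. The one that finishes sooner has the
       higher performance (instructions per second). */
    double instructions = 3.0e9;
    double time_a = 1.50;   /* seconds on processor A (assumed) */
    double time_b = 1.20;   /* seconds on processor B (assumed) */

    printf("A: %.2e instr/s\n", instructions / time_a);
    printf("B: %.2e instr/s\n", instructions / time_b);
    printf("B is %.0f%% faster\n", (time_a / time_b - 1.0) * 100.0);
    return 0;
}
```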

We will consider the performance of processors based on the IA32 and IA32e architectures (IA32 with EM64T).

Factors affecting processor performance:

  • Processor clock speed.
  • Addressable memory volume and external memory access speed.
  • Execution speed and instruction set.
  • Use of internal memory and registers.
  • Pipelining quality.
  • Prefetch quality.
  • Superscalarity.
  • Availability of vector instructions.
  • Multi-core.

What is performance? It is difficult to give a strict definition. You can formally tie it to the processor: how many instructions a particular processor can execute per unit of time. But it is easier to give a comparative definition: take two processors, and the one that executes a given set of instructions faster is the more performant one. Very roughly, then, performance is the number of instructions divided by the execution time. Here we will mainly examine the microprocessor architectures that Intel produces, that is, the IA32 architectures now called Intel 64. These architectures, on the one hand, support the old IA32 instruction set and, on the other hand, include EM64T, an extension that allows the use of 64-bit addresses (i.e., larger amounts of addressable memory) and adds some useful features such as an increased number of system registers and vector registers.

What factors influence performance? Let's list everything that comes to mind:

  • Speed of instruction execution, completeness of the basic set of instructions.
  • Using internal register memory.
  • Pipelining quality.
  • Branch prediction quality.
  • Prefetch quality.
  • Superscalarity.
  • Vectorization, the use of vector instructions.
  • Parallelization and multi-core.

Clock frequency

The processor consists of components that operate independently of one another, and it has a clock generator that keeps them synchronized by sending periodic pulses. The frequency of these pulses is called the processor clock speed.

Addressable memory capacity

Clock frequency.

Since the processor contains many different electronic components that work independently, a clock generator sends a clock pulse to synchronize their work, so that each component knows when to start working, when to do its work, and when to wait. The frequency at which the clock pulse is sent is the processor's clock frequency. Some devices manage to perform two operations per pulse; nevertheless, the processor's operation is tied to this clock pulse, and we can say that by increasing the frequency we force all these circuits to work harder and idle less.

Addressable memory volume and memory access speed.

Memory size: there must be enough memory for our program and our data. EM64T technology allows a huge amount of memory to be addressed, so today a shortage of addressable memory is not an issue.

Since developers generally do not have the ability to influence these factors, I only mention them.

Execution speed and instruction set

Performance depends on how well the instructions are implemented and how completely the basic set of instructions covers all possible tasks.

CISC, RISC (complex / reduced instruction set computing)

Modern Intel® processors are a hybrid of CISC and RISC processors that convert CISC instructions into a simpler set of RISC instructions before execution.

Speed of instruction execution and completeness of the basic instruction set.

Essentially, when architects design processors, they constantly work to improve performance. One of their tasks is to collect statistics and determine which instructions or instruction sequences are key for performance. Trying to improve performance, architects make the hottest instructions faster and, for some instruction sequences, introduce a special instruction that replaces the whole sequence and works more efficiently. Instruction characteristics change from architecture to architecture, and new instructions appear that allow better performance. In other words, the basic instruction set is constantly improved and extended from one architecture to the next. But if you do not specify which architectures your program will run on, the application will use a default instruction set supported by all recent microprocessors. We can therefore achieve the best performance only if we explicitly specify the microprocessor on which the task will run.

Using registers and RAM

Register access time is the shortest, so the number of available registers affects the performance of the microprocessor.

Register spilling – due to an insufficient number of registers, there is a large exchange between registers and the application stack.

With the increase in processor performance, a problem arose that the speed of access to external memory became lower than the speed of calculations.

There are two characteristics to describe memory properties:

  • Response time (latency) – the number of processor cycles required to transfer a unit of data from memory.
  • Bandwidth - the number of data elements that can be sent to the processor from memory in one cycle.

Two possible strategies for speeding up performance are reducing response time or proactively requesting the required memory.
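
The following back-of-the-envelope sketch (with purely illustrative numbers, not taken from any datasheet) shows why the second strategy pays off: if requests are issued ahead of time, the latency is paid roughly once instead of on every access, and the total cost approaches the bandwidth limit.

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers only (not from any datasheet). */
    double latency_cycles = 200.0;  /* cycles to fetch one line from RAM      */
    double lines_per_cycle = 0.25;  /* bandwidth: lines deliverable per cycle  */
    double n_lines = 1000.0;        /* how many cache lines the loop touches   */

    /* Worst case: every access waits for the full latency. */
    double serial = n_lines * latency_cycles;

    /* Best case: requests are issued early (prefetch), so after the first
       latency the transfers are limited only by bandwidth. */
    double overlapped = latency_cycles + n_lines / lines_per_cycle;

    printf("serial     : %.0f cycles\n", serial);
    printf("overlapped : %.0f cycles\n", overlapped);
    return 0;
}
```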

Use of registers and RAM.

Registers are the fastest elements of memory, they are located directly on the core, and access to them is almost instantaneous. If your program is doing some calculations, you would want all intermediate data to be stored in registers. It is clear that this is impossible. One possible performance issue is the issue of register eviction. When you look at the assembly code under some kind of performance analyzer, you see that you have a lot of movement from the stack to registers and back, unloading registers onto the stack. The question is how to optimize the code so that the hottest addresses, the hottest intermediate data, are located in the system registers.

The next part of memory is regular RAM. As processor performance has increased, it has become clear that the biggest performance bottleneck is access to RAM. In order to get to the RAM, you need a hundred, or even two hundred processor cycles. That is, by requesting some memory cell in RAM, we will wait two hundred clock cycles, and the processor will be idle.

There are two characteristics that describe memory properties: the response time (latency), i.e. the number of processor cycles required to transfer a unit of data from memory, and the bandwidth, i.e. how many data elements can be sent to the processor from memory in one cycle. Having run into the problem that memory access is our bottleneck, we can attack it in two ways: either by reducing the response time, or by requesting the required memory ahead of time. That is, we are not interested in the value of some variable right now, but we know we will need it soon, and we request it in advance.

Caching

Cache memory is used to reduce data access time.

To achieve this, blocks of RAM are mapped to faster cache memory.

If the memory address is in the cache, a “hit” occurs and the speed of data acquisition increases significantly.

Otherwise – “cache miss”

In this case, a block of RAM is read into the cache in one or more bus cycles, called a cache line fill.

The following types of cache memory can be distinguished:

  • fully associative cache (each block can be mapped to any location in the cache)
  • direct-mapped cache (each block can be mapped to exactly one location)
  • hybrid options (sectored cache, set-associative cache)

Set-associative access: the low-order bits of the address determine the cache set to which a given memory block can be mapped, but that set can hold only a few blocks of main memory, and the choice among them is made associatively.

The quality of cache use is a key condition for performance.

Additional Information: In modern IA32 systems, the cache line size is 64 bytes.
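
To make the mapping concrete, here is a small sketch of how an address is split into offset, set index, and tag. The 64-byte line size comes from the note above; the 8-way associativity and 32 KB total size are assumptions chosen only for the example.

```c
#include <stdio.h>
#include <stdint.h>

#define LINE_SIZE   64          /* bytes per cache line (as stated above)   */
#define WAYS        8           /* associativity (assumed for illustration) */
#define CACHE_SIZE  (32 * 1024) /* total cache size in bytes (assumed)      */
#define NUM_SETS    (CACHE_SIZE / (LINE_SIZE * WAYS))  /* 64 sets here      */

int main(void) {
    uint64_t addr = 0x7ffd1234abcdULL;  /* arbitrary example address */

    uint64_t offset = addr % LINE_SIZE;              /* byte within the line */
    uint64_t set    = (addr / LINE_SIZE) % NUM_SETS; /* which set to search  */
    uint64_t tag    = addr / (LINE_SIZE * NUM_SETS); /* identifies the block */

    /* On a lookup only the WAYS tags stored in this one set are compared,
       instead of the whole cache (fully associative) or exactly one line
       (direct-mapped). */
    printf("offset=%llu set=%llu tag=%#llx\n",
           (unsigned long long)offset,
           (unsigned long long)set,
           (unsigned long long)tag);
    return 0;
}
```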

Reducing the access time was achieved by introducing cache memory. Cache memory is a buffer memory located between RAM and the microprocessor. It is implemented on the core, so access to it is much faster than to conventional memory, but it is also much more expensive, so when designing a microarchitecture one has to find a precise balance between price and performance. If you look at descriptions of processors on sale, you will see that they always state how much cache memory of each level the processor has; this figure seriously affects the price. Cache memory is designed so that regular memory is mapped onto it in blocks. When you request some address in RAM, you first check whether that address is present in the cache. If it is, you save time on the memory access: you read the data from fast memory and the response time is significantly reduced. If the address is not in the cache, you must go to regular memory, and the block containing the needed address is then mapped into the cache.

There are different implementations of cache memory. There is a fully associative cache, where each block can be mapped to any location in the cache. There is direct-mapped memory, where each block can be mapped to only one place, and there are various hybrid options, for example a set-associative cache. What is the difference? The difference is in the time and complexity of checking whether the desired address is present in the cache. Suppose we need a specific address. In a fully associative cache, we have to check the entire cache to make sure the address is not there. With direct mapping, we only need to check one cell. With hybrid variants, for example a set-associative cache, we need to check, say, four or eight cells. That is, the task of determining whether an address is in the cache is also important. The quality of cache use is an important condition for performance: if we can write a program so that the data we are working with is in the cache as often as possible, the program will run much faster.

Typical response times when accessing cache memory for Nehalem i7:

  • L1 - latency 4 cycles
  • L2 - latency 11 cycles
  • L3 - latency 38 cycles

Response time for RAM: more than 100 cycles
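
Using the latencies listed above, a simple weighted estimate of the average memory access time shows why cache hit rates matter so much. The hit rates in the sketch are assumptions for the sake of the example, not Nehalem measurements, and the model deliberately ignores many details (overlapping misses, prefetching, TLBs).

```c
#include <stdio.h>

int main(void) {
    /* Latencies in cycles, taken from the list above. */
    double l1 = 4, l2 = 11, l3 = 38, ram = 100;

    /* Hit rates are illustrative assumptions, not measured values. */
    double h1 = 0.90, h2 = 0.06, h3 = 0.03, hm = 0.01;

    /* Simplified model: each access pays the latency of the level
       where it finally hits. */
    double amat = h1 * l1 + h2 * l2 + h3 * l3 + hm * ram;
    printf("average access time: %.2f cycles\n", amat);
    return 0;
}
```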

Pre-emptive memory access is implemented using a hardware prefetching mechanism.

There is also a special set of instructions that lets you prompt the processor to load the memory at a specific address into the cache (software prefetching).
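
A minimal sketch of software prefetching using the _mm_prefetch intrinsic that GCC, Clang, and MSVC expose for x86 (the prefetch distance of 16 elements is an arbitrary assumption that would need tuning; for a simple sequential scan like this the hardware prefetcher would usually do the job on its own):

```c
#include <stdio.h>
#include <stddef.h>
#include <xmmintrin.h>   /* _mm_prefetch, _MM_HINT_T0 (x86 SSE intrinsic) */

/* Sum an array while hinting the CPU to pull upcoming elements into cache. */
static double sum_with_prefetch(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; ++i) {
        if (i + 16 < n)  /* prefetch ~16 elements ahead (assumed distance) */
            _mm_prefetch((const char *)&a[i + 16], _MM_HINT_T0);
        s += a[i];
    }
    return s;
}

int main(void) {
    double data[1024];
    for (size_t i = 0; i < 1024; ++i) data[i] = (double)i;
    printf("sum = %.0f\n", sum_with_prefetch(data, 1024));
    return 0;
}
```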

For example, let's take our latest Nehalem processor: i7.

Here we have not just a cache but a hierarchical cache. For a long time it was two-level; in the modern Nehalem system it is three-level: a small amount of very fast first-level cache, somewhat more second-level cache, and a fairly large third-level cache. Moreover, the system is built so that if an address is in the first-level cache, it is also present in the second and third levels. This is a hierarchical system. For the first-level cache the latency is 4 clock cycles, for the second 11, for the third 38, and the RAM response time is more than 100 processor cycles.

Debunking myths about video card performance | Defining the concept of performance

If you are a car enthusiast, you have probably argued with your friends more than once about the capabilities of two sports cars. One of the cars may have more horsepower, higher speed, less weight, and better handling. But very often the debate is limited to comparing Nurburgring lap speeds and always ends with someone from the group spoiling all the fun by reminding that none of the disputants will be able to afford the cars in question anyway.

A similar analogy can be drawn with expensive video cards. We have average frame rates, jittery frame times, noisy cooling systems, and a price that in some cases can be double the cost of modern gaming consoles. And to be more convincing, the design of some modern video cards uses aluminum and magnesium alloys, almost like racing cars. Alas, there are differences. Despite all the attempts to impress the girl with the new graphics processor, rest assured that she likes sports cars more.

What is the equivalent lap speed for a video card? What factor differentiates winners and losers at equal value? This is clearly not an average frame rate, and evidence of this is the presence of frame time fluctuations, tearing, stuttering and fans whirring like a jet engine. In addition, there are other technical characteristics: texture rendering speed, computing performance, memory bandwidth. What is the significance of these indicators? Will I have to play with headphones due to the unbearable noise of the fans? How to take into account overclocking potential when evaluating a graphics adapter?

Before we delve into the myths about modern video cards, we first need to understand what performance is.

Performance is a set of indicators, not just one parameter

Discussions about GPU performance often come down to the general notion of frame rate, or FPS. In practice, the concept of video card performance includes many more parameters than just the rate at which frames are rendered, so it is easier to treat it as a complex of metrics rather than a single value. This complex has four main aspects: speed (frame rate, frame latency, and input lag), picture quality (resolution and image quality settings), silence (acoustic efficiency, taking into account power consumption and cooler design), and, of course, affordability.

There are other factors that influence the value of a video card: for example, the games included in the package, or exclusive technologies used by a certain manufacturer. We will look at them briefly. Although in reality the value of CUDA, Mantle and ShadowPlay support largely depends on the needs of the individual user.

The chart shown above illustrates the position of the GeForce GTX 690 with respect to the factors we have described. In the standard configuration, the graphics accelerator in the test system (described in a separate section) reaches 71.5 FPS in the Unigine Valley 1.0 test in ExtremeHD mode. The card generates a noticeable but not disturbing noise level of 42.5 dB(A). If you are willing to put up with noise at 45.5 dB(A), you can safely overclock the chip and achieve a stable 81.5 FPS in the same mode. Lowering the resolution or the level of anti-aliasing (which affects quality) results in a significant increase in frame rate, holding the remaining factors constant (including the already high price of $1000).

In order to ensure a more controlled testing process, it is necessary to define a benchmark for video card performance.


MSI Afterburner and EVGA PrecisionX are free utilities that allow you to manually adjust the fan speed and, as a result, adjust the noise level.

For today's article, we defined performance as the number of frames per second that a graphics card can output at a selected resolution within a specific application (and when the following conditions are met):

  • Quality settings are set to maximum values (usually Ultra or Extreme).
  • The resolution is set to a constant level (usually 1920x1080, 2560x1440, 3840x2160 or 5760x1080 pixels in a three-monitor configuration).
  • The drivers are configured to the manufacturer's standard parameters (both in general and for a specific application).
  • The graphics card operates in a closed case at a noise level of 40 dB(A), measured at a distance of 90 cm from the case (ideally tested within a reference platform that is updated annually).
  • The video card operates at an ambient temperature of 20 °C and a pressure of one atmosphere (this is important because it directly affects the operation of thermal throttling).
  • The core and memory operate below the thermal throttling point, so that the core frequency/temperature under load remains stable or varies within a very narrow range while the noise level (and therefore the fan speed) stays constant at 40 dB(A).
  • The 95th percentile of frame time variation is less than 8 ms, which is half the frame time of a standard 60 Hz display (see the sketch after this list).
  • The card runs at or around 100% GPU load (this is important to demonstrate that there are no bottlenecks in the platform; if there are any, the GPU load will be below 100% and the test results will be meaningless).
  • The average FPS and frame-time variations are obtained from at least three runs for each sample, with each run lasting at least one minute, and individual samples should not deviate more than 5% from the average (ideally, we want to try different cards at the same time, especially if you suspect there are significant differences between products from the same manufacturer).
  • The frame rate of a single card is measured using Fraps or built-in counters. FCAT is used for several cards in an SLI/CrossFire connection.
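
For reference, here is one way the 95th-percentile criterion from the list could be checked against a log of frame times (in milliseconds, e.g. exported from Fraps). It interprets "variation" as the absolute difference between consecutive frame times; the exact metric a given lab uses may differ.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static int cmp_double(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Returns the 95th percentile of |frame_time[i] - frame_time[i-1]|. */
static double p95_frametime_variation(const double *ft_ms, size_t n) {
    if (n < 2) return 0.0;
    size_t m = n - 1;                       /* number of consecutive diffs  */
    double *diff = malloc(m * sizeof *diff);
    for (size_t i = 1; i < n; ++i)
        diff[i - 1] = fabs(ft_ms[i] - ft_ms[i - 1]);
    qsort(diff, m, sizeof *diff, cmp_double);
    size_t idx = (size_t)(0.95 * (double)m); /* simple nearest-rank index   */
    if (idx >= m) idx = m - 1;
    double p95 = diff[idx];
    free(diff);
    return p95;
}

int main(void) {
    /* A made-up log with one visible hitch (33.3 ms frame). */
    double ft[] = {16.7, 16.9, 17.1, 16.6, 33.3, 16.8, 16.7, 17.0};
    printf("p95 variation: %.1f ms\n",
           p95_frametime_variation(ft, sizeof ft / sizeof ft[0]));
    return 0;
}
```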

As you may have realized, the benchmark level of performance depends on both the application and the resolution. But it is defined in a way that allows tests to be repeated and verified independently. In this sense, this approach is truly scientific. In fact, we are interested in manufacturers and enthusiasts repeating the tests and reporting any discrepancies to us. This is the only way to ensure the integrity of our work.

This definition of performance does not take into account overclocking or the range of behavior of a particular GPU across different graphics cards. Fortunately, we noticed this problem in only a few cases. Modern thermal throttling engines are designed to extract maximum frame rates in most possible scenarios, causing graphics cards to operate very close to their maximum capabilities. Moreover, the limit is often reached even before overclocking provides a real speed advantage.

In this material we will widely use the Unigine Valley 1.0 benchmark. It takes advantage of several features of DirectX 11 and allows for easily reproducible tests. Additionally, it doesn't rely on physics (and by extension CPU) in the same way that 3DMark does (at least in general and combined tests).

What are we going to do?

We have already figured out how to determine the performance of video cards. Next we'll look at the methodology, Vsync, noise and performance adjusted for graphics card noise levels, as well as the amount of video memory that is actually needed to run. In part two, we'll look at anti-aliasing techniques, the impact of the display, different PCI Express lane configurations, and the value of your graphics card investment.

It's time to familiarize yourself with the test configuration. In the context of this article, this section deserves special attention because it contains important information about the tests themselves.

Debunking myths about video card performance | How we test

Two systems, two goals

We carried out all tests on two different test benches. One is equipped with an older Intel Core i7-950 processor, the other with a modern Intel Core i7-4770K chip.

Test system 1
Case: Corsair Obsidian Series 800D (full tower)
CPU: Intel Core i7-950 (Bloomfield), overclocked to 3.6 GHz, Hyper-Threading and power saving off
CPU cooler: CoolIT Systems ACO-R120 ALC, Tuniq TX-4 TIM, Scythe GentleTyphoon 1850 RPM fan
Motherboard: Asus Rampage III Formula, Intel LGA 1366, Intel X58 chipset, BIOS 903
Network: Cisco-Linksys WMP600N (Ralink RT286)
RAM: Corsair CMX6GX3M3A1600C9, 3 x 2 GB, 1600 MT/s, CL 9
Storage: Samsung 840 Pro SSD, 256 GB, SATA 6Gb/s
Video cards:
Sound card: Asus Xonar Essence STX
Power supply: Corsair AX850, 850 W
System software and drivers
Operating system: Windows 7 Enterprise x64, Aero off (see note below); Windows 8.1 Pro x64 (reference only)
DirectX: DirectX 11
Video drivers: AMD Catalyst 13.11 Beta 9.5; Nvidia GeForce 331.82 WHQL

Test system 2
Case: Cooler Master HAF XB, hybrid desktop/test bench form factor
CPU: Intel Core i7-4770K (Haswell), overclocked to 4.6 GHz, Hyper-Threading and power saving off
CPU cooler: Xigmatek Aegir SD128264, Xigmatek TIM, Xigmatek 120 mm fan
Motherboard: ASRock Z87 Extreme6/ac, Intel LGA 1150, Intel Z87 chipset, BIOS 2.20
Network: mini-PCIe Wi-Fi card, 802.11ac
RAM: G.Skill F3-2133C9D-8GAB, 2 x 4 GB, 2133 MT/s, CL 9
Storage: Samsung 840 Pro SSD, 128 GB, SATA 6Gb/s
Video cards: AMD Radeon R9 290X 4 GB (press sample); Nvidia GeForce GTX 690 4 GB (retail sample); Nvidia GeForce GTX Titan 6 GB (press sample)
Sound card: built-in Realtek ALC1150
Power supply: Cooler Master V1000, 1000 W
System software and drivers
Operating system: Windows 8.1 Pro x64
DirectX: DirectX 11
Video drivers: AMD Catalyst 13.11 Beta 9.5; Nvidia GeForce 332.21 WHQL

We need the first test system to obtain repeatable results in real environments. Therefore, we assembled a relatively old, but still powerful system based on the LGA 1366 platform in a large full-size tower case.

The second test system must meet more specific requirements:

  • PCIe 3.0 support with a limited number of lanes (Haswell CPU for LGA 1150 offers only 16 lanes)
  • No PLX bridge
  • Supports three cards in CrossFire in x8/x4/x4 configuration or two in SLI in x8/x8

ASRock sent us a Z87 Extreme6/ac motherboard that meets our requirements. We have previously tested this model (only without the Wi-Fi module) in the article "Test of five Z87 chipset motherboards costing less than $220", in which it won our Smart Buy award. The sample that came to our laboratory turned out to be easy to set up, and we overclocked our Intel Core i7-4770K to 4.6 GHz without any problems.

The board's UEFI allows you to configure the PCI Express data transfer speed for each slot, so you can test the first, second and third generations of PCIe on the same motherboard. The results of these tests will be published in the second part of this material.

Cooler Master provided the case and power supply for the second test system. The unusual HAF XB case, which also received the Smart Buy award in the article "Review and testing of the Cooler Master HAF XB case", provides the necessary space for free access to components. The case has a lot of ventilation holes, so the components inside can be quite noisy if the cooling system is not sized correctly. However, this model boasts good air circulation, especially if you install all the optional fans.

The V1000 modular power supply allows you to install three high-performance video cards in the case while maintaining a neat cable layout.

Comparing test system No. 1 with system No. 2

It's amazing how close these systems are in performance if you don't pay attention to the architecture and focus on the frame rate. Here is their comparison in 3DMark Fire Strike.

As you can see, the performance of both systems in the graphics tests is essentially equal, even though the second system has faster memory (DDR3-2133 versus DDR3-1800, with Nehalem using a triple-channel and Haswell a dual-channel architecture). Only in the CPU tests does the Intel Core i7-4770K demonstrate its advantage.

The main advantage of the second system is larger overclocking headroom: the Intel Core i7-4770K with air cooling was able to maintain a stable 4.6 GHz, while the Intel Core i7-950 could not exceed 4 GHz even with water cooling.

It is also worth noting that the first test system runs Windows 7 x64 instead of Windows 8.1. There are three reasons for this:

  • First, the Desktop Window Manager (Windows Aero, dwm.exe) uses a significant amount of video memory. At 2160p, Windows 7 takes about 200 MB and Windows 8.1 about 300 MB, in addition to the 123 MB reserved by Windows. In Windows 8.1 there is no way to disable this without significant side effects, but in Windows 7 the problem is solved by switching to the basic theme. 400 MB is 20% of the total video memory of a 2 GB card.
  • When the basic (simplified) theme is activated, memory consumption in Windows 7 stabilizes: with a GeForce GTX 690 it always takes 99 MB at 1080p and 123 MB at 2160p. This allows for maximum test repeatability. For comparison, Aero takes about 200 MB, +/- 40 MB.
  • There is a bug in the Nvidia 331.82 WHQL driver when Windows Aero is active at 2160p. It only appears when Aero is enabled on a display where the 4K image is driven as two tiles, and it manifests itself as reduced GPU load during testing (fluctuating between 60-80% instead of 100%), which translates into performance losses of up to 15%. We have already notified Nvidia of our findings.

Regular screenshots and game videos cannot show ghosting and tearing effects. Therefore, we used a high-speed video camera to capture the actual image on the screen.

The temperature in the case is measured by the built-in temperature sensor of the Samsung 840 Pro. The ambient temperature is 20-22 °C. The background noise level for all acoustic tests was 33.7 dB(A) +/- 0.5 dB(A).

Test configuration
Games:
  • The Elder Scrolls V: Skyrim - version 1.9.32.0.8, THG's own 25-second test, HWiNFO64
  • Hitman: Absolution - version 1.0.447.0, built-in benchmark, HWiNFO64
  • Total War: Rome II - patch 7, built-in "Forest" benchmark, HWiNFO64
  • BioShock Infinite - patch 11, version 1.0.1593882, built-in benchmark, HWiNFO64
Synthetic tests:
  • Unigine Valley - version 1.0, ExtremeHD preset, HWiNFO64
  • 3DMark Fire Strike - version 1.1

There are many tools you can use to measure video memory consumption. We chose HWiNFO64, which received high marks from the enthusiast community. The same result can be obtained using MSI Afterburner, EVGA Precision X or RivaTuner Statistics Server.

Debunking myths about video card performance | To enable or not to enable V-Sync – that is the question

When evaluating video cards, the first parameter you want to compare is performance. How do the latest and fastest solutions outperform previous products? The World Wide Web is replete with testing data conducted by thousands of online resources that are trying to answer this question.

So let's start by looking at performance and the factors to consider if you really want to know how fast a particular graphics card is.

Myth: Frame rate is an indicator of graphics performance level

Let's start with a factor that our readers are most likely already aware of, although many still hold misconceptions about it. Common sense dictates that a frame rate of 30 FPS or higher is considered playable. Some people believe that even lower values are fine for normal gameplay, while others insist that even 30 FPS is too low.

However, in these disputes it is not always obvious that FPS is just a rate, behind which lie more complex matters. First, in films the frame rate is constant, while in games it varies and is therefore expressed as an average. Frame rate fluctuations are a byproduct of the graphics card's effort to render the scene: as the content on the screen changes, the frame rate changes with it.

It's simple: the quality of the gaming experience is more important than a high average frame rate. The stability of frame delivery is another extremely important factor. Imagine driving on a highway at a constant speed of 100 km/h, and then the same trip at an average speed of 100 km/h where a lot of time is spent accelerating and braking. You will arrive at the same time, but the impressions of the trip will differ greatly.

So let's set aside for a moment the question "What level of performance is sufficient?" We will return to it after we discuss other important topics.

Introducing vertical sync (V-sync)

Myths: It is not necessary to have a frame rate higher than 30 FPS, since the human eye cannot see the difference. Values above 60 FPS on a monitor with a 60 Hz refresh rate are unnecessary since the image is already rendered 60 times per second. V-sync should always be turned on. V-sync should always be turned off.

How are rendered frames actually displayed? Almost all LCD monitors update the image on the screen a fixed number of times per second, usually 60, although there are models capable of refreshing at 120 or 144 Hz. This characteristic is called the refresh rate and is measured in hertz.

The discrepancy between the variable frame rate of the video card and the fixed refresh rate of the monitor can be a problem. When the frame rate is higher than the refresh rate, multiple frames can be displayed in a single scan, resulting in an artifact called screen tearing. In the image above, the colored stripes highlight individual frames from the video card, which are displayed on the screen when ready. This can be very annoying, especially in active first-person shooters.

The image below shows another artifact that often appears on the screen but is difficult to detect. Since this artifact is related to the operation of the display, it is not visible in screenshots, but it is clearly visible to the naked eye. To catch it, you need a high-speed video camera. The FCAT utility we used to capture the frame in Battlefield 4 shows tearing, but not the ghosting effect.

Screen tearing is evident in both images from BioShock Infinite. However, on a Sharp panel with a 60Hz refresh rate, it is much more pronounced than on an Asus monitor with a 120Hz refresh rate, since the VG236HE's screen refresh rate is twice as fast. This artifact is the clearest evidence that the game does not have vertical synchronization, or V-sync, enabled.

The second problem with the BioShock image is the ghosting effect, which is clearly visible at the bottom left of the image. This artifact is associated with a delay in displaying images on the screen. In short: individual pixels do not change color quickly enough, and this is how this type of afterglow appears. This effect is much more pronounced in the game than shown in the image. The gray-to-gray response time of the Sharp panel on the left is 8ms, and the image appears blurry during fast movements.

Let's get back to tearing. The above-mentioned vertical sync is a fairly old solution to the problem. It consists of synchronizing the rate at which the video card delivers frames with the refresh rate of the monitor. Since multiple frames no longer appear within a single scan, there is no tearing either. But if your favorite game's frame rate drops below 60 FPS (or below your panel's refresh rate) at maximum graphics settings, the effective frame rate will jump between integer fractions of the refresh rate (60, 30, 20, ...), as shown below. This is another artifact, commonly called stuttering.
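
A short sketch of this quantization effect, assuming classic double-buffered V-sync where a finished frame waits for the next refresh (real drivers and triple buffering behave differently):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double refresh_hz = 60.0;
    double scan_ms = 1000.0 / refresh_hz;            /* ~16.7 ms per refresh */

    /* Hypothetical GPU render times for a few frames, in milliseconds. */
    double render_ms[] = {10.0, 18.0, 25.0, 40.0};
    int n = sizeof render_ms / sizeof render_ms[0];

    for (int i = 0; i < n; ++i) {
        /* With V-sync, a finished frame is held until the next refresh,
           so the displayed frame time is rounded up to a whole refresh. */
        double displayed_ms = ceil(render_ms[i] / scan_ms) * scan_ms;
        printf("render %4.1f ms -> effective %2.0f FPS\n",
               render_ms[i], 1000.0 / displayed_ms);
    }
    return 0;
}
```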

One of the oldest debates on the Internet concerns vertical sync. Some insist that the technology should always be turned on, others are sure that it should always be turned off, and others choose the settings depending on the specific game.

So to enable or not to enable V-sync?

Let's say you're part of the majority and use a regular display with a 60Hz refresh rate:

  • If you play first-person shooters and/or have problems with perceived input lag, and/or your system cannot consistently maintain a minimum of 60 FPS in game, and/or you are testing a graphics card, then V-sync should be turned off.
  • If none of the above factors concern you, and you are experiencing noticeable screen tearing, then vertical sync needs to be enabled.
  • If you're not sure, it's best to leave V-sync turned off.
If you're using a gaming display with a 120/144Hz refresh rate (if you have one of these displays, there's a good chance you bought it for the high refresh rate):
  • You should only enable Vsync in older games where gameplay runs at frame rates above 120 FPS and you constantly experience screen tearing.

Please note that in some cases the frame rate reduction caused by V-sync does not appear. Such applications support triple buffering, although this solution is not very common. Also, in some games (for example, The Elder Scrolls V: Skyrim) V-sync is enabled by default, and forcibly disabling it by modifying configuration files leads to problems with the game engine. In such cases it is better to leave vertical sync enabled.

G-Sync, FreeSync and the future

Fortunately, even on the weakest computers, input lag will not exceed 200 ms. Therefore, your own reaction has the greatest influence on the results of the game.

However, as differences in input lag grow, their impact on gameplay grows too. Imagine a professional gamer whose reaction time can be compared to that of the best pilots, i.e. about 150 ms. An extra 50 ms of input lag means that person will react about 30% slower (that is three frames on a 60 Hz display) than their opponent. At a professional level, this is a very noticeable difference.

For mere mortals (including our editors, who scored 200ms in a visual test) and for those who would rather play Civilization V than Counter Strike 1.6, things are a little different. It's likely that you can ignore input lag altogether.

Here are some factors that can worsen input lag, all other things being equal:

  • Playing on an HDTV (especially if Game Mode is disabled) or playing on an LCD display with video processing that cannot be disabled. An ordered list of input lag metrics for various displays can be found in the DisplayLag database .
  • Gaming on LCD displays using IPS panels with higher response times (typically 5-7ms G2G) instead of TN+Film panels (1-2ms GTG) or CRT displays (the fastest available).
  • Gaming on low refresh rate displays. New gaming displays support 120 or 144 Hz.
  • Game at low frame rates (30 FPS is one frame every 33 ms; 144 FPS is one frame every 7 ms).
  • Using a USB mouse with a low polling rate. The cycle time at 125 Hz is 8 ms, which adds an average input lag of about 4 ms. The polling rate of a gaming mouse can reach 1000 Hz, adding an average input lag of only 0.5 ms.
  • Using a low-quality keyboard (typically, keyboard input lag is 16 ms, but in cheap models it can be higher).
  • Enabling V-sync, especially in combination with triple buffering (there is a myth that Direct3D does not support triple buffering; in fact, Direct3D allows multiple back buffers, but few games use them). If you are tech savvy, you can check out Microsoft's documentation (in English) on this topic.
  • Playing with a long pre-render queue. The default queue in Direct3D is three frames, or 48 ms at 60 Hz. The value can be increased to as much as 20 frames for greater "smoothness", or reduced to one frame to improve responsiveness at the cost of larger frame time fluctuations and, in some cases, an overall loss in FPS. There is no zero setting: zero simply resets the parameter to the default of three frames. If you are tech savvy, you can check out Microsoft's documentation (in English) on this topic.
  • High latency of the Internet connection. While this doesn't exactly relate to the definition of input lag, it does have a noticeable effect on it.

Factors that do not affect input lag:

  • Using a keyboard with a PS/2 or USB connector (see additional page in our review "Five Mechanical-Switch Keyboards: Only The Best For Your Hands"(English)).
  • Using a wired or wireless network connection (check your router's ping if you don't believe it; the ping should not exceed 1 ms).
  • Using SLI or CrossFire. The longer render queues required to implement these technologies are offset by higher throughput.

Conclusion: Input lag is only important for "fast" games and really plays a significant role at the professional level.

It's not just the display technology and graphics card that affect input lag. Hardware, hardware settings, display, display settings and application settings all contribute to this indicator.

Debunking myths about video card performance | Myths about video memory

Video memory is responsible for resolution and quality settings, but does not increase speed

Manufacturers often use video memory as a marketing tool. Because gamers have been led to believe that more is better, we often see entry-level graphics cards that carry significantly more memory than they actually need. But enthusiasts know that the most important thing is balance across all PC components.

Broadly speaking, video memory is the memory that serves the discrete GPU and the tasks it processes, separate from the system memory installed on the motherboard. Video cards use several memory technologies, the most popular of which are DDR3 and GDDR5 SDRAM.

Myth: Graphics cards with 2 GB of memory are faster than models with 1 GB

It's not surprising that manufacturers pack inexpensive GPUs with more memory (and make higher profits), since many people believe that more memory will improve speed. Let's look into this issue. The amount of video memory on your video card does not affect its performance unless you select game settings that use all available memory.

But why then do we need additional video memory? To answer this question, you need to find out what it is used for. The list is simplified, but useful:

  • Storing textures.
  • Maintaining the frame buffer.
  • Maintaining the depth buffer ("Z-buffer").
  • Maintaining other resources required to render the frame (shadow maps, etc.).

Of course, the size of the textures loaded into memory depends on the game and the detail settings. For example, Skyrim's High Definition Texture Pack includes 3 GB of textures. Most games load and unload textures dynamically as needed, so not all textures have to reside in video memory at once, but the textures needed to render the current scene must be there.

A frame buffer is used to store an image as it is rendered, before or while it is sent to the screen. The required amount of video memory therefore depends on the output resolution (a 1920x1080 image at 32 bits per pixel "weighs" about 8.3 MB, while a 4K image at 3840x2160 and 32 bits per pixel is already about 33.2 MB) and on the number of buffers (at least two, sometimes three or more).

Specific anti-aliasing modes (FSAA, MSAA, CSAA, CFAA, but not FXAA or MLAA) effectively increase the number of pixels that must be rendered and proportionally increase the total amount of video memory required. Render-based anti-aliasing has a particularly large impact on memory consumption, which increases with sample size (2x, 4x, 8x, etc.). Additional buffers also take up video memory.
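
A rough estimator of how much memory the buffers alone consume, using the 32 bits per pixel figure from above. This is a deliberate simplification: it assumes the MSAA sample count scales both color and depth buffers, and it ignores textures, compression, and driver overhead.

```c
#include <stdio.h>

/* Rough buffer footprint in MB: color buffers + depth buffer,
   scaled by the MSAA sample count. Textures are NOT included. */
static double buffer_mb(int width, int height, int color_buffers, int msaa) {
    double bytes_per_px = 4.0;                              /* 32 bits/pixel */
    double color = (double)width * height * bytes_per_px * color_buffers;
    double depth = (double)width * height * bytes_per_px;   /* Z buffer      */
    return (color + depth) * (msaa > 1 ? msaa : 1) / 1e6;   /* decimal MB    */
}

int main(void) {
    printf("1920x1080, 2 buffers, no MSAA : %6.1f MB\n", buffer_mb(1920, 1080, 2, 1));
    printf("1920x1080, 2 buffers, 4xMSAA  : %6.1f MB\n", buffer_mb(1920, 1080, 2, 4));
    printf("3840x2160, 2 buffers, 4xMSAA  : %6.1f MB\n", buffer_mb(3840, 2160, 2, 4));
    return 0;
}
```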

Thus, a video card with a large amount of graphics memory allows you to:

  1. Play at higher resolutions.
  2. Play at higher texture quality settings.
  3. Play at higher antialiasing levels.

Now let's destroy the myth.

Myth: You need 1, 2, 3, 4 or 6 GB of VRAM to play games on (insert native resolution of your display).

The most important factor to consider when choosing the amount of RAM is the resolution at which you will be playing. Naturally, higher resolution requires more memory. The second important factor is the use of the anti-aliasing technologies mentioned above. Other graphics options have a smaller impact on the amount of memory required.

Before we get to the measurements themselves, a warning. There is a special class of high-end video cards with two GPUs (AMD Radeon HD 6990 and Radeon HD 7990, as well as Nvidia GeForce GTX 590 and GeForce GTX 690) that are advertised with a total amount of memory. Because of the dual-GPU configuration the data is essentially duplicated, which halves the effective memory capacity: a GeForce GTX 690 with 4 GB behaves like two 2 GB cards in SLI. Moreover, when you add a second card to a CrossFire or SLI configuration, the array's video memory does not double; each card uses only its own memory.

We performed these tests on Windows 7 x64 with the Aero theme disabled. If you are using Aero (or Windows 8/8.1, which does not have Aero), then you can add about 300 MB to the figures.

As seen from the latest survey on Steam, the majority of gamers (about half) use graphics cards with 1 GB of video memory, about 20% have models with 2 GB of video memory, and a small number of users (less than 2%) work with graphics adapters with 3 GB of video memory or more.

We tested Skyrim with the official high quality texture pack. As you can see, 1GB of memory is barely enough to play at 1080p without anti-aliasing or using MLAA/FXAA. 2 GB allows you to run the game at a resolution of 1920x1080 pixels with maximum detail and at 2160p with a reduced level of anti-aliasing. To activate maximum settings and 8xMSAA anti-aliasing, even 2 GB is not enough.

Bethesda's Creation Engine is a unique member of this benchmark suite: it is not always limited by GPU speed and is often limited by the platform instead. But in these tests we saw for the first time how Skyrim at maximum settings hits the limit of the graphics adapter's video memory.

It's also worth noting that activating FXAA does not consume additional memory, which makes it a good compromise when using MSAA is not possible.

Debunking myths about video card performance | Additional video memory measurements

Io Interactive's Glacier 2 graphics engine, which powers Hitman: Absolution, is very memory-hungry and in our tests is second only to Creative Assembly's Warscape engine (Total War: Rome II) at maximum detail settings.

In Hitman: Absolution, a video card with 1 GB of video memory is not enough to play at ultra-quality settings in 1080p resolution. The 2GB model will allow you to enable 4xAA at 1080p or play without MSAA at 2160p.

To enable 8xMSAA at 1080p, 3 GB of video memory is required, and 8xMSAA at 2160p is achievable only on a card no weaker than a GeForce GTX Titan with 6 GB of memory.

Here, activating FXAA also does not use additional memory.

Note: the Unigine Valley 1.0 benchmark does not natively support MLAA/FXAA, so the memory consumption results with MLAA/FXAA were obtained by forcing them through CCC/NVCP.

The data shows that the Valley test runs well on a card with 2GB of memory at 1080p (at least as far as VRAM is concerned). It's even possible to use a 1GB card with 4xMSAA active, although this won't be possible in all games. However, at 2160p the benchmark performs well on a 2GB card if anti-aliasing or post-processing effects are not enabled. The 2 GB threshold is reached when 4xMSAA is activated.

Ultra HD with 8xMSAA requires up to 3 GB of video memory. This means that at such settings the benchmark can only be completed on a GeForce GTX Titan or on one of the AMD models with 4 GB of memory and a Hawaii chip.

Total War: Rome II uses the updated Warscape engine from Creative Assembly. It doesn't support SLI at the moment (but CrossFire does). It also does not support any form of MSAA. Of all the forms of anti-aliasing, only AMD's MLAA can be used, which is one of the post-processing techniques like SMAA and FXAA.

An interesting feature of this engine is the ability to reduce image quality based on the available video memory. The game can maintain an acceptable speed level with minimal user interaction. But the lack of SLI support kills the game on an Nvidia video card at 3840x2160 pixels. At least for now, this game is best played on an AMD card if you choose 4K resolution.

Without MLAA, the game's built-in "Forest" benchmark at the Extreme settings uses 1848 MB of video memory. The 2 GB limit of the GeForce GTX 690 is exceeded when MLAA is activated at 2160p. At 1920x1080, memory usage stays in the 1400 MB range.

Please note that AMD technology (MLAA) runs on Nvidia hardware. Since FXAA and MLAA are post-processing techniques, there is technically no reason why they cannot function on other manufacturer's hardware. Either Creative Assembly is secretly switching to FXAA (despite what the configuration file says), or AMD's marketers haven't taken this fact into account.

To play Total War: Rome II at 1080p on Extreme graphics settings, you'll need a 2 GB graphics card, while running the game smoothly at 2160p will require a CrossFire array of cards with more than 3 GB of memory. If your card has only 1 GB of video memory, you can still play the new Total War, but only at 1080p and lower quality settings.

What happens when video memory is fully utilized? In short, data is transferred to system memory via the PCI Express bus. In practice, this means that performance is significantly reduced, especially when textures have been loaded. It is unlikely that you will want to deal with this, since the game will be almost impossible to play due to constant slowdowns.

So how much video memory do you need?

If you have a video card with 1 GB of video memory and a monitor with a resolution of 1080p, then you don’t have to think about an upgrade at the moment. However, a 2GB card will allow you to set higher anti-aliasing settings in most games, so consider this a minimum starting point if you want to enjoy modern games at 1920x1080 resolution.

If you plan to use resolutions of 1440p, 1600p, 2160p or multi-monitor configurations, then it is better to consider models with memory capacity above 2 GB, especially if you want to enable MSAA. It is better to consider purchasing a 3 GB model (or several cards with more than 3 GB of memory in SLI/CrossFire).

Of course, as we have already said, it is important to maintain a balance. A weak GPU supported by 4 GB of GDDR5 memory (instead of 2 GB) is unlikely to allow playing at high resolutions only due to the presence of a large amount of memory. That's why in video card reviews we test multiple games, multiple resolutions, and multiple detail settings. After all, before making any recommendations, it is necessary to identify all possible shortcomings.

Debunking myths about video card performance | Thermal management in modern video cards

Modern AMD and Nvidia graphics cards use protection mechanisms that increase the fan speed and, ultimately, lower clock speeds and voltages if the chip overheats. This technology does not always work in favor of your system's stability (especially when overclocking); it is designed to protect the hardware from damage. This is why cards with overly aggressive settings often crash and need to be reset.

There is a lot of controversy about the maximum safe temperature for a GPU. Higher temperatures, if the hardware tolerates them, are preferable in the sense that they allow more heat to be dissipated (the greater the difference with the ambient temperature, the more heat can be transferred). At least from a technical perspective, AMD's willingness to raise the Hawaii GPU's thermal ceiling is therefore understandable. However, there are no long-term studies yet that confirm the viability of these temperature settings, so based on our experience with device stability we would prefer to rely on the manufacturer's specifications.

On the other hand, it is well known that silicon transistors perform better at lower temperatures. This is the main reason why overclockers use liquid nitrogen coolers to keep their chips as cool as possible. Typically, lower temperatures help provide more overclocking headroom.

The most power-hungry video cards in the world are the Radeon HD 7990 (TDP 375 W) and the GeForce GTX 690 (TDP 300 W); both are equipped with two GPUs. Cards with a single GPU consume much less power, although the Radeon R9 290 series approaches the 300 W level. In any case, this is a lot of heat to dissipate.

These values are taken into account in the design of cooling systems, which we will not delve into today. We are more interested in what happens when a load is applied to a modern GPU:

  1. You are running an intensive task such as a 3D game or Bitcoin mining.
  2. The clock frequency of the video card is increased to nominal or boost values. The card begins to heat up due to increased current consumption.
  3. The fan rotation speed gradually increases to the point indicated in the firmware. Typically, growth stops when the noise level reaches 50 dB(A).
  4. If the programmed fan speed is not enough to keep the GPU temperature below a certain level, the clock speed begins to decrease until the temperature drops to the specified threshold.
  5. The card then operates stably within a relatively narrow range of frequencies and temperatures until the load is removed.

As you can imagine, the point at which thermal throttling is activated depends on many factors, including the type of load, air exchange in the case, ambient air temperature, and even ambient air pressure. This is why video cards turn on throttling at different times. The thermal throttling trigger point can be used to define a performance reference level. And if we set the fan speed (and therefore the noise level) manually, we can create a measurement point depending on the noise. What's the point of this? Let's find out...
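
The sequence above is essentially a feedback loop. A toy model of it might look like the sketch below; every number in it is invented purely for illustration, and real firmware logic is far more elaborate.

```c
#include <stdio.h>

int main(void) {
    /* All numbers are invented purely to illustrate the control loop. */
    double temp = 40.0, temp_limit = 90.0;
    double clock = 1000.0, base_clock = 700.0;   /* MHz */
    int fan = 30, fan_cap = 50;                  /* percent; cap set by firmware */

    for (int t = 0; t < 60; ++t) {               /* one iteration ~ one second */
        /* Toy thermal model: temperature drifts toward an equilibrium set by
           the current clock (heating) and fan speed (cooling). */
        double equilibrium = 25.0 + clock * 0.09 - fan * 0.4;
        temp += 0.5 * (equilibrium - temp);

        if (temp > temp_limit) {
            if (fan < fan_cap)
                fan += 2;                        /* step 3: spin the fan up    */
            else if (clock > base_clock)
                clock -= 10.0;                   /* step 4: throttle the clock */
        }
    }
    /* Step 5: the card settles into a narrow frequency/temperature band. */
    printf("steady state: %.0f MHz, %.1f C, fan %d%%\n", clock, temp, fan);
    return 0;
}
```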

Debunking myths about video card performance | Testing performance at a constant noise level of 40 dB(A)

Why 40 dB(A)?

First, notice the A in parentheses. It stands for "A-weighted": sound pressure levels are corrected along a curve that approximates the sensitivity of the human ear to noise at different frequencies.

Forty decibels is considered an average background noise level for a normally quiet room. In recording studios this value is around 30 dB, while 50 dB corresponds to a quiet street or two people talking in a room. Zero is the threshold of human hearing, although it is very rare to hear sounds in the 0-5 dB range if you are over five years old. The decibel scale is logarithmic, not linear: 50 dB is perceived as roughly twice as loud as 40 dB, which in turn is twice as loud as 30 dB.
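
A tiny calculation makes the scale tangible (the "twice as loud per 10 dB" figure is the usual psychoacoustic rule of thumb rather than an exact law):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double quiet = 40.0, loud = 50.0;        /* dB(A) levels to compare */
    double delta = loud - quiet;

    double power_ratio    = pow(10.0, delta / 10.0); /* sound power: 10x per 10 dB */
    double loudness_ratio = pow(2.0,  delta / 10.0); /* perceived: ~2x per 10 dB   */

    printf("+%.0f dB = %.0fx the sound power, ~%.1fx as loud\n",
           delta, power_ratio, loudness_ratio);
    return 0;
}
```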

The noise level of a PC operating at 40 dB(A) should be compatible with the background noise of the house or apartment. As a rule, it should not be audible.

Fun fact: in the quietest room in the world, the background noise level is -9 dB. Spend even less than an hour in it in the dark and hallucinations may begin due to sensory deprivation (the lack of sensory input).

How do we maintain a constant noise level of 40 dB(A)?

The acoustic profile of a video card is influenced by several factors, one of which is the fan speed. Not all fans produce the same amount of noise at the same speed, but each fan itself should make the same noise level at a constant speed.

So, by measuring the noise level directly with an SPL meter at a distance of 90 cm, we manually adjusted the fan profile so that the sound pressure did not exceed 40 dB(A).

Video card | Fan setting, % | Fan speed, rpm | dB(A) ±0.5
Radeon R9 290X | 41 | 2160 | 40
GeForce GTX 690 | 61 | 2160 | 40
GeForce GTX Titan | 65 | 2780 | 40

Note that the Radeon R9 290X and the GeForce GTX 690 reach 40 dB(A) at the same fan speed of 2160 rpm. The GeForce GTX Titan, on the other hand, uses a different acoustic profile, reaching 40 dB(A) at a higher speed of 2780 rpm; its fan setting (65%) is close to that of the GeForce GTX 690 (61%).

This table illustrates the fan profiles alongside a variety of presets. Overclocked cards can be very noisy under load: we measured up to 47 dB(A). Under a typical load, the quietest card was the GeForce GTX Titan (38.3 dB(A)) and the loudest was the GeForce GTX 690 (42.5 dB(A)).

Debunking myths about video card performance | Can overclocking hurt performance at 40 dB(A)?

Myth: Overclocking always gives a performance boost

If we tune a specific fan profile and allow the cards to throttle down to a stable level, we get some interesting and repeatable benchmarks.


Video card | Ambient temp., °C | Fan setting, % | Fan speed, rpm | dB(A) ±0.5 | GPU1 clock, MHz | GPU2 clock, MHz | Memory clock, MHz | FPS
Radeon R9 290X | 30 | 41 | 2160 | 40 | 870-890 | n/a | 1250 | 55.5
Radeon R9 290X (overclocked) | 28 | 41 | 2160 | 40 | 831-895 | n/a | 1375 | 55.5
GeForce GTX 690 | 42 | 61 | 2160 | 40 | 967-1006 | 1032 | 1503 | 73.1
GeForce GTX 690 (overclocked) | 43 | 61 | 2160 | 40 | 575-1150 | 1124 | 1801 | 71.6
GeForce GTX Titan | 30 | 65 | 2780 | 40 | 915-941 | n/a | 1503 | 62

The Radeon R9 290X lags behind in the more standard tests.

Also curious is the sharper rise in ambient temperature inside the case when using the GeForce GTX 690 (12-14 °C). This is related to its axial fan, located in the center of the card, which blows air back into the case and limits the thermal headroom. We expect a similar picture in most conventional cases. So it is up to you to decide, based on your own preferences, whether to accept more noise in exchange for more performance (or vice versa).

Having gone into detail about Vsync, input lag, video memory and testing a specific acoustic profile, we can return to work on the second part of the article, which already includes research on PCIe data transfer speeds, screen sizes, a detailed study of exclusive technologies from various manufacturers and price analysis.

An interesting and always relevant topic is how to increase the speed of your computer. In the modern world the race against time becomes more and more intense, and everyone copes as best they can. The computer plays an important role here. How infuriating it can be when it slows to a crawl at a crucial moment! At such moments the thought comes to me: “Come on, I'm not even doing anything demanding! Where is this slowdown coming from?”

In this article I will look at the 10 most effective ways to increase computer performance.

Replacement of components

The most obvious way is to replace the whole computer with something more powerful, but we will not consider that :) Replacing an individual part (component), however, is quite realistic. You just need to figure out what can be replaced for the least money while getting the maximum gain in performance.

A. CPU. It is worth replacing only if the new one is at least 30% faster than the installed one. Otherwise there will be no noticeable gain in performance, and a lot of money will be spent.

Extreme enthusiasts can try overclocking their processor. The method is not for everyone, but it allows you to postpone a processor upgrade for another year, if the overclocking potential of the motherboard and CPU allows. It consists of raising the standard operating frequencies of the central processor, video card and/or RAM above stock. It is complicated by the individual characteristics of each particular configuration and by the risk of premature failure.

B. RAM. It definitely needs to be added if all of it is in use while you work. Check the Task Manager: if at peak load (with everything you normally use open) more than about 80% of the RAM is occupied, it is better to increase it by 50-100% (a quick programmatic way to check this is sketched right after this list). Fortunately, it now costs pennies.

C. HDD. It is not the size of the disk that matters, but its speed. If you have a slow budget hard drive with a spindle speed of 5400 rpm, replacing it with a faster 7200 rpm model with a higher recording density will add performance. And in all cases, switching to an SSD makes users very happy :) The performance before and after is simply incomparable.
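As promised in point B, here is a minimal sketch of how memory usage could be checked programmatically. It assumes the third-party psutil package is installed (`pip install psutil`), and the 80% figure is just the rule of thumb from the text.

```python
# Minimal sketch: report RAM usage with the third-party psutil package.
import psutil

mem = psutil.virtual_memory()
print(f"Total RAM: {mem.total / 1024**3:.1f} GiB, in use: {mem.percent:.0f}%")

if mem.percent > 80:
    print("RAM looks like a bottleneck - consider adding more.")
```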

You can roughly determine the bottleneck in the computer configuration using the standard Windows 7 performance tool. To do this, go to “Control Panel -> System” and click “Evaluate performance” or “Update”. The overall performance is determined by the lowest indicator, thus the weak link can be identified. For example, if the hard drive rating is much lower than the processor and RAM rating, then you need to think about replacing it with a more productive one.

Computer repair and cleaning

The computer may be slowing down because of some kind of malfunction, and a simple repair can restore performance. For example, if the processor's cooling system is faulty, its clock frequency drops sharply and performance drops with it. The computer can also slow down simply because the motherboard's components are covered in dust! So first of all, try cleaning the system unit thoroughly.

Defragmentation and free disk space

If you have never heard of what it is or haven’t done it for a long time, then this is the first thing you need to do to increase the speed of your computer. Defragmentation collects the information on the hard drive piece by piece into a single whole, thereby reducing the number of read head movements and increasing performance.

The lack of at least 1 GB of free space on the system disk (where the operating system is installed) can also cause a decrease in overall performance. Keep track of the free space on your disks. By the way, for the defragmentation process it is desirable to have at least 30% of free space.
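A quick way to keep an eye on free disk space is sketched below. It uses only the Python standard library; the drive path and the 30% threshold from the paragraph above are assumptions you can adjust.

```python
# Standard-library sketch: check free space on the system drive.
import shutil

total, used, free = shutil.disk_usage("C:\\")
print(f"Free: {free / 1024**3:.1f} GiB of {total / 1024**3:.1f} GiB ({free / total:.0%})")

if free / total < 0.30:
    print("Less than 30% free - consider cleaning up before defragmenting.")
```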

Reinstalling the Windows XP/7/10 operating system

In 90% of cases, reinstalling allows you to speed up your computer by a factor of 1.5-3, depending on how cluttered the system is. Windows is designed in such a way that over time it needs to be reinstalled :) I know people who “redo Windows” several times a week. I am not a supporter of this method; I try to optimize the system and get to the true source of the slowdowns. Still, about once a year I reinstall the system, and only because some components change.

In principle, if I didn’t have such a turnover of programs, then I could live 5-10 years without reinstalling. But this is rare, for example in some offices where only 1C: Accounting and Microsoft Office are installed, and nothing has changed for years. I know such a company, they have had Windows 2000 for more than 10 years and it works fine... But in general, reinstallation is a good way if you don’t know how to increase the performance of your computer.

Using operating system settings optimizer programs

Sometimes you can significantly increase the comfort of work using special programs. Moreover, in most cases this is almost the only simple, fast and suitable method. I already wrote about one good program of this kind earlier.

You can also try a good PCMedic utility. It's paid, but that's not a problem :) The highlight of the program is its fully automated process. The entire program consists of one window in which you need to select your operating system, processor manufacturer (Intel, AMD or other) and optimization type - Heal (cleaning only) or Heal & Boost (cleaning plus acceleration). Press the “GO” button and that’s it.

And one of the most powerful programs is Auslogics BoostSpeed, although it is also paid, but there is a trial version. This is a real monster that includes several utilities to increase the performance of your computer on all fronts. There is an optimizer, a defragmenter, cleaning your computer from unnecessary files, cleaning the registry, an Internet accelerator and some other utilities.

Interestingly, the program has an advisor who will tell you what needs to be done. But always check what is recommended there, do not use everything indiscriminately. For example, the advisor really wants automatic Windows updates to work. Those who have not bought licensed Windows know that this can end badly...

For optimization there are also cleaning programs, for example CCleaner, which remove unnecessary temporary files from the computer and clean up the registry. Removing junk from the disks helps free up space.

However, cleaning the registry does not give a noticeable increase in performance, and it can cause problems if important keys are deleted.

IMPORTANT! Before making any changes, be sure to create a system restore point or a backup!

ALWAYS review everything that cleaner programs want to remove! I scanned my computer with Auslogics Disk Cleaner and at first was glad that I had 25 GB of junk in my recycle bin. But remembering that I had recently emptied the recycle bin, I opened the list of files prepared for deletion in this program and was simply amazed! ALL of my most important files were there, my entire life for the last few months. Moreover, they were not in the recycle bin but in a separate folder on drive D. That is how I would have deleted them if I had not looked.
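In the same spirit of "look before you delete", here is a small sketch that only measures how much space the current user's temporary folder occupies without removing anything; it is not tied to any particular cleaner.

```python
# Measure (do NOT delete) how much space the current user's temp folder takes.
import os
import tempfile

temp_dir = tempfile.gettempdir()
total_bytes = 0
for root, _dirs, files in os.walk(temp_dir):
    for name in files:
        try:
            total_bytes += os.path.getsize(os.path.join(root, name))
        except OSError:
            continue  # file vanished or access denied - skip it

print(f"{temp_dir}: about {total_bytes / 1024**2:.0f} MB of temporary files")
```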

In Windows 7 you can slightly increase performance by simplifying the graphical interface. To do this, go to “Control Panel -> System -> Advanced system settings -> Performance -> Settings” and disable some of the checkboxes, or select “Adjust for best performance”.

Motherboard BIOS Settings

The BIOS stores the most basic computer settings. You can enter it while the computer is booting by pressing Delete, F2, F10 or some other key (it is shown on the screen during startup). A sharp drop in performance can only be caused by critically wrong settings. Usually the BIOS is configured sensibly, and meddling there is unnecessary and can even be harmful.

The easiest way to change the settings to optimal is to go into the BIOS and select an option like “Load Optimal Settings” (the spelling may differ depending on the BIOS), save the settings and reboot.

Disabling unnecessary services and programs from startup

Today, almost every second program you install tries to add itself to startup. As a result, loading the operating system drags on, and work itself slows down. Look at the system tray (near the clock): how many unnecessary icons are there? It is worth removing unnecessary programs or disabling them from startup.

This is easy to do using the built-in Windows System Configuration utility. To run it, press “Win + R” and type “msconfig” in the window. In the program, go to the “Startup” tab and uncheck the unnecessary boxes. If something goes missing after a reboot, the checkboxes can be re-enabled. You should have an idea of what programs you have installed and what they are for.
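For the curious, here is a minimal read-only sketch of where part of that startup list lives: the per-user Run key in the registry. It uses Python's standard winreg module (Windows only) and changes nothing; msconfig also shows entries from other locations, such as the machine-wide Run key and the Startup folder.

```python
# Read-only sketch (Windows only): list the current user's Run key, one of the
# places the msconfig "Startup" tab draws from. Nothing is modified here.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    value_count = winreg.QueryInfoKey(key)[1]  # number of values under the key
    for i in range(value_count):
        name, command, _type = winreg.EnumValue(key, i)
        print(f"{name}: {command}")
```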

One strong way to increase performance is... disabling the antivirus :) It’s bad, of course, but I sometimes disable the antivirus while performing resource-intensive tasks.

No need to do this while surfing the web or installing unknown software!

Installing the latest drivers

This can really help, especially if very old or default drivers are installed (by default from Microsoft). The motherboard chipset drivers have the greatest influence, but others can also reduce performance. You need to update drivers for each device, and you can find them on the manufacturers’ websites.

It is better to update drivers manually, but there are many programs for updating them automatically. For example, a good driver updater will scan your devices and look for newer drivers.

Choose your operating system wisely

If you are still sitting on Windows XP with 2 gigabytes of RAM, I advise you to switch to Windows 7 as soon as possible; performance will increase. And if you have 4 GB or more, feel free to install the 64-bit version of Windows 10. The speed will increase even more, but only in 64-bit programs. Video, audio and other resource-intensive tasks can be processed 1.5-2 times faster! It is also high time to replace Windows Vista with Windows 7.

Do not use various Windows builds for installation, such as Windows Zver and the like. They are already crammed with necessary and unnecessary software, and they are often buggy.

Viruses

Even though viruses are in tenth place on my list, that does not mean you should ignore them. Viruses can significantly slow down your computer or even freeze it. If there is a strange drop in performance, you should scan the system with one of the on-demand scanners. Better still, have a reliable antivirus installed, such as DrWeb or Kaspersky Anti-Virus.

In this article, we looked at the main methods of how to increase the speed of your computer. I hope this article helped you save the most important thing in our lives - time that should be used productively, every hour and every minute, and not wasted. In the following articles I will touch upon the topic of increasing computer performance more than once, subscribe to blog updates.

Interesting video for today - incredible ping pong!

The speed of a desktop computer or laptop depends on many factors. Therefore, you cannot expect a significant increase in PC performance from improving just one component, such as installing a faster processor. For the computer to work noticeably faster, several components should be improved at once, and preferably all of them. This is quite natural: your computer will not run faster than its slowest device allows.

CPU clock speed

When assessing computer performance, people look first at the processor clock speed. This indicator affects the speed of CPU operations. The processor frequency is the clock rate of the core, its main component, when the system is under full load.

This parameter is measured in megahertz and gigahertz. The clock speed does not directly indicate the number of operations performed per second: certain operations can take several clock cycles to complete. Naturally, of two otherwise identical computers, the one with the higher processor clock speed will perform more tasks per unit of time.
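A tiny worked example of why frequency alone is not the whole story: the number of instructions executed per second also depends on how many instructions the core completes per clock cycle (IPC). The numbers below are made up purely for illustration.

```python
# Illustrative (made-up) numbers: throughput depends on both clock frequency
# and how many instructions the core completes per cycle (IPC).
frequency_hz = 3.5e9  # 3.5 GHz
ipc = 2.0             # assumed average instructions per cycle

print(f"{frequency_hz * ipc:.2e} instructions per second")  # ~7.00e+09
```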

RAM

The second most important parameter affecting performance is the amount of RAM. It is the second fastest component in a computer, after only the processor, although the difference in speed between the two is significant. Keep in mind that the more RAM you have, the more fully the processor can be utilized.

Information exchange with RAM is much faster than with other devices, such as a hard drive. That is why increasing the amount of RAM will significantly speed up your computer.

HDD

Computer performance is also significantly affected by the capacity of the hard drive and its speed. The capacity itself is not so important; the main thing is that at least about 10% of the system disk remains free. The speed at which the drive exchanges data over its interface is a much more significant factor.

Today conventional hard drives are being replaced by much faster SSDs, which have no moving parts. They work on the same principle as a flash drive: data is read simultaneously from several memory chips, which raises performance, and there are no heads moving across the platters to slow down reading and writing. However, the main disadvantage of SSDs is still their high price.
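If you want to get a feel for the difference yourself, a rough sequential-read measurement can be done with a few lines of Python. The file path is a placeholder, and on repeat runs the operating system's file cache can inflate the result, so use a large, freshly written file.

```python
# Rough sequential-read measurement. The path is a placeholder; beware that the
# OS file cache can inflate results on repeat runs.
import os
import time

def sequential_read_mb_per_s(path: str, block_size: int = 8 * 1024 * 1024) -> float:
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    return size / (1024 * 1024) / (time.perf_counter() - start)

# Example (hypothetical path):
# print(f"{sequential_read_mb_per_s('C:/temp/big_file.bin'):.0f} MB/s")
```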

Defragmenting files

Because files are periodically deleted from the hard drive, empty gaps remain in their place, and new files are then written into these free cells rather than in one contiguous area. This is called disk fragmentation. As a result, the system has to access different parts of the drive, which slows down operation.

To counteract this, you should periodically defragment the disk: rearranging the pieces of each file into adjacent sectors so they can be read faster.

To defragment a disk in the Windows 7 operating system, open the Start menu and select All Programs – Accessories – System Tools – Disk Defragmenter.

Simultaneously running tasks in the OS

The more tasks your computer performs simultaneously, the more it slows down. Therefore, if you are having problems with your PC's speed, close all applications and programs you are not currently using. Ending some processes in the Task Manager will also help. Read about which processes can be safely stopped.
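To see which running programs are the heaviest, a short sketch using the third-party psutil package (an assumption, not something the article relies on) can list the top consumers of RAM, similar to sorting the Task Manager by memory.

```python
# List the five processes using the most RAM, similar to sorting the
# Task Manager by memory. Requires the third-party psutil package.
import psutil

procs = []
for p in psutil.process_iter(attrs=["pid", "name", "memory_info"]):
    mem = p.info["memory_info"]
    if mem is None:          # access denied for this process - skip it
        continue
    procs.append((mem.rss, p.info["name"] or "?", p.info["pid"]))

for rss, name, pid in sorted(procs, reverse=True)[:5]:
    print(f"{name} (PID {pid}): {rss / 1024**2:.0f} MB")
```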

Viruses can also slow down your computer, so install reliable antivirus software and scan your system for malware. You can also use the recommendations from the article.


