An overclocked computer that doesn't fail during normal computing sessions shouldn't be considered "stable" if intensive, sustained workloads could still bring it down. Instead of waiting for the computer to crash in the middle of a game or a video encode, burn-in software can be used to test for stability relatively quickly because of the extreme nature of its computing task.

Most applications, even synthetic benchmarks, don't fully tax the computer; they merely keep it occupied, with much of the time spent on stalled pipelines, branch mispredictions, hard drive accesses and slow memory requests. The CPU takes advantage of these down-times to shut off sections of the core, saving power and reducing heat.

The never-ending "burn cycle" in Core Damage is designed to prevent the CPU from entering its reduced power state as much as possible by issuing a sequence of SSE, integer, floating point, address generating and branch-predicting instructions optimized specifically for the underlying micro-architecture to achieve a high degree of core utilization.  Stimulating the core in this way produces more heat as execution units "wake up" to handle the computing task.

An overclocker relies on burn-in software to simulate this worst-case scenario, ensuring the computer will remain stable during ordinary and even intensive computing tasks.
