Intel Mandates Standard BIOS Settings for Enhanced Stability in 13th and 14th Generation CPUs

Intel has directed motherboard manufacturers to adopt "Intel Default Settings" as the out-of-the-box BIOS profile to address ongoing stability issues with its 13th and 14th generation CPUs. The move comes in response to numerous reports of system instability, crashes, and Blue Screen of Death (BSOD) incidents on motherboards that ship with aggressive "Extreme" performance profiles enabled. These settings often push the CPUs beyond their recommended power limits, resulting in unstable operation.

[Image: Intel 14th Gen desktop processor]

Intel’s decision underscores a shift towards ensuring system stability by standardizing how these CPUs are configured out of the box. The recommended "Intel Default Settings" are designed to maintain the processors within safer operational parameters, emphasizing reliability over maximum performance. This change is expected to significantly reduce instances of hardware failure and system instability that have been reported by users.

The adjustment involves setting the power limits (PL1/PL2) and maximum current (IccMax) to levels that prevent undue stress on the CPUs. Specifically, Intel has outlined that PL1/PL2 should be dialed down to 125W/188W from the higher limits used under "Extreme" profiles, which can reach up to 253W. Lowering the power ceiling curbs the thermal and electrical demands on the processors, aligning them more closely with Intel's engineering specifications and thermal design targets.
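
For illustration, here is a minimal sketch of the kind of check a BIOS tool could run against these limits, using only the PL1/PL2 wattages quoted above. The profile struct and function names are hypothetical, and IccMax is left out because the article gives no figure for it.

```cpp
#include <iostream>
#include <string>

// Illustrative sketch only: the wattage figures come from the article above,
// and the profile names stand in for vendor BIOS presets.
struct PowerProfile {
    std::string name;
    int pl1_watts;  // sustained power limit
    int pl2_watts;  // short-duration turbo power limit
};

bool withinIntelDefaults(const PowerProfile& p) {
    // Limits the article attributes to the "Intel Default Settings" profile.
    constexpr int kDefaultPL1 = 125;
    constexpr int kDefaultPL2 = 188;
    return p.pl1_watts <= kDefaultPL1 && p.pl2_watts <= kDefaultPL2;
}

int main() {
    PowerProfile extreme{"Extreme", 253, 253};        // vendor "Extreme" preset, per the article
    PowerProfile defaults{"Intel Default", 125, 188};

    for (const auto& p : {extreme, defaults}) {
        std::cout << p.name << ": "
                  << (withinIntelDefaults(p) ? "within" : "exceeds")
                  << " Intel Default limits\n";
    }
}
```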

Motherboard manufacturers have been given a deadline of May 31, 2024, to implement these changes in their default BIOS settings. Intel believes that this will not only solve the immediate issues of system crashes and instability but also prolong the lifespan of the CPUs and the motherboards themselves by reducing the thermal and electrical stresses endured during operation.

This initiative is part of Intel’s broader strategy to enhance customer satisfaction and trust in their products by ensuring that their CPUs perform reliably under all standard computing conditions. Intel also plans to engage with their partners to monitor the implementation of these settings and assess the impact on system performance and stability. Further adjustments and optimizations may be considered based on the collective feedback from the user base and technical communities.

U.S. Government Auctions Off Cheyenne Supercomputer for $480,085


The U.S. government recently concluded an online auction where the Cheyenne supercomputer was sold for $480,085. This system includes 8,064 Intel Xeon E5-2697 v4 processors with 18 cores and 36 threads each at 2.3 GHz, along with 313 TB of RAM divided across 4,890 64GB ECC-compliant modules. Employed for over seven years, the supercomputer was instrumental in climate and weather research across Wyoming and nationally.

Unfortunately for buyers, none of the 32 petabytes of high-speed storage was included in the sale. However, a knowledgeable eBay seller could potentially flip the processors and RAM for about $700,000, yielding a significant profit.
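
A quick consistency check on the listed hardware and the flip math, using only the figures quoted above (a sketch, not data from the auction listing):

```cpp
#include <iostream>

int main() {
    // Figures from the article; this just checks that the totals line up.
    const long long cpus        = 8064;   // Xeon E5-2697 v4 processors
    const long long coresPerCpu = 18;
    const long long dimms       = 4890;   // 64 GB ECC modules
    const long long dimmSizeGB  = 64;

    std::cout << "Total cores: " << cpus * coresPerCpu << "\n";              // 145,152
    std::cout << "Total RAM:   " << dimms * dimmSizeGB / 1000.0 << " TB\n";  // ~313 TB

    const double salePrice   = 480085.0;  // winning bid
    const double resaleValue = 700000.0;  // article's estimate for parting out CPUs + RAM
    std::cout << "Gross margin: $" << resaleValue - salePrice
              << " (before transport and labor)\n";                          // ~$220k
}
```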

The auction was prompted by a rising failure rate and maintenance challenges, including faulty quick-disconnect fittings in the liquid-cooling system that sprayed water. These issues led to considerable downtime and maintenance costs, prompting the search for a replacement. Cheyenne is being succeeded by the new Derecho supercomputer, an HPE system costing between $35 and $40 million.

The buyer is responsible for transporting the 30 server racks from the facility; the government will not provide transport or include the Ethernet and optical cabling needed to bring the machine back online. With the winning bid amounting to roughly 2% of Cheyenne's estimated original construction cost of $25 million, the sale underscores how quickly high-end computing assets depreciate and how costly they are to operate.

AMD Instinct MI300A: A Revolutionary Leap in HPC Performance

AMD has introduced the Instinct MI300A, a groundbreaking product in high-performance computing (HPC) that significantly enhances performance compared to traditional discrete GPUs. This device embodies the 'Exascale APU' concept by integrating high-performance CPU and GPU capabilities into a single package with access to a shared HBM memory pool.

[Image: AMD Instinct MI300A]

The Instinct MI300A is designed for workloads that demand high performance per watt, making it well suited to scientific and engineering applications that process vast amounts of data. Moving such applications, which often span millions of lines of code, onto the APU still requires substantial porting, tuning, and maintenance effort, but widely used directive-based programming models like OpenMP and OpenACC simplify the process.

In a research paper titled "Porting HPC Applications to AMD Instinct MI300A Using Unified Memory and OpenMP," the OpenFOAM framework, an open-source C++ library, was employed to demonstrate the flexibility and ease of porting codes to MI300A using OpenMP.

In performance evaluations using the OpenFOAM HPC motorcycle benchmark, the AMD Instinct MI300A APU was compared against AMD Instinct MI210 and NVIDIA A100 and H100 (80 GB) GPUs. Results indicated that the Instinct MI300A delivered a fourfold performance increase over the NVIDIA H100 and five times over the Instinct MI210.

A key advantage of the MI300A is its unified physical memory, shared between the CPU cores and the GPU's compute units. This eliminates memory page migrations between host and device, significantly speeding up execution. The architecture delivers a substantial performance boost, making the AMD Instinct MI300A a strong candidate for next-generation computing systems.
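
As a rough illustration of what a directive-based port relying on unified memory can look like, here is a minimal OpenMP offload kernel. It is not taken from the paper or from OpenFOAM, and it assumes a toolchain configured for OpenMP offload with unified shared memory on MI300A (for example, ROCm's amdclang++ with -fopenmp --offload-arch=gfx942 and HSA_XNACK=1); under those assumptions, host-allocated vectors can be used in the target region without explicit map clauses.

```cpp
#include <cstdio>
#include <vector>

// Minimal sketch of a directive-based port: plain host vectors are used inside
// the target region with no explicit map()/data-transfer clauses, relying on
// unified shared memory between the CPU and GPU (as on MI300A).
#pragma omp requires unified_shared_memory

int main() {
    const int n = 1 << 20;
    std::vector<double> x(n, 1.0), y(n, 2.0);
    const double a = 3.0;

    // Offload the DAXPY-style loop to the GPU; with unified memory the
    // pointers below refer to the same physical pages the CPU allocated.
    double* xp = x.data();
    double* yp = y.data();
    #pragma omp target teams distribute parallel for
    for (int i = 0; i < n; ++i)
        yp[i] += a * xp[i];

    std::printf("y[0] = %f (expected 5.0)\n", yp[0]);
}
```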

Intel's Arrow Lake-S CPUs to Feature Lower Clock Speeds but Enhanced Architecture

Intel is set to launch its new Arrow Lake-S series of processors, which notably features a reduction in clock speed compared to the preceding Raptor Lake-S series. According to recent leaks from Weibo user MebiuW, the flagship model Core Ultra 9 285K will operate at a maximum speed of 5.5 GHz, which is 700 MHz lower than the current flagship Core i9-14900KS at 6.2 GHz.

[Image: Intel Core Ultra 9 285K]

The upcoming Arrow Lake-S processors will utilize an 8+16 core configuration, including 8 Lion Cove P-Cores and 16 Skymont E-Cores. Despite the lower maximum frequency, improvements in architecture are expected to achieve a performance increase of up to 10% compared to the 14th generation processors at similar power consumption.
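
Combining the leaked clock with the claimed uplift gives a sense of how much per-clock improvement Arrow Lake-S would need. A back-of-the-envelope sketch, assuming a single-threaded, peak-clock-bound workload and using only the figures above:

```cpp
#include <cstdio>

int main() {
    // Rough single-thread estimate using only the leaked/claimed figures.
    // Real results depend on sustained clocks, memory, and workload mix.
    const double arrowLakeGHz  = 5.5;   // Core Ultra 9 285K peak boost (leak)
    const double raptorLakeGHz = 6.2;   // Core i9-14900KS peak boost
    const double perfGain      = 1.10;  // "up to 10%" claimed uplift

    // perf = IPC * frequency  =>  required IPC ratio = perfGain * f_old / f_new
    const double ipcRatio = perfGain * raptorLakeGHz / arrowLakeGHz;
    std::printf("Implied per-clock (IPC) uplift at peak boost: ~%.0f%%\n",
                (ipcRatio - 1.0) * 100.0);   // ~24%
}
```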

Intel plans to unveil the new series at Computex 2024, where it will also highlight support for faster DDR5-6400 memory. Prices are expected to be slightly higher than those of previous models due to added AI features and integrated Xe-LPG graphics.

NVIDIA Updates Unreal Engine 5: Introduces ReSTIR GI Support for Realistic Lighting

NVIDIA has announced a significant update to the NvRTX branch of Unreal Engine 5, introducing experimental support for ReSTIR Global Illumination (ReSTIR GI), a ray tracing algorithm based on reservoir resampling. The underlying ReSTIR technique, first presented in 2020, enables "direct lighting from millions of moving light sources" in real time without complex light data structures; ReSTIR GI extends the same resampling approach to indirect illumination.

ReSTIR GI lights scenes using only emissive materials and can be integrated with Lumen, UE5's existing lighting system. The implementation was led by NVIDIA's Jiayin Cao, who returned to the company in 2023 to focus on enhancing the NvRTX branch.
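
For readers curious what the resampling amounts to, below is a simplified sketch of the streaming weighted-reservoir update that ReSTIR-style algorithms build on. It is not NVIDIA's NvRTX code, and it omits the temporal and spatial reuse passes and the bias-correction weights that make the full technique practical.

```cpp
#include <cstdio>
#include <random>

// Simplified sketch of the streaming weighted-reservoir update behind
// ReSTIR-style resampling. Each candidate light sample is seen once; the
// reservoir keeps one sample with probability proportional to its weight.
struct Reservoir {
    int   sample = -1;   // index of the chosen candidate
    float wSum   = 0.f;  // running sum of candidate weights
    int   M      = 0;    // number of candidates seen

    void update(int candidate, float weight, std::mt19937& rng) {
        wSum += weight;
        ++M;
        std::uniform_real_distribution<float> u(0.f, 1.f);
        if (u(rng) * wSum < weight)   // keep candidate with prob. weight / wSum
            sample = candidate;
    }
};

int main() {
    std::mt19937 rng(42);
    Reservoir r;
    // Hypothetical per-candidate weights (e.g. unshadowed contribution estimates).
    const float weights[] = {0.1f, 2.0f, 0.5f, 4.0f, 0.2f};
    for (int i = 0; i < 5; ++i)
        r.update(i, weights[i], rng);
    std::printf("Chosen sample: %d out of %d candidates\n", r.sample, r.M);
}
```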

The technology has already been used in games such as "Cyberpunk 2077" and in mods for other titles, demonstrating its potential for game developers. NVIDIA has also scheduled a webinar to discuss ReSTIR GI's real-time capabilities, including a lighting workflow demonstration.

The update keeps the NvRTX branch of Unreal Engine 5 at the forefront of real-time rendering on RTX hardware, offering developers enhanced ray tracing and path tracing options.

AMD's New RDNA4 Architecture Promises Significant Improvements in Ray Tracing

According to the latest leaks from insider Kepler_L2, known for accurate AMD product forecasts, the company has made significant changes to its upcoming RDNA4 graphics architecture, particularly in hardware-accelerated ray tracing. Unlike RDNA3, which only iterated on RDNA2's ray tracing hardware, RDNA4 is said to feature an entirely new ray tracing block.

[Image: AMD RDNA4]

However, despite the anticipated improvements, AMD has reportedly canceled the large Navi4 chips. That means the RDNA4 ray tracing innovations will only appear in mid-range models, leaving enthusiasts to wait for the next generation, RDNA5, for a new flagship.

Rumors suggest that the PlayStation 5 Pro will use RT acceleration based on RDNA4. In particular, the PS5 Pro is expected to support shader-driven traversal of an 8-wide bounding volume hierarchy (BVH8), whereas AMD's current RT hardware works with 4-wide BVH nodes (BVH4). Testing eight child boxes per node instead of four could theoretically double traversal throughput per cycle.
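
A back-of-the-envelope sketch of why node width matters: wider nodes mean more box tests per traversal step and a shallower tree. The primitive count below is hypothetical, and real BVHs are neither full nor balanced.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Illustrative only: depth of a balanced b-wide tree over N primitives.
    const double primitives = 1e6;  // hypothetical scene size
    for (int width : {4, 8}) {
        double depth = std::ceil(std::log(primitives) / std::log((double)width));
        std::printf("BVH%d: %d box tests per step, ~%.0f levels to reach a leaf\n",
                    width, width, depth);
    }
    // Testing 8 boxes per cycle instead of 4 is where the rumored
    // "up to 2x throughput per cycle" figure comes from.
}
```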

bcrypt vs. MD5: GeForce RTX 4090 Tested for Password Protection

Hive Systems has conducted comparative tests of password hashing algorithms on NVIDIA graphics cards, including the flagship model GeForce RTX 4090. The study highlighted the differences in protection levels when using the outdated MD5 and the more modern bcrypt.

[Image: Hive Systems 2024 password cracking table (MD5)]

The tests showed that a GeForce RTX 4090 could crack an MD5-hashed password in just one hour, whereas brute-forcing the same password hashed with bcrypt would take approximately 99 years. This underscores bcrypt's significant advantage for protecting stored credentials: hashes are not decrypted but guessed, and bcrypt makes every guess expensive.

The report also estimates that a cluster of 10,000 NVIDIA A100 accelerators could crack an MD5-hashed password in about one second, demonstrating how inadequate the algorithm is against modern cracking hardware. To keep data safe, companies layer multiple defenses, with strong password hashing being just one of them.
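
Taking those estimates at face value, the gap is easy to quantify. A minimal sketch using only the numbers quoted above:

```cpp
#include <cstdio>

int main() {
    // Using only the figures quoted above for the same password on one RTX 4090.
    const double md5Hours    = 1.0;                  // ~1 hour to crack MD5
    const double bcryptHours = 99.0 * 365.25 * 24.0; // ~99 years for bcrypt
    std::printf("bcrypt costs ~%.0fx more GPU time to brute-force here\n",
                bcryptHours / md5Hours);             // roughly 870,000x
}
```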

NVIDIA Strengthens Its Position in the Premium AI PC Segment: New Perspectives and Capabilities

NVIDIA continues to push its lead in the artificial intelligence sector with the new Premium AI PC ecosystem, which is positioned to significantly outperform current NPUs. With an expanding catalog of AI-accelerated applications (over 500) and an installed base of more than 100 million RTX users, the company is focused on further strengthening its market position.

[Image: NVIDIA RTX Premium AI PC]

NVIDIA offers a broad range of tools optimized for RTX GPUs, including Chat With RTX and TensorRT, which greatly enhance AI performance. The company is also actively developing technologies such as DLSS 3.5 and 3.7 to improve game graphics, while the RTX Remix platform lets modders remaster old games.

RTX-based systems boast performance in the hundreds of TOPS, far surpassing NPU capabilities. Such metrics make RTX GPUs the preferred choice for users who need high performance for AI tasks. NVIDIA emphasizes that GPUs provide not only general computing capabilities but also specialized tensor cores for AI processing.

This year, PC manufacturers are expected to significantly enhance NPU capabilities to 45-50 TOPS, but even these figures cannot compare with the performance of RTX GPUs. This puts NVIDIA in a favorable position to further develop the AI PC market, offering users not only more performance but also a wide range of applications that facilitate the integration of AI into everyday tasks.
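
For a rough sense of scale, a sketch comparing the two figures; the RTX number is an assumed stand-in for "hundreds of TOPS," and actual throughput varies by GPU model and precision.

```cpp
#include <cstdio>

int main() {
    // Order-of-magnitude comparison using the figures in the article; the RTX
    // value is an illustrative assumption, not a measured specification.
    const double npuTops = 50.0;   // upper end of the 45-50 TOPS NPU estimate
    const double rtxTops = 500.0;  // assumed round figure for a high-end RTX GPU
    std::printf("RTX advantage: ~%.0fx\n", rtxTops / npuTops);  // ~10x
}
```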

Release Details and System Requirements for Senua's Saga: Hellblade II

The release of Senua's Saga: Hellblade II, developed by Ninja Theory and published by Microsoft, is set for May 21, exclusively on PC and Xbox Series X|S platforms. The game will be available on Xbox Game Pass from day one and will offer Russian subtitles, catering to the Russian-speaking audience. The pre-order price in Russia is set at 3119 rubles, and pre-orders have opened three weeks ahead of the official launch.

[Image: Senua's Saga: Hellblade II on Steam]

Minimum system requirements for playing at 1080p and 30 FPS on PC include an Intel Core i5-8400 or AMD Ryzen 5 2600 processor, 16 GB of RAM, an NVIDIA GeForce GTX 1070, AMD Radeon RX 5700, or Intel Arc A580 graphics card, and at least 70 GB of free space on an SSD. The requirements were published on the game's Steam page alongside the start of pre-orders.

NVIDIA to Dominate 2024 AI Market with Estimated $40 Billion in GPU Sales

NVIDIA continues to strengthen its position in the artificial intelligence market, forecasting AI accelerator sales of around $40 billion in 2024. This is nearly 80 times more than Intel and 11.4 times more than AMD, underscoring its leadership in the industry.

[Image: NVIDIA AI]

According to Bloomberg Technology, NVIDIA’s competitors, such as AMD and Intel, are expected to see much smaller revenues from their AI devices — $3.5 billion and $500 million, respectively. This highlights that NVIDIA is not only maintaining but also increasing its lead over its competitors in the AI accelerator market.

NVIDIA has also showcased its latest AI achievement, the Blackwell GPUs, priced between $30,000 and $40,000 per unit and ranked among the most powerful in the world. The launch is part of the company's strategy to maintain technological supremacy in a highly competitive field.

