
Inspur AI Servers Demonstrate Leading Performance In The Latest MLPerf Training v1.0 Benchmarks

Inspur improved on its results from previous MLPerf Training benchmarks, setting four single-node performance records in Image Classification, NLP, Object Detection (lightweight), and Recommendation.

Recently, MLCommons, a well-known open engineering consortium, released new results for MLPerf Training v1.0, the organization's machine learning training performance benchmark suite. Inspur topped the single-node performance rankings in four of the eight tasks in the Closed division of MLPerf Training v1.0.


MLPerf is the leading industry benchmark for AI performance, first developed in 2018. Inspur is a founding member of MLCommons, alongside more than 50 other leading organizations and companies from across the artificial intelligence field. MLPerf Training v1.0 measures the time it takes to train machine learning models to a standard quality target across a variety of tasks, including Image Classification (ResNet), Image Segmentation (U-Net3D), Object Detection (lightweight, SSD), Object Detection (heavyweight, Mask R-CNN), Speech Recognition (RNN-T), NLP (BERT), Recommendation (DLRM), and Reinforcement Learning (MiniGo), each with both Closed and Open divisions.
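MLPerf Training's core metric is time-to-train: the wall-clock time until a model first reaches a fixed quality target. A minimal sketch of that measurement loop is below; the helper and the toy accuracy metric are illustrative stand-ins, not MLPerf's reference code:

```python
import time

def time_to_train(train_step, evaluate, target_quality, max_steps=10_000):
    """Train until the model first reaches a fixed quality target.

    Returns elapsed wall-clock seconds, or None if the target is never met.
    This mirrors the time-to-train idea behind MLPerf Training scores.
    """
    start = time.perf_counter()
    for _ in range(max_steps):
        train_step()
        if evaluate() >= target_quality:
            return time.perf_counter() - start
    return None

# Toy stand-ins: each "step" nudges a fake accuracy metric upward.
state = {"acc": 0.0}
elapsed = time_to_train(
    train_step=lambda: state.__setitem__("acc", state["acc"] + 0.01),
    evaluate=lambda: state["acc"],
    target_quality=0.759,  # the ResNet/ImageNet target accuracy cited below
)
```

In the real benchmark, a lower time-to-train at the same quality target is what counts as a record.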

Inspur ranked first in the Closed-division training tasks of Image Classification (ResNet), NLP (BERT), Object Detection (SSD), and Recommendation (DLRM), with the Inspur NF5688M6 achieving the best single-node performance in ResNet, DLRM, and SSD, and the NF5488A5 in BERT.

With its ability to optimize both software and hardware, Inspur dramatically improved its single-node performance on the MLPerf Training benchmark. Compared with its results in the MLPerf Training v0.7 benchmark in 2020, Inspur set single-node performance records in Image Classification, NLP, Object Detection, and Recommendation by shortening the training time of each model by 17.95%, 56.85%, 18.61%, and 42.64% respectively, clearly demonstrating the value of using top-level AI servers to improve the efficiency of AI model training.
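For context, a percentage reduction in training time translates into a multiplicative speedup. A short calculation using the figures above (the helper function is ours, not part of MLPerf):

```python
def speedup_from_reduction(reduction_pct):
    """Convert a percentage reduction in training time into a speedup factor.

    E.g. cutting training time by 50% means training finishes 2x faster.
    """
    return 1.0 / (1.0 - reduction_pct / 100.0)

# Single-node training-time reductions cited above, v1.0 vs. v0.7:
reductions = {
    "Image Classification (ResNet)": 17.95,
    "NLP (BERT)": 56.85,
    "Object Detection (SSD)": 18.61,
    "Recommendation (DLRM)": 42.64,
}
for task, pct in reductions.items():
    print(f"{task}: {speedup_from_reduction(pct):.2f}x faster")
```

By this measure, the 56.85% reduction on BERT corresponds to training roughly 2.3x faster than the prior submission.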


Inspur’s success in the MLPerf benchmark lies in the strength of its system design and full-stack optimization of the AI computing system. On the hardware side, Inspur made comprehensive optimizations and in-depth calibrations to data transmission between NUMA nodes and GPUs to ensure non-blocking I/O during training. In addition, Inspur developed an advanced liquid-cooled cold-plate cooling system for the A100 GPU at 500W TDP (the highest power in the industry) to ensure that the GPU can function properly at full capacity, significantly increasing the performance of the AI computing system.
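As an illustration of why NUMA placement matters: binding a training process to the same NUMA node its GPU is attached to avoids cross-socket memory traffic. On Linux, a PCIe device's NUMA affinity is exposed through sysfs; the helper below is a hypothetical sketch of reading it (production setups typically inspect and pin affinity with tools such as `numactl` or `nvidia-smi topo -m` rather than code like this):

```python
def gpu_numa_node(pci_addr):
    """Read the NUMA node a PCIe device (e.g. a GPU) is attached to.

    `pci_addr` is a PCI address like "0000:3b:00.0". Returns the node
    number, or -1 when the platform does not report affinity or the
    device does not exist. Hypothetical helper for illustration only.
    """
    path = f"/sys/bus/pci/devices/{pci_addr}/numa_node"
    try:
        with open(path) as f:
            return int(f.read().strip())
    except OSError:
        return -1
```

Once the node is known, a launcher can pin the process to it, e.g. `numactl --cpunodebind=N --membind=N python train.py`, which is one common way to keep host-to-GPU transfers on the local socket.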


In keeping with the philosophy of MLCommons, Inspur contributed the optimized solutions it explored in the benchmark back to the community to accelerate innovation in machine learning and AI technology.

During the MLPerf Training v0.7 benchmark in 2020, Inspur developed an optimization to boost the convergence of ResNet: on ImageNet, the solution reached the target accuracy of 75.9% with only 85% of the iterations, improving training efficiency by 15%. Since then, the optimization has been adopted by community members and widely used in the MLPerf Training v1.0 benchmark, an important reason for the significant improvement in ResNet results this year.

Since 2020, Inspur has participated in four MLPerf benchmarks: Training v0.7, Inference v0.7, Inference v1.0, and Training v1.0. In this year’s MLPerf Inference v1.0, Inspur set 11 records in the data center Closed division and 7 records in the edge Closed division, making it the company with the highest number of top results.

As a leading AI computing company, Inspur is committed to the R&D and innovation of AI computing, resource, and algorithm platforms. It also works with other leading AI enterprises to promote the industrialization of AI and the development of AI-driven industries through its “Meta-Brain” technology ecosystem.


The post Inspur AI Servers Demonstrate Leading Performance In The Latest MLPerf Training v1.0 Benchmarks appeared first on WebsiteHost.Review.
