FEATURED INSIGHT OF THE WEEK

Five Steps to Next-Generation Incident Preparedness and Response

Recent disruptions associated with the COVID-19 pandemic have spurred a concerning trend: Cybersecurity Dive reports that cyberthreats have increased for 86% of organizations in the U.S. and for 63% of companies in other countries.

      8 minute read

Insights & Thought Leadership

GPUs in University Research: Powering the Next Era of Discovery

      Universities are increasingly adopting Graphics Processing Units (GPUs) to accelerate research in fields like medicine, climate science, and artificial intelligence, which depend on processing massive datasets. Their parallel processing capabilities enable breakthroughs in complex tasks such as protein folding, large-scale climate modelling, and analysing cultural texts. The NVIDIA H100 GPU is a key technology in this shift, offering significant improvements in speed, memory bandwidth, and energy efficiency, allowing researchers to undertake larger projects. Beyond research, GPUs are being integrated into university curricula to prepare students for the modern AI workforce. While institutions face challenges like high costs and management complexity, recommendations include investing in shared clusters, forming vendor partnerships, and adopting hybrid on-premises and cloud models to maximise investment and foster innovation.

      14 minute read

      Energy and Utilities

Unlocking the Power of NVIDIA Networking Software Tools for AI and HPC

      Networking has become a critical foundation for modern AI, high-performance computing, and cloud data centers. Training large language models, running simulations, or supporting real-time applications requires thousands of GPUs and CPUs working together. To make this possible, the infrastructure must move massive amounts of data quickly and reliably.
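
To make the scale concrete, here is a rough sketch (not taken from the article) of how long a single gradient synchronisation might take on an idealised ring all-reduce; the model size, GPU count, and link speeds are illustrative assumptions.

    # Back-of-envelope estimate of one gradient synchronisation step for a
    # large model, assuming an idealised ring all-reduce. All figures below
    # (model size, GPU count, link speeds) are illustrative assumptions,
    # not measurements of any specific NVIDIA system.

    def allreduce_seconds(param_count: float, bytes_per_param: int,
                          num_gpus: int, link_gbps: float) -> float:
        """Idealised ring all-reduce: each GPU transfers ~2*(N-1)/N of the
        gradient volume over its link; ignores latency and compute overlap."""
        gradient_bytes = param_count * bytes_per_param
        traffic_per_gpu = 2 * (num_gpus - 1) / num_gpus * gradient_bytes
        link_bytes_per_s = link_gbps * 1e9 / 8  # Gbit/s -> bytes/s
        return traffic_per_gpu / link_bytes_per_s

    # Example: 70B-parameter model, fp16 gradients, 1024 GPUs.
    for gbps in (100, 400):  # slower vs. faster fabric, purely illustrative
        t = allreduce_seconds(70e9, 2, 1024, gbps)
        print(f"{gbps} Gb/s link: ~{t:.1f} s per full gradient all-reduce")

The point of the sketch is only that synchronisation time scales inversely with link bandwidth, which is why the fabric is treated as part of the compute platform rather than an afterthought.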

      10 minute read

      Datacenter

NVIDIA Virtual Applications (vApps): Rethinking App Delivery for High-Performance Enterprises

      In an era defined by distributed workforces, exponential data growth, and the integration of artificial intelligence into every workflow, IT managers face a monumental challenge: delivering complex, graphics-intensive applications securely and performantly to any device, anywhere. The traditional model of installing software on every endpoint is no longer secure, scalable, or financially viable. Modern operational realities demand a new architectural approach.

      9 minute read

      Datacenter

The Carbon Footprint of GPUs: Balancing AI Performance and Sustainability

      GPUs are essential engines for modern artificial intelligence, but their rapid adoption raises significant environmental concerns due to their carbon footprint. This footprint extends beyond direct electricity use, encompassing the entire lifecycle: energy-intensive manufacturing, power consumption during AI training and inference, and the substantial energy needed for data centre cooling. Modern GPUs like the NVIDIA H100 are designed for greater energy efficiency, offering architectural improvements that deliver about three times the performance-per-watt of previous models. However, technology alone is insufficient. Best practices are critical for reducing emissions, including right-sizing workloads, maximising utilisation with multi-instance GPU technology, choosing data centres powered by renewable energy, and adopting carbon-aware scheduling. Achieving sustainability requires a combined approach of efficient hardware and responsible operational planning.
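
As a rough illustration of the operational side of that footprint, the sketch below estimates emissions as power × time × grid carbon intensity; the power draw, runtime, and intensity values are assumptions chosen only to show how carbon-aware placement changes the result.

    # Minimal sketch of an operational-emissions estimate for a GPU workload:
    # energy (kWh) = average power (kW) x hours; emissions = energy x grid
    # carbon intensity (kg CO2e per kWh). All numeric values below are
    # illustrative assumptions, not measured figures.

    def workload_emissions_kg(avg_power_kw: float, hours: float,
                              grid_kg_co2e_per_kwh: float) -> float:
        return avg_power_kw * hours * grid_kg_co2e_per_kwh

    # Same assumed training job on a carbon-heavy grid vs. a mostly renewable one.
    job_kw, job_hours = 7.0, 72.0
    print(workload_emissions_kg(job_kw, job_hours, 0.45))  # ~227 kg CO2e
    print(workload_emissions_kg(job_kw, job_hours, 0.05))  # ~25 kg CO2e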

      12 minute read

      Datacenter

NVIDIA Virtual PC (vPC) and the Role of DGX H200 in Enterprise Virtualization

      Modern enterprises struggle with traditional, CPU-only Virtual Desktop Infrastructure (VDI), which delivers poor performance for applications like Microsoft Teams, Zoom, and CAD software. This results in lag and an inconsistent user experience. NVIDIA Virtual PC (vPC) solves this by adding GPU acceleration to virtual desktops, creating a smooth, responsive experience nearly indistinguishable from a physical workstation. This enhances security by keeping data centralised in the data centre, offers operational flexibility to support diverse users like knowledge workers and engineers, and improves cost-efficiency by extending the life of older hardware. When deployed on an NVIDIA DGX H200 server, enterprises can support a high density of users without performance loss. This powerful combination allows organisations to run demanding VDI and even AI workloads on a single, secure, and adaptable platform.
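
A back-of-envelope sketch of what "high density of users" can mean in practice, assuming each vPC desktop is allocated a fixed slice of GPU frame-buffer memory; the per-desktop profile size is an assumption, not NVIDIA sizing guidance.

    # Illustrative VDI density estimate: sessions per GPU if each virtual
    # desktop receives a fixed frame-buffer allocation. Profile size is an
    # assumption; real sizing also depends on CPU, RAM, and licensing.
    GPU_MEMORY_GB = 141      # per-GPU memory cited for the H200
    GPUS_PER_SERVER = 8
    PROFILE_GB = 2           # assumed frame-buffer per knowledge-worker desktop

    sessions_per_gpu = GPU_MEMORY_GB // PROFILE_GB
    print(f"{sessions_per_gpu} sessions/GPU, "
          f"{sessions_per_gpu * GPUS_PER_SERVER} sessions/server "
          "(before CPU, RAM, and licensing limits)")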

      12 minute read

      Datacenter

NVIDIA RTX Virtual Workstation (vWS) Review: Bridging Creative Workflows and DGX H200 Power

NVIDIA RTX Virtual Workstation (vWS) is software that delivers RTX-grade graphics and AI performance from a data centre to any device, enabling secure, remote workflows for professionals in design, engineering, and architecture who rely on graphics-intensive applications such as CAD and 3D modelling. Pairing vWS with the NVIDIA DGX H200 unlocks premium performance: the system's high-memory GPUs (141 GB HBM3e), high-bandwidth interconnects, and scalability allow it to handle complex, demanding workloads for multiple concurrent users, making the combination ideal for remote rendering studios and large-scale engineering simulation. While this setup solves many remote work bottlenecks by centralising GPU resources, trade-offs include a strong dependency on network quality, high upfront costs, and the need to manage shared GPU resources.

      4 minute read

      Datacenter

H200 Deployment Tools: Building Robust AI Infrastructures with NVIDIA’s Tools & Best Practices

      Deploying NVIDIA H200 GPUs for AI or high-performance computing requires a comprehensive suite of tools to manage their inherent complexity. The H200's advanced features, while powerful, introduce challenges in hardware configuration, software compatibility, and multi-node scaling that can cause performance bottlenecks. A robust deployment strategy relies on several categories of tools, including hardware validation, driver management, orchestration frameworks like Kubernetes, continuous monitoring, and security. Following best practices is crucial, such as using staged deployments, automating configuration, maintaining consistent software versions, and performing validation tests. Utilising resources like NVIDIA's DGX BasePOD guides is highly recommended. Ultimately, these tools and processes form a critical control plane, ensuring the full performance, reliability, and value of the H200 investment are realised.
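
As one small example of the hardware-validation step, the sketch below queries local GPUs with nvidia-smi and flags mismatches before a node joins the cluster; the expected GPU count and driver version are placeholder assumptions.

    # Minimal validation sketch: query local GPUs with nvidia-smi and flag
    # mismatches before adding a node to the cluster. EXPECTED_GPUS and
    # EXPECTED_DRIVER are placeholder assumptions for illustration.
    import subprocess

    EXPECTED_GPUS = 8                # assumed GPUs per node
    EXPECTED_DRIVER = "550"          # assumed driver major version

    def validate_node() -> list[str]:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,driver_version,memory.total",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True).stdout
        gpus = [line.split(", ") for line in out.strip().splitlines()]
        problems = []
        if len(gpus) != EXPECTED_GPUS:
            problems.append(f"expected {EXPECTED_GPUS} GPUs, found {len(gpus)}")
        for name, driver, _mem in gpus:
            if not driver.startswith(EXPECTED_DRIVER):
                problems.append(f"{name}: driver {driver} != {EXPECTED_DRIVER}.x")
        return problems

    if __name__ == "__main__":
        issues = validate_node()
        print("node OK" if not issues else "\n".join(issues))

In practice this kind of check would run from configuration-management or orchestration tooling so every node is validated the same way before workloads land on it.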

      6 minute read

      Datacenter

NVIDIA DGX H200 Power Consumption: What You Absolutely Must Know

The NVIDIA DGX H200 is a powerful, factory-built AI supercomputer designed for complex AI and research tasks. Its high performance, driven primarily by eight H200 GPUs, comes with a maximum power consumption of 10.2 kilowatts (kW). This significant power draw requires specialised data centre infrastructure, including dedicated high-voltage, three-phase power circuits. All the energy consumed is converted into heat, meaning the system also produces 10.2 kW of thermal output. Because of this high heat density, liquid cooling is the recommended solution over traditional air cooling. Despite its power needs, the DGX H200 is highly efficient, delivering roughly twice the AI computational work per watt compared to the previous generation. This efficiency makes it a worthwhile investment for large enterprises and research institutions that require top-tier performance.
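
Some quick arithmetic on the 10.2 kW figure shows why the infrastructure requirements follow; the duty cycle and electricity price below are illustrative assumptions, not figures from the article.

    # Worked arithmetic on the 10.2 kW maximum draw cited above. The duty
    # cycle and electricity price are illustrative assumptions.
    MAX_DRAW_KW = 10.2          # from the article
    duty_cycle = 0.8            # assumed average load vs. peak
    price_per_kwh = 0.12        # assumed electricity price, USD

    daily_kwh = MAX_DRAW_KW * duty_cycle * 24
    annual_kwh = daily_kwh * 365
    print(f"~{daily_kwh:.0f} kWh/day, ~{annual_kwh:,.0f} kWh/year")
    print(f"~${annual_kwh * price_per_kwh:,.0f}/year at the assumed tariff")

    # Essentially all of that electrical input leaves the rack as heat,
    # so the cooling plant must continuously remove up to 10.2 kW as well.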

      14 minute read

      Energy and Utilities

NVIDIA DGX SuperPOD with H200: Building Enterprise-Scale AI Infrastructure

      The NVIDIA DGX SuperPOD is a purpose-built AI supercomputing system for enterprises, research institutions, and governments that need to operate at an industrial scale. As a turnkey, engineered solution, it integrates high-performance compute, networking, and storage to handle workloads that exceed the capacity of traditional data centres, such as training trillion-parameter models. Its modular architecture allows for scalable growth, enabling organisations to expand their infrastructure as AI requirements increase. The system is powered by NVIDIA DGX H200 systems, which feature GPUs with 141 GB of high-bandwidth memory, offering significant performance and efficiency gains. Managed by the NVIDIA Base Command software stack, the DGX SuperPOD simplifies deployment and operations, enabling organisations to build "AI factories" for the future of generative and multi-modal AI.
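
A rough sizing sketch shows why trillion-parameter training exceeds a single node: using the common rule of thumb of roughly 16 bytes of weights, gradients, and optimizer state per parameter in mixed-precision training (an assumption, not a figure from the article), the memory footprint alone spans many GPUs.

    # Rough sizing sketch: why trillion-parameter training needs many nodes.
    # Assumes fp16 parameters plus gradients and fp32 Adam optimizer state,
    # roughly 16 bytes per parameter in total (a common rule of thumb).
    import math

    PARAMS = 1e12
    BYTES_PER_PARAM = 16                 # weights + gradients + optimizer state
    GPU_MEMORY_GB = 141                  # H200 HBM capacity cited above
    GPUS_PER_NODE = 8

    total_gb = PARAMS * BYTES_PER_PARAM / 1e9
    min_gpus = math.ceil(total_gb / GPU_MEMORY_GB)
    print(f"~{total_gb/1e3:.0f} TB of state -> at least {min_gpus} GPUs "
          f"(~{math.ceil(min_gpus / GPUS_PER_NODE)} nodes) before activations")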

      14 minute read

      Energy and Utilities

NVIDIA Pre-Trained Models: Accelerating AI Adoption with H200

      NVIDIA pre-trained models, accessible via the NGC Catalog, are accelerating AI adoption in enterprises by offering ready-to-use solutions across computer vision, natural language processing, and generative AI. These models significantly reduce training time and compute costs, allowing organisations to deploy accurate AI systems faster and more affordably than building from scratch. The NVIDIA H200 GPU further enhances performance, providing the high memory bandwidth and computational power required for large-scale pre-trained and foundation models. This powerful combination enables industries like healthcare and finance to implement AI for tasks such as medical imaging analysis, fraud detection, and customer service automation, democratising advanced AI for a broader range of organisations.
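
The pattern the article describes, starting from published weights and adapting only a small task-specific head, looks roughly like the sketch below; torchvision's ImageNet ResNet-50 is used purely as a stand-in rather than a model from the NGC Catalog, and the 5-class task is hypothetical.

    # Minimal sketch of the pre-trained-model pattern: reuse published
    # weights and train only a small task-specific head. torchvision's
    # ResNet-50 is a stand-in here, not an NGC Catalog workflow.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

    # Freeze the pre-trained backbone so only the new head is trained.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classifier for a hypothetical 5-class downstream task.
    model.fc = nn.Linear(model.fc.in_features, 5)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    print(sum(p.numel() for p in model.parameters() if p.requires_grad),
          "trainable parameters out of",
          sum(p.numel() for p in model.parameters()))

Training a few thousand head parameters instead of the full network is what drives the reduction in training time and compute cost described above.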

      13 minute read

      Datacenter
