Repos
K8s-test-infra
A set of Python bindings to C++ libraries that provide full hardware acceleration for video decoding, encoding and GPU-accelerated color space and pixel format conversions
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
CUDA Templates and Python DSLs for High-Performance Linear Algebra
A PyTorch extension: tools for easy mixed precision and distributed training in PyTorch
Samples for CUDA developers demonstrating features in the CUDA Toolkit
PyTorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.
New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos
The LLM vulnerability scanner
Synthesizing and manipulating 2048x1024 images with conditional GANs
Transformer-related optimization, including BERT and GPT
A Python framework for accelerated simulation, data generation and spatial computing.
NVIDIA Isaac GR00T N1.6 - A Foundation Model for Generalist Robots.
PersonaPlex code.
Tacotron 2 - PyTorch implementation with faster-than-realtime inference
Ongoing research training transformer models at scale
Style transfer, deep learning, feature transform
State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
Build and run Docker containers leveraging NVIDIA GPUs
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that orchestrate the inference execution in a performant way.
NVIDIA Linux open GPU kernel module source