You can now use AWS to run GPU workloads -- in containers, on-premises

ECS Anywhere adds more capabilities...

Want to run GPU-based workloads and keep the associated data on-premises -- perhaps because of existing investment in compute capacity for machine learning or other workloads -- but manage it all through AWS? That is now possible thanks to an update to Amazon ECS (Elastic Container Service) Anywhere, which lets customers pin physical GPUs to containers for workload isolation and optimal performance.

Customers using ECS Anywhere (a service launched in May 2021 that lets customers run and manage container-based applications on-premises, including on VMs, bare metal servers, and other customer-managed infrastructure) can now add an "enable-gpu" flag to the Amazon ECS Anywhere installation script. The move lets AWS users run GPU-powered containers on their own compute hardware using the Amazon ECS APIs in the AWS Region, without running and operating their own container orchestrators -- while keeping data on-premises.
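For illustration, here is a minimal sketch of what registering a GPU-enabled task definition for ECS Anywhere might look like using boto3. The family name, container image, region, and GPU count are assumptions for the example, not details from AWS's announcement.

```python
# Hypothetical sketch: register an ECS task definition that requests a GPU
# for an ECS Anywhere (EXTERNAL launch type) workload. Names and values
# below are illustrative assumptions.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # assumed region

response = ecs.register_task_definition(
    family="gpu-inference-onprem",          # hypothetical family name
    requiresCompatibilities=["EXTERNAL"],   # ECS Anywhere tasks run on EXTERNAL capacity
    networkMode="bridge",
    containerDefinitions=[
        {
            "name": "inference",
            "image": "nvidia/cuda:11.4.2-base-ubuntu20.04",  # example image
            "memory": 2048,
            "essential": True,
            # Pin one physical GPU to this container.
            "resourceRequirements": [{"type": "GPU", "value": "1"}],
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```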

The move is the latest example of a hyperscaler service acknowledging that certain workloads are unlikely to ever run in external cloud regions -- but that hybrid cloud setups often suffer from complex management interfaces, and that orchestrating workloads from a single portal may prove attractive in otherwise heterogeneous compute environments. (AWS customers running in the cloud, meanwhile, can choose to have the ECS service deploy their workloads onto EC2 instances, or use Fargate for serverless Containers-as-a-Service.)

As application delivery networking specialist F5 (which offers automated load balancing support for ECS Anywhere) notes: "There’s a variety of use cases for ECS from compute driven tasks (crunch numbers) to network based tasks (run a NGINX webserver). When you need to connect to a service like a webserver you can connect to the IP and Port that is exposed by ECS to reach the service. A simple example would be to connect to a webserver on the IP “10.1.10.10” and the Port “8080” that would map to the container port of “80” on the container."
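As a rough sketch of the port mapping F5 describes: the 10.1.10.10 address and the 8080-to-80 mapping come from the quote, while the task-definition fragment below is an assumption added for illustration.

```python
# Sketch of the host-to-container port mapping in F5's example: the host
# exposes port 8080, which maps to port 80 inside the NGINX container.
container_definition = {
    "name": "web",
    "image": "nginx:latest",
    "memory": 512,
    "essential": True,
    "portMappings": [{"hostPort": 8080, "containerPort": 80, "protocol": "tcp"}],
}

# Reaching the service from elsewhere on the network is then an ordinary
# HTTP call against the IP and port exposed by ECS.
import requests

resp = requests.get("http://10.1.10.10:8080/")  # IP and port from F5's example
print(resp.status_code)
```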

For those running GPU-powered compute, meanwhile, ECS Anywhere may be handy for machine learning, 3D visualisation, image processing, and big data workloads -- without having to run and maintain orchestrators like Docker Swarm or Kubernetes themselves.
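To round out the picture, a hypothetical launch of such a GPU task on on-premises capacity via the ECS API; the cluster and task-definition names mirror the earlier sketch and are assumptions, not values from the announcement.

```python
# Hypothetical sketch: run the GPU task on customer-managed hardware that has
# been registered with an ECS cluster through the ECS Anywhere agent.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # assumed region

response = ecs.run_task(
    cluster="onprem-cluster",               # assumed cluster name
    taskDefinition="gpu-inference-onprem",  # family registered in the earlier sketch
    launchType="EXTERNAL",                  # ECS Anywhere (on-premises) capacity
    count=1,
)
for task in response["tasks"]:
    print(task["taskArn"], task["lastStatus"])
```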

See also: Wells Fargo unveils new multi-cloud infrastructure strategy: Azure, GCP… and GreenLake?
