GPU Transcription Inference Container Release Notes
GPU Inference Containers are released in sync with the Real-time/Batch Containers they support. An Inference Container is only guaranteed to work with a Real-time/Batch Container that has the same version number.
For full details and an implementation guide, see GPU Inference Container.
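The same-version rule above can be sketched as a simple tag comparison. This is a minimal illustration only; the image names used here are hypothetical, not the actual repository names.

```python
def versions_match(inference_image: str, transcriber_image: str) -> bool:
    """Return True if two container image references carry the same version tag.

    Per the release notes, an Inference Container should only be paired with
    a Batch/Real-time Container of the same version.
    """
    inference_tag = inference_image.rsplit(":", 1)[-1]
    transcriber_tag = transcriber_image.rsplit(":", 1)[-1]
    return inference_tag == transcriber_tag

# Hypothetical image names, for illustration:
print(versions_match("asr-inference-gpu:10.5.1", "asr-transcriber:10.5.1"))  # True
print(versions_match("asr-inference-gpu:10.5.1", "asr-transcriber:10.4.0"))  # False
```

A check like this can be run in a deployment pipeline before starting the containers together.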
10.5.1
- Compatible with version 10.5.1 of the Batch and Real-time Containers.
10.5.0
- Compatible with version 10.5.0 of the Batch and Real-time Containers.
- All 49 languages are now available on GPU, including support for Persian (fa)
- For the Standard operating point, all 49 languages are available on a single GPU image
- To access additional language configurations for the Enhanced operating point, raise a ticket with our Support team
10.4.0
- Compatible with version 10.4.0 of the Batch and Real-time Containers.
10.3.0
- Compatible with version 10.3.0 of the Batch and Real-time Containers.
10.2.0
- GPU Inference Container support for French, German and Spanish, with significant accuracy, efficiency and speed improvements compared to the CPU Containers
- Optimized memory usage, especially for the Enhanced operating point
- Security patches
10.1.0
- Compatible with version 10.1.0 of the Batch and Real-time Containers.
10.0.0
- English only
- Supports both Batch and Real-time transcription
- Significant accuracy, efficiency and speed improvements
Known issues
- No support for secure gRPC (TLS)
9.4.x (beta)
Initial beta release.
Known issues
- Support for Batch transcription only, not Real-time.
- Supports English only.
- No support for Custom Dictionary.
- No support for secure gRPC (TLS).