50 St Andrew’s St, Cambridge CB2 3AH, UK
Myrtle.ai optimizes machine learning inference workloads for multiple applications in data centers and at the edge. The company’s products, expertise and IP ensure all available compute resources are used efficiently in terms of cost, throughput, latency and energy. Combining its skills in machine learning, hardware acceleration and software design, Myrtle.ai deploys complex deep learning models on heterogeneous platforms to achieve the highest levels of efficiency.
The Myrtle.ai solution is applicable to a wide range of applications, including recommendation systems, speech, vision and finance. The benefits include:
- Recommendation Models: increasing compute density by up to 10× on existing infrastructure, halving capital expenditure and energy consumption
- Speech Synthesis: 16× higher throughput than a GPU solution
- Automatic Speech Recognition (ASR): 2.1× higher performance per watt and 29× lower latency than a GPU solution
- Natural Language Processing (NLP): 2.2× lower cost and 7.7× smaller carbon footprint than a CPU-only solution
- The SEAL Accelerator™ is an OCP M.2 Accelerator for Recommendation Models and other memory-intensive workloads. SEAL can eliminate the system memory constraint experienced in many of these models, increasing compute density by up to 10× on existing infrastructure.
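To see why recommendation inference is memory-constrained rather than compute-constrained, the sketch below is a minimal, purely illustrative example in Python (the table size, embedding dimension and batch of IDs are assumptions for the demo, not Myrtle.ai figures): a full-scale embedding table occupies gigabytes, yet each inference performs only a handful of row gathers and a trivial amount of arithmetic.

```python
import numpy as np

# Hypothetical embedding table: 10 million IDs x 64-dim float32 vectors.
NUM_IDS, DIM = 10_000_000, 64
table_bytes = NUM_IDS * DIM * 4  # ~2.56 GB for just one table

# A single inference reads only a few rows of that table...
batch_ids = np.array([3, 17, 42, 123_456])
table = np.zeros((1000, DIM), dtype=np.float32)  # tiny stand-in so the demo runs
vectors = table[batch_ids % 1000]                # gather: pure memory traffic

# ...and does almost no arithmetic afterwards (e.g. sum-pooling the vectors).
pooled = vectors.sum(axis=0)

bytes_moved = batch_ids.size * DIM * 4  # data fetched for this batch
flops = batch_ids.size * DIM            # roughly one add per gathered element
print(table_bytes, bytes_moved, flops)
```

The imbalance between table footprint and per-inference arithmetic is why adding memory capacity and bandwidth, rather than raw compute, is what lifts compute density for these workloads.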
- The MAU Accelerator™ accelerates RNNs and other DNNs with sparse layers, simultaneously achieving maximum throughput and ultra-low latency for hyperscale inference in data center applications. This enables higher-quality models to be deployed, providing better services and customer experiences, while delivering significant savings in infrastructure costs and energy consumption.
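A minimal sketch of why sparse layers translate into fewer operations, under illustrative assumptions (the 90% pruning ratio and 512×512 matrix are hypothetical, not Myrtle.ai's published numbers): a kernel that stores and multiplies only the nonzero weights produces the same result as the dense product with a fraction of the multiply-adds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pruned weight matrix: ~90% of entries forced to zero.
W = rng.standard_normal((512, 512)).astype(np.float32)
W[rng.random(W.shape) < 0.9] = 0.0
x = rng.standard_normal(512).astype(np.float32)

# Dense matrix-vector product: every weight costs one multiply-add.
dense_macs = W.size
y_dense = W @ x

# A sparse kernel touches only the stored nonzeros (simplified CSR-style gather).
nz_rows, nz_cols = np.nonzero(W)
sparse_macs = nz_rows.size
y_sparse = np.zeros(512, dtype=np.float32)
np.add.at(y_sparse, nz_rows, W[nz_rows, nz_cols] * x[nz_cols])

# Same output vector, roughly 10x fewer multiply-adds at 90% sparsity.
print(dense_macs / sparse_macs)
```

In practice the hardware must also exploit the irregular memory access pattern of the nonzeros, which is where a dedicated accelerator can keep both throughput high and latency low.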