Artificial intelligence, or AI: HAL in 2001, Google in 2022? The technology of the future or of the past? AI has many supporters extolling its benefits and many critics calling out cautions. According to some developers, it also still has gaps keeping it from being practical in the present.
Deep learning-powered advancements in AI have led to innovations that have the potential to revolutionize services, products, and consumer applications across industries such as medicine, manufacturing, transportation, communication, and retail. However, the AI efficiency gap, a situation in which hardware is unable to meet the increasing computing demands of models that are growing in size and complexity, has proven to be an obstacle to more widespread AI commercialization.
This efficiency gap means that inference is still generally bound to the cloud, where computing hardware is abundant but costs are high and concerns around data privacy and safety are prevalent. Deci, a company based in Israel, has been working on a solution and gaining traction in the market.
Deci’s deep learning platform helps data scientists eliminate the AI efficiency gap by adopting a more productive development paradigm. With the platform, AI developers can leverage hardware-aware NAS (Neural Architecture Search) to quickly build highly optimized deep learning models that are designed to meet specific production goals.
Deci claims the growing AI efficiency gap only highlights the importance of accounting for production considerations early in the development lifecycle, which can then significantly reduce the time and cost spent on fixing potential obstacles when deploying models in production. Deci’s deep learning development platform has a proven record of enabling companies of all sizes to do just that by providing them with the tools they need to successfully develop and deploy world-changing AI solutions.
The platform empowers data scientists to deliver superior performance at a much lower operational cost (up to an 80% reduction), reduce time to market from months to weeks, and enable new applications on resource-constrained hardware such as mobile phones, laptops, and other edge devices.
Deci’s deep learning development platform is powered by Deci’s proprietary AutoNAC (Automated Neural Architecture Construction) technology, an algorithmic optimization engine that empowers data scientists to build best-in-class deep learning models that are tailored for any task, data set, and target inference hardware. Deci’s AutoNAC engine democratizes NAS technology, something that until very recently was confined to academia or industry giants like Google due to its high cost.
Having a more efficient infrastructure for AI systems can make AI products qualitatively different and better, not just less expensive and faster to run. With Deci's AutoNAC, you input your AI models, data, and target hardware (whether that hardware is on the edge or in the cloud), and it guides you in finding alternative models that deliver similar predictive accuracy with massively improved efficiency.
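The workflow is easier to picture with a concrete, if simplified, measurement step. The sketch below is plain PyTorch, not Deci's API; the ResNet-50 stand-in, the input shape, and the iteration counts are assumptions for illustration. It shows how a model's real latency can be probed on a specific target device, which is the signal a hardware-aware workflow optimizes against.

```python
# Illustrative only: a minimal latency probe of the kind a hardware-aware
# workflow relies on. This is not Deci's API; the model, device, and input
# shape are placeholder assumptions.
import time
import torch
import torchvision.models as models

def measure_latency_ms(model, input_shape=(1, 3, 224, 224), device="cpu",
                       warmup=10, iters=50):
    """Average forward-pass latency (milliseconds) on a specific device."""
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)
    with torch.no_grad():
        for _ in range(warmup):              # warm up caches / lazy init
            model(x)
        if device.startswith("cuda"):
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if device.startswith("cuda"):
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1000

baseline = models.resnet50(weights=None)     # stand-in for a customer baseline
print(f"ResNet-50 latency: {measure_latency_ms(baseline):.1f} ms")
```

Measuring on the actual target device matters because FLOP counts and parameter counts are poor proxies for real latency, which depends on memory bandwidth, operator support, and batch size on that specific hardware.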
For the Technologists:
As input, the AutoNAC process receives the customer's baseline model, the data used to train that model, and access to the target inference hardware. AutoNAC then revises the baseline's backbone layers, which carry out most of the computation, and redesigns them into an optimal sub-network. This optimization is carried out through a highly efficient predictive search over a large set of candidate architectures. During this process, AutoNAC probes the target hardware and directly optimizes runtime as measured on that specific device. The final, faster architecture is then fine-tuned on the provided data to match the baseline's accuracy, at which point it is ready for deployment.
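To make the description concrete, here is a deliberately tiny sketch of that search-then-fine-tune pattern in plain PyTorch. It is not AutoNAC: the candidate pool, the accuracy predictor, and the 60 ms latency budget are assumptions for illustration, and a real NAS engine searches a far larger architecture space with a learned performance predictor rather than a fixed lookup table.

```python
# A toy sketch of the "search, then fine-tune" pattern described above. This is
# not AutoNAC itself: the candidate pool, the accuracy proxy, and the latency
# budget are illustrative assumptions.
import time
import torch
import torchvision.models as models

def latency_ms(model, device="cpu", iters=30, shape=(1, 3, 224, 224)):
    """Measure average forward-pass latency on the target device."""
    model = model.to(device).eval()
    x = torch.randn(*shape, device=device)
    with torch.no_grad():
        for _ in range(5):
            model(x)                          # warm-up
        t0 = time.perf_counter()
        for _ in range(iters):
            model(x)
    return (time.perf_counter() - t0) / iters * 1000

def predicted_accuracy(name):
    """Stand-in for a learned accuracy predictor; real NAS systems estimate
    accuracy cheaply instead of fully training every candidate."""
    return {"resnet18": 69.8, "resnet34": 73.3, "resnet50": 76.1}[name]

# Candidate "sub-networks" (here: off-the-shelf backbones for illustration).
candidates = {name: getattr(models, name)(weights=None)
              for name in ["resnet18", "resnet34", "resnet50"]}

LATENCY_BUDGET_MS = 60.0   # production constraint on the target device
best = None
for name, model in candidates.items():
    ms = latency_ms(model)
    acc = predicted_accuracy(name)
    # Keep the most accurate candidate that still meets the latency budget.
    if ms <= LATENCY_BUDGET_MS and (best is None or acc > best[1]):
        best = (name, acc, ms)

print("Selected architecture:", best)
# In a real pipeline, the selected architecture would now be fine-tuned on the
# customer's data to recover the baseline's accuracy before deployment.
```

The key idea the sketch preserves is that selection is driven by measured runtime on the target device plus a cheap accuracy estimate, so only the winning candidate pays the full cost of training.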
Deci recently launched version 2.0 of its platform, which helps enterprises build, optimize, and deploy state-of-the-art computer vision models on any hardware and in any environment, including cloud, edge, and mobile, with strong accuracy and runtime performance. Deci also announced results for its AutoNAC-generated DeciBERT models. For natural language processing (NLP), Deci's models accelerated question-answering throughput on various Intel CPUs by 5x (depending on hardware type and quantization level) while also improving accuracy by +1.03%.
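The quantization angle is easy to demonstrate in miniature. The hedged sketch below uses a public Hugging Face question-answering checkpoint (a stand-in, not DeciBERT) and PyTorch dynamic quantization to show how INT8 inference changes CPU throughput; it illustrates the kind of measurement behind such comparisons, not Deci's actual models or benchmark setup.

```python
# Illustrative only: measuring how INT8 dynamic quantization changes CPU
# throughput for a question-answering model. The checkpoint and the sample
# question/context are assumptions; this is not DeciBERT or Deci's benchmark.
import time
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "distilbert-base-cased-distilled-squad"   # public stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
fp32_model = AutoModelForQuestionAnswering.from_pretrained(name).eval()

# Dynamic quantization converts Linear layers to INT8 for CPU inference.
int8_model = torch.quantization.quantize_dynamic(
    fp32_model, {torch.nn.Linear}, dtype=torch.qint8)

inputs = tokenizer("Who founded Deci?",
                   "Deci is an Israeli deep learning company.",
                   return_tensors="pt")

def throughput_qps(model, iters=50):
    """Questions answered per second on CPU."""
    with torch.no_grad():
        model(**inputs)                          # warm-up
        t0 = time.perf_counter()
        for _ in range(iters):
            model(**inputs)
    return iters / (time.perf_counter() - t0)

print(f"FP32: {throughput_qps(fp32_model):.1f} q/s, "
      f"INT8: {throughput_qps(int8_model):.1f} q/s")
```

The actual speedup depends heavily on the CPU's vector instruction support and the sequence length, which is why the reported gains vary by hardware type and quantization level.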
Deci collaborates with various hardware manufacturers, computer OEMs (original equipment manufacturers), and other ML (machine learning) ecosystem leaders, and is an official partner of Intel, AWS (Amazon Web Services), HPE (Hewlett Packard Enterprise), and NVIDIA, among others. The value of its development work has attracted interest from a variety of investors: the company has raised $25 million in a Series B funding round led by global software investor Insight Partners, with participation from existing investors Square Peg, Emerge, Jibe Ventures, and Fort Ross Ventures, as well as new investor ICON (Israel Collaboration Network).
The investment comes just seven months after Deci secured $21 million in Series A funding, also led by Insight Partners, bringing Deci’s total funding to $55.1 million. The funds will be used to expand Deci’s go-to-market activities, as well as further accelerate the company’s R&D efforts.