Ship the fastest model implementation without additional development effort.
Support multiple hardware and software platforms from a single code base.
Get actionable insights for model speed, power and memory optimization.
Speed, memory and power are an afterthought in standard machine learning training. Why spend time training a perfectly accurate model just to discover it is too slow or too big for your application and target platform? We provide tools to help you reach your implementation goals quickly and effortlessly.
MODEL RESOURCE OPTIMIZATION
We automatically run multiple toolchains to give you the best speed, power and memory tradeoff on every model change.
CROSS-PLATFORM MODEL ANALYTICS
We measure on-device speed and power usage to help you evaluate and compare models across hardware platforms.
We help you pinpoint performance bottlenecks and focus your optimization effort on the layers that matter most.
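As an illustration of the kind of per-layer analysis involved, here is a minimal sketch that times each leaf layer of a model with forward hooks to surface likely bottlenecks. It assumes PyTorch and torchvision, uses MobileNetV2 on CPU as a hypothetical example network, and is only a stand-in for on-device profiling, not the numericcal tooling itself.

```python
import time
from collections import defaultdict

import torch
import torchvision.models as models

# Hypothetical example network; real measurements are taken on the target device.
model = models.mobilenet_v2(weights=None).eval()
layer_times = defaultdict(float)

def make_hooks(name):
    # Record a start time before each layer runs and accumulate elapsed time after.
    def pre_hook(module, inputs):
        module._t0 = time.perf_counter()
    def post_hook(module, inputs, output):
        layer_times[name] += time.perf_counter() - module._t0
    return pre_hook, post_hook

for name, module in model.named_modules():
    if len(list(module.children())) == 0:  # attach hooks to leaf layers only
        pre, post = make_hooks(name)
        module.register_forward_pre_hook(pre)
        module.register_forward_hook(post)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    for _ in range(20):  # repeat to average out timing noise
        model(x)

# The most expensive layers are the natural optimization targets.
for name, t in sorted(layer_times.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{name:40s} {1000 * t / 20:.2f} ms/run")
```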
Focus on choosing and training the best Deep Learning model for your application, knowing that we'll find the best implementation of your network for every target platform.
On the right we compare TensorFlow Mobile and the Numericcal Runtime Engine running NVIDIA DriveNet. We use the same Qualcomm Snapdragon 820-based Android phone in both cases. In this example we do not quantize or modify the model in any way.
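For readers curious how end-to-end latency numbers like these are typically gathered, here is a minimal sketch of a timing harness: warm-up runs followed by an averaged timed loop. It uses the TensorFlow Lite Python interpreter as an assumed stand-in runtime and a hypothetical "model.tflite" file; the comparison above is made with TensorFlow Mobile and the Numericcal Runtime Engine on the device itself.

```python
import time

import numpy as np
import tensorflow as tf

# Hypothetical model file; real measurements run on the target phone, not a workstation.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
x = np.random.random_sample(inp["shape"]).astype(np.float32)

# Warm-up runs let caches and thread pools settle before timing.
for _ in range(10):
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
elapsed = time.perf_counter() - start

print(f"mean latency: {1000 * elapsed / runs:.2f} ms over {runs} runs")
```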
Use numericcal to compare and select hardware platforms for your next product. We’ll make sure you can easily update and evolve your Machine Learning models across hardware revisions as your product matures.
Questions? Comments? Feature requests?