We are pleased to announce that our very own Moses Guttmann, Chief Technology Officer of allegro.ai, will be a featured speaker at the Embedded Vision Summit in Santa Clara, California, this coming May 20th-23rd.
This conference is shaping up to be the largest ever focused on Computer Vision and Visual Artificial Intelligence. We invite you to attend the session and meet our experts. To arrange a time to meet during the conference, send an email to Neil Berns at firstname.lastname@example.org.
During the conference, Moses will present "Optimizing SSD Object Detection for Low-power Devices."

Deep learning-based computer vision models have gained traction in applications requiring object detection, thanks to their accuracy and flexibility. For deployment on low-power hardware, single-shot detection (SSD) models are attractive due to their speed when operating on inputs with small spatial dimensions. The key challenge in creating efficient embedded implementations of SSD lies not in the feature extraction module, but in the non-linear bottleneck of the detection stage, which does not lend itself to parallelization. This hinders efforts to lower the per-frame processing time, even with custom hardware. The talk will describe in detail a data-centric optimization approach to SSD that drastically lowers the number of priors ("anchors") needed for detection, and thus linearly decreases the time spent on this costly part of the computation. As a result, specialized processors and custom hardware can be better utilized, yielding higher performance and lower latency regardless of the specific hardware used.
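To make the "linearly decreases" claim concrete, here is a minimal sketch (not allegro.ai's actual method) of how an SSD's prior count is determined: it is the sum, over all detection feature maps, of grid cells times anchors per cell, and the detection stage must decode and filter every one of those priors. The pruned configuration below uses made-up anchor counts purely for illustration.

```python
# Illustrative sketch, not allegro.ai's implementation: the number of SSD
# priors ("anchors") is sum over feature maps of
#   grid_h * grid_w * anchors_per_location.
# The detection stage decodes and filters every prior, so its cost scales
# linearly with this count; pruning priors shrinks that cost proportionally.

def num_priors(feature_maps):
    """feature_maps: list of (grid_size, anchors_per_location) pairs
    for square feature maps."""
    return sum(g * g * a for g, a in feature_maps)

# The standard SSD300 configuration yields 8732 priors:
ssd300 = [(38, 4), (19, 6), (10, 6), (5, 6), (3, 4), (1, 4)]
print(num_priors(ssd300))  # 8732

# A hypothetical pruned configuration with fewer anchors per location
# (assumed numbers, chosen only to show the linear effect):
pruned = [(38, 2), (19, 3), (10, 3), (5, 3), (3, 2), (1, 2)]
print(num_priors(pruned))  # 4366 -- half the priors, half the detection work
```

Halving the anchors per location halves the prior count, and with it the serial decode-and-filter work that dominates the detection stage on embedded targets.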
To get you excited about learning from Moses's deep experience in visual artificial intelligence, here is a recording of one of his previous talks, from the O'Reilly Artificial Intelligence Conference in New York, where Moses discussed another significant issue in deep learning computer vision: Why Data Management for Deep Learning Computer Vision is Challenging.