August 2, 2018

Deep Learning Research Allegro Found Interesting This Month

Here at allegro.ai we constantly look for research papers and publications presenting deep learning findings that, while nascent, can and should be implemented in a product prototype, and in some cases may even be ready for productization.

We also look to highlight best practices and methodologies we feel should be known and adopted where relevant. Finally, we look to highlight research findings that bust myths.

Every day we come to work striving to help our customers create the best deep learning computer vision products. Like our customers, we keep our finger on the pulse. We study new research and insights, focusing on metrics, new visualization methods, and process optimization.

Here are three publications that caught our attention recently:

DensePose: Dense Human Pose Estimation In The Wild 

The fashion industry is not the classic use case for computer vision. Take, for example, the challenge of understanding how to put together an outfit for a wide variety of body types. This requires successfully combining localization, depth perception, and more.

This paper demonstrates exciting Software 2.0 use cases. If you want to create a quality product, you must have a well-annotated dataset. We foresee this being implemented more and more by new startups in the area of personalized fashion.

DensePose presents an evolution of Mask R-CNN that is nascent yet ready to be implemented in a product prototype.

Read the full research here.

Do Better ImageNet Models Transfer Better?

This research from a team at Google – Simon Kornblith, Jonathon Shlens, and Quoc V. Le – captured our attention. The researchers attempted to systematically explore two hypotheses:

  1. That network architectures that perform better on ImageNet necessarily perform better on other vision tasks.
  2. That better network architectures learn better features that can be transferred across vision-based tasks.

To do this they examined 13 networks ranging in ImageNet top-1 accuracy from 69.8% to 82.7%.

Among other findings, this research shows that even when features are not transferable, better architectures consistently achieve higher performance. The researchers also found that the best fixed image features do not come from the best ImageNet models, as measured by ImageNet accuracy.

We would like to recognize the important service that Google Brain is doing for the machine learning community by conducting an experiment on this matter. It is an important exercise that most likely would not have been prioritized by smaller companies.

Beyond the fact that an important scientific question was hiding in plain sight, this work demonstrates that the rapid advancement of the field sometimes leads us to overlook potential gains in accuracy that are right under our noses.

Read the full research here.


‘The discourse is unhinged’: how the media gets AI alarmingly wrong  

Our team at allegro.ai also tries to identify when a particular piece of research represents a possible milestone in the collective, pan-academic industry effort toward better AI.

In our opinion, this opinion piece in the Guardian is as important as any research paper we bring here, because it addresses the "AI misinformation epidemic," especially on social media.

This article is insightful because it's honest. It begins by describing the downsides of journalism succumbing to AI sensationalism to bring in more traffic, shows how this has historically been done, and how in many cases this behavior has inhibited the healthy development of the AI industry.

The twist in the article comes when author Oscar Schwartz turns the spotlight on the data science community, stressing its responsibility not to cooperate with this sensationalism. The moral responsibility is not the journalist's alone.

Read the full article here.


That is it for this blog post.

Please let us know your thoughts on any of these papers, especially if you believe we got it wrong, or forgot something.

If you are interested in more updates or speaking to us, contact us on Twitter or on LinkedIn. We're always happy to talk.
