Hey, Siri, I bet you can't figure out what's in this picture!

Apple’s AI team publishes their first academic paper on adversarial training.

Artificial Intelligence (AI) researchers at Apple have published the company's first academic AI paper. Titled 'Learning from Simulated and Unsupervised Images through Adversarial Training', it was submitted on December 22 and is credited to Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, and Russ Webb.

The Cornell University Library, via MacRumors:

With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator’s output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts and stabilize training: (i) a ‘self-regularization’ term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.
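The paper's modification (iii), updating the discriminator with a history of refined images, can be sketched without any ML framework. The buffer capacity, the half-and-half sampling split, and the λ weight below are my assumptions for illustration, not values taken from the paper:

```python
import random
import numpy as np

def self_regularization_loss(refined, synthetic, lam=0.5):
    """L1 penalty keeping a refined image close to its synthetic input,
    so the simulator's annotations (e.g. gaze direction) stay valid.
    The weight lam is an assumed placeholder value."""
    return lam * float(np.abs(refined - synthetic).sum())

class RefinedImageHistory:
    """Buffer of previously refined images; part of each discriminator
    mini-batch is drawn from it so the discriminator doesn't forget
    older refiner outputs, which stabilizes training."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self.buffer = []

    def sample_batch(self, current_batch):
        # Mix half of the current refined images with half drawn from
        # history, once enough history has accumulated.
        half = len(current_batch) // 2
        if len(self.buffer) >= half:
            batch = list(current_batch[:half]) + random.sample(self.buffer, half)
        else:
            batch = list(current_batch)
        # Add the current images to the buffer, evicting random
        # entries once it is full.
        for img in current_batch:
            if len(self.buffer) < self.capacity:
                self.buffer.append(img)
            else:
                self.buffer[random.randrange(self.capacity)] = img
        return batch
```

On the first call the buffer is empty, so the discriminator sees only fresh refined images; on later calls each batch is a mix of new and historical outputs.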

Apple’s been quietly working on artificial intelligence, machine learning, and computer vision for years now. It’s the “quietly” part that led people to worry Apple was lagging behind in the technologies that will either define the next era of humankind… or finally unleash Skynet. It also led some researchers to be wary of joining Apple and effectively disappearing.

The AI team has also been opening up to the press and analysts. Once Google effectively re-announced sequential inference at I/O and the media went bot-crazy, AI became table stakes in the tech perception racket, making it impossible for Apple to do anything other than start talking about it. And now, publishing.

It’ll be interesting to see what goes public and what stays private, but as someone fascinated — and, thanks to Cameron et al., slightly terrified — by the subject, the more the better.

And I’m particularly interested in Apple’s approach, which apparently doesn’t require scanning my entire personal photo library to figure out what a mountain looks like…

