A curated list of interesting ICML 2016 papers

I've gone through the hundreds of ICML 2016 papers and curated a subset that look interesting to me. In no particular order:

Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier

Jacob Abernethy, Elad Hazan

Variance Reduction for Faster Non-Convex Optimization

Zeyuan Allen-Zhu, Elad Hazan

Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives

Zeyuan Allen-Zhu, Yang Yuan

Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling

Zeyuan Allen-Zhu, Zheng Qu, Peter Richtarik, Yang Yuan

Deep Speech 2: End-to-End Speech Recognition in English and Mandarin

Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, JingDong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Awni Hannun, Billy Jun, Tony Han, Patrick LeGresley, Xiangang Li, Libby Lin, Sharan Narang, Andrew Ng, Sherjil Ozair, Ryan Prenger, Sheng Qian, Jonathan Raiman, Sanjeev Satheesh, David Seetapun, Shubho Sengupta, Chong Wang, Yi Wang, Zhiqian Wang, Bo Xiao, Yan Xie, Dani Yogatama, Jun Zhan, Zhenyao Zhu

On the Iteration Complexity of Oblivious First-Order Optimization Algorithms

Yossi Arjevani, Ohad Shamir

Black-box Optimization with a Politician

Sebastien Bubeck, Yin Tat Lee

Importance Sampling Tree for Large-scale Empirical Expectation

Olivier Canevet, Cijo Jose, Francois Fleuret

CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy

Ran Gilad-Bachrach, Nathan Dowlin, Kim Laine, Kristin Lauter, Michael Naehrig, John Wernsing

Solving Ridge Regression using Sketched Preconditioned SVRG

Alon Gonen, Francesco Orabona, Shai Shalev-Shwartz

Variance-Reduced and Projection-Free Stochastic Optimization

Elad Hazan, Haipeng Luo

On Graduated Optimization for Stochastic Non-Convex Problems

Elad Hazan, Kfir Yehuda Levy, Shai Shalev-Shwartz

Doubly Robust Off-policy Value Evaluation for Reinforcement Learning

Nan Jiang, Lihong Li

Stochastic Variance Reduced Optimization for Nonconvex Sparse Learning

Xingguo Li, Tuo Zhao, Raman Arora, Han Liu, Jarvis Haupt

A Variational Analysis of Stochastic Gradient Algorithms

Stephan Mandt, Matthew Hoffman, David Blei

Stochastic Variance Reduction for Nonconvex Optimization

Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabas Poczos, Alex Smola

A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums

Anton Rodomanov, Dmitry Kropotov

SDCA without Duality, Regularization, and Individual Convexity

Shai Shalev-Shwartz

Training Neural Networks Without Gradients: A Scalable ADMM Approach

Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel, Tom Goldstein
