Publications & Tech Reports

Publications / Preprints

Factorial Powers for Stochastic Optimization
Aaron Defazio, Robert M. Gower
[arXiv]
The convergence rates for convex and non-convex optimization methods depend on the choice of a host of constants, including step sizes, Lyapunov function constants and momentum constants. In this work we propose the use of factorial powers as a flexible tool for defining constants that appear in convergence proofs. We list a number of remarkable properties that these sequences enjoy, and show how they can be applied to convergence proofs to simplify or improve the convergence rates of the momentum method, accelerated gradient and the stochastic variance reduced method (SVRG).
@misc{defazio2020factorial, title={Factorial Powers for Stochastic Optimization}, author={Aaron Defazio and Robert M. Gower}, year={2020}, eprint={2006.01244}, archivePrefix={arXiv}, primaryClass={cs.LG} }
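For reference, the standard falling factorial power and its discrete-derivative identity capture the kind of telescoping behaviour the abstract alludes to (the paper's exact sequences and conventions may differ):

    k^{\underline{r}} = k(k-1)\cdots(k-r+1),
    \qquad
    (k+1)^{\underline{r}} - k^{\underline{r}} = r\,k^{\underline{r-1}}

The second identity is the discrete analogue of d/dx x^r = r x^{r-1}, so sums of factorial powers telescope exactly where sums of ordinary powers only telescope approximately, which is what makes them convenient inside convergence proofs.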
On the convergence of the Stochastic Heavy Ball Method
Othmane Sebbouh, Robert M. Gower, Aaron Defazio
[arXiv]
We provide a comprehensive analysis of the Stochastic Heavy Ball (SHB) method (otherwise known as the momentum method), including a convergence rate for the last iterate of SHB that is faster than existing bounds on the last iterate of Stochastic Gradient Descent (SGD) in the convex setting. Our analysis shows that, unlike SGD, no final iterate averaging is necessary with the SHB method. We detail new iteration-dependent step sizes (learning rates) and momentum parameters for SHB that result in this fast convergence. Moreover, assuming only smoothness and convexity, we prove that the iterates of SHB converge almost surely to a minimizer, and that the convergence of the function values of (S)HB is asymptotically faster than that of (S)GD in both the overparametrized and the deterministic settings. Our analysis is general: through an arbitrary sampling framework, it covers all forms of mini-batching and non-uniform sampling as special cases. Furthermore, our analysis does not rely on a bounded-gradients assumption; it relies only on smoothness, an assumption that can be more readily verified. Finally, we present extensive numerical experiments showing that our theoretically motivated parameter settings give a statistically significant faster convergence across a diverse collection of datasets.
@misc{sebbouh2020convergence, title={On the convergence of the Stochastic Heavy Ball Method}, author={Othmane Sebbouh and Robert M. Gower and Aaron Defazio}, year={2020}, eprint={2006.07867}, archivePrefix={arXiv}, primaryClass={cs.LG} }
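For context, a minimal sketch of the SHB update analyzed above, in Python; the schedules gammas[k] and betas[k] stand in for the iteration-dependent parameters derived in the paper and are not reproduced here:

    def shb(grad, x0, gammas, betas, num_steps, sample_index):
        # Stochastic heavy ball: a stochastic gradient step plus a momentum
        # term built from the displacement between the last two iterates.
        x_prev = x0.copy()
        x = x0.copy()
        for k in range(num_steps):
            i = sample_index()  # mini-batch or non-uniform sampling fits here
            x_next = x - gammas[k] * grad(x, i) + betas[k] * (x - x_prev)
            x_prev, x = x, x_next
        return x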
MRI Banding Removal via Adversarial Training
Aaron Defazio, Tullie Murrell, Michael P. Recht
[arXiv]
MRI images reconstructed from sub-sampled Cartesian data using deep learning techniques often show a characteristic banding (sometimes described as streaking), which is particularly strong in low signal-to-noise regions of the reconstructed image. In this work, we propose the use of an adversarial loss that penalizes banding structures without requiring any human annotation. Our technique greatly reduces the appearance of banding, without requiring any additional computation or post-processing at reconstruction time. We report the results of a blind comparison against a strong baseline by a group of expert evaluators (board-certified radiologists), where our approach is ranked superior at banding removal with no statistically significant loss of detail.
@misc{defazio2020mri, title={MRI Banding Removal via Adversarial Training}, author={Aaron Defazio and Tullie Murrell and Michael P. Recht}, year={2020}, eprint={2001.08699}, archivePrefix={arXiv}, primaryClass={eess.IV} }
Using Deep Learning to Accelerate Knee MRI at 3T: Results of an Interchangeability Study
Recht, Michael P. and Zbontar, Jure and Sodickson, Daniel K. and Knoll, Florian and Yakubova, Nafissa and Sriram, Anuroop and Murrell, Tullie and Defazio, Aaron and Rabbat, Michael and Rybak, Leon and Kline, Mitchell and Ciavarra, Gina and Alaia, Erin F. and Samim, Mohammad and Walter, William R. and Lin, Dana and Lui, Yvonne W. and Muckley, Matthew and Huang, Zhengnan and Johnson, Patricia and Stern, Ruben and Zitnick, C. Lawrence
American Journal of Roentgenology
[publication]
Objective
Deep Learning (DL) image reconstruction has the potential to disrupt the current state of MR imaging by significantly decreasing the time required for MR exams. Our goal was to use DL to accelerate MR imaging in order to allow a 5-minute comprehensive examination of the knee, without compromising image quality or diagnostic accuracy.
Methods
A DL model for image reconstruction using a variational network was optimized. The model was trained using dedicated multi-sequence training, in which a single reconstruction model was trained with data from multiple sequences with different contrasts and orientations. Following training, data from 108 patients were retrospectively undersampled in a manner that would correspond to a net 3.49-fold acceleration of fully-sampled data acquisition and a 1.88-fold acceleration compared to our standard two-fold accelerated parallel acquisition. An interchangeability study was performed, in which the ability of six readers to detect internal derangement of the knee was compared for the clinical and DL-accelerated images.
Results
The study demonstrated a high degree of interchangeability between standard and DL-accelerated images. In particular, results showed that interchanging the sequences would result in discordant clinical opinions no more than 4% of the time for any feature evaluated. Moreover, the accelerated sequence was judged by all six readers to have better quality than the clinical sequence.
Conclusions
An optimized DL model allowed for acceleration of knee images which performed interchangeably with standard images for the detection of internal derangement of the knee. Importantly, readers preferred the quality of accelerated images to that of standard clinical images.
@article{recht2020knee, Author = {Recht, Michael P. and Zbontar, Jure and Sodickson, Daniel K. and Knoll, Florian and Yakubova, Nafissa and Sriram, Anuroop and Murrell, Tullie and Defazio, Aaron and Rabbat, Michael and Rybak, Leon and Kline, Mitchell and Ciavarra, Gina and Alaia, Erin F. and Samim, Mohammad and Walter, William R. and Lin, Dana and Lui, Yvonne W. and Muckley, Matthew and Huang, Zhengnan and Johnson, Patricia and Stern, Ruben and Zitnick, C. Lawrence}, Journal = {American Journal of Roentgenology}, Month = {July}, Title = {Using Deep Learning to Accelerate Knee MRI at 3T: Results of an Interchangeability Study}, Year = {2020}}
GrappaNet: Combining Parallel Imaging With Deep Learning for Multi-Coil MRI Reconstruction
Anuroop Sriram, Jure Zbontar, Tullie Murrell, C. Lawrence Zitnick, Aaron Defazio, Daniel K. Sodickson
[arXiv] [publication]
Magnetic Resonance Image (MRI) acquisition is an inherently slow process, which has spurred the development of two different acceleration methods: acquiring multiple correlated samples simultaneously (parallel imaging) and acquiring fewer samples than necessary for traditional signal processing methods (compressed sensing). Both methods provide complementary approaches to accelerating MRI acquisition. In this paper, we present a novel method to integrate traditional parallel imaging methods into deep neural networks that is able to generate high quality reconstructions even for high acceleration factors. The proposed method, called GrappaNet, performs progressive reconstruction by first using a neural network to map the reconstruction problem to a simpler one that can be solved by a traditional parallel imaging method, then applying the parallel imaging method, and finally fine-tuning the output with another neural network. The entire network can be trained end-to-end. We present experimental results on the recently released fastMRI dataset and show that GrappaNet can generate higher quality reconstructions than competing methods for both 4x and 8x acceleration.
@InProceedings{Sriram_2020_CVPR, author = {Sriram, Anuroop and Zbontar, Jure and Murrell, Tullie and Zitnick, C. Lawrence and Defazio, Aaron and Sodickson, Daniel K.}, title = {GrappaNet: Combining Parallel Imaging With Deep Learning for Multi-Coil MRI Reconstruction}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2020} }
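Schematically, the progressive reconstruction described above composes three stages; cnn1, grappa and cnn2 below are hypothetical stand-ins for the paper's components, not the released implementation:

    def grappanet_forward(kspace_undersampled, cnn1, grappa, cnn2):
        # Stage 1: a network maps the measured k-space to a sampling pattern
        # that classical parallel imaging can handle.
        k_intermediate = cnn1(kspace_undersampled)
        # Stage 2: apply a traditional parallel imaging method (GRAPPA).
        k_full = grappa(k_intermediate)
        # Stage 3: a second network fine-tunes the output; the whole
        # pipeline is trained end-to-end.
        return cnn2(k_full)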
Advancing machine learning for MR image reconstruction with an open competition: Overview of the 2019 fastMRI challenge
Knoll, Florian and Murrell, Tullie and Sriram, Anuroop and Yakubova, Nafissa and Zbontar, Jure and Rabbat, Michael and Defazio, Aaron and Muckley, Matthew J. and Sodickson, Daniel K. and Zitnick, C. Lawrence and Recht, Michael P.
Magnetic Resonance in Medicine
[publication] [arXiv] [Code]
Purpose
To advance research in the field of machine learning for MR image reconstruction with an open challenge.
Methods
We provided participants with a dataset of raw k-space data from 1,594 consecutive clinical exams of the knee. The goal of the challenge was to reconstruct images from these data. In order to strike a balance between realistic data and a shallow learning curve for those not already familiar with MR image reconstruction, we ran multiple tracks for multi-coil and single-coil data. We performed a two-stage evaluation based on quantitative image metrics followed by evaluation by a panel of radiologists. The challenge ran from June to December of 2019.
Results
We received a total of 33 challenge submissions. All participants chose to submit results from supervised machine learning approaches.
Conclusions
The challenge led to new developments in machine learning for image reconstruction, provided insight into the current state of the art in the field, and highlighted remaining hurdles for clinical adoption.
@article{doi:10.1002/mrm.28338, Author = {Knoll, Florian and Murrell, Tullie and Sriram, Anuroop and Yakubova, Nafissa and Zbontar, Jure and Rabbat, Michael and Defazio, Aaron and Muckley, Matthew J. and Sodickson, Daniel K. and Zitnick, C. Lawrence and Recht, Michael P.}, Journal = {Magnetic Resonance in Medicine}, Title = {Advancing machine learning for MR image reconstruction with an open competition: Overview of the 2019 fastMRI challenge}, Year = {2020}}
fastMRI: A Publicly Available Raw k-Space and DICOM Dataset of Knee Images for Accelerated MR Image Reconstruction Using Machine Learning
Knoll, Florian and Zbontar, Jure and Sriram, Anuroop and Muckley, Matthew J. and Bruno, Mary and Defazio, Aaron and Parente, Marc and Geras, Krzysztof J. and Katsnelson, Joe and Chandarana, Hersh and Zhang, Zizhao and Drozdzal, Michal and Romero, Adriana and Rabbat, Michael and Vincent, Pascal and Pinkerton, James and Wang, Duo and Yakubova, Nafissa and Owens, Erich and Zitnick, C. Lawrence and Recht, Michael P. and Sodickson, Daniel K. and Lui, Yvonne W.
Radiology: Artificial Intelligence
[publication] [Code]
A publicly available dataset of knee images, containing both raw k-space data and DICOM (Digital Imaging and Communications in Medicine) image data, is presented for accelerated MR image reconstruction using machine learning.
@article{doi:10.1148/ryai.2020190007, Author = {Knoll, Florian and Zbontar, Jure and Sriram, Anuroop and Muckley, Matthew J. and Bruno, Mary and Defazio, Aaron and Parente, Marc and Geras, Krzysztof J. and Katsnelson, Joe and Chandarana, Hersh and Zhang, Zizhao and Drozdzal, Michal and Romero, Adriana and Rabbat, Michael and Vincent, Pascal and Pinkerton, James and Wang, Duo and Yakubova, Nafissa and Owens, Erich and Zitnick, C. Lawrence and Recht, Michael P. and Sodickson, Daniel K. and Lui, Yvonne W.}, Journal = {Radiology: Artificial Intelligence}, Title = {fastMRI: A Publicly Available Raw k-Space and DICOM Dataset of Knee Images for Accelerated MR Image Reconstruction Using Machine Learning}, Year = {2020}}
On the Curved Geometry of Accelerated Optimization
Aaron Defazio
NeurIPS 2019
[arXiv]
In this work we propose a differential geometric motivation for Nesterov's accelerated gradient method (AGM) for strongly-convex problems. By considering the optimization procedure as occurring on a Riemannian manifold with a natural structure, the AGM method can be seen as the proximal point method applied in this curved space. This viewpoint can also be extended to the continuous-time case, where the accelerated gradient method arises from the natural block-implicit Euler discretization of an ODE on the manifold. We provide an analysis of the convergence rate of this ODE for quadratic objectives.
@ARTICLE{adefazio-curvedgeom2019, author = {Aaron Defazio}, title = {On the Curved Geometry of Accelerated Optimization}, journal = {Advances in Neural Information Processing Systems 32 (NeurIPS 2019)}, year = {2019} }
On the Ineffectiveness of Variance Reduced Optimization for Deep Learning
Aaron Defazio, Léon Bottou
NeurIPS 2019
[arXiv] [Code]
The application of stochastic variance reduction to optimization has shown remarkable recent theoretical and practical success. The applicability of these techniques to the hard non-convex optimization problems encountered during training of modern deep neural networks is an open problem. We show that naive application of the SVRG technique and related approaches fails, and explore why.
@ARTICLE{adefazio-varred2019, author = {Aaron Defazio and L{\'{e}}on Bottou}, title = {On the Ineffectiveness of Variance Reduced Optimization for Deep Learning}, journal = {Advances in Neural Information Processing Systems 32 (NeurIPS 2019)}, year = {2019} }
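The SVRG gradient estimate in question is the standard construction below (a minimal sketch; the paper studies why this construction fails to help on deep networks):

    def svrg_step(x, snapshot, full_grad_at_snapshot, grad, gamma, i):
        # Variance-reduced estimate: correct the stochastic gradient at x
        # with the same term evaluated at the snapshot point, recentered
        # by the full gradient computed at that snapshot.
        v = grad(x, i) - grad(snapshot, i) + full_grad_at_snapshot
        return x - gamma * v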
fastMRI: An Open Dataset and Benchmarks for Accelerated MRI
Jure Zbontar and Florian Knoll and Anuroop Sriram and Matthew J. Muckley and Mary Bruno and Aaron Defazio and Marc Parente and Krzysztof J. Geras and Joe Katsnelson and Hersh Chandarana and Zizhao Zhang and Michal Drozdzal and Adriana Romero and Michael Rabbat and Pascal Vincent and James Pinkerton and Duo Wang and Nafissa Yakubova and Erich Owens and C. Lawrence Zitnick and Michael P. Recht and Daniel K. Sodickson and Yvonne W. Lui
[arXiv] [Code]
Accelerating Magnetic Resonance Imaging (MRI) by taking fewer measurements has the potential to reduce medical costs, minimize stress to patients and make MRI possible in applications where it is currently prohibitively slow or expensive. We introduce the fastMRI dataset, a large-scale collection of both raw MR measurements and clinical MR images, that can be used for training and evaluation of machine-learning approaches to MR image reconstruction. By introducing standardized evaluation criteria and a freely-accessible dataset, our goal is to help the community make rapid advances in the state of the art for MR image reconstruction. We also provide a self-contained introduction to MRI for machine learning researchers with no medical imaging background.
@inproceedings{fastMRI2018, title={{fastMRI}: An Open Dataset and Benchmarks for Accelerated {MRI}}, author={Jure Zbontar and Florian Knoll and Anuroop Sriram and Matthew J. Muckley and Mary Bruno and Aaron Defazio and Marc Parente and Krzysztof J. Geras and Joe Katsnelson and Hersh Chandarana and Zizhao Zhang and Michal Drozdzal and Adriana Romero and Michael Rabbat and Pascal Vincent and James Pinkerton and Duo Wang and Nafissa Yakubova and Erich Owens and C. Lawrence Zitnick and Michael P. Recht and Daniel K. Sodickson and Yvonne W. Lui}, journal = {ArXiv e-prints}, archivePrefix = "arXiv", eprint = {1811.08839}, year={2018} }
Controlling Covariate Shift using Equilibrium Normalization of Weights
Aaron Defazio, Léon Bottou
[arXiv]
We introduce a new normalization technique that exhibits the fast convergence properties of batch normalization using a transformation of layer weights instead of layer outputs. The proposed technique keeps the contribution of positive and negative weights to the layer output in equilibrium. We validate our method on a set of standard benchmarks including CIFAR-10/100, SVHN and ILSVRC 2012 ImageNet.
@ARTICLE{adefazio-equinorm2018, author = {Aaron Defazio and L{\'{e}}on Bottou}, title = {Controlling Covariate Shift using Equilibrium Normalization of Weights}, journal = {ArXiv e-prints}, archivePrefix = "arXiv", year = {2018} }
A Simple Practical Accelerated Method for Finite Sums
Aaron Defazio
NIPS 2016
[PDF] [Code]
We describe a novel optimization method for finite sums (such as empirical risk minimization problems) building on the recently introduced SAGA method. Our method achieves an accelerated convergence rate on strongly convex smooth problems. Our method has only one parameter (a step size), and is radically simpler than other accelerated methods for finite sums. Additionally, it can be applied when the terms are non-smooth, yielding a method applicable in many areas where operator splitting methods would traditionally be applied.
@ARTICLE{adefazio-nips2016, author = {Aaron Defazio}, title = {A Simple Practical Accelerated Method for Finite Sums}, journal = {Advances in Neural Information Processing Systems 29 (NIPS 2016)}, year = {2016} }
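A sketch of the prox-based update as I read it from the paper (variable names mine; the gradient-table initialization and the sampling loop are omitted):

    def accelerated_step(x, j, g, g_avg, prox, gamma, n):
        # g[j] stores the last gradient seen for term j, g_avg their mean;
        # prox(j, v, gamma) evaluates the proximal operator of gamma*f_j at v.
        z = x + gamma * (g[j] - g_avg)
        x_new = prox(j, z, gamma)
        g_j_new = (z - x_new) / gamma  # gradient of f_j at x_new, by prox optimality
        g_avg = g_avg + (g_j_new - g[j]) / n
        g[j] = g_j_new
        return x_new, g_avg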
Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields
M. Schmidt, R. Babanezhad, M.O. Ahmed, A. Defazio, A. Clifton & A. Sarkar
AISTATS 2015
[PDF]
We apply stochastic average gradient (SAG) algorithms for training conditional random fields (CRFs). We describe a practical implementation that uses structure in the CRF gradient to reduce the memory requirement of this linearly-convergent stochastic gradient method, propose a non-uniform sampling scheme that substantially improves practical performance, and analyze the rate of convergence of the SAGA variant under non-uniform sampling. Our experimental results reveal that our method often significantly outperforms existing methods in terms of the training objective, and performs as well or better than optimally-tuned stochastic gradient methods in terms of test error.
@ARTICLE{mschmidt-aistats2015, author = {Mark Schmidt and Reza Babanezhad and Mohamed Osama Ahmed and Aaron Defazio and Ann Clifton and Anoop Sarkar }, title = {Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields}, journal = {18th International Conference on Artificial Intelligence and Statistics (AISTATS 2015)}, year = {2015} }
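A common concrete form of the non-uniform sampling idea, sketched below with fixed weights; the paper's scheme instead adapts per-example Lipschitz estimates during training:

    import numpy as np

    def sample_example(L, rng=None):
        # Pick example i with probability proportional to its estimated
        # Lipschitz constant L[i]; reweighting the gradient by 1/(n*p[i])
        # keeps the resulting update unbiased.
        rng = np.random.default_rng() if rng is None else rng
        p = L / L.sum()
        return rng.choice(len(L), p=p)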
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
A. Defazio, F. Bach & S. Lacoste-Julien.
NIPS 2014
[PDF]
In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.
@ARTICLE{adefazio-nips2014, author = {Aaron Defazio and Francis Bach and Simon Lacoste-Julien}, title = {SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives}, journal = {Advances in Neural Information Processing Systems 27 (NIPS 2014)}, year = {2014} }
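One SAGA step in composite form, as a minimal sketch consistent with the abstract; grad(x, j) returns the gradient of term j and prox_h applies the proximal operator of the regulariser:

    def saga_step(x, j, grad, g_table, g_avg, gamma, prox_h, n):
        g_new = grad(x, j)
        v = g_new - g_table[j] + g_avg        # unbiased gradient estimate
        x_new = prox_h(x - gamma * v, gamma)  # handles the non-smooth regulariser
        g_avg = g_avg + (g_new - g_table[j]) / n  # maintain the running mean
        g_table[j] = g_new
        return x_new, g_avg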
Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems
A. Defazio, T. Caetano & J. Domke.
ICML 2014
[PDF] [Appendix]
Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.
@ARTICLE{adefazio-icml2014, author = {Aaron Defazio and Tiberio Caetano and Justin Domke}, title = {Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems}, journal = {The 31st International Conference on Machine Learning (ICML 2014)}, year = {2014} }
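The update has a strikingly simple structure; below is a sketch with a generic step size alpha (the paper's precise constant and the without-replacement sampling variant are in the text):

    def finito_step(phi, g, grad, alpha, j):
        # phi[i] stores a point per term and g[i] the gradient of term i at
        # phi[i]. Each step moves to the average point minus a scaled average
        # gradient, then refreshes the chosen term's stored point and gradient.
        w = phi.mean(axis=0) - alpha * g.mean(axis=0)
        phi[j] = w
        g[j] = grad(w, j)
        return w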
A Convex Formulation for Learning Scale-Free Networks via Submodular Relaxation
A. Defazio & T. Caetano.
NIPS 2012
[PDF] [Appendix] [Code]
A key problem in statistics and machine learning is the determination of network structure from data. We consider the case where the structure of the graph to be reconstructed is known to be scale-free. We show that in such cases it is natural to formulate structured sparsity-inducing priors using submodular functions, and we use their Lovász extension to obtain a convex relaxation. For tractable classes such as Gaussian graphical models, this leads to a convex optimization problem that can be efficiently solved. We show that our method results in an improvement in the accuracy of reconstructed networks for synthetic data. We also show how our prior encourages scale-free reconstructions on a bioinformatics dataset.
@ARTICLE{adefazio-nips2012, author = {Aaron Defazio and Tiberio Caetano}, title = {A Convex Formulation for Learning Scale-Free Networks via Submodular Relaxation}, journal = {Advances in Neural Information Processing Systems 25 (NIPS 2012)}, year = {2012} }
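The Lovász extension underlying the relaxation is standard; for a submodular F over {1,...,p} with F(∅) = 0 and a permutation π ordering the weights w_{π(1)} ≥ ... ≥ w_{π(p)}, it reads (notation mine):

    f(w) = \sum_{i=1}^{p} w_{\pi(i)}
           \left[ F(\{\pi(1),\dots,\pi(i)\}) - F(\{\pi(1),\dots,\pi(i-1)\}) \right]

f is convex exactly when F is submodular, which is what makes the relaxed prior tractable to optimize.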
A Graphical Model Formulation of Collaborative Filtering Neighbourhood Methods with Fast Maximum Entropy Training
A. Defazio & T. Caetano
ICML 2012
[PDF]
Item neighbourhood methods for collaborative filtering learn a weighted graph over the set of items, where each item is connected to those it is most similar to. The prediction of a user's rating on an item is then given by the ratings of neighbouring items, weighted by their similarity. This paper presents a new neighbourhood approach, which we call item fields, whereby an undirected graphical model is formed over the item graph. The resulting prediction rule is a simple generalization of the classical approaches, which takes into account non-local information in the graph, allowing its best results to be obtained when using drastically fewer edges than other neighbourhood approaches. A fast approximate maximum entropy training method based on the Bethe approximation is presented, which utilizes a novel decomposition into tractable sub-problems. When using precomputed sufficient statistics on the Movielens dataset, our method outperforms maximum likelihood approaches by two orders of magnitude.
@ARTICLE{adefazio-icml2012, author = {Aaron Defazio and Tiberio Caetano}, title = {A Graphical Model Formulation of Collaborative Filtering Neighbourhood Methods with Fast Maximum Entropy Training}, journal = {The 29th International Conference on Machine Learning (ICML 2012)}, year = {2012} }

Tech Reports

A Comparison of Learning Algorithms on the Arcade Learning Environment
A. Defazio & T. Graepel
[PDF]
Reinforcement learning agents have traditionally been evaluated on small toy problems. With advances in computing power and the advent of the Arcade Learning Environment, it is now possible to evaluate algorithms on diverse and difficult problems within a consistent framework. We discuss some challenges posed by the Arcade Learning Environment that do not manifest in simpler environments. We then provide a comparison of model-free, linear learning algorithms on this challenging problem set.
@TECHREPORT{adefazio-rl2014, author = {Aaron Defazio and Thore Graepel}, title = {A Comparison of Learning Algorithms on the Arcade Learning Environment}, institution = {Australian National University}, year = {2014} }

Articles

Linear programming in low dimensions is easy
[PDF]
A complete guide to the Bayes factor test
[PDF] [BLOG]
How to do A/B testing with early stopping correctly
[PDF] [BLOG]
Weighted random sampling with replacement with dynamic weights
[PDF] [BLOG]

PhD Thesis

New Optimisation Methods for Machine Learning
[PDF] Supervised by Tiberio Caetano
In this work we introduce several new optimisation methods for problems in machine learning. Our algorithms broadly fall into two categories: optimisation of finite sums and of graph structured objectives. The finite sum problem is simply the minimisation of objective functions that are naturally expressed as a summation over a large number of terms, where each term has a similar or identical weight. Such objectives most often appear in machine learning in the empirical risk minimisation framework in the non-online learning setting. The second category, that of graph structured objectives, consists of objectives that result from applying maximum likelihood to Markov random field models. Unlike the finite sum case, all the non-linearity is contained within a partition function term, which does not readily decompose into a summation.
For the finite sum problem, we introduce the Finito and SAGA algorithms, as well as variants of each. The Finito algorithm is best suited to strongly convex problems where the number of terms is of the same order as the condition number of the problem. We prove the fast convergence rate of Finito for strongly convex problems and demonstrate its state-of-the-art empirical performance on 5 datasets.
The SAGA algorithm we introduce is complementary to the Finito algorithm. It is more generally applicable, as it can be applied to problems without strong convexity, and to problems that have a non-differentiable regularisation term. In both cases we establish strong convergence rate proofs. It is also better suited to sparser problems than Finito. The SAGA method has a broader and simpler theory than any existing fast method for the problem class of finite sums, in particular it is the first such method that can provably be applied to non-strongly convex problems with non-differentiable regularisers without introduction of additional regularisation.
For graph-structured problems, we take three complementary approaches. We look at learning the parameters for a fixed structure, learning the structure independently, and learning both simultaneously. Specifically, for the combined approach, we introduce a new method for encouraging graph structures with the “scale-free” property. For the structure learning problem, we establish SHORTCUT, an O(n^2.5) expected-time approximate structure learning method for Gaussian graphical models. For problems where the structure is known but the parameters unknown, we introduce an approximate maximum likelihood learning algorithm that is capable of learning a useful subclass of Gaussian graphical models.
Our thesis as a whole introduces a new suite of techniques for machine learning practitioners that increases the size and type of problems that can be efficiently solved. Our work is backed by extensive theory, including proofs of convergence for each method discussed.
@PHDTHESIS{adefazio-thesis2014, author = {Aaron Defazio}, title = {New Optimisation Methods for Machine Learning}, school = {Australian National University}, year = {2014}, note = {http://www.aarondefazio.com/pubs.html} }

Honours Research

Network Topology Tomography
[PDF] Supervised by Tiberio Caetano
@MASTERSTHESIS{adefazio-honours2010, author = {Aaron Defazio}, title = {Network Topology Tomography}, school = {Australian National University (ANU)}, year = {2010}, type = {Undergraduate Honors Thesis} }