Georgina Hall awarded 2016 INFORMS Computing Society Student Paper Prize
November 14th, 2016

Georgina Hall, a fifth-year Ph.D. student and a Gordon Y. S. Wu Fellow in the Department of Operations Research and Financial Engineering, was awarded the 2016 INFORMS Computing Society (ICS) Student Paper Award for her paper, "DC Decomposition of Nonconvex Polynomials with Algebraic Techniques." The award is given annually to the best paper on computing and operations research by a student author, as judged by an ICS panel. She is advised by Assistant Professor Amir Ali Ahmadi.

Optimal Learning for Nonlinear Parametric Belief Models over Multidimensional Continuous Spaces
November 11th, 2016

We consider the optimal learning problem of optimizing an expensive function with a known parametric form but unknown parameters. Observations of the function, which might involve simulations, laboratory or field experiments, are both expensive and noisy.

Optimal Learning for Stochastic Optimization with Nonlinear Parametric Belief Models
November 11th, 2016

We consider the problem of estimating the expected value of information for Bayesian learning problems where the belief model is nonlinear in the parameters. Our goal is to maximize some metric, while simultaneously learning the unknown parameters of the nonlinear belief model, by guiding a sequential experimentation process which is expensive.

Optimizing Nanoemulsion Stability under Uncertainty of Underlying Physical Mechanisms
October 11th, 2016

We present a technique for adaptively choosing a sequence of experiments for materials design and optimization. Specifically, we consider the problem of identifying the choice of experimental control variables that optimizes the kinetic stability of a nanoemulsion, which we formulate as a ranking and selection problem.

Quantifying Experimental Characterization Choices in Optimal Learning and Materials Design
October 11th, 2016

We consider the choices and subsequent costs associated with ensemble averaging and extrapolating experimental measurements in the context of optimizing material properties using Optimal Learning (OL). We demonstrate how these two general techniques lead to a trade-off between measurement error and experimental costs, and incorporate this trade-off in the OL framework.
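The variance-cost trade-off behind ensemble averaging can be illustrated with a toy calculation (the noise variance and cost figures below are hypothetical, not from the paper): averaging n repeated measurements reduces the noise variance to sigma^2/n while scaling the experimental cost by n.

```python
# Hypothetical single-measurement noise variance and per-run cost.
sigma2 = 4.0
cost_per_run = 1.0

# Averaging n repeats: variance shrinks as sigma2/n, cost grows as n.
tradeoff = {n: (sigma2 / n, n * cost_per_run) for n in (1, 4, 16)}
print(tradeoff)
```

An optimal-learning framework can then weigh the reduced measurement error against the increased cost when selecting the next experiment.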

Principal Component Analysis for Online Data
September 14th, 2016
Additional Researcher(s):
  • Tong Zhang

In this paper, we cast online PCA into a stochastic nonconvex optimization problem, and we analyze the online PCA algorithm as a stochastic approximation iteration.
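As a rough illustration of the stochastic-approximation view of online PCA, the following sketch runs an Oja-style update on a synthetic data stream; the covariance, step size, and iteration count are hypothetical choices for illustration, not the paper's algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stream whose covariance has a dominant direction along e1.
d = 5
cov = np.diag([5.0, 1.0, 0.5, 0.2, 0.1])

w = rng.normal(size=d)
w /= np.linalg.norm(w)

eta = 0.01  # constant step size (a hypothetical choice)
for t in range(20000):
    x = rng.multivariate_normal(np.zeros(d), cov)
    w += eta * x * (x @ w)       # stochastic gradient step for max variance
    w /= np.linalg.norm(w)       # project back onto the unit sphere

# w should now be nearly aligned with the top eigenvector e1.
print(abs(w[0]))
```

Each iteration is a noisy gradient step on the nonconvex Rayleigh-quotient objective, which is exactly the stochastic-approximation framing the abstract describes.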

Optimal Learning of Noisy, Expensive Blackbox Functions Using Quadratic Belief Models
September 8th, 2016

I study the problem of learning the unknown parameters of an expensive function whose true underlying surface can be described by a quadratic polynomial. The motivation is that even when a function's global form is unknown, its surface near the optimal region can often be well approximated by a quadratic.
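To illustrate the idea of a quadratic belief model, here is a minimal sketch (the toy function, noise level, and sample design are hypothetical): fit a quadratic by least squares to noisy observations of a black-box function and read off the minimizer of the fitted surface.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical noisy black box whose surface is quadratic near the optimum.
def f(x):
    return (x - 2.0) ** 2 + 3.0 + rng.normal(scale=0.1)

xs = np.linspace(0.0, 4.0, 25)
ys = np.array([f(x) for x in xs])

# Least-squares fit of a quadratic belief: y ~ a*x^2 + b*x + c.
a, b, c = np.polyfit(xs, ys, deg=2)

# Vertex of the fitted parabola estimates the optimal control setting.
x_star = -b / (2 * a)
print(x_star)
```

In a full optimal-learning loop, the belief over (a, b, c) would be updated sequentially and used to decide where to sample next, rather than fit once to a fixed design.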

Holmes Data Science - Applying Machine Learning and Data Science Techniques to Problems in the Physical Sciences
August 18th, 2016

We research how to help laboratory scientists discover new science through the use of computers, data analysis, machine learning and decision theory. We collaborate with experimentalist teams trying to optimize material properties, or to discover novel materials, using the framework of Optimal Learning, guided by domain expert knowledge and relevant physical modeling.

Hierarchical Knowledge-Gradient with Stochastic Binary Feedbacks with an Application in Personalized Health Care
August 16th, 2016

Our problem is motivated by healthcare applications where high sparsity and a relatively small number of patients make learning more difficult. Adapting an online boosting framework, we develop a knowledge-gradient (KG) type policy that guides the experiment by maximizing the expected value of information from labeling each alternative, in order to reduce the number of expensive physical experiments.

Finite-time Analysis for the Knowledge-gradient Policy and a New Testing Environment for Optimal Learning
August 16th, 2016

We derive the first finite-time bounds for the knowledge-gradient policy. We also introduce the Modular Optimal Learning Testing Environment (MOLTE), which provides a highly flexible environment for testing a range of learning policies on a library of test problems.
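For context, here is a minimal sketch of the knowledge-gradient computation for ranking and selection with independent normal beliefs; the prior means, variances, and noise variance below are hypothetical, and this is the standard KG formula rather than anything specific to the paper's finite-time analysis.

```python
import numpy as np
from scipy.stats import norm

def kg_factors(mu, sigma2, noise_var):
    """Knowledge-gradient value of measuring each alternative, given
    independent normal beliefs N(mu, sigma2) and Gaussian measurement
    noise with variance noise_var."""
    # Predictive reduction in posterior std. dev. from one measurement.
    sigma_tilde = sigma2 / np.sqrt(sigma2 + noise_var)
    # Best competing prior mean for each alternative.
    best_other = np.array([np.max(np.delete(mu, i)) for i in range(len(mu))])
    zeta = -np.abs(mu - best_other) / sigma_tilde
    # f(z) = z * Phi(z) + phi(z): expected improvement of the maximum.
    return sigma_tilde * (zeta * norm.cdf(zeta) + norm.pdf(zeta))

mu = np.array([1.0, 1.5, 1.4])        # hypothetical prior means
sigma2 = np.array([4.0, 0.5, 2.0])    # hypothetical prior variances
kg = kg_factors(mu, sigma2, noise_var=1.0)

# The KG policy measures the alternative with the largest KG value.
print(np.argmax(kg))
```

Note how the uncertain first alternative can be worth measuring even though its prior mean is lowest: the KG value weighs both the belief's uncertainty and how close the alternative is to overtaking the current best.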
