Publications
-
Geometric instability of out of distribution data across autoencoder architecture
We study the map learned by a family of autoencoders trained on MNIST and evaluated on ten different data sets created by randomly selecting pixel values according to ten different distributions. Specifically, we study the eigenvalues of the Jacobians defined by the weight matrices of the autoencoder at each training and evaluation point. For high enough latent dimension, we find that each autoencoder reconstructs all the evaluation data sets as similar "generalized characters," but that this reconstructed "generalized character" changes across autoencoders. Eigenvalue analysis shows that even when the reconstructed image appears to be an MNIST character for all out-of-distribution data sets, not all have latent representations that are close to the latent representations of MNIST characters. All told, the eigenvalue analysis demonstrates a great deal of geometric instability in the autoencoder, both as a function of out-of-distribution inputs and across architectures on the same set of inputs.
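The core computation behind this analysis can be sketched in a few lines: for a point in input space, form the Jacobian of the encoder-decoder map and inspect its eigenvalues. The toy dimensions, tanh activation, and random (untrained) weights below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autoencoder: 8-dim input -> 3-dim latent -> 8-dim reconstruction.
# Dimensions and (untrained) weights are illustrative only.
d, k = 8, 3
W_enc = rng.normal(scale=1.0 / np.sqrt(d), size=(k, d))
W_dec = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))

def autoencoder(x):
    return W_dec @ np.tanh(W_enc @ x)

def jacobian(x):
    # d/dx [W_dec tanh(W_enc x)] = W_dec diag(1 - tanh^2(W_enc x)) W_enc
    h = np.tanh(W_enc @ x)
    return W_dec @ np.diag(1.0 - h**2) @ W_enc

x = rng.normal(size=d)          # an "evaluation point"
J = jacobian(x)                 # d x d Jacobian of the learned map at x
eigvals = np.linalg.eigvals(J)  # complex in general

# With latent dimension k < d, the Jacobian has rank at most k,
# so at most k eigenvalues are numerically nonzero.
print(sorted(np.abs(eigvals), reverse=True))
```

Comparing these spectra across evaluation points (and across architectures) is what surfaces the geometric instability the abstract describes.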
-
Eigenvalues of Autoencoders in Training and at Initialization
In this paper, we investigate the evolution of autoencoders near their initialization. In particular, we study the distribution of the eigenvalues of the Jacobian matrices of autoencoders early in training on the MNIST data set. We find that autoencoders that have not been trained have eigenvalue distributions qualitatively different from those of autoencoders trained for a long time (>100 epochs). Additionally, we find that even at early epochs, these eigenvalue distributions rapidly become qualitatively similar to those of the fully trained autoencoders. We also compare the eigenvalues at initialization to pertinent theoretical work on the eigenvalues of random matrices and the products of such matrices.
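The random-matrix baseline mentioned at the end can be sketched directly: at initialization, a deep linear map's Jacobian is a product of independent random weight matrices, whose spectrum is well studied. The width, depth, and Gaussian scaling below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# At initialization, a network's Jacobian is (up to activation terms)
# a product of independent random weight matrices. Sizes are illustrative.
n, depth = 100, 3
J0 = np.eye(n)
for _ in range(depth):
    J0 = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n)) @ J0

eig = np.linalg.eigvals(J0)

# A single Gaussian matrix with this scaling follows the circular law
# (eigenvalues roughly uniform on the unit disk as n grows); products of
# such matrices concentrate more eigenvalue mass near zero.
print(np.max(np.abs(eig)), np.mean(np.abs(eig)))
```

Comparing the empirical spectrum of a freshly initialized autoencoder against histograms like this one is the kind of check the abstract alludes to.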
-
Searching for explanations: testing social scientific methods in synthetic ground-truthed worlds.
Computational and Mathematical Organization Theory
A scientific model’s usefulness relies on its ability to explain phenomena, predict how such phenomena will be impacted by future interventions, and prescribe actions to achieve desired outcomes. We study methods for learning causal models that explain the behaviors of simulated “human” populations. Through the Ground Truth project, we solved a series of Challenges where our explanations, predictions and prescriptions were scored against ground truth information. We describe the processes that emerged for applying causal discovery, network analysis, agent-based modeling and other analytical methods to inform solutions to Challenge tasks. We present our team’s overall performance results on these Challenges and discuss implications for future efforts to validate social scientific research using simulation-based challenges.
-
Imitation Learning with Approximated Behavior Cloning Loss
IEEE/RSJ
Recent Imitation Learning (IL) techniques focus on adversarial imitation learning algorithms to learn from a fixed set of expert demonstrations. While these approaches are theoretically sound, they suffer from poor sample efficiency, poor stability, and a host of issues inherited from Generative Adversarial Networks (GANs). In this paper we introduce a generalization of Behavior Cloning (BC) that is applicable in any IL setting. Our algorithm first approximates the behavior cloning loss using a neural network and then uses that loss network to generate a loss signal which is minimized using standard supervised learning. We call the resulting algorithm family Approximated Behavior Cloning (ABC), introduce variants for each IL setting, and demonstrate an order-of-magnitude improvement in sample efficiency and increased stability in standard imitation learning environments.
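The two-step idea — fit a surrogate to the BC loss, then minimize the surrogate — can be sketched in miniature. Everything here is a simplifying assumption: a linear 1-D policy, a synthetic expert, and a polynomial fit standing in for the paper's learned loss network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic expert demonstrations: state s, action a = 2*s (assumed expert).
S = rng.uniform(-1, 1, size=(256, 1))
A = 2.0 * S

def bc_loss(theta):
    # Behavior cloning loss of a linear policy a = theta * s.
    return float(np.mean((theta * S - A) ** 2))

# Step 1: approximate the BC loss as a function of the policy parameter.
# A degree-2 polynomial stands in for the paper's loss network (assumption).
thetas = np.linspace(-1.0, 5.0, 64)
losses = np.array([bc_loss(t) for t in thetas])
c2, c1, c0 = np.polyfit(thetas, losses, deg=2)

# Step 2: minimize the surrogate loss with plain gradient descent.
theta = 0.0
for _ in range(200):
    grad = 2 * c2 * theta + c1   # derivative of the fitted surrogate
    theta -= 0.1 * grad

print(theta)  # approaches the expert parameter, theta = 2
```

In the actual ABC setting the surrogate is a neural network over full policy behavior rather than a polynomial over a scalar, but the optimize-the-approximation structure is the same.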
-
Geometry and Generalization: Eigenvalues as predictors of where a network will fail to generalize
We study the deformation of the input space by a trained autoencoder via the Jacobians of the trained weight matrices. In doing so, we prove bounds on the mean squared error for points in the input space, under assumptions regarding the orthogonality of the eigenvectors. We also show that the trace and the product of the eigenvalues of the Jacobian matrices are good predictors of the MSE on test points. This is a dataset-independent means of testing an autoencoder's ability to generalize on new input: no knowledge of the dataset on which the network was trained is needed, only the parameters of the trained model.
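The two spectral statistics named above are cheap to compute from a Jacobian alone. The random 6x6 matrix below is a hypothetical stand-in for a trained autoencoder's Jacobian at a test point; the log of the absolute product is used because multiplying many eigenvalues directly underflows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Jacobian J of a trained autoencoder at a test point
# (random here purely for illustration).
J = rng.normal(scale=0.3, size=(6, 6))

eig = np.linalg.eigvals(J)
trace = eig.sum().real                       # equals np.trace(J)
log_abs_product = np.log(np.abs(eig)).sum()  # log |det J|, numerically safer
                                             # than the raw eigenvalue product

# The paper's claim: statistics like these, computed from the model's
# weights alone, track reconstruction MSE on unseen inputs -- no access
# to the training data is required.
print(trace, log_abs_product)
```

The identities used here (sum of eigenvalues = trace, product of eigenvalues = determinant) are what make the predictor dataset-independent: both sides depend only on the trained weights.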
-
Instructive artificial intelligence (AI) for human training, assistance, and explainability
We propose a novel approach to explainable AI (XAI) based on the concept of "instruction" from neural networks. In this case study, we demonstrate how a superhuman neural network might instruct human trainees as an alternative to traditional approaches to XAI. Specifically, an AI examines human actions and calculates variations on the human strategy that lead to better performance. Experiments with a JHU/APL-developed AI player for the cooperative card game Hanabi suggest this technique makes unique contributions to explainability while improving human performance. One area of focus for Instructive AI is the significant discrepancy that can arise between a human's actual strategy and the strategy they profess to use. This inaccurate self-assessment presents a barrier for XAI, since explanations of an AI's strategy may not be properly understood or implemented by human recipients. We have developed and are testing a novel Instructive AI approach that estimates human strategy by observing human actions. With neural networks, this allows a direct calculation of the changes in weights needed to improve the human strategy to better emulate a more successful AI. Subject to constraints (e.g., sparsity), these weight changes can be interpreted as recommended changes to human strategy (e.g., "value A more, and value B less"). Instruction from an AI such as this serves both to help humans perform better at tasks and to better understand, anticipate, and correct the actions of an AI. Results will be presented on AI instruction's ability to improve human decision-making and human-AI teaming in Hanabi.
-
The First International Competition in Machine Reconnaissance Blind Chess
PMLR
Reconnaissance blind chess (RBC) is a chess variant in which a player cannot see her opponent's pieces but can learn about them through private, explicit sensing actions. The game presents numerous research challenges, and was the focus of a competition held in conjunction with the 2019 Conference on Neural Information Processing Systems (NeurIPS).