Corey Lowman

Richmond, Virginia, United States
168 followers · 129 connections

About

Curious and intentional software engineer with 7 years of experience. I'm passionate…

Experience & Education

  • Lambda

Publications

  • Geometric instability of out of distribution data across autoencoder architecture

    We study the map learned by a family of autoencoders trained on MNIST, and evaluated on ten different data sets created by the random selection of pixel values according to ten different distributions. Specifically, we study the eigenvalues of the Jacobians defined by the weight matrices of the autoencoder at each training and evaluation point. For high enough latent dimension, we find that each autoencoder reconstructs all the evaluation data sets as similar “generalized characters”, but that this reconstructed “generalized character” changes across autoencoders. Eigenvalue analysis shows that even when the reconstructed image appears to be an MNIST character for all out of distribution data sets, not all have latent representations that are close to the latent representation of MNIST characters. All told, the eigenvalue analysis demonstrated a great deal of geometric instability of the autoencoder both as a function on out of distribution inputs, and across architectures on the same set of inputs.

    Other authors
    See publication
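
    A minimal sketch of the kind of Jacobian eigenvalue computation described in the abstract above, assuming a small fully connected PyTorch autoencoder. The architecture, dimensions, and the random out-of-distribution input below are illustrative placeholders, not the paper's actual setup.

    ```python
    import torch
    import torch.nn as nn

    # Illustrative fully connected autoencoder (not the paper's architecture).
    class Autoencoder(nn.Module):
        def __init__(self, input_dim=784, latent_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                         nn.Linear(128, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                         nn.Linear(128, input_dim))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = Autoencoder().eval()

    # Evaluation point: random pixel values standing in for one of the
    # out-of-distribution data sets.
    x = torch.rand(784)

    # Jacobian of the reconstruction map at x, and its eigenvalue spectrum.
    J = torch.autograd.functional.jacobian(model, x)  # shape (784, 784)
    eigvals = torch.linalg.eigvals(J)                 # complex eigenvalues
    print(eigvals.abs().max())                        # e.g. largest modulus
    ```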
  • Eigenvalues of Autoencoders in Training and at Initialization

    In this paper, we investigate the evolution of autoencoders near their initialization. In particular, we study the distribution of the eigenvalues of the Jacobian matrices of autoencoders early in the training process, training on the MNIST data set. We find that autoencoders that have not been trained have eigenvalue distributions that are qualitatively different from those which have been trained for a long time (>100 epochs). Additionally, we find that even at early epochs, these eigenvalue distributions rapidly become qualitatively similar to those of the fully trained autoencoders. We also compare the eigenvalues at initialization to pertinent theoretical work on the eigenvalues of random matrices and the products of such matrices.

    Other authors
    See publication
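
    As a companion to the publication above, a sketch of the random-matrix baseline that initialization-time spectra are compared against: eigenvalues of an i.i.d. Gaussian matrix and of a product of such matrices. The matrix size and depth here are arbitrary choices for illustration.

    ```python
    import numpy as np

    n, depth = 256, 3
    rng = np.random.default_rng(0)

    # Ginibre-type matrix: i.i.d. Gaussian entries, scaled so the spectrum
    # approaches the circular law as n grows.
    single = rng.normal(size=(n, n)) / np.sqrt(n)

    # Product of several such matrices, as arises when composing the linear
    # layers of a randomly initialized network.
    product = np.linalg.multi_dot(
        [rng.normal(size=(n, n)) / np.sqrt(n) for _ in range(depth)]
    )

    eigs_single = np.linalg.eigvals(single)
    eigs_product = np.linalg.eigvals(product)
    print(np.abs(eigs_single).max(), np.abs(eigs_product).max())
    ```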
  • Searching for explanations: testing social scientific methods in synthetic ground-truthed worlds.

    Computational and Mathematical Organization Theory

    A scientific model’s usefulness relies on its ability to explain phenomena, predict how such phenomena will be impacted by future interventions, and prescribe actions to achieve desired outcomes. We study methods for learning causal models that explain the behaviors of simulated “human” populations. Through the Ground Truth project, we solved a series of Challenges where our explanations, predictions and prescriptions were scored against ground truth information. We describe the processes that emerged for applying causal discovery, network analysis, agent-based modeling and other analytical methods to inform solutions to Challenge tasks. We present our team’s overall performance results on these Challenges and discuss implications for future efforts to validate social scientific research using simulation-based challenges.

    See publication
  • Imitation Learning with Approximated Behavior Cloning Loss

    IEEE/RSJ

    Recent Imitation Learning (IL) techniques focus on adversarial imitation learning algorithms to learn from a fixed set of expert demonstrations. While these approaches are theoretically sound, they suffer from a number of problems such as poor sample efficiency, poor stability, and a host of issues that Generative Adversarial Networks (GANs) suffer from. In this paper we introduce a generalization of Behavior Cloning (BC) that is applicable in any IL setting. Our algorithm first approximates behavior cloning loss using a neural network and then uses that loss network to generate a loss signal which is minimized using standard supervised learning. We call the resulting algorithm family Approximated Behavior Cloning (ABC), introduce variants for each IL setting, and demonstrate an order of magnitude improvement in sample efficiency and increased stability in standard imitation learning environments.

    Other authors
    • Galen Mullins
    • Joshua McClellan
    See publication
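
    A rough sketch of the two-stage idea the abstract describes, in PyTorch: first fit a network that approximates a behavior-cloning-style loss, then minimize that approximated loss signal to train the policy. The dimensions, the stage-1 fitting target, and the use of perturbed expert actions are assumptions made for illustration, not the paper's actual ABC procedure.

    ```python
    import torch
    import torch.nn as nn

    state_dim, action_dim = 8, 2
    policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                           nn.Linear(64, action_dim))
    # Loss network: maps (state, action) to a scalar approximating the BC loss.
    loss_net = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                             nn.Linear(64, 1))

    # Placeholder expert demonstrations.
    expert_states = torch.randn(128, state_dim)
    expert_actions = torch.randn(128, action_dim)

    # Stage 1 (illustrative): regress loss_net(s, a) onto the squared distance
    # between a and the expert action for s, using perturbed expert actions.
    fit_opt = torch.optim.Adam(loss_net.parameters(), lr=1e-3)
    for _ in range(200):
        actions = expert_actions + torch.randn_like(expert_actions)
        target = (actions - expert_actions).pow(2).sum(dim=1, keepdim=True)
        pred = loss_net(torch.cat([expert_states, actions], dim=1))
        fit_loss = (pred - target).pow(2).mean()
        fit_opt.zero_grad()
        fit_loss.backward()
        fit_opt.step()

    # Stage 2: train the policy by minimizing the approximated loss signal
    # with standard supervised learning.
    policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(200):
        actions = policy(expert_states)
        loss = loss_net(torch.cat([expert_states, actions], dim=1)).mean()
        policy_opt.zero_grad()
        loss.backward()
        policy_opt.step()
    ```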
  • Geometry and Generalization: Eigenvalues as predictors of where a network will fail to generalize

    -

    We study the deformation of the input space by a trained autoencoder via the Jacobians of the trained weight matrices. In doing so, we prove bounds for the mean squared errors for points in the input space, under assumptions regarding the orthogonality of the eigenvectors. We also show that the trace and the product of the eigenvalues of the Jacobian matrices are good predictors of the MSE on test points. This is a dataset-independent means of testing an autoencoder's ability to generalize on new input. Namely, no knowledge of the dataset on which the network was trained is needed, only the parameters of the trained model.

    Other authors
    • Susama Agarwala
    • Ben Dees
    See publication
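
    A minimal sketch of the dataset-independent check the abstract describes: compute the trace of the Jacobian (the sum of its eigenvalues) and its log-determinant (the log of the product of the eigenvalues, in modulus) at a test point, using only the trained weights, and compare against the reconstruction MSE at that point. The model and input below are illustrative.

    ```python
    import torch
    import torch.nn as nn

    # Placeholder autoencoder; in practice load the trained weights.
    input_dim, latent_dim = 784, 32
    model = nn.Sequential(
        nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim),
        nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, input_dim),
    ).eval()

    def jacobian_stats(x):
        """Trace and log|det| of the Jacobian of the reconstruction map at x."""
        J = torch.autograd.functional.jacobian(model, x)
        trace = torch.trace(J).item()              # sum of eigenvalues
        _, logabsdet = torch.linalg.slogdet(J)     # log of |product of eigenvalues|
        return trace, logabsdet.item()

    # No training data is needed: only the model parameters enter here.
    x = torch.rand(input_dim)
    trace, log_eig_product = jacobian_stats(x)
    mse = (model(x) - x).pow(2).mean().item()
    print(trace, log_eig_product, mse)
    ```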
  • Instructive artificial intelligence (AI) for human training, assistance, and explainability

    -

    We propose a novel approach to explainable AI (XAI) based on the concept of "instruction" from neural networks. In this case study, we demonstrate how a superhuman neural network might instruct human trainees as an alternative to traditional approaches to XAI. Specifically, an AI examines human actions and calculates variations on the human strategy that lead to better performance. Experiments with a JHU/APL-developed AI player for the cooperative card game Hanabi suggest this technique makes unique contributions to explainability while improving human performance. One area of focus for Instructive AI is the significant discrepancies that can arise between a human's actual strategy and the strategy they profess to use. This inaccurate self-assessment presents a barrier for XAI, since explanations of an AI's strategy may not be properly understood or implemented by human recipients. We have developed and are testing a novel, Instructive AI approach that estimates human strategy by observing human actions. With neural networks, this allows a direct calculation of the changes in weights needed to improve the human strategy to better emulate a more successful AI. Subjected to constraints (e.g. sparsity), these weight changes can be interpreted as recommended changes to human strategy (e.g. "value A more, and value B less"). Instruction from AI such as this functions both to help humans perform better at tasks and to better understand, anticipate, and correct the actions of an AI. Results will be presented on AI instruction's ability to improve human decision-making and human-AI teaming in Hanabi.

    Other authors
    • Nick Kantack
    • Nina Cohen
    • Nathan Bos
    • James Everett
    • Timothy Endres
    See publication
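
    Purely as an illustration of the "sparse recommended changes to strategy weights" idea in the abstract above: the toy linear strategy, the feature names, and the stand-in AI preferences below are invented for this sketch and are not from the paper.

    ```python
    import torch

    # Toy linear "human strategy" over hand-crafted features (hypothetical names).
    feature_names = ["value_hints", "value_discards", "value_risky_plays"]
    human_w = torch.tensor([0.8, 0.1, 0.5], requires_grad=True)

    features = torch.randn(64, 3)                           # observed situations
    ai_scores = features @ torch.tensor([0.6, 0.4, -0.2])   # stand-in AI policy

    # Disagreement between the estimated human strategy and the AI's preferences.
    loss = ((features @ human_w) - ai_scores).pow(2).mean()
    loss.backward()

    # Sparsity constraint: surface only the single largest recommended change.
    delta = -human_w.grad
    k = delta.abs().argmax().item()
    direction = "more" if delta[k] > 0 else "less"
    print(f"Recommendation: weight {feature_names[k]} {direction}")
    ```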
  • The First International Competition in Machine Reconnaissance Blind Chess

    PMLR

    Reconnaissance blind chess (RBC) is a chess variant in which a player cannot see her opponent’s pieces but can learn about them through private, explicit sensing actions. The game presents numerous research challenges, and was the focus of a competition held in conjunction with the 2019 Conference on Neural Information Processing Systems (NeurIPS).

    Other authors
    • Ryan Gardner
    • Casey Richardson
    • Ashley Llorens
    See publication
