Actively Learning what makes a Discrete Sequence Valid
Deep learning techniques have been hugely successful for traditional supervised and unsupervised machine learning problems; in large part, these techniques solve continuous optimization problems. Recently, however, discrete generative deep learning models have been used to efficiently search high-dimensional discrete spaces. These methods work by representing discrete objects as sequences, for which powerful sequence-based deep models can be employed. Unfortunately, these techniques are significantly hindered by the fact that such generative models often produce invalid sequences. As a step towards solving this problem, we propose to learn a deep recurrent validator model: given a partial sequence, the model estimates the probability that the sequence occurs as the beginning of a full valid sequence. It thus distinguishes valid from invalid sequences and, crucially, provides insight into how individual sequence elements influence the validity of discrete objects. To train this model we propose an approach inspired by seminal work in Bayesian active learning. On a synthetic dataset, we demonstrate the ability of our model to distinguish valid and invalid sequences. We believe this is a key step toward learning generative models that reliably produce valid discrete objects.
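To make the prefix-validity notion concrete: the label such a validator is trained to predict can be computed exactly for a toy grammar. The sketch below (our illustration, not the paper's code or dataset) uses balanced parentheses of bounded length as an assumed synthetic task; a prefix is "valid" if some completion yields a balanced string within the length budget.

```python
def prefix_is_valid(prefix: str, max_len: int) -> bool:
    """True iff `prefix` (over the alphabet "()") can be extended to a
    balanced-parenthesis string of total length at most `max_len`."""
    depth = 0
    for ch in prefix:
        depth += 1 if ch == "(" else -1
        if depth < 0:            # a ')' with no matching '(': dead end
            return False
    # The shortest valid completion appends `depth` closing parens.
    return len(prefix) + depth <= max_len

# Per-prefix labels for one candidate sequence: the kind of training
# signal a recurrent validator would learn to reproduce.
seq = "())"
labels = [prefix_is_valid(seq[:i], max_len=6) for i in range(1, len(seq) + 1)]
print(labels)   # [True, True, False]
```

Note how the labels localize the error: the sequence becomes unsalvageable exactly at the third character, which is the per-element insight the abstract describes.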
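The active-learning loop can be sketched as querying a validity oracle on the prefix the current model is least certain about. The example below uses maximum predictive entropy as the acquisition rule; this is one common uncertainty heuristic and an assumption for illustration, not the paper's specific Bayesian acquisition function. The pool, predictions, and helper names are all hypothetical.

```python
import math

def entropy(p: float) -> float:
    """Binary entropy (in nats) of a Bernoulli(p) validity prediction."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def select_query(pool, predict):
    """Pick the candidate prefix whose predicted validity the model is
    most uncertain about (highest predictive entropy)."""
    return max(pool, key=lambda prefix: entropy(predict(prefix)))

# Toy stand-in for the validator's current predictions: confident on
# short prefixes, maximally uncertain on the longest one.
pool = ["(", "((", "(()("]
preds = {"(": 0.95, "((": 0.7, "(()(": 0.5}
query = select_query(pool, preds.get)
print(query)   # (()(
```

The selected prefix would then be labeled by the oracle and added to the training set, concentrating labeling effort where the validator's beliefs are least settled.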