
Questions tagged [autoencoders]

Feedforward neural networks trained to reconstruct their own input. Usually one of the hidden layers is a "bottleneck", leading to an encoder->decoder interpretation.
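As a hedged illustration of that definition, here is a minimal sketch in plain Python (toy 2-D data and a linear 1-D bottleneck; all names and numbers are hypothetical): a linear encoder compresses each point to a single code, a linear decoder reconstructs it, and both are trained jointly by SGD on the squared reconstruction error.

```python
import random

random.seed(0)

# Toy data: 2-D points on the line y = 2x, so a 1-D bottleneck suffices.
xs = [random.uniform(-1.0, 1.0) for _ in range(200)]
data = [(x, 2.0 * x) for x in xs]

# Linear encoder (2 -> 1) and decoder (1 -> 2), small random init.
w_enc = [random.gauss(0.0, 0.1), random.gauss(0.0, 0.1)]
w_dec = [random.gauss(0.0, 0.1), random.gauss(0.0, 0.1)]
lr = 0.05

def reconstruct(p):
    """Encode p to a 1-D bottleneck code z, then decode back to 2-D."""
    z = w_enc[0] * p[0] + w_enc[1] * p[1]
    return (w_dec[0] * z, w_dec[1] * z), z

# Train by SGD on the squared reconstruction error ||p - dec(enc(p))||^2.
for _ in range(100):
    for p in data:
        (r0, r1), z = reconstruct(p)
        e0, e1 = r0 - p[0], r1 - p[1]
        g_z = 2.0 * (e0 * w_dec[0] + e1 * w_dec[1])  # backprop through decoder
        w_dec[0] -= lr * 2.0 * e0 * z
        w_dec[1] -= lr * 2.0 * e1 * z
        w_enc[0] -= lr * g_z * p[0]
        w_enc[1] -= lr * g_z * p[1]

mse = sum((reconstruct(p)[0][0] - p[0]) ** 2 +
          (reconstruct(p)[0][1] - p[1]) ** 2 for p in data) / len(data)
print(round(mse, 6))
```

Because the data is (noise-free) rank-1, the bottleneck loses almost nothing and the final reconstruction error is close to zero.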

0 votes
1 answer
18 views

Exploring VAE latent space

I recently trained an AE and a VAE and used the latent variables of each for a clustering task. It seemed to work well, producing sensible clusters. The main reason for training the VAE was to gain more ...
Nathan Thompo
0 votes
0 answers
15 views

Difference between L2 penalty and L2 loss in SAE

I was reading this paper from Anthropic https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html and in the paper the loss is defined like this: $$ L = \mathbb{E}_x \left[ \| x - \hat{x} \|...
Mrnobody
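The truncated formula above matches the usual sparse-autoencoder objective: a squared-error reconstruction term (the "L2 loss") plus an L1 sparsity penalty on the latent feature activations. An "L2 penalty", by contrast, usually means weight decay on the parameters. A minimal sketch of the activation-penalty version (function name and numbers hypothetical):

```python
def sae_loss(x, x_hat, f, lam):
    """Squared-error reconstruction ("L2 loss") plus an L1 penalty on the
    feature activations f; lam trades reconstruction against sparsity."""
    recon = sum((xi - xhi) ** 2 for xi, xhi in zip(x, x_hat))
    sparsity = lam * sum(abs(fi) for fi in f)
    return recon + sparsity

# recon = 0.25 + 0.25 = 0.5; penalty = 0.5 * (2 + 0 + 1) = 1.5; total = 2.0
print(sae_loss([1.0, 0.0], [0.5, 0.5], [2.0, 0.0, -1.0], 0.5))
```

Note the penalty acts on the activations `f`, not on the encoder/decoder weights; that is the distinction the question title draws.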
2 votes
0 answers
100 views

Anomaly detection for Multivariate Time-Series data from multiple sensors

I work with tabular time-series data from multiple sensors, and my goal is to detect abnormal behavior in battery discharge. Here is an example of the data (the example contains records for only one device ...
mz2300 • 71
1 vote
0 answers
22 views

Multi-task learning: loss function

I am training a convolutional autoencoder with two velocity fields (2D arrays) as inputs and outputs. These fields represent wind velocities in the x and y directions within a square domain. My ...
Sarah • 11
2 votes
1 answer
47 views

Why is the forward process referred to as the "ground truth" in diffusion models?

I've seen many tutorials on diffusion models refer to the distribution of the latent variables induced by the forward process as "ground truth". I wonder why. What we can actually see is ...
Daniel Mendoza
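For reference, the forward process the question asks about is a fixed Gaussian corruption chain (written here in standard DDPM notation, not taken from the question itself); it contains no learned parameters, which is one reason its marginals are treated as the target the learned reverse process is trained to match:

```latex
q(x_t \mid x_{t-1}) = \mathcal{N}\!\big(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t I\big),
\qquad
q(x_t \mid x_0) = \mathcal{N}\!\big(x_t;\ \sqrt{\bar\alpha_t}\, x_0,\ (1-\bar\alpha_t) I\big),
\quad
\bar\alpha_t = \prod_{s=1}^{t} (1-\beta_s).
```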
2 votes
2 answers
73 views

Why does Variational Inference work?

The ELBO is a lower bound, and it only matches the true likelihood when the q-distribution/encoder we choose equals the true posterior distribution. Are there any guarantees that maximizing the ELBO indeed ...
Daniel Mendoza
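The standard answer to this starts from the exact decomposition of the log-evidence (standard variational-inference notation): the gap between $\log p_\theta(x)$ and the ELBO is precisely the KL divergence from the variational posterior to the true posterior, so maximizing the ELBO over $q_\phi$ minimizes that gap:

```latex
\log p_\theta(x)
= \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\!\left[\log \frac{p_\theta(x, z)}{q_\phi(z \mid x)}\right]}_{\text{ELBO}}
\;+\; \operatorname{KL}\!\big(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\big).
```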
2 votes
1 answer
40 views

VAEs: Why do we need the encoder for image generation?

I'm probably missing something obvious, but if we're only looking to generate images and are not interested in the latent space, why do we even need the encoder in VAEs? In my understanding, the ...
Jannik • 125
0 votes
0 answers
15 views

ShapeNet VAE KL Divergence issues

I am trying to train a VAE on ShapeNet but I can't seem to make it work. Any help or ideas would be highly appreciated. The problem is that whenever I apply the KL divergence loss, the network seems to ...
Youssef
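When debugging a KL term like the one described, it helps to evaluate it in closed form: for a diagonal-Gaussian posterior against a standard-normal prior the KL has a simple expression, and common fixes include annealing its weight from zero ("KL warm-up") or free bits. A sketch of the closed form (function name hypothetical):

```python
import math

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over dimensions:
    0.5 * sum(exp(logvar) + mu^2 - 1 - logvar)."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

print(kl_diag_gaussian([0.0, 0.0], [0.0, 0.0]))   # 0.0: posterior equals prior
print(kl_diag_gaussian([1.0, -1.0], [0.0, 0.0]))  # 1.0: 0.5 per shifted dimension
```

If this quantity collapses to near zero early in training, the posterior has matched the prior and the latents carry no information, which is the usual symptom behind "the KL loss breaks my VAE".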
3 votes
1 answer
132 views

Are there any situations where orthogonality is not optimal?

Data reduction is often used to avoid overfitting and to enhance explainability. Popular data reduction techniques, such as SVD or PCA, map/project high-dimensional data to a lower-dimensional ...
Chris M • 39
0 votes
0 answers
10 views

Creating a light image generation model for a specific distribution

I am currently working on how a user can introduce bias into a neural network model. To do so, I am creating an image2image model that only works on the training distribution. For example, let's say I ...
Adrien • 19
0 votes
0 answers
9 views

Clarification about variational autoencoder training

I have one specific question about VAEs as I try to work through the math on my own. I found from this paper that training a VAE model involves optimizing a lower bound of the marginal log-...
Brooklyn Sheppard
1 vote
0 answers
27 views

1-dimensional autoencoder as a clustering tool?

I am looking for references (as I prefer to stand on the shoulders of giants...) for something that "seems" to work well... When we do clustering to analyse some data, to understand its structure ...
Antonello • 403
0 votes
0 answers
21 views

Multivariate Variational Autoencoder and Positive Definite Covariance Matrix

This might be a naive question from a non-statistician, but here we go. I understand the challenges that hamper the use of a multivariate variational autoencoder where a covariance matrix is used instead ...
applied_env
1 vote
0 answers
25 views

Why does a conditional VAE require conditioning of the encoder?

Looking at this blog post and many others, the cVAE looks like this: Now, my question is... why do we need the label at the encoder level? Clearly that information is already inside the image, thus ...
Alberto • 1,217
0 votes
0 answers
13 views

How can probability variables of different dimensions share the same parameter in VAE?

I am confused while studying VAEs (Auto-Encoding Variational Bayes). My problem is as follows: we have a continuous variable $x$ and a random variable $z$. The dimensions of $x$ and $z$ are different, ...
Seonil Choi
