We propose a convolutional autoencoder with sequential and channel attention (CAE-SCA) to address this issue. Sequential attention (SA) is based on long short-term memory (LSTM), which captures ...
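Since the snippet only names the components, here is a minimal PyTorch sketch of the idea, assuming 1-D convolutional features. The `ChannelAttention`, `SequentialAttention`, and `CAESCA` modules, their shapes, and all hyperparameters are illustrative assumptions, not the authors' published CAE-SCA architecture; the only grounded detail is that the sequential attention is driven by an LSTM.

```python
# Illustrative sketch only; module designs are assumptions, not the paper's model.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, channels, length)
        weights = self.fc(x.mean(dim=-1))       # global average pool over time
        return x * weights.unsqueeze(-1)        # rescale each channel


class SequentialAttention(nn.Module):
    """LSTM-based attention over the time axis (assumed design)."""
    def __init__(self, channels: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, channels, length)
        h, _ = self.lstm(x.transpose(1, 2))     # (batch, length, hidden)
        weights = torch.sigmoid(self.score(h))  # per-time-step gate in (0, 1)
        return x * weights.transpose(1, 2)      # reweight time steps


class CAESCA(nn.Module):
    """Convolutional autoencoder with sequential + channel attention."""
    def __init__(self, in_channels: int = 1, feat: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, feat, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            SequentialAttention(feat),
            ChannelAttention(feat),
        )
        self.decoder = nn.ConvTranspose1d(
            feat, in_channels, kernel_size=4, stride=2, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))


x = torch.randn(8, 1, 128)                      # toy batch of 1-D signals
recon = CAESCA()(x)
print(recon.shape)                              # torch.Size([8, 1, 128])
```

The attention modules are placed between encoder and decoder here purely for illustration; where the paper inserts them, and the depth of the convolutional stacks, cannot be recovered from the snippet.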
These networks, which can be seen as variational autoencoders with a flexible latent prior, are smaller and easier to train than state-of-the-art generative models. As a second contribution, we ...
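As one concrete reading of "a flexible latent prior", the sketch below replaces the standard normal prior of a VAE with a learnable mixture of Gaussians and trains with a single-sample Monte Carlo ELBO. The `MixturePriorVAE` class, its layer sizes, and the mixture-prior choice are assumptions for illustration; the paper's actual prior and objective are not specified in the snippet.

```python
# Illustrative sketch: a VAE whose prior is itself learned (mixture of Gaussians).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

LOG2PI = math.log(2 * math.pi)


def gaussian_log_prob(z, mu, logvar):
    """Log density of a diagonal Gaussian, summed over the last dim."""
    return -0.5 * (logvar + (z - mu) ** 2 / logvar.exp() + LOG2PI).sum(-1)


class MixturePriorVAE(nn.Module):
    def __init__(self, x_dim: int = 784, z_dim: int = 8, k: int = 10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))
        # Flexible prior: k learnable Gaussian components (assumption).
        self.prior_mu = nn.Parameter(torch.randn(k, z_dim) * 0.5)
        self.prior_logvar = nn.Parameter(torch.zeros(k, z_dim))

    def log_prior(self, z):
        """log p(z) under a uniform mixture of the k components."""
        log_comp = gaussian_log_prob(z.unsqueeze(1),
                                     self.prior_mu, self.prior_logvar)
        return torch.logsumexp(log_comp, dim=1) - math.log(self.prior_mu.shape[0])

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.dec(z)
        # Monte Carlo ELBO: E[log p(x|z) + log p(z) - log q(z|x)].
        rec = -F.mse_loss(recon, x, reduction="none").sum(-1)  # Gaussian NLL up to a constant
        elbo = rec + self.log_prior(z) - gaussian_log_prob(z, mu, logvar)
        return -elbo.mean()  # loss to minimize


loss = MixturePriorVAE()(torch.rand(16, 784))
loss.backward()  # gradients flow into encoder, decoder, and the prior itself
```

Because `prior_mu` and `prior_logvar` are parameters, the prior is fit jointly with the encoder and decoder, which is what distinguishes a flexible prior from the fixed standard-normal prior of a vanilla VAE.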