  1. The Geometry of Concepts: Sparse Autoencoder Feature Structure

    Oct 10, 2024 · Abstract: Sparse autoencoders have recently produced dictionaries of high-dimensional vectors corresponding to the universe of concepts represented by large language …

  2. Sparse Autoencoders in Deep Learning - GeeksforGeeks

    Apr 8, 2025 · This tutorial shows how to construct a sparse autoencoder with TensorFlow and Keras to learn useful representations of the MNIST dataset. The …

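Since the result above is a build-it tutorial, a minimal sketch of the kind of model it describes may help: a Keras autoencoder on MNIST whose hidden activations carry an L1 activity penalty. The layer sizes and penalty coefficient below are illustrative assumptions, not values from the article.

```python
# Minimal sparse autoencoder on MNIST (sketch; sizes and the L1
# coefficient are illustrative, not taken from the article).
import tensorflow as tf
from tensorflow.keras import layers, regularizers

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

inputs = layers.Input(shape=(784,))
# The L1 activity regularizer pushes most hidden activations toward zero,
# which is what makes the autoencoder "sparse".
encoded = layers.Dense(64, activation="relu",
                       activity_regularizer=regularizers.l1(1e-5))(inputs)
decoded = layers.Dense(784, activation="sigmoid")(encoded)

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))
```
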
  3. Autoencoder - Wikipedia

    Examples are regularized autoencoders (sparse, denoising and contractive autoencoders), which are effective in learning representations for subsequent classification tasks,[3] and variational …

  4. The Geometry of Concepts: Sparse Autoencoder Feature Structure

    Mar 27, 2025 · SAE feature structure: Sparse autoencoders (SAEs) are a recent approach for discovering interpretable language model features without supervision, although relatively few …

  5. CS294A Lecture Notes: Sparse Autoencoder - Stanford University

    We will first describe feedforward neural networks and the backpropagation algorithm for supervised learning. Then, we show how this is used to construct an autoencoder, which is an …

  6. Scaling and Evaluating Sparse Autoencoders - OpenAI

    We develop a state-of-the-art methodology to reliably train extremely wide and sparse autoencoders with very few dead latents on the activations of any language model. We …

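The snippet above does not say how sparsity is enforced; one common mechanism for very wide SAEs on language-model activations is a TopK activation that keeps only the k largest latents per example. The sketch below assumes that choice, and the widths (d_model, n_latents, k) are made up; it is not presented as the paper's actual method.

```python
# Hypothetical TopK sparse autoencoder over language-model activations.
# TopK is one common way to enforce sparsity without tuning an L1
# coefficient; widths below (d_model, n_latents, k) are made up.
import tensorflow as tf

class TopKSAE(tf.keras.Model):
    def __init__(self, d_model=768, n_latents=16384, k=32):
        super().__init__()
        self.k = k
        self.encoder = tf.keras.layers.Dense(n_latents)
        self.decoder = tf.keras.layers.Dense(d_model)

    def call(self, x):
        pre = self.encoder(x)
        # Keep only the k largest pre-activations per example; zero the rest.
        kth_largest = tf.math.top_k(pre, k=self.k).values[:, -1:]
        latents = tf.nn.relu(pre) * tf.cast(pre >= kth_largest, pre.dtype)
        return self.decoder(latents)

sae = TopKSAE()
activations = tf.random.normal([8, 768])  # stand-in LM activations
reconstruction = sae(activations)
```

With a fixed k, sparsity holds exactly by construction, and dead latents are easy to monitor: a latent that never appears in any example's top k is dead.
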
  7. Sparse Autoencoder Explained - Papers With Code

    A Sparse Autoencoder is a type of autoencoder that employs sparsity to achieve an information bottleneck. Specifically, the loss function is constructed so that activations are penalized within …

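To make "activations are penalized" concrete, here is one common form of the loss: reconstruction error plus an L1 term on the hidden activations. The coefficient name and value are assumptions; a KL-divergence penalty on the average activation is another standard variant.

```python
# One common sparse-autoencoder loss: reconstruction error plus an
# L1 penalty on hidden activations (lambda_l1 is an assumed knob).
import tensorflow as tf

def sparse_ae_loss(x, x_hat, hidden, lambda_l1=1e-4):
    reconstruction = tf.reduce_mean(tf.square(x - x_hat))  # ||x - x_hat||^2
    sparsity = tf.reduce_mean(tf.abs(hidden))              # mean |h_i|
    return reconstruction + lambda_l1 * sparsity
```
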
  8. What happens in Sparse Autoencoder | by Syoya Zhou - Medium

    Dec 4, 2018 · The difference between a basic autoencoder and other neural networks is that an autoencoder is composed of two symmetric parts: an encoder and a decoder with dimensions of …

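A minimal sketch of that symmetric structure, assuming Keras and illustrative layer sizes: the decoder mirrors the encoder's dimensions in reverse, with the bottleneck in between.

```python
# Symmetric encoder/decoder: the decoder reverses the encoder's layer
# sizes (784 -> 128 -> 32 -> 128 -> 784). Sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

encoder = tf.keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),     # bottleneck
])
decoder = tf.keras.Sequential([
    layers.Input(shape=(32,)),
    layers.Dense(128, activation="relu"),    # mirrors the encoder
    layers.Dense(784, activation="sigmoid"),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
```
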
  9. Sparse Autoencoder Neural Networks - How to Utilise Sparsity …

    May 3, 2022 · This article will focus on Sparse Autoencoders (SAE) and compare them to Undercomplete Autoencoders (AE). Contents. SAE within the universe of Machine Learning …

  10. sparse_autoencoders.ipynb - Colab - Google Colab

    In this notebook, we will explore one of the cutting-edge approaches to interpreting superposition: sparse autoencoders (SAE). SAEs are a type of neural network used in unsupervised learning …

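As a toy version of the workflow such notebooks walk through: encode activations with an SAE encoder and rank inputs by how strongly they activate one feature. Everything below is a synthetic stand-in (random weights and activations), not the notebook's code.

```python
# Toy interpretability loop: which inputs most activate a given SAE
# feature? Weights and activations are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64))         # stand-in model activations
W_enc = rng.normal(size=(64, 512))         # stand-in trained encoder
features = np.maximum(acts @ W_enc, 0.0)   # ReLU feature activations

feature_id = 7                             # arbitrary feature to inspect
top_inputs = np.argsort(features[:, feature_id])[::-1][:10]
print(f"inputs most activating feature {feature_id}: {top_inputs}")
```
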