News

About six months after coming out of stealth with $50 million in funding, Latent Labs has released a web-based AI model for ...
The model can be accessed through Latent’s web-based platform for push-button protein design. Sign-ups for early access are open now at platform.latentlabs.com. Extensive lab validation shows picomolar ...
Large language models (LLMs) like BERT and GPT are driving major advances in artificial intelligence, but their size and ...
Figures 5 and 6 show the autoencoder and MLP model structures, respectively. To introduce non-linearity into the mapping functions, we employ ReLU and Sigmoid activation functions.
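As a rough illustration of that setup, here is a minimal PyTorch sketch of an autoencoder and an MLP that use ReLU in the hidden layers and a Sigmoid on the output. Only the ReLU/Sigmoid choice comes from the snippet; the framework, layer sizes, and module layout are assumptions.

    import torch.nn as nn

    class Autoencoder(nn.Module):
        """Toy encoder/decoder: ReLU in hidden layers, Sigmoid on the reconstruction."""
        def __init__(self, in_dim=128, latent_dim=16):  # hypothetical sizes
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(in_dim, 64), nn.ReLU(),
                nn.Linear(64, latent_dim), nn.ReLU())
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 64), nn.ReLU(),
                nn.Linear(64, in_dim), nn.Sigmoid())  # outputs squashed to [0, 1]

        def forward(self, x):
            return self.decoder(self.encoder(x))

    class MLP(nn.Module):
        """Toy feed-forward mapping with the same activation choices."""
        def __init__(self, in_dim=16, out_dim=1):  # hypothetical sizes
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 32), nn.ReLU(),
                nn.Linear(32, out_dim), nn.Sigmoid())

        def forward(self, x):
            return self.net(x)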
It introduces a novel framework combining a sparse deformable marching cubes structure called Sparcubes with a modality-consistent autoencoder known as Sparconv-VAE. Sparcubes transforms raw mesh data ...
Scientists at Massachusetts Institute of Technology have devised a way for large language models to keep learning on the fly—a step toward building AI that continually improves itself.
Model Context Protocol (MCP) is redefining AI by enabling real-time tool integration, solving knowledge staleness, and boosting interactivity.
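To give a concrete sense of what that tool integration looks like, here is a minimal sketch of an MCP tool server, assuming the official Python SDK (the mcp package); the server name and the tool itself are illustrative, not from the article.

    # Minimal MCP tool-server sketch; the tool gives a client model fresh data at call time.
    from datetime import datetime, timezone
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-tools")  # hypothetical server name

    @mcp.tool()
    def current_time() -> str:
        """Return the current UTC time as an ISO-8601 string."""
        return datetime.now(timezone.utc).isoformat()

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio so an MCP client can call it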
A transfer-learned hierarchical variational autoencoder model for computational design of anticancer peptides.
For comparison, we developed a foundation model (‘ViTClassifier’) based on a vanilla Vision Transformer (ViT) autoencoder. Having tuned these foundation models, we evaluated them ...
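For illustration, a minimal sketch of how such a ViT-based classifier might be assembled, assuming torchvision's vit_b_16 as the vanilla backbone; the class count and head replacement are assumptions, and only the ViTClassifier name comes from the snippet.

    import torch.nn as nn
    from torchvision.models import vit_b_16

    class ViTClassifier(nn.Module):
        """Vanilla ViT backbone with its classification head swapped for the target task."""
        def __init__(self, num_classes=2):  # hypothetical number of classes
            super().__init__()
            self.backbone = vit_b_16(weights=None)  # plain ViT-B/16, randomly initialised
            in_features = self.backbone.heads.head.in_features
            self.backbone.heads.head = nn.Linear(in_features, num_classes)

        def forward(self, x):  # x: (batch, 3, 224, 224)
            return self.backbone(x)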
Microsoft Research has introduced BioEmu-1, a deep-learning model designed to predict the range of structural conformations that proteins can adopt. Unlike traditional methods that provide a ...