Recent advances in training large-scale multimodal models have been driven by efforts to eliminate modeling constraints and unify architectures across domains. Despite this progress, many existing models still rely on separately trained components, such as modality-specific encoders and decoders.
In a new paper, “JetFormer: An Autoregressive Generative Model of Raw Images and Text,” a Google DeepMind research team introduces JetFormer, an autoregressive decoder-only Transformer designed to model raw data directly. The model is trained to maximize the likelihood of raw data without relying on any pretrained components, and it can seamlessly understand and generate both text and images.


The team summarizes JetFormer’s key innovations as follows:
- Leveraging normalizing flows for image representation: The crucial insight behind JetFormer is its use of a powerful normalizing flow, dubbed a “jet,” to encode images into latent representations suitable for autoregressive modeling. Autoregression directly over raw image patches encoded as pixels has been impractical due to their structural complexity; JetFormer’s flow addresses this by providing a lossless, reversible representation that integrates seamlessly with the multimodal model. At inference time, the reversibility of the flow enables direct image decoding (see the first sketch after this list).
- Guiding the model toward high-level information: To strengthen the focus on important high-level information, the researchers employ two strategies. First, progressive Gaussian noise augmentation: Gaussian noise is added to images during training and gradually reduced, encouraging the model to prioritize global, high-level features early in training. Second, managing image-data redundancy: JetFormer can selectively exclude redundant dimensions of natural images from the autoregressive model; alternatively, principal component analysis (PCA) is explored as a way to reduce dimensionality without sacrificing important information (see the second sketch below).
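To make the flow idea concrete, below is a minimal NumPy sketch of an invertible coupling transform. It is a toy stand-in for illustration only: the real “jet” flow stacks many Transformer-parameterized coupling layers, and the AffineCoupling class, dimensions, and initialization here are invented assumptions. What it does show is the lossless round trip described above: the autoregressive model operates on the latents, and decoding an image amounts to inverting the flow.

```python
import numpy as np

class AffineCoupling:
    """Toy invertible coupling step: one half of the dimensions
    predicts a shift applied to the other half (illustrative only)."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        half = dim // 2
        self.w = rng.normal(scale=0.1, size=(half, half))
        self.b = rng.normal(scale=0.1, size=half)

    def forward(self, x):
        x1, x2 = np.split(x, 2, axis=-1)
        return np.concatenate([x1, x2 + x1 @ self.w + self.b], axis=-1)

    def inverse(self, z):
        z1, z2 = np.split(z, 2, axis=-1)
        return np.concatenate([z1, z2 - (z1 @ self.w + self.b)], axis=-1)

patches = np.random.rand(16, 8)     # 16 toy image "patches", 8 dims each
flow = AffineCoupling(dim=8)
latents = flow.forward(patches)     # the sequence the Transformer would model
recon = flow.inverse(latents)       # image decoding = inverting the flow
assert np.allclose(patches, recon)  # reversible, hence lossless
```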
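The two guidance strategies can be sketched just as briefly. In the snippet below, the linear decay schedule, the sigma_max value, and the number of retained PCA dimensions k are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def noise_std(step, total_steps, sigma_max=0.5):
    """Progressive noise curriculum: heavy Gaussian noise early in
    training, decaying to zero by the end (linear decay is an assumption)."""
    return sigma_max * max(0.0, 1.0 - step / total_steps)

def augment(images, step, total_steps, rng):
    """Add curriculum noise so early training emphasizes global structure."""
    return images + rng.normal(scale=noise_std(step, total_steps),
                               size=images.shape)

rng = np.random.default_rng(0)
images = rng.random((256, 64))      # a toy batch of flattened images

noisy_early = augment(images, step=100, total_steps=10_000, rng=rng)   # very noisy
noisy_late = augment(images, step=9_900, total_steps=10_000, rng=rng)  # nearly clean

# Redundancy handling via PCA: keep only the leading components for the
# autoregressive model; the discarded tail of a natural image carries
# comparatively little information.
centered = images - images.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 16                              # retained dimensions (illustrative)
reduced = centered @ vt[:k].T       # what the sequence model would consume
```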


The team evaluated JetFormer on two challenging tasks: class-conditional image generation on ImageNet and web-scale multimodal generation. The results show that when trained on large-scale data, JetFormer is competitive with less flexible models and performs strongly on both image and text generation tasks. Its end-to-end training capability further underscores its flexibility and effectiveness.
JetFormer represents a major step toward simplifying multimodal architectures by unifying text and image modeling in a single end-to-end model. Its innovative use of normalizing flows and its emphasis on prioritizing high-level features open a new direction for end-to-end generative modeling. The work lays the foundation for further exploration of unified multimodal systems and paves the way for more integrated and efficient approaches to AI model development.
The paper “JetFormer: An Autoregressive Generative Model of Raw Images and Text” is available on arXiv.
Author: Hekate He | Editor: Chain Zhang