
LAION-5B: A NEW ERA OF OPEN LARGE-SCALE MULTI-MODAL DATASETS
We present a dataset of 5.85 billion CLIP-filtered image-text pairs, 14x bigger than LAION-400M, previously the biggest openly accessible image-text dataset in the world (see also our NeurIPS 2022 paper).
See our update on the LAION-5B dataset.
Large image-text models like ALIGN, BASIC, Turing Bletchley, FLORENCE & GLIDE have shown steadily improving performance compared to previous flagship models like CLIP and DALL-E. Most of them were trained on billions of image-text pairs, and unfortunately no datasets of this size were openly available until now. To address this problem we present LAION-5B, a large-scale dataset for research purposes consisting of 5.85B CLIP-filtered image-text pairs. 2.3B of these contain English text, 2.2B samples come from 100+ other languages, and 1B samples have texts that do not allow a certain language assignment (e.g. names). Additionally, we provide several nearest-neighbor indices, an improved web interface for exploration & subset creation, as well as detection scores for watermarks and NSFW content. We also announce a complete reproduction of CLIP training on LAION-400M at open_clip. Explore the dataset at the search demo.
We thank our sponsors Hugging Face, Doodlebot and Stability AI for providing us with the computing resources to produce this dataset! We also thank the-eye.eu for hosting the image embeddings and a copy of the whole dataset.
Disclaimer on dataset purpose and content warning
The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and the handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated: collected links may lead to content that is strongly discomforting and disturbing for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a "safe" subset by filtering out samples based on the safety tags (computed with a customized NSFW classifier that we trained). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility that harmful content is still present in safe mode, so the warning holds there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come with training large-scale models, as well as of the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. However, while providing our dataset openly, we do not recommend using it to create ready-to-go industrial products, as the basic research about the general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
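As a minimal sketch of such safe-subset extraction, the snippet below thresholds the punsafe column of one metadata shard; the file name and the 0.1 threshold are illustrative assumptions, and it presumes a prejoined shard that already contains punsafe.

```python
import pandas as pd

# Hypothetical local copy of one LAION-5B metadata shard (parquet),
# assumed to be prejoined with the safety scores.
METADATA_FILE = "laion2B-en/part-00000.parquet"  # assumed path

df = pd.read_parquet(METADATA_FILE)

# Keep only samples the NSFW classifier considers very likely safe.
# 0.1 is an illustrative threshold; stricter values trade recall for safety.
safe = df[df["punsafe"] < 0.1]

print(f"kept {len(safe)} of {len(df)} samples")
safe.to_parquet("laion2B-en-safe-part-00000.parquet")
```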
Introduction
Since the release of CLIP & DALL-E in January 2021, several similar large multi-modal language-vision models have been trained by large groups. Models like FLORENCE, Turing Bletchley, ALIGN & BASIC demonstrated very strong transfer capabilities on novel datasets in the absence of per-sample labels, capabilities that steadily improved as the amount of training data grew, following the scaling laws observed in previous research. These models require billions of image-text pairs to achieve competitive performance, and unfortunately no billion-scale image-text pair dataset had been openly available up until now. To address this problem we release LAION-5B, a CLIP-filtered dataset of 5.85 billion high-quality image-text pairs, their CLIP ViT-L/14 embeddings, kNN indices, a web interface for exploration & subset creation, and NSFW- and watermark-detection scores and tools. We describe the procedure used to create the dataset and demonstrate successful training of a DALL-E architecture. At this scale, the dataset opens avenues of research on multi-modal language-vision models to a broad community.
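To illustrate how the released kNN indices might be queried, here is a minimal sketch using faiss and open_clip; the index file name is hypothetical and the post does not prescribe this exact workflow.

```python
import faiss
import torch
import open_clip

# Load a ViT-L/14 CLIP model matching the released embeddings.
model, _, _ = open_clip.create_model_and_transforms("ViT-L-14", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-L-14")

# Hypothetical local copy of one of the released kNN indices.
index = faiss.read_index("laion5b-knn.index")  # assumed file name

with torch.no_grad():
    tokens = tokenizer(["a photo of a red bicycle"])
    query = model.encode_text(tokens)
    query = query / query.norm(dim=-1, keepdim=True)  # CLIP uses cosine similarity

scores, ids = index.search(query.cpu().numpy().astype("float32"), 5)
print(ids[0])  # positions of the 5 nearest image embeddings in the index
```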
License
We distribute the metadata dataset (the parquet files) under the Creative Commons CC-BY 4.0 license, which poses no particular restrictions. The images themselves remain under their original copyright.
Dataset columns
We provide the following columns:
URL: the image URL; millions of domains are covered
TEXT: the caption, in English for en, in other languages for multi and nolang
WIDTH: picture width
HEIGHT: picture height
LANGUAGE: the language of the sample, only for laion2B-multi, computed using cld3
similarity: cosine similarity between the text and image ViT-B/32 embeddings, computed with CLIP for en and mCLIP for multi and nolang
pwatermark: probability of the image being watermarked, computed using our watermark detector
punsafe: probability of the image being unsafe, computed using our CLIP-based detector
pwatermark and punsafe are available either as individual collections that must be joined on the hash of URL+TEXT, or as prejoined collections.
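A hedged sketch of joining a standalone pwatermark collection onto the metadata follows. It assumes the join key is the MD5 hash of the concatenated URL and caption; verify this against the released collections before relying on it, and treat the file names as placeholders.

```python
import hashlib
import pandas as pd

def url_text_hash(url: str, text: str) -> str:
    # Assumed key derivation: MD5 over the concatenated URL and caption.
    # Verify against the released collections before relying on it.
    return hashlib.md5((url + text).encode("utf-8")).hexdigest()

metadata = pd.read_parquet("laion2B-en/part-00000.parquet")    # assumed path
pwatermark = pd.read_parquet("pwatermark/part-00000.parquet")  # assumed path

# Assumes the pwatermark collection carries "hash" and "pwatermark" columns.
metadata["hash"] = [
    url_text_hash(u, t) for u, t in zip(metadata["URL"], metadata["TEXT"])
]
joined = metadata.merge(pwatermark, on="hash", how="left")
```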
Dataset Statistics
We computed some statistics on the datasets to give a better picture of their content. A sample is counted as unsafe if the model predicts it unsafe with a probability above 0.5, and as watermarked with a probability above 0.8. These thresholds are conservative, so the true unsafe and watermark proportions may be higher than these estimates; other thresholds may be chosen for a different precision/recall tradeoff.
The quantiles reported below range from 0.05 to 0.95 in steps of 0.05.
See also the full sheet and the full dashboard.
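As a sketch of how these statistics could be reproduced from a single metadata shard (the file name is an assumption):

```python
import numpy as np
import pandas as pd

df = pd.read_parquet("laion2B-en/part-00000.parquet")  # assumed path

# Proportions at the thresholds used in this post.
print("unsafe:", (df["punsafe"] > 0.5).mean())
print("watermark:", (df["pwatermark"] > 0.8).mean())

# Quantiles from 0.05 to 0.95 in steps of 0.05, as reported below.
qs = np.arange(0.05, 1.0, 0.05)
print("width quantiles:", df["WIDTH"].quantile(qs).to_numpy())
print("height quantiles:", df["HEIGHT"].quantile(qs).to_numpy())
print("text length quantiles:", df["TEXT"].str.len().quantile(qs).to_numpy())
```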
Laion2B-en
Total: 2.3B samples
Number of samples with both height and width larger than:
256 -> 1324M
512 -> 488M
1024 -> 76M
Width quantiles: 132.0, 160.0, 180.0, 210.0, 225.0, 240.0, 262.0, 300.0, 309.0, 340.0, 400.0, 450.0, 480.0, 512.0, 600.0, 656.0, 760.0, 960.0, 1050.0
Height quantiles: 125.0, 150.0, 166.0, 188.0, 208.0, 225.0, 250.0, 270.0, 300.0, 320.0, 350.0, 380.0, 418.0, 470.0, 500.0, 600.0, 672.0, 800.0, 1014.0
Unsafe proportion: 2.9%
Watermark proportion: 6.1%
Average text length: 67
Text length quantiles: 21.0, 25.0, 30.0, 33.0, 37.0, 40.0, 43.0, 47.0, 50.0, 54.0, 58.0, 62.0, 67.0, 72.0, 78.0, 85.0, 96.0, 114.0, 152.0
Laion2B-multi
Total: 2.2B samples
Number of samples with both height and width larger than:
256 -> 1299M
512 -> 480M
1024 -> 57M
Width quantiles: 140.0, 160.0, 188.0, 205.0, 235.0, 250.0, 284.0, 300.0, 324.0, 366.0, 420.0, 480.0, 520.0, 600.0, 640.0, 720.0, 800.0, 960.0, 1080.0
Height quantiles: 120.0, 144.0, 160.0, 180.0, 200.0, 217.0, 240.0, 262.0, 300.0, 320.0, 350.0, 394.0, 416.0, 458.0, 500.0, 564.0, 636.0, 725.0, 1000.0
Top languages by number of samples (count, proportion):
ru: 241M (10.6%)
fr: 168M (7.4%)
de: 150M (6.6%)
es: 149M (6.6%)
zh: 143M (6.3%)
ja: 131M (5.7%)
it: 95M (4.2%)
pt: 88M (3.8%)
nl: 66M (2.9%)
pl: 62M (2.7%)
no: 49M (2.1%)
Unsafe proportion: 3.3%
Watermark proportion: 5.6%
Average text length: 52
Text length quantiles: 12.0, 16.0, 20.0, 23.0, 27.0, 30.0, 33.0, 37.0, 40.0, 44.0, 48.0, 52.0, 57.0, 61.0, 67.0, 74.0, 81.0, 93.0, 120.0
Laion1B-nolang
Total: 1.2B samples
Number of samples with both height and width larger than:
256 -> 1324M
512 -> 488M
1024 -> 76M
Width quantiles: 135.0, 160.0, 181.0, 207.0, 225.0, 241.0, 264.0, 300.0, 306.0, 338.0, 398.0, 426.0, 499.0, 520.0, 600.0, 655.0, 768.0, 940.0, 1080.0
Height quantiles: 118.0, 144.0, 160.0, 186.0, 200.0, 220.0, 240.0, 260.0, 292.0, 305.0, 338.0, 368.0, 405.0, 456.0, 500.0, 562.0, 637.0, 768.0, 1000.0
Unsafe proportion: 3%
Watermark proportion: 4%
Average text length: 46
Text length quantiles: 13.0, 17.0, 20.0, 23.0, 26.0, 29.0, 32.0, 35.0, 38.0, 41.0, 44.0, 48.0, 51.0, 56.0, 60.0, 67.0, 73.0, 82.0, 99.0
Acquisition pipeline
The acquisition pipeline can be split into three major components:
Distributed processing of the petabyte-scale Common Crawl dataset, which produces a collection of matching URLs and captions.
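The CLIP-filtering stage referred to throughout this post can be sketched as below: compute image and text embeddings with the ViT-B/32 model (the one behind the similarity column) and keep pairs whose cosine similarity clears a threshold. This is a minimal single-pair illustration with an assumed threshold value, not the exact distributed production setup.

```python
import torch
import open_clip
from PIL import Image

# ViT-B/32 is the model used for the similarity column above.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def keep_pair(image_path: str, caption: str, threshold: float = 0.28) -> bool:
    """Return True if the CLIP cosine similarity clears the (assumed) threshold."""
    image = preprocess(Image.open(image_path)).unsqueeze(0)
    tokens = tokenizer([caption])
    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(tokens)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item() > threshold
```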
