Covertype Dataset: data for predicting forest cover type strictly from cartographic variables; many geographical features given. 581,012 instances, text format, classification task, 1998. [310] [311] J. Blackard et al.
Abscisic Acid Signaling Network Dataset: data for a plant signaling network; the goal is to determine the set of rules that governs the network. No preprocessing. 300 instances, text format.
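The Covertype data is also bundled with scikit-learn, so a quick way to inspect it is the fetch_covtype helper. A minimal sketch, assuming scikit-learn is installed (the first call downloads and caches the data):

```python
# Minimal sketch: load the Covertype data through scikit-learn's built-in fetcher.
# Assumes scikit-learn is installed; the first call downloads and caches the dataset.
from sklearn.datasets import fetch_covtype

covtype = fetch_covtype()
X, y = covtype.data, covtype.target      # 581,012 samples, 54 cartographic features
print(X.shape)                           # (581012, 54)
print(sorted(set(y)))                    # 7 forest cover type classes, labeled 1..7
```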
huggingface.co Hugging Face is a French-American company that develops computation tools for building applications using machine learning. It is known for its transformers library, built for natural language processing applications.
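As an illustration of the transformers library mentioned above, here is a minimal sketch of running a pretrained NLP pipeline. It assumes the transformers package and a backend such as PyTorch are installed; the first call downloads a default pretrained model from the Hugging Face Hub.

```python
# Minimal sketch: a pretrained sentiment-analysis pipeline from the transformers library.
# Assumes `pip install transformers` and a PyTorch (or TensorFlow) backend.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("The transformers library makes pretrained NLP models easy to use.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```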
The Pile is an 886.03 GB diverse, open-source dataset of English text created as a training dataset for large language models (LLMs). It was constructed by EleutherAI in 2020 and publicly released on December 31 of that year. [1] [2] It is composed of 22 smaller datasets, including 14 new ones. [1]
RAWPED: a dataset for detection of pedestrians in the context of railways; the dataset is labeled box-wise. 26,000 images, object recognition and classification task, 2020. [70] [71] Tugce Toprak, Burak Belenlioglu, Burak Aydın, Cuneyt Guzelis, M. Alper Selver.
OSDaR23: a multi-sensory dataset for detection of objects in the context of railways.
GPT-2 was pre-trained on a dataset of 8 million web pages. [2] It was partially released in February 2019, followed by full release of the 1.5-billion-parameter model on November 5, 2019. [3] [4] [5] GPT-2 was created as a "direct scale-up" of GPT-1 [6] with a ten-fold increase in both its parameter count and the size of its training dataset. [5]
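The publicly released GPT-2 weights can be loaded through the transformers library. A minimal sketch of text generation, assuming the smallest public "gpt2" checkpoint rather than the full 1.5-billion-parameter model:

```python
# Minimal sketch: text generation with publicly released GPT-2 weights.
# Assumption: "gpt2" is the smallest (124M-parameter) checkpoint; the 1.5B model is "gpt2-xl".
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("GPT-2 was pre-trained on a dataset of", max_new_tokens=30)
print(out[0]["generated_text"])
```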
The Latent Diffusion Model (LDM) [1] is a diffusion model architecture developed by the CompVis (Computer Vision & Learning) [2] group at LMU Munich. [3] Introduced in 2015, diffusion models (DMs) are trained with the objective of removing successively applied noise (commonly Gaussian) from training images.
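To make that training objective concrete, here is a minimal NumPy sketch of the forward "noising" process a diffusion model learns to reverse. It is illustrative only, with an assumed linear beta schedule and noise-prediction target; it is not the CompVis/LDM implementation.

```python
# Minimal sketch of the forward diffusion (noising) process a DM learns to reverse.
# Illustrative assumptions: plain NumPy, a linear beta schedule, T = 1000 steps.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # per-step noise variances
alpha_bar = np.cumprod(1.0 - betas)       # cumulative signal-retention factors

def noisy_sample(x0, t, rng):
    """Return x_t (the image after t noising steps) and the Gaussian noise added."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps

# Training fits a network eps_theta(x_t, t) to predict eps from x_t, e.g. by
# minimizing mean((eps - eps_theta(x_t, t))**2); sampling then removes the
# noise step by step, starting from pure Gaussian noise.
rng = np.random.default_rng(0)
x0 = np.zeros((8, 8))                     # stand-in for a training image
x_t, eps = noisy_sample(x0, t=500, rng=rng)
```

What distinguishes the LDM architecture is that it runs this diffusion process in a compressed latent space learned by an autoencoder, rather than directly on pixels.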
The model, as well as the code base and the data used to train it, are distributed under free licences. [3] BLOOM was trained on approximately 366 billion tokens (1.6 TB of text) from March to July 2022. [4] [5] BLOOM is the main outcome of the BigScience collaborative initiative, [6] a one-year-long research workshop that took place between May 2021 ...
The Stanford Institute for Human-Centered Artificial Intelligence's (HAI) Center for Research on Foundation Models (CRFM) coined the term "foundation model" in August 2021 [16] to mean "any model that is trained on broad data (generally using self-supervision at scale) that can be adapted (e.g., fine-tuned) to a wide range of downstream tasks". [17]