Column schema of this dump (minimum and maximum value lengths per field):

| column | type | min | max |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
eccfbafa9060b01cb8e3d0d4e4c0b6d4b5605150
Dataset Summary --- A collection of romance novels featuring `title`, `description`, and `genres`, created with the intention of building a "Romance Novel Generator." Data Fields --- - `id` : unique integer identifying the book in the dataset - `pub_month` : string indicating the month the book was published, in the form `YEAR_MONTH` - `title` : title of the book - `author` : the book's author, comma-separated as `last-name, first-name` - `isbn13` : 13-digit ISBN of the book (note that not all books have an ISBN) - `description` : text description of the book; may contain quoted lines, a brief teaser of the plot, etc. - `genres` : dictionary mapping each genre key to 1 or 0, indicating whether that genre applies to the book. The genre keys are: `womens-fiction`, `abuse`, `accidental-pregnancy`, `action-adventure`, `actor-actress-dancer-model`, `adoption`, `adultery`, `african-american`, `alcoholism`, `aliens`, `alpha-hero`, `alternative-history`, `amateur-sleuth`, `americana`, `amish`, `amnesia`, `angels`, `animals`, `anthropologists-archeologists`, `apocalypse`, `arranged-marriage`, `arthurian-legend`, `asian-american`, `astrology`, `bbw-heroines`, `bad-boy`, `best-friends`, `beta-hero`, `biographical`, `blackmail`, `boarding-school`, `captor-captive`, `category-romance`, `celebrities`, `celts`, `chefs-foodies`, `chick-lit`, `christian`, `clean-&-wholesome`, `clones`, `comedy-humor`, `coming-of-age`, `contemporary-romance`, `cowboys`, `cozy-mystery`, `crime`, `dark-fantasy`, `death-dying`, `debutante-heiress`, `demons`, `disabilities`, `divorce`, `doctor-nurse`, `dragons`, `dystopian`, `elves`, `enemies-to-lovers`, `epic-fantasy`, `erotica`, `espionage-spies-cia`, `fairies-fae`, `fairy-tales-folklore`, `fake-relationship`, `falsely-accused`, `family-siblings`, `famous-characters`, `fantasy`, `fantasy-romance`, `feminism`, `firefighters`, `forced-proximity`, `forensics`, `friends-to-lovers`, `general-fiction`, `ghosts`, `gothic`, `graphic-novel`, `guardian-ward`, `hard-boiled`, `heroic-fantasy-sword-&-sorcery`, `hidden-identity`, `hispanic-&-latino`, `historical`, `historical-mystery`, `historical-romance`, `holidays`, `horror`, `infidelity`, `jane-austen`, `jewish`, `kidnapping`, `kids-(12-&-under)`, `kids:-middle-grade`, `lgbtq`, `law-enforcement`, `lawyers`, `legal-thriller`, `literary`, `magic`, `magical-realism`, `mail-order-brides`, `manga`, `marriage-of-convenience`, `mashup`, `mature-(18-&-over)`, `may-december`, `medical`, `medical-thriller`, `mermaids`, `military`, `mistaken-identity`, `monsters`, `motorcycle-club-bikers`, `moviestv`, `multicultural-&-interracial-romance`, `music`, `mystery`, `mythology`, `native-americans`, `nautical`, `navy-seals`, `new-adult-(18-25)`, `noir`, `occult-&-supernatural`, `office-romance`, `opposites-attract`, `orphans`, `paranormal`, `paranormal-romance`, `pirates`, `police-lawmen-fbi-agents`, `police-procedural`, `political`, `political-thriller`, `post-apocalyptic`, `pregnancy`, `private-investigator`, `psychological-suspense`, `rags-to-riches`, `rakes`, `reincarnation`, `revenge`, `robin-hood`, `rock-stars`, `romance`, `romantic-elements`, `romantic-suspense`, `royalty`, `saga`, `schools`, `science-fiction`, `science-fiction-fantasy`, `scottish-highlands`, `second-chance-romance`, `secret-baby`, `serial-killers`, `servants-slaves`, `shakespeare`, `sheikhs`, `sherlock-holmes`, `single-parent`, `small-town`, `space-opera`, `speculative-fiction`, `sports`, `steampunk`, `superheroes`, `suspense`, `tear-jerker`, `technology`, `terrorists`, `thriller`, `time-travel`, `tortured-hero`, `tortured-heroine`, `traditional-british`, `traditional-regency`, `twins`, `tycoons`, `ugly-duckling`, `unicorns`, `urban-fantasy`, `vampires`, `vikings`, `virgin-hero`, `virgins`, `visionary-&-metaphysical`, `wagon-train`, `werewolves-shapeshifters`, `western`, `widow-widower`, `witch-warlock-mage-wizard`, `women-sleuths`, `young-adult-teens`, `zombies` Languages --- - en
diltdicker/romance_novel_data-2022
[ "license:openrail", "region:us" ]
2022-12-23T04:36:09+00:00
{"license": "openrail"}
2023-01-07T21:40:31+00:00
[]
[]
TAGS #license-openrail #region-us
[]
[ "TAGS\n#license-openrail #region-us \n" ]
491ab81e69438980266dbc5eaec0cd6d06d225c2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/flan-t5-large-stacked-samsum-1024-FP32-fin * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-eval-samsum-samsum-e93f2c-2586578704
[ "autotrain", "evaluation", "region:us" ]
2022-12-23T05:37:45+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/flan-t5-large-stacked-samsum-1024-FP32-fin", "metrics": ["bertscore"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-12-23T05:40:55+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/flan-t5-large-stacked-samsum-1024-FP32-fin * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/flan-t5-large-stacked-samsum-1024-FP32-fin\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/flan-t5-large-stacked-samsum-1024-FP32-fin\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
6b80de530820a7f62f25515d39af18c479f35103
# Dataset Card for "LLM_Description_Vocab_opt_facebook_opt_30b_downstream_tasks" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/LLM_Description_Vocab_opt_facebook_opt_30b_downstream_tasks
[ "region:us" ]
2022-12-23T06:10:26+00:00
{"dataset_info": {"features": [{"name": "vocab", "dtype": "string"}, {"name": "descriptions", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 528559, "num_examples": 3426}], "download_size": 157247, "dataset_size": 528559}}
2022-12-23T06:10:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for "LLM_Description_Vocab_opt_facebook_opt_30b_downstream_tasks" More Information needed
[ "# Dataset Card for \"LLM_Description_Vocab_opt_facebook_opt_30b_downstream_tasks\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"LLM_Description_Vocab_opt_facebook_opt_30b_downstream_tasks\"\n\nMore Information needed" ]
2325dbed42592f0e385dd5a084d81dfb8029724e
# Dataset Card for "clinic-home" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fathyshalab/clinic-home
[ "region:us" ]
2022-12-23T06:15:24+00:00
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 79109.8, "num_examples": 1050}, {"name": "test", "num_bytes": 33904.2, "num_examples": 450}], "download_size": 0, "dataset_size": 113014.0}}
2022-12-24T14:09:41+00:00
[]
[]
TAGS #region-us
# Dataset Card for "clinic-home" More Information needed
[ "# Dataset Card for \"clinic-home\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"clinic-home\"\n\nMore Information needed" ]
d04eac99131731d8b61ee3754ea328f1d75017a6
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: gigaword * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Xiaoci](https://huggingface.co/Xiaoci) for evaluating this model.
autoevaluate/autoeval-eval-gigaword-default-50c095-2587478720
[ "autotrain", "evaluation", "region:us" ]
2022-12-23T08:25:48+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["gigaword"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "gigaword", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-12-23T14:28:33+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: gigaword * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Xiaoci for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: gigaword\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Xiaoci for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: gigaword\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Xiaoci for evaluating this model." ]
c76dc41ea0c3fedd6023114f9fd7e962fd5c0015
This data is a subset of the BioQA Task B dataset. It includes only factoid samples for extractive QA and is split into train (80%) and test (20%) sets.
aaaksenova/BioQA_taskB_SQuAD
[ "region:us" ]
2022-12-23T08:27:27+00:00
{}
2022-12-23T13:26:48+00:00
[]
[]
TAGS #region-us
This data is a subset of the BioQA Task B dataset. It includes only factoid samples for extractive QA and is split into train (80%) and test (20%) sets.
[]
[ "TAGS\n#region-us \n" ]
b66d3067995ce6c98c75ee8107f561ff662f60fc
# Dataset Card for "clinic-kitchen_and_dining" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fathyshalab/clinic-kitchen_and_dining
[ "region:us" ]
2022-12-23T08:40:22+00:00
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66661.34844444445, "num_examples": 787}, {"name": "test", "num_bytes": 28629.651555555556, "num_examples": 338}], "download_size": 0, "dataset_size": 95291.0}}
2022-12-24T15:35:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "clinic-kitchen_and_dining" More Information needed
[ "# Dataset Card for \"clinic-kitchen_and_dining\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"clinic-kitchen_and_dining\"\n\nMore Information needed" ]
eabe9d73f896ae0a06c1117d8f03d51733216f19
Dataset homepage: https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions The purpose of hosting the archive is to play with the original files. The archive was generated using [this Colab Notebook](https://colab.research.google.com/gist/sayakpaul/98f9ff3bd258a5c1107898422447b581/scratchpad.ipynb).
sayakpaul/pokemon-blip-original-version
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2022-12-23T12:43:19+00:00
{"license": "cc-by-nc-sa-4.0"}
2022-12-24T06:09:24+00:00
[]
[]
TAGS #license-cc-by-nc-sa-4.0 #region-us
Dataset homepage: URL The purpose of hosting the archive is to play with the original files. The archive was generated using this Colab Notebook.
[]
[ "TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n" ]
f422dacfd91adb5a4614eb3b6495c560158519eb
# Dataset Card for "fiszki-ocr-train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Zombely/fiszki-ocr-train
[ "region:us" ]
2022-12-23T13:03:25+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 354017910.0, "num_examples": 85}, {"name": "validation", "num_bytes": 56459717.0, "num_examples": 14}], "download_size": 410390428, "dataset_size": 410477627.0}}
2022-12-23T13:06:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fiszki-ocr-train" More Information needed
[ "# Dataset Card for \"fiszki-ocr-train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fiszki-ocr-train\"\n\nMore Information needed" ]
37d1e1db8ba7d98725658cb5931d75aa01a4e346
# Dataset Card for "bug-16718038814382" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
severo/bug-16718038814382
[ "region:us" ]
2022-12-23T13:58:02+00:00
{"dataset_info": {"features": [{"name": "a", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 24, "num_examples": 3}], "download_size": 579, "dataset_size": 24}}
2022-12-23T13:58:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bug-16718038814382" More Information needed
[ "# Dataset Card for \"bug-16718038814382\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bug-16718038814382\"\n\nMore Information needed" ]
38d9ea2378b9f2be6ee85b96aecacf9ba1a03b51
# Dataset Card for "bug-16718056078062" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
severo/bug-16718056078062
[ "region:us" ]
2022-12-23T14:26:48+00:00
{"dataset_info": {"features": [{"name": "a", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 24, "num_examples": 3}], "download_size": 579, "dataset_size": 24}}
2022-12-23T14:26:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bug-16718056078062" More Information needed
[ "# Dataset Card for \"bug-16718056078062\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bug-16718056078062\"\n\nMore Information needed" ]
d22379e10086f2762bf1e700d5b1d3a1134f6b88
# Dataset Card for "tortas" Note that when using PyTorch's transforms that these images are 4-channel images. The last channel is all 1's and can be ignored. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
morgan/tortas
[ "region:us" ]
2022-12-23T14:35:00+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 79653203.0, "num_examples": 37}], "download_size": 79658169, "dataset_size": 79653203.0}}
2022-12-23T16:23:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "tortas" Note that when using PyTorch's transforms that these images are 4-channel images. The last channel is all 1's and can be ignored. More Information needed
[ "# Dataset Card for \"tortas\"\n\nNote that when using PyTorch's transforms that these images are 4-channel images. The last channel is all 1's and can be ignored.\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"tortas\"\n\nNote that when using PyTorch's transforms that these images are 4-channel images. The last channel is all 1's and can be ignored.\n\nMore Information needed" ]
02c75dddedc641c5e1c14e333986a9f56e498e79
# Mirror of billsum train split A mirror with Parquet files on the Hub, since downloading the billsum data files from Google Drive causes errors in distributed training.
DebateLabKIT/billsum_train
[ "region:us" ]
2022-12-23T19:44:12+00:00
{}
2022-12-24T12:41:29+00:00
[]
[]
TAGS #region-us
# Mirror of billsum train split A mirror with Parquet files on the Hub, since downloading the billsum data files from Google Drive causes errors in distributed training.
[ "# Mirror of billsum train split\n\nMirror with parquet files on hub, as downloading billsum data files from Google drive causes errors in distributed training." ]
[ "TAGS\n#region-us \n", "# Mirror of billsum train split\n\nMirror with parquet files on hub, as downloading billsum data files from Google drive causes errors in distributed training." ]
76413fa635a613d6ddd4c6d3b9b0b3aa86ca20d9
# Dataset Card for "dreambooth-hackathon-images" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathang/dreambooth-hackathon-images
[ "region:us" ]
2022-12-23T21:41:14+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1488165.0, "num_examples": 4}], "download_size": 1489345, "dataset_size": 1488165.0}}
2022-12-27T19:34:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed" ]
ae9634ce61a784076139c0de8fd84f255ef23313
# Dataset Card for "dreambooth-hackathon-images" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Verne/dreambooth-hackathon-images
[ "region:us" ]
2022-12-23T22:36:01+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 828898.0, "num_examples": 20}], "download_size": 827203, "dataset_size": 828898.0}}
2022-12-23T22:36:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed" ]
bb735fbf00009266d05de19461c39bf0d785f6ba
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: florenceGundy/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@florenceGundy](https://huggingface.co/florenceGundy) for evaluating this model.
autoevaluate/autoeval-eval-squad-plain_text-a52a81-2596378857
[ "autotrain", "evaluation", "region:us" ]
2022-12-23T23:38:21+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "florenceGundy/bert-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-12-23T23:40:45+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: florenceGundy/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @florenceGundy for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: florenceGundy/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @florenceGundy for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: florenceGundy/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @florenceGundy for evaluating this model." ]
0ffe46328f958c3b090294792567ad6fa0781af3
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: florenceGundy/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@florenceGundy](https://huggingface.co/florenceGundy) for evaluating this model.
autoevaluate/autoeval-eval-squad-plain_text-56a1bc-2596578858
[ "autotrain", "evaluation", "region:us" ]
2022-12-23T23:38:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "florenceGundy/bert-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-12-23T23:40:54+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: florenceGundy/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @florenceGundy for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: florenceGundy/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @florenceGundy for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: florenceGundy/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @florenceGundy for evaluating this model." ]
76943f92923ba6a201677e2ace477ed270f3cbe5
Tweets from accounts labeled as bots and non-bots
kearney/tweetbotornot2
[ "license:mit", "region:us" ]
2022-12-24T02:29:38+00:00
{"license": "mit"}
2022-12-24T02:35:26+00:00
[]
[]
TAGS #license-mit #region-us
Tweets from accounts labeled as bots and non-bots
[]
[ "TAGS\n#license-mit #region-us \n" ]
dac8d12982efda4b410a5492aab33affbd780596
# Dataset Card for "financial_news_sentiment_mixte_with_phrasebank_75" This is a customized version of the phrasebank dataset in which I kept only sentences validated by at least 75% annotators. In addition I added ~2000 articles of Canadian news where sentiment was validated manually. The dataset also include a column topic which contains one of the following value: * acquisition * other * quaterly financial release * appointment to new position * dividend * corporate update * drillings results * conference * share repurchase program * grant of stocks This was generated automatically using a zero-shot classification model and **was not** reviewed manually. ## References Original dataset is available here: [https://huggingface.co/datasets/financial_phrasebank]
Jean-Baptiste/financial_news_sentiment_mixte_with_phrasebank_75
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:cc-by-nc-sa-3.0", "region:us" ]
2022-12-24T03:49:34+00:00
{"annotations_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "sentiment-classification"], "pretty_name": "financial_news_sentiment_mixte_with_phrasebank_75", "dataset_info": {"splits": [{"name": "test", "num_examples": 785}, {"name": "train", "num_examples": 4446}]}, "tags": []}
2022-12-29T03:19:16+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-cc-by-nc-sa-3.0 #region-us
# Dataset Card for "financial_news_sentiment_mixte_with_phrasebank_75" This is a customized version of the phrasebank dataset in which I kept only sentences validated by at least 75% annotators. In addition I added ~2000 articles of Canadian news where sentiment was validated manually. The dataset also include a column topic which contains one of the following value: * acquisition * other * quaterly financial release * appointment to new position * dividend * corporate update * drillings results * conference * share repurchase program * grant of stocks This was generated automatically using a zero-shot classification model and was not reviewed manually. ## References Original dataset is available here: [URL
[ "# Dataset Card for \"financial_news_sentiment_mixte_with_phrasebank_75\"\n\nThis is a customized version of the phrasebank dataset in which I kept only sentences validated by at least 75% annotators. \nIn addition I added ~2000 articles of Canadian news where sentiment was validated manually.\n\nThe dataset also include a column topic which contains one of the following value:\n* acquisition\n* other\n* quaterly financial release\n* appointment to new position\n* dividend\n* corporate update\n* drillings results\n* conference\n* share repurchase program\n* grant of stocks\n\nThis was generated automatically using a zero-shot classification model and was not reviewed manually.", "## References\nOriginal dataset is available here:\n[URL" ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-cc-by-nc-sa-3.0 #region-us \n", "# Dataset Card for \"financial_news_sentiment_mixte_with_phrasebank_75\"\n\nThis is a customized version of the phrasebank dataset in which I kept only sentences validated by at least 75% annotators. \nIn addition I added ~2000 articles of Canadian news where sentiment was validated manually.\n\nThe dataset also include a column topic which contains one of the following value:\n* acquisition\n* other\n* quaterly financial release\n* appointment to new position\n* dividend\n* corporate update\n* drillings results\n* conference\n* share repurchase program\n* grant of stocks\n\nThis was generated automatically using a zero-shot classification model and was not reviewed manually.", "## References\nOriginal dataset is available here:\n[URL" ]
6b2b09672129e280c0c9da97ab58154e9d535e6b
Please check out [https://github.com/intfloat/SimKGC/blob/main/scripts/download_wikidata5m.sh](https://github.com/intfloat/SimKGC/blob/main/scripts/download_wikidata5m.sh) on how to download this dataset.
intfloat/wikidata5m
[ "region:us" ]
2022-12-24T06:30:03+00:00
{}
2022-12-24T07:00:03+00:00
[]
[]
TAGS #region-us
Please check out URL on how to download this dataset.
[]
[ "TAGS\n#region-us \n" ]
8eb178419c5701c9a3ea9c697988f5968ddc5a21
# Dataset Card for "LLM_Description_Vocab_opt_Multimodal_Fatima_opt_175b_downstream_tasks" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/LLM_Description_Vocab_opt_Multimodal_Fatima_opt_175b_downstream_tasks
[ "region:us" ]
2022-12-24T07:44:00+00:00
{"dataset_info": {"features": [{"name": "vocab", "dtype": "string"}, {"name": "descriptions", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 696475, "num_examples": 3426}], "download_size": 381428, "dataset_size": 696475}}
2022-12-24T07:44:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "LLM_Description_Vocab_opt_Multimodal_Fatima_opt_175b_downstream_tasks" More Information needed
[ "# Dataset Card for \"LLM_Description_Vocab_opt_Multimodal_Fatima_opt_175b_downstream_tasks\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"LLM_Description_Vocab_opt_Multimodal_Fatima_opt_175b_downstream_tasks\"\n\nMore Information needed" ]
7b0565845cbf29b098bc68fd30bac93c87af1b8e
# Dataset Card for "ade20k-panoptic-demo-imagefolder" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nielsr/ade20k-panoptic-demo-imagefolder
[ "region:us" ]
2022-12-24T09:05:57+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "image_id", "dtype": "string"}, {"name": "segments_info", "list": [{"name": "id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "int64"}, {"name": "iscrowd", "dtype": "int64"}]}], "splits": [{"name": "train", "num_bytes": 88157.0, "num_examples": 10}, {"name": "validation", "num_bytes": 67914.0, "num_examples": 10}], "download_size": 151843, "dataset_size": 156071.0}}
2022-12-24T09:06:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ade20k-panoptic-demo-imagefolder" More Information needed
[ "# Dataset Card for \"ade20k-panoptic-demo-imagefolder\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ade20k-panoptic-demo-imagefolder\"\n\nMore Information needed" ]
35af63b9b26596f0b80c9a7b572e6d10a46eccec
# Dataset Card for "common_voice_12.0_Augmented" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Salama1429/common_voice_Arabic_12.0_Augmented
[ "region:us" ]
2022-12-24T10:31:44+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14306290182.938, "num_examples": 63546}, {"name": "test", "num_bytes": 316503630.559, "num_examples": 10433}], "download_size": 12163898712, "dataset_size": 14622793813.497}}
2022-12-24T10:35:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "common_voice_12.0_Augmented" More Information needed
[ "# Dataset Card for \"common_voice_12.0_Augmented\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"common_voice_12.0_Augmented\"\n\nMore Information needed" ]
6d72f2acc6e41cacd5f9b88cb8b275ed0db9d166
# Dataset Card for "dreambooth-hackathon-images-proteins" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathang/dreambooth-hackathon-images-proteins
[ "region:us" ]
2022-12-24T13:06:59+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3961830.0, "num_examples": 17}], "download_size": 3905517, "dataset_size": 3961830.0}}
2022-12-24T13:07:03+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images-proteins" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images-proteins\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images-proteins\"\n\nMore Information needed" ]
8dea39dcc98f9f4f9577475b2060045f7aa0aacd
# Dataset Card for "sobotta-anatomical-dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
sanderland/sobotta-anatomical-dataset
[ "region:us" ]
2022-12-24T13:08:29+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 54613498.0, "num_examples": 14}], "download_size": 33366858, "dataset_size": 54613498.0}}
2022-12-24T13:08:35+00:00
[]
[]
TAGS #region-us
# Dataset Card for "sobotta-anatomical-dataset" More Information needed
[ "# Dataset Card for \"sobotta-anatomical-dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"sobotta-anatomical-dataset\"\n\nMore Information needed" ]
3d984f8e1cc4ac2f9aa9259f0364b2bb97de0cf8
# Dataset Card for "dreambooth-hackathon-images-protein2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathang/dreambooth-hackathon-images-protein2
[ "region:us" ]
2022-12-24T13:17:59+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3901067.0, "num_examples": 16}], "download_size": 3846228, "dataset_size": 3901067.0}}
2022-12-24T13:18:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images-protein2" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images-protein2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images-protein2\"\n\nMore Information needed" ]
28dddd7e435bac23f5fb3eb83acfa70fcfd13bd5
# Dataset Card for "dreambooth-hackathon-images-protein3" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathang/dreambooth-hackathon-images-protein3
[ "region:us" ]
2022-12-24T13:25:06+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2000745.0, "num_examples": 11}], "download_size": 1946505, "dataset_size": 2000745.0}}
2022-12-24T13:25:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images-protein3" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images-protein3\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images-protein3\"\n\nMore Information needed" ]
4e2ec897b62db1ca2704e432715f8beaeee1ab1c
# Dataset Card for "20NG_train10.8k_test3.6K_valid3.6k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
pig4431/20NG_train10.8k_test3.6K_valid3.6k
[ "region:us" ]
2022-12-24T14:56:20+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 13917789.0, "num_examples": 11314}, {"name": "test", "num_bytes": 4175991.5, "num_examples": 3766}, {"name": "validate", "num_bytes": 4175991.5, "num_examples": 3766}], "download_size": 14342171, "dataset_size": 22269772.0}}
2022-12-24T14:56:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "20NG_train10.8k_test3.6K_valid3.6k" More Information needed
[ "# Dataset Card for \"20NG_train10.8k_test3.6K_valid3.6k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"20NG_train10.8k_test3.6K_valid3.6k\"\n\nMore Information needed" ]
ff328a349f8b2ce89e1e23007a97cd8d68e00c05
Preprocessed data for LAVISH.
genjib/LAVISHData
[ "region:us" ]
2022-12-24T15:12:12+00:00
{}
2022-12-24T15:58:34+00:00
[]
[]
TAGS #region-us
Preprocessed data for LAVISH.
[]
[ "TAGS\n#region-us \n" ]
83c0f91ecd03701df57fb8b54be81627ee743036
# Dataset Card for "legal_corpus" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
marcus2000/legal_corpus
[ "region:us" ]
2022-12-24T15:30:41+00:00
{"dataset_info": {"features": [{"name": "0", "dtype": "string"}, {"name": "1", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66404959, "num_examples": 1200}, {"name": "validation", "num_bytes": 32302991, "num_examples": 400}, {"name": "test", "num_bytes": 33181409, "num_examples": 427}], "download_size": 39180007, "dataset_size": 131889359}}
2022-12-24T15:34:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "legal_corpus" More Information needed
[ "# Dataset Card for \"legal_corpus\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"legal_corpus\"\n\nMore Information needed" ]
5be197c3dd388c8cd9263e1d45f95681775bfc75
# Dataset Card for "clinic-credit_cards" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fathyshalab/clinic-credit_cards
[ "region:us" ]
2022-12-24T16:48:27+00:00
{"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22514.533333333333, "num_examples": 262}, {"name": "test", "num_bytes": 9710.466666666667, "num_examples": 113}], "download_size": 16877, "dataset_size": 32225.0}}
2022-12-24T16:48:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "clinic-credit_cards" More Information needed
[ "# Dataset Card for \"clinic-credit_cards\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"clinic-credit_cards\"\n\nMore Information needed" ]
98689c4f101a0181743367b899b73ada9376a4d4
# Dataset Card for GPT-Negochat ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) ## Dataset Description - **Repository:** https://github.com/msamogh/GPT-NegoChat-Corpus - **Point of Contact:** [email protected] ### Dataset Summary The **GPT-Negochat** corpus is a modified version of the original Negochat corpus (https://aclanthology.org/L16-1501/), which contains negotiation dialogues between an Employer and a Candidate. The utterances in the original corpus were generated using a template-based NLG module and therefore sound robotic and, in general, do not sound convincingly real. GPT-Negochat is the result of using GPT-3 to modify this original corpus to make the dialogues resemble actual job-negotiation dialogues more closely while still retaining the original meaning of the utterances. In addition to rephrasing the utterances, a small set of highly unrealistic dialogue segments has been removed in GPT-Negochat without affecting the coherence of the surrounding dialogue. ### Supported Tasks and Leaderboards - Dialogue Act Classification - Offer Identification - Agreement Tracking ### Languages - English ## Dataset Structure ### Data Fields Below is an excerpt containing two consecutive turns from a dialogue. The `input` field contains the utterance from the original Negochat corpus. The `augmented_input` field contains the same utterance rephrased using GPT-3. ```json { "role": "Candidate", "input": "I want a position of project manager", "output": [ { "Offer": { "Job Description": "Project Manager" } } ], "augmented_input": "I'm interested in a project manager role." }, { "role": "Employer", "input": "I do have programmer positions open with a strong potential to advance to project manager based on your performance.", "output": [ { "Offer": { "Job Description": "Programmer" } } ], "augmented_input": "We do have programmer roles available that could provide you with the opportunity to advance to project manager based on your performance. " } ``` ## Dataset Creation ### Curation Rationale The original Negochat corpus is one of the only dialogue corpora containing turn-level annotations for offers, acceptances, and rejections in a negotiation dialogue. However, the utterances in the corpus were generated using a template-based NLG system, which makes the dialogues unrealistic to the point of sounding robotic at times. We wanted to make the utterances sound more like those from an actual negotiation dialogue in a job interview. ### Source Data #### Initial Data Collection and Normalization The original Negochat corpus can be found here: [https://github.com/vaskonov/negochat_corpus](https://github.com/vaskonov/negochat_corpus) ## Annotations Since each utterance in GPT-Negochat was generated by rephrasing the original without changing the underlying meaning, we simply transfer over the annotations from the original Negochat corpus.
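The card can be loaded directly with the `datasets` library. The sketch below is illustrative only: the `"train"` split name and the flattened per-turn layout (`role`, `input`, `augmented_input`) are assumptions based on the excerpt above, not a confirmed schema.

```python
from datasets import load_dataset

# Assumption: the default config exposes a "train" split with one turn per row.
dataset = load_dataset("msamogh/gpt-negochat", split="train")

# Compare each original template-based utterance with its GPT-3 rephrasing.
for turn in dataset.select(range(2)):
    print(f"{turn['role']}: {turn['input']}")
    print(f"  rephrased -> {turn['augmented_input']}")
```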
msamogh/gpt-negochat
[ "license:apache-2.0", "region:us" ]
2022-12-24T19:51:18+00:00
{"license": "apache-2.0"}
2022-12-24T20:03:35+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# Dataset Card for GPT-Negochat ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Fields - Dataset Creation - Curation Rationale - Source Data - Annotations ## Dataset Description - Repository: URL - Point of Contact: msamogh@URL ### Dataset Summary The GPT-Negochat corpus is a modified version of the original Negochat corpus (URL which contains negotiation dialogues between an Employer and a Candidate. The utterances in the original corpus were generated using a template-based NLG module and therefore sound robotic and, in general, do not sound convincingly real. GPT-Negochat is the result of using GPT-3 to modify this original corpus to make the dialogues resemble actual job-negotiation dialogues more closely while still retaining the original meaning of the utterances. In addition to rephrasing the utterances, a small set of highly unrealistic dialogue segments has been removed in GPT-Negochat without affecting the coherence of the surrounding dialogue. ### Supported Tasks and Leaderboards - Dialogue Act Classification - Offer Identification - Agreement Tracking ### Languages - English ## Dataset Structure ### Data Fields Below is an excerpt containing two consecutive turns from a dialogue. The 'input' field contains the utterance from the original Negochat corpus. The 'augmented_input' field contains the same utterance rephrased using GPT-3. ## Dataset Creation ### Curation Rationale The original Negochat corpus is one of the only dialogue corpora containing turn-level annotations for offers, acceptances, and rejections in a negotiation dialogue. However, the utterances in the corpus were generated using a template-based NLG system, which makes the dialogues unrealistic to the point of sounding robotic at times. We wanted to make the utterances sound more like those from an actual negotiation dialogue in a job interview. ### Source Data #### Initial Data Collection and Normalization The original Negochat corpus can be found here: URL ## Annotations Since each utterance in GPT-Negochat was generated by rephrasing the original without changing the underlying meaning, we simply transfer over the annotations from the original Negochat corpus.
[ "# Dataset Card for GPT-Negochat", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations", "## Dataset Description\n\n- Repository: URL\n- Point of Contact: msamogh@URL", "### Dataset Summary\nhe GPT-Negochat corpus is a modified version of the original Negochat corpus (URL which contains negotiation dialogues between an Employer and a Candidate. The utterances in the original corpus were generated using a template-based NLG module and therefore, sound robotic and in general, do not sound convincingly real.\n\nGPT-Negochat is the result of using GPT-3 to modify this original corpus to make the dialogues resemble actual job-negotiation dialogues more closely while still retaining the original meaning of the utterances.\n\nIn addition to rephrasing the utterances, a small set of highly unrealistic dialogue segments have been removed in GPT-Negochat without affecting the coherence of the surrounding dialogue.", "### Supported Tasks and Leaderboards\n\n- Dialogue Act Classification\n- Offer Identification\n- Agreement Tracking", "### Languages\n\n- English", "## Dataset Structure", "### Data Fields\n\nBelow is an excerpt containing two consecutive turns from a dialogue. The 'input' field contains the utterance from the original Negochat corpus. The 'augmented_input' field contains the same utterance rephrased using GPT-3.", "## Dataset Creation", "### Curation Rationale\n\nThe original Negochat corpus is one of the only dialogue corpora with containing turn-level annotations for offers, acceptances, and rejects in a negotiation dialogue.\nHowever, the utterances in the corpus were generated using a template-based NLG system, which makes the dialogues unrealistic to the point of sounding robotic at times.\nWe wanted to make the utterances sound more like those from an actual negotiation dialogue in a job interview.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe original Negochat corpus can be found here: URL", "## Annotations\nSince each utterance in GPT-Negochat was generated by rephrasing the original without changing the underlying meaning, we simply transfer over the annotations from the original Negochat corpus." ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# Dataset Card for GPT-Negochat", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations", "## Dataset Description\n\n- Repository: URL\n- Point of Contact: msamogh@URL", "### Dataset Summary\nhe GPT-Negochat corpus is a modified version of the original Negochat corpus (URL which contains negotiation dialogues between an Employer and a Candidate. The utterances in the original corpus were generated using a template-based NLG module and therefore, sound robotic and in general, do not sound convincingly real.\n\nGPT-Negochat is the result of using GPT-3 to modify this original corpus to make the dialogues resemble actual job-negotiation dialogues more closely while still retaining the original meaning of the utterances.\n\nIn addition to rephrasing the utterances, a small set of highly unrealistic dialogue segments have been removed in GPT-Negochat without affecting the coherence of the surrounding dialogue.", "### Supported Tasks and Leaderboards\n\n- Dialogue Act Classification\n- Offer Identification\n- Agreement Tracking", "### Languages\n\n- English", "## Dataset Structure", "### Data Fields\n\nBelow is an excerpt containing two consecutive turns from a dialogue. The 'input' field contains the utterance from the original Negochat corpus. The 'augmented_input' field contains the same utterance rephrased using GPT-3.", "## Dataset Creation", "### Curation Rationale\n\nThe original Negochat corpus is one of the only dialogue corpora with containing turn-level annotations for offers, acceptances, and rejects in a negotiation dialogue.\nHowever, the utterances in the corpus were generated using a template-based NLG system, which makes the dialogues unrealistic to the point of sounding robotic at times.\nWe wanted to make the utterances sound more like those from an actual negotiation dialogue in a job interview.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe original Negochat corpus can be found here: URL", "## Annotations\nSince each utterance in GPT-Negochat was generated by rephrasing the original without changing the underlying meaning, we simply transfer over the annotations from the original Negochat corpus." ]
bf3029b3b52c9e79940d40319ebb4192ab5d7c0d
# Summary This dataset contains numbers in three different formats: * Numbers (base 10) * Numbers as words * Roman numerals The dataset covers the range 1-4999.
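As a rough illustration of how rows in this range could be produced, the sketch below generates all three formats for a given integer. The Roman-numeral converter is self-contained; the word form relies on the third-party `num2words` package (an assumption), and the dataset's actual word formatting may differ.

```python
from num2words import num2words  # assumption: pip install num2words

def to_roman(n: int) -> str:
    """Convert an integer in [1, 4999] to a Roman numeral."""
    values = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
              (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
              (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in values:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

for n in (1, 42, 4999):
    print(n, num2words(n), to_roman(n))  # e.g. 42 -> forty-two -> XLII
```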
vijaygkd/roman-numbers-text
[ "region:us" ]
2022-12-25T01:04:29+00:00
{}
2022-12-25T01:07:36+00:00
[]
[]
TAGS #region-us
# Summary This dataset contains numbers in three different formats: * Numbers (base 10) * Numbers as words * Roman numerals The dataset covers the range 1-4999.
[ "# Summary \nDataset contains numbers in different formats:\n* Numbers (base 10)\n* Numbers as words\n* Roman numbers\n\nDataset range 1-4999" ]
[ "TAGS\n#region-us \n", "# Summary \nDataset contains numbers in different formats:\n* Numbers (base 10)\n* Numbers as words\n* Roman numbers\n\nDataset range 1-4999" ]
529d9fa1382412a631cbd7ce408aa98c28b13afb
# Dataset Card for "LLM_Description_Vocab_bloom_bigscience_bloom_downstream_tasks" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/LLM_Description_Vocab_bloom_bigscience_bloom_downstream_tasks
[ "region:us" ]
2022-12-25T03:29:32+00:00
{"dataset_info": {"features": [{"name": "vocab", "dtype": "string"}, {"name": "descriptions", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 658686, "num_examples": 3426}], "download_size": 373501, "dataset_size": 658686}}
2022-12-25T03:29:36+00:00
[]
[]
TAGS #region-us
# Dataset Card for "LLM_Description_Vocab_bloom_bigscience_bloom_downstream_tasks" More Information needed
[ "# Dataset Card for \"LLM_Description_Vocab_bloom_bigscience_bloom_downstream_tasks\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"LLM_Description_Vocab_bloom_bigscience_bloom_downstream_tasks\"\n\nMore Information needed" ]
35162e5aeeb22daccfc19de4993e12bfe8b4d530
# Dataset Card for Open-Domain Question Answering Wikipedia Corpora ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) ## Dataset Description ### Dataset Summary The Wikipedia corpus variants provided can serve as knowledge sources for question-answering systems based on a retriever–reader pipeline. These corpus variants and their corresponding experiments are described further in the paper entitled: > Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering. ## Dataset Structure ### Data Fields The dataset consists of passages that have been segmented from Wikipedia articles. For each passage, the following fields are provided - ```docid```: The passage id in the format of (X#Y) where passages from the same article share the same X, but Y denotes the segment id within the article - ```title```: The title of the article from which the passage comes - ```text```: The text content of the passage ### Data Splits There are 6 corpus variants in total - ```wiki-text-100w-karpukhin```: The original DPR Wikipedia corpus with non-overlapping passages, each 100 words long, from Karpukhin et al., > Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. [Dense Passage Retrieval for Open-Domain Question Answering](https://www.aclweb.org/anthology/2020.emnlp-main.550/). _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020. - ```wiki-text-100w-tamber```: Our replication of the above corpus - ```wiki-text-6-3-tamber```: A corpus similar to above i.e. without tables, infoboxes, and lists. Segmentation is done differently, with a passage size of 6 sentences and a stride of 3 sentences. Note that this means passages overlap. - ```wiki-text-8-4-tamber```: Like wiki-text-6-3, but with a passage size of 8 sentences and a stride of 4 sentences. - ```wiki-all-6-3-tamber```: A corpus with tables, infoboxes, and lists included with a passage size of 6 sentences and a stride of 3 sentences. - ```wiki-all-8-4-tamber```: Like wiki-all-6-3, but with a passage size of 8 sentences and a stride of 4 sentences. ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization We start by downloading the full December 20, 2018 Wikipedia XML dump: ```enwiki-20181220-pages-articles.xml``` from the Internet Archive: https://archive.org/details/enwiki-20181220. This is then pre-processed by WikiExtractor: https://github.com/attardi/wikiextractor (making sure to modify the code to include lists as desired and replacing any tables with the string "TABLETOREPLACE") and DrQA: https://github.com/facebookresearch/DrQA/tree/main/scripts/retriever (again making sure to modify the code to not remove lists as desired). We then apply the [pre-processing script](https://github.com/castorini/pyserini/blob/master/docs/experiments-wiki-corpora.md) we make available in [Pyserini](https://github.com/castorini/pyserini) to generate the different corpus variants.
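A minimal loading sketch, assuming each variant name above doubles as a `datasets` config name and that passages live in a single `train` split (both assumptions, not confirmed by the card):

```python
from datasets import load_dataset

# Assumptions: config name matches the variant name; passages are in "train".
corpus = load_dataset("castorini/odqa-wiki-corpora",
                      "wiki-text-100w-tamber", split="train")

passage = corpus[0]
print(passage["docid"])  # "X#Y" article/segment identifier
print(passage["title"])  # title of the source article
print(passage["text"])   # passage contents
```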
castorini/odqa-wiki-corpora
[ "task_categories:question-answering", "task_categories:text-retrieval", "task_ids:open-domain-qa", "annotations_creators:no-annotation", "multilinguality:monolingual", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2022-12-25T03:47:21+00:00
{"annotations_creators": ["no-annotation"], "language_creators": [], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["question-answering", "text-retrieval"], "task_ids": ["open-domain-qa"], "pretty_name": "Open-Domain Question Answering Wikipedia Corpora", "tags": []}
2023-01-05T21:32:51+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-text-retrieval #task_ids-open-domain-qa #annotations_creators-no-annotation #multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us
# Dataset Card for Open-Domain Question Answering Wikipedia Corpora ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Dataset Structure - Data Fields - Data Splits - Dataset Creation - Source Data ## Dataset Description ### Dataset Summary The Wikipedia corpus variants provided can serve as knowledge sources for question-answering systems based on a retriever–reader pipeline. These corpus variants and their corresponding experiments are described further in the paper entitled: > Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering. ## Dataset Structure ### Data Fields The dataset consists of passages that have been segmented from Wikipedia articles. For each passage, the following fields are provided - : The passage id in the format of (X#Y) where passages from the same article share the same X, but Y denotes the segment id within the article - : The title of the article from which the passage comes - : The text content of the passage ### Data Splits There are 6 corpus variants in total - : The original DPR Wikipedia corpus with non-overlapping passages, each 100 words long, from Karpukhin et al., > Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. Dense Passage Retrieval for Open-Domain Question Answering. _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020. - : Our replication of the above corpus - : A corpus similar to above i.e. without tables, infoboxes, and lists. Segmentation is done differently, with a passage size of 6 sentences and a stride of 3 sentences. Note that this means passages overlap. - : Like wiki-text-6-3, but with a passage size of 8 sentences and a stride of 4 sentences. - : A corpus with tables, infoboxes, and lists included with a passage size of 6 sentences and a stride of 3 sentences. - : Like wiki-all-6-3, but with a passage size of 8 sentences and a stride of 4 sentences. ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization We start by downloading the full December 20, 2018 Wikipedia XML dump: from the Internet Archive: URL This is then pre-processed by WikiExtractor: URL (making sure to modify the code to include lists as desired and replacing any tables with the string "TABLETOREPLACE") and DrQA: URL (again making sure to modify the code to not remove lists as desired). We then apply the pre-processing script we make available in Pyserini to generate the different corpus variants.
[ "# Dataset Card for Open-Domain Question Answering Wikipedia Corpora", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Source Data", "## Dataset Description", "### Dataset Summary\n\nThe Wikipedia corpus variants provided can serve as knowledge sources for question-answering systems based on a retriever–reader pipeline. These corpus variants and their corresponding experiments are described further in the paper entitled: \n> Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering.", "## Dataset Structure", "### Data Fields\n\nThe dataset consists of passages that have been segmented from Wikipedia articles.\nFor each passage, the following fields are provided \n- : The passage id in the format of (X#Y) where passages from the same article share the same X, but Y denotes the segment id within the article \n- : The title of the article from where the passage comes\n- : The text content of the passage", "### Data Splits\n\nThere are 6 corpus variants in total\n- : The original DPR Wikipedia corpus with non-overlapping passages, each 100 words long, from Karpukhin et al.,\n> Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. Dense Passage Retrieval for Open-Domain Question Answering. _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020.\n- : Our replication of the above corpus\n- : A corpus similar to above i.e. without tables, infoboxes, and lists. Segmentation is done differently, with a passage size of 6 sentences and a stride of 3 sentences. Note, this means that passages are overlapped.\n- : Like wiki-text-6-3, but with a passage size of 8 sentences and a stride of 4 sentences.\n- : A corpus with tables, infoboxes, and lists included with a passage size of 6 sentences and a stride of 3 sentences.\n- : Like wiki-all-6-3, but with a passage size of 8 sentences and a stride of 4 sentences.", "## Dataset Creation", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe start with downloading the full December 20, 2018 Wikipedia XML dump: from the Internet Archive: URL This is then Pre-processed by WikiExtractor: URL (making sure to modify the code to include lists as desired and replacing any tables with the string \"TABLETOREPLACE\") and DrQA: URL (again making sure to modify the code to not remove lists as desired).\n\nWe then apply the pre-processing script) we make available in Pyserini to generate the different corpus variants." ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-retrieval #task_ids-open-domain-qa #annotations_creators-no-annotation #multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us \n", "# Dataset Card for Open-Domain Question Answering Wikipedia Corpora", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Structure\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Source Data", "## Dataset Description", "### Dataset Summary\n\nThe Wikipedia corpus variants provided can serve as knowledge sources for question-answering systems based on a retriever–reader pipeline. These corpus variants and their corresponding experiments are described further in the paper entitled: \n> Pre-Processing Matters! Improved Wikipedia Corpora for Open-Domain Question Answering.", "## Dataset Structure", "### Data Fields\n\nThe dataset consists of passages that have been segmented from Wikipedia articles.\nFor each passage, the following fields are provided \n- : The passage id in the format of (X#Y) where passages from the same article share the same X, but Y denotes the segment id within the article \n- : The title of the article from where the passage comes\n- : The text content of the passage", "### Data Splits\n\nThere are 6 corpus variants in total\n- : The original DPR Wikipedia corpus with non-overlapping passages, each 100 words long, from Karpukhin et al.,\n> Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. Dense Passage Retrieval for Open-Domain Question Answering. _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, pages 6769-6781, 2020.\n- : Our replication of the above corpus\n- : A corpus similar to above i.e. without tables, infoboxes, and lists. Segmentation is done differently, with a passage size of 6 sentences and a stride of 3 sentences. Note, this means that passages are overlapped.\n- : Like wiki-text-6-3, but with a passage size of 8 sentences and a stride of 4 sentences.\n- : A corpus with tables, infoboxes, and lists included with a passage size of 6 sentences and a stride of 3 sentences.\n- : Like wiki-all-6-3, but with a passage size of 8 sentences and a stride of 4 sentences.", "## Dataset Creation", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe start with downloading the full December 20, 2018 Wikipedia XML dump: from the Internet Archive: URL This is then Pre-processed by WikiExtractor: URL (making sure to modify the code to include lists as desired and replacing any tables with the string \"TABLETOREPLACE\") and DrQA: URL (again making sure to modify the code to not remove lists as desired).\n\nWe then apply the pre-processing script) we make available in Pyserini to generate the different corpus variants." ]
ddd24029c66289429ca47b3813d5367690256e8e
# Dataset Card for "dreambooth-hackathon-images-mario-bg-1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathang/dreambooth-hackathon-images-mario-bg-1
[ "region:us" ]
2022-12-25T13:59:53+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 559875.0, "num_examples": 15}], "download_size": 523924, "dataset_size": 559875.0}}
2022-12-25T14:00:04+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images-mario-bg-1" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images-mario-bg-1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images-mario-bg-1\"\n\nMore Information needed" ]
855bfbc27798067480ebe537177acd13dbdb75a0
# Dataset Card for "donut_check" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tsabar/donut_check
[ "region:us" ]
2022-12-25T14:59:13+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "letter", "1": "form", "2": "email", "3": "handwritten", "4": "advertisement", "5": "scientific report", "6": "scientific publication", "7": "specification", "8": "file folder", "9": "news article", "10": "budget", "11": "invoice", "12": "presentation", "13": "questionnaire", "14": "resume", "15": "memo"}}}}], "splits": [{"name": "train", "num_bytes": 19445096.284, "num_examples": 160}, {"name": "test", "num_bytes": 19445071.284, "num_examples": 160}], "download_size": 0, "dataset_size": 38890167.568}}
2022-12-25T15:37:28+00:00
[]
[]
TAGS #region-us
# Dataset Card for "donut_check" More Information needed
[ "# Dataset Card for \"donut_check\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"donut_check\"\n\nMore Information needed" ]
f7bd228608aed4228a5c2b50cb407d1e3d9ab4d9
# Dataset Card for "rvl_cdip_10_examples_per_class_donut" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tsabar/rvl_cdip_10_examples_per_class_donut
[ "region:us" ]
2022-12-25T15:05:22+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "letter", "1": "form", "2": "email", "3": "handwritten", "4": "advertisement", "5": "scientific report", "6": "scientific publication", "7": "specification", "8": "file folder", "9": "news article", "10": "budget", "11": "invoice", "12": "presentation", "13": "questionnaire", "14": "resume", "15": "memo"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 18011328.0, "num_examples": 160}, {"name": "train", "num_bytes": 19396350.0, "num_examples": 160}], "download_size": 35234585, "dataset_size": 37407678.0}}
2022-12-25T15:38:22+00:00
[]
[]
TAGS #region-us
# Dataset Card for "rvl_cdip_10_examples_per_class_donut" More Information needed
[ "# Dataset Card for \"rvl_cdip_10_examples_per_class_donut\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"rvl_cdip_10_examples_per_class_donut\"\n\nMore Information needed" ]
e15cb6c72ced2942908b04108710953456a9bfbc
Textual Inversion embedding trained on Hades game art. Tested on the Anything V3 model. We recommend including the words "cartoon", "comic", "realistic", and "dark outlines" in your prompt for better results.
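A minimal usage sketch with `diffusers`. The Anything V3 checkpoint id and the `<hades-style>` placeholder token are assumptions, not values confirmed by this card; substitute the actual checkpoint and the token this embedding was trained with, and note this assumes the repository stores the embedding in a format `load_textual_inversion` understands.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumptions: "Linaqruf/anything-v3.0" hosts Anything V3, and "<hades-style>"
# is the placeholder token for this embedding.
pipe = StableDiffusionPipeline.from_pretrained(
    "Linaqruf/anything-v3.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("AgntPerseus/Hadesstl", token="<hades-style>")

prompt = "portrait of a warrior, <hades-style>, cartoon, comic, dark outlines"
pipe(prompt).images[0].save("hades_style.png")
```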
AgntPerseus/Hadesstl
[ "license:creativeml-openrail-m", "region:us" ]
2022-12-25T16:34:53+00:00
{"license": "creativeml-openrail-m"}
2022-12-25T16:53:03+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
Textual Inversion embedding trained on Hades game art. Tested on the Anything V3 model. We recommend including the words "cartoon", "comic", "realistic", and "dark outlines" in your prompt for better results.
[]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n" ]
47bf9539caf69656bc76b583b840514bf8b062db
# Dataset Card for "coco-panoptic-val2017" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nielsr/coco-panoptic-val2017
[ "region:us" ]
2022-12-25T16:56:03+00:00
{"dataset_info": {"features": [{"name": "label", "dtype": "image"}, {"name": "segments_info", "list": [{"name": "id", "dtype": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "bbox", "sequence": "int64"}, {"name": "area", "dtype": "int64"}]}, {"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 850795822.0, "num_examples": 5000}], "download_size": 849210800, "dataset_size": 850795822.0}}
2022-12-25T17:26:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "coco-panoptic-val2017" More Information needed
[ "# Dataset Card for \"coco-panoptic-val2017\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"coco-panoptic-val2017\"\n\nMore Information needed" ]
3399f7dcadd3a804dfee4f5c63698406dd8cf3c0
# Dataset Card for "dreambooth-hackathon-images" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mjfang27/dreambooth-hackathon-images
[ "region:us" ]
2022-12-26T00:58:26+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 23442012.0, "num_examples": 16}], "download_size": 23419281, "dataset_size": 23442012.0}}
2022-12-26T00:58:42+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed" ]
b15916bc0ee9a35dc8cb83e6c83d242ddb9e453c
# Dataset Card for "rulibrispeech" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bond005/rulibrispeech
[ "region:us" ]
2022-12-26T10:39:04+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "transcription", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11165185580.744, "num_examples": 54472}, {"name": "test", "num_bytes": 306649969.0, "num_examples": 1352}, {"name": "validation", "num_bytes": 321842480.0, "num_examples": 1400}], "download_size": 10689335725, "dataset_size": 11793678029.744}}
2023-01-18T19:38:48+00:00
[]
[]
TAGS #region-us
# Dataset Card for "rulibrispeech" More Information needed
[ "# Dataset Card for \"rulibrispeech\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"rulibrispeech\"\n\nMore Information needed" ]
ddcf45a2145f82ea1144cc4983142f715ed33cc1
This dataset contains paragraphs labeled as either relevant or not relevant to soft skills.
ateffal/softskills
[ "license:mit", "region:us" ]
2022-12-26T13:03:37+00:00
{"license": "mit"}
2023-04-05T17:15:12+00:00
[]
[]
TAGS #license-mit #region-us
This dataset contains paragraphs labeled as either relevant or not relevant to soft skills.
[]
[ "TAGS\n#license-mit #region-us \n" ]
00a72802b462725230c19b10106285072df9680a
# Dataset Card for "processed_bert_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
enpassant/processed_bert_dataset
[ "region:us" ]
2022-12-26T14:04:46+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 158400.0, "num_examples": 44}], "download_size": 30837, "dataset_size": 158400.0}}
2022-12-26T14:26:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "processed_bert_dataset" More Information needed
[ "# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed" ]
49fa65494419053dd5401d686f337104a26fd6b5
# Dataset Card for tox21_srp53 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://moleculenet.org/** - **Repository: https://github.com/deepchem/deepchem/tree/master** - **Paper: https://arxiv.org/abs/1703.00564** ### Dataset Summary `tox21_srp53` is a dataset included in [MoleculeNet](https://moleculenet.org/). It is the p53 stress-response pathway activation (SR-p53) task from Tox21. ## Dataset Structure ### Data Fields Each split contains * `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule * `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule * `target`: clinical trial toxicity (or absence of toxicity) ### Data Splits The dataset is split into an 80/10/10 train/valid/test split using scaffold split. ### Source Data #### Initial Data Collection and Normalization Data was originally generated by the Pande Group at Stanford. ### Licensing Information This dataset was originally released under an MIT license. ### Citation Information ``` @misc{https://doi.org/10.48550/arxiv.1703.00564, doi = {10.48550/ARXIV.1703.00564}, url = {https://arxiv.org/abs/1703.00564}, author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay}, keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences}, title = {MoleculeNet: A Benchmark for Molecular Machine Learning}, publisher = {arXiv}, year = {2017}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` ### Contributions Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset.
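A hedged loading sketch: field names follow the card above, and the `selfies` round-trip check is illustrative only (the decoded SMILES may differ from the stored one up to canonicalization).

```python
from datasets import load_dataset
import selfies as sf  # assumption: pip install selfies

ds = load_dataset("zpn/tox21_srp53", split="train")
example = ds[0]

# The SELFIES field should decode to a molecule equivalent to the SMILES field.
decoded = sf.decoder(example["selfies"])
print(example["smiles"], "->", decoded, "| label:", example["target"])
```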
zpn/tox21_srp53
[ "task_categories:other", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "license:mit", "bio", "bio-chem", "molnet", "molecule-net", "biophysics", "arxiv:1703.00564", "region:us" ]
2022-12-26T14:55:36+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "tox21_srp53", "tags": ["bio", "bio-chem", "molnet", "molecule-net", "biophysics"], "dataset_info": {"features": [{"name": "smiles", "dtype": "string"}, {"name": "selfies", "dtype": "string"}, {"name": "target", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 1055437, "num_examples": 6264}, {"name": "test", "num_bytes": 223704, "num_examples": 784}, {"name": "validation", "num_bytes": 224047, "num_examples": 783}], "download_size": 451728, "dataset_size": 1503188}}
2022-12-26T15:10:20+00:00
[ "1703.00564" ]
[]
TAGS #task_categories-other #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #license-mit #bio #bio-chem #molnet #molecule-net #biophysics #arxiv-1703.00564 #region-us
# Dataset Card for tox21_srp53 ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL ### Dataset Summary 'tox21_srp53' is a dataset included in MoleculeNet. It is the p53 stress-response pathway activation (SR-p53) task from Tox21. ## Dataset Structure ### Data Fields Each split contains * 'smiles': the SMILES representation of a molecule * 'selfies': the SELFIES representation of a molecule * 'target': clinical trial toxicity (or absence of toxicity) ### Data Splits The dataset is split into an 80/10/10 train/valid/test split using scaffold split. ### Source Data #### Initial Data Collection and Normalization Data was originally generated by the Pande Group at Stanford. ### Licensing Information This dataset was originally released under an MIT license. ### Contributions Thanks to @zanussbaum for adding this dataset.
[ "# Dataset Card for tox21_srp53", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "### Dataset Summary\n\n'tox21_srp53' is a dataset included in MoleculeNet. It is the p53 stress-response pathway activation (SR-p53) task from Tox21.", "## Dataset Structure", "### Data Fields\n\nEach split contains\n\n* 'smiles': the SMILES representation of a molecule\n* 'selfies': the SELFIES representation of a molecule\n* 'target': clinical trial toxicity (or absence of toxicity)", "### Data Splits\n\nThe dataset is split into an 80/10/10 train/valid/test split using scaffold split.", "### Source Data", "#### Initial Data Collection and Normalization\n\nData was originially generated by the Pande Group at Standford", "### Licensing Information\n\nThis dataset was originally released under an MIT license", "### Contributions\n\nThanks to @zanussbaum for adding this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #license-mit #bio #bio-chem #molnet #molecule-net #biophysics #arxiv-1703.00564 #region-us \n", "# Dataset Card for tox21_srp53", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL", "### Dataset Summary\n\n'tox21_srp53' is a dataset included in MoleculeNet. It is the p53 stress-response pathway activation (SR-p53) task from Tox21.", "## Dataset Structure", "### Data Fields\n\nEach split contains\n\n* 'smiles': the SMILES representation of a molecule\n* 'selfies': the SELFIES representation of a molecule\n* 'target': clinical trial toxicity (or absence of toxicity)", "### Data Splits\n\nThe dataset is split into an 80/10/10 train/valid/test split using scaffold split.", "### Source Data", "#### Initial Data Collection and Normalization\n\nData was originially generated by the Pande Group at Standford", "### Licensing Information\n\nThis dataset was originally released under an MIT license", "### Contributions\n\nThanks to @zanussbaum for adding this dataset." ]
71a989dcf6961ed4be10df81f20141f9dcf52f68
# Dataset Card for "bkk-budget-ner-page" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
napatswift/bkk-budget-ner-page
[ "region:us" ]
2022-12-26T16:55:11+00:00
{"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-ENTRY", "2": "I-ENTRY"}}}}], "splits": [{"name": "train", "num_bytes": 2455950.107936508, "num_examples": 472}, {"name": "test", "num_bytes": 822118.8920634921, "num_examples": 158}], "download_size": 377734, "dataset_size": 3278069.0}}
2022-12-31T10:33:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "bkk-budget-ner-page" More Information needed
[ "# Dataset Card for \"bkk-budget-ner-page\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"bkk-budget-ner-page\"\n\nMore Information needed" ]
d44bc2bfdeae58b77bf87d6205b52b8ba62a3c31
# Dataset Card for ACLFig Dataset <!-- ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) --> ## Dataset Description - **Paper:** - **Leaderboard:** ### Dataset Summary 1758 total labelled images The scientific figures dataset contains 1758 scientific figures extracted from 890 research papers (ACL). The scientific figures are in png format. The dataset has been classified into 19 categories. These are - Algorithms - Architecture/Pipeline diagrams - Bar charts - Box Plots - Confusion Matrix - Graph - Line Chart - Maps - Natural Images - Neural Networks - NLP rules/grammar - Pie chart - Scatter Plot - Screenshots - Tables - Trees - Pareto chart - Venn Diagram - Word Cloud The scientific figures are in the `png` directory. The `metadata` directory contains metadata extracted from the pdf along with scientific figures in json format. Finally, the `scientific_figures.csv` file contains the following columns/fields: 1. `sci_fig` : Scientific figure name 2. `caption`: Caption text 3. `inline_reference`: Scientific figure contexts mentioned in the research paper 4. `metadata`: metadata json filename 5. `label`: One of the 19 categories as described above. 6. `acl_paper_id`: Unique identifier assigned to each pdf by ACL ### Supported Tasks and Leaderboards Multi-label classification ## Dataset Creation The dataset was created using papers in the ACL Anthology. ### Annotations #### Annotation process ~2k images manually labelled ### Citation Information TODO ### Contributions Thanks to [@zebaKarishma](https://github.com/zebaKarishma), [@shauryr](https://github.com/shauryr) and [@KavyaPuranik](https://github.com/KavyaPuranik) for adding this dataset.
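A small sketch of working with the metadata CSV described above; the local path is hypothetical, and the column names follow the list in the card:

```python
import pandas as pd

# Hypothetical local path to the CSV shipped with the dataset files.
df = pd.read_csv("scientific_figures.csv")

# Distribution of figures across the 19 categories.
print(df["label"].value_counts())

# Caption and in-text references for a single figure.
row = df.iloc[0]
print(row["sci_fig"], "|", row["caption"], "|", row["inline_reference"])
```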
citeseerx/ACL-fig
[ "task_categories:image-classification", "task_ids:multi-label-image-classification", "annotations_creators:expert-generated", "language_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-12-26T18:28:49+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated", "found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-label-image-classification"], "pretty_name": "ACL-Fig", "tags": []}
2023-01-04T12:52:12+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #task_ids-multi-label-image-classification #annotations_creators-expert-generated #language_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #region-us
# Dataset Card for ACLFig Dataset ## Dataset Description - Paper: - Leaderboard: ### Dataset Summary 1758 total labelled images The scientific figures dataset contains 1758 scientific figures extracted from 890 research papers (ACL). The scientific figures are in png format. The dataset has been classified into 19 categories. These are - Algorithms - Architecture/Pipeline diagrams - Bar charts - Box Plots - Confusion Matrix - Graph - Line Chart - Maps - Natural Images - Neural Networks - NLP rules/grammar - Pie chart - Scatter Plot - Screenshots - Tables - Trees - Pareto chart - Venn Diagram - Word Cloud The scientific figures are in the 'png' directory. The 'metadata' directory contains metadata extracted from the pdf along with scientific figures in json format. Finally, the 'scientific_figures.csv' file contains the following columns/fields: 1. 'sci_fig' : Scientific figure name 2. 'caption': Caption text 3. 'inline_reference': Scientific figure contexts mentioned in the research paper 4. 'metadata': metadata json filename 5. 'label': One of the 19 categories as described above. 6. 'acl_paper_id': Unique identifier assigned to each pdf by ACL ### Supported Tasks and Leaderboards Multi-label classification ## Dataset Creation The dataset was created using papers in the ACL Anthology. ### Annotations #### Annotation process ~2k images manually labelled TODO ### Contributions Thanks to @zebaKarishma, @shauryr and @KavyaPuranik for adding this dataset.
[ "# Dataset Card for ACLFig Dataset", "## Dataset Description\n\n- Paper:\n- Leaderboard:", "### Dataset Summary\n\n1758 total labelled images\n\nThe scientific figures dataset contains 1758 scientific figures extracted from 890 research papers(ACL). The scientific figures are in png format.\n\nThe dataset has been classified into 19 categories. These are \n- Algorithms \n- Architecture/Pipeline diagrams\n- Bar charts \n- Box Plots \n- Confusion Matrix\n- Graph \n- Line Chart \n- Maps \n- Natural Images \n- Neural Networks \n- NLP rules/grammar \n- Pie chart \n- Scatter Plot \n- Screenshots\n- Tables\n- Trees \n- Pareto chart \n- Venn Diagram \n- Word Cloud\n\n\nThe scientific figures are in the 'png' directory.\n\nThe 'metadata' directory contains metadata extracted from the pdf along with scientific figures in json format.\n\nFinally, the 'scientific_figures.csv' file contains following columns/fields:\n\n1. 'sci_fig' : Scientific figure name\n\n2. 'caption': Caption text\n\n3. 'inline_reference': Scientific figure contexts mentioned in the research paper\n\n4. 'metadata': metadata json filename\n\n5. 'label': One of the 19 categories as described above.\n\n6. 'acl_paper_id': Unique identifier assigned to each pdf by ACL", "### Supported Tasks and Leaderboards\n\nMulti-label classification", "## Dataset Creation\nThe dataset was created using papers in ACL Anthology.", "### Annotations", "#### Annotation process\n~2k images manually labelled\n\n\nTODO", "### Contributions\n\nThanks to @zebaKarishma, @shauryr and @KavyaPuranik for adding this dataset." ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-label-image-classification #annotations_creators-expert-generated #language_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for ACLFig Dataset", "## Dataset Description\n\n- Paper:\n- Leaderboard:", "### Dataset Summary\n\n1758 total labelled images\n\nThe scientific figures dataset contains 1758 scientific figures extracted from 890 research papers(ACL). The scientific figures are in png format.\n\nThe dataset has been classified into 19 categories. These are \n- Algorithms \n- Architecture/Pipeline diagrams\n- Bar charts \n- Box Plots \n- Confusion Matrix\n- Graph \n- Line Chart \n- Maps \n- Natural Images \n- Neural Networks \n- NLP rules/grammar \n- Pie chart \n- Scatter Plot \n- Screenshots\n- Tables\n- Trees \n- Pareto chart \n- Venn Diagram \n- Word Cloud\n\n\nThe scientific figures are in the 'png' directory.\n\nThe 'metadata' directory contains metadata extracted from the pdf along with scientific figures in json format.\n\nFinally, the 'scientific_figures.csv' file contains following columns/fields:\n\n1. 'sci_fig' : Scientific figure name\n\n2. 'caption': Caption text\n\n3. 'inline_reference': Scientific figure contexts mentioned in the research paper\n\n4. 'metadata': metadata json filename\n\n5. 'label': One of the 19 categories as described above.\n\n6. 'acl_paper_id': Unique identifier assigned to each pdf by ACL", "### Supported Tasks and Leaderboards\n\nMulti-label classification", "## Dataset Creation\nThe dataset was created using papers in ACL Anthology.", "### Annotations", "#### Annotation process\n~2k images manually labelled\n\n\nTODO", "### Contributions\n\nThanks to @zebaKarishma, @shauryr and @KavyaPuranik for adding this dataset." ]
01bcc51d26814e610d91eef4ecf39df687babc63
This dataset is a processed version of the Social Bias Inference Corpus (SBIC) dataset, including text, annotators' demographics, and annotation disagreement labels. <br> Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br> Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br> Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br>
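A minimal inspection sketch that applies equally to the sibling SChem, Dilemmas, and DynaSent disagreement datasets below; the split and field names are assumptions, so the first step is simply to inspect the columns:

```python
from datasets import load_dataset

# Assumption: a default config with a "train" split exists.
ds = load_dataset("RuyuanWan/SBIC_Disagreement", split="train")
print(ds.column_names)  # text, annotator demographics, disagreement label
print(ds[0])
```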
RuyuanWan/SBIC_Disagreement
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended|social_bias_frames", "language:en", "region:us" ]
2022-12-26T18:46:23+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["extended|social_bias_frames"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RuyuanWan/SBIC_Disagreement", "tags": []}
2022-12-26T22:07:09+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-extended|social_bias_frames #language-English #region-us
This dataset is a processed version of the Social Bias Inference Corpus (SBIC) dataset, including text, annotators' demographics, and annotation disagreement labels. <br> Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br> Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br> Github repo: URL <br>
[]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #source_datasets-extended|social_bias_frames #language-English #region-us \n" ]
500a8fd3383138a5efece6c6744028e3211a6cc0
This dataset is a processed version of the Social Chemistry 101 (SChem) dataset, including text and annotation disagreement labels. <br> Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br> Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br> Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br> Source Data: [Social Chemistry 101 (Forbes et al. 2020)](https://github.com/mbforbes/social-chemistry-101) <br>
RuyuanWan/SChem_Disagreement
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:extended", "language:en", "region:us" ]
2022-12-26T19:56:21+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RuyuanWan/SChem_Disagreement", "tags": []}
2022-12-26T22:03:28+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #source_datasets-extended #language-English #region-us
This dataset is a processed version of the Social Chemistry 101 (SChem) dataset, including text and the annotation disagreement labels. <br> Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br> Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br> Github repo: URL <br> Source Data: Social Chemistry 101 (Forbes et al. 2020) <br>
[]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #source_datasets-extended #language-English #region-us \n" ]
8d5520fc4675fe37bd4b15271feb982a04c8f8ba
This dataset is a processed version of the Dilemmas dataset, including text and the annotation disagreement labels. <br> Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br> Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br> Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br> Source Data: [Scruples-dilemmas (Lourie, Bras, and Choi 2021)](https://github.com/allenai/scruples) <br>
RuyuanWan/Dilemmas_Disagreement
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:extended", "language:en", "region:us" ]
2022-12-26T21:21:21+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RuyuanWan/Dilemmas_Disagreement", "tags": []}
2022-12-26T21:28:17+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #source_datasets-extended #language-English #region-us
This dataset is a processed version of the Dilemmas dataset, including text and the annotation disagreement labels. <br> Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br> Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br> Github repo: URL <br> Source Data: Scruples-dilemmas (Lourie, Bras, and Choi 2021) <br>
[]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #source_datasets-extended #language-English #region-us \n" ]
48fe35d8cd764209a087ee36823523c42119c866
This dataset is a processed version of the Dynamic Sentiment Analysis (DynaSent) dataset, including text and the annotation disagreement labels. <br> Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br> Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br> Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br> Source Data: [Dynamic Sentiment Analysis Dataset (Potts et al. 2021)](https://github.com/cgpotts/dynasent) <br>
RuyuanWan/Dynasent_Disagreement
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:find", "multilinguality:monolingual", "source_datasets:extended", "language:en", "region:us" ]
2022-12-26T21:32:44+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["find"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RuyuanWan/Dynasent_Disagreement", "tags": []}
2022-12-26T22:14:00+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #language_creators-find #multilinguality-monolingual #source_datasets-extended #language-English #region-us
This dataset is a processed version of the Dynamic Sentiment Analysis (DynaSent) dataset, including text and the annotation disagreement labels. <br> Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br> Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br> Github repo: URL <br> Source Data: Dynamic Sentiment Analysis Dataset (Potts et al. 2021) <br>
[]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-find #multilinguality-monolingual #source_datasets-extended #language-English #region-us \n" ]
9527b141f5b9acda32e0df4f69040b746459ead5
This dataset is a processed version of the Stanford Politeness Corpus (Wikipedia), including text and the annotation disagreement labels. <br> Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br> Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br> Github repo: https://github.com/minnesotanlp/Quantifying-Annotation-Disagreement <br> Source Data: [Wikipedia Politeness Corpus (Danescu-Niculescu-Mizil et al. 2013)](https://convokit.cornell.edu/documentation/wiki_politeness.html) <br>
RuyuanWan/Politeness_Disagreement
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:extended", "language:en", "region:us" ]
2022-12-26T21:44:39+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["extended"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "RuyuanWan/Politeness_Disagreement", "tags": []}
2022-12-26T22:21:56+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #source_datasets-extended #language-English #region-us
This dataset is a processed version of the Stanford Politeness Corpus (Wikipedia), including text and the annotation disagreement labels. <br> Paper: Everyone's Voice Matters: Quantifying Annotation Disagreement Using Demographic Information <br> Authors: Ruyuan Wan, Jaehyung Kim, Dongyeop Kang <br> Github repo: URL <br> Source Data: Wikipedia Politeness Corpus (Danescu-Niculescu-Mizil et al. 2013) <br>
[]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #source_datasets-extended #language-English #region-us \n" ]
bdb17e3672308890562fe8f5ebe5d07bc88d764a
# Dataset Card for "c4-10k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
NeelNanda/c4-10k
[ "region:us" ]
2022-12-26T23:12:45+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[us]"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21970889, "num_examples": 10000}], "download_size": 13645542, "dataset_size": 21970889}}
2022-12-26T23:12:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "c4-10k" More Information needed
[ "# Dataset Card for \"c4-10k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"c4-10k\"\n\nMore Information needed" ]
82e9178fdd9e1f7ac93f81234aeedceec49bc8b4
# Dataset Card for "c4-code-10k" 10K elements of C4 and 10K elements of code parrot clean (Python code). Note that these are the datasets used to train my interpretability-friendly models, but is *not* of the correct mixture. Those models were trained on 83% C4 and 17% Python Code (ish) by tokens. This dataset has 10K strings of each, and by tokens is about 22M of code and 5M of C4 (code is longer and harder to compress!) [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
NeelNanda/c4-code-20k
[ "region:us" ]
2022-12-26T23:22:53+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101351288, "num_examples": 20000}], "download_size": 42778874, "dataset_size": 101351288}}
2022-12-26T23:25:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "c4-code-10k" 10K elements of C4 and 10K elements of code parrot clean (Python code). Note that these are the datasets used to train my interpretability-friendly models, but is *not* of the correct mixture. Those models were trained on 83% C4 and 17% Python Code (ish) by tokens. This dataset has 10K strings of each, and by tokens is about 22M of code and 5M of C4 (code is longer and harder to compress!) More Information needed
[ "# Dataset Card for \"c4-code-10k\"\n\n10K elements of C4 and 10K elements of code parrot clean (Python code).\n\nNote that these are the datasets used to train my interpretability-friendly models, but is *not* of the correct mixture. Those models were trained on 83% C4 and 17% Python Code (ish) by tokens. This dataset has 10K strings of each, and by tokens is about 22M of code and 5M of C4 (code is longer and harder to compress!)\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"c4-code-10k\"\n\n10K elements of C4 and 10K elements of code parrot clean (Python code).\n\nNote that these are the datasets used to train my interpretability-friendly models, but is *not* of the correct mixture. Those models were trained on 83% C4 and 17% Python Code (ish) by tokens. This dataset has 10K strings of each, and by tokens is about 22M of code and 5M of C4 (code is longer and harder to compress!)\n\nMore Information needed" ]
30d18ef25f976ac51a63b38874300a11416b121b
# Dataset Card for "wiki-10k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
NeelNanda/wiki-10k
[ "region:us" ]
2022-12-27T00:22:16+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 222757944, "num_examples": 10000}], "download_size": 129077566, "dataset_size": 222757944}}
2022-12-27T00:22:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for "wiki-10k" More Information needed
[ "# Dataset Card for \"wiki-10k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"wiki-10k\"\n\nMore Information needed" ]
bcedc04b957a14ae24047f9f36051c78560f30e1
# Dataset Card for "code-10k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
NeelNanda/code-10k
[ "region:us" ]
2022-12-27T00:24:22+00:00
{"dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "copies", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "hash", "dtype": "int64"}, {"name": "line_mean", "dtype": "float64"}, {"name": "line_max", "dtype": "int64"}, {"name": "alpha_frac", "dtype": "float64"}, {"name": "autogenerated", "dtype": "bool"}, {"name": "ratio", "dtype": "float64"}, {"name": "config_test", "dtype": "bool"}, {"name": "has_no_keywords", "dtype": "bool"}, {"name": "few_assignments", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 81445605, "num_examples": 10000}], "download_size": 29955076, "dataset_size": 81445605}}
2022-12-27T00:24:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for "code-10k" More Information needed
[ "# Dataset Card for \"code-10k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"code-10k\"\n\nMore Information needed" ]
d2c5a5ddd6cf7dcc4ac9b5e0d085184ae7594386
.
fendiirfan/bocah-alam-chatbot
[ "region:us" ]
2022-12-27T08:42:04+00:00
{}
2022-12-27T08:57:15+00:00
[]
[]
TAGS #region-us
.
[]
[ "TAGS\n#region-us \n" ]
808913993bc82bbdc40796edc7813955786dabee
{"2 (1).jpg": "boots", "2 (10).jpg": "boots", "2 (11).jpg": "boots", "2 (12).jpg": "boots", "2 (13).jpg": "boots", "2 (14).jpg": "boots", "2 (15).jpg": "boots", "2 (16).jpg": "boots", "2 (17).jpg": "boots", "2 (18).jpg": "boots", "2 (19).jpg": "boots", "2 (2).jpg": "boots", "2 (3).jpg": "boots", "2 (4).jpg": "boots", "2 (5).jpg": "boots", "2 (6).jpg": "boots", "2 (7).jpg": "boots", "2 (8).jpg": "boots", "2 (9).jpg": "boots", "1 (1).jpg": "heels", "1 (10).jpg": "heels", "1 (11).jpg": "heels", "1 (12).jpg": "heels", "1 (13).jpg": "heels", "1 (14).jpg": "heels", "1 (15).jpg": "heels", "1 (16).jpg": "heels", "1 (17).jpg": "heels", "1 (18).jpg": "heels", "1 (19).jpg": "heels", "1 (2).jpg": "heels", "1 (20).jpg": "heels", "1 (21).jpg": "heels", "1 (22).jpg": "heels", "1 (23).jpg": "heels", "1 (24).jpg": "heels", "1 (25).jpg": "heels", "1 (26).jpg": "heels", "1 (27).jpg": "heels", "1 (28).jpg": "heels", "1 (29).jpg": "heels", "1 (3).jpg": "heels", "1 (30).jpg": "heels", "1 (31).jpg": "heels", "1 (32).jpg": "heels", "1 (33).jpg": "heels", "1 (34).jpg": "heels", "1 (35).jpg": "heels", "1 (36).jpg": "heels", "1 (37).jpg": "heels", "1 (38).jpg": "heels", "1 (39).jpg": "heels", "1 (4).jpg": "heels", "1 (40).jpg": "heels", "1 (41).jpg": "heels", "1 (42).jpg": "heels", "1 (43).jpg": "heels", "1 (44).jpg": "heels", "1 (45).jpg": "heels", "1 (46).jpg": "heels", "1 (47).jpg": "heels", "1 (48).jpg": "heels", "1 (49).jpg": "heels", "1 (5).jpg": "heels", "1 (50).jpg": "heels", "1 (51).jpg": "heels", "1 (52).jpg": "heels", "1 (53).jpg": "heels", "1 (54).jpg": "heels", "1 (55).jpg": "heels", "1 (56).jpg": "heels", "1 (57).jpg": "heels", "1 (58).jpg": "heels", "1 (59).jpg": "heels", "1 (6).jpg": "heels", "1 (60).jpg": "heels", "1 (61).jpg": "heels", "1 (62).jpg": "heels", "1 (63).jpg": "heels", "1 (64).jpg": "heels", "1 (65).jpg": "heels", "1 (66).jpg": "heels", "1 (67).jpg": "heels", "1 (68).jpg": "heels", "1 (69).jpg": "heels", "1 (7).jpg": "heels", "1 (70).jpg": "heels", "1 (71).jpg": "heels", "1 (72).jpg": "heels", "1 (73).jpg": "heels", "1 (74).jpg": "heels", "1 (75).jpg": "heels", "1 (76).jpg": "heels", "1 (77).jpg": "heels", "1 (78).jpg": "heels", "1 (79).jpg": "heels", "1 (8).jpg": "heels", "1 (80).jpg": "heels", "1 (81).jpg": "heels", "1 (82).jpg": "heels", "1 (83).jpg": "heels", "1 (84).jpg": "heels", "1 (85).jpg": "heels", "1 (86).jpg": "heels", "1 (87).jpg": "heels", "1 (88).jpg": "heels", "1 (89).jpg": "heels", "1 (9).jpg": "heels"}
Franksking/Shoe
[ "region:us" ]
2022-12-27T10:05:37+00:00
{}
2022-12-27T10:11:17+00:00
[]
[]
TAGS #region-us
{"2 (1).jpg": "boots", "2 (10).jpg": "boots", "2 (11).jpg": "boots", "2 (12).jpg": "boots", "2 (13).jpg": "boots", "2 (14).jpg": "boots", "2 (15).jpg": "boots", "2 (16).jpg": "boots", "2 (17).jpg": "boots", "2 (18).jpg": "boots", "2 (19).jpg": "boots", "2 (2).jpg": "boots", "2 (3).jpg": "boots", "2 (4).jpg": "boots", "2 (5).jpg": "boots", "2 (6).jpg": "boots", "2 (7).jpg": "boots", "2 (8).jpg": "boots", "2 (9).jpg": "boots", "1 (1).jpg": "heels", "1 (10).jpg": "heels", "1 (11).jpg": "heels", "1 (12).jpg": "heels", "1 (13).jpg": "heels", "1 (14).jpg": "heels", "1 (15).jpg": "heels", "1 (16).jpg": "heels", "1 (17).jpg": "heels", "1 (18).jpg": "heels", "1 (19).jpg": "heels", "1 (2).jpg": "heels", "1 (20).jpg": "heels", "1 (21).jpg": "heels", "1 (22).jpg": "heels", "1 (23).jpg": "heels", "1 (24).jpg": "heels", "1 (25).jpg": "heels", "1 (26).jpg": "heels", "1 (27).jpg": "heels", "1 (28).jpg": "heels", "1 (29).jpg": "heels", "1 (3).jpg": "heels", "1 (30).jpg": "heels", "1 (31).jpg": "heels", "1 (32).jpg": "heels", "1 (33).jpg": "heels", "1 (34).jpg": "heels", "1 (35).jpg": "heels", "1 (36).jpg": "heels", "1 (37).jpg": "heels", "1 (38).jpg": "heels", "1 (39).jpg": "heels", "1 (4).jpg": "heels", "1 (40).jpg": "heels", "1 (41).jpg": "heels", "1 (42).jpg": "heels", "1 (43).jpg": "heels", "1 (44).jpg": "heels", "1 (45).jpg": "heels", "1 (46).jpg": "heels", "1 (47).jpg": "heels", "1 (48).jpg": "heels", "1 (49).jpg": "heels", "1 (5).jpg": "heels", "1 (50).jpg": "heels", "1 (51).jpg": "heels", "1 (52).jpg": "heels", "1 (53).jpg": "heels", "1 (54).jpg": "heels", "1 (55).jpg": "heels", "1 (56).jpg": "heels", "1 (57).jpg": "heels", "1 (58).jpg": "heels", "1 (59).jpg": "heels", "1 (6).jpg": "heels", "1 (60).jpg": "heels", "1 (61).jpg": "heels", "1 (62).jpg": "heels", "1 (63).jpg": "heels", "1 (64).jpg": "heels", "1 (65).jpg": "heels", "1 (66).jpg": "heels", "1 (67).jpg": "heels", "1 (68).jpg": "heels", "1 (69).jpg": "heels", "1 (7).jpg": "heels", "1 (70).jpg": "heels", "1 (71).jpg": "heels", "1 (72).jpg": "heels", "1 (73).jpg": "heels", "1 (74).jpg": "heels", "1 (75).jpg": "heels", "1 (76).jpg": "heels", "1 (77).jpg": "heels", "1 (78).jpg": "heels", "1 (79).jpg": "heels", "1 (8).jpg": "heels", "1 (80).jpg": "heels", "1 (81).jpg": "heels", "1 (82).jpg": "heels", "1 (83).jpg": "heels", "1 (84).jpg": "heels", "1 (85).jpg": "heels", "1 (86).jpg": "heels", "1 (87).jpg": "heels", "1 (88).jpg": "heels", "1 (89).jpg": "heels", "1 (9).jpg": "heels"}
[]
[ "TAGS\n#region-us \n" ]
8538c3ee7b5dcbdc9f119085c910bf6f96de93be
Configuration for Stable-Diffusion - Automatic 1111
BlodyTraveler/automatic1111config
[ "region:us" ]
2022-12-27T10:50:41+00:00
{}
2022-12-28T08:24:14+00:00
[]
[]
TAGS #region-us
Configuration for Stable-Diffusion - Automatic 1111
[]
[ "TAGS\n#region-us \n" ]
c93456930b7ae826d75c2ab8fb38d64b7bd73f43
# Dataset Card for "HebrewStageAndLyricsWithNewLines" * Contains poems and stories from "New Stage" ("במה חדשה") * Contains text lines from various Hebrew song lyrics * Data contains new-line characters * Generated from a text file in which different poems were seperated using a double new-line character * The script I made for converting the text file into a dataset is [available here](https://huggingface.co/datasets/Norod78/HebrewStageAndLyricsWithNewLines/blob/main/load_ds.py)
Norod78/HebrewStageAndLyricsWithNewLines
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "multilinguality:monolingual", "language:he", "region:us" ]
2022-12-27T12:14:25+00:00
{"language": ["he"], "multilinguality": ["monolingual"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 12638465.341690589, "num_examples": 11113}, {"name": "train", "num_bytes": 240110370.6583094, "num_examples": 211129}], "download_size": 133520933, "dataset_size": 252748836.0}}
2022-12-28T20:04:04+00:00
[]
[ "he" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #multilinguality-monolingual #language-Hebrew #region-us
# Dataset Card for "HebrewStageAndLyricsWithNewLines" * Contains poems and stories from "New Stage" ("במה חדשה") * Contains text lines from various Hebrew song lyrics * Data contains new-line characters * Generated from a text file in which different poems were seperated using a double new-line character * The script I made for converting the text file into a dataset is available here
[ "# Dataset Card for \"HebrewStageAndLyricsWithNewLines\"\n\n* Contains poems and stories from \"New Stage\" (\"במה חדשה\")\n* Contains text lines from various Hebrew song lyrics\n* Data contains new-line characters\n* Generated from a text file in which different poems were seperated using a double new-line character\n* The script I made for converting the text file into a dataset is available here" ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #multilinguality-monolingual #language-Hebrew #region-us \n", "# Dataset Card for \"HebrewStageAndLyricsWithNewLines\"\n\n* Contains poems and stories from \"New Stage\" (\"במה חדשה\")\n* Contains text lines from various Hebrew song lyrics\n* Data contains new-line characters\n* Generated from a text file in which different poems were seperated using a double new-line character\n* The script I made for converting the text file into a dataset is available here" ]
813243fb132c54ee12d1eb22791f584b803ce601
# Pick a Pic * We are periodically uploading (almost) all of the collected data from [pickapic.io](https://pickapic.io/). * We have three different datasets: * [Images dataset](https://huggingface.co/datasets/yuvalkirstain/PickaPic-images) - includes the images that were created as part of Pick a Pic. * [Rankings dataset](https://huggingface.co/datasets/yuvalkirstain/PickaPic-rankings) - includes the rankings that users submitted in Pick a Pic. * [Downloads dataset](https://huggingface.co/datasets/yuvalkirstain/PickaPic-downloads) - includes the images that users downloaded in Pick a Pic. * Help us create the largest publicly available human-feedback dataset for text-to-image generation! * You can reach us on [discord](https://discord.gg/qKEVkF85DT) or by [mail]([email protected]).
yuvalkirstain/PickaPic
[ "region:us" ]
2022-12-27T14:20:20+00:00
{}
2023-01-30T15:57:03+00:00
[]
[]
TAGS #region-us
# Pick a Pic * We are periodically uploading (almost) all of the collected data from URL. * We have three different datasets: * Images dataset - includes the images that were created as part of Pick a Pic. * Rankings dataset - includes the rankings that users submitted in Pick a Pic. * Downloads dataset - includes the images that users downloaded in Pick a Pic. * Help us create the largest publicly available human-feedback dataset for text-to-image generation! * You can reach us on discord or by mail.
[ "# Pick a Pic\n\n* We are periodically uploading (almost) all of the collected data from URL.\n* We have three different datasets:\n * Images dataset - includes the images that were created as part of Pick a Pic.\n * Rankings dataset - includes the rankings that users submitted in Pick a Pic.\n * Downloads dataset - includes the images that users downloaded in Pick a Pic.\n* Help us in creating the largest publicly available human-feedback for text-to-image dataset!\n* You can reach us on discord or by mail." ]
[ "TAGS\n#region-us \n", "# Pick a Pic\n\n* We are periodically uploading (almost) all of the collected data from URL.\n* We have three different datasets:\n * Images dataset - includes the images that were created as part of Pick a Pic.\n * Rankings dataset - includes the rankings that users submitted in Pick a Pic.\n * Downloads dataset - includes the images that users downloaded in Pick a Pic.\n* Help us in creating the largest publicly available human-feedback for text-to-image dataset!\n* You can reach us on discord or by mail." ]
eaa7be77c802340cdf4ad991d3917410bb3559fc
## Description The Pixiv Niji Journey dataset is a collection of 9766 images with accompanying metadata, scraped from the online art platform Pixiv. The images were collected using the `gallery-dl` Python package, with the search term "nijijourney" on Pixiv. The collection period for the dataset was from November 6, 2022 to December 27, 2022. The dataset is divided into two variants: `raw` and `preprocessed`. The `raw` variant contains the pure dataset resulting from the scraping of Pixiv, while the `preprocessed` variant contains the same dataset but with additional preprocessing steps applied. These preprocessing steps include converting the images from RGB to RGBA, labeling the dataset with captions using the BLIP tool, and providing Danbooru tags using the wd-v1-4-vit-tagger tool. The `preprocessed` variant has also been carefully cleaned and filtered to remove any low quality or irrelevant images. The images in the dataset are in JPG and PNG format, and the metadata is provided in JSON format, while the preprocessed metadata is provided in `.txt` and `.caption` format. The metadata includes information about the images such as their captions, tags, and other metadata provided by Pixiv. The structure of the raw and preprocessed variants of the dataset is described in the `File Structure` section below. The Pixiv Niji Journey dataset is primarily intended for use in machine learning tasks related to image classification and caption generation. It can also be used as a dataset for image generation models such as stable diffusion. However, users should be aware that the dataset may contain biases or limitations, such as the bias of the Pixiv platform or the specific search term used to collect the data. ## File Structure The structure of the raw files is as follows: ``` nijijourney_pixiv_2022110620221222_raw.zip/ ├╴nijijourney/ │ ├╴images.png │ ├╴images.png.json │ └╴... ``` while the structure of the preprocessed files is: ``` nijijourney_pixiv_2022110620221222_preprocessed.zip/ ├╴dataset/ │ ├╴images.png │ ├╴images.png.json │ ├╴images.txt │ ├╴images.caption │ └╴... ├╴meta_cap.json ├╴meta_dd.json ├╴meta_clean.json ``` ## Usage - Access: the dataset is available for download from the Hugging Face dataset collection - Format: the dataset is provided in ZIP format, with images in PNG format and metadata in JSON format - Requirements: the dataset requires no specific requirements or dependencies for use ## Data Quality - Number of images: 9766 - Image sizes: vary, but all images are in PNG format - Class balance: the distribution of classes in the dataset is not known - Quality: the dataset has been carefully cleaned and filtered to remove low quality or irrelevant images ## Limitations While the Pixiv Niji Journey dataset has been carefully cleaned and preprocessed to ensure high quality and consistency, it is important to be aware of certain limitations and biases that may be present in the dataset. Some potential limitations of the dataset include: - Bias of the Pixiv platform: Pixiv is an online art platform that may have its own biases in terms of the content that is available and the users who contribute to it. This could potentially introduce biases into the dataset. - Search term bias: The dataset was collected using the search term "nijijourney" on Pixiv, which may have introduced biases into the dataset depending on the popularity and prevalence of this term on the platform. 
- Limited scope: The dataset only includes images scraped from Pixiv, and therefore may not be representative of a wider range of images or artistic styles. - Potential errors or inconsistencies in the metadata: While every effort has been made to ensure the accuracy of the metadata, there may be errors or inconsistencies present in the data. It is important to be aware of these limitations and to consider them when using the Pixiv Niji Journey dataset for research or other purposes. ## License The Pixiv Niji Journey dataset is made available under the terms of the AGPL-3.0 license. This license is a copyleft license that allows users to freely use, modify, and distribute the dataset, as long as any modified versions are also made available under the same terms. Under the terms of the AGPL-3.0 license, users are allowed to: - Use the dataset for any purpose, commercial or non-commercial - Modify the dataset as needed for their purposes - Distribute copies of the dataset, either modified or unmodified However, users must also follow the following conditions: - Any modified versions of the dataset must be made available under the same AGPL-3.0 license - If the dataset is used to provide a service to others (such as through a website or API), the source code for the service must be made available to users under the AGPL-3.0 license It is important to carefully review the terms of the AGPL-3.0 license and ensure that you understand your rights and obligations when using the Pixiv Niji Journey dataset. ## Citation If you use this dataset in your work, please cite it as follows: ``` @misc{pixiv_niji_journey, author = {Linaqruf}, title = {Pixiv Niji Journey}, year = {2022}, publisher = {Hugging Face}, url = {https://huggingface.co/datasets/Linaqruf/pixiv-niji-journey}, } ```
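As a usage sketch for the preprocessed layout shown in the File Structure section above, the following pairs each image with its sidecar files; only the directory layout comes from the card, everything else is an assumption:

```python
# Hedged sketch: walk the preprocessed 'dataset/' layout from the File Structure
# section and pair each image with its Pixiv metadata (.json), BLIP caption
# (.caption), and Danbooru tags (.txt).
import json
from pathlib import Path

root = Path("dataset")  # the preprocessed folder shown in the File Structure
for img in sorted(root.glob("*.png")):
    meta = json.loads((root / (img.name + ".json")).read_text(encoding="utf-8"))
    caption = (root / (img.stem + ".caption")).read_text(encoding="utf-8").strip()
    tags = (root / (img.stem + ".txt")).read_text(encoding="utf-8").strip()
    # ...hand (img, caption, tags, meta) to a training or indexing pipeline
```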
Linaqruf/pixiv-niji-journey
[ "license:agpl-3.0", "region:us" ]
2022-12-27T14:43:38+00:00
{"license": "agpl-3.0"}
2023-01-10T03:32:36+00:00
[]
[]
TAGS #license-agpl-3.0 #region-us
## Description The Pixiv Niji Journey dataset is a collection of 9766 images with accompanying metadata, scraped from the online art platform Pixiv. The images were collected using the 'gallery-dl' Python package, with the search term "nijijourney" on Pixiv. The collection period for the dataset was from November 6, 2022 to December 27, 2022. The dataset is divided into two variants: 'raw' and 'preprocessed'. The 'raw' variant contains the pure dataset resulting from the scraping of Pixiv, while the 'preprocessed' variant contains the same dataset but with additional preprocessing steps applied. These preprocessing steps include converting the images from RGB to RGBA, labeling the dataset with captions using the BLIP tool, and providing Danbooru tags using the wd-v1-4-vit-tagger tool. The 'preprocessed' variant has also been carefully cleaned and filtered to remove any low quality or irrelevant images. The images in the dataset are in JPG and PNG format, and the metadata is provided in JSON format, while the preprocessed metadata is provided in '.txt' and '.caption' format. The metadata includes information about the images such as their captions, tags, and other metadata provided by Pixiv. The structure of the raw and preprocessed variants of the dataset is described in the 'File Structure' section below. The Pixiv Niji Journey dataset is primarily intended for use in machine learning tasks related to image classification and caption generation. It can also be used as a dataset for image generation models such as stable diffusion. However, users should be aware that the dataset may contain biases or limitations, such as the bias of the Pixiv platform or the specific search term used to collect the data. ## File Structure The structure of the raw files is as follows: while the structure of the preprocessed files is: ## Usage - Access: the dataset is available for download from the Hugging Face dataset collection - Format: the dataset is provided in ZIP format, with images in PNG format and metadata in JSON format - Requirements: the dataset requires no specific requirements or dependencies for use ## Data Quality - Number of images: 9766 - Image sizes: vary, but all images are in PNG format - Class balance: the distribution of classes in the dataset is not known - Quality: the dataset has been carefully cleaned and filtered to remove low quality or irrelevant images ## Limitations While the Pixiv Niji Journey dataset has been carefully cleaned and preprocessed to ensure high quality and consistency, it is important to be aware of certain limitations and biases that may be present in the dataset. Some potential limitations of the dataset include: - Bias of the Pixiv platform: Pixiv is an online art platform that may have its own biases in terms of the content that is available and the users who contribute to it. This could potentially introduce biases into the dataset. - Search term bias: The dataset was collected using the search term "nijijourney" on Pixiv, which may have introduced biases into the dataset depending on the popularity and prevalence of this term on the platform. - Limited scope: The dataset only includes images scraped from Pixiv, and therefore may not be representative of a wider range of images or artistic styles. - Potential errors or inconsistencies in the metadata: While every effort has been made to ensure the accuracy of the metadata, there may be errors or inconsistencies present in the data. 
It is important to be aware of these limitations and to consider them when using the Pixiv Niji Journey dataset for research or other purposes. ## License The Pixiv Niji Journey dataset is made available under the terms of the AGPL-3.0 license. This license is a copyleft license that allows users to freely use, modify, and distribute the dataset, as long as any modified versions are also made available under the same terms. Under the terms of the AGPL-3.0 license, users are allowed to: - Use the dataset for any purpose, commercial or non-commercial - Modify the dataset as needed for their purposes - Distribute copies of the dataset, either modified or unmodified However, users must also follow the following conditions: - Any modified versions of the dataset must be made available under the same AGPL-3.0 license - If the dataset is used to provide a service to others (such as through a website or API), the source code for the service must be made available to users under the AGPL-3.0 license It is important to carefully review the terms of the AGPL-3.0 license and ensure that you understand your rights and obligations when using the Pixiv Niji Journey dataset. If you use this dataset in your work, please cite it as follows:
[ "## Description\n\nThe Pixiv Niji Journey dataset is a collection of 9766 images with accompanying metadata, scraped from the online art platform Pixiv. The images were collected using the 'gallery-dl' Python package, with the search term \"nijijourney\" on Pixiv. The collection period for the dataset was from November 6, 2022 to December 27, 2022.\n\nThe dataset is divided into two variants: 'raw' and 'preprocessed'. The 'raw' variant contains the pure dataset resulting from the scraping of Pixiv, while the 'preprocessed' variant contains the same dataset but with additional preprocessing steps applied. These preprocessing steps include converting the images from RGB to RGBA, labeling the dataset with captions using the BLIP tool, and providing Danbooru tags using the wd-v1-4-vit-tagger tool. The 'preprocessed' variant has also been carefully cleaned and filtered to remove any low quality or irrelevant images.\n\nThe images in the dataset are in JPG and PNG format, and the metadata is provided in JSON format, while the preprocessed metadata is provided in '.txt' and '.caption' format. The metadata includes information about the images such as their captions, tags, and other metadata provided by Pixiv. The structure of the raw and preprocessed variants of the dataset is described in the 'File Structure' section below.\n\nThe Pixiv Niji Journey dataset is primarily intended for use in machine learning tasks related to image classification and caption generation. It can also be used as a dataset for image generation models such as stable diffusion. However, users should be aware that the dataset may contain biases or limitations, such as the bias of the Pixiv platform or the specific search term used to collect the data.", "## File Structure\n\nThe structure of the raw files is as follows:\n\n\nwhile the structure of the preprocessed files is:", "## Usage \n\n- Access: the dataset is available for download from the Hugging Face dataset collection\n- Format: the dataset is provided in ZIP format, with images in PNG format and metadata in JSON format\n- Requirements: the dataset requires no specific requirements or dependencies for use", "## Data Quality\n\n- Number of images: 9766\n- Image sizes: vary, but all images are in PNG format\n- Class balance: the distribution of classes in the dataset is not known\n- Quality: the dataset has been carefully cleaned and filtered to remove low quality or irrelevant images", "## Limitations\n\nWhile the Pixiv Niji Journey dataset has been carefully cleaned and preprocessed to ensure high quality and consistency, it is important to be aware of certain limitations and biases that may be present in the dataset. Some potential limitations of the dataset include:\n\n- Bias of the Pixiv platform: Pixiv is an online art platform that may have its own biases in terms of the content that is available and the users who contribute to it. 
This could potentially introduce biases into the dataset.\n\n- Search term bias: The dataset was collected using the search term \"nijijourney\" on Pixiv, which may have introduced biases into the dataset depending on the popularity and prevalence of this term on the platform.\n\n- Limited scope: The dataset only includes images scraped from Pixiv, and therefore may not be representative of a wider range of images or artistic styles.\n\n- Potential errors or inconsistencies in the metadata: While every effort has been made to ensure the accuracy of the metadata, there may be errors or inconsistencies present in the data.\n\nIt is important to be aware of these limitations and to consider them when using the Pixiv Niji Journey dataset for research or other purposes.", "## License\n\nThe Pixiv Niji Journey dataset is made available under the terms of the AGPL-3.0 license. This license is a copyleft license that allows users to freely use, modify, and distribute the dataset, as long as any modified versions are also made available under the same terms.\n\nUnder the terms of the AGPL-3.0 license, users are allowed to:\n- Use the dataset for any purpose, commercial or non-commercial\n- Modify the dataset as needed for their purposes\n- Distribute copies of the dataset, either modified or unmodified\n\nHowever, users must also follow the following conditions:\n- Any modified versions of the dataset must be made available under the same AGPL-3.0 license\n- If the dataset is used to provide a service to others (such as through a website or API), the source code for the service must be made available to users under the AGPL-3.0 license\n\nIt is important to carefully review the terms of the AGPL-3.0 license and ensure that you understand your rights and obligations when using the Pixiv Niji Journey dataset. \n\nIf you use this dataset in your work, please cite it as follows:" ]
[ "TAGS\n#license-agpl-3.0 #region-us \n", "## Description\n\nThe Pixiv Niji Journey dataset is a collection of 9766 images with accompanying metadata, scraped from the online art platform Pixiv. The images were collected using the 'gallery-dl' Python package, with the search term \"nijijourney\" on Pixiv. The collection period for the dataset was from November 6, 2022 to December 27, 2022.\n\nThe dataset is divided into two variants: 'raw' and 'preprocessed'. The 'raw' variant contains the pure dataset resulting from the scraping of Pixiv, while the 'preprocessed' variant contains the same dataset but with additional preprocessing steps applied. These preprocessing steps include converting the images from RGB to RGBA, labeling the dataset with captions using the BLIP tool, and providing Danbooru tags using the wd-v1-4-vit-tagger tool. The 'preprocessed' variant has also been carefully cleaned and filtered to remove any low quality or irrelevant images.\n\nThe images in the dataset are in JPG and PNG format, and the metadata is provided in JSON format, while the preprocessed metadata is provided in '.txt' and '.caption' format. The metadata includes information about the images such as their captions, tags, and other metadata provided by Pixiv. The structure of the raw and preprocessed variants of the dataset is described in the 'File Structure' section below.\n\nThe Pixiv Niji Journey dataset is primarily intended for use in machine learning tasks related to image classification and caption generation. It can also be used as a dataset for image generation models such as stable diffusion. However, users should be aware that the dataset may contain biases or limitations, such as the bias of the Pixiv platform or the specific search term used to collect the data.", "## File Structure\n\nThe structure of the raw files is as follows:\n\n\nwhile the structure of the preprocessed files is:", "## Usage \n\n- Access: the dataset is available for download from the Hugging Face dataset collection\n- Format: the dataset is provided in ZIP format, with images in PNG format and metadata in JSON format\n- Requirements: the dataset requires no specific requirements or dependencies for use", "## Data Quality\n\n- Number of images: 9766\n- Image sizes: vary, but all images are in PNG format\n- Class balance: the distribution of classes in the dataset is not known\n- Quality: the dataset has been carefully cleaned and filtered to remove low quality or irrelevant images", "## Limitations\n\nWhile the Pixiv Niji Journey dataset has been carefully cleaned and preprocessed to ensure high quality and consistency, it is important to be aware of certain limitations and biases that may be present in the dataset. Some potential limitations of the dataset include:\n\n- Bias of the Pixiv platform: Pixiv is an online art platform that may have its own biases in terms of the content that is available and the users who contribute to it. 
This could potentially introduce biases into the dataset.\n\n- Search term bias: The dataset was collected using the search term \"nijijourney\" on Pixiv, which may have introduced biases into the dataset depending on the popularity and prevalence of this term on the platform.\n\n- Limited scope: The dataset only includes images scraped from Pixiv, and therefore may not be representative of a wider range of images or artistic styles.\n\n- Potential errors or inconsistencies in the metadata: While every effort has been made to ensure the accuracy of the metadata, there may be errors or inconsistencies present in the data.\n\nIt is important to be aware of these limitations and to consider them when using the Pixiv Niji Journey dataset for research or other purposes.", "## License\n\nThe Pixiv Niji Journey dataset is made available under the terms of the AGPL-3.0 license. This license is a copyleft license that allows users to freely use, modify, and distribute the dataset, as long as any modified versions are also made available under the same terms.\n\nUnder the terms of the AGPL-3.0 license, users are allowed to:\n- Use the dataset for any purpose, commercial or non-commercial\n- Modify the dataset as needed for their purposes\n- Distribute copies of the dataset, either modified or unmodified\n\nHowever, users must also follow the following conditions:\n- Any modified versions of the dataset must be made available under the same AGPL-3.0 license\n- If the dataset is used to provide a service to others (such as through a website or API), the source code for the service must be made available to users under the AGPL-3.0 license\n\nIt is important to carefully review the terms of the AGPL-3.0 license and ensure that you understand your rights and obligations when using the Pixiv Niji Journey dataset. \n\nIf you use this dataset in your work, please cite it as follows:" ]
c1066fb4cbd28e291fc86825f58207bd80806559
# MindBigData 2022 A Large Dataset of Brain Signals > Supporting datasets for paper [arXiv:2212.14746](https://arxiv.org/abs/2212.14746) > There are 3 Main datasets with subdatasets: > **1.- MindBigData MNIST of Brain Digits** > based on http://mindbigdata.com/opendb/index.html > But all datasets split into 80% Train / 20% Test (also proportional in the 11 classes) > EEGs resampled to match the original headsets' sampling rate > Included headers. > and simplified to contain only the label & EEG data as rows named in headers as ChannelName-SampleNum, i.e. for channel FP1 and MindWave it will be FP1-0 FP1-1 ..... FP1-1023 since there are 1024 samples. > There are 4 subdatasets: > > For MindWave with 1 EEG Channel and 1024 samples x Channel > > For EPOC1 with 14 EEG Channels and 256 samples x Channel > > For Muse1 with 4 EEG Channels and 440 samples x Channel > > For Insight1 with 5 EEG Channels and 256 samples x Channel > **1.1.- MindBigData MNIST of Brain digits MindWave1** https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_MW > **1.2.- MindBigData MNIST of Brain digits EPOC1** https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_EP **1.3.- MindBigData MNIST of Brain digits Muse1** https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_MU **1.4.- MindBigData MNIST of Brain digits Insight1** https://huggingface.co/datasets/DavidVivancos/MindBigData2022_MNIST_IN **2.- MindBigData Imagenet of the Brain** > based on http://mindbigdata.com/opendb/imagenet.html > But all datasets split into 80% Train / 20% Test (also proportional in all the classes) > EEGs resampled to match the original headsets' sampling rate > Included headers. > contains the label as the ILSVRC2013 category, a hot-encoded name list, the RGB pixel values of the image seen resampled to 150 pixels by 150 pixels & EEG data as rows named in headers as ChannelName-SampleNum, > There are 2 subdatasets: > > One with the Insight 1 EEG signals at 384 samples per channel (5 channels) > > One with the Spectrogram image 64x64px instead of the EEG as described in the paper > **2.1.- MindBigData Imagenet of the Brain Insight1 EEG** https://huggingface.co/datasets/DavidVivancos/MindBigData2022_Imagenet_IN **2.2.- MindBigData Imagenet of the Brain Insight1 Spectrogram** https://huggingface.co/datasets/DavidVivancos/MindBigData2022_Imagenet_IN_Spct **3.- MindBigData Visual MNIST of Brain Digits** > based on http://mindbigdata.com/opendb/visualmnist.html > But all datasets split into 80% Train / 20% Test (also proportional in the 11 classes) > Included headers. > and simplified to contain only the label, the original MNIST pixels of the digit seen (28x28 pixels) & EEG data as rows named in headers as ChannelName-SampleNum, i.e. for channel TP9 and Muse2 it will be TP9-0 TP9-1 ..... TP9-511 since there are 512 samples. 
> There are 3 subdatasets: > > For Muse2 with 5 EEG Channels, 3 PPG Channels, 3 ACC Channels & 3 GYR Channels and 512 samples x Channel > > For Cap64 with 64 EEG Channels and 400 samples x Channel > > For Cap64 with 64 EEG Channels and 400 samples x Channel but with Morlet png images as EEG outputs > **3.1.- MindBigData Visual MNIST of Brain digits Muse2** https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_MU2 **3.2.- MindBigData Visual MNIST of Brain digits Cap64** https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_Cap64 **3.3.- MindBigData Visual MNIST of Brain digits Cap64 Morlet** https://huggingface.co/datasets/DavidVivancos/MindBigData2022_VisMNIST_Cap64_Morlet
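The ChannelName-SampleNum header convention above maps one CSV row to a (channels, samples) array; a minimal sketch for the MindWave subdataset, with the file name and label column name as assumptions:

```python
# Hedged sketch: turn one MindWave row (1 channel x 1024 samples, headers
# "FP1-0" ... "FP1-1023" per the convention above) into a NumPy array.
import numpy as np
import pandas as pd

df = pd.read_csv("MindBigData2022_MNIST_MW_train.csv")  # file name: assumption
channels, n = ["FP1"], 1024  # MindWave: 1 EEG channel, 1024 samples per channel
row = df.iloc[0]
eeg = np.stack([row[[f"{ch}-{i}" for i in range(n)]].to_numpy(dtype=float)
                for ch in channels])  # shape (1, 1024)
label = row["label"]  # label column name is an assumption
```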
DavidVivancos/MindBigData2022
[ "arxiv:2212.14746", "region:us" ]
2022-12-27T16:01:18+00:00
{}
2023-01-07T10:18:30+00:00
[ "2212.14746" ]
[]
TAGS #arxiv-2212.14746 #region-us
# MindBigData 2022 A Large Dataset of Brain Signals > Supporting datasets for paper arXiv:2212.14746 > There are 3 Main datasets with subdatasets: > 1.- MindBigData MNIST of Brain Digits > based on URL > But all datasets split into 80% Train / 20% Test (also proportional in the 11 classes) > EEGs resampled to match the original headsets' sampling rate > Included headers. > and simplified to contain only the label & EEG data as rows named in headers as ChannelName-SampleNum, i.e. for channel FP1 and MindWave it will be FP1-0 FP1-1 ..... FP1-1023 since there are 1024 samples. > There are 4 subdatasets: > > For MindWave with 1 EEG Channel and 1024 samples x Channel > > For EPOC1 with 14 EEG Channels and 256 samples x Channel > > For Muse1 with 4 EEG Channels and 440 samples x Channel > > For Insight1 with 5 EEG Channels and 256 samples x Channel > 1.1.- MindBigData MNIST of Brain digits MindWave1 URL > 1.2.- MindBigData MNIST of Brain digits EPOC1 URL 1.3.- MindBigData MNIST of Brain digits Muse1 URL 1.4.- MindBigData MNIST of Brain digits Insight1 URL 2.- MindBigData Imagenet of the Brain > based on URL > But all datasets split into 80% Train / 20% Test (also proportional in all the classes) > EEGs resampled to match the original headsets' sampling rate > Included headers. > contains the label as the ILSVRC2013 category, a hot-encoded name list, the RGB pixel values of the image seen resampled to 150 pixels by 150 pixels & EEG data as rows named in headers as ChannelName-SampleNum, > There are 2 subdatasets: > > One with the Insight 1 EEG signals at 384 samples per channel (5 channels) > > One with the Spectrogram image 64x64px instead of the EEG as described in the paper > 2.1.- MindBigData Imagenet of the Brain Insight1 EEG URL 2.2.- MindBigData Imagenet of the Brain Insight1 Spectrogram URL 3.- MindBigData Visual MNIST of Brain Digits > based on URL > But all datasets split into 80% Train / 20% Test (also proportional in the 11 classes) > Included headers. > and simplified to contain only the label, the original MNIST pixels of the digit seen (28x28 pixels) & EEG data as rows named in headers as ChannelName-SampleNum, i.e. for channel TP9 and Muse2 it will be TP9-0 TP9-1 ..... TP9-511 since there are 512 samples. > There are 3 subdatasets: > > For Muse2 with 5 EEG Channels, 3 PPG Channels, 3 ACC Channels & 3 GYR Channels and 512 samples x Channel > > For Cap64 with 64 EEG Channels and 400 samples x Channel > > For Cap64 with 64 EEG Channels and 400 samples x Channel but with Morlet png images as EEG outputs > 3.1.- MindBigData Visual MNIST of Brain digits Muse2 URL 3.2.- MindBigData Visual MNIST of Brain digits Cap64 URL 3.3.- MindBigData Visual MNIST of Brain digits Cap64 Morlet URL
[ "# MindBigData 2022 A Large Dataset of Brain Signals\n> Supporting datasets for paper arXiv:2212.14746\n> There are 3 Main datasets with subdatasets:\n> \n1.- MindBigData MNIST of Brain Digits\n\n> based on URL\n> But all datasets splitted to 80% Train 20% Test (also proportional in the 11 classes)\n> EEG's Resampled to match original headsets sampling rate\n> Included headers.\n> and simplified to contain only label & EEG data as rows named in headers as ChannelName-SampleNum, ie for channel FP1 and MindWave will be FP1-0 FP1-1 ..... FP1-1023 since there are 1024 samples.\n> There are 4 subdatasets:\n> \n> For MindWave with 1 EEG Channel and 1024 samples x Channel\n> \n> For EPOC1 with 14 EEG Channels and 256 samples x Channel\n> \n> For Muse1 with 4 EEG Channels and 440 samples x Channel\n> \n> For Insight1 with 5 EEG Channels and 256 samples x Channel\n> \n1.1.- MindBigData MNIST of Brain digits MindWave1\nURL\n> \n1.2.- MindBigData MNIST of Brain digits EPOC1\nURL\n\n1.3.- MindBigData MNIST of Brain digits Muse1\nURL\n\n1.4.- MindBigData MNIST of Brain digits Insight1\nURL\n\n2.- MindBigData Imagenet of the Brain\n\n> based on URL\n> But all datasets splitted to 80% Train 20% Test (also proportional in all the classes)\n> EEG's Resampled to match original headsets sampling rate\n> Included headers.\n> contains label as the ILSVRC2013 category, and a hotencoded name lists, the RGB pixel values of the image seen resampled to 150pixels by 150 pixels & EEG data as rows named in headers as ChannelName-SampleNum, \n> There are 2 subdatasets:\n> \n> One with the Insight 1 EEG signals at 384 samples per channel (5 channels)\n> \n> One with the Spectrogram image 64x64px instead of the EEG as described in the paper\n> \n 2.1.- MindBigData Imagenet of the Brain Insight1 EEG\n URL\n \n 2.2.- MindBigData Imagenet of the Brain Insight1 Spectrogram\n URL\n\n3.- MindBigData Visual MNIST of Brain Digits\n\n> based on URL\n> But all datasets splitted to 80% Train 20% Test (also proportional in the 11 classes)\n> Included headers.\n> and simplified to contain only label, the original MNIST pixels of the digit seen 28x28pixels & EEG data as rows named in headers as ChannelName-SampleNum, ie for channel TP9 and Muse2 will be TP9-0 TP9-1 ..... TP9-511 since there are 512 samples.\n> There are 3 subdatasets:\n> \n> For Muse2 with 5 EEG Channels, 3 PPG Channels, 3 ACC Channels & 3 GYR Channels and 512 samples x Channel\n> \n> For Cap64 with 64 EEG Channels and 400 samples x Channel\n>\n> For Cap64 with 64 EEG Channels and 400 samples x Channel but with Morlet png images as EEG outputs\n> \n3.1.- MindBigData Visual MNIST of Brain digits Muse2\nURL\n\n3.2.- MindBigData Visual MNIST of Brain digits Cap64\nURL\n\n3.3.- MindBigData Visual MNIST of Brain digits Cap64 Morlet\nURL" ]
[ "TAGS\n#arxiv-2212.14746 #region-us \n", "# MindBigData 2022 A Large Dataset of Brain Signals\n> Supporting datasets for paper arXiv:2212.14746\n> There are 3 Main datasets with subdatasets:\n> \n1.- MindBigData MNIST of Brain Digits\n\n> based on URL\n> But all datasets splitted to 80% Train 20% Test (also proportional in the 11 classes)\n> EEG's Resampled to match original headsets sampling rate\n> Included headers.\n> and simplified to contain only label & EEG data as rows named in headers as ChannelName-SampleNum, ie for channel FP1 and MindWave will be FP1-0 FP1-1 ..... FP1-1023 since there are 1024 samples.\n> There are 4 subdatasets:\n> \n> For MindWave with 1 EEG Channel and 1024 samples x Channel\n> \n> For EPOC1 with 14 EEG Channels and 256 samples x Channel\n> \n> For Muse1 with 4 EEG Channels and 440 samples x Channel\n> \n> For Insight1 with 5 EEG Channels and 256 samples x Channel\n> \n1.1.- MindBigData MNIST of Brain digits MindWave1\nURL\n> \n1.2.- MindBigData MNIST of Brain digits EPOC1\nURL\n\n1.3.- MindBigData MNIST of Brain digits Muse1\nURL\n\n1.4.- MindBigData MNIST of Brain digits Insight1\nURL\n\n2.- MindBigData Imagenet of the Brain\n\n> based on URL\n> But all datasets splitted to 80% Train 20% Test (also proportional in all the classes)\n> EEG's Resampled to match original headsets sampling rate\n> Included headers.\n> contains label as the ILSVRC2013 category, and a hotencoded name lists, the RGB pixel values of the image seen resampled to 150pixels by 150 pixels & EEG data as rows named in headers as ChannelName-SampleNum, \n> There are 2 subdatasets:\n> \n> One with the Insight 1 EEG signals at 384 samples per channel (5 channels)\n> \n> One with the Spectrogram image 64x64px instead of the EEG as described in the paper\n> \n 2.1.- MindBigData Imagenet of the Brain Insight1 EEG\n URL\n \n 2.2.- MindBigData Imagenet of the Brain Insight1 Spectrogram\n URL\n\n3.- MindBigData Visual MNIST of Brain Digits\n\n> based on URL\n> But all datasets splitted to 80% Train 20% Test (also proportional in the 11 classes)\n> Included headers.\n> and simplified to contain only label, the original MNIST pixels of the digit seen 28x28pixels & EEG data as rows named in headers as ChannelName-SampleNum, ie for channel TP9 and Muse2 will be TP9-0 TP9-1 ..... TP9-511 since there are 512 samples.\n> There are 3 subdatasets:\n> \n> For Muse2 with 5 EEG Channels, 3 PPG Channels, 3 ACC Channels & 3 GYR Channels and 512 samples x Channel\n> \n> For Cap64 with 64 EEG Channels and 400 samples x Channel\n>\n> For Cap64 with 64 EEG Channels and 400 samples x Channel but with Morlet png images as EEG outputs\n> \n3.1.- MindBigData Visual MNIST of Brain digits Muse2\nURL\n\n3.2.- MindBigData Visual MNIST of Brain digits Cap64\nURL\n\n3.3.- MindBigData Visual MNIST of Brain digits Cap64 Morlet\nURL" ]
0817b7e7008f61c92e28e72772677f226f887a53
# Disclaimer *This is a hate speech dataset (in Arabic, French, and English).* *Offensive content that does not reflect the opinions of the authors.* # Dataset of our EMNLP 2019 Paper (Multilingual and Multi-Aspect Hate Speech Analysis) For more details about our dataset, please check our paper: @inproceedings{ousidhoum-etal-multilingual-hate-speech-2019, title = "Multilingual and Multi-Aspect Hate Speech Analysis", author = "Ousidhoum, Nedjma and Lin, Zizheng and Zhang, Hongming and Song, Yangqiu and Yeung, Dit-Yan", booktitle = "Proceedings of EMNLP", year = "2019", publisher = "Association for Computational Linguistics", } (You can preview our paper on https://arxiv.org/pdf/1908.11049.pdf) ## Clarification The multi-labelled tasks are *the hostility type of the tweet* and the *annotator's sentiment*. (We kept labels on which at least two annotators agreed.) ## Taxonomy In further experiments that involved binary classification tasks of the hostility/hate/abuse type, we considered single-labelled *normal* instances to be *non-hate/non-toxic* and all the other instances to be *toxic*. ## Dataset Our dataset is composed of three csv files sorted by language. They contain the tweets and the annotations described in our paper: the hostility type *(column: tweet sentiment)* hostility directness *(column: directness)* target attribute *(column: target)* target group *(column: group)* annotator's sentiment *(column: annotator sentiment)*. ## Experiments To replicate our experiments, please see https://github.com/HKUST-KnowComp/MLMA_hate_speech/blob/master/README.md
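A minimal sketch for inspecting one of the three per-language CSVs with pandas; the file name is hypothetical, and the column names are taken from the card's parenthetical descriptions above, so they are checked defensively since the exact spellings are an assumption:

```python
# Hedged sketch: load one per-language CSV and inspect the annotation columns
# described in the card. File name and exact column spellings are assumptions.
import pandas as pd

df = pd.read_csv("en_dataset.csv")
described = ["tweet sentiment", "directness", "target", "group",
             "annotator sentiment"]
present = [c for c in described if c in df.columns]
print(df[present].head() if present else df.columns.tolist())
```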
nedjmaou/MLMA_hate_speech
[ "license:mit", "arxiv:1908.11049", "region:us" ]
2022-12-27T17:04:33+00:00
{"license": "mit"}
2022-12-28T11:24:32+00:00
[ "1908.11049" ]
[]
TAGS #license-mit #arxiv-1908.11049 #region-us
# Disclaimer *This is a hate speech dataset (in Arabic, French, and English).* *Offensive content that does not reflect the opinions of the authors.* # Dataset of our EMNLP 2019 Paper (Multilingual and Multi-Aspect Hate Speech Analysis) For more details about our dataset, please check our paper: @inproceedings{ousidhoum-etal-multilingual-hate-speech-2019, title = "Multilingual and Multi-Aspect Hate Speech Analysis", author = "Ousidhoum, Nedjma and Lin, Zizheng and Zhang, Hongming and Song, Yangqiu and Yeung, Dit-Yan", booktitle = "Proceedings of EMNLP", year = "2019", publisher = "Association for Computational Linguistics", } (You can preview our paper on URL ## Clarification The multi-labelled tasks are *the hostility type of the tweet* and the *annotator's sentiment*. (We kept labels on which at least two annotators agreed.) ## Taxonomy In further experiments that involved binary classification tasks of the hostility/hate/abuse type, we considered single-labelled *normal* instances to be *non-hate/non-toxic* and all the other instances to be *toxic*. ## Dataset Our dataset is composed of three csv files sorted by language. They contain the tweets and the annotations described in our paper: the hostility type *(column: tweet sentiment)* hostility directness *(column: directness)* target attribute *(column: target)* target group *(column: group)* annotator's sentiment *(column: annotator sentiment)*. ## Experiments To replicate our experiments, please see URL
[ "# Disclaimer\n*This is a hate speech dataset (in Arabic, French, and English).*\n\n*Offensive content that does not reflect the opinions of the authors.*", "# Dataset of our EMNLP 2019 Paper (Multilingual and Multi-Aspect Hate Speech Analysis)\nFor more details about our dataset, please check our paper:\n\n\t@inproceedings{ousidhoum-etal-multilingual-hate-speech-2019,\n \t\ttitle = \"Multilingual and Multi-Aspect Hate Speech Analysis\",\n \t\tauthor = \"Ousidhoum, Nedjma\n \t\tand Lin, Zizheng\n \t\tand Zhang, Hongming\n \t\tand Song, Yangqiu\n \t\tand Yeung, Dit-Yan\",\n \t\t\tbooktitle = \"Proceedings of EMNLP\",\n \t\tyear = \"2019\",\n \t\tpublisher =\t\"Association for Computational Linguistics\",\n\t}\t\n\n(You can preview our paper on URL", "## Clarification\nThe multi-labelled tasks are *the hostility type of the tweet* and the *annotator's sentiment*. (We kept labels on which at least two annotators agreed.)", "## Taxonomy\nIn further experiments that involved binary classification tasks of the hostility/hate/abuse type, we considered single-labelled *normal* instances to be *non-hate/non-toxic* and all the other instances to be *toxic*.", "## Dataset\nOur dataset is composed of three csv files sorted by language. They contain the tweets and the annotations described in our paper:\n\nthe hostility type *(column: tweet sentiment)* \n\nhostility directness *(column: directness)* \n\ntarget attribute *(column: target)*\n\ntarget group *(column: group)* \n\nannotator's sentiment *(column: annotator sentiment)*.", "## Experiments\n\nTo replicate our experiments, please see URL" ]
[ "TAGS\n#license-mit #arxiv-1908.11049 #region-us \n", "# Disclaimer\n*This is a hate speech dataset (in Arabic, French, and English).*\n\n*Offensive content that does not reflect the opinions of the authors.*", "# Dataset of our EMNLP 2019 Paper (Multilingual and Multi-Aspect Hate Speech Analysis)\nFor more details about our dataset, please check our paper:\n\n\t@inproceedings{ousidhoum-etal-multilingual-hate-speech-2019,\n \t\ttitle = \"Multilingual and Multi-Aspect Hate Speech Analysis\",\n \t\tauthor = \"Ousidhoum, Nedjma\n \t\tand Lin, Zizheng\n \t\tand Zhang, Hongming\n \t\tand Song, Yangqiu\n \t\tand Yeung, Dit-Yan\",\n \t\t\tbooktitle = \"Proceedings of EMNLP\",\n \t\tyear = \"2019\",\n \t\tpublisher =\t\"Association for Computational Linguistics\",\n\t}\t\n\n(You can preview our paper on URL", "## Clarification\nThe multi-labelled tasks are *the hostility type of the tweet* and the *annotator's sentiment*. (We kept labels on which at least two annotators agreed.)", "## Taxonomy\nIn further experiments that involved binary classification tasks of the hostility/hate/abuse type, we considered single-labelled *normal* instances to be *non-hate/non-toxic* and all the other instances to be *toxic*.", "## Dataset\nOur dataset is composed of three csv files sorted by language. They contain the tweets and the annotations described in our paper:\n\nthe hostility type *(column: tweet sentiment)* \n\nhostility directness *(column: directness)* \n\ntarget attribute *(column: target)*\n\ntarget group *(column: group)* \n\nannotator's sentiment *(column: annotator sentiment)*.", "## Experiments\n\nTo replicate our experiments, please see URL" ]
0ef7a4214e44aeb1b8a7ca98c3a3f04a348be8b2
# Dataset Card for "scalable_project" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
marvmk/scalable_project
[ "region:us" ]
2022-12-27T17:19:18+00:00
{"dataset_info": {"features": [{"name": "Open", "dtype": "float64"}, {"name": "High", "dtype": "float64"}, {"name": "Low", "dtype": "float64"}, {"name": "Close", "dtype": "float64"}, {"name": "Volume", "dtype": "int64"}, {"name": "Inflation", "dtype": "float64"}, {"name": "CPI", "dtype": "float64"}, {"name": "Quarter_end", "dtype": "int64"}, {"name": "Date", "dtype": "timestamp[ns, tz=America/New_York]"}], "splits": [{"name": "train", "num_bytes": 359424, "num_examples": 4992}], "download_size": 0, "dataset_size": 359424}}
2023-01-06T21:58:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "scalable_project" More Information needed
[ "# Dataset Card for \"scalable_project\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"scalable_project\"\n\nMore Information needed" ]
c2ae4cb9df2db03e6c78a16d8e3dc7b961130f21
This dataset is extracted from the Anime "Rent-A-Girlfriend" as posted on Kaggle by [xandercubbin](https://www.kaggle.com/datasets/xandercubbin/chizuru-ichinose). Please refer to the `chizuru_dialog_dataset.ipynb` file to see how the dataset was pre-processed.
alexandreteles/chizuru-ichinose
[ "multilinguality:monolingual", "language:en", "license:cc0-1.0", "region:us" ]
2022-12-27T17:48:44+00:00
{"language": ["en"], "license": "cc0-1.0", "multilinguality": ["monolingual"], "pretty_name": "chizuru", "language_bcp47": ["en-US"]}
2022-12-27T17:53:53+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-cc0-1.0 #region-us
This dataset is extracted from the Anime "Rent-A-Girlfriend" as posted on Kaggle by xandercubbin. Please refer to the 'chizuru_dialog_dataset.ipynb' file to see how the dataset was pre-processed.
[]
[ "TAGS\n#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us \n" ]
5cb66728f6e33f0bd8fce2015bb690c1cf1c4a3d
# TREC DL 2020 Query Variation
spacemanidol/trec-dl2020-query-variation
[ "region:us" ]
2022-12-27T17:57:35+00:00
{}
2022-12-28T18:11:23+00:00
[]
[]
TAGS #region-us
# TREC DL 2020 Query Variation
[ "# TREC DL 2020 Query Variation" ]
[ "TAGS\n#region-us \n", "# TREC DL 2020 Query Variation" ]
e56aad2f9be461b98949bc18a70f6ee2949ebec7
# Albino Style Embedding / Textual Inversion <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/albino_style/resolve/main/showcase.png"/> ## Usage To use this embedding you have to download the file aswell as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"albino_style"``` Personally, I would recommend to use my embeddings with a strength of 0.8, like ```"(albino_style:0.8)"``` I trained the embedding two epochs until 6800 steps. I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/albino_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2022-12-27T18:08:38+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/albino_style/resolve/main/showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2022-12-27T18:12:47+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
# Albino Style Embedding / Textual Inversion <img alt="Showcase" src="URL ## Usage To use this embedding you have to download the file aswell as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: Personally, I would recommend to use my embeddings with a strength of 0.8, like I trained the embedding two epochs until 6800 steps. I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here
[ "# Albino Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend to use my embeddings with a strength of 0.8, like \n\nI trained the embedding two epochs until 6800 steps.\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n", "# Albino Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend to use my embeddings with a strength of 0.8, like \n\nI trained the embedding two epochs until 6800 steps.\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
cc250e4a6c875d20a0d6e9badfbcb3cf39cd391f
# Barbosa Style Embedding / Textual Inversion <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/barbosa_style/resolve/main/barbosa_showcase.png"/> ## Usage To use this embedding you have to download the file aswell as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"barbosa_style"``` Personally, I would recommend to use my embeddings with a strength of 0.8, like ```"(barbosa_style:0.8)"``` I trained the embedding two epochs until 8000 steps. I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/barbosa_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2022-12-27T18:13:37+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/barbosa_style/resolve/main/barbosa_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2022-12-27T18:17:03+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
# Barbosa Style Embedding / Textual Inversion <img alt="Showcase" src="URL ## Usage To use this embedding you have to download the file aswell as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: Personally, I would recommend to use my embeddings with a strength of 0.8, like I trained the embedding two epochs until 8000 steps. I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here
[ "# Barbosa Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend to use my embeddings with a strength of 0.8, like \n\nI trained the embedding two epochs until 8000 steps.\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n", "# Barbosa Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend to use my embeddings with a strength of 0.8, like \n\nI trained the embedding two epochs until 8000 steps.\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
e3c5457a9b60b00e6b2a3e4e783cd5f453d47a43
# Cyberware Style Embedding / Textual Inversion <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/cyberware_style/resolve/main/cyber_showcase.png"/> ## Usage To use this embedding you have to download the file aswell as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"cyberware_style"``` Personally, I would recommend to use my embeddings with a strength of 0.8, but this time I would use it just as it is. The embedding itself is based on the dataset given by Eppinette: https://huggingface.co/Eppinette/Cyberware I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/cyberware_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2022-12-27T18:17:27+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/cyberware_style/resolve/main/cyber_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2022-12-27T18:21:47+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
# Cyberware Style Embedding / Textual Inversion <img alt="Showcase" src="URL ## Usage To use this embedding you have to download the file aswell as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: Personally, I would recommend to use my embeddings with a strength of 0.8, but this time I would use it just as it is. The embedding itself is based on the dataset given by Eppinette: URL I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here
[ "# Cyberware Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend to use my embeddings with a strength of 0.8, but this time I would use it just as it is.\n\nThe embedding itself is based on the dataset given by Eppinette: URL\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n", "# Cyberware Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend to use my embeddings with a strength of 0.8, but this time I would use it just as it is.\n\nThe embedding itself is based on the dataset given by Eppinette: URL\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
917d7062ee8722fa26c0554966884d814a64774a
# Dataset Card for "OxfordPets_facebook_opt_30b_LLM_Description_opt30b_downstream_tasks_ViT_L_14" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_facebook_opt_30b_LLM_Description_opt30b_downstream_tasks_ViT_L_14
[ "region:us" ]
2022-12-27T19:18:36+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 25933.0, "num_examples": 2}], "download_size": 30228, "dataset_size": 25933.0}}
2022-12-27T19:18:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_facebook_opt_30b_LLM_Description_opt30b_downstream_tasks_ViT_L_14" More Information needed
[ "# Dataset Card for \"OxfordPets_facebook_opt_30b_LLM_Description_opt30b_downstream_tasks_ViT_L_14\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_facebook_opt_30b_LLM_Description_opt30b_downstream_tasks_ViT_L_14\"\n\nMore Information needed" ]
a48fe7b9cacb11116b4ab66debf203549c8b75e5
# Dataset Card for OLM November/December 2022 Common Crawl Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 15% of the November/December 2022 Common Crawl snapshot. Note: `last_modified_timestamp` was parsed from whatever a website returned in it's `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`.
olm/olm-CC-MAIN-2022-49-sampling-ratio-olm-0.15114822547
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "language:en", "pretraining", "language modelling", "common crawl", "web", "region:us" ]
2022-12-27T19:22:18+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "OLM November/December 2022 Common Crawl", "tags": ["pretraining", "language modelling", "common crawl", "web"]}
2023-02-05T18:28:47+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #language-English #pretraining #language modelling #common crawl #web #region-us
# Dataset Card for OLM November/December 2022 Common Crawl Cleaned and deduplicated pretraining dataset, created with the OLM repo here from 15% of the November/December 2022 Common Crawl snapshot. Note: 'last_modified_timestamp' was parsed from whatever a website returned in it's 'Last-Modified' header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with 'last_modified_timestamp'.
[ "# Dataset Card for OLM November/December 2022 Common Crawl\n\nCleaned and deduplicated pretraining dataset, created with the OLM repo here from 15% of the November/December 2022 Common Crawl snapshot.\n\nNote: 'last_modified_timestamp' was parsed from whatever a website returned in it's 'Last-Modified' header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with 'last_modified_timestamp'." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #language-English #pretraining #language modelling #common crawl #web #region-us \n", "# Dataset Card for OLM November/December 2022 Common Crawl\n\nCleaned and deduplicated pretraining dataset, created with the OLM repo here from 15% of the November/December 2022 Common Crawl snapshot.\n\nNote: 'last_modified_timestamp' was parsed from whatever a website returned in it's 'Last-Modified' header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with 'last_modified_timestamp'." ]
d0ad6b80c87bd819bda7003fca75afcea5272fca
# Dataset Card for "dreambooth-hackathon-images-sbob" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathang/dreambooth-hackathon-images-sbob
[ "region:us" ]
2022-12-27T19:34:51+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1488165.0, "num_examples": 4}], "download_size": 1489345, "dataset_size": 1488165.0}}
2022-12-27T19:35:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images-sbob" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images-sbob\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images-sbob\"\n\nMore Information needed" ]
459ea11a4586ed02ae164beaf95de2b3e5c9396d
# Hagikora *Aka, Stripped photoshop.* ## FAQ: Q: Can you remove the gated prompts? A: No. Personally I don't want any random person downloading the dataset and finding out it isn't suitable for them. Q: Can you make Zip file. A: Yes. Q: Filtering? A: No filtering done. All files are as is and untouched. You probably want to aesthetic filer on the images or something like that.
KaraKaraWitch/Hagikora
[ "license:cc-by-nc-4.0", "not-for-all-audiences", "region:us" ]
2022-12-27T19:37:33+00:00
{"license": ["cc-by-nc-4.0"], "pretty_name": "Hagikora", "tags": ["not-for-all-audiences"]}
2024-01-19T18:33:36+00:00
[]
[]
TAGS #license-cc-by-nc-4.0 #not-for-all-audiences #region-us
# Hagikora *Aka, Stripped photoshop.* ## FAQ: Q: Can you remove the gated prompts? A: No. Personally I don't want any random person downloading the dataset and finding out it isn't suitable for them. Q: Can you make Zip file. A: Yes. Q: Filtering? A: No filtering done. All files are as is and untouched. You probably want to aesthetic filer on the images or something like that.
[ "# Hagikora\n*Aka, Stripped photoshop.*", "## FAQ:\n\nQ: Can you remove the gated prompts? \nA: No. Personally I don't want any random person downloading the dataset and finding out it isn't suitable for them.\n\nQ: Can you make Zip file. \nA: Yes.\n\nQ: Filtering? \nA: No filtering done. All files are as is and untouched. You probably want to aesthetic filer on the images or something like that." ]
[ "TAGS\n#license-cc-by-nc-4.0 #not-for-all-audiences #region-us \n", "# Hagikora\n*Aka, Stripped photoshop.*", "## FAQ:\n\nQ: Can you remove the gated prompts? \nA: No. Personally I don't want any random person downloading the dataset and finding out it isn't suitable for them.\n\nQ: Can you make Zip file. \nA: Yes.\n\nQ: Filtering? \nA: No filtering done. All files are as is and untouched. You probably want to aesthetic filer on the images or something like that." ]
77cdfbad7898c8eaf6e9587915e916436a048e2d
# Dataset Card for "base_code_review" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Dahoas/base_code_review
[ "region:us" ]
2022-12-27T19:41:07+00:00
{"dataset_info": {"features": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "answers", "list": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "meta_data", "struct": [{"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "ParentId", "dtype": "string"}, {"name": "Score", "dtype": "string"}]}]}, {"name": "meta_data", "struct": [{"name": "AcceptedAnswerId", "dtype": "string"}, {"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "Tags", "sequence": "string"}, {"name": "Title", "dtype": "string"}]}, {"name": "question_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 729807089, "num_examples": 76003}], "download_size": 335610114, "dataset_size": 729807089}}
2022-12-27T19:41:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "base_code_review" More Information needed
[ "# Dataset Card for \"base_code_review\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"base_code_review\"\n\nMore Information needed" ]
54f61a2ccd3c56818966986e214df8fd0b76f7dd
# Dataset Card for "dreambooth-hackathon-images-sbob-jpeg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jonathang/dreambooth-hackathon-images-sbob-jpeg
[ "region:us" ]
2022-12-27T19:41:49+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1028414.0, "num_examples": 4}], "download_size": 1018233, "dataset_size": 1028414.0}}
2022-12-27T19:41:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images-sbob-jpeg" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images-sbob-jpeg\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images-sbob-jpeg\"\n\nMore Information needed" ]
c9c623d82807d4cd68cfcdff019851ea0ef9249b
# Dataset Card for "doge" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
fabiochiu/doge
[ "region:us" ]
2022-12-27T19:48:36+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 451322.0, "num_examples": 5}], "download_size": 451958, "dataset_size": 451322.0}}
2022-12-27T19:55:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "doge" More Information needed
[ "# Dataset Card for \"doge\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"doge\"\n\nMore Information needed" ]
d6b2854fdfcf8a626f8f1ac7b569a5c2accf52a4
# Dataset Card for "OxfordPets_Multimodal_Fatima_opt_175b_LLM_Description_opt175b_downstream_tasks_ViT_L_14" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Multimodal-Fatima/OxfordPets_Multimodal_Fatima_opt_175b_LLM_Description_opt175b_downstream_tasks_ViT_L_14
[ "region:us" ]
2022-12-27T20:01:16+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3482068.0, "num_examples": 100}], "download_size": 3458504, "dataset_size": 3482068.0}}
2022-12-27T20:27:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for "OxfordPets_Multimodal_Fatima_opt_175b_LLM_Description_opt175b_downstream_tasks_ViT_L_14" More Information needed
[ "# Dataset Card for \"OxfordPets_Multimodal_Fatima_opt_175b_LLM_Description_opt175b_downstream_tasks_ViT_L_14\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"OxfordPets_Multimodal_Fatima_opt_175b_LLM_Description_opt175b_downstream_tasks_ViT_L_14\"\n\nMore Information needed" ]
52c70f8ec561c3df37208c5d3ec026910cace849
# Dataset Card for "dreambooth-hackathon-images" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
GV05/dreambooth-hackathon-images
[ "region:us" ]
2022-12-27T20:03:47+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 927160.0, "num_examples": 13}], "download_size": 923205, "dataset_size": 927160.0}}
2022-12-27T20:04:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed" ]
c0a7d4e93c96530a6753118fa4a71148c9425b87
# Dataset Card for "olm-december-2022-tokenized-512" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
olm/olm-december-2022-tokenized-512
[ "region:us" ]
2022-12-27T20:14:46+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 86351663844, "num_examples": 27999891}], "download_size": 23243344520, "dataset_size": 86351663844}}
2022-12-27T20:41:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for "olm-december-2022-tokenized-512" More Information needed
[ "# Dataset Card for \"olm-december-2022-tokenized-512\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"olm-december-2022-tokenized-512\"\n\nMore Information needed" ]
03c6bf31ce30383e0012167401908ac2f91c3c3f
https://colab.research.google.com/github/huggingface/diffusion-models-class/blob/main/hackathon/dreambooth.ipynb?authuser=2#scrollTo=c3defbc3-b9a3-40c7-87dc-61f897025dce
vukrosic/dreambooth-vuk-images
[ "region:us" ]
2022-12-27T20:18:22+00:00
{}
2022-12-28T20:28:01+00:00
[]
[]
TAGS #region-us
URL
[]
[ "TAGS\n#region-us \n" ]
9aadb4105b0e4e32c8514f272cac57c18fc98dc1
# Dataset Card for "olm-december-2022-tokenized-1024" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
olm/olm-december-2022-tokenized-1024
[ "region:us" ]
2022-12-27T21:41:08+00:00
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 86220997560, "num_examples": 14006010}], "download_size": 22866321750, "dataset_size": 86220997560}}
2022-12-27T22:08:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "olm-december-2022-tokenized-1024" More Information needed
[ "# Dataset Card for \"olm-december-2022-tokenized-1024\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"olm-december-2022-tokenized-1024\"\n\nMore Information needed" ]
2d464151cb47742d4a7c724f41f4c44c10cc08cc
# Dataset Card for "dreambooth-hackathon-images-miko" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
davidaponte/dreambooth-hackathon-images-miko
[ "region:us" ]
2022-12-27T22:01:44+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 42574511.0, "num_examples": 14}], "download_size": 42573847, "dataset_size": 42574511.0}}
2022-12-27T22:01:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images-miko" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images-miko\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images-miko\"\n\nMore Information needed" ]
2d816e74320c835019ecc05f0078465eb90a1ed6
Russian dataset for ELQ (Entity Linking for Questions) model (https://github.com/facebookresearch/BLINK/tree/main/elq)
GulPav/elqa_dataset
[ "task_categories:token-classification", "language:ru", "region:us" ]
2022-12-27T22:13:38+00:00
{"language": ["ru"], "task_categories": ["token-classification"], "pretty_name": "Russian ELQ dataset"}
2023-01-08T23:12:13+00:00
[]
[ "ru" ]
TAGS #task_categories-token-classification #language-Russian #region-us
Russian dataset for ELQ (Entity Linking for Questions) model (URL
[]
[ "TAGS\n#task_categories-token-classification #language-Russian #region-us \n" ]
4eb37a7a2e5d19154cbf7923beb30bfbd51220d5
New Russian dataset for ELQ (Entity Linking for Questions) model (https://github.com/facebookresearch/BLINK/tree/main/elq)
GulPav/ru_elq_dataset
[ "task_categories:token-classification", "language:ru", "region:us" ]
2022-12-27T22:14:29+00:00
{"language": ["ru"], "task_categories": ["token-classification"], "pretty_name": "Russian ELQ dataset"}
2023-01-08T23:13:13+00:00
[]
[ "ru" ]
TAGS #task_categories-token-classification #language-Russian #region-us
New Russian dataset for ELQ (Entity Linking for Questions) model (URL
[]
[ "TAGS\n#task_categories-token-classification #language-Russian #region-us \n" ]
3327de626e1e48d7ce7a7d5135effd13e39a696d
# Dataset Card for OASum Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Usage](#dataset-usage) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** [OASum Dataset repository](https://github.com/tencent-ailab/OASum) - **Paper:** [OASum: Large-Scale Open Domain Aspect-based Summarization](https://arxiv.org/pdf/2212.09233.pdf) The OASum Dataset is an English-language dataset containing over 3.6M document, aspect, and summary triplets. ## Dataset Usage You can directly download it with huggingface datasets. ``` python from datasets import load_dataset dataset = load_dataset("kqsong/OASum") ``` ## Dataset Structure ### Data Instances For each instance, there is a list of strings for the document, a list of strings for the summary, a string for the document title, a string for the aspect and a list of indices for the sentences in the corresponding section. ```json { "title": "Ker's WingHouse Bar & Grill", "document":[ "After Clearwater, Florida chicken wing pioneering restaurant chain Hooters began rapidly expanding, Florida based, Canadian-born restaurant entrepreneur Ed Burnett saw the opportunity.", "Burnett secured the rights to a closed restaurant (\"Knockers\") and opened \"The WingHouse\" restaurant at 7369 Ulmerton Road, Largo, Florida, a high traffic corridor.", "He strategically selected the restaurant in between where people work (commercial real estate) and live (residential real estate), to appeal to the local lunch crowd and family dining crowd.", "This flagship location proved to be a success soon after launching and is the model that the chain expanded on.", "Burnett, looking to expand to additional locations, accepted a financing partner (Crawford Ker) during this time frame, to open additional locations and beyond.", "Burnett's goal was to open 20 to 50 locations, and then sell the chain to a larger restaurant chain or investors.", "Burnett would ultimately regret his choice of investor.","In 1992, Ker retired from the NFL and took a job selling cars at a local dealer.", "In 1994, he invested half interest in a Largo, Florida wing restaurant called, \"Wing House\" that imitated Hooters.", "The restaurant was always The Wing House, and the atmosphere was always toned down to make it more family friendly.", "The restaurant did well and two additional locations were opened in the Tampa Bay area in the following three years.", "Ker won a $1.2-million jury award from Hooters in late 2004, which had sued him for trademark violations for allegedly using their uniforms and decor.", "After a three-week trial in which lawyers discussed hula hoops, surfboards, scrunchy socks, pantyhose, and something called \"vicarious sexual recreation\", the jury ruled that no trademark infringement existed and Hooters was penalized for their frivolous lawsuit.", "Hooters appealed the decision, but in June, 2006, the 11th U.S. 
Circuit Court of Appeals in Atlanta upheld the verdict.", "As of 2007, the company had 1,700 employees at 22 locations with revenue of nearly $60 million.", "Ker attended, and the company participated in, the 2007 National Buffalo Wing Festival and placed first in the \"traditional x-hot sauce\" category and gained some national recognition.", "On June 4, 2008 the company announced the launch of its national franchise program.", "In mid-2008 the chain operated 19 locations in Florida and Texas and expected to add six franchises by the end of 2008, and 48 by 2011.", "The initial focus was for franchises in the Southeastern US.", "WingHouses feature several amenities that differ from other wing restaurants, including Hooters.", "There is a full liquor bar in every store, sports memorabilia line the walls instead of NASCAR and most locations include a game room.", "Super Bowl XLIII in Tampa, Florida attracted the rich and famous; WingHouse hosted three events to raise money for charity." ], "aspect": "Opening", "aspect_sents": [0,1,2,3,4,5,6,7,8,9,10], "summary":[ "WingHouse Bar & Grill (formerly Ker\u2019s WingHouse Bar & Grill) is a restaurant chain based in Florida, created and founded by Ed Burnett, a Canadian restaurant entrepreneur.", "After opening his first WingHouse location, Burnett sought out investors to open additional WingHouse locations.", "Burnett accepted investor Crawford Ker (a former National Football League player) to assist financing the expansion." ] } ``` The average token count for the articles and the highlights are provided below: | Feature | Mean Token Count | | ---------- | ---------------- | | Document | 1,612 | | Summary | 40 | ### Data Fields - `title`: a string, containing the original Wikipedia title. - `document`: a list of sentences, containing the original content in the Wikipedia sections except the first abstract section. - `aspect`: a string, containing the section name and its parent section names. - `aspect_sents`: a list of indices, representing the sentences in the `aspect` section. - `summary`: a list of sentences, the corresponding aspect-based summary for the document. ### Data Splits The OASum dataset has 3 splits: _train_, _valid_, and _test_. Below are the statistics for the Version 1.0.0 of the dataset. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 3,523,986 | | Validation | 111,578 | | Test | 112,005 | ## Additional Information ### Licensing Information The OASum Dataset version 1.0.0 is released under the [CC-BY-SA-3.0 License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License) ### Citation Information ``` @article{yang2022oasum, title={Oasum: Large-scale open domain aspect-based summarization}, author={Yang, Xianjun and Song, Kaiqiang and Cho, Sangwoo and Wang, Xiaoyang and Pan, Xiaoman and Petzold, Linda and Yu, Dong}, journal={arXiv preprint arXiv:2212.09233}, year={2022} } ```
kqsong/OASum
[ "task_categories:summarization", "size_categories:1M<n<10M", "language:en", "license:cc-by-sa-3.0", "summarization", "Wikipedia", "arxiv:2212.09233", "region:us" ]
2022-12-27T22:27:17+00:00
{"language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["1M<n<10M"], "task_categories": ["summarization"], "tags": ["summarization", "Wikipedia"]}
2023-07-03T20:02:23+00:00
[ "2212.09233" ]
[ "en" ]
TAGS #task_categories-summarization #size_categories-1M<n<10M #language-English #license-cc-by-sa-3.0 #summarization #Wikipedia #arxiv-2212.09233 #region-us
Dataset Card for OASum Dataset ============================== Table of Contents ----------------- * Dataset Description * Dataset Usage * Dataset Structure + Data Instances + Data Fields + Data Splits * Additional Information + Licensing Information + Citation Information Dataset Description ------------------- * Repository: OASum Dataset repository * Paper: OASum: Large-Scale Open Domain Aspect-based Summarization The OASum Dataset is an English-language dataset containing over 3.6M document, aspect, and summary triplets. Dataset Usage ------------- You can directly download it with huggingface datasets. Dataset Structure ----------------- ### Data Instances For each instance, there is a list of strings for the document, a list of strings for the summary, a string for the document title, a string for the aspect and a list of indices for the sentences in the corresponding section. The average token count for the articles and the highlights are provided below: ### Data Fields * 'title': a string, containing the original Wikipedia title. * 'document': a list of sentences, containing the original content in the Wikipedia sections except the first abstract section. * 'aspect': a string, containing the section name and its parent section names. * 'aspect\_sents': a list of indices, representing the sentences in the 'aspect' section. * 'summary': a list of sentences, the corresponding aspect-based summary for the document. ### Data Splits The OASum dataset has 3 splits: *train*, *valid*, and *test*. Below are the statistics for the Version 1.0.0 of the dataset. Additional Information ---------------------- ### Licensing Information The OASum Dataset version 1.0.0 is released under the CC-BY-SA-3.0 License
[ "### Data Instances\n\n\nFor each instance, there is a list of strings for the document, a list of strings for the summary, a string for the document title, a string for the aspect and a list of indices for the sentences in the corresponding section.\n\n\nThe average token count for the articles and the highlights are provided below:", "### Data Fields\n\n\n* 'title': a string, containing the original Wikipedia title.\n* 'document': a list of sentences, containing the original content in the Wikipedia sections except the first abstract section.\n* 'aspect': a string, containing the section name and its parent section names.\n* 'aspect\\_sents': a list of indices, representing the sentences in the 'aspect' section.\n* 'summary': a list of sentences, the corresponding aspect-based summary for the document.", "### Data Splits\n\n\nThe OASum dataset has 3 splits: *train*, *valid*, and *test*. Below are the statistics for the Version 1.0.0 of the dataset.\n\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nThe OASum Dataset version 1.0.0 is released under the CC-BY-SA-3.0 License" ]
[ "TAGS\n#task_categories-summarization #size_categories-1M<n<10M #language-English #license-cc-by-sa-3.0 #summarization #Wikipedia #arxiv-2212.09233 #region-us \n", "### Data Instances\n\n\nFor each instance, there is a list of strings for the document, a list of strings for the summary, a string for the document title, a string for the aspect and a list of indices for the sentences in the corresponding section.\n\n\nThe average token count for the articles and the highlights are provided below:", "### Data Fields\n\n\n* 'title': a string, containing the original Wikipedia title.\n* 'document': a list of sentences, containing the original content in the Wikipedia sections except the first abstract section.\n* 'aspect': a string, containing the section name and its parent section names.\n* 'aspect\\_sents': a list of indices, representing the sentences in the 'aspect' section.\n* 'summary': a list of sentences, the corresponding aspect-based summary for the document.", "### Data Splits\n\n\nThe OASum dataset has 3 splits: *train*, *valid*, and *test*. Below are the statistics for the Version 1.0.0 of the dataset.\n\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nThe OASum Dataset version 1.0.0 is released under the CC-BY-SA-3.0 License" ]
8c3ad1482e60300da2a0204fc194a4bf6283202d
# Dpin Style Embedding / Textual Inversion <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/dpin_style/resolve/main/dpin_showcase.png"/> ## Usage To use this embedding you have to download the file aswell as drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"dpin_style"``` Personally, I would recommend to use my embeddings with a strength of 0.8, like ```"(dpin_style:0.8)"``` I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/dpin_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2022-12-27T22:53:41+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/dpin_style/resolve/main/dpin_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2022-12-27T23:01:15+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
# Dpin Style Embedding / Textual Inversion <img alt="Showcase" src="URL ## Usage To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: Personally, I would recommend using my embeddings with a strength of 0.8, like I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) Please read the full license here
[ "# Dpin Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file and drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend using my embeddings with a strength of 0.8, like \n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n", "# Dpin Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file and drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend using my embeddings with a strength of 0.8, like \n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
79de22f57931bf49c1c0b5890d0f713f513de5b8
# Hurybone Style Embedding / Textual Inversion <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/hurybone_style/resolve/main/hurybone_showcase.png"/> ## Usage To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"hurybone_style"``` Personally, I would recommend using my embeddings with a strength of 0.8, like ```"(hurybone_style:0.8)"``` I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/hurybone_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2022-12-27T22:53:49+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/hurybone_style/resolve/main/hurybone_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2022-12-27T22:59:20+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
# Hurybone Style Embedding / Textual Inversion <img alt="Showcase" src="URL ## Usage To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: Personally, I would recommend using my embeddings with a strength of 0.8, like I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) Please read the full license here
[ "# Hurybone Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file and drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend using my embeddings with a strength of 0.8, like \n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n", "# Hurybone Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file and drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend using my embeddings with a strength of 0.8, like \n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
404a26a5d24473d6c5fad7ec3da6cdea22eda285
# Iskou Style Embedding / Textual Inversion <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/iskou_style/resolve/main/iskou_showcase.png"/> ## Usage To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"iskou_style"``` Personally, I would recommend using my embeddings with a strength of 0.8, like ```"(iskou_style:0.8)"``` I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/iskou_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2022-12-27T22:53:57+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/iskou_style/resolve/main/iskou_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2022-12-27T23:00:25+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
# Iskou Style Embedding / Textual Inversion <img alt="Showcase" src="URL ## Usage To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: Personally, I would recommend using my embeddings with a strength of 0.8, like I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) Please read the full license here
[ "# Iskou Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file and drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend using my embeddings with a strength of 0.8, like \n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n", "# Iskou Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file and drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend using my embeddings with a strength of 0.8, like \n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
073c1e5ecb6d0df09108909d20deba6fe5e8adf4
# Saska Style Embedding / Textual Inversion <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/saska_style/resolve/main/saska_showcase.png"/> ## Usage To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"saska_style"``` Personally, I would recommend using my embeddings with a strength of 0.8, like ```"(saska_style:0.8)"``` I trained the embedding for two epochs, up to 8000 steps. I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/saska_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2022-12-27T22:54:04+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/saska_style/resolve/main/saska_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2022-12-27T22:58:22+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
# Saska Style Embedding / Textual Inversion <img alt="Showcase" src="URL ## Usage To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: Personally, I would recommend using my embeddings with a strength of 0.8, like I trained the embedding for two epochs, up to 8000 steps. I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) Please read the full license here
[ "# Saska Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file and drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend using my embeddings with a strength of 0.8, like \n\nI trained the embedding for two epochs, up to 8000 steps.\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n", "# Saska Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file and drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend using my embeddings with a strength of 0.8, like \n\nI trained the embedding for two epochs, up to 8000 steps.\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
d431d825eab9fd83356bc9aa98db08c58e902006
# Star Style Embedding / Textual Inversion <img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/star_style/resolve/main/star_showcase.png"/> ## Usage To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: ```"star_style"``` Personally, I would recommend using my embeddings with a strength of 0.8, like ```"(star_style:0.8)"``` This embedding can be used for characters as well! Just use it with a strength of 0.6 or less! I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
Nerfgun3/star_style
[ "language:en", "license:creativeml-openrail-m", "stable-diffusion", "text-to-image", "image-to-image", "region:us" ]
2022-12-27T22:54:12+00:00
{"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/star_style/resolve/main/star_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false}
2022-12-27T22:57:17+00:00
[]
[ "en" ]
TAGS #language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
# Star Style Embedding / Textual Inversion <img alt="Showcase" src="URL ## Usage To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder To use it in a prompt: Personally, I would recommend using my embeddings with a strength of 0.8, like This embedding can be used for characters as well! Just use it with a strength of 0.6 or less! I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508" ## License This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license 3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully) Please read the full license here
[ "# Star Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file and drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend using my embeddings with a strength of 0.8, like \n\nThis embedding can be used for characters as well! Just use it with a strength of 0.6 or less!\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
[ "TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n", "# Star Style Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL", "## Usage\n\nTo use this embedding you have to download the file and drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend using my embeddings with a strength of 0.8, like \n\nThis embedding can be used for characters as well! Just use it with a strength of 0.6 or less!\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"", "## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content \n2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)\nPlease read the full license here" ]
0b0848e5cc8d2b0c180ad4de151c6450f84183ab
# Dataset Card for "4096_filtered_base_code_review" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Dahoas/4096_filtered_base_code_review
[ "region:us" ]
2022-12-27T23:47:12+00:00
{"dataset_info": {"features": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "answers", "list": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "meta_data", "struct": [{"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "ParentId", "dtype": "string"}, {"name": "Score", "dtype": "string"}]}]}, {"name": "meta_data", "struct": [{"name": "AcceptedAnswerId", "dtype": "string"}, {"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "Tags", "sequence": "string"}, {"name": "Title", "dtype": "string"}]}, {"name": "question_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 206395804, "num_examples": 37026}], "download_size": 106795288, "dataset_size": 206395804}}
2022-12-28T00:22:34+00:00
[]
[]
TAGS #region-us
# Dataset Card for "4096_filtered_base_code_review" More Information needed
[ "# Dataset Card for \"4096_filtered_base_code_review\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"4096_filtered_base_code_review\"\n\nMore Information needed" ]
5f248d88da38b4a226c76f01aee81bebaac75632
A collection of emulated 2D noisy images, provided as clean/noisy image pairs. <br> Resolution: 256 x 256 x 1 <br> Octaves: 4 <br> Weight: 30
SinonTM/Synth-Nav
[ "task_categories:feature-extraction", "annotations_creators:machine-generated", "language_creators:machine-generated", "size_categories:10K<n<100K", "source_datasets:original", "license:gpl-3.0", "region:us" ]
2022-12-28T01:11:04+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": [], "license": ["gpl-3.0"], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["feature-extraction"], "task_ids": [], "pretty_name": "GNGIDS", "tags": []}
2023-01-30T19:45:23+00:00
[]
[]
TAGS #task_categories-feature-extraction #annotations_creators-machine-generated #language_creators-machine-generated #size_categories-10K<n<100K #source_datasets-original #license-gpl-3.0 #region-us
A collection of emulated 2D noisy images, provided as clean/noisy image pairs. <br> Resolution: 256 x 256 x 1 <br> Octaves: 4 <br> Weight: 30
[]
[ "TAGS\n#task_categories-feature-extraction #annotations_creators-machine-generated #language_creators-machine-generated #size_categories-10K<n<100K #source_datasets-original #license-gpl-3.0 #region-us \n" ]
dee66c8281e8162aae3f854083cb2c1e21f069e7
# Dataset Card for "2048_has_code_filtered_base_code_review" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Dahoas/2048_has_code_filtered_base_code_review
[ "region:us" ]
2022-12-28T02:43:32+00:00
{"dataset_info": {"features": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "answers", "list": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "meta_data", "struct": [{"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "ParentId", "dtype": "string"}, {"name": "Score", "dtype": "string"}]}]}, {"name": "meta_data", "struct": [{"name": "AcceptedAnswerId", "dtype": "string"}, {"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "Tags", "sequence": "string"}, {"name": "Title", "dtype": "string"}]}, {"name": "question_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 168922714, "num_examples": 30898}], "download_size": 87127135, "dataset_size": 168922714}}
2022-12-28T16:32:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "2048_has_code_filtered_base_code_review" More Information needed
[ "# Dataset Card for \"2048_has_code_filtered_base_code_review\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"2048_has_code_filtered_base_code_review\"\n\nMore Information needed" ]
5545141baf0d57257cb2032ecb1040ebbba058c9
alpaco_4
com0040/ai-hub_sum
[ "region:us" ]
2022-12-28T03:14:29+00:00
{}
2022-12-28T04:33:26+00:00
[]
[]
TAGS #region-us
alpaco_4
[]
[ "TAGS\n#region-us \n" ]
b8f7d168b6f4e95b2a92e84768bd6c955bed2f29
# Dataset Card for Summarize from Feedback ## Dataset Description In the [Learning to Summarize from Human Feedback paper](https://arxiv.org/abs/2009.01325), a reward model was trained from human feedback. The reward model was then used to train a summarization model to align with human preferences. This is the dataset of human feedback that was released for reward modelling. There are two parts to this dataset: `comparisons` and `axis`. In the `comparisons` part, human annotators were asked to choose the better of two summaries. In the `axis` part, human annotators gave scores on a Likert scale for the quality of a summary. The `comparisons` part only has a train and validation split, and the `axis` part only has a test and validation split. The summaries used for training the reward model in the paper come from the TL;DR dataset. Additional validation and test data come from the TL;DR dataset, CNN articles, and Daily Mail articles. For more information, see the repo [here](https://github.com/openai/summarize-from-feedback#human-feedback-data). ## Citation Information [https://arxiv.org/abs/2009.01325](https://arxiv.org/abs/2009.01325) ``` @inproceedings{stienon2020learning, author = {Nisan Stiennon and Long Ouyang and Jeff Wu and Daniel M. Ziegler and Ryan Lowe and Chelsea Voss and Alec Radford and Dario Amodei and Paul Christiano}, title = {Learning to summarize from human feedback}, booktitle = {NeurIPS}, year = 2020, } ``` Dataset added to the Hugging Face Hub with help from [@Tristan](https://huggingface.co/Tristan)
openai/summarize_from_feedback
[ "arxiv:2009.01325", "region:us" ]
2022-12-28T03:42:47+00:00
{"pretty_name": "Summarize from Feedback"}
2023-01-03T16:55:41+00:00
[ "2009.01325" ]
[]
TAGS #arxiv-2009.01325 #region-us
# Dataset Card for Summarize from Feedback ## Dataset Description In the Learning to Summarize from Human Feedback paper, a reward model was trained from human feedback. The reward model was then used to train a summarization model to align with human preferences. This is the dataset of human feedback that was released for reward modelling. There are two parts to this dataset: 'comparisons' and 'axis'. In the 'comparisons' part, human annotators were asked to choose the better of two summaries. In the 'axis' part, human annotators gave scores on a Likert scale for the quality of a summary. The 'comparisons' part only has a train and validation split, and the 'axis' part only has a test and validation split. The summaries used for training the reward model in the paper come from the TL;DR dataset. Additional validation and test data come from the TL;DR dataset, CNN articles, and Daily Mail articles. For more information, see the repo here. URL Dataset added to the Hugging Face Hub with help from @Tristan
[ "# Dataset Card for Summarize from Feedback", "## Dataset Description\n\n\nIn the Learning to Summarize from Human Feedback paper, a reward model was trained from human feedback.\nThe reward model was then used to train a summarization model to align with human preferences. This is the dataset of human feedback that was released for reward modelling.\nThere are two parts to this dataset: 'comparisons' and 'axis'. In the 'comparisons' part, human annotators were asked to choose the better of two summaries.\nIn the 'axis' part, human annotators gave scores on a Likert scale for the quality of a summary.\nThe 'comparisons' part only has a train and validation split, and the 'axis' part only has a test and validation split.\n\nThe summaries used for training the reward model in the paper come from the TL;DR dataset.\nAdditional validation and test data come from the TL;DR dataset, CNN articles, and Daily Mail articles.\n\nFor more information, see the repo here.\n\n\n\nURL\n\n\n\nDataset added to the Hugging Face Hub with help from @Tristan" ]
[ "TAGS\n#arxiv-2009.01325 #region-us \n", "# Dataset Card for Summarize from Feedback", "## Dataset Description\n\n\nIn the Learning to Summarize from Human Feedback paper, a reward model was trained from human feedback.\nThe reward model was then used to train a summarization model to align with human preferences. This is the dataset of human feedback that was released for reward modelling.\nThere are two parts to this dataset: 'comparisons' and 'axis'. In the 'comparisons' part, human annotators were asked to choose the better of two summaries.\nIn the 'axis' part, human annotators gave scores on a Likert scale for the quality of a summary.\nThe 'comparisons' part only has a train and validation split, and the 'axis' part only has a test and validation split.\n\nThe summaries used for training the reward model in the paper come from the TL;DR dataset.\nAdditional validation and test data come from the TL;DR dataset, CNN articles, and Daily Mail articles.\n\nFor more information, see the repo here.\n\n\n\nURL\n\n\n\nDataset added to the Hugging Face Hub with help from @Tristan" ]
18722059217813fd636c2e2f4b3cc6a508ab47fd
# Dataset Card for "dreambooth-hackathon-images-srkman" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Xhaheen/dreambooth-hackathon-images-srkman
[ "region:us" ]
2022-12-28T03:57:08+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 4082680.0, "num_examples": 20}], "download_size": 4081453, "dataset_size": 4082680.0}}
2022-12-28T03:57:13+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dreambooth-hackathon-images-srkman" More Information needed
[ "# Dataset Card for \"dreambooth-hackathon-images-srkman\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dreambooth-hackathon-images-srkman\"\n\nMore Information needed" ]
8e08ea0dda44a5c942164865f3c2fc10f0e476ab
<div align="center"> <img width="640" alt="keremberke/valorant-object-detection" src="https://huggingface.co/datasets/keremberke/valorant-object-detection/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['dropped spike', 'enemy', 'planted spike', 'teammate'] ``` ### Number of Images ```json {'valid': 1983, 'train': 6927, 'test': 988} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("keremberke/valorant-object-detection", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp/dataset/3](https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp/dataset/3?ref=roboflow2huggingface) ### Citation ``` @misc{ valorant-9ufcp_dataset, title = { valorant Dataset }, type = { Open Source Dataset }, author = { Daniels Magonis }, howpublished = { \\url{ https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp } }, url = { https://universe.roboflow.com/daniels-magonis-0pjzx/valorant-9ufcp }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-01-27 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.com on December 22, 2022 at 5:10 PM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time It includes 9898 images. Planted are annotated in COCO format. The following pre-processing was applied to each image: * Resize to 416x416 (Stretch) No image augmentation techniques were applied.
keremberke/valorant-object-detection
[ "task_categories:object-detection", "roboflow", "roboflow2huggingface", "region:us" ]
2022-12-28T05:41:05+00:00
{"task_categories": ["object-detection"], "tags": ["roboflow", "roboflow2huggingface"]}
2023-01-27T13:45:00+00:00
[]
[]
TAGS #task_categories-object-detection #roboflow #roboflow2huggingface #region-us
<div align="center"> <img width="640" alt="keremberke/valorant-object-detection" src="URL </div> ### Dataset Labels ### Number of Images ### How to Use - Install datasets: - Load the dataset: ### Roboflow Dataset Page URL ### License CC BY 4.0 ### Dataset Summary This dataset was exported via URL on December 22, 2022 at 5:10 PM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time It includes 9898 images. Planted are annotated in COCO format. The following pre-processing was applied to each image: * Resize to 416x416 (Stretch) No image augmentation techniques were applied.
[ "### Dataset Labels", "### Number of Images", "### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:", "### Roboflow Dataset Page\nURL", "### License\nCC BY 4.0", "### Dataset Summary\nThis dataset was exported via URL on December 22, 2022 at 5:10 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 9898 images.\nPlanted are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Resize to 416x416 (Stretch)\n\nNo image augmentation techniques were applied." ]
[ "TAGS\n#task_categories-object-detection #roboflow #roboflow2huggingface #region-us \n", "### Dataset Labels", "### Number of Images", "### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:", "### Roboflow Dataset Page\nURL", "### License\nCC BY 4.0", "### Dataset Summary\nThis dataset was exported via URL on December 22, 2022 at 5:10 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 9898 images.\nPlanted are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Resize to 416x416 (Stretch)\n\nNo image augmentation techniques were applied." ]
0c8e46cbfe8edf71e592f495face94ba22155b46
### Roboflow Dataset Page https://universe.roboflow.com/ashish-cuamw/test-y7rj3 ### Citation ``` @misc{ test-y7rj3_dataset, title = { test Dataset }, type = { Open Source Dataset }, author = { ashish }, howpublished = { \\url{ https://universe.roboflow.com/ashish-cuamw/test-y7rj3 } }, url = { https://universe.roboflow.com/ashish-cuamw/test-y7rj3 }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { oct }, note = { visited on 2022-12-28 }, } ``` ### License CC BY 4.0 ### Dataset Summary This dataset was exported via roboflow.com on December 26, 2022 at 10:13 PM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time It includes 4666 images. T are annotated in COCO format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) * Resize to 416x416 (Stretch) No image augmentation techniques were applied.
fcakyon/gun-object-detection
[ "task_categories:object-detection", "roboflow", "region:us" ]
2022-12-28T06:20:48+00:00
{"task_categories": ["object-detection"], "tags": ["roboflow"]}
2022-12-28T06:22:36+00:00
[]
[]
TAGS #task_categories-object-detection #roboflow #region-us
### Roboflow Dataset Page URL ### License CC BY 4.0 ### Dataset Summary This dataset was exported via URL on December 26, 2022 at 10:13 PM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time It includes 4666 images. T are annotated in COCO format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) * Resize to 416x416 (Stretch) No image augmentation techniques were applied.
[ "### Roboflow Dataset Page\nURL", "### License\nCC BY 4.0", "### Dataset Summary\nThis dataset was exported via URL on December 26, 2022 at 10:13 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 4666 images.\nT are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 416x416 (Stretch)\n\nNo image augmentation techniques were applied." ]
[ "TAGS\n#task_categories-object-detection #roboflow #region-us \n", "### Roboflow Dataset Page\nURL", "### License\nCC BY 4.0", "### Dataset Summary\nThis dataset was exported via URL on December 26, 2022 at 10:13 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 4666 images.\nT are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 416x416 (Stretch)\n\nNo image augmentation techniques were applied." ]
57f666aba71e625f54419982f4e0fadb670a5be6
# Dataset Card for "beats" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
taejunkim/beats
[ "region:us" ]
2022-12-28T06:50:28+00:00
{"dataset_info": {"features": [{"name": "mix_id", "dtype": "string"}, {"name": "beats", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 1479883, "num_examples": 13}], "download_size": 1119868, "dataset_size": 1479883}}
2022-12-28T06:50:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "beats" More Information needed
[ "# Dataset Card for \"beats\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"beats\"\n\nMore Information needed" ]