slug (string, length 15) | content (list, length 1-129) | rawContent (string, length 1-2k) | author (dict) | attachments (list, length 0-49) | mentions (list, length 0-49) | reactions (list, length 0-12) | publishedAt (string, length 24) | updatedAt (string, length 24) | commentators (list, length 0-52) | url (string, length 25-46) | totalUniqueImpressions (int64, 1-42.1k, ⌀ = null) | numComments (int64, 0-621)
---|---|---|---|---|---|---|---|---|---|---|---|---
990629720161457 | [
{
"type": "text",
"value": "Today I'm releasing marginalia, a Python library to perform corpus analysis and retrieve structured annotations with open LLMs like Mistral Open-Hermes-2.5: ",
"raw": "Today I'm releasing marginalia, a Python library to perform corpus analysis and retrieve structured annotations with open LLMs like Mistral Open-Hermes-2.5: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/Pleias/marginalia",
"href": "https://github.com/Pleias/marginalia",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "marginalia leverages vLLM's inference speed to regenerate outputs until they all match an expected JSON structure, and to send batches of several unstructured elements for enhanced pattern detection. It works especially well for bibliographies. The demo transforms a very old list (Benjamin Franklin's favorite books from 1744) into well-structured data: ",
"raw": "marginalia leverages vLLM's inference speed to regenerate outputs until they all match an expected JSON structure, and to send batches of several unstructured elements for enhanced pattern detection. It works especially well for bibliographies. The demo transforms a very old list (Benjamin Franklin's favorite books from 1744) into well-structured data: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://colab.research.google.com/drive/1xKjK2mDDpXMaKG5YLpFhOM7jehxt0kEt?usp=sharing",
"href": "https://colab.research.google.com/drive/1xKjK2mDDpXMaKG5YLpFhOM7jehxt0kEt?usp=sharing",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "While marginalia can be quite flexible, it definitely isn't a general-purpose tool for JSON generation (like outlines). So far I don't intend to extend support to more complex JSON structures, but I'm really looking forward to potential feedback and suggestions.",
"raw": "While marginalia can be quite flexible, it definitely isn't a general-purpose tool for JSON generation (like outlines). So far I don't intend to extend support to more complex JSON structures, but I'm really looking forward to potential feedback and suggestions.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Today I'm releasing marginalia, a Python library to perform corpus analysis and retrieve structured annotations with open LLMs like Mistral Open-Hermes-2.5: https://github.com/Pleias/marginalia
marginalia leverages vLLM's inference speed to regenerate outputs until they all match an expected JSON structure, and to send batches of several unstructured elements for enhanced pattern detection. It works especially well for bibliographies. The demo transforms a very old list (Benjamin Franklin's favorite books from 1744) into well-structured data: https://colab.research.google.com/drive/1xKjK2mDDpXMaKG5YLpFhOM7jehxt0kEt?usp=sharing
While marginalia can be quite flexible, it definitely isn't a general-purpose tool for JSON generation (like outlines). So far I don't intend to extend support to more complex JSON structures, but I'm really looking forward to potential feedback and suggestions.
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ce091a9e9ca8123d7a42b0/OEPggp82RwigxNLL35LgT.jpeg",
"fullname": "Pierre-Carl Langlais",
"name": "Pclanglais",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 191,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64ce091a9e9ca8123d7a42b0/EITi6y2L6q5n0Wvqg1Gz6.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64ce091a9e9ca8123d7a42b0/29eg3UzF1tMDa-D0KAInk.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"davanstrien",
"osanseviero",
"merve",
"AIIAR",
"samusenps",
"gate369",
"macadeliccc",
"clem",
"dillfrescott"
],
"count": 9
},
{
"reaction": "👍",
"users": [
"esilabet"
],
"count": 1
}
] | 2024-02-12T19:24:45.000Z | 2024-02-12T19:24:45.433Z | [] | /posts/Pclanglais/990629720161457 | 422 | 0 |
884256831552573 | [
{
"type": "text",
"value": "Google released a paper on Chess that doesn't rely on MCTS (aka AlphaZero) ♟️ ",
"raw": "Google released a paper on Chess that doesn't rely on MCTS (aka AlphaZero) ♟️ ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Their secret sauce is... synthetic data pseudolabeled by the Stockfish engine 😀 ",
"raw": "Their secret sauce is... synthetic data pseudolabeled by the Stockfish engine 😀 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2024 really is the year of synthetic data across all domains!",
"raw": "2024 really is the year of synthetic data across all domains!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "There's a nice discussion here, join us ",
"raw": "There's a nice discussion here, join us ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.04494",
"href": null,
"resource": {
"type": "paper",
"id": "2402.04494",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.04494",
"code": null,
"user": null,
"label": "Grandmaster-Level Chess Without Search (2402.04494)",
"lang": null
}
] | Google released a paper on Chess that doesn't rely on MCTS (aka AlphaZero) ♟️
Their secret sauce is... synthetic data pseudolabeled by the Stockfish engine 😀
2024 really is the year of synthetic data across all domains!
There's a nice discussion here, join us https://huggingface.co/papers/2402.04494 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5589,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"alielfilali01",
"osanseviero",
"AIIAR",
"samusenps",
"nickandbro",
"Citaman",
"clem",
"cosmojg"
],
"count": 8
}
] | 2024-02-12T18:35:07.000Z | 2024-02-12T22:43:54.161Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64df20dc22d604b137270864/C-1_EzY0tnrb-Cyn6lh93.jpeg",
"fullname": "TA",
"name": "AIIAR",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
}
] | /posts/merve/884256831552573 | 67 | 2 |
680660181190026 | [
{
"type": "text",
"value": "🤗 Data is better together!",
"raw": "🤗 Data is better together!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Data is essential for training good AI systems. We believe that the amazing community built around open machine learning can also work on developing amazing datasets together. ",
"raw": "Data is essential for training good AI systems. We believe that the amazing community built around open machine learning can also work on developing amazing datasets together. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "To explore how this can be done, Argilla and Hugging Face are thrilled to announce a collaborative project where we’re asking Hugging Face community members to build a dataset consisting of LLM prompts collectively. ",
"raw": "To explore how this can be done, Argilla and Hugging Face are thrilled to announce a collaborative project where we’re asking Hugging Face community members to build a dataset consisting of LLM prompts collectively. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "What are we doing? ",
"raw": "What are we doing? ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Using an instance of Argilla — a powerful open-source data collaboration tool — hosted on the Hugging Face Hub, we are collecting ratings of prompts based on their quality. ",
"raw": "Using an instance of Argilla — a powerful open-source data collaboration tool — hosted on the Hugging Face Hub, we are collecting ratings of prompts based on their quality. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "How Can You Contribute?",
"raw": "How Can You Contribute?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "It’s super simple to start contributing:",
"raw": "It’s super simple to start contributing:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "1. Sign up if you don’t have a Hugging Face account",
"raw": "1. Sign up if you don’t have a Hugging Face account",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2. Go to this Argilla Space and sign in: ",
"raw": "2. Go to this Argilla Space and sign in: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/spaces/DIBT/prompt-collective",
"href": "https://huggingface.co/spaces/DIBT/prompt-collective",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "3. Read the guidelines and start rating prompts! ",
"raw": "3. Read the guidelines and start rating prompts! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "You can also join the #data-is-better-together channel in the Hugging Face Discord. ",
"raw": "You can also join the #data-is-better-together channel in the Hugging Face Discord. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Finally, to track the community's progress, we'll be updating this Gradio dashboard:",
"raw": "Finally, to track the community's progress, we'll be updating this Gradio dashboard:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/spaces/DIBT/prompt-collective-dashboard",
"href": "https://huggingface.co/spaces/DIBT/prompt-collective-dashboard",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🤗 Data is better together!
Data is essential for training good AI systems. We believe that the amazing community built around open machine learning can also work on developing amazing datasets together.
To explore how this can be done, Argilla and Hugging Face are thrilled to announce a collaborative project where we’re asking Hugging Face community members to build a dataset consisting of LLM prompts collectively.
What are we doing?
Using an instance of Argilla — a powerful open-source data collaboration tool — hosted on the Hugging Face Hub, we are collecting ratings of prompts based on their quality.
How Can You Contribute?
It’s super simple to start contributing:
1. Sign up if you don’t have a Hugging Face account
2. Go to this Argilla Space and sign in: https://huggingface.co/spaces/DIBT/prompt-collective
3. Read the guidelines and start rating prompts!
You can also join the #data-is-better-together channel in the Hugging Face Discord.
Finally, to track the community's progress, we'll be updating this Gradio dashboard:
https://huggingface.co/spaces/DIBT/prompt-collective-dashboard
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60420dccc15e823a685f2b03/Dn7QTyy9SZ7jKN6xpufVD.png",
"fullname": "Daniel Vila",
"name": "dvilasuero",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 231,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/AjB8zoTKujO-RZ1nQaVBz.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"davanstrien",
"gabrielmbmb",
"osanseviero",
"ignacioct",
"sdiazlor",
"plaguss",
"julien-c",
"tomaarsen",
"nickandbro",
"victor",
"lunarflu",
"clem",
"frascuchon",
"abbanbhan",
"lewtun",
"merve",
"samusenps",
"MoritzLaurer",
"johko",
"lhoestq",
"smangrul",
"sayhan",
"medmac01",
"maghwa"
],
"count": 24
},
{
"reaction": "🤗",
"users": [
"davanstrien",
"gabrielmbmb",
"osanseviero",
"kramp",
"ignacioct",
"sdiazlor",
"julien-c",
"victor",
"severo",
"clem",
"mvaloatto",
"lewtun",
"samusenps",
"chkla",
"smangrul",
"Muttermal"
],
"count": 16
},
{
"reaction": "🤯",
"users": [
"davanstrien",
"osanseviero",
"sdiazlor",
"julien-c",
"victor",
"clem",
"smangrul"
],
"count": 7
}
] | 2024-02-12T16:04:51.000Z | 2024-03-07T03:08:28.596Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60f2fc91b92afccb7c34b8ed/whF6nGtyTAhbtiWJJnL9e.png",
"fullname": "Gabriel Martín Blázquez",
"name": "gabrielmbmb",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 90,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"fullname": "Daniel van Strien",
"name": "davanstrien",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 410,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594192845975-5e1e17b6fcf41d740b6996a8.jpeg",
"fullname": "Bram Vanroy",
"name": "BramVanroy",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 173,
"isFollowing": false
},
{
"avatarUrl": "/avatars/83fa7f317d72ed33dd23b4ec6e655fab.svg",
"fullname": "mariana",
"name": "Msignal",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/dvilasuero/680660181190026 | 76 | 5 |
762733004098612 | [
{
"type": "text",
"value": "Created a collection with all the LLM hallucination and evaluation papers I've been reading over the last few weeks 📄📄",
"raw": "Created a collection with all the LLM hallucination and evaluation papers I've been reading over the last few weeks 📄📄",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/santiviquez/llm-hallucination-detection-papers-65c4d2399096960aa80776d3",
"href": null,
"resource": {
"type": "collection",
"id": "santiviquez/llm-hallucination-detection-papers-65c4d2399096960aa80776d3",
"discussionNum": null
},
"url": "https://huggingface.co/collections/santiviquez/llm-hallucination-detection-papers-65c4d2399096960aa80776d3",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Which one should I add to the list?",
"raw": "Which one should I add to the list?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Created a collection with all the LLM hallucination and evaluation papers I've been reading over the last few weeks 📄📄
https://huggingface.co/collections/santiviquez/llm-hallucination-detection-papers-65c4d2399096960aa80776d3
Which one should I add to the list? | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg",
"fullname": "Santiago Viquez",
"name": "santiviquez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"clem",
"gsarti",
"merve",
"samusenps",
"alielfilali01"
],
"count": 6
}
] | 2024-02-12T12:00:44.000Z | 2024-02-12T12:00:44.217Z | [] | /posts/santiviquez/762733004098612 | 5 | 0 |
870119735436123 | [
{
"type": "text",
"value": "🔍 Today's pick in Interpretability & Analysis of LMs: Model Editing with Canonical Examples by ",
"raw": "🔍 Today's pick in Interpretability & Analysis of LMs: Model Editing with Canonical Examples by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@johnhew",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "johnhew",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@sachen",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "sachen",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@lora-x",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "lora-x",
"label": null,
"lang": null
},
{
"type": "text",
"value": " E. Adams P. Jiang ",
"raw": " E. Adams P. Jiang ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@manning",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "manning",
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This work introduces a model editing approach that uses individual “canonical” examples to showcase desired or unwanted behavior. An evaluation is then conducted on out-of-distribution samples spanning six datasets (3 introduced in this work) covering settings of interest in bias mitigation, hard syntactic constructions, and knowledge-based predictions, while limiting the degradation of the original model’s loss.",
"raw": "This work introduces a model editing approach that uses individual “canonical” examples to showcase desired or unwanted behavior. An evaluation is then conducted on out-of-distribution samples spanning six datasets (3 introduced in this work) covering settings of interest in bias mitigation, hard syntactic constructions, and knowledge-based predictions, while limiting the degradation of the original model’s loss.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The authors experiment with Pythia LMs, finding that LoRA fine-tuning on canonical examples outperforms other established editing methods such as MEMIT.",
"raw": "The authors experiment with Pythia LMs, finding that LoRA fine-tuning on canonical examples outperforms other established editing methods such as MEMIT.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Then, the approach is tested on Backpack LMs, using a linear combination of sense vectors to disentangle semantic information in the input texts. In particular, the authors introduce “sense fine-tuning”, where only a handful of sense vectors are updated per example, which is shown to be both more efficient and more effective than regular fine-tuning. ",
"raw": "Then, the approach is tested on Backpack LMs, using a linear combination of sense vectors to disentangle semantic information in the input texts. In particular, the authors introduce “sense fine-tuning”, where only a handful of sense vectors are updated per example, which is shown to be both more efficient and more effective than regular fine-tuning. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Finally, the relation between the predictions of pre- and post-sense fine-tuning backpack LMs is used to successfully transfer the desired adaptation to a larger standard LM, at no performance cost.",
"raw": "Finally, the relation between the predictions of pre- and post-sense fine-tuning backpack LMs is used to successfully transfer the desired adaptation to a larger standard LM, at no performance cost.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📄 Paper: ",
"raw": "📄 Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.06155",
"href": null,
"resource": {
"type": "paper",
"id": "2402.06155",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.06155",
"code": null,
"user": null,
"label": "Model Editing with Canonical Examples (2402.06155)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔍 All daily picks in LM interpretability: ",
"raw": "🔍 All daily picks in LM interpretability: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9",
"href": null,
"resource": {
"type": "collection",
"id": "gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9",
"discussionNum": null
},
"url": "https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9",
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔍 Today's pick in Interpretability & Analysis of LMs: Model Editing with Canonical Examples by @johnhew @sachen @lora-x E. Adams P. Jiang @manning
This work introduces a model editing approach that uses individual “canonical” examples to showcase desired or unwanted behavior. An evaluation is then conducted on out-of-distribution samples spanning six datasets (3 introduced in this work) covering settings of interest in bias mitigation, hard syntactic constructions, and knowledge-based predictions, while limiting the degradation of the original model’s loss.
The authors experiment with Pythia LMs, finding that LoRA fine-tuning on canonical examples outperforms other established editing methods such as MEMIT.
Then, the approach is tested on Backpack LMs, using a linear combination of sense vectors to disentangle semantic information in the input texts. In particular, the authors introduce “sense fine-tuning”, where only a handful of sense vectors are updated per example, which is shown to be both more efficient and more effective than regular fine-tuning.
Finally, the relation between the predictions of pre- and post-sense fine-tuning backpack LMs is used to successfully transfer the desired adaptation to a larger standard LM, at no performance cost.
📄 Paper: https://huggingface.co/papers/2402.06155
🔍 All daily picks in LM interpretability: https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/2M6dvSPaI69QzZywyCTaB.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/M-Fr8vJq9KrBC-9aDTWTs.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/Z6rsX49qrXYhPtdKW5lW1.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/QPCRraySl9zTnf7eG5ZJk.png",
"fullname": "John Hewitt",
"name": "johnhew",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/JxO8r2I_EutismZQxBS76.jpeg",
"fullname": "Lora Xie",
"name": "lora-x",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1628695407761-5f555a1f8bf55658acfed19e.jpeg",
"fullname": "Christopher Manning",
"name": "manning",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 15
},
{
"avatarUrl": "/avatars/463088bdcd6d256528e76257b7185d33.svg",
"fullname": "Sarah Chen",
"name": "sachen",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null
}
] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"clem",
"merve",
"hiyouga",
"sayhan"
],
"count": 5
},
{
"reaction": "👍",
"users": [
"jsteward2930"
],
"count": 1
}
] | 2024-02-12T09:41:56.000Z | 2024-02-12T09:41:56.280Z | [] | /posts/gsarti/870119735436123 | 11 | 0 |
155064052664846 | [
{
"type": "text",
"value": "3 hours between the two pictures 🔥",
"raw": "3 hours between the two pictures 🔥",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Finally that paranoid inside me got some rest 😂",
"raw": "Finally that paranoid inside me got some rest 😂",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 3 hours between the two pictures 🔥
Finally that paranoid inside me got some rest 😂 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/626237d9bbcbd1c34f1bb231/3ZyB662Ki7yWNriyWhjCS.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/626237d9bbcbd1c34f1bb231/ZnpbTWRZKFCBpIcsg3OGH.jpeg"
}
] | [] | [
{
"reaction": "🤗",
"users": [
"NeuralNovel",
"Skier8402",
"Wauplin",
"tomaarsen",
"osanseviero",
"clem",
"merve",
"macadeliccc",
"9voltfan2009"
],
"count": 9
},
{
"reaction": "👍",
"users": [
"Taf2023",
"9voltfan2009"
],
"count": 2
}
] | 2024-02-12T01:56:20.000Z | 2024-02-12T14:50:33.797Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6317233cc92fd6fee317e030/cJHSvvimr1kqgQfHOjO5n.png",
"fullname": "Tom Aarsen",
"name": "tomaarsen",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 1060,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
}
] | /posts/alielfilali01/155064052664846 | 7 | 2 |
811383627678984 | [
{
"type": "text",
"value": "Hear, hear, AMD MI300Xs have started to emerge much sooner than expected. ",
"raw": "Hear, hear, AMD MI300Xs have started to emerge much sooner than expected. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Here is a 2-part benchmarks report on performing BLOOM-176B inference using ",
"raw": "Here is a 2-part benchmarks report on performing BLOOM-176B inference using ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@MSFTDeepSpeed",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "MSFTDeepSpeed",
"label": null,
"lang": null
},
{
"type": "text",
"value": " optimized for AMD MI300X.",
"raw": " optimized for AMD MI300X.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "1. ",
"raw": "1. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://www.evp.cloud/post/diving-deeper-insights-from-our-llm-inference-testing",
"href": "https://www.evp.cloud/post/diving-deeper-insights-from-our-llm-inference-testing",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2. ",
"raw": "2. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://www.evp.cloud/post/diving-deeper-insights-from-our-llm-inference-testing-part-2",
"href": "https://www.evp.cloud/post/diving-deeper-insights-from-our-llm-inference-testing-part-2",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This was published in response to our BLOOM-176B super-fast inference blog post ",
"raw": "This was published in response to our BLOOM-176B super-fast inference blog post ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/bloom-inference-pytorch-scripts",
"href": "https://huggingface.co/blog/bloom-inference-pytorch-scripts",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Note that these have 192GB of HBM!",
"raw": "Note that these have 192GB of HBM!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The NVIDIA monopoly is strong, but it'll have to start sharing the pie and hopefully drive the costs down at least somewhat.",
"raw": "The NVIDIA monopoly is strong, but it'll have to start sharing the pie and hopefully drive the costs down at least somewhat.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Thanks to ",
"raw": "Thanks to ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://www.linkedin.com/in/eliovp",
"href": "https://www.linkedin.com/in/eliovp",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " for sharing this writeup with me.",
"raw": " for sharing this writeup with me.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "p.s. at the PyTorch conference in the fall, the AMD representative said we will see MI300X available to us mortals in Q4-2024/Q1-2025.",
"raw": "p.s. at the PyTorch conference in the fall, the AMD representative said we will see MI300X available to us mortals in Q4-2024/Q1-2025.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Hear, hear, AMD MI300Xs have started to emerge much sooner than expected.
Here is a 2-part benchmarks report on performing BLOOM-176B inference using @MSFTDeepSpeed optimized for AMD MI300X.
1. https://www.evp.cloud/post/diving-deeper-insights-from-our-llm-inference-testing
2. https://www.evp.cloud/post/diving-deeper-insights-from-our-llm-inference-testing-part-2
This was published in response to our BLOOM-176B super-fast inference blog post https://huggingface.co/blog/bloom-inference-pytorch-scripts
Note that these have 192GB of HBM!
The NVIDIA monopoly is strong, but it'll have to start sharing the pie and hopefully drive the costs down at least somewhat.
Thanks to https://www.linkedin.com/in/eliovp for sharing this writeup with me.
p.s. at the PyTorch conference in the fall, the AMD representative said we will see MI300X available to us mortals in Q4-2024/Q1-2025. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594311341799-5f07383b19cb630495b812cd.jpeg",
"fullname": "Stas Bekman",
"name": "stas",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 97,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"julien-c",
"davanstrien",
"tomaarsen",
"VictorSanh",
"clem",
"pcuenq",
"muhtasham",
"merve",
"samusenps",
"Eliovp",
"andrewrreed",
"rwightman",
"clefourrier"
],
"count": 14
}
] | 2024-02-12T00:12:10.000Z | 2024-02-13T08:09:17.321Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5dd96eb166059660ed1ee413/NQtzmrDdbG0H8qkZvRyGk.jpeg",
"fullname": "Julien Chaumond",
"name": "julien-c",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1580,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6317233cc92fd6fee317e030/cJHSvvimr1kqgQfHOjO5n.png",
"fullname": "Tom Aarsen",
"name": "tomaarsen",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 1060,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem 🤗",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1763,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594311341799-5f07383b19cb630495b812cd.jpeg",
"fullname": "Stas Bekman",
"name": "stas",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 97,
"isFollowing": false
},
{
"avatarUrl": "/avatars/bc9d6b4976a2006fd53ce27b55d776d6.svg",
"fullname": "Eliovp",
"name": "Eliovp",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/stas/811383627678984 | 114 | 5 |
262978955052408 | [
{
"type": "text",
"value": "Introducing Remove Background Web: In-browser background removal, powered by ",
"raw": "Introducing Remove Background Web: In-browser background removal, powered by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@briaai",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "briaai",
"label": null,
"lang": null
},
{
"type": "text",
"value": "'s new RMBG-v1.4 model and 🤗 Transformers.js!",
"raw": "'s new RMBG-v1.4 model and 🤗 Transformers.js!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Everything runs 100% locally, meaning none of your images are uploaded to a server! 🤯 At only ~45MB, the 8-bit quantized version of the model is perfect for in-browser usage (it even works on mobile).",
"raw": "Everything runs 100% locally, meaning none of your images are uploaded to a server! 🤯 At only ~45MB, the 8-bit quantized version of the model is perfect for in-browser usage (it even works on mobile).",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Check it out! 👇",
"raw": "Check it out! 👇",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Demo: ",
"raw": "Demo: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/Xenova/remove-background-web",
"href": null,
"resource": {
"type": "space",
"id": "Xenova/remove-background-web",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/Xenova/remove-background-web",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Model: ",
"raw": "Model: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/briaai/RMBG-1.4",
"href": null,
"resource": {
"type": "model",
"id": "briaai/RMBG-1.4",
"discussionNum": null
},
"url": "https://huggingface.co/briaai/RMBG-1.4",
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Introducing Remove Background Web: In-browser background removal, powered by @briaai's new RMBG-v1.4 model and 🤗 Transformers.js!
Everything runs 100% locally, meaning none of your images are uploaded to a server! 🤯 At only ~45MB, the 8-bit quantized version of the model is perfect for in-browser usage (it even works on mobile).
Check it out! 👇
Demo: https://huggingface.co/spaces/Xenova/remove-background-web
Model: https://huggingface.co/briaai/RMBG-1.4 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b253b7ac5ecaae3d1efe0c/hwiQ0uvz3t-L5a-NtBIO6.png",
"fullname": "Joshua",
"name": "Xenova",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 3792,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/r7C5nqsDDMX2Rl0Pz5Y7H.mp4"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/vyajc_DSXbmxrXFY0R9sB.jpeg"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"Felladrin",
"osanseviero",
"not-lain",
"baotrinh",
"samusenps",
"mgfeller",
"munirdin",
"mvaloatto",
"DamarJati",
"AIIAR",
"eskayML",
"OriLib",
"HDiffusion",
"brianjking",
"3ginger",
"ashishtanwer",
"NeuralNovel",
"taufiqdp",
"kramp",
"Waltert",
"enzostvs",
"victor",
"merve",
"nickandbro",
"ecarbo",
"raphaelmansuy",
"Seuriin",
"Renadi",
"rreed-pha",
"Noomam",
"dillfrescott"
],
"count": 31
},
{
"reaction": "🤯",
"users": [
"DamarJati",
"NeuralNovel",
"kramp",
"merve"
],
"count": 4
},
{
"reaction": "👍",
"users": [
"EritreanSammy",
"Tonic",
"XinD"
],
"count": 3
},
{
"reaction": "😔",
"users": [
"hasokeyk"
],
"count": 1
}
] | 2024-02-10T16:31:18.000Z | 2024-08-30T16:43:32.634Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6527e89a8808d80ccff88b7a/CuGNmF1Et8KMQ0mCd1NEJ.jpeg",
"fullname": "Lain",
"name": "not-lain",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 941,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/656fd3c06871b53dba935b99/5LQ4jWAfrKeHDZ5nvqmFH.png",
"fullname": "Devean",
"name": "Devean",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/X7QKoiXbUtEZSG9jyvfk3.jpeg",
"fullname": "Victor Mustar",
"name": "victor",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 2607,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63a1960c931d2c13ad70ac1f/LiPk_YdGflvV1YKNB3uGK.jpeg",
"fullname": "Syed Usama Ahmad",
"name": "syedusama5556",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b253b7ac5ecaae3d1efe0c/hwiQ0uvz3t-L5a-NtBIO6.png",
"fullname": "Joshua",
"name": "Xenova",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 3792,
"isFollowing": false
},
{
"avatarUrl": "/avatars/a0db9dd2e443e17630e137bea72b34eb.svg",
"fullname": "trinh",
"name": "baotrinh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/Xenova/262978955052408 | 1,468 | 9 |
604927309439393 | [
{
"type": "text",
"value": "Introducing UNA-SimpleSmaug-34b:",
"raw": "Introducing UNA-SimpleSmaug-34b:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Based on Smaug-34B-v0.1, capable of slightly outperform his base model and with increased math and reasoning thanks to simple-math dataset.",
"raw": "Based on Smaug-34B-v0.1, capable of slightly outperform his base model and with increased math and reasoning thanks to simple-math dataset.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The model exhibits a great performance across diverse tasks with an excellent and balanced behaviour.",
"raw": "The model exhibits a great performance across diverse tasks with an excellent and balanced behaviour.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "It scores 77.41 AVG on the Leaderboard, landing on #1 Position of 34B models.",
"raw": "It scores 77.41 AVG on the Leaderboard, landing on #1 Position of 34B models.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Available in the hub already:",
"raw": "Available in the hub already:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/fblgit/UNA-SimpleSmaug-34b-v1beta",
"href": null,
"resource": {
"type": "model",
"id": "fblgit/UNA-SimpleSmaug-34b-v1beta",
"discussionNum": null
},
"url": "https://huggingface.co/fblgit/UNA-SimpleSmaug-34b-v1beta",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/fblgit/simple-math",
"href": null,
"resource": {
"type": "dataset",
"id": "fblgit/simple-math",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/fblgit/simple-math",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "In this case, we applied UNA to the Attention Layers of the model while performing SFT with simple-math on a high complexity generated data of mathematics, proving the effect of simple-math on LLM's.",
"raw": "In this case, we applied UNA to the Attention Layers of the model while performing SFT with simple-math on a high complexity generated data of mathematics, proving the effect of simple-math on LLM's.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Introducing UNA-SimpleSmaug-34b:
Based on Smaug-34B-v0.1, capable of slightly outperforming its base model, with increased math and reasoning thanks to the simple-math dataset.
The model exhibits great performance across diverse tasks with excellent and balanced behaviour.
It scores 77.41 AVG on the Leaderboard, landing in the #1 position among 34B models.
Available in the hub already:
https://huggingface.co/fblgit/UNA-SimpleSmaug-34b-v1beta
https://huggingface.co/datasets/fblgit/simple-math
In this case, we applied UNA to the Attention Layers of the model while performing SFT with simple-math on high-complexity generated mathematics data, demonstrating the effect of simple-math on LLMs. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6401c8c9f98fbc64bcd7dca1/MOSgc_mPbfUZ-354osy1v.png",
"fullname": "FBL",
"name": "fblgit",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 228,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"fblgit",
"samusenps",
"macadeliccc",
"osanseviero",
"mrfakename",
"Ji-Ha",
"NeuralNovel",
"taufiqdp",
"merve",
"linuxhackr"
],
"count": 10
}
] | 2024-02-10T10:17:26.000Z | 2024-02-10T23:10:34.219Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62e54f0eae9d3f10acb95cb9/VAyk05hqB3OZWXEZW-B0q.png",
"fullname": "mrfakename",
"name": "mrfakename",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 969,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6401c8c9f98fbc64bcd7dca1/MOSgc_mPbfUZ-354osy1v.png",
"fullname": "FBL",
"name": "fblgit",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 228,
"isFollowing": false
}
] | /posts/fblgit/604927309439393 | 71 | 2 |
108296953761688 | [
{
"type": "text",
"value": "Repo with scripts to create your own moe models using Apple mlx is constantly updated by ",
"raw": "Repo with scripts to create your own moe models using Apple mlx is constantly updated by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@mzbac",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "mzbac",
"label": null,
"lang": null
},
{
"type": "text",
"value": " here: ",
"raw": " here: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/mzbac/mlx-moe",
"href": "https://github.com/mzbac/mlx-moe",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "It's an amazing resource to learn inner workings of lora on moe with mlx. ",
"raw": "It's an amazing resource to learn inner workings of lora on moe with mlx. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "It uses ",
"raw": "It uses ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_70k",
"href": "https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_70k",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " as default dataset, but can be easily tweak to use any model or dataset fro HF",
"raw": " as default dataset, but can be easily tweak to use any model or dataset fro HF",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Have fun with it!",
"raw": "Have fun with it!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Repo with scripts to create your own moe models using Apple mlx is constantly updated by @mzbac here: https://github.com/mzbac/mlx-moe
It's an amazing resource for learning the inner workings of LoRA on MoE with MLX.
It uses https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_70k as the default dataset, but can be easily tweaked to use any model or dataset from HF
Have fun with it! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64345a5f4b34368fdb045ef4/NfUgcPA08_Gl40pTrlWfe.jpeg",
"fullname": "Ivan Fioravanti",
"name": "ivanfioravanti",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 31,
"isFollowing": false
} | [] | [
{
"avatarUrl": "/avatars/513e0d2c15cfbb1542cb268eb2c8d68b.svg",
"fullname": "null",
"name": "mzbac",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 25
}
] | [
{
"reaction": "👍",
"users": [
"Pavarissy",
"osanseviero",
"awni",
"NeuralNovel",
"clem",
"merve",
"sugatoray"
],
"count": 7
},
{
"reaction": "🤝",
"users": [
"samusenps",
"osanseviero",
"awni",
"merve"
],
"count": 4
}
] | 2024-02-10T00:08:26.000Z | 2024-02-12T22:12:56.205Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5589,
"isFollowing": false
}
] | /posts/ivanfioravanti/108296953761688 | 70 | 1 |
894221400115854 | [
{
"type": "text",
"value": "✨ In-context learning is all you need!",
"raw": "✨ In-context learning is all you need!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This super interesting paper shows that fine-tuning with #SFT or #RLHF only helps on the form but does not impact knowledge or reasoning abilities, and in some cases, actually decreases performance!",
"raw": "This super interesting paper shows that fine-tuning with #SFT or #RLHF only helps on the form but does not impact knowledge or reasoning abilities, and in some cases, actually decreases performance!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "They tested it with Mistral-base vs Mistral FT-ed, as well as Llama 2 70b base and FT-ed and results are consistent.",
"raw": "They tested it with Mistral-base vs Mistral FT-ed, as well as Llama 2 70b base and FT-ed and results are consistent.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Providing the right prompt to the base model actually makes the model better and has 0 training cost! ",
"raw": "Providing the right prompt to the base model actually makes the model better and has 0 training cost! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Paper: ",
"raw": "Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2312.01552",
"href": "https://arxiv.org/abs/2312.01552",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | ✨ In-context learning is all you need!
This super interesting paper shows that fine-tuning with #SFT or #RLHF only helps with the form of responses but does not impact knowledge or reasoning abilities, and in some cases actually decreases performance!
They tested it with Mistral-base vs Mistral FT-ed, as well as Llama 2 70b base and FT-ed and results are consistent.
Providing the right prompt to the base model actually makes the model better and has 0 training cost!
Paper: https://arxiv.org/abs/2312.01552 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661497922734-62f4ac43567dbf9a39f75474.jpeg",
"fullname": "Daniel Huynh",
"name": "dhuynh95",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"andysalerno",
"samusenps",
"osanseviero",
"gl198976",
"VikramSingh178",
"akjindal53244",
"nickandbro",
"clem",
"victor",
"anthonymikinka"
],
"count": 10
}
] | 2024-02-09T22:23:05.000Z | 2024-02-09T22:23:05.714Z | [] | /posts/dhuynh95/894221400115854 | 275 | 0 |
761147069099521 | [
{
"type": "text",
"value": "🎉🥳🎉",
"raw": "🎉🥳🎉",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Today, we are thrilled to officially launch the \"2A2I\" Arabic Artificial Intelligence Initiative. This is a community-driven initiative founded on the philosophy of \"Small team, Big work\" Our goal is to elevate Arabic AI (LLMs, Diffusion Models, ASR, etc.) to the same level as English (and also Chinese 🐉).",
"raw": "Today, we are thrilled to officially launch the \"2A2I\" Arabic Artificial Intelligence Initiative. This is a community-driven initiative founded on the philosophy of \"Small team, Big work\" Our goal is to elevate Arabic AI (LLMs, Diffusion Models, ASR, etc.) to the same level as English (and also Chinese 🐉).",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Naturally, our focus today is primarily on datasets. We aim to provide high-quality datasets, especially for LLMs this month, to support our future efforts. In line with this, we're excited to introduce the Arabic version of H4-no_robots, find here : ",
"raw": "Naturally, our focus today is primarily on datasets. We aim to provide high-quality datasets, especially for LLMs this month, to support our future efforts. In line with this, we're excited to introduce the Arabic version of H4-no_robots, find here : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/2A2I/H4_no_robots",
"href": null,
"resource": {
"type": "dataset",
"id": "2A2I/H4_no_robots",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/2A2I/H4_no_robots",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " (and yes, we know it's not \"no_robots\" anymore 😄). Stay tuned for more exciting, high-quality datasets in the next couple of weeks (+ 4 million rows🔥)",
"raw": " (and yes, we know it's not \"no_robots\" anymore 😄). Stay tuned for more exciting, high-quality datasets in the next couple of weeks (+ 4 million rows🔥)",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "In parallel, we're also developing a model 🐪 that we hope will set new high standards for Arabic LLMs. 🔥 This model is planned for release in the coming months.",
"raw": "In parallel, we're also developing a model 🐪 that we hope will set new high standards for Arabic LLMs. 🔥 This model is planned for release in the coming months.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "For more information, please visit our Organization card here : ",
"raw": "For more information, please visit our Organization card here : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/2A2I",
"href": "https://huggingface.co/2A2I",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "If you're interested in Arabic AI and want to help pushing the wheel as well, fill out this form, and let us know your motivation and your exciting ideas 🔥 ",
"raw": "If you're interested in Arabic AI and want to help pushing the wheel as well, fill out this form, and let us know your motivation and your exciting ideas 🔥 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The form link : ",
"raw": "The form link : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://forms.gle/kZLVuynWFU2FyTm57",
"href": "https://forms.gle/kZLVuynWFU2FyTm57",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "If you have any questions, feel free to reach out to us at the email address below.",
"raw": "If you have any questions, feel free to reach out to us at the email address below.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Additionally, if you believe as we do in this mission and would like to help this community and contribute some compute resources 😉 or any other form of help you might think about, please contact us at the same email address below or reach out to me through LinkedIn 🔥",
"raw": "Additionally, if you believe as we do in this mission and would like to help this community and contribute some compute resources 😉 or any other form of help you might think about, please contact us at the same email address below or reach out to me through LinkedIn 🔥",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2A2I Contact Email : [email protected]",
"raw": "2A2I Contact Email : [email protected]",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "My LinkedIn : ",
"raw": "My LinkedIn : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://www.linkedin.com/in/alielfilali01/",
"href": "https://www.linkedin.com/in/alielfilali01/",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🎉🥳🎉
Today, we are thrilled to officially launch the "2A2I" Arabic Artificial Intelligence Initiative. This is a community-driven initiative founded on the philosophy of "Small team, Big work". Our goal is to elevate Arabic AI (LLMs, Diffusion Models, ASR, etc.) to the same level as English (and also Chinese 🐉).
Naturally, our focus today is primarily on datasets. We aim to provide high-quality datasets, especially for LLMs this month, to support our future efforts. In line with this, we're excited to introduce the Arabic version of H4-no_robots, which you can find here: https://huggingface.co/datasets/2A2I/H4_no_robots (and yes, we know it's not "no_robots" anymore 😄). Stay tuned for more exciting, high-quality datasets in the next couple of weeks (+ 4 million rows🔥)
In parallel, we're also developing a model 🐪 that we hope will set new high standards for Arabic LLMs. 🔥 This model is planned for release in the coming months.
For more information, please visit our Organization card here : https://huggingface.co/2A2I
If you're interested in Arabic AI and want to help push the wheel as well, fill out this form, and let us know your motivation and your exciting ideas 🔥
The form link : https://forms.gle/kZLVuynWFU2FyTm57
If you have any questions, feel free to reach out to us at the email address below.
Additionally, if you believe as we do in this mission and would like to help this community and contribute some compute resources 😉 or any other form of help you might think about, please contact us at the same email address below or reach out to me through LinkedIn 🔥
2A2I Contact Email : [email protected]
My LinkedIn : https://www.linkedin.com/in/alielfilali01/ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/626237d9bbcbd1c34f1bb231/Z9C6KQUhVMePrHodSyGma.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"medmac01",
"samusenps",
"macadeliccc",
"HANTIFARAH",
"andysalerno",
"Rami",
"osanseviero",
"pain",
"NeuralNovel",
"kramp",
"Kernel",
"clem",
"kenza-ily",
"maghwa",
"derek-thomas",
"SaiedAlshahrani",
"mahmoudtarek"
],
"count": 17
}
] | 2024-02-09T18:57:34.000Z | 2024-02-09T19:05:48.209Z | [] | /posts/alielfilali01/761147069099521 | 13 | 0 |
790074275915357 | [
{
"type": "text",
"value": "Reducing perplexity in LLM's through layer selective rank reduction",
"raw": "Reducing perplexity in LLM's through layer selective rank reduction",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Layer-Selective Rank Reduction (LASER) is a denoising method that improves reasoning by the strategic removal of higher-order components from weight matrices in the multi-layer perceptron (MLP) layers without the need for additional parameters or training data. This process leverages singular value decomposition to identify and eliminate these components. This simple, yet effective, method has shown to improve question-answering performance by up to 27.4 percentage points.",
"raw": "Layer-Selective Rank Reduction (LASER) is a denoising method that improves reasoning by the strategic removal of higher-order components from weight matrices in the multi-layer perceptron (MLP) layers without the need for additional parameters or training data. This process leverages singular value decomposition to identify and eliminate these components. This simple, yet effective, method has shown to improve question-answering performance by up to 27.4 percentage points.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "LaserRMT implements this through a process by calculating signal to noise ratio (SNR) for each layer and selectively reducing the rank of these layers.The SNR method meticulously computes the SNR by leveraging singular value decomposition (SVD) to separate the signal (higher-order components) from the noise (lower-order components) within the weight matrices of the model's layers. The SNR calculation is what determines which layers would benefit from rank reduction without compromising the models integrity.",
"raw": "LaserRMT implements this through a process by calculating signal to noise ratio (SNR) for each layer and selectively reducing the rank of these layers.The SNR method meticulously computes the SNR by leveraging singular value decomposition (SVD) to separate the signal (higher-order components) from the noise (lower-order components) within the weight matrices of the model's layers. The SNR calculation is what determines which layers would benefit from rank reduction without compromising the models integrity.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "If a layer is identified that could benefit from rank reduction, then the layer will enter an incremental process where the weight matrices are reduced and reconstructed by retaining only the singular values that surpass the threshold. In the case of laserRMT, the threshold is calculated by Marchenko-Pastur Law.",
"raw": "If a layer is identified that could benefit from rank reduction, then the layer will enter an incremental process where the weight matrices are reduced and reconstructed by retaining only the singular values that surpass the threshold. In the case of laserRMT, the threshold is calculated by Marchenko-Pastur Law.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "code_fence",
"value": null,
"raw": "```\n@staticmethod\n def marchenko_pastur_threshold(sigma, n, m):\n beta = n / m if n < m else m / n\n threshold = sigma * np.sqrt((1 + np.sqrt(beta))**2)\n return thr\n```",
"href": null,
"resource": null,
"url": null,
"code": "@staticmethod\n def marchenko_pastur_threshold(sigma, n, m):\n beta = n / m if n < m else m / n\n threshold = sigma * np.sqrt((1 + np.sqrt(beta))**2)\n return thr",
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The two primary benefits of applying this method are reducing computational overhead of large language models and simultaneously improving output quality. ",
"raw": "The two primary benefits of applying this method are reducing computational overhead of large language models and simultaneously improving output quality. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Credit to ",
"raw": "Credit to ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@ehartford",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "ehartford",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@fernandofernandes",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "fernandofernandes",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@DavidGF",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "DavidGF",
"label": null,
"lang": null
},
{
"type": "text",
"value": " for laserRMT",
"raw": " for laserRMT",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Resources:",
"raw": "Resources:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "☄️ AutoLaser: ",
"raw": "☄️ AutoLaser: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://colab.research.google.com/drive/11j0e-w6BfvqeFN1gUrpOqdW0vcKqfVqP?usp=sharing",
"href": "https://colab.research.google.com/drive/11j0e-w6BfvqeFN1gUrpOqdW0vcKqfVqP?usp=sharing",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "laserRMT: ",
"raw": "laserRMT: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/cognitivecomputations/laserRMT",
"href": "https://github.com/cognitivecomputations/laserRMT",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2312.13558",
"href": null,
"resource": {
"type": "paper",
"id": "2312.13558",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2312.13558",
"code": null,
"user": null,
"label": "The Truth is in There: Improving Reasoning in Language Models with\n Layer-Selective Rank Reduction (2312.13558)",
"lang": null
}
] | Reducing perplexity in LLM's through layer selective rank reduction
Layer-Selective Rank Reduction (LASER) is a denoising method that improves reasoning by the strategic removal of higher-order components from weight matrices in the multi-layer perceptron (MLP) layers without the need for additional parameters or training data. This process leverages singular value decomposition to identify and eliminate these components. This simple, yet effective, method has shown to improve question-answering performance by up to 27.4 percentage points.
LaserRMT implements this by calculating the signal-to-noise ratio (SNR) for each layer and selectively reducing the rank of those layers. The SNR method computes the SNR by leveraging singular value decomposition (SVD) to separate the signal (higher-order components) from the noise (lower-order components) within the weight matrices of the model's layers. The SNR calculation determines which layers would benefit from rank reduction without compromising the model's integrity.
If a layer is identified that could benefit from rank reduction, then the layer will enter an incremental process where the weight matrices are reduced and reconstructed by retaining only the singular values that surpass the threshold. In the case of laserRMT, the threshold is calculated by Marchenko-Pastur Law.
```
@staticmethod
def marchenko_pastur_threshold(sigma, n, m):
beta = n / m if n < m else m / n
threshold = sigma * np.sqrt((1 + np.sqrt(beta))**2)
    return threshold
```
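The snippet above only computes the threshold; a minimal sketch of the full keep-above-threshold reconstruction might look like the following (the `reduce_rank` helper, the toy matrix, and the effective `sigma` passed to it are illustrative assumptions, not the exact laserRMT implementation, which estimates the noise scale from the model's own weights):
```python
import numpy as np

def marchenko_pastur_threshold(sigma, n, m):
    # Upper edge of the Marchenko-Pastur distribution for an n x m
    # matrix whose entries are noise with standard deviation sigma.
    beta = n / m if n < m else m / n
    return sigma * np.sqrt((1 + np.sqrt(beta)) ** 2)

def reduce_rank(W, sigma):
    # Decompose W, keep only the singular values that surpass the
    # threshold, and reconstruct the denoised, lower-rank matrix.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    thr = marchenko_pastur_threshold(sigma, *W.shape)
    keep = s > thr
    return (U[:, keep] * s[keep]) @ Vt[keep, :]

# Toy demo: a rank-1 "signal" buried in small Gaussian noise.
rng = np.random.default_rng(0)
W = np.outer(rng.normal(size=64), rng.normal(size=32))
W += rng.normal(scale=0.05, size=W.shape)
# sigma=1.0 is an assumed effective noise scale for this toy;
# it sits between the noise and signal singular values.
W_clean = reduce_rank(W, sigma=1.0)
```
With this setup the single large singular value of the rank-1 signal survives while the noise components fall below the threshold, so `W_clean` collapses to a rank-1 reconstruction of the signal.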
The two primary benefits of applying this method are reducing the computational overhead of large language models while simultaneously improving output quality.
Credit to @ehartford @fernandofernandes @DavidGF for laserRMT
Resources:
☄️ AutoLaser: https://colab.research.google.com/drive/11j0e-w6BfvqeFN1gUrpOqdW0vcKqfVqP?usp=sharing
laserRMT: https://github.com/cognitivecomputations/laserRMT
https://huggingface.co/papers/2312.13558 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6455cc8d679315e4ef16fbec/M6Cfifn05BUzkCFd2QDIT.png",
"fullname": "Tim Dolan",
"name": "macadeliccc",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 152,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/dfHLkLwDE5xi909vX5Gya.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64b999a40b24527e9c25583a/xFHCewJdf5EGn8qDPypqy.jpeg",
"fullname": "David Golchinfar",
"name": "DavidGF",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 49
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63111b2d88942700629f5771/u2a9y-yx6TG0N31OhMSHI.png",
"fullname": "Eric Hartford",
"name": "ehartford",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3287
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/646e57a5cb6ea6e6b6df1ad4/PlGhM2SUynFBUdYAylaZK.jpeg",
"fullname": "Fernando Fernandes Neto",
"name": "fernandofernandes",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 47
}
] | [
{
"reaction": "❤️",
"users": [
"samusenps",
"DavidGF",
"Undi95",
"mathiasn1",
"osanseviero",
"KnutJaegersberg",
"tiendung",
"Ji-Ha",
"Dlbk",
"munirdin",
"NeuralNovel",
"victor",
"Citaman",
"abdullah"
],
"count": 14
},
{
"reaction": "🤗",
"users": [
"birgermoell",
"osanseviero",
"mrm8488",
"Citaman"
],
"count": 4
},
{
"reaction": "👍",
"users": [
"mrm8488"
],
"count": 1
}
] | 2024-02-09T18:57:09.000Z | 2024-02-11T16:22:57.955Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/645cfe4603fc86c46b3e46d1/-72dah0aoELhfwYwNJ6Ig.jpeg",
"fullname": "Lee Jackson",
"name": "NeuralNovel",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 50,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6455cc8d679315e4ef16fbec/M6Cfifn05BUzkCFd2QDIT.png",
"fullname": "Tim Dolan",
"name": "macadeliccc",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 152,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63ab1241ad514ca8d1430003/d-43TcOxG-zqAbzrH2m7H.png",
"fullname": "Undi",
"name": "Undi95",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3311,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669551186189-63732ebbbd81fae2b3aaf3fb.jpeg",
"fullname": "Knut Jägersberg",
"name": "KnutJaegersberg",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 238,
"isFollowing": false
}
] | /posts/macadeliccc/790074275915357 | 152 | 8 |
965647830564836 | [
{
"type": "text",
"value": "More Agents Is All You Need",
"raw": "More Agents Is All You Need",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.05120",
"href": null,
"resource": {
"type": "paper",
"id": "2402.05120",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.05120",
"code": null,
"user": null,
"label": "More Agents Is All You Need (2402.05120)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Also, this method is orthogonal to existing complicated methods to further enhance LLMs, while the degree of enhancement is correlated to the task difficulty. We conduct comprehensive experiments on a wide range of LLM benchmarks to verify the presence of our finding, and to study the properties that can facilitate its occurrence.",
"raw": "find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Also, this method is orthogonal to existing complicated methods to further enhance LLMs, while the degree of enhancement is correlated to the task difficulty. We conduct comprehensive experiments on a wide range of LLM benchmarks to verify the presence of our finding, and to study the properties that can facilitate its occurrence.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | More Agents Is All You Need
https://huggingface.co/papers/2402.05120
We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Also, this method is orthogonal to existing complicated methods to further enhance LLMs, while the degree of enhancement is correlated to the task difficulty. We conduct comprehensive experiments on a wide range of LLM benchmarks to verify the presence of our finding, and to study the properties that can facilitate its occurrence. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"fullname": "AK",
"name": "akhaliq",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/BYSva0akN2IWcyuQpNJn8.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"macadeliccc",
"AdinaY",
"samusenps",
"Denny-KEF"
],
"count": 4
}
] | 2024-02-09T16:18:04.000Z | 2024-02-09T16:18:04.824Z | [] | /posts/akhaliq/965647830564836 | 38 | 0 |
136949101432683 | [
{
"type": "text",
"value": "🔍 Today's pick in Interpretability & Analysis of LMs: AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers by ",
"raw": "🔍 Today's pick in Interpretability & Analysis of LMs: AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@RedOneAI",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "RedOneAI",
"label": null,
"lang": null
},
{
"type": "text",
"value": " et al.",
"raw": " et al.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This work proposes extending the LRP feature attribution framework to handle Transformers-specific layers. In particular, the authors:",
"raw": "This work proposes extending the LRP feature attribution framework to handle Transformers-specific layers. In particular, the authors:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "1. Propose a generalized approach to softmax linearization by designing a distribution rule that incorporates bias terms, absorbing a portion of the relevance.",
"raw": "1. Propose a generalized approach to softmax linearization by designing a distribution rule that incorporates bias terms, absorbing a portion of the relevance.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2. Propose decomposing the element-wise matrix multiplication in the attention operation as a sequence of epsilon and uniform distribution rules to ensure conservation (=sum of relevance stays constant across layers).",
"raw": "2. Propose decomposing the element-wise matrix multiplication in the attention operation as a sequence of epsilon and uniform distribution rules to ensure conservation (=sum of relevance stays constant across layers).",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "3. Propose handling normalisation layers with an identity distribution rule.",
"raw": "3. Propose handling normalisation layers with an identity distribution rule.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "By means of extensive experiments, authors show that AttnLRP:",
"raw": "By means of extensive experiments, authors show that AttnLRP:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "1. Is significantly more faithful than other popular gradient- and attention-based attribution approaches on CV and NLP tasks using large transformer models.",
"raw": "1. Is significantly more faithful than other popular gradient- and attention-based attribution approaches on CV and NLP tasks using large transformer models.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2. Runs in O(1) time, requiring O(sqrt(num_layers)) memory, as opposed to perturbation-based approaches requiring O(seq_len) time.",
"raw": "2. Runs in O(1) time, requiring O(sqrt(num_layers)) memory, as opposed to perturbation-based approaches requiring O(seq_len) time.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "3. Can be used alongside activation maximisation to explain the contribution of granular model components in driving models’ predictions.",
"raw": "3. Can be used alongside activation maximisation to explain the contribution of granular model components in driving models’ predictions.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📄 Paper: ",
"raw": "📄 Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.05602",
"href": null,
"resource": {
"type": "paper",
"id": "2402.05602",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.05602",
"code": null,
"user": null,
"label": "AttnLRP: Attention-Aware Layer-wise Relevance Propagation for\n Transformers (2402.05602)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔍 All daily picks in LM interpretability: ",
"raw": "🔍 All daily picks in LM interpretability: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9",
"href": null,
"resource": {
"type": "collection",
"id": "gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9",
"discussionNum": null
},
"url": "https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9",
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔍 Today's pick in Interpretability & Analysis of LMs: AttnLRP: Attention-Aware Layer-wise Relevance Propagation for Transformers by @RedOneAI et al.
This work proposes extending the LRP feature attribution framework to handle Transformers-specific layers. In particular, the authors:
1. Propose a generalized approach to softmax linearization by designing a distribution rule that incorporates bias terms, absorbing a portion of the relevance.
2. Propose decomposing the element-wise matrix multiplication in the attention operation as a sequence of epsilon and uniform distribution rules to ensure conservation (=sum of relevance stays constant across layers).
3. Propose handling normalisation layers with an identity distribution rule.
By means of extensive experiments, authors show that AttnLRP:
1. Is significantly more faithful than other popular gradient- and attention-based attribution approaches on CV and NLP tasks using large transformer models.
2. Runs in O(1) time, requiring O(sqrt(num_layers)) memory, as opposed to perturbation-based approaches requiring O(seq_len) time.
3. Can be used alongside activation maximisation to explain the contribution of granular model components in driving models’ predictions.
📄 Paper: https://huggingface.co/papers/2402.05602
🔍 All daily picks in LM interpretability: https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/S5M-GKiy1_xH2gBBao7Ch.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/_Zhb2vpUlFnH3uQ3o2ktO.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/aZmUtdAOxJrwnIqWp1aP_.png"
}
] | [
{
"avatarUrl": "/avatars/8c85ea924b8303ac11ccc86c3dd02434.svg",
"fullname": "Reduan Achtibat",
"name": "RedOneAI",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3
}
] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"RedOneAI",
"erfanhatefi",
"alielfilali01",
"OctoOptLab"
],
"count": 5
}
] | 2024-02-09T11:07:08.000Z | 2024-02-09T11:07:08.342Z | [] | /posts/gsarti/136949101432683 | 104 | 0 |
674175638533726 | [
{
"type": "text",
"value": "We have recently shipped two new task pages thanks to the contributors! ",
"raw": "We have recently shipped two new task pages thanks to the contributors! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Both of these pages consist of models you can use off-the-shelf 💜 ",
"raw": "Both of these pages consist of models you can use off-the-shelf 💜 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Reading these you can start building in no time 🛠️ ",
"raw": "Reading these you can start building in no time 🛠️ ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Task page for zero-shot object detection 👉 ",
"raw": "Task page for zero-shot object detection 👉 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/tasks/zero-shot-object-detection",
"href": "https://huggingface.co/tasks/zero-shot-object-detection",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " 📚 ",
"raw": " 📚 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Task page for mask generation 👉 ",
"raw": "Task page for mask generation 👉 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/tasks/mask-generation",
"href": "https://huggingface.co/tasks/mask-generation",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " 📖 ",
"raw": " 📖 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | We have recently shipped two new task pages thanks to the contributors!
Both of these pages consist of models you can use off-the-shelf 💜 
Reading these you can start building in no time 🛠️
Task page for zero-shot object detection 👉 https://huggingface.co/tasks/zero-shot-object-detection 📚
Task page for mask generation 👉 https://huggingface.co/tasks/mask-generation 📖 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5589,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👍",
"users": [
"fffiloni",
"kramp",
"osanseviero",
"macadeliccc",
"AdinaY",
"not-lain",
"nbroad"
],
"count": 7
},
{
"reaction": "❤️",
"users": [
"jeffboudier",
"santiviquez",
"macadeliccc",
"not-lain",
"alielfilali01"
],
"count": 5
},
{
"reaction": "🤗",
"users": [
"alielfilali01"
],
"count": 1
}
] | 2024-02-09T10:50:24.000Z | 2024-02-09T10:50:24.938Z | [] | /posts/merve/674175638533726 | 60 | 0 |
397768896430162 | [
{
"type": "text",
"value": "A Text Language Identification Model with Support for +2000 Labels:",
"raw": "A Text Language Identification Model with Support for +2000 Labels:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "space: ",
"raw": "space: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/cis-lmu/glotlid-space",
"href": null,
"resource": {
"type": "space",
"id": "cis-lmu/glotlid-space",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/cis-lmu/glotlid-space",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "model: ",
"raw": "model: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/cis-lmu/glotlid",
"href": null,
"resource": {
"type": "model",
"id": "cis-lmu/glotlid",
"discussionNum": null
},
"url": "https://huggingface.co/cis-lmu/glotlid",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "github: ",
"raw": "github: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/cisnlp/GlotLID",
"href": "https://github.com/cisnlp/GlotLID",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "paper: ",
"raw": "paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2310.16248",
"href": null,
"resource": {
"type": "paper",
"id": "2310.16248",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2310.16248",
"code": null,
"user": null,
"label": "GlotLID: Language Identification for Low-Resource Languages (2310.16248)",
"lang": null
}
] | A Text Language Identification Model with Support for +2000 Labels:
space: https://huggingface.co/spaces/cis-lmu/glotlid-space
model: https://huggingface.co/cis-lmu/glotlid
github: https://github.com/cisnlp/GlotLID
paper: https://huggingface.co/papers/2310.16248 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61bf84c8ca59d6d196a1b4e8/L_NvUwlMYcye9X35z6f7e.jpeg",
"fullname": "Amir Hossein Kargaran",
"name": "kargaranamir",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 36,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61bf84c8ca59d6d196a1b4e8/aik7sFC9mDWjSc6Tt-Q6e.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"s3nh",
"tomaarsen",
"merve",
"kardal",
"Sylvestre",
"kramp",
"osanseviero",
"macadeliccc",
"kaizuberbuehler",
"FPK39",
"lbourdois",
"DmitryRyumin",
"sbarman25",
"ayymen"
],
"count": 14
},
{
"reaction": "🤯",
"users": [
"FPK39"
],
"count": 1
},
{
"reaction": "❤️",
"users": [
"Svngoku"
],
"count": 1
}
] | 2024-02-09T10:26:28.000Z | 2024-04-04T21:30:47.966Z | [] | /posts/kargaranamir/397768896430162 | 481 | 0 |
806491452080475 | [
{
"type": "text",
"value": "The most widely used French NER models on HF (",
"raw": "The most widely used French NER models on HF (",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/Jean-Baptiste/camembert-ner",
"href": null,
"resource": {
"type": "model",
"id": "Jean-Baptiste/camembert-ner",
"discussionNum": null
},
"url": "https://huggingface.co/Jean-Baptiste/camembert-ner",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " and ",
"raw": " and ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/cmarkea/distilcamembert-base-ner",
"href": null,
"resource": {
"type": "model",
"id": "cmarkea/distilcamembert-base-ner",
"discussionNum": null
},
"url": "https://huggingface.co/cmarkea/distilcamembert-base-ner",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ") are trained on a single dataset (WikiNER) which on the one hand contains leaks and therefore distorts the true results of these models, and on the other hand overspecializes them in a particular domain (= texts from Wikipedia). They are also only available in a base version (110M parameters).",
"raw": ") are trained on a single dataset (WikiNER) which on the one hand contains leaks and therefore distorts the true results of these models, and on the other hand overspecializes them in a particular domain (= texts from Wikipedia). They are also only available in a base version (110M parameters).",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "That's why I've trained new NER models in French both on more data (x3), as well as in base and large versions (336M). They are available in 3 entities (PER, ORG, LOC) or 4 entities (PER, ORG, LOC, MISC):",
"raw": "That's why I've trained new NER models in French both on more data (x3), as well as in base and large versions (336M). They are available in 3 entities (PER, ORG, LOC) or 4 entities (PER, ORG, LOC, MISC):",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/CATIE-AQ/NERmembert-base-4entities",
"href": null,
"resource": {
"type": "model",
"id": "CATIE-AQ/NERmembert-base-4entities",
"discussionNum": null
},
"url": "https://huggingface.co/CATIE-AQ/NERmembert-base-4entities",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/CATIE-AQ/NERmembert-large-4entities",
"href": null,
"resource": {
"type": "model",
"id": "CATIE-AQ/NERmembert-large-4entities",
"discussionNum": null
},
"url": "https://huggingface.co/CATIE-AQ/NERmembert-large-4entities",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/CATIE-AQ/NERmembert-base-3entities",
"href": null,
"resource": {
"type": "model",
"id": "CATIE-AQ/NERmembert-base-3entities",
"discussionNum": null
},
"url": "https://huggingface.co/CATIE-AQ/NERmembert-base-3entities",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/CATIE-AQ/NERmembert-large-3entities",
"href": null,
"resource": {
"type": "model",
"id": "CATIE-AQ/NERmembert-large-3entities",
"discussionNum": null
},
"url": "https://huggingface.co/CATIE-AQ/NERmembert-large-3entities",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Datasets without leaks are also available:",
"raw": "Datasets without leaks are also available:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/CATIE-AQ/frenchNER_4entities",
"href": null,
"resource": {
"type": "dataset",
"id": "CATIE-AQ/frenchNER_4entities",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/CATIE-AQ/frenchNER_4entities",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/CATIE-AQ/frenchNER_3entities",
"href": null,
"resource": {
"type": "dataset",
"id": "CATIE-AQ/frenchNER_3entities",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/CATIE-AQ/frenchNER_3entities",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | The most widely used French NER models on HF (https://huggingface.co/Jean-Baptiste/camembert-ner and https://huggingface.co/cmarkea/distilcamembert-base-ner) are trained on a single dataset (WikiNER) which on the one hand contains leaks and therefore distorts the true results of these models, and on the other hand overspecializes them in a particular domain (= texts from Wikipedia). They are also only available in a base version (110M parameters).
That's why I've trained new NER models in French both on more data (x3), as well as in base and large versions (336M). They are available in 3 entities (PER, ORG, LOC) or 4 entities (PER, ORG, LOC, MISC):
- https://huggingface.co/CATIE-AQ/NERmembert-base-4entities
- https://huggingface.co/CATIE-AQ/NERmembert-large-4entities
- https://huggingface.co/CATIE-AQ/NERmembert-base-3entities
- https://huggingface.co/CATIE-AQ/NERmembert-large-3entities
Datasets without leaks are also available:
- https://huggingface.co/datasets/CATIE-AQ/frenchNER_4entities
- https://huggingface.co/datasets/CATIE-AQ/frenchNER_3entities
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/613b0a62a14099d5afed7830/pLuqSIYaNYhUqdjxlNrFn.png",
"fullname": "Loïck BOURDOIS",
"name": "lbourdois",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 90,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/613b0a62a14099d5afed7830/V3cJbu4X99lu7Y1SJY6jV.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"Tonic",
"EquinoxElahin",
"mishig",
"tomaarsen"
],
"count": 5
},
{
"reaction": "🤗",
"users": [
"osanseviero",
"kramp",
"Tonic",
"victor"
],
"count": 4
},
{
"reaction": "👍",
"users": [
"fffiloni",
"macadeliccc"
],
"count": 2
}
] | 2024-02-09T08:46:39.000Z | 2024-02-09T08:46:39.021Z | [] | /posts/lbourdois/806491452080475 | 1,257 | 0 |
656761313696494 | [
{
"type": "text",
"value": "🚀 It's now easier than ever to switch from OpenAI to open LLMs ",
"raw": "🚀 It's now easier than ever to switch from OpenAI to open LLMs ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Hugging Face's TGI now supports an OpenAI compatible Chat Completion API",
"raw": "Hugging Face's TGI now supports an OpenAI compatible Chat Completion API",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This means you can transition code that uses OpenAI client libraries (or frameworks like LangChain 🦜 and LlamaIndex 🦙) to run open models by changing just two lines of code 🤗",
"raw": "This means you can transition code that uses OpenAI client libraries (or frameworks like LangChain 🦜 and LlamaIndex 🦙) to run open models by changing just two lines of code 🤗",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "⭐ Here's how:",
"raw": "⭐ Here's how:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "code_fence",
"value": null,
"raw": "```\nfrom openai import OpenAI\n\n# initialize the client but point it to TGI\nclient = OpenAI(\n base_url=\"<ENDPOINT_URL>\" + \"/v1/\", # replace with your endpoint url\n api_key=\"<HF_API_TOKEN>\", # replace with your token\n)\nchat_completion = client.chat.completions.create(\n model=\"tgi\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Why is open-source software important?\"},\n ],\n stream=True,\n max_tokens=500\n)\n\n# iterate and print stream\nfor message in chat_completion:\n print(message.choices[0].delta.content, end=\"\")\n```",
"href": null,
"resource": null,
"url": null,
"code": "from openai import OpenAI\n\n# initialize the client but point it to TGI\nclient = OpenAI(\n base_url=\"<ENDPOINT_URL>\" + \"/v1/\", # replace with your endpoint url\n api_key=\"<HF_API_TOKEN>\", # replace with your token\n)\nchat_completion = client.chat.completions.create(\n model=\"tgi\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"Why is open-source software important?\"},\n ],\n stream=True,\n max_tokens=500\n)\n\n# iterate and print stream\nfor message in chat_completion:\n print(message.choices[0].delta.content, end=\"\")",
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔗 Blog post ➡ ",
"raw": "🔗 Blog post ➡ ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/tgi-messages-api",
"href": "https://huggingface.co/blog/tgi-messages-api",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔗 TGI docs ➡ ",
"raw": "🔗 TGI docs ➡ ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/docs/text-generation-inference/en/messages_api",
"href": "https://huggingface.co/docs/text-generation-inference/en/messages_api",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🚀 It's now easier than ever to switch from OpenAI to open LLMs
Hugging Face's TGI now supports an OpenAI compatible Chat Completion API
This means you can transition code that uses OpenAI client libraries (or frameworks like LangChain 🦜 and LlamaIndex 🦙) to run open models by changing just two lines of code 🤗
⭐ Here's how:
```
from openai import OpenAI
# initialize the client but point it to TGI
client = OpenAI(
base_url="<ENDPOINT_URL>" + "/v1/", # replace with your endpoint url
api_key="<HF_API_TOKEN>", # replace with your token
)
chat_completion = client.chat.completions.create(
model="tgi",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Why is open-source software important?"},
],
stream=True,
max_tokens=500
)
# iterate and print stream
for message in chat_completion:
print(message.choices[0].delta.content, end="")
```
🔗 Blog post ➡ https://huggingface.co/blog/tgi-messages-api
🔗 TGI docs ➡ https://huggingface.co/docs/text-generation-inference/en/messages_api | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61d375fd733d3a83ecd1bba9/oIXwvvs1-HaCnJXMCZgkc.jpeg",
"fullname": "Andrew Reed",
"name": "andrewrreed",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 106,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"clem",
"samusenps",
"ashesashes",
"johko",
"Tonic",
"julien-c",
"victor",
"Chunte",
"johnnybio",
"Alisherfullhd",
"Yogeek",
"Glavin001",
"nbroad",
"Nymbo"
],
"count": 15
},
{
"reaction": "🤗",
"users": [
"samusenps",
"hunkim",
"kramp",
"Tonic",
"julien-c",
"Chunte",
"jeffboudier",
"natolambert",
"Felladrin",
"Nymbo"
],
"count": 10
},
{
"reaction": "👍",
"users": [
"mvaloatto",
"hunkim",
"Tonic",
"julien-c",
"Glavin001",
"Nymbo"
],
"count": 6
},
{
"reaction": "🤯",
"users": [
"clem",
"hunkim",
"osanseviero",
"Tonic",
"julien-c"
],
"count": 5
}
] | 2024-02-08T21:43:59.000Z | 2024-03-04T19:54:40.671Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5dd96eb166059660ed1ee413/NQtzmrDdbG0H8qkZvRyGk.jpeg",
"fullname": "Julien Chaumond",
"name": "julien-c",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1580,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61d375fd733d3a83ecd1bba9/oIXwvvs1-HaCnJXMCZgkc.jpeg",
"fullname": "Andrew Reed",
"name": "andrewrreed",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 106,
"isFollowing": false
},
{
"avatarUrl": "/avatars/44a78ae88e7113ef1919de851aefddd4.svg",
"fullname": "Odin Nøsen",
"name": "myonlyeye",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/andrewrreed/656761313696494 | 530 | 7 |
282502273632690 | [
{
"type": "text",
"value": "First HF social post:",
"raw": "First HF social post:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "pip install -U mlx",
"raw": "pip install -U mlx",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | First HF social post:
pip install -U mlx | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/623c830997ddced06d78699b/ucB8joTPONCnc_Gj0mssR.jpeg",
"fullname": "Awni Hannun",
"name": "awni",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 56,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/623c830997ddced06d78699b/WoroAE6ZmxKFfGBhrbnQd.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"clem",
"ivanfioravanti",
"multimodalart",
"johko",
"merve",
"victor",
"reach-vb",
"hfthorvaldur",
"flymonk"
],
"count": 10
},
{
"reaction": "🤯",
"users": [
"osanseviero",
"clem",
"multimodalart",
"merve",
"kardal",
"reach-vb"
],
"count": 6
},
{
"reaction": "👍",
"users": [
"ivanfioravanti",
"merve",
"reach-vb"
],
"count": 3
},
{
"reaction": "🤗",
"users": [
"Tonic",
"merve",
"reach-vb"
],
"count": 3
},
{
"reaction": "🤝",
"users": [
"bmorphism"
],
"count": 1
}
] | 2024-02-08T21:38:27.000Z | 2024-02-09T09:10:04.306Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/607feb037c746d01ecb19180/qs3NO5v-Ej5UaKns8yFW8.jpeg",
"fullname": "Johannes Kolbe",
"name": "johko",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 54,
"isFollowing": false
}
] | /posts/awni/282502273632690 | 157 | 2 |
964765337315066 | [
{
"type": "text",
"value": "Apple MLX 0.2.0 is here!",
"raw": "Apple MLX 0.2.0 is here!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@awni",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "awni",
"label": null,
"lang": null
},
{
"type": "text",
"value": " and his team did again! Mega release. ",
"raw": " and his team did again! Mega release. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Performance 💨💨💨 is coming to town!",
"raw": "Performance 💨💨💨 is coming to town!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "mx.compile makes stuff go fast",
"raw": "mx.compile makes stuff go fast",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Some functions are up to 10x faster (benchmarks: ",
"raw": "- Some functions are up to 10x faster (benchmarks: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/ml-explore/mlx/pull/614#issuecomment-1929056286",
"href": "https://github.com/ml-explore/mlx/pull/614#issuecomment-1929056286",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ")",
"raw": ")",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Training models anywhere from 10% to twice as fast (benchmarks: ",
"raw": "- Training models anywhere from 10% to twice as fast (benchmarks: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/ml-explore/mlx-examples/pull/420#issuecomment-1932422605",
"href": "https://github.com/ml-explore/mlx-examples/pull/420#issuecomment-1932422605",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ")",
"raw": ")",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Full details on this release here: ",
"raw": "Full details on this release here: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/ml-explore/mlx/releases/tag/v0.2.0",
"href": "https://github.com/ml-explore/mlx/releases/tag/v0.2.0",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Apple MLX 0.2.0 is here!
@awni and his team did it again!
Performance 💨💨💨 is coming to town!
mx.compile makes stuff go fast
- Some functions are up to 10x faster (benchmarks: https://github.com/ml-explore/mlx/pull/614#issuecomment-1929056286)
- Training models anywhere from 10% to twice as fast (benchmarks: https://github.com/ml-explore/mlx-examples/pull/420#issuecomment-1932422605)
Full details on this release here: https://github.com/ml-explore/mlx/releases/tag/v0.2.0
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64345a5f4b34368fdb045ef4/NfUgcPA08_Gl40pTrlWfe.jpeg",
"fullname": "Ivan Fioravanti",
"name": "ivanfioravanti",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 31,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/623c830997ddced06d78699b/ucB8joTPONCnc_Gj0mssR.jpeg",
"fullname": "Awni Hannun",
"name": "awni",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 56
}
] | [
{
"reaction": "❤️",
"users": [
"awni",
"osanseviero",
"clem",
"heyyeah",
"samusenps",
"akashicmarga",
"merve",
"victor",
"kroonen",
"Gatozu35"
],
"count": 10
},
{
"reaction": "👍",
"users": [
"awni",
"clem",
"kramp",
"merve",
"victor"
],
"count": 5
},
{
"reaction": "🤝",
"users": [
"clem",
"merve"
],
"count": 2
},
{
"reaction": "😔",
"users": [
"BarraHome"
],
"count": 1
}
] | 2024-02-08T21:24:43.000Z | 2024-02-08T21:33:18.960Z | [] | /posts/ivanfioravanti/964765337315066 | 60 | 0 |
247190826659941 | [
{
"type": "text",
"value": "Benefits of ",
"raw": "Benefits of ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "inline_code",
"value": null,
"raw": "`imatrix`",
"href": null,
"resource": null,
"url": null,
"code": "imatrix",
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " quantization in place of quip#",
"raw": " quantization in place of quip#",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Quip-# is a quantization method proposed by [Cornell-RelaxML](",
"raw": "Quip-# is a quantization method proposed by [Cornell-RelaxML](",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/Cornell-RelaxML",
"href": "https://github.com/Cornell-RelaxML",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ") that claims tremendous performance gains using only 2-bit precision.",
"raw": ") that claims tremendous performance gains using only 2-bit precision.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "RelaxML proposes that quantizing a model from 16 bit to 2 bit precision they can utilize Llama-2-70B on a single 24GB GPU.",
"raw": "RelaxML proposes that quantizing a model from 16 bit to 2 bit precision they can utilize Llama-2-70B on a single 24GB GPU.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "QuIP# aims to revolutionize model quantization through a blend of incoherence processing and advanced lattice codebooks. By switching to a Hadamard transform-based incoherence approach, QuIP# enhances GPU efficiency, making weight matrices more Gaussian-like and ideal for quantization with its improved lattice codebooks.",
"raw": "QuIP# aims to revolutionize model quantization through a blend of incoherence processing and advanced lattice codebooks. By switching to a Hadamard transform-based incoherence approach, QuIP# enhances GPU efficiency, making weight matrices more Gaussian-like and ideal for quantization with its improved lattice codebooks.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This new method has already seen some adoption by projects like llama.cpp. The use of the Quip-# methodology has been implemented in the form of imatrix calculations. The importance matrix is calculated from a dataset such as wiki.train.raw and will output the perplexity on the given dataset.",
"raw": "This new method has already seen some adoption by projects like llama.cpp. The use of the Quip-# methodology has been implemented in the form of imatrix calculations. The importance matrix is calculated from a dataset such as wiki.train.raw and will output the perplexity on the given dataset.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This interim step can improve the results of the quantized model. If you would like to explore this process for yourself:",
"raw": "This interim step can improve the results of the quantized model. If you would like to explore this process for yourself:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "llama.cpp - ",
"raw": "llama.cpp - ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/ggerganov/llama.cpp/",
"href": "https://github.com/ggerganov/llama.cpp/",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Quip# paper - ",
"raw": "Quip# paper - ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://cornell-relaxml.github.io/quip-sharp/",
"href": "https://cornell-relaxml.github.io/quip-sharp/",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "AutoQuip# colab - ",
"raw": "AutoQuip# colab - ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://colab.research.google.com/drive/1rPDvcticCekw8VPNjDbh_UcivVBzgwEW?usp=sharing",
"href": "https://colab.research.google.com/drive/1rPDvcticCekw8VPNjDbh_UcivVBzgwEW?usp=sharing",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Other impressive quantization projects to watch:",
"raw": "Other impressive quantization projects to watch:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "+ AQLM",
"raw": "+ AQLM",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/Vahe1994/AQLM",
"href": "https://github.com/Vahe1994/AQLM",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2401.06118",
"href": "https://arxiv.org/abs/2401.06118",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Benefits of `imatrix` quantization in place of quip#
Quip-# is a quantization method proposed by [Cornell-RelaxML](https://github.com/Cornell-RelaxML) that claims tremendous performance gains using only 2-bit precision.
RelaxML proposes that by quantizing a model from 16-bit to 2-bit precision, Llama-2-70B can run on a single 24GB GPU.
QuIP# aims to revolutionize model quantization through a blend of incoherence processing and advanced lattice codebooks. By switching to a Hadamard transform-based incoherence approach, QuIP# enhances GPU efficiency, making weight matrices more Gaussian-like and ideal for quantization with its improved lattice codebooks.
This new method has already seen some adoption by projects like llama.cpp, where the QuIP# methodology has been implemented in the form of imatrix calculations. The importance matrix is calculated from a dataset such as wiki.train.raw, and the tool reports the perplexity on the given dataset.
This interim step can improve the results of the quantized model. If you would like to explore this process for yourself:
llama.cpp - https://github.com/ggerganov/llama.cpp/
Quip# paper - https://cornell-relaxml.github.io/quip-sharp/
AutoQuip# colab - https://colab.research.google.com/drive/1rPDvcticCekw8VPNjDbh_UcivVBzgwEW?usp=sharing
Other impressive quantization projects to watch:
+ AQLM
https://github.com/Vahe1994/AQLM
https://arxiv.org/abs/2401.06118
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6455cc8d679315e4ef16fbec/M6Cfifn05BUzkCFd2QDIT.png",
"fullname": "Tim Dolan",
"name": "macadeliccc",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 152,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/kAfBh-FGBe5EEbr8d3nEp.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"nivu",
"clem",
"merve",
"osanseviero",
"ybelkada",
"samusenps",
"NeuralNovel",
"Ji-Ha",
"hllj",
"DamarJati",
"sayhan",
"NovoCode",
"sugatoray",
"1TuanPham",
"victor",
"John6666",
"louisbrulenaudet"
],
"count": 17
},
{
"reaction": "❤️",
"users": [
"clem",
"merve",
"osanseviero",
"ybelkada",
"NeuralNovel",
"Gatozu35",
"akjindal53244",
"sayhan",
"NovoCode",
"KingNish"
],
"count": 10
}
] | 2024-02-08T18:41:54.000Z | 2024-02-08T18:41:54.571Z | [] | /posts/macadeliccc/247190826659941 | 884 | 0 |
303238714777838 | [
{
"type": "text",
"value": "What if the retrieval goes wrong? 🐕",
"raw": "What if the retrieval goes wrong? 🐕",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Retrieval Augmented Generation (RAG) is a strategy to alleviate LLM hallucinations and improve the quality of generated responses.",
"raw": "Retrieval Augmented Generation (RAG) is a strategy to alleviate LLM hallucinations and improve the quality of generated responses.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "A standard RAG architecture has two main blocks: a Retriever and a Generator.",
"raw": "A standard RAG architecture has two main blocks: a Retriever and a Generator.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "1️⃣ When the system receives an input sequence, it uses the Retriever to retrieve the top-K most relevant documents associated with the input sequence. These documents typically come from an external source (e.g., Wikipedia) and are then concatenated to the original input's context.",
"raw": "1️⃣ When the system receives an input sequence, it uses the Retriever to retrieve the top-K most relevant documents associated with the input sequence. These documents typically come from an external source (e.g., Wikipedia) and are then concatenated to the original input's context.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2️⃣ It then uses the Generator to generate a response given the gathered information in the first step.",
"raw": "2️⃣ It then uses the Generator to generate a response given the gathered information in the first step.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "But what happens if the retrieval goes wrong and the retrieved documents are of very low quality?",
"raw": "But what happens if the retrieval goes wrong and the retrieved documents are of very low quality?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Well, in such cases, the generated response will probably be of low quality, too. 🫠",
"raw": "Well, in such cases, the generated response will probably be of low quality, too. 🫠",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "But here is where CRAG (Corrective RAG) *might* help. I say it might help because the paper is very new — only one week old, and I don't know if someone has actually tried this in practice 😅",
"raw": "But here is where CRAG (Corrective RAG) *might* help. I say it might help because the paper is very new — only one week old, and I don't know if someone has actually tried this in practice 😅",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "However, the idea is to add a Knowledge Correction block between the Retrieval and Generation steps to evaluate the retrieved documents and correct them if necessary.",
"raw": "However, the idea is to add a Knowledge Correction block between the Retrieval and Generation steps to evaluate the retrieved documents and correct them if necessary.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This step goes as follows:",
"raw": "This step goes as follows:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🟢 If the documents are correct, they will be refined into more precise knowledge strips and concatenated to the original context to generate a response.",
"raw": "🟢 If the documents are correct, they will be refined into more precise knowledge strips and concatenated to the original context to generate a response.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔴 If the documents are incorrect, they will be discarded, and instead, the system searches the web for complementary knowledge. This external knowledge is then concatenated to the original context to generate a response.",
"raw": "🔴 If the documents are incorrect, they will be discarded, and instead, the system searches the web for complementary knowledge. This external knowledge is then concatenated to the original context to generate a response.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🟡 If the documents are ambiguous, a combination of the previous two resolutions is triggered.",
"raw": "🟡 If the documents are ambiguous, a combination of the previous two resolutions is triggered.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The experimental results from the paper show how the CRAG strategy outperforms traditional RAG approaches in both short and long-form text generation tasks.",
"raw": "The experimental results from the paper show how the CRAG strategy outperforms traditional RAG approaches in both short and long-form text generation tasks.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Paper: ",
"raw": "Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2401.15884",
"href": null,
"resource": {
"type": "paper",
"id": "2401.15884",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2401.15884",
"code": null,
"user": null,
"label": "Corrective Retrieval Augmented Generation (2401.15884)",
"lang": null
}
] | What if the retrieval goes wrong? 🐕
Retrieval Augmented Generation (RAG) is a strategy to alleviate LLM hallucinations and improve the quality of generated responses.
A standard RAG architecture has two main blocks: a Retriever and a Generator.
1️⃣ When the system receives an input sequence, it uses the Retriever to retrieve the top-K most relevant documents associated with the input sequence. These documents typically come from an external source (e.g., Wikipedia) and are then concatenated to the original input's context.
2️⃣ It then uses the Generator to generate a response given the gathered information in the first step.
But what happens if the retrieval goes wrong and the retrieved documents are of very low quality?
Well, in such cases, the generated response will probably be of low quality, too. 🫠
But here is where CRAG (Corrective RAG) *might* help. I say it might help because the paper is very new — only one week old, and I don't know if someone has actually tried this in practice 😅
However, the idea is to add a Knowledge Correction block between the Retrieval and Generation steps to evaluate the retrieved documents and correct them if necessary.
This step goes as follows:
🟢 If the documents are correct, they will be refined into more precise knowledge strips and concatenated to the original context to generate a response.
🔴 If the documents are incorrect, they will be discarded, and instead, the system searches the web for complementary knowledge. This external knowledge is then concatenated to the original context to generate a response.
🟡 If the documents are ambiguous, a combination of the previous two resolutions is triggered.
The experimental results from the paper show how the CRAG strategy outperforms traditional RAG approaches in both short and long-form text generation tasks.
Paper: https://huggingface.co/papers/2401.15884 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg",
"fullname": "Santiago Viquez",
"name": "santiviquez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/-nL2O-hSK0v1AmKVK5kZR.jpeg"
}
] | [] | [
{
"reaction": "👍",
"users": [
"osanseviero",
"samusenps",
"NeuralNovel",
"nickandbro",
"abellion"
],
"count": 5
}
] | 2024-02-08T17:05:09.000Z | 2024-02-08T17:05:09.582Z | [] | /posts/santiviquez/303238714777838 | 8 | 0 |
427624244676120 | [
{
"type": "text",
"value": "Grandmaster-Level Chess Without Search",
"raw": "Grandmaster-Level Chess Without Search",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.04494",
"href": null,
"resource": {
"type": "paper",
"id": "2402.04494",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.04494",
"code": null,
"user": null,
"label": "Grandmaster-Level Chess Without Search (2402.04494)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "largest model reaches a Lichess blitz Elo of 2895 against humans, and successfully solves a series of challenging chess puzzles, without any domain-specific tweaks or explicit search algorithms. We also show that our model outperforms AlphaZero's policy and value networks (without MCTS) and GPT-3.5-turbo-instruct. A systematic investigation of model and dataset size shows that strong chess performance only arises at sufficient scale. To validate our results, we perform an extensive series of ablations of design choices and hyperparameters.",
"raw": "largest model reaches a Lichess blitz Elo of 2895 against humans, and successfully solves a series of challenging chess puzzles, without any domain-specific tweaks or explicit search algorithms. We also show that our model outperforms AlphaZero's policy and value networks (without MCTS) and GPT-3.5-turbo-instruct. A systematic investigation of model and dataset size shows that strong chess performance only arises at sufficient scale. To validate our results, we perform an extensive series of ablations of design choices and hyperparameters.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Grandmaster-Level Chess Without Search
https://huggingface.co/papers/2402.04494
Our largest model reaches a Lichess blitz Elo of 2895 against humans, and successfully solves a series of challenging chess puzzles, without any domain-specific tweaks or explicit search algorithms. We also show that our model outperforms AlphaZero's policy and value networks (without MCTS) and GPT-3.5-turbo-instruct. A systematic investigation of model and dataset size shows that strong chess performance only arises at sufficient scale. To validate our results, we perform an extensive series of ablations of design choices and hyperparameters. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"fullname": "AK",
"name": "akhaliq",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/wW70dXn3NFUWivMcggrqh.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"samusenps",
"andysalerno",
"osanseviero",
"NeuralNovel"
],
"count": 4
},
{
"reaction": "👍",
"users": [
"fffiloni",
"NeuralNovel",
"rinoa"
],
"count": 3
},
{
"reaction": "🤯",
"users": [
"victor",
"NeuralNovel"
],
"count": 2
}
] | 2024-02-08T15:43:24.000Z | 2024-02-08T15:43:24.802Z | [] | /posts/akhaliq/427624244676120 | 42 | 0 |
918404500694779 | [
{
"type": "text",
"value": "Prompts are hyperparameters. Every time you test a different prompt on your data, you become less sure if the LLM actually generalizes to unseen data.",
"raw": "Prompts are hyperparameters. Every time you test a different prompt on your data, you become less sure if the LLM actually generalizes to unseen data.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Issues of overfitting to a test set seem like concepts from boring times when people still fine-tuned models, but it's just as important for \"zeroshot prompting\". Using a separate validation split to tune the main hyperparameter of LLMs (the prompt) is just as important as train-val-test splitting for fine-tuning. The only difference is that you don't have a training dataset anymore and it somehow feels different because there is no training / no parameter updates.",
"raw": "Issues of overfitting to a test set seem like concepts from boring times when people still fine-tuned models, but it's just as important for \"zeroshot prompting\". Using a separate validation split to tune the main hyperparameter of LLMs (the prompt) is just as important as train-val-test splitting for fine-tuning. The only difference is that you don't have a training dataset anymore and it somehow feels different because there is no training / no parameter updates.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Its easy to trick yourself into believing that an LLM performs well on your task, while you've actually overfit the prompt on your data. Every good \"zeroshot\" paper should clarify that they used a validation split for finding their prompt before final testing.",
"raw": "Its easy to trick yourself into believing that an LLM performs well on your task, while you've actually overfit the prompt on your data. Every good \"zeroshot\" paper should clarify that they used a validation split for finding their prompt before final testing.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Prompts are hyperparameters. Every time you test a different prompt on your data, you become less sure if the LLM actually generalizes to unseen data.
Issues of overfitting to a test set seem like concepts from boring times when people still fine-tuned models, but it's just as important for "zeroshot prompting". Using a separate validation split to tune the main hyperparameter of LLMs (the prompt) is just as important as train-val-test splitting for fine-tuning. The only difference is that you don't have a training dataset anymore and it somehow feels different because there is no training / no parameter updates.
It's easy to trick yourself into believing that an LLM performs well on your task, while you've actually overfit the prompt on your data. Every good "zeroshot" paper should clarify that they used a validation split for finding their prompt before final testing. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1613511937628-5fb15d1e84389b139cf3b508.jpeg",
"fullname": "Moritz Laurer",
"name": "MoritzLaurer",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 236,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"samusenps",
"tollefj",
"merve",
"osanseviero",
"sayhan"
],
"count": 5
},
{
"reaction": "👍",
"users": [
"samusenps",
"peter2000",
"selin1st",
"jnemecek",
"vladi"
],
"count": 5
},
{
"reaction": "🤝",
"users": [
"lbourdois"
],
"count": 1
}
] | 2024-02-08T14:33:05.000Z | 2024-02-08T15:38:11.555Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64137e2150358a805203cbac/w9RQx8Q07UvgFyIZ3ce_k.jpeg",
"fullname": "Jade",
"name": "euclaise",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 89,
"isFollowing": false
}
] | /posts/MoritzLaurer/918404500694779 | 981 | 1 |
837534187508995 | [
{
"type": "text",
"value": "Hi friends, i'am happy to share with you all a tool that i built a week ago or so, i'am talking here about the \"LLM Training Cost Calculator\" - a handy tool now available on Hugging Face Spaces! This interactive Gradio app provides an easy-to-use interface for estimating the training costs of large language models (LLMs).",
"raw": "Hi friends, i'am happy to share with you all a tool that i built a week ago or so, i'am talking here about the \"LLM Training Cost Calculator\" - a handy tool now available on Hugging Face Spaces! This interactive Gradio app provides an easy-to-use interface for estimating the training costs of large language models (LLMs).",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "(I've been asked to provide a report about the cost of finetuning each model etc... so i decided to do the lazy job and build a tool for it, Prof later can choose whatever config he likes 😆)",
"raw": "(I've been asked to provide a report about the cost of finetuning each model etc... so i decided to do the lazy job and build a tool for it, Prof later can choose whatever config he likes 😆)",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔍 But Why this is important?",
"raw": "🔍 But Why this is important?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "As LLMs continue to grow in size and complexity, understanding the computational and financial requirements is crucial for planning and managing AI projects. I believe this tool simplifies this process, giving you insights into potential expenses based on the number of parameters and tokens in your dataset.",
"raw": "As LLMs continue to grow in size and complexity, understanding the computational and financial requirements is crucial for planning and managing AI projects. I believe this tool simplifies this process, giving you insights into potential expenses based on the number of parameters and tokens in your dataset.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🌟 Features:",
"raw": "🌟 Features:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Input the number of parameters (in billions) and tokens (in trillions).",
"raw": "- Input the number of parameters (in billions) and tokens (in trillions).",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Adjust for GPU utilization rates and overhead costs.",
"raw": "- Adjust for GPU utilization rates and overhead costs.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Get an instant estimate of your training costs.",
"raw": "- Get an instant estimate of your training costs.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "+ Choose your GPU (A100 80GB PCle, A100 80GB SXM, V100, H100 SXM, H100 PCle)",
"raw": "+ Choose your GPU (A100 80GB PCle, A100 80GB SXM, V100, H100 SXM, H100 PCle)",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📈 Coming Soon:",
"raw": "📈 Coming Soon:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Plans are in place to expand the calculator's capabilities to include fine-tuning costs for models using LoRA or QLoRA. You'll be able to input a model ID from the Hugging Face Hub, select your fine-tuning strategy, and specify quantization details if using QLoRA.",
"raw": "Plans are in place to expand the calculator's capabilities to include fine-tuning costs for models using LoRA or QLoRA. You'll be able to input a model ID from the Hugging Face Hub, select your fine-tuning strategy, and specify quantization details if using QLoRA.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "I believe this tool will be a valuable asset to the AI community, helping to plan and allocate resources more effectively 🤗.",
"raw": "I believe this tool will be a valuable asset to the AI community, helping to plan and allocate resources more effectively 🤗.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Should you have any suggestions or feedback, please don't hesitate to contribute your thoughts in the comments below. Together, we can refine and enhance this resource for all.",
"raw": "Should you have any suggestions or feedback, please don't hesitate to contribute your thoughts in the comments below. Together, we can refine and enhance this resource for all.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔗 Try it here : ",
"raw": "🔗 Try it here : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/spaces/Ali-C137/LLM-Training-Cost-Calculator",
"href": "https://huggingface.co/spaces/Ali-C137/LLM-Training-Cost-Calculator",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "PS : All thanks to Gradio, Hugging Face and the community ofc 🔥 😉",
"raw": "PS : All thanks to Gradio, Hugging Face and the community ofc 🔥 😉",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Hi friends, I'm happy to share with you all a tool that I built a week ago or so. I'm talking here about the "LLM Training Cost Calculator" - a handy tool now available on Hugging Face Spaces! This interactive Gradio app provides an easy-to-use interface for estimating the training costs of large language models (LLMs).
(I've been asked to provide a report about the cost of fine-tuning each model, etc., so I decided to do the lazy job and build a tool for it; the Prof can later choose whatever config he likes 😆)
🔍 But why is this important?
As LLMs continue to grow in size and complexity, understanding the computational and financial requirements is crucial for planning and managing AI projects. I believe this tool simplifies this process, giving you insights into potential expenses based on the number of parameters and tokens in your dataset.
🌟 Features:
- Input the number of parameters (in billions) and tokens (in trillions).
- Adjust for GPU utilization rates and overhead costs.
- Get an instant estimate of your training costs.
+ Choose your GPU (A100 80GB PCIe, A100 80GB SXM, V100, H100 SXM, H100 PCIe)
📈 Coming Soon:
Plans are in place to expand the calculator's capabilities to include fine-tuning costs for models using LoRA or QLoRA. You'll be able to input a model ID from the Hugging Face Hub, select your fine-tuning strategy, and specify quantization details if using QLoRA.
I believe this tool will be a valuable asset to the AI community, helping to plan and allocate resources more effectively 🤗.
Should you have any suggestions or feedback, please don't hesitate to contribute your thoughts in the comments below. Together, we can refine and enhance this resource for all.
🔗 Try it here : https://huggingface.co/spaces/Ali-C137/LLM-Training-Cost-Calculator
PS : All thanks to Gradio, Hugging Face and the community ofc 🔥 😉 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/626237d9bbcbd1c34f1bb231/3ReQYb07O9QK-SIwZzln-.png"
}
] | [] | [
{
"reaction": "🤗",
"users": [
"victor",
"merve",
"Sylvestre",
"osanseviero",
"ivanfioravanti"
],
"count": 5
},
{
"reaction": "❤️",
"users": [
"samusenps",
"thomwolf",
"tiendung",
"sayhan"
],
"count": 4
},
{
"reaction": "🤯",
"users": [
"victor",
"merve"
],
"count": 2
}
] | 2024-02-08T13:43:32.000Z | 2024-02-08T13:43:32.300Z | [] | /posts/alielfilali01/837534187508995 | 24 | 0 |
421640136777438 | [
{
"type": "text",
"value": "A while ago, I presented this Phi2 DPO fine-tune notebook with LoRa. Got some input from ",
"raw": "A while ago, I presented this Phi2 DPO fine-tune notebook with LoRa. Got some input from ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@ybelkada",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "ybelkada",
"label": null,
"lang": null
},
{
"type": "text",
"value": " about not needing a ",
"raw": " about not needing a ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "inline_code",
"value": null,
"raw": "`ref_model`",
"href": null,
"resource": null,
"url": null,
"code": "ref_model",
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " because we can just swap out the LoRa adapters during training. Cool feature 🤓",
"raw": " because we can just swap out the LoRa adapters during training. Cool feature 🤓",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://colab.research.google.com/drive/1PGMj7jlkJaCiSNNihA2NtpILsRgkRXrJ#scrollTo=wXqoH2TMnjjp",
"href": "https://colab.research.google.com/drive/1PGMj7jlkJaCiSNNihA2NtpILsRgkRXrJ#scrollTo=wXqoH2TMnjjp",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | A while ago, I presented this Phi2 DPO fine-tune notebook with LoRA. Got some input from @ybelkada about not needing a `ref_model` because we can just swap out the LoRA adapters during training. Cool feature 🤓
https://colab.research.google.com/drive/1PGMj7jlkJaCiSNNihA2NtpILsRgkRXrJ#scrollTo=wXqoH2TMnjjp
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 167,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648631057413-noauth.png",
"fullname": "Younes Belkada",
"name": "ybelkada",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 417
}
] | [
{
"reaction": "❤️",
"users": [
"merve",
"samusenps",
"ybelkada",
"alielfilali01",
"sumandas",
"compressionsavant",
"wise-east"
],
"count": 7
},
{
"reaction": "🤝",
"users": [
"ybelkada",
"Stopwolf"
],
"count": 2
}
] | 2024-02-08T13:41:54.000Z | 2024-05-23T00:37:31.153Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648631057413-noauth.png",
"fullname": "Younes Belkada",
"name": "ybelkada",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 417,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 167,
"isFollowing": false
},
{
"avatarUrl": "/avatars/33ce42f3e6fef18ecbf29ef2dde8d457.svg",
"fullname": "Justin Cho",
"name": "wise-east",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/davidberenstein1957/421640136777438 | 200 | 5 |
381054846510238 | [
{
"type": "text",
"value": "Welcome Bunny! A family of lightweight but powerful multimodal models from BAAI",
"raw": "Welcome Bunny! A family of lightweight but powerful multimodal models from BAAI",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "With detailed work on dataset curation, the Bunny-3B model built upon SigLIP and Phi-2 achieves performance on par with 13B models.",
"raw": "With detailed work on dataset curation, the Bunny-3B model built upon SigLIP and Phi-2 achieves performance on par with 13B models.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Model: ",
"raw": "Model: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/BAAI/bunny-phi-2-siglip-lora",
"href": null,
"resource": {
"type": "model",
"id": "BAAI/bunny-phi-2-siglip-lora",
"discussionNum": null
},
"url": "https://huggingface.co/BAAI/bunny-phi-2-siglip-lora",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Welcome Bunny! A family of lightweight but powerful multimodal models from BAAI
With detailed work on dataset curation, the Bunny-3B model built upon SigLIP and Phi-2 achieves performance on par with 13B models.
Model: https://huggingface.co/BAAI/bunny-phi-2-siglip-lora
| {
"avatarUrl": "/avatars/703dd06469aaac724c94f622262b14e8.svg",
"fullname": "Tiezhen WANG",
"name": "xianbao",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 88,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/62d22496c58f969c152bcefd/PCH21Q2QuiROSlvsJF-BL.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"merve",
"osanseviero",
"samusenps",
"AdinaY"
],
"count": 4
}
] | 2024-02-08T12:43:55.000Z | 2024-02-10T11:12:35.276Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5589,
"isFollowing": false
},
{
"avatarUrl": "/avatars/45573691febb8daa674d95bb2b1607ca.svg",
"fullname": "Isaache",
"name": "Isaachhe",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/xianbao/381054846510238 | 283 | 2 |
272652961762587 | [
{
"type": "text",
"value": "There appears to be a huge misunderstanding regarding the licensing requirements for open sourced Chinese speaking speaking LLMs on ",
"raw": "There appears to be a huge misunderstanding regarding the licensing requirements for open sourced Chinese speaking speaking LLMs on ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@huggingface",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "huggingface",
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "I initially shared this misconception too, but after conducting some research, I came up with the list below. ",
"raw": "I initially shared this misconception too, but after conducting some research, I came up with the list below. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Veryimpressive!",
"raw": "Veryimpressive!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | There appears to be a huge misunderstanding regarding the licensing requirements for open sourced Chinese-speaking LLMs on
@huggingface
I initially shared this misconception too, but after conducting some research, I came up with the list below.
Very impressive!
| {
"avatarUrl": "/avatars/703dd06469aaac724c94f622262b14e8.svg",
"fullname": "Tiezhen WANG",
"name": "xianbao",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 88,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/62d22496c58f969c152bcefd/2nzGTiKXEnFTJtm-HF5rx.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"samusenps",
"AdinaY",
"Tonic",
"yahma",
"Tom9000",
"wwizo",
"aloobun",
"sayhan"
],
"count": 9
}
] | 2024-02-08T12:41:58.000Z | 2024-02-08T12:41:58.484Z | [] | /posts/xianbao/272652961762587 | 94 | 0 |
291700537437132 | [
{
"type": "text",
"value": "🌠 Let's try to figure out this one as a community.",
"raw": "🌠 Let's try to figure out this one as a community.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "What reactions should we add to Posts and discussions?",
"raw": "What reactions should we add to Posts and discussions?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "If the reaction you want is already in the replies give it a thumbs up (👍) if it's not just add it as a reply.",
"raw": "If the reaction you want is already in the replies give it a thumbs up (👍) if it's not just add it as a reply.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🌠 Let's try to figure out this one as a community.
What reactions should we add to Posts and discussions?
If the reaction you want is already in the replies give it a thumbs up (👍) if it's not just add it as a reply. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/X7QKoiXbUtEZSG9jyvfk3.jpeg",
"fullname": "Victor Mustar",
"name": "victor",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 2607,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/0ZbXuU3elPJbPfzynr0e8.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"julien-c",
"Sylvestre",
"santiviquez",
"kramp",
"taufiqdp",
"osanseviero",
"ArthurZ",
"samusenps",
"clem",
"mvaloatto",
"Chunte"
],
"count": 11
},
{
"reaction": "❤️",
"users": [
"alielfilali01",
"johnny961"
],
"count": 2
},
{
"reaction": "🤗",
"users": [
"bunnycore",
"NePe"
],
"count": 2
},
{
"reaction": "🤝",
"users": [
"Bkarine"
],
"count": 1
}
] | 2024-02-08T12:16:12.000Z | 2024-05-26T19:57:54.356Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/X7QKoiXbUtEZSG9jyvfk3.jpeg",
"fullname": "Victor Mustar",
"name": "victor",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 2607,
"isFollowing": false
},
{
"avatarUrl": "/avatars/703dd06469aaac724c94f622262b14e8.svg",
"fullname": "Tiezhen WANG",
"name": "xianbao",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 88,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg",
"fullname": "Santiago Viquez",
"name": "santiviquez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674683851722-62441cb7456803e95009a08f.jpeg",
"fullname": "Arthur Zucker",
"name": "ArthurZ",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 294,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/tKcEdCfPIJ6gi9iqw9fPl.jpeg",
"fullname": "Jose Roca",
"name": "waroca",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 5,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6340651b388c3fa40f9a5bc0/av1C4_S7bHGxAzOu8lOmG.jpeg",
"fullname": "Adam Molnar",
"name": "lunarflu",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 333,
"isFollowing": false
},
{
"avatarUrl": "/avatars/c0f13b0d660c45394f55e97e3e7e627b.svg",
"fullname": "Abhishek Verma",
"name": "thor-x-me",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5589,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png",
"fullname": "Omar Sanseviero",
"name": "osanseviero",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2868,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6538119803519fddb4a17e10/ffJMkdx-rM7VvLTCM6ri_.jpeg",
"fullname": "samusenps",
"name": "samusenps",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 91,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6530994e70a88b63f007324d/dv_xSAa12FwUr6cBHFgX_.png",
"fullname": "wbag",
"name": "Walmart-the-bag",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 43,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem 🤗",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1763,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/97oWkFBYwQXo-YPrOzRgF.png",
"fullname": "Kristiyan Lukanov",
"name": "HDRobots",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1675778487155-63d4c8ce13ae45b780792f32.jpeg",
"fullname": "Ohenenoo",
"name": "PeepDaSlan9",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 96,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64aac16fd4a402e8dce11ebe/W640vNvHPWwwG_u4TsRo0.png",
"fullname": "Jorge Vallego",
"name": "neovalle",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 9,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63893d4c184615e463aa24b8/S1flsX_26OF6ZJBVcPlaf.jpeg",
"fullname": "Matt Valoatto",
"name": "mvaloatto",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 56,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/IPtQulJIe7DlzL3GT5LOk.png",
"fullname": "Source of Truth Data Labs",
"name": "sourceoftruthdata",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 10,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63a369d98c0c89dcae3b8329/6OUJ7Hc9T1jXynYH3FGaf.png",
"fullname": "Adina Yakefu",
"name": "AdinaY",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 240,
"isFollowing": false
},
{
"avatarUrl": "/avatars/d744307aff3a30aee73384e574d4a33e.svg",
"fullname": "Bartosz Glinski",
"name": "BGlinek",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/X9OwFnTRMr-lGYFjNH645.png",
"fullname": "Josh Habdas",
"name": "vhsdev",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6459fa0f5b3111fbe83286e1/UhCa7JNbtTjC6dgOjZtH0.jpeg",
"fullname": "Louis Brulé Naudet",
"name": "louisbrulenaudet",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 174,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5dd96eb166059660ed1ee413/NQtzmrDdbG0H8qkZvRyGk.jpeg",
"fullname": "Julien Chaumond",
"name": "julien-c",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1580,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6527e89a8808d80ccff88b7a/CuGNmF1Et8KMQ0mCd1NEJ.jpeg",
"fullname": "Lain",
"name": "not-lain",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 941,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6612aedf09f16e7347dfa7e1/bPYjBXCedY_1fSIPjoBTY.jpeg",
"fullname": "Nishith Jain",
"name": "KingNish",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1079,
"isFollowing": false
}
] | /posts/victor/291700537437132 | 58 | 44 |
944822251740427 | [
{
"type": "text",
"value": "🔍 Today's pick in Interpretability & Analysis of LMs: Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models by C. Agarwal, S.H. Tanneru and H. Lakkaraju",
"raw": "🔍 Today's pick in Interpretability & Analysis of LMs: Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models by C. Agarwal, S.H. Tanneru and H. Lakkaraju",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This work discusses the dichotomy between faithfulness and plausibility in LLMs’ self-explanations (SEs) in natural language (CoT, counterfactual reasoning, and token importance). These explanations tend to be reasonable according to human understanding (plausible) but are not always aligned with the reasoning processes of the LLMs (unfaithful).",
"raw": "This work discusses the dichotomy between faithfulness and plausibility in LLMs’ self-explanations (SEs) in natural language (CoT, counterfactual reasoning, and token importance). These explanations tend to be reasonable according to human understanding (plausible) but are not always aligned with the reasoning processes of the LLMs (unfaithful).",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Authors remark that the increase in plausibility driven by the request for a friendly conversational interface might come at the expense of faithfulness. Provided the faithfulness requirements of many high-stakes real-world settings, authors suggest these are considered when designing and evaluating new explanation methodologies.
Finally, the authors call for a community effort to 1) develop reliable metrics to characterize the faithfulness of explanations and 2) pioneering novel strategies to generate more faithful SEs.",
"raw": "Authors remark that the increase in plausibility driven by the request for a friendly conversational interface might come at the expense of faithfulness. Provided the faithfulness requirements of many high-stakes real-world settings, authors suggest these are considered when designing and evaluating new explanation methodologies.
Finally, the authors call for a community effort to 1) develop reliable metrics to characterize the faithfulness of explanations and 2) pioneering novel strategies to generate more faithful SEs.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📄 Paper: ",
"raw": "📄 Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.04614",
"href": null,
"resource": {
"type": "paper",
"id": "2402.04614",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.04614",
"code": null,
"user": null,
"label": "Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations\n from Large Language Models (2402.04614)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔍 All daily picks in LM interpretability: ",
"raw": "🔍 All daily picks in LM interpretability: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9",
"href": null,
"resource": {
"type": "collection",
"id": "gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9",
"discussionNum": null
},
"url": "https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9",
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔍 Today's pick in Interpretability & Analysis of LMs: Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models by C. Agarwal, S.H. Tanneru and H. Lakkaraju
This work discusses the dichotomy between faithfulness and plausibility in LLMs’ self-explanations (SEs) in natural language (CoT, counterfactual reasoning, and token importance). These explanations tend to be reasonable according to human understanding (plausible) but are not always aligned with the reasoning processes of the LLMs (unfaithful).
Authors remark that the increase in plausibility driven by the request for a friendly conversational interface might come at the expense of faithfulness. Given the faithfulness requirements of many high-stakes real-world settings, the authors suggest these be considered when designing and evaluating new explanation methodologies.
Finally, the authors call for a community effort to 1) develop reliable metrics to characterize the faithfulness of explanations and 2) pioneer novel strategies to generate more faithful SEs.
📄 Paper: https://huggingface.co/papers/2402.04614
🔍 All daily picks in LM interpretability: https://huggingface.co/collections/gsarti/daily-picks-in-interpretability-and-analysis-of-lms-65ae3339949c5675d25de2f9 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/QoER-3T5uSaT9sAZW07VI.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/B-V7Q0JMXn9UuejGkEJol.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"samusenps",
"Bssayla"
],
"count": 3
},
{
"reaction": "👍",
"users": [
"santiviquez",
"osanseviero"
],
"count": 2
}
] | 2024-02-08T10:33:44.000Z | 2024-02-08T10:33:44.303Z | [] | /posts/gsarti/944822251740427 | 6 | 0 |
106604410774079 | [
{
"type": "text",
"value": "#lg #gram #solarllm BREAKING NEWS: ",
"raw": "#lg #gram #solarllm BREAKING NEWS: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "upstage/Solar LLM will soon be available for LG gram Laptops as an on-device LLM. 💻🌞🎉",
"raw": "upstage/Solar LLM will soon be available for LG gram Laptops as an on-device LLM. 💻🌞🎉",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Upstage makes LLMs accessible to everyone and every device. We'd love to see more on-device LLMs.",
"raw": "Upstage makes LLMs accessible to everyone and every device. We'd love to see more on-device LLMs.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://koreajoongangdaily.joins.com/news/2024-02-06/business/industry/LG-Electronics-signs-partnership-with-generative-AI-startup-Upstage-/1975528",
"href": "https://koreajoongangdaily.joins.com/news/2024-02-06/business/industry/LG-Electronics-signs-partnership-with-generative-AI-startup-Upstage-/1975528",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | #lg #gram #solarllm BREAKING NEWS:
upstage/Solar LLM will soon be available for LG gram Laptops as an on-device LLM. 💻🌞🎉
Upstage makes LLMs accessible to everyone and every device. We'd love to see more on-device LLMs.
https://koreajoongangdaily.joins.com/news/2024-02-06/business/industry/LG-Electronics-signs-partnership-with-generative-AI-startup-Upstage-/1975528 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/603c29094a944b99e81476fd/LaSNcrKmCEUBEBZZCce3k.png",
"fullname": "Sung Kim",
"name": "hunkim",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 38,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"samusenps",
"leonardlin",
"osanseviero",
"hf-delta",
"akashicmarga"
],
"count": 5
},
{
"reaction": "🤝",
"users": [
"samusenps",
"osanseviero"
],
"count": 2
},
{
"reaction": "👍",
"users": [
"Norod78",
"danielus"
],
"count": 2
}
] | 2024-02-08T00:04:03.000Z | 2024-02-09T08:44:41.624Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png",
"fullname": "Omar Sanseviero",
"name": "osanseviero",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2868,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62e54f0eae9d3f10acb95cb9/VAyk05hqB3OZWXEZW-B0q.png",
"fullname": "mrfakename",
"name": "mrfakename",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 969,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/603c29094a944b99e81476fd/LaSNcrKmCEUBEBZZCce3k.png",
"fullname": "Sung Kim",
"name": "hunkim",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 38,
"isFollowing": false
}
] | /posts/hunkim/106604410774079 | 270 | 3 |
652405554202758 | [
{
"type": "text",
"value": "From ",
"raw": "From ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@IkariDev",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "IkariDev",
"label": null,
"lang": null
},
{
"type": "text",
"value": " and ",
"raw": " and ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@Undi95",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "Undi95",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "New release from NeverSleep!",
"raw": "New release from NeverSleep!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "MiquMaid-v2-70B",
"raw": "MiquMaid-v2-70B",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/MiquMaid-v2-70B",
"href": null,
"resource": {
"type": "model",
"id": "NeverSleep/MiquMaid-v2-70B",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/MiquMaid-v2-70B",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/MiquMaid-v2-70B-GGUF",
"href": null,
"resource": {
"type": "model",
"id": "NeverSleep/MiquMaid-v2-70B-GGUF",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/MiquMaid-v2-70B-GGUF",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "DPO version",
"raw": "DPO version",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/MiquMaid-v2-70B-DPO",
"href": null,
"resource": {
"type": "model",
"id": "NeverSleep/MiquMaid-v2-70B-DPO",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/MiquMaid-v2-70B-DPO",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/MiquMaid-v2-70B-DPO-GGUF",
"href": null,
"resource": {
"type": "model",
"id": "NeverSleep/MiquMaid-v2-70B-DPO-GGUF",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/MiquMaid-v2-70B-DPO-GGUF",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "MiquMaid-v2-2x70B",
"raw": "MiquMaid-v2-2x70B",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B",
"href": null,
"resource": {
"type": "model",
"id": "NeverSleep/MiquMaid-v2-2x70B",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-GGUF",
"href": null,
"resource": {
"type": "model",
"id": "NeverSleep/MiquMaid-v2-2x70B-GGUF",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-GGUF",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "DPO version",
"raw": "DPO version",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-DPO",
"href": null,
"resource": {
"type": "model",
"id": "NeverSleep/MiquMaid-v2-2x70B-DPO",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-DPO",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-DPO-GGUF",
"href": null,
"resource": {
"type": "model",
"id": "NeverSleep/MiquMaid-v2-2x70B-DPO-GGUF",
"discussionNum": null
},
"url": "https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-DPO-GGUF",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Feedback appreciated!",
"raw": "Feedback appreciated!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | From @IkariDev and @Undi95
New release from NeverSleep!
MiquMaid-v2-70B
https://huggingface.co/NeverSleep/MiquMaid-v2-70B
https://huggingface.co/NeverSleep/MiquMaid-v2-70B-GGUF
DPO version
https://huggingface.co/NeverSleep/MiquMaid-v2-70B-DPO
https://huggingface.co/NeverSleep/MiquMaid-v2-70B-DPO-GGUF
MiquMaid-v2-2x70B
https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B
https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-GGUF
DPO version
https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-DPO
https://huggingface.co/NeverSleep/MiquMaid-v2-2x70B-DPO-GGUF
Feedback appreciated!
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63ab1241ad514ca8d1430003/d-43TcOxG-zqAbzrH2m7H.png",
"fullname": "Undi",
"name": "Undi95",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3311,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/GyexepNNl6JJDxYOadUvx.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1667690511692-630dfb008df86f1e5becadc3.png",
"fullname": "IkariDev",
"name": "IkariDev",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 262
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63ab1241ad514ca8d1430003/d-43TcOxG-zqAbzrH2m7H.png",
"fullname": "Undi",
"name": "Undi95",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3311
}
] | [
{
"reaction": "❤️",
"users": [
"uncensorie",
"frammie",
"hunkim",
"samusenps",
"s3nh",
"Waltert",
"not-lain",
"rewij",
"okeksama",
"IHaBiS"
],
"count": 10
}
] | 2024-02-07T23:49:00.000Z | 2024-02-29T04:11:02.443Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/603c29094a944b99e81476fd/LaSNcrKmCEUBEBZZCce3k.png",
"fullname": "Sung Kim",
"name": "hunkim",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 38,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63ab1241ad514ca8d1430003/d-43TcOxG-zqAbzrH2m7H.png",
"fullname": "Undi",
"name": "Undi95",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3311,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61caeda441f9432649f03ab6/0UdRCrzIqhedZblgfpMBk.png",
"fullname": "s3nh",
"name": "s3nh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 216,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/7GMHy-qdxDKJnSbki-QG1.png",
"fullname": "Sato",
"name": "Papa-Haven",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/Undi95/652405554202758 | 10,973 | 4 |
542340914658080 | [
{
"type": "text",
"value": "Are you up to a 🤗 challenge 🏆 ? ",
"raw": "Are you up to a 🤗 challenge 🏆 ? ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "if so 👀 Check out the new MindBigData Leaderboard 🔥🔥🔥 ",
"raw": "if so 👀 Check out the new MindBigData Leaderboard 🔥🔥🔥 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🚀 ",
"raw": "🚀 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/DavidVivancos/MindBigData-Leaderboard",
"href": null,
"resource": {
"type": "space",
"id": "DavidVivancos/MindBigData-Leaderboard",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/DavidVivancos/MindBigData-Leaderboard",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Decode the \"source\" 🧠 with the largest multimodal opendata of brain signals for Machine Learning.",
"raw": "Decode the \"source\" 🧠 with the largest multimodal opendata of brain signals for Machine Learning.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Try to beat the whopping 🥇98,97% accuracy of Smita Tiwari Shivani Goel School of CSET Bennett University, India and Arpit Bhardwaj BML Munjal University decoding the multiclass Yann LeCun mnist of brain digits captured with EMOTIV Epoc and 🥇89,62% with Insight",
"raw": "Try to beat the whopping 🥇98,97% accuracy of Smita Tiwari Shivani Goel School of CSET Bennett University, India and Arpit Bhardwaj BML Munjal University decoding the multiclass Yann LeCun mnist of brain digits captured with EMOTIV Epoc and 🥇89,62% with Insight",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Or the 🥇96,18% of Dr. Nrushingh Charan Mahapatra Intel Corporation and Prof.(Dr). Prachet Bhuyan Kalinga Institute of Industrial Technology, Bhubaneswar also with the mnist of brain digits but captured with Muse® by Interaxon Inc.",
"raw": "Or the 🥇96,18% of Dr. Nrushingh Charan Mahapatra Intel Corporation and Prof.(Dr). Prachet Bhuyan Kalinga Institute of Industrial Technology, Bhubaneswar also with the mnist of brain digits but captured with Muse® by Interaxon Inc.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Or the 🥇85% of Matthew Zhang Westlake High School and Jeremy Lu now at Purdue University decoding brain images captured from imagenet",
"raw": "Or the 🥇85% of Matthew Zhang Westlake High School and Jeremy Lu now at Purdue University decoding brain images captured from imagenet",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Or be the first to break the 🧊 with the largest open dataset 2023 (8+ billion datapoints), the multimodal MindBigData2023_MNIST-8B captured with a custom 128-channel EEG that I built and with the real 70,000 MNIST digits and put your NVIDIA GPUs to work.",
"raw": "Or be the first to break the 🧊 with the largest open dataset 2023 (8+ billion datapoints), the multimodal MindBigData2023_MNIST-8B captured with a custom 128-channel EEG that I built and with the real 70,000 MNIST digits and put your NVIDIA GPUs to work.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "All the datasets are open and ready at HuggingFace, dare to try?",
"raw": "All the datasets are open and ready at HuggingFace, dare to try?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Hope to see you all soon in the LeaderBoard",
"raw": "Hope to see you all soon in the LeaderBoard",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Thanks",
"raw": "Thanks",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@DavidVivancos",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "DavidVivancos",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Are you up to a 🤗 challenge 🏆 ?
if so 👀 Check out the new MindBigData Leaderboard 🔥🔥🔥
🚀 https://huggingface.co/spaces/DavidVivancos/MindBigData-Leaderboard
Decode the "source" 🧠 with the largest multimodal opendata of brain signals for Machine Learning.
Try to beat the whopping 🥇98,97% accuracy of Smita Tiwari Shivani Goel School of CSET Bennett University, India and Arpit Bhardwaj BML Munjal University decoding the multiclass Yann LeCun mnist of brain digits captured with EMOTIV Epoc and 🥇89,62% with Insight
Or the 🥇96,18% of Dr. Nrushingh Charan Mahapatra Intel Corporation and Prof.(Dr). Prachet Bhuyan Kalinga Institute of Industrial Technology, Bhubaneswar also with the mnist of brain digits but captured with Muse® by Interaxon Inc.
Or the 🥇85% of Matthew Zhang Westlake High School and Jeremy Lu now at Purdue University decoding brain images captured from imagenet
Or be the first to break the 🧊 with the largest open dataset 2023 (8+ billion datapoints), the multimodal MindBigData2023_MNIST-8B captured with a custom 128-channel EEG that I built and with the real 70,000 MNIST digits and put your NVIDIA GPUs to work.
All the datasets are open and ready at HuggingFace, dare to try?
Hope to see you all soon in the LeaderBoard
Thanks
@DavidVivancos | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1671537650254-noauth.jpeg",
"fullname": "David Vivancos",
"name": "DavidVivancos",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 27,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1671537650254-noauth.jpeg",
"fullname": "David Vivancos",
"name": "DavidVivancos",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 27
}
] | [
{
"reaction": "❤️",
"users": [
"TF1920",
"osanseviero",
"clem",
"samusenps",
"victor",
"julien-c",
"clefourrier",
"Tonic"
],
"count": 8
},
{
"reaction": "🤯",
"users": [
"samusenps",
"victor",
"osanseviero",
"Tonic"
],
"count": 4
},
{
"reaction": "👍",
"users": [
"samusenps",
"clefourrier",
"Tonic"
],
"count": 3
},
{
"reaction": "🤝",
"users": [
"samusenps",
"Tonic"
],
"count": 2
},
{
"reaction": "🤗",
"users": [
"Tonic"
],
"count": 1
}
] | 2024-02-07T18:46:26.000Z | 2024-02-09T07:04:46.979Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png",
"fullname": "Clémentine Fourrier",
"name": "clefourrier",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 459,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1671537650254-noauth.jpeg",
"fullname": "David Vivancos",
"name": "DavidVivancos",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 27,
"isFollowing": false
}
] | /posts/DavidVivancos/542340914658080 | 200 | 2 |
104094216142583 | [
{
"type": "text",
"value": "EVA-CLIP 🦖 is the CLIP scaled to the moon! 🔥 ",
"raw": "EVA-CLIP 🦖 is the CLIP scaled to the moon! 🔥 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The new SotA CLIP-like model 🏆 ",
"raw": "The new SotA CLIP-like model 🏆 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Highlights ✨ ",
"raw": "Highlights ✨ ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Performs better in linear probing",
"raw": "- Performs better in linear probing",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Outperforms in Zero-Shot Image-Text Retrieval",
"raw": "- Outperforms in Zero-Shot Image-Text Retrieval",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Higher zero-shot accuracy in IN-1K ",
"raw": "- Higher zero-shot accuracy in IN-1K ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "As usual, try it with the notebook I built for you ",
"raw": "As usual, try it with the notebook I built for you ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://colab.research.google.com/drive/1K7DdCORC3x4qyhwhuB4fT4wcfJ_BQLKw?usp=sharing#scrollTo=0ZS_lJ7SK6Ys",
"href": "https://colab.research.google.com/drive/1K7DdCORC3x4qyhwhuB4fT4wcfJ_BQLKw?usp=sharing#scrollTo=0ZS_lJ7SK6Ys",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "I also built a Space for you to compare the output probabilities to CLIP, seems that EVACLIP is more \"sure\" of its results 😊 ",
"raw": "I also built a Space for you to compare the output probabilities to CLIP, seems that EVACLIP is more \"sure\" of its results 😊 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/merve/EVACLIP",
"href": null,
"resource": {
"type": "space",
"id": "merve/EVACLIP",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/merve/EVACLIP",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The authors have shared 8B checkpoints open with Apache 2.0 license 💜 and it's built on top of transformers, super easy to use! ",
"raw": "The authors have shared 8B checkpoints open with Apache 2.0 license 💜 and it's built on top of transformers, super easy to use! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/BAAI/EVA-CLIP-8B",
"href": null,
"resource": {
"type": "model",
"id": "BAAI/EVA-CLIP-8B",
"discussionNum": null
},
"url": "https://huggingface.co/BAAI/EVA-CLIP-8B",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Read the paper ",
"raw": "Read the paper ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.04252",
"href": null,
"resource": {
"type": "paper",
"id": "2402.04252",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.04252",
"code": null,
"user": null,
"label": "EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters (2402.04252)",
"lang": null
},
{
"type": "text",
"value": " 📄",
"raw": " 📄",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | EVA-CLIP 🦖 is the CLIP scaled to the moon! 🔥
The new SotA CLIP-like model 🏆
Highlights ✨
- Performs better in linear probing
- Outperforms in Zero-Shot Image-Text Retrieval
- Higher zero-shot accuracy in IN-1K
As usual, try it with the notebook I built for you https://colab.research.google.com/drive/1K7DdCORC3x4qyhwhuB4fT4wcfJ_BQLKw?usp=sharing#scrollTo=0ZS_lJ7SK6Ys
I also built a Space for you to compare the output probabilities to CLIP, seems that EVACLIP is more "sure" of its results 😊 https://huggingface.co/spaces/merve/EVACLIP
The authors have shared 8B checkpoints open with Apache 2.0 license 💜 and it's built on top of transformers, super easy to use! https://huggingface.co/BAAI/EVA-CLIP-8B
Read the paper https://huggingface.co/papers/2402.04252 📄 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5589,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/qwFv_TZgJIZm0g1p_Deg0.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"victor",
"osanseviero",
"santiviquez",
"clem",
"AIIAR",
"samusenps",
"danielus",
"boapps",
"qiying"
],
"count": 9
}
] | 2024-02-07T17:57:09.000Z | 2024-02-07T18:37:17.635Z | [] | /posts/merve/104094216142583 | 105 | 0 |
312216112088485 | [
{
"type": "text",
"value": "Super excited to share my project, ageML, here! 😊",
"raw": "Super excited to share my project, ageML, here! 😊",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "ageML is a Python library I've been building to study the temporal performance degradation of ML models. ",
"raw": "ageML is a Python library I've been building to study the temporal performance degradation of ML models. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The goal of the project is to facilitate the exploration of performance degradation by providing tools for people to easily test how their models would evolve over time when trained and evaluated on different subsets of their data.",
"raw": "The goal of the project is to facilitate the exploration of performance degradation by providing tools for people to easily test how their models would evolve over time when trained and evaluated on different subsets of their data.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "⭐ Check it out: ",
"raw": "⭐ Check it out: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/santiviquez/ageml",
"href": "https://github.com/santiviquez/ageml",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Super excited to share my project, ageML, here! 😊
ageML is a Python library I've been building to study the temporal performance degradation of ML models.
The goal of the project is to facilitate the exploration of performance degradation by providing tools for people to easily test how their models would evolve over time when trained and evaluated on different subsets of their data.
⭐ Check it out: https://github.com/santiviquez/ageml | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg",
"fullname": "Santiago Viquez",
"name": "santiviquez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/r5lO5aDTXG40ZSyW8-K7L.jpeg"
}
] | [] | [
{
"reaction": "🤯",
"users": [
"victor",
"osanseviero",
"kramp",
"clem",
"hunkim",
"samusenps",
"merve",
"abhishekvarma12345",
"MarinaraSpaghetti",
"eevee32x"
],
"count": 10
},
{
"reaction": "👍",
"users": [
"Kacper17",
"MarinaraSpaghetti"
],
"count": 2
}
] | 2024-02-07T16:03:01.000Z | 2024-03-01T09:35:20.086Z | [] | /posts/santiviquez/312216112088485 | 4 | 1 |
109705985619692 | [
{
"type": "text",
"value": "Self-Discover",
"raw": "Self-Discover",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Large Language Models Self-Compose Reasoning Structures",
"raw": "Large Language Models Self-Compose Reasoning Structures",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "paper page: ",
"raw": "paper page: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.03620",
"href": null,
"resource": {
"type": "paper",
"id": "2402.03620",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.03620",
"code": null,
"user": null,
"label": "Self-Discover: Large Language Models Self-Compose Reasoning Structures (2402.03620)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "SELF-DISCOVER substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x fewer inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns.",
"raw": "SELF-DISCOVER substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x fewer inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Self-Discover
Large Language Models Self-Compose Reasoning Structures
paper page: https://huggingface.co/papers/2402.03620
SELF-DISCOVER substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x fewer inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns.
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"fullname": "AK",
"name": "akhaliq",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/RJPAKG_EAOJVJakE12F0V.png"
}
] | [] | [
{
"reaction": "🤗",
"users": [
"osanseviero",
"Erland",
"clem",
"vaibhavgeek",
"merve",
"akashicmarga"
],
"count": 6
}
] | 2024-02-07T15:36:57.000Z | 2024-02-07T15:36:57.746Z | [] | /posts/akhaliq/109705985619692 | 42 | 0 |
715501509770457 | [
{
"type": "text",
"value": "Memphis: Advancing language model reasoning without relying on proprietary model outputs",
"raw": "Memphis: Advancing language model reasoning without relying on proprietary model outputs",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Memphis is a series of models which advance human-data models, offering good performance without relying on proprietary model outputs (e.g. GPT-generated datasets). I've developed a new iterative finetuning procedure to improve the reasoning ability of these models beyond what is possible using only SFT on the same data.",
"raw": "Memphis is a series of models which advance human-data models, offering good performance without relying on proprietary model outputs (e.g. GPT-generated datasets). I've developed a new iterative finetuning procedure to improve the reasoning ability of these models beyond what is possible using only SFT on the same data.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Currently, I've released two models: Memphis-CoT-3B, and Memphis-scribe-3B.",
"raw": "Currently, I've released two models: Memphis-CoT-3B, and Memphis-scribe-3B.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "To create these models, I've created new datasets:",
"raw": "To create these models, I've created new datasets:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/euclaise/reddit-instruct",
"href": null,
"resource": {
"type": "dataset",
"id": "euclaise/reddit-instruct",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/euclaise/reddit-instruct",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " : A dataset of instruction/QA-like data scraped from Reddit. A curated version, filtered using Lilac and neural embedding models, is available at ",
"raw": " : A dataset of instruction/QA-like data scraped from Reddit. A curated version, filtered using Lilac and neural embedding models, is available at ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/euclaise/reddit-instruct-curated",
"href": null,
"resource": {
"type": "dataset",
"id": "euclaise/reddit-instruct-curated",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/euclaise/reddit-instruct-curated",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/euclaise/TinyCoT",
"href": null,
"resource": {
"type": "dataset",
"id": "euclaise/TinyCoT",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/euclaise/TinyCoT",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " : TinyCoT is a mtea-dataset that aggregates a variety of different human-sourced reasoning data. It is a curated version of my previous MegaCoT dataset ",
"raw": " : TinyCoT is a mtea-dataset that aggregates a variety of different human-sourced reasoning data. It is a curated version of my previous MegaCoT dataset ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/euclaise/MegaCoT",
"href": null,
"resource": {
"type": "dataset",
"id": "euclaise/MegaCoT",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/euclaise/MegaCoT",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ", which contains 629k responses which get cut down to 28k for TinyCoT. There's also an intermediate version ",
"raw": ", which contains 629k responses which get cut down to 28k for TinyCoT. There's also an intermediate version ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/euclaise/MiniCoT",
"href": null,
"resource": {
"type": "dataset",
"id": "euclaise/MiniCoT",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/euclaise/MiniCoT",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ", which has 129k responses.",
"raw": ", which has 129k responses.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Memphis-CoT is trained on reddit-instruct, a filtered version of oasst2 ",
"raw": "Memphis-CoT is trained on reddit-instruct, a filtered version of oasst2 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/sablo/oasst2_curated",
"href": null,
"resource": {
"type": "dataset",
"id": "sablo/oasst2_curated",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/sablo/oasst2_curated",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ", and TinyCoT. Multiple iterations were performed on TinyCoT, while reddit-instruct and oasst2 were only used for the initial model.",
"raw": ", and TinyCoT. Multiple iterations were performed on TinyCoT, while reddit-instruct and oasst2 were only used for the initial model.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Memphis-scribe further finetunes Memphis-CoT on more creative tasks. It was finetuned from Memphis-CoT on 18 different datasets, including datasets like ",
"raw": "Memphis-scribe further finetunes Memphis-CoT on more creative tasks. It was finetuned from Memphis-CoT on 18 different datasets, including datasets like ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/euclaise/WritingPrompts_curated",
"href": null,
"resource": {
"type": "dataset",
"id": "euclaise/WritingPrompts_curated",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/euclaise/WritingPrompts_curated",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ", ",
"raw": ", ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/lemonilia/LimaRP",
"href": null,
"resource": {
"type": "dataset",
"id": "lemonilia/LimaRP",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/lemonilia/LimaRP",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ", and more.",
"raw": ", and more.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "To prevent catastrophic forgetting, I used weight averaging between iterations.",
"raw": "To prevent catastrophic forgetting, I used weight averaging between iterations.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/euclaise/Memphis-CoT-3B",
"href": null,
"resource": {
"type": "model",
"id": "euclaise/Memphis-CoT-3B",
"discussionNum": null
},
"url": "https://huggingface.co/euclaise/Memphis-CoT-3B",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- ",
"raw": "- ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/euclaise/Memphis-scribe-3B",
"href": null,
"resource": {
"type": "model",
"id": "euclaise/Memphis-scribe-3B",
"discussionNum": null
},
"url": "https://huggingface.co/euclaise/Memphis-scribe-3B",
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Memphis: Advancing language model reasoning without relying on proprietary model outputs
Memphis is a series of models which advance human-data models, offering good performance without relying on proprietary model outputs (e.g. GPT-generated datasets). I've developed a new iterative finetuning procedure to improve the reasoning ability of these models beyond what is possible using only SFT on the same data.
Currently, I've released two models: Memphis-CoT-3B, and Memphis-scribe-3B.
To create these models, I've created new datasets:
- https://huggingface.co/datasets/euclaise/reddit-instruct : A dataset of instruction/QA-like data scraped from Reddit. A curated version, filtered using Lilac and neural embedding models, is available at https://huggingface.co/datasets/euclaise/reddit-instruct-curated
- https://huggingface.co/datasets/euclaise/TinyCoT : TinyCoT is a meta-dataset that aggregates a variety of different human-sourced reasoning data. It is a curated version of my previous MegaCoT dataset https://huggingface.co/datasets/euclaise/MegaCoT, which contains 629k responses which get cut down to 28k for TinyCoT. There's also an intermediate version https://huggingface.co/datasets/euclaise/MiniCoT, which has 129k responses.
Memphis-CoT is trained on reddit-instruct, a filtered version of oasst2 https://huggingface.co/datasets/sablo/oasst2_curated, and TinyCoT. Multiple iterations were performed on TinyCoT, while reddit-instruct and oasst2 were only used for the initial model.
Memphis-scribe further finetunes Memphis-CoT on more creative tasks. It was finetuned from Memphis-CoT on 18 different datasets, including datasets like https://huggingface.co/datasets/euclaise/WritingPrompts_curated, https://huggingface.co/datasets/lemonilia/LimaRP, and more.
To prevent catastrophic forgetting, I used weight averaging between iterations.
- https://huggingface.co/euclaise/Memphis-CoT-3B
- https://huggingface.co/euclaise/Memphis-scribe-3B | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64137e2150358a805203cbac/w9RQx8Q07UvgFyIZ3ce_k.jpeg",
"fullname": "Jade",
"name": "euclaise",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 89,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64137e2150358a805203cbac/lzLubQTCE4KMhKhk4YjDL.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64137e2150358a805203cbac/UkXoUIALf_L6X2rMVeJgE.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"lbourdois",
"victor",
"samusenps",
"xsa-dev",
"merve",
"ABIN1012",
"d0rj",
"alielfilali01",
"reciprocate",
"HailJebus",
"qqqzzzyyy"
],
"count": 12
}
] | 2024-02-07T15:14:24.000Z | 2024-02-13T01:07:13.792Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64137e2150358a805203cbac/w9RQx8Q07UvgFyIZ3ce_k.jpeg",
"fullname": "Jade",
"name": "euclaise",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 89,
"isFollowing": false
}
] | /posts/euclaise/715501509770457 | 669 | 2 |
298383919066060 | [
{
"type": "text",
"value": "🔍 Today's pick in Interpretability & Analysis of LMs: INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection by ",
"raw": "🔍 Today's pick in Interpretability & Analysis of LMs: INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@chaochen",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "chaochen",
"label": null,
"lang": null
},
{
"type": "text",
"value": " et al.",
"raw": " et al.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Previous efforts in detecting hallucinations using model intrinsic information employed predictive uncertainty or self-consistency to detect evaluation. Authors contend that in these procedure the rich semantic information captured in model embeddings is inevitably lost while decoding tokens.",
"raw": "Previous efforts in detecting hallucinations using model intrinsic information employed predictive uncertainty or self-consistency to detect evaluation. Authors contend that in these procedure the rich semantic information captured in model embeddings is inevitably lost while decoding tokens.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "To prevent this information loss they propose EigenScore, an internal measure of responses’ self-consistency using the eigenvalues of sampled responses' covariance matrix in intermediate model layers to quantify answers’ diversity in the dense embedding space.
Results show that EigenScore outperforms logit-level methods for hallucination detection on QA tasks, especially when paired with inference time feature clipping to truncate extreme activations, reducing overconfident generations.",
"raw": "To prevent this information loss they propose EigenScore, an internal measure of responses’ self-consistency using the eigenvalues of sampled responses' covariance matrix in intermediate model layers to quantify answers’ diversity in the dense embedding space.
Results show that EigenScore outperforms logit-level methods for hallucination detection on QA tasks, especially when paired with inference time feature clipping to truncate extreme activations, reducing overconfident generations.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📄 Paper: ",
"raw": "📄 Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.03744",
"href": null,
"resource": {
"type": "paper",
"id": "2402.03744",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.03744",
"code": null,
"user": null,
"label": "INSIDE: LLMs' Internal States Retain the Power of Hallucination\n Detection (2402.03744)",
"lang": null
}
] | 🔍 Today's pick in Interpretability & Analysis of LMs: INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection by @chaochen et al.
Previous efforts at detecting hallucinations using model-intrinsic information employed predictive uncertainty or self-consistency. The authors contend that in these procedures, the rich semantic information captured in model embeddings is inevitably lost while decoding tokens.
To prevent this information loss they propose EigenScore, an internal measure of responses’ self-consistency using the eigenvalues of sampled responses' covariance matrix in intermediate model layers to quantify answers’ diversity in the dense embedding space.
Results show that EigenScore outperforms logit-level methods for hallucination detection on QA tasks, especially when paired with inference time feature clipping to truncate extreme activations, reducing overconfident generations.
📄 Paper: https://huggingface.co/papers/2402.03744 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/MPSwD2OnzX0jsU3yfwpBx.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/Y1UBu2PeQL5abMTBcxkGT.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/zzWBPpeiwwglj4Z_sv3QH.png"
}
] | [
{
"avatarUrl": "/avatars/3191ea937322456e238725a9929a73c0.svg",
"fullname": "chaochen",
"name": "chaochen",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2
}
] | [
{
"reaction": "👍",
"users": [
"chaochen",
"victor",
"osanseviero"
],
"count": 3
},
{
"reaction": "❤️",
"users": [
"chaochen",
"santiviquez",
"samusenps"
],
"count": 3
}
] | 2024-02-07T09:41:25.000Z | 2024-02-09T18:23:53.727Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61898e700f17069e911ba061/wi0oeSs9f2uzuZpKm4y5y.png",
"fullname": "Deema",
"name": "Deema",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
}
] | /posts/gsarti/298383919066060 | 28 | 2 |
159002818180998 | [
{
"type": "text",
"value": "Introducing Vision Arena (beta)! Based on the lmsys's ChatbotArena, we create a simple demo for testing different Vision LMs (VLMs). We now support GPT-4V, Gemini-Pro-Vision, and Llava. More updates and models will come soon! We are still in the development stage and for now and we'd love to hear your feedback and suggestions! Please help us vote for better VLMs in your own use cases here! :D Kudos to Yujie Lu (UCSB)! ",
"raw": "Introducing Vision Arena (beta)! Based on the lmsys's ChatbotArena, we create a simple demo for testing different Vision LMs (VLMs). We now support GPT-4V, Gemini-Pro-Vision, and Llava. More updates and models will come soon! We are still in the development stage and for now and we'd love to hear your feedback and suggestions! Please help us vote for better VLMs in your own use cases here! :D Kudos to Yujie Lu (UCSB)! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/WildVision/vision-arena",
"href": null,
"resource": {
"type": "space",
"id": "WildVision/vision-arena",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/WildVision/vision-arena",
"code": null,
"user": null,
"label": null,
"lang": null
}
Introducing Vision Arena (beta)! Based on lmsys's ChatbotArena, we created a simple demo for testing different Vision LMs (VLMs). We now support GPT-4V, Gemini-Pro-Vision, and Llava. More updates and models will come soon! We are still in the development stage for now, and we'd love to hear your feedback and suggestions! Please help us vote for better VLMs in your own use cases here! :D Kudos to Yujie Lu (UCSB)!
https://huggingface.co/spaces/WildVision/vision-arena | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/607f666a4ad99100d63ce35c/QxhxnvfeV6efkxwUFHwjI.png",
"fullname": "Bill Yuchen Lin",
"name": "yuchenlin",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 64,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/607f666a4ad99100d63ce35c/bmsLc4nD-281vWnaqe_ku.qt"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"merve",
"yuchenlin",
"Dlbk",
"victor",
"osanseviero",
"lewtun",
"clefourrier",
"multimodalart",
"FunCube",
"JustinLin610",
"VictorSanh",
"drogozhang",
"samusenps",
"tdambrowitz",
"AdinaY",
"natolambert",
"vishaal27",
"MoritzLaurer",
"taesiri"
],
"count": 19
}
] | 2024-02-07T07:16:00.000Z | 2024-02-07T10:05:04.168Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png",
"fullname": "Omar Sanseviero",
"name": "osanseviero",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2868,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png",
"fullname": "Clémentine Fourrier",
"name": "clefourrier",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 459,
"isFollowing": false
},
{
"avatarUrl": "/avatars/d80f1e6341f7191d431e317ca3a4ac60.svg",
"fullname": "jayasurya",
"name": "surya47",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
}
] | /posts/yuchenlin/159002818180998 | 1,124 | 3 |
420197903529550 | [
{
"type": "text",
"value": "We've been busy cooking up some interesting models at ",
"raw": "We've been busy cooking up some interesting models at ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@jinaai",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "jinaai",
"label": null,
"lang": null
},
{
"type": "text",
"value": ", with a recent highlight being the release of our first batch of bilingual embedding models.",
"raw": ", with a recent highlight being the release of our first batch of bilingual embedding models.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Internally labeled as ",
"raw": "Internally labeled as ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "inline_code",
"value": null,
"raw": "`X+EN`",
"href": null,
"resource": null,
"url": null,
"code": "X+EN",
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ", where X represents the target language and ",
"raw": ", where X represents the target language and ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "inline_code",
"value": null,
"raw": "`EN`",
"href": null,
"resource": null,
"url": null,
"code": "EN",
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " stays fixed, these models specialize in both monolingual tasks and cross-lingual retrieval tasks, crossing from X to EN.",
"raw": " stays fixed, these models specialize in both monolingual tasks and cross-lingual retrieval tasks, crossing from X to EN.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "You can find these models available on Huggingface:",
"raw": "You can find these models available on Huggingface:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "1. German-English bilingual embedding: ",
"raw": "1. German-English bilingual embedding: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/jinaai/jina-embeddings-v2-base-de",
"href": null,
"resource": {
"type": "model",
"id": "jinaai/jina-embeddings-v2-base-de",
"discussionNum": null
},
"url": "https://huggingface.co/jinaai/jina-embeddings-v2-base-de",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2. Chinese-English bilingual embedding: ",
"raw": "2. Chinese-English bilingual embedding: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/jinaai/jina-embeddings-v2-base-zh",
"href": null,
"resource": {
"type": "model",
"id": "jinaai/jina-embeddings-v2-base-zh",
"discussionNum": null
},
"url": "https://huggingface.co/jinaai/jina-embeddings-v2-base-zh",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "We're also excited to announce that a Spanish bilingual embedding will be released in approximately two weeks.",
"raw": "We're also excited to announce that a Spanish bilingual embedding will be released in approximately two weeks.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Our evaluation across various MLM tasks has demonstrated that the Bilingual Backbone consistently outperforms state-of-the-art Multilingual Backbones like XLM-Roberta (given its focus on just two languages).",
"raw": "Our evaluation across various MLM tasks has demonstrated that the Bilingual Backbone consistently outperforms state-of-the-art Multilingual Backbones like XLM-Roberta (given its focus on just two languages).",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Despite being three times smaller than the leading multilingual models (e5-multilingual-large), our released bilingual embedding models have shown superior performance compared to e5-multilingual-large, excelling in both monolingual and cross-lingual search tasks.",
"raw": "Despite being three times smaller than the leading multilingual models (e5-multilingual-large), our released bilingual embedding models have shown superior performance compared to e5-multilingual-large, excelling in both monolingual and cross-lingual search tasks.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Currently, we're putting the finishing touches on the technical report, which should be available on Arxiv by next week.",
"raw": "Currently, we're putting the finishing touches on the technical report, which should be available on Arxiv by next week.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Looking ahead, the embedding team is gearing up for ",
"raw": "Looking ahead, the embedding team is gearing up for ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "inline_code",
"value": null,
"raw": "`jina-embeddings-v3`",
"href": null,
"resource": null,
"url": null,
"code": "jina-embeddings-v3",
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " with some initial groundwork already underway. Stay tuned for more updates!",
"raw": " with some initial groundwork already underway. Stay tuned for more updates!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | We've been busy cooking up some interesting models at @jinaai, with a recent highlight being the release of our first batch of bilingual embedding models.
Internally labeled as `X+EN`, where X represents the target language and `EN` stays fixed, these models specialize in both monolingual tasks and cross-lingual retrieval tasks, crossing from X to EN.
You can find these models available on Huggingface:
1. German-English bilingual embedding: https://huggingface.co/jinaai/jina-embeddings-v2-base-de
2. Chinese-English bilingual embedding: https://huggingface.co/jinaai/jina-embeddings-v2-base-zh
We're also excited to announce that a Spanish bilingual embedding will be released in approximately two weeks.
Our evaluation across various MLM tasks has demonstrated that the Bilingual Backbone consistently outperforms state-of-the-art Multilingual Backbones like XLM-Roberta (given its focus on just two languages).
Despite being three times smaller than the leading multilingual models (e5-multilingual-large), our released bilingual embedding models have shown superior performance compared to e5-multilingual-large, excelling in both monolingual and cross-lingual search tasks.
Currently, we're putting the finishing touches on the technical report, which should be available on Arxiv by next week.
Looking ahead, the embedding team is gearing up for `jina-embeddings-v3`
with some initial groundwork already underway. Stay tuned for more updates! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63491dc83d8dc83a55cb749c/IoqJrOIaEnYO_S7si4KGp.jpeg",
"fullname": "Bo Wang",
"name": "bwang0911",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1824,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👍",
"users": [
"KnutJaegersberg",
"merve",
"Dlbk",
"osanseviero",
"tomaarsen",
"FremyCompany",
"victor"
],
"count": 7
},
{
"reaction": "❤️",
"users": [
"merve",
"Dlbk",
"osanseviero"
],
"count": 3
}
] | 2024-02-07T05:23:00.000Z | 2024-02-08T04:58:04.930Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669551186189-63732ebbbd81fae2b3aaf3fb.jpeg",
"fullname": "Knut Jägersberg",
"name": "KnutJaegersberg",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 238,
"isFollowing": false
}
] | /posts/bwang0911/420197903529550 | 166 | 1 |
764363519759697 | [
{
"type": "text",
"value": "Yesterday we just released Qwen1.5. Maybe someday I can tell more about the experience. But this is is at least a good release even if it is not yet SOTA. There is not so many SOTA by the way. This time, we actually fixed a lot of problems. ",
"raw": "Yesterday we just released Qwen1.5. Maybe someday I can tell more about the experience. But this is is at least a good release even if it is not yet SOTA. There is not so many SOTA by the way. This time, we actually fixed a lot of problems. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "1. Context lengths are finally unified for all sizes. Previously, a lot of users kept telling us that 14B only supports 2K (Yeah even dynamic NTK does not work that well and it can only be extended to around 4-5K. Let alone those know nothing about how to use dynamic NTK).",
"raw": "1. Context lengths are finally unified for all sizes. Previously, a lot of users kept telling us that 14B only supports 2K (Yeah even dynamic NTK does not work that well and it can only be extended to around 4-5K. Let alone those know nothing about how to use dynamic NTK).",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2. If you carefully use our base language models, you will find that they understand special tokens of ChatML, which means that you can directly use LoRA to train on data with ChatML format. Why you can't do this before? This is because if the base language model does not understand the special tokens, you need to make them trained, which means that you should turn on the training of embedding. This is disgusting and it often leads to problems when you use ZeRO3.",
"raw": "2. If you carefully use our base language models, you will find that they understand special tokens of ChatML, which means that you can directly use LoRA to train on data with ChatML format. Why you can't do this before? This is because if the base language model does not understand the special tokens, you need to make them trained, which means that you should turn on the training of embedding. This is disgusting and it often leads to problems when you use ZeRO3.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "3. We did strengthen our base language models except for 72. You should find better base language models, especially for 7 and 14. Why not 72? Nah, hard to say, but will make it better. ",
"raw": "3. We did strengthen our base language models except for 72. You should find better base language models, especially for 7 and 14. Why not 72? Nah, hard to say, but will make it better. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "4. About the multilingual capabilities. Yes we finally build up our multilingual evaluation system and find out that our new base language models have nice performance in multilingual evaluation for base language models. This tells us that we should pay more attention to the post-training with multilingual data. And we did that too. This is why this time we tell you something about multilingual performance. It is for sure much much better than our models before this release.",
"raw": "4. About the multilingual capabilities. Yes we finally build up our multilingual evaluation system and find out that our new base language models have nice performance in multilingual evaluation for base language models. This tells us that we should pay more attention to the post-training with multilingual data. And we did that too. This is why this time we tell you something about multilingual performance. It is for sure much much better than our models before this release.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "5. Chat models are the most promising stuff. Before this release, we gave you the SFT models. But this time, we had very nice SFT+DPO models. Yeah not only annotators like them but also users like them. I am sure you developers will feel that way too. ",
"raw": "5. Chat models are the most promising stuff. Before this release, we gave you the SFT models. But this time, we had very nice SFT+DPO models. Yeah not only annotators like them but also users like them. I am sure you developers will feel that way too. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Yesterday we just released Qwen1.5. Maybe someday I can tell more about the experience. But this is at least a good release, even if it is not yet SOTA. There are not many SOTA models, by the way. This time, we actually fixed a lot of problems.
1. Context lengths are finally unified for all sizes. Previously, a lot of users kept telling us that 14B only supports 2K (yeah, even dynamic NTK does not work that well and can only extend it to around 4-5K, let alone for those who know nothing about how to use dynamic NTK).
2. If you carefully use our base language models, you will find that they understand the special tokens of ChatML, which means you can directly use LoRA to train on data in ChatML format. Why couldn't you do this before? Because if the base language model does not understand the special tokens, you need to train them, which means you have to enable training of the embedding layer. This is disgusting and it often leads to problems when you use ZeRO3.
3. We did strengthen our base language models, except for the 72B. You should find better base language models, especially the 7B and 14B. Why not the 72B? Nah, hard to say, but we will make it better.
4. About the multilingual capabilities: yes, we finally built up our multilingual evaluation system and found that our new base language models perform well on multilingual evaluations. This tells us that we should pay more attention to post-training with multilingual data. And we did that too. This is why this time we tell you something about multilingual performance. It is for sure much, much better than our models before this release.
5. Chat models are the most promising stuff. Before this release, we gave you the SFT models. But this time, we had very nice SFT+DPO models. Yeah not only annotators like them but also users like them. I am sure you developers will feel that way too.
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/VC-rKqimF6yxGESNVlPoR.jpeg",
"fullname": "Junyang Lin",
"name": "JustinLin610",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 132,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"s3nh",
"osanseviero",
"Pclanglais",
"andysalerno",
"hf-delta",
"huodon",
"KnutJaegersberg",
"cin-hubert",
"xu3kev",
"hiyouga",
"merve",
"Stopwolf",
"Dlbk",
"ShushengYang",
"lewtun",
"Ji-Ha",
"xianbao",
"euclaise",
"smerchi",
"AdinaY",
"Sangmin",
"philschmid",
"joaogante",
"victor",
"nateraw",
"dlimeng",
"bowiehsu",
"jkeisling",
"HDiffusion",
"EditaZ"
],
"count": 30
},
{
"reaction": "🤯",
"users": [
"osanseviero",
"merve",
"xianbao",
"AdinaY",
"philschmid"
],
"count": 5
},
{
"reaction": "👍",
"users": [
"krumeto",
"xianbao",
"AdinaY",
"philschmid"
],
"count": 4
}
] | 2024-02-06T17:06:48.000Z | 2024-02-07T15:24:58.940Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61caeda441f9432649f03ab6/0UdRCrzIqhedZblgfpMBk.png",
"fullname": "s3nh",
"name": "s3nh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 216,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64ce091a9e9ca8123d7a42b0/OEPggp82RwigxNLL35LgT.jpeg",
"fullname": "Pierre-Carl Langlais",
"name": "Pclanglais",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 191,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669551186189-63732ebbbd81fae2b3aaf3fb.jpeg",
"fullname": "Knut Jägersberg",
"name": "KnutJaegersberg",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 238,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/620760a26e3b7210c2ff1943/VC-rKqimF6yxGESNVlPoR.jpeg",
"fullname": "Junyang Lin",
"name": "JustinLin610",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 132,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64137e2150358a805203cbac/w9RQx8Q07UvgFyIZ3ce_k.jpeg",
"fullname": "Jade",
"name": "euclaise",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 89,
"isFollowing": false
}
] | /posts/JustinLin610/764363519759697 | 104 | 5 |
733202908862586 | [
{
"type": "text",
"value": "DeepSeekMath",
"raw": "DeepSeekMath",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Pushing the Limits of Mathematical Reasoning in Open Language Models",
"raw": "Pushing the Limits of Mathematical Reasoning in Open Language Models",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "paper page: ",
"raw": "paper page: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.03300",
"href": null,
"resource": {
"type": "paper",
"id": "2402.03300",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.03300",
"code": null,
"user": null,
"label": "DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open\n Language Models (2402.03300)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "DeepSeekMath 7B has achieved an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits and voting techniques, approaching the performance level of Gemini-Ultra and GPT-4",
"raw": "DeepSeekMath 7B has achieved an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits and voting techniques, approaching the performance level of Gemini-Ultra and GPT-4",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | DeepSeekMath
Pushing the Limits of Mathematical Reasoning in Open Language Models
paper page: https://huggingface.co/papers/2402.03300
DeepSeekMath 7B has achieved an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits and voting techniques, approaching the performance level of Gemini-Ultra and GPT-4
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"fullname": "AK",
"name": "akhaliq",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/AJI9U8MjqvFltGz7m0BLq.jpeg"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"clem",
"ajibawa-2023",
"osanseviero",
"merve",
"Dlbk",
"andreinigo",
"distantquant",
"hunkim",
"alielfilali01",
"Sattineez",
"mysticaltech"
],
"count": 11
}
] | 2024-02-06T15:28:48.000Z | 2024-02-08T00:14:53.534Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64aea8ff67511bd3d965697b/Jxn52EmDF5RApJh8antxn.jpeg",
"fullname": "Feynman Innovations",
"name": "ajibawa-2023",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 138,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/603c29094a944b99e81476fd/LaSNcrKmCEUBEBZZCce3k.png",
"fullname": "Sung Kim",
"name": "hunkim",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 38,
"isFollowing": false
}
] | /posts/akhaliq/733202908862586 | 41 | 2 |
809039256258115 | [
{
"type": "text",
"value": "Understanding BARTScore 🛹",
"raw": "Understanding BARTScore 🛹",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "BARTScore is a text-generation evaluation metric that treats model evaluation as a text-generation task 🔄",
"raw": "BARTScore is a text-generation evaluation metric that treats model evaluation as a text-generation task 🔄",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Other metrics approach the evaluation problem from different ML task perspectives; for instance, ROUGE and BLUE formulate it as an unsupervised matching task, BLUERT and COMET as a supervised regression, and BEER as a supervised ranking task.",
"raw": "Other metrics approach the evaluation problem from different ML task perspectives; for instance, ROUGE and BLUE formulate it as an unsupervised matching task, BLUERT and COMET as a supervised regression, and BEER as a supervised ranking task.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Meanwhile, BARTScore formulates it as a text-generation task. Its idea is to leverage BART's pre-trained contextual embeddings to return a score that measures either the faithfulness, precision, recall, or F-score response of the main text-generation model.",
"raw": "Meanwhile, BARTScore formulates it as a text-generation task. Its idea is to leverage BART's pre-trained contextual embeddings to return a score that measures either the faithfulness, precision, recall, or F-score response of the main text-generation model.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "For example, if we want to measure faithfulness, the way it works is that we would take the source and the generated text from our model and use BART to calculate the log token probability of the generated text given the source; we can then weight those results and return the sum.",
"raw": "For example, if we want to measure faithfulness, the way it works is that we would take the source and the generated text from our model and use BART to calculate the log token probability of the generated text given the source; we can then weight those results and return the sum.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "BARTScore correlates nicely with human scores, and it is relatively simple to implement.",
"raw": "BARTScore correlates nicely with human scores, and it is relatively simple to implement.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📑 Here is the original BARTScore paper: ",
"raw": "📑 Here is the original BARTScore paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2106.11520",
"href": null,
"resource": {
"type": "paper",
"id": "2106.11520",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2106.11520",
"code": null,
"user": null,
"label": "BARTScore: Evaluating Generated Text as Text Generation (2106.11520)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🧑💻 And the GitHub repo to use this metric: ",
"raw": "🧑💻 And the GitHub repo to use this metric: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/neulab/BARTScore",
"href": "https://github.com/neulab/BARTScore",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Understanding BARTScore 🛹
BARTScore is a text-generation evaluation metric that treats model evaluation as a text-generation task 🔄
Other metrics approach the evaluation problem from different ML task perspectives; for instance, ROUGE and BLEU formulate it as an unsupervised matching task, BLEURT and COMET as a supervised regression, and BEER as a supervised ranking task.
Meanwhile, BARTScore formulates it as a text-generation task. Its idea is to leverage BART's pre-trained contextual embeddings to return a score that measures either the faithfulness, precision, recall, or F-score response of the main text-generation model.
For example, if we want to measure faithfulness, the way it works is that we would take the source and the generated text from our model and use BART to calculate the log token probability of the generated text given the source; we can then weight those results and return the sum.
BARTScore correlates nicely with human scores, and it is relatively simple to implement.
📑 Here is the original BARTScore paper: https://huggingface.co/papers/2106.11520
🧑💻 And the GitHub repo to use this metric: https://github.com/neulab/BARTScore | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg",
"fullname": "Santiago Viquez",
"name": "santiviquez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/kXlMoN0Is9snPYQIQRBEW.jpeg"
}
] | [] | [
{
"reaction": "👍",
"users": [
"victor",
"osanseviero",
"clem",
"dimpu01",
"ivanfioravanti",
"firqaaa",
"i0s",
"YuvrajSingh9886"
],
"count": 8
}
] | 2024-02-06T10:45:16.000Z | 2024-02-06T10:45:16.854Z | [] | /posts/santiviquez/809039256258115 | 900 | 0 |
500939719344175 | [
{
"type": "text",
"value": "🔍 Today's pick in Interpretability & Analysis of LMs: Rethinking Interpretability in the Era of Large Language Models",
"raw": "🔍 Today's pick in Interpretability & Analysis of LMs: Rethinking Interpretability in the Era of Large Language Models",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "by C. Singh, J. P. Inala, ",
"raw": "by C. Singh, J. P. Inala, ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@mgalley",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "mgalley",
"label": null,
"lang": null
},
{
"type": "text",
"value": ", R. Caruana, ",
"raw": ", R. Caruana, ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@wyngjf",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "wyngjf",
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "In this opinion piece, authors contend that the new capabilities of LLMs can deeply transform the scope of interpretability, moving from low-level explanations such as saliency maps to natural language explanations that would allow for natural interaction with users.",
"raw": "In this opinion piece, authors contend that the new capabilities of LLMs can deeply transform the scope of interpretability, moving from low-level explanations such as saliency maps to natural language explanations that would allow for natural interaction with users.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This ambitious goal is however hindered by LM’s natural tendency to hallucinate, their large size and their inherent opaqueness. Authors highlight in particular dataset explanations for knowledge discovery, explanations’ reliability and interactive explanations as important priorities for the future of interpretability research.",
"raw": "This ambitious goal is however hindered by LM’s natural tendency to hallucinate, their large size and their inherent opaqueness. Authors highlight in particular dataset explanations for knowledge discovery, explanations’ reliability and interactive explanations as important priorities for the future of interpretability research.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📄 Paper: ",
"raw": "📄 Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.01761",
"href": null,
"resource": {
"type": "paper",
"id": "2402.01761",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.01761",
"code": null,
"user": null,
"label": "Rethinking Interpretability in the Era of Large Language Models (2402.01761)",
"lang": null
}
] | 🔍 Today's pick in Interpretability & Analysis of LMs: Rethinking Interpretability in the Era of Large Language Models
by C. Singh, J. P. Inala, @mgalley, R. Caruana, @wyngjf
In this opinion piece, authors contend that the new capabilities of LLMs can deeply transform the scope of interpretability, moving from low-level explanations such as saliency maps to natural language explanations that would allow for natural interaction with users.
This ambitious goal is however hindered by LM’s natural tendency to hallucinate, their large size and their inherent opaqueness. Authors highlight in particular dataset explanations for knowledge discovery, explanations’ reliability and interactive explanations as important priorities for the future of interpretability research.
📄 Paper: https://huggingface.co/papers/2402.01761 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/WCE27GotE9cWxGyKwBwPx.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/a6nq47yOEWGjRvjdTLSgu.png"
}
] | [
{
"avatarUrl": "/avatars/d67edb079fa3fe7ea6f2081f7a3afe9e.svg",
"fullname": "Michel Galley",
"name": "mgalley",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null
},
{
"avatarUrl": "/avatars/4a63eac71eb30f70b1a0e9d4708f26c1.svg",
"fullname": "Jianfeng Gao",
"name": "wyngjf",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3
}
] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"JairoDanielMT"
],
"count": 2
}
] | 2024-02-06T08:50:31.000Z | 2024-02-06T08:50:31.137Z | [] | /posts/gsarti/500939719344175 | 11 | 0 |
982415168433293 | [
{
"type": "text",
"value": "Today my RunPod pod was broken and I didn't notice until I fully did setup it. So I have written the following tutorial for how to deploy a Pod and also verify it is not broken.",
"raw": "Today my RunPod pod was broken and I didn't notice until I fully did setup it. So I have written the following tutorial for how to deploy a Pod and also verify it is not broken.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "You can read on:",
"raw": "You can read on:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Patreon (public) : ",
"raw": "Patreon (public) : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://www.patreon.com/posts/how-to-deploy-on-97919576",
"href": "https://www.patreon.com/posts/how-to-deploy-on-97919576",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Medium (public) : ",
"raw": "Medium (public) : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://medium.com/@furkangozukara/how-to-deploy-a-pod-on-runpod-and-verify-it-is-working-20e47031c0b5",
"href": "https://medium.com/@furkangozukara/how-to-deploy-a-pod-on-runpod-and-verify-it-is-working-20e47031c0b5",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "CivitAI (public) : ",
"raw": "CivitAI (public) : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://civitai.com/articles/3994/how-to-deploy-a-pod-on-runpod-and-verify-it-is-working",
"href": "https://civitai.com/articles/3994/how-to-deploy-a-pod-on-runpod-and-verify-it-is-working",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "LinkedIn (public) : ",
"raw": "LinkedIn (public) : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://www.linkedin.com/pulse/how-deploy-pod-runpod-verify-working-furkan-g%2525C3%2525B6z%2525C3%2525BCkara-lgplf%3FtrackingId=EuNOjpKCSQ%252BVfpiQV3D6KQ%253D%253D/?trackingId=EuNOjpKCSQ%2BVfpiQV3D6KQ%3D%3D",
"href": "https://www.linkedin.com/pulse/how-deploy-pod-runpod-verify-working-furkan-g%2525C3%2525B6z%2525C3%2525BCkara-lgplf%3FtrackingId=EuNOjpKCSQ%252BVfpiQV3D6KQ%253D%253D/?trackingId=EuNOjpKCSQ%2BVfpiQV3D6KQ%3D%3D",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Dev . to (public) : ",
"raw": "Dev . to (public) : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://dev.to/furkangozukara/how-to-deploy-a-pod-on-runpod-and-verify-it-is-working-3pop",
"href": "https://dev.to/furkangozukara/how-to-deploy-a-pod-on-runpod-and-verify-it-is-working-3pop",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Today my RunPod pod was broken and I didn't notice until I fully did setup it. So I have written the following tutorial for how to deploy a Pod and also verify it is not broken.
You can read on:
Patreon (public) : https://www.patreon.com/posts/how-to-deploy-on-97919576
Medium (public) : https://medium.com/@furkangozukara/how-to-deploy-a-pod-on-runpod-and-verify-it-is-working-20e47031c0b5
CivitAI (public) : https://civitai.com/articles/3994/how-to-deploy-a-pod-on-runpod-and-verify-it-is-working
LinkedIn (public) : https://www.linkedin.com/pulse/how-deploy-pod-runpod-verify-working-furkan-g%2525C3%2525B6z%2525C3%2525BCkara-lgplf%3FtrackingId=EuNOjpKCSQ%252BVfpiQV3D6KQ%253D%253D/?trackingId=EuNOjpKCSQ%2BVfpiQV3D6KQ%3D%3D
Dev . to (public) : https://dev.to/furkangozukara/how-to-deploy-a-pod-on-runpod-and-verify-it-is-working-3pop
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gözükara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 376,
"isFollowing": false
} | [] | [] | [] | 2024-02-05T22:31:01.000Z | 2024-02-05T22:31:01.648Z | [] | /posts/MonsterMMORPG/982415168433293 | 32 | 0 |
777035688509520 | [
{
"type": "text",
"value": "StepCoder",
"raw": "StepCoder",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Improve Code Generation with Reinforcement Learning from Compiler Feedback",
"raw": "Improve Code Generation with Reinforcement Learning from Compiler Feedback",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "paper page: ",
"raw": "paper page: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.01391",
"href": null,
"resource": {
"type": "paper",
"id": "2402.01391",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.01391",
"code": null,
"user": null,
"label": "StepCoder: Improve Code Generation with Reinforcement Learning from\n Compiler Feedback (2402.01391)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The advancement of large language models (LLMs) has significantly propelled the field of code generation. Previous work integrated reinforcement learning (RL) with compiler feedback for exploring the output space of LLMs to enhance code generation quality. However, the lengthy code generated by LLMs in response to complex human requirements makes RL exploration a challenge. Also, since the unit tests may not cover the complicated code, optimizing LLMs by using these unexecuted code snippets is ineffective. To tackle these challenges, we introduce StepCoder, a novel RL framework for code generation, consisting of two main components: CCCS addresses the exploration challenge by breaking the long sequences code generation task into a Curriculum of Code Completion Subtasks, while FGO only optimizes the model by masking the unexecuted code segments to provide Fine-Grained Optimization. In addition, we furthermore construct the APPS+ dataset for RL training, which is manually verified to ensure the correctness of unit tests. Experimental results show that our method improves the ability to explore the output space and outperforms state-of-the-art approaches in corresponding benchmarks.",
"raw": "The advancement of large language models (LLMs) has significantly propelled the field of code generation. Previous work integrated reinforcement learning (RL) with compiler feedback for exploring the output space of LLMs to enhance code generation quality. However, the lengthy code generated by LLMs in response to complex human requirements makes RL exploration a challenge. Also, since the unit tests may not cover the complicated code, optimizing LLMs by using these unexecuted code snippets is ineffective. To tackle these challenges, we introduce StepCoder, a novel RL framework for code generation, consisting of two main components: CCCS addresses the exploration challenge by breaking the long sequences code generation task into a Curriculum of Code Completion Subtasks, while FGO only optimizes the model by masking the unexecuted code segments to provide Fine-Grained Optimization. In addition, we furthermore construct the APPS+ dataset for RL training, which is manually verified to ensure the correctness of unit tests. Experimental results show that our method improves the ability to explore the output space and outperforms state-of-the-art approaches in corresponding benchmarks.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | StepCoder
Improve Code Generation with Reinforcement Learning from Compiler Feedback
paper page: https://huggingface.co/papers/2402.01391
The advancement of large language models (LLMs) has significantly propelled the field of code generation. Previous work integrated reinforcement learning (RL) with compiler feedback for exploring the output space of LLMs to enhance code generation quality. However, the lengthy code generated by LLMs in response to complex human requirements makes RL exploration a challenge. Also, since the unit tests may not cover the complicated code, optimizing LLMs by using these unexecuted code snippets is ineffective. To tackle these challenges, we introduce StepCoder, a novel RL framework for code generation, consisting of two main components: CCCS addresses the exploration challenge by breaking the long sequences code generation task into a Curriculum of Code Completion Subtasks, while FGO only optimizes the model by masking the unexecuted code segments to provide Fine-Grained Optimization. In addition, we furthermore construct the APPS+ dataset for RL training, which is manually verified to ensure the correctness of unit tests. Experimental results show that our method improves the ability to explore the output space and outperforms state-of-the-art approaches in corresponding benchmarks. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"fullname": "AK",
"name": "akhaliq",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/QO4L9sxPAxeS77wmxAEEo.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"osanseviero",
"clem",
"kramp",
"Dlbk",
"sepal"
],
"count": 5
}
] | 2024-02-05T21:37:39.000Z | 2024-02-05T21:37:39.858Z | [] | /posts/akhaliq/777035688509520 | 37 | 0 |
965503230990660 | [
{
"type": "text",
"value": "Necessity is the mother of invention, and of Gradio components.",
"raw": "Necessity is the mother of invention, and of Gradio components.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Sometimes we realize that we need a Gradio component to build a cool application and demo, so we just build it. For example, we just added a new gr.ParamViewer component because we needed it to display information about Python & JavaScript functions in our documentation. ",
"raw": "Sometimes we realize that we need a Gradio component to build a cool application and demo, so we just build it. For example, we just added a new gr.ParamViewer component because we needed it to display information about Python & JavaScript functions in our documentation. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Of course, our users should be able able to do the same thing for their machine learning applications, so that's why Gradio lets you build custom components, and publish them to the world 🔥",
"raw": "Of course, our users should be able able to do the same thing for their machine learning applications, so that's why Gradio lets you build custom components, and publish them to the world 🔥",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Necessity is the mother of invention, and of Gradio components.
Sometimes we realize that we need a Gradio component to build a cool application and demo, so we just build it. For example, we just added a new gr.ParamViewer component because we needed it to display information about Python & JavaScript functions in our documentation.
Of course, our users should be able able to do the same thing for their machine learning applications, so that's why Gradio lets you build custom components, and publish them to the world 🔥 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621947938344-noauth.png",
"fullname": "Abubakar Abid",
"name": "abidlabs",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 487,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/608b8bb39d7c9519b4adae19/L2n43bWPHUz8hX-Qv8VPX.gif"
}
] | [] | [
{
"reaction": "👍",
"users": [
"sbarman25",
"victor",
"samusenps",
"ysharma",
"ajibawa-2023",
"gsarti",
"osanseviero",
"dvilasuero",
"hmb",
"notsahil"
],
"count": 10
},
{
"reaction": "🤯",
"users": [
"ysharma",
"dvilasuero",
"notsahil"
],
"count": 3
}
] | 2024-02-05T18:43:21.000Z | 2024-02-05T18:47:13.645Z | [] | /posts/abidlabs/965503230990660 | 487 | 0 |
470307658539102 | [
{
"type": "text",
"value": "These past months, I've been busy baking a special sort of Croissant 🥐 with an awesome team ! ",
"raw": "These past months, I've been busy baking a special sort of Croissant 🥐 with an awesome team ! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🥐 CroissantLLM is a truly bilingual language model trained on 3 trillion tokens of French and English data. In its size category (<2B), it is the best model in French, but it also rivals the best monolingual English models ! ",
"raw": "🥐 CroissantLLM is a truly bilingual language model trained on 3 trillion tokens of French and English data. In its size category (<2B), it is the best model in French, but it also rivals the best monolingual English models ! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "💾 To train it, we collected, filtered and cleaned huge quantities of permissively licensed French data, across various domains (legal, administrative, cultural, scientific), and different text modalities (speech transcriptions, movie subtitles, encyclopedias, forums, webpages)... ",
"raw": "💾 To train it, we collected, filtered and cleaned huge quantities of permissively licensed French data, across various domains (legal, administrative, cultural, scientific), and different text modalities (speech transcriptions, movie subtitles, encyclopedias, forums, webpages)... ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "⚖️ Assessing LLM performance is not easy, especially outside of English, and to this end we crafted a novel evaluation benchmark, FrenchBench, aiming to assess reasoning, factual knowledge, and linguistic capabilities of models in French !",
"raw": "⚖️ Assessing LLM performance is not easy, especially outside of English, and to this end we crafted a novel evaluation benchmark, FrenchBench, aiming to assess reasoning, factual knowledge, and linguistic capabilities of models in French !",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔎 The best current LLMs are hidden behind a shroud of mystery, trained with undisclosed training data mixes or strategies. We go the opposite way, releasing all of the project's artefacts (model checkpoints, data, training details, evaluation benchmarks...) We obtain 81 % of the Stanford FMTI transparency criterias, far ahead of even most open initiatives !",
"raw": "🔎 The best current LLMs are hidden behind a shroud of mystery, trained with undisclosed training data mixes or strategies. We go the opposite way, releasing all of the project's artefacts (model checkpoints, data, training details, evaluation benchmarks...) We obtain 81 % of the Stanford FMTI transparency criterias, far ahead of even most open initiatives !",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🧪Beyond a powerful industrial resource, our transparent initiative is a stepping stone for many scientific questions ! How does teaching a model two languages instead of one splits its monolingual ability ? Does training on so much French help the model integrate French-centric knowledge and cultural biases ? How does the model memorize the training data ?",
"raw": "🧪Beyond a powerful industrial resource, our transparent initiative is a stepping stone for many scientific questions ! How does teaching a model two languages instead of one splits its monolingual ability ? Does training on so much French help the model integrate French-centric knowledge and cultural biases ? How does the model memorize the training data ?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Many more things to say, for those interested, I recommend checking out:",
"raw": "Many more things to say, for those interested, I recommend checking out:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🗞️ The blogpost: ",
"raw": "🗞️ The blogpost: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/manu/croissant-llm-blog",
"href": "https://huggingface.co/blog/manu/croissant-llm-blog",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📖 The 45 page report with lots of gems: ",
"raw": "📖 The 45 page report with lots of gems: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://arxiv.org/abs/2402.00786",
"href": "https://arxiv.org/abs/2402.00786",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🤖 Models, Data, Demo: ",
"raw": "🤖 Models, Data, Demo: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/croissantllm",
"href": "https://huggingface.co/croissantllm",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | These past months, I've been busy baking a special sort of Croissant 🥐 with an awesome team !
🥐 CroissantLLM is a truly bilingual language model trained on 3 trillion tokens of French and English data. In its size category (<2B), it is the best model in French, but it also rivals the best monolingual English models !
💾 To train it, we collected, filtered and cleaned huge quantities of permissively licensed French data, across various domains (legal, administrative, cultural, scientific), and different text modalities (speech transcriptions, movie subtitles, encyclopedias, forums, webpages)...
⚖️ Assessing LLM performance is not easy, especially outside of English, and to this end we crafted a novel evaluation benchmark, FrenchBench, aiming to assess reasoning, factual knowledge, and linguistic capabilities of models in French !
🔎 The best current LLMs are hidden behind a shroud of mystery, trained with undisclosed training data mixes or strategies. We go the opposite way, releasing all of the project's artefacts (model checkpoints, data, training details, evaluation benchmarks...) We obtain 81 % of the Stanford FMTI transparency criterias, far ahead of even most open initiatives !
🧪Beyond a powerful industrial resource, our transparent initiative is a stepping stone for many scientific questions ! How does teaching a model two languages instead of one splits its monolingual ability ? Does training on so much French help the model integrate French-centric knowledge and cultural biases ? How does the model memorize the training data ?
Many more things to say, for those interested, I recommend checking out:
🗞️ The blogpost: https://huggingface.co/blog/manu/croissant-llm-blog
📖 The 45 page report with lots of gems: https://arxiv.org/abs/2402.00786
🤖 Models, Data, Demo: https://huggingface.co/croissantllm
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1654090481550-60f2e021adf471cbdf8bb660.jpeg",
"fullname": "Manuel Faysse",
"name": "manu",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 106,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"eliolio",
"gsarti",
"euclaise",
"eliebak",
"kramp",
"yozozaya",
"clem",
"blanchon",
"victor",
"mvaloatto",
"osanseviero",
"samusenps",
"santiviquez",
"ksiabani",
"batmac",
"nouamanetazi",
"davanstrien",
"fffiloni",
"alielfilali01",
"sbrandeis",
"Soubz",
"velaia"
],
"count": 22
},
{
"reaction": "🤯",
"users": [
"fffiloni",
"Soubz"
],
"count": 2
}
] | 2024-02-05T15:12:40.000Z | 2024-02-06T07:27:15.701Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem 🤗",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1763,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64955109ac70da05b7aacb9a/bZKEz24ZfaWDSI33yHUmR.png",
"fullname": "Kostas Siabanis",
"name": "ksiabani",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 8,
"isFollowing": false
}
] | /posts/manu/470307658539102 | 1,745 | 3 |
229518612093214 | [
{
"type": "text",
"value": "Some of my results from experimenting with hallucination detection techniques for LLMs 🫨🔍",
"raw": "Some of my results from experimenting with hallucination detection techniques for LLMs 🫨🔍",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "First, the two main ideas used in the experiments—using token probabilities and LLM-Eval scores—are taken from these three papers:",
"raw": "First, the two main ideas used in the experiments—using token probabilities and LLM-Eval scores—are taken from these three papers:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "1. ",
"raw": "1. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2208.05309",
"href": null,
"resource": {
"type": "paper",
"id": "2208.05309",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2208.05309",
"code": null,
"user": null,
"label": "Looking for a Needle in a Haystack: A Comprehensive Study of\n Hallucinations in Neural Machine Translation (2208.05309)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2. ",
"raw": "2. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2303.08896",
"href": null,
"resource": {
"type": "paper",
"id": "2303.08896",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2303.08896",
"code": null,
"user": null,
"label": "SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for\n Generative Large Language Models (2303.08896)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "3. ",
"raw": "3. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2305.13711",
"href": null,
"resource": {
"type": "paper",
"id": "2305.13711",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2305.13711",
"code": null,
"user": null,
"label": "LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain\n Conversations with Large Language Models (2305.13711)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "In the first two, the authors claim that computing the average of the sentence-level token probabilities is the best heuristic for detecting hallucinations. And from my results, we do see a weak positive correlation between average token probabilities and ground truth. 🤔",
"raw": "In the first two, the authors claim that computing the average of the sentence-level token probabilities is the best heuristic for detecting hallucinations. And from my results, we do see a weak positive correlation between average token probabilities and ground truth. 🤔",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The nice thing about this method is that it comes with almost no implementation cost since we only need the output token probabilities from the generated text, so it is straightforward to implement.",
"raw": "The nice thing about this method is that it comes with almost no implementation cost since we only need the output token probabilities from the generated text, so it is straightforward to implement.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The third paper proposes an evaluation schema where we do an extra call to an LLM and kindly ask it to rate on a scale from 0 to 5 how good the generated text is on a set of different criteria. 📝🤖",
"raw": "The third paper proposes an evaluation schema where we do an extra call to an LLM and kindly ask it to rate on a scale from 0 to 5 how good the generated text is on a set of different criteria. 📝🤖",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "I was able to reproduce similar results to those in the paper. There is a moderate positive correlation between the ground truth scores and the ones produced by the LLM.",
"raw": "I was able to reproduce similar results to those in the paper. There is a moderate positive correlation between the ground truth scores and the ones produced by the LLM.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Of course, this method is much more expensive since we would need one extra call to the LLM for every prediction that we would like to evaluate, and it is also very sensitive to prompt engineering. 🤷",
"raw": "Of course, this method is much more expensive since we would need one extra call to the LLM for every prediction that we would like to evaluate, and it is also very sensitive to prompt engineering. 🤷",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Some of my results from experimenting with hallucination detection techniques for LLMs 🫨🔍
First, the two main ideas used in the experiments—using token probabilities and LLM-Eval scores—are taken from these three papers:
1. https://huggingface.co/papers/2208.05309
2. https://huggingface.co/papers/2303.08896
3. https://huggingface.co/papers/2305.13711
In the first two, the authors claim that computing the average of the sentence-level token probabilities is the best heuristic for detecting hallucinations. And from my results, we do see a weak positive correlation between average token probabilities and ground truth. 🤔
The nice thing about this method is that it comes with almost no implementation cost since we only need the output token probabilities from the generated text, so it is straightforward to implement.
The third paper proposes an evaluation schema where we do an extra call to an LLM and kindly ask it to rate on a scale from 0 to 5 how good the generated text is on a set of different criteria. 📝🤖
I was able to reproduce similar results to those in the paper. There is a moderate positive correlation between the ground truth scores and the ones produced by the LLM.
Of course, this method is much more expensive since we would need one extra call to the LLM for every prediction that we would like to evaluate, and it is also very sensitive to prompt engineering. 🤷 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg",
"fullname": "Santiago Viquez",
"name": "santiviquez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/gbbnaj8ipntSy7YvjjMic.jpeg"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"victor",
"clem",
"samusenps",
"gsarti",
"KasperNomm",
"JasperV13",
"sbrandeis"
],
"count": 8
}
] | 2024-02-05T10:16:27.000Z | 2024-02-06T13:42:47.946Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg",
"fullname": "Santiago Viquez",
"name": "santiviquez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
}
] | /posts/santiviquez/229518612093214 | 4 | 2 |
270417770956024 | [
{
"type": "text",
"value": "🔍 Today's pick in Interpretability & Analysis of LMs: A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains by ",
"raw": "🔍 Today's pick in Interpretability & Analysis of LMs: A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@alonjacovi",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "alonjacovi",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@yonatanbitton",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "yonatanbitton",
"label": null,
"lang": null
},
{
"type": "text",
"value": " B. Bohnet J. Herzig ",
"raw": " B. Bohnet J. Herzig ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@orhonovic",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "orhonovic",
"label": null,
"lang": null
},
{
"type": "text",
"value": " M. Tseng M. Collins ",
"raw": " M. Tseng M. Collins ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@roeeaharoni",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "roeeaharoni",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@mega",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "mega",
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This work introduces a new methodology for human verification of reasoning chains and adopts it to annotate a dataset of chain-of-thought reasoning chains produced by 3 LMs. The annotated dataset, REVEAL, can be used to benchmark automatic verifiers of reasoning in LMs.",
"raw": "This work introduces a new methodology for human verification of reasoning chains and adopts it to annotate a dataset of chain-of-thought reasoning chains produced by 3 LMs. The annotated dataset, REVEAL, can be used to benchmark automatic verifiers of reasoning in LMs.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "In their analysis, the authors find that LM-produced CoTs generally contain faulty steps, often leading to incorrect automatic verification. In particular, CoT-generating LMs are found to produce non-attributable reasoning steps often, and reasoning verifiers generally struggle to verify logical correctness.",
"raw": "In their analysis, the authors find that LM-produced CoTs generally contain faulty steps, often leading to incorrect automatic verification. In particular, CoT-generating LMs are found to produce non-attributable reasoning steps often, and reasoning verifiers generally struggle to verify logical correctness.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📄 Paper: ",
"raw": "📄 Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.00559",
"href": null,
"resource": {
"type": "paper",
"id": "2402.00559",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.00559",
"code": null,
"user": null,
"label": "A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for\n Verifiers of Reasoning Chains (2402.00559)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔡 Dataset: ",
"raw": "🔡 Dataset: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/google/reveal",
"href": null,
"resource": {
"type": "dataset",
"id": "google/reveal",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/google/reveal",
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔍 Today's pick in Interpretability & Analysis of LMs: A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains by @alonjacovi @yonatanbitton B. Bohnet J. Herzig @orhonovic M. Tseng M. Collins @roeeaharoni @mega
This work introduces a new methodology for human verification of reasoning chains and adopts it to annotate a dataset of chain-of-thought reasoning chains produced by 3 LMs. The annotated dataset, REVEAL, can be used to benchmark automatic verifiers of reasoning in LMs.
In their analysis, the authors find that LM-produced CoTs generally contain faulty steps, often leading to incorrect automatic verification. In particular, CoT-generating LMs are found to produce non-attributable reasoning steps often, and reasoning verifiers generally struggle to verify logical correctness.
📄 Paper: https://huggingface.co/papers/2402.00559
🔡 Dataset: https://huggingface.co/datasets/google/reveal | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/QumntyDreWdVTISmKnrt_.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/7QE337VIajw_i31gzlT0t.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/I2llCUXoFPuhLArL8zOJo.png"
}
] | [
{
"avatarUrl": "/avatars/e616fbdff1a345ad22081f5cd019329a.svg",
"fullname": "Alon Jacovi",
"name": "alonjacovi",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1628140189042-noauth.jpeg",
"fullname": "Mor Geva",
"name": "mega",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1667756718733-6363bf2b123a5d5cd4a8fe7c.jpeg",
"fullname": "Roee Aharoni",
"name": "roeeaharoni",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 10
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1663961181981-632e0771ae0a7b1fc95630bf.jpeg",
"fullname": "Yonatan",
"name": "yonatanbitton",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4
}
] | [
{
"reaction": "🤗",
"users": [
"s3nh",
"roeeaharoni",
"alonjacovi",
"osanseviero",
"clem",
"manu"
],
"count": 6
},
{
"reaction": "👍",
"users": [
"Yonatan-Bitton"
],
"count": 1
}
] | 2024-02-05T06:51:53.000Z | 2024-02-05T06:51:53.350Z | [] | /posts/gsarti/270417770956024 | 12 | 0 |
679226771675158 | [
{
"type": "text",
"value": "📣 DPO Dutch model release + datasets ",
"raw": "📣 DPO Dutch model release + datasets ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "After teasing for a while, I am finally releasing **GEITje 7B Ultra**, building upon the great GEITje 7B by ",
"raw": "After teasing for a while, I am finally releasing **GEITje 7B Ultra**, building upon the great GEITje 7B by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@Rijgersberg",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "Rijgersberg",
"label": null,
"lang": null
},
{
"type": "text",
"value": ". New contributions include: large new datasets for SFT (instruction/chat), two datasets for DPO training (i.e. RLAIF), and an SFT and DPO version of GEITje. The READMEs describe everything well (I hope), and I'll also share more info on social media tomorrow. ",
"raw": ". New contributions include: large new datasets for SFT (instruction/chat), two datasets for DPO training (i.e. RLAIF), and an SFT and DPO version of GEITje. The READMEs describe everything well (I hope), and I'll also share more info on social media tomorrow. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "For me this is a huge release, the datasets more so than the models. I'm especially pleased with UltraChat, which I created with the intent of having a diverse dataset - the model must be able to communicate with different types of users. So the user questions are created as if they were written by different personas, e.g. language learners, young children, experts, critics, etc. The focus with this is \"building a good communication bot that is accessible and can handle different kinds of user input\".",
"raw": "For me this is a huge release, the datasets more so than the models. I'm especially pleased with UltraChat, which I created with the intent of having a diverse dataset - the model must be able to communicate with different types of users. So the user questions are created as if they were written by different personas, e.g. language learners, young children, experts, critics, etc. The focus with this is \"building a good communication bot that is accessible and can handle different kinds of user input\".",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "I wish I could find the time to also write a paper to get some \"academic recognition\" but that'll have to wait for now. I just want to bring it to the public so that others can play with it and use it to build new, cool stuff!",
"raw": "I wish I could find the time to also write a paper to get some \"academic recognition\" but that'll have to wait for now. I just want to bring it to the public so that others can play with it and use it to build new, cool stuff!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "I hope that you can all appreciate the work. Let's build some cool stuff with it!",
"raw": "I hope that you can all appreciate the work. Let's build some cool stuff with it!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Models:",
"raw": "Models:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Demo: ",
"raw": "- Demo: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/spaces/BramVanroy/GEITje-7B-ultra",
"href": "https://huggingface.co/spaces/BramVanroy/GEITje-7B-ultra",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- DPO Model: ",
"raw": "- DPO Model: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/BramVanroy/GEITje-7B-ultra",
"href": null,
"resource": {
"type": "model",
"id": "BramVanroy/GEITje-7B-ultra",
"discussionNum": null
},
"url": "https://huggingface.co/BramVanroy/GEITje-7B-ultra",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- SFT model (not recommended): ",
"raw": "- SFT model (not recommended): ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/BramVanroy/GEITje-7B-ultra-sft",
"href": null,
"resource": {
"type": "model",
"id": "BramVanroy/GEITje-7B-ultra-sft",
"discussionNum": null
},
"url": "https://huggingface.co/BramVanroy/GEITje-7B-ultra-sft",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Datasets with GPT-4 turbo completions:",
"raw": "Datasets with GPT-4 turbo completions:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " - No robots (~10k instructions): ",
"raw": " - No robots (~10k instructions): ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/BramVanroy/no_robots_dutch",
"href": null,
"resource": {
"type": "dataset",
"id": "BramVanroy/no_robots_dutch",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/BramVanroy/no_robots_dutch",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " - UltraChat (~200k instructions): ",
"raw": " - UltraChat (~200k instructions): ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/BramVanroy/ultrachat_200k_dutch",
"href": null,
"resource": {
"type": "dataset",
"id": "BramVanroy/ultrachat_200k_dutch",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/BramVanroy/ultrachat_200k_dutch",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " - UltraFeedback (DPO with GPT4+GEITje chat, ~50k): ",
"raw": " - UltraFeedback (DPO with GPT4+GEITje chat, ~50k): ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/BramVanroy/ultra_feedback_dutch",
"href": null,
"resource": {
"type": "dataset",
"id": "BramVanroy/ultra_feedback_dutch",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/BramVanroy/ultra_feedback_dutch",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " - Orca DPO Pairs (DPO with GPT4+GEITje chat, ~10k): ",
"raw": " - Orca DPO Pairs (DPO with GPT4+GEITje chat, ~10k): ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/BramVanroy/orca_dpo_pairs_dutch",
"href": null,
"resource": {
"type": "dataset",
"id": "BramVanroy/orca_dpo_pairs_dutch",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/BramVanroy/orca_dpo_pairs_dutch",
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 📣 DPO Dutch model release + datasets
After teasing for a while, I am finally releasing **GEITje 7B Ultra**, building upon the great GEITje 7B by @Rijgersberg. New contributions include: large new datasets for SFT (instruction/chat), two datasets for DPO training (i.e. RLAIF), and an SFT and DPO version of GEITje. The READMEs describe everything well (I hope), and I'll also share more info on social media tomorrow.
For me this is a huge release, the datasets more so than the models. I'm especially pleased with UltraChat, which I created with the intent of having a diverse dataset - the model must be able to communicate with different types of users. So the user questions are created as if they were written by different personas, e.g. language learners, young children, experts, critics, etc. The focus with this is "building a good communication bot that is accessible and can handle different kinds of user input".
I wish I could find the time to also write a paper to get some "academic recognition" but that'll have to wait for now. I just want to bring it to the public so that others can play with it and use it to build new, cool stuff!
I hope that you can all appreciate the work. Let's build some cool stuff with it!
Models:
- Demo: https://huggingface.co/spaces/BramVanroy/GEITje-7B-ultra
- DPO Model: https://huggingface.co/BramVanroy/GEITje-7B-ultra
- SFT model (not recommended): https://huggingface.co/BramVanroy/GEITje-7B-ultra-sft
Datasets with GPT-4 turbo completions:
- No robots (~10k instructions): https://huggingface.co/datasets/BramVanroy/no_robots_dutch
- UltraChat (~200k instructions): https://huggingface.co/datasets/BramVanroy/ultrachat_200k_dutch
- UltraFeedback (DPO with GPT4+GEITje chat, ~50k): https://huggingface.co/datasets/BramVanroy/ultra_feedback_dutch
- Orca DPO Pairs (DPO with GPT4+GEITje chat, ~10k): https://huggingface.co/datasets/BramVanroy/orca_dpo_pairs_dutch | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594192845975-5e1e17b6fcf41d740b6996a8.jpeg",
"fullname": "Bram Vanroy",
"name": "BramVanroy",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 173,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6319b164bc8f3b313f7a1db0/Hh0kuwsAnD2AOKdL6PpRs.png",
"fullname": "Edwin Rijgersberg",
"name": "Rijgersberg",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 45
}
] | [
{
"reaction": "❤️",
"users": [
"samusenps",
"osanseviero",
"dragonkue",
"beomi",
"p208p2002",
"gsarti",
"s3nh",
"taufiqdp",
"victor",
"Stopwolf",
"Robbert",
"jvdgoltz",
"seostar",
"clem",
"cast42",
"jvh",
"ajrogier"
],
"count": 17
},
{
"reaction": "🤝",
"users": [
"samusenps",
"osanseviero",
"gsarti",
"HuggyMonkey"
],
"count": 4
}
] | 2024-02-04T18:15:13.000Z | 2024-02-05T06:42:56.083Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6319b164bc8f3b313f7a1db0/Hh0kuwsAnD2AOKdL6PpRs.png",
"fullname": "Edwin Rijgersberg",
"name": "Rijgersberg",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 45,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61caeda441f9432649f03ab6/0UdRCrzIqhedZblgfpMBk.png",
"fullname": "s3nh",
"name": "s3nh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 216,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
}
] | /posts/BramVanroy/679226771675158 | 115 | 3 |
526943645565773 | [
{
"type": "text",
"value": "Introducing model-similarities, a new simple tool to contrast two models",
"raw": "Introducing model-similarities, a new simple tool to contrast two models",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "A straightforward yet insightful tool designed to shed light on the similarities between various models. Discover it now at [Model Similarity GitHub Repository](",
"raw": "A straightforward yet insightful tool designed to shed light on the similarities between various models. Discover it now at [Model Similarity GitHub Repository](",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/fblgit/model-similarity",
"href": "https://github.com/fblgit/model-similarity",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ").",
"raw": ").",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This project is in its nascent stages, and we're eager for contributions and enhancements. Crafted with simplicity at its core, the tool performs two primary comparisons:",
"raw": "This project is in its nascent stages, and we're eager for contributions and enhancements. Crafted with simplicity at its core, the tool performs two primary comparisons:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Weight similarities, utilizing a simple approach to contrast vector differences (A != B).",
"raw": "- Weight similarities, utilizing a simple approach to contrast vector differences (A != B).",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Cosine similarity between the parameters of models A and B, providing a nuanced measure of their alignment.",
"raw": "- Cosine similarity between the parameters of models A and B, providing a nuanced measure of their alignment.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Included in the repository are sample analyses and reports that validate model card claims, particularly regarding the training specifics of transformer components such as MLP, Attention, etc. Remarkably, these samples reveal 100% similarity scores between those parts of the models, pinpointing the exact base model utilized.",
"raw": "Included in the repository are sample analyses and reports that validate model card claims, particularly regarding the training specifics of transformer components such as MLP, Attention, etc. Remarkably, these samples reveal 100% similarity scores between those parts of the models, pinpointing the exact base model utilized.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Join us in refining and expanding this tool. Whether you're looking to contribute code, ideas, or both, your input will help transform this into a resource for everyone.",
"raw": "Join us in refining and expanding this tool. Whether you're looking to contribute code, ideas, or both, your input will help transform this into a resource for everyone.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Introducing model-similarities, a new simple tool to contrast two models
A straightforward yet insightful tool designed to shed light on the similarities between various models. Discover it now at [Model Similarity GitHub Repository](https://github.com/fblgit/model-similarity).
This project is in its nascent stages, and we're eager for contributions and enhancements. Crafted with simplicity at its core, the tool performs two primary comparisons:
- Weight similarities, utilizing a simple approach to contrast vector differences (A != B).
- Cosine similarity between the parameters of models A and B, providing a nuanced measure of their alignment.
Included in the repository are sample analyses and reports that validate model card claims, particularly regarding the training specifics of transformer components such as MLP, Attention, etc. Remarkably, these samples reveal 100% similarity scores between those parts of the models, pinpointing the exact base model utilized.
Join us in refining and expanding this tool. Whether you're looking to contribute code, ideas, or both, your input will help transform this into a resource for everyone. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6401c8c9f98fbc64bcd7dca1/MOSgc_mPbfUZ-354osy1v.png",
"fullname": "FBL",
"name": "fblgit",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 228,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"fblgit",
"shivamanbhule",
"thomasgauthier",
"osanseviero",
"distantquant",
"victor",
"samusenps",
"shuvom",
"mohammedbriman",
"santiviquez",
"clem",
"alielfilali01"
],
"count": 12
},
{
"reaction": "👍",
"users": [
"samusenps",
"clem"
],
"count": 2
}
] | 2024-02-04T08:31:55.000Z | 2024-02-04T08:31:55.454Z | [] | /posts/fblgit/526943645565773 | 136 | 0 |
325784424398961 | [
{
"type": "text",
"value": "🙋🏻♂️Hey there folks ,",
"raw": "🙋🏻♂️Hey there folks ,",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "i'm so impressed by the reception that ",
"raw": "i'm so impressed by the reception that ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/collabora/whisperspeech",
"href": "https://huggingface.co/collabora/whisperspeech",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " has received! ",
"raw": " has received! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Check out the cool demo here : ",
"raw": "Check out the cool demo here : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/collabora/WhisperSpeech",
"href": null,
"resource": {
"type": "space",
"id": "collabora/WhisperSpeech",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/collabora/WhisperSpeech",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Open issue: how do we provide MPS support? cc. ",
"raw": "- Open issue: how do we provide MPS support? cc. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@intel",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "intel",
"label": null,
"lang": null
},
{
"type": "text",
"value": " :-) looking into this now , any leads welcome! ",
"raw": " :-) looking into this now , any leads welcome! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "check out also [collabora/whisperfusion](",
"raw": "check out also [collabora/whisperfusion](",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/collabora/WhisperFusion",
"href": "https://github.com/collabora/WhisperFusion",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ")",
"raw": ")",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "hope you enjoy ! 🤗",
"raw": "hope you enjoy ! 🤗",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🙋🏻♂️Hey there folks ,
i'm so impressed by the reception that https://huggingface.co/collabora/whisperspeech has received!
Check out the cool demo here : https://huggingface.co/spaces/collabora/WhisperSpeech
- Open issue: how do we provide MPS support? cc. @intel :-) looking into this now, any leads welcome!
check out also [collabora/whisperfusion](https://github.com/collabora/WhisperFusion)
hope you enjoy ! 🤗 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a3bb1cd0d8c2c2169f0b88/eT2TS0IlQbZtz-F_zHLz9.jpeg",
"fullname": "Joseph [open/acc] Pollack",
"name": "Tonic",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 313,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"blanchon",
"samusenps",
"matlok",
"tollefj",
"osanseviero",
"victor",
"clem"
],
"count": 7
},
{
"reaction": "🤯",
"users": [
"blanchon",
"PerceptiveFocusInc",
"victor",
"clem"
],
"count": 4
}
] | 2024-02-03T13:30:40.000Z | 2024-02-03T13:30:54.586Z | [] | /posts/Tonic/325784424398961 | 16 | 0 |
386479135005327 | [
{
"type": "text",
"value": "🚨📢🚀 Introducing Hercules-v2.0! A robust, multifaceted dataset for advanced models to excel in specialized domains. 🔬🌌📚🚀",
"raw": "🚨📢🚀 Introducing Hercules-v2.0! A robust, multifaceted dataset for advanced models to excel in specialized domains. 🔬🌌📚🚀",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📈 1.3M examples from sources derived from OpenHermes-2.5, covering Biology, Physics, Math, CS, Instruction Following, Function Calling, and Roleplay.",
"raw": "📈 1.3M examples from sources derived from OpenHermes-2.5, covering Biology, Physics, Math, CS, Instruction Following, Function Calling, and Roleplay.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔬 Enhance natural language understanding and processing in diverse domains.",
"raw": "🔬 Enhance natural language understanding and processing in diverse domains.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🚀 Develop models for complex instructions, function calls, and roleplay scenarios.",
"raw": "🚀 Develop models for complex instructions, function calls, and roleplay scenarios.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📄 Licensed under Apache-2.0.",
"raw": "📄 Licensed under Apache-2.0.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Thank you to all contributors and OpenHermes-2.5 creator! 🎉",
"raw": "Thank you to all contributors and OpenHermes-2.5 creator! 🎉",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Check it out here: ",
"raw": "Check it out here: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/Locutusque/hercules-v2.0",
"href": null,
"resource": {
"type": "dataset",
"id": "Locutusque/hercules-v2.0",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/Locutusque/hercules-v2.0",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📣 Update: After fine-tuning Mistral 7B on 100,000 examples of Hercules-v2.0, it earns an average score of 62 on Open LLM Leaderboard, outperforming OpenHermes-2.5 and OpenChat-3.5. 🎉",
"raw": "📣 Update: After fine-tuning Mistral 7B on 100,000 examples of Hercules-v2.0, it earns an average score of 62 on Open LLM Leaderboard, outperforming OpenHermes-2.5 and OpenChat-3.5. 🎉",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Check out this model here: ",
"raw": "Check out this model here: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/Locutusque/Hercules-2.0-Mistral-7B",
"href": null,
"resource": {
"type": "model",
"id": "Locutusque/Hercules-2.0-Mistral-7B",
"discussionNum": null
},
"url": "https://huggingface.co/Locutusque/Hercules-2.0-Mistral-7B",
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🚨📢🚀 Introducing Hercules-v2.0! A robust, multifaceted dataset for advanced models to excel in specialized domains. 🔬🌌📚🚀
📈 1.3M examples from sources derived from OpenHermes-2.5, covering Biology, Physics, Math, CS, Instruction Following, Function Calling, and Roleplay.
🔬 Enhance natural language understanding and processing in diverse domains.
🚀 Develop models for complex instructions, function calls, and roleplay scenarios.
📄 Licensed under Apache-2.0.
Thank you to all contributors and OpenHermes-2.5 creator! 🎉
Check it out here: https://huggingface.co/datasets/Locutusque/hercules-v2.0
📣 Update: After fine-tuning Mistral 7B on 100,000 examples of Hercules-v2.0, it earns an average score of 62 on the Open LLM Leaderboard, outperforming OpenHermes-2.5 and OpenChat-3.5. 🎉
Check out this model here: https://huggingface.co/Locutusque/Hercules-2.0-Mistral-7B | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/YeFyz1AZVcCRsyNHHtwJG.jpeg",
"fullname": "Sebastian Gabarain",
"name": "Locutusque",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 180,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🤗",
"users": [
"taufiqdp",
"osanseviero",
"Tonic",
"samusenps",
"victor",
"Locutusque",
"kramp",
"Severian",
"clem",
"bunnycore",
"sethuiyer",
"statler"
],
"count": 12
},
{
"reaction": "❤️",
"users": [
"afrideva",
"g-ronimo"
],
"count": 2
}
] | 2024-02-03T01:39:06.000Z | 2024-02-23T01:15:20.459Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6490a40381ca99870de9ab9c/ISqJ3HKfJaQOsUsGcrx7n.jpeg",
"fullname": "Mohamed Isaac",
"name": "maxamed",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
}
] | /posts/Locutusque/386479135005327 | 48 | 3 |
775111143022741 | [
{
"type": "text",
"value": "GPU Poor POV: Burnout",
"raw": "GPU Poor POV: Burnout",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Sometimes we do not have the energy to post about AI and new methods. ",
"raw": "Sometimes we do not have the energy to post about AI and new methods. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "And that's totally ok, I guess.",
"raw": "And that's totally ok, I guess.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Remember to sleep well and drink a lot of water. Have a great day :D <3",
"raw": "Remember to sleep well and drink a lot of water. Have a great day :D <3",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | GPU Poor POV: Burnout
Sometimes we do not have the energy to post about AI and new methods.
And that's totally ok, I guess.
Remember to sleep well and drink a lot of water. Have a great day :D <3 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61caeda441f9432649f03ab6/0UdRCrzIqhedZblgfpMBk.png",
"fullname": "s3nh",
"name": "s3nh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 216,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"vicgalle",
"Epiculous",
"Ji-Ha",
"samusenps",
"visheratin",
"MaziyarPanahi",
"osanseviero",
"joey00072",
"not-lain",
"saishf",
"victor",
"Meggido",
"PerceptiveFocusInc",
"mvaloatto",
"antiven0m",
"taufiqdp",
"kramp",
"ajibawa-2023",
"award40",
"s3nh",
"lusstta",
"alielfilali01",
"Tonic",
"mertbozkir",
"KnutJaegersberg",
"vincentweisser",
"sbrandeis",
"AtAndDev"
],
"count": 28
}
] | 2024-02-02T21:25:25.000Z | 2024-05-29T20:27:39.911Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1669551186189-63732ebbbd81fae2b3aaf3fb.jpeg",
"fullname": "Knut Jägersberg",
"name": "KnutJaegersberg",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 238,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61caeda441f9432649f03ab6/0UdRCrzIqhedZblgfpMBk.png",
"fullname": "s3nh",
"name": "s3nh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 216,
"isFollowing": false
}
] | /posts/s3nh/775111143022741 | 1,662 | 2 |
869679523782650 | [
{
"type": "text",
"value": "ICYMI! Nomic Embed, the first fully open long context text embedder to beat OpenAI",
"raw": "ICYMI! Nomic Embed, the first fully open long context text embedder to beat OpenAI",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Open source, open weights, open data",
"raw": "- Open source, open weights, open data",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Beats OpenAI text-embedding-3-small and Ada on short and long context benchmarks",
"raw": "- Beats OpenAI text-embedding-3-small and Ada on short and long context benchmarks",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Day 1 integrations with Langchain, LlamaIndex, MongoDB, and Sentence Transformers",
"raw": "- Day 1 integrations with Langchain, LlamaIndex, MongoDB, and Sentence Transformers",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Check out ",
"raw": "Check out ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/nomic-ai/nomic-embed-text-v1",
"href": null,
"resource": {
"type": "model",
"id": "nomic-ai/nomic-embed-text-v1",
"discussionNum": null
},
"url": "https://huggingface.co/nomic-ai/nomic-embed-text-v1",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " for the model weights.",
"raw": " for the model weights.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Technical report: ",
"raw": "Technical report: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://static.nomic.ai/reports/2024_Nomic_Embed_Text_Technical_Report.pdf",
"href": "https://static.nomic.ai/reports/2024_Nomic_Embed_Text_Technical_Report.pdf",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Blog Post: ",
"raw": "Blog Post: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://blog.nomic.ai/posts/nomic-embed-text-v1",
"href": "https://blog.nomic.ai/posts/nomic-embed-text-v1",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Original Tweet Thread: ",
"raw": "Original Tweet Thread: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://x.com/nomic_ai/status/1753082063048040829?s=20",
"href": "https://x.com/nomic_ai/status/1753082063048040829?s=20",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | ICYMI! Nomic Embed, the first fully open long context text embedder to beat OpenAI
- Open source, open weights, open data
- Beats OpenAI text-embedding-3-small and Ada on short and long context benchmarks
- Day 1 integrations with Langchain, LlamaIndex, MongoDB, and Sentence Transformers
Check out https://huggingface.co/nomic-ai/nomic-embed-text-v1 for the model weights.
Technical report: https://static.nomic.ai/reports/2024_Nomic_Embed_Text_Technical_Report.pdf
Blog Post: https://blog.nomic.ai/posts/nomic-embed-text-v1
Original Tweet Thread: https://x.com/nomic_ai/status/1753082063048040829?s=20 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/607997c83a565c15675055b3/KCVb16r2WHSqyRbUyp4eK.jpeg",
"fullname": "Zach Nussbaum",
"name": "zpn",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 32,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/607997c83a565c15675055b3/uyiB2hL6jauzSofSPw10b.mp4"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"bstadt",
"osanseviero",
"davanstrien",
"clem",
"jeffboudier",
"yjernite",
"MaziyarPanahi",
"julien-c",
"SwimminPizza",
"dave3991",
"samusenps",
"baitao666",
"VictorSanh",
"radames",
"blanchon",
"Rexhaif",
"Tonic"
],
"count": 17
},
{
"reaction": "🤯",
"users": [
"osanseviero",
"davanstrien",
"clem",
"yjernite",
"julien-c",
"VictorSanh",
"radames"
],
"count": 7
}
] | 2024-02-02T21:23:53.000Z | 2024-02-05T02:04:50.148Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"fullname": "Daniel van Strien",
"name": "davanstrien",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 410,
"isFollowing": false
}
] | /posts/zpn/869679523782650 | 197 | 1 |
765968951120246 | [
{
"type": "text",
"value": "Lots of cool Gradio custom components, but this is the most generally useful one I've seen so far: insert a Modal into any Gradio app by using the ",
"raw": "Lots of cool Gradio custom components, but this is the most generally useful one I've seen so far: insert a Modal into any Gradio app by using the ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "inline_code",
"value": null,
"raw": "`modal`",
"href": null,
"resource": null,
"url": null,
"code": "modal",
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " component!",
"raw": " component!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "code_fence",
"value": null,
"raw": "```py\nfrom gradio_modal import Modal\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"### Main Page\")\n gr.Textbox(\"lorem ipsum \" * 1000, lines=10)\n\n with Modal(visible=True) as modal:\n gr.Markdown(\"# License Agreement\")\n```",
"href": null,
"resource": null,
"url": null,
"code": "from gradio_modal import Modal\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"### Main Page\")\n gr.Textbox(\"lorem ipsum \" * 1000, lines=10)\n\n with Modal(visible=True) as modal:\n gr.Markdown(\"# License Agreement\")",
"user": null,
"label": null,
"lang": "py"
}
Lots of cool Gradio custom components, but this is the most generally useful one I've seen so far: insert a Modal into any Gradio app by using the `modal` component!
```py
from gradio_modal import Modal
with gr.Blocks() as demo:
gr.Markdown("### Main Page")
gr.Textbox("lorem ipsum " * 1000, lines=10)
with Modal(visible=True) as modal:
gr.Markdown("# License Agreement")
``` | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621947938344-noauth.png",
"fullname": "Abubakar Abid",
"name": "abidlabs",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 487,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/608b8bb39d7c9519b4adae19/XZAOvFSlH1wctFbA_WU9s.gif"
}
] | [] | [
{
"reaction": "🤯",
"users": [
"victor",
"s3nh",
"sbarman25",
"eskayML",
"akhaliq",
"RSHVR",
"dwipper"
],
"count": 7
},
{
"reaction": "❤️",
"users": [
"MaziyarPanahi",
"afrideva",
"samusenps",
"akhaliq",
"AseidasVera",
"RSHVR"
],
"count": 6
},
{
"reaction": "👍",
"users": [
"mathiasn1",
"Norod78"
],
"count": 2
}
] | 2024-02-02T19:13:42.000Z | 2024-02-02T19:13:58.296Z | [] | /posts/abidlabs/765968951120246 | 307 | 0 |
484223631728087 | [
{
"type": "text",
"value": "I'm happy to announce that ✨ Image to Music v2 ✨ is ready for you to try and i hope you'll like it too ! 😌",
"raw": "I'm happy to announce that ✨ Image to Music v2 ✨ is ready for you to try and i hope you'll like it too ! 😌",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This new version has been crafted with transparency in mind, ",
"raw": "This new version has been crafted with transparency in mind, ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "so you can understand the process of translating an image to a musical equivalent. ",
"raw": "so you can understand the process of translating an image to a musical equivalent. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "How does it work under the hood? 🤔",
"raw": "How does it work under the hood? 🤔",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "First, we get a very literal caption from ",
"raw": "First, we get a very literal caption from ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/microsoft/kosmos-2-patch14-224",
"href": null,
"resource": {
"type": "model",
"id": "microsoft/kosmos-2-patch14-224",
"discussionNum": null
},
"url": "https://huggingface.co/microsoft/kosmos-2-patch14-224",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "; this caption is then given to an LLM Agent (currently ",
"raw": "; this caption is then given to an LLM Agent (currently ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/HuggingFaceH4/zephyr-7b-beta",
"href": null,
"resource": {
"type": "model",
"id": "HuggingFaceH4/zephyr-7b-beta",
"discussionNum": null
},
"url": "https://huggingface.co/HuggingFaceH4/zephyr-7b-beta",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "), whose task is to translate the image caption into a musical, inspirational prompt for the next step.",
"raw": "), whose task is to translate the image caption into a musical, inspirational prompt for the next step.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Once we have a nice musical text from the LLM, we can send it to the text-to-music model of your choice: ",
"raw": "Once we have a nice musical text from the LLM, we can send it to the text-to-music model of your choice: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "MAGNet, MusicGen, AudioLDM-2, Riffusion or Mustango",
"raw": "MAGNet, MusicGen, AudioLDM-2, Riffusion or Mustango",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Unlike the previous version of Image to Music, which used the Mubert API and could output curious and obscure combinations, we only provide open-source models available on the Hub, called via the Gradio API.",
"raw": "Unlike the previous version of Image to Music, which used the Mubert API and could output curious and obscure combinations, we only provide open-source models available on the Hub, called via the Gradio API.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Also, I expect the music result to better match the atmosphere of the input image, thanks to the LLM Agent step. ",
"raw": "Also, I expect the music result to better match the atmosphere of the input image, thanks to the LLM Agent step. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Pro tip: you can adjust the inspirational prompt to match your expectations, according to the chosen model and the specific behavior of each one 👌",
"raw": "Pro tip: you can adjust the inspirational prompt to match your expectations, according to the chosen model and the specific behavior of each one 👌",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Try it, explore different models and tell me which one is your favorite 🤗",
"raw": "Try it, explore different models and tell me which one is your favorite 🤗",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "—› ",
"raw": "—› ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/fffiloni/image-to-music-v2",
"href": null,
"resource": {
"type": "space",
"id": "fffiloni/image-to-music-v2",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/fffiloni/image-to-music-v2",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | I'm happy to announce that ✨ Image to Music v2 ✨ is ready for you to try and i hope you'll like it too ! 😌
This new version has been crafted with transparency in mind,
so you can understand the process of translating an image to a musical equivalent.
How does it work under the hood? 🤔
First, we get a very literal caption from https://huggingface.co/microsoft/kosmos-2-patch14-224; this caption is then given to an LLM Agent (currently https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), whose task is to translate the image caption into a musical, inspirational prompt for the next step.
Once we have a nice musical text from the LLM, we can send it to the text-to-music model of your choice: 
MAGNet, MusicGen, AudioLDM-2, Riffusion or Mustango
Unlike the previous version of Image to Music, which used the Mubert API and could output curious and obscure combinations, we only provide open-source models available on the Hub, called via the Gradio API.
Also, I expect the music result to better match the atmosphere of the input image, thanks to the LLM Agent step. 
Pro tip: you can adjust the inspirational prompt to match your expectations, according to the chosen model and the specific behavior of each one 👌
Try it, explore different models and tell me which one is your favorite 🤗
—› https://huggingface.co/spaces/fffiloni/image-to-music-v2
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61868ce808aae0b5499a2a95/F6BA0anbsoY_Z7M1JrwOe.jpeg",
"fullname": "Sylvain Filoni",
"name": "fffiloni",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5185,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61868ce808aae0b5499a2a95/KnZM7Kzb33xTefjq6ytzd.qt"
},
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61868ce808aae0b5499a2a95/eLC43-fHI5pfsScMC2SxZ.qt"
},
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61868ce808aae0b5499a2a95/wHUbxHhKs6p05JT_UXCxP.mp4"
},
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61868ce808aae0b5499a2a95/t4lVHFjKNNGqrGbsEzs_q.qt"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"victor",
"samusenps",
"osanseviero",
"phanes",
"abidlabs",
"koheingt",
"adhisetiawan",
"Indomitable-ai",
"boqsc",
"deepkyu",
"mvaloatto",
"MaziyarPanahi",
"powzone",
"kelsy",
"Chief-Inspector",
"pe65374",
"Swhalen98",
"aceeee",
"Aixbox"
],
"count": 19
},
{
"reaction": "👍",
"users": [
"naorex",
"ivied7",
"MehdiLeZ",
"eisneim",
"OmbelineM"
],
"count": 5
},
{
"reaction": "🤗",
"users": [
"Bssayla"
],
"count": 1
},
{
"reaction": "🚀",
"users": [
"Aixbox"
],
"count": 1
}
] | 2024-02-02T16:28:16.000Z | 2024-05-24T06:41:54.743Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63893d4c184615e463aa24b8/S1flsX_26OF6ZJBVcPlaf.jpeg",
"fullname": "Matt Valoatto",
"name": "mvaloatto",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 56,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/X7QKoiXbUtEZSG9jyvfk3.jpeg",
"fullname": "Victor Mustar",
"name": "victor",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 2607,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5fd5e18a90b6dc4633f6d292/gZXHW5dd9R86AV9LMZ--y.png",
"fullname": "Maziyar Panahi",
"name": "MaziyarPanahi",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 1541,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61868ce808aae0b5499a2a95/F6BA0anbsoY_Z7M1JrwOe.jpeg",
"fullname": "Sylvain Filoni",
"name": "fffiloni",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5185,
"isFollowing": false
},
{
"avatarUrl": "/avatars/9f18cf580ec917926be976fc061d00bd.svg",
"fullname": "Andrew Glyadchenko",
"name": "andgly95",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677620936417-63fe761688b9695964bf2696.jpeg",
"fullname": "paolo calveri",
"name": "acebruck",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6544b4a04200ae379ea69302/-OnV6YPGaTUyBKOJiq4WH.png",
"fullname": "Mehdi Zatar",
"name": "MehdiLeZ",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 30,
"isFollowing": false
}
] | /posts/fffiloni/484223631728087 | 5,950 | 9 |
788807508037424 | [
{
"type": "text",
"value": "First post alert 🚨",
"raw": "First post alert 🚨",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Super excited to share with you my first Chat assistant : ",
"raw": "Super excited to share with you my first Chat assistant : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "inline_code",
"value": null,
"raw": "`HuggingAssist`",
"href": null,
"resource": null,
"url": null,
"code": "HuggingAssist",
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ", meant to offer guidance across the large Hugging Face ecosystem",
"raw": ", meant to offer guidance across the large Hugging Face ecosystem",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Chat with it from here : ",
"raw": "Chat with it from here : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://hf.co/chat/assistant/65bd0adc08560e58be454d86",
"href": "https://hf.co/chat/assistant/65bd0adc08560e58be454d86",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "It will be even more helpful once the RAG / web features are available!",
"raw": "It will be even more helpful once the RAG / web features are available!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Looking forward to it 🔥",
"raw": "Looking forward to it 🔥",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "ps : tnx ",
"raw": "ps : tnx ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@Chunte",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "Chunte",
"label": null,
"lang": null
},
{
"type": "text",
"value": " for the cool Huggies ",
"raw": " for the cool Huggies ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | First post alert 🚨
Super excited to share with you my first Chat assistant :
`HuggingAssist`, meant to offer guidance across the large Hugging Face ecosystem
Chat with it from here : https://hf.co/chat/assistant/65bd0adc08560e58be454d86
It will be even more helpful once the RAG / web features are available!
Looking forward to it 🔥
ps : tnx @Chunte for the cool Huggies | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/626237d9bbcbd1c34f1bb231/LFMIwAdOhipaJPSEErXbk.jpeg"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1678826705670-61fd75b93c49561870461907.png",
"fullname": "ChunTe Lee",
"name": "Chunte",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 78
}
] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"nsarrazin",
"victor",
"EmilyWitko",
"clem",
"pierrci",
"abidlabs",
"samusenps",
"BrigitteTousi",
"KBayoud",
"antmaiorino",
"Tohito",
"kramp",
"mammour"
],
"count": 14
}
] | 2024-02-02T15:58:51.000Z | 2024-02-04T20:52:12.272Z | [
{
"avatarUrl": "/avatars/08bf9559d8046f18f608960dd08b6e8d.svg",
"fullname": "Roger C",
"name": "allknowingroger",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 54,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/X7QKoiXbUtEZSG9jyvfk3.jpeg",
"fullname": "Victor Mustar",
"name": "victor",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 2607,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
}
] | /posts/alielfilali01/788807508037424 | 10 | 4 |
490694282283725 | [
{
"type": "text",
"value": "🔍 Today's pick in Interpretability & Analysis of LMs: ReAGent: Towards A Model-agnostic Feature Attribution Method for Generative Language Models by ",
"raw": "🔍 Today's pick in Interpretability & Analysis of LMs: ReAGent: Towards A Model-agnostic Feature Attribution Method for Generative Language Models by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@casszhao",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "casszhao",
"label": null,
"lang": null
},
{
"type": "text",
"value": " and B. Shan",
"raw": " and B. Shan",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Authors propose Recursive Attribution Generation (ReAGent), a perturbation-based feature attribution approach specifically conceived for generative LMs. The method employs a lightweight encoder LM to replace sampled input spans with valid alternatives and measure the effect of the perturbation on the drop in next-token probability predictions. ReAGent is shown to consistently outperform other established approaches across several models and generation tasks in terms of token- and sentence-level faithfulness.",
"raw": "Authors propose Recursive Attribution Generation (ReAGent), a perturbation-based feature attribution approach specifically conceived for generative LMs. The method employs a lightweight encoder LM to replace sampled input spans with valid alternatives and measure the effect of the perturbation on the drop in next-token probability predictions. ReAGent is shown to consistently outperform other established approaches across several models and generation tasks in terms of token- and sentence-level faithfulness.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📄 Paper: ",
"raw": "📄 Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.00794",
"href": null,
"resource": {
"type": "paper",
"id": "2402.00794",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.00794",
"code": null,
"user": null,
"label": "ReAGent: Towards A Model-agnostic Feature Attribution Method for\n Generative Language Models (2402.00794)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "💻 Code: ",
"raw": "💻 Code: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/casszhao/ReAGent",
"href": "https://github.com/casszhao/ReAGent",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔍 Today's pick in Interpretability & Analysis of LMs: ReAGent: Towards A Model-agnostic Feature Attribution Method for Generative Language Models by @casszhao and B. Shan
Authors propose Recursive Attribution Generation (ReAGent), a perturbation-based feature attribution approach specifically conceived for generative LMs. The method employs a lightweight encoder LM to replace sampled input spans with valid alternatives and measure the effect of the perturbation on the drop in next-token probability predictions. ReAGent is shown to consistently outperform other established approaches across several models and generation tasks in terms of token- and sentence-level faithfulness.
📄 Paper: https://huggingface.co/papers/2402.00794
💻 Code: https://github.com/casszhao/ReAGent | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/u0-_gw7b7aM22s1DFyZ1w.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/yDRXUb3pfJfwiRO1RGdyP.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/hdSGTNx-0GKdT7LvYhoOR.png"
}
] | [
{
"avatarUrl": "/avatars/f45e448049a62b5102eec70bc646205b.svg",
"fullname": "casszhao",
"name": "casszhao",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1
}
] | [
{
"reaction": "👍",
"users": [
"victor",
"samusenps"
],
"count": 2
}
] | 2024-02-02T15:38:25.000Z | 2024-02-02T15:38:25.197Z | [] | /posts/gsarti/490694282283725 | 9 | 0 |
418413642428915 | [
{
"type": "text",
"value": "🔥 New LLM leaderboard on the hub: NPHardEval!",
"raw": "🔥 New LLM leaderboard on the hub: NPHardEval!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "It uses questions of logic, of different mathematical complexities, as a proxy for reasoning abilities. It notably removes questions relying on arithmetic, to really focus on logical abilities. ",
"raw": "It uses questions of logic, of different mathematical complexities, as a proxy for reasoning abilities. It notably removes questions relying on arithmetic, to really focus on logical abilities. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "What's interesting imo is the potential to really study a model's performance at different levels of complexity. ",
"raw": "What's interesting imo is the potential to really study a model's performance at different levels of complexity. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Bonus: Since the questions can be generated automatically, it's going to be dynamic, updated monthly! 🚀",
"raw": "Bonus: Since the questions can be generated automatically, it's going to be dynamic, updated monthly! 🚀",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/NPHardEval/NPHardEval-leaderboard",
"href": null,
"resource": {
"type": "space",
"id": "NPHardEval/NPHardEval-leaderboard",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/NPHardEval/NPHardEval-leaderboard",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Read more about how their questions are generated in the intro blog: ",
"raw": "Read more about how their questions are generated in the intro blog: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/leaderboards-on-the-hub-nphardeval",
"href": "https://huggingface.co/blog/leaderboards-on-the-hub-nphardeval",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Congrats to ",
"raw": "Congrats to ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@lizhouf",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "lizhouf",
"label": null,
"lang": null
},
{
"type": "text",
"value": ", ",
"raw": ", ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@wenyueH",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "wenyueH",
"label": null,
"lang": null
},
{
"type": "text",
"value": ", ",
"raw": ", ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@hyfrankl",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "hyfrankl",
"label": null,
"lang": null
},
{
"type": "text",
"value": " and their teams!",
"raw": " and their teams!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔥 New LLM leaderboard on the hub: NPHardEval!
It uses questions of logic, of different mathematical complexities, as a proxy for reasoning abilities. It notably removes questions relying on arithmetic, to really focus on logical abilities.
What's interesting imo is the potential to really study a model's performance at different levels of complexity. 
Bonus: Since the questions can be generated automatically, it's going to be dynamic, updated monthly! 🚀
https://huggingface.co/spaces/NPHardEval/NPHardEval-leaderboard
Read more about how their questions are generated in the intro blog: https://huggingface.co/blog/leaderboards-on-the-hub-nphardeval
Congrats to @lizhouf, @wenyueH, @hyfrankl and their teams! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png",
"fullname": "Clémentine Fourrier",
"name": "clefourrier",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 459,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/3tCKhbLvzqFNpsMzoSGwf.jpeg",
"fullname": "Haoyang Ling",
"name": "hyfrankl",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62b0c1fd21241b71c9ab8987/ArWJrVCXLUCbIoeqjKKBx.jpeg",
"fullname": "Lizhou Fan",
"name": "lizhouf",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null
},
{
"avatarUrl": "/avatars/03651951ac9faadb25349e0eb6ae7266.svg",
"fullname": "Wenyue Hua",
"name": "wenyueH",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3
}
] | [
{
"reaction": "🤗",
"users": [
"alvarobartt",
"osanseviero",
"alielfilali01",
"samusenps",
"lizhouf",
"diwank",
"hyfrankl",
"kramp",
"joaogante",
"hivaze",
"Kukedlc",
"sbrandeis"
],
"count": 12
},
{
"reaction": "🤝",
"users": [
"osanseviero"
],
"count": 1
},
{
"reaction": "🤯",
"users": [
"Kukedlc"
],
"count": 1
}
] | 2024-02-02T15:30:04.000Z | 2024-02-02T15:38:04.834Z | [] | /posts/clefourrier/418413642428915 | 47 | 0 |
554607189284655 | [
{
"type": "text",
"value": "OLMo",
"raw": "OLMo",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Accelerating the Science of Language Models",
"raw": "Accelerating the Science of Language Models",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "paper page: ",
"raw": "paper page: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2402.00838",
"href": null,
"resource": {
"type": "paper",
"id": "2402.00838",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2402.00838",
"code": null,
"user": null,
"label": "OLMo: Accelerating the Science of Language Models (2402.00838)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "a state-of-the-art, truly Open Language Model and its framework to build and study the science of language modeling. Unlike most prior efforts that have only released model weights and inference code, we release OLMo and the whole framework, including training data and training and evaluation code. We hope this release will empower and strengthen the open research community and inspire a new wave of innovation.",
"raw": "a state-of-the-art, truly Open Language Model and its framework to build and study the science of language modeling. Unlike most prior efforts that have only released model weights and inference code, we release OLMo and the whole framework, including training data and training and evaluation code. We hope this release will empower and strengthen the open research community and inspire a new wave of innovation.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | OLMo
Accelerating the Science of Language Models
paper page: https://huggingface.co/papers/2402.00838
a state-of-the-art, truly Open Language Model and its framework to build and study the science of language modeling. Unlike most prior efforts that have only released model weights and inference code, we release OLMo and the whole framework, including training data and training and evaluation code. We hope this release will empower and strengthen the open research community and inspire a new wave of innovation. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"fullname": "AK",
"name": "akhaliq",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/qYSHVeUUsnPqnW8TRQ6KN.jpeg"
}
] | [] | [
{
"reaction": "👍",
"users": [
"vladbogo",
"Dlbk",
"osanseviero",
"samusenps",
"clem",
"taufiqdp",
"mathiasn1",
"blanchon",
"matlok",
"Ji-Ha",
"Iliassti",
"nold",
"Keesh",
"SX-GitHub"
],
"count": 14
},
{
"reaction": "❤️",
"users": [
"Iliassti",
"santyzenith",
"nold"
],
"count": 3
}
] | 2024-02-02T14:55:40.000Z | 2024-02-02T14:55:40.082Z | [] | /posts/akhaliq/554607189284655 | 32 | 0 |
993227501196898 | [
{
"type": "text",
"value": "🔥 New on HuggingChat: Assistants!",
"raw": "🔥 New on HuggingChat: Assistants!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Today we are releasing Assistants on HuggingChat!",
"raw": "Today we are releasing Assistants on HuggingChat!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Assistants are a fun way to package your prompts and share them with the world - powered by Open source Models of course!",
"raw": "Assistants are a fun way to package your prompts and share them with the world - powered by Open source Models of course!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Learn more about Assistants here: ",
"raw": "Learn more about Assistants here: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/huggingchat/chat-ui/discussions/357",
"href": null,
"resource": {
"type": "space",
"id": "huggingchat/chat-ui",
"discussionNum": 357
},
"url": "https://huggingface.co/spaces/huggingchat/chat-ui/discussions/357",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Browse Assistants here: ",
"raw": "Browse Assistants here: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/chat/assistants",
"href": "https://huggingface.co/chat/assistants",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔥 New on HuggingChat: Assistants!
Today we are releasing Assistants on HuggingChat!
Assistants are a fun way to package your prompts and share them with the world - powered by Open source Models of course!
Learn more about Assistants here: https://huggingface.co/spaces/huggingchat/chat-ui/discussions/357
Browse Assistants here: https://huggingface.co/chat/assistants | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/X7QKoiXbUtEZSG9jyvfk3.jpeg",
"fullname": "Victor Mustar",
"name": "victor",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 2607,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/R8Sk0jHjlORfhx3m4QI_c.png"
}
] | [] | [
{
"reaction": "🤗",
"users": [
"hmb",
"dtripathi",
"Dlbk",
"Sylvestre",
"AmethystAmenity",
"yasserrmd",
"osanseviero",
"andrewrreed",
"chkla",
"alielfilali01",
"kramp",
"samusenps",
"clem",
"taufiqdp",
"adamelliotfields",
"blanchon",
"beomi",
"marianbasti",
"Nymbo"
],
"count": 19
},
{
"reaction": "❤️",
"users": [
"PierreLepagnol",
"samusenps",
"neovalle",
"clem",
"fatemex",
"blanchon",
"beomi",
"Nymbo"
],
"count": 8
},
{
"reaction": "👍",
"users": [
"Vertdechat",
"Norod78",
"xsa-dev"
],
"count": 3
},
{
"reaction": "🤯",
"users": [
"beomi",
"fffiloni"
],
"count": 2
}
] | 2024-02-02T14:39:03.000Z | 2024-02-08T15:01:09.887Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/ls3TtT9VHR75JRRjLGc-i.jpeg",
"fullname": "x",
"name": "fatemex",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65ab93082bf3e0cbbf717850/iaiYac1w-1a18PeDOfzCp.png",
"fullname": "mod ster",
"name": "modster",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64024bb5cad36eb2b0fa8042/wFgZ0Tq29nJmAtwW7JfPf.png",
"fullname": "EP",
"name": "EveryPizza",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f17f0a0925b9863e28ad517/X7QKoiXbUtEZSG9jyvfk3.jpeg",
"fullname": "Victor Mustar",
"name": "victor",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 2607,
"isFollowing": false
},
{
"avatarUrl": "/avatars/fb3951660ef6d7cd75eae18918dbf028.svg",
"fullname": "Joao Occhiucci",
"name": "Farpador",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63c3490c6e6561b339e3bbec/99e-dpy7_V-CnfplrchYl.jpeg",
"fullname": "hannah",
"name": "hmb",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 28,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61d375fd733d3a83ecd1bba9/oIXwvvs1-HaCnJXMCZgkc.jpeg",
"fullname": "Andrew Reed",
"name": "andrewrreed",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 106,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64aac16fd4a402e8dce11ebe/W640vNvHPWwwG_u4TsRo0.png",
"fullname": "Jorge Vallego",
"name": "neovalle",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 9,
"isFollowing": false
}
] | /posts/victor/993227501196898 | 68 | 11 |
300798471936584 | [
{
"type": "text",
"value": "If you're looking for geospatial datasets, you might find what you're looking for in the Geospatial Datasets Collections:",
"raw": "If you're looking for geospatial datasets, you might find what you're looking for in the Geospatial Datasets Collections:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/blanchon/geospatial-datasets-656ddb097c934a7b3c4d4619",
"href": null,
"resource": {
"type": "collection",
"id": "blanchon/geospatial-datasets-656ddb097c934a7b3c4d4619",
"discussionNum": null
},
"url": "https://huggingface.co/collections/blanchon/geospatial-datasets-656ddb097c934a7b3c4d4619",
"code": null,
"user": null,
"label": null,
"lang": null
}
] | If you're looking for geospatial datasets, you might find what you're looking for in the Geospatial Datasets Collections:
https://huggingface.co/collections/blanchon/geospatial-datasets-656ddb097c934a7b3c4d4619 | {
"avatarUrl": "/avatars/716b6a7d1094c8036b2a8a7b9063e8aa.svg",
"fullname": "Julien BLANCHON",
"name": "blanchon",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 70,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👍",
"users": [
"s3nh",
"osanseviero",
"clem",
"samusenps",
"doof-ferb",
"severo"
],
"count": 6
},
{
"reaction": "❤️",
"users": [
"clem",
"samusenps"
],
"count": 2
},
{
"reaction": "🤝",
"users": [
"clem",
"samusenps"
],
"count": 2
}
] | 2024-02-02T10:15:52.000Z | 2024-02-02T10:15:52.917Z | [] | /posts/blanchon/300798471936584 | 1,096 | 0 |
851992122690412 | [
{
"type": "text",
"value": "GPU Poor POV: Quantization ",
"raw": "GPU Poor POV: Quantization ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Today I want to share with you my notebook plug and play code ",
"raw": "Today I want to share with you my notebook plug and play code ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "which help me a lot through my quantization journey. ",
"raw": "which help me a lot through my quantization journey. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Hope youll find it interesting it could be a good starter point to ",
"raw": "Hope youll find it interesting it could be a good starter point to ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "gguf some of your awesome models :)",
"raw": "gguf some of your awesome models :)",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Have a great day <3 ",
"raw": "Have a great day <3 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://s3nh.bearblog.dev/gpu-poor-pov-gguf-snippet/",
"href": "https://s3nh.bearblog.dev/gpu-poor-pov-gguf-snippet/",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | GPU Poor POV: Quantization
Today I want to share with you my notebook plug and play code
which helps me a lot through my quantization journey. 
Hope you'll find it interesting; it could be a good starting point to 
gguf some of your awesome models :)
Have a great day <3
https://s3nh.bearblog.dev/gpu-poor-pov-gguf-snippet/ | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61caeda441f9432649f03ab6/0UdRCrzIqhedZblgfpMBk.png",
"fullname": "s3nh",
"name": "s3nh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 216,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👍",
"users": [
"lunarflu",
"Aspik101",
"clem",
"naveedtrainer",
"osanseviero",
"samusenps",
"Csplk",
"taufiqdp",
"neovalle",
"Tanvir1337",
"Ji-Ha",
"jtatman",
"AtAndDev"
],
"count": 13
},
{
"reaction": "❤️",
"users": [
"lunarflu",
"clem",
"naveedtrainer",
"impactframes",
"Tanvir1337",
"alielfilali01",
"s3nh",
"Ji-Ha",
"AtAndDev"
],
"count": 9
},
{
"reaction": "🤗",
"users": [
"lunarflu",
"clem",
"naveedtrainer",
"osanseviero",
"Tanvir1337",
"Ji-Ha",
"AtAndDev"
],
"count": 7
}
] | 2024-02-01T20:56:42.000Z | 2024-03-01T09:35:20.086Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659180299e16fa7510840ac4/I66LcNr3i35ehuzw1vR2Q.png",
"fullname": "Ji-Ha",
"name": "Ji-Ha",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61caeda441f9432649f03ab6/0UdRCrzIqhedZblgfpMBk.png",
"fullname": "s3nh",
"name": "s3nh",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 216,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64fdfaeb01aedd0e86014de9/UliF1du7InfuCs7RHLiA5.png",
"fullname": "Ahmed Morsi",
"name": "eramax",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 22,
"isFollowing": false
}
] | /posts/s3nh/851992122690412 | 1,648 | 6 |
546095612686265 | [
{
"type": "text",
"value": "Just out: new custom Gradio component specifically designed for code completion models 🔥",
"raw": "Just out: new custom Gradio component specifically designed for code completion models 🔥",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Just out: new custom Gradio component specifically designed for code completion models 🔥 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621947938344-noauth.png",
"fullname": "Abubakar Abid",
"name": "abidlabs",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 487,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/608b8bb39d7c9519b4adae19/nYcaEFr2B-WmCguN8d4dq.gif"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"hmb",
"dhuynh95",
"sbarman25",
"julien-c",
"osanseviero",
"victor",
"lunarflu",
"clem",
"yehiaserag",
"samusenps",
"aust-t",
"alielfilali01",
"gsarti",
"ThreadAbort",
"not-lain",
"calmgoose",
"deepkyu",
"freddyaboulton"
],
"count": 18
},
{
"reaction": "🤗",
"users": [
"lunarflu"
],
"count": 1
}
] | 2024-02-01T18:40:01.000Z | 2024-02-01T18:57:23.394Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63c3490c6e6561b339e3bbec/99e-dpy7_V-CnfplrchYl.jpeg",
"fullname": "hannah",
"name": "hmb",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 28,
"isFollowing": false
}
] | /posts/abidlabs/546095612686265 | 19 | 1 |
597247655335205 | [
{
"type": "text",
"value": "release day release day! OLMo 1b + 7b out today 🥳 and 65b coming soon...",
"raw": "release day release day! OLMo 1b + 7b out today 🥳 and 65b coming soon...",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "With OLMo, we are really focused on advancing the study of LLMs. We release **everything**, from toolkit to create its training dataset (dolma) to training & inference code:",
"raw": "With OLMo, we are really focused on advancing the study of LLMs. We release **everything**, from toolkit to create its training dataset (dolma) to training & inference code:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- OLMo paper: ",
"raw": "- OLMo paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://allenai.org/olmo/olmo-paper.pdf",
"href": "https://allenai.org/olmo/olmo-paper.pdf",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- OLMo train code: ",
"raw": "- OLMo train code: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/allenai/OLMo",
"href": "https://github.com/allenai/OLMo",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- OLMo eval code: ",
"raw": "- OLMo eval code: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/allenai/OLMo-Eval",
"href": "https://github.com/allenai/OLMo-Eval",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- OLMo 7b: ",
"raw": "- OLMo 7b: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/allenai/OLMo-7B",
"href": null,
"resource": {
"type": "model",
"id": "allenai/OLMo-7B",
"discussionNum": null
},
"url": "https://huggingface.co/allenai/OLMo-7B",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- OLMo 1b: ",
"raw": "- OLMo 1b: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/allenai/OLMo-1B",
"href": null,
"resource": {
"type": "model",
"id": "allenai/OLMo-1B",
"discussionNum": null
},
"url": "https://huggingface.co/allenai/OLMo-1B",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Dolma paper: ",
"raw": "- Dolma paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://allenai.org/olmo/dolma-paper.pdf",
"href": "https://allenai.org/olmo/dolma-paper.pdf",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Dolma dataset v1.6: ",
"raw": "- Dolma dataset v1.6: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/allenai/dolma",
"href": null,
"resource": {
"type": "dataset",
"id": "allenai/dolma",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/allenai/dolma",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Dolma toolkit v1.0: ",
"raw": "- Dolma toolkit v1.0: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/allenai/dolma",
"href": "https://github.com/allenai/dolma",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | release day release day! OLMo 1b + 7b out today 🥳 and 65b coming soon...
With OLMo, we are really focused on advancing the study of LLMs. We release **everything**, from the toolkit used to create its training dataset (Dolma) to the training & inference code:
- OLMo paper: https://allenai.org/olmo/olmo-paper.pdf
- OLMo train code: https://github.com/allenai/OLMo
- OLMo eval code: https://github.com/allenai/OLMo-Eval
- OLMo 7b: https://huggingface.co/allenai/OLMo-7B
- OLMo 1b: https://huggingface.co/allenai/OLMo-1B
- Dolma paper: https://allenai.org/olmo/dolma-paper.pdf
- Dolma dataset v1.6: https://huggingface.co/datasets/allenai/dolma
- Dolma toolkit v1.0: https://github.com/allenai/dolma | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5f04d8c45d08220171a0ad32/uXEta6nqBabrUlAOXnS5g.jpeg",
"fullname": "Luca Soldaini",
"name": "soldni",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 29,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f04d8c45d08220171a0ad32/jjeABR8y1bTcciAq757Tu.gif"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f04d8c45d08220171a0ad32/ZYygx-gRjYY-O7H4pxNAy.webp"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5f04d8c45d08220171a0ad32/4Z3rd6cdkvoNEjzTIGW06.webp"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"dim",
"julien-c",
"andysalerno",
"juancopi81",
"lunarflu",
"clem",
"davanstrien",
"mdouglas",
"samusenps",
"kramp",
"yizhongw",
"beomi",
"alielfilali01",
"mathiasn1",
"adamelliotfields",
"vbuhoijymzoi",
"Tonioesparza",
"loubnabnl",
"dashfunnydashdash"
],
"count": 20
}
] | 2024-02-01T17:47:26.000Z | 2024-02-02T14:00:07.256Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png",
"fullname": "Omar Sanseviero",
"name": "osanseviero",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2868,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
}
] | /posts/soldni/597247655335205 | 126 | 2 |
553686198582876 | [
{
"type": "text",
"value": "Infini-gram",
"raw": "Infini-gram",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Scaling Unbounded n-gram Language Models to a Trillion Tokens",
"raw": "Scaling Unbounded n-gram Language Models to a Trillion Tokens",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "demo: ",
"raw": "demo: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/liujch1998/infini-gram",
"href": null,
"resource": {
"type": "space",
"id": "liujch1998/infini-gram",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/liujch1998/infini-gram",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "paper page: ",
"raw": "paper page: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2401.17377",
"href": null,
"resource": {
"type": "paper",
"id": "2401.17377",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2401.17377",
"code": null,
"user": null,
"label": "Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion\n Tokens (2401.17377)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "train them at the same data scale as neural LLMs -- 1.4 trillion tokens. This is the largest n-gram model ever built. Second, existing n-gram models use small n which hinders their performance; we instead allow n to be arbitrarily large, by introducing a new infty-gram LM with backoff. Instead of pre-computing n-gram count tables (which would be very expensive), we develop an engine named infini-gram -- powered by suffix arrays -- that can compute infty-gram (as well as n-gram with arbitrary n) probabilities with millisecond-level latency. The infty-gram framework and infini-gram engine enable us to conduct many novel and interesting analyses of human-written and machine-generated text: we find that the infty-gram LM has fairly high accuracy for next-token prediction (47%), and can complement neural LLMs to greatly reduce their language modeling perplexities. When analyzing machine-generated text, we also observe irregularities in the machine--infty-gram agreement level with respect to the suffix length, which indicates deficiencies in neural LLM pretraining and the positional embeddings of Transformers. We open-source our infini-gram engine in the hopes of enabling more study on how to best use verbatim information retrieved from large text corpora.",
"raw": "train them at the same data scale as neural LLMs -- 1.4 trillion tokens. This is the largest n-gram model ever built. Second, existing n-gram models use small n which hinders their performance; we instead allow n to be arbitrarily large, by introducing a new infty-gram LM with backoff. Instead of pre-computing n-gram count tables (which would be very expensive), we develop an engine named infini-gram -- powered by suffix arrays -- that can compute infty-gram (as well as n-gram with arbitrary n) probabilities with millisecond-level latency. The infty-gram framework and infini-gram engine enable us to conduct many novel and interesting analyses of human-written and machine-generated text: we find that the infty-gram LM has fairly high accuracy for next-token prediction (47%), and can complement neural LLMs to greatly reduce their language modeling perplexities. When analyzing machine-generated text, we also observe irregularities in the machine--infty-gram agreement level with respect to the suffix length, which indicates deficiencies in neural LLM pretraining and the positional embeddings of Transformers. We open-source our infini-gram engine in the hopes of enabling more study on how to best use verbatim information retrieved from large text corpora.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Infini-gram
Scaling Unbounded n-gram Language Models to a Trillion Tokens
demo: https://huggingface.co/spaces/liujch1998/infini-gram
paper page: https://huggingface.co/papers/2401.17377
train them at the same data scale as neural LLMs -- 1.4 trillion tokens. This is the largest n-gram model ever built. Second, existing n-gram models use small n which hinders their performance; we instead allow n to be arbitrarily large, by introducing a new infty-gram LM with backoff. Instead of pre-computing n-gram count tables (which would be very expensive), we develop an engine named infini-gram -- powered by suffix arrays -- that can compute infty-gram (as well as n-gram with arbitrary n) probabilities with millisecond-level latency. The infty-gram framework and infini-gram engine enable us to conduct many novel and interesting analyses of human-written and machine-generated text: we find that the infty-gram LM has fairly high accuracy for next-token prediction (47%), and can complement neural LLMs to greatly reduce their language modeling perplexities. When analyzing machine-generated text, we also observe irregularities in the machine--infty-gram agreement level with respect to the suffix length, which indicates deficiencies in neural LLM pretraining and the positional embeddings of Transformers. We open-source our infini-gram engine in the hopes of enabling more study on how to best use verbatim information retrieved from large text corpora. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"fullname": "AK",
"name": "akhaliq",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/N75TNAiwJ3Qm1_JXbStrJ.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"sankeerth1729",
"Maani",
"lunarflu",
"clem",
"osanseviero",
"soldni",
"sacharbit",
"matlok"
],
"count": 8
}
] | 2024-02-01T17:45:23.000Z | 2024-02-01T17:45:23.639Z | [] | /posts/akhaliq/553686198582876 | 51 | 0 |
813974819065683 | [
{
"type": "text",
"value": "Vision LLM for #edgecomputing ?",
"raw": "Vision LLM for #edgecomputing ?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@openbmb",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "openbmb",
"label": null,
"lang": null
},
{
"type": "text",
"value": ", who OS'ed the UltraFeedback dataset before, released a series of strong eco-friendly yet powerful LLMs",
"raw": ", who OS'ed the UltraFeedback dataset before, released a series of strong eco-friendly yet powerful LLMs",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- MiniCPM: 2B model that competes with Mistral-7B ",
"raw": "- MiniCPM: 2B model that competes with Mistral-7B ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- MiniCPM-V: 3B vision LLM on edge!",
"raw": "- MiniCPM-V: 3B vision LLM on edge!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
    }
] | Vision LLM for #edgecomputing ?
@openbmb, who OS'ed the UltraFeedback dataset before, released a series of strong eco-friendly yet powerful LLMs
- MiniCPM: 2B model that competes with Mistral-7B 
- MiniCPM-V: 3B vision LLM on edge! | {
"avatarUrl": "/avatars/703dd06469aaac724c94f622262b14e8.svg",
"fullname": "Tiezhen WANG",
"name": "xianbao",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 88,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/62d22496c58f969c152bcefd/j9kaHBhLj-o7jJ0HknEa2.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"osanseviero",
"victor",
"lunarflu",
"aust-t"
],
"count": 4
}
] | 2024-02-01T16:57:40.000Z | 2024-02-01T16:57:40.350Z | [] | /posts/xianbao/813974819065683 | 24 | 0 |
733237597572564 | [
{
"type": "text",
"value": "hi florent and livestream!",
"raw": "hi florent and livestream!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | hi florent and livestream! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1639773384591-5f353bb37e58354338621655.jpeg",
"fullname": "Nicholas Broad",
"name": "nbroad",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 92,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🤗",
"users": [
"allanctan-ai",
"YoelRidgway",
"prithivMLmods",
"John6666",
"Haleshot",
"davidaparicio",
"AtAndDev",
"clem"
],
"count": 8
}
] | 2024-10-30T16:15:22.000Z | 2024-10-30T16:19:46.876Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62a3bb1cd0d8c2c2169f0b88/eT2TS0IlQbZtz-F_zHLz9.jpeg",
"fullname": "Joseph [open/acc] Pollack",
"name": "Tonic",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 313,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/67225abd03877f45cd46ffdf/-sL5CdPn0oCA1D8iiNThz.jpeg",
"fullname": "Rolando Manuel Gonzales Martinez",
"name": "Rolando666",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65e5fd212faa026716fd27bf/P53XOb-uuJKCqkMsTy8b7.png",
"fullname": "Firoj Paudel",
"name": "Firoj112",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "/avatars/0f9f2001c6ba1805e6c54937a5ad3f48.svg",
"fullname": "kulbinder singh dio",
"name": "kulbinderdio",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62e469312a8df5b22ff352ec/aq0VvYABsAXT5jZYrZDK7.jpeg",
"fullname": "Mr. Stack",
"name": "Hatman",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
}
] | /posts/nbroad/733237597572564 | 3,496 | 5 |
228294406282202 | [
{
"type": "text",
    "value": "I have been seeing a specific type of AI hype more and more, I call it, releasing research expecting that no one will ever reproduce your methods, then overhyping your results. I test the methodology of maybe 4-5 research papers per day. That is how I find a lot of my research. Usually, 3-4 of those experiments end up not being reproducible for some reason. I am starting to think it is not accidental. ",
    "raw": "I have been seeing a specific type of AI hype more and more, I call it, releasing research expecting that no one will ever reproduce your methods, then overhyping your results. I test the methodology of maybe 4-5 research papers per day. That is how I find a lot of my research. Usually, 3-4 of those experiments end up not being reproducible for some reason. I am starting to think it is not accidental. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "So, I am launching a new series where I specifically showcase a research paper by reproducing their methodology and highlighting the blatant flaws that show up when you actually do this. Here is Episode 1!",
"raw": "So, I am launching a new series where I specifically showcase a research paper by reproducing their methodology and highlighting the blatant flaws that show up when you actually do this. Here is Episode 1!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://www.youtube.com/watch?v=JLa0cFWm1A4",
"href": "https://www.youtube.com/watch?v=JLa0cFWm1A4",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | I have been seeing a specific type of AI hype more and more, I call it, releasing research expecting that no one will ever reproduce your methods, then overhyping your results. I test the methodology of maybe 4-5 research papers per day. That is how I find a lot of my research. Usually, 3-4 of those experiments end up not being reproducible for some reason. I am starting to think it is not accidental.
So, I am launching a new series where I specifically showcase a research paper by reproducing their methodology and highlighting the blatant flaws that show up when you actually do this. Here is Episode 1!
https://www.youtube.com/watch?v=JLa0cFWm1A4 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 146,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👀",
"users": [
"John6666",
"LeroyDyer",
"BoltMonkey",
"barnobarno666",
"Joseph717171",
"MexIvanov",
"victor",
"asaduzzaman319"
],
"count": 8
},
{
"reaction": "🔥",
"users": [
"GoDjMike"
],
"count": 1
},
{
"reaction": "❤️",
"users": [
"LeroyDyer"
],
"count": 1
}
] | 2024-10-30T15:58:03.000Z | 2024-11-01T05:02:57.145Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg",
"fullname": "leroy Samuel Dyer",
"name": "LeroyDyer",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cA64Ix1vh75C7HoClUBhx.png",
"fullname": "Richard A Aragon",
"name": "TuringsSolutions",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 146,
"isFollowing": false
}
] | /posts/TuringsSolutions/228294406282202 | 2,885 | 5 |
180739528927278 | [
{
"type": "text",
"value": "New Droppings🥳",
"raw": "New Droppings🥳",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "😶🌫️Collection: ",
"raw": "😶🌫️Collection: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be",
"href": null,
"resource": {
"type": "collection",
"id": "prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be",
"discussionNum": null
},
"url": "https://huggingface.co/collections/prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🥳Demo Here: ",
"raw": "🥳Demo Here: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC",
"href": null,
"resource": {
"type": "space",
"id": "prithivMLmods/FLUX-LoRA-DLC",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
    "value": " with 100+ Flux LoRAs",
    "raw": " with 100+ Flux LoRAs",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🪨Fluid Dramatic Neon: ",
"raw": "🪨Fluid Dramatic Neon: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/prithivMLmods/Castor-Dramatic-Neon-Flux-LoRA",
"href": null,
"resource": {
"type": "model",
"id": "prithivMLmods/Castor-Dramatic-Neon-Flux-LoRA",
"discussionNum": null
},
"url": "https://huggingface.co/prithivMLmods/Castor-Dramatic-Neon-Flux-LoRA",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🪨Past & Present Blend: ",
"raw": "🪨Past & Present Blend: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/prithivMLmods/Past-Present-Deep-Mix-Flux-LoRA",
"href": null,
"resource": {
"type": "model",
"id": "prithivMLmods/Past-Present-Deep-Mix-Flux-LoRA",
"discussionNum": null
},
"url": "https://huggingface.co/prithivMLmods/Past-Present-Deep-Mix-Flux-LoRA",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🪨Tarot Cards Refreshed Themes: ",
"raw": "🪨Tarot Cards Refreshed Themes: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/prithivMLmods/Ton618-Tarot-Cards-Flux-LoRA",
"href": null,
"resource": {
"type": "model",
"id": "prithivMLmods/Ton618-Tarot-Cards-Flux-LoRA",
"discussionNum": null
},
"url": "https://huggingface.co/prithivMLmods/Ton618-Tarot-Cards-Flux-LoRA",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🪨Amxtoon Character Mix Real-Anime: ",
"raw": "🪨Amxtoon Character Mix Real-Anime: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/prithivMLmods/Ton618-Amxtoon-Flux-LoRA",
"href": null,
"resource": {
"type": "model",
"id": "prithivMLmods/Ton618-Amxtoon-Flux-LoRA",
"discussionNum": null
},
"url": "https://huggingface.co/prithivMLmods/Ton618-Amxtoon-Flux-LoRA",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🪨Epic Realism Flux v1: ",
"raw": "🪨Epic Realism Flux v1: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/prithivMLmods/Ton618-Epic-Realism-Flux-LoRA",
"href": null,
"resource": {
"type": "model",
"id": "prithivMLmods/Ton618-Epic-Realism-Flux-LoRA",
"discussionNum": null
},
"url": "https://huggingface.co/prithivMLmods/Ton618-Epic-Realism-Flux-LoRA",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🪨Mock-up Textures: ",
"raw": "🪨Mock-up Textures: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/prithivMLmods/Mockup-Texture-Flux-LoRA",
"href": null,
"resource": {
"type": "model",
"id": "prithivMLmods/Mockup-Texture-Flux-LoRA",
"discussionNum": null
},
"url": "https://huggingface.co/prithivMLmods/Mockup-Texture-Flux-LoRA",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ".",
"raw": ".",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ".",
"raw": ".",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ".",
"raw": ".",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@prithivMLmods",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "prithivMLmods",
"label": null,
"lang": null
},
{
"type": "text",
"value": " 🤗",
"raw": " 🤗",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | New Droppings🥳
😶🌫️Collection: https://huggingface.co/collections/prithivMLmods/flux-lora-collections-66dd5908be2206cfaa8519be
🥳Demo Here: https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC with 100+ Flux LoRAs
🪨Fluid Dramatic Neon: https://huggingface.co/prithivMLmods/Castor-Dramatic-Neon-Flux-LoRA
🪨Past & Present Blend: https://huggingface.co/prithivMLmods/Past-Present-Deep-Mix-Flux-LoRA
🪨Tarot Cards Refreshed Themes: https://huggingface.co/prithivMLmods/Ton618-Tarot-Cards-Flux-LoRA
🪨Amxtoon Character Mix Real-Anime: https://huggingface.co/prithivMLmods/Ton618-Amxtoon-Flux-LoRA
🪨Epic Realism Flux v1: https://huggingface.co/prithivMLmods/Ton618-Epic-Realism-Flux-LoRA
🪨Mock-up Textures: https://huggingface.co/prithivMLmods/Mockup-Texture-Flux-LoRA
.
.
.
@prithivMLmods 🤗 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65bb837dbfb878f46c77de4c/UVtVbF_3rdt0DC8xTkpL1.jpeg",
"fullname": "Prithiv Sakthi",
"name": "prithivMLmods",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 393,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/VfrHf5i2DbHo0g8fVm42B.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/adPW9zHyIzot5BYcDO-cB.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/pPxO2PhkkpvikgZ4sESxK.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/AguYNMY8O1sZKoNW9X5dk.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/fSUZcA1UE4ga0lpDqM_vq.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/uJ81N620_WjhISVLpzfZY.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/R4PXIBJ6d1RxKyMBMu09_.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/ZqhBJGXhrH6nnmuBKvkD2.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/mTvkPE1dO1WbJ3wL6npiz.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/VgwkMtUVcxJi0_Rj4woU9.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65bb837dbfb878f46c77de4c/UVtVbF_3rdt0DC8xTkpL1.jpeg",
"fullname": "Prithiv Sakthi",
"name": "prithivMLmods",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 393
}
] | [
{
"reaction": "👍",
"users": [
"John6666",
"prithivMLmods",
"benhaotang",
"TuringsSolutions",
"JackCloudman",
"AtAndDev",
"rdrede",
"kimleang123",
"xumijiezi622",
"KingNish",
"hypergod",
"ai4life44",
"Rsln",
"s3nh",
"Ngrthm"
],
"count": 15
},
{
"reaction": "🔥",
"users": [
"Goekdeniz-Guelmez",
"AtAndDev",
"Aryanne",
"rdrede",
"kimleang123",
"darksfx",
"hypergod",
"s3nh",
"Ngrthm"
],
"count": 9
},
{
"reaction": "❤️",
"users": [
"ijohn07",
"rdrede",
"darksfx",
"hypergod",
"s3nh",
"Ngrthm"
],
"count": 6
},
{
"reaction": "➕",
"users": [
"Beygo",
"rdrede",
"ai4life44"
],
"count": 3
},
{
"reaction": "🤝",
"users": [
"rdrede",
"Ngrthm"
],
"count": 2
}
] | 2024-10-30T14:24:17.000Z | 2024-11-04T12:20:21.935Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/cPWKFh0uzPrRvsL7yRSEb.png",
"fullname": "Adam",
"name": "Adam-110",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65bb837dbfb878f46c77de4c/UVtVbF_3rdt0DC8xTkpL1.jpeg",
"fullname": "Prithiv Sakthi",
"name": "prithivMLmods",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 393,
"isFollowing": false
}
] | /posts/prithivMLmods/180739528927278 | 4,549 | 2 |
534421532450932 | [
{
"type": "text",
"value": "Excited to see my weird ",
"raw": "Excited to see my weird ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/davanstrien/ufo-ColPali",
"href": null,
"resource": {
"type": "dataset",
"id": "davanstrien/ufo-ColPali",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/davanstrien/ufo-ColPali",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " dataset featured in a video by ",
"raw": " dataset featured in a video by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@sabrinaesaquino",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "sabrinaesaquino",
"label": null,
"lang": null
},
{
"type": "text",
"value": "! ",
"raw": "! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
    "value": "The video covers using ColPali with Binary Quantization in Qdrant to accelerate retrieval. 2x speed up with no performance drop in results 🛸",
    "raw": "The video covers using ColPali with Binary Quantization in Qdrant to accelerate retrieval. 2x speed up with no performance drop in results 🛸",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Video: ",
"raw": "Video: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://youtu.be/_A90A-grwIc?si=oB3JAhJG8VQUZGLz",
"href": "https://youtu.be/_A90A-grwIc?si=oB3JAhJG8VQUZGLz",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Blog post: ",
"raw": "Blog post: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://danielvanstrien.xyz/posts/post-with-code/colpali-qdrant/2024-10-02_using_colpali_with_qdrant.html",
"href": "https://danielvanstrien.xyz/posts/post-with-code/colpali-qdrant/2024-10-02_using_colpali_with_qdrant.html",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Excited to see my weird https://huggingface.co/datasets/davanstrien/ufo-ColPali dataset featured in a video by @sabrinaesaquino!
The video covers using ColPali with Binary Quantization in Qdrant to accelerate retrieval. 2x speed up with no performance drop in results 🛸
Video: https://youtu.be/_A90A-grwIc?si=oB3JAhJG8VQUZGLz
Blog post: https://danielvanstrien.xyz/posts/post-with-code/colpali-qdrant/2024-10-02_using_colpali_with_qdrant.html | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"fullname": "Daniel van Strien",
"name": "davanstrien",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 410,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65ce6e2d13fcc46fa6bb247d/r2bh12jisymNPgFpCFRGf.jpeg",
"fullname": "Sabrina Aquino",
"name": "sabrinaesaquino",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2
}
] | [
{
"reaction": "🚀",
"users": [
"sabrinaesaquino",
"John6666",
"Tonic",
"Chao2012",
"TuringsSolutions"
],
"count": 5
},
{
"reaction": "❤️",
"users": [
"Aurelien-Morgan",
"Tonic",
"Chao2012"
],
"count": 3
}
] | 2024-10-30T14:02:34.000Z | 2024-10-30T15:54:02.407Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/65ce6e2d13fcc46fa6bb247d/r2bh12jisymNPgFpCFRGf.jpeg",
"fullname": "Sabrina Aquino",
"name": "sabrinaesaquino",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1627505688463-60107b385ac3e86b3ea4fc34.jpeg",
"fullname": "Daniel van Strien",
"name": "davanstrien",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 410,
"isFollowing": false
}
] | /posts/davanstrien/534421532450932 | 2,510 | 2 |
524165813731334 | [
{
"type": "text",
"value": "NEW RELEASE! Shining Valiant 2 for Llama 3.1 70b is here!",
"raw": "NEW RELEASE! Shining Valiant 2 for Llama 3.1 70b is here!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Trained on high quality science-instruct, complex queries, and general chat data!",
"raw": "- Trained on high quality science-instruct, complex queries, and general chat data!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Uses our newest datasets, ALL open-sourced for everyone to use!",
"raw": "- Uses our newest datasets, ALL open-sourced for everyone to use!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "GET SV2 70B: ",
"raw": "GET SV2 70B: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/ValiantLabs/Llama3.1-70B-ShiningValiant2",
"href": null,
"resource": {
"type": "model",
"id": "ValiantLabs/Llama3.1-70B-ShiningValiant2",
"discussionNum": null
},
"url": "https://huggingface.co/ValiantLabs/Llama3.1-70B-ShiningValiant2",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Find the SV datasets here, including the expanded version of our science-instruct dataset:",
"raw": "- Find the SV datasets here, including the expanded version of our science-instruct dataset:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " - ",
"raw": " - ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/sequelbox/Celestia",
"href": null,
"resource": {
"type": "dataset",
"id": "sequelbox/Celestia",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/sequelbox/Celestia",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " - ",
"raw": " - ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/sequelbox/Spurline",
"href": null,
"resource": {
"type": "dataset",
"id": "sequelbox/Spurline",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/sequelbox/Spurline",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " - ",
"raw": " - ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/sequelbox/Supernova",
"href": null,
"resource": {
"type": "dataset",
"id": "sequelbox/Supernova",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/sequelbox/Supernova",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- SV2 8b and 3b will be updated with the new datasets soon!",
"raw": "- SV2 8b and 3b will be updated with the new datasets soon!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Enjoy! :)",
"raw": "Enjoy! :)",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | NEW RELEASE! Shining Valiant 2 for Llama 3.1 70b is here!
- Trained on high quality science-instruct, complex queries, and general chat data!
- Uses our newest datasets, ALL open-sourced for everyone to use!
GET SV2 70B: https://huggingface.co/ValiantLabs/Llama3.1-70B-ShiningValiant2
- Find the SV datasets here, including the expanded version of our science-instruct dataset:
- https://huggingface.co/datasets/sequelbox/Celestia
- https://huggingface.co/datasets/sequelbox/Spurline
- https://huggingface.co/datasets/sequelbox/Supernova
- SV2 8b and 3b will be updated with the new datasets soon!
Enjoy! :) | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63444f2687964b331809eb55/WvZivsvKsM_t0tBtakovK.png",
"fullname": "t.d.a.g.",
"name": "sequelbox",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 51,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👀",
"users": [
"John6666"
],
"count": 1
}
] | 2024-10-30T13:13:59.000Z | 2024-10-30T13:13:59.591Z | [] | /posts/sequelbox/524165813731334 | 427 | 0 |
413388260975224 | [
{
"type": "text",
"value": "Cybertron is back:",
"raw": "Cybertron is back:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "We released today a newest version of Cybertron: V4 based on Qwen2.5 7B and trained on MagPie. Scoring #1 LLM on 7B & 8B class.",
"raw": "We released today a newest version of Cybertron: V4 based on Qwen2.5 7B and trained on MagPie. Scoring #1 LLM on 7B & 8B class.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The model hasn't go thru DPO, so the weights are in good shape to welcome further training sessions and optimizations.",
"raw": "The model hasn't go thru DPO, so the weights are in good shape to welcome further training sessions and optimizations.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Enjoy it in the hub as usual:",
"raw": "Enjoy it in the hub as usual:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS",
"href": null,
"resource": {
"type": "model",
"id": "fblgit/cybertron-v4-qw7B-MGS",
"discussionNum": null
},
"url": "https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS",
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Cybertron is back:
We released today the newest version of Cybertron: V4, based on Qwen2.5 7B and trained on MagPie. Scoring #1 LLM in the 7B & 8B class.
The model hasn't gone through DPO, so the weights are in good shape to welcome further training sessions and optimizations.
Enjoy it in the hub as usual:
https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6401c8c9f98fbc64bcd7dca1/MOSgc_mPbfUZ-354osy1v.png",
"fullname": "FBL",
"name": "fblgit",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 228,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👀",
"users": [
"John6666"
],
"count": 1
}
] | 2024-10-30T10:47:02.000Z | 2024-10-31T05:30:22.492Z | [
{
"avatarUrl": "/avatars/44ff5b58354edd971d40c953ecea7785.svg",
"fullname": "Cheng Rui",
"name": "postitive666",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null,
"isFollowing": false
}
] | /posts/fblgit/413388260975224 | 587 | 1 |
346532636735144 | [
{
"type": "text",
"value": "FLUX De-Distilled and Anti-Bleeding Fine-Tuning / DreamBooth & LoRA Training Experiments",
"raw": "FLUX De-Distilled and Anti-Bleeding Fine-Tuning / DreamBooth & LoRA Training Experiments",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Also Testing CFG Impact for Stylized Images on base FLUX DEV model",
"raw": "Also Testing CFG Impact for Stylized Images on base FLUX DEV model",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Full size grids and full details shared here : ",
"raw": "Full size grids and full details shared here : ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://www.patreon.com/posts/114969137",
"href": "https://www.patreon.com/posts/114969137",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The aim is finding a workflow that FLUX can be trained for multiple concepts / subjects without bleeding / mixing",
"raw": "The aim is finding a workflow that FLUX can be trained for multiple concepts / subjects without bleeding / mixing",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | FLUX De-Distilled and Anti-Bleeding Fine-Tuning / DreamBooth & LoRA Training Experiments
Also Testing CFG Impact for Stylized Images on base FLUX DEV model
Full size grids and full details shared here : https://www.patreon.com/posts/114969137
The aim is to find a workflow in which FLUX can be trained on multiple concepts / subjects without bleeding / mixing
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gözükara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 376,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/JFkaRmdr44SIc0uYUYEhY.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/B5Lmt7YCfvm9kGAZT9Dho.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/3YpCUPeTZeOVVmaSB2br2.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/HpAfY1VRg9zHYuuNaETwk.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/b2UPvOtfUUEVMv9B--87e.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/DS-On8bA3jV6T3yTFc3Tn.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6345bd89fe134dfd7a0dba40/QhqDhIqruowcblB3DAaTL.jpeg"
}
] | [] | [
{
"reaction": "👀",
"users": [
"MonsterMMORPG",
"OmbelineM",
"djuna",
"John6666",
"Ba2han",
"kimleang123"
],
"count": 6
},
{
"reaction": "🔥",
"users": [
"MonsterMMORPG",
"John6666"
],
"count": 2
},
{
"reaction": "🚀",
"users": [
"MonsterMMORPG",
"John6666"
],
"count": 2
},
{
"reaction": "😎",
"users": [
"MonsterMMORPG",
"MuizIsCool"
],
"count": 2
},
{
"reaction": "❤️",
"users": [
"MonsterMMORPG"
],
"count": 1
},
{
"reaction": "🤗",
"users": [
"MonsterMMORPG"
],
"count": 1
},
{
"reaction": "➕",
"users": [
"MonsterMMORPG"
],
"count": 1
},
{
"reaction": "🧠",
"users": [
"MonsterMMORPG"
],
"count": 1
},
{
"reaction": "👍",
"users": [
"MonsterMMORPG"
],
"count": 1
},
{
"reaction": "🤝",
"users": [
"MonsterMMORPG"
],
"count": 1
},
{
"reaction": "🤯",
"users": [
"MonsterMMORPG"
],
"count": 1
}
] | 2024-10-29T23:48:36.000Z | 2024-11-17T15:35:08.246Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64572eba95082f722d1a103a/Q4BuycjA0zi2zWKtirgj8.jpeg",
"fullname": "Wikee Yang",
"name": "wikeeyang",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 4,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1672531901326-6345bd89fe134dfd7a0dba40.png",
"fullname": "Furkan Gözükara",
"name": "MonsterMMORPG",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 376,
"isFollowing": false
}
] | /posts/MonsterMMORPG/346532636735144 | 2,588 | 2 |
975035590271872 | [
{
"type": "text",
"value": "⚡️ LLMs do a good job at NER, but don't you want to do learn how to do more with less?",
"raw": "⚡️ LLMs do a good job at NER, but don't you want to do learn how to do more with less?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Go from 🐢 -> 🐇",
"raw": "Go from 🐢 -> 🐇",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "If you want a small model to perform well on your problem, you need to fine-tune it.",
"raw": "If you want a small model to perform well on your problem, you need to fine-tune it.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Bootstrap with a teacher model.",
"raw": "Bootstrap with a teacher model.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Correct potential mistakes to get high-quality data.",
"raw": "Correct potential mistakes to get high-quality data.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Fine-tune your student model",
"raw": "Fine-tune your student model",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Go more accurate and more efficient.",
"raw": "Go more accurate and more efficient.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Free signup: ",
"raw": "Free signup: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://lu.ma/zx2t7irs",
"href": "https://lu.ma/zx2t7irs",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
⚡️ LLMs do a good job at NER, but don't you want to learn how to do more with less?
Go from 🐢 -> 🐇
If you want a small model to perform well on your problem, you need to fine-tune it.
Bootstrap with a teacher model.
Correct potential mistakes to get high-quality data.
Fine-tune your student model
Go more accurate and more efficient.
Free signup: https://lu.ma/zx2t7irs | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1677141720071-634ff41ff32062e9eb7b06a3.jpeg",
"fullname": "David Berenstein",
"name": "davidberenstein1957",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 167,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/634ff41ff32062e9eb7b06a3/VxWE8aTTve2CuuhOnB55J.png"
}
] | [] | [
{
"reaction": "🤗",
"users": [
"prithivMLmods",
"John6666",
"janinakeno",
"NeuroSpaceX"
],
"count": 4
}
] | 2024-10-29T17:43:50.000Z | 2024-10-29T17:44:14.561Z | [] | /posts/davidberenstein1957/975035590271872 | 1,741 | 0 |
983173115465455 | [
{
"type": "text",
"value": "🚀 Exploring Topic Modeling with BERTopic 🤖",
"raw": "🚀 Exploring Topic Modeling with BERTopic 🤖",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "When you come across an interesting dataset, you often wonder:",
"raw": "When you come across an interesting dataset, you often wonder:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Which topics frequently appear in these documents? 🤔",
"raw": "Which topics frequently appear in these documents? 🤔",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "What is this data really about? 📊",
"raw": "What is this data really about? 📊",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Topic modeling helps answer these questions by identifying recurring themes within a collection of documents. This process enables quick and efficient exploratory data analysis.",
"raw": "Topic modeling helps answer these questions by identifying recurring themes within a collection of documents. This process enables quick and efficient exploratory data analysis.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "I’ve been working on an app that leverages BERTopic, a flexible framework designed for topic modeling. Its modularity makes BERTopic powerful, allowing you to switch components with your preferred algorithms. It also supports handling large datasets efficiently by merging models using the BERTopic.merge_models approach. 🔗",
"raw": "I’ve been working on an app that leverages BERTopic, a flexible framework designed for topic modeling. Its modularity makes BERTopic powerful, allowing you to switch components with your preferred algorithms. It also supports handling large datasets efficiently by merging models using the BERTopic.merge_models approach. 🔗",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔍 How do we make this work?",
"raw": "🔍 How do we make this work?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Here’s the stack we’re using:",
"raw": "Here’s the stack we’re using:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📂 Data Source ➡️ Hugging Face datasets with DuckDB for retrieval",
"raw": "📂 Data Source ➡️ Hugging Face datasets with DuckDB for retrieval",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🧠 Text Embeddings ➡️ Sentence Transformers (all-MiniLM-L6-v2)",
"raw": "🧠 Text Embeddings ➡️ Sentence Transformers (all-MiniLM-L6-v2)",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "⚡ Dimensionality Reduction ➡️ RAPIDS cuML UMAP for GPU-accelerated performance",
"raw": "⚡ Dimensionality Reduction ➡️ RAPIDS cuML UMAP for GPU-accelerated performance",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔍 Clustering ➡️ RAPIDS cuML HDBSCAN for fast clustering",
"raw": "🔍 Clustering ➡️ RAPIDS cuML HDBSCAN for fast clustering",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "✂️ Tokenization ➡️ CountVectorizer",
"raw": "✂️ Tokenization ➡️ CountVectorizer",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔧 Representation Tuning ➡️ KeyBERTInspired + Hugging Face Inference Client with Meta-Llama-3-8B-Instruct",
"raw": "🔧 Representation Tuning ➡️ KeyBERTInspired + Hugging Face Inference Client with Meta-Llama-3-8B-Instruct",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🌍 Visualization ➡️ Datamapplot library",
"raw": "🌍 Visualization ➡️ Datamapplot library",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Check out the space and see how you can quickly generate topics from your dataset: ",
"raw": "Check out the space and see how you can quickly generate topics from your dataset: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/datasets-topics/topics-generator",
"href": null,
"resource": {
"type": "space",
"id": "datasets-topics/topics-generator",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/datasets-topics/topics-generator",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Powered by ",
"raw": "Powered by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@MaartenGr",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "MaartenGr",
"label": null,
"lang": null
},
{
"type": "text",
"value": " - BERTopic ",
"raw": " - BERTopic ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🚀 Exploring Topic Modeling with BERTopic 🤖
When you come across an interesting dataset, you often wonder:
Which topics frequently appear in these documents? 🤔
What is this data really about? 📊
Topic modeling helps answer these questions by identifying recurring themes within a collection of documents. This process enables quick and efficient exploratory data analysis.
I’ve been working on an app that leverages BERTopic, a flexible framework designed for topic modeling. Its modularity makes BERTopic powerful, allowing you to switch components with your preferred algorithms. It also supports handling large datasets efficiently by merging models using the BERTopic.merge_models approach. 🔗
🔍 How do we make this work?
Here’s the stack we’re using:
📂 Data Source ➡️ Hugging Face datasets with DuckDB for retrieval
🧠 Text Embeddings ➡️ Sentence Transformers (all-MiniLM-L6-v2)
⚡ Dimensionality Reduction ➡️ RAPIDS cuML UMAP for GPU-accelerated performance
🔍 Clustering ➡️ RAPIDS cuML HDBSCAN for fast clustering
✂️ Tokenization ➡️ CountVectorizer
🔧 Representation Tuning ➡️ KeyBERTInspired + Hugging Face Inference Client with Meta-Llama-3-8B-Instruct
🌍 Visualization ➡️ Datamapplot library
Check out the space and see how you can quickly generate topics from your dataset: https://huggingface.co/spaces/datasets-topics/topics-generator
Powered by @MaartenGr - BERTopic | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674055965173-noauth.jpeg",
"fullname": "Andrea Soria",
"name": "asoria",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 61,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63c8113f46421a2efe7f067e/UwMIYHvhA6FHS9e_oCoxd.mp4"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/62ea1ac3cc08a09aa6d3ec95/_74xXYEYLLjNVJ9zQucfn.jpeg",
"fullname": "Maarten Grootendorst",
"name": "MaartenGr",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 22
}
] | [
{
"reaction": "👍",
"users": [
"ijohn07",
"John6666",
"edison1",
"Chao2012",
"rennokki"
],
"count": 5
},
{
"reaction": "❤️",
"users": [
"Chao2012",
"korkakak",
"MaartenGr"
],
"count": 3
},
{
"reaction": "🔥",
"users": [
"rennokki"
],
"count": 1
}
] | 2024-10-29T17:19:40.000Z | 2024-10-29T17:19:40.815Z | [] | /posts/asoria/983173115465455 | 1,735 | 0 |
537208389434706 | [
{
"type": "text",
"value": "🔍 NYT leveraged AI to investigate election interference by analyzing 400+ hours of recorded meetings - that's 5M words of data! ",
"raw": "🔍 NYT leveraged AI to investigate election interference by analyzing 400+ hours of recorded meetings - that's 5M words of data! ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "AI spotted patterns, humans verified facts. Every AI-flagged quote was manually verified against source recordings. Really appreciate that they published their full methodology - transparency matters when using AI in journalism.",
"raw": "AI spotted patterns, humans verified facts. Every AI-flagged quote was manually verified against source recordings. Really appreciate that they published their full methodology - transparency matters when using AI in journalism.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "A perfect blend of tech & journalism. ",
"raw": "A perfect blend of tech & journalism. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " The future of journalism isn't robots replacing reporters - it's AI helping humans process massive datasets more efficiently. Sometimes the most powerful tech solutions are the least flashy ones.",
"raw": " The future of journalism isn't robots replacing reporters - it's AI helping humans process massive datasets more efficiently. Sometimes the most powerful tech solutions are the least flashy ones.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Read the article: ",
"raw": "Read the article: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://www.nytimes.com/interactive/2024/10/28/us/politics/inside-the-movement-behind-trumps-election-lies.html?unlocked_article_code=1.Vk4.ucv9.dbHVquTQaf0G&smid=nytcore-ios-share",
"href": "https://www.nytimes.com/interactive/2024/10/28/us/politics/inside-the-movement-behind-trumps-election-lies.html?unlocked_article_code=1.Vk4.ucv9.dbHVquTQaf0G&smid=nytcore-ios-share",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔍 NYT leveraged AI to investigate election interference by analyzing 400+ hours of recorded meetings - that's 5M words of data!
AI spotted patterns, humans verified facts. Every AI-flagged quote was manually verified against source recordings. Really appreciate that they published their full methodology - transparency matters when using AI in journalism.
A perfect blend of tech & journalism.
The future of journalism isn't robots replacing reporters - it's AI helping humans process massive datasets more efficiently. Sometimes the most powerful tech solutions are the least flashy ones.
Read the article: https://www.nytimes.com/interactive/2024/10/28/us/politics/inside-the-movement-behind-trumps-election-lies.html?unlocked_article_code=1.Vk4.ucv9.dbHVquTQaf0G&smid=nytcore-ios-share | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/647f36a8454af0237bd49574/jshkqBUTY-GZL8As8y6Aq.jpeg",
"fullname": "Florent Daudens",
"name": "fdaudens",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 384,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/647f36a8454af0237bd49574/OhMA2ILRavy8A7uDVWKRX.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"Aurelien-Morgan",
"John6666",
"io2",
"OmbelineM",
"wsuff",
"KhalilGuetari",
"owao"
],
"count": 7
}
] | 2024-10-29T15:01:36.000Z | 2024-10-29T15:01:36.501Z | [] | /posts/fdaudens/537208389434706 | 2,264 | 0 |
455307248098208 | [
{
"type": "text",
"value": "Happy to share H2O-Danube-1.8b, a small 1.8b model based trained on only 1T natural language tokens showing competitive metrics across benchmarks in the <2B model space.",
"raw": "Happy to share H2O-Danube-1.8b, a small 1.8b model based trained on only 1T natural language tokens showing competitive metrics across benchmarks in the <2B model space.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Base weights: ",
"raw": "Base weights: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/h2oai/h2o-danube-1.8b-base",
"href": null,
"resource": {
"type": "model",
"id": "h2oai/h2o-danube-1.8b-base",
"discussionNum": null
},
"url": "https://huggingface.co/h2oai/h2o-danube-1.8b-base",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Chat weights: ",
"raw": "Chat weights: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/h2oai/h2o-danube-1.8b-chat",
"href": null,
"resource": {
"type": "model",
"id": "h2oai/h2o-danube-1.8b-chat",
"discussionNum": null
},
"url": "https://huggingface.co/h2oai/h2o-danube-1.8b-chat",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Technical report: ",
"raw": "Technical report: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2401.16818",
"href": null,
"resource": {
"type": "paper",
"id": "2401.16818",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2401.16818",
"code": null,
"user": null,
"label": "H2O-Danube-1.8B Technical Report (2401.16818)",
"lang": null
}
] | Happy to share H2O-Danube-1.8b, a small 1.8b model trained on only 1T natural language tokens, showing competitive metrics across benchmarks in the <2B model space.
Base weights: https://huggingface.co/h2oai/h2o-danube-1.8b-base
Chat weights: https://huggingface.co/h2oai/h2o-danube-1.8b-chat
Technical report: https://huggingface.co/papers/2401.16818 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/636d18755aaed143cd6698ef/AalDh13Gp8jv1BfM5IASh.png",
"fullname": "Philipp Singer",
"name": "psinger",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 16,
"isFollowing": false
} | [] | [] | [
{
"reaction": "👍",
"users": [
"osanseviero",
"Maani",
"lunarflu",
"mdouglas",
"stefan-jo"
],
"count": 5
},
{
"reaction": "❤️",
"users": [
"osanseviero",
"Maani",
"lunarflu",
"medmac01"
],
"count": 4
},
{
"reaction": "🤗",
"users": [
"lunarflu",
"medmac01"
],
"count": 2
}
] | 2024-02-01T16:43:38.000Z | 2024-02-01T16:45:17.067Z | [] | /posts/psinger/455307248098208 | 111 | 0 |
190158260318597 | [
{
"type": "text",
"value": "It seems February started with a fully open source AI renaissance 🌟",
"raw": "It seems February started with a fully open source AI renaissance 🌟",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Models released with fully open dataset, training code, weights ✅",
"raw": "Models released with fully open dataset, training code, weights ✅",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "LLM - ",
"raw": "LLM - ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/collections/allenai/olmo-suite-65aeaae8fe5b6b2122b46778",
"href": null,
"resource": {
"type": "collection",
"id": "allenai/olmo-suite-65aeaae8fe5b6b2122b46778",
"discussionNum": null
},
"url": "https://huggingface.co/collections/allenai/olmo-suite-65aeaae8fe5b6b2122b46778",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " 🧠",
"raw": " 🧠",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Embedding - ",
"raw": "Embedding - ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/nomic-ai/nomic-embed-text-v1",
"href": null,
"resource": {
"type": "model",
"id": "nomic-ai/nomic-embed-text-v1",
"discussionNum": null
},
"url": "https://huggingface.co/nomic-ai/nomic-embed-text-v1",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " 📚 (sota!)",
"raw": " 📚 (sota!)",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "And it's literally February 1st - can't wait to see what else the community will bring 👀",
"raw": "And it's literally February 1st - can't wait to see what else the community will bring 👀",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | It seems February started with a fully open source AI renaissance 🌟
Models released with fully open dataset, training code, weights ✅
LLM - https://huggingface.co/collections/allenai/olmo-suite-65aeaae8fe5b6b2122b46778 🧠
Embedding - https://huggingface.co/nomic-ai/nomic-embed-text-v1 📚 (sota!)
And it's literally February 1st - can't wait to see what else the community will bring 👀 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1649143001781-624bebf604abc7ebb01789af.jpeg",
"fullname": "Apolinário from multimodal AI art",
"name": "multimodalart",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 3177,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"clem",
"multimodalart",
"osanseviero",
"BrigitteTousi",
"Maani",
"lunarflu",
"leigh301",
"dillfrescott",
"zamply",
"alielfilali01",
"impactframes",
"MdnghtToker",
"OmbelineM"
],
"count": 13
},
{
"reaction": "🤯",
"users": [
"fffiloni",
"multimodalart",
"BrigitteTousi",
"lunarflu",
"clem",
"greencookies",
"dillfrescott"
],
"count": 7
},
{
"reaction": "👍",
"users": [
"neovalle",
"dillfrescott"
],
"count": 2
}
] | 2024-02-01T16:23:52.000Z | 2024-02-01T17:09:15.374Z | [] | /posts/multimodalart/190158260318597 | 239 | 0 |
114173328374820 | [
{
"type": "text",
"value": "Today, we’re releasing our first pretrained Open Language Models (OLMo) at the Allen Institute for AI (AI2), a set of 7 billion parameter models and one 1 billion parameter variant. This line of work was probably the main reason I joined AI2 and is the biggest lever I see possible to enact meaningful change in how AI is used, studied, and discussed in the short term. ",
"raw": "Today, we’re releasing our first pretrained Open Language Models (OLMo) at the Allen Institute for AI (AI2), a set of 7 billion parameter models and one 1 billion parameter variant. This line of work was probably the main reason I joined AI2 and is the biggest lever I see possible to enact meaningful change in how AI is used, studied, and discussed in the short term. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Links at the top because that's what you want:",
"raw": "Links at the top because that's what you want:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "* Core 7B model: ",
"raw": "* Core 7B model: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/allenai/OLMo-7B",
"href": null,
"resource": {
"type": "model",
"id": "allenai/OLMo-7B",
"discussionNum": null
},
"url": "https://huggingface.co/allenai/OLMo-7B",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "* 7B model twin (different GPU hardware): ",
"raw": "* 7B model twin (different GPU hardware): ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/allenai/OLMo-7B-Twin-2T",
"href": null,
"resource": {
"type": "model",
"id": "allenai/OLMo-7B-Twin-2T",
"discussionNum": null
},
"url": "https://huggingface.co/allenai/OLMo-7B-Twin-2T",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "* 1B model: ",
"raw": "* 1B model: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/allenai/OLMo-1B",
"href": null,
"resource": {
"type": "model",
"id": "allenai/OLMo-1B",
"discussionNum": null
},
"url": "https://huggingface.co/allenai/OLMo-1B",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "* Dataset: ",
"raw": "* Dataset: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/allenai/dolma",
"href": null,
"resource": {
"type": "dataset",
"id": "allenai/dolma",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/allenai/dolma",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "* Paper (arxiv soon): ",
"raw": "* Paper (arxiv soon): ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://allenai.org/olmo/olmo-paper.pdf",
"href": "https://allenai.org/olmo/olmo-paper.pdf",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "* My personal blog post: ",
"raw": "* My personal blog post: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://www.interconnects.ai/p/olmo",
"href": "https://www.interconnects.ai/p/olmo",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "OLMo will represent a new type of LLM enabling new approaches to ML research and deployment, because on a key axis of openness, OLMo represents something entirely different. OLMo is built for scientists to be able to develop research directions at every point in the development process and execute on them, which was previously not available due to incomplete information and tools.",
"raw": "OLMo will represent a new type of LLM enabling new approaches to ML research and deployment, because on a key axis of openness, OLMo represents something entirely different. OLMo is built for scientists to be able to develop research directions at every point in the development process and execute on them, which was previously not available due to incomplete information and tools.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Depending on the evaluation methods, OLMo 1 is either the best 7 billion parameter base model available for download or one of the best. This relies on a new way of thinking where models are judged on parameter plus token budget, similar to how scaling laws are measured for LLMs.",
"raw": "Depending on the evaluation methods, OLMo 1 is either the best 7 billion parameter base model available for download or one of the best. This relies on a new way of thinking where models are judged on parameter plus token budget, similar to how scaling laws are measured for LLMs.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "We're just getting started, so please help us learn how to be more scientific with LLMs!",
"raw": "We're just getting started, so please help us learn how to be more scientific with LLMs!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Today, we’re releasing our first pretrained Open Language Models (OLMo) at the Allen Institute for AI (AI2), a set of 7 billion parameter models and one 1 billion parameter variant. This line of work was probably the main reason I joined AI2 and is the biggest lever I see possible to enact meaningful change in how AI is used, studied, and discussed in the short term.
Links at the top because that's what you want:
* Core 7B model: https://huggingface.co/allenai/OLMo-7B
* 7B model twin (different GPU hardware): https://huggingface.co/allenai/OLMo-7B-Twin-2T
* 1B model: https://huggingface.co/allenai/OLMo-1B
* Dataset: https://huggingface.co/datasets/allenai/dolma
* Paper (arxiv soon): https://allenai.org/olmo/olmo-paper.pdf
* My personal blog post: https://www.interconnects.ai/p/olmo
OLMo will represent a new type of LLM enabling new approaches to ML research and deployment, because on a key axis of openness, OLMo represents something entirely different. OLMo is built for scientists to be able to develop research directions at every point in the development process and execute on them, which was previously not available due to incomplete information and tools.
Depending on the evaluation methods, OLMo 1 is either the best 7 billion parameter base model available for download or one of the best. This relies on a new way of thinking where models are judged on parameter plus token budget, similar to how scaling laws are measured for LLMs.
We're just getting started, so please help us learn how to be more scientific with LLMs! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/628e5f90a9a3c754c1f7c88f/iWqMY_l6dalrgRaJZWbK3.png",
"fullname": "Nathan Lambert",
"name": "natolambert",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 122,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"julien-c",
"osanseviero",
"thomwolf",
"AdinaY",
"clem",
"DnzzL",
"LZDXN",
"PereLluis13",
"venkateshmurugadas",
"90Barricade93",
"linoyts",
"rbiswasfc",
"Hamish",
"victor",
"yleo",
"BrigitteTousi",
"ArunkumarVR",
"samforeman",
"Maani",
"lunarflu",
"davanstrien",
"socialprescribing",
"kramp",
"beomi",
"alielfilali01",
"jeffboudier",
"Tanvir1337",
"patrickocal",
"Gilojond",
"thewise",
"eswardivi",
"anileo1",
"shtoshni",
"ahkarami",
"radames"
],
"count": 35
},
{
"reaction": "🤝",
"users": [
"thomwolf",
"clem",
"fffiloni",
"victor",
"BrigitteTousi",
"lunarflu",
"socialprescribing",
"beomi",
"ahkarami",
"radames"
],
"count": 10
},
{
"reaction": "🤯",
"users": [
"osanseviero",
"clem",
"victor",
"BrigitteTousi",
"lunarflu",
"beomi",
"alielfilali01"
],
"count": 7
},
{
"reaction": "👍",
"users": [
"stormchaser",
"mathiasn1"
],
"count": 2
}
] | 2024-02-01T15:32:21.000Z | 2024-02-05T10:13:22.832Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64848ddca7c0d8460b8d3900/qRdcqKJ7BA5_1vJq6ZB5x.png",
"fullname": "Arun Nadarasa",
"name": "socialprescribing",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
}
] | /posts/natolambert/114173328374820 | 611 | 1 |
295325502020879 | [
{
"type": "text",
"value": "Had a lot of fun making this plot today.",
"raw": "Had a lot of fun making this plot today.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "If someone ever asks you why you need ML monitoring, show them this picture 😂",
"raw": "If someone ever asks you why you need ML monitoring, show them this picture 😂",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Had a lot of fun making this plot today.
If someone ever asks you why you need ML monitoring, show them this picture 😂 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg",
"fullname": "Santiago Viquez",
"name": "santiviquez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/tdnMc38vB3JLien5AS997.png"
}
] | [] | [
{
"reaction": "🤗",
"users": [
"gabrielchua",
"lunarflu"
],
"count": 2
},
{
"reaction": "👍",
"users": [
"Csplk"
],
"count": 1
}
] | 2024-02-01T14:50:10.000Z | 2024-02-05T15:28:47.073Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg",
"fullname": "Santiago Viquez",
"name": "santiviquez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/655f4ff710e5c5fbef30fd97/bI_KQnFof50HRqYBWpyu2.jpeg",
"fullname": "Gabriel C",
"name": "gabrielchua",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 78,
"isFollowing": false
}
] | /posts/santiviquez/295325502020879 | 5 | 3 |
622879789886281 | [
{
"type": "text",
"value": "🔍 Today's pick in Interpretability & Analysis of LMs: Gradient-Based Language Model Red Teaming by N. Wichers, C. Denison and ",
"raw": "🔍 Today's pick in Interpretability & Analysis of LMs: Gradient-Based Language Model Red Teaming by N. Wichers, C. Denison and ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@beirami",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "beirami",
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This work proposes Gradient-Based Red Teaming (GBRT), a red teaming method for automatically generating diverse prompts inducing an LM to output unsafe responses. ",
"raw": "This work proposes Gradient-Based Red Teaming (GBRT), a red teaming method for automatically generating diverse prompts inducing an LM to output unsafe responses. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "In practice, prompts are learned by scoring LM responses with a safety-trained probing classifier, and back-propagating through frozen classifier and LM to update the prompt.",
"raw": "In practice, prompts are learned by scoring LM responses with a safety-trained probing classifier, and back-propagating through frozen classifier and LM to update the prompt.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Authors experiment with variants of GBRT aimed at inducing realistic prompts in an efficient way, and GBRT prompts are more likely to generate unsafe responses than those found by established RL-based red teaming methods. Moreover, these attacks are shown to succeed even when the LM has been fine-tuned to produce safer outputs. ",
"raw": "Authors experiment with variants of GBRT aimed at inducing realistic prompts in an efficient way, and GBRT prompts are more likely to generate unsafe responses than those found by established RL-based red teaming methods. Moreover, these attacks are shown to succeed even when the LM has been fine-tuned to produce safer outputs. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📄 Paper: ",
"raw": "📄 Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2401.12973",
"href": null,
"resource": {
"type": "paper",
"id": "2401.12973",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2401.12973",
"code": null,
"user": null,
"label": "In-Context Language Learning: Architectures and Algorithms (2401.12973)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "💻 Code: ",
"raw": "💻 Code: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/google-research/google-research/tree/master/gbrt",
"href": "https://github.com/google-research/google-research/tree/master/gbrt",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔍 Today's pick in Interpretability & Analysis of LMs: Gradient-Based Language Model Red Teaming by N. Wichers, C. Denison and @beirami
This work proposes Gradient-Based Red Teaming (GBRT), a red teaming method for automatically generating diverse prompts inducing an LM to output unsafe responses.
In practice, prompts are learned by scoring LM responses with a safety-trained probing classifier, and back-propagating through frozen classifier and LM to update the prompt.
Authors experiment with variants of GBRT aimed at inducing realistic prompts in an efficient way, and GBRT prompts are more likely to generate unsafe responses than those found by established RL-based red teaming methods. Moreover, these attacks are shown to succeed even when the LM has been fine-tuned to produce safer outputs.
📄 Paper: https://huggingface.co/papers/2401.12973
💻 Code: https://github.com/google-research/google-research/tree/master/gbrt | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/ayhigLSby9ICw3n_2QpaJ.png"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/1Tllp1N7b5eg7ALtxLKr9.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/_etKjaaubh3osxp3R2MP7.jpeg",
"fullname": "Ahmad Beirami",
"name": "beirami",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1
}
] | [
{
"reaction": "❤️",
"users": [
"clem",
"osanseviero",
"lunarflu",
"beirami",
"9voltfan2009"
],
"count": 5
},
{
"reaction": "👍",
"users": [
"neovalle",
"noobmldude"
],
"count": 2
}
] | 2024-02-01T13:42:46.000Z | 2024-02-01T13:42:46.876Z | [] | /posts/gsarti/622879789886281 | 23 | 0 |
816514004409434 | [
{
"type": "text",
"value": "Let me introduce you LLE: Leaks, leaks everywhere!",
"raw": "Let me introduce you LLE: Leaks, leaks everywhere!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "A quick experiment I've carried out on around 600 datasets from the HF Hub, the results are stored in ",
"raw": "A quick experiment I've carried out on around 600 datasets from the HF Hub, the results are stored in ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/lbourdois/LLE",
"href": null,
"resource": {
"type": "dataset",
"id": "lbourdois/LLE",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/lbourdois/LLE",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": ", and the methodology is described in ",
"raw": ", and the methodology is described in ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/lbourdois/lle",
"href": "https://huggingface.co/blog/lbourdois/lle",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Let me introduce you to LLE: Leaks, leaks everywhere!
A quick experiment I've carried out on around 600 datasets from the HF Hub, the results are stored in https://huggingface.co/datasets/lbourdois/LLE, and the methodology is described in
https://huggingface.co/blog/lbourdois/lle
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/613b0a62a14099d5afed7830/pLuqSIYaNYhUqdjxlNrFn.png",
"fullname": "Loïck BOURDOIS",
"name": "lbourdois",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 90,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/613b0a62a14099d5afed7830/QfeU8NGwGLwai6JpmLHTd.png"
}
] | [] | [
{
"reaction": "🤯",
"users": [
"tomaarsen",
"Dlbk",
"vuluu",
"clem",
"osanseviero",
"dhuynh95",
"alielfilali01",
"gsarti"
],
"count": 8
}
] | 2024-02-01T11:08:37.000Z | 2024-02-09T13:26:16.390Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661194056093-noauth.jpeg",
"fullname": "Jeremy Lee Shields",
"name": "lastrosade",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6317233cc92fd6fee317e030/cJHSvvimr1kqgQfHOjO5n.png",
"fullname": "Tom Aarsen",
"name": "tomaarsen",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 1060,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/613b0a62a14099d5afed7830/pLuqSIYaNYhUqdjxlNrFn.png",
"fullname": "Loïck BOURDOIS",
"name": "lbourdois",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 90,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1661497922734-62f4ac43567dbf9a39f75474.jpeg",
"fullname": "Daniel Huynh",
"name": "dhuynh95",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 75,
"isFollowing": false
}
] | /posts/lbourdois/816514004409434 | 212 | 7 |
800997800924633 | [
{
"type": "text",
"value": "Introducing Gajendra!",
"raw": "Introducing Gajendra!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "An early release of our 7B Hindi-Hinglish-English Instruction fine-tuned language model.",
"raw": "An early release of our 7B Hindi-Hinglish-English Instruction fine-tuned language model.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Model: ",
"raw": "Model: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/BhabhaAI/Gajendra-v0.1",
"href": null,
"resource": {
"type": "model",
"id": "BhabhaAI/Gajendra-v0.1",
"discussionNum": null
},
"url": "https://huggingface.co/BhabhaAI/Gajendra-v0.1",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "We additionally explore ways to filter examples that can be translated from English to Hindi and are releasing initial versions of both dataset and model for it.",
"raw": "We additionally explore ways to filter examples that can be translated from English to Hindi and are releasing initial versions of both dataset and model for it.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Model: ",
"raw": "Model: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/BhabhaAI/Mistral-translation-classify",
"href": null,
"resource": {
"type": "model",
"id": "BhabhaAI/Mistral-translation-classify",
"discussionNum": null
},
"url": "https://huggingface.co/BhabhaAI/Mistral-translation-classify",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Dataset: ",
"raw": "Dataset: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/BhabhaAI/translation-classify",
"href": null,
"resource": {
"type": "dataset",
"id": "BhabhaAI/translation-classify",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/BhabhaAI/translation-classify",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Looking forward to collaborate with open source community to accelerate and release Hindi LLMs.",
"raw": "Looking forward to collaborate with open source community to accelerate and release Hindi LLMs.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Introducing Gajendra!
An early release of our 7B Hindi-Hinglish-English Instruction fine-tuned language model.
Model: https://huggingface.co/BhabhaAI/Gajendra-v0.1
We additionally explore ways to filter examples that can be translated from English to Hindi and are releasing initial versions of both dataset and model for it.
Model: https://huggingface.co/BhabhaAI/Mistral-translation-classify
Dataset: https://huggingface.co/datasets/BhabhaAI/translation-classify
Looking forward to collaborating with the open source community to accelerate and release Hindi LLMs. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/614efbb6ddd8df0d8bfd0a5a/0oMIv-WwL7sqPEQZv63cu.jpeg",
"fullname": "Satpal Singh Rathore",
"name": "satpalsr",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 10,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/614efbb6ddd8df0d8bfd0a5a/D4PdoJlYTD74B4-J_uFKy.jpeg"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"samusenps",
"VanshGehlot",
"victor",
"osanseviero",
"julien-c",
"sbarman25",
"satpalsr",
"alielfilali01",
"smangrul"
],
"count": 9
},
{
"reaction": "🤗",
"users": [
"samusenps",
"julien-c",
"satpalsr"
],
"count": 3
}
] | 2024-02-01T08:08:08.000Z | 2024-02-02T10:58:30.225Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6032802e1f993496bc14d9e3/w6hr-DEQot4VVkoyRIBiy.png",
"fullname": "Omar Sanseviero",
"name": "osanseviero",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2868,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/614efbb6ddd8df0d8bfd0a5a/0oMIv-WwL7sqPEQZv63cu.jpeg",
"fullname": "Satpal Singh Rathore",
"name": "satpalsr",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 10,
"isFollowing": false
}
] | /posts/satpalsr/800997800924633 | 292 | 2 |
687791323762053 | [
{
"type": "text",
"value": "What is WhisperKit?",
"raw": "What is WhisperKit?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "WhisperKit is a pure Swift package built by ",
"raw": "WhisperKit is a pure Swift package built by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@argmaxinc",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "argmaxinc",
"label": null,
"lang": null
},
{
"type": "text",
"value": " that enables real-time Whisper inference on iPhone, Mac and even Apple Watch in two lines of code. It is built on Core ML and can be configured to run on the Neural Engine or GPU or both. WhisperKit models are published and distributed through the Hugging Face Hub. Anyone can publish a Whisper fine-tune and evaluate it on their own dataset using WhisperKit!",
"raw": " that enables real-time Whisper inference on iPhone, Mac and even Apple Watch in two lines of code. It is built on Core ML and can be configured to run on the Neural Engine or GPU or both. WhisperKit models are published and distributed through the Hugging Face Hub. Anyone can publish a Whisper fine-tune and evaluate it on their own dataset using WhisperKit!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "WhisperKit models: ",
"raw": "WhisperKit models: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/argmaxinc/whisperkit-coreml",
"href": null,
"resource": {
"type": "model",
"id": "argmaxinc/whisperkit-coreml",
"discussionNum": null
},
"url": "https://huggingface.co/argmaxinc/whisperkit-coreml",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Example App: ",
"raw": "Example App: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://testflight.apple.com/join/LPVOyJZW",
"href": "https://testflight.apple.com/join/LPVOyJZW",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Swift Package: ",
"raw": "Swift Package: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/argmaxinc/WhisperKit",
"href": "https://github.com/argmaxinc/WhisperKit",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Python toolkit: ",
"raw": "Python toolkit: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/argmaxinc/whisperkittools",
"href": "https://github.com/argmaxinc/whisperkittools",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | What is WhisperKit?
WhisperKit is a pure Swift package built by @argmaxinc that enables real-time Whisper inference on iPhone, Mac and even Apple Watch in two lines of code. It is built on Core ML and can be configured to run on the Neural Engine or GPU or both. WhisperKit models are published and distributed through the Hugging Face Hub. Anyone can publish a Whisper fine-tune and evaluate it on their own dataset using WhisperKit!
WhisperKit models: https://huggingface.co/argmaxinc/whisperkit-coreml
Example App: https://testflight.apple.com/join/LPVOyJZW
Swift Package: https://github.com/argmaxinc/WhisperKit
Python toolkit: https://github.com/argmaxinc/whisperkittools | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64c612d618bc1b4e81023e7b/ikt13t_OxCt9AmzbYyAFJ.jpeg",
"fullname": "Atila",
"name": "aotrih",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 19,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/64c612d618bc1b4e81023e7b/v_uoC2_E2o8B1O_nbS5ef.qt"
}
] | [] | [
{
"reaction": "🤯",
"users": [
"osanseviero",
"victor",
"VanshGehlot",
"C00100011",
"fffiloni",
"ZachNagengast",
"dillfrescott",
"Araeynn"
],
"count": 8
},
{
"reaction": "❤️",
"users": [
"samusenps",
"vishalvis",
"ZachNagengast",
"dillfrescott",
"Araeynn",
"pcuenq"
],
"count": 6
},
{
"reaction": "👍",
"users": [
"dillfrescott",
"Araeynn",
"Nymbo"
],
"count": 3
}
] | 2024-02-01T07:59:45.000Z | 2024-02-01T07:59:55.924Z | [] | /posts/aotrih/687791323762053 | 324 | 0 |
481707026390857 | [
{
"type": "text",
"value": "Introducing the \"UltraTextbooks\" dataset 🚀📚",
"raw": "Introducing the \"UltraTextbooks\" dataset 🚀📚",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Check it out here: ",
"raw": "Check it out here: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/Locutusque/UltraTextbooks",
"href": null,
"resource": {
"type": "dataset",
"id": "Locutusque/UltraTextbooks",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/Locutusque/UltraTextbooks",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📘 A comprehensive collection of high-quality synthetic and human-written textbooks",
"raw": "📘 A comprehensive collection of high-quality synthetic and human-written textbooks",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "👨🎓 Spanning various subjects and programming languages",
"raw": "👨🎓 Spanning various subjects and programming languages",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🔧 Designed for advanced NLP tasks like language modeling, educational QA, text summarization, and content generation for edu purposes",
"raw": "🔧 Designed for advanced NLP tasks like language modeling, educational QA, text summarization, and content generation for edu purposes",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🚀 Future expansions planned with additional data sources to enhance the corpus",
"raw": "🚀 Future expansions planned with additional data sources to enhance the corpus",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "👇 Data composition highlights 👇",
"raw": "👇 Data composition highlights 👇",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Blend of synthetic and human-written material",
"raw": "- Blend of synthetic and human-written material",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Includes topics from general edu to specialized areas",
"raw": "- Includes topics from general edu to specialized areas",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "- Structured with field \"text\"",
"raw": "- Structured with field \"text\"",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🧩 Data collection from various Hugging Face datasets, guided by a diverse and comprehensive curation rationale",
"raw": "🧩 Data collection from various Hugging Face datasets, guided by a diverse and comprehensive curation rationale",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🚧 Limitations may exist, so report any issues you encounter",
"raw": "🚧 Limitations may exist, so report any issues you encounter",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Introducing the "UltraTextbooks" dataset 🚀📚
Check it out here: https://huggingface.co/datasets/Locutusque/UltraTextbooks
📘 A comprehensive collection of high-quality synthetic and human-written textbooks
👨🎓 Spanning various subjects and programming languages
🔧 Designed for advanced NLP tasks like language modeling, educational QA, text summarization, and content generation for edu purposes
🚀 Future expansions planned with additional data sources to enhance the corpus
👇 Data composition highlights 👇
- Blend of synthetic and human-written material
- Includes topics from general edu to specialized areas
- Structured with field "text"
🧩 Data collection from various Hugging Face datasets, guided by a diverse and comprehensive curation rationale
🚧 Limitations may exist, so report any issues you encounter | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/YeFyz1AZVcCRsyNHHtwJG.jpeg",
"fullname": "Sebastian Gabarain",
"name": "Locutusque",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 180,
"isFollowing": false
} | [] | [] | [
{
"reaction": "❤️",
"users": [
"samusenps",
"Felladrin",
"osanseviero",
"vicgalle",
"VanshGehlot",
"dhuynh95",
"Locutusque",
"aloobun",
"afrideva",
"Lee1990"
],
"count": 10
},
{
"reaction": "👍",
"users": [
"pszemraj",
"Pelmeshek",
"osanseviero",
"Locutusque",
"qinluo",
"Lee1990"
],
"count": 6
},
{
"reaction": "🤗",
"users": [
"nampdn-ai",
"Lee1990"
],
"count": 2
}
] | 2024-02-01T03:51:34.000Z | 2024-02-06T15:49:19.302Z | [
{
"avatarUrl": "/avatars/376ae9997c5b02eb6faafdbd6b50f66b.svg",
"fullname": "qinluo",
"name": "qinluo",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 2,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/noauth/YeFyz1AZVcCRsyNHHtwJG.jpeg",
"fullname": "Sebastian Gabarain",
"name": "Locutusque",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 180,
"isFollowing": false
}
] | /posts/Locutusque/481707026390857 | 124 | 2 |
561064707431132 | [
{
"type": "text",
"value": "moondream1 can now be used directly from transformers!",
"raw": "moondream1 can now be used directly from transformers!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | moondream1 can now be used directly from transformers! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63117568fa95534e218da163/8h9zN8aKRxPLBnXW7sqY9.jpeg",
"fullname": "Vik Korrapati",
"name": "vikhyatk",
"type": "user",
"isPro": true,
"isHf": false,
"isMod": false,
"followerCount": 375,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/63117568fa95534e218da163/5mMSiqBSEtOKQ6bRas6-9.jpeg"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"Felladrin",
"osanseviero",
"clem",
"cnmoro",
"maywell",
"samusenps",
"Locutusque",
"VanshGehlot",
"fffiloni",
"not-lain",
"Sylvestre",
"levisz",
"lrzjason",
"ajibawa-2023",
"Aptronym"
],
"count": 15
},
{
"reaction": "👍",
"users": [
"YaTharThShaRma999",
"sujitvasanth",
"osanseviero",
"talgaurdian",
"clem",
"taufiqdp",
"not-lain",
"piebro",
"impactframes",
"Nymbo"
],
"count": 10
}
] | 2024-01-31T21:01:49.000Z | 2024-02-01T07:40:26.868Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/659d11665bb1cd5c9ef00633/wQk-TQ7hYGT9G55GlT5zC.jpeg",
"fullname": "bigblackhat",
"name": "bigblackhat",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 3,
"isFollowing": false
}
] | /posts/vikhyatk/561064707431132 | 62 | 1 |
306425415509800 | [
{
"type": "text",
"value": "📣 NEW on HF",
"raw": "📣 NEW on HF",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "the Dataset Viewer is now available on *private datasets* too",
"raw": "the Dataset Viewer is now available on *private datasets* too",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "You need to be a PRO or a Enterprise Hub user. 🔥",
"raw": "You need to be a PRO or a Enterprise Hub user. 🔥",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Great work from our Datasets team 🥰: ",
"raw": "Great work from our Datasets team 🥰: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@lhoestq",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "lhoestq",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@severo",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "severo",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@polinaeterna",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "polinaeterna",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@asoria",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "asoria",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@albertvillanova",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "albertvillanova",
"label": null,
"lang": null
},
{
"type": "text",
"value": " and the whole team 🥰 ",
"raw": " and the whole team 🥰 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 📣 NEW on HF
the Dataset Viewer is now available on *private datasets* too
You need to be a PRO or an Enterprise Hub user. 🔥
Great work from our Datasets team 🥰: @lhoestq @severo @polinaeterna @asoria @albertvillanova and the whole team 🥰 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/5dd96eb166059660ed1ee413/NQtzmrDdbG0H8qkZvRyGk.jpeg",
"fullname": "Julien Chaumond",
"name": "julien-c",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1580,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5dd96eb166059660ed1ee413/k7X14tuI4OD1h4-QS0N01.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1606406298765-noauth.jpeg",
"fullname": "Albert Villanova del Moral",
"name": "albertvillanova",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 196
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674055965173-noauth.jpeg",
"fullname": "Andrea Soria",
"name": "asoria",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 61
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1594214747713-5e9ecfc04957053f60648a3e.png",
"fullname": "Quentin Lhoest",
"name": "lhoestq",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 196
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1645527878855-61dd9f18f187b39868bd157e.jpeg",
"fullname": "Polina Kazakova",
"name": "polinaeterna",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 63
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60a76b174e24361791fe822d/inEvYwrd4z0xvRQN3ikdE.jpeg",
"fullname": "Sylvain Lesage",
"name": "severo",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 129
}
] | [
{
"reaction": "❤️",
"users": [
"lhoestq",
"severo",
"clem",
"blanchon",
"osanseviero",
"pierrci",
"albertvillanova",
"VanshGehlot",
"vladbogo",
"AdinaY",
"BrigitteTousi",
"michaljunczyk",
"not-lain",
"Nymbo"
],
"count": 14
},
{
"reaction": "👍",
"users": [
"fffiloni",
"clem",
"osanseviero",
"Pelmeshek",
"kramp",
"mvaloatto",
"awacke1",
"AdinaY",
"BrigitteTousi",
"Tonic",
"not-lain",
"Nymbo"
],
"count": 12
},
{
"reaction": "🤯",
"users": [
"pierrci",
"BrigitteTousi",
"not-lain"
],
"count": 3
},
{
"reaction": "🤗",
"users": [
"albertvillanova",
"BrigitteTousi",
"not-lain"
],
"count": 3
}
] | 2024-01-31T18:39:12.000Z | 2024-01-31T19:18:12.723Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem 🤗",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1763,
"isFollowing": false
}
] | /posts/julien-c/306425415509800 | 26 | 1 |
898508490022091 | [
{
"type": "text",
"value": "🔥 New LLM leaderboard on the hub: an Enterprise Scenarios Leaderboard!",
"raw": "🔥 New LLM leaderboard on the hub: an Enterprise Scenarios Leaderboard!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This work evaluates LLMs on several real world use cases (Finance documents, Legal confidentiality, Customer support, ...), which makes it grounded, and interesting for companies! 🏢",
"raw": "This work evaluates LLMs on several real world use cases (Finance documents, Legal confidentiality, Customer support, ...), which makes it grounded, and interesting for companies! 🏢",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Bonus: the test set is private, so it's hard to game 🔥",
"raw": "Bonus: the test set is private, so it's hard to game 🔥",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/PatronusAI/enterprise_scenarios_leaderboard",
"href": null,
"resource": {
"type": "space",
"id": "PatronusAI/enterprise_scenarios_leaderboard",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/PatronusAI/enterprise_scenarios_leaderboard",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Side note: I discovered through this benchmark that you could evaluate the \"Engagingness\" of an LLM, which could also be interesting for our LLM fine-tuning community out there.",
"raw": "Side note: I discovered through this benchmark that you could evaluate the \"Engagingness\" of an LLM, which could also be interesting for our LLM fine-tuning community out there.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Read more about their different tasks and metrics in the intro blog: ",
"raw": "Read more about their different tasks and metrics in the intro blog: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface.co/blog/leaderboards-on-the-hub-patronus",
"href": "https://huggingface.co/blog/leaderboards-on-the-hub-patronus",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Congrats to ",
"raw": "Congrats to ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@sunitha98",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "sunitha98",
"label": null,
"lang": null
},
{
"type": "text",
"value": " who led the leaderboard implementation, and to ",
"raw": " who led the leaderboard implementation, and to ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@rebeccaqian",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "rebeccaqian",
"label": null,
"lang": null
},
{
"type": "text",
"value": " and ",
"raw": " and ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@anandnk24",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "anandnk24",
"label": null,
"lang": null
},
{
"type": "text",
"value": ", all at Patronus AI!",
"raw": ", all at Patronus AI!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔥 New LLM leaderboard on the hub: an Enterprise Scenarios Leaderboard!
This work evaluates LLMs on several real world use cases (Finance documents, Legal confidentiality, Customer support, ...), which makes it grounded, and interesting for companies! 🏢
Bonus: the test set is private, so it's hard to game 🔥
https://huggingface.co/spaces/PatronusAI/enterprise_scenarios_leaderboard
Side note: I discovered through this benchmark that you could evaluate the "Engagingness" of an LLM, which could also be interesting for our LLM fine-tuning community out there.
Read more about their different tasks and metrics in the intro blog: https://huggingface.co/blog/leaderboards-on-the-hub-patronus
Congrats to @sunitha98 who led the leaderboard implementation, and to @rebeccaqian and @anandnk24, all at Patronus AI! | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png",
"fullname": "Clémentine Fourrier",
"name": "clefourrier",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 459,
"isFollowing": false
} | [] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/64800401dd29d0de8121fa3d/jQyJeDW1-NsX5sd1tsRjF.jpeg",
"fullname": "Anand Kannappan",
"name": "anandnk24",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 1
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670593076071-noauth.jpeg",
"fullname": "Rebecca Qian",
"name": "rebeccaqian",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 12
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/631f35b6e488fcef857cdbda/XZsR6bfelyUX32-T8UTtu.jpeg",
"fullname": "Selvan Sunitha Ravi",
"name": "sunitha98",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": null
}
] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"RebeccaQian1",
"clem",
"marcsun13",
"samusenps",
"Pelmeshek",
"alielfilali01",
"anandnk24",
"gblazex"
],
"count": 9
}
] | 2024-01-31T17:49:26.000Z | 2024-02-01T07:33:06.944Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem 🤗",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1763,
"isFollowing": false
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1644340617257-noauth.png",
"fullname": "Clémentine Fourrier",
"name": "clefourrier",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 459,
"isFollowing": false
}
] | /posts/clefourrier/898508490022091 | 54 | 2 |
184407906548173 | [
{
"type": "text",
"value": "InstantID-2V is out! ✨",
"raw": "InstantID-2V is out! ✨",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "It's like InstantID, but you get a video instead. Nothing crazy here, it's simply a shortcut between two demos. ",
"raw": "It's like InstantID, but you get a video instead. Nothing crazy here, it's simply a shortcut between two demos. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Let's see how it works with the gradio API: ",
"raw": "Let's see how it works with the gradio API: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "1. We call ",
"raw": "1. We call ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/InstantX/InstantID",
"href": null,
"resource": {
"type": "space",
"id": "InstantX/InstantID",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/InstantX/InstantID",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " with a conditional pose from cinematic camera shot (example provided in the demo) ",
"raw": " with a conditional pose from cinematic camera shot (example provided in the demo) ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "2. Then we send the previously generated image to ",
"raw": "2. Then we send the previously generated image to ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/ali-vilab/i2vgen-xl",
"href": null,
"resource": {
"type": "model",
"id": "ali-vilab/i2vgen-xl",
"discussionNum": null
},
"url": "https://huggingface.co/ali-vilab/i2vgen-xl",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Et voilà 🤗 Try it: ",
"raw": "Et voilà 🤗 Try it: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/fffiloni/InstantID-2V",
"href": null,
"resource": {
"type": "space",
"id": "fffiloni/InstantID-2V",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/fffiloni/InstantID-2V",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "—",
"raw": "—",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Note that generation can be quite long, so take the opportunity to brew yourself some coffee 😌",
"raw": "Note that generation can be quite long, so take the opportunity to brew yourself some coffee 😌",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "If you want to skip the queue, you can of course reproduce this pipeline manually",
"raw": "If you want to skip the queue, you can of course reproduce this pipeline manually",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | InstantID-2V is out! ✨
It's like InstantID, but you get a video instead. Nothing crazy here, it's simply a shortcut between two demos.
Let's see how it works with the gradio API: 
1. We call https://huggingface.co/spaces/InstantX/InstantID with a conditional pose from cinematic camera shot (example provided in the demo)
2. Then we send the previously generated image to https://huggingface.co/ali-vilab/i2vgen-xl
Et voilà 🤗 Try it: https://huggingface.co/spaces/fffiloni/InstantID-2V
—
Note that generation can be quite long, so take the opportunity to brew yourself some coffee 😌
If you want to skip the queue, you can of course reproduce this pipeline manually | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61868ce808aae0b5499a2a95/F6BA0anbsoY_Z7M1JrwOe.jpeg",
"fullname": "Sylvain Filoni",
"name": "fffiloni",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5185,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61868ce808aae0b5499a2a95/iRo9U8OgRp1Gd770fSS7K.png"
},
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61868ce808aae0b5499a2a95/ol0tvA4P0Y1BguiJR1w8g.mp4"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/61868ce808aae0b5499a2a95/ee0YuoZxn4tYryNwUPyRl.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"linoyts",
"akhaliq",
"yang100",
"phanes",
"PeepDaSlan9",
"samusenps",
"victor",
"osanseviero",
"clem",
"echobingo147",
"AdinaY",
"cansa",
"danielus"
],
"count": 13
}
] | 2024-01-31T17:30:23.000Z | 2024-01-31T18:33:35.517Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1675778487155-63d4c8ce13ae45b780792f32.jpeg",
"fullname": "Ohenenoo",
"name": "PeepDaSlan9",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 96,
"isFollowing": false
}
] | /posts/fffiloni/184407906548173 | 391 | 1 |
183807800492559 | [
{
"type": "text",
"value": "🚀 The Open Source AI community needs more open datasets for improving Open LLMs. Excited to share our new open dataset for boosting chat models:",
"raw": "🚀 The Open Source AI community needs more open datasets for improving Open LLMs. Excited to share our new open dataset for boosting chat models:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🎉 Welcome Distilabel Capybara DPO, a multi-turn, high-quality preference dataset. ",
"raw": "🎉 Welcome Distilabel Capybara DPO, a multi-turn, high-quality preference dataset. ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized",
"href": null,
"resource": {
"type": "dataset",
"id": "argilla/distilabel-capybara-dpo-7k-binarized",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Why?",
"raw": "Why?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The best closed chat models are built on top of multi-turn dialogue preference data. The OSS community lacks these datasets. This dataset is the first in the series to close this gap.",
"raw": "The best closed chat models are built on top of multi-turn dialogue preference data. The OSS community lacks these datasets. This dataset is the first in the series to close this gap.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Is this dataset useful?",
"raw": "Is this dataset useful?",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "To test this dataset, we've built our virtual launching partner:",
"raw": "To test this dataset, we've built our virtual launching partner:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "🎉 Welcome CapybaraHermes, a preference-tuned OpenHermes with increased second-turn capabilities on MTBench",
"raw": "🎉 Welcome CapybaraHermes, a preference-tuned OpenHermes with increased second-turn capabilities on MTBench",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B",
"href": null,
"resource": {
"type": "model",
"id": "argilla/CapybaraHermes-2.5-Mistral-7B",
"discussionNum": null
},
"url": "https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "As usual, models are the least important to us. We like to focus on the data. Our mission is to build and share high-quality datasets, sharing our methods in the open so the community can improve upon them.",
"raw": "As usual, models are the least important to us. We like to focus on the data. Our mission is to build and share high-quality datasets, sharing our methods in the open so the community can improve upon them.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "That's why we took some time to describe the full methodology on the dataset card; check it out and give us feedback! Data and methods are never perfect!",
"raw": "That's why we took some time to describe the full methodology on the dataset card; check it out and give us feedback! Data and methods are never perfect!",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Finally, this is just a preview version, and we would love to collaborate with you to add more benchmarking results, what hyperparams work for DPO'ing models, what mix of datasets, etc.",
"raw": "Finally, this is just a preview version, and we would love to collaborate with you to add more benchmarking results, what hyperparams work for DPO'ing models, what mix of datasets, etc.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Expect some more datasets in the coming weeks. Let's build the best data for AI, together.",
"raw": "Expect some more datasets in the coming weeks. Let's build the best data for AI, together.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🚀 The Open Source AI community needs more open datasets for improving Open LLMs. Excited to share our new open dataset for boosting chat models:
🎉 Welcome Distilabel Capybara DPO, a multi-turn, high-quality preference dataset.
https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized
Why?
The best closed chat models are built on top of multi-turn dialogue preference data. The OSS community lacks these datasets. This dataset is the first in the series to close this gap.
Is this dataset useful?
To test this dataset, we've built our virtual launching partner:
🎉 Welcome CapybaraHermes, a preference-tuned OpenHermes with increased second-turn capabilities on MTBench
https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B
As usual, models are the least important to us. We like to focus on the data. Our mission is to build and share high-quality datasets, sharing our methods in the open so the community can improve upon them.
That's why we took some time to describe the full methodology on the dataset card; check it out and give us feedback! Data and methods are never perfect!
Finally, this is just a preview version, and we would love to collaborate with you to add more benchmarking results, what hyperparams work for DPO'ing models, what mix of datasets, etc.
Expect some more datasets in the coming weeks. Let's build the best data for AI, together. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/60420dccc15e823a685f2b03/Dn7QTyy9SZ7jKN6xpufVD.png",
"fullname": "Daniel Vila",
"name": "dvilasuero",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 231,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/ddfzg2ruWBF4O2X6agDIP.png"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"davanstrien",
"lhoestq",
"osanseviero",
"victor",
"severo",
"julien-c",
"samusenps",
"clem",
"Smorty100",
"Arjun4707",
"gabrielmbmb",
"kramp",
"mayurninama",
"davidberenstein1957",
"hfthorvaldur",
"leonardlin",
"alvarobartt",
"alielfilali01",
"sbrandeis"
],
"count": 19
},
{
"reaction": "🤗",
"users": [
"davanstrien",
"osanseviero",
"severo",
"julien-c",
"clem",
"Smorty100",
"Arjun4707",
"alielfilali01"
],
"count": 8
}
] | 2024-01-31T17:25:59.000Z | 2024-01-31T19:19:40.329Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1583857146757-5e67bdd61009063689407479.jpeg",
"fullname": "Clem 🤗",
"name": "clem",
"type": "user",
"isPro": true,
"isHf": true,
"isMod": false,
"followerCount": 1763,
"isFollowing": false
}
] | /posts/dvilasuero/183807800492559 | 25 | 1 |
300008628641584 | [
{
"type": "text",
"value": "Weaver: Foundation Models for Creative Writing",
"raw": "Weaver: Foundation Models for Creative Writing",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2401.17268",
"href": null,
"resource": {
"type": "paper",
"id": "2401.17268",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2401.17268",
"code": null,
"user": null,
"label": "Weaver: Foundation Models for Creative Writing (2401.17268)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The most-capable Weaver Ultra model surpasses GPT-4, a state-of-the-art generalist LLM, on various writing scenarios, demonstrating the advantage of training specialized LLMs for writing purposes.",
"raw": "The most-capable Weaver Ultra model surpasses GPT-4, a state-of-the-art generalist LLM, on various writing scenarios, demonstrating the advantage of training specialized LLMs for writing purposes.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Weaver: Foundation Models for Creative Writing
https://huggingface.co/papers/2401.17268
The most-capable Weaver Ultra model surpasses GPT-4, a state-of-the-art generalist LLM, on various writing scenarios, demonstrating the advantage of training specialized LLMs for writing purposes. | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1674929746905-60f1abe7544c2adfd699860c.jpeg",
"fullname": "AK",
"name": "akhaliq",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/60f1abe7544c2adfd699860c/MpF3cKLE2Cco2UdmPdC4J.png"
}
] | [] | [
{
"reaction": "👍",
"users": [
"awacke1",
"osanseviero",
"msey12"
],
"count": 3
},
{
"reaction": "❤️",
"users": [
"clem",
"alielfilali01"
],
"count": 2
}
] | 2024-01-31T17:23:29.000Z | 2024-02-04T16:39:39.154Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/626237d9bbcbd1c34f1bb231/EJrOjvAL-68qMCYdnvOrq.png",
"fullname": "Ali El Filali",
"name": "alielfilali01",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 186,
"isFollowing": false
}
] | /posts/akhaliq/300008628641584 | 75 | 1 |
736165829430748 | [
{
"type": "text",
"value": "Explaining a new state-of-the-art monocular depth estimation model: Depth Anything ✨ 🧶",
"raw": "Explaining a new state-of-the-art monocular depth estimation model: Depth Anything ✨ 🧶",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Before we begin: Depth Anything was recently integrated into 🤗 transformers and you can use it with three lines of code! ✨ ",
"raw": "Before we begin: Depth Anything was recently integrated into 🤗 transformers and you can use it with three lines of code! ✨ ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "code_fence",
"value": null,
"raw": "```python\nfrom transformers import pipeline\n\npipe = pipeline(task=\"depth-estimation\", model=\"LiheYoung/depth-anything-small-hf\")\ndepth = pipe(image)[\"depth\"]\n```",
"href": null,
"resource": null,
"url": null,
"code": "from transformers import pipeline\n\npipe = pipeline(task=\"depth-estimation\", model=\"LiheYoung/depth-anything-small-hf\")\ndepth = pipe(image)[\"depth\"]",
"user": null,
"label": null,
"lang": "python"
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "We have also built an app for you to compare different depth estimation models 🐝 🌸 ",
"raw": "We have also built an app for you to compare different depth estimation models 🐝 🌸 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/merve/compare_depth_models",
"href": null,
"resource": {
"type": "space",
"id": "merve/compare_depth_models",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/merve/compare_depth_models",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Check out Depth Anything in Web by ",
"raw": "Check out Depth Anything in Web by ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@Xenova",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "Xenova",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/spaces/Xenova/depth-anything-web",
"href": null,
"resource": {
"type": "space",
"id": "Xenova/depth-anything-web",
"discussionNum": null
},
"url": "https://huggingface.co/spaces/Xenova/depth-anything-web",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The model's success heavily depends on unlocking the use of unlabeled datasets, although the authors' initial attempt at self-training failed.",
"raw": "The model's success heavily depends on unlocking the use of unlabeled datasets, although the authors' initial attempt at self-training failed.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "What the authors have done:",
"raw": "What the authors have done:",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "➰ Train a teacher model on labelled dataset",
"raw": "➰ Train a teacher model on labelled dataset",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "➰ Guide the student using teacher and also use unlabelled datasets pseudolabelled by the teacher",
"raw": "➰ Guide the student using teacher and also use unlabelled datasets pseudolabelled by the teacher",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "However, this was the cause of the failure, as both architectures were similar, the outputs were the same.",
"raw": "However, this was the cause of the failure, as both architectures were similar, the outputs were the same.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "So the authors have added a more difficult optimization target for student to learn additional knowledge on unlabeled images that went through color jittering, distortions, Gaussian blurring and spatial distortion, so it can learn more invariant representations from them.",
"raw": "So the authors have added a more difficult optimization target for student to learn additional knowledge on unlabeled images that went through color jittering, distortions, Gaussian blurring and spatial distortion, so it can learn more invariant representations from them.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The architecture consists of DINOv2 encoder to extract the features followed by DPT decoder. At first, they train the teacher model on labelled images, and then they jointly train the student model and add in the dataset pseudo-labelled by ViT-L.",
"raw": "The architecture consists of DINOv2 encoder to extract the features followed by DPT decoder. At first, they train the teacher model on labelled images, and then they jointly train the student model and add in the dataset pseudo-labelled by ViT-L.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Thanks to this, Depth Anything performs very well! I have also benchmarked the inference duration of the model against different models here. I also ran torch.compile benchmarks across them and got nice speed-ups 🚀 ",
"raw": "Thanks to this, Depth Anything performs very well! I have also benchmarked the inference duration of the model against different models here. I also ran torch.compile benchmarks across them and got nice speed-ups 🚀 ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://huggingface2.notion.site/DPT-Benchmarks-1e516b0ba193460e865c47b3a5681efb?pvs=4",
"href": "https://huggingface2.notion.site/DPT-Benchmarks-1e516b0ba193460e865c47b3a5681efb?pvs=4",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Explaining a new state-of-the-art monocular depth estimation model: Depth Anything ✨ 🧶
Before we begin: Depth Anything is recently integrated to 🤗 transformers and you can use it with three lines of code! ✨
```python
from transformers import pipeline
pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-small-hf")
depth = pipe(image)["depth"]
```
We have also built an app for you to compare different depth estimation models 🐝 🌸 https://huggingface.co/spaces/merve/compare_depth_models
Check out Depth Anything in Web by @Xenova https://huggingface.co/spaces/Xenova/depth-anything-web
The model's success heavily depends on unlocking the use of unlabeled datasets, although initially the authors used self-training and failed.
What the authors have done:
➰ Train a teacher model on a labelled dataset
➰ Guide the student using the teacher, and also use unlabelled datasets pseudo-labelled by the teacher
However, this was the cause of the failure: since both architectures were similar, the outputs were the same.
So the authors have added a more difficult optimization target for the student to learn additional knowledge on unlabeled images that went through color jittering, distortions, Gaussian blurring and spatial distortion, so it can learn more invariant representations from them.
The architecture consists of DINOv2 encoder to extract the features followed by DPT decoder. At first, they train the teacher model on labelled images, and then they jointly train the student model and add in the dataset pseudo-labelled by ViT-L.
Thanks to this, Depth Anything performs very well! I have also benchmarked the inference duration of the model against different models here. I also ran torch.compile benchmarks across them and got nice speed-ups 🚀 https://huggingface2.notion.site/DPT-Benchmarks-1e516b0ba193460e865c47b3a5681efb?pvs=4
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5589,
"isFollowing": false
} | [
{
"type": "video",
"url": "https://cdn-uploads.huggingface.co/production/uploads/6141a88b3a0ec78603c9e784/DoSkh2e1u1L3u170Br8qQ.qt"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/61b253b7ac5ecaae3d1efe0c/hwiQ0uvz3t-L5a-NtBIO6.png",
"fullname": "Joshua",
"name": "Xenova",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 3792
}
] | [
{
"reaction": "❤️",
"users": [
"osanseviero",
"clem",
"Xenova",
"Ar4ikov",
"tusk3d",
"marooz"
],
"count": 6
},
{
"reaction": "👍",
"users": [
"alvarobartt",
"osanseviero",
"clem",
"valcore"
],
"count": 4
},
{
"reaction": "🤯",
"users": [
"Ar4ikov"
],
"count": 1
}
] | 2024-01-31T15:14:41.000Z | 2024-01-31T15:14:41.043Z | [] | /posts/merve/736165829430748 | 89 | 0 |
856965186375780 | [
{
"type": "text",
"value": "TIL: ",
"raw": "TIL: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/datasets/EleutherAI/pile",
"href": null,
"resource": {
"type": "dataset",
"id": "EleutherAI/pile",
"discussionNum": null
},
"url": "https://huggingface.co/datasets/EleutherAI/pile",
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": " is on Wikipedia: ",
"raw": " is on Wikipedia: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://en.wikipedia.org/wiki/The_Pile_(dataset)",
"href": "https://en.wikipedia.org/wiki/The_Pile_(dataset)",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | TIL: https://huggingface.co/datasets/EleutherAI/pile is on Wikipedia: https://en.wikipedia.org/wiki/The_Pile_(dataset) | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1646492542174-5e70f6048ce3c604d78fe133.jpeg",
"fullname": "Christopher Akiki",
"name": "christopher",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 68,
"isFollowing": false
} | [] | [] | [
{
"reaction": "🤯",
"users": [
"kramp",
"merve",
"osanseviero",
"clem",
"Gatozu35"
],
"count": 5
},
{
"reaction": "🤗",
"users": [
"victor",
"merve",
"clem",
"OmbelineM"
],
"count": 4
},
{
"reaction": "❤️",
"users": [
"clem"
],
"count": 1
}
] | 2024-01-31T13:04:31.000Z | 2024-01-31T13:04:31.168Z | [] | /posts/christopher/856965186375780 | 217 | 0 |
546140238129110 | [
{
"type": "text",
"value": "🔍 Today's pick in Interpretability & Analysis of LMs: In-Context Language Learning: Architectures and Algorithms by E. Akyürek, B. Wang, Y. Kim, J. Andreas",
"raw": "🔍 Today's pick in Interpretability & Analysis of LMs: In-Context Language Learning: Architectures and Algorithms by E. Akyürek, B. Wang, Y. Kim, J. Andreas",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "This work methodically evaluates of in-context learning on formal languages across several model architectures, showing that Transformers outperform all other recurrent and convolutional models, including SSMs. These results are attributed to the presence of “n-gram heads” able to retrieve",
"raw": "This work methodically evaluates of in-context learning on formal languages across several model architectures, showing that Transformers outperform all other recurrent and convolutional models, including SSMs. These results are attributed to the presence of “n-gram heads” able to retrieve",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "the token following a context already seen in the current context window and copy it. This idea is further supported by a better ability of Transformer models to encode in-context n-gram frequencies for n>1, and a higher similarity of Transformer-based LM outputs with classical n-gram models trained on the same data. Finally, these insights are applied to the design static attention layers mimicking the behavior of n-gram head, which lead to lower perplexity despite the lower computational costs.",
"raw": "the token following a context already seen in the current context window and copy it. This idea is further supported by a better ability of Transformer models to encode in-context n-gram frequencies for n>1, and a higher similarity of Transformer-based LM outputs with classical n-gram models trained on the same data. Finally, these insights are applied to the design static attention layers mimicking the behavior of n-gram head, which lead to lower perplexity despite the lower computational costs.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "📄 Paper: ",
"raw": "📄 Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2401.12973",
"href": null,
"resource": {
"type": "paper",
"id": "2401.12973",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2401.12973",
"code": null,
"user": null,
"label": "In-Context Language Learning: Architectures and Algorithms (2401.12973)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "💻 Code: ",
"raw": "💻 Code: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/berlino/seq_icl",
"href": "https://github.com/berlino/seq_icl",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | 🔍 Today's pick in Interpretability & Analysis of LMs: In-Context Language Learning: Architectures and Algorithms by E. Akyürek, B. Wang, Y. Kim, J. Andreas
This work methodically evaluates in-context learning on formal languages across several model architectures, showing that Transformers outperform all other recurrent and convolutional models, including SSMs. These results are attributed to the presence of “n-gram heads” able to retrieve
the token following a context already seen in the current context window and copy it. This idea is further supported by a better ability of Transformer models to encode in-context n-gram frequencies for n>1, and a higher similarity of Transformer-based LM outputs with classical n-gram models trained on the same data. Finally, these insights are applied to the design of static attention layers mimicking the behavior of n-gram heads, which leads to lower perplexity despite the lower computational costs.
📄 Paper: https://huggingface.co/papers/2401.12973
💻 Code: https://github.com/berlino/seq_icl | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1670231290373-5e7749883d77a72421292d07.jpeg",
"fullname": "Gabriele Sarti",
"name": "gsarti",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 205,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/reaqgd2WFX8QAqQ9bDMMi.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/qiTVEMtwf7XhwBQtlnN6P.jpeg"
},
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/5e7749883d77a72421292d07/S68guJgUvjqQ5e57CKDyv.jpeg"
}
] | [] | [
{
"reaction": "❤️",
"users": [
"merve",
"osanseviero",
"clefourrier",
"clem"
],
"count": 4
}
] | 2024-01-31T12:52:06.000Z | 2024-02-01T16:20:55.468Z | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1648113222875-6141a88b3a0ec78603c9e784.png",
"fullname": "Merve Noyan",
"name": "merve",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 5589,
"isFollowing": false
}
] | /posts/gsarti/546140238129110 | 3 | 2 |
951407525332988 | [
{
"type": "text",
"value": "Pretty novel idea on how to estimate *semantic* uncertainty. 🤔",
"raw": "Pretty novel idea on how to estimate *semantic* uncertainty. 🤔",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Text generation tasks are challenging because a sentence can be written in multiple ways but still preserve its meaning.",
"raw": "Text generation tasks are challenging because a sentence can be written in multiple ways but still preserve its meaning.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "For instance, \"France's capital is Paris\" means the same as \"Paris is France's capital.\" 🇫🇷",
"raw": "For instance, \"France's capital is Paris\" means the same as \"Paris is France's capital.\" 🇫🇷",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "In uncertainty quantification, we often look at token-level probabilities to quantify how \"confident\" an LLM is about its output. However, in this paper, the authors look at uncertainty at a meaning level.",
"raw": "In uncertainty quantification, we often look at token-level probabilities to quantify how \"confident\" an LLM is about its output. However, in this paper, the authors look at uncertainty at a meaning level.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Their motivation is that meanings are especially important for LLMs' trustworthiness; a system can be reliable even with many different ways to say the same thing, but answering with inconsistent meanings shows poor reliability.",
"raw": "Their motivation is that meanings are especially important for LLMs' trustworthiness; a system can be reliable even with many different ways to say the same thing, but answering with inconsistent meanings shows poor reliability.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "To estimate semantic uncertainty, they introduce an algorithm for clustering sequences that mean the same thing, based on the principle that two sentences mean the same thing if we can infer one from the other. 🔄🤝",
"raw": "To estimate semantic uncertainty, they introduce an algorithm for clustering sequences that mean the same thing, based on the principle that two sentences mean the same thing if we can infer one from the other. 🔄🤝",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Then, they determine the likelihood of each meaning and estimate the semantic entropy by summing probabilities that share a meaning.",
"raw": "Then, they determine the likelihood of each meaning and estimate the semantic entropy by summing probabilities that share a meaning.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "There's a lot more to it, but their results look quite nice when compared with non-semantic approaches.",
"raw": "There's a lot more to it, but their results look quite nice when compared with non-semantic approaches.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Paper: ",
"raw": "Paper: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "resource",
"value": null,
"raw": "https://huggingface.co/papers/2302.09664",
"href": null,
"resource": {
"type": "paper",
"id": "2302.09664",
"discussionNum": null
},
"url": "https://huggingface.co/papers/2302.09664",
"code": null,
"user": null,
"label": "Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation\n in Natural Language Generation (2302.09664)",
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | Pretty novel idea on how to estimate *semantic* uncertainty. 🤔
Text generation tasks are challenging because a sentence can be written in multiple ways but still preserve its meaning.
For instance, "France's capital is Paris" means the same as "Paris is France's capital." 🇫🇷
In uncertainty quantification, we often look at token-level probabilities to quantify how "confident" an LLM is about its output. However, in this paper, the authors look at uncertainty at a meaning level.
Their motivation is that meanings are especially important for LLMs' trustworthiness; a system can be reliable even with many different ways to say the same thing, but answering with inconsistent meanings shows poor reliability.
To estimate semantic uncertainty, they introduce an algorithm for clustering sequences that mean the same thing, based on the principle that two sentences mean the same thing if we can infer one from the other. 🔄🤝
Then, they determine the likelihood of each meaning and estimate the semantic entropy by summing probabilities that share a meaning.
There's a lot more to it, but their results look quite nice when compared with non-semantic approaches.
Paper: https://huggingface.co/papers/2302.09664
| {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1657144463525-629a173153a72d997d3f57d0.jpeg",
"fullname": "Santiago Viquez",
"name": "santiviquez",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 84,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/629a173153a72d997d3f57d0/5HScy9_bb1AT-F90_YDpi.jpeg"
}
] | [] | [
{
"reaction": "👍",
"users": [
"osanseviero",
"victor",
"gsarti",
"merve",
"clem",
"Citaman"
],
"count": 6
},
{
"reaction": "❤️",
"users": [
"gsarti",
"merve",
"clem"
],
"count": 3
}
] | 2024-01-31T10:52:00.000Z | 2024-01-31T10:52:00.111Z | [] | /posts/santiviquez/951407525332988 | 26 | 0 |
540156790542613 | [
{
"type": "text",
"value": "The next version of Gradio will be significantly more efficient (as well as a bit faster) for anyone who uses Gradio's streaming features. Looking at you chatbot developers ",
"raw": "The next version of Gradio will be significantly more efficient (as well as a bit faster) for anyone who uses Gradio's streaming features. Looking at you chatbot developers ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@oobabooga",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "oobabooga",
"label": null,
"lang": null
},
{
"type": "text",
"value": " ",
"raw": " ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@pseudotensor",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "pseudotensor",
"label": null,
"lang": null
},
{
"type": "text",
"value": " :) ",
"raw": " :) ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "The major change that we're making is that when you stream data, Gradio used to send the entire payload at each token. This is generally the most robust way to ensure all the data is correctly transmitted. We've now switched to sending \"diffs\" --> so at each time step, we automatically compute the diff between the most recent updates and then only send the latest token (or whatever the diff may be). Coupled with the fact that we are now using SSE, which is a more robust communication protocol than WS (SSE will resend packets if there's any drops), we should have the best of both worlds: efficient *and* robust streaming.",
"raw": "The major change that we're making is that when you stream data, Gradio used to send the entire payload at each token. This is generally the most robust way to ensure all the data is correctly transmitted. We've now switched to sending \"diffs\" --> so at each time step, we automatically compute the diff between the most recent updates and then only send the latest token (or whatever the diff may be). Coupled with the fact that we are now using SSE, which is a more robust communication protocol than WS (SSE will resend packets if there's any drops), we should have the best of both worlds: efficient *and* robust streaming.",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "new_line",
"value": null,
"raw": "\n",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "text",
"value": "Very cool stuff ",
"raw": "Very cool stuff ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "mention",
"value": null,
"raw": "@aliabid94",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": "aliabid94",
"label": null,
"lang": null
},
{
"type": "text",
"value": "! PR: ",
"raw": "! PR: ",
"href": null,
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
},
{
"type": "link",
"value": null,
"raw": "https://github.com/gradio-app/gradio/pull/7102",
"href": "https://github.com/gradio-app/gradio/pull/7102",
"resource": null,
"url": null,
"code": null,
"user": null,
"label": null,
"lang": null
}
] | The next version of Gradio will be significantly more efficient (as well as a bit faster) for anyone who uses Gradio's streaming features. Looking at you chatbot developers @oobabooga @pseudotensor :)
The major change that we're making is that when you stream data, Gradio used to send the entire payload at each token. This is generally the most robust way to ensure all the data is correctly transmitted. We've now switched to sending "diffs" --> so at each time step, we automatically compute the diff between the most recent updates and then only send the latest token (or whatever the diff may be). Coupled with the fact that we are now using SSE, which is a more robust communication protocol than WS (SSE will resend packets if there's any drops), we should have the best of both worlds: efficient *and* robust streaming.
Very cool stuff @aliabid94! PR: https://github.com/gradio-app/gradio/pull/7102 | {
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1621947938344-noauth.png",
"fullname": "Abubakar Abid",
"name": "abidlabs",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 487,
"isFollowing": false
} | [
{
"type": "image",
"url": "https://cdn-uploads.huggingface.co/production/uploads/608b8bb39d7c9519b4adae19/DJxBWRaMXMxMAGFlaY-oF.png"
}
] | [
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/1655236986178-61d7830bb77a8c48d48bc755.png",
"fullname": "Ali Abid",
"name": "aliabid94",
"type": "user",
"isPro": false,
"isHf": true,
"isMod": false,
"followerCount": 44
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/63d0597ff2341424c808b771/zKXsWMDdqESiy0_lDIJwe.png",
"fullname": "oobabooga",
"name": "oobabooga",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 98
},
{
"avatarUrl": "https://cdn-avatars.huggingface.co/v1/production/uploads/6308791ac038bf42d568153f/z9TovAddXU3OQR9N_2KFP.jpeg",
"fullname": "Jonathan McKinney",
"name": "pseudotensor",
"type": "user",
"isPro": false,
"isHf": false,
"isMod": false,
"followerCount": 5
}
] | [
{
"reaction": "❤️",
"users": [
"abbasm2",
"osanseviero",
"yjernite",
"oobabooga",
"samusenps",
"jeffboudier",
"gsarti",
"sam-kap",
"satpalsr",
"akhaliq",
"alielfilali01",
"joaogante"
],
"count": 12
}
] | 2024-01-30T20:03:14.000Z | 2024-01-30T20:03:14.077Z | [] | /posts/abidlabs/540156790542613 | 54 | 0 |