shuttie committed · Commit 14daf88 · 1 Parent(s): 58b641d

add readme

Files changed (1): README.md (+72, -0)

README.md CHANGED
---
license: apache-2.0
language:
  - en
pretty_name: HackerNews comments dataset
configs:
  - config_name: default
    features:
      - name: id
        dtype: int
      - name: deleted
        dtype: boolean
      - name: type
        dtype: string
      - name: by
        dtype: string
      - name: time
        dtype: int
      - name: text
        dtype: string
      - name: dead
        dtype: boolean
      - name: parent
        dtype: int
      - name: poll
        dtype: int
      - name: kids
        sequence: int
      - name: url
        dtype: string
      - name: score
        dtype: int
      - name: title
        dtype: string
      - name: parts
        sequence: int
      - name: descendants
        dtype: int
    data_files:
      - split: train
        path: items/*.jsonl.zst
---

# HackerNews Comments Dataset

A dataset of all [HN API](https://github.com/HackerNews/API) items from `id=0` to `id=41723169` (so from 2006 through 02 Oct 2024). The dataset was built by scraping the HN API according to its official [schema and docs](https://github.com/HackerNews/API). The scraper code is also available on GitHub: [nixiesearch/hnscrape](https://github.com/nixiesearch/hnscrape)
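
Each record corresponds to a single item fetched from the public Firebase endpoint documented in the [HN API](https://github.com/HackerNews/API). As a minimal sketch (using `requests` as an assumed dependency), fetching the item with `id=46` returns a payload like the example shown in the next section:

```python
import requests

def fetch_item(item_id: int) -> dict:
    """Fetch a single HackerNews item from the official API endpoint."""
    url = f"https://hacker-news.firebaseio.com/v0/item/{item_id}.json"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json()

print(fetch_item(46))
```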

## Dataset contents

No cleaning, validation or filtering was performed. The resulting data files are raw JSON API response dumps in zstd-compressed JSONL files. An example payload:

```json
{
  "by": "goldfish",
  "descendants": 0,
  "id": 46,
  "score": 4,
  "time": 1160581168,
  "title": "Rentometer: Check How Your Rent Compares to Others in Your Area",
  "type": "story",
  "url": "http://www.rentometer.com/"
}
```
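
Because the shards are plain zstd-compressed JSONL, they can also be read without any dataset tooling. A minimal sketch with the `zstandard` package (the shard name below is a placeholder for any file under `items/`):

```python
import io
import json
import zstandard

# Stream-decompress one shard; every line is one raw HN API item.
with open("items/shard-00000.jsonl.zst", "rb") as fh:  # placeholder filename
    reader = zstandard.ZstdDecompressor().stream_reader(fh)
    for line in io.TextIOWrapper(reader, encoding="utf-8"):
        item = json.loads(line)
        print(item["id"], item.get("type"))
```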

## Usage

You can load this dataset directly with the [Hugging Face Datasets](https://github.com/huggingface/datasets/) library:

```python
from datasets import load_dataset

# repo id assumed to match this dataset's path on the Hugging Face Hub
data = load_dataset("nixiesearch/hackernews-comments", split="train")
```
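
The full dump is large, so `datasets` streaming mode (`streaming=True`, a standard `load_dataset` option) avoids downloading everything up front; the repo id is the same assumption as above:

```python
from datasets import load_dataset

# Iterate lazily over raw items without materializing the dataset on disk.
items = load_dataset("nixiesearch/hackernews-comments", split="train", streaming=True)
for item in items:
    print(item["id"], item["type"])
    break  # just peek at the first record
```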

## License

Apache License 2.0.