---

license: apache-2.0
language:
- en
pretty_name: HackerNews comments dataset
dataset_info:
  config_name: default
  features:
    - name: id
      dtype: int64
    - name: deleted
      dtype: bool
    - name: type
      dtype: string
    - name: by
      dtype: string
    - name: time
      dtype: int64
    - name: text
      dtype: string
    - name: dead
      dtype: bool
    - name: parent
      dtype: int64
    - name: poll
      dtype: int64
    - name: kids
      sequence: int64
    - name: url
      dtype: string
    - name: score
      dtype: int64
    - name: title
      dtype: string
    - name: parts
      sequence: int64
    - name: descendants
      dtype: int64
configs:
- config_name: default
  data_files:
  - split: train
    path: items/*.jsonl.zst
---


# HackerNews Comments Dataset

A dataset of all [HN API](https://github.com/HackerNews/API) items from `id=0` to `id=41422887` (that is, from 2006 until 02 Sep 2024). The dataset was built by scraping the HN API according to its official [schema and docs](https://github.com/HackerNews/API). The scraper code is available on GitHub: [nixiesearch/hnscrape](https://github.com/nixiesearch/hnscrape)

## Dataset contents

No cleaning, validation, or filtering was performed. The data files are raw JSON API response dumps stored as zstd-compressed JSONL. An example payload:

```json
{
  "by": "goldfish",
  "descendants": 0,
  "id": 46,
  "score": 4,
  "time": 1160581168,
  "title": "Rentometer: Check How Your Rent Compares to Others in Your Area",
  "type": "story",
  "url": "http://www.rentometer.com/"
}
```
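Since the files are raw API dumps, fields that don't apply to an item's type are simply absent from the JSON (a story has no `parent`; a comment has no `title` or `score`). A minimal sketch of parsing one raw line with the standard library, using `dict.get` for the optional fields (the comment record below is a hypothetical example, not taken from the dataset):

```python
import json

# A raw JSONL line as the HN API would return it (hypothetical comment item)
line = '{"by": "alice", "id": 47, "parent": 46, "text": "Nice tool!", "time": 1160581200, "type": "comment"}'

item = json.loads(line)

# Fields missing for this item type come back as None (or a chosen default)
item_type = item.get("type")    # "comment"
parent_id = item.get("parent")  # set for comments, absent for top-level stories
title = item.get("title")       # None: comments carry no title
kids = item.get("kids", [])     # default to an empty list of replies

print(item_type, parent_id, title, kids)
```

Note that the Hugging Face `dataset_info` schema above flattens this: every row exposes all 14 features, with nulls where the raw payload had no value.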

## Usage

You can load this dataset directly with the [Huggingface Datasets](https://github.com/huggingface/datasets/) library.

```shell
pip install datasets zstandard
```

```python
from datasets import load_dataset

ds = load_dataset("nixiesearch/hackernews-comments", split="train")
print(ds.features)
```
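Because comments link back to their parent via `parent` (and parents list their replies in `kids`), discussion threads can be reassembled offline once items are loaded. A minimal sketch of grouping comments under their parent, using a few hypothetical items shaped like the dataset's rows rather than the full 41M-item dump:

```python
from collections import defaultdict

# Hypothetical items mimicking the dataset schema
items = [
    {"id": 46, "type": "story", "title": "Rentometer", "kids": [47, 48]},
    {"id": 47, "type": "comment", "parent": 46, "text": "Nice tool!"},
    {"id": 48, "type": "comment", "parent": 46, "text": "Rents are up."},
]

# Map each parent id to the ids of the comments directly below it
replies = defaultdict(list)
for item in items:
    if item["type"] == "comment":
        replies[item["parent"]].append(item["id"])

print(dict(replies))  # {46: [47, 48]}
```

The same loop over the real dataset (ideally in streaming mode, given its size) yields a parent-to-children index for any story.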

## License

Apache License 2.0.