status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime
---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,785 | ["src/Common/ThreadStatus.cpp"] | INSERT INTO table FORMAT CSV succeeds but reports an error message | I ran bin/upload-data.sh from https://github.com/Altinity/tpc-ds:
```bash
#!/bin/bash
# For each data file: derive the target table name, stream the file into ClickHouse, then delete it.
for file_name in ../data/*.dat; do
  table_file="${file_name##*/}"
  # Strip the trailing chunk suffix, lowercase, drop digits and leftover double underscores.
  table_name=$(echo "${table_file%_*}" | tr '[:upper:]' '[:lower:]' | tr -d '[0-9]' | sed 's/__//g')
  upload_data_sql="INSERT INTO $table_name FORMAT CSV"
  echo "$upload_data_sql <-- $(du -h "$file_name")"
  cat "$file_name" | clickhouse client --format_csv_delimiter="|" --max_partitions_per_insert_block=100 --database="tpcds" --query="$upload_data_sql"
  rm "$file_name"
  sleep 5
done
```
Running it reports:
```
INSERT INTO customer_address FORMAT CSV <-- 5.3M ../data/customer_address_1_64_0001.dat
Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
Cannot set alternative signal stack for thread, errno: 12, strerror: Cannot allocate memory
```
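Despite the warnings, the rows do land in the table; a quick sanity check along these lines (a sketch, using the database and table from the output above) confirms it:
```sql
SELECT count() FROM tpcds.customer_address;
```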
| https://github.com/ClickHouse/ClickHouse/issues/18785 | https://github.com/ClickHouse/ClickHouse/pull/18832 | 371149b8fef1806259ab2d366346dd12317a7a3e | 19ad9e7a5161821e3dcfe0b4ea1f7d85209000fc | "2021-01-06T08:39:36Z" | c++ | "2021-01-07T22:15:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,765 | ["src/Storages/MergeTree/IMergeTreeReader.cpp", "tests/queries/0_stateless/02017_columns_with_dot_2.reference", "tests/queries/0_stateless/02017_columns_with_dot_2.sql"] | ClickHouse Nested issue | After you create a table whose column names contain dots and then ALTER the table, ClickHouse throws an exception about a bad cast.
version 20.12.3
How to reproduce
```
CREATE TABLE test_nested
(
`id` String,
`with_dot.str` String,
`with_dot.array` Array(Int32)
)
ENGINE = MergeTree()
ORDER BY id
Insert Into test_nested (id,`with_dot.str`, `with_dot.array`) VALUES('123','asd',[1,2])
select * FROM test_nested
SELECT *
FROM test_nested
Query id: 42f26a6e-7573-4169-a4a1-9fe7022ba656
┌─id──┬─with_dot.str─┬─with_dot.array─┐
│ 123 │ asd          │ [1,2]          │
└─────┴──────────────┴────────────────┘
alter table test_nested ADD COLUMN `with_dot.bool` UInt8
SELECT *
FROM test_nested
Query id: 3ea8ae6b-1e0a-4ae5-b4d0-dcdcb6b7b8f8
Received exception from server (version 20.12.3):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Bad cast from type DB::DataTypeNumber<char8_t> to DB::DataTypeArray: (while reading from part /var/lib/clickhouse/store...
```
I think this happens because after the ALTER ClickHouse thinks that the field `with_dot.bool` is part of a Nested structure and tries to cast it to an array, but I'm not creating a Nested field, I'm only using a field name that contains `.`.
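For contrast, here is a minimal sketch of an explicitly Nested column (the table name is hypothetical); with Nested, the dotted subcolumns really are arrays, which is the behaviour the reporter did not ask for:
```sql
CREATE TABLE test_really_nested
(
    `id` String,
    `n` Nested(str String, num Int32) -- expands to `n.str` Array(String) and `n.num` Array(Int32)
)
ENGINE = MergeTree()
ORDER BY id
```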
| https://github.com/ClickHouse/ClickHouse/issues/18765 | https://github.com/ClickHouse/ClickHouse/pull/28762 | 43102e84273b9aa3aea2d3ed13e0a9858244b78a | 0bb74f8eaf43e86e405708ddebc8758ec0da9ef8 | "2021-01-05T14:04:13Z" | c++ | "2021-09-10T14:34:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,754 | ["src/Storages/StorageS3.cpp", "tests/integration/test_storage_s3/test.py"] | s3 table function auto compression doesn't work | **How to reproduce**
Clickhouse Server 20.13
```
SELECT *
FROM url('https://storage.yandexcloud.net/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
LIMIT 2
Query id: 31d85b1f-cc2a-41c4-978d-1fb09840d7b7
┌─column1─┬─column2─┬─column3─┐
│       1 │       2 │       3 │
│       3 │       2 │       1 │
└─────────┴─────────┴─────────┘
SELECT *
FROM url('https://storage.yandexcloud.net/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32', auto)
LIMIT 2
Query id: 34742713-7a41-49ba-a2dd-713e6ddbbc00
┌─column1─┬─column2─┬─column3─┐
│       1 │       2 │       3 │
│       3 │       2 │       1 │
└─────────┴─────────┴─────────┘
SELECT *
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32', auto)
LIMIT 2
Query id: 6d3975a2-07d6-48fc-8c22-5293eb42c380
Received exception from server (version 20.13.1):
Code: 27. DB::Exception: Received from localhost:9000. DB::Exception: Cannot parse input: expected ',' before: '�\b\bpg�_\0data.csv\03�1�1�2��\\f:@�e��\\\0�V>�\0\0\0'{}:
Row 1:
Column 0, name: column1, type: UInt32, ERROR: text "<0x1F>�<BACKSPACE><BACKSPACE>pg�_<ASCII NUL><0x03>" is not like UInt32
: While executing S3.
SELECT *
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32')
LIMIT 2
Query id: 5fbb23a1-9893-4b1a-a4b0-ffa36225e8bb
Received exception from server (version 20.13.1):
Code: 27. DB::Exception: Received from localhost:9000. DB::Exception: Cannot parse input: expected ',' before: '�\b\bpg�_\0data.csv\03�1�1�2��\\f:@�e��\\\0�V>�\0\0\0'{}:
Row 1:
Column 0, name: column1, type: UInt32, ERROR: text "<0x1F>�<BACKSPACE><BACKSPACE>pg�_<ASCII NUL><0x03>" is not like UInt32
: While executing S3.
SELECT *
FROM s3('https://storage.yandexcloud.net/my-test-bucket-768/data.csv.gz', 'CSV', 'column1 UInt32, column2 UInt32, column3 UInt32', gz)
LIMIT 2
Query id: 7ce193ce-fd55-4369-be85-d59c83b0dfa8
┌─column1─┬─column2─┬─column3─┐
│       1 │       2 │       3 │
│       3 │       2 │       1 │
└─────────┴─────────┴─────────┘
```
**Expected behavior**
s3 table function would recognize compression from file extension.
**Additional context**
https://github.com/ClickHouse/ClickHouse/pull/7840#issuecomment-570837458
| https://github.com/ClickHouse/ClickHouse/issues/18754 | https://github.com/ClickHouse/ClickHouse/pull/19793 | f996b8433499cbbabc32febf55ce2291afbc2290 | 98e88d7305c6d067df22f6e1b3413cdd95b7e79d | "2021-01-05T00:57:20Z" | c++ | "2021-01-29T20:42:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,741 | ["src/Compression/CompressedReadBuffer.h", "src/Compression/CompressionFactory.cpp", "src/Server/TCPHandler.cpp", "src/Storages/Distributed/DistributedBlockOutputStream.cpp", "tests/queries/0_stateless/01640_distributed_async_insert_compression.reference", "tests/queries/0_stateless/01640_distributed_async_insert_compression.sql"] | Data rejected by server when clickhouse-client uses zstd network compression | Hello guys,
**Description**
When inserting data into a replicated+distributed table, I've noticed that not all data reaches all shards, only the shard the client connects to.
TL;DR: the command that fails (the client uses zstd network compression, as described below):
cat payload.json | clickhouse client --input_format_skip_unknown_fields 1 --host 192.168.121.201 --query "INSERT INTO test_db.tbl_distributed FORMAT JSONEachRow" --max_insert_block_size=1000000
Clickhouse version: 20.12.5 revision 54442
**How to reproduce**
For a 4-node cluster split into two shards (each with 2 replicas),
- shard01r1
- shard01r2
- shard02r1
- shard02r2
when connecting to the shard01r1 node, data is inserted only into shard01r1 (and its replica on shard01r2).
If I drop `--network_compression_method`, everything works fine.
***ClickHouse remote_servers***
```
<yandex>
<remote_servers>
<test_cluster>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>192.168.121.201</host>
<port>9000</port>
</replica>
<replica>
<host>192.168.121.202</host>
<port>9000</port>
</replica>
</shard>
<shard>
<internal_replication>true</internal_replication>
<replica>
<host>192.168.121.203</host>
<port>9000</port>
</replica>
<replica>
<host>192.168.121.204</host>
<port>9000</port>
</replica>
</shard>
</test_cluster>
</remote_servers>
</yandex>
```
***SQL DDL***
```
CREATE DATABASE IF NOT EXISTS test_db ON CLUSTER test_cluster;
CREATE TABLE IF NOT EXISTS test_db.tbl_local ON CLUSTER test_cluster
(
timestamp DateTime64,
data String
)
ENGINE = ReplicatedMergeTree()
PARTITION BY toYYYYMMDD(timestamp)
ORDER BY (timestamp)
SETTINGS index_granularity = 8192;
CREATE TABLE IF NOT EXISTS test_db.tbl_distributed ON CLUSTER test_cluster AS test_db.tbl_local ENGINE = Distributed(test_cluster, test_db, tbl_local, rand());
```
***Payload***
```json
{"timestamp":"2021-01-01T01:00:00.000","data": "Hello World #1"}
{"timestamp":"2021-01-01T02:00:00.000","data": "Hello World #2"}
{"timestamp":"2021-01-01T03:00:00.000","data": "Hello World #3"}
{"timestamp":"2021-01-01T04:00:00.000","data": "Hello World #4"}
```
Upon insert, only two rows will be saved in the database.
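A quick way to see where the rows actually landed (a sketch; `clusterAllReplicas` runs the query against the local table on every replica of the cluster):
```sql
SELECT hostName() AS host, count() AS rows
FROM clusterAllReplicas('test_cluster', test_db, tbl_local)
GROUP BY host
ORDER BY host
```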
**Error message**
```
(version 20.12.5.14 (official build))
2021.01.04 18:20:56.489979 [ 6860 ] {} <Error> test_db.tbl_distributed.DirectoryMonitor: Renamed `/var/lib/clickhouse/store/dfa/dfaa92a5-34d4-4269-af39-a739fed1541a/shard2_replica1,shard2_replica2/1.bin` to `/var/lib/clickhouse/store/dfa/dfaa92a5-34d4-4269-af39-a739fed1541a/shard2_replica1,shard2_replica2/broken/1.bin`
2021.01.04 18:20:56.490303 [ 6860 ] {} <Error> test_db.tbl_distributed.DirectoryMonitor: Code: 271, e.displayText() = DB::Exception: Received from 192.168.121.204:9000. DB::Exception: Data compressed with different methods, given method byte 0x90, previous method byte 0x82. Stack trace:
0. DB::CompressedReadBufferBase::readCompressedData(unsigned long&, unsigned long&) @ 0xd7d654e in /usr/bin/clickhouse
1. DB::CompressedReadBuffer::nextImpl() @ 0xd7d4b20 in /usr/bin/clickhouse
2. DB::NativeBlockInputStream::readImpl() @ 0xdf40dd6 in /usr/bin/clickhouse
3. DB::IBlockInputStream::read() @ 0xd8a0715 in /usr/bin/clickhouse
4. DB::TCPHandler::receiveData(bool) @ 0xe7408f1 in /usr/bin/clickhouse
5. DB::TCPHandler::receivePacket() @ 0xe739ddc in /usr/bin/clickhouse
6. DB::TCPHandler::readDataNext(unsigned long const&, int const&) @ 0xe73b56f in /usr/bin/clickhouse
7. DB::TCPHandler::processInsertQuery(DB::Settings const&) @ 0xe73a31e in /usr/bin/clickhouse
8. DB::TCPHandler::runImpl() @ 0xe735929 in /usr/bin/clickhouse
9. DB::TCPHandler::run() @ 0xe741c47 in /usr/bin/clickhouse
10. Poco::Net::TCPServerConnection::start() @ 0x10eebb1f in /usr/bin/clickhouse
11. Poco::Net::TCPServerDispatcher::run() @ 0x10eed531 in /usr/bin/clickhouse
12. Poco::PooledThread::run() @ 0x1101ab09 in /usr/bin/clickhouse
13. Poco::ThreadImpl::runnableEntry(void*) @ 0x11016a9a in /usr/bin/clickhouse
14. start_thread @ 0x7fa3 in /usr/lib/x86_64-linux-gnu/libpthread-2.28.so
15. clone @ 0xf94cf in /usr/lib/x86_64-linux-gnu/libc-2.28.so
```
We can observe a `broken` directory in the store, containing the missing data in its *.bin files:
`/var/lib/clickhouse/store/dfa/dfaa92a5-34d4-4269-af39-a739fed1541a/shard2_replica1,shard2_replica2/broken`
`clickhouse-compressor` also complains when opening the bin file:
```
clickhouse-compressor --decompress < /var/lib/clickhouse/store/dfa/dfaa92a5-34d4-4269-af39-a739fed1541a/shard2_replica1,shard2_replica2/broken/1.bin > 1.txt
Code: 432, e.displayText() = DB::Exception: Unknown codec family code: 84, Stack trace (when copying this message, always include the lines below):
0. DB::CompressionCodecFactory::get(unsigned char) const @ 0xd7dbe38 in /usr/bin/clickhouse
1. DB::CompressedReadBufferBase::readCompressedData(unsigned long&, unsigned long&) @ 0xd7d514f in /usr/bin/clickhouse
2. DB::CompressedReadBuffer::nextImpl() @ 0xd7d4b20 in /usr/bin/clickhouse
3. mainEntryClickHouseCompressor(int, char**) @ 0x7ee721e in /usr/bin/clickhouse
4. main @ 0x7d8acbd in /usr/bin/clickhouse
5. __libc_start_main @ 0x2409b in /usr/lib/x86_64-linux-gnu/libc-2.28.so
6. _start @ 0x7d3b02e in /usr/bin/clickhouse
```
**Additional context**
For what it's worth, we use `zstd` compression everywhere; aside from that, I don't think there is anything exotic in the config.
```
<yandex>
<compression>
<case>
<method>zstd</method>
</case>
</compression>
</yandex>
```
Last but not least, I've attached a Vagrant setup (with an Ansible provisioner) which can be used to reproduce the bug.
[clickhouse_bug.tar.gz](https://github.com/ClickHouse/ClickHouse/files/5766355/clickhouse_bug.tar.gz)
| https://github.com/ClickHouse/ClickHouse/issues/18741 | https://github.com/ClickHouse/ClickHouse/pull/18776 | a20c4cce763b4ef6740ed169d6fb83c0af8a440c | 76a32dfdd35b103579f7b728a3ac39b28ae9ac0f | "2021-01-04T19:06:39Z" | c++ | "2021-01-06T16:59:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,734 | ["debian/control"] | ClickHouse testing version is asking some questions about Kerberos. | It's clearly a bug. I don't need Kerberos. | https://github.com/ClickHouse/ClickHouse/issues/18734 | https://github.com/ClickHouse/ClickHouse/pull/18748 | 2a37f5f6878267444adef030a8d7402ff819354d | 56f3a3b6c71b3d9af95652d9125cb5ffeaea9d6e | "2021-01-04T16:58:49Z" | c++ | "2021-01-05T04:09:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,733 | ["docs/en/getting-started/example-datasets/brown-benchmark.md", "docs/en/getting-started/example-datasets/index.md"] | Reproduce "mgbench" from Brown University | https://github.com/crottyan/mgbench | https://github.com/ClickHouse/ClickHouse/issues/18733 | https://github.com/ClickHouse/ClickHouse/pull/18739 | e476dcdba78a1f24dd4522b6ef042bbe4756b1b8 | 683b16a5259686231c6055ebee75c1e848307d1c | "2021-01-04T16:36:32Z" | c++ | "2021-01-04T18:08:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,713 | ["src/AggregateFunctions/AggregateFunctionGroupBitmapData.h", "src/Functions/FunctionsBitmap.h", "tests/queries/0_stateless/00829_bitmap_function.sql", "tests/queries/0_stateless/00974_bitmapContains_with_primary_key.reference", "tests/queries/0_stateless/00974_bitmapContains_with_primary_key.sql"] | bitmapContains does not work for UInt64 while the bitmap is an AggregateFunction(groupBitmap, UInt64) |
**Describe the bug**
bitmapContains does not work for UInt64 while the bitmap is of type AggregateFunction(groupBitmap, UInt64).
**How to reproduce**
```sql
SELECT
    bitmapBuild([toUInt64(1), toUInt64(10000)]) AS res,
    bitmapContains(res, toUInt64(200000000)) AS aa
```
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Second argument for function bitmapContains must be UInt32 but it has type UInt64..
**Expected behavior**
For a 64-bit bitmap, bitmapContains should be able to take a UInt64 argument.
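A possible workaround sketch in the meantime: `bitmapHasAny` accepts two bitmaps of the same element type, so membership can be tested against a single-element bitmap:
```sql
SELECT
    bitmapBuild([toUInt64(1), toUInt64(10000)]) AS res,
    bitmapHasAny(res, bitmapBuild([toUInt64(10000)])) AS has_10000
```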
**Error message and/or stacktrace**
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Second argument for function bitmapContains must be UInt32 but it has type UInt64..
**Additional context**
Tried many versions, such as 19.13 and 20.3, including the latest 20.12.3; still not working. | https://github.com/ClickHouse/ClickHouse/issues/18713 | https://github.com/ClickHouse/ClickHouse/pull/18791 | 4dd9165934ce7a8da98f69bef9c311230d366d3c | fedfcb78e174558ac469ca8061f73f8d45d4b953 | "2021-01-04T07:23:36Z" | c++ | "2021-01-13T23:18:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,707 | ["src/Parsers/ExpressionListParsers.cpp", "tests/queries/0_stateless/00977_int_div.reference", "tests/queries/0_stateless/00977_int_div.sql", "tests/queries/0_stateless/01412_mod_float.reference", "tests/queries/0_stateless/01412_mod_float.sql"] | MySQL compatibility: DIV and MOD operators | https://dev.mysql.com/doc/refman/8.0/en/arithmetic-functions.html
**Use case**
`a DIV b` - the same as `intDiv(a, b)`
`a MOD b` - the same as `modulo(a, b)`
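A sketch of the requested spellings next to the existing functions (the MySQL-style forms are the proposal, not yet valid ClickHouse syntax):
```sql
-- Works today:
SELECT intDiv(7, 2) AS quotient, modulo(7, 2) AS remainder;
-- Proposed MySQL-compatible equivalents:
SELECT 7 DIV 2 AS quotient, 7 MOD 2 AS remainder;
```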
| https://github.com/ClickHouse/ClickHouse/issues/18707 | https://github.com/ClickHouse/ClickHouse/pull/18760 | 06143d73ca9c66cb9b5f078193d24c8233bfec6b | 202d1f2211278e7cb191f7f1f2e27541d06aa4fc | "2021-01-04T00:48:17Z" | c++ | "2021-01-05T17:42:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,706 | ["docs/en/sql-reference/statements/select/all.md", "src/Parsers/ExpressionElementParsers.cpp", "src/Parsers/ParserSelectQuery.cpp", "tests/queries/0_stateless/01632_select_all_syntax.reference", "tests/queries/0_stateless/01632_select_all_syntax.sql"] | SQL compatibility: support SELECT ALL syntax | **Use case**
For #15112
**Describe the solution you'd like**
SELECT ALL is identical to SELECT without DISTINCT.
If ALL is specified - ignore it.
If both ALL and DISTINCT are specified - syntax error or exception.
ALL can also be specified inside aggregate function:
SELECT sum(ALL x)
with the same effect (noop). | https://github.com/ClickHouse/ClickHouse/issues/18706 | https://github.com/ClickHouse/ClickHouse/pull/18723 | 49ad73b9bc3a3edb86df9e0649ee2433995fec46 | bf8d58d2e8696f9a2aa0c0107b70274fe65b867c | "2021-01-04T00:42:40Z" | c++ | "2021-01-14T08:53:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,701 | ["src/Parsers/ExpressionElementParsers.cpp", "tests/queries/0_stateless/00233_position_function_sql_comparibilty.reference", "tests/queries/0_stateless/00233_position_function_sql_comparibilty.sql"] | SQL compatibility: provide POSITION(needle IN haystack) syntax. | **Use case**
`SELECT POSITION('/' IN s) FROM (SELECT 'Hello/World' AS s);`
The same as `position(s, '/')`
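For reference, the existing spelling that the new syntax should be equivalent to (this works today):
```sql
SELECT position(s, '/') FROM (SELECT 'Hello/World' AS s); -- returns 6
```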
| https://github.com/ClickHouse/ClickHouse/issues/18701 | https://github.com/ClickHouse/ClickHouse/pull/18779 | d76c05e1345741a1be8ba158370b5c1ee092c14f | 63761f72e74cc24a47c353d8e849d709f98783ea | "2021-01-03T21:51:38Z" | c++ | "2021-01-06T18:07:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,693 | ["programs/server/config.xml", "src/Interpreters/ClientInfo.h", "src/Server/HTTPHandler.cpp", "src/Server/HTTPHandlerFactory.cpp", "src/Server/HTTPHandlerFactory.h", "tests/config/config.d/CORS.xml", "tests/config/install.sh", "tests/queries/0_stateless/00372_cors_header.reference", "tests/queries/0_stateless/02029_test_options_requests.reference", "tests/queries/0_stateless/02029_test_options_requests.sh"] | Support CORS fully with pre-flight requests using HTTP OPTIONS and server-side config for CORS | **Use case**
ClickHouse cannot accept connections from web browser-based query tools that implement [CORS protocol preflight requests](https://developer.mozilla.org/en-US/docs/Glossary/Preflight_request). The reason is preflight checks use an OPTIONS request, which ClickHouse does not implement in the HTTP interface.
The problem occurs in tools like Grafana and [ObservableHQ](https://observablehq.com), whenever (a) ClickHouse is on a different server from the web page source and (b) the call to ClickHouse is processed directly in the browser. This use case occurs commonly when integrating ClickHouse cloud implementations with SaaS-based BI tools. In each case the browser will do the following:
1. Send an OPTIONS request to check if ClickHouse allows cross-site requests. The browser expects to see an Access-Control-Allow-Origin header that whitelists the original site and a 200 status code.
2. Once the OPTIONS request returns successfully, send the actual request.
When Grafana tries to do this it fails with the following message in the browser console. ObservableHQ fails similarly.
```
Access to fetch at 'https://github.demo.trial.altinity.cloud:8443/?add_http_cors_header=1' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
```
In the case of Grafana you can work around CORS problems by running the ClickHouse queries on the Grafana server (set Access to 'Server' in the Datasource definition). For ObservableHQ this is a blocking bug.
There is another problem in that the current add_http_cors_header=1 option, used to allow ClickHouse to return an Access-Control-Allow-Origin header, makes client URLs more cumbersome. There's also a security issue in that CORS should be enabled by the server, not triggered by the client. Malicious sites can undo CORS protection by changing the URL.
**Describe the solution you'd like**
ClickHouse should implement [server-side CORS support](https://enable-cors.org/server.html) fully, which includes the following:
1. Implement OPTIONS requests for CORS preflight checks. The expected behavior is shown in the referenced page above and illustrated in the attached Python3 script showing a typical OPTIONS request for pre-flight checks. Note that you can run a preflight request without authenticating.
2. Make CORS support a server-side setting. It would make sense to enable CORS using a regex, with '*' designating accept-all. CORS should be off by default.
3. Deprecate the add_http_cors_header=1 URL option as soon as possible.
**Describe alternatives you've considered**
An alternative option is to enable CORS using a proxy, but this seems cumbersome and hard for developers. If ClickHouse supports HTTP it should do it completely, including CORS.
**Additional context**
As noted above this is a much more prominent problem with ClickHouse cloud platforms. | https://github.com/ClickHouse/ClickHouse/issues/18693 | https://github.com/ClickHouse/ClickHouse/pull/29155 | 889034f6c2129ce5bb12e1442bd0590f03ec4a7d | ee577e1ab443ff365f06f215aa139a1d6875bf01 | "2021-01-03T08:44:56Z" | c++ | "2021-10-09T17:18:17Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,690 | ["src/IO/PeekableReadBuffer.cpp", "src/IO/PeekableReadBuffer.h", "src/IO/tests/gtest_peekable_read_buffer.cpp", "tests/queries/0_stateless/01184_insert_values_huge_strings.reference", "tests/queries/0_stateless/01184_insert_values_huge_strings.sh"] | PeekableReadBuffer: Memory limit exceed when inserting data by HTTP (20.8) | The error started to appear after migrating from 19.9 to 20.8.
The settings `<input_format_parallel_parsing>0</input_format_parallel_parsing>` and `<input_format_values_interpret_expressions>0</input_format_values_interpret_expressions>` do not help.
```
2021.01.03 08:01:15.601198 [ 1968 ] {b4472a5e-e463-4852-a0ed-efd90fae6393} <Error> DynamicQueryHandler: Code: 241, e.displayText() = DB::Exception: PeekableReadBuffer: Memory limit exceed, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x18cd2050 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xe63232d in /usr/bin/clickhouse
2. ? @ 0x164be2e2 in /usr/bin/clickhouse
3. DB::PeekableReadBuffer::peekNext() @ 0x164bd633 in /usr/bin/clickhouse
4. DB::PeekableReadBuffer::nextImpl() @ 0x164bd978 in /usr/bin/clickhouse
5. void DB::readQuotedStringInto<true, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 15ul, 16ul> >(DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 15ul, 16ul>&, DB::ReadBuffer&) @ 0xe67e4f5 in /usr/bin/clickhouse
6. DB::DataTypeString::deserializeTextQuoted(DB::IColumn&, DB::ReadBuffer&, DB::FormatSettings const&) const @ 0x156445a4 in /usr/bin/clickhouse
7. DB::ValuesBlockInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, unsigned long) @ 0x16597b4b in /usr/bin/clickhouse
8. DB::ValuesBlockInputFormat::generate() @ 0x165982cd in /usr/bin/clickhouse
9. DB::ISource::work() @ 0x163e821b in /usr/bin/clickhouse
10. DB::InputStreamFromInputFormat::readImpl() @ 0x163ab46d in /usr/bin/clickhouse
11. DB::IBlockInputStream::read() @ 0x155611cd in /usr/bin/clickhouse
12. DB::InputStreamFromASTInsertQuery::readImpl() @ 0x15973f79 in /usr/bin/clickhouse
13. DB::IBlockInputStream::read() @ 0x155611cd in /usr/bin/clickhouse
14. DB::copyData(DB::IBlockInputStream&, DB::IBlockOutputStream&, std::__1::atomic<bool>*) @ 0x1558415e in /usr/bin/clickhouse
15. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, DB::Context&, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>) @ 0x15d06285 in /usr/bin/clickhouse
16. DB::HTTPHandler::processQuery(DB::Context&, Poco::Net::HTTPServerRequest&, HTMLForm&, Poco::Net::HTTPServerResponse&, DB::HTTPHandler::Output&) @ 0x16344c69 in /usr/bin/clickhouse
17. DB::HTTPHandler::handleRequest(Poco::Net::HTTPServerRequest&, Poco::Net::HTTPServerResponse&) @ 0x1634871b in /usr/bin/clickhouse
18. Poco::Net::HTTPServerConnection::run() @ 0x18bb1df3 in /usr/bin/clickhouse
19. Poco::Net::TCPServerConnection::start() @ 0x18bf000b in /usr/bin/clickhouse
20. Poco::Net::TCPServerDispatcher::run() @ 0x18bf0728 in /usr/bin/clickhouse
21. Poco::PooledThread::run() @ 0x18d6ee36 in /usr/bin/clickhouse
22. Poco::ThreadImpl::runnableEntry(void*) @ 0x18d6a230 in /usr/bin/clickhouse
23. start_thread @ 0x7494 in /lib/x86_64-linux-gnu/libpthread-2.24.so
24. clone @ 0xe8aff in /lib/x86_64-linux-gnu/libc-2.24.so
(version 20.8.11.17 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/18690 | https://github.com/ClickHouse/ClickHouse/pull/18979 | aa51463c933e4af08227df68192ddfe237be09b1 | fb6d1dc18e1e5641e4753c69f28ad73d6c1db3bb | "2021-01-03T08:11:27Z" | c++ | "2021-01-15T10:43:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,629 | ["src/Storages/StorageTinyLog.cpp", "tests/queries/0_stateless/01651_lc_insert_tiny_log.reference", "tests/queries/0_stateless/01651_lc_insert_tiny_log.sql"] | Logical error: Got empty stream for DataTypeLowCardinality keys | ```
milovidov-desktop :) CREATE TABLE perf_lc_num( num UInt8, arr Array(LowCardinality(Int64)) default [num] ) ENGINE = TinyLog
CREATE TABLE perf_lc_num
(
`num` UInt8,
`arr` Array(LowCardinality(Int64)) DEFAULT [num]
)
ENGINE = TinyLog
Query id: e95f9a71-5795-4dea-8d80-4729fc8016b9
Ok.
0 rows in set. Elapsed: 0.002 sec.
milovidov-desktop :) INSERT INTO perf_lc_num (num)
:-] SELECT toUInt8(number)
:-] FROM numbers(100000000)
INSERT INTO perf_lc_num (num) SELECT toUInt8(number)
FROM numbers(100000000)
Query id: 1101fd75-a545-4809-acb5-8afa2fae80c0
Received exception from server (version 20.13.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Got empty stream for DataTypeLowCardinality keys.: While executing SinkToOutputStream.
0 rows in set. Elapsed: 0.045 sec.
milovidov-desktop :) INSERT INTO perf_lc_num (num) SELECT toUInt8(number) FROM numbers(100000000)
INSERT INTO perf_lc_num (num) SELECT toUInt8(number)
FROM numbers(100000000)
Query id: 0ad8532a-5c6e-4590-84cd-7e00759d152d
Received exception from server (version 20.13.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Got empty stream for DataTypeLowCardinality keys.: While executing SinkToOutputStream.
0 rows in set. Elapsed: 0.033 sec.
```
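A possible workaround sketch (untested against this particular bug): supply the defaulted column explicitly instead of relying on the DEFAULT expression:
```sql
INSERT INTO perf_lc_num (num, arr)
SELECT toUInt8(number), [toInt64(toUInt8(number))]
FROM numbers(100000000)
```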
| https://github.com/ClickHouse/ClickHouse/issues/18629 | https://github.com/ClickHouse/ClickHouse/pull/19010 | bf49b669ca75dfb05472f977792a0ef3d8c8a8e5 | 7b33ad5e447865d1644ac9fc072faaed56fae2db | "2020-12-30T13:41:52Z" | c++ | "2021-01-14T08:25:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,582 | ["src/Storages/StorageReplicatedMergeTree.cpp", "tests/integration/test_merge_tree_empty_parts/__init__.py", "tests/integration/test_merge_tree_empty_parts/configs/cleanup_thread.xml", "tests/integration/test_merge_tree_empty_parts/configs/remote_servers.xml", "tests/integration/test_merge_tree_empty_parts/test.py"] | 20.12 CH crashes trying to delete empty parts from a table created with the old syntax | I have a few SummingMergeTree tables created many years ago.
20.12 crashes trying to delete empty parts with the message: `Unexpected part name: 201907_44_45_999999999`
```
2020.12.28 20:57:39.962301 [ 21808 ] {} <Trace> db.table (ReplicatedMergeTreeCleanupThread): Checking 100 blocks (100 are not cached) to clear old ones from ZooKeeper.
2020.12.28 20:57:39.972444 [ 21808 ] {} <Trace> db.table: Will try to insert a log entry to DROP_RANGE for part: 20190730_20190731_44_45_1
2020.12.28 20:57:39.984770 [ 21808 ] {} <Debug> db.table: Waiting for sde798 to pull log-0000514194 to queue
2020.12.28 20:57:39.991527 [ 21812 ] {} <Debug> db.table (ReplicatedMergeTreeQueue): Pulling 1 entries to queue: log-0000514194 - log-0000514194
2020.12.28 20:57:39.997144 [ 21812 ] {} <Error> db.table (ReplicatedMergeTreeQueue): Code: 233, e.displayText() = DB::Exception: Unexpected part name: 201907_44_45_999999999, Stack trace (when copying this message, always include the lines below):
0. DB::MergeTreePartInfo::fromPartName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, StrongTypedef<unsigned int, DB::MergeTreeDataFormatVersionTag>) @ 0xe5ae893 in /usr/bin/clickhouse
1. DB::ActiveDataPartSet::add(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basi
c_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >*) @ 0xe462e59 in /usr/bin/clickhouse
2. DB::ReplicatedMergeTreeQueue::insertUnlocked(std::__1::shared_ptr<DB::ReplicatedMergeTreeLogEntry> const&, std::__1::optional<long>&, std::__1::lock_guard<std::__1::mutex>&) @ 0xe640ccd in /usr/bin/clickhouse
3. DB::ReplicatedMergeTreeQueue::pullLogsToQueue(std::__1::shared_ptr<zkutil::ZooKeeper>, std::__1::function<void (Coordination::WatchResponse const&)>) @ 0xe648d45 in /usr/bin/clickhouse
4. DB::StorageReplicatedMergeTree::queueUpdatingTask() @ 0xe312111 in /usr/bin/clickhouse
5. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0xda48492 in /usr/bin/clickhouse
6. DB::BackgroundSchedulePool::threadFunction() @ 0xda4a8c2 in /usr/bin/clickhouse
7. ? @ 0xda4bb41 in /usr/bin/clickhouse
8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7dc5b3d in /usr/bin/clickhouse
9. ? @ 0x7dc96f3 in /usr/bin/clickhouse
10. start_thread @ 0x74a4 in /lib/x86_64-linux-gnu/libpthread-2.24.so
11. clone @ 0xe8d0f in /lib/x86_64-linux-gnu/libc-2.24.so
(version 20.12.5.14 (official build))
2020.12.28 20:57:39.997247 [ 21786 ] {} <Trace> BaseDaemon: Received signal -1
2020.12.28 20:57:39.997262 [ 21786 ] {} <Fatal> BaseDaemon: (version 20.12.5.14 (official build), build id: BEE6512D29AA78DB) (from thread 21812) Terminate called for uncaught exception:
Code: 233, e.displayText() = DB::Exception: Unexpected part name: 201907_44_45_999999999, Stack trace (when copying this message, always include the lines below):
0. DB::MergeTreePartInfo::fromPartName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, StrongTypedef<unsigned int, DB::MergeTreeDataFormatVersionTag>) @ 0xe5ae893 in /usr/bin/clickhouse
1. DB::ActiveDataPartSet::add(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basi
c_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >*) @ 0xe462e59 in /usr/bin/clickhouse
2. DB::ReplicatedMergeTreeQueue::insertUnlocked(std::__1::shared_ptr<DB::ReplicatedMergeTreeLogEntry> const&, std::__1::optional<long>&, std::__1::lock_guard<std::__1::mutex>&) @ 0xe640ccd in /usr
2020.12.28 20:57:39.997283 [ 21786 ] {} <Trace> BaseDaemon: Received signal 6
2020.12.28 20:57:39.997365 [ 22052 ] {} <Fatal> BaseDaemon: ########################################
2020.12.28 20:57:39.997383 [ 22052 ] {} <Fatal> BaseDaemon: (version 20.12.5.14 (official build), build id: BEE6512D29AA78DB) (from thread 21812) (no query) Received signal Aborted (6)
2020.12.28 20:57:39.997391 [ 22052 ] {} <Fatal> BaseDaemon:
2020.12.28 20:57:39.997403 [ 22052 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f77e1438fff 0x7f77e143a42a 0x7f18e56 0x124170d3 0x1241707c 0xe64a296 0xe312111 0xda48492 0xda4a8c2 0xda4bb41 0x7dc5b3d 0x7dc96f3 0x7f77e1bb84a4 0x7f77e14eed0f
2020.12.28 20:57:39.997425 [ 22052 ] {} <Fatal> BaseDaemon: 2. raise @ 0x32fff in /lib/x86_64-linux-gnu/libc-2.24.so
2020.12.28 20:57:39.997433 [ 22052 ] {} <Fatal> BaseDaemon: 3. abort @ 0x3442a in /lib/x86_64-linux-gnu/libc-2.24.so
2020.12.28 20:57:39.997443 [ 22052 ] {} <Fatal> BaseDaemon: 4. ? @ 0x7f18e56 in /usr/bin/clickhouse
2020.12.28 20:57:39.997449 [ 22052 ] {} <Fatal> BaseDaemon: 5. ? @ 0x124170d3 in ?
2020.12.28 20:57:39.997463 [ 22052 ] {} <Fatal> BaseDaemon: 6. std::terminate() @ 0x1241707c in ?
2020.12.28 20:57:39.997477 [ 22052 ] {} <Fatal> BaseDaemon: 7. DB::ReplicatedMergeTreeQueue::pullLogsToQueue(std::__1::shared_ptr<zkutil::ZooKeeper>, std::__1::function<void (Coordination::WatchResponse const&)>) @ 0xe64a296 in /usr/bin/clickhouse
2020.12.28 20:57:39.997485 [ 22052 ] {} <Fatal> BaseDaemon: 8. DB::StorageReplicatedMergeTree::queueUpdatingTask() @ 0xe312111 in /usr/bin/clickhouse
2020.12.28 20:57:39.997493 [ 22052 ] {} <Fatal> BaseDaemon: 9. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0xda48492 in /usr/bin/clickhouse
2020.12.28 20:57:39.997500 [ 22052 ] {} <Fatal> BaseDaemon: 10. DB::BackgroundSchedulePool::threadFunction() @ 0xda4a8c2 in /usr/bin/clickhouse
2020.12.28 20:57:39.997506 [ 22052 ] {} <Fatal> BaseDaemon: 11. ? @ 0xda4bb41 in /usr/bin/clickhouse
2020.12.28 20:57:39.997515 [ 22052 ] {} <Fatal> BaseDaemon: 12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7dc5b3d in /usr/bin/clickhouse
2020.12.28 20:57:39.997521 [ 22052 ] {} <Fatal> BaseDaemon: 13. ? @ 0x7dc96f3 in /usr/bin/clickhouse
2020.12.28 20:57:39.997531 [ 22052 ] {} <Fatal> BaseDaemon: 14. start_thread @ 0x74a4 in /lib/x86_64-linux-gnu/libpthread-2.24.so
``` | https://github.com/ClickHouse/ClickHouse/issues/18582 | https://github.com/ClickHouse/ClickHouse/pull/18614 | ea39f59a5f587214334f61156b5b2ed1b680fafe | 1bcdf37c366965e2b7978e89091e32261e19001d | "2020-12-28T22:06:19Z" | c++ | "2020-12-30T14:20:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,535 | ["src/Functions/array/arrayCompact.cpp", "tests/queries/0_stateless/01020_function_array_compact.sql", "tests/queries/0_stateless/01025_array_compact_generic.reference", "tests/queries/0_stateless/01025_array_compact_generic.sql"] | arrayCompact as Higher-order function | **Use case**
To be able to run arrayCompact with deduplication based on some function of the input value.
**Describe the solution you'd like**
```
SELECT
[(1, 'a'), (2, 'b'), (3, 'b'), (4, 'c')] AS arr,
arrayCompact((x)-> x.2, arr)
```
**Describe alternatives you've considered**
```
SELECT
[(1, 'a'), (2, 'b'), (3, 'b'), (4, 'c')] AS arr,
arrayFilter((x, y) -> ((x.2) != y), arr, arrayPushFront(arrayPopBack(arr.2), '')) AS x
Query id: ae74a2c7-5bc5-4b77-8fc4-85fd7e6bca25
┌─arr───────────────────────────────┬─x─────────────────────────┐
│ [(1,'a'),(2,'b'),(3,'b'),(4,'c')] │ [(1,'a'),(2,'b'),(4,'c')] │
└───────────────────────────────────┴───────────────────────────┘
```
| https://github.com/ClickHouse/ClickHouse/issues/18535 | https://github.com/ClickHouse/ClickHouse/pull/34795 | aea7bfb59aa23432b7eb6f69c4ce158c40f65c11 | 7d01516202152c8d60d4fed6b72dad67357d337f | "2020-12-26T02:12:35Z" | c++ | "2022-03-03T18:25:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,525 | ["src/AggregateFunctions/AggregateFunctionGroupArray.h", "src/Common/PODArray.h"] | Fuzzer, UBSan, statistical functions, PODArray, memcpy called with zero size and nullptr. | ```
SELECT roundBankers(mannWhitneyUTest('greater')(left, right).1, 16) as t_stat, roundBankers(mannWhitneyUTest('greater')(left, right).2, 16) as p_value FROM mann_whitney FORMAT TabSeparatedWithNames;
```
https://clickhouse-test-reports.s3.yandex.net/18488/1ddea6d7eed5cadce087cf19e51413440571a472/stress_test_(undefined)/stderr.log | https://github.com/ClickHouse/ClickHouse/issues/18525 | https://github.com/ClickHouse/ClickHouse/pull/18526 | 19e0e1a40397f3fdae5b233929098d4780e3efd0 | aff724ea7d371d322589fba96a1214aacab78e59 | "2020-12-25T19:05:04Z" | c++ | "2021-01-02T14:07:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,492 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/01550_create_map_type.sql"] | Debug assertion in Map datatype | ```
SELECT
CAST(([2, 1, 1023], ['', '']), 'Map(UInt8, String)') AS map,
map[10]
```
Triggers `Assertion 'n < size()' failed` in `DB::ColumnString::operator[](unsigned long)`
https://clickhouse-test-reports.s3.yandex.net/18481/64c4ade5b0e0c9a13d6c2475431daf431fc97554/fuzzer/fuzzer.log
Introduced here: #17829
First found here: #18481 | https://github.com/ClickHouse/ClickHouse/issues/18492 | https://github.com/ClickHouse/ClickHouse/pull/18523 | 00bc167872598744135f1caf7e6d4cb9684cd8e0 | 8eb9788955dc14903c5e55cdda39d232f3a3e26e | "2020-12-25T01:08:15Z" | c++ | "2020-12-26T07:00:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,407 | ["docs/en/sql-reference/statements/create/view.md", "src/Core/Settings.h", "src/Interpreters/SystemLog.cpp", "src/Processors/Transforms/buildPushingToViewsChain.cpp", "tests/queries/0_stateless/02572_materialized_views_ignore_errors.reference", "tests/queries/0_stateless/02572_materialized_views_ignore_errors.sql", "tests/queries/0_stateless/02572_system_logs_materialized_views_ignore_errors.reference", "tests/queries/0_stateless/02572_system_logs_materialized_views_ignore_errors.sql"] | Fire and forget mode or limited number of retries for distributed sends. | **Use case**
1. Mirror data from production to testing environment with MATERIALIZED VIEW with Distributed engine. The testing cluster should not affect production in any way: better to skip data when testing is unavailable than to accumulate it.
2. Send query_log, metric_log, text_log, etc. to third-party unreliable service. If the third-party service is unavailable just skip sending the data batch.
**Describe the solution you'd like**
A table-level setting on the Distributed table.
Either simple fire-and-forget mode (no retries) or specify the maximum number of retries before dropping the data (better).
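A sketch of what this could look like at the DDL level; `max_send_retries` is a hypothetical setting name, not an existing ClickHouse setting, and the table names are placeholders:
```sql
CREATE TABLE dist_fire_and_forget AS local_table
ENGINE = Distributed(test_cluster, db, local_table, rand())
SETTINGS max_send_retries = 3 -- hypothetical: drop a batch after 3 failed send attempts
```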
**Alternatives**
A table with the URL engine. But we should also introduce a mode to ignore sending errors or to do retries.
It can be even more flexible than Distributed table.
| https://github.com/ClickHouse/ClickHouse/issues/18407 | https://github.com/ClickHouse/ClickHouse/pull/46658 | 65d671b7c72c7b1da23f831faa877565cf34f92c | 575ffbc4653b117e918356c8e60f7748df956643 | "2020-12-23T12:16:44Z" | c++ | "2023-03-09T11:19:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,402 | ["src/IO/parseDateTimeBestEffort.cpp", "src/IO/parseDateTimeBestEffort.h", "tests/queries/0_stateless/01351_parse_date_time_best_effort_us.reference", "tests/queries/0_stateless/01351_parse_date_time_best_effort_us.sql", "tests/queries/0_stateless/01442_date_time_with_params.reference", "tests/queries/0_stateless/01442_date_time_with_params.sql", "tests/queries/0_stateless/01543_parse_datetime_besteffort_or_null_empty_string.reference", "tests/queries/0_stateless/01543_parse_datetime_besteffort_or_null_empty_string.sql"] | parseDateTimeBestEffort should not ignore AM abbreviation for 12th hour | Ignoring the AM abbreviation leads to wrong processing of the 12th hour.
**Bug reproducing**
```sql
SELECT
parseDateTimeBestEffort('2020-02-01 12:10:00 AM') AS am,
parseDateTimeBestEffort('2020-02-01 12:10:00 PM') AS pm
┌──────────────────am─┬──────────────────pm─┐
│ 2020-02-01 12:10:00 │ 2020-02-01 12:10:00 │
└─────────────────────┴─────────────────────┘
```
**Expected behavior**
```sql
┌──────────────────am─┬──────────────────pm─┐
│ 2020-02-01 00:10:00 │ 2020-02-01 12:10:00 │
└─────────────────────┴─────────────────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/18402 | https://github.com/ClickHouse/ClickHouse/pull/18449 | fcacb053db26f08f35270a7313e8437dcecc86f8 | 230f9b6ad40308c1c4a5b088a8042e80ca317e99 | "2020-12-23T09:30:41Z" | c++ | "2020-12-25T19:32:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,391 | ["contrib/libunwind"] | clickhouse from 20.4 exeception process get core dump on aarch64 | I build clickhouse 20.8.6.4-lts on aarch64, it runs well.
but when got exception ,it core dump.
for example. connect an unexist clickhouse, it got error : connect refused , after that ,it core dump
the core like this:
I test several versionsοΌ the clickhosue20.3.XX run okοΌ from20.4οΌ all coreed when exception happenedγ
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x0000ffff9a821df4 in __GI_abort () at abort.c:89
#2 0x0000000014757fe4 in abort_message (format=<optimized out>)
at ../contrib/libcxxabi/src/abort_message.cpp:76
#3 0x0000000014756e9c in demangling_terminate_handler ()
at ../contrib/libcxxabi/src/cxa_default_handlers.cpp:62
#4 0x000000001476e58c in std::__terminate (func=0x0) at ../contrib/libcxxabi/src/cxa_handlers.cpp:59
#5 0x000000001476daa8 in __cxxabiv1::failed_throw (exception_header=0x2bfcc400)
at ../contrib/libcxxabi/src/cxa_exception.cpp:152
#6 0x000000001476da24 in __cxa_throw (thrown_object=<optimized out>, tinfo=
0xc198f78 <typeinfo for Poco::Net::ConnectionRefusedException>, dest=<optimized out>)
at ../contrib/libcxxabi/src/cxa_exception.cpp:283
#7 0x0000000013a3f65c in Poco::Net::SocketImpl::error (code=<optimized out>, arg=...)
at ../contrib/poco/Net/src/SocketImpl.cpp:1113
#8 0x0000000013a3fc74 in Poco::Net::SocketImpl::error (code=0)
---Type <return> to continue, or q <return> to quit---
at ../contrib/poco/Net/src/SocketImpl.cpp:1039
#9 Poco::Net::SocketImpl::connect (this=0x2bfcbbd0, address=..., timeout=...)
at ../contrib/poco/Net/src/SocketImpl.cpp:172
#10 0x000000001353987c in DB::Connection::connect (this=<optimized out>, timeouts=...)
at ../src/Client/Connection.cpp:85
#11 0x000000001353ad80 in DB::Connection::getServerVersion (this=0x2bfcb880, timeouts=..., name=...,
version_major=@0xfffff451d718: 0, version_minor=@0xfffff451d710: 0, version_patch=@0xfffff451d708: 0,
```
| https://github.com/ClickHouse/ClickHouse/issues/18391 | https://github.com/ClickHouse/ClickHouse/pull/25854 | 956b1f588dbf5e406bf61e8a9b4e329d35af8b70 | 011ed015fa49c8e1a37f6f103c28def5e637a23f | "2020-12-23T02:44:36Z" | c++ | "2021-06-30T13:18:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,383 | ["tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.reference", "tests/queries/0_stateless/01576_alias_column_rewrite.reference"] | Check whether optimization was applied | I'm experimenting with query settings to get the best possible performance from a query. Is there a way to check whether a setting related to optimization has been really applied? For example, I'm trying with `SETTINGS optimize_aggregation_in_order = 1`, but in some cases the optimization is disabled and there is no easy way of checking it without running the query. I tried `EXPLAIN SELECT ...` but it seems neither form of EXPLAIN outputs this information. | https://github.com/ClickHouse/ClickHouse/issues/18383 | https://github.com/ClickHouse/ClickHouse/pull/20050 | c715f0e2ede82d3644eab45c79ac024f5e0ac75e | 9d8033d8d67c42e098f70e41c8b2fd425c052f0e | "2020-12-23T01:07:40Z" | c++ | "2021-02-03T19:49:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,356 | ["src/Functions/if.cpp", "tests/queries/0_stateless/01701_if_tuple_segfault.reference", "tests/queries/0_stateless/01701_if_tuple_segfault.sql"] | Crash when updating tuples? | **Describe the bug**
I have a table like
```
CREATE TABLE IF NOT EXISTS xxx
(
time DateTime CODEC(DoubleDelta, LZ4),
xxx String,
total SimpleAggregateFunction(sum, UInt64) CODEC(T64, LZ4),
agg1 SimpleAggregateFunction(sumMap, Tuple(Array(Int16), Array(UInt64))),
agg2 SimpleAggregateFunction(sumMap, Tuple(Array(Int16), Array(UInt64))),
...
) ENGINE = AggregatingMergeTree()
order by (xxx, time)
```
I realized I double-inserted data into it, so I want to halve the counts. I already did this for the total column; now I'm doing it for the `aggX` columns:
```
alter table xxx update agg1 = (agg1.1, arrayMap(x -> toUInt64(x / 2), agg1.2)) -- same for agg2 etc
where time BETWEEN xxx AND yyy;
```
Crashes the server straight away (I am using your docker image):
```
[clickhouse] 2020.12.22 10:00:03.914392 [ 1352057 ] <Fatal> BaseDaemon: ########################################
[clickhouse] 2020.12.22 10:00:03.940787 [ 1352057 ] <Fatal> BaseDaemon: (version 20.11.3.3 (official build), build id: C88CD350740ED614) (from thread 1349936) (query_id: 70ca4e69-7cb9-453e-b26d-95bc3d59dc54) Received signal Segmentation fault (11)
[clickhouse] 2020.12.22 10:00:03.940881 [ 1352057 ] <Fatal> BaseDaemon: Address: 0x52080d8 Access: write. Attempted access has violated the permissions assigned to the memory area.
[clickhouse] 2020.12.22 10:00:03.940927 [ 1352057 ] <Fatal> BaseDaemon: Stack trace: 0x7c8e078 0xb4ec8d5 0xb4089dd 0xb40fd73 0xb40b46a 0x91e211a 0x91e1aee 0x920d946 0x920e001 0xd93f141 0xd94331d 0xe4f9ac1 0xdb7f94f 0xdb805be 0xd903000 0xdc8c7da 0xdc8b3ad 0xe305ad6 0xe311fa7 0x10a96cdf 0x10a986ee 0x10bc58d9 0x10bc186a 0x7f2029af2609 0x7f2029a08293
[clickhouse] 2020.12.22 10:00:03.941022 [ 1352057 ] <Fatal> BaseDaemon: 2. void DB::PODArrayBase<8ul, 4096ul, Allocator<false, false>, 15ul, 16ul>::resize<>(unsigned long) @ 0x7c8e078 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.941061 [ 1352057 ] <Fatal> BaseDaemon: 3. ? @ 0xb4ec8d5 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.941080 [ 1352057 ] <Fatal> BaseDaemon: 4. ? @ 0xb4089dd in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.941111 [ 1352057 ] <Fatal> BaseDaemon: 5. ? @ 0xb40fd73 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.941141 [ 1352057 ] <Fatal> BaseDaemon: 6. ? @ 0xb40b46a in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.941193 [ 1352057 ] <Fatal> BaseDaemon: 7. DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x91e211a in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.941246 [ 1352057 ] <Fatal> BaseDaemon: 8. DB::DefaultExecutable::executeDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) @ 0x91e1aee in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.941273 [ 1352057 ] <Fatal> BaseDaemon: 9. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) @ 0x920d946 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.941306 [ 1352057 ] <Fatal> BaseDaemon: 10. DB::ExecutableFunctionAdaptor::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) @ 0x920e001 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.941342 [ 1352057 ] <Fatal> BaseDaemon: 11. DB::ExpressionAction::execute(DB::Block&, bool) const @ 0xd93f141 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.941383 [ 1352057 ] <Fatal> BaseDaemon: 12. DB::ExpressionActions::execute(DB::Block&, bool) const @ 0xd94331d in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.942002 [ 1352057 ] <Fatal> BaseDaemon: 13. DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::__1::shared_ptr<DB::ExpressionActions>) @ 0xe4f9ac1 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.942052 [ 1352057 ] <Fatal> BaseDaemon: 14. DB::MutationsInterpreter::addStreamsForLaterStages(std::__1::vector<DB::MutationsInterpreter::Stage, std::__1::allocator<DB::MutationsInterpreter::Stage> > const&, DB::QueryPlan&) const @ 0xdb7f94f in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.942088 [ 1352057 ] <Fatal> BaseDaemon: 15. DB::MutationsInterpreter::validate() @ 0xdb805be in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.942804 [ 1352057 ] <Fatal> BaseDaemon: 16. DB::InterpreterAlterQuery::execute() @ 0xd903000 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.942855 [ 1352057 ] <Fatal> BaseDaemon: 17. ? @ 0xdc8c7da in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.942894 [ 1352057 ] <Fatal> BaseDaemon: 18. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0xdc8b3ad in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.942916 [ 1352057 ] <Fatal> BaseDaemon: 19. DB::TCPHandler::runImpl() @ 0xe305ad6 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.942948 [ 1352057 ] <Fatal> BaseDaemon: 20. DB::TCPHandler::run() @ 0xe311fa7 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.942982 [ 1352057 ] <Fatal> BaseDaemon: 21. Poco::Net::TCPServerConnection::start() @ 0x10a96cdf in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.943013 [ 1352057 ] <Fatal> BaseDaemon: 22. Poco::Net::TCPServerDispatcher::run() @ 0x10a986ee in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.943044 [ 1352057 ] <Fatal> BaseDaemon: 23. Poco::PooledThread::run() @ 0x10bc58d9 in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.943072 [ 1352057 ] <Fatal> BaseDaemon: 24. Poco::ThreadImpl::runnableEntry(void*) @ 0x10bc186a in /usr/bin/clickhouse
[clickhouse] 2020.12.22 10:00:03.943142 [ 1352057 ] <Fatal> BaseDaemon: 25. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
[clickhouse] 2020.12.22 10:00:03.943181 [ 1352057 ] <Fatal> BaseDaemon: 26. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
| https://github.com/ClickHouse/ClickHouse/issues/18356 | https://github.com/ClickHouse/ClickHouse/pull/20133 | dee8f1fbf238dedc4fed32c05927b986877c3d7c | 5c281bd2f10ded51cc12a03502f0d15a1a70ed1b | "2020-12-22T10:09:39Z" | c++ | "2021-02-06T06:52:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,340 | ["src/Storages/MergeTree/IMergeTreeDataPart.cpp", "tests/integration/test_compression_codec_read/__init__.py", "tests/integration/test_compression_codec_read/test.py"] | EOF from detectDefaultCompressionCodec started happening between v20.8.2.3 and v20.13.1.1? | I tried to upgrade ClickHouse from v20.8.2.3 to v20.13.1.1, but there are some errors while reading a MergeTree table.
**Error Messages:**
```
2020.12.22 11:19:36.084760 [ 20114 ] {} <Error> DB::MergeTreeData::loadDataParts(bool)::<lambda()>: Code: 32, e.displayText() = DB::Exception: Attempt to read after eof, Stack trace (when copying this message
, always include the lines below):
0. /home/deploy/sources/ClickHouse.achimbab/build/../contrib/poco/Foundation/src/Exception.cpp:27: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char
> > const&, int) @ 0xd00a9ac in /home/deploy/bin/clickhouse-limit-pushdown-test
1. /home/deploy/sources/ClickHouse.achimbab/build/../src/Common/Exception.cpp:54: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @
0x4322081 in /home/deploy/bin/clickhouse-limit-pushdown-test
2. /home/deploy/sources/ClickHouse.achimbab/build/../contrib/libcxx/include/string:2134: DB::ReadBuffer::throwReadAfterEOF() @ 0x3963061 in /home/deploy/bin/clickhouse-limit-pushdown-test
3. /home/deploy/sources/ClickHouse.achimbab/build/../src/IO/ReadBuffer.h:108: DB::getCompressionCodecForFile(std::__1::shared_ptr<DB::IDisk> const&, std::__1::basic_string<char, std::__1::char_traits<char>, s
td::__1::allocator<char> > const&) @ 0xa52d62e in /home/deploy/bin/clickhouse-limit-pushdown-test
4. /home/deploy/sources/ClickHouse.achimbab/build/../contrib/libcxx/include/type_traits:3696: DB::IMergeTreeDataPart::detectDefaultCompressionCodec() const @ 0xa0231f5 in /home/deploy/bin/clickhouse-limit-pus
hdown-test
5. /home/deploy/sources/ClickHouse.achimbab/build/../contrib/libcxx/include/type_traits:3696: DB::IMergeTreeDataPart::loadDefaultCompressionCodec() @ 0xa023a72 in /home/deploy/bin/clickhouse-limit-pushdown-te
st
6. /home/deploy/sources/ClickHouse.achimbab/build/../src/Common/MemoryTracker.h:142: DB::IMergeTreeDataPart::loadColumnsChecksumsIndexes(bool, bool) @ 0xa02a360 in /home/deploy/bin/clickhouse-limit-pushdown-t
est
7. /home/deploy/sources/ClickHouse.achimbab/build/../src/Storages/MergeTree/MergeTreeData.cpp:868: DB::MergeTreeData::loadDataParts(bool)::'lambda'()::operator()() const @ 0xa0743aa in /home/deploy/bin/clickh
ouse-limit-pushdown-test
8. /home/deploy/sources/ClickHouse.achimbab/build/../contrib/libcxx/include/functional:1853: ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x432b487 in
/home/deploy/bin/clickhouse-limit-pushdown-test
9. /home/deploy/sources/ClickHouse.achimbab/build/../src/Common/ThreadPool.h:177: ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<vo
id ()>, int, std::__1::optional<unsigned long>)::'lambda1'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambd
a1'()&&...)::'lambda'()::operator()() @ 0x432bc5a in /home/deploy/bin/clickhouse-limit-pushdown-test
10. /home/deploy/sources/ClickHouse.achimbab/build/../contrib/libcxx/include/functional:1853: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x432a967 in /home/
deploy/bin/clickhouse-limit-pushdown-test
11. /home/deploy/sources/ClickHouse.achimbab/build/../contrib/libcxx/include/memory:2615: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delet
e<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda1'()> >(void*) @ 0x432907f in /home/deplo
y/bin/clickhouse-limit-pushdown-test
12. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
13. /build/glibc-2ORdQG/glibc-2.27/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:97: __clone @ 0x121a3f in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
(version 20.13.1.1)
```
Was the data layout changed between v20.8.2.3 and v20.13.1.1?
Thank you. | https://github.com/ClickHouse/ClickHouse/issues/18340 | https://github.com/ClickHouse/ClickHouse/pull/19101 | fa8a3237fa469e0ad7d9feb97d573e3bc8c2e955 | b97beea22a5a81787da4422e58862176e081bc01 | "2020-12-22T02:44:18Z" | c++ | "2021-01-15T17:55:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,265 | ["docs/en/getting-started/example-datasets/recipes.md"] | RecipeNLG dataset | **Describe the issue**
Describe the dataset from Poznań University of Technology in the docs.
https://recipenlg.cs.put.poznan.pl/dataset
**Additional context**
We cannot redistribute it directly as the user must agree with the Terms and Conditions.
But we can provide description and detailed instructions.
| https://github.com/ClickHouse/ClickHouse/issues/18265 | https://github.com/ClickHouse/ClickHouse/pull/18272 | 37fb7e707cf0320a300bfc2b93b2d01c2b4d9281 | 7a078337ff7bc96d108a38418e90e749c60f14f6 | "2020-12-20T08:14:29Z" | c++ | "2020-12-20T12:52:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,244 | ["src/Storages/MergeTree/IMergeTreeDataPart.cpp", "src/Storages/MergeTree/IMergeTreeDataPart.h", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/System/StorageSystemParts.cpp", "src/Storages/System/StorageSystemPartsColumns.cpp", "tests/queries/0_stateless/01773_min_max_time_system_parts_datetime64.reference", "tests/queries/0_stateless/01773_min_max_time_system_parts_datetime64.sql"] | Tables with DateTime64 don't have min/max_time set? | **Describe the bug**
I have a table like:
```
CREATE TABLE test (
time DateTime64(3)
...
) ENGINE = MergeTree()
PARTITION BY toStartOfInterval(time, INTERVAL 1 HOUR)
...
```
But when I search in `system.parts` it does not appear to have the `min/max` columns appropriately set?
```
partition: 2020-06-01 01:00:00
min_date: 1970-01-01
max_date: 1970-01-01
min_time: 1970-01-01 00:00:00
max_time: 1970-01-01 00:00:00
```
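For reference, output in this shape comes from a query along these lines (sketch; the table name is the one from the DDL above):
```sql
SELECT partition, min_date, max_date, min_time, max_time
FROM system.parts
WHERE table = 'test' AND active
FORMAT Vertical;
```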
I assume that if these were correctly set, doing a `select toStartOfInterval(time, interval 1 day), count()` type query would be pretty instant rather than trying to scan everything? I am guessing this is due to the `DateTime64` type in use?
Clickhouse 20.11.3.3
| https://github.com/ClickHouse/ClickHouse/issues/18244 | https://github.com/ClickHouse/ClickHouse/pull/22011 | 52396acba12360d093e55f1d2651e07025025463 | f895bc895cf63791185c07349979db328bf0bac7 | "2020-12-19T18:10:05Z" | c++ | "2021-03-25T13:02:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,241 | ["docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md", "docs/fr/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md", "docs/ja/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md", "docs/ru/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md", "docs/zh/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md", "src/Dictionaries/IPAddressDictionary.cpp", "src/Dictionaries/IPAddressDictionary.h", "tests/queries/0_stateless/01018_ip_dictionary.reference", "tests/queries/0_stateless/01018_ip_dictionary.sql"] | Fetch key from dictionary | **Use case**
I'm using `ip_trie` on maxmind geoip databases. Given an IP address I want to find out what key was used (i.e. what network the IP comes under). But it seems it is not possible to look up the key field when using `dictGet`?
```
<layout>
<ip_trie/>
</layout>
<structure>
<key>
<attribute>
<name>network</name>
<type>String</type>
</attribute>
</key>
<attribute>
<name>geoname_id</name>
<type>UInt32</type>
<null_value>0</null_value>
</attribute>
...
```
```
clickhouse :) select dictGet('geoip_city_blocks_ipv4', 'geoname_id', tuple(toUInt32(toIPv4('1.1.1.1')))) ;
ββdictGet('geoip_city_blocks_ipv4', 'geoname_id', tuple(toUInt32(toIPv4('1.1.1.1'))))ββ
β 2077456 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
clickhouse :) select dictGet('geoip_city_blocks_ipv4', 'network', tuple(toUInt32(toIPv4('1.1.1.1')))) ;
Received exception from server (version 20.11.3):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: No such attribute 'network': While processing dictGet('geoip_city_blocks_ipv4', 'network', tuple(toUInt32(toIPv4('1.1.1.1')))).
```
| https://github.com/ClickHouse/ClickHouse/issues/18241 | https://github.com/ClickHouse/ClickHouse/pull/18480 | fd08bf31327dc7b9063b5b3ad3ab01f0f3a89ad4 | 6863dbd6a872a8f92a30a48e2537a4cb463f2ffa | "2020-12-19T10:54:21Z" | c++ | "2020-12-28T13:15:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,210 | ["src/AggregateFunctions/AggregateFunctionIf.cpp", "tests/queries/0_stateless/01642_if_nullable_regression.reference", "tests/queries/0_stateless/01642_if_nullable_regression.sql"] | Remote Query Execution with sumIf() function failing | I'm using ClickHouse version 20.12.3.3.
The execution of sumIf() function is failing on the distributed table
Query:
`SELECT sumIf(pos_sales, isNotNull(pos_sales)), fin_seg_desc AS SBU from stores_cost_position_dist where ((week between 201945 AND 202043)) group by fin_seg_desc;
`
Fails with:
`Code: 42. DB::Exception: Received from localhost:9000. DB::Exception: Aggregate function sum requires single argument: while receiving packet from node-2:9000: While executing Remote.
`
Data types:
`pos_sales: Nullable(Float64),
week: Int32,
fin_seg_desc: String
`
The same query is working on the local table.
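A workaround sketch for the meantime, using the table and column names from the report: `sum` already ignores NULLs for a Nullable column, so the `isNotNull` condition is redundant and the plain aggregate avoids the broken combinator path:
```sql
SELECT sum(pos_sales), fin_seg_desc AS SBU
FROM stores_cost_position_dist
WHERE week BETWEEN 201945 AND 202043
GROUP BY fin_seg_desc;
```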
| https://github.com/ClickHouse/ClickHouse/issues/18210 | https://github.com/ClickHouse/ClickHouse/pull/18806 | 68ccdc8ca2e13cdab1a5b8a66b28979fe64d3b44 | b73722e587bfaa1097b3ff93835b2785f7ceb4bb | "2020-12-18T00:12:31Z" | c++ | "2021-01-07T12:25:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,197 | ["src/Interpreters/HashJoin.cpp", "src/Interpreters/MergeJoin.cpp", "src/Interpreters/join_common.cpp", "src/Interpreters/join_common.h", "tests/queries/0_stateless/01656_join_defaul_enum.reference", "tests/queries/0_stateless/01656_join_defaul_enum.sql"] | Enum Column in LEFT JOIN leads to Exception | When doing a left join on a table, which contains an Enum column, selecting this column will lead to an error when setting 'join_use_nulls' is disabled. Clickhouse tries to use default value for enum which does not exist.
**How to reproduce**
Tested with clh: 20.12.3.3
```
CREATE TABLE join_test_main (
keycol UInt16,
value_main String
) engine=MergeTree() order by (keycol) partition by tuple();
CREATE TABLE join_test_join (
keycol UInt16,
value_join_enum Enum8('First' = 1,'Second' = 2),
value_join_string String
) engine=MergeTree() order by (keycol) partition by tuple();
INSERT INTO join_test_main
VALUES
(1, 'First'),(2,'Second'), (3, 'Third');
INSERT INTO join_test_join
VALUES
(2,'Second', 'Second');
```
Test query and exception:
```
SELECT join_test_main.keycol, join_test_join.value_join_enum
FROM join_test_main
LEFT JOIN join_test_join USING(keycol);
Error on processing query: SELECT join_test_main.keycol, join_test_join.value_join_enum
FROM join_test_main
LEFT JOIN join_test_join USING(keycol);
Code: 36, e.displayText() = DB::Exception: Unexpected value 0 for type Enum8('First' = 1, 'Second' = 2), Stack trace (when copying this message, always include the lines below):
0. DB::DataTypeEnum<signed char>::findByValue(signed char const&) const @ 0x94abcd5 in /usr/bin/clickhouse
1. DB::DataTypeEnum<signed char>::serializeText(DB::IColumn const&, unsigned long, DB::WriteBuffer&, DB::FormatSettings const&) const @ 0xd75d470 in /usr/bin/clickhouse
2. DB::PrettyBlockOutputFormat::calculateWidths(DB::Block const&, DB::Chunk const&, std::__1::vector<DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 0ul, 0ul>, std::__1::allocator<DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 0ul, 0ul> > >&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 0ul, 0ul>&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 0ul, 0ul>&) @ 0xe6de3f6 in /usr/bin/clickhouse
3. DB::PrettyCompactBlockOutputFormat::writeChunk(DB::Chunk const&, DB::IOutputFormat::PortKind) @ 0xe6e2b19 in /usr/bin/clickhouse
4. DB::IOutputFormat::write(DB::Block const&) @ 0xe64de63 in /usr/bin/clickhouse
5. DB::MaterializingBlockOutputStream::write(DB::Block const&) @ 0xe5d4de2 in /usr/bin/clickhouse
6. DB::Client::onData(DB::Block&) @ 0x7da4cb2 in /usr/bin/clickhouse
7. DB::Client::receiveAndProcessPacket(bool) @ 0x7da4896 in /usr/bin/clickhouse
8. DB::Client::receiveResult() @ 0x7da6d4c in /usr/bin/clickhouse
9. DB::Client::processOrdinaryQuery() @ 0x7d9aef1 in /usr/bin/clickhouse
10. DB::Client::processParsedSingleQuery() @ 0x7d99822 in /usr/bin/clickhouse
11. DB::Client::processMultiQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x7d97e2c in /usr/bin/clickhouse
12. DB::Client::processQueryText(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x7d8d2df in /usr/bin/clickhouse
13. DB::Client::mainImpl() @ 0x7d8907d in /usr/bin/clickhouse
14. DB::Client::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x7d84c62 in /usr/bin/clickhouse
15. Poco::Util::Application::run() @ 0x10d8ad33 in /usr/bin/clickhouse
16. mainEntryClickHouseClient(int, char**) @ 0x7d7adbd in /usr/bin/clickhouse
17. main @ 0x7ce0cbd in /usr/bin/clickhouse
18. __libc_start_main @ 0x270b3 in /lib/x86_64-linux-gnu/libc-2.31.so
19. _start @ 0x7c9102e in /usr/bin/clickhouse
```
**Expected behavior**
To be discussed:
- Always null even without explicit setting
- Always an Enum which has empty String as element 0
- Syntax Error before execution, to not allow Enum fields without 0
- Always the first element of an enum (would not recommend)
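In the meantime, a sketch of a workaround implied by the report itself: enabling `join_use_nulls` makes non-matched rows produce NULL instead of a (nonexistent) Enum default value:
```sql
SET join_use_nulls = 1;

SELECT join_test_main.keycol, join_test_join.value_join_enum
FROM join_test_main
LEFT JOIN join_test_join USING (keycol);
-- non-matched rows now yield NULL for value_join_enum instead of throwing
```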
| https://github.com/ClickHouse/ClickHouse/issues/18197 | https://github.com/ClickHouse/ClickHouse/pull/19360 | 4afcb94a8ad05c0bf38b41dbb921e9c4f5e6bc89 | 25ea281297324b92c134942d12539d6bcdc17ad4 | "2020-12-17T12:43:27Z" | c++ | "2021-01-22T20:51:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,190 | ["src/Interpreters/ActionsDAG.cpp", "src/Interpreters/ExpressionActions.h", "src/Processors/QueryPlan/ExpressionStep.cpp", "tests/queries/0_stateless/01650_expressions_merge_bug.reference", "tests/queries/0_stateless/01650_expressions_merge_bug.sql"] | Block structure mismatch in QueryPipeline stream: different number of columns | ```
SELECT
NULL IN
(
SELECT
9223372036854775807,
9223372036854775807
),
NULL
FROM
(
SELECT DISTINCT
NULL,
NULL,
NULL IN
(
SELECT (NULL, '-1')
),
NULL
FROM numbers(1024)
)
Query id: 90734124-e0bb-4e6e-8c3c-7eb0faeb180d
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.848802 [ 43485 ] {90734124-e0bb-4e6e-8c3c-7eb0faeb180d} <Fatal> : Logical error: 'Block structure mismatch in QueryPipeline stream: different number of columns:
in(NULL, _subquery3) Nullable(Nothing) Const(size = 0, Nullable(size = 1, Nothing(size = 1), UInt8(size = 1))), NULL Nullable(Nothing) Const(size = 0, Nullable(size = 1, Nothing(size = 1), UInt8(size = 1)))
NULL Nullable(Nothing) Const(size = 0, Nullable(size = 1, Nothing(size = 1), UInt8(size = 1))), NULL Nullable(Nothing) Const(size = 0, Nullable(size = 1, Nothing(size = 1), UInt8(size = 1))), in(NULL, _subquery3) Nullable(Nothing) Const(size = 0, Nullable(size = 1, Nothing(size = 1), UInt8(size = 1))), NULL Nullable(Nothing) Const(size = 0, Nullable(size = 1, Nothing(size = 1), UInt8(size = 1)))'.
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.850460 [ 1089 ] <Fatal> BaseDaemon: ########################################
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.851027 [ 1089 ] <Fatal> BaseDaemon: (version 20.13.1.1, build id: CAFCE24B8B3F01D0E242774F447E758106BCDFE3) (from thread 43485) (query_id: 90734124-e0bb-4e6e-8c3c-7eb0faeb180d) Received signal Aborted (6)
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.851539 [ 1089 ] <Fatal> BaseDaemon:
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.852007 [ 1089 ] <Fatal> BaseDaemon: Stack trace: 0x7fa9e938e18b 0x7fa9e936d859 0x86e219c 0x86e2241 0x10dfc533 0x10df93c6 0x10df92b5 0x11fef631 0x122740c1 0x122b4a73 0x115af1ef 0x1172ce53 0x1172b89a 0x11f3a4fb 0x11f45a08 0x1679938c 0x16799b90 0x168cdd13 0x168cac3d 0x168c9ac8 0x7fa9e9543609 0x7fa9e946a103
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.857165 [ 1089 ] <Fatal> BaseDaemon: 4. /build/glibc-YYA7BZ/glibc-2.31/signal/../sysdeps/unix/sysv/linux/raise.c:51: raise @ 0x4618b in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.858544 [ 1089 ] <Fatal> BaseDaemon: 5. /build/glibc-YYA7BZ/glibc-2.31/stdlib/abort.c:81: __GI_abort @ 0x25859 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.859379 [ 1089 ] <Fatal> BaseDaemon: 6. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Common/Exception.cpp:50: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x86e219c in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.859954 [ 1089 ] <Fatal> BaseDaemon: 7. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Common/Exception.cpp:56: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x86e2241 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.897298 [ 1089 ] <Fatal> BaseDaemon: 8. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Core/Block.cpp:483: void DB::checkBlockStructure<void>(DB::Block const&, DB::Block const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)::'lambda'(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int)::operator()(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) const @ 0x10dfc533 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.935117 [ 1089 ] <Fatal> BaseDaemon: 9. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Core/Block.cpp:490: void DB::checkBlockStructure<void>(DB::Block const&, DB::Block const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x10df93c6 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:07.972457 [ 1089 ] <Fatal> BaseDaemon: 10. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Core/Block.cpp:538: DB::assertBlocksHaveEqualStructure(DB::Block const&, DB::Block const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x10df92b5 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.020356 [ 1089 ] <Fatal> BaseDaemon: 11. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Processors/QueryPipeline.cpp:288: DB::QueryPipeline::addPipelineBefore(DB::QueryPipeline) @ 0x11fef631 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.069952 [ 1089 ] <Fatal> BaseDaemon: 12. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Processors/QueryPlan/CreatingSetsStep.cpp:98: DB::CreatingSetsStep::updatePipeline(std::__1::vector<std::__1::unique_ptr<DB::QueryPipeline, std::__1::default_delete<DB::QueryPipeline> >, std::__1::allocator<std::__1::unique_ptr<DB::QueryPipeline, std::__1::default_delete<DB::QueryPipeline> > > >) @ 0x122740c1 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.119295 [ 1089 ] <Fatal> BaseDaemon: 13. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Processors/QueryPlan/QueryPlan.cpp:171: DB::QueryPlan::buildQueryPipeline() @ 0x122b4a73 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.159126 [ 1089 ] <Fatal> BaseDaemon: 14. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:340: DB::InterpreterSelectWithUnionQuery::execute() @ 0x115af1ef in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.200966 [ 1089 ] <Fatal> BaseDaemon: 15. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Interpreters/executeQuery.cpp:507: DB::executeQueryImpl(char const*, char const*, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool, DB::ReadBuffer*) @ 0x1172ce53 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.242855 [ 1089 ] <Fatal> BaseDaemon: 16. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Interpreters/executeQuery.cpp:839: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0x1172b89a in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.289545 [ 1089 ] <Fatal> BaseDaemon: 17. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Server/TCPHandler.cpp:260: DB::TCPHandler::runImpl() @ 0x11f3a4fb in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.335832 [ 1089 ] <Fatal> BaseDaemon: 18. /home/avtokmakov/ch/ClickHouse/build_debug/../src/Server/TCPHandler.cpp:1414: DB::TCPHandler::run() @ 0x11f45a08 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.397075 [ 1089 ] <Fatal> BaseDaemon: 19. /home/avtokmakov/ch/ClickHouse/build_debug/../contrib/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x1679938c in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.457744 [ 1089 ] <Fatal> BaseDaemon: 20. /home/avtokmakov/ch/ClickHouse/build_debug/../contrib/poco/Net/src/TCPServerDispatcher.cpp:112: Poco::Net::TCPServerDispatcher::run() @ 0x16799b90 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.519713 [ 1089 ] <Fatal> BaseDaemon: 21. /home/avtokmakov/ch/ClickHouse/build_debug/../contrib/poco/Foundation/src/ThreadPool.cpp:199: Poco::PooledThread::run() @ 0x168cdd13 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.581243 [ 1089 ] <Fatal> BaseDaemon: 22. /home/avtokmakov/ch/ClickHouse/build_debug/../contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x168cac3d in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.643135 [ 1089 ] <Fatal> BaseDaemon: 23. /home/avtokmakov/ch/ClickHouse/build_debug/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345: Poco::ThreadImpl::runnableEntry(void*) @ 0x168c9ac8 in /home/avtokmakov/ch/ClickHouse/build_debug/programs/clickhouse
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.643897 [ 1089 ] <Fatal> BaseDaemon: 24. start_thread @ 0x9609 in /lib/x86_64-linux-gnu/libpthread-2.31.so
[avtokmakov-dev.sas.yp-c.yandex.net] 2020.12.17 14:20:08.644319 [ 1089 ] <Fatal> BaseDaemon: 25. /build/glibc-YYA7BZ/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:97: clone @ 0x122103 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
```
https://clickhouse-test-reports.s3.yandex.net/17642/294e8bbb4e74532af994d0118217ee5c4e4e18f0/fuzzer/fuzzer.log
| https://github.com/ClickHouse/ClickHouse/issues/18190 | https://github.com/ClickHouse/ClickHouse/pull/18980 | 73e536a0748310fb033ddc1520db8ec4b1d179b2 | 07431a64940d88735ebeaeed8102f6160c0c99ff | "2020-12-17T11:25:47Z" | c++ | "2021-01-13T08:16:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,170 | ["tests/queries/0_stateless/01818_move_partition_simple.reference", "tests/queries/0_stateless/01818_move_partition_simple.sql"] | `Move Partition To Table` will lose data | **Describe the bug**
Using `Move Partition To Table` to move data from a tmp table to a production table, some data is lost after the move.
**How to reproduce**
* Which ClickHouse server version to use
20.12.3.3
* Which interface to use, if matters
`Move Partition To Table`
* Queries to run that lead to unexpected result
Steps as below:
1. insert data to tmp table and query count
```SQL
select count(*)
from tmp_table
where partition_key=partition_val;
-- output: 82
```
2. drop old data and move partition to production table
```SQL
ALTER TABLE prod_table DROP PARTITION partition_val;
ALTER TABLE tmp_table MOVE PARTITION partition_val TO TABLE prod_table;
```
3. query count
```SQL
select count(*)
from prod_table
where partition_key=partition_val;
-- output: 69
```
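A sketch of a check that can narrow down where the rows go, comparing the active parts of both tables directly in `system.parts`:
```sql
SELECT table, partition, sum(rows) AS rows, count() AS parts
FROM system.parts
WHERE active AND database = currentDatabase()
  AND table IN ('tmp_table', 'prod_table')
GROUP BY table, partition
ORDER BY table, partition;
```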
**Expected behavior**
After moving the data to the production table, the record count should be the same as before.
**Error message and/or stacktrace**
There are no error messages at all...
**Additional context**
1. Table create statement sample as below:
```SQL
CREATE TABLE xxxx_campaign_info
(
`id` UInt32,
`advertiser_id` String,
`campaign_id` String,
`name` String,
`budget` Float64,
`budget_mode` String,
`landing_type` String,
`status` String,
`modify_time` String,
`campaign_type` String,
`campaign_create_time` DateTime,
`campaign_modify_time` DateTime,
`create_time` DateTime,
`update_time` DateTime
)
ENGINE = MergeTree
PARTITION BY advertiser_id
ORDER BY campaign_id
SETTINGS index_granularity = 8192
```
> tmp_table has exactly the same schema as prod_table, just a different table name.
2. Tried with `Replace Partition`, which works as expected | https://github.com/ClickHouse/ClickHouse/issues/18170 | https://github.com/ClickHouse/ClickHouse/pull/23813 | 711cc5f62bda02f9f3035118d07ac1ad524aaff8 | 72d9e4d34047b17c8780e56613b7373718cf2c6c | "2020-12-17T02:02:31Z" | c++ | "2021-05-01T08:31:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,137 | ["src/Storages/MergeTree/MergeTreeReadPool.cpp", "tests/queries/0_stateless/01603_read_with_backoff_bug.reference", "tests/queries/0_stateless/01603_read_with_backoff_bug.sql"] | 20.12.3.3 Less amount of data is returned if "read backoff" is in effect. | Consider following:
1 master server with a Distributed table (```tracking_distributed```) and 2 shards; previously there were 4 shards, but during a chat discussion I reduced them to 2 for easier debugging (each shard, including the master server, has 1 replica with 1 MergeTree table, ```tracking_shard```). There is also a ```tracking``` table, an old table that I want to re-distribute and aggregate by inserting from it into ```tracking_distributed```, which is connected to an MV on the same server, using the following insert:
```
insert into tracking_distributed (date, datetime, col1, col2, col3, col4, col5,
col6, col7, col8, col9,
col10, col11, col12, col13, col14, col15, col16,
col17, col18, col19,
col20, col21, col22, col23,
col24, col25,
col26, col27, col28, col29, col30, col31)
select date,
datetime,
1 as raw1,
0 as raw2,
col3,
col4,
col5,
col6,
col7,
col8,
col9,
col10,
col11,
col12,
col13,
col14,
col15,
col16,
col17,
col18,
col19,
col20,
col21,
col22,
col23,
col24,
col25,
[[]] as raw3,
0 as raw4,
'RUB' as raw5,
[] as raw6,
0 as raw7,
[] as raw8
from tracking
where date > '2010-01-01'
and date <= '2019-07-01'
```
Data inside ```tracking``` starts from 2019-01-08. After this insert I check that all rows were inserted correctly with the following two queries:
```
select count(), concat(toString(toMonth(date)), '.', toString(toYear(date))) as dt
from tracking_distributed
where (date >= '2000-02-01')
AND (date < '2019-07-01')
group by dt
order by dt;
select count(), concat(toString(toMonth(date)), '.', toString(toYear(date))) as dt
from tracking
where (date >= '2000-02-01')
AND (date < '2019-07-01')
group by dt
order by dt;
```
And I get very strange results:
tracking_distributed:
```
78238,1.2019
8406510,2.2019
7700480,3.2019
47273866,4.2019
86705743,5.2019
69612803,6.2019
```
tracking:
```
78238,1.2019
8406510,2.2019
21402619,3.2019
47759435,4.2019
89318991,5.2019
76633611,6.2019
```
```tracking``` (csv, no column names) - [schema](https://pastebin.com/G5m6taU7)
```tracking_distributed``` (csv, no column names) - [schema](https://pastebin.com/RPXXn4w2)
0) Before inserting, ```truncate table tracking_shard``` is executed on every shard, and additionally ```truncate table tracking_distributed``` is executed
1) Logs do not have any errors, neither on the shards nor on the master
2) If I run 6 separate queries for the 6 months, I get CORRECT data inside ```tracking_distributed```, e.g. ```where date >= 2019-01-01 and date < 2019-02-01```, ```where date >= 2019-02-01 and date < 2019-03-01```
3) I've tried stopping distributed flushes, doing the insert, then flushing in one operation - same result
4) Tried using insert_distributed_sync=1 - same result, but slower
5) Tried wrapping the initial select in a subselect, e.g. select * from (select date, datetime ...) - same results
6) Servers and clickhouse-server on every shard use UTC time
7) The ```/var/lib/clickhouse/data/<db>/<distributed_table>``` location is empty after the insert is finished
8) Trace log - https://pastebin.com/UThwL0XV
9) Also tried removing the default toDate(datetime) in the distributed table - but no luck (one more check is sketched below)
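One more check worth running (sketch; the cluster name and database are assumptions, substitute the real ones): per-shard local counts versus the distributed total show whether rows went missing on write to the shards or on read through ```tracking_distributed```:
```sql
SELECT hostName() AS host, count() AS rows
FROM cluster('tracking_cluster', default.tracking_shard)
WHERE date >= '2019-01-01' AND date < '2019-07-01'
GROUP BY host;
```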
| https://github.com/ClickHouse/ClickHouse/issues/18137 | https://github.com/ClickHouse/ClickHouse/pull/18216 | 3261392f168e7fb47bf984f4c17cf6634b9a18cc | d7fc426458fa75ae7556478057d893d79ed5f20b | "2020-12-16T10:34:59Z" | c++ | "2020-12-18T15:49:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,121 | ["contrib/libunwind"] | start clickhouse on aarch64 get coredump | I build clickhouse 20.8.6 code on aarch64,
gcc: 9.3.0
clang+llvm: 10.0
The build is OK:
```
cd ClickHouse
export CC=clang; export CXX=clang++; rm -rf build; mkdir -p build && cd build; cmake .. -DENABLE_TCMALLOC=OFF -DENABLE_JEMALLOC=OFF -DCMAKE_INSTALL_PREFIX=/opt/clickhouse
ninja clickhouse
```
When I run clickhouse with `./clickhouse server --config etc/config.xml`, it dumps core. The core dump is as below:
#0 0x0000ffff9f3d1568 in raise () from /lib/aarch64-linux-gnu/libc.so.6
#1 0x0000ffff9f3d2a20 in abort () from /lib/aarch64-linux-gnu/libc.so.6
#2 0x00000000129ea890 in terminate_handler () at ../base/daemon/BaseDaemon.cpp:404
#3 0x000000001476e58c in std::__terminate (func=0x0) at ../contrib/libcxxabi/src/cxa_handlers.cpp:59
#4 0x000000001476ddc0 in __cxa_rethrow () at ../contrib/libcxxabi/src/cxa_exception.cpp:616
#5 0x000000000d7622c4 in DB::getCurrentExceptionMessage (with_stacktrace=true, check_embedded_stacktrace=false,
with_extra_info=true) at ../src/Common/Exception.cpp:249
#6 0x00000000129ea774 in terminate_handler () at ../base/daemon/BaseDaemon.cpp:389
#7 0x000000001476e58c in std::__terminate (func=0x0) at ../contrib/libcxxabi/src/cxa_handlers.cpp:59
#8 0x000000001476daa8 in __cxxabiv1::failed_throw (exception_header=0x2ca95560)
at ../contrib/libcxxabi/src/cxa_exception.cpp:152
#9 0x000000001476da24 in __cxa_throw (thrown_object=<optimized out>,
tinfo=0xc195130 <typeinfo for Poco::Net::SSLContextException>, dest=<optimized out>)
at ../contrib/libcxxabi/src/cxa_exception.cpp:283
#10 0x00000000139fb864 in Poco::Net::Context::init (this=0x2ca917d0, params=...)
at ../contrib/poco/NetSSL_OpenSSL/src/Context.cpp:153
#11 0x00000000139fb38c in Poco::Net::Context::Context (this=0x2ca917d0, usage=Poco::Net::Context::SERVER_USE,
params=...) at ../contrib/poco/NetSSL_OpenSSL/src/Context.cpp:48
#12 0x0000000013a04efc in Poco::Net::SSLManager::initDefaultContext (this=0x2ca929f0, server=true)
at ../contrib/poco/NetSSL_OpenSSL/src/SSLManager.cpp:297
#13 0x0000000013a04830 in Poco::Net::SSLManager::defaultServerContext (this=0x2ca929f0)
at ../contrib/poco/NetSSL_OpenSSL/src/SSLManager.cpp:130
#14 0x0000000013569260 in DB::MySQLHandlerFactory::MySQLHandlerFactory (this=0x2ca919d0, server_=...)
at ../src/Server/MySQLHandlerFactory.cpp:30
#15 0x000000000d7a9e94 in DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&)::$_8::operator()(unsigned short) const (this=<optimized out>, port=9504)
at ../programs/server/Server.cpp:973
#16 DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&)::$_1::operator()<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&)::$_8> (this=<optimized out>, port_name=<optimized out>, func=<optimized out>)
at ../programs/server/Server.cpp:844
#17 DB::Server::main (this=0xfffffd978850) at ../programs/server/Server.cpp:966
#18 0x0000000013a54848 in Poco::Util::Application::run (this=0xfffffd978850)
at ../contrib/poco/Util/src/Application.cpp:334
#19 0x000000000d7a1610 in DB::Server::run (this=0xfffffd978850) at ../programs/server/Server.cpp:187
#20 0x000000000d7b4094 in mainEntryClickHouseServer (argc=22309, argv=0x6) at ../programs/server/Server.cpp:1143
#21 0x000000000d7545e8 in main (argc_=<optimized out>, argv_=<optimized out>) at ../programs/main.cpp:338
| https://github.com/ClickHouse/ClickHouse/issues/18121 | https://github.com/ClickHouse/ClickHouse/pull/25854 | 956b1f588dbf5e406bf61e8a9b4e329d35af8b70 | 011ed015fa49c8e1a37f6f103c28def5e637a23f | "2020-12-16T01:28:08Z" | c++ | "2021-06-30T13:18:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,120 | ["contrib/CMakeLists.txt", "contrib/consistent-hashing-sumbur/CMakeLists.txt", "contrib/consistent-hashing-sumbur/sumbur.cpp", "contrib/consistent-hashing-sumbur/sumbur.h", "src/Functions/CMakeLists.txt", "src/Functions/registerFunctionsConsistentHashing.cpp", "src/Functions/sumburConsistentHash.cpp", "src/Functions/ya.make.in", "utils/check-style/check-include"] | Announcement: removal of sumburConsistentHash. | We are going to remove `sumburConsistentHash` function due to low demand and limited possible usages.
If you are using this function please describe your scenarios here.
`jumpConsistentHash` and `yandexConsistentHash` will remain available. | https://github.com/ClickHouse/ClickHouse/issues/18120 | https://github.com/ClickHouse/ClickHouse/pull/18656 | 07411aafd21744e1bfacb90e7e6f576289946b8a | 80ae2b5138d98c01399c10543c11b69f7b2bad83 | "2020-12-16T00:07:09Z" | c++ | "2020-12-31T11:46:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,091 | ["docs/en/engines/database-engines/materialized-mysql.md", "src/Common/mysqlxx/mysqlxx/Types.h", "src/Core/MySQL/MySQLReplication.cpp", "src/DataTypes/DataTypeString.cpp", "src/DataTypes/DataTypesNumber.cpp", "src/Databases/MySQL/MaterializedMySQLSyncThread.cpp", "src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp", "src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp", "src/Processors/Sources/MySQLSource.cpp", "tests/integration/test_materialized_mysql_database/materialize_with_ddl.py"] | MaterializeMySQL: support time data type | <Error> MaterializeMySQLSyncThread: Code: 50, e.displayText() = DB::Exception: Unknown data type family: time, Stack trace (when copying this message, always include the lines below):
0. DB::DataTypeFactory::findCreatorByName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd7689b2 in /usr/bin/clickhouse
1. DB::DataTypeFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IAST> const&) const @ 0xd767d18 in /usr/bin/clickhouse
2. DB::DataTypeFactory::get(std::__1::shared_ptr<DB::IAST> const&) const @ 0xd767b00 in /usr/bin/clickhouse
3. ? @ 0xdf117d8 in /usr/bin/clickhouse
4. DB::MySQLInterpreter::InterpreterCreateImpl::getRewrittenQueries(DB::MySQLParser::ASTCreateQuery const&, DB::Context const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xdf0dd57 in /usr/bin/clickhouse
5. DB::MySQLInterpreter::InterpreterMySQLDDLQuery<DB::MySQLInterpreter::InterpreterCreateImpl>::execute() @ 0xdba14bc in /usr/bin/clickhouse
6. DB::InterpreterExternalDDLQuery::execute() @ 0xdba0507 in /usr/bin/clickhouse
7. ? @ 0xdeec307 in /usr/bin/clickhouse
8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0xdeeaedd in /usr/bin/clickhouse
9. ? @ 0xdb4d8c4 in /usr/bin/clickhouse
10. DB::MaterializeMySQLSyncThread::executeDDLAtomic(DB::MySQLReplication::QueryEvent const&) @ 0xdb4d3ab in /usr/bin/clickhouse
11. DB::commitMetadata(std::__1::function<void ()> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xdb6bf5e in /usr/bin/clickhouse
12. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0xdb6c451 in /usr/bin/clickhouse
13. DB::MaterializeMySQLSyncThread::onEvent(DB::MaterializeMySQLSyncThread::Buffers&, std::__1::shared_ptr<DB::MySQLReplication::EventBase> const&, DB::MaterializeMetadata&) @ 0xdb48be8 in /usr/bin/clickhouse
14. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xdb4624c in /usr/bin/clickhouse
15. ? @ 0xdb64b7a in /usr/bin/clickhouse
16. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7d1baad in /usr/bin/clickhouse
17. ? @ 0x7d1f5d3 in /usr/bin/clickhouse
18. start_thread @ 0x7e65 in /usr/lib64/libpthread-2.17.so
19. __clone @ 0xfe88d in /usr/lib64/libc-2.17.so
(version 20.12.3.3 (official build)) | https://github.com/ClickHouse/ClickHouse/issues/18091 | https://github.com/ClickHouse/ClickHouse/pull/33429 | 677a7f1133c7e176dc38b291d54f63f0207e8799 | 9e91a9dfd1dae8072d9d2132a4b3e4bbb70e1c1d | "2020-12-15T07:27:55Z" | c++ | "2022-01-26T08:29:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,063 | ["src/AggregateFunctions/AggregateFunctionSum.h", "src/Storages/StorageMaterializedView.cpp", "tests/queries/0_stateless/01182_materialized_view_different_structure.reference", "tests/queries/0_stateless/01182_materialized_view_different_structure.sql", "tests/queries/0_stateless/arcadia_skip_list.txt"] | SEGFAULT and wrong result while querying MV with mismatched Decimal Types. | **How to reproduce**
Clickhouse version 20.12.3.3, 20.13.1.5365
```
CREATE TABLE default.test_table
(
`key` UInt32,
`value` Decimal(16, 6)
)
ENGINE = SummingMergeTree()
PARTITION BY tuple()
ORDER BY key
SETTINGS index_granularity = 8192;
INSERT INTO test_table SELECT *, toDecimal64(number,6) as val FROM numbers(32000000);
CREATE MATERIALIZED VIEW default.test_mv TO default.test_table
(
`number` UInt64,
`value` Decimal(38, 6)
) AS
SELECT
number,
sum(number) AS value
FROM
(
SELECT
*,
toDecimal64(number, 6) AS val
FROM system.numbers
)
GROUP BY number;
SELECT sum(value) FROM test_mv;
βββββββββββββββββββββββββββββββsum(value)ββ
β 20169775271081582806606521273206.229723 β
βββββββββββββββββββββββββββββββββββββββββββ
SELECT sum(value) FROM test_mv;
ββββββββββββββββββββββββββββββββsum(value)ββ
β -99821308067231861049021501577149.641700 β
ββββββββββββββββββββββββββββββββββββββββββββ
echo "SELECT sum(value)γFROM test_mv;" | clickhouse-benchmark -c 10;
2020.12.14 15:09:00.632224 [ 167 ] {} <Fatal> BaseDaemon: ########################################
2020.12.14 15:09:00.666960 [ 167 ] {} <Fatal> BaseDaemon: (version 20.12.3.3 (official build), build id: 046AC38B14F316A5) (from thread 155) (query_id: d6cf2cb2-fa08-457d-8730-71c4fbc08060) Received signal Segmentation fault (11)
2020.12.14 15:09:00.667040 [ 167 ] {} <Fatal> BaseDaemon: Address: 0x7f2855817000 Access: read. Attempted access has violated the permissions assigned to the memory area.
2020.12.14 15:09:00.667092 [ 167 ] {} <Fatal> BaseDaemon: Stack trace: 0x8c30221 0xdc8edca 0xdc907f2 0xe799c56 0xe79710b 0xe64239c 0xe63f4c7 0xe644475 0x7d1baad 0x7d1f5d3 0x7f2859033609 0x7f2858f49293
2020.12.14 15:09:00.667200 [ 167 ] {} <Fatal> BaseDaemon: 2. void DB::AggregateFunctionSumData<DB::Decimal<__int128> >::addMany<DB::Decimal<__int128> >(DB::Decimal<__int128> const*, unsigned long) @ 0x8c30221 in /usr/bin/clickhouse
2020.12.14 15:09:00.667270 [ 167 ] {} <Fatal> BaseDaemon: 3. DB::Aggregator::executeWithoutKeyImpl(char*&, unsigned long, DB::Aggregator::AggregateFunctionInstruction*, DB::Arena*) @ 0xdc8edca in /usr/bin/clickhouse
2020.12.14 15:09:00.667329 [ 167 ] {} <Fatal> BaseDaemon: 4. DB::Aggregator::executeOnBlock(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >, unsigned long, DB::AggregatedDataVariants&, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> >&, std::__1::vector<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> >, std::__1::allocator<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*> > > >&, bool&) @ 0xdc907f2 in /usr/bin/clickhouse
2020.12.14 15:09:00.667396 [ 167 ] {} <Fatal> BaseDaemon: 5. DB::AggregatingTransform::consume(DB::Chunk) @ 0xe799c56 in /usr/bin/clickhouse
2020.12.14 15:09:00.667493 [ 167 ] {} <Fatal> BaseDaemon: 6. DB::AggregatingTransform::work() @ 0xe79710b in /usr/bin/clickhouse
2020.12.14 15:09:00.667549 [ 167 ] {} <Fatal> BaseDaemon: 7. ? @ 0xe64239c in /usr/bin/clickhouse
2020.12.14 15:09:00.667613 [ 167 ] {} <Fatal> BaseDaemon: 8. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0xe63f4c7 in /usr/bin/clickhouse
2020.12.14 15:09:00.667660 [ 167 ] {} <Fatal> BaseDaemon: 9. ? @ 0xe644475 in /usr/bin/clickhouse
2020.12.14 15:09:00.667727 [ 167 ] {} <Fatal> BaseDaemon: 10. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7d1baad in /usr/bin/clickhouse
2020.12.14 15:09:00.667784 [ 167 ] {} <Fatal> BaseDaemon: 11. ? @ 0x7d1f5d3 in /usr/bin/clickhouse
2020.12.14 15:09:00.667846 [ 167 ] {} <Fatal> BaseDaemon: 12. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2020.12.14 15:09:00.667909 [ 167 ] {} <Fatal> BaseDaemon: 13. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
| https://github.com/ClickHouse/ClickHouse/issues/18063 | https://github.com/ClickHouse/ClickHouse/pull/19322 | ad73dc6c721bb7e8a7d988a3aaa1d7c44391d9e5 | 1b832aa698f5f3d96a625728df989721d2fd52ba | "2020-12-14T15:16:18Z" | c++ | "2021-01-22T17:15:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 18,051 | ["src/Core/Settings.h", "src/Interpreters/RewriteAnyFunctionVisitor.cpp", "tests/queries/0_stateless/01398_any_with_alias.sql", "tests/queries/0_stateless/01470_columns_transformers.reference", "tests/queries/0_stateless/01591_window_functions.reference", "tests/queries/0_stateless/01591_window_functions.sql", "tests/queries/0_stateless/01650_any_null_if.reference", "tests/queries/0_stateless/01650_any_null_if.sql"] | inconsistent `nullIf` behavior inside function `any` | I am using `nullIf` in aggregations to skip unwanted empty strings. Strangely enough the `any` function picks the `NULL`s resulting from `nullIf`, i.e. it does not skip them as it should do with NULLs.
### Minimal reproducing code
```sql
select any(nullIf('', ''), 'some text');
```
produces:
```
ββnullIf('', '')ββ
β α΄Ία΅α΄Έα΄Έ β
ββββββββββββββββββ
```
**Expected behavior**
This should return 'Some text'
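A possible workaround sketch, assuming empty strings are the only values to be skipped: the `-If` combinator expresses the same intent without wrapping the argument in `nullIf` (self-contained example):
```sql
select anyIf(x, x != '')
from (select arrayJoin(['', 'Some text']) as x);
-- expected: 'Some text'
```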
### Other example with table
```sql
create temporary table vv (m Nullable(String));
insert into vv (*) values (''), ('Some text');
select any(nullIf(m, '')) from vv;
```
result:
```
ββnullIf(any(m), '')ββ
β α΄Ία΅α΄Έα΄Έ β
ββββββββββββββββββββββ
```
expecting 'Some text' as before | https://github.com/ClickHouse/ClickHouse/issues/18051 | https://github.com/ClickHouse/ClickHouse/pull/18981 | 7d7af00afb2a2dde105abc78b8fa1378176efa96 | 6d79068a0fc7459341e77a086ec1de9d9e59dfed | "2020-12-13T19:00:51Z" | c++ | "2021-01-14T07:12:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,994 | ["contrib/libunwind"] | compile clickhouse on arm (aarch64) ok, but execute clickhouse got bus error | I built clickhouse on arm OK, but executing clickhouse --help
got a bus error.
I don't know why.
Can you tell me how to compile on arm with ninja? Thank you.
gcc version -------------------------------------------------------------------
node53 /home/amos/gentoo # gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/home/amos/gentoo/usr/libexec/gcc/aarch64-unknown-linux-gnu/9.3.0/lto-wrapper
Target: aarch64-unknown-linux-gnu
Configured with: /home/amos/gentoo/var/tmp/portage/sys-devel/gcc-9.3.0-r1/work/gcc-9.3.0/configure --host=aarch64-unknown-linux-gnu --build=aarch64-unknown-linux-gnu --prefix=/home/amos/gentoo/usr --bindir=/home/amos/gentoo/usr/aarch64-unknown-linux-gnu/gcc-bin/9.3.0 --includedir=/home/amos/gentoo/usr/lib/gcc/aarch64-unknown-linux-gnu/9.3.0/include --datadir=/home/amos/gentoo/usr/share/gcc-data/aarch64-unknown-linux-gnu/9.3.0 --mandir=/home/amos/gentoo/usr/share/gcc-data/aarch64-unknown-linux-gnu/9.3.0/man --infodir=/home/amos/gentoo/usr/share/gcc-data/aarch64-unknown-linux-gnu/9.3.0/info --with-gxx-include-dir=/home/amos/gentoo/usr/lib/gcc/aarch64-unknown-linux-gnu/9.3.0/include/g++-v9 --with-python-dir=/share/gcc-data/aarch64-unknown-linux-gnu/9.3.0/python --enable-languages=c,c++,fortran --enable-obsolete --enable-secureplt --disable-werror --with-system-zlib --enable-nls --without-included-gettext --enable-checking=release --with-bugurl=https://bugs.gentoo.org/ --with-pkgversion='Gentoo 9.3.0-r1 p3' --disable-esp --enable-libstdcxx-time --enable-shared --enable-threads=posix --enable-__cxa_atexit --enable-clocale=gnu --disable-multilib --disable-altivec --disable-fixed-point --enable-libgomp --disable-libmudflap --disable-libssp --disable-libada --disable-systemtap --enable-vtable-verify --enable-lto --without-isl --enable-default-pie --enable-default-ssp --with-sysroot=/home/amos/gentoo
Thread model: posix
gcc version 9.3.0 (Gentoo 9.3.0-r1 p3)
---------------------------------------------------
ninja --version
1.9.0.git
-----------------------------------------
cmake -version
cmake version 3.17.3
compile command:
mkdir build
cd build
cmake ../
ninja clickhouse
| https://github.com/ClickHouse/ClickHouse/issues/17994 | https://github.com/ClickHouse/ClickHouse/pull/25854 | 956b1f588dbf5e406bf61e8a9b4e329d35af8b70 | 011ed015fa49c8e1a37f6f103c28def5e637a23f | "2020-12-11T06:00:05Z" | c++ | "2021-06-30T13:18:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,964 | ["tests/queries/0_stateless/01914_index_bgranvea.reference", "tests/queries/0_stateless/01914_index_bgranvea.sql"] | "Illegal column for DataTypeNullable" with indexed column | **How to reproduce**
```
ClickHouse client version 20.11.5.18 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.11.5 revision 54442.
create table test (id UInt64,insid UInt64,insidvalue Nullable(UInt64), index insid_idx (insid) type bloom_filter() granularity 1, index insidvalue_idx (insidvalue) type bloom_filter() granularity 1) ENGINE=MergeTree() ORDER BY (insid,id);
insert into test values(1,1,1),(2,2,2);
select * from test where insid IN (1) OR insidvalue IN (1);
=> Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Illegal column for DataTypeNullable.
```
**Additional context**
It works if I remove one of the two indexes, or if I reverse the clause (insidvalue IN (1) OR insid IN (1)).
note: index on insid seems redundant with primary key but in our real table, insid is not in first position of ORDER BY. | https://github.com/ClickHouse/ClickHouse/issues/17964 | https://github.com/ClickHouse/ClickHouse/pull/25285 | 1aac27e9034e7d7b3107bd80ad053ad75b4fe3be | f12368bbb492de77d808a70edec33335902f9f50 | "2020-12-10T14:30:04Z" | c++ | "2021-06-17T06:14:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,938 | ["src/Interpreters/InterpreterInsertQuery.cpp"] | Misleading error message while inserting in a remote() table function with URL table engine in it. | **Describe the issue**
We have a table with the URL table engine, as described below.
We want to insert into that table via the remote() table function.
If we use a non-existent column name, ClickHouse produces a misleading error message suggesting the table does not exist.
**How to reproduce**
Table ddl:
https://github.com/AlexAkulov/clickhouse-backup/releases/tag/v0.6.2
And if we try to execute query with wrong column name:
```
INSERT INTO TABLE FUNCTION remote('chi-backtest1-backtest1-0-0', 'system.backup_actions') (common) values ('create shard1-xxx');
Code: 60, e.displayText() = DB::Exception: Both table name and UUID are empty (version 20.8.7.15 (official build)
```
**Expected behavior**
Clickhouse would produce an error: "that column does not exist in destination table".
**Error message and/or stacktrace**
```
Code: 60, e.displayText() = DB::Exception: Both table name and UUID are empty (version 20.8.7.15 (official build)
```
| https://github.com/ClickHouse/ClickHouse/issues/17938 | https://github.com/ClickHouse/ClickHouse/pull/19013 | fc5e09d6b8a6f350954e642cdf6bf4a47ba310b5 | 49ad73b9bc3a3edb86df9e0649ee2433995fec46 | "2020-12-09T16:58:13Z" | c++ | "2021-01-14T08:42:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,933 | ["docs/en/operations/settings/settings.md", "docs/ru/operations/settings/settings.md", "src/Core/Settings.h", "src/Functions/FunctionsConversion.h", "src/IO/ReadHelpers.h", "tests/queries/0_stateless/02813_float_parsing.reference", "tests/queries/0_stateless/02813_float_parsing.sql"] | Give the ability to choose float from string parsing algorithms.[Fast/Precise] | In continuation of #1665
It would be useful to be able to choose a float parsing algorithm, as some drivers don't support binary formats; this could be done as a session setting.
**Use case**
Clickhouse version 20.13.1.5365
```
SELECT
toFloat64(15008753.),
toFloat64('1.5008753E7')
Query id: b4376f9b-a74d-4442-9113-5a48ce0e32cf
ββtoFloat64(15008753.)ββ¬βtoFloat64('1.5008753E7')ββ
β 15008753 β 15008753.000000002 β
ββββββββββββββββββββββββ΄βββββββββββββββββββββββββββ
```
**Describe alternatives you've considered**
Use Decimal as a data type and later convert it to float64 (via Default for example).
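A sketch of that alternative: parse into a Decimal first, then convert, which keeps a value like `15008753` exact (scale 6 is assumed to be sufficient here):
```sql
SELECT toFloat64(toDecimal64('15008753.000000', 6)) AS precise_value;
-- precise_value = 15008753
```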
| https://github.com/ClickHouse/ClickHouse/issues/17933 | https://github.com/ClickHouse/ClickHouse/pull/52791 | d90048693a0a0a4182890f7c42e4de9307e657a6 | 357fee99ff9bcdf76dd044ee11ae2d3c6f5f5a43 | "2020-12-09T13:52:51Z" | c++ | "2023-08-02T04:23:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,912 | ["src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp", "src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp"] | Hope MaterializeMySQL engine can support MySQL prefix index | **Use case**
MySQL replication to ClickHouse.
**Describe the solution you'd like**
Hope MaterializeMySQL engine can support MySQL prefix index
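For concreteness, a sketch of the MySQL DDL shape in question (hypothetical table; a prefix length on an indexed string column):
```sql
-- MySQL side: index on the first 10 characters of `name`
CREATE TABLE t (id INT PRIMARY KEY, name VARCHAR(255), KEY idx_name (name(10)));
```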
| https://github.com/ClickHouse/ClickHouse/issues/17912 | https://github.com/ClickHouse/ClickHouse/pull/17944 | 5f78280aec51a375827eebfcc7c8a2a91efeb004 | 60aef3d5291bed69947bba7c82e12f5fcd85e7b4 | "2020-12-09T02:35:53Z" | c++ | "2020-12-10T16:42:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,882 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "src/Storages/StorageMergeTree.cpp", "src/Storages/StorageReplicatedMergeTree.cpp", "src/Storages/StorageReplicatedMergeTree.h", "tests/queries/0_stateless/02004_invalid_partition_mutation_stuck.sql"] | Is it a bug when number of ast elements > max_expanded_ast_elements if there are many mutations. | I am reading clickhouse mutation code. In StorageMergeTree->selectPartsToMutate() function, there is a check
```cpp
if (current_ast_elements + commands_size >= max_ast_elements)
    break;
```
I think if there are many mutations then the check result will be true, so that it will skip the remaining mutations.
But the new part's mutation is set to the last one in mutation map, and it does not apply the remaining mutations actually.
`new_part_info.mutation = current_mutations_by_version.rbegin()->first;`
Then the new part will miss the remaining mutations in future mutation job.
I think it's wrong here. Am I right?
If I am right, I will submit a PR.
| https://github.com/ClickHouse/ClickHouse/issues/17882 | https://github.com/ClickHouse/ClickHouse/pull/32814 | e834d655a5a3b30e1270b5a3b88be6d6b0a686ce | 7dbdcf91bf0a76a80302eb782cfd6e6a10808d9e | "2020-12-08T02:11:03Z" | c++ | "2021-12-17T05:53:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,876 | ["src/Functions/bar.cpp", "tests/queries/0_stateless/01621_bar_nan_arguments.reference", "tests/queries/0_stateless/01621_bar_nan_arguments.sql"] | Query Fuzzer: bar, greatCircleAngle: too large size passed to allocator. | ```
SELECT bar((greatCircleAngle(65537, 2, number, number) - number) * 65535, 1048576, 1048577, nan)
FROM numbers(1025)
Query id: 1792a79a-e5ae-425d-aaeb-55fbff42ea0e
β Progress: 1.02 thousand rows, 8.20 KB (10.09 thousand rows/s., 80.73 KB/s.) 99%
Received exception from server (version 20.13.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Too large size (9223372036854776864) passed to allocator. It indicates an error.: while executing 'FUNCTION bar(multiply(minus(greatCircleAngle(65537, 2, number, number), number), 65535) :: 0, 1048576 :: 4, 1048577 :: 5, nan :: 6) -> bar(multiply(minus(greatCircleAngle(65537, 2, number, number), number), 65535), 1048576, 1048577, nan) String : 3'.
``` | https://github.com/ClickHouse/ClickHouse/issues/17876 | https://github.com/ClickHouse/ClickHouse/pull/18520 | 7b23b866a231cb399ada061e7e378d2d3ac625c7 | c3ad122142adc382a44695be3ff4a8067849437f | "2020-12-07T20:40:52Z" | c++ | "2020-12-26T01:32:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,875 | ["src/AggregateFunctions/AggregateFunctionGroupArray.h", "src/AggregateFunctions/AggregateFunctionGroupUniqArray.h", "tests/queries/0_stateless/01651_group_uniq_array_enum.reference", "tests/queries/0_stateless/01651_group_uniq_array_enum.sql"] | groupUniqArray + Enum data type returns Int instead of Enum | **How to reproduce**
Clickhouse version 20.11.5.18, 20.8.7.15, 20.3.19
```
SELECT
groupUniqArray(val) AS uniq,
toTypeName(uniq),
groupArray(val) AS arr,
toTypeName(arr)
FROM
(
SELECT CAST(number % 2, 'Enum(\'hello\' = 1, \'world\' = 0)') AS val
FROM numbers(2)
)
Query id: d4657f27-c8bd-4174-9947-c6c48d0222d5
ββuniqβββ¬βtoTypeName(groupUniqArray(val))ββ¬βarrββββββββββββββββ¬βtoTypeName(groupArray(val))βββββββββββββ
β [0,1] β Array(Int8) β ['world','hello'] β Array(Enum8('world' = 0, 'hello' = 1)) β
βββββββββ΄ββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββ
```
| https://github.com/ClickHouse/ClickHouse/issues/17875 | https://github.com/ClickHouse/ClickHouse/pull/19019 | c1732ef1b5874dafec2b33ee3e067c455d1d378c | 2c8ce7d94ea2a15ed0611bc5461597213f296530 | "2020-12-07T17:33:06Z" | c++ | "2021-01-14T07:11:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,861 | ["src/Interpreters/JoinedTables.cpp", "tests/queries/0_stateless/01938_joins_identifiers.reference", "tests/queries/0_stateless/01938_joins_identifiers.sql"] | Error joining tables with special symbols in names. | (you don't have to strictly follow this form)
**Describe the bug**
Join fails if table names contain some special symbols (e.g. '/').
**How to reproduce**
* Which ClickHouse server version to use
version 20.11.4
* `CREATE TABLE` statements for all tables involved
create table "/t0" (a Int64, b Int64) engine = MergeTree() partition by a order by a;
create table "/t1" (a Int64, b Int64) engine = MergeTree() partition by a order by a;
* Sample data for all these tables
insert into "/t0" values (0, 0);
insert into "/t1" values (0, 1);
* Queries to run that lead to unexpected result
select * from "/t0" join "/t1" using a
**Expected behavior**
The join should work the same as with ordinary table names.
**Error message and/or stacktrace**
Received exception from server (version 20.11.4):
Code: 62. DB::Exception: Received from localhost:9000. DB::Exception: Syntax error (table or subquery or table function): failed at position 16 ('/'): /t1) as /t1. Expected one of: SELECT subquery, compound identifier, identifier, element of expression with optional alias, list of elements, function, table, table function, subquery or list of joined tables, table or subquery or table function.
**Additional context**
When I rename tables with "t0" and "t1" everything works fine. It looks like ClickHouse rewrites query somehow and uses table names without quotes.
| https://github.com/ClickHouse/ClickHouse/issues/17861 | https://github.com/ClickHouse/ClickHouse/pull/25924 | f238d98fd04a88e4fbf27452a4729824d9b6ccc7 | c584312c57af57ef1a27dfe8dd9bf15a33a3e773 | "2020-12-07T10:39:35Z" | c++ | "2021-07-03T11:55:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,799 | ["debian/clickhouse-server.init"] | Problem while trying to start clickhouse server | **Version:**
ClickHouse client version 20.10.3.30 (official build)
Connected to ClickHouse server version 20.10.3 revision 54441
**Task Description:**
**Task1 :**
Trying to restart and check the status of the clickhouse server using the commands below:
```sudo /etc/init.d/clickhouse-server restart```
```sudo /etc/init.d/clickhouse-server status```
**Problem Faced:**
We get UNKNOWN as output on screen after running the restart command,
and "server terminated unexpectedly" as the status after running the status command.
**error logs:**
```
Status file /var/run/clickhouse-server/clickhouse-server.pid already exists - unclean restart. Contents:
19340
Code: 76, e.displayText() = DB::Exception: Cannot lock file /var/run/clickhouse-server/clickhouse-server.pid. Another server instance in same directory is already running.
```
**Task2 :**
Updated the clickhouse version to 20.11.4.13 and did the following:
updated the CLICKHOUSE_USER value to "hduser" in the /etc/init.d/clickhouse-server file and started the server using
```sudo /etc/init.d/clickhouse-server start```
**Problem Faced:**
Init script is already running
**Task3:**
Because of the problem we were getting in Task2, we used the systemctl command to start the server:
```sudo systemctl start clickhouse-server```
**Problem Faced :**
All the directories and files related to clickhouse have the "clickhouse" user as owner and group, while we already changed the CLICKHOUSE_USER value to "hduser" in the /etc/init.d/clickhouse-server file.
The clickhouse server starts with the systemctl command, but when we changed the owner and group of the clickhouse log files to "hduser" instead of "clickhouse", it doesn't start and gives the following:
```sudo systemctl status clickhouse-server```
```
● clickhouse-server.service - ClickHouse Server (analytic DBMS for big data)
Loaded: loaded (/etc/systemd/system/clickhouse-server.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Fri 2020-12-04 04:31:06 PST; 9s ago
Process: 30058 ExecStart=/usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml --pid-file=/run/clickhouse-server/clickhouse-server.pid (code=exited, status=232/ADDRESS_FAMILIES)
Main PID: 30058 (code=exited, status=232/ADDRESS_FAMILIES)
Dec 04 04:31:06 DEV-BDSP-Worker-04 systemd[1]: Unit clickhouse-server.service entered failed state.
Dec 04 04:31:06 DEV-BDSP-Worker-04 systemd[1]: clickhouse-server.service failed.
``` | https://github.com/ClickHouse/ClickHouse/issues/17799 | https://github.com/ClickHouse/ClickHouse/pull/25921 | edce77803f07a29efd32190bc7c3053325894113 | 38d1ce310d9ff824fc38143ab362460b2b83ab7d | "2020-12-04T12:51:18Z" | c++ | "2021-07-03T11:50:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,756 | ["tests/queries/0_stateless/02693_multiple_joins_in.reference", "tests/queries/0_stateless/02693_multiple_joins_in.sql"] | "unknown column name" with multi-join + IN table | ```
root@db-0:/# clickhouse-client
ClickHouse client version 20.11.4.13 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.11.4 revision 54442.
db-0 :) create temporary table temp_table3(val0 UInt64) ENGINE=Memory();
db-0 :) select * from (select 1 as id) t1 inner join (select 1 as id) t2 on t1.id=t2.id inner join (select 1 as id) t3 on t1.id=t3.id where t1.id in temp_table3
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Unknown column name 'temp_table3': While processing SELECT `--t1.id` AS `t1.id`, `--t2.id` AS `t2.id`, t3.id AS `t3.id` FROM (SELECT 1 AS id) AS t1 INNER JOIN (SELECT 1 AS id) AS t2 ON t1.id = t2.id INNER JOIN (SELECT 1 AS id) AS t3 ON t1.id = t3.id WHERE `--t1.id` IN (temp_table3).
it works if I remove a join:
db-0 :) select * from (select 1 as id) t1 inner join (select 1 as id) t2 on t1.id=t2.id where t1.id in temp_table3
=> OK
```
Workaround: use `IN (SELECT * FROM temp_table3)`.
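For reference, a sketch of the failing query rewritten with that workaround (same temporary table as above):

```sql
-- Wrapping the table in an explicit subquery avoids the failed column resolution
SELECT *
FROM (SELECT 1 AS id) t1
INNER JOIN (SELECT 1 AS id) t2 ON t1.id = t2.id
INNER JOIN (SELECT 1 AS id) t3 ON t1.id = t3.id
WHERE t1.id IN (SELECT * FROM temp_table3);
```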
| https://github.com/ClickHouse/ClickHouse/issues/17756 | https://github.com/ClickHouse/ClickHouse/pull/47739 | 2b439f079ef8f3410b5ace1bdba46e4ceb361ff4 | 50e1eedd4766ed8c2e34339d78f000d63f4d5191 | "2020-12-03T09:45:32Z" | c++ | "2023-03-23T14:48:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,733 | ["src/Storages/IStorage.h", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "src/Storages/StorageMergeTree.cpp", "src/Storages/StorageMergeTree.h", "tests/queries/0_stateless/02148_in_memory_part_flush.reference", "tests/queries/0_stateless/02148_in_memory_part_flush.sql"] | In-memory parts with disabled wal-log disappear while server restart or DETACH table. | **How to reproduce**
Clickhouse server 20.13.1.5273
```
CREATE TABLE default.test
(
`key` UInt32,
`ts` DateTime CODEC(DoubleDelta, LZ4),
`db_time` DateTime DEFAULT now() COMMENT 'spout-ignore' CODEC(DoubleDelta, LZ4)
)
ENGINE = MergeTree
PARTITION BY toStartOfTenMinutes(db_time)
ORDER BY (key, ts)
TTL db_time + toIntervalHour(3)
SETTINGS index_granularity = 8192, merge_with_ttl_timeout = 3600, min_rows_for_compact_part = 1000000, min_bytes_for_compact_part = 200000000, in_memory_parts_enable_wal = 0;
INSERT INTO test(key, ts) SELECT number % 1000, now() + intDiv(number,1000) FROM numbers(500);
SELECT * FROM test;
500 rows in set. Elapsed: 0.003 sec.
DETACH TABLE test;
ATTACH TABLE test;
SELECT * FROM test;
0 rows in set. Elapsed: 0.002 sec.
```
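One way to confirm that the inserted rows lived only in an in-memory part (a suggested check, not part of the original report) is to inspect `system.parts` before the DETACH:

```sql
-- part_type is expected to be 'InMemory' for the freshly inserted part
SELECT name, part_type, active, rows
FROM system.parts
WHERE database = currentDatabase() AND table = 'test' AND active;
```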
**Expected behavior**
Clickhouse would flush parts on disk. | https://github.com/ClickHouse/ClickHouse/issues/17733 | https://github.com/ClickHouse/ClickHouse/pull/32742 | bf415378be7a223f71d9325080f8c23b63948a7a | 2e388a72daedbba1183624578014dc599046c3b1 | "2020-12-02T12:45:01Z" | c++ | "2021-12-14T20:09:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,731 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "src/Storages/MergeTree/MergeTreeWriteAheadLog.cpp", "src/Storages/MergeTree/MergeTreeWriteAheadLog.h", "tests/queries/0_stateless/02410_inmemory_wal_cleanup.reference", "tests/queries/0_stateless/02410_inmemory_wal_cleanup.sql"] | Slow start with high memory consumption: log filled with "<Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.90 TiB." | **Describe the bug**
Since we upgraded to 20.10.3.30, one of our ClickHouse clusters takes much more time to start (up to 50 minutes, where it previously took ~3 minutes).
The log file is filled with
> <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 16.90 TiB
Additionally, memory consumption increases linearly and substantially (up to around 230 GB for us) and then drops back to normal values once ClickHouse has finished starting up. During that loading phase, a single CPU core is used at 100%.
If that much memory is not available on the machine, ClickHouse gets OOM-killed by the kernel.
**How to reproduce**
* Issue appeared after we upgraded to ClickHouse server version 20.10.3.30
* How could I find which of our tables may be responsible for that? (A first guess at a diagnostic query is sketched below.)
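One guess (not verified) is to count active parts per table, since every part is scanned while the server loads its tables:

```sql
-- Tables with a huge number of active parts are likely startup bottlenecks
SELECT database, table, count() AS active_parts
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY active_parts DESC
LIMIT 20;
```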
**Expected behavior**
A fast startup :)
| https://github.com/ClickHouse/ClickHouse/issues/17731 | https://github.com/ClickHouse/ClickHouse/pull/40592 | 582216a3ca1093dee7a72a4ab7b2c3f2c7dc3665 | f11b7499d183511b9e5694a92c407f5ce5e0eeb8 | "2020-12-02T12:21:58Z" | c++ | "2022-09-05T12:15:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,718 | ["src/Interpreters/join_common.cpp", "tests/queries/0_stateless/00119_storage_join.reference", "tests/queries/0_stateless/00119_storage_join.sql"] | Error in query with join and 'with totals' clause | **Describe the bug**
An exception happens when running a query that joins a subquery containing a `GROUP BY ... WITH TOTALS` clause with a table using the Join engine.
**How to reproduce**
Clickhouse version 20.7.4.11
```
CREATE TABLE table1 (
show_date DateTime,
pu_num String,
pu_id UInt64,
amnt UInt32
) ENGINE = MergeTree(
) PARTITION BY toYYYYMM(show_date)
ORDER BY (show_date);
insert into table1 values('2020-01-01 12:00:00', '1234454', 54234, 5);
insert into table1 values('2020-01-02 13:00:00', '1234454', 54234, 7);
insert into table1 values('2020-01-11 15:00:00', '123123', 32434, 4);
CREATE TABLE table2 (
pu_id UInt64,
pu_num String,
name_rus String)
ENGINE = Join(ANY, LEFT, pu_id) SETTINGS join_use_nulls = 1;
insert into table2 values(32434, '123123', 'Имя_1');
insert into table2 values(54234, '1234454', 'Имя_2');
select dd.pu_id, dd.pu_num, dd.s_amnt, k.name_rus
from (
select a.pu_id, a.pu_num, sum(a.amnt) as s_amnt
from table1 a
group by a.pu_id, a.pu_num with totals
) dd
ANY LEFT JOIN table2 k USING(pu_id) SETTINGS join_use_nulls = 1;
```
**Error message and/or stacktrace**
```
2020.12.02 09:04:41.208576 [ 49155 ] {a90f1fff-f15e-4edb-8e94-d11ea6107029} <Error> DynamicQueryHandler: Code: 49, e.displayText() = DB::Exception: Invalid number of columns in chunk pushed to OutputPort. Expected 4, found 5
Header: pu_id UInt64 UInt64(size = 0), pu_num String String(size = 0), s_amnt UInt64 UInt64(size = 0), name_rus Nullable(String) Nullable(size = 0, String(size = 0), UInt8(size = 0))
Chunk: UInt64(size = 1) String(size = 1) UInt64(size = 1) Nullable(size = 1, String(size = 1), UInt8(size = 1)) Nullable(size = 1, String(size = 1), UInt8(size = 1))
, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x12723a80 in /usr/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xa3f52bd in /usr/bin/clickhouse
2. ? @ 0xfee4cf3 in /usr/bin/clickhouse
3. DB::PipelineExecutor::prepareProcessor(unsigned long, unsigned long, std::__1::queue<DB::ExecutingGraph::Node*, std::__1::deque<DB::ExecutingGraph::Node*, std::__1::allocator<DB::ExecutingGraph::Node*> > >&, std::__1::unique_lock<std::__1::mutex>) @ 0xff19e08 in /usr/bin/clickhouse
4. ? @ 0xff1bbdb in /usr/bin/clickhouse
5. ? @ 0xff1c356 in /usr/bin/clickhouse
6. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa422ad7 in /usr/bin/clickhouse
7. ? @ 0xa421023 in /usr/bin/clickhouse
8. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
9. __clone @ 0x121a3f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.7.4.11 (official build))
```
**Additional context**
If the column `pu_num` or the `WITH TOTALS` clause is removed from the query, it executes without error.
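For example, this variant without `WITH TOTALS` (a sketch of the same query) runs without the error:

```sql
SELECT dd.pu_id, dd.pu_num, dd.s_amnt, k.name_rus
FROM (
    SELECT a.pu_id, a.pu_num, sum(a.amnt) AS s_amnt
    FROM table1 a
    GROUP BY a.pu_id, a.pu_num
) dd
ANY LEFT JOIN table2 k USING (pu_id)
SETTINGS join_use_nulls = 1;
```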
| https://github.com/ClickHouse/ClickHouse/issues/17718 | https://github.com/ClickHouse/ClickHouse/pull/23549 | 5f23acfb47b834874f1936ebfe1431feda3cf8a1 | ca230224cfa8939680bb05826fa1bc38e4c3648d | "2020-12-02T09:09:35Z" | c++ | "2021-04-24T00:19:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,682 | ["src/Common/ThreadPool.cpp", "src/Common/ThreadPool.h", "src/Interpreters/Aggregator.cpp", "src/Interpreters/Aggregator.h", "tests/queries/0_stateless/01605_dictinct_two_level.reference", "tests/queries/0_stateless/01605_dictinct_two_level.sql"] | Server crash with groupArraySample(5)(distinct ...) | **Describe the bug**
```
SELECT
domain
, groupArraySample(5)(distinct subdomain) AS example_subdomains
FROM table
WHERE time > now() - interval 1 hour
GROUP BY domain
LIMIT 100
```
This causes a server crash. If I remove the `distinct` it runs just fine. I appreciate it is probably not valid to use a `distinct` here, but it should not cause a full crash.
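A possible rewrite that avoids the `-Distinct` combinator (a sketch with different semantics: it keeps unique values rather than a random sample, but it sidesteps the crash):

```sql
SELECT
    domain,
    groupUniqArray(5)(subdomain) AS example_subdomains
FROM table
WHERE time > now() - INTERVAL 1 HOUR
GROUP BY domain
LIMIT 100;
```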
```
[clickhouse] 2020.12.01 13:06:12.037997 [ 265 ] <Fatal> BaseDaemon: ########################################
[clickhouse] 2020.12.01 13:06:12.038106 [ 265 ] <Fatal> BaseDaemon: (version 20.11.3.3 (official build), build id: C88CD350740ED614) (from thread 165) (query_id: a94ba7b7-f82d-4d0e-ba0a-af0763003728) Received signal Segmentation fault (11)
[clickhouse] 2020.12.01 13:06:12.038131 [ 265 ] <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
[clickhouse] 2020.12.01 13:06:12.038162 [ 265 ] <Fatal> BaseDaemon: Stack trace: 0x7dc0763 0x91da3ea 0xdab6dda 0xdabafeb 0xdabac8b 0xdaba3f5 0xda42adf 0xe4eb128 0xe34e96a 0xe38774c 0xe384877 0xe389825 0x7b6293d 0x7b66463 0x7f6507933609 0x7f6507849293
[clickhouse] 2020.12.01 13:06:12.038136 [ 264 ] <Fatal> BaseDaemon: ########################################
[clickhouse] 2020.12.01 13:06:12.038228 [ 264 ] <Fatal> BaseDaemon: (version 20.11.3.3 (official build), build id: C88CD350740ED614) (from thread 175) (query_id: a94ba7b7-f82d-4d0e-ba0a-af0763003728) Received signal Segmentation fault (11)
[clickhouse] 2020.12.01 13:06:12.038267 [ 265 ] <Fatal> BaseDaemon: 2. DB::GroupArrayGeneralImpl<DB::GroupArrayNodeString, DB::GroupArrayTrait<true, (DB::Sampler)1> >::insertResultInto(char*, DB::IColumn&, DB::Arena*) const @ 0x7dc0763 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038282 [ 264 ] <Fatal> BaseDaemon: Address: 0x6 Access: read. Address not mapped to object.
[clickhouse] 2020.12.01 13:06:12.038311 [ 265 ] <Fatal> BaseDaemon: 3. DB::AggregateFunctionDistinct<DB::AggregateFunctionDistinctSingleGenericData<true> >::insertResultInto(char*, DB::IColumn&, DB::Arena*) const @ 0x91da3ea in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038318 [ 264 ] <Fatal> BaseDaemon: Stack trace: 0x7dc0763 0x91da3ea 0xdab6dda 0xdabafeb 0xdabac8b 0xdaba3f5 0xda42adf 0xe4eb128 0xe34e96a 0xe38774c 0xe384877 0xe389825 0x7b6293d 0x7b66463 0x7f6507933609 0x7f6507849293
[clickhouse] 2020.12.01 13:06:12.038342 [ 265 ] <Fatal> BaseDaemon: 4. void DB::Aggregator::insertAggregatesIntoColumns<char*>(char*&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*) const @ 0xdab6dda in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038386 [ 265 ] <Fatal> BaseDaemon: 5. void DB::Aggregator::convertToBlockImplFinal<DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >, StringHashMap<char*, Allocator<true, true> > >(DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&, StringHashMap<char*, Allocator<true, true> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*) const @ 0xdabafeb in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038456 [ 265 ] <Fatal> BaseDaemon: 6. void DB::Aggregator::convertToBlockImpl<DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >, StringHashMap<char*, Allocator<true, true> > >(DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&, StringHashMap<char*, Allocator<true, true> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::vector<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*, std::__1::allocator<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*, bool) const @ 0xdabac8b in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038223 [ 266 ] <Fatal> BaseDaemon: ########################################
[clickhouse] 2020.12.01 13:06:12.038562 [ 265 ] <Fatal> BaseDaemon: 7. DB::Block DB::Aggregator::prepareBlockAndFill<DB::Block DB::Aggregator::convertOneBucketToBlock<DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> > >(DB::AggregatedDataVariants&, DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&, bool, unsigned long) const::'lambda'(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::vector<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*, std::__1::allocator<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*, bool)>(DB::AggregatedDataVariants&, bool, unsigned long, DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&&) const @ 0xdaba3f5 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038570 [ 266 ] <Fatal> BaseDaemon: (version 20.11.3.3 (official build), build id: C88CD350740ED614) (from thread 161) (query_id: a94ba7b7-f82d-4d0e-ba0a-af0763003728) Received signal Segmentation fault (11)
[clickhouse] 2020.12.01 13:06:12.038601 [ 265 ] <Fatal> BaseDaemon: 8. DB::Aggregator::mergeAndConvertOneBucketToBlock(std::__1::vector<std::__1::shared_ptr<DB::AggregatedDataVariants>, std::__1::allocator<std::__1::shared_ptr<DB::AggregatedDataVariants> > >&, DB::Arena*, bool, unsigned long, std::__1::atomic<bool>*) const @ 0xda42adf in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038607 [ 266 ] <Fatal> BaseDaemon: Address: 0x7 Access: read. Address not mapped to object.
[clickhouse] 2020.12.01 13:06:12.038632 [ 265 ] <Fatal> BaseDaemon: 9. DB::ConvertingAggregatedToChunksSource::generate() @ 0xe4eb128 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038646 [ 266 ] <Fatal> BaseDaemon: Stack trace: 0x7dc0763 0x91da3ea 0xdab6dda 0xdabb112 0xdabac8b 0xdaba3f5 0xda42adf 0xe4eb128 0xe34e96a 0xe38774c 0xe384877 0xe389825 0x7b6293d 0x7b66463 0x7f6507933609 0x7f6507849293
[clickhouse] 2020.12.01 13:06:12.038671 [ 265 ] <Fatal> BaseDaemon: 10. DB::ISource::work() @ 0xe34e96a in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038711 [ 265 ] <Fatal> BaseDaemon: 11. ? @ 0xe38774c in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038730 [ 266 ] <Fatal> BaseDaemon: 2. DB::GroupArrayGeneralImpl<DB::GroupArrayNodeString, DB::GroupArrayTrait<true, (DB::Sampler)1> >::insertResultInto(char*, DB::IColumn&, DB::Arena*) const @ 0x7dc0763 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038750 [ 265 ] <Fatal> BaseDaemon: 12. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0xe384877 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038770 [ 266 ] <Fatal> BaseDaemon: 3. DB::AggregateFunctionDistinct<DB::AggregateFunctionDistinctSingleGenericData<true> >::insertResultInto(char*, DB::IColumn&, DB::Arena*) const @ 0x91da3ea in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038795 [ 265 ] <Fatal> BaseDaemon: 13. ? @ 0xe389825 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038831 [ 266 ] <Fatal> BaseDaemon: 4. void DB::Aggregator::insertAggregatesIntoColumns<char*>(char*&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*) const @ 0xdab6dda in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038822 [ 265 ] <Fatal> BaseDaemon: 14. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7b6293d in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038884 [ 265 ] <Fatal> BaseDaemon: 15. ? @ 0x7b66463 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038935 [ 265 ] <Fatal> BaseDaemon: 16. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
[clickhouse] 2020.12.01 13:06:12.038899 [ 266 ] <Fatal> BaseDaemon: 5. void DB::Aggregator::convertToBlockImplFinal<DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >, StringHashMap<char*, Allocator<true, true> > >(DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&, StringHashMap<char*, Allocator<true, true> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*) const @ 0xdabb112 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038992 [ 265 ] <Fatal> BaseDaemon: 17. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
[clickhouse] 2020.12.01 13:06:12.039016 [ 266 ] <Fatal> BaseDaemon: 6. void DB::Aggregator::convertToBlockImpl<DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >, StringHashMap<char*, Allocator<true, true> > >(DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&, StringHashMap<char*, Allocator<true, true> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::vector<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*, std::__1::allocator<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*, bool) const @ 0xdabac8b in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.038396 [ 264 ] <Fatal> BaseDaemon: 2. DB::GroupArrayGeneralImpl<DB::GroupArrayNodeString, DB::GroupArrayTrait<true, (DB::Sampler)1> >::insertResultInto(char*, DB::IColumn&, DB::Arena*) const @ 0x7dc0763 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039069 [ 266 ] <Fatal> BaseDaemon: 7. DB::Block DB::Aggregator::prepareBlockAndFill<DB::Block DB::Aggregator::convertOneBucketToBlock<DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> > >(DB::AggregatedDataVariants&, DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&, bool, unsigned long) const::'lambda'(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::vector<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*, std::__1::allocator<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*, bool)>(DB::AggregatedDataVariants&, bool, unsigned long, DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&&) const @ 0xdaba3f5 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039093 [ 264 ] <Fatal> BaseDaemon: 3. DB::AggregateFunctionDistinct<DB::AggregateFunctionDistinctSingleGenericData<true> >::insertResultInto(char*, DB::IColumn&, DB::Arena*) const @ 0x91da3ea in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039113 [ 266 ] <Fatal> BaseDaemon: 8. DB::Aggregator::mergeAndConvertOneBucketToBlock(std::__1::vector<std::__1::shared_ptr<DB::AggregatedDataVariants>, std::__1::allocator<std::__1::shared_ptr<DB::AggregatedDataVariants> > >&, DB::Arena*, bool, unsigned long, std::__1::atomic<bool>*) const @ 0xda42adf in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039147 [ 266 ] <Fatal> BaseDaemon: 9. DB::ConvertingAggregatedToChunksSource::generate() @ 0xe4eb128 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039152 [ 264 ] <Fatal> BaseDaemon: 4. void DB::Aggregator::insertAggregatesIntoColumns<char*>(char*&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*) const @ 0xdab6dda in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039211 [ 264 ] <Fatal> BaseDaemon: 5. void DB::Aggregator::convertToBlockImplFinal<DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >, StringHashMap<char*, Allocator<true, true> > >(DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&, StringHashMap<char*, Allocator<true, true> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*) const @ 0xdabafeb in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039178 [ 266 ] <Fatal> BaseDaemon: 10. DB::ISource::work() @ 0xe34e96a in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039281 [ 264 ] <Fatal> BaseDaemon: 6. void DB::Aggregator::convertToBlockImpl<DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >, StringHashMap<char*, Allocator<true, true> > >(DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&, StringHashMap<char*, Allocator<true, true> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::vector<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*, std::__1::allocator<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*, bool) const @ 0xdabac8b in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039298 [ 266 ] <Fatal> BaseDaemon: 11. ? @ 0xe38774c in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039344 [ 264 ] <Fatal> BaseDaemon: 7. DB::Block DB::Aggregator::prepareBlockAndFill<DB::Block DB::Aggregator::convertOneBucketToBlock<DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> > >(DB::AggregatedDataVariants&, DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&, bool, unsigned long) const::'lambda'(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, std::__1::vector<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*, std::__1::allocator<DB::PODArray<char*, 4096ul, Allocator<false, false>, 15ul, 16ul>*> >&, std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::Arena*, bool)>(DB::AggregatedDataVariants&, bool, unsigned long, DB::AggregationMethodStringNoCache<TwoLevelStringHashMap<char*, Allocator<true, true>, StringHashMap> >&&) const @ 0xdaba3f5 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039380 [ 266 ] <Fatal> BaseDaemon: 12. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0xe384877 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039382 [ 264 ] <Fatal> BaseDaemon: 8. DB::Aggregator::mergeAndConvertOneBucketToBlock(std::__1::vector<std::__1::shared_ptr<DB::AggregatedDataVariants>, std::__1::allocator<std::__1::shared_ptr<DB::AggregatedDataVariants> > >&, DB::Arena*, bool, unsigned long, std::__1::atomic<bool>*) const @ 0xda42adf in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039500 [ 266 ] <Fatal> BaseDaemon: 13. ? @ 0xe389825 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039585 [ 266 ] <Fatal> BaseDaemon: 14. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7b6293d in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039702 [ 266 ] <Fatal> BaseDaemon: 15. ? @ 0x7b66463 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039691 [ 264 ] <Fatal> BaseDaemon: 9. DB::ConvertingAggregatedToChunksSource::generate() @ 0xe4eb128 in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039762 [ 266 ] <Fatal> BaseDaemon: 16. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
[clickhouse] 2020.12.01 13:06:12.039814 [ 266 ] <Fatal> BaseDaemon: 17. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
[clickhouse] 2020.12.01 13:06:12.039771 [ 264 ] <Fatal> BaseDaemon: 10. DB::ISource::work() @ 0xe34e96a in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.039859 [ 264 ] <Fatal> BaseDaemon: 11. ? @ 0xe38774c in /usr/bin/clickhouse
[clickhouse] 2020.12.01 13:06:12.040261 [ 267 ] <Fatal> BaseDaemon: ########################################
``` | https://github.com/ClickHouse/ClickHouse/issues/17682 | https://github.com/ClickHouse/ClickHouse/pull/18365 | 568119535a8a6d21995678e7c26807e4a80bce65 | 0f98fe3c0ca9d0b035fbd5397e2cf1019703b108 | "2020-12-01T13:08:11Z" | c++ | "2020-12-23T12:41:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,680 | ["docs/en/sql-reference/functions/date-time-functions.md"] | Table of Contents of section [Dates and Times] incomplete | The **Table of Contents** does not actually list all functions as documented in the section **Dates and Times**.
See: [https://clickhouse.tech/docs/en/sql-reference/functions/date-time-functions/](https://clickhouse.tech/docs/en/sql-reference/functions/date-time-functions/)
For example, `formatDateTime` is not listed in the TOC, but documented on the page. Might be the case for other sections too, I haven't checked.
It's confusing and took me some time to find the documentation I was looking for.
Thanks. | https://github.com/ClickHouse/ClickHouse/issues/17680 | https://github.com/ClickHouse/ClickHouse/pull/17703 | cd653898357e8b714d304c3bc8ba0408b67d04d9 | a8a37b42490755a0c1fab79ecf6478490f85c632 | "2020-12-01T12:28:56Z" | c++ | "2020-12-10T22:50:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,661 | ["src/Common/renameat2.cpp"] | WSL: Cannot rename /var/[...]/hits_v1.sql.tmp to /var/[...]/hits_v1.sql, errno: 38, strerror: Function not implemented (version 20.11.4.13 (official build)) (from 127.0.0.1:55079) | **Describe the bug**
When I execute the tutorial query "CREATE TABLE tutorial.hits_v1" from https://clickhouse.tech/docs/ru/getting-started/tutorial/, I get this error.
**How to reproduce**
* Which ClickHouse server version to use
ClickHouse server version 20.11.4.13 (official build).
* Which interface to use, if matters
ClickHouse client version 20.11.4.13 (official build).
or DBeaver Community 7.3.0(x86_64)
* Non-default settings, if any
```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.1 LTS"
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
Installed under the Windows Subsystem for Linux (WSL) on Windows 10 v1909, build 18363.592 (a possible workaround is sketched below).
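A possible workaround (my assumption: WSL 1 does not implement the renameat2 syscall that the default Atomic database engine relies on, per the `DB::renameNoReplace` frame in the stack trace below) is to create the database with the Ordinary engine, which uses plain renames:

```sql
-- hypothetical workaround for WSL 1: avoid the Atomic database engine
CREATE DATABASE tutorial ENGINE = Ordinary;
```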
* Queries to run that lead to unexpected result
```
CREATE TABLE tutorial.hits_v1
(
`WatchID` UInt64,
`JavaEnable` UInt8,
`Title` String,
`GoodEvent` Int16,
`EventTime` DateTime,
`EventDate` Date,
`CounterID` UInt32,
`ClientIP` UInt32,
`ClientIP6` FixedString(16),
`RegionID` UInt32,
`UserID` UInt64,
`CounterClass` Int8,
`OS` UInt8,
`UserAgent` UInt8,
`URL` String,
`Referer` String,
`URLDomain` String,
`RefererDomain` String,
`Refresh` UInt8,
`IsRobot` UInt8,
`RefererCategories` Array(UInt16),
`URLCategories` Array(UInt16),
`URLRegions` Array(UInt32),
`RefererRegions` Array(UInt32),
`ResolutionWidth` UInt16,
`ResolutionHeight` UInt16,
`ResolutionDepth` UInt8,
`FlashMajor` UInt8,
`FlashMinor` UInt8,
`FlashMinor2` String,
`NetMajor` UInt8,
`NetMinor` UInt8,
`UserAgentMajor` UInt16,
`UserAgentMinor` FixedString(2),
`CookieEnable` UInt8,
`JavascriptEnable` UInt8,
`IsMobile` UInt8,
`MobilePhone` UInt8,
`MobilePhoneModel` String,
`Params` String,
`IPNetworkID` UInt32,
`TraficSourceID` Int8,
`SearchEngineID` UInt16,
`SearchPhrase` String,
`AdvEngineID` UInt8,
`IsArtifical` UInt8,
`WindowClientWidth` UInt16,
`WindowClientHeight` UInt16,
`ClientTimeZone` Int16,
`ClientEventTime` DateTime,
`SilverlightVersion1` UInt8,
`SilverlightVersion2` UInt8,
`SilverlightVersion3` UInt32,
`SilverlightVersion4` UInt16,
`PageCharset` String,
`CodeVersion` UInt32,
`IsLink` UInt8,
`IsDownload` UInt8,
`IsNotBounce` UInt8,
`FUniqID` UInt64,
`HID` UInt32,
`IsOldCounter` UInt8,
`IsEvent` UInt8,
`IsParameter` UInt8,
`DontCountHits` UInt8,
`WithHash` UInt8,
`HitColor` FixedString(1),
`UTCEventTime` DateTime,
`Age` UInt8,
`Sex` UInt8,
`Income` UInt8,
`Interests` UInt16,
`Robotness` UInt8,
`GeneralInterests` Array(UInt16),
`RemoteIP` UInt32,
`RemoteIP6` FixedString(16),
`WindowName` Int32,
`OpenerName` Int32,
`HistoryLength` Int16,
`BrowserLanguage` FixedString(2),
`BrowserCountry` FixedString(2),
`SocialNetwork` String,
`SocialAction` String,
`HTTPError` UInt16,
`SendTiming` Int32,
`DNSTiming` Int32,
`ConnectTiming` Int32,
`ResponseStartTiming` Int32,
`ResponseEndTiming` Int32,
`FetchTiming` Int32,
`RedirectTiming` Int32,
`DOMInteractiveTiming` Int32,
`DOMContentLoadedTiming` Int32,
`DOMCompleteTiming` Int32,
`LoadEventStartTiming` Int32,
`LoadEventEndTiming` Int32,
`NSToDOMContentLoadedTiming` Int32,
`FirstPaintTiming` Int32,
`RedirectCount` Int8,
`SocialSourceNetworkID` UInt8,
`SocialSourcePage` String,
`ParamPrice` Int64,
`ParamOrderID` String,
`ParamCurrency` FixedString(3),
`ParamCurrencyID` UInt16,
`GoalsReached` Array(UInt32),
`OpenstatServiceName` String,
`OpenstatCampaignID` String,
`OpenstatAdID` String,
`OpenstatSourceID` String,
`UTMSource` String,
`UTMMedium` String,
`UTMCampaign` String,
`UTMContent` String,
`UTMTerm` String,
`FromTag` String,
`HasGCLID` UInt8,
`RefererHash` UInt64,
`URLHash` UInt64,
`CLID` UInt32,
`YCLID` UInt64,
`ShareService` String,
`ShareURL` String,
`ShareTitle` String,
`ParsedParams` Nested(
Key1 String,
Key2 String,
Key3 String,
Key4 String,
Key5 String,
ValueDouble Float64),
`IslandID` FixedString(16),
`RequestNum` UInt32,
`RequestTry` UInt8
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate, intHash32(UserID))
SAMPLE BY intHash32(UserID)
SETTINGS index_granularity = 8192
```
**Expected behavior**
The table `tutorial.hits_v1` should be created.
**Error message and/or stacktrace**
2020.12.01 16:02:08.711590 [ 13034 ] {7b66e3cf-9a89-4eea-a620-1db7ac921479} <Error> executeQuery: Code: 425, e.displayText() = DB::ErrnoException: Cannot rename /var/lib/clickhouse/store/8c3/8c35f3c8-90a5-417c-ba52-6761ac024ab2/hits_v1.sql.tmp to /var/lib/clickhouse/store/8c3/8c35f3c8-90a5-417c-ba52-6761ac024ab2/hits_v1.sql, errno: 38, strerror: Function not implemented (version 20.11.4.13 (official build)) (from 127.0.0.1:55079) (in query: CREATE TABLE tutorial.hits_v1 ( `WatchID` UInt64, `JavaEnable` UInt8, `Title` String, `GoodEvent` Int16, `EventTime` DateTime, `EventDate` Date, `CounterID` UInt32, `ClientIP` UInt32, `ClientIP6` FixedString(16), `RegionID` UInt32, `UserID` UInt64, `CounterClass` Int8, `OS` UInt8, `UserAgent` UInt8, `URL` String, `Referer` String, `URLDomain` String, `RefererDomain` String, `Refresh` UInt8, `IsRobot` UInt8, `RefererCategories` Array(UInt16), `URLCategories` Array(UInt16), `URLRegions` Array(UInt32), `RefererRegions` Array(UInt32), `ResolutionWidth` UInt16, `ResolutionHeight` UInt16, `ResolutionDepth` UInt8, `FlashMajor` UInt8, `FlashMinor` UInt8, `FlashMinor2` String, `NetMajor` UInt8, `NetMinor` UInt8, `UserAgentMajor` UInt16, `UserAgentMinor` FixedString(2), `CookieEnable` UInt8, `JavascriptEnable` UInt8, `IsMobile` UInt8, `MobilePhone` UInt8, `MobilePhoneModel` String, `Params` String, `IPNetworkID` UInt32, `TraficSourceID` Int8, `SearchEngineID` UInt16, `SearchPhrase` String, `AdvEngineID` UInt8, `IsArtifical` UInt8, `WindowClientWidth` UInt16, `WindowClientHeight` UInt16, `ClientTimeZone` Int16, `ClientEventTime` DateTime, `SilverlightVersion1` UInt8, `SilverlightVersion2` UInt8, `SilverlightVersion3` UInt32, `SilverlightVersion4` UInt16, `PageCharset` String, `CodeVersion` UInt32, `IsLink` UInt8, `IsDownload` UInt8, `IsNotBounce` UInt8, `FUniqID` UInt64, `HID` UInt32, `IsOldCounter` UInt8, `IsEvent` UInt8, `IsParameter` UInt8, `DontCountHits` UInt8, `WithHash` UInt8, `HitColor` FixedString(1), `UTCEventTime` DateTime, `Age` UInt8, `Sex` UInt8, `Income` UInt8, `Interests` UInt16, `Robotness` UInt8, `GeneralInterests` Array(UInt16), `RemoteIP` UInt32, `RemoteIP6` FixedString(16), `WindowName` Int32, `OpenerName` Int32, `HistoryLength` Int16, `BrowserLanguage` FixedString(2), `BrowserCountry` FixedString(2), `SocialNetwork` String, `SocialAction` String, `HTTPError` UInt16, `SendTiming` Int32, `DNSTiming` Int32, `ConnectTiming` Int32, `ResponseStartTiming` Int32, `ResponseEndTiming` Int32, `FetchTiming` Int32, `RedirectTiming` Int32, `DOMInteractiveTiming` Int32, `DOMContentLoadedTiming` Int32, `DOMCompleteTiming` Int32, `LoadEventStartTiming` Int32, `LoadEventEndTiming` Int32, `NSToDOMContentLoadedTiming` Int32, `FirstPaintTiming` Int32, `RedirectCount` Int8, `SocialSourceNetworkID` UInt8, `SocialSourcePage` String, `ParamPrice` Int64, `ParamOrderID` String, `ParamCurrency` FixedString(3), `ParamCurrencyID` UInt16, `GoalsReached` Array(UInt32), `OpenstatServiceName` String, `OpenstatCampaignID` String, `OpenstatAdID` String, `OpenstatSourceID` String, `UTMSource` String, `UTMMedium` String, `UTMCampaign` String, `UTMContent` String, `UTMTerm` String, `FromTag` String, `HasGCLID` UInt8, `RefererHash` UInt64, `URLHash` UInt64, `CLID` UInt32, `YCLID` UInt64, `ShareService` String, `ShareURL` String, `ShareTitle` String, `ParsedParams` Nested( Key1 String, Key2 String, Key3 String, Key4 String, Key5 String, ValueDouble Float64), `IslandID` FixedString(16), `RequestNum` UInt32, `RequestTry` UInt8 ) ENGINE = MergeTree() PARTITION BY toYYYYMM(EventDate) 
ORDER BY (CounterID, EventDate, intHash32(UserID)) SAMPLE BY intHash32(UserID) SETTINGS index_granularity = 8192), Stack trace (when copying this message, always include the lines below):
0. DB::ErrnoException::ErrnoException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int, std::__1::optional<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > const&) @ 0x7b4787b in /usr/bin/clickhouse
1. DB::throwFromErrnoWithPath(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int) @ 0x7b47bbf in /usr/bin/clickhouse
2. ? @ 0xd6f1360 in /usr/bin/clickhouse
3. DB::renameNoReplace(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xd6f0dbb in /usr/bin/clickhouse
4. DB::DatabaseAtomic::commitCreateTable(DB::ASTCreateQuery const&, std::__1::shared_ptr<DB::IStorage> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xd6cfbb3 in /usr/bin/clickhouse
5. DB::DatabaseOnDisk::createTable(DB::Context const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IStorage> const&, std::__1::shared_ptr<DB::IAST> const&) @ 0xd6e99e0 in /usr/bin/clickhouse
6. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0xd8921f3 in /usr/bin/clickhouse
7. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0xd89120a in /usr/bin/clickhouse
8. DB::InterpreterCreateQuery::execute() @ 0xd894327 in /usr/bin/clickhouse
9. ? @ 0xdca2baa in /usr/bin/clickhouse
10. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0xdca177d in /usr/bin/clickhouse
11. DB::TCPHandler::runImpl() @ 0xe31b706 in /usr/bin/clickhouse
12. DB::TCPHandler::run() @ 0xe327bd7 in /usr/bin/clickhouse
13. Poco::Net::TCPServerConnection::start() @ 0x10aac89f in /usr/bin/clickhouse
14. Poco::Net::TCPServerDispatcher::run() @ 0x10aae2ae in /usr/bin/clickhouse
15. Poco::PooledThread::run() @ 0x10bdb499 in /usr/bin/clickhouse
16. Poco::ThreadImpl::runnableEntry(void*) @ 0x10bd742a in /usr/bin/clickhouse
17. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
18. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
| https://github.com/ClickHouse/ClickHouse/issues/17661 | https://github.com/ClickHouse/ClickHouse/pull/17664 | 2b3281888afc233cea12f74c35d1462c3ec985d8 | 45b4b3648c47b14c6eae90d171662d79a4c6ea93 | "2020-12-01T09:38:31Z" | c++ | "2020-12-02T10:55:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,613 | ["src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp"] | format error, argument not found | (you don't have to strictly follow this form)
**Describe the bug**
A lot of background logs with that error, constantly, on different tables.
**How to reproduce**
From 20.3 to 20.11.4.13.
**Expected behavior**
I think there should be some underlying error log, but I can't see one.
**Error message and/or stacktrace**
2020.11.30 03:21:19.382728 [ 58 ] {} <Error> tracking.s_mergetrackingresult: DB::BackgroundProcessingPoolTaskResult DB::StorageReplicatedMergeTree::queueTask(): std::exception. Code: 1001, type: fmt::v7::format_error, e.what() = argument not found, Stack trace (when copying this message, always include the lines below):
0. fmt::v7::format_error::format_error(char const*) @ 0x7b3c899 in /usr/bin/clickhouse
1. fmt::v7::detail::error_handler::on_error(char const*) @ 0x10c4b280 in /usr/bin/clickhouse
2. char const* fmt::v7::detail::parse_replacement_field<char, fmt::v7::detail::format_handler<fmt::v7::detail::arg_formatter<fmt::v7::detail::buffer_appender<char>, char>, char, fmt::v7::basic_format_context<fmt::v7::detail::buffer_appender<char>, char> >&>(char const*, char const*, fmt::v7::detail::format_handler<fmt::v7::detail::arg_formatter<fmt::v7::detail::buffer_appender<char>, char>, char, fmt::v7::basic_format_context<fmt::v7::detail::buffer_appender<char>, char> >&) @ 0x7b408ca in /usr/bin/clickhouse
3. fmt::v7::basic_format_context<fmt::v7::detail::buffer_appender<char>, char>::iterator fmt::v7::vformat_to<fmt::v7::detail::arg_formatter<fmt::v7::detail::buffer_appender<char>, char>, char, fmt::v7::basic_format_context<fmt::v7::detail::buffer_appender<char>, char> >(fmt::v7::detail::arg_formatter<fmt::v7::detail::buffer_appender<char>, char>::iterator, fmt::v7::basic_string_view<char>, fmt::v7::basic_format_args<fmt::v7::basic_format_context<fmt::v7::detail::buffer_appender<char>, char> >, fmt::v7::detail::locale_ref) @ 0x7b3ca8c in /usr/bin/clickhouse
4. fmt::v7::detail::vformat(fmt::v7::basic_string_view<char>, fmt::v7::format_args) @ 0x10c4b326 in /usr/bin/clickhouse
5. DB::ReplicatedMergeTreeQueue::shouldExecuteLogEntry(DB::ReplicatedMergeTreeLogEntry const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&, DB::MergeTreeDataMergerMutator&, DB::MergeTreeData&, std::__1::lock_guard<std::__1::mutex>&) const @ 0xe23c028 in /usr/bin/clickhouse
6. DB::ReplicatedMergeTreeQueue::selectEntryToProcess(DB::MergeTreeDataMergerMutator&, DB::MergeTreeData&) @ 0xe23f034 in /usr/bin/clickhouse
7. DB::StorageReplicatedMergeTree::queueTask() @ 0xdf16b04 in /usr/bin/clickhouse
8. DB::BackgroundProcessingPool::workLoopFunc() @ 0xe05e3b3 in /usr/bin/clickhouse
9. ? @ 0xe05eef1 in /usr/bin/clickhouse
10. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7b7293d in /usr/bin/clickhouse
11. ? @ 0x7b76463 in /usr/bin/clickhouse
12. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
13. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 20.11.4.13 (official build))
| https://github.com/ClickHouse/ClickHouse/issues/17613 | https://github.com/ClickHouse/ClickHouse/pull/17615 | 0b5261b1afd090583be97d40d60de280df8ae182 | 21bbc7bc19f0a25b28a269392d4e96b034f60a02 | "2020-11-30T08:26:07Z" | c++ | "2020-12-01T11:26:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,540 | ["docs/en/sql-reference/functions/other-functions.md", "src/DataTypes/IDataType.h", "src/Functions/byteSize.cpp", "src/Functions/registerFunctionsMiscellaneous.cpp", "tests/queries/0_stateless/01622_byte_size.reference", "tests/queries/0_stateless/01622_byte_size.sql"] | Function `byteSize` | The function should return an estimate of uncompressed byte size of its arguments in memory.
E.g. for UInt32 argument it will return constant 4, for String argument - the string length + 9.
The function can take multiple arguments. The typical application is `byteSize(*)`.
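A sketch of how that typical call could look (the table name here is made up for illustration, and the function does not exist yet at the time of writing):

```sql
-- Per-row estimate of the uncompressed in-memory size of all columns
SELECT byteSize(*) AS row_bytes
FROM some_table
LIMIT 10;
```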
**Use case**
Suppose you have a service that stores data for multiple clients in one table. Users will pay per data volume. So, you need to implement accounting of users data volume. The function will allow to calculate the data size on per-row basis. | https://github.com/ClickHouse/ClickHouse/issues/17540 | https://github.com/ClickHouse/ClickHouse/pull/18579 | df188aa49d2a8a3d483e642fba45e955dca3d702 | 4cf7f0c6073f1b5be58fca8747e4995a38f20c31 | "2020-11-29T07:32:10Z" | c++ | "2021-01-01T17:00:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,500 | ["tests/queries/0_stateless/01901_test_attach_partition_from.reference", "tests/queries/0_stateless/01901_test_attach_partition_from.sql"] | attach partition from : Transaction failed (Node exists) | 19.13
```
create table S (A Int64, D date) Engine=MergeTree partition by D order by A;
insert into S values(1, '2020-01-01');
create table D (A Int64, D date)
Engine=ReplicatedMergeTree('/clickhouse/{cluster}/tables/testD', '{replica}')
partition by D order by A;
alter table D attach partition '2020-01-01' from S;
select * from D
┌─A─┬──────────D─┐
│ 1 │ 2020-01-01 │
└───┴────────────┘
```
```
20.8.7
alter table D attach partition '2020-01-01' from S;
Received exception from server (version 20.8.7):
Code: 999. DB::Exception: Received from localhost:9000. DB::Exception: Transaction failed (Node exists): Op #4, path: /clickhouse/oCHDWcsp1/tables/testD/blocks/20200101_replace_from_71942F3BA59FDAB5EF3518BFE0EA8AA3.
alter table D REPLACE partition '2020-01-01' from S;
Ok.
```
Weird but it's working without errors if the table D has more than 1 replicas. | https://github.com/ClickHouse/ClickHouse/issues/17500 | https://github.com/ClickHouse/ClickHouse/pull/25060 | ab2529d4f0440191c1d31acaeae55619acbadb9d | 59071d9bd139f49333b8c481773156c851a032f9 | "2020-11-27T15:51:21Z" | c++ | "2021-06-08T10:22:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,469 | ["CMakeLists.txt", "contrib/libmetrohash/CMakeLists.txt", "docker/packager/packager", "release"] | Check "unbundled" build with SSE2 only. | **Use case**
It is needed to check that it's possible to build ClickHouse without SSE4.2. | https://github.com/ClickHouse/ClickHouse/issues/17469 | https://github.com/ClickHouse/ClickHouse/pull/27683 | 40f5e06a8d9b2074b5985a8042f3ebf9940c77f4 | 135a5a2453d6f1bcef17c6f64d456a3c1cce7f36 | "2020-11-27T11:16:12Z" | c++ | "2021-08-15T10:25:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,404 | ["src/Common/HashTable/HashTable.h", "src/Common/SpaceSaving.h", "src/Common/tests/CMakeLists.txt", "src/Common/tests/gtest_hash_table.cpp", "src/Common/tests/hash_table.cpp", "tests/queries/0_stateless/00453_top_k.reference", "tests/queries/0_stateless/00453_top_k.sql"] | Query Fuzzer, `topK`, segfault. | https://clickhouse-test-reports.s3.yandex.net/17375/99073c26ee2692f8efe2ab14b2d8ae0fd813b80e/fuzzer/fuzzer.log

```
SELECT
    k,
    topK(v)
FROM
(
    SELECT
        number % 7 AS k,
        arrayMap(x -> arrayMap(x -> if(x = 0, NULL, toString(x)), range(x)), range(intDiv(number, 1))) AS v
    FROM system.numbers
    LIMIT 257
)
GROUP BY k
ORDER BY k ASC
``` | https://github.com/ClickHouse/ClickHouse/issues/17404 | https://github.com/ClickHouse/ClickHouse/pull/17845 | 04e222f6f38acb0d9395751aeb8c5096fb484f0a | 8df478911323771ff94aada88cf79a82b6a7822e | "2020-11-25T15:00:03Z" | c++ | "2020-12-13T01:09:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,392 | ["src/Interpreters/RedundantFunctionsInOrderByVisitor.h", "tests/queries/0_stateless/01323_redundant_functions_in_order_by.reference", "tests/queries/0_stateless/01593_functions_in_order_by.reference", "tests/queries/0_stateless/01593_functions_in_order_by.sql"] | optimize_redundant_functions_in_order_by issue | ```
SELECT msg, toDateTime(intDiv(ms, 1000)) AS time
FROM (
SELECT 'hello' as msg, toUInt64(t)*1000 as ms FROM generateRandom('t DateTime') LIMIT 10
)
ORDER BY msg, time
```
```
/* optimize_redundant_functions_in_order_by=1 -- (default) */
SELECT msg, toDateTime(intDiv(ms, 1000)) AS time
FROM (
SELECT 'hello' as msg, toUInt64(t)*1000 as ms FROM generateRandom('t DateTime') LIMIT 10
)
ORDER BY msg, time
Query id: 85b48419-d2eb-40e1-9606-088a1136ad6c
┌─msg───┬────────────────time─┐
│ hello │ 2048-05-08 14:54:21 │
│ hello │ 2003-01-25 22:23:27 │
│ hello │ 2049-03-11 15:29:16 │
│ hello │ 2002-04-14 08:17:00 │
│ hello │ 1990-04-22 03:55:27 │
│ hello │ 2034-04-14 21:30:22 │
│ hello │ 2011-11-03 04:36:37 │
│ hello │ 1988-12-06 02:15:03 │
│ hello │ 2017-01-17 14:44:14 │
│ hello │ 2094-05-25 13:22:16 │
└───────┴─────────────────────┘
10 rows in set. Elapsed: 0.002 sec.
explain syntax SELECT msg, toDateTime(intDiv(ms, 1000)) AS time
FROM (
SELECT 'hello' as msg, toUInt64(t)*1000 as ms FROM generateRandom('t DateTime') LIMIT 10
)
ORDER BY msg, time
┌─explain───────────────────────────────────┐
│ SELECT                                    │
│     msg,                                  │
│     toDateTime(intDiv(ms, 1000)) AS time  │
│ FROM                                      │
│ (                                         │
│     SELECT                                │
│         'hello' AS msg,                   │
│         toUInt64(t) * 1000 AS ms          │
│     FROM generateRandom('t DateTime')     │
│     LIMIT 10                              │
│ )                                         │
│ ORDER BY msg ASC                          │
└───────────────────────────────────────────┘
```
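For completeness, a sketch of the same query with the optimization turned off:

```sql
SELECT msg, toDateTime(intDiv(ms, 1000)) AS time
FROM (
    SELECT 'hello' AS msg, toUInt64(t) * 1000 AS ms FROM generateRandom('t DateTime') LIMIT 10
)
ORDER BY msg, time
SETTINGS optimize_redundant_functions_in_order_by = 0;
```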
Disabling the optimizer resolves the issue. | https://github.com/ClickHouse/ClickHouse/issues/17392 | https://github.com/ClickHouse/ClickHouse/pull/17471 | 70af62af79609e22d32dc1a49aa98fce3f638078 | 7a6c72ce880861bdb134c751d8fdb6c1826687bd | "2020-11-25T12:37:33Z" | c++ | "2020-11-28T05:34:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,367 | ["tests/queries/0_stateless/01915_for_each_crakjie.reference", "tests/queries/0_stateless/01915_for_each_crakjie.sql"] | Array join incorrect result when requesting distributed table | When running some specific query with arrayJoin result are not what they are expected to be.
This seems to be correlated with the usage of the -State / -Merge combinators.
Clickhouse 20.3.20.6
The following query
```sql
WITH arrayJoin(['a','b']) as z
SELECT
z,
sumMergeForEach(x) as x
FROM
(
SELECT
sumStateForEach([1.0,1.1,1.1300175]) as x
FROM
aDistributedTable
)
GROUP BY z
```
It returns [1.5e-323, 3e-323, a value that depends on the number of elements in your table].
When used on a shard table or the `numbers` table, the results are correct.
When used with finalizeAggregation without GROUP BY, the results are correct.
When used with sumForEach, the results are correct.
The values above are an example, but different initial values can fail with different results.
Strangely, running the query without the GROUP BY returns the right results.
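For comparison, a sketch of the finalizeAggregation variant that returned correct results (no GROUP BY, as noted above):

```sql
WITH arrayJoin(['a', 'b']) AS z
SELECT
    z,
    finalizeAggregation(x) AS x
FROM
(
    SELECT sumStateForEach([1.0, 1.1, 1.1300175]) AS x
    FROM aDistributedTable
);
```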
| https://github.com/ClickHouse/ClickHouse/issues/17367 | https://github.com/ClickHouse/ClickHouse/pull/25286 | 0972b53ef4f39a67a5993505e43b9ecdffd7620f | 1aac27e9034e7d7b3107bd80ad053ad75b4fe3be | "2020-11-24T16:56:14Z" | c++ | "2021-06-17T06:13:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,317 | ["src/Storages/MergeTree/MergeTreeReadPool.cpp", "src/Storages/MergeTree/MergeTreeReverseSelectProcessor.cpp", "src/Storages/MergeTree/MergeTreeSelectProcessor.cpp", "tests/queries/0_stateless/01903_correct_block_size_prediction_with_default.reference", "tests/queries/0_stateless/01903_correct_block_size_prediction_with_default.sql"] | On the fly DEFAULT values calculation uses 8 times more memory than expression itself. | **Describe the bug**
If we add a new column to a table via the ALTER TABLE ADD COLUMN command and use an expression with array functions in its DEFAULT clause, it requires much more memory than using that expression by itself in a query.
**How to reproduce**
Clickhouse version 20.12, 20.8
```
CREATE TABLE test_extract(str String, arr Array(Array(String)) ALIAS extractAllGroupsHorizontal(str, '\\W(\\w+)=("[^"]*?"|[^",}]*)')) ENGINE=MergeTree() PARTITION BY tuple() ORDER BY tuple() ;
INSERT INTO test_extract WITH range(30) as range_arr, arrayMap(x-> concat(toString(x),'Id'), range_arr) as key, arrayMap(x -> rand() % 30, range_arr) as val, arrayStringConcat(arrayMap((x,y) -> concat(x,'=',toString(y)), key, val),',') as str SELECT str FROM numbers(2000000);
ALTER TABLE test_extract add column `15Id` Nullable(UInt16) DEFAULT toUInt16OrNull(arrayFirst((v, k) -> (k = '15Id'),arr[2],arr[1]));
SELECT DISTINCT `15Id` FROM test_extract;
2020.11.23 18:34:24.541648 [ 10823 ] {2b0a29fb-885e-4e3b-8acf-7379b3996c63} <Information> executeQuery: Read 2000000 rows, 440.62 MiB in 8.5199527 sec., 234743 rows/sec., 51.72 MiB/sec.
2020.11.23 18:34:24.541787 [ 10823 ] {2b0a29fb-885e-4e3b-8acf-7379b3996c63} <Debug> MemoryTracker: Peak memory usage (for query): 906.80 MiB.
30 rows in set. Elapsed: 8.521 sec. Processed 2.00 million rows, 462.03 MB (234.71 thousand rows/s., 54.22 MB/s.)
SELECT DISTINCT toUInt16OrNull(arrayFirst((v, k) -> (k = '15Id'),arr[2],arr[1])) FROM test_extract;
2020.11.23 18:34:56.722182 [ 10823 ] {f9520e0f-1982-46f5-9123-cb6abbb732b2} <Information> executeQuery: Read 2000000 rows, 434.90 MiB in 6.9211621 sec., 288968 rows/sec., 62.84 MiB/sec.
2020.11.23 18:34:56.722235 [ 10823 ] {f9520e0f-1982-46f5-9123-cb6abbb732b2} <Debug> MemoryTracker: Peak memory usage (for query): 115.43 MiB.
30 rows in set. Elapsed: 6.923 sec. Processed 2.00 million rows, 456.03 MB (288.87 thousand rows/s., 65.87 MB/s.)
```
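(For reference on the numbers above: 906.80 MiB / 115.43 MiB ≈ 7.9, which is where the "8 times" in the title comes from.)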
**Expected behavior**
Both queries would use similar amounts of RAM.
| https://github.com/ClickHouse/ClickHouse/issues/17317 | https://github.com/ClickHouse/ClickHouse/pull/25917 | 3c395389b0be1bf4ace9c2db3b05f1db063063aa | 3d05f07eceb502ba0ce4b3480e009f5affbcf5cf | "2020-11-23T15:46:11Z" | c++ | "2021-07-13T17:05:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,302 | ["tests/queries/0_stateless/01763_long_ttl_group_by.reference", "tests/queries/0_stateless/01763_long_ttl_group_by.sql"] | TTL GROUP BY + OPTIMIZE FINAL exception | **Describe the bug**
Clickhouse version 20.12
```
CREATE TABLE test_ttl_group_by(key UInt32, ts DateTime, value UInt32) ENGINE =MergeTree() PARTITION BY tuple() ORDER BY (key, toStartOfInterval(ts, toIntervalMinute(2)), ts) TTL ts + INTERVAL 5 MINUTE GROUP BY key, toStartOfInterval(ts, toIntervalMinute(2)) SET value = sum(value);
INSERT INTO test_ttl_group_by SELECT 1 as key, now() + number, number FROM numbers(1000);
OPTIMIZE TABLE test_ttl_group_by FINAL;
Code: 9, e.displayText() = DB::Exception: Sizes of columns doesn't match: key: 8, ts: 16 (version 20.12.1.5154 (official build)) (from 127.0.0.1:46376) (in query: OPTIMIZE TABLE test_ttl_group_by FINAL;), Stack trace (when copying this message, always include the lines below):
```
**Error message and/or stacktrace**
```
Code: 9, e.displayText() = DB::Exception: Sizes of columns doesn't match: key: 8, ts: 16 (version 20.12.1.5154 (official build)) (from 127.0.0.1:46376) (in query: OPTIMIZE TABLE test_ttl_group_by FINAL;), Stack trace (when copying this message, always include the lines below):
0. DB::Block::checkNumberOfRows(bool) const @ 0xd392c35 in /usr/bin/clickhouse
1. DB::MergedBlockOutputStream::writeImpl(DB::Block const&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 15ul, 16ul> const*) @ 0xe26adc4 in /usr/bin/clickhouse
2. DB::MergeTreeDataMergerMutator::mergePartsToTemporaryPart(DB::FutureMergedMutatedPart const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::BackgroundProcessListEntry<DB::MergeListElement, DB::MergeInfo>&, std::__1::shared_ptr<DB::RWLockImpl::LockHolderImpl>&, long, DB::Context const&, std::__1::unique_ptr<DB::IReservation, std::__1::default_delete<DB::IReservation> > const&, bool) @ 0xe16f00b in /usr/bin/clickhouse
3. DB::StorageMergeTree::mergeSelectedParts(std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, DB::StorageMergeTree::MergeMutateSelectedEntry&, std::__1::shared_ptr<DB::RWLockImpl::LockHolderImpl>&) @ 0xdf1d160 in /usr/bin/clickhouse
4. DB::StorageMergeTree::merge(bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*) @ 0xdf1ce49 in /usr/bin/clickhouse
5. DB::StorageMergeTree::optimize(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::IAST> const&, bool, bool, DB::Context const&) @ 0xdf20fbc in /usr/bin/clickhouse
6. DB::InterpreterOptimizeQuery::execute() @ 0xd9c1264 in /usr/bin/clickhouse
7. ? @ 0xdcd608a in /usr/bin/clickhouse
8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0xdcd4c5d in /usr/bin/clickhouse
9. DB::TCPHandler::runImpl() @ 0xe392166 in /usr/bin/clickhouse
10. DB::TCPHandler::run() @ 0xe39e667 in /usr/bin/clickhouse
11. Poco::Net::TCPServerConnection::start() @ 0x10b27ddf in /usr/bin/clickhouse
12. Poco::Net::TCPServerDispatcher::run() @ 0x10b297ee in /usr/bin/clickhouse
13. Poco::PooledThread::run() @ 0x10c569d9 in /usr/bin/clickhouse
14. Poco::ThreadImpl::runnableEntry(void*) @ 0x10c5296a in /usr/bin/clickhouse
15. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
16. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
Received exception from server (version 20.12.1):
Code: 9. DB::Exception: Received from localhost:9000. DB::Exception: Sizes of columns doesn't match: key: 8, ts: 16.
``` | https://github.com/ClickHouse/ClickHouse/issues/17302 | https://github.com/ClickHouse/ClickHouse/pull/21848 | bf5b8e1a1419ed02cf244b13a36db421d13ba6a1 | 868766ac47b5ea2e6261c6860eb6f06375feba2e | "2020-11-23T12:21:27Z" | c++ | "2021-03-21T19:20:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,294 | ["src/Processors/QueryPlan/AddingMissedStep.cpp", "src/Processors/QueryPlan/DistinctStep.cpp", "src/Processors/QueryPlan/ExpressionStep.cpp", "src/Processors/QueryPlan/ITransformingStep.cpp", "tests/queries/0_stateless/01582_distinct_optimization.reference", "tests/queries/0_stateless/01582_distinct_optimization.sh", "tests/queries/0_stateless/01582_distinct_subquery_groupby.reference", "tests/queries/0_stateless/01582_distinct_subquery_groupby.sql"] | Distinct on subquery with group by may return duplicate result | **Describe the bug**
ClickHouse version: 20.8.2.3
`SELECT DISTINCT b FROM (SELECT a, b FROM d GROUP BY a, b)` may return a duplicate result; here are the steps to reproduce:
```sql
CREATE TABLE test_local ON CLUSTER xxx (a String, b Int) Engine=TinyLog;
CREATE TABLE test ON CLUSTER xxx (a String, b Int) Engine = Distributed('xxx', 'default', 'test_local', b);
INSERT INTO test VALUES('a', 0), ('a', 1), ('b', 0)
SELECT DISTINCT b FROM (SELECT b FROM test GROUP BY a, b)
```
Expected result:
0
1
Actual result:
0
0
1
**Possible reason:**
`DistinctStep::checkColumnsAlreadyDistinct` returns true when `b` exists in its front input stream's `distinct_columns` (which are `a` and `b` in the previous SQL), so `DistinctStep` is skipped in the final execution pipeline. However, as the comment of `DataStream::distinct_columns` says ("Tuples with those columns are distinct. It doesn't mean that columns are distinct separately."), `DistinctStep::checkColumnsAlreadyDistinct` may return a wrong result.
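A minimal sketch of that semantic gap, independent of the distributed setup: the tuples below are pairwise distinct, yet column `b` alone still contains a duplicate 0, so the outer DISTINCT must not be skipped:

```sql
SELECT DISTINCT b
FROM
(
    SELECT 'a' AS a, 0 AS b
    UNION ALL SELECT 'a', 1
    UNION ALL SELECT 'b', 0
);
```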
| https://github.com/ClickHouse/ClickHouse/issues/17294 | https://github.com/ClickHouse/ClickHouse/pull/17439 | 875a0a04eb086b599fd3e63f2ef3c7b5c860e18a | 60af8219eea9b1c7e3d7e3a554698d9a8b9105a7 | "2020-11-23T09:05:06Z" | c++ | "2020-11-27T10:26:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,279 | ["base/daemon/SentryWriter.cpp", "tests/integration/test_grpc_protocol/configs/grpc_port.xml", "tests/integration/test_grpc_protocol_ssl/configs/grpc_config.xml"] | Write the number of CPU cores, memory amount, amount of free space in data dir to Sentry | **Use case**
Ignore reports from embarrassingly memory-constrained instances.
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,278 | ["base/daemon/SentryWriter.cpp", "programs/keeper/Keeper.cpp", "programs/server/Server.cpp", "src/Core/ServerUUID.cpp", "src/Core/ServerUUID.h", "src/Functions/registerFunctionsMiscellaneous.cpp", "src/Functions/serverUUID.cpp", "tests/integration/test_replicated_database/test.py"] | Generate server UUID at first start. | Write a file `uuid` in the server's data directory that contains a random UUID, generated at startup if the file does not exist.
It will be used for several purposes (see the usage sketch after this list):
- to anonymously identify server in Sentry (currently Sentry is unusable);
- (possible future usage) to identify server in ensemble.
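A sketch of how this could surface to users — the exact shape is hypothetical, though the linked change does touch `src/Functions/serverUUID.cpp`, which suggests a function along these lines:
```sql
-- hypothetical usage: a stable, randomly generated per-server identifier
SELECT serverUUID();
```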
@tavplubix comments? | https://github.com/ClickHouse/ClickHouse/issues/17278 | https://github.com/ClickHouse/ClickHouse/pull/27755 | 96a5c4b033e57dcc0072cd4cbd7727e2c10a8196 | 9ef45d92c28381ae5b325c7e3c0b0cd499717522 | "2020-11-23T02:12:37Z" | c++ | "2021-08-19T11:59:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,244 | ["src/Interpreters/ExtractExpressionInfoVisitor.cpp", "src/Interpreters/ExtractExpressionInfoVisitor.h", "src/Interpreters/PredicateExpressionsOptimizer.cpp", "tests/queries/0_stateless/01582_deterministic_function_with_predicate.reference", "tests/queries/0_stateless/01582_deterministic_function_with_predicate.sql"] | Condition pushdown for rand() > value multiplies the probabilities | **Describe the bug**
If we push down conditions containing the rand() function into a subquery, we effectively multiply the probabilities.
**How to reproduce**
ClickHouse version 20.12
```
SELECT count(*)
FROM
(
SELECT number
FROM
(
SELECT number
FROM numbers(1000000)
)
WHERE rand64() < (0.01 * 18446744073709552000.)
)
Query id: 13c8ae5e-ed90-4610-b7c3-8e8d3bf27f12
┌─count()─┐
│     114 │
└─────────┘
1 rows in set. Elapsed: 0.008 sec. Processed 1.05 million rows, 8.38 MB (136.06 million rows/s., 1.09 GB/s.)
EXPLAIN SYNTAX
SELECT count(*)
FROM
(
SELECT number
FROM
(
SELECT number
FROM numbers(1000000)
)
WHERE rand64() < (0.01 * 18446744073709552000.)
)
Query id: 6036d15a-32ba-4161-a6fe-41eff40af246
┌─explain─────────────────────────────────────────────────┐
│ SELECT count()                                           │
│ FROM                                                     │
│ (                                                        │
│     SELECT number                                        │
│     FROM                                                 │
│     (                                                    │
│         SELECT number                                    │
│         FROM numbers(1000000)                            │
│         WHERE rand64() < (0.01 * 18446744073709552000.)  │
│     )                                                    │
│     WHERE rand64() < (0.01 * 18446744073709552000.)      │
│ )                                                        │
└──────────────────────────────────────────────────────────┘
```
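For reference, the arithmetic matches a doubled filter: applied once, the predicate should keep about 0.01 × 1,000,000 = 10,000 rows; applied twice independently (as in the rewritten query above), it keeps about 0.01 × 0.01 × 1,000,000 = 100 rows — consistent with the 114 rows actually returned.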
| https://github.com/ClickHouse/ClickHouse/issues/17244 | https://github.com/ClickHouse/ClickHouse/pull/17273 | 6c6edfc050cb8cdd8de756948f303995a95bd4d3 | af4c3956dad4ef989cdd0ce3c8f00f2e19f8f255 | "2020-11-20T23:44:38Z" | c++ | "2020-12-20T06:51:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,228 | ["src/Storages/StorageJoin.cpp", "tests/queries/0_stateless/01586_storage_join_low_cardinality_key.reference", "tests/queries/0_stateless/01586_storage_join_low_cardinality_key.sql"] | Segmentation fault on Join engine with LowCardinality column | Tested in `20.11.3.3-stable` and `20.7.2.30-stable`.
```SQL
CREATE TABLE low_card
(
`lc` LowCardinality(String)
)
ENGINE = Join(ANY, LEFT, lc);
```
```SQL
INSERT INTO low_card VALUES ( '1' );
```
```
SELECT * FROM low_card WHERE lc = '1';
[fdea5737f3fb] 2020.11.20 12:32:03.028811 [ 117 ] <Fatal> BaseDaemon: ########################################
[fdea5737f3fb] 2020.11.20 12:32:03.028952 [ 117 ] <Fatal> BaseDaemon: (version 20.11.3.3 (official build), build id: C88CD350740ED614) (from thread 112) (query_id: 0b3ea2d5-8e14-411c-99aa-559f8c597d33) Received signal Segmentation fault (11)
[fdea5737f3fb] 2020.11.20 12:32:03.029003 [ 117 ] <Fatal> BaseDaemon: Address: 0x1 Access: read. Address not mapped to object.
[fdea5737f3fb] 2020.11.20 12:32:03.029087 [ 117 ] <Fatal> BaseDaemon: Stack trace: 0xdd25745 0xaa8dc88 0xa8c4a6f 0xa8acf81 0xa8c419b 0xa8acbb8 0x91e1ace 0x920d960 0x920e81e 0xd93f141 0xd94331d 0xe484728 0xde87410 0xe34b8c5 0xe38774c 0xe384877 0xe383069 0xe382bed 0xe38efed 0x7b6293d 0x7b66463 0x7f601ba69609 0x7f601b97f293
[fdea5737f3fb] 2020.11.20 12:32:03.029278 [ 117 ] <Fatal> BaseDaemon: 2. DB::ColumnString::compareAt(unsigned long, unsigned long, DB::IColumn const&, int) const @ 0xdd25745 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029353 [ 117 ] <Fatal> BaseDaemon: 3. DB::GenericComparisonImpl<DB::EqualsOp<int, int> >::vectorConstant(DB::IColumn const&, DB::IColumn const&, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 15ul, 16ul>&) @ 0xaa8dc88 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029423 [ 117 ] <Fatal> BaseDaemon: 4. DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeGenericIdenticalTypes(DB::IColumn const*, DB::IColumn const*) const @ 0xa8c4a6f in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029472 [ 117 ] <Fatal> BaseDaemon: 5. DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xa8acf81 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029518 [ 117 ] <Fatal> BaseDaemon: 6. DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeWithConstString(std::__1::shared_ptr<DB::IDataType const> const&, DB::IColumn const*, DB::IColumn const*, std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xa8c419b in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029564 [ 117 ] <Fatal> BaseDaemon: 7. DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xa8acbb8 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029679 [ 117 ] <Fatal> BaseDaemon: 8. DB::DefaultExecutable::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) @ 0x91e1ace in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029729 [ 117 ] <Fatal> BaseDaemon: 9. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) @ 0x920d960 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029780 [ 117 ] <Fatal> BaseDaemon: 10. DB::ExecutableFunctionAdaptor::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) @ 0x920e81e in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029825 [ 117 ] <Fatal> BaseDaemon: 11. DB::ExpressionAction::execute(DB::Block&, bool) const @ 0xd93f141 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029892 [ 117 ] <Fatal> BaseDaemon: 12. DB::ExpressionActions::execute(DB::Block&, bool) const @ 0xd94331d in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.029943 [ 117 ] <Fatal> BaseDaemon: 13. DB::FilterTransform::transform(DB::Chunk&) @ 0xe484728 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.030030 [ 117 ] <Fatal> BaseDaemon: 14. DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0xde87410 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.030071 [ 117 ] <Fatal> BaseDaemon: 15. DB::ISimpleTransform::work() @ 0xe34b8c5 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.030151 [ 117 ] <Fatal> BaseDaemon: 16. ? @ 0xe38774c in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.030259 [ 117 ] <Fatal> BaseDaemon: 17. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0xe384877 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.030306 [ 117 ] <Fatal> BaseDaemon: 18. DB::PipelineExecutor::executeImpl(unsigned long) @ 0xe383069 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.030350 [ 117 ] <Fatal> BaseDaemon: 19. DB::PipelineExecutor::execute(unsigned long) @ 0xe382bed in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.030396 [ 117 ] <Fatal> BaseDaemon: 20. ? @ 0xe38efed in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.030442 [ 117 ] <Fatal> BaseDaemon: 21. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7b6293d in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.030483 [ 117 ] <Fatal> BaseDaemon: 22. ? @ 0x7b66463 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:32:03.030532 [ 117 ] <Fatal> BaseDaemon: 23. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
[fdea5737f3fb] 2020.11.20 12:32:03.030578 [ 117 ] <Fatal> BaseDaemon: 24. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
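Until this is fixed, a possible workaround (untested sketch) is to declare the Join key without LowCardinality:
```sql
CREATE TABLE low_card_plain
(
    `lc` String  -- plain String key instead of LowCardinality(String)
)
ENGINE = Join(ANY, LEFT, lc);
```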
**Additional context**
A query with no filtering returns this error (without a segfault).
```
SELECT * FROM low_card;
Code: 271. DB::Exception: Data compressed with different methods, given method byte 0x69, previous method byte 0x82: while receiving packet from localhost:9000
```
```
SELECT CAST(lc AS String) FROM low_card;
[fdea5737f3fb] 2020.11.20 12:57:02.512467 [ 116 ] <Fatal> BaseDaemon: ########################################
[fdea5737f3fb] 2020.11.20 12:57:02.512676 [ 116 ] <Fatal> BaseDaemon: (version 20.11.3.3 (official build), build id: C88CD350740ED614) (from thread 111) (query_id: 20c21126-6b99-4c42-afb2-ab7967b6c19c) Received signal Segmentation fault (11)
[fdea5737f3fb] 2020.11.20 12:57:02.512852 [ 116 ] <Fatal> BaseDaemon: Address: 0x10 Access: read. Address not mapped to object.
[fdea5737f3fb] 2020.11.20 12:57:02.512897 [ 116 ] <Fatal> BaseDaemon: Stack trace: 0x93293e9 0x9328fe3 0x9241f85 0x920d960 0x920dfde 0xd93f141 0xd94331d 0xe481dc7 0xde87410 0xe34b8c5 0xe38774c 0xe384877 0xe383069 0xe382bed 0xe38efed 0x7b6293d 0x7b66463 0x7f415ddc3609 0x7f415dcd9293
[fdea5737f3fb] 2020.11.20 12:57:02.513818 [ 116 ] <Fatal> BaseDaemon: 2. DB::FunctionCast::prepareUnpackDictionaries(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&) const::'lambda0'(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, DB::ColumnNullable const*, unsigned long)::operator()(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, DB::ColumnNullable const*, unsigned long) const @ 0x93293e9 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.513927 [ 116 ] <Fatal> BaseDaemon: 3. std::__1::__function::__func<DB::FunctionCast::prepareUnpackDictionaries(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&) const::'lambda0'(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, DB::ColumnNullable const*, unsigned long), std::__1::allocator<DB::FunctionCast::prepareUnpackDictionaries(std::__1::shared_ptr<DB::IDataType const> const&, std::__1::shared_ptr<DB::IDataType const> const&) const::'lambda0'(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, DB::ColumnNullable const*, unsigned long)>, COW<DB::IColumn>::immutable_ptr<DB::IColumn> (std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, DB::ColumnNullable const*, unsigned long)>::operator()(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, DB::ColumnNullable const*&&, unsigned long&&) @ 0x9328fe3 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.513990 [ 116 ] <Fatal> BaseDaemon: 4. DB::ExecutableFunctionCast::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) @ 0x9241f85 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514054 [ 116 ] <Fatal> BaseDaemon: 5. DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) @ 0x920d960 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514119 [ 116 ] <Fatal> BaseDaemon: 6. DB::ExecutableFunctionAdaptor::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) @ 0x920dfde in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514249 [ 116 ] <Fatal> BaseDaemon: 7. DB::ExpressionAction::execute(DB::Block&, bool) const @ 0xd93f141 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514315 [ 116 ] <Fatal> BaseDaemon: 8. DB::ExpressionActions::execute(DB::Block&, bool) const @ 0xd94331d in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514369 [ 116 ] <Fatal> BaseDaemon: 9. DB::ExpressionTransform::transform(DB::Chunk&) @ 0xe481dc7 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514484 [ 116 ] <Fatal> BaseDaemon: 10. DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0xde87410 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514546 [ 116 ] <Fatal> BaseDaemon: 11. DB::ISimpleTransform::work() @ 0xe34b8c5 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514646 [ 116 ] <Fatal> BaseDaemon: 12. ? @ 0xe38774c in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514850 [ 116 ] <Fatal> BaseDaemon: 13. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0xe384877 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514901 [ 116 ] <Fatal> BaseDaemon: 14. DB::PipelineExecutor::executeImpl(unsigned long) @ 0xe383069 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.514956 [ 116 ] <Fatal> BaseDaemon: 15. DB::PipelineExecutor::execute(unsigned long) @ 0xe382bed in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.515013 [ 116 ] <Fatal> BaseDaemon: 16. ? @ 0xe38efed in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.515077 [ 116 ] <Fatal> BaseDaemon: 17. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7b6293d in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.515175 [ 116 ] <Fatal> BaseDaemon: 18. ? @ 0x7b66463 in /usr/bin/clickhouse
[fdea5737f3fb] 2020.11.20 12:57:02.515235 [ 116 ] <Fatal> BaseDaemon: 19. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
[fdea5737f3fb] 2020.11.20 12:57:02.515339 [ 116 ] <Fatal> BaseDaemon: 20. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
| https://github.com/ClickHouse/ClickHouse/issues/17228 | https://github.com/ClickHouse/ClickHouse/pull/17397 | 60af8219eea9b1c7e3d7e3a554698d9a8b9105a7 | 85283b394483c4258b4342d3259387612bf17c8e | "2020-11-20T12:44:22Z" | c++ | "2020-11-27T10:37:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,215 | ["programs/odbc-bridge/ODBCBridge.cpp", "src/Common/XDBCBridgeHelper.h", "src/Dictionaries/ExternalQueryBuilder.h", "src/IO/ReadWriteBufferFromHTTP.h"] | Incorrect columns and segfault in odbc external dictionary updates | **Describe the bug**
I have a database containing two external dictionaries loaded from Postgres via ODBC. Dictionary updates sometimes query incorrect/fewer columns. This causes the dictionary not to update and eventually causes a segfault.
**How to reproduce**
* 20.10.4.1
* Two external dictionaries:
```
CREATE DICTIONARY parkingstats.dmn
(
`id` UInt64,
`nam` String,
`fid` UInt64
)
PRIMARY KEY id
SOURCE(ODBC(TABLE 'dmn' CONNECTION_STRING 'DSN=parking' UPDATE_FIELD 'date_updated'))
LIFETIME(MIN 60 MAX 120)
LAYOUT(HASHED())
CREATE DICTIONARY parkingstats.rpm
(
`dmn_id` UInt64,
`rpm` Float64
)
PRIMARY KEY dmn_id
SOURCE(ODBC(TABLE 'rpm' CONNECTION_STRING 'DSN=parking' UPDATE_FIELD 'updated_date'))
LIFETIME(MIN 60 MAX 120)
LAYOUT(HASHED())
```
**Expected behavior**
Dictionaries should load and update as expected
**Error message and/or stacktrace**
The first few updates work as expected:
```
2020.11.20 14:32:22.561632 [ 5488 ] {} <Trace> ODBCDictionarySource: SELECT "dmn_id", "rpm" FROM "rpm";
2020.11.20 14:35:22.345127 [ 5488 ] {} <Trace> ODBCDictionarySource: SELECT "dmn_id", "rpm" FROM "rpm" WHERE updated_date >= '2020-11-20 14:32:21';
```
Later updates query fewer columns:
```
2020.11.20 14:37:22.353143 [ 5489 ] {} <Trace> ODBCDictionarySource: SELECT "dmn_id" FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:37:32.353991 [ 5489 ] {} <Trace> ODBCDictionarySource: SELECT "dmn_id" FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:37:42.354739 [ 5513 ] {} <Trace> ODBCDictionarySource: SELECT "dmn_id" FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:37:52.355472 [ 5489 ] {} <Trace> ODBCDictionarySource: SELECT "dmn_id" FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:38:02.356139 [ 5488 ] {} <Trace> ODBCDictionarySource: SELECT "dmn_id" FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:38:17.357155 [ 5513 ] {} <Trace> ODBCDictionarySource: SELECT "dmn_id" FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:38:32.358161 [ 5513 ] {} <Trace> ODBCDictionarySource: SELECT "dmn_id" FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:39:07.361016 [ 5489 ] {} <Trace> ODBCDictionarySource: SELECT FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:39:57.364329 [ 5489 ] {} <Trace> ODBCDictionarySource: SELECT FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:41:37.370591 [ 5488 ] {} <Trace> ODBCDictionarySource: SELECT FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:44:37.382286 [ 5489 ] {} <Trace> ODBCDictionarySource: SELECT FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:53:02.857860 [ 5513 ] {} <Trace> ODBCDictionarySource: SELECT FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
```
These throw errors "RecordSet contains 0 columns while 2 expected". Example stack trace:
```
2020.11.20 14:44:37.382135 [ 5438 ] {} <Trace> ExternalDictionariesLoader: Will load the object 'parkingstats.rpm' in background, force = false, loading_id = 18
2020.11.20 14:44:37.382194 [ 5438 ] {} <Trace> ExternalDictionariesLoader: Object 'parkingstats.rpm' is neither loaded nor failed, so it will not be reloaded as outdated.
2020.11.20 14:44:37.382223 [ 5489 ] {} <Trace> ExternalDictionariesLoader: Start loading object 'parkingstats.rpm'
2020.11.20 14:44:37.382286 [ 5489 ] {} <Trace> ODBCDictionarySource: SELECT FROM "rpm" WHERE updated_date >= '2020-11-20 14:35:21';
2020.11.20 14:44:37.382321 [ 5489 ] {} <Trace> ReadWriteBufferFromHTTP: Sending request to http://localhost:9018/ping
2020.11.20 14:44:37.382695 [ 5489 ] {} <Trace> ReadWriteBufferFromHTTP: Sending request to http://localhost:9018/?connection_string=DSN%3Dparking&columns=columns%20format%20version%3A%201%0A2%20columns%3A%0A%
60dmn_id%60%20UInt64%0A%60rpm%60%20Float64%0A&max_block_size=8192
2020.11.20 14:44:37.393129 [ 5489 ] {} <Trace> ExternalDictionariesLoader: Supposed update time for 'parkingstats.rpm' is 2020-11-20 14:53:02 (backoff, 11 errors)
2020.11.20 14:44:37.393262 [ 5489 ] {} <Error> ExternalDictionariesLoader: Could not update external dictionary 'parkingstats.rpm', leaving the previous version, next update is scheduled at 2020-11-20 14:53:0
2: Code: 86, e.displayText() = DB::Exception: Received error from remote server /?connection_string=DSN%3Dparking&columns=columns%20format%20version%3A%201%0A2%20columns%3A%0A%60dmn_id%60%20UInt64%0A%60rpm%60
%20Float64%0A&max_block_size=8192. HTTP status code: 500 Internal Server Error, body: <C3>^DCode: 20, e.displayText() = DB::Exception: RecordSet contains 0 columns while 2 expected, Stack trace (when copying
this message, always include the lines below):
0. ? @ 0x2bb3141 in /usr/bin/clickhouse-odbc-bridge
1. ? @ 0x4fcefd5 in /usr/bin/clickhouse-odbc-bridge
2. ? @ 0x4ff734f in /usr/bin/clickhouse-odbc-bridge
3. ? @ 0x4ff8d50 in /usr/bin/clickhouse-odbc-bridge
4. ? @ 0x5c7ba59 in ?
5. ? @ 0x5c7844a in ?
6. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
7. clone @ 0xf94cf in /lib/x86_64-linux-gnu/libc-2.28.so
(version 20.10.4.1 (official build)), Stack trace (when copying this message, always include the lines below):
0. DB::assertResponseIsOk(Poco::Net::HTTPRequest const&, Poco::Net::HTTPResponse&, std::__1::basic_istream<char, std::__1::char_traits<char> >&, bool) @ 0xc23f370 in /usr/bin/clickhouse
1. DB::detail::ReadWriteBufferFromHTTPBase<std::__1::shared_ptr<DB::UpdatableSession> >::call(Poco::URI, Poco::Net::HTTPResponse&) @ 0xc2358a4 in /usr/bin/clickhouse
2. DB::detail::ReadWriteBufferFromHTTPBase<std::__1::shared_ptr<DB::UpdatableSession> >::ReadWriteBufferFromHTTPBase(std::__1::shared_ptr<DB::UpdatableSession>, Poco::URI, std::__1::basic_string<char, std::__
1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<void (std::__1::basic_ostream<char, std::__1::char_traits<char> >&)>, Poco::Net::HTTPBasicCredentials const&, unsigned long, std::_
_1::vector<std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::
__1::allocator<std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >
>, DB::RemoteHostFilter const&) @ 0xc23110f in /usr/bin/clickhouse
3. DB::ReadWriteBufferFromHTTP::ReadWriteBufferFromHTTP(Poco::URI, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<void (std::__1::basic_ostrea
m<char, std::__1::char_traits<char> >&)>, DB::ConnectionTimeouts const&, unsigned long, Poco::Net::HTTPBasicCredentials const&, unsigned long, std::__1::vector<std::__1::tuple<std::__1::basic_string<char, std
::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::tuple<std::__1::basic_string<char,
std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, DB::RemoteHostFilter const&) @ 0xc2308a4 in /usr/
bin/clickhouse
4. ? @ 0xc22ea56 in /usr/bin/clickhouse
5. DB::XDBCDictionarySource::loadBase(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xc22cccb in /usr/bin/clickhouse
6. DB::XDBCDictionarySource::loadUpdatedAll() @ 0xc22d28f in /usr/bin/clickhouse
7. DB::HashedDictionary::updateData() @ 0xbe6c5ff in /usr/bin/clickhouse
8. DB::HashedDictionary::loadData() @ 0xbe62d3a in /usr/bin/clickhouse
9. DB::HashedDictionary::HashedDictionary(DB::StorageID const&, DB::DictionaryStructure const&, std::__1::unique_ptr<DB::IDictionarySource, std::__1::default_delete<DB::IDictionarySource> >, DB::ExternalLoada
bleLifetime, bool, bool, std::__1::shared_ptr<DB::Block>) @ 0xbe62a71 in /usr/bin/clickhouse
10. std::__1::__compressed_pair_elem<DB::HashedDictionary, 1, false>::__compressed_pair_elem<DB::StorageID&&, DB::DictionaryStructure const&, std::__1::unique_ptr<DB::IDictionarySource, std::__1::default_dele
te<DB::IDictionarySource> >&&, DB::ExternalLoadableLifetime const&, bool const&, bool const&, std::__1::shared_ptr<DB::Block> const&, 0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul>(std::__1::piecewise_construct_t, std::_
_1::tuple<DB::StorageID&&, DB::DictionaryStructure const&, std::__1::unique_ptr<DB::IDictionarySource, std::__1::default_delete<DB::IDictionarySource> >&&, DB::ExternalLoadableLifetime const&, bool const&, bo
ol const&, std::__1::shared_ptr<DB::Block> const&>, std::__1::__tuple_indices<0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul>) @ 0xbe8095e in /usr/bin/clickhouse
11. DB::HashedDictionary::clone() const @ 0xbe7a4ad in /usr/bin/clickhouse
12. ? @ 0xd93d488 in /usr/bin/clickhouse
13. DB::ExternalLoader::LoadingDispatcher::loadSingleObject(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::ExternalLoader::ObjectConfig const&, std::__1::sha
red_ptr<DB::IExternalLoadable const>) @ 0xd92e0c3 in /usr/bin/clickhouse
14. DB::ExternalLoader::LoadingDispatcher::doLoading(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, bool, unsigned long, bool) @ 0xd92b267 in /usr
/bin/clickhouse
15. ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::ExternalLoader::LoadingDispatcher::*)(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, bool
, unsigned long, bool), DB::ExternalLoader::LoadingDispatcher*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&, unsigned long&, bool&, unsigned long&, bool>(void (DB::E
xternalLoader::LoadingDispatcher::*&&)(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, bool, unsigned long, bool), DB::ExternalLoader::LoadingDispa
tcher*&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&, unsigned long&, bool&, unsigned long&, bool&&)::'lambda'()::operator()() @ 0xd930e40 in /usr/bin/clickhouse
16. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7b8b63d in /usr/bin/clickhouse
17. ? @ 0x7b8f153 in /usr/bin/clickhouse
18. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
19. __clone @ 0xf94cf in /lib/x86_64-linux-gnu/libc-2.28.so
(version 20.10.4.1 (official build))
2020.11.20 14:44:37.393323 [ 5489 ] {} <Trace> ExternalDictionariesLoader: Next update time for 'parkingstats.rpm' was set to 2020-11-20 14:53:02
```
The next update caused a segfault:
```
2020.11.20 15:03:03.406475 [ 5438 ] {} <Trace> ExternalDictionariesLoader: Will load the object 'parkingstats.rpm' in background, force = false, loading_id = 35
2020.11.20 15:03:03.406525 [ 5438 ] {} <Trace> ExternalDictionariesLoader: Object 'parkingstats.rpm' is neither loaded nor failed, so it will not be reloaded as outdated.
2020.11.20 15:03:03.406557 [ 5488 ] {} <Trace> ExternalDictionariesLoader: Start loading object 'parkingstats.rpm'
2020.11.20 15:03:03.514120 [ 5437 ] {} <Trace> BaseDaemon: Received signal 11
2020.11.20 15:03:03.514338 [ 7308 ] {} <Fatal> BaseDaemon: ########################################
2020.11.20 15:03:03.514386 [ 7308 ] {} <Fatal> BaseDaemon: (version 20.10.4.1 (official build), build id: A3F2C76DFE3E61F8) (from thread 5488) (no query) Received signal Segmentation fault (11)
2020.11.20 15:03:03.514412 [ 7308 ] {} <Fatal> BaseDaemon: Address: 0x7fce92200000 Access: read. Address not mapped to object.
2020.11.20 15:03:03.514428 [ 7308 ] {} <Fatal> BaseDaemon: Stack trace: 0x7be7870 0xc1f9347 0xc1ff7de 0xc22c4bb 0xc22cf51 0xbe6c5ff 0xbe62d3a 0xbe62a71 0xbe8095e 0xbe7a4ad 0xd93d488 0xd92e0c3 0xd92b267 0xd930
e40 0x7b8b63d 0x7b8f153 0x7fceaba94fa3 0x7fceab9b64cf
2020.11.20 15:03:03.514518 [ 7308 ] {} <Fatal> BaseDaemon: 2. void DB::writeAnyEscapedString<(char)34, false>(char const*, char const*, DB::WriteBuffer&) @ 0x7be7870 in /usr/bin/clickhouse
2020.11.20 15:03:03.514569 [ 7308 ] {} <Fatal> BaseDaemon: 3. DB::ExternalQueryBuilder::composeLoadAllQuery(DB::WriteBuffer&) const @ 0xc1f9347 in /usr/bin/clickhouse
2020.11.20 15:03:03.514587 [ 7308 ] {} <Fatal> BaseDaemon: 4. DB::ExternalQueryBuilder::composeUpdateQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__
1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xc1ff7de in /usr/bin/clickhouse
2020.11.20 15:03:03.514605 [ 7308 ] {} <Fatal> BaseDaemon: 5. DB::XDBCDictionarySource::getUpdateFieldAndDate() @ 0xc22c4bb in /usr/bin/clickhouse
2020.11.20 15:03:03.514618 [ 7308 ] {} <Fatal> BaseDaemon: 6. DB::XDBCDictionarySource::loadUpdatedAll() @ 0xc22cf51 in /usr/bin/clickhouse
2020.11.20 15:03:03.514632 [ 7308 ] {} <Fatal> BaseDaemon: 7. DB::HashedDictionary::updateData() @ 0xbe6c5ff in /usr/bin/clickhouse
2020.11.20 15:03:03.514645 [ 7308 ] {} <Fatal> BaseDaemon: 8. DB::HashedDictionary::loadData() @ 0xbe62d3a in /usr/bin/clickhouse
2020.11.20 15:03:03.514661 [ 7308 ] {} <Fatal> BaseDaemon: 9. DB::HashedDictionary::HashedDictionary(DB::StorageID const&, DB::DictionaryStructure const&, std::__1::unique_ptr<DB::IDictionarySource, std::__1:
:default_delete<DB::IDictionarySource> >, DB::ExternalLoadableLifetime, bool, bool, std::__1::shared_ptr<DB::Block>) @ 0xbe62a71 in /usr/bin/clickhouse
2020.11.20 15:03:03.514694 [ 7308 ] {} <Fatal> BaseDaemon: 10. std::__1::__compressed_pair_elem<DB::HashedDictionary, 1, false>::__compressed_pair_elem<DB::StorageID&&, DB::DictionaryStructure const&, std::__
1::unique_ptr<DB::IDictionarySource, std::__1::default_delete<DB::IDictionarySource> >&&, DB::ExternalLoadableLifetime const&, bool const&, bool const&, std::__1::shared_ptr<DB::Block> const&, 0ul, 1ul, 2ul,
3ul, 4ul, 5ul, 6ul>(std::__1::piecewise_construct_t, std::__1::tuple<DB::StorageID&&, DB::DictionaryStructure const&, std::__1::unique_ptr<DB::IDictionarySource, std::__1::default_delete<DB::IDictionarySource
> >&&, DB::ExternalLoadableLifetime const&, bool const&, bool const&, std::__1::shared_ptr<DB::Block> const&>, std::__1::__tuple_indices<0ul, 1ul, 2ul, 3ul, 4ul, 5ul, 6ul>) @ 0xbe8095e in /usr/bin/clickhouse
2020.11.20 15:03:03.514720 [ 7308 ] {} <Fatal> BaseDaemon: 11. DB::HashedDictionary::clone() const @ 0xbe7a4ad in /usr/bin/clickhouse
2020.11.20 15:03:03.514735 [ 7308 ] {} <Fatal> BaseDaemon: 12. ? @ 0xd93d488 in /usr/bin/clickhouse
2020.11.20 15:03:03.514750 [ 7308 ] {} <Fatal> BaseDaemon: 13. DB::ExternalLoader::LoadingDispatcher::loadSingleObject(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > con
st&, DB::ExternalLoader::ObjectConfig const&, std::__1::shared_ptr<DB::IExternalLoadable const>) @ 0xd92e0c3 in /usr/bin/clickhouse
2020.11.20 15:03:03.514765 [ 7308 ] {} <Fatal> BaseDaemon: 14. DB::ExternalLoader::LoadingDispatcher::doLoading(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, un
signed long, bool, unsigned long, bool) @ 0xd92b267 in /usr/bin/clickhouse
2020.11.20 15:03:03.514787 [ 7308 ] {} <Fatal> BaseDaemon: 15. ThreadFromGlobalPool::ThreadFromGlobalPool<void (DB::ExternalLoader::LoadingDispatcher::*)(std::__1::basic_string<char, std::__1::char_traits<cha
r>, std::__1::allocator<char> > const&, unsigned long, bool, unsigned long, bool), DB::ExternalLoader::LoadingDispatcher*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >
&, unsigned long&, bool&, unsigned long&, bool>(void (DB::ExternalLoader::LoadingDispatcher::*&&)(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long, b
ool, unsigned long, bool), DB::ExternalLoader::LoadingDispatcher*&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&, unsigned long&, bool&, unsigned long&, bool&&)::'la
mbda'()::operator()() @ 0xd930e40 in /usr/bin/clickhouse
2020.11.20 15:03:03.514807 [ 7308 ] {} <Fatal> BaseDaemon: 16. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7b8b63d in /usr/bin/clickhouse
2020.11.20 15:03:03.514819 [ 7308 ] {} <Fatal> BaseDaemon: 17. ? @ 0x7b8f153 in /usr/bin/clickhouse
2020.11.20 15:03:03.514839 [ 7308 ] {} <Fatal> BaseDaemon: 18. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
2020.11.20 15:03:03.514857 [ 7308 ] {} <Fatal> BaseDaemon: 19. __clone @ 0xf94cf in /lib/x86_64-linux-gnu/libc-2.28.so
2020.11.20 15:03:03.514872 [ 7308 ] {} <Information> SentryWriter: Not sending crash report
```
Occasionally it's using the set of columns from the other dictionary:
```
2020.11.20 15:21:13.666590 [ 8284 ] {} <Trace> ODBCDictionarySource: SELECT "id", "nam", "fid" FROM "rpm" WHERE updated_date >= '2020-11-20 15:19:52';
```
I have two nodes running this DB and they segfault every 5-30 minutes. I have trace logs if needed.
| https://github.com/ClickHouse/ClickHouse/issues/17215 | https://github.com/ClickHouse/ClickHouse/pull/18278 | 850d584d3f0afd50353f2612bbc649a08dbc0979 | fa68af02d7863eaf62635feb51ebaae4433994c0 | "2020-11-20T05:50:49Z" | c++ | "2020-12-21T07:18:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,200 | ["contrib/poco"] | clickhouse-client connection to the server is abnormal: Code: 32. DB::Exception: Attempt to read after eof | System: CentOS 7
ClickHouse version: 20.3.11.97
Connecting to the server from a client on a different host reports an exception:
```
ClickHouse client version 20.3.11.97.
Connecting to ***:9000 as user admin.
Code: 32. DB::Exception: Attempt to read after eof
```
But clickhouse-server.err.log shows no abnormal entries; routine messages such as ReplicatedMergeTreePartCheckThread are logged normally at this time. The connection returns to normal after restarting the server.
How can I troubleshoot this? The client is not installed on the server host
| https://github.com/ClickHouse/ClickHouse/issues/17200 | https://github.com/ClickHouse/ClickHouse/pull/17542 | 00b8e72039be69cff63021b8630c229e315eca7c | e7b277e1a18c0c1f59dfb406eaa61ab72e8b3eff | "2020-11-19T10:09:34Z" | c++ | "2020-11-30T06:35:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,177 | ["tests/queries/0_stateless/02503_mysql_compat_utc_timestamp.reference", "tests/queries/0_stateless/02503_mysql_compat_utc_timestamp.sql"] | MySQL: add support for UTC_TIMESTAMP() and TIMEDIFF (compatibility with Power BI) | **Background**
I can already connect to Power BI using the ODBC connector, but it would be great if I could make a Direct Query connection to utilize the full power of an OLAP DB. I tried to achieve that with the MySQL interface, which has Power BI Direct Query support.
**Describe the issue**
Unfortunately, when I make the connection, Power BI automatically runs this function in the background:
TIMEDIFF(NOW(), UTC_TIMESTAMP())
I guess it is to set the serverTimeZone.

**How to reproduce**
ClickHouse version 20.11.3.3 (official build)
Make a connection from Power BI Desktop to the MySQL interface
**Final words**
I know that there are several functions to achieve the same result, so I hope this won't take much effort from you guys.
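For context, part of this is already expressible in ClickHouse SQL — a hedged sketch (exact function availability depends on version):
```sql
-- ClickHouse already exposes the server time zone name:
SELECT timezone();
-- What Power BI actually sends over the MySQL protocol (and what fails today):
-- TIMEDIFF(NOW(), UTC_TIMESTAMP())
```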
If this is done, I believe it would be a huge advantage for ClickHouse
Thanks in advance | https://github.com/ClickHouse/ClickHouse/issues/17177 | https://github.com/ClickHouse/ClickHouse/pull/44338 | ed3d70f7c0eccc78e97a4f981c2a1e520315cd42 | c26ce8a629fb121fbba37d78596a4c04814ff161 | "2020-11-18T11:41:05Z" | c++ | "2022-12-27T14:12:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,151 | ["src/Storages/MergeTree/MergeList.cpp", "src/Storages/MergeTree/MergeList.h", "src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp", "src/Storages/StorageReplicatedMergeTree.cpp"] | TRUNCATE TABLE does not stop already running merges. | And waits for them. That's pointless. | https://github.com/ClickHouse/ClickHouse/issues/17151 | https://github.com/ClickHouse/ClickHouse/pull/25684 | 227913579d98858a825fc9ac5b1678a59a7e4e29 | afbc6bf17dba8634f4b3f1cc649c8427505c319f | "2020-11-17T19:27:01Z" | c++ | "2021-06-30T20:41:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 17,112 | ["src/DataTypes/EnumValues.cpp", "src/DataTypes/EnumValues.h", "tests/queries/0_stateless/01852_hints_enum_name.reference", "tests/queries/0_stateless/01852_hints_enum_name.sh"] | Hints for Enum names and column names based on Levenshtein distance. | **Use case**
Provide "Did you mean: ..." with the nearest matches when the user makes a typo in an Enum element name or a column name.
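For example (hypothetical output, mirroring the hints ClickHouse already produces for table and database names):
```sql
SELECT CAST('Helo', 'Enum(\'Hello\' = 1, \'World\' = 2)');
-- desired: DB::Exception: Unknown element 'Helo' for enum.
-- Maybe you meant: ['Hello']   (hypothetical message)
```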
**Describe the solution you'd like**
See `NamePrompter.h` | https://github.com/ClickHouse/ClickHouse/issues/17112 | https://github.com/ClickHouse/ClickHouse/pull/23919 | 2b87656c66bd038908e61b8493ec58569d51ee2d | fd56e0a9844c1c6d2ccb187e4b0c3aad7aaa69da | "2020-11-16T21:44:31Z" | c++ | "2021-05-07T13:12:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,953 | ["src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp", "tests/queries/0_stateless/01593_concurrent_alter_mutations_kill.reference", "tests/queries/0_stateless/01593_concurrent_alter_mutations_kill.sh", "tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas.reference", "tests/queries/0_stateless/01593_concurrent_alter_mutations_kill_many_replicas.sh"] | In some cases a killed mutation leads to an unfinished alter | One of our users just killed mutations produced by `ALTER MODIFY` queries on 21 servers. After that, new alters on three servers got stuck with:
```
Cannot execute alter metadata with version: 8 because another alter 7 must be executed before
```
However, there is no corresponding alter №7 in the replication queue and no undone mutations:
```
select * from system.mutations where is_done = 0
```
However, in ZooKeeper we have the correct state of metadata_version:
```
Row 13:
──────
name: metadata_version
value: 7
czxid: 23078663291
mzxid: 29401964979
ctime: 2020-09-02 17:52:08
mtime: 2020-11-13 00:43:18
version: 7
cversion: 0
aversion: 0
ephemeralOwner: 0
dataLength: 1
numChildren: 0
pzxid: 23078663291
```
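Before restarting, a hedged diagnostic to confirm the inconsistency (the queue lacking the ALTER_METADATA entry that the table metadata still waits for):
```sql
SELECT type, node_name, create_time
FROM system.replication_queue
WHERE type = 'ALTER_METADATA';
```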
`SYSTEM RESTART REPLICA dbname.table_name` solves the issue, but it seems like `KILL MUTATION` in some cases can lead to an unfinished alter. | https://github.com/ClickHouse/ClickHouse/issues/16953 | https://github.com/ClickHouse/ClickHouse/pull/17499 | e9795acd93b1a4982c71f6e9b21be4763da312bd | 25f40db2fbeef19cf69ac0bb2559b53b028cd45e | "2020-11-12T20:40:26Z" | c++ | "2020-11-30T07:51:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,944 | ["src/Interpreters/InterpreterCreateQuery.cpp", "tests/queries/0_stateless/01189_create_as_table_as_table_function.reference", "tests/queries/0_stateless/01189_create_as_table_as_table_function.sql"] | Using CREATE TABLE AS on a table built using the numbers() table function causes crash. | **Describe the bug**
Creating a table AS another table that was itself created using the numbers() table function crashes the server and disconnects the client.
**How to reproduce**
* ClickHouse server version
version 20.10.3.30
* `CREATE TABLE` statements for all tables involved
```
CREATE TABLE table2 AS numbers(5)
CREATE TABLE table3 AS table2
```
* Queries to run that lead to unexpected result
```
:) create table table2 as numbers(5)
CREATE TABLE table2 AS numbers(5)
Ok.
0 rows in set. Elapsed: 0.010 sec.
:) create table table3 as table2
CREATE TABLE table3 AS table2
[altinity-qa-cosmic2] 2020.11.12 19:01:54.258464 [ 179463 ] <Fatal> BaseDaemon: ########################################
[altinity-qa-cosmic2] 2020.11.12 19:01:54.258700 [ 179463 ] <Fatal> BaseDaemon: (version 20.10.3.30 (official build), build id: 4EAF0337F53BC2B4) (from thread 171354) (query_id: 0ac2afaa-12be-4a98-9767-5c8cd373f8d0) Received signal Segmentation fault (11)
[altinity-qa-cosmic2] 2020.11.12 19:01:54.258816 [ 179463 ] <Fatal> BaseDaemon: Address: 0x8 Access: read. Address not mapped to object.
[altinity-qa-cosmic2] 2020.11.12 19:01:54.258936 [ 179463 ] <Fatal> BaseDaemon: Stack trace: 0xdabc4e0 0xdabaa67 0xdabdf8a 0xdac0ff7 0xdec8f18 0xdec7dbd 0xe568ea6 0xe575ca7 0x10d4dd6f 0x10d4f77e 0x10e80a39 0x10e7c96a 0x7f25602076db 0x7f255fb2471f
[altinity-qa-cosmic2] 2020.11.12 19:01:54.259089 [ 179463 ] <Fatal> BaseDaemon: 2. DB::InterpreterCreateQuery::setEngine(DB::ASTCreateQuery&) const @ 0xdabc4e0 in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.259200 [ 179463 ] <Fatal> BaseDaemon: 3. DB::InterpreterCreateQuery::setProperties(DB::ASTCreateQuery&) const @ 0xdabaa67 in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.259295 [ 179463 ] <Fatal> BaseDaemon: 4. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0xdabdf8a in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.259412 [ 179463 ] <Fatal> BaseDaemon: 5. DB::InterpreterCreateQuery::execute() @ 0xdac0ff7 in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.259517 [ 179463 ] <Fatal> BaseDaemon: 6. ? @ 0xdec8f18 in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.259632 [ 179463 ] <Fatal> BaseDaemon: 7. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0xdec7dbd in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.259776 [ 179463 ] <Fatal> BaseDaemon: 8. DB::TCPHandler::runImpl() @ 0xe568ea6 in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.259851 [ 179463 ] <Fatal> BaseDaemon: 9. DB::TCPHandler::run() @ 0xe575ca7 in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.259951 [ 179463 ] <Fatal> BaseDaemon: 10. Poco::Net::TCPServerConnection::start() @ 0x10d4dd6f in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.260053 [ 179463 ] <Fatal> BaseDaemon: 11. Poco::Net::TCPServerDispatcher::run() @ 0x10d4f77e in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.260168 [ 179463 ] <Fatal> BaseDaemon: 12. Poco::PooledThread::run() @ 0x10e80a39 in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.260272 [ 179463 ] <Fatal> BaseDaemon: 13. Poco::ThreadImpl::runnableEntry(void*) @ 0x10e7c96a in /usr/bin/clickhouse
[altinity-qa-cosmic2] 2020.11.12 19:01:54.260353 [ 179463 ] <Fatal> BaseDaemon: 14. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
[altinity-qa-cosmic2] 2020.11.12 19:01:54.260467 [ 179463 ] <Fatal> BaseDaemon: 15. __clone @ 0x12171f in /lib/x86_64-linux-gnu/libc-2.27.so
Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000
Connecting to localhost:9000 as user default.
Code: 210. DB::NetException: Connection refused (localhost:9000)
``` | https://github.com/ClickHouse/ClickHouse/issues/16944 | https://github.com/ClickHouse/ClickHouse/pull/17072 | 013c582abf2d0dca79dcb80bd9f27442240f30ed | b251478d983df01c28dae9b15fa282ad1c80d25e | "2020-11-12T18:23:47Z" | c++ | "2020-11-17T09:51:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,926 | ["src/Processors/Formats/Impl/AvroRowInputFormat.cpp"] | Bug when using format_avro_schema_registry_url in Kafka table engine? | We use the Kafka table engine to integrate with Kafka. When we put the URL into format_avro_schema_registry_url, we get the right messages at first.
`<format_avro_schema_registry_url>http://schema-registry-url:8081</format_avro_schema_registry_url>`
However, when there is any update in the schema registry, we can't get messages any more. Some exceptions are shown below:
```
2020.11.12 02:46:36.694505 [ 126 ] {} <Error> void DB::StorageKafka::threadFunc(): Code: 1000, e.displayText() = DB::Exception: Timeout: connect timed out: 172.2.3.9:8081: while fetching schema id = 10, Stack trace (when copying this message, always include the lines below):
0. Poco::TimeoutException::TimeoutException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1041d90f in /usr/bin/clickhouse
1. ? @ 0x102ffff5 in /usr/bin/clickhouse
2. Poco::Net::HTTPSession::connect(Poco::Net::SocketAddress const&) @ 0x102cccf5 in /usr/bin/clickhouse
3. Poco::Net::HTTPClientSession::reconnect() @ 0x102ba132 in /usr/bin/clickhouse
4. Poco::Net::HTTPClientSession::sendRequest(Poco::Net::HTTPRequest&) @ 0x102bafc0 in /usr/bin/clickhouse
5. DB::AvroConfluentRowInputFormat::SchemaRegistry::fetchSchema(unsigned int) @ 0xdb5ba91 in /usr/bin/clickhouse
6. std::__1::pair<std::__1::shared_ptr<avro::ValidSchema>, bool> DB::LRUCache<unsigned int, avro::ValidSchema, std::__1::hash<unsigned int>, DB::TrivialWeightFunction<avro::ValidSchema> >::getOrSet<DB::AvroConfluentRowInputFormat::SchemaRegistry::getSchema(unsigned int)::'lambda'()>(unsigned int const&, DB::AvroConfluentRowInputFormat::SchemaRegistry::getSchema(unsigned int)::'lambda'()&&) @ 0xdb5cdbb in /usr/bin/clickhouse
7. DB::AvroConfluentRowInputFormat::getOrCreateDeserializer(unsigned int) @ 0xdb42713 in /usr/bin/clickhouse
8. DB::AvroConfluentRowInputFormat::readRow(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn> > >&, DB::RowReadExtension&) @ 0xdb42b4d in /usr/bin/clickhouse
9. DB::IRowInputFormat::generate() @ 0xdb5d151 in /usr/bin/clickhouse
10. DB::ISource::work() @ 0xdaf3d8b in /usr/bin/clickhouse
11. ? @ 0xd80106d in /usr/bin/clickhouse
12. DB::KafkaBlockInputStream::readImpl() @ 0xd801c48 in /usr/bin/clickhouse
13. DB::IBlockInputStream::read() @ 0xce4825d in /usr/bin/clickhouse
14. DB::copyData(DB::IBlockInputStream&, DB::IBlockOutputStream&, std::__1::atomic<bool>*) @ 0xce7717e in /usr/bin/clickhouse
15. DB::StorageKafka::streamToViews() @ 0xd7f0332 in /usr/bin/clickhouse
16. DB::StorageKafka::threadFunc() @ 0xd7f0da8 in /usr/bin/clickhouse
17. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0xcfe9619 in /usr/bin/clickhouse
18. DB::BackgroundSchedulePool::threadFunction() @ 0xcfe9c42 in /usr/bin/clickhouse
19. ? @ 0xcfe9d72 in /usr/bin/clickhouse
20. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x902526b in /usr/bin/clickhouse
21. ? @ 0x9023753 in /usr/bin/clickhouse
22. start_thread @ 0x76db in /lib/x86_64-linux-gnu/libpthread-2.27.so
23. __clone @ 0x12188f in /lib/x86_64-linux-gnu/libc-2.27.so
(version 20.4.4.18 (official build))
```
The address that timed out, 172.2.3.9:8081, is not the host behind schema-registry-url:8081 at all.
Does ClickHouse cache the DNS result of http://schema-registry-url:8081 on first use and reuse that IP for subsequent calls? When we query http://schema-registry-url:8081 directly from the ClickHouse server machine, we get the right schema.
How can we fix it? We already tried `SYSTEM DROP DNS CACHE` but it doesn't work. Since schema updates are very frequent in the production environment, we don't want to restart the server every time the schema updates.
| https://github.com/ClickHouse/ClickHouse/issues/16926 | https://github.com/ClickHouse/ClickHouse/pull/16985 | 93d21e0764f8a4961bd94c944fdd5f2f0bda07e7 | c2205498b212b2e92b1c687694d9c3a8ab93be12 | "2020-11-12T11:56:22Z" | c++ | "2020-11-14T22:54:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,869 | ["src/AggregateFunctions/AggregateFunctionTimeSeriesGroupSum.cpp", "src/AggregateFunctions/AggregateFunctionTimeSeriesGroupSum.h", "src/AggregateFunctions/registerAggregateFunctions.cpp", "src/AggregateFunctions/ya.make", "tests/fuzz/ast.dict", "tests/queries/0_stateless/00910_aggregation_timeseriesgroupsum.reference", "tests/queries/0_stateless/00910_aggregation_timeseriesgroupsum.sql", "tests/queries/0_stateless/01560_timeseriesgroupsum_segfault.reference", "tests/queries/0_stateless/01560_timeseriesgroupsum_segfault.sql"] | TimeSeriesGroupSum algorithm is totally wrong | ... and code quality is very poor.
Imagine 3 time series: the first grows from 200 at rate `2*t`, and the other two decrease from 200 at rate `t`. So the sum should always be the constant `600`.
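(A quick check: at step n the three values are 200 + 2n, 200 − n and 200 − n, so their sum is (200 + 2n) + (200 − n) + (200 − n) = 600 for every n.)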
```
WITH [200. + (2 * number), 200. - number, 200. - number] AS vals
SELECT
arrayJoin(if((number = 0) OR (number >= 10), [1, 2, 3], [(number % 3) + 1])) AS ts_id,
number + 1000 AS timestamp,
vals[ts_id] AS value
FROM numbers(11)
ORDER BY timestamp
┌─ts_id─┬─timestamp─┬─value─┐
│     1 │      1000 │   200 │
│     2 │      1000 │   200 │
│     3 │      1000 │   200 │
│     2 │      1001 │   199 │
│     3 │      1002 │   198 │
│     1 │      1003 │   206 │
│     2 │      1004 │   196 │
│     3 │      1005 │   195 │
│     1 │      1006 │   212 │
│     2 │      1007 │   193 │
│     3 │      1008 │   192 │
│     1 │      1009 │   218 │
│     1 │      1010 │   220 │
│     2 │      1010 │   190 │
│     3 │      1010 │   190 │
└───────┴───────────┴───────┘
15 rows in set. Elapsed: 0.003 sec.
```
I've added anchor points at the beginning and at the end of all 3 sequences to simplify the calculation.
It seems the function returns the proper result:
```
SELECT timeSeriesGroupSum(toUInt64(ts_id), toInt64(timestamp), value)
FROM
(
WITH [200. + (2 * number), 200. - number, 200. - number] AS vals
SELECT
arrayJoin(if((number = 0) OR (number >= 10), [1, 2, 3], [(number % 3) + 1])) AS ts_id,
number + 1000 AS timestamp,
vals[ts_id] AS value
FROM numbers(11)
)
Query id: 526c8000-c02b-4677-b454-4a1916b1274b
┌─timeSeriesGroupSum(toUInt64(ts_id), toInt64(timestamp), value)─────────────────────────────────────────────────────────────┐
│ [(1000,600),(1001,600),(1002,600),(1003,600),(1004,600),(1005,600),(1006,600),(1007,600),(1008,600),(1009,600),(1010,600)] │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
1 rows in set. Elapsed: 0.003 sec.
```
But actually they are correct only before state merges start happening. If we split the data into several chunks and process them in parallel, forcing the states to merge, the result becomes completely wrong:
```
SET max_threads = 11, max_block_size = 1
SELECT timeSeriesGroupSum(toUInt64(ts_id), toInt64(timestamp), value)
FROM
(
WITH [200. + (2 * number), 200. - number, 200. - number] AS vals
SELECT
arrayJoin(if((number = 0) OR (number >= 10), [1, 2, 3], [(number % 3) + 1])) AS ts_id,
number + 1000 AS timestamp,
vals[ts_id] AS value
FROM numbers_mt(11)
)
Query id: 616ca5c1-95c9-4e4a-9a2a-0d5a8ea19188
┌─timeSeriesGroupSum(toUInt64(ts_id), toInt64(timestamp), value)──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ [(1000,600),(1001,199),(1002,441.55555555555554),(1003,765.3611111111111),(1004,677.1666666666666),(1005,1008.9055555555556),(1006,950.6444444444444),(1007,1103.7166666666667),(1008,1159.2888888888888),(1009,830.8611111111111),(1010,600)] │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
1 rows in set. Elapsed: 0.004 sec.
```
:scream:
Also there are other inconsistencies there like:
```
SELECT timeSeriesGroupSum(id, ts, val) FROM values('id UInt64, ts Int64, val Float64', (1, 1, 1))
┌─timeSeriesGroupSum(id, ts, val)─┐
│ [(1,1)]                         │
└─────────────────────────────────┘
```
vs
```
SELECT timeSeriesGroupSum(id, ts, val) FROM values('id UInt64, ts Int64, val Float64', (1, 1, 0))
┌─timeSeriesGroupSum(id, ts, val)─┐
│ []                              │
└─────────────────────────────────┘
```
| https://github.com/ClickHouse/ClickHouse/issues/16869 | https://github.com/ClickHouse/ClickHouse/pull/17423 | f086f563797532125d74fcaf48398c19bea1d41e | 572bdb4090c94f57a869269ad1ee8b23b95a3a2e | "2020-11-11T12:21:34Z" | c++ | "2020-11-26T20:07:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,862 | ["src/AggregateFunctions/AggregateFunctionTimeSeriesGroupSum.h", "tests/queries/0_stateless/01560_timeseriesgroupsum_segfault.reference", "tests/queries/0_stateless/01560_timeseriesgroupsum_segfault.sql"] | DB::AggregateFunctionTimeSeriesGroupSum<false>::serialize segfault | Happens when AggregateFunctionTimeSeriesGroupSum state is serialized.
Can lead to:
```
2020.11.11 08:50:26.219001 [ 259 ] {} <Fatal> BaseDaemon: ########################################
2020.11.11 08:50:26.219056 [ 259 ] {} <Fatal> BaseDaemon: (version 20.3.13.127 (official build)) (from thread 207) (query_id: 9d885998-851a-4c70-b668-439cd260dc4f) Received signal Segmentation fault (11).
2020.11.11 08:50:26.219095 [ 259 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2020.11.11 08:50:26.219113 [ 259 ] {} <Fatal> BaseDaemon: Stack trace: 0xca03160
2020.11.11 08:50:26.219172 [ 259 ] {} <Fatal> BaseDaemon: 3. DB::AggregateFunctionTimeSeriesGroupSum<false>::serialize(char const*, DB::WriteBuffer&) const @ 0xca03160 in /usr/bin/clickhouse
```
Or to
```
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007fd280ad3859 in __GI_abort () at abort.c:79
#2 0x0000000023f46812 in Poco::SignalHandler::handleSignal (sig=11) at ../contrib/poco/Foundation/src/SignalHandler.cpp:94
#3 <signal handler called>
#4 memcpy () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:142
#5 0x0000000009cda5e2 in __asan_memcpy ()
#6 0x0000000009d666f5 in DB::WriteBuffer::write (this=0x7fff20748540, from=0x0, n=16) at ../src/IO/WriteBuffer.h:78
#7 0x000000001b374125 in DB::serializeToString (function=..., column=..., row_num=<optimized out>) at ../src/DataTypes/DataTypeAggregateFunction.cpp:160
#8 0x000000001b37461c in DB::DataTypeAggregateFunction::serializeTextEscaped (this=<optimized out>, column=..., row_num=140542080921600, ostr=...) at ../src/DataTypes/DataTypeAggregateFunction.cpp:196
#9 0x000000001dc1120c in DB::IRowOutputFormat::write (this=0x6190000230a0, columns=..., row_num=0) at ../src/Processors/Formats/IRowOutputFormat.cpp:85
#10 0x000000001dc0fd87 in DB::IRowOutputFormat::consume (this=0x6190000230a0, chunk=<error reading variable: Cannot access memory at address 0x0>) at ../src/Processors/Formats/IRowOutputFormat.cpp:25
#11 0x000000001db7bc7c in DB::IOutputFormat::work (this=0x6190000230a0) at ../src/Processors/Formats/IOutputFormat.cpp:89
#12 0x000000001db58e16 in DB::executeJob (processor=0x6190000230a0) at ../src/Processors/Executors/PipelineExecutor.cpp:78
#13 DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0::operator()() const (this=<optimized out>) at ../src/Processors/Executors/PipelineExecutor.cpp:95
#14 std::__1::__invoke<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&> (__f=...) at ../contrib/libcxx/include/type_traits:3519
#15 std::__1::__invoke_void_return_wrapper<void>::__call<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) (__args=...)
at ../contrib/libcxx/include/__functional_base:348
#16 std::__1::__function::__alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, std::__1::allocator<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0>, void ()>::operator()() (
this=<optimized out>) at ../contrib/libcxx/include/functional:1540
#17 std::__1::__function::__func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, std::__1::allocator<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0>, void ()>::operator()() (
this=<optimized out>) at ../contrib/libcxx/include/functional:1714
#18 0x000000001db54c27 in std::__1::__function::__value_func<void ()>::operator()() const (this=<optimized out>) at ../contrib/libcxx/include/functional:1867
#19 std::__1::function<void ()>::operator()() const (this=<optimized out>) at ../contrib/libcxx/include/functional:2473
#20 DB::PipelineExecutor::executeStepImpl (this=<optimized out>, thread_num=<optimized out>, num_threads=<optimized out>, yield_flag=<optimized out>) at ../src/Processors/Executors/PipelineExecutor.cpp:561
#21 0x000000001db5083f in DB::PipelineExecutor::executeSingleThread (this=0x6130000049d8, thread_num=0, num_threads=16) at ../src/Processors/Executors/PipelineExecutor.cpp:477
#22 DB::PipelineExecutor::executeImpl (this=0x6130000049d8, num_threads=1) at ../src/Processors/Executors/PipelineExecutor.cpp:752
#23 0x000000001db4fcef in DB::PipelineExecutor::execute (this=0x6130000049d8, num_threads=1) at ../src/Processors/Executors/PipelineExecutor.cpp:399
#24 0x000000001c8c951a in DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, DB::Context&, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>) (istr=..., ostr=..., allow_into_outfile=<optimized out>, context=..., set_result_details=...)
at ../src/Interpreters/executeQuery.cpp:995
#25 0x0000000009fac674 in DB::LocalServer::processQueries (this=0x7fff2074c2e8) at ../programs/local/LocalServer.cpp:384
#26 0x0000000009fa6485 in DB::LocalServer::main (this=0x7fff2074ea50) at ../programs/local/LocalServer.cpp:289
#27 0x0000000023ce5ac6 in Poco::Util::Application::run (this=0x7fff2074ea50) at ../contrib/poco/Util/src/Application.cpp:334
#28 0x0000000009fb9dc0 in mainEntryClickHouseLocal (argc=3, argv=0x6030000060d0) at ../programs/local/LocalServer.cpp:609
#29 0x0000000009d0e554 in main (argc_=<optimized out>, argv_=<optimized out>) at ../programs/main.cpp:400
```
or similar.
It seems all versions are affected (we will need a fix for 20.3).
I'll commit a test case soon.
| https://github.com/ClickHouse/ClickHouse/issues/16862 | https://github.com/ClickHouse/ClickHouse/pull/16865 | f48232d615df2385b2959743453f8650e6893e09 | 21513657645c0d75f12cf4a81eac50166b9edb8c | "2020-11-11T09:48:14Z" | c++ | "2020-11-12T16:39:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,835 | ["base/mysqlxx/ResultBase.cpp", "base/mysqlxx/ResultBase.h", "src/Databases/MySQL/MaterializeMetadata.cpp", "src/Databases/MySQL/MaterializeMetadata.h", "src/Databases/MySQL/MaterializeMySQLSyncThread.cpp", "src/Databases/MySQL/MaterializeMySQLSyncThread.h", "src/Formats/MySQLBlockInputStream.cpp", "src/Formats/MySQLBlockInputStream.h"] | Use MaterializeMySQL synchronization problem | When I used ClickHouse's MaterializeMySQL feature to synchronize data from MySQL to ClickHouse, I had some problems. The Click House version I'm using is 20.10.3.30, and the MySQL version I'm using is 5.7.28-log.
The SQL statement I executed is as follows:
> SET allow_experimental_database_materialize_mysql = 1;
drop database if exists ck_test;
CREATE DATABASE if not exists ck_test ENGINE = MaterializeMySQL('x.x.x.x:3306', 'test', 'user', 'test');
After I executed the SQL statement, I saw the following information in the error log /var/log/clickhouse-server/clickhouse-server.err.log:
> 2020.11.10 20:27:39.611578 [ 33057 ] {} <Error> MaterializeMySQLSyncThread: Code: 20, e.displayText() = DB::Exception: mysqlxx::UseQueryResult contains 4 columns while 2 expected, Stack trace (when copying this message, always include the lines below):
> 0. DB::MySQLBlockInputStream::MySQLBlockInputStream(mysqlxx::Pool::Entry const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Block const&, unsigned long, bool) @ 0xe587457 in /usr/bin/clickhouse
> 1. DB::MaterializeMetadata::MaterializeMetadata(mysqlxx::Pool::Entry&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xdb1cbf3 in /usr/bin/clickhouse
> 2. DB::MaterializeMySQLSyncThread::prepareSynchronized(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xdaf8bf1 in /usr/bin/clickhouse
> 3. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xdaf836a in /usr/bin/clickhouse
> 4. ? @ 0xdb1480d in /usr/bin/clickhouse
> 5. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7b8963d in /usr/bin/clickhouse
> 6. ? @ 0x7b8d153 in /usr/bin/clickhouse
> 7. start_thread @ 0x7dd5 in /usr/lib64/libpthread-2.17.so
> 8. __clone @ 0xfdead in /usr/lib64/libc-2.17.so
> (version 20.10.3.30 (official build))
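The exception is raised while `MaterializeMetadata` reads replication metadata from MySQL and gets a different number of result columns than it expects. As a first diagnostic step (my suggestion, not from the original report), run the statements the sync thread issues during setup directly against MySQL and check how many columns each returns:
```sql
-- Run these on the MySQL side. The variables query is expected to return
-- exactly 2 columns (Variable_name, Value); some proxies/forks return more.
SHOW VARIABLES WHERE Variable_name = 'binlog_checksum';
SHOW MASTER STATUS;
```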
Does anybody have a problem like this? | https://github.com/ClickHouse/ClickHouse/issues/16835 | https://github.com/ClickHouse/ClickHouse/pull/17366 | fef65b0cbd6debced8007ce21e2d3f7fb82abc4f | f13de96afcb37c41174a09acb5013aaa898dac7a | "2020-11-10T13:04:40Z" | c++ | "2020-12-09T20:12:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,827 | ["tests/queries/0_stateless/02467_cross_join_three_table_functions.reference", "tests/queries/0_stateless/02467_cross_join_three_table_functions.sql"] | can not cross join three tables(numbers()) | cross join two tables(numbers()) are ok
```
SELECT count(*)
FROM numbers(10000) AS a
, numbers(10000) AS b
, numbers(10) AS c
Received exception from server (version 20.10.3):
Code: 51. DB::Exception: Received from localhost:9000. DB::Exception: Empty list of columns in SELECT query.
0 rows in set. Elapsed: 0.017 sec.
CREATE TABLE n
ENGINE = MergeTree
ORDER BY n AS
SELECT number AS n
FROM numbers(10000)
Ok.
0 rows in set. Elapsed: 0.077 sec. Processed 10.00 thousand rows, 80.00 KB (130.15 thousand rows/s., 1.04 MB/s.)
SELECT count(*)
FROM n AS a
, n AS b
, n AS c
Received exception from server (version 20.10.3):
Code: 51. DB::Exception: Received from localhost:9000. DB::Exception: Empty list of columns in SELECT query.
0 rows in set. Elapsed: 0.022 sec.
SELECT count(*)
FROM n AS a
CROSS JOIN n AS b
┌───count()─┐
│ 100000000 │
└───────────┘
1 rows in set. Elapsed: 3.404 sec. Processed 20.00 thousand rows, 160.00 KB (5.88 thousand rows/s., 47.01 KB/s.)
SELECT count(*)
FROM n AS a
CROSS JOIN n AS b
CROSS JOIN numbers(10) AS c
Received exception from server (version 20.10.3):
Code: 51. DB::Exception: Received from localhost:9000. DB::Exception: Empty list of columns in SELECT query.
```
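A possible workaround until this is fixed (my own sketch, untested on the affected versions): nest the first two tables in a subquery so that only two relations are joined at each level:
```sql
SELECT count(*)
FROM
(
    SELECT 1 AS dummy
    FROM n AS a
    CROSS JOIN n AS b
) AS ab
CROSS JOIN numbers(10) AS c
```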
| https://github.com/ClickHouse/ClickHouse/issues/16827 | https://github.com/ClickHouse/ClickHouse/pull/42511 | 151137d6d5c182b752ffaff6c8cbb35e7f72874f | bd80e6a10b0b7e2f539e79419b160cef9a775141 | "2020-11-10T07:22:33Z" | c++ | "2022-10-20T22:03:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,752 | ["docs/en/sql-reference/table-functions/cluster.md", "docs/en/sql-reference/table-functions/remote.md", "src/Storages/Distributed/DistributedBlockOutputStream.cpp", "src/TableFunctions/TableFunctionRemote.cpp", "src/TableFunctions/TableFunctionRemote.h", "tests/queries/0_stateless/01602_insert_into_table_function_cluster.reference", "tests/queries/0_stateless/01602_insert_into_table_function_cluster.sql", "tests/queries/0_stateless/arcadia_skip_list.txt"] | Support insert into table function cluster if the cluster does not have sharding key. | ```
CREATE TABLE default.x AS system.numbers ENGINE = Log;
INSERT INTO FUNCTION cluster('test_cluster_two_shards', default, x) SELECT * FROM numbers(10)
Received exception from server (version 20.11.1):
Code: 55. DB::Exception: Received from localhost:9000. DB::Exception: Method write is not supported by storage Distributed with more than one shard and no sharding key provided.
0 rows in set. Elapsed: 0.001 sec.
INSERT INTO FUNCTION cluster('test_cluster_two_shards', default, x, rand()) SELECT * FROM numbers(10)
Received exception from server (version 20.11.1):
Code: 42. DB::Exception: Received from localhost:9000. DB::Exception: Table function 'cluster' requires from 2 to 3 parameters: <addresses pattern or cluster name>, <name of remote database>, <name of remote table>.
0 rows in set. Elapsed: 0.001 sec.
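-- Sketch of a possible workaround (not from the original report): the remote()
-- table function already accepts an optional sharding key as its last argument.
INSERT INTO FUNCTION remote('127.0.0.{1,2}', default, x, rand()) SELECT * FROM numbers(10)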
``` | https://github.com/ClickHouse/ClickHouse/issues/16752 | https://github.com/ClickHouse/ClickHouse/pull/18264 | ecf9b9c392e00a81f41d2f8b72dda30fb7092979 | a15092eeb77b7d35d8f068f99bfeb26b0470ae9d | "2020-11-06T14:23:34Z" | c++ | "2021-01-16T10:22:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,609 | ["src/Core/Settings.h", "src/Interpreters/executeQuery.cpp", "tests/integration/test_log_query_probability/__init__.py", "tests/integration/test_log_query_probability/test.py"] | Support logging a random sample of queries to query_log | With a large volume of queries per second logging all queries gets expensive, it would be nice to be able to log only a configurable fraction of the queries, selected at random. | https://github.com/ClickHouse/ClickHouse/issues/16609 | https://github.com/ClickHouse/ClickHouse/pull/27527 | 4cc0b0298c663768a9ccf27e9d68540bba7827cf | 77d085f26470409353a637687c2e4e9a6cc13a67 | "2020-11-02T14:01:30Z" | c++ | "2021-08-31T21:51:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,605 | ["src/Server/MySQLHandler.cpp", "tests/integration/test_mysql_protocol/test.py"] | ClickHouse doesn't return affected rows via MySQL protocol for INSERT queries after 20.5+ | **Describe the bug**
ClickHouse doesn't return affected rows via MySQL protocol for INSERT queries after 20.5+
**How to reproduce**
I created a clean docker-compose environment (MySQL + ClickHouse + PHP + Python) to reproduce the behavior:
https://gist.github.com/Slach/aa67440ce856a3a53f64f92eeddfbc1b
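For a quicker check than the full gist, the plain mysql CLI against ClickHouse's MySQL port (9004 by default; the exact port is an assumption about your setup) prints the affected-rows value from the server's OK packet:
```bash
mysql -h 127.0.0.1 -P 9004 -u default -vvv \
  -e "INSERT INTO default.t1(n) SELECT * FROM numbers(1000)"
# healthy versions print "Query OK, 1000 rows affected"
```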
**Queries to run that lead to unexpected result**
```
INSERT INTO default.t1(n) SELECT * FROM numbers(1000)
```
**Expected behavior**
The MySQL protocol should return the affected-rows field for INSERT queries (it works correctly in 20.3 and 20.4).

**Actual behavior**
 | https://github.com/ClickHouse/ClickHouse/issues/16605 | https://github.com/ClickHouse/ClickHouse/pull/16715 | d423ce34a118457153fae274c37c96a82d1eaf79 | 7af89cba8a3ffece4bf8c6132c1d7e07f7a04f49 | "2020-11-02T12:20:40Z" | c++ | "2020-11-23T12:54:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,580 | ["src/Interpreters/MonotonicityCheckVisitor.h", "tests/queries/0_stateless/01560_monotonicity_check_multiple_args_bug.reference", "tests/queries/0_stateless/01560_monotonicity_check_multiple_args_bug.sql"] | Strange errors while using arrayJoin with ORDER BY on DateTime datatype values | **How to reproduce**
* Working query
```sql
WITH arrayJoin(range(2)) AS delta
SELECT
toDate(time) + toIntervalDay(delta) AS dt,
version()
FROM
(
SELECT NOW() AS time
)
┌─────────dt─┬─version()──┐
│ 2020-11-01 │ 20.10.3.30 │
│ 2020-11-02 │ 20.10.3.30 │
└────────────┴────────────┘
2 rows in set. Elapsed: 0.006 sec.
```
* After adding ORDER BY dt
```sql
WITH arrayJoin(range(2)) AS delta
SELECT
toDate(time) + toIntervalDay(delta) AS dt,
version()
FROM
(
SELECT NOW() AS time
)
ORDER BY dt ASC
Received exception from server (version 20.10.3):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type Date of argument of function range.
0 rows in set. Elapsed: 0.022 sec.
```
* Try to change range(2) to equivalent array [0,1]
```sql
WITH arrayJoin([0, 1]) AS delta
SELECT
toDate(time) + toIntervalDay(delta) AS dt,
version()
FROM
(
SELECT NOW() AS time
)
ORDER BY dt ASC
Received exception from server (version 20.10.3):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Argument for function arrayJoin must be Array..
0 rows in set. Elapsed: 0.004 sec.
```
**Expected behavior**
Correctly ordered result set
**Additional context**
Both of the failing queries run correctly on version 20.7.2.30:
```sql
WITH arrayJoin(range(2)) AS delta
SELECT
toDate(time) + toIntervalDay(delta) AS dt,
version()
FROM
(
SELECT NOW() AS time
)
ORDER BY dt ASC
┌─────────dt─┬─version()─┐
│ 2020-11-01 │ 20.7.2.30 │
│ 2020-11-02 │ 20.7.2.30 │
└────────────┴───────────┘
2 rows in set. Elapsed: 0.003 sec.
```
```sql
WITH arrayJoin([0, 1]) AS delta
SELECT
toDate(time) + toIntervalDay(delta) AS dt,
version()
FROM
(
SELECT NOW() AS time
)
ORDER BY dt ASC
┌─────────dt─┬─version()─┐
│ 2020-11-01 │ 20.7.2.30 │
│ 2020-11-02 │ 20.7.2.30 │
└────────────┴───────────┘
2 rows in set. Elapsed: 0.001 sec.
```
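Judging by the error messages, the ORDER BY monotonicity rewrite seems to be substituting function arguments incorrectly. As an untested workaround sketch, disabling that rewrite may help:
```sql
SET optimize_monotonous_functions_in_order_by = 0;
```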
| https://github.com/ClickHouse/ClickHouse/issues/16580 | https://github.com/ClickHouse/ClickHouse/pull/16928 | 7ec83794b869e9ace2987c3876a4f840c2b6715a | 684d649b70ad74c6b06c79a68b684ca32c47c1b2 | "2020-11-01T12:26:58Z" | c++ | "2020-11-16T09:39:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,518 | ["programs/client/Client.cpp"] | clickhouse-client should clear screen until end and force showing cursor in terminal | **Use case**
Some program left garbage in terminal.
**Describe the solution you'd like**
Subset of what `reset` command does. | https://github.com/ClickHouse/ClickHouse/issues/16518 | https://github.com/ClickHouse/ClickHouse/pull/22634 | 74a3668ce4211d701ce6accb43538bae37a77866 | 92f57fd669fe163b5237db1ccf1d5a170a9c3550 | "2020-10-29T16:57:22Z" | c++ | "2021-04-05T06:00:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,414 | ["src/Functions/FunctionsCodingULID.cpp", "src/Functions/castOrDefault.cpp", "src/Functions/extractTimeZoneFromFunctionArguments.cpp", "src/Functions/extractTimeZoneFromFunctionArguments.h", "src/Functions/toTimezone.cpp", "tests/queries/0_stateless/00515_enhanced_time_zones.sql"] | toTimeZone does not throw an error about non-constant timezone | ```
select materialize('America/Los_Angeles') t, toTimeZone(now(), t)
ββtββββββββββββββββββββ¬βtoTimeZone(now(), materialize('America/Los_Angeles'))ββ
β America/Los_Angeles β 2020-10-26 22:58:42 β
βββββββββββββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
desc (select materialize('America/Los_Angeles') t, toTimeZone(now(), t))
ββnameβββββββββββββββββββββββββββββββββββββββββββββββββββ¬βtypeβββββββββββββββββ¬
β t β String β
β toTimeZone(now(), materialize('America/Los_Angeles')) β DateTime('Etc/UTC') β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββββββ΄
example with toString
SELECT
materialize('America/Los_Angeles') AS t, toString(now(), t)
Received exception from server (version 20.11.1):
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type of argument #2 'timezone' of function toString, expected const String, got String.
``` | https://github.com/ClickHouse/ClickHouse/issues/16414 | https://github.com/ClickHouse/ClickHouse/pull/48471 | d69463859f5d90b1f344f2ef74e968a1b6a1f989 | 0fbc9585f1742a9f942a39c417b9010f9d2d4f20 | "2020-10-26T23:00:43Z" | c++ | "2023-04-13T14:57:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,393 | ["src/Interpreters/TreeOptimizer.cpp", "tests/queries/0_stateless/01532_min_max_with_modifiers.reference", "tests/queries/0_stateless/01532_min_max_with_modifiers.sql"] | Rollup shows wrong results for min()/max() in 20.9.3 and even worse in 20.10.2 | **How to reproduce**
Query:
```sql
select
x,
min(x) as lower,
max(x)+1 as upper,
upper-lower as range
from (select arrayJoin([1, 2]) as x)
group by x
with rollup;
```
## 20.4.6
```
ββxββ¬βlowerββ¬βupperββ¬βrangeββ
β 1 β 1 β 2 β 1 β
β 2 β 2 β 3 β 1 β
βββββ΄ββββββββ΄ββββββββ΄ββββββββ
ββxββ¬βlowerββ¬βupperββ¬βrangeββ
β 0 β 1 β 3 β 2 β
βββββ΄ββββββββ΄ββββββββ΄ββββββββ
```
Summary looks correct.
## 20.9.3
```
ββxββ¬βlowerββ¬βupperββ¬βrangeββ
β 1 β 1 β 2 β 1 β
β 2 β 2 β 3 β 1 β
βββββ΄ββββββββ΄ββββββββ΄ββββββββ
ββxββ¬βlowerββ¬βupperββ¬βrangeββ
β 0 β 0 β 3 β 2 β
βββββ΄ββββββββ΄ββββββββ΄ββββββββ
```
`lower` is wrong - the minimum should be 1. However, all other "derived" values are correct. For example `range` correctly shows up as 2, even though table would wrongly imply that `upper-lower==3-0==3`.
Note that I only tested 20.4.6 and 20.9.3, so the regression might have been anywhere in between.
## 20.10.2
```
ββxββ¬βlowerββ¬βupperββ¬βrangeββ
β 1 β 1 β 2 β 1 β
β 2 β 2 β 3 β 1 β
βββββ΄ββββββββ΄ββββββββ΄ββββββββ
ββxββ¬βlowerββ¬βupperββ¬βrangeββ
β 0 β 0 β 1 β 1 β
βββββ΄ββββββββ΄ββββββββ΄ββββββββ
```
All values are wrong. | https://github.com/ClickHouse/ClickHouse/issues/16393 | https://github.com/ClickHouse/ClickHouse/pull/16397 | cc5f15da291d96cafb4f5d47beff424e73d51839 | 30325689c44439d8c7b025338a29a4bb5bbe84a6 | "2020-10-26T12:44:26Z" | c++ | "2020-10-26T23:10:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,372 | ["src/Interpreters/MySQL/InterpretersMySQLDDLQuery.cpp", "src/Interpreters/MySQL/tests/gtest_create_rewritten.cpp"] | Create MaterializeMySQL cause clickhouse server Segmentation fault | (you don't have to strictly follow this form)
**Describe the bug**
After create database with engine MaterializeMySQL, clickhouse server Segmentation fault and restart
**How to reproduce**
* 20.10.2.20 (official build)
* `CREATE DATABASE xxx ENGINE = MaterializeMySQL('host', 'dbname', 'user', 'pass');`
**Error message and/or stacktrace**
```
2020.10.26 10:56:31.340735 [ 18476 ] {} <Fatal> BaseDaemon: ########################################
2020.10.26 10:56:31.340793 [ 18476 ] {} <Fatal> BaseDaemon: (version 20.10.2.20 (official build), build id: FFE5E11E2023F86A) (from thread 1298) (no query) Received signal Segmentation fault (11)
2020.10.26 10:56:31.340818 [ 18476 ] {} <Fatal> BaseDaemon: Address: 0xfffffffffb0f8320 Access: write. Address not mapped to object.
2020.10.26 10:56:31.340840 [ 18476 ] {} <Fatal> BaseDaemon: Stack trace: 0xdef5e7e 0xdeeaeec 0xdb5214c 0xdb511d7 0xdecaff8 0xdec9e9d 0xdb00ba4 0xdb16b98 0xdb1c79e 0xdb1cc91 0xdafa7a9 0xdaf9e2a 0xdb162cd 0x7b8b75d 0x7b8f273 0x7fb9096a12de 0x7fb908fc5133
2020.10.26 10:56:31.340923 [ 18476 ] {} <Fatal> BaseDaemon: 2. std::__1::enable_if<(__is_cpp17_forward_iterator<std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>*> >::value) && (is_constructible<std::__1::shared_ptr<DB::IAST>, std::__1::iterator_traits<std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>*> >::reference>::value), std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>*> >::type std::__1::vector<std::__1::shared_ptr<DB::IAST>, std::__1::allocator<std::__1::shared_ptr<DB::IAST> > >::insert<std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>*> >(std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST> const*>, std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>*>, std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>*>) @ 0xdef5e7e in /usr/bin/clickhouse
2020.10.26 10:56:31.340957 [ 18476 ] {} <Fatal> BaseDaemon: 3. DB::MySQLInterpreter::InterpreterCreateImpl::getRewrittenQueries(DB::MySQLParser::ASTCreateQuery const&, DB::Context const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xdeeaeec in /usr/bin/clickhouse
2020.10.26 10:56:31.340979 [ 18476 ] {} <Fatal> BaseDaemon: 4. DB::MySQLInterpreter::InterpreterMySQLDDLQuery<DB::MySQLInterpreter::InterpreterCreateImpl>::execute() @ 0xdb5214c in /usr/bin/clickhouse
2020.10.26 10:56:31.340993 [ 18476 ] {} <Fatal> BaseDaemon: 5. DB::InterpreterExternalDDLQuery::execute() @ 0xdb511d7 in /usr/bin/clickhouse
2020.10.26 10:56:31.341006 [ 18476 ] {} <Fatal> BaseDaemon: 6. ? @ 0xdecaff8 in /usr/bin/clickhouse
2020.10.26 10:56:31.341022 [ 18476 ] {} <Fatal> BaseDaemon: 7. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::Context&, bool, DB::QueryProcessingStage::Enum, bool) @ 0xdec9e9d in /usr/bin/clickhouse
2020.10.26 10:56:31.341036 [ 18476 ] {} <Fatal> BaseDaemon: 8. ? @ 0xdb00ba4 in /usr/bin/clickhouse
2020.10.26 10:56:31.341056 [ 18476 ] {} <Fatal> BaseDaemon: 9. ? @ 0xdb16b98 in /usr/bin/clickhouse
2020.10.26 10:56:31.341094 [ 18476 ] {} <Fatal> BaseDaemon: 10. DB::commitMetadata(std::__1::function<void ()> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xdb1c79e in /usr/bin/clickhouse
2020.10.26 10:56:31.341111 [ 18476 ] {} <Fatal> BaseDaemon: 11. DB::MaterializeMetadata::transaction(DB::MySQLReplication::Position const&, std::__1::function<void ()> const&) @ 0xdb1cc91 in /usr/bin/clickhouse
2020.10.26 10:56:31.341126 [ 18476 ] {} <Fatal> BaseDaemon: 12. DB::MaterializeMySQLSyncThread::prepareSynchronized(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xdafa7a9 in /usr/bin/clickhouse
2020.10.26 10:56:31.341146 [ 18476 ] {} <Fatal> BaseDaemon: 13. DB::MaterializeMySQLSyncThread::synchronization(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0xdaf9e2a in /usr/bin/clickhouse
2020.10.26 10:56:31.341174 [ 18476 ] {} <Fatal> BaseDaemon: 14. ? @ 0xdb162cd in /usr/bin/clickhouse
2020.10.26 10:56:31.341189 [ 18476 ] {} <Fatal> BaseDaemon: 15. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x7b8b75d in /usr/bin/clickhouse
2020.10.26 10:56:31.341203 [ 18476 ] {} <Fatal> BaseDaemon: 16. ? @ 0x7b8f273 in /usr/bin/clickhouse
2020.10.26 10:56:31.341232 [ 18476 ] {} <Fatal> BaseDaemon: 17. start_thread @ 0x82de in /usr/lib64/libpthread-2.28.so
2020.10.26 10:56:31.341254 [ 18476 ] {} <Fatal> BaseDaemon: 18. clone @ 0xfc133 in /usr/lib64/libc-2.28.so
2020.10.26 10:57:13.074460 [ 18605 ] {} <Fatal> BaseDaemon: ########################################
``` | https://github.com/ClickHouse/ClickHouse/issues/16372 | https://github.com/ClickHouse/ClickHouse/pull/18211 | a4b0d9ba4c99a22007ad872ba64525d0445469d7 | 0e807d0647179b8b1a4a6b7a1c6ffa0ff6056451 | "2020-10-26T03:22:23Z" | c++ | "2020-12-22T08:51:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,307 | ["src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp", "src/Storages/MergeTree/ReplicatedMergeTreeQueue.h"] | ZooKeeper coordination may fail permanently after multiple huge ALTERs are executed. | **Describe the bug:**
Create a table with very long column names or type names (type names can be long for legitimate reason, e.g. Enum).
Make table structure to be less than 1 MB but larger than 100 KB.
Create two replicas, turn one replica off. Execute at least 10 ALTER queries to slightly modify columns in the table.
Turn second replica on.
It will fail with:
```
<Debug> sandbox.taskdiskusage (ReplicatedMergeTreeQueue): Pulling 42 entries to queue: log-0001412404 - log-0001412445
...
<Error> void Coordination::ZooKeeper::receiveThread(): Code: 33, e.displayText() = DB::Exception: Cannot read all data. Bytes read: 0...
```
(there is no subsequent message `Pulled 42 entries to queue`)
Because it tries to make a transaction to move several `log` entries to `queue` (the code is in `ReplicatedMergeTreeQueue::pullLogsToQueue` function) and this transaction is larger than 1 MB.
After this message all other ZooKeeper queries will fail with `Session expired` error.
After the session to ZooKeeper is reinitialized, everything will repeat.
**Workaround:**
1 MB is the default limit for packet size in ZooKeeper. You should increase it with the parameter `-Djute.maxbuffer` in ZooKeeper.
**Notes:**
There was no such issue with ALTER columns queries before version 20.3. In version 20.3 ALTER of columns goes through the replication log, the change was added in #8701. It means that the issue may manifestate itself after upgrade to the versions 20.3+. But the root cause of the bug itself is present in every version, you can get similar effect with ALTER UPDATE or DELETE if the expression is very long (e.g. long list of constants). | https://github.com/ClickHouse/ClickHouse/issues/16307 | https://github.com/ClickHouse/ClickHouse/pull/16332 | d46cf39f3be34428c00590c2905ee022bdd8261f | 0faf2bc7e320d6d5b3c24f0467fa18639ca5cd45 | "2020-10-23T19:16:41Z" | c++ | "2020-10-29T06:09:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,265 | ["src/Core/MySQL/MySQLReplication.cpp"] | MaterializeMySQL Decimal type data convert to weired one | This issue is hard to reproduce, it happens 10 times in last 24 hours
duration_hr is a field with type Decimal (13, 4), I found some abnormal conversion during the ClickHouse replication, as below:
```
MySQL ClickHouse
0.3333 -> -2147483653.2202
0.5000 -> -2147483653.0535
0.1667 -> -2147483653.3868
0.0833 -> -2147483653.4702
0.7500 -> -2147483652.8035
```
As you can see, a weired one among the normal ones:
```
β HCAC β 2020-10-22 07:10:00 β 0.3333 β
β HCAC β 2020-10-22 07:15:00 β 0.3333 β
β HCAC β 2020-10-22 07:20:00 β 0.3333 β
β HCAC β 2020-10-22 07:25:00 β 0.3333 β
β HCAC β 2020-10-22 07:30:00 β 0.3333 β
β HCAC β 2020-10-22 07:35:00 β -2147483653.2202 β
β HCAC β 2020-10-22 07:40:00 β 0.3333 β
β HCAC β 2020-10-22 07:45:00 β 0.3333 β
β HCAC β 2020-10-22 07:50:00 β 0.3333 β
β HCAC β 2020-10-22 07:55:00 β 0.3333 β
β HCAC β 2020-10-22 08:00:00 β 0.3333 β
β HCAC β 2020-10-22 08:05:00 β 0.3333 β
```
-Lyon | https://github.com/ClickHouse/ClickHouse/issues/16265 | https://github.com/ClickHouse/ClickHouse/pull/31990 | fa298b089e9669a8ffb1aaf00d0fbfb922f40f06 | 9e034ee3a5af2914aac4fd9a39fd469286eb9b86 | "2020-10-22T16:02:48Z" | c++ | "2021-12-01T16:18:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,260 | ["docs/en/sql-reference/functions/date-time-functions.md", "docs/ru/sql-reference/functions/date-time-functions.md", "src/Functions/timeSlots.cpp", "tests/queries/0_stateless/02319_timeslots_dt64.reference", "tests/queries/0_stateless/02319_timeslots_dt64.sql"] | timeSlots() does not support DateTime64 | **`timeSlots() ` is not working with DateTime64 date type at all, even when within normal range:**
```sql
SELECT timeSlots(toDateTime64('2012-01-01 12:20:00', 0, 'UTC'), 600)
Received exception from server (version 22.1.3):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type DateTime64(0, 'UTC') of first argument of function timeSlots. Must be DateTime.: While processing timeSlots(toDateTime64('2012-01-01 12:20:00', 0, 'UTC'), 600). (ILLEGAL_TYPE_OF_ARGUMENT)
``` | https://github.com/ClickHouse/ClickHouse/issues/16260 | https://github.com/ClickHouse/ClickHouse/pull/37951 | acb0137dbb75e04cf4cf1567307228a4eb6aab8f | b52843d5fd79f0d11ab379098b94fcb5dd805032 | "2020-10-22T12:48:37Z" | c++ | "2022-07-30T18:49:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,171 | ["tests/queries/0_stateless/02133_final_prewhere_where_lowcardinality_replacing.reference", "tests/queries/0_stateless/02133_final_prewhere_where_lowcardinality_replacing.sql"] | Exception with combination of FINAL, PREWHERE, WHERE, and LowCardinality on ReplicatedReplacingMergeTree. | **Disclaimer:** I (we) know that the combination of all.... six(?) of these factors is a pretty far edge case. We also know that the usage of `FINAL` and `PREWHERE` is actively discouraged. We can circumvent this issue by figuring out a different query strategy but would prefer this to work nonetheless.
When applying a `SELECT` like the following, we receive an exception:
```sql
SELECT toTypeName(level)
FROM errors_local
FINAL
PREWHERE isNotNull(level)
WHERE isNotNull(level)
LIMIT 1
β Progress: 151.99 thousand rows, 6.99 MB (1.48 million rows/s., 68.20 MB/s.) 19%
Received exception from server (version 20.7.2):
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Expected ColumnLowCardinality, gotUInt8: While executing ReplacingSorted.
```
Other combinations of the query can produce expected results. Usage of `toTypeName(level)` is purely for the example:
```sql
SELECT toTypeName(level)
FROM errors_local
FINAL
PREWHERE isNotNull(level)
LIMIT 1
ββtoTypeName(level)βββββββ
β LowCardinality(String) β
ββββββββββββββββββββββββββ
```
```sql
SELECT toTypeName(level)
FROM errors_local
PREWHERE isNotNull(level)
WHERE isNotNull(level)
LIMIT 1
ββtoTypeName(level)βββββββ
β LowCardinality(String) β
ββββββββββββββββββββββββββ
```
```sql
SELECT toTypeName(level)
FROM errors_local
FINAL
WHERE isNotNull(level)
LIMIT 1
ββtoTypeName(level)βββββββ
β LowCardinality(String) β
ββββββββββββββββββββββββββ
```
**Additional context**
Version is `20.7.2.30`.
Column info is as such:
```sql
SELECT *
FROM system.columns
WHERE (name = 'level') AND (table = 'errors_local')
FORMAT Vertical
Row 1:
ββββββ
database: default
table: errors_local
name: level
type: LowCardinality(String)
position: 39
default_kind:
default_expression:
data_compressed_bytes: xxx
data_uncompressed_bytes: xxx
marks_bytes: xxx
comment:
is_in_partition_key: 0
is_in_sorting_key: 0
is_in_primary_key: 0
is_in_sampling_key: 0
compression_codec:
```
Possibly relevant table settings: ```min_bytes_for_wide_part = '10000000'```
---
Let me know if you'd prefer any extra detail or if there's a duplicate issue. I didn't have any luck searching.
**Edit:** I had spoken too soon about the `min_bytes_for_wide_part` being the culprit. We're still seeing this exception after it's disabled. | https://github.com/ClickHouse/ClickHouse/issues/16171 | https://github.com/ClickHouse/ClickHouse/pull/32421 | 13a3f85ecef741e13c21a0ace33e26111ff2cbf4 | 6cfb1177325fea43f043aa5f60aebf20bba20082 | "2020-10-19T23:13:24Z" | c++ | "2021-12-10T12:23:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,125 | ["src/TableFunctions/ITableFunctionFileLike.cpp"] | Assertion failure in `file` table function. | https://clickhouse-test-reports.s3.yandex.net/16074/19e31578dfc5847766d13db7cbae9af6949fe5b3/fuzzer/fuzzer.log | https://github.com/ClickHouse/ClickHouse/issues/16125 | https://github.com/ClickHouse/ClickHouse/pull/16189 | 14f5bef796db287518f056ada32f96681cb5006b | 3c53d478406c57e425cb8847012b39da1362831b | "2020-10-18T16:09:25Z" | c++ | "2020-10-21T19:38:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,107 | ["src/Interpreters/TreeOptimizer.cpp", "src/Interpreters/TreeOptimizer.h", "src/Interpreters/TreeRewriter.cpp", "tests/queries/0_stateless/01321_monotonous_functions_in_order_by.reference", "tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.reference", "tests/queries/0_stateless/01562_optimize_monotonous_functions_in_order_by.sql"] | optimize_read_in_order doesn't work with ORDER BY (toDate(timestamp)) in clickhouse 20.8+ | **How to reproduce**
```
CREATE TABLE test_order_by (timestamp DateTime, key UInt32) ENGINE=MergeTree() PARTITION BY toYYYYMM(timestamp) ORDER BY (toDate(timestamp), key);
INSERT INTO test_order_by SELECT now() + toIntervalSecond(number), number % 4 FROM numbers(10000000);
OPTIMIZE TABLE test_order_by FINAL;
Q1: SELECT * FROM test_order_by ORDER BY timestamp LIMIT 10;
Q2: SELECT * FROM test_order_by ORDER BY toDate(timestamp) LIMIT 10;
Q3: SELECT * FROM test_order_by ORDER BY toDate(timestamp), timestamp LIMIT 10;
Q1 works the same on all versions of clickhouse.
Q1: 10 rows in set. Elapsed: 0.060 sec. Processed 10.00 million rows, 80.00 MB (166.87 million rows/s., 1.33 GB/s.)
Clickhouse 20.3.19
Q2: 10 rows in set. Elapsed: 0.013 sec. Processed 655.36 thousand rows, 5.24 MB (50.57 million rows/s., 404.54 MB/s.)
Q3: 10 rows in set. Elapsed: 0.034 sec. Processed 720.90 thousand rows, 5.77 MB (21.15 million rows/s., 169.23 MB/s.)
Clickhouse 20.4.2.9
Q2: Code: 15. DB::Exception: Received from localhost:9000. DB::Exception: Column 'toDate(timestamp)' already exists.
Q3: Code: 15. DB::Exception: Received from localhost:9000. DB::Exception: Column 'toDate(timestamp)' already exists.
Clickhouse 20.5.5.74
Q2: 10 rows in set. Elapsed: 0.020 sec. Processed 1.44 million rows, 11.53 MB (72.88 million rows/s., 583.04 MB/s.)
Q3: 10 rows in set. Elapsed: 0.013 sec. Processed 1.57 million rows, 12.58 MB (121.66 million rows/s., 973.31 MB/s.)
Clickhouse 20.6.8.5
Q2: 10 rows in set. Elapsed: 0.008 sec. Processed 1.05 million rows, 8.39 MB (131.13 million rows/s., 1.05 GB/s.)
Q3: 10 rows in set. Elapsed: 0.011 sec. Processed 1.11 million rows, 8.91 MB (105.19 million rows/s., 841.48 MB/s.)
Clickhouse 20.7.4.11
Q2: 10 rows in set. Elapsed: 0.008 sec. Processed 1.05 million rows, 8.38 MB (130.08 million rows/s., 1.04 GB/s.)
Q3: 10 rows in set. Elapsed: 0.012 sec. Processed 1.11 million rows, 8.91 MB (90.35 million rows/s., 722.80 MB/s.)
Clickhouse 20.8.4.11
Q2: 10 rows in set. Elapsed: 0.046 sec. Processed 10.00 million rows, 80.00 MB (215.31 million rows/s., 1.72 GB/s.)
Q3: 10 rows in set. Elapsed: 0.054 sec. Processed 10.00 million rows, 80.00 MB (185.84 million rows/s., 1.49 GB/s.)
Clickhouse 20.9.3
Q2: 10 rows in set. Elapsed: 0.055 sec. Processed 10.00 million rows, 80.00 MB (181.47 million rows/s., 1.45 GB/s.)
Q3: 10 rows in set. Elapsed: 0.040 sec. Processed 10.00 million rows, 80.00 MB (250.70 million rows/s., 2.01 GB/s.)
```
**Additional context**
It would be a good feature, of clickhouse will use optimize in order with monotonic functions without ORDER BY rewriting.
` SET optimize_monotonous_functions_in_order_by = 1` has no affect on that kind of queries.
| https://github.com/ClickHouse/ClickHouse/issues/16107 | https://github.com/ClickHouse/ClickHouse/pull/16956 | 51f159201b3020b0058fecf89ef897b0610d472c | 1c7844b91ed6e73bf0045acf4522bda89ac4e9bc | "2020-10-16T23:03:55Z" | c++ | "2020-11-29T17:45:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,087 | ["src/AggregateFunctions/AggregateFunctionSumMap.h", "tests/queries/0_stateless/01280_min_map_max_map.reference", "tests/queries/0_stateless/01280_min_map_max_map.sql"] | maxMap is removing entries with empty string or zero value | **Describe the unexpected behaviour**
maxMap is removing entries with empty string or zero value.
**How to reproduce**
String array
SELECT maxMap(a, b) from values ('a Array(String), b Array(String)',(['A'],['']),(['B'],['']));
ββmaxMap(a, b) ββ
β ([],[]) β
βββββββββββββ
Numeric array
`SELECT maxMap(a, b) from values ('a Array(String), b Array(UInt64)',(['A'],[0]),(['B'],[0]));
ββmaxMap(a, b) ββ
β ([],[]) β
βββββββββββββ
`
It happens with all versions published.
**Expected behavior**
We expect this to maintain those keys with its value (empty string or zero).
| https://github.com/ClickHouse/ClickHouse/issues/16087 | https://github.com/ClickHouse/ClickHouse/pull/16631 | caa1bf9bcd2b0452b835aa77ea6f9810ba52d220 | 9cb0b76c161085dad0435c9ae53ad69468af7a93 | "2020-10-16T15:36:55Z" | c++ | "2020-11-05T11:14:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,072 | ["cmake/tools.cmake"] | MacOSοΌBuilding CXX object src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsLogical.cpp.o Fail | OS: mac OS 10.14.6
gcc verson : gcc (GCC) 10.0.1
ninja version: 1.10.0
From Docs i know, "Build should work on Mac OS X 10.15 (Catalina)." But I heard that there will be problems after upgrading MacOS to 15, so macOS has not been upgraded. I encountered the following problem, so I want to know if it is related to the OS version? And how to solve the problem?
**# ninja clickhouse-server clickhouse-client**
```
[0/2] Re-checking globbed directories...
[267/844] Building CXX object src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsHashing.cpp.o
FAILED: src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsHashing.cpp.o
/usr/bin/clang++ -DENABLE_MULTITARGET_CODE=1 -DLZ4_DISABLE_DEPRECATE_WARNINGS=1 -DPOCO_ENABLE_CPP11 -DPOCO_HAVE_FD_POLL -DPOCO_OS_FAMILY_UNIX -DUNALIGNED_OK -DUSE_FASTMEMCPY=0 -DUSE_HYPERSCAN=1 -DUSE_REPLXX=1 -DUSE_XXHASH=1 -DWITH_COVERAGE=0 -DWITH_GZFILEOP -DX86_64 -DZLIB_COMPAT -Iincludes/configs -I../contrib/cityhash102/include -I../contrib/libfarmhash -I../src -Isrc -Isrc/Core/include -I../base/common/.. -Ibase/common/.. -I../contrib/cctz/include -Icontrib/zlib-ng -I../contrib/zlib-ng -I../base/pcg-random/. -I../contrib/consistent-hashing -I../contrib/consistent-hashing-sumbur -I../contrib/libuv/include -I../contrib/libmetrohash/src -I../contrib/murmurhash/include -I../contrib/lz4/lib -isystem ../contrib/sparsehash-c11 -isystem ../contrib/h3/src/h3lib/include -isystem ../contrib/rapidjson/include -isystem ../contrib/libcxx/include -isystem ../contrib/libcxxabi/include -isystem ../contrib/base64 -isystem ../contrib/msgpack-c/include -isystem ../contrib/re2 -isystem ../contrib/boost -isystem ../contrib/poco/Net/include -isystem ../contrib/poco/Foundation/include -isystem ../contrib/poco/NetSSL_OpenSSL/include -isystem ../contrib/poco/Crypto/include -isystem ../contrib/openssl-cmake/linux_x86_64/include -isystem ../contrib/openssl/include -isystem ../contrib/poco/Util/include -isystem ../contrib/poco/JSON/include -isystem ../contrib/poco/XML/include -isystem ../contrib/replxx/include -isystem ../contrib/fmtlib-cmake/../fmtlib/include -isystem ../contrib/double-conversion -isystem ../contrib/ryu -isystem contrib/re2_st -isystem ../contrib/croaring -isystem ../contrib/orc/c++/include -isystem contrib/orc/c++/include -isystem ../contrib/pdqsort -isystem ../contrib/AMQP-CPP/include -isystem ../contrib/libdivide/. -isystem contrib/h3/src/h3lib/include -isystem ../contrib/hyperscan/src -isystem ../contrib/simdjson/include -isystem ../contrib/stats/include -isystem ../contrib/gcem/include -fchar8_t -fdiagnostics-color=always -std=c++2a -fsized-deallocation -gdwarf-aranges -msse4.1 -msse4.2 -mpopcnt -Wall -Wno-unused-command-line-argument -stdlib=libc++ -fdiagnostics-absolute-paths -Werror -mmacosx-version-min=10.15 -Wextra -Wpedantic -Wno-vla-extension -Wno-zero-length-array -Wcomma -Wconditional-uninitialized -Wcovered-switch-default -Wdeprecated -Wembedded-directive -Wextra-semi -Wgnu-case-range -Winconsistent-missing-destructor-override -Wnewline-eof -Wold-style-cast -Wrange-loop-analysis -Wredundant-parens -Wreserved-id-macro -Wshadow-field -Wshadow-uncaptured-local -Wshadow -Wstring-plus-int -Wundef -Wunreachable-code-return -Wunreachable-code -Wunused-exception-parameter -Wunused-macros -Wunused-member-function -Wzero-as-null-pointer-constant -Weverything -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c99-extensions -Wno-conversion -Wno-deprecated-dynamic-exception-spec -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-nested-anon-types -Wno-packed -Wno-padded -Wno-return-std-move-in-c++11 -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -g -O0 -g3 -ggdb3 -fno-inline -D_LIBCPP_DEBUG=0 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk -D OS_DARWIN -nostdinc++ -MD -MT src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsHashing.cpp.o -MF 
src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsHashing.cpp.o.d -o src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsHashing.cpp.o -c ../src/Functions/FunctionsHashing.cpp
In file included from ../src/Functions/FunctionsHashing.cpp:1:
In file included from ../src/Functions/FunctionsHashing.h:44:
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:197:1: error: expected unqualified-id
DECLARE_SSE42_SPECIFIC_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:131:42: note: expanded from macro 'DECLARE_SSE42_SPECIFIC_CODE'
#define DECLARE_SSE42_SPECIFIC_CODE(...) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:102:9: note: expanded from macro '\
BEGIN_SSE42_SPECIFIC_CODE'
_Pragma("clang attribute push(__attribute__((target(\"sse,sse2,sse3,ssse3,sse4,popcnt\"))),apply_to=function)")
^
<scratch space>:53:8: note: expanded from here
clang attribute push(__attribute__((target("sse,sse2,sse3,ssse3,sse4,popcnt"))),apply_to=function)
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
In file included from ../src/Functions/FunctionsHashing.h:44:
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:197:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:137:3: note: expanded from macro 'DECLARE_SSE42_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:55:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
In file included from ../src/Functions/FunctionsHashing.h:44:
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:201:1: error: expected unqualified-id
DECLARE_AVX_SPECIFIC_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:146:3: note: expanded from macro 'DECLARE_AVX_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:59:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
In file included from ../src/Functions/FunctionsHashing.h:44:
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:205:1: error: expected unqualified-id
DECLARE_AVX2_SPECIFIC_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:155:3: note: expanded from macro 'DECLARE_AVX2_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:63:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
In file included from ../src/Functions/FunctionsHashing.h:44:
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:209:1: error: expected unqualified-id
DECLARE_AVX512F_SPECIFIC_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:164:3: note: expanded from macro 'DECLARE_AVX512F_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:67:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:595:1: error: expected unqualified-id
DECLARE_MULTITARGET_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:187:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_DEFAULT_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:131:42: note: expanded from macro '\
DECLARE_SSE42_SPECIFIC_CODE'
#define DECLARE_SSE42_SPECIFIC_CODE(...) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:102:9: note: expanded from macro '\
BEGIN_SSE42_SPECIFIC_CODE'
_Pragma("clang attribute push(__attribute__((target(\"sse,sse2,sse3,ssse3,sse4,popcnt\"))),apply_to=function)")
^
<scratch space>:259:8: note: expanded from here
clang attribute push(__attribute__((target("sse,sse2,sse3,ssse3,sse4,popcnt"))),apply_to=function)
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:595:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:187:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_DEFAULT_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:137:3: note: expanded from macro '\
DECLARE_SSE42_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:261:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:595:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:188:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_SSE42_SPECIFIC_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:146:3: note: expanded from macro '\
DECLARE_AVX_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:265:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:595:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:189:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_AVX_SPECIFIC_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:155:3: note: expanded from macro '\
DECLARE_AVX2_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:269:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:595:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:190:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_AVX2_SPECIFIC_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:164:3: note: expanded from macro '\
DECLARE_AVX512F_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:273:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:706:1: error: expected unqualified-id
DECLARE_MULTITARGET_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:187:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_DEFAULT_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:131:42: note: expanded from macro '\
DECLARE_SSE42_SPECIFIC_CODE'
#define DECLARE_SSE42_SPECIFIC_CODE(...) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:102:9: note: expanded from macro '\
BEGIN_SSE42_SPECIFIC_CODE'
_Pragma("clang attribute push(__attribute__((target(\"sse,sse2,sse3,ssse3,sse4,popcnt\"))),apply_to=function)")
^
<scratch space>:278:8: note: expanded from here
clang attribute push(__attribute__((target("sse,sse2,sse3,ssse3,sse4,popcnt"))),apply_to=function)
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:706:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:187:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_DEFAULT_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:137:3: note: expanded from macro '\
DECLARE_SSE42_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:280:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:706:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:188:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_SSE42_SPECIFIC_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:146:3: note: expanded from macro '\
DECLARE_AVX_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:4:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:706:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:189:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_AVX_SPECIFIC_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:155:3: note: expanded from macro '\
DECLARE_AVX2_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:8:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:706:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:190:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_AVX2_SPECIFIC_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:164:3: note: expanded from macro '\
DECLARE_AVX512F_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:12:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsHashing.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:1297:25: error: no template named 'FunctionAnyHash'; did you mean 'TargetSpecific::Default::FunctionAnyHash'?
using FunctionHalfMD5 = FunctionAnyHash<HalfMD5Impl>;
^~~~~~~~~~~~~~~
TargetSpecific::Default::FunctionAnyHash
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:709:7: note: 'TargetSpecific::Default::FunctionAnyHash' declared here
class FunctionAnyHash : public IFunction
^
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:1299:27: error: no template named 'FunctionAnyHash'; did you mean 'TargetSpecific::Default::FunctionAnyHash'?
using FunctionSipHash64 = FunctionAnyHash<SipHash64Impl>;
^~~~~~~~~~~~~~~
TargetSpecific::Default::FunctionAnyHash
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:709:7: note: 'TargetSpecific::Default::FunctionAnyHash' declared here
class FunctionAnyHash : public IFunction
^
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:1300:27: error: no template named 'FunctionIntHash'; did you mean 'TargetSpecific::Default::FunctionIntHash'?
using FunctionIntHash32 = FunctionIntHash<IntHash32Impl, NameIntHash32>;
^~~~~~~~~~~~~~~
TargetSpecific::Default::FunctionIntHash
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:598:7: note: 'TargetSpecific::Default::FunctionIntHash' declared here
class FunctionIntHash : public IFunction
^
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:1301:27: error: no template named 'FunctionIntHash'; did you mean 'TargetSpecific::Default::FunctionIntHash'?
using FunctionIntHash64 = FunctionIntHash<IntHash64Impl, NameIntHash64>;
^~~~~~~~~~~~~~~
TargetSpecific::Default::FunctionIntHash
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsHashing.h:598:7: note: 'TargetSpecific::Default::FunctionIntHash' declared here
class FunctionIntHash : public IFunction
^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
[268/844] Building CXX object src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsRandom.cpp.o
FAILED: src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsRandom.cpp.o
/usr/bin/clang++ -DENABLE_MULTITARGET_CODE=1 -DLZ4_DISABLE_DEPRECATE_WARNINGS=1 -DPOCO_ENABLE_CPP11 -DPOCO_HAVE_FD_POLL -DPOCO_OS_FAMILY_UNIX -DUNALIGNED_OK -DUSE_FASTMEMCPY=0 -DUSE_HYPERSCAN=1 -DUSE_REPLXX=1 -DUSE_XXHASH=1 -DWITH_COVERAGE=0 -DWITH_GZFILEOP -DX86_64 -DZLIB_COMPAT -Iincludes/configs -I../contrib/cityhash102/include -I../contrib/libfarmhash -I../src -Isrc -Isrc/Core/include -I../base/common/.. -Ibase/common/.. -I../contrib/cctz/include -Icontrib/zlib-ng -I../contrib/zlib-ng -I../base/pcg-random/. -I../contrib/consistent-hashing -I../contrib/consistent-hashing-sumbur -I../contrib/libuv/include -I../contrib/libmetrohash/src -I../contrib/murmurhash/include -I../contrib/lz4/lib -isystem ../contrib/sparsehash-c11 -isystem ../contrib/h3/src/h3lib/include -isystem ../contrib/rapidjson/include -isystem ../contrib/libcxx/include -isystem ../contrib/libcxxabi/include -isystem ../contrib/base64 -isystem ../contrib/msgpack-c/include -isystem ../contrib/re2 -isystem ../contrib/boost -isystem ../contrib/poco/Net/include -isystem ../contrib/poco/Foundation/include -isystem ../contrib/poco/NetSSL_OpenSSL/include -isystem ../contrib/poco/Crypto/include -isystem ../contrib/openssl-cmake/linux_x86_64/include -isystem ../contrib/openssl/include -isystem ../contrib/poco/Util/include -isystem ../contrib/poco/JSON/include -isystem ../contrib/poco/XML/include -isystem ../contrib/replxx/include -isystem ../contrib/fmtlib-cmake/../fmtlib/include -isystem ../contrib/double-conversion -isystem ../contrib/ryu -isystem contrib/re2_st -isystem ../contrib/croaring -isystem ../contrib/orc/c++/include -isystem contrib/orc/c++/include -isystem ../contrib/pdqsort -isystem ../contrib/AMQP-CPP/include -isystem ../contrib/libdivide/. -isystem contrib/h3/src/h3lib/include -isystem ../contrib/hyperscan/src -isystem ../contrib/simdjson/include -isystem ../contrib/stats/include -isystem ../contrib/gcem/include -fchar8_t -fdiagnostics-color=always -std=c++2a -fsized-deallocation -gdwarf-aranges -msse4.1 -msse4.2 -mpopcnt -Wall -Wno-unused-command-line-argument -stdlib=libc++ -fdiagnostics-absolute-paths -Werror -mmacosx-version-min=10.15 -Wextra -Wpedantic -Wno-vla-extension -Wno-zero-length-array -Wcomma -Wconditional-uninitialized -Wcovered-switch-default -Wdeprecated -Wembedded-directive -Wextra-semi -Wgnu-case-range -Winconsistent-missing-destructor-override -Wnewline-eof -Wold-style-cast -Wrange-loop-analysis -Wredundant-parens -Wreserved-id-macro -Wshadow-field -Wshadow-uncaptured-local -Wshadow -Wstring-plus-int -Wundef -Wunreachable-code-return -Wunreachable-code -Wunused-exception-parameter -Wunused-macros -Wunused-member-function -Wzero-as-null-pointer-constant -Weverything -Wno-c++98-compat-pedantic -Wno-c++98-compat -Wno-c99-extensions -Wno-conversion -Wno-deprecated-dynamic-exception-spec -Wno-disabled-macro-expansion -Wno-documentation-unknown-command -Wno-double-promotion -Wno-exit-time-destructors -Wno-float-equal -Wno-global-constructors -Wno-missing-prototypes -Wno-missing-variable-declarations -Wno-nested-anon-types -Wno-packed -Wno-padded -Wno-return-std-move-in-c++11 -Wno-shift-sign-overflow -Wno-sign-conversion -Wno-switch-enum -Wno-undefined-func-template -Wno-unused-template -Wno-vla -Wno-weak-template-vtables -Wno-weak-vtables -g -O0 -g3 -ggdb3 -fno-inline -D_LIBCPP_DEBUG=0 -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk -D OS_DARWIN -nostdinc++ -MD -MT src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsRandom.cpp.o -MF 
src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsRandom.cpp.o.d -o src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsRandom.cpp.o -c ../src/Functions/FunctionsRandom.cpp
In file included from ../src/Functions/FunctionsRandom.cpp:1:
In file included from ../src/Functions/FunctionsRandom.h:6:
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:197:1: error: expected unqualified-id
DECLARE_SSE42_SPECIFIC_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:131:42: note: expanded from macro 'DECLARE_SSE42_SPECIFIC_CODE'
#define DECLARE_SSE42_SPECIFIC_CODE(...) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:102:9: note: expanded from macro '\
BEGIN_SSE42_SPECIFIC_CODE'
_Pragma("clang attribute push(__attribute__((target(\"sse,sse2,sse3,ssse3,sse4,popcnt\"))),apply_to=function)")
^
<scratch space>:179:8: note: expanded from here
clang attribute push(__attribute__((target("sse,sse2,sse3,ssse3,sse4,popcnt"))),apply_to=function)
^
In file included from ../src/Functions/FunctionsRandom.cpp:1:
In file included from ../src/Functions/FunctionsRandom.h:6:
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:197:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:137:3: note: expanded from macro 'DECLARE_SSE42_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:181:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsRandom.cpp:1:
In file included from ../src/Functions/FunctionsRandom.h:6:
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:201:1: error: expected unqualified-id
DECLARE_AVX_SPECIFIC_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:146:3: note: expanded from macro 'DECLARE_AVX_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:185:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsRandom.cpp:1:
In file included from ../src/Functions/FunctionsRandom.h:6:
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:205:1: error: expected unqualified-id
DECLARE_AVX2_SPECIFIC_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:155:3: note: expanded from macro 'DECLARE_AVX2_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:189:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsRandom.cpp:1:
In file included from ../src/Functions/FunctionsRandom.h:6:
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:209:1: error: expected unqualified-id
DECLARE_AVX512F_SPECIFIC_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:164:3: note: expanded from macro 'DECLARE_AVX512F_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:193:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsRandom.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:39:1: error: expected unqualified-id
DECLARE_MULTITARGET_CODE(
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:187:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_DEFAULT_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:131:42: note: expanded from macro '\
DECLARE_SSE42_SPECIFIC_CODE'
#define DECLARE_SSE42_SPECIFIC_CODE(...) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:102:9: note: expanded from macro '\
BEGIN_SSE42_SPECIFIC_CODE'
_Pragma("clang attribute push(__attribute__((target(\"sse,sse2,sse3,ssse3,sse4,popcnt\"))),apply_to=function)")
^
<scratch space>:96:8: note: expanded from here
clang attribute push(__attribute__((target("sse,sse2,sse3,ssse3,sse4,popcnt"))),apply_to=function)
^
In file included from ../src/Functions/FunctionsRandom.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:39:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:187:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_DEFAULT_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:137:3: note: expanded from macro '\
DECLARE_SSE42_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:98:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsRandom.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:39:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:188:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_SSE42_SPECIFIC_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:146:3: note: expanded from macro '\
DECLARE_AVX_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:102:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsRandom.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:39:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:189:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_AVX_SPECIFIC_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:155:3: note: expanded from macro '\
DECLARE_AVX2_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:106:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsRandom.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:39:1: error: expected unqualified-id
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:190:44: note: expanded from macro 'DECLARE_MULTITARGET_CODE'
DECLARE_AVX2_SPECIFIC_CODE (__VA_ARGS__) \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:164:3: note: expanded from macro '\
DECLARE_AVX512F_SPECIFIC_CODE'
} \
^
/Users/CLionProjects/ClickHouse/src/Functions/TargetSpecific.h:104:9: note: expanded from macro '\
END_TARGET_SPECIFIC_CODE'
_Pragma("clang attribute pop")
^
<scratch space>:110:8: note: expanded from here
clang attribute pop
^
In file included from ../src/Functions/FunctionsRandom.cpp:1:
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:91:31: error: no template named 'FunctionRandomImpl'; did you mean 'FunctionRandom'?
class FunctionRandom : public FunctionRandomImpl<TargetSpecific::Default::RandImpl, ToType, Name>
^~~~~~~~~~~~~~~~~~
FunctionRandom
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:91:7: note: 'FunctionRandom' declared here
class FunctionRandom : public FunctionRandomImpl<TargetSpecific::Default::RandImpl, ToType, Name>
^
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:91:31: error: too many template arguments for class template 'FunctionRandom'
class FunctionRandom : public FunctionRandomImpl<TargetSpecific::Default::RandImpl, ToType, Name>
^ ~~~~~
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:91:7: note: template is declared here
class FunctionRandom : public FunctionRandomImpl<TargetSpecific::Default::RandImpl, ToType, Name>
^
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:105:116: error: only virtual member functions can be marked 'override'
void executeImpl(Block & block, const ColumnNumbers & arguments, size_t result, size_t input_rows_count) const override
^~~~~~~~
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:97:13: error: no template named 'FunctionRandomImpl'; did you mean '::DB::FunctionRandom'?
FunctionRandomImpl<TargetSpecific::Default::RandImpl, ToType, Name>>();
^~~~~~~~~~~~~~~~~~
::DB::FunctionRandom
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:91:7: note: '::DB::FunctionRandom' declared here
class FunctionRandom : public FunctionRandomImpl<TargetSpecific::Default::RandImpl, ToType, Name>
^
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:97:13: error: too many template arguments for class template 'FunctionRandom'
FunctionRandomImpl<TargetSpecific::Default::RandImpl, ToType, Name>>();
^ ~~~~~
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:91:7: note: template is declared here
class FunctionRandom : public FunctionRandomImpl<TargetSpecific::Default::RandImpl, ToType, Name>
^
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:97:81: error: expected unqualified-id
FunctionRandomImpl<TargetSpecific::Default::RandImpl, ToType, Name>>();
^
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:101:13: error: no template named 'FunctionRandomImpl'; did you mean '::DB::FunctionRandom'?
FunctionRandomImpl<TargetSpecific::AVX2::RandImpl, ToType, Name>>();
^~~~~~~~~~~~~~~~~~
::DB::FunctionRandom
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:91:7: note: '::DB::FunctionRandom' declared here
class FunctionRandom : public FunctionRandomImpl<TargetSpecific::Default::RandImpl, ToType, Name>
^
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:101:48: error: no member named 'AVX2' in namespace 'DB::TargetSpecific'
FunctionRandomImpl<TargetSpecific::AVX2::RandImpl, ToType, Name>>();
~~~~~~~~~~~~~~~~^
/Users/CLionProjects/ClickHouse/src/Functions/FunctionsRandom.h:101:80: error: expected a type
FunctionRandomImpl<TargetSpecific::AVX2::RandImpl, ToType, Name>>();
^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
[272/844] Building CXX object src/Functions/CMakeFiles/clickhouse_functions.dir/FunctionsLogical.cpp.o
ninja: build stopped: subcommand failed.
```
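Every "expected unqualified-id" diagnostic above points at the expansion of `_Pragma("clang attribute push(...)")` and `_Pragma("clang attribute pop")` in `TargetSpecific.h`, which suggests the system AppleClang (the build invokes `/usr/bin/clang++` against the MacOSX10.14 SDK) does not accept this pragma form. A minimal sketch to test the toolchain in isolation; the namespace and function names here are placeholders, not the actual ClickHouse declarations:
```
// pragma_check.cpp -- compile with: /usr/bin/clang++ -std=c++2a pragma_check.cpp
// If this standalone file produces the same "expected unqualified-id" errors,
// the problem is the toolchain, not the ClickHouse sources.
#pragma clang attribute push(__attribute__((target("sse,sse2,sse3,ssse3,sse4,popcnt"))), apply_to = function)
namespace TargetSpecific::SSE42
{
    // Placeholder for the target-specific implementations the macros declare.
    inline unsigned rand_impl() { return 42; }
}
#pragma clang attribute pop

int main() { return TargetSpecific::SSE42::rand_impl() == 42 ? 0 : 1; }
```
If this file fails the same way, a newer LLVM clang (for example from Homebrew) rather than the bundled Xcode toolchain would be the usual workaround.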
| https://github.com/ClickHouse/ClickHouse/issues/16072 | https://github.com/ClickHouse/ClickHouse/pull/16074 | 691bd829a6845431c5ed51ddc3f50673cc7b2e9f | 3e6839a7e141c07b1a52aaab438f7c03090f8b16 | "2020-10-16T10:09:32Z" | c++ | "2020-10-18T16:07:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,035 | ["programs/client/Client.cpp", "programs/client/Suggest.h", "tests/queries/0_stateless/01526_client_start_and_exit.expect", "tests/queries/0_stateless/01526_client_start_and_exit.reference", "tests/queries/0_stateless/01526_client_start_and_exit.sh"] | clickhouse-client with 100K tables crashes on exit | Related to the huge suggestions list.
```
(gdb) thread apply all bt
Thread 2 (Thread 0x7fbefa6d8400 (LWP 298)):
#0 __pthread_clockjoin_ex (threadid=140458186086144, thread_return=0x0, clockid=<optimized out>, abstime=<optimized out>, block=<optimized out>) at pthread_join_common.c:145
#1 0x0000000007c42cb0 in DB::Suggest::~Suggest() ()
#2 0x00007fbefa874a27 in __run_exit_handlers (status=0, listp=0x7fbefaa16718 <__exit_funcs>, run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true) at exit.c:108
#3 0x00007fbefa874be0 in __GI_exit (status=<optimized out>) at exit.c:139
#4 0x00007fbefa8520ba in __libc_start_main (main=0x7b8e780 <main>, argc=1, argv=0x7ffc4e5be1e8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffc4e5be1d8) at ../csu/libc-start.c:342
#5 0x0000000007b3f02e in _start ()
Thread 1 (Thread 0x7fbef8499700 (LWP 299)):
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007fbefa850859 in __GI_abort () at abort.c:79
#2 0x0000000010eb0edc in Poco::SignalHandler::handleSignal(int) ()
#3 <signal handler called>
#4 0x000000000d42dd4f in std::__1::__hash_const_iterator<std::__1::__hash_node<std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, void*>*> std::__1::__hash_table<std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::__unordered_map_hasher<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, true>, std::__1::__unordered_map_equal<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, true>, std::__1::allocator<std::__1::__hash_value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >::find<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const ()
#5 0x000000000d793065 in DB::IFactoryWithAliases<std::__1::function<std::__1::shared_ptr<DB::IDataType const> (std::__1::shared_ptr<DB::IAST> const&)> >::getAliasToOrName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const ()
#6 0x000000000d7927f1 in DB::DataTypeFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::IAST> const&) const ()
#7 0x000000000d792640 in DB::DataTypeFactory::get(std::__1::shared_ptr<DB::IAST> const&) const ()
#8 0x000000000d7924b1 in DB::DataTypeFactory::get(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const ()
#9 0x000000000ddfd267 in DB::NativeBlockInputStream::readImpl() ()
#10 0x000000000d7575d5 in DB::IBlockInputStream::read() ()
#11 0x000000000e530096 in DB::Connection::receivePacket() ()
#12 0x0000000007c6e217 in DB::Suggest::fetch(DB::Connection&, DB::ConnectionTimeouts const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) ()
#13 0x0000000007c6e0a4 in DB::Suggest::loadImpl(DB::Connection&, DB::ConnectionTimeouts const&, unsigned long) ()
#14 0x0000000007c6eb14 in ?? ()
#15 0x00007fbefaa37609 in start_thread (arg=<optimized out>) at pthread_create.c:477
#16 0x00007fbefa94d293 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
``` | https://github.com/ClickHouse/ClickHouse/issues/16035 | https://github.com/ClickHouse/ClickHouse/pull/16047 | 0faf2bc7e320d6d5b3c24f0467fa18639ca5cd45 | ae4d66ac9d8000563afbb77a68b37a98c5706c03 | "2020-10-15T17:49:11Z" | c++ | "2020-10-29T06:10:23Z" |
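The backtrace in #16035 above tells the story: Thread 2 is already inside `exit()` running `DB::Suggest::~Suggest()` from `__run_exit_handlers`, and that destructor joins the loader thread, while Thread 1 is still streaming the huge suggestion list. A static singleton whose destructor joins a long-running background thread makes every client exit wait for the load to finish. A minimal illustration of that pattern (a hedged sketch of the hazard, not the actual `Suggest` implementation):
```
#include <chrono>
#include <thread>

struct Suggest
{
    std::thread loader;

    Suggest()
        // Stand-in for fetching completion words for ~100K tables.
        : loader([] { std::this_thread::sleep_for(std::chrono::minutes(10)); })
    {
    }

    ~Suggest()
    {
        // Runs from __run_exit_handlers (cf. Thread 2 above): exit() blocks
        // here until the loader thread finishes.
        if (loader.joinable())
            loader.join();
    }
};

Suggest & instance()
{
    static Suggest suggest;  // destroyed via the atexit machinery
    return suggest;
}

int main()
{
    instance();  // kick off the background load
    return 0;    // the "fast" exit now takes as long as the load
}
```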
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 16,020 | ["tests/queries/0_stateless/01632_group_array_msan.reference", "tests/queries/0_stateless/01632_group_array_msan.sql"] | MSan: crash in groupArray | ```
SELECT groupArrayMerge(1048577)(y * 1048576) FROM (SELECT groupArrayState(9223372036854775807)(x) AS y FROM (SELECT 1048576 AS x))
```
```
2020.10.15 16:52:17.088012 [ 1493 ] {} <Fatal> BaseDaemon: ########################################
2020.10.15 16:52:17.091205 [ 1493 ] {} <Fatal> BaseDaemon: (version 20.10.1.1, build id: 1730D04609489F71) (from thread 1360) (query_id: e15c48d3-1d74-4352-9274-4d6bc86afc6e) Received signal Segmentation fault (11)
2020.10.15 16:52:17.091661 [ 1493 ] {} <Fatal> BaseDaemon: Address: 0x7f392e9d3000 Access: read. Address not mapped to object.
2020.10.15 16:52:17.092151 [ 1493 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f3a402cd6ae 0x10205d20 0x139c65bf 0x139c5f54 0x13a363a6 0x1f30ea38 0x1f30b69e 0x1f307202 0x180cfa0e 0x181e6a9b 0x181eab28 0x3532f097 0x3533747c 0x39cb7d27 0x37740c0e 0x391dbe05 0x39386f4a 0x39386b53 0x393869a1 0x393868c1 0x39386841 0x39382da9 0x10362e8b 0x10362c02 0x3937dc1f 0x3937f9cf 0x3937963e
2020.10.15 16:52:17.095771 [ 1493 ] {} <Fatal> BaseDaemon: 5. /build/glibc-2ORdQG/glibc-2.27/string/../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:370: __memmove_sse2_unaligned_erms @ 0xbb6ae in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.27.so
2020.10.15 16:52:17.459184 [ 1493 ] {} <Fatal> BaseDaemon: 6. __msan_memcpy @ 0x10205d20 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:17.496460 [ 1493 ] {} <Fatal> BaseDaemon: 7. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Common/PODArray.h:497: void DB::PODArray<unsigned int, 32ul, DB::MixedArenaAllocator<4096ul, Allocator<false, false>, DB::AlignedArenaAllocator<4ul>, 4ul>, 0ul, 0ul>::insert_assume_reserved<unsigned int const*, unsigned int const*>(unsigned int const*, unsigned int const*) @ 0x139c65bf in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:17.521386 [ 1493 ] {} <Fatal> BaseDaemon: 8. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Common/PODArray.h:458: void DB::PODArray<unsigned int, 32ul, DB::MixedArenaAllocator<4096ul, Allocator<false, false>, DB::AlignedArenaAllocator<4ul>, 4ul>, 0ul, 0ul>::insert<unsigned int const*, unsigned int const*, DB::Arena*&>(unsigned int const*, unsigned int const*, DB::Arena*&) @ 0x139c5f54 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:17.555197 [ 1493 ] {} <Fatal> BaseDaemon: 9. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/AggregateFunctions/AggregateFunctionGroupArray.h:235: DB::GroupArrayNumericImpl<unsigned int, DB::GroupArrayTrait<true, (DB::Sampler)0> >::merge(char*, char const*, DB::Arena*) const @ 0x13a363a6 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:17.629716 [ 1493 ] {} <Fatal> BaseDaemon: 10. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Functions/FunctionBinaryArithmetic.h:671: DB::FunctionBinaryArithmetic<DB::MultiplyImpl, DB::NameMultiply, true>::executeAggregateMultiply(DB::FunctionArguments&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) const @ 0x1f30ea38 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:17.698130 [ 1493 ] {} <Fatal> BaseDaemon: 11. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Functions/FunctionBinaryArithmetic.h:1065: DB::FunctionBinaryArithmetic<DB::MultiplyImpl, DB::NameMultiply, true>::executeImpl(DB::FunctionArguments&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) const @ 0x1f30b69e in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:17.766227 [ 1493 ] {} <Fatal> BaseDaemon: 12. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Functions/FunctionBinaryArithmetic.h:0: DB::FunctionBinaryArithmeticWithConstants<DB::MultiplyImpl, DB::NameMultiply, true>::executeImpl(DB::FunctionArguments&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) const @ 0x1f307202 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:17.809535 [ 1493 ] {} <Fatal> BaseDaemon: 13. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Functions/IFunctionAdaptors.h:153: DB::DefaultExecutable::execute(DB::FunctionArguments&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long) @ 0x180cfa0e in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:17.849550 [ 1493 ] {} <Fatal> BaseDaemon: 14. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Functions/IFunction.cpp:325: DB::ExecutableFunctionAdaptor::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x181e6a9b in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:17.885766 [ 1493 ] {} <Fatal> BaseDaemon: 15. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Functions/IFunction.cpp:500: DB::ExecutableFunctionAdaptor::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >&, std::__1::vector<unsigned long, std::__1::allocator<unsigned long> > const&, unsigned long, unsigned long, bool) @ 0x181eab28 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:18.095622 [ 1493 ] {} <Fatal> BaseDaemon: 16. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Interpreters/ExpressionActions.cpp:358: DB::ExpressionAction::execute(DB::Block&, bool) const @ 0x3532f097 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:18.306149 [ 1493 ] {} <Fatal> BaseDaemon: 17. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Interpreters/ExpressionActions.cpp:621: DB::ExpressionActions::execute(DB::Block&, bool) const @ 0x3533747c in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:18.579755 [ 1493 ] {} <Fatal> BaseDaemon: 18. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Processors/Transforms/ExpressionTransform.cpp:25: DB::ExpressionTransform::transform(DB::Chunk&) @ 0x39cb7d27 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:18.835855 [ 1493 ] {} <Fatal> BaseDaemon: 19. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Processors/ISimpleTransform.h:43: DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0x37740c0e in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:19.099172 [ 1493 ] {} <Fatal> BaseDaemon: 20. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Processors/ISimpleTransform.cpp:89: DB::ISimpleTransform::work() @ 0x391dbe05 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:19.365968 [ 1493 ] {} <Fatal> BaseDaemon: 21. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Processors/Executors/PipelineExecutor.cpp:78: DB::executeJob(DB::IProcessor*) @ 0x39386f4a in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:19.626984 [ 1493 ] {} <Fatal> BaseDaemon: 22. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Processors/Executors/PipelineExecutor.cpp:95: DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0::operator()() const @ 0x39386b53 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:19.889272 [ 1493 ] {} <Fatal> BaseDaemon: 23. /home/nik-kochetov/dev/ClickHouse/build-msan/../contrib/libcxx/include/type_traits:3519: decltype(std::__1::forward<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(fp)()) std::__1::__invoke<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x393869a1 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:20.155410 [ 1493 ] {} <Fatal> BaseDaemon: 24. /home/nik-kochetov/dev/ClickHouse/build-msan/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x393868c1 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:20.420388 [ 1493 ] {} <Fatal> BaseDaemon: 25. /home/nik-kochetov/dev/ClickHouse/build-msan/../contrib/libcxx/include/functional:1540: std::__1::__function::__alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, std::__1::allocator<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0>, void ()>::operator()() @ 0x39386841 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:20.685067 [ 1493 ] {} <Fatal> BaseDaemon: 26. /home/nik-kochetov/dev/ClickHouse/build-msan/../contrib/libcxx/include/functional:1714: std::__1::__function::__func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, std::__1::allocator<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0>, void ()>::operator()() @ 0x39382da9 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:20.700581 [ 1493 ] {} <Fatal> BaseDaemon: 27. /home/nik-kochetov/dev/ClickHouse/build-msan/../contrib/libcxx/include/functional:1867: std::__1::__function::__value_func<void ()>::operator()() const @ 0x10362e8b in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:20.709604 [ 1493 ] {} <Fatal> BaseDaemon: 28. /home/nik-kochetov/dev/ClickHouse/build-msan/../contrib/libcxx/include/functional:2473: std::__1::function<void ()>::operator()() const @ 0x10362c02 in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:20.970950 [ 1493 ] {} <Fatal> BaseDaemon: 29. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Processors/Executors/PipelineExecutor.cpp:561: DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x3937dc1f in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:21.232758 [ 1493 ] {} <Fatal> BaseDaemon: 30. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Processors/Executors/PipelineExecutor.cpp:477: DB::PipelineExecutor::executeSingleThread(unsigned long, unsigned long) @ 0x3937f9cf in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
2020.10.15 16:52:21.495786 [ 1493 ] {} <Fatal> BaseDaemon: 31. /home/nik-kochetov/dev/ClickHouse/build-msan/../src/Processors/Executors/PipelineExecutor.cpp:752: DB::PipelineExecutor::executeImpl(unsigned long) @ 0x3937963e in /home/nik-kochetov/dev/ClickHouse/build-msan/programs/clickhouse
```
```
https://clickhouse-test-reports.s3.yandex.net/15999/40f62719ff5fedc7ee1a41b87e5abef2751638cb/fuzzer/report.html#fail1
The stack shows the faulting memcpy inside `GroupArrayNumericImpl::merge`, reached through `FunctionBinaryArithmetic::executeAggregateMultiply`, i.e. while multiplying the `AggregateFunction` state. It is reproducible with MSan on 20.10.1. | https://github.com/ClickHouse/ClickHouse/issues/16020 | https://github.com/ClickHouse/ClickHouse/pull/18702 | 7f85ae7fa7cfe3adb0f9f6e106fa21580d35444d | 5332b0327c21ecc69c04b7a18ff7cb421b4c066a | "2020-10-15T13:55:38Z" | c++ | "2021-01-04T13:35:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,995 | ["src/Storages/StorageDistributed.cpp", "src/Storages/StorageDistributed.h", "tests/analyzer_tech_debt.txt", "tests/queries/0_stateless/02790_optimize_skip_unused_shards_join.reference", "tests/queries/0_stateless/02790_optimize_skip_unused_shards_join.sql"] | Exception when using optimize_skip_unused_shards = 1 | We have this query:
```
SELECT sum(if(inner_distributed.id != 0, 1, 0)) AS total,
inner_distributed.date as date
FROM outer_distributed AS outer_distributed FINAL
LEFT JOIN (SELECT inner_distributed.outer_id AS outer_id,
inner_distributed.id AS id,
inner_distributed.date AS date
FROM inner_distributed AS inner_distributed FINAL
WHERE inner_distributed.organization_id = 15078) AS inner_distributed
ON inner_distributed.outer_id = outer_distributed.id
WHERE outer_distributed.organization_id = 15078
AND date != toDate('1970-01-01')
GROUP BY date
ORDER BY date DESC
SETTINGS distributed_product_mode = 'local', optimize_skip_unused_shards = 1
```
When run, it produces the exception `Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Missing columns: 'date' while processing query`.
However, if we remove `optimize_skip_unused_shards`, the query runs without problems.
We are using version 20.9.3.
| https://github.com/ClickHouse/ClickHouse/issues/15995 | https://github.com/ClickHouse/ClickHouse/pull/51037 | a1a79eee0f610b2b5c07255ca08ea3401e4d2d5d | f17844e9c27d5164084861fcf9375164b5052ce0 | "2020-10-15T08:05:44Z" | c++ | "2023-07-24T05:31:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,980 | ["src/Storages/StorageReplicatedMergeTree.cpp", "tests/queries/0_stateless/01526_alter_add_and_modify_order_zookeeper.reference", "tests/queries/0_stateless/01526_alter_add_and_modify_order_zookeeper.sql"] | ADD COLUMN + MODIFY ORDER BY query on a ReplicatedVersionedCollapsingMergeTree table causes the client to freeze. |
The ClickHouse client freezes when executing `ALTER TABLE` with `ADD COLUMN` and `MODIFY ORDER` on a table built using Replicated Versioned Collapsing Merge Tree engine.
**How to reproduce**
* ClickHouse client version 20.10.1.1.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 20.10.1 revision 54440.
* `CREATE TABLE` statements
```
SHOW CREATE TABLE table0
ββstatementβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CREATE TABLE default.table0
(
`d` Date,
`a` String,
`b` UInt8,
`x` String,
`y` Int8,
`version` UInt64,
`sign` Int8 DEFAULT 1
)
ENGINE = ReplicatedVersionedCollapsingMergeTree('/clickhouse/tables/{shard}/table0', '{replica}', sign, version)
PARTITION BY y
ORDER BY d
SETTINGS index_granularity = 8192 │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
1 rows in set. Elapsed: 0.004 sec.
```
* Queries to run that lead to the unexpected result
```
ALTER TABLE table0 ADD COLUMN order UInt32, MODIFY ORDER BY (d, order)
``` | https://github.com/ClickHouse/ClickHouse/issues/15980 | https://github.com/ClickHouse/ClickHouse/pull/16011 | d1c705d47f907b13cba1d506c53c05329683fc47 | 963bc57835d3e7133eb78e240e7f691e2e533c9e | "2020-10-14T16:16:32Z" | c++ | "2020-10-19T07:31:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 15,979 | ["src/Interpreters/InterpreterSystemQuery.cpp", "src/Parsers/ASTSystemQuery.cpp", "src/Parsers/ASTSystemQuery.h", "src/Parsers/ParserSystemQuery.cpp", "tests/queries/0_stateless/01643_system_suspend.reference", "tests/queries/0_stateless/01643_system_suspend.sql"] | SYSTEM SUSPEND FOR ... | Add a command that freezes the clickhouse-server process with a STOP signal and wakes it up after a specified amount of time.
This is needed for testing (fault injection).
The wakeup can be done with `ShellCommand` running `sleep N && kill -CONT pid`. | https://github.com/ClickHouse/ClickHouse/issues/15979 | https://github.com/ClickHouse/ClickHouse/pull/18850 | 19ad9e7a5161821e3dcfe0b4ea1f7d85209000fc | 442a90fb89c91645ccfb944e83b6fc6c48953797 | "2020-10-14T15:22:43Z" | c++ | "2021-01-08T04:26:05Z" |
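A sketch of the mechanism proposed in #15979, written with plain POSIX calls; this is an illustration under stated assumptions, not the `ShellCommand`-based implementation from the linked pull request:
```
#include <signal.h>
#include <unistd.h>

// Freeze the current process for `seconds`, then resume. A forked helper
// survives the freeze and delivers SIGCONT after the delay, the same idea
// as spawning `sleep N && kill -CONT pid`. Error handling is omitted.
void suspendSelfFor(unsigned seconds)
{
    const pid_t pid = getpid();
    if (fork() == 0)
    {
        sleep(seconds);
        kill(pid, SIGCONT);  // wake the suspended parent
        _exit(0);
    }
    kill(pid, SIGSTOP);      // parent stops here until SIGCONT arrives
}
```
Because SIGSTOP cannot be caught or ignored, the parent is guaranteed to freeze, which is exactly the fault-injection behavior the issue asks for.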