status (stringclasses, 1 value) | repo_name (stringclasses, 13 values) | repo_url (stringclasses, 13 values) | issue_id (int64, 1–104k) | updated_files (stringlengths, 11–1.76k) | title (stringlengths, 4–369) | body (stringlengths, 0–254k) | issue_url (stringlengths, 38–55) | pull_url (stringlengths, 38–53) | before_fix_sha (stringlengths, 40) | after_fix_sha (stringlengths, 40) | report_datetime (unknown) | language (stringclasses, 5 values) | commit_datetime (unknown) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,397 | ["src/Storages/StorageBuffer.cpp", "tests/queries/0_stateless/01506_buffer_table_alter_block_structure_2.reference", "tests/queries/0_stateless/01506_buffer_table_alter_block_structure_2.sql"] | Logical error: 'Block structure mismatch in Buffer stream' | https://clickhouse-test-reports.s3.yandex.net/0/9195e4e887e6f05cf41ce326d0de98e1ae1204f5/stress_test_(debug).html
The query:
```
2021.10.19 17:13:46.212969 [ 15981 ] {adfead4b-ed57-40ef-819a-9cb608aa1184} <Debug> executeQuery: (from [::1]:38756) (comment: 01506_buffer_table_alter_block_structure.sql) INSERT INTO buf (timestamp, s) VALUES
```
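For context, a minimal sketch of the scenario the referenced test name (01506_buffer_table_alter_block_structure.sql) suggests; the table names and Buffer parameters below are illustrative assumptions, not a verified reproducer:
```sql
CREATE TABLE dst (timestamp DateTime) ENGINE = MergeTree ORDER BY tuple();
CREATE TABLE buf (timestamp DateTime)
    ENGINE = Buffer(currentDatabase(), 'dst', 1, 10, 100, 1, 1000, 1000000, 1000000);
INSERT INTO buf (timestamp) VALUES ('2021-10-19 17:00:00');
-- Altering the structure while the buffer still holds blocks of the old shape
-- is the kind of situation that can produce 'Block structure mismatch in Buffer stream'.
ALTER TABLE buf ADD COLUMN s String;
INSERT INTO buf (timestamp, s) VALUES ('2021-10-19 17:13:46', 'x');
```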
```
2021.10.19 17:13:46.475436 [ 15981 ] {adfead4b-ed57-40ef-819a-9cb608aa1184} <Fatal> : Logical error: 'Block structure mismatch in Buffer stream: different number of columns:
s String String(size = 1), timestamp DateTime UInt32(size = 1)
timestamp DateTime UInt32(size = 0)'.
2021.10.19 17:15:10.275157 [ 17553 ] {} <Fatal> BaseDaemon: ########################################
2021.10.19 17:15:10.388470 [ 17553 ] {} <Fatal> BaseDaemon: (version 21.11.1.8483 (official build), build id: 102D40AAC8B89424DB460221D6A9447CD6AAD6B4) (from thread 15981) (query_id: adfead4b-ed57-40ef-819a-9cb608aa1184) Received signal Aborted (6)
2021.10.19 17:15:10.433208 [ 17553 ] {} <Fatal> BaseDaemon:
2021.10.19 17:15:10.445788 [ 17553 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f294831818b 0x7f29482f7859 0x14f43478 0x14f43582 0x20ccaab1 0x20cc901c 0x20cc8f07 0x223e9022 0x223ef103 0x223eed45 0x22eeaeb5 0x22e557e3 0x22e5577d 0x22e5573d 0x22e55715 0x22e556dd 0x14f900e6 0x14f8f175 0x22e54ece 0x22e54bae 0x22b11cb9 0x22b11c1f 0x22b11bbd 0x22b11b7d 0x22b11b55 0x22b11b1d 0x14f900e6 0x14f8f175 0x22b105c5 0x22b0f798 0x22b3747a 0x22a9525b 0x22a8f1f0 0x22a9e7a5 0x26ee4519 0x26ee4d28 0x27032c74 0x2702f75a 0x2702e53c 0x7f29484fa609 0x7f29483f4293
2021.10.19 17:15:11.243067 [ 17553 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:53: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x14f43478 in /usr/bin/clickhouse
2021.10.19 17:15:11.912051 [ 17553 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:60: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x14f43582 in /usr/bin/clickhouse
2021.10.19 17:15:20.051060 [ 17553 ] {} <Fatal> BaseDaemon: 8. ./obj-x86_64-linux-gnu/../src/Core/Block.cpp:32: void DB::onError<void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x20ccaab1 in /usr/bin/clickhouse
2021.10.19 17:15:27.551150 [ 497 ] {} <Fatal> Application: Child process was terminated by signal 6.
``` | https://github.com/ClickHouse/ClickHouse/issues/30397 | https://github.com/ClickHouse/ClickHouse/pull/30565 | 0c6b92b3a9a501e802b0aaab92378e2e79436e34 | e528bfdb1aaedbb3f93fbccbcadd176ad094ff4f | "2021-10-19T16:48:01Z" | c++ | "2021-10-26T06:45:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,343 | ["src/Storages/MergeTree/MergeTreeIndices.h", "tests/queries/0_stateless/02131_skip_index_not_materialized.reference", "tests/queries/0_stateless/02131_skip_index_not_materialized.sql"] | Adding a data skipping index breaks live queries. | **Describe what's wrong**
Adding a data skipping index breaks live queries.
**Does it reproduce on recent release?**
Reproducible in (at least):
* 21.9.2.17
* 21.10.2.15
**How to reproduce**
```
# 1. Create a table and insert 10M rows distributed across 100 partitions
CREATE TABLE test2
(
`p` UInt32,
`val` String
)
ENGINE = MergeTree
PARTITION BY p
ORDER BY tuple()
INSERT INTO test2 SELECT
number % 100 AS p,
toString(rand() % 100) AS val
FROM numbers(10000000)
# 2. Simple select works and scans all 100 partitions and 10M rows as expected
SELECT count()
FROM test2
WHERE val = 'qwe'
┌─count()─┐
│       0 │
└─────────┘
1 rows in set. Elapsed: 0.072 sec. Processed 10.00 million rows, 109.00 MB (138.06 million rows/s., 1.50 GB/s.)
# 3. Add bloom filter
ALTER TABLE test2
ADD INDEX idx_bloom val TYPE bloom_filter GRANULARITY 1
# 4. Repeat the query
SELECT count()
FROM test2
WHERE val = 'qwe'
Query id: 4f5dbf5f-5c0d-441f-b6c9-1a9d3261658e
0 rows in set. Elapsed: 0.005 sec.
Received exception from server (version 21.10.2):
Code: 1001. DB::Exception: Received from clickhouse-server:9000. DB::Exception: std::__1::__fs::filesystem::filesystem_error: filesystem error: in file_size:
No such file or directory [/var/lib/clickhouse/store/303/303e3f5f-9f67-4ac6-b03e-3f5f9f674ac6/0_1_576_1/skp_idx_idx_bloom.mrk3]. (STD_EXCEPTION)
# 5. If the index is materialized with "ALTER TABLE test2 MATERIALIZE INDEX idx_bloom", everything is back to normal
```
**Expected behavior**
There should be no downtime, and no queries should be affected when adding a data skipping index. The index should not be used when processing parts in which it has not been materialized yet.
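As a stop-gap (this is just step 5 from the reproduction above, applied proactively), materialize the index immediately after adding it so no part is left without index files:
```sql
ALTER TABLE test2 ADD INDEX idx_bloom val TYPE bloom_filter GRANULARITY 1;
ALTER TABLE test2 MATERIALIZE INDEX idx_bloom;
```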
**Additional context**
Can also be reproduced in the ClickHouse Docker image, e.g. yandex/clickhouse-server:21.10.2.15 (try that if it is not reproducible on your machine). | https://github.com/ClickHouse/ClickHouse/issues/30343 | https://github.com/ClickHouse/ClickHouse/pull/32359 | 66e1fb7adad8ce28af4c9cf126f704bdefafa746 | a241103714422b775852790633451167433125c1 | "2021-10-18T16:58:01Z" | c++ | "2021-12-13T11:43:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,245 | ["src/Functions/ReplaceRegexpImpl.h", "tests/queries/0_stateless/02150_replace_regexp_all_empty_match.reference", "tests/queries/0_stateless/02150_replace_regexp_all_empty_match.sql", "tests/queries/0_stateless/02151_replace_regexp_all_empty_match_alternative.reference", "tests/queries/0_stateless/02151_replace_regexp_all_empty_match_alternative.sql"] | trim incorrect result in case of BOTH | ```sql
SELECT trim(BOTH ', ' FROM '5935,5998,6014, ')||'|' x
SELECT concat(replaceRegexpAll('5935,5998,6014, ', concat('^[', regexpQuoteMeta(', '), ']*|[', regexpQuoteMeta(', '), ']*$'), ''), '|') AS x
┌─x─────────────────┐
│ 5935,5998,6014, | │
└───────────────────┘
psql> SELECT trim(BOTH ', ' FROM '5935,5998,6014, ')||'|'
5935,5998,6014|
```
http://sqlfiddle.com/#!17/0a28f/406
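A possible workaround sketch until BOTH is fixed (not verified on every version): trim the two sides in separate passes, since it is the combined leading/trailing regex that mishandles the match:
```sql
SELECT trim(TRAILING ', ' FROM trim(LEADING ', ' FROM '5935,5998,6014, ')) || '|' AS x;
-- expected: 5935,5998,6014|
```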
Note that the TRAILING form alone works correctly:
```sql
SELECT trim(trailing ', ' FROM '5935,5998,6014, ')||'|' x
┌─x───────────────┐
│ 5935,5998,6014| │
└─────────────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/30245 | https://github.com/ClickHouse/ClickHouse/pull/32945 | 1141ae91d895866ccaefb4e855a9032397b4b944 | 2e5a14a8de802019fdc7cf5844d65c2f331883d3 | "2021-10-15T18:37:39Z" | c++ | "2021-12-19T22:42:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,236 | ["docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md", "src/Dictionaries/CMakeLists.txt", "src/Dictionaries/HashedArrayDictionary.cpp", "src/Dictionaries/HashedArrayDictionary.h", "src/Dictionaries/registerDictionaries.cpp", "tests/performance/hashed_array_dictionary.xml", "tests/queries/0_stateless/02098_hashed_array_dictionary_simple_key.reference", "tests/queries/0_stateless/02098_hashed_array_dictionary_simple_key.sql", "tests/queries/0_stateless/02099_hashed_array_dictionary_complex_key.reference", "tests/queries/0_stateless/02099_hashed_array_dictionary_complex_key.sql"] | Ext. dictionary layout FLAT_TRANSLATED | flat_index is an artificial autoincrement to make the demo work; it is not needed as such, it is just a flat array index for a translation table.
```sql
drop dictionary if exists dict_complex_hashed;
drop dictionary if exists dict_translation;
drop dictionary if exists dict_flat;
drop table if exists dict_source;
create table dict_source(key String, flat_index UInt64, s1 String, s2 String, s3 String,
s4 String, s5 String, s6 String, s7 String, s8 String, s9 String, s10 String) Engine=Log;
insert into dict_source select 'some key String :'||toString(cityHash64(number)), number,
(arrayMap(i->'some atribute string'||toString(number*i), range(10)) as x)[1],
x[2], x[3], x[4], x[5], x[6], x[7], x[8], x[9], x[10] from numbers(1000000);
create dictionary dict_complex_hashed (key String, s1 String, s2 String, s3 String,
s4 String, s5 String, s6 String, s7 String, s8 String, s9 String, s10 String)
PRIMARY KEY key SOURCE(CLICKHOUSE(DATABASE 'default' TABLE 'dict_source')) lifetime(0)
LAYOUT(complex_key_hashed);
create dictionary dict_translation(key String, flat_index UInt64)
PRIMARY KEY key SOURCE(CLICKHOUSE(DATABASE 'default' TABLE 'dict_source')) lifetime(0)
LAYOUT(complex_key_hashed);
create dictionary dict_flat (flat_index UInt64, s1 String, s2 String, s3 String,
s4 String, s5 String, s6 String, s7 String, s8 String, s9 String, s10 String)
PRIMARY KEY flat_index SOURCE(CLICKHOUSE(DATABASE 'default' TABLE 'dict_source')) lifetime(0)
LAYOUT(flat(INITIAL_ARRAY_SIZE 50000 MAX_ARRAY_SIZE 5000000));
select database, name, status, element_count, formatReadableSize(bytes_allocated) mem, loading_duration, type from system.dictionaries where name like 'dict_%';
┌─database─┬─name────────────────┬─status─┬─element_count─┬─mem────────┬─loading_duration─┬─type─────────────┐
│ default  │ dict_flat           │ LOADED │      10000000 │ 479.96 MiB │            1.052 │ Flat             │
│ default  │ dict_translation    │ LOADED │       1000000 │ 128.00 MiB │            0.224 │ ComplexKeyHashed │
│ default  │ dict_complex_hashed │ LOADED │      10000000 │ 1.59 GiB   │            2.931 │ ComplexKeyHashed │
└──────────┴─────────────────────┴────────┴───────────────┴────────────┴──────────────────┴──────────────────┘
-- memory: 479.96 MiB (flat) + 128.00 MiB (translation) < 1.59 GiB (complex_key_hashed)
-- load time: 1.052 s + 0.224 s < 2.931 s
select dictGet('dict_complex_hashed', 's8', tuple('some key String :17349973131760655344')) x ;
┌─x───────────────────────────┐
│ some atribute string4406801 │
└─────────────────────────────┘
select dictGet('dict_flat', 's8', dictGet('dict_translation', 'flat_index', tuple('some key String :17349973131760655344'))) x;
┌─x───────────────────────────┐
│ some atribute string4406801 │
└─────────────────────────────┘
select dictGet('dict_complex_hashed', 's5', tuple('some key String :'||toString(cityHash64(number))) )
from numbers(1000000) format Null;
Elapsed: 0.166 sec. Processed 1.05 million rows
select dictGet('dict_flat', 's5', toUInt64(dictGet('dict_translation', 'flat_index', tuple('some key String :'||toString(cityHash64(number))))))
from numbers(1000000) format Null;
Elapsed: 0.180 sec. Processed 1.05 million rows
``` | https://github.com/ClickHouse/ClickHouse/issues/30236 | https://github.com/ClickHouse/ClickHouse/pull/30242 | 36beb8985733787c9b12519fab3a2af7120e66c2 | 0dd8c70d28ad44d0db386dd225732a5d201cfee5 | "2021-10-15T13:18:31Z" | c++ | "2021-10-16T22:18:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,175 | ["src/Functions/initializeAggregation.cpp", "tests/queries/0_stateless/02097_initializeAggregationNullable.reference", "tests/queries/0_stateless/02097_initializeAggregationNullable.sql"] | initializeAggregation doesn't work with nullable types | **Describe what's wrong**
It's not possible to initialize an aggregate function state from Nullable types.
**Does it reproduce on recent release?**
Yes,
ClickHouse version 21.10
```
SELECT initializeAggregation('uniqExactState', toNullable(''))
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.10.1):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Nested type AggregateFunction(uniqExact, String) cannot be inside Nullable type: While processing initializeAggregation('uniqExactState', toNullable('')). (ILLEGAL_TYPE_OF_ARGUMENT)
SELECT initializeAggregation('uniqExactState', toNullable(1))
Received exception from server (version 21.10.1):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Nested type AggregateFunction(uniqExact, UInt8) cannot be inside Nullable type: While processing initializeAggregation('uniqExactState', toNullable(1)). (ILLEGAL_TYPE_OF_ARGUMENT)
```
**Expected behavior**
It should be possible to initialize an aggregate function state from Nullable types.
**Additional context**
Workaround:
```
SELECT arrayReduce('uniqExactState', [toNullable('')])
Query id: 29fc6760-78be-4f28-be5f-f1bbe12f04fc
┌─arrayReduce('uniqExactState', array(toNullable('')))─┐
│ <binary uniqExactState data, unprintable>            │
└──────────────────────────────────────────────────────┘
```
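Another possible workaround sketch: strip the Nullable wrapper first with assumeNotNull (only safe when the value is known not to be NULL):
```sql
SELECT initializeAggregation('uniqExactState', assumeNotNull(toNullable('')));
```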
| https://github.com/ClickHouse/ClickHouse/issues/30175 | https://github.com/ClickHouse/ClickHouse/pull/30177 | a9ac7fb394bec835d613b9c7b84680b276af4853 | 78c925ddef83d13f358de0f187a0deb092a59d16 | "2021-10-14T11:29:18Z" | c++ | "2021-10-20T12:39:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 30,090 | ["src/Dictionaries/PolygonDictionary.cpp", "src/Dictionaries/PolygonDictionary.h", "src/Dictionaries/PolygonDictionaryImplementations.cpp", "src/Dictionaries/PolygonDictionaryImplementations.h", "tests/queries/0_stateless/02097_polygon_dictionary_store_key.reference", "tests/queries/0_stateless/02097_polygon_dictionary_store_key.sql"] | It's not possible to read POLYGON dictionaries via SELECT * FROM dict query. | **Describe the unexpected behaviour**
ClickHouse returns an exception if you try to read a polygon dictionary via a SELECT query.
**How to reproduce**
ClickHouse version 21.10
```
CREATE TABLE polygon_dict_src
(
`key` Array(Array(Array(Array(Float64)))),
`name` String,
`value` UInt64
)
ENGINE = Log
INSERT INTO polygon_dict_src SELECT [[[[1.1,2.1],[5,3],[6,4],[1,4]]]], 'some', 10;
CREATE DICTIONARY polygon_dict (
key Array(Array(Array(Array(Float64)))),
name String,
value UInt64
)
PRIMARY KEY key
LAYOUT(POLYGON())
SOURCE(CLICKHOUSE(table 'polygon_dict_src'))
LIFETIME(MIN 300 MAX 360);
SELECT dictGet(polygon_dict, 'name', (2., 3.)) AS x
┌─x────┐
│ some │
└──────┘
SELECT *
FROM polygon_dict
0 rows in set. Elapsed: 0.008 sec.
Received exception from server (version 21.10.1):
Code: 1. DB::Exception: Received from localhost:9000. DB::Exception: Reading the dictionary is not allowed. (UNSUPPORTED_METHOD)
```
**Expected behavior**
It should be allowed to read POLYGON dictionaries via SELECT queries.
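Until that works, a workaround sketch for this repro is to read the dictionary's source table directly, since it holds the same rows:
```sql
SELECT * FROM polygon_dict_src;
```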
| https://github.com/ClickHouse/ClickHouse/issues/30090 | https://github.com/ClickHouse/ClickHouse/pull/30142 | 5802037f1ea35aa2f5c83df3eeb6030cea9f3d74 | 6c1da023f7e5db9b7a6a05596fb0b516c940410f | "2021-10-13T10:26:32Z" | c++ | "2021-10-14T12:03:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,967 | ["docs/en/sql-reference/functions/splitting-merging-functions.md", "docs/ru/sql-reference/functions/splitting-merging-functions.md", "src/Functions/FunctionsStringArray.cpp", "src/Functions/FunctionsStringArray.h", "tests/queries/0_stateless/00255_array_concat_string.reference", "tests/queries/0_stateless/00255_array_concat_string.sql"] | support Nullable String type in arrayStringConcat | For example Null can be represented as empty string
```sql
select arrayStringConcat(['x',Null], ';');
┌─arrayStringConcat(['x', Null], ';')─┐
│ x;                                  │
└─────────────────────────────────────┘
```
But mostly it is for avoiding an explicit cast in the case of Nullable Strings:
```
SELECT arrayStringConcat(col0, ';')
FROM
(
SELECT [toNullable('x')] AS col0
) AS t
```
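A workaround sketch that works today (assumes NULL should be rendered as an empty string):
```sql
SELECT arrayStringConcat(arrayMap(x -> ifNull(x, ''), ['x', NULL]), ';');
-- x;
```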
Also, I think arrayStringConcat could support any type, not only Strings; that would allow avoiding explicit casts:
```
SELECT arrayStringConcat(col0, ';')
FROM
(
SELECT [1, 2, 3] AS col0
) AS t
``` | https://github.com/ClickHouse/ClickHouse/issues/29967 | https://github.com/ClickHouse/ClickHouse/pull/30840 | 30090472a1f3d58d58e385a7bfa74e785553703e | 9d967e988348d09dad7ebaf9b0ef657a219f71c7 | "2021-10-10T22:51:21Z" | c++ | "2021-11-01T08:48:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,957 | ["src/Interpreters/UserDefinedSQLFunctionVisitor.cpp", "tests/queries/0_stateless/02096_sql_user_defined_function_alias.reference", "tests/queries/0_stateless/02096_sql_user_defined_function_alias.sql"] | User function alias | Hi, I may have found a bug.
ClickHouse client version 21.10.1.8013 (official build).
```
CREATE FUNCTION now_debug AS () -> toDateTime('2021-07-01 12:35:34');
```
```
:) select now_debug() as _now_debug, now() as _now;
SELECT
now_debug() AS _now_debug,
now() AS _now
Query id: 4f7840e9-4a92-4d16-a33e-a307af1316d2
┌─toDateTime('2021-07-01 12:35:34')─┬────────────────_now─┐
│               2021-07-01 12:35:34 │ 2021-10-10 19:50:44 │
└───────────────────────────────────┴─────────────────────┘
1 rows in set. Elapsed: 0.002 sec.
```
Expected the column header `_now_debug`; the result shows `toDateTime('2021-07-01 12:35:34')` instead.
| https://github.com/ClickHouse/ClickHouse/issues/29957 | https://github.com/ClickHouse/ClickHouse/pull/30075 | 828b19fd51a714805967e880e18972f0a2a93cad | 8d544a55b722728d8c3295ed9c70f6f7db6b6543 | "2021-10-10T19:54:08Z" | c++ | "2021-10-14T08:15:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,926 | ["src/Processors/Formats/Impl/ArrowColumnToCHColumn.cpp", "src/Processors/Formats/Impl/ParquetBlockInputFormat.cpp"] | Not good enough error message when reading Parquet file. | `Code: 349. DB::Exception: Cannot convert NULL value to non-Nullable type: While executing ParquetBlockInputFormat: While executing File. (CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN) (version 21.11.1.8356 (official build))`
I need to know which column is involved. | https://github.com/ClickHouse/ClickHouse/issues/29926 | https://github.com/ClickHouse/ClickHouse/pull/29927 | daf9cf12d9aa6e71d19671173cd9d0de9558fe9d | 84555646ffb41679e81a7c43e59b47a07cea731a | "2021-10-09T16:40:09Z" | c++ | "2021-10-09T22:18:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,917 | ["src/Core/MySQL/MySQLReplication.cpp"] | materialized mysql decimal 2147483648 | Hi! We have MySQL-to-ClickHouse replication configured with the MaterializedMySQL engine.
Sometimes strange behavior happens: ClickHouse ends up containing values that do not exist in the MySQL database.
For example: the MySQL revenue value is 0.292, but in ClickHouse it is 2147483648.292. Notably, 2147483648 = 2^31, which suggests a high bit of the decimal's integer part is being mis-decoded from the binlog.
MySQL table DDL:
```
CREATE TABLE `adsense` (
`id` bigint unsigned NOT NULL AUTO_INCREMENT,
`date` date NOT NULL,
`sub1` varchar(40) DEFAULT NULL,
`sub2` varchar(255) DEFAULT NULL,
`lead_id` int DEFAULT NULL,
`revenue` decimal(10,3) unsigned NOT NULL,
`created_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`id`),
KEY `idx_adsense_sub1` (`sub1`),
KEY `idx_adsense_lead_id` (`lead_id`),
KEY `adsense_sub2_index` (`sub2`),
KEY `adsense_date_index` (`date`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
```
ClickHouse table DDL:
```
create table adsense
(
id UInt64,
date Date,
sub1 Nullable(String),
sub2 Nullable(String),
lead_id Nullable(Int32),
clicks Int32,
revenue Decimal(10, 3),
created_at DateTime,
_sign Int8 materialized 1,
_version UInt64 materialized 1
)
engine = MaterializeMySQL(_version)
PARTITION BY intDiv(id, 18446744073709551)
ORDER BY (date, assumeNotNull(sub2), assumeNotNull(sub1),
assumeNotNull(lead_id), id)
SETTINGS index_granularity = 8192;
```
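A diagnostic sketch to spot affected rows, assuming the corruption is exactly a spurious 2^31 added to the integer part (the threshold here is an assumption):
```sql
SELECT id, date, revenue
FROM adsense
WHERE toFloat64(revenue) >= 2147483648;
```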
| https://github.com/ClickHouse/ClickHouse/issues/29917 | https://github.com/ClickHouse/ClickHouse/pull/31990 | fa298b089e9669a8ffb1aaf00d0fbfb922f40f06 | 9e034ee3a5af2914aac4fd9a39fd469286eb9b86 | "2021-10-09T08:25:57Z" | c++ | "2021-12-01T16:18:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,892 | ["src/Core/Settings.h", "src/Interpreters/OptimizeIfChains.h", "src/Interpreters/TreeOptimizer.cpp", "tests/queries/0_stateless/01355_if_fixed_string.sql", "tests/queries/0_stateless/02315_replace_multiif_to_if.reference", "tests/queries/0_stateless/02315_replace_multiif_to_if.sql"] | multiIf vs If performance difference with 1 condition | **Describe the situation**
multiIf works 10 times slower than if with a single condition.
**How to reproduce**
ClickHouse version 21.10
```
SELECT count()
FROM numbers_mt(1000000000)
WHERE NOT ignore(multiIf(number = 0, NULL, toNullable(number)))
┌────count()─┐
│ 1000000000 │
└────────────┘
1 rows in set. Elapsed: 10.267 sec. Processed 1.00 billion rows, 8.00 GB (97.41 million rows/s., 779.25 MB/s.)
SELECT count()
FROM numbers_mt(1000000000)
WHERE NOT ignore(If(number = 0, NULL, toNullable(number)))
┌────count()─┐
│ 1000000000 │
└────────────┘
1 rows in set. Elapsed: 1.010 sec. Processed 1.00 billion rows, 8.00 GB (990.63 million rows/s., 7.93 GB/s.)
```
**Expected performance**
The same performance for multiIf and if.
| https://github.com/ClickHouse/ClickHouse/issues/29892 | https://github.com/ClickHouse/ClickHouse/pull/37695 | c6574b15bc7f7323d9a3adf30b13c63d3b6de227 | 3ace07740171eae90c832173bba57203a8f7f192 | "2021-10-08T12:13:44Z" | c++ | "2022-06-03T12:56:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,890 | ["src/Functions/array/arraySlice.cpp", "tests/queries/0_stateless/00498_array_functions_concat_slice_push_pop.reference", "tests/queries/0_stateless/00498_array_functions_concat_slice_push_pop.sql"] | Logical error: Invalid number of rows in Chunk column Array(Array(Nothing)) | https://clickhouse-test-reports.s3.yandex.net/29804/90cc63aecd37ffe7a3f6497b462be55540bc70a5/fuzzer_debug/report.html#fail1
```
2021.10.08 00:59:13.283176 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Debug> executeQuery: (from [::1]:59180) SELECT max((SELECT sumState(number) FROM numbers(65535)) * NULL, arraySlice([arraySlice([], -2)], materialize((SELECT sumState(number) FROM numbers(10)) * NULL), NULL), blockSize()), min(finalizeAggregation(materialize((SELECT sumState(number) FROM numbers(1))) * NULL), blockSize()), any(ignore(*)) FROM tab_00484
2021.10.08 00:59:13.285929 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> ContextAccess (default): Access granted: CREATE TEMPORARY TABLE ON *.*
2021.10.08 00:59:13.288301 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2021.10.08 00:59:13.291858 [ 351 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> AggregatingTransform: Aggregating
2021.10.08 00:59:13.291983 [ 351 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> Aggregator: Aggregation method: without_key
2021.10.08 00:59:13.293000 [ 351 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Debug> AggregatingTransform: Aggregated. 65535 to 1 rows (from 511.99 KiB) in 0.003243095 sec. (20207548.653 rows/sec., 154.17 MiB/sec.)
2021.10.08 00:59:13.293110 [ 351 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> Aggregator: Merging aggregated data
2021.10.08 00:59:13.293968 [ 351 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> PipelineExecutor: Thread finished. Total time: 0.002733168 sec. Execution time: 0.002129194 sec. Processing time: 0.00058313 sec. Wait time: 2.0844e-05 sec.
2021.10.08 00:59:13.295614 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> ContextAccess (default): Access granted: CREATE TEMPORARY TABLE ON *.*
2021.10.08 00:59:13.297993 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2021.10.08 00:59:13.300893 [ 350 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> AggregatingTransform: Aggregating
2021.10.08 00:59:13.300971 [ 350 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> Aggregator: Aggregation method: without_key
2021.10.08 00:59:13.301163 [ 350 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Debug> AggregatingTransform: Aggregated. 10 to 1 rows (from 80.00 B) in 0.001730908 sec. (5777.315 rows/sec., 45.14 KiB/sec.)
2021.10.08 00:59:13.301243 [ 350 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> Aggregator: Merging aggregated data
2021.10.08 00:59:13.302027 [ 350 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> PipelineExecutor: Thread finished. Total time: 0.00126323 sec. Execution time: 0.00079094 sec. Processing time: 0.000456435 sec. Wait time: 1.5855e-05 sec.
2021.10.08 00:59:13.303592 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> ContextAccess (default): Access granted: CREATE TEMPORARY TABLE ON *.*
2021.10.08 00:59:13.305982 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2021.10.08 00:59:13.309076 [ 333 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> AggregatingTransform: Aggregating
2021.10.08 00:59:13.309176 [ 333 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> Aggregator: Aggregation method: without_key
2021.10.08 00:59:13.309376 [ 333 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Debug> AggregatingTransform: Aggregated. 1 to 1 rows (from 8.00 B) in 0.001849616 sec. (540.653 rows/sec., 4.22 KiB/sec.)
2021.10.08 00:59:13.309456 [ 333 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> Aggregator: Merging aggregated data
2021.10.08 00:59:13.310257 [ 333 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> PipelineExecutor: Thread finished. Total time: 0.001323103 sec. Execution time: 0.00082264 sec. Processing time: 0.000484761 sec. Wait time: 1.5702e-05 sec.
2021.10.08 00:59:13.322069 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> ContextAccess (default): Access granted: SELECT(date, x, s) ON default.tab_00484
2021.10.08 00:59:13.322739 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
2021.10.08 00:59:13.329917 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Debug> default.tab_00484 (79afac16-d944-43de-b9af-ac16d944a3de) (SelectExecutor): Key condition: unknown
2021.10.08 00:59:13.338422 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Debug> default.tab_00484 (79afac16-d944-43de-b9af-ac16d944a3de) (SelectExecutor): MinMax index condition: unknown
2021.10.08 00:59:13.338890 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Debug> default.tab_00484 (79afac16-d944-43de-b9af-ac16d944a3de) (SelectExecutor): Selected 1/1 parts by partition key, 1 parts by primary key, 1/1 marks by primary key, 1 marks to read from 1 ranges
2021.10.08 00:59:13.339326 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Debug> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 20211008_1_1_0, approx. 8192 rows starting from 0
2021.10.08 00:59:13.341567 [ 136 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Trace> Aggregator: Compile expression any()(UInt8) 0
2021.10.08 00:59:13.447830 [ 135 ] {} <Trace> KeeperTCPHandler: Received heartbeat for session #1
2021.10.08 00:59:13.654722 [ 183 ] {} <Trace> SystemLog (system.metric_log): Flushing system log, 8 entries to flush up to offset 3471
2021.10.08 00:59:13.746169 [ 182 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 755.10 GiB.
2021.10.08 00:59:13.753232 [ 182 ] {} <Trace> system.text_log (9cd0c0bd-4d25-4f18-9cd0-c0bd4d25bf18): Renaming temporary part tmp_insert_202110_461_461_0 to 202110_461_461_0.
2021.10.08 00:59:13.768806 [ 352 ] {f483911f-89e8-47b6-b594-3da29e8b832b} <Fatal> : Logical error: 'Invalid number of rows in Chunk column Array(Array(Nothing)) position 1: expected 8192, got 1'.
2021.10.08 00:59:13.769800 [ 40 ] {} <Trace> BaseDaemon: Received signal 6
2021.10.08 00:59:13.770734 [ 356 ] {} <Fatal> BaseDaemon: ########################################
2021.10.08 00:59:13.771067 [ 356 ] {} <Fatal> BaseDaemon: (version 21.11.1.8340, build id: FABB7A7D67748FCF81AAD13FD7F3B2E48F1571BC) (from thread 352) (query_id: f483911f-89e8-47b6-b594-3da29e8b832b) Received signal Aborted (6)
2021.10.08 00:59:13.771310 [ 356 ] {} <Fatal> BaseDaemon:
2021.10.08 00:59:13.771560 [ 356 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f851dc3918b 0x7f851dc18859 0x14e372b8 0x14e373c2 0x22956852 0x22956ea5 0x22d00e20 0x1e52ac02 0x2295fa5b 0x229d6019 0x229d5f7f 0x229d5f1d 0x229d5edd 0x229d5eb5 0x229d5e7d 0x14e83866 0x14e82955 0x229d4925 0x229d52a5 0x229d320b 0x229d24d3 0x229f2ee1 0x229f2e00 0x229f2d7d 0x229f2d21 0x229f2c32 0x229f2b1b 0x229f29dd 0x229f299d 0x229f2975 0x229f2940 0x14e83866 0x14e82955 0x14ead90f 0x14eb4ac4 0x14eb4a3d 0x14eb4965 0x14eb42a2 0x7f851ddff609 0x7f851dd15293
2021.10.08 00:59:13.771935 [ 356 ] {} <Fatal> BaseDaemon: 4. raise @ 0x4618b in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.08 00:59:13.772095 [ 356 ] {} <Fatal> BaseDaemon: 5. abort @ 0x25859 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.08 00:59:13.868759 [ 356 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:53: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x14e372b8 in /workspace/clickhouse
2021.10.08 00:59:13.947698 [ 183 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 755.10 GiB.
2021.10.08 00:59:13.954389 [ 356 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:60: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x14e373c2 in /workspace/clickhouse
2021.10.08 00:59:14.024403 [ 183 ] {} <Trace> system.metric_log (30092398-77c5-4ef1-b009-239877c53ef1): Renaming temporary part tmp_insert_202110_439_439_0 to 202110_439_439_0.
2021.10.08 00:59:14.060775 [ 183 ] {} <Trace> SystemLog (system.metric_log): Flushed system log up to offset 3471
2021.10.08 00:59:14.076021 [ 356 ] {} <Fatal> BaseDaemon: 8. ./obj-x86_64-linux-gnu/../src/Processors/Chunk.cpp:72: DB::Chunk::checkNumRowsIsConsistent() @ 0x22956852 in /workspace/clickhouse
2021.10.08 00:59:14.192992 [ 356 ] {} <Fatal> BaseDaemon: 9. ./obj-x86_64-linux-gnu/../src/Processors/Chunk.cpp:57: DB::Chunk::setColumns(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >, unsigned long) @ 0x22956ea5 in /workspace/clickhouse
2021.10.08 00:59:14.309203 [ 356 ] {} <Fatal> BaseDaemon: 10. ./obj-x86_64-linux-gnu/../src/Processors/Transforms/ExpressionTransform.cpp:25: DB::ExpressionTransform::transform(DB::Chunk&) @ 0x22d00e20 in /workspace/clickhouse
2021.10.08 00:59:14.475803 [ 356 ] {} <Fatal> BaseDaemon: 11. ./obj-x86_64-linux-gnu/../src/Processors/ISimpleTransform.h:38: DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0x1e52ac02 in /workspace/clickhouse
2021.10.08 00:59:14.607274 [ 356 ] {} <Fatal> BaseDaemon: 12. ./obj-x86_64-linux-gnu/../src/Processors/ISimpleTransform.cpp:89: DB::ISimpleTransform::work() @ 0x2295fa5b in /workspace/clickhouse
2021.10.08 00:59:14.697087 [ 179 ] {} <Trace> SystemLog (system.query_thread_log): Flushing system log, 17 entries to flush up to offset 25221
2021.10.08 00:59:14.731117 [ 179 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 755.10 GiB.
2021.10.08 00:59:14.741709 [ 179 ] {} <Trace> system.query_thread_log (f35797ef-3835-4ad0-b357-97ef3835ead0): Renaming temporary part tmp_insert_202110_390_390_0 to 202110_390_390_0.
2021.10.08 00:59:14.742313 [ 204 ] {} <Debug> system.query_thread_log (f35797ef-3835-4ad0-b357-97ef3835ead0) (MergerMutator): Selected 6 parts from 202110_345_385_8 to 202110_390_390_0
2021.10.08 00:59:14.742576 [ 204 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 755.10 GiB.
2021.10.08 00:59:14.743140 [ 101 ] {} <Debug> MergeTask::PrepareStage: Merging 6 parts: from 202110_345_385_8 to 202110_390_390_0 into Compact
2021.10.08 00:59:14.744477 [ 101 ] {} <Debug> MergeTask::PrepareStage: Selected MergeAlgorithm: Horizontal
2021.10.08 00:59:14.745363 [ 179 ] {} <Trace> SystemLog (system.query_thread_log): Flushed system log up to offset 25221
2021.10.08 00:59:14.745638 [ 101 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202110_345_385_8, total 1493 rows starting from the beginning of the part
2021.10.08 00:59:14.749752 [ 101 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202110_386_386_0, total 41 rows starting from the beginning of the part
2021.10.08 00:59:14.753840 [ 101 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202110_387_387_0, total 42 rows starting from the beginning of the part
2021.10.08 00:59:14.757926 [ 101 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202110_388_388_0, total 12 rows starting from the beginning of the part
2021.10.08 00:59:14.761924 [ 101 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202110_389_389_0, total 14 rows starting from the beginning of the part
2021.10.08 00:59:14.783837 [ 101 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202110_390_390_0, total 17 rows starting from the beginning of the part
2021.10.08 00:59:14.851460 [ 101 ] {} <Debug> MergeTask::MergeProjectionsStage: Merge sorted 1619 rows, containing 44 columns (44 merged, 0 gathered) in 0.108573939 sec., 14911.497316128505 rows/sec., 15.57 MiB/sec.
2021.10.08 00:59:14.864160 [ 356 ] {} <Fatal> BaseDaemon: 13. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:88: DB::executeJob(DB::IProcessor*) @ 0x229d6019 in /workspace/clickhouse
2021.10.08 00:59:14.870426 [ 101 ] {} <Trace> system.query_thread_log (f35797ef-3835-4ad0-b357-97ef3835ead0): Renaming temporary part tmp_merge_202110_345_390_9 to 202110_345_390_9.
2021.10.08 00:59:14.871135 [ 101 ] {} <Trace> system.query_thread_log (f35797ef-3835-4ad0-b357-97ef3835ead0) (MergerMutator): Merged 6 parts: from 202110_345_385_8 to 202110_390_390_0
2021.10.08 00:59:14.873230 [ 101 ] {} <Debug> MemoryTracker: Peak memory usage Mutate/Merge: 4.03 MiB.
2021.10.08 00:59:15.000263 [ 319 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 813.55 MiB, peak 3.09 GiB, will set to 862.45 MiB (RSS), difference: 48.90 MiB
2021.10.08 00:59:15.097659 [ 356 ] {} <Fatal> BaseDaemon: 14. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:105: DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0::operator()() const @ 0x229d5f7f in /workspace/clickhouse
2021.10.08 00:59:15.343681 [ 356 ] {} <Fatal> BaseDaemon: 15. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(fp)()) std::__1::__invoke<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x229d5f1d in /workspace/clickhouse
2021.10.08 00:59:15.576566 [ 356 ] {} <Fatal> BaseDaemon: 16. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x229d5edd in /workspace/clickhouse
2021.10.08 00:59:15.812081 [ 356 ] {} <Fatal> BaseDaemon: 17. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, void ()>::operator()() @ 0x229d5eb5 in /workspace/clickhouse
2021.10.08 00:59:16.047573 [ 356 ] {} <Fatal> BaseDaemon: 18. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, void ()> >(std::__1::__function::__policy_storage const*) @ 0x229d5e7d in /workspace/clickhouse
2021.10.08 00:59:16.095417 [ 356 ] {} <Fatal> BaseDaemon: 19. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x14e83866 in /workspace/clickhouse
2021.10.08 00:59:16.140002 [ 356 ] {} <Fatal> BaseDaemon: 20. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x14e82955 in /workspace/clickhouse
2021.10.08 00:59:16.357288 [ 356 ] {} <Fatal> BaseDaemon: 21. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:602: DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x229d4925 in /workspace/clickhouse
2021.10.08 00:59:16.575915 [ 356 ] {} <Fatal> BaseDaemon: 22. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:487: DB::PipelineExecutor::executeSingleThread(unsigned long, unsigned long) @ 0x229d52a5 in /workspace/clickhouse
2021.10.08 00:59:16.770047 [ 356 ] {} <Fatal> BaseDaemon: 23. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:826: DB::PipelineExecutor::executeImpl(unsigned long) @ 0x229d320b in /workspace/clickhouse
2021.10.08 00:59:16.986333 [ 356 ] {} <Fatal> BaseDaemon: 24. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:407: DB::PipelineExecutor::execute(unsigned long) @ 0x229d24d3 in /workspace/clickhouse
2021.10.08 00:59:17.000304 [ 319 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 862.45 MiB, peak 3.09 GiB, will set to 863.98 MiB (RSS), difference: 1.53 MiB
2021.10.08 00:59:17.144548 [ 356 ] {} <Fatal> BaseDaemon: 25. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:85: DB::threadFunction(DB::PullingAsyncPipelineExecutor::Data&, std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0x229f2ee1 in /workspace/clickhouse
2021.10.08 00:59:17.302879 [ 356 ] {} <Fatal> BaseDaemon: 26. ./obj-x86_64-linux-gnu/../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:113: DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0::operator()() const @ 0x229f2e00 in /workspace/clickhouse
2021.10.08 00:59:17.338431 [ 181 ] {} <Trace> SystemLog (system.trace_log): Flushing system log, 21 entries to flush up to offset 23038
2021.10.08 00:59:17.351251 [ 181 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 755.10 GiB.
2021.10.08 00:59:17.355336 [ 181 ] {} <Trace> system.trace_log (8ffce5a6-2b20-41be-8ffc-e5a62b2071be): Renaming temporary part tmp_insert_202110_463_463_0 to 202110_463_463_0.
2021.10.08 00:59:17.355823 [ 227 ] {} <Debug> system.trace_log (8ffce5a6-2b20-41be-8ffc-e5a62b2071be) (MergerMutator): Selected 2 parts from 202110_1_458_254 to 202110_459_459_0
2021.10.08 00:59:17.355999 [ 227 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 755.10 GiB.
2021.10.08 00:59:17.356354 [ 107 ] {} <Debug> MergeTask::PrepareStage: Merging 2 parts: from 202110_1_458_254 to 202110_459_459_0 into Compact
2021.10.08 00:59:17.356678 [ 181 ] {} <Trace> SystemLog (system.trace_log): Flushed system log up to offset 23038
2021.10.08 00:59:17.356944 [ 107 ] {} <Debug> MergeTask::PrepareStage: Selected MergeAlgorithm: Horizontal
2021.10.08 00:59:17.357425 [ 107 ] {} <Debug> MergeTreeSequentialSource: Reading 4 marks from part 202110_1_458_254, total 22900 rows starting from the beginning of the part
2021.10.08 00:59:17.359088 [ 107 ] {} <Debug> MergeTreeSequentialSource: Reading 2 marks from part 202110_459_459_0, total 29 rows starting from the beginning of the part
2021.10.08 00:59:17.452267 [ 100 ] {} <Debug> MergeTask::MergeProjectionsStage: Merge sorted 22929 rows, containing 10 columns (10 merged, 0 gathered) in 0.096017688 sec., 238799.75114585136 rows/sec., 85.77 MiB/sec.
2021.10.08 00:59:17.461700 [ 356 ] {} <Fatal> BaseDaemon: 27. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682: decltype(std::__1::forward<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&>(fp)()) std::__1::__invoke_constexpr<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&) @ 0x229f2d7d in /workspace/clickhouse
2021.10.08 00:59:17.475395 [ 100 ] {} <Trace> system.trace_log (8ffce5a6-2b20-41be-8ffc-e5a62b2071be): Renaming temporary part tmp_merge_202110_1_459_255 to 202110_1_459_255.
2021.10.08 00:59:17.475826 [ 100 ] {} <Trace> system.trace_log (8ffce5a6-2b20-41be-8ffc-e5a62b2071be) (MergerMutator): Merged 2 parts: from 202110_1_458_254 to 202110_459_459_0
2021.10.08 00:59:17.476974 [ 100 ] {} <Debug> MemoryTracker: Peak memory usage Mutate/Merge: 17.70 MiB.
2021.10.08 00:59:17.580039 [ 131 ] {} <Trace> system.trace_log (8ffce5a6-2b20-41be-8ffc-e5a62b2071be): Found 2 old parts to remove.
2021.10.08 00:59:17.580185 [ 131 ] {} <Debug> system.trace_log (8ffce5a6-2b20-41be-8ffc-e5a62b2071be): Removing part from filesystem 202110_1_394_190
2021.10.08 00:59:17.581130 [ 131 ] {} <Debug> system.trace_log (8ffce5a6-2b20-41be-8ffc-e5a62b2071be): Removing part from filesystem 202110_395_395_0
2021.10.08 00:59:17.606997 [ 184 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushing system log, 7469 entries to flush up to offset 3706888
2021.10.08 00:59:17.619507 [ 356 ] {} <Fatal> BaseDaemon: 28. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415: decltype(auto) std::__1::__apply_tuple_impl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) @ 0x229f2d21 in /workspace/clickhouse
2021.10.08 00:59:17.634497 [ 184 ] {} <Debug> DiskLocal: Reserving 1.00 MiB on disk `default`, having unreserved 755.10 GiB.
2021.10.08 00:59:17.641237 [ 184 ] {} <Trace> system.asynchronous_metric_log (f5f68d9e-ac13-4fca-b5f6-8d9eac135fca): Renaming temporary part tmp_insert_202110_494_494_0 to 202110_494_494_0.
2021.10.08 00:59:17.642502 [ 184 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushed system log up to offset 3706888
2021.10.08 00:59:17.776392 [ 356 ] {} <Fatal> BaseDaemon: 29. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424: decltype(auto) std::__1::apply<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&) @ 0x229f2c32 in /workspace/clickhouse
2021.10.08 00:59:17.916626 [ 356 ] {} <Fatal> BaseDaemon: 30. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:188: ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()::operator()() @ 0x229f2b1b in /workspace/clickhouse
2021.10.08 00:59:18.075566 [ 356 ] {} <Fatal> BaseDaemon: 31. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&) @ 0x229f29dd in /workspace/clickhouse
2021.10.08 00:59:18.230352 [ 356 ] {} <Fatal> BaseDaemon: 32. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&...) @ 0x229f299d in /workspace/clickhouse
2021.10.08 00:59:18.385081 [ 356 ] {} <Fatal> BaseDaemon: 33. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>::operator()() @ 0x229f2975 in /workspace/clickhouse
2021.10.08 00:59:18.539383 [ 356 ] {} <Fatal> BaseDaemon: 34. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x229f2940 in /workspace/clickhouse
2021.10.08 00:59:18.585901 [ 356 ] {} <Fatal> BaseDaemon: 35. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x14e83866 in /workspace/clickhouse
2021.10.08 00:59:18.629291 [ 356 ] {} <Fatal> BaseDaemon: 36. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x14e82955 in /workspace/clickhouse
2021.10.08 00:59:18.702258 [ 356 ] {} <Fatal> BaseDaemon: 37. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:274: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x14ead90f in /workspace/clickhouse
2021.10.08 00:59:18.792283 [ 356 ] {} <Fatal> BaseDaemon: 38. ./obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:139: void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()::operator()() const @ 0x14eb4ac4 in /workspace/clickhouse
2021.10.08 00:59:18.887547 [ 356 ] {} <Fatal> BaseDaemon: 39. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<void>(fp)(std::__1::forward<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(fp0)...)) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) @ 0x14eb4a3d in /workspace/clickhouse
2021.10.08 00:59:18.982547 [ 356 ] {} <Fatal> BaseDaemon: 40. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:281: void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>&, std::__1::__tuple_indices<>) @ 0x14eb4965 in /workspace/clickhouse
2021.10.08 00:59:19.080242 [ 356 ] {} <Fatal> BaseDaemon: 41. ./obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x14eb42a2 in /workspace/clickhouse
2021.10.08 00:59:19.080681 [ 356 ] {} <Fatal> BaseDaemon: 42. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.10.08 00:59:19.081112 [ 356 ] {} <Fatal> BaseDaemon: 43. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
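For convenience, the fuzzed query from the executeQuery log line above, extracted as standalone SQL (tab_00484 comes from the 00484 stateless test):
```sql
SELECT
    max((SELECT sumState(number) FROM numbers(65535)) * NULL, arraySlice([arraySlice([], -2)], materialize((SELECT sumState(number) FROM numbers(10)) * NULL), NULL), blockSize()),
    min(finalizeAggregation(materialize((SELECT sumState(number) FROM numbers(1))) * NULL), blockSize()),
    any(ignore(*))
FROM tab_00484;
```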
Can be related to #28966, but it looks different.
cc: @KochetovNicolai
| https://github.com/ClickHouse/ClickHouse/issues/29890 | https://github.com/ClickHouse/ClickHouse/pull/32456 | 71df622b1f0371fc4fd525c7a8a404425c17fcee | 26d606c158d49acb4f66fadc03b2ac2781aed305 | "2021-10-08T11:26:01Z" | c++ | "2021-12-12T03:38:17Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,883 | ["programs/server/config.d/graphite.xml", "src/Processors/Merges/Algorithms/Graphite.cpp", "src/Processors/Merges/Algorithms/GraphiteRollupSortedAlgorithm.cpp", "src/Storages/MergeTree/registerStorageMergeTree.cpp", "tests/queries/0_stateless/02508_bad_graphite.reference", "tests/queries/0_stateless/02508_bad_graphite.sql"] | If I'm holding a `GraphiteMergeTree` table in the wrong way, it complains | When trying to populate a newly created GraphiteMergeTree table with some data, I catch an exception.
```sql
CREATE TABLE default.ttt
(
`C1` Int8,
`Sign` Int8,
`Version` UInt8,
`Path` String,
`Time` DateTime,
`Value` Int8
)
ENGINE = GraphiteMergeTree('graphite_rollup_params')
ORDER BY tuple()
```
Table is created and it shows up in `SHOW TABLES`. Then, I try to insert some data:
```sql
tietokone :) INSERT INTO ttt (*) VALUES (1, 1, 2, 'qwew', '2021-12-12 01:01:02', 2)
INSERT INTO ttt (*) VALUES
Received exception from server (version 21.9.4):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of rows in Chunk column Int8 position 5: expected 1, got 8. (LOGICAL_ERROR)
```
`graphite_rollup_params`:
```xml
<yandex>
<graphite_rollup_params>
<version_column_name>Version</version_column_name>
<pattern>
<regexp>click_cost</regexp>
<function>any</function>
<retention>
<age>0</age>
<precision>5</precision>
</retention>
<retention>
<age>86400</age>
<precision>60</precision>
</retention>
</pattern>
<default>
<function>max</function>
<retention>
<age>0</age>
<precision>60</precision>
</retention>
<retention>
<age>3600</age>
<precision>300</precision>
</retention>
<retention>
<age>86400</age>
<precision>3600</precision>
</retention>
</default>
</graphite_rollup_params>
</yandex>
```
This happens on `21.3`, `21.8`, `21.9`
However, `20.3`, `20.8` work as expected | https://github.com/ClickHouse/ClickHouse/issues/29883 | https://github.com/ClickHouse/ClickHouse/pull/44342 | 31348a0c2b9c6ae4cfa503bf786b178e4328cd7c | 1b21cc018ed7259d7ef25eab86184d203c9d0f56 | "2021-10-08T09:15:14Z" | c++ | "2022-12-27T11:55:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,880 | ["docs/en/interfaces/cli.md", "docs/ru/interfaces/cli.md", "programs/client/Client.cpp", "src/Client/ConnectionString.cpp", "src/Client/ConnectionString.h", "tests/queries/0_stateless/02784_connection_string.reference", "tests/queries/0_stateless/02784_connection_string.sh"] | ClickHouse connection strings | **Use case**
Now you can connect to ClickHouse by specifying the host, port, username, password and other parameters separately.
_clickhouse-client --host=... --port=... --user=... --password=..._
It is already possible to specify the connection string as a URI in many databases. Example from PostgreSQL
_postgresql://localhost:5433/my_database_
It is simple, convenient and frequently used. You can just copy the link and connect.
**Describe the solution you'd like**
I want to achieve the following general form of the connection URL:
```
clickhouse://[userspec@][hostspec][/dbname][?paramspec]
where userspec is:
user[:password]
where hostspec is:
[host][:port][,...]
and paramspec is:
name=value[&...]
```
Ex: `clickhouse://login:password@host:port/database`
There are many connection parameters for ClickHouse, so the trailing [?paramspec] part is necessary.
If credentials are not specified in the connection string, they will be verified after the connection string is accepted.
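For illustration, a hypothetical client invocation once this is implemented (a sketch; the exact CLI form is an assumption):
```
clickhouse-client 'clickhouse://alice:secret@ch1.example.com:9000/analytics?connect_timeout=10'
```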
I propose this solution because it is a convenient and general approach, similar to the PostgreSQL example (details below).
**Describe alternatives you've considered**
Other implementations are similar:
1) **PostgreSQL**
_postgresql://[userspec@][hostspec][/dbname][?paramspec]_
```
where userspec is:
user[:password]
and hostspec is:
[host][:port][,...]
and paramspec is:
name=value[&...]
```
The following examples illustrate valid URI syntax:
```
postgresql://
postgresql://localhost
postgresql://localhost:5433
postgresql://localhost/mydb
postgresql://user@localhost
postgresql://user:secret@localhost
postgresql://other@localhost/otherdb?connect_timeout=10&application_name=myapp
postgresql://host1:123,host2:456/somedb?target_session_attrs=any&application_name=myapp
```
https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
2) **JDBC driver(SQL Server)**
_jdbc:sqlserver://[serverName[\instanceName][:portNumber]][;property=value[;property=value]]_
```
where:
- jdbc:sqlserver:// (Required) is known as the subprotocol and is constant.
- serverName (Optional) is the address of the server to connect to. This address can be a DNS or IP address, or it can be localhost or 127.0.0.1 for the local computer. If not specified in the connection URL, the server name must be specified in the properties collection.
- instanceName (Optional) is the instance to connect to on serverName. If not specified, a connection to the default instance is made.
- portNumber (Optional) is the port to connect to on serverName. The default is 1433. If you're using the default, you don't have to specify the port, nor its preceding ':', in the URL.
- property (Optional) is one or more option connection properties. For more information, see Setting the connection properties. Any property from the list can be specified. Properties can only be delimited by using the semicolon (';'), and they can't be duplicated.
```
Examples:
```
jdbc:sqlserver://localhost;user=MyUserName;password=*****;
jdbc:sqlserver://localhost:1433;databaseName=AdventureWorks;integratedSecurity=true;
```
https://docs.microsoft.com/en-us/sql/connect/jdbc/building-the-connection-url?view=sql-server-ver15
3) **ODBC driver(SQL Server 2000)**
Driver={SQL Server};Server=myServerAddress;Database=myDataBase;Uid=myUsername;Pwd=myPassword;
https://www.connectionstrings.com/microsoft-sql-server-odbc-driver/
4) **JDBC Oracle**
Connect to Oracle Database SID. In some older versions of the Oracle database, the database is defined as a SID
```
jdbc:oracle:thin:[<user>/<password>]@<host>[:<port>]:<SID>
Ex: jdbc:oracle:thin:@myoracle.db.server:1521:my_sid
```
Connect to Oracle Database Service Name
```
jdbc:oracle:thin:[<user>/<password>]@//<host>[:<port>]/<service>
Ex: jdbc:oracle:thin:@//myoracle.db.server:1521/my_servicename
```
https://www.baeldung.com/java-jdbc-url-format
5) **JDBC MySQL**
_protocol//[hosts][/database][?properties]_
```
jdbc:mysql://mysql.db.server:3306/my_database?useSSL=false&serverTimezone=UTC
protocol β jdbc:mysql: // specific param for mysql. There are a lot of values.
host β mysql.db.server:3306
database β my_database
properties β useSSL=false&serverTimezone=UTC
```
https://www.baeldung.com/java-jdbc-url-format
Further JDBC URL examples for other databases (originally shown as a screenshot) are provided at:
https://www.tutorialspoint.com/jdbc/jdbc-db-connections.htm
| https://github.com/ClickHouse/ClickHouse/issues/29880 | https://github.com/ClickHouse/ClickHouse/pull/50689 | b1eacab44a94d19e5cfe716d8a64bc9c18d2da62 | 2643fd2c25bd189c1617892cf96dd372f81403c7 | "2021-10-08T06:58:38Z" | c++ | "2023-06-14T08:39:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,805 | ["docs/en/sql-reference/statements/create/dictionary.md", "src/Dictionaries/DictionaryStructure.cpp"] | dictionaries attributes doc issue | https://clickhouse.com/docs/en/sql-reference/statements/create/dictionary/
```
CREATE DICTIONARY [IF NOT EXISTS] [db.]dictionary_name [ON CLUSTER cluster]
(
key1 type1 [DEFAULT|EXPRESSION expr1] [HIERARCHICAL|INJECTIVE|IS_OBJECT_ID],
key2 type2 [DEFAULT|EXPRESSION expr2] [HIERARCHICAL|INJECTIVE|IS_OBJECT_ID],
attr1 type2 [DEFAULT|EXPRESSION expr3],
attr2 type2 [DEFAULT|EXPRESSION expr4]
```
`[HIERARCHICAL|INJECTIVE|IS_OBJECT_ID]` applies to attr1, not to key1.
And what is `IS_OBJECT_ID` ? | https://github.com/ClickHouse/ClickHouse/issues/29805 | https://github.com/ClickHouse/ClickHouse/pull/29816 | 4ec7311d4d4ee739d28db451bdc7a21cf424c003 | d91377c9932d64b1ac57beaab6acbade9b36665e | "2021-10-06T12:51:52Z" | c++ | "2021-10-07T07:21:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,788 | ["src/IO/S3Common.cpp"] | Startup crash: STSResourceClient Address: NULL pointer. Access: read. Address not mapped to object. | **Describe what's wrong**
ClickHouse server crashes with a traceback on startup in my deployment when I add an S3 storage policy to the /etc/clickhouse-server/config.d folder.
**Does it reproduce on recent release?**
Also happens on clickhouse-server:21.9.4.35
**Enable crash reporting**
I duplicated the issue with crash reporting turned on
```
2021.10.05 20:01:21.677155 [ 198 ] {} <Fatal> BaseDaemon: 24. _start @ 0x932d1ee in /usr/bin/clickhouse
2021.10.05 20:01:21.793893 [ 198 ] {} <Fatal> BaseDaemon: Checksum of the binary: BEA07E96B6BEBA1591FE837CF53C7591, integrity check passed.
2021.10.05 20:01:21.803314 [ 198 ] {} <Information> SentryWriter: Sending crash report
```
**How to reproduce**
I am deploying ClickHouse in Kubernetes using altinity/clickhouse-operator:0.15.0 and yandex/clickhouse-server:21.9.2.17.
I have a single clickhouse-server pod in my cluster associated with a service account defined by
```
---
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: <RoleARN>
  labels:
    app: olap-clichouse
  name: olap-clickhouse-sa
  namespace: my-namespace
```
The ARN links to an IAM role with read/write/list permissions on my S3 bucket.
The expected env variables from the service account appear in the pod
```
AWS_DEFAULT_REGION=us-west-2
AWS_REGION=us-west-2
AWS_ROLE_ARN=<RoleARN>
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
```
The config definition for storage is set to:
```
<yandex>
    <storage_configuration>
        <disks>
            <s3_storage_disk>
                <type>s3</type>
                <endpoint>https://my-bucker.s3.us-west-2.amazonaws.com/data-backups/</endpoint>
                <use_environment_credentials>true</use_environment_credentials>
                <region>us-west-2</region>
            </s3_storage_disk>
        </disks>
        <policies>
            <s3_storage_policy>
                <volumes>
                    <s3_main_volume>
                        <disk>s3_storage_disk</disk>
                    </s3_main_volume>
                </volumes>
            </s3_storage_policy>
        </policies>
    </storage_configuration>
</yandex>
```
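For what it's worth, once the server does start, the policy can be sanity-checked with a query along these lines (a sketch using the standard `system.storage_policies` table):
```sql
SELECT policy_name, volume_name, disks
FROM system.storage_policies
WHERE policy_name = 's3_storage_policy';
```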
**Expected behavior**
The server should start without a crash and I should be able to read/write data from/to S3.
**Error message and/or stacktrace**
Pod logs including traceback.
```
Merging configuration file '/etc/clickhouse-server/conf.d/chop-generated-zookeeper.xml'.
Merging configuration file '/etc/clickhouse-server/users.d/01-clickhouse-user.xml'.
Merging configuration file '/etc/clickhouse-server/users.d/02-clickhouse-default-profile.xml'.
Merging configuration file '/etc/clickhouse-server/users.d/03-database-ordinary.xml'.
Merging configuration file '/etc/clickhouse-server/users.d/chop-generated-users.xml'.
Saved preprocessed configuration to '/var/lib/clickhouse/preprocessed_configs/users.xml'.
2021.10.05 19:29:27.455634 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performing update on configuration
2021.10.05 19:29:27.456171 [ 1 ] {} <Debug> ConfigReloader: Loaded config '/etc/clickhouse-server/users.xml', performed update on configuration
2021.10.05 19:29:27.456455 [ 1 ] {} <Debug> Access(user directories): Added users.xml access storage 'users.xml', path: /etc/clickhouse-server/users.xml
2021.10.05 19:29:27.456562 [ 1 ] {} <Warning> Access(local directory): File /var/lib/clickhouse/access/users.list doesn't exist
2021.10.05 19:29:27.456590 [ 1 ] {} <Warning> Access(local directory): Recovering lists in directory /var/lib/clickhouse/access/
2021.10.05 19:29:27.456736 [ 1 ] {} <Debug> Access(user directories): Added local directory access storage 'local directory', path: /var/lib/clickhouse/access/
2021.10.05 19:29:27.457170 [ 1 ] {} <Information> Application: Loading metadata from /var/lib/clickhouse/
2021.10.05 19:29:27.459290 [ 1 ] {} <Information> DatabaseAtomic (system): Total 0 tables and 0 dictionaries.
2021.10.05 19:29:27.459312 [ 1 ] {} <Information> DatabaseAtomic (system): Starting up tables.
2021.10.05 19:29:27.462096 [ 1 ] {} <Information> DatabaseCatalog: Found 0 partially dropped tables. Will load them and retry removal.
2021.10.05 19:29:27.463533 [ 1 ] {} <Information> DatabaseOrdinary (default): Total 0 tables and 0 dictionaries.
2021.10.05 19:29:27.463553 [ 1 ] {} <Information> DatabaseOrdinary (default): Starting up tables.
2021.10.05 19:29:27.463576 [ 1 ] {} <Information> BackgroundSchedulePool/BgSchPool: Create BackgroundSchedulePool with 128 threads
2021.10.05 19:29:27.470583 [ 1 ] {} <Debug> Application: Loaded metadata.
2021.10.05 19:29:27.481421 [ 1 ] {} <Information> Application: It looks like the process has no CAP_SYS_NICE capability, the setting 'os_thread_priority' will have no effect. It could happen due to incorrect ClickHouse package installation. You could resolve the problem manually with 'sudo setcap cap_sys_nice=+ep /usr/bin/clickhouse'. Note that it will not work on 'nosuid' mounted filesystems.
2021.10.05 19:29:27.481866 [ 1 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, DNS error: EAI: Address family for hostname not supported (version 21.9.4.35 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021.10.05 19:29:27.482017 [ 1 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, DNS error: EAI: Address family for hostname not supported (version 21.9.4.35 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021.10.05 19:29:27.482158 [ 1 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, DNS error: EAI: Address family for hostname not supported (version 21.9.4.35 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021.10.05 19:29:27.482290 [ 1 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, DNS error: EAI: Address family for hostname not supported (version 21.9.4.35 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021.10.05 19:29:27.482416 [ 1 ] {} <Warning> Application: Listen [::]:9005 failed: Poco::Exception. Code: 1000, e.code() = 0, DNS error: EAI: Address family for hostname not supported (version 21.9.4.35 (official build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
2021.10.05 19:29:27.482552 [ 1 ] {} <Information> Application: Listening for http://0.0.0.0:8123
2021.10.05 19:29:27.505362 [ 1 ] {} <Information> Application: Listening for connections with native protocol (tcp): 0.0.0.0:9000
2021.10.05 19:29:27.505440 [ 1 ] {} <Information> Application: Listening for replica communication (interserver): http://0.0.0.0:9009
2021.10.05 19:29:27.552577 [ 1 ] {} <Information> Application: Listening for MySQL compatibility protocol: 0.0.0.0:9004
2021.10.05 19:29:27.552634 [ 1 ] {} <Information> Application: Listening for PostgreSQL compatibility protocol: 0.0.0.0:9005
2021.10.05 19:29:27.553384 [ 1 ] {} <Warning> AWSClient: ClientConfiguration: Retry Strategy will use the default max attempts.
2021.10.05 19:29:27.553497 [ 1 ] {} <Information> AWSClient: Aws::Config::AWSConfigFileProfileConfigLoader: Initializing config loader against fileName /var/lib/clickhouse/.aws/credentials and using profilePrefix = 0
2021.10.05 19:29:27.553522 [ 1 ] {} <Information> AWSClient: ProfileConfigFileAWSCredentialsProvider: Setting provider to read credentials from /var/lib/clickhouse/.aws/credentials for credentials file and /var/lib/clickhouse/.aws/config for the config file , for use with profile default
2021.10.05 19:29:27.553532 [ 1 ] {} <Information> AWSClient: ProcessCredentialsProvider: Setting process credentials provider to read config from default
2021.10.05 19:29:27.553548 [ 1 ] {} <Warning> AWSClient: ClientConfiguration: Retry Strategy will use the default max attempts.
2021.10.05 19:29:27.553565 [ 1 ] {} <Information> AWSClient: STSResourceClient: Creating AWSHttpResourceClient with max connections 25 and scheme https
2021.10.05 19:29:27.553917 [ 197 ] {} <Fatal> BaseDaemon: ########################################
2021.10.05 19:29:27.553971 [ 197 ] {} <Fatal> BaseDaemon: (version 21.9.4.35 (official build), build id: 5F55EEF74E2818F777B4052BF503DF5BA7BFD787) (from thread 1) (no query) Received signal Segmentation fault (11)
2021.10.05 19:29:27.554000 [ 197 ] {} <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
2021.10.05 19:29:27.554025 [ 197 ] {} <Fatal> BaseDaemon: Stack trace: 0xff535ff 0xff58310 0x13df6e2e 0x13dfcaa1 0x13dc0bb3 0xff46d6a 0xff4037f 0x10638594 0x10635477 0x10633c6e 0x10619691 0x10615903 0x1072eadf 0x1070ed0b 0x1070efcb 0x106cb7ca 0x106c1aa0 0x93f18f5 0x1438e0a3 0x93e1d6f 0x93e0113 0x9364b1e 0x7f92677850b3 0x932d1ee
2021.10.05 19:29:27.554075 [ 197 ] {} <Fatal> BaseDaemon: 1. DB::S3::PocoHTTPClient::PocoHTTPClient(DB::S3::PocoHTTPClientConfiguration const&) @ 0xff535ff in /usr/bin/clickhouse
2021.10.05 19:29:27.554105 [ 197 ] {} <Fatal> BaseDaemon: 2. DB::S3::PocoHTTPClientFactory::CreateHttpClient(Aws::Client::ClientConfiguration const&) const @ 0xff58310 in /usr/bin/clickhouse
2021.10.05 19:29:27.554127 [ 197 ] {} <Fatal> BaseDaemon: 3. Aws::Internal::AWSHttpResourceClient::AWSHttpResourceClient(Aws::Client::ClientConfiguration const&, char const*) @ 0x13df6e2e in /usr/bin/clickhouse
2021.10.05 19:29:27.554149 [ 197 ] {} <Fatal> BaseDaemon: 4. Aws::Internal::STSCredentialsClient::STSCredentialsClient(Aws::Client::ClientConfiguration const&) @ 0x13dfcaa1 in /usr/bin/clickhouse
2021.10.05 19:29:27.554167 [ 197 ] {} <Fatal> BaseDaemon: 5. Aws::Auth::STSAssumeRoleWebIdentityCredentialsProvider::STSAssumeRoleWebIdentityCredentialsProvider() @ 0x13dc0bb3 in /usr/bin/clickhouse
2021.10.05 19:29:27.554183 [ 197 ] {} <Fatal> BaseDaemon: 6. ? @ 0xff46d6a in /usr/bin/clickhouse
2021.10.05 19:29:27.554211 [ 197 ] {} <Fatal> BaseDaemon: 7. DB::S3::ClientFactory::create(DB::S3::PocoHTTPClientConfiguration const&, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::vector<DB::HttpHeader, std::__1::allocator<DB::HttpHeader> >, bool, bool) @ 0xff4037f in /usr/bin/clickhouse
2021.10.05 19:29:27.554244 [ 197 ] {} <Fatal> BaseDaemon: 8. ? @ 0x10638594 in /usr/bin/clickhouse
2021.10.05 19:29:27.554256 [ 197 ] {} <Fatal> BaseDaemon: 9. ? @ 0x10635477 in /usr/bin/clickhouse
2021.10.05 19:29:27.554269 [ 197 ] {} <Fatal> BaseDaemon: 10. ? @ 0x10633c6e in /usr/bin/clickhouse
2021.10.05 19:29:27.554291 [ 197 ] {} <Fatal> BaseDaemon: 11. DB::DiskFactory::create(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context const>, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::IDisk>, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, std::__1::shared_ptr<DB::IDisk> > > > const&) const @ 0x10619691 in /usr/bin/clickhouse
2021.10.05 19:29:27.554319 [ 197 ] {} <Fatal> BaseDaemon: 12. DB::DiskSelector::DiskSelector(Poco::Util::AbstractConfiguration const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context const>) @ 0x10615903 in /usr/bin/clickhouse
2021.10.05 19:29:27.554345 [ 197 ] {} <Fatal> BaseDaemon: 13. void std::__1::allocator<DB::DiskSelector>::construct<DB::DiskSelector, Poco::Util::AbstractConfiguration const&, char const* const&, std::__1::shared_ptr<DB::Context const> >(DB::DiskSelector*, Poco::Util::AbstractConfiguration const&, char const* const&, std::__1::shared_ptr<DB::Context const>&&) @ 0x1072eadf in /usr/bin/clickhouse
2021.10.05 19:29:27.554368 [ 197 ] {} <Fatal> BaseDaemon: 14. DB::Context::getDiskSelector(std::__1::lock_guard<std::__1::mutex>&) const @ 0x1070ed0b in /usr/bin/clickhouse
2021.10.05 19:29:27.554381 [ 197 ] {} <Fatal> BaseDaemon: 15. DB::Context::getDisksMap() const @ 0x1070efcb in /usr/bin/clickhouse
2021.10.05 19:29:27.554401 [ 197 ] {} <Fatal> BaseDaemon: 16. DB::AsynchronousMetrics::update(std::__1::chrono::time_point<std::__1::chrono::system_clock, std::__1::chrono::duration<long long, std::__1::ratio<1l, 1000000l> > >) @ 0x106cb7ca in /usr/bin/clickhouse
2021.10.05 19:29:27.554415 [ 197 ] {} <Fatal> BaseDaemon: 17. DB::AsynchronousMetrics::start() @ 0x106c1aa0 in /usr/bin/clickhouse
2021.10.05 19:29:27.554435 [ 197 ] {} <Fatal> BaseDaemon: 18. DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x93f18f5 in /usr/bin/clickhouse
2021.10.05 19:29:27.554455 [ 197 ] {} <Fatal> BaseDaemon: 19. Poco::Util::Application::run() @ 0x1438e0a3 in /usr/bin/clickhouse
2021.10.05 19:29:27.554477 [ 197 ] {} <Fatal> BaseDaemon: 20. DB::Server::run() @ 0x93e1d6f in /usr/bin/clickhouse
2021.10.05 19:29:27.554495 [ 197 ] {} <Fatal> BaseDaemon: 21. mainEntryClickHouseServer(int, char**) @ 0x93e0113 in /usr/bin/clickhouse
2021.10.05 19:29:27.554513 [ 197 ] {} <Fatal> BaseDaemon: 22. main @ 0x9364b1e in /usr/bin/clickhouse
2021.10.05 19:29:27.554534 [ 197 ] {} <Fatal> BaseDaemon: 23. __libc_start_main @ 0x270b3 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.10.05 19:29:27.554547 [ 197 ] {} <Fatal> BaseDaemon: 24. _start @ 0x932d1ee in /usr/bin/clickhouse
2021.10.05 19:29:27.672810 [ 197 ] {} <Fatal> BaseDaemon: Checksum of the binary: BEA07E96B6BEBA1591FE837CF53C7591, integrity check passed.
2021.10.05 19:29:27.673015 [ 197 ] {} <Information> SentryWriter: Not sending crash report
2021.10.05 19:29:28.460715 [ 66 ] {} <Debug> SystemLog (system.crash_log): Creating new table system.crash_log for CrashLog
2021.10.05 19:29:34.960697 [ 59 ] {} <Debug> SystemLog (system.metric_log): Creating new table system.metric_log for MetricLog
2021.10.05 19:29:34.960808 [ 61 ] {} <Debug> SystemLog (system.trace_log): Creating new table system.trace_log for TraceLog
```
**Additional context**
If I deploy without the storage.xml file the clickhouse-server container will run. I was able to open a bash shell in the container, install the AWS CLI app, and push/pull objects from S3 without issues just using the service account env vars for authentication.
| https://github.com/ClickHouse/ClickHouse/issues/29788 | https://github.com/ClickHouse/ClickHouse/pull/31409 | 27aec2487be622dc9f7bfa60d72588d3b0d2e7a7 | 4856b50c23a37e393783ef3dca0ef9617967f9c2 | "2021-10-05T19:52:38Z" | c++ | "2021-11-19T15:45:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,759 | ["src/Processors/Transforms/buildPushingToViewsChain.cpp", "tests/queries/0_stateless/01549_low_cardinality_mv_fuzz.reference", "tests/queries/0_stateless/01549_low_cardinality_mv_fuzz.sql"] | Logical error: 'Block structure mismatch in Pipes stream: different types: Received signal 6 Received signal Aborted (6) | <https://clickhouse-test-reports.s3.yandex.net/25969/86d31445fd0dd9bc086432f559d1089c3d4a7700/fuzzer_debug/report.html#fail1> | https://github.com/ClickHouse/ClickHouse/issues/29759 | https://github.com/ClickHouse/ClickHouse/pull/39125 | 4a45b37b437ebc602a235c8e319348ad60362bcb | ec24f730b12074446b334eafbf94e7d505cdec6c | "2021-10-05T06:38:56Z" | c++ | "2022-07-12T18:19:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,754 | ["src/Databases/DatabaseOnDisk.cpp"] | system.errors FILE_DOESNT_EXIST initiated by system.tables.create_table_query | ```
select name, value, last_error_message from system.errors where name = 'FILE_DOESNT_EXIST';
0 rows in set. Elapsed: 0.001 sec.
select * from system.tables format Null;
select * from system.tables format Null;
select * from system.tables format Null;
select name, value, last_error_message from system.errors where name = 'FILE_DOESNT_EXIST';
┌─name──────────────┬─value─┬─last_error_message──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ FILE_DOESNT_EXIST │   198 │ Cannot open file /var/lib/clickhouse/store/bff/bff731ca-ed14-404b-bff7-31caed14904b/zeros_mt.sql, errno: 2, strerror: No such file or directory │
└───────────────────┴───────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/29754 | https://github.com/ClickHouse/ClickHouse/pull/29779 | e88005c6f6bdcaa88f3a3c60bc8a45683252d1d5 | dce50c8b8ad125f0442daa76121dc1adf15dc787 | "2021-10-04T23:11:24Z" | c++ | "2021-10-06T18:29:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,729 | ["src/Interpreters/inplaceBlockConversions.cpp", "tests/queries/0_stateless/02053_INSERT_SELECT_MATERIALIZED.reference", "tests/queries/0_stateless/02053_INSERT_SELECT_MATERIALIZED.sql"] | INSERT SELECT incorrectly fills MATERIALIZED column | **Does it reproduce on recent release?**
Reproduced on 21.11.1.1 (master) and 21.9.2
Works as expected on 21.8.2, so it's regression
**How to reproduce**
```
:) create table mt (id Int64, A Nullable(Int64), X Int64 materialized coalesce(A, -1)) engine=MergeTree order by id
:) insert into mt values (1, 42) -- X will be 42 (as expected)
:) insert into mt select 1, 42 -- X will be -1
:) select *, X from mt order by id
┌─id─┬──A─┬──X─┐
│  1 │ 42 │ 42 │
└────┴────┴────┘
┌─id─┬──A─┬──X─┐
│  1 │ 42 │ -1 │
└────┴────┴────┘
```
**Expected behavior**
```
:) select *, X from mt order by id
┌─id─┬──A─┬──X─┐
│  1 │ 42 │ 42 │
└────┴────┴────┘
┌─id─┬──A─┬──X─┐
│  1 │ 42 │ 42 │
└────┴────┴────┘
```
| https://github.com/ClickHouse/ClickHouse/issues/29729 | https://github.com/ClickHouse/ClickHouse/pull/30189 | bc1662b9febd91c9098feed2a7e25f5ff000aeac | 0e228a23693ed71152cc3557db3edfd005f7735e | "2021-10-04T12:43:50Z" | c++ | "2021-10-15T07:48:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,702 | ["src/Interpreters/MutationsInterpreter.cpp", "tests/queries/0_stateless/02131_materialize_column_cast.reference", "tests/queries/0_stateless/02131_materialize_column_cast.sql"] | Logical error when trying to use bloom filter index on `Array(LowCardinality(String))` | **Describe what's wrong**
Get data from #29693.
```
ALTER TABLE hackernews
    ADD COLUMN words Array(LowCardinality(String))
    DEFAULT arraySort(
        arrayDistinct(
            extractAll(
                lower(
                    decodeXMLComponent(
                        extractTextFromHTML(text))),
                '\w+')));
ALTER TABLE hackernews MATERIALIZE COLUMN words;
ALTER TABLE hackernews ADD INDEX words_bf (words) TYPE bloom_filter(0.01) GRANULARITY 1;
ALTER TABLE hackernews MATERIALIZE INDEX words_bf;
SELECT * FROM system.merges \G;
...
SELECT count() FROM hackernews WHERE has(words, 'clickhouse');
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Got empty stream for SerializationLowCardinality keys.: (while reading column words): (while reading from part /var/lib/clickhouse/store/f43/f4347434-b836-44ba-b434-7434b836f4ba/all_1_2874_5_2884/ from mark 46 with max_rows_to_read = 8192): While executing MergeTreeThread. (LOGICAL_ERROR)
```
**Does it reproduce on recent release?**
master. | https://github.com/ClickHouse/ClickHouse/issues/29702 | https://github.com/ClickHouse/ClickHouse/pull/32348 | 3498e13551d7fffdff3079123f199e2658019159 | e0a8c5a4ac854d2a91d2d4930313f8d0d3c92fa1 | "2021-10-04T05:19:05Z" | c++ | "2021-12-08T08:13:17Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,699 | ["src/DataTypes/DataTypeFixedString.h", "src/Functions/ngrams.cpp", "src/Functions/registerFunctions.cpp", "src/Interpreters/ITokenExtractor.cpp", "src/Interpreters/ITokenExtractor.h", "src/Storages/MergeTree/MergeTreeIndexFullText.cpp", "src/Storages/MergeTree/MergeTreeIndexFullText.h", "src/Storages/tests/gtest_SplitTokenExtractor.cpp", "tests/queries/0_stateless/2027_ngrams.reference", "tests/queries/0_stateless/2027_ngrams.sql"] | A function to extract ngrams from string | **Use case**
I want to estimate better parameters for the fulltext (`ngrambf_v1`) index.
**Describe the solution you'd like**
Implement it as an SQL function:
`ngrams(string, N)`
that returns an array of FixedString.
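For illustration, intended usage might look like this (hypothetical output, written before the function exists):
```sql
SELECT ngrams('ClickHouse', 3);
-- expected to return something like ['Cli','lic','ick','ckH','kHo','Hou','ous','use']
```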
**Describe alternatives you've considered**
It can be done in a less efficient and more cumbersome way:
```
WITH 3 AS n, extractAll(text, '.') AS chars
SELECT
    arrayMap(x -> arrayStringConcat(x),
        arrayFilter(x -> length(x) = n,
            -- slice n characters starting at every position, keep only full-length slices
            arrayMap((x, i) -> arraySlice(chars, i, n),
                chars, arrayEnumerate(chars)))) AS ngrams
```
**Additional context**
`ngramsUTF8` can also be implemented to extract ngrams of Unicode code points. | https://github.com/ClickHouse/ClickHouse/issues/29699 | https://github.com/ClickHouse/ClickHouse/pull/29738 | 16ad5953d690036f25dae3c91044abb4e267eb64 | 4ec7311d4d4ee739d28db451bdc7a21cf424c003 | "2021-10-04T04:22:38Z" | c++ | "2021-10-07T07:21:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,698 | ["src/DataTypes/EnumValues.h", "src/Interpreters/Aggregator.h", "src/Storages/AlterCommands.cpp", "src/Storages/ConstraintsDescription.h", "src/Storages/IndicesDescription.cpp", "src/Storages/IndicesDescription.h", "tests/queries/0_stateless/02225_hints_for_indeices.reference", "tests/queries/0_stateless/02225_hints_for_indeices.sh"] | Name hints should work for data skipping indices. | **Use case**
```
Received exception from server (version 21.11.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Wrong index name. Cannot find index `test_pentagram` to drop.. (BAD_ARGUMENTS)
github-explorer.ru-central1.internal :) ALTER TABLE hackernews DROP INDEX text_pentagram
```
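A mock-up of the desired behavior, mirroring the "Maybe you meant" hints that already exist for column names (the wording below is an assumption, not real server output):
```
DB::Exception: Wrong index name. Cannot find index `test_pentagram` to drop. Maybe you meant: ['text_pentagram']. (BAD_ARGUMENTS)
```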
It did not show me the typo. | https://github.com/ClickHouse/ClickHouse/issues/29698 | https://github.com/ClickHouse/ClickHouse/pull/34764 | 5ac8cdbc69c12ab1d95efefbc3c432ee319f4107 | 065305ab65d2733fea463e863eeabb19be0b1c82 | "2021-10-04T04:08:52Z" | c++ | "2022-02-21T13:50:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,695 | ["src/Parsers/ASTAlterQuery.cpp", "tests/queries/0_stateless/01318_alter_add_constraint_format.reference", "tests/queries/0_stateless/02048_alter_command_format.reference", "tests/queries/0_stateless/02048_alter_command_format.sh"] | Strange formatting of ALTER query | **Describe the issue**
```
github-explorer.ru-central1.internal :) ALTER TABLE hackernews MODIFY COLUMN text CODEC(ZSTD), MODIFY COLUMN title CODEC(ZSTD), MODIFY COLUMN url CODEC(ZSTD)
ALTER TABLE hackernews
MODIFY COLUMN `text` CODEC(ZSTD), MODIFY COLUMN `title` CODEC(ZSTD), MODIFY COLUMN `url` CODEC(ZSTD)
```
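Presumably each command should go on its own line, along the lines of (assumed expected output, not actual server behavior):
```
ALTER TABLE hackernews
    MODIFY COLUMN `text` CODEC(ZSTD),
    MODIFY COLUMN `title` CODEC(ZSTD),
    MODIFY COLUMN `url` CODEC(ZSTD)
```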
See that weird indentation without newlines? | https://github.com/ClickHouse/ClickHouse/issues/29695 | https://github.com/ClickHouse/ClickHouse/pull/29916 | 6a0ee3d23ed5710574d134eb76a6989c112c438f | 5802037f1ea35aa2f5c83df3eeb6030cea9f3d74 | "2021-10-04T03:47:10Z" | c++ | "2021-10-14T11:30:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,671 | ["src/Storages/StorageURL.cpp", "tests/integration/test_redirect_url_storage/test.py", "tests/queries/0_stateless/02044_url_glob_parallel.reference", "tests/queries/0_stateless/02044_url_glob_parallel.sh"] | `EXPLAIN` does not work as expected for a query with `url` table function | It performs unneeded requests:
```
EXPLAIN SELECT * FROM url('https://hacker-news.firebaseio.com/v0/item/{1..100}.json', JSONEachRow,
$$
    id UInt32,
    deleted UInt8,
    type Enum('story' = 1, 'comment' = 2, 'poll' = 3, 'pollopt' = 4, 'job' = 5),
    by LowCardinality(String),
    time DateTime,
    text String,
    dead UInt8,
    parent UInt32,
    poll UInt32,
    kids Array(UInt32),
    url String,
    score Int32,
    title String,
    parts Array(UInt32),
    descendants Int32
$$)
``` | https://github.com/ClickHouse/ClickHouse/issues/29671 | https://github.com/ClickHouse/ClickHouse/pull/29673 | 1e1e3a6ad84b3c3f5da516cb255670563854e42a | 5ab17d795fa32f65b5ad3e522a13127cd1d41590 | "2021-10-03T02:44:04Z" | c++ | "2021-10-04T07:23:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,670 | ["src/Storages/StorageURL.cpp", "tests/integration/test_redirect_url_storage/test.py", "tests/queries/0_stateless/02044_url_glob_parallel.reference", "tests/queries/0_stateless/02044_url_glob_parallel.sh"] | `url` table function does not process globs in parallel | ```
SELECT * FROM url('https://hacker-news.firebaseio.com/v0/item/{1..100}.json', JSONEachRow,
$$
    id UInt32,
    deleted UInt8,
    type Enum('story' = 1, 'comment' = 2, 'poll' = 3, 'pollopt' = 4, 'job' = 5),
    by LowCardinality(String),
    time DateTime,
    text String,
    dead UInt8,
    parent UInt32,
    poll UInt32,
    kids Array(UInt32),
    url String,
    score Int32,
    title String,
    parts Array(UInt32),
    descendants Int32
$$)
```
All requests are done from a single thread. | https://github.com/ClickHouse/ClickHouse/issues/29670 | https://github.com/ClickHouse/ClickHouse/pull/29673 | 1e1e3a6ad84b3c3f5da516cb255670563854e42a | 5ab17d795fa32f65b5ad3e522a13127cd1d41590 | "2021-10-03T02:24:41Z" | c++ | "2021-10-04T07:23:52Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,621 | ["src/Interpreters/InterpreterInsertQuery.cpp", "tests/queries/0_stateless/01275_parallel_mv.reference", "tests/queries/0_stateless/01275_parallel_mv.sql"] | Parallel view processing no longer working | **Describe what's wrong**
In master, `parallel_view_processing` doesn't have any effect and the MVs are executed sequentially.
**Does it reproduce on recent release?**
Only master.
Taken from parallel_mv.xml perf test:
```sql
create table main_table (number UInt64) engine = MergeTree order by tuple();
create materialized view mv_1 engine = MergeTree order by tuple() as
select number, toString(number) from main_table where number % 13 != 0;
create materialized view mv_2 engine = MergeTree order by tuple() as
select number, toString(number) from main_table where number % 13 != 1;
create materialized view mv_3 engine = MergeTree order by tuple() as
select number, toString(number) from main_table where number % 13 != 3;
create materialized view mv_4 engine = MergeTree order by tuple() as
select number, toString(number) from main_table where number % 13 != 4;
```
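For reference, the timings below can be reproduced with an insert along these lines (a sketch; the exact row count is an assumption, adjust as needed):
```sql
SET parallel_view_processing = 1;
INSERT INTO main_table SELECT number FROM numbers(10000000);
```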
21.9.4:
* No parallel: 1.333 sec
* With parallel view processing: 0.588 sec

21.10.2.8264:
* No parallel: 1.372 sec
* With parallel view processing: 0.582 sec

21.11.1.8263 from master:
* No parallel: 1.415 sec
* With parallel view processing: 1.423 sec
**Additional context**
Checking the [stats of CI](https://play-ci.clickhouse.com/play?user=play#V0lUSCAwLjA1IEFTIHMKU0VMRUNUIG9sZF9zaGEsIG5ld19zaGEsIGV2ZW50X3RpbWUsIG1lc3NhZ2UsIG9sZF92YWx1ZSBBUyBgb2xkIHNlcnZlcmAsICAgbmV3X3ZhbHVlIEFTIGBuZXcgc2VydmVyYCwgYmVmb3JlIEFTIGBwcmV2IDExIHJ1bnNgLCBhZnRlciBBUyBgbmV4dCAxMSBydW5zYCwgICAgZGlmZiBBUyBgZGlmZiwgcmF0aW9gLCBzdGF0X3RocmVzaG9sZF9oaXN0b3JpY2FsIEFTIGBzdGF0IHRocmVzaG9sZCwgcmF0aW8sIGhpc3RvcmljYWxgLCBzdGF0X3RocmVzaG9sZCBBUyBgc3RhdCB0aHJlc2hvbGQsIHJhdGlvLCBwZXItcnVuYCwgY3B1X21vZGVsLHF1ZXJ5X2Rpc3BsYXlfbmFtZQpGUk9NIAooU0VMRUNUICosIHJ1bl9hdHRyaWJ1dGVzX3YxLnZhbHVlIEFTIGNwdV9tb2RlbCwKICAgICAgICBtZWRpYW4ob2xkX3ZhbHVlKSBPVkVSIChQQVJUSVRJT04gQlkgcnVuX2F0dHJpYnV0ZXNfdjEudmFsdWUsIHRlc3QsIHF1ZXJ5X2luZGV4LCBxdWVyeV9kaXNwbGF5X25hbWUgT1JERVIgQlkgZXZlbnRfZGF0ZSBBU0MgUk9XUyBCRVRXRUVOIDExIFBSRUNFRElORyBBTkQgQ1VSUkVOVCBST1cpIEFTIGJlZm9yZSwKICAgICAgICBtZWRpYW4obmV3X3ZhbHVlKSBPVkVSIChQQVJUSVRJT04gQlkgcnVuX2F0dHJpYnV0ZXNfdjEudmFsdWUsIHRlc3QsIHF1ZXJ5X2luZGV4LCBxdWVyeV9kaXNwbGF5X25hbWUgT1JERVIgQlkgZXZlbnRfZGF0ZSBBU0MgUk9XUyBCRVRXRUVOIENVUlJFTlQgUk9XIEFORCAxMSBGT0xMT1dJTkcpIEFTIGFmdGVyLAogICAgICAgIHF1YW50aWxlRXhhY3QoMC45NSkoYWJzKGRpZmYpKSBPVkVSIChQQVJUSVRJT04gQlkgcnVuX2F0dHJpYnV0ZXNfdjEudmFsdWUsIHRlc3QsIHF1ZXJ5X2luZGV4LCBxdWVyeV9kaXNwbGF5X25hbWUgT1JERVIgQlkgZXZlbnRfZGF0ZSBBU0MgUk9XUyBCRVRXRUVOIDM3IFBSRUNFRElORyBBTkQgQ1VSUkVOVCBST1cpIEFTIHN0YXRfdGhyZXNob2xkX2hpc3RvcmljYWwKICAgIEZST00gcGVyZnRlc3QucXVlcnlfbWV0cmljc192MgogICAgTEVGVCBKT0lOIHBlcmZ0ZXN0LnJ1bl9hdHRyaWJ1dGVzX3YxIFVTSU5HIChvbGRfc2hhLCBuZXdfc2hhKQogICAgV0hFUkUgKGF0dHJpYnV0ZSA9ICdsc2NwdS1tb2RlbC1uYW1lJykgQU5EIChtZXRyaWMgPSAnY2xpZW50X3RpbWUnKQogICAgICAgIC0tIG9ubHkgZm9yIGNvbW1pdHMgaW4gbWFzdGVyCiAgICAgICAgQU5EIChwcl9udW1iZXIgPSAwKQogICAgICAgIC0tIHNlbGVjdCB0aGUgcXVlcmllcyB3ZSBhcmUgaW50ZXJlc3RlZCBpbgogICAgICAgIEFORCAodGVzdCA9ICdwYXJhbGxlbF9tdicpCikgQVMgdApBTlkgTEVGVCBKT0lOIGBnaC1kYXRhYC5jb21taXRzIE9OIG5ld19zaGEgPSBzaGEKV0hFUkUKICAgIC0tIENoZWNrIGZvciBhIHBlcnNpc3RlbnQgYW5kIHNpZ25pZmljYW50IGNoYW5nZSBpbiBxdWVyeSBydW4gdGltZSwgaW50cm9kdWNlZCBieSBhIGNvbW1pdDoKICAgIC0tIDEpIG9uIGEgaGlzdG9yaWNhbCBncmFwaCBvZiBxdWVyeSBydW4gdGltZSwgdGhlcmUgaXMgYSBzdGVwIGJldHdlZW4gdGhlIGFkamFjZW50IGNvbW1pdHMsCiAgICAtLSB0aGF0IGlzIGhpZ2hlciB0aGFuIHRoZSBub3JtYWwgdmFyaWFuY2UsCiAgICAoKChhYnMoYWZ0ZXIgLSBiZWZvcmUpIC8gaWYoYWZ0ZXIgPiBiZWZvcmUsIGFmdGVyLCBiZWZvcmUpKSBBUyBzdGVwX2hlaWdodCkgPj0gZ3JlYXRlc3Qocywgc3RhdF90aHJlc2hvbGRfaGlzdG9yaWNhbCkpCiAgICAtLSAyKSBpbiBzaWRlLXRvLXNpZGUgY29tcGFyaXNvbiBvZiB0aGVzZSB0d28gY29tbWl0cywgdGhlcmUgd2FzIGEgc3RhdGlzdGljYWxseSBzaWduaWZpY2FudCBkaWZmZXJlbmNlCiAgICAtLSB0aGF0IGlzIGFsc28gaGlnaGVyIHRoYW4gdGhlIG5vcm1hbCB2YXJpYW5jZSwKICAgICAgICBBTkQgKGFicyhkaWZmKSA+PSBncmVhdGVzdChzdGF0X3RocmVzaG9sZCwgc3RhdF90aHJlc2hvbGRfaGlzdG9yaWNhbCwgcykpCiAgICAtLSAzKSBmaW5hbGx5LCB0aGlzIHNpZGUtdG8tc2lkZSBkaWZmZXJlbmNlIGlzIG9mIG1hZ25pdHVkZSBjb21wYXJhYmxlIHRvIHRoZSBzdGVwIGluIGhpc3RvcmljYWwgZ3JhcGhzLgogICAgICAgIEFORCAoYWJzKGRpZmYpID49ICgwLjcgKiBzdGVwX2hlaWdodCkpCm9yZGVyIGJ5IGV2ZW50X3RpbWUgZGVzYwpmb3JtYXQgVmVydGljYWwKCgo=), it points to #28582 as the culprit.
| https://github.com/ClickHouse/ClickHouse/issues/29621 | https://github.com/ClickHouse/ClickHouse/pull/29786 | 78e1db209f5527479a1947a2c3c441b56e00617e | 4ab2e2745bec13e9c9888c343706dc9d46a7516a | "2021-10-01T13:00:38Z" | c++ | "2021-10-07T09:15:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,618 | ["src/Processors/Executors/ExecutingGraph.cpp", "src/Processors/Executors/ExecutingGraph.h"] | Hedged requests hang in rare cases | https://clickhouse-test-reports.s3.yandex.net/0/2517f70d7532fc6aca4428f8de54aeae8ab1bf7d/stress_test_(debug).html#fail1
```
Found hung queries in processlist:
Row 1:
ββββββ
is_initial_query: 1
user: default
query_id: 76d2f16e-5dc9-42cb-895e-2dc8edbe1c08
address: ::1
port: 36112
initial_user: default
initial_query_id: 76d2f16e-5dc9-42cb-895e-2dc8edbe1c08
initial_address: ::1
initial_port: 36112
interface: 1
os_user:
client_hostname: 27c20d53754c
client_name: ClickHouse
client_revision: 54449
client_version_major: 21
client_version_minor: 11
client_version_patch: 1
http_method: 0
http_user_agent:
http_referer:
forwarded_for:
quota_key:
elapsed: 3884.86343185
is_cancelled: 0
read_rows: 0
read_bytes: 0
total_rows_approx: 0
written_rows: 0
written_bytes: 0
memory_usage: 158327769
peak_memory_usage: 157278873
query: SELECT sum(UserID GLOBAL IN (SELECT UserID FROM remote('127.0.0.{1,2}', test.hits))) FROM remote('127.0.0.{1,2}', test.hits);
thread_ids: [1045,3287,3074,3061,3126,13797,13985,15448,3279,15447,14006,8032,2684,2695,3259,15450,3121,13791]
ProfileEvents: {'Query':1,'SelectQuery':1,'FileOpen':1,'Seek':15,'ReadBufferFromFileDescriptorRead':17,'ReadBufferFromFileDescriptorReadBytes':16375089,'ReadCompressedBytes':21697578,'CompressedReadBufferBlocks':1226,'CompressedReadBufferBytes':135426010,'IOBufferAllocs':36,'IOBufferAllocBytes':20571045,'TableFunctionExecute':2,'MarkCacheHits':16,'CreatedReadBufferOrdinary':16,'DiskReadElapsedMicroseconds':128000,'NetworkReceiveElapsedMicroseconds':9956,'NetworkSendElapsedMicroseconds':148,'NetworkReceiveBytes':10573432,'NetworkSendBytes':1064,'HedgedRequestsChangeReplica':1,'SelectedParts':1,'SelectedRanges':1,'SelectedMarks':1084,'SelectedRows':25047187,'SelectedBytes':200377496,'ContextLock':334,'RWLockAcquiredReadLocks':4,'RWLockReadersWaitMilliseconds':4,'CannotWriteToWriteBufferDiscard':36408,'QueryProfilerSignalOverruns':2920}
Settings: {'connect_timeout_with_failover_ms':'2000','connect_timeout_with_failover_secure_ms':'3000','idle_connection_timeout':'36000','replication_wait_for_inactive_replica_timeout':'30','load_balancing':'random','log_queries':'1','insert_quorum_timeout':'60000','http_send_timeout':'60','http_receive_timeout':'60','opentelemetry_start_trace_probability':'0.1','max_memory_usage':'10000000000','max_untracked_memory':'1048576','memory_profiler_step':'1048576','log_comment':'00147_global_in_aggregate_function.sql','send_logs_level':'warning','database_atomic_wait_for_drop_and_detach_synchronously':'1','allow_experimental_database_replicated':'1','async_insert_busy_timeout_ms':'5000'}
current_database: test_0s7gn7
Thread 247 (Thread 0x7fc30c2af700 (LWP 1045)):
#0 __syscall () at ../base/glibc-compatibility/musl/x86_64/syscall.s:14
#1 0x0000000026a70a44 in epoll_pwait (fd=1833, ev=0x7fc30c29c920, cnt=1, to=-1, sigs=0x0) at ../base/glibc-compatibility/musl/epoll.c:27
#2 0x0000000026a70ace in epoll_wait (fd=1833, ev=0x7fc30c29c920, cnt=1, to=-1) at ../base/glibc-compatibility/musl/epoll.c:36
#3 0x000000001dd1cca4 in DB::Epoll::getManyReady (this=0x7fc30c29cf18, max_events=1, events_out=0x7fc30c29c920, blocking=true) at ../src/Common/Epoll.cpp:69
#4 0x000000001f91784c in DB::PollingQueue::wait (this=0x7fc30c29cf18, lock=...) at ../src/Processors/Executors/PollingQueue.cpp:73
#5 0x000000001f8fdbdd in DB::PipelineExecutor::executeImpl (this=0x7fc30c29cec0, num_threads=17) at ../src/Processors/Executors/PipelineExecutor.cpp:805
#6 0x000000001f8fd113 in DB::PipelineExecutor::execute (this=0x7fc30c29cec0, num_threads=17) at ../src/Processors/Executors/PipelineExecutor.cpp:407
#7 0x000000001f8f862e in DB::CompletedPipelineExecutor::execute (this=0x7fc30c29d238) at ../src/Processors/Executors/CompletedPipelineExecutor.cpp:99
#8 0x000000001e2674f2 in DB::GlobalSubqueriesMatcher::Data::addExternalStorage (this=0x7fc30c29ddb0, ast=..., set_alias=false) at ../src/Interpreters/GlobalSubqueriesVisitor.h:164
#9 0x000000001e266592 in DB::GlobalSubqueriesMatcher::visit (func=..., data=...) at ../src/Interpreters/GlobalSubqueriesVisitor.h:220
#10 0x000000001e266201 in DB::GlobalSubqueriesMatcher::visit (ast=..., data=...) at ../src/Interpreters/GlobalSubqueriesVisitor.h:184
#11 0x000000001e256ec1 in DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visit (this=0x7fc30c29dd78, ast=...) at ../src/Interpreters/InDepthNodeVisitor.h:34
#12 0x000000001e266197 in DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visitChildren (this=0x7fc30c29dd78, ast=...) at ../src/Interpreters/InDepthNodeVisitor.h:62
#13 0x000000001e256ea9 in DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visit (this=0x7fc30c29dd78, ast=...) at ../src/Interpreters/InDepthNodeVisitor.h:30
#14 0x000000001e266197 in DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visitChildren (this=0x7fc30c29dd78, ast=...) at ../src/Interpreters/InDepthNodeVisitor.h:62
#15 0x000000001e256ea9 in DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visit (this=0x7fc30c29dd78, ast=...) at ../src/Interpreters/InDepthNodeVisitor.h:30
#16 0x000000001e266197 in DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visitChildren (this=0x7fc30c29dd78, ast=...) at ../src/Interpreters/InDepthNodeVisitor.h:62
#17 0x000000001e256ea9 in DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visit (this=0x7fc30c29dd78, ast=...) at ../src/Interpreters/InDepthNodeVisitor.h:30
#18 0x000000001e266197 in DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visitChildren (this=0x7fc30c29dd78, ast=...) at ../src/Interpreters/InDepthNodeVisitor.h:62
#19 0x000000001e256ea9 in DB::InDepthNodeVisitor<DB::GlobalSubqueriesMatcher, false, false, std::__1::shared_ptr<DB::IAST> >::visit (this=0x7fc30c29dd78, ast=...) at ../src/Interpreters/InDepthNodeVisitor.h:30
#20 0x000000001e245cb4 in DB::ExpressionAnalyzer::initGlobalSubqueriesAndExternalTables (this=0x7fc387c61c00, do_global=true) at ../src/Interpreters/ExpressionAnalyzer.cpp:360
#21 0x000000001e2459a5 in DB::ExpressionAnalyzer::ExpressionAnalyzer (this=0x7fc387c61c00, query_=..., syntax_analyzer_result_=..., context_=..., subquery_depth_=0, do_global=true, subqueries_for_sets_=..., prepared_sets_=...) at ../src/Interpreters/ExpressionAnalyzer.cpp:153
#22 0x000000001e6581f7 in DB::SelectQueryExpressionAnalyzer::SelectQueryExpressionAnalyzer (this=0x7fc387c61c00, query_=..., syntax_analyzer_result_=..., context_=..., metadata_snapshot_=..., required_result_columns_=..., do_global_=true, options_=..., subqueries_for_sets_=..., prepared_sets_=...) at ../src/Interpreters/ExpressionAnalyzer.h:303
#23 0x000000001e6601c1 in std::__1::make_unique<DB::SelectQueryExpressionAnalyzer, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::TreeRewriterResult const>&, std::__1::shared_ptr<DB::Context>&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const>&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > >, bool, DB::SelectQueryOptions&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, DB::SubqueryForSet, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, DB::SubqueryForSet> > >, std::__1::unordered_map<DB::PreparedSetKey, std::__1::shared_ptr<DB::Set>, DB::PreparedSetKey::Hash, std::__1::equal_to<DB::PreparedSetKey>, std::__1::allocator<std::__1::pair<DB::PreparedSetKey const, std::__1::shared_ptr<DB::Set> > > > > (__args=..., __args=..., __args=..., __args=..., __args=..., __args=..., __args=..., __args=..., __args=...) at ../contrib/libcxx/include/memory:2068
#24 0x000000001e6418fb in DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&)::$_1::operator()(bool) const (this=0x7fc30c2a0ad8, try_move_to_prewhere=true) at ../src/Interpreters/InterpreterSelectQuery.cpp:439
#25 0x000000001e63ee0d in DB::InterpreterSelectQuery::InterpreterSelectQuery (this=0x7fc380aa2800, query_ptr_=..., context_=..., input_=..., input_pipe_=..., storage_=..., options_=..., required_result_column_names=..., metadata_snapshot_=...) at ../src/Interpreters/InterpreterSelectQuery.cpp:513
#26 0x000000001e63da7f in DB::InterpreterSelectQuery::InterpreterSelectQuery (this=0x7fc380aa2800, query_ptr_=..., context_=..., options_=..., required_result_column_names_=...) at ../src/Interpreters/InterpreterSelectQuery.cpp:160
#27 0x000000001e9a6432 in std::__1::make_unique<DB::InterpreterSelectQuery, std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&> (__args=..., __args=..., __args=..., __args=...) at ../contrib/libcxx/include/memory:2068
#28 0x000000001e9a47e9 in DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter (this=0x7fc330947820, ast_ptr_=..., current_required_result_column_names=...) at ../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:216
#29 0x000000001e9a403d in DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery (this=0x7fc330947820, query_ptr_=..., context_=..., options_=..., required_result_column_names=...) at ../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:138
#30 0x000000001e5a7c7d in std::__1::make_unique<DB::InterpreterSelectWithUnionQuery, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions const&> (__args=..., __args=..., __args=...) at ../contrib/libcxx/include/memory:2068
#31 0x000000001e5a6235 in DB::InterpreterFactory::get (query=..., context=..., options=...) at ../src/Interpreters/InterpreterFactory.cpp:118
#32 0x000000001ec60ae0 in DB::executeQueryImpl (begin=0x7fc3a50c1300 "SELECT sum(UserID GLOBAL IN (SELECT UserID FROM remote('127.0.0.{1,2}', test.hits))) FROM remote('127.0.0.{1,2}', test.hits);", end=0x7fc3a50c137d "", context=..., internal=false, stage=DB::QueryProcessingStage::Complete, istr=0x0) at ../src/Interpreters/executeQuery.cpp:605
#33 0x000000001ec5ea64 in DB::executeQuery (query=..., context=..., internal=false, stage=DB::QueryProcessingStage::Complete) at ../src/Interpreters/executeQuery.cpp:950
#34 0x000000001f835ca6 in DB::TCPHandler::runImpl (this=0x7fc3a3797000) at ../src/Server/TCPHandler.cpp:292
#35 0x000000001f842f65 in DB::TCPHandler::run (this=0x7fc3a3797000) at ../src/Server/TCPHandler.cpp:1628
#36 0x0000000023c86b59 in Poco::Net::TCPServerConnection::start (this=0x7fc3a3797000) at ../contrib/poco/Net/src/TCPServerConnection.cpp:43
#37 0x0000000023c87368 in Poco::Net::TCPServerDispatcher::run (this=0x7fc3a4ebb700) at ../contrib/poco/Net/src/TCPServerDispatcher.cpp:115
#38 0x0000000023dd5294 in Poco::PooledThread::run (this=0x7fc3a2706180) at ../contrib/poco/Foundation/src/ThreadPool.cpp:199
#39 0x0000000023dd1d7a in Poco::(anonymous namespace)::RunnableHolder::run (this=0x7fc3a26233f0) at ../contrib/poco/Foundation/src/Thread.cpp:55
#40 0x0000000023dd0b5c in Poco::ThreadImpl::runnableEntry (pThread=0x7fc3a27061b8) at ../contrib/poco/Foundation/src/Thread_POSIX.cpp:345
#41 0x00007fc46877e609 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#42 0x00007fc468694293 in clone () from /lib/x86_64-linux-gnu/libc.so.6
```
Label is 'bug', because similar issues were seen in Yandex.Metrica
cc: @Avogar | https://github.com/ClickHouse/ClickHouse/issues/29618 | https://github.com/ClickHouse/ClickHouse/pull/42874 | bb507356ef33ae55f094cadfbd8ad78e7bff51b0 | 88033562cd3906537cdf60004479495ccedd3061 | "2021-10-01T12:04:46Z" | c++ | "2022-11-08T09:52:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,616 | ["src/Storages/MergeTree/IMergeTreeDataPart.cpp", "src/Storages/MergeTree/IMergeTreeDataPart.h", "src/Storages/StorageReplicatedMergeTree.cpp"] | Temporary directory for merged part already exists | https://clickhouse-test-reports.s3.yandex.net/29602/5de80a057fad28ea6b847772b1b3c23fda17b7d8/functional_stateless_tests_(release,_wide_parts_enabled).html#fail1
```
2021-10-01 15:24:10 00993_system_parts_race_condition_drop_zookeeper: [ FAIL ] 457.50 sec. - having stderror:
2021-10-01 15:24:10 [6159e324f136] 2021.10.01 15:24:07.628355 [ 7880 ] {c0be8997-5b0e-44bc-ac55-798d367ffd37} <Error> InterpreterSystemQuery: SYNC REPLICA test_azebrg.alter_table_1 (1903d83c-7184-4ad8-9903-d83c71841ad8): Timed out!
2021-10-01 15:24:10 [6159e324f136] 2021.10.01 15:24:07.628641 [ 7880 ] {c0be8997-5b0e-44bc-ac55-798d367ffd37} <Error> executeQuery: Code: 159. DB::Exception: SYNC REPLICA test_azebrg.alter_table_1 (1903d83c-7184-4ad8-9903-d83c71841ad8): command timed out. See the 'receive_timeout' setting. (TIMEOUT_EXCEEDED) (version 21.11.1.8265) (from [::1]:41856) (comment: '/usr/share/clickhouse-test/queries/0_stateless/00993_system_parts_race_condition_drop_zookeeper.sh') (in query: SYSTEM SYNC REPLICA alter_table_1), Stack trace (when copying this message, always include the lines below):
2021-10-01 15:24:10
2021-10-01 15:24:10 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x9528ed4 in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 1. DB::InterpreterSystemQuery::syncReplica(DB::ASTSystemQuery&) @ 0x1142c73e in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 2. DB::InterpreterSystemQuery::execute() @ 0x11426ee5 in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 3. DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x1163ca69 in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 4. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x1163a9f3 in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 5. DB::TCPHandler::runImpl() @ 0x11f82324 in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 6. DB::TCPHandler::run() @ 0x11f91239 in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 7. Poco::Net::TCPServerConnection::start() @ 0x14bb1def in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 8. Poco::Net::TCPServerDispatcher::run() @ 0x14bb41e1 in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 9. Poco::PooledThread::run() @ 0x14cc8989 in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 10. Poco::ThreadImpl::runnableEntry(void*) @ 0x14cc60c0 in /usr/lib/debug/.build-id/97/659433c706eeb9bc1b9cfa879289b90bb497df.debug
2021-10-01 15:24:10 11. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021-10-01 15:24:10 12. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021-10-01 15:24:10
2021-10-01 15:24:10 Received exception from server (version 21.11.1):
2021-10-01 15:24:10 Code: 159. DB::Exception: Received from localhost:9000. DB::Exception: SYNC REPLICA test_azebrg.alter_table_1 (1903d83c-7184-4ad8-9903-d83c71841ad8): command timed out. See the 'receive_timeout' setting. (TIMEOUT_EXCEEDED)
2021-10-01 15:24:10 (query: SYSTEM SYNC REPLICA alter_table_1)
2021-10-01 15:24:10
2021-10-01 15:24:10
2021-10-01 15:24:10 stdout:
2021-10-01 15:24:10 sync failed, queue: default alter_table_1 r_1 0 queue-0000000252 MERGE_PARTS 2021-10-01 15:16:34 0 r_7 -4_0_3_1_2 ['-4_0_0_0_2','-4_1_1_0_2','-4_3_3_0'] 0 0 5008 Code: 84. DB::Exception: Directory /var/lib/clickhouse/store/190/1903d83c-7184-4ad8-9903-d83c71841ad8/tmp_merge_-4_0_3_1_2/ already exists. (DIRECTORY_ALREADY_EXISTS) (version 21.11.1.8265) 2021-10-01 15:24:06 0 1970-01-01 09:00:00 REGULAR
2021-10-01 15:24:10 Replication did not hang: synced all replicas of alter_table_
2021-10-01 15:24:10 Consistency: 1
2021-10-01 15:24:10
2021-10-01 15:24:10
2021-10-01 15:24:10 Database: test_azebrg
``` | https://github.com/ClickHouse/ClickHouse/issues/29616 | https://github.com/ClickHouse/ClickHouse/pull/32201 | 9867d75fecb48e82605fb8eeeea095ebd83062ce | 25427719d40e521846187e68294f9141ed037327 | "2021-10-01T11:41:51Z" | c++ | "2021-12-10T13:29:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,585 | ["tests/queries/0_stateless/02042_map_get_non_const_key.reference", "tests/queries/0_stateless/02042_map_get_non_const_key.sql"] | Map dynamic keys / map[x] | ```sql
SELECT map[x]
FROM
(
    SELECT
        materialize('key') x,
        CAST((['key'], ['value']), 'Map(String, String)') AS map
)
DB::Exception: Illegal column Const(Map(String, String)) of first argument of function arrayElement. (ILLEGAL_COLUMN)
```
Now I have to do
```sql
SELECT mapValues(map)[indexOf(mapKeys(map), x)]
FROM
(
    SELECT
        materialize('key') AS x,
        CAST((['key'], ['value']), 'Map(String, String)') AS map
)

┌─arrayElement(mapValues(map), indexOf(mapKeys(map), x))─┐
│ value                                                   │
└─────────────────────────────────────────────────────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/29585 | https://github.com/ClickHouse/ClickHouse/pull/29636 | b2f7b6fae87ac9ea6213331b8a0c989dd86b913b | 94117f83adc0029983fe3cbdafb33aa1bc075114 | "2021-09-30T18:58:20Z" | c++ | "2021-10-01T16:27:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,584 | [".gitmodules", "contrib/boringssl", "contrib/boringssl-cmake/CMakeLists.txt"] | LetsEncrypt Root Certificate (DST_Root_CA_X3) expiration issue. | **Describe what's wrong**
The URL table function / engine, etc. refuse to work after September 30, 2021 with certificates signed by Let's Encrypt.
**Does it reproduce on recent release?**
Yes.
**How to reproduce**
ClickHouse 21.10
```
localdomain :) SELECT * FROM url('https://letsencrypt.org/', 'RawBLOB', 'x String');
SELECT *
FROM url('https://letsencrypt.org/', 'RawBLOB', 'x String')
0 rows in set. Elapsed: 0.156 sec.
Received exception from server (version 21.10.1):
Code: 1000. DB::Exception: Received from localhost:9000. DB::Exception: SSL Exception: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED. (POCO_EXCEPTION)
```
**Expected behavior**
Query works.
**Fix**
For Ubuntu/Debian:
```
In the file /etc/ca-certificates.conf, find the row:
mozilla/DST_Root_CA_X3.crt
Prepend "!" to that row:
!mozilla/DST_Root_CA_X3.crt
Run the command:
sudo update-ca-certificates
Restart the ClickHouse server:
sudo service clickhouse-server restart
```
For CentOS:
```
Run commands:
trust dump --filter "pkcs11:id=%c4%a7%b1%a4%7b%2c%71%fa%db%e1%4b%90%75%ff%c4%15%60%85%89%10" | openssl x509 | sudo tee /etc/pki/ca-trust/source/blacklist/DST-Root-CA-X3.pem
sudo update-ca-trust
Restart the ClickHouse server
```
For Docker containers and k8s:
Temporary workaround:
Allow invalid certificates via config file:
```
nano /etc/clickhouse-server/config.d/ssl_fix.xml
<yandex>
    <openSSL>
        <client> <!-- Used for connecting to https dictionary source and secured Zookeeper communication -->
            <invalidCertificateHandler>
                <!-- Use for self-signed: RejectCertificateHandler <name>AcceptCertificateHandler</name> -->
                <name>AcceptCertificateHandler</name>
            </invalidCertificateHandler>
        </client>
    </openSSL>
</yandex>
```
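Whichever fix is applied, the repro query from the top of this report can be re-run to verify:
```sql
SELECT * FROM url('https://letsencrypt.org/', 'RawBLOB', 'x String');
```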
**Credits**
https://habr.com/ru/post/580092/ | https://github.com/ClickHouse/ClickHouse/issues/29584 | https://github.com/ClickHouse/ClickHouse/pull/29998 | 50b54b37ca4f2f34ac73f9dce94bdfe758f138b0 | 9ffbb4848671cee0521895dc16c311659a1848c6 | "2021-09-30T18:34:57Z" | c++ | "2021-10-15T21:46:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,580 | ["src/Core/ProtocolDefines.h", "src/Interpreters/ClientInfo.h", "src/Interpreters/InterpreterSelectQuery.cpp", "src/Server/TCPHandler.cpp", "tests/integration/test_backward_compatibility/test_aggregate_fixed_key.py"] | Multiple rows returned for the same `GROUP BY` key when mixing different ClickHouse versions (all pre v21.3 LTS and post v21.4, up to latest released 21.10) | The following integration test describes the issue more than enough. The test starts 3 nodes to cover all possible cases.
* _root -> leaf_
* old server -> new server
* new server -> old server
* new server -> new server
git bisecting leads to https://github.com/ClickHouse/ClickHouse/commit/64ea1f82989ad45555629759b6f395804b12c864
<details>
<pre>
git bisect start
# good: [5bdc57004682a5e0236ec630546d20ad752c2fde] Improve performance of GROUP BY multiple fixed size keys
git bisect good 5bdc57004682a5e0236ec630546d20ad752c2fde
# bad: [545528917fd7700be0f6c582be970dbd23feeab5] Fix tests.
git bisect bad 545528917fd7700be0f6c582be970dbd23feeab5
# good: [74a07e406b7199dc5aa7804f5e5c63f6477118de] Even more stable
git bisect good 74a07e406b7199dc5aa7804f5e5c63f6477118de
# good: [c8da611fcd5c454431b49a407df36fa4ff745b9b] Merge pull request #21023 from ClickHouse/fix-datetime64-formatting
git bisect good c8da611fcd5c454431b49a407df36fa4ff745b9b
# good: [6dc683dce6af72239793906c633832de2386448e] Merge pull request #19815 from otrazhenia/evgsudarikova-DOCSUP-6149
git bisect good 6dc683dce6af72239793906c633832de2386448e
# good: [8f81dce32f6eebf448bd8a65ad4192ac746cc66f] Merge pull request #20585 from ClickHouse/persistent_nukeeper_log_storage
git bisect good 8f81dce32f6eebf448bd8a65ad4192ac746cc66f
# good: [3feded8d0cb562b7d0ed7a8c4bd4939f2524301c] Create type-conversion-functions.md
git bisect good 3feded8d0cb562b7d0ed7a8c4bd4939f2524301c
# good: [3cda69feaf1295333a1dc2f4030730bd3edbb425] Merge pull request #20632 from ClickHouse/akz/mysqlxx-randomize-replicas
git bisect good 3cda69feaf1295333a1dc2f4030730bd3edbb425
# good: [994b998df9863e772b438a858a2cdabdb2ce27ea] Update docs/ru/sql-reference/operators/in.md
git bisect good 994b998df9863e772b438a858a2cdabdb2ce27ea
# good: [802e5e725b744fe608e55aaa6456ea3e8989fe83] Merge pull request #19965 from ka1bi4/romanzhukov-DOCSUP-5822-update-accurateCastOrNull
git bisect good 802e5e725b744fe608e55aaa6456ea3e8989fe83
# bad: [2bf533630c7a70232b1615e74cca9d8c699c7de0] Fix tests.
git bisect bad 2bf533630c7a70232b1615e74cca9d8c699c7de0
# bad: [64ea1f82989ad45555629759b6f395804b12c864] Save packet keys.
git bisect bad 64ea1f82989ad45555629759b6f395804b12c864
# first bad commit: [64ea1f82989ad45555629759b6f395804b12c864] Save packet keys.
</pre>
</details>
I tried to simplify the test, but even minor changes make it pass again.
```py
import pytest
from helpers.cluster import ClickHouseCluster

cluster = ClickHouseCluster(__file__)
node1 = cluster.add_instance('node1', with_zookeeper=True, image='yandex/clickhouse-server', tag='20.8', with_installed_binary=True)
node2 = cluster.add_instance('node2', with_zookeeper=True, image='yandex/clickhouse-server')
node3 = cluster.add_instance('node3', with_zookeeper=True, image='yandex/clickhouse-server')


@pytest.fixture(scope="module")
def start_cluster():
    try:
        cluster.start()
        yield cluster
    finally:
        cluster.shutdown()


def test_two_level_merge(start_cluster):
    for node in start_cluster.instances.values():
        node.query(
            """
            CREATE TABLE IF NOT EXISTS test_two_level_merge(date Date, zone UInt32, number UInt32)
            ENGINE = MergeTree() PARTITION BY toUInt64(number / 1000) ORDER BY tuple();

            INSERT INTO
                test_two_level_merge
            SELECT
                toDate('2021-09-28') - number / 1000,
                249081628,
                number
            FROM
                numbers(15000);
            """
        )

    # covers only the keys64 method
    for node in start_cluster.instances.values():
        print(node.query(
            """
            SELECT
                throwIf(uniqExact(date) != count(), 'group by is borked')
            FROM (
                SELECT
                    date
                FROM
                    remote('node{1,2}', default.test_two_level_merge)
                WHERE
                    date BETWEEN toDate('2021-09-20') AND toDate('2021-09-28')
                    AND zone = 249081628
                GROUP by date, zone
            )
            SETTINGS
                group_by_two_level_threshold = 1,
                group_by_two_level_threshold_bytes = 1,
                max_threads = 2,
                prefer_localhost_replica = 0
            """
        ))
```
cc @KochetovNicolai
One considered fix is to just bump `DBMS_MIN_REVISION_WITH_CURRENT_AGGREGATION_VARIANT_SELECTION_METHOD` and also handle it in TCPHandler. This will introduce some performance degradation during upgrade. | https://github.com/ClickHouse/ClickHouse/issues/29580 | https://github.com/ClickHouse/ClickHouse/pull/29735 | 36c3b1d5b1cb20f9c9401f12b902a818674a1f3c | b549bddc11081641ca5523308406a587c7fb5753 | "2021-09-30T17:27:33Z" | c++ | "2021-10-26T13:31:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,541 | ["tests/queries/0_stateless/01081_PartialSortingTransform_full_column.sql"] | Block structure mismatch in function connect between ConvertingTransform and ReplicatedMergeTreeSink stream | https://clickhouse-test-reports.s3.yandex.net/0/70dc235287b3e0e93633ba9e7fc0ed17cdb29a02/functional_stateless_tests_(debug).html#fail1
(also there is double whitespace in the exception message)
```
2021.09.29 23:46:57.810106 [ 18428 ] {e6fe5752-84da-4734-9015-efa636666d67} <Fatal> : Logical error: 'Block structure mismatch in function connect between ConvertingTransform and ReplicatedMergeTreeSink stream: different number of columns:
2021.09.29 23:46:57.812360 [ 20899 ] {} <Fatal> BaseDaemon: ########################################
2021.09.29 23:46:57.812996 [ 20899 ] {} <Fatal> BaseDaemon: (version 21.11.1.8246 (official build), build id: 380C8D80C1FF3D22EDF5D27559D997ED95951F4D) (from thread 18428) (query_id: e6fe5752-84da-4734-9015-efa636666d67) Received signal Aborted (6)
2021.09.29 23:46:57.813549 [ 20899 ] {} <Fatal> BaseDaemon:
2021.09.29 23:46:57.814097 [ 20899 ] {} <Fatal> BaseDaemon: Stack trace: 0x7fb32588e18b 0x7fb32586d859 0x13339638 0x13339742 0x1d9ebf31 0x1d9ea69c 0x1d9ea92a 0x1f7ca42c 0x1f799972 0x1e51e67f 0x1e51ffce 0x1eb7b6bb 0x1eb79284 0x1f74fe06 0x1f75d0c5 0x23ba0b19 0x23ba1328 0x23cef254 0x23cebd3a 0x23ceab1c 0x7fb325a54609 0x7fb32596a293
2021.09.29 23:46:57.814838 [ 20899 ] {} <Fatal> BaseDaemon: 4. gsignal @ 0x4618b in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.09.29 23:46:57.815053 [ 20899 ] {} <Fatal> BaseDaemon: 5. abort @ 0x25859 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.09.29 23:46:57.917025 [ 20899 ] {} <Fatal> BaseDaemon: 6. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:53: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x13339638 in /usr/bin/clickhouse
2021.09.29 23:46:58.005176 [ 20899 ] {} <Fatal> BaseDaemon: 7. ./obj-x86_64-linux-gnu/../src/Common/Exception.cpp:60: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x13339742 in /usr/bin/clickhouse
2021.09.29 23:46:58.253168 [ 20899 ] {} <Fatal> BaseDaemon: 8. ./obj-x86_64-linux-gnu/../src/Core/Block.cpp:32: void DB::onError<void>(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1d9ebf31 in /usr/bin/clickhouse
2021.09.29 23:46:58.486682 [ 20899 ] {} <Fatal> BaseDaemon: 9. ./obj-x86_64-linux-gnu/../src/Core/Block.cpp:86: void DB::checkBlockStructure<void>(DB::Block const&, DB::Block const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool) @ 0x1d9ea69c in /usr/bin/clickhouse
2021.09.29 23:46:58.720603 [ 20899 ] {} <Fatal> BaseDaemon: 10. ./obj-x86_64-linux-gnu/../src/Core/Block.cpp:607: DB::assertCompatibleHeader(DB::Block const&, DB::Block const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1d9ea92a in /usr/bin/clickhouse
2021.09.29 23:46:58.822850 [ 20899 ] {} <Fatal> BaseDaemon: 11. ./obj-x86_64-linux-gnu/../src/Processors/Port.cpp:19: DB::connect(DB::OutputPort&, DB::InputPort&) @ 0x1f7ca42c in /usr/bin/clickhouse
2021.09.29 23:46:58.929410 [ 20899 ] {} <Fatal> BaseDaemon: 12. ./obj-x86_64-linux-gnu/../src/Processors/Chain.cpp:81: DB::Chain::addSource(std::__1::shared_ptr<DB::IProcessor>) @ 0x1f799972 in /usr/bin/clickhouse
2021.09.29 23:46:59.288941 [ 20899 ] {} <Fatal> BaseDaemon: 13. ./obj-x86_64-linux-gnu/../src/Interpreters/InterpreterInsertQuery.cpp:222: DB::InterpreterInsertQuery::buildChainImpl(std::__1::shared_ptr<DB::IStorage> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::Block const&, DB::ThreadStatus*, std::__1::atomic<unsigned long>*) @ 0x1e51e67f in /usr/bin/clickhouse
2021.09.29 23:46:59.627124 [ 20899 ] {} <Fatal> BaseDaemon: 14. ./obj-x86_64-linux-gnu/../src/Interpreters/InterpreterInsertQuery.cpp:368: DB::InterpreterInsertQuery::execute() @ 0x1e51ffce in /usr/bin/clickhouse
2021.09.29 23:46:59.965561 [ 20899 ] {} <Fatal> BaseDaemon: 15. ./obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:635: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x1eb7b6bb in /usr/bin/clickhouse
2021.09.29 23:47:00.327484 [ 20899 ] {} <Fatal> BaseDaemon: 16. ./obj-x86_64-linux-gnu/../src/Interpreters/executeQuery.cpp:950: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x1eb79284 in /usr/bin/clickhouse
2021.09.29 23:47:00.632069 [ 20899 ] {} <Fatal> BaseDaemon: 17. ./obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:292: DB::TCPHandler::runImpl() @ 0x1f74fe06 in /usr/bin/clickhouse
2021.09.29 23:47:00.993655 [ 20899 ] {} <Fatal> BaseDaemon: 18. ./obj-x86_64-linux-gnu/../src/Server/TCPHandler.cpp:1628: DB::TCPHandler::run() @ 0x1f75d0c5 in /usr/bin/clickhouse
2021.09.29 23:47:01.075840 [ 20899 ] {} <Fatal> BaseDaemon: 19. ./obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x23ba0b19 in /usr/bin/clickhouse
2021.09.29 23:47:01.177965 [ 20899 ] {} <Fatal> BaseDaemon: 20. ./obj-x86_64-linux-gnu/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x23ba1328 in /usr/bin/clickhouse
2021.09.29 23:47:01.287364 [ 20899 ] {} <Fatal> BaseDaemon: 21. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/ThreadPool.cpp:199: Poco::PooledThread::run() @ 0x23cef254 in /usr/bin/clickhouse
2021.09.29 23:47:01.395495 [ 20899 ] {} <Fatal> BaseDaemon: 22. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x23cebd3a in /usr/bin/clickhouse
2021.09.29 23:47:01.501554 [ 20899 ] {} <Fatal> BaseDaemon: 23. ./obj-x86_64-linux-gnu/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345: Poco::ThreadImpl::runnableEntry(void*) @ 0x23ceab1c in /usr/bin/clickhouse
2021.09.29 23:47:01.516965 [ 20899 ] {} <Fatal> BaseDaemon: 24. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.09.29 23:47:01.517420 [ 20899 ] {} <Fatal> BaseDaemon: 25. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.09.29 23:47:02.946969 [ 20899 ] {} <Fatal> BaseDaemon: Checksum of the binary: F288FB192F04A7536C9D71079D67D6B4, integrity check passed.
2021.09.29 23:47:18.134320 [ 373 ] {} <Fatal> Application: Child process was terminated by signal 6.
```
cc: @KochetovNicolai | https://github.com/ClickHouse/ClickHouse/issues/29541 | https://github.com/ClickHouse/ClickHouse/pull/46567 | ee07f05224ed80002efce42938adb1c0c1751091 | 179957fd7c218cd3063cc1da7d44e88da55857c3 | "2021-09-29T16:39:38Z" | c++ | "2023-02-20T02:36:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,468 | ["src/Core/MySQL/MySQLReplication.cpp"] | MaterializeMySQL Decimal bug | ClickHouse client version 21.8.4.51 (official build).
I had a problem synchronizing a Nullable(Decimal(20, 8)) column using the MaterializeMySQL engine.
The MySQL data is 10929.00000000,
but the ClickHouse data is -32771294956408.94967295.
Can someone help me, please?
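For anyone trying to reproduce: a minimal sketch of the setup, with made-up table, database and credential names (the original schema was not posted):
```sql
-- On the MySQL side (shown as comments, since the rest is ClickHouse SQL):
--   CREATE TABLE db.t (id INT PRIMARY KEY, v DECIMAL(20, 8) NULL);
--   INSERT INTO db.t VALUES (1, 10929.00000000);
-- On the ClickHouse side (may require enabling the experimental MaterializeMySQL database setting first):
CREATE DATABASE db_mysql ENGINE = MaterializeMySQL('mysql-host:3306', 'db', 'user', 'password');
SELECT v FROM db_mysql.t;
-- expected: 10929.00000000
-- observed: -32771294956408.94967295
```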
| https://github.com/ClickHouse/ClickHouse/issues/29468 | https://github.com/ClickHouse/ClickHouse/pull/31990 | fa298b089e9669a8ffb1aaf00d0fbfb922f40f06 | 9e034ee3a5af2914aac4fd9a39fd469286eb9b86 | "2021-09-28T08:02:03Z" | c++ | "2021-12-01T16:18:48Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,456 | ["src/Databases/DatabaseLazy.cpp", "src/Databases/DatabaseOrdinary.cpp", "tests/queries/0_stateless/01015_database_bad_tables.reference", "tests/queries/0_stateless/01015_database_bad_tables.sh", "tests/queries/0_stateless/01015_database_bad_tables.sql"] | Server failed to restart. | After such successful queries:
```sql
CREATE DATABASE testlazy ENGINE = Lazy(1);
CREATE TABLE testlazy.`ΡΠ°Π±Π»ΠΈΡΠ°_ΡΠΎ_ΡΡΡΠ°Π½Π½ΡΠΌ_Π½Π°Π·Π²Π°Π½ΠΈΠ΅ΠΌ` (a UInt64, b UInt64) ENGINE = Log;
```
Server failed to restart with error:
```
<Error> Application: filesystem error: in posix_stat: failed to determine attributes for the specified path: File name too long [/home/avogar/tmp/server/metadata/testlazy/%25D1%2582%25D0%25B0%25D0%25B1%25D0%25BB%25D0%25B8%25D1%2586%25D0%25B0_%25D1%2581%25D0%25BE_%25D1%2581%25D1%2582%25D1%2580%25D0%25B0%25D0%25BD%25D0%25BD%25D1%258B%25D0%25BC_%25D0%25BD%25D0%25B0%25D0%25B7%25D0%25B2%25D0%25B0%25D0%25BD%25D0%25B8%25D0%25B5%25D0%25BC.sql]
```
Found in https://github.com/ClickHouse/ClickHouse/pull/27928 | https://github.com/ClickHouse/ClickHouse/issues/29456 | https://github.com/ClickHouse/ClickHouse/pull/29476 | 33f193fea97b8d2e51756b4233389d66942c88a1 | be427555e58b032067a6d851c6143fadcb35e63d | "2021-09-27T19:59:53Z" | c++ | "2021-09-28T23:45:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,412 | ["tests/queries/0_stateless/02024_merge_regexp_assert.reference", "tests/queries/0_stateless/02024_merge_regexp_assert.sql"] | Flaky test 2024_merge_regexp_assert | Merge() tried to access a database (from another test) that was deleted while the query ran. We can rewrite the test so it does not touch other tests' databases, but we can also improve atomicity inside Merge().
```
2021-09-27 11:23:16 2024_merge_regexp_assert: [ FAIL ] - return code: 81
2021-09-27 11:23:16 Expected server error code: 47 but got: 81 (query: SELECT a FROM merge(REGEXP('\0'), '^t$'); -- { serverError 47 }).
2021-09-27 11:23:16 Received exception from server (version 21.11.1):
2021-09-27 11:23:16 Code: 81. DB::Exception: Received from localhost:9000. DB::Exception: Database test_mgk88c doesn't exist. (UNKNOWN_DATABASE)
2021-09-27 11:23:16 (query: SELECT a FROM merge(REGEXP('\0'), '^t$'); -- { serverError 47 })
2021-09-27 11:23:16 , result:
2021-09-27 11:23:16
2021-09-27 11:23:16
2021-09-27 11:23:16
2021-09-27 11:23:16 stdout:
2021-09-27 11:23:16
2021-09-27 11:23:16
2021-09-27 11:23:16 Database: test_pwn7fc
```
| check_name | test_name | check_start_time | test_status | report_url | commit_sha |
|:-|:-|-:|:-|:-|:-|
| Fast test | 2024_merge_regexp_assert | 2021-09-27 08:16:41 | FAIL | https://clickhouse-test-reports.s3.yandex.net/0/841ef13dee6c9cc2def425c4228cb40f8e77b108/fast_test.html | 841ef13dee6c9cc2def425c4228cb40f8e77b108 |
| Functional stateless tests (address) | 2024_merge_regexp_assert | 2021-09-27 08:12:29 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29390/ccf5050f7b8cdd0ee60a8074068ccfe0a20b92da/functional_stateless_tests_(address).html | ccf5050f7b8cdd0ee60a8074068ccfe0a20b92da |
| Functional stateless tests (release, wide parts enabled) | 2024_merge_regexp_assert | 2021-09-25 17:59:03 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29355/9dac348893dc32fb9be02122f5474a570788662a/functional_stateless_tests_(release,_wide_parts_enabled).html | 9dac348893dc32fb9be02122f5474a570788662a |
| Functional stateless tests (address) | 2024_merge_regexp_assert | 2021-09-25 17:58:31 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29355/9dac348893dc32fb9be02122f5474a570788662a/functional_stateless_tests_(address).html | 9dac348893dc32fb9be02122f5474a570788662a |
| Functional stateless tests (memory) | 2024_merge_regexp_assert | 2021-09-25 17:46:39 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29355/9dac348893dc32fb9be02122f5474a570788662a/functional_stateless_tests_(memory).html | 9dac348893dc32fb9be02122f5474a570788662a |
| Functional stateless tests (release) | 2024_merge_regexp_assert | 2021-09-25 17:43:46 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29355/9dac348893dc32fb9be02122f5474a570788662a/functional_stateless_tests_(release).html | 9dac348893dc32fb9be02122f5474a570788662a |
| Functional stateless tests (ubsan) | 2024_merge_regexp_assert | 2021-09-25 17:35:24 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29355/9dac348893dc32fb9be02122f5474a570788662a/functional_stateless_tests_(ubsan).html | 9dac348893dc32fb9be02122f5474a570788662a |
| Functional stateless tests (memory) | 2024_merge_regexp_assert | 2021-09-25 03:57:21 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29355/fe44be522ee11822ac8b86c1ffab8560b5fb37d9/functional_stateless_tests_(memory).html | fe44be522ee11822ac8b86c1ffab8560b5fb37d9 |
| Functional stateless tests (release, wide parts enabled) | 2024_merge_regexp_assert | 2021-09-25 03:47:34 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29355/fe44be522ee11822ac8b86c1ffab8560b5fb37d9/functional_stateless_tests_(release,_wide_parts_enabled).html | fe44be522ee11822ac8b86c1ffab8560b5fb37d9 |
| Functional stateless tests (release) | 2024_merge_regexp_assert | 2021-09-25 03:47:20 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29355/fe44be522ee11822ac8b86c1ffab8560b5fb37d9/functional_stateless_tests_(release).html | fe44be522ee11822ac8b86c1ffab8560b5fb37d9 |
| Functional stateless tests (ubsan) | 2024_merge_regexp_assert | 2021-09-25 03:33:50 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29355/fe44be522ee11822ac8b86c1ffab8560b5fb37d9/functional_stateless_tests_(ubsan).html | fe44be522ee11822ac8b86c1ffab8560b5fb37d9 | | https://github.com/ClickHouse/ClickHouse/issues/29412 | https://github.com/ClickHouse/ClickHouse/pull/29461 | d46dfd0ddd3207cab98a3c333ff4ddade99b75a8 | c37932a0b7af38ea40731a9421a4d9625254b2f1 | "2021-09-27T10:28:31Z" | c++ | "2021-09-28T21:57:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,403 | ["src/Interpreters/ExpressionJIT.cpp", "tests/queries/0_stateless/02036_jit_short_circuit.reference", "tests/queries/0_stateless/02036_jit_short_circuit.sql"] | Column Function is not a contiguous block of memory | **Describe what's wrong**
An error occurs when selecting data from a table with the Kafka engine.
**How to reproduce**
* Which ClickHouse server version to use
Clickhouse 21.9.3.30
* `CREATE TABLE` statements for all tables involved
```sql
CREATE TABLE kafka_fundevwh.QueueCheck
(
`message` String
)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'brokers', kafka_topic_list = 'topic', kafka_group_name = 'group', kafka_format = 'JSONAsString', kafka_num_consumers = 1, kafka_max_block_size = 1, kafka_flush_interval_ms = 60000
```
* Data sample (message)
```json
{
"timestamp": "2021-09-20 14:29:14",
"action": "bet"
}
```
* Queries to run that lead to unexpected result
```sql
SELECT if(action = 'bonus', sport_amount, 0) * 100
FROM
(
SELECT
JSONExtract(message, 'action', 'String') AS action,
JSONExtract(message, 'sport_amount', 'Float64') AS sport_amount
FROM kafka_fundevwh.QueueCheck
)
```
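Since the failing function is marked `[compiled]`, the crash appears related to JIT compilation of the short-circuited `if`. Possible workarounds (untested sketches, not confirmed fixes):
```sql
-- Either disable expression JIT compilation...
SET compile_expressions = 0;
-- ...or disable short-circuit function evaluation:
SET short_circuit_function_evaluation = 'disable';
```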
**Expected behavior**
Data selection without any errors
**Error message and/or stacktrace**
```
2021.09.27 08:18:02.867302 [ 29442 ] {6132ab14-0734-4cdb-9f86-fd16244ae6b5} <Error> TCPHandler: Code: 48. DB::Exception: Column Function is not a contiguous block of memory: while executing 'FUNCTION [compiled] multiply(if(UInt8, Float64, 0 : UInt8), 100 : UInt8)(equals(action, 'bonus') :: 2, sport_amount :: 6) -> multiply(if(equals(action, 'bonus'), sport_amount, 0), 100) Float64 : 5'. (NOT_IMPLEMENTED), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x936713a in /usr/bin/clickhouse
1. DB::IColumn::getRawData() const @ 0x105040e4 in /usr/bin/clickhouse
2. DB::getColumnData(DB::IColumn const*) @ 0x10f271e0 in /usr/bin/clickhouse
3. DB::LLVMExecutableFunction::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x10835b07 in /usr/bin/clickhouse
4. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x1019359e in /usr/bin/clickhouse
5. DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x10193bb2 in /usr/bin/clickhouse
6. DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const @ 0x1081dc72 in /usr/bin/clickhouse
7. DB::ExpressionTransform::transform(DB::Chunk&) @ 0x1197905c in /usr/bin/clickhouse
8. DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0xea01510 in /usr/bin/clickhouse
9. DB::ISimpleTransform::work() @ 0x117e31a7 in /usr/bin/clickhouse
10. ? @ 0x118211fd in /usr/bin/clickhouse
11. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x1181d9d1 in /usr/bin/clickhouse
12. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x1181ba2f in /usr/bin/clickhouse
13. DB::PipelineExecutor::execute(unsigned long) @ 0x1181b7f9 in /usr/bin/clickhouse
14. ? @ 0x1182877f in /usr/bin/clickhouse
15. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x93a815f in /usr/bin/clickhouse
16. ? @ 0x93aba43 in /usr/bin/clickhouse
17. start_thread @ 0x7fa3 in /lib/x86_64-linux-gnu/libpthread-2.28.so
18. clone @ 0xf94cf in /lib/x86_64-linux-gnu/libc-2.28.so
```
| https://github.com/ClickHouse/ClickHouse/issues/29403 | https://github.com/ClickHouse/ClickHouse/pull/29574 | 754f7aafebaf7a6232b5efa1630d69f4179bcb6e | ad6d1433032759475343b5434db7c55a26361809 | "2021-09-27T08:30:12Z" | c++ | "2021-10-01T18:18:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,386 | ["tests/queries/0_stateless/02506_date_time64_floating_point_negative_value.reference", "tests/queries/0_stateless/02506_date_time64_floating_point_negative_value.sql"] | Inaccurate type conversion between DateTime64 and Float/Decimal | It seems ClickHouse has trouble to convert float to DateTime64, regardless the value is positive or negative. I can reproduce the same issue on both 21.3 and 21.9.
```sql
f4dd7c849ef6 :) select toDateTime64(1, 3)
βββββββtoDateTime64(1, 3)ββ
β 1970-01-01 00:00:01.000 β
βββββββββββββββββββββββββββ
f4dd7c849ef6 :) select toDateTime64(1.1, 3)
βββββtoDateTime64(1.1, 3)ββ
β 1970-01-01 00:00:01.100 β
βββββββββββββββββββββββββββ
f4dd7c849ef6 :) select toDateTime64(1.01, 3)
ββββtoDateTime64(1.01, 3)ββ
β 1970-01-01 00:00:01.010 β
βββββββββββββββββββββββββββ
f4dd7c849ef6 :) select toDateTime64(1.001, 3) -- expected result: 1970-01-01 00:00:01.001
βββtoDateTime64(1.001, 3)ββ
β 1970-01-01 00:00:01.000 β
βββββββββββββββββββββββββββ
f4dd7c849ef6 :) select toDateTime64(1.005, 3) -- expected result: 1970-01-01 00:00:01.005
βββtoDateTime64(1.005, 3)ββ
β 1970-01-01 00:00:01.004 β
βββββββββββββββββββββββββββ
f4dd7c849ef6 :) select toDateTime64(-1, 3)
ββββββtoDateTime64(-1, 3)ββ
β 1969-12-31 23:59:59.000 β
βββββββββββββββββββββββββββ
f4dd7c849ef6 :) select toDateTime64(-1.1, 3) -- expected result: 1969-12-31 23:59:58.900, or 1969-12-31 23:59:59.000 if negative nanos is not supported on purpose
ββββtoDateTime64(-1.1, 3)ββ
β 1970-01-01 00:00:00.000 β
βββββββββββββββββββββββββββ
f4dd7c849ef6 :) select toDateTime64(-1.01, 3) -- expected result: 1969-12-31 23:59:58.990, or 1969-12-31 23:59:59.000 if negative nanos is not supported on purpose
βββtoDateTime64(-1.01, 3)ββ
β 1970-01-01 00:00:00.000 β
βββββββββββββββββββββββββββ
f4dd7c849ef6 :) select toDateTime64(-1.001, 3) -- expected result: 1969-12-31 23:59:58.999, or 1969-12-31 23:59:59.000 if negative nanos is not supported on purpose
ββtoDateTime64(-1.001, 3)ββ
β 1970-01-01 00:00:00.000 β
βββββββββββββββββββββββββββ
f4dd7c849ef6 :) select toDateTime64(-1.005, 3) -- expected result: 1969-12-31 23:59:58.995, or 1969-12-31 23:59:59.000 if negative nanos is not supported on purpose
ββtoDateTime64(-1.005, 3)ββ
β 1970-01-01 00:00:00.000 β
βββββββββββββββββββββββββββ
```
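Until the conversion is fixed, a hedged workaround is to pass exact integer sub-second ticks instead of a binary float, e.g. via `fromUnixTimestamp64Milli`:
```sql
-- 1001 ms as an exact Int64 instead of the float 1.001
SELECT fromUnixTimestamp64Milli(toInt64(1001)); -- 1970-01-01 00:00:01.001 (UTC)
```
Since the milliseconds are passed as an exact integer, no binary floating-point rounding is involved.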
| https://github.com/ClickHouse/ClickHouse/issues/29386 | https://github.com/ClickHouse/ClickHouse/pull/44340 | 55131f7ae812e55aaf943a65599b4741e0f537c7 | 4e53ddb2e1e21124f760e37f84c4bbaf99340a85 | "2021-09-26T07:07:27Z" | c++ | "2022-12-27T11:58:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,347 | ["src/Processors/Transforms/WindowTransform.cpp", "tests/queries/0_stateless/01591_window_functions.reference", "tests/queries/0_stateless/01591_window_functions.sql"] | Uninitialized memory in window function `nth_value` | Run under MSan:
**Describe the bug**
https://clickhouse-test-reports.s3.yandex.net/29341/2b2bec3679df7965af908ce3f1e8e17e39bd12fe/fuzzer_msan/report.html#fail1
**How to reproduce**
```
SELECT number, nth_value(number, '10') OVER w AS secondValue, nth_value(number, 2147483647) OVER w AS thirdValue FROM numbers(1.) WINDOW w AS (ORDER BY number ASC) ORDER BY toInt64(2147483646 - intDiv(number, 2147483647), toInt64(number, NULL - intDiv(number, NULL), 100000002004087730000.)) ASC, number DESC NULLS FIRST
```
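A simpler shape of the same call, for orientation (a sketch only; it is not verified to reproduce the MSan report, which also needed the fuzzer settings below):
```sql
-- nth_value asked for an out-of-range position over a short window
SELECT number, nth_value(number, 2147483647) OVER (ORDER BY number) FROM numbers(3);
```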
```
Changed settings: max_block_size = '1', max_threads = '1', receive_timeout = '10', receive_data_timeout_ms = '10000', max_rows_to_read = '3', engine_file_empty_if_not_exists = '0'
``` | https://github.com/ClickHouse/ClickHouse/issues/29347 | https://github.com/ClickHouse/ClickHouse/pull/29348 | 7eddf2664e83a5d877712de190cc4a4a906664ef | 87d9506bb073c278383c9e02ed2b50e96cbdffef | "2021-09-25T01:27:37Z" | c++ | "2021-09-25T16:43:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,327 | ["src/Storages/StorageMergeTree.cpp"] | Flaky test 00975_move_partition_merge_tree | ```
2021-09-24 11:42:36 00975_move_partition_merge_tree: [ FAIL ] 5.16 sec. - result differs with reference:
2021-09-24 11:42:36 --- /usr/share/clickhouse-test/queries/0_stateless/00975_move_partition_merge_tree.reference 2021-09-24 10:24:23.000000000 +0200
2021-09-24 11:42:36 +++ /tmp/clickhouse-test/0_stateless/00975_move_partition_merge_tree.stdout 2021-09-24 11:42:36.846796510 +0200
2021-09-24 11:42:36 @@ -1,4 +1,4 @@
2021-09-24 11:42:36 10000000
2021-09-24 11:42:36 0
2021-09-24 11:42:36 -5000000
2021-09-24 11:42:36 +8145635
2021-09-24 11:42:36 5000000
2021-09-24 11:42:36
2021-09-24 11:42:36
```
``` sql
select check_name, test_name, check_start_time, test_status, report_url, commit_sha from `gh-data`.checks where test_status not in ['OK', 'SUCCESS', 'SKIPPED', 'XFAIL', 'XERROR'] and test_name = '00975_move_partition_merge_tree' and check_start_time >= now() - interval 30 day order by check_start_time desc FORMAT Markdown
```
| check_name | test_name | check_start_time | test_status | report_url | commit_sha |
|:-|:-|-:|:-|:-|:-|
| Functional stateless tests (ubsan) | 00975_move_partition_merge_tree | 2021-09-24 09:31:40 | FAIL | https://clickhouse-test-reports.s3.yandex.net/0/e617aeb7a56a18765f2fd227d66c1b6f425336b2/functional_stateless_tests_(ubsan).html | e617aeb7a56a18765f2fd227d66c1b6f425336b2 |
| Functional stateless tests (release) | 00975_move_partition_merge_tree | 2021-09-23 14:02:23 | FAIL | https://clickhouse-test-reports.s3.yandex.net/0/23f865c7222fe61922c0e5c6032cf61ee978cd1c/functional_stateless_tests_(release).html | 23f865c7222fe61922c0e5c6032cf61ee978cd1c |
| Functional stateless tests (ubsan) | 00975_move_partition_merge_tree | 2021-09-22 21:27:21 | FAIL | https://clickhouse-test-reports.s3.yandex.net/28582/4f802d1cea543dcfeb1b9efdb5fca0491a0c3fd0/functional_stateless_tests_(ubsan).html | 4f802d1cea543dcfeb1b9efdb5fca0491a0c3fd0 |
| Functional stateless tests (address) | 00975_move_partition_merge_tree | 2021-09-22 20:16:52 | FAIL | https://clickhouse-test-reports.s3.yandex.net/0/be256cc9edafeb9278b21d3ff9c5504dfaeed389/functional_stateless_tests_(address).html | be256cc9edafeb9278b21d3ff9c5504dfaeed389 |
| Functional stateless tests (address) | 00975_move_partition_merge_tree | 2021-09-22 15:01:44 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29258/eb1053d93129c482523eda5fa95997128eafbcfb/functional_stateless_tests_(address).html | eb1053d93129c482523eda5fa95997128eafbcfb |
| Functional stateless tests (release, wide parts enabled) | 00975_move_partition_merge_tree | 2021-09-22 11:54:59 | FLAKY | https://clickhouse-test-reports.s3.yandex.net/29021/666d07b9bb2fcc17e435f647746ebfbfd9685fbd/functional_stateless_tests_(release,_wide_parts_enabled).html | 666d07b9bb2fcc17e435f647746ebfbfd9685fbd |
| Functional stateless tests (release, DatabaseOrdinary) | 00975_move_partition_merge_tree | 2021-09-21 21:47:11 | FAIL | https://clickhouse-test-reports.s3.yandex.net/0/5ef411677a9c5c0a3fe42c9a104e71b75f89218b/functional_stateless_tests_(release,_databaseordinary).html | 5ef411677a9c5c0a3fe42c9a104e71b75f89218b |
| Functional stateless tests (release, wide parts enabled) | 00975_move_partition_merge_tree | 2021-09-21 09:06:00 | FLAKY | https://clickhouse-test-reports.s3.yandex.net/29216/7e346ec6ecf4cff8d40acdd79c02419df0e5660d/functional_stateless_tests_(release,_wide_parts_enabled).html | 7e346ec6ecf4cff8d40acdd79c02419df0e5660d |
| Functional stateless tests (address) | 00975_move_partition_merge_tree | 2021-09-20 23:53:46 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29140/892a4c48c4aab6dfdf0ab463fff6c15d25589730/functional_stateless_tests_(address).html | 892a4c48c4aab6dfdf0ab463fff6c15d25589730 |
| Functional stateless tests (release) | 00975_move_partition_merge_tree | 2021-09-20 19:35:49 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29140/496bedb0f5728d87c10754bab154e851bdb282eb/functional_stateless_tests_(release).html | 496bedb0f5728d87c10754bab154e851bdb282eb |
| Functional stateless tests (release, wide parts enabled) | 00975_move_partition_merge_tree | 2021-09-20 19:10:35 | FAIL | https://clickhouse-test-reports.s3.yandex.net/0/c5556b5e04e0f99bd49839413378ece3f5100644/functional_stateless_tests_(release,_wide_parts_enabled).html | c5556b5e04e0f99bd49839413378ece3f5100644 |
| Functional stateless tests (ubsan) | 00975_move_partition_merge_tree | 2021-09-19 23:25:21 | FAIL | https://clickhouse-test-reports.s3.yandex.net/26231/c9843c79215f9b78a191fe4dcd5230a58fa73798/functional_stateless_tests_(ubsan).html | c9843c79215f9b78a191fe4dcd5230a58fa73798 |
| Functional stateless tests (release, wide parts enabled) | 00975_move_partition_merge_tree | 2021-09-19 23:01:19 | FLAKY | https://clickhouse-test-reports.s3.yandex.net/26231/c9843c79215f9b78a191fe4dcd5230a58fa73798/functional_stateless_tests_(release,_wide_parts_enabled).html | c9843c79215f9b78a191fe4dcd5230a58fa73798 |
| Functional stateless tests (address) | 00975_move_partition_merge_tree | 2021-09-17 22:04:36 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29132/9fb73af891e5cb3324808ce619242c9e83bade5f/functional_stateless_tests_(address).html | 9fb73af891e5cb3324808ce619242c9e83bade5f |
| Functional stateless tests (release, wide parts enabled) | 00975_move_partition_merge_tree | 2021-09-17 17:44:01 | FAIL | https://clickhouse-test-reports.s3.yandex.net/0/db516e8c9142aedeb2a9d5ba8ed9b5ee0063bf31/functional_stateless_tests_(release,_wide_parts_enabled).html | db516e8c9142aedeb2a9d5ba8ed9b5ee0063bf31 |
| Functional stateless tests (address) | 00975_move_partition_merge_tree | 2021-09-17 16:03:03 | FAIL | https://clickhouse-test-reports.s3.yandex.net/21320/4ca2193da851b4e5efbff46697f3e6bd6306f6b4/functional_stateless_tests_(address).html | 4ca2193da851b4e5efbff46697f3e6bd6306f6b4 |
| Functional stateless tests (release, wide parts enabled) | 00975_move_partition_merge_tree | 2021-09-17 14:59:43 | FAIL | https://clickhouse-test-reports.s3.yandex.net/28803/cfd51467713a2d6f701f13374d9652cb98652ff1/functional_stateless_tests_(release,_wide_parts_enabled).html | cfd51467713a2d6f701f13374d9652cb98652ff1 |
| Functional stateless tests (release, wide parts enabled) | 00975_move_partition_merge_tree | 2021-09-17 13:43:42 | FAIL | https://clickhouse-test-reports.s3.yandex.net/29057/89f2b9fa71bced77eae206155e46edb261b8ca2e/functional_stateless_tests_(release,_wide_parts_enabled).html | 89f2b9fa71bced77eae206155e46edb261b8ca2e |
| Functional stateless tests (address) | 00975_move_partition_merge_tree | 2021-09-17 12:39:47 | FAIL | https://clickhouse-test-reports.s3.yandex.net/28955/f97f5c5c75a1db578efebcd0e7a8ae56781eca25/functional_stateless_tests_(address).html | f97f5c5c75a1db578efebcd0e7a8ae56781eca25 |
| Functional stateless tests (release) | 00975_move_partition_merge_tree | 2021-08-30 18:59:30 | FAIL | https://clickhouse-test-reports.s3.yandex.net/28359/6906e1d8edd8466b144c4e51f846d99f7c6ab9b6/functional_stateless_tests_(release).html | 6906e1d8edd8466b144c4e51f846d99f7c6ab9b6 |
| Functional stateless tests (release, wide parts enabled) | 00975_move_partition_merge_tree | 2021-08-27 10:42:15 | FAIL | https://clickhouse-test-reports.s3.yandex.net/0/44390a88ecaee77b577fabefb301c68054912f11/functional_stateless_tests_(release,_wide_parts_enabled).html | 44390a88ecaee77b577fabefb301c68054912f11 |
| Functional stateless tests (release) | 00975_move_partition_merge_tree | 2021-08-25 16:52:08 | FAIL | https://clickhouse-test-reports.s3.yandex.net/0/4685fb42268f17e55bf23574f18caa191395c61e/functional_stateless_tests_(release).html | 4685fb42268f17e55bf23574f18caa191395c61e |
| https://github.com/ClickHouse/ClickHouse/issues/29327 | https://github.com/ClickHouse/ClickHouse/pull/30717 | d07d53f1b156bbfd8c39655194166bbfaf747284 | ac4a9bcf23f36e76d145de72c84cb5c8570a6094 | "2021-09-24T11:00:33Z" | c++ | "2021-10-27T15:19:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,248 | ["cmake/target.cmake", "contrib/boost", "contrib/grpc-cmake/protobuf_generate_grpc.cmake", "contrib/protobuf-cmake/CMakeLists.txt", "contrib/protobuf-cmake/protobuf_generate.cmake"] | aarch64 version doesn't support Parquet format | run
```sql
select * from table1 format Parquet
```
returned
```
Unknown format Parquet
```
x86 Linux version works
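A quick way to check which formats a given binary registered (a sketch; assumes the `system.formats` table available in recent releases):
```sql
SELECT name, is_input, is_output FROM system.formats WHERE name = 'Parquet';
-- an empty result means the format was not compiled into this build
```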
| https://github.com/ClickHouse/ClickHouse/issues/29248 | https://github.com/ClickHouse/ClickHouse/pull/30015 | ed6088860ee818692ba32405b1cf17e4583a0aea | eb1748b8b4ac26c378ac11b9ba4c1a012ff3c189 | "2021-09-22T07:44:17Z" | c++ | "2021-10-13T02:24:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,184 | ["src/Interpreters/AddDefaultDatabaseVisitor.h", "tests/queries/0_stateless/02041_test_fuzzy_alter.reference", "tests/queries/0_stateless/02041_test_fuzzy_alter.sql"] | Reference to nullptr in ALTER | **Describe the bug**
https://clickhouse-test-reports.s3.yandex.net/29183/6cf8509ab5da4229b5b1f6b96ff1f162b213e271/fuzzer_ubsan/report.html#fail1
**How to reproduce**
```
ALTER TABLE alter_test MODIFY COLUMN `b` DateTime DEFAULT now(([NULL, NULL, NULL, [-2147483648], [NULL, NULL, NULL, NULL, NULL, NULL, NULL]] AND (1048576 AND NULL) AND (NULL AND 1048575 AND NULL AND -2147483649) AND NULL) IN (test_01103.t1_distr.id))
```
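For context, the fuzzer query above needs a table to run against; a hypothetical stand-in schema (the real `alter_test` definition is not given in the report):
```sql
-- assumed minimal shape of alter_test
CREATE TABLE alter_test (`a` UInt8, `b` DateTime) ENGINE = MergeTree ORDER BY a;
```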
`Changed settings: receive_timeout = '10', receive_data_timeout_ms = '10000', count_distinct_implementation = 'uniqTheta', max_result_rows = '10', optimize_injective_functions_inside_uniq = '0'` | https://github.com/ClickHouse/ClickHouse/issues/29184 | https://github.com/ClickHouse/ClickHouse/pull/29573 | b29e877f269e84ae452c446e70b406a695863470 | 30220529b714d55aa2fc9bc9b45119077434c882 | "2021-09-20T01:26:48Z" | c++ | "2021-10-01T09:44:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,154 | ["src/Common/Stopwatch.h"] | Timeout exceeded: elapsed 18446744073.709553 seconds | **Describe the unexpected behaviour**
Recently, all of a sudden, I started receiving exceptions like `Timeout exceeded: elapsed 18446744073.709553 seconds, maximum: 600` from different users with different queries, while those queries actually took only about 100 ms.
Some fields from system.query_log:
```
βββββββββββevent_timeββ¬βtypeββββββββββββββββββββββ¬βinitial_userββ¬βquery_duration_msββ¬βmemory_usageββ¬βread_rowsββ¬βread_bytesββ¬βinitial_query_idβββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βexceptionβββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ¬βexception_codeββ
β 2021-09-16 10:57:37 β QueryStart β some_user β 0 β 0 β 0 β 0 β 9f2efcbd-ad1e-426f-bef7-8856f243f60f_99775fdd09b0f63dc31f4759a5d19bb3 β β 0 β
β 2021-09-16 10:57:37 β ExceptionWhileProcessing β some_user β 103 β 0 β 0 β 0 β 9f2efcbd-ad1e-426f-bef7-8856f243f60f_99775fdd09b0f63dc31f4759a5d19bb3 β Code: 159, e.displayText() = DB::Exception: Timeout exceeded: elapsed 18446744073.709553 seconds, maximum: 600: While executing Remote (version 20.12.9.1) β 159 β
βββββββββββββββββββββββ΄βββββββββββββββββββββββββββ΄βββββββββββββββ΄ββββββββββββββββββββ΄βββββββββββββββ΄ββββββββββββ΄βββββββββββββ΄ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββ
```
**How to reproduce**
* ClickHouse version: 20.12.8.5 ( I manually built image of 20.12.9.1 based on 20.12.8.5 )
* The query itself was expected to finish very fast. We did have settings of `max_execution_time=600` and `timeout_before_checking_execution_speed=10`, but the exception indicated `elapsed 18446744073.709553 seconds`, which was not true. And you can see from the fields `event_time` and `query_duration_ms` that it actually took 103 ms.
* **What's weird is that other queries from other users threw the same exception with the exact `elapsed 18446744073.709553 seconds` message. I wonder where `18446744073.709553 seconds` came from? Is it something related to a bug? (See the arithmetic sketch after this list.)**
* I believe the query patterns had no impact on this, because different queries got the same timeout exception.
* I thought it could be related to my cluster's state, but around the time of that exact exception there were many successful executions of the same queries.
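The magic number itself can be decoded: it is exactly the maximum `UInt64` value interpreted as nanoseconds, which suggests an unsigned wrap-around (e.g. a negative elapsed-time delta stored in an unsigned nanosecond counter) rather than a genuinely long-running query. A sketch of the arithmetic:
```sql
-- 2^64 - 1 nanoseconds, printed as seconds
SELECT toUInt64(-1) AS uint64_max_ns, toUInt64(-1) / 1e9 AS seconds;
-- uint64_max_ns: 18446744073709551615
-- seconds:       18446744073.709553  (matches the error message exactly)
```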
**Expected behavior**
The timeout exception should not be thrown if the query didn't run longer than `max_execution_time`. At the very least, the "timeout exceeded" message should be correct. I don't know where the `18446744073.709553 seconds` comes from.
**Error message and/or stacktrace**
```
Code: 159, e.displayText() = DB::Exception: Timeout exceeded: elapsed 18446744073.709553 seconds, maximum: 600: While executing Remote (version 20.12.9.1)
```
| https://github.com/ClickHouse/ClickHouse/issues/29154 | https://github.com/ClickHouse/ClickHouse/pull/49819 | efc5e69aafbeda48b54a28a7bb4ab8db71077546 | 12be14b1952ea5596ba2c57ddc48cb0cbbfa27b0 | "2021-09-18T09:20:28Z" | c++ | "2023-05-12T22:06:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,122 | ["src/Databases/DatabaseMemory.cpp"] | Liveview-related (?) sigabort | ```
2021.09.15 06:50:25.803691 [ 91216 ] {9c658a50-6425-4b3d-9704-8a12f812ac5a} <Debug> executeQuery: (from 7.146.10.231:59686, user: xxxxx_user) watch foo
2021.09.15 06:50:25.803844 [ 91216 ] {9c658a50-6425-4b3d-9704-8a12f812ac5a} <Trace> ContextAccess (xxxxx_user): Access granted: SELECT(StartTime, AvgMid, Volume, `bar(Volume, 0, 10000)`) ON xxxxx.foo
2021.09.15 06:50:25.804730 [ 91216 ] {9c658a50-6425-4b3d-9704-8a12f812ac5a} <Debug> InterpreterSelectQuery: MergeTreeWhereOptimizer: condition "SID = 188992" moved to PREWHERE
2021.09.15 06:50:25.805378 [ 91216 ] {9c658a50-6425-4b3d-9704-8a12f812ac5a} <Trace> InterpreterSelectQuery: FetchColumns -> WithMergeableState
2021.09.15 06:50:25.805691 [ 91216 ] {9c658a50-6425-4b3d-9704-8a12f812ac5a} <Debug> xxxxx.livebars_zzzzzz (dfa19a23-f36e-413c-83a1-5fb38c8886b4) (SelectExecutor): Key condition: (column 2 in [188992, 188992]), (column 0 in [20210915, 20210915]), and, (column 2 in [188992, 188992]), and
2021.09.15 06:50:25.815651 [ 91216 ] {9c658a50-6425-4b3d-9704-8a12f812ac5a} <Debug> xxxxx.livebars_zzzzzz (dfa19a23-f36e-413c-83a1-5fb38c8886b4) (SelectExecutor): MinMax index condition: unknown, (column 0 in [20210915, 20210915]), and, unknown, and
2021.09.15 06:50:25.816240 [ 91216 ] {9c658a50-6425-4b3d-9704-8a12f812ac5a} <Debug> xxxxx.livebars_zzzzzz (dfa19a23-f36e-413c-83a1-5fb38c8886b4) (SelectExecutor): Selected 8/328 parts by partition key, 8 parts by primary key, 254/334 marks by primary key, 254 marks to read from 8 ranges
2021.09.15 06:50:25.816581 [ 91216 ] {9c658a50-6425-4b3d-9704-8a12f812ac5a} <Debug> xxxxx.livebars_zzzzzz (dfa19a23-f36e-413c-83a1-5fb38c8886b4) (SelectExecutor): Reading approx. 2079661 rows with 11 streams
2021.09.15 06:50:25.840246 [ 91216 ] {9c658a50-6425-4b3d-9704-8a12f812ac5a} <Debug> InterpreterSelectQuery: MergeTreeWhereOptimizer: condition "SID = 188992" moved to PREWHERE
2021.09.15 06:50:25.840994 [ 91216 ] {9c658a50-6425-4b3d-9704-8a12f812ac5a} <Trace> InterpreterSelectQuery: WithMergeableState -> Complete
2021.09.15 06:50:25.892657 [ 76213 ] {} <Trace> BaseDaemon: Received signal -1
2021.09.15 06:50:25.892735 [ 76213 ] {} <Fatal> BaseDaemon: (version 21.8.5.7 (official build), build id: 002660CFDF969F9EBCB93D09D84DC76B0B469329) (from thread 91216) Terminate called for uncaught exception:
std::exception. Code: 1001, type: std::__1::__fs::filesystem::filesystem_error, e.what() = filesystem error: in posix_stat: failed to determine attributes for the specified path: Permission denied [data/_temporary_and_external_tables/_tmp_140656c1%2D0ffe%2D404f%2D9406%2D56c10ffe904f/], Stack trace (when copying this message, always include the lines below):
0. std::__1::system_error::system_error(std::__1::error_code, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x158f85af in ?
1. ? @ 0x1588bddf in ?
2. ? @ 0x1588b7f6 in ?
3. ? @ 0x15896ff4 in ?
4. std::__1::__fs::filesystem::__status(std::__1::__fs::filesystem::path const&, std::__1::error_code*) @ 0x158932c7 in ?
5. DB::DatabaseMemory::dropTable(std::__1::shared_ptr<DB::Context const>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool) @ 0xfe8e02d in /usr/bin/clickhouse
6. DB::TemporaryTableHolder::~TemporaryTableHolder() @ 0x1001fefa in /usr/bin/clickhouse
7. std::__1::__tree<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::TemporaryTableHolder> >, std::__1::__map_value_compare<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::TemporaryTableHolder> >, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, true>, std::__1::allocator<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::TemporaryTableHolder> > > >::destroy(std::__1::__tree_node<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::TemporaryTableHolder> >, void*>*) @ 0xffd764c in /usr/bin/clickhouse
8. DB::Context::~Context() @ 0xffadc21 in /usr/bin/clickhouse
9. std::__1::__shared_ptr_pointer<DB::Context*, std::__1::shared_ptr<DB::Context>::__shared_ptr_default_delete<DB::Context, DB::Context>, std::__1::allocator<DB::Context> >::__on_zero_shared() @ 0xffe2172 in /usr/bin/clickhouse
10. DB::Pipe::Holder::~Holder() @ 0x907f5ce in /usr/bin/clickhouse
11. DB::PipelineExecutingBlockInputStream::~PipelineExecutingBlockInputStream() @ 0x1104bce5 in /usr/bin/clickhouse
12. DB::IBlockInputStream::~IBlockInputStream() @ 0x9082a7e in /usr/bin/clickhouse
13. DB::IBlockInputStream::~IBlockInputStream() @ 0x9082a7e in /usr/bin/clickhouse
14. DB::StorageLiveView::getNewBlocks() @ 0x10eec18c in /usr/bin/clickhouse
15. DB::StorageLiveView::watch(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum&, unsigned long, unsigned int) @ 0x10eedf4b in /usr/bin/clickhouse
16. DB::InterpreterWatchQuery::execute() @ 0x105aa64d in /usr/bin/clickhouse
17. ? @ 0x10743fc4 in /usr/bin/clickhouse
18. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0x10742663 in /usr/bin/clickhouse
19. DB::TCPHandler::runImpl() @ 0x10fd9d4d in /usr/bin/clickhouse
20. DB::TCPHandler::run() @ 0x10fecb99 in /usr/bin/clickhouse
21. Poco::Net::TCPServerConnection::start() @ 0x13b63c6f in /usr/bin/clickhouse
22. Poco::Net::TCPServerDispatcher::run() @ 0x13b656fa in /usr/bin/clickhouse
23. Poco::PooledThread::run() @ 0x13c98579 in /usr/bin/clickhouse
24. Poco::ThreadImpl::runnableEntry(void*) @ 0x13c9480a in /usr/bin/clickhouse
25. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
26. __clone @ 0xfe8cd in /usr/lib64/libc-2.17.so
Canno
2021.09.15 06:50:25.904618 [ 76213 ] {} <Trace> BaseDaemon: Received signal 6
2021.09.15 06:50:25.904914 [ 105541 ] {} <Fatal> BaseDaemon: ########################################
2021.09.15 06:50:25.905044 [ 105541 ] {} <Fatal> BaseDaemon: (version 21.8.5.7 (official build), build id: 002660CFDF969F9EBCB93D09D84DC76B0B469329) (from thread 91216) (query_id: 9c658a50-6425-4b3d-9704-8a12f812ac5a) Received signal Aborted (6)
2021.09.15 06:50:25.905083 [ 105541 ] {} <Fatal> BaseDaemon:
2021.09.15 06:50:25.905130 [ 105541 ] {} <Fatal> BaseDaemon: Stack trace: 0x7ffff741b377 0x7ffff741ca68 0xf9b6cc8 0x15914463 0x159143cc 0x8f95a4b 0x1001ffc6 0xffd764c 0xffadc21 0xffe2172 0x907f5ce 0x1104bce5 0x9082a7e 0x9082a7e 0x10eec18c 0x10eedf4b 0x105aa64d 0x10743fc4 0x10742663 0x10fd9d4d 0x10fecb99 0x13b63c6f 0x13b656fa 0x13c98579 0x13c9480a 0x7ffff7bc6ea5 0x7ffff74e38cd
2021.09.15 06:50:25.908635 [ 105541 ] {} <Fatal> BaseDaemon: 1. __GI_raise @ 0x36377 in /usr/lib64/libc-2.17.so
2021.09.15 06:50:25.908674 [ 105541 ] {} <Fatal> BaseDaemon: 2. abort @ 0x37a68 in /usr/lib64/libc-2.17.so
2021.09.15 06:50:25.908697 [ 105541 ] {} <Fatal> BaseDaemon: 3. ? @ 0xf9b6cc8 in /usr/bin/clickhouse
2021.09.15 06:50:25.908720 [ 105541 ] {} <Fatal> BaseDaemon: 4. ? @ 0x15914463 in ?
2021.09.15 06:50:25.908747 [ 105541 ] {} <Fatal> BaseDaemon: 5. std::terminate() @ 0x159143cc in ?
2021.09.15 06:50:25.908775 [ 105541 ] {} <Fatal> BaseDaemon: 6. ? @ 0x8f95a4b in /usr/bin/clickhouse
2021.09.15 06:50:25.908804 [ 105541 ] {} <Fatal> BaseDaemon: 7. DB::TemporaryTableHolder::~TemporaryTableHolder() @ 0x1001ffc6 in /usr/bin/clickhouse
2021.09.15 06:50:25.908846 [ 105541 ] {} <Fatal> BaseDaemon: 8. std::__1::__tree<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::TemporaryTableHolder> >, std::__1::__map_value_compare<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::TemporaryTableHolder> >, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, true>, std::__1::allocator<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::TemporaryTableHolder> > > >::destroy(std::__1::__tree_node<std::__1::__value_type<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::shared_ptr<DB::TemporaryTableHolder> >, void*>*) @ 0xffd764c in /usr/bin/clickhouse
2021.09.15 06:50:25.908889 [ 105541 ] {} <Fatal> BaseDaemon: 9. DB::Context::~Context() @ 0xffadc21 in /usr/bin/clickhouse
2021.09.15 06:50:25.908926 [ 105541 ] {} <Fatal> BaseDaemon: 10. std::__1::__shared_ptr_pointer<DB::Context*, std::__1::shared_ptr<DB::Context>::__shared_ptr_default_delete<DB::Context, DB::Context>, std::__1::allocator<DB::Context> >::__on_zero_shared() @ 0xffe2172 in /usr/bin/clickhouse
2021.09.15 06:50:25.908955 [ 105541 ] {} <Fatal> BaseDaemon: 11. DB::Pipe::Holder::~Holder() @ 0x907f5ce in /usr/bin/clickhouse
2021.09.15 06:50:25.908980 [ 105541 ] {} <Fatal> BaseDaemon: 12. DB::PipelineExecutingBlockInputStream::~PipelineExecutingBlockInputStream() @ 0x1104bce5 in /usr/bin/clickhouse
2021.09.15 06:50:25.908994 [ 105541 ] {} <Fatal> BaseDaemon: 13. DB::IBlockInputStream::~IBlockInputStream() @ 0x9082a7e in /usr/bin/clickhouse
2021.09.15 06:50:25.909017 [ 105541 ] {} <Fatal> BaseDaemon: 14. DB::IBlockInputStream::~IBlockInputStream() @ 0x9082a7e in /usr/bin/clickhouse
2021.09.15 06:50:25.909045 [ 105541 ] {} <Fatal> BaseDaemon: 15. DB::StorageLiveView::getNewBlocks() @ 0x10eec18c in /usr/bin/clickhouse
2021.09.15 06:50:25.909074 [ 105541 ] {} <Fatal> BaseDaemon: 16. DB::StorageLiveView::watch(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum&, unsigned long, unsigned int) @ 0x10eedf4b in /usr/bin/clickhouse
2021.09.15 06:50:25.909103 [ 105541 ] {} <Fatal> BaseDaemon: 17. DB::InterpreterWatchQuery::execute() @ 0x105aa64d in /usr/bin/clickhouse
2021.09.15 06:50:25.909121 [ 105541 ] {} <Fatal> BaseDaemon: 18. ? @ 0x10743fc4 in /usr/bin/clickhouse
2021.09.15 06:50:25.909144 [ 105541 ] {} <Fatal> BaseDaemon: 19. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0x10742663 in /usr/bin/clickhouse
2021.09.15 06:50:25.909179 [ 105541 ] {} <Fatal> BaseDaemon: 20. DB::TCPHandler::runImpl() @ 0x10fd9d4d in /usr/bin/clickhouse
2021.09.15 06:50:25.909206 [ 105541 ] {} <Fatal> BaseDaemon: 21. DB::TCPHandler::run() @ 0x10fecb99 in /usr/bin/clickhouse
2021.09.15 06:50:25.909240 [ 105541 ] {} <Fatal> BaseDaemon: 22. Poco::Net::TCPServerConnection::start() @ 0x13b63c6f in /usr/bin/clickhouse
2021.09.15 06:50:25.909258 [ 105541 ] {} <Fatal> BaseDaemon: 23. Poco::Net::TCPServerDispatcher::run() @ 0x13b656fa in /usr/bin/clickhouse
2021.09.15 06:50:25.909287 [ 105541 ] {} <Fatal> BaseDaemon: 24. Poco::PooledThread::run() @ 0x13c98579 in /usr/bin/clickhouse
2021.09.15 06:50:25.909316 [ 105541 ] {} <Fatal> BaseDaemon: 25. Poco::ThreadImpl::runnableEntry(void*) @ 0x13c9480a in /usr/bin/clickhouse
2021.09.15 06:50:25.909354 [ 105541 ] {} <Fatal> BaseDaemon: 26. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
2021.09.15 06:50:25.909376 [ 105541 ] {} <Fatal> BaseDaemon: 27. __clone @ 0xfe8cd in /usr/lib64/libc-2.17.so
...
2021.09.15 06:50:47.586263 [ 76211 ] {} <Fatal> Application: Child process was terminated by signal 6.
```
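For context, a minimal shape of the setup involved (a sketch with stand-in names; the real table and live-view definitions are redacted in the log above):
```sql
SET allow_experimental_live_view = 1;
CREATE LIVE VIEW foo AS SELECT count() FROM some_table; -- hypothetical source table
WATCH foo; -- the crash happened while serving such a WATCH query
```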
I think "Canno" is the truncated beginning of the following message (I see it in other log lines nearby):
```
Cannot print extra info for Poco::Exception (version 21.8.5.7 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/29122 | https://github.com/ClickHouse/ClickHouse/pull/29216 | 0f8798106f376e3ebf5b2073da06c0003aa85fdb | e54bd40102beaab4e138d96be4002dc0478456bd | "2021-09-17T09:17:01Z" | c++ | "2021-09-21T17:12:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,072 | ["src/Disks/DiskLocal.cpp"] | Server allows invalid storage configuration when two disks use the same path |
If you create a configuration file with two disks that point to the same path, and then create a policy that uses one of these disks, restarting the server works and it is possible to create tables and to insert into and select from them. However, on the next restart the server will not start if there are any files on the disk.
**How to reproduce**
```xml
<yandex>
<storage_configuration>
<disks>
<default>
<keep_free_space_bytes>1024</keep_free_space_bytes>
</default>
<local1>
<path>/home/antip/disk1/</path>
</local1>
<local2>
<path>/home/antip/disk1/</path>
</local2>
</disks>
<policies>
<local1>
<volumes>
<local_volume>
<disk>local1</disk>
</local_volume>
</volumes>
</local1>
</policies>
</storage_configuration>
</yandex>
```
```SQL
CREATE TABLE test
(
ID Int32,
Value String
)
ENGINE = MergeTree()
PARTITION BY ID
ORDER BY ID
SETTINGS storage_policy = 'local1'
```
```SQL
insert into test values (1, '1111')
```
Now restarting the server will not work.
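A quick sanity check that would flag such a configuration (a sketch; `system.disks` exposes the configured disk paths):
```sql
SELECT path, groupArray(name) AS disks
FROM system.disks
GROUP BY path
HAVING length(disks) > 1; -- any resulting row means several disks share one path
```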
* 21.10.1.7939 Clickhouse version
**Expected behavior**
The server should reject a configuration in which two or more disks use the same path.
**Errors**
```
ClickHouse client version 21.10.1.7939 (official build).
Connecting to localhost:9000 as user default.
Code: 210. DB::NetException: Connection refused (localhost:9000). (NETWORK_ERROR)
```
**Log**
```
<Error> Application: DB::Exception: Part `1_1_1_0` was found on disk `local2` which is not defined in the storage policy: Cannot attach table `default`.`test` from metadata file /var/lib/clickhouse/store/a1f/a1f83fde-e2be-44d7-a1f8-3fdee2be94d7/test.sql from query ATTACH TABLE default.test UUID 'bce3358a-0c4f-47c3-bce3-358a0c4f67c3' (`ID` Int32, `Value` String) ENGINE = MergeTree PARTITION BY ID ORDER BY ID SETTINGS storage_policy = 'local1', index_granularity = 8192: while loading database `default` from path /var/lib/clickhouse/metadata/default
```
| https://github.com/ClickHouse/ClickHouse/issues/29072 | https://github.com/ClickHouse/ClickHouse/pull/33905 | 662ea9d0244f3b3e5012ce233906d4819011620f | 289a51b61d636e9b2ac98dc912d4d1894cc31e2e | "2021-09-16T11:19:23Z" | c++ | "2022-01-26T14:38:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,052 | ["src/Interpreters/Context.cpp", "src/Storages/StorageReplicatedMergeTree.cpp", "tests/integration/test_replicated_merge_tree_with_auxiliary_zookeepers/test.py"] | Upgrade to 21.8 LTS fails with: Unknown auxiliary ZooKeeper name | **Describe the issue**
ClickHouse is unable to start up after upgrading from 20.8 to 21.8.
There is a replicated table with `:` in its ZooKeeper path, and it couldn't be loaded; startup fails with:
```
2021.09.15 03:49:58.435071 [ 633075 ] {} <Error> Application: DB::Exception: Unknown auxiliary ZooKeeper name '/test_prefix'. If it's required it can be added to the section <auxiliary_zookeepers> in config.xml: Cannot attach table `db1`.`table1` from metadata file /var/lib/clickhouse/metadata/db1/table1.sql from query ATTACH TABLE db1.table1 (`field1` String, `date` Date) ENGINE = ReplicatedMergeTree('/test_prefix:db1/{shard}/table1', '{replica}') PARTITION BY toMonth(date) PRIMARY KEY field1 ORDER BY field1 SETTINGS index_granularity = 8192: while loading database `db1` from path /var/lib/clickhouse/metadata/db1
```
**Expected behavior**
The ClickHouse server should be able to start up after the upgrade without a schema change. The "test_prefix" in "/test_prefix:db1/{shard}/table1" should not be interpreted as an auxiliary ZooKeeper name.
| https://github.com/ClickHouse/ClickHouse/issues/29052 | https://github.com/ClickHouse/ClickHouse/pull/30822 | 44b5dd116192fe3778d2676ee2ab7fe588f4df4a | f50b2b651b58cbf6ca5814586c159db67e518406 | "2021-09-15T15:22:45Z" | c++ | "2021-10-29T09:15:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 29,010 | ["src/Interpreters/ExpressionAnalyzer.cpp", "src/Interpreters/ExpressionAnalyzer.h", "src/Interpreters/InterpreterSelectQuery.cpp", "src/Interpreters/InterpreterSelectQuery.h", "src/Processors/QueryPlan/TotalsHavingStep.cpp", "src/Processors/QueryPlan/TotalsHavingStep.h", "src/Processors/Transforms/TotalsHavingTransform.cpp", "src/Processors/Transforms/TotalsHavingTransform.h", "tests/queries/0_stateless/2025_having_filter_column.reference", "tests/queries/0_stateless/2025_having_filter_column.sql"] | Query using GROUP BY ROLLUP fails on Block structure mismatch when selected field name matches column name | **Describe what's wrong**
A query with ROLLUP that reuses the name of an underlying column for a selected field reports a block structure mismatch.
**Does it reproduce on recent release?**
yes
**How to reproduce**
`21.9.2.17`
```sql
CREATE TABLE test
(
`d` DateTime,
`a` LowCardinality(String),
`b` UInt64
)
ENGINE = MergeTree
PARTITION BY toDate(d)
ORDER BY d;
SELECT *
FROM (
SELECT
a,
max((d, b)).2 AS value
FROM test
GROUP BY rollup(a)
)
WHERE a <> '';
```
**Expected behavior**
This used to return an ok result on v21.8
**Error message and/or stacktrace**
> Code: 352. DB::Exception: Block structure mismatch in (columns with identical name must have identical structure) stream: different columns:
> notEquals(a, '') LowCardinality(UInt8) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, UInt8(size = 1)))
> notEquals(a, '') LowCardinality(UInt8) Const(size = 0, ColumnLowCardinality(size = 1, UInt8(size = 1), ColumnUnique(size = 2, UInt8(size = 2)))). (AMBIGUOUS_COLUMN_NAME) (version 21.9.2.17 (official build))
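A possible workaround until this is fixed (an untested sketch): keep the outer filter from being pushed down into the ROLLUP subquery by disabling predicate pushdown:
```sql
SELECT *
FROM (
    SELECT a, max((d, b)).2 AS value
    FROM test
    GROUP BY rollup(a)
)
WHERE a <> ''
SETTINGS enable_optimize_predicate_expression = 0; -- the filter stays outside the aggregation
```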
| https://github.com/ClickHouse/ClickHouse/issues/29010 | https://github.com/ClickHouse/ClickHouse/pull/29475 | 8d8559090097126348e8b4433db8c53886a8bba6 | 14be2f31f54faba38918b95697f1d5e385817654 | "2021-09-14T11:00:30Z" | c++ | "2021-09-29T10:03:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,967 | ["src/Common/StringUtils/StringUtils.h", "src/Parsers/ExpressionElementParsers.cpp", "src/Parsers/Lexer.cpp", "tests/queries/0_stateless/02493_inconsistent_hex_and_binary_number.expect", "tests/queries/0_stateless/02493_inconsistent_hex_and_binary_number.reference", "tests/queries/0_stateless/02493_numeric_literals_with_underscores.reference", "tests/queries/0_stateless/02493_numeric_literals_with_underscores.sql"] | Should we support numeric literals in form of 1_000_000? (with underscore separator of groups). | Pros:
- it is convenient to type and read numbers this way;
- Ada, Perl, C++, Go and Rust already support that;
- can support thousands as well as lakhs and crores;
Cons:
- standard SQL does not have this support;
- PostgreSQL treats `SELECT 1_000_000` as the number one with a `_000_000` alias: http://sqlfiddle.com/#!17/0a28f/400
- C++ has `1'000'000` instead of `1_000_000`;
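For illustration, under the proposal the following would parse as plain numeric literals (a sketch of intended behavior; not valid ClickHouse syntax at the time of writing):
```sql
SELECT 1_000_000 AS millions, 10_00_000 AS lakhs; -- both would equal 1000000
```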
What do you think?
| https://github.com/ClickHouse/ClickHouse/issues/28967 | https://github.com/ClickHouse/ClickHouse/pull/43925 | 3ae9f121d966a2900e610b72c34bf7ae9317b1d0 | ef455904132829089bf58fd79af88280b15ee7f2 | "2021-09-13T09:38:54Z" | c++ | "2022-12-13T11:37:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,884 | ["src/Columns/MaskOperations.cpp", "src/Core/Settings.h", "tests/queries/0_stateless/01822_short_circuit.reference", "tests/queries/0_stateless/01822_short_circuit.sql"] | Regression in `LowCardinality` about some mask. | https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIHRvU3RhcnRPZk1vbnRoKGNyZWF0ZWRfYXQpIEFTIGRhdGUsIGNvdW50KCksIHVuaXEoYWN0b3JfbG9naW4pIEFTIHUsIGJhcih1LCAwLCAxMDAwLCAxMDApIEZST00gZ2l0aHViX2V2ZW50cyAKV0hFUkUgcmVwb19uYW1lIElOICgneWFuZGV4L0NsaWNrSG91c2UnLCAnQ2xpY2tIb3VzZS9DbGlja0hvdXNlJykgCiAgQU5EIGV2ZW50X3R5cGUgPSAnSXNzdWVDb21tZW50RXZlbnQnCiAgQU5EIGFjdG9yX2xvZ2luIE5PVCBMSUtFICdyb2JvdC0lJyBBTkQgYWN0b3JfbG9naW4gTk9UIExJS0UgJyVbYm90XScKR1JPVVAgQlkgZGF0ZSBPUkRFUiBCWSBkYXRl
```
SELECT toStartOfMonth(created_at) AS date, count(), uniq(actor_login) AS u, bar(u, 0, 1000, 100) FROM github_events
WHERE repo_name IN ('yandex/ClickHouse', 'ClickHouse/ClickHouse')
  AND event_type = 'IssueCommentEvent'
  AND actor_login NOT LIKE 'robot-%' AND actor_login NOT LIKE '%[bot]'
GROUP BY date ORDER BY date
```
```
Code: 44. DB::Exception: Cannot convert column ColumnLowCardinality to mask.: while executing 'FUNCTION and(notLike(actor_login, 'robot-%') :: 3, notLike(actor_login, '%[bot]') :: 1) -> and(notLike(actor_login, 'robot-%'), notLike(actor_login, '%[bot]')) UInt8 : 2': While executing MergeTreeThread. (ILLEGAL_COLUMN) (version 21.11.1.8052 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/28884 | https://github.com/ClickHouse/ClickHouse/pull/28887 | 294f4c897b16d342f86ba0d63bd95d1ab2c2bb87 | 1616a0e2308d1ae16b3fa8e95934c27dbb85e66d | "2021-09-11T07:19:42Z" | c++ | "2021-09-12T06:06:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,862 | ["src/Storages/MergeTree/MergeTreeReaderCompact.cpp", "tests/queries/0_stateless/02025_subcolumns_compact_parts.reference", "tests/queries/0_stateless/02025_subcolumns_compact_parts.sql"] | Invalid number of rows in Chunk column UInt8 position 0: | **Describe what's wrong**
Reading the `null` subcolumn doesn't work.
**Does it reproduce on recent release?**
Yes.
**How to reproduce**
Clickhouse version 21.8.5.7
Compact part.
`Nullable(String)` column.
All values in that column are non-nullable.
```
SELECT some_column_name.`null`
FROM default.xxx_test_4
LIMIT 8182, 10
┌─some_column_name.null─┐
│                     0 │
│                     0 │
│                     0 │
│                     0 │
│                     0 │
│                     0 │
│                     0 │
│                     0 │
│                     0 │
│                     0 │
└───────────────────────┘
10 rows in set. Elapsed: 0.003 sec. Processed 8.19 thousand rows, 8.19 KB (2.46 million rows/s., 2.46 MB/s.)
SELECT some_column_name.`null`
FROM default.xxx_test_4
LIMIT 8183, 10
Query id: 57433167-6bcf-4e67-8a0e-89a7c4640062
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.8.5):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of rows in Chunk column UInt8 position 0: expected 11508, got 3316: While executing MergeTree.
SELECT isNotNull(some_column_name)
FROM default.xxx_test_4
LIMIT 8183, 10
SETTINGS optimize_functions_to_subcolumns = 0
┌─isNotNull(some_column_name)─┐
│                           1 │
│                           1 │
│                           1 │
│                           1 │
│                           1 │
│                           1 │
│                           1 │
│                           1 │
│                           1 │
│                           1 │
└─────────────────────────────┘
10 rows in set. Elapsed: 0.002 sec. Processed 11.51 thousand rows, 287.70 KB (4.81 million rows/s., 120.15 MB/s.)
SELECT isNotNull(some_column_name)
FROM default.xxx_test_4
LIMIT 8183, 10
SETTINGS optimize_functions_to_subcolumns = 1
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 21.8.5):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Invalid number of rows in Chunk column UInt8 position 0: expected 11508, got 3316: While executing MergeTree.
```
11508 - 3316 = 8192, the default index granularity.
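A self-contained repro sketch along the same lines (table name and sizes are made up; assumes the part stays compact with the default index granularity):
```sql
CREATE TABLE t_nullable_compact (s Nullable(String))
ENGINE = MergeTree ORDER BY tuple()
SETTINGS min_bytes_for_wide_part = 1000000000; -- keep the part compact (assumption)
INSERT INTO t_nullable_compact SELECT 'x' FROM numbers(16384); -- two granules of 8192 rows
SELECT s.`null` FROM t_nullable_compact LIMIT 8183, 10; -- crosses the first granule boundary
```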
**Expected behavior**
Query works.
**Error message and/or stacktrace**
```
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8f9a2ba in /usr/bin/clickhouse
1. DB::Chunk::checkNumRowsIsConsistent() @ 0x11012245 in /usr/bin/clickhouse
2. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0x112689d0 in /usr/bin/clickhouse
3. DB::MergeTreeBaseSelectProcessor::readFromPart() @ 0x1126916d in /usr/bin/clickhouse
4. DB::MergeTreeBaseSelectProcessor::generate() @ 0x11267cab in /usr/bin/clickhouse
5. DB::ISource::tryGenerate() @ 0x1101a335 in /usr/bin/clickhouse
6. DB::ISource::work() @ 0x11019f1a in /usr/bin/clickhouse
7. DB::SourceWithProgress::work() @ 0x111eb0ca in /usr/bin/clickhouse
8. ? @ 0x110549dd in /usr/bin/clickhouse
9. DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x11051571 in /usr/bin/clickhouse
10. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x1104f5af in /usr/bin/clickhouse
11. DB::PipelineExecutor::execute(unsigned long) @ 0x1104f38d in /usr/bin/clickhouse
12. ? @ 0x1105c61f in /usr/bin/clickhouse
13. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8fdaf3f in /usr/bin/clickhouse
14. ? @ 0x8fde823 in /usr/bin/clickhouse
15. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
16. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
```
**Additional context**
Related to https://github.com/ClickHouse/ClickHouse/issues/20218
| https://github.com/ClickHouse/ClickHouse/issues/28862 | https://github.com/ClickHouse/ClickHouse/pull/28873 | f066edc43cfe1f74f6b0a744add260cbd96e88d1 | cdcfdbec7e703aa8d14069f874635357797d9410 | "2021-09-10T14:30:49Z" | c++ | "2021-09-12T12:54:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,852 | ["src/Interpreters/AsynchronousMetrics.cpp", "src/Storages/StorageS3Cluster.cpp"] | AsynchronousMetrics::update writes to log about "No data available". (CANNOT_READ_FROM_FILE_DESCRIPTOR) | Happening from time to time on my laptop (dozens of records per day):
```
2021.09.09 22:10:54.047840 [ 62690 ] {} <Error> void DB::AsynchronousMetrics::update(std::chrono::system_clock::time_point): Code: 74. DB::ErrnoException: Cannot read from file /sys/class/thermal/thermal_zone9/temp, errno: 61, strerror: No data available. (CANNOT_READ_FROM_FILE_DESCRIPTOR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x9440eba in /usr/lib/debug/.build-id/ba/25f6646c3be7aa95f452ec85461e96178aa365.debug
1. DB::throwFromErrnoWithPath(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, int) @ 0x9441ea9 in /usr/lib/debug/.build-id/ba/25f6646c3be7aa95f452ec85461e96178aa365.debug
2. DB::ReadBufferFromFileDescriptor::nextImpl() @ 0x94936ee in /usr/lib/debug/.build-id/ba/25f6646c3be7aa95f452ec85461e96178aa365.debug
3. void DB::readIntTextImpl<long, void, (DB::ReadIntTextCheckOverflow)0>(long&, DB::ReadBuffer&) @ 0x960eaf2 in /usr/lib/debug/.build-id/ba/25f6646c3be7aa95f452ec85461e96178aa365.debug
4. DB::AsynchronousMetrics::update(std::__1::chrono::time_point<std::__1::chrono::system_clock, std::__1::chrono::duration<long long, std::__1::ratio<1l, 1000000l> > >) @ 0x1080b8d2 in /usr/lib/debug/.build-id/ba/25f6646c3be7aa95f452ec85461e96178aa365.debug
5. DB::AsynchronousMetrics::run() @ 0x10811721 in /usr/lib/debug/.build-id/ba/25f6646c3be7aa95f452ec85461e96178aa365.debug
6. void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::AsynchronousMetrics::start()::$_0>(DB::AsynchronousMetrics::start()::$_0&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0x10815b72 in /usr/lib/debug/.build-id/ba/25f6646c3be7aa95f452ec85461e96178aa365.debug
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x9481f5f in /usr/lib/debug/.build-id/ba/25f6646c3be7aa95f452ec85461e96178aa365.debug
8. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x9485843 in /usr/lib/debug/.build-id/ba/25f6646c3be7aa95f452ec85461e96178aa365.debug
9. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
10. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.10.1.8002 (official build))
```
Those 2 files are reported:
```
/sys/class/thermal/thermal_zone8/temp
/sys/class/hwmon/hwmon7/temp1_input
```
The proposal is to catch / silently ignore that particular error; it doesn't require any attention.
Eventually, maybe we could 'blacklist' reading some files / system metrics? | https://github.com/ClickHouse/ClickHouse/issues/28852 | https://github.com/ClickHouse/ClickHouse/pull/28882 | aef6112af47fc34111a3f3b0bdfb99aa66b1beda | 48e8e2455203d6105cf1a6c84ceed267300e57a5 | "2021-09-10T09:27:18Z" | c++ | "2021-09-11T08:36:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,810 | ["src/Interpreters/executeQuery.cpp", "tests/queries/0_stateless/02028_create_select_settings.reference", "tests/queries/0_stateless/02028_create_select_settings.sql"] | Query setting `max_memory_usage` doesn't work for `CREATE TABLE ... as SELECT ...` | ClickHouse v.21.7.7.47
The setting `max_memory_usage` doesn't work for `CREATE TABLE ... AS` queries.
for example:
**this query works**
```sql
create table test_table engine MergeTree order by a as
select a_table.a, b_table.b_arr
from (select arrayJoin(range(10000)) as a) a_table
cross join (select range(10000) as b_arr) b_table
settings max_memory_usage = 1;
```
**this query doesn't work**
```sql
insert into test_table
select a_table.a, b_table.b_arr
from (select arrayJoin(range(10000)) as a) a_table
cross join (select range(10000) as b_arr) b_table
settings max_memory_usage = 1;
```
With error:
`DB::Exception: Memory limit (for query) exceeded: would use 4.09 MiB`
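A possible workaround to check (untested assumption): apply the limit at session scope instead of the per-query `SETTINGS` clause:
```sql
SET max_memory_usage = 1;
-- hypothetical table name; should now fail with the memory limit error (assumption)
create table test_table2 engine MergeTree order by a as
select a_table.a, b_table.b_arr
from (select arrayJoin(range(10000)) as a) a_table
cross join (select range(10000) as b_arr) b_table;
```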
**Expected behavior**
**this query should NOT work, and the setting max_memory_usage should work for all 'select' / 'insert' / 'create table' queries**
```sql
create table test_table engine MergeTree order by a as
select a_table.a, b_table.b_arr
from (select arrayJoin(range(10000)) as a) a_table
cross join (select range(10000) as b_arr) b_table
settings max_memory_usage = 1;
``` | https://github.com/ClickHouse/ClickHouse/issues/28810 | https://github.com/ClickHouse/ClickHouse/pull/28962 | 2bf47bb0ba2c79294ad63f721f2c9948c4eab6d0 | abe314feecd1647d7c2b952a25da7abf5c19f352 | "2021-09-09T14:34:35Z" | c++ | "2021-09-13T19:14:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,768 | ["CHANGELOG.md"] | here document syntax make `$` as special char | **Describe the issue**
Heredoc syntax makes `$` a special character; if `$` is used as a column name prefix, it may cause a parse problem.
**How to reproduce**
* Which ClickHouse server versions are incompatible
v21.9
* Queries to run that lead to unexpected result
```sql
select '1' as $doc
union all
select '2' as $doc
union all
select '2' as $doc
union all
select '2' as $doc;
```
**Error message and/or stacktrace**
```
Syntax error: failed at position 15 ('$doc
union all
select '2' as $doc
union all
select '2' as $doc
union all
select '2' as $') (line 1, col 15):
Expected one of: end of query, identifier
```
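A possible workaround sketch (not verified): quote the identifier so the lexer does not see a bare `$`:
```sql
SELECT '1' AS "$doc"
UNION ALL
SELECT '2' AS "$doc";
```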
Should we mark this feature as an incompatible change?
@kitaisreal
| https://github.com/ClickHouse/ClickHouse/issues/28768 | https://github.com/ClickHouse/ClickHouse/pull/29530 | 0c33f1121bd00e718d63764418a8a8a9059252b9 | 50913951e3c311333db310fb555fce080af7e8cb | "2021-09-09T03:22:04Z" | c++ | "2021-09-29T12:57:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,720 | ["src/Functions/array/arrayElement.cpp", "tests/queries/0_stateless/02383_array_signed_const_positive_index.reference", "tests/queries/0_stateless/02383_array_signed_const_positive_index.sql"] | Different behavior when working with arrays | ```
SELECT version()
┌─version()─┐
│ 21.8.5.7  │
└───────────┘
```
We have a query, as a result of which we get an array.
When getting an element using a computed index, no value is returned:
```
SELECT
arrayMap(x -> x, [[1], [2], [3]]) AS x,
toTypeName(x),
x[3 - 2] AS y,
x[toInt64(1)] AS yy,
x[toUInt8(1)] AS yyy
┌─x─────────────┬─toTypeName(arrayMap(lambda(tuple(x), x), [[1], [2], [3]]))─┬─y──┬─yy─┬─yyy─┐
│ [[1],[2],[3]] │ Array(Array(UInt8))                                        │ [] │ [] │ [1] │
└───────────────┴────────────────────────────────────────────────────────────┴────┴────┴─────┘
```
However, when the array is explicitly specified, the value is present.
The types are the same.
```
SELECT
[[1], [2], [3]] AS x,
toTypeName(x),
x[3 - 2] AS y,
x[toInt64(1)] AS yy,
x[toInt8(1)] AS yyy
┌─x─────────────┬─toTypeName([[1], [2], [3]])─┬─y───┬─yy──┬─yyy─┐
│ [[1],[2],[3]] │ Array(Array(UInt8))         │ [1] │ [1] │ [1] │
└───────────────┴─────────────────────────────┴─────┴─────┴─────┘
```
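Consistent with the `toUInt8(1)` column above, casting the computed index to an unsigned type looks like a workaround:
```sql
-- presumably returns [1], as the unsigned-index column does above
SELECT arrayMap(x -> x, [[1], [2], [3]]) AS x, x[toUInt64(3 - 2)] AS y;
```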
| https://github.com/ClickHouse/ClickHouse/issues/28720 | https://github.com/ClickHouse/ClickHouse/pull/40185 | 22c53e7f7bdaff8a0d5d5453a6cdeb4d5dbfc406 | 1ea751eb7b691bdc41e1a4fe1ccc74d1deece588 | "2021-09-08T06:38:01Z" | c++ | "2022-08-13T19:39:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,661 | ["src/Functions/MatchImpl.h", "tests/queries/0_stateless/02045_like_function.reference", "tests/queries/0_stateless/02045_like_function.sql"] | Issue with LIKE matching | ```
SELECT version();
┌─version()────┐
│ 21.10.1.8002 │
└──────────────┘
WITH lower('\RealVNC\WinVNC4 /v password') as CommandLine
SELECT
CommandLine,
CommandLine LIKE '%\\\\realvnc\\\\winvnc4%password%' as t1,
CommandLine LIKE '%\\\\realvnc\\\\winvnc4 %password%' as t2,
CommandLine LIKE '%\\\\realvnc\\\\winvnc4%password' as t3,
CommandLine LIKE '%\\\\realvnc\\\\winvnc4 %password' as t4,
CommandLine LIKE '%realvnc%winvnc4%password%' as t5,
CommandLine LIKE '%\\\\winvnc4%password%' as t6;
┌─CommandLine──────────────────┬─t1─┬─t2─┬─t3─┬─t4─┬─t5─┬─t6─┐
│ \realvnc\winvnc4 /v password │  0 │  1 │  1 │  1 │  1 │  1 │
└──────────────────────────────┴────┴────┴────┴────┴────┴────┘
1 rows in set. Elapsed: 0.001 sec.
```
All should match.
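The failure can presumably be isolated to the first pattern alone:
```sql
-- minimal failing case, per t1 above: returns 0, expected 1
SELECT lower('\RealVNC\WinVNC4 /v password') LIKE '%\\\\realvnc\\\\winvnc4%password%';
```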
http://sqlfiddle.com/#!9/73f0c6/2 | https://github.com/ClickHouse/ClickHouse/issues/28661 | https://github.com/ClickHouse/ClickHouse/pull/30244 | b87c819c104398012e3ef02c049103ab65a55125 | 377b937aa52b0c163e70357cad2a35838df72335 | "2021-09-06T15:36:24Z" | c++ | "2021-10-25T13:35:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,581 | ["src/Interpreters/LogicalExpressionsOptimizer.cpp", "tests/queries/0_stateless/02023_transform_or_to_in.reference", "tests/queries/0_stateless/02023_transform_or_to_in.sql"] | Generated IN clause tuple not compatible in distributed queries | Version 21.8.3
With `legacy_column_name_of_tuple_literal` set to 0, this query succeeds:
```
SELECT countIf(chi_count, video_level IN ('Video_3', 'Video_4', 'Video_5', 'Video_6', ' Video_7', 'Video_8', 'Video_9')) AS LBR
FROM comcast_xcr_maple.atsec_cdvr_1m
WHERE datetime > (now() - 300)
```
This query fails:
```
SELECT countIf(chi_count, (video_level = 'Video_3') OR (video_level = 'Video_4') OR (video_level = 'Video_5') OR (video_level = 'Video_6') OR (video_level = 'Video_7') OR (video_level = 'Video_8') OR (video_level = 'Video_9')) AS LBR
FROM comcast_xcr_maple.atsec_cdvr_1m
WHERE datetime > (now() - 300)
Received exception from server (version 21.8.3):
Code: 10. DB::Exception: Received from localhost:9440. DB::Exception: Not found column countIf(chi_count, in(video_level, tuple('Video_3', 'Video_4', 'Video_5', 'Video_6', 'Video_7', 'Video_8', 'Video_9'))) in block. There are only columns: countIf(chi_count, in(video_level, ('Video_3', 'Video_4', 'Video_5', 'Video_6', 'Video_7', 'Video_8', 'Video_9'))): While executing Remote.
```
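As noted below, a session-level fallback to the legacy naming is a workaround sketch (assuming the setting is accepted at session scope):
```sql
SET legacy_column_name_of_tuple_literal = 1; -- the OR-chain query above then succeeds
```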
Both queries work with `legacy_column_name_of_tuple_literal` set to 1. Presumably whatever code converts the multiple OR statements to a tuple uses the "old" column name and is incompatible with the new approach. | https://github.com/ClickHouse/ClickHouse/issues/28581 | https://github.com/ClickHouse/ClickHouse/pull/28658 | 3b9dae8718d1df8f35496e0cd485660f0ee5ee05 | 5d33baab5faa1a135684b30413e9f6d291b60107 | "2021-09-03T15:52:30Z" | c++ | "2021-09-08T10:26:45Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,535 | ["docs/en/sql-reference/functions/bit-functions.md", "src/Functions/bitSlice.cpp", "src/Functions/registerFunctionsArithmetic.cpp", "tests/queries/0_stateless/02154_bit_slice_for_fixedstring.reference", "tests/queries/0_stateless/02154_bit_slice_for_fixedstring.sql", "tests/queries/0_stateless/02154_bit_slice_for_string.reference", "tests/queries/0_stateless/02154_bit_slice_for_string.sql"] | `bitSlice` function | **Use case**
#8922
**Describe the solution you'd like**
`bitSlice(s, offset, length)`
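Hypothetical usage, assuming offsets and lengths are counted in bits rather than bytes:
```sql
SELECT bitSlice('Hello', 1, 8); -- would return the first 8 bits, i.e. the byte 'H'
SELECT bitSlice('Hello', 2, 8); -- 8 bits starting from the second bit
```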
`s` is `FixedString` or `String`;
the return value has the `String` data type.
Offset starts from 1 for consistency with `substring` and `arraySlice`. | https://github.com/ClickHouse/ClickHouse/issues/28535 | https://github.com/ClickHouse/ClickHouse/pull/33360 | f644602ec8a9a30f8ebd05d13ed0389e1525171d | 7156e64ee26095b9c88b848ddf30c7fbc66b7b7f | "2021-09-02T22:48:26Z" | c++ | "2022-01-20T13:20:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,525 | ["src/Functions/FunctionsExternalDictionaries.h", "tests/queries/0_stateless/2014_dict_get_nullable_key.reference", "tests/queries/0_stateless/2014_dict_get_nullable_key.sql"] | dictGet nullable argument stopped working | ```
CREATE TABLE dictionary_nullable_source_table( id UInt64, value Int64) ENGINE=TinyLog;
INSERT INTO dictionary_nullable_source_table VALUES (0, 0);
DROP DICTIONARY IF EXISTS flat_dictionary;
CREATE DICTIONARY flat_dictionary ( id UInt64, value Int64 ) PRIMARY KEY id
SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'dictionary_nullable_source_table'))
LIFETIME(MIN 1 MAX 1000)
LAYOUT(FLAT());
SELECT dictGet('flat_dictionary', 'value', Null);
```
On 21.6 and older - returns Null.
On 21.7 and newer:
```
Received exception from server (version 21.8.4):
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Third argument of function dictGet must be UInt64 when dictionary is simple. Actual type Nullable(Nothing).: While processing dictGet('flat_dictionary', 'value', NULL). Stack trace:
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8f9557a in /usr/bin/clickhouse
1. DB::Exception::Exception<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&&) @ 0xa7b78de in /usr/bin/clickhouse
2. DB::FunctionDictGetNoType<(DB::DictionaryGetFunctionType)0>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xb5da99c in /usr/bin/clickhouse
3. DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xb1266ca in /usr/bin/clickhouse
4. DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xb12574e in /usr/bin/clickhouse
5. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0xfa6cd85 in /usr/bin/clickhouse
6. DB::IExecutableFunction::defaultImplementationForConstantArguments(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0xfa6c772 in /usr/bin/clickhouse
7. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0xfa6cd45 in /usr/bin/clickhouse
8. DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> > const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0xfa6d3b2 in /usr/bin/clickhouse
9. DB::ActionsDAG::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<DB::ActionsDAG::Node const*, std::__1::allocator<DB::ActionsDAG::Node const*> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0xff4629f in /usr/bin/clickhouse
10. DB::ScopeStack::addFunction(std::__1::shared_ptr<DB::IFunctionOverloadResolver> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >) @ 0x101bcdb2 in /usr/bin/clickhouse
11. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x101c6f0c in /usr/bin/clickhouse
12. DB::ActionsMatcher::visit(DB::ASTExpressionList&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x101cef7e in /usr/bin/clickhouse
13. DB::InDepthNodeVisitor<DB::ActionsMatcher, true, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) @ 0x1018e917 in /usr/bin/clickhouse
14. DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) @ 0x1018e605 in /usr/bin/clickhouse
15. DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) @ 0x1019bda8 in /usr/bin/clickhouse
16. DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x101a1422 in /usr/bin/clickhouse
17. DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x103a97e6 in /usr/bin/clickhouse
18. ? @ 0x103a236b in /usr/bin/clickhouse
19. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&) @ 0x1039c77f in /usr/bin/clickhouse
20. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x1039af7e in /usr/bin/clickhouse
21. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x10575909 in /usr/bin/clickhouse
22. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x1017c2b7 in /usr/bin/clickhouse
23. ? @ 0x10739886 in /usr/bin/clickhouse
24. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0x107381c3 in /usr/bin/clickhouse
25. DB::TCPHandler::runImpl() @ 0x10fcd88d in /usr/bin/clickhouse
26. DB::TCPHandler::run() @ 0x10fe06d9 in /usr/bin/clickhouse
27. Poco::Net::TCPServerConnection::start() @ 0x13b5730f in /usr/bin/clickhouse
28. Poco::Net::TCPServerDispatcher::run() @ 0x13b58d9a in /usr/bin/clickhouse
29. Poco::PooledThread::run() @ 0x13c8bc19 in /usr/bin/clickhouse
30. Poco::ThreadImpl::runnableEntry(void*) @ 0x13c87eaa in /usr/bin/clickhouse
31. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
``` | https://github.com/ClickHouse/ClickHouse/issues/28525 | https://github.com/ClickHouse/ClickHouse/pull/28530 | 04b26d26bfd974a4d25528af5249d9844a88b7c7 | f076cc69b4bb4b4eacd38b6eca992d7021b63e3e | "2021-09-02T13:48:19Z" | c++ | "2021-09-04T10:29:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,520 | ["docs/en/sql-reference/functions/string-replace-functions.md", "docs/ru/sql-reference/functions/string-replace-functions.md", "src/Functions/registerFunctionsStringRegexp.cpp", "src/Functions/translate.cpp", "tests/queries/0_stateless/02353_translate.reference", "tests/queries/0_stateless/02353_translate.sql"] | a function like translate function of oracle | https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions196.htm
e.g.
```sql
SELECT TRANSLATE('SQL*Plus User''s Guide', ' */''', '___');
```
returns
```
SQL_Plus_Users_Guide
```
The character arguments may be UTF-8, e.g. mapping half-width parentheses in a CJK string to full-width ones:
```sql
TRANSLATE('中文(字)', '()', '（）')
```
returns
```
中文（字）
```
my ugly workaround SQL:
```sql
SELECT arrayStringConcat(
    arrayMap(x -> case when x in ['(', ')'] then ['（', '）'][indexOf(['(', ')'], x)] else x end,
             splitByRegexp('', '中文(字)'))
) AS res;
``` | https://github.com/ClickHouse/ClickHouse/issues/28520 | https://github.com/ClickHouse/ClickHouse/pull/38935 | 812143c76ba0ef50041f16c98c5c5c05c6fd4292 | cfe7413678e1cbaf7e4af8e150e578b7b536db6e | "2021-09-02T12:07:09Z" | c++ | "2022-07-14T00:31:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,515 | ["src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp", "tests/queries/0_stateless/01526_alter_add_and_modify_order_zookeeper.reference", "tests/queries/0_stateless/01526_alter_add_and_modify_order_zookeeper.sql"] | Server fails after restarting: Existing table metadata in ZooKeeper differs in sorting key expression. | How to reproduce (this is part of the 01526_alter_add_and_modify_order_zookeeper test):
```sql
CREATE TABLE table_for_alter
(
`d` Date,
`a` String,
`b` UInt8,
`x` String,
`y` Int8,
`version` UInt64,
`sign` Int8 DEFAULT 1
)
ENGINE = ReplicatedVersionedCollapsingMergeTree('/clickhouse/tables/01526_alter_add/t4', '1', sign, version)
PARTITION BY y
ORDER BY d
SETTINGS index_granularity = 8192;
INSERT INTO table_for_alter VALUES(toDate('2019-10-01'), 'a', 1, 'aa', 1, 1, 1);
ALTER TABLE table_for_alter ADD COLUMN order UInt32, MODIFY ORDER BY (d, order);
```
After restarting server it fails with the error:
```
<Error> Application: DB::Exception: Existing table metadata in ZooKeeper differs in sorting key expression. Stored in ZooKeeper: d, order, local: d, order, version: Cannot attach table `default`.`table_for_alter` from metadata file /home/avogar/ClickHouse/programs/server/store/379/3795dcee-cc60-41ab-b795-dceecc6001ab/table_for_alter.sql from query ATTACH TABLE default.table_for_alter UUID 'b7e7470c-6fcb-4a95-b7e7-470c6fcbda95' (`d` Date, `a` String, `b` UInt8, `x` String, `y` Int8, `version` UInt64, `sign` Int8 DEFAULT 1, `order` UInt32) ENGINE = ReplicatedVersionedCollapsingMergeTree('/clickhouse/tables/01526_alter_add/t4', '1', sign, version) PARTITION BY y PRIMARY KEY d ORDER BY (d, order) SETTINGS index_granularity = 8192: while loading database `default` from path ./metadata/default
``` | https://github.com/ClickHouse/ClickHouse/issues/28515 | https://github.com/ClickHouse/ClickHouse/pull/28528 | 027c5312438546b555acf720cf815a3b315cb48b | 189452443880dfe153980bde277fa73d0a727ede | "2021-09-02T10:41:55Z" | c++ | "2021-09-03T07:13:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,505 | ["cmake/find/zlib.cmake", "src/IO/ZlibDeflatingWriteBuffer.h", "src/IO/ZlibInflatingReadBuffer.h", "src/IO/examples/zlib_ng_bug.cpp"] | Data race in `ParallelFormattingOutputFormat` | https://clickhouse-test-reports.s3.yandex.net/28373/fed00a8f7698fae8b4294b6b4ff04d3f064bdace/functional_stateless_tests_(thread).html
```
==================
WARNING: ThreadSanitizer: data race (pid=380)
Read of size 8 at 0x00001bb9fc60 by thread T328:
#0 deflate_fast obj-x86_64-linux-gnu/../contrib/zlib-ng/deflate_fast.c:56:39 (clickhouse+0x1b5142f2)
#1 deflate obj-x86_64-linux-gnu/../contrib/zlib-ng/deflate.c:984:18 (clickhouse+0x1b51129b)
#2 DB::ZlibDeflatingWriteBuffer::nextImpl() obj-x86_64-linux-gnu/../src/IO/ZlibDeflatingWriteBuffer.cpp:86:22 (clickhouse+0x9ac3584)
#3 DB::WriteBuffer::next() obj-x86_64-linux-gnu/../src/IO/WriteBuffer.h:47:13 (clickhouse+0x15c86c03)
#4 DB::WriteBuffer::nextIfAtEnd() obj-x86_64-linux-gnu/../src/IO/WriteBuffer.h:69:13 (clickhouse+0x15c86c03)
#5 DB::WriteBuffer::write(char const*, unsigned long) obj-x86_64-linux-gnu/../src/IO/WriteBuffer.h:82:13 (clickhouse+0x15c86c03)
#6 DB::ParallelFormattingOutputFormat::collectorThreadFunction(std::__1::shared_ptr<DB::ThreadGroupStatus> const&) obj-x86_64-linux-gnu/../src/Processors/Formats/Impl/ParallelFormattingOutputFormat.cpp:118:21 (clickhouse+0x15c86c03)
#7 DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()::operator()() const obj-x86_64-linux-gnu/../src/Processors/Formats/Impl/ParallelFormattingOutputFormat.h:81:13 (clickhouse+0x15b63a7c)
#8 decltype(std::__1::forward<DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&>(fp)()) std::__1::__invoke_constexpr<DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&>(DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682:1 (clickhouse+0x15b63a7c)
#9 decltype(auto) std::__1::__apply_tuple_impl<DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&, std::__1::tuple<>&>(DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&, std::__1::tuple<>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415:1 (clickhouse+0x15b63a7c)
#10 decltype(auto) std::__1::apply<DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&, std::__1::tuple<>&>(DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&, std::__1::tuple<>&) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424:1 (clickhouse+0x15b63a7c)
#11 ThreadFromGlobalPool::ThreadFromGlobalPool<DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()>(DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&&)::'lambda'()::operator()() obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:182:13 (clickhouse+0x15b63a7c)
#12 decltype(std::__1::forward<DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()>(DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&&)::'lambda'()&>(DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse+0x15b639c1)
#13 void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()>(DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&&)::'lambda'()&>(DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9 (clickhouse+0x15b639c1)
#14 std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()>(DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&&)::'lambda'(), void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12 (clickhouse+0x15b639c1)
#15 void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()>(DB::ParallelFormattingOutputFormat::ParallelFormattingOutputFormat(DB::ParallelFormattingOutputFormat::Params)::'lambda'()&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16 (clickhouse+0x15b639c1)
#16 std::__1::__function::__policy_func<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16 (clickhouse+0x99f9595)
#17 std::__1::function<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12 (clickhouse+0x99f9595)
#18 ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:270:17 (clickhouse+0x99f9595)
#19 void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:136:73 (clickhouse+0x99fc918)
#20 decltype(std::__1::forward<void>(fp)(std::__1::forward<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(fp0)...)) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse+0x99fc918)
#21 void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:280:5 (clickhouse+0x99fc918)
#22 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291:5 (clickhouse+0x99fc918)
Previous write of size 8 at 0x00001bb9fc60 by thread T477:
[failed to restore the stack]
Location is global 'functable' of size 104 at 0x00001bb9fc30 (clickhouse+0x00001bb9fc60)
Thread T328 'Collector' (tid=28250, running) created by thread T62 at:
#0 pthread_create <null> (clickhouse+0x98e498b)
#1 std::__1::__libcpp_thread_create(unsigned long*, void* (*)(void*), void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__threading_support:509:10 (clickhouse+0x99fc3f0)
#2 std::__1::thread::thread<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'(), void>(void&&, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:307:16 (clickhouse+0x99fc3f0)
#3 void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:136:35 (clickhouse+0x99f7efc)
#4 ThreadPoolImpl<std::__1::thread>::scheduleOrThrow(std::__1::function<void ()>, int, unsigned long) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:168:5 (clickhouse+0x99fdc75)
#5 ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:166:38 (clickhouse+0x99fdc75)
#6 void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:136:35 (clickhouse+0x99fa18d)
#7 ThreadPoolImpl<ThreadFromGlobalPool>::scheduleOrThrowOnError(std::__1::function<void ()>, int) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:156:5 (clickhouse+0x99f9d5f)
#8 DB::IBackgroundJobExecutor::execute(DB::JobAndPool) obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:129:43 (clickhouse+0x1573c79a)
#9 DB::StorageMergeTree::scheduleDataProcessingJob(DB::IBackgroundJobExecutor&) obj-x86_64-linux-gnu/../src/Storages/StorageMergeTree.cpp:1105:18 (clickhouse+0x15a2fe20)
#10 DB::BackgroundJobsExecutor::scheduleJob() obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:259:17 (clickhouse+0x1573d7fc)
#11 DB::IBackgroundJobExecutor::backgroundTaskFunction() obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:215:10 (clickhouse+0x1573cf0d)
#12 DB::IBackgroundJobExecutor::start()::$_1::operator()() const obj-x86_64-linux-gnu/../src/Storages/MergeTree/BackgroundJobsExecutor.cpp:188:46 (clickhouse+0x1573e861)
#13 decltype(std::__1::forward<DB::IBackgroundJobExecutor::start()::$_1&>(fp)()) std::__1::__invoke<DB::IBackgroundJobExecutor::start()::$_1&>(DB::IBackgroundJobExecutor::start()::$_1&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse+0x1573e861)
#14 void std::__1::__invoke_void_return_wrapper<void>::__call<DB::IBackgroundJobExecutor::start()::$_1&>(DB::IBackgroundJobExecutor::start()::$_1&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9 (clickhouse+0x1573e861)
#15 std::__1::__function::__default_alloc_func<DB::IBackgroundJobExecutor::start()::$_1, void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12 (clickhouse+0x1573e861)
#16 void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::IBackgroundJobExecutor::start()::$_1, void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16 (clickhouse+0x1573e861)
#17 std::__1::__function::__policy_func<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16 (clickhouse+0x14702d47)
#18 std::__1::function<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12 (clickhouse+0x14702d47)
#19 DB::BackgroundSchedulePoolTaskInfo::execute() obj-x86_64-linux-gnu/../src/Core/BackgroundSchedulePool.cpp:106:5 (clickhouse+0x14702d47)
#20 DB::TaskNotification::execute() obj-x86_64-linux-gnu/../src/Core/BackgroundSchedulePool.cpp:19:28 (clickhouse+0x14704c2b)
#21 DB::BackgroundSchedulePool::threadFunction() obj-x86_64-linux-gnu/../src/Core/BackgroundSchedulePool.cpp:265:31 (clickhouse+0x14704c2b)
#22 DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1::operator()() const obj-x86_64-linux-gnu/../src/Core/BackgroundSchedulePool.cpp:161:48 (clickhouse+0x14705430)
#23 decltype(std::__1::forward<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&>(fp)()) std::__1::__invoke_constexpr<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682:1 (clickhouse+0x14705430)
#24 decltype(auto) std::__1::__apply_tuple_impl<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&, std::__1::tuple<>&>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&, std::__1::tuple<>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415:1 (clickhouse+0x14705430)
#25 decltype(auto) std::__1::apply<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&, std::__1::tuple<>&>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&, std::__1::tuple<>&) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424:1 (clickhouse+0x14705430)
#26 ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'()::operator()() obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:182:13 (clickhouse+0x14705430)
#27 decltype(std::__1::forward<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'()&>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse+0x14705430)
#28 void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'()&>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9 (clickhouse+0x14705430)
#29 std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'(), void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12 (clickhouse+0x14705430)
#30 void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16 (clickhouse+0x14705430)
#31 std::__1::__function::__policy_func<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16 (clickhouse+0x99f9595)
#32 std::__1::function<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12 (clickhouse+0x99f9595)
#33 ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:270:17 (clickhouse+0x99f9595)
#34 void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:136:73 (clickhouse+0x99fc918)
#35 decltype(std::__1::forward<void>(fp)(std::__1::forward<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(fp0)...)) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse+0x99fc918)
#36 void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:280:5 (clickhouse+0x99fc918)
#37 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291:5 (clickhouse+0x99fc918)
Thread T477 'Collector' (tid=28341, running) created by thread T470 at:
#0 pthread_create <null> (clickhouse+0x98e498b)
#1 std::__1::__libcpp_thread_create(unsigned long*, void* (*)(void*), void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/__threading_support:509:10 (clickhouse+0x99fc3f0)
#2 std::__1::thread::thread<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'(), void>(void&&, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:307:16 (clickhouse+0x99fc3f0)
#3 void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:136:35 (clickhouse+0x99f7efc)
#4 ThreadPoolImpl<std::__1::thread>::scheduleOrThrow(std::__1::function<void ()>, int, unsigned long) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:168:5 (clickhouse+0x99f8847)
#5 ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PipelineExecutor::executeImpl(unsigned long)::$_4>(DB::PipelineExecutor::executeImpl(unsigned long)::$_4&&) obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:166:38 (clickhouse+0x15bc5d7e)
#6 void std::__1::allocator<ThreadFromGlobalPool>::construct<ThreadFromGlobalPool, DB::PipelineExecutor::executeImpl(unsigned long)::$_4>(ThreadFromGlobalPool*, DB::PipelineExecutor::executeImpl(unsigned long)::$_4&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/memory:886:28 (clickhouse+0x15bc5d7e)
#7 void std::__1::allocator_traits<std::__1::allocator<ThreadFromGlobalPool> >::__construct<ThreadFromGlobalPool, DB::PipelineExecutor::executeImpl(unsigned long)::$_4>(std::__1::integral_constant<bool, true>, std::__1::allocator<ThreadFromGlobalPool>&, ThreadFromGlobalPool*, DB::PipelineExecutor::executeImpl(unsigned long)::$_4&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:519:21 (clickhouse+0x15bc5d7e)
#8 void std::__1::allocator_traits<std::__1::allocator<ThreadFromGlobalPool> >::construct<ThreadFromGlobalPool, DB::PipelineExecutor::executeImpl(unsigned long)::$_4>(std::__1::allocator<ThreadFromGlobalPool>&, ThreadFromGlobalPool*, DB::PipelineExecutor::executeImpl(unsigned long)::$_4&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/__memory/allocator_traits.h:481:14 (clickhouse+0x15bc5d7e)
#9 void std::__1::vector<ThreadFromGlobalPool, std::__1::allocator<ThreadFromGlobalPool> >::__construct_one_at_end<DB::PipelineExecutor::executeImpl(unsigned long)::$_4>(DB::PipelineExecutor::executeImpl(unsigned long)::$_4&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:926:5 (clickhouse+0x15bc1448)
#10 ThreadFromGlobalPool& std::__1::vector<ThreadFromGlobalPool, std::__1::allocator<ThreadFromGlobalPool> >::emplace_back<DB::PipelineExecutor::executeImpl(unsigned long)::$_4>(DB::PipelineExecutor::executeImpl(unsigned long)::$_4&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/vector:1684:9 (clickhouse+0x15bc1448)
#11 DB::PipelineExecutor::executeImpl(unsigned long) obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:772:21 (clickhouse+0x15bc1448)
#12 DB::PipelineExecutor::execute(unsigned long) obj-x86_64-linux-gnu/../src/Processors/Executors/PipelineExecutor.cpp:407:9 (clickhouse+0x15bc1176)
#13 DB::threadFunction(DB::PullingAsyncPipelineExecutor::Data&, std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) obj-x86_64-linux-gnu/../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:80:24 (clickhouse+0x15bd0a70)
#14 DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0::operator()() const obj-x86_64-linux-gnu/../src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:107:13 (clickhouse+0x15bd0a70)
#15 decltype(std::__1::forward<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&>(fp)()) std::__1::__invoke_constexpr<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3682:1 (clickhouse+0x15bd0a70)
#16 decltype(auto) std::__1::__apply_tuple_impl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1415:1 (clickhouse+0x15bd0a70)
#17 decltype(auto) std::__1::apply<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&) obj-x86_64-linux-gnu/../contrib/libcxx/include/tuple:1424:1 (clickhouse+0x15bd0a70)
#18 ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()::operator()() obj-x86_64-linux-gnu/../src/Common/ThreadPool.h:182:13 (clickhouse+0x15bd0a70)
#19 decltype(std::__1::forward<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse+0x15bd0a70)
#20 void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/__functional_base:348:9 (clickhouse+0x15bd0a70)
#21 std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>::operator()() obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:1608:12 (clickhouse+0x15bd0a70)
#22 void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2089:16 (clickhouse+0x15bd0a70)
#23 std::__1::__function::__policy_func<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2221:16 (clickhouse+0x99f9595)
#24 std::__1::function<void ()>::operator()() const obj-x86_64-linux-gnu/../contrib/libcxx/include/functional:2560:12 (clickhouse+0x99f9595)
#25 ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:270:17 (clickhouse+0x99f9595)
#26 void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()::operator()() const obj-x86_64-linux-gnu/../src/Common/ThreadPool.cpp:136:73 (clickhouse+0x99fc918)
#27 decltype(std::__1::forward<void>(fp)(std::__1::forward<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(fp0)...)) std::__1::__invoke<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...) obj-x86_64-linux-gnu/../contrib/libcxx/include/type_traits:3676:1 (clickhouse+0x99fc918)
#28 void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>&, std::__1::__tuple_indices<>) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:280:5 (clickhouse+0x99fc918)
#29 void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) obj-x86_64-linux-gnu/../contrib/libcxx/include/thread:291:5 (clickhouse+0x99fc918)
SUMMARY: ThreadSanitizer: data race obj-x86_64-linux-gnu/../contrib/zlib-ng/deflate_fast.c:56:39 in deflate_fast
==================
``` | https://github.com/ClickHouse/ClickHouse/issues/28505 | https://github.com/ClickHouse/ClickHouse/pull/28534 | 7ddbadeeb37fe1635a0be93f6bc9e120012b5b41 | 7929ee4d9b0b8c8c01570cf377fd30ea81b6da7e | "2021-09-02T08:15:39Z" | c++ | "2021-09-03T12:00:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,381 | ["src/Functions/date_trunc.cpp", "tests/queries/0_stateless/00189_time_zones_long.reference", "tests/queries/0_stateless/00921_datetime64_compatibility_long.reference"] | date_trunc with 'month' as first parameter returns Date instead of DateTime | Reading the related issue, I understand that ClickHouse's `date_trunc()` was modelled after Postgres's `DATE_TRUNC()`. However, while Postgres returns `DateTime` for every unit, ClickHouse returns `Date` for any unit larger than `day`. The documentation doesn't include this behavior, but says that the only return type is `DateTime`.
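A sketch of the mismatch (return types as described above; exact results may vary by version):
```sql
SELECT toTypeName(date_trunc('day', now()));   -- DateTime
SELECT toTypeName(date_trunc('month', now())); -- Date, per this report
SELECT toUnixTimestamp(date_trunc('month', now())); -- presumably breaks, since toUnixTimestamp expects DateTime
```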
I pass the result of `date_trunc()` into `toUnixTimestamp()` so it's essential that the function returns `DateTime` for every unit. | https://github.com/ClickHouse/ClickHouse/issues/28381 | https://github.com/ClickHouse/ClickHouse/pull/48851 | 98a88bef7acff34c1e027673cd208c16a4a53102 | 2aa8619534eb270feb2d0fea27fdb8cc6b8d3703 | "2021-08-30T22:02:49Z" | c++ | "2023-05-02T14:43:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,322 | ["docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-layout.md"] | RangeHashedDictionary does not support integer values outside of Int64 | When a RANGE_HASHED dictionary's range boundary holds the UInt64 maximum value, attribute lookups return empty values. Setup:
```
CREATE DICTIONARY analytics.all_ipv6s_lower (
id UInt64,
start_addr UInt64,
end_addr UInt64,
subnet_mask String,
isp String,
province String
)
PRIMARY KEY id
SOURCE(FILE(path '/var/lib/clickhouse/user_files/all_ipv6s_lower.csv' format 'CSV'))
LIFETIME(0)
LAYOUT(RANGE_HASHED())
RANGE(MIN start_addr MAX end_addr)
```
all_ipv6s_lower.csv content below:
```
1,0,18446744073709551615,ffff:ffff::,value1,value2,value3
```
Then we run:
```
select * from analytics.all_ipv6s_lower
```
only `id`, `start_addr` and `end_addr` have values; the other columns are empty.
Version:
```
ClickHouse client version 21.8.4.51 (official build).
```
**How to reproduce**
If we change the CSV content to the following:
```
1,0,1,ffff:ffff::,value1,value2,value3
```
That is, changing the UInt64 max value `18446744073709551615` to `1` makes it work, and all column values are returned.
**Expected behavior**
all_ipv6s_lower.csv:
```
1,0,18446744073709551615,ffff:ffff::,value1,value2,value3
```
all_ipv6s_lower.csv:
```
1,0,1,ffff:ffff::,value1,value2,value3
```
Both files should work.
| https://github.com/ClickHouse/ClickHouse/issues/28322 | https://github.com/ClickHouse/ClickHouse/pull/29038 | 6ef3e9e57e738e1eb544877e95b682e67fe295e5 | fb9da614ed29fb4d71f1f9b8ed4f2132935a06e3 | "2021-08-30T06:59:49Z" | c++ | "2021-09-15T21:23:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,280 | ["tests/queries/0_stateless/02129_add_column_add_ttl.reference", "tests/queries/0_stateless/02129_add_column_add_ttl.sql"] | alter table add column + TTL on this column == Not found column .. in block | ```
create table ttl_test(a Int64, b String, d Date)Engine=MergeTree partition by d order by a;
insert into ttl_test select number, '', today() from numbers(1000);
alter table ttl_test add column c Int64;
insert into ttl_test select number, '', today(), 0 from numbers(1000);
alter table ttl_test modify TTL (d + INTERVAL 1 MONTH) DELETE WHERE c=1;
(version 21.9.1):
Code: 341. DB::Exception: Received from localhost:9000. DB::Exception:
Exception happened during execution of mutation 'mutation_3.txt' with part '20210827_1_1_0' reason: 'Code: 10. DB::Exception: Not found column c in block. There are only columns: a, b, d.
(NOT_FOUND_COLUMN_IN_BLOCK) (version 21.9.1.7603)'.
This error maybe retryable or not. In case of unretryable error, mutation can be killed with KILL MUTATION query. (UNFINISHED)
```
Workaround:
```
alter table ttl_test update c=c where 1 settings mutations_sync=2;
alter table ttl_test modify TTL (d + INTERVAL 1 MONTH) DELETE WHERE c=1;
``` | https://github.com/ClickHouse/ClickHouse/issues/28280 | https://github.com/ClickHouse/ClickHouse/pull/32235 | 66e1fb7adad8ce28af4c9cf126f704bdefafa746 | 2f620fb6ac410eb819eb1fe3eb2637fdac52824d | "2021-08-27T19:27:21Z" | c++ | "2021-12-13T13:59:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,198 | ["src/Storages/StorageMerge.cpp", "tests/queries/0_stateless/02014_storage_merge_order_by.reference", "tests/queries/0_stateless/02014_storage_merge_order_by.sql"] | ORDER BY does not work as expected with Merge engine | I have two tables something like the following (ClickHouse server version 20.8.7 revision 54438):
```
CREATE TABLE short (e UUID, t DateTime, ...24 additional columns...) ENGINE = MergeTree PARTITION BY e ORDER BY t TTL t + toIntervalDay(7)
CREATE TABLE long (e UUID, t DateTime, ...24 additional columns...) ENGINE = MergeTree PARTITION BY (e, toStartOfMonth(t)) ORDER BY t TTL t + toIntervalDay(30)
```
The two tables are identical except for their partition and TTL expressions. We generate a unified view of these two tables using the Merge engine:
`CREATE TABLE merged (e UUID, t DateTime, ...24 additional columns...) ENGINE = Merge('db', 'short|long')`
The problem is that when I query the merged table and order by t, the results are non-deterministic (and almost always wrong). For example:
```
SELECT t
FROM merged
WHERE t > '2021-08-01 00:00:00'
ORDER BY t ASC
LIMIT 5
Query id: e81738ae-8177-4f30-8a33-4b6fd1540f55
ββββββββββββββββββββtββ
β 2021-08-20 09:42:51 β
β 2021-08-20 09:43:53 β
β 2021-08-20 09:44:56 β
β 2021-08-20 09:45:59 β
β 2021-08-20 09:47:02 β
βββββββββββββββββββββββ
5 rows in set. Elapsed: 0.147 sec. Processed 132.74 thousand rows, 530.94 KB (903.28 thousand rows/s., 3.61 MB/s.)
```
Repeating the exact same query yields different results:
```
SELECT t
FROM merged
WHERE t > '2021-08-01 00:00:00'
ORDER BY t ASC
LIMIT 5
Query id: 3bdf4d80-58b7-42f2-8dd3-85a61bc7e4a0
ββββββββββββββββββββtββ
β 2021-08-26 14:55:55 β
β 2021-08-26 14:55:55 β
β 2021-08-26 14:55:57 β
β 2021-08-26 14:56:00 β
β 2021-08-26 14:56:20 β
βββββββββββββββββββββββ
5 rows in set. Elapsed: 0.357 sec. Processed 1.82 thousand rows, 7.28 KB (5.09 thousand rows/s., 20.38 KB/s.)
```
The correct results can be seen with an expensive union query:
```
SELECT t
FROM
(
SELECT t
FROM short
UNION ALL
SELECT t
FROM long
)
WHERE t > '2021-08-01 00:00:00'
ORDER BY t ASC
LIMIT 5
Query id: 2e1e2712-6025-4de5-9d82-34ac11ef4662
ββββββββββββββββββββtββ
β 2021-08-01 00:00:01 β
β 2021-08-01 00:00:01 β
β 2021-08-01 00:00:01 β
β 2021-08-01 00:00:02 β
β 2021-08-01 00:00:02 β
βββββββββββββββββββββββ
5 rows in set. Elapsed: 0.403 sec. Processed 42.56 million rows, 170.26 MB (105.55 million rows/s., 422.20 MB/s.)
```
I'm not sure what the secret sauce is to trigger this behaviour. I tried reproducing it with a simpler set of tables but was unable to do so.
| https://github.com/ClickHouse/ClickHouse/issues/28198 | https://github.com/ClickHouse/ClickHouse/pull/28266 | 37b66dc70a883752a70e99be9b0faa50ed84f81b | 4eef445df919f84f27832a961042a53b3961b3e9 | "2021-08-26T16:13:18Z" | c++ | "2021-08-27T22:15:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,177 | ["src/Common/TLDListsHolder.cpp", "tests/config/config.d/top_level_domains_lists.xml", "tests/config/top_level_domains/no_new_line_list.dat", "tests/queries/0_stateless/01601_custom_tld.reference", "tests/queries/0_stateless/01601_custom_tld.sql"] | Parsing of TLD list not ending with a newline fails | Not a big deal, but after bad1f91f280567da18cd461287570220b0255c4b parsing of TLD list not ending with a newline raises basic_string and terminates the server | https://github.com/ClickHouse/ClickHouse/issues/28177 | https://github.com/ClickHouse/ClickHouse/pull/28213 | b1d4967b88890f57a98d5be09bf0adc44a2dd90e | cdfdb2cd7520b6981c36743bbffe1451c81de2b5 | "2021-08-26T09:34:28Z" | c++ | "2021-08-27T07:57:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,018 | ["cmake/target.cmake", "contrib/boost", "contrib/grpc-cmake/protobuf_generate_grpc.cmake", "contrib/protobuf-cmake/CMakeLists.txt", "contrib/protobuf-cmake/protobuf_generate.cmake"] | Protobuf missing in Aarch64 prebuilt binary | **Describe the bug**
When using the provided Clickhouse Aarch64 binary (https://builds.clickhouse.tech/master/aarch64/clickhouse), support for Protocol Buffers is not compiled in.
1. Precompiled [aarch64 binary](https://builds.clickhouse.tech/master/aarch64/clickhouse): the string `protobuf` appears on 61 lines:
```
root@be947c164427:/docker-entrypoint-initdb.d# strings /usr/bin/clickhouse |grep -i Protobuf|wc
61 69 2753
```
2. Precompiled [amd64 binary](https://builds.clickhouse.tech/master/amd64/clickhouse), running on Intel: the string `protobuf` appears on 73k lines:
```
root@a004c6d132b9:/# strings /usr/bin/clickhouse |grep -i Protobuf |wc
73921 133401 9789832
```
This includes what clearly looks like google library code:
```
ZN2DB12_GLOBAL__N_124ProtobufSerializerNumberImE12setFunctionsEvEUlmE2_
ZN2DB12_GLOBAL__N_124ProtobufSerializerNumberImE12setFunctionsEvEUlvE5_
ZN2DB12_GLOBAL__N_124ProtobufSerializerNumberImE12setFunctionsEvEUlvE6_
...
N6google8protobuf10TextFormat6Parser10ParserImpl20ParserErrorCollectorE
N6google8protobuf10TextFormat7Printer33FastFieldValuePrinterUtf8EscapingE
N6google8protobuf10TextFormat7Printer28DebugStringFieldValuePrinterE
N6google8protobuf10TextFormat7Printer13TextGeneratorE
```
More context here: https://github.com/PostHog/posthog/pull/5215#issuecomment-903579533
**Does it reproduce on recent release?**
The prebuilt binary comes from master, and I can't find a prebuilt binary for a specific release. So... maybe?
**How to reproduce**
- Download the latest Aarch64 binary and run it on a M1 mac via Docker and a custom [Dockerfile](https://github.com/PostHog/posthog/blob/77b7989f748a59906f9a97d123b81de2e397e54a/docker/clickhouse/arm64.Dockerfile)
- Create a table that uses Kafka and Protobuf
- Watch errors in the clickhouse server logs.
**Expected behavior**
It should work like the `amd64` prebuilt binary.
**Error message and/or stacktrace**
This is the error I get on the server:
```
2021.08.23 08:42:50.446095 [ 383 ] {} <Warning> StorageKafka (kafka_person_distinct_id): Can't get assignment. Will keep trying.
2021.08.23 08:42:50.628470 [ 379 ] {} <Warning> StorageKafka (kafka_session_recording_events): Can't get assignment. Will keep trying.
2021.08.23 08:42:50.801783 [ 388 ] {} <Warning> StorageKafka (kafka_person): Can't get assignment. Will keep trying.
2021.08.23 08:42:50.852789 [ 381 ] {} <Error> void DB::StorageKafka::threadFunc(size_t): Code: 73. DB::Exception: Unknown format Protobuf. (UNKNOWN_FORMAT), Stack trace (when copying this message, always include the lines below):
0. /build/build_docker/../contrib/poco/Foundation/src/Exception.cpp:28: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0xdcf0c74 in /usr/bin/clickhouse
1. /build/build_docker/../src/Common/Exception.cpp:59: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x740e968 in /usr/bin/clickhouse
2. /build/build_docker/../src/Formats/FormatFactory.cpp:43: DB::FormatFactory::getCreators(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xd4f1234 in /usr/bin/clickhouse
3. /build/build_docker/../contrib/libcxx/include/functional:2236: DB::FormatFactory::getInputFormat(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::ReadBuffer&, DB::Block const&, std::__1::shared_ptr<DB::Context const>, unsigned long, std::__1::optional<DB::FormatSettings> const&) const @ 0xd4f2438 in /usr/bin/clickhouse
4. /build/build_docker/../contrib/libcxx/include/optional:225: DB::KafkaBlockInputStream::readImpl() @ 0xd1fabd8 in /usr/bin/clickhouse
5. /build/build_docker/../src/DataStreams/IBlockInputStream.cpp:57: DB::IBlockInputStream::read() @ 0xc863c2c in /usr/bin/clickhouse
6. /build/build_docker/../contrib/libcxx/include/vector:658: DB::copyData(DB::IBlockInputStream&, DB::IBlockOutputStream&, std::__1::function<void (DB::Block const&)> const&, std::__1::atomic<bool>*) @ 0xc87e870 in /usr/bin/clickhouse
7. /build/build_docker/../contrib/libcxx/include/functional:2191: DB::StorageKafka::streamToViews() @ 0xd1db8d4 in /usr/bin/clickhouse
8. /build/build_docker/../src/Storages/Kafka/StorageKafka.cpp:0: DB::StorageKafka::threadFunc(unsigned long) @ 0xd1dabec in /usr/bin/clickhouse
9. /build/build_docker/../src/Common/Stopwatch.h:12: DB::BackgroundSchedulePoolTaskInfo::execute() @ 0xca1f398 in /usr/bin/clickhouse
10. /build/build_docker/../contrib/poco/Foundation/include/Poco/AutoPtr.h:90: DB::BackgroundSchedulePool::threadFunction() @ 0xca2096c in /usr/bin/clickhouse
11. /build/build_docker/../src/Common/ThreadPool.h:183: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, unsigned long, char const*)::$_1&&)::'lambda'(), void ()> >(std::__1::__function::__policy_storage const*) @ 0xca20ef4 in /usr/bin/clickhouse
12. /build/build_docker/../contrib/libcxx/include/functional:2210: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x743fd60 in /usr/bin/clickhouse
13. /build/build_docker/../contrib/libcxx/include/memory:1655: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x7441c38 in /usr/bin/clickhouse
14. start_thread @ 0x84fc in /usr/lib/aarch64-linux-gnu/libpthread-2.31.so
15. ? @ 0xd467c in /usr/lib/aarch64-linux-gnu/libc-2.31.so
(version 21.9.1.7509 (official build))
```
**Additional context**
It would be just amazing to get `aarch64` images in docker hub as well! Having true multiplatform support for clickhouse images would greatly simplify our build setup, but for now, just getting protobuf compiled in would solve most of our problems.
| https://github.com/ClickHouse/ClickHouse/issues/28018 | https://github.com/ClickHouse/ClickHouse/pull/30015 | ed6088860ee818692ba32405b1cf17e4583a0aea | eb1748b8b4ac26c378ac11b9ba4c1a012ff3c189 | "2021-08-23T09:22:03Z" | c++ | "2021-10-13T02:24:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 28,001 | ["src/Processors/Transforms/WindowTransform.cpp", "src/Processors/Transforms/WindowTransform.h", "tests/queries/0_stateless/01591_window_functions.reference", "tests/queries/0_stateless/01591_window_functions.sql"] | Window functions: signed integer overflow. | https://clickhouse-test-reports.s3.yandex.net/0/624cb43f7f85b576dc2831e20623a5cf5b878792/fuzzer_ubsan/report.html
```
SELECT max(intDiv(1048575, NULL)) OVER (ORDER BY mod(number, 1024) DESC NULLS LAST RANGE BETWEEN 1 PRECEDING AND CURRENT ROW), 0, number, nth_value(toNullable(number), -9223372036854775808) OVER w AS firstValue, nth_value(toNullable(number), NULL) OVER w AS thridValue FROM numbers(1) WINDOW w AS (ORDER BY number ASC)
``` | https://github.com/ClickHouse/ClickHouse/issues/28001 | https://github.com/ClickHouse/ClickHouse/pull/28773 | 60f76d9254c265e9ae4d46b18e0ce08b6df16717 | aa5c42b7dbc2c451e85a8cb7f798a436424ea5b5 | "2021-08-22T22:38:59Z" | c++ | "2021-09-12T12:57:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,861 | ["src/Dictionaries/getDictionaryConfigurationFromAST.cpp", "tests/queries/0_stateless/02716_create_direct_dict_with_lifetime_crash.reference", "tests/queries/0_stateless/02716_create_direct_dict_with_lifetime_crash.sql"] | It's possible to create direct dictionary with LIFETIME, but not use it. | **Describe the issue**
You can create a direct dictionary with a LIFETIME clause and ClickHouse won't complain, but it will throw an exception if you try to use this dictionary.
**How to reproduce**
ClickHouse version: 21.9
```
CREATE TABLE dict_source (key UInt64, value String) ENGINE=MergeTree ORDER BY key;
INSERT INTO dict_source SELECT number, toString(number) FROM numbers(1000000);
CREATE DICTIONARY dict (`key` UInt64, `value` String) PRIMARY KEY key SOURCE(CLICKHOUSE(table 'dict_source')) LAYOUT(DIRECT()) LIFETIME(300);
SELECT dictGet('dict', 'value', 1::UInt64);
SELECT dictGet('dict', 'value', CAST('1', 'UInt64'))
Query id: d31b3dd7-bd6c-4c7b-b997-f9f7222c3cfa
0 rows in set. Elapsed: 0.004 sec.
Received exception from server (version 21.9.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: 'lifetime' parameter is redundant for the dictionary' of layout 'direct': While processing dictGet('dict', 'value', CAST('1', 'UInt64')). (BAD_ARGUMENTS)
```
**Expected behavior**
ClickHouse should throw this error at dictionary creation time.
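For reference, a hedged sketch of the corrected DDL: the same dictionary without the redundant `LIFETIME` clause (the name `dict_ok` is mine):
```sql
CREATE DICTIONARY dict_ok
(
    `key` UInt64,
    `value` String
)
PRIMARY KEY key
SOURCE(CLICKHOUSE(table 'dict_source'))
LAYOUT(DIRECT());

SELECT dictGet('dict_ok', 'value', toUInt64(1));
```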
**Error message and/or stacktrace**
```
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: 'lifetime' parameter is redundant for the dictionary' of layout 'direct': While processing dictGet('dict', 'value', CAST('1', 'UInt64')). (BAD_ARGUMENTS)
``` | https://github.com/ClickHouse/ClickHouse/issues/27861 | https://github.com/ClickHouse/ClickHouse/pull/49043 | 180562adfb67aea4800393b82f0ef2a221ba014c | b2b6720737c5220e15a1ea43d38db4310d8d7eb6 | "2021-08-19T11:28:46Z" | c++ | "2023-09-30T04:54:50Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,832 | ["src/Interpreters/Context.cpp", "src/Storages/StorageDistributed.cpp", "src/Storages/StorageDistributed.h", "tests/queries/0_stateless/00987_distributed_stack_overflow.sql", "tests/queries/0_stateless/01763_max_distributed_depth.sql"] | DDL creation of a distributed table with empty cluster '' returns exception but with wrong table created, restart will fail. | When I tried to create a distributed table with the empty cluster '', the create statement failed with the exception "Code: 170. DB::Exception: Received from localhost:9000. DB::Exception: Requested cluster '' not found.". However, SHOW TABLES still shows the WRONG table.
Later, when CH restarted, it was unable to start because it could not attach this wrong table.
The question is: when starting up the newly created table fails during creation, should the new table be dropped? Or should we add a special check on the cluster name for the Distributed engine to avoid this kind of error? Or could we just ignore this error during startup so the CK server can start instead of failing?
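A hedged workaround sketch: if the half-created table is noticed before a restart, dropping it should remove the bad metadata so startup cannot trip over it (an assumption, not verified on this version):
```sql
DROP TABLE IF EXISTS t_empty;
```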
CH version: 21.7.3.14 (official build).
How to reproduce:
1. Create the wrong table: `create table t_empty(a int) engine=Distributed('','','');`
2. Force CH to restart and check the status: `service clickhouse-server forcerestart`, then `service clickhouse-server status`.
Now there is no clickhouse-server process.
**Output:**
node236 :) create table t_empty(a int) engine=Distributed('','','');
CREATE TABLE t_empty
(
`a` int
)
ENGINE = Distributed('', '', '')
Query id: c9eb24ea-f87d-4431-9bdc-18d21fbb5636
0 rows in set. Elapsed: 0.054 sec.
Received exception from server (version 21.7.3):
**Code: 170. DB::Exception: Received from localhost:9000. DB::Exception: Requested cluster '' not found.**
node236 :) show tables;
SHOW TABLES
Query id: 0d9e2398-26c5-46eb-baf2-a02111bc770e
ββnameβββββ
β people β
β **t_empty** β
βββββββββββ
2 rows in set. Elapsed: 0.001 sec.
node236 :) show create table t_empty;
SHOW CREATE TABLE t_empty
Query id: 8c489c90-dfe0-4392-9742-17a43a075c12
ββstatementββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β CREATE TABLE default.t_empty
(
`a` Int32
)
ENGINE = **Distributed('', '', '')** β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
==== clickhouse-server.err.log ====
### backtrace for DDL ###
2021.08.19 12:12:03.150990 [ 19900 ] {c9eb24ea-f87d-4431-9bdc-18d21fbb5636} <Error> TCPHandler: Code: 170, e.displayText() = DB::Exception: Requested cluster '' n
ot found, Stack trace:
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8d31b5a in /usr/bin/click
house
1. DB::Context::getCluster(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf73b378 in /usr/bin/clickhouse
2. DB::StorageDistributed::getCluster() const @ 0x100081f7 in /usr/bin/clickhouse
3. DB::StorageDistributed::startup() @ 0x1000e5b2 in /usr/bin/clickhouse
4. DB::InterpreterCreateQuery::doCreateTable(DB::ASTCreateQuery&, DB::InterpreterCreateQuery::TableProperties const&) @ 0xf875226 in /usr/bin/clickhouse
5. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0xf8711e3 in /usr/bin/clickhouse
6. DB::InterpreterCreateQuery::execute() @ 0xf87735c in /usr/bin/clickhouse
7. ? @ 0xfe22253 in /usr/bin/clickhouse
8. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::Que
ryProcessingStage::Enum, bool) @ 0xfe208e3 in /usr/bin/clickhouse
9. DB::TCPHandler::runImpl() @ 0x1069f6c2 in /usr/bin/clickhouse
10. DB::TCPHandler::run() @ 0x106b25d9 in /usr/bin/clickhouse
11. Poco::Net::TCPServerConnection::start() @ 0x1338b30f in /usr/bin/clickhouse
12. Poco::Net::TCPServerDispatcher::run() @ 0x1338cd9a in /usr/bin/clickhouse
13. Poco::PooledThread::run() @ 0x134bfc19 in /usr/bin/clickhouse
14. Poco::ThreadImpl::runnableEntry(void*) @ 0x134bbeaa in /usr/bin/clickhouse
15. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
16. __clone @ 0xfe9fd in /usr/lib64/libc-2.17.so
### backtrace for startup of CH ###
2021.08.19 12:22:47.254734 [ 25174 ] {} <Error> Application: Caught exception while loading metadata: Code: 170, e.displayText() = DB::Exception: Requested cluster '' not found: while loading database `default` from path /var/lib/clickhouse/metadata/default, Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x8d31b5a in /usr/bin/clickhouse
1. DB::Context::getCluster(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) const @ 0xf73b378 in /usr/bin/clickhouse
2. DB::StorageDistributed::getCluster() const @ 0x100081f7 in /usr/bin/clickhouse
3. DB::StorageDistributed::startup() @ 0x1000e5b2 in /usr/bin/clickhouse
4. ? @ 0xf646a3b in /usr/bin/clickhouse
5. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x8d75738 in /usr/bin/clickhouse
6. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0x8d772df in /usr/bin/clickhouse
7. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x8d72a1f in /usr/bin/clickhouse
8. ? @ 0x8d76303 in /usr/bin/clickhouse
9. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
10. __clone @ 0xfe9fd in /usr/lib64/libc-2.17.so
(version 21.7.3.14 (official build))
2021.08.19 12:22:48.260957 [ 25174 ] {} <Error> Application: DB::Exception: Requested cluster '' not found: while loading database `default` from path /var/lib/clickhouse/metadata/default
| https://github.com/ClickHouse/ClickHouse/issues/27832 | https://github.com/ClickHouse/ClickHouse/pull/27927 | adc63ce27938268a2885f967d5f5edaf40064ea2 | 5ac6a995429e13c50510e797dd9de158f8b8a2b3 | "2021-08-19T04:26:59Z" | c++ | "2021-08-21T07:40:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,800 | ["base/common/arithmeticOverflow.h", "src/IO/ReadHelpers.h", "tests/queries/0_stateless/2020_cast_integer_overflow.reference", "tests/queries/0_stateless/2020_cast_integer_overflow.sql"] | toInt32OrNull('-2147483648') unexpectedly returns NULL | **Describe the bug**
-2147483648 is the minimum integer representable with Int32, but ``toInt32OrNull('-2147483648')`` returns ``NULL``. On the other hand ``toInt32('-2147483648')`` correctly returns ``-2147483648``.
**Does it reproduce on recent release?**
Yes. It happens on 21.7.5.29-stable
**How to reproduce**
```
SELECT
toInt32('-2147483648'),
toInt32OrNull('-2147483648')
Query id: 94f4e069-6c0e-44d9-8e94-7ee83b90a529
ββtoInt32('-2147483648')ββ¬βtoInt32OrNull('-2147483648')ββ
β -2147483648 β α΄Ία΅α΄Έα΄Έ β
ββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββ
```
**Expected behavior**
``toInt32('-2147483648')`` and ``toInt32OrNull('-2147483648')`` should both return ``-2147483648``.
| https://github.com/ClickHouse/ClickHouse/issues/27800 | https://github.com/ClickHouse/ClickHouse/pull/29063 | 333fd323f51d65a93a9aeea52f3be23873aaa008 | 14e4d4960182e3884c20c43fb8226b0f8444b0fd | "2021-08-18T02:29:08Z" | c++ | "2021-09-17T13:06:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,784 | ["src/Dictionaries/RedisDictionarySource.cpp", "src/Dictionaries/RedisDictionarySource.h", "src/Dictionaries/RedisSource.cpp", "src/Dictionaries/RedisSource.h", "tests/integration/test_dictionaries_redis/test_long.py"] | ClickHouse is stuck when using a dictionary with Redis as the data source | I have a table with one million rows of data. And It's schema is like:
```
create table redis_dictionary_test
(
date String,
id String,
value Int64
)
engine = MergeTree()
partition by date
order by id
settings index_granularity = 8192;
```
I also inserted one million key-value pairs into Redis.
The dictionary is like:
```
create dictionary redis_dict
(
date String,
id String,
value Int64
)
PRIMARY KEY date, id
SOURCE(REDIS(
host '127.0.0.1'
port 6379
storage_type 'hash_map'
db_index 0
))
LAYOUT(COMPLEX_KEY_DIRECT())
```
When I execute this query: `SELECT COUNT(DISTINCT dictGet('redis_dict', 'value', tuple(date, id))) FROM redis_dictionary_test;`, ClickHouse seems stuck. The progress stayed at 122.88 thousand rows, 5.84 MB (1.20 million rows/s., 56.83 MB/s.), 12%, for about five minutes. Finally, I canceled the query.
I am sure Redis is fine, because I monitored it and it kept accepting read and write commands.
I also executed this query on 1000, 10000 and 100000 rows of data; those runs returned data normally.
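One hedged way to see what the server is doing while the query hangs (a sketch; the column set is assumed to exist on this version):
```sql
SELECT query_id, elapsed, read_rows, total_rows_approx, memory_usage
FROM system.processes;
```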
What happened to ClickHouse? Where can I find out the reason? | https://github.com/ClickHouse/ClickHouse/issues/27784 | https://github.com/ClickHouse/ClickHouse/pull/33804 | 1fd79e732be2efeb8bf8a93f542cf8a0b31c5db5 | 548a7bccee6c5188c062c3945ded1d3967a51eec | "2021-08-17T10:53:01Z" | c++ | "2022-01-21T10:40:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,745 | ["src/AggregateFunctions/AggregateFunctionQuantile.cpp", "src/AggregateFunctions/AggregateFunctionQuantile.h", "tests/queries/0_stateless/00753_quantile_format.reference", "tests/queries/0_stateless/00753_quantile_format.sql"] | Unknown function quantileBFloat16Weighted | Looks like bfloat16 quantiles don't provide a way to specify the weight:
```sql
SELECT quantileBFloat16Weighted(0.5)(number, number)
FROM numbers(100)
Received exception from server (version 21.8.3):
Code: 46. DB::Exception: Received from localhost:9000. DB::Exception: Unknown function quantileBFloat16Weighted
```
But I expect it to work the same way as tdigest:
```sql
SELECT quantileTDigestWeighted(0.5)(number, number)
FROM numbers(100)
Query id: ea66ede3-afc6-4e5d-8949-2da5970e1349
ββquantileTDigestWeighted(0.5)(number, number)ββ
β 70.35461 β
ββββββββββββββββββββββββββββββββββββββββββββββββ
```
/cc @RedClusive just in case :) | https://github.com/ClickHouse/ClickHouse/issues/27745 | https://github.com/ClickHouse/ClickHouse/pull/27758 | bce6d092ea9bc205477cd5cb6548d49e0cacca46 | a461f30d1accd89b887bf9588247f87dd2d1a0b4 | "2021-08-16T12:57:17Z" | c++ | "2021-08-23T10:16:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,705 | ["base/glibc-compatibility/CMakeLists.txt", "cmake/add_warning.cmake"] | build failed using clang-13 with error: variable 'y' set but not used [-Werror,-Wunused-but-set-variable] | **Operating system**
```
semin-serg@semin-serg-ub:~/ClickHouse$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.2 LTS"
```
**Cmake version**
```
semin-serg@semin-serg-ub:~/ClickHouse$ cmake --version
cmake version 3.16.3
CMake suite maintained and supported by Kitware (kitware.com/cmake).
```
**Ninja version**
```
semin-serg@semin-serg-ub:~/ClickHouse$ ninja --version
1.10.0
```
**Compiler name and version**
```
semin-serg@semin-serg-ub:~/ClickHouse$ clang-13 --version
Ubuntu clang version 13.0.0-++20210813082601+aac4fe380d16-1~exp1~20210813063406.40
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
semin-serg@semin-serg-ub:~/ClickHouse$ clang++-13 --version
Ubuntu clang version 13.0.0-++20210813082601+aac4fe380d16-1~exp1~20210813063406.40
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin
```
**Full cmake and/or ninja output**
See the attached file: [cmake-and-ninja-run.txt](https://github.com/ClickHouse/ClickHouse/files/6988608/cmake-and-ninja-run.txt)
| https://github.com/ClickHouse/ClickHouse/issues/27705 | https://github.com/ClickHouse/ClickHouse/pull/27714 | 5c56d3a7344615921b952f388caa779b0f245e22 | c4a14ffca214dd876cedab8d0ec80046cc5ad006 | "2021-08-15T18:03:47Z" | c++ | "2021-08-16T06:28:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,691 | ["src/Interpreters/join_common.cpp", "src/Processors/Transforms/JoiningTransform.cpp", "tests/queries/0_stateless/00445_join_nullable_keys.reference", "tests/queries/0_stateless/00445_join_nullable_keys.sql", "tests/queries/0_stateless/01142_join_lc_and_nullable_in_key.reference", "tests/queries/0_stateless/01142_join_lc_and_nullable_in_key.sql"] | Logical error: 'ColumnUnique can't contain null values.' | #27685 | https://github.com/ClickHouse/ClickHouse/issues/27691 | https://github.com/ClickHouse/ClickHouse/pull/28349 | 8a269d64d2c426c867f3c4917f92a47ddd434fe4 | 94d5f3a87bab5ac7be64c84fd04ee9d0d00ee6e9 | "2021-08-15T10:44:38Z" | c++ | "2021-08-31T14:09:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,687 | ["src/Common/DenseHashMap.h", "src/Common/DenseHashSet.h", "src/Common/SparseHashMap.h", "src/Core/NamesAndTypes.cpp", "src/Storages/MergeTree/IMergeTreeReader.cpp", "src/Storages/MergeTree/IMergeTreeReader.h", "src/Storages/StorageInMemoryMetadata.cpp"] | Remove google::dense_hash | As discussed with @kitaisreal | https://github.com/ClickHouse/ClickHouse/issues/27687 | https://github.com/ClickHouse/ClickHouse/pull/27690 | 88f75375def220ffcd8ae1899e9a7a7133c9bcff | 76b050248274afcc6c0119f060eb00c06edfd297 | "2021-08-15T08:47:31Z" | c++ | "2021-08-15T23:39:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,679 | ["src/Parsers/ExpressionElementParsers.cpp", "src/Parsers/ExpressionListParsers.cpp", "tests/queries/0_stateless/01852_cast_operator_3.reference", "tests/queries/0_stateless/01852_cast_operator_3.sql", "tests/queries/0_stateless/01852_cast_operator_bad_cases.reference", "tests/queries/0_stateless/01852_cast_operator_bad_cases.sh"] | PostgreSQL-style cast operator is not applicable for negative numeric literals, e.g. -1::INT | ```
milovidov-desktop :) SELECT -1::INT
Syntax error: failed at position 10 ('::')
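-- Hedged workarounds, my assumption and not verified on this build:
--   SELECT (-1)::INT
--   SELECT CAST(-1 AS INT)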
``` | https://github.com/ClickHouse/ClickHouse/issues/27679 | https://github.com/ClickHouse/ClickHouse/pull/27876 | 273b8b9bc15496738f895292717f5515c4d945f5 | 74ebc5c75c6531b8d8e5d428bb1e3f52f9cc42be | "2021-08-15T05:26:36Z" | c++ | "2021-08-20T07:55:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,622 | ["src/Functions/array/arrayIntersect.cpp", "tests/queries/0_stateless/00556_array_intersect.reference"] | arrayIntersect output elements order | The documentation for `arrayIntersect(arr)` says:
_Takes multiple arrays, returns an array with elements that are present in all source arrays.
Elements order in the resulting array is the same as in the first array._
```
SELECT arrayIntersect(materialize([999, -11, 2, 3]), materialize([3, 2, 1, 999, -11]))
ββarrayIntersect(materialize([999, -11, 2, 3]), materialize([3, 2, 1, 999, -11]))ββ
β [-11,3,999,2] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
SELECT arrayIntersect([1, 2, 3], [3, 2, 1]) AS l
ββlββββββββ
β [3,2,1] β
βββββββββββ
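-- Hedged workaround preserving the first array's order (my assumption, not verified):
--   SELECT arrayFilter(x -> has([3, 2, 1], x), [1, 2, 3])   -- expected [1,2,3]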
``` | https://github.com/ClickHouse/ClickHouse/issues/27622 | https://github.com/ClickHouse/ClickHouse/pull/51850 | 9e0d27dc4d4d7f01446364eb1f4746e347fe5705 | 3021f99f330691c324a1fdcf93e9303a60aa2ee7 | "2021-08-12T16:12:54Z" | c++ | "2023-08-03T10:28:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,568 | ["contrib/croaring-cmake/CMakeLists.txt"] | [OSX] Server won't start | **Describe the bug**
OSx crashes right at the start when running the server.
**Does it reproduce on recent release?**
Tested with the official master build and building with master too:
**Error message and/or stacktrace**
```bash
$ ./clickhouse server
Processing configuration file 'config.xml'.
There is no file 'config.xml', will use embedded config.
Logging trace to console
Segmentation fault: 11
$ ../ClickHouse/build/programs/clickhouse-server
Processing configuration file 'config.xml'.
There is no file 'config.xml', will use embedded config.
Logging trace to console
Segmentation fault: 11
```
**Additional context**
Crash dump:
```
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x00007fff202ed946 __pthread_kill + 10
1 libsystem_pthread.dylib 0x00007fff2031c615 pthread_kill + 263
2 libsystem_c.dylib 0x00007fff20271411 abort + 120
3 clickhouse-server 0x000000010ee8d46d Poco::SignalHandler::handleSignal(int) + 45 (SignalHandler.cpp:94)
4 libsystem_platform.dylib 0x00007fff20361d7d _sigtramp + 29
5 dyld 0x0000000127863799 ImageLoaderMachOCompressed::resolveTwolevel(ImageLoader::LinkContext const&, char c
onst*, ImageLoader const*, ImageLoader const*, unsigned int, bool, bool, ImageLoader const**) + 89
6 clickhouse-server 0x00000001105e1d1d std::__1::unique_ptr<char, void (*)(void*)>::reset(char*) + 8 (memory:1658) [inline
d]
7 clickhouse-server 0x00000001105e1d1d std::__1::unique_ptr<char, void (*)(void*)>::~unique_ptr() + 8 (memory:1612) [inlin
ed]
8 clickhouse-server 0x00000001105e1d1d std::__1::unique_ptr<char, void (*)(void*)>::~unique_ptr() + 8 (memory:1612) [inlin
ed]
9 clickhouse-server 0x00000001105e1d1d std::__1::__fs::filesystem::__canonical(std::__1::__fs::filesystem::path const&, st
d::__1::error_code*) + 189 (operations.cpp:643)
10 clickhouse-server 0x00000001105e6335 std::__1::__fs::filesystem::__weakly_canonical(std::__1::__fs::filesystem::path con
st&, std::__1::error_code*) + 1077 (operations.cpp:1347)
11 clickhouse-server 0x000000010e7cf9ea std::__1::__fs::filesystem::weakly_canonical(std::__1::__fs::filesystem::path const&) + 7 (filesystem:2219) [inlined]
12 clickhouse-server 0x000000010e7cf9ea (anonymous namespace)::determineDefaultTimeZone() + 695 (DateLUT.cpp:78) [inlined]
13 clickhouse-server 0x000000010e7cf9ea DateLUT::DateLUT() + 778 (DateLUT.cpp:144)
14 clickhouse-server 0x000000010e7d14da DateLUT::DateLUT() + 12 (DateLUT.cpp:142) [inlined]
15 clickhouse-server 0x000000010e7d14da DateLUT::getInstance() + 58 (DateLUT.cpp:162)
16 clickhouse-server 0x000000010d09b2fa DateLUT::setDefaultTimezone(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 5 (DateLUT.h:37) [inlined]
17 clickhouse-server 0x000000010d09b2fa BaseDaemon::initialize(Poco::Util::Application&) + 1978 (BaseDaemon.cpp:660)
18 clickhouse-server 0x0000000107bcb41c DB::Server::initialize(Poco::Util::Application&) + 28
19 clickhouse-server 0x000000010e80249b Poco::Util::Application::run() + 27 (Application.cpp:329)
20 clickhouse-server 0x0000000107bcb356 DB::Server::run() + 662
21 clickhouse-server 0x0000000107bc9bfe mainEntryClickHouseServer(int, char**) + 958
22 clickhouse-server 0x0000000107b16cfb main + 1179 (main.cpp:366)
23 libdyld.dylib 0x00007fff20337f3d start + 1
```
If I provide a config file (the default one modified to write things into /Users/r/ch and setting a timezone) it doesn't crash but aborts instead:
```
$ ./clickhouse server --config config/config.xml
Processing configuration file 'config/config.xml'.
Logging trace to /Users/r/ch/clickhouse-server/clickhouse-server.log
Abort trap: 6
```
Happens both when running as a normal user and as root.
Tested on OSX 11.3.1 Big Sur (both under VM and real machine) and 11.4 (real machine).
The 21.6 branch (63cdcec58c07d0e0136cf2f1e54f880ceb1114e4, 21.6.9.1) built from source works. I'll try to bisect it, but it's a pretty slow process.
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,511 | ["src/AggregateFunctions/AggregateFunctionExponentialMovingAverage.cpp", "src/AggregateFunctions/registerAggregateFunctions.cpp", "src/Common/ExponentiallySmoothedCounter.h", "tests/queries/0_stateless/2020_exponential_smoothing.reference", "tests/queries/0_stateless/2020_exponential_smoothing.sql", "tests/queries/0_stateless/2021_exponential_sum.reference", "tests/queries/0_stateless/2021_exponential_sum.sql", "tests/queries/0_stateless/2021_exponential_sum_shard.reference", "tests/queries/0_stateless/2021_exponential_sum_shard.sql"] | Exponentially smoothed moving average as aggregate function. | The function will take two arguments, value and time, plus a parameter: the half-decay period.
Example: `exponentialMovingAverage(300)(temperature, timestamp)`
\- exponentially smoothed moving average of the temperature for the past five minutes at the latest point of time.
The state of the aggregate function is the current averaged value and the latest time: (v, t).
Whenever a new value or a new state appears, the state is updated as:
```
t_new = max(t_old, t)
v_new = v_old * (1 / exp2((t_new - t_old) / half_decay)) + v * (1 - 1 / exp2((t_new - t_old) / half_decay))
```
(a sort of; did I write the formula correctly? and does this way of calculation depend on the order of updates?)
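As a point of comparison, a minimal SQL sketch of one closed-form interpretation: a half-decay-weighted average taken at the latest timestamp. The table `events(timestamp, temperature)` is hypothetical, and this form is not necessarily identical to the sequential state merge above:
```sql
WITH (SELECT max(toUInt32(timestamp)) FROM events) AS t_now
SELECT
    sum(temperature * exp2(-(t_now - toUInt32(timestamp)) / 300))
  / sum(exp2(-(t_now - toUInt32(timestamp)) / 300)) AS smoothed -- half-decay = 300 s
FROM events;
```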
| https://github.com/ClickHouse/ClickHouse/issues/27511 | https://github.com/ClickHouse/ClickHouse/pull/28914 | c6dd89147108543eff9cb116286fe72c670cbdc4 | 3203fa4c34ac66990393e846621c89352fd4ac42 | "2021-08-10T03:43:09Z" | c++ | "2021-09-21T20:52:44Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,502 | ["src/Interpreters/MutationsInterpreter.cpp"] | DROP/MODIFY COLUMN for compact part memory usage for tables with thousands of columns. | Suppose you have a really wide table with many thousands of columns and you want to drop or modify a column.
For wide parts it works fine, but for compact parts ClickHouse tries to allocate a lot of memory and gets killed by the OOM killer.
**Does it reproduce on recent release?**
Yes, 21.9
**How to reproduce**
```
clickhouse-client -mn --query="SELECT 'CREATE TABLE xxx_really_wide( ' || arrayStringConcat(groupArray('column_'|| toString(number) || ' Nullable(UInt32)'), ',') || ') ENGINE=MergeTree ORDER BY assumeNotNull(column_0)' FROM numbers(6000) FORMAT TSVRaw" | clickhouse-client -mn
clickhouse-client -mn --query="SELECT arrayStringConcat(replicate('1', range(6000)), ',') FROM numbers(300) FORMAT TSVRaw" | clickhouse-client -mn --query "INSERT INTO xxx_really_wide FORMAT CSV " --send_logs_level='trace'
```
```
SELECT *
FROM system.merges
FORMAT Vertical
Query id: d9751847-a689-49f0-aab3-f868d9c08e34
Row 1:
ββββββ
database: default
table: xxx_really_wide
elapsed: 84.8952157
progress: 0
num_parts: 1
source_part_names: ['all_1_1_0']
result_part_name: all_1_1_0_3
source_part_paths: ['/var/lib/clickhouse/data/default/xxx_really_wide/all_1_1_0/']
result_part_path: /var/lib/clickhouse/data/default/xxx_really_wide/all_1_1_0_3/
partition_id: all
is_mutation: 1
total_size_bytes_compressed: 456027
total_size_marks: 2
bytes_read_uncompressed: 0
rows_read: 0
bytes_written_uncompressed: 0
rows_written: 0
columns_written: 0
memory_usage: 16836533272
thread_id: 16171
merge_type:
merge_algorithm:
Row 2:
ββββββ
database: default
table: xxx_really_wide
elapsed: 84.8947852
progress: 0
num_parts: 1
source_part_names: ['all_2_2_0']
result_part_name: all_2_2_0_3
source_part_paths: ['/var/lib/clickhouse/data/default/xxx_really_wide/all_2_2_0/']
result_part_path: /var/lib/clickhouse/data/default/xxx_really_wide/all_2_2_0_3/
partition_id: all
is_mutation: 1
total_size_bytes_compressed: 468027
total_size_marks: 2
bytes_read_uncompressed: 0
rows_read: 0
bytes_written_uncompressed: 0
rows_written: 0
columns_written: 0
memory_usage: 16706294896
thread_id: 16174
merge_type:
merge_algorithm:
```
| https://github.com/ClickHouse/ClickHouse/issues/27502 | https://github.com/ClickHouse/ClickHouse/pull/41122 | 4d146b05a959e52c004df3ef5da986408d19adb4 | 19893804858350e30feebd30f6e8feb37dae741f | "2021-08-09T21:50:22Z" | c++ | "2022-09-13T12:10:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,469 | ["docs/en/sql-reference/aggregate-functions/parametric-functions.md", "src/AggregateFunctions/AggregateFunctionWindowFunnel.h", "tests/queries/0_stateless/00632_aggregation_window_funnel.reference", "tests/queries/0_stateless/00632_aggregation_window_funnel.sql"] | What is the precise definition of WindowFunnel's 'strict' MODE? | According to my reading of the source code, it seems that strict = true will return the event index directly.
For example, if the event sequence is `A->B->C->B->D` and the conditional sequence is `A->B->C->D`, it returns 2 (not 3 or 4) when it reaches the second B.
But if the conditional sequence is `A->B->C`, it returns 3 directly when it reaches `C`.
This makes `strict` confusing to me.
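A hedged SQL repro sketch of the first example (the table `funnel` and its contents are hypothetical; per the code path below it seems to return 2):
```sql
CREATE TABLE funnel (ts UInt32, event String) ENGINE = Memory;
INSERT INTO funnel VALUES (1, 'A'), (2, 'B'), (3, 'C'), (4, 'B'), (5, 'D');

SELECT windowFunnel(100, 'strict')(
    ts, event = 'A', event = 'B', event = 'C', event = 'D') AS level
FROM funnel;
```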
```CPP
for (const auto & pair : data.events_list)
{
const T & timestamp = pair.first;
const auto & event_idx = pair.second - 1;
if (strict_order && event_idx == -1)
{
if (first_event)
break;
else
continue;
}
else if (event_idx == 0)
{
events_timestamp[0] = std::make_pair(timestamp, timestamp);
first_event = true;
}
else if (strict && events_timestamp[event_idx].has_value())
{
return event_idx + 1;//example1 return here
}
else if (strict_order && first_event && !events_timestamp[event_idx - 1].has_value())
{
for (size_t event = 0; event < events_timestamp.size(); ++event)
{
if (!events_timestamp[event].has_value())
return event;
}
}
else if (events_timestamp[event_idx - 1].has_value())
{
auto first_timestamp = events_timestamp[event_idx - 1]->first;
bool time_matched = timestamp <= first_timestamp + window;
if (strict_increase)
time_matched = time_matched && events_timestamp[event_idx - 1]->second < timestamp;
if (time_matched)
{
events_timestamp[event_idx] = std::make_pair(first_timestamp, timestamp);
if (event_idx + 1 == events_size)
return events_size;//example2 return here
}
}
}
```
I can see the design intent of `strict` in https://github.com/ClickHouse/ClickHouse/pull/6548.
I think the answer in the first example should be 3 instead of 2. Is this a bug? | https://github.com/ClickHouse/ClickHouse/issues/27469 | https://github.com/ClickHouse/ClickHouse/pull/27563 | cfa571cac4748181188ef39ba4087e43312b7102 | e49d0c45336179eeeae2ff7c623d1399e8b88fc0 | "2021-08-09T12:33:20Z" | c++ | "2021-08-21T19:37:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,429 | ["docs/en/getting-started/example-datasets/index.md", "docs/en/getting-started/example-datasets/uk-price-paid.md"] | Example dataset: UK property price paid data | https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads
The size is about 4 GiB uncompressed:
http://prod.publicdata.landregistry.gov.uk.s3-website-eu-west-1.amazonaws.com/pp-complete.csv
Redistribution is permitted with attribution.
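For a quick start, a hedged table sketch; the column list is my reading of the field description linked below and may be incomplete:
```sql
CREATE TABLE uk_price_paid
(
    price    UInt32,
    date     Date,
    postcode String,
    type     String, -- detached / semi-detached / terraced / flat / other
    is_new   UInt8,
    duration String, -- freehold / leasehold
    street   String,
    locality String,
    town     String,
    district String,
    county   String
)
ENGINE = MergeTree
ORDER BY (postcode, date);
```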
Description of the fields: https://www.gov.uk/guidance/about-the-price-paid-data | https://github.com/ClickHouse/ClickHouse/issues/27429 | https://github.com/ClickHouse/ClickHouse/pull/27432 | a5daf2d2c4efbb6545d6c8014b29dcdcb66509b5 | 9661dd9232804b7a3ff74b72f151a321fa2c7c1c | "2021-08-08T17:47:18Z" | c++ | "2021-08-08T20:31:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,229 | ["docs/en/sql-reference/functions/other-functions.md", "src/Functions/formatQuery.cpp", "tests/queries/0_stateless/02882_formatQuery.reference", "tests/queries/0_stateless/02882_formatQuery.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | Compare queries with normalized formatting | **Use case**
I want to compare two queries with same semantics, but different format.
**Describe the solution you'd like**
A function to normalize query format (like in the `SHOW CREATE` statement) or to get a query hash after re-formatting.
Or a function to compare two queries.
**Describe alternatives you've considered**
`normalizeQuery` and `normalizedQueryHash`, but they don't "fix" formatting and can't be used to compare two queries with same semantics, but different formatting:
```
SELECT normalizedQueryHash('select 1') = normalizedQueryHash('SELECT 1')
ββequals(normalizedQueryHash('select 1'), normalizedQueryHash('SELECT 1'))ββ
β 0 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
```
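A hedged partial alternative: `EXPLAIN SYNTAX` already re-formats a query, so its output can be compared externally, though it is not usable as a function inside another query:
```sql
EXPLAIN SYNTAX SELECT 1;
-- returns the re-formatted query text
```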
| https://github.com/ClickHouse/ClickHouse/issues/27229 | https://github.com/ClickHouse/ClickHouse/pull/55239 | d2461671bd7363b20f4d3870a774663fcea2ceee | 325ff33c3a4ae05810fc7a441a41f128f10a4e4e | "2021-08-05T14:28:52Z" | c++ | "2023-10-26T20:46:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,193 | ["base/common/DateLUTImpl.h", "tests/queries/0_stateless/02006_todatetime64_from_string.reference", "tests/queries/0_stateless/02006_todatetime64_from_string.sql"] | Wrong conversion with `toDateTime64` and timezone. | Reproduces on master.
```
SELECT toDateTime64('2021-03-22', 3, 'Asia/Tehran')
Query id: 26f7c373-f473-4209-8976-6ada79a8627f
ββtoDateTime64('2021-03-22', 3, 'Asia/Tehran')ββ
β 2157-04-28 06:28:16.000 β
ββββββββββββββββββββββββββββββββββββββββββββββββ
``` | https://github.com/ClickHouse/ClickHouse/issues/27193 | https://github.com/ClickHouse/ClickHouse/pull/27605 | 38159c85ac7be0841e82baf473433e7ff37cf16e | 34682c98c78ff56418d370eb85e3940ae3740fb9 | "2021-08-04T18:35:27Z" | c++ | "2021-08-12T10:32:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,179 | ["src/Storages/MergeTree/MergeTreeRangeReader.cpp", "tests/queries/0_stateless/02002_row_level_filter_bug.reference", "tests/queries/0_stateless/02002_row_level_filter_bug.sh", "tests/queries/skip_list.json"] | ROW POLICY: Inconsistent number of columns | ```sql
CREATE TABLE default.test_table
(
`a` UInt16 DEFAULT 0,
`c` LowCardinality(String) DEFAULT '',
`t_date` LowCardinality(String) DEFAULT '',
`ex` LowCardinality(String) DEFAULT '',
`team` LowCardinality(String) DEFAULT '',
`g` LowCardinality(String) DEFAULT '',
`mt` FixedString(1) DEFAULT ' ',
`rw_ts` Int64 DEFAULT 0,
`exr_t` Int64 DEFAULT 0,
`en` UInt16 DEFAULT 0,
`f_t` Int64 DEFAULT 0,
`j` UInt64 DEFAULT 0,
`oj` UInt64 DEFAULT 0
)
ENGINE = MergeTree
PARTITION BY (c, t_date)
ORDER BY (ex, team, g, mt, rw_ts, exr_t, en, f_t, j, oj)
SETTINGS index_granularity = 8192;
INSERT INTO default.test_table(t_date, c,team, a) SELECT
arrayJoin([toDate('2021-07-15'),toDate('2021-07-16')]) as t_date,
arrayJoin(['aur','rua']) as c,
arrayJoin(['AWD','ZZZ']) as team,
arrayJoin([3183,3106,0,3130,3108,3126,3109,3107,3182,3180,3129,3128,3125,3266]) as a
FROM numbers(60000);
SELECT
team,
a,
t_date,
count() AS count
FROM default.test_table
WHERE (t_date = '2021-07-15') AND (c = 'aur') AND (a = 3130) AND (team = 'AWD')
GROUP BY
team,
a,
t_date;
-- ββteamββ¬ββββaββ¬βt_dateββββββ¬βcountββ
-- β AWD β 3130 β 2021-07-15 β 60000 β
-- ββββββββ΄βββββββ΄βββββββββββββ΄ββββββββ
--- cat << EOF > /etc/clickhouse-server/users.d/access_management.xml
--- <yandex>
--- <users><default><access_management>1</access_management></default></users>
--- </yandex>
--- EOF
DROP ROLE IF exists AWD;
create role AWD;
REVOKE ALL ON *.* FROM AWD;
DROP USER IF EXISTS AWD_user;
CREATE USER AWD_user
IDENTIFIED WITH SHA256_PASSWORD BY 'AWD_pwd'
DEFAULT ROLE AWD;
GRANT SELECT ON default.test_table TO AWD;
DROP ROW POLICY IF EXISTS ttt_bu_test_table_AWD ON default.test_table;
CREATE ROW POLICY ttt_bu_test_table_AWD ON default.test_table FOR SELECT USING team = 'AWD' TO AWD;
--- exit;
--- clickhouse-client --user=AWD_user --password=AWD_pwd
SELECT count() AS count
FROM default.test_table
WHERE
t_date = '2021-07-15' AND c = 'aur' AND a=3130;
-- βββcountββ
-- β 835784 β ?????
-- ββββββββββ
SELECT
team,
a,
t_date,
count() AS count
FROM default.test_table
WHERE (t_date = '2021-07-15') AND (c = 'aur') AND (a = 3130)
GROUP BY
team,
a,
t_date;
-- ββteamββ¬ββββaββ¬βt_dateββββββ¬βcountββ
-- β AWD β 3128 β 2021-07-15 β 59675 β
-- β AWD β 3183 β 2021-07-15 β 59676 β
-- β AWD β 3107 β 2021-07-15 β 59676 β
-- β AWD β 3182 β 2021-07-15 β 59676 β
-- β AWD β 3106 β 2021-07-15 β 59676 β
-- β AWD β 3126 β 2021-07-15 β 59676 β
-- β AWD β 3129 β 2021-07-15 β 59675 β
-- β AWD β 0 β 2021-07-15 β 59676 β
-- β AWD β 3266 β 2021-07-15 β 59675 β
-- β AWD β 3108 β 2021-07-15 β 59676 β
-- β AWD β 3130 β 2021-07-15 β 60000 β
-- β AWD β 3125 β 2021-07-15 β 59675 β
-- β AWD β 3109 β 2021-07-15 β 59676 β
-- β AWD β 3180 β 2021-07-15 β 59676 β
-- ββββββββ΄βββββββ΄βββββββββββββ΄ββββββββ
SELECT count() AS count
FROM default.test_table
WHERE (t_date = '2021-07-15') AND (c = 'aur') AND (a = 313)
-- Received exception from server (version 21.9.1):
-- Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Inconsistent number of columns got from MergeTreeRangeReader. Have 1 in sample block and 0 columns in list: While executing MergeTreeThread. (LOGICAL_ERROR)
```
/cc @KochetovNicolai | https://github.com/ClickHouse/ClickHouse/issues/27179 | https://github.com/ClickHouse/ClickHouse/pull/27329 | 908505c12e4a18c9945e51861dea7d3efb537068 | cedc5d06ad6a20076456d75c250890a6b3cdfa36 | "2021-08-04T13:07:16Z" | c++ | "2021-08-09T10:19:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,169 | ["src/Functions/MultiSearchFirstIndexImpl.h", "src/Functions/MultiSearchFirstPositionImpl.h", "src/Functions/MultiSearchImpl.h", "tests/queries/0_stateless/00233_position_function_family.reference", "tests/queries/0_stateless/00233_position_function_family.sql"] | Msan on multiSearchFirstPositionCaseInsensitive (from 00746_sql_fuzzy.sh) | Reproduced on current master:
```
SELECT multiSearchFirstPositionCaseInsensitive('\0', enabledRoles());
```
https://clickhouse-test-reports.s3.yandex.net/27078/cc0c3a90336f7527209e2be8d00089db8b9c2697/functional_stateless_tests_(memory).html
Logs mirrored to:
http://transfer.sh/1QoOQ3m/stderr.log
http://transfer.sh/LJPpZ/clickhouse-server.log (only Fatal and queryid `cf20e9a5-a9df-4f32-8c2c-d7d535dd64e0`) | https://github.com/ClickHouse/ClickHouse/issues/27169 | https://github.com/ClickHouse/ClickHouse/pull/27181 | c748c2de9c96e0a8a913d5b0b6e47b7f902818c7 | e1927118cd1d020099bbd564a8b23cf4c5bc5c40 | "2021-08-04T08:14:10Z" | c++ | "2021-08-06T18:10:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,138 | ["docs/en/operations/system-tables/replicas.md", "src/Storages/StorageReplicatedMergeTree.cpp", "src/Storages/StorageReplicatedMergeTree.h", "src/Storages/System/StorageSystemReplicas.cpp", "tests/integration/test_replica_is_active/__init__.py", "tests/integration/test_replica_is_active/test.py"] | List all replicas/inactive replicas names in system.replicas table | **Use case**
Currently, the `system.replicas` table has `total_replicas` and `active_replicas` columns, which are useful to check/alert when some replicas are down. One problem: these columns are just numbers. They tell you when there is a problem, but you need to further check ZooKeeper or query all the nodes to detect which exact replicas (hosts) are down/inactive/have problems.
**Describe the solution you'd like**
Add a replicas column maybe as a map which would contain the replica name, and it's status. Or, add 2 array columns with `replica_names` and `active_replica_names`
**Alternative**
Something along the line
```sql
select path, value from system.zookeeper where path IN (
select concat(path, '/', name) from system.zookeeper where path in (select concat(zookeeper_path, '/replicas') from system.replicas)
)
and name = 'is_active'
``` | https://github.com/ClickHouse/ClickHouse/issues/27138 | https://github.com/ClickHouse/ClickHouse/pull/27180 | 975e0a4d47bb66ec5d7467dd5f18b8cb1a653311 | 7fdf3cc263bce8fefe11262699be8413ccc240ee | "2021-08-03T13:12:11Z" | c++ | "2021-08-05T09:46:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,133 | ["PreLoad.cmake", "cmake/add_warning.cmake", "cmake/linux/default_libs.cmake", "contrib/krb5-cmake/CMakeLists.txt", "contrib/protobuf-cmake/CMakeLists.txt", "contrib/sysroot", "docker/test/fasttest/run.sh", "docker/test/pvs/Dockerfile", "src/Common/renameat2.cpp"] | GLIBC 2.32 / 2.33 / 2.34 compatibility | It would be nice if ClickHouse supported building in an environment with GLIBC 2.32+ like it supports older releases (pinning symbols).
My usecase is that I'm building in a new machine (Archlinux with GLIBC 2.33) and that works great until I want to run the integration tests, which run under an old Ubuntu (20.04) with 2.31-0ubuntu9.2 and that will fail when using the binary:
```
E Exception: Timed out while waiting for instance `node1' with ip address 172.16.0.4 to start. Container status: running, logs: clickhouse: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by clickhouse)
E clickhouse: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by clickhouse)
```
I see that @amosbird started working on it but ended up discarding the PR (https://github.com/ClickHouse/ClickHouse/pull/24015). I've tested it and it doesn't work for 2.33.
Are there any plans to support this or should I look for a better way to run those tests locally? | https://github.com/ClickHouse/ClickHouse/issues/27133 | https://github.com/ClickHouse/ClickHouse/pull/30011 | 4082c6c4e569249b209cc5f7b035b318bd3e0118 | e1c2e629d8c077193f951cdb02fac9c0b1631c65 | "2021-08-03T11:40:08Z" | c++ | "2021-11-25T02:27:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,114 | ["src/Storages/MergeTree/DataPartsExchange.cpp", "src/Storages/MergeTree/IMergeTreeDataPart.cpp", "src/Storages/MergeTree/MergeTreePartInfo.cpp", "src/Storages/MergeTree/MergeTreePartInfo.h", "tests/integration/test_partition/test.py"] | system.detached_parts is filled incorrectly | The only columns that show what they are supposed to are `database`, `table`, `name` and `disk`.
The other columns borrow values from a neighbour.
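The output below presumably comes from a query along these lines (a sketch, not the exact statement used):
```sql
SELECT table, partition_id, name, disk, reason,
       min_block_number, max_block_number, level
FROM system.detached_parts
WHERE table = 'actions_log';
```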
```
ββtableβββββββββ¬βpartition_idββ¬βnameβββββββββββββββββββββββββββββββββ¬βdiskβββββ¬βreasonββ¬βmin_block_numberββ¬βmax_block_numberββ¬βββlevelββ
β actions_log β ignored β ignored_202107_714380_714380_0 β default β β 202107 β 714380 β 714380 β
β actions_log β ignored β ignored_202107_713838_713838_0 β default β β 202107 β 713838 β 713838 β
β actions_log β ignored β ignored_202107_713875_713875_0 β default β β 202107 β 713875 β 713875 β
β actions_log β ignored β ignored_202107_725839_725839_0 β default β β 202107 β 725839 β 725839 β
β actions_log β ignored β ignored_202107_713927_713927_0 β default β β 202107 β 713927 β 713927 β
β actions_log β ignored β ignored_202107_714009_714009_0 β default β β 202107 β 714009 β 714009 β
``` | https://github.com/ClickHouse/ClickHouse/issues/27114 | https://github.com/ClickHouse/ClickHouse/pull/27183 | 4f0dbae0f996e130ebbd09395e0bdb37eb7f2cdb | 59a94bd3220e2710a44b675c4e54b55b3fb138f5 | "2021-08-02T23:01:18Z" | c++ | "2021-08-05T15:21:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,091 | ["src/Interpreters/HashJoin.cpp", "src/Interpreters/MergeJoin.cpp", "src/Interpreters/join_common.cpp", "src/Interpreters/join_common.h", "tests/queries/0_stateless/01049_join_low_card_bug_long.reference", "tests/queries/0_stateless/01049_join_low_card_bug_long.sql", "tests/queries/0_stateless/01049_join_low_card_bug_long.sql.j2"] | partial merge join: 'Bad cast from type DB::ColumnNullable to DB::ColumnString' | ```
DROP TABLE IF EXISTS l;
DROP TABLE IF EXISTS r;
DROP TABLE IF EXISTS nl;
DROP TABLE IF EXISTS nr;
DROP TABLE IF EXISTS l_lc;
DROP TABLE IF EXISTS r_lc;
CREATE TABLE l (x UInt32, lc String) ENGINE = Memory;
CREATE TABLE r (x UInt32, lc String) ENGINE = Memory;
CREATE TABLE nl (x Nullable(UInt32), lc Nullable(String)) ENGINE = Memory;
CREATE TABLE nr (x Nullable(UInt32), lc Nullable(String)) ENGINE = Memory;
CREATE TABLE l_lc (x UInt32, lc LowCardinality(String)) ENGINE = Memory;
CREATE TABLE r_lc (x UInt32, lc LowCardinality(String)) ENGINE = Memory;
INSERT INTO r VALUES (0, 'str'), (1, 'str_r');
INSERT INTO nr VALUES (0, 'str'), (1, 'str_r');
INSERT INTO r_lc VALUES (0, 'str'), (1, 'str_r');
INSERT INTO l VALUES (0, 'str'), (2, 'str_l');
INSERT INTO nl VALUES (0, 'str'), (2, 'str_l');
INSERT INTO l_lc VALUES (0, 'str'), (2, 'str_l');
set join_algorithm = 'partial_merge', join_use_nulls = 1;
SELECT
toTypeName(r.lc),
toTypeName(materialize(r.lc)),
[NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, '', '', NULL, NULL, NULL, NULL, NULL, NULL],
r.lc,
materialize(r.lc),
toTypeName(l.lc),
toTypeName(materialize(l.lc)),
l.lc
FROM l_lc AS l
FULL OUTER JOIN r_lc AS r USING (x)
ORDER BY
r.lc ASC,
x ASC NULLS LAST
2021.08.02 16:12:06.344643 [ 9410 ] {7f8da94a-97d7-45b9-8980-9e4e96bacc9a} <Fatal> : Logical error: 'Bad cast from type DB::ColumnNullable to DB::ColumnString'.
2021.08.02 16:12:06.345324 [ 9223 ] {} <Trace> BaseDaemon: Received signal 6
2021.08.02 16:12:06.345677 [ 9901 ] {} <Fatal> BaseDaemon: ########################################
2021.08.02 16:12:06.345977 [ 9901 ] {} <Fatal> BaseDaemon: (version 21.9.1.1, build id: E31A3FA5E2E76CEB) (from thread 9410) (query_id: 7f8da94a-97d7-45b9-8980-9e4e96bacc9a) Received signal Aborted (6)
2021.08.02 16:12:06.346216 [ 9901 ] {} <Fatal> BaseDaemon:
2021.08.02 16:12:06.346563 [ 9901 ] {} <Fatal> BaseDaemon: Stack trace: 0x7fb46a26318b 0x7fb46a242859 0x7fb46c488f25 0x7fb46c489039 0x7fb467253e8c 0x7fb454fb0830 0x7fb454fa99a0 0x7fb451d71cf1 0x7fb451d71d3a 0x7fb4553b0d29 0x7fb4553b0aec 0x7fb4553b08c3 0x7fb4553b071d 0x7fb4553b065e 0x7fb4553af7ad 0x7fb4553a5751 0x7fb44e095d23 0x7fb44e05bb35 0x7fb44e06d7c7 0x7fb44e09e63e 0x7fb44e841bdc 0x7fb44e841b3f 0x7fb44e841add 0x7fb44e841a9d 0x7fb44e841a75 0x7fb44e841a3d 0x7fb46c53d889 0x7fb46c534175 0x7fb44e840436 0x7fb44e840e19 0x7fb44e83ecc6 0x7fb44e83dfb6 0x7fb44e868a79 0x7fb44e8689a6 0x7fb44e86891d 0x7fb44e8688c1 0x7fb44e8687d2 0x7fb44e8686cc 0x7fb44e8685dd 0x7fb44e86859d 0x7fb44e868575
2021.08.02 16:12:06.349998 [ 9901 ] {} <Fatal> BaseDaemon: 4. /build/glibc-eX1tMB/glibc-2.31/signal/../sysdeps/unix/sysv/linux/raise.c:51: gsignal @ 0x4618b in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
2021.08.02 16:12:06.354972 [ 9901 ] {} <Fatal> BaseDaemon: 5. /build/glibc-eX1tMB/glibc-2.31/stdlib/abort.c:81: __GI_abort @ 0x25859 in /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.31.so
2021.08.02 16:12:06.559429 [ 9901 ] {} <Fatal> BaseDaemon: 6. /home/akuzm/ch2/ch/src/Common/Exception.cpp:53: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*> > const&) @ 0x299f25 in /home/akuzm/ch2/build-clang11/src/libclickhouse_common_iod.so
2021.08.02 16:12:06.743445 [ 9901 ] {} <Fatal> BaseDaemon: 7. /home/akuzm/ch2/ch/src/Common/Exception.cpp:60: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x29a039 in /home/akuzm/ch2/build-clang11/src/libclickhouse_common_iod.so
2021.08.02 16:12:06.937709 [ 9901 ] {} <Fatal> BaseDaemon: 8. /home/akuzm/ch2/ch/src/Common/assert_cast.h:47: DB::ColumnString const& assert_cast<DB::ColumnString const&, DB::IColumn const&>(DB::IColumn const&) @ 0x44e7e8c in /home/akuzm/ch2/build-clang11/src/AggregateFunctions/libclickhouse_aggregate_functionsd.so
2021.08.02 16:12:07.103318 [ 9901 ] {} <Fatal> BaseDaemon: 9. /home/akuzm/ch2/ch/src/Columns/ColumnString.h:237: DB::ColumnString::compareAt(unsigned long, unsigned long, DB::IColumn const&, int) const @ 0x2bd830 in /home/akuzm/ch2/build-clang11/src/libclickhouse_datatypesd.so
2021.08.02 16:12:07.241908 [ 9901 ] {} <Fatal> BaseDaemon: 10. /home/akuzm/ch2/ch/src/Columns/ColumnUnique.h:420: DB::ColumnUnique<DB::ColumnString>::compareAt(unsigned long, unsigned long, DB::IColumn const&, int) const @ 0x2b69a0 in /home/akuzm/ch2/build-clang11/src/libclickhouse_datatypesd.so
2021.08.02 16:12:07.330117 [ 9901 ] {} <Fatal> BaseDaemon: 11. /home/akuzm/ch2/ch/src/Columns/ColumnLowCardinality.cpp:297: DB::ColumnLowCardinality::compareAtImpl(unsigned long, unsigned long, DB::IColumn const&, int, Collator const*) const @ 0x2c7cf1 in /home/akuzm/ch2/build-clang11/src/libclickhouse_columnsd.so
2021.08.02 16:12:07.417086 [ 9901 ] {} <Fatal> BaseDaemon: 12. /home/akuzm/ch2/ch/src/Columns/ColumnLowCardinality.cpp:302: DB::ColumnLowCardinality::compareAt(unsigned long, unsigned long, DB::IColumn const&, int) const @ 0x2c7d3a in /home/akuzm/ch2/build-clang11/src/libclickhouse_columnsd.so
2021.08.02 16:12:07.586105 [ 9901 ] {} <Fatal> BaseDaemon: 13.1. inlined from /home/akuzm/ch2/ch/src/Core/SortCursor.h:182: DB::SortCursor::greaterAt(DB::SortCursor const&, unsigned long, unsigned long) const
2021.08.02 16:12:07.586263 [ 9901 ] {} <Fatal> BaseDaemon: 13.2. inlined from /home/akuzm/ch2/ch/src/Core/SortCursor.h:149: DB::SortCursorHelper<DB::SortCursor>::greater(DB::SortCursorHelper<DB::SortCursor> const&) const
2021.08.02 16:12:07.586415 [ 9901 ] {} <Fatal> BaseDaemon: 13.3. inlined from /home/akuzm/ch2/ch/src/Core/SortCursor.h:155: DB::SortCursorHelper<DB::SortCursor>::operator<(DB::SortCursorHelper<DB::SortCursor> const&) const
2021.08.02 16:12:07.586501 [ 9901 ] {} <Fatal> BaseDaemon: 13. /home/akuzm/ch2/ch/contrib/libcxx/include/algorithm:715: std::__1::__less<DB::SortCursor, DB::SortCursor>::operator()(DB::SortCursor const&, DB::SortCursor const&) const @ 0x260d29 in /home/akuzm/ch2/build-clang11/src/libclickhouse_datastreamsd.so
2021.08.02 16:12:07.736390 [ 9901 ] {} <Fatal> BaseDaemon: 14. /home/akuzm/ch2/ch/contrib/libcxx/include/algorithm:801: bool std::__1::__debug_less<std::__1::__less<DB::SortCursor, DB::SortCursor> >::operator()<DB::SortCursor, DB::SortCursor>(DB::SortCursor&, DB::SortCursor&) @ 0x260aec in /home/akuzm/ch2/build-clang11/src/libclickhouse_datastreamsd.so
2021.08.02 16:12:07.894255 [ 9901 ] {} <Fatal> BaseDaemon: 15. /home/akuzm/ch2/ch/contrib/libcxx/include/algorithm:4950: void std::__1::__sift_down<std::__1::__debug_less<std::__1::__less<DB::SortCursor, DB::SortCursor> >, std::__1::__wrap_iter<DB::SortCursor*> >(std::__1::__wrap_iter<DB::SortCursor*>, std::__1::__wrap_iter<DB::SortCursor*>, std::__1::__debug_less<std::__1::__less<DB::SortCursor, DB::SortCursor> >, std::__1::iterator_traits<std::__1::__wrap_iter<DB::SortCursor*> >::difference_type, std::__1::__wrap_iter<DB::SortCursor*>) @ 0x2608c3 in /home/akuzm/ch2/build-clang11/src/libclickhouse_datastreamsd.so
2021.08.02 16:12:08.060210 [ 9901 ] {} <Fatal> BaseDaemon: 16. /home/akuzm/ch2/ch/contrib/libcxx/include/algorithm:5020: void std::__1::__make_heap<std::__1::__debug_less<std::__1::__less<DB::SortCursor, DB::SortCursor> >, std::__1::__wrap_iter<DB::SortCursor*> >(std::__1::__wrap_iter<DB::SortCursor*>, std::__1::__wrap_iter<DB::SortCursor*>, std::__1::__debug_less<std::__1::__less<DB::SortCursor, DB::SortCursor> >) @ 0x26071d in /home/akuzm/ch2/build-clang11/src/libclickhouse_datastreamsd.so
2021.08.02 16:12:08.217777 [ 9901 ] {} <Fatal> BaseDaemon: 17. /home/akuzm/ch2/ch/contrib/libcxx/include/algorithm:5034: void std::__1::make_heap<std::__1::__wrap_iter<DB::SortCursor*>, std::__1::__less<DB::SortCursor, DB::SortCursor> >(std::__1::__wrap_iter<DB::SortCursor*>, std::__1::__wrap_iter<DB::SortCursor*>, std::__1::__less<DB::SortCursor, DB::SortCursor>) @ 0x26065e in /home/akuzm/ch2/build-clang11/src/libclickhouse_datastreamsd.so
2021.08.02 16:12:08.377458 [ 9901 ] {} <Fatal> BaseDaemon: 18. /home/akuzm/ch2/ch/contrib/libcxx/include/algorithm:5042: void std::__1::make_heap<std::__1::__wrap_iter<DB::SortCursor*> >(std::__1::__wrap_iter<DB::SortCursor*>, std::__1::__wrap_iter<DB::SortCursor*>) @ 0x25f7ad in /home/akuzm/ch2/build-clang11/src/libclickhouse_datastreamsd.so
2021.08.02 16:12:08.507885 [ 9901 ] {} <Fatal> BaseDaemon: 19. /home/akuzm/ch2/ch/src/Core/SortCursor.h:255: DB::SortingHeap<DB::SortCursor>::SortingHeap<std::__1::vector<DB::SortCursorImpl, std::__1::allocator<DB::SortCursorImpl> > >(std::__1::vector<DB::SortCursorImpl, std::__1::allocator<DB::SortCursorImpl> >&) @ 0x255751 in /home/akuzm/ch2/build-clang11/src/libclickhouse_datastreamsd.so
2021.08.02 16:12:08.635924 [ 9901 ] {} <Fatal> BaseDaemon: 20. /home/akuzm/ch2/ch/src/Processors/Transforms/SortingTransform.cpp:51: DB::MergeSorter::MergeSorter(std::__1::vector<DB::Chunk, std::__1::allocator<DB::Chunk> >, std::__1::vector<DB::SortColumnDescription, std::__1::allocator<DB::SortColumnDescription> >&, unsigned long, unsigned long) @ 0x27bd23 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_transformsd.so
2021.08.02 16:12:08.720147 [ 9901 ] {} <Fatal> BaseDaemon: 21. /home/akuzm/ch2/ch/contrib/libcxx/include/memory:2068: std::__1::__unique_if<DB::MergeSorter>::__unique_single std::__1::make_unique<DB::MergeSorter, std::__1::vector<DB::Chunk, std::__1::allocator<DB::Chunk> >, std::__1::vector<DB::SortColumnDescription, std::__1::allocator<DB::SortColumnDescription> >&, unsigned long&, unsigned long&>(std::__1::vector<DB::Chunk, std::__1::allocator<DB::Chunk> >&&, std::__1::vector<DB::SortColumnDescription, std::__1::allocator<DB::SortColumnDescription> >&, unsigned long&, unsigned long&) @ 0x241b35 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_transformsd.so
2021.08.02 16:12:08.961613 [ 9901 ] {} <Fatal> BaseDaemon: 22. /home/akuzm/ch2/ch/src/Processors/Transforms/MergeSortingTransform.cpp:229: DB::MergeSortingTransform::generate() @ 0x2537c7 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_transformsd.so
2021.08.02 16:12:09.092012 [ 9901 ] {} <Fatal> BaseDaemon: 23. /home/akuzm/ch2/ch/src/Processors/Transforms/SortingTransform.cpp:340: DB::SortingTransform::work() @ 0x28463e in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_transformsd.so
2021.08.02 16:12:09.439414 [ 9901 ] {} <Fatal> BaseDaemon: 24. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:80: DB::executeJob(DB::IProcessor*) @ 0x98bdc in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:09.755288 [ 9901 ] {} <Fatal> BaseDaemon: 25. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:97: DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0::operator()() const @ 0x98b3f in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:10.059211 [ 9901 ] {} <Fatal> BaseDaemon: 26. /home/akuzm/ch2/ch/contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(fp)()) std::__1::__invoke<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x98add in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:10.355808 [ 9901 ] {} <Fatal> BaseDaemon: 27. /home/akuzm/ch2/ch/contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&>(DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0&) @ 0x98a9d in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:10.648865 [ 9901 ] {} <Fatal> BaseDaemon: 28. /home/akuzm/ch2/ch/contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, void ()>::operator()() @ 0x98a75 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:10.940932 [ 9901 ] {} <Fatal> BaseDaemon: 29. /home/akuzm/ch2/ch/contrib/libcxx/include/functional:2089: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::PipelineExecutor::addJob(DB::ExecutingGraph::Node*)::$_0, void ()> >(std::__1::__function::__policy_storage const*) @ 0x98a3d in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:11.043289 [ 9901 ] {} <Fatal> BaseDaemon: 30. /home/akuzm/ch2/ch/contrib/libcxx/include/functional:2221: std::__1::__function::__policy_func<void ()>::operator()() const @ 0x34e889 in /home/akuzm/ch2/build-clang11/src/libclickhouse_common_iod.so
2021.08.02 16:12:11.104885 [ 9901 ] {} <Fatal> BaseDaemon: 31. /home/akuzm/ch2/ch/contrib/libcxx/include/functional:2560: std::__1::function<void ()>::operator()() const @ 0x345175 in /home/akuzm/ch2/build-clang11/src/libclickhouse_common_iod.so
2021.08.02 16:12:11.392269 [ 9901 ] {} <Fatal> BaseDaemon: 32. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:589: DB::PipelineExecutor::executeStepImpl(unsigned long, unsigned long, std::__1::atomic<bool>*) @ 0x97436 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:11.675774 [ 9901 ] {} <Fatal> BaseDaemon: 33. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:474: DB::PipelineExecutor::executeSingleThread(unsigned long, unsigned long) @ 0x97e19 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:11.949225 [ 9901 ] {} <Fatal> BaseDaemon: 34. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:813: DB::PipelineExecutor::executeImpl(unsigned long) @ 0x95cc6 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:12.236730 [ 9901 ] {} <Fatal> BaseDaemon: 35. /home/akuzm/ch2/ch/src/Processors/Executors/PipelineExecutor.cpp:396: DB::PipelineExecutor::execute(unsigned long) @ 0x94fb6 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:12.470721 [ 9901 ] {} <Fatal> BaseDaemon: 36. /home/akuzm/ch2/ch/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:80: DB::threadFunction(DB::PullingAsyncPipelineExecutor::Data&, std::__1::shared_ptr<DB::ThreadGroupStatus>, unsigned long) @ 0xbfa79 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:12.707623 [ 9901 ] {} <Fatal> BaseDaemon: 37. /home/akuzm/ch2/ch/src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:108: DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0::operator()() const @ 0xbf9a6 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:12.942054 [ 9901 ] {} <Fatal> BaseDaemon: 38. /home/akuzm/ch2/ch/contrib/libcxx/include/type_traits:3682: decltype(std::__1::forward<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&>(fp)()) std::__1::__invoke_constexpr<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&) @ 0xbf91d in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:13.180678 [ 9901 ] {} <Fatal> BaseDaemon: 39. /home/akuzm/ch2/ch/contrib/libcxx/include/tuple:1415: decltype(auto) std::__1::__apply_tuple_impl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) @ 0xbf8c1 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:13.416426 [ 9901 ] {} <Fatal> BaseDaemon: 40. /home/akuzm/ch2/ch/contrib/libcxx/include/tuple:1424: decltype(auto) std::__1::apply<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&, std::__1::tuple<>&) @ 0xbf7d2 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:13.649078 [ 9901 ] {} <Fatal> BaseDaemon: 41. /home/akuzm/ch2/ch/src/Common/ThreadPool.h:182: ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()::operator()() @ 0xbf6cc in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:13.883132 [ 9901 ] {} <Fatal> BaseDaemon: 42. /home/akuzm/ch2/ch/contrib/libcxx/include/type_traits:3676: decltype(std::__1::forward<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(fp)()) std::__1::__invoke<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&) @ 0xbf5dd in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:14.117591 [ 9901 ] {} <Fatal> BaseDaemon: 43. /home/akuzm/ch2/ch/contrib/libcxx/include/__functional_base:349: void std::__1::__invoke_void_return_wrapper<void>::__call<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'()&>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&...) @ 0xbf59d in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:14.353091 [ 9901 ] {} <Fatal> BaseDaemon: 44. /home/akuzm/ch2/ch/contrib/libcxx/include/functional:1608: std::__1::__function::__default_alloc_func<ThreadFromGlobalPool::ThreadFromGlobalPool<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>::operator()() @ 0xbf575 in /home/akuzm/ch2/build-clang11/src/libclickhouse_processors_executorsd.so
2021.08.02 16:12:14.353274 [ 9901 ] {} <Fatal> BaseDaemon: Calculated checksum of the binary: 5AE2644FBBA96BF3D3AF9BCBBD71E560. There is no information about the reference checksum.
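-- Side note, not part of the original report: the crash above happens with
-- join_algorithm = 'partial_merge'. An untested sketch of a comparison point /
-- workaround is to switch back to the default 'hash' algorithm:
SET join_algorithm = 'hash', join_use_nulls = 1;
SELECT toTypeName(r.lc), r.lc, l.lc
FROM l_lc AS l
FULL OUTER JOIN r_lc AS r USING (x)
ORDER BY r.lc ASC, x ASC NULLS LAST;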
``` | https://github.com/ClickHouse/ClickHouse/issues/27091 | https://github.com/ClickHouse/ClickHouse/pull/27217 | 5a7fe51532626db6d217367d82c78a5d3642f16f | 9cbc4b4f7fd7481202dee93fce59e320abb4e2e4 | "2021-08-02T13:15:15Z" | c++ | "2021-08-09T06:53:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,073 | ["src/AggregateFunctions/AggregateFunctionFactory.cpp", "src/AggregateFunctions/AggregateFunctionIf.cpp", "tests/queries/0_stateless/02183_combinator_if.reference", "tests/queries/0_stateless/02183_combinator_if.sql"] | Invalid sumIf behavior with nullable arguments | **Describe the bug**
`sumIf` returns incorrect results in some cases with Nullable arguments.
**How to reproduce**
* Which ClickHouse server version to use
Latest master/arcadia version
* Queries to run that lead to unexpected result
```sql
:) select sumIf(toFloat64OrZero(b), a = 0) as r1, sumIf(cast(b as Float), a = 0) as r2, sum(if(a = 0, toFloat64OrZero(b), 0)) as r3 from (select arrayJoin([1, 2, 3, NULL]) as a, toNullable('10.0') as b)
SELECT
sumIf(toFloat64OrZero(b), a = 0) AS r1,
sumIf(CAST(b, 'Float'), a = 0) AS r2,
sum(if(a = 0, toFloat64OrZero(b), 0)) AS r3
FROM
(
SELECT
arrayJoin([1, 2, 3, NULL]) AS a,
toNullable('10.0') AS b
)
Query id: 2e4e320d-e52a-4514-bc6f-118485558191
┌─r1─┬─r2─┬─r3─┐
│ 10 │  0 │  0 │
└────┴────┴────┘
```
**Expected behavior**
All three results should be 0, since the condition `a = 0` never matches any row; `r1 = 10` above is therefore the incorrect one.
**Additional context**
It breaks when NULL values in a Nullable column are compared with `assumeNotNull` of the same column.
It also only breaks when first argument is Nullable too.
| https://github.com/ClickHouse/ClickHouse/issues/27073 | https://github.com/ClickHouse/ClickHouse/pull/33920 | 48c19e88a58adc33cb3d544c5c490a394c4e543d | 6ee0b1897906ed708b8a92ddf91de1ce3323ee3b | "2021-08-02T08:24:26Z" | c++ | "2022-01-24T08:43:11Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,058 | ["docs/en/sql-reference/functions/type-conversion-functions.md", "src/Functions/FunctionSnowflake.h", "src/Functions/registerFunctions.cpp", "src/Functions/registerFunctionsSnowflake.cpp", "src/Functions/snowflake.cpp", "tests/queries/0_stateless/01942_dateTimeToSnowflake.reference", "tests/queries/0_stateless/01942_dateTimeToSnowflake.sql", "tests/queries/0_stateless/01942_snowflakeToDateTime.reference", "tests/queries/0_stateless/01942_snowflakeToDateTime.sql"] | snowflake id (not to confuse with Snowflake service) time extract and convert functions | **Use case**
[snowflake id](https://en.wikipedia.org/wiki/Snowflake_ID) is a form of unique identifier used in distributed computing. The format was created by Twitter and it is widely used in many products such as discord api and etc.
A snowflake ID is an int64 generated from a timestamp, so two IDs are directly comparable and sorting-friendly, which makes it a good replacement for UUIDs as a primary key in many situations.

When it comes to the ClickHouse MergeTree engine:
- we can use the snowflake ID as the order key or primary key and, at the same time, extract the time from it to use as the partitioning key
- time-based queries against a snowflake primary key can be sped up by converting the time bounds into snowflake IDs (see the sketch right after this list)
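A minimal sketch of both conversions, assuming the Twitter layout (41 timestamp bits above 22 low bits) and Twitter's epoch of 1288834974657 ms; both the layout and the epoch are assumptions here, since other snowflake variants differ, and the sample ID is arbitrary:
```sql
-- Extract the timestamp from a snowflake ID (Twitter layout assumed).
SELECT toDateTime(intDiv(bitShiftRight(toInt64(1426860702823350272), 22) + 1288834974657, 1000)) AS ts;

-- Convert a DateTime back into the smallest snowflake ID for that moment,
-- handy as a primary-key bound in a WHERE clause.
SELECT bitShiftLeft(toInt64(toUnixTimestamp(now()) * 1000) - 1288834974657, 22) AS min_id;
```
Dedicated functions would simply wrap this arithmetic behind a friendlier interface.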
**Describe the solution you'd like**
I'd like to add a group of snowflake time extract and convert functions to solve above questions, and I wonder whether the community will accept this feature.
| https://github.com/ClickHouse/ClickHouse/issues/27058 | https://github.com/ClickHouse/ClickHouse/pull/27704 | 712790590948fce001ff2faae1641973f294dc40 | 273b8b9bc15496738f895292717f5515c4d945f5 | "2021-08-01T06:50:07Z" | c++ | "2021-08-20T07:48:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,047 | ["website/css/bootstrap.css"] | Terrible markup on website if "dark theme" is used. | This awful bug is reported by @qoega

| https://github.com/ClickHouse/ClickHouse/issues/27047 | https://github.com/ClickHouse/ClickHouse/pull/27048 | 8c9c4efd625ec2e683d2d3400dd8fdc2be78dffd | c0607a7e956dda4a1f587705e2c1df55edf2f987 | "2021-07-31T20:56:55Z" | c++ | "2021-07-31T22:19:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 27,039 | ["CMakeLists.txt", "contrib/CMakeLists.txt", "contrib/gwpasan-cmake/CMakeLists.txt", "programs/main.cpp", "src/CMakeLists.txt", "src/Common/config.h.in", "src/Common/memory.h", "src/Common/new_delete.cpp", "src/configure_config.cmake"] | Add a technique similar to GWP-ASan | **Describe the solution you'd like**
A small random subset of memory allocations should be protected by guard pages.
GWP-ASan (which is in fact almost unrelated to ASan) is not a specific tool but a technique that requires a small change in the memory allocator. https://llvm.org/docs/GwpAsan.html
The only point is to enable it by default in production. | https://github.com/ClickHouse/ClickHouse/issues/27039 | https://github.com/ClickHouse/ClickHouse/pull/45226 | 7cf71d1f828c2ac3a0a9491abdb4c580efc91050 | d9fbf643bcae94589030185c6033c021bddd56af | "2021-07-31T19:10:06Z" | c++ | "2023-02-09T06:24:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,980 | ["src/Interpreters/ColumnAliasesVisitor.cpp", "src/Interpreters/ColumnAliasesVisitor.h", "src/Interpreters/InDepthNodeVisitor.h", "src/Interpreters/TreeRewriter.cpp", "src/Interpreters/replaceAliasColumnsInQuery.cpp", "src/Interpreters/replaceAliasColumnsInQuery.h", "tests/queries/0_stateless/01925_join_materialized_columns.reference", "tests/queries/0_stateless/01925_join_materialized_columns.sql"] | "Not found column ... in block" error, when join on alias column | **Describe the bug**
"Not found column ... in block" error, when join on alias column.
**How to reproduce**
```sql
CREATE TABLE a (
id UInt32,
value UInt32,
id_alias UInt32 ALIAS id
) ENGINE = MergeTree() ORDER BY id;
CREATE TABLE b (
id UInt32,
value UInt32
) ENGINE = MergeTree() ORDER BY id;
INSERT INTO a VALUES (1, 1), (2, 2), (3, 3);
INSERT INTO b VALUES (1, 4), (2, 5), (3, 6);
SELECT * FROM a JOIN b ON a.id_alias = b.id;
```
**Expected behavior**
In version 21.5.9.4 the result is:
id | value | b.id | b.value
-- | -------- | ----- | ----------
1 | 1 | 1 | 4
2 | 2 | 2 | 5
3 | 3 | 3 | 6
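**Possible workaround**
An untested sketch: selecting the alias column explicitly in a subquery materializes it before the join, which presumably avoids the missing-column lookup:
```sql
SELECT * FROM (SELECT id, value, id_alias FROM a) AS a
JOIN b ON a.id_alias = b.id;
```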
**Error message and/or stacktrace**
In version 21.7.5.29 the result is:
```
SQL Error [10]: ClickHouse exception, code: 10, host: 127.0.0.1, port: 14343; Code: 10, e.displayText() = DB::Exception: Not found column id_alias in block. There are only columns: id, value (version 21.7.5.29 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/26980 | https://github.com/ClickHouse/ClickHouse/pull/29008 | 2684d06b5126ca072a085fff9080d4eae9c1bbe3 | 8d1bf1b675a3d3b9b086b2fbc94ddc536f2cd521 | "2021-07-29T18:11:13Z" | c++ | "2021-09-15T13:27:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,965 | ["tests/queries/0_stateless/02233_set_enable_with_statement_cte_perf.reference", "tests/queries/0_stateless/02233_set_enable_with_statement_cte_perf.sql"] | Simple CTE disables index scan | Using a simple CTE makes the query slow because it disables the index scan.
Example:
setup
```
drop database if exists tt;
create database tt;
use tt;
select version();
create table tt.ev (a Int32, b Int32) Engine=MergeTree() order by a;
create table tt.idx (a Int32) Engine=MergeTree() order by a;
insert into tt.ev select number, number from numbers(100000000);
insert into tt.idx select number*5 from numbers(1000);
```
how to reproduce
```
SET enable_global_with_statement = 1
Ok.
0 rows in set. Elapsed: 0.001 sec.
WITH 'test' AS u
SELECT count()
FROM ev
WHERE a IN
(
SELECT a
FROM idx
)
┌─count()─┐
│    1000 │
└─────────┘
1 rows in set. Elapsed: 0.261 sec. Processed 100.00 million rows, 400.00 MB (382.46 million rows/s., 1.53 GB/s.)
SELECT count()
FROM ev
WHERE a IN
(
SELECT a
FROM idx
)
┌─count()─┐
│    1000 │
└─────────┘
1 rows in set. Elapsed: 0.002 sec. Processed 8.19 thousand rows, 32.77 KB (3.73 million rows/s., 14.93 MB/s.)
SET enable_global_with_statement = 0
Ok.
0 rows in set. Elapsed: 0.000 sec.
SELECT count()
FROM ev
WHERE a IN
(
SELECT a
FROM idx
)
┌─count()─┐
│    1000 │
└─────────┘
1 rows in set. Elapsed: 0.002 sec. Processed 8.19 thousand rows, 32.77 KB (4.28 million rows/s., 17.12 MB/s.)
```
The first query scans 100M rows versus 8K rows when the global WITH statement is disabled.
This is a big problem when using Distributed tables, because `_table` is added as a CTE. Checked in versions from 21.4 to 21.7.
Likely related to https://github.com/ClickHouse/ClickHouse/issues/26956
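To confirm which plan actually uses the primary key index, `EXPLAIN` can help. A sketch (assuming `indexes = 1` is available in the versions above):
```sql
EXPLAIN indexes = 1
WITH 'test' AS u
SELECT count()
FROM ev
WHERE a IN (SELECT a FROM idx);
```
With the bug present, the output should show all granules of `ev` selected, instead of the narrow range chosen when the CTE is absent.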
| https://github.com/ClickHouse/ClickHouse/issues/26965 | https://github.com/ClickHouse/ClickHouse/pull/35159 | ee9c2ec735595b8283aee87c6b12ff5f9d06c720 | 201a498471837a3c5cb93a6f7983ee3480a3ea35 | "2021-07-29T13:54:57Z" | c++ | "2022-03-17T11:34:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,927 | ["contrib/NuRaft"] | Please remove excessive logs from RAFT. | ```
2021.07.27 21:31:35.964613 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500001
2021.07.27 21:31:35.964685 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500002
2021.07.27 21:31:35.964721 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500003
2021.07.27 21:31:35.964754 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500004
2021.07.27 21:31:35.964787 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500005
2021.07.27 21:31:35.964819 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500006
2021.07.27 21:31:35.964851 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500007
2021.07.27 21:31:35.964884 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500008
2021.07.27 21:31:35.964917 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500009
2021.07.27 21:31:35.964949 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500010
2021.07.27 21:31:35.964982 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500011
2021.07.27 21:31:35.965014 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500012
2021.07.27 21:31:35.965047 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500013
2021.07.27 21:31:35.965079 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500014
2021.07.27 21:31:35.965111 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500015
2021.07.27 21:31:35.965143 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500016
2021.07.27 21:31:35.965175 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500017
2021.07.27 21:31:35.965207 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500018
2021.07.27 21:31:35.965239 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500019
2021.07.27 21:31:35.965272 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500020
2021.07.27 21:31:35.965303 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500021
2021.07.27 21:31:35.965336 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500022
2021.07.27 21:31:35.965368 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500023
2021.07.27 21:31:35.965400 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500024
2021.07.27 21:31:35.965432 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500025
2021.07.27 21:31:35.965464 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500026
2021.07.27 21:31:35.965496 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500027
2021.07.27 21:31:35.965528 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500028
2021.07.27 21:31:35.965560 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500029
2021.07.27 21:31:35.965593 [ 1214448 ] {} <Warning> RaftInstance: cancelled non-blocking client request 500030
``` | https://github.com/ClickHouse/ClickHouse/issues/26927 | https://github.com/ClickHouse/ClickHouse/pull/27081 | 6b6b4f114bda942b1956cff78525a0c896cfa5db | b8f4d480a3625abc3ed92416d39e2cc363e75a78 | "2021-07-28T20:40:01Z" | c++ | "2021-08-07T16:48:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,899 | ["src/Interpreters/JoinToSubqueryTransformVisitor.cpp", "tests/queries/0_stateless/00820_multiple_joins.reference", "tests/queries/0_stateless/00820_multiple_joins.sql"] | Select from table with Engine=Set does not work with 2 joins | **Bug description**
Select from table with Engine=Set does not work with 2 joins.
**Affected version**
Reproduces on 21.3.15.4
**How to reproduce**
Creating base table and set table:
```sql
CREATE TABLE users (
userid UInt64
)
ENGINE = MergeTree() ORDER BY (userid);
INSERT INTO users VALUES (1),(2),(3);
CREATE TABLE users_set (
userid UInt64
)
ENGINE = Set;
INSERT INTO users_set VALUES (1),(2);
-- works
select * from users where userid in users_set;
```
Adding one more table and one more join:
```sql
CREATE TABLE user_names(
userid UInt64,
name String
) ENGINE = MergeTree() ORDER BY (userid);
INSERT INTO user_names VALUES (1, 'Batman'), (2, 'Joker'), (3, 'Superman');
-- still works
select userid, name
from users any left join user_names on (users.userid = user_names.userid)
where userid in users_set;
```
And one more table plus a second join breaks it:
```sql
CREATE TABLE user_real_names(
uid UInt64,
real_name String
) ENGINE = MergeTree() ORDER BY (uid);
INSERT INTO user_real_names VALUES (1, 'Bruce Wayne'), (2, 'Unknown'), (3, 'Clark Kent');
--DOES NOT WORK
select
users.userid,
name,
real_name
from users
any left join user_names on (users.userid = user_names.userid)
any left join user_real_names on (users.userid = user_real_names.uid)
where users.userid in users_set;
```
**Expected behavior**
A working query, just like with one join or none. Expected output:
```
1 Batman Bruce Wayne
2 Joker Unknown
```
**Error message and/or stacktrace**
```Unknown column name 'users_set': While processing SELECT```
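A possible workaround, as an untested sketch: apply the `IN users_set` filter inside a subquery before joining, so the multiple-join rewrite never has to resolve the set table's name as a column:
```sql
select u.userid, name, real_name
from (select userid from users where userid in users_set) as u
any left join user_names on (u.userid = user_names.userid)
any left join user_real_names on (u.userid = user_real_names.uid);
```
This mirrors the single-join case above, which works.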
| https://github.com/ClickHouse/ClickHouse/issues/26899 | https://github.com/ClickHouse/ClickHouse/pull/26957 | c1487fdb80d5f4e6d5b3d760839539202d4bbb87 | 1ebde0278efa50dbc397e41f7033aee719b3d437 | "2021-07-28T11:35:59Z" | c++ | "2021-08-03T14:10:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,829 | ["src/Server/HTTP/WriteBufferFromHTTPServerResponse.cpp", "src/Server/HTTP/WriteBufferFromHTTPServerResponse.h", "src/Server/HTTPHandler.cpp", "src/Server/HTTPHandler.h"] | <Fatal> Application: Child process was terminated by signal 6. | ### version
ClickHouse 21.3.3.14 with revision 54448, build id: 7C39F44C9AD4D3BA36D74775616894F60A552276
Docker version 18.09.9, build 039a7df9ba
### describe
The cluster has 10 nodes, all deployed in Docker using the official image; 6 nodes stopped at the same time.
Log of one of the stopped nodes:
```shell
2021.07.27 15:53:32.477674 [ 53 ] {} <Fatal> BaseDaemon: (version 21.3.3.14 (official build), build id: 7C39F44C9AD4D3BA36D74775616894F60A552276) (from thread 9367) Terminate called for uncaught exception:
Code: 24, e.displayText() = DB::Exception: Cannot write to ostream at offset 376, Stack trace (when copying this message, always include the lines below):
0. DB::WriteBufferFromOStream::nextImpl() @ 0x87083a0 in /usr/bin/clickhouse
1. DB::WriteBufferFromHTTPServerResponse::nextImpl() @ 0xf8db390 in /usr/bin/clickhouse
2. DB::WriteBufferFromHTTPServerResponse::finalize() @ 0xf8db982 in /usr/bin/clickhouse
3. DB::WriteBufferFromHTTPServerResponse::~WriteBufferFromHTTPServerResponse() @ 0xf8dc016 in /usr/bin/clickhouse
4. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0xf84b776 in /usr/bin/clickhouse
5. DB::HTTPServerConnection::run() @ 0xf8d419f in /usr/bin/clickhouse
6. Poco::Net::TCPServerConnection::start() @ 0x11f7d83f in /usr/bin/clickhouse
7. Poco::Net::TCPServerDispatcher::run() @ 0x11f7f251 in /usr/bin/clickhouse
8. Poco::PooledThread::run() @ 0x120b5979 in /usr/bin/clickhouse
9. Poco::ThreadImpl::runnableEntry(void*) @ 0x120b17da in /usr/bin/clickhouse
10. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
11. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.3.3.14 (official build))
2021.07.27 15:53:32.478090 [ 22026 ] {} <Fatal> BaseDaemon: ########################################
2021.07.27 15:53:32.478128 [ 22026 ] {} <Fatal> BaseDaemon: (version 21.3.3.14 (official build), build id: 7C39F44C9AD4D3BA36D74775616894F60A552276) (from thread 8964) (query_id: ea7ebad8-157f-44b7-a04c-caf15341d3e6) Receiv
ed signal Aborted (6)
2021.07.27 15:53:32.478141 [ 22026 ] {} <Fatal> BaseDaemon:
2021.07.27 15:53:32.478165 [ 22026 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f9f915e218b 0x7f9f915c1859 0x87ca798 0x13ac7d43 0x13ac7cec 0x860e06b 0xf8dc17d 0xf84b776 0xf8d419f 0x11f7d83f 0x11f7f251 0x120b5979 0x120b17da 0x7f
9f91797609 0x7f9f916be293
2021.07.27 15:53:32.478206 [ 22026 ] {} <Fatal> BaseDaemon: 1. gsignal @ 0x4618b in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.07.27 15:53:32.478221 [ 22026 ] {} <Fatal> BaseDaemon: 2. abort @ 0x25859 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.07.27 15:53:32.478243 [ 22026 ] {} <Fatal> BaseDaemon: 3. ? @ 0x87ca798 in /usr/bin/clickhouse
2021.07.27 15:53:32.478252 [ 22026 ] {} <Fatal> BaseDaemon: 4. ? @ 0x13ac7d43 in ?
2021.07.27 15:53:32.478269 [ 22026 ] {} <Fatal> BaseDaemon: 5. std::terminate() @ 0x13ac7cec in ?
2021.07.27 15:53:32.478280 [ 22026 ] {} <Fatal> BaseDaemon: 6. ? @ 0x860e06b in /usr/bin/clickhouse
2021.07.27 15:53:32.478299 [ 22026 ] {} <Fatal> BaseDaemon: 7. DB::WriteBufferFromHTTPServerResponse::~WriteBufferFromHTTPServerResponse() @ 0xf8dc17d in /usr/bin/clickhouse
2021.07.27 15:53:32.478314 [ 22026 ] {} <Fatal> BaseDaemon: 8. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0xf84b776 in /usr/bin/clickhouse
2021.07.27 15:53:32.478333 [ 22026 ] {} <Fatal> BaseDaemon: 9. DB::HTTPServerConnection::run() @ 0xf8d419f in /usr/bin/clickhouse
2021.07.27 15:53:32.478347 [ 22026 ] {} <Fatal> BaseDaemon: 10. Poco::Net::TCPServerConnection::start() @ 0x11f7d83f in /usr/bin/clickhouse
2021.07.27 15:53:32.478358 [ 22026 ] {} <Fatal> BaseDaemon: 11. Poco::Net::TCPServerDispatcher::run() @ 0x11f7f251 in /usr/bin/clickhouse
2021.07.27 15:53:32.478371 [ 22026 ] {} <Fatal> BaseDaemon: 12. Poco::PooledThread::run() @ 0x120b5979 in /usr/bin/clickhouse
2021.07.27 15:53:32.478383 [ 22026 ] {} <Fatal> BaseDaemon: 13. Poco::ThreadImpl::runnableEntry(void*) @ 0x120b17da in /usr/bin/clickhouse
2021.07.27 15:53:32.478401 [ 22026 ] {} <Fatal> BaseDaemon: 14. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.07.27 15:53:32.478415 [ 22026 ] {} <Fatal> BaseDaemon: 15. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.07.27 15:53:32.479836 [ 22027 ] {} <Fatal> BaseDaemon: ########################################
2021.07.27 15:53:32.479946 [ 22027 ] {} <Fatal> BaseDaemon: (version 21.3.3.14 (official build), build id: 7C39F44C9AD4D3BA36D74775616894F60A552276) (from thread 9367) (query_id: 8b60c4f0-cbb2-4e14-95c2-b40a2d24f4ed) Receiv
ed signal Aborted (6)
2021.07.27 15:53:32.479996 [ 22027 ] {} <Fatal> BaseDaemon:
2021.07.27 15:53:32.480082 [ 22027 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f9f915e218b 0x7f9f915c1859 0x87ca798 0x13ac7d43 0x13ac7cec 0x860e06b 0xf8dc17d 0xf84b776 0xf8d419f 0x11f7d83f 0x11f7f251 0x120b5979 0x120b17da 0x7f
9f91797609 0x7f9f916be293
2021.07.27 15:53:32.480580 [ 22027 ] {} <Fatal> BaseDaemon: 1. gsignal @ 0x4618b in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.07.27 15:53:32.480632 [ 22027 ] {} <Fatal> BaseDaemon: 2. abort @ 0x25859 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.07.27 15:53:32.480674 [ 22027 ] {} <Fatal> BaseDaemon: 3. ? @ 0x87ca798 in /usr/bin/clickhouse
2021.07.27 15:53:32.480697 [ 22027 ] {} <Fatal> BaseDaemon: 4. ? @ 0x13ac7d43 in ?
2021.07.27 15:53:32.480738 [ 22027 ] {} <Fatal> BaseDaemon: 5. std::terminate() @ 0x13ac7cec in ?
2021.07.27 15:53:32.480761 [ 22027 ] {} <Fatal> BaseDaemon: 6. ? @ 0x860e06b in /usr/bin/clickhouse
2021.07.27 15:53:32.480789 [ 22027 ] {} <Fatal> BaseDaemon: 7. DB::WriteBufferFromHTTPServerResponse::~WriteBufferFromHTTPServerResponse() @ 0xf8dc17d in /usr/bin/clickhouse
2021.07.27 15:53:32.480811 [ 22027 ] {} <Fatal> BaseDaemon: 8. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0xf84b776 in /usr/bin/clickhouse
2021.07.27 15:53:32.480832 [ 22027 ] {} <Fatal> BaseDaemon: 9. DB::HTTPServerConnection::run() @ 0xf8d419f in /usr/bin/clickhouse
2021.07.27 15:53:32.480854 [ 22027 ] {} <Fatal> BaseDaemon: 10. Poco::Net::TCPServerConnection::start() @ 0x11f7d83f in /usr/bin/clickhouse
2021.07.27 15:53:32.480873 [ 22027 ] {} <Fatal> BaseDaemon: 11. Poco::Net::TCPServerDispatcher::run() @ 0x11f7f251 in /usr/bin/clickhouse
2021.07.27 15:53:32.480895 [ 22027 ] {} <Fatal> BaseDaemon: 12. Poco::PooledThread::run() @ 0x120b5979 in /usr/bin/clickhouse
2021.07.27 15:53:32.480929 [ 22027 ] {} <Fatal> BaseDaemon: 13. Poco::ThreadImpl::runnableEntry(void*) @ 0x120b17da in /usr/bin/clickhouse
2021.07.27 15:53:32.480955 [ 22027 ] {} <Fatal> BaseDaemon: 14. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.07.27 15:53:32.481005 [ 22027 ] {} <Fatal> BaseDaemon: 15. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.07.27 15:53:32.482841 [ 7895 ] {440b576c-f0f8-41df-985b-a90aca4391ae} <Information> executeQuery: Read 43948554 rows, 12.27 GiB in 8.725333008 sec., 5036891 rows/sec., 1.41 GiB/sec.
2021.07.27 15:53:32.491216 [ 7895 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 32, e.displayText() = I/O error: Broken pipe, Stack trace (when copying this message, always include the lines below):
0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x11f7622d in /usr/bin/clickhouse
1. Poco::Net::SocketImpl::sendBytes(void const*, int, int) @ 0x11f7791a in /usr/bin/clickhouse
2. Poco::Net::StreamSocketImpl::sendBytes(void const*, int, int) @ 0x11f7c8a6 in /usr/bin/clickhouse
3. Poco::Net::HTTPSession::write(char const*, long) @ 0x11f4da83 in /usr/bin/clickhouse
4. Poco::Net::HTTPChunkedIOS::~HTTPChunkedIOS() @ 0x11f37980 in /usr/bin/clickhouse
5. Poco::Net::HTTPChunkedOutputStream::~HTTPChunkedOutputStream() @ 0x11f383ee in /usr/bin/clickhouse
6. DB::HTTPServerConnection::run() @ 0xf8d4309 in /usr/bin/clickhouse
7. Poco::Net::TCPServerConnection::start() @ 0x11f7d83f in /usr/bin/clickhouse
8. Poco::Net::TCPServerDispatcher::run() @ 0x11f7f251 in /usr/bin/clickhouse
9. Poco::PooledThread::run() @ 0x120b5979 in /usr/bin/clickhouse
10. Poco::ThreadImpl::runnableEntry(void*) @ 0x120b17da in /usr/bin/clickhouse
11. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
12. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.3.3.14 (official build))
```
The log of another stopped node:
```shell
2021.07.27 15:53:30.781869 [ 51 ] {} <Fatal> BaseDaemon: (version 21.3.3.14 (official build), build id: 7C39F44C9AD4D3BA36D74775616894F60A552276) (from thread 2052) Terminate called for uncaught exception:
Code: 24, e.displayText() = DB::Exception: Cannot write to ostream at offset 664, Stack trace (when copying this message, always include the lines below):
0. DB::WriteBufferFromOStream::nextImpl() @ 0x87083a0 in /usr/bin/clickhouse
1. DB::WriteBufferFromHTTPServerResponse::nextImpl() @ 0xf8db390 in /usr/bin/clickhouse
2. DB::WriteBufferFromHTTPServerResponse::finalize() @ 0xf8db982 in /usr/bin/clickhouse
3. DB::WriteBufferFromHTTPServerResponse::~WriteBufferFromHTTPServerResponse() @ 0xf8dc016 in /usr/bin/clickhouse
4. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0xf84b776 in /usr/bin/clickhouse
5. DB::HTTPServerConnection::run() @ 0xf8d419f in /usr/bin/clickhouse
6. Poco::Net::TCPServerConnection::start() @ 0x11f7d83f in /usr/bin/clickhouse
7. Poco::Net::TCPServerDispatcher::run() @ 0x11f7f251 in /usr/bin/clickhouse
8. Poco::PooledThread::run() @ 0x120b5979 in /usr/bin/clickhouse
9. Poco::ThreadImpl::runnableEntry(void*) @ 0x120b17da in /usr/bin/clickhouse
10. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
11. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.3.3.14 (official build))
2021.07.27 15:53:30.782495 [ 3491 ] {} <Fatal> BaseDaemon: ########################################
2021.07.27 15:53:30.782624 [ 3491 ] {} <Fatal> BaseDaemon: (version 21.3.3.14 (official build), build id: 7C39F44C9AD4D3BA36D74775616894F60A552276) (from thread 2052) (query_id: 76cceb75-1d87-4841-898d-bcab4a16d510) Receive
d signal Aborted (6)
2021.07.27 15:53:30.782664 [ 3491 ] {} <Fatal> BaseDaemon:
2021.07.27 15:53:30.782728 [ 3491 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f184fbbc18b 0x7f184fb9b859 0x87ca798 0x13ac7d43 0x13ac7cec 0x860e06b 0xf8dc17d 0xf84b776 0xf8d419f 0x11f7d83f 0x11f7f251 0x120b5979 0x120b17da 0x7f1
84fd71609 0x7f184fc98293
2021.07.27 15:53:30.782843 [ 3491 ] {} <Fatal> BaseDaemon: 1. gsignal @ 0x4618b in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.07.27 15:53:30.782882 [ 3491 ] {} <Fatal> BaseDaemon: 2. abort @ 0x25859 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.07.27 15:53:30.782911 [ 3491 ] {} <Fatal> BaseDaemon: 3. ? @ 0x87ca798 in /usr/bin/clickhouse
2021.07.27 15:53:30.782927 [ 3491 ] {} <Fatal> BaseDaemon: 4. ? @ 0x13ac7d43 in ?
2021.07.27 15:53:30.782952 [ 3491 ] {} <Fatal> BaseDaemon: 5. std::terminate() @ 0x13ac7cec in ?
2021.07.27 15:53:30.782969 [ 3491 ] {} <Fatal> BaseDaemon: 6. ? @ 0x860e06b in /usr/bin/clickhouse
2021.07.27 15:53:30.783001 [ 3491 ] {} <Fatal> BaseDaemon: 7. DB::WriteBufferFromHTTPServerResponse::~WriteBufferFromHTTPServerResponse() @ 0xf8dc17d in /usr/bin/clickhouse
2021.07.27 15:53:30.783025 [ 3491 ] {} <Fatal> BaseDaemon: 8. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0xf84b776 in /usr/bin/clickhouse
2021.07.27 15:53:30.783063 [ 3491 ] {} <Fatal> BaseDaemon: 9. DB::HTTPServerConnection::run() @ 0xf8d419f in /usr/bin/clickhouse
2021.07.27 15:53:30.783084 [ 3491 ] {} <Fatal> BaseDaemon: 10. Poco::Net::TCPServerConnection::start() @ 0x11f7d83f in /usr/bin/clickhouse
2021.07.27 15:53:30.783102 [ 3491 ] {} <Fatal> BaseDaemon: 11. Poco::Net::TCPServerDispatcher::run() @ 0x11f7f251 in /usr/bin/clickhouse
2021.07.27 15:53:30.783123 [ 3491 ] {} <Fatal> BaseDaemon: 12. Poco::PooledThread::run() @ 0x120b5979 in /usr/bin/clickhouse
2021.07.27 15:53:30.783141 [ 3491 ] {} <Fatal> BaseDaemon: 13. Poco::ThreadImpl::runnableEntry(void*) @ 0x120b17da in /usr/bin/clickhouse
2021.07.27 15:53:30.783168 [ 3491 ] {} <Fatal> BaseDaemon: 14. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
2021.07.27 15:53:30.783187 [ 3491 ] {} <Fatal> BaseDaemon: 15. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.07.27 15:53:30.888286 [ 3491 ] {} <Fatal> BaseDaemon: Checksum of the binary: 11A63B17AD9F07EA53F49019A0B82E36, integrity check passed.
2021.07.27 15:53:30.888415 [ 3491 ] {} <Information> SentryWriter: Not sending crash report
2021.07.27 15:53:30.938542 [ 2061 ] {4402c092-645d-460d-80f1-4c1a8837e88c} <Information> executeQuery: Read 1 rows, 1.00 B in 0.000678913 sec., 1472 rows/sec., 1.44 KiB/sec.
2021.07.27 15:53:30.998064 [ 2020 ] {e493725b-2c00-4ca3-bd23-b91b4905b7cd} <Error> DynamicQueryHandler: Code: 307, e.displayText() = DB::Exception: Chunk size is too large, Stack trace (when copying this message, always inc
lude the lines below):
0. DB::HTTPChunkedReadBuffer::readChunkHeader() @ 0xf8d69fb in /usr/bin/clickhouse
1. DB::HTTPChunkedReadBuffer::nextImpl() @ 0xf8d6d0b in /usr/bin/clickhouse
2. DB::wrapReadBufferReference(DB::ReadBuffer&)::ReadBufferWrapper::nextImpl() @ 0xe716f3c in /usr/bin/clickhouse
3. DB::CompressedReadBufferBase::readCompressedData(unsigned long&, unsigned long&, bool) @ 0xe7d740c in /usr/bin/clickhouse
4. DB::CompressedReadBuffer::nextImpl() @ 0xe7d6f27 in /usr/bin/clickhouse
5. DB::ConcatReadBuffer::nextImpl() @ 0xe9659be in /usr/bin/clickhouse
6. DB::LimitReadBuffer::nextImpl() @ 0x86a730c in /usr/bin/clickhouse
7. DB::executeQuery(DB::ReadBuffer&, DB::WriteBuffer&, bool, DB::Context&, std::__1::function<void (std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)>) @ 0xf12ff7b in /usr/bin/clickhouse
8. DB::HTTPHandler::processQuery(DB::Context&, DB::HTTPServerRequest&, DB::HTMLForm&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&, std::__1::optional<DB::CurrentThread::QueryScope>&) @ 0xf8472fa in /usr/bin/clickhouse
9. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0xf84b48e in /usr/bin/clickhouse
10. DB::HTTPServerConnection::run() @ 0xf8d419f in /usr/bin/clickhouse
11. Poco::Net::TCPServerConnection::start() @ 0x11f7d83f in /usr/bin/clickhouse
12. Poco::Net::TCPServerDispatcher::run() @ 0x11f7f251 in /usr/bin/clickhouse
13. Poco::PooledThread::run() @ 0x120b5979 in /usr/bin/clickhouse
14. Poco::ThreadImpl::runnableEntry(void*) @ 0x120b17da in /usr/bin/clickhouse
15. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
16. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.3.3.14 (official build))
2021.07.27 15:53:30.998398 [ 2020 ] {e493725b-2c00-4ca3-bd23-b91b4905b7cd} <Error> DynamicQueryHandler: Cannot send exception to client: Code: 246, e.displayText() = DB::Exception: Unexpected data instead of HTTP chunk header, Stack trace (when copying this message, always include the lines below):
0. DB::HTTPChunkedReadBuffer::readChunkHeader() @ 0xf8d69a2 in /usr/bin/clickhouse
1. DB::HTTPChunkedReadBuffer::nextImpl() @ 0xf8d6d0b in /usr/bin/clickhouse
2. DB::HTTPHandler::trySendExceptionToClient(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, DB::HTTPServerRequest&, DB::HTTPServerResponse&, DB::HTTPHandler::Output&) @ 0xf84a842 in /usr/bin/clickhouse
3. DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0xf84bbb5 in /usr/bin/clickhouse
4. DB::HTTPServerConnection::run() @ 0xf8d419f in /usr/bin/clickhouse
5. Poco::Net::TCPServerConnection::start() @ 0x11f7d83f in /usr/bin/clickhouse
6. Poco::Net::TCPServerDispatcher::run() @ 0x11f7f251 in /usr/bin/clickhouse
7. Poco::PooledThread::run() @ 0x120b5979 in /usr/bin/clickhouse
8. Poco::ThreadImpl::runnableEntry(void*) @ 0x120b17da in /usr/bin/clickhouse
9. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
10. __clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
(version 21.3.3.14 (official build))
```
| https://github.com/ClickHouse/ClickHouse/issues/26829 | https://github.com/ClickHouse/ClickHouse/pull/28604 | 4eabb713480d37bbc5519aee29ce13be3e9f2870 | 7bc6b8fd70f87b6b216db13224f4b623c6eb2b17 | "2021-07-27T09:44:39Z" | c++ | "2021-09-30T13:44:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 26,806 | ["docker/test/integration/runner/compose/docker_compose_jdbc_bridge.yml", "tests/integration/helpers/cluster.py", "tests/integration/test_jdbc_bridge/test.py"] | Fix test_jdbc_bridge flakiness | Example of test crash
https://clickhouse-test-reports.s3.yandex.net/0/1c6cae3f8b8ab287f8ff0d6b79349707fbc311d7/integration_tests_(release).html
```
2021.07.26 08:18:11.893385 [ 12 ] {63a04f0d-c0e0-4d1c-9647-30bfb4470920} <Debug> executeQuery: (from 172.16.15.1:34042) SELECT * FROM jdbc( 'self?mutation', '
SET mutations_sync = 1; ALTER TABLE test.test_delete DELETE WHERE Num < 1000 - 1;' )
2021.07.26 08:18:11.893513 [ 12 ] {63a04f0d-c0e0-4d1c-9647-30bfb4470920} <Trace> ContextAccess (default): Access granted: CREATE TEMPORARY TABLE, JDBC ON *.*
2021.07.26 08:18:11.894511 [ 12 ] {63a04f0d-c0e0-4d1c-9647-30bfb4470920} <Trace> ReadWriteBufferFromHTTP: Sending request to http://bridge1:9019/ping
2021.07.26 08:18:11.901093 [ 12 ] {63a04f0d-c0e0-4d1c-9647-30bfb4470920} <Trace> ReadWriteBufferFromHTTP: Sending request to http://bridge1:9019/columns_info?connection_string=self%3Fmutation&table=SET%20mutations_sync%20%3D%201%3B%20ALTER%20TABLE%20test.test_delete%20DELETE%20WHERE%20Num%20%3C%201000%20-%201%3B&external_table_functions_use_nulls=true
2021.07.26 08:18:11.917869 [ 12 ] {63a04f0d-c0e0-4d1c-9647-30bfb4470920} <Error> executeQuery: Code: 86. DB::Exception: Received error from remote server /columns_info?connection_string=self%3Fmutation&table=SET%20mutations_sync%20%3D%201%3B%20ALTER%20TABLE%20test.test_delete%20DELETE%20WHERE%20Num%20%3C%201000%20-%201%3B&external_table_functions_use_nulls=true. HTTP status code: 500 Internal Server Error, body: NamedDataSource [self] does not exist!. (RECEIVED_ERROR_FROM_REMOTE_IO_SERVER) (version 21.9.1.7574 (official build)) (from 172.16.15.1:34042) (in query: SELECT * FROM jdbc( 'self?mutation', 'SET mutations_sync = 1; ALTER TABLE test.test_delete DELETE WHERE Num < 1000 - 1;' ) ), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x90746ba in /usr/bin/clickhouse
1. DB::assertResponseIsOk(Poco::Net::HTTPRequest const&, Poco::Net::HTTPResponse&, std::__1::basic_istream<char, std::__1::char_traits<char> >&, bool) @ 0xe56d6e3 in /usr/bin/clickhouse
2. DB::detail::ReadWriteBufferFromHTTPBase<std::__1::shared_ptr<DB::UpdatableSession> >::call(Poco::URI, Poco::Net::HTTPResponse&) @ 0xe563cb5 in /usr/bin/clickhouse
3. DB::detail::ReadWriteBufferFromHTTPBase<std::__1::shared_ptr<DB::UpdatableSession> >::ReadWriteBufferFromHTTPBase(std::__1::shared_ptr<DB::UpdatableSession>, Poco::URI, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<void (std::__1::basic_ostream<char, std::__1::char_traits<char> >&)>, Poco::Net::HTTPBasicCredentials const&, unsigned long, std::__1::vector<std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >, DB::RemoteHostFilter const&) @ 0xe560342 in /usr/bin/clickhouse
4. DB::ReadWriteBufferFromHTTP::ReadWriteBufferFromHTTP(Poco::URI, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<void (std::__1::basic_ostream<char, std::__1::char_traits<char> >&)>, DB::ConnectionTimeouts const&, unsigned long, Poco::Net::HTTPBasicCredentials const&, unsigned long, std::__1::vector<std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, DB::RemoteHostFilter const&) @ 0xe55fb17 in /usr/bin/clickhouse
5. ? @ 0xf99a455 in /usr/bin/clickhouse
6. DB::ITableFunctionXDBC::executeImpl(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::ColumnsDescription) const @ 0xf99ab3d in /usr/bin/clickhouse
7. DB::ITableFunction::execute(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::ColumnsDescription) const @ 0x1000e50e in /usr/bin/clickhouse
8. DB::Context::executeTableFunction(std::__1::shared_ptr<DB::IAST> const&) @ 0x1010a711 in /usr/bin/clickhouse
9. DB::JoinedTables::getLeftTableStorage() @ 0x1072c2b1 in /usr/bin/clickhouse
10. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB
::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1
::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, s
td::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&) @ 0x105139d7 in /usr/bin/clickhouse
11. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions
const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<cha
r, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x10512fbe in /usr/bin/clickhouse
12. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::S
electQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1
::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x106f0949 in /usr/bin/clickhouse
13. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x102dcc97 in /usr/bin/cl
ickhouse
14. ? @ 0x108ba52f in /usr/bin/clickhouse
15. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB
::QueryProcessingStage::Enum, bool) @ 0x108b8883 in /usr/bin/clickhouse
16. DB::TCPHandler::runImpl() @ 0x1115778d in /usr/bin/clickhouse
17. DB::TCPHandler::run() @ 0x1116a399 in /usr/bin/clickhouse
18. Poco::Net::TCPServerConnection::start() @ 0x13cf750f in /usr/bin/clickhouse
19. Poco::Net::TCPServerDispatcher::run() @ 0x13cf8f9a in /usr/bin/clickhouse
20. Poco::PooledThread::run() @ 0x13e2be19 in /usr/bin/clickhouse
21. Poco::ThreadImpl::runnableEntry(void*) @ 0x13e280aa in /usr/bin/clickhouse
22. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
23. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
2021.07.26 08:18:11.918150 [ 12 ] {63a04f0d-c0e0-4d1c-9647-30bfb4470920} <Error> TCPHandler: Code: 86. DB::Exception: Received error from remote server /columns_info?connection_string=self%3Fmutation&table=SET%20mutations_sync%20%3D%201%3B%20ALTER%20TABLE%20test.test_delete%20DELETE%20WHERE%20Num%20%3C%201000%20-%201%3B&external_table_functions_use_nulls=true. HTTP status code: 500 Internal Server Error, body: NamedDataSource [self] does not exist!. (RECEIVED_ERROR_FROM_REMOTE_IO_SERVER), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x90746ba in /usr/bin/clickhouse
1. DB::assertResponseIsOk(Poco::Net::HTTPRequest const&, Poco::Net::HTTPResponse&, std::__1::basic_istream<char, std::__1::char_traits<char> >&, bool) @ 0xe56d6e3 in /usr/bin/clickhouse
2. DB::detail::ReadWriteBufferFromHTTPBase<std::__1::shared_ptr<DB::UpdatableSession> >::call(Poco::URI, Poco::Net::HTTPResponse&) @ 0xe563cb5 in /usr/bin/clickhouse
3. DB::detail::ReadWriteBufferFromHTTPBase<std::__1::shared_ptr<DB::UpdatableSession> >::ReadWriteBufferFromHTTPBase(std::__1::shared_ptr<DB::UpdatableSession>, Poco::URI, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<void (std::__1::basic_ostream<char, std::__1::char_traits<char> >&)>, Poco::Net::HTTPBasicCredentials const&, unsigned long, std::__1::vector<std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > >, DB::RemoteHostFilter const&) @ 0xe560342 in /usr/bin/clickhouse
4. DB::ReadWriteBufferFromHTTP::ReadWriteBufferFromHTTP(Poco::URI, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::function<void (std::__1::basic_ostream<char, std::__1::char_traits<char> >&)>, DB::ConnectionTimeouts const&, unsigned long, Poco::Net::HTTPBasicCredentials const&, unsigned long, std::__1::vector<std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > > const&, DB::RemoteHostFilter const&) @ 0xe55fb17 in /usr/bin/clickhouse
5. ? @ 0xf99a455 in /usr/bin/clickhouse
6. DB::ITableFunctionXDBC::executeImpl(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::ColumnsDescription) const @ 0xf99ab3d in /usr/bin/clickhouse
7. DB::ITableFunction::execute(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, DB::ColumnsDescription) const @ 0x1000e50e in /usr/bin/clickhouse
8. DB::Context::executeTableFunction(std::__1::shared_ptr<DB::IAST> const&) @ 0x1010a711 in /usr/bin/clickhouse
9. DB::JoinedTables::getLeftTableStorage() @ 0x1072c2b1 in /usr/bin/clickhouse
10. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::shared_ptr<DB::IBlockInputStream> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&) @ 0x105139d7 in /usr/bin/clickhouse
11. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x10512fbe in /usr/bin/clickhouse
12. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > > > const&) @ 0x106f0949 in /usr/bin/clickhouse
13. DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x102dcc97 in /usr/bin/clickhouse
14. ? @ 0x108ba52f in /usr/bin/clickhouse
15. DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, bool) @ 0x108b8883 in /usr/bin/clickhouse
16. DB::TCPHandler::runImpl() @ 0x1115778d in /usr/bin/clickhouse
17. DB::TCPHandler::run() @ 0x1116a399 in /usr/bin/clickhouse
18. Poco::Net::TCPServerConnection::start() @ 0x13cf750f in /usr/bin/clickhouse
19. Poco::Net::TCPServerDispatcher::run() @ 0x13cf8f9a in /usr/bin/clickhouse
20. Poco::PooledThread::run() @ 0x13e2be19 in /usr/bin/clickhouse
21. Poco::ThreadImpl::runnableEntry(void*) @ 0x13e280aa in /usr/bin/clickhouse
22. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
23. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
``` | https://github.com/ClickHouse/ClickHouse/issues/26806 | https://github.com/ClickHouse/ClickHouse/pull/26827 | 41f8f747c0088146dcb3118982a1cb64e108e3da | 093384f90f91bf0ef41e4d013a111d6c0a00eb27 | "2021-07-26T10:32:40Z" | c++ | "2021-07-28T06:34:13Z" |