status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,413 | ["src/Storages/MergeTree/DataPartsExchange.cpp", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "tests/integration/test_projection_report_broken_part/__init__.py", "tests/integration/test_projection_report_broken_part/configs/testkeeper.xml", "tests/integration/test_projection_report_broken_part/test.py"] | SELECT queries using a projection raise a `BAD_DATA_PART_NAME` error | **Describe what's wrong**
We have implemented a projection using the `GROUP BY` clause.
Queries activating the projection had been running without fault for months, but for the past few weeks we sometimes get a `BAD_DATA_PART_NAME` exception.
This has been the case for various queries that use the projection.
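For context, a minimal sketch of the kind of setup involved; the table and column names here are hypothetical, only the projection name `my_projection` is taken from the error message:

```sql
-- Hypothetical schema; only the projection name matches the report.
CREATE TABLE events
(
    ts DateTime,
    user_id UInt64,
    value UInt64,
    PROJECTION my_projection
    (
        SELECT user_id, sum(value)
        GROUP BY user_id
    )
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events', '{replica}')
ORDER BY (user_id, ts);

-- Queries of this shape can be served from the projection:
SELECT user_id, sum(value) FROM events GROUP BY user_id;
```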
**Does it reproduce on recent release?**
Unfortunately, I can't test on the latest. The CH version used is `22.12.3.5`
**How to reproduce**
We found it really hard to reproduce. If we run an identical query 10 times, it only breaks 1 time.
**Expected behavior**
Queries using the projection always run.
**Error message and/or stacktrace**
```
DB::Exception: Unexpected part name: my_projection: While executing MergeTreeThread. (BAD_DATA_PART_NAME), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked const&, int, bool) @ 0xe750cda in /usr/bin/clickhouse
1. ? @ 0x82ec0c0 in /usr/bin/clickhouse
2. DB::MergeTreePartInfo::fromPartName(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, StrongTypedef<unsigned int, DB::MergeTreeDataFormatVersionTag>) @ 0x14f4b32a in /usr/bin/clickhouse
3. DB::StorageReplicatedMergeTree::enqueuePartForCheck(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, long) @ 0x14a88c53 in /usr/bin/clickhouse
4. DB::MergeTreeReaderCompact::MergeTreeReaderCompact(std::__1::shared_ptr<DB::IMergeTreeDataPartInfoForReader>, DB::NamesAndTypesList, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::UncompressedCache*, DB::MarkCache*, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange>>, DB::MergeTreeReaderSettings, ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>*, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, double, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, double>>>, std::__1::function<void (DB::ReadBufferFromFileBase::ProfileInfo)> const&, int) @ 0x14f5e04c in /usr/bin/clickhouse
5. DB::MergeTreeDataPartCompact::getReader(DB::NamesAndTypesList const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange>> const&, DB::UncompressedCache*, DB::MarkCache*, DB::MergeTreeReaderSettings const&, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, double, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, double>>> const&, std::__1::function<void (DB::ReadBufferFromFileBase::ProfileInfo)> const&) const @ 0x14ea22d3 in /usr/bin/clickhouse
6. DB::IMergeTreeSelectAlgorithm::initializeMergeTreeReadersForPart(std::__1::shared_ptr<DB::IMergeTreeDataPart const>&, DB::MergeTreeReadTaskColumns const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange>> const&, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, double, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, double>>> const&, std::__1::function<void (DB::ReadBufferFromFileBase::ProfileInfo)> const&) @ 0x15771c66 in /usr/bin/clickhouse
7. DB::MergeTreeThreadSelectAlgorithm::finalizeNewTask() @ 0x1579e5b9 in /usr/bin/clickhouse
8. DB::IMergeTreeSelectAlgorithm::read() @ 0x15770dfe in /usr/bin/clickhouse
9. DB::MergeTreeSource::tryGenerate() @ 0x1579f7bc in /usr/bin/clickhouse
10. DB::ISource::work() @ 0x153d3246 in /usr/bin/clickhouse
11. DB::ExecutionThreadContext::executeTask() @ 0x153ee2a6 in /usr/bin/clickhouse
12. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x153e349c in /usr/bin/clickhouse
13. ? @ 0x153e55bd in /usr/bin/clickhouse
14. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xe809f16 in /usr/bin/clickhouse
15. ? @ 0xe80f0e1 in /usr/bin/clickhouse
16. start_thread @ 0x74a4 in /lib/x86_64-linux-gnu/libpthread-2.24.so
17. clone @ 0xe8d0f in /lib/x86_64-linux-gnu/libc-2.24.so
```
**Additional context**
We run a 4-node cluster.
| https://github.com/ClickHouse/ClickHouse/issues/46413 | https://github.com/ClickHouse/ClickHouse/pull/50052 | 8dbf7beb32fa752bc4b87decc741221ae2e9249c | c89f92e1f67271f295cb4a44de12b2916ff393cd | "2023-02-14T16:11:41Z" | c++ | "2023-05-22T11:20:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,345 | ["src/Storages/MergeTree/MutateTask.cpp", "tests/queries/0_stateless/02565_update_empty_nested.reference", "tests/queries/0_stateless/02565_update_empty_nested.sql"] | CORRUPTED_DATA after ALTER TABLE UPDATE on nested data structure. | This bug has been tested on version 23.1.3.
Baseline test which works fine:
```sql
DROP TABLE IF EXISTS test;
CREATE TABLE test(
`id` UInt32,
`data` Array(UInt32),
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO test (id, `data`) SELECT 1, [0,1,2,999999] FROM numbers(10000000);
ALTER TABLE test ADD COLUMN `data_dict` Array(LowCardinality(UInt32));
ALTER TABLE test UPDATE `data_dict` = `data` WHERE 1;
-- after some time (1-2 seconds), run
SELECT * FROM test LIMIT 10;
-- works fine
```
If we now use a nested data structure:
```sql
DROP TABLE IF EXISTS test;
CREATE TABLE test(
`id` UInt32,
`nested.data` Array(UInt32),
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO test (id, `nested.data`) SELECT 1, [0,1,2,999999] FROM numbers(10000000);
ALTER TABLE test ADD COLUMN `nested.data_dict` Array(LowCardinality(UInt32));
ALTER TABLE test UPDATE `nested.data_dict` = `nested.data` WHERE 1;
```
And if we now run `select * from test limit 10;` after 1-2 seconds, we get the following error message:
```
Received exception from server (version 23.1.3):
Code: 246. DB::Exception: Received from localhost:9000. DB::Exception: Bad size of marks file '/work/clickhouse/store/022/02209c6c-bfcc-4874-b0f8-8e7708703923/all_1_5_1_11/nested.size0.mrk2': 0, must be: 15384: (while reading column nested.data): (while reading from part /work/clickhouse/store/022/02209c6c-bfcc-4874-b0f8-8e7708703923/all_1_5_1_11/ from mark 0 with max_rows_to_read = 10): While executing MergeTreeInOrder. (CORRUPTED_DATA)
```
Same error happens without `LowCardinality`.
```
DROP TABLE IF EXISTS test;
CREATE TABLE test(
`id` UInt32,
`nested.data` Array(UInt32),
)
ENGINE = MergeTree
ORDER BY id;
INSERT INTO test (id, `nested.data`) SELECT 1, [0,1,2,999999] FROM numbers(10000000);
ALTER TABLE test ADD COLUMN `nested.data_dict` Array(UInt32);
ALTER TABLE test UPDATE `nested.data_dict` = `nested.data` WHERE 1;
-- after some time (1-2 seconds), run
SELECT * FROM test LIMIT 10;
-- results in CORRUPTED_DATA
```
| https://github.com/ClickHouse/ClickHouse/issues/46345 | https://github.com/ClickHouse/ClickHouse/pull/46387 | 726fb4bebcee3732d621c4758f3f9784805e73fa | ecc6ff707bfd21b5483ef540c004d7fdc3e2d2c2 | "2023-02-13T13:31:11Z" | c++ | "2023-02-14T15:28:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,286 | ["src/Interpreters/InterpreterSelectQuery.cpp", "src/Storages/IStorage.h", "src/Storages/StorageMerge.cpp", "src/Storages/StorageMerge.h", "tests/queries/0_stateless/00717_merge_and_distributed.sql", "tests/queries/0_stateless/01915_merge_prewhere_virtual_column_rand_chao_wang.sql", "tests/queries/0_stateless/01931_storage_merge_no_columns.sql", "tests/queries/0_stateless/02570_merge_alias_prewhere.reference", "tests/queries/0_stateless/02570_merge_alias_prewhere.sql"] | 23.1 PREWHERE is disabled for non-identical Merge tables | I am very upset by your decision to disable PREWHERE for Merge tables in case data types do not match for some of the columns in the underlying tables (https://github.com/ClickHouse/ClickHouse/pull/44716).
Here is my use-case that doesn't work anymore:
```
create table default.t1 (key Int32, value Int32) engine=MergeTree order by key;
create table default.t2 (key Int32, value Int64) engine=MergeTree order by key;
create table default.t (key Int32, value Int32) engine=Merge(default,'^t.$');
insert into t1 values (1,1);
insert into t2 values (2,2);
select * from t prewhere key=1;
Code: 182. DB::Exception: Cannot use PREWHERE with table t, probably some columns don't have same type or an underlying table doesn't support PREWHERE. (ILLEGAL_PREWHERE) (version 23.1.3.5 (official build))
```
As you can see the data type for the column participating in the filter is the same in all tables, but PREWHERE is disabled anyway.
I have been running ClickHouse for many years, and some of my tables are so old that they were created with the old syntax. I use Merge tables to avoid changing old tables and migrating data. I'm very upset that I can't continue doing so.
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,203 | ["docs/en/operations/server-configuration-parameters/settings.md", "tests/integration/test_log_lz4_streaming/configs/logs.xml"] | Document stream log compression |
**Describe the issue**
I wanted to set up log compression, and as I didn't find relevant information in the docs, I started googling around and came across this issue: https://github.com/ClickHouse/ClickHouse/issues/23860, which led me to this pull request, merged to master: https://github.com/ClickHouse/ClickHouse/pull/29219
which does exactly what I want: it implements log compression via LZ4.
Based on this line `if (config.getRawString("logger.stream_compress", "false") == "true")`
I tried to set it up via the `<stream_compress>` setting in `<logger>`, and it all worked flawlessly: logs are compressed!
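For reference, a sketch of the configuration I ended up with (the placement of `<stream_compress>` inside `<logger>` is inferred from the code above, since it is not documented):

```xml
<clickhouse>
    <logger>
        <level>information</level>
        <log>/var/log/clickhouse-server/clickhouse-server.log</log>
        <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
        <!-- undocumented: write log files LZ4-compressed -->
        <stream_compress>true</stream_compress>
    </logger>
</clickhouse>
```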
There is no mention of this option in [logger docs](https://clickhouse.com/docs/en/operations/server-configuration-parameters/settings/#server_configuration_parameters-logger), even though this change was merged over a year ago. Is there any particular reason for that? Is this change experimental and not advised for use, or is it just a missing entry in the docs? I'm asking as I'm a bit hesitant to use an undocumented feature in production. | https://github.com/ClickHouse/ClickHouse/issues/46203 | https://github.com/ClickHouse/ClickHouse/pull/46276 | ddd21ac706fbe4357b87a3ccdbcd7a61e711ac15 | 5dd6f25d5d988c3d9f61ce023e14a59ea43b789c | "2023-02-09T10:46:09Z" | c++ | "2023-02-13T02:55:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,184 | ["docs/en/sql-reference/functions/date-time-functions.md", "src/Functions/formatDateTime.cpp", "tests/queries/0_stateless/00718_format_datetime.reference", "tests/queries/0_stateless/00718_format_datetime.sql", "tests/queries/0_stateless/00719_format_datetime_rand.sql", "tests/queries/0_stateless/00801_daylight_saving_time_hour_underflow.sql", "tests/queries/0_stateless/01411_from_unixtime.reference", "tests/queries/0_stateless/01411_from_unixtime.sql", "tests/queries/0_stateless/02564_date_format.reference", "tests/queries/0_stateless/02564_date_format.sql"] | Add function `DATE_FORMAT` as a compatibility alias. | **Use case**
Compatibility with MySQL.
**Describe the solution you'd like**
1. Make it a synonym of `formatDateTime`.
2. Add the support for `%i` and other missing format substitutions.
3. Fix the error code from ILLEGAL_COLUMN to a more appropriate one.
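A sketch of the requested behavior (the `DATE_FORMAT` alias and `%i`/`%s` support are the proposal, not current behavior; `%M`/`%S` are assumed to be the existing minute/second specifiers of `formatDateTime`):

```sql
-- Proposed: DATE_FORMAT as a synonym accepting MySQL-style specifiers.
SELECT DATE_FORMAT(now(), '%Y-%m-%d %H:%i:%s');
-- Intended to be equivalent to the existing function with its own specifiers:
SELECT formatDateTime(now(), '%Y-%m-%d %H:%M:%S');
```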
```
milovidov-desktop :) SELECT DATE_FORMAT(now(), '%Y-%m-%d %H:%i:%s')
SELECT DATE_FORMAT(now(), '%Y-%m-%d %H:%i:%s')
Query id: c24c519d-9f35-48a3-9ada-f9af8b4911a1
0 rows in set. Elapsed: 0.160 sec.
Received exception:
Code: 46. DB::Exception: Unknown function DATE_FORMAT: While processing DATE_FORMAT(now(), '%Y-%m-%d %H:%i:%s'). (UNKNOWN_FUNCTION)
milovidov-desktop :) SELECT formatDateTime(now(), '%Y-%m-%d %H:%i:%s')
SELECT formatDateTime(now(), '%Y-%m-%d %H:%i:%s')
Query id: ab5ea92d-6026-4cc5-bdf9-6b8a8e8f77b4
0 rows in set. Elapsed: 0.002 sec.
Received exception:
Code: 44. DB::Exception: Wrong syntax '%Y-%m-%d %H:%i:%s', unexpected symbol 'i' for function formatDateTime: While processing formatDateTime(now(), '%Y-%m-%d %H:%i:%s'). (ILLEGAL_COLUMN)
milovidov-desktop :)
``` | https://github.com/ClickHouse/ClickHouse/issues/46184 | https://github.com/ClickHouse/ClickHouse/pull/46302 | 3f424301cd0bdee58ec282eb2110b8e0acbcd1cf | 0ff404da9ce7c85c051feb66a654cf4473edc1c4 | "2023-02-08T20:13:04Z" | c++ | "2023-02-16T16:54:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,172 | ["docs/en/sql-reference/statements/create/table.md"] | Docs bug: syntax on PRIMARY KEY section | **Describe the issue**
On this page, the PRIMARY KEY statement has an extra right-side square bracket "]":
https://clickhouse.com/docs/en/sql-reference/statements/create/table/#primary-key
```sql
CREATE TABLE db.table_name
(
    name1 type1, name2 type2, ...,
    PRIMARY KEY(expr1[, expr2,...])]
)
ENGINE = engine;
```
**Additional context**
Should be the following I think:
```sql
CREATE TABLE db.table_name
(
    name1 type1, name2 type2, ...,
    PRIMARY KEY(expr1[, expr2,...])
)
ENGINE = engine;
```
| https://github.com/ClickHouse/ClickHouse/issues/46172 | https://github.com/ClickHouse/ClickHouse/pull/47029 | 0a32bac97be4e42b9e335eae18838ec89f5d9591 | afac2801a2ccf3d5855cd2c5679a897fc1183590 | "2023-02-08T15:54:23Z" | c++ | "2023-02-28T18:59:07Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,169 | ["src/Storages/IndicesDescription.cpp", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeIndices.cpp", "tests/queries/0_stateless/01705_normalize_create_alter_function_names.reference", "tests/queries/0_stateless/01705_normalize_create_alter_function_names.sql", "tests/queries/0_stateless/02670_constant_skip_index.reference", "tests/queries/0_stateless/02670_constant_skip_index.sql"] | Crash with constant expression in skip index | This index definition doesn't make any sense but crashes the server (several runs may be required).
**How to reproduce**
```sql
DROP TABLE IF EXISTS logins__fuzz_63;
CREATE TABLE logins__fuzz_63
(
id UInt64,
INDEX __idx_fuzz_10829467000260387718 'asdasd' TYPE set(2) GRANULARITY 1
) ENGINE = MergeTree
ORDER BY id;
INSERT INTO logins__fuzz_63 SELECT number FROM numbers(1000);
DROP TABLE logins__fuzz_63;
```
```
2023.02.08 20:08:26.705259 [ 3936720 ] <Fatal> BaseDaemon: ########################################
2023.02.08 20:08:26.705340 [ 3936720 ] <Fatal> BaseDaemon: (version 23.2.1.1022 (official build), build id: 5264B62ECB409DBD66B4FEBE79CC24CD77239CA6) (from thread 3936227) (query_id: ba7a0c23-2fd2-491c-9adb-7698fcc3c886) (query: INSERT INTO logins__fuzz_63 SELECT number FROM numbers(1000);) Received signal Segmentation fault (11)
2023.02.08 20:08:26.705383 [ 3936720 ] <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
2023.02.08 20:08:26.705424 [ 3936720 ] <Fatal> BaseDaemon: Stack trace: 0x88a1030 0x13a60204 0x14335942 0x142e1824 0x142d378a 0x142d4d31 0x142d57fc 0x143f52dc 0x143ee757 0x1463f88b 0x14af0d62 0x149e5feb 0x149e5d39 0x149e561f 0x1480744c 0x147fc4dc 0x147faf19 0x147fac60 0x147f9cd2 0xdf5d056 0xdf628a1 0x7f5494191609 0x7f54940b6163
2023.02.08 20:08:26.705479 [ 3936720 ] <Fatal> BaseDaemon: 2. memcpy @ 0x88a1030 in /usr/bin/clickhouse
2023.02.08 20:08:26.705522 [ 3936720 ] <Fatal> BaseDaemon: 3. DB::ColumnString::insertRangeFrom(DB::IColumn const&, unsigned long, unsigned long) @ 0x13a60204 in /usr/bin/clickhouse
2023.02.08 20:08:26.705551 [ 3936720 ] <Fatal> BaseDaemon: 4. DB::MergeTreeIndexAggregatorSet::update(DB::Block const&, unsigned long*, unsigned long) @ 0x14335942 in /usr/bin/clickhouse
2023.02.08 20:08:26.705588 [ 3936720 ] <Fatal> BaseDaemon: 5. DB::MergeTreeDataPartWriterOnDisk::calculateAndSerializeSkipIndices(DB::Block const&, std::__1::vector<DB::Granule, std::__1::allocator<DB::Granule>> const&) @ 0x142e1824 in /usr/bin/clickhouse
2023.02.08 20:08:26.705623 [ 3936720 ] <Fatal> BaseDaemon: 6. DB::MergeTreeDataPartWriterCompact::writeDataBlockPrimaryIndexAndSkipIndices(DB::Block const&, std::__1::vector<DB::Granule, std::__1::allocator<DB::Granule>> const&) @ 0x142d378a in /usr/bin/clickhouse
2023.02.08 20:08:26.705653 [ 3936720 ] <Fatal> BaseDaemon: 7. DB::MergeTreeDataPartWriterCompact::fillDataChecksums(DB::MergeTreeDataPartChecksums&) @ 0x142d4d31 in /usr/bin/clickhouse
2023.02.08 20:08:26.705687 [ 3936720 ] <Fatal> BaseDaemon: 8. DB::MergeTreeDataPartWriterCompact::fillChecksums(DB::MergeTreeDataPartChecksums&) @ 0x142d57fc in /usr/bin/clickhouse
2023.02.08 20:08:26.705714 [ 3936720 ] <Fatal> BaseDaemon: 9. DB::MergedBlockOutputStream::finalizePartAsync(std::__1::shared_ptr<DB::IMergeTreeDataPart> const&, bool, DB::NamesAndTypesList const*, DB::MergeTreeDataPartChecksums*) @ 0x143f52dc in /usr/bin/clickhouse
2023.02.08 20:08:26.705751 [ 3936720 ] <Fatal> BaseDaemon: 10. DB::MergeTreeDataWriter::writeTempPartImpl(DB::BlockWithPartition&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, long, bool) @ 0x143ee757 in /usr/bin/clickhouse
2023.02.08 20:08:26.705781 [ 3936720 ] <Fatal> BaseDaemon: 11. DB::MergeTreeSink::consume(DB::Chunk) @ 0x1463f88b in /usr/bin/clickhouse
2023.02.08 20:08:26.705829 [ 3936720 ] <Fatal> BaseDaemon: 12. DB::SinkToStorage::onConsume(DB::Chunk) @ 0x14af0d62 in /usr/bin/clickhouse
2023.02.08 20:08:26.705859 [ 3936720 ] <Fatal> BaseDaemon: 13. ? @ 0x149e5feb in /usr/bin/clickhouse
2023.02.08 20:08:26.705883 [ 3936720 ] <Fatal> BaseDaemon: 14. ? @ 0x149e5d39 in /usr/bin/clickhouse
2023.02.08 20:08:26.705913 [ 3936720 ] <Fatal> BaseDaemon: 15. DB::ExceptionKeepingTransform::work() @ 0x149e561f in /usr/bin/clickhouse
2023.02.08 20:08:26.705958 [ 3936720 ] <Fatal> BaseDaemon: 16. DB::ExecutionThreadContext::executeTask() @ 0x1480744c in /usr/bin/clickhouse
2023.02.08 20:08:26.705988 [ 3936720 ] <Fatal> BaseDaemon: 17. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x147fc4dc in /usr/bin/clickhouse
2023.02.08 20:08:26.706018 [ 3936720 ] <Fatal> BaseDaemon: 18. DB::PipelineExecutor::executeImpl(unsigned long) @ 0x147faf19 in /usr/bin/clickhouse
2023.02.08 20:08:26.706044 [ 3936720 ] <Fatal> BaseDaemon: 19. DB::PipelineExecutor::execute(unsigned long) @ 0x147fac60 in /usr/bin/clickhouse
2023.02.08 20:08:26.706077 [ 3936720 ] <Fatal> BaseDaemon: 20. ? @ 0x147f9cd2 in /usr/bin/clickhouse
2023.02.08 20:08:26.706113 [ 3936720 ] <Fatal> BaseDaemon: 21. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xdf5d056 in /usr/bin/clickhouse
2023.02.08 20:08:26.706170 [ 3936720 ] <Fatal> BaseDaemon: 22. ? @ 0xdf628a1 in /usr/bin/clickhouse
2023.02.08 20:08:26.706206 [ 3936720 ] <Fatal> BaseDaemon: 23. ? @ 0x7f5494191609 in ?
2023.02.08 20:08:26.706232 [ 3936720 ] <Fatal> BaseDaemon: 24. clone @ 0x7f54940b6163 in ?
2023.02.08 20:08:26.852972 [ 3936720 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 0FE8DDB43EE348AD0A00CFBC4C423542)
``` | https://github.com/ClickHouse/ClickHouse/issues/46169 | https://github.com/ClickHouse/ClickHouse/pull/46839 | e2aff59a2d6ce3f05dbb7026bbddce64fca8810a | e8cdb0c8b15d8448392102cd0e16f7990c47166d | "2023-02-08T15:42:16Z" | c++ | "2023-03-08T00:03:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,129 | ["docs/en/sql-reference/functions/type-conversion-functions.md", "docs/ru/sql-reference/functions/type-conversion-functions.md", "src/Functions/DateTimeTransforms.h", "src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/01556_accurate_cast_or_null.reference", "tests/queries/0_stateless/01556_accurate_cast_or_null.sql", "tests/queries/0_stateless/01601_accurate_cast.reference", "tests/queries/0_stateless/01601_accurate_cast.sql", "tests/queries/0_stateless/01746_convert_type_with_default.reference", "tests/queries/0_stateless/01746_convert_type_with_default.sql"] | toDateTimeOrDefault() can't parse UInt32 dates that toDateTime() can | **Describe the unexpected behaviour**
```
┌─toDateTimeOrDefault(1675442127)─┐
│ 1970-01-01 00:00:00 │
└─────────────────────────────────┘
```
```
┌─toDateTime(1675442127)─┐
│ 2023-02-03 16:35:27 │
└────────────────────────┘
```
* Which ClickHouse server version to use
Latest master.
| https://github.com/ClickHouse/ClickHouse/issues/46129 | https://github.com/ClickHouse/ClickHouse/pull/50709 | 81b38db5295482d67a8f3a5bd53be066707e21c0 | 128e8c20d5ef3d043f32f4622ddfe04501d8c0ac | "2023-02-07T17:53:56Z" | c++ | "2023-06-12T15:08:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,102 | ["docs/en/operations/settings/query-complexity.md", "src/Core/Settings.h", "src/Interpreters/ClusterProxy/SelectStreamFactory.cpp", "src/Interpreters/ClusterProxy/executeQuery.cpp", "tests/queries/0_stateless/02786_max_execution_time_leaf.reference", "tests/queries/0_stateless/02786_max_execution_time_leaf.sql"] | Add max_execution_time_leaf |
**Use case**
Currently there is a `max_execution_time` setting that can be set in subqueries. But it is hard to use this setting with distributed subqueries, as the timeout is likely to be reached before the distributed subquery finishes.
**Describe the solution you'd like**
Add a `max_execution_time_leaf` setting, similar to the other restrictions on query complexity, that applies only to distributed subqueries.
**Describe alternatives you've considered**
It is possible to work around this using features like `view`, or dynamically with `cluster(view())`, to set the timeout manually in the subquery, since we have control over those subqueries and they are not generated internally. But it is much less handy than simply using distributed tables, and the other `max_*` settings have a leaf version for a reason.
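A hypothetical usage sketch of the proposed setting (the name follows the existing `*_leaf` convention for query-complexity settings and does not exist yet):

```sql
-- Apply the time limit only on the leaf (remote) parts of the distributed query:
SELECT count()
FROM distributed_table
SETTINGS max_execution_time_leaf = 10;
```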
| https://github.com/ClickHouse/ClickHouse/issues/46102 | https://github.com/ClickHouse/ClickHouse/pull/51823 | 8707b75ad837feed35d9ac6c73502c3c4dd700be | 81b1ca22bb8612c753fa6e15376de4cc3b0ca7f3 | "2023-02-07T09:39:09Z" | c++ | "2023-11-03T12:31:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,084 | ["src/Storages/MergeTree/IMergeTreeDataPart.h", "src/Storages/MergeTree/MergeTask.cpp", "src/Storages/MergeTree/MergeTreeDataPartWide.h", "src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.cpp", "src/Storages/MergeTree/MergeTreeDataPartWriterWide.cpp", "src/Storages/MergeTree/MergeTreeSettings.h", "tests/integration/test_backward_compatibility/test_vertical_merges_from_compact_parts.py", "tests/queries/0_stateless/02539_vertical_merge_compact_parts.reference", "tests/queries/0_stateless/02539_vertical_merge_compact_parts.sql"] | Make it possible to use Vertical merge algorithm together with parts of Compact type | **Use case**
There is a special algorithm inside ClickHouse for executing background merges, called Vertical. It first merges only the PRIMARY KEY columns and computes a permutation mask (the order in which rows from all source parts should be fetched and written to the result part to produce the correct result). This algorithm allows the ClickHouse server to consume much less memory than the default, naive algorithm called Horizontal.
Unfortunately, ClickHouse may want to assign a merge operation for parts in Compact format together with parts in Wide format. In this case the background operation will be executed using the Horizontal algorithm and thus may consume a lot of memory.
**Describe the solution you'd like**
Allow using Vertical merge algorithm with parts in Compact format.
| https://github.com/ClickHouse/ClickHouse/issues/46084 | https://github.com/ClickHouse/ClickHouse/pull/46282 | b9844f987688699d9696ae4020a84389125fcc5a | 395d6b6bd5987f2e74edc20bf656808063f75920 | "2023-02-06T15:28:23Z" | c++ | "2023-02-12T02:51:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,055 | ["base/glibc-compatibility/glibc-compatibility.c"] | clickhouse-server.aarch64 0:23.1.3.5-1 GLIBC_2.28 not found | Amazon Linux 2 can't run clickhouse
```
Installed:
clickhouse-client.aarch64 0:23.1.3.5-1
clickhouse-server.aarch64 0:23.1.3.5-1
Dependencies installed:
clickhouse-common-static.aarch64 0:23.1.3.5-1
Complete!
[ec2-user@ip-10-0-7-11 ~]$ sudo /etc/init.d/clickhouse-server start
clickhouse: /lib64/libc.so.6: version `GLIBC_2.28' not found (required by clickhouse)
``` | https://github.com/ClickHouse/ClickHouse/issues/46055 | https://github.com/ClickHouse/ClickHouse/pull/47008 | e800557743ba619f14eded421a7f295ad3b68a6f | 05142af0c021df7f4c5a1bb358bcffbd220b9504 | "2023-02-05T15:25:18Z" | c++ | "2023-03-01T08:31:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 46,013 | ["src/Common/format.h"] | Confusing error message: Argument is too big for formatting | ```
SELECT format('{}asdfasd{}', '111')
Query id: 77df41ec-4f04-4017-863f-f7d31b92893d
0 rows in set. Elapsed: 0.031 sec.
Received exception from server (version 22.13.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Argument is too big for formatting: While processing format('{}asdfasd{}', 'a'). (BAD_ARGUMENTS)
```
Should be something like "Not enough arguments provided to fill all placeholders in the format string".
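For contrast, the call succeeds when the number of arguments matches the number of placeholders:

```sql
SELECT format('{}asdfasd{}', '111', '222');  -- two placeholders, two arguments
```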
| https://github.com/ClickHouse/ClickHouse/issues/46013 | https://github.com/ClickHouse/ClickHouse/pull/57569 | 7a5b40563ab6625e2e2722cd8ba3fa282e82037c | c7ce2b5d5fb596d6bf16a307644e2724bf16d50a | "2023-02-03T13:15:05Z" | c++ | "2023-12-15T10:10:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,964 | ["src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp", "src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/StorageDistributed.cpp", "src/Storages/StorageMerge.cpp"] | Make the `_part` virtual column `LowCardinality(String)` | It will be more natural. | https://github.com/ClickHouse/ClickHouse/issues/45964 | https://github.com/ClickHouse/ClickHouse/pull/45975 | 0dae64fe5419f60e9aea9018d94a6bfaf2c74ab9 | 532b341de921178c6621db7f10093a9cce760641 | "2023-02-02T14:25:27Z" | c++ | "2023-02-05T17:00:46Z" |
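For context, `_part` typically appears in queries like this sketch (the table name is hypothetical):

```sql
SELECT _part, count()
FROM my_table
GROUP BY _part
ORDER BY _part;
-- With the proposed change, _part would be LowCardinality(String) instead of String.
```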
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,894 | ["src/Functions/tupleElement.cpp", "tests/queries/0_stateless/01710_minmax_count_projection.sql", "tests/queries/0_stateless/02541_tuple_element_with_null.reference", "tests/queries/0_stateless/02541_tuple_element_with_null.sql"] | tupleElement with default NULL value returns incorrect result | **Describe the unexpected behaviour**
If the default value for the `tupleElement` function is `NULL` function always return `NULL` even if the key exists.
**How to reproduce**
```
CREATE TABLE default.test_tuple_element
(
tuple Tuple(k1 Nullable(UInt64), k2 UInt64)
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS index_granularity = 8192;
INSERT INTO test_tuple_element VALUES (tuple(1,2)), (tuple(NULL, 3));
SELECT
tupleElement(tuple, 'k1', 0) fine_k1_with_0,
tupleElement(tuple, 'k1', NULL) wrong_k1_with_null,
tupleElement(tuple, 'k2', 0) fine_k2_with_0,
tupleElement(tuple, 'k2', NULL) bad_k2_with_null
FROM test_tuple_element;
┌─fine_k1_with_0─┬─wrong_k1_with_null─┬─fine_k2_with_0─┬─bad_k2_with_null─┐
│ 1 │ ᴺᵁᴸᴸ │ 2 │ ᴺᵁᴸᴸ │
│ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 3 │ ᴺᵁᴸᴸ │
└────────────────┴────────────────────┴────────────────┴──────────────────┘
```
**Expected behavior**
`tupleElement` with defaults should return a tuple element value even when a default value is `NULL`.
Tested on 23.1.2.9. | https://github.com/ClickHouse/ClickHouse/issues/45894 | https://github.com/ClickHouse/ClickHouse/pull/45952 | 9f7d493850ba1617cdc7182d37f02516fb55979a | 061204408afa72f731d79e7c2212f9e77e646f9b | "2023-02-01T14:05:55Z" | c++ | "2023-02-03T15:15:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,808 | ["src/TableFunctions/TableFunctionFormat.cpp", "src/TableFunctions/TableFunctionFormat.h", "tests/queries/0_stateless/02542_table_function_format.reference", "tests/queries/0_stateless/02542_table_function_format.sql"] | Cannot specity table structure for table function `format` | https://clickhouse.com/docs/en/sql-reference/table-functions/format/
**Use case**
```
SELECT * FROM format(TSV, 'cust_id UInt128', '20210129005809043707')
```
**How to solve**
Allow a three-argument version. | https://github.com/ClickHouse/ClickHouse/issues/45808 | https://github.com/ClickHouse/ClickHouse/pull/45873 | 542f54cf2d6559d6cb291a6b2eeb6a36b7f52110 | 1682078f1a2497f8d102fa7734edd3223ed2f077 | "2023-01-31T01:49:18Z" | c++ | "2023-02-01T23:40:31Z" |
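For contrast, the existing two-argument form only infers the structure (a sketch; the inferred column name and type are my assumption):

```sql
SELECT * FROM format(TSV, '20210129005809043707');
-- The column name and type are inferred; there is no way to force UInt128 here.
```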
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,797 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/02551_ipv4_implicit_uint64.reference", "tests/queries/0_stateless/02551_ipv4_implicit_uint64.sql"] | Conversion from UInt64 to IPv4 is not supported | New IPv4 as native code causes inserting IPv4 as UInt32 broken with error "Conversion from UInt64 to IPv4 is not supported";
CREATE TABLE users (ip IPv4) ENGINE=Memory;
INSERT INTO users VALUES (2319771222);
DB::Exception: Conversion from UInt64 to IPv4 is not supported: | https://github.com/ClickHouse/ClickHouse/issues/45797 | https://github.com/ClickHouse/ClickHouse/pull/45865 | ec3bb0c04e8902ae2e21dade6bc5e6443162a0a8 | 57f26f8c7ee6d724ab25a421280e725c4d2b0308 | "2023-01-30T19:36:03Z" | c++ | "2023-02-02T15:53:23Z" |
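A possible workaround sketch until the implicit cast is supported (untested assumption):

```sql
INSERT INTO users SELECT toIPv4(IPv4NumToString(toUInt32(2319771222)));
```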
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,789 | ["src/Interpreters/LogicalExpressionsOptimizer.cpp", "tests/queries/0_stateless/25340_logical_optimizer_alias_bug.reference", "tests/queries/0_stateless/25340_logical_optimizer_alias_bug.sql"] | MULTIPLE_EXPRESSIONS_FOR_ALIAS on the 23.1.2 | Since the latest release (23.1.2), we started to have "MULTIPLE_EXPRESSION_FOR_ALIAS" errors when executing queries.
The table is distributed and I'm using the zero replication feature.
This behaviour is not reproductible on the 22.12 on the same condition (distributed table + zero replication).
**Describe what's wrong**
I'm not able to reproduce it with a simple example, but the following query fails.
```sql
WITH
((position(path, '/a') > 0) AND (NOT (position(path, 'a') > 0))) OR (path = '/b') OR (path = '/b/') as alias1
SELECT
max(alias1)
FROM mytable
WHERE (myid = 1259)
```
I'm not familiar with the internals of the query optimizer, but in the error output the condition has been optimized.
**Does it reproduce on recent release?**
23.1.2
**Enable crash reporting**
I cannot
**How to reproduce**
I was not able to reproduce it locally.
**Expected behavior**
The SQL expression must not raise an alias expression error.
**Error message and/or stacktrace**
```sql
Received exception from server (version 23.1.2):
Code: 179. DB::Exception: Received from localhost:9000. DB::Exception: Different expressions with the same alias alias1:
((position(path, '/a') > 0) AND (NOT (position(path, 'a') > 0))) OR ((path IN ('/b', '/b/')) AS alias1) AS alias1
and
path IN ('/b', '/b/') AS alias1
: While processing ((position(path, '/a') > 0) AND (NOT (position(path, 'a') > 0))) OR ((path IN ('/b', '/b/')) AS alias1) AS alias1. (MULTIPLE_EXPRESSIONS_FOR_ALIAS)
```
| https://github.com/ClickHouse/ClickHouse/issues/45789 | https://github.com/ClickHouse/ClickHouse/pull/47451 | be0ae6a23827e11d3169871d5cca1f5ebe45195f | 0298a96d812c824388c317b7d8ef9e8606ff94cd | "2023-01-30T15:19:10Z" | c++ | "2023-03-14T10:51:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,773 | ["src/Functions/CRC.cpp", "src/Functions/EmptyImpl.h", "src/Functions/FunctionStringOrArrayToT.h", "src/Functions/array/length.cpp", "src/Functions/ascii.cpp", "src/Functions/isValidUTF8.cpp", "src/Functions/lengthUTF8.cpp", "tests/queries/0_stateless/02541_empty_function_support_ip.reference", "tests/queries/0_stateless/02541_empty_function_support_ip.sql"] | Illegal type IPv6 of argument of function notEmpty | **Describe what's wrong**
`notEmpty` does not accept an argument of type IPv6:
```
DB::Exception: Illegal type IPv6 of argument of function notEmpty
```
**Does it reproduce on recent release?**
Yes, starting ClickHouse v23.
Expected behavior with v22.12.3.5: https://fiddle.clickhouse.com/5abe165f-6c96-4b4f-b59e-2c7327a1b252
`DB::Exception` starting with v23.1.1.3077: https://fiddle.clickhouse.com/b619dd22-343c-41cd-8359-da2a6dd76360
**How to reproduce**
* Which ClickHouse server version to use: v23
* Queries to run that lead to unexpected result: `SELECT notEmpty(toIPv6('::1'))`
**Expected behavior**
The same behavior as ClickHouse before v23: `notEmpty` should evaluate to `0` when given the all-zeroes IPv6 address (i.e. `notEmpty(toIPv6('::'))`), and `1` otherwise. | https://github.com/ClickHouse/ClickHouse/issues/45773 | https://github.com/ClickHouse/ClickHouse/pull/45799 | ffa00fc134b90fe19ce11ab3d6753bd95d8c06b9 | 9f4658151e236527d44daef71d305df8bf7b7afe | "2023-01-30T09:44:40Z" | c++ | "2023-01-31T12:37:23Z" |
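A Python model of the expected semantics described above (illustrative; `not_empty_ipv6` is a hypothetical helper mirroring what `notEmpty` did before v23):

```python
import ipaddress

def not_empty_ipv6(addr: str) -> int:
    # "Empty" for an IPv6 value means the all-zeroes address '::',
    # so notEmpty should be 1 for any other address.
    return int(int(ipaddress.IPv6Address(addr)) != 0)

print(not_empty_ipv6("::"))   # 0
print(not_empty_ipv6("::1"))  # 1
```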
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,742 | ["src/Common/MemoryTracker.cpp"] | Uncomprehensible error message from MemoryTracker. | **Describe the issue**
Remove this:
```
OvercommitTracker decision: Memory overcommit isn't used. OvercommitTracker isn't set.
```
| https://github.com/ClickHouse/ClickHouse/issues/45742 | https://github.com/ClickHouse/ClickHouse/pull/45743 | 19d4e4fd0e2b58c62f4d9b5fa129646ec4b1dff5 | a7299746c7b381051c3209ed851741ea9d7d7ddd | "2023-01-29T01:12:17Z" | c++ | "2023-01-29T21:46:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,706 | ["docs/en/operations/backup.md"] | Docs: broken link for compression_method | **Describe the issue**
Broken link in the docs. The link is on the Backup & Restore page for "compression_method":
https://clickhouse.com/docs/en/operations/backup/
<img width="433" alt="image" src="https://user-images.githubusercontent.com/8918693/215132238-10bda0ea-a284-46c8-ae26-bbbd1d517a3b.png">
The link is trying to go to this anchor but results in a 404:
https://clickhouse.com/docs/en/operations/backup/en/sql-reference/statements/create/table/#column-compression-codecs
**Additional context**
I think the correct link might be the following:
https://clickhouse.com/docs/en/operations/backup/#compression-settings
| https://github.com/ClickHouse/ClickHouse/issues/45706 | https://github.com/ClickHouse/ClickHouse/pull/45798 | f46bfaa1c96b7ec226595fcccd1dcee9646bdbfd | 40573bfa42cf49cfda151cf7cb4cec05996a6e1b | "2023-01-27T16:02:50Z" | c++ | "2023-01-30T21:48:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,669 | ["src/Processors/Formats/Impl/ArrowColumnToCHColumn.cpp", "tests/queries/0_stateless/02541_arrow_duration_type.reference", "tests/queries/0_stateless/02541_arrow_duration_type.sh", "tests/queries/0_stateless/data_arrow/duration.arrow"] | Add support for Arrow's duration type | There's no support for Arrow's duration type at the moment. It will be a nice feature to add it since internally it uses C `int64_t` type.
Generate some arrow sample file from python:
```
import pandas as pd
import pyarrow as pa
BATCH_SIZE = 100
NUM_BATCHES = 100
schema = pa.schema([pa.field('duration', pa.duration("ns"))])
with pa.OSFile('bigfile.arrow', 'wb') as sink:
    with pa.ipc.new_file(sink, schema) as writer:
        for row in range(NUM_BATCHES):
            batch = pa.record_batch([pa.array(range(BATCH_SIZE), type=pa.duration("ns"))], schema)
            writer.write(batch)
```
Create sample Clickhouse table:
```
CREATE TABLE arrow_test
(
`duration` UInt64
)
ENGINE = MergeTree
ORDER BY tuple();
```
Try to consume the arrow file:
```
clickhouse-client --query="insert into arrow_test from infile '/home/mbak/bigfile.arrow' format Arrow"
Code: 50. DB::Exception: Unsupported Arrow type 'duration' of an input column 'duration'. If it happens during schema inference and you want to skip columns with unsupported types, you can enable setting input_format_arrow_skip_columns_with_unsupported_types_in_schema_inference: While executing ArrowBlockInputFormat: While executing File: data for INSERT was parsed from file: (in query: insert into arrow_test from infile '/home/mbak/bigfile.arrow' format Arrow). (UNKNOWN_TYPE)
```
| https://github.com/ClickHouse/ClickHouse/issues/45669 | https://github.com/ClickHouse/ClickHouse/pull/45750 | 15d4b1c9df794fb83d19183437e2f44677f47c53 | 075dfe9005d44037f1f447e3b6c13050bb7dc051 | "2023-01-26T15:17:56Z" | c++ | "2023-01-31T17:34:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,583 | ["docs/en/sql-reference/functions/date-time-functions.md"] | dateDiff uses mode 1 for week comparisons |
**Describe the unexpected behaviour**
When using `dateDiff`, I'd expect it to use "mode 0" i.e. Sunday-based weeks, given that that's the default elsewhere such as `toStartOfWeek`.
However, it uses "mode 1" i.e. Monday-based weeks.
**How to reproduce**
https://fiddle.clickhouse.com/51453d67-1f73-442f-a84e-7412350e53ad
```sql
-- returns 0
SELECT dateDiff(
'week',
toDateTime('2023-01-23 00:00:00', 'UTC'), -- Monday
toDateTime('2023-01-24 00:00:00', 'UTC')
)
-- returns 1
SELECT dateDiff(
'week',
toDateTime('2023-01-22 00:00:00', 'UTC'),
toDateTime('2023-01-23 00:00:00', 'UTC') -- Monday
)
```
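The Sunday-based vs Monday-based discrepancy in the repro above can be modeled in plain Python (illustrative; `week_diff` is a made-up helper, not a ClickHouse function):

```python
from datetime import date, timedelta

def start_of_week(d: date, mode: int) -> date:
    # mode 0: weeks start on Sunday; mode 1: weeks start on Monday.
    days_since_start = (d.weekday() + 1) % 7 if mode == 0 else d.weekday()
    return d - timedelta(days=days_since_start)

def week_diff(d1: date, d2: date, mode: int) -> int:
    return (start_of_week(d2, mode) - start_of_week(d1, mode)).days // 7

# 2023-01-22 is a Sunday, 2023-01-23 is a Monday.
print(week_diff(date(2023, 1, 22), date(2023, 1, 23), mode=1))  # 1 (what dateDiff returns)
print(week_diff(date(2023, 1, 22), date(2023, 1, 23), mode=0))  # 0 (what mode 0 would give)
```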
* Which ClickHouse server version to use
Latest i.e. 22.12.3.5
**Expected behavior**
Ideally I'd like it to either:
a) default to 0 mode
b) allow specifying the mode
_However_, I understand that _a)_ would be a breaking change and _b)_ would be a confusing API, so would certainly be OK with nothing being done here, just expressing my thoughts.
Thanks!
| https://github.com/ClickHouse/ClickHouse/issues/45583 | https://github.com/ClickHouse/ClickHouse/pull/45586 | 8926af91f3a06fe4638e57a0cc1d5fda6d55a065 | d6ab376b02f4cb853eac64b8794ed67febeb2f93 | "2023-01-24T18:16:15Z" | c++ | "2023-01-24T20:38:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,509 | ["src/Storages/MergeTree/MergeTreeIndexFullText.cpp", "tests/queries/0_stateless/02538_ngram_bf_index_with_null.reference", "tests/queries/0_stateless/02538_ngram_bf_index_with_null.sql"] | heap-buffer-overflow in `MergeTreeConditionFullText` (possible type mismatch with `Field`) | https://s3.amazonaws.com/clickhouse-test-reports/45442/dfac0bb42d7214c7c4aaec35cf91a1c0c71a7e1d/fuzzer_astfuzzerasan/report.html
```sql
SELECT '2', (map['']) = NULL, '21474836.46', NULL FROM bf_ngrambf_map_keys_test PREWHERE (map_fixed['']) != '' WHERE (map_fixed[NULL]) != 'V0V0V0V0V0V0V0V0V0V0V0V0V0V0V0V0' SETTINGS force_data_skipping_indices = 'map_fixed_keys_ngrambf'
``` | https://github.com/ClickHouse/ClickHouse/issues/45509 | https://github.com/ClickHouse/ClickHouse/pull/45617 | 73c62ae390d7c497928ba3796648d925f560f7bc | 6fe9e9a67f3bb43d951fe8ffcffc6b458f797df9 | "2023-01-23T14:31:29Z" | c++ | "2023-01-26T08:05:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,508 | ["src/Interpreters/InterpreterSystemQuery.cpp", "src/Storages/MergeTree/ReplicatedMergeTreeQueue.cpp", "src/Storages/MergeTree/ReplicatedMergeTreeQueue.h", "src/Storages/StorageReplicatedMergeTree.cpp", "src/Storages/StorageReplicatedMergeTree.h"] | `SYSTEM SYNC REPLICA` works strange | `SYSTEM SYNC REPLICA` waits until the table's replication queue becomes empty. It's a strange assumption because in case of long overlapping (by time) merges it can wait forever.
To fix it, we have to apply the following logic on `SYSTEM SYNC REPLICA`:
1. pullLogsToQueue
2. get the last pulled entry name
3. return success on this last entry removal from the queue
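The three steps above can be sketched as toy Python pseudologic (illustrative only, not the actual C++ implementation): instead of waiting for an empty queue, remember the last pulled entry name and succeed once that entry is gone, so long merges queued alongside it cannot block the sync forever.

```python
def sync_replica_old(queue: set[str]) -> bool:
    # Current behavior: wait until the replication queue is empty;
    # overlapping long merges can keep this False indefinitely.
    return not queue

def sync_replica_proposed(queue: set[str], last_pulled_entry: str) -> bool:
    # Proposed behavior: after pullLogsToQueue(), remember the last pulled
    # entry name and succeed as soon as that entry has left the queue.
    return last_pulled_entry not in queue

queue = {"queue-0000000100 (long merge)"}  # the marker entry was already processed
print(sync_replica_old(queue))                            # False: would keep waiting
print(sync_replica_proposed(queue, "queue-0000000099"))   # True: sync can return
```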
| https://github.com/ClickHouse/ClickHouse/issues/45508 | https://github.com/ClickHouse/ClickHouse/pull/45648 | 41a3536227ea10720a4b419287522b03c2e29d6b | 859f528fe18493e68a21320df8636d3fdd65751a | "2023-01-23T14:31:05Z" | c++ | "2023-02-09T10:51:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,440 | ["src/Interpreters/ActionsDAG.cpp", "tests/queries/0_stateless/02667_and_consistency.reference", "tests/queries/0_stateless/02667_and_consistency.sql"] | HAVING 1 returns no results | Based on https://s3.amazonaws.com/clickhouse-test-reports/45420/df3776d24b330357cc8e69c4d1273e5dea56e4bf/sqlancer__release_/TLPHaving.err
May be the same as https://github.com/ClickHouse/ClickHouse/issues/45218
```
CREATE TABLE t2(c0 Int32) ENGINE = MergeTree ORDER BY c0;
INSERT INTO t2 VALUES (928386547), (1541944097), (2086579505), (1990427322), (-542998757), (390253678), (554855248), (203290629), (1504693323);
SELECT
MAX(left.c0),
min2(left.c0, -(-left.c0) * (radians(left.c0) - radians(left.c0))) AS g,
(((-1925024212 IS NOT NULL) IS NOT NULL) != radians(tan(1216286224))) AND cos(lcm(MAX(left.c0), -1966575216) OR (MAX(left.c0) * 1180517420)) AS h,
NOT h,
h IS NULL
FROM t2 AS left
GROUP BY g
Query id: f4fc9574-c0f9-432b-8c51-6d817a866e41
┌────MAX(c0)─┬──────────g─┬─h─┬─not(h)─┬─isNull(h)─┐
│ 2086579505 │ 0 │ 1 │ 0 │ 0 │
│ -542998757 │ -542998757 │ 1 │ 0 │ 0 │
└────────────┴────────────┴───┴────────┴───────────┘
2 rows in set. Elapsed: 0.003 sec.
ip-172-31-0-52 :) SELECT MAX(left.c0), min2(left.c0, -(-left.c0) * (radians(left.c0) - radians(left.c0))) as g, (((-1925024212 IS NOT NULL) IS NOT NULL) != radians(tan(1216286224))) AND cos(lcm(MAX(left.c0), -1966575216) OR (MAX(left.c0) * 1180517420)) as h, not h, h is null
FROM t2 AS left
GROUP BY g HAVING h
SELECT
MAX(left.c0),
min2(left.c0, -(-left.c0) * (radians(left.c0) - radians(left.c0))) AS g,
(((-1925024212 IS NOT NULL) IS NOT NULL) != radians(tan(1216286224))) AND cos(lcm(MAX(left.c0), -1966575216) OR (MAX(left.c0) * 1180517420)) AS h,
NOT h,
h IS NULL
FROM t2 AS left
GROUP BY g
HAVING h
Query id: d56ceae9-bfc0-482e-9e4b-7b25e57f262b
Ok.
0 rows in set. Elapsed: 0.003 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/45440 | https://github.com/ClickHouse/ClickHouse/pull/46653 | bb51da7de1dd67e21d99f53ffcbb5e2d06795738 | 3a3a2f352c63b54de1b5fe082d57c1b6236e5500 | "2023-01-19T15:55:00Z" | c++ | "2023-02-25T23:56:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,405 | ["tests/integration/test_backup_with_other_granularity/__init__.py", "tests/integration/test_backup_with_other_granularity/test.py"] | Flaky test_backup_with_other_granularity | https://s3.amazonaws.com/clickhouse-test-reports/0/9f9979c3932d410cd57fe9fd64ae09b0b8ebc5f9/integration_tests__asan__[1/3].html
It looks like we have bad 'ps | grep' to filter clickhouse binary running. I try to reproduce and will add more logs.
| https://github.com/ClickHouse/ClickHouse/issues/45405 | https://github.com/ClickHouse/ClickHouse/pull/49014 | 4f33985dacbc9de39f2d3f455dd57c52e925835f | 0ca4960adafec5c331b5094d6431120b379a32ed | "2023-01-18T16:08:46Z" | c++ | "2023-04-21T14:04:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,308 | ["src/Common/OpenTelemetryTraceContext.cpp", "src/Common/OpenTelemetryTraceContext.h"] | Uncaught exception in HTTPHandler | https://s3.amazonaws.com/clickhouse-test-reports/44547/6de4837580676830b5c58c1443452ac1c3677fac/stress_test__asan_.html
```
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869314 [ 1686 ] {} <Fatal> BaseDaemon: (version 22.13.1.1, build id: A86CA3301E4841EC88C7F7EDB70316687C0AB2D4) (from thread 56604) Terminate called for uncaught exception:
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869383 [ 1686 ] {} <Fatal> BaseDaemon: Code: 241. DB::Exception: Memory limit (for user) exceeded: would use 32.97 GiB (attempt to allocate chunk of 112 bytes), maximum: 20.57 GiB. OvercommitTracker decision: Memory overcommit has freed not enough memory. (MEMORY_LIMIT_EXCEEDED), Stack trace (when copying this message, always include the lines below):
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869440 [ 1686 ] {} <Fatal> BaseDaemon:
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869494 [ 1686 ] {} <Fatal> BaseDaemon: 0. ./build_docker/../contrib/llvm-project/libcxx/include/exception:134: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, int) @ 0x3d0a3d8c in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869545 [ 1686 ] {} <Fatal> BaseDaemon: 1. ./build_docker/../src/Common/Exception.cpp:77: DB::Exception::Exception(DB::Exception::MessageMasked const&, int, bool) @ 0x200a8743 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869603 [ 1686 ] {} <Fatal> BaseDaemon: 2. ./build_docker/../contrib/llvm-project/libcxx/include/string:1499: DB::Exception::Exception<char const*, char const*, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::basic_string_view<char, std::__1::char_traits<char>>>(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<char const*>::type, fmt::v8::type_identity<char const*>::type, fmt::v8::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>::type, fmt::v8::type_identity<long&>::type, fmt::v8::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>::type, fmt::v8::type_identity<std::__1::basic_string_view<char, std::__1::char_traits<char>>>::type>, char const*&&, char const*&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&&, long&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&&, std::__1::basic_string_view<char, std::__1::char_traits<char>>&&) @ 0x200ca62d in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869669 [ 1686 ] {} <Fatal> BaseDaemon: 3. ./build_docker/../src/Common/MemoryTracker.cpp:263: MemoryTracker::allocImpl(long, bool, MemoryTracker*) @ 0x200c83f9 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869752 [ 1686 ] {} <Fatal> BaseDaemon: 4. ./build_docker/../src/Common/MemoryTracker.cpp:0: MemoryTracker::allocImpl(long, bool, MemoryTracker*) @ 0x200c7b73 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869801 [ 1686 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Common/MemoryTracker.cpp:0: MemoryTracker::allocImpl(long, bool, MemoryTracker*) @ 0x200c7b73 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869852 [ 1686 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Common/CurrentMemoryTracker.cpp:59: CurrentMemoryTracker::alloc(long) @ 0x2001da99 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869902 [ 1686 ] {} <Fatal> BaseDaemon: 7. ./build_docker/../src/Common/AllocatorWithMemoryTracking.h:35: DB::OpenTelemetry::Span::addAttribute(std::__1::basic_string_view<char, std::__1::char_traits<char>>, unsigned long) @ 0x20315a52 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.869959 [ 1686 ] {} <Fatal> BaseDaemon: 8. ./build_docker/../src/Server/HTTPHandler.cpp:0: DB::HTTPHandler::handleRequest(DB::HTTPServerRequest&, DB::HTTPServerResponse&) @ 0x3547efe2 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.870021 [ 1686 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Server/HTTP/HTTPServerConnection.cpp:0: DB::HTTPServerConnection::run() @ 0x355acccf in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.870081 [ 1686 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x3cd0542f in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.870136 [ 1686 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x3cd06185 in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.870193 [ 1686 ] {} <Fatal> BaseDaemon: 12. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x3d1ca23c in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.870248 [ 1686 ] {} <Fatal> BaseDaemon: 13. ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:277: Poco::ThreadImpl::runnableEntry(void*) @ 0x3d1c37ed in /usr/bin/clickhouse
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.870310 [ 1686 ] {} <Fatal> BaseDaemon: 14. ? @ 0x7f9a6a5d5609 in ?
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.870354 [ 1686 ] {} <Fatal> BaseDaemon: 15. __clone @ 0x7f9a6a4fa133 in ?
/var/log/clickhouse-server/clickhouse-server.err.log:2023.01.13 21:58:51.870400 [ 1686 ] {} <Fatal> BaseDaemon: (version 22.13.1.1)
```
Slightly related to https://github.com/ClickHouse/ClickHouse/pull/39010
cc: @FrankChen021, @rschu1ze | https://github.com/ClickHouse/ClickHouse/issues/45308 | https://github.com/ClickHouse/ClickHouse/pull/45456 | 33877b5e006d48116854a9c0b64d7026013649f0 | 85cbb9288c524568c76d46e37c04e33c2dea28d2 | "2023-01-16T12:07:15Z" | c++ | "2023-02-03T14:26:02Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,306 | ["docs/en/sql-reference/functions/type-conversion-functions.md", "docs/ru/sql-reference/functions/type-conversion-functions.md", "src/IO/parseDateTimeBestEffort.cpp", "tests/queries/0_stateless/00569_parse_date_time_best_effort.reference", "tests/queries/0_stateless/00569_parse_date_time_best_effort.sql", "tests/queries/0_stateless/01543_parse_datetime_besteffort_or_null_empty_string.sql", "tests/queries/0_stateless/02783_parsedatetimebesteffort_syslog.reference", "tests/queries/0_stateless/02783_parsedatetimebesteffort_syslog.sql"] | Support syslog datetime format in parseDateTimeBestEffort() |
**Use case**
Syslog format is quite common and all the common linux logs use this format as they are written by syslog (/var/log/auth.log, /var/log/syslog, /var/log/kern.log, etc.)
Currently we parse it completely wrong
```
SELECT parseDateTimeBestEffort('Jan 10 06:07:06')
┌─parseDateTimeBestEffort('Jan 10 06:07:06')─┐
│ 2000-01-10 06:07:06 │
└────────────────────────────────────────────┘
```
If we append current year it is parsed correctly, but it is inconvenient and there is a high risk that user will not process Dec->Jan correctly and will get incorrect year.
```
SELECT parseDateTimeBestEffort(concat(CAST(toYear(today()), 'String'), ' Jan 10 06:07:06'))
┌─parseDateTimeBestEffort(concat(CAST(toYear(today()), 'String'), ' Jan 10 06:07:06'))─┐
│ 2023-01-10 06:07:06 │
└──────────────────────────────────────────────────────────────────────────────────────┘
```
**Additional context**
It is important that you may parse December logs in January so we should have some safeguard that year is detected as a year corresponding to month in previous 11 months(or less) and not just current year.
Syslog format is
https://www.rfc-editor.org/rfc/rfc3164
https://www.rfc-editor.org/rfc/rfc5424
```
The TIMESTAMP field is the local time and is in the format of "Mmm dd
hh:mm:ss" (without the quote marks) where:
Mmm is the English language abbreviation for the month of the
year with the first character in uppercase and the other two
characters in lowercase. The following are the only acceptable
values:
Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec
dd is the day of the month. If the day of the month is less
than 10, then it MUST be represented as a space and then the
number. For example, the 7th day of August would be
represented as "Aug 7", with two spaces between the "g" and
the "7".
hh:mm:ss is the local time. The hour (hh) is represented in a
24-hour format. Valid entries are between 00 and 23,
inclusive. The minute (mm) and second (ss) entries are between
00 and 59 inclusive.
``` | https://github.com/ClickHouse/ClickHouse/issues/45306 | https://github.com/ClickHouse/ClickHouse/pull/50925 | eec7edda8f0caa11388434ca11fad489841b85a2 | 74cb79769bbaa0c4619ca7cb382e6e37c8c7d7b5 | "2023-01-16T11:50:52Z" | c++ | "2023-06-15T19:04:50Z" |
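The year-inference safeguard requested in the issue above can be sketched in Python (illustrative; `parse_syslog_ts` is a hypothetical helper, and Feb 29 across year boundaries would need extra care):

```python
from datetime import datetime

def parse_syslog_ts(s: str, now: datetime) -> datetime:
    # strptime's whitespace handling absorbs the double space in
    # day-padded stamps like "Aug  7".
    ts = datetime.strptime(s, "%b %d %H:%M:%S")
    # Syslog carries no year: pick the year so the result is not in the
    # future, i.e. it falls within the previous ~12 months.
    ts = ts.replace(year=now.year)
    if ts > now:
        ts = ts.replace(year=now.year - 1)
    return ts

now = datetime(2023, 1, 5, 12, 0, 0)
print(parse_syslog_ts("Dec 31 23:59:59", now))  # 2022-12-31 23:59:59
print(parse_syslog_ts("Jan  4 06:07:06", now))  # 2023-01-04 06:07:06
```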
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,285 | ["docs/en/engines/table-engines/mergetree-family/mergetree.md"] | add docs for support_batch_delete | https://github.com/ClickHouse/ClickHouse/pull/37882
https://github.com/ClickHouse/ClickHouse/pull/37659
| https://github.com/ClickHouse/ClickHouse/issues/45285 | https://github.com/ClickHouse/ClickHouse/pull/45411 | fdea042991f2a020f6ed18d860a67dea8438403d | 245899c0f66e4f3e5d3d833dc3cd2ffc5cbd6cb6 | "2023-01-15T17:07:49Z" | c++ | "2023-01-19T13:03:33Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,275 | ["src/Processors/Formats/Impl/AvroRowInputFormat.cpp", "tests/queries/0_stateless/02521_avro_union_null_nested.reference", "tests/queries/0_stateless/02521_avro_union_null_nested.sh", "tests/queries/0_stateless/02522_avro_complicate_schema.reference", "tests/queries/0_stateless/02522_avro_complicate_schema.sh", "tests/queries/0_stateless/data_avro/complicated_schema.avro", "tests/queries/0_stateless/data_avro/union_null_nested.avro"] | Read Avro File faild, DB::Exception: Type Array(Tuple(key Int32, value Int64)) is not compatible with Avro union | ```sql
:) select * from file('5e3c62a9-1537-455f-98e5-0a067af5752a-m0.avro')
SELECT *
FROM file('5e3c62a9-1537-455f-98e5-0a067af5752a-m0.avro')
Query id: 6960516c-5a16-4d0c-b1b2-6d2571f3ab11
0 rows in set. Elapsed: 0.019 sec.
Received exception from server (version 22.13.1):
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Type Array(Tuple(key Int32, value Int64)) is not compatible with Avro union:
[
"null",
{
"type": "array",
"items": {
"type": "record",
"name": "k117_v118",
"fields": [
{
"name": "key",
"type": "int"
},
{
"name": "value",
"type": "long"
}
]
}
}
]: column data_file: While executing AvroRowInputFormat: While executing File. (ILLEGAL_COLUMN)
```
File schema:
```
{'type': 'record', 'name': 'manifest_entry', 'fields': [{'field-id': 0, 'name': 'status', 'type': 'int'}, {'field-id': 1, 'default': None, 'name': 'snapshot_id', 'type': ['null', 'long']}, {'field-id': 2, 'name': 'data_file', 'type': {'type': 'record', 'name': 'r2', 'fields': [{'field-id': 100, 'doc': 'Location URI with FS scheme', 'name': 'file_path', 'type': 'string'}, {'field-id': 101, 'doc': 'File format name: avro, orc, or parquet', 'name': 'file_format', 'type': 'string'}, {'field-id': 102, 'name': 'partition', 'type': {'type': 'record', 'name': 'r102', 'fields': [{'field-id': 1000, 'default': None, 'name': 'vendor_id', 'type': ['null', 'long']}]}}, {'field-id': 103, 'doc': 'Number of records in the file', 'name': 'record_count', 'type': 'long'}, {'field-id': 104, 'doc': 'Total file size in bytes', 'name': 'file_size_in_bytes', 'type': 'long'}, {'field-id': 105, 'name': 'block_size_in_bytes', 'type': 'long'}, {'field-id': 108, 'doc': 'Map of column id to total size on disk', 'default': None, 'name': 'column_sizes', 'type': ['null', {'logicalType': 'map', 'type': 'array', 'items': {'type': 'record', 'name': 'k117_v118', 'fields': [{'field-id': 117, 'name': 'key', 'type': 'int'}, {'field-id': 118, 'name': 'value', 'type': 'long'}]}}]}, {'field-id': 109, 'doc': 'Map of column id to total count, including null and NaN', 'default': None, 'name': 'value_counts', 'type': ['null', {'logicalType': 'map', 'type': 'array', 'items': {'type': 'record', 'name': 'k119_v120', 'fields': [{'field-id': 119, 'name': 'key', 'type': 'int'}, {'field-id': 120, 'name': 'value', 'type': 'long'}]}}]}, {'field-id': 110, 'doc': 'Map of column id to null value count', 'default': None, 'name': 'null_value_counts', 'type': ['null', {'logicalType': 'map', 'type': 'array', 'items': {'type': 'record', 'name': 'k121_v122', 'fields': [{'field-id': 121, 'name': 'key', 'type': 'int'}, {'field-id': 122, 'name': 'value', 'type': 'long'}]}}]}, {'field-id': 137, 'doc': 'Map of column id to number of 
NaN values in the column', 'default': None, 'name': 'nan_value_counts', 'type': ['null', {'logicalType': 'map', 'type': 'array', 'items': {'type': 'record', 'name': 'k138_v139', 'fields': [{'field-id': 138, 'name': 'key', 'type': 'int'}, {'field-id': 139, 'name': 'value', 'type': 'long'}]}}]}, {'field-id': 125, 'doc': 'Map of column id to lower bound', 'default': None, 'name': 'lower_bounds', 'type': ['null', {'logicalType': 'map', 'type': 'array', 'items': {'type': 'record', 'name': 'k126_v127', 'fields': [{'field-id': 126, 'name': 'key', 'type': 'int'}, {'field-id': 127, 'name': 'value', 'type': 'bytes'}]}}]}, {'field-id': 128, 'doc': 'Map of column id to upper bound', 'default': None, 'name': 'upper_bounds', 'type': ['null', {'logicalType': 'map', 'type': 'array', 'items': {'type': 'record', 'name': 'k129_v130', 'fields': [{'field-id': 129, 'name': 'key', 'type': 'int'}, {'field-id': 130, 'name': 'value', 'type': 'bytes'}]}}]}, {'field-id': 131, 'doc': 'Encryption key metadata blob', 'default': None, 'name': 'key_metadata', 'type': ['null', 'bytes']}, {'field-id': 132, 'doc': 'Splittable offsets', 'default': None, 'name': 'split_offsets', 'type': ['null', {'element-id': 133, 'type': 'array', 'items': 'long'}]}, {'field-id': 140, 'doc': 'Sort order ID', 'default': None, 'name': 'sort_order_id', 'type': ['null', 'int']}]}}]}
``` | https://github.com/ClickHouse/ClickHouse/issues/45275 | https://github.com/ClickHouse/ClickHouse/pull/45276 | 5586f71950b834df739f887fa550b14cbdb76935 | 35431e91e319f12fe40ab412cbbc2f1299ee26d9 | "2023-01-14T16:47:44Z" | c++ | "2023-01-17T11:48:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,249 | ["docs/en/sql-reference/functions/type-conversion-functions.md", "docs/ru/sql-reference/functions/type-conversion-functions.md", "src/Functions/FunctionToDecimalString.cpp", "src/Functions/FunctionToDecimalString.h", "src/IO/WriteHelpers.h", "tests/queries/0_stateless/02676_to_decimal_string.reference", "tests/queries/0_stateless/02676_to_decimal_string.sql"] | formatDecimal function | Takes 2 argument - first is numeric, second is precision and returns a string
```
SELECT formatDecimal(2,2); -- 2.00
SELECT formatDecimal(2.123456,2); -- 2.12
SELECT formatDecimal(2.1456,2); -- 2.15 -- rounding!
SELECT formatDecimal(64.32::Float64, 2); -- 64.32
SELECT formatDecimal(64.32::Decimal32(2), 2); -- 64.32
SELECT formatDecimal(64.32::Decimal64(2), 2); -- 64.32
```
It should behave similar to [%f in sprintf](https://linux.die.net/man/3/sprintf), see also
See also:
https://github.com/ClickHouse/ClickHouse/pull/27680#issuecomment-900762221
https://github.com/ClickHouse/ClickHouse/issues/30934#issuecomment-955998935
| https://github.com/ClickHouse/ClickHouse/issues/45249 | https://github.com/ClickHouse/ClickHouse/pull/47838 | 50ed205aa0f1d653e4c40a45552e05e0a375bf4f | b4c8ef980c8b446dca652b4240ccdfe83af10353 | "2023-01-13T11:03:06Z" | c++ | "2023-03-29T08:41:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,219 | ["utils/self-extracting-executable/decompressor.cpp"] | clickhouse-local doesn't work under ubuntu wsl | I've been using the guide from here: https://clickhouse.com/docs/en/integrations/migration/clickhouse-local/ to install clickhouse-local (also mentioned here: https://clickhouse.com/blog/extracting-converting-querying-local-files-with-sql-clickhouse-local)
Running:
`curl https://clickhouse.com/ | sh`
downloads a clickhouse binary file (shouldn't it be clickhouse-local?)
but then if I try to do:
```
xxx@DESKTOP-VNB3DU9:~$ ./clickhouse-local
-bash: ./clickhouse-local: No such file or directory
```
and for install I get:
`sudo ./clickhouse install`
`No target executable - decompression only was performed.`
Environment: Ubuntu WSL 1 under Windows 11
| https://github.com/ClickHouse/ClickHouse/issues/45219 | https://github.com/ClickHouse/ClickHouse/pull/45339 | 35431e91e319f12fe40ab412cbbc2f1299ee26d9 | b23ba36a4fd5d17099d2b665f50e5a519bec0e93 | "2023-01-12T11:35:00Z" | c++ | "2023-01-17T13:10:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,218 | ["src/Interpreters/ActionsDAG.cpp", "tests/queries/0_stateless/02674_and_consistency.reference", "tests/queries/0_stateless/02674_and_consistency.sql"] | Incorrect HAVING filtering | Strange situation: the HAVING condition is always true in these 3 queries.
When we use `cond1 AND cond2` it returns no rows.
So here are the conditions computed separately:
```
SELECT (-sign(-233841197)) IS NOT NULL
┌─isNotNull(negate(sign(-233841197)))─┐
│ 1 │
└─────────────────────────────────────┘
SELECT sin(lcm(10, 10) >= ('372497213' IS NOT NULL))
┌─sin(greaterOrEquals(lcm(10, 10), isNotNull('372497213')))─┐
│ 0.8414709848078965 │
└───────────────────────────────────────────────────────────┘
SELECT ((-sign(-233841197)) IS NOT NULL) AND sin(lcm(10, 10) >= ('372497213' IS NOT NULL))
┌─and(isNotNull(negate(sign(-233841197))), sin(greaterOrEquals(lcm(10, 10), isNotNull('372497213'))))─┐
│ 1 │
└─────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
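The two sub-conditions can be sanity-checked outside ClickHouse; a Python sketch reproducing the arithmetic above:

```python
import math

cond1 = 1                                # (-sign(-233841197)) IS NOT NULL
cond2 = math.sin(math.lcm(10, 10) >= 1)  # sin(lcm(10, 10) >= isNotNull('372497213'))

print(cond2)                  # 0.8414709848078965, non-zero, hence truthy
print(bool(cond1 and cond2))  # True: the AND holds, so HAVING should pass
```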
This is the query with the issue. Nothing is returned.
```
SELECT SUM(number)
FROM
(
SELECT 10 AS number
)
GROUP BY cos(min2(number, number) % number) - number
HAVING ((-sign(-233841197)) IS NOT NULL) AND sin(lcm(SUM(number), SUM(number)) >= ('372497213' IS NOT NULL))
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0
Ok.
0 rows in set. Elapsed: 0.002 sec.
```
If I remove the second half of the condition, it returns the answer
```
SELECT SUM(number)
FROM
(
SELECT 10 AS number
)
GROUP BY cos(min2(number, number) % number) - number
HAVING (-sign(-233841197)) IS NOT NULL
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0
┌─SUM(number)─┐
│ 10 │
└─────────────┘
1 row in set. Elapsed: 0.001 sec.
```
If I replace `((-sign(-233841197)) IS NOT NULL)` with `true`, it also works
```
SELECT SUM(number)
FROM
(
SELECT 10 AS number
)
GROUP BY cos(min2(number, number) % number) - number
HAVING true AND sin(lcm(SUM(number), SUM(number)) >= ('372497213' IS NOT NULL))
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0
┌─SUM(number)─┐
│ 10 │
└─────────────┘
1 row in set. Elapsed: 0.002 sec.
```
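As a side note on the `true AND ...` trick used above — a minimal, hypothetical illustration (these exact statements are not from the report) of how wrapping a Float expression in `AND` lets it act as a condition that would otherwise be rejected:

```sql
-- Rejected: a bare Float64 is not allowed as a WHERE condition.
SELECT 1 WHERE sin(1);

-- Accepted: AND performs boolean logic on the Float,
-- so the Float effectively becomes the condition.
SELECT 1 WHERE true AND sin(1);
```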
PS
We do not allow using a Float as a WHERE condition, but we do allow boolean logic with Floats.
So we can force ClickHouse to use a Float in a condition via `true AND ...`. Maybe it is time to just allow other types in WHERE, since the workaround is simply `true AND`. | https://github.com/ClickHouse/ClickHouse/issues/45218 | https://github.com/ClickHouse/ClickHouse/pull/47028 | 3cd88003dde7e0e6be30292a392a37ee6b1515af | 81b30021db077be76e9244ea92e5ec87a223a515 | "2023-01-12T11:26:52Z" | c++ | "2023-03-02T19:48:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,216 | ["docs/en/operations/settings/merge-tree-settings.md"] | Update docs related to delay for INSERT | The formula to calculate the delay for INSERT was changed in https://github.com/ClickHouse/ClickHouse/pull/44954. The docs [here](https://clickhouse.com/docs/en/operations/settings/merge-tree-settings/#max-delay-to-insert) need to be updated. | https://github.com/ClickHouse/ClickHouse/issues/45216 | https://github.com/ClickHouse/ClickHouse/pull/45592 | 7a8a8dcd2fcfb42c4ab91a7f581dd3d390366fcc | ed01f76c6aa0b4d036355d36f38edabd5fb977c6 | "2023-01-12T10:48:52Z" | c++ | "2023-01-26T11:55:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,214 | ["tests/queries/0_stateless/02672_suspicious_low_cardinality_msan.reference", "tests/queries/0_stateless/02672_suspicious_low_cardinality_msan.sql"] | MemorySanitizer: use-of-uninitialized-value build_docker/../src/Columns/ColumnUnique.h:580:18 | ERROR: type should be string, got "https://s3.amazonaws.com/clickhouse-test-reports/0/3f3ce06832b7f5907e628b70ace02ea0e7c21567/fuzzer_astfuzzermsan/report.html\r\n\r\n```\r\n==157==WARNING: MemorySanitizer: use-of-uninitialized-value\r\n #0 0x42c709f3 in COW<DB::IColumn>::mutable_ptr<DB::IColumn> DB::ColumnUnique<DB::ColumnVector<char8_t>>::uniqueInsertRangeImpl<char8_t>(DB::IColumn const&, unsigned long, unsigned long, unsigned long, DB::ColumnVector<char8_t>::MutablePtr&&, DB::ReverseIndex<unsigned long, DB::ColumnVector<char8_t>>*, unsigned long) build_docker/../src/Columns/ColumnUnique.h:580:18\r\n #1 0x42c6e0cf in COW<DB::IColumn>::mutable_ptr<DB::IColumn> DB::ColumnUnique<DB::ColumnVector<char8_t>>::uniqueInsertRangeFrom(DB::IColumn const&, unsigned long, unsigned long)::'lambda'(auto)::operator()<char8_t>(auto) const build_docker/../src/Columns/ColumnUnique.h:617:26\r\n #2 0x42c67a37 in DB::ColumnUnique<DB::ColumnVector<char8_t>>::uniqueInsertRangeFrom(DB::IColumn const&, unsigned long, unsigned long) build_docker/../src/Columns/ColumnUnique.h:625:28\r\n #3 0x41931b9e in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:288:57\r\n #4 0x419358c7 in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:372:16\r\n #5 
0x43db1fd9 in DB::executeAction(DB::ExpressionActions::Action const&, DB::(anonymous namespace)::ExecutionContext&, bool) build_docker/../src/Interpreters/ExpressionActions.cpp:607:60\r\n #6 0x43db1fd9 in DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const build_docker/../src/Interpreters/ExpressionActions.cpp:724:13\r\n #7 0x4bd7d507 in DB::ExpressionTransform::transform(DB::Chunk&) build_docker/../src/Processors/Transforms/ExpressionTransform.cpp:23:17\r\n #8 0x3696346a in DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) build_docker/../src/Processors/ISimpleTransform.h:32:9\r\n #9 0x4b3f0970 in DB::ISimpleTransform::work() build_docker/../src/Processors/ISimpleTransform.cpp:89:9\r\n #10 0x4b46c2f7 in DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:47:26\r\n #11 0x4b46c2f7 in DB::ExecutionThreadContext::executeTask() build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:92:9\r\n #12 0x4b4327fc in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:229:26\r\n #13 0x4b43862d in DB::PipelineExecutor::executeSingleThread(unsigned long) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:195:5\r\n #14 0x4b43862d in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const build_docker/../src/Processors/Executors/PipelineExecutor.cpp:320:17\r\n #15 0x4b43862d in decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23\r\n #16 0x4b43862d in decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, 
std::__1::__tuple_indices<>) build_docker/../contrib/llvm-project/libcxx/include/tuple:1789:1\r\n #17 0x4b43862d in decltype(auto) std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) build_docker/../contrib/llvm-project/libcxx/include/tuple:1798:1\r\n #18 0x4b43862d in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()::operator()() build_docker/../src/Common/ThreadPool.h:196:13\r\n #19 0x4b43862d in decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0>()()) std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(DB::PipelineExecutor::spawnThreads()::$_0&&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23\r\n #20 0x4b43862d in void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9\r\n #21 0x4b43862d in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>::operator()[abi:v15000]() build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:235:12\r\n #22 0x4b43862d in void std::__1::__function::__policy_invoker<void 
()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:716:16\r\n #23 0x29a711ac in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:848:16\r\n #24 0x29a711ac in std::__1::function<void ()>::operator()() const build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:1187:12\r\n #25 0x29a711ac in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/../src/Common/ThreadPool.cpp:295:17\r\n #26 0x29a7f322 in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/../src/Common/ThreadPool.cpp:144:73\r\n #27 0x29a7f322 in decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23\r\n #28 0x29a7f322 in void std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) 
build_docker/../contrib/llvm-project/libcxx/include/thread:284:5\r\n #29 0x29a7f322 in void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/../contrib/llvm-project/libcxx/include/thread:295:5\r\n #30 0x7f90a297a608 in start_thread /build/glibc-SzIz7B/glibc-2.31/nptl/pthread_create.c:477:8\r\n #31 0x7f90a289f132 in __clone /build/glibc-SzIz7B/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95\r\n\r\n Uninitialized value was stored to memory at\r\n #0 0x17250a26 in DB::NumComparisonImpl<double, unsigned short, DB::EqualsOp<double, unsigned short>>::vectorVectorImplAVX2(DB::PODArray<double, 4096ul, Allocator<false, false>, 63ul, 64ul> const&, DB::PODArray<unsigned short, 4096ul, Allocator<false, false>, 63ul, 64ul> const&, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul>&) (/workspace/clickhouse+0x17250a26) (BuildId: 1986284f036cba8610c75394630e4500e448b3b6)\r\n #1 0x1724ef2e in DB::NumComparisonImpl<double, unsigned short, DB::EqualsOp<double, unsigned short>>::vectorVector(DB::PODArray<double, 4096ul, Allocator<false, false>, 63ul, 64ul> const&, DB::PODArray<unsigned short, 4096ul, Allocator<false, false>, 63ul, 64ul> const&, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul>&) (/workspace/clickhouse+0x1724ef2e) (BuildId: 1986284f036cba8610c75394630e4500e448b3b6)\r\n #2 0x17233a74 in COW<DB::IColumn>::immutable_ptr<DB::IColumn> DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeNumRightType<double, unsigned short>(DB::ColumnVector<double> const*, DB::IColumn const*) const (/workspace/clickhouse+0x17233a74) (BuildId: 1986284f036cba8610c75394630e4500e448b3b6)\r\n #3 0x16dbda95 in COW<DB::IColumn>::immutable_ptr<DB::IColumn> 
DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeNumLeftType<double>(DB::IColumn const*, DB::IColumn const*) const (/workspace/clickhouse+0x16dbda95) (BuildId: 1986284f036cba8610c75394630e4500e448b3b6)\r\n #4 0x16d6cc39 in DB::FunctionComparison<DB::EqualsOp, DB::NameEquals>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0x16d6cc39) (BuildId: 1986284f036cba8610c75394630e4500e448b3b6)\r\n #5 0xc5d5153 in DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const (/workspace/clickhouse+0xc5d5153) (BuildId: 1986284f036cba8610c75394630e4500e448b3b6)\r\n #6 0x4192ca6e in DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:248:15\r\n #7 0x4193170c in DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:281:24\r\n #8 0x419358c7 in DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const build_docker/../src/Functions/IFunction.cpp:372:16\r\n #9 0x43db1fd9 in DB::executeAction(DB::ExpressionActions::Action const&, DB::(anonymous namespace)::ExecutionContext&, bool) build_docker/../src/Interpreters/ExpressionActions.cpp:607:60\r\n #10 0x43db1fd9 in 
DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const build_docker/../src/Interpreters/ExpressionActions.cpp:724:13\r\n #11 0x4bd7d507 in DB::ExpressionTransform::transform(DB::Chunk&) build_docker/../src/Processors/Transforms/ExpressionTransform.cpp:23:17\r\n #12 0x3696346a in DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) build_docker/../src/Processors/ISimpleTransform.h:32:9\r\n #13 0x4b3f0970 in DB::ISimpleTransform::work() build_docker/../src/Processors/ISimpleTransform.cpp:89:9\r\n #14 0x4b46c2f7 in DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:47:26\r\n #15 0x4b46c2f7 in DB::ExecutionThreadContext::executeTask() build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:92:9\r\n #16 0x4b4327fc in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:229:26\r\n #17 0x4b43862d in DB::PipelineExecutor::executeSingleThread(unsigned long) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:195:5\r\n #18 0x4b43862d in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const build_docker/../src/Processors/Executors/PipelineExecutor.cpp:320:17\r\n #19 0x4b43862d in decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23\r\n #20 0x4b43862d in decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) build_docker/../contrib/llvm-project/libcxx/include/tuple:1789:1\r\n #21 0x4b43862d in decltype(auto) std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, 
std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) build_docker/../contrib/llvm-project/libcxx/include/tuple:1798:1\r\n #22 0x4b43862d in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()::operator()() build_docker/../src/Common/ThreadPool.h:196:13\r\n #23 0x4b43862d in decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0>()()) std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(DB::PipelineExecutor::spawnThreads()::$_0&&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23\r\n #24 0x4b43862d in void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9\r\n #25 0x4b43862d in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>::operator()[abi:v15000]() build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:235:12\r\n #26 0x4b43862d in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) 
build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:716:16\r\n #27 0x29a711ac in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:848:16\r\n #28 0x29a711ac in std::__1::function<void ()>::operator()() const build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:1187:12\r\n #29 0x29a711ac in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/../src/Common/ThreadPool.cpp:295:17\r\n #30 0x29a7f322 in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/../src/Common/ThreadPool.cpp:144:73\r\n #31 0x29a7f322 in decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23\r\n #32 0x29a7f322 in void std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) build_docker/../contrib/llvm-project/libcxx/include/thread:284:5\r\n #33 0x29a7f322 in void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, 
long, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/../contrib/llvm-project/libcxx/include/thread:295:5\r\n #34 0x7f90a297a608 in start_thread /build/glibc-SzIz7B/glibc-2.31/nptl/pthread_create.c:477:8\r\n\r\n Uninitialized value was created by a heap allocation\r\n #0 0xc5625a0 in malloc (/workspace/clickhouse+0xc5625a0) (BuildId: 1986284f036cba8610c75394630e4500e448b3b6)\r\n #1 0x29642b8c in Allocator<false, false>::allocNoTrack(unsigned long, unsigned long) build_docker/../src/Common/Allocator.h:227:27\r\n #2 0xc77d824 in void DB::PODArrayBase<2ul, 4096ul, Allocator<false, false>, 63ul, 64ul>::alloc<>(unsigned long) (/workspace/clickhouse+0xc77d824) (BuildId: 1986284f036cba8610c75394630e4500e448b3b6)\r\n #3 0x295f960e in COW<DB::IColumn>::mutable_ptr<DB::ColumnVector<unsigned short>> COWHelper<DB::ColumnVectorHelper, DB::ColumnVector<unsigned short>>::create<unsigned long const&>(unsigned long const&) (/workspace/clickhouse+0x295f960e) (BuildId: 1986284f036cba8610c75394630e4500e448b3b6)\r\n #4 0x48188722 in DB::ColumnVector<unsigned short>::replicate(DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 63ul, 64ul> const&) const build_docker/../src/Columns/ColumnVector.cpp:809:16\r\n #5 0x47af449c in DB::ColumnConst::convertToFullColumn() const build_docker/../src/Columns/ColumnConst.cpp:48:18\r\n #6 0x47afa54d in DB::ColumnConst::convertToFullColumnIfConst() const build_docker/../src/Columns/ColumnConst.h:39:16\r\n #7 0x4c0f9907 in DB::prepareChunk(DB::Chunk&) build_docker/../src/Processors/Merges/Algorithms/MergingSortedAlgorithm.cpp:54:26\r\n #8 0x4c0f9121 in DB::MergingSortedAlgorithm::initialize(std::__1::vector<DB::IMergingAlgorithm::Input, std::__1::allocator<DB::IMergingAlgorithm::Input>>) build_docker/../src/Processors/Merges/Algorithms/MergingSortedAlgorithm.cpp:70:9\r\n #9 0x46f9dbd3 in DB::IMergingTransform<DB::MergingSortedAlgorithm>::work() build_docker/../src/Processors/Merges/IMergingTransform.h:109:23\r\n #10 
0x4b46c2f7 in DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:47:26\r\n #11 0x4b46c2f7 in DB::ExecutionThreadContext::executeTask() build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:92:9\r\n #12 0x4b4327fc in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:229:26\r\n #13 0x4b43862d in DB::PipelineExecutor::executeSingleThread(unsigned long) build_docker/../src/Processors/Executors/PipelineExecutor.cpp:195:5\r\n #14 0x4b43862d in DB::PipelineExecutor::spawnThreads()::$_0::operator()() const build_docker/../src/Processors/Executors/PipelineExecutor.cpp:320:17\r\n #15 0x4b43862d in decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23\r\n #16 0x4b43862d in decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) build_docker/../contrib/llvm-project/libcxx/include/tuple:1789:1\r\n #17 0x4b43862d in decltype(auto) std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&) build_docker/../contrib/llvm-project/libcxx/include/tuple:1798:1\r\n #18 0x4b43862d in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()::operator()() build_docker/../src/Common/ThreadPool.h:196:13\r\n #19 0x4b43862d in decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0>()()) 
std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(DB::PipelineExecutor::spawnThreads()::$_0&&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23\r\n #20 0x4b43862d in void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9\r\n #21 0x4b43862d in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>::operator()[abi:v15000]() build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:235:12\r\n #22 0x4b43862d in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:716:16\r\n #23 0x29a711ac in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:848:16\r\n #24 0x29a711ac in std::__1::function<void ()>::operator()() const build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:1187:12\r\n #25 0x29a711ac in 
ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/../src/Common/ThreadPool.cpp:295:17\r\n #26 0x29a7f322 in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/../src/Common/ThreadPool.cpp:144:73\r\n #27 0x29a7f322 in decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23\r\n #28 0x29a7f322 in void std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) build_docker/../contrib/llvm-project/libcxx/include/thread:284:5\r\n #29 0x29a7f322 in void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/../contrib/llvm-project/libcxx/include/thread:295:5\r\n #30 0x7f90a297a608 in start_thread /build/glibc-SzIz7B/glibc-2.31/nptl/pthread_create.c:477:8\r\n\r\nSUMMARY: MemorySanitizer: use-of-uninitialized-value build_docker/../src/Columns/ColumnUnique.h:580:18 in COW<DB::IColumn>::mutable_ptr<DB::IColumn> 
DB::ColumnUnique<DB::ColumnVector<char8_t>>::uniqueInsertRangeImpl<char8_t>(DB::IColumn const&, unsigned long, unsigned long, unsigned long, DB::ColumnVector<char8_t>::MutablePtr&&, DB::ReverseIndex<unsigned long, DB::ColumnVector<char8_t>>*, unsigned long)\r\n```\r\n\r\n```\r\n2023.01.12 13:14:58.300879 [ 440 ] {} <Fatal> BaseDaemon: ########################################\r\n2023.01.12 13:14:58.301640 [ 440 ] {} <Fatal> BaseDaemon: (version 22.13.1.1 (official build), build id: 1986284F036CBA8610C75394630E4500E448B3B6) (from thread 433) (query_id: a5f14349-c41c-4919-813a-ba6da8c85c50) (query: SELECT 1023, (((id % -9223372036854775807) = NULL) OR ((id % NULL) = 100) OR ((id % NULL) = 65537)) = ((id % inf) = 9223372036854775806), (id % NULL) = NULL, (id % 3.4028234663852886e38) = 1023, 2147483646 FROM table1__fuzz_19 ORDER BY (((id % 1048577) = 1024) % id) = 1023 DESC NULLS FIRST, id % 2147483646 ASC NULLS FIRST, ((id % 1) = 9223372036854775807) OR ((id % NULL) = 257) DESC NULLS FIRST) Received signal sanitizer trap (-3)\r\n2023.01.12 13:14:58.301972 [ 440 ] {} <Fatal> BaseDaemon: Sanitizer trap.\r\n2023.01.12 13:14:58.302298 [ 440 ] {} <Fatal> BaseDaemon: Stack trace: 0x29792d19 0x2a0819fa 0xc542396 0xc556f33 0x42c709f4 0x42c6e0d0 0x42c67a38 0x41931b9f 0x419358c8 0x43db1fda 0x4bd7d508 0x3696346b 0x4b3f0971 0x4b46c2f8 0x4b4327fd 0x4b43862e 0x29a711ad 0x29a7f323 0x7f90a297a609 0x7f90a289f133\r\n2023.01.12 13:14:58.423677 [ 440 ] {} <Fatal> BaseDaemon: 0.1. inlined from ./build_docker/../src/Common/StackTrace.cpp:334: StackTrace::tryCapture()\r\n2023.01.12 13:14:58.423957 [ 440 ] {} <Fatal> BaseDaemon: 0. ./build_docker/../src/Common/StackTrace.cpp:295: StackTrace::StackTrace() @ 0x29792d19 in /workspace/clickhouse\r\n2023.01.12 13:14:58.667249 [ 440 ] {} <Fatal> BaseDaemon: 1. ./build_docker/../src/Daemon/BaseDaemon.cpp:431: sanitizerDeathCallback() @ 0x2a0819fa in /workspace/clickhouse\r\n2023.01.12 13:15:06.170237 [ 440 ] {} <Fatal> BaseDaemon: 2. 
__sanitizer::Die() @ 0xc542396 in /workspace/clickhouse\r\n2023.01.12 13:15:13.419501 [ 440 ] {} <Fatal> BaseDaemon: 3. ? @ 0xc556f33 in /workspace/clickhouse\r\n2023.01.12 13:15:14.206289 [ 440 ] {} <Fatal> BaseDaemon: 4.1. inlined from ./build_docker/../src/Columns/ColumnVector.h:223: DB::ColumnVector<char8_t>::compareAt(unsigned long, unsigned long, DB::IColumn const&, int) const\r\n2023.01.12 13:15:14.206471 [ 440 ] {} <Fatal> BaseDaemon: 4. ./build_docker/../src/Columns/ColumnUnique.h:580: COW<DB::IColumn>::mutable_ptr<DB::IColumn> DB::ColumnUnique<DB::ColumnVector<char8_t>>::uniqueInsertRangeImpl<char8_t>(DB::IColumn const&, unsigned long, unsigned long, unsigned long, DB::ColumnVector<char8_t>::MutablePtr&&, DB::ReverseIndex<unsigned long, DB::ColumnVector<char8_t>>*, unsigned long) @ 0x42c709f4 in /workspace/clickhouse\r\n2023.01.12 13:15:14.970692 [ 440 ] {} <Fatal> BaseDaemon: 5.1. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:98: ~intrusive_ptr\r\n2023.01.12 13:15:14.970850 [ 440 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Columns/ColumnUnique.h:618: COW<DB::IColumn>::mutable_ptr<DB::IColumn> DB::ColumnUnique<DB::ColumnVector<char8_t>>::uniqueInsertRangeFrom(DB::IColumn const&, unsigned long, unsigned long)::'lambda'(auto)::operator()<char8_t>(auto) const @ 0x42c6e0d0 in /workspace/clickhouse\r\n2023.01.12 13:15:15.735715 [ 440 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Columns/ColumnUnique.h:0: DB::ColumnUnique<DB::ColumnVector<char8_t>>::uniqueInsertRangeFrom(DB::IColumn const&, unsigned long, unsigned long) @ 0x42c67a38 in /workspace/clickhouse\r\n2023.01.12 13:15:15.886453 [ 440 ] {} <Fatal> BaseDaemon: 7.1. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:138: intrusive_ptr<DB::IColumn>\r\n2023.01.12 13:15:15.886583 [ 440 ] {} <Fatal> BaseDaemon: 7.2. 
inlined from ./build_docker/../src/Common/COW.h:144: immutable_ptr<DB::IColumn>\r\n2023.01.12 13:15:15.886660 [ 440 ] {} <Fatal> BaseDaemon: 7. ./build_docker/../src/Functions/IFunction.cpp:288: DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x41931b9f in /workspace/clickhouse\r\n2023.01.12 13:15:16.044920 [ 440 ] {} <Fatal> BaseDaemon: 8. ./build_docker/../src/Functions/IFunction.cpp:0: DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x419358c8 in /workspace/clickhouse\r\n2023.01.12 13:15:16.494833 [ 440 ] {} <Fatal> BaseDaemon: 9.1. inlined from ./build_docker/../src/Interpreters/ExpressionActions.cpp:0: DB::executeAction(DB::ExpressionActions::Action const&, DB::(anonymous namespace)::ExecutionContext&, bool)\r\n2023.01.12 13:15:16.494983 [ 440 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Interpreters/ExpressionActions.cpp:724: DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const @ 0x43db1fda in /workspace/clickhouse\r\n2023.01.12 13:15:16.582595 [ 440 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../src/Processors/Transforms/ExpressionTransform.cpp:25: DB::ExpressionTransform::transform(DB::Chunk&) @ 0x4bd7d508 in /workspace/clickhouse\r\n2023.01.12 13:15:16.771957 [ 440 ] {} <Fatal> BaseDaemon: 11.1. 
inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__utility/swap.h:35: std::__1::enable_if<is_move_constructible<COW<DB::IColumn>::immutable_ptr<DB::IColumn>*>::value && is_move_assignable<COW<DB::IColumn>::immutable_ptr<DB::IColumn>*>::value, void>::type std::__1::swap[abi:v15000]<COW<DB::IColumn>::immutable_ptr<DB::IColumn>*>(COW<DB::IColumn>::immutable_ptr<DB::IColumn>*&, COW<DB::IColumn>::immutable_ptr<DB::IColumn>*&)\r\n2023.01.12 13:15:16.772156 [ 440 ] {} <Fatal> BaseDaemon: 11.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/vector:1950: std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>::swap(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&)\r\n2023.01.12 13:15:16.772255 [ 440 ] {} <Fatal> BaseDaemon: 11.3. inlined from ./build_docker/../src/Processors/Chunk.h:64: DB::Chunk::swap(DB::Chunk&)\r\n2023.01.12 13:15:16.772336 [ 440 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../src/Processors/ISimpleTransform.h:33: DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0x3696346b in /workspace/clickhouse\r\n2023.01.12 13:15:16.889282 [ 440 ] {} <Fatal> BaseDaemon: 12. ./build_docker/../src/Processors/ISimpleTransform.cpp:0: DB::ISimpleTransform::work() @ 0x4b3f0971 in /workspace/clickhouse\r\n2023.01.12 13:15:16.932744 [ 440 ] {} <Fatal> BaseDaemon: 13.1. inlined from ./build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:0: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*)\r\n2023.01.12 13:15:16.932896 [ 440 ] {} <Fatal> BaseDaemon: 13. ./build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:92: DB::ExecutionThreadContext::executeTask() @ 0x4b46c2f8 in /workspace/clickhouse\r\n2023.01.12 13:15:17.084994 [ 440 ] {} <Fatal> BaseDaemon: 14. 
./build_docker/../src/Processors/Executors/PipelineExecutor.cpp:229: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x4b4327fd in /workspace/clickhouse\r\n2023.01.12 13:15:17.261372 [ 440 ] {} <Fatal> BaseDaemon: 15.1. inlined from ./build_docker/../base/base/scope_guard.h:48: ~BasicScopeGuard\r\n2023.01.12 13:15:17.261532 [ 440 ] {} <Fatal> BaseDaemon: 15.2. inlined from ./build_docker/../src/Processors/Executors/PipelineExecutor.cpp:328: operator()\r\n2023.01.12 13:15:17.261704 [ 440 ] {} <Fatal> BaseDaemon: 15.3. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&)\r\n2023.01.12 13:15:17.261840 [ 440 ] {} <Fatal> BaseDaemon: 15.4. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/tuple:1789: decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>)\r\n2023.01.12 13:15:17.261957 [ 440 ] {} <Fatal> BaseDaemon: 15.5. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/tuple:1798: decltype(auto) std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&)\r\n2023.01.12 13:15:17.262038 [ 440 ] {} <Fatal> BaseDaemon: 15.6. inlined from ./build_docker/../src/Common/ThreadPool.h:196: operator()\r\n2023.01.12 13:15:17.262196 [ 440 ] {} <Fatal> BaseDaemon: 15.7. 
inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0>()()) std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(DB::PipelineExecutor::spawnThreads()::$_0&&)\r\n2023.01.12 13:15:17.262381 [ 440 ] {} <Fatal> BaseDaemon: 15.8. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:479: void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&)\r\n2023.01.12 13:15:17.262570 [ 440 ] {} <Fatal> BaseDaemon: 15.9. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:235: std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>::operator()[abi:v15000]()\r\n2023.01.12 13:15:17.262660 [ 440 ] {} <Fatal> BaseDaemon: 15. ./build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:716: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) @ 0x4b43862e in /workspace/clickhouse\r\n2023.01.12 13:15:17.383947 [ 440 ] {} <Fatal> BaseDaemon: 16. 
./build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:0: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x29a711ad in /workspace/clickhouse\r\n2023.01.12 13:15:17.534980 [ 440 ] {} <Fatal> BaseDaemon: 17. ./build_docker/../src/Common/ThreadPool.cpp:0: void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x29a7f323 in /workspace/clickhouse\r\n2023.01.12 13:15:17.535149 [ 440 ] {} <Fatal> BaseDaemon: 18. ? @ 0x7f90a297a609 in ?\r\n2023.01.12 13:15:17.535278 [ 440 ] {} <Fatal> BaseDaemon: 19. clone @ 0x7f90a289f133 in ?\r\n2023.01.12 13:15:20.386892 [ 440 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 43E372198D1661DBC247D5A1D3EDA53D)\r\n2023.01.12 13:15:24.708777 [ 441 ] {} <Fatal> BaseDaemon: ########################################\r\n2023.01.12 13:15:24.708940 [ 441 ] {} <Fatal> BaseDaemon: (version 22.13.1.1 (official build), build id: 1986284F036CBA8610C75394630E4500E448B3B6) (from thread 433) (query_id: a5f14349-c41c-4919-813a-ba6da8c85c50) (query: SELECT 1023, (((id % -9223372036854775807) = NULL) OR ((id % NULL) = 100) OR ((id % NULL) = 65537)) = ((id % inf) = 9223372036854775806), (id % NULL) = NULL, (id % 3.4028234663852886e38) = 1023, 2147483646 FROM table1__fuzz_19 ORDER BY (((id % 1048577) = 1024) % id) = 1023 DESC NULLS FIRST, id % 2147483646 ASC NULLS FIRST, ((id % 1) = 9223372036854775807) OR ((id % NULL) = 257) DESC NULLS FIRST) Received signal sanitizer trap (-3)\r\n2023.01.12 13:15:24.709067 [ 441 ] {} <Fatal> BaseDaemon: Sanitizer trap.\r\n2023.01.12 13:15:24.709220 [ 441 ] {} <Fatal> BaseDaemon: Stack trace: 0x29792d19 0x2a0819fa 0xc542396 0xc567853 0x298e1077 0x29c27efa 
0x2a081a64 0xc542396 0xc556f33 0x42c709f4 0x42c6e0d0 0x42c67a38 0x41931b9f 0x419358c8 0x43db1fda 0x4bd7d508 0x3696346b 0x4b3f0971 0x4b46c2f8 0x4b4327fd 0x4b43862e 0x29a711ad 0x29a7f323 0x7f90a297a609 0x7f90a289f133\r\n2023.01.12 13:15:24.829645 [ 441 ] {} <Fatal> BaseDaemon: 0.1. inlined from ./build_docker/../src/Common/StackTrace.cpp:334: StackTrace::tryCapture()\r\n2023.01.12 13:15:24.829798 [ 441 ] {} <Fatal> BaseDaemon: 0. ./build_docker/../src/Common/StackTrace.cpp:295: StackTrace::StackTrace() @ 0x29792d19 in /workspace/clickhouse\r\n2023.01.12 13:15:25.084645 [ 441 ] {} <Fatal> BaseDaemon: 1. ./build_docker/../src/Daemon/BaseDaemon.cpp:431: sanitizerDeathCallback() @ 0x2a0819fa in /workspace/clickhouse\r\n2023.01.12 13:15:32.349300 [ 441 ] {} <Fatal> BaseDaemon: 2. __sanitizer::Die() @ 0xc542396 in /workspace/clickhouse\r\n2023.01.12 13:15:39.587944 [ 441 ] {} <Fatal> BaseDaemon: 3. ? @ 0xc567853 in /workspace/clickhouse\r\n2023.01.12 13:15:39.594859 [ 441 ] {} <Fatal> BaseDaemon: 4. ./build_docker/../src/IO/WriteBufferFromFileDescriptorDiscardOnFailure.cpp:16: DB::WriteBufferFromFileDescriptorDiscardOnFailure::nextImpl() @ 0x298e1077 in /workspace/clickhouse\r\n2023.01.12 13:15:40.028155 [ 441 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/IO/WriteBuffer.h:0: DB::WriteBuffer::next() @ 0x29c27efa in /workspace/clickhouse\r\n2023.01.12 13:15:40.270738 [ 441 ] {} <Fatal> BaseDaemon: 6. 
./build_docker/../src/Daemon/BaseDaemon.cpp:440: sanitizerDeathCallback() @ 0x2a081a64 in /workspace/clickhouse\r\n2023.01.12 13:15:44.709074 [ 442 ] {} <Fatal> BaseDaemon: ########################################\r\n2023.01.12 13:15:44.709238 [ 442 ] {} <Fatal> BaseDaemon: (version 22.13.1.1 (official build), build id: 1986284F036CBA8610C75394630E4500E448B3B6) (from thread 433) (query_id: a5f14349-c41c-4919-813a-ba6da8c85c50) (query: SELECT 1023, (((id % -9223372036854775807) = NULL) OR ((id % NULL) = 100) OR ((id % NULL) = 65537)) = ((id % inf) = 9223372036854775806), (id % NULL) = NULL, (id % 3.4028234663852886e38) = 1023, 2147483646 FROM table1__fuzz_19 ORDER BY (((id % 1048577) = 1024) % id) = 1023 DESC NULLS FIRST, id % 2147483646 ASC NULLS FIRST, ((id % 1) = 9223372036854775807) OR ((id % NULL) = 257) DESC NULLS FIRST) Received signal Aborted (6)\r\n2023.01.12 13:15:44.709397 [ 442 ] {} <Fatal> BaseDaemon: \r\n2023.01.12 13:15:44.709538 [ 442 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f90a27c300b 0x7f90a27a2859 0xc5445c7 0xc5423f1 0xc567853 0x298e1077 0x29c27efa 0x2a081a64 0xc542396 0xc556f33 0x42c709f4 0x42c6e0d0 0x42c67a38 0x41931b9f 0x419358c8 0x43db1fda 0x4bd7d508 0x3696346b 0x4b3f0971 0x4b46c2f8 0x4b4327fd 0x4b43862e 0x29a711ad 0x29a7f323 0x7f90a297a609 0x7f90a289f133\r\n2023.01.12 13:15:44.709679 [ 442 ] {} <Fatal> BaseDaemon: 4. raise @ 0x7f90a27c300b in ?\r\n2023.01.12 13:15:44.709781 [ 442 ] {} <Fatal> BaseDaemon: 5. abort @ 0x7f90a27a2859 in ?\r\n2023.01.12 13:15:47.495775 [ 441 ] {} <Fatal> BaseDaemon: 7. __sanitizer::Die() @ 0xc542396 in /workspace/clickhouse\r\n2023.01.12 13:15:51.934781 [ 442 ] {} <Fatal> BaseDaemon: 6. ? @ 0xc5445c7 in /workspace/clickhouse\r\n2023.01.12 13:15:54.724764 [ 441 ] {} <Fatal> BaseDaemon: 8. ? @ 0xc556f33 in /workspace/clickhouse\r\n2023.01.12 13:15:55.510208 [ 441 ] {} <Fatal> BaseDaemon: 9.1. 
inlined from ./build_docker/../src/Columns/ColumnVector.h:223: DB::ColumnVector<char8_t>::compareAt(unsigned long, unsigned long, DB::IColumn const&, int) const\r\n2023.01.12 13:15:55.510381 [ 441 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Columns/ColumnUnique.h:580: COW<DB::IColumn>::mutable_ptr<DB::IColumn> DB::ColumnUnique<DB::ColumnVector<char8_t>>::uniqueInsertRangeImpl<char8_t>(DB::IColumn const&, unsigned long, unsigned long, unsigned long, DB::ColumnVector<char8_t>::MutablePtr&&, DB::ReverseIndex<unsigned long, DB::ColumnVector<char8_t>>*, unsigned long) @ 0x42c709f4 in /workspace/clickhouse\r\n2023.01.12 13:15:56.278287 [ 441 ] {} <Fatal> BaseDaemon: 10.1. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:98: ~intrusive_ptr\r\n2023.01.12 13:15:56.278433 [ 441 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../src/Columns/ColumnUnique.h:618: COW<DB::IColumn>::mutable_ptr<DB::IColumn> DB::ColumnUnique<DB::ColumnVector<char8_t>>::uniqueInsertRangeFrom(DB::IColumn const&, unsigned long, unsigned long)::'lambda'(auto)::operator()<char8_t>(auto) const @ 0x42c6e0d0 in /workspace/clickhouse\r\n2023.01.12 13:15:57.047873 [ 441 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../src/Columns/ColumnUnique.h:0: DB::ColumnUnique<DB::ColumnVector<char8_t>>::uniqueInsertRangeFrom(DB::IColumn const&, unsigned long, unsigned long) @ 0x42c67a38 in /workspace/clickhouse\r\n2023.01.12 13:15:57.198522 [ 441 ] {} <Fatal> BaseDaemon: 12.1. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:138: intrusive_ptr<DB::IColumn>\r\n2023.01.12 13:15:57.198660 [ 441 ] {} <Fatal> BaseDaemon: 12.2. inlined from ./build_docker/../src/Common/COW.h:144: immutable_ptr<DB::IColumn>\r\n2023.01.12 13:15:57.198746 [ 441 ] {} <Fatal> BaseDaemon: 12. 
./build_docker/../src/Functions/IFunction.cpp:288: DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x41931b9f in /workspace/clickhouse\r\n2023.01.12 13:15:57.359016 [ 441 ] {} <Fatal> BaseDaemon: 13. ./build_docker/../src/Functions/IFunction.cpp:0: DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x419358c8 in /workspace/clickhouse\r\n2023.01.12 13:15:57.808603 [ 441 ] {} <Fatal> BaseDaemon: 14.1. inlined from ./build_docker/../src/Interpreters/ExpressionActions.cpp:0: DB::executeAction(DB::ExpressionActions::Action const&, DB::(anonymous namespace)::ExecutionContext&, bool)\r\n2023.01.12 13:15:57.808729 [ 441 ] {} <Fatal> BaseDaemon: 14. ./build_docker/../src/Interpreters/ExpressionActions.cpp:724: DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const @ 0x43db1fda in /workspace/clickhouse\r\n2023.01.12 13:15:57.896243 [ 441 ] {} <Fatal> BaseDaemon: 15. ./build_docker/../src/Processors/Transforms/ExpressionTransform.cpp:25: DB::ExpressionTransform::transform(DB::Chunk&) @ 0x4bd7d508 in /workspace/clickhouse\r\n2023.01.12 13:15:58.086994 [ 441 ] {} <Fatal> BaseDaemon: 16.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__utility/swap.h:35: std::__1::enable_if<is_move_constructible<COW<DB::IColumn>::immutable_ptr<DB::IColumn>*>::value && is_move_assignable<COW<DB::IColumn>::immutable_ptr<DB::IColumn>*>::value, void>::type std::__1::swap[abi:v15000]<COW<DB::IColumn>::immutable_ptr<DB::IColumn>*>(COW<DB::IColumn>::immutable_ptr<DB::IColumn>*&, COW<DB::IColumn>::immutable_ptr<DB::IColumn>*&)\r\n2023.01.12 13:15:58.087153 [ 441 ] {} <Fatal> BaseDaemon: 16.2. 
inlined from ./build_docker/../contrib/llvm-project/libcxx/include/vector:1950: std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>::swap(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&)\r\n2023.01.12 13:15:58.087236 [ 441 ] {} <Fatal> BaseDaemon: 16.3. inlined from ./build_docker/../src/Processors/Chunk.h:64: DB::Chunk::swap(DB::Chunk&)\r\n2023.01.12 13:15:58.087298 [ 441 ] {} <Fatal> BaseDaemon: 16. ./build_docker/../src/Processors/ISimpleTransform.h:33: DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0x3696346b in /workspace/clickhouse\r\n2023.01.12 13:15:58.204666 [ 441 ] {} <Fatal> BaseDaemon: 17. ./build_docker/../src/Processors/ISimpleTransform.cpp:0: DB::ISimpleTransform::work() @ 0x4b3f0971 in /workspace/clickhouse\r\n2023.01.12 13:15:58.247308 [ 441 ] {} <Fatal> BaseDaemon: 18.1. inlined from ./build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:0: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*)\r\n2023.01.12 13:15:58.247443 [ 441 ] {} <Fatal> BaseDaemon: 18. ./build_docker/../src/Processors/Executors/ExecutionThreadContext.cpp:92: DB::ExecutionThreadContext::executeTask() @ 0x4b46c2f8 in /workspace/clickhouse\r\n2023.01.12 13:15:58.400708 [ 441 ] {} <Fatal> BaseDaemon: 19. ./build_docker/../src/Processors/Executors/PipelineExecutor.cpp:229: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x4b4327fd in /workspace/clickhouse\r\n2023.01.12 13:15:58.578466 [ 441 ] {} <Fatal> BaseDaemon: 20.1. inlined from ./build_docker/../base/base/scope_guard.h:48: ~BasicScopeGuard\r\n2023.01.12 13:15:58.578607 [ 441 ] {} <Fatal> BaseDaemon: 20.2. inlined from ./build_docker/../src/Processors/Executors/PipelineExecutor.cpp:328: operator()\r\n2023.01.12 13:15:58.578725 [ 441 ] {} <Fatal> BaseDaemon: 20.3. 
inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&)\r\n2023.01.12 13:15:58.578835 [ 441 ] {} <Fatal> BaseDaemon: 20.4. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/tuple:1789: decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>)\r\n2023.01.12 13:15:58.578920 [ 441 ] {} <Fatal> BaseDaemon: 20.5. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/tuple:1798: decltype(auto) std::__1::apply[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&>(DB::PipelineExecutor::spawnThreads()::$_0&, std::__1::tuple<>&)\r\n2023.01.12 13:15:58.578983 [ 441 ] {} <Fatal> BaseDaemon: 20.6. inlined from ./build_docker/../src/Common/ThreadPool.h:196: operator()\r\n2023.01.12 13:15:58.579086 [ 441 ] {} <Fatal> BaseDaemon: 20.7. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0>()()) std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(DB::PipelineExecutor::spawnThreads()::$_0&&)\r\n2023.01.12 13:15:58.579201 [ 441 ] {} <Fatal> BaseDaemon: 20.8. 
inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/invoke.h:479: void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'()&)\r\n2023.01.12 13:15:58.579308 [ 441 ] {} <Fatal> BaseDaemon: 20.9. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:235: std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>::operator()[abi:v15000]()\r\n2023.01.12 13:15:58.579385 [ 441 ] {} <Fatal> BaseDaemon: 20. ./build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:716: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PipelineExecutor::spawnThreads()::$_0>(DB::PipelineExecutor::spawnThreads()::$_0&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) @ 0x4b43862e in /workspace/clickhouse\r\n2023.01.12 13:15:58.700918 [ 441 ] {} <Fatal> BaseDaemon: 21. ./build_docker/../contrib/llvm-project/libcxx/include/__functional/function.h:0: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x29a711ad in /workspace/clickhouse\r\n2023.01.12 13:15:58.852807 [ 441 ] {} <Fatal> BaseDaemon: 22. 
./build_docker/../src/Common/ThreadPool.cpp:0: void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x29a7f323 in /workspace/clickhouse\r\n2023.01.12 13:15:58.852966 [ 441 ] {} <Fatal> BaseDaemon: 23. ? @ 0x7f90a297a609 in ?\r\n2023.01.12 13:15:58.853049 [ 441 ] {} <Fatal> BaseDaemon: 24. clone @ 0x7f90a289f133 in ?\r\n2023.01.12 13:15:59.168656 [ 442 ] {} <Fatal> BaseDaemon: 7. ? @ 0xc5423f1 in /workspace/clickhouse\r\n2023.01.12 13:16:01.716745 [ 441 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 43E372198D1661DBC247D5A1D3EDA53D)\r\n2023.01.12 13:16:36.249661 [ 151 ] {} <Fatal> Application: Child process was terminated by signal 6.\r\n```" | https://github.com/ClickHouse/ClickHouse/issues/45214 | https://github.com/ClickHouse/ClickHouse/pull/46850 | 93dffb13c2e0f949465bc6a08591228b6e678056 | c4bf503690019227c9067d0e1343ea3038045728 | "2023-01-12T10:35:33Z" | c++ | "2023-02-25T19:56:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,197 | ["src/Parsers/ASTFunction.cpp", "tests/queries/0_stateless/02560_tuple_format.reference", "tests/queries/0_stateless/02560_tuple_format.sh"] | tuple & constants | ```sql
clickhouse :) select (tuple(1, 2, 3) AS x).1 AS a, x.2 AS b,x.3 AS c;
SELECT
((1, 2, 3) AS x).1 AS a,
x.2 AS b,
x.3 AS c
Query id: d5f428d2-b14e-4d6a-9b26-5cec253907c0
┌─a─┬─b─┬─c─┐
│ 1 │ 2 │ 3 │
└───┴───┴───┘
1 row in set. Elapsed: 0.001 sec.
```
The result is correct.
Now let's copy/paste the echoed query back into the client
```sql
clickhouse :) SELECT
((1, 2, 3) AS x).1 AS a,
x.2 AS b,
x.3 AS c;
SELECT
tuple((1, 2, 3) AS x).1 AS a,
x.2 AS b,
x.3 AS c
Query id: 4f799baf-7534-48ed-8eb8-4da0a7dadd27
┌─a───────┬─b─┬─c─┐
│ (1,2,3) │ 2 │ 3 │
└─────────┴───┴───┘
```
The result is incorrect: ClickHouse rewrites the tuple expression incorrectly. | https://github.com/ClickHouse/ClickHouse/issues/45197 | https://github.com/ClickHouse/ClickHouse/pull/46232 | b08ec8ecfecdd7e83cf3e6fc3a95915b81b71403 | c6dc39f9e22116a5d0a4dd9af3f88e789a9c904e | "2023-01-11T20:04:27Z" | c++ | "2023-02-11T02:56:08Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,195 | ["src/Compression/CompressionCodecDelta.cpp", "src/Compression/CompressionCodecGorilla.cpp", "src/Compression/ICompressionCodec.h", "src/IO/BitHelpers.h", "tests/queries/0_stateless/02536_delta_gorilla_corruption.reference", "tests/queries/0_stateless/02536_delta_gorilla_corruption.sql"] | data corruption on delta+gorilla+lz4 codec pipeline | **Describe what's wrong**
I have a dataset from sensors that was corrupted when stored with the codec pipeline (Delta, Gorilla, LZ4).
**Does it reproduce on recent release?**
Checked only on:
```
+----------------+----------------------------------------+
|name |value |
+----------------+----------------------------------------+
|VERSION_FULL |ClickHouse 22.10.1.1877 |
|VERSION_DESCRIBE|v22.10.1.1877-testing |
|VERSION_INTEGER |22010001 |
|SYSTEM |Linux |
|VERSION_GITHASH |98ab5a3c189232ea2a3dddb9d2be7196ae8b3434|
|VERSION_REVISION|54467 |
+----------------+----------------------------------------+
```
**Enable crash reporting**
no errors in server logs
**How to reproduce**
```
create table bug_gor_lz
(
master_serial String,
sensor_serial String,
type String,
datetime DateTime codec (Delta, LZ4),
value Nullable(Decimal(15, 5)) default NULL,
value_bug Nullable(Decimal(15, 5)) default NULL codec (Delta, Gorilla, LZ4)
)
engine = ReplacingMergeTree PARTITION BY toYYYYMM(datetime)
ORDER BY (master_serial, sensor_serial, type, datetime)
SETTINGS index_granularity = 8192;
```
Import the data from [file](https://github.com/mosinnik/bug_data/blob/6fae4459ea089869e0d20368d6814e8ee259de04/bug_click_data.zip) in TabSeparated format.
The exported file holds the correct values in the `value` column and the corrupted values in `value_bug`; check the diffs:
```
select * from bug_gor_lz
where value <> value_bug
limit 10;
```
For example, currently:
```
+-------------+-------------+-----+-------------------+-------+---------------------+
|master_serial|sensor_serial|type |datetime |value |value_bug |
+-------------+-------------+-----+-------------------+-------+---------------------+
|70160 |10000175HIT |press|2022-06-14 06:07:02|0.00000|44483799547993.14956 |
|70160 |10000175HIT |press|2022-06-14 06:07:03|0.00000|-36626579495765.90833|
|70160 |10000175HIT |press|2022-06-14 06:07:04|0.00000|66730889016872.82706 |
|70160 |10000175HIT |press|2022-06-14 06:07:05|0.00000|-14377895735025.95563|
|70160 |10000175HIT |press|2022-06-14 06:07:06|0.00000|88981167069473.05496 |
|70160 |10000175HIT |press|2022-06-01 04:32:54|0.00000|0.64536 |
|70160 |10000175HIT |press|2022-06-01 04:32:55|0.00000|1.29072 |
|70160 |10000175HIT |press|2022-06-01 04:32:56|0.00000|1.93608 |
|70160 |10000175HIT |press|2022-06-01 04:32:57|0.00000|2.58144 |
|70160 |10000175HIT |press|2022-06-01 04:32:58|0.00000|3.22680 |
+-------------+-------------+-----+-------------------+-------+---------------------+
```
To reproduce, create a new table with the structure above and copy the data from `value` into `value_bug`.
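For background, Gorilla-style codecs store the XOR between the bit patterns of consecutive values, so a single mis-decoded delta corrupts every later value in the block, which would be consistent with the garbage seen in `value_bug`. Below is a minimal sketch of that XOR round trip in plain Python; it only illustrates the idea and is not ClickHouse's actual codec code.

```python
import struct

def f2b(x: float) -> int:
    """Bit pattern of a float64 as an unsigned 64-bit integer."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def b2f(n: int) -> float:
    """Inverse of f2b: reinterpret a 64-bit integer as a float64."""
    return struct.unpack("<d", struct.pack("<Q", n))[0]

# Encode: XOR of consecutive values' bit patterns.
vals = [0.0, 0.64536, 1.29072, 1.93608]
deltas = [f2b(a) ^ f2b(b) for a, b in zip(vals, vals[1:])]

# Decode: re-applying the XOR deltas must restore the series exactly;
# one wrong delta would cascade through everything after it.
cur = f2b(vals[0])
decoded = [vals[0]]
for d in deltas:
    cur ^= d
    decoded.append(b2f(cur))
assert decoded == vals
```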
**Expected behavior**
Values in `value` and `value_bug` should be the same.
**Error message and/or stacktrace**
no
**Additional context**
no
| https://github.com/ClickHouse/ClickHouse/issues/45195 | https://github.com/ClickHouse/ClickHouse/pull/45615 | 76d6e2edf9777642df343849fea7c0992e9338ef | 032cdb986ec2e0768c500dd8ea7e6a60f94c165b | "2023-01-11T19:38:06Z" | c++ | "2023-01-26T23:04:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,185 | ["src/Interpreters/TreeRewriter.cpp", "tests/queries/0_stateless/02552_inner_join_with_where_true.reference", "tests/queries/0_stateless/02552_inner_join_with_where_true.sql"] | Aggregation results are different when just `WHERE true` added | Prepare data:
```
DROP DATABASE IF EXISTS database9TLPAggregate;
CREATE DATABASE IF NOT EXISTS database9TLPAggregate;
USE database9TLPAggregate;
CREATE TABLE database9TLPAggregate.t0 (c0 String) ENGINE = Log() ;
CREATE TABLE database9TLPAggregate.t1 (c0 Int32) ENGINE = Log() ;
CREATE TABLE IF NOT EXISTS database9TLPAggregate.t2 (c0 String) ENGINE = Log() ;
CREATE TABLE IF NOT EXISTS database9TLPAggregate.t3 (c0 Int32) ENGINE = Log() ;
CREATE TABLE database9TLPAggregate.t4 (c0 String) ENGINE = MergeTree() ORDER BY c0;
INSERT INTO t3(c0) VALUES (-433424209);
INSERT INTO t3(c0) VALUES (1924849780), (350410727);
INSERT INTO t0(c0) VALUES ('#]-');
INSERT INTO t4(c0) VALUES ('859440159');
INSERT INTO t1(c0) VALUES (-806303643);
INSERT INTO t0(c0) VALUES ('w');
INSERT INTO t4(c0) VALUES ('1228837680');
INSERT INTO t1(c0) VALUES (86642829), (-178295749);
INSERT INTO t4(c0) VALUES ('');
INSERT INTO t2(c0) VALUES ('GmS#7<I?');
INSERT INTO t2(c0) VALUES ('|');
INSERT INTO t0(c0) VALUES ('-1887896159'), ('hx'), ('');
INSERT INTO t4(c0) VALUES ('426356725');
```
Something is completely wrong here. The original query contained the constant expression `intDiv(-731974690, '' < '8') < 1280677533`, which is always `true`. I checked with a literal `WHERE true` as well and it shows the same issue: the aggregation result differs just because `WHERE true` is added.
```
SELECT
MAX(-1690797) AS aggr,
true
FROM t3 AS t3
INNER JOIN t4 AS right_0 ON gcd(-2021105281, 1441404898) = erf(1273027572)
Query id: 9db00e57-d888-48b8-b665-8d4374a160f4
┌─────aggr─┬─true─┐
│ -1690797 │ true │
└──────────┴──────┘
1 row in set. Elapsed: 0.005 sec.
ip-172-31-0-52 :) SELECT MAX(-1690797) AS aggr, true
FROM t3 AS t3
INNER JOIN t4 AS right_0 ON gcd(-2021105281, 1441404898) = erf(1273027572) WHERE true;
SELECT
MAX(-1690797) AS aggr,
true
FROM t3 AS t3
INNER JOIN t4 AS right_0 ON gcd(-2021105281, 1441404898) = erf(1273027572)
WHERE true
Query id: 80350053-693c-4c83-9a0c-7030b1c6faeb
┌─aggr─┬─true─┐
│ 0 │ true │
└──────┴──────┘
1 row in set. Elapsed: 0.005 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/45185 | https://github.com/ClickHouse/ClickHouse/pull/46487 | 135170b27db2804fb9dc49c550873151481220c3 | 90834d4aa565c993239912b5865120cea8481f66 | "2023-01-11T17:50:50Z" | c++ | "2023-02-20T10:38:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,126 | ["src/Storages/MergeTree/MergeTreePartition.cpp", "tests/queries/0_stateless/02530_ip_part_id.reference", "tests/queries/0_stateless/02530_ip_part_id.sql"] | New ipv4 / ipv6 partition names | ```sql
create table test ( ipv4 IPv4, ipv6 IPv6 ) Engine MergeTree partition by ipv4 order by ipv4 as select '1.2.3.4', '::ffff:1.2.3.4';
select *, _part from test;
22.12
┌─ipv4────┬─ipv6───────────┬─_part──────────┐
│ 1.2.3.4 │ ::ffff:1.2.3.4 │ 16909060_1_1_0 │
└─────────┴────────────────┴────────────────┘
22.13
┌─ipv4────┬─ipv6───────────┬─_part──────────────────────────────────┐
│ 1.2.3.4 │ ::ffff:1.2.3.4 │ bb9df6e3b66e8909ecb142a41a3e6323_1_1_0 │
└─────────┴────────────────┴────────────────────────────────────────┘
```
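The 22.12 part name `16909060` is simply the IPv4 address read as a big-endian UInt32, presumably because IPv4 used to be stored as a plain UInt32; a quick check in plain Python (not ClickHouse code):

```python
import socket

# 1.2.3.4 interpreted as a big-endian 32-bit integer
n = int.from_bytes(socket.inet_aton("1.2.3.4"), "big")
print(n)  # 16909060 -- the old 22.12 partition ID
```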
```sql
create table test ( ipv4 IPv4, ipv6 IPv6 ) Engine MergeTree partition by ipv6 order by ipv6 as select '1.2.3.4', '::ffff:1.2.3.4';
select *, _part from test;
22.12
┌─ipv4────┬─ipv6───────────┬─_part──────────────────────────────────┐
│ 1.2.3.4 │ ::ffff:1.2.3.4 │ 1334d7cc23ffb5a5c0262304b3313426_1_1_0 │
└─────────┴────────────────┴────────────────────────────────────────┘
22.13
┌─ipv4────┬─ipv6───────────┬─_part──────────────────────────────────┐
│ 1.2.3.4 │ ::ffff:1.2.3.4 │ 45c632749fb22384025ef5d0c817e71e_1_1_0 │
└─────────┴────────────────┴────────────────────────────────────────┘
``` | https://github.com/ClickHouse/ClickHouse/issues/45126 | https://github.com/ClickHouse/ClickHouse/pull/45191 | 6b6f5f9185a7c0373bc2184f1e498fa46f923817 | 8bdf63f8e59a070a2e46df4b17abf0fb141f303a | "2023-01-10T16:14:15Z" | c++ | "2023-01-12T11:15:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,038 | ["src/DataTypes/Native.h", "tests/queries/0_stateless/02525_jit_logical_functions_nan.reference", "tests/queries/0_stateless/02525_jit_logical_functions_nan.sql"] | NOT cos(MAX(pow(1523598955, 763027371))) is not TRUE, FALSE or NULL. | https://s3.amazonaws.com/clickhouse-test-reports/0/0eaf05a2dd246fc6fee4e58ba78d3e9f51dac40c/sqlancer__release_/TLPHaving.err
```
DROP DATABASE IF EXISTS database0TLPHaving;
CREATE DATABASE IF NOT EXISTS database0TLPHaving;
USE database0TLPHaving;
CREATE TABLE database0TLPHaving.t0 (c0 String) ENGINE = Log() ;
INSERT INTO t0(c0) VALUES ('-169109851');
INSERT INTO t0(c0) VALUES ('+A');
INSERT INTO t0(c0) VALUES ('-251131989'), ('Kn5c2^mLK'), ('r?IB[mf!_');
```
The following query has the result:
```
SELECT
MAX(sin((410823204 + 1893374210) != -(-1976649740))),
1102699011,
('-111708376' != '') + (1969336991 <= -1611144300)
FROM t0 AS t0
GROUP BY
-251131989,
-1154587996
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0, allow_experimental_analyzer = 0
┌─MAXOrNull(sin(notEquals(plus(410823204, 1893374210), negate(-1976649740))))─┬─1102699011─┬─plus(notEquals('-111708376', ''), lessOrEquals(1969336991, -1611144300))─┐
│ 0.8414709848078965 │ 1102699011 │ 1 │
└─────────────────────────────────────────────────────────────────────────────┴────────────┴──────────────────────────────────────────────────────────────────────────┘
```
Adding `HAVING expr`, `HAVING not(expr)`, `HAVING (expr IS NULL)` has no output:
expr = `NOT cos(MAX(pow(1523598955, 763027371)))`
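A plausible mechanism (my assumption, not taken from the ClickHouse source): `pow(1523598955, 763027371)` overflows to `+inf`, `cos(+inf)` is NaN under IEEE 754, and a NaN-valued predicate escapes all three TLP branches because every ordered comparison against NaN is false. A plain-Python illustration of that partition leak:

```python
nan = float("nan")  # the kind of value cos(inf) yields under IEEE 754

# TLP expects every row to land in exactly one branch of
# pred / NOT pred / pred IS NULL, but ordered comparisons with NaN
# are all false, so a NaN-valued predicate drops the row everywhere.
rows = [nan, 1.0, -1.0]
true_side = [x for x in rows if x > 0]
false_side = [x for x in rows if x <= 0]
print(true_side, false_side)  # [1.0] [-1.0] -- the NaN row vanished
```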
```
SELECT
MAX(sin((410823204 + 1893374210) != -(-1976649740))),
1102699011,
('-111708376' != '') + (1969336991 <= -1611144300)
FROM t0 AS t0
GROUP BY
-251131989,
-1154587996
HAVING NOT cos(MAX(pow(1523598955, 763027371)))
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0, allow_experimental_analyzer = 0
Query id: fb31da56-35b8-4c30-b714-59702125c2b3
Ok.
0 rows in set. Elapsed: 0.035 sec.
```
```
SELECT
MAX(sin((410823204 + 1893374210) != -(-1976649740))),
1102699011,
('-111708376' != '') + (1969336991 <= -1611144300)
FROM t0 AS t0
GROUP BY
-251131989,
-1154587996
HAVING NOT (NOT cos(MAX(pow(1523598955, 763027371))))
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0, allow_experimental_analyzer = 0
Query id: 6bb6df4a-fab8-4036-ae9b-3d109cfb8ef3
Ok.
0 rows in set. Elapsed: 0.034 sec.
```
```
SELECT
MAX(sin((410823204 + 1893374210) != -(-1976649740))),
1102699011,
('-111708376' != '') + (1969336991 <= -1611144300)
FROM t0 AS t0
GROUP BY
-251131989,
-1154587996
HAVING (NOT cos(MAX(pow(1523598955, 763027371)))) IS NULL
SETTINGS aggregate_functions_null_for_empty = 1, enable_optimize_predicate_expression = 0, allow_experimental_analyzer = 0
Query id: 2685f92d-a046-4dc9-8209-ab77f5d03cef
Ok.
0 rows in set. Elapsed: 0.037 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/45038 | https://github.com/ClickHouse/ClickHouse/pull/45067 | 2cdf01aa9a6b18991900162bbab8f8c934de8194 | c1ae35958df717b335a24378eb1fd6809960195c | "2023-01-08T11:32:55Z" | c++ | "2023-01-10T01:45:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,027 | [".gitmodules", "contrib/sqlite-amalgamation"] | Data race in SQLite integration | https://s3.amazonaws.com/clickhouse-test-reports/0/7a7e6ea8e770b698484e57ce94feb89834bc703a/stress_test__tsan_/stderr.log
**Describe the bug**
https://pastila.nl/?0038f124/bf1ea652161b26d61bbb01a786959335
| https://github.com/ClickHouse/ClickHouse/issues/45027 | https://github.com/ClickHouse/ClickHouse/pull/45031 | 5677d04752e2643e52f73408b833fcb51f248836 | c0cbb6a5cc9dc696a5dd7b70bd4a952dca566be0 | "2023-01-08T00:37:30Z" | c++ | "2023-01-08T06:40:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,023 | ["src/Databases/MySQL/DatabaseMySQL.cpp", "src/Databases/PostgreSQL/DatabasePostgreSQL.cpp", "src/Databases/SQLite/DatabaseSQLite.cpp", "tests/integration/test_mysql_database_engine/test.py", "tests/integration/test_postgresql_database_engine/test.py"] | Error hiding password [HIDDEN] in SHOW CREATE TABLE of MySQL table | Instead of a password, the login in the created SQL is closed.
```
> SHOW CREATE TABLE mydb1.table
CREATE TABLE mydb1.table
(
...
)
ENGINE = MySQL('127.0.0.1:3306', 'db1', 'table', '[HIDDEN]', 'password')
```
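The symptom looks like position-based secret masking targeting the wrong positional argument of the engine signature `MySQL(host, database, table, user, password)`. A hypothetical Python sketch of such masking (an illustration of the failure mode, not ClickHouse's actual implementation):

```python
# Hypothetical sketch of positional secret masking for
# MySQL('host', 'db', 'table', 'user', 'password').
def mask_args(args, secret_index):
    return [a if i != secret_index else "[HIDDEN]"
            for i, a in enumerate(args)]

args = ["127.0.0.1:3306", "db1", "table", "user", "password"]
# Masking index 3 hides the user -- the behaviour reported above.
print(mask_args(args, 3))  # user is hidden, password leaks
# Masking index 4 is what SHOW CREATE TABLE should produce.
print(mask_args(args, 4))
```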
ClickHouse server version 22.12.1.1752 (official build). | https://github.com/ClickHouse/ClickHouse/issues/45023 | https://github.com/ClickHouse/ClickHouse/pull/52962 | def587701be76308f7f5418587c72e0f68d1b1a5 | af610062eca259c7aba402f49c59fe95450f6f43 | "2023-01-07T21:30:43Z" | c++ | "2023-08-04T08:57:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,020 | ["src/Functions/FunctionsCodingIP.cpp", "src/Functions/FunctionsHashing.h"] | cityHash64 of ipv4 | 22.12
```sql
SELECT cityHash64(toIPv4('1.2.3.4'))
--
5715546585361069049
```
22.13
```sql
SELECT cityHash64(toIPv4('1.2.3.4'))
--
3462390757903342180
```
I found it accidentally. It seems to affect only cityHash64 with IPv4.
Other hash functions (sipHash / xxHash32) and IPv6 appear to be OK.
Maybe it is only an issue with the macOS build.
If it's a real issue, it's probably worth checking all hash functions.
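One plausible explanation — an assumption, not confirmed by the report — is that 22.13's native IPv4 type feeds the hash a different byte representation than the old UInt32-backed type. Any representation change alters the hash value, as this generic Python illustration shows (using MD5 purely as a stand-in hash, not CityHash):

```python
import hashlib
import socket
import struct

ip = "1.2.3.4"
as_uint32 = struct.unpack("!I", socket.inet_aton(ip))[0]

# Hashing the same address through different representations
# (string, big-endian bytes, little-endian bytes) gives
# different digests -- hash values are representation-sensitive.
digests = {
    hashlib.md5(ip.encode()).hexdigest(),
    hashlib.md5(struct.pack("!I", as_uint32)).hexdigest(),
    hashlib.md5(struct.pack("<I", as_uint32)).hexdigest(),
}
assert len(digests) == 3
```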
| https://github.com/ClickHouse/ClickHouse/issues/45020 | https://github.com/ClickHouse/ClickHouse/pull/45024 | 7b1223ed8dfedc2302fb1b597e1c41e48879142b | 1cf593414116fc555f4b3dc3fe3df5a8ec070b53 | "2023-01-07T19:13:08Z" | c++ | "2023-01-08T01:22:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,019 | ["src/Functions/FunctionsCodingIP.cpp", "src/Functions/FunctionsHashing.h"] | cutIPv6 accepts FixedString not IPv6 | 22.12
```sql
SELECT
toIPv6OrDefault('::ffff:10.10.10.10') AS ipv6,
cutIPv6(ipv6, 10, 1)
┌─ipv6───────────────┬─cutIPv6(toIPv6OrDefault('::ffff:10.10.10.10'), 10, 1)─┐
│ ::ffff:10.10.10.10 │ ::ffff:10.10.10.0 │
└────────────────────┴───────────────────────────────────────────────────────┘
```
22.13.1.1476
```sql
SELECT
toIPv6OrDefault('::ffff:10.10.10.10') AS ipv6,
cutIPv6(ipv6, 10, 1)
Code: 43. DB::Exception: Illegal type IPv6 of argument 1 of function cutIPv6, expected FixedString(16): While processing toIPv6OrDefault('::ffff:10.10.10.10') AS ipv6, cutIPv6(ipv6, 10, 1). (ILLEGAL_TYPE_OF_ARGUMENT)
```
https://github.com/ClickHouse/ClickHouse/pull/45018 | https://github.com/ClickHouse/ClickHouse/issues/45019 | https://github.com/ClickHouse/ClickHouse/pull/45024 | 7b1223ed8dfedc2302fb1b597e1c41e48879142b | 1cf593414116fc555f4b3dc3fe3df5a8ec070b53 | "2023-01-07T18:40:00Z" | c++ | "2023-01-08T01:22:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 45,010 | ["src/Functions/array/range.cpp", "tests/queries/0_stateless/02523_range_const_start.reference", "tests/queries/0_stateless/02523_range_const_start.sql"] | Array `range()` function: Returned arrays only start with specified start value on first row. | **Describe the unexpected behaviour**
If a start value for the array `range()` function is specified, the arrays for subsequent rows don't start at the specified value; instead the start keeps counting upward from row to row. Example:
```sql
SELECT c1, range(0, c1) AS zero_as_start_val, range(1, c1) AS one_as_start_val, range(c1) AS no_start_val
FROM values(2, 3, 4, 5);
```
| c1 | zero_as_start_val | one_as_start_val | no_start_val |
| :--- | :--- | :--- | :--- |
| 2 | [0, 1] | [1] | [0, 1] |
| 3 | [2, 3, 4] | [2, 3] | [0, 1, 2] |
| 4 | [5, 6, 7, 8] | [4, 5, 6] | [0, 1, 2, 3] |
| 5 | [9, 10, 11, 12, 13] | [7, 8, 9, 10] | [0, 1, 2, 3, 4] |
I'm not sure if this is desired behaviour.
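The expected per-row semantics match a stateless range(start, stop): each row's array restarts at the given start value, independent of previous rows. A plain Python sketch of that expectation (not ClickHouse internals):

```python
def expected_range_column(starts_at, c1_values):
    # Each row is computed independently of previous rows,
    # always restarting at starts_at.
    return [list(range(starts_at, c1)) for c1 in c1_values]

c1 = [2, 3, 4, 5]
print(expected_range_column(1, c1))
# -> [[1], [1, 2], [1, 2, 3], [1, 2, 3, 4]]
```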
**How to reproduce**
* Which ClickHouse server version to use: ClickHouse server version 22.13.1.1133 (official build) (macOS arm64)
* [Clickhouse Playground](https://play.clickhouse.com/play?user=play#U0VMRUNUIGMxLCByYW5nZSgwLCBjMSkgQVMgemVyb19hc19zdGFydF92YWwsIHJhbmdlKDEsIGMxKSBBUyBvbmVfYXNfc3RhcnRfdmFsLCByYW5nZShjMSkgQVMgbm9fc3RhcnRfdmFsCkZST00gdmFsdWVzKDIsIDMsIDQsIDUpOw==)
**Expected behavior**
I'd expect all arrays to start with the specified start value. | https://github.com/ClickHouse/ClickHouse/issues/45010 | https://github.com/ClickHouse/ClickHouse/pull/45030 | 5dfc6219d2926fc36366473291ca768c546a74d1 | 5677d04752e2643e52f73408b833fcb51f248836 | "2023-01-07T00:45:17Z" | c++ | "2023-01-08T04:43:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,976 | ["tests/queries/0_stateless/02526_merge_join_int_decimal.reference", "tests/queries/0_stateless/02526_merge_join_int_decimal.sql"] | Unknown numeric column of type: DB::ColumnDecimal<DB::Decimal<int>> | https://s3.amazonaws.com/clickhouse-test-reports/44942/239734ceb6d26335420c72ce23e3068c305d52d6/fuzzer_astfuzzerubsan/report.html
```
2023.01.06 04:20:38.647652 [ 143 ] {31f90b5a-cd13-4ea4-88b0-7f7165624aa3} <Fatal> : Logical error: 'Unknown numeric column of type: DB::ColumnDecimal<DB::Decimal<int>>'.
2023.01.06 04:20:38.647976 [ 418 ] {} <Fatal> BaseDaemon: ########################################
2023.01.06 04:20:38.648035 [ 418 ] {} <Fatal> BaseDaemon: (version 22.13.1.1, build id: C5EB69974CE2400E3FB0C06D0847B5F64739970D) (from thread 143) (query_id: 31f90b5a-cd13-4ea4-88b0-7f7165624aa3) (query: SELECT 7, count(1000.0001), -9223372036854775807 FROM foo_merge INNER JOIN t2__fuzz_7 USING (Val) WHERE (((NULL AND -2 AND (Val = NULL)) AND (Id = NULL) AND (Val = NULL) AND (Id = NULL)) AND (Id = NULL) AND Val AND NULL) AND ((3 AND NULL AND -2147483648 AND (Val = NULL)) AND (Id = NULL) AND (Val = NULL)) AND ((NULL AND -2 AND (Val = NULL)) AND (Id = NULL) AND (Val = NULL)) AND 2147483647 WITH TOTALS) Received signal Aborted (6)
2023.01.06 04:20:38.648074 [ 418 ] {} <Fatal> BaseDaemon:
2023.01.06 04:20:38.648126 [ 418 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f55f0ae700b 0x7f55f0ac6859 0x218874e3 0x218877ef 0x2c89878e 0x2c89846e 0x2c893bb7 0x2c8992be 0x2c8f1fb3 0x16fd979f 0x16fd8428 0x2bf2da39 0x2bf2f2db 0x2bf308b1 0x2c85d222 0x2f800b79 0x2f80b3cb 0x2f7da75a 0x2e9b35a3 0x2e99edf9 0x2e99daf7 0x2ec000e6 0x2e39d429 0x2e39b583 0x2f7a275d 0x2f7cd5bc 0x2d7b2185 0x2dc1965a 0x2dc14c8c 0x2f085390 0x2f0a68da 0x300f81cc 0x300f86ba 0x3026ffc7 0x3026daaf 0x7f55f0c9e609 0x7f55f0bc3133
2023.01.06 04:20:38.648178 [ 418 ] {} <Fatal> BaseDaemon: 3. gsignal @ 0x7f55f0ae700b in ?
2023.01.06 04:20:38.648210 [ 418 ] {} <Fatal> BaseDaemon: 4. abort @ 0x7f55f0ac6859 in ?
2023.01.06 04:20:38.671945 [ 418 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Common/Exception.cpp:48: DB::abortOnFailedAssertion(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) @ 0x218874e3 in /workspace/clickhouse
2023.01.06 04:20:38.694221 [ 418 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Common/Exception.cpp:78: DB::Exception::Exception(DB::Exception::MessageMasked const&, int, bool) @ 0x218877ef in /workspace/clickhouse
2023.01.06 04:20:38.744826 [ 418 ] {} <Fatal> BaseDaemon: 7.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/string:1499: std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>::__is_long[abi:v15000]() const
2023.01.06 04:20:38.744862 [ 418 ] {} <Fatal> BaseDaemon: 7.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/string:2333: ~basic_string
2023.01.06 04:20:38.744892 [ 418 ] {} <Fatal> BaseDaemon: 7.3. inlined from ./build_docker/../src/Common/Exception.h:32: ~MessageMasked
2023.01.06 04:20:38.744922 [ 418 ] {} <Fatal> BaseDaemon: 7.4. inlined from ./build_docker/../src/Common/Exception.h:41: Exception
2023.01.06 04:20:38.744949 [ 418 ] {} <Fatal> BaseDaemon: 7. ./build_docker/../src/Functions/FunctionsLogical.cpp:268: DB::(anonymous namespace)::TernaryValueBuilderImpl<>::build(DB::IColumn const*, char8_t*) @ 0x2c89878e in /workspace/clickhouse
2023.01.06 04:20:38.793457 [ 418 ] {} <Fatal> BaseDaemon: 8. ./build_docker/../src/Functions/FunctionsLogical.cpp:214: DB::(anonymous namespace)::TernaryValueBuilderImpl<double>::build(DB::IColumn const*, char8_t*) @ 0x2c89846e in /workspace/clickhouse
2023.01.06 04:20:38.841456 [ 418 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Functions/FunctionsLogical.cpp:316: DB::(anonymous namespace)::AssociativeGenericApplierImpl<DB::FunctionsLogicalDetail::AndImpl, 1ul>::AssociativeGenericApplierImpl(std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>> const&) @ 0x2c893bb7 in /workspace/clickhouse
2023.01.06 04:20:38.896898 [ 418 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../src/Functions/FunctionsLogical.cpp:350: void DB::(anonymous namespace)::OperationApplier<DB::FunctionsLogicalDetail::AndImpl, DB::(anonymous namespace)::AssociativeGenericApplierImpl, 1ul>::doBatchedApply<true, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>, char8_t>(std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>&, char8_t*, unsigned long) @ 0x2c8992be in /workspace/clickhouse
2023.01.06 04:20:38.932079 [ 418 ] {} <Fatal> BaseDaemon: 11.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/vector:543: std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>::empty[abi:v15000]() const
2023.01.06 04:20:38.932124 [ 418 ] {} <Fatal> BaseDaemon: 11.2. inlined from ./build_docker/../src/Functions/FunctionsLogical.cpp:336: void DB::(anonymous namespace)::OperationApplier<DB::FunctionsLogicalDetail::AndImpl, DB::(anonymous namespace)::AssociativeGenericApplierImpl, 8ul>::apply<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul>>(std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>&, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul>&, bool)
2023.01.06 04:20:38.932174 [ 418 ] {} <Fatal> BaseDaemon: 11.3. inlined from ./build_docker/../src/Functions/FunctionsLogical.cpp:401: COW<DB::IColumn>::immutable_ptr<DB::IColumn> DB::(anonymous namespace)::executeForTernaryLogicImpl<DB::FunctionsLogicalDetail::AndImpl>(std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long)
2023.01.06 04:20:38.932206 [ 418 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../src/Functions/FunctionsLogical.cpp:664: DB::FunctionsLogicalDetail::FunctionAnyArityLogical<DB::FunctionsLogicalDetail::AndImpl, DB::NameAnd>::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x2c8f1fb3 in /workspace/clickhouse
2023.01.06 04:20:40.061774 [ 418 ] {} <Fatal> BaseDaemon: 12. DB::IFunction::executeImplDryRun(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x16fd979f in /workspace/clickhouse
2023.01.06 04:20:41.167342 [ 418 ] {} <Fatal> BaseDaemon: 13. DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x16fd8428 in /workspace/clickhouse
2023.01.06 04:20:41.181598 [ 418 ] {} <Fatal> BaseDaemon: 14. ./build_docker/../src/Functions/IFunction.cpp:0: DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x2bf2da39 in /workspace/clickhouse
2023.01.06 04:20:41.196577 [ 418 ] {} <Fatal> BaseDaemon: 15.1. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:115: intrusive_ptr
2023.01.06 04:20:41.196608 [ 418 ] {} <Fatal> BaseDaemon: 15.2. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:122: boost::intrusive_ptr<DB::IColumn const>::operator=(boost::intrusive_ptr<DB::IColumn const>&&)
2023.01.06 04:20:41.196640 [ 418 ] {} <Fatal> BaseDaemon: 15.3. inlined from ./build_docker/../src/Common/COW.h:136: COW<DB::IColumn>::immutable_ptr<DB::IColumn>::operator=(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&&)
2023.01.06 04:20:41.196667 [ 418 ] {} <Fatal> BaseDaemon: 15. ./build_docker/../src/Functions/IFunction.cpp:302: DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x2bf2f2db in /workspace/clickhouse
2023.01.06 04:20:41.211922 [ 418 ] {} <Fatal> BaseDaemon: 16. ./build_docker/../src/Functions/IFunction.cpp:0: DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x2bf308b1 in /workspace/clickhouse
2023.01.06 04:20:41.404109 [ 418 ] {} <Fatal> BaseDaemon: 17.1. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:115: intrusive_ptr
2023.01.06 04:20:41.404163 [ 418 ] {} <Fatal> BaseDaemon: 17.2. inlined from ./build_docker/../contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:122: boost::intrusive_ptr<DB::IColumn const>::operator=(boost::intrusive_ptr<DB::IColumn const>&&)
2023.01.06 04:20:41.404206 [ 418 ] {} <Fatal> BaseDaemon: 17.3. inlined from ./build_docker/../src/Common/COW.h:136: COW<DB::IColumn>::immutable_ptr<DB::IColumn>::operator=(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&&)
2023.01.06 04:20:41.404249 [ 418 ] {} <Fatal> BaseDaemon: 17.4. inlined from ./build_docker/../src/Interpreters/ActionsDAG.cpp:535: DB::executeActionForHeader(DB::ActionsDAG::Node const*, std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>>)
2023.01.06 04:20:41.404294 [ 418 ] {} <Fatal> BaseDaemon: 17. ./build_docker/../src/Interpreters/ActionsDAG.cpp:646: DB::ActionsDAG::updateHeader(DB::Block) const @ 0x2c85d222 in /workspace/clickhouse
2023.01.06 04:20:41.444810 [ 418 ] {} <Fatal> BaseDaemon: 18. ./build_docker/../src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp:0: DB::IMergeTreeSelectAlgorithm::applyPrewhereActions(DB::Block, std::__1::shared_ptr<DB::PrewhereInfo> const&) @ 0x2f800b79 in /workspace/clickhouse
2023.01.06 04:20:41.489364 [ 418 ] {} <Fatal> BaseDaemon: 19. ./build_docker/../src/Storages/MergeTree/MergeTreeBaseSelectProcessor.cpp:0: DB::IMergeTreeSelectAlgorithm::transformHeader(DB::Block, std::__1::shared_ptr<DB::PrewhereInfo> const&, std::__1::shared_ptr<DB::IDataType const> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x2f80b3cb in /workspace/clickhouse
2023.01.06 04:20:41.582974 [ 418 ] {} <Fatal> BaseDaemon: 20.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/compressed_pair.h:40: __compressed_pair_elem<int, void>
2023.01.06 04:20:41.583025 [ 418 ] {} <Fatal> BaseDaemon: 20.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/compressed_pair.h:108: __compressed_pair<int, std::__1::__default_init_tag>
2023.01.06 04:20:41.583074 [ 418 ] {} <Fatal> BaseDaemon: 20.3. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__hash_table:750: __bucket_list_deallocator
2023.01.06 04:20:41.583109 [ 418 ] {} <Fatal> BaseDaemon: 20.4. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/compressed_pair.h:36: __compressed_pair_elem
2023.01.06 04:20:41.583147 [ 418 ] {} <Fatal> BaseDaemon: 20.5. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/compressed_pair.h:108: __compressed_pair<std::__1::__value_init_tag, std::__1::__value_init_tag>
2023.01.06 04:20:41.583184 [ 418 ] {} <Fatal> BaseDaemon: 20.6. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:390: unique_ptr<true, void>
2023.01.06 04:20:41.583222 [ 418 ] {} <Fatal> BaseDaemon: 20.7. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__hash_table:965: __hash_table
2023.01.06 04:20:41.583255 [ 418 ] {} <Fatal> BaseDaemon: 20.8. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/unordered_set:547: unordered_set
2023.01.06 04:20:41.583295 [ 418 ] {} <Fatal> BaseDaemon: 20. ./build_docker/../src/Processors/QueryPlan/IQueryPlanStep.h:29: DB::ReadFromMergeTree::ReadFromMergeTree(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>, DB::MergeTreeData const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::StorageSnapshot>, std::__1::shared_ptr<DB::Context const>, unsigned long, unsigned long, bool, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, long>>>>, Poco::Logger*, std::__1::shared_ptr<DB::MergeTreeDataSelectAnalysisResult>, bool) @ 0x2f7da75a in /workspace/clickhouse
2023.01.06 04:20:41.718511 [ 418 ] {} <Fatal> BaseDaemon: 21. ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__1::__unique_if<DB::ReadFromMergeTree>::__unique_single std::__1::make_unique[abi:v15000]<DB::ReadFromMergeTree, std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>&, DB::MergeTreeData const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::StorageSnapshot> const&, std::__1::shared_ptr<DB::Context const>&, unsigned long const&, unsigned long const&, bool&, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, long>>>>&, Poco::Logger* const&, std::__1::shared_ptr<DB::MergeTreeDataSelectAnalysisResult>&, bool&>(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>&&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>&, 
std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>&, DB::MergeTreeData const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::StorageSnapshot> const&, std::__1::shared_ptr<DB::Context const>&, unsigned long const&, unsigned long const&, bool&, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, long>>>>&, Poco::Logger* const&, std::__1::shared_ptr<DB::MergeTreeDataSelectAnalysisResult>&, bool&) @ 0x2e9b35a3 in /workspace/clickhouse
2023.01.06 04:20:41.828212 [ 418 ] {} <Fatal> BaseDaemon: 22. ./build_docker/../src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp:1369: DB::MergeTreeDataSelectExecutor::readFromParts(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageSnapshot> const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const>, unsigned long, unsigned long, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, long>>>>, std::__1::shared_ptr<DB::MergeTreeDataSelectAnalysisResult>, bool) const @ 0x2e99edf9 in /workspace/clickhouse
2023.01.06 04:20:41.925169 [ 418 ] {} <Fatal> BaseDaemon: 23. ./build_docker/../src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp:159: DB::MergeTreeDataSelectExecutor::read(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageSnapshot> const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const>, unsigned long, unsigned long, DB::QueryProcessingStage::Enum, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, long>>>>, bool) const @ 0x2e99daf7 in /workspace/clickhouse
2023.01.06 04:20:42.011940 [ 418 ] {} <Fatal> BaseDaemon: 24. ./build_docker/../src/Storages/StorageMergeTree.cpp:0: DB::StorageMergeTree::read(DB::QueryPlan&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageSnapshot> const&, DB::SelectQueryInfo&, std::__1::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned long) @ 0x2ec000e6 in /workspace/clickhouse
2023.01.06 04:20:42.091917 [ 418 ] {} <Fatal> BaseDaemon: 25.1. inlined from ./build_docker/../src/Processors/QueryPlan/QueryPlan.h:54: DB::QueryPlan::isInitialized() const
2023.01.06 04:20:42.091972 [ 418 ] {} <Fatal> BaseDaemon: 25. ./build_docker/../src/Storages/StorageMerge.cpp:574: DB::ReadFromMerge::createSources(std::__1::shared_ptr<DB::StorageSnapshot> const&, DB::SelectQueryInfo&, DB::QueryProcessingStage::Enum const&, unsigned long, DB::Block const&, std::__1::vector<DB::ReadFromMerge::AliasData, std::__1::allocator<DB::ReadFromMerge::AliasData>> const&, std::__1::tuple<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::shared_ptr<DB::IStorage>, std::__1::shared_ptr<DB::RWLockImpl::LockHolderImpl>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>&, std::__1::shared_ptr<DB::Context>, unsigned long, bool) @ 0x2e39d429 in /workspace/clickhouse
2023.01.06 04:20:42.182002 [ 418 ] {} <Fatal> BaseDaemon: 26. ./build_docker/../src/Storages/StorageMerge.cpp:460: DB::ReadFromMerge::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) @ 0x2e39b583 in /workspace/clickhouse
2023.01.06 04:20:42.196783 [ 418 ] {} <Fatal> BaseDaemon: 27. ./build_docker/../src/Processors/QueryPlan/ISourceStep.cpp:0: DB::ISourceStep::updatePipeline(std::__1::vector<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>, std::__1::allocator<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) @ 0x2f7a275d in /workspace/clickhouse
2023.01.06 04:20:42.224848 [ 418 ] {} <Fatal> BaseDaemon: 28.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:296: std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>::release[abi:v15000]()
2023.01.06 04:20:42.224894 [ 418 ] {} <Fatal> BaseDaemon: 28.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:225: std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>::operator=[abi:v15000](std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>&&)
2023.01.06 04:20:42.224932 [ 418 ] {} <Fatal> BaseDaemon: 28. ./build_docker/../src/Processors/QueryPlan/QueryPlan.cpp:187: DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x2f7cd5bc in /workspace/clickhouse
2023.01.06 04:20:42.266160 [ 418 ] {} <Fatal> BaseDaemon: 29. ./build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:374: DB::InterpreterSelectWithUnionQuery::execute() @ 0x2d7b2185 in /workspace/clickhouse
2023.01.06 04:20:42.323611 [ 418 ] {} <Fatal> BaseDaemon: 30. ./build_docker/../src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x2dc1965a in /workspace/clickhouse
2023.01.06 04:20:42.385080 [ 418 ] {} <Fatal> BaseDaemon: 31. ./build_docker/../src/Interpreters/executeQuery.cpp:1104: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x2dc14c8c in /workspace/clickhouse
2023.01.06 04:20:42.440714 [ 418 ] {} <Fatal> BaseDaemon: 32. ./build_docker/../src/Server/TCPHandler.cpp:378: DB::TCPHandler::runImpl() @ 0x2f085390 in /workspace/clickhouse
2023.01.06 04:20:42.508650 [ 418 ] {} <Fatal> BaseDaemon: 33. ./build_docker/../src/Server/TCPHandler.cpp:1933: DB::TCPHandler::run() @ 0x2f0a68da in /workspace/clickhouse
2023.01.06 04:20:42.514318 [ 418 ] {} <Fatal> BaseDaemon: 34. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x300f81cc in /workspace/clickhouse
2023.01.06 04:20:42.521808 [ 418 ] {} <Fatal> BaseDaemon: 35.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: std::__1::default_delete<Poco::Net::TCPServerConnection>::operator()[abi:v15000](Poco::Net::TCPServerConnection*) const
2023.01.06 04:20:42.521843 [ 418 ] {} <Fatal> BaseDaemon: 35.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:305: std::__1::unique_ptr<Poco::Net::TCPServerConnection, std::__1::default_delete<Poco::Net::TCPServerConnection>>::reset[abi:v15000](Poco::Net::TCPServerConnection*)
2023.01.06 04:20:42.521874 [ 418 ] {} <Fatal> BaseDaemon: 35.3. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:259: ~unique_ptr
2023.01.06 04:20:42.521912 [ 418 ] {} <Fatal> BaseDaemon: 35. ./build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x300f86ba in /workspace/clickhouse
2023.01.06 04:20:42.530262 [ 418 ] {} <Fatal> BaseDaemon: 36. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x3026ffc7 in /workspace/clickhouse
2023.01.06 04:20:42.537779 [ 418 ] {} <Fatal> BaseDaemon: 37.1. inlined from ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::assign(Poco::Runnable*)
2023.01.06 04:20:42.537806 [ 418 ] {} <Fatal> BaseDaemon: 37.2. inlined from ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:208: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::operator=(Poco::Runnable*)
2023.01.06 04:20:42.537841 [ 418 ] {} <Fatal> BaseDaemon: 37. ./build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:360: Poco::ThreadImpl::runnableEntry(void*) @ 0x3026daaf in /workspace/clickhouse
2023.01.06 04:20:42.537887 [ 418 ] {} <Fatal> BaseDaemon: 38. ? @ 0x7f55f0c9e609 in ?
2023.01.06 04:20:42.537926 [ 418 ] {} <Fatal> BaseDaemon: 39. clone @ 0x7f55f0bc3133 in ?
2023.01.06 04:20:42.537963 [ 418 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
2023.01.06 04:24:12.394572 [ 133 ] {} <Fatal> Application: Child process was terminated by signal 6.
``` | https://github.com/ClickHouse/ClickHouse/issues/44976 | https://github.com/ClickHouse/ClickHouse/pull/45228 | 9f121d5d9c57cc81ad193e27fefbea02f77f0246 | 2d3b578dd379dc2addf35e120b69e560733220c3 | "2023-01-06T11:11:54Z" | c++ | "2023-01-12T23:44:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,934 | ["docker/test/integration/runner/Dockerfile", "packages/clickhouse-keeper.yaml", "packages/clickhouse-server.yaml"] | Keeper rpm install failed | **I have tried the following solutions**: https://clickhouse.com/docs/en/faq/troubleshooting/#troubleshooting-installation-errors
**Installation type**
rpm local install
22.12.1.1752-stable and clickhouse-22.9.7.34-stable (they are the only releases that have clickhouse-keeper rpm)
on RHEL 8.5
**Source of the ClickHouse**
https://github.com/ClickHouse/ClickHouse/releases/tag/v22.12.1.1752-stable
**Expected result**
Normal install of [clickhouse-keeper-22.12.1.1752.x86_64.rpm](https://github.com/ClickHouse/ClickHouse/releases/download/v22.12.1.1752-stable/clickhouse-keeper-22.12.1.1752.x86_64.rpm)
**The actual result**
```bash
[root@bdtpostgres01 clickhouse-22.12.1.1752-stable]# ls -lhrt
total 261M
-rw-r----- 1 tansus tansus 120K Jan 4 09:58 clickhouse-client-22.12.1.1752.x86_64.rpm
-rw-r----- 1 tansus tansus 253M Jan 4 09:58 clickhouse-common-static-22.12.1.1752.x86_64.rpm
-rw-r----- 1 tansus tansus 8.3M Jan 4 09:58 clickhouse-keeper-22.12.1.1752.x86_64.rpm
-rw-r----- 1 tansus tansus 145K Jan 4 09:58 clickhouse-server-22.12.1.1752.x86_64.rpm
[root@bdtpostgres01 clickhouse-22.12.1.1752-stable]# yum install clickhouse-keeper-22.12.1.1752.x86_64.rpm
Updating Subscription Management repositories.
EPEL8_REPO 51 kB/s | 2.3 kB 00:00
Error:
Problem: conflicting requests
- nothing provides adduser needed by clickhouse-keeper-22.12.1.1752-1.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
```
**How to reproduce**
clickhouse-server, clickhouse-common-static and clickhouse-client install normally (and other packages as well, so it is not a problem with yum).
```bash
[root@bdtpostgres01 clickhouse-22.12.1.1752-stable]# yum install clickhouse-client-22.12.1.1752.x86_64.rpm clickhouse-common-static-22.12.1.1752.x86_64.rpm clickhouse-server-22.12.1.1752.x86_64.rpm
Updating Subscription Management repositories.
EPEL8_REPO 58 kB/s | 2.3 kB 00:00
Dependencies resolved.
=============================================================================================================================================================================================================================================
Package Architecture Version Repository Size
=============================================================================================================================================================================================================================================
Installing:
clickhouse-client x86_64 22.12.1.1752-1 @commandline 119 k
clickhouse-common-static x86_64 22.12.1.1752-1 @commandline 252 M
clickhouse-server x86_64 22.12.1.1752-1 @commandline 145 k
Transaction Summary
=============================================================================================================================================================================================================================================
Install 3 Packages
Total size: 253 M
Installed size: 721 M
Is this ok [y/N]: y
...
Installed:
clickhouse-client-22.12.1.1752-1.x86_64 clickhouse-common-static-22.12.1.1752-1.x86_64 clickhouse-server-22.12.1.1752-1.x86_64
Complete!
```
manual rpm extract:
```bash
[root@bdtpostgres01 clickhouse-22.12.1.1752-stable]# rpm2cpio clickhouse-keeper-22.12.1.1752.x86_64.rpm | cpio -idmv
/etc/clickhouse-keeper/keeper_config.xml
cpio: /usr/bin/clickhouse-keeper not created: newer or same age version exists
/usr/bin/clickhouse-keeper
/usr/share/doc/clickhouse-keeper/AUTHORS
/usr/share/doc/clickhouse-keeper/CHANGELOG.md
/usr/share/doc/clickhouse-keeper/LICENSE
/usr/share/doc/clickhouse-keeper/README.md
43901 blocks
```
| https://github.com/ClickHouse/ClickHouse/issues/44934 | https://github.com/ClickHouse/ClickHouse/pull/45011 | be8df5683e6422c8d31c4db42d60f0eb4fbe40d7 | 7b1223ed8dfedc2302fb1b597e1c41e48879142b | "2023-01-05T14:07:25Z" | c++ | "2023-01-08T00:27:29Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,918 | ["tests/queries/0_stateless/02786_transform_float.reference", "tests/queries/0_stateless/02786_transform_float.sql", "tests/queries/0_stateless/02787_transform_null.reference", "tests/queries/0_stateless/02787_transform_null.sql"] | Wrong output with transform function, when output is float32 | **Describe what's wrong**
The following:
```sql
select transform(number, [1], [toFloat32(1)], toFloat32(1)) from numbers(3)
```
outputs:
```
1
0
1
```
**Expected behavior**
It should output:
```
1
1
1
```
It seems like anything matching the `array_from` argument will output 0 when the output type is Float32. It seems to work as it should with other types.
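For comparison, the intended semantics of `transform(x, array_from, array_to, default)` — a value found in `array_from` maps to the paired `array_to` element, anything else maps to `default` — can be modelled outside ClickHouse. This is only an illustrative Python sketch of the expected behaviour, not ClickHouse code:

```python
def transform(value, array_from, array_to, default):
    """Model of transform(): paired lookup with a default for misses."""
    for src, dst in zip(array_from, array_to):
        if value == src:
            return dst
    return default

# Mirroring the query above: transform(number, [1], [1.0], 1.0) over numbers(3).
expected = [transform(n, [1], [1.0], 1.0) for n in range(3)]
print(expected)  # [1.0, 1.0, 1.0] — the Float32 build instead returns 0 for the matching row
```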
**Does it reproduce on recent release?**
I've reproduced on ClickHouse playground, I assume that's running the latest version.
| https://github.com/ClickHouse/ClickHouse/issues/44918 | https://github.com/ClickHouse/ClickHouse/pull/50833 | 572c9dec4e81dc323e98e0d87eaafcf8c863d843 | 826dfe4467a63187fa897e841659ec2a7ff3de5e | "2023-01-04T18:47:02Z" | c++ | "2023-06-10T17:07:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,844 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp", "src/Storages/SelectQueryInfo.h", "tests/queries/0_stateless/02516_projections_and_context.reference", "tests/queries/0_stateless/02516_projections_and_context.sql"] | Context has expired (while reading from non-existing dict) | ```
CREATE TABLE test1__fuzz_37 (`i` Date) ENGINE = MergeTree ORDER BY i;
insert into test1__fuzz_37 values ('2020-10-10');
SELECT count() FROM test1__fuzz_37 GROUP BY dictHas(NULL, (dictHas(NULL, (('', materialize(NULL)), materialize(NULL))), 'KeyKey')), dictHas('test_dictionary', tuple(materialize('Ke\0'))), tuple(dictHas(NULL, (tuple('Ke\0Ke\0Ke\0Ke\0Ke\0Ke\0\0\0\0Ke\0'), materialize(NULL)))), 'test_dicti\0nary', (('', materialize(NULL)), dictHas(NULL, (dictHas(NULL, tuple(materialize(NULL))), 'KeyKeyKeyKeyKeyKeyKeyKey')), materialize(NULL));
SELECT count()
FROM test1__fuzz_37
GROUP BY
dictHas(NULL, (dictHas(NULL, (('', materialize(NULL)), materialize(NULL))), 'KeyKey')),
dictHas('test_dictionary', tuple(materialize('Ke\0'))),
tuple(dictHas(NULL, (tuple('Ke\0Ke\0Ke\0Ke\0Ke\0Ke\0\0\0\0Ke\0'), materialize(NULL)))),
'test_dicti\0nary',
(('', materialize(NULL)), dictHas(NULL, (dictHas(NULL, tuple(materialize(NULL))), 'KeyKeyKeyKeyKeyKeyKeyKey')), materialize(NULL))
Query id: 17710b2b-8955-418c-ae25-0eb422078d94
0 rows in set. Elapsed: 0.014 sec.
Received exception from server (version 22.13.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Context has expired: while executing 'FUNCTION dictHas('test_dictionary' :: 3, tuple(materialize('Ke\0')) :: 10) -> dictHas('test_dictionary', tuple(materialize('Ke\0'))) UInt8 : 5'. (LOGICAL_ERROR)
{18802d15-8cb4-4d5e-84f5-680741d0e406} <Error> executeQuery: Code: 49. DB::Exception: Context has expired: while executing 'FUNCTION dictHas('test_dictionary' :: 3, tuple(materialize('Ke\0')) :: 10) -> dictHas('test_dictionary', tuple(materialize('Ke\0'))) UInt8 : 5'. (LOGICAL_ERROR) (version 22.13.1.1) (from [::ffff:127.0.0.1]:47550) (in query: SELECT count() FROM test1__fuzz_37 GROUP BY dictHas(NULL, (dictHas(NULL, (('', materialize(NULL)), materialize(NULL))), 'KeyKey')), dictHas('test_dictionary', tuple(materialize('Ke\0'))), tuple(dictHas(NULL, (tuple('Ke\0Ke\0Ke\0Ke\0Ke\0Ke\0\0\0\0Ke\0'), materialize(NULL)))), 'test_dicti\0nary', (('', materialize(NULL)), dictHas(NULL, (dictHas(NULL, tuple(materialize(NULL))), 'KeyKeyKeyKeyKeyKeyKeyKey')), materialize(NULL))), Stack trace (when copying this message, always include the lines below):
0. ./build/./contrib/llvm-project/libcxx/include/exception:134: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, int) @ 0x164d7193 in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
1. ./build/./src/Common/Exception.cpp:77: DB::Exception::Exception(DB::Exception::MessageMasked const&, int, bool) @ 0xf828afa in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
2. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, int, bool) @ 0xaada74d in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
3. DB::WithContextImpl<std::__1::shared_ptr<DB::Context const>>::getContext() const @ 0xaf08fb6 in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
4. DB::FunctionDictHelper::getDictionary(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) @ 0xb0e77dc in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
5. DB::FunctionDictHas::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xb0e6791 in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
6. DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0xaada8ce in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
7. ./build/./src/Functions/IFunction.cpp:0: DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x13b8c6c7 in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
8. ./build/./contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:115: DB::IExecutableFunction::executeWithoutSparseColumns(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x13b8cf0c in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
9. ./build/./src/Functions/IFunction.cpp:0: DB::IExecutableFunction::execute(std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName>> const&, std::__1::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x13b8dfad in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
10. ./build/./contrib/boost/boost/smart_ptr/intrusive_ptr.hpp:115: DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const @ 0x1436ecad in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
11. ./build/./src/Processors/Transforms/ExpressionTransform.cpp:0: DB::ExpressionTransform::transform(DB::Chunk&) @ 0x15b482fc in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
12. ./build/./contrib/llvm-project/libcxx/include/__utility/swap.h:35: DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0x11e05ef0 in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
13. ./build/./src/Processors/ISimpleTransform.cpp:99: DB::ISimpleTransform::work() @ 0x159844b6 in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
14. ./build/./contrib/llvm-project/libcxx/include/list:616: DB::ExecutionThreadContext::executeTask() @ 0x1599fb60 in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
15. ./build/./src/Processors/Executors/PipelineExecutor.cpp:229: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x15995b5a in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
16. ./build/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: DB::PipelineExecutor::executeImpl(unsigned long) @ 0x15995451 in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
17. ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:274: DB::PipelineExecutor::execute(unsigned long) @ 0x159952f8 in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
18. ./build/./src/Processors/Executors/PullingAsyncPipelineExecutor.cpp:0: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) @ 0x159a270a in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
19. ./build/./base/base/../base/wide_integer_impl.h:786: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xf8df19c in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
20. ./build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0xf8e256e in /home/ubuntu/dev/ClickHouse/build/programs/clickhouse
21. ? @ 0x7f5e2c8a0609 in ?
22. clone @ 0x7f5e2c7c5133 in ?
```
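The `Context has expired` logical error is thrown by `WithContextImpl::getContext()` (frame 3 above) when its weak pointer to the query context no longer resolves — the projection/`dictHas` code path keeps executing after the context's owner has released it. A rough Python analogue of that lifetime bug, with illustrative names rather than ClickHouse's actual API:

```python
import weakref

class Context:
    """Stand-in for the query context owned elsewhere."""

class WithContext:
    def __init__(self, ctx):
        self._ctx = weakref.ref(ctx)  # non-owning reference, like std::weak_ptr
    def get_context(self):
        ctx = self._ctx()
        if ctx is None:
            raise RuntimeError("Context has expired")
        return ctx

ctx = Context()
holder = WithContext(ctx)
holder.get_context()   # fine while the owning reference is alive
del ctx                # query finished: the owner drops the context
try:
    holder.get_context()
except RuntimeError as e:
    print(e)           # Context has expired
```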
https://s3.amazonaws.com/clickhouse-test-reports/44824/b859cc2ddcf70e78caa54c56422841f74f26f72c/fuzzer_astfuzzerasan/report.html | https://github.com/ClickHouse/ClickHouse/issues/44844 | https://github.com/ClickHouse/ClickHouse/pull/44850 | e68b98aac3d527575c4e907c15c78bf22b5f88a6 | 6b1a697b12ae0087f95f122a92612c4887258aea | "2023-01-02T15:10:37Z" | c++ | "2023-01-03T12:30:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,831 | ["src/Common/HashTable/ClearableHashSet.h", "tests/queries/0_stateless/02515_distinct_zero_size_key_bug_44831.reference", "tests/queries/0_stateless/02515_distinct_zero_size_key_bug_44831.sql"] | Failed assertion in Distinct-Sorted transform. | **How to reproduce**
```
$ build_debug/programs/clickhouse local --query "SELECT DISTINCT NULL, if(number > 0, 't', '') AS res FROM numbers(1) ORDER BY res"
clickhouse: /home/milovidov/work/ClickHouse/src/Common/HashTable/HashTableKeyHolder.h:92: void keyHolderPersistKey(DB::ArenaKeyHolder &): Assertion `holder.key.size > 0' failed.
```
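What trips the assertion here is the key size, not the key content: for `numbers(1)` the only row has `res = ''`, so the DISTINCT key serialized into the arena is zero bytes long, and `keyHolderPersistKey` asserts `holder.key.size > 0`. A zero-length key is nevertheless a valid hash-table key, as a toy model (hypothetical names, not ClickHouse's hash-table code) shows:

```python
def serialize_key(parts):
    """Naive row-key serialization: concatenate the bytes of each non-NULL part."""
    return b"".join(p.encode() for p in parts if p is not None)

def distinct(rows):
    seen, out = set(), []
    for row in rows:
        key = serialize_key(row)  # may be b'' (size 0) — must still be accepted
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

result = distinct([(None, ""), (None, "")])
print(result)  # [(None, '')] — deduplicated through an empty key
```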
https://s3.amazonaws.com/clickhouse-test-reports/44824/6e946cc75779980b65804c70898beaee3d2e621e/fuzzer_astfuzzerdebug/report.html
See https://aretestsgreenyet.com/ | https://github.com/ClickHouse/ClickHouse/issues/44831 | https://github.com/ClickHouse/ClickHouse/pull/44856 | fb68fc1bee2930679caa9b68069880194f108131 | a804d9a6433a471fc3fa4db12b48c4be07119b65 | "2023-01-02T02:35:03Z" | c++ | "2023-01-04T02:16:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,816 | ["tests/queries/0_stateless/02793_implicit_pretty_format_settings.expect", "tests/queries/0_stateless/02793_implicit_pretty_format_settings.reference"] | The settings for PrettyCompact format cannot be specified inside the query if the query does not explicitly specify the format. | Not ok:
```
$ clickhouse-local
ClickHouse local version 22.13.1.1.
milovidov-desktop :) SELECT 1 SETTINGS output_format_pretty_row_numbers = 1
SELECT 1
SETTINGS output_format_pretty_row_numbers = 1
Query id: f1effbc2-400e-47a3-b1a5-315ca40119f0
┌─1─┐
│ 1 │
└───┘
1 row in set. Elapsed: 0.001 sec.
```
Ok:
```
milovidov-desktop :) SELECT 1 FORMAT PrettyCompact SETTINGS output_format_pretty_row_numbers = 1
SELECT 1
FORMAT PrettyCompact
SETTINGS output_format_pretty_row_numbers = 1
Query id: 12c39a67-2231-43fe-ad1c-06bc2150be70
┌─1─┐
1. │ 1 │
└───┘
1 row in set. Elapsed: 0.001 sec.
milovidov-desktop :)
``` | https://github.com/ClickHouse/ClickHouse/issues/44816 | https://github.com/ClickHouse/ClickHouse/pull/51305 | dd1c528d2fe71fc6bdfa1161e9a2cff74906e0b7 | 7b3b6531eec6384da95bf0d941912d0516c81d3b | "2023-01-01T17:39:02Z" | c++ | "2023-07-09T22:24:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,815 | ["src/Processors/Formats/Impl/PrettyBlockOutputFormat.cpp", "src/Processors/Formats/Impl/PrettyCompactBlockOutputFormat.cpp", "src/Processors/Formats/Impl/PrettySpaceBlockOutputFormat.cpp", "tests/queries/0_stateless/01509_output_format_pretty_row_numbers.reference", "tests/queries/0_stateless/01509_output_format_pretty_row_numbers.sql"] | `output_format_pretty_row_numbers` does not preserve the counter across the blocks. | **Describe the unexpected behaviour**
```
milovidov-desktop :) SET output_format_pretty_row_numbers = 1
SET output_format_pretty_row_numbers = 1
Query id: bb15ca27-5a3c-48ac-814a-a3cfea42deda
Ok.
0 rows in set. Elapsed: 0.000 sec.
milovidov-desktop :) SELECT 1 UNION ALL SELECT 2
SELECT 1
UNION ALL
SELECT 2
Query id: d940f566-7188-4d17-83d6-8421975791a9
┌─1─┐
1. │ 2 │
└───┘
┌─1─┐
1. │ 1 │
└───┘
2 rows in set. Elapsed: 0.001 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/44815 | https://github.com/ClickHouse/ClickHouse/pull/44832 | a6c5061e775b1e07dd5dc45f1eba297b83944613 | 0c7d39ac7fff48ad005a4b5ae05f16dbeae0e7c1 | "2023-01-01T17:35:03Z" | c++ | "2023-01-04T13:15:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,814 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/02519_monotonicity_fuzz.reference", "tests/queries/0_stateless/02519_monotonicity_fuzz.sql"] | Logical error: 'Invalid Field get from type Decimal64 to type Int64' | Debug build:
```
CREATE TABLE table_float__fuzz_44 (`f` Decimal(18, 3), `u` Int8) ENGINE = MergeTree ORDER BY (f, u);
INSERT INTO table_float VALUES (1.2, 1) (1.3, 2) (1.4, 3) (1.5, 4);
SELECT count() FROM table_float__fuzz_44 WHERE (toUInt64(f) = 1) AND (f >= 1.3) AND (f <= 1.4) AND (u > 0) GROUP BY (toUInt64(f) = 1) AND (f >= 1.3) AND (f <= 100.0001) AND (u > 0);
```
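The message in this class of crash comes from ClickHouse's `Field` tagged union: the key-condition/monotonicity code computes a range boundary that is still a `Decimal64` field and then asks the union for an `Int64`. A minimal Python analogue of that tag mismatch (illustrative names, not the real `Field` API):

```python
class Field:
    """Toy tagged union: stores one value together with its type tag."""
    def __init__(self, tag, value):
        self.tag, self.value = tag, value
    def get(self, tag):
        if tag != self.tag:
            raise TypeError(f"Invalid Field get from type {self.tag} to type {tag}")
        return self.value

boundary = Field("Decimal64", 1300)  # 1.300 at scale 3, as the condition f >= 1.3 stores it
try:
    boundary.get("Int64")            # the monotonic-cast path expects an integer field
except TypeError as e:
    print(e)                         # Invalid Field get from type Decimal64 to type Int64
```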
https://s3.amazonaws.com/clickhouse-test-reports/44812/ea04181d11827fd552a86ca4d9c07e1d531e6ca3/fuzzer_astfuzzerdebug/report.html | https://github.com/ClickHouse/ClickHouse/issues/44814 | https://github.com/ClickHouse/ClickHouse/pull/44818 | 8868a5e944d76f8b8481c708c2fdb5476d3d03fd | 4240a71a4e7d0dd4d480c04023decd49e654fd15 | "2023-01-01T17:29:45Z" | c++ | "2023-01-01T23:05:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,804 | ["docs/en/operations/backup.md"] | need docs for backup restore of DB | null | https://github.com/ClickHouse/ClickHouse/issues/44804 | https://github.com/ClickHouse/ClickHouse/pull/44899 | 2482acc6004259d45ac1d0f009de176745087209 | a6c5061e775b1e07dd5dc45f1eba297b83944613 | "2022-12-30T22:38:44Z" | c++ | "2023-01-04T13:13:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,802 | ["docs/en/sql-reference/functions/type-conversion-functions.md", "docs/en/sql-reference/functions/uuid-functions.md"] | need docs for toUUIDOrDefault | null | https://github.com/ClickHouse/ClickHouse/issues/44802 | https://github.com/ClickHouse/ClickHouse/pull/44906 | 20a35efc428f1212c3f4072999d9ab56dd8ea6bf | 7589c29902f8dd9ba0b7ac60c6304528eddeeae2 | "2022-12-30T22:30:17Z" | c++ | "2023-01-04T21:01:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,797 | ["docs/en/sql-reference/statements/insert-into.md"] | need docs for globs infile | https://github.com/ClickHouse/ClickHouse/pull/30135 | https://github.com/ClickHouse/ClickHouse/issues/44797 | https://github.com/ClickHouse/ClickHouse/pull/44913 | a4322b305796826c4da1c91b7d0ba15d8a8ed919 | 3c64cb26b0b56df648a98f63b0c598f4b1a8fa1a | "2022-12-30T22:16:18Z" | c++ | "2023-01-04T20:51:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,796 | ["docs/en/engines/table-engines/integrations/hdfs.md", "docs/en/engines/table-engines/integrations/s3.md", "docs/en/engines/table-engines/mergetree-family/mergetree.md", "docs/en/engines/table-engines/special/file.md", "docs/en/engines/table-engines/special/url.md"] | need docs for partition by | https://github.com/ClickHouse/ClickHouse/pull/30690 | https://github.com/ClickHouse/ClickHouse/issues/44796 | https://github.com/ClickHouse/ClickHouse/pull/45606 | b9e5c586e6496cb607069e6408df6dbf5a5e116c | ff137721fb1bac5b98b16452529e243f69d295ee | "2022-12-30T22:15:29Z" | c++ | "2023-01-25T14:31:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,791 | ["docs/en/operations/settings/index.md", "docs/en/operations/settings/settings-formats.md", "docs/en/operations/settings/settings.md"] | need docs on s3 etc. see pr. | docs in https://github.com/ClickHouse/ClickHouse/pull/33302 | https://github.com/ClickHouse/ClickHouse/issues/44791 | https://github.com/ClickHouse/ClickHouse/pull/45525 | 2b8b1ad5d44400136f63a49624f260dbeef47285 | d6fe7f1b851629caa97e142508b8d451875dd94e | "2022-12-30T22:03:47Z" | c++ | "2023-01-25T00:25:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,787 | ["docs/en/sql-reference/functions/geo/h3.md"] | need docs for h3 geo functions | https://github.com/ClickHouse/ClickHouse/pull/34568 | https://github.com/ClickHouse/ClickHouse/issues/44787 | https://github.com/ClickHouse/ClickHouse/pull/44919 | 3c64cb26b0b56df648a98f63b0c598f4b1a8fa1a | 288488f8a208d0d6780829247d1f47449d5bfc5b | "2022-12-30T21:56:46Z" | c++ | "2023-01-04T20:51:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,786 | ["docs/en/sql-reference/functions/date-time-functions.md"] | need docs for tolastdayofmonth | https://github.com/ClickHouse/ClickHouse/pull/34394 | https://github.com/ClickHouse/ClickHouse/issues/44786 | https://github.com/ClickHouse/ClickHouse/pull/44945 | 9444dae34efbe70555e1e4f6554b98c65bd599c0 | adfc4d4a239f2007e6bd1a47d124943dc94985ed | "2022-12-30T21:55:47Z" | c++ | "2023-01-05T16:53:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,784 | ["docs/en/operations/settings/settings.md"] | need docs for table filter settings | https://github.com/ClickHouse/ClickHouse/pull/38475 | https://github.com/ClickHouse/ClickHouse/issues/44784 | https://github.com/ClickHouse/ClickHouse/pull/44941 | 5ba68516296516182408f2d1eef0a9d8f998966c | 9444dae34efbe70555e1e4f6554b98c65bd599c0 | "2022-12-30T21:47:45Z" | c++ | "2023-01-05T16:52:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,774 | ["docs/en/engines/table-engines/integrations/deltalake.md", "docs/en/sql-reference/table-functions/deltalake.md"] | need docs for deltalake | [PR](https://github.com/ClickHouse/ClickHouse/pull/41054/files#diff-56ca176b379cc08cca6ca530d08e704a0b75fd322e17ad5f205e621e3f154b46) | https://github.com/ClickHouse/ClickHouse/issues/44774 | https://github.com/ClickHouse/ClickHouse/pull/45127 | 9bb1e313697a5728875cbcc1b604f39fa6875c1e | 0ad969171e07cd0a89ad6c475b3bd308dd763816 | "2022-12-30T21:30:02Z" | c++ | "2023-01-11T13:07:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,771 | ["docs/en/sql-reference/functions/arithmetic-functions.md"] | need docs for pmod | (you don't have to strictly follow this form)
**Describe the issue**
A clear and concise description of what's wrong in documentation.
**Additional context**
Add any other context about the problem here.
| https://github.com/ClickHouse/ClickHouse/issues/44771 | https://github.com/ClickHouse/ClickHouse/pull/44943 | a28d6fb490dfe3e8220b15a6a8f717bbe602095c | 5ba68516296516182408f2d1eef0a9d8f998966c | "2022-12-30T21:27:33Z" | c++ | "2023-01-05T16:51:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44757 | ["src/Interpreters/TraceCollector.cpp"] | Crash in ~TraceCollector() (unhandled exception) | ```
2022.12.30 15:54:52.590484 [ 55 ] {} <Fatal> BaseDaemon: Code: 75. DB::ErrnoException: Cannot write to file (fd = 421), errno: 11, strerror: Resource temporarily unavailable. (CANNOT_WRITE_TO_FILE_DESCRIPTOR), Stack trace (when copying this message, always include the lines below):
2022.12.30 15:54:52.590503 [ 55 ] {} <Fatal> BaseDaemon:
2022.12.30 15:54:52.590517 [ 55 ] {} <Fatal> BaseDaemon: 0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, int, bool) @ 0xd9cea7a in /usr/bin/clickhouse
2022.12.30 15:54:52.590537 [ 55 ] {} <Fatal> BaseDaemon: 1. DB::throwFromErrnoWithPath(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, int, int) @ 0xd9cfaaf in /usr/bin/clickhouse
2022.12.30 15:54:52.590559 [ 55 ] {} <Fatal> BaseDaemon: 2. DB::WriteBufferFromFileDescriptor::nextImpl() @ 0xda2d0db in /usr/bin/clickhouse
2022.12.30 15:54:52.590574 [ 55 ] {} <Fatal> BaseDaemon: 3. DB::TraceCollector::stop() @ 0x13627461 in /usr/bin/clickhouse
2022.12.30 15:54:52.590587 [ 55 ] {} <Fatal> BaseDaemon: 4. DB::TraceCollector::~TraceCollector() @ 0x136270b3 in /usr/bin/clickhouse
2022.12.30 15:54:52.590601 [ 55 ] {} <Fatal> BaseDaemon: 5. ? @ 0x12c813d4 in /usr/bin/clickhouse
2022.12.30 15:54:52.590613 [ 55 ] {} <Fatal> BaseDaemon: 6. DB::Context::shutdown() @ 0x12c80d4f in /usr/bin/clickhouse
2022.12.30 15:54:52.590625 [ 55 ] {} <Fatal> BaseDaemon: 7. ? @ 0xda6a99e in /usr/bin/clickhouse
2022.12.30 15:54:52.590639 [ 55 ] {} <Fatal> BaseDaemon: 8. DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0xda5de90 in /usr/bin/clickhouse
2022.12.30 15:54:52.590654 [ 55 ] {} <Fatal> BaseDaemon: 9. Poco::Util::Application::run() @ 0x171fa242 in /usr/bin/clickhouse
2022.12.30 15:54:52.590668 [ 55 ] {} <Fatal> BaseDaemon: 10. DB::Server::run() @ 0xda4873c in /usr/bin/clickhouse
2022.12.30 15:54:52.590679 [ 55 ] {} <Fatal> BaseDaemon: 11. Poco::Util::ServerApplication::run(int, char**) @ 0x1720e59a in /usr/bin/clickhouse
2022.12.30 15:54:52.590692 [ 55 ] {} <Fatal> BaseDaemon: 12. mainEntryClickHouseServer(int, char**) @ 0xda451a9 in /usr/bin/clickhouse
2022.12.30 15:54:52.590707 [ 55 ] {} <Fatal> BaseDaemon: 13. main @ 0x7d705f3 in /usr/bin/clickhouse
2022.12.30 15:54:52.590719 [ 55 ] {} <Fatal> BaseDaemon: 14. __libc_start_main @ 0x7f999e2a2083 in ?
2022.12.30 15:54:52.590731 [ 55 ] {} <Fatal> BaseDaemon: 15. _start @ 0x7a70aee in /usr/bin/clickhouse
2022.12.30 15:54:52.590745 [ 55 ] {} <Fatal> BaseDaemon: (version 22.11.1.19280 (official build))
2022.12.30 15:54:52.590899 [ 7666 ] {} <Fatal> BaseDaemon: ########################################
2022.12.30 15:54:52.590939 [ 7666 ] {} <Fatal> BaseDaemon: (version 22.11.1.19280 (official build), build id: BA0E04D78F678A9752916B0BE48527A1143C6AA8) (from thread 53) (no query) Received signal Aborted (6)
2022.12.30 15:54:52.590959 [ 7666 ] {} <Fatal> BaseDaemon:
2022.12.30 15:54:52.590982 [ 7666 ] {} <Fatal> BaseDaemon: Stack trace: 0x7f999e2c100b 0x7f999e2a0859 0xdbfecbc 0x19e24983 0x19e248ee 0x7d70aab 0x13627390 0x12c813d4 0x12c80d4f 0xda6a99e 0xda5de90 0x171fa242 0xda4873c 0x1720e59a 0xda451a9 0x7d705f3 0x7f999e2a2083 0x7a70aee
2022.12.30 15:54:52.591004 [ 7666 ] {} <Fatal> BaseDaemon: 2. gsignal @ 0x7f999e2c100b in ?
2022.12.30 15:54:52.591021 [ 7666 ] {} <Fatal> BaseDaemon: 3. abort @ 0x7f999e2a0859 in ?
2022.12.30 15:54:52.591051 [ 7666 ] {} <Fatal> BaseDaemon: 4. ? @ 0xdbfecbc in /usr/bin/clickhouse
2022.12.30 15:54:52.591067 [ 7666 ] {} <Fatal> BaseDaemon: 5. ? @ 0x19e24983 in ?
2022.12.30 15:54:52.591083 [ 7666 ] {} <Fatal> BaseDaemon: 6. std::terminate() @ 0x19e248ee in ?
2022.12.30 15:54:52.591101 [ 7666 ] {} <Fatal> BaseDaemon: 7. ? @ 0x7d70aab in /usr/bin/clickhouse
2022.12.30 15:54:52.591117 [ 7666 ] {} <Fatal> BaseDaemon: 8. ? @ 0x13627390 in /usr/bin/clickhouse
2022.12.30 15:54:52.591131 [ 7666 ] {} <Fatal> BaseDaemon: 9. ? @ 0x12c813d4 in /usr/bin/clickhouse
2022.12.30 15:54:52.591148 [ 7666 ] {} <Fatal> BaseDaemon: 10. DB::Context::shutdown() @ 0x12c80d4f in /usr/bin/clickhouse
2022.12.30 15:54:52.591167 [ 7666 ] {} <Fatal> BaseDaemon: 11. ? @ 0xda6a99e in /usr/bin/clickhouse
2022.12.30 15:54:52.591191 [ 7666 ] {} <Fatal> BaseDaemon: 12. DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0xda5de90 in /usr/bin/clickhouse
2022.12.30 15:54:52.591219 [ 7666 ] {} <Fatal> BaseDaemon: 13. Poco::Util::Application::run() @ 0x171fa242 in /usr/bin/clickhouse
2022.12.30 15:54:52.591239 [ 7666 ] {} <Fatal> BaseDaemon: 14. DB::Server::run() @ 0xda4873c in /usr/bin/clickhouse
2022.12.30 15:54:52.591257 [ 7666 ] {} <Fatal> BaseDaemon: 15. Poco::Util::ServerApplication::run(int, char**) @ 0x1720e59a in /usr/bin/clickhouse
2022.12.30 15:54:52.591273 [ 7666 ] {} <Fatal> BaseDaemon: 16. mainEntryClickHouseServer(int, char**) @ 0xda451a9 in /usr/bin/clickhouse
2022.12.30 15:54:52.591288 [ 7666 ] {} <Fatal> BaseDaemon: 17. main @ 0x7d705f3 in /usr/bin/clickhouse
2022.12.30 15:54:52.591301 [ 7666 ] {} <Fatal> BaseDaemon: 18. __libc_start_main @ 0x7f999e2a2083 in ?
2022.12.30 15:54:52.591316 [ 7666 ] {} <Fatal> BaseDaemon: 19. _start @ 0x7a70aee in /usr/bin/clickhouse
``` | https://github.com/ClickHouse/ClickHouse/issues/44757 | https://github.com/ClickHouse/ClickHouse/pull/44758 | 888b4aac8ff465b8d69cefd8ce0e4c2b6a7703a7 | 969214ce9e4a8b092787df42e71d66548a4215bf | "2022-12-30T16:08:27Z" | c++ | "2022-12-31T13:48:39Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,738 | ["src/QueryPipeline/RemoteQueryExecutor.cpp", "tests/queries/0_stateless/02517_wrong_total_structure_crash.reference", "tests/queries/0_stateless/02517_wrong_total_structure_crash.sql"] | Bad cast from type DB::ColumnVector<signed char> to DB::ColumnArray | **Describe the bug**
Build the server in debug mode.
```
CREATE TABLE alias10__fuzz_13 (`Id` Array(Array(UInt256)), `EventDate` Array(String), `field1` Array(Array(Nullable(Int8))), `field2` Array(Date), `field3` Array(Array(Array(UInt128)))) ENGINE = Distributed(test_shard_localhost, currentDatabase(), alias_local10);
set allow_deprecated_syntax_for_merge_tree=1;
CREATE TABLE alias_local10 (
Id Int8,
EventDate Date DEFAULT '2000-01-01',
field1 Int8,
field2 String,
field3 ALIAS CASE WHEN field1 = 1 THEN field2 ELSE '0' END
) ENGINE = MergeTree(EventDate, (Id, EventDate), 8192);
SET prefer_localhost_replica = 0;
SELECT field1 FROM alias10__fuzz_13 WHERE arrayEnumerateDense(NULL, tuple('0.2147483646'), NULL) GROUP BY field1, arrayEnumerateDense(('0.02', '0.1', '0'), NULL) WITH TOTALS;
```
https://s3.amazonaws.com/clickhouse-test-reports/0/e7df1f7113414a532f66a9de8b0e4c205efabe76/fuzzer_astfuzzerubsan/report.html
```
2022.12.30 12:08:53.266010 [ 956786 ] {a1830e0a-336f-4db8-bb68-c38834741464} <Fatal> : Logical error: 'Bad cast from type DB::ColumnVector<signed char> to DB::ColumnArray'.
2022.12.30 12:08:53.266867 [ 956784 ] {} <Trace> BaseDaemon: Received signal 6
2022.12.30 12:08:53.267648 [ 957342 ] {} <Fatal> BaseDaemon: ########################################
2022.12.30 12:08:53.268092 [ 957342 ] {} <Fatal> BaseDaemon: (version 22.13.1.1, build id: 1A971B7E7086651D8F7D657F38C47B058907AEEE) (from thread 956786) (query_id: a1830e0a-336f-4db8-bb68-c38834741464) (query: SELECT field1 FROM alias10__fuzz_13 WHERE arrayEnumerateDense(NULL, tuple('0.2147483646'), NULL) GROUP BY field1, arrayEnumerateDense(('0.02', '0.1', '0'), NULL) WITH TOTALS) Received signal Aborted (6)
2022.12.30 12:08:53.268493 [ 957342 ] {} <Fatal> BaseDaemon:
2022.12.30 12:08:53.268776 [ 957342 ] {} <Fatal> BaseDaemon: Stack trace: 0x7fb9442cda7c 0x7fb944279476 0x7fb94425f7f3 0x217fa336 0x217fa3f5 0x217fa545 0x18be346a 0x18beb1cb 0x28c79cfa 0x2b78c5d6 0x2b78c313 0x2b748397 0x2b7452da 0x2b73d979 0x2b74de85 0x30d1fb99 0x30d203fc 0x30f863f4 0x30f82d7a 0x30f81b1e 0x7fb9442cbb43 0x7fb94435da00
2022.12.30 12:08:53.269014 [ 957342 ] {} <Fatal> BaseDaemon: 4. pthread_kill @ 0x7fb9442cda7c in ?
2022.12.30 12:08:53.269146 [ 957342 ] {} <Fatal> BaseDaemon: 5. gsignal @ 0x7fb944279476 in ?
2022.12.30 12:08:53.269356 [ 957342 ] {} <Fatal> BaseDaemon: 6. abort @ 0x7fb94425f7f3 in ?
2022.12.30 12:08:53.369859 [ 957342 ] {} <Fatal> BaseDaemon: 7. /home/milovidov/work/ClickHouse/src/Common/Exception.cpp:41: DB::abortOnFailedAssertion(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) @ 0x217fa336 in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:53.467501 [ 957342 ] {} <Fatal> BaseDaemon: 8. /home/milovidov/work/ClickHouse/src/Common/Exception.cpp:64: DB::handle_error_code(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, int, bool, std::__1::vector<void*, std::__1::allocator<void*>> const&) @ 0x217fa3f5 in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:53.552509 [ 957342 ] {} <Fatal> BaseDaemon: 9. /home/milovidov/work/ClickHouse/src/Common/Exception.cpp:78: DB::Exception::Exception(DB::Exception::MessageMasked const&, int, bool) @ 0x217fa545 in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:53.648922 [ 957342 ] {} <Fatal> BaseDaemon: 10. /home/milovidov/work/ClickHouse/src/Common/Exception.h:41: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, int, bool) @ 0x18be346a in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:53.751217 [ 957342 ] {} <Fatal> BaseDaemon: 11. /home/milovidov/work/ClickHouse/src/Common/assert_cast.h:47: DB::ColumnArray const& assert_cast<DB::ColumnArray const&, DB::IColumn const&>(DB::IColumn const&) @ 0x18beb1cb in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:53.835141 [ 957342 ] {} <Fatal> BaseDaemon: 12. /home/milovidov/work/ClickHouse/src/DataTypes/Serializations/SerializationArray.cpp:254: DB::SerializationArray::serializeBinaryBulkStatePrefix(DB::IColumn const&, DB::ISerialization::SerializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::SerializeBinaryBulkState>&) const @ 0x28c79cfa in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:53.904051 [ 957342 ] {} <Fatal> BaseDaemon: 13. /home/milovidov/work/ClickHouse/src/Formats/NativeWriter.cpp:61: DB::writeData(DB::ISerialization const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, DB::WriteBuffer&, unsigned long, unsigned long) @ 0x2b78c5d6 in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:53.965541 [ 957342 ] {} <Fatal> BaseDaemon: 14. /home/milovidov/work/ClickHouse/src/Formats/NativeWriter.cpp:153: DB::NativeWriter::write(DB::Block const&) @ 0x2b78c313 in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:54.009892 [ 957117 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 1.31 GiB, peak 1.53 GiB, free memory in arenas 28.67 MiB, will set to 1.43 GiB (RSS), difference: 124.47 MiB
2022.12.30 12:08:54.246021 [ 957342 ] {} <Fatal> BaseDaemon: 15. /home/milovidov/work/ClickHouse/src/Server/TCPHandler.cpp:931: DB::TCPHandler::sendTotals(DB::Block const&) @ 0x2b748397 in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:54.406008 [ 956785 ] {} <Trace> KeeperTCPHandler: Received heartbeat for session #474
2022.12.30 12:08:54.519261 [ 957342 ] {} <Fatal> BaseDaemon: 16. /home/milovidov/work/ClickHouse/src/Server/TCPHandler.cpp:801: DB::TCPHandler::processOrdinaryQueryWithProcessors() @ 0x2b7452da in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:54.765581 [ 957342 ] {} <Fatal> BaseDaemon: 17. /home/milovidov/work/ClickHouse/src/Server/TCPHandler.cpp:389: DB::TCPHandler::runImpl() @ 0x2b73d979 in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:55.009853 [ 957117 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 1.43 GiB, peak 1.53 GiB, free memory in arenas 28.31 MiB, will set to 1.47 GiB (RSS), difference: 41.22 MiB
2022.12.30 12:08:55.044380 [ 957342 ] {} <Fatal> BaseDaemon: 18. /home/milovidov/work/ClickHouse/src/Server/TCPHandler.cpp:1920: DB::TCPHandler::run() @ 0x2b74de85 in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:55.064635 [ 957342 ] {} <Fatal> BaseDaemon: 19. /home/milovidov/work/ClickHouse/contrib/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x30d1fb99 in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:55.090835 [ 957342 ] {} <Fatal> BaseDaemon: 20. /home/milovidov/work/ClickHouse/contrib/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x30d203fc in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:55.121364 [ 957342 ] {} <Fatal> BaseDaemon: 21. /home/milovidov/work/ClickHouse/contrib/poco/Foundation/src/ThreadPool.cpp:199: Poco::PooledThread::run() @ 0x30f863f4 in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:55.149539 [ 957342 ] {} <Fatal> BaseDaemon: 22. /home/milovidov/work/ClickHouse/contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x30f82d7a in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:55.176250 [ 957342 ] {} <Fatal> BaseDaemon: 23. /home/milovidov/work/ClickHouse/contrib/poco/Foundation/src/Thread_POSIX.cpp:345: Poco::ThreadImpl::runnableEntry(void*) @ 0x30f81b1e in /home/milovidov/work/ClickHouse/build_debug/programs/clickhouse
2022.12.30 12:08:55.176491 [ 957342 ] {} <Fatal> BaseDaemon: 24. ? @ 0x7fb9442cbb43 in ?
2022.12.30 12:08:55.176677 [ 957342 ] {} <Fatal> BaseDaemon: 25. ? @ 0x7fb94435da00 in ?
``` | https://github.com/ClickHouse/ClickHouse/issues/44738 | https://github.com/ClickHouse/ClickHouse/pull/44760 | bdd3ef89206054b7f26b8727bccc66dd3fd7bdd6 | c8ed885bae2313fd2444fba05165ada6cce1af42 | "2022-12-30T11:11:07Z" | c++ | "2023-01-03T18:28:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,736 | ["src/Interpreters/JoinToSubqueryTransformVisitor.cpp", "tests/queries/0_stateless/02518_qualified_asterisks_alias_table_name.reference", "tests/queries/0_stateless/02518_qualified_asterisks_alias_table_name.sql"] | Logical error: qualified asterisk must have exactly one child | ```
clickhouse-local -n <<<"
DROP TABLE IF EXISTS test_table_join_1;
CREATE TABLE test_table_join_1
(
id UInt64,
value String
) ENGINE = TinyLog;
DROP TABLE IF EXISTS test_table_join_2;
CREATE TABLE test_table_join_2
(
id UInt64,
value String
) ENGINE = TinyLog;
DROP TABLE IF EXISTS test_table_join_3;
CREATE TABLE test_table_join_3
(
id UInt64,
value String
) ENGINE = TinyLog;
DESCRIBE TABLE
(
SELECT
test_table_join_1.* APPLY toString,
test_table_join_2.* APPLY toString,
test_table_join_3.* APPLY toString
FROM test_table_join_1 AS t1
INNER JOIN test_table_join_2 AS t2 ON t1.id = t2.id
INNER JOIN test_table_join_3 AS t3 ON t2.id = t3.id
)
"
Code: 49. DB::Exception: Logical error: qualified asterisk must have exactly one child: While processing test_table_join_1.* APPLY toString, test_table_join_2.* APPLY toString, test_table_join_3.* APPLY toString: While processing SELECT test_table_join_1.* APPLY toString, test_table_join_2.* APPLY toString, test_table_join_3.* APPLY toString FROM test_table_join_1 AS t1 INNER JOIN test_table_join_2 AS t2 ON t1.id = t2.id INNER JOIN test_table_join_3 AS t3 ON t2.id = t3.id. (LOGICAL_ERROR)
```
https://s3.amazonaws.com/clickhouse-test-reports/0/f84064d05acd76e5098e14f42f15656960f29da6/fuzzer_astfuzzerubsan/report.html | https://github.com/ClickHouse/ClickHouse/issues/44736 | https://github.com/ClickHouse/ClickHouse/pull/44755 | 663cc5fcc199ecf8f48c02429e3e580fd85a217b | c1f6555b32e63dcec6ebf4b167db7fb8c46136ae | "2022-12-30T10:52:13Z" | c++ | "2023-01-06T00:28:22Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,710 | ["tests/ci/tee_popen.py"] | Fix stress tests report | After #44214 it's unclear if timeout is reached.
The commit status should highlight the timeout issue | https://github.com/ClickHouse/ClickHouse/issues/44710 | https://github.com/ClickHouse/ClickHouse/pull/45504 | cb1e8afda634c3bb997944c89ab4ba3aff035fa0 | 116a7edf25c854a1995976868ad93fa13d5ee811 | "2022-12-29T14:13:27Z" | c++ | "2023-01-24T09:53:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,709 | ["src/Formats/MarkInCompressedFile.h", "src/Storages/MergeTree/MergeTreeReaderStream.cpp", "src/Storages/MergeTree/MergeTreeReaderStream.h", "tests/integration/test_s3_low_cardinality_right_border/test.py"] | CANNOT_READ_ALL_DATA / LowCardinality / S3 threadpool | @den-crane Thanks for the reply, we faced similar issue in few of our instances, where remote storage is S3. below is the stack trace. `operationName` column is of type LowCardinality.
```
2022.12.22 06:43:31.949698 [ 16873 ] {f33b0bd4-95db-4330-89d9-f18f24f3ac51} <Error> executeQuery: Code: 33. DB::Exception: Cannot read all data. Bytes read: 114. Bytes expected: 266.: (while reading column operationName): (while reading from part /var/lib/clickhouse/disks/s3_disk/store/961/961a395d-04ae-4f43-bf90-45a6c041b49e/1670889600_0_33677_2140/ from mark 26 with max_rows_to_read = 8192): While executing MergeTreeThread. (CANNOT_READ_ALL_DATA) (version 22.10.1.1877 (official build)) (from x.x.x.x:48908) (in query: SELECT operationName FROM <db>.<table_name> WHERE serviceName = 'xxx-xxx-xxx-xxx' GROUP BY operationName ORDER BY operationName LIMIT 10000), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xce3f35a in /usr/bin/clickhouse
1. ? @ 0xce9e494 in /usr/bin/clickhouse
2. ? @ 0x11a065b1 in /usr/bin/clickhouse
3. DB::SerializationLowCardinality::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >*) const @ 0x119e63e7 in /usr/bin/clickhouse
4. DB::MergeTreeReaderWide::readData(DB::NameAndTypePair const&, std::__1::shared_ptr<DB::ISerialization const> const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, bool, unsigned long, unsigned long, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >&, bool) @ 0x1322673b in /usr/bin/clickhouse
5. DB::MergeTreeReaderWide::readRows(unsigned long, unsigned long, bool, unsigned long, std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x132259b8 in /usr/bin/clickhouse
6. DB::MergeTreeRangeReader::DelayedStream::finalize(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > >&) @ 0x139167e3 in /usr/bin/clickhouse
7. DB::MergeTreeRangeReader::startReadingChain(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0x1391c4a9 in /usr/bin/clickhouse
8. DB::MergeTreeRangeReader::read(unsigned long, std::__1::deque<DB::MarkRange, std::__1::allocator<DB::MarkRange> >&) @ 0x13919f95 in /usr/bin/clickhouse
9. DB::MergeTreeBaseSelectProcessor::readFromPartImpl() @ 0x13911c2e in /usr/bin/clickhouse
10. DB::MergeTreeBaseSelectProcessor::readFromPart() @ 0x139127b9 in /usr/bin/clickhouse
11. DB::MergeTreeBaseSelectProcessor::generate() @ 0x1390de31 in /usr/bin/clickhouse
12. DB::ISource::tryGenerate() @ 0x13590475 in /usr/bin/clickhouse
13. DB::ISource::work() @ 0x13590006 in /usr/bin/clickhouse
14. DB::ExecutionThreadContext::executeTask() @ 0x135ac186 in /usr/bin/clickhouse
15. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x135a02dc in /usr/bin/clickhouse
16. ? @ 0x135a293d in /usr/bin/clickhouse
17. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xcefa04c in /usr/bin/clickhouse
18. ? @ 0xceff7be in /usr/bin/clickhouse
19. start_thread @ 0x7de5 in /lib64/libpthread-2.17.so
20. __clone @ 0xfebad in /lib64/libc-2.17.so
```
Any ideas/suggestions on this one? ClickHouse version: `22.10.1.1877`
_Originally posted by @Dileep-Dora in https://github.com/ClickHouse/ClickHouse/issues/41756#issuecomment-1366388814_
| https://github.com/ClickHouse/ClickHouse/issues/44709 | https://github.com/ClickHouse/ClickHouse/pull/44875 | 35511685e3b9effe56b6637a469add158d26389d | b25f87567499e71ce13fea99a58d4d76120a74aa | "2022-12-29T13:53:10Z" | c++ | "2023-01-06T14:24:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,695 | ["tests/queries/0_stateless/02541_multiple_ignore_with_nested_select.reference", "tests/queries/0_stateless/02541_multiple_ignore_with_nested_select.sql"] | Sorting column * wasn't found in the ActionsDAG's outputs | From https://github.com/ClickHouse/ClickHouse/pull/44686 CI
```
SELECT DISTINCT *
FROM
(
SELECT DISTINCT *
FROM
(
SELECT DISTINCT
0.5,
number % 65536 AS number
FROM numbers(2)
ORDER BY
ignore(ignore(-1, 10.0001)) DESC NULLS LAST,
ignore(2147483648) DESC NULLS FIRST,
ignore(255, 0.0001) ASC,
number ASC
)
ORDER BY number ASC NULLS FIRST
)
WHERE ignore(2147483648)
ORDER BY number DESC
Query id: 9b66e44b-17ae-46b8-9a6a-6ed439652644
0 rows in set. Elapsed: 0.023 sec.
Received exception:
Code: 49. DB::Exception: Sorting column ignore(2147483648) wasn't found in the ActionsDAG's outputs. DAG:
0 : INPUT () Const(UInt8) UInt8 ignore(2147483648)
1 : INPUT () Const(Float64) Float64 0.5
2 : INPUT () UInt32 UInt32 modulo(number, 65536)
3 : INPUT () Const(UInt8) UInt8 ignore(ignore(-1, 10.0001))
4 : INPUT () Const(UInt8) UInt8 ignore(255, 0.0001)
Output nodes: 1 2 3 4
. (LOGICAL_ERROR)
```
Tested in master and in 22.11.2.30 | https://github.com/ClickHouse/ClickHouse/issues/44695 | https://github.com/ClickHouse/ClickHouse/pull/45784 | 659a64a1d93cdc633794974c3b3f0ba9fcb311c9 | 4983353f85606424e4bc52b7fbc76c4395b4a7ef | "2022-12-28T21:07:07Z" | c++ | "2023-01-31T10:12:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,614 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/SelectQueryInfo.h", "tests/queries/0_stateless/02515_projections_with_totals.reference", "tests/queries/0_stateless/02515_projections_with_totals.sql", "tests/queries/0_stateless/02516_projections_with_rollup.reference", "tests/queries/0_stateless/02516_projections_with_rollup.sql"] | Projections and WITH TOTALS don't work with non-default totals_mode | ```
CREATE TABLE t (x UInt8, PROJECTION p (SELECT x GROUP BY x)) ENGINE = MergeTree ORDER BY ();
INSERT INTO t VALUES (0);
SET group_by_overflow_mode = 'any', max_rows_to_group_by = 1000, totals_mode = 'after_having_auto';
SELECT x FROM t GROUP BY x WITH TOTALS;
``` | https://github.com/ClickHouse/ClickHouse/issues/44614 | https://github.com/ClickHouse/ClickHouse/pull/44615 | 9645941cdff8f8bdc844ea950358f2c27b231f22 | 464a513f0ed5f4e1f90b91a25ecda2ae6d270cec | "2022-12-26T21:02:30Z" | c++ | "2022-12-27T12:31:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,608 | ["tests/queries/0_stateless/02538_analyzer_create_table_as_select.reference", "tests/queries/0_stateless/02538_analyzer_create_table_as_select.sql"] | (only with new Analyzer) segmentation fault with create as select with union | ```
milovidov@milovidov-desktop:~/Downloads$ clickhouse-client
ClickHouse client version 22.13.1.1.
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 22.13.1 revision 54461.
milovidov-desktop :) SET allow_experimental_analyzer = 1
SET allow_experimental_analyzer = 1
Query id: a3d56183-09fd-4933-b9b1-142f909d18f6
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) DROP TABLE t3
DROP TABLE t3
Query id: d4f2a0b9-9da6-4599-802a-c4d97c82e680
0 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 22.13.1):
Code: 60. DB::Exception: Received from localhost:9000. DB::Exception: Table default.t3 doesn't exist. (UNKNOWN_TABLE)
milovidov-desktop :) DROP DATABASE IF EXISTS test_01109;
DROP DATABASE IF EXISTS test_01109
Query id: 1921c649-246f-4825-ac96-964c7d9a6241
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) CREATE DATABASE test_01109 ENGINE=Atomic;
CREATE DATABASE test_01109
ENGINE = Atomic
Query id: e3a00b29-f4d5-47f8-9670-589872d54825
Ok.
0 rows in set. Elapsed: 0.008 sec.
milovidov-desktop :) USE test_01109;
USE test_01109
Query id: ccfe4380-bb7b-471a-aff2-f7fd1d1168b2
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) CREATE TABLE t0 ENGINE=MergeTree() ORDER BY tuple() AS SELECT rowNumberInAllBlocks(), * FROM (SELECT toLowCardinality(arrayJoin(['exchange', 'tables'])));
CREATE TABLE t0
ENGINE = MergeTree
ORDER BY tuple() AS
SELECT
rowNumberInAllBlocks(),
*
FROM
(
SELECT toLowCardinality(arrayJoin(['exchange', 'tables']))
)
Query id: 278adc5e-5e73-4a5a-8d40-f6cb9c8498d0
Ok.
0 rows in set. Elapsed: 0.022 sec.
milovidov-desktop :) CREATE TABLE t1 ENGINE=Log() AS SELECT * FROM system.tables AS t JOIN system.databases AS d ON t.database=d.name;
CREATE TABLE t1
ENGINE = Log AS
SELECT *
FROM system.tables AS t
INNER JOIN system.databases AS d ON t.database = d.name
Query id: 634ed7cb-9c8c-4dc2-815f-88780d549de6
Ok.
0 rows in set. Elapsed: 0.075 sec.
milovidov-desktop :) CREATE TABLE t2 ENGINE=MergeTree() ORDER BY tuple() AS SELECT rowNumberInAllBlocks() + (SELECT count() FROM t0), * FROM (SELECT arrayJoin(['hello', 'world']));
CREATE TABLE t2
ENGINE = MergeTree
ORDER BY tuple() AS
SELECT
rowNumberInAllBlocks() + (
SELECT count()
FROM t0
),
*
FROM
(
SELECT arrayJoin(['hello', 'world'])
)
Query id: 9aa0d2b1-de38-4b77-90c5-ae9afce85e15
Ok.
0 rows in set. Elapsed: 0.018 sec.
milovidov-desktop :) DROP TABLE t1;
DROP TABLE t1
Query id: 5b7c706e-fb60-4309-bfd4-883436774592
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) RENAME TABLE t0 TO t1;
RENAME TABLE t0 TO t1
Query id: e9f34cb8-076b-48f6-9ee4-255f718041ee
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) EXCHANGE TABLES t1 AND t2;
EXCHANGE TABLES t1 AND t2
Query id: 0ad1f413-9ea3-4160-9ce1-a0c5f38e4bf9
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) RENAME TABLE t1 TO t1tmp, t2 TO t2tmp;
RENAME TABLE t1 TO t1tmp, t2 TO t2tmp
Query id: 4448b16e-fa98-4a5c-b89a-a6869c9c6501
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) RENAME TABLE t1tmp TO t2, t2tmp TO t1;
RENAME TABLE t1tmp TO t2, t2tmp TO t1
Query id: 9666060b-fc09-4904-8034-9d7a35e6000b
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) CREATE DATABASE test_01109_other_atomic;
CREATE DATABASE test_01109_other_atomic
Query id: b251cc43-1e7a-44ef-9256-302b53979222
Ok.
0 rows in set. Elapsed: 0.009 sec.
milovidov-desktop :) CREATE TABLE test_01109_other_atomic.t3 ENGINE=MergeTree() ORDER BY tuple()
AS SELECT rowNumberInAllBlocks() + (SELECT max((*,*).1.1) + 1 FROM (SELECT (*,) FROM t1 UNION ALL SELECT (*,) FROM t2)), *
FROM (SELECT arrayJoin(['another', 'db']));
CREATE TABLE test_01109_other_atomic.t3
ENGINE = MergeTree
ORDER BY tuple() AS
SELECT
rowNumberInAllBlocks() + (
SELECT max(((*, *).1).1) + 1
FROM
(
SELECT tuple(*)
FROM t1
UNION ALL
SELECT tuple(*)
FROM t2
)
),
*
FROM
(
SELECT arrayJoin(['another', 'db'])
)
Query id: 4cf23e8b-a571-4fe2-9a09-8a2d0ddd9b05
[milovidov-desktop] 2022.12.26 18:55:49.044747 [ 96812 ] <Fatal> BaseDaemon: ########################################
[milovidov-desktop] 2022.12.26 18:55:49.044837 [ 96812 ] <Fatal> BaseDaemon: (version 22.13.1.1, build id: 45F62AAB6160505141DFD095FDB5F5D13DAE58BD) (from thread 93766) (query_id: 4cf23e8b-a571-4fe2-9a09-8a2d0ddd9b05) (query: CREATE TABLE test_01109_other_atomic.t3 ENGINE=MergeTree() ORDER BY tuple() AS SELECT rowNumberInAllBlocks() + (SELECT max((*,*).1.1) + 1 FROM (SELECT (*,) FROM t1 UNION ALL SELECT (*,) FROM t2)), * FROM (SELECT arrayJoin(['another', 'db']));) Received signal Segmentation fault (11)
[milovidov-desktop] 2022.12.26 18:55:49.044894 [ 96812 ] <Fatal> BaseDaemon: Address: 0x50 Access: read. Address not mapped to object.
[milovidov-desktop] 2022.12.26 18:55:49.044929 [ 96812 ] <Fatal> BaseDaemon: Stack trace: 0xfe621a8 0x1004af17 0x7fe89ebd9520
[milovidov-desktop] 2022.12.26 18:55:49.072717 [ 96812 ] <Fatal> BaseDaemon: 0. StackTrace::StackTrace(ucontext_t const&) @ 0xfe621a8 in /home/milovidov/work/ClickHouse/build/programs/clickhouse
[milovidov-desktop] 2022.12.26 18:55:49.080289 [ 96812 ] <Fatal> BaseDaemon: 1. signalHandler(int, siginfo_t*, void*) @ 0x1004af17 in /home/milovidov/work/ClickHouse/build/programs/clickhouse
[milovidov-desktop] 2022.12.26 18:55:49.080310 [ 96812 ] <Fatal> BaseDaemon: 2. ? @ 0x7fe89ebd9520 in ?
[milovidov-desktop] 2022.12.26 18:55:49.080331 [ 96812 ] <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
``` | https://github.com/ClickHouse/ClickHouse/issues/44608 | https://github.com/ClickHouse/ClickHouse/pull/45533 | 297516a08470a4c33bf5dd80f908b3d48d953c2c | eef49fa3edba8eb59325fbad3ad944c7bb9450be | "2022-12-26T17:57:07Z" | c++ | "2023-01-26T11:10:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,588 | ["src/Dictionaries/RangeHashedDictionary.cpp", "tests/queries/0_stateless/02525_range_hashed_dictionary_update_field.reference", "tests/queries/0_stateless/02525_range_hashed_dictionary_update_field.sql"] | the range doesn't work in RANGE_HASHED DICTIONARY after the DICTIONARY update version 22.10.3.27 |
```
DROP TABLE if exists default.test_20221226 on cluster clickhouse;
CREATE TABLE default.test_20221226 on cluster clickhouse
(
    uid Int64,
    start Int64,
    end Int64,
    ck_insert_time DateTime default now()
)
ENGINE = ReplacingMergeTree(ck_insert_time)
ORDER BY (uid, start)
;
DROP TABLE if exists default.test_20221226_all on cluster clickhouse;
CREATE TABLE default.test_20221226_all on cluster clickhouse AS default.test_20221226
ENGINE = Distributed('clickhouse', 'default', 'test_20221226', cityHash64(uid))
;
DROP DICTIONARY if exists default.test_20221226_dict on cluster clickhouse;
CREATE DICTIONARY default.test_20221226_dict on cluster clickhouse
(
    uid Int64,
    start Int64,
    end Int64,
    ck_insert_time DateTime
)
PRIMARY KEY uid
SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 USER 'default' PASSWORD 'default' DB 'default' TABLE 'test_20221226_all' UPDATE_FIELD 'ck_insert_time' UPDATE_LAG 60))
LIFETIME(MIN 10 MAX 60)
LAYOUT(RANGE_HASHED())
RANGE(MIN start MAX end)
;
INSERT INTO default.test_20221226_all(uid,start,end) values (1, 0, 100),(1, 101,200),(2,0,999),(2,1000,10000);
select * from default.test_20221226_all ;
uid	start	end	ck_insert_time
1	0	100	2022-12-26 11:38:34
1	101	200	2022-12-26 11:38:34
2	0	999	2022-12-26 11:38:34
2	1000	10000	2022-12-26 11:38:34
select * from default.test_20221226_dict ;
uid	start	end	ck_insert_time
1	0	100	2022-12-26 11:38:34
1	101	200	2022-12-26 11:38:34
2	0	999	2022-12-26 11:38:34
2	1000	10000	2022-12-26 11:38:34
```
after the DICTIONARY update. (one minute later)
```
select * from default.test_20221226_all ;
uid	start	end	ck_insert_time
1	0	100	2022-12-26 11:38:34
1	101	200	2022-12-26 11:38:34
2	0	999	2022-12-26 11:38:34
2	1000	10000	2022-12-26 11:38:34
select * from default.test_20221226_dict ;
uid	start	end	ck_insert_time
1	101	200	2022-12-26 11:38:34
2	1000	10000	2022-12-26 11:38:34
```
The result from the dictionary is wrong: the first range for each PRIMARY KEY is missing.
Is something wrong in my CREATE statements, or is this a bug in this version? I used the same approach in version 21.11.4.14 and it worked well there.
| https://github.com/ClickHouse/ClickHouse/issues/44588 | https://github.com/ClickHouse/ClickHouse/pull/45061 | db5aa58370542a076e606d9e707aeb6377a9c983 | dcae391210b37d164625c4a678477d3a209fe6ba | "2022-12-26T04:17:09Z" | c++ | "2023-01-10T02:11:12Z" |
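For reference, the lookup semantics the report above relies on can be modeled with a small plain-Python sketch (a toy model, not ClickHouse internals; the inclusive range bounds are an assumption here). Each key keeps all of its `(start, end)` ranges, and a point lookup must be able to resolve into any of them — including the first range per key, which is exactly what disappears after the dictionary update:

```python
# Toy model of a RANGE_HASHED lookup: per key, keep (start, end) ranges
# and resolve a point to the range that covers it.
rows = [
    (1, 0, 100), (1, 101, 200),
    (2, 0, 999), (2, 1000, 10000),
]

ranges = {}
for uid, start, end in rows:
    ranges.setdefault(uid, []).append((start, end))

def dict_get(uid, point):
    """Return the (start, end) range covering `point`, or None."""
    for start, end in ranges.get(uid, []):
        if start <= point <= end:  # assuming inclusive bounds for the sketch
            return (start, end)
    return None
```

With the data from the report, `dict_get(1, 50)` should resolve to the range `(0, 100)` — the kind of lookup that no longer works once the first range per key is dropped.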
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,587 | ["src/AggregateFunctions/AggregateFunctionVarianceMatrix.cpp", "src/AggregateFunctions/AggregateFunctionVarianceMatrix.h", "src/AggregateFunctions/registerAggregateFunctions.cpp", "tests/fuzz/dictionaries/functions.dict", "tests/queries/0_stateless/02515_aggregate_functions_statistics.reference", "tests/queries/0_stateless/02515_aggregate_functions_statistics.sql"] | A function `corrMatrix` to calculate correlation across all the pairs of arguments. | **Use case**
See https://github.com/ClickHouse/ClickHouse/issues/35979
**Describe the solution you'd like**
Let's add a function `corrMatrix`, taking an arbitrary number of arguments and returning a 2d array.
Note: this matrix will be somewhat redundant - symmetric with 1 on the diagonal, but let's return it as a whole for simplicity. | https://github.com/ClickHouse/ClickHouse/issues/44587 | https://github.com/ClickHouse/ClickHouse/pull/44680 | 0ba246ca533511abc0a4976684283c29f8b19f40 | 0c13870eb1b1778c2dbbe4f8a32763e1df63cf76 | "2022-12-25T19:30:47Z" | c++ | "2023-02-02T16:06:06Z" |
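For reference, the intended semantics can be sketched in a few lines of plain Python (a toy model, not the proposed ClickHouse implementation; the pairwise Pearson formula and the shape of the result are inferred from the description above):

```python
import math

def corr_matrix(*columns):
    """Pearson correlation across all pairs of input columns.

    Returns an n x n matrix that is symmetric with 1.0 on the
    diagonal, mirroring the redundancy noted in the description.
    """
    n = len(columns)
    means = [sum(c) / len(c) for c in columns]

    def corr(i, j):
        xi, xj = columns[i], columns[j]
        cov = sum((a - means[i]) * (b - means[j]) for a, b in zip(xi, xj))
        var_i = sum((a - means[i]) ** 2 for a in xi)
        var_j = sum((b - means[j]) ** 2 for b in xj)
        return cov / math.sqrt(var_i * var_j)

    return [[corr(i, j) for j in range(n)] for i in range(n)]
```

For example, `corr_matrix([1, 2, 3], [2, 4, 6], [3, 2, 1])` yields 1.0 on the diagonal, 1.0 for the perfectly correlated first pair, and -1.0 for the anti-correlated third column.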
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,577 | ["src/Analyzer/Passes/QueryAnalysisPass.cpp", "tests/queries/0_stateless/02521_analyzer_array_join_crash.reference", "tests/queries/0_stateless/02521_analyzer_array_join_crash.sql"] | (with new Analyzer) logical error when using ARRAY JOIN | **Describe the bug**
```
milovidov-desktop :) SET allow_experimental_analyzer = 1
SET allow_experimental_analyzer = 1
Query id: b4494b8c-9f14-4d46-8db9-2cb992dcfcb6
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) SELECT arrayFilter(x -> notEmpty(concat(x)), [NULL, NULL]) FROM system.one ARRAY JOIN [1048577] AS elem, arrayMap(x -> concat(x, elem, ''), ['']) AS unused
SELECT arrayFilter(x -> notEmpty(concat(x)), [NULL, NULL])
FROM system.one
ARRAY JOIN
[1048577] AS elem,
arrayMap(x -> concat(x, elem, ''), ['']) AS unused
Query id: ca118c75-812b-4c83-bd30-1af64ed38635
0 rows in set. Elapsed: 0.003 sec.
Received exception from server (version 22.13.1):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Cannot capture column 3 because it has incompatible type: got Array(UInt32), but UInt32 is expected.. (LOGICAL_ERROR)
``` | https://github.com/ClickHouse/ClickHouse/issues/44577 | https://github.com/ClickHouse/ClickHouse/pull/45059 | dafd1de8c50d2374c88640f5a1f985590c31b4c4 | a7d5a5d2803c94e8024016cda2eacc9d98b2a142 | "2022-12-25T14:46:42Z" | c++ | "2023-01-10T12:14:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,568 | ["programs/benchmark/Benchmark.cpp"] | The option `continue_on_errors` in `clickhouse-benchmark` is inconsistent with `clickhouse-client`. | Rename it accordingly. | https://github.com/ClickHouse/ClickHouse/issues/44568 | https://github.com/ClickHouse/ClickHouse/pull/44570 | e1115828d04f04490935b821b19b632733b7b6c9 | 514ff2ba203840b90dbe05411bac36bcb93b7730 | "2022-12-24T19:06:28Z" | c++ | "2022-12-27T13:10:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,549 | ["tests/queries/0_stateless/02538_analyzer_create_table_as_select.reference", "tests/queries/0_stateless/02538_analyzer_create_table_as_select.sql"] | Segfault in PlannerContext while using new analyzer | How to reproduce:
```sql
set allow_experimental_analyzer=1;
CREATE TABLE local_01099_a (number UInt64) ENGINE = MergeTree() ORDER BY number;
CREATE TABLE local_01099_b (number UInt64) ENGINE = MergeTree() ORDER BY number;
CREATE TABLE distributed_01099_a AS local_01099_a ENGINE = Distributed('test_cluster_two_shards_localhost', currentDatabase(), local_01099_a, rand());
CREATE TABLE distributed_01099_b AS local_01099_b ENGINE = Distributed('test_cluster_two_shards_localhost', currentDatabase(), local_01099_b, rand());
INSERT INTO local_01099_a SELECT number from system.numbers limit 3;
INSERT INTO distributed_01099_b SELECT * from distributed_01099_a;
```
Stacktrace:
```
[avogar-dev] 2022.12.23 20:41:38.504650 [ 491665 ] <Fatal> BaseDaemon: ########################################
[avogar-dev] 2022.12.23 20:41:38.505151 [ 491665 ] <Fatal> BaseDaemon: (version 22.13.1.1 (official build), build id: 8532327C2A89046A5D9EF4A0D3995BF7F8C2CD5E) (from thread 491351) (query_id: c7aeed0a-227c-4b84-afbd-964db5b093cc) (query: INSERT INTO distributed_01099_b SELECT * from distributed_01099_a;) Received signal Segmentation fault (11)
[avogar-dev] 2022.12.23 20:41:38.505479 [ 491665 ] <Fatal> BaseDaemon: Address: 0x50 Access: read. Address not mapped to object.
[avogar-dev] 2022.12.23 20:41:38.505769 [ 491665 ] <Fatal> BaseDaemon: Stack trace: 0x298af615 0x298af1dd 0x298b0936 0x298ad81d 0x298ac325 0x2b6cc775 0x2b6cc3ad 0x2b6cec8d 0x2b6cee3b 0x2b686f15 0x2b6ae2bf 0x297d1687 0x297cb6e6 0x29da314a 0x29d9f0c4 0x2af9e174 0x2afae6c5 0x3024c439 0x3024cc7c 0x3049bf54 0x30498cfa 0x30497ade 0x7f3d7ff6a609 0x7f3d7fe8f163
[avogar-dev] 2022.12.23 20:41:38.711320 [ 491665 ] <Fatal> BaseDaemon: 4. /build/build_docker/../contrib/llvm-project/libcxx/include/__hash_table:768: std::__1::__bucket_list_deallocator<std::__1::allocator<std::__1::__hash_node_base<std::__1::__hash_node<std::__1::__hash_value_type<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData>, void*>*>*>>::size[abi:v15000]() const @ 0x298af615 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:38.916269 [ 491665 ] <Fatal> BaseDaemon: 5. /build/build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1164: std::__1::__hash_table<std::__1::__hash_value_type<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData>, std::__1::__unordered_map_hasher<std::__1::shared_ptr<DB::IQueryTreeNode>, std::__1::__hash_value_type<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData>, std::__1::hash<std::__1::shared_ptr<DB::IQueryTreeNode>>, std::__1::equal_to<std::__1::shared_ptr<DB::IQueryTreeNode>>, true>, std::__1::__unordered_map_equal<std::__1::shared_ptr<DB::IQueryTreeNode>, std::__1::__hash_value_type<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData>, std::__1::equal_to<std::__1::shared_ptr<DB::IQueryTreeNode>>, std::__1::hash<std::__1::shared_ptr<DB::IQueryTreeNode>>, true>, std::__1::allocator<std::__1::__hash_value_type<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData>>>::bucket_count[abi:v15000]() const @ 0x298af1dd in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:39.124113 [ 491665 ] <Fatal> BaseDaemon: 6. /build/build_docker/../contrib/llvm-project/libcxx/include/__hash_table:2307: std::__1::__hash_iterator<std::__1::__hash_node<std::__1::__hash_value_type<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData>, void*>*> std::__1::__hash_table<std::__1::__hash_value_type<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData>, std::__1::__unordered_map_hasher<std::__1::shared_ptr<DB::IQueryTreeNode>, std::__1::__hash_value_type<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData>, std::__1::hash<std::__1::shared_ptr<DB::IQueryTreeNode>>, std::__1::equal_to<std::__1::shared_ptr<DB::IQueryTreeNode>>, true>, std::__1::__unordered_map_equal<std::__1::shared_ptr<DB::IQueryTreeNode>, std::__1::__hash_value_type<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData>, std::__1::equal_to<std::__1::shared_ptr<DB::IQueryTreeNode>>, std::__1::hash<std::__1::shared_ptr<DB::IQueryTreeNode>>, true>, std::__1::allocator<std::__1::__hash_value_type<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData>>>::find<std::__1::shared_ptr<DB::IQueryTreeNode>>(std::__1::shared_ptr<DB::IQueryTreeNode> const&) @ 0x298b0936 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:39.310675 [ 491665 ] <Fatal> BaseDaemon: 7. /build/build_docker/../contrib/llvm-project/libcxx/include/unordered_map:1443: std::__1::unordered_map<std::__1::shared_ptr<DB::IQueryTreeNode>, DB::TableExpressionData, std::__1::hash<std::__1::shared_ptr<DB::IQueryTreeNode>>, std::__1::equal_to<std::__1::shared_ptr<DB::IQueryTreeNode>>, std::__1::allocator<std::__1::pair<std::__1::shared_ptr<DB::IQueryTreeNode> const, DB::TableExpressionData>>>::find[abi:v15000](std::__1::shared_ptr<DB::IQueryTreeNode> const&) @ 0x298ad81d in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:39.496125 [ 491665 ] <Fatal> BaseDaemon: 8. /build/build_docker/../src/Planner/PlannerContext.cpp:72: DB::PlannerContext::getTableExpressionDataOrThrow(std::__1::shared_ptr<DB::IQueryTreeNode> const&) @ 0x298ac325 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:40.412877 [ 491665 ] <Fatal> BaseDaemon: 9. /build/build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:958: DB::ReadFromMergeTree::selectRangesToRead(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>, std::__1::shared_ptr<DB::PrewhereInfo> const&, DB::ActionDAGNodes const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const>, unsigned long, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, long>>>>, DB::MergeTreeData const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, bool, Poco::Logger*) @ 0x2b6cc775 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:41.324707 [ 491665 ] <Fatal> BaseDaemon: 10. /build/build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:901: DB::ReadFromMergeTree::selectRangesToRead(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>) const @ 0x2b6cc3ad in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:42.235434 [ 491665 ] <Fatal> BaseDaemon: 11. /build/build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:1206: DB::ReadFromMergeTree::getAnalysisResult() const @ 0x2b6cec8d in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:43.089654 [ 491665 ] <Fatal> BaseDaemon: 12. /build/build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:0: DB::ReadFromMergeTree::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) @ 0x2b6cee3b in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:43.256887 [ 491665 ] <Fatal> BaseDaemon: 13. /build/build_docker/../src/Processors/QueryPlan/ISourceStep.cpp:16: DB::ISourceStep::updatePipeline(std::__1::vector<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>, std::__1::allocator<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) @ 0x2b686f15 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:43.537710 [ 491665 ] <Fatal> BaseDaemon: 14. /build/build_docker/../src/Processors/QueryPlan/QueryPlan.cpp:187: DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x2b6ae2bf in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:43.736713 [ 491665 ] <Fatal> BaseDaemon: 15. /build/build_docker/../src/Interpreters/IInterpreterUnionOrSelectQuery.cpp:30: DB::IInterpreterUnionOrSelectQuery::buildQueryPipeline() @ 0x297d1687 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:44.122724 [ 491665 ] <Fatal> BaseDaemon: 16. /build/build_docker/../src/Interpreters/InterpreterInsertQuery.cpp:388: DB::InterpreterInsertQuery::execute() @ 0x297cb6e6 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:44.597469 [ 491665 ] <Fatal> BaseDaemon: 17. /build/build_docker/../src/Interpreters/executeQuery.cpp:686: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x29da314a in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:45.109069 [ 491665 ] <Fatal> BaseDaemon: 18. /build/build_docker/../src/Interpreters/executeQuery.cpp:1083: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x29d9f0c4 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:45.606533 [ 491665 ] <Fatal> BaseDaemon: 19. /build/build_docker/../src/Server/TCPHandler.cpp:375: DB::TCPHandler::runImpl() @ 0x2af9e174 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:46.152271 [ 491665 ] <Fatal> BaseDaemon: 20. /build/build_docker/../src/Server/TCPHandler.cpp:1920: DB::TCPHandler::run() @ 0x2afae6c5 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:46.248057 [ 491665 ] <Fatal> BaseDaemon: 21. /build/build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:43: Poco::Net::TCPServerConnection::start() @ 0x3024c439 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:46.358717 [ 491665 ] <Fatal> BaseDaemon: 22. /build/build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115: Poco::Net::TCPServerDispatcher::run() @ 0x3024cc7c in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:46.474266 [ 491665 ] <Fatal> BaseDaemon: 23. /build/build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:199: Poco::PooledThread::run() @ 0x3049bf54 in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:46.585640 [ 491665 ] <Fatal> BaseDaemon: 24. /build/build_docker/../contrib/poco/Foundation/src/Thread.cpp:56: Poco::(anonymous namespace)::RunnableHolder::run() @ 0x30498cfa in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:46.698900 [ 491665 ] <Fatal> BaseDaemon: 25. /build/build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345: Poco::ThreadImpl::runnableEntry(void*) @ 0x30497ade in /home/avogar/tmp/master/clickhouse-debug
[avogar-dev] 2022.12.23 20:41:46.699241 [ 491665 ] <Fatal> BaseDaemon: 26. ? @ 0x7f3d7ff6a609 in ?
[avogar-dev] 2022.12.23 20:41:46.699509 [ 491665 ] <Fatal> BaseDaemon: 27. clone @ 0x7f3d7fe8f163 in ?
[avogar-dev] 2022.12.23 20:41:47.600291 [ 491665 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 2B67CFA4CF85A05EFDD940F61C78C634)
```
https://s3.amazonaws.com/clickhouse-test-reports/44522/123392c9961b68f1480f2585fb4510530bd90271/fuzzer_astfuzzerdebug//report.html | https://github.com/ClickHouse/ClickHouse/issues/44549 | https://github.com/ClickHouse/ClickHouse/pull/45533 | 297516a08470a4c33bf5dd80f908b3d48d953c2c | eef49fa3edba8eb59325fbad3ad944c7bb9450be | "2022-12-23T20:49:45Z" | c++ | "2023-01-26T11:10:09Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,536 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp", "src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.cpp", "src/Storages/MergeTree/MergeTreeSettings.cpp", "tests/queries/0_stateless/02514_bad_index_granularity.reference", "tests/queries/0_stateless/02514_bad_index_granularity.sql"] | CH hangs with insertion (index_granularity = 0) | ```sql
CREATE TABLE t
(
id Int64,
d String,
p Map(String, String)
)
ENGINE = ReplacingMergeTree order by id settings index_granularity = 0;
insert into t values (1, 's', {'a':'b'});
```

CH starts eating RAM until it crashes. The same happens with:

```sql
insert into t select 1, 's', map('a','b');
```
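A minimal Python sketch of the suspected failure mode (this is an assumption about the mechanism, not actual ClickHouse code): index marks are written every `index_granularity` rows, so a step of 0 never advances — which matches the runaway memory growth. The fix is simply to reject non-positive values:

```python
def iter_granule_marks(total_rows, index_granularity):
    # Hypothetical model of mark generation: each granule covers
    # `index_granularity` rows, so the step must be positive.
    if index_granularity <= 0:
        raise ValueError("index_granularity must be a positive integer")
    marks = []
    row = 0
    while row < total_rows:
        marks.append(row)
        row += index_granularity  # with a step of 0 this loop would never terminate
    return marks

print(iter_granule_marks(100, 40))  # prints [0, 40, 80]
```

Until the server validates the setting, declaring a positive granularity (the default is 8192) avoids the hang.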
| https://github.com/ClickHouse/ClickHouse/issues/44536 | https://github.com/ClickHouse/ClickHouse/pull/44578 | 280e14456f15504c22493363c7ebf9ecf709edd8 | d8924f0b0ed96c5b6bc4398f34aa7cc96dcaf8d2 | "2022-12-23T15:22:05Z" | c++ | "2022-12-26T14:20:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,528 | ["src/Analyzer/Passes/QueryAnalysisPass.cpp", "tests/queries/0_stateless/02525_analyzer_function_in_crash_fix.reference", "tests/queries/0_stateless/02525_analyzer_function_in_crash_fix.sql"] | NativeWriter/SerializationNullable: downcast of address which does not point to an object of type ColumnNullable | https://s3.amazonaws.com/clickhouse-test-reports/44466/45f9fa48c80a54da54323a6bf00e65160fd3e360/fuzzer_astfuzzerubsan//report.html
```
../src/Common/assert_cast.h:50:12: runtime error: downcast of address 0x7f45d40aed70 which does not point to an object of type 'const DB::ColumnNullable'
0x7f45d40aed70: note: object is of type 'DB::ColumnVector<char8_t>'
00 00 00 00 20 4c 2a 11 00 00 00 00 03 00 00 00 00 00 00 00 c0 04 06 c0 45 7f 00 00 c3 04 06 c0
^~~~~~~~~~~~~~~~~~~~~~~
vptr for 'DB::ColumnVector<char8_t>'
#0 0x2c387ab7 in DB::ColumnNullable const& assert_cast<DB::ColumnNullable const&, DB::IColumn const&>(DB::IColumn const&) build_docker/../src/Common/assert_cast.h:50:12
#1 0x2c387ab7 in DB::SerializationNullable::serializeBinaryBulkStatePrefix(DB::IColumn const&, DB::ISerialization::SerializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::SerializeBinaryBulkState>&) const build_docker/../src/DataTypes/Serializations/SerializationNullable.cpp:78:36
#2 0x2ebfa5e8 in DB::writeData(DB::ISerialization const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, DB::WriteBuffer&, unsigned long, unsigned long) build_docker/../src/Formats/NativeWriter.cpp:61:19
#3 0x2ebfa5e8 in DB::NativeWriter::write(DB::Block const&) build_docker/../src/Formats/NativeWriter.cpp:151:13
#4 0x2ebb5df4 in DB::TCPHandler::sendData(DB::Block const&) build_docker/../src/Server/TCPHandler.cpp:1796:26
#5 0x2ebb1fa9 in DB::TCPHandler::processOrdinaryQueryWithProcessors() build_docker/../src/Server/TCPHandler.cpp:788:21
#6 0x2eba3309 in DB::TCPHandler::runImpl() build_docker/../src/Server/TCPHandler.cpp:389:17
#7 0x2ebc4179 in DB::TCPHandler::run() build_docker/../src/Server/TCPHandler.cpp:1920:9
#8 0x2fe389eb in Poco::Net::TCPServerConnection::start() build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3
#9 0x2fe38ed9 in Poco::Net::TCPServerDispatcher::run() build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115:20
#10 0x2ffb06e6 in Poco::PooledThread::run() build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14
#11 0x2ffae1ce in Poco::ThreadImpl::runnableEntry(void*) build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27
#12 0x7f46aa4db608 in start_thread /build/glibc-SzIz7B/glibc-2.31/nptl/pthread_create.c:477:8
#13 0x7f46aa400132 in __clone /build/glibc-SzIz7B/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../src/Common/assert_cast.h:50:12 in
```
```
2022.12.23 05:37:16.902357 [ 437 ] {} <Fatal> BaseDaemon: ########################################
2022.12.23 05:37:16.902443 [ 437 ] {} <Fatal> BaseDaemon: (version 22.13.1.1, build id: 8B6135E4FED74CD732EE252D0D4322430D65EB75) (from thread 148) (query_id: d67f210f-3c15-4aca-a064-b656d9140238) (query: SELECT x IN toDecimal64(257, NULL), * FROM temp__fuzz_0) Received signal sanitizer trap (-3)
2022.12.23 05:37:16.902476 [ 437 ] {} <Fatal> BaseDaemon: Sanitizer trap.
2022.12.23 05:37:16.902531 [ 437 ] {} <Fatal> BaseDaemon: Stack trace: 0x21656fe3 0x2193b556 0x16dfa976 0x16e0f3cd 0x2c387ab8 0x2ebfa5e9 0x2ebb5df5 0x2ebb1faa 0x2eba330a 0x2ebc417a 0x2fe389ec 0x2fe38eda 0x2ffb06e7 0x2ffae1cf 0x7f46aa4db609 0x7f46aa400133
2022.12.23 05:37:16.916228 [ 437 ] {} <Fatal> BaseDaemon: 0.1. inlined from ./build_docker/../src/Common/StackTrace.cpp:334: StackTrace::tryCapture()
2022.12.23 05:37:16.916254 [ 437 ] {} <Fatal> BaseDaemon: 0. ./build_docker/../src/Common/StackTrace.cpp:295: StackTrace::StackTrace() @ 0x21656fe3 in /workspace/clickhouse
2022.12.23 05:37:16.941078 [ 437 ] {} <Fatal> BaseDaemon: 1. ./build_docker/../src/Daemon/BaseDaemon.cpp:0: sanitizerDeathCallback() @ 0x2193b556 in /workspace/clickhouse
2022.12.23 05:37:18.062820 [ 437 ] {} <Fatal> BaseDaemon: 2. __sanitizer::Die() @ 0x16dfa976 in /workspace/clickhouse
2022.12.23 05:37:19.161725 [ 437 ] {} <Fatal> BaseDaemon: 3. ? @ 0x16e0f3cd in /workspace/clickhouse
2022.12.23 05:37:19.179785 [ 437 ] {} <Fatal> BaseDaemon: 4.1. inlined from ./build_docker/../src/Common/assert_cast.h:0: DB::ColumnNullable const& assert_cast<DB::ColumnNullable const&, DB::IColumn const&>(DB::IColumn const&)
2022.12.23 05:37:19.179813 [ 437 ] {} <Fatal> BaseDaemon: 4. ./build_docker/../src/DataTypes/Serializations/SerializationNullable.cpp:78: DB::SerializationNullable::serializeBinaryBulkStatePrefix(DB::IColumn const&, DB::ISerialization::SerializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::SerializeBinaryBulkState>&) const @ 0x2c387ab8 in /workspace/clickhouse
2022.12.23 05:37:19.194153 [ 437 ] {} <Fatal> BaseDaemon: 5.1. inlined from ./build_docker/../src/Formats/NativeWriter.cpp:62: DB::writeData(DB::ISerialization const&, COW<DB::IColumn>::immutable_ptr<DB::IColumn> const&, DB::WriteBuffer&, unsigned long, unsigned long)
2022.12.23 05:37:19.194178 [ 437 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Formats/NativeWriter.cpp:151: DB::NativeWriter::write(DB::Block const&) @ 0x2ebfa5e9 in /workspace/clickhouse
2022.12.23 05:37:19.257647 [ 437 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Server/TCPHandler.cpp:1797: DB::TCPHandler::sendData(DB::Block const&) @ 0x2ebb5df5 in /workspace/clickhouse
2022.12.23 05:37:19.317627 [ 437 ] {} <Fatal> BaseDaemon: 7.1. inlined from ./build_docker/../src/Server/TCPHandler.cpp:788: ~unique_lock
2022.12.23 05:37:19.317655 [ 437 ] {} <Fatal> BaseDaemon: 7. ./build_docker/../src/Server/TCPHandler.cpp:790: DB::TCPHandler::processOrdinaryQueryWithProcessors() @ 0x2ebb1faa in /workspace/clickhouse
2022.12.23 05:37:19.371034 [ 437 ] {} <Fatal> BaseDaemon: 8. ./build_docker/../src/Server/TCPHandler.cpp:390: DB::TCPHandler::runImpl() @ 0x2eba330a in /workspace/clickhouse
2022.12.23 05:37:19.437423 [ 437 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Server/TCPHandler.cpp:1922: DB::TCPHandler::run() @ 0x2ebc417a in /workspace/clickhouse
2022.12.23 05:37:19.443035 [ 437 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x2fe389ec in /workspace/clickhouse
2022.12.23 05:37:19.450523 [ 437 ] {} <Fatal> BaseDaemon: 11.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: std::__1::default_delete<Poco::Net::TCPServerConnection>::operator()[abi:v15000](Poco::Net::TCPServerConnection*) const
2022.12.23 05:37:19.450549 [ 437 ] {} <Fatal> BaseDaemon: 11.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:305: std::__1::unique_ptr<Poco::Net::TCPServerConnection, std::__1::default_delete<Poco::Net::TCPServerConnection>>::reset[abi:v15000](Poco::Net::TCPServerConnection*)
2022.12.23 05:37:19.450586 [ 437 ] {} <Fatal> BaseDaemon: 11.3. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:259: ~unique_ptr
2022.12.23 05:37:19.450623 [ 437 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x2fe38eda in /workspace/clickhouse
2022.12.23 05:37:19.459029 [ 437 ] {} <Fatal> BaseDaemon: 12. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x2ffb06e7 in /workspace/clickhouse
2022.12.23 05:37:19.466592 [ 437 ] {} <Fatal> BaseDaemon: 13.1. inlined from ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::assign(Poco::Runnable*)
2022.12.23 05:37:19.466616 [ 437 ] {} <Fatal> BaseDaemon: 13.2. inlined from ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:208: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::operator=(Poco::Runnable*)
2022.12.23 05:37:19.466646 [ 437 ] {} <Fatal> BaseDaemon: 13. ./build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:360: Poco::ThreadImpl::runnableEntry(void*) @ 0x2ffae1cf in /workspace/clickhouse
2022.12.23 05:37:19.466689 [ 437 ] {} <Fatal> BaseDaemon: 14. ? @ 0x7f46aa4db609 in ?
2022.12.23 05:37:19.466730 [ 437 ] {} <Fatal> BaseDaemon: 15. __clone @ 0x7f46aa400133 in ?
2022.12.23 05:37:19.466767 [ 437 ] {} <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
```
```
2022.12.23 05:36:48.963652 [ 148 ] {c00f5859-3f76-4979-8c8f-31c8d192468f} <Debug> executeQuery: (from [::ffff:127.0.0.1]:35574) CREATE TABLE temp__fuzz_0 (`x` UInt8, `y` Nullable(Decimal(38, 2))) ENGINE = Memory (stage: Complete)
```
| https://github.com/ClickHouse/ClickHouse/issues/44528 | https://github.com/ClickHouse/ClickHouse/pull/45064 | 3963eeae16dbf749ed02a52cc3ee16311dd58283 | db5aa58370542a076e606d9e707aeb6377a9c983 | "2022-12-23T12:08:50Z" | c++ | "2023-01-10T02:10:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,504 | ["src/Storages/MergeTree/KeyCondition.cpp", "tests/queries/0_stateless/02764_index_analysis_fix.reference", "tests/queries/0_stateless/02764_index_analysis_fix.sql"] | Valid queries may fail due to a bug in partition pruning | When I use "LIKE" in version 20.3.3.6 everything works, but in 22.3.2.1 something goes wrong:
```
clickhouse-180 :) show create table act.goodsdeliverflow_local
SHOW CREATE TABLE act.goodsdeliverflow_local
Query id: 846831d1-a00c-4ae4-bbe1-8322dd118141
┌─statement──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ CREATE TABLE act.goodsdeliverflow_local
(
.......
`dtEventTime` String,
.......
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(toDate(dtEventTime))
ORDER BY (dtEventTime, localeCountryCode, PlatID)
SETTINGS index_granularity = 8192 │
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
1 rows in set. Elapsed: 0.003 sec.
clickhouse-180 :) select * from act.goodsdeliverflow_local where dtEventTime LIKE '2022-10-01%'
SELECT *
FROM act.goodsdeliverflow_local
WHERE dtEventTime LIKE '2022-10-01%'
Query id: f335868e-edf1-42c3-a7c1-36d774096db6
0 rows in set. Elapsed: 0.007 sec.
Received exception from server (version 22.3.2):
Code: 6. DB::Exception: Received from localhost:9099. DB::Exception: Cannot parse string '2022-10-01%' as Date: syntax error at position 10 (parsed just '2022-10-01'). (CANNOT_PARSE_TEXT)
clickhouse-180 :)
```
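What appears to happen, sketched in plain Python (an assumption based on the error message — this is not ClickHouse code): partition pruning applies the partition key expression `toDate(...)` to constants taken from the `WHERE` clause, and the LIKE pattern `'2022-10-01%'` is not a parseable date:

```python
from datetime import datetime

def to_date(s):
    # Stands in for ClickHouse's toDate(): strict parse of YYYY-MM-DD.
    return datetime.strptime(s, "%Y-%m-%d").date()

to_date("2022-10-01")        # fine: an ordinary date constant
try:
    to_date("2022-10-01%")   # the raw LIKE pattern reaches the same parser
except ValueError as e:
    print("cannot parse:", e)
```

A common workaround, assuming `dtEventTime` always begins with the date, is to rewrite the filter as a range that partition pruning can handle: `WHERE dtEventTime >= '2022-10-01' AND dtEventTime < '2022-10-02'`.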
Why does using "LIKE" on a `String` column produce a `Cannot parse string as Date` error message?
Is it because the column is the partition key? | https://github.com/ClickHouse/ClickHouse/issues/44504 | https://github.com/ClickHouse/ClickHouse/pull/50153 | f118cba17e014620b9d13246f95cbb936dbe07f4 | 881bdbf8391a020dfea047f78cc434a6ab5fb471 | "2022-12-22T03:28:54Z" | c++ | "2023-06-05T02:00:20Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,503 | ["src/Functions/in.cpp", "tests/queries/0_stateless/01906_lc_in_bug.reference", "tests/queries/0_stateless/01906_lc_in_bug.sql"] | crash due to `NOT (toLowCardinality('') IN` | ```sql
CREATE TABLE test(key Int32) ENGINE = MergeTree ORDER BY (key);
insert into test select intDiv(number,100) from numbers(10000000);
SELECT COUNT() FROM test WHERE key <= 100000 AND (NOT (toLowCardinality('') IN (SELECT '')));
```

```
<Fatal> BaseDaemon: ########################################
<Fatal> BaseDaemon: ########################################
<Fatal> BaseDaemon: ########################################
<Fatal> BaseDaemon: (version 22.8.11.15 (official build), build id: 8996F0B199922EB3) (from thread 3020
ey <= 100000 AND (NOT (toLowCardinality('') IN (SELECT '' ))) ;) Received signal Segmentation fault (11
<Fatal> BaseDaemon: (version 22.8.11.15 (official build), build id: 8996F0B199922EB3) (from thread 2281
y <= 100000 AND (NOT (toLowCardinality('') IN (SELECT '' ))) ;) Received signal Segmentation fault (11)
<Fatal> BaseDaemon: (version 22.8.11.15 (official build), build id: 8996F0B199922EB3) (from thread 2524
ey <= 100000 AND (NOT (toLowCardinality('') IN (SELECT '' ))) ;) Received signal Segmentation fault (11
<Fatal> BaseDaemon: Address: 0x7f32028fc000 Access: read. Attempted access has violated the permissions
<Fatal> BaseDaemon: Address: 0x7f324e218000 Access: read. Attempted access has violated the permissions
<Fatal> BaseDaemon: Address: 0x7f323d92a000 Access: read. Attempted access has violated the permissions
<Fatal> BaseDaemon: Stack trace: 0x14663831 0x1465cb43 0xcbaa3ce 0x13ceb847 0x13cec13d 0x13ced5e9 0x149
0xa4be8bd 0x7f32a7232fa3 0x7f32a71634cf
<Fatal> BaseDaemon: Stack trace: 0x14663831 0x1465cb43 0xcbaa3ce 0x13ceb847 0x13cec13d 0x13ced5e9 0x149
0xa4be8bd 0x7f32a7232fa3 0x7f32a71634cf
<Fatal> BaseDaemon: Stack trace: 0x14663831 0x1465cb43 0xcbaa3ce 0x13ceb847 0x13cec13d 0x13ced5e9 0x149
0xa4be8bd 0x7f32a7232fa3 0x7f32a71634cf
```
found in 21.8 but I guess all versions are affected.
100% reproducible with
```
clickhouse-benchmark <<< "SELECT COUNT() FROM test WHERE key <= 100000 AND (NOT (toLowCardinality('') IN (SELECT '')))"
```
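For reference, a sketch in plain Python (my reading of the intended semantics, not ClickHouse code) of what the predicate should return: `'' IN ('')` is true for every row, so the `NOT` filters everything out and the correct answer is 0 regardless of the key range:

```python
# Mirrors: insert into test select intDiv(number,100) from numbers(10000000);
rows = [n // 100 for n in range(10_000)]  # smaller sample, same shape

# SELECT COUNT() ... WHERE key <= 100000 AND NOT ('' IN (SELECT ''))
count = sum(1 for key in rows if key <= 100_000 and not ('' in {''}))
print(count)  # the expected result is 0
```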
22.12.1.985 returns random results, so I guess it's affected as well. | https://github.com/ClickHouse/ClickHouse/issues/44503 | https://github.com/ClickHouse/ClickHouse/pull/44506 | 6c23721255b55717ee31ec39d92b8c0f2bf88ec8 | 63f5712319e34247d84896fde14d91da7a798015 | "2022-12-22T02:28:36Z" | c++ | "2022-12-23T10:28:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,500 | ["src/Storages/MergeTree/IMergeTreeDataPart.cpp", "src/Storages/MergeTree/IMergeTreeDataPart.h", "src/Storages/MergeTree/IMergeTreeDataPartInfoForReader.h", "src/Storages/MergeTree/IMergeTreeReader.cpp", "src/Storages/MergeTree/IMergeTreeReader.h", "src/Storages/MergeTree/LoadedMergeTreeDataPartInfoForReader.h"] | Degraded performance of select queries after v22.8 | **Describe the situation**
There is still a performance drop between 22.7.7.24 -> 22.8.1.2097 (and up to the latest 22.12)
I'm running a trivial query "select count() from ... where counter_id = ... and ts > ... and ts < .. format **Null** settings local_filesystem_read_method='**read**'." (typical "hits" table).
The query pipeline is identical between the versions (22.7.7.24 and 22.8.1.2097). The slowness comes from "AggregatingTransform" step that's processing rows x3-4 times slower.
Trace logs:
22.7 - https://pastila.nl/?02aa875b/fd42b27162d53ca2a79b137e217b9ddd
22.8 - https://pastila.nl/?02aa875b/d0c917e0b0166c5f8dd3af2d1cc96035
The QPS drop is x2-3 times regardless of the concurrency level used in clickhouse-benchmark tool (same results for -c 1, 8, 16)
**How to reproduce**
I was not able to reproduce the issue when creating new tables and importing fake data. On the real machine I can upgrade/downgrade between the versions and clearly see the performance difference. I went to the point where one replica in the cluster is on 22.7.7.24 and another is 22.8.1.2097. Practically all queries are slower on 22.8 about x1.5-3 times.
**Additional context**
I checked all versions after 22.8 up to the latest 22.12, the issue persists.
Perf of 22.7
<img width="1785" alt="Screenshot 2022-12-21 at 21 54 44" src="https://user-images.githubusercontent.com/5286337/209001060-1f2acb08-f8e6-444c-a85b-baf695eadf29.png">
Perf of 22.8
<img width="1788" alt="Screenshot 2022-12-21 at 21 51 51" src="https://user-images.githubusercontent.com/5286337/209000633-98171f78-0559-491b-889e-873202c1cbe1.png">
The query in this case is x1.5 times slower.
Also, the code difference: https://github.com/ClickHouse/ClickHouse/compare/v22.8.1.2097-lts...v22.7.7.24-stable
| https://github.com/ClickHouse/ClickHouse/issues/44500 | https://github.com/ClickHouse/ClickHouse/pull/45630 | 596cbb1d238126b98ceda677dee779948f633760 | f10e82355e2e44f858baf692599a20674b0af0d9 | "2022-12-21T20:58:26Z" | c++ | "2023-01-27T13:59:55Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,494 | ["src/Analyzer/SortNode.cpp", "tests/queries/0_stateless/02513_analyzer_sort_msan.reference", "tests/queries/0_stateless/02513_analyzer_sort_msan.sql"] | MSan: nested ORDER BYs with new analyzer | **Describe the bug**
MSan report - https://s3.amazonaws.com/clickhouse-test-reports/43905/00ff8b82dc60262ba262c2a2e39a95498b060251/fuzzer_astfuzzermsan//report.html
```
2022.12.21 07:16:36.245599 [ 170 ] {f80a3a19-5191-4791-946c-35498d4d9b9b} <Debug> executeQuery: (from [::ffff:127.0.0.1]:38928) EXPLAIN header = 1 SELECT * FROM (SELECT * FROM (SELECT * FROM numbers(3) ORDER BY number ASC) ORDER BY number DESC) ORDER BY number ASC (stage: Complete)
==167==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x446b1ddd in std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__node_insert_unique_perform[abi:v15000](std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1821:27
#1 0x446b1ddd in std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__node_insert_unique(std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1852:9
#2 0x446b1ddd in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__emplace_unique_impl<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:2054:32
#3 0x446aef02 in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__emplace_unique_extract_key[abi:v15000]<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&, std::__1::__extract_key_fail_tag) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1067:14
#4 0x446aef02 in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__emplace_unique[abi:v15000]<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1045:14
#5 0x446aef02 in std::__1::pair<std::__1::__hash_const_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::unordered_set<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::emplace[abi:v15000]<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/unordered_set:659:30
#6 0x446aef02 in DB::(anonymous namespace)::OrderByLimitByDuplicateEliminationVisitor::visitImpl(std::__1::shared_ptr<DB::IQueryTreeNode>&) build_docker/../src/Analyzer/Passes/OrderByLimitByDuplicateEliminationPass.cpp:40:67
#7 0x446aef02 in DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::OrderByLimitByDuplicateEliminationVisitor, false>::visit(std::__1::shared_ptr<DB::IQueryTreeNode>&) build_docker/../src/Analyzer/InDepthQueryTreeVisitor.h:52:22
#8 0x446ae3da in DB::OrderByLimitByDuplicateEliminationPass::run(std::__1::shared_ptr<DB::IQueryTreeNode>, std::__1::shared_ptr<DB::Context const>) build_docker/../src/Analyzer/Passes/OrderByLimitByDuplicateEliminationPass.cpp:76:13
#9 0x444e89e9 in DB::QueryTreePassManager::run(std::__1::shared_ptr<DB::IQueryTreeNode>) build_docker/../src/Analyzer/QueryTreePassManager.cpp:99:20
#10 0x4494092d in DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const> const&) build_docker/../src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:54:29
#11 0x4494092d in DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::__1::shared_ptr<DB::IAST> const&, DB::SelectQueryOptions const&, std::__1::shared_ptr<DB::Context const>) build_docker/../src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:66:18
#12 0x4432c747 in DB::InterpreterExplainQuery::executeImpl() build_docker/../src/Interpreters/InterpreterExplainQuery.cpp:426:48
#13 0x44329bd4 in DB::InterpreterExplainQuery::execute() build_docker/../src/Interpreters/InterpreterExplainQuery.cpp:87:20
#14 0x457706f7 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/../src/Interpreters/executeQuery.cpp:686:36
#15 0x45763bfd in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) build_docker/../src/Interpreters/executeQuery.cpp:1083:30
#16 0x48fe52b5 in DB::TCPHandler::runImpl() build_docker/../src/Server/TCPHandler.cpp:375:24
#17 0x49026afd in DB::TCPHandler::run() build_docker/../src/Server/TCPHandler.cpp:1920:9
#18 0x550d3edd in Poco::Net::TCPServerConnection::start() build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3
#19 0x550d526e in Poco::Net::TCPServerDispatcher::run() build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115:20
#20 0x558357cb in Poco::PooledThread::run() build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14
#21 0x55830d61 in Poco::(anonymous namespace)::RunnableHolder::run() build_docker/../contrib/poco/Foundation/src/Thread.cpp:55:11
#22 0x5582cb28 in Poco::ThreadImpl::runnableEntry(void*) build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27
#23 0x7faa30782608 in start_thread /build/glibc-SzIz7B/glibc-2.31/nptl/pthread_create.c:477:8
#24 0x7faa306a7132 in __clone /build/glibc-SzIz7B/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Uninitialized value was stored to memory at
#0 0x4465022b in SipHash::update(char const*, unsigned long) build_docker/../src/Common/SipHash.h:113:13
#1 0x4465022b in SipHash::update(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) build_docker/../src/Common/SipHash.h:154:9
#2 0x4465022b in DB::ColumnNode::updateTreeHashImpl(SipHash&) const build_docker/../src/Analyzer/ColumnNode.cpp:84:16
#3 0x443c88d4 in DB::IQueryTreeNode::getTreeHash() const build_docker/../src/Analyzer/IQueryTreeNode.cpp:186:26
#4 0x446b16e6 in DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>::QueryTreeNodeWithHash(DB::IQueryTreeNode const*) build_docker/../src/Analyzer/HashUtils.h:19:22
#5 0x446b16e6 in DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>* std::__1::construct_at[abi:v15000]<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, DB::IQueryTreeNode*, DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>*>(DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>*, DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__memory/construct_at.h:35:48
#6 0x446b16e6 in void std::__1::allocator_traits<std::__1::allocator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>>>::construct[abi:v15000]<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, DB::IQueryTreeNode*, void, void>(std::__1::allocator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>>&, DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>*, DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__memory/allocator_traits.h:298:9
#7 0x446b16e6 in std::__1::unique_ptr<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>, std::__1::__hash_node_destructor<std::__1::allocator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>>>> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__construct_node<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:2365:5
#8 0x446b16e6 in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__emplace_unique_impl<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:2053:25
#9 0x446aef02 in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__emplace_unique_extract_key[abi:v15000]<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&, std::__1::__extract_key_fail_tag) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1067:14
#10 0x446aef02 in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__emplace_unique[abi:v15000]<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1045:14
#11 0x446aef02 in std::__1::pair<std::__1::__hash_const_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::unordered_set<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::emplace[abi:v15000]<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/unordered_set:659:30
#12 0x446aef02 in DB::(anonymous namespace)::OrderByLimitByDuplicateEliminationVisitor::visitImpl(std::__1::shared_ptr<DB::IQueryTreeNode>&) build_docker/../src/Analyzer/Passes/OrderByLimitByDuplicateEliminationPass.cpp:40:67
#13 0x446aef02 in DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::OrderByLimitByDuplicateEliminationVisitor, false>::visit(std::__1::shared_ptr<DB::IQueryTreeNode>&) build_docker/../src/Analyzer/InDepthQueryTreeVisitor.h:52:22
#14 0x446ae3da in DB::OrderByLimitByDuplicateEliminationPass::run(std::__1::shared_ptr<DB::IQueryTreeNode>, std::__1::shared_ptr<DB::Context const>) build_docker/../src/Analyzer/Passes/OrderByLimitByDuplicateEliminationPass.cpp:76:13
#15 0x444e89e9 in DB::QueryTreePassManager::run(std::__1::shared_ptr<DB::IQueryTreeNode>) build_docker/../src/Analyzer/QueryTreePassManager.cpp:99:20
#16 0x4494092d in DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const> const&) build_docker/../src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:54:29
#17 0x4494092d in DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::__1::shared_ptr<DB::IAST> const&, DB::SelectQueryOptions const&, std::__1::shared_ptr<DB::Context const>) build_docker/../src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:66:18
#18 0x4432c747 in DB::InterpreterExplainQuery::executeImpl() build_docker/../src/Interpreters/InterpreterExplainQuery.cpp:426:48
#19 0x44329bd4 in DB::InterpreterExplainQuery::execute() build_docker/../src/Interpreters/InterpreterExplainQuery.cpp:87:20
#20 0x457706f7 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/../src/Interpreters/executeQuery.cpp:686:36
#21 0x45763bfd in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) build_docker/../src/Interpreters/executeQuery.cpp:1083:30
#22 0x48fe52b5 in DB::TCPHandler::runImpl() build_docker/../src/Server/TCPHandler.cpp:375:24
#23 0x49026afd in DB::TCPHandler::run() build_docker/../src/Server/TCPHandler.cpp:1920:9
#24 0x550d3edd in Poco::Net::TCPServerConnection::start() build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3
#25 0x550d526e in Poco::Net::TCPServerDispatcher::run() build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115:20
#26 0x558357cb in Poco::PooledThread::run() build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14
#27 0x55830d61 in Poco::(anonymous namespace)::RunnableHolder::run() build_docker/../contrib/poco/Foundation/src/Thread.cpp:55:11
#28 0x5582cb28 in Poco::ThreadImpl::runnableEntry(void*) build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27
#29 0x7faa30782608 in start_thread /build/glibc-SzIz7B/glibc-2.31/nptl/pthread_create.c:477:8
Uninitialized value was stored to memory at
#0 0x44650043 in SipHash::update(char const*, unsigned long) build_docker/../src/Common/SipHash.h:113:13
#1 0x44650043 in SipHash::update(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) build_docker/../src/Common/SipHash.h:154:9
#2 0x44650043 in DB::ColumnNode::updateTreeHashImpl(SipHash&) const build_docker/../src/Analyzer/ColumnNode.cpp:80:16
#3 0x443c88d4 in DB::IQueryTreeNode::getTreeHash() const build_docker/../src/Analyzer/IQueryTreeNode.cpp:186:26
#4 0x446b16e6 in DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>::QueryTreeNodeWithHash(DB::IQueryTreeNode const*) build_docker/../src/Analyzer/HashUtils.h:19:22
#5 0x446b16e6 in DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>* std::__1::construct_at[abi:v15000]<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, DB::IQueryTreeNode*, DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>*>(DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>*, DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__memory/construct_at.h:35:48
#6 0x446b16e6 in void std::__1::allocator_traits<std::__1::allocator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>>>::construct[abi:v15000]<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, DB::IQueryTreeNode*, void, void>(std::__1::allocator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>>&, DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>*, DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__memory/allocator_traits.h:298:9
#7 0x446b16e6 in std::__1::unique_ptr<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>, std::__1::__hash_node_destructor<std::__1::allocator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>>>> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__construct_node<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:2365:5
#8 0x446b16e6 in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__emplace_unique_impl<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:2053:25
#9 0x446aef02 in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__emplace_unique_extract_key[abi:v15000]<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&, std::__1::__extract_key_fail_tag) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1067:14
#10 0x446aef02 in std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__emplace_unique[abi:v15000]<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1045:14
#11 0x446aef02 in std::__1::pair<std::__1::__hash_const_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::unordered_set<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::emplace[abi:v15000]<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) build_docker/../contrib/llvm-project/libcxx/include/unordered_set:659:30
#12 0x446aef02 in DB::(anonymous namespace)::OrderByLimitByDuplicateEliminationVisitor::visitImpl(std::__1::shared_ptr<DB::IQueryTreeNode>&) build_docker/../src/Analyzer/Passes/OrderByLimitByDuplicateEliminationPass.cpp:40:67
#13 0x446aef02 in DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::OrderByLimitByDuplicateEliminationVisitor, false>::visit(std::__1::shared_ptr<DB::IQueryTreeNode>&) build_docker/../src/Analyzer/InDepthQueryTreeVisitor.h:52:22
#14 0x446ae3da in DB::OrderByLimitByDuplicateEliminationPass::run(std::__1::shared_ptr<DB::IQueryTreeNode>, std::__1::shared_ptr<DB::Context const>) build_docker/../src/Analyzer/Passes/OrderByLimitByDuplicateEliminationPass.cpp:76:13
#15 0x444e89e9 in DB::QueryTreePassManager::run(std::__1::shared_ptr<DB::IQueryTreeNode>) build_docker/../src/Analyzer/QueryTreePassManager.cpp:99:20
#16 0x4494092d in DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const> const&) build_docker/../src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:54:29
#17 0x4494092d in DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::__1::shared_ptr<DB::IAST> const&, DB::SelectQueryOptions const&, std::__1::shared_ptr<DB::Context const>) build_docker/../src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:66:18
#18 0x4432c747 in DB::InterpreterExplainQuery::executeImpl() build_docker/../src/Interpreters/InterpreterExplainQuery.cpp:426:48
#19 0x44329bd4 in DB::InterpreterExplainQuery::execute() build_docker/../src/Interpreters/InterpreterExplainQuery.cpp:87:20
#20 0x457706f7 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/../src/Interpreters/executeQuery.cpp:686:36
#21 0x45763bfd in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) build_docker/../src/Interpreters/executeQuery.cpp:1083:30
#22 0x48fe52b5 in DB::TCPHandler::runImpl() build_docker/../src/Server/TCPHandler.cpp:375:24
#23 0x49026afd in DB::TCPHandler::run() build_docker/../src/Server/TCPHandler.cpp:1920:9
#24 0x550d3edd in Poco::Net::TCPServerConnection::start() build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3
#25 0x550d526e in Poco::Net::TCPServerDispatcher::run() build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115:20
#26 0x558357cb in Poco::PooledThread::run() build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14
#27 0x55830d61 in Poco::(anonymous namespace)::RunnableHolder::run() build_docker/../contrib/poco/Foundation/src/Thread.cpp:55:11
#28 0x5582cb28 in Poco::ThreadImpl::runnableEntry(void*) build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27
#29 0x7faa30782608 in start_thread /build/glibc-SzIz7B/glibc-2.31/nptl/pthread_create.c:477:8
```
**How to reproduce**
```sql
set allow_experimental_analyzer=1;
EXPLAIN header = 1 SELECT * FROM (SELECT * FROM (SELECT * FROM numbers(3) ORDER BY number ASC) ORDER BY number DESC) ORDER BY number ASC;
```
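For context on what the failing pass is doing: `OrderByLimitByDuplicateEliminationPass` deduplicates `ORDER BY` elements by hashing each query-tree node and keeping only the first occurrence; the MSan report above says uninitialized bytes leak into that hash via `ColumnNode::updateTreeHashImpl`. A rough Python model of the dedupe logic itself (the node representation is illustrative, not the actual `IQueryTreeNode` API):

```python
import hashlib

def tree_hash(node):
    # Analogous to IQueryTreeNode::getTreeHash: hash the node's own state,
    # then fold in the hashes of its children.
    name, children = node
    h = hashlib.sha256(name.encode())
    for child in children:
        h.update(tree_hash(child))
    return h.digest()

def dedupe_order_by(nodes):
    # Keep only the first occurrence of each structurally-identical expression.
    seen, result = set(), []
    for node in nodes:
        digest = tree_hash(node)
        if digest not in seen:
            seen.add(digest)
            result.append(node)
    return result

order_by = [("number", []), ("number", []), ("toString", [("number", [])])]
print([name for name, _ in dedupe_order_by(order_by)])  # ['number', 'toString']
```

The dedupe itself is sound; the MSan finding is about the hash input, not the set logic.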
cc @kitaisreal

https://github.com/ClickHouse/ClickHouse/issues/44494 | https://github.com/ClickHouse/ClickHouse/pull/44491 | 6a0210fb0fb1a7d5dc6ac02ae4006419d706a9f7 | 3e2e35f54e8aa9225e1712ec85e73693d58e9b03 | "2022-12-21T16:40:41Z" | c++ | "2022-12-22T10:15:59Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44467 | ["src/AggregateFunctions/AggregateFunctionSparkbar.h", "tests/queries/0_stateless/02016_aggregation_spark_bar.sql"] | sparkbar aggregate function causes server to be killed with OOM

**Describe what's wrong**
Using the `sparkbar` aggregate function on tables that contain large UInt64 values causes the OOM (Out of Memory) killer to terminate the ClickHouse server, irrespective of any memory usage restrictions that are set.
**Does it reproduce on recent release?**
Yes. Any version `>=21.11` including the latest `22.12`. Also reproducible on ClickHouse.Cloud.
**How to reproduce**
```sql
DROP TABLE IF EXISTS test;
CREATE TABLE test (x UInt64, y UInt8) Engine=MergeTree ORDER BY tuple();
INSERT INTO test VALUES (18446744073709551615,255),(0,0),(0,0),(4036797895307271799,163);
SELECT sparkbar(9)(x,y) FROM test;
```
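One plausible mechanism for the blow-up, assuming `sparkbar` buckets x values proportionally over `max_x - min_x`: with `min_x = 0` and `max_x = 2^64 - 1` the range itself no longer fits in unsigned 64-bit arithmetic, so a naive bucket computation wraps around and the implementation can end up sizing its histogram from a garbage value. A Python illustration of the wrap (not the actual ClickHouse code):

```python
# Emulate unsigned 64-bit arithmetic to show the range computation wrapping.
MASK = (1 << 64) - 1

def bucket_index(x, min_x, max_x, width):
    """Hypothetical sparkbar-style bucketing: map x into [0, width)."""
    value_range = (max_x - min_x + 1) & MASK  # wraps to 0 for the full UInt64 range
    if value_range == 0:
        # A naive implementation dividing by (range / width) would now divide
        # by zero, or derive a huge bucket count instead of `width`.
        raise ZeroDivisionError("range wrapped around in UInt64 arithmetic")
    return ((x - min_x) & MASK) * width // value_range

min_x, max_x = 0, 18446744073709551615  # the values from the reproducer
try:
    bucket_index(4036797895307271799, min_x, max_x, 9)
except ZeroDivisionError as e:
    print("overflow:", e)
```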
**Expected behavior**
Memory restrictions should be enforced, and the server should not be killed by the OOM killer.
**Error message and/or stacktrace**
```
2022.12.20 01:48:30.983421 [ 1 ] {} <Fatal> Application: Child process was terminated by signal 9 (KILL). If it is not done by 'forcestop' command or manually, the possible cause is OOM Killer (see 'dmesg' and look at the '/var/log/kern.log' for the details).
```
| https://github.com/ClickHouse/ClickHouse/issues/44467 | https://github.com/ClickHouse/ClickHouse/pull/44489 | 04c9fcddcb89f8fb763c8b83d79dfd512f86bf99 | dad838859b37b0d043980a0a5c52519457b4d1a0 | "2022-12-20T21:14:45Z" | c++ | "2023-02-01T11:50:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44318 | ["tests/queries/0_stateless/02997_projections_formatting.reference", "tests/queries/0_stateless/02997_projections_formatting.sql"] | Bad formatting of projection with `ORDER BY` leads to creating of wrong projection

**Describe what's wrong**
If a projection has an `ORDER BY` clause with a column wrapped in a function, the query is formatted incorrectly: the `ORDER BY` contains the function's arguments instead of the function itself, so the table is created with a wrong projection.
Looks like it's because of [lines](https://github.com/ClickHouse/ClickHouse/blob/e3aa2d9d565374b581b46f2d4c825e76bd6b32f5/src/Parsers/ASTProjectionSelectQuery.cpp#L74-L88).
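The symptom is consistent with the formatter walking into the `ORDER BY` element's children (the function's arguments) instead of printing the element node itself. A toy Python model of that behavior, with hypothetical names standing in for the actual AST classes:

```python
class Node:
    def __init__(self, name, args=None):
        self.name = name
        self.args = args or []

    def format(self):
        if self.args:
            return f"{self.name}({', '.join(a.format() for a in self.args)})"
        return self.name

def format_order_by_buggy(expr):
    # Bug: iterates over the function's argument children, dropping the function.
    parts = expr.args if expr.args else [expr]
    return "ORDER BY " + ", ".join(p.format() for p in parts)

def format_order_by_fixed(expr):
    return "ORDER BY " + expr.format()

expr = Node("multiply", [Node("b"), Node("2")])
print(format_order_by_buggy(expr))  # ORDER BY b, 2   (what the issue shows)
print(format_order_by_fixed(expr))  # ORDER BY multiply(b, 2)
```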
```sql
:) CREATE TABLE t_proj(t DateTime, id UInt64, PROJECTION p (SELECT id, t ORDER BY toStartOfDay(t))) ENGINE = MergeTree ORDER BY id;
CREATE TABLE t_proj
(
`t` DateTime,
`id` UInt64,
PROJECTION p
(
SELECT
id,
t
ORDER BY t
)
)
ENGINE = MergeTree
ORDER BY id
Query id: 4565f522-5e28-4fcd-9bde-999a5e3998da
Ok.
0 rows in set. Elapsed: 0.009 sec.
:) SHOW CREATE TABLE t_proj
SHOW CREATE TABLE t_proj
Query id: 41c447ae-c6b8-412b-ad8e-f7d8ecab8110
┌─statement───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ CREATE TABLE default.t_proj
(
`t` DateTime,
`id` UInt64,
PROJECTION p
(
SELECT
id,
t
ORDER BY t
)
)
ENGINE = MergeTree
ORDER BY id
SETTINGS index_granularity = 8192 │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
Or even worse:
```sql
1 row in set. Elapsed: 0.001 sec.
:) CREATE TABLE t_proj_2 (a UInt32, b UInt32, PROJECTION p (SELECT a ORDER BY b * 2)) ENGINE = MergeTree ORDER BY a
CREATE TABLE t_proj_2
(
`a` UInt32,
`b` UInt32,
PROJECTION p
(
SELECT a
ORDER BY
b,
2
)
)
ENGINE = MergeTree
ORDER BY a
Query id: aa244060-38a3-44b3-a52f-eb4c119d61ec
Ok.
0 rows in set. Elapsed: 0.009 sec.
:) SHOW CREATE TABLE t_proj_2
SHOW CREATE TABLE t_proj_2
Query id: cd7a5f4b-82b8-4915-8fda-3295774036da
┌─statement────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ CREATE TABLE default.t_proj_2
(
`a` UInt32,
`b` UInt32,
PROJECTION p
(
SELECT a
ORDER BY
b,
2
)
)
ENGINE = MergeTree
ORDER BY a
SETTINGS index_granularity = 8192 │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
| https://github.com/ClickHouse/ClickHouse/issues/44318 | https://github.com/ClickHouse/ClickHouse/pull/60179 | b4c7e1d01f094eaef8284f957555927c9554df94 | c1754d3cd149b489ada4eedee4dfc7afcb4b8cec | "2022-12-16T15:51:11Z" | c++ | "2024-02-21T10:42:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44307 | ["tests/queries/0_stateless/02538_analyzer_create_table_as_select.reference", "tests/queries/0_stateless/02538_analyzer_create_table_as_select.sql"] | UBSan: ReadFromMergeTree.cpp, member call on null pointer of type 'DB::PlannerContext'

**Describe the bug**
[A link to the report](https://s3.amazonaws.com/clickhouse-test-reports/44176/bde3e43d3becaca7e61632aaf3292a11d4d27aa0/fuzzer_astfuzzerubsan//report.html)
Related to https://github.com/ClickHouse/ClickHouse/issues/42648
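The trap fires because `ReadFromMergeTree::selectRangesToRead` dereferences `query_info.planner_context` unconditionally, while on this query path (a scalar subquery executed during `CREATE TABLE ... AS SELECT`) the analyzer never set it. A minimal Python model of a guarded lookup that would avoid the null call (the field and function names are illustrative, not the actual ClickHouse API):

```python
class QueryInfo:
    def __init__(self, planner_context=None):
        # None models the legacy-interpreter path, where no analyzer
        # PlannerContext is ever attached to the query info.
        self.planner_context = planner_context

def table_expression_setting(query_info, key):
    ctx = query_info.planner_context
    if ctx is None:
        return None  # fall back instead of calling through a null pointer
    return ctx.get(key)

print(table_expression_setting(QueryInfo(), "max_added_blocks"))
print(table_expression_setting(QueryInfo({"max_added_blocks": 3}), "max_added_blocks"))
```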
```
(query_id: c7640145-331c-4d7b-9e50-c09097e4e8bd)
(query: CREATE TABLE test_01109_other_atomic.t3 ENGINE = MergeTree ORDER BY tuple() AS SELECT rowNumberInAllBlocks() + (SELECT max(((*, *).1).1) + 1 FROM (SELECT tuple(*) FROM t1 UNION ALL SELECT tuple(*) FROM t2)), * FROM (SELECT arrayJoin(['another', 'db']))) Received signal sanitizer trap (-3)
```
```
../src/Processors/QueryPlan/ReadFromMergeTree.cpp:958:78: runtime error: member call on null pointer of type 'DB::PlannerContext'
2022.12.16 00:59:06.388912 [ 358 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushing system log, 1682 entries to flush up to offset 3470
2022.12.16 00:59:06.393239 [ 358 ] {} <Trace> system.asynchronous_metric_log (1fca77d3-42cc-4bb6-8d12-53c2f0f4652d): Trying to reserve 1.00 MiB using storage policy from min volume index 0
2022.12.16 00:59:06.393284 [ 358 ] {} <Trace> DiskLocal: Reserved 1.00 MiB on local disk `default`, having unreserved 82.03 GiB.
2022.12.16 00:59:06.394192 [ 358 ] {} <Trace> MergedBlockOutputStream: filled checksums 202212_2_2_0 (state Temporary)
2022.12.16 00:59:06.394471 [ 358 ] {} <Trace> system.asynchronous_metric_log (1fca77d3-42cc-4bb6-8d12-53c2f0f4652d): Renaming temporary part tmp_insert_202212_2_2_0 to 202212_2_2_0 with tid (1, 1, 00000000-0000-0000-0000-000000000000).
2022.12.16 00:59:06.394699 [ 358 ] {} <Trace> SystemLog (system.asynchronous_metric_log): Flushed system log up to offset 3470
2022.12.16 00:59:07.000106 [ 367 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 1.10 GiB, peak 1.17 GiB, free memory in arenas 0.00 B, will set to 1.10 GiB (RSS), difference: 3.18 MiB
#0 0x2eef3820 in DB::ReadFromMergeTree::selectRangesToRead(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>, std::__1::shared_ptr<DB::PrewhereInfo> const&, DB::ActionDAGNodes const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const>, unsigned long, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, long>>>>, DB::MergeTreeData const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, bool, Poco::Logger*) build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:958:78
#1 0x2eef2eac in DB::ReadFromMergeTree::selectRangesToRead(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>) const build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:901:12
#2 0x2eef5b40 in DB::ReadFromMergeTree::getAnalysisResult() const build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:1206:67
#3 0x2eef5d9f in DB::ReadFromMergeTree::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:1215:19
#4 0x2eeaf8fc in DB::ISourceStep::updatePipeline(std::__1::vector<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>, std::__1::allocator<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) build_docker/../src/Processors/QueryPlan/ISourceStep.cpp:16:5
#5 0x2eedaefb in DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) build_docker/../src/Processors/QueryPlan/QueryPlan.cpp:187:47
#6 0x2d10bce4 in DB::InterpreterSelectWithUnionQuery::execute() build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:374:31
#7 0x2d50b9d9 in DB::ExecuteScalarSubqueriesMatcher::visit(DB::ASTSubquery const&, std::__1::shared_ptr<DB::IAST>&, DB::ExecuteScalarSubqueriesMatcher::Data&) build_docker/../src/Interpreters/ExecuteScalarSubqueriesVisitor.cpp:187:36
#8 0x2d50b015 in DB::ExecuteScalarSubqueriesMatcher::visit(std::__1::shared_ptr<DB::IAST>&, DB::ExecuteScalarSubqueriesMatcher::Data&) build_docker/../src/Interpreters/ExecuteScalarSubqueriesVisitor.cpp:66:9
#9 0x2d47dde6 in DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::doVisit(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:71:13
#10 0x2d47e1bd in void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitImplMain<false>(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:61:9
#11 0x2d47e1bd in void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitImpl<false>(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:51:13
#12 0x2d47e1bd in void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitChildren<false>(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:92:17
#13 0x2d50ea3b in DB::ExecuteScalarSubqueriesMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST>&, DB::ExecuteScalarSubqueriesMatcher::Data&) build_docker/../src/Interpreters/ExecuteScalarSubqueriesVisitor.cpp:317:23
#14 0x2d47dde6 in DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::doVisit(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:71:13
#15 0x2d47e1bd in void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitImplMain<false>(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:61:9
#16 0x2d47e1bd in void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitImpl<false>(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:51:13
#17 0x2d47e1bd in void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitChildren<false>(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:92:17
#18 0x2d47e1c8 in void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitImplMain<false>(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:64:13
#19 0x2d47e1c8 in void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitImpl<false>(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:51:13
#20 0x2d47e1c8 in void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitChildren<false>(std::__1::shared_ptr<DB::IAST>&) build_docker/../src/Interpreters/InDepthNodeVisitor.h:92:17
#21 0x2d46fad1 in DB::(anonymous namespace)::executeScalarSubqueries(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context const>, unsigned long, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, DB::Block, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, DB::Block>>>&, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, DB::Block, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, DB::Block>>>&, bool) build_docker/../src/Interpreters/TreeRewriter.cpp:598:64
#22 0x2d46a439 in DB::TreeRewriter::analyzeSelect(std::__1::shared_ptr<DB::IAST>&, DB::TreeRewriterResult&&, DB::SelectQueryOptions const&, std::__1::vector<DB::TableWithColumnNamesAndTypes, std::__1::allocator<DB::TableWithColumnNamesAndTypes>> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::TableJoin>) const build_docker/../src/Interpreters/TreeRewriter.cpp:1368:5
#23 0x2cd16075 in DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>)::$_2::operator()(bool) const build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:511:56
#24 0x2cd11894 in DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:648:5
#25 0x2cd0d3ae in DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:198:7
#26 0x2d10cc07 in std::__1::__unique_if<DB::InterpreterSelectQuery>::__unique_single std::__1::make_unique[abi:v15000]<DB::InterpreterSelectQuery, std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&>(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714:32
#27 0x2d10912d in DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:248:16
#28 0x2d106c30 in DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:150:13
#29 0x2d105462 in DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:40:7
#30 0x2ccefb72 in DB::InterpreterInsertQuery::execute() build_docker/../src/Interpreters/InterpreterInsertQuery.cpp:393:49
#31 0x2ca9f7bc in DB::InterpreterCreateQuery::fillTableIfNeeded(DB::ASTCreateQuery const&) build_docker/../src/Interpreters/InterpreterCreateQuery.cpp:1553:79
#32 0x2ca97422 in DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) build_docker/../src/Interpreters/InterpreterCreateQuery.cpp:1242:12
#33 0x2caa2a7c in DB::InterpreterCreateQuery::execute() build_docker/../src/Interpreters/InterpreterCreateQuery.cpp:1640:16
#34 0x2d56cc99 in DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/../src/Interpreters/executeQuery.cpp:686:36
#35 0x2d5683ab in DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) build_docker/../src/Interpreters/executeQuery.cpp:1083:30
#36 0x2e79f62f in DB::TCPHandler::runImpl() build_docker/../src/Server/TCPHandler.cpp:375:24
#37 0x2e7c0639 in DB::TCPHandler::run() build_docker/../src/Server/TCPHandler.cpp:1914:9
#38 0x2f7ffdeb in Poco::Net::TCPServerConnection::start() build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:43:3
#39 0x2f8002d9 in Poco::Net::TCPServerDispatcher::run() build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:115:20
#40 0x2f977ae6 in Poco::PooledThread::run() build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:199:14
#41 0x2f9755ce in Poco::ThreadImpl::runnableEntry(void*) build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:345:27
#42 0x7f95f8d83608 in start_thread /build/glibc-SzIz7B/glibc-2.31/nptl/pthread_create.c:477:8
#43 0x7f95f8ca8132 in __clone /build/glibc-SzIz7B/glibc-2.31/misc/../sysdeps/unix/sysv/linux/x86_64/clone.S:95
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../src/Processors/QueryPlan/ReadFromMergeTree.cpp:958:78 in
```
```
[ip-172-31-70-55] 2022.12.16 00:59:07.086236 [ 409 ] <Fatal> BaseDaemon: ########################################
[ip-172-31-70-55] 2022.12.16 00:59:07.086368 [ 409 ] <Fatal> BaseDaemon: (version 22.12.1.1, build id: A97767C43D7E2C8D239A0C4C6F16388C3AB9E180) (from thread 143) (query_id: c7640145-331c-4d7b-9e50-c09097e4e8bd) (query: CREATE TABLE test_01109_other_atomic.t3 ENGINE = MergeTree ORDER BY tuple() AS SELECT rowNumberInAllBlocks() + (SELECT max(((*, *).1).1) + 1 FROM (SELECT tuple(*) FROM t1 UNION ALL SELECT tuple(*) FROM t2)), * FROM (SELECT arrayJoin(['another', 'db']))) Received signal sanitizer trap (-3)
[ip-172-31-70-55] 2022.12.16 00:59:07.086404 [ 409 ] <Fatal> BaseDaemon: Sanitizer trap.
[ip-172-31-70-55] 2022.12.16 00:59:07.086470 [ 409 ] <Fatal> BaseDaemon: Stack trace: 0x21272923 0x215356d6 0x16a2f176 0x16a3f333 0x2eef3821 0x2eef2ead 0x2eef5b41 0x2eef5da0 0x2eeaf8fd 0x2eedaefc 0x2d10bce5 0x2d50b9da 0x2d50b016 0x2d47dde7 0x2d47e1be 0x2d50ea3c 0x2d47dde7 0x2d47e1be 0x2d47e1c9 0x2d46fad2 0x2d46a43a 0x2cd16076 0x2cd11895 0x2cd0d3af 0x2d10cc08 0x2d10912e 0x2d106c31 0x2d105463 0x2ccefb73 0x2ca9f7bd 0x2ca97423 0x2caa2a7d 0x2d56cc9a 0x2d5683ac 0x2e79f630 0x2e7c063a 0x2f7ffdec 0x2f8002da 0x2f977ae7 0x2f9755cf 0x7f95f8d83609 0x7f95f8ca8133
[ip-172-31-70-55] 2022.12.16 00:59:07.100053 [ 409 ] <Fatal> BaseDaemon: 0.1. inlined from ./build_docker/../src/Common/StackTrace.cpp:334: StackTrace::tryCapture()
[ip-172-31-70-55] 2022.12.16 00:59:07.100079 [ 409 ] <Fatal> BaseDaemon: 0. ./build_docker/../src/Common/StackTrace.cpp:295: StackTrace::StackTrace() @ 0x21272923 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:07.124440 [ 409 ] <Fatal> BaseDaemon: 1. ./build_docker/../src/Daemon/BaseDaemon.cpp:0: sanitizerDeathCallback() @ 0x215356d6 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:08.256307 [ 409 ] <Fatal> BaseDaemon: 2. __sanitizer::Die() @ 0x16a2f176 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.345057 [ 409 ] <Fatal> BaseDaemon: 3. ? @ 0x16a3f333 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.448946 [ 409 ] <Fatal> BaseDaemon: 4. ./build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:946: DB::ReadFromMergeTree::selectRangesToRead(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>, std::__1::shared_ptr<DB::PrewhereInfo> const&, DB::ActionDAGNodes const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, std::__1::shared_ptr<DB::Context const>, unsigned long, std::__1::shared_ptr<std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, long, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, long>>>>, DB::MergeTreeData const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, bool, Poco::Logger*) @ 0x2eef3821 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.549860 [ 409 ] <Fatal> BaseDaemon: 5. ./build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:0: DB::ReadFromMergeTree::selectRangesToRead(std::__1::vector<std::__1::shared_ptr<DB::IMergeTreeDataPart const>, std::__1::allocator<std::__1::shared_ptr<DB::IMergeTreeDataPart const>>>) const @ 0x2eef2ead in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.652897 [ 409 ] <Fatal> BaseDaemon: 6. ./build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:1206: DB::ReadFromMergeTree::getAnalysisResult() const @ 0x2eef5b41 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.745158 [ 409 ] <Fatal> BaseDaemon: 7. ./build_docker/../src/Processors/QueryPlan/ReadFromMergeTree.cpp:1216: DB::ReadFromMergeTree::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) @ 0x2eef5da0 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.760151 [ 409 ] <Fatal> BaseDaemon: 8. ./build_docker/../src/Processors/QueryPlan/ISourceStep.cpp:0: DB::ISourceStep::updatePipeline(std::__1::vector<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>, std::__1::allocator<std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) @ 0x2eeaf8fd in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.788520 [ 409 ] <Fatal> BaseDaemon: 9.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:296: std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>::release[abi:v15000]()
[ip-172-31-70-55] 2022.12.16 00:59:09.788568 [ 409 ] <Fatal> BaseDaemon: 9.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:225: std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>::operator=[abi:v15000](std::__1::unique_ptr<DB::QueryPipelineBuilder, std::__1::default_delete<DB::QueryPipelineBuilder>>&&)
[ip-172-31-70-55] 2022.12.16 00:59:09.788613 [ 409 ] <Fatal> BaseDaemon: 9. ./build_docker/../src/Processors/QueryPlan/QueryPlan.cpp:187: DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x2eedaefc in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.831048 [ 409 ] <Fatal> BaseDaemon: 10. ./build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:374: DB::InterpreterSelectWithUnionQuery::execute() @ 0x2d10bce5 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.856576 [ 409 ] <Fatal> BaseDaemon: 11. ./build_docker/../src/Interpreters/ExecuteScalarSubqueriesVisitor.cpp:189: DB::ExecuteScalarSubqueriesMatcher::visit(DB::ASTSubquery const&, std::__1::shared_ptr<DB::IAST>&, DB::ExecuteScalarSubqueriesMatcher::Data&) @ 0x2d50b9da in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.881050 [ 409 ] <Fatal> BaseDaemon: 12.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:815: std::__1::shared_ptr<DB::IAST>::operator->[abi:v15000]() const
[ip-172-31-70-55] 2022.12.16 00:59:09.881094 [ 409 ] <Fatal> BaseDaemon: 12. ./build_docker/../src/Interpreters/ExecuteScalarSubqueriesVisitor.cpp:67: DB::ExecuteScalarSubqueriesMatcher::visit(std::__1::shared_ptr<DB::IAST>&, DB::ExecuteScalarSubqueriesMatcher::Data&) @ 0x2d50b016 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:09.953215 [ 409 ] <Fatal> BaseDaemon: 13. ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:78: DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::doVisit(std::__1::shared_ptr<DB::IAST>&) @ 0x2d47dde7 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.024890 [ 409 ] <Fatal> BaseDaemon: 14.1. inlined from ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:64: void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitImplMain<false>(std::__1::shared_ptr<DB::IAST>&)
[ip-172-31-70-55] 2022.12.16 00:59:10.024935 [ 409 ] <Fatal> BaseDaemon: 14.2. inlined from ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:51: void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitImpl<false>(std::__1::shared_ptr<DB::IAST>&)
[ip-172-31-70-55] 2022.12.16 00:59:10.024978 [ 409 ] <Fatal> BaseDaemon: 14. ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:92: void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitChildren<false>(std::__1::shared_ptr<DB::IAST>&) @ 0x2d47e1be in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.050701 [ 409 ] <Fatal> BaseDaemon: 15.1. inlined from ./build_docker/../src/Interpreters/ExecuteScalarSubqueriesVisitor.cpp:317: std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>**>::operator++[abi:v15000]()
[ip-172-31-70-55] 2022.12.16 00:59:10.050740 [ 409 ] <Fatal> BaseDaemon: 15. ./build_docker/../src/Interpreters/ExecuteScalarSubqueriesVisitor.cpp:316: DB::ExecuteScalarSubqueriesMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST>&, DB::ExecuteScalarSubqueriesMatcher::Data&) @ 0x2d50ea3c in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.122500 [ 409 ] <Fatal> BaseDaemon: 16. ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:78: DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::doVisit(std::__1::shared_ptr<DB::IAST>&) @ 0x2d47dde7 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.194225 [ 409 ] <Fatal> BaseDaemon: 17.1. inlined from ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:64: void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitImplMain<false>(std::__1::shared_ptr<DB::IAST>&)
[ip-172-31-70-55] 2022.12.16 00:59:10.194274 [ 409 ] <Fatal> BaseDaemon: 17.2. inlined from ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:51: void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitImpl<false>(std::__1::shared_ptr<DB::IAST>&)
[ip-172-31-70-55] 2022.12.16 00:59:10.194327 [ 409 ] <Fatal> BaseDaemon: 17. ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:92: void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitChildren<false>(std::__1::shared_ptr<DB::IAST>&) @ 0x2d47e1be in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.265782 [ 409 ] <Fatal> BaseDaemon: 18.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__iterator/wrap_iter.h:100: std::__1::__wrap_iter<std::__1::shared_ptr<DB::IAST>*>::operator++[abi:v15000]()
[ip-172-31-70-55] 2022.12.16 00:59:10.265828 [ 409 ] <Fatal> BaseDaemon: 18. ./build_docker/../src/Interpreters/InDepthNodeVisitor.h:83: void DB::InDepthNodeVisitor<DB::ExecuteScalarSubqueriesMatcher, true, false, std::__1::shared_ptr<DB::IAST>>::visitChildren<false>(std::__1::shared_ptr<DB::IAST>&) @ 0x2d47e1c9 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.345535 [ 409 ] <Fatal> BaseDaemon: 19. ./build_docker/../src/Interpreters/TreeRewriter.cpp:0: DB::(anonymous namespace)::executeScalarSubqueries(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context const>, unsigned long, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, DB::Block, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, DB::Block>>>&, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, DB::Block, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, DB::Block>>>&, bool) @ 0x2d46fad2 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.406636 [ 409 ] <Fatal> BaseDaemon: 20. ./build_docker/../src/Interpreters/TreeRewriter.cpp:0: DB::TreeRewriter::analyzeSelect(std::__1::shared_ptr<DB::IAST>&, DB::TreeRewriterResult&&, DB::SelectQueryOptions const&, std::__1::vector<DB::TableWithColumnNamesAndTypes, std::__1::allocator<DB::TableWithColumnNamesAndTypes>> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::TableJoin>) const @ 0x2d46a43a in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.517054 [ 409 ] <Fatal> BaseDaemon: 21. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:511: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>)::$_2::operator()(bool) const @ 0x2cd16076 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.625018 [ 409 ] <Fatal> BaseDaemon: 22. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:0: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) @ 0x2cd11895 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.732558 [ 409 ] <Fatal> BaseDaemon: 23. ./build_docker/../src/Interpreters/InterpreterSelectQuery.cpp:0: DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x2cd0d3af in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.778681 [ 409 ] <Fatal> BaseDaemon: 24. ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714: std::__1::__unique_if<DB::InterpreterSelectQuery>::__unique_single std::__1::make_unique[abi:v15000]<DB::InterpreterSelectQuery, std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&>(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x2d10cc08 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.816644 [ 409 ] <Fatal> BaseDaemon: 25.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/compressed_pair.h:40: __compressed_pair_elem<DB::InterpreterSelectQuery *, void>
[ip-172-31-70-55] 2022.12.16 00:59:10.816681 [ 409 ] <Fatal> BaseDaemon: 25.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/compressed_pair.h:108: __compressed_pair<DB::InterpreterSelectQuery *, std::__1::default_delete<DB::InterpreterSelectQuery> >
[ip-172-31-70-55] 2022.12.16 00:59:10.816735 [ 409 ] <Fatal> BaseDaemon: 25.3. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:211: unique_ptr<DB::InterpreterSelectQuery, std::__1::default_delete<DB::InterpreterSelectQuery>, void, void>
[ip-172-31-70-55] 2022.12.16 00:59:10.816772 [ 409 ] <Fatal> BaseDaemon: 25. ./build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:248: DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x2d10912e in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.853670 [ 409 ] <Fatal> BaseDaemon: 26. ./build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:0: DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x2d106c31 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.890272 [ 409 ] <Fatal> BaseDaemon: 27. ./build_docker/../src/Interpreters/InterpreterSelectWithUnionQuery.cpp:0: DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) @ 0x2d105463 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:10.928584 [ 409 ] <Fatal> BaseDaemon: 28. ./build_docker/../src/Interpreters/InterpreterInsertQuery.cpp:0: DB::InterpreterInsertQuery::execute() @ 0x2ccefb73 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.012585 [ 409 ] <Fatal> BaseDaemon: 29.1. inlined from ./build_docker/../src/Interpreters/InterpreterInsertQuery.h:19: ~InterpreterInsertQuery
[ip-172-31-70-55] 2022.12.16 00:59:11.012621 [ 409 ] <Fatal> BaseDaemon: 29. ./build_docker/../src/Interpreters/InterpreterCreateQuery.cpp:1552: DB::InterpreterCreateQuery::fillTableIfNeeded(DB::ASTCreateQuery const&) @ 0x2ca9f7bd in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.091705 [ 409 ] <Fatal> BaseDaemon: 30.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1473: ~__hash_table
[ip-172-31-70-55] 2022.12.16 00:59:11.091744 [ 409 ] <Fatal> BaseDaemon: 30.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/unordered_set:615: ~unordered_set
[ip-172-31-70-55] 2022.12.16 00:59:11.091793 [ 409 ] <Fatal> BaseDaemon: 30. ./build_docker/../src/Interpreters/InterpreterCreateQuery.cpp:1243: DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x2ca97423 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.176402 [ 409 ] <Fatal> BaseDaemon: 31. ./build_docker/../src/Interpreters/InterpreterCreateQuery.cpp:1641: DB::InterpreterCreateQuery::execute() @ 0x2caa2a7d in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.235565 [ 409 ] <Fatal> BaseDaemon: 32. ./build_docker/../src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x2d56cc9a in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.298492 [ 409 ] <Fatal> BaseDaemon: 33. ./build_docker/../src/Interpreters/executeQuery.cpp:1083: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x2d5683ac in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.352210 [ 409 ] <Fatal> BaseDaemon: 34. ./build_docker/../src/Server/TCPHandler.cpp:375: DB::TCPHandler::runImpl() @ 0x2e79f630 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.417557 [ 409 ] <Fatal> BaseDaemon: 35. ./build_docker/../src/Server/TCPHandler.cpp:1916: DB::TCPHandler::run() @ 0x2e7c063a in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.423085 [ 409 ] <Fatal> BaseDaemon: 36. ./build_docker/../contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x2f7ffdec in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.430457 [ 409 ] <Fatal> BaseDaemon: 37.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: std::__1::default_delete<Poco::Net::TCPServerConnection>::operator()[abi:v15000](Poco::Net::TCPServerConnection*) const
[ip-172-31-70-55] 2022.12.16 00:59:11.430485 [ 409 ] <Fatal> BaseDaemon: 37.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:305: std::__1::unique_ptr<Poco::Net::TCPServerConnection, std::__1::default_delete<Poco::Net::TCPServerConnection>>::reset[abi:v15000](Poco::Net::TCPServerConnection*)
[ip-172-31-70-55] 2022.12.16 00:59:11.430520 [ 409 ] <Fatal> BaseDaemon: 37.3. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:259: ~unique_ptr
[ip-172-31-70-55] 2022.12.16 00:59:11.430557 [ 409 ] <Fatal> BaseDaemon: 37. ./build_docker/../contrib/poco/Net/src/TCPServerDispatcher.cpp:116: Poco::Net::TCPServerDispatcher::run() @ 0x2f8002da in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.438806 [ 409 ] <Fatal> BaseDaemon: 38. ./build_docker/../contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x2f977ae7 in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.446188 [ 409 ] <Fatal> BaseDaemon: 39.1. inlined from ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::assign(Poco::Runnable*)
[ip-172-31-70-55] 2022.12.16 00:59:11.446213 [ 409 ] <Fatal> BaseDaemon: 39.2. inlined from ./build_docker/../contrib/poco/Foundation/include/Poco/SharedPtr.h:208: Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>::operator=(Poco::Runnable*)
[ip-172-31-70-55] 2022.12.16 00:59:11.446245 [ 409 ] <Fatal> BaseDaemon: 39. ./build_docker/../contrib/poco/Foundation/src/Thread_POSIX.cpp:360: Poco::ThreadImpl::runnableEntry(void*) @ 0x2f9755cf in /workspace/clickhouse
[ip-172-31-70-55] 2022.12.16 00:59:11.446285 [ 409 ] <Fatal> BaseDaemon: 40. ? @ 0x7f95f8d83609 in ?
[ip-172-31-70-55] 2022.12.16 00:59:11.446328 [ 409 ] <Fatal> BaseDaemon: 41. clone @ 0x7f95f8ca8133 in ?
```
https://github.com/ClickHouse/ClickHouse/issues/44307 | https://github.com/ClickHouse/ClickHouse/pull/45533 | 297516a08470a4c33bf5dd80f908b3d48d953c2c | eef49fa3edba8eb59325fbad3ad944c7bb9450be | "2022-12-16T11:10:28Z" | c++ | "2023-01-26T11:10:09Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,258 | ["src/Analyzer/SortNode.cpp", "tests/queries/0_stateless/02513_analyzer_sort_msan.reference", "tests/queries/0_stateless/02513_analyzer_sort_msan.sql"] | Analyzer: use-of-uninitialized-value in SortNode/SipHash::update

https://s3.amazonaws.com/clickhouse-test-reports/41976/ea5b06023a79dceccae70ad5e3954129050af5a2/fuzzer_astfuzzermsan//report.html
```
2022.12.14 18:56:32.134997 [ 453 ] {} <Fatal> BaseDaemon: ########################################
2022.12.14 18:56:32.135719 [ 453 ] {} <Fatal> BaseDaemon: (version 22.12.1.1, build id: 149BE79908CE55572FF948EECD3EC781AA69F91A) (from thread 165) (query_id: 73895d08-bd51-4b0f-8a03-4af3a551cc0d) (query: SELECT * FROM (SELECT * FROM test_fetch ORDER BY a ASC, b ASC LIMIT 1, 3) ORDER BY a ASC, b ASC) Received signal sanitizer trap (-3)
2022.12.14 18:56:32.135994 [ 453 ] {} <Fatal> BaseDaemon: Sanitizer trap.
2022.12.14 18:56:32.136328 [ 453 ] {} <Fatal> BaseDaemon: Stack trace: 0x28c42259 0x294e3a9a 0xc272056 0xc286bf3 0x447aca9e 0x447a9bc3 0x447a909b 0x445e29ea 0x458016ce 0x447f2d4c 0x466262c4 0x4661d0de 0x49ee6bb6 0x49f2837e 0x55faef9e 0x55fb032f 0x5671088c 0x5670be22 0x56707be9 0x7f29ec00e609 0x7f29ebf33133
2022.12.14 18:56:32.268904 [ 453 ] {} <Fatal> BaseDaemon: 0.1. inlined from ./build_docker/../src/Common/StackTrace.cpp:334: StackTrace::tryCapture()
2022.12.14 18:56:32.269152 [ 453 ] {} <Fatal> BaseDaemon: 0. ./build_docker/../src/Common/StackTrace.cpp:295: StackTrace::StackTrace() @ 0x28c42259 in /workspace/clickhouse
2022.12.14 18:56:32.530003 [ 453 ] {} <Fatal> BaseDaemon: 1. ./build_docker/../src/Daemon/BaseDaemon.cpp:431: sanitizerDeathCallback() @ 0x294e3a9a in /workspace/clickhouse
2022.12.14 18:56:38.174160 [ 453 ] {} <Fatal> BaseDaemon: 2. __sanitizer::Die() @ 0xc272056 in /workspace/clickhouse
2022.12.14 18:56:43.596037 [ 453 ] {} <Fatal> BaseDaemon: 3. ? @ 0xc286bf3 in /workspace/clickhouse
2022.12.14 18:56:43.653110 [ 453 ] {} <Fatal> BaseDaemon: 4.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1822: std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__node_insert_unique_perform[abi:v15000](std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*)
2022.12.14 18:56:43.653293 [ 453 ] {} <Fatal> BaseDaemon: 4.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1852: std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__node_insert_unique(std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*)
2022.12.14 18:56:43.653360 [ 453 ] {} <Fatal> BaseDaemon: 4. ./build_docker/../contrib/llvm-project/libcxx/include/__hash_table:2054: std::__1::pair<std::__1::__hash_iterator<std::__1::__hash_node<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, void*>*>, bool> std::__1::__hash_table<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>, std::__1::hash<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::equal_to<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>, std::__1::allocator<DB::QueryTreeNodeWithHash<DB::IQueryTreeNode const*>>>::__emplace_unique_impl<DB::IQueryTreeNode*>(DB::IQueryTreeNode*&&) @ 0x447aca9e in /workspace/clickhouse
2022.12.14 18:56:43.702494 [ 453 ] {} <Fatal> BaseDaemon: 5.1. inlined from ./build_docker/../src/Analyzer/Passes/OrderByLimitByDuplicateEliminationPass.cpp:0: DB::(anonymous namespace)::OrderByLimitByDuplicateEliminationVisitor::visitImpl(std::__1::shared_ptr<DB::IQueryTreeNode>&)
2022.12.14 18:56:43.702618 [ 453 ] {} <Fatal> BaseDaemon: 5. ./build_docker/../src/Analyzer/InDepthQueryTreeVisitor.h:52: DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::OrderByLimitByDuplicateEliminationVisitor, false>::visit(std::__1::shared_ptr<DB::IQueryTreeNode>&) @ 0x447a9bc3 in /workspace/clickhouse
2022.12.14 18:56:43.748406 [ 453 ] {} <Fatal> BaseDaemon: 6.1. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/__hash_table:1473: ~__hash_table
2022.12.14 18:56:43.748522 [ 453 ] {} <Fatal> BaseDaemon: 6.2. inlined from ./build_docker/../contrib/llvm-project/libcxx/include/unordered_set:615: ~unordered_set
2022.12.14 18:56:43.748593 [ 453 ] {} <Fatal> BaseDaemon: 6.3. inlined from ./build_docker/../src/Analyzer/Passes/OrderByLimitByDuplicateEliminationPass.cpp:14: ~OrderByLimitByDuplicateEliminationVisitor
2022.12.14 18:56:43.748656 [ 453 ] {} <Fatal> BaseDaemon: 6. ./build_docker/../src/Analyzer/Passes/OrderByLimitByDuplicateEliminationPass.cpp:77: DB::OrderByLimitByDuplicateEliminationPass::run(std::__1::shared_ptr<DB::IQueryTreeNode>, std::__1::shared_ptr<DB::Context const>) @ 0x447a909b in /workspace/clickhouse
2022.12.14 18:56:43.820936 [ 453 ] {} <Fatal> BaseDaemon: 7. ./build_docker/../src/Analyzer/QueryTreePassManager.cpp:97: DB::QueryTreePassManager::run(std::__1::shared_ptr<DB::IQueryTreeNode>) @ 0x445e29ea in /workspace/clickhouse
2022.12.14 18:56:43.939236 [ 453 ] {} <Fatal> BaseDaemon: 8.1. inlined from ./build_docker/../src/Analyzer/QueryTreePassManager.h:13: ~QueryTreePassManager
2022.12.14 18:56:43.939399 [ 453 ] {} <Fatal> BaseDaemon: 8.2. inlined from ./build_docker/../src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:57: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const> const&)
2022.12.14 18:56:43.939482 [ 453 ] {} <Fatal> BaseDaemon: 8. ./build_docker/../src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:66: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::__1::shared_ptr<DB::IAST> const&, DB::SelectQueryOptions const&, std::__1::shared_ptr<DB::Context const>) @ 0x458016ce in /workspace/clickhouse
2022.12.14 18:56:44.063399 [ 453 ] {} <Fatal> BaseDaemon: 9. ./build_docker/../src/Interpreters/InterpreterFactory.cpp:0: DB::InterpreterFactory::get(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x447f2d4c in /workspace/clickhouse
2022.12.14 18:56:44.722138 [ 453 ] {} <Fatal> BaseDaemon: 10. ./build_docker/../src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x466262c4 in /workspace/clickhouse
2022.12.14 18:56:45.438761 [ 453 ] {} <Fatal> BaseDaemon: 11. ./build_docker/../src/Interpreters/executeQuery.cpp:1083: DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x4661d0de in /workspace/clickhouse
``` | https://github.com/ClickHouse/ClickHouse/issues/44258 | https://github.com/ClickHouse/ClickHouse/pull/44491 | 6a0210fb0fb1a7d5dc6ac02ae4006419d706a9f7 | 3e2e35f54e8aa9225e1712ec85e73693d58e9b03 | "2022-12-15T09:10:45Z" | c++ | "2022-12-22T10:15:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,254 | ["docs/en/sql-reference/functions/date-time-functions.md", "docs/ru/sql-reference/functions/date-time-functions.md", "src/Common/DateLUTImpl.h", "src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/00921_datetime64_compatibility_long.reference", "tests/queries/0_stateless/01592_toUnixTimestamp_Date.reference", "tests/queries/0_stateless/01592_toUnixTimestamp_Date.sql"] | Function toUnixTimestamp supports Date/Date32 argument, like spark to_unix_timestamp does. | > (you don't have to strictly follow this form)
**Use case**
> A clear and concise description of what the intended usage scenario is.
```
SELECT toUnixTimestamp(today())
Query id: c349c73a-5677-4709-8765-dcc3b08c8e92
0 rows in set. Elapsed: 0.031 sec.
Received exception from server (version 22.12.1):
Code: 44. DB::Exception: Received from localhost:9000. DB::Exception: Illegal type Date of first argument of function toUnixTimestamp: While processing toUnixTimestamp(today()). (ILLEGAL_COLUMN)
```
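As a point of comparison, a sketch of a workaround that works on current releases: convert the `Date` to `DateTime` first (note that the result then depends on the session time zone).

```sql
-- Workaround: convert Date to DateTime before calling toUnixTimestamp.
-- The resulting timestamp depends on the session time zone.
SELECT toUnixTimestamp(toDateTime(today()));

-- Requested behavior: accept Date/Date32 directly,
-- like Spark's to_unix_timestamp does.
SELECT toUnixTimestamp(today());  -- currently raises ILLEGAL_COLUMN
```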
**Describe the solution you'd like**
> A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
> A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
> Add any other context or screenshots about the feature request here.
| https://github.com/ClickHouse/ClickHouse/issues/44254 | https://github.com/ClickHouse/ClickHouse/pull/49989 | a4fe3fbb1f288b4e066eb3781b2c7b9e238a4aa3 | f4c73e94d21c6de0b1af7da3c42c2db6bf97fc73 | "2022-12-15T07:17:58Z" | c++ | "2023-05-23T06:55:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,251 | ["src/Storages/MergeTree/MergeTreeBackgroundExecutor.cpp", "src/Storages/MergeTree/MergeTreeBackgroundExecutor.h", "src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp", "src/Storages/MergeTree/MergeTreeDataMergerMutator.h", "src/Storages/StorageMergeTree.cpp", "src/Storages/StorageReplicatedMergeTree.cpp"] | Bug when increasing `background_pool_size` | **Describe the unexpected behaviour**
see steps below
**How to reproduce**
- decrease `background_pool_size` to 3 (relevantly decreasing `number_of_free_entries_in_pool_to_execute_mutation`, `number_of_free_entries_in_pool_to_lower_max_size_of_merge`)
- restart server
- increase `background_pool_size` to 8
- SYSTEM RELOAD CONFIG
- see error log below
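A sketch of the config fragment behind the steps above (the file path and exact values are illustrative); after a restart, raising `background_pool_size` here and running `SYSTEM RELOAD CONFIG` triggers the error:

```xml
<!-- /etc/clickhouse-server/config.d/pools.xml (illustrative path) -->
<clickhouse>
    <!-- step 1: start with a small pool, then restart the server -->
    <background_pool_size>3</background_pool_size>
    <merge_tree>
        <number_of_free_entries_in_pool_to_execute_mutation>2</number_of_free_entries_in_pool_to_execute_mutation>
        <number_of_free_entries_in_pool_to_lower_max_size_of_merge>2</number_of_free_entries_in_pool_to_lower_max_size_of_merge>
    </merge_tree>
</clickhouse>
```

Step 3 then changes `background_pool_size` to 8 in the same file and reloads the config without a restart.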
* Which ClickHouse server version to use
22.9
**Expected behavior**
`background_pool_size` be increased with no errors in log
**Error message and/or stacktrace**
```
2022.12.15 11:01:11.335190 [ 271 ] {} <Error> business.rdp_log_local: ReplicatedMergeTreeQueue::SelectedEntryPtr DB::StorageReplicatedMergeTree::selectQueueEntry(): Code: 49. DB::Exception: Logical error: invalid argument passed to getMaxSourcePartsSize: scheduled_tasks_count = 4 > max_count = 3. (LOGICAL_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa3dbe3a in /usr/bin/clickhouse
1. DB::Exception::Exception<unsigned long&, unsigned long&>(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<unsigned long&>::type, fmt::v8::type_identity<unsigned long&>::type>, unsigned long&, unsigned long&) @ 0xa42f7cc in /usr/bin/clickhouse
2. DB::MergeTreeDataMergerMutator::getMaxSourcePartsSizeForMerge(unsigned long, unsigned long) const @ 0x158dc048 in /usr/bin/clickhouse
3. DB::ReplicatedMergeTreeQueue::shouldExecuteLogEntry(DB::ReplicatedMergeTreeLogEntry const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >&, DB::MergeTreeDataMergerMutator&, DB::MergeTreeData&, std::__1::unique_lock<std::__1::mutex>&) const @ 0x15a8daed in /usr/bin/clickhouse
4. DB::ReplicatedMergeTreeQueue::selectEntryToProcess(DB::MergeTreeDataMergerMutator&, DB::MergeTreeData&) @ 0x15a90a80 in /usr/bin/clickhouse
5. DB::StorageReplicatedMergeTree::selectQueueEntry() @ 0x1553eaea in /usr/bin/clickhouse
6. DB::StorageReplicatedMergeTree::scheduleDataProcessingJob(DB::BackgroundJobsAssignee&) @ 0x1553f91f in /usr/bin/clickhouse
7. DB::BackgroundJobsAssignee::threadFunc() @ 0x15781bc7 in /usr/bin/clickhouse
8. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x13f0bd38 in /usr/bin/clickhouse
9. DB::BackgroundSchedulePool::threadFunction() @ 0x13f0e616 in /usr/bin/clickhouse
10. ? @ 0x13f0f34c in /usr/bin/clickhouse
11. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa49fb08 in /usr/bin/clickhouse
12. ? @ 0xa4a2d1d in /usr/bin/clickhouse
13. ? @ 0x7eff93ec1609 in ?
14. clone @ 0x7eff93de6133 in ?
(version 22.9.1.4)
2022.12.15 11:01:11.369129 [ 251 ] {} <Error> void DB::BackgroundJobsAssignee::threadFunc(): Code: 49. DB::Exception: Logical error: invalid argument passed to getMaxSourcePartsSize: scheduled_tasks_count = 4 > max_count = 3. (LOGICAL_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xa3dbe3a in /usr/bin/clickhouse
1. DB::Exception::Exception<unsigned long&, unsigned long&>(int, fmt::v8::basic_format_string<char, fmt::v8::type_identity<unsigned long&>::type, fmt::v8::type_identity<unsigned long&>::type>, unsigned long&, unsigned long&) @ 0xa42f7cc in /usr/bin/clickhouse
2. DB::MergeTreeDataMergerMutator::getMaxSourcePartsSizeForMerge(unsigned long, unsigned long) const @ 0x158dc048 in /usr/bin/clickhouse
3. DB::StorageMergeTree::selectPartsToMerge(std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >*, std::__1::shared_ptr<DB::RWLockImpl::LockHolderImpl>&, std::__1::unique_lock<std::__1::mutex>&, std::__1::shared_ptr<DB::MergeTreeTransaction> const&, bool, DB::SelectPartsDecision*) @ 0x155fafca in /usr/bin/clickhouse
4. DB::StorageMergeTree::scheduleDataProcessingJob(DB::BackgroundJobsAssignee&) @ 0x155fecee in /usr/bin/clickhouse
5. DB::BackgroundJobsAssignee::threadFunc() @ 0x15781bc7 in /usr/bin/clickhouse
6. DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x13f0bd38 in /usr/bin/clickhouse
7. DB::BackgroundSchedulePool::threadFunction() @ 0x13f0e616 in /usr/bin/clickhouse
8. ? @ 0x13f0f34c in /usr/bin/clickhouse
9. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xa49fb08 in /usr/bin/clickhouse
10. ? @ 0xa4a2d1d in /usr/bin/clickhouse
11. ? @ 0x7eff93ec1609 in ?
12. clone @ 0x7eff93de6133 in ?
(version 22.9.1.4)
```
After inspecting the source code for a while, the problem may reside in `src/Storages/StorageReplicatedMergeTree.h`, where the `max_tasks_count` of `merger_mutator` is not reconfigured.
| https://github.com/ClickHouse/ClickHouse/issues/44251 | https://github.com/ClickHouse/ClickHouse/pull/44436 | 5c860133d6d91260cccf28a533e08d667cf07954 | a32fab90d50b2f946893c5db18d1ace238009d34 | "2022-12-15T03:59:19Z" | c++ | "2022-12-22T22:48:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,240 | ["src/Dictionaries/NullDictionarySource.cpp", "src/Dictionaries/NullDictionarySource.h", "src/Dictionaries/registerDictionaries.cpp", "tests/queries/0_stateless/02514_null_dictionary_source.reference", "tests/queries/0_stateless/02514_null_dictionary_source.sql"] | Empty source for external dictionaries | **Feature**
Allow creating empty dictionaries with DDL queries.
**Use case**
This feature will allow building queries and views that reference an empty dictionary with `dictGet` or 'as a table',
even if the real dictionary does not exist yet.
Maybe empty dictionaries do not make sense when ClickHouse is part of internal infrastructure, but this
feature may be useful when ClickHouse is used as the database for a small application that configures
ClickHouse automatically, and some dictionaries may be optional or provided by the application user.
**Describe the solution you'd like**
Provide mock source `SOURCE(EMPTY())` for dictionaries, such that every `dictGet` call will return `null_value`.
**Describe alternatives you've considered**
Make source section optional for dictionaries.
| https://github.com/ClickHouse/ClickHouse/issues/44240 | https://github.com/ClickHouse/ClickHouse/pull/44502 | 462294dedb70dc84079340ed8edf2bf93670aa62 | 24eeaa500b8be699ec3c0fdae9b56d1f715a41af | "2022-12-14T20:38:33Z" | c++ | "2022-12-25T22:02:37Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,221 | ["src/Interpreters/ExpressionAnalyzer.cpp", "src/Processors/QueryPlan/AggregatingStep.cpp", "src/Processors/QueryPlan/AggregatingStep.h", "tests/queries/0_stateless/02503_in_lc_const_args_bug.reference", "tests/queries/0_stateless/02503_in_lc_const_args_bug.sql"] | Segmentation fault in serialization after grouping with LowCardinality column | **Describe what's wrong**
ClickHouse crashes on a query with a complex cast expression in the grouping statement when the query is made via JDBC (with DBeaver). It correctly throws an exception when the same query is made via clickhouse-client.
**How to reproduce**
ClickHouse version: 22.6.3.35
DBeaver: 21.2.1.202109181446
Driver: clickhouse-jdbc v0.2.6
Query via JDBC:
```
select
multiIf('all' IN ('all'), 'all', id) as case_value,
count(*)
from
(
select materialize(toLowCardinality('test_value')) as id
from numbers_mt(1000000)
)
group by case_value
;
clickhouse-server[3664505]: 2022.12.14 11:30:53.460283 [ 3667110 ] {} <Fatal> BaseDaemon: ########################################
clickhouse-server[3664505]: 2022.12.14 11:30:53.460347 [ 3667110 ] {} <Fatal> BaseDaemon: (version 22.6.3.35 (official build), build id: ECE208880769F062) (from thread 3664624) (query_id: 9b227031-b105-4d3d-866e-dcb31b08242f) (query: select case when 'all' in ('all') then 'all' else id end as case_value, count(*) from ( select materialize(toLowCardinality('test_value')) as id from numbers_mt(1000000) ) group by case_value FORMAT TabSeparatedWithNamesAndTypes;) Received signal Segmentation fault (11)
clickhouse-server[3664505]: 2022.12.14 11:30:53.460378 [ 3667110 ] {} <Fatal> BaseDaemon: Address: 0x1 Access: read. Address not mapped to object.
clickhouse-server[3664505]: 2022.12.14 11:30:53.460387 [ 3667110 ] {} <Fatal> BaseDaemon: Stack trace: 0xb8b5cc0 0x171b9cda 0x171b9820 0x1728d4b7 0xb94fbea 0xb951bc5 0xb94d577 0xb95099d 0x7fd2cb044609 0x7fd2caf69133
clickhouse-server[3664505]: 2022.12.14 11:30:53.460448 [ 3667110 ] {} <Fatal> BaseDaemon: 2. void DB::writeAnyEscapedString<(char)39, false>(char const*, char const*, DB::WriteBuffer&) @ 0xb8b5cc0 in /usr/bin/clickhouse
clickhouse-server[3664505]: 2022.12.14 11:30:53.460473 [ 3667110 ] {} <Fatal> BaseDaemon: 3. DB::IRowOutputFormat::write(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > const&, unsigned long) @ 0x171b9cda in /usr/bin/clickhouse
clickhouse-server[3664505]: 2022.12.14 11:30:53.460488 [ 3667110 ] {} <Fatal> BaseDaemon: 4. DB::IRowOutputFormat::consume(DB::Chunk) @ 0x171b9820 in /usr/bin/clickhouse
clickhouse-server[3664505]: 2022.12.14 11:30:53.460499 [ 3667110 ] {} <Fatal> BaseDaemon: 5. DB::ParallelFormattingOutputFormat::formatterThreadFunction(unsigned long, unsigned long, std::__1::shared_ptr<DB::ThreadGroupStatus> const&) @ 0x1728d4b7 in /usr/bin/clickhouse
clickhouse-server[3664505]: 2022.12.14 11:30:53.460510 [ 3667110 ] {} <Fatal> BaseDaemon: 6. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0xb94fbea in /usr/bin/clickhouse
clickhouse-server[3664505]: 2022.12.14 11:30:53.460530 [ 3667110 ] {} <Fatal> BaseDaemon: 7. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&)::'lambda'()::operator()() @ 0xb951bc5 in /usr/bin/clickhouse
clickhouse-server[3664505]: 2022.12.14 11:30:53.460540 [ 3667110 ] {} <Fatal> BaseDaemon: 8. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xb94d577 in /usr/bin/clickhouse
clickhouse-server[3664505]: 2022.12.14 11:30:53.460547 [ 3667110 ] {} <Fatal> BaseDaemon: 9. ? @ 0xb95099d in /usr/bin/clickhouse
clickhouse-server[3664505]: 2022.12.14 11:30:53.460556 [ 3667110 ] {} <Fatal> BaseDaemon: 10. ? @ 0x7fd2cb044609 in ?
clickhouse-server[3664505]: 2022.12.14 11:30:53.460563 [ 3667110 ] {} <Fatal> BaseDaemon: 11. clone @ 0x7fd2caf69133 in ?
clickhouse-server[3664505]: 2022.12.14 11:30:53.587930 [ 3667110 ] {} <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 43DE7A9BAAEC2325B42A8F250F4A2C22)
```
Query via clickhouse-client:
```
SELECT
multiIf('all' IN ('all'), 'all', id) AS case_value,
count(*)
FROM
(
SELECT materialize(toLowCardinality('test_value')) AS id
FROM numbers_mt(1000000)
)
GROUP BY case_value
Query id: b7e878f3-2b0e-461d-808e-504daf2e6459
0 rows in set. Elapsed: 0.005 sec.
Received exception from server (version 22.6.3):
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Bad cast from type DB::ColumnLowCardinality to DB::ColumnString. (LOGICAL_ERROR)
```
**Expected behavior**
ClickHouse should not crash with a segmentation fault when the query is made via JDBC.
| https://github.com/ClickHouse/ClickHouse/issues/44221 | https://github.com/ClickHouse/ClickHouse/pull/44346 | 6ace8f13dbae6e9533c23f3a8b74c38a6bc6c540 | 720f0cca40c2cf3404c91e3b956abff45834f290 | "2022-12-14T12:18:42Z" | c++ | "2022-12-20T11:09:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,216 | ["src/Storages/MergeTree/ReplicatedMergeTreeSink.cpp", "tests/queries/0_stateless/02481_async_insert_dedup.python"] | Test `02481_async_insert_dedup` is flaky | https://play.clickhouse.com/play?user=play#c2VsZWN0IAp0b1N0YXJ0T2ZEYXkoY2hlY2tfc3RhcnRfdGltZSkgYXMgZCwKY291bnQoKSwgIGdyb3VwVW5pcUFycmF5KHB1bGxfcmVxdWVzdF9udW1iZXIpLCAgYW55KHJlcG9ydF91cmwpCmZyb20gY2hlY2tzIHdoZXJlICcyMDIyLTA2LTAxJyA8PSBjaGVja19zdGFydF90aW1lIGFuZCB0ZXN0X25hbWUgbGlrZSAnJTAyNDgxX2FzeW5jX2luc2VydF9kZWR1cCUnIGFuZCB0ZXN0X3N0YXR1cyBpbiAoJ0ZBSUwnLCAnRkxBS1knKSBncm91cCBieSBkIG9yZGVyIGJ5IGQgZGVzYw==
Example: https://s3.amazonaws.com/clickhouse-test-reports/44202/ec09a66fbb074fcccfae4a743106d5e7c3b64e26/stateless_tests__debug__s3_storage__[3/6].html | https://github.com/ClickHouse/ClickHouse/issues/44216 | https://github.com/ClickHouse/ClickHouse/pull/44349 | ce56ae61c68e23512cc8ec3bfe72f5c935f12379 | 5a6cdd4825fc16976050124a37e816cfbfa2db6c | "2022-12-14T11:23:50Z" | c++ | "2022-12-19T20:53:15Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,188 | ["tests/integration/test_server_reload/test.py"] | `test_server_reload` is flaky | https://play.clickhouse.com/play?user=play#c2VsZWN0IAp0b1N0YXJ0T2ZEYXkoY2hlY2tfc3RhcnRfdGltZSkgYXMgZCwKY291bnQoKSwgIGdyb3VwVW5pcUFycmF5KHB1bGxfcmVxdWVzdF9udW1iZXIpLCAgYW55KHJlcG9ydF91cmwpCmZyb20gY2hlY2tzIHdoZXJlICcyMDIyLTA2LTAxJyA8PSBjaGVja19zdGFydF90aW1lIGFuZCB0ZXN0X25hbWUgbGlrZSAnJXRlc3Rfc2VydmVyX3JlbG9hZC90ZXN0LnB5OjolX3BvcnQlJyBhbmQgdGVzdF9zdGF0dXMgaW4gKCdGQUlMJywgJ0ZMQUtZJykgZ3JvdXAgYnkgZCBvcmRlciBieSBkIGRlc2M=
Ref: https://github.com/ClickHouse/ClickHouse/pull/30549
May be related to https://github.com/ClickHouse/ClickHouse/issues/42734
cc: @vitlibar | https://github.com/ClickHouse/ClickHouse/issues/44188 | https://github.com/ClickHouse/ClickHouse/pull/44444 | 5df5f7d0dcffd3f56883014b93edfc5dbe476d1b | e49deb740fab90b0aaadad70f4a00f620d55fcd0 | "2022-12-13T11:56:32Z" | c++ | "2022-12-21T13:31:12Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,167 | ["docs/en/interfaces/formats.md", "docs/ru/interfaces/formats.md"] | possibly incorrect parsing of chunked RowBinary requests (CANNOT_READ_ALL_DATA) | When sending data in `RowBinary` format with `transfer-encoding: chunked` and a chunk starting with a string of length 13 (= `\r` in ascii), clickhouse raises an exception:
```
Code: 33. DB::Exception: Cannot read all data. Bytes read: 12. Bytes expected: 101.: (at row 1)\n: While executing BinaryRowInputFormat. (CANNOT_READ_ALL_DATA) (version 22.10.1.1175 (official build))
```
Sending a chunk starting with a string of different length avoids this problem.
Example request:
```
0000 50 4f 53 54 20 2f 20 48 54 54 50 2f 31 2e 31 0d POST / HTTP/1.1.
0010 0a 74 72 61 6e 73 66 65 72 2d 65 6e 63 6f 64 69 .transfer-encodi
0020 6e 67 3a 20 63 68 75 6e 6b 65 64 0d 0a 68 6f 73 ng: chunked..hos
0030 74 3a 20 6c 6f 63 61 6c 68 6f 73 74 3a 38 31 32 t: localhost:812
0040 33 0d 0a 75 73 65 72 2d 61 67 65 6e 74 3a 20 6d 3..user-agent: m
0050 69 6e 74 2f 31 2e 34 2e 32 0d 0a 78 2d 63 6c 69 int/1.4.2..x-cli
0060 63 6b 68 6f 75 73 65 2d 64 61 74 61 62 61 73 65 ckhouse-database
0070 3a 20 70 6c 61 75 73 69 62 6c 65 5f 74 65 73 74 : plausible_test
0080 0d 0a 0d 0a 33 34 0d 0a 49 4e 53 45 52 54 20 49 ....34..INSERT I
0090 4e 54 4f 20 22 73 65 73 73 69 6f 6e 73 22 28 22 NTO "sessions"("
00a0 68 6f 73 74 6e 61 6d 65 22 29 20 46 4f 52 4d 41 hostname") FORMA
00b0 54 20 52 6f 77 42 69 6e 61 72 79 20 0d 0a 45 0d T RowBinary ..E.
00c0 0a 0d 65 78 61 6d 70 6c 65 2d 30 2e 63 6f 6d 0d ..example-0.com.
00d0 0a 30 0d 0a 0d 0a .0....
```
**Does it reproduce on recent release?**
Yes. I'm using version 22.10.1.1175.
**How to reproduce**
```console
$ echo 'CREATE TABLE sessions (hostname String) ENGINE = Memory' | curl 'http://localhost:8123/' --data-binary @-
```
```console
$ echo -ne 'POST / HTTP/1.1\r\ntransfer-encoding: chunked\r\nhost: localhost:8123\r\n\r\n30\r\nINSERT INTO sessions(hostname) FORMAT RowBinary \r\nE\r\n\rexample-0.com\r\n0\r\n\r\n' | nc localhost 8123
HTTP/1.1 500 Internal Server Error
Date: Mon, 12 Dec 2022 16:18:21 GMT
Connection: Keep-Alive
Content-Type: text/plain; charset=UTF-8
X-ClickHouse-Server-Display-Name: mac3.local
Transfer-Encoding: chunked
X-ClickHouse-Exception-Code: 33
Keep-Alive: timeout=10
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","result_rows":"0","result_bytes":"0"}
C7
Code: 33. DB::Exception: Cannot read all data. Bytes read: 12. Bytes expected: 101.: (at row 1)
: While executing BinaryRowInputFormat. (CANNOT_READ_ALL_DATA) (version 22.10.1.1175 (official build))
0
```
And here's a successful request with a shorter (~~10~~ 11 bytes) string:
```console
$ echo -ne 'POST / HTTP/1.1\r\ntransfer-encoding: chunked\r\nhost: localhost:8123\r\n\r\n30\r\nINSERT INTO sessions(hostname) FORMAT RowBinary \r\nC\r\n\vexample.com\r\n0\r\n\r\n' | nc localhost 8123
HTTP/1.1 200 OK
Date: Mon, 12 Dec 2022 16:36:12 GMT
Connection: Keep-Alive
Content-Type: text/plain; charset=UTF-8
X-ClickHouse-Server-Display-Name: mac3.local
Transfer-Encoding: chunked
Keep-Alive: timeout=10
X-ClickHouse-Summary: {"read_rows":"1","read_bytes":"20","written_rows":"1","written_bytes":"20","total_rows_to_read":"0","result_rows":"1","result_bytes":"20"}
0
``` | https://github.com/ClickHouse/ClickHouse/issues/44167 | https://github.com/ClickHouse/ClickHouse/pull/44324 | 693df81f506d9133fe10fd5e9a4967f3aa3e29b6 | e80dc069ef6157f1d70cba7384294b99ff201acc | "2022-12-12T15:41:41Z" | c++ | "2022-12-17T00:57:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,130 | ["src/IO/WriteHelpers.h", "src/IO/tests/gtest_WriteHelpers.cpp", "src/Interpreters/TreeRewriter.cpp"] | Hints about column names can contain duplicates. | **Describe the issue**
```
Code: 47. DB::Exception: Received from localhost:9000. DB::Exception: Missing columns: 'total_byte' while processing query: 'SELECT name, total_bytes FROM system.tables WHERE total_byte > 1000000000 ORDER BY total_bytes DESC', required columns: 'name' 'total_bytes' 'total_byte', maybe you meant: ['name','total_bytes','total_bytes']. (UNKNOWN_IDENTIFIER)
```
The name `total_bytes` is duplicated.
The message should be written in normal English, like:
`maybe you meant 'this', 'that' or 'whatever'`
instead of
`maybe you meant ['this', 'that', 'whatever']` | https://github.com/ClickHouse/ClickHouse/issues/44130 | https://github.com/ClickHouse/ClickHouse/pull/44519 | b11d01dff51f26c0661a7d2f0af0ebe7246e5e9a | b5431e971e4ad8485e2b2bcfa45f15d9d84d808e | "2022-12-11T05:22:10Z" | c++ | "2022-12-27T11:37:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,054 | ["src/Storages/MergeTree/DataPartsExchange.cpp", "tests/queries/0_stateless/02344_describe_cache.sql", "tests/queries/0_stateless/02494_zero_copy_projection_cancel_fetch.reference", "tests/queries/0_stateless/02494_zero_copy_projection_cancel_fetch.sh"] | Logical error: 'Cannot commit shared transaction' | https://s3.amazonaws.com/clickhouse-test-reports/43986/7249ad407a0824d8c7651d8e03d4e68e1dea0813/stress_test__msan_.html | https://github.com/ClickHouse/ClickHouse/issues/44054 | https://github.com/ClickHouse/ClickHouse/pull/44173 | 3593377f23dd3725d30aab29a76fbf596eddf591 | c6b6b0ad7d2636bf76c3a24e70c9812399943699 | "2022-12-08T19:49:01Z" | c++ | "2022-12-14T14:55:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,053 | ["tests/queries/0_stateless/01961_roaring_memory_tracking.sql"] | 01961_roaring_memory_tracking sporadically fail | **Describe the bug**
[A link to the report](https://s3.amazonaws.com/clickhouse-test-reports/43860/974df885e0b70a7b26eddcc0cf896ceeccd9fe79/stateless_tests__ubsan__[1/2].html)
**Error message and/or stacktrace**
```
2022-12-07 03:19:59 The query succeeded but the server error '241' was expected (query: SELECT cityHash64(rand() % 1000) as n, groupBitmapState(number) FROM numbers_mt(2000000000) GROUP BY n FORMAT Null; -- { serverError 241 }).
2022-12-07 03:19:59
2022-12-07 03:19:59 stdout:
2022-12-07 03:19:59
2022-12-07 03:19:59
2022-12-07 03:19:59 Settings used in the test: --max_insert_threads=15 --group_by_two_level_threshold=100000 --group_by_two_level_threshold_bytes=50000000 --distributed_aggregation_memory_efficient=0 --fsync_metadata=1 --output_format_parallel_formatting=0 --input_format_parallel_parsing=0 --min_chunk_bytes_for_parallel_parsing=21547204 --max_read_buffer_size=648986 --prefer_localhost_replica=1 --max_block_size=11366 --max_threads=35 --optimize_or_like_chain=0 --optimize_read_in_order=0 --read_in_order_two_level_merge_threshold=79 --optimize_aggregation_in_order=0 --aggregation_in_order_max_block_bytes=34085784 --use_uncompressed_cache=0 --min_bytes_to_use_direct_io=0 --min_bytes_to_use_mmap_io=773810864 --local_filesystem_read_method=read --remote_filesystem_read_method=read --local_filesystem_read_prefetch=1 --remote_filesystem_read_prefetch=1 --compile_expressions=0 --compile_aggregate_expressions=0 --compile_sort_description=0 --merge_tree_coarse_index_granularity=23 --optimize_distinct_in_order=1 --optimize_sorting_by_input_stream_properties=0
2022-12-07 03:19:59
2022-12-07 03:19:59 Database: test_aj42xzek
``` | https://github.com/ClickHouse/ClickHouse/issues/44053 | https://github.com/ClickHouse/ClickHouse/pull/44470 | 6cc8a7e3e3bd74d36cf04759485bac241ff89656 | a184d1f355369779b365fb57adb839c11c820fc4 | "2022-12-08T19:36:22Z" | c++ | "2022-12-21T12:59:25Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,041 | ["src/Interpreters/ExpressionAnalyzer.cpp", "tests/queries/0_stateless/02497_having_without_actual_aggregation_bug.reference", "tests/queries/0_stateless/02497_having_without_actual_aggregation_bug.sql"] | Chunk info was not set for chunk in GroupingAggregatedTransform | Reproducer:
``` sql
SELECT count() FROM (SELECT queryID() AS t FROM remote('127.0.0.{1..3}', numbers(10)) WITH TOTALS HAVING t = initialQueryID())
Code: 49. DB::Exception: Received from localhost:9000. DB::Exception: Chunk info was not set for chunk in GroupingAggregatedTransform.. (LOGICAL_ERROR)
``` | https://github.com/ClickHouse/ClickHouse/issues/44041 | https://github.com/ClickHouse/ClickHouse/pull/44051 | e0a6ccc83091dfa20ead90a422b20eedc41888cc | 3e3738bd49f8028c756c6eb218f0f447ac71a0f5 | "2022-12-08T12:47:27Z" | c++ | "2022-12-10T12:28:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,037 | ["src/Common/ProfileEvents.cpp", "src/Common/ProfileEvents.h", "src/Common/QueryProfiler.cpp", "src/IO/WriteBufferFromFileDescriptorDiscardOnFailure.cpp", "tests/queries/0_stateless/02497_trace_events_stress_long.reference", "tests/queries/0_stateless/02497_trace_events_stress_long.sh"] | Segmentation fault while sending trace_type = 'ProfileEvent' to trace log | https://s3.amazonaws.com/clickhouse-test-reports/43987/3725cf4aa7a9aa022b11d08b2f8e4f7f0dd70283/stress_test__ubsan_/gdb.log | https://github.com/ClickHouse/ClickHouse/issues/44037 | https://github.com/ClickHouse/ClickHouse/pull/44045 | 3a3c6eb458f9a20378eb10b0d68a51325df32a81 | 17c557648e4f01f2b5919952e2263db1cff4a52e | "2022-12-08T12:06:50Z" | c++ | "2022-12-09T13:12:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 44,010 | ["src/Interpreters/InterpreterCreateQuery.cpp", "src/Parsers/ASTColumnDeclaration.cpp", "src/Parsers/ASTColumnDeclaration.h", "src/Parsers/ParserCreateQuery.h", "src/Storages/ColumnDefault.h", "src/Storages/ColumnsDescription.cpp", "tests/queries/0_stateless/02205_ephemeral_1.reference", "tests/queries/0_stateless/02287_ephemeral_format_crash.reference", "tests/queries/0_stateless/02287_ephemeral_format_crash.sql", "tests/queries/0_stateless/02293_compatibility_ignore_auto_increment_in_create_table.reference"] | Ephemeral columns with Map type crash the server | **Describe what's wrong**
Creating a table with a Map-type Ephemeral column will make the server crash on the restart because of wrongly save metadata.
Fiddle to reproduce: https://fiddle.clickhouse.com/81fc3a83-f7ea-43c2-afad-2e2940d04e8c
**Does it reproduce on recent release?**
Tested on version 22.11.1.1286.
**How to reproduce**
* Use version 22.11.x
* Use the fiddle above to reproduce.
**Expected behavior**
When creating a `EPHEMERAL` columns, a default value is assigned.
Example with a Int32:
```sql
CREATE TABLE name (
example_col Int32 EPHEMERAL
) -- ... engine stuff here ...
```
Metadata will assign a default value:
```sql
CREATE TABLE name (
example_col Int32 EPHEMERAL 0
) -- ... engine stuff here ...
```
But, with Map-type, no default value gets assigned, only a `()` (empty parenthesis).
**Error message and/or stacktrace**
```
2022.12.07 14:59:29.359153 [ 11464597 ] {} <Error> Application: DB::Exception: Syntax error (in file /Users/vlourme/Desktop/ch/store/75e/75e717de-1bbd-4011-802c-acd04e9a3409/bug_ephemeral.sql): failed at position 118 ('(') (line 4, col 41): (),
`result` Int32 DEFAULT (data['number']) * 2
)
ENGINE = MergeTree
ORDER BY id
SETTINGS index_granularity = 8192
. Expected one of: literal, NULL, number, Bool, true, false, string literal, NOT, COMMENT, CODEC, TTL, token, Comma, ClosingRoundBracket: Cannot parse definition from metadata file /Users/vlourme/Desktop/ch/store/75e/75e717de-1bbd-4011-802c-acd04e9a3409/bug_ephemeral.sql
```
**Additional context**
The issue can't be fixed by setting a default expression after `EPHEMERAL`; this, for example, will not work:
```sql
CREATE TABLE bug_ephemeral (
id Int32,
data Map(String, Int32) EPHEMERAL {},
result Int DEFAULT data['number'] * 2
)
Engine = MergeTree()
ORDER BY (id);
```
But this, is accepted:
```sql
CREATE TABLE bug_ephemeral (
id Int32,
data Int32 EPHEMERAL 12,
result Int DEFAULT data * 2
)
Engine = MergeTree()
ORDER BY (id);
```
| https://github.com/ClickHouse/ClickHouse/issues/44010 | https://github.com/ClickHouse/ClickHouse/pull/44026 | 24eeaa500b8be699ec3c0fdae9b56d1f715a41af | 280e14456f15504c22493363c7ebf9ecf709edd8 | "2022-12-07T14:07:47Z" | c++ | "2022-12-25T23:59:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 43,990 | ["src/Compression/CompressionCodecGorilla.cpp"] | ClickHouse tries to create column with wrong compression codec | I've ClickHouse 22.1.3.7 in docker and tried to create table with this script.
```sql
CREATE TABLE default.test
(
`time_date` Date CODEC(Delta(2), LZ4),
`ipv6` IPv6 CODEC(Gorilla, LZ4)
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(time_date)
ORDER BY (time_date)
SETTINGS index_granularity = 8192;
```
Expected behavior: table is created.
Actual behavior: An error with the following text:
`Code: 36. DB::Exception: Codec Delta is only applicable for data types of size 1, 2, 4, 8 bytes. Given type IPv6. (BAD_ARGUMENTS)`
It tries to create ipv6 column with Delta codec but I specified Gorilla. | https://github.com/ClickHouse/ClickHouse/issues/43990 | https://github.com/ClickHouse/ClickHouse/pull/44023 | 7c8cec854b3b282fe3a07c262630f2174d6dc4c4 | b1a7b8480338b2f95055eee7309f978529f56265 | "2022-12-06T20:50:18Z" | c++ | "2022-12-08T09:46:46Z" |
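One way around this, as a sketch, assuming general-purpose compression is acceptable for the 16-byte IPv6 column (Gorilla is only defined for fixed-size types of 1, 2, 4, or 8 bytes, so it cannot apply to IPv6 in any case):

```sql
CREATE TABLE default.test
(
    `time_date` Date CODEC(Delta(2), LZ4),
    `ipv6` IPv6 CODEC(ZSTD)  -- avoid Gorilla/Delta on a 16-byte type
)
ENGINE = MergeTree
PARTITION BY toYYYYMMDD(time_date)
ORDER BY (time_date);
```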