status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55785 | ["src/Storages/StorageNull.h", "tests/queries/0_stateless/02902_select_subcolumns_from_engine_null.reference", "tests/queries/0_stateless/02902_select_subcolumns_from_engine_null.sql"] | Dot syntax tuple element with Null engine |
**Describe what's wrong**
Accessing Tuple elements using the dot syntax throws 'Missing columns'
**Does it reproduce on recent release?**
Tried 23.8 LTS
**How to reproduce**
```sql
CREATE TABLE memory (t Tuple(n Int64)) ENGINE = Memory;
SELECT tupleElement(t, 'n') FROM memory;
SELECT t.n FROM memory;

CREATE TABLE null (t Tuple(n Int64)) ENGINE = Null;
SELECT tupleElement(t, 'n') FROM null;
SELECT t.n FROM null;
```
**Expected behavior**
The dot syntax works with Null engine like it does with the Memory engine
**Error message and/or stacktrace**
```
Missing columns: 't.n' while processing query: 'SELECT t.n FROM `null`', required columns: 't.n'. (UNKNOWN_IDENTIFIER)
```
| https://github.com/ClickHouse/ClickHouse/issues/55785 | https://github.com/ClickHouse/ClickHouse/pull/55912 | 04b82d6463b824e82642474fcc4a7cada88f6cb1 | cb63b07e89270480092e9e5e290fc7f8f0fe0eaa | "2023-10-18T13:15:01Z" | c++ | "2023-10-24T12:03:46Z" |
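A note on scope, based only on the repro in the report above (the quoted error mentions only the `t.n` form): the explicit function form may serve as a stopgap on the Null engine until the fix; this is a hedged sketch, not verified against a particular version.

```sql
-- Workaround sketch: use tupleElement() instead of the dot syntax on
-- tables with ENGINE = Null (per the report, only the dot syntax raises
-- UNKNOWN_IDENTIFIER there).
SELECT tupleElement(t, 'n') FROM null;
```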
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55651 | ["src/Interpreters/InterpreterAlterQuery.cpp", "tests/queries/0_stateless/02911_add_index_and_materialize_index.reference", "tests/queries/0_stateless/02911_add_index_and_materialize_index.sql"] | Cannot materialize index in the same ALTER query that creates it. |
**Use case**
```
milovidov-desktop :) ALTER TABLE index_test ADD INDEX i_x (mortonDecode(2, z).1) TYPE minmax, ADD INDEX i_y (mortonDecode(2, z).2) TYPE minmax, MATERIALIZE INDEX i_x, MATERIALIZE INDEX i_y
ALTER TABLE index_test
ADD INDEX i_x mortonDecode(2, z).1 TYPE minmax GRANULARITY 1,
ADD INDEX i_y mortonDecode(2, z).2 TYPE minmax GRANULARITY 1,
MATERIALIZE INDEX i_x,
MATERIALIZE INDEX i_y
Query id: dcf2a9f0-f20e-4e49-ab6c-e0aa4f30eccb
0 rows in set. Elapsed: 0.015 sec.
Received exception from server (version 23.10.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Unknown index: i_x. (BAD_ARGUMENTS)
```
| https://github.com/ClickHouse/ClickHouse/issues/55651 | https://github.com/ClickHouse/ClickHouse/pull/56331 | 435350772471b0e05df193663eab52ac885043f2 | 2efa5ab172b2ffe51617d3944a8e28c058810776 | "2023-10-16T04:48:16Z" | c++ | "2023-11-18T16:23:11Z" |
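A hedged workaround sketch for the issue above: split the statement in two, so the indices are committed to table metadata before they are materialized (this assumes the subcommands of a single ALTER are resolved against the metadata as of the start of the query, which is what the error suggests; unverified).

```sql
-- Workaround sketch (unverified): two separate ALTERs instead of one.
ALTER TABLE index_test
    ADD INDEX i_x (mortonDecode(2, z).1) TYPE minmax,
    ADD INDEX i_y (mortonDecode(2, z).2) TYPE minmax;

ALTER TABLE index_test
    MATERIALIZE INDEX i_x,
    MATERIALIZE INDEX i_y;
```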
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55650 | ["src/Storages/IndicesDescription.cpp", "src/Storages/ReplaceAliasByExpressionVisitor.cpp", "src/Storages/ReplaceAliasByExpressionVisitor.h", "tests/queries/0_stateless/02911_support_alias_column_in_indices.reference", "tests/queries/0_stateless/02911_support_alias_column_in_indices.sql"] | Cannot use ALIAS columns in indices |
This does not work:
```
CREATE TABLE test
(
x UInt32,
y ALIAS x + 1,
INDEX i_y (y) TYPE minmax
) ENGINE = MergeTree ORDER BY x;
```
This does:
```
CREATE TABLE test
(
x UInt32,
y ALIAS x + 1,
INDEX i_y (x + 1) TYPE minmax
) ENGINE = MergeTree ORDER BY x;
```
| https://github.com/ClickHouse/ClickHouse/issues/55650 | https://github.com/ClickHouse/ClickHouse/pull/57546 | 3bd3e2e749eeb678cb158f648d408e1637c7815a | b85214ca1ab1a2fb33bb4403232043e9dabcb70a | "2023-10-16T04:37:34Z" | c++ | "2023-12-07T00:22:03Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55643 | ["tests/queries/0_stateless/02900_decimal_sort_with_multiple_columns.reference", "tests/queries/0_stateless/02900_decimal_sort_with_multiple_columns.sql"] | 23.9 returns incorrect results depending on ORDER BY key (also affects projections) |
**Describe what's wrong**
We see a regression in 23.9 where, depending on a table's ORDER BY and filters, ClickHouse returns incorrect results. We see correct results in 23.8. I couldn't find a way to manually generate a data set that exhibits this issue, so instead I have uploaded an example data set to S3. Anyway, consider the following two tables:
<table>
<tr>
<td>
<pre><code>
-- TS in ORDER BY
CREATE TABLE tbl1
(
`ID` String,
`TS` DateTime64(3, 'UTC')
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(TS)
ORDER BY TS
SETTINGS index_granularity = 8192;
</code></pre>
</td>
<td>
<pre><code>
-- ID and TS in ORDER BY
CREATE TABLE tbl2
(
`ID` String,
`TS` DateTime64(3, 'UTC')
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(TS)
ORDER BY (ID, TS)
SETTINGS index_granularity = 8192;
</code></pre>
</td>
</tr>
</table>
If I insert the following data into both of them, the number of records in each matches, as expected; however, filtering on ID returns different results:
```
INSERT INTO tbl1 SELECT * FROM s3('https://public-test-data-xyz.s3.us-east-2.amazonaws.com/tbl1.clickhouse', Native);
INSERT INTO tbl2 SELECT * FROM s3('https://public-test-data-xyz.s3.us-east-2.amazonaws.com/tbl1.clickhouse', Native);
```
```
SELECT count(*) FROM tbl1;
-- 1500854
SELECT count(*) FROM tbl2;
-- 1500854
SELECT count(*) FROM tbl1 WHERE ID = '37242ba15380c91ef199a19834325589';
-- 107608
SELECT count(*) FROM tbl2 WHERE ID = '37242ba15380c91ef199a19834325589';
-- 75736
```
This also seems to affect projections. For example, consider this table:
```
-- TS in ORDER BY, ID in projection
CREATE TABLE default.tbl3
(
`ID` String,
`TS` DateTime64(3, 'UTC'),
PROJECTION proj
(
SELECT count(*)
GROUP BY ID
)
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(TS)
ORDER BY TS
SETTINGS index_granularity = 8192
```
Whether the projection already existed or is materialized after the fact, it seems like it can also return incorrect results:
```
SELECT count(*) FROM tbl1;
-- 1500854
SELECT count(*) FROM tbl3;
-- 1500854
SELECT count(*) FROM tbl1 WHERE ID = '37242ba15380c91ef199a19834325589';
-- 107608
SELECT count(*) FROM tbl3 WHERE ID = '37242ba15380c91ef199a19834325589';
-- 97654
```
For what it's worth, we actually first saw this bug (or a version of this bug?) in projection usage. But I am having difficulties creating a minimal reproduction there.
**Does it reproduce on recent release?**
23.9
**How to reproduce**
- [ClickHouse Fiddle for 23.8](https://fiddle.clickhouse.com/37f47688-32f2-4f6a-a29f-d4321ac4acc4)
- [ClickHouse Fiddle for 23.9](https://fiddle.clickhouse.com/ce97352c-b090-4504-920a-7509c048ccdf)
| https://github.com/ClickHouse/ClickHouse/issues/55643 | https://github.com/ClickHouse/ClickHouse/pull/55662 | 71dfdf5bfe85373a1d7fa814aceb276df5b7cbe6 | ffcf1f02cb44605e909742f381ad63d721ca1fbe | "2023-10-15T10:06:59Z" | c++ | "2023-10-26T10:36:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55562 | ["src/Core/Settings.h", "tests/queries/0_stateless/02896_optimize_array_exists_to_has_with_date.reference", "tests/queries/0_stateless/02896_optimize_array_exists_to_has_with_date.sql"] | Date-String comparison doesn't work for lambda functions on 23.3.8.22 |
**Describe the issue**
On build 22.8.21.38, comparison between String and Date works fine inside lambda functions (for example in arrayExists); the same expression results in an error on build 23.3.8.22.
**How to reproduce**
on 22.8.21.38:
```
SELECT arrayExists(date -> (date = '2022-07-31'), [toDate('2022-07-31')]) AS date_exists
Query id: 43335dab-870a-41d2-9655-5aa1245d9aeb
ββdate_existsββ
β 1 β
βββββββββββββββ
1 row in set. Elapsed: 0.001 sec.
```
on 23.3.8.22:
```
SELECT arrayExists(date -> (date = '2022-07-31'), [toDate('2022-07-31')]) AS date_exists
Query id: d4a6e24e-1316-4a4e-adde-0ada0755c88b
0 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 23.3.8):
Code: 386. DB::Exception: Received from localhost:9000. DB::Exception: There is no supertype for types Date, String because some of them are String/FixedString and some of them are not: While processing has([toDate('2022-07-31')], '2022-07-31') AS date_exists. (NO_COMMON_TYPE)
```
* Which ClickHouse server versions are incompatible
22.8.21.38 and 23.3.8.22
* Which interface to use, if matters
CLI
**Error message and/or stacktrace**
```
SELECT arrayExists(date -> (date = '2022-07-31'), [toDate('2022-07-31')]) AS date_exists
SELECT arrayExists(date -> (date = '2022-07-31'), [toDate('2022-07-31')]) AS date_exists
Query id: d4a6e24e-1316-4a4e-adde-0ada0755c88b
[chi-clickhouse-installation-default-0-0-0] 2023.10.12 15:23:47.720381 [ 2723 ] {d4a6e24e-1316-4a4e-adde-0ada0755c88b} <Debug> executeQuery: (from [::1]:59718) SELECT arrayExists(date -> (date = '2022-07-31'), [toDate('2022-07-31')]) AS date_exists (stage: Complete)
[chi-clickhouse-installation-default-0-0-0] 2023.10.12 15:23:47.721060 [ 2723 ] {d4a6e24e-1316-4a4e-adde-0ada0755c88b} <Error> executeQuery: Code: 386. DB::Exception: There is no supertype for types Date, String because some of them are String/FixedString and some of them are not: While processing has([toDate('2022-07-31')], '2022-07-31') AS date_exists. (NO_COMMON_TYPE) (version 23.3.8.22.altinitystable (altinity build)) (from [::1]:59718) (in query: SELECT arrayExists(date -> (date = '2022-07-31'), [toDate('2022-07-31')]) AS date_exists)
0 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 23.3.8):
Code: 386. DB::Exception: Received from localhost:9000. DB::Exception: There is no supertype for types Date, String because some of them are String/FixedString and some of them are not: While processing has([toDate('2022-07-31')], '2022-07-31') AS date_exists. (NO_COMMON_TYPE)
```
| https://github.com/ClickHouse/ClickHouse/issues/55562 | https://github.com/ClickHouse/ClickHouse/pull/55609 | 13ee46ed74fa9b116e522bb7da07ded4091ac001 | 17616ca3256c1b602980fab6277faf829b460cbf | "2023-10-12T15:24:48Z" | c++ | "2023-10-13T22:28:45Z" |
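A hedged workaround for the report above: avoid the Date/String supertype resolution inside the lambda by making both sides the same type. This is a sketch; the behavior on 23.3 builds is not verified here.

```sql
-- Compare Date with Date explicitly instead of Date with String:
SELECT arrayExists(date -> (date = toDate('2022-07-31')), [toDate('2022-07-31')]) AS date_exists;

-- Equivalent rewrite of the has() form that the optimizer produced:
SELECT has([toDate('2022-07-31')], toDate('2022-07-31')) AS date_exists;
```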
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55501 | ["src/Storages/MergeTree/MergeTreeData.cpp", "tests/queries/0_stateless/02895_forbid_create_inverted_index.reference", "tests/queries/0_stateless/02895_forbid_create_inverted_index.sql"] | Disallow `ADD INDEX TYPE inverted` unless `allow_experimental_inverted_index = 1` |
**Describe the unexpected behaviour**
`ALTER TABLE ADD INDEX TYPE inverted` should require `allow_experimental_inverted_index = 1`.
**How to reproduce**
* Which ClickHouse server version to use - 23.8, 23.3
Creating a table with `INDEX TYPE inverted` is not allowed unless we set `allow_experimental_inverted_index = 1`
```
CREATE OR REPLACE TABLE tab
(
`key` UInt64,
`str` String,
INDEX inv_idx str TYPE inverted(0) GRANULARITY 1
)
ENGINE = MergeTree
ORDER BY key
Received exception from server (version 23.8.2):
Code: 344. DB::Exception: Received from localhost:9000. DB::Exception: Experimental Inverted Index feature is not enabled (the setting 'allow_experimental_inverted_index'). (SUPPORT_IS_DISABLED)
```
However, we are allowed to add an index of type inverted after the table is created.
```
a8346b53da63 :) select getSetting('allow_experimental_inverted_index')
SELECT getSetting('allow_experimental_inverted_index')
Query id: a896c4bd-9c9c-4821-8d63-0b5f1a777778
ββgetSetting('allow_experimental_inverted_index')ββ
β false β
βββββββββββββββββββββββββββββββββββββββββββββββββββ
1 row in set. Elapsed: 0.004 sec.
a8346b53da63 :) CREATE OR REPLACE TABLE tab
(
`key` UInt64,
`str` String
)
ENGINE = MergeTree
ORDER BY key;
ALTER TABLE tab ADD INDEX inv_idx(str) TYPE inverted(0);
CREATE OR REPLACE TABLE tab
(
`key` UInt64,
`str` String
)
ENGINE = MergeTree
ORDER BY key
Query id: d052d090-e372-4b1c-8c73-04a28b2b2b05
Ok.
0 rows in set. Elapsed: 0.017 sec.
ALTER TABLE tab
ADD INDEX inv_idx str TYPE inverted(0) GRANULARITY 1
Query id: 78335436-cc93-4c85-b1a0-99c4e9c97805
Ok.
0 rows in set. Elapsed: 0.005 sec.
```
**Expected behavior**
Disallow `ADD INDEX TYPE inverted` unless `allow_experimental_inverted_index = 1`.
| https://github.com/ClickHouse/ClickHouse/issues/55501 | https://github.com/ClickHouse/ClickHouse/pull/55529 | 1905c44a334397eaa94b9eba2013ef72d4f87a48 | 332a0cfa24d6ca76814dde17a09fa96f58be0003 | "2023-10-11T13:43:58Z" | c++ | "2023-10-12T12:40:58Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55500 | ["docker/test/base/setup_export_logs.sh"] | Many `Structure does not match` in `stateless_tests_flaky_check (asan)` |
Right now we have a lot of `Structure does not match` entries at `Warning` level in `stateless_tests_flaky_check`. For example, in https://s3.amazonaws.com/clickhouse-test-reports/55467/bb204ed0bd08203621d12fe92c17e9385b788a6a/stateless_tests_flaky_check__asan_/run.log the ratio right now is 0.434%, just below the 1% threshold of our garbage-log detection test `00002_log_and_exception_messages_formatting`.
Why do we have so many `Structure does not match` warnings? I checked the log:
```
2023.10.11 04:12:15.525576 [ 914 ] {} <Warning> system.asynchronous_metric_log_sender.DirectoryMonitor.default: Structure does not match (remote: pull_request_number UInt32 UInt32(size = 0), commit_sha String String(size = 0), check_start_time DateTime UInt32(size = 0), check_name LowCardinality(String) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, String(size = 1))), instance_type LowCardinality(String) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, String(size = 1))), instance_id String String(size = 0), event_date Date UInt16(size = 0), event_time DateTime UInt32(size = 0), metric LowCardinality(String) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, String(size = 1))), value Float64 Float64(size = 0), local: pull_request_number UInt16 UInt16(size = 0), commit_sha String String(size = 0), check_start_time DateTime('UTC') UInt32(size = 0), check_name String String(size = 0), instance_type String String(size = 0), instance_id String String(size = 0), event_date Date UInt16(size = 0), event_time DateTime UInt32(size = 0), metric LowCardinality(String) ColumnLowCardinality(size = 0, UInt8(size = 0), ColumnUnique(size = 1, String(size = 1))), value Float64 Float64(size = 0)), implicit conversion will be done
```
We create the distributed table `asynchronous_metric_log_sender` locally and sync its data to the remote side. But the local column `commit_sha` (and the other string columns) is of plain `String` type, which differs from the remote server, where the type is `LowCardinality(String)` (`ColumnLowCardinality`), as shown above.
Sometimes this makes the tests of a new PR flaky, if we trigger the **xxx_sender** tables to sync to the remote side too many times. For example, in PR https://github.com/ClickHouse/ClickHouse/pull/53240.
| https://github.com/ClickHouse/ClickHouse/issues/55500 | https://github.com/ClickHouse/ClickHouse/pull/55503 | 332a0cfa24d6ca76814dde17a09fa96f58be0003 | 54fb9a836e5c2e4a5bc4a772679d4fd7200d426e | "2023-10-11T13:26:48Z" | c++ | "2023-10-12T14:15:00Z" |
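The mismatch from the log above can be illustrated with a minimal pair. All names here (cluster, table, column) are illustrative, not the actual CI schema.

```sql
-- Illustrative only: a Distributed table declaring plain String pointing
-- at a target that declares LowCardinality(String). Inserts flushed to the
-- target then log "Structure does not match ... implicit conversion will
-- be done", matching the warning quoted above.
CREATE TABLE target_local (check_name LowCardinality(String))
    ENGINE = MergeTree ORDER BY tuple();

CREATE TABLE sender (check_name String)
    ENGINE = Distributed(some_cluster, currentDatabase(), target_local);
```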
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55381 | ["programs/server/dashboard.html"] | Advanced dashboard: if there is only a single chart, don't show the "maximize" and "drag" buttons. |
**Use case**
https://play.clickhouse.com/dashboard#eyJob3N0IjoiaHR0cHM6Ly9wbGF5LmNsaWNraG91c2UuY29tIiwidXNlciI6InBsYXkiLCJxdWVyaWVzIjpbeyJ0aXRsZSI6Intkb21haW59IiwicXVlcnkiOiJTRUxFQ1QgZGF0ZTo6RGF0ZVRpbWU6OklOVCBBUyBkLCByb3VuZCgxZTkgKiBhdmcocG93KHJhbmssIC0wLjc1KSkpIEZST00gY2lzY29fdW1icmVsbGEgV0hFUkUgZG9tYWluID0ge2RvbWFpbjpTdHJpbmd9IEdST1VQIEJZIGQgT1JERVIgQlkgZCJ9XSwicGFyYW1zIjp7ImRvbWFpbiI6ImRldi5teXNxbC5jb20ifX0=
| https://github.com/ClickHouse/ClickHouse/issues/55381 | https://github.com/ClickHouse/ClickHouse/pull/55581 | 17616ca3256c1b602980fab6277faf829b460cbf | 8697e78cd86260f5c843675289cc399e02995244 | "2023-10-09T11:23:07Z" | c++ | "2023-10-14T01:15:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55331 | ["docs/en/operations/settings/settings.md", "src/Core/Settings.h", "src/Interpreters/executeQuery.cpp", "src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.cpp", "src/Processors/QueryPlan/Optimizations/QueryPlanOptimizationSettings.h", "tests/queries/0_stateless/02906_force_optimize_projection_name.reference", "tests/queries/0_stateless/02906_force_optimize_projection_name.sql"] | A setting `force_optimize_projection_name` |
**Use case**
Similar to `force_optimize_projection`.
**Describe the solution you'd like**
If it is set to a non-empty string, check that this projection is used in the query at least once.
**Additional context**
There could be ambiguity with queries containing multiple tables, possibly with identical projection names.
We should resolve this ambiguity by checking that the reading from at least one table was optimized by using the projection with this name.
Alternatively, we could make the setting value a map of database.table -> projection_name, but it does not resolve the case of one table being used multiple times when only some cases have to be optimized, and will also be harder to use... while the simple name specification is good enough for a hint.
| https://github.com/ClickHouse/ClickHouse/issues/55331 | https://github.com/ClickHouse/ClickHouse/pull/56134 | b0f6b18f6ec1be26d9467aa6431adc72a61f2882 | 4deaf7cefbf17141af18f5f82ffe7ca3572c6f14 | "2023-10-08T11:58:08Z" | c++ | "2023-11-01T12:12:10Z" |
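A hedged sketch of how the proposed setting could be used, mirroring the existing `force_optimize_projection`; the table and projection names are illustrative and the final syntax is up to the implementation.

```sql
-- Fail the query unless the projection named 'proj' is used for at least
-- one table read somewhere in the query:
SET force_optimize_projection_name = 'proj';

SELECT count(*) FROM tbl GROUP BY ID;
-- If 'proj' was not picked by the planner, the query should return an
-- error instead of silently falling back to a full scan.
```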
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55308 | ["docs/en/sql-reference/functions/date-time-functions.md", "src/DataTypes/IDataType.h", "src/Functions/DateTimeTransforms.h", "src/Functions/fromDaysSinceYearZero.cpp", "src/Functions/makeDate.cpp", "tests/fuzz/all.dict", "tests/fuzz/dictionaries/functions.dict", "tests/queries/0_stateless/02907_fromDaysSinceYearZero.reference", "tests/queries/0_stateless/02907_fromDaysSinceYearZero.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | Provide fromDaysSinceYearZero() |
#54856 (respectively #54796) added the function [toDaysSinceYearZero()](https://clickhouse.com/docs/en/sql-reference/functions/date-time-functions#todayssinceyearzero). The main reason to add this function was to have something similar to MySQL's [TO_DAYS()](https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_to-days) in ClickHouse (besides that, there is probably little practical use for the function).
As per https://github.com/ClickHouse/ClickHouse/pull/54856#issuecomment-1751765802, we should provide the opposite function for consistency, i.e. `fromDaysSinceYearZero()`. See the related `from*()` [date/time functions](https://clickhouse.com/docs/en/sql-reference/functions/date-time-functions) for starting points.
| https://github.com/ClickHouse/ClickHouse/issues/55308 | https://github.com/ClickHouse/ClickHouse/pull/56088 | 35d785592b1447cf17cf598024e3a3cff210dbf9 | 480e284db1a36f0208fb51ef24eb7897d626c926 | "2023-10-07T17:08:10Z" | c++ | "2023-11-03T23:17:04Z" |
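A hedged sketch of the intended semantics of the request above: the proposed function inverts the existing one. At the time of the report, `fromDaysSinceYearZero()` did not exist yet, so the second statement is shown commented out.

```sql
-- Existing function: days elapsed since 0000-01-01.
SELECT toDaysSinceYearZero(toDate('2023-09-08')) AS days;

-- Proposed inverse, expected to round-trip:
-- SELECT fromDaysSinceYearZero(toDaysSinceYearZero(toDate('2023-09-08')))
--     = toDate('2023-09-08');
```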
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55279 | ["src/Processors/Transforms/CreatingSetsTransform.cpp"] | data-race in CreatingSetsTransform |
```
==184==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x555dd9f6b0ea in LZ4_putIndexOnHash build_docker/./contrib/lz4/lib/lz4.c:822:88
#1 0x555dd9f6b0ea in LZ4_compress_generic_validated build_docker/./contrib/lz4/lib/lz4.c:1232:17
#2 0x555dd9f6b0ea in LZ4_compress_generic build_docker/./contrib/lz4/lib/lz4.c:1366:12
#3 0x555dd9f6b0ea in LZ4_compress_fast_extState build_docker/./contrib/lz4/lib/lz4.c:1381:20
#4 0x555dd9f71a96 in LZ4_compress_fast build_docker/./contrib/lz4/lib/lz4.c:1454:14
#5 0x555dd9f71a96 in LZ4_compress_default build_docker/./contrib/lz4/lib/lz4.c:1465:12
#6 0x555dc46fc50f in DB::CompressionCodecLZ4::doCompressData(char const*, unsigned int, char*) const build_docker/./src/Compression/CompressionCodecLZ4.cpp:91:12
#7 0x555dc47f4561 in DB::ICompressionCodec::compress(char const*, unsigned int, char*) const build_docker/./src/Compression/ICompressionCodec.cpp:88:39
#8 0x555dc46d2827 in DB::CompressedWriteBuffer::nextImpl() build_docker/./src/Compression/CompressedWriteBuffer.cpp:37:41
#9 0x555dca76a7a1 in DB::WriteBuffer::next() build_docker/./src/IO/WriteBuffer.h:48:13
#10 0x555dca76a7a1 in DB::HashingWriteBuffer::nextImpl() build_docker/./src/IO/HashingWriteBuffer.h:64:13
#11 0x555dcad00e1d in DB::WriteBuffer::next() build_docker/./src/IO/WriteBuffer.h:48:13
#12 0x555dcad00e1d in DB::MergeTreeDataPartWriterCompact::writeDataBlock(DB::Block const&, std::__1::vector<DB::Granule, std::__1::allocator<DB::Granule>> const&) build_docker/./src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:249:38
#13 0x555dcacfe7af in DB::MergeTreeDataPartWriterCompact::writeDataBlockPrimaryIndexAndSkipIndices(DB::Block const&, std::__1::vector<DB::Granule, std::__1::allocator<DB::Granule>> const&) build_docker/./src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:194:5
#14 0x555dcad02074 in DB::MergeTreeDataPartWriterCompact::fillDataChecksums(DB::MergeTreeDataPartChecksums&) build_docker/./src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:268:9
#15 0x555dcad048c4 in DB::MergeTreeDataPartWriterCompact::fillChecksums(DB::MergeTreeDataPartChecksums&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>&) build_docker/./src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:430:9
#16 0x555dcb23274e in DB::MergedBlockOutputStream::finalizePartAsync(std::__1::shared_ptr<DB::IMergeTreeDataPart> const&, bool, DB::NamesAndTypesList const*, DB::MergeTreeDataPartChecksums*) build_docker/./src/Storages/MergeTree/MergedBlockOutputStream.cpp:151:13
#17 0x555dcb218dd6 in DB::MergeTreeDataWriter::writeTempPartImpl(DB::BlockWithPartition&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, long, bool) build_docker/./src/Storages/MergeTree/MergeTreeDataWriter.cpp:592:27
#18 0x555dcb20f9e9 in DB::MergeTreeDataWriter::writeTempPart(DB::BlockWithPartition&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>) build_docker/./src/Storages/MergeTree/MergeTreeDataWriter.cpp:385:12
#19 0x555dcb671fb2 in DB::MergeTreeSink::consume(DB::Chunk) build_docker/./src/Storages/MergeTree/MergeTreeSink.cpp:87:40
#20 0x555dcca72e19 in DB::SinkToStorage::onConsume(DB::Chunk) build_docker/./src/Processors/Sinks/SinkToStorage.cpp:24:5
#21 0x555dcc7a40a2 in DB::ExceptionKeepingTransform::work()::$_1::operator()() const build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:150:51
#22 0x555dcc7a40a2 in decltype(std::declval<DB::ExceptionKeepingTransform::work()::$_1&>()()) std::__1::__invoke[abi:v15000]<DB::ExceptionKeepingTransform::work()::$_1&>(DB::ExceptionKeepingTransform::work()::$_1&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#23 0x555dcc7a40a2 in void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::ExceptionKeepingTransform::work()::$_1&>(DB::ExceptionKeepingTransform::work()::$_1&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#24 0x555dcc7a40a2 in std::__1::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()>::operator()[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#25 0x555dcc7a40a2 in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()>>(std::__1::__function::__policy_storage const*) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#26 0x555dcc7a3c0d in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#27 0x555dcc7a3c0d in std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#28 0x555dcc7a3c0d in DB::runStep(std::__1::function<void ()>, DB::ThreadStatus*, std::__1::atomic<unsigned long>*) build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:114:9
#29 0x555dcc7a302b in DB::ExceptionKeepingTransform::work() build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:150:34
#30 0x555dcbdf4dd6 in DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:47:26
#31 0x555dcbdf4dd6 in DB::ExecutionThreadContext::executeTask() build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:95:9
#32 0x555dcbdc86a8 in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273:26
#33 0x555dcbdc7346 in DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) build_docker/./src/Processors/Executors/PipelineExecutor.cpp:148:5
#34 0x555dcbe0e7f8 in DB::PushingPipelineExecutor::finish() build_docker/./src/Processors/Executors/PushingPipelineExecutor.cpp:122:19
#35 0x555dc7f77752 in DB::SystemLog<DB::TextLogElement>::flushImpl(std::__1::vector<DB::TextLogElement, std::__1::allocator<DB::TextLogElement>> const&, unsigned long) build_docker/./src/Interpreters/SystemLog.cpp:512:18
#36 0x555dc7f74a21 in DB::SystemLog<DB::TextLogElement>::savingThreadFunction() build_docker/./src/Interpreters/SystemLog.cpp:452:17
#37 0x555daef63c46 in DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()::operator()() const build_docker/./src/Common/SystemLogBase.cpp:247:69
#38 0x555daef63c46 in decltype(std::declval<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&>()()) std::__1::__invoke[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#39 0x555daef63c46 in decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/tuple:1789:1
#40 0x555daef63c46 in decltype(auto) std::__1::apply[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&) build_docker/./contrib/llvm-project/libcxx/include/tuple:1798:1
#41 0x555daef63c46 in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()::operator()() build_docker/./src/Common/ThreadPool.h:242:13
#42 0x555daef63b1e in decltype(std::declval<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>()()) std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#43 0x555daef63b1e in void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#44 0x555daef63b1e in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'(), void ()>::operator()[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#45 0x555daef63b1e in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#46 0x555daee8b9ce in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#47 0x555daee8b9ce in std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#48 0x555daee8b9ce in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/./src/Common/ThreadPool.cpp:426:13
#49 0x555daee9a64a in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/./src/Common/ThreadPool.cpp:179:73
#50 0x555daee9a64a in decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#51 0x555daee9a64a in void std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/thread:284:5
#52 0x555daee9a64a in void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/./contrib/llvm-project/libcxx/include/thread:295:5
#53 0x7f5124a87ac2 in start_thread nptl/pthread_create.c:442:8
#54 0x7f5124b19a3f misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
Uninitialized value was stored to memory at
#0 0x555d97d0538a in __msan_memcpy (/workspace/clickhouse+0x86df38a) (BuildId: cf0aaf59a1404debf76a40cd09d400f81f34a2e9)
#1 0x555d97df7493 in DB::WriteBuffer::write(char const*, unsigned long) (/workspace/clickhouse+0x87d1493) (BuildId: cf0aaf59a1404debf76a40cd09d400f81f34a2e9)
#2 0x555dc4e4df62 in DB::SerializationString::serializeBinaryBulk(DB::IColumn const&, DB::WriteBuffer&, unsigned long, unsigned long) const build_docker/./src/DataTypes/Serializations/SerializationString.cpp:146:14
#3 0x555dc4ce5550 in DB::ISerialization::serializeBinaryBulkWithMultipleStreams(DB::IColumn const&, unsigned long, unsigned long, DB::ISerialization::SerializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::SerializeBinaryBulkState>&) const build_docker/./src/DataTypes/Serializations/ISerialization.cpp:114:9
#4 0x555dcad00530 in DB::(anonymous namespace)::writeColumnSingleGranule(DB::ColumnWithTypeAndName const&, std::__1::shared_ptr<DB::ISerialization const> const&, std::__1::function<DB::WriteBuffer* (DB::ISerialization::SubstreamPath const&)>, unsigned long, unsigned long) build_docker/./src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:156:20
#5 0x555dcad00530 in DB::MergeTreeDataPartWriterCompact::writeDataBlock(DB::Block const&, std::__1::vector<DB::Granule, std::__1::allocator<DB::Granule>> const&) build_docker/./src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:244:13
#6 0x555dcacfe7af in DB::MergeTreeDataPartWriterCompact::writeDataBlockPrimaryIndexAndSkipIndices(DB::Block const&, std::__1::vector<DB::Granule, std::__1::allocator<DB::Granule>> const&) build_docker/./src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:194:5
#7 0x555dcad02074 in DB::MergeTreeDataPartWriterCompact::fillDataChecksums(DB::MergeTreeDataPartChecksums&) build_docker/./src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:268:9
#8 0x555dcad048c4 in DB::MergeTreeDataPartWriterCompact::fillChecksums(DB::MergeTreeDataPartChecksums&, std::__1::unordered_set<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>&) build_docker/./src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:430:9
#9 0x555dcb23274e in DB::MergedBlockOutputStream::finalizePartAsync(std::__1::shared_ptr<DB::IMergeTreeDataPart> const&, bool, DB::NamesAndTypesList const*, DB::MergeTreeDataPartChecksums*) build_docker/./src/Storages/MergeTree/MergedBlockOutputStream.cpp:151:13
#10 0x555dcb218dd6 in DB::MergeTreeDataWriter::writeTempPartImpl(DB::BlockWithPartition&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, long, bool) build_docker/./src/Storages/MergeTree/MergeTreeDataWriter.cpp:592:27
#11 0x555dcb20f9e9 in DB::MergeTreeDataWriter::writeTempPart(DB::BlockWithPartition&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>) build_docker/./src/Storages/MergeTree/MergeTreeDataWriter.cpp:385:12
#12 0x555dcb671fb2 in DB::MergeTreeSink::consume(DB::Chunk) build_docker/./src/Storages/MergeTree/MergeTreeSink.cpp:87:40
#13 0x555dcca72e19 in DB::SinkToStorage::onConsume(DB::Chunk) build_docker/./src/Processors/Sinks/SinkToStorage.cpp:24:5
#14 0x555dcc7a40a2 in DB::ExceptionKeepingTransform::work()::$_1::operator()() const build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:150:51
#15 0x555dcc7a40a2 in decltype(std::declval<DB::ExceptionKeepingTransform::work()::$_1&>()()) std::__1::__invoke[abi:v15000]<DB::ExceptionKeepingTransform::work()::$_1&>(DB::ExceptionKeepingTransform::work()::$_1&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#16 0x555dcc7a40a2 in void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::ExceptionKeepingTransform::work()::$_1&>(DB::ExceptionKeepingTransform::work()::$_1&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#17 0x555dcc7a40a2 in std::__1::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()>::operator()[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#18 0x555dcc7a40a2 in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()>>(std::__1::__function::__policy_storage const*) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#19 0x555dcc7a3c0d in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#20 0x555dcc7a3c0d in std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#21 0x555dcc7a3c0d in DB::runStep(std::__1::function<void ()>, DB::ThreadStatus*, std::__1::atomic<unsigned long>*) build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:114:9
#22 0x555dcc7a302b in DB::ExceptionKeepingTransform::work() build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:150:34
#23 0x555dcbdf4dd6 in DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:47:26
#24 0x555dcbdf4dd6 in DB::ExecutionThreadContext::executeTask() build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:95:9
#25 0x555dcbdc86a8 in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273:26
#26 0x555dcbdc7346 in DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) build_docker/./src/Processors/Executors/PipelineExecutor.cpp:148:5
#27 0x555dcbe0e7f8 in DB::PushingPipelineExecutor::finish() build_docker/./src/Processors/Executors/PushingPipelineExecutor.cpp:122:19
Uninitialized value was stored to memory at
#0 0x555d97d0538a in __msan_memcpy (/workspace/clickhouse+0x86df38a) (BuildId: cf0aaf59a1404debf76a40cd09d400f81f34a2e9)
#1 0x555daebaf145 in void DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul>::insert_assume_reserved<char8_t const*, char8_t const*>(char8_t const*, char8_t const*) build_docker/./src/Common/PODArray.h:566:13
#2 0x555daebaf145 in void DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul>::insert<char8_t const*, char8_t const*>(char8_t const*, char8_t const*) build_docker/./src/Common/PODArray.h:473:9
#3 0x555daebaf145 in DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul>::PODArray(char8_t const*, char8_t const*) build_docker/./src/Common/PODArray.h:354:9
#4 0x555dc8cd1167 in DB::ColumnString::ColumnString(DB::ColumnString const&) build_docker/./src/Columns/ColumnString.cpp:32:5
#5 0x555dc8ceef81 in COWHelper<DB::IColumn, DB::ColumnString>::clone() const build_docker/./src/Common/COW.h:289:93
#6 0x555d993ab7a9 in DB::IColumn::mutate(COW<DB::IColumn>::immutable_ptr<DB::IColumn>) (/workspace/clickhouse+0x9d857a9) (BuildId: cf0aaf59a1404debf76a40cd09d400f81f34a2e9)
#7 0x555dc40a42d6 in DB::Block::mutateColumns() build_docker/./src/Core/Block.cpp:470:39
#8 0x555dcacfbeaa in DB::MergeTreeDataPartWriterCompact::write(DB::Block const&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 63ul, 64ul> const*) build_docker/./src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:179:37
#9 0x555dcb231951 in DB::MergedBlockOutputStream::writeImpl(DB::Block const&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 63ul, 64ul> const*) build_docker/./src/Storages/MergeTree/MergedBlockOutputStream.cpp:329:13
#10 0x555dcb231951 in DB::MergedBlockOutputStream::writeWithPermutation(DB::Block const&, DB::PODArray<unsigned long, 4096ul, Allocator<false, false>, 63ul, 64ul> const*) build_docker/./src/Storages/MergeTree/MergedBlockOutputStream.cpp:64:5
#11 0x555dcb2170ff in DB::MergeTreeDataWriter::writeTempPartImpl(DB::BlockWithPartition&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>, long, bool) build_docker/./src/Storages/MergeTree/MergeTreeDataWriter.cpp:578:10
#12 0x555dcb20f9e9 in DB::MergeTreeDataWriter::writeTempPart(DB::BlockWithPartition&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::Context const>) build_docker/./src/Storages/MergeTree/MergeTreeDataWriter.cpp:385:12
#13 0x555dcb671fb2 in DB::MergeTreeSink::consume(DB::Chunk) build_docker/./src/Storages/MergeTree/MergeTreeSink.cpp:87:40
#14 0x555dcca72e19 in DB::SinkToStorage::onConsume(DB::Chunk) build_docker/./src/Processors/Sinks/SinkToStorage.cpp:24:5
#15 0x555dcc7a40a2 in DB::ExceptionKeepingTransform::work()::$_1::operator()() const build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:150:51
#16 0x555dcc7a40a2 in decltype(std::declval<DB::ExceptionKeepingTransform::work()::$_1&>()()) std::__1::__invoke[abi:v15000]<DB::ExceptionKeepingTransform::work()::$_1&>(DB::ExceptionKeepingTransform::work()::$_1&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#17 0x555dcc7a40a2 in void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::ExceptionKeepingTransform::work()::$_1&>(DB::ExceptionKeepingTransform::work()::$_1&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#18 0x555dcc7a40a2 in std::__1::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()>::operator()[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#19 0x555dcc7a40a2 in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()>>(std::__1::__function::__policy_storage const*) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#20 0x555dcc7a3c0d in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#21 0x555dcc7a3c0d in std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#22 0x555dcc7a3c0d in DB::runStep(std::__1::function<void ()>, DB::ThreadStatus*, std::__1::atomic<unsigned long>*) build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:114:9
#23 0x555dcc7a302b in DB::ExceptionKeepingTransform::work() build_docker/./src/Processors/Transforms/ExceptionKeepingTransform.cpp:150:34
#24 0x555dcbdf4dd6 in DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:47:26
#25 0x555dcbdf4dd6 in DB::ExecutionThreadContext::executeTask() build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:95:9
#26 0x555dcbdc86a8 in DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273:26
#27 0x555dcbdc7346 in DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) build_docker/./src/Processors/Executors/PipelineExecutor.cpp:148:5
#28 0x555dcbe0e7f8 in DB::PushingPipelineExecutor::finish() build_docker/./src/Processors/Executors/PushingPipelineExecutor.cpp:122:19
#29 0x555dc7f77752 in DB::SystemLog<DB::TextLogElement>::flushImpl(std::__1::vector<DB::TextLogElement, std::__1::allocator<DB::TextLogElement>> const&, unsigned long) build_docker/./src/Interpreters/SystemLog.cpp:512:18
Uninitialized value was stored to memory at
#0 0x555d97d0e60a in realloc (/workspace/clickhouse+0x86e860a) (BuildId: cf0aaf59a1404debf76a40cd09d400f81f34a2e9)
#1 0x555daea99c87 in Allocator<false, false>::realloc(void*, unsigned long, unsigned long, unsigned long) build_docker/./src/Common/Allocator.h:118:30
#2 0x555d97de8b68 in void DB::PODArrayBase<1ul, 4096ul, Allocator<false, false>, 63ul, 64ul>::resize<>(unsigned long) (/workspace/clickhouse+0x87c2b68) (BuildId: cf0aaf59a1404debf76a40cd09d400f81f34a2e9)
#3 0x555dc8cef6c8 in DB::ColumnString::insert(DB::Field const&) build_docker/./src/Columns/ColumnString.h:125:15
#4 0x555dc8137f98 in DB::TextLogElement::appendToBlock(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&) const build_docker/./src/Interpreters/TextLog.cpp:69:19
#5 0x555dc7f76d9c in DB::SystemLog<DB::TextLogElement>::flushImpl(std::__1::vector<DB::TextLogElement, std::__1::allocator<DB::TextLogElement>> const&, unsigned long) build_docker/./src/Interpreters/SystemLog.cpp:488:18
#6 0x555dc7f74a21 in DB::SystemLog<DB::TextLogElement>::savingThreadFunction() build_docker/./src/Interpreters/SystemLog.cpp:452:17
#7 0x555daef63c46 in DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()::operator()() const build_docker/./src/Common/SystemLogBase.cpp:247:69
#8 0x555daef63c46 in decltype(std::declval<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&>()()) std::__1::__invoke[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#9 0x555daef63c46 in decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/tuple:1789:1
#10 0x555daef63c46 in decltype(auto) std::__1::apply[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&) build_docker/./contrib/llvm-project/libcxx/include/tuple:1798:1
#11 0x555daef63c46 in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()::operator()() build_docker/./src/Common/ThreadPool.h:242:13
#12 0x555daef63b1e in decltype(std::declval<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>()()) std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#13 0x555daef63b1e in void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#14 0x555daef63b1e in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'(), void ()>::operator()[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#15 0x555daef63b1e in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#16 0x555daee8b9ce in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#17 0x555daee8b9ce in std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#18 0x555daee8b9ce in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/./src/Common/ThreadPool.cpp:426:13
#19 0x555daee9a64a in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/./src/Common/ThreadPool.cpp:179:73
#20 0x555daee9a64a in decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#21 0x555daee9a64a in void std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/thread:284:5
#22 0x555daee9a64a in void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/./contrib/llvm-project/libcxx/include/thread:295:5
#23 0x7f5124a87ac2 in start_thread nptl/pthread_create.c:442:8
Uninitialized value was stored to memory at
#0 0x555d97d0538a in __msan_memcpy (/workspace/clickhouse+0x86df38a) (BuildId: cf0aaf59a1404debf76a40cd09d400f81f34a2e9)
#1 0x555dc8cef6e4 in DB::ColumnString::insert(DB::Field const&) build_docker/./src/Columns/ColumnString.h:126:9
#2 0x555dc8137f98 in DB::TextLogElement::appendToBlock(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&) const build_docker/./src/Interpreters/TextLog.cpp:69:19
#3 0x555dc7f76d9c in DB::SystemLog<DB::TextLogElement>::flushImpl(std::__1::vector<DB::TextLogElement, std::__1::allocator<DB::TextLogElement>> const&, unsigned long) build_docker/./src/Interpreters/SystemLog.cpp:488:18
#4 0x555dc7f74a21 in DB::SystemLog<DB::TextLogElement>::savingThreadFunction() build_docker/./src/Interpreters/SystemLog.cpp:452:17
#5 0x555daef63c46 in DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()::operator()() const build_docker/./src/Common/SystemLogBase.cpp:247:69
#6 0x555daef63c46 in decltype(std::declval<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&>()()) std::__1::__invoke[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#7 0x555daef63c46 in decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/tuple:1789:1
#8 0x555daef63c46 in decltype(auto) std::__1::apply[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&) build_docker/./contrib/llvm-project/libcxx/include/tuple:1798:1
#9 0x555daef63c46 in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()::operator()() build_docker/./src/Common/ThreadPool.h:242:13
#10 0x555daef63b1e in decltype(std::declval<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>()()) std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#11 0x555daef63b1e in void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#12 0x555daef63b1e in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'(), void ()>::operator()[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#13 0x555daef63b1e in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#14 0x555daee8b9ce in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#15 0x555daee8b9ce in std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#16 0x555daee8b9ce in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/./src/Common/ThreadPool.cpp:426:13
#17 0x555daee9a64a in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/./src/Common/ThreadPool.cpp:179:73
#18 0x555daee9a64a in decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#19 0x555daee9a64a in void std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/thread:284:5
#20 0x555daee9a64a in void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/./contrib/llvm-project/libcxx/include/thread:295:5
#21 0x7f5124a87ac2 in start_thread nptl/pthread_create.c:442:8
Uninitialized value was stored to memory at
#0 0x555d97d05682 in __msan_memmove (/workspace/clickhouse+0x86df682) (BuildId: cf0aaf59a1404debf76a40cd09d400f81f34a2e9)
#1 0x555d990fe939 in DB::Field::Field(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) (/workspace/clickhouse+0x9ad8939) (BuildId: cf0aaf59a1404debf76a40cd09d400f81f34a2e9)
#2 0x555dc8137f68 in DB::TextLogElement::appendToBlock(std::__1::vector<COW<DB::IColumn>::mutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::mutable_ptr<DB::IColumn>>>&) const build_docker/./src/Interpreters/TextLog.cpp:69:26
#3 0x555dc7f76d9c in DB::SystemLog<DB::TextLogElement>::flushImpl(std::__1::vector<DB::TextLogElement, std::__1::allocator<DB::TextLogElement>> const&, unsigned long) build_docker/./src/Interpreters/SystemLog.cpp:488:18
#4 0x555dc7f74a21 in DB::SystemLog<DB::TextLogElement>::savingThreadFunction() build_docker/./src/Interpreters/SystemLog.cpp:452:17
#5 0x555daef63c46 in DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()::operator()() const build_docker/./src/Common/SystemLogBase.cpp:247:69
#6 0x555daef63c46 in decltype(std::declval<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&>()()) std::__1::__invoke[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#7 0x555daef63c46 in decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/tuple:1789:1
#8 0x555daef63c46 in decltype(auto) std::__1::apply[abi:v15000]<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&, std::__1::tuple<>&) build_docker/./contrib/llvm-project/libcxx/include/tuple:1798:1
#9 0x555daef63c46 in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()::operator()() build_docker/./src/Common/ThreadPool.h:242:13
#10 0x555daef63b1e in decltype(std::declval<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>()()) std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#11 0x555daef63b1e in void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'()&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9
#12 0x555daef63b1e in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'(), void ()>::operator()[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12
#13 0x555daef63b1e in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()>(DB::SystemLogBase<DB::TextLogElement>::startup()::'lambda'()&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16
#14 0x555daee8b9ce in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16
#15 0x555daee8b9ce in std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12
#16 0x555daee8b9ce in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/./src/Common/ThreadPool.cpp:426:13
#17 0x555daee9a64a in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/./src/Common/ThreadPool.cpp:179:73
#18 0x555daee9a64a in decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23
#19 0x555daee9a64a in void std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/thread:284:5
#20 0x555daee9a64a in void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/./contrib/llvm-project/libcxx/include/thread:295:5
#21 0x7f5124a87ac2 in start_thread nptl/pthread_create.c:442:8
Uninitialized value was created by a heap deallocation
#0 0x555d97d66509 in operator delete(void*, unsigned long) (/workspace/clickhouse+0x8740509) (BuildId: cf0aaf59a1404debf76a40cd09d400f81f34a2e9)
#1 0x555dc3d28256 in void std::__1::__libcpp_operator_delete[abi:v15000]<void*, unsigned long>(void*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/new:256:3
#2 0x555dc3d28256 in void std::__1::__do_deallocate_handle_size[abi:v15000]<>(void*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/new:282:10
#3 0x555dc3d28256 in std::__1::__libcpp_deallocate[abi:v15000](void*, unsigned long, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/new:296:14
#4 0x555dc3d28256 in std::__1::allocator<char>::deallocate[abi:v15000](char*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/__memory/allocator.h:128:13
#5 0x555dc3d28256 in std::__1::allocator_traits<std::__1::allocator<char>>::deallocate[abi:v15000](std::__1::allocator<char>&, char*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/__memory/allocator_traits.h:282:13
#6 0x555dc3d28256 in std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>::~basic_string() build_docker/./contrib/llvm-project/libcxx/include/string:2334:9
#7 0x555dc3d28256 in void DB::Exception::addMessage<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>(fmt::v8::basic_format_string<char, fmt::v8::type_identity<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>::type>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>&&) build_docker/./src/Common/Exception.h:134:9
#8 0x555dc5b75a73 in DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::doVisit(std::__1::shared_ptr<DB::IAST> const&) build_docker/./src/Interpreters/InDepthNodeVisitor.h:75:15
#9 0x555dc5af958d in void DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visitImplMain<false>(std::__1::shared_ptr<DB::IAST> const&) build_docker/./src/Interpreters/InDepthNodeVisitor.h:61:9
#10 0x555dc5af958d in void DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visitImpl<false>(std::__1::shared_ptr<DB::IAST> const&) build_docker/./src/Interpreters/InDepthNodeVisitor.h:51:13
#11 0x555dc5af958d in DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::__1::shared_ptr<DB::IAST> const>::visit(std::__1::shared_ptr<DB::IAST> const&) build_docker/./src/Interpreters/InDepthNodeVisitor.h:32:13
#12 0x555dc5af958d in DB::ExpressionAnalyzer::getRootActions(std::__1::shared_ptr<DB::IAST> const&, bool, std::__1::shared_ptr<DB::ActionsDAG>&, bool) build_docker/./src/Interpreters/ExpressionAnalyzer.cpp:483:48
#13 0x555dc5b24859 in DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) build_docker/./src/Interpreters/ExpressionAnalyzer.cpp:1504:5
#14 0x555dc5b3857d in DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::__1::shared_ptr<DB::FilterDAGInfo> const&, std::__1::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) build_docker/./src/Interpreters/ExpressionAnalyzer.cpp:2060:24
#15 0x555dc75fc36a in DB::InterpreterSelectQuery::getSampleBlockImpl() build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:946:23
#16 0x555dc75e1305 in DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>)::$_0::operator()(bool) const build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:769:25
#17 0x555dc75cd1da in DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, std::__1::optional<DB::Pipe>, std::__1::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, std::__1::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::__1::shared_ptr<DB::PreparedSets>) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:772:5
#18 0x555dc75c0f1e in DB::InterpreterSelectQuery::InterpreterSelectQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context> const&, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./src/Interpreters/InterpreterSelectQuery.cpp:206:7
#19 0x555dc7821898 in std::__1::__unique_if<DB::InterpreterSelectQuery>::__unique_single std::__1::make_unique[abi:v15000]<DB::InterpreterSelectQuery, std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&>(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:714:32
#20 0x555dc7821898 in DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::__1::shared_ptr<DB::IAST> const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:254:16
#21 0x555dc781c398 in DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./src/Interpreters/InterpreterSelectWithUnionQuery.cpp:152:13
#22 0x555dc84d309a in DB::InterpreterSelectWithUnionQuery* std::__1::construct_at[abi:v15000]<DB::InterpreterSelectWithUnionQuery, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, DB::InterpreterSelectWithUnionQuery*>(DB::InterpreterSelectWithUnionQuery*, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/construct_at.h:35:48
#23 0x555dc84d309a in void std::__1::allocator_traits<std::__1::allocator<DB::InterpreterSelectWithUnionQuery>>::construct[abi:v15000]<DB::InterpreterSelectWithUnionQuery, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, void, void>(std::__1::allocator<DB::InterpreterSelectWithUnionQuery>&, DB::InterpreterSelectWithUnionQuery*, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/allocator_traits.h:298:9
#24 0x555dc84d309a in std::__1::__shared_ptr_emplace<DB::InterpreterSelectWithUnionQuery, std::__1::allocator<DB::InterpreterSelectWithUnionQuery>>::__shared_ptr_emplace[abi:v15000]<std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&>(std::__1::allocator<DB::InterpreterSelectWithUnionQuery>, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:292:9
#25 0x555dc84d309a in std::__1::shared_ptr<DB::InterpreterSelectWithUnionQuery> std::__1::allocate_shared[abi:v15000]<DB::InterpreterSelectWithUnionQuery, std::__1::allocator<DB::InterpreterSelectWithUnionQuery>, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, void>(std::__1::allocator<DB::InterpreterSelectWithUnionQuery> const&, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:953:55
#26 0x555dc84d1511 in std::__1::shared_ptr<DB::InterpreterSelectWithUnionQuery> std::__1::make_shared[abi:v15000]<DB::InterpreterSelectWithUnionQuery, std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, void>(std::__1::shared_ptr<DB::IAST>&, std::__1::shared_ptr<DB::Context>&, DB::SelectQueryOptions&, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:962:12
#27 0x555dc84d1511 in DB::interpretSubquery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, DB::SelectQueryOptions const&) build_docker/./src/Interpreters/interpretSubquery.cpp:115:12
#28 0x555dc84ccd4c in DB::interpretSubquery(std::__1::shared_ptr<DB::IAST> const&, std::__1::shared_ptr<DB::Context const>, unsigned long, std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&) build_docker/./src/Interpreters/interpretSubquery.cpp:28:12
#29 0x555dc5bbe768 in DB::ActionsMatcher::makeSet(DB::ASTFunction const&, DB::ActionsMatcher::Data&, bool) build_docker/./src/Interpreters/ActionsVisitor.cpp:1432:32
#30 0x555dc5b99ceb in DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) build_docker/./src/Interpreters/ActionsVisitor.cpp:922:28
#31 0x555dc5b96af4 in DB::ActionsMatcher::visit(std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) build_docker/./src/Interpreters/ActionsVisitor.cpp:699:9
#32 0x555dc5ba0e56 in DB::ActionsMatcher::visit(DB::ASTFunction const&, std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) build_docker/./src/Interpreters/ActionsVisitor.cpp:1201:17
#33 0x555dc5b96af4 in DB::ActionsMatcher::visit(std::__1::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) build_docker/./src/Interpreters/ActionsVisitor.cpp:699:9
SUMMARY: MemorySanitizer: use-of-uninitialized-value build_docker/./contrib/lz4/lib/lz4.c:822:88 in LZ4_putIndexOnHash
```
https://s3.amazonaws.com/clickhouse-test-reports/55262/ceade9305e91f9cc8a27987ae78f13c913e69b20/fuzzer_astfuzzermsan/report.html | https://github.com/ClickHouse/ClickHouse/issues/55279 | https://github.com/ClickHouse/ClickHouse/pull/55786 | 5d1cc1425a0ff1013fb866c637fe292e7e4773bc | 4724c84dac978c32457ddc2a9e4b44f8a5f6124f | "2023-10-06T17:42:12Z" | c++ | "2023-10-20T08:02:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55273 | ["docs/en/operations/settings/settings.md", "docs/en/operations/system-tables/columns.md", "docs/en/sql-reference/statements/show.md", "src/Core/Settings.h", "src/Storages/System/StorageSystemColumns.cpp", "tests/queries/0_stateless/02775_show_columns_mysql_compatibility.reference", "tests/queries/0_stateless/02775_show_columns_mysql_compatibility.sh", "tests/queries/0_stateless/02775_show_columns_mysql_compatibility.sql"] | use_mysql_types_in_show_columns=1 has no effect over CH protocol |
**Describe the unexpected behaviour**
`use_mysql_types_in_show_columns=1` does not show MySQL types over the native ClickHouse protocol.
**How to reproduce**
```sql
Dales-MacBook-Pro.local :) SET use_mysql_types_in_show_columns=1;
SET use_mysql_types_in_show_columns = 1
Query id: 5e526661-5f40-41c3-8d5d-55183721a9ed
Ok.
0 rows in set. Elapsed: 0.001 sec.
Dales-MacBook-Pro.local :) SHOW COLUMNS FROM uk_price_paid
SHOW COLUMNS FROM uk_price_paid
Query id: db6cc02a-227c-4607-924b-a59a8d40acc0
┌─field─────┬─type────────────────────────────────────────────────────────────────────────────────┬─null─┬─key─────┬─default─┬─extra─┐
│ addr1     │ String                                                                              │ NO   │ PRI SOR │ ᴺᵁᴸᴸ    │       │
│ addr2     │ String                                                                              │ NO   │ PRI SOR │ ᴺᵁᴸᴸ    │       │
│ county    │ LowCardinality(String)                                                              │ NO   │         │ ᴺᵁᴸᴸ    │       │
│ date      │ Date                                                                                │ NO   │         │ ᴺᵁᴸᴸ    │       │
│ district  │ LowCardinality(String)                                                              │ NO   │         │ ᴺᵁᴸᴸ    │       │
│ duration  │ Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2)                               │ NO   │         │ ᴺᵁᴸᴸ    │       │
│ is_new    │ UInt8                                                                               │ NO   │         │ ᴺᵁᴸᴸ    │       │
│ locality  │ LowCardinality(String)                                                              │ NO   │         │ ᴺᵁᴸᴸ    │       │
│ postcode1 │ LowCardinality(String)                                                              │ NO   │ PRI SOR │ ᴺᵁᴸᴸ    │       │
│ postcode2 │ LowCardinality(String)                                                              │ NO   │ PRI SOR │ ᴺᵁᴸᴸ    │       │
│ price     │ UInt32                                                                              │ NO   │         │ ᴺᵁᴸᴸ    │       │
│ street    │ LowCardinality(String)                                                              │ NO   │         │ ᴺᵁᴸᴸ    │       │
│ town      │ LowCardinality(String)                                                              │ NO   │         │ ᴺᵁᴸᴸ    │       │
│ type      │ Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4) │ NO   │         │ ᴺᵁᴸᴸ    │       │
└───────────┴─────────────────────────────────────────────────────────────────────────────────────┴──────┴─────────┴─────────┴───────┘
14 rows in set. Elapsed: 0.002 sec.
```
* Which ClickHouse server version to use: 23.10.1.148
* Which interface to use, if matters: ClickHouse
**Expected behavior**
I'd expect this to be independent of the protocol and for this setting to still return MySQL types over the ClickHouse protocol. | https://github.com/ClickHouse/ClickHouse/issues/55273 | https://github.com/ClickHouse/ClickHouse/pull/55298 | ab9e6f5f6a023ea6ea434aaf18ef52d743d0644a | 312fe8e31ddf0f98fd13bda104cd7f29c0c76406 | "2023-10-06T12:13:37Z" | c++ | "2023-10-07T20:03:24Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,272 | ["src/Interpreters/LogicalExpressionsOptimizer.cpp", "tests/queries/0_stateless/02893_trash_optimization.reference", "tests/queries/0_stateless/02893_trash_optimization.sql"] | Segmentation fault on query with "Engine=Merge + ALL INNER JOIN + WHERE with OR" | this query:
SELECT *
FROM merge('system', '^one$') AS one
ALL INNER JOIN
(
SELECT *
FROM system.one
) AS subquery ON one.dummy = subquery.dummy
WHERE (one.dummy = 0) OR (one.dummy = 1)
produces a segmentation fault in ClickHouse.
proof (latest version): https://fiddle.clickhouse.com/5ca1d3c3-de60-48fd-ad5f-2c7316139c60 | https://github.com/ClickHouse/ClickHouse/issues/55272 | https://github.com/ClickHouse/ClickHouse/pull/55353 | 808b78984f284d8ea7879155306f97ab772936e3 | b619d07baf29ce4ae252cb2ada4f1510c68b41b0 | "2023-10-06T12:07:48Z" | c++ | "2023-10-09T00:42:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,225 | ["src/Disks/ObjectStorages/DiskObjectStorageTransaction.cpp", "src/Storages/StorageReplicatedMergeTree.cpp", "tests/integration/test_replicated_s3_zero_copy_drop_partition/__init__.py", "tests/integration/test_replicated_s3_zero_copy_drop_partition/configs/storage_conf.xml", "tests/integration/test_replicated_s3_zero_copy_drop_partition/test.py"] | Drop detached partition with a S3 disk is not removing the blob files | **Describe what's wrong**
Dropping detached parts from a table stored on S3 does not remove the files from AWS S3.
**Does it reproduce on recent release?**
Yes, on the 23.9
**How to reproduce**
- Use latest ClickHouse version
- Run `CREATE TABLE mytable (...) Engine = ReplicatedOne ... PARTITION BY something SETTINGS disk = 's3_disk' `
- Insert some data
- Run `ALTER TABLE FETCH PARTITION 'my_partition' FROM ''`
- Get the bucket size
- Run `ALTER TABLE DROP DETACHED PARTITION 'my_partition'`
- Wait at least 8 minutes and get the bucket size again, the files have not been removed from s3.
- Re-check after 15 minutes, same behavior.
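The steps above can be condensed into a rough SQL sketch (hedged: the table, column, and partition names are illustrative, and `ReplicatedMergeTree` with a named S3 disk is an assumption about the setup, not the reporter's exact DDL):

```sql
-- Illustrative reproduction, assuming an S3-backed disk named 's3_disk'
CREATE TABLE mytable (d Date, v UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/mytable', '{replica}')
PARTITION BY toYYYYMM(d)
ORDER BY d
SETTINGS disk = 's3_disk';

INSERT INTO mytable SELECT today(), number FROM numbers(1000000);

-- Fetch a partition into detached/, then drop it
ALTER TABLE mytable FETCH PARTITION '202310' FROM '/clickhouse/tables/{shard}/mytable';
ALTER TABLE mytable DROP DETACHED PARTITION '202310' SETTINGS allow_drop_detached = 1;

-- Observed: the S3 bucket size does not shrink, even after 15 minutes.
```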
**Expected behavior**
On local disks, dropping detached parts frees disk space; I would expect the same for remote disks, i.e. removing only the metadata is not enough.
Once you have dropped the detached partition, you no longer have the information about the remote blob files, so it will generate orphaned files on S3.
**Error message and/or stacktrace**
N/A
**Additional context**
| https://github.com/ClickHouse/ClickHouse/issues/55225 | https://github.com/ClickHouse/ClickHouse/pull/55309 | 68ce6b9b00d5e60de0ed0cbb46a068d2b051f921 | 666c690b4f4356d283b48eed91adb749cdeb9366 | "2023-10-04T08:30:22Z" | c++ | "2023-10-10T09:48:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,183 | ["docs/en/operations/system-tables/information_schema.md", "src/Storages/System/attachInformationSchemaTables.cpp", "tests/queries/0_stateless/01161_information_schema.reference", "tests/queries/0_stateless/02206_information_schema_show_database.reference"] | `information_schema.columns` lacks field `EXTRA` (MySQL compatibility) | It is specifically required for Tableau _Desktop_ via MySQL, as one of the popular use cases is seamless data import/export between Tableau Desktop and Tableau Online using the same connector (MySQL in this case).
**Describe the unexpected behaviour**
This generated query:
```sql
SELECT TABLE_SCHEMA,
NULL,
TABLE_NAME,
COLUMN_NAME,
multiIf(upper(DATA_TYPE) = 'DECIMAL', 3, upper(DATA_TYPE) = 'DECIMAL UNSIGNED', 3, upper(DATA_TYPE) = 'TINYINT',
if((position('ZEROFILL', upper(COLUMN_TYPE)) = 0) AND (position('UNSIGNED', upper(COLUMN_TYPE)) = 0) AND
(position('(1)', COLUMN_TYPE) != 0), -7, -6), upper(DATA_TYPE) = 'TINYINT UNSIGNED', if(
(position('ZEROFILL', upper(COLUMN_TYPE)) = 0) AND
(position('UNSIGNED', upper(COLUMN_TYPE)) = 0) AND (position('(1)', COLUMN_TYPE) != 0), -7,
-6), upper(DATA_TYPE) = 'BOOLEAN', 16, upper(DATA_TYPE) = 'SMALLINT', 5,
upper(DATA_TYPE) = 'SMALLINT UNSIGNED', 5, upper(DATA_TYPE) = 'INT', 4,
upper(DATA_TYPE) = 'INT UNSIGNED', 4, upper(DATA_TYPE) = 'FLOAT', 7, upper(DATA_TYPE) = 'FLOAT UNSIGNED',
7, upper(DATA_TYPE) = 'DOUBLE', 8, upper(DATA_TYPE) = 'DOUBLE UNSIGNED', 8, upper(DATA_TYPE) = 'NULL', 0,
upper(DATA_TYPE) = 'TIMESTAMP', 93, upper(DATA_TYPE) = 'BIGINT', -5,
upper(DATA_TYPE) = 'BIGINT UNSIGNED', -5, upper(DATA_TYPE) = 'MEDIUMINT', 4,
upper(DATA_TYPE) = 'MEDIUMINT UNSIGNED', 4, upper(DATA_TYPE) = 'DATE', 91, upper(DATA_TYPE) = 'TIME', 92,
upper(DATA_TYPE) = 'DATETIME', 93, upper(DATA_TYPE) = 'YEAR', 91, upper(DATA_TYPE) = 'VARCHAR', 12,
upper(DATA_TYPE) = 'VARBINARY', -3, upper(DATA_TYPE) = 'BIT', -7, upper(DATA_TYPE) = 'JSON', -1,
upper(DATA_TYPE) = 'ENUM', 1, upper(DATA_TYPE) = 'SET', 1, upper(DATA_TYPE) = 'TINYBLOB', -3,
upper(DATA_TYPE) = 'TINYTEXT', 12, upper(DATA_TYPE) = 'MEDIUMBLOB', -4, upper(DATA_TYPE) = 'MEDIUMTEXT',
-1, upper(DATA_TYPE) = 'LONGBLOB', -4, upper(DATA_TYPE) = 'LONGTEXT', -1, upper(DATA_TYPE) = 'BLOB', -4,
upper(DATA_TYPE) = 'TEXT', -1, upper(DATA_TYPE) = 'CHAR', 1, upper(DATA_TYPE) = 'BINARY', -2,
upper(DATA_TYPE) = 'GEOMETRY', -2, upper(DATA_TYPE) = 'UNKNOWN', 1111, upper(DATA_TYPE) = 'POINT', -2,
upper(DATA_TYPE) = 'LINESTRING', -2, upper(DATA_TYPE) = 'POLYGON', -2, upper(DATA_TYPE) = 'MULTIPOINT',
-2, upper(DATA_TYPE) = 'MULTILINESTRING', -2, upper(DATA_TYPE) = 'MULTIPOLYGON', -2,
upper(DATA_TYPE) = 'GEOMETRYCOLLECTION', -2, upper(DATA_TYPE) = 'GEOMCOLLECTION', -2, 1111) AS DATA_TYPE,
upper(multiIf(upper(DATA_TYPE) = 'TINYINT', multiIf(
(position('ZEROFILL', upper(COLUMN_TYPE)) = 0) AND (position('UNSIGNED', upper(COLUMN_TYPE)) = 0) AND
(position('(1)', COLUMN_TYPE) != 0), 'BIT',
(position('UNSIGNED', upper(COLUMN_TYPE)) != 0) AND (position('UNSIGNED', upper(DATA_TYPE)) = 0),
'TINYINT UNSIGNED', DATA_TYPE), (position('UNSIGNED', upper(COLUMN_TYPE)) != 0) AND
(position('UNSIGNED', upper(DATA_TYPE)) = 0) AND
(position('SET', upper(DATA_TYPE)) != 1) AND
(position('ENUM', upper(DATA_TYPE)) != 1),
concat(DATA_TYPE, ' UNSIGNED'), upper(DATA_TYPE) = 'POINT', 'GEOMETRY',
upper(DATA_TYPE) = 'LINESTRING', 'GEOMETRY', upper(DATA_TYPE) = 'POLYGON', 'GEOMETRY',
upper(DATA_TYPE) = 'MULTIPOINT', 'GEOMETRY', upper(DATA_TYPE) = 'MULTILINESTRING', 'GEOMETRY',
upper(DATA_TYPE) = 'MULTIPOLYGON', 'GEOMETRY', upper(DATA_TYPE) = 'GEOMETRYCOLLECTION', 'GEOMETRY',
upper(DATA_TYPE) = 'GEOMCOLLECTION', 'GEOMETRY', upper(DATA_TYPE))) AS TYPE_NAME,
upper(multiIf(upper(DATA_TYPE) = 'DATE', 10, upper(DATA_TYPE) = 'TIME',
8 + if(DATETIME_PRECISION > 0, DATETIME_PRECISION + 1, DATETIME_PRECISION),
(upper(DATA_TYPE) = 'DATETIME') OR (upper(DATA_TYPE) = 'TIMESTAMP'),
19 + if(DATETIME_PRECISION > 0, DATETIME_PRECISION + 1, DATETIME_PRECISION),
upper(DATA_TYPE) = 'YEAR', 4,
(upper(DATA_TYPE) = 'TINYINT') AND (position('ZEROFILL', upper(COLUMN_TYPE)) = 0) AND
(position('UNSIGNED', upper(COLUMN_TYPE)) = 0) AND (position('(1)', COLUMN_TYPE) != 0), 1,
(upper(DATA_TYPE) = 'MEDIUMINT') AND (position('UNSIGNED', upper(COLUMN_TYPE)) != 0), 8,
upper(DATA_TYPE) = 'JSON', 1073741824, upper(DATA_TYPE) = 'GEOMETRY', 65535,
upper(DATA_TYPE) = 'POINT', 65535, upper(DATA_TYPE) = 'LINESTRING', 65535,
upper(DATA_TYPE) = 'POLYGON', 65535, upper(DATA_TYPE) = 'MULTIPOINT', 65535,
upper(DATA_TYPE) = 'MULTILINESTRING', 65535, upper(DATA_TYPE) = 'MULTIPOLYGON', 65535,
upper(DATA_TYPE) = 'GEOMETRYCOLLECTION', 65535, upper(DATA_TYPE) = 'GEOMCOLLECTION', 65535,
CHARACTER_MAXIMUM_LENGTH IS NULL, NUMERIC_PRECISION, CHARACTER_MAXIMUM_LENGTH > 2147483647,
2147483647,
CHARACTER_MAXIMUM_LENGTH)) AS COLUMN_SIZE,
65535 AS BUFFER_LENGTH,
upper(multiIf(upper(DATA_TYPE) = 'DECIMAL', NUMERIC_SCALE,
(upper(DATA_TYPE) = 'FLOAT') OR (upper(DATA_TYPE) = 'DOUBLE'),
if(NUMERIC_SCALE IS NULL, 0, NUMERIC_SCALE),
NULL)) AS DECIMAL_DIGITS,
10 AS NUM_PREC_RADIX,
if(IS_NULLABLE = 'NO', 0, if(IS_NULLABLE = 'YES', 1, 2)) AS NULLABLE,
COLUMN_COMMENT AS REMARKS,
COLUMN_DEFAULT AS COLUMN_DEF,
0 AS SQL_DATA_TYPE,
0 AS SQL_DATETIME_SUB,
if(CHARACTER_OCTET_LENGTH > 2147483647, 2147483647,
CHARACTER_OCTET_LENGTH) AS CHAR_OCTET_LENGTH,
ORDINAL_POSITION,
IS_NULLABLE,
NULL AS SCOPE_CATALOG,
NULL AS SCOPE_SCHEMA,
NULL AS SCOPE_TABLE,
NULL AS SOURCE_DATA_TYPE,
if(EXTRA LIKE '%auto_increment%', 'YES', 'NO') AS IS_AUTOINCREMENT,
if(EXTRA LIKE '%GENERATED%', 'YES', 'NO') AS IS_GENERATEDCOLUMN
FROM (SELECT database AS table_catalog,
database AS table_schema,
table AS table_name,
name AS column_name,
position AS ordinal_position,
default_expression AS column_default,
type LIKE 'Nullable(%)' AS is_nullable,
type AS data_type,
character_octet_length AS character_maximum_length,
character_octet_length,
numeric_precision,
numeric_precision_radix,
numeric_scale,
datetime_precision,
NULL AS character_set_catalog,
NULL AS character_set_schema,
NULL AS character_set_name,
NULL AS collation_catalog,
NULL AS collation_schema,
NULL AS collation_name,
NULL AS domain_catalog,
NULL AS domain_schema,
NULL AS domain_name,
comment AS column_comment,
type AS column_type,
table_catalog AS TABLE_CATALOG,
table_schema AS TABLE_SCHEMA,
table_name AS TABLE_NAME,
column_name AS COLUMN_NAME,
ordinal_position AS ORDINAL_POSITION,
column_default AS COLUMN_DEFAULT,
is_nullable AS IS_NULLABLE,
data_type AS DATA_TYPE,
character_maximum_length AS CHARACTER_MAXIMUM_LENGTH,
character_octet_length AS CHARACTER_OCTET_LENGTH,
numeric_precision AS NUMERIC_PRECISION,
numeric_precision_radix AS NUMERIC_PRECISION_RADIX,
numeric_scale AS NUMERIC_SCALE,
datetime_precision AS DATETIME_PRECISION,
character_set_catalog AS CHARACTER_SET_CATALOG,
character_set_schema AS CHARACTER_SET_SCHEMA,
character_set_name AS CHARACTER_SET_NAME,
collation_catalog AS COLLATION_CATALOG,
collation_schema AS COLLATION_SCHEMA,
collation_name AS COLLATION_NAME,
domain_catalog AS DOMAIN_CATALOG,
domain_schema AS DOMAIN_SCHEMA,
domain_name AS DOMAIN_NAME,
column_comment AS COLUMN_COMMENT,
column_type AS COLUMN_TYPE
FROM system.columns
HAVING (COLUMN_NAME LIKE '%')
AND ((TABLE_NAME = 'commits') AND (TABLE_SCHEMA = 'default'))) AS COLUMNS
WHERE (TABLE_SCHEMA = 'default')
AND (TABLE_NAME = 'commits')
AND (COLUMN_NAME LIKE '%')
ORDER BY TABLE_SCHEMA ASC, TABLE_NAME ASC, ORDINAL_POSITION ASC
```
fails with
```
Code: 47. DB::Exception: Missing columns: 'EXTRA' while processing query
```
**How to reproduce**
* Which ClickHouse server version to use: master
* Which interface to use, if matters: MySQL
**Expected behavior**
`information_schema.columns` has the `EXTRA` column in lowercase and uppercase. It will probably also be good to implement the rest of the default MySQL columns as well.
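A minimal probe that reproduces the failure without Tableau's generated query might look like this (sketch; the schema name is illustrative):

```sql
-- Fails on master with "Missing columns: 'EXTRA'";
-- in MySQL this column carries e.g. 'auto_increment' or 'VIRTUAL GENERATED'.
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, EXTRA
FROM information_schema.columns
WHERE TABLE_SCHEMA = 'default'
LIMIT 5;
```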
CC @rschu1ze | https://github.com/ClickHouse/ClickHouse/issues/55183 | https://github.com/ClickHouse/ClickHouse/pull/55215 | 20868f3b656466021f57ccabbe15c1c278ebc826 | 282200ef50cc8c48dd72c37bf701ef479a0be9fa | "2023-09-29T23:30:04Z" | c++ | "2023-10-08T10:21:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,182 | ["docs/en/operations/system-tables/information_schema.md", "src/Storages/System/attachInformationSchemaTables.cpp", "tests/queries/0_stateless/01161_information_schema.reference", "tests/queries/0_stateless/02206_information_schema_show_database.reference"] | `information_schema.tables` lacks field `TABLE_ROWS` (MySQL compatibility) | Required for Tableau Online.
Missing columns: 'TABLE_ROWS' while processing query:
```sql
SELECT TABLE_ROWS
FROM (SELECT database AS table_catalog,
database AS table_schema,
name AS table_name,
multiIf(is_temporary, 'LOCAL TEMPORARY', engine LIKE '%View', 'VIEW', engine LIKE 'System%', 'SYSTEM VIEW',
has_own_data = 0, 'FOREIGN TABLE', 'BASE TABLE') AS table_type,
total_bytes AS data_length,
'utf8mb4_0900_ai_ci' AS table_collation,
comment AS table_comment,
table_catalog AS TABLE_CATALOG,
table_schema AS TABLE_SCHEMA,
table_name AS TABLE_NAME,
table_type AS TABLE_TYPE,
data_length AS DATA_LENGTH,
table_collation AS TABLE_COLLATION,
table_comment AS TABLE_COMMENT
FROM system.tables
HAVING (table_name = 'commits')
AND (table_schema = 'default')) AS TABLES
WHERE (table_schema = 'default')
AND (table_name = 'commits')
```
**Describe the unexpected behaviour**
The `TABLE_ROWS` column should be available in both lowercase and uppercase variants. It would probably also be good to implement the rest of the default MySQL columns as well.
**How to reproduce**
* Which ClickHouse server version to use: master
* Which interface to use, if matters: MySQL
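A stripped-down probe for the same failure (sketch; the schema name is illustrative):

```sql
-- Fails on master with "Missing columns: 'TABLE_ROWS'";
-- MySQL exposes this as an (approximate) per-table row count.
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_ROWS
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'default';
```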
CC @rschu1ze | https://github.com/ClickHouse/ClickHouse/issues/55182 | https://github.com/ClickHouse/ClickHouse/pull/55215 | 20868f3b656466021f57ccabbe15c1c278ebc826 | 282200ef50cc8c48dd72c37bf701ef479a0be9fa | "2023-09-29T23:19:23Z" | c++ | "2023-10-08T10:21:34Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,181 | ["cmake/split_debug_symbols.cmake", "docker/packager/binary/build.sh", "programs/CMakeLists.txt", "programs/self-extracting/CMakeLists.txt", "tests/ci/build_check.py"] | Publish the self-contained binary without a debug info | **Use case**
Someone on the internet pointed out that it is large: https://twitter.com/eatonphil/status/1707783632566292596
The reason for the size is that we include the debug info to have line numbers in stack traces.
Maybe it is not needed - the symbol names will be present anyway.
If we remove the debug info, it will be well under 100 MB.
CC @eatonphil
| https://github.com/ClickHouse/ClickHouse/issues/55181 | https://github.com/ClickHouse/ClickHouse/pull/56617 | 0dabdf43b85273c49901ca68320e26bf1537643c | 38ca18d8e76b1f8c584759b04cbbe300db65f2b2 | "2023-09-29T22:34:22Z" | c++ | "2023-11-11T21:18:26Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,174 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "src/Storages/MergeTree/MergeTreeSettings.h", "src/Storages/MergeTree/ReplicatedMergeTreeCleanupThread.cpp", "src/Storages/StorageMergeTree.cpp", "tests/integration/test_broken_detached_part_clean_up/__init__.py", "tests/integration/test_broken_detached_part_clean_up/configs/store_cleanup.xml", "tests/integration/test_broken_detached_part_clean_up/test.py"] | Remove the removal of broken detached parts | This is a harmful feature - it has to be removed.
If there are broken parts, it requires immediate attention of an engineer.
They should never be deleted.
I remember this feature was added by one customer because they asked too many times.
But the truth is - the introduction of this feature is harmful even for that customer. | https://github.com/ClickHouse/ClickHouse/issues/55174 | https://github.com/ClickHouse/ClickHouse/pull/55184 | 7b7abd11f40c848fd9dccaa30a419de3ad67314b | de8e068da753ac886eaefee48392728bdaebea8d | "2023-09-29T21:11:34Z" | c++ | "2023-09-30T03:10:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,106 | ["src/Storages/MergeTree/KeyCondition.cpp", "tests/queries/0_stateless/02890_partition_prune_in_extra_columns.reference", "tests/queries/0_stateless/02890_partition_prune_in_extra_columns.sql"] | count(*) incorrect value with IN clause | **Describe what's wrong**
When querying a datetime64 keyed table it is possible to correctly query a row count with literal value filters on individual fields but when providing the literals within an `IN` clause the output is incorrect.
**How to reproduce**
* Which ClickHouse server version to use
clickhouse-server:23.3.13.6 inside a docker container
* Which interface to use, if matters
I don't believe it matters, I was using HTTP
* Non-default settings, if any
None
```
CREATE TABLE events
(
`dt` DateTime64(6, 'UTC'),
`type` Int32,
`event` Nullable(String)
)
ENGINE = MergeTree()
PARTITION BY (type, toStartOfWeek(dt))
TTL toDateTime(dt) + INTERVAL 2 DAY
-- order by is the non-unique PK
ORDER BY dt;
INSERT INTO events
SELECT
toDateTime64('2022-12-12 12:00:00', 6) - (((12 + randPoisson(12)) * 60)),
floor(randUniform(5, 100)),
[null, '200', '404', '502'][toInt32(randBinomial(4, 0.1)) + 1]
FROM numbers(50_000_000);
-- find a dt and type pair with > 1 rows
select *, toUnixTimestamp64Nano(dt) from events where dt = '2022-12-12T11:24:00' and type = 86;
-- 19 rows for me locally
select *
from events
where dt = '2022-12-12T11:24:00' and type = 86;
-- 19
select count(*)
from events
where dt = '2022-12-12T11:24:00' and type = 86;
-- 19
select count(*)
from events
where (type, dt) in (86, '2022-12-12T11:24:00');
-- 18576
-- works with the experimental analyzer
Set allow_experimental_analyzer = 1;
select count(*)
from events
where (type, dt) in (86, '2022-12-12T11:24:00');
-- 19
```
**Expected behavior**
the count output should remain the same in both cases
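For reference, the invariant the two queries should share can be stated outside SQL: a tuple `IN` filter must select exactly the rows selected by per-field equality on the same literals. A small Python sketch of that invariant (the sample rows are made up):

```python
# The tuple-IN filter must select exactly the rows selected by
# per-field equality on the same literals.
rows = [
    (86, "2022-12-12T11:24:00"),  # matching pair
    (86, "2022-12-12T11:25:00"),  # same type, different dt
    (87, "2022-12-12T11:24:00"),  # different type, same dt
]
target = (86, "2022-12-12T11:24:00")

per_field = [r for r in rows if r[0] == target[0] and r[1] == target[1]]
tuple_in = [r for r in rows if r in {target}]

assert per_field == tuple_in == [target]
```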
| https://github.com/ClickHouse/ClickHouse/issues/55106 | https://github.com/ClickHouse/ClickHouse/pull/55172 | ad991dc78a94ac5e0992aa072509ac4eae564e5e | db9377fc5989c6e7b3b172f9560b8493b68fe7c2 | "2023-09-28T16:24:10Z" | c++ | "2023-09-30T01:17:14Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,091 | ["src/Processors/Sources/ShellCommandSource.cpp"] | udfs let server crash |
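For context, the crashing query (logged below) drives the `executable()` table function: ClickHouse writes one JSONEachRow-encoded row per line to the script's stdin and reads result rows back from its stdout. The actual `udfs_minhash.py` is not part of the report; a minimal sketch of that stdin/stdout protocol, with placeholder logic and field names taken from the schema declared in the query, might look like:

```python
import json
import sys

def handle(line: str) -> str:
    """Turn one JSONEachRow input line into one output line.

    Field names mirror the result schema declared in the query's
    executable() call; the logic is a placeholder, since the real
    script would compute MinHash neighbors here.
    """
    row = json.loads(line)
    return json.dumps({
        "id": row.get("id", ""),
        "neighbors": [],
        "FP": 0.0,
        "FN": 0.0,
    })

def main() -> None:
    # ClickHouse streams one input row per line on stdin and reads result
    # rows back from stdout; flushing per row avoids pipe deadlocks.
    for line in sys.stdin:
        if line.strip():
            sys.stdout.write(handle(line) + "\n")
            sys.stdout.flush()

# When run as the actual UDF script, the entry point would simply call main().
```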
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.642844 [ 2389513 ] <Fatal> BaseDaemon: ########################################
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.642899 [ 2389513 ] <Fatal> BaseDaemon: (version 23.8.2.7 (official build), build id: 8D11186275018B3F7DD7F6CCF24A69C19328DA7F, git hash: f73c8f378745d0520eec7e3519fc0ce6991639b9) (from thread 1768951) (query_id: 844bb8f1-86aa-4ddb-ac06-e969784342f7) (query: SELECT * FROM executable('udfs_minhash.py', JSONEachRow, 'id String, neighbors Array(String), FP Float32,FN Float32', (select platform_id as id, 2 as ngram ,replaceRegexpAll(replaceRegexpAll(substringUTF8(lower(content),1,10000),'([0-9a-zA-Z]+|[\\x{4E00}-\\x{9FD5}])',' \1 '),'\s+',' ') as text from research_system.article_union_tag where platform in ('δΌθ΄¨','θε','ζ΄θ§') and text !='' and title !='' limit 1000),settings command_read_timeout=100000,command_write_timeout=100000,max_command_execution_time=3600);) Received signal Aborted (6)
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.642940 [ 2389513 ] <Fatal> BaseDaemon:
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.642965 [ 2389513 ] <Fatal> BaseDaemon: Stack trace: 0x00007f8caa005a7c 0x00007f8ca9fb1476 0x00007f8ca9f977f3 0x000000000f7ff074 0x000000001348acd6 0x0000000012643fd2 0x0000000011ed560b 0x0000000011ebce68 0x0000000011ebc294 0x0000000011f604f6 0x0000000011f61407 0x00000000122a6095 0x00000000122a17f5 0x000000001310c5b9 0x000000001311e839 0x0000000015b104d4 0x0000000015b116d1 0x0000000015c47f07 0x0000000015c461dc 0x00007f8caa003b43 0x00007f8caa095a00
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.642995 [ 2389513 ] <Fatal> BaseDaemon: 2. ? @ 0x00007f8caa005a7c in ?
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643019 [ 2389513 ] <Fatal> BaseDaemon: 3. ? @ 0x00007f8ca9fb1476 in ?
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643036 [ 2389513 ] <Fatal> BaseDaemon: 4. ? @ 0x00007f8ca9f977f3 in ?
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643084 [ 2389513 ] <Fatal> BaseDaemon: 5. ? @ 0x000000000f7ff074 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643149 [ 2389513 ] <Fatal> BaseDaemon: 6. DB::ShellCommandSourceCoordinator::createPipe(String const&, std::vector<String, std::allocator<String>> const&, std::vector<DB::Pipe, std::allocator<DB::Pipe>>&&, DB::Block, std::shared_ptr<DB::Context const>, DB::ShellCommandSourceConfiguration const&) @ 0x000000001348acd6 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643195 [ 2389513 ] <Fatal> BaseDaemon: 7. DB::StorageExecutable::read(DB::QueryPlan&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageSnapshot> const&, DB::SelectQueryInfo&, std::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned long) @ 0x0000000012643fd2 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643228 [ 2389513 ] <Fatal> BaseDaemon: 8. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0x0000000011ed560b in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643251 [ 2389513 ] <Fatal> BaseDaemon: 9. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::optional<DB::Pipe>) @ 0x0000000011ebce68 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643280 [ 2389513 ] <Fatal> BaseDaemon: 10. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000011ebc294 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643308 [ 2389513 ] <Fatal> BaseDaemon: 11. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000011f604f6 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643335 [ 2389513 ] <Fatal> BaseDaemon: 12. DB::InterpreterSelectWithUnionQuery::execute() @ 0x0000000011f61407 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643380 [ 2389513 ] <Fatal> BaseDaemon: 13. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x00000000122a6095 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643404 [ 2389513 ] <Fatal> BaseDaemon: 14. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000122a17f5 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643433 [ 2389513 ] <Fatal> BaseDaemon: 15. DB::TCPHandler::runImpl() @ 0x000000001310c5b9 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643460 [ 2389513 ] <Fatal> BaseDaemon: 16. DB::TCPHandler::run() @ 0x000000001311e839 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643489 [ 2389513 ] <Fatal> BaseDaemon: 17. Poco::Net::TCPServerConnection::start() @ 0x0000000015b104d4 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643515 [ 2389513 ] <Fatal> BaseDaemon: 18. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b116d1 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643535 [ 2389513 ] <Fatal> BaseDaemon: 19. Poco::PooledThread::run() @ 0x0000000015c47f07 in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643564 [ 2389513 ] <Fatal> BaseDaemon: 20. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015c461dc in /usr/bin/clickhouse
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643589 [ 2389513 ] <Fatal> BaseDaemon: 21. ? @ 0x00007f8caa003b43 in ?
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.643613 [ 2389513 ] <Fatal> BaseDaemon: 22. ? @ 0x00007f8caa095a00 in ?
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.884310 [ 2389513 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: A9BA33088A6E58B8A1BE882810065B9D)
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.884565 [ 2389513 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[iZbp18ncx6mrfr5rpymxoxZ] 2023.09.28 18:23:53.884741 [ 2389513 ] <Fatal> BaseDaemon: Changed settings: max_insert_threads = 24, max_threads = 24, max_memory_usage = 204010946560, max_partitions_per_insert_block = 2000
| https://github.com/ClickHouse/ClickHouse/issues/55091 | https://github.com/ClickHouse/ClickHouse/pull/55103 | 8ae8371260567503e36ba3c461a200a53980dd79 | fa54b2142454ecf1bafde8c8f68c67b8c730c95f | "2023-09-28T10:28:41Z" | c++ | "2023-10-08T19:36:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,078 | ["src/Functions/FunctionsStringDistance.cpp", "src/Functions/array/arrayJaccardIndex.cpp", "tests/queries/0_stateless/02884_string_distance_function.reference", "tests/queries/0_stateless/02884_string_distance_function.sql"] | `byteJaccardIndex` is slow | **Describe the unexpected behaviour**
```
std::unordered_set<char> haystack_set(haystack, haystack + haystack_size);
std::unordered_set<char> needle_set(needle, needle + needle_size);
```
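Constructing two heap-allocated hash sets per row, as above, is the likely bottleneck: the alphabet has only 256 possible byte values, so a fixed-size membership table computes the same index with no hashing or allocation. A sketch of the equivalence in Python (the actual fix would be in the C++ implementation):

```python
def jaccard_sets(a: bytes, b: bytes) -> float:
    """Reference: Jaccard index over the sets of distinct byte values
    (assumes at least one input is non-empty)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def jaccard_bitmap(a: bytes, b: bytes) -> float:
    """Same result via two fixed 256-entry tables, with no per-row
    allocation or hashing, which is what the unordered_set version pays for."""
    seen_a = bytearray(256)
    seen_b = bytearray(256)
    for c in a:
        seen_a[c] = 1
    for c in b:
        seen_b[c] = 1
    inter = sum(seen_a[i] & seen_b[i] for i in range(256))
    union = sum(seen_a[i] | seen_b[i] for i in range(256))
    return inter / union

assert jaccard_bitmap(b"clikhouse", b"clickhouse") == 1.0  # same distinct byte set
assert jaccard_sets(b"abc", b"bcd") == jaccard_bitmap(b"abc", b"bcd") == 0.5
```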
```
play-eu :) SELECT DISTINCT arrayJoin(tokens(lower(text))) AS word, byteJaccardIndex(word, 'clickhouse') AS distance FROM hackernews ORDER BY distance DESC LIMIT 20
SELECT DISTINCT
arrayJoin(tokens(lower(text))) AS word,
byteJaccardIndex(word, 'clickhouse') AS distance
FROM hackernews
ORDER BY distance DESC
LIMIT 20
Query id: 5d3d9a9a-eb45-42a5-92d2-d38738dc32ea
┌─word────────────┬───────────distance─┐
│ clickhouse      │                  1 │
│ clickhouses     │                  1 │
│ clikhouse       │                  1 │
│ clickehouse     │                  1 │
│ clikchouse      │                  1 │
│ chickenlicious  │                0.9 │
│ bokehlicious    │                0.9 │
│ 22clickhouse    │                0.9 │
│ flushcookies    │                0.9 │
│ clickhousecloud │                0.9 │
│ licheckouts     │                0.9 │
│ clickhousesql   │                0.9 │
│ suchlike        │ 0.8888888888888888 │
│ choleski        │ 0.8888888888888888 │
│ clickhous       │ 0.8888888888888888 │
│ suckhole        │ 0.8888888888888888 │
│ suchlikes       │ 0.8888888888888888 │
│ chuckholes      │ 0.8888888888888888 │
│ suckholes       │ 0.8888888888888888 │
│ clickholes      │ 0.8888888888888888 │
└─────────────────┴────────────────────┘
20 rows in set. Elapsed: 48.896 sec. Processed 37.17 million rows, 12.72 GB (760.21 thousand rows/s., 260.16 MB/s.)
Peak memory usage: 1.83 GiB.
``` | https://github.com/ClickHouse/ClickHouse/issues/55078 | https://github.com/ClickHouse/ClickHouse/pull/55080 | f73eef9ed8cebf2baefb8ac6c14ce0035e5fdf86 | 383f8c58b60d06fa43110c91fa28f168c5cf0b76 | "2023-09-28T04:17:43Z" | c++ | "2023-09-28T18:56:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,047 | ["src/DataTypes/DataTypeFunction.h", "src/Functions/IFunction.cpp", "tests/queries/0_stateless/02891_functions_over_sparse_columns.reference", "tests/queries/0_stateless/02891_functions_over_sparse_columns.sql"] | isDefaultAt is not implemented for Function | v.23.8.2.7
I run the same query on the same data; sometimes it throws an error, and sometimes it works without problems.
` Code: 48. DB::Exception: isDefaultAt is not implemented for Function: while executing 'FUNCTION Capture[Int64](Int64) -> UInt8(toInt64(prev_id) :: 0) -> __lambda Function(Int64 -> UInt8) : 3'. (NOT_IMPLEMENTED) (version 23.8.2.7 (official build))`
```
drop table if exists test_table;
create table test_table engine MergeTree order by id as
select assumeNotNull(id) as id,
arrayCount((cr_id1) -> (toInt64(cr_id1) = toInt64(t.prev_id)),
t2.ids) as cnt
from mytable t
left join (select client_id,
groupArray(id) as ids
from mytable
group by client_id) t2
on t.client_id = t2.client_id
limit 100;
``` | https://github.com/ClickHouse/ClickHouse/issues/55047 | https://github.com/ClickHouse/ClickHouse/pull/55275 | 781da580ac31e3c901306b43fdce84cb1cb557da | caf3c85b389358750cef37420b204971bdb63459 | "2023-09-27T07:07:28Z" | c++ | "2023-10-08T13:27:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,031 | ["docs/en/operations/system-tables/information_schema.md", "src/Storages/System/attachInformationSchemaTables.cpp", "tests/queries/0_stateless/01161_information_schema.reference", "tests/queries/0_stateless/01161_information_schema.sql", "tests/queries/0_stateless/02206_information_schema_show_database.reference"] | MySQL compatibility: missing information_schema.tables.data_length field | This is a minor one and does not cause any visible issues, but it still causes an exception when using QuickSight.
```sql
SELECT data_length FROM information_schema.TABLES WHERE (table_schema = 'default');
```
fails with:
```
2023.09.26 18:07:57.609754 [ 47988 ] {} <Error> MySQLHandler: MySQLHandler: Cannot read packet: :
Code: 47. DB::Exception: Unknown expression identifier 'data_length' in scope SELECT data_length FROM information_schema.TABLES WHERE (table_schema = 'default') AND (table_name = '_test').
(UNKNOWN_IDENTIFIER), Stack trace (when copying this message, always include the lines below):
0. ? @ 0x000000000bf9f122 in ?
1. ? @ 0x000000000bf9f0f0 in ?
2. ? @ 0x000000002590b580 in ?
3. ? @ 0x00000000146e288e in ?
4. ? @ 0x000000000bf9104a in ?
5. ? @ 0x000000001d87b4ba in ?
6. ? @ 0x000000001d815102 in ?
7. ? @ 0x000000001d813ce1 in ?
8. ? @ 0x000000001d81c707 in ?
9. ? @ 0x000000001d81008e in ?
10. ? @ 0x000000001d80e80f in ?
11. ? @ 0x000000001d80e55e in ?
12. ? @ 0x000000001d800e17 in ?
13. ? @ 0x000000001da5d7cc in ?
14. ? @ 0x000000001da5cb55 in ?
15. ? @ 0x000000001d94f678 in ?
16. ? @ 0x000000001d94d636 in ?
17. ? @ 0x000000001dfa9ad0 in ?
18. ? @ 0x000000001dfac01c in ?
19. ? @ 0x000000001f49c845 in ?
20. ? @ 0x000000001f499b5a in ?
21. ? @ 0x000000002579be39 in ?
22. ? @ 0x000000002579c67b in ?
23. ? @ 0x00000000259895d4 in ?
24. ? @ 0x000000002598633a in ?
25. ? @ 0x000000002598501c in ?
26. ? @ 0x00007f36520b1947 in ?
27. ? @ 0x00007f3652137870 in ?
(version 23.9.1.1)
```
**How to reproduce**
* Which ClickHouse server version to use: latest master
* Which interface to use, if matters: MySQL
**Expected behavior**
`information_schema.tables.data_length` can be queried.
| https://github.com/ClickHouse/ClickHouse/issues/55031 | https://github.com/ClickHouse/ClickHouse/pull/55037 | 3a1663a46008c5164e4d3193ec9c14b1c5b63ad2 | e9c3032f838dcd20eda22adf04a425f88b862eaf | "2023-09-26T16:12:04Z" | c++ | "2023-09-27T12:09:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 55,023 | ["docs/en/engines/table-engines/mergetree-family/mergetree.md", "src/Core/Settings.h", "src/Storages/MergeTree/registerStorageMergeTree.cpp", "tests/queries/0_stateless/02903_empty_order_by_throws_error.reference", "tests/queries/0_stateless/02903_empty_order_by_throws_error.sh", "tests/queries/0_stateless/02904_empty_order_by_with_setting_enabled.reference", "tests/queries/0_stateless/02904_empty_order_by_with_setting_enabled.sh"] | Default ORDER BY key | I found the setting to enable default table engine, but you can only set default engine = MergeTree
therefore, one must always write `CREATE TABLE db.table ... ORDER BY tuple() AS SELECT ...` unless one uses a Log engine.
I would hope for an option MergeTree ORDER BY tuple() as a default engine, because then the create table statement would look like one in any other SQL language. | https://github.com/ClickHouse/ClickHouse/issues/55023 | https://github.com/ClickHouse/ClickHouse/pull/55899 | e2846d4c582b2086694ca56cb7724e92b2cd0e38 | 9f4f851505febe4b5fcd4fed462cc5e6351f0749 | "2023-09-26T13:22:55Z" | c++ | "2023-10-24T11:47:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,963 | ["docs/en/sql-reference/data-types/nullable.md", "docs/en/sql-reference/functions/array-functions.md", "src/Functions/array/arrayShingles.cpp", "tests/queries/0_stateless/02891_array_shingles.reference", "tests/queries/0_stateless/02891_array_shingles.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | `shingles` function for arrays. | **Use case**
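The requested semantics (every contiguous window of length `k` over the input array) can be sketched in Python for reference:

```python
def shingles(arr, k):
    """All contiguous windows of length k, in order.

    For len(arr) < k the result is empty; behavior for k <= 0 is not
    specified by the request and is left undefined here.
    """
    return [arr[i:i + k] for i in range(len(arr) - k + 1)]

print(shingles(["World", "Press", "Photo", "Awards"], 3))
# [['World', 'Press', 'Photo'], ['Press', 'Photo', 'Awards']]
```

This matches the worked example below: a 12-element array with k = 3 yields 10 shingles.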
```
shingles(['FotografΓas','ganadoras','del','World','Press','Photo','Awards','2014','http','t','co','F1iwa8gSWK'], 3)
= [['FotografΓas','ganadoras','del'],
['ganadoras','del','World'],
['del','World','Press'],
['World','Press','Photo'],
['Press','Photo','Awards'],
['Photo','Awards','2014'],
['Awards','2014','http'],
['2014','http','t'],
['http','t','co'],
['t','co','F1iwa8gSWK']]
``` | https://github.com/ClickHouse/ClickHouse/issues/54963 | https://github.com/ClickHouse/ClickHouse/pull/58396 | 1595dd8a3f0566baecab475fa8f32867b548743e | 316669b90f871fd73ff21f966268a98789f068e1 | "2023-09-24T21:32:03Z" | c++ | "2024-01-18T10:38:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,962 | ["src/TableFunctions/TableFunctionDictionary.cpp", "tests/queries/0_stateless/02916_dictionary_access.reference", "tests/queries/0_stateless/02916_dictionary_access.sh"] | non-granted dictionary can be accessed through table function | ## Problem
**Describe the unexpected behaviour**
A clear and concise description of what works not as it is supposed to.
A user with `CREATE TEMPORARY TABLE ON *.*` can read any dictionary. Even those which will raise `ACCESS_DENIED` when accesses through `dictGet`
## Steps to reproduce
Clickhouse version `23.8.2`
As `admin` user
```
CREATE DICTIONARY dict
(
id UInt64,
value String
)
PRIMARY KEY id
SOURCE(NULL())
LAYOUT(FLAT())
LIFETIME(MIN 0 MAX 1000);
CREATE USER user NOT IDENTIFIED;
grant CREATE TEMPORARY TABLE ON *.* to user;
```
As user `user`
```
select dictGet(dict, 'value', 1); // ACCESS_DENIED
select * from dictionary(dict); // OK
```
## Expected behavior
The same access checks should apply to both `dictGet` and `select * from dictionary`. | https://github.com/ClickHouse/ClickHouse/issues/54962 | https://github.com/ClickHouse/ClickHouse/pull/57362 | f26e31357d45b21e69f1b1e35d94f924a4ee4e86 | 252de64af3f73ef6866118e29bc048657c0c6e2a | "2023-09-24T21:12:46Z" | c++ | "2023-12-08T13:09:40Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,941 | ["src/Storages/AlterCommands.cpp", "tests/queries/0_stateless/01710_minmax_count_projection_modify_partition_key.reference", "tests/queries/0_stateless/01710_minmax_count_projection_modify_partition_key.sql"] | Optimization with implicit projections produces logical error if table's partition key was ALTERed by extending its Enum type. | **Describe what's wrong**
```
CREATE TABLE test (type Enum('x'), s String) ENGINE = MergeTree ORDER BY s PARTITION BY type;
INSERT INTO test VALUES ('x', 'Hello');
SELECT type, count() FROM test GROUP BY type ORDER BY type;
ALTER TABLE test MODIFY COLUMN type Enum('x', 'y');
INSERT INTO test VALUES ('y', 'World');
SELECT type, count() FROM test GROUP BY type ORDER BY type;
```
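One note on why this repro works: extending an Enum is a metadata-only ALTER, because every existing (name, code) pair survives unchanged in the new map, so the part written before the ALTER stays readable. The block-structure mismatch in the error below suggests that the implicit min/max/count projection of that old part still carries the narrower `Enum8('x')` type. The extension rule, sketched in Python (illustrative only):

```python
# Illustrative only: why the ALTER in the repro is metadata-only.
old_enum = {"x": 1}
new_enum = {"x": 1, "y": 2}

# Every old (name, code) pair survives unchanged in the extended map,
# so parts written before the ALTER need no rewrite.
assert all(new_enum[name] == code for name, code in old_enum.items())

# A part written before the ALTER stores code 1 and still decodes as 'x'.
decode = {code: name for name, code in new_enum.items()}
assert decode[1] == "x"
```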
Example:
```
milovidov@milovidov-desktop:~/work/ClickHouse$ clickhouse-local
ClickHouse local version 23.9.1.1.
milovidov-desktop :) CREATE TABLE test (type Enum('x'), s String) ENGINE = MergeTree ORDER BY s PARTITION BY type;
CREATE TABLE test
(
`type` Enum('x'),
`s` String
)
ENGINE = MergeTree
PARTITION BY type
ORDER BY s
Query id: 49ac0b02-e3c0-4b47-8a8e-1f38c973368f
Ok.
0 rows in set. Elapsed: 0.020 sec.
milovidov-desktop :) INSERT INTO test VALUES ('x', 'Hello');
INSERT INTO test FORMAT Values
Query id: 5b747e2c-6966-4f23-877d-c3d99d11f40e
Ok.
1 row in set. Elapsed: 0.010 sec.
milovidov-desktop :) SELECT type, count() FROM test GROUP BY type ORDER BY type;
SELECT
type,
count()
FROM test
GROUP BY type
ORDER BY type ASC
Query id: 20cba2b0-3f00-41f7-8fbd-a74320714781
┌─type─┬─count()─┐
│ x    │       1 │
└──────┴─────────┘
1 row in set. Elapsed: 0.011 sec.
milovidov-desktop :) ALTER TABLE test MODIFY COLUMN type Enum('x', 'y');
ALTER TABLE test
MODIFY COLUMN `type` Enum('x', 'y')
Query id: b4470893-2ed6-456a-9276-6d2f26a50a60
Ok.
0 rows in set. Elapsed: 0.001 sec.
milovidov-desktop :) INSERT INTO test VALUES ('y', 'World');
INSERT INTO test FORMAT Values
Query id: 5a400225-e0e3-4504-8237-427737679db0
Ok.
1 row in set. Elapsed: 0.001 sec.
milovidov-desktop :) SELECT type, count() FROM test GROUP BY type ORDER BY type;
SELECT
type,
count()
FROM test
GROUP BY type
ORDER BY type ASC
Query id: 74e2925b-a8f9-4e92-907c-fa428e1d0df2
0 rows in set. Elapsed: 0.002 sec.
Received exception:
Code: 49. DB::Exception: Block structure mismatch in AggregatingStep stream: different types:
type Enum8('x' = 1, 'y' = 2) Int8(size = 0)
type Enum8('x' = 1) Int8(size = 0). (LOGICAL_ERROR)
milovidov-desktop :) SET optimize_use_
optimize_use_implicit_projections optimize_use_projections
milovidov-desktop :) SET optimize_use_implicit_projections = 0
SET optimize_use_implicit_projections = 0
Query id: e782ee1f-cfb1-45e3-94bd-22e32f3424d7
Ok.
0 rows in set. Elapsed: 0.000 sec.
milovidov-desktop :) SELECT type, count() FROM test GROUP BY type ORDER BY type;
SELECT
type,
count()
FROM test
GROUP BY type
ORDER BY type ASC
Query id: 1d760294-f6a9-44c6-b837-afeeeaba4d59
┌─type─┬─count()─┐
│ x    │       1 │
│ y    │       1 │
└──────┴─────────┘
2 rows in set. Elapsed: 0.004 sec.
milovidov-desktop :)
``` | https://github.com/ClickHouse/ClickHouse/issues/54941 | https://github.com/ClickHouse/ClickHouse/pull/54943 | 2a2655739112036beefdcad22a4d6a3754b68ad7 | 320e4c47f3c509b1c68bbd4e8e52effe19946a07 | "2023-09-23T00:14:02Z" | c++ | "2023-09-23T20:11:52Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,902 | ["src/Storages/StorageReplicatedMergeTree.cpp", "tests/queries/0_stateless/02989_replicated_merge_tree_invalid_metadata_version.reference", "tests/queries/0_stateless/02989_replicated_merge_tree_invalid_metadata_version.sql"] | Metadata on replica is not up to date with common metadata in Zookeeper | ```sql
alter table test add column x Int32;
Received exception from server (version 23.8.2):
Code: 517. DB::Exception: Received from localhost:9000.
DB::Exception: Metadata on replica is not up to date with common metadata in Zookeeper.
It means that this replica still not applied some of previous alters.
Probably too many alters executing concurrently (highly not recommended). You can retry this error. (CANNOT_ASSIGN_ALTER)
```
I tried table detach/attach, no changes.
```
SELECT
name,
substr(replace(value, '\n', ' '), 1, 50) AS value_s,
cityHash64(value)
FROM system.zookeeper
WHERE path = (
SELECT zookeeper_path
FROM system.replicas
WHERE table = 'test'
)
┌─name───────────────────────┬─value_s────────────────────────────────────────────┬────cityHash64(value)─┐
│ alter_partition_version    │                                                    │ 11160318154034397263 │
│ metadata                   │ metadata format version: 1 date column: sampling   │  4322393711038923972 │
│ temp                       │                                                    │ 11160318154034397263 │
│ table_shared_id            │ fc28061c-c819-4a0b-bc28-061cc8194a0b               │  7914521819417058279 │
│ log                        │                                                    │ 11160318154034397263 │
│ leader_election            │                                                    │ 11160318154034397263 │
│ columns                    │ columns format version: 1 243 columns: `access_tim │   594710015478039907 │
│ blocks                     │                                                    │ 11160318154034397263 │
│ nonincrement_block_numbers │                                                    │ 11160318154034397263 │
│ replicas                   │ last added replica: chdw1-1.sde10186               │  4784695667935888577 │
│ async_blocks               │                                                    │ 11160318154034397263 │
│ quorum                     │                                                    │ 11160318154034397263 │
│ pinned_part_uuids          │ {"part_uuids":"[]"}                                │ 16899393181724385792 │
│ block_numbers              │                                                    │ 11160318154034397263 │
│ mutations                  │                                                    │ 11160318154034397263 │
│ zero_copy_s3               │                                                    │ 11160318154034397263 │
│ part_moves_shard           │                                                    │ 11160318154034397263 │
│ zero_copy_hdfs             │                                                    │ 11160318154034397263 │
│ lost_part_count            │                                                    │ 11160318154034397263 │
└────────────────────────────┴────────────────────────────────────────────────────┴──────────────────────┘
```
```
SELECT
name,
substr(replace(value, '\n', ' '), 1, 50) AS value_s,
cityHash64(value)
FROM system.zookeeper
WHERE path = (
SELECT replica_path
FROM system.replicas
WHERE table = 'test'
)
┌─name────────────────────────┬─value_s────────────────────────────────────────────┬────cityHash64(value)─┐
│ is_lost                     │ 0                                                  │ 10408321403207385874 │
│ metadata                    │ metadata format version: 1 date column: sampling   │  4322393711038923972 │
│ is_active                   │ UUID_'a57446f7-2cb9-47f9-b1b6-65573c29067c'        │  1870750756359300609 │
│ mutation_pointer            │ 0000000047                                         │ 10517147932185828611 │
│ columns                     │ columns format version: 1 243 columns: `access_tim │   594710015478039907 │
│ max_processed_insert_time   │ 1695306428                                         │  5882609297731162392 │
│ flags                       │                                                    │ 11160318154034397263 │
│ log_pointer                 │ 547980                                             │ 18155853374234986669 │
│ min_unprocessed_insert_time │ 0                                                  │ 10408321403207385874 │
│ host                        │ host: chdw1-1.sde10186.mycmdb.net port: 9009 tcp_p │ 16322273289710190182 │
│ parts                       │                                                    │ 11160318154034397263 │
│ queue                       │                                                    │ 11160318154034397263 │
│ metadata_version            │ 0                                                  │ 10408321403207385874 │
└─────────────────────────────┴────────────────────────────────────────────────────┴──────────────────────┘
```
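One detail visible in the two listings above: the `metadata` and `columns` znodes hash identically on the table path and on the replica path, while the replica's `metadata_version` reads 0. So the metadata content matches, and only the version counter looks stale. Spelled out (hashes copied from the listings; illustrative check only):

```python
# Hashes copied from the two system.zookeeper listings above.
table = {"metadata": 4322393711038923972, "columns": 594710015478039907}
replica = {"metadata": 4322393711038923972, "columns": 594710015478039907}

# The metadata content is byte-identical on both paths...
assert table == replica

# ...yet the replica-side version counter reads 0, which is what the
# CANNOT_ASSIGN_ALTER check appears to trip over.
replica_metadata_version = 0
```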
I wonder what else I should check to find the cause of the issue? | https://github.com/ClickHouse/ClickHouse/issues/54902 | https://github.com/ClickHouse/ClickHouse/pull/60078 | 914b19aadedb56b22881052a42a84db35b32c85e | 392081256c7d3d34ca824189370293e4610ee1ef | "2023-09-21T18:16:33Z" | c++ | "2024-02-20T18:24:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,896 | ["docs/en/sql-reference/statements/alter/partition.md", "src/Storages/MergeTree/MergeTreeData.cpp", "tests/queries/0_stateless/02888_attach_partition_from_different_tables.reference", "tests/queries/0_stateless/02888_attach_partition_from_different_tables.sql"] | ATTACH PARTITION from source table with different index as destination table throws `Integer divide by zero` error | **Describe the unexpected behaviour**
ATTACH PARTITION from source table with different index as destination table throws `Integer divide by zero` error.
**How to reproduce**
ClickHouse Cloud v23.8.1.41541
ClickHouse 23.8.2.7
```
create or replace table t1 (
a UInt32,
b String,
INDEX bf b TYPE tokenbf_v1(8192, 3, 0) GRANULARITY 1
)
engine = MergeTree
order by a;
insert into t1 select number, toString(number) from numbers(10);
create or replace table t2 (
a UInt32,
b String,
INDEX bf b TYPE bloom_filter GRANULARITY 1
)
engine = MergeTree
order by a;
alter table t2 attach partition tuple() from t1;
select * from t2 where b = '1';
```
**Expected behavior**
ATTACH PARTITION from a source table whose index is not compatible with the destination table's index should not be allowed.
**Error message and/or stacktrace**
```
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807241 [ 536 ] <Fatal> BaseDaemon: ########################################
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807287 [ 536 ] <Fatal> BaseDaemon: (version 23.8.1.41541 (official build), build id: 0C6F9D74CE985B0E68623C6AE4BBA9E4FBEA2B93, git hash: 2bb8937b298aecf48bb3fe441e69435c645d7166) (from thread 521) (query_id: 827b6395-1d52-4a5d-b9f7-172025e760d2) (query: select * from t2 where b = '1';) Received signal Arithmetic exception (8)
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807312 [ 536 ] <Fatal> BaseDaemon: Integer divide by zero.
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807339 [ 536 ] <Fatal> BaseDaemon: Stack trace: 0x00000000152774b1 0x000000001527cbaf 0x0000000015238333 0x0000000015234348 0x000000001523153b 0x0000000015ab9473 0x0000000015ab4607 0x0000000015ab42b0 0x0000000015abc8fa 0x0000000015abeaa7 0x0000000015a7d2a5 0x0000000015a94d6a 0x000000001438adf9 0x00000000146c3b2e 0x00000000146bf42e 0x0000000015681244 0x0000000015697ff9 0x000000000ebaeffc 0x00000000186a1654 0x00000000186a2871 0x000000001882ba67 0x000000001882949c 0x00007f53b4b8eb43 0x00007f53b4c20a00
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807398 [ 536 ] <Fatal> BaseDaemon: 2. DB::MergeTreeIndexConditionBloomFilter::mayBeTrueOnGranule(DB::MergeTreeIndexGranuleBloomFilter const*) const @ 0x00000000152774b1 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807431 [ 536 ] <Fatal> BaseDaemon: 3. ? @ 0x000000001527cbaf in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807460 [ 536 ] <Fatal> BaseDaemon: 4. DB::MergeTreeDataSelectExecutor::filterMarksUsingIndex(std::shared_ptr<DB::IMergeTreeIndex const>, std::shared_ptr<DB::IMergeTreeIndexCondition>, std::shared_ptr<DB::IMergeTreeDataPart const>, DB::MarkRanges const&, DB::Settings const&, DB::MergeTreeReaderSettings const&, unsigned long&, DB::MarkCache*, DB::UncompressedCache*, Poco::Logger*) @ 0x0000000015238333 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807493 [ 536 ] <Fatal> BaseDaemon: 5. ? @ 0x0000000015234348 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807528 [ 536 ] <Fatal> BaseDaemon: 6. DB::MergeTreeDataSelectExecutor::filterPartsByPrimaryKeyAndSkipIndexes(std::vector<std::shared_ptr<DB::IMergeTreeDataPart const>, std::allocator<std::shared_ptr<DB::IMergeTreeDataPart const>>>&&, std::vector<std::shared_ptr<DB::AlterConversions const>, std::allocator<std::shared_ptr<DB::AlterConversions const>>>&&, std::shared_ptr<DB::StorageInMemoryMetadata const>, std::shared_ptr<DB::Context const> const&, DB::KeyCondition const&, DB::UsefulSkipIndexes const&, DB::MergeTreeReaderSettings const&, Poco::Logger*, unsigned long, std::vector<DB::ReadFromMergeTree::IndexStat, std::allocator<DB::ReadFromMergeTree::IndexStat>>&, bool) @ 0x000000001523153b in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807578 [ 536 ] <Fatal> BaseDaemon: 7. DB::ReadFromMergeTree::selectRangesToReadImpl(std::vector<std::shared_ptr<DB::IMergeTreeDataPart const>, std::allocator<std::shared_ptr<DB::IMergeTreeDataPart const>>>, std::vector<std::shared_ptr<DB::AlterConversions const>, std::allocator<std::shared_ptr<DB::AlterConversions const>>>, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, std::shared_ptr<DB::Context const>, unsigned long, std::shared_ptr<std::unordered_map<String, long, std::hash<String>, std::equal_to<String>, std::allocator<std::pair<String const, long>>>>, DB::MergeTreeData const&, std::vector<String, std::allocator<String>> const&, bool, Poco::Logger*, std::optional<DB::ReadFromMergeTree::Indexes>&) @ 0x0000000015ab9473 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807625 [ 536 ] <Fatal> BaseDaemon: 8. DB::ReadFromMergeTree::selectRangesToRead(std::vector<std::shared_ptr<DB::IMergeTreeDataPart const>, std::allocator<std::shared_ptr<DB::IMergeTreeDataPart const>>>, std::vector<std::shared_ptr<DB::AlterConversions const>, std::allocator<std::shared_ptr<DB::AlterConversions const>>>, std::shared_ptr<DB::PrewhereInfo> const&, DB::ActionDAGNodes const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, DB::SelectQueryInfo const&, std::shared_ptr<DB::Context const>, unsigned long, std::shared_ptr<std::unordered_map<String, long, std::hash<String>, std::equal_to<String>, std::allocator<std::pair<String const, long>>>>, DB::MergeTreeData const&, std::vector<String, std::allocator<String>> const&, bool, Poco::Logger*, std::optional<DB::ReadFromMergeTree::Indexes>&) @ 0x0000000015ab4607 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807651 [ 536 ] <Fatal> BaseDaemon: 9. DB::ReadFromMergeTree::selectRangesToRead(std::vector<std::shared_ptr<DB::IMergeTreeDataPart const>, std::allocator<std::shared_ptr<DB::IMergeTreeDataPart const>>>, std::vector<std::shared_ptr<DB::AlterConversions const>, std::allocator<std::shared_ptr<DB::AlterConversions const>>>) const @ 0x0000000015ab42b0 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807693 [ 536 ] <Fatal> BaseDaemon: 10. DB::ReadFromMergeTree::getAnalysisResult() const @ 0x0000000015abc8fa in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807720 [ 536 ] <Fatal> BaseDaemon: 11. DB::ReadFromMergeTree::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) @ 0x0000000015abeaa7 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807749 [ 536 ] <Fatal> BaseDaemon: 12. DB::ISourceStep::updatePipeline(std::vector<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>, std::allocator<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) @ 0x0000000015a7d2a5 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807784 [ 536 ] <Fatal> BaseDaemon: 13. DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x0000000015a94d6a in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807815 [ 536 ] <Fatal> BaseDaemon: 14. DB::InterpreterSelectWithUnionQuery::execute() @ 0x000000001438adf9 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807842 [ 536 ] <Fatal> BaseDaemon: 15. ? @ 0x00000000146c3b2e in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807878 [ 536 ] <Fatal> BaseDaemon: 16. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000146bf42e in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807914 [ 536 ] <Fatal> BaseDaemon: 17. DB::TCPHandler::runImpl() @ 0x0000000015681244 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807940 [ 536 ] <Fatal> BaseDaemon: 18. DB::TCPHandler::run() @ 0x0000000015697ff9 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807961 [ 536 ] <Fatal> BaseDaemon: 19. ? @ 0x000000000ebaeffc in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.807993 [ 536 ] <Fatal> BaseDaemon: 20. Poco::Net::TCPServerConnection::start() @ 0x00000000186a1654 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.808021 [ 536 ] <Fatal> BaseDaemon: 21. ? @ 0x00000000186a2871 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.808062 [ 536 ] <Fatal> BaseDaemon: 22. Poco::PooledThread::run() @ 0x000000001882ba67 in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.808112 [ 536 ] <Fatal> BaseDaemon: 23. Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001882949c in /usr/bin/clickhouse
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.808145 [ 536 ] <Fatal> BaseDaemon: 24. ? @ 0x00007f53b4b8eb43 in ?
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.808187 [ 536 ] <Fatal> BaseDaemon: 25. ? @ 0x00007f53b4c20a00 in ?
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:16.982572 [ 536 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: B60DAA55063C7F37A4029B4ECA186BCD)
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:17.175501 [ 536 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[c-bordeaux-gr-33-server-1] 2023.09.21 14:57:17.175698 [ 536 ] <Fatal> BaseDaemon: Changed settings: max_insert_threads = 1, max_threads = 4, use_hedged_requests = false, alter_sync = 0, enable_memory_bound_merging_of_aggregation_results = true, use_mysql_types_in_show_columns = true, log_queries = true, log_queries_probability = 1., max_http_get_redirects = 10, insert_distributed_sync = true, final = true, enable_deflate_qpl_codec = false, max_bytes_before_external_group_by = 4294967296, max_bytes_before_external_sort = 4294967296, max_memory_usage = 8589934592, cancel_http_readonly_queries_on_client_close = true, max_table_size_to_drop = 1000000000000, max_partition_size_to_drop = 1000000000000, default_table_engine = 'ReplicatedMergeTree', mutations_sync = 0, optimize_trivial_insert_select = false, allow_experimental_database_replicated = true, database_replicated_allow_only_replicated_engine = true, cloud_mode = true, distributed_ddl_output_mode = 'none', async_insert_busy_timeout_ms = 1000, enable_filesystem_cache_on_write_operations = true, load_marks_asynchronously = true, allow_prefetched_read_pool_for_remote_filesystem = true, filesystem_prefetch_max_memory_usage = 858993459, filesystem_prefetches_limit = 200, insert_keeper_max_retries = 20, date_time_input_format = 'best_effort'
Error on processing query: Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from oa82c44121.us-west-2.aws.clickhouse.cloud:9440. (ATTEMPT_TO_READ_AFTER_EOF) (version 23.3.2.37 (official build))
Connecting to oa82c44121.us-west-2.aws.clickhouse.cloud:9440 as user default.
Code: 210. DB::NetException: SSL connection unexpectedly closed, while reading from socket (35.85.205.122:9440). (NETWORK_ERROR)
```
| https://github.com/ClickHouse/ClickHouse/issues/54896 | https://github.com/ClickHouse/ClickHouse/pull/55062 | 8ac88645c8e4ac44abe06dc082027918860d4a33 | c911c8daf4bd626315d6b67f35ed7ea0f6c476ce | "2023-09-21T15:05:54Z" | c++ | "2023-09-27T19:02:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,858 | ["src/Processors/QueryPlan/PartsSplitter.cpp", "tests/queries/0_stateless/02875_final_invalid_read_ranges_bug.reference", "tests/queries/0_stateless/02875_final_invalid_read_ranges_bug.sql"] | "Cannot read out of marks range" with function in the PK and FINAL clause in query |
Data for repro in the attachement: [data.zip](https://github.com/ClickHouse/ClickHouse/files/12677249/data.zip)
Schema
```sql
CREATE TABLE test_table
(
`tid` UInt64,
`processed_at` DateTime,
`created_at` DateTime,
`amount` Int64
)
ENGINE = ReplacingMergeTree()
PARTITION BY toStartOfQuarter(created_at)
PRIMARY KEY (toStartOfDay(created_at), toStartOfDay(processed_at))
ORDER BY (toStartOfDay(created_at), toStartOfDay(processed_at), tid)
SETTINGS index_granularity = 8192
;
```
Inserting data:
```
cat /tmp/data.tsv | clickhouse-client --query='INSERT INTO test_table FORMAT TSV'
```
The failing query :
```sql
SELECT sum(amount) FROM test_table FINAL WHERE processed_at between '2023-09-19 00:00:00' AND '2023-09-20 01:00:00';
```
The result:
```
Received exception from server (version 23.8.2):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Cannot read out of marks range.: While executing MergeTreeInOrder. (BAD_ARGUMENTS)
```
Expected result - no error (empty dataset)
<details>
<summary>error stacktrace for master (23.9.1)</summary>
```
Received exception from server (version 23.9.1):
Code: 36. DB::Exception: Received from localhost:9000. DB::Exception: Cannot read out of marks range.: While executing MergeTreeSelect(pool: ReadPoolInOrder, algorithm: InOrder). Stack trace:
0. ./build_docker/./src/Common/Exception.cpp:98: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c6d2e77 in /usr/bin/clickhouse
1. DB::Exception::Exception<char const (&) [32]>(int, char const (&) [32]) @ 0x000000000b349a80 in /usr/bin/clickhouse
2. ./build_docker/./src/Storages/MergeTree/MergeTreeRangeReader.cpp:0: DB::MergeTreeRangeReader::Stream::read(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&, unsigned long, bool) @ 0x0000000012ee516b in /usr/bin/clickhouse
3. ./build_docker/./src/Storages/MergeTree/MergeTreeRangeReader.h:240: DB::MergeTreeRangeReader::read(unsigned long, DB::MarkRanges&) @ 0x0000000012eed346 in /usr/bin/clickhouse
4. ./build_docker/./src/Storages/MergeTree/MergeTreeReadTask.cpp:158: DB::MergeTreeReadTask::read(DB::MergeTreeReadTask::BlockSizeParams const&) @ 0x0000000012ef4d8c in /usr/bin/clickhouse
5. ./build_docker/./src/Storages/MergeTree/MergeTreeSelectAlgorithms.h:53: DB::MergeTreeInOrderSelectAlgorithm::readFromTask(DB::MergeTreeReadTask&, DB::MergeTreeReadTask::BlockSizeParams const&) @ 0x00000000137077af in /usr/bin/clickhouse
6. ./build_docker/./src/Storages/MergeTree/MergeTreeSelectProcessor.cpp:162: DB::MergeTreeSelectProcessor::read() @ 0x0000000012ee2aa7 in /usr/bin/clickhouse
7. ./build_docker/./src/Storages/MergeTree/MergeTreeSource.cpp:181: DB::MergeTreeSource::tryGenerate() @ 0x00000000136fdb18 in /usr/bin/clickhouse
8. ./build_docker/./contrib/llvm-project/libcxx/include/optional:344: DB::ISource::work() @ 0x00000000132bbf0a in /usr/bin/clickhouse
9. ./build_docker/./src/Processors/Executors/ExecutionThreadContext.cpp:0: DB::ExecutionThreadContext::executeTask() @ 0x00000000132d3dda in /usr/bin/clickhouse
10. ./build_docker/./src/Processors/Executors/PipelineExecutor.cpp:273: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x00000000132ca950 in /usr/bin/clickhouse
11. ./build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:833: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x00000000132cba4f in /usr/bin/clickhouse
12. ./build_docker/./base/base/../base/wide_integer_impl.h:809: ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c7b7b9e in /usr/bin/clickhouse
13. ./build_docker/./src/Common/ThreadPool.cpp:0: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7bb65c in /usr/bin/clickhouse
14. ./build_docker/./base/base/../base/wide_integer_impl.h:809: void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c7b9ea4 in /usr/bin/clickhouse
15. ? @ 0x00007f55a1c94b43 in ?
16. ? @ 0x00007f55a1d26a00 in ?
. (BAD_ARGUMENTS)
```
</details>
To be rechecked: it seems it used to work on some older versions and has been broken in 22.X | https://github.com/ClickHouse/ClickHouse/issues/54858 | https://github.com/ClickHouse/ClickHouse/pull/54934 | 36478f66fd874eb56cfaddce86eb1b8872ae1c06 | 0e506b618e9146acd69cab956f3039994fb291dd | "2023-09-20T20:27:15Z" | c++ | "2023-09-28T12:12:19Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,796 | ["docs/en/sql-reference/functions/date-time-functions.md", "src/DataTypes/IDataType.h", "src/Functions/DateTimeTransforms.h", "src/Functions/FunctionDateOrDateTimeToSomething.h", "src/Functions/IFunctionDateOrDateTime.h", "src/Functions/toDaysSinceYearZero.cpp", "tests/queries/0_stateless/02874_toDaysSinceYearZero.reference", "tests/queries/0_stateless/02874_toDaysSinceYearZero.sql"] | MySQL compatibility: Illegal type of argument #1 'date' of function toDaysSinceYearZero | Required for Tableau Online.
Sample rendered query:
```sql
SELECT SUM(`cell_towers`.`mcc`) AS `sum_mcc_ok`,
ADDDATE(FROM_DAYS(TO_DAYS(`cell_towers`.`updated`) - (DAYOFWEEK(`cell_towers`.`updated`) - 1)),
INTERVAL 0 SECOND) AS `twk_updated_ok`
FROM `cell_towers`
GROUP BY 2;
```
fails with
```
Code: 43. DB::Exception: Illegal type of argument #1 'date' of function toDaysSinceYearZero, expected Date or Date32, got DateTime: In scope SELECT SUM(cell_towers.mcc) AS sum_mcc_ok, ADDDATE(FROM_DAYS(TO_DAYS(cell_towers.updated) - (DAYOFWEEK(cell_towers.updated) - 1)), toIntervalSecond(0)) AS twk_updated_ok FROM cell_towers GROUP BY 2. (ILLEGAL_TYPE_OF_ARGUMENT)
```
Reduced example:
```sql
SELECT TO_DAYS(`cell_towers`.`updated`) FROM cell_towers LIMIT 1;
```
**How to reproduce**
* Which ClickHouse server version to use: latest master
* Which interface to use, if matters: MySQL
* Sample data for all these tables: cell_towers sample dataset
**Expected behavior**
TO_DAYS accepts DateTime/DateTime64 to be more in line with the MySQL standard. | https://github.com/ClickHouse/ClickHouse/issues/54796 | https://github.com/ClickHouse/ClickHouse/pull/54856 | cf4072317961a40db777c3e3aaf5c04183165ebf | ec09fd124dda510081436221bb3b219d1acb5643 | "2023-09-19T14:31:48Z" | c++ | "2023-09-22T18:39:54Z" |
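For reference, MySQL's `TO_DAYS` returns the day count since year 0, and for a `DATETIME` argument it simply truncates to the date part first — which is the behavior being requested. A sketch of that semantics (Python for illustration, not ClickHouse code; the constant offset is calibrated against the MySQL manual's `TO_DAYS('1997-10-07') = 729669` example and is only valid for modern dates):

```python
from datetime import date, datetime

def to_days(value):
    """Day number since year 0, as MySQL's TO_DAYS reports it for modern dates.
    A datetime argument is truncated to its date part first, which is exactly
    what the requested DateTime/DateTime64 overload needs to do."""
    if isinstance(value, datetime):
        value = value.date()
    # Python ordinals count from 0001-01-01 == 1; the +365 offset lines this
    # up with MySQL's year-0-based counting for modern dates.
    return value.toordinal() + 365

# Matches the MySQL manual: TO_DAYS('1997-10-07') -> 729669
print(to_days(datetime(1997, 10, 7, 12, 34, 56)))  # 729669
```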
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,795 | ["docs/en/sql-reference/functions/date-time-functions.md", "src/Functions/FunctionBinaryArithmetic.h", "src/Functions/FunctionsOpDate.cpp", "tests/queries/0_stateless/01923_ttl_with_modify_column.sql", "tests/queries/0_stateless/02834_add_sub_date_functions.reference", "tests/queries/0_stateless/02834_add_sub_date_functions.sql", "tests/queries/0_stateless/02900_add_subtract_interval_with_string_date.reference", "tests/queries/0_stateless/02900_add_subtract_interval_with_string_date.sql"] | MySQL Compatibility: Illegal type String of 1st argument of function addDate | Required for Tableau Online.
Sample rendered query:
```sql
SELECT
ADDDATE( DATE_FORMAT( `cell_towers`.`updated`, '%Y-01-01 00:00:00' ), INTERVAL 0 SECOND ) AS `tyr_updated_ok`
FROM `cell_towers`
GROUP BY 1
```
fails with
```
Code: 43. DB::Exception: Illegal type String of 1st argument of function addDate. Should be a date or a date with time: In scope SELECT ADDDATE(DATE_FORMAT(cell_towers.updated, '%Y-01-01 00:00:00'), toIntervalSecond(0)) AS tyr_updated_ok FROM cell_towers GROUP BY 1. (ILLEGAL_TYPE_OF_ARGUMENT)
```
**How to reproduce**
* Which ClickHouse server version to use: latest master
* Which interface to use, if matters: MySQL
* Sample data for all these tables: cell_towers sample dataset
**Expected behavior**
ADDDATE accepts String argument like in MySQL. | https://github.com/ClickHouse/ClickHouse/issues/54795 | https://github.com/ClickHouse/ClickHouse/pull/55960 | b93bf06e86821f3cc31e79848ad36623b3d82c8a | 13b2946ae2c126b5b27e301e95470b98db350f13 | "2023-09-19T14:26:21Z" | c++ | "2023-10-31T14:37:34Z" |
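The requested coercion amounts to: parse the string into a timestamp, then add the interval. A sketch (Python for illustration; only the exact `%Y-%m-%d %H:%M:%S` shape produced by the `DATE_FORMAT` call in the query above is handled here — ClickHouse would presumably apply its usual date-string parsing):

```python
from datetime import datetime, timedelta

def adddate(value, delta):
    """ADDDATE that also accepts a string first argument, as MySQL does:
    a '%Y-%m-%d %H:%M:%S' string is parsed into a timestamp before adding."""
    if isinstance(value, str):
        value = datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
    return value + delta

# The Tableau-generated query builds exactly this kind of string first:
# DATE_FORMAT(updated, '%Y-01-01 00:00:00') -> '2021-01-01 00:00:00'
print(adddate("2021-01-01 00:00:00", timedelta(seconds=0)))  # 2021-01-01 00:00:00
```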
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,746 | ["docs/en/sql-reference/functions/date-time-functions.md", "src/Common/DateLUTImpl.h", "src/Functions/DateTimeTransforms.cpp", "src/Functions/DateTimeTransforms.h", "src/Functions/toMillisecond.cpp", "tests/queries/0_stateless/02998_to_milliseconds.reference", "tests/queries/0_stateless/02998_to_milliseconds.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | Implement toMillisecond() | Implement a function `toMillisecond()`, similar to existing function `toSecond()` ([docs](https://clickhouse.com/docs/en/sql-reference/functions/date-time-functions#tosecond)). | https://github.com/ClickHouse/ClickHouse/issues/54746 | https://github.com/ClickHouse/ClickHouse/pull/60281 | ed215a293afd78a635a71520b21c99a067296b17 | 44c3de1a0b2fcfb212d21db8d94178fc6b0bb65d | "2023-09-18T12:09:50Z" | c++ | "2024-03-01T10:34:20Z" |
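By analogy with `toSecond()`, a `toMillisecond()` would return the millisecond component (0..999) of a sub-second timestamp. A minimal sketch of the intended semantics (Python; the name follows the proposal, the truncation behavior is an assumption, not a settled API):

```python
from datetime import datetime

def to_millisecond(dt):
    """Millisecond component of a timestamp, by analogy with toSecond();
    sub-millisecond digits are truncated, not rounded (an assumption)."""
    return dt.microsecond // 1000

print(to_millisecond(datetime(2023, 9, 18, 12, 0, 0, 123456)))  # 123
```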
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,734 | ["docs/en/sql-reference/functions/arithmetic-functions.md", "src/Functions/byteSwap.cpp", "tests/queries/0_stateless/02887_byteswap.reference", "tests/queries/0_stateless/02887_byteswap.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | Function `byteSwap` | **Use case**
If a number was in big-endian instead of little-endian, or vice versa, convert it back.
Example:
Instead of writing:
```sql
SELECT toIPv4(reinterpretAsUInt32(reverse(reinterpretAsFixedString(3351772109))))
```
I want to write:
```sql
SELECT toIPv4(byteSwap(3351772109))
```
**Describe the solution you'd like**
Implement this function for integer types. | https://github.com/ClickHouse/ClickHouse/issues/54734 | https://github.com/ClickHouse/ClickHouse/pull/55211 | c814d8b39dd9d0fc18183ecfa15c07dd436c1ffd | d02a718076071952fe4ecedc00a5f91157193b5e | "2023-09-17T19:27:32Z" | c++ | "2023-10-13T14:54:28Z" |
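The equivalence claimed above can be sketched for a 32-bit value (Python for illustration; `byte_swap32` is a hypothetical helper, not ClickHouse code — it reproduces the reinterpret/reverse/reinterpret chain from the use case):

```python
def byte_swap32(n: int) -> int:
    """Reverse the byte order of a UInt32 -- the same effect as
    reinterpretAsUInt32(reverse(reinterpretAsFixedString(n))) for 32-bit values."""
    return int.from_bytes(n.to_bytes(4, "little"), "big")

n = 3351772109              # 0xC7C7FBCD
print(hex(byte_swap32(n)))  # 0xcdfbc7c7
assert byte_swap32(byte_swap32(n)) == n  # swapping twice is a no-op
```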
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,656 | ["src/Access/SettingsProfilesCache.cpp", "src/Access/SettingsProfilesCache.h"] | LOGICAL_ERROR from SettingsProfilesInfo::getProfileNames() | An undisclosed customer recently ran into this `LOGICAL_ERROR`:
```
Code: 49. DB::Exception: Unable to get profile name for 834b80a5-c274-88dd-9e7b-8e83274cbf85. (LOGICAL_ERROR), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000ead79f7 in /usr/bin/clickhouse
1. ? @ 0x000000000914938d in /usr/bin/clickhouse
2. DB::SettingsProfilesInfo::getProfileNames() const @ 0x0000000013ab69cc in /usr/bin/clickhouse
3. DB::SessionLog::addLoginSuccess(StrongTypedef<wide::integer<128ul, unsigned int>, DB::UUIDTag> const&, std::optional<String>, DB::Context const&, std::shared_ptr<DB::User const> const&) @ 0x00000000144bc5fb in /usr/bin/clickhouse
4. DB::Session::makeQueryContextImpl(DB::ClientInfo const*, DB::ClientInfo*) const @ 0x00000000144aff18 in /usr/bin/clickhouse
5. DB::TCPHandler::receiveQuery() @ 0x000000001567a57c in /usr/bin/clickhouse
6. DB::TCPHandler::receivePacket() @ 0x000000001567040e in /usr/bin/clickhouse
7. DB::TCPHandler::runImpl() @ 0x0000000015667890 in /usr/bin/clickhouse
8. DB::TCPHandler::run() @ 0x000000001567ef59 in /usr/bin/clickhouse
9. ? @ 0x000000000eba8dbc in /usr/bin/clickhouse
10. Poco::Net::TCPServerConnection::start() @ 0x0000000018687df4 in /usr/bin/clickhouse
11. Poco::Net::TCPServerDispatcher::run() @ 0x0000000018689011 in /usr/bin/clickhouse
12. Poco::PooledThread::run() @ 0x0000000018812207 in /usr/bin/clickhouse
13. Poco::ThreadImpl::runnableEntry(void*) @ 0x000000001880fc3c in /usr/bin/clickhouse
14. ? @ 0x00007f05dc96fb43 in ?
15. ? @ 0x00007f05dca01a00 in ?
```
This is basically old issue #35952, for which supposed fix #42641 was made - the problem was super sporadic even back then, so the issue could not be reproduced and the fix could not be verified.
What happens is that during the creation of a session, `SettingsProfilesInfo::getProfileNames()` is called which eventually finds that its members `profiles` and `names_of_profiles` are out of sync. There was some speculation in #42641 how this could happen. Back then, everyone concluded that the only place which modifies (actually: creates) `SettingsProfilesInfo` is `SettingsProfilesCache::substituteProfiles()`. The locking is good, we can exclude races as cause (see the fancy figure in the other bug), I also quickly confirmed that the locking is still okay. `substituteProfiles()` will also not corrupt `SettingsProfilesInfo` in case of OOM or an exception, we are also safe from this side.
But: As I just see, both callers of `substituteProfiles()`
1. `getSettingsProfileInfo()`:
```cpp
info->profiles.push_back(profile_id);
info->profiles_with_implicit.push_back(profile_id);
substituteProfiles(elements, info->profiles_with_implicit, info->names_of_profiles);
```
2. `mergeSettingsAndConstraintsFor()`
```cpp
info->profiles = merged_settings.toProfileIDs();
substituteProfiles(merged_settings, info->profiles_with_implicit, info->names_of_profiles);
```
store something in `profiles` but they don't store something corresponding in `names_of_profiles`. There is a chance that this step corrupts `SettingsProfilesInfo` when `elements` resp. `merged_settings` is empty (as `substituteProfiles()` becomes a no-op then). Also, tbh., the code in `substituteProfiles()` itself also looks a bit strange (iterate the map while modifying it).
@vitlibar - maybe you'd like to double check (as author of the code)? | https://github.com/ClickHouse/ClickHouse/issues/54656 | https://github.com/ClickHouse/ClickHouse/pull/57263 | e468dbe2cb3fe0c6abd833d053a665f349c2782a | 7c867a09afef36c503ac046d5e8fd0ae8b6b4e21 | "2023-09-14T21:44:05Z" | c++ | "2024-01-05T12:43:54Z" |
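The suspected corruption path above can be modeled with a toy sketch (Python, not the real C++; the names only loosely mirror the originals): the caller appends an id to `profiles`, but when `elements` is empty the substitution loop stores nothing in `names_of_profiles`, so the two containers diverge and `getProfileNames()` later fails on the missing id — the analog of the "Unable to get profile name for <uuid>" LOGICAL_ERROR.

```python
class SettingsProfilesInfo:
    """Toy model of the two containers that must stay in sync."""
    def __init__(self):
        self.profiles = []           # profile ids, filled by the callers
        self.names_of_profiles = {}  # id -> name, filled by substitute_profiles()

    def get_profile_names(self):
        # A missing id here corresponds to the LOGICAL_ERROR in the report.
        return [self.names_of_profiles[pid] for pid in self.profiles]

def substitute_profiles(elements, names_of_profiles):
    for profile_id, name in elements:  # no-op when elements is empty
        names_of_profiles[profile_id] = name

info = SettingsProfilesInfo()
info.profiles.append("834b80a5-c274-88dd-9e7b-8e83274cbf85")  # what the callers do
substitute_profiles([], info.names_of_profiles)               # empty -> nothing stored
try:
    info.get_profile_names()
except KeyError as e:
    print(f"out of sync: {e}")
```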
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,486 | ["src/Common/OptimizedRegularExpression.cpp", "tests/queries/0_stateless/02886_binary_like.reference", "tests/queries/0_stateless/02886_binary_like.sql"] | Some LIKE expressions don't work | **Use case**
```
milovidov-desktop :) SELECT 'test' LIKE '\xFF\xFE%'
SELECT 'test' LIKE '��%'
Query id: 1de419f8-e9d8-4da1-8544-21ed91b8b088
0 rows in set. Elapsed: 0.030 sec.
Received exception:
Code: 427. DB::Exception: OptimizedRegularExpression: cannot compile re2: ^��, error: invalid UTF-8. Look at https://github.com/google/re2/wiki/Syntax for reference. Please note that if you specify regex as an SQL string literal, the slashes have to be additionally escaped. For example, to match an opening brace, write '\(' -- the first slash is for SQL and the second one is for regex: While processing 'test' LIKE '��%'. (CANNOT_COMPILE_REGEXP)
``` | https://github.com/ClickHouse/ClickHouse/issues/54486 | https://github.com/ClickHouse/ClickHouse/pull/54942 | 4032fc17bb31a83a1ec70dd175b00ad3f343335e | a751f51ec8bf746223a21e6d099904fdb23dd8ea | "2023-09-10T21:33:00Z" | c++ | "2023-09-23T12:37:19Z" |
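Since the failing pattern contains no wildcard other than the trailing `%`, the expected semantics reduce to a plain byte-wise prefix check — no regex, and hence no UTF-8 validation, is needed at all. A sketch of those byte-level semantics (Python for illustration; this is the behavior the query should have, not the current implementation):

```python
def like_prefix(haystack: bytes, prefix: bytes) -> bool:
    """What `s LIKE '<prefix>%'` should evaluate to, byte-wise: the report's
    pattern has no other metacharacters, so a startswith check suffices even
    for byte sequences that are not valid UTF-8."""
    return haystack.startswith(prefix)

print(like_prefix(b"test", b"\xff\xfe"))          # False, the expected result
print(like_prefix(b"\xff\xfetest", b"\xff\xfe"))  # True
```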
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,406 | ["src/Columns/ColumnAggregateFunction.cpp", "src/Columns/ColumnAggregateFunction.h", "src/DataTypes/DataTypeAggregateFunction.cpp", "src/DataTypes/DataTypeAggregateFunction.h", "src/Functions/FunctionsConversion.h", "src/Processors/QueryPlan/Optimizations/optimizeUseAggregateProjection.cpp", "tests/queries/0_stateless/01710_aggregate_projection_with_normalized_states.reference", "tests/queries/0_stateless/01710_aggregate_projection_with_normalized_states.sql"] | Projection issue Block structure mismatch in AggregatingStep stream | worked with 22.8: https://fiddle.clickhouse.com/bba44076-1961-4356-b59c-df7e26362245
fails with 23.3: https://fiddle.clickhouse.com/731c266b-df63-437e-ac64-05a36422ad36
```sql
CREATE TABLE r (
x String,
a LowCardinality(String),
q AggregateFunction(quantilesTiming(0.5, 0.95, 0.99), Int64),
s Int64,
PROJECTION p
(SELECT a, quantilesTimingMerge(0.5, 0.95, 0.99)(q), sum(s) GROUP BY a)
) Engine=SummingMergeTree order by (x, a);
insert into r
select number%100 x,
'x' a,
quantilesTimingState(0.5, 0.95, 0.99)(number::Int64) q,
sum(1) s
from numbers(1000)
group by x,a;
SELECT
ifNotFinite(quantilesTimingMerge(0.95)(q)[1],0) as d1,
ifNotFinite(quantilesTimingMerge(0.99)(q)[1],0) as d2,
ifNotFinite(quantilesTimingMerge(0.50)(q)[1],0) as d3,
sum(s)
FROM cluster('test_cluster_two_shards', default, r)
WHERE a = 'x'
settings prefer_localhost_replica=0
format PrettyCompact;
DB::Exception: Received from 127.0.0.1:9000. DB::Exception: Block structure mismatch in AggregatingStep stream: different types:
quantilesTimingMerge(0.99)(q)
```
WA: disable projection (`allow_experimental_projection_optimization=0`)
https://fiddle.clickhouse.com/89156fe6-169d-444b-a8ab-4c07781d7d11
| https://github.com/ClickHouse/ClickHouse/issues/54406 | https://github.com/ClickHouse/ClickHouse/pull/54480 | 5354af00aedd659e44bea6eb4fa9717ca32aa2b2 | 63243fbc03c52da2ded762332c59982106e2f2df | "2023-09-07T14:09:08Z" | c++ | "2023-09-12T11:43:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,380 | ["src/Processors/Formats/Impl/ConstantExpressionTemplate.cpp", "tests/queries/0_stateless/02896_leading_zeroes_no_octal.reference", "tests/queries/0_stateless/02896_leading_zeroes_no_octal.sql"] | Incorrect handling of leading zeroes during INSERT | **How to reproduce**
```sql
CREATE TABLE EMPLOYEE
(
`empId` INTEGER PRIMARY KEY,
`d` DOUBLE
)
ENGINE = MergeTree
ORDER BY empId;
INSERT INTO EMPLOYEE VALUES (0001, 1.456), (0005, 45.98), (0008, 4342.766), (0017, 345.87), (0021, 43.78), (0051, 0.781);
SELECT * FROM EMPLOYEE;
```
Results into
```
┌─empId─┬────────d─┐
│     1 │    1.456 │
│     5 │    45.98 │
│     8 │ 4342.766 │
│    15 │   345.87 │
│    17 │    43.78 │
│    41 │    0.781 │
└───────┴──────────┘
```
**Expected behavior**
```
┌─empId─┬────────d─┐
│     1 │    1.456 │
│     5 │    45.98 │
│     8 │ 4342.766 │
│    17 │   345.87 │
│    21 │    43.78 │
│    51 │    0.781 │
└───────┴──────────┘
```
Reproduces on the latest master:
https://fiddle.clickhouse.com/8d27f2fe-31a8-4af5-a54b-c0d1608886db | https://github.com/ClickHouse/ClickHouse/issues/54380 | https://github.com/ClickHouse/ClickHouse/pull/59403 | 556b63700a98df79f97ddeda34a13c3084e69a81 | 569b8487e887c842e2ea936f71e9d8662edffac0 | "2023-09-07T03:14:51Z" | c++ | "2024-02-05T14:34:50Z" |
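The corrupted empIds line up exactly with a base-8 reading of the zero-padded literals (0017→15, 0021→17, 0051→41, while 0008 — which has no valid octal reading — survived intact), which suggests, as an observation rather than a confirmed diagnosis, that the literal parser treats a leading zero as an octal prefix. The mismatch can be shown directly (Python for illustration):

```python
# SQL integer literals must be parsed as decimal; a base-8 reading of the
# zero-padded literals reproduces the mangled empIds from the report.
for literal in ("0001", "0005", "0008", "0017", "0021", "0051"):
    decimal = int(literal, 10)
    try:
        octal = int(literal, 8)
    except ValueError:
        octal = decimal  # '0008' is not valid octal, which is why 8 survived
    print(f"{literal}: decimal {decimal}, octal reading {octal}")
```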
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,352 | ["src/Columns/ColumnDecimal.cpp", "tests/queries/0_stateless/02875_fix_column_decimal_serialization.reference", "tests/queries/0_stateless/02875_fix_column_decimal_serialization.sql"] | Assertion `new_head == holder.key.data` failed (`Aggregator`) | https://s3.amazonaws.com/clickhouse-test-reports/0/d9746b5ea10eec3b0e82d46b1b15627ccfb3f243/fuzzer_astfuzzerdebug/report.html
https://s3.amazonaws.com/clickhouse-test-reports/54043/03914f2d31da7a3c3384426c425666f6526afa93/fuzzer_astfuzzerdebug/report.html
```
5947:2023.09.05 19:28:01.035857 [ 178 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Debug> executeQuery: (from [::ffff:127.0.0.1]:49390) SELECT count(), min(length(c.d)) AS minExpr, min(dcount) AS minAlias, max(length(c.d)) AS maxExpr, max(dcount) AS maxAlias, b FROM max_length_alias_14053__fuzz_45 GROUP BY GROUPING SETS ((b)) (stage: Complete)
5948:2023.09.05 19:28:01.043542 [ 178 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Trace> ContextAccess (default): Access granted: SELECT(b, `c.d`, dcount) ON default.max_length_alias_14053__fuzz_45
5949:2023.09.05 19:28:01.043994 [ 178 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Trace> InterpreterSelectQuery: FetchColumns -> Complete
5950:2023.09.05 19:28:01.051481 [ 178 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Debug> default.max_length_alias_14053__fuzz_45 (07cbf800-de67-491f-956d-7bc687a12fc2) (SelectExecutor): Key condition: unknown
5951:2023.09.05 19:28:01.051703 [ 178 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Debug> default.max_length_alias_14053__fuzz_45 (07cbf800-de67-491f-956d-7bc687a12fc2) (SelectExecutor): MinMax index condition: unknown
5952:2023.09.05 19:28:01.052175 [ 178 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Debug> default.max_length_alias_14053__fuzz_45 (07cbf800-de67-491f-956d-7bc687a12fc2) (SelectExecutor): Selected 3/3 parts by partition key, 3 parts by primary key, 3/3 marks by primary key, 3 marks to read from 3 ranges
5953:2023.09.05 19:28:01.052332 [ 178 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Trace> default.max_length_alias_14053__fuzz_45 (07cbf800-de67-491f-956d-7bc687a12fc2) (SelectExecutor): Spreading mark ranges among streams (default reading)
5954:2023.09.05 19:28:01.052905 [ 178 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Debug> MergeTreeReadPool: min_marks_for_concurrent_read=24
5955:2023.09.05 19:28:01.053065 [ 178 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Debug> default.max_length_alias_14053__fuzz_45 (07cbf800-de67-491f-956d-7bc687a12fc2) (SelectExecutor): Reading approx. 11 rows with 3 streams
5958:2023.09.05 19:28:01.066218 [ 507 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Trace> AggregatingTransform: Aggregating
5959:2023.09.05 19:28:01.066232 [ 503 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Trace> AggregatingTransform: Aggregating
5960:2023.09.05 19:28:01.066245 [ 466 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Trace> AggregatingTransform: Aggregating
5961:2023.09.05 19:28:01.066302 [ 503 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Trace> Aggregator: Aggregation method: serialized
5962:2023.09.05 19:28:01.066343 [ 507 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Trace> Aggregator: Aggregation method: serialized
5963:2023.09.05 19:28:01.066378 [ 466 ] {5fdfcd95-1691-453b-a318-7dd48d577905} <Trace> Aggregator: Aggregation method: serialized
```
```
2023.09.05 19:27:59.558109 [ 178 ] {10751f12-295b-4584-9dee-978724a3c37f} <Debug> executeQuery: (from [::ffff:127.0.0.1]:49390) CREATE TABLE max_length_alias_14053__fuzz_45 (`a` Date, `b` Nullable(Decimal(76, 45)), `c.d` Array(Nullable(DateTime64(3))), `dcount` Int8 ALIAS length(c.d)) ENGINE = MergeTree PARTITION BY toMonday(a) ORDER BY (a, b) SETTINGS index_granularity = 8192 (stage: Complete)
2023.09.05 19:27:59.558347 [ 178 ] {10751f12-295b-4584-9dee-978724a3c37f} <Trace> ContextAccess (default): Access granted: CREATE TABLE ON default.max_length_alias_14053__fuzz_45
```
```
llvm-addr2line -pafiCse ./clickhouse 0x00007f5779b04a7c 0x00007f5779ab0476 0x00007f5779a967f3 0x00007f5779a9671b 0x00007f5779aa7e96 0x000000001c2b3d16 0x000000001c11a469 0x000000001c05e1c8 0x000000001c06318d 0x000000001eb4e9b3 0x000000001eb4c2d9 0x000000001e6b4b03 0x000000001e6b4840 0x000000001e699741 0x000000001e699a57 0x000000001e69a798 0x000000001e69a6f5 0x000000001e69a6d5 0x000000001e69a6b5 0x000000001e69a680 0x000000001398e416 0x000000001398d8f5 0x0000000013a99b23 0x0000000013aa4384 0x0000000013aa4355 0x0000000013aa4339 0x0000000013aa429d 0x0000000013aa41a5 0x0000000013aa4115 0x0000000013aa40f5 0x0000000013aa40d5 0x0000000013aa40a0 0x000000001398e416 0x000000001398d8f5 0x0000000013a96403 0x0000000013a9e6a4 0x0000000013a9e655 0x0000000013a9e57d 0x0000000013a9e062 0x00007f5779b02b43 0x00007f5779b94a00
0x7f5779b04a7c: ?? at ??:0
0x7f5779ab0476: ?? at ??:0
0x7f5779a967f3: ?? at ??:0
0x7f5779a9671b: ?? at ??:0
0x7f5779aa7e96: ?? at ??:0
0x1c2b3d16: keyHolderDiscardKey(DB::SerializedKeyHolder&) at HashTableKeyHolder.h:132
(inlined by) void HashTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>::emplaceNonZeroImpl<DB::SerializedKeyHolder&>(unsigned long, DB::SerializedKeyHolder&, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>*&, bool&, unsigned long) at HashTable.h:972
(inlined by) void HashTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>::emplaceNonZero<DB::SerializedKeyHolder&>(DB::SerializedKeyHolder&, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>*&, bool&, unsigned long) at HashTable.h:1017
(inlined by) void HashTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>::emplace<DB::SerializedKeyHolder&>(DB::SerializedKeyHolder&, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>*&, bool&, unsigned long) at HashTable.h:1096
(inlined by) void HashTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>::emplace<DB::SerializedKeyHolder&>(DB::SerializedKeyHolder&, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>*&, bool&) at HashTable.h:1087
(inlined by) DB::ColumnsHashing::columns_hashing_impl::EmplaceResultImpl<char*> DB::ColumnsHashing::columns_hashing_impl::HashMethodBase<DB::ColumnsHashing::HashMethodSerialized<PairNoInit<StringRef, char*>, char*>, PairNoInit<StringRef, char*>, char*, false, false, false>::emplaceImpl<HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>, DB::SerializedKeyHolder>(DB::SerializedKeyHolder&, HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>&) at ColumnsHashingImpl.h:251
(inlined by) DB::ColumnsHashing::columns_hashing_impl::EmplaceResultImpl<char*> DB::ColumnsHashing::columns_hashing_impl::HashMethodBase<DB::ColumnsHashing::HashMethodSerialized<PairNoInit<StringRef, char*>, char*>, PairNoInit<StringRef, char*>, char*, false, false, false>::emplaceKey<HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>>(HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>&, unsigned long, DB::Arena&) at ColumnsHashingImpl.h:170
(inlined by) void DB::Aggregator::executeImplBatch<false, false, false, DB::AggregationMethodSerialized<HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>>>(DB::AggregationMethodSerialized<HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>>&, DB::AggregationMethodSerialized<HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>>::State&, DB::Arena*, unsigned long, unsigned long, DB::Aggregator::AggregateFunctionInstruction*, char*) const at Aggregator.cpp:1203
0x1c11a469: void DB::Aggregator::executeImpl<DB::AggregationMethodSerialized<HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>>>(DB::AggregationMethodSerialized<HashMapTable<StringRef, HashMapCellWithSavedHash<StringRef, char*, DefaultHash<StringRef>, HashTableNoState>, DefaultHash<StringRef>, HashTableGrowerWithPrecalculation<8ul>, Allocator<true, true>>>&, DB::Arena*, unsigned long, unsigned long, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>&, DB::Aggregator::AggregateFunctionInstruction*, bool, char*) const at Aggregator.cpp:1089
0x1c05e1c8: DB::Aggregator::executeImpl(DB::AggregatedDataVariants&, unsigned long, unsigned long, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>&, DB::Aggregator::AggregateFunctionInstruction*, bool, char*) const at Aggregator.cpp:1045
0x1c06318d: DB::Aggregator::executeOnBlock(std::__1::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>, unsigned long, unsigned long, DB::AggregatedDataVariants&, std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>&, std::__1::vector<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>, std::__1::allocator<std::__1::vector<DB::IColumn const*, std::__1::allocator<DB::IColumn const*>>>>&, bool&) const at Aggregator.cpp:1592
0x1eb4e9b3: DB::AggregatingTransform::consume(DB::Chunk) at AggregatingTransform.cpp:669
0x1eb4c2d9: DB::AggregatingTransform::work() at AggregatingTransform.cpp:628
0x1e6b4b03: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) at ExecutionThreadContext.cpp:47
0x1e6b4840: DB::ExecutionThreadContext::executeTask() at ExecutionThreadContext.cpp:95
0x1e699741: DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) at PipelineExecutor.cpp:272
0x1e699a57: DB::PipelineExecutor::executeSingleThread(unsigned long) at PipelineExecutor.cpp:238
0x1e69a798: DB::PipelineExecutor::spawnThreads()::$_0::operator()() const at PipelineExecutor.cpp:362
0x1e69a6f5: decltype(std::declval<DB::PipelineExecutor::spawnThreads()::$_0&>()()) std::__1::__invoke[abi:v15000]<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) at invoke.h:394
0x1e69a6d5: void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::PipelineExecutor::spawnThreads()::$_0&>(DB::PipelineExecutor::spawnThreads()::$_0&) at invoke.h:480
0x1e69a6b5: std::__1::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>::operator()[abi:v15000]() at function.h:235
0x1e69a680: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__1::__function::__policy_storage const*) at function.h:716
0x1398e416: std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const at function.h:848
0x1398d8f5: std::__1::function<void ()>::operator()() const at function.h:1187
0x13a99b23: ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__1::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) at ThreadPool.cpp:426
0x13aa4384: void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const at ThreadPool.cpp:179
0x13aa4355: decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()&>(void&&) at invoke.h:394
0x13aa4339: decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()&, std::__1::tuple<>&>(void&&, std::__1::tuple<>&, std::__1::__tuple_indices<>) at tuple:1789
0x13aa429d: decltype(auto) std::__1::apply[abi:v15000]<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()&, std::__1::tuple<>&>(void&&, std::__1::tuple<>&) at tuple:1798
0x13aa41a5: ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()::operator()() at ThreadPool.h:242
0x13aa4115: decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()&>(void&&) at invoke.h:394
0x13aa40f5: void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'()&) at invoke.h:480
0x13aa40d5: std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>::operator()[abi:v15000]() at function.h:235
0x13aa40a0: void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) at function.h:716
0x1398e416: std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const at function.h:848
0x1398d8f5: std::__1::function<void ()>::operator()() const at function.h:1187
0x13a96403: ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) at ThreadPool.cpp:426
0x13a9e6a4: void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const at ThreadPool.cpp:179
0x13a9e655: decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) at invoke.h:394
0x13a9e57d: void std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) at thread:285
0x13a9e062: void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, Priority, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) at thread:295
0x7f5779b02b43: ?? at ??:0
0x7f5779b94a00: ?? at ??:0
```
cc: @KochetovNicolai | https://github.com/ClickHouse/ClickHouse/issues/54352 | https://github.com/ClickHouse/ClickHouse/pull/54601 | d23daca082fba97298995683114f2b246ac328d9 | b4f9d8a51717c2141e075a7a7d6f475dd3e2a6ff | "2023-09-06T12:17:32Z" | c++ | "2023-09-14T01:54:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,275 | ["docs/en/sql-reference/functions/date-time-functions.md", "src/Functions/timestamp.cpp", "src/IO/ReadHelpers.h", "tests/queries/0_stateless/02834_timestamp_function.reference", "tests/queries/0_stateless/02834_timestamp_function.sql"] | MySQL compatibility: ADDDATE, TIMESTAMP functions | Required by Tableau Online.
Sample query with both examples:
```sql
SELECT sum(area) AS sum_area_ok
FROM cell_towers
WHERE (ADDDATE(DATE_FORMAT(created, '%Y-01-01 00:00:00'), toIntervalSecond(0)) >= TIMESTAMP('1995-01-01 00:00:00'))
AND (ADDDATE(DATE_FORMAT(created, '%Y-01-01 00:00:00'), toIntervalSecond(0)) <= TIMESTAMP('2021-01-01 00:00:00'))
HAVING count() > 0
```
which fails with
```
Code: 46. DB::Exception: Unknown function ADDDATE
```
Separately calling TIMESTAMP function (which will be the next issue when ADDDATE fixed):
```sql
SELECT TIMESTAMP('2021-01-01 00:00:00');
```
```
Code: 46. DB::Exception: Unknown function TIMESTAMP
```
**How to reproduce**
* Which ClickHouse server version to use: latest master
* Which interface to use, if matters: MySQL
* Sample data for all these tables: cell_towers sample dataset
**Expected behavior**
`ADDDATE` and `TIMESTAMP` functions work like in MySQL.
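For reference, a sketch of the expected semantics (my paraphrase of MySQL's behavior in plain Python, nothing taken from ClickHouse code):

```python
from datetime import datetime, timedelta

def mysql_timestamp(s: str) -> datetime:
    # TIMESTAMP('YYYY-MM-DD hh:mm:ss') parses the literal into a datetime value.
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S")

def mysql_adddate(d: datetime, seconds: int) -> datetime:
    # ADDDATE(d, INTERVAL n SECOND) shifts d by n seconds.
    return d + timedelta(seconds=seconds)

# The WHERE clause above reduces to range checks of this shape:
probe = mysql_adddate(mysql_timestamp("2000-01-01 00:00:00"), 0)
print(mysql_timestamp("1995-01-01 00:00:00") <= probe <= mysql_timestamp("2021-01-01 00:00:00"))  # prints True
```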
| https://github.com/ClickHouse/ClickHouse/issues/54275 | https://github.com/ClickHouse/ClickHouse/pull/54639 | 2a0ec41d85a2b2ec269634929695db03f1a48c61 | 9ebecb5499e41241be1722cb9d3aba6c6a686812 | "2023-09-04T20:42:15Z" | c++ | "2023-09-28T14:44:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,239 | ["src/Interpreters/Set.cpp", "src/Interpreters/Set.h", "src/Interpreters/castColumn.cpp", "src/Interpreters/castColumn.h", "tests/performance/enum_in_set.xml"] | Set::execute excessively builds internal CAST functions | <img width="1491" alt="image" src="https://github.com/ClickHouse/ClickHouse/assets/22796953/32e20f98-90f7-4630-9bd3-c3ff35fb2e8d">
====
Building functions requires parsing, so it can be slow. Probably `Set` should cache the corresponding `CAST` function for each column. | https://github.com/ClickHouse/ClickHouse/issues/54239 | https://github.com/ClickHouse/ClickHouse/pull/55712 | 3d8875a342925fd658ee3ab001bdcf3deb5f050f | 5923e1b1167b1bf755654e3dcb46c13106ec164b | "2023-09-04T06:50:46Z" | c++ | "2023-10-23T11:31:44Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,218 | ["src/QueryPipeline/QueryPipeline.cpp"] | FROM INFILE with multiple files does not parallelize | **Use case**
```
clickhouse-client --query "INSERT INTO table FROM INFILE '*.csv'"
```
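What "parallelize it by files" could look like, sketched with a generic per-file worker (the `ingest` function is a hypothetical stand-in for parsing and inserting one CSV, not ClickHouse internals):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor
from glob import glob

def ingest(path: str) -> int:
    # Stand-in for "parse this CSV and insert it"; returns rows loaded.
    with open(path) as f:
        return sum(1 for _ in f)

def ingest_glob(pattern: str, workers: int = 4) -> int:
    # Each file matched by the glob becomes an independent unit of work.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(ingest, glob(pattern)))

# Demo with three throwaway CSVs of two rows each:
tmp = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(tmp, f"{i}.csv"), "w") as f:
        f.write("a,1\nb,2\n")
print(ingest_glob(os.path.join(tmp, "*.csv")))
```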
**Describe the solution you'd like**
Parallelize it by files. | https://github.com/ClickHouse/ClickHouse/issues/54218 | https://github.com/ClickHouse/ClickHouse/pull/54533 | 45cf7935447efd5def4fead68211dc549ec4d49b | fd2ac0cb8f471a4ff7fbd6baae280ca335585e83 | "2023-09-03T03:02:26Z" | c++ | "2023-09-15T12:30:49Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,206 | ["programs/server/dashboard.html"] | Advanced dashboard: charts are not draggable on iPad. | null | https://github.com/ClickHouse/ClickHouse/issues/54206 | https://github.com/ClickHouse/ClickHouse/pull/55649 | 79eccfb6421eeaf7d3cf0ceb3af619fb45962a09 | 3864c6746e051a3771c2ecf1be881ebcace85f0b | "2023-09-02T13:36:19Z" | c++ | "2023-10-17T21:10:21Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,162 | ["src/Storages/MergeTree/MergeTreeIndexFullText.cpp", "tests/queries/0_stateless/00908_bloom_filter_index.reference", "tests/queries/0_stateless/00908_bloom_filter_index.sh"] | ngram bloomfilter index based on IPv6 type breaks in v23.3 | **Describe the unexpected behaviour**
We are trying to upgrade from v22.3-lts to v23.3-lts. This is when we noticed that v23.3 wasn't happy with ngram bloom filter indices we had defined on IPv6 type.
**How to reproduce**
- v22.3: https://fiddle.clickhouse.com/623b1437-2ad7-4803-b33f-8793d3e9a87e
- v23.3: https://fiddle.clickhouse.com/9eaef7e1-c306-45d2-bffa-1cd359337a87
* Which ClickHouse server version to use: v23.3-lts
* Queries to run that lead to unexpected result
```
DROP TABLE IF EXISTS table1;
CREATE TABLE table1 (foo IPv6, INDEX fooIndex foo TYPE ngrambf_v1(8,512,3,0) GRANULARITY 1) ENGINE = MergeTree() ORDER BY foo;
DETACH table table1;
ATTACH TABLE table1;
DROP TABLE table1;
```
**Expected behavior**
I would ideally expect this to behave the same as it behaves in v22.3-lts.
**Error message and/or stacktrace**
```
Received exception from server (version 23.3.8):
Code: 80. DB::Exception: Received from localhost:9000. DB::Exception: Ngram and token bloom filter indexes can only be used with column types `String`, `FixedString`, `LowCardinality(String)`, `LowCardinality(FixedString)`, `Array(String)` or `Array(FixedString)`. (INCORRECT_QUERY)
(query: CREATE TABLE table1 (foo IPv6, INDEX fooIndex foo TYPE ngrambf_v1(8,512,3,0) GRANULARITY 1) ENGINE = MergeTree() ORDER BY foo;)
```
**Additional context**
I think the issue here is that IPv6 used to be based on `FixedString(16)` if I recall correctly. But since `v23.3`, it looks like it's now based on IPv6 native type: `https://github.com/ClickHouse/ClickHouse/pull/43221` (which seems to have an underlying type of UInt128).
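A quick standard-library sanity check of the sizes mentioned above (nothing ClickHouse-specific): an IPv6 address is 16 raw bytes and fits in 128 bits, which matches both the old `FixedString(16)` encoding and a `UInt128`-style underlying type.

```python
import ipaddress
import socket

addr = ipaddress.IPv6Address("2001:db8::1")
raw = socket.inet_pton(socket.AF_INET6, str(addr))

print(len(raw))                       # 16 bytes, i.e. the old FixedString(16) shape
print(int(addr).bit_length() <= 128)  # True: fits a 128-bit (UInt128-style) integer
```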
What could be the possible fix/approach here for the upgrade, since this change looks backwards incompatible?
Thank you in advance for taking a look at my issue. | https://github.com/ClickHouse/ClickHouse/issues/54162 | https://github.com/ClickHouse/ClickHouse/pull/54200 | e3b5972fab1219284f5daf9da8761f3f81d3731d | 86223699be977ab9fbd0abf850dd40303d23d6b8 | "2023-09-01T05:14:17Z" | c++ | "2023-09-03T20:08:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 54,156 | ["src/Processors/QueryPlan/PartsSplitter.cpp", "src/Processors/QueryPlan/PartsSplitter.h", "tests/queries/0_stateless/01861_explain_pipeline.reference", "tests/queries/0_stateless/02780_final_streams_data_skipping_index.reference", "tests/queries/0_stateless/02867_nullable_primary_key_final.reference", "tests/queries/0_stateless/02867_nullable_primary_key_final.sql"] | ILLEGAL_TYPE_OF_ARGUMENT allow_nullable_key + final | https://fiddle.clickhouse.com/796d6997-b571-4e4f-aede-1e18f670ec6f

```sql
CREATE TABLE t
(
    d Nullable(Date),
    f1 Nullable(String),
    f2 Nullable(String),
    c Nullable(Int64)
)
ENGINE = ReplacingMergeTree()
ORDER BY (f1, f2, d)
SETTINGS allow_nullable_key = 1;

insert into t select today() d,
    [number%999999, null][number%2] f1,
    ['x', null][number%2] f2,
    [number, null][number%2] c
from numbers(1000000);

SELECT date_trunc('month', d), SUM( c )
FROM t FINAL WHERE f2 = 'x' GROUP BY 1;

Received exception from server (version 23.7.5):
Code: 43. DB::Exception: Received from localhost:9000.
DB::Exception: Illegal types of arguments (Date, UInt16) of function greater:
While processing NOT ((f1, f2, d) > ('203217', 'x', 19600)). (ILLEGAL_TYPE_OF_ARGUMENT)
```

cc @amosbird | https://github.com/ClickHouse/ClickHouse/issues/54156 | https://github.com/ClickHouse/ClickHouse/pull/54164 | 5fb8e469672888fa191bc1fe829dac095c6a896a | 0518b64b583cb1d1f518820b548547b19206fc73 | "2023-08-31T20:25:12Z" | c++ | "2023-09-15T20:44:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,858 | ["src/Interpreters/MutationsInterpreter.cpp", "src/Interpreters/MutationsInterpreter.h", "src/Storages/MergeTree/MergeTreeMarksLoader.cpp", "tests/queries/0_stateless/02891_alter_update_adaptive_granularity.reference", "tests/queries/0_stateless/02891_alter_update_adaptive_granularity.sql"] | DB::Exception: Too many marks in file skip_idx_sindex_visitorid.cmrk3, marks expected 3 (bytes size 72) | Hi,
I'm trying to use the latest version 23.7.4.5 (official build) with a skip index, but after I insert data and try to query via the skip index, I get this error. Can you please tell me why?
I'm just getting started with ClickHouse and am stuck at this first step:
`error: HttpCode:500 ; ;Code: 33. DB::Exception: Too many marks in file skp_idx_sindex_visitorid.cmrk3, marks expected 3 (bytes size 72). (CANNOT_READ_ALL_DATA) (version 23.7.4.5 (official build))`
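To decode the numbers in that message: 72 bytes corresponds to exactly 3 marks if each mark entry is 24 bytes. That per-mark size is my assumption about the adaptive-granularity mark layout (two 8-byte offsets plus an 8-byte row count), not something taken from the ClickHouse source:

```python
BYTES_PER_MARK = 8 + 8 + 8  # assumed: compressed offset, decompressed offset, rows in granule

def marks_in_file(file_size_bytes: int) -> int:
    # How many fixed-size mark entries fit in a marks file of this size.
    return file_size_bytes // BYTES_PER_MARK

print(marks_in_file(72))  # 3, matching "marks expected 3 (bytes size 72)"
```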
Thank you very much! | https://github.com/ClickHouse/ClickHouse/issues/53858 | https://github.com/ClickHouse/ClickHouse/pull/55202 | 8ffe87cf06cfd0855ed5c93b833cc5d489f15f66 | 26a938c8cf21b2597d5394fd7700adcc32a4ecef | "2023-08-27T14:21:02Z" | c++ | "2023-10-09T17:24:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,749 | ["src/Common/ZooKeeper/IKeeper.cpp", "src/Common/ZooKeeper/IKeeper.h", "src/Common/ZooKeeper/ZooKeeper.cpp", "src/Common/ZooKeeper/ZooKeeperConstants.h", "src/Common/ZooKeeper/ZooKeeperImpl.cpp", "src/Interpreters/ZooKeeperLog.cpp"] | Incompatibility with Zookeeper 3.9 | It seems ZK 3.9 has changed something in its protocol and ClickHouse can't connect to it.
The error seems to be related to the handshake:
```
2023.08.23 13:11:59.885984 [ 422494 ] {} <Error> virtual bool DB::DDLWorker::initializeMainThread(): Code: 999. Coordination::Exception: Connection loss, path: All connection tries failed while connecting to ZooKeeper. nodes: 127.0.0.1:12183, 127.0.0.1:12181, 127.0.0.1:12182
Code: 999. Coordination::Exception: Unexpected handshake length received: 37 (Marshalling error): while receiving handshake from ZooKeeper. (KEEPER_EXCEPTION) (version 23.6.1.1524 (official build)), 127.0.0.1:12183
Code: 999. Coordination::Exception: Unexpected handshake length received: 37 (Marshalling error): while receiving handshake from ZooKeeper. (KEEPER_EXCEPTION) (version 23.6.1.1524 (official build)), 127.0.0.1:12181
Code: 999. Coordination::Exception: Unexpected handshake length received: 37 (Marshalling error): while receiving handshake from ZooKeeper. (KEEPER_EXCEPTION) (version 23.6.1.1524 (official build)), 127.0.0.1:12182
Code: 999. Coordination::Exception: Unexpected handshake length received: 37 (Marshalling error): while receiving handshake from ZooKeeper. (KEEPER_EXCEPTION) (version 23.6.1.1524 (official build)), 127.0.0.1:12183
Code: 999. Coordination::Exception: Unexpected handshake length received: 37 (Marshalling error): while receiving handshake from ZooKeeper. (KEEPER_EXCEPTION) (version 23.6.1.1524 (official build)), 127.0.0.1:12181
Code: 999. Coordination::Exception: Unexpected handshake length received: 37 (Marshalling error): while receiving handshake from ZooKeeper. (KEEPER_EXCEPTION) (version 23.6.1.1524 (official build)), 127.0.0.1:12182
Code: 999. Coordination::Exception: Unexpected handshake length received: 37 (Marshalling error): while receiving handshake from ZooKeeper. (KEEPER_EXCEPTION) (version 23.6.1.1524 (official build)), 127.0.0.1:12183
Code: 999. Coordination::Exception: Unexpected handshake length received: 37 (Marshalling error): while receiving handshake from ZooKeeper. (KEEPER_EXCEPTION) (version 23.6.1.1524 (official build)), 127.0.0.1:12181
Code: 999. Coordination::Exception: Unexpected handshake length received: 37 (Marshalling error): while receiving handshake from ZooKeeper. (KEEPER_EXCEPTION) (version 23.6.1.1524 (official build)), 127.0.0.1:12182
. (KEEPER_EXCEPTION), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000e1fc3f5 in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
1. Coordination::Exception::Exception(String const&, Coordination::Error, int) @ 0x0000000015220571 in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
2. Coordination::Exception::Exception(Coordination::Error, String const&) @ 0x0000000015220c6d in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
3. Coordination::ZooKeeper::connect(std::vector<Coordination::ZooKeeper::Node, std::allocator<Coordination::ZooKeeper::Node>> const&, Poco::Timespan) @ 0x000000001527030e in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
4. Coordination::ZooKeeper::ZooKeeper(std::vector<Coordination::ZooKeeper::Node, std::allocator<Coordination::ZooKeeper::Node>> const&, zkutil::ZooKeeperArgs const&, std::shared_ptr<DB::ZooKeeperLog>) @ 0x000000001526dccd in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
5. zkutil::ZooKeeper::init(zkutil::ZooKeeperArgs) @ 0x0000000015223553 in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
6. zkutil::ZooKeeper::ZooKeeper(Poco::Util::AbstractConfiguration const&, String const&, std::shared_ptr<DB::ZooKeeperLog>) @ 0x00000000152270c3 in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
7. DB::Context::getZooKeeper() const @ 0x0000000012f73dcc in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
8. DB::DDLWorker::getAndSetZooKeeper() @ 0x0000000012fdfa8d in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
9. DB::DDLWorker::initializeMainThread() @ 0x0000000012ff2c6c in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
10. DB::DDLWorker::runMainThread() @ 0x0000000012fdd771 in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
11. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<void (DB::DDLWorker::*)(), DB::DDLWorker*>(void (DB::DDLWorker::*&&)(), DB::DDLWorker*&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x0000000012ff3dc9 in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
12. ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x000000000e2d1a74 in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
13. ? @ 0x000000000e2d7281 in /mnt/ch/official_binaries/clickhouse-common-static-23.6.1.1524/usr/bin/clickhouse
14. ? @ 0x00007f4708c8c9eb in ?
15. ? @ 0x00007f4708d10dfc in ?
(version 23.6.1.1524 (official build))
```
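A guess at why the rejected length is 37: the classic ZooKeeper ConnectResponse payload is 36 bytes (4-byte protocol version, 4-byte negotiated timeout, 8-byte session id, 4-byte password length, 16-byte password), and 37 would be that plus a one-byte read-only flag that 3.9 apparently sends where the client expects the bare 36. Treat the layout below as my reading of the wire format, not a verified spec:

```python
import struct

# Assumed ConnectResponse layout: protocol version (i), negotiated timeout (i),
# session id (q), password length prefix (i), 16-byte password (16s).
BASE = struct.calcsize(">iiqi16s")   # 36 bytes
WITH_READONLY = BASE + 1             # 37 bytes: plus a trailing read-only flag byte

def accept_handshake_length(n: int) -> bool:
    # A tolerant client would accept both framings.
    return n in (BASE, WITH_READONLY)

print(BASE, WITH_READONLY, accept_handshake_length(37))  # 36 37 True
```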
ZK 3.8.2 is fine.
Keeper is fine too. | https://github.com/ClickHouse/ClickHouse/issues/53749 | https://github.com/ClickHouse/ClickHouse/pull/57479 | 9b517e47129bd52f509cdb073b822f6b93f8f6cf | 429ed3460704b8d5fed272b5a188ed7a49e74581 | "2023-08-23T13:24:00Z" | c++ | "2023-12-07T19:22:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,720 | ["src/Analyzer/FunctionNode.h", "src/Analyzer/InDepthQueryTreeVisitor.h", "src/Analyzer/Passes/IfConstantConditionPass.cpp", "src/Analyzer/Utils.cpp", "src/Analyzer/Utils.h", "tests/queries/0_stateless/00835_if_generic_case.reference", "tests/queries/0_stateless/02901_remove_nullable_crash_analyzer.reference", "tests/queries/0_stateless/02901_remove_nullable_crash_analyzer.sql"] | ClickHouse Server v23.7.4.5 crashed by a SELECT statement with allow_experimental_analyzer enabled ("SELECT CASE 1 WHEN ...") | **Describe the bug**
ClickHouse Server v23.7.4.5 crashed by a SELECT statement with allow_experimental_analyzer enabled.
It was found by an in-development fuzzer of WINGFUZZ.
**How to reproduce**
The SQL statement to reproduce:
```sql
( SELECT CASE 1 WHEN FALSE THEN 1 ELSE CASE WHEN 1 THEN 1 - (CASE 1 WHEN 1 THEN 1 ELSE 1 END) END % 1 END ) SETTINGS allow_experimental_analyzer = 1 ;
```
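For context, the query itself is well-formed and, evaluated by hand, should simply yield 0 rather than crash. A tiny Python transliteration (the CASE-to-branch mapping is my reading of the SQL semantics):

```python
# Hand-evaluating the fuzzer's expression step by step:
inner = 1 if 1 == 1 else 1             # CASE 1 WHEN 1 THEN 1 ELSE 1 END -> 1
middle = (1 - inner) if 1 else None    # CASE WHEN 1 THEN 1 - (...) END -> 0
result = 1 if 1 == 0 else middle % 1   # outer CASE: 1 <> FALSE, take ELSE: 0 % 1
print(result)  # 0: the server should return this instead of segfaulting
```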
It can be reproduced on the official docker image. (`clickhouse/clickhouse-server:head` (version 23.8.1.2413) and `clickhouse/clickhouse-server:latest` (version 23.7.4.5)).
The log traced by ClickHouse Server:
```
SELECT caseWithExpression(1, false, 1, multiIf(1, 1 - caseWithExpression(1, 1, 1, 1), NULL) % 1)
SETTINGS allow_experimental_analyzer = 1
Query id: 6fdd23c6-940e-431b-9db0-d64736b1aca1
[8af69c367457] 2023.08.23 07:56:04.821639 [ 346 ] <Fatal> BaseDaemon: ########################################
[8af69c367457] 2023.08.23 07:56:04.821674 [ 346 ] <Fatal> BaseDaemon: (version 23.8.1.2413 (official build), build id: 4DCA66DD83B2161C82851B4655CD14334A08D535, git hash: 926533306c5969b77571e66163a6930cfce1cf86) (from thread 289) (query_id: 6fdd23c6-940e-431b-9db0-d64736b1aca1) (query: ( SELECT CASE 1 WHEN FALSE THEN 1 ELSE CASE WHEN 1 THEN 1 - (CASE 1 WHEN 1 THEN 1 ELSE 1 END) END % 1 END ) SETTINGS allow_experimental_analyzer = 1 ;) Received signal Segmentation fault (11)
[8af69c367457] 2023.08.23 07:56:04.821698 [ 346 ] <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Unknown si_code.
[8af69c367457] 2023.08.23 07:56:04.821712 [ 346 ] <Fatal> BaseDaemon: Stack trace: 0x0000000012274e39 0x000000000bba007a 0x00000000070b988e 0x0000000010a92f89 0x0000000010a93a82 0x0000000010a94d79 0x00000000070b898d 0x000000000bb8eedd 0x00000000070b988e 0x0000000010a92f89 0x0000000010a93a82 0x0000000010a94d79 0x00000000070b898d 0x00000000083a9979 0x00000000070b988e 0x0000000010a92f89 0x0000000010a93a82 0x0000000010a94d79 0x00000000115d9619 0x0000000013253a93 0x000000000f1e4bf0 0x0000000012fe3f52 0x0000000012ffdf5a 0x0000000012ff4d90 0x0000000012ff40d1 0x00000000130018e7 0x000000000c6896e4 0x00007ffff7f9a609 0x00007ffff7ebf133
[8af69c367457] 2023.08.23 07:56:04.821766 [ 346 ] <Fatal> BaseDaemon: 2. DB::ColumnNullable::insertFrom(DB::IColumn const&, unsigned long) @ 0x0000000012274e39 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.821803 [ 346 ] <Fatal> BaseDaemon: 3. DB::(anonymous namespace)::FunctionTransform::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const (.0017489eb11ed35e6722c5fc4f9de62a) @ 0x000000000bba007a in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.821840 [ 346 ] <Fatal> BaseDaemon: 4. DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x00000000070b988e in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.821886 [ 346 ] <Fatal> BaseDaemon: 5. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a92f89 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.821916 [ 346 ] <Fatal> BaseDaemon: 6. DB::IExecutableFunction::executeWithoutSparseColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a93a82 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.821947 [ 346 ] <Fatal> BaseDaemon: 7. DB::IExecutableFunction::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a94d79 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.821984 [ 346 ] <Fatal> BaseDaemon: 8. DB::IFunctionBase::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x00000000070b898d in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822008 [ 346 ] <Fatal> BaseDaemon: 9. DB::(anonymous namespace)::FunctionTransform::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const (.0017489eb11ed35e6722c5fc4f9de62a) @ 0x000000000bb8eedd in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822048 [ 346 ] <Fatal> BaseDaemon: 10. DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x00000000070b988e in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822091 [ 346 ] <Fatal> BaseDaemon: 11. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a92f89 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822129 [ 346 ] <Fatal> BaseDaemon: 12. DB::IExecutableFunction::executeWithoutSparseColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a93a82 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822158 [ 346 ] <Fatal> BaseDaemon: 13. DB::IExecutableFunction::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a94d79 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822185 [ 346 ] <Fatal> BaseDaemon: 14. DB::IFunctionBase::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x00000000070b898d in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822222 [ 346 ] <Fatal> BaseDaemon: 15. DB::(anonymous namespace)::FunctionCaseWithExpression::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const (.843e166b829d8d386e8551902e766f14) @ 0x00000000083a9979 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822259 [ 346 ] <Fatal> BaseDaemon: 16. DB::FunctionToExecutableFunctionAdaptor::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x00000000070b988e in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822306 [ 346 ] <Fatal> BaseDaemon: 17. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a92f89 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822335 [ 346 ] <Fatal> BaseDaemon: 18. DB::IExecutableFunction::executeWithoutSparseColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a93a82 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822362 [ 346 ] <Fatal> BaseDaemon: 19. DB::IExecutableFunction::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a94d79 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822401 [ 346 ] <Fatal> BaseDaemon: 20. DB::ExpressionActions::execute(DB::Block&, unsigned long&, bool) const @ 0x00000000115d9619 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822439 [ 346 ] <Fatal> BaseDaemon: 21. DB::ExpressionTransform::transform(DB::Chunk&) @ 0x0000000013253a93 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822477 [ 346 ] <Fatal> BaseDaemon: 22. DB::ISimpleTransform::transform(DB::Chunk&, DB::Chunk&) @ 0x000000000f1e4bf0 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822512 [ 346 ] <Fatal> BaseDaemon: 23. DB::ISimpleTransform::work() @ 0x0000000012fe3f52 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822544 [ 346 ] <Fatal> BaseDaemon: 24. DB::ExecutionThreadContext::executeTask() @ 0x0000000012ffdf5a in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822587 [ 346 ] <Fatal> BaseDaemon: 25. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000012ff4d90 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822620 [ 346 ] <Fatal> BaseDaemon: 26. DB::PipelineExecutor::execute(unsigned long) @ 0x0000000012ff40d1 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822661 [ 346 ] <Fatal> BaseDaemon: 27. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x00000000130018e7 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822710 [ 346 ] <Fatal> BaseDaemon: 28. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c6896e4 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:56:04.822730 [ 346 ] <Fatal> BaseDaemon: 29. ? @ 0x00007ffff7f9a609 in ?
[8af69c367457] 2023.08.23 07:56:04.822754 [ 346 ] <Fatal> BaseDaemon: 30. ? @ 0x00007ffff7ebf133 in ?
[8af69c367457] 2023.08.23 07:56:04.979829 [ 346 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: F37F4F1F1F05354DFEECD70FAB61DC73)
[8af69c367457] 2023.08.23 07:56:04.980141 [ 346 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[8af69c367457] 2023.08.23 07:56:04.980263 [ 346 ] <Fatal> BaseDaemon: Changed settings: allow_experimental_analyzer = true
``` | https://github.com/ClickHouse/ClickHouse/issues/53720 | https://github.com/ClickHouse/ClickHouse/pull/55951 | 9666549d1538d4eedcb4a884a6be8b26454ead1e | 73fc8c8f4b424ef202d3892b184e21421febd5e0 | "2023-08-23T07:56:55Z" | c++ | "2023-11-03T16:12:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,718 | ["src/Analyzer/Passes/QueryAnalysisPass.cpp", "tests/queries/0_stateless/02901_analyzer_recursive_window.reference", "tests/queries/0_stateless/02901_analyzer_recursive_window.sql"] | ClickHouse Server v23.7.4.5 crashed by a SELECT statement with allow_experimental_analyzer enabled ("SELECT 1 WINDOW ...") | **Describe the bug**
ClickHouse Server v23.7.4.5 crashed on a SELECT statement with allow_experimental_analyzer enabled.
The bug was found by WINGFUZZ, an in-development fuzzer.
**How to reproduce**
The SQL statement to reproduce:
```sql
SELECT 1 WINDOW x AS ( PARTITION BY x ) SETTINGS allow_experimental_analyzer = 1 ;
```
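For comparison, here is a sketch of a well-formed variant (not from the original fuzzer output; the column choice is illustrative). The crash appears tied to the window alias `x` being used inside its own PARTITION BY; a window definition that partitions by a real column is not expected to recurse:

```sql
-- hedged sketch: partition by an actual column instead of the window's own alias
SELECT number, count() OVER w
FROM numbers(3)
WINDOW w AS (PARTITION BY number % 2)
SETTINGS allow_experimental_analyzer = 1;
```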
It can be reproduced on the official docker image. (`clickhouse/clickhouse-server:head` (version 23.8.1.2413) and `clickhouse/clickhouse-server:latest` (version 23.7.4.5)).
The log traced by ClickHouse Server:
```
SELECT 1
WINDOW x AS (PARTITION BY x)
SETTINGS allow_experimental_analyzer = 1
Query id: 840148ee-451f-4379-b5e3-ad4ce8678ed4
[8af69c367457] 2023.08.23 07:54:07.366367 [ 347 ] <Fatal> BaseDaemon: ########################################
[8af69c367457] 2023.08.23 07:54:07.366402 [ 347 ] <Fatal> BaseDaemon: (version 23.8.1.2413 (official build), build id: 4DCA66DD83B2161C82851B4655CD14334A08D535, git hash: 926533306c5969b77571e66163a6930cfce1cf86) (from thread 48) (query_id: 840148ee-451f-4379-b5e3-ad4ce8678ed4) (query: SELECT 1 WINDOW x AS ( PARTITION BY x ) SETTINGS allow_experimental_analyzer = 1 ;) Received signal Segmentation fault (11)
[8af69c367457] 2023.08.23 07:54:07.366419 [ 347 ] <Fatal> BaseDaemon: Address: 0x7fffd2030fa8. Access: write. Attempted access has violated the permissions assigned to the memory area.
[8af69c367457] 2023.08.23 07:54:07.366430 [ 347 ] <Fatal> BaseDaemon: Stack trace: 0x0000000011cadf40 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58 0x0000000011cadf58
[8af69c367457] 2023.08.23 07:54:07.366503 [ 347 ] <Fatal> BaseDaemon: 2. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf40 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366532 [ 347 ] <Fatal> BaseDaemon: 3. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366564 [ 347 ] <Fatal> BaseDaemon: 4. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366596 [ 347 ] <Fatal> BaseDaemon: 5. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366628 [ 347 ] <Fatal> BaseDaemon: 6. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366655 [ 347 ] <Fatal> BaseDaemon: 7. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366686 [ 347 ] <Fatal> BaseDaemon: 8. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366709 [ 347 ] <Fatal> BaseDaemon: 9. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366739 [ 347 ] <Fatal> BaseDaemon: 10. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366775 [ 347 ] <Fatal> BaseDaemon: 11. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366820 [ 347 ] <Fatal> BaseDaemon: 12. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366851 [ 347 ] <Fatal> BaseDaemon: 13. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366874 [ 347 ] <Fatal> BaseDaemon: 14. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366900 [ 347 ] <Fatal> BaseDaemon: 15. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366927 [ 347 ] <Fatal> BaseDaemon: 16. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366955 [ 347 ] <Fatal> BaseDaemon: 17. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.366986 [ 347 ] <Fatal> BaseDaemon: 18. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367015 [ 347 ] <Fatal> BaseDaemon: 19. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367047 [ 347 ] <Fatal> BaseDaemon: 20. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367075 [ 347 ] <Fatal> BaseDaemon: 21. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367104 [ 347 ] <Fatal> BaseDaemon: 22. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367137 [ 347 ] <Fatal> BaseDaemon: 23. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367158 [ 347 ] <Fatal> BaseDaemon: 24. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367187 [ 347 ] <Fatal> BaseDaemon: 25. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367228 [ 347 ] <Fatal> BaseDaemon: 26. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367254 [ 347 ] <Fatal> BaseDaemon: 27. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367282 [ 347 ] <Fatal> BaseDaemon: 28. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367304 [ 347 ] <Fatal> BaseDaemon: 29. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367336 [ 347 ] <Fatal> BaseDaemon: 30. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367363 [ 347 ] <Fatal> BaseDaemon: 31. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367389 [ 347 ] <Fatal> BaseDaemon: 32. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367428 [ 347 ] <Fatal> BaseDaemon: 33. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367453 [ 347 ] <Fatal> BaseDaemon: 34. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367480 [ 347 ] <Fatal> BaseDaemon: 35. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367510 [ 347 ] <Fatal> BaseDaemon: 36. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367543 [ 347 ] <Fatal> BaseDaemon: 37. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367570 [ 347 ] <Fatal> BaseDaemon: 38. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367591 [ 347 ] <Fatal> BaseDaemon: 39. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367628 [ 347 ] <Fatal> BaseDaemon: 40. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367661 [ 347 ] <Fatal> BaseDaemon: 41. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367693 [ 347 ] <Fatal> BaseDaemon: 42. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367721 [ 347 ] <Fatal> BaseDaemon: 43. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.367742 [ 347 ] <Fatal> BaseDaemon: 44. DB::InDepthQueryTreeVisitor<DB::(anonymous namespace)::CollectWindowFunctionNodeVisitor, true>::visit(std::shared_ptr<DB::IQueryTreeNode> const&) (.llvm.17465951882029437719) @ 0x0000000011cadf58 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:54:07.519317 [ 347 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: F37F4F1F1F05354DFEECD70FAB61DC73)
[8af69c367457] 2023.08.23 07:54:07.519650 [ 347 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[8af69c367457] 2023.08.23 07:54:07.519774 [ 347 ] <Fatal> BaseDaemon: Changed settings: allow_experimental_analyzer = true
``` | https://github.com/ClickHouse/ClickHouse/issues/53718 | https://github.com/ClickHouse/ClickHouse/pull/56055 | b65c498016403f78e5ad83cf066e21b838e55495 | 224d4f0ee199f99d4a72b102dccda74a02162bbd | "2023-08-23T07:54:48Z" | c++ | "2023-10-27T19:55:46Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,717 | ["src/Analyzer/FunctionNode.h", "src/Analyzer/InDepthQueryTreeVisitor.h", "src/Analyzer/Passes/IfConstantConditionPass.cpp", "src/Analyzer/Utils.cpp", "src/Analyzer/Utils.h", "tests/queries/0_stateless/00835_if_generic_case.reference", "tests/queries/0_stateless/02901_remove_nullable_crash_analyzer.reference", "tests/queries/0_stateless/02901_remove_nullable_crash_analyzer.sql"] | ClickHouse Server v23.7.4.5 crashed by a SELECT statement with allow_experimental_analyzer enabled ("SELECT 1 % (CASE ...") | **Describe the bug**
ClickHouse Server v23.7.4.5 crashed on a SELECT statement with allow_experimental_analyzer enabled.
The bug was found by WINGFUZZ, an in-development fuzzer.
**How to reproduce**
The SQL statement to reproduce:
```sql
SELECT 1 % ( CASE WHEN 1 THEN (1 IS NOT NULL + *) ELSE NULL END ) SETTINGS allow_experimental_analyzer = 1 ;
```
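For reference, a variant that avoids the crash (a hedged sketch, not from the original report): the bare `*` inside the CASE branch expands to no columns in a constant SELECT, which seems to be what trips the analyzer. Replacing it with a literal yields an ordinary query:

```sql
-- hedged sketch: same shape with a literal in place of the bare *
SELECT 1 % (CASE WHEN 1 THEN (1 IS NOT NULL) + 1 ELSE NULL END)
SETTINGS allow_experimental_analyzer = 1;
```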
It can be reproduced on the official docker image. (`clickhouse/clickhouse-server:head` (version 23.8.1.2413) and `clickhouse/clickhouse-server:latest` (version 23.7.4.5)).
The log traced by ClickHouse Server:
```
SELECT 1 % multiIf(1, (1 IS NOT NULL) + *, NULL)
SETTINGS allow_experimental_analyzer = 1
Query id: e2914ca0-8b1f-476d-b485-f8a178f3a877
[8af69c367457] 2023.08.23 07:50:54.379251 [ 345 ] <Fatal> BaseDaemon: ########################################
[8af69c367457] 2023.08.23 07:50:54.379306 [ 345 ] <Fatal> BaseDaemon: (version 23.8.1.2413 (official build), build id: 4DCA66DD83B2161C82851B4655CD14334A08D535, git hash: 926533306c5969b77571e66163a6930cfce1cf86) (from thread 48) (query_id: e2914ca0-8b1f-476d-b485-f8a178f3a877) (query: SELECT 1 % ( CASE WHEN 1 THEN (1 IS NOT NULL + *) ELSE NULL END ) SETTINGS allow_experimental_analyzer = 1 ;) Received signal Segmentation fault (11)
[8af69c367457] 2023.08.23 07:50:54.379360 [ 345 ] <Fatal> BaseDaemon: Address: 0x18. Access: read. Address not mapped to object.
[8af69c367457] 2023.08.23 07:50:54.379414 [ 345 ] <Fatal> BaseDaemon: Stack trace: 0x000000000a67c7ad 0x000000000a67be4b 0x000000000a67aeba 0x00000000070ba30a 0x00000000070b98ae 0x0000000010a92d6a 0x0000000010a93a82 0x0000000010a94d79 0x00000000113fbecc 0x0000000013253084 0x00000000133ae54e 0x0000000011da4e75 0x0000000011d9f237 0x0000000012137b51 0x000000001213398e 0x0000000012f98d19 0x0000000012faa959 0x0000000015997514 0x0000000015998711 0x0000000015ace847 0x0000000015accb1c 0x00007ffff7f9a609 0x00007ffff7ebf133
[8af69c367457] 2023.08.23 07:50:54.379526 [ 345 ] <Fatal> BaseDaemon: 2. DB::FunctionBinaryArithmetic<DB::ModuloImpl, DB::NameModulo, false, true, true>::executeImpl2(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, DB::PODArray<char8_t, 4096ul, Allocator<false, false>, 63ul, 64ul> const*) const @ 0x000000000a67c7ad in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379576 [ 345 ] <Fatal> BaseDaemon: 3. DB::FunctionBinaryArithmetic<DB::ModuloImpl, DB::NameModulo, false, true, true>::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x000000000a67be4b in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379604 [ 345 ] <Fatal> BaseDaemon: 4. DB::FunctionBinaryArithmeticWithConstants<DB::ModuloImpl, DB::NameModulo, false, true, true>::executeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x000000000a67aeba in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379660 [ 345 ] <Fatal> BaseDaemon: 5. DB::IFunction::executeImplDryRun(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x00000000070ba30a in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379704 [ 345 ] <Fatal> BaseDaemon: 6. DB::FunctionToExecutableFunctionAdaptor::executeDryRunImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long) const @ 0x00000000070b98ae in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379747 [ 345 ] <Fatal> BaseDaemon: 7. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a92d6a in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379777 [ 345 ] <Fatal> BaseDaemon: 8. DB::IExecutableFunction::executeWithoutSparseColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a93a82 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379803 [ 345 ] <Fatal> BaseDaemon: 9. DB::IExecutableFunction::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000010a94d79 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379848 [ 345 ] <Fatal> BaseDaemon: 10. DB::ActionsDAG::updateHeader(DB::Block) const @ 0x00000000113fbecc in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379899 [ 345 ] <Fatal> BaseDaemon: 11. DB::ExpressionTransform::transformHeader(DB::Block, DB::ActionsDAG const&) @ 0x0000000013253084 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379950 [ 345 ] <Fatal> BaseDaemon: 12. DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::shared_ptr<DB::ActionsDAG> const&) @ 0x00000000133ae54e in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.379988 [ 345 ] <Fatal> BaseDaemon: 13. DB::Planner::buildPlanForQueryNode() @ 0x0000000011da4e75 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.380014 [ 345 ] <Fatal> BaseDaemon: 14. DB::Planner::buildQueryPlanIfNeeded() @ 0x0000000011d9f237 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.380049 [ 345 ] <Fatal> BaseDaemon: 15. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x0000000012137b51 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.380083 [ 345 ] <Fatal> BaseDaemon: 16. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x000000001213398e in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.380119 [ 345 ] <Fatal> BaseDaemon: 17. DB::TCPHandler::runImpl() @ 0x0000000012f98d19 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.380162 [ 345 ] <Fatal> BaseDaemon: 18. DB::TCPHandler::run() @ 0x0000000012faa959 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.380192 [ 345 ] <Fatal> BaseDaemon: 19. Poco::Net::TCPServerConnection::start() @ 0x0000000015997514 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.380229 [ 345 ] <Fatal> BaseDaemon: 20. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015998711 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.380286 [ 345 ] <Fatal> BaseDaemon: 21. Poco::PooledThread::run() @ 0x0000000015ace847 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.380323 [ 345 ] <Fatal> BaseDaemon: 22. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015accb1c in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:50:54.380349 [ 345 ] <Fatal> BaseDaemon: 23. ? @ 0x00007ffff7f9a609 in ?
[8af69c367457] 2023.08.23 07:50:54.380368 [ 345 ] <Fatal> BaseDaemon: 24. ? @ 0x00007ffff7ebf133 in ?
[8af69c367457] 2023.08.23 07:50:54.540563 [ 345 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: F37F4F1F1F05354DFEECD70FAB61DC73)
[8af69c367457] 2023.08.23 07:50:54.540852 [ 345 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[8af69c367457] 2023.08.23 07:50:54.541005 [ 345 ] <Fatal> BaseDaemon: Changed settings: allow_experimental_analyzer = true
``` | https://github.com/ClickHouse/ClickHouse/issues/53717 | https://github.com/ClickHouse/ClickHouse/pull/55951 | 9666549d1538d4eedcb4a884a6be8b26454ead1e | 73fc8c8f4b424ef202d3892b184e21421febd5e0 | "2023-08-23T07:51:46Z" | c++ | "2023-11-03T16:12:27Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,715 | ["src/Functions/parseDateTime.cpp", "tests/queries/0_stateless/02668_parse_datetime.reference", "tests/queries/0_stateless/02668_parse_datetime.sql"] | Crash bug: ClickHouse Server v23.7.4.5 crashed when calling the parseDateTime function | **Describe the bug**
ClickHouse Server v23.7.4.5 crashed when the parseDateTime function was called with invalid arguments.
The bug was found by WINGFUZZ, an in-development fuzzer.
**How to reproduce**
The SQL statement to reproduce:
```sql
SELECT parseDateTime ('' , '' , toString ( number ) ) FROM numbers ( 13 ) ;
```
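For comparison, a well-formed call is sketched below (not from the original report; the date values are illustrative). Judging from the stack trace (`getTimeZone` dereferencing a null pointer), the crash seems to come from passing a non-constant expression (`toString(number)`) as the third argument, which is documented to be a constant time-zone string:

```sql
-- hedged sketch: constant MySQL-style format string and constant time zone
SELECT parseDateTime('2023-08-23 07:39:50', '%Y-%m-%d %H:%i:%s', 'UTC');
```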
It can be reproduced on the official docker image. (`clickhouse/clickhouse-server:head` (version 23.8.1.2413) and `clickhouse/clickhouse-server:latest` (version 23.7.4.5)).
The log traced by ClickHouse Server:
```
SELECT parseDateTime('', '', toString(number))
FROM numbers(13)
Query id: f1075392-f8cd-4d88-8975-3b20e532e6be
[8af69c367457] 2023.08.23 07:39:50.308508 [ 351 ] <Fatal> BaseDaemon: ########################################
[8af69c367457] 2023.08.23 07:39:50.308568 [ 351 ] <Fatal> BaseDaemon: (version 23.8.1.2413 (official build), build id: 4DCA66DD83B2161C82851B4655CD14334A08D535, git hash: 926533306c5969b77571e66163a6930cfce1cf86) (from thread 49) (query_id: f1075392-f8cd-4d88-8975-3b20e532e6be) (query: SELECT parseDateTime ('' , '' , toString ( number ) ) FROM numbers ( 13 ) ;) Received signal Segmentation fault (11)
[8af69c367457] 2023.08.23 07:39:50.308598 [ 351 ] <Fatal> BaseDaemon: Address: NULL pointer. Access: read. Address not mapped to object.
[8af69c367457] 2023.08.23 07:39:50.308632 [ 351 ] <Fatal> BaseDaemon: Stack trace: 0x000000000b21cc89 0x000000000b21c373 0x0000000010a95933 0x0000000010a95588 0x0000000010a96256 0x00000000113f5a3a 0x00000000116164a0 0x000000001162312f 0x000000001161a003 0x00000000116169e8 0x000000001160e5d5 0x00000000115ec4bb 0x00000000115f4858 0x00000000115fceb1 0x0000000011d60303 0x0000000011d5147a 0x0000000011d43a97 0x0000000011df20a8 0x0000000011cfa7fe 0x000000001213796a 0x000000001213398e 0x0000000012f98d19 0x0000000012faa959 0x0000000015997514 0x0000000015998711 0x0000000015ace847 0x0000000015accb1c 0x00007ffff7f9a609 0x00007ffff7ebf133
[8af69c367457] 2023.08.23 07:39:50.308763 [ 351 ] <Fatal> BaseDaemon: 2. DB::(anonymous namespace)::FunctionParseDateTimeImpl<DB::(anonymous namespace)::NameParseDateTime, (DB::(anonymous namespace)::ParseSyntax)0, (DB::(anonymous namespace)::ErrorHandling)0>::getTimeZone(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&) const @ 0x000000000b21cc89 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.308846 [ 351 ] <Fatal> BaseDaemon: 3. DB::(anonymous namespace)::FunctionParseDateTimeImpl<DB::(anonymous namespace)::NameParseDateTime, (DB::(anonymous namespace)::ParseSyntax)0, (DB::(anonymous namespace)::ErrorHandling)0>::getReturnTypeImpl(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&) const (.f93405704e33169bff82a6007b386acc) @ 0x000000000b21c373 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.308920 [ 351 ] <Fatal> BaseDaemon: 4. DB::IFunctionOverloadResolver::getReturnTypeWithoutLowCardinality(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&) const @ 0x0000000010a95933 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309010 [ 351 ] <Fatal> BaseDaemon: 5. DB::IFunctionOverloadResolver::getReturnType(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&) const @ 0x0000000010a95588 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309088 [ 351 ] <Fatal> BaseDaemon: 6. DB::IFunctionOverloadResolver::build(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&) const @ 0x0000000010a96256 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309172 [ 351 ] <Fatal> BaseDaemon: 7. DB::ActionsDAG::addFunction(std::shared_ptr<DB::IFunctionOverloadResolver> const&, std::vector<DB::ActionsDAG::Node const*, std::allocator<DB::ActionsDAG::Node const*>>, String) @ 0x00000000113f5a3a in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309259 [ 351 ] <Fatal> BaseDaemon: 8. DB::ScopeStack::addFunction(std::shared_ptr<DB::IFunctionOverloadResolver> const&, std::vector<String, std::allocator<String>> const&, String) @ 0x00000000116164a0 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309337 [ 351 ] <Fatal> BaseDaemon: 9. DB::ActionsMatcher::Data::addFunction(std::shared_ptr<DB::IFunctionOverloadResolver> const&, std::vector<String, std::allocator<String>> const&, String) @ 0x000000001162312f in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309422 [ 351 ] <Fatal> BaseDaemon: 10. DB::ActionsMatcher::visit(DB::ASTFunction const&, std::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x000000001161a003 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309495 [ 351 ] <Fatal> BaseDaemon: 11. DB::ActionsMatcher::visit(std::shared_ptr<DB::IAST> const&, DB::ActionsMatcher::Data&) @ 0x00000000116169e8 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309603 [ 351 ] <Fatal> BaseDaemon: 12. DB::InDepthNodeVisitor<DB::ActionsMatcher, true, false, std::shared_ptr<DB::IAST> const>::doVisit(std::shared_ptr<DB::IAST> const&) @ 0x000000001160e5d5 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309686 [ 351 ] <Fatal> BaseDaemon: 13. DB::ExpressionAnalyzer::getRootActions(std::shared_ptr<DB::IAST> const&, bool, std::shared_ptr<DB::ActionsDAG>&, bool) @ 0x00000000115ec4bb in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309754 [ 351 ] <Fatal> BaseDaemon: 14. DB::SelectQueryExpressionAnalyzer::appendSelect(DB::ExpressionActionsChain&, bool) @ 0x00000000115f4858 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309842 [ 351 ] <Fatal> BaseDaemon: 15. DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::shared_ptr<DB::FilterDAGInfo> const&, std::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x00000000115fceb1 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.309925 [ 351 ] <Fatal> BaseDaemon: 16. DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x0000000011d60303 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310004 [ 351 ] <Fatal> BaseDaemon: 17. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>)::$_0::operator()(bool) const @ 0x0000000011d5147a in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310096 [ 351 ] <Fatal> BaseDaemon: 18. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x0000000011d43a97 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310166 [ 351 ] <Fatal> BaseDaemon: 19. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x0000000011df20a8 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310245 [ 351 ] <Fatal> BaseDaemon: 20. DB::InterpreterFactory::get(std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x0000000011cfa7fe in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310327 [ 351 ] <Fatal> BaseDaemon: 21. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000001213796a in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310411 [ 351 ] <Fatal> BaseDaemon: 22. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x000000001213398e in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310490 [ 351 ] <Fatal> BaseDaemon: 23. DB::TCPHandler::runImpl() @ 0x0000000012f98d19 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310560 [ 351 ] <Fatal> BaseDaemon: 24. DB::TCPHandler::run() @ 0x0000000012faa959 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310643 [ 351 ] <Fatal> BaseDaemon: 25. Poco::Net::TCPServerConnection::start() @ 0x0000000015997514 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310716 [ 351 ] <Fatal> BaseDaemon: 26. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015998711 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310788 [ 351 ] <Fatal> BaseDaemon: 27. Poco::PooledThread::run() @ 0x0000000015ace847 in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310864 [ 351 ] <Fatal> BaseDaemon: 28. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015accb1c in /usr/bin/clickhouse
[8af69c367457] 2023.08.23 07:39:50.310892 [ 351 ] <Fatal> BaseDaemon: 29. ? @ 0x00007ffff7f9a609 in ?
[8af69c367457] 2023.08.23 07:39:50.310914 [ 351 ] <Fatal> BaseDaemon: 30. ? @ 0x00007ffff7ebf133 in ?
[8af69c367457] 2023.08.23 07:39:50.484772 [ 351 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: F37F4F1F1F05354DFEECD70FAB61DC73)
[8af69c367457] 2023.08.23 07:39:50.486347 [ 351 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[8af69c367457] 2023.08.23 07:39:50.486476 [ 351 ] <Fatal> BaseDaemon: No settings were changed
``` | https://github.com/ClickHouse/ClickHouse/issues/53715 | https://github.com/ClickHouse/ClickHouse/pull/53764 | 4d2efd87b52540b26fa2aafe34ff95d535228db3 | 2f31a2a5685158f86ab3dc77fa5fb590de63143f | "2023-08-23T07:41:50Z" | c++ | "2023-08-24T09:54:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,708 | ["docker/test/integration/runner/compose/docker_compose_ldap.yml", "src/Access/LDAPAccessStorage.h", "src/Access/MemoryAccessStorage.h", "tests/integration/helpers/cluster.py", "tests/integration/test_ldap_external_user_directory/__init__.py", "tests/integration/test_ldap_external_user_directory/configs/ldap_with_role_mapping.xml", "tests/integration/test_ldap_external_user_directory/test.py"] | LDAP External User Directory role mapping leads to query hangs and denial of service | LDAP External User Directory with role mapping is configured against Microsoft Active Directory as described here:
https://clickhouse.com/docs/en/operations/external-authenticators/ldap
If some user is added to an AD group which doesn't have a corresponding role in ClickHouse after some previous query activity of this user, then subsequent TCP and HTTP queries like `SHOW ACCESS` / `SHOW CURRENT ROLES` / `SELECT user()` issued by this user start timing out (TCP) or even hanging forever (HTTP). All subsequent queries from this user time out or hang as well.
`netstat` shows a growing number of connections in the `CLOSE_WAIT` state afterwards, which eventually causes the whole ClickHouse server to hang.
**It's reproduced on 22.12 at least and all 23 releases including the latest one on CentOS, RHEL, Ubuntu**
**How to reproduce**
* ClickHouse server version: 22.12 and all 23.x
* Interfaces: clickhouse-client, curl, JDBC driver
```
sudo cat /etc/clickhouse-server/config.d/ldap.xml
<clickhouse>
<ldap_servers>
<my_ad_server>
<host>somename.dcch.local</host>
<port>389</port>
<enable_tls>no</enable_tls>
<tls_require_cert>never</tls_require_cert>
<bind_dn>SOMENAME\{user_name}</bind_dn>
<user_dn_detection>
<base_dn>OU=CHILD,DC=somename,DC=dcch,DC=local</base_dn>
<search_filter>(&(objectClass=user)(sAMAccountName={user_name}))</search_filter>
</user_dn_detection>
</my_ad_server>
</ldap_servers>
<user_directories>
<ldap>
<server>my_ad_server</server>
<role_mapping>
<base_dn>OU=CHILD,DC=somename,DC=dcch,DC=local</base_dn>
<attribute>CN</attribute>
<scope>subtree</scope>
<search_filter>(&(objectClass=group)(member={user_dn}))</search_filter>
</role_mapping>
</ldap>
</user_directories>
</clickhouse>
```
1. Initially:
`ldapsearch` with the `(&(objectClass=user)(sAMAccountName=myuser))` filter returns:
```memberOf: CN=ch_role1,OU=CHILD,DC=somename,DC=dcch,DC=local```
The role `ch_role1` was created with the `CREATE ROLE ch_role1` statement and:
```
SELECT name, storage FROM system.roles
|name |storage |
|--------|---------------|
|ch_role1|local directory|
```
2. `myuser` successfully runs the following queries:
```
clickhouse-client -h localhost -u myuser --password xyz -q 'SELECT user() FORMAT PrettyCompact'
┌─currentUser()─┐
│ myuser        │
└───────────────┘
clickhouse-client -h localhost -u myuser --password xyz -q 'SHOW CURRENT ROLES FORMAT PrettyCompact'
┌─role_name─┬─with_admin_option─┬─is_default─┐
│ ch_role1  │                 0 │          1 │
└───────────┴───────────────────┴────────────┘
```
3. `myuser` is added to another AD group `ch_role2` which doesn't have a corresponding role in ClickHouse
`ldapsearch` returns:
```
memberOf: CN=ch_role2,OU=CHILD,DC=somename,DC=dcch,DC=local
memberOf: CN=ch_role1,OU=CHILD,DC=somename,DC=dcch,DC=local
```
4. The same query now times out:
```
clickhouse-client -h localhost -u myuser --password xyz -q 'SELECT user()'
Code: 209. DB::NetException: Timeout exceeded while reading from socket (127.0.0.1:9000, 300000 ms). (SOCKET_TIMEOUT)
```
Any HTTP query from this user hangs forever afterwards:
```
echo 'SELECT 1' | curl "http://localhost:8123" --user "myuser:xyz" --data-binary @-
```
Subsequent queries from other users may or may not hang (it depends on version and release probably).
5. Network connections in the `CLOSE_WAIT` state appear afterwards, and their number grows until the whole ClickHouse server becomes unresponsive
```
netstat -an | grep CLOSE
tcp6 1 0 127.0.0.1:9000 127.0.0.1:59950 CLOSE_WAIT
tcp6 1 0 127.0.0.1:8123 127.0.0.1:38826 CLOSE_WAIT
```
6. The only way to make the ClickHouse server work normally again is to restart its service.
**Expected behavior**
User queries and the ClickHouse server must not hang after changing user group membership in an LDAP server.
| https://github.com/ClickHouse/ClickHouse/issues/53708 | https://github.com/ClickHouse/ClickHouse/pull/55119 | 48ce595e247c76020f3a74dd9ea965aec1e3ce10 | b38d4b5b0fb6956242e9a870ea5d6275210f3a22 | "2023-08-22T20:13:03Z" | c++ | "2023-10-07T22:38:10Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,640 | ["src/Interpreters/ExpressionAnalyzer.cpp", "src/Interpreters/TreeRewriter.cpp", "tests/queries/0_stateless/02863_interpolate_subquery.reference", "tests/queries/0_stateless/02863_interpolate_subquery.sql"] | 'Missing columns' exception when using INTERPOLATE in subquery | **Describe what's wrong**
In scenarios where an interpolated value depends on a calculated column (also interpolated) in a subquery, every interpolated column must be referenced in the outer query; otherwise a `Missing columns` exception is thrown
[Reproduction in fiddle](https://fiddle.clickhouse.com/bcb35723-614e-4a8b-bb1a-9d35d9cfb3d5)
**Does it reproduce on recent release?**
Reproduces in the latest version
**How to reproduce**
```
--create test table
create table t1 (ts Date, num UInt32) engine=MergeTree() order by ts;
--insert some random data a few times (since we will aggregate over the date field)
insert into t1 select toDate(now() - INTERVAL number DAY), rand(number) from numbers(100);
insert into t1 select toDate(now() - INTERVAL number DAY), rand(number) from numbers(100);
insert into t1 select toDate(now() - INTERVAL number DAY), rand(number) from numbers(100);
--CTE with interpolated values (incl col2 - calculated based on col1)
with interpolated as (
select
ts as day,
sum(num) as col1,
exponentialMovingAverage(1)(col1, toUInt64(day)) over (
Rows between 7 preceding
and current row
) as col2
from
t1
group by
day
order by
day asc with fill to toLastDayOfMonth(now()) step interval 1 day interpolate(col1 as (col1 + col2) / 2, col2 as col2)
)
--This one runs fine:
select * from interpolated;
--This one returns an error 'Code: 47. DB::Exception: Missing columns: 'col2' while processing query'
select toStartOfMonth(day) as month, sum(col1) from interpolated group by month;
--This one runs fine:
select toStartOfMonth(day) as month, sum(col1), any(col2) from interpolated group by month;
```
**Expected behavior**
Outer query should execute without needing to select every interpolated field from subquery
| https://github.com/ClickHouse/ClickHouse/issues/53640 | https://github.com/ClickHouse/ClickHouse/pull/53754 | ec628ee6972d292f46f631a5f493e640bd6c9dd4 | 217bfa0e42a4e18b62c0220db19e7381bf24a06f | "2023-08-21T13:16:01Z" | c++ | "2023-09-04T05:05:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,634 | ["tests/performance/encrypt_decrypt_empty_string_slow.xml"] | Perf test `encrypt_decrypt_empty_string_slow` is too slow | https://s3.amazonaws.com/clickhouse-test-reports/0/dd1a00b97646c0b0103bf2f0d46c430f03ba91bc/performance_comparison_aarch64_[2_4]/report.html#test-times.encrypt_decrypt_empty_string_slow
@Enmk please take a look | https://github.com/ClickHouse/ClickHouse/issues/53634 | https://github.com/ClickHouse/ClickHouse/pull/53691 | b78bd47c7a6c40e1d97d55ce07f425688728485e | 050925314916ec3b3de950b8ce21fe18b595ded8 | "2023-08-21T12:09:44Z" | c++ | "2023-08-22T18:03:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,602 | ["docs/en/sql-reference/functions/date-time-functions.md", "src/Functions/dateDiff.cpp", "tests/queries/0_stateless/00538_datediff.reference", "tests/queries/0_stateless/00538_datediff.sql"] | `dateDiff`: add support for plural units, e.g., `seconds` | **Use case**
`dateDiff('microseconds', event_time_microseconds, lagInFrame(event_time_microseconds) OVER ())`
```
Received exception from server (version 23.8.1):
Code: 36. DB::Exception: Received from p12uiq1ogd.us-east-2.aws.clickhouse-staging.com:9440. DB::Exception: Function dateDiff does not support 'microseconds' unit. (BAD_ARGUMENTS)
``` | https://github.com/ClickHouse/ClickHouse/issues/53602 | https://github.com/ClickHouse/ClickHouse/pull/53641 | fdfefe58f30fdc899d801693f4dcb4d38c1e52c7 | b884fdb8676d971a760c36f5192140ba5fbcc21b | "2023-08-20T02:20:03Z" | c++ | "2023-08-23T09:45:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,543 | ["src/Parsers/ExpressionListParsers.cpp", "tests/queries/0_stateless/02868_select_support_from_keywords.reference", "tests/queries/0_stateless/02868_select_support_from_keywords.sql"] | ClickHouse is deleting quotes during the view definition saving | Clickhouse is deleting quotes in inner selects during view definition saving.
How to repeat the error:
Create table:
```
create table test_table (
`date` Date,
`__sign` Int8,
`from` Float64,
`to` Float64,
)
ENGINE = CollapsingMergeTree(__sign)
PARTITION BY toYYYYMM(date)
ORDER BY (date)
SETTINGS index_granularity = 8192;
```
Then create view
```
create VIEW test_view
AS
WITH cte AS
(
SELECT
date,
__sign,
"from",
"to",
FROM test_table
FINAL
)
SELECT
date,
__sign,
"from",
"to",
FROM
cte
```
The view will be saved as:
```
ATTACH VIEW _ UUID '1a25bb46-3e50-424d-915b-79af0857ceec'
(
`date` Date,
`__sign` Int8,
`from` Float64,
`to` Float64
) AS
WITH cte AS
(
SELECT
date,
__sign,
from,
to
FROM default.test_table
FINAL
)
SELECT
date,
__sign,
from,
to
FROM
cte
```
As you can see, ClickHouse drops the quotes in the inner query during saving.
If we restart ClickHouse, it will not start because of an error in the view metadata.
Error from errors log:
```
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c9f4304 in /usr/bin/clickhouse
1. ? @ 0x0000000008adc940 in /usr/bin/clickhouse
2. DB::DatabaseOnDisk::parseQueryFromMetadata(Poco::Logger*, std::shared_ptr<DB::Context const>, String const&, bool, bool) @ 0x0000000010c51568 in /usr/bin/clickhouse
3. ? @ 0x0000000010c6a748 in /usr/bin/clickhouse
4. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000cac4e78 in /usr/bin/clickhouse
5. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000cac78a0 in /usr/bin/clickhouse
6. ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x000000000cac0ef8 in /usr/bin/clickhouse
7. ? @ 0x000000000cac65a8 in /usr/bin/clickhouse
8. start_thread @ 0x0000000000007624 in /usr/lib/aarch64-linux-gnu/libpthread-2.31.so
9. ? @ 0x00000000000d149c in /usr/lib/aarch64-linux-gnu/libc-2.31.so
(version 23.7.4.5 (official build))
2023.08.18 09:45:47.870721 [ 1 ] {} <Error> Application: DB::Exception: Syntax error (in file /var/lib/clickhouse/store/3be/3be24c40-9d50-4904-a286-121fd6454180/test_view.sql): failed at position 226 (',') (line 13, col 17): ,
to
FROM default.test_table
FINAL
)
SELECT
date,
__sign,
from,
to
FROM
cte
. Expected one of: table, table function, subquery or list of joined tables, table or subquery or table function, element of expression with optional alias, SELECT subquery, function, compound identifier, list of elements, identifier: Cannot parse definition from metadata file /var/lib/clickhouse/store/3be/3be24c40-9d50-4904-a286-121fd6454180/test_view.sql
```
Error from usual logs:
```
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c9f4304 in /usr/bin/clickhouse
1. ? @ 0x0000000008adc940 in /usr/bin/clickhouse
2. DB::DatabaseOnDisk::parseQueryFromMetadata(Poco::Logger*, std::shared_ptr<DB::Context const>, String const&, bool, bool) @ 0x0000000010c51568 in /usr/bin/clickhouse
3. ? @ 0x0000000010c6a748 in /usr/bin/clickhouse
4. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000cac4e78 in /usr/bin/clickhouse
5. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000cac78a0 in /usr/bin/clickhouse
6. ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x000000000cac0ef8 in /usr/bin/clickhouse
7. ? @ 0x000000000cac65a8 in /usr/bin/clickhouse
8. start_thread @ 0x0000000000007624 in /usr/lib/aarch64-linux-gnu/libpthread-2.31.so
9. ? @ 0x00000000000d149c in /usr/lib/aarch64-linux-gnu/libc-2.31.so
(version 23.7.4.5 (official build))
2023.08.18 09:45:47.870721 [ 1 ] {} <Error> Application: DB::Exception: Syntax error (in file /var/lib/clickhouse/store/3be/3be24c40-9d50-4904-a286-121fd6454180/test_view.sql): failed at position 226 (',') (line 13, col 17): ,
to
FROM default.test_table
FINAL
)
SELECT
date,
__sign,
from,
to
FROM
cte
. Expected one of: table, table function, subquery or list of joined tables, table or subquery or table function, element of expression with optional alias, SELECT subquery, function, compound identifier, list of elements, identifier: Cannot parse definition from metadata file /var/lib/clickhouse/store/3be/3be24c40-9d50-4904-a286-121fd6454180/test_view.sql
2023.08.18 09:45:47.871879 [ 1 ] {} <Information> Application: shutting down
2023.08.18 09:45:47.872952 [ 1 ] {} <Debug> Application: Uninitializing subsystem: Logging Subsystem
2023.08.18 09:45:47.873784 [ 48 ] {} <Information> BaseDaemon: Stop SignalListener thread
```
We detected it during migration from CH version 22.3.17.13 to 23.4.2.11. It works in 22.3.17.13, but after upgrading to a higher version ClickHouse simply will not start because of this error. The same behavior was also reproduced on CH version 23.7.4.5.
| https://github.com/ClickHouse/ClickHouse/issues/53543 | https://github.com/ClickHouse/ClickHouse/pull/53914 | 0148e15aee7fbf871c088c05653420f7f5e54348 | 0387556a34b88719f30d880306fb34bd534c5e8b | "2023-08-18T09:38:01Z" | c++ | "2023-08-30T20:25:08Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,508 | ["src/Interpreters/InterpreterSelectQuery.cpp", "tests/queries/0_stateless/02861_interpolate_alias_precedence.reference", "tests/queries/0_stateless/02861_interpolate_alias_precedence.sql"] | Crash when running query with INTERPOLATE clause | There is a crash under certain conditions when executing a query with an `ORDER BY... INTERPOLATE` clause. This issue happens in the most recent CH version.
In a table with the following schema and data:
```
CREATE TABLE test (date Date, id String, f Int16)
ENGINE=MergeTree()
ORDER BY (date);
INSERT INTO test VALUES ('2023-05-15', '1', 1);
INSERT INTO test VALUES ('2023-05-22', '1', 15);
```
CH crashes when running the following query:
```
SELECT
date AS d,
toNullable(f) AS f
FROM test
WHERE id = '1'
ORDER BY d ASC WITH FILL STEP toIntervalDay(1)
INTERPOLATE ( f )
```
The exception (triggered by an `assert_cast`) is the following:
`Bad cast from type DB::ColumnNullable to DB::ColumnVector<short>`
The query runs fine if the derived column doesn't have the same name as the column in the original table, i.e., just replace `AS f` with a different alias, f2, and interpolate f2.
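For completeness, the alias workaround just described looks like this (a sketch derived from the description, not part of the original report):

```sql
-- Works: the alias (f2) no longer shadows the source column name (f)
SELECT
    date AS d,
    toNullable(f) AS f2
FROM test
WHERE id = '1'
ORDER BY d ASC WITH FILL STEP toIntervalDay(1)
INTERPOLATE ( f2 )
```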
**Bit of analysis**
The exception is triggered in the function `FillingTransform::interpolate`, when executing the instruction `column->insertFrom(*last_row[col_pos], 0);`
The cause of this seems to be that the member `input_positions`, instead of using the type of the aliased column, associates this position with the type of the column in the original table, which is not nullable. This vector is initialized in the constructor from the content of `interpolate_description->required_columns_map`, which is in turn initialized in the constructor of `InterpolateDescription` from the contents of an object of type `ActionsDAGPtr`
| https://github.com/ClickHouse/ClickHouse/issues/53508 | https://github.com/ClickHouse/ClickHouse/pull/53572 | ec0561b3f1e40cc2060c50917624022d4bc16024 | 75b748fc9af5d05ff1adbb43ee09c0857eb10fab | "2023-08-17T09:18:50Z" | c++ | "2023-08-19T09:12:32Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,482 | ["docs/en/operations/settings/settings.md", "src/Core/Settings.h", "src/Interpreters/InterpreterShowColumnsQuery.cpp", "tests/queries/0_stateless/02775_show_columns_mysql_compatibility.reference", "tests/queries/0_stateless/02775_show_columns_mysql_compatibility.sql"] | MySQL compatibility: BLOB vs TEXT for String types | **Describe the issue**
All the string types are reported as BLOB if `use_mysql_types_in_show_columns` is set to 1. This, unfortunately, causes issues with QuickSight via MySQL interface, as it cannot recognize BLOB columns as strings. For example:
<img width="618" alt="image" src="https://github.com/ClickHouse/ClickHouse/assets/3175289/460cea0f-c19c-4a3e-8f05-df5a8f0ce932">
As you can see, BLOB (aka LONGVARBINARY, see [the explanation](https://dev.mysql.com/doc/refman/8.0/en/blob.html)) does not work here as expected. However, this works fine with tools such as Looker Studio.
**How to reproduce**
* Which ClickHouse server version to use: the latest master build
* Which interface to use, if matters: MySQL
* Non-default settings, if any: `users.xml` -> add `<use_mysql_types_in_show_columns>1</use_mysql_types_in_show_columns>` to the default profile.
* Sample data: [commits sample dataset](https://clickhouse.com/docs/en/getting-started/example-datasets/github)
* Queries to run:
```
show full columns from commits;
```
**Preferred solution**
While BLOB is technically the correct type for representing Strings via MySQL interface, a configuration option to switch between `BLOB` and `TEXT` globally reported types for String columns in the `SHOW (FULL) COLUMNS` output will enable us to integrate with QuickSight better.
CC @rschu1ze | https://github.com/ClickHouse/ClickHouse/issues/53482 | https://github.com/ClickHouse/ClickHouse/pull/55617 | 3864c6746e051a3771c2ecf1be881ebcace85f0b | 945dcb865ac612f963000cb3d1d87aaacebf6f07 | "2023-08-16T15:05:22Z" | c++ | "2023-10-18T08:57:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,453 | ["src/Storages/MergeTree/MergeTreeSplitPrewhereIntoReadSteps.cpp", "tests/queries/0_stateless/02845_prewhere_preserve_column.reference", "tests/queries/0_stateless/02845_prewhere_preserve_column.sql"] | "DB::Exception: Not found column" when using PREWHERE optimization in SELECT queries |
**Describe what's wrong**
Potential problem with [PREWHERE](https://clickhouse.com/docs/en/sql-reference/statements/select/prewhere) optimization in [SELECT](https://clickhouse.com/docs/en/sql-reference/statements/select) queries.
Example: If I have a query where a column is in a NOT clause with the same value as another condition it produces an error like:
`
Code: 10. DB::Exception: Not found column equals(user_id, 101) in block: while executing 'INPUT : 0 -> equals(user_id, 101) UInt8 : 0': While executing MergeTreeInOrder. (NOT_FOUND_COLUMN_IN_BLOCK) (version 23.8.1.41458 (official build))
`
**Does it reproduce on recent release?**
I'm on `23.8.1.41458`
**How to reproduce**
Following the quick start tutorial (https://clickhouse.com/docs/en/getting-started/quick-start#4-create-a-table)
1). CREATE TABLE:
```
CREATE TABLE my_first_table
(
user_id UInt32,
message String,
timestamp DateTime,
metric Float32
)
ENGINE = MergeTree
PRIMARY KEY (user_id, timestamp);
```
2). INSERT data into it:
```
INSERT INTO my_first_table (user_id, message, timestamp, metric) VALUES
(101, 'Hello, ClickHouse!', now(), -1.0 ),
(102, 'Insert a lot of rows per batch', yesterday(), 1.41421 ),
(102, 'Sort your data based on your commonly-used queries', today(), 2.718 ),
(101, 'Granules are the smallest chunks of data read', now() + 5, 3.14159 );
```
3). Run a query:
```
SELECT *
FROM my_first_table WHERE user_id = 101
AND NOT (user_id = 101 AND (metric = -1.0));
```
gives me the error:
```
Code: 10. DB::Exception: Not found column equals(user_id, 101) in block: while executing 'INPUT : 0 -> equals(user_id, 101) UInt8 : 0': While executing MergeTreeInOrder. (NOT_FOUND_COLUMN_IN_BLOCK) (version 23.8.1.41458 (official build))
```
4). If you set `optimize_move_to_prewhere = 0` it works as expected:
```
SELECT *
FROM my_first_table WHERE user_id = 101
AND NOT (user_id = 101 AND (metric = -1.0))
SETTINGS optimize_move_to_prewhere = 0;
```
**Additional context**
Looks similar to https://github.com/ClickHouse/ClickHouse/issues/37381
| https://github.com/ClickHouse/ClickHouse/issues/53453 | https://github.com/ClickHouse/ClickHouse/pull/53492 | 06415c7a53130fa4040c1e23d902cf8c64dd0a46 | e89a8e4d13652baa7af88ead4d50cc76fd8ffd26 | "2023-08-15T20:36:57Z" | c++ | "2023-09-23T19:24:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,437 | ["src/Interpreters/PreparedSets.cpp", "tests/queries/0_stateless/02844_subquery_timeout_with_break.reference", "tests/queries/0_stateless/02844_subquery_timeout_with_break.sql"] | Logical error: Trying to use set before it has been built (with timeout_overflow_mode = 'break') | **Describe the unexpected behaviour**
When `timeout_overflow_mode = 'break'` is set, we expect that if the query times out it will break rather than throw an exception, but it can still throw if the query has a subquery in the RHS of IN and an index is used.
```
DB::Exception: Logical error: Trying to use set before it has been built. (LOGICAL_ERROR) (version 23.7.4.5 (official build))
```
**How to reproduce**
https://fiddle.clickhouse.com/b1b82a33-df3d-4d9d-bdd9-5bc0d538cf8b
```sql
CREATE TABLE t (key UInt64, value UInt64, INDEX value_idx value TYPE bloom_filter GRANULARITY 1) ENGINE=MergeTree() ORDER BY key;
INSERT INTO t SELECT number, rand()%1000 FROM numbers(10000);
SET timeout_overflow_mode='break';
SET max_execution_time=0.5;
SET send_logs_level='debug';
SELECT * FROM t WHERE value IN (SELECT number FROM numbers(100000000));
```
| https://github.com/ClickHouse/ClickHouse/issues/53437 | https://github.com/ClickHouse/ClickHouse/pull/53439 | c22356bcaafcc17378d36122d1c7dc7140f9bad9 | 00e2e3184b3229710a241ad4a800310965d77896 | "2023-08-15T04:03:10Z" | c++ | "2023-08-18T23:24:42Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,350 | ["docs/en/engines/database-engines/materialized-mysql.md"] | ngrambf_v1 + MaterializedMySQL not working | **Steps to reproduce**
Create a MaterializedMySQL table with ngrambf_v1 skip index for text field
Run LIKE query search for some non-existing record
**Expected behavior**
- Queries use index and run fast
- `system.data_skipping_indices` has appropriate index record with non-zero `data_compressed_bytes` column
**Actual behavior**
- Queries don't use index and run slow
- `system.data_skipping_indices` has appropriate index record, but it has a zero value inside `data_compressed_bytes` column
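For reference, the index state mentioned above can be checked with a query along these lines (database and table names are placeholders):

```sql
SELECT name, type, data_compressed_bytes
FROM system.data_skipping_indices
WHERE database = 'mysql_db' AND table = 'my_table';
```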
| https://github.com/ClickHouse/ClickHouse/issues/53350 | https://github.com/ClickHouse/ClickHouse/pull/53373 | bd0e8792886ac2a02ad45eb2a48b935aa89fb5fe | b69ef759721d016a3856e66150bf9b1bd2158258 | "2023-08-12T18:17:36Z" | c++ | "2023-08-16T11:16:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,303 | ["docs/en/operations/settings/merge-tree-settings.md", "src/Storages/MergeTree/MergeTreeDataMergerMutator.cpp", "src/Storages/MergeTree/MergeTreeSettings.cpp", "src/Storages/MergeTree/MergeTreeSettings.h", "tests/queries/0_stateless/01419_merge_tree_settings_sanity_check.sql"] | Consider adding a separated task for historical merges | **Use case**
The merge algorithm in ClickHouse was designed to favour smaller and more recent parts. This is obvious because we need to minimise the number of parts.
However, there is a case where merging old parts is also important.
**What we have now**
I think the use case has been raised in #35836, and there's already a solution with `min_age_to_force_merge_seconds` in #42423. With `min_age_to_force_merge_seconds`, once the parts stay long enough, they will be allowed to merged without respecting the criteria (total size, age...).
However, this setting can lead to merge starving for recent parts, because: historical merges are usually big (not to say super big), and number of concurrent merges are limited.
```
-- Future merge tasks, mixing between small (recent parts) merges and big (old parts) merges
@@@@@@@@@
@
@@@@@@@@@@@
@
@@
@@@@@@@@@@@@@@
@@
-- Current merge tasks queue - running in round robin manner for each @
@
@@
@@@@
@@@@@
@@@@@@
@@@@@@
@@@@@@@@@@
-- Current merge tasks at some points
@@@@@@@@
@@@@@@@@@
@@@@@@@@@@@
@@@@@@
@@@@@@
@@@@@@
@@@@@@@
```
If we change `background_merges_mutations_scheduling_policy` to `shortest_task_first`, then we face the same old issue again: small merges are alway preferred, while big merges can stuck in merge queue forever (ever if they're scheduled).
**Describe the solution you'd like**
Each table having a separated task to optimize old partitions. The task will get thread from common schedule pool, and it will scan from oldest partition -> most recent partition to find which partition it can merge to single part. Whether to active this task or not will be controlled by a table setting. Merging old partition will takes some times, but eventually we will reach a point where every old partitions only have 1 part.
I think this is the original idea implemented in #35836, but then changed to current solution.
I don't know if there's a better solution, appreciate any comments [THANKS]!
| https://github.com/ClickHouse/ClickHouse/issues/53303 | https://github.com/ClickHouse/ClickHouse/pull/53405 | 75d32bfe771db20a2698d6a6e5d9484e3cf747f5 | cbbb81f5ccf94e7549fd5608f9c157adc63a8dbd | "2023-08-11T04:21:01Z" | c++ | "2023-08-24T17:49:38Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,276 | ["src/Common/Config/ConfigReloader.cpp"] | Configuration is not reloaded if changed too fast | **Describe the issue**
ClickHouse only reloads the config file if the modification time changes. This is detected with `FS::getModificationTime`, which only has a resolution of one second. This means ClickHouse may miss changes if they happen within the same second as the previous change.
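The effect can be illustrated without ClickHouse at all; a small Python sketch (not part of the original report) showing how second-resolution timestamps collapse two distinct modification times:

```python
import os
import tempfile

# Two writes whose true modification times differ, but fall within the
# same integer second -- mimicking a fast config rewrite.
path = os.path.join(tempfile.mkdtemp(), "config.xml")
with open(path, "w") as f:
    f.write("<clickhouse/>")

os.utime(path, (1000.2, 1000.2))   # first "save"
before = int(os.path.getmtime(path))
os.utime(path, (1000.9, 1000.9))   # second "save", 0.7 s later
after = int(os.path.getmtime(path))

# A watcher comparing second-resolution timestamps sees no change:
print(before == after)  # True
```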
| https://github.com/ClickHouse/ClickHouse/issues/53276 | https://github.com/ClickHouse/ClickHouse/pull/54065 | 57ffbeeaf519bdce8f5a2fa75b507815ae05298d | 6655de199b5ad8751bb9f405dd818ef1a7c93a56 | "2023-08-10T13:55:42Z" | c++ | "2023-09-01T08:08:28Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,237 | ["src/Storages/MergeTree/MergeTreeSplitPrewhereIntoReadSteps.cpp", "tests/queries/0_stateless/02845_prewhere_preserve_column.reference", "tests/queries/0_stateless/02845_prewhere_preserve_column.sql"] | move_all_conditions_to_prewhere / NOT_FOUND_COLUMN_IN_BLOCK | https://fiddle.clickhouse.com/e7d218c3-2d5f-4a7b-b713-0b5052b62e86
```sql
CREATE TABLE t ( e String, c String, q String )
ENGINE = MergeTree ORDER BY tuple();
insert into t
select number, number, number from numbers(10);
WITH s AS ( SELECT * FROM t WHERE ((e != 'cl') OR (q = 'bn')))
SELECT count() AS cnt
FROM s WHERE (q = 'bn') GROUP BY c ORDER BY cnt DESC;
Received exception from server (version 23.7.3):
Code: 10. DB::Exception: Received from localhost:9000. DB::Exception: Not found column equals(q, 'bn') in block:
while executing 'INPUT : 1 -> equals(q, 'bn') UInt8 : 2': While executing MergeTreeInOrder. (NOT_FOUND_COLUMN_IN_BLOCK)
(query: WITH s AS ( SELECT * FROM t WHERE ((e != 'cl') OR (q = 'bn')))
SELECT count() AS cnt
FROM s WHERE (q = 'bn') GROUP BY c ORDER BY cnt DESC
format Pretty;)
```
worked before 23.7
works with analyzer
-----
**upd: move_all_conditions_to_prewhere**
```sql
CREATE TABLE t ( e String, c String, q String ) ENGINE = MergeTree ORDER BY tuple();
insert into t select number, number, number from numbers(10);
SELECT count()
FROM (SELECT * FROM t WHERE e = 'cl' OR q = 'bn') WHERE (q = 'bn')
GROUP BY c
DB::Exception: Not found column equals(q, 'bn') in block: while executing 'INPUT : 1 -> equals(q, 'bn') UInt8 : 2': While executing MergeTreeInOrder. (NOT_FOUND_COLUMN_IN_BLOCK)
set move_all_conditions_to_prewhere=0;
SELECT count()
FROM (SELECT * FROM t WHERE e = 'cl' OR q = 'bn') WHERE (q = 'bn')
GROUP BY c
0 rows in set. Elapsed: 0.003 sec.
``` | https://github.com/ClickHouse/ClickHouse/issues/53237 | https://github.com/ClickHouse/ClickHouse/pull/53492 | 06415c7a53130fa4040c1e23d902cf8c64dd0a46 | e89a8e4d13652baa7af88ead4d50cc76fd8ffd26 | "2023-08-09T19:15:21Z" | c++ | "2023-09-23T19:24:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,222 | ["src/Storages/MergeTree/KeyCondition.cpp", "tests/queries/0_stateless/02462_match_regexp_pk.sql"] | Using match with `^$` as anchors on a primary key column returns incorrect results | **Describe what's wrong**
When running a query that applies a regex expression to the primary key column and uses `^$` as anchors on the first element, Clickhouse incorrectly returns only a portion of the expected results.
**Does it reproduce on recent release?**
Running release `23.3.8.21`.
**How to reproduce**
Create the table and insert data into different parts. Note that this bug appears to apply only to the primary key condition, as inserting both data items into the same part returns the correct result.
```
CREATE TABLE example(
"time" Int64 CODEC(ZSTD(1)),
"svc" LowCardinality(String) CODEC(ZSTD(1)),
"title" String CODEC(ZSTD(1)),
) ENGINE = MergeTree
PARTITION BY intDiv("time", 1000)
ORDER BY ("svc", "time");
INSERT INTO example(*) VALUES(toInt64(4500), 'first', 'blah blah')
INSERT INTO example(*) VALUES(toInt64(3500), 'second', 'blah blah blah')
```
Reading data with `^$` returns only the 1/2 expected rows:
```
SELECT svc, title FROM example WHERE match(svc, '^first$|^second$')
```
Running the query with index explain enabled (`EXPLAIN indexes = 1`) shows the problem right away:
```
Expression ((Projection + Before ORDER BY))
ReadFromMergeTree (7_ca_43_de_8_a_195_42_c_7_b_590_7_a_270_a_945433.example)
Indexes:
MinMax
Condition: true
Parts: 2/2
Granules: 2/2
Partition
Condition: true
Parts: 2/2
Granules: 2/2
PrimaryKey
Keys:
svc
Condition: (svc in ['first', 'firsu')) <-- this is the problem
Parts: 1/2
Granules: 1/2
```
**Additional context**
A simple workaround is to use `\A\z` for anchoring which returns the expected results:
```
SELECT svc, title FROM example WHERE match(svc, '\Afirst\z|\Asecond\z')
```
| https://github.com/ClickHouse/ClickHouse/issues/53222 | https://github.com/ClickHouse/ClickHouse/pull/54696 | f5e8028bb12e0e01438e6aeccee426fcd95805c7 | 711876dfa8cc2cf48099eff68c494067ff5bcbc0 | "2023-08-09T14:18:00Z" | c++ | "2023-09-18T02:45:13Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,215 | ["docker/test/base/setup_export_logs.sh"] | 02152_http_external_tables_memory_tracking is flaky | https://s3.amazonaws.com/clickhouse-test-reports/53146/4f6f8fce652adae458a766f76af153d20501b864/stateless_tests__debug__[3_5].html
https://play.clickhouse.com/play?user=play#c2VsZWN0IAp0b1N0YXJ0T2ZEYXkoY2hlY2tfc3RhcnRfdGltZSkgYXMgZCwKY291bnQoKSwgZ3JvdXBVbmlxQXJyYXkocHVsbF9yZXF1ZXN0X251bWJlciksICBhbnkocmVwb3J0X3VybCkKZnJvbSBjaGVja3Mgd2hlcmUgJzIwMjItMDEtMDEnIDw9IGNoZWNrX3N0YXJ0X3RpbWUgYW5kIHRlc3RfbmFtZSBsaWtlICclMDIxNTJfaHR0cF9leHRlcm5hbF90YWJsZXNfbWVtb3J5X3RyYWNraW5nJScgYW5kIHRlc3Rfc3RhdHVzIGluICgnRkFJTCcsICdGTEFLWScpIGdyb3VwIGJ5IGQgb3JkZXIgYnkgZCBkZXNj | https://github.com/ClickHouse/ClickHouse/issues/53215 | https://github.com/ClickHouse/ClickHouse/pull/57130 | ec18f24c1f36906e2e0f8ae6de7e6ac83e57890e | 851e96cd80777596aa0ce60f4c31cb4d8f5907c9 | "2023-08-09T13:16:32Z" | c++ | "2023-11-24T20:04:43Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,190 | ["src/Functions/FunctionsConversion.h", "tests/queries/0_stateless/01556_accurate_cast_or_null.reference", "tests/queries/0_stateless/01556_accurate_cast_or_null.sql"] | No results when quering col > '1969' if table partitioned by that column | **Describe what's wrong**
DateTime condition comparing to the date before starting of the unix epoch stopped to work if the partitioning key uses that column.
**Repro**
https://fiddle.clickhouse.com/66a75ef0-c107-4ae1-83f4-ae608971bc9c
**Does it reproduce on recent release?**
Yes
**How to reproduce**
```sql
DROP TABLE IF EXISTS test_ts;
CREATE TABLE test_ts
(
`id` UInt64,
`ts` DateTime
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(ts)
ORDER BY (id, ts);
INSERT INTO test_ts VALUES (1, '2023-08-02 08:02:05');
SELECT 'query 1';
SELECT * FROM test_ts;
SELECT 'query 2';
SELECT * FROM test_ts WHERE ts >= '1969-07-01 00:00:00';
```
That is a regression between 22.3 and 22.4
| https://github.com/ClickHouse/ClickHouse/issues/53190 | https://github.com/ClickHouse/ClickHouse/pull/58139 | d0ca383bca3a012e70348508ca91cb7532bbbb73 | a30980c930c6a1357028a2473593f526ba409a64 | "2023-08-09T08:05:57Z" | c++ | "2023-12-23T13:52:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,187 | ["src/Functions/transform.cpp", "tests/queries/0_stateless/02542_transform_new.reference", "tests/queries/0_stateless/02542_transform_new.sql", "tests/queries/0_stateless/02787_transform_null.reference"] | Conditionals in CASE statements sometimes produce wrong results | **How to reproduce**
```
ClickHouse client version 23.7.1.1.
Connecting to localhost:9000 as user ymirlink.
Connected to ClickHouse server version 23.7.1 revision 54464.
:) SELECT CAST(1, 'Nullable(String)') v1,
CAST(number, 'String') v2,
CASE 'x'
WHEN 'y' THEN 0
ELSE v1 = v2
END cond1,
v1 = v2 cond2
FROM numbers(2)
FORMAT JSONCompact
;
SELECT
CAST(1, 'Nullable(String)') AS v1,
CAST(number, 'String') AS v2,
caseWithExpression('x', 'y', 0, v1 = v2) AS cond1,
v1 = v2 AS cond2
FROM numbers(2)
FORMAT JSONCompact
Query id: 7d0377c3-5c03-4953-893d-78d0808ceced
{
"meta":
[
{
"name": "v1",
"type": "Nullable(String)"
},
{
"name": "v2",
"type": "String"
},
{
"name": "cond1",
"type": "Nullable(UInt8)"
},
{
"name": "cond2",
"type": "Nullable(UInt8)"
}
],
"data":
[
["1", "0", 0, 0],
["1", "1", 0, 1]
],
"rows": 2,
"rows_before_limit_at_least": 2,
"statistics":
{
"elapsed": 0.003158283,
"rows_read": 2,
"bytes_read": 16
}
}
2 rows in set. Elapsed: 0.003 sec.
```
**Expected behavior**
`cond1` and `cond2` should always have the same value. | https://github.com/ClickHouse/ClickHouse/issues/53187 | https://github.com/ClickHouse/ClickHouse/pull/53742 | 62747ea20f3e9b312ad25eba4593568526fb8b88 | 9b749391101f6f7a11c3ad6ba81ecf6d472bc7cd | "2023-08-09T06:31:42Z" | c++ | "2023-08-29T18:57:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,179 | ["src/Functions/URL/domain.h", "tests/queries/0_stateless/02845_domain_rfc_support_ipv6.reference", "tests/queries/0_stateless/02845_domain_rfc_support_ipv6.sql"] | domain() returns empty string for IPv6 and port combination | **How to reproduce**
```
SELECT domain('[2001:db8::1]:80')
FORMAT CSV
""
```
**Expected behavior**
```
SELECT domain('[2001:db8::1]:80')
FORMAT CSV
"2001:db8::1"
```
This should be similar to IPv4 addresses with port numbers. This works as expected on the current version.
```
SELECT domain('1.1.1.1:80')
FORMAT CSV
"1.1.1.1"
```
The behavior should not be different as long as `[]` is used according to [rfc5952 section-6](https://datatracker.ietf.org/doc/html/rfc5952#section-6).
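A minimal sketch of bracket-aware host extraction (a hypothetical reference implementation, not ClickHouse's actual parser):

```python
def host_of(authority: str) -> str:
    """Extract the host from 'host[:port]', honoring RFC 3986/5952 bracket notation for IPv6."""
    if authority.startswith("["):
        end = authority.index("]")  # everything between the brackets is the host
        return authority[1:end]
    return authority.split(":", 1)[0]

assert host_of("1.1.1.1:80") == "1.1.1.1"
assert host_of("[2001:db8::1]:80") == "2001:db8::1"
assert host_of("example.com") == "example.com"
```

The key point is that a colon may only be treated as a port separator when the host is not an IP-literal in brackets.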
| https://github.com/ClickHouse/ClickHouse/issues/53179 | https://github.com/ClickHouse/ClickHouse/pull/53506 | a1a45ee9055664ba9fee935983c124bef9919cba | a2d451d6e60630d71a8469420aedd8f4f627daa0 | "2023-08-08T23:52:30Z" | c++ | "2023-08-29T09:42:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,156 | ["docs/en/sql-reference/functions/other-functions.md", "docs/ru/sql-reference/functions/other-functions.md", "src/Functions/formatReadableTimeDelta.cpp", "tests/queries/0_stateless/02887_format_readable_timedelta_subseconds.reference", "tests/queries/0_stateless/02887_format_readable_timedelta_subseconds.sql"] | formatReadableTimeDelta support for milli-micro-nano seconds in output |
**Use case**
```
SELECT formatReadableTimeDelta(0.1)
┌─formatReadableTimeDelta(0.1)─┐
│ 0 seconds                    │
└──────────────────────────────┘
SELECT formatReadableTimeDelta(0.1, 'millisecond')
┌─formatReadableTimeDelta(0.1, 'millisecond')─┐
│ 100 milliseconds                            │
└─────────────────────────────────────────────┘
```
**Describe the solution you'd like**
Ability to use subseconds intervals in formatReadableTimeDelta function
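A rough Python sketch of the requested rendering; the unit ladder and rounding are assumptions for illustration, not the function's eventual behavior:

```python
def readable_subsecond(seconds: float) -> str:
    """Render a duration using the largest sub-second unit that yields a value >= 1."""
    for unit, scale in (("second", 1), ("millisecond", 1e3),
                        ("microsecond", 1e6), ("nanosecond", 1e9)):
        value = seconds * scale
        if value >= 1:
            n = int(round(value))
            return f"{n} {unit}{'s' if n != 1 else ''}"
    return "0 seconds"

assert readable_subsecond(0.1) == "100 milliseconds"
assert readable_subsecond(1.0) == "1 second"
```

With such a ladder, `formatReadableTimeDelta(0.1)` would naturally produce "100 milliseconds" instead of "0 seconds".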
| https://github.com/ClickHouse/ClickHouse/issues/53156 | https://github.com/ClickHouse/ClickHouse/pull/54250 | 477922617c2823879383e2e75a73f98e2fc40346 | aa37814b3a5518c730eff221b86645e153363306 | "2023-08-08T11:17:13Z" | c++ | "2023-09-24T21:15:56Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,152 | ["src/Interpreters/OptimizeDateOrDateTimeConverterWithPreimageVisitor.cpp", "tests/queries/0_stateless/02843_date_predicate_optimizations_bugs.reference", "tests/queries/0_stateless/02843_date_predicate_optimizations_bugs.sql"] | Query with array join and toYYYYMM in predicate doesn't work as expected | Reproduce:
```sql
select
toYYYYMM(date) as date_,
n
from (select
[toDate(now()), toDate(now())] as date,
[1, 2] as n
) as data
array join date, n
where date_ >= 202303;
```
Work in 23.6: https://fiddle.clickhouse.com/cabc075a-ac8a-4a15-bd46-2bc2b1de09a6
Not work in 23.7: https://fiddle.clickhouse.com/74c0b0b6-72f2-4fb7-8a0b-bc098307da70
Suspicious PR: #52091 | https://github.com/ClickHouse/ClickHouse/issues/53152 | https://github.com/ClickHouse/ClickHouse/pull/53440 | a52249872e3f800419a7cc9ec241bc11a15d7927 | 81af60eeea988dbd2d38cf11d13d84943c0ea843 | "2023-08-08T10:02:32Z" | c++ | "2023-08-17T18:56:01Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,098 | ["docs/en/sql-reference/functions/array-functions.md", "src/Functions/array/arrayRandomSample.cpp", "tests/queries/0_stateless/02415_all_new_functions_must_be_documented.reference", "tests/queries/0_stateless/02874_array_random_sample.reference", "tests/queries/0_stateless/02874_array_random_sample.sh", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | Array Of Random Numbers (In a Given Range of N , M) | Can we please add a feature for generating random arrays.
I was wondering something like: `generateRandomIntArray(N, M, K)` where `N` is start, `M` is end and `K` is the size of the array.
Additionally, can we also add a random sampler for existing arrays. Something like: `randomSampleFromArray(<Array>, K)` where `K` is the number of samples to get. | https://github.com/ClickHouse/ClickHouse/issues/53098 | https://github.com/ClickHouse/ClickHouse/pull/54391 | 9ac7cfc026cb8d7cf1ab48c0f4b12c656f52fe15 | 32a77ca1eb08f0dede7f7aa75ed71ed771393b1d | "2023-08-07T00:14:19Z" | c++ | "2023-10-08T16:28:58Z" |
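A minimal Python sketch of the two proposed functions (the names `generateRandomIntArray` and `randomSampleFromArray` are the request's hypothetical names, not existing ClickHouse functions):

```python
import random

def generate_random_int_array(n: int, m: int, k: int, rng=random):
    """Hypothetical generateRandomIntArray(N, M, K): K uniform ints in [N, M]."""
    return [rng.randint(n, m) for _ in range(k)]

def random_sample_from_array(arr, k: int, rng=random):
    """Hypothetical randomSampleFromArray(arr, K): K distinct elements, without replacement."""
    return rng.sample(arr, k)

rng = random.Random(0)  # seeded for reproducibility
arr = generate_random_int_array(1, 10, 5, rng)
assert len(arr) == 5 and all(1 <= x <= 10 for x in arr)

sample = random_sample_from_array(list(range(100)), 3, rng)
assert len(sample) == 3 and len(set(sample)) == 3
```

One design question a real implementation would need to settle is whether sampling is with or without replacement; the sketch above assumes without.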
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 53,094 | ["docs/en/operations/utilities/clickhouse-local.md", "docs/ru/operations/utilities/clickhouse-local.md", "src/Common/StringUtils/StringUtils.h", "src/Databases/DatabaseFilesystem.cpp", "tests/queries/0_stateless/02707_clickhouse_local_implicit_file_table_function.reference", "tests/queries/0_stateless/02707_clickhouse_local_implicit_file_table_function.sh", "tests/queries/0_stateless/02722_database_filesystem.reference", "tests/queries/0_stateless/02722_database_filesystem.sh", "tests/queries/0_stateless/02816_clickhouse_local_table_name_expressions.reference", "tests/queries/0_stateless/02816_clickhouse_local_table_name_expressions.sh"] | Let's make DatabaseFilesystem/S3/HDFS support globs | **Use case**
I can write:
```
SELECT * FROM file('*.jsonl')
```
And I can also use the default overlay Filesystem database in clickhouse-local, but only for single files:
```
SELECT * FROM 'test.jsonl'
```
I want to use it with globs as well:
```
SELECT * FROM '*.jsonl'
```
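The requested behavior is essentially shell-style glob expansion of the table name against the filesystem, which can be sketched in Python (hypothetical file names):

```python
import glob
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    # Simulate a working directory with mixed file types.
    for name in ("a.jsonl", "b.jsonl", "c.csv"):
        open(os.path.join(d, name), "w").close()
    # '*.jsonl' as a "table name" should resolve to every matching file.
    matched = sorted(os.path.basename(p) for p in glob.glob(os.path.join(d, "*.jsonl")))

assert matched == ["a.jsonl", "b.jsonl"]
```

The overlay database would then read the union of all matched files, the same way the `file()` table function already does for glob arguments.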
| https://github.com/ClickHouse/ClickHouse/issues/53094 | https://github.com/ClickHouse/ClickHouse/pull/53863 | e8e6b0a16528c5159b49bc4a0f55ddae0d03b525 | 55ba08490d692b620dd0973fd221f5c5b13038dc | "2023-08-06T14:17:52Z" | c++ | "2023-08-31T10:44:06Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,968 | ["src/Interpreters/ExpressionAnalyzer.cpp", "src/Interpreters/TableJoin.h", "src/Planner/PlannerJoins.cpp", "src/Storages/StorageJoin.cpp", "src/Storages/StorageJoin.h", "tests/performance/storage_join_direct_join.xml"] | Direct Join has worse performance than joinGet | There's two ways to query multiple columns from a storage join:
- Use multiple `joinGet`, e.g. `SELECT key, joinGet('table_join', 'value1', key), joinGet('table_join', 'value2', key) ... FROM keys`
- Use direct join, .e.g `SELECT key, value1, value2 ... FROM keys LEFT ANY JOIN table_join AS rhs ON key = rhs.key
Theoretically, the first approach will look up hash table multiple times, so it should have worse performance. Some quick test also show that: https://fiddle.clickhouse.com/f169c0ea-c966-40f8-aefa-cd341734ad9b
```sql
SELECT key, joinGet('dict', 'value1', key) AS value1, joinGet('dict', 'value2', key) AS value2 FROM keys FORMAT Null SETTINGS send_logs_level='debug';
<Debug> executeQuery: Read 10000000 rows, 76.29 MiB in 0.246209 sec., 40615899.500018276 rows/sec., 309.87 MiB/sec.
<Debug> MemoryTracker: Peak memory usage (for query): 28.13 MiB.
SELECT keys.key, value1, value2 FROM keys ANY LEFT JOIN dict AS d ON (keys.key = d.key) FORMAT Null SETTINGS send_logs_level='debug';
<Debug> executeQuery: Read 10000000 rows, 76.29 MiB in 0.122637 sec., 81541459.75521255 rows/sec., 622.11 MiB/sec.
<Debug> MemoryTracker: Peak memory usage (for query): 31.05 MiB.
```
But with real queries on big tables in production, using a direct join is actually worse than `joinGet`. I've tested many times and got the same result, so it's not an environment fluctuation.
Any idea what could lead to this?
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,936 | ["src/Client/ClientBase.cpp"] | clickhouse-client: don't show "0 rows in set" if it is zero and if exception was thrown. | null | https://github.com/ClickHouse/ClickHouse/issues/52936 | https://github.com/ClickHouse/ClickHouse/pull/55240 | 9e64f51ffacf36387dd72069fb2ae764e80abbd1 | 9313b343e4102a328a2fe1112a0d690f13d79b2c | "2023-08-02T13:57:28Z" | c++ | "2023-10-26T11:52:53Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,895 | ["docs/en/sql-reference/functions/array-functions.md", "docs/ru/sql-reference/functions/array-functions.md", "src/Functions/array/arrayShiftRotate.cpp", "tests/queries/0_stateless/02845_arrayShiftRotate.reference", "tests/queries/0_stateless/02845_arrayShiftRotate.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | arrayShiftLeft, arrayShiftRight | **Use case**
Shift array values, quite commonly used with combination of arrayMap
**Describe the solution you'd like**
```
arrayShiftLeft(arr, N)
arrayShiftLeft(arr, N, 3)
arrayShiftRight(arr, N)
arrayShiftRight(arr, N, 5)
-- Examples:
arrayShiftLeft([1, 2, 3, 4, 5], 1) = [0, 1, 2, 3, 4]
arrayShiftRight([1, 2, 3, 4, 5], 1, 3) = [2, 3, 4, 5, 3]
```
**Describe alternatives you've considered**
```
arrayPopBack(arrayPushFront(x, 1))
```
But it makes two copies of the array.
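A single-allocation Python sketch of the shift semantics. Note the direction convention here is an assumption (shifting left drops elements from the front and pads on the right, shifting right mirrors it), which differs from the examples in the request, so treat the exact convention as open:

```python
def array_shift_left(arr, n, fill=0):
    """Drop the first n elements, pad with `fill` on the right (assumed convention)."""
    n = min(n, len(arr))
    return arr[n:] + [fill] * n

def array_shift_right(arr, n, fill=0):
    """Mirror of array_shift_left: pad on the left, drop from the end."""
    n = min(n, len(arr))
    return [fill] * n + arr[:len(arr) - n]

assert array_shift_left([1, 2, 3, 4, 5], 1) == [2, 3, 4, 5, 0]
assert array_shift_right([1, 2, 3, 4, 5], 1, 3) == [3, 1, 2, 3, 4]
```

Unlike the `arrayPopBack(arrayPushFront(...))` workaround, each call builds the result array once.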
| https://github.com/ClickHouse/ClickHouse/issues/52895 | https://github.com/ClickHouse/ClickHouse/pull/53557 | 34ac113af6b7e0f767e05c50ff8ae6a03c8552d6 | 50b8bbe0dc1f466ba8d51cbc99bd9c72c7e67c28 | "2023-08-01T20:21:32Z" | c++ | "2023-08-25T11:24:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,843 | ["src/Client/ClientBase.cpp", "src/Client/Suggest.cpp", "src/Client/Suggest.h", "tests/integration/parallel_skip.json", "tests/integration/test_profile_max_sessions_for_user/test.py"] | Make client receive suggestions from server if interactive connection is esablished | **Describe the issue**
Clickhouse client can't load suggestions using single connection when `max_sessions_for_user` is set.
**How to reproduce**
- Create a user with `<max_sessions_for_user>2</max_sessions_for_user>` in profile
- Connect to the server with clickhouse-client in interactive mode (starts a session)
- Run another instance of interactive clickhouse-client. If will fail to load suggestions (due to the limitation on concurrent sessions) but then will continue working as usual.
**Expected behavior**
No error happens in this case. Suggestions are loaded from the server using a single session.
**Error message and/or stacktrace**
```
ClickHouse client version 23.7.1.1.
Connecting to localhost:9000 as user maxsessions.
Connected to ClickHouse server version 23.7.1 revision 54464.
Cannot load data for command line suggestions: Code: 700. DB::Exception: Received from localhost:9000. DB::Exception: User 5eb5f051-64a4-19bc-ff75-494154bf67a9 has overflown session count 2. () (version 23.7.1.1)
dell9510 :) select 1
```
| https://github.com/ClickHouse/ClickHouse/issues/52843 | https://github.com/ClickHouse/ClickHouse/pull/53177 | 0ff5d12788f1656f61c5b8df2a716675aef02f88 | 7ed7707ab7e6ccd6b2f26675f3349b29e703b442 | "2023-07-31T22:52:12Z" | c++ | "2023-08-10T11:19:16Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,838 | ["docs/en/operations/settings/settings.md", "src/Core/Settings.h", "src/Interpreters/InterpreterAlterQuery.cpp", "src/Interpreters/MutationsNonDeterministicHelpers.cpp", "src/Interpreters/MutationsNonDeterministicHelpers.h", "tests/queries/0_stateless/02842_mutations_replace_non_deterministic.reference", "tests/queries/0_stateless/02842_mutations_replace_non_deterministic.sql"] | materialize now() for mutations | this
```sql
alter table t UPDATE ts = now() where 1
```
leads to
```
Data after mutation is not byte-identical to data on another replicas.
```
if `allow_nondeterministic_mutations=1`.
But Clickhouse can substitute `now()` with the current time during alter "parsing".
or with a mutation's create time during execution (the same as now() = merge task's create time) | https://github.com/ClickHouse/ClickHouse/issues/52838 | https://github.com/ClickHouse/ClickHouse/pull/53129 | ce58b90ea15ace209c5cc2c4179b4ac594816106 | b9df41d5e33fd4b21c258ff338e24ab141a0f714 | "2023-07-31T20:54:58Z" | c++ | "2023-08-17T22:26:20Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,777 | ["docs/en/operations/settings/settings.md", "src/Core/Settings.h", "src/Interpreters/InterpreterShowColumnsQuery.cpp", "tests/queries/0_stateless/02775_show_columns_mysql_compatibility.reference", "tests/queries/0_stateless/02775_show_columns_mysql_compatibility.sql"] | Inconsistency between `String` and `FixedString` type mapping in MySQL compatibility mode. | **Describe the unexpected behaviour**
When using the `use_mysql_types_in_show_columns` setting, implemented in issue #49577, ClickHouse's `String` datatype is mapped to MySQL's `BLOB` datatype. Unfortunately, the QuickSight tool mentioned in the issue does not support this type of columns (https://docs.aws.amazon.com/quicksight/latest/user/supported-data-types.html). As a result, all `String` columns are skipped when a table is added to the dataset.
Also, there is an inconsistency: the `FixedString` type is mapped to the `TEXT` datatype, but `String` and `FixedString` should be mapped to at least the same MySQL datatype, as there is no difference in how the bytes of text are stored in ClickHouse.
It's a subject for discussion which datatype they map to better, `BLOB` or `TEXT`, but at least the QuickSight tool doesn't support `BLOB` columns (while it does support `TEXT`).
```
MySQL [voluum]> SHOW FULL COLUMNS IN test;
+----------+---------+------+---------+---------+-------+-----------+---------+------------+
| field | type | null | key | default | extra | collation | comment | privileges |
+----------+---------+------+---------+---------+-------+-----------+---------+------------+
| fixed | TEXT | 0 | | NULL | | NULL | | |
| id | INTEGER | 0 | PRI SOR | NULL | | NULL | | |
| variable | BLOB | 0 | | NULL | | NULL | | |
+----------+---------+------+---------+---------+-------+-----------+---------+------------+
3 rows in set (0.005 sec)
```
**How to reproduce**
- Which ClickHouse server version to use: 23.7
- Which interface to use, if it matters: MySQL
- Non-default settings, if any:
```
SET use_mysql_types_in_show_columns = 1;
```
* `CREATE TABLE` statements for all tables involved
```
CREATE TABLE test (
id INT NOT NULL PRIMARY KEY,
variable String(32) NOT NULL,
fixed FixedString(32) NOT NULL
) ENGINE=MergeTree;
```
* Queries to run that lead to unexpected result
```
SHOW FULL COLUMNS IN test;
```
**Expected behavior**
- `fixed` and `variable` column types should be the same.
- `String` columns should be mapped to the `TEXT` type in order to be supported by Quicksight.
| https://github.com/ClickHouse/ClickHouse/issues/52777 | https://github.com/ClickHouse/ClickHouse/pull/55617 | 3864c6746e051a3771c2ecf1be881ebcace85f0b | 945dcb865ac612f963000cb3d1d87aaacebf6f07 | "2023-07-30T10:07:11Z" | c++ | "2023-10-18T08:57:18Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,760 | ["src/Columns/ColumnNullable.cpp", "tests/queries/0_stateless/02834_nulls_first_sort.reference", "tests/queries/0_stateless/02834_nulls_first_sort.sql"] | Unexpected sort result on multi columns with nulls first direction | **Describe what's wrong**
Unexpected sort result on multi columns with nulls first direction
ReproducerοΌhttps://fiddle.clickhouse.com/6cf10cbb-aa70-4956-898b-c09ed8b3e177
The order of third column is unexpected.
**Does it reproduce on recent release?**
Yes
**How to reproduce**
It can be reproduced on recent release. According the source code, it is supposed to be reproduced on every release.
**Expected behavior**
As the reproducer shows, for a table `nulls_first_sort_test` with structure `a Nullable(Int32), b Nullable(Int32), c Nullable(Int32)`
and the following data
```
5 \N 2
5 \N 1
5 \N 7
5 \N 3
5 7 4
5 7 6
5 7 2
5 7 1
5 7 3
5 7 9
5 1 4
5 1 6
5 1 2
5 1 1
5 1 3
5 1 9
```
The result of SQL `SELECT * FROM nulls_first_sort_test ORDER BY a NULLS FIRST,b NULLS FIRST,c NULLS FIRST LIMIT 5` is unexpected
```
5 \N 3
5 \N 7
5 \N 1
5 \N 2
5 1 1
```
It's obviously that the order of third column is unexpected.
| https://github.com/ClickHouse/ClickHouse/issues/52760 | https://github.com/ClickHouse/ClickHouse/pull/52761 | 4578d43f79769fcaaca9130793a6d9ba854de0d4 | 1130904697393bcff03b1384b76e7cced247f9ca | "2023-07-29T03:08:49Z" | c++ | "2023-08-01T13:28:41Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,755 | ["docs/en/sql-reference/functions/array-functions.md", "docs/ru/sql-reference/functions/array-functions.md", "src/Functions/array/arrayShiftRotate.cpp", "tests/queries/0_stateless/02845_arrayShiftRotate.reference", "tests/queries/0_stateless/02845_arrayShiftRotate.sql", "utils/check-style/aspell-ignore/en/aspell-dict.txt"] | arrayRotateLeft, arrayRotateRight | **Use case**
With some adjustments, it can be helpful for the canonization of "rings"
(Ring is a data type representing a polygon without holes).
**Describe the solution you'd like**
```
arrayRotateLeft(arr, N)
arrayRotateRight(arr, N)
-- Examples:
arrayRotateLeft([1, 2, 3, 4, 5], 3) = [4, 5, 1, 2, 3]
arrayRotateRight([1, 2, 3, 4, 5], 3) = [3, 4, 5, 1, 2]
```
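A reference Python sketch matching the examples above (one slice-and-concat allocation per call):

```python
def array_rotate_left(arr, n):
    """Rotate left by n positions; negative n rotates right."""
    n %= len(arr) or 1  # guard the empty-array case
    return arr[n:] + arr[:n]

def array_rotate_right(arr, n):
    return array_rotate_left(arr, -n)

assert array_rotate_left([1, 2, 3, 4, 5], 3) == [4, 5, 1, 2, 3]
assert array_rotate_right([1, 2, 3, 4, 5], 3) == [3, 4, 5, 1, 2]
assert array_rotate_left([], 2) == []
```

Reducing `n` modulo the length also makes rotations by more than the array size well defined.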
| https://github.com/ClickHouse/ClickHouse/issues/52755 | https://github.com/ClickHouse/ClickHouse/pull/53557 | 34ac113af6b7e0f767e05c50ff8ae6a03c8552d6 | 50b8bbe0dc1f466ba8d51cbc99bd9c72c7e67c28 | "2023-07-29T01:20:29Z" | c++ | "2023-08-25T11:24:31Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,752 | ["src/Functions/FunctionBinaryArithmetic.h", "src/Functions/IsOperation.h", "src/Functions/vectorFunctions.cpp", "tests/queries/0_stateless/02415_all_new_functions_must_be_documented.reference", "tests/queries/0_stateless/02841_tuple_modulo.reference", "tests/queries/0_stateless/02841_tuple_modulo.sql"] | Modulo doesn't work for Tuple and intDiv works incorrectly. | **Describe what's wrong**
Modulo doesn't work for Tuple and intDiv works incorrectly.
**Does it reproduce on recent release?**
yes
**How to reproduce**
```
ip-172-31-42-195.us-east-2.compute.internal :) select (3, 2) % 2
SELECT (3, 2) % 2
Query id: a01d6231-9e24-49e8-8525-88671e602aff
0 rows in set. Elapsed: 0.003 sec.
Received exception:
Code: 43. DB::Exception: Illegal types Tuple(UInt8, UInt8) and UInt8 of arguments of function modulo: While processing (3, 2) % 2. (ILLEGAL_TYPE_OF_ARGUMENT)
ip-172-31-42-195.us-east-2.compute.internal :) select intDiv((3, 2), 2)
SELECT intDiv((3, 2), 2)
Query id: 2a59cc45-c13a-480c-a84d-d120ec52a2a7
┌─intDiv((3, 2), 2)─┐
│ (1.5,1)           │
└───────────────────┘
1 row in set. Elapsed: 0.007 sec.
```
**Expected behavior**
`(3, 2) % 2` is expected to produce `(1,0)`
`intDiv((3, 2), 2)` is expected to produce `(1,1)`
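The expected semantics are plain element-wise arithmetic, which a short Python sketch makes explicit:

```python
def tuple_modulo(t, d):
    """Element-wise modulo of a tuple by a scalar."""
    return tuple(x % d for x in t)

def tuple_int_div(t, d):
    """Element-wise integer division of a tuple by a scalar."""
    return tuple(x // d for x in t)

assert tuple_modulo((3, 2), 2) == (1, 0)
assert tuple_int_div((3, 2), 2) == (1, 1)
```

The `(1.5,1)` result above shows the buggy `intDiv` fell back to floating-point division for the first element instead of integer division.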
This issue is follow up #51642
att @antaljanosbenjamin | https://github.com/ClickHouse/ClickHouse/issues/52752 | https://github.com/ClickHouse/ClickHouse/pull/52758 | 8744a0269a144b52b656092ecb28ad8f96b9bfd5 | 4953208adccfe70bde2dc3f61306b53a1f521626 | "2023-07-28T19:38:25Z" | c++ | "2023-08-11T15:00:54Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,723 | ["src/Databases/MySQL/DatabaseMySQL.cpp", "src/Databases/PostgreSQL/DatabasePostgreSQL.cpp", "src/Databases/SQLite/DatabaseSQLite.cpp", "tests/integration/test_mysql_database_engine/test.py", "tests/integration/test_postgresql_database_engine/test.py"] | MySQL connection password leaked in SHOW CREATE TABLE query | **Describe the unexpected behaviour**
CH masks the user instead of the password.
<img width="990" alt="image" src="https://github.com/ClickHouse/ClickHouse/assets/22796953/79eda9b9-42cc-4bb8-b307-afa83130d7f6">
To reproduce:
- Create a MySQL database
- Create a MySQL table inside the MySQL database
- Run SHOW CREATE TABLE
| https://github.com/ClickHouse/ClickHouse/issues/52723 | https://github.com/ClickHouse/ClickHouse/pull/52962 | def587701be76308f7f5418587c72e0f68d1b1a5 | af610062eca259c7aba402f49c59fe95450f6f43 | "2023-07-28T09:52:47Z" | c++ | "2023-08-04T08:57:59Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,654 | ["src/Storages/tests/gtest_transform_query_for_external_database.cpp", "src/Storages/transformQueryForExternalDatabase.cpp"] | error when select from postgresql() where 1=1 and (id=id) | CH 22.8.12.45
When querying the postgresql table function with the condition WHERE 1=1 AND (id=id) (a generated query), ClickHouse generates a query with invalid syntax for PostgreSQL.
Any query to postgres:
```sql
select * from postgresql(...)
where 1=1 and id=id
```
Code: 1001. DB::Exception: Received from localhost:9000. DB::Exception: pqxx::sql_error: ERROR: argument of AND must be type boolean, not type integer
LINE 1: ...column" FROM "table" WHERE 1 AND ("id...
It's the same issue as #33152, but when a condition that ClickHouse cannot optimize away is added, the fix provided there does not work.
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,637 | ["src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp", "src/Storages/MergeTree/MergeTreeDataSelectExecutor.h", "tests/queries/0_stateless/01786_explain_merge_tree.reference", "tests/queries/0_stateless/02354_annoy_index.reference", "tests/queries/0_stateless/02354_usearch_index.reference", "tests/queries/0_stateless/02866_size_of_marks_skip_idx_explain.reference", "tests/queries/0_stateless/02866_size_of_marks_skip_idx_explain.sql"] | Skip indexes and functions and dropped granules | -- granularity 100
```sql
ClickHouse local version 23.6.2.18 (official build).
create table test (K Int64, A Int64, B Int64, C Int64,
index x1 (greatest(A,B,C)) type minmax granularity 100)
Engine=MergeTree order by K as select number,0,0,0 from numbers(1e7);
select count() from test where greatest(A,B,C) >0;
Index `x1` has dropped 15/1221 granules.
select count() from test where greatest(A,B,C)!=0;
Index `x1` has dropped 15/1221 granules.
```
why are only 15 granules dropped?
-- granularity 1
```sql
create table test (K Int64, A Int64, B Int64, C Int64,
index x1 (greatest(A,B,C)) type minmax granularity 1)
Engine=MergeTree order by K as select number,0,0,0 from numbers(1e7);
select count() from test where greatest(A,B,C) >0;
Index `x1` has dropped 1221/1221 granules.
```
It seems to be a statistics/reporting issue.
```sql
create table test (K Int64, A Int64, B Int64, C Int64,
index x1 (A>0,B>0,C>0) type set(2) granularity 1000)
Engine=MergeTree order by K as select number,0,0,0 from numbers(1e8);
select count() from test where A>0 and B>0 and C>0;
Index `x1` has dropped 14/12209 granules.
Selected 5/5 parts by partition key, 0 parts by primary key, 12209/12209 marks by primary key, 0 marks to read from 0 ranges
``` | https://github.com/ClickHouse/ClickHouse/issues/52637 | https://github.com/ClickHouse/ClickHouse/pull/53616 | a190efed837b55f4263da043fb502c63737b5873 | 45d924c62c4390aa31db3d443a0e353ef692642b | "2023-07-26T23:07:42Z" | c++ | "2023-08-24T14:46:47Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,571 | ["src/Bridge/IBridge.cpp", "src/Core/Defines.h", "src/Disks/IO/ReadBufferFromWebServer.cpp", "src/Disks/ObjectStorages/S3/diskSettings.cpp", "src/Server/HTTPHandler.cpp", "src/Server/InterserverIOHTTPHandler.cpp", "src/Server/PrometheusRequestHandler.cpp", "src/Server/ReplicasStatusHandler.cpp", "src/Server/StaticRequestHandler.cpp", "src/Server/WebUIRequestHandler.cpp"] | Spontaneous disconnects from server side when using HTTP interface | **Describe what's wrong**
Sometimes Clickhouse closes HTTP connection when request is in progress.
**Does it reproduce on recent release?**
We have issues on 23.3 and earlier versions for at least a year. Bug reproduction rate is low - we observe it like 20 times a day or like 1/1000 - 1/10_000 requests
**How to reproduce**
I strongly believe there is an issue with clickhouse HTTP Keep-Alive timeout - sometimes connection get closed by Keep-Alive timeout even it is not idle i.e. client is expecting HTTP response and did not receive it yet.
***Evidence***
- Client error always occurs when reading response body and never when writing request body
- We tried two completely different client implementations to the same effect
- https://github.com/ClickHouse/clickhouse-java/issues/290
- A TCP dump shows no sign of RST from either the server or the client
- The error disappears when the ClickHouse keep_alive_timeout is set to a larger value than the client's Keep-Alive timeout, i.e. the client always closes the connection before the server's Keep-Alive timeout fires
**Workaround**
Set keep_alive_timeout to 10 seconds and your HTTP client keep-alive timeout to a lesser amount, e.g. `-Djdk.httpclient.keepalive.timeout=9`
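A minimal sketch of the server side of that workaround, assuming the standard config.d override mechanism (the file name is hypothetical; the value 10 matches the workaround above):

```xml
<!-- /etc/clickhouse-server/config.d/keep_alive.xml (hypothetical file name) -->
<clickhouse>
    <!-- Server-side HTTP Keep-Alive timeout, in seconds -->
    <keep_alive_timeout>10</keep_alive_timeout>
</clickhouse>
```

On the JVM client side, pass `-Djdk.httpclient.keepalive.timeout=9` so idle connections expire strictly before the server can close them.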
| https://github.com/ClickHouse/ClickHouse/issues/52571 | https://github.com/ClickHouse/ClickHouse/pull/53068 | 02339a1f221f4993f1901c48971cb0d2a0d1e18f | 0dd6928a13165936420c6a538bf5001eb7d2a3b7 | "2023-07-25T12:22:07Z" | c++ | "2023-09-06T02:05:23Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,511 | ["src/Functions/transform.cpp", "tests/queries/0_stateless/02832_transform_fixed_string_no_default.reference", "tests/queries/0_stateless/02832_transform_fixed_string_no_default.sql"] | Abort in `transform` | ```
clickhouse-local --query "SELECT transform(name, ['a', 'b'], ['', NULL]) AS name FROM (SELECT 'test'::Nullable(FixedString(4)) AS name);"
``` | https://github.com/ClickHouse/ClickHouse/issues/52511 | https://github.com/ClickHouse/ClickHouse/pull/52513 | 7cab99d2a35c5aafdb486f849d6b6016dbf27743 | 813efa31ad74cc68540263a685b5a4e5389a5956 | "2023-07-24T04:11:34Z" | c++ | "2023-07-25T02:37:17Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,436 | ["src/Interpreters/DatabaseCatalog.cpp", "tests/queries/0_stateless/02814_currentDatabase_for_table_functions.reference", "tests/queries/0_stateless/02814_currentDatabase_for_table_functions.sql"] | Table functions don't work anymore in scenarios when currentDatabase is unknown | > A clear and concise description of what works not as it is supposed to.
The reference to a table function (mysql / mongo etc.) stops working in 23.1 when the call to those functions happens outside of the query context (flushes from Buffer, from Distributed etc.).
> A link to reproducer in [https://fiddle.clickhouse.com/](https://fiddle.clickhouse.com/).
https://fiddle.clickhouse.com/444a8a64-52f8-445e-bd58-bc183fa2348c
22.12 - works: https://fiddle.clickhouse.com/c60cbea6-6167-4a3a-ab9e-14c6d97b0b21
23.1 and newer - fails: https://fiddle.clickhouse.com/d0f1e276-8d35-45c3-ac5f-cfd1f57e56c1
The test
```
DROP TABLE IF EXISTS null_table;
DROP TABLE IF EXISTS null_table_buffer;
DROP TABLE IF EXISTS null_mv;
DROP VIEW IF EXISTS number_view;
CREATE TABLE null_table (number UInt64) ENGINE = Null;
CREATE VIEW number_view as SELECT * FROM numbers(10) as tb;
CREATE MATERIALIZED VIEW null_mv Engine = Log AS SELECT * FROM null_table LEFT JOIN number_view as tb USING number;
CREATE TABLE null_table_buffer (number UInt64) ENGINE = Buffer(currentDatabase(), null_table, 1, 1, 1, 100, 200, 10000, 20000);
INSERT INTO null_table_buffer VALUES (1);
SELECT sleep(3) FORMAT Null;
SELECT count() FROM null_mv;
WITH arrayMap(x -> demangle(addressToSymbol(x)), last_error_trace) as all SELECT *, arrayStringConcat(all, '\n') AS res
FROM system.errors
WHERE (name = 'UNKNOWN_DATABASE') SETTINGS allow_introspection_functions=1 FORMAT Vertical;
```
The exception:
```
2023.07.21 16:27:12.889829 [ 69 ] {} <Error> void DB::StorageBuffer::backgroundFlush(): Code: 81. DB::Exception: Database name is empty: while pushing to view default.null_mv (417ed1eb-9a73-413a-be8b-532f20c28524). (UNKNOWN_DATABASE), Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0xe22f215 in /usr/bin/clickhouse
1. ? @ 0xd78d3b6 in /usr/bin/clickhouse
2. DB::StorageID::getDatabaseName() const @ 0x13af2c45 in /usr/bin/clickhouse
3. DB::DatabaseCatalog::getTableImpl(DB::StorageID const&, std::shared_ptr<DB::Context const>, std::optional<DB::Exception>*) const @ 0x12ee7a80 in /usr/bin/clickhouse
4. DB::DatabaseCatalog::tryGetTable(DB::StorageID const&, std::shared_ptr<DB::Context const>) const @ 0x12eeeb18 in /usr/bin/clickhouse
5. DB::Context::executeTableFunction(std::shared_ptr<DB::IAST> const&, DB::ASTSelectQuery const*) @ 0x12e4e15b in /usr/bin/clickhouse
6. DB::JoinedTables::getLeftTableStorage() @ 0x139e3d57 in /usr/bin/clickhouse
7. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x139005b6 in /usr/bin/clickhouse
8. DB::InterpreterSelectWithUnionQuery::buildCurrentChildInterpreter(std::shared_ptr<DB::IAST> const&, std::vector<String, std::allocator<String>> const&) @ 0x1399f4c2 in /usr/bin/clickhouse
9. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x1399d133 in /usr/bin/clickhouse
10. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context const>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x1399c58c in /usr/bin/clickhouse
11. DB::StorageView::read(DB::QueryPlan&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageSnapshot> const&, DB::SelectQueryInfo&, std::shared_ptr<DB::Context const>, DB::QueryProcessingStage::Enum, unsigned long, unsigned long) @ 0x142cb89c in /usr/bin/clickhouse
12. DB::InterpreterSelectQuery::executeFetchColumns(DB::QueryProcessingStage::Enum, DB::QueryPlan&) @ 0x1391adc8 in /usr/bin/clickhouse
13. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::optional<DB::Pipe>) @ 0x1390c92d in /usr/bin/clickhouse
14. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x1390bcad in /usr/bin/clickhouse
15. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x139a0ff6 in /usr/bin/clickhouse
16. DB::SelectQueryExpressionAnalyzer::makeJoin(DB::ASTTablesInSelectQueryElement const&, std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::ActionsDAG>&) @ 0x12f5f33c in /usr/bin/clickhouse
17. DB::SelectQueryExpressionAnalyzer::appendJoin(DB::ExpressionActionsChain&, std::shared_ptr<DB::ActionsDAG>&) @ 0x12f5ddd8 in /usr/bin/clickhouse
18. DB::ExpressionAnalysisResult::ExpressionAnalysisResult(DB::SelectQueryExpressionAnalyzer&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, bool, bool, bool, std::shared_ptr<DB::FilterDAGInfo> const&, std::shared_ptr<DB::FilterDAGInfo> const&, DB::Block const&) @ 0x12f6b25a in /usr/bin/clickhouse
19. DB::InterpreterSelectQuery::getSampleBlockImpl() @ 0x13910251 in /usr/bin/clickhouse
20. ? @ 0x13908ad9 in /usr/bin/clickhouse
21. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x13902fec in /usr/bin/clickhouse
22. DB::ExecutingInnerQueryFromViewTransform::onConsume(DB::Chunk) @ 0x14d7e4b2 in /usr/bin/clickhouse
23. ? @ 0x14ce302b in /usr/bin/clickhouse
24. ? @ 0x14ce2d79 in /usr/bin/clickhouse
25. DB::ExceptionKeepingTransform::work() @ 0x14ce265f in /usr/bin/clickhouse
26. DB::ExecutionThreadContext::executeTask() @ 0x14afd94a in /usr/bin/clickhouse
27. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x14af29bb in /usr/bin/clickhouse
28. DB::PipelineExecutor::executeStep(std::atomic<bool>*) @ 0x14af1da8 in /usr/bin/clickhouse
29. DB::StorageBuffer::writeBlockToDestination(DB::Block const&, std::shared_ptr<DB::IStorage>) @ 0x1430ff06 in /usr/bin/clickhouse
30. DB::StorageBuffer::flushBuffer(DB::StorageBuffer::Buffer&, bool, bool) @ 0x1430cc32 in /usr/bin/clickhouse
31. DB::StorageBuffer::backgroundFlush() @ 0x14310dc2 in /usr/bin/clickhouse
(version 23.3.8.21 (official build))
```
The problem was introduced by this change:
https://github.com/ClickHouse/ClickHouse/commit/614fd4cf42ca77dc0329639cc4003e1e2ea2f242#diff-c7c4cea868f661341c1e9866836dc34c1c88723f9f33b4e09db530c2ea074036R1248
which replaced a safe call to isTableExist with code that can throw (tryGetTable) and does not catch the exception. | https://github.com/ClickHouse/ClickHouse/issues/52436 | https://github.com/ClickHouse/ClickHouse/pull/52440 | a4a8c731088b339b08fc10cf68e2f0e2685e2a34 | 933c2f3fb5564c48202bde8246d3b018250d2f8f | "2023-07-21T16:50:55Z" | c++ | "2023-07-26T14:41:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,433 | ["src/Interpreters/MutationsInterpreter.cpp", "src/Interpreters/MutationsInterpreter.h", "src/Storages/MergeTree/IMergeTreeDataPart.cpp", "src/Storages/MergeTree/IMergeTreeDataPart.h", "src/Storages/MergeTree/MutateTask.cpp", "src/Storages/StorageInMemoryMetadata.cpp", "src/Storages/StorageInMemoryMetadata.h", "tests/queries/0_stateless/02832_alter_delete_indexes_projections.reference", "tests/queries/0_stateless/02832_alter_delete_indexes_projections.sql"] | Skip index is not affected by alter delete. Too many marks in file. | master
```
create table tab (x UInt32, y String, Index i y type minmax granularity 3) engine = MergeTree order by tuple();
insert into tab select number, toString(number) from numbers(8192 * 10);
alter table tab delete where x < 8192;
select x from tab where y in (4, 5);
```
```
SELECT x
FROM tab
WHERE y IN (4, 5)
Query id: 81971d0e-73f5-4b03-a0ab-dce83c252a04
0 rows in set. Elapsed: 0.002 sec.
Received exception from server (version 23.7.1):
Code: 33. DB::Exception: Received from localhost:9000. DB::Exception: Too many marks in file skp_idx_i.cmrk3, marks expected 3 (bytes size 72). (CANNOT_READ_ALL_DATA)
```
Index is just hard-linked.
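A minimal illustration of that sharing, assuming nothing about ClickHouse internals beyond the hard link itself (the file names are hypothetical stand-ins for the two part files):

```python
import os
import tempfile

d = tempfile.mkdtemp()
original = os.path.join(d, "skp_idx_i.idx2")        # stand-in for the old part's index file
linked = os.path.join(d, "skp_idx_i.idx2.mutated")  # stand-in for the mutated part's copy

with open(original, "wb") as f:
    f.write(b"index data")

# A mutation that skips rewriting the index shares bytes like this:
os.link(original, linked)

# Same inode, link count grows, contents identical, matching the stat/hexdump output
assert os.stat(original).st_ino == os.stat(linked).st_ino
assert os.stat(original).st_nlink == 2
```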
```
$ hexdump /home/ubuntu/test/clickhouse/store/085/085ca165-85ef-4671-9233-d81dd6a0c727/all_1_1_0_2/skp_idx_i.idx2
0000000 2c5f fce7 d50c 7003 ddfc 5ff7 bab9 9bed
0000010 3282 0000 2b00 0000 f100 0104 0430 3939
0000020 3939 3205 3534 3637 3405 3139 3135 0006
0000030 3271 3705 3733 3732 0006 3870 3805 3931
0000040 3931
0000042
$ hexdump /home/ubuntu/test/clickhouse/store/085/085ca165-85ef-4671-9233-d81dd6a0c727/all_1_1_0/skp_idx_i.idx2
0000000 2c5f fce7 d50c 7003 ddfc 5ff7 bab9 9bed
0000010 3282 0000 2b00 0000 f100 0104 0430 3939
0000020 3939 3205 3534 3637 3405 3139 3135 0006
0000030 3271 3705 3733 3732 0006 3870 3805 3931
0000040 3931
0000042
$ stat /home/ubuntu/test/clickhouse/store/085/085ca165-85ef-4671-9233-d81dd6a0c727/all_1_1_0/skp_idx_i.idx2
File: /home/ubuntu/test/clickhouse/store/085/085ca165-85ef-4671-9233-d81dd6a0c727/all_1_1_0/skp_idx_i.idx2
Size: 66 Blocks: 8 IO Block: 4096 regular file
Device: 10301h/66305d Inode: 529521 Links: 3
Access: (0640/-rw-r-----) Uid: ( 1000/ ubuntu) Gid: ( 1000/ ubuntu)
Access: 2023-07-21 14:56:21.679980867 +0000
Modify: 2023-07-21 14:52:53.161683317 +0000
Change: 2023-07-21 14:55:03.580622408 +0000
Birth: -
``` | https://github.com/ClickHouse/ClickHouse/issues/52433 | https://github.com/ClickHouse/ClickHouse/pull/52530 | dbe13e30168b75d9eaa073228c5242d6ca375337 | 8744a0269a144b52b656092ecb28ad8f96b9bfd5 | "2023-07-21T15:03:59Z" | c++ | "2023-08-11T13:23:05Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,407 | ["docs/en/sql-reference/functions/type-conversion-functions.md", "docs/ru/sql-reference/functions/type-conversion-functions.md", "src/Functions/FunctionToDecimalString.cpp", "src/Functions/FunctionToDecimalString.h", "src/IO/WriteHelpers.h", "tests/queries/0_stateless/02676_to_decimal_string.reference", "tests/queries/0_stateless/02676_to_decimal_string.sql"] | ClickHouse Server 23.7.1.1659 crashed through a SELECT statement calling the toDecimalString function | **Describe the bug**
ClickHouse Server 23.7.1.1659 crashed through a SELECT statement calling the toDecimalString function.
**How to reproduce**
The SQL statement to reproduce:
```sql
SELECT toDecimalString ( '110' :: Decimal256 ( 45 ) , * ) ;
```
It can be reproduced on the official docker image. (`clickhouse/clickhouse-server:head` and `clickhouse/clickhouse-server:latest`).
The log traced by ClickHouse Server:
```
SELECT toDecimalString(CAST('110', 'Decimal256(45)'), *)
Query id: aa30d1a0-9002-4fab-9243-887e9b9af1cc
[25b36a6e4a9c] 2023.07.21 10:04:17.617202 [ 333 ] <Fatal> BaseDaemon: ########################################
[25b36a6e4a9c] 2023.07.21 10:04:17.617283 [ 333 ] <Fatal> BaseDaemon: (version 23.7.1.1659 (official build), build id: 2A82CED3B49248890AFC97BDD6FE0D5C0590676F, git hash: 234b5047b5cd093b8950bb8de3725eacffe02dc0) (from thread 48) (query_id: aa30d1a0-9002-4fab-9243-887e9b9af1cc) (query: SELECT toDecimalString ( '110' :: Decimal256 ( 45 ) , * ) ;) Received signal Segmentation fault (11)
[25b36a6e4a9c] 2023.07.21 10:04:17.617329 [ 333 ] <Fatal> BaseDaemon: Address: 0x28. Access: read. Address not mapped to object.
[25b36a6e4a9c] 2023.07.21 10:04:17.617369 [ 333 ] <Fatal> BaseDaemon: Stack trace: 0x000000000907fdd0 0x0000000008cf1a6a 0x0000000008cf100e 0x0000000012c2f22f 0x0000000012c2fca2 0x0000000012c30f99 0x00000000133c0fcc 0x000000001515b224 0x00000000152aed8e 0x0000000013cf35c7 0x0000000013ce2d8a 0x0000000013ce0514 0x0000000013d7e936 0x0000000013d7f844 0x00000000140b2825 0x00000000140ae68e 0x0000000014ed69c4 0x0000000014eedc79 0x0000000017e7a154 0x0000000017e7b371 0x0000000017ffd207 0x0000000017ffac3c 0x00007fd6b78f7609 0x00007fd6b781c133
[25b36a6e4a9c] 2023.07.21 10:04:17.617462 [ 333 ] <Fatal> BaseDaemon: 2. ? @ 0x000000000907fdd0 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.617514 [ 333 ] <Fatal> BaseDaemon: 3. ? @ 0x0000000008cf1a6a in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.617552 [ 333 ] <Fatal> BaseDaemon: 4. ? @ 0x0000000008cf100e in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.617623 [ 333 ] <Fatal> BaseDaemon: 5. DB::IExecutableFunction::executeWithoutLowCardinalityColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000012c2f22f in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.617678 [ 333 ] <Fatal> BaseDaemon: 6. DB::IExecutableFunction::executeWithoutSparseColumns(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000012c2fca2 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.617737 [ 333 ] <Fatal> BaseDaemon: 7. DB::IExecutableFunction::execute(std::vector<DB::ColumnWithTypeAndName, std::allocator<DB::ColumnWithTypeAndName>> const&, std::shared_ptr<DB::IDataType const> const&, unsigned long, bool) const @ 0x0000000012c30f99 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.617797 [ 333 ] <Fatal> BaseDaemon: 8. DB::ActionsDAG::updateHeader(DB::Block) const @ 0x00000000133c0fcc in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.617837 [ 333 ] <Fatal> BaseDaemon: 9. DB::ExpressionTransform::transformHeader(DB::Block, DB::ActionsDAG const&) @ 0x000000001515b224 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.617894 [ 333 ] <Fatal> BaseDaemon: 10. DB::ExpressionStep::ExpressionStep(DB::DataStream const&, std::shared_ptr<DB::ActionsDAG> const&) @ 0x00000000152aed8e in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.617940 [ 333 ] <Fatal> BaseDaemon: 11. DB::InterpreterSelectQuery::executeExpression(DB::QueryPlan&, std::shared_ptr<DB::ActionsDAG> const&, String const&) @ 0x0000000013cf35c7 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.617983 [ 333 ] <Fatal> BaseDaemon: 12. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::optional<DB::Pipe>) @ 0x0000000013ce2d8a in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618030 [ 333 ] <Fatal> BaseDaemon: 13. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000013ce0514 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618110 [ 333 ] <Fatal> BaseDaemon: 14. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000013d7e936 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618155 [ 333 ] <Fatal> BaseDaemon: 15. DB::InterpreterSelectWithUnionQuery::execute() @ 0x0000000013d7f844 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618199 [ 333 ] <Fatal> BaseDaemon: 16. ? @ 0x00000000140b2825 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618251 [ 333 ] <Fatal> BaseDaemon: 17. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000140ae68e in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618294 [ 333 ] <Fatal> BaseDaemon: 18. DB::TCPHandler::runImpl() @ 0x0000000014ed69c4 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618324 [ 333 ] <Fatal> BaseDaemon: 19. DB::TCPHandler::run() @ 0x0000000014eedc79 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618378 [ 333 ] <Fatal> BaseDaemon: 20. Poco::Net::TCPServerConnection::start() @ 0x0000000017e7a154 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618434 [ 333 ] <Fatal> BaseDaemon: 21. Poco::Net::TCPServerDispatcher::run() @ 0x0000000017e7b371 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618484 [ 333 ] <Fatal> BaseDaemon: 22. Poco::PooledThread::run() @ 0x0000000017ffd207 in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618528 [ 333 ] <Fatal> BaseDaemon: 23. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000017ffac3c in /usr/bin/clickhouse
[25b36a6e4a9c] 2023.07.21 10:04:17.618560 [ 333 ] <Fatal> BaseDaemon: 24. ? @ 0x00007fd6b78f7609 in ?
[25b36a6e4a9c] 2023.07.21 10:04:17.618609 [ 333 ] <Fatal> BaseDaemon: 25. clone @ 0x00007fd6b781c133 in ?
[25b36a6e4a9c] 2023.07.21 10:04:17.908820 [ 333 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 7D28AF63E0E09B75DF5876280E0C8DBB)
[25b36a6e4a9c] 2023.07.21 10:04:17.909276 [ 333 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[25b36a6e4a9c] 2023.07.21 10:04:17.909468 [ 333 ] <Fatal> BaseDaemon: No settings were changed
``` | https://github.com/ClickHouse/ClickHouse/issues/52407 | https://github.com/ClickHouse/ClickHouse/pull/52520 | 3387b02ede113bf39fef76fe7d8dea4e9ec87eab | 7bcef0a6c081cfb290223269a935bbaca44fd623 | "2023-07-21T10:05:17Z" | c++ | "2023-07-26T22:18:36Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,405 | ["src/Processors/QueryPlan/Optimizations/optimizeUseNormalProjection.cpp", "tests/queries/0_stateless/01710_projection_query_plan_optimization_misc.reference", "tests/queries/0_stateless/01710_projection_query_plan_optimization_misc.sql"] | ClickHouse Server 23.7.1.1659 crashed through alter-table, insert and select statements. | **Describe the bug**
ClickHouse Server 23.7.1.1659 crashed through alter-table, insert and select statements.
I am not sure whether it is a bug, as I do not know the effect of the `add projection` in `alter table`.
**How to reproduce**
The SQL statement to reproduce:
```sql
create table test_00681 (x Int32, codectest Int32) engine = MergeTree order by x;
alter table test_00681 add projection x (select * order by codectest);
insert into test_00681 values (1, 2);
select * from merge('', 'test_00681');
```
It can be reproduced on the official docker image. (`clickhouse/clickhouse-server:head` and `clickhouse/clickhouse-server:latest`).
The log traced by ClickHouse Server:
```
SELECT *
FROM merge('', 'test_00681')
Query id: f7bb8dcb-3cf3-4576-80e7-e1ee04be28a5
[a23b9f0c72ba] 2023.07.21 09:52:10.491630 [ 334 ] <Fatal> BaseDaemon: ########################################
[a23b9f0c72ba] 2023.07.21 09:52:10.491729 [ 334 ] <Fatal> BaseDaemon: (version 23.7.1.1659 (official build), build id: 2A82CED3B49248890AFC97BDD6FE0D5C0590676F, git hash: 234b5047b5cd093b8950bb8de3725eacffe02dc0) (from thread 48) (query_id: f7bb8dcb-3cf3-4576-80e7-e1ee04be28a5) (query: select * from merge('', 'test_00681');) Received signal Segmentation fault (11)
[a23b9f0c72ba] 2023.07.21 09:52:10.491794 [ 334 ] <Fatal> BaseDaemon: Address: 0xfffffffffffffff8. Access: read. Address not mapped to object.
[a23b9f0c72ba] 2023.07.21 09:52:10.491826 [ 334 ] <Fatal> BaseDaemon: Stack trace: 0x000000001536ce21 0x000000001534a582 0x00000000152da21c 0x00000000152d912e 0x00000000144fb99f 0x00000000144f6ab6 0x00000000152c1cc5 0x00000000152d93ca 0x0000000013d7f999 0x00000000140b2825 0x00000000140ae68e 0x0000000014ed69c4 0x0000000014eedc79 0x0000000017e7a154 0x0000000017e7b371 0x0000000017ffd207 0x0000000017ffac3c 0x00007f4807d08609 0x00007f4807c2d133
[a23b9f0c72ba] 2023.07.21 09:52:10.491913 [ 334 ] <Fatal> BaseDaemon: 2. DB::QueryPlanOptimizations::optimizeUseNormalProjections(std::vector<DB::QueryPlanOptimizations::Frame, std::allocator<DB::QueryPlanOptimizations::Frame>>&, std::list<DB::QueryPlan::Node, std::allocator<DB::QueryPlan::Node>>&) @ 0x000000001536ce21 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.491959 [ 334 ] <Fatal> BaseDaemon: 3. DB::QueryPlanOptimizations::optimizeTreeSecondPass(DB::QueryPlanOptimizationSettings const&, DB::QueryPlan::Node&, std::list<DB::QueryPlan::Node, std::allocator<DB::QueryPlan::Node>>&) @ 0x000000001534a582 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.491999 [ 334 ] <Fatal> BaseDaemon: 4. DB::QueryPlan::optimize(DB::QueryPlanOptimizationSettings const&) @ 0x00000000152da21c in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492035 [ 334 ] <Fatal> BaseDaemon: 5. DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x00000000152d912e in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492170 [ 334 ] <Fatal> BaseDaemon: 6. DB::ReadFromMerge::createSources(std::shared_ptr<DB::StorageSnapshot> const&, DB::SelectQueryInfo&, DB::QueryProcessingStage::Enum const&, unsigned long, DB::Block const&, std::vector<DB::ReadFromMerge::AliasData, std::allocator<DB::ReadFromMerge::AliasData>> const&, std::tuple<String, std::shared_ptr<DB::IStorage>, std::shared_ptr<DB::RWLockImpl::LockHolderImpl>, String> const&, std::vector<String, std::allocator<String>>, std::shared_ptr<DB::Context>, unsigned long, bool) @ 0x00000000144fb99f in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492219 [ 334 ] <Fatal> BaseDaemon: 7. DB::ReadFromMerge::initializePipeline(DB::QueryPipelineBuilder&, DB::BuildQueryPipelineSettings const&) @ 0x00000000144f6ab6 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492260 [ 334 ] <Fatal> BaseDaemon: 8. DB::ISourceStep::updatePipeline(std::vector<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>, std::allocator<std::unique_ptr<DB::QueryPipelineBuilder, std::default_delete<DB::QueryPipelineBuilder>>>>, DB::BuildQueryPipelineSettings const&) @ 0x00000000152c1cc5 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492289 [ 334 ] <Fatal> BaseDaemon: 9. DB::QueryPlan::buildQueryPipeline(DB::QueryPlanOptimizationSettings const&, DB::BuildQueryPipelineSettings const&) @ 0x00000000152d93ca in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492340 [ 334 ] <Fatal> BaseDaemon: 10. DB::InterpreterSelectWithUnionQuery::execute() @ 0x0000000013d7f999 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492551 [ 334 ] <Fatal> BaseDaemon: 11. ? @ 0x00000000140b2825 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492592 [ 334 ] <Fatal> BaseDaemon: 12. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000140ae68e in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492628 [ 334 ] <Fatal> BaseDaemon: 13. DB::TCPHandler::runImpl() @ 0x0000000014ed69c4 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492654 [ 334 ] <Fatal> BaseDaemon: 14. DB::TCPHandler::run() @ 0x0000000014eedc79 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492697 [ 334 ] <Fatal> BaseDaemon: 15. Poco::Net::TCPServerConnection::start() @ 0x0000000017e7a154 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492770 [ 334 ] <Fatal> BaseDaemon: 16. Poco::Net::TCPServerDispatcher::run() @ 0x0000000017e7b371 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492811 [ 334 ] <Fatal> BaseDaemon: 17. Poco::PooledThread::run() @ 0x0000000017ffd207 in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492959 [ 334 ] <Fatal> BaseDaemon: 18. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000017ffac3c in /usr/bin/clickhouse
[a23b9f0c72ba] 2023.07.21 09:52:10.492985 [ 334 ] <Fatal> BaseDaemon: 19. ? @ 0x00007f4807d08609 in ?
[a23b9f0c72ba] 2023.07.21 09:52:10.493030 [ 334 ] <Fatal> BaseDaemon: 20. clone @ 0x00007f4807c2d133 in ?
[a23b9f0c72ba] 2023.07.21 09:52:10.710391 [ 334 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 7D28AF63E0E09B75DF5876280E0C8DBB)
[a23b9f0c72ba] 2023.07.21 09:52:10.710706 [ 334 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[a23b9f0c72ba] 2023.07.21 09:52:10.710854 [ 334 ] <Fatal> BaseDaemon: No settings were changed
``` | https://github.com/ClickHouse/ClickHouse/issues/52405 | https://github.com/ClickHouse/ClickHouse/pull/52432 | 045bb3e1f382e7d3bfadaa595d9815396542ce44 | e3c85613c943b7d6d3d4b9ea8a08ff64529973fb | "2023-07-21T09:58:16Z" | c++ | "2023-07-23T15:27:51Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,403 | ["src/Common/OptimizedRegularExpression.cpp", "tests/queries/0_stateless/02831_regexp_analyze_recursion.reference", "tests/queries/0_stateless/02831_regexp_analyze_recursion.sql"] | Crash bug: ClickHouse Server 23.7.1.1659 crashed through SELECT statement calling the 'match' function | **Describe the bug**
ClickHouse Server 23.7.1.1659 crashed through a SELECT statement calling the `match` function.
It seems like a stack overflow.
**How to reproduce**
The SQL statement to reproduce:
```sql
SELECT match ( 'xyz' , repeat ( '!(1, ' , 320000 ) ) AS token ;
```
It can be reproduced on the official docker image. (`clickhouse/clickhouse-server:head` and `clickhouse/clickhouse-server:latest`).
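The run of identical frames in the trace is consistent with recursion proportional to the pattern's nesting depth. A small illustrative sketch, reconstructing the reproducer's pattern in Python (no ClickHouse internals are assumed):

```python
# Reconstruct the second argument of match(): repeat('!(1, ', 320000)
n = 320000
pattern = "!(1, " * n

# Every repetition opens a capture group '(' that is never closed, so a
# regex analyzer that recurses once per open group descends n levels deep.
depth = pattern.count("(") - pattern.count(")")
print(depth)  # 320000
```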
The log traced by ClickHouse Server:
```
SELECT match('xyz', repeat('!(1, ', 320000)) AS token
Query id: 85eed452-d879-47c2-a98f-cc29f2065689
[b03b83a5d0a6] 2023.07.21 09:46:35.664661 [ 364 ] <Fatal> BaseDaemon: ########################################
[b03b83a5d0a6] 2023.07.21 09:46:35.664709 [ 364 ] <Fatal> BaseDaemon: (version 23.7.1.1659 (official build), build id: 2A82CED3B49248890AFC97BDD6FE0D5C0590676F, git hash: 234b5047b5cd093b8950bb8de3725eacffe02dc0) (from thread 48) (query_id: 85eed452-d879-47c2-a98f-cc29f2065689) (query: SELECT match ( 'xyz' , repeat ( '!(1, ' , 320000 ) ) AS token ;) Received signal Segmentation fault (11)
[b03b83a5d0a6] 2023.07.21 09:46:35.664744 [ 364 ] <Fatal> BaseDaemon: Address: 0x7f2347244f48. Access: write. Attempted access has violated the permissions assigned to the memory area.
[b03b83a5d0a6] 2023.07.21 09:46:35.664767 [ 364 ] <Fatal> BaseDaemon: Stack trace: 0x0000000008646431 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815 0x0000000008646815
[b03b83a5d0a6] 2023.07.21 09:46:35.664828 [ 364 ] <Fatal> BaseDaemon: 2. ? @ 0x0000000008646431 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.664855 [ 364 ] <Fatal> BaseDaemon: 3. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.664882 [ 364 ] <Fatal> BaseDaemon: 4. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.664908 [ 364 ] <Fatal> BaseDaemon: 5. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.664948 [ 364 ] <Fatal> BaseDaemon: 6. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.664971 [ 364 ] <Fatal> BaseDaemon: 7. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665001 [ 364 ] <Fatal> BaseDaemon: 8. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665029 [ 364 ] <Fatal> BaseDaemon: 9. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665054 [ 364 ] <Fatal> BaseDaemon: 10. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665087 [ 364 ] <Fatal> BaseDaemon: 11. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665112 [ 364 ] <Fatal> BaseDaemon: 12. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665327 [ 364 ] <Fatal> BaseDaemon: 13. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665349 [ 364 ] <Fatal> BaseDaemon: 14. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665390 [ 364 ] <Fatal> BaseDaemon: 15. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665419 [ 364 ] <Fatal> BaseDaemon: 16. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665446 [ 364 ] <Fatal> BaseDaemon: 17. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665494 [ 364 ] <Fatal> BaseDaemon: 18. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665528 [ 364 ] <Fatal> BaseDaemon: 19. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665558 [ 364 ] <Fatal> BaseDaemon: 20. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665585 [ 364 ] <Fatal> BaseDaemon: 21. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665622 [ 364 ] <Fatal> BaseDaemon: 22. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665655 [ 364 ] <Fatal> BaseDaemon: 23. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665677 [ 364 ] <Fatal> BaseDaemon: 24. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665706 [ 364 ] <Fatal> BaseDaemon: 25. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665735 [ 364 ] <Fatal> BaseDaemon: 26. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665770 [ 364 ] <Fatal> BaseDaemon: 27. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665798 [ 364 ] <Fatal> BaseDaemon: 28. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665830 [ 364 ] <Fatal> BaseDaemon: 29. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665884 [ 364 ] <Fatal> BaseDaemon: 30. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665923 [ 364 ] <Fatal> BaseDaemon: 31. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665959 [ 364 ] <Fatal> BaseDaemon: 32. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.665991 [ 364 ] <Fatal> BaseDaemon: 33. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666013 [ 364 ] <Fatal> BaseDaemon: 34. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666049 [ 364 ] <Fatal> BaseDaemon: 35. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666109 [ 364 ] <Fatal> BaseDaemon: 36. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666146 [ 364 ] <Fatal> BaseDaemon: 37. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666186 [ 364 ] <Fatal> BaseDaemon: 38. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666205 [ 364 ] <Fatal> BaseDaemon: 39. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666263 [ 364 ] <Fatal> BaseDaemon: 40. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666312 [ 364 ] <Fatal> BaseDaemon: 41. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666491 [ 364 ] <Fatal> BaseDaemon: 42. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666519 [ 364 ] <Fatal> BaseDaemon: 43. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.666550 [ 364 ] <Fatal> BaseDaemon: 44. ? @ 0x0000000008646815 in /usr/bin/clickhouse
[b03b83a5d0a6] 2023.07.21 09:46:35.947920 [ 364 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: 7D28AF63E0E09B75DF5876280E0C8DBB)
[b03b83a5d0a6] 2023.07.21 09:46:35.949086 [ 364 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[b03b83a5d0a6] 2023.07.21 09:46:35.949270 [ 364 ] <Fatal> BaseDaemon: No settings were changed
Error on processing query: Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF) (version 23.7.1.1659 (official build))
``` | https://github.com/ClickHouse/ClickHouse/issues/52403 | https://github.com/ClickHouse/ClickHouse/pull/52451 | db9c7c477f62a74eb731631a72581822f734dfeb | 2e67a8927b256546881baee7c823ecb6ee918198 | "2023-07-21T09:51:18Z" | c++ | "2023-07-22T14:24:35Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,393 | ["src/Disks/ObjectStorages/DiskObjectStorageTransaction.cpp", "src/Storages/StorageReplicatedMergeTree.cpp", "tests/integration/test_replicated_s3_zero_copy_drop_partition/__init__.py", "tests/integration/test_replicated_s3_zero_copy_drop_partition/configs/storage_conf.xml", "tests/integration/test_replicated_s3_zero_copy_drop_partition/test.py"] | S3 disk garbage collection? | Hi,
If we use S3 as a disk for a table, how does ClickHouse make sure there are no orphan files left in the S3 bucket?
In a recent test, I encountered a lot of S3 rate-limiting errors (HTTP 503, "Slow Down") while bulk-inserting a large amount of data.
Then I truncated the table. This seems to have left many files in the S3 bucket undeleted, although the table is empty.
This bucket is used only in this test and only for this table, so I expected it to be empty after the truncate. I also waited some time for ClickHouse to perform some sort of "garbage collection", but it does not seem to happen.
This will be an issue if we use S3 disks in production and the 'garbage' keeps growing (storage costs $$).
Does ClickHouse have any mechanism to detect and collect garbage (unused S3 objects)?
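For reference, one way to list the objects ClickHouse still references on an S3 disk is the `system.remote_data_paths` system table (the disk name below is just the one from my test config); anything in the bucket that is not in this list would be an orphan:

```sql
-- Objects ClickHouse still references on the S3 disk; bucket objects
-- missing from this list are candidate orphans ("garbage").
SELECT remote_path
FROM system.remote_data_paths
WHERE disk_name = 's3_disk';
```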
Thanks for any insights if I missed something. | https://github.com/ClickHouse/ClickHouse/issues/52393 | https://github.com/ClickHouse/ClickHouse/pull/55309 | 68ce6b9b00d5e60de0ed0cbb46a068d2b051f921 | 666c690b4f4356d283b48eed91adb749cdeb9366 | "2023-07-21T06:27:40Z" | c++ | "2023-10-10T09:48:00Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,353 | ["src/Storages/MergeTree/MergeTreeData.cpp", "src/Storages/MergeTree/MergeTreeData.h", "tests/queries/0_stateless/02540_duplicate_primary_key.sql", "tests/queries/0_stateless/02540_duplicate_primary_key2.reference", "tests/queries/0_stateless/02540_duplicate_primary_key2.sql", "tests/queries/0_stateless/02816_check_projection_metadata.reference", "tests/queries/0_stateless/02816_check_projection_metadata.sql"] | Segfault in projections with `ORDER BY <constant>` | It's a logical error in debug builds (trivial to reproduce), but it's known to cause a segmentation fault in release builds (example: https://pastila.nl/?00bfc54a/d23ec9236a59b1185278736662ecda54)
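A reduced sketch of the trigger, distilled from the full session below (the non-replicated variant here is an assumption and untested in this exact form):

```sql
CREATE TABLE t
(
    id Int32,
    ns Nullable(String),
    PROJECTION p (SELECT * ORDER BY ns, 1)  -- constant in the projection's ORDER BY
)
ENGINE = MergeTree ORDER BY id;

-- Presumably fails the same way while writing the projection part:
INSERT INTO t VALUES (1, 'a');
```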
```
dell9510 :) create table kek (uuid FixedString(16), id int, ns Nullable(String), dt DateTime64(6), projection null_pk (select * order by ns, 1, 5)) engine=ReplicatedMergeTree('/test/kvsadjv', '1') order by (id, dt, uuid)
CREATE TABLE kek
(
`uuid` FixedString(16),
`id` int,
`ns` Nullable(String),
`dt` DateTime64(6),
PROJECTION null_pk
(
SELECT *
ORDER BY
ns,
1,
5
)
)
ENGINE = ReplicatedMergeTree('/test/kvsadjv', '1')
ORDER BY (id, dt, uuid)
Query id: 2cb4516b-c68a-4194-8a7d-49b2ca309a8c
Ok.
0 rows in set. Elapsed: 0.115 sec.
dell9510 :) insert into kek select * from generateRandom('uuid FixedString(16), id int, ns Nullable(String), dt DateTime64(6)') limit 10
INSERT INTO kek SELECT *
FROM generateRandom('uuid FixedString(16), id int, ns Nullable(String), dt DateTime64(6)')
LIMIT 10
Query id: 3e8a6a07-5e0d-4d52-abbc-5a6f1d9f5279
[dell9510] 2023.07.20 11:46:08.050000 [ 859121 ] {3e8a6a07-5e0d-4d52-abbc-5a6f1d9f5279} <Fatal> : Logical error: 'Bad cast from type DB::ColumnConst to DB::ColumnVector<char8_t>'.
[dell9510] 2023.07.20 11:46:08.052905 [ 859535 ] <Fatal> BaseDaemon: ########################################
[dell9510] 2023.07.20 11:46:08.053272 [ 859535 ] <Fatal> BaseDaemon: (version 23.7.1.1, build id: B4E245EF0313762B1E9D01DB7DEC82B701F0141C, git hash: 482c8b5cde896ee4d84e4b8886c8a0726b4e0784) (from thread 859121) (query_id: 3e8a6a07-5e0d-4d52-abbc-5a6f1d9f5279) (query: insert into kek select * from generateRandom('uuid FixedString(16), id int, ns Nullable(String), dt DateTime64(6)') limit 10) Received signal Aborted (6)
[dell9510] 2023.07.20 11:46:08.053674 [ 859535 ] <Fatal> BaseDaemon:
[dell9510] 2023.07.20 11:46:08.053999 [ 859535 ] <Fatal> BaseDaemon: Stack trace: 0x00007f0508e5426c 0x00007f0508e04a08 0x00007f0508ded538 0x0000000025c31cb7 0x0000000025c31d35 0x0000000025c321b6 0x000000001b889fb7 0x000000001b88f50f 0x000000001b90e2fd 0x000000002e0e5065 0x000000003085b870 0x000000003084a35d 0x000000003084ac59 0x000000003084b996 0x0000000030acbb74 0x0000000030abd1e6 0x0000000030abbd0d 0x0000000030abae43 0x0000000030ab8390 0x0000000030c06e60 0x00000000315d5d91 0x000000003145f769 0x000000003145f715 0x000000003145f6f5 0x000000003145f6d5 0x000000003145f69d 0x0000000025c92456 0x0000000025c917b5 0x000000003145eeba 0x000000003145e886 0x0000000030f60050 0x0000000030f5fd41 0x0000000030f42d9e 0x0000000030f43124 0x0000000030f41f72 0x0000000030f414cd 0x0000000030f3fe14 0x0000000030f3fc6e 0x0000000030f3fc15 0x0000000030f3fbf9 0x0000000030f3fb5d
[dell9510] 2023.07.20 11:46:08.054188 [ 859535 ] <Fatal> BaseDaemon: 4. ? @ 0x00007f0508e5426c in ?
[dell9510] 2023.07.20 11:46:08.054344 [ 859535 ] <Fatal> BaseDaemon: 5. gsignal @ 0x00007f0508e04a08 in ?
[dell9510] 2023.07.20 11:46:08.054450 [ 859535 ] <Fatal> BaseDaemon: 6. abort @ 0x00007f0508ded538 in ?
[dell9510] 2023.07.20 11:46:08.217472 [ 859535 ] <Fatal> BaseDaemon: 7. /home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:43: DB::abortOnFailedAssertion(String const&) @ 0x0000000025c31cb7 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:08.285615 [ 859535 ] <Fatal> BaseDaemon: 8. /home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:66: DB::handle_error_code(String const&, int, bool, std::vector<void*, std::allocator<void*>> const&) @ 0x0000000025c31d35 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:08.345656 [ 859535 ] <Fatal> BaseDaemon: 9. /home/tavplubix/ch/ClickHouse/src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x0000000025c321b6 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:08.422750 [ 859535 ] <Fatal> BaseDaemon: 10. /home/tavplubix/ch/ClickHouse/src/Common/Exception.h:63: DB::Exception::Exception(String&&, int, bool) @ 0x000000001b889fb7 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:08.507042 [ 859535 ] <Fatal> BaseDaemon: 11. /home/tavplubix/ch/ClickHouse/src/Common/Exception.h:91: DB::Exception::Exception<String, String>(int, FormatStringHelperImpl<std::type_identity<String>::type, std::type_identity<String>::type>, String&&, String&&) @ 0x000000001b88f50f in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:09.158475 [ 859535 ] <Fatal> BaseDaemon: 12. /home/tavplubix/ch/ClickHouse/src/Common/assert_cast.h:47: DB::ColumnVector<char8_t> const& assert_cast<DB::ColumnVector<char8_t> const&, DB::IColumn const&>(DB::IColumn const&) @ 0x000000001b90e2fd in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:09.259651 [ 859535 ] <Fatal> BaseDaemon: 13. /home/tavplubix/ch/ClickHouse/src/DataTypes/Serializations/SerializationNumber.cpp:123: DB::SerializationNumber<char8_t>::serializeBinary(DB::IColumn const&, unsigned long, DB::WriteBuffer&, DB::FormatSettings const&) const @ 0x000000002e0e5065 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:09.459803 [ 859535 ] <Fatal> BaseDaemon: 14. /home/tavplubix/ch/ClickHouse/src/Storages/MergeTree/MergeTreeDataPartWriterOnDisk.cpp:270: DB::MergeTreeDataPartWriterOnDisk::calculateAndSerializePrimaryIndex(DB::Block const&, std::vector<DB::Granule, std::allocator<DB::Granule>> const&) @ 0x000000003085b870 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:09.567925 [ 859535 ] <Fatal> BaseDaemon: 15. /home/tavplubix/ch/ClickHouse/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:189: DB::MergeTreeDataPartWriterCompact::writeDataBlockPrimaryIndexAndSkipIndices(DB::Block const&, std::vector<DB::Granule, std::allocator<DB::Granule>> const&) @ 0x000000003084a35d in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:09.670621 [ 859535 ] <Fatal> BaseDaemon: 16. /home/tavplubix/ch/ClickHouse/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:258: DB::MergeTreeDataPartWriterCompact::fillDataChecksums(DB::MergeTreeDataPartChecksums&) @ 0x000000003084ac59 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:09.766848 [ 859535 ] <Fatal> BaseDaemon: 17. /home/tavplubix/ch/ClickHouse/src/Storages/MergeTree/MergeTreeDataPartWriterCompact.cpp:0: DB::MergeTreeDataPartWriterCompact::fillChecksums(DB::MergeTreeDataPartChecksums&) @ 0x000000003084b996 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:09.937028 [ 859535 ] <Fatal> BaseDaemon: 18. /home/tavplubix/ch/ClickHouse/src/Storages/MergeTree/MergedBlockOutputStream.cpp:150: DB::MergedBlockOutputStream::finalizePartAsync(std::shared_ptr<DB::IMergeTreeDataPart> const&, bool, DB::NamesAndTypesList const*, DB::MergeTreeDataPartChecksums*) @ 0x0000000030acbb74 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:10.275947 [ 859535 ] <Fatal> BaseDaemon: 19. /home/tavplubix/ch/ClickHouse/src/Storages/MergeTree/MergeTreeDataWriter.cpp:678: DB::MergeTreeDataWriter::writeProjectionPartImpl(String const&, bool, DB::IMergeTreeDataPart*, DB::MergeTreeData const&, Poco::Logger*, DB::Block, DB::ProjectionDescription const&) @ 0x0000000030abd1e6 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:10.452652 [ 859535 ] <Fatal> BaseDaemon: 20. /home/tavplubix/ch/ClickHouse/src/Storages/MergeTree/MergeTreeDataWriter.cpp:696: DB::MergeTreeDataWriter::writeProjectionPart(DB::MergeTreeData const&, Poco::Logger*, DB::Block, DB::ProjectionDescription const&, DB::IMergeTreeDataPart*) @ 0x0000000030abbd0d in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:10.634576 [ 859535 ] <Fatal> BaseDaemon: 21. /home/tavplubix/ch/ClickHouse/src/Storages/MergeTree/MergeTreeDataWriter.cpp:554: DB::MergeTreeDataWriter::writeTempPartImpl(DB::BlockWithPartition&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::Context const>, long, bool) @ 0x0000000030abae43 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:10.822596 [ 859535 ] <Fatal> BaseDaemon: 22. /home/tavplubix/ch/ClickHouse/src/Storages/MergeTree/MergeTreeDataWriter.cpp:354: DB::MergeTreeDataWriter::writeTempPart(DB::BlockWithPartition&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::Context const>) @ 0x0000000030ab8390 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.191970 [ 859535 ] <Fatal> BaseDaemon: 23. /home/tavplubix/ch/ClickHouse/src/Storages/MergeTree/ReplicatedMergeTreeSink.cpp:460: DB::ReplicatedMergeTreeSinkImpl<false>::consume(DB::Chunk) @ 0x0000000030c06e60 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.241251 [ 859535 ] <Fatal> BaseDaemon: 24. /home/tavplubix/ch/ClickHouse/src/Processors/Sinks/SinkToStorage.cpp:18: DB::SinkToStorage::onConsume(DB::Chunk) @ 0x00000000315d5d91 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.349928 [ 859535 ] <Fatal> BaseDaemon: 25. /home/tavplubix/ch/ClickHouse/src/Processors/Transforms/ExceptionKeepingTransform.cpp:151: DB::ExceptionKeepingTransform::work()::$_1::operator()() const @ 0x000000003145f769 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.421108 [ 859535 ] <Fatal> BaseDaemon: 26. /home/tavplubix/ch/ClickHouse/contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::ExceptionKeepingTransform::work()::$_1&>()()) std::__invoke[abi:v15000]<DB::ExceptionKeepingTransform::work()::$_1&>(DB::ExceptionKeepingTransform::work()::$_1&) @ 0x000000003145f715 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.484664 [ 859535 ] <Fatal> BaseDaemon: 27. /home/tavplubix/ch/ClickHouse/contrib/llvm-project/libcxx/include/__functional/invoke.h:480: void std::__invoke_void_return_wrapper<void, true>::__call<DB::ExceptionKeepingTransform::work()::$_1&>(DB::ExceptionKeepingTransform::work()::$_1&) @ 0x000000003145f6f5 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.549829 [ 859535 ] <Fatal> BaseDaemon: 28. /home/tavplubix/ch/ClickHouse/contrib/llvm-project/libcxx/include/__functional/function.h:235: std::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()>::operator()[abi:v15000]() @ 0x000000003145f6d5 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.618819 [ 859535 ] <Fatal> BaseDaemon: 29. /home/tavplubix/ch/ClickHouse/contrib/llvm-project/libcxx/include/__functional/function.h:716: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::ExceptionKeepingTransform::work()::$_1, void ()>>(std::__function::__policy_storage const*) @ 0x000000003145f69d in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.666632 [ 859535 ] <Fatal> BaseDaemon: 30. /home/tavplubix/ch/ClickHouse/contrib/llvm-project/libcxx/include/__functional/function.h:848: std::__function::__policy_func<void ()>::operator()[abi:v15000]() const @ 0x0000000025c92456 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.707585 [ 859535 ] <Fatal> BaseDaemon: 31. /home/tavplubix/ch/ClickHouse/contrib/llvm-project/libcxx/include/__functional/function.h:1187: std::function<void ()>::operator()() const @ 0x0000000025c917b5 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.774232 [ 859535 ] <Fatal> BaseDaemon: 32. /home/tavplubix/ch/ClickHouse/src/Processors/Transforms/ExceptionKeepingTransform.cpp:115: DB::runStep(std::function<void ()>, DB::ThreadStatus*, std::atomic<unsigned long>*) @ 0x000000003145eeba in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.820088 [ 859535 ] <Fatal> BaseDaemon: 33. /home/tavplubix/ch/ClickHouse/src/Processors/Transforms/ExceptionKeepingTransform.cpp:151: DB::ExceptionKeepingTransform::work() @ 0x000000003145e886 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.861137 [ 859535 ] <Fatal> BaseDaemon: 34. /home/tavplubix/ch/ClickHouse/src/Processors/Executors/ExecutionThreadContext.cpp:47: DB::executeJob(DB::ExecutingGraph::Node*, DB::ReadProgressCallback*) @ 0x0000000030f60050 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.883706 [ 859535 ] <Fatal> BaseDaemon: 35. /home/tavplubix/ch/ClickHouse/src/Processors/Executors/ExecutionThreadContext.cpp:92: DB::ExecutionThreadContext::executeTask() @ 0x0000000030f5fd41 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:11.976163 [ 859535 ] <Fatal> BaseDaemon: 36. /home/tavplubix/ch/ClickHouse/src/Processors/Executors/PipelineExecutor.cpp:255: DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000030f42d9e in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:12.047901 [ 859535 ] <Fatal> BaseDaemon: 37. /home/tavplubix/ch/ClickHouse/src/Processors/Executors/PipelineExecutor.cpp:221: DB::PipelineExecutor::executeSingleThread(unsigned long) @ 0x0000000030f43124 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:12.111743 [ 859535 ] <Fatal> BaseDaemon: 38. /home/tavplubix/ch/ClickHouse/src/Processors/Executors/PipelineExecutor.cpp:379: DB::PipelineExecutor::executeImpl(unsigned long) @ 0x0000000030f41f72 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:12.179938 [ 859535 ] <Fatal> BaseDaemon: 39. /home/tavplubix/ch/ClickHouse/src/Processors/Executors/PipelineExecutor.cpp:113: DB::PipelineExecutor::execute(unsigned long) @ 0x0000000030f414cd in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:12.219747 [ 859535 ] <Fatal> BaseDaemon: 40. /home/tavplubix/ch/ClickHouse/src/Processors/Executors/CompletedPipelineExecutor.cpp:48: DB::threadFunction(DB::CompletedPipelineExecutor::Data&, std::shared_ptr<DB::ThreadGroup>, unsigned long) @ 0x0000000030f3fe14 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:12.254194 [ 859535 ] <Fatal> BaseDaemon: 41. /home/tavplubix/ch/ClickHouse/src/Processors/Executors/CompletedPipelineExecutor.cpp:84: DB::CompletedPipelineExecutor::execute()::$_0::operator()() const @ 0x0000000030f3fc6e in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:12.287971 [ 859535 ] <Fatal> BaseDaemon: 42. /home/tavplubix/ch/ClickHouse/contrib/llvm-project/libcxx/include/__functional/invoke.h:394: decltype(std::declval<DB::CompletedPipelineExecutor::execute()::$_0&>()()) std::__invoke[abi:v15000]<DB::CompletedPipelineExecutor::execute()::$_0&>(DB::CompletedPipelineExecutor::execute()::$_0&) @ 0x0000000030f3fc15 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:12.320521 [ 859535 ] <Fatal> BaseDaemon: 43. /home/tavplubix/ch/ClickHouse/contrib/llvm-project/libcxx/include/tuple:1789: decltype(auto) std::__apply_tuple_impl[abi:v15000]<DB::CompletedPipelineExecutor::execute()::$_0&, std::tuple<>&>(DB::CompletedPipelineExecutor::execute()::$_0&, std::tuple<>&, std::__tuple_indices<>) @ 0x0000000030f3fbf9 in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:12.353158 [ 859535 ] <Fatal> BaseDaemon: 44. /home/tavplubix/ch/ClickHouse/contrib/llvm-project/libcxx/include/tuple:1798: decltype(auto) std::apply[abi:v15000]<DB::CompletedPipelineExecutor::execute()::$_0&, std::tuple<>&>(DB::CompletedPipelineExecutor::execute()::$_0&, std::tuple<>&) @ 0x0000000030f3fb5d in /home/tavplubix/ch/ClickHouse/cmake-build-debug/programs/clickhouse
[dell9510] 2023.07.20 11:46:12.353355 [ 859535 ] <Fatal> BaseDaemon: Integrity check of the executable skipped because the reference checksum could not be read.
[dell9510] 2023.07.20 11:46:15.699879 [ 859535 ] <Fatal> BaseDaemon: This ClickHouse version is not official and should be upgraded to the official build.
[dell9510] 2023.07.20 11:46:15.700388 [ 859535 ] <Fatal> BaseDaemon: Changed settings: stream_like_engine_allow_direct_select = true, log_queries = true, distributed_ddl_task_timeout = 30, query_profiler_real_time_period_ns = 1000000000, query_profiler_cpu_time_period_ns = 1000000000, allow_experimental_analyzer = false, show_table_uuid_in_table_create_query_if_not_nil = false, database_atomic_wait_for_drop_and_detach_synchronously = false, allow_experimental_database_replicated = true, database_replicated_initial_query_timeout_sec = 30, database_replicated_always_detach_permanently = true, distributed_ddl_output_mode = 'none', distributed_ddl_entry_format_version = 3, background_pool_size = 16, default_database_engine = 'Atomic'
β Progress: 20.00 rows, 892.00 B (1.81 rows/s., 80.95 B/s.) (0.0 CPU, 4.40 MB RAM)Exception on client:
Code: 32. DB::Exception: Attempt to read after eof: while receiving packet from localhost:9000. (ATTEMPT_TO_READ_AFTER_EOF)
``` | https://github.com/ClickHouse/ClickHouse/issues/52353 | https://github.com/ClickHouse/ClickHouse/pull/52361 | 0db9c798866951a5c5b4fee55e17c8b65afdeed2 | b5cf64466887e115656aab065848fb52784964ae | "2023-07-20T09:53:57Z" | c++ | "2023-07-21T13:23:04Z" |
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,258 | ["docs/en/engines/table-engines/mergetree-family/annindexes.md", "docs/en/sql-reference/data-types/array.md", "src/Storages/MergeTree/MergeTreeIndexAnnoy.cpp", "src/Storages/MergeTree/MergeTreeIndexUSearch.cpp", "tests/queries/0_stateless/02354_annoy_index.reference", "tests/queries/0_stateless/02354_annoy_index.sql", "tests/queries/0_stateless/02354_usearch_index.reference", "tests/queries/0_stateless/02354_usearch_index.sql"] | LOGICAL_ERROR in Annoy index: Array has [...] rows, [...] rows expected. | ```
<Error> executeQuery: Code: 49. DB::Exception: Array has 0 rows, 20 rows expected. (LOGICAL_ERROR) (version 23.5.1.34446 (official build)) (from 10.20
.5.65:60124) (in query: [...] Stack trace (when copying this message, always include the lines below):
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000dd460d5 in /usr/bin/clickhouse
1. ? @ 0x00000000087a5e50 in /usr/bin/clickhouse
2. ? @ 0x00000000143cf0c8 in /usr/bin/clickhouse
3. DB::MergeTreeDataPartWriterOnDisk::calculateAndSerializeSkipIndices(DB::Block const&, std::vector<DB::Granule, std::allocator<DB::Granule>> const&) @ 0x0000000014343de6 in /usr/bin/clickhouse
4. DB::MergeTreeDataPartWriterCompact::writeDataBlockPrimaryIndexAndSkipIndices(DB::Block const&, std::vector<DB::Granule, std::allocator<DB::Granule>> const&) @ 0x0000000014335dfa in /usr/bin/clickhouse
5. DB::MergeTreeDataPartWriterCompact::fillDataChecksums(DB::MergeTreeDataPartChecksums&) @ 0x00000000143371ef in /usr/bin/clickhouse
6. DB::MergeTreeDataPartWriterCompact::fillChecksums(DB::MergeTreeDataPartChecksums&) @ 0x000000001433803c in /usr/bin/clickhouse
7. ? @ 0x00000000144732ca in /usr/bin/clickhouse
8. DB::MergeTreeDataWriter::writeTempPartImpl(DB::BlockWithPartition&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::Context const>, long, bool) @ 0x000000001446c3b4 in /usr/bin/clickhouse
9. DB::ReplicatedMergeTreeSinkImpl<false>::consume(DB::Chunk) @ 0x0000000014589794 in /usr/bin/clickhouse
10. DB::SinkToStorage::onConsume(DB::Chunk) @ 0x0000000014adeb06 in /usr/bin/clickhouse
11. ? @ 0x0000000014a25f6b in /usr/bin/clickhouse
12. ? @ 0x0000000014a25c9b in /usr/bin/clickhouse
13. DB::ExceptionKeepingTransform::work() @ 0x0000000014a25558 in /usr/bin/clickhouse
14. DB::ExecutionThreadContext::executeTask() @ 0x0000000014820d23 in /usr/bin/clickhouse
15. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000014817070 in /usr/bin/clickhouse
16. ? @ 0x0000000014818b03 in /usr/bin/clickhouse
17. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000de217c3 in /usr/bin/clickhouse
18. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleI
mpl<void>(std::function<void ()>, long, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000de23fb5 in /usr/bin/clickhouse
19. ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x000000000de1d654 in /usr/bin/clickhouse
20. ? @ 0x000000000de22e41 in /usr/bin/clickhouse
21. ? @ 0x00007ff6ee17bb43 in ?
22. ? @ 0x00007ff6ee20da00 in ?
```
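For context, the general shape of a setup that exercises this code path (a made-up sketch with illustrative names; as noted below, simple cases like this did not reproduce the error):

```sql
SET allow_experimental_annoy_index = 1;

CREATE TABLE vectors
(
    id UInt64,
    vec Array(Float32),
    INDEX ann_idx vec TYPE annoy('L2Distance') GRANULARITY 1
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/vectors', '{replica}')
ORDER BY id;

-- The skip index (and thus the failing code path) is built on INSERT:
INSERT INTO vectors VALUES (1, [0.1, 0.2, 0.3]);
```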
This issue happened during build-up of the [experimental Annoy index](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/annindexes#annoy) in an undisclosed production system, which means it cannot easily be reproduced (I tried simpler queries, but the error was not triggered). If someone hits the same issue and can share a repro, please post it here - thanks! | https://github.com/ClickHouse/ClickHouse/issues/52258 | https://github.com/ClickHouse/ClickHouse/pull/54600 | 2ba9263098ea94c2e5b5e43325ba17cbab68ff93 | 2c91e52da1268aa2cbc155196b495a26b70174c9 | "2023-07-18T13:49:14Z" | c++ | "2023-09-20T15:05:16Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,207 | ["docs/en/operations/configuration-files.md", "docs/ru/operations/configuration-files.md", "src/Common/Config/ConfigProcessor.cpp", "src/Common/Config/ConfigProcessor.h", "tests/integration/test_config_hide_in_preprocessed/__init__.py", "tests/integration/test_config_hide_in_preprocessed/configs/config.xml", "tests/integration/test_config_hide_in_preprocessed/configs/users.xml", "tests/integration/test_config_hide_in_preprocessed/test.py"] | Preprocessed config file with hidden/in-memory parts | https://github.com/ClickHouse/ClickHouse/issues/48291 and https://github.com/ClickHouse/ClickHouse/pull/50986 try to avoid plaintext passwords in configuration files. A second angle of attack is the preprocessed XML file, which might contain sensitive data (fused from multiple XML/JSON files with varying access rights). We should check whether parts of the configuration tree can be marked "hidden" such that they are not written into the preprocessed XML file in the first place.
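A sketch of what such markup could look like (the attribute name is illustrative, mirroring the integration test in the linked PR; the final syntax may differ):

```xml
<clickhouse>
    <!-- Subtree kept in memory only and omitted from the
         preprocessed XML that is written to disk. -->
    <s3_credentials hide_in_preprocessed="1">
        <access_key_id>...</access_key_id>
        <secret_access_key>...</secret_access_key>
    </s3_credentials>
</clickhouse>
```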
(perhaps it is easier to avoid writing a preprocessed XML file altogether, i.e. to perform both steps, preprocessing and populating the server config from the preprocessed data, entirely in memory) | https://github.com/ClickHouse/ClickHouse/issues/52207 | https://github.com/ClickHouse/ClickHouse/pull/53818 | 55ba08490d692b620dd0973fd221f5c5b13038dc | 97d960ba1d6cda0e17d6b7c16da4a08638be6957 | "2023-07-17T12:57:40Z" | c++ | "2023-08-31T11:00:16Z"
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52,186 | ["src/Interpreters/JoinedTables.cpp", "src/Interpreters/TableJoin.h", "src/Planner/PlannerJoins.cpp", "tests/queries/0_stateless/02815_range_dict_no_direct_join.reference", "tests/queries/0_stateless/02815_range_dict_no_direct_join.sql"] | Crash when using direct join when right table is range dictionary | Reproduce: https://fiddle.clickhouse.com/d4e1257f-0671-4470-9563-214cdd6635e8
```
[a1b9834a6416] 2023.07.17 07:26:47.600997 [ 319 ] <Fatal> BaseDaemon: ########################################
[a1b9834a6416] 2023.07.17 07:26:47.601063 [ 319 ] <Fatal> BaseDaemon: (version 23.6.2.18 (official build), build id: D0E83BD1974B9B4B1FE300F9D23222CFF56067E2, git hash: 89f39a7ccfe0c068c03555d44036042fc1c09d22) (from thread 47) (query_id: 6cf44ab4-ed16-4172-bb1b-d39c32f965c4) (query: SELECT id, amount FROM ids INNER JOIN discounts_dict ON id = advertiser_id SETTINGS join_algorithm = 'direct';) Received signal Segmentation fault (11)
[a1b9834a6416] 2023.07.17 07:26:47.601096 [ 319 ] <Fatal> BaseDaemon: Address: 0x20. Access: read. Address not mapped to object.
[a1b9834a6416] 2023.07.17 07:26:47.601112 [ 319 ] <Fatal> BaseDaemon: Stack trace: 0x0000000010de1443 0x0000000008268716 0x0000000010dbd6a6 0x0000000010dbeccd 0x00000000134242a4 0x0000000014ef3c43 0x000000001505eb6e 0x0000000013a7d9b8 0x0000000013a7c3f4 0x0000000013b1a696 0x0000000013b1b5a4 0x0000000013e47e53 0x0000000013e43f2e 0x0000000014c6e9c4 0x0000000014c84c59 0x0000000017bf8a34 0x0000000017bf9c51 0x0000000017d7c0a7 0x0000000017d79adc 0x00007f7158c88609 0x00007f7158bad133
[a1b9834a6416] 2023.07.17 07:26:47.601164 [ 319 ] <Fatal> BaseDaemon: 2. ? @ 0x0000000010de1443 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601204 [ 319 ] <Fatal> BaseDaemon: 3. DB::RangeHashedDictionary<(DB::DictionaryKeyType)0>::hasKeys(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>> const&, std::vector<std::shared_ptr<DB::IDataType const>, std::allocator<std::shared_ptr<DB::IDataType const>>> const&) const @ 0x0000000008268716 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601220 [ 319 ] <Fatal> BaseDaemon: 4. ? @ 0x0000000010dbd6a6 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601239 [ 319 ] <Fatal> BaseDaemon: 5. ? @ 0x0000000010dbeccd in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601262 [ 319 ] <Fatal> BaseDaemon: 6. DB::DirectKeyValueJoin::joinBlock(DB::Block&, std::shared_ptr<DB::ExtraBlock>&) @ 0x00000000134242a4 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601291 [ 319 ] <Fatal> BaseDaemon: 7. DB::JoiningTransform::transformHeader(DB::Block, std::shared_ptr<DB::IJoin> const&) @ 0x0000000014ef3c43 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601313 [ 319 ] <Fatal> BaseDaemon: 8. DB::FilledJoinStep::FilledJoinStep(DB::DataStream const&, std::shared_ptr<DB::IJoin>, unsigned long) @ 0x000000001505eb6e in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601331 [ 319 ] <Fatal> BaseDaemon: 9. DB::InterpreterSelectQuery::executeImpl(DB::QueryPlan&, std::optional<DB::Pipe>) @ 0x0000000013a7d9b8 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601353 [ 319 ] <Fatal> BaseDaemon: 10. DB::InterpreterSelectQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000013a7c3f4 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601374 [ 319 ] <Fatal> BaseDaemon: 11. DB::InterpreterSelectWithUnionQuery::buildQueryPlan(DB::QueryPlan&) @ 0x0000000013b1a696 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601394 [ 319 ] <Fatal> BaseDaemon: 12. DB::InterpreterSelectWithUnionQuery::execute() @ 0x0000000013b1b5a4 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601414 [ 319 ] <Fatal> BaseDaemon: 13. ? @ 0x0000000013e47e53 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601432 [ 319 ] <Fatal> BaseDaemon: 14. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x0000000013e43f2e in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601458 [ 319 ] <Fatal> BaseDaemon: 15. DB::TCPHandler::runImpl() @ 0x0000000014c6e9c4 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601479 [ 319 ] <Fatal> BaseDaemon: 16. DB::TCPHandler::run() @ 0x0000000014c84c59 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601503 [ 319 ] <Fatal> BaseDaemon: 17. Poco::Net::TCPServerConnection::start() @ 0x0000000017bf8a34 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601531 [ 319 ] <Fatal> BaseDaemon: 18. Poco::Net::TCPServerDispatcher::run() @ 0x0000000017bf9c51 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601551 [ 319 ] <Fatal> BaseDaemon: 19. Poco::PooledThread::run() @ 0x0000000017d7c0a7 in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601574 [ 319 ] <Fatal> BaseDaemon: 20. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000017d79adc in /usr/bin/clickhouse
[a1b9834a6416] 2023.07.17 07:26:47.601597 [ 319 ] <Fatal> BaseDaemon: 21. ? @ 0x00007f7158c88609 in ?
[a1b9834a6416] 2023.07.17 07:26:47.601624 [ 319 ] <Fatal> BaseDaemon: 22. __clone @ 0x00007f7158bad133 in ?
[a1b9834a6416] 2023.07.17 07:26:47.729113 [ 319 ] <Fatal> BaseDaemon: Integrity check of the executable successfully passed (checksum: AD6642440A13FE688E9E670BB288E827)
[a1b9834a6416] 2023.07.17 07:26:47.729312 [ 319 ] <Fatal> BaseDaemon: Report this error to https://github.com/ClickHouse/ClickHouse/issues
[a1b9834a6416] 2023.07.17 07:26:47.729441 [ 319 ] <Fatal> BaseDaemon: Changed settings: join_algorithm = 'direct', output_format_pretty_color = false, output_format_pretty_grid_charset = 'ASCII'
```
`getByKeys` with range dictionary is tricky, probably we should disable direct join for range dictionary. | https://github.com/ClickHouse/ClickHouse/issues/52186 | https://github.com/ClickHouse/ClickHouse/pull/52187 | 8a2dbb8b9c2cea478754910f0242b0331ab63743 | b98dce16e242fbb6a7f0ab89439896cdc00aa619 | "2023-07-17T08:05:56Z" | c++ | "2023-07-22T08:48:24Z" |
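As a rough illustration of why `getByKeys`/`hasKeys` is ill-posed for a range dictionary (and hence why disabling direct join for it, as suggested above, is reasonable), here is a hedged Python sketch. All names here are hypothetical — this is not ClickHouse code — but it shows that a range lookup needs both a key and a point inside the range, while a direct join supplies only the key:

```python
# Hypothetical sketch, not ClickHouse code: a range dictionary maps
# key -> list of (range_start, range_end, value). A point lookup needs
# BOTH the key and a point falling inside one of the ranges.
class RangeDict:
    def __init__(self):
        self.data = {}  # key -> [(start, end, value)]

    def add(self, key, start, end, value):
        self.data.setdefault(key, []).append((start, end, value))

    def get(self, key, point):
        # Well-defined: key plus a point inside the range.
        for start, end, value in self.data.get(key, []):
            if start <= point <= end:
                return value
        return None

    def has_keys(self, keys):
        # A direct join would call something like hasKeys(keys) with no
        # range point -- the answer is ambiguous for a range dictionary,
        # which is consistent with disabling direct join for this layout.
        raise NotImplementedError("range dictionaries need (key, point)")

d = RangeDict()
d.add(1, 10, 20, "a")
print(d.get(1, 15))  # a
print(d.get(1, 25))  # None
```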
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52178 | ["docs/en/operations/utilities/clickhouse-local.md", "src/Parsers/ExpressionElementParsers.cpp", "src/Parsers/ExpressionElementParsers.h", "src/Parsers/ParserTablesInSelectQuery.cpp", "tests/queries/0_stateless/02816_clickhouse_local_table_name_expressions.reference", "tests/queries/0_stateless/02816_clickhouse_local_table_name_expressions.sh"] | Allow writing table name as a string literal. | Currently, we allow writing table names as an identifier:
```
SELECT * FROM table
```
Identifiers can be quoted in ANSI SQL style way:
```
SELECT * FROM "table"
```
And in MySQL style way:
```
SELECT * FROM `table`
```
Recently we started to recognize files as table names in clickhouse-local automatically:
```
SELECT * FROM "dataset.avro"
```
And clickhouse-local is often used in batch mode:
```
clickhouse-local --query "SELECT * FROM table"
```
But if a query is specified in double quotes, as in the example above, using either double quotes or backticks for table name identifier is inconvenient:
```
clickhouse-local --query "SELECT * FROM \"dataset.avro\""
```
And it would be better to allow table names as string literals:
```
clickhouse-local --query "SELECT * FROM 'dataset.avro'"
```
Although it could introduce some complications, such as the inability to specify a qualified name that includes a database name, we can support only the simple, unqualified form initially.
| https://github.com/ClickHouse/ClickHouse/issues/52178 | https://github.com/ClickHouse/ClickHouse/pull/52635 | d4441fed5e5e06c939601dc0a6033e61dbaf281e | e5d3e348ce65e9a5bcce3d3658f53b85e5716765 | "2023-07-17T04:43:53Z" | c++ | "2023-08-04T17:09:59Z" |
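As a rough illustration of the grammar change proposed above, here is a Python sketch — the regex and the `table_name` helper are purely hypothetical and have nothing to do with ClickHouse's actual parser — accepting a bare identifier, both existing quoted-identifier styles, and the proposed string-literal form after `FROM`:

```python
import re

# Hypothetical sketch of the proposed grammar: after FROM, accept a
# plain identifier, a quoted identifier ("..." or `...`), or -- the new
# part -- a single-quoted string literal.
TABLE_REF = re.compile(
    r"""FROM\s+(?:
          "(?P<dq>[^"]+)"      # ANSI-style quoted identifier
        | `(?P<bq>[^`]+)`      # MySQL-style quoted identifier
        | '(?P<sl>[^']+)'      # proposed: string literal
        | (?P<id>\w+)          # bare identifier
    )""",
    re.VERBOSE,
)

def table_name(query):
    m = TABLE_REF.search(query)
    if not m:
        raise ValueError("no FROM clause")
    return next(v for v in m.group("dq", "bq", "sl", "id") if v)

print(table_name("SELECT * FROM 'dataset.avro'"))  # dataset.avro
print(table_name('SELECT * FROM "dataset.avro"'))  # dataset.avro
print(table_name("SELECT * FROM table1"))          # table1
```

With the string-literal form, the batch-mode invocation needs no escaping: `clickhouse-local --query "SELECT * FROM 'dataset.avro'"`.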
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52156 | ["src/Functions/HasTokenImpl.h", "tests/queries/0_stateless/02816_has_token_empty.reference", "tests/queries/0_stateless/02816_has_token_empty.sql"] | hasTokenCaseInsensitive() can hang | The following query never finishes:
```
SELECT hasTokenCaseInsensitive('K(G', '')
```
The stateless asan test [00746_sql_fuzzy.sh](https://s3.amazonaws.com/clickhouse-test-reports/51772/361c235d6e578f1d156c4b49726fcad813d37021/stateless_tests__asan__[1_4].html) found that. | https://github.com/ClickHouse/ClickHouse/issues/52156 | https://github.com/ClickHouse/ClickHouse/pull/52160 | 32b9d391a5c3c5a2bf72b52597acd31241761b37 | 51c65e38ff9833f7f125dde45c4c0664615802dd | "2023-07-16T21:08:49Z" | c++ | "2023-07-18T13:38:34Z" |
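As an illustration only — a Python sketch, not the actual C++ implementation — here is a simple tokenized search with the kind of empty-needle guard that prevents this class of hang. The `separators` set and the function shape are assumptions for the sketch:

```python
# Hypothetical sketch (not ClickHouse's HasTokenImpl): find `needle` in
# `haystack` only at token boundaries. An empty needle makes boundary
# matching degenerate (str.find('') matches at every offset), so one
# plausible guard is to reject empty needles up front.
def has_token(haystack, needle):
    if not needle:
        raise ValueError("needle cannot be empty")
    separators = set(" \t\n(),")  # illustrative token separators
    pos = 0
    while True:
        pos = haystack.find(needle, pos)
        if pos == -1:
            return False
        before_ok = pos == 0 or haystack[pos - 1] in separators
        after = pos + len(needle)
        after_ok = after == len(haystack) or haystack[after] in separators
        if before_ok and after_ok:
            return True
        pos += 1

print(has_token("K(G", "G"))   # True: 'G' is a whole token after '('
print(has_token("K(G", "KG"))  # False
```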
closed | ClickHouse/ClickHouse | https://github.com/ClickHouse/ClickHouse | 52153 | ["docs/en/sql-reference/functions/array-functions.md", "docs/en/sql-reference/functions/string-functions.md", "src/Functions/array/length.cpp", "tests/queries/0_stateless/02815_alias_to_length.reference", "tests/queries/0_stateless/02815_alias_to_length.sql"] | Add `OCTET_LENGTH` as an alias to `length` | **Use case**
SQL standard compatibility.
**Describe the solution you'd like**
Add an alias, case insensitive. | https://github.com/ClickHouse/ClickHouse/issues/52153 | https://github.com/ClickHouse/ClickHouse/pull/52176 | df363f444e730e51a61fbe570d73ed268694ee8f | 89a33c5879de6085f94a9c8e033876c9263f7817 | "2023-07-16T19:33:56Z" | c++ | "2023-07-17T11:34:01Z" |
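For context on why `length` is the right target for this alias: ClickHouse's `length()` on a String counts bytes, which matches the SQL standard's `OCTET_LENGTH` semantics (code-point counting is `lengthUTF8()`). A small Python sketch of the distinction — the helper names are illustrative only:

```python
# Illustrative helpers showing byte length vs. code-point length,
# mirroring ClickHouse's length() (bytes) and lengthUTF8() (code points).
def octet_length(s):
    return len(s.encode("utf-8"))  # bytes, like ClickHouse length()

def char_length(s):
    return len(s)                  # code points, like lengthUTF8()

print(octet_length("abc"))    # 3
print(char_length("abc"))     # 3
print(octet_length("héllo"))  # 6 -- 'é' is two bytes in UTF-8
print(char_length("héllo"))   # 5
```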